T3 API OpenAPI Spec¶
The T3 API is described end-to-end by an OpenAPI 3.0.2 specification. Every path, parameter, and response shape is defined in that spec, and the spec itself is a first-class part of the API.
This page is the reference for what the spec offers, where to find it, and how to get only the pieces you need.
Why OpenAPI?¶
OpenAPI is the standard schema format for HTTP APIs. Publishing a spec lets the T3 API support:
- Browsable, interactive documentation with a "Try it out" button.
- Code generation — any language that has an OpenAPI generator can produce a typed client.
- Postman import, for users who prefer a GUI client to writing code.
- LLM / AI agent consumption — an agent writing a one-off script can be handed just the relevant slice of the spec instead of a full reference manual.
The spec is the source of truth. The interactive docs, the Postman workspace, and the Python libraries are all generated from it.
Interactive Docs¶
The Swagger UI lives at:
https://api.trackandtrace.tools/v2/docs/
This is where most users start. Every endpoint is listed, grouped by tag (Packages, Transfers, Labels, etc.), with:
- Parameter descriptions and examples
- Request body schemas
- Response schemas with example payloads
- A "Try it out" button that authenticates with your T3 session and sends a real request
The Raw Spec¶
The full OpenAPI document is available as JSON at:
https://api.trackandtrace.tools/v2/spec/openapi.json
This is a single self-contained file — every $ref is resolved against definitions inside the same document, so it can be consumed directly by tools without any further network fetches.
Individual YAML source files are also served under /v2/spec/<filename>.yaml if you'd rather read a single component (for example, /v2/spec/components/schemas/MetrcPackage.yaml).
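Because the document is self-contained, a few lines of standard-library Python are enough to download and inspect it. A sketch (the `summarize_paths` helper is illustrative, not part of any T3 library):

```python
import json
import urllib.request

SPEC_URL = "https://api.trackandtrace.tools/v2/spec/openapi.json"
HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options", "trace"}

def fetch_spec(url: str = SPEC_URL) -> dict:
    """Download the self-contained OpenAPI document as a dict."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def summarize_paths(spec: dict) -> dict:
    """Group every operation in an OpenAPI document by its first tag."""
    by_tag = {}
    for path, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            if method not in HTTP_METHODS:
                continue  # skip path-level keys such as "parameters"
            tag = (op.get("tags") or ["untagged"])[0]
            by_tag.setdefault(tag, []).append(f"{method.upper()} {path}")
    return by_tag

# e.g. summarize_paths(fetch_spec()) -> {"Packages": ["GET /v2/packages/active", ...], ...}
```

This kind of one-pass walk works on the pruned subsets described below as well, since they keep the same top-level layout.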
Pruning the Spec¶
The full spec is large. When you only care about a few endpoints or schemas, you can ask the API to return a pruned version that contains just what you requested — plus every schema and parameter those pieces transitively reference.
All filters are applied to the same endpoint, GET /v2/spec/openapi.json.
Filters are case-insensitive and repeatable. Multiple filters are combined as a union — the response contains everything matched by any filter. If a requested name doesn't exist in the spec, the endpoint returns 404.
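Because filters are repeatable, the same query key can appear more than once in a URL. In Python, a list of tuples preserves the repeats, where a plain dict would silently keep only the last value. A hypothetical helper sketch:

```python
from urllib.parse import urlencode

SPEC_URL = "https://api.trackandtrace.tools/v2/spec/openapi.json"

def pruned_spec_url(paths=(), schemas=(), parameters=(), tags=()):
    """Build a pruned-spec URL, repeating each filter key once per value."""
    pairs = (
        [("path", p) for p in paths]
        + [("schema", s) for s in schemas]
        + [("parameter", p) for p in parameters]
        + [("tag", t) for t in tags]
    )
    # urlencode accepts a sequence of pairs, so repeated keys survive intact
    return f"{SPEC_URL}?{urlencode(pairs)}" if pairs else SPEC_URL
```

For example, `pruned_spec_url(tags=["Packages", "Labels"])` yields a URL ending in `?tag=Packages&tag=Labels`.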
?path — filter by API path¶
Returns the path definition(s) plus every schema and parameter reachable from them:
GET /v2/spec/openapi.json?path=/v2/packages/active
GET /v2/spec/openapi.json?path=/v2/packages/active&path=/v2/auth/credentials
?schema — filter by schema name¶
Returns the named schema(s) plus any transitively referenced schemas. No paths are included:
GET /v2/spec/openapi.json?schema=MetrcPackage
GET /v2/spec/openapi.json?schema=MetrcPackage&schema=Pagination
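The pruned document keeps the standard OpenAPI layout, so the requested schema sits under `components.schemas` exactly as it does in the full spec. A sketch for pulling out a schema's property names (the helper is hypothetical; only the OpenAPI 3.0 structure is assumed):

```python
def schema_properties(spec: dict, name: str) -> list:
    """Return the sorted property names of a named schema, if present."""
    schemas = spec.get("components", {}).get("schemas", {})
    if name not in schemas:
        # mirrors the API's behaviour: unknown names are an error (HTTP 404)
        raise KeyError(f"schema {name!r} not found in this (possibly pruned) spec")
    return sorted(schemas[name].get("properties", {}))
```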
?parameter — filter by parameter name¶
Returns the named reusable parameter(s) plus any schemas they reference. No paths are included:
GET /v2/spec/openapi.json?parameter=LicenseNumber
GET /v2/spec/openapi.json?parameter=CollectionFilter&parameter=CollectionSort
?tag — filter by tag¶
Returns every path tagged with this tag (for example, Packages, Items, Labels) plus their transitively referenced schemas and parameters:
GET /v2/spec/openapi.json?tag=Packages
GET /v2/spec/openapi.json?tag=Packages&tag=Labels
Combining filters¶
Filters can be mixed freely:
GET /v2/spec/openapi.json?path=/v2/auth/credentials&schema=MetrcPackage
GET /v2/spec/openapi.json?tag=Items&parameter=LicenseNumber
GET /v2/spec/openapi.json?path=/v2/packages/active&tag=Authentication&schema=Pagination
Feeding Subsets to LLMs and Agents¶
A common use case: an agent is writing a Python script that calls one or two endpoints. Handing it the full OpenAPI document wastes tokens and buries the relevant signal. Hand it a pruned subset instead.
For example, to generate a script that loads active packages, fetch the slice that covers just that endpoint and its dependencies:
import requests
response = requests.get(
"https://api.trackandtrace.tools/v2/spec/openapi.json",
params={"path": "/v2/packages/active"},
timeout=10,
)
subset_spec = response.json()
# `subset_spec` now contains only /v2/packages/active plus every schema and
# parameter it references. Feed this into your LLM prompt instead of the full
# document.
If the script needs to hit multiple endpoints, pass each one:
params = [
("path", "/v2/packages/active"),
("path", "/v2/transfers/incoming/active"),
("parameter", "LicenseNumber"),
]
response = requests.get(
"https://api.trackandtrace.tools/v2/spec/openapi.json",
params=params,
timeout=10,
)
The response is still a valid OpenAPI 3.0.2 document, so the same tools that consume the full spec (Swagger, generators, SDK builders) can consume a pruned one.
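Before handing a subset to downstream tooling, a cheap structural sanity check can catch a bad fetch early. A sketch (the fields checked are standard OpenAPI 3.0, nothing T3-specific):

```python
def looks_like_openapi_30(doc: dict) -> bool:
    """Loosely verify that a dict has the shape of an OpenAPI 3.0.x document."""
    return (
        isinstance(doc.get("openapi"), str)
        and doc["openapi"].startswith("3.0")
        and isinstance(doc.get("info"), dict)
        and isinstance(doc.get("paths"), dict)
    )
```

This is deliberately shallow; for full validation, a dedicated tool such as openapi-spec-validator is a better fit.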
Postman¶
Postman is a GUI API client. The T3 team maintains a public Postman workspace generated from the OpenAPI spec:
https://www.postman.com/track-trace-tools/t3-api/overview
Inside that workspace is an example Postman Flow that demonstrates the authentication handshake and a simple data read:
https://www.postman.com/track-trace-tools/t3-api/flow/680a542024c4ab0040f2bb99
You can also import the spec yourself: in Postman, File → Import → Link, paste https://api.trackandtrace.tools/v2/spec/openapi.json, and you'll get a collection that mirrors the current API.
Other Consumers¶
Any tool that accepts OpenAPI 3.0.2 will work against the T3 spec. A few common ones:
- Code generators (e.g. OpenAPI Generator, openapi-python-client) — produce typed client SDKs. The t3api Python library is generated this way.
- Editors and IDEs — VS Code, JetBrains, and similar editors have OpenAPI plugins that autocomplete paths and payloads when writing API calls.
- Mocking tools — Prism and similar projects can stand up a mock server from the spec for local development.
Next Steps¶
- Explore the interactive docs.
- Learn how to write and execute a Python script against the T3 API.
- Check out Supercollections for endpoints that eagerly load related metadata.
- Check out Reports for bulk dataset exports.
- If you're not subscribed to T3+, start your 30-day free trial here.