This feature is available on all Portkey plans. Request-level metadata works everywhere; workspace/API key metadata and required-metadata enforcement are Enterprise capabilities. Reference: Metadata. Per-user cost APIs: Track costs using metadata.
Key rules at a glance
| Property | Details |
|---|---|
| Number of keys | No fixed limit—send as many pairs as needed |
| Value type | Strings only; max 128 characters per value |
| Key names | Any string; some keys have special behaviour (see Metadata) |
| _user | Drives per-user analytics in the dashboard |
| Scope | Request, API key, or workspace (Enterprise) |
| Precedence (gateway v1.10.20+) | Workspace → API key → request (workspace wins on conflict) |
Gateways before 1.10.20 used the opposite precedence (request highest). See Metadata.
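The rules above can be sketched as a small helper that builds the request headers. This is a minimal sketch, assuming the `x-portkey-metadata` header carries a JSON object of string pairs; the length check mirrors the 128-character limit from the table.

```python
import json

def portkey_headers(api_key: str, metadata: dict[str, str]) -> dict[str, str]:
    # Values must be strings of at most 128 characters (see rules table).
    for key, value in metadata.items():
        if not isinstance(value, str) or len(value) > 128:
            raise ValueError(f"metadata[{key!r}] must be a string of <= 128 chars")
    return {
        "x-portkey-api-key": api_key,
        "x-portkey-metadata": json.dumps(metadata),
    }

headers = portkey_headers("PORTKEY_API_KEY", {"_user": "user-42", "feature": "chat"})
```

Pass these headers on any request through the gateway; the same pairs can also be supplied via SDK options.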
Use case 1 — User-level analytics and attribution
The problem
In a multi-user SaaS product, AI usage pools into one undifferentiated stream. Heavy users, inactive users, and fair-use enforcement are hard to reason about without per-user attribution, and customer billing for AI credits becomes guesswork.
The solution
Pass `_user` on every request made on behalf of an end user. Portkey surfaces this in Analytics and supports Meta filters in the dashboard.
What you unlock
- Per-user token consumption — prompt and completion tokens over time
- Cost per user — token counts combined with model pricing
- Request frequency — outliers for rate limits or plan upgrades
- Cohort comparison — group by keys like `plan` and compare cost or usage
- Support debugging — filter Logs to one user when triaging bad outputs
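A small helper can make the `_user` tag hard to forget. This is an illustrative sketch (the `make_metadata` name is not part of any Portkey SDK): build the metadata once, then attach it to the request via SDK options or the `x-portkey-metadata` header.

```python
def make_metadata(user_id: str, **extra: str) -> dict[str, str]:
    # Always include _user so dashboard user analytics stay populated;
    # extra keys (plan, account_id, ...) enable cohort comparison.
    meta = {"_user": user_id}
    meta.update(extra)
    return meta

meta = make_metadata("u_1029", plan="pro", account_id="acct_77")
```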
Use case 2 — Feature and component cost attribution
The problem
Products ship many AI surfaces—chat, summarisation, code completion, search. Without labelling calls by feature, total spend is a single line item, making it impossible to tell which feature is expensive.
The solution
Tag every request with `feature` or `component`. Combine with model and token data in Analytics for a feature-level view of AI spend.
What you unlock
- Feature-level cost breakdown — compare `doc-summariser` vs `code-completion` in one view
- ROI analysis — join exported data with product analytics
- Optimisation targets — prioritise caching, prompt compression, or cheaper models on the costliest feature
- Team accountability — map keys to owning teams
- Version comparison — tag `version` or `prompt_version` for A/B cost and quality trade-offs
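One way to label every call site consistently is a decorator that stamps feature tags onto the metadata. A sketch under assumed names (`tagged`, and a stand-in `summarise` in place of the real LLM call):

```python
import functools

def tagged(feature: str, **tags: str):
    """Merge a fixed feature tag (plus team/version tags) into each call's metadata."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, metadata=None, **kwargs):
            # Caller-supplied metadata wins over the decorator's defaults.
            merged = {"feature": feature, **tags, **(metadata or {})}
            return fn(*args, metadata=merged, **kwargs)
        return inner
    return wrap

@tagged("doc-summariser", team="docs", version="v3")
def summarise(text, metadata=None):
    return metadata  # stand-in for the real gateway call

meta = summarise("quarterly report")
```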
Use case 3 — Environment segmentation (dev / staging / prod)
The problem
The same app runs in dev, staging, and production. Without separating environments in observability, dev noise pollutes production cost reports and alerts may fire on non-production traffic.
The solution
Set an `environment` key (and optionally `region`, `deployment`) so Analytics and Logs filter cleanly inside one workspace.
What you unlock
- Clean production reports — filter to `environment=production` for billing views
- Regression detection — compare error and latency patterns across environments
- Canary checks — tag `deployment=canary` vs `stable` for side-by-side metrics
- Dev cost visibility — track experimentation spend before it hits prod
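Environment tags are best derived once at process startup so every request in the process is labelled consistently. A sketch, assuming `APP_ENV` and `APP_REGION` environment variables (names are illustrative):

```python
import os

# Resolved once at startup; defaults keep local runs labelled too.
ENV_TAGS = {
    "environment": os.getenv("APP_ENV", "development"),
    "region": os.getenv("APP_REGION", "local"),
}

def with_env(metadata: dict[str, str]) -> dict[str, str]:
    # Request-specific keys win over the process-wide defaults.
    return {**ENV_TAGS, **metadata}

meta = with_env({"_user": "u_42", "feature": "chat"})
```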
Use case 4 — Multi-tenant SaaS and tenant isolation
The problem
B2B products generate usage on behalf of many customer orgs. Per-tenant consumption matters for billing, SLAs, and support—and tenant traffic must not blur together in Logs.
The solution
Add tenant identifiers (for example `tenant_id`, `tenant_plan`) alongside `_user` and `feature`.
What you unlock
- Per-tenant billing — aggregate tokens and cost by `tenant_id`
- SLA monitoring — filter Logs to one tenant for latency and error review
- Tenant-scoped debugging — reproduce issues without mixing other customers’ traffic
- Plan analysis — compare usage across `tenant_plan` values
- Quota alerts — export or poll Analytics to warn when a tenant nears a monthly cap
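The quota-alert bullet above can be sketched as a rollup over exported log rows. The row shape here (a metadata dict plus a cost field) is an assumption about your export format, not a documented Portkey schema:

```python
from collections import defaultdict

rows = [
    {"metadata": {"tenant_id": "acme"}, "cost": 0.004},
    {"metadata": {"tenant_id": "acme"}, "cost": 0.011},
    {"metadata": {"tenant_id": "globex"}, "cost": 0.002},
]

def cost_by_tenant(rows):
    # Sum cost per tenant_id; rows missing the tag land in "unknown".
    totals = defaultdict(float)
    for row in rows:
        totals[row["metadata"].get("tenant_id", "unknown")] += row["cost"]
    return dict(totals)

totals = cost_by_tenant(rows)
```

Compare each total against the tenant's monthly cap to drive alerts.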
Use case 5 — Session and conversation tracking
The problem
Chat and agent flows issue many LLM calls per logical conversation. Without a shared id, conversation-level cost and multi-turn debugging stay opaque.
The solution
Use a stable `session_id` for every turn in the same conversation. Optionally add `turn` or similar for ordering.
What you unlock
- End-to-end conversation cost — sum tokens across one `session_id`
- Multi-turn debugging — inspect prior turns when a later response fails
- Context growth — relate token counts across turns to pruning or summarisation needs
- Drop-off analysis — correlate abandoned sessions with errors
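A minimal sketch of the pattern: one object owns the stable `session_id` and an incrementing `turn`, and every call in the conversation draws its metadata from it. The `Conversation` class is illustrative, not an SDK type:

```python
import uuid

class Conversation:
    def __init__(self, user_id: str):
        self.session_id = str(uuid.uuid4())  # stable across all turns
        self.user_id = user_id
        self.turn = 0

    def next_metadata(self) -> dict[str, str]:
        self.turn += 1
        return {
            "_user": self.user_id,
            "session_id": self.session_id,
            "turn": str(self.turn),  # metadata values must be strings
        }

convo = Conversation("u_42")
first, second = convo.next_metadata(), convo.next_metadata()
```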
Use case 6 — Internal request tracing and correlation
The problem
LLM calls must tie to application logs, traces, or tickets. Without a shared id, correlating Portkey Logs with the rest of the stack is slow.
The solution
Propagate `request_id` (or a trace id) from gateways, queues, or APM. Add `service` / `caller` when multiple services share one API key.
What you unlock
- One-hop correlation — search Logs by `request_id` to match Datadog, Sentry, or internal traces
- Service attribution — see which microservice originated the call
- Latency breakdown — compare app latency vs model latency
- Incident review — filter all LLM calls tied to known trace ids
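Propagation can be as simple as lifting an upstream correlation id out of the inbound headers, with a fresh id as fallback. The header names below mirror common conventions and are assumptions; adjust to your stack:

```python
import uuid

def tracing_metadata(inbound_headers: dict[str, str], service: str) -> dict[str, str]:
    # Prefer the id the upstream system already assigned.
    request_id = (
        inbound_headers.get("x-request-id")
        or inbound_headers.get("x-trace-id")
        or str(uuid.uuid4())
    )
    return {"request_id": request_id, "service": service}

meta = tracing_metadata({"x-request-id": "req-789"}, service="billing-api")
```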
Portkey also supports the `x-portkey-trace-id` header for correlating Portkey requests.
Use case 7 — Prompt version and experiment tracking
The problem
Prompt iteration runs experiments without clear labelling, making it hard to compare cost, latency, or quality between variants.
The solution
Tag `experiment`, `variant`, and `prompt_version` (or your own names) on each call.
What you unlock
- Cost and latency by variant — compare treatments in Analytics
- Error and guardrail rates — segment by `variant`
- Gradual rollout — track metrics as traffic shifts between variants
Use case 8 — Compliance, audit, and data governance
The problem
Regulated teams need auditable records: who invoked the model, with what data classification, under which policy.
The solution
Add governance-oriented keys (examples: `user_role`, `data_class`, `regulation`, `consent_ref`, `case_id`, `jurisdiction`). Metadata is stored with request context in Logs—pair with export workflows for evidence packs.
What you unlock
- Structured audit fields — filter and export by classification and case
- DSAR support — filter by `_user` for access-request bundles
- Role segmentation — review usage by `user_role`
Use case 9 — AI agent and workflow observability
The problem
Agent runs fan out to many LLM calls (planning, tools, reflection). A flat log list hides which step or tool each call belongs to.
The solution
Tag `agent_run_id`, `agent_name`, `step`, and optionally `tool` on every call in the run.
See Portkey's ADK integration for agent usage.
What you unlock
- Per-run cost — aggregate tokens by `agent_run_id`
- Step-level debugging — replay ordering when the final answer is wrong
- Tool usage — group by `tool` to see hot tools and token impact
- Loop detection — repeated `step` values under one `agent_run_id` flag potential loops
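The loop-detection bullet can be sketched as a count of repeated `step` values within one run. The row shape below is an assumption about your exported call records:

```python
from collections import Counter

calls = [
    {"agent_run_id": "run-1", "step": "plan"},
    {"agent_run_id": "run-1", "step": "tool:search"},
    {"agent_run_id": "run-1", "step": "tool:search"},
    {"agent_run_id": "run-1", "step": "tool:search"},
]

def looping_steps(calls, run_id, threshold=3):
    # Steps repeated >= threshold times within one run are loop suspects.
    counts = Counter(c["step"] for c in calls if c["agent_run_id"] == run_id)
    return [step for step, n in counts.items() if n >= threshold]

suspects = looping_steps(calls, "run-1")
```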
Use case 10 — Enterprise metadata governance
The problem
Ad-hoc tagging leaves gaps: missing keys break dashboards and compliance reports.
The solution
Define metadata at workspace, API key, and request levels. Higher levels merge down; on v1.10.20+, workspace wins on key conflicts, then API key, then request.
| Level | Precedence | Typical use |
|---|---|---|
| Workspace | Highest | Org-wide tags: `company`, `compliance_region` |
| API key | Middle | Team or service: `team`, `service`, `environment` |
| Request | Lowest | Per-call: `_user`, `session_id`, `feature`, `request_id` |
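The v1.10.20+ merge order in the table can be expressed as successive dict updates, lowest precedence first so higher levels overwrite on conflict:

```python
def effective_metadata(workspace, api_key, request):
    merged = dict(request)    # lowest precedence
    merged.update(api_key)    # API key overrides request
    merged.update(workspace)  # workspace wins on any conflict
    return merged

meta = effective_metadata(
    workspace={"company": "acme", "environment": "production"},
    api_key={"team": "ml", "environment": "staging"},
    request={"_user": "u_42", "team": "app"},
)
```

Here `environment` resolves to the workspace value and `team` to the API key value, while request-only keys like `_user` survive untouched.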
Enforcing required metadata
Enterprise orgs can attach JSON Schema requirements to new or updated API keys and workspaces so required keys are always present. See Enforcing request metadata.
Self-hosted: inject metadata from headers
Enterprise self-hosted; gateway 2.5.0+.
Set `HEADERS_TO_METADATA` so named inbound headers merge into metadata (case-insensitive). Useful when proxies already send `x-request-id` or `x-tenant-id`.
Where metadata appears
Analytics

Logs
Filter by any key used in traffic, for example `_user`, `feature`, or `tenant_id`.
Implementation reference
Best practices
Naming conventions
Inconsistent keys (`user_id` vs `userId` vs `_user`) fragment Analytics. Define a small schema for the org and route all calls through a shared helper.
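Such a shared helper can be a single normalisation map, so stray spellings never reach Analytics. A sketch with an illustrative mapping:

```python
# One canonical spelling per concept; extend as the org schema grows.
CANONICAL = {"user_id": "_user", "userId": "_user", "env": "environment"}

def normalise(metadata: dict[str, str]) -> dict[str, str]:
    # Rewrite known aliases to their canonical key; pass others through.
    return {CANONICAL.get(k, k): v for k, v in metadata.items()}

meta = normalise({"userId": "u_42", "env": "production", "feature": "chat"})
```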
Default _user for end-user traffic
Include `_user` whenever the call is on behalf of a known user so dashboard user analytics stay populated.
Stay within 128 characters
Use short ids (UUIDs, slugs). Long prose belongs in the message body, not metadata.
No secrets in metadata
Metadata is visible in Logs and to workspace members with access. Never store API keys, passwords, or sensitive PII—only opaque identifiers and classification labels.
Saved filters
Use Filters for common combinations (for example `environment=production` plus one `feature`) to speed up triage.
API key metadata for service identity
When each microservice has its own Portkey API key, set `team`, `service`, and `environment` on the key so attribution survives forgotten request-level tags.
Summary: metadata use cases at a glance
| Use case | Example keys | Primary benefit |
|---|---|---|
| User analytics | _user, plan, account_id | Per-user cost, usage, outliers |
| Feature attribution | feature, team, version | Feature-level AI spend |
| Environment segmentation | environment, region, deployment | Clean prod vs non-prod views |
| Multi-tenant SaaS | tenant_id, tenant_plan, _user | Per-tenant billing and isolation |
| Session tracking | session_id, _user, turn | Conversation cost and debugging |
| Internal tracing | request_id, service, caller | Cross-system correlation |
| Prompt experiments | experiment, variant, prompt_version | A/B cost and quality |
| Compliance / audit | _user, data_class, regulation, case_id | Auditable, filterable records |
| AI agents | agent_run_id, agent_name, step, tool | Per-run cost and step debugging |
| Enterprise governance | Workspace + API key metadata | Consistent tags org-wide |
Further reading
- Metadata — Special keys, precedence, screenshots
- Analytics
- Filters
- Logs export
- Enforcing request metadata
- Track costs using metadata

