- Added support for the `service_tier` parameter in the Groq provider configuration.
- Added AssumedRole support for Bedrock application inference profiles.
- Bedrock: extended support for `cohere` and `titan` models.
- Added support for the `createTranscription`, `createTranslation`, `imageGeneration`, `batch`, and `files` endpoints.
- Added support for `file_url` and `mime_type` for `file` content parts in Anthropic requests.
- Added `EntraID` and `ManagedIdentity` authentication support for Azure Redis cache.
- Added support for the `REDIS_USERNAME` and `REDIS_PASSWORD` environment variables.
- Set `CACHE_STORE` with `azure-redis` as the value to use Azure Redis as the cache store.
- ManagedIdentity: set `AZURE_REDIS_AUTH_MODE` and `AZURE_REDIS_MANAGED_CLIENT_ID` for a different auth setup; the Gateway falls back to `AZURE_AUTH_MODE` and `AZURE_MANAGED_CLIENT_ID` if these are not provided.
- EntraID: set `AZURE_REDIS_AUTH_MODE` along with `AZURE_REDIS_ENTRA_CLIENT_ID`, `AZURE_REDIS_ENTRA_CLIENT_SECRET`, and `AZURE_REDIS_ENTRA_TENANT_ID` for a different auth setup; the Gateway falls back to `AZURE_AUTH_MODE` along with `AZURE_ENTRA_CLIENT_ID`, `AZURE_ENTRA_CLIENT_SECRET`, and `AZURE_ENTRA_TENANT_ID` if these are not provided (see the sketch after this list).
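For illustration, a minimal sketch of how this fallback chain could be resolved. The variable names come from the entries above; the auth-mode strings and the `RedisAuthConfig` shape are assumptions, not Gateway source:

```ts
// Illustrative only: the mode strings and config shape are assumptions.
type RedisAuthConfig =
  | { mode: 'basic'; username?: string; password?: string }
  | { mode: 'managed-identity'; clientId?: string }
  | { mode: 'entra-id'; clientId?: string; clientSecret?: string; tenantId?: string };

function resolveAzureRedisAuth(env: NodeJS.ProcessEnv): RedisAuthConfig {
  // Redis-specific variables take precedence; the generic AZURE_* variables
  // are the documented fallback when the Redis-specific ones are not provided.
  const mode = env.AZURE_REDIS_AUTH_MODE ?? env.AZURE_AUTH_MODE;

  if (mode === 'managed-identity') {
    return {
      mode,
      clientId: env.AZURE_REDIS_MANAGED_CLIENT_ID ?? env.AZURE_MANAGED_CLIENT_ID,
    };
  }
  if (mode === 'entra-id') {
    return {
      mode,
      clientId: env.AZURE_REDIS_ENTRA_CLIENT_ID ?? env.AZURE_ENTRA_CLIENT_ID,
      clientSecret: env.AZURE_REDIS_ENTRA_CLIENT_SECRET ?? env.AZURE_ENTRA_CLIENT_SECRET,
      tenantId: env.AZURE_REDIS_ENTRA_TENANT_ID ?? env.AZURE_ENTRA_TENANT_ID,
    };
  }
  // Default: plain username/password auth.
  return { mode: 'basic', username: env.REDIS_USERNAME, password: env.REDIS_PASSWORD };
}

console.log(resolveAzureRedisAuth(process.env));
```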
- Outbound proxy support: set the `HTTPS_PROXY` environment variable to enable this feature.
- Added support for the `background` and `service_tier` parameters.
- Added support for inference profiles.
- New `/v1/otel/v1/traces` endpoint to collect any OTEL traces as Portkey traces.
- The `Dataservice` is required for log exports to work via the Data Plane.
- Added support for `file` content parts in requests.
- New fields for `file` content parts: `file_url` and `mime_type`.
- `OTEL_SERVICE_NAME`: sets the `service.name` resource attribute value.
- `OTEL_RESOURCE_ATTRIBUTES`: comma-separated `key=value` pairs which will be sent as individual resource attributes (see the sketch after this list).
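As a quick illustration of the expected format; the parser below is hypothetical, and only the variable names and the comma-separated `key=value` convention come from the entries above:

```ts
// Hypothetical helper: splits OTEL_RESOURCE_ATTRIBUTES (e.g. "team=ml,env=prod")
// into individual resource attributes.
function parseResourceAttributes(raw: string | undefined): Record<string, string> {
  const attributes: Record<string, string> = {};
  for (const pair of (raw ?? '').split(',')) {
    const [key, ...rest] = pair.split('=');
    if (key && rest.length > 0) attributes[key.trim()] = rest.join('=').trim();
  }
  return attributes;
}

// OTEL_SERVICE_NAME populates the standard service.name resource attribute;
// 'unknown_service' is the OTEL convention for a missing name, assumed here.
const resource = {
  'service.name': process.env.OTEL_SERVICE_NAME ?? 'unknown_service',
  ...parseResourceAttributes(process.env.OTEL_RESOURCE_ATTRIBUTES),
};
console.log(resource);
```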
- Fixed streaming failures that occurred when `stream_options` was included in the request, because the response transformer was not handling `usage` chunk mapping as expected.
- Added support for the `/images/generations` route.
- Added the `usage` request parameter and response mapping.
- Handled the `ping` event returned in streams and mapped the `usage` field returned in the response.
- Changed the `object` field in the chat completions response from `chat_completion` to `chat.completion` for OpenAI spec compliance.
- Fixed: the `model` label was set as `N/A` for Bedrock requests.
- Added `response_format` support for Deepseek partner models.
- Handled the `$schema` property in tools' properties JSON Schema.
- Fixed `blocklist` handling for the Azure Content Safety guardrail.
- Metadata precedence is now: Workspace Default Metadata > API Key Default Metadata > Incoming Request Metadata.
- Fixed: `/v1` was getting added in the final URL, causing request failures.
- Handled configs that contain a `targets` field with a single virtual key in it.
- New environment variables for pushing OTEL telemetry: `OTEL_PUSH_ENABLED` and `OTEL_ENDPOINT`.
- New vector store integrations: `pinecone` and `milvus`. Configure them via the `VECTOR_STORE`, `VECTOR_STORE_ADDRESS`, `VECTOR_STORE_API_KEY`, and `VECTOR_STORE_COLLECTION_NAME` environment variables (see the sketch after this list).
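A minimal sketch of wiring these variables together; the `VectorStoreConfig` shape is illustrative, not Gateway source:

```ts
// Illustrative shape: only the env var names come from the entry above.
interface VectorStoreConfig {
  store: 'pinecone' | 'milvus';
  address: string;
  apiKey: string;
  collectionName: string;
}

function resolveVectorStore(env: NodeJS.ProcessEnv): VectorStoreConfig | undefined {
  const store = env.VECTOR_STORE;
  if (store === 'pinecone' || store === 'milvus') {
    return {
      store,
      address: env.VECTOR_STORE_ADDRESS ?? '',
      apiKey: env.VECTOR_STORE_API_KEY ?? '',
      collectionName: env.VECTOR_STORE_COLLECTION_NAME ?? '',
    };
  }
  return undefined; // vector store disabled or unrecognized
}

console.log(resolveVectorStore(process.env));
```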
- Added support for the `/v1/prompts/:id/render` endpoint.
- Added support for `dimensions` for embeddings.
- New `portkey_processing_time_excluding_last_byte_ms` metric, which provides Portkey processing time excluding the LLM last byte diff latency (`llm_last_byte_diff_duration_milliseconds`).
- Added support for the `web_search`, `file_search`, and `code_execution` tools.
- New retry setting: `use_retry_after_header` (see the sketch after this list). When set to `true`, if the provider returns the `x-retry-after` or `x-retry-after-ms` headers, Gateway will use these headers for retry wait times instead of applying the default exponential backoff for 429 responses.
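A sketch of what a retry block using this setting could look like. `use_retry_after_header` comes from the entry above; the surrounding `attempts`/`on_status_codes` fields follow Portkey's documented retry config shape, and the virtual key is a placeholder:

```ts
// Placeholder virtual key; retry shape follows Portkey's documented config.
const config = {
  virtual_key: 'openai-xxx', // hypothetical
  retry: {
    attempts: 3,
    on_status_codes: [429],
    // When true, honor the provider's x-retry-after / x-retry-after-ms headers
    // for wait times instead of the default exponential backoff.
    use_retry_after_header: true,
  },
};

console.log(JSON.stringify(config, null, 2));
```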
- Added support for the `logprobs` and `top_logprobs` request parameters.
- Added support for the `response_format` and `search_recency_filter` request parameters.
- Fixed handling of stream chunks separated by `\n\n`.
- The Gateway now returns the `model` field in responses for the `/chat/completions` API if the providers do not natively return this field, ensuring alignment with the OpenAI signature.
- `custom_id` will be preserved in the VertexAI batch output.
- Added support for the `logprobs` and `top_logprobs` parameters.
- New environment variable (`AWS_ENDPOINT_DOMAIN`) which can be used to override the default value (`amazonaws.com`).
- Fixed: the `x-portkey-retry-attempt-count` response header was set to `-1` even when no retries were configured.
- New metric (`llm_last_byte_diff_duration_milliseconds`) to track LLM last byte latency for chunked JSON responses.
- New label (`stream`) for all metrics. Possible values: `0`/`1`.
- TLS support: the `TLS_KEY_PATH` and `TLS_CERT_PATH` environment variables will be used to fetch the certificate and key from the volume (see the sketch below).
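A minimal sketch of how a Node process could consume these variables to serve TLS. This is illustrative bootstrap code with assumed fallback paths, not the Gateway's actual startup logic:

```ts
import { readFileSync } from 'node:fs';
import { createServer } from 'node:https';

// TLS_KEY_PATH and TLS_CERT_PATH point at files mounted into the volume;
// the fallback paths here are assumptions for the sketch.
const key = readFileSync(process.env.TLS_KEY_PATH ?? '/etc/tls/tls.key');
const cert = readFileSync(process.env.TLS_CERT_PATH ?? '/etc/tls/tls.crt');

createServer({ key, cert }, (req, res) => {
  res.writeHead(200, { 'content-type': 'application/json' });
  res.end(JSON.stringify({ status: 'ok' }));
}).listen(8787, () => console.log('listening with TLS on :8787'));
```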
- Added support for the `webm` mimeType.
- Replaced `extra-parameters: ignore` with `extra-parameters: drop` due to deprecation by Azure.
- New: `params` can be used to specify body fields in conditional router queries (see the sketch below). Previously, only metadata-based routing was supported.
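A sketch of a conditional router config that routes on a body field via `params`. The layout follows Portkey's documented conditional routing shape; the target names and virtual keys are placeholders:

```ts
// Target names, virtual keys, and the matched model value are placeholders.
const routerConfig = {
  strategy: {
    mode: 'conditional',
    conditions: [
      {
        // New: match on request body fields via `params.*`
        query: { 'params.model': { $eq: 'gpt-4o' } },
        then: 'openai-target',
      },
      {
        // Metadata-based routing continues to work as before.
        query: { 'metadata.user_plan': { $eq: 'free' } },
        then: 'small-model-target',
      },
    ],
    default: 'openai-target',
  },
  targets: [
    { name: 'openai-target', virtual_key: 'openai-xxx' },
    { name: 'small-model-target', virtual_key: 'groq-xxx' },
  ],
};

console.log(JSON.stringify(routerConfig, null, 2));
```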
- Handled `error` type stream chunks returned by the provider.
- The `webhook` plugin now has mutation capability.
- The `timeout` parameter can be used for all the guardrails that make a fetch call internally.
- Added support for the `stream_options` parameter.
- Added `cause` and `name` in logs for provider-level fetch failures.
- Logging is disabled when the `debug` flag is set to false.
- Mutated requests/responses are flagged with `transformed` set to true.
- Renamed the `llm_request_duration_seconds` metric to `llm_request_duration_milliseconds`.
- New metric `portkey_request_duration_milliseconds` to track Portkey's processing latency.
- Added `logprobs` support compatible with the OpenAI format via the `logprobs` and `top_logprobs` parameters (see the request sketch below).
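In OpenAI-compatible form the parameters look like this; the Gateway URL, API key, and model are placeholders:

```ts
// Placeholder gateway URL and key; logprobs/top_logprobs follow the OpenAI format.
async function main() {
  const response = await fetch('http://localhost:8787/v1/chat/completions', {
    method: 'POST',
    headers: {
      'content-type': 'application/json',
      authorization: 'Bearer <api-key>',
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: 'Say hi' }],
      logprobs: true, // return log-probabilities for output tokens
      top_logprobs: 5, // and the 5 most likely alternatives per position
    }),
  });
  console.log(await response.json());
}
main().catch(console.error);
```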
- Added `total_tokens` in the stream response to make it compliant with the OpenAI spec.
- New: `control_plane` option (for hybrid deployments) so that batching/retries can be managed by Portkey.
- New `input_guardrails` and `output_guardrails` fields in config, which accept an array of guardrail slugs (see the sketch below).
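A sketch of the new config fields; the virtual key and guardrail slugs are placeholders:

```ts
// input_guardrails run on the request, output_guardrails on the response.
const config = {
  virtual_key: 'openai-xxx', // placeholder
  input_guardrails: ['pii-check-xxx', 'prompt-injection-xxx'],
  output_guardrails: ['content-safety-xxx'],
};

console.log(JSON.stringify(config, null, 2));
```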
- New `explanation` property to clarify why checks passed or failed.
- `developer` role support across all providers.
- New moderation categories: `hate_and_discrimination`, `violence_and_threats`, etc.
- Removed the `stream` parameter from the Bedrock-Cohere integration.
- The prompt render API (`/render`) is a control plane API. Added a detailed message to highlight this in case a user tries to use this API on their deployed Gateway.
- Fixed `finish_reason` mapping for streaming responses.
- Fixed `model` param mapping for VertexAI Meta partner models.
- New metric labels: `method`, `route`, `code`, `custom_labels`, `provider`, `model`, `source`, `tools`.
- New log store option `S3_CUSTOM`, which can be used to integrate any S3-compatible storage service for request logging. A base path for log objects can be set via `LOG_STORE_BASEPATH`.
- Returns `citations` in the response if the `strict_open_ai_compliance` flag is set to false.
- Added support for `openrouter/auto`.
- Added `code` in error responses.
- Added support for the `prediction`, `store`, `metadata`, `audio`, and `modalities` parameters.
- New provider (`lambda`): supports chat completions and completions; models are referenced by name (`<model-name>`).
- Updated the `llm_cost_sum` Prometheus metric to avoid unnecessary labels.