# Portkey Docs

> Comprehensive documentation for Portkey's AI Gateway, Guardrails, Observability, Prompts, and Governance features.

## Docs

- [How to Contribute](https://docs.portkey.ai/docs/README.md)
- [Get cache hit latency data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/graphs-time-series-data/get-cache-hit-latency-data.md)
- [Get cache hit rate data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/graphs-time-series-data/get-cache-hit-rate-data.md)
- [Get cost data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/graphs-time-series-data/get-cost-data.md)
- [Get error rate data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/graphs-time-series-data/get-error-rate-data.md)
- [Get errors data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/graphs-time-series-data/get-errors-data.md)
- [Get feedback data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/graphs-time-series-data/get-feedback-data.md)
- [Get feedback per AI models data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/graphs-time-series-data/get-feedback-per-ai-models-data.md)
- [Get feedback score distribution data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/graphs-time-series-data/get-feedback-score-distribution-data.md)
- [Get latency data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/graphs-time-series-data/get-latency-data.md)
- [Get requests data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/graphs-time-series-data/get-requests-data.md)
- [Get requests per user data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/graphs-time-series-data/get-requests-per-user-data.md)
- [Get rescued requests data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/graphs-time-series-data/get-rescued-requests-data.md)
- [Get status code data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/graphs-time-series-data/get-status-code-data.md)
- [Get tokens data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/graphs-time-series-data/get-tokens-data.md)
- [Get unique status code data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/graphs-time-series-data/get-unique-status-code-data.md)
- [Get users data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/graphs-time-series-data/get-users-data.md)
- [Get weighted feedback data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/graphs-time-series-data/get-weighted-feedback-data.md)
- [Get Metadata Grouped Data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/groups-paginated-data/get-metadata-grouped-data.md)
- [Get Model Grouped Data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/groups-paginated-data/get-model-grouped-data.md)
- [Get User Grouped Data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/groups-paginated-data/get-user-grouped-data.md)
- [Get All Cache Data](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/analytics/summary/get-all-cache-data.md)
- [Create API Key](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/api-keys/create-api-key.md): Creates a new API key.
- [Delete an API Key](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/api-keys/delete-an-api-key.md)
- [List API Keys](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/api-keys/list-api-keys.md)
- [Retrieve an API Key](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/api-keys/retrieve-an-api-key.md)
- [Rotate API Key](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/api-keys/rotate-api-key.md): Rotates an existing API key and returns a newly generated key value. The previous key remains valid during the transition period and expires at `key_transition_expires_at`.
- [Update API Key](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/api-keys/update-api-key.md): Updates an existing API key. The API key type (user vs service) and associated user_id cannot be changed after creation.
- [List Audit Logs](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/audit-logs/list-audit-logs.md)
- [Create Config](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/configs/create-config.md)
- [Delete Config](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/configs/delete-config.md)
- [List Config Versions](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/configs/list-config-versions.md)
- [List Configs](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/configs/list-configs.md)
- [Retrieve Config](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/configs/retrieve-config.md)
- [Update Config](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/configs/update-config.md)
- [Create Guardrail](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/guardrails/create-guardrail.md): Creates a new guardrail with specified checks and actions
- [Delete Guardrail](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/guardrails/delete-guardrail.md): Deletes an existing guardrail
- [List Guardrails](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/guardrails/list-guardrails.md): Retrieves a paginated list of guardrails for the specified workspace or organisation
- [Retrieve Guardrail](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/guardrails/retrieve-guardrail.md): Retrieves details of a specific guardrail by ID or slug
- [Update Guardrail](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/guardrails/update-guardrail.md): Updates an existing guardrail's name, checks, or actions
- [Create Integration](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/integrations/create-integration.md)
- [Delete Integration](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/integrations/delete-integration.md)
- [List Integrations](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/integrations/list-integrations.md)
- [Delete Custom Model](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/integrations/models/delete-custom-model.md): Removes multiple custom models from an integration by their slugs.
- [List Model Access](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/integrations/models/list-model-access.md): Retrieves all model access for a specific integration, with configuration and pricing details.
- [Update Model Access](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/integrations/models/update-model-access.md): Updates model access, pricing configuration, and settings for multiple models in an integration. Can enable/disable models and configure custom pricing.
- [Retrieve Integration](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/integrations/retrieve-integration.md)
- [Update Integration](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/integrations/update-integration.md)
- [List Workspace Access](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/integrations/workspaces/list-workspace-access.md): Retrieves workspace access configuration for an integration, including usage limits and rate limits.
- [Update Workspace Access](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/integrations/workspaces/update-workspace-access.md): Updates workspace access permissions, usage limits, and rate limits for an integration. Can configure global workspace access or per-workspace settings.
- [List MCP Integration Capabilities](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-integrations/capabilities/list-mcp-integration-capabilities.md)
- [Update MCP Integration Capabilities](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-integrations/capabilities/update-mcp-integration-capabilities.md)
- [Create MCP Integration](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-integrations/create-mcp-integration.md): Create a new MCP Integration. Requires either organisation_id (with admin API key) or workspace_id in the body.
- [Delete MCP Integration](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-integrations/delete-mcp-integration.md)
- [List MCP Integrations](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-integrations/list-mcp-integrations.md): List MCP Integrations for the organisation or workspace. Requires organisation_id (when using an org admin API key) or the x-portkey-api-key header.
- [Retrieve MCP Integration](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-integrations/retrieve-mcp-integration.md)
- [Retrieve MCP Integration Metadata](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-integrations/retrieve-mcp-integration-metadata.md)
- [Update MCP Integration](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-integrations/update-mcp-integration.md)
- [List MCP Integration Workspaces](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-integrations/workspaces/list-mcp-integration-workspaces.md)
- [Update MCP Integration Workspaces](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-integrations/workspaces/update-mcp-integration-workspaces.md)
- [Bulk update MCP Server capability overrides](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-servers/capabilities/bulk-update-mcp-server-capabilities.md)
- [List MCP Server capabilities](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-servers/capabilities/list-mcp-server-capabilities.md)
- [Create MCP Server](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-servers/create-mcp-server.md): Create a new MCP Server (workspace instance of an MCP Integration). Requires workspace_id or the x-portkey-api-key header.
- [Delete MCP Server](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-servers/delete-mcp-server.md)
- [List MCP Servers](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-servers/list-mcp-servers.md): List MCP Servers for the workspace. Requires workspace_id or the x-portkey-api-key header.
- [Retrieve MCP Server](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-servers/retrieve-mcp-server.md)
- [Test MCP Server connection](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-servers/test-mcp-server.md): Test connectivity to the MCP server via its integration URL.
- [Update MCP Server](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-servers/update-mcp-server.md)
- [Bulk update MCP Server user access](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-servers/user-access/bulk-update-mcp-server-user-access.md)
- [List MCP Server user access](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/mcp-servers/user-access/list-mcp-server-user-access.md)
- [Create Rate Limits Policy](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/policies/rate-limits/create-rate-limits-policy.md): Create a new rate limits policy to control the rate of requests or tokens consumed per minute, hour, or day.
- [Delete Rate Limits Policy](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/policies/rate-limits/delete-rate-limits-policy.md): Delete a rate limits policy.
- [List Rate Limits Policies](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/policies/rate-limits/list-rate-limits-policy.md): List all rate limits policies with optional filtering.
- [Retrieve Rate Limits Policy](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/policies/rate-limits/retrieve-rate-limits-policy.md): Get a single rate limits policy by ID.
- [Update Rate Limits Policy](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/policies/rate-limits/update-rate-limits-policy.md): Update an existing rate limits policy.
- [Create Usage Limits Policy](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/policies/usage-limits/create-usage-limits-policy.md): Create a new usage limits policy to control total usage (cost or tokens) over a period.
- [Delete Usage Limits Policy](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/policies/usage-limits/delete-usage-limits-policy.md): Archive (soft delete) a usage limits policy.
- [List Usage Limits Policies](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/policies/usage-limits/list-usage-limits-policy.md): List all usage limits policies with optional filtering.
- [List Usage Limits Policy Entities](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/policies/usage-limits/list-usage-limits-policy-entities.md): List entities tracked by a usage limits policy with their current usage.
- [Reset Usage Limits Policy Entity](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/policies/usage-limits/reset-usage-limits-policy-entity.md): Reset the current usage for a specific entity in a usage limits policy.
- [Retrieve Usage Limits Policy](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/policies/usage-limits/retrieve-usage-limits-policy.md): Get a single usage limits policy by ID.
- [Update Usage Limits Policy](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/policies/usage-limits/update-usage-limits-policy.md): Update an existing usage limits policy.
- [Create Prompt Collection](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/collections/create-collection.md): Creates a new collection in the specified workspace
- [Delete Prompt Collection](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/collections/delete-collection.md): Deletes a collection
- [List Prompt Collections](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/collections/list-collections.md): Lists all collections in the specified workspace
- [Retrieve Prompt Collection](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/collections/retrieve-collection.md): Retrieves details of a specific collection
- [Update Prompt Collection](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/collections/update-collection.md): Updates a collection's details
- [Create Prompt](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/create-prompt.md)
- [Delete Prompt](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/delete-prompt.md)
- [Create Label](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/labels/create-label.md): Creates a new label in the system
- [Delete Label](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/labels/delete-label.md): Deletes a label
- [List Labels](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/labels/list-labels.md): Returns a list of labels based on filters
- [Retrieve Label](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/labels/retrieve-label.md): Returns a specific label by its ID
- [Update Label](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/labels/update-label.md): Updates an existing label
- [List Prompt Versions](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/list-prompt-versions.md)
- [List Prompts](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/list-prompts.md)
- [Create Partial](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/partials/create-partial.md)
- [Delete Partial](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/partials/delete-partial.md)
- [List Partial Versions](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/partials/list-partial-versions.md)
- [List Partials](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/partials/list-partials.md)
- [Publish Partial](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/partials/publish-partial.md)
- [Retrieve Partial](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/partials/retrieve-partial.md)
- [Update Partial](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/partials/update-partial.md)
- [Publish Prompt](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/publish-prompt.md)
- [Retrieve Prompt](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/retrieve-prompt.md)
- [Retrieve Prompt Version](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/retrieve-prompt-version.md)
- [Update Prompt](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/update-prompt.md): Update a prompt's metadata and/or create a new version with updated template content.
- [Update Prompt Version](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/prompts/update-prompt-version.md): Updates metadata for a specific prompt version. **This endpoint only supports updating the `label_id` field.**
- [Create Provider](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/providers/create-provider.md)
- [Delete Provider](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/providers/delete-provider.md)
- [List Providers](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/providers/list-providers.md)
- [Retrieve Provider](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/providers/retrieve-provider.md)
- [Update Provider](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/providers/update-provider.md)
- [Create Secret Reference](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/secret-references/create-secret-reference.md)
- [Delete Secret Reference](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/secret-references/delete-secret-reference.md)
- [List Secret References](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/secret-references/list-secret-references.md)
- [Retrieve Secret Reference](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/secret-references/retrieve-secret-reference.md)
- [Update Secret Reference](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/secret-references/update-secret-reference.md)
- [Delete a user invite](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/user-invites/delete-a-user-invite.md)
- [Invite a User](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/user-invites/invite-a-user.md): Send an invite to a user in your organization
- [Resend a user invite](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/user-invites/resend-a-user-invite.md): Resend an invite to a user in your organization
- [Retrieve all user invites](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/user-invites/retrieve-all-user-invites.md)
- [Retrieve a user invite](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/user-invites/retrieve-an-invite.md)
- [Remove a user](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/users/remove-a-user.md)
- [Retrieve a user](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/users/retrieve-a-user.md)
- [Retrieve all users](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/users/retrieve-all-users.md)
- [Update a user](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/users/update-a-user.md)
- [Create Virtual Key](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/virtual-keys/create-virtual-key.md)
- [Delete Virtual Key](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/virtual-keys/delete-virtual-key.md)
- [List Virtual Keys](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/virtual-keys/list-virtual-keys.md)
- [Retrieve Virtual Key](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/virtual-keys/retrieve-virtual-key.md)
- [Update Virtual Key](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/virtual-keys/update-virtual-key.md)
- [Add a Workspace Member](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/workspace-members/add-a-workspace-member.md)
- [Remove Workspace Member](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/workspace-members/remove-workspace-member.md)
- [Retrieve a Workspace Member](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/workspace-members/retrieve-a-workspace-member.md)
- [Retrieve all Workspace Members](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/workspace-members/retrieve-all-workspace-members.md)
- [Update Workspace Member](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/workspace-members/update-workspace-member.md)
- [Create Workspace](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/workspaces/create-workspace.md)
- [Delete a Workspace](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/workspaces/delete-a-workspace.md)
- [Retrieve a Workspace](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/workspaces/retrieve-a-workspace.md)
- [Retrieve all Workspaces](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/workspaces/retrieve-all-workspaces.md)
- [Update Workspace](https://docs.portkey.ai/docs/api-reference/admin-api/control-plane/workspaces/update-workspace.md)
- [Create Feedback](https://docs.portkey.ai/docs/api-reference/admin-api/data-plane/feedback/create-feedback.md): This endpoint allows users to submit feedback for a particular interaction or response.
- [Update Feedback](https://docs.portkey.ai/docs/api-reference/admin-api/data-plane/feedback/update-feedback.md): This endpoint allows users to update existing feedback.
- [Insert a Log](https://docs.portkey.ai/docs/api-reference/admin-api/data-plane/logs/insert-a-log.md): Submit one or more log entries
- [Cancel a Log Export](https://docs.portkey.ai/docs/api-reference/admin-api/data-plane/logs/log-exports-beta/cancel-a-log-export.md)
- [Create a Log Export](https://docs.portkey.ai/docs/api-reference/admin-api/data-plane/logs/log-exports-beta/create-a-log-export.md)
- [Download a Log Export](https://docs.portkey.ai/docs/api-reference/admin-api/data-plane/logs/log-exports-beta/download-a-log-export.md)
- [List Log Exports](https://docs.portkey.ai/docs/api-reference/admin-api/data-plane/logs/log-exports-beta/list-log-exports.md)
- [Retrieve a Log Export](https://docs.portkey.ai/docs/api-reference/admin-api/data-plane/logs/log-exports-beta/retrieve-a-log-export.md)
- [Start a Log Export](https://docs.portkey.ai/docs/api-reference/admin-api/data-plane/logs/log-exports-beta/start-a-log-export.md)
- [Update a Log Export](https://docs.portkey.ai/docs/api-reference/admin-api/data-plane/logs/log-exports-beta/update-a-log-export.md)
- [Errors](https://docs.portkey.ai/docs/api-reference/admin-api/error.md): Error codes returned by the Admin API and how to resolve them.
- [Introduction](https://docs.portkey.ai/docs/api-reference/admin-api/introduction.md): Manage your Portkey organization and workspaces programmatically
- [OpenAPI Specification](https://docs.portkey.ai/docs/api-reference/admin-api/open-api-specification.md)
- [Agentic Usage](https://docs.portkey.ai/docs/api-reference/inference-api/agentic-usage.md): Add Portkey SDK skills to your AI coding assistant
- [Create Assistant](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/assistants/create-assistant.md)
- [Delete Assistant](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/assistants/delete-assistant.md)
- [List Assistants](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/assistants/list-assistants.md)
- [Modify Assistant](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/assistants/modify-assistant.md)
- [Retrieve Assistant](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/assistants/retrieve-assistant.md)
- [Create Message](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/messages/create-message.md)
- [Delete Message](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/messages/delete-message.md)
- [List Messages](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/messages/list-messages.md)
- [Modify Message](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/messages/modify-message.md)
- [Retrieve Message](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/messages/retrieve-message.md)
- [List Run Steps](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/run-steps/list-run-steps.md)
- [Retrieve Run Steps](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/run-steps/retrieve-run-steps.md)
- [Cancel Run](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/runs/cancel-run.md)
- [Create Run](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/runs/create-run.md)
- [Create Thread and Run](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/runs/create-thread-and-run.md)
- [List Runs](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/runs/list-runs.md)
- [Modify Run](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/runs/modify-run.md)
- [Retrieve Run](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/runs/retrieve-run.md)
- [Submit Tool Outputs to Run](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/runs/submit-tool-outputs-to-run.md)
- [Create Thread](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/threads/create-thread.md)
- [Delete Thread](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/threads/delete-thread.md)
- [Modify Thread](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/threads/modify-thread.md)
- [Retrieve Thread](https://docs.portkey.ai/docs/api-reference/inference-api/assistants-api/threads/retrieve-thread.md)
- [Create Speech](https://docs.portkey.ai/docs/api-reference/inference-api/audio/create-speech.md)
- [Create Transcription](https://docs.portkey.ai/docs/api-reference/inference-api/audio/create-transcription.md)
- [Create Translation](https://docs.portkey.ai/docs/api-reference/inference-api/audio/create-translation.md)
- [Authentication](https://docs.portkey.ai/docs/api-reference/inference-api/authentication.md)
- [Cancel Batch](https://docs.portkey.ai/docs/api-reference/inference-api/batch/cancel-batch.md)
- [Create Batch](https://docs.portkey.ai/docs/api-reference/inference-api/batch/create-batch.md)
- [List Batch](https://docs.portkey.ai/docs/api-reference/inference-api/batch/list-batch.md)
- [Retrieve Batch](https://docs.portkey.ai/docs/api-reference/inference-api/batch/retrieve-batch.md)
- [Chat](https://docs.portkey.ai/docs/api-reference/inference-api/chat.md)
- [Completions](https://docs.portkey.ai/docs/api-reference/inference-api/completions.md)
- [Gateway Config Object](https://docs.portkey.ai/docs/api-reference/inference-api/config-object.md)
- [Embeddings](https://docs.portkey.ai/docs/api-reference/inference-api/embeddings.md)
- [Errors](https://docs.portkey.ai/docs/api-reference/inference-api/error-codes.md)
- [Delete File](https://docs.portkey.ai/docs/api-reference/inference-api/files/delete-file.md)
- [List Files](https://docs.portkey.ai/docs/api-reference/inference-api/files/list-files.md)
- [Retrieve File](https://docs.portkey.ai/docs/api-reference/inference-api/files/retrieve-file.md)
- [Retrieve File Content](https://docs.portkey.ai/docs/api-reference/inference-api/files/retrieve-file-content.md)
- [Upload File](https://docs.portkey.ai/docs/api-reference/inference-api/files/upload-file.md)
- [Cancel Fine-tuning](https://docs.portkey.ai/docs/api-reference/inference-api/fine-tuning/cancel-fine-tuning.md)
- [Create Fine-tuning Job](https://docs.portkey.ai/docs/api-reference/inference-api/fine-tuning/create-fine-tuning-job.md): Fine-tune a provider model
- [List Fine-tuning Checkpoints](https://docs.portkey.ai/docs/api-reference/inference-api/fine-tuning/list-fine-tuning-checkpoints.md)
- [List Fine-tuning Events](https://docs.portkey.ai/docs/api-reference/inference-api/fine-tuning/list-fine-tuning-events.md)
- [List Fine-tuning Jobs](https://docs.portkey.ai/docs/api-reference/inference-api/fine-tuning/list-fine-tuning-jobs.md)
- [Retrieve Fine-tuning Job](https://docs.portkey.ai/docs/api-reference/inference-api/fine-tuning/retrieve-fine-tuning-job.md)
- [Gateway to Other APIs](https://docs.portkey.ai/docs/api-reference/inference-api/gateway-for-other-apis.md): Access any custom provider endpoint through the Portkey API
- [Headers](https://docs.portkey.ai/docs/api-reference/inference-api/headers.md): Header requirements and options for the Portkey API
- [Create Image](https://docs.portkey.ai/docs/api-reference/inference-api/images/create-image.md)
- [Create Image Edit](https://docs.portkey.ai/docs/api-reference/inference-api/images/create-image-edit.md)
- [Create Image Variation](https://docs.portkey.ai/docs/api-reference/inference-api/images/create-image-variation.md)
- [Introduction](https://docs.portkey.ai/docs/api-reference/inference-api/introduction.md): This documentation provides detailed information about the various ways you can access and interact with Portkey - **a robust AI gateway** designed to simplify and enhance your experience with Large Language Models (LLMs) like OpenAI's GPT models.
- [Models](https://docs.portkey.ai/docs/api-reference/inference-api/models/models.md): Lists the currently available models that can be used through Portkey, and provides basic information about each one.
- [Moderations](https://docs.portkey.ai/docs/api-reference/inference-api/moderations.md)
- [OpenAPI Specification](https://docs.portkey.ai/docs/api-reference/inference-api/open-api-specification.md)
- [Prompt Completions](https://docs.portkey.ai/docs/api-reference/inference-api/prompts/prompt-completion.md): Execute your saved prompt templates on Portkey
- [Prompt Render](https://docs.portkey.ai/docs/api-reference/inference-api/prompts/render.md): Renders a prompt template with its variable values filled in
- [Rerank](https://docs.portkey.ai/docs/api-reference/inference-api/rerank.md): Reranks a list of documents based on their relevance to a query. This endpoint provides a unified interface to reranking models from multiple providers including Cohere, Voyage, Jina, Pinecone, Bedrock, and Azure AI.
- [Response Schema](https://docs.portkey.ai/docs/api-reference/inference-api/response-schema.md)
- [Delete a Response](https://docs.portkey.ai/docs/api-reference/inference-api/responses/delete-response.md)
- [Create a Response](https://docs.portkey.ai/docs/api-reference/inference-api/responses/responses.md)
- [List Input Items](https://docs.portkey.ai/docs/api-reference/inference-api/responses/retrieve-inputs.md)
- [Get a Response](https://docs.portkey.ai/docs/api-reference/inference-api/responses/retrieve-response.md)
- [Supported Providers](https://docs.portkey.ai/docs/api-reference/inference-api/supported-providers.md)
- [C# (.NET)](https://docs.portkey.ai/docs/api-reference/sdk/c-sharp.md): Integrate Portkey in your `.NET` app easily using the OpenAI library and get advanced monitoring, routing, and enterprise features.
- [Supported SDKs](https://docs.portkey.ai/docs/api-reference/sdk/list.md): Find the best way to use Portkey in your preferred language.
- [Node.js](https://docs.portkey.ai/docs/api-reference/sdk/node.md): Official Portkey Node.js SDK – robust, modern, and fully typed integration for JavaScript and TypeScript developers.
- [Python](https://docs.portkey.ai/docs/api-reference/sdk/python.md): Official Portkey Python SDK to help take your AI apps to production - [December](https://docs.portkey.ai/docs/changelog/2024/dec.md) - [November](https://docs.portkey.ai/docs/changelog/2024/nov.md) - [October](https://docs.portkey.ai/docs/changelog/2024/oct.md) - [April](https://docs.portkey.ai/docs/changelog/2025/apr.md) - [August](https://docs.portkey.ai/docs/changelog/2025/august.md) - [December](https://docs.portkey.ai/docs/changelog/2025/december.md) - [February](https://docs.portkey.ai/docs/changelog/2025/feb.md) - [January](https://docs.portkey.ai/docs/changelog/2025/jan.md) - [July](https://docs.portkey.ai/docs/changelog/2025/july.md) - [June](https://docs.portkey.ai/docs/changelog/2025/june.md) - [March](https://docs.portkey.ai/docs/changelog/2025/mar.md) - [May](https://docs.portkey.ai/docs/changelog/2025/may.md) - [November](https://docs.portkey.ai/docs/changelog/2025/november.md) - [October](https://docs.portkey.ai/docs/changelog/2025/october.md) - [September](https://docs.portkey.ai/docs/changelog/2025/september.md) - [February](https://docs.portkey.ai/docs/changelog/2026/february.md) - [January](https://docs.portkey.ai/docs/changelog/2026/january.md) - [March](https://docs.portkey.ai/docs/changelog/2026/march.md) - [Backend](https://docs.portkey.ai/docs/changelog/backend.md) - [Data Service](https://docs.portkey.ai/docs/changelog/data-service.md) - [Enterprise Gateway](https://docs.portkey.ai/docs/changelog/enterprise.md) - [Frontend](https://docs.portkey.ai/docs/changelog/frontend.md) - [Helm Chart](https://docs.portkey.ai/docs/changelog/helm-chart.md) - [Node.js](https://docs.portkey.ai/docs/changelog/node-sdk-changelog.md) - [Latest Updates](https://docs.portkey.ai/docs/changelog/product.md) - [Python](https://docs.portkey.ai/docs/changelog/python-sdk-changelog.md) - [Setup Claude Code or Codex using Agent CLI](https://docs.portkey.ai/docs/guides/coding-agents/agent-cli.md): One 
command to connect Claude Code or Codex to Portkey — gateway routing, MCP tools, and team skills configured in a single interactive flow. - [Agent Skills](https://docs.portkey.ai/docs/guides/coding-agents/skills.md): Create versioned, team-shared skills in Portkey and sync them to Claude Code, Cursor, Codex, and more with one command. - [Converting STDIO to Remote MCP Servers](https://docs.portkey.ai/docs/guides/converting-stdio-to-streamable-http.md): Step-by-step guide to converting local STDIO MCP servers to production-ready Streamable HTTP servers - [Overview](https://docs.portkey.ai/docs/guides/getting-started.md) - [101 on Portkey's Gateway Configs](https://docs.portkey.ai/docs/guides/getting-started/101-on-portkey-s-gateway-configs.md): You are likely familiar with how to make an API call to GPT-4 for chat completions. - [A/B Test Prompts and Models](https://docs.portkey.ai/docs/guides/getting-started/a-b-test-prompts-and-models.md): A/B testing with large language models in production is crucial for driving optimal performance and user satisfaction. - [Function Calling](https://docs.portkey.ai/docs/guides/getting-started/function-calling.md): Get the LLM to interact with external APIs! - [Getting Started with AI Gateway](https://docs.portkey.ai/docs/guides/getting-started/getting-started-with-ai-gateway.md): Connect to 1,600+ LLMs through Portkey's unified API with observability, reliability, and cost controls. - [Image Generation](https://docs.portkey.ai/docs/guides/getting-started/image-generation.md) - [Llama 3 on Groq](https://docs.portkey.ai/docs/guides/getting-started/llama-3-on-groq.md) - [Return Repeat Requests from Cache](https://docs.portkey.ai/docs/guides/getting-started/return-repeat-requests-from-cache.md): If you have multiple users of your GenAI app triggering the same or similar queries to your models, fetching LLM responses from the models can be slow and expensive.
- [Tackling Rate Limiting](https://docs.portkey.ai/docs/guides/getting-started/tackling-rate-limiting.md): LLMs are **costly** to run. As their usage increases, providers have to balance serving user requests against straining their GPU resources. They generally deal with this by putting _rate limits_ on how many requests a user can send in a minute or in a day. - [Trigger Automatic Retries on LLM Failures](https://docs.portkey.ai/docs/guides/getting-started/trigger-automatic-retries-on-llm-failures.md) - [Overview](https://docs.portkey.ai/docs/guides/integrations.md) - [Anyscale](https://docs.portkey.ai/docs/guides/integrations/anyscale.md): Portkey helps bring Anyscale APIs to production with its abstractions for observability, fallbacks, caching, and more. Use the Anyscale API **through** Portkey. - [Deepinfra](https://docs.portkey.ai/docs/guides/integrations/deepinfra.md) - [Groq](https://docs.portkey.ai/docs/guides/integrations/groq.md) - [Introduction to GPT-4o](https://docs.portkey.ai/docs/guides/integrations/introduction-to-gpt-4o.md) - [Langchain](https://docs.portkey.ai/docs/guides/integrations/langchain.md) - [Llama 3 on Portkey + Together AI](https://docs.portkey.ai/docs/guides/integrations/llama-3-on-portkey-+-together-ai.md): Try out the new Llama 3 model directly using the OpenAI SDK - [Mistral](https://docs.portkey.ai/docs/guides/integrations/mistral.md): Portkey helps bring Mistral's APIs to production with its observability suite & AI Gateway. - [Mixtral 8x22b](https://docs.portkey.ai/docs/guides/integrations/mixtral-8x22b.md) - [Sync Open WebUI Feedback → Portkey](https://docs.portkey.ai/docs/guides/integrations/openwebui-to-portkey.md): How to export thumbs-up/down from Open WebUI and ingest into Portkey using a one-file Python or Node script.
- [Segmind](https://docs.portkey.ai/docs/guides/integrations/segmind.md) - [Vercel AI](https://docs.portkey.ai/docs/guides/integrations/vercel-ai.md): Portkey is a control panel for your Vercel AI app. It makes your LLM integrations prod-ready, reliable, fast, and cost-efficient. - [Prompts](https://docs.portkey.ai/docs/guides/prompts.md) - [Build a chatbot using Portkey's Prompt Templates](https://docs.portkey.ai/docs/guides/prompts/build-a-chatbot-using-portkeys-prompt-templates.md): Portkey's prompt templates offer a powerful solution for testing and building chatbots. - [Optimizing Prompts for Customer Support using Portkey | Llama Prompt Ops Integration](https://docs.portkey.ai/docs/guides/prompts/llama-prompts.md) - [Building an LLM-as-a-Judge System for AI (Customer Support) Agent](https://docs.portkey.ai/docs/guides/prompts/llm-as-a-judge.md) - [Ultimate AI SDR](https://docs.portkey.ai/docs/guides/prompts/ultimate-ai-sdr.md): Building a sophisticated AI SDR agent leveraging internet search and evals to draft personalized outreach emails in 15 seconds - [Overview](https://docs.portkey.ai/docs/guides/use-cases.md) - [Build an article suggestion app with Supabase pgvector, and Portkey](https://docs.portkey.ai/docs/guides/use-cases/build-an-article-suggestion-app-with-supabase-pgvector-and-portkey.md): Consider that you have a list of support articles that you want to suggest to users when they search. You want the suggestions to be the best fit possible. With the availability of tools like Large Language Models (LLMs) and Vector Databases, the approach towards suggestions & recommendation systems… - [Combining Routing Strategies: Conditional, Load Balancing & Fallbacks](https://docs.portkey.ai/docs/guides/use-cases/combining-routing-strategies.md): Every Portkey routing strategy — conditional, load balancing, fallback — can be nested inside any other. This guide covers five real-world patterns.
- [Comparing Top 10 LMSYS Models with Portkey](https://docs.portkey.ai/docs/guides/use-cases/comparing-top10-lmsys-models-with-portkey.md) - [Creating your own partner guardrails](https://docs.portkey.ai/docs/guides/use-cases/creating-partner-guardrails.md): Build custom guardrail plugins for the Portkey Gateway and test them locally. - [Comparing DeepSeek Models Against OpenAI, Anthropic & More Using Portkey](https://docs.portkey.ai/docs/guides/use-cases/deepseek-r1.md) - [Detecting Emotions with GPT-4o](https://docs.portkey.ai/docs/guides/use-cases/emotions-with-gpt-4o.md) - [Enforcing JSON Schema with Anyscale & Together](https://docs.portkey.ai/docs/guides/use-cases/enforcing-json-schema-with-anyscale-and-together.md): Get the LLM to adhere to your JSON schema using Anyscale & Together AI's newly introduced JSON modes - [Unified LLM API with Automatic Failover & Error Handling](https://docs.portkey.ai/docs/guides/use-cases/enterprise-ready-unified-api.md): Build a single API interface with automatic failover and unified error handling across OpenAI, Anthropic, and AWS Bedrock - [Fallback from SDXL to DALL-E 3](https://docs.portkey.ai/docs/guides/use-cases/fallback-from-sdxl-to-dall-e-3.md): Generative AI models have revolutionized text generation and opened up new possibilities for developers. - [Testing Application Resilience with Fallbacks](https://docs.portkey.ai/docs/guides/use-cases/fallbacks-test.md) - [Few-Shot Prompting](https://docs.portkey.ai/docs/guides/use-cases/few-shot-prompting.md): LLMs are highly capable of following a given structure. By providing a few examples of how the assistant should respond to a given prompt, the LLM can generate responses that closely follow the format of these examples. - [How to use Portkey Guardrails for PII Protection](https://docs.portkey.ai/docs/guides/use-cases/guardrail-pii-use-case.md): Portkey Guardrails is a powerful tool for protecting Personally Identifiable Information (PII) in your AI applications.
This guide provides a comprehensive walkthrough of implementing PII protection using Portkey's guardrail capabilities. - [How to use OpenAI SDK with Portkey Prompt Templates](https://docs.portkey.ai/docs/guides/use-cases/how-to-use-openai-sdk-with-portkey-prompt-templates.md): Portkey's Prompt Playground allows you to test and tinker with various hyperparameters without any external dependencies and deploy them to production seamlessly. Moreover, all team members can use the same prompt template, ensuring that everyone works from the same source of truth. - [Web Search for Any LLM in LibreChat](https://docs.portkey.ai/docs/guides/use-cases/librechat-web-search.md) - [Multi-Tenant LLM Access with Enterprise Control for Customer-Facing Apps](https://docs.portkey.ai/docs/guides/use-cases/multi-tenant-ai-feature.md) - [Portkey with OpenAI Computer Use](https://docs.portkey.ai/docs/guides/use-cases/openai-computer-use.md): Leverage Portkey with OpenAI's Computer Use tool for automated browser interactions with enterprise-grade observability and controls - [Using Private MCP Servers with Responses API](https://docs.portkey.ai/docs/guides/use-cases/private-mcp-servers.md): Client-side tool handling for private MCP servers that aren't accessible to model providers - [How to Run Structured Output Evals at Scale](https://docs.portkey.ai/docs/guides/use-cases/run-batch-evals.md) - [Run Portkey on Prompts from Langchain Hub](https://docs.portkey.ai/docs/guides/use-cases/run-portkey-on-prompts-from-langchain-hub.md): Writing the right prompt to get a quality LLM response is often hard. You want the prompt to be specialized and exhaustive enough for your problem. There is a high chance someone else might’ve stumbled across a similar situation and written the prompt you’ve been figuring out all this while.
- [Setting up resilient Load balancers with failure-mitigating Fallbacks](https://docs.portkey.ai/docs/guides/use-cases/setting-up-resilient-load-balancers-with-failure-mitigating-fallbacks.md): Companies often face challenges of scaling their services efficiently as the traffic to their applications grows - when you’re consuming APIs, the first point of failure is that if you hit the API too much, you can get rate limited. Load balancing is a proven way to scale usage horizontally without ov… - [Setup OpenAI -> Azure OpenAI Fallback](https://docs.portkey.ai/docs/guides/use-cases/setup-openai-greater-than-azure-openai-fallback.md): Portkey Fallbacks can automatically switch your app's requests from one LLM provider to another, ensuring reliability by allowing you to fallback among multiple LLMs. - [Smart Fallback with Model-Optimized Prompts](https://docs.portkey.ai/docs/guides/use-cases/smart-fallback-with-model-optimized-prompts.md): Portkey can help you easily create fallbacks from one LLM to another, making your application more reliable. While Fallback ensures reliability, it also means that you'll be running a prompt optimized for one LLM on another, which can often lead to significant differences in the final output. - [Tracking LLM Costs Per User with Portkey](https://docs.portkey.ai/docs/guides/use-cases/track-costs-using-metadata.md): Monitor and analyze user-level LLM costs across 1600+ models using Portkey's metadata and analytics API. - [4. Advanced Strategies for Performance Improvement](https://docs.portkey.ai/docs/guides/whitepapers/optimizing-llm-costs/advanced-strategies.md) - [5. Architectural Considerations](https://docs.portkey.ai/docs/guides/whitepapers/optimizing-llm-costs/architectural-considerations.md) - [10. Conclusion and Key Takeaways](https://docs.portkey.ai/docs/guides/whitepapers/optimizing-llm-costs/conclusion-and-key-takeaways.md): Summarizing the key strategies for LLM cost optimization and performance improvement - [7. 
Cost Effective Development Practices](https://docs.portkey.ai/docs/guides/whitepapers/optimizing-llm-costs/cost-effective-development.md) - [Executive Summary](https://docs.portkey.ai/docs/guides/whitepapers/optimizing-llm-costs/executive-summary.md): Overview of LLM cost optimization and performance improvement strategies - [3. FrugalGPT Techniques for Cost Optimization](https://docs.portkey.ai/docs/guides/whitepapers/optimizing-llm-costs/frugalgpt-techniques.md) - [9. Future Trends in LLM Cost Optimization](https://docs.portkey.ai/docs/guides/whitepapers/optimizing-llm-costs/future-trends.md) - [1. Introduction](https://docs.portkey.ai/docs/guides/whitepapers/optimizing-llm-costs/introduction.md): An overview of the challenges and opportunities in LLM cost optimization - [2. Understanding LLM Cost Drivers](https://docs.portkey.ai/docs/guides/whitepapers/optimizing-llm-costs/llm-cost-drivers.md): An overview of the factors that influence costs in Large Language Model applications - [6. Operational Best Practices](https://docs.portkey.ai/docs/guides/whitepapers/optimizing-llm-costs/operational-best-practices.md) - [8. User Education and Change Management](https://docs.portkey.ai/docs/guides/whitepapers/optimizing-llm-costs/user-education.md) - [Error AB03: You do not have enough permissions](https://docs.portkey.ai/docs/help-center/you-do-not-have-enough-permissions.md): Troubleshoot and resolve the AB03 permission error in Portkey — covers Playground, Completions API, Admin API, and all common scenarios.
- [Overview](https://docs.portkey.ai/docs/integrations/agents.md): Portkey helps bring your agents to production - [AWS AgentCore](https://docs.portkey.ai/docs/integrations/agents/agentcore.md): Run Portkey-powered agents inside Amazon Bedrock AgentCore - [Agno AI](https://docs.portkey.ai/docs/integrations/agents/agno-ai.md): Use Portkey with Agno to build production-ready autonomous AI agents - [Autogen](https://docs.portkey.ai/docs/integrations/agents/autogen.md): Use Portkey with Autogen to take your AI Agents to production - [Bring Your own Agents](https://docs.portkey.ai/docs/integrations/agents/bring-your-own-agents.md): You can also use Portkey if you are doing custom agent orchestration! - [Claude Agent SDK](https://docs.portkey.ai/docs/integrations/agents/claude-agent-sdk.md): Use Portkey with Claude Agent SDK for production-ready AI agents with observability and governance - [Control Flow](https://docs.portkey.ai/docs/integrations/agents/control-flow.md): Use Portkey with Control Flow to take your AI Agents to production - [CrewAI](https://docs.portkey.ai/docs/integrations/agents/crewai.md): Use Portkey with CrewAI to take your AI Agents to production - [Langchain Agents](https://docs.portkey.ai/docs/integrations/agents/langchain-agents.md) - [LangGraph](https://docs.portkey.ai/docs/integrations/agents/langgraph.md): Use Portkey with LangGraph to take your AI agent workflows to production - [Langroid](https://docs.portkey.ai/docs/integrations/agents/langroid.md) - [LiveKit](https://docs.portkey.ai/docs/integrations/agents/livekit.md): Build production-ready voice AI agents with Portkey's enterprise features - [Llama Agents by Llamaindex](https://docs.portkey.ai/docs/integrations/agents/llama-agents.md): Use Portkey with Llama Agents to take your AI Agents to production - [Mastra Agents](https://docs.portkey.ai/docs/integrations/agents/mastra-agents.md): Use Portkey with Mastra to take your AI Agents to production - [OpenAI Agents SDK 
(Python)](https://docs.portkey.ai/docs/integrations/agents/openai-agents.md): Use Portkey with OpenAI Agents SDK to take your AI Agents to production - [OpenAI Agents SDK (TypeScript)](https://docs.portkey.ai/docs/integrations/agents/openai-agents-ts.md): Use Portkey with OpenAI Agents SDK to take your AI Agents to production - [OpenAI Swarm](https://docs.portkey.ai/docs/integrations/agents/openai-swarm.md): The Portkey x Swarm integration brings advanced AI gateway capabilities, full-stack observability, and reliability features to build production-ready AI agents. - [Phidata](https://docs.portkey.ai/docs/integrations/agents/phidata.md): Use Portkey with Phidata to take your AI Agents to production - [Pydantic AI](https://docs.portkey.ai/docs/integrations/agents/pydantic-ai.md): Use Portkey with PydanticAI to take your AI Agents to production - [Strands Agents](https://docs.portkey.ai/docs/integrations/agents/strands.md): Use Portkey with AWS's Strands Agents to take your AI Agents to production - [Overview](https://docs.portkey.ai/docs/integrations/ai-apps.md) - [Microsoft Azure](https://docs.portkey.ai/docs/integrations/cloud/azure.md): Discover how you can build your Gen AI platform on Azure using Portkey - [Integrations](https://docs.portkey.ai/docs/integrations/ecosystem.md) - [Acuvity](https://docs.portkey.ai/docs/integrations/guardrails/acuvity.md): Acuvity is a model-agnostic GenAI security solution. It is built to secure existing and future GenAI models, apps, services, tools, plugins, and more. - [Akto](https://docs.portkey.ai/docs/integrations/guardrails/akto.md): Akto Agentic Security provides advanced threat detection and security scanning for your LLM inputs and outputs.
- [Aporia](https://docs.portkey.ai/docs/integrations/guardrails/aporia.md) - [Azure Guardrails](https://docs.portkey.ai/docs/integrations/guardrails/azure-guardrails.md): Integrate Microsoft Azure's powerful content moderation services & PII guardrails with Portkey - [AWS Bedrock Guardrails](https://docs.portkey.ai/docs/integrations/guardrails/bedrock-guardrails.md): Secure your AI applications with AWS Bedrock's guardrail capabilities through Portkey. - [Bring Your Own Guardrails](https://docs.portkey.ai/docs/integrations/guardrails/bring-your-own-guardrails.md): Integrate your custom guardrails with Portkey using webhooks - [CrowdStrike AIDR](https://docs.portkey.ai/docs/integrations/guardrails/crowdstrike-aidr.md): CrowdStrike AI Detection and Response (AIDR) integration for scanning LLM inputs and outputs to block or redact harmful content. - [F5 Guardrails](https://docs.portkey.ai/docs/integrations/guardrails/f5-guardrails.md): F5 Guardrails (powered by CalypsoAI) provides advanced content moderation and PII detection capabilities for your LLM inputs and outputs. - [Javelin (Highflame)](https://docs.portkey.ai/docs/integrations/guardrails/javelin.md): Javelin provides comprehensive AI security guardrails including Trust & Safety, Prompt Injection Detection, and Language Detection for enterprise AI agents & applications. - [JWT Token Validator](https://docs.portkey.ai/docs/integrations/guardrails/jwt.md): Validate JWT tokens with signature verification, claim validation, and custom business logic rules. - [Lasso Security](https://docs.portkey.ai/docs/integrations/guardrails/lasso.md): Lasso Security protects your GenAI apps from data leaks, prompt injections, and other potential risks, keeping your systems safe and secure. - [Mistral](https://docs.portkey.ai/docs/integrations/guardrails/mistral.md): Mistral moderation service helps detect and filter harmful content across multiple policy dimensions to secure your AI applications. 
- [Palo Alto Networks Prisma AIRS](https://docs.portkey.ai/docs/integrations/guardrails/palo-alto-panw-prisma.md): Comprehensive AI security platform providing runtime protection against prompt injections, data leakage, and AI-specific threats - [Pangea](https://docs.portkey.ai/docs/integrations/guardrails/pangea.md): Pangea AI Guard helps analyze and redact text to prevent model manipulation and malicious content. - [Patronus AI](https://docs.portkey.ai/docs/integrations/guardrails/patronus-ai.md): Patronus excels in industry-specific guardrails for RAG workflows. - [Pillar](https://docs.portkey.ai/docs/integrations/guardrails/pillar.md) - [Prompt Security](https://docs.portkey.ai/docs/integrations/guardrails/prompt-security.md): Prompt Security detects and protects against prompt injection, sensitive data exposure, and other AI security threats. - [Qualifire](https://docs.portkey.ai/docs/integrations/guardrails/qualifire.md): Qualifire provides comprehensive AI reliability and quality checks including content moderation, hallucination detection, and policy compliance. - [Replace Custom Regex Patterns](https://docs.portkey.ai/docs/integrations/guardrails/regex.md): Redact custom patterns with Regex in Portkey. - [Zscaler AI Guard](https://docs.portkey.ai/docs/integrations/guardrails/zscaler.md): Zscaler AI Guard integration for enforcing security policies on LLM inputs and outputs, including Data Loss Prevention (DLP) and prompt injection protection. - [Overview](https://docs.portkey.ai/docs/integrations/libraries.md) - [Android Studio](https://docs.portkey.ai/docs/integrations/libraries/android-studio.md): Add observability, governance, and reliability to Android Studio with Portkey. 
- [Anthropic Computer Use](https://docs.portkey.ai/docs/integrations/libraries/anthropic-computer-use.md) - [AnythingLLM](https://docs.portkey.ai/docs/integrations/libraries/anythingllm.md): Add usage tracking, cost controls, and security guardrails to your AnythingLLM deployment - [Autogen (DEPRECATED)](https://docs.portkey.ai/docs/integrations/libraries/autogen.md): AutoGen is a framework that enables the development of LLM applications using multiple agents that can converse with each other to solve tasks. - [Claude Code](https://docs.portkey.ai/docs/integrations/libraries/claude-code.md): Integrate Portkey with Claude Code for enterprise-grade AI coding assistance with observability, reliability, and governance - [Claude Code with Anthropic](https://docs.portkey.ai/docs/integrations/libraries/claude-code-anthropic.md): Route Claude Code through Anthropic Direct API via Portkey for observability, governance, and reliability - [Claude Code with Amazon Bedrock](https://docs.portkey.ai/docs/integrations/libraries/claude-code-bedrock.md): Route Claude Code through Amazon Bedrock via Portkey for observability, governance, and reliability - [Claude Code with Google Vertex AI](https://docs.portkey.ai/docs/integrations/libraries/claude-code-vertex.md): Route Claude Code through Google Vertex AI via Portkey for observability, governance, and reliability - [Cline](https://docs.portkey.ai/docs/integrations/libraries/cline.md): Add enterprise-grade observability, cost tracking, and governance to your Cline AI coding assistant - [OpenAI Codex](https://docs.portkey.ai/docs/integrations/libraries/codex.md): Add usage tracking, cost controls, and security guardrails to Codex with Portkey - [Conductor](https://docs.portkey.ai/docs/integrations/libraries/conductor.md): Add usage tracking, cost controls, and security guardrails to Conductor with Portkey - [Cursor](https://docs.portkey.ai/docs/integrations/libraries/cursor.md): Add observability, governance, and reliability to 
Cursor with Portkey. - [DSPy](https://docs.portkey.ai/docs/integrations/libraries/dspy.md): Integrate DSPy with Portkey for production-ready LLM pipelines - [GitHub Copilot](https://docs.portkey.ai/docs/integrations/libraries/github-copilot.md): Add observability, governance, and cost controls to GitHub Copilot with Portkey. - [Goose](https://docs.portkey.ai/docs/integrations/libraries/goose.md): Add usage tracking, cost controls, and security guardrails to your Goose AI agent - [Instructor](https://docs.portkey.ai/docs/integrations/libraries/instructor.md): Add observability and reliability to your Instructor structured output pipelines. - [Jan](https://docs.portkey.ai/docs/integrations/libraries/janhq.md): Add usage tracking, cost controls, and security guardrails to your Jan deployment - [Langchain (JS/TS)](https://docs.portkey.ai/docs/integrations/libraries/langchain-js.md): Add Portkey's enterprise features to any Langchain app—observability, reliability, caching, and cost control. - [Langchain (Python)](https://docs.portkey.ai/docs/integrations/libraries/langchain-python.md): Add Portkey's enterprise features to any Langchain app—observability, reliability, caching, and cost control. - [Langflow](https://docs.portkey.ai/docs/integrations/libraries/langflow.md): Add enterprise-grade features to your Langflow AI workflows with Portkey - [LibreChat](https://docs.portkey.ai/docs/integrations/libraries/librechat.md): Cost tracking, observability, and more for LibreChat - [LlamaIndex (Python)](https://docs.portkey.ai/docs/integrations/libraries/llama-index-python.md): Add Portkey's enterprise features to any LlamaIndex app—observability, reliability, caching, and cost control. 
- [Microsoft Semantic Kernel](https://docs.portkey.ai/docs/integrations/libraries/microsoft-semantic-kernel.md) - [MindsDB](https://docs.portkey.ai/docs/integrations/libraries/mindsdb.md): Integrate MindsDB with Portkey for production-grade AI pipelines - [MongoDB](https://docs.portkey.ai/docs/integrations/libraries/mongodb.md): Store Portkey logs in MongoDB for enterprise deployments - [n8n](https://docs.portkey.ai/docs/integrations/libraries/n8n.md): Add observability, cost controls, and security guardrails to your n8n workflows - [OpenAI Agent Builder (TypeScript)](https://docs.portkey.ai/docs/integrations/libraries/openai-agent-builder.md): Add Portkey to visual agent workflows exported from OpenAI Agent Builder. - [OpenAI Agent Builder (Python)](https://docs.portkey.ai/docs/integrations/libraries/openai-agent-builder-python.md): Add Portkey to visual agent workflows exported from OpenAI Agent Builder. - [Any OpenAI-Compatible Project](https://docs.portkey.ai/docs/integrations/libraries/openai-compatible.md): Add Portkey to any OpenAI-compatible app—just change 2 settings. - [OpenClaw](https://docs.portkey.ai/docs/integrations/libraries/openclaw.md): Route OpenClaw through Portkey for observability, cost tracking, and reliability - [OpenCode](https://docs.portkey.ai/docs/integrations/libraries/opencode.md): Centralised cost monitoring for OpenCode - [Open WebUI](https://docs.portkey.ai/docs/integrations/libraries/openwebui.md): Enterprise-grade cost tracking, observability, and more for Open WebUI - [Image Generation with Open WebUI](https://docs.portkey.ai/docs/integrations/libraries/openwebui/image-generation-with-openwebui.md) - [Promptfoo](https://docs.portkey.ai/docs/integrations/libraries/promptfoo.md): Run Promptfoo evals on 1600+ LLMs with observability and cost tracking via Portkey.
- [Roo Code](https://docs.portkey.ai/docs/integrations/libraries/roo-code.md): Add enterprise-grade observability, cost tracking, and governance to your Roo AI coding assistant - [Supabase](https://docs.portkey.ai/docs/integrations/libraries/supabase.md): Generate embeddings with Portkey and store them in Supabase pgvector - [ToolJet](https://docs.portkey.ai/docs/integrations/libraries/tooljet.md): Add AI capabilities to ToolJet apps with Portkey - [Vercel AI SDK](https://docs.portkey.ai/docs/integrations/libraries/vercel.md): Use Portkey with Vercel AI SDK to build production-ready AI apps with full observability, reliability, and 250+ model support - [Zed](https://docs.portkey.ai/docs/integrations/libraries/zed.md): Learn how to integrate Portkey's enterprise features with Zed for enhanced observability, reliability and governance. - [Overview](https://docs.portkey.ai/docs/integrations/llms.md): Portkey connects with all major LLM providers and orchestration frameworks. - [AI21](https://docs.portkey.ai/docs/integrations/llms/ai21.md): Integrate AI21 models with Portkey's AI Gateway - [Anthropic](https://docs.portkey.ai/docs/integrations/llms/anthropic.md): Integrate Anthropic's Claude models with Portkey's AI Gateway - [Computer use tool](https://docs.portkey.ai/docs/integrations/llms/anthropic/computer-use.md) - [Count Tokens](https://docs.portkey.ai/docs/integrations/llms/anthropic/count-tokens.md) - [Prompt Caching](https://docs.portkey.ai/docs/integrations/llms/anthropic/prompt-caching.md) - [Remote MCP Support](https://docs.portkey.ai/docs/integrations/llms/anthropic/remote-mcp.md) - [Structured Outputs](https://docs.portkey.ai/docs/integrations/llms/anthropic/structured-outputs.md): Direct models to return data that conforms to a specific JSON schema. - [Anyscale](https://docs.portkey.ai/docs/integrations/llms/anyscale-llama2-mistral-zephyr.md): Use Anyscale's serverless endpoints for Llama, Mistral, and other open-source models through Portkey. 
- [AWS SageMaker](https://docs.portkey.ai/docs/integrations/llms/aws-sagemaker.md): Route to your AWS SageMaker models through Portkey - [Azure AI Foundry](https://docs.portkey.ai/docs/integrations/llms/azure-foundry.md): Learn how to integrate Azure AI Foundry with Portkey to access a wide range of AI models with enhanced observability and reliability features. - [Azure Government Cloud](https://docs.portkey.ai/docs/integrations/llms/azure-openai/azure-govcloud.md) - [Azure OpenAI](https://docs.portkey.ai/docs/integrations/llms/azure-openai/azure-openai.md): Azure OpenAI is a great alternative for accessing the best models, including GPT-4 and more, in your private environments. Portkey provides complete support for Azure OpenAI. - [Batches](https://docs.portkey.ai/docs/integrations/llms/azure-openai/batches.md): Perform batch inference with Azure OpenAI - [Files](https://docs.portkey.ai/docs/integrations/llms/azure-openai/files.md): Upload files to Azure OpenAI - [Fine-tune](https://docs.portkey.ai/docs/integrations/llms/azure-openai/fine-tuning.md): Fine-tune your models with Azure OpenAI - [AWS Bedrock](https://docs.portkey.ai/docs/integrations/llms/bedrock/aws-bedrock.md) - [AWS GovCloud with Bedrock](https://docs.portkey.ai/docs/integrations/llms/bedrock/aws-govcloud.md) - [Batches](https://docs.portkey.ai/docs/integrations/llms/bedrock/batches.md): Perform batch inference with Bedrock - [AWS Bedrock Knowledge Bases](https://docs.portkey.ai/docs/integrations/llms/bedrock/bedrock-knowledgebase.md): Create, manage, and connect your LLMs to organizational data using AWS Bedrock Knowledge Bases through Portkey.
- [Embeddings](https://docs.portkey.ai/docs/integrations/llms/bedrock/embeddings.md): Get embeddings from Bedrock - [Files](https://docs.portkey.ai/docs/integrations/llms/bedrock/files.md): Upload files to S3 for Bedrock batch inference - [Fine-tune](https://docs.portkey.ai/docs/integrations/llms/bedrock/fine-tuning.md): Fine-tune your models with Bedrock - [Prompt Caching on Bedrock](https://docs.portkey.ai/docs/integrations/llms/bedrock/prompt-caching.md) - [Rerank](https://docs.portkey.ai/docs/integrations/llms/bedrock/rerank.md): Rerank documents with Bedrock - [Structured Outputs](https://docs.portkey.ai/docs/integrations/llms/bedrock/structured-outputs.md): Direct models to return data that conforms to a specific JSON schema. - [Bring Your Own LLM](https://docs.portkey.ai/docs/integrations/llms/byollm.md): Integrate your privately hosted LLMs with Portkey for unified management, observability, and reliability. - [Cerebras](https://docs.portkey.ai/docs/integrations/llms/cerebras.md): Integrate Cerebras models with Portkey's AI Gateway - [Cohere](https://docs.portkey.ai/docs/integrations/llms/cohere.md): Integrate Cohere models with Portkey's AI Gateway - [Dashscope](https://docs.portkey.ai/docs/integrations/llms/dashscope.md): Integrate Dashscope with Portkey for seamless completions, embeddings, and advanced features. - [Databricks](https://docs.portkey.ai/docs/integrations/llms/databricks.md): Integrate Databricks Model Serving with Portkey's AI Gateway - [Deepbricks](https://docs.portkey.ai/docs/integrations/llms/deepbricks.md): Use Deepbricks' AI inference platform through Portkey for fast model deployment. - [Deepgram](https://docs.portkey.ai/docs/integrations/llms/deepgram.md): Use Deepgram's Speech-to-Text API through Portkey for audio transcription. 
- [Deepinfra](https://docs.portkey.ai/docs/integrations/llms/deepinfra.md): Integrate Deepinfra models with Portkey's AI Gateway - [DeepSeek](https://docs.portkey.ai/docs/integrations/llms/deepseek.md): Integrate DeepSeek models with Portkey's AI Gateway - [Featherless AI](https://docs.portkey.ai/docs/integrations/llms/featherless.md): Access 11,900+ open-source models through Featherless AI and Portkey. - [Fireworks](https://docs.portkey.ai/docs/integrations/llms/fireworks.md): Use Fireworks for chat, vision, embeddings, and image generation with advanced grammar and JSON modes through Portkey. - [Files](https://docs.portkey.ai/docs/integrations/llms/fireworks/files.md): Upload files to Fireworks - [Fine-tune](https://docs.portkey.ai/docs/integrations/llms/fireworks/fine-tuning.md): Fine-tune your models with Fireworks - [Google Gemini](https://docs.portkey.ai/docs/integrations/llms/gemini.md) - [GitHub Models](https://docs.portkey.ai/docs/integrations/llms/github.md): Use GitHub Models Marketplace through Portkey for AI model integration. - [Groq](https://docs.portkey.ai/docs/integrations/llms/groq.md): Use Groq's ultra-fast inference for chat completions, tool calling, and audio processing through Portkey. - [Hugging Face](https://docs.portkey.ai/docs/integrations/llms/huggingface.md): Use Hugging Face Inference endpoints through Portkey for thousands of open-source models. - [Inference.net](https://docs.portkey.ai/docs/integrations/llms/inference.net.md): Use Inference.net's distributed GPU compute platform through Portkey. - [Jina AI](https://docs.portkey.ai/docs/integrations/llms/jina-ai.md): Use Jina AI's embedding and reranker models through Portkey. - [Lambda Labs](https://docs.portkey.ai/docs/integrations/llms/lambda.md): Use Lambda's GPU-powered inference for Llama and open-source models through Portkey. 
- [Lemonfox-AI](https://docs.portkey.ai/docs/integrations/llms/lemon-fox.md): Integrate LemonFox with Portkey for chat, image generation, and speech-to-text. - [Lepton AI](https://docs.portkey.ai/docs/integrations/llms/lepton.md): Use Lepton AI's serverless AI endpoints for chat completions and speech-to-text through Portkey. - [Lingyi (01.ai)](https://docs.portkey.ai/docs/integrations/llms/lingyi-01.ai.md): Use Lingyi's Yi models through Portkey for advanced Chinese and multilingual AI. - [LocalAI](https://docs.portkey.ai/docs/integrations/llms/local-ai.md): Integrate LocalAI-hosted models with Portkey for local LLM deployment with observability. - [Mistral AI](https://docs.portkey.ai/docs/integrations/llms/mistral-ai.md): Integrate Mistral AI models with Portkey's AI Gateway - [Modal Labs](https://docs.portkey.ai/docs/integrations/llms/modal.md): Integrate Modal with Portkey for seamless completions and streaming. - [Monster API](https://docs.portkey.ai/docs/integrations/llms/monster-api.md): Access generative AI models at 80% lower costs through Monster API and Portkey. - [Moonshot](https://docs.portkey.ai/docs/integrations/llms/moonshot.md): Use Moonshot's AI models through Portkey for Chinese language processing. - [Ncompass](https://docs.portkey.ai/docs/integrations/llms/ncompass.md): Use Ncompass's AI models through Portkey for Snowflake-integrated deployments. - [Nebius](https://docs.portkey.ai/docs/integrations/llms/nebius.md): Use Nebius AI's inference platform through Portkey for scalable model deployment. - [Nomic](https://docs.portkey.ai/docs/integrations/llms/nomic.md): Use Nomic's superior embedding models through Portkey. - [Novita AI](https://docs.portkey.ai/docs/integrations/llms/novita-ai.md): Use Novita AI's inference platform through Portkey for diverse model access. 
- [Nscale (EU Sovereign)](https://docs.portkey.ai/docs/integrations/llms/nscale.md): Use Nscale's EU-based sovereign AI infrastructure through Portkey for compliant model deployment. - [Ollama](https://docs.portkey.ai/docs/integrations/llms/ollama.md): Integrate Ollama-hosted models with Portkey for local LLM deployment with full observability. - [OpenAI](https://docs.portkey.ai/docs/integrations/llms/openai.md): Integrate OpenAI's GPT models with Portkey's AI Gateway - [Batches](https://docs.portkey.ai/docs/integrations/llms/openai/batches.md): Perform batch inference with OpenAI - [Files](https://docs.portkey.ai/docs/integrations/llms/openai/files.md): Upload files to OpenAI - [Fine-tune](https://docs.portkey.ai/docs/integrations/llms/openai/fine-tuning.md): Fine-tune your models with OpenAI - [Prompt Caching](https://docs.portkey.ai/docs/integrations/llms/openai/prompt-caching-openai.md) - [Remote MCP Support](https://docs.portkey.ai/docs/integrations/llms/openai/remote-mcp.md) - [Structured Outputs](https://docs.portkey.ai/docs/integrations/llms/openai/structured-outputs.md): Structured Outputs ensure that the model always follows your supplied [JSON schema](https://json-schema.org/overview/what-is-jsonschema). Portkey supports OpenAI's Structured Outputs feature out of the box with our SDKs & APIs. - [OpenRouter](https://docs.portkey.ai/docs/integrations/llms/openrouter.md): Integrate OpenRouter models with Portkey's AI Gateway - [Oracle Cloud Infrastructure](https://docs.portkey.ai/docs/integrations/llms/oracle.md) - [OVHcloud AI Endpoints](https://docs.portkey.ai/docs/integrations/llms/ovhcloud.md) - [Perplexity AI](https://docs.portkey.ai/docs/integrations/llms/perplexity-ai.md): Use Perplexity's online reasoning models with advanced search capabilities through Portkey. - [Predibase](https://docs.portkey.ai/docs/integrations/llms/predibase.md): Use Predibase's open-source and fine-tuned LLMs through Portkey. 
- [Recraft AI](https://docs.portkey.ai/docs/integrations/llms/recraft-ai.md): Use Recraft AI's advanced image generation models through Portkey. - [Reka AI](https://docs.portkey.ai/docs/integrations/llms/reka-ai.md): Integrate Reka AI models with Portkey's AI Gateway - [Replicate](https://docs.portkey.ai/docs/integrations/llms/replicate.md): Use Portkey as a proxy to Replicate for auth management and logging. - [SambaNova](https://docs.portkey.ai/docs/integrations/llms/sambanova.md): Use SambaNova's fast inference for Llama and other open-source models through Portkey. - [Segmind](https://docs.portkey.ai/docs/integrations/llms/segmind.md): Use Segmind's Stable Diffusion models for fast, serverless image generation through Portkey. - [SiliconFlow](https://docs.portkey.ai/docs/integrations/llms/siliconflow.md): Use SiliconFlow's AI inference platform through Portkey for fast, cost-effective model deployment. - [Snowflake Cortex](https://docs.portkey.ai/docs/integrations/llms/snowflake-cortex.md): Use Snowflake Cortex AI models through Portkey for enterprise data cloud AI. - [Stability AI](https://docs.portkey.ai/docs/integrations/llms/stability-ai.md): Use Stability AI's Stable Diffusion models for image generation through Portkey. - [Suggest a new integration!](https://docs.portkey.ai/docs/integrations/llms/suggest-a-new-integration.md) - [Together AI](https://docs.portkey.ai/docs/integrations/llms/together-ai.md): Integrate Together AI models with Portkey's AI Gateway - [Triton Inference Server](https://docs.portkey.ai/docs/integrations/llms/triton.md): Integrate Triton-hosted custom models with Portkey for production observability and reliability. - [Upstage AI](https://docs.portkey.ai/docs/integrations/llms/upstage.md): Integrate Upstage with Portkey for chat, embeddings, streaming, and function calling. 
- [Google Vertex AI](https://docs.portkey.ai/docs/integrations/llms/vertex-ai.md) - [Batches](https://docs.portkey.ai/docs/integrations/llms/vertex-ai/batches.md): Perform batch inference with Vertex AI - [Controlled Generations](https://docs.portkey.ai/docs/integrations/llms/vertex-ai/controlled-generations.md): Controlled Generations ensure that the model always follows your supplied [JSON schema](https://json-schema.org/overview/what-is-jsonschema). Portkey supports Vertex AI's Controlled Generations feature out of the box with our SDKs & APIs. - [Embeddings](https://docs.portkey.ai/docs/integrations/llms/vertex-ai/embeddings.md): Get embeddings from Vertex AI - [Files](https://docs.portkey.ai/docs/integrations/llms/vertex-ai/files.md): Upload files to Google Cloud Storage for Vertex AI fine-tuning and batch inference - [Fine-tune](https://docs.portkey.ai/docs/integrations/llms/vertex-ai/fine-tuning.md): Fine-tune your models with Vertex AI - [vLLM](https://docs.portkey.ai/docs/integrations/llms/vllm.md): Integrate vLLM-hosted custom models with Portkey for production observability and reliability. - [Voyage AI](https://docs.portkey.ai/docs/integrations/llms/voyage-ai.md): Use Voyage AI's embeddings and reranking models through Portkey. - [Workers AI](https://docs.portkey.ai/docs/integrations/llms/workers-ai.md): Use Cloudflare Workers AI models through Portkey for serverless AI inference. - [xAI (Grok)](https://docs.portkey.ai/docs/integrations/llms/x-ai.md): Use xAI's Grok models through Portkey for chat completions, function calling, and vision capabilities. - [Z.AI](https://docs.portkey.ai/docs/integrations/llms/z-ai.md): Use Z.AI's GLM models through Portkey for unified API access and easier model routing across providers. - [ZhipuAI / ChatGLM / BigModel](https://docs.portkey.ai/docs/integrations/llms/zhipu.md): Use ZhipuAI's GLM models through Portkey for advanced Chinese and multilingual AI. 
- [Claude Desktop](https://docs.portkey.ai/docs/integrations/mcp-clients/claude.md): Connect your MCP servers to Claude Desktop through Portkey's secure MCP gateway - [Claude Code](https://docs.portkey.ai/docs/integrations/mcp-clients/claude-code.md): Connect your MCP servers to Claude Code CLI through Portkey's secure MCP gateway - [Cursor](https://docs.portkey.ai/docs/integrations/mcp-clients/cursor.md): Connect your MCP servers to Cursor IDE through Portkey's secure MCP gateway - [LibreChat](https://docs.portkey.ai/docs/integrations/mcp-clients/librechat.md): Connect your MCP servers to LibreChat through Portkey's secure MCP gateway - [VS Code](https://docs.portkey.ai/docs/integrations/mcp-clients/vs-code.md): Connect your MCP servers to Visual Studio Code through Portkey's secure MCP gateway - [Atlassian Remote MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/atlassian-mcp-server.md): The Atlassian Remote MCP Server is a cloud-based bridge between your Atlassian Cloud site and MCP-compatible clients, enabling secure interaction with Jira, Confluence, and Compass in real time. - [Cerebras Code MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/cerebras-code-mcp-server.md): The **Cerebras Code MCP Server (v1.3.3)** integrates high-speed code generation and editing capabilities into AI-assisted IDEs such as Claude Code, Cline, and Cursor. It allows you to use your preferred AI (Claude, Qwen, etc.) for planning and strategy, and delegate the actual code-writing and modif… - [Datadog MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/datadog-mcp-server.md): The Datadog MCP server enables AI agents to interact with Datadog monitoring, dashboards, metrics, logs, and alerts through MCP. Built for conversational and automated observability workflows. 
- [Figma MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/figma-mcp-server.md): The Figma MCP server connects Figma's design environment to MCP clients, enabling agents to translate designs into code, extract variables, and align output with your design system. - [Firebase MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/firebase-mcp-server.md): The Firebase MCP server gives AI agents programmatic access to Firebase projects, including Auth user management, Firestore data access, Storage rules, and GraphQL features via the Firebase CLI. - [Firecrawl MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/firecrawl-mcp-server.md): The Firecrawl MCP server enables AI agents to perform web scraping, crawling, search, extraction, and research through MCP. Built for content discovery and information retrieval workflows. - [GitHub MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/github-mcp-server.md): The GitHub MCP server enables AI agents to safely interact with repositories, issues, pull requests, commits, and code search through MCP. Built for conversational and automated engineering workflows. - [Gitlab MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/gitlab-mcp-server.md): The GitLab MCP server enables AI agents to interact with GitLab—the DevOps platform for software development, security, and operations. - [Grafana MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/grafana-mcp-server.md): The Grafana MCP server enables AI agents to interact with Grafana dashboards, metrics, logs, incidents, alerts, and monitoring tools through MCP. Built for observability and monitoring workflows. - [Kubernetes MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/kubernetes-mcp-server.md): The Kubernetes MCP server provides a flexible Model Context Protocol (MCP) interface for managing and interacting with Kubernetes and OpenShift clusters. 
- [Linear MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/linear-mcp-server.md): The Linear MCP server provides a standardized interface for AI models and agents to securely access and manage data from your Linear workspace. - [Notion MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/notion-mcp-server.md): The Notion MCP server enables AI agents to interact with Notion, a versatile workspace that combines note-taking, project management, knowledge bases, and relational databases. Through MCP, assistants can programmatically retrieve, search, and update pages, blocks, and databases—turning Notion into… - [Open Library MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/open-library-mcp-server.md): The Open Library MCP server provides AI agents with access to Open Library's book and author catalog through a simple read-only API with no authentication required. - [Playwright MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/playwright-mcp-server.md): The Playwright MCP server provides browser automation capabilities using Playwright through the Model Context Protocol (MCP), enabling deterministic web interaction via accessibility trees rather than screenshots. - [Postman MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/postman-mcp-server.md): The Postman MCP Server connects Postman to AI tools, enabling assistants and agents to securely access and interact with Postman workspaces, collections, environments, and APIs through natural language. It bridges Postman's API platform with AI-driven automation, allowing developers to query, evalua… - [Qdrant MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/qdrant-mcp-server.md): The Qdrant MCP server enables AI agents to store and retrieve semantic information using vector search through MCP. Built for persistent memory, contextual knowledge, and efficient semantic retrieval workflows. 
- [Stripe MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/stripe-mcp-server.md): The Stripe MCP server exposes Stripe's payments and billing operations to AI agents through the Model Context Protocol, providing secure access to customers, products, pricing, invoices, and more. - [Tavily MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/tavily-mcp-server.md): The Tavily MCP server enables AI agents to perform real-time web search, extraction, crawling, mapping, and deep research through MCP. - [Vectara MCP server](https://docs.portkey.ai/docs/integrations/mcp-servers/vectara-mcp-server.md): The Vectara MCP server provides AI agents with access to Vectara's semantic search and retrieval-augmented generation (RAG) APIs. - [Submit an Integration](https://docs.portkey.ai/docs/integrations/partner.md) - [Exa Online Search](https://docs.portkey.ai/docs/integrations/plugins/exa.md): Transform offline LLMs into online models with real-time internet search capabilities. - [Arize Phoenix](https://docs.portkey.ai/docs/integrations/tracing-providers/arize.md): Extend Portkey’s powerful AI Gateway with Arize Phoenix for unified LLM observability, tracing, and analytics across your ML stack. 
- [FutureAGI](https://docs.portkey.ai/docs/integrations/tracing-providers/future-agi.md): Integrate FutureAGI with Portkey for automated LLM evaluation and comprehensive observability - [HoneyHive](https://docs.portkey.ai/docs/integrations/tracing-providers/honeyhive.md): Integrate HoneyHive observability with Portkey's AI gateway for comprehensive LLM monitoring and advanced routing capabilities - [Langfuse](https://docs.portkey.ai/docs/integrations/tracing-providers/langfuse.md): Integrate Langfuse observability with Portkey's AI gateway for comprehensive LLM monitoring and advanced routing capabilities - [Langsmith](https://docs.portkey.ai/docs/integrations/tracing-providers/langsmith.md): Integrate LangSmith observability with Portkey's AI gateway for comprehensive LLM monitoring and advanced routing capabilities - [Pydantic Logfire](https://docs.portkey.ai/docs/integrations/tracing-providers/logfire.md): Modern Python observability with automatic OpenAI instrumentation and intelligent gateway routing - [MLflow Tracing](https://docs.portkey.ai/docs/integrations/tracing-providers/ml-flow.md): Enhance LLM observability with automatic tracing and intelligent gateway routing - [OpenLIT](https://docs.portkey.ai/docs/integrations/tracing-providers/openlit.md): Simplify AI development with OpenTelemetry-native observability and intelligent gateway routing - [OpenTelemetry Python SDK](https://docs.portkey.ai/docs/integrations/tracing-providers/opentelemetry-python-sdk.md): Direct OpenTelemetry instrumentation with full control over traces and intelligent gateway routing - [Phoenix(Arize) Open-Telemetry](https://docs.portkey.ai/docs/integrations/tracing-providers/phoenix.md): AI observability and debugging platform with OpenInference instrumentation and intelligent gateway routing - [Traceloop (OpenLLMetry)](https://docs.portkey.ai/docs/integrations/tracing-providers/traceloop.md) - [Milvus](https://docs.portkey.ai/docs/integrations/vector-databases/milvus.md) 
- [Qdrant](https://docs.portkey.ai/docs/integrations/vector-databases/qdrant.md) - [Portkey Features](https://docs.portkey.ai/docs/introduction/feature-overview.md): Explore the powerful features of Portkey - [Make Your First Request](https://docs.portkey.ai/docs/introduction/make-your-first-request.md): Integrate Portkey and analyze your first LLM call in 2 minutes! - [What is Portkey?](https://docs.portkey.ai/docs/introduction/what-is-portkey.md): Portkey AI is a comprehensive platform designed to streamline and enhance AI integration for developers and organizations. It serves as a unified interface for interacting with over 250 AI models, offering advanced tools for control, visibility, and security in your Generative AI apps. - [Configure Analytics Access Permissions for Workspaces](https://docs.portkey.ai/docs/product/administration/configure-analytics-access-permissions.md) - [Configure API Key Access Permissions for Workspaces](https://docs.portkey.ai/docs/product/administration/configure-api-key-access-permissions.md) - [Configure Data Visibility Settings for Workspaces](https://docs.portkey.ai/docs/product/administration/configure-data-visibility-settings.md) - [Configure Logs Access Permissions for Workspace](https://docs.portkey.ai/docs/product/administration/configure-logs-access-permissions-in-workspace.md) - [Configure Model Access Permissioning for AI Integrations](https://docs.portkey.ai/docs/product/administration/configure-model-access-permissioning-for-ai-integratinon.md) - [Configure Prompt Access Permissions for Workspaces](https://docs.portkey.ai/docs/product/administration/configure-prompt-access-permissions.md) - [Configure Provider Access Permissions](https://docs.portkey.ai/docs/product/administration/configure-virtual-key-access-permissions.md) - [Configure Workspace Provisioning for Your AI Integrations](https://docs.portkey.ai/docs/product/administration/configure-workspace-provisioning-for-your-ai-integrations.md) 
- [Configure Request Logging](https://docs.portkey.ai/docs/product/administration/configuring-request-logging.md): Control what data is stored for all LLM requests in your organization - [Enforce Budget Limits and Rate Limits for Your API Keys](https://docs.portkey.ai/docs/product/administration/enforce-budget-and-rate-limit.md): Configure budget and rate limits on API keys to effectively manage AI spending and usage across your organization - [Enforce Budget Limits on Your AI Provider](https://docs.portkey.ai/docs/product/administration/enforce-budget-limits-on-your-ai-provider.md) - [Enforcing Default Configs on API Keys](https://docs.portkey.ai/docs/product/administration/enforce-default-config.md): Learn how to attach default configs to API keys for enforcing governance controls across your organization - [Enforcing Org Level Guardrails](https://docs.portkey.ai/docs/product/administration/enforce-orgnization-level-guardrails.md) - [Enforce Rate Limits on Your AI Provider](https://docs.portkey.ai/docs/product/administration/enforce-rate-limits-on-your-ai-provider.md) - [Enforce Workspace Budget and Rate Limits](https://docs.portkey.ai/docs/product/administration/enforce-workspace-budget-limts-and-rate-limits.md): Configure budget and rate limits at the workspace level to effectively manage AI spending and resource allocation - [Enforcing Workspace Level Guardrails](https://docs.portkey.ai/docs/product/administration/enforce-workspace-level-guardials.md) - [Enforcing Request Metadata](https://docs.portkey.ai/docs/product/administration/enforcing-request-metadata.md) - [AI Gateway](https://docs.portkey.ai/docs/product/ai-gateway.md): The world's fastest AI Gateway with advanced routing & integrated Guardrails. - [Automatic Retries](https://docs.portkey.ai/docs/product/ai-gateway/automatic-retries.md): Automatically retry failed LLM requests with exponential backoff. 
- [Unified Batch Inference](https://docs.portkey.ai/docs/product/ai-gateway/batches.md): Run large‑scale inference jobs through one consistent endpoint - [Cache (Simple & Semantic)](https://docs.portkey.ai/docs/product/ai-gateway/cache-simple-and-semantic.md): Speed up requests and reduce costs by caching LLM responses. - [Canary Testing](https://docs.portkey.ai/docs/product/ai-gateway/canary-testing.md): Use Portkey's AI gateway to canary-test new models or prompts in different environments. - [Chat Completions](https://docs.portkey.ai/docs/product/ai-gateway/chat-completions.md): Use OpenAI-compatible Chat Completions with any LLM provider through Portkey's AI Gateway. - [Circuit Breaker](https://docs.portkey.ai/docs/product/ai-gateway/circuit-breaker.md): Automatically stop routing to unhealthy targets until they recover. - [Conditional Routing](https://docs.portkey.ai/docs/product/ai-gateway/conditional-routing.md) - [Configs](https://docs.portkey.ai/docs/product/ai-gateway/configs.md): This feature is available on all Portkey plans. - [Custom hosts](https://docs.portkey.ai/docs/product/ai-gateway/custom-hosts.md): Route requests to privately hosted or local models using custom host URLs, and understand Portkey's host validation and security rules. - [Fallbacks](https://docs.portkey.ai/docs/product/ai-gateway/fallbacks.md): Automatically switch to backup LLMs when the primary fails. 
- [Files](https://docs.portkey.ai/docs/product/ai-gateway/files.md): Upload files to Portkey and reuse the content in your requests - [Fine-tuning](https://docs.portkey.ai/docs/product/ai-gateway/fine-tuning.md): Run your fine-tuning jobs with Portkey Gateway - [gRPC (Beta)](https://docs.portkey.ai/docs/product/ai-gateway/grpc.md): Use gRPC as an alternative transport protocol for lower latency and efficient binary serialization - [Load Balancing](https://docs.portkey.ai/docs/product/ai-gateway/load-balancing.md): Distribute traffic across multiple LLMs for high availability and optimal performance. - [Messages](https://docs.portkey.ai/docs/product/ai-gateway/messages-api.md): Use Anthropic's Messages format with any provider — 3000+ models, one endpoint. - [Multimodal Capabilities](https://docs.portkey.ai/docs/product/ai-gateway/multimodal-capabilities.md) - [Function Calling](https://docs.portkey.ai/docs/product/ai-gateway/multimodal-capabilities/function-calling.md): Portkey's AI Gateway supports function calling across major foundational models. Define functions in your API call, and the model can output the function name with parameters. - [Image Generation](https://docs.portkey.ai/docs/product/ai-gateway/multimodal-capabilities/image-generation.md): Portkey's AI gateway supports image generation capabilities that many foundational model providers offer. - [Speech-to-Text](https://docs.portkey.ai/docs/product/ai-gateway/multimodal-capabilities/speech-to-text.md): Use Portkey's AI gateway to transcribe and translate audio using speech-to-text models across all supported providers. - [Text-to-Speech](https://docs.portkey.ai/docs/product/ai-gateway/multimodal-capabilities/text-to-speech.md): Portkey's AI gateway currently supports text-to-speech models on `OpenAI` and `Azure OpenAI`. 
- [Thinking Mode](https://docs.portkey.ai/docs/product/ai-gateway/multimodal-capabilities/thinking-mode.md) - [Vision](https://docs.portkey.ai/docs/product/ai-gateway/multimodal-capabilities/vision.md): Portkey's AI gateway supports vision models like GPT-4V by OpenAI, Gemini by Google and more. - [Nitro Mode (Beta)](https://docs.portkey.ai/docs/product/ai-gateway/nitro-mode.md): Forward the request body directly to the provider without any transformations. Best when your request and response structure already matches the provider's API. - [Realtime API](https://docs.portkey.ai/docs/product/ai-gateway/realtime-api.md): Use OpenAI's Realtime API with logs, cost tracking, and more! - [Remote MCP](https://docs.portkey.ai/docs/product/ai-gateway/remote-mcp.md): Portkey's AI gateway supports the remote MCP server capabilities that many foundational model providers offer. - [Request Timeouts](https://docs.portkey.ai/docs/product/ai-gateway/request-timeouts.md): Manage unpredictable LLM latencies effectively with Portkey's **Request Timeouts**. - [Open Responses](https://docs.portkey.ai/docs/product/ai-gateway/responses-api.md): Use the Responses API with any LLM provider through Portkey's AI Gateway — fully compliant with the Open Responses specification. - [Strict OpenAI Compliance](https://docs.portkey.ai/docs/product/ai-gateway/strict-open-ai-compliance.md) - [Universal API](https://docs.portkey.ai/docs/product/ai-gateway/universal-api.md): One API for 200+ LLMs across every major provider. Use OpenAI's Chat Completions, Responses API, or Anthropic's Messages format -- Portkey translates between them all. 
- [Virtual Keys](https://docs.portkey.ai/docs/product/ai-gateway/virtual-keys.md): Virtual Keys have been migrated to Model Catalog - learn about the new system - [Connect Bedrock with Amazon Assumed Role](https://docs.portkey.ai/docs/product/ai-gateway/virtual-keys/bedrock-amazon-assumed-role.md): How to create a new integration for Bedrock using Amazon Assumed Role Authentication - [Budget Limits](https://docs.portkey.ai/docs/product/ai-gateway/virtual-keys/budget-limits.md): Budget Limits let you set cost limits on providers/integrations - [Rate Limits](https://docs.portkey.ai/docs/product/ai-gateway/virtual-keys/rate-limits.md): Set rate limits on your Integrations/Providers - [Coding Agents](https://docs.portkey.ai/docs/product/coding-agent.md): The governance layer for Claude Code, Codex, and other AI coding agents — centralized credentials, cost controls, observability, and provider failover for platform teams. - [Enterprise Offering](https://docs.portkey.ai/docs/product/enterprise-offering.md) - [Access Control Management](https://docs.portkey.ai/docs/product/enterprise-offering/access-control-management.md): With customizable user roles, API key management, and comprehensive audit logs, Portkey provides the flexibility and control needed to ensure secure collaboration & maintain a strong security posture - [Audit Logs](https://docs.portkey.ai/docs/product/enterprise-offering/audit-logs.md): Track and monitor all administrative activities across your Portkey organization with comprehensive audit logging. 
- [Budget Limits](https://docs.portkey.ai/docs/product/enterprise-offering/budget-limits.md) - [Usage & Rate Limit Policies](https://docs.portkey.ai/docs/product/enterprise-offering/budget-policies.md): Create fine-grained controls over API usage and rate limits at the workspace level - [Enterprise Components](https://docs.portkey.ai/docs/product/enterprise-offering/components.md) - [KMS Integration](https://docs.portkey.ai/docs/product/enterprise-offering/kms.md): Customers can bring their own encryption keys to Portkey AI to encrypt data at rest. - [Logs Export](https://docs.portkey.ai/docs/product/enterprise-offering/logs-export.md) - [Org Management](https://docs.portkey.ai/docs/product/enterprise-offering/org-management.md): A high-level introduction to Portkey's organization management structure and key concepts. - [API Key Rotation](https://docs.portkey.ai/docs/product/enterprise-offering/org-management/api-key-rotation.md): Periodically replace API key secrets without downtime using manual or automatic rotation with configurable transition periods. - [API Keys (AuthN and AuthZ)](https://docs.portkey.ai/docs/product/enterprise-offering/org-management/api-keys-authn-and-authz.md): Discover how Admin and Workspace API Keys are used to manage access and operations in Portkey. - [JWT Authentication](https://docs.portkey.ai/docs/product/enterprise-offering/org-management/jwt.md): Configure JWT-based authentication for your organization in Portkey - [Organizations](https://docs.portkey.ai/docs/product/enterprise-offering/org-management/organizations.md): Understand the role and features of Organizations, the highest level of abstraction in Portkey's structure. - [Azure Entra](https://docs.portkey.ai/docs/product/enterprise-offering/org-management/scim/azure-ad.md): Set up Azure Entra for SCIM provisioning with Portkey. 
- [SCIM Group Management](https://docs.portkey.ai/docs/product/enterprise-offering/org-management/scim/group-management.md): Map SCIM groups to Portkey workspaces and roles without naming restrictions. - [Okta](https://docs.portkey.ai/docs/product/enterprise-offering/org-management/scim/okta.md): Set up Okta for SCIM provisioning with Portkey. - [Overview](https://docs.portkey.ai/docs/product/enterprise-offering/org-management/scim/scim.md): SCIM integration with Portkey. - [SSO](https://docs.portkey.ai/docs/product/enterprise-offering/org-management/sso.md): SSO support for enterprises - [User Roles & Permissions](https://docs.portkey.ai/docs/product/enterprise-offering/org-management/user-roles-and-permissions.md): Learn about Portkey's comprehensive role-based access control system across Organizations and Workspaces. - [Workspaces](https://docs.portkey.ai/docs/product/enterprise-offering/org-management/workspaces.md): Explore Workspaces, the sub-organizational units that enable granular project and team management. - [Analytics Export](https://docs.portkey.ai/docs/product/enterprise-offering/otel/analytics.md): Export Portkey analytics data to OpenTelemetry-compatible collectors for centralized observability - [Complete Logs Export](https://docs.portkey.ai/docs/product/enterprise-offering/otel/complete-logs.md): Export complete LLM request/response logs to OpenTelemetry endpoints following GenAI semantic conventions - [OpenTelemetry (OTel) Export](https://docs.portkey.ai/docs/product/enterprise-offering/otel/otel.md): Export Portkey data to OpenTelemetry-compatible endpoints for external observability - [Secret References](https://docs.portkey.ai/docs/product/enterprise-offering/secret-references.md): Reference secrets stored in external secret managers like AWS Secrets Manager, Azure Key Vault, and HashiCorp Vault instead of storing them directly in Portkey. 
- [Security @ Portkey](https://docs.portkey.ai/docs/product/enterprise-offering/security-portkey.md): Portkey AI provides a secure, reliable AI gateway for the seamless integration and management of large language models (LLMs). - [Guardrails](https://docs.portkey.ai/docs/product/guardrails.md): Ship to production confidently with Portkey Guardrails on your requests & responses - [Supported Endpoints & Capabilities](https://docs.portkey.ai/docs/product/guardrails/capabilities.md): Which endpoints, providers, input types, and execution modes Portkey Guardrails support. - [Creating Raw Guardrails (in JSON)](https://docs.portkey.ai/docs/product/guardrails/creating-raw-guardrails-in-json.md): With the raw Guardrails mode, we let you define your Guardrail checks & actions however you want, directly in code. - [Guardrails for Embedding Requests](https://docs.portkey.ai/docs/product/guardrails/embedding-guardrails.md): Apply security and data validation measures to vector embedding requests to protect sensitive information and ensure data quality. - [List of Guardrail Checks](https://docs.portkey.ai/docs/product/guardrails/list-of-guardrail-checks.md) - [PII Redaction](https://docs.portkey.ai/docs/product/guardrails/pii-redaction.md): Replace any sensitive data in requests with standard identifiers - [MCP](https://docs.portkey.ai/docs/product/mcp.md) - [MCP Gateway](https://docs.portkey.ai/docs/product/mcp-gateway.md): Centralized authentication, access control, and observability for MCP servers. - [Team Provisioning](https://docs.portkey.ai/docs/product/mcp-gateway/access-control.md): Control which workspaces and users can access MCP servers and their capabilities. - [MCP Advanced Configuration](https://docs.portkey.ai/docs/product/mcp-gateway/advanced-configuration.md): Control how the gateway authenticates, forwards context, and connects to upstream MCP servers. 
- [Custom Auth](https://docs.portkey.ai/docs/product/mcp-gateway/authentication/custom-auth.md): Configure custom authentication for MCP servers. - [Bring Your Own Auth](https://docs.portkey.ai/docs/product/mcp-gateway/authentication/external-oauth.md): Use your own identity provider for gateway authentication. - [Forwarding Headers](https://docs.portkey.ai/docs/product/mcp-gateway/authentication/forwarding-headers.md): Pass headers from agent requests to MCP servers. - [Identity Forwarding](https://docs.portkey.ai/docs/product/mcp-gateway/authentication/identity-forwarding.md): Pass authenticated user identity to MCP servers. - [JWT Validation](https://docs.portkey.ai/docs/product/mcp-gateway/authentication/jwt.md): Validate JWTs from external identity providers. - [OAuth](https://docs.portkey.ai/docs/product/mcp-gateway/authentication/oauth.md): Portkey's built-in OAuth 2.1 authentication for MCP Gateway. - [OAuth Client Metadata](https://docs.portkey.ai/docs/product/mcp-gateway/authentication/oauth-client-metadata.md): Customize OAuth client metadata for MCP servers. - [Authorization](https://docs.portkey.ai/docs/product/mcp-gateway/authorization.md): Control what users can access at the MCP server and tool level. - [Circuit Breakers](https://docs.portkey.ai/docs/product/mcp-gateway/circuit-breakers.md): Automatic failure handling for upstream MCP servers. - [Add External MCP Servers](https://docs.portkey.ai/docs/product/mcp-gateway/external-mcp-servers.md): Connect third-party MCP servers like Linear, GitHub, Slack, and more through Portkey. - [Guardrails](https://docs.portkey.ai/docs/product/mcp-gateway/guardrails.md): Apply policies to MCP requests. - [Integrations](https://docs.portkey.ai/docs/product/mcp-gateway/integrations.md): Connect MCP servers to AI agents and applications. - [Add Internal MCP Servers](https://docs.portkey.ai/docs/product/mcp-gateway/internal-mcp-servers.md): Add your internal MCP servers to Portkey. 
Get enterprise-grade auth, access control, and logging without building it. - [MCP Registry](https://docs.portkey.ai/docs/product/mcp-gateway/mcp-registry.md): Add, manage, and govern MCP servers across your organization. - [Observability](https://docs.portkey.ai/docs/product/mcp-gateway/observability.md): Log, monitor, and debug MCP interactions. - [Quickstart](https://docs.portkey.ai/docs/product/mcp-gateway/quickstart.md): Add DeepWiki to Portkey MCP Gateway and connect it to Claude. - [Rate Limits](https://docs.portkey.ai/docs/product/mcp-gateway/rate-limits.md): Throttle requests per user, team, or server. - [Tool Provisioning](https://docs.portkey.ai/docs/product/mcp-gateway/tool-provisioning.md): Control which tools, resources, and prompts are available to your organization and workspaces. - [Using MCP Servers](https://docs.portkey.ai/docs/product/mcp-gateway/using-mcp-servers.md): Connect to MCP servers from AI agents and applications. - [Model Catalog](https://docs.portkey.ai/docs/product/model-catalog.md): A single pane to view and manage every AI provider and model in your organization. It provides centralized governance, discovery, and usage controls for all your AI resources. - [Budget Limits](https://docs.portkey.ai/docs/product/model-catalog/budget-limits.md) - [Connect Bedrock with Amazon Assumed Role](https://docs.portkey.ai/docs/product/model-catalog/connect-bedrock-with-amazon-assumed-role.md): How to create a Bedrock integration using Amazon Assumed Role Authentication on Portkey - [Adding Custom Models](https://docs.portkey.ai/docs/product/model-catalog/custom-models.md): Learn how to add custom and fine-tuned models to your Portkey Model Catalog for seamless integration. 
- [Integrations](https://docs.portkey.ai/docs/product/model-catalog/integrations.md): Securely store and manage AI provider credentials across your organization with centralized governance controls - [Overriding Model Details](https://docs.portkey.ai/docs/product/model-catalog/model-overrides.md): Learn how to set custom pricing for any base model in your Portkey Model Catalog to reflect your specific costs. - [AI Model Provisioning](https://docs.portkey.ai/docs/product/model-catalog/model-provisioning.md) - [Portkey Models](https://docs.portkey.ai/docs/product/model-catalog/portkey-models.md): Open-source pricing and configuration database for 2,300+ LLMs across 35+ providers - [Rate Limits](https://docs.portkey.ai/docs/product/model-catalog/rate-limits.md) - [Workspace Provisioning](https://docs.portkey.ai/docs/product/model-catalog/workspace-provisioning.md) - [Observability (OpenTelemetry)](https://docs.portkey.ai/docs/product/observability.md): Gain real-time insights, track key metrics, and streamline debugging with our comprehensive observability suite. - [Analytics](https://docs.portkey.ai/docs/product/observability/analytics.md) - [Auto-Instrumentation [BETA]](https://docs.portkey.ai/docs/product/observability/auto-instrumentation.md): Portkey's auto-instrumentation allows you to instrument tracing and logging for multiple LLM/Agent frameworks and view the logs, traces, and metrics in a single place. - [Budget Limits](https://docs.portkey.ai/docs/product/observability/budget-limits.md) - [Model Pricing and Cost Management](https://docs.portkey.ai/docs/product/observability/cost-management.md): Learn how Portkey handles pricing data, cost calculations, and pricing updates across different deployment modes. - [Feedback](https://docs.portkey.ai/docs/product/observability/feedback.md): Portkey's Feedback APIs provide a simple way to get weighted feedback from customers on any request you served, at any stage in your app. 
- [Filters](https://docs.portkey.ai/docs/product/observability/filters.md) - [Logs](https://docs.portkey.ai/docs/product/observability/logs.md): The Logs section presents a chronological list of all the requests processed through Portkey. - [Logs Export](https://docs.portkey.ai/docs/product/observability/logs-export.md): Easily access your Portkey logs data for further analysis and reporting - [Metadata](https://docs.portkey.ai/docs/product/observability/metadata.md): Add custom context to your AI requests for better observability and analytics - [OpenTelemetry for LLM Observability](https://docs.portkey.ai/docs/product/observability/opentelemetry.md): Leverage OpenTelemetry with Portkey for comprehensive LLM application observability, combining gateway insights with full-stack telemetry. - [Supported OTel Libraries](https://docs.portkey.ai/docs/product/observability/opentelemetry/list-of-supported-otel-instrumenters.md): Portkey works with any OpenTelemetry-compatible instrumentation. Here are some popular options. - [Tracing](https://docs.portkey.ai/docs/product/observability/traces.md): The **Tracing** capabilities in Portkey empower you to monitor the lifecycle of your LLM requests in a unified, chronological view. - [Open Source](https://docs.portkey.ai/docs/product/open-source.md) - [Feature Comparison](https://docs.portkey.ai/docs/product/product-feature-comparison.md): Comparing Portkey's Open-source version and Dev, Pro, Enterprise plans. 
- [Prompt Engineering Studio](https://docs.portkey.ai/docs/product/prompt-engineering-studio.md) - [Prompt API](https://docs.portkey.ai/docs/product/prompt-engineering-studio/prompt-api.md): Learn how to integrate Portkey's prompt templates directly into your applications using the Prompt API - [Guides](https://docs.portkey.ai/docs/product/prompt-engineering-studio/prompt-guides.md): Learn how to get the most out of Portkey Prompts with these practical guides - [Integrations](https://docs.portkey.ai/docs/product/prompt-engineering-studio/prompt-integration.md) - [Prompt Library](https://docs.portkey.ai/docs/product/prompt-engineering-studio/prompt-library.md) - [Prompt Observability](https://docs.portkey.ai/docs/product/prompt-engineering-studio/prompt-observability.md) - [Prompt Partials](https://docs.portkey.ai/docs/product/prompt-engineering-studio/prompt-partial.md): With Prompt Partials, you can save your commonly used templates (which could be your instruction set, data structure explanation, examples etc.) separately from your prompts and flexibly incorporate them wherever required. - [Prompt Playground](https://docs.portkey.ai/docs/product/prompt-engineering-studio/prompt-playground.md) - [Prompt Versioning & Labels](https://docs.portkey.ai/docs/product/prompt-engineering-studio/prompt-versioning.md) - [Tool Library](https://docs.portkey.ai/docs/product/prompt-engineering-studio/tool-library.md) - [Model Pricing for Air-Gapped Deployments](https://docs.portkey.ai/docs/self-hosting/airgapped/model-pricing.md): Configure model pricing and capabilities data for fully air-gapped Portkey deployments - [Cache Behavior](https://docs.portkey.ai/docs/self-hosting/cache-behavior.md): How the Gateway cache works: population, TTL, sync, resync, and cache invalidation and refresh. 
- [Enterprise Architecture](https://docs.portkey.ai/docs/self-hosting/hybrid-deployments/architecture.md): Comprehensive guide to Portkey's hybrid deployment architecture for enterprises - [EKS](https://docs.portkey.ai/docs/self-hosting/hybrid-deployments/aws/eks.md): This enterprise-focused document provides comprehensive instructions for deploying the Portkey software in a hybrid mode on Amazon EKS clusters, designed to meet the needs of large-scale, mission-critical applications. It includes specific recommendations for component sizing, high availability, and… - [AWS Marketplace](https://docs.portkey.ai/docs/self-hosting/hybrid-deployments/aws/marketplace.md): This enterprise-focused document provides comprehensive instructions for deploying the Portkey software using AWS Marketplace. - [ACA](https://docs.portkey.ai/docs/self-hosting/hybrid-deployments/azure/aca.md): This enterprise-focused document provides comprehensive instructions for deploying the Portkey software on Azure Container Apps (ACA), tailored to meet the needs of large-scale, mission-critical applications. It includes specific recommendations for component sizing, high availability, and integrati… - [AKS](https://docs.portkey.ai/docs/self-hosting/hybrid-deployments/azure/aks.md): This enterprise-focused document provides comprehensive instructions for deploying the Portkey software on Azure Kubernetes Service (AKS), tailored to meet the needs of large-scale, mission-critical applications. It includes specific recommendations for component sizing, high availability, and integ… - [GCP](https://docs.portkey.ai/docs/self-hosting/hybrid-deployments/gcp.md): This enterprise-focused document provides comprehensive instructions for deploying the Portkey software in a hybrid mode on Google Kubernetes Engine clusters, designed to meet the needs of large-scale, mission-critical applications. 
It includes specific recommendations for component sizing, high ava… - [Prometheus Metrics](https://docs.portkey.ai/docs/self-hosting/prometheus-metrics.md): Comprehensive monitoring and observability for Portkey Enterprise Gateway through Prometheus metrics - [Common Errors & Resolutions](https://docs.portkey.ai/docs/support/common-errors-and-resolutions.md): Since Portkey functions as a gateway, you may encounter Portkey-related as well as non-Portkey-related errors while using our services. - [Contact Us](https://docs.portkey.ai/docs/support/contact-us.md) - [Developer Forum](https://docs.portkey.ai/docs/support/developer-forum.md): Are you navigating the challenging journey of transitioning LLMs from prototype stages to full-scale production? You're not alone. As this frontier of technology continues to expand, the roadmap isn't always clear. Best practices, guidelines, and efficient methodologies are still on the horizon. - [December '23 Migration](https://docs.portkey.ai/docs/support/portkeys-december-migration.md) - [Upgrade to Model Catalog](https://docs.portkey.ai/docs/support/upgrade-to-model-catalog.md): Learn how to upgrade to Model Catalog to replace Virtual Keys ## OpenAPI Specs - [openapi](https://raw.githubusercontent.com/Portkey-AI/openapi/refs/heads/master/openapi.yaml) - [portkey-models](https://docs.portkey.ai/docs/openapi/portkey-models.json) - [package](https://docs.portkey.ai/docs/package.json) - [package-lock](https://docs.portkey.ai/docs/package-lock.json) - [sample_metric_file](https://docs.portkey.ai/docs/images/enterprise/private-cloud-deployments/architecture/sample_metric_file.json) - [sample_log_file](https://docs.portkey.ai/docs/images/enterprise/private-cloud-deployments/architecture/sample_log_file.json) Built with [Mintlify](https://mintlify.com).