Integrate your privately hosted LLMs with Portkey for unified management, observability, and reliability.
Portkey works with any privately hosted model that exposes an API compatible with a supported provider schema (OpenAI's `/chat/completions`, Anthropic's `/messages`, etc.).

## Adding a private LLM as a Virtual Key
In the Portkey UI, create a new Virtual Key, select the provider whose API schema your deployment follows (e.g., OpenAI), and enter your deployment's base URL in the **Custom Host** field.

The `custom_host` must include the API version path (e.g., `/v1/`). Portkey will automatically append the endpoint path (`/chat/completions`, `/completions`, or `/embeddings`).
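For example, here is a minimal sketch using the `portkey-ai` JS SDK, assuming an OpenAI-compatible deployment at a hypothetical internal URL and a hypothetical model name:

```typescript
import Portkey from "portkey-ai";

const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY",
  provider: "openai", // the schema your private LLM is compatible with
  customHost: "https://llm.internal.example.com/v1", // includes the version path
});

async function main() {
  // Portkey appends /chat/completions to the custom host
  const response = await portkey.chat.completions.create({
    model: "my-private-model", // hypothetical model name
    messages: [{ role: "user", content: "Hello from a private deployment!" }],
  });
  console.log(response.choices[0].message.content);
}

main().catch(console.error);
```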
If your private LLM requires custom headers (e.g., for authentication), use the `forward_headers` parameter to pass them directly to your private LLM:
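A sketch with the JS SDK, assuming a hypothetical `X-My-Custom-Header` that your deployment expects:

```typescript
import Portkey from "portkey-ai";

const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY",
  provider: "openai",
  customHost: "https://llm.internal.example.com/v1", // hypothetical URL
  xMyCustomHeader: "some-value",       // sent to your LLM as X-My-Custom-Header
  forwardHeaders: ["xMyCustomHeader"], // forwarded as-is, not logged by Portkey
});
```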
If you're using the JS SDK, convert header names to camelCase: `X-My-Custom-Header` becomes `xMyCustomHeader`.

You can also set `forward_headers` in your Gateway Config for consistent header forwarding:
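A sketch of an inline config passed at client creation; note that config keys are snake_case and header names keep their original form (the same config could also be saved in the Portkey UI and referenced by its ID):

```typescript
import Portkey from "portkey-ai";

const portkey = new Portkey({
  apiKey: "PORTKEY_API_KEY",
  config: {
    strategy: { mode: "single" },
    targets: [
      {
        provider: "openai",
        custom_host: "https://llm.internal.example.com/v1", // hypothetical URL
        forward_headers: ["X-My-Custom-Header"],
      },
    ],
  },
});
```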
Requests to your private LLM appear in Portkey's observability suite like any other provider:

*Portkey Analytics Dashboard for Private LLMs*
## Troubleshooting

| Issue | Possible Causes | Solutions |
|---|---|---|
| Connection errors | Incorrect URL, network issues, firewall rules | Verify URL format, check network connectivity, confirm firewall allows traffic |
| Authentication failures | Invalid credentials, incorrect header format | Check credentials, ensure headers are correctly formatted and forwarded |
| Timeout errors | LLM server overloaded, request too complex | Adjust timeout settings, implement load balancing, simplify requests |
| Inconsistent responses | Different model versions, configuration differences | Standardize model versions, document expected behavior differences |
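Several of the remedies above (timeouts, retries, load balancing) can be expressed in a Gateway Config. A sketch, assuming two hypothetical replicas of the same deployment; the timeout value is in milliseconds:

```typescript
// Sketch only: URLs, weights, and values are placeholders.
const resilientConfig = {
  strategy: { mode: "loadbalance" },
  request_timeout: 30000, // fail fast instead of hanging on an overloaded server
  targets: [
    {
      provider: "openai",
      custom_host: "https://llm-a.internal.example.com/v1",
      weight: 0.5,
      retry: { attempts: 2 },
    },
    {
      provider: "openai",
      custom_host: "https://llm-b.internal.example.com/v1",
      weight: 0.5,
      retry: { attempts: 2 },
    },
  ],
};
```

Pass the object as `config` when constructing the client, as in the `forward_headers` example above.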
## FAQs

- Can I use any private LLM with Portkey?
- How do I handle multiple deployment endpoints?
- Are there any request volume limitations?
- Can I use different models with the same private deployment?
- Can I mix private and commercial LLMs in the same application? (See the config sketch after this list.)
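On the last question: one common pattern is a fallback config that tries the private deployment first and spills over to a commercial provider stored as a Virtual Key. A sketch, with placeholder URL and Virtual Key ID:

```typescript
const mixedConfig = {
  strategy: { mode: "fallback" },
  targets: [
    // 1) Hypothetical private deployment, tried first
    { provider: "openai", custom_host: "https://llm.internal.example.com/v1" },
    // 2) Commercial provider via a Virtual Key (placeholder ID)
    { virtual_key: "openai-prod-vk" },
  ],
};
```

Fallback keeps the private deployment as the primary target and only routes to the commercial provider when it fails.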