Supercharge Langchain apps with Portkey: Multi-LLM, observability, caching, reliability, and prompt management.
Use ChatOpenAI to work effortlessly with 1600+ LLMs (Anthropic, Gemini, Mistral, etc.) without needing different SDKs. Portkey enhances your Langchain apps with interoperability, reliability, speed, cost-efficiency, and deep observability.
The langchain-openai package includes langchain-core. Install langchain or other specific packages if you need more components.

ChatOpenAI with Portkey

Configure ChatOpenAI to route requests via Portkey using your Portkey API key and the createHeaders method.
Key ChatOpenAI parameters:

- api_key: The underlying provider's API key.
- base_url: Set to PORTKEY_GATEWAY_URL to route via Portkey.
- default_headers: Uses createHeaders with your PORTKEY_API_KEY. Can also include a virtual_key (for provider credentials) or a config ID (for advanced routing).

All requests from this llm instance now use Portkey, starting with observability.
Langchain requests now appear in Portkey Logs
Routing ChatOpenAI through Portkey unlocks powerful capabilities:
- Multi-LLM: Use ChatOpenAI for OpenAI, Anthropic, Gemini, Mistral, and more. Switch providers easily with Virtual Keys or Configs.
- Caching: Reduce latency and costs with Portkey's Simple, Semantic, or Hybrid caching, enabled via Configs.
- Reliability: Build robust apps with retries, timeouts, fallbacks, and load balancing, configured in Portkey.
- Observability: Get deep insights; LLM usage, costs, latency, and errors are automatically logged in Portkey.
- Prompt Management: Manage, version, and use prompts from Portkey's Prompt Library within Langchain.
- Virtual Keys: Securely manage LLM provider API keys using Portkey Virtual Keys in your Langchain setup.
With Portkey, ChatOpenAI becomes your universal client for numerous models.
Mechanism: Portkey Headers with Virtual Keys
Switch LLMs by changing the virtual_key in createHeaders and the model in ChatOpenAI. Portkey manages provider specifics.
The ChatOpenAI structure remains the same; only virtual_key and model change. Portkey maps each provider's response to the OpenAI format Langchain expects. The same pattern extends to Mistral, Cohere, Azure, and Bedrock via their Virtual Keys.
Caching

Enable Portkey's caching by attaching a Config ID in createHeaders.
A Portkey Config can specify a cache mode (simple, semantic, or hybrid) and a max_age (cache duration).
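A caching Config might look like the following sketch. The field names follow Portkey's Config schema as described above; the values are illustrative.

```python
# Sketch of a Portkey Config enabling semantic caching.
# Values are illustrative, not recommendations.
semantic_cache_config = {
    "cache": {
        "mode": "semantic",  # "simple", "semantic", or "hybrid"
        "max_age": 3600,     # how long cached responses remain valid
    }
}
```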
Create a Config (e.g., langchain-semantic-cache) specifying your caching strategy, and note its saved ID, e.g., cfg_semantic_cache_123.
Then pass the Config ID in createHeaders:
Reliability

A Config can also define retries, timeouts, fallbacks, and load balancing (e.g., try gpt-4o, then fall back to claude-3-sonnet).
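A fallback Config might be sketched as follows; the structure follows Portkey's Config schema, and the virtual key names and models are placeholders.

```python
# Sketch of a Portkey fallback Config: try the first target,
# fall back to the second on failure. All names are placeholders.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {
            "virtual_key": "openai-virtual-key",
            "override_params": {"model": "gpt-4o"},
        },
        {
            "virtual_key": "anthropic-virtual-key",
            "override_params": {"model": "claude-3-sonnet-20240229"},
        },
    ],
}
```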
Save the Config and note its ID, e.g., cfg_reliable_123.
Reference it in createHeaders:
Using ChatOpenAI via Portkey provides instant, comprehensive observability:
Langchain requests are automatically logged in Portkey
Prompt Management: Render templates from Portkey's Prompt Library by Prompt ID and pass the result to ChatOpenAI, setting the provider's virtual_key in createHeaders as usual.

Embeddings: Use OpenAIEmbeddings via Portkey for OpenAI embedding models. For other providers (Cohere, Gemini), use the Portkey SDK directly (docs).

Chains and agents work unchanged with Portkey-configured ChatOpenAI instances; Portkey features (logging, caching) apply automatically.
All LLM calls made through the Portkey-configured chat_llm instance are processed by Portkey.