Portkey provides a robust and secure gateway to integrate various Large Language Models (LLMs) into applications, including Cohere’s generation, embedding, and reranking endpoints. With Portkey, you can take advantage of features like fast AI gateway access, observability, prompt management, and more, while securely managing your API keys through the Model Catalog.

Documentation Index
Fetch the complete documentation index at: https://docs.portkey.ai/docs/llms.txt
Use this file to discover all available pages before exploring further.
Quick Start
Get Cohere working in 3 steps:

Tip: You can also set provider="@cohere" in Portkey() and use just model="command-r-plus" in the request.

Add Provider in Model Catalog
- Go to Model Catalog → Add Provider
- Select Cohere
- Choose existing credentials or create new by entering your Cohere API key
- Name your provider (e.g., cohere-prod)
Complete Setup Guide →
See all setup options, code examples, and detailed instructions
Other Cohere Endpoints
Embeddings
Embedding endpoints are natively supported within Portkey.

Re-ranking
Use Cohere reranking with the portkey.post method and the body expected by Cohere’s reranking API:
Managing Cohere Prompts
Manage all of your Cohere prompt templates in the Prompt Library. All current Cohere models are supported, and you can easily test different prompts. Use the portkey.prompts.completions.create interface to call a prompt from your application.
Next Steps
Add Metadata
Add metadata to your Cohere requests
Gateway Configs
Add gateway configs to your Cohere requests
Tracing
Trace your Cohere requests
Fallbacks
Setup fallback from OpenAI to Cohere
SDK Reference
Complete Portkey SDK documentation

