Mistral
Portkey helps bring Mistral’s APIs to production with its observability suite & AI Gateway.
Use the Mistral API through Portkey for:
- Enhanced Logging: Track API usage with detailed insights and custom segmentation.
- Production Reliability: Automated fallbacks, load balancing, retries, timeouts, and caching.
- Continuous Improvement: Collect and apply user feedback.
1.1 Setup & Logging
- Obtain your Portkey API Key.
- Set your Portkey API key as an environment variable:
$ export PORTKEY_API_KEY=PORTKEY_API_KEY
- Set your Mistral API key:
$ export MISTRAL_API_KEY=MISTRAL_API_KEY
Install the Portkey SDK with pip install portkey-ai or npm i portkey-ai.
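With the keys exported, you can route a Mistral request through Portkey's AI Gateway. Here's a minimal stdlib-only sketch; it assumes Portkey's OpenAI-compatible gateway endpoint and its `x-portkey-*` header conventions, so double-check header names against the current Portkey docs (the SDK wraps all of this for you):

```python
import json
import os
import urllib.request

# Portkey's AI Gateway endpoint (OpenAI-compatible chat completions route).
PORTKEY_GATEWAY_URL = "https://api.portkey.ai/v1/chat/completions"

def build_request(prompt: str) -> tuple[dict, dict]:
    """Build the headers and payload for a Mistral call routed through Portkey."""
    headers = {
        "Content-Type": "application/json",
        # Authenticates you with Portkey.
        "x-portkey-api-key": os.environ.get("PORTKEY_API_KEY", ""),
        # Tells the gateway which upstream provider to route to.
        "x-portkey-provider": "mistral-ai",
        # Your Mistral key is forwarded to the provider itself.
        "Authorization": f"Bearer {os.environ.get('MISTRAL_API_KEY', '')}",
    }
    payload = {
        "model": "mistral-tiny",
        "messages": [{"role": "user", "content": prompt}],
    }
    return headers, payload

# Only fire a real request when both keys are actually configured.
if os.environ.get("PORTKEY_API_KEY") and os.environ.get("MISTRAL_API_KEY"):
    headers, payload = build_request("Say hello in one sentence.")
    req = urllib.request.Request(
        PORTKEY_GATEWAY_URL, data=json.dumps(payload).encode(), headers=headers
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Every request sent this way shows up in your Portkey logs automatically.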
1.2 Enhanced Observability
- Trace requests with a single ID.
- Append custom tags for request segmenting & in-depth analysis.
Just add the relevant Portkey headers to your request:
Here’s how your logs will appear on your Portkey dashboard:
2. Caching, Fallbacks, Load Balancing
- Fallbacks: Ensure your application remains functional even if a primary service fails.
- Load Balancing: Efficiently distribute incoming requests among multiple models.
- Semantic Caching: Reduce costs and latency by intelligently caching results.
Toggle these features by saving Configs (from the Portkey dashboard > Configs tab).
If you want to enable semantic caching plus a fallback from Mistral-Medium to Mistral-Tiny, your Portkey config would look like this:
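A sketch of such a Config, following Portkey's JSON config schema (field names may differ slightly across dashboard versions, so verify against the Configs docs):

```json
{
  "cache": { "mode": "semantic" },
  "strategy": { "mode": "fallback" },
  "targets": [
    {
      "provider": "mistral-ai",
      "override_params": { "model": "mistral-medium" }
    },
    {
      "provider": "mistral-ai",
      "override_params": { "model": "mistral-tiny" }
    }
  ]
}
```

The gateway tries the targets in order: Mistral-Medium first, then Mistral-Tiny if the first call fails, with semantic caching applied on top.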
Now, just set the Config ID while instantiating Portkey:
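One way to attach the saved Config is via the `x-portkey-config` gateway header (SDK users can pass the same ID when constructing the client). The Config ID below is a hypothetical placeholder; copy yours from the dashboard:

```python
import os

def gateway_headers(config_id: str) -> dict:
    """Attach a saved Portkey Config so the gateway applies caching/fallbacks."""
    return {
        "x-portkey-api-key": os.environ.get("PORTKEY_API_KEY", ""),
        # ID of the Config saved in the Portkey dashboard (Configs tab).
        "x-portkey-config": config_id,
    }

# "pc-mistral-cache-fallback" is a made-up ID for illustration only.
headers = gateway_headers("pc-mistral-cache-fallback")
```

Every request carrying this header inherits the caching and fallback behavior defined in the Config, with no further code changes.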
For more on Configs and other gateway features like Load Balancing, check out the docs.
3. Collect Feedback
Gather weighted feedback from users and improve your app:
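As a sketch, feedback is a small payload tied to a request's trace ID, posted to Portkey's feedback endpoint; the URL, field names, and weight semantics below are assumptions to verify against the current API reference:

```python
import json
import os
import urllib.request

# Assumed Portkey feedback endpoint.
FEEDBACK_URL = "https://api.portkey.ai/v1/feedback"

def build_feedback(trace_id: str, value: int, weight: float = 1.0) -> dict:
    """Payload tying a user's rating (and its weight) to a logged trace."""
    return {"trace_id": trace_id, "value": value, "weight": weight}

def send_feedback(payload: dict) -> None:
    req = urllib.request.Request(
        FEEDBACK_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "x-portkey-api-key": os.environ["PORTKEY_API_KEY"],
        },
    )
    urllib.request.urlopen(req)

# Example: a thumbs-up from a power user, weighted twice a default rating.
payload = build_feedback("checkout-flow-42", value=1, weight=2.0)
if os.environ.get("PORTKEY_API_KEY"):
    send_feedback(payload)
```

Feedback shows up against the matching trace in your Portkey dashboard, so you can correlate ratings with the exact requests (and models) that produced them.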
Conclusion
Integrating Portkey with Mistral helps you build resilient LLM apps from the get-go. With features like semantic caching, observability, load balancing, feedback, and fallbacks, you can ensure optimal performance and continuous improvement.
Read the full Portkey docs here. | Reach out to the Portkey team.