Cookbook
Prompt Engineering
Prompts
Ultimate AI SDR
Build a chatbot using Portkey's Prompt Templates
Building an LLM-as-a-Judge System for AI (Customer Support) Agent
Whitepapers
Optimizing LLM Costs
Getting Started
Overview
A/B Test Prompts and Models
Tackling Rate Limiting
Function Calling
Image Generation
Getting started with AI Gateway
Llama 3 on Groq
Return Repeat Requests from Cache
Trigger Automatic Retries on LLM Failures
101 on Portkey's Gateway Configs
Integrations
Overview
Llama 3 on Portkey + Together AI
Introduction to GPT-4o
Anyscale
Mistral
Vercel AI
Deepinfra
Groq
Langchain
Mixtral 8x22b
Segmind
Use Cases
Overview
Few-Shot Prompting
Enforcing JSON Schema with Anyscale & Together
Detecting Emotions with GPT-4o
Build an article suggestion app with Supabase pgvector and Portkey
Setting up resilient Load balancers with failure-mitigating Fallbacks
Run Portkey on Prompts from Langchain Hub
Smart Fallback with Model-Optimized Prompts
How to use OpenAI SDK with Portkey Prompt Templates
Set up OpenAI -> Azure OpenAI Fallback
Fallback from SDXL to DALL-E 3
Comparing Top 10 LMSYS Models with Portkey
Tracking LLM Costs Per User with Portkey