**Portkey Provider Slug:** `lambda`
## Overview
Portkey offers native integrations with Lambda for Node.js, Python, and REST APIs. By combining Portkey with Lambda, you can create production-grade AI applications with enhanced reliability, observability, and advanced features.

## Getting Started
### 1. Obtain your Lambda API Key
Visit the Lambda dashboard to generate your API key.
### 2. Create a Virtual Key in Portkey
Portkey's virtual key vault simplifies your interaction with Lambda. Virtual keys act as secure aliases for your actual API keys, offering enhanced security and easier management through budget limits that control your API usage. Use the Portkey app to create a virtual key associated with your Lambda API key.
### 3. Initialize the Portkey Client
Now that you have your virtual key, set up the Portkey client:
#### Portkey Hosted App
Use the Portkey API key and the Lambda virtual key to initialize the client in your preferred programming language.
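Here's a minimal sketch using the Portkey Python SDK; the two key values are placeholders for your own credentials:

```python
# Minimal client setup with the Portkey Python SDK (pip install portkey-ai)
from portkey_ai import Portkey

portkey = Portkey(
    api_key="PORTKEY_API_KEY",        # your Portkey API key (placeholder)
    virtual_key="LAMBDA_VIRTUAL_KEY"  # the virtual key created in step 2 (placeholder)
)
```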
#### Open Source Use
Alternatively, use Portkey's open-source AI Gateway to enhance your app's reliability with minimal code:
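A sketch assuming the gateway is running locally on its default port (for example, started with `npx @portkey-ai/gateway`); since the gateway is OpenAI-compatible, any OpenAI client can point at it:

```python
# Route requests through a locally running open-source gateway.
# The x-portkey-provider header tells the gateway which upstream
# provider to call; port 8787 is the gateway default and may differ
# in your deployment.
from openai import OpenAI

client = OpenAI(
    api_key="LAMBDA_API_KEY",             # your Lambda API key (placeholder)
    base_url="http://localhost:8787/v1",  # local gateway endpoint
    default_headers={"x-portkey-provider": "lambda"}
)
```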
## Supported Models
- deepseek-coder-v2-lite-instruct
- dracarys2-72b-instruct
- hermes3-405b
- hermes3-405b-fp8-128k
- hermes3-70b
- hermes3-8b
- lfm-40b
- llama3.1-405b-instruct-fp8
- llama3.1-70b-instruct-fp8
- llama3.1-8b-instruct
- llama3.2-3b-instruct
- llama3.1-nemotron-70b-instruct
## Supported Endpoints and Parameters
| Endpoint | Supported Parameters |
|---|---|
| chatComplete | messages, max_tokens, temperature, top_p, stream, presence_penalty, frequency_penalty |
| complete | model, prompt, max_tokens, temperature, top_p, n, stream, logprobs, echo, stop, presence_penalty, frequency_penalty, best_of, logit_bias, user, seed, suffix |
## Lambda Supported Features
### Chat Completions
Generate chat completions using Lambda models through Portkey:
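A short sketch with the client initialized above; the model name comes from the supported-models list, and the prompt is illustrative:

```python
# Chat completion against a Lambda-hosted model
completion = portkey.chat.completions.create(
    model="llama3.1-8b-instruct",
    messages=[
        {"role": "user", "content": "Explain quantization in one paragraph."}
    ],
    max_tokens=256
)
print(completion.choices[0].message.content)
```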
### Streaming
Stream responses for real-time output in your applications:
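Streaming follows the OpenAI-compatible chunk format; a minimal sketch:

```python
# Stream tokens as they are generated instead of waiting for the full reply
stream = portkey.chat.completions.create(
    model="llama3.1-8b-instruct",
    messages=[{"role": "user", "content": "Write a haiku about GPUs."}],
    stream=True
)
for chunk in stream:
    # each chunk carries an incremental delta, mirroring the OpenAI format
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```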
### Function Calling
Leverage Lambda's function calling capabilities through Portkey:
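A sketch using the OpenAI-compatible `tools` parameter, assuming the chosen model supports tool use; `get_weather` is a hypothetical tool for illustration:

```python
# Describe a tool and let the model decide when to call it
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"]
        }
    }
}]

response = portkey.chat.completions.create(
    model="hermes3-70b",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools
)
print(response.choices[0].message.tool_calls)
```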
## Portkey's Advanced Features

### Track End-User IDs
Portkey allows you to track user IDs passed with the user parameter in Lambda requests, enabling you to monitor user-level costs, requests, and more:
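For example, passing a stable identifier in the OpenAI-compatible `user` field (the ID below is hypothetical):

```python
# Portkey surfaces the `user` value in analytics for per-user
# cost and usage tracking
completion = portkey.chat.completions.create(
    model="llama3.1-8b-instruct",
    messages=[{"role": "user", "content": "Hello!"}],
    user="user_12345"  # hypothetical end-user ID
)
```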
In addition to the `user` parameter, Portkey allows you to send arbitrary custom metadata with your requests. This powerful feature enables you to associate additional context or information with each request, which can be useful for analysis, debugging, or other custom use cases.
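A sketch assuming the SDK's `metadata` constructor argument; the keys and values below are illustrative, and `_user` is Portkey's reserved key that also feeds user analytics:

```python
# Attach arbitrary metadata to every request made from this client
portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    virtual_key="LAMBDA_VIRTUAL_KEY",
    metadata={
        "_user": "user_12345",      # reserved key for user analytics
        "environment": "staging",   # illustrative custom fields
        "team": "search"
    }
)
```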
### Using The Gateway Config
Here's a simplified version of how to use Portkey's Gateway Configuration:
#### 1. Create a Gateway Configuration
You can create a Gateway configuration using the Portkey Config Dashboard or by writing a JSON configuration in your code. In this example, requests are routed based on the user’s subscription plan (paid or free).
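A conditional-routing config sketched as a Python dict; the structure follows Portkey's config schema, while the target names, virtual keys, and the `user_plan` metadata field are placeholders for this example:

```python
# Route paid-plan users and free-plan users to different targets
config = {
    "strategy": {
        "mode": "conditional",
        "conditions": [
            {"query": {"metadata.user_plan": {"$eq": "paid"}}, "then": "paid-target"},
            {"query": {"metadata.user_plan": {"$eq": "free"}}, "then": "free-target"},
        ],
        "default": "free-target",
    },
    "targets": [
        {"name": "paid-target", "virtual_key": "LAMBDA_VIRTUAL_KEY_PRO"},   # placeholder
        {"name": "free-target", "virtual_key": "LAMBDA_VIRTUAL_KEY_BASIC"}, # placeholder
    ],
}
```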
#### 2. Process Requests
When a user makes a request, it will pass through Portkey’s AI Gateway. Based on the configuration, the Gateway routes the request according to the user’s metadata.

#### 3. Set Up the Portkey Client
Pass the Gateway configuration to your Portkey client. You can either use the config object or the Config ID from Portkey’s hosted version.
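A sketch continuing the example above; the request metadata supplies the field the conditions test against, and the Config ID shown in the comment is a placeholder:

```python
# Pass the config dict (or a saved Config ID) when constructing the client
portkey = Portkey(
    api_key="PORTKEY_API_KEY",
    config=config,                  # or config="pc-***" for a hosted Config ID (placeholder)
    metadata={"user_plan": "paid"}  # hypothetical plan flag read by the conditions
)
```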
The Gateway Config also supports these routing and performance features:

- **Load Balancing**: Distribute requests across multiple targets based on defined weights.
- **Fallbacks**: Automatically switch to backup targets if the primary target fails (see the sketch after this list).
- **Conditional Routing**: Route requests to different targets based on specified conditions.
- **Caching**: Enable caching of responses to improve performance and reduce costs.
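As one example, a fallback strategy sketched in the same config schema; both virtual keys are placeholders:

```python
# Try the primary target first, then fall back if it fails
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "LAMBDA_VIRTUAL_KEY"},           # primary (placeholder)
        {"virtual_key": "BACKUP_PROVIDER_VIRTUAL_KEY"},  # used on failure (placeholder)
    ],
}
```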
## Guardrails
Portkey's AI gateway enables you to enforce input/output checks on requests by applying custom hooks before and after processing. Protect your users' and your company's data with PII guardrails and many more available in Portkey Guardrails.

**Learn More About Guardrails**: Explore Portkey's guardrail features to enhance the security and reliability of your AI applications.
## Next Steps
The complete list of features supported in the SDK is available in our comprehensive documentation.

**Portkey SDK Documentation**: Explore the full capabilities of the Portkey SDK and how to leverage them in your projects.
For the most up-to-date information on supported features and endpoints, please refer to our API Reference.