Your AI Gateway. Your Infrastructure. 2 Hours.

Deploy Portkey’s data plane in your VPC and start routing LLM traffic securely. No data leaves your network.
247 Teams deployed
2.3 hrs Average time
15 min To first API call

🎯 What You’ll Have in 2 Hours

Working AI Gateway

Route requests to OpenAI, Anthropic, and 1600+ LLMs through your private endpoint

Complete Data Privacy

All prompts and responses stay in your VPC. Only metrics leave (no sensitive data).

Full Observability

See every request, cost, latency, and error in the Portkey dashboard
[Screenshot: successful deployment showing the gateway handling requests]

πŸ“ Your 4-Step Journey

Check Prerequisites

Verify your Kubernetes cluster and create storage buckets

Get Credentials

Receive and test your access tokens from Portkey

Deploy Gateway

Run one Helm command with our production-ready template

Validate Success

Make your first API call and see logs flowing

Step 1: Check Prerequisites (5 min)

βœ“ Quick Checks

# 1. Kubernetes ready?
kubectl version
# Need: Server Version v1.24+ (note: the --short flag was removed in newer kubectl)

# 2. Helm installed?
helm version --short
# Need: v3.0+

# 3. Can reach internet?
curl -I https://control.portkey.ai
# Need: HTTP/2 200
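If you prefer to script these checks, a small helper makes the version comparisons explicit. This is a sketch, not part of Portkey's tooling; it relies on GNU `sort -V`, and the version strings below are examples:

```shell
#!/usr/bin/env sh
# version_ge A B: succeeds if version A >= version B (handles the "v" prefix)
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: check a kubectl server version against the v1.24 minimum
if version_ge "v1.27.3" "v1.24"; then
  echo "Kubernetes version OK"
else
  echo "Kubernetes too old: need v1.24+" >&2
fi
# prints: Kubernetes version OK
```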

βœ“ Pick Your Storage

# Create a bucket (S3 shown; GCS, Wasabi, and MongoDB are also supported, see Step 3)
aws s3 mb s3://my-portkey-logs
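Gateway logs accumulate quickly. If you go with S3, consider adding a lifecycle rule so old logs expire automatically. A sketch; the 30-day window and rule ID are examples, not Portkey defaults:

```json
{
  "Rules": [
    {
      "ID": "expire-portkey-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 30 }
    }
  ]
}
```

Apply it with `aws s3api put-bucket-lifecycle-configuration --bucket my-portkey-logs --lifecycle-configuration file://lifecycle.json`.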

Need help? Start a quick chat

Most issues resolved in under 10 minutes

Step 2: Get Credentials (10 min)

You’ll receive a 1Password link with two items:

🐳 Docker Access

Username & password for our container registry

πŸ” Gateway Token

JWT to connect your gateway to the control plane

Quick Test

# 1. Test Docker access (--password-stdin keeps the password out of shell history)
echo '<password>' | docker login -u <username> --password-stdin
docker pull portkey/gateway-enterprise:latest

# 2. Create Kubernetes secret
kubectl create namespace portkey
kubectl -n portkey create secret docker-registry portkey-creds \
  --docker-username=<username> \
  --docker-password=<password>

Step 3: Deploy Gateway (30 min)

3.1 Save This Config

Create values.yaml with your details:
# values.yaml - Production-ready config
replicaCount: 2

images:
  gatewayImage:
    repository: "portkey/gateway-enterprise"
    tag: "1.10.24"

environment:
  data:
    # Your identifiers
    SERVICE_NAME: "my-company-gateway"
    PORTKEY_CLIENT_AUTH: "<token-from-1password>"

    # Storage (pick one)
    LOG_STORE: "s3"  # or "gcs", "mongo", "wasabi"
    LOG_STORE_REGION: "us-east-1"
    LOG_STORE_GENERATIONS_BUCKET: "my-portkey-logs"
    LOG_STORE_ACCESS_KEY: "<your-access-key>"
    LOG_STORE_SECRET_KEY: "<your-secret-key>"

    # Cache (auto-deployed)
    CACHE_STORE: "redis"
    REDIS_URL: "redis://portkey-redis:6379"

    # Analytics
    ANALYTICS_STORE: "control_plane"

# Expose your gateway
ingress:
  enabled: true
  className: "nginx"
  hosts:
    - host: gateway.internal.mycompany.com
      paths:
        - path: /
          pathType: Prefix

# Auto-scaling
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
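One thing to verify: the autoscaler computes utilization from CPU requests, so the gateway pods need a resources block. Whether this chart accepts one at the top level of values.yaml is an assumption; check `helm show values portkey/gateway` for the actual schema. The numbers below are illustrative, not Portkey's recommendation:

```yaml
# Assumed key; confirm against the chart's values schema
resources:
  requests:
    cpu: "500m"
    memory: "512Mi"
```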

3.2 Deploy with One Command

# Add Portkey charts
helm repo add portkey https://portkey-ai.github.io/helm
helm repo update

# Deploy!
helm upgrade --install portkey-gateway portkey/gateway \
  -n portkey -f values.yaml

# Watch it come up
kubectl -n portkey get pods -w

Expected output:
NAME                                READY   STATUS
portkey-gateway-7f9b8c5d4-abc123   1/1     Running
portkey-gateway-7f9b8c5d4-def456   1/1     Running
portkey-redis-master-0             1/1     Running

Step 4: Validate Success (15 min)

4.1 Check Health

# Port-forward for quick test
kubectl -n portkey port-forward svc/portkey-gateway 8787:8787

# In new terminal
curl http://localhost:8787/v1/health
# βœ… Should return: {"status":"healthy"}

4.2 Get Your API Keys

  1. Go to app.portkey.ai
  2. Click β€œCreate Virtual Key”
  3. Select your LLM provider (e.g., OpenAI)
  4. Add your OpenAI API key
  5. Copy the generated virtual key

4.3 Make Your First Call! πŸŽ‰

# Using your gateway URL
curl https://gateway.internal.mycompany.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-portkey-api-key: <your-virtual-key>" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello from my private gateway!"}]
  }'
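If you want to script this smoke test, you can pull the reply text out of the JSON response. A sketch using python3 on a canned response in the OpenAI chat-completions shape (pipe your real curl output through the same command):

```shell
# A canned response in the OpenAI chat-completions format (for illustration only)
response='{"choices":[{"message":{"role":"assistant","content":"Hello from my private gateway!"}}]}'

# Extract just the assistant's reply
echo "$response" | python3 -c 'import sys, json; print(json.load(sys.stdin)["choices"][0]["message"]["content"])'
# prints: Hello from my private gateway!
```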

4.4 See It in Action

βœ… Check Logs

Visit app.portkey.ai → Logs. Your request appears instantly.

βœ… View Analytics

Visit app.portkey.ai → Analytics. See costs and latencies.

🎊 You Did It! What’s Next?

πŸ“ž Book Your Success Review

Optional: Show us your deployment and get optimization tips (15 min)

πŸ†˜ Troubleshooting


πŸ’¬ Get Help Fast

Live Chat

portkey.wiki/chat (~10 min response time)

Slack Community

portkey.wiki/slack (#deployment-help channel)

Email Support

[email protected] (24-hour response SLA)