GCP
This enterprise-focused document provides comprehensive instructions for deploying the Portkey software on Google Cloud Platform (GCP), tailored to meet the needs of large-scale, mission-critical applications.
It includes specific recommendations for component sizing, high availability, disaster recovery, and integration with monitoring systems.
Components and Sizing Recommendations
Component: AI Gateway
- Deployment: Deploy as a Docker container in your Kubernetes cluster using Helm Charts.
- Instance Type: GCP n1-standard-2 instance or equivalent, with at least 4 GiB of memory and two vCPUs.
- High Availability: Deploy across multiple zones for high reliability.
Component: Logs Store (optional)
- Options: Hosted MongoDB, Google Cloud Storage (GCS), or Google Firestore.
- Sizing: Each log document is ~10 KB (uncompressed).
Component: Cache (Prompts, Configs & Virtual Keys)
- Options: Google Memorystore for Redis or self-hosted Redis.
- Deployment: Deploy in the same VPC as the Portkey Gateway.
Deployment Steps
Prerequisites
Ensure the following tools are installed:
- Docker
- kubectl
- Helm (v3 or above)
Step 1: Clone the Portkey Repo Containing Helm Chart
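A minimal sketch of this step, assuming the repository URL shared by the Portkey team (the URL below is illustrative, not confirmed):

```shell
# Clone the repository containing the Helm chart
# (illustrative URL -- use the link shared by the Portkey team)
git clone https://github.com/Portkey-AI/helm-chart.git
cd helm-chart
```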
Step 2: Update values.yaml for Helm
Modify the `values.yaml` file in the Helm chart directory to include the Docker registry credentials and necessary environment variables. You can find the sample file at `./helm-chart/helm/enterprise/values.yaml`.
Image Credentials Configuration
The Portkey team will share the credentials for your image.
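The credentials are typically supplied through an image-credentials section in `values.yaml`. The keys below are illustrative assumptions; match them against the sample file:

```yaml
# Illustrative structure -- confirm key names against the sample values.yaml
imageCredentials:
  registry: https://index.docker.io/v1/   # registry shared by Portkey
  username: <your-username>
  password: <your-password>
```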
Environment Variables Configuration
These can also be stored in, and fetched from, a vault.
Notes on the Log Store
`LOG_STORE` can be:
- `gcs` (Google Cloud Storage)
- `mongo` (Hosted MongoDB)
If the `LOG_STORE` is `mongo`, the following environment variables are needed:
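A sketch of the MongoDB settings in `values.yaml`; the variable names are illustrative assumptions, so confirm them against the sample file:

```yaml
# Illustrative variable names -- verify against the sample values.yaml
environment:
  data:
    LOG_STORE: mongo
    MONGO_DB_CONNECTION_URL: <your-mongo-connection-url>
    MONGO_DATABASE: <database-name>
    MONGO_COLLECTION_NAME: <collection-name>
```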
If the `LOG_STORE` is `gcs`, the following values are mandatory:
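A sketch of the GCS settings; the variable names are illustrative assumptions (for GCS, the access/secret key pair is typically an HMAC key generated in Cloud Storage settings):

```yaml
# Illustrative variable names -- verify against the sample values.yaml
environment:
  data:
    LOG_STORE: gcs
    LOG_STORE_REGION: <bucket-region>
    LOG_STORE_GENERATIONS_BUCKET: <bucket-name>
    LOG_STORE_ACCESS_KEY: <gcs-hmac-access-key>
    LOG_STORE_SECRET_KEY: <gcs-hmac-secret-key>
```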
You need to generate the Access Key and Secret Key from the respective providers.
Notes on Cache
If `CACHE_STORE` is set to `redis`, a Redis instance will also be deployed in the cluster. If you are using a custom Redis instance, leave it blank. The following values are mandatory:

`REDIS_URL` defaults to `redis://redis:6379`, and `REDIS_TLS_ENABLED` defaults to `false`.
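A sketch of the cache settings using the defaults above (variable placement is an assumption; confirm against the sample `values.yaml`):

```yaml
# In-cluster Redis with the documented defaults
environment:
  data:
    CACHE_STORE: redis
    REDIS_URL: redis://redis:6379
    REDIS_TLS_ENABLED: "false"
```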
Notes on Analytics Store
The analytics store is hosted in Portkey’s control plane. The following values are mandatory and will be shared by the Portkey team.
Step 3: Deploy Using Helm Charts
Navigate to the directory containing your Helm chart and run the following command to deploy the application:
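A sketch of the install command, assuming the chart path and a release name of `portkey-gateway` (both are illustrative; use the names from your setup):

```shell
# Install (or upgrade) the release into the portkeyai namespace,
# creating the namespace if it does not already exist
helm upgrade --install portkey-gateway ./helm/enterprise \
  -f ./helm/enterprise/values.yaml \
  -n portkeyai --create-namespace
```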
This command installs the Helm chart into the _portkeyai_ namespace.
Step 4: Verify the Deployment
Check the status of your deployment to ensure everything is running correctly:
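For example, assuming the _portkeyai_ namespace:

```shell
# List the pods and confirm they are Running and Ready
kubectl get pods -n portkeyai

# If a pod is not Ready, inspect its events and logs
kubectl describe pod <pod-name> -n portkeyai
kubectl logs <pod-name> -n portkeyai
```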
Step 5: Port Forwarding (Optional)
To access the service from your local machine without exposing it externally, use port forwarding:
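A sketch of the command, assuming the application listens on port 8787 (an assumption; substitute the port your deployment exposes):

```shell
# Forward local port 8787 to port 8787 on the pod
kubectl port-forward <pod-name> -n portkeyai 8787:8787
```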
Replace _<pod-name>_ with the name of your pod.
Uninstalling the Deployment
If you need to remove the deployment, run:
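Assuming the release name and namespace used above:

```shell
# Remove the release and its resources from the cluster
helm uninstall portkey-gateway -n portkeyai
```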
This command will uninstall the Helm release and clean up the resources.
Network Configuration
Step 1: Allow Access to the Service
To make the service accessible from outside the cluster, define a Service of type `LoadBalancer` in your `values.yaml` or Helm templates, and specify the desired port for external access.
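A sketch of such a Service, assuming the pod label `app: portkey-gateway` and an internal port of 8787 (both assumptions; match them to your deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: portkey-gateway
  namespace: portkeyai
spec:
  type: LoadBalancer
  selector:
    app: portkey-gateway    # must match your pod labels
  ports:
    - port: <desiredport>   # externally exposed port
      targetPort: 8787      # port the application listens on (assumed)
      protocol: TCP
```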
Replace _<desiredport>_ with the port number for external access, and ensure the target port matches the port the application listens on internally.
Step 2: Ensure Outbound Network Access
By default, Kubernetes allows full outbound access, but if your cluster has NetworkPolicies that restrict egress, configure them to allow outbound traffic.
Example NetworkPolicy for Outbound Access:
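A minimal policy that permits all egress from the gateway's namespace (the name and namespace are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
  namespace: portkeyai
spec:
  podSelector: {}    # applies to all pods in the namespace
  policyTypes:
    - Egress
  egress:
    - {}             # allow all outbound traffic
```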
This allows the gateway to access LLMs hosted within your VPC and outside as well. This also enables connection for the sync service to the Portkey Control Plane.
Step 3: Configure Inbound Access for Portkey Control Plane
Ensure the Portkey control plane can access the service either over the internet or through VPC peering.
Over the Internet:
- Ensure the firewall rules for the LoadBalancer allow inbound traffic on the specified port.
- Document the public IP/hostname and port for the control plane connection.
Through VPC Peering:
- Set up VPC peering between your GCP project and the control plane’s GCP project. This requires manual setup by the Portkey team.
This guide provides the necessary steps and configurations to deploy Portkey on GCP effectively, ensuring high availability, scalability, and integration with your existing infrastructure.