About
A leading health insurance company serving millions of members across the Pacific Northwest, providing comprehensive health benefits and tailored wellness services.
Industry
Insurance
Headquarters
North America
Why Portkey:
Centralized observability and team-level access controls.
Great AI initiatives, but not enough infrastructure support
Across the organization, different teams were already experimenting with LLMs, analyzing thousands of internal policies, summarizing member call histories, generating synthetic data, and improving member-facing experiences.
But every team was working in isolation, and each wanted access to different models. All of it was being routed through Azure OpenAI's private endpoints, which added another layer of complexity.
Hallucinations were already showing up in PoC-stage projects. The cybersecurity team had strict requirements: every request needed to be logged and stored securely. The platform team needed stricter access controls, visibility, and cost control.
"We were running everything through Azure OpenAI private endpoints, but we did not have a standard way to manage usage or control access yet."
— Platform Engineering Lead
The health insurer needed infrastructure: a secure, observable, and flexible foundation that could support both experimentation and enterprise rollout while staying fully compliant with healthcare standards.
A centralized AI gateway designed for scale and control
Portkey offered exactly what the platform engineering team was looking for: a secure, enterprise-grade AI gateway they could deploy within their own Azure environment, fully compatible with Azure OpenAI's private endpoints.
From the very first PoC, Portkey's integration meant teams didn't need to rewrite their code or workflows. With just a simple base URL change, developers could start routing their requests through Portkey, while the platform team retained full control behind the scenes.
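As a rough sketch of what that base URL change looks like in practice (the endpoint, deployment name, and keys below are illustrative placeholders, not the insurer's real values):

```python
# Sketch: routing an existing Azure OpenAI call through a gateway typically
# means changing only the base URL and auth headers -- the request body is
# untouched, so application code and prompts stay the same.

def build_chat_request(base_url: str, headers: dict) -> dict:
    """Assemble a chat-completions request; only the URL and headers differ."""
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {"Content-Type": "application/json", **headers},
        "json": {
            "model": "gpt-4o",
            "messages": [
                {"role": "user", "content": "Summarize this member call history."}
            ],
        },
    }

# Before: direct to the Azure OpenAI private endpoint; the app holds the
# provider key itself (illustrative endpoint).
direct = build_chat_request(
    "https://example-insurer.openai.azure.com/openai/deployments/gpt-4o",
    {"api-key": "AZURE_OPENAI_KEY"},
)

# After: via the gateway; teams hold only a gateway key, while the provider
# key, logging, limits, and access checks live behind the gateway.
via_gateway = build_chat_request(
    "https://api.portkey.ai/v1",
    {"x-portkey-api-key": "PORTKEY_API_KEY"},
)
```

The point of the sketch is that the payload is identical in both cases; only the route changes, which is why existing PoCs could move over without code rewrites.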
Every request was logged. Metadata could be added at the config level for team-level tracking. Budget limits, rate limits, and access policies were enforced, all without leaking the underlying API keys to individual teams.
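The team-level tracking can be pictured like this: metadata rides along with each request so the platform team can filter logs and attribute cost per team (a sketch; the team names and tags here are invented):

```python
import json

# Hypothetical sketch: per-team metadata attached to each gateway request.
# Logs can then be filtered by team or environment, and spend attributed,
# without any team ever seeing the underlying provider API key.
team_metadata = {
    "team": "member-services",  # who made the call
    "project": "call-summaries",
    "env": "poc",
}

headers = {
    "x-portkey-api-key": "PORTKEY_API_KEY",          # gateway key, not the Azure key
    "x-portkey-metadata": json.dumps(team_metadata),  # serialized as a JSON header
}
```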
Portkey also gave the health plan:
Fine-grained observability: Request/response logging, cost tracking, latency metrics, and error filtering
Secure hosting options: Full support for hybrid deployments with the data plane inside their own VPC
Flexible routing: Ability to load balance across regions, test new models, or failover in case of rate limits
Guardrails: Built-in redaction, PII detection, prompt injection protection, and output schema checks
Prompt versioning: A centralized prompt library with version control and deployment workflows
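The flexible-routing item above can be sketched as a declarative config (the shape follows Portkey-style routing configs, but the virtual key names, weights, and retry values are invented for illustration):

```python
# Hypothetical routing config: load balance across two Azure regions, with
# automatic retries when a target is rate limited. The platform team owns
# this config centrally; application code never changes.
routing_config = {
    "strategy": {"mode": "loadbalance"},  # spread traffic across targets
    "targets": [
        {"virtual_key": "azure-openai-eastus", "weight": 0.7},
        {"virtual_key": "azure-openai-westus", "weight": 0.3},
    ],
    # Fail over gracefully on 429s instead of surfacing errors to teams.
    "retry": {"attempts": 3, "on_status_codes": [429]},
}
```

Because routing lives in config rather than application code, swapping in a new model or region for a test is a config change, not a redeploy.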
The gateway is about visibility and enablement. With Portkey, the health plan could empower teams to experiment freely while keeping compliance and security intact.
From PoC to production, with guardrails and governance baked in
The AI journey is still in early stages, but the foundation is now in place. Over the next few months, the team plans to onboard more internal teams onto the gateway, expand model access, and formalize the policies that will govern how AI is used across the company.
The team is also evaluating prompt versioning, automated fine-tuning, and workspace-level budgeting to make AI development faster and safer for everyone involved.
What started as a platform engineering initiative is quickly becoming the backbone for AI at the organization, setting a new standard for how GenAI is governed in healthcare.




