Introduction

Mastra is a TypeScript framework for building AI agents with tools, workflows, memory, and evaluation scoring. Portkey turns experimental Mastra agents into production-ready systems by providing:
  • Complete observability of every agent step, tool use, and interaction
  • Built-in reliability with fallbacks, retries, and load balancing
  • Cost tracking and optimization to manage your AI spend
  • Access to 1600+ LLMs through a single integration
  • Guardrails to keep agent behavior safe and compliant
  • Version-controlled prompts for consistent agent performance

Mastra Official Documentation

Learn more about Mastra’s core concepts and features

Installation & Setup

Step 1: Install the required packages

npm install @mastra/core
npm install --save-dev mastra
Step 2: Generate API Key

Create a Portkey API key from the Portkey dashboard. You can attach optional budget/rate limits and configurations.
Step 3: Set Up Provider Integration

In Portkey, set up your LLM provider integration:
  1. Go to Integrations in Portkey
  2. Connect your LLM provider (OpenAI, Anthropic, etc.)
  3. Note your provider slug (e.g., openai-dev, anthropic-prod)
You’ll use this slug in your Mastra model configuration.
Step 4: Configure Mastra Agent with Portkey

Configure your Mastra agent’s model to use Portkey as the gateway:
import { Agent } from '@mastra/core/agent';

export const agent = new Agent({
  name: 'Assistant',
  instructions: 'You are a helpful assistant.',
  model: {
    id: 'openai/@YOUR_PROVIDER_SLUG@gpt-4o',  // Format: openai/@provider-slug@model-name
    url: 'https://api.portkey.ai/v1',
    apiKey: 'YOUR_PORTKEY_API_KEY',
    headers: {
      // Optional: Add Portkey configuration
      'x-portkey-trace-id': 'agent-session-123',
      'x-portkey-metadata': JSON.stringify({ 
        agent: 'assistant',
        env: 'production' 
      })
    }
  }
});
Model ID Format: Use openai/@provider-slug@model-name because Mastra uses OpenAI-compatible interfaces under the hood. The @provider-slug should match the slug from your Portkey integration.
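Once the agent is defined, you invoke it like any other Mastra agent. A minimal usage sketch, assuming Mastra's standard generate API:
// Minimal usage sketch for the agent defined above,
// assuming Mastra's standard `generate` API.
const result = await agent.generate('Summarize the latest AI news.');
console.log(result.text);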

Production Features

1. Enhanced Observability

Portkey provides comprehensive observability for your Mastra agents, helping you understand exactly what’s happening during each execution.
Traces provide a hierarchical view of your agent’s execution, showing the sequence of LLM calls, tool invocations, and state transitions.
// Add tracing to your Mastra agents
export const agent = new Agent({
  name: 'Research Assistant',
  instructions: 'You are a helpful research assistant.',
  model: {
    id: 'openai/@YOUR_PROVIDER_SLUG@gpt-4o',
    url: 'https://api.portkey.ai/v1',
    apiKey: process.env.PORTKEY_API_KEY,
    headers: {
      'x-portkey-trace-id': 'unique_execution_trace_id', // Add unique trace ID
      'x-portkey-metadata': JSON.stringify({ 
        agent_type: 'research_agent' 
      })
    }
  }
});
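The hardcoded trace ID above groups every call under a single trace; in practice you will likely want a fresh ID per session. A hedged sketch using a one-agent-per-session factory (an illustrative pattern, not a Mastra requirement):
import { randomUUID } from 'node:crypto';
import { Agent } from '@mastra/core/agent';

// Build an agent whose Portkey headers carry a per-session trace ID,
// so all calls from one session group under one trace in Portkey.
function createTracedAgent(traceId: string = randomUUID()) {
  return new Agent({
    name: 'Research Assistant',
    instructions: 'You are a helpful research assistant.',
    model: {
      id: 'openai/@YOUR_PROVIDER_SLUG@gpt-4o',
      url: 'https://api.portkey.ai/v1',
      apiKey: process.env.PORTKEY_API_KEY,
      headers: {
        'x-portkey-trace-id': traceId,
        'x-portkey-metadata': JSON.stringify({ agent_type: 'research_agent' })
      }
    }
  });
}

const sessionAgent = createTracedAgent();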

2. Reliability - Keep Your Agents Running Smoothly

When running agents in production, things can go wrong: API rate limits, network issues, or provider outages. Portkey’s reliability features keep your agents running smoothly even when problems occur. Enabling fallbacks in your Mastra agents is this simple:
import { Agent } from '@mastra/core/agent';

// Create a config with fallbacks
// It's recommended that you create the Config in Portkey App rather than hard-code the config JSON directly
const portkeyConfig = {
  strategy: {
    mode: 'fallback'
  },
  targets: [
    {
      provider: '@YOUR_OPENAI_PROVIDER',
      override_params: { model: 'gpt-4o' }
    },
    {
      provider: '@YOUR_ANTHROPIC_PROVIDER',
      override_params: { model: 'claude-3-opus-20240229' }
    }
  ]
};

export const agent = new Agent({
  name: 'Resilient Agent',
  instructions: 'You are a helpful assistant.',
  model: {
    id: 'openai/@YOUR_OPENAI_PROVIDER@gpt-4o',
    url: 'https://api.portkey.ai/v1',
    apiKey: process.env.PORTKEY_API_KEY,
    headers: {
      'x-portkey-config': JSON.stringify(portkeyConfig)
    }
  }
});
This configuration will automatically try Claude if the GPT-4o request fails, ensuring your agent can continue operating.
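Fallbacks also compose with automatic retries on transient errors. A sketch following Portkey's documented retry schema, with placeholder provider slugs:
// Retry the primary target on transient failures before
// falling back to the secondary provider.
const resilientConfig = {
  strategy: { mode: 'fallback' },
  targets: [
    {
      provider: '@YOUR_OPENAI_PROVIDER',
      override_params: { model: 'gpt-4o' },
      retry: { attempts: 3, on_status_codes: [429, 500, 502, 503] }
    },
    {
      provider: '@YOUR_ANTHROPIC_PROVIDER',
      override_params: { model: 'claude-3-opus-20240229' }
    }
  ]
};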

3. Prompting in Mastra Agents

Portkey’s Prompt Engineering Studio helps you create, manage, and optimize the prompts used in your Mastra agents. Instead of hardcoding prompts or instructions, use Portkey’s prompt rendering API to dynamically fetch and apply your versioned prompts.
Prompt Playground: manage prompts in Portkey's Prompt Library

Prompt Playground is a place to compare, test and deploy perfect prompts for your AI application. It’s where you experiment with different models, test variables, compare outputs, and refine your prompt engineering strategy before deploying to production. It allows you to:
  1. Iteratively develop prompts before using them in your agents
  2. Test prompts with different variables and models
  3. Compare outputs between different prompt versions
  4. Collaborate with team members on prompt development
This visual environment makes it easier to craft effective prompts for each step in your Mastra agent’s workflow.
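To pull a versioned prompt into an agent at startup, you can render it with the Portkey SDK. A hedged sketch, assuming the Node SDK's prompts.render method, a placeholder prompt ID, and that the first rendered message carries the system instructions:
import { Portkey } from 'portkey-ai';
import { Agent } from '@mastra/core/agent';

const portkey = new Portkey({ apiKey: process.env.PORTKEY_API_KEY });

// Render a versioned prompt from Portkey's prompt library.
// 'YOUR_PROMPT_ID' and the `domain` variable are placeholders.
const rendered = await portkey.prompts.render({
  promptID: 'YOUR_PROMPT_ID',
  variables: { domain: 'finance' }
});

export const agent = new Agent({
  name: 'Prompted Agent',
  // Assumption: the first rendered message holds the system prompt.
  instructions: String(rendered.data.messages[0].content),
  model: {
    id: 'openai/@YOUR_PROVIDER_SLUG@gpt-4o',
    url: 'https://api.portkey.ai/v1',
    apiKey: process.env.PORTKEY_API_KEY
  }
});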

Prompt Engineering Studio

Learn more about Portkey’s prompt management features

4. Guardrails for Safe Agents

Guardrails ensure your Mastra agents operate safely and respond appropriately in all situations.

Why Use Guardrails?

Mastra agents can experience various failure modes:
  • Generating harmful or inappropriate content
  • Leaking sensitive information like PII
  • Hallucinating incorrect information
  • Generating outputs in incorrect formats
Portkey’s guardrails protect against these issues by validating both inputs and outputs.

Implementing Guardrails
import { Agent } from '@mastra/core/agent';

// Create a config with input and output guardrails
// It's recommended you create Config in Portkey App and pass the config ID in the headers
const guardrailConfig = {
  provider: '@YOUR_PROVIDER',
  input_guardrails: ['guardrails-id-xxx', 'guardrails-id-yyy'],
  output_guardrails: ['guardrails-id-xxx']
};

export const agent = new Agent({
  name: 'Safe Agent',
  instructions: 'You are a helpful assistant that provides safe responses.',
  model: {
    id: 'openai/@YOUR_PROVIDER_SLUG@gpt-4o',
    url: 'https://api.portkey.ai/v1',
    apiKey: process.env.PORTKEY_API_KEY,
    headers: {
      'x-portkey-config': JSON.stringify(guardrailConfig)
    }
  }
});
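As the comment above suggests, you can create the config in the Portkey app and reference it by ID rather than inlining the JSON. A sketch with a placeholder config ID:
import { Agent } from '@mastra/core/agent';

export const safeAgent = new Agent({
  name: 'Safe Agent',
  instructions: 'You are a helpful assistant that provides safe responses.',
  model: {
    id: 'openai/@YOUR_PROVIDER_SLUG@gpt-4o',
    url: 'https://api.portkey.ai/v1',
    apiKey: process.env.PORTKEY_API_KEY,
    headers: {
      // Reference a saved config by its ID (placeholder shown).
      'x-portkey-config': 'pc-your-config-id'
    }
  }
});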
Portkey’s guardrails can:
  • Detect and redact PII in both inputs and outputs
  • Filter harmful or inappropriate content
  • Validate response formats against schemas
  • Check for hallucinations against ground truth
  • Apply custom business logic and rules

Learn More About Guardrails

Explore Portkey’s guardrail features to enhance agent safety

5. User Tracking with Metadata

Track individual users through your Mastra agents using Portkey’s metadata system.

What is Metadata in Portkey?

Metadata allows you to associate custom data with each request, enabling filtering, segmentation, and analytics. The special _user field is specifically designed for user tracking.
import { Agent } from '@mastra/core/agent';

export const agent = new Agent({
  name: 'Personalized Agent',
  instructions: 'You are a personalized assistant.',
  model: {
    id: 'openai/@YOUR_PROVIDER_SLUG@gpt-4o',
    url: 'https://api.portkey.ai/v1',
    apiKey: process.env.PORTKEY_API_KEY,
    headers: {
      'x-portkey-metadata': JSON.stringify({
        _user: 'user_123',  // Special _user field for user analytics
        user_name: 'John Doe',
        user_tier: 'premium',
        user_company: 'Acme Corp'
      })
    }
  }
});
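Because headers are fixed when the agent is constructed, a practical pattern is a small factory that builds an agent per user; the helper below is illustrative, not part of Mastra's API:
import { Agent } from '@mastra/core/agent';

// Hypothetical helper: build an agent whose Portkey metadata
// identifies the requesting user for per-user analytics.
function createAgentForUser(userId: string, tier: string) {
  return new Agent({
    name: 'Personalized Agent',
    instructions: 'You are a personalized assistant.',
    model: {
      id: 'openai/@YOUR_PROVIDER_SLUG@gpt-4o',
      url: 'https://api.portkey.ai/v1',
      apiKey: process.env.PORTKEY_API_KEY,
      headers: {
        'x-portkey-metadata': JSON.stringify({ _user: userId, user_tier: tier })
      }
    }
  });
}

const premiumAgent = createAgentForUser('user_123', 'premium');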
Filter Analytics by User

With metadata in place, you can filter analytics by user and analyze performance metrics on a per-user basis:

Filter analytics by user

This enables:
  • Per-user cost tracking and budgeting
  • Personalized user analytics
  • Team or organization-level metrics
  • Environment-specific monitoring (staging vs. production)

Learn More About Metadata

Explore how to use custom metadata to enhance your analytics

6. Caching for Efficient Agents

Implement caching to make your Mastra agents more efficient and cost-effective:
import { Agent } from '@mastra/core/agent';

const cacheConfig = {
  provider: '@YOUR_PROVIDER',
  cache: {
    mode: 'simple'
  }
};

export const agent = new Agent({
  name: 'Cached Agent',
  instructions: 'You are a helpful assistant.',
  model: {
    id: 'openai/@YOUR_PROVIDER_SLUG@gpt-4o',
    url: 'https://api.portkey.ai/v1',
    apiKey: process.env.PORTKEY_API_KEY,
    headers: {
      'x-portkey-config': JSON.stringify(cacheConfig)
    }
  }
});
Simple caching performs exact matches on input prompts, caching identical requests to avoid redundant model executions.
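For higher hit rates on paraphrased queries, Portkey also supports semantic caching, which matches prompts by similarity rather than exact text. A sketch following the same config shape, with max_age bounding cache freshness in seconds:
// Semantic cache: serve cached responses for similar (not only
// identical) prompts; entries expire after `max_age` seconds.
const semanticCacheConfig = {
  provider: '@YOUR_PROVIDER',
  cache: {
    mode: 'semantic',
    max_age: 3600
  }
};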

7. Model Interoperability

With Portkey, you can easily switch between different LLMs in your Mastra agents without changing your core agent logic.
import { Agent } from '@mastra/core/agent';

// Using OpenAI
const openaiAgent = new Agent({
  name: 'OpenAI Agent',
  instructions: 'You are a helpful assistant.',
  model: {
    id: 'openai/@YOUR_OPENAI_PROVIDER@gpt-4o',
    url: 'https://api.portkey.ai/v1',
    apiKey: process.env.PORTKEY_API_KEY
  }
});

// Using Anthropic
const anthropicAgent = new Agent({
  name: 'Anthropic Agent',
  instructions: 'You are a helpful assistant.',
  model: {
    id: 'openai/@YOUR_ANTHROPIC_PROVIDER@claude-3-opus-20240229',
    url: 'https://api.portkey.ai/v1',
    apiKey: process.env.PORTKEY_API_KEY
  }
});

// Using Google Gemini
const geminiAgent = new Agent({
  name: 'Gemini Agent',
  instructions: 'You are a helpful assistant.',
  model: {
    id: 'openai/@YOUR_GOOGLE_PROVIDER@gemini-1.5-pro',
    url: 'https://api.portkey.ai/v1',
    apiKey: process.env.PORTKEY_API_KEY
  }
});
Portkey provides access to 1600+ LLMs through a unified interface, including:
  • OpenAI (GPT-4o, GPT-4 Turbo, etc.)
  • Anthropic (Claude 3.5 Sonnet, Claude 3 Opus, etc.)
  • Mistral AI (Mistral Large, Mistral Medium, etc.)
  • Google Vertex AI (Gemini 1.5 Pro, etc.)
  • Cohere (Command, Command-R, etc.)
  • AWS Bedrock (Claude, Titan, etc.)
  • Local/Private Models

Supported Providers

See the full list of LLM providers supported by Portkey

Set Up Enterprise Governance for Mastra Agents

Why Enterprise Governance?

If you are using Mastra agents inside your organization, you need to consider several governance aspects:
  • Cost Management: Controlling and tracking AI spending across teams
  • Access Control: Managing which teams can use specific models
  • Usage Analytics: Understanding how AI is being used across the organization
  • Security & Compliance: Maintaining enterprise security standards
  • Reliability: Ensuring consistent service across all users
Portkey adds a comprehensive governance layer to address these enterprise needs. Let’s implement these controls step by step.

Enterprise Implementation Guide

Portkey allows you to use 1600+ LLMs with your Mastra agents setup, with minimal configuration required. Let’s set up the core components in Portkey that you’ll need for integration.
Step 1: Create Integration

To create a new LLM integration, go to Integrations in the Portkey App, set budget/rate limits and model access if required, and save the integration. This creates a “Portkey Provider” that you can use in any Portkey request without sending auth details for that LLM provider again.
Step 2: Create Config

Configs in Portkey define how your requests are routed, with features like advanced routing, fallbacks, and retries. To create your config:
  1. Go to Configs in Portkey dashboard
  2. Create new config with:
{
  "provider": "@YOUR_PROVIDER_FROM_STEP1",
  "override_params": {
    "model": "gpt-4o" // Your preferred model name
  }
}
  3. Save and note the Config ID for the next step
Step 3: Configure Portkey API Key

Now create a Portkey API key and attach the config you created in Step 2:
  1. Go to API Keys in Portkey and Create new API key
  2. Select your config from Step 2
  3. Generate and save your API key
Step 4: Connect to Mastra

After setting up your Portkey API key with the attached config, connect it to your Mastra agents:
import { Agent } from '@mastra/core/agent';

export const agent = new Agent({
  name: 'Enterprise Agent',
  instructions: 'You are a helpful assistant.',
  model: {
    id: 'openai/@YOUR_PROVIDER_SLUG@gpt-4o',
    url: 'https://api.portkey.ai/v1',
    apiKey: 'YOUR_PORTKEY_API_KEY'  // The API key with attached config from step 3
  }
});

Step 1: Implement Budget Controls & Rate Limits

Integrations enable granular control over LLM access at the team/department level. This helps you:
  • Set up budget limits
  • Prevent unexpected usage spikes using Rate limits
  • Track departmental spending

Setting Up Department-Specific Controls:

  1. Navigate to Integrations in Portkey dashboard and create a new Integration
  2. Provision this Integration for each department with their budget limits and rate limits
  3. Configure model access if required

Step 2: Define Model Access Rules

As your AI usage scales, controlling which teams can access specific models becomes crucial. Portkey Configs provide this control layer with features like:

Access Control Features:

  • Model Restrictions: Limit access to specific models
  • Data Protection: Implement guardrails for sensitive data
  • Reliability Controls: Add fallbacks and retry logic

Example Configuration:

Here’s a basic configuration to route requests to OpenAI, specifically using GPT-4o:
{
  "strategy": {
    "mode": "single"
  },
  "targets": [
    {
      "provider": "@YOUR_OPENAI_PROVIDER",
      "override_params": {
        "model": "gpt-4o"
      }
    }
  ]
}
Create your config on the Configs page in your Portkey dashboard.
Configs can be updated anytime to adjust controls without affecting running applications.
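These controls compose in a single config. A hedged sketch combining a model restriction, an input guardrail, and retry logic, with placeholder IDs throughout:
// Illustrative governance config; every ID below is a placeholder.
const governanceConfig = {
  strategy: { mode: 'single' },
  input_guardrails: ['guardrails-id-xxx'],
  targets: [
    {
      provider: '@YOUR_OPENAI_PROVIDER',
      override_params: { model: 'gpt-4o' },
      retry: { attempts: 3 }
    }
  ]
};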

Step 3: Implement Access Controls

Create User-specific API keys that automatically:
  • Track usage per user/team with the help of metadata
  • Apply appropriate configs to route requests
  • Collect relevant metadata to filter logs
  • Enforce access permissions
You can create API keys through the Portkey dashboard or programmatically via the Admin API. Example using the Node.js SDK:
import { Portkey } from 'portkey-ai';

const portkey = new Portkey({
  apiKey: 'YOUR_ADMIN_API_KEY'
});

const apiKey = await portkey.apiKeys.create({
  name: 'engineering-team',
  type: 'organisation',
  workspaceId: 'YOUR_WORKSPACE_ID',
  defaults: {
    configId: 'your-config-id',
    metadata: {
      environment: 'production',
      department: 'engineering'
    }
  },
  scopes: ['logs.view', 'configs.read']
});
For detailed key management instructions, see our API Keys documentation.

Step 4: Deploy & Monitor

After distributing API keys to your team members, your enterprise-ready Mastra setup is ready to go. Each team member can now use their designated API key with appropriate access levels and budget controls. Apply your governance setup using the integration steps from earlier sections, then monitor usage in the Portkey dashboard:
  • Cost tracking by department
  • Model usage patterns
  • Request volumes
  • Error rates

Enterprise Features Now Available

Your Mastra agents now have:
  • Departmental budget controls
  • Model access governance
  • Usage tracking & attribution
  • Security guardrails
  • Reliability features

Frequently Asked Questions

How does Portkey enhance Mastra agents?

Portkey adds production-readiness to Mastra agents through comprehensive observability (traces, logs, metrics), reliability features (fallbacks, retries, caching), and access to 1600+ LLMs through a unified interface. This makes it easier to debug, optimize, and scale your agent applications.

Can I use Portkey with my existing Mastra agents?

Yes! Portkey integrates seamlessly with existing Mastra agents. You only need to update your agent’s model configuration to point to Portkey. The rest of your agent code remains unchanged.

Does Portkey support all Mastra features?

Portkey supports all Mastra features, including tools, workflows, memory, and scoring. It adds observability and reliability without limiting any of the framework’s functionality.

Why does the model ID use the openai/ prefix for non-OpenAI providers?

Mastra uses OpenAI-compatible interfaces under the hood, so the model ID format is openai/@provider-slug@model-name. This allows Mastra to work with any LLM provider through Portkey, not just OpenAI. The @provider-slug corresponds to your Portkey integration slug.

How do I track and analyze specific agent runs?

Portkey allows you to add custom metadata and trace IDs to your agent runs through the model headers. Add fields like agent_name, agent_type, or session_id to easily find and analyze specific agent executions.

Can different agents use different LLM providers?

Yes! Simply change the provider slug in the model ID. For example, use openai/@openai-provider@gpt-4o for one agent and openai/@anthropic-provider@claude-3-opus-20240229 for another. All agents route through Portkey.

Resources