ZhipuAI has developed the GLM series of open-source LLMs, which are among the best-performing and most capable models available today. Portkey provides a robust and secure gateway to seamlessly integrate these LLMs into your applications using the familiar OpenAI spec, with just a 2-line code change!

With Portkey, you can leverage powerful features like a fast AI gateway, caching, observability, prompt management, and more, while securely managing your LLM API keys through a virtual key system.

Provider Slug: zhipu

Portkey SDK Integration with ZhipuAI

1. Install the Portkey SDK

Install the Portkey SDK in your project using npm or pip:

npm install --save portkey-ai

pip install portkey-ai

2. Initialize Portkey with the Virtual Key

To use ZhipuAI / ChatGLM / BigModel with Portkey, get your ZhipuAI API key, then add it to Portkey to create a virtual key.

import Portkey from 'portkey-ai'

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY", // defaults to process.env["PORTKEY_API_KEY"]
    virtualKey: "VIRTUAL_KEY" // Your ZhipuAI Virtual Key
})

3. Invoke Chat Completions

const chatCompletion = await portkey.chat.completions.create({
    messages: [{ role: 'user', content: 'Who are you?' }],
    model: 'glm-4'
});

console.log(chatCompletion.choices);

Sample response:

I am an AI assistant named ZhiPuQingYan (智谱清言); you can call me Xiaozhi 🤖


Next Steps

The complete list of features supported in the SDK is available on the link below.

SDK

You’ll find more information in the relevant sections:

  1. Add metadata to your requests
  2. Add gateway configs to your ZhipuAI requests
  3. Trace your ZhipuAI requests
  4. Set up a fallback from OpenAI to ZhipuAI
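As an illustration of item 4, here is a minimal sketch of a Portkey gateway config that falls back from OpenAI to ZhipuAI. The virtual key values are placeholders, and the exact schema is covered in the gateway configs section linked above:

```javascript
// Minimal sketch of a Portkey fallback config (virtual keys are placeholders).
// With strategy mode "fallback", the gateway tries targets in order and
// moves on to the next target when a request fails.
const fallbackConfig = {
  strategy: { mode: 'fallback' },
  targets: [
    { virtual_key: 'openai-virtual-key' }, // tried first
    { virtual_key: 'zhipu-virtual-key' }   // used if the OpenAI request fails
  ]
};

// The config object can be passed when initializing the client, e.g.:
// const portkey = new Portkey({ apiKey: "PORTKEY_API_KEY", config: fallbackConfig });
console.log(fallbackConfig.strategy.mode);  // → fallback
console.log(fallbackConfig.targets.length); // → 2
```

With this config attached, requests made through the client transparently retry against ZhipuAI whenever the OpenAI target errors out, with no change to your request code.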