Instructor
With Portkey, you can confidently take your Instructor pipelines to production, with complete observability over every call and built-in reliability - all with a 2 LOC change!
Instructor is a framework for extracting structured outputs from LLMs, available in Python & JS.
Integrating Portkey with Instructor
Caching Your Requests
Let’s now bring down the cost of running your Instructor pipeline with Portkey caching. You can just create a Config object where you define your cache setting:
You can write it raw, or use Portkey's Config builder to get a corresponding config ID. Then, just pass it while instantiating your OpenAI client:
Similarly, you can add Fallback, Loadbalancing, Timeout, or Retry settings to your Configs and make your Instructor requests robust & reliable.
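A sketch of one such Config, combining retries with a fallback chain (the `strategy`/`targets`/`retry` field names follow Portkey's Config schema; the providers and models are placeholders):

```python
# Sketch: a Portkey Config that retries the primary target, then falls
# back to a second provider if it still fails.
import json

robust_config = {
    "strategy": {"mode": "fallback"},  # try targets in listed order
    "targets": [
        {
            "provider": "openai",
            "override_params": {"model": "gpt-4o"},
            "retry": {"attempts": 3},  # retry transient failures first
        },
        {
            "provider": "anthropic",
            "override_params": {"model": "claude-3-5-sonnet-20240620"},
        },
    ],
}

# Pass it inline via the x-portkey-config header, or save it in the
# Portkey UI and reference its config ID instead.
config_header = json.dumps(robust_config)
```

Because the Config lives outside your pipeline code, you can tune reliability behavior without touching your Instructor extraction logic.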