- `facility_prompt_sys.txt` - System prompt for the task
- `facility_v2_test.json` - Dataset with customer service messages
- `facility-simple.yaml` - Simple configuration file
- `eval.ipynb` - Evaluation notebook

Create Virtual Key
Create Default Config
Configure Portkey API Key
Step 2: Create a `.env` File
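A minimal sketch of that file, assuming the tool reads a variable named `PORTKEY_API_KEY` (the variable name is an assumption, not stated in this guide):

```
# Assumed variable name; replace the value with your actual Portkey API key
PORTKEY_API_KEY=your_portkey_api_key
```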
Update your `config.yaml` file to use Portkey instead of OpenRouter:
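Here is a sketch of the relevant section, assuming the standard llama-prompt-ops config layout; only `name` and `api_base` are prescribed by this guide, while the enclosing `model:` key and the `api_key` line are assumptions:

```yaml
model:
  name: "openai/openai/gpt-4o"           # placeholder ID; actual routing is decided by your Portkey config
  api_base: "https://api.portkey.ai/v1"  # send all traffic through Portkey's gateway
  api_key: "${PORTKEY_API_KEY}"          # assumed: read from the .env file above
```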
The `name` parameter can be set to `openai/openai/gpt-4o` or any other model identifier, but the actual model selection is handled by your Portkey config. Set `api_base` to `"https://api.portkey.ai/v1"`.
Once the optimization run finishes, the results are saved to the `results/` directory with a filename like `facility-simple_YYYYMMDD_HHMMSS.yaml`. When you open this file, you’ll see something like this:
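The example output itself is not reproduced here; as a purely hypothetical sketch of the file's shape (field names illustrative, not the actual schema):

```yaml
# Hypothetical layout: the real keys depend on your llama-prompt-ops version
prompt: |
  <the optimized system prompt produced by the run>
```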
The provider that actually serves each request is determined by the `virtual_key` in your default config object.
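For illustration, a Portkey config object of roughly this shape spreads traffic across providers; treat it as a sketch and confirm the exact fields against Portkey's config reference:

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "virtual_key": "openai-virtual-key" },
    { "virtual_key": "anthropic-virtual-key" }
  ]
}
```

Swapping the `virtual_key` entries changes which provider serves llama-prompt-ops requests without touching `config.yaml`.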
Frequently Asked Questions

How do I update my Virtual Key limits after creation?
Can I use multiple LLM providers with the same API key?
How do I track costs for different teams?
What happens if a team exceeds their budget limit?
How do I control which models are available in llama-prompt-ops workflows?