Create Run
POST /threads/{thread_id}/runs
curl https://api.portkey.ai/v1/threads/thread_abc123/runs \
-H "x-portkey-api-key: $PORTKEY_API_KEY" \
-H "x-portkey-virtual-key: $PORTKEY_PROVIDER_VIRTUAL_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"assistant_id": "asst_abc123"
}'
{
  "id": "<string>",
  "object": "thread.run",
  "created_at": 123,
  "thread_id": "<string>",
  "assistant_id": "<string>",
  "status": "queued",
  "required_action": {
    "type": "submit_tool_outputs",
    "submit_tool_outputs": {
      "tool_calls": [
        {
          "id": "<string>",
          "type": "function",
          "function": {
            "name": "<string>",
            "arguments": "<string>"
          }
        }
      ]
    }
  },
  "last_error": {
    "code": "server_error",
    "message": "<string>"
  },
  "expires_at": 123,
  "started_at": 123,
  "cancelled_at": 123,
  "failed_at": 123,
  "completed_at": 123,
  "incomplete_details": {
    "reason": "max_completion_tokens"
  },
  "model": "<string>",
  "instructions": "<string>",
  "tools": [],
  "metadata": {},
  "usage": {
    "completion_tokens": 123,
    "prompt_tokens": 123,
    "total_tokens": 123
  },
  "temperature": 123,
  "top_p": 123,
  "max_prompt_tokens": 257,
  "max_completion_tokens": 257,
  "truncation_strategy": {
    "type": "auto",
    "last_messages": 2
  },
  "tool_choice": "none",
  "parallel_tool_calls": true,
  "response_format": "none"
}
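The same call can also be made from an OpenAI-compatible SDK pointed at the Portkey gateway. The snippet below is a minimal, illustrative sketch assuming the official openai Python package; the thread and assistant IDs are placeholders, and the Portkey headers mirror the curl example above.

import os
from openai import OpenAI

# Route Assistants API traffic through the Portkey gateway by overriding the
# base URL and forwarding the Portkey headers shown in the curl example.
client = OpenAI(
    api_key="placeholder",  # provider auth is supplied via the virtual key
    base_url="https://api.portkey.ai/v1",
    default_headers={
        "x-portkey-api-key": os.environ["PORTKEY_API_KEY"],
        "x-portkey-virtual-key": os.environ["PORTKEY_PROVIDER_VIRTUAL_KEY"],
        "OpenAI-Beta": "assistants=v2",
    },
)

# POST /threads/{thread_id}/runs
run = client.beta.threads.runs.create(
    thread_id="thread_abc123",
    assistant_id="asst_abc123",
)
print(run.id, run.status)  # a new run typically starts as "queued"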
Path Parameters
thread_id (string, required): The ID of the thread to run.
Body
application/json
assistant_id (string, required): The ID of the assistant to use to execute this run, as shown in the example request above.
Response
200 - application/json
OK
Represents an execution run on a thread.
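A run is processed asynchronously: it is returned in the queued status and moves through in_progress toward a terminal state, pausing at requires_action when the assistant needs tool results (the required_action.submit_tool_outputs.tool_calls field above). The loop below is an illustrative sketch that reuses the client from the previous snippet; execute_tool is a hypothetical stand-in for your own tool dispatch.

import json
import time

def execute_tool(name: str, arguments: dict) -> str:
    # Hypothetical dispatcher: call the named function with the parsed
    # arguments and return its result as a string for the assistant.
    return "TODO: real tool output"

run = client.beta.threads.runs.create(
    thread_id="thread_abc123",
    assistant_id="asst_abc123",
)

while run.status in ("queued", "in_progress", "requires_action"):
    if run.status == "requires_action":
        # The run is paused on required_action.submit_tool_outputs.tool_calls.
        calls = run.required_action.submit_tool_outputs.tool_calls
        outputs = [
            {
                "tool_call_id": call.id,
                "output": execute_tool(call.function.name,
                                       json.loads(call.function.arguments)),
            }
            for call in calls
        ]
        run = client.beta.threads.runs.submit_tool_outputs(
            thread_id=run.thread_id, run_id=run.id, tool_outputs=outputs
        )
    else:
        time.sleep(1)
        run = client.beta.threads.runs.retrieve(
            thread_id=run.thread_id, run_id=run.id
        )

print(run.status)  # completed, failed, cancelled, expired, or incomplete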