Portkey Gateway supports fine-tuning in two ways:

  1. Fine-tuning with Portkey as a client, which supports provider-specific fine-tuning for the providers listed at the end of this page.
  2. Fine-tuning with Portkey as a provider, which supports multiple providers behind a single unified API.

1. Fine-tuning with Portkey as a client

Using Portkey as a client gives you the following benefits:

  1. Provider-specific fine-tuning.
  2. More control over the fine-tuning process.
  3. Unified API for all providers.

2. Fine-tuning with Portkey as a provider

Using Portkey as a provider gives you the following benefits:

  1. Unified API for all providers
  2. No need to manage multiple endpoints and keys
  3. Easy to switch between providers with minimal changes
  4. Easier integration with Portkey’s other features, such as batching

How to use fine-tuning

Portkey supports a unified Fine-tuning API for all providers, which is based on OpenAI’s Fine-tuning API.

While the API signature is the same for all providers, some providers require additional information. This can be passed via headers when Portkey is used as a client, or via the portkey_options params when Portkey is used as a provider.
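
For illustration, a provider credential such as a virtual key can be attached as a request header when Portkey acts as a client; the request below is a minimal sketch with an abbreviated payload (the portkey_options form appears in the create-job example further down):

curl -X POST --header 'x-portkey-api-key: <portkey_api_key>' \
 --header 'x-portkey-virtual-key: <virtual_key>' \
 --header 'Content-Type: application/json' \
 --data '{"model": "<base_model>", "training_file": "<file_id>"}' \
 'https://api.portkey.ai/v1/fine_tuning/jobs'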

Fine-tuning with Portkey as a client

Upload a File

curl -X POST --header 'x-portkey-api-key: <portkey_api_key>' \
 --form 'file=@<path_to_jsonl_file>' \
 --form 'purpose=fine-tune' \
 'https://api.portkey.ai/v1/files'
Learn more about files
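
Each line of the uploaded JSONL file is one training example. For chat-style fine-tuning (OpenAI format), a line typically looks like the following; the content here is purely illustrative:

{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What is Portkey?"}, {"role": "assistant", "content": "Portkey is an AI gateway."}]}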

Create a Fine-tuning Job

curl -X POST --header 'Content-Type: application/json' \
 --header 'x-portkey-api-key: <portkey_api_key>' \
 --data '{
    "model": "<base_model>",
    "suffix": "<finetune_name>",
    "training_file": "<file_id>",
    "portkey_options": {"x-portkey-virtual-key": "<virtual_key>"},
    "method": {"type": "supervised", "supervised": {"hyperparameters": {"n_epochs": 1}}}
  }' \
 'https://api.portkey.ai/v1/fine_tuning/jobs'

portkey_options supports all the options that each provider supports in the gateway. Values can be provider-specific; refer to each provider’s documentation for the options required for fine-tuning.
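
Since the unified API is based on OpenAI’s Fine-tuning API, the create call returns an OpenAI-style fine-tuning job object. An abbreviated, illustrative response might look like:

{
  "id": "ftjob-abc123",
  "object": "fine_tuning.job",
  "model": "<base_model>",
  "training_file": "<file_id>",
  "status": "queued",
  "created_at": 1720000000
}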

List Fine-tuning Jobs

curl -X GET --header 'x-portkey-api-key: <portkey_api_key>' \
'https://api.portkey.ai/v1/fine_tuning/jobs'
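
Assuming the unified API also mirrors OpenAI’s pagination parameters (an assumption, not confirmed above), you can page through jobs with limit and after:

curl -X GET --header 'x-portkey-api-key: <portkey_api_key>' \
'https://api.portkey.ai/v1/fine_tuning/jobs?limit=10&after=<job_id>'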

Get Fine-tuning Job

curl -X GET --header 'x-portkey-api-key: <portkey_api_key>' \
'https://api.portkey.ai/v1/fine_tuning/jobs/<job_id>'
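
To poll a job until it finishes, you can extract just the status field from the job object (this sketch assumes jq is installed; statuses follow OpenAI’s job lifecycle, e.g. queued, running, succeeded, failed, cancelled):

curl -s -X GET --header 'x-portkey-api-key: <portkey_api_key>' \
'https://api.portkey.ai/v1/fine_tuning/jobs/<job_id>' | jq -r '.status'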

Cancel Fine-tuning Job

curl -X POST --header 'x-portkey-api-key: <portkey_api_key>' \
'https://api.portkey.ai/v1/fine_tuning/jobs/<job_id>/cancel'

Fine-tuning with Portkey as a provider

When you use Portkey as a provider for fine-tuning, everything is managed by Portkey’s hosted solution. This includes submitting the job to the provider, monitoring the job status, and using the fine-tuned model with Portkey’s Prompt Playground.

Providers like OpenAI and Azure OpenAI offer a straightforward way to use the fine-tuned model for inference, whereas providers like Bedrock and Vertex require you to manually provision compute, which is hard for Portkey to manage.

Notes:

  • For Bedrock, the model param should point to the modelId from Portkey, and the modelId to submit to the provider should be passed via provider_options. An example is shown below.
  curl -X POST --header 'Content-Type: application/json' \
  --header 'x-portkey-api-key: <portkey_api_key>' \
  --data '{
     "model": "amazon.titan-text-lite-v1:0",
     "suffix": "<finetune_name>",
     "training_file": "<file_id>",
     "provider_options": {"model": "amazon.titan-text-lite-v1:0:4k"},
     "method": {"type": "supervised", "supervised": {"hyperparameters": {"n_epochs": 1}}}
   }' \
  'https://api.portkey.ai/v1/fine_tuning/jobs'

For more info about modelId, please refer to the provider’s documentation.

Please refer to each provider’s documentation for fine-tuning with Portkey as a gateway:

  1. OpenAI
  2. Bedrock
  3. Azure OpenAI