This feature is in private beta.
Please drop us a message at [email protected] or on our Discord if you’re interested.
What is Autonomous LLM Fine-tuning?
Autonomous Fine-tuning is a powerful feature offered by Portkey AI that enables organizations to automatically create, manage, and execute fine-tuning jobs for Large Language Models (LLMs) across multiple providers. This feature leverages your existing API usage data to continuously improve and customize LLM performance for your specific use cases.
Benefits
- Automated Workflow: Streamline the entire fine-tuning process from data preparation to model deployment.
- Multi-Provider Support: Fine-tune models across 10+ providers, including OpenAI, Azure, AWS Bedrock, and Anyscale.
- Data-Driven Improvements: Utilize your actual API usage data to create relevant and effective fine-tuning datasets.
- Continuous Learning: Set up periodic fine-tuning jobs to keep your models up-to-date with the latest data.
- Enhanced Performance: Improve model accuracy and relevance for your specific use cases.
- Cost-Effective: Optimize your LLM usage by fine-tuning models to better suit your needs, potentially reducing the number of API calls required.
- Centralized Management: Manage all your fine-tuning jobs across different providers from a single interface.
Data Preparation
- Log Collection: Portkey’s AI gateway automatically collects and stores logs from your LLM API requests (see the request sketch after this list).
- Data Enrichment:
  - Filter logs based on various criteria.
  - Annotate logs with additional information.
  - Use Portkey’s Guardrails feature for automatic log annotation.
- Dataset Creation: Utilize filters to select the most relevant logs for your fine-tuning dataset.
- Data Export: Export the enriched logs as a dataset suitable for fine-tuning.
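Collecting data requires nothing beyond routing traffic through the gateway. Below is a minimal sketch using Portkey’s Python SDK; the virtual key name and metadata fields are placeholder assumptions, but tagging requests with metadata at call time gives you concrete criteria to filter on when assembling a dataset later.

```python
# pip install portkey-ai
from portkey_ai import Portkey

# Placeholder credentials: substitute your own Portkey API key and the
# virtual key that stores your provider credentials (names are assumptions).
portkey = Portkey(
    api_key="YOUR_PORTKEY_API_KEY",
    virtual_key="openai-prod",
    metadata={"env": "production", "team": "support-bot"},  # hypothetical tags for later filtering
)

# An ordinary chat completion; the gateway logs the request and response
# automatically, along with the metadata attached above.
response = portkey.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize this support ticket: ..."}],
)
print(response.choices[0].message.content)
```

Every request sent this way shows up in your Portkey logs, which is the raw material for the dataset steps that follow.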
Fine-tuning Process
- Model Selection: Choose from a wide range of supported LLM providers and models.
- Job Configuration: Set up fine-tuning parameters through an intuitive UI.
- Execution: Portkey triggers the fine-tuning job on the selected provider’s platform.
- Monitoring: Track the progress of your fine-tuning jobs through Portkey’s dashboard.
- Deployment: Once complete, the fine-tuned model becomes available for use through Portkey’s API gateway.
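The fine-tuned model is then invoked like any other model behind the gateway. A minimal sketch, assuming an OpenAI-style fine-tuned model (the `ft:` identifier below is hypothetical; use the ID your provider returns when the job completes):

```python
from portkey_ai import Portkey

portkey = Portkey(
    api_key="YOUR_PORTKEY_API_KEY",
    virtual_key="openai-prod",  # same provider credentials used for the fine-tuning job
)

# Hypothetical fine-tuned model ID in OpenAI's "ft:" format.
response = portkey.chat.completions.create(
    model="ft:gpt-4o-mini-2024-07-18:acme::abc123",
    messages=[{"role": "user", "content": "Classify this ticket: refund request"}],
)
print(response.choices[0].message.content)
```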
How It Works: Step-by-Step
1. Data Collection: As you use Portkey’s AI gateway for LLM requests, logs are automatically collected and stored in your Portkey account.
2. Data Enrichment:
   - Apply filters to your log data.
   - Add annotations and additional context to logs (see the feedback sketch after this list).
   - Utilize Portkey’s Guardrails feature for automatic input/output annotations.
3. Dataset Creation:
   - Use the enriched log data to create a curated dataset for fine-tuning.
   - Apply additional filters to select the most relevant data points.
4. Fine-tuning Job Setup:
   - Access the Fine-tuning feature in Portkey’s UI.
   - Select your desired LLM provider and model.
   - Choose your prepared dataset.
   - Configure fine-tuning parameters.
5. Job Execution:
   - Portkey initiates the fine-tuning job on the chosen provider’s platform.
   - Monitor the progress through Portkey’s dashboard.
6. Model Deployment:
   - Once fine-tuning is complete, the new model becomes available through Portkey’s API gateway.
7. Continuous Improvement (Optional):
   - Set up periodic fine-tuning jobs (daily, weekly, or monthly).
   - Portkey automatically creates and executes these jobs using the latest data.
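For the enrichment step (step 2 above), one programmatic option is Portkey’s feedback API, which attaches a score to a request via its trace ID. The sketch below assumes you set a trace ID at request time; the IDs, score scale, and weight are illustrative assumptions.

```python
from portkey_ai import Portkey

# trace_id links this request to its log entry so it can be annotated later
# (placeholder values throughout; exact field semantics are assumptions).
portkey = Portkey(
    api_key="YOUR_PORTKEY_API_KEY",
    virtual_key="openai-prod",
    trace_id="ticket-classifier-0042",
)

response = portkey.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Classify this ticket: refund request"}],
)

# Later (e.g., after human review), attach a score to the same trace.
# High-scoring logs can then be filtered into the fine-tuning dataset.
portkey.feedback.create(
    trace_id="ticket-classifier-0042",
    value=5,      # assumed scale: integers from -10 to 10
    weight=1.0,   # assumed: relative weight of this rating
)
```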
Partnerships
Portkey AI has established partnerships to extend the capabilities of its Autonomous Fine-tuning feature:
- OpenPipe: Integration allows Portkey’s enriched data to be used on OpenPipe’s fine-tuning platform.
- Pipeshift: Portkey’s datasets can be seamlessly utilized in Pipeshift’s inference platform.
Getting Started
To begin using Autonomous Fine-tuning:
- Ensure you have an active Portkey AI account with the AI gateway set up.
- Navigate to the Fine-tuning section in your Portkey dashboard.
- Follow the step-by-step wizard to create your first fine-tuning job.
- For assistance, consult our detailed documentation or contact Portkey support.
Best Practices
- Regularly review and update your data filtering criteria to ensure the quality of your fine-tuning datasets.
- Start with smaller, focused datasets before scaling up to larger fine-tuning jobs.
- Monitor the performance of your fine-tuned models and iterate as needed.
- Leverage Portkey’s analytics to gain insights into your model’s performance improvements.