
finetuning

Prompt engineering vs. fine-tuning: What’s better for your use case?

Discover the key differences between prompt engineering and model fine-tuning. Learn when to use each approach, how to measure effectiveness, and the best tools for optimizing LLM performance.
Drishti Shah Feb 17, 2025

Instruction Tuning with GPT-4 - Summary

The paper presents the first attempt to use GPT-4 to generate instruction-following data for finetuning Large Language Models (LLMs). The 52K English and Chinese instruction-following examples generated by GPT-4 lead to superior zero-shot performance on new tasks compared to the instruction-following data generated by previous state-of-the-art models.
The Quill Apr 16, 2023

AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts - Summary

The paper introduces AutoPrompt, an automated method to create prompts for a diverse set of tasks based on a gradient-guided search. The prompts elicit more accurate factual knowledge from masked language models (MLMs) than manually created prompts on the LAMA benchmark, and show that MLMs can perform sentiment analysis and natural language inference without additional parameters or finetuning.
Rohit Agarwal Apr 15, 2023
