Portkey Blog

T5

Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes - Summary

The paper introduces a new mechanism called Distilling step-by-step that trains smaller models to outperform larger language models (LLMs) while using less training data and smaller model sizes. The mechanism extracts LLM rationales as additional supervision for training small models within a multi-task framework.
The Quill May 7, 2023
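
For intuition, here is a minimal sketch of the multi-task objective the summary describes, assuming a T5 student and Hugging Face Transformers; the task prefixes and `RATIONALE_WEIGHT` are illustrative, not the paper's exact configuration.

```python
# Sketch: train a small student on two tasks at once -- predicting the
# label and reproducing the LLM-extracted rationale -- distinguished by
# task prefixes. Names like `RATIONALE_WEIGHT` are illustrative.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
student = T5ForConditionalGeneration.from_pretrained("t5-small")

RATIONALE_WEIGHT = 1.0  # assumed weighting between the two tasks

def multitask_loss(question: str, label: str, rationale: str) -> torch.Tensor:
    """Sum of the label-prediction and rationale-generation losses."""
    def task_loss(prefix: str, target: str) -> torch.Tensor:
        inputs = tokenizer(prefix + question, return_tensors="pt")
        targets = tokenizer(target, return_tensors="pt").input_ids
        return student(**inputs, labels=targets).loss

    return (
        task_loss("[label] ", label)
        + RATIONALE_WEIGHT * task_loss("[rationale] ", rationale)
    )
```

Because rationale generation is a training-time task only, the student predicts just the label at inference, so the extra supervision adds no deployment cost.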

Segment Everything Everywhere All at Once - Summary

The paper presents SEEM, a promptable, interactive model for segmenting everything everywhere all at once in an image. It introduces a versatile prompting engine that accepts different types of prompts, including points, boxes, scribbles, masks, texts, and referred regions of another image. The model can effectively compose these prompt types in a joint visual-semantic space, enabling a wide range of segmentation tasks.
The Quill Apr 16, 2023
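
To make the "versatile prompting engine" concrete, here is a rough sketch of a unified prompt interface in which every prompt type is encoded into a shared embedding space so a single decoder can consume them interchangeably; all class and function names are hypothetical, not SEEM's actual API.

```python
# Sketch: represent heterogeneous prompts (points, boxes, text, ...) as
# one union type and map each into a common embedding space. Real
# systems would use learned encoders; these encodings are placeholders.
from dataclasses import dataclass
from typing import Union
import numpy as np

@dataclass
class PointPrompt:
    x: float
    y: float

@dataclass
class BoxPrompt:
    x0: float
    y0: float
    x1: float
    y1: float

@dataclass
class TextPrompt:
    text: str

Prompt = Union[PointPrompt, BoxPrompt, TextPrompt]

def encode_prompt(prompt: Prompt) -> np.ndarray:
    """Map any prompt into a shared 4-d embedding (illustrative only)."""
    if isinstance(prompt, PointPrompt):
        return np.array([prompt.x, prompt.y, 0.0, 0.0])
    if isinstance(prompt, BoxPrompt):
        return np.array([prompt.x0, prompt.y0, prompt.x1, prompt.y1])
    if isinstance(prompt, TextPrompt):
        # Stand-in for a learned text encoder (e.g. a CLIP embedding).
        return np.zeros(4)
    raise TypeError(f"Unsupported prompt type: {type(prompt)!r}")

# Because all prompts live in one space, they can be freely combined:
combined = encode_prompt(PointPrompt(0.4, 0.6)) + encode_prompt(TextPrompt("dog"))
```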

The Power of Scale for Parameter-Efficient Prompt Tuning - Summary

The paper explores prompt tuning, a mechanism for learning soft prompts that condition frozen language models to perform specific downstream tasks. The approach outperforms GPT-3's few-shot learning and becomes more competitive with full model tuning as scale increases. Prompt tuning also confers benefits in robustness to domain transfer and enables efficient prompt ensembling.
Rohit Agarwal Apr 15, 2023
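
A minimal sketch of soft prompt tuning, assuming a frozen T5 backbone from Hugging Face Transformers: only a small matrix of learnable prompt embeddings is trained, prepended to the input embeddings; sizes, names, and hyperparameters are illustrative.

```python
# Sketch: freeze the language model and learn only `soft_prompt`, a
# matrix of prompt embeddings prepended to every input.
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
for p in model.parameters():
    p.requires_grad = False  # the backbone stays frozen

NUM_PROMPT_TOKENS = 20  # assumed prompt length
soft_prompt = nn.Parameter(torch.randn(NUM_PROMPT_TOKENS, model.config.d_model))
optimizer = torch.optim.AdamW([soft_prompt], lr=0.3)

def prompt_tuning_loss(text: str, target: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt")
    labels = tokenizer(target, return_tensors="pt").input_ids
    token_embeds = model.get_input_embeddings()(inputs.input_ids)
    # Prepend the learned soft prompt to the token embeddings.
    embeds = torch.cat([soft_prompt.unsqueeze(0), token_embeds], dim=1)
    mask = torch.cat(
        [torch.ones(1, NUM_PROMPT_TOKENS, dtype=inputs.attention_mask.dtype),
         inputs.attention_mask],
        dim=1,
    )
    return model(inputs_embeds=embeds, attention_mask=mask, labels=labels).loss

loss = prompt_tuning_loss("sst2 sentence: a gorgeous film", "positive")
loss.backward()
optimizer.step()  # updates only the soft prompt
```

Since gradients flow only into `soft_prompt`, each downstream task needs just a few thousand extra parameters rather than a full copy of the model.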
