Portkey Blog

Prompt Injection Attacks in LLMs: What Are They and How to Prevent Them

In February 2023, a Stanford student exposed Bing Chat’s confidential system prompt through a simple text input, revealing the chatbot’s internal guidelines and behavioral constraints. It was one of the first prompt injection attacks to highlight a critical security loophole in Large Language Models (LLMs), the AI models powering everything…
Sabrina Shoshani Dec 10, 2024
