Portkey Blog Portkey Blog
  • Home
  • Production Guides
  • New Releases
  • Talks
  • Upcoming Events
  • Portkey Docs
Sign in Subscribe

ai red teaming

Prompt Injection Attacks in LLMs: What Are They and How to Prevent Them

In February 2023, a Stanford student exposed Bing Chat’s confidential system prompt through a simple text input, revealing the chatbot’s internal guidelines and behavioral constraints. It was one of the first prompt injection attacks highlighting a critical security loophole in Large Language Models (LLMs) – AI models powering everything
Sabrina Shoshani 10 Dec 2024
Portkey x Pillar - Enterprise-grade Security for LLMs in Production

Portkey x Pillar - Enterprise-grade Security for LLMs in Production

Bringing Pillar's AI guardrails onboard Portkey's open source Gateway!
Vrushank Vyas 15 Aug 2024

Subscribe to Portkey Blog

  • Blog Home
  • Portkey Website
Portkey Blog © 2026. Powered by Ghost