Portkey Blog

AI Guardrails

Prompt Injection Attacks in LLMs: What Are They and How to Prevent Them

In February 2023, a Stanford student exposed Bing Chat’s confidential system prompt through a simple text input, revealing the chatbot’s internal guidelines and behavioral constraints. It was one of the first prompt injection attacks to highlight a critical security loophole in Large Language Models (LLMs), the AI models powering everything
Sabrina Shoshani 10 Dec 2024

Prompt Security and Guardrails: How to Ensure Safe Outputs

Prompt security is an emerging and essential field within AI development, ensuring that AI-generated responses are safe, accurate, and aligned with their intended purpose. When prompts are not secured, the resulting outputs can unintentionally generate or amplify misinformation. Compliance risks are also a major concern. Enterprises deploying AI systems
Drishti Shah 14 Nov 2024
Portkey x Pillar - Enterprise-grade Security for LLMs in Production

Bringing Pillar's AI guardrails on board Portkey's open-source Gateway!
Vrushank Vyas 15 Aug 2024

Open Sourcing Guardrails on the Gateway Framework

We are solving the biggest missing component in taking AI apps to production: now you can enforce LLM behavior and route requests with precision, in one go.
Rohit Agarwal, Ayush 14 Aug 2024
Portkey & Patronus - Bringing Responsible LLMs in Production

Portkey & Patronus - Bringing Responsible LLMs in Production

Patronus AI's suite of evaluators is now available on the Portkey Gateway.
Vrushank Vyas 14 Aug 2024
