Delimiters in Prompt Engineering
Learn how to use delimiters in prompt engineering to improve AI responses. This blog explains delimiter types, best practices, and practical examples for developers working with large language models.
Prompt engineering is a bit like having a conversation with someone who takes everything literally. Without clear signals, your AI might miss what you actually meant. That's where delimiters help.
Delimiters are boundary markers that separate different sections of your prompt. They help language models distinguish between those parts, making your AI interactions clearer and more productive.
What are delimiters in prompt engineering?
Delimiters are the boundary markers that separate different sections of your prompt. They structure your input data so large language models can parse and interpret the components correctly.
When you're writing prompts, you have several delimiter options:
- Quotes ("", '') work well for delineating specific text inputs you want the model to process or generate.
- Triple backticks (```) are specifically designed for code blocks—they signal to the model that the enclosed content follows programming syntax rather than natural language.
- Pipes (|) are effective for structured data input or creating distinct columns in tabular information.
- XML/JSON-style tags (<section>...</section>) provide semantic structure for complex prompts with multiple components that require clear identification.
- Special markers (###, ---) function as visual section breaks that create obvious boundaries between different prompt components.
The delimiter choice should match your specific use case and prompt architecture. Maintaining consistent delimiter patterns throughout your prompts creates predictable parsing patterns for the model.
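As a sketch of that consistency, the snippet below wraps each prompt component in XML-style tags. The tag names and the `build_prompt` helper are illustrative, not a required schema or API:

```python
def build_prompt(instructions: str, document: str) -> str:
    """Wrap each component in matching tags so the model can tell
    the instructions apart from the text it should process."""
    return (
        f"<instructions>\n{instructions}\n</instructions>\n"
        f"<document>\n{document}\n</document>"
    )

prompt = build_prompt(
    "Summarize the document in one sentence.",
    "Delimiters are boundary markers that separate prompt sections.",
)
print(prompt)
```

Because the same tag pair always wraps the same kind of content, the model sees a predictable structure from one prompt to the next.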
What is the purpose of using delimiters in prompt engineering?
Delimiters do more than format your prompts. Used well, they improve prompt quality and model performance in several ways:
- Improved parsing: When you use delimiters properly, LLMs can recognize and process structured input more accurately. The model can identify distinct sections of your prompt and handle each according to its purpose.
- Context separation: Delimiters create clear boundaries that prevent different parts of your prompt from bleeding into each other. This separation helps the model maintain the appropriate context for each section rather than treating everything as one continuous block of text.
- Reducing ambiguity: By clearly marking where instructions end and examples begin, delimiters eliminate confusion about the role of each prompt component. This clarity is especially important for complex prompts with multiple sections.
- Better output consistency: When your input is well-structured with delimiters, you're more likely to get consistently formatted responses. The model learns from the structure you provide and often mirrors it in its output.
Practical use cases of delimiters
1. Providing structured input
Using delimiters to separate instructions from examples significantly improves clarity in your prompts. Take a look at this example:
Summarize the following text:
"Delimiters are essential for structuring LLM inputs..."
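In code, this kind of prompt can be assembled as a plain string, with triple quotes marking exactly where the text to summarize begins and ends. This is a minimal sketch; the variable names are arbitrary:

```python
# The triple quotes delimit the text so the model can't confuse
# it with the instruction that precedes it.
text_to_summarize = "Delimiters are essential for structuring LLM inputs..."

prompt = (
    "Summarize the text delimited by triple quotes.\n\n"
    f'"""\n{text_to_summarize}\n"""'
)
print(prompt)
```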
2. Multi-part conversation
### System
You are a customer service agent for a software company. Be helpful but concise.

### Chat History
Customer: I can't log into my account.
Agent: I'm sorry to hear that. Could you tell me if you're getting any specific error message?
Customer: It says "invalid credentials" but I'm sure my password is correct.

### Current Input
What troubleshooting steps should I recommend to the customer?
The hash-mark delimiters (###) clearly establish three distinct sections: system instructions, previous conversation history, and the current prompt. This structure helps the LLM understand the complete context before generating a response.
Without these delimiters, the model might confuse instructions with conversation history or misinterpret which part is the actual question it needs to answer. The clear separation allows the model to reference the history while focusing on addressing the specific current request.
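A prompt like this can be assembled programmatically. The sketch below joins the three parts with `###` headers; the `assemble_prompt` helper is hypothetical, not part of any library:

```python
def assemble_prompt(system: str, history: list, current: str) -> str:
    """Join system instructions, chat history, and the current
    question with ### headers so the boundaries are explicit."""
    history_block = "\n".join(history)
    return (
        f"### System\n{system}\n\n"
        f"### Chat History\n{history_block}\n\n"
        f"### Current Input\n{current}"
    )

prompt = assemble_prompt(
    "You are a customer service agent for a software company. Be helpful but concise.",
    [
        "Customer: I can't log into my account.",
        "Agent: I'm sorry to hear that. Could you tell me if you're "
        "getting any specific error message?",
        "Customer: It says \"invalid credentials\" but I'm sure my "
        "password is correct.",
    ],
    "What troubleshooting steps should I recommend to the customer?",
)
print(prompt)
```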
3. Data formatting
Please return the response in the following JSON format:
{
"summary": "text",
"keywords": ["word1", "word2"]
}
By showing the exact format with proper JSON structure, you're giving the model a clear template to follow. The curly braces and quote marks serve as delimiters that define different data elements within the response.
This approach is particularly useful when you need to process the AI's output programmatically. The structured format makes it easy to parse the response and extract specific pieces of information without having to deal with unstructured text.
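Assuming the model honors the template, the output can be parsed with Python's standard `json` module. The `raw_output` string below is a stand-in for a real model response:

```python
import json

# Hypothetical model output that follows the requested JSON template.
raw_output = (
    '{"summary": "Delimiters structure prompts.", '
    '"keywords": ["delimiters", "prompts"]}'
)

# json.loads raises json.JSONDecodeError if the model strayed
# from the format, so malformed responses fail loudly.
data = json.loads(raw_output)
print(data["summary"])
print(data["keywords"])
```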
4. Code generation and debugging
Write a Python function that:
- Takes a list of integers as input
- Removes all duplicates
- Returns the sorted list in descending order
Your output should include:

```python
# Your code here
```
The triple backticks clearly mark where your requirements end and where you expect the code to begin. This structure helps the model understand that you want actual working Python code, not just a description of how to write it.
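The same fences make the response easy to post-process: a small regular expression can pull the code out of the surrounding prose. The response text and the `dedupe_desc` function below are invented for illustration:

```python
import re

# Triple backticks, built up so they don't clash with this
# snippet's own code fence.
FENCE = "`" * 3

# Hypothetical model response wrapping the generated code in backticks.
response = (
    "Here is the function:\n\n"
    f"{FENCE}python\n"
    "def dedupe_desc(nums):\n"
    "    return sorted(set(nums), reverse=True)\n"
    f"{FENCE}\n"
)

# Grab everything between the opening and closing fence.
match = re.search(rf"{FENCE}(?:python)?\n(.*?){FENCE}", response, re.DOTALL)
code = match.group(1) if match else ""
print(code)
```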
Best practices for using delimiters
- Choose clear and unambiguous delimiters that don't interfere with your content. If your text contains quotes, using quotes as delimiters could cause confusion.
- Use industry-standard delimiters where they make sense (e.g., JSON for structured outputs). The model is already familiar with these formats.
- Maintain consistency in how you use delimiters throughout your prompts. Switching delimiter styles can confuse the model.
- Avoid excessive nesting of delimiters, which can make your prompts hard to follow, both for you and the AI.
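The first practice above can even be automated: check the content for your chosen delimiter before wrapping it. This is a sketch; the fallback marker is arbitrary, and a real implementation would pick something guaranteed absent from your data:

```python
def safe_wrap(text: str, delimiter: str = '"""') -> str:
    """Wrap text in a delimiter, falling back to an alternative
    marker if the delimiter already appears in the content."""
    if delimiter in text:
        # Arbitrary fallback; choose one your content never contains.
        delimiter = "<<<END>>>"
    return f"{delimiter}\n{text}\n{delimiter}"

print(safe_wrap("plain text"))
print(safe_wrap('she wrote """ in her reply'))
```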
Bringing it all together
Delimiters might seem like a small detail, but they significantly impact how well language models understand what you're asking for. As you work with AI systems, experiment with different delimiter styles for your prompts to find what works best for your specific needs.
Remember that well-structured prompts with clear delimiters lead to better AI-driven results, making your interactions with language models more productive and predictable.