Prompt Engineering Best Practices
A practical guide to writing effective prompts for large language models (LLMs). Covers structure, context management, output formatting, and common patterns for reliable results.
Overview
Prompt engineering is the practice of designing inputs that guide LLMs toward accurate, useful outputs. Well-crafted prompts reduce ambiguity, improve consistency, and minimize the need for follow-up corrections.
Core Principles
- Be specific. Vague prompts produce vague answers. State exactly what you need.
- Provide context. Include relevant background information the model needs to respond accurately.
- Define the output format. Specify structure (JSON, markdown, bullet points) when consistency matters.
- Use examples. Show the model what good output looks like (few-shot prompting).
- Iterate. Treat prompts as code: test, refine, and version them.
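The "treat prompts as code" principle can be made concrete with a small versioned template. This is a minimal sketch; the template wording, version tag, and helper name are illustrative, not a standard API:

```python
# A minimal sketch of treating a prompt as versioned code.
# The template text and "version" tag are illustrative assumptions.
SUMMARIZE_PROMPT_V2 = {
    "version": "2.0",
    "template": (
        "You are a concise technical editor.\n"
        "Summarize the text below in exactly {n_bullets} bullet points.\n\n"
        "Text:\n{text}"
    ),
}

def render_prompt(prompt: dict, **kwargs) -> str:
    """Fill the template's placeholders with concrete values."""
    return prompt["template"].format(**kwargs)

prompt = render_prompt(SUMMARIZE_PROMPT_V2, n_bullets=3, text="LLMs predict tokens.")
```

Keeping the template in version control alongside application code makes it easy to test, diff, and roll back prompt changes.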
Prompt Structure
Effective prompts typically include these components:
1. Role or Persona
Set the context for how the model should respond.
You are a senior backend engineer reviewing code for security vulnerabilities.
2. Task Description
Clearly state what you want the model to do.
Analyze the following Python function and identify any SQL injection risks.
3. Input Data
Provide the content the model should process, clearly delimited.
```python
def get_user(user_id):
    query = f"SELECT * FROM users WHERE id = {user_id}"
    return db.execute(query)
```
4. Output Requirements
Specify format, length, and any constraints.
Respond with:
1. A risk assessment (High/Medium/Low)
2. A brief explanation (2-3 sentences)
3. A corrected code snippet using parameterized queries
Common Patterns
Zero-shot Prompting
Direct instruction without examples. Works well for straightforward tasks.
Summarize this article in three bullet points:
[article text]
Few-shot Prompting
Provide examples to demonstrate the expected pattern.
Convert these sentences to past tense:
Input: "She walks to the store."
Output: "She walked to the store."
Input: "They run every morning."
Output: "They ran every morning."
Input: "He writes code."
Output:
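Few-shot prompts like the one above can be assembled programmatically from example pairs. The helper below is a sketch; the pairs and instruction mirror the sample:

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble an instruction, worked examples, and the final open input."""
    lines = [instruction]
    for source, target in examples:
        lines.append(f'Input: "{source}"')
        lines.append(f'Output: "{target}"')
    # End with an unanswered input so the model completes the pattern.
    lines.append(f'Input: "{query}"')
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Convert these sentences to past tense:",
    [("She walks to the store.", "She walked to the store."),
     ("They run every morning.", "They ran every morning.")],
    "He writes code.",
)
```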
Chain-of-Thought
Ask the model to show its reasoning for complex problems.
Solve this step by step:
A train travels 120 miles in 2 hours. It then travels 90 miles in 1.5 hours.
What is the average speed for the entire journey?
Show your work before giving the final answer.
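The expected reasoning for this example is easy to verify: average speed is total distance divided by total time.

```python
# Verify the worked example: average speed = total distance / total time.
total_distance = 120 + 90   # miles
total_time = 2 + 1.5        # hours
average_speed = total_distance / total_time  # 210 / 3.5
```

A correct chain-of-thought response should walk through these same intermediate sums before stating the final answer.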
System + User Message Pattern
Separate persistent instructions from the specific request.
System: You are a technical writer. Write in active voice.
Keep responses under 100 words. Use bullet points for lists.
User: Explain what a load balancer does.
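Most chat APIs express this pattern as a list of role-tagged messages. The dict shape below follows the common role/content convention; exact field names vary by provider:

```python
# System message holds persistent instructions; user message holds the request.
messages = [
    {
        "role": "system",
        "content": (
            "You are a technical writer. Write in active voice. "
            "Keep responses under 100 words. Use bullet points for lists."
        ),
    },
    {"role": "user", "content": "Explain what a load balancer does."},
]
```

Because the system message persists across turns, style and scope rules placed there do not need to be repeated in every user request.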
Output Formatting
When you need structured output, be explicit about the format.
JSON Output
Extract the following information and return it as JSON:
- product_name (string)
- price (number)
- in_stock (boolean)
Product description: "The UltraWidget Pro costs $49.99 and ships immediately."
Return only valid JSON, no additional text.
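On the consuming side, it helps to validate that the model actually returned the requested shape. This sketch assumes the raw model reply is available as a string; the field list matches the prompt above:

```python
import json

# Fields and types requested in the prompt.
EXPECTED_TYPES = {"product_name": str, "price": (int, float), "in_stock": bool}

def parse_product_json(raw: str) -> dict:
    """Parse a model reply and verify the requested fields and types."""
    data = json.loads(raw)  # raises ValueError if the reply is not valid JSON
    for field, expected in EXPECTED_TYPES.items():
        if not isinstance(data.get(field), expected):
            raise ValueError(f"missing or mistyped field: {field}")
    return data

reply = '{"product_name": "UltraWidget Pro", "price": 49.99, "in_stock": true}'
product = parse_product_json(reply)
```

If parsing or validation fails, the caller can retry the prompt or fall back rather than propagating malformed data downstream.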
Markdown Tables
Compare these three databases. Return a markdown table with columns:
Database | Type | Best For | Limitations
Databases: PostgreSQL, MongoDB, Redis
Handling Edge Cases
Unknown or Uncertain Information
If you don't know the answer or the information is outside your training data,
say "I don't have reliable information about this" rather than guessing.
Constraining Scope
Answer only based on the provided document. Do not use external knowledge.
If the answer is not in the document, say "Not found in the provided text."
Length Control
Provide a brief explanation (2-3 sentences maximum).
Or:
Write a detailed analysis (approximately 500 words).
Testing and Iteration
- Test with varied inputs. A prompt that works for one example may fail on edge cases.
- Check for consistency. Run the same prompt multiple times to verify output stability.
- Version your prompts. Track changes so you can roll back if a revision performs worse.
- Measure quality. Define success criteria (accuracy, format compliance, relevance) and evaluate systematically.
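The consistency check described above can be automated. In this sketch, `call_model` is a stub standing in for whatever LLM client you use, so the example is self-contained:

```python
import json

def call_model(prompt: str) -> str:
    # Placeholder: substitute your actual LLM client call here.
    return '{"summary": "stub reply"}'

def valid_json(text: str) -> bool:
    """Success criterion: the reply parses as JSON."""
    try:
        json.loads(text)
        return True
    except ValueError:
        return False

def format_compliance_rate(prompt: str, is_valid, runs: int = 5) -> float:
    """Run the same prompt several times and measure how often output passes."""
    passes = sum(1 for _ in range(runs) if is_valid(call_model(prompt)))
    return passes / runs

rate = format_compliance_rate("Summarize as JSON: ...", valid_json, runs=10)
```

Tracking this rate per prompt version gives a concrete signal for whether a revision improved or degraded reliability.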
Anti-patterns to Avoid
- Ambiguous instructions. "Make it better" gives no actionable guidance.
- Missing context. Assuming the model knows your specific use case or domain terminology.
- Overloading a single prompt. Asking for too many things at once reduces quality. Break complex tasks into steps.
- Ignoring output variability. LLMs are non-deterministic. Build systems that handle variation gracefully.