As large language models (LLMs) become more powerful, **how we communicate with them, known as _prompting_, has become a critical skill.** Prompting is the art and science of crafting inputs (prompts) that guide the LLM to generate useful, coherent, and accurate outputs.
Unlike traditional programming, where rules are rigidly coded, prompting is a dynamic and creative interface: you "nudge" the model in the right direction through structured or descriptive language.
Below you can find a variety of prompting techniques, both foundational and advanced. Each technique is **designed for specific goals such as improving reasoning, managing complexity, boosting accuracy, or integrating external tools.**
For each method, you’ll find a description of what it does, why you’d use it, an ELI5 (Explain Like I’m 5) explanation, and examples to put it into practice.
## **A. Foundational Prompting Techniques**
### **1. Zero-Shot Prompting**
- **Purpose**: Quickly get the model to perform a task with no prior examples. Useful when testing new tasks or generating answers in minimal time.
- **ELI5**: Like giving a question to someone without showing them how to answer it first.
- **Example**:
> "Translate this sentence into French: 'The weather is nice today.'"
### **2. One-Shot & Few-Shot Prompting**
- **Purpose**: Give one or a few examples to help the model learn a pattern. Especially effective when the task is ambiguous or nuanced.
- **ELI5**: Show the model one or two solved examples, then ask it to solve the next one.
- **Example**:
> "Happy face → Positive
> Sad face → Negative
> 'I'm feeling okay today.' → ?"
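The few-shot pattern above can be sketched as a small prompt builder. This is a minimal, illustrative sketch: the `few_shot_prompt` helper is hypothetical, and the resulting string would be sent to whatever LLM API you use.

```python
# Build a few-shot sentiment prompt from labeled examples.
examples = [
    ("Happy face", "Positive"),
    ("Sad face", "Negative"),
]

def few_shot_prompt(examples, query):
    # One "input → label" line per solved example, then the open query.
    lines = [f"{text} → {label}" for text, label in examples]
    lines.append(f"{query} → ?")
    return "\n".join(lines)

prompt = few_shot_prompt(examples, "'I'm feeling okay today.'")
print(prompt)
```

The same builder works for any labeling task: swap in new example pairs and the model infers the pattern from them.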
### **3. In-Context Learning**
- **Purpose**: Teach the model new tasks by embedding examples directly in the prompt - no training needed.
- **ELI5**: Like teaching someone new skills just by giving enough examples, without formal training.
- **Example**:
> Examples: “Great job!” → Positive, “This is terrible.” → Negative
> Input: “I failed my exam.”
> Classify the input.
### **4. System Prompting**
- **Purpose**: Set the global behavior or personality of the model for an entire session.
- **ELI5**: Like telling a robot to "act like a friendly librarian" before asking it questions.
- **Example**:
> "You are an expert travel advisor. Always respond helpfully and concisely."
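In code, a system prompt is typically the first entry in a role-tagged message list. The field names below follow a convention common to many chat-style LLM APIs, but check your provider's documentation; this is a sketch, not a specific API's schema.

```python
# A role-tagged message list, as used by many chat-style LLM APIs.
# The system message is sent once and shapes every reply in the session.
messages = [
    {
        "role": "system",
        "content": "You are an expert travel advisor. Always respond helpfully and concisely.",
    },
    {
        "role": "user",
        "content": "What should I pack for a week in Lisbon in May?",
    },
]
```

Follow-up user turns are appended to the same list, so the system instruction keeps applying without being repeated.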
### **5. Role Prompting**
- **Purpose**: Assign a specific persona or point of view for better control over tone and knowledge.
- **ELI5**: Pretend the model is an expert in a particular job or field.
- **Example**:
> "As a seasoned financial analyst, explain what inflation means to a 10-year-old."
### **6. Contextual Prompting**
- **Purpose**: Provide extra background or prior information needed to improve the model’s accuracy.
- **ELI5**: Like giving someone the backstory before asking their opinion.
- **Example**:
> "Based on this article about global warming, summarize the potential effects on agriculture."
## **B. Reasoning-Based Prompting Techniques**
### **7. Chain-of-Thought (CoT) Prompting**
- **Purpose**: Break down complex problems into logical steps. Boosts reasoning and accuracy.
- **ELI5**: Show your thinking step-by-step, like in a math class.
- **Example**:
> "If a train leaves at 3 PM and arrives at 5 PM, how long was the trip? Let's think step-by-step."
### **8. Zero-Shot CoT**
- **Purpose**: Trigger step-by-step reasoning without needing examples - just by asking.
- **ELI5**: You say, "Let's think this through," and the model starts explaining.
- **Example**:
> "How many months have 31 days? Let’s think step-by-step."
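Because zero-shot CoT needs nothing but the trigger phrase, it is easy to automate. A minimal sketch (the `zero_shot_cot` helper is hypothetical):

```python
def zero_shot_cot(question):
    # Appending a reasoning trigger is all zero-shot CoT requires.
    return f"{question} Let's think step-by-step."

prompt = zero_shot_cot("How many months have 31 days?")
```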
### **9. Step-Back Prompting**
- **Purpose**: Encourage the model to think at a higher level before solving. Useful for debugging or strategizing.
- **ELI5**: First, think about how to solve _any_ problem like this before tackling the actual one.
- **Example**:
> "Before answering this riddle, what are general steps to solve logic puzzles?"
### **10. Tree-of-Thought (ToT) Prompting**
- **Purpose**: Explore multiple possible answers in a structured way to improve solution quality.
- **ELI5**: Like creating a tree of options and picking the best branch.
- **Example**:
> "List three strategies to improve a website’s SEO and evaluate each one."
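The branch-and-evaluate loop at the heart of ToT can be sketched as a toy search. The candidate strategies and their scores below are stand-ins; in a real system the model would both generate the branches and score them.

```python
# Toy Tree-of-Thought step: generate candidate branches, score each,
# keep the best. Scores are stand-ins for model self-evaluations.
candidates = {
    "Improve page load speed": 0.8,
    "Add keyword-stuffed pages": 0.2,
    "Earn backlinks from reputable sites": 0.9,
}

best = max(candidates, key=candidates.get)
```

A full ToT implementation repeats this step at every node, expanding only the highest-scoring branches.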
### **11. Self-Consistency Prompting**
- **Purpose**: Improve reliability by sampling multiple answers and choosing the most consistent one.
- **ELI5**: Like asking someone five times and trusting the answer they gave most often.
- **Example**:
> Ask: "What is the capital of Brazil?"
> Compare multiple generated answers and choose the most frequent one.
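The "choose the most frequent answer" step is a plain majority vote. A minimal sketch, with the five sampled answers hard-coded in place of real model outputs:

```python
from collections import Counter

# Pretend these are five independently sampled answers from the model.
samples = ["Brasília", "Rio de Janeiro", "Brasília", "Brasília", "Rio de Janeiro"]

# Majority vote: the most common answer wins.
answer, votes = Counter(samples).most_common(1)[0]
```

In practice you would sample with a nonzero temperature so the runs can actually disagree.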
### **12. Problem Decomposition**
- **Purpose**: Divide complex problems into manageable subtasks.
- **ELI5**: Chop up the big question into little ones and solve each.
- **Example**:
> "What’s the average of 20, 30, and 40?"
> Step 1: Add the numbers
> Step 2: Divide by 3
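The two steps above map directly onto code:

```python
numbers = [20, 30, 40]

# Step 1: add the numbers.
total = sum(numbers)

# Step 2: divide by the count.
average = total / len(numbers)
```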
## **C. Functional Prompting Techniques**
### **13. ReAct (Reason + Act) Prompting**
- **Purpose**: Combine reasoning with actions, such as using a calculator, tool, or API.
- **ELI5**: Think out loud, then do something (like search Google or run a command).
- **Example**:
> "What’s the current temperature in Paris? Use a weather API to find out."
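A ReAct loop interleaves "Thought", "Action", and "Observation" steps. The sketch below mocks the tool call: `get_weather` is a hypothetical stub standing in for a real weather API, and a real agent would parse the Action lines out of the model's text rather than hard-code them.

```python
# Minimal ReAct-style trace with a mocked tool.
def get_weather(city):
    # Stand-in for a real weather API call.
    return {"Paris": "18°C"}.get(city, "unknown")

trace = []
trace.append("Thought: I need the current temperature in Paris.")
trace.append("Action: get_weather('Paris')")
observation = get_weather("Paris")
trace.append(f"Observation: {observation}")
trace.append(f"Answer: It is {observation} in Paris.")
```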
### **14. Retrieval-Augmented Generation (RAG)**
- **Purpose**: Provide real-time retrieved documents to ground the model's responses.
- **ELI5**: Like Googling something before writing a summary.
- **Example**:
> "Given this research article, explain how CRISPR works."
Read more about: [[RAG (Retrieval-Augmented Generation)]]
### **15. Tool-Use Prompting**
- **Purpose**: Let the model use external tools for computation or data retrieval.
- **ELI5**: Ask it to use a calculator or spreadsheet.
- **Example**:
> "Calculate the compound interest on $1,000 at 5% for 3 years."
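The tool the model would delegate to here is just arithmetic. A sketch of the calculation itself, using the compound interest formula A = P(1 + r)^n:

```python
# Compound interest on $1,000 at 5% annual rate for 3 years.
principal, rate, years = 1000, 0.05, 3

amount = principal * (1 + rate) ** years   # final balance
interest = amount - principal               # interest earned: $157.63 (rounded)
```

Delegating even simple arithmetic like this to a tool avoids the model's tendency to make token-level calculation slips.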
### **16. Code Prompting**
- **Purpose**: Guide the model to generate, explain, or translate code.
- **ELI5**: Like asking a coder buddy to write or explain a program.
- **Example**:
> "Write a Python script to count word frequency in a text file."
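A response to the example prompt might look like the following (shown here on an in-memory string rather than a file, so it is self-contained):

```python
from collections import Counter
import re

def word_frequency(text):
    """Count how often each word appears, case-insensitively."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

freq = word_frequency("the cat and the hat")
```

To run it on a file, pass `open("input.txt").read()` to the function instead.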
## **D. Advanced Prompt Engineering**
### **17. Automatic Prompt Engineering**
- **Purpose**: Automatically generate variations of prompts for testing and optimization.
- **ELI5**: Have the AI brainstorm different ways to ask the same thing.
- **Example**:
> "Generate 5 ways to ask: 'Do you like coffee?'"
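Mechanically, this amounts to generating candidate phrasings and testing each one. The sketch below fakes the generation step with fixed templates; in a real pipeline the paraphrases would come from the model itself and be scored against a held-out task.

```python
# Sketch: prompt variants from templates, standing in for
# model-generated paraphrases.
templates = [
    "Do you like {x}?",
    "Are you a fan of {x}?",
    "How do you feel about {x}?",
    "Would you say you enjoy {x}?",
    "Is {x} something you like?",
]

variants = [t.format(x="coffee") for t in templates]
```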
### **18. Soft Prompting (Prompt Tuning)**
- **Purpose**: Use learned, non-linguistic vectors instead of text prompts.
- **ELI5**: Feed the model invisible instructions only it understands.
- **Example**:
> Soft prompts are learned during a tuning phase on the back end, so there is no text prompt a user could type directly.
### **19. Ensembling Prompts**
- **Purpose**: Use multiple prompts and combine their results for better accuracy.
- **ELI5**: Like polling friends for opinions and choosing the average.
- **Example**:
> Summarize a text three different ways, then merge them for clarity.
### **20. Self-Refinement**
- **Purpose**: Let the model review and improve its own answers.
- **ELI5**: Like editing your own writing to make it better.
- **Example**:
> "Here’s my answer. Now review it for mistakes and revise."