Must Know Prompting Techniques

Prompt engineering describes the process of refining and controlling the output of large language models (LLMs) through carefully crafted prompts. While the term is widely used, the actual techniques behind it, beyond simply “add more detail to your prompts”, are often unclear. With numerous articles and online postings claiming six-figure salaries for those who know the magic of prompt engineering, it seems like a skill most people would like to develop. To that end, in this post I’ll provide background on some techniques for crafting better prompts.

Disclaimer: After learning these techniques, you may not be able to command a six-figure salary for prompt engineering. However, you will be able to improve the performance of your language models.

Prompting Techniques and Examples

  1. Zero-shot Prompting

    Explanation: Ask the model to perform a task without any examples, relying only on its pre-existing knowledge.

    Example: “Translate this sentence into Spanish: ‘Can machines be trusted?’.”

  2. Few-shot Prompting

    Explanation: Provide a handful of examples (typically 1-5) to help the model understand the format or task better. This is the simplest form of “training” a model.

    Example:

    “Convert Python code to JavaScript:”

    • Example 1: Convert: print("hello") -> console.log("hello")
    • Example 2: Convert: for i in range(5): -> for (let i = 0; i < 5; i++) {
    • Example 3: Convert: def add(a, b): -> function add(a, b) {
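
    In code, a few-shot prompt is just the task description, the worked examples, and the new input concatenated in a consistent format. The sketch below only assembles the prompt string; the names (`build_few_shot_prompt`, `EXAMPLES`) are my own, and sending the result to a model is left to whatever client library you use.

```python
# Minimal sketch: assemble a few-shot prompt from worked examples.
# The helper and example data are illustrative, not a specific API.

EXAMPLES = [
    ('print("hello")', 'console.log("hello");'),
    ("for i in range(5):", "for (let i = 0; i < 5; i++) {"),
    ("def add(a, b):", "function add(a, b) {"),
]

def build_few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Concatenate the task, each worked example, and the new input."""
    lines = [task]
    for source, target in examples:
        lines.append(f"Convert: {source} -> {target}")
    # End with the unsolved case so the model completes the pattern.
    lines.append(f"Convert: {query} ->")
    return "\n".join(lines)

prompt = build_few_shot_prompt("Convert Python code to JavaScript:", EXAMPLES, "x = 1")
print(prompt)
```

    The consistent `Convert: X -> Y` framing matters as much as the examples themselves: the model learns the output format from the pattern.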
  3. Iterative Prompting

    Explanation: Use prompts to further refine the model’s responses based on previous outputs.

    Example:

    • Prompt 1: “Summarize the impact of the Internet on communication.”
    • Prompt 2: “Can you refine your summary by focusing on online culture’s impact on everyday communication?”
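
    Programmatically, iterative prompting is a loop that feeds each response back into the next prompt. In this sketch, `respond()` is a stub standing in for a real model call; only the loop structure is the point.

```python
# Sketch of an iterative-prompting loop. respond() is a placeholder for
# a real model call; the key idea is feeding each answer back as context.

def respond(prompt: str) -> str:
    # Stub: a real implementation would call your model of choice here.
    return f"[response to: {prompt[:40]}]"

def iterate(initial_prompt: str, refinements: list[str]) -> str:
    answer = respond(initial_prompt)
    for refinement in refinements:
        # Each follow-up includes the previous answer plus the new instruction.
        answer = respond(f"Previous answer: {answer}\n{refinement}")
    return answer

result = iterate(
    "Summarize the impact of the Internet on communication.",
    ["Refine the summary to focus on online culture's impact on everyday communication."],
)
```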
  4. Tree of Thoughts Prompting

    Explanation: Instruct the model to generate multiple ideas or solutions and explore various pathways before reaching a conclusion.

    Example: “What are the pros and cons of renewable energy? Now, break down pros and cons for each major type (solar, wind, etc.).”

  5. Counterfactual Prompting

    Explanation: Pose hypothetical or alternate scenarios to the model in order to test how it responds to situations outside typical contexts.

    Example: “Explain the history of the telephone. If the telephone had never been invented, how might communication technology have evolved?”

  6. Generated Knowledge Prompting

    Explanation: Prompt the model to generate intermediate knowledge or facts before tackling a more complex task to improve accuracy.

    Example: “Before explaining quantum mechanics, can you provide an overview of classical mechanics?”

  7. Chain of Thought Prompting

    Explanation: Encourage the model to break down complex reasoning into smaller, intermediate steps to enhance accuracy. This is similar to how you may solve a challenging problem.

    Example: “How do you calculate the area of a triangle? Start by explaining what the formula represents.”
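
    The simplest way to apply chain of thought in code is to append a reasoning instruction to the question. The sketch below uses the classic “Let’s think step by step” trigger phrase; the wrapper function is my own naming.

```python
# Sketch: wrap a question with a chain-of-thought instruction so the model
# spells out intermediate steps before the final answer.

def chain_of_thought(question: str) -> str:
    return (
        f"{question}\n"
        "Let's think step by step, showing each intermediate result, "
        "then state the final answer on its own line."
    )

prompt = chain_of_thought("How do you calculate the area of a triangle?")
print(prompt)
```

    Asking for the final answer on its own line also makes the response easier to parse if you need to extract it programmatically.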

  8. Maieutic Prompting

    Explanation: Similar to chain of thought prompting, this involves asking the model to answer a question and provide reasoning. You then prompt the model to provide an explanation of its explanation, pushing the model toward deeper understanding or clarification.

    Example:

    • Initial Prompt: “Why do you think renewable energy is important?”
    • Follow-up: “And what challenges could arise from transitioning to renewable energy?”

    Bonus tip: You can ask the model how to pronounce “maieutic”.

  9. Meta-prompting

    Explanation: Instruct the model to generate its own prompts based on the task, encouraging a more self-directed problem-solving approach. This is a great way to get the model to think for itself; language models are surprisingly good at interpreting your prompts and helping to optimize them for better results.

    Example: “Create a prompt that could help someone write a story about time travel.”
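
    Meta-prompting naturally becomes a two-stage chain: one call asks the model to write a prompt, and a second call executes it. As before, `respond()` is a stub for a real model call, and the chain structure is the takeaway.

```python
# Sketch of a two-stage meta-prompting chain. respond() is a placeholder
# for a real model call: stage one writes the prompt, stage two runs it.

def respond(prompt: str) -> str:
    # Stub: a real implementation would call your model of choice here.
    return f"[model output for: {prompt[:30]}]"

def meta_prompt(task: str) -> str:
    # Stage 1: ask the model to author a prompt for the task.
    generated = respond(f"Write an effective prompt for this task: {task}")
    # Stage 2: use the model-authored prompt as the actual instruction.
    return respond(generated)

story = meta_prompt("write a story about time travel")
```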

  10. Self-Consistency Prompting

    Explanation: Multiple responses are generated for the same prompt, and the most common or consistent answer is selected.

    Example: “When should you adopt an event-driven architecture?” (Generate multiple answers, then select the most consistent one).
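
    The selection step here is plain majority voting. The sketch below stubs the sampled responses as a fixed list; in practice each entry would come from a separate (temperature-sampled) model call, and real answers would first be normalized before comparison.

```python
from collections import Counter

# Sketch of self-consistency: sample several answers to the same prompt
# (stubbed as a fixed list here) and keep the most common one.

def most_consistent(responses: list[str]) -> str:
    """Return the answer that appears most often across samples."""
    return Counter(responses).most_common(1)[0][0]

samples = [
    "When you need loosely coupled services reacting to state changes.",
    "When components must react asynchronously to events.",
    "When you need loosely coupled services reacting to state changes.",
]
answer = most_consistent(samples)
```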

  11. Context Expansion

    Explanation: Gradually increase the amount of relevant context provided to the model in order to improve the relevance and specificity of responses.

    Example:

    • Initial Prompt: “Given we’re discussing cybersecurity, can you elaborate on how encryption works?”
    • Context Expansion Prompt: “Now, consider the impact of quantum computing on encryption.”
  12. Information Retrieval

    Explanation: Prompt the model to fetch relevant external data or sources to provide more specific or fact-based responses.

    Example: “Who won the 2022 World Cup? Search the web and provide the most accurate answer.”

  13. Active Prompting

    Explanation: Actively guide the model by providing feedback or making adjustments to influence the process it uses to generate outputs. Unlike Iterative Prompting, which focuses on refining the final output by reviewing and modifying responses in subsequent attempts, Active Prompting aims to refine the model’s reasoning or decision-making process during the response itself. Some consider Active Prompting to overlap with Chain of Thought prompting, as both involve shaping the model’s internal logic and reasoning steps.

    Example:

    • Initial Prompt: “Explain blockchain technology.”
    • Feedback: “Simplify your explanation and assume the reader has no prior knowledge.”

Think I’ve missed a prompting technique or want to learn more about prompting techniques? You can always prompt your favorite model to refer to this post and ask for more information.

Alternatively, feel free to follow up with me on X (formerly Twitter) @ItsBenDurham or LinkedIn Benjamin Durham-Kilcullen. I’m always happy to chat about AI, machine learning, and prompting techniques.