This cheatsheet distills key techniques for crafting powerful LLM prompts, inspired by a real-world code generation workflow.
🗝️ Key Prompting Techniques
🎭 Role-Based Prompting
- What it is: Tell the LLM to “act” as an expert in a specific domain. This primes it to use the vocabulary, conventions, and priorities of that domain, which produces more relevant responses.
- Example: In our code generation example, the prompt tells the LLM, “You are an expert software engineer.”
- Quick Tip: Before writing your prompt, define the role most suited to your desired outcome (a minimal sketch follows below).
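A minimal sketch in Python, assuming the role line lives at the top of a system prompt; the exact wording is illustrative, not a fixed formula:

```python
# Illustrative sketch: put the role at the top of the system prompt so it
# applies to every turn of the conversation.
ROLE = "You are an expert software engineer."

system_prompt = (
    ROLE + "\n"
    "You write clear, well-tested code and keep explanations brief."
)
print(system_prompt)
```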
🎯 Goal-Oriented Prompting
- What it is: Clearly state the objective you want the LLM to achieve.
- Example: “Your goal is to generate a detailed specification for a new software feature.”
- Quick Tip: Frame your goal as an action for the LLM to perform (sketched below).
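A short sketch pairing the role with an action-framed goal; the extra detail about requirements and edge cases is an illustrative assumption, not part of the original example:

```python
# Sketch: the role says who the model is; the goal says what "done" looks like.
role = "You are an expert software engineer."
goal = (
    "Your goal is to generate a detailed specification for a new software "
    "feature, covering requirements, edge cases, and acceptance criteria."
)
system_prompt = f"{role}\n{goal}"
print(system_prompt)
```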
🔗 Prompt Chaining
- What it is: Break down a complex task into smaller steps, feeding the output of each step into the next prompt.
- Example: Generating code involves three chained prompts: 1) Specification, 2) Plan, 3) Implementation. Each prompt builds on the previous one.
- Quick Tip: Map out the logical steps of your task to structure your prompt chain, as in the sketch after this list.
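A rough sketch of the three-step chain using the Anthropic Python SDK. The `complete` helper, the model alias, and the feature description are assumptions for illustration; substitute whatever client and model you actually use:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-3-5-sonnet-latest"  # placeholder; use whichever Claude model you have access to

def complete(system: str, user: str) -> str:
    """Hypothetical helper: send one prompt, return the plain-text reply."""
    message = client.messages.create(
        model=MODEL,
        max_tokens=2000,
        system=system,
        messages=[{"role": "user", "content": user}],
    )
    return message.content[0].text

SYSTEM = "You are an expert software engineer."
feature = "Add CSV export to the reporting dashboard."  # illustrative feature

# Step 1 -> Step 2 -> Step 3: each prompt feeds on the previous output.
spec = complete(SYSTEM, f"Write a detailed specification for this feature:\n{feature}")
plan = complete(SYSTEM, f"Write a step-by-step implementation plan for this specification:\n{spec}")
code = complete(SYSTEM, f"Write the implementation that follows this plan:\n{plan}")
print(code)
```

Note that the system prompt stays the same across all three calls, while the user message carries the accumulating context from earlier steps.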
📝 Crafting Effective Prompts
🗣️ System Prompt vs. User Message
- System Prompt: Sets the overall context and rules for the LLM. This remains constant across different runs.
- User Message: Contains specific instructions, context, or data that varies with each prompt.
- Example:
- System: “You are a helpful assistant.”
- User: “Translate this sentence into Spanish: ‘Hello, how are you?’”
- Quick Tip: Clearly separate static instructions (System) from dynamic inputs (User); the API sketch below shows how the two map onto separate fields.
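Using the Anthropic Python SDK as an illustration (the model name is a placeholder), the separation maps directly onto API fields: the system prompt is its own parameter, and the user message travels in the messages list:

```python
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=200,
    # Static rules: constant across runs.
    system="You are a helpful assistant.",
    # Dynamic input: changes with every request.
    messages=[
        {"role": "user", "content": "Translate this sentence into Spanish: 'Hello, how are you?'"}
    ],
)
print(message.content[0].text)
```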
🧠 Chain-of-Thought Prompting with Scratchpad Tags
- What it is: Encourage the LLM to “think out loud” by providing space for its reasoning process.
- How it works: Reserve space for reasoning with tags such as <scratchpad> (or simple labels like “Thoughts:”), and ask the model to work through the problem there before giving its final answer.
- Benefits: Improves accuracy and provides insights into the LLM’s decision-making.
- Quick Tip: Experiment with different tag placements within your prompts (see the extraction sketch below).
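A small sketch of the pattern: the prompt asks for reasoning inside <scratchpad> tags and the final answer inside <answer> tags, then the answer is pulled out with a regular expression. The reply string is mocked up for illustration, not real model output:

```python
import re

prompt = (
    "Think through the problem inside <scratchpad> tags, "
    "then give your final answer inside <answer> tags.\n\n"
    "Question: Which data structure gives O(1) average-case lookup by key?"
)

# Mocked-up reply in the expected shape (illustrative only).
reply = (
    "<scratchpad>Lookup by key needs hashing; a hash table averages O(1) "
    "per lookup, unlike a list scan at O(n).</scratchpad>\n"
    "<answer>A hash table (dictionary).</answer>"
)

answer = re.search(r"<answer>(.*?)</answer>", reply, re.DOTALL).group(1).strip()
print(answer)  # -> A hash table (dictionary).
```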
🏗️ Structured Responses with XML Tags
- What it is: Define the format of the LLM’s response using XML tags for easy parsing and extraction of information.
- Example:
<specification>
  <step>Install necessary libraries.</step>
  <step>Write the main function.</step>
</specification>
- Quick Tip: Choose XML tag names that clearly represent the data being structured; well-named tags also make the response easy to parse, as in the sketch below.
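A short parsing sketch, assuming the model returns the <specification> block exactly as shown above; Python's standard-library XML parser is enough to pull out the steps:

```python
import xml.etree.ElementTree as ET

response = """
<specification>
  <step>Install necessary libraries.</step>
  <step>Write the main function.</step>
</specification>
"""

root = ET.fromstring(response.strip())
steps = [step.text for step in root.findall("step")]
print(steps)  # -> ['Install necessary libraries.', 'Write the main function.']
```

In practice the model may wrap the XML in extra prose, so it is common to extract the tagged span first (for example with a regular expression) before parsing.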
➡️ Response Prefilling
- What it is: Guide the LLM’s response by supplying the first few tokens of its reply yourself; the model continues from that starting point instead of composing its own preamble.
- Example: If you want a numbered list, prefill the assistant’s reply with “1.” so the model continues the list (see the sketch below).
- Quick Tip: Prefill responses strategically to lock in the output format you need.
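One common way to prefill with the Anthropic Messages API is to end the messages list with a partial assistant turn; the model continues from it. The model name and prompt here are placeholders:

```python
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=300,
    messages=[
        {"role": "user", "content": "List three risks of deploying this feature without tests."},
        # The trailing assistant turn is the prefill; the reply continues after "1."
        {"role": "assistant", "content": "1."},
    ],
)
print("1." + message.content[0].text)
```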
🧰 Tools and Resources
- Anthropic API (Claude): https://www.anthropic.com/ – Access powerful LLMs like Claude for code generation and more.
- GitHub: https://github.com/ – A platform for collaborating on software projects and hosting code repositories.
🚀 Level-Up Your LLM Skills!
By mastering these advanced prompting techniques, you can unlock the full potential of LLMs for a wide range of tasks.
Remember:
- Experiment! The best way to learn is by trying different prompting strategies.
- Analyze the results! Pay attention to how changes in your prompts affect the LLM’s output.
- Keep learning! The field of LLM prompting is constantly evolving – stay updated on the latest techniques and tools.