01 / Fundamentals

What Is Prompt Engineering?

Prompt engineering is the discipline of crafting inputs to language models that reliably produce the outputs you need. It's part art, part science — and 100% learnable.

Applies to: Claude, ChatGPT, Grok, Gemini

Language models are next-token predictors — they output the most statistically likely continuation of whatever you give them. This sounds limiting, but it means the model's output is exquisitely sensitive to how you frame your input. A vague prompt gets a vague response. A precise, structured prompt gets a precise, structured response.

Why does it matter?

The gap between a mediocre prompt and an expert one can mean the difference between a response that's barely useful and one that saves hours of work. Evaluations of structured techniques such as chain-of-thought prompting have reported large quality gains on complex reasoning tasks.

The three variables you control

1. Framing — Who is the model? What context does it have? What is it trying to achieve?
2. Instruction clarity — Is the task specific, actionable, and unambiguous?
3. Output scaffolding — Does the model know what format you expect?
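The three variables above can be composed mechanically. A minimal Python sketch (the function and parameter names are ours, for illustration only, not a standard API):

```python
def build_prompt(framing: str, instruction: str, output_format: str) -> str:
    """Combine the three controllable variables into a single prompt string."""
    return (
        f"{framing}\n\n"                # framing: who the model is, what context it has
        f"Task: {instruction}\n\n"      # instruction clarity: specific and actionable
        f"Respond as: {output_format}"  # output scaffolding: the expected format
    )

prompt = build_prompt(
    framing="You are a senior data analyst reviewing quarterly sales figures.",
    instruction="Identify the three largest revenue changes and explain each in one sentence.",
    output_format="a numbered list",
)
```

Changing any one variable while holding the other two fixed is also a cheap way to test which part of your prompt is doing the work.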

02 / Fundamentals

Anatomy of a Great Prompt

Every high-performing prompt shares the same structural DNA. Master these five components and you'll never write a weak prompt again.

A complete, well-structured prompt contains five elements. You don't always need all five — but knowing when to include each is the core skill.

The five components

structure

<role>You are [expert identity with specific credentials]</role>

<context>[Situation, constraints, who this is for, what's at stake]</context>

<task>[Specific, action-verb-led instruction with measurable output]</task>

<output_format>[Exactly how you want the response structured]</output_format>

<constraints>[Reasoning style, length, style, anti-hallucination guard]</constraints>

Why XML tags?

Claude and many modern LLMs are trained on structured data. Using XML-style tags to delimit each component dramatically improves adherence to instructions — the model can parse each element independently rather than treating the whole prompt as an undifferentiated string of text.
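A hypothetical helper (the function and argument names are ours) that assembles the five components into the XML skeleton above, skipping any component you leave empty:

```python
def xml_prompt(role, context, task, output_format, constraints):
    """Wrap each of the five prompt components in its own XML-style tag."""
    sections = [
        ("role", role),
        ("context", context),
        ("task", task),
        ("output_format", output_format),
        ("constraints", constraints),
    ]
    # You don't always need all five: empty components are omitted entirely.
    return "\n\n".join(
        f"<{tag}>\n{text}\n</{tag}>" for tag, text in sections if text
    )

p = xml_prompt(
    role="You are a veteran release engineer.",
    context="We ship a Python CLI used by roughly 2,000 developers.",
    task="Draft release notes for version 2.0 from the changelog below.",
    output_format="Markdown, with a 'Breaking changes' section first.",
    constraints="Think step by step. Flag any assumptions explicitly.",
)
```

Keeping the components as separate arguments makes it easy to reuse the same role and constraints across many tasks.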

The role component

This is the highest-leverage element. A strong role doesn't just say "you are a marketing expert" — it says "You are a B2B SaaS growth strategist with 12 years of experience scaling companies from $1M to $50M ARR." Specificity activates a much richer set of capabilities in the model.

03 / Core Techniques

Role Prompting

Assigning an expert persona to the model is the single highest-leverage technique in prompt engineering. Here's why it works and how to do it right.

Applies to: Claude, ChatGPT

When you give a model a specific expert identity, you're activating a different distribution of tokens — essentially shifting which patterns in its training data it draws from. A generic "helpful assistant" draws from a broad, averaged distribution. A "senior quantitative analyst at a hedge fund" draws from a narrower, more technically precise one.

Weak vs. strong role prompting

weak — too generic

You are a helpful marketing assistant. Help me with my email campaign.

strong — specific & credentialed

<role>
You are a direct-response copywriter with 15+ years of experience in B2B SaaS.
You've written campaigns that generated $50M+ in pipeline. You know what
converts in the mid-market segment, and you don't waste words.
</role>

Depth vs. breadth

For complex, domain-specific tasks, add experiential depth: "reference your experience explicitly," "speak as someone who has navigated this specific situation." For simpler tasks, a one-line role is sufficient.
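A credentialed role can be templated so the specificity isn't lost when you're in a hurry. A sketch (the helper and parameter names are ours):

```python
def role_block(title, years, domain, proof_point=""):
    """Compose a specific, credentialed role statement inside a <role> tag."""
    line = f"You are a {title} with {years}+ years of experience in {domain}."
    if proof_point:
        # Experiential depth: a concrete accomplishment the model can draw on.
        line += f" {proof_point}"
    return f"<role>\n{line}\n</role>"

strong = role_block(
    "direct-response copywriter", 15, "B2B SaaS",
    "You've written campaigns that generated $50M+ in pipeline.",
)
```

For simple tasks, call it without the proof point; for domain-specific work, the extra sentence is where most of the leverage lives.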

Combining role with persona depth

The "deep persona" modifier tells the model to maintain the role throughout, reference its expertise naturally, and avoid breaking character with generic hedges. Enable this in the builder's Step 7 for maximum effect.

04 / Core Techniques

Chain of Thought

A single phrase — "think step by step" — can substantially improve model performance on complex reasoning tasks. Here's the mechanism behind why it works.

Applies to: Claude, ChatGPT, Gemini

Language models generate tokens sequentially — each token is conditioned on all previous tokens. When you instruct the model to reason before answering, the intermediate reasoning steps become context for the final answer, allowing the model to "correct" itself as it goes.

Basic chain of thought

constraint addition

Think step by step before answering. Show your reasoning process before providing your final conclusion.

Zero-shot vs. few-shot CoT

Zero-shot CoT adds "think step by step" to any prompt. It works for most tasks with minimal effort.
Few-shot CoT provides examples of the reasoning process before your actual question. Use this for complex multi-step problems where you need a specific reasoning format.
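Both variants reduce to simple string assembly. A minimal sketch (the helper names and the exact suffix wording are ours):

```python
COT_SUFFIX = (
    "\n\nThink step by step before answering. "
    "Show your reasoning, then state your final answer."
)

def zero_shot_cot(question: str) -> str:
    """Zero-shot CoT: append the reasoning instruction to any question."""
    return question + COT_SUFFIX

def few_shot_cot(examples, question):
    """Few-shot CoT: demonstrate the reasoning format before the real question.

    examples -- list of (question, reasoning, answer) tuples
    """
    shots = "\n\n".join(
        f"Q: {q}\nReasoning: {r}\nAnswer: {a}" for q, r, a in examples
    )
    # Ending on "Reasoning:" invites the model to continue the pattern.
    return f"{shots}\n\nQ: {question}\nReasoning:"

zs = zero_shot_cot("A train leaves at 9:40 and arrives at 11:05. How long is the trip?")
fs = few_shot_cot(
    [("What is 17 + 26?", "17 + 26 = 17 + 20 + 6 = 43.", "43")],
    "What is 38 + 45?",
)
```

Note how the few-shot version ends mid-pattern: the model's most likely continuation is reasoning in exactly the demonstrated format.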

When to use it

Chain of thought shines on math, logic puzzles, multi-step analysis, and any task where the answer requires sequential reasoning. It's less valuable for simple factual lookup or creative generation where reasoning doesn't improve outcomes.

05 / Advanced Techniques

XML Structure (Claude)

Claude is specifically optimized to respond to XML-structured prompts. This is not cosmetic — it fundamentally changes how Claude processes and responds to your instructions.

Applies to: Claude

Claude's training corpus includes large amounts of structured markup, and Anthropic's own prompting documentation structures prompts with XML-style tags. As a result, Claude treats XML-delimited sections as distinct processing units: it can independently weight, follow, and reference each tagged section.

Practical XML structure for Claude

production-grade claude prompt

<role>
You are a senior product strategist with deep experience in B2B SaaS.
You are known for brutal honesty and data-driven recommendations.
</role>

<context>
We are a Series A company ($5M ARR) building compliance software for fintech.
Our primary competitors are ComplyAdvantage and Alloy.
Current NRR: 108%. Churn is at 2.3% monthly in the $10k-$50k ARR segment.
</context>

<task>
Analyze our churn rate and recommend 3 specific retention interventions,
each with estimated impact, implementation difficulty, and a 90-day action plan.
</task>

<constraints>
Think step by step. Only include information you are confident about.
Flag any assumptions explicitly. Be direct — no corporate hedging.
</constraints>

Key tags and their effects

<role> — activates expert persona and calibrates vocabulary/depth.
<context> — grounds the response in specific facts, prevents hallucination.
<task> — defines exactly what is being asked; the clearer, the better.
<constraints> — modifies reasoning style, format, and guardrails.
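Because each tagged section is processed as a unit, an unclosed or mistyped tag silently weakens the prompt. A small sanity check, sketched in Python (the function name and the default tag list are ours):

```python
import re

def missing_tags(prompt, tags=("role", "context", "task", "constraints")):
    """Return the tags that lack a matched <tag>...</tag> pair in the prompt."""
    return [
        t for t in tags
        # re.DOTALL lets the body span multiple lines; .*? keeps it non-greedy.
        if not re.search(rf"<{t}>.*?</{t}>", prompt, re.DOTALL)
    ]

draft = "<role>You are a product strategist.</role>\n<task>Analyze churn."
```

Here `missing_tags(draft)` flags `task` even though the opening tag is present, because the closing tag never appears.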

06 / Core Techniques

Few-Shot Prompting

Demonstrate the exact output you want by providing examples directly in the prompt. The model learns the pattern — and replicates it precisely.

Applies to: Claude, ChatGPT, Grok, Gemini

Few-shot prompting exploits the model's in-context learning capability — its ability to infer patterns from examples provided in the prompt, without any weight updates. Provide 2–5 examples of input → output pairs, then give your real input. The model maps its output to match the demonstrated pattern.

Three-shot example structure

few-shot pattern

Convert each company description into a punchy one-line value proposition.

Example 1:
Input: We build analytics software for e-commerce stores.
Output: Real-time revenue intelligence for Shopify brands — zero setup, instant ROI.

Example 2:
Input: Our platform helps HR teams track employee performance.
Output: Replace annual reviews with continuous performance clarity.

Example 3:
Input: We provide cybersecurity training for employees.
Output: Turn your biggest security vulnerability into your first line of defense.

Now convert this:
Input: We build AI-powered scheduling tools for healthcare clinics.
Output:

Optimal shot count

1 shot often beats zero. 3 shots is usually the sweet spot. 5+ shots only helps for very specific format requirements. More shots consume context window — use only what you need.
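The pattern generalizes to a small builder. A sketch (the function and parameter names are ours):

```python
def few_shot_prompt(instruction, examples, new_input):
    """Assemble an instruction, demonstration pairs, and the real input.

    examples -- list of (input_text, output_text) pairs, typically 2-5
    """
    shots = "\n\n".join(
        f"Example {i}:\nInput: {inp}\nOutput: {out}"
        for i, (inp, out) in enumerate(examples, start=1)
    )
    # Ending on "Output:" cues the model to continue the demonstrated pattern.
    return f"{instruction}\n\n{shots}\n\nNow convert this:\nInput: {new_input}\nOutput:"

p = few_shot_prompt(
    "Convert each company description into a punchy one-line value proposition.",
    [
        ("We build analytics software for e-commerce stores.",
         "Real-time revenue intelligence for Shopify brands."),
        ("Our platform helps HR teams track employee performance.",
         "Replace annual reviews with continuous performance clarity."),
    ],
    "We build AI-powered scheduling tools for healthcare clinics.",
)
```

Keeping examples in a list also makes it easy to trim shots when you're tight on context window.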

07 / Model Guides

The Claude Guide

Claude (by Anthropic) is among the most nuanced and instruction-faithful models available. These patterns unlock its full capability.

Applies to: Claude

Claude's unique strengths

Long context mastery. Claude handles 200k-token context windows with minimal degradation. Use this for document analysis, code review of entire codebases, and multi-document synthesis.

XML responsiveness. Claude responds to structured XML prompts more consistently than most other major models. Always use XML structure for complex Claude prompts.

Constitutional AI alignment. Claude is trained to be helpful, harmless, and honest. Don't prompt against these values — work with them. Ask it to flag assumptions and uncertainties explicitly.

Claude-specific patterns that work

extended analysis pattern

<task>
Before answering, reflect on this problem from multiple perspectives. Consider:
(1) The immediate question,
(2) Second-order effects,
(3) What a skeptic would say,
(4) What evidence would change your view.
Then synthesize your analysis.
</task>

What to avoid with Claude

Avoid vague, open-ended prompts without clear success criteria. Claude's helpfulness means it will fill ambiguity with its own assumptions — which may not match yours. The more specific your task definition, the better the output.