Exercise: Prompt Design Workshop

ch06-01-prompt-design
⭐⭐ intermediate ⏱️ 25 min

Your company has an MCP server with great tools, but users complain that the AI "doesn't do what they expect." After investigation, you realize the problem: users ask vague questions and the AI picks arbitrary approaches.

Your task is to design prompts that give users control over AI behavior by defining explicit workflows.

🎯 Learning Objectives

  • Thinking: when a workflow belongs in a reusable prompt rather than being left to the model's judgment, and how explicit tool references give users control over AI behavior.

  • Doing: design an analysis prompt, a guarded bulk-operation prompt, and an exploration-mode prompt for an existing MCP server.

💬 Discussion

  • What's the difference between a user asking "analyze sales" vs invoking /sales-analysis?
  • Why should prompts reference specific tools by name?
  • How do Claude Desktop, ChatGPT, and VS Code expose prompts differently?
Write your prompt designs in prompts.md.

💡 Hints

Hint 1: Structured prompt template

Follow this template for analysis prompts (a wired-up code sketch follows the template):

Perform [analysis name] for [parameters]:

Step 1: Gather Context

  • Read [resource] to understand [what]
  • Note [what to look for]

Step 2: Collect Data

  • Use [tool] with [parameters]
  • Use [tool] with [parameters]

Step 3: Analyze

  • Calculate [metrics]
  • Compare [comparisons]
  • Identify [patterns]

Step 4: Report

  • Format output as: [template]
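
To see how a filled-in template becomes an actual prompt definition, here is a minimal sketch using the same Prompt/PromptArgument/PromptMessage builder shown in the solution below. The inventory-analysis prompt, its period argument, and the inventory resource/tool names are placeholders, not part of the exercise server.

```rust
// Sketch only: the structured template wired into a prompt definition.
// "inventory-analysis", "period", inventory://schema, and inventory_query
// are hypothetical names, not part of the exercise server.
Prompt::new("inventory-analysis")
    .description("Step-by-step inventory analysis for a given period")
    .arguments(vec![
        PromptArgument::new("period")
            .description("Period to analyze, e.g. 2024-Q3")
            .required(true),
    ])
    .messages(vec![PromptMessage::user(r#"
Perform inventory analysis for {{period}}:

Step 1: Gather Context
- Read inventory://schema to understand available fields and date coverage

Step 2: Collect Data
- Use inventory_query with date_range for {{period}}

Step 3: Analyze
- Calculate turnover and stock-out rates
- Compare against the previous period

Step 4: Report
- Format output as: a summary table followed by three recommendations
"#)])
```

Every step names a concrete resource or tool, which is what removes the guesswork users were complaining about.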

Hint 2: Guard rails pattern

For dangerous operations, include safety checks (a code sketch follows this list):

Before making any changes:
  1. Preview Phase

    • Query affected records using [tool]
    • Display: count, sample records, potential impact
    • If more than [N] records, warn and ask to proceed
  2. Confirmation Phase

    • Summarize exactly what will change
    • Ask for explicit "yes" to proceed
    • Any other response = abort
  3. Execution Phase

    • Process in batches of [N]
    • Log each batch result
    • Stop on first error
  4. Verification Phase

    • Query results to confirm changes
    • Report success/failure summary
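
A common way to apply this pattern in code is to keep the guard-rail phases in one place and embed them in every destructive prompt. The sketch below reuses the builder API from the solution with a hypothetical bulk-delete prompt and customer_query tool, and assumes PromptMessage::user accepts any string-like value; adjust the names and thresholds to your server.

```rust
// Sketch only: a reusable guard-rail block embedded into a destructive prompt.
// SAFETY_PROTOCOL, "bulk-delete", and customer_query are hypothetical names.
const SAFETY_PROTOCOL: &str = r#"
Before making any changes:
1. Preview Phase: query affected records with customer_query and display the
   count, a sample, and the potential impact; if more than 100 records, warn
   and ask whether to proceed.
2. Confirmation Phase: summarize exactly what will change and wait for an
   explicit "yes"; any other response aborts.
3. Execution Phase: process in batches of 50, log each batch result, and stop
   on the first error.
4. Verification Phase: query the results and report a success/failure summary.
"#;

Prompt::new("bulk-delete")
    .description("Safely delete customer records with preview and confirmation")
    .messages(vec![PromptMessage::user(format!(
        "Help me delete customer records. This is a SENSITIVE operation.\n{SAFETY_PROTOCOL}"
    ))])
```

Keeping the protocol in one constant means every dangerous prompt enforces the same preview, confirmation, execution, and verification phases.
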
Hint 3: Context-setting pattern

For exploration prompts (a code sketch follows the template):

Initialize [domain] exploration session:

Setup:

  1. Read [resource1] - note [what to learn]
  2. Read [resource2] - note [what to learn]
  3. Summarize available data and capabilities

Present to user:

  • What data is available
  • What operations are possible
  • Any current limitations (rate limits, permissions)

Then wait for questions. For each question:

  • If asking about data: use [query tool]
  • If asking about trends: use [aggregate tool]
  • If asking for export: use [export tool] with confirmation

Session rules:

  • Limit queries to [N] rows by default
  • Warn before expensive operations
  • Maintain context across questions
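
Exploration prompts typically take no arguments at all, since they set up a session rather than run one analysis; the whole workflow lives in the message text. A minimal sketch, again using the builder from the solution, with a hypothetical support-ticket domain (support-mode, tickets://schema, ticket_query, ticket_aggregate, and ticket_export are all placeholder names):

```rust
// Sketch only: a zero-argument "mode" prompt for a hypothetical ticket server.
Prompt::new("support-mode")
    .description("Enter support-ticket exploration mode with full context")
    .messages(vec![PromptMessage::user(r#"
Initialize a support-ticket exploration session:

Setup:
1. Read tickets://schema - note available fields and date coverage
2. Read config://limits - note rate limits and remaining quota
3. Summarize available data and capabilities

Present to user: what data is available, what operations are possible,
and any current limitations.

Then wait for questions. For each question:
- Data questions: use ticket_query with a default LIMIT of 100
- Trend questions: use ticket_aggregate
- Exports: use ticket_export only after explicit confirmation

Session rules: warn before expensive operations and maintain context
across questions.
"#)])
```
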
⚠️ Try the exercise on your own before reading the solution below.

# Prompt Design Solutions

Task 1: Quarterly Analysis Prompt


Prompt::new("quarterly-analysis") .description("Comprehensive quarterly sales analysis with YoY comparison") .arguments(vec![ PromptArgument::new("quarter") .description("Quarter to analyze: Q1, Q2, Q3, or Q4") .required(true), PromptArgument::new("year") .description("Year (defaults to current)") .required(false), ]) .messages(vec![ PromptMessage::user(r#" Perform quarterly sales analysis for {{quarter}} {{year}}:

Step 1: Gather Context

  • Read sales://schema to understand available data fields
  • Read sales://regions to get the complete region list
  • Note any schema changes that might affect comparisons

Step 2: Collect Current Quarter Data

  • Use sales_query with date_range for {{quarter}} {{year}}
  • Use sales_aggregate to calculate:
    • Total revenue
    • Units sold
    • Average order value
    • Customer count
  • Break down by region using sales_aggregate with group_by="region"

Step 3: Collect Comparison Data

  • Use sales_query with date_range for {{quarter}} of previous year
  • Use sales_aggregate for same metrics
  • Calculate year-over-year changes for each metric

Step 4: Identify Trends

  • Compare regional performance: which regions grew/declined?
  • Identify top 3 trends or anomalies
  • Note any concerning patterns

Step 5: Generate Report Use report_generate with this structure:

Error Handling:

  • If sales_query fails with RATE_LIMITED: wait and retry
  • If data is missing for comparison period: note "No YoY data available"
  • If any tool fails: report which step failed and what data is missing "#) ]) Prompt::new("bulk-update") .description("Safely update multiple customer records with preview and confirmation") .arguments(vec![ PromptArgument::new("update_type") .description("What to update: status, segment, or contact_info"), ]) .messages(vec![ PromptMessage::user(r#" Help me update customer records. This is a SENSITIVE operation.

Safety Protocol - Follow Exactly:

Phase 1: Understand the Request

  • Ask what records should be updated (filter criteria)
  • Ask what the new value should be
  • Confirm the update_type matches: {{update_type}}

Phase 2: Preview (REQUIRED)

  • Use sales_query to find matching records
  • Display:
    • Total count of affected records
    • Sample of first 5 records with current values
    • If >100 records: STOP and ask user to narrow criteria

Phase 3: Confirmation (REQUIRED)

Present this summary:

Wait for explicit 'yes' response. Any other response = ABORT.

Phase 4: Execution (only after 'yes')

  • Process in batches of 50 records
  • After each batch, report: "Updated X of Y records..."
  • If any error occurs: STOP and report what succeeded/failed

Phase 5: Verification

  • Query updated records to confirm changes
  • Report final summary:
    • Records successfully updated
    • Any failures
    • Rollback command if needed: bulk-update --rollback [batch_id] "#) ]) Prompt::new("sales-mode") .description("Enter sales data exploration mode with full context") .messages(vec![ PromptMessage::user(r#" Initialize a sales data exploration session.

Setup Phase:

  1. Read sales://schema

    • List available tables and key fields
    • Note any date ranges or limitations
  2. Read sales://regions

    • List all regions for reference
    • Note which have data
  3. Read config://limits

    • Note current rate limits
    • Check query quotas remaining

Present Session Overview:

Session Rules:

For data questions:

  • Use sales_query with reasonable LIMIT (default 100)
  • Show result count and sample if large

For trend/aggregate questions:

  • Use sales_aggregate instead of computing manually
  • Explain what calculations were performed

For exports:

  • Confirm before large exports (>1000 records)
  • Use data_export and provide download info

For permission errors:

  • Explain what's not accessible
  • Suggest alternatives if possible

Maintain context across questions - reference previous results when relevant. "#) ]) :::
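
One detail the definitions above leave implicit is how {{quarter}}, {{year}}, and {{update_type}} get filled in. The MCP prompts/get request carries the argument values, but substituting them into the message text is the server's job. Below is a minimal sketch of that step, assuming a simple string-replace convention; the render helper (and the QUARTERLY_TEMPLATE name in the usage comment) are hypothetical, not part of any SDK.

```rust
use std::collections::HashMap;

// Hypothetical helper: substitute {{name}} placeholders in a prompt template
// with the arguments supplied by the client's prompts/get request.
fn render(template: &str, args: &HashMap<String, String>) -> String {
    let mut text = template.to_string();
    for (name, value) in args {
        // format!("{{{{{name}}}}}") produces the literal placeholder, e.g. {{quarter}}
        text = text.replace(&format!("{{{{{name}}}}}"), value);
    }
    text
}

// Usage (hypothetical): render(QUARTERLY_TEMPLATE, &args) where args maps
// "quarter" -> "Q3" and "year" -> "2024".
```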


🤔 Reflection

  • How would you test that a prompt produces reliable results?
  • Should prompts be version-controlled? How would you update them?
  • What happens when tools change but prompts reference old names? (One way to catch this is sketched below.)
  • How do you balance prescriptive steps vs. AI flexibility?
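
For the first and third questions, one lightweight answer is a unit test that checks each prompt still mentions the tools it is supposed to drive, and that those tools are still registered. A sketch, where QUARTERLY_TEMPLATE stands in for however you store the quarterly-analysis message text and registered_tool_names() is a hypothetical accessor for your server's tool registry:

```rust
// Sketch only: catch drift between prompt text and the tools the server exposes.
const QUARTERLY_TEMPLATE: &str =
    "Use sales_query ... Use sales_aggregate ... Use report_generate ...";

fn registered_tool_names() -> Vec<&'static str> {
    // Hypothetical: in a real server, read this from the tool registry.
    vec!["sales_query", "sales_aggregate", "report_generate", "data_export"]
}

#[test]
fn quarterly_prompt_references_only_registered_tools() {
    let tools = registered_tool_names();
    for tool in ["sales_query", "sales_aggregate", "report_generate"] {
        // The prompt must still mention the tool it is meant to drive...
        assert!(QUARTERLY_TEMPLATE.contains(tool), "prompt no longer mentions {tool}");
        // ...and that tool must still exist on the server.
        assert!(tools.contains(&tool), "prompt references unregistered tool {tool}");
    }
}
```

Version-controlling prompts (the second question) is what makes tests like this practical: a prompt edit and a tool rename can then land, and be reviewed, in the same commit.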