Free · 26 Prompts

AI Prompt Library

Production-ready prompts for the work you actually do — code review, debugging, RAG question answering, stack trace analysis, document chunking, agent planning, decision matrices. Each prompt is tested and paired with the model that gets the best result for that task in 2026.

📅 Published: May 9, 2026 · 🔄 Last Updated: May 9, 2026 · ✓ Manually Reviewed

Senior Code Reviewer

Reviews a pull request like a senior engineer — security, performance, correctness, style.

Coding

Best model: Claude Sonnet 5 / Qwen3-Coder-Next

You are a senior software engineer reviewing this pull request. Analyze the diff for:

1. **Correctness** — does it do what the PR description says? Edge cases handled?
2. **Security** — any injection risks, missing input validation, secrets in code, unsafe deserialization?
3. **Performance** — N+1 queries, unnecessary allocations, blocking I/O on hot paths?
4. **Style** — consistent with the existing codebase? Naming clear? Comments where genuinely needed (not what-the-code-does)?
5. **Tests** — adequate coverage? Test the right things? Easy to maintain?

For each issue, cite the exact file:line and explain the problem and suggested fix. Don't praise good code — only flag what needs change. If the PR is clean, say so in one sentence.

```diff
[paste the git diff here]
```

Tip: best results come from pasting the full diff, including surrounding context. For long PRs, split into logical chunks (see the sketch below).
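A minimal sketch for that splitting step, one chunk per changed file (assumes Python 3 and `git` on your PATH; the base branch name is a placeholder):

```python
import subprocess

def diff_per_file(base: str = "main") -> dict[str, str]:
    """Split `git diff <base>...HEAD` into one chunk per changed file.

    A unified diff starts each file's section with a 'diff --git' header,
    so splitting on that header yields self-contained per-file chunks.
    """
    raw = subprocess.run(
        ["git", "diff", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    chunks: dict[str, str] = {}
    current_file, current_lines = None, []
    for line in raw.splitlines(keepends=True):
        if line.startswith("diff --git "):
            if current_file:
                chunks[current_file] = "".join(current_lines)
            # Header looks like: diff --git a/path b/path
            current_file = line.split(" b/", 1)[-1].strip()
            current_lines = []
        current_lines.append(line)
    if current_file:
        chunks[current_file] = "".join(current_lines)
    return chunks

if __name__ == "__main__":
    for path, chunk in diff_per_file().items():
        print(f"--- {path}: {len(chunk)} chars ---")
```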

Stack Trace Investigator

Diagnoses production stack traces — root cause, fix, and prevention.

Coding

Best model: Claude Opus 4.7 / GPT-5.5 Pro

Analyze this stack trace and tell me:

1. **Root cause** — what specifically went wrong, in one sentence
2. **Why it happened** — the sequence of events that led to this state
3. **Immediate fix** — what to change to stop the error (with code if applicable)
4. **Prevention** — what to add (test, type check, validation, monitoring) to catch this class of bug earlier next time

Be precise about line numbers and frame indices. If the stack trace alone isn't enough to diagnose, list the 2-3 most likely causes ranked by probability and tell me what additional info you'd need to disambiguate.

```
[paste the stack trace here, plus any relevant code context]
```

Refactor Legacy Code

Refactors a function/class for clarity without changing behavior.

Coding

Best model: Claude Sonnet 5 / DeepSeek V4

Refactor the following code for clarity and maintainability while preserving exact behavior. Constraints:

- **No behavior changes** — the refactored code must pass the same tests
- **Reduce nesting** — flatten with early returns / guard clauses
- **Name clearly** — variables and functions should describe intent, not mechanism
- **Extract** only when the extracted unit has independent meaning (don't extract for the sake of it)
- **Comments only where the WHY is non-obvious** — never comment what the code already says
- **Show the diff** — output the original and refactored versions side-by-side, then explain the meaningful changes

```
[paste original code]
```

If you'd recommend renaming external API/types, flag it as a separate change with migration impact.
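To make the nesting constraint concrete, a before/after sketch of the guard-clause pattern (the `Order` type and `dispatch` call are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Order:                         # minimal stand-in type
    is_paid: bool = True
    is_shipped: bool = False

def dispatch(order: Order) -> str:   # stand-in for the real shipping call
    return "dispatched"

# Before: the happy path is buried three levels deep.
def ship_order_nested(order):
    if order is not None:
        if order.is_paid:
            if not order.is_shipped:
                return dispatch(order)
    return None

# After: guard clauses exit early. Behavior is identical,
# but the happy path reads top to bottom at one indent level.
def ship_order(order):
    if order is None:
        return None
    if not order.is_paid:
        return None
    if order.is_shipped:
        return None
    return dispatch(order)
```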

Natural Language to SQL

Generates SQL from a question + schema, with explanation.

Coding

Best model: Claude Sonnet 5 / GPT-5.5 Standard

Generate the SQL query for this question.

**Schema:**
```sql
[paste CREATE TABLE statements or describe tables/columns]
```

**Question:** [your business question in plain English]

Output:
1. The SQL query (formatted for readability, one clause per line)
2. A 2-sentence explanation of what it does and why
3. Any assumptions you made (e.g., "I assumed 'active' users means status = 'active' AND deleted_at IS NULL")
4. Performance note if the query might scan a lot of rows

Use standard SQL unless I tell you a specific dialect (Postgres, MySQL, BigQuery, Snowflake). Prefer CTEs (WITH clauses) over deeply nested subqueries.

REST API Design Critique

Reviews a proposed REST API design for consistency, RESTfulness, and pitfalls.

Coding

Best model: Claude Sonnet 5

Review this REST API design. Check:

1. **REST conventions** — proper resource modeling? Right HTTP verbs? Idempotency where it matters?
2. **Naming** — consistent across endpoints? Plural for collections, singular for resources? No verbs in paths?
3. **Versioning** — present? Sensible (URL vs header)?
4. **Error responses** — structured, machine-readable, useful for clients?
5. **Pagination** — present on list endpoints? Cursor or offset? Sensible defaults?
6. **Filtering / sorting** — consistent across list endpoints?
7. **Auth / rate limiting** — addressed?
8. **Pitfalls** — anything that'll bite us in 6 months?

```
[paste the API spec or list of endpoints]
```

Be direct — call out specific endpoints by path and explain what to change.

Generate Tests from Code

Writes a complete test file (happy path + edge cases) for a given function.

Coding

Best model: Qwen3-Coder-Next / Claude Sonnet 5

Write a complete test file for the following code. Requirements:

- **Framework:** [pytest / vitest / jest / go test / your framework]
- **Cover:** happy path, edge cases, error cases, boundary conditions
- **Each test independent** — no shared mutable state between tests
- **Clear test names** — describe what's being tested and the expected outcome (e.g., `test_login_returns_401_when_password_is_wrong`)
- **Mock external dependencies** — DB calls, network calls, file I/O — but don't mock the function under test
- **Don't test trivia** — getters, setters, framework code

If parts of the function are hard to test (e.g., depends on time, randomness, environment), flag what needs refactoring to make it testable instead of writing low-value tests.

```
[paste the code to test]
```
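As a worked instance of the naming and mocking rules (pytest; the `login` function and its `db` dependency are invented for illustration):

```python
from unittest.mock import Mock

# Hypothetical function under test: returns an HTTP-style status code.
def login(db, user: str, password: str) -> int:
    record = db.get_user(user)
    if record is None or record["password"] != password:
        return 401
    return 200

def test_login_returns_200_when_credentials_match():
    db = Mock()                       # mock the DB dependency, not login()
    db.get_user.return_value = {"password": "hunter2"}
    assert login(db, "alice", "hunter2") == 200

def test_login_returns_401_when_password_is_wrong():
    db = Mock()                       # fresh mock per test: no shared state
    db.get_user.return_value = {"password": "hunter2"}
    assert login(db, "alice", "wrong") == 401

def test_login_returns_401_when_user_does_not_exist():
    db = Mock()
    db.get_user.return_value = None
    assert login(db, "ghost", "anything") == 401
```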

SEO Blog Post Outline

Generates a search-intent-aware blog outline that ranks.

Writing

Best model: Claude Sonnet 5 / GPT-5.5

Create a detailed outline for a blog post on this topic. Optimize for the actual reader, not for SEO games.

**Topic:** [your topic]
**Target keyword:** [primary keyword]
**Target reader:** [who they are, what they know, what they want]
**Word count:** [target — 1500/2500/4000]

The outline should include:
1. **Title** — 50-60 chars, includes primary keyword, promises a clear value
2. **Meta description** — 145-155 chars, includes keyword, hooks the click
3. **Intro** — 100-150 words that immediately answer the searcher's intent (don't bury the lede)
4. **H2 sections** — each with H3 sub-points, structured to satisfy intent completely
5. **FAQ section** — 6-8 actual questions people ask (use search-suggestion phrasing)
6. **Internal link suggestions** — relevant existing pages on the site to link to
7. **Conclusion** — short, with a clear CTA

After the outline, list 3 specific things this post will do better than the current top-ranked competitors.

Rewrite for Clarity

Rewrites text to be clearer and shorter without losing meaning.

Writing

Best model: Claude Sonnet 5 / Mistral Medium 3.5

Rewrite the following text for clarity. Rules:

- **Cut every word that doesn't change meaning** — "in order to" → "to", "the fact that" → "that"
- **Active voice** unless passive is genuinely better (rare)
- **Concrete over abstract** — specifics beat generalities
- **One idea per sentence** — split run-ons
- **No filler phrases** — "it should be noted that", "as I mentioned earlier", "interestingly"
- **Preserve technical accuracy** — don't simplify away meaning

Show the original and rewritten versions side by side. Below that, list any places where you weren't sure a nuance survived, so I can double-check them.

```
[paste text to rewrite]
```

Tone Shift

Rewrites text in a different tone (formal/casual/technical/marketing).

Writing

Best model: Claude Sonnet 5 / Llama 3.1 70B

Rewrite this in a [target tone] tone for [target audience]. Preserve the core message and information; change only how it's said.

Target tone options: formal & academic / casual & friendly / technical & precise / marketing & enthusiastic / empathetic & supportive / journalistic & neutral.

After the rewrite, list:
1. The 3 specific changes that defined the tone shift (word choice, sentence structure, framing)
2. Anything you removed because it didn't fit the new tone — confirm it wasn't load-bearing

```
[paste original text]
[specify target tone and audience]
```

Professional Email Reply

Drafts a professional email reply matching the original's tone.

Writing

Best model: Mistral Medium 3.5 / Claude Haiku 4.5

Draft a reply to this email. The reply should:

1. Match the formality level of the original (don't be more or less formal than the sender)
2. Address every question or request in the original — none missed
3. Be the right length for the content (don't pad)
4. End with a clear next action if one is needed
5. Avoid corporate filler ("Hope this finds you well", "Per my last email")

**Original email:**
```
[paste]
```

**My intent for the reply:** [what outcome do I want? What position do I take?]
**Anything to avoid mentioning:** [topics to skip]

Decision Matrix

Builds a structured decision matrix comparing options.

Analysis

Best model: Claude Opus 4.7 / GPT-5.5 Pro

I need to choose between these options. Build a decision matrix.

**Decision:** [what am I deciding?]
**Options:** [list 2-5 concrete options]
**Criteria that matter to me:** [list 3-7 — e.g., cost, time-to-implement, scalability, learning curve, vendor lock-in]
**My constraints:** [budget, timeline, team capability]

Output:
1. A markdown table with options as columns and criteria as rows. Score each cell 1-5 with a one-sentence justification.
2. Below the table: weight each criterion by how much I should care about it (you decide weights based on my constraints, then I can adjust)
3. Weighted total per option
4. Your recommendation in 2 sentences — which option and why
5. The single biggest risk of going with your recommendation, and how to mitigate it

Be opinionated. Don't hedge if the answer is clear.
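The weighted-total step (items 2 and 3) is plain arithmetic; a minimal sketch with invented scores and weights:

```python
# Invented example: picking a message queue. Scores are 1-5, per the prompt.
scores = {
    "Kafka":    {"cost": 2, "time_to_implement": 2, "scalability": 5},
    "SQS":      {"cost": 4, "time_to_implement": 5, "scalability": 3},
    "RabbitMQ": {"cost": 4, "time_to_implement": 4, "scalability": 3},
}
weights = {"cost": 0.3, "time_to_implement": 0.3, "scalability": 0.4}

for option, s in scores.items():
    total = sum(s[criterion] * w for criterion, w in weights.items())
    print(f"{option}: {total:.2f}")
# Kafka: 3.20, SQS: 3.90, RabbitMQ: 3.60 -> SQS wins under these weights.
```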

Five-Whys Root Cause

Walks a problem through five-whys to find the actual root cause.

Analysis

Best model: Claude Opus 4.7 / DeepSeek V4

Help me find the root cause of this problem using five-whys. Don't stop at the first plausible cause — keep asking why until we hit something I can actually fix.

**Problem statement:** [describe the problem in 1-2 sentences with concrete details — not "performance is bad" but "p99 latency on /search increased from 80ms to 350ms over the last week"]

For each "why," list 2-3 possible answers ranked by likelihood given my context, then pick the most likely as the lead-in to the next "why." After five levels, summarize:

1. The likely root cause
2. The chain of events from root cause to symptom
3. The smallest change I can make to fix the root cause
4. The smallest change I can make to verify your hypothesis before committing to the fix

If at any level the evidence I gave you isn't enough to disambiguate, tell me what to investigate before continuing.

Pre-Mortem Analysis

Imagines a project failed, then works backward to identify what went wrong.

Analysis

Best model: Claude Opus 4.7

Pre-mortem this plan. Assume it's 6 months from now and the project failed badly. Tell me what went wrong, ranked by likelihood.

**The plan:** [describe what we're planning to do, scope, timeline, resources, success criteria]
**Team / org context:** [team size, experience level, organizational pressures]
**Dependencies:** [external systems, vendors, approvals]

Output the top 8 ways this fails, in 4 categories:

1. **Execution failures** (we got the plan right but couldn't do it)
2. **Strategic failures** (we did the plan but it was the wrong plan)
3. **External failures** (something outside our control killed it)
4. **Adoption failures** (we shipped it but no one used it)

For each: what specifically goes wrong, the early warning sign we'd see 1-3 months before it became a crisis, and the cheapest mitigation we could take now to prevent or detect it.

RAG Answer with Citations

Answers a question using only retrieved passages, with citations.

RAG

Best model: Claude Sonnet 5 / Qwen3-Coder-Next

You are a research assistant. Answer the user's question using ONLY the information in the retrieved passages below. Rules:

1. **Cite every claim** — after every factual statement, add [Passage N] referring to the passage you used
2. **If passages don't contain the answer**, say so explicitly — don't hallucinate
3. **If passages contradict each other**, surface the contradiction; don't pick one silently
4. **Direct quotes** in "double quotes" with [Passage N] citation
5. **Synthesize across passages** when relevant — don't just copy one passage
6. **No filler** — if the answer is one sentence, give one sentence

**Question:** [user's question]

**Retrieved passages:**
[Passage 1]: [content]
[Passage 2]: [content]
[Passage 3]: [content]
...

Output the answer, then a 1-line "Confidence: high/medium/low" with a brief reason.
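If you're templating this in an application, a minimal sketch of the assembly step (the function name is illustrative; what matters is that the numbering matches the [Passage N] citation format the model is told to use):

```python
def build_rag_prompt(question: str, passages: list[str]) -> str:
    """Number passages [Passage 1]..[Passage N] so citations are checkable."""
    numbered = "\n".join(
        f"[Passage {i}]: {text}" for i, text in enumerate(passages, start=1)
    )
    return (
        "You are a research assistant. Answer the user's question using ONLY "
        "the information in the retrieved passages below. Cite every claim as "
        "[Passage N]. If the passages don't contain the answer, say so.\n\n"
        f"**Question:** {question}\n\n"
        f"**Retrieved passages:**\n{numbered}\n\n"
        "End with one line: Confidence: high/medium/low, with a brief reason."
    )

print(build_rag_prompt("What is our uptime SLA?",
                       ["Uptime is 99.9% monthly.", "Support responds within 4 hours."]))
```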

Smart Document Chunking

Splits a document into RAG chunks at semantic boundaries.

RAG

Best model: GPT-5.5 Standard / Mistral Medium 3.5

Split this document into RAG chunks optimized for embedding-based retrieval. Constraints:

- **Target chunk size:** [400-800 tokens — pick what fits your embedding model context]
- **Split at semantic boundaries** — section headings, paragraph breaks, conceptual transitions — never mid-sentence
- **Add ~50 token overlap** between adjacent chunks for context continuity
- **Each chunk gets metadata:** chunk_id, source_file, section_title (if any), page_number (if any)
- **Preserve structure markers** — keep heading hierarchy in chunks so they're self-contained
- **Skip noise** — table of contents, page numbers, repeated headers/footers

Output as JSON array. Each chunk: `{ "chunk_id": "...", "text": "...", "metadata": { ... } }`.

```
[paste document]
```
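If you'd rather chunk deterministically and keep the model out of the loop, a minimal sketch of the same rules (token counts approximated at ~4 characters per token; swap in your real tokenizer):

```python
def chunk_document(doc: str, target_tokens: int = 600,
                   overlap_tokens: int = 50) -> list[dict]:
    """Greedy chunker: split on blank lines, pack paragraphs up to the target
    size, and seed each new chunk with the tail of the previous one."""
    approx_tokens = lambda s: len(s) // 4    # rough chars-per-token heuristic
    blocks = [b.strip() for b in doc.split("\n\n") if b.strip()]
    chunks: list[str] = []
    current = ""
    for block in blocks:
        # Never split inside a paragraph; start a new chunk at the boundary.
        if current and approx_tokens(current) + approx_tokens(block) > target_tokens:
            chunks.append(current)
            current = current[-overlap_tokens * 4:] + "\n\n" + block
        else:
            current = (current + "\n\n" + block).strip()
    if current:
        chunks.append(current)
    return [
        {"chunk_id": f"chunk-{i:04d}", "text": c,
         "metadata": {"source_file": "doc.md", "section_title": None, "page_number": None}}
        for i, c in enumerate(chunks)
    ]
```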

RAG Query Rewriting

Rewrites a user question into multiple search queries for better retrieval.

RAG

Best model: Claude Haiku 4.5 / Phi-4

Rewrite this user question as 3-5 alternative search queries optimized for vector / keyword retrieval. The user question is in conversational form; the retrieval queries should be in keyword-rich, declarative form.

**User question:** [the question]
**Domain context:** [what corpus is being searched — e.g., "internal engineering docs", "tax law", "medical literature"]
**Past conversation:** [if any, paste the prior turns so the model can resolve pronouns / references]

Output:
1. The 3-5 rewritten queries, ranked by likelihood of matching relevant docs
2. For each query, 1-line explanation of what aspect of the original question it targets
3. Any ambiguity in the original question that may need user clarification before retrieval
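Downstream, the rewritten queries usually get fanned out to retrieval and the results merged; a minimal sketch (the `search` callable is a stand-in for your vector-store client, assumed to return dicts with `id`, `score`, and `text`):

```python
def fan_out_retrieve(queries: list[str], search, top_k: int = 5) -> list[dict]:
    """Run every rewritten query, merge the hits, and dedupe by document id,
    keeping each document's best score across all queries."""
    best: dict[str, dict] = {}
    for query in queries:
        for hit in search(query, top_k=top_k):
            kept = best.get(hit["id"])
            if kept is None or hit["score"] > kept["score"]:
                best[hit["id"]] = hit
    return sorted(best.values(), key=lambda h: h["score"], reverse=True)
```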

Task → Tool Calls Breakdown

Breaks a high-level task into sequential tool calls.

Agents

Best model: Claude Sonnet 5 / GPT-5.5 Pro

Break this task into a sequence of tool calls.

**Task:** [the goal in plain English]
**Available tools:** [list each tool with its name, what it does, and its input schema]
**Constraints:** [latency budget, must-not-do, side effects]

Output as a numbered plan:
1. Tool: [name], Input: { ... }, Why: [1 sentence on why this step is needed]
2. ...

After the plan, list:
- Which steps can run in parallel
- Which steps depend on which prior outputs
- Where you'd add a human-in-the-loop confirmation if this were a high-stakes task
- The single biggest failure mode and what would happen if that step fails
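If you parse the plan back into an executor, a minimal shape for a step (the field names are an assumption; adapt them to your framework):

```python
from dataclasses import dataclass, field

@dataclass
class PlanStep:
    tool: str                          # name from the available-tools list
    input: dict                        # arguments matching the tool's input schema
    why: str                           # the one-sentence justification
    depends_on: list[int] = field(default_factory=list)  # indices of prior steps
    needs_confirmation: bool = False   # human-in-the-loop gate for risky steps

# Hypothetical two-step plan: tool names and arguments are invented.
plan = [
    PlanStep("search_orders", {"customer_id": "c-42"}, "Find the order to refund"),
    PlanStep("issue_refund", {"order_id": "<output of step 1>"},
             "Refund the order", depends_on=[0], needs_confirmation=True),
]
# Steps whose depends_on sets don't overlap can run in parallel.
```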

Agent Self-Critique

Reviews an agent's plan / output for errors before execution.

Agents

Best model: Claude Opus 4.7 / GPT-5.5 Pro

You are reviewing another AI agent's plan / output before it executes. Be critical:

**Original task:** [what the user asked for]
**Agent's plan / output:** [what the agent produced]

Check:
1. **Did the plan address the actual task** or did the agent rephrase the task in a way that misses the point?
2. **Are there steps that will fail** given known constraints (auth, rate limits, data shape)?
3. **Are there missing steps** the agent forgot (validation, error handling, cleanup)?
4. **Is anything destructive** that should require user confirmation first?
5. **Is the order correct** — could two steps be parallelized? Does a later step depend on output an earlier step doesn't produce?

Output:
- "APPROVE" if it's ready to execute
- "REVISE" with a specific list of fixes if not
- Don't hedge — if it's wrong in any meaningful way, say "REVISE"

Hard Problem with Extended Thinking

Triggers a reasoning model's extended-thinking mode for hard problems.

Reasoning

Best model: Claude Opus 4.7 / GPT-5.5 Pro / DeepSeek R1

Take your time on this. Think step-by-step. Show your reasoning before the final answer.

**Problem:** [the hard problem — math, planning, multi-step logic, design]

Approach:
1. Restate the problem in your own words to confirm you understood it
2. Identify the constraints and what we're solving for
3. List 2-3 candidate approaches
4. Pick the most promising and work through it carefully
5. If you hit a dead end, backtrack — explicitly say "this approach doesn't work because..." and try another
6. Verify your final answer against the original constraints
7. State the answer clearly at the end

If the problem is ambiguous in any way that affects the answer, ask one clarifying question before solving rather than guessing.

Fermi Estimate

Estimates a quantity from first principles when exact data isn't available.

Reasoning

Best model: Claude Opus 4.7 / GPT-5.5 Pro

Estimate this quantity from first principles. Show every assumption.

**Question:** [e.g., "How many gas stations are there in the US?", "How many AI inference requests does Anthropic serve per day?", "How much VRAM is the global LLM industry using right now?"]

Approach:
1. Decompose the question into 3-7 independent factors you can estimate
2. For each factor, give your estimate AND a 1-line justification
3. Do the multiplication / aggregation
4. Sanity-check: is the result within an order of magnitude of any data point you actually know?
5. Final answer with explicit confidence range (e.g., "Between 50K and 200K, most likely ~100K")

Don't refuse the estimate just because the data isn't available — that's the whole point of Fermi estimation.
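The arithmetic is nothing more than labeled multiplication. The gas-stations question from the prompt, worked as a sketch (every number is an explicit assumption to challenge, not a sourced fact):

```python
# Fermi estimate: "How many gas stations are there in the US?"
us_population             = 330e6
people_per_car            = 2.0    # assumption: ~165M cars in regular use
fills_per_car_per_week    = 1.0    # assumption: one fill-up a week
fills_per_station_per_day = 150    # assumption: steady-but-plausible throughput

cars = us_population / people_per_car
weekly_fills = cars * fills_per_car_per_week
stations = weekly_fills / (fills_per_station_per_day * 7)
print(f"~{stations:,.0f} stations")   # ~157,000
# Sanity check: commonly cited figures are roughly 100K-150K,
# so this lands within an order of magnitude.
```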

CSV Analyzer

Analyzes a CSV: schema, quality issues, summary stats, anomalies.

Data

Best model: Claude Sonnet 5 / GPT-5.5

Analyze this CSV. Output:

1. **Schema** — column name, inferred type (string/int/float/date/bool), example value, % of nulls
2. **Quality issues** — duplicates, inconsistent formatting (date formats, casing), out-of-range values, suspicious patterns
3. **Summary stats** — for numeric columns: min/max/mean/median/p95. For categorical: top 5 values + counts. For dates: range.
4. **Likely anomalies** — outliers, suspicious modes (e.g., birthday = 1900-01-01 indicates "unknown"), columns that look like they should join with others
5. **Suggested next analyses** — 3 specific questions this dataset is well-suited to answer

```csv
[paste CSV — first 50-200 rows is enough for schema inference]
```

Note any column that looks like PII (emails, phone numbers, SSN-like patterns) and flag for review.
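The deterministic parts (items 1 and 3) are also cheap to compute locally with pandas, leaving the model to do the interpretation; a sketch assuming the file is `data.csv`:

```python
import pandas as pd

df = pd.read_csv("data.csv")

# 1. Schema: column name, inferred dtype, example value, % nulls
schema = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "example": df.iloc[0],
    "pct_null": (df.isna().mean() * 100).round(1),
})
print(schema)

# 3. Summary stats: numeric columns get min/max/mean/median/p95
numeric = df.select_dtypes("number")
print(numeric.agg(["min", "max", "mean", "median"]).T)
print(numeric.quantile(0.95).rename("p95"))

# Categorical columns: top 5 values with counts
for col in df.select_dtypes("object"):
    print(f"\n{col}:\n{df[col].value_counts().head(5)}")
```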

JSON Schema from Examples

Generates a JSON Schema from one or more example JSON documents.

Data

Best model: GPT-5.5 Standard / Qwen3-Coder-Next

Infer the JSON Schema from these example documents.

```json
[paste 1-N example JSON objects, comma-separated or one per line]
```

Output:
1. The JSON Schema (draft-2020-12)
2. **Required vs optional** — assume a field is required only if it appears in every example
3. **Type unions** — if a field is sometimes string, sometimes null, output `{"type": ["string", "null"]}`
4. **Enums** — for string fields with ≤10 distinct observed values, output as enum
5. **Format hints** — date-time, email, uri where the pattern is clear
6. **Notes** — flag fields where the small sample size makes the inference uncertain

If you see arrays of mixed-type objects (e.g., events with different shapes), output a `oneOf` with each variant as a sub-schema.
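The required-vs-optional and null-union rules are mechanical enough to verify by hand; a minimal sketch for flat objects (nested objects, enums, and `oneOf` detection omitted):

```python
import json

def infer_flat_schema(examples: list[dict]) -> dict:
    """Infer a draft-2020-12 schema for flat JSON objects.
    Required = present in every example; null values become type unions."""
    type_map = {str: "string", bool: "boolean", int: "integer",
                float: "number", type(None): "null"}
    props: dict[str, set] = {}
    for ex in examples:
        for key, value in ex.items():
            props.setdefault(key, set()).add(type_map[type(value)])
    required = [k for k in props if all(k in ex for ex in examples)]
    return {
        "$schema": "https://json-schema.org/draft/2020-12/schema",
        "type": "object",
        "properties": {
            k: {"type": sorted(t)[0] if len(t) == 1 else sorted(t)}
            for k, t in props.items()
        },
        "required": sorted(required),
    }

examples = [{"name": "a", "age": 3}, {"name": "b", "age": None}]
print(json.dumps(infer_flat_schema(examples), indent=2))
```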

Meeting Notes → Action Items

Converts raw meeting notes into structured action items + decisions + open questions.

Productivity

Best model: Claude Haiku 4.5 / Mistral Medium 3.5

Convert these meeting notes into a structured summary.

```
[paste raw notes / transcript]
```

Output four sections:

1. **Decisions made** — what was concluded, who decided it. One bullet per decision.
2. **Action items** — `- [ ] [Owner] [What to do] by [When if specified, "next meeting" if not]`
3. **Open questions** — things raised but not resolved. Who is best positioned to resolve each.
4. **Risks / concerns surfaced** — flagged in the meeting but not addressed. Worth a follow-up.

Don't pad. If a section is empty, write "None" — don't invent items to fill it. If owner names aren't in the notes, write [Owner: TBD].

Summarize Long Thread

Summarizes a long Slack/email/forum thread to its essence.

Productivity

Best model: Gemini 3.1 Pro (long context) / Claude Sonnet 5

Summarize this thread for someone who needs to catch up but doesn't have time to read every message.

**Output structure:**
1. **TL;DR** — 1-2 sentences. What's the situation, what's the latest decision/state.
2. **Key points** — 3-7 bullets covering the substantive content (not the meta — skip "let me know what you think" filler)
3. **Decisions / commitments** — anything that was agreed to, by whom
4. **Open / unresolved** — anything still being debated or waiting on someone
5. **Recommended next action for the reader** — given they're reading this to get up to speed, what should they do?

Don't attribute every quote — just say "the team agreed" / "X pushed back on Y." Keep it terse. If the thread is mostly noise, say so and give just the TL;DR.

```
[paste thread]
```

Documentation Improvement

Identifies what's missing / unclear / outdated in a doc.

Productivity

Best model: Claude Sonnet 5

Review this documentation and identify what would frustrate a real reader trying to use it. Be a critical reader, not a polite one.

**Doc:**
```
[paste]
```

**Audience:** [who reads this — beginners? experienced engineers? non-technical staff?]

Output:
1. **Information gaps** — questions a reader will have that this doc doesn't answer
2. **Unclear sections** — paragraphs that need rewording, with specific suggestions
3. **Outdated content** — anything that references old versions, deprecated APIs, screenshots that no longer match the UI
4. **Missing context** — assumptions the doc makes that won't hold for the audience
5. **Structural issues** — wrong section order, missing TOC, no examples where examples would help

For each issue, quote the specific text and propose the fix. Don't list what's good — only what needs change. End with the single highest-impact improvement to make first.

Internal One-Pager

Drafts a one-page proposal / decision document for an internal stakeholder.

Productivity

Best model: Claude Sonnet 5

Draft a one-page proposal for [proposal type: technical decision / project plan / org change / vendor selection].

**Audience:** [who reads this — VP eng? Exec staff? Cross-functional team?]
**Decision required:** [what specific approval / commitment do you need?]
**Context:**
- Problem we're solving:
- Why now:
- What we're proposing:
- Cost (time, money, headcount):
- Risk if we don't do it:
- Risk if we do it:
- Alternatives considered and why we rejected them:

Output structure (~600 words total):

1. **TL;DR** (2 sentences — the ask + the why)
2. **Problem** (3 sentences — what hurts today, with one quantified data point if you have it)
3. **Proposal** (3-5 sentences — what specifically we'll do)
4. **Why this option** (3 bullets covering: cost, risk, expected outcome)
5. **Alternatives considered** (table: option / cost / why rejected — 2-3 rows)
6. **Ask** (1 sentence — exactly what decision / approval / commitment we need from the reader)

No marketing language. No "we're excited to propose." Direct, specific, fact-led.

AI Learning Path

Go from reading about AI to building with AI

10 structured courses. Hands-on projects. Runs on your machine. Start free.

How to use these

  1. Pick the right model. Each prompt lists the model we found gives the best result. Cheap chat tasks → Claude Haiku 4.5 / Mistral Medium. Hard reasoning / coding → Claude Opus 4.7 / GPT-5.5 Pro / Qwen3-Coder-Next.
  2. Copy the full prompt — the structure (numbered output, explicit format, "show your reasoning" instructions) is what makes it work, not the topic.
  3. Replace the bracketed placeholders with your actual content. Don't leave any [placeholders] unfilled.
  4. Iterate. If the first response misses, push back: "this missed X, redo with X addressed." Most prompts get to a great result in 1-2 follow-ups.

Why these prompts work

Three patterns recur across every prompt that consistently produces good output:

  • Explicit output structure. Numbered sections, exact format requested, length guidance. Models follow structure better than vibes.
  • Stated constraints. "No filler", "be opinionated", "don't hedge", "match the original's tone". Negative space matters.
  • Context grounding. Every prompt that handles user content includes placeholders for paste-in context — model performance is dominated by what you give it, not by the prompt template.

From copying prompts to writing your own

These prompts work because they follow patterns. Learn the patterns.

The Prompt Engineering course covers structured-output design, chain-of-thought, JSON Mode + grammars, agent prompting, and evaluation — the techniques behind every prompt above. Part of the 17-course AI Learning Path. First chapter free, no card.

Written by Pattanaik Ramswarup

Creator of Local AI Master

I build Local AI Master around practical, testable local AI workflows: model selection, hardware planning, RAG systems, agents, and MLOps. The goal is to turn scattered tutorials into a structured learning path you can follow on your own hardware.

✓ Local AI Curriculum · ✓ Hands-On Projects · ✓ Open Source Contributor