
Claude Code Mastery · Part 8 of 12

Building Complete Features

From Linear ticket to merged PR with Claude Code. A real, honest walk-through — what the prompt looked like, what the agent got right, what I caught in review.

Theory is cheap. Here is a real ticket I shipped last week with Claude Code, exactly as it played out — what I prompted, what the agent did, what I caught, what I let ship.

I'm using the techniques from Articles 3-7 (four-part prompts, sub-agents, slash commands, multi-agent pipelines). No magic, no hidden secret sauce.

The ticket

Title: Add per-user rate limiting to /api/notes/* endpoints
Acceptance:
- 60 req/min/user, sliding window
- 429 with Retry-After header on overflow
- Bypass for users with `internal: true`
- Existing tests must stay green
- Add coverage for the new logic

This is a textbook small feature: clear goal, clear constraints, testable. The kind of ticket Claude Code eats for breakfast.
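Before prompting, it helps to pin down exactly what the overflow response has to look like. Here is a minimal sketch of the 429 shape the acceptance criteria describe, using the web-standard Response that Next.js middleware can return (the helper name is mine, not from the ticket):

```typescript
// Hypothetical helper: the overflow response the acceptance criteria ask for.
// Next.js middleware can return a plain web-standard Response like this one.
function tooManyRequests(retryAfterSeconds: number): Response {
  return new Response("Too Many Requests", {
    status: 429,
    // Retry-After takes whole seconds; round up so clients never retry early.
    headers: { "Retry-After": String(Math.ceil(retryAfterSeconds)) },
  });
}
```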

Step 1 — Frame the prompt with /feature

> /feature
> Goal: Add per-user sliding-window rate limiting on /api/notes/*.
>   60 req/min/user, response 429 + Retry-After header on overflow.
>   Bypass when user.internal === true.
>
> Constraints:
>   - Reuse the existing redis client in lib/redis.ts.
>   - Do NOT add a new dependency.
>   - Implement as a Next.js middleware in middleware.ts (path-scoped).
>   - Do NOT touch existing tests.
>
> Definition of Done:
>   - pnpm test passes.
>   - New vitest covers: under-limit pass, over-limit 429, internal bypass.
>   - pnpm lint clean.
>
> Files:
>   - middleware.ts (entry point)
>   - lib/redis.ts (existing client)
>   - lib/rate-limit.ts (NEW)
>   - tests/rate-limit.test.ts (NEW)

Notice the four parts: Goal, Constraints, Definition of Done, and Context (the Files list). Article 3 wasn't kidding.

Step 2 — Implementer takes over

I switched to my implementer sub-agent and let it run. Here is roughly what it did, narrated:

  1. cat middleware.ts, cat lib/redis.ts — built context.
  2. Drafted lib/rate-limit.ts with a sliding-window using Redis sorted sets.
  3. Wrote middleware.ts to call the limiter on /api/notes/* and respect the internal flag.
  4. Wrote tests/rate-limit.test.ts with the three cases from the DoD.
  5. Ran pnpm test. One test failed — the 429 wasn't including Retry-After.
  6. Re-read its own code, found the missing header, fixed, reran. Green.
  7. Ran pnpm lint. Clean.
  8. Emitted a structured handoff: status: ok, artifacts: [...].

Wall clock: ~6 minutes. Tokens: ~14k. My intervention: zero so far.
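For readers who want the shape of step 2: here is a minimal in-memory sketch of the same sliding-window algorithm. The real lib/rate-limit.ts uses Redis sorted sets; the names and the injectable clock below are mine, added so the logic can be followed (and tested) without Redis:

```typescript
// In-memory stand-in for the Redis sorted-set sliding window.
// Comments note the rough Redis equivalent of each step.
type Clock = () => number;

function createRateLimiter(limit: number, windowMs: number, now: Clock = Date.now) {
  const hits = new Map<string, number[]>(); // userId -> request timestamps, oldest first

  return function check(userId: string): { allowed: boolean; retryAfterMs: number } {
    const t = now();
    const windowStart = t - windowMs;
    // Evict timestamps that fell out of the window (Redis: ZREMRANGEBYSCORE).
    const recent = (hits.get(userId) ?? []).filter((ts) => ts > windowStart);
    if (recent.length >= limit) {
      // The oldest surviving hit decides when capacity frees up.
      const retryAfterMs = recent[0] + windowMs - t;
      hits.set(userId, recent);
      return { allowed: false, retryAfterMs };
    }
    recent.push(t); // Record this hit (Redis: ZADD key timestamp timestamp).
    hits.set(userId, recent);
    return { allowed: true, retryAfterMs: 0 };
  };
}
```

The injectable clock is also what makes the "over-limit 429" test case deterministic: the test advances a fake clock instead of sleeping.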

Step 3 — code-reviewer as a second pair of eyes

> /agents code-reviewer
> Review the diff.

The reviewer's verdict (paraphrased):

SHIP, with two notes:

1. Bug-adjacent: the sliding window uses Date.now() inside the Redis ZADD.
   In a multi-region deploy with clock drift this can cause off-by-one
   eviction. Recommend scoring the ZADD entries with Redis TIME instead.

2. Style: rate-limit.ts exports a class but is used as a function elsewhere.
   Recommend exporting a factory function for consistency with lib/cache.ts.

Both are non-blocking for the ticket, but worth a follow-up.
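For the first note, the fix the reviewer has in mind looks roughly like this: ask Redis for its own clock and use that as the ZADD score, so every region writes against one clock. A sketch assuming an ioredis-style client, whose TIME command returns seconds and microseconds as strings:

```typescript
// Minimal client surface this sketch relies on; ioredis exposes time() like this.
interface RedisTime {
  time(): Promise<[string, string]>; // [unix seconds, microseconds]
}

// Use the Redis server clock instead of Date.now() for sorted-set scores,
// so multi-region deploys with clock drift still evict consistently.
async function serverNowMs(redis: RedisTime): Promise<number> {
  const [seconds, microseconds] = await redis.time();
  return Number(seconds) * 1000 + Math.floor(Number(microseconds) / 1000);
}
```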

This is exactly the value of a separate reviewer agent. The implementer's code worked; the reviewer flagged two real concerns the implementer would never raise about its own work.

I made a snap call: ship the feature, file follow-up tickets for both notes. That is a human decision, not an agent decision.
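The second note's refactor, for its follow-up ticket, is small. A hypothetical before/after, assuming lib/cache.ts exports factories (names here are illustrative, not the actual module):

```typescript
// Before: the module exports a class, but call sites use it like a function.
class RateLimiter {
  check(userId: string): boolean {
    // ... sliding-window lookup elided in this sketch ...
    return true;
  }
}

// After: a factory hides the class and keeps call sites uniform
// with the (assumed) factory style of lib/cache.ts.
function createRateLimiter(): (userId: string) => boolean {
  const limiter = new RateLimiter();
  return (userId) => limiter.check(userId);
}
```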

Step 4 — Diff review by me (the human)

I opened the diff. About 180 lines across 3 files. I read it top to bottom.

What I caught that neither agent did:

  • The new lib/rate-limit.ts had a comment that just said // rate limit logic. I deleted it (per Article 3's anti-pattern: don't narrate the code).
  • The test imports were not sorted. Tiny lint config gap on my side, not the agent's.

That is what review looks like in this workflow: 5 minutes of reading, 2 micro-edits, ship.

Step 5 — Release notes + PR description

> /agents release-bot
> Draft a PR description and a CHANGELOG entry.

Output (lightly edited):

## Add per-user rate limiting on /api/notes/*

- Sliding window, 60 req/min/user, Redis-backed.
- 429 with `Retry-After` header.
- Bypass for `user.internal === true`.

Closes ENG-2451.

I copied that into `gh pr create` and pushed. Total wall clock for the ticket: about 22 minutes. Of that, ~6 was the agent, ~5 was the reviewer agent, ~7 was my diff review, ~4 was PR + push.

What this workflow looked like a year ago, by hand

For me, this same ticket would have been a 90-minute job:

  • 20 min reading existing code.
  • 30 min implementing.
  • 20 min on the test (esp. mocking the Redis sorted set).
  • 10 min on lint / cleanup.
  • 10 min PR description.

22 min vs 90 min is the headline number. But the more honest unlock is the reduction in cognitive load. I was not deeply focused for all 22 minutes; I was actively reviewing for 12 of them, and waiting or thinking about the next ticket for the rest.

Things I did NOT delegate

  • The decision on which Redis primitive to use (sorted set vs counter w/ TTL). I told the agent.
  • The decision on bypass semantics (internal: true vs a new role). I decided in the prompt.
  • The push to remote. Always human.

This is the right division of labour. The agent owns implementation. I own design and irreversible actions.

A pattern you can steal

For every "small feature" ticket:

  1. Spend 5 minutes writing a four-part prompt. Literally write it down.
  2. implementer runs to green tests.
  3. code-reviewer runs, reads diff, notes risks.
  4. You read the diff. 5 minutes.
  5. release-bot drafts the PR description.
  6. You push.

Do this 10 times in a week. By Friday it will be the most natural workflow you've ever had.


Next article: Testing and Debugging — how to let Claude Code own the entire test loop, including the parts that make engineers nervous (regressions, flakies, integration tests).


