Claude Code in My Daily Workflow: What Actually Works
I've been using Claude Code daily for months now. Not as an experiment — as a core part of how I build software. Every project I work on, every client engagement, every open source package. It's open in a terminal pane next to my editor, and I talk to it more than I talk to most humans during the work day.
Here's the honest version.
What It's Actually Good At
Codebase exploration is where it shines brightest. "Find everywhere we handle webhook processing" or "show me how subscription billing flows through the app" — queries that would take me 15 minutes of grepping and file-hopping take Claude Code about 30 seconds. It reads files, follows references, and gives you a coherent answer. This alone is worth having it running.
Boilerplate generation is the obvious one. Need a new controller with form requests, a migration, a factory, and a Pest test? Describe what you want, watch it happen. I still review everything it produces — but the time from "I need a thing" to "here's a first draft" dropped from 20 minutes to 2 minutes.
Test writing is surprisingly good. Give it a class and say "write Pest tests for this, covering happy path, failure cases, and edge cases" and you'll get something 80% right. The other 20% needs adjustment — maybe a factory state doesn't exist, maybe it misunderstands a business rule. But 80% of a test suite written in minutes is a massive head start.
Refactoring existing code is underrated. "Extract this into an Action class following the pattern in app/Actions/" works remarkably well because it can read the existing patterns and replicate them. It's essentially a very good copy-paste-and-adapt machine.
Where It Genuinely Struggles
Complex business logic across multiple files. If the logic spans a controller, two Actions, a service, a model with custom scopes, and a queue job — Claude Code will lose the thread. It tries, but it makes assumptions about how data flows between files that are often wrong. The more files involved, the less reliable it gets.
Understanding "why" not just "what." It can tell you what code does. It struggles to tell you why a particular decision was made. "Why do we process refunds differently for annual vs monthly subscriptions?" requires business context that isn't in the code. This is where experienced developers still matter enormously.
Novel architecture decisions. "Should we use event sourcing for this feature?" isn't something Claude Code can meaningfully answer. It'll give you a balanced pros-and-cons list that reads like a blog post, but it won't make the call. Architecture is still a human job.
Knowing when to stop. Ask it to make a change and it'll sometimes keep going — adding related changes you didn't ask for, refactoring adjacent code, "improving" things that were fine. You have to be specific about scope or it'll run wild.
The CLAUDE.md Game Changer
This is the single most impactful thing I've done with Claude Code. Every project gets a CLAUDE.md file in the root, and it completely changes the quality of output.
Mine includes:
- The stack and key packages with versions
- Hard rules ("never commit to main", "always use Actions for business logic")
- How to run tests (the exact command, because our test command has env variable overrides)
- Generator commands to use instead of writing boilerplate
- Links to coding standards documents
- Personal preferences ("be concise", "don't over-engineer")
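As a rough sketch, here's the shape mine takes. Every detail below is illustrative rather than copied from a real project — in particular, `make:action` is a custom generator, not a stock Laravel command, and your test invocation will differ:

```markdown
# CLAUDE.md

## Stack
- Laravel 11, PHP 8.3, Pest 2

## Hard rules
- Never commit to main
- All business logic lives in Action classes under app/Actions/

## Running tests
- Use exactly: `APP_ENV=testing php artisan test`
  (env overrides live in phpunit.xml — don't run `pest` directly)

## Generators
- Prefer `php artisan make:action` (our custom generator) over
  hand-writing boilerplate

## Preferences
- Be concise; don't over-engineer
```

The specifics matter less than the categories: stack, rules, exact commands, generators, preferences. Start small and let corrections you find yourself repeating graduate into the file.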
I also have a global ~/.claude/CLAUDE.md for cross-project preferences — my shell aliases, editor setup, general coding philosophy.
Before CLAUDE.md, I'd spend half my time correcting Claude Code's output. Wrong test commands, wrong directory structures, controllers that should have been Actions. After CLAUDE.md, it gets the project conventions right about 90% of the time on the first attempt.
If you take one thing from this article: write a CLAUDE.md file. Spend an hour on it. Update it as you learn what Claude Code gets wrong. It's the highest-ROI hour you'll spend.
My Actual Daily Workflow
Here's how a typical day looks:
Morning — planning and exploration. I'll open Claude Code and ask it to summarise recent changes, remind me where I left off, or explore a part of the codebase I haven't touched in a while. "What does the webhook processing pipeline look like?" type questions.
Building — iterative conversation. I describe what I want to build, usually feature by feature. "Create a new Action that processes incoming feed items, following the pattern in ProcessTranscript." Then review, adjust, ask for tests, review those. It's a conversation, not a magic wand.
Testing — bulk generation then refinement. I'll ask for a full test suite for whatever I've built, then go through each test adjusting assertions, adding edge cases, removing tests that don't add value.
Refactoring — end of feature. Once something works, I'll sometimes ask Claude Code to review it. "Look at what we just built. Any obvious improvements? Anything not following the patterns in the rest of the codebase?" It catches things. Not always, but often enough.
The Productivity Numbers
I tracked my output over a three-month period, roughly compared against my pre-Claude Code pace. These aren't scientific — they're vibes with data:
- Boilerplate/scaffolding: 4-5x faster. This is the biggest win.
- Test writing: 3x faster. Good first drafts, need refinement.
- Bug investigation: 2x faster. The codebase exploration is genuinely useful.
- Complex feature development: Maybe 1.3x faster. The gains are smaller because the hard part is thinking, not typing.
- Overall weekly output: Roughly 2x what I was doing before. But the quality of that output varies — I review more carefully now because AI-generated code can be subtly wrong.
Things I Tried That Didn't Work
Letting it make architectural decisions. I tried describing a feature at a high level and letting Claude Code design the architecture. The results were technically correct but always too complex. It defaults to "proper" solutions when simple ones would do.
Long autonomous sessions. Giving it a big task and walking away doesn't work well. It'll make a decision in step 3 that cascades into problems by step 10. Short, iterative loops with human review at each step are the way.
Using it for code review. I tried having it review PRs. It finds formatting issues and obvious bugs, but it misses the things a human reviewer catches — "this approach won't scale", "we tried this before and it didn't work", "the product team will change their mind about this in two weeks."
Who Should Use It
If you're a solo developer or on a small team, Claude Code is worth the subscription. The productivity gains on boilerplate and exploration alone justify the cost. If you write a good CLAUDE.md and learn to work with it iteratively, you'll ship faster.
If you're a junior developer, use it carefully. It'll make you productive before you understand why things work, and that's a trap. You need to understand the code it writes, not just accept it.
If you're on a large team with established processes, the value is less clear. You've already got code review, architectural guidance, and institutional knowledge. Claude Code adds less when those things already exist.
The Honest Summary
Claude Code is the best development tool I've added to my workflow in years. It's not magic. It doesn't write production code unsupervised. It's a very fast, very knowledgeable junior developer who needs clear instructions and regular review.
The CLAUDE.md file is the key. Without it, you're fighting the tool. With it, you're collaborating with it. Big difference.
I write about Laravel, AI tooling, and the realities of building software products. If you found this useful, there's more on stuartmason.co.uk.