Building an AI Developer Intelligence Platform in 10 Days
Ten working days. Two developers. An idea that sounded too ambitious on day one and was live in production on day ten. DevTrends — a developer intelligence platform that ingests content from over 100 sources, processes it through AI pipelines, and makes the resulting intelligence available through both a web dashboard and MCP tools.
Here's how it actually happened.
Day 1-2: The Idea and the Architecture
The pitch was straightforward: developers are drowning in content. Blog posts, release notes, conference talks, podcasts, newsletters — there's too much to keep up with. What if we could ingest all of it, process it through AI to extract what actually matters, and present it in a way that's useful?
Emma and I sat down and sketched the architecture on day one. We knew a few things immediately:
- Laravel for the backend. Not even a discussion. It's what we know, it's fast to build with, and the queue system would be essential for processing.
- Queue-driven pipeline. Every piece of content would go through: fetch → clean → extract → classify → store. Each step a separate job.
- PostgreSQL for storage with full-text search. We considered Meilisearch but didn't want another service to manage in a 10-day sprint.
- Inertia + React for the frontend. Standard stack, no surprises.
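On the Postgres full-text decision: Laravel's query builder supports full-text indexes and queries on Postgres directly, so skipping Meilisearch doesn't mean writing raw tsvector SQL. A hypothetical query sketch (the `ContentItem` model name and columns are assumptions, not our actual schema):

```php
use App\Models\ContentItem;

// Assumes a migration created the index, e.g. $table->fullText(['title', 'body']);
$results = ContentItem::query()
    ->whereFullText(['title', 'body'], 'queue workers')
    ->latest('published_at')
    ->limit(20)
    ->get();
```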
The key architectural decision was making the AI processing completely asynchronous. Content comes in, gets queued, gets processed in the background. The web layer never waits for AI. This meant the dashboard could be fast from day one, even when processing was still running.
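In Laravel terms, "the web layer never waits" just means the request cycle ends the moment a job lands on the queue. A sketch of the ingest path (the route and flash message are hypothetical; `FetchSourceJob` is the real job name from our pipeline):

```php
use App\Jobs\FetchSourceJob;
use App\Models\Source;
use Illuminate\Support\Facades\Route;

Route::post('/sources/{source}/refresh', function (Source $source) {
    // Enqueue and return immediately; queue workers do the slow AI work later.
    FetchSourceJob::dispatch($source)->onQueue('ingestion');

    return back()->with('status', 'Refresh queued');
});
```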
Day 3-4: The Ingestion Engine
This was Emma's domain. Building feed readers for different source types: RSS feeds, API endpoints, web scraping for sources without feeds. The challenge wasn't any individual source — it was handling 100+ sources reliably.
Sources → FetchSourceJob → ParseContentJob → Store Raw Content
Each source got a driver — an interface with fetch() and parse() methods. RSS sources used one driver, API sources another. We could add new source types without touching the pipeline.
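A minimal reconstruction of that contract, with an RSS driver as the example (field names and the `SourceDriver` interface name are illustrative, not our exact code):

```php
interface SourceDriver
{
    /** Fetch the raw payload (feed XML, JSON, HTML) for a source URL. */
    public function fetch(string $url): string;

    /** Parse the raw payload into an array of item arrays. */
    public function parse(string $raw): array;
}

final class RssDriver implements SourceDriver
{
    public function fetch(string $url): string
    {
        return file_get_contents($url);
    }

    public function parse(string $raw): array
    {
        $xml = simplexml_load_string($raw);
        $items = [];
        foreach ($xml->channel->item as $item) {
            $items[] = [
                'title' => (string) $item->title,
                'url'   => (string) $item->link,
                'body'  => (string) $item->description,
            ];
        }
        return $items;
    }
}
```

New source types plug in by implementing the same two methods; the pipeline only ever sees `fetch()` and `parse()`.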
The mistake we made here: trying to normalise content too early. Different sources have wildly different structures. A blog post is not a release note is not a podcast transcript. We wasted half a day trying to force them into a single schema before accepting that the raw content should stay raw, and normalisation should happen during AI processing.
Day 5-6: The AI Processing Pipeline
My bit. Taking raw content and turning it into structured intelligence. The pipeline:
- Clean — strip HTML, remove boilerplate, extract the actual content
- Summarise — generate a concise summary using Claude
- Extract — pull out key topics, technologies mentioned, sentiment
- Classify — categorise by relevance, technology area, content type
- Link — find connections between this content and other content we've processed
Each step was a separate queue job. If summarisation failed (API timeout, rate limit), it didn't block extraction. Jobs could retry independently. This was crucial — with 100+ sources pushing content daily, you can't have one API hiccup stall everything.
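The shape of one of those jobs, sketched with Laravel's per-job retry hooks (job name, tries, and delays are illustrative; the fan-out pattern is the point, since summarise and extract are dispatched independently rather than chained):

```php
final class SummariseContentJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    /** Retry this job up to five times before failing it. */
    public int $tries = 5;

    /** Escalating delays between retries, in seconds. */
    public function backoff(): array
    {
        return [30, 60, 120, 300];
    }

    public function __construct(public int $contentId) {}

    public function handle(): void
    {
        // ...call the model, store the summary. A timeout here retries this
        // job only — ExtractTopicsJob carries on unaffected.
    }
}
```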
The prompt engineering took longer than the code. Getting Claude to consistently return structured JSON with the fields we needed, in the format we expected, with the quality we wanted — that was iteration after iteration. We went through maybe 15 prompt revisions across the sprint.
Caching was aggressive. Same content hash? Skip processing. Similar content from the same source within 24 hours? Probably a duplicate, skip it. We were burning through API credits during testing and needed to be smart about it.
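The "same content hash? skip it" check is simple enough to show in full. A plain-PHP sketch (function names are illustrative; in the real app the seen-hash lookup would hit the database, not an in-memory array):

```php
function contentHash(string $raw): string
{
    // Normalise tags, whitespace, and case so trivial formatting
    // changes still produce the same hash.
    $normalised = strtolower(trim(preg_replace('/\s+/', ' ', strip_tags($raw))));

    return hash('sha256', $normalised);
}

function shouldProcess(string $raw, array &$seenHashes): bool
{
    $hash = contentHash($raw);

    if (isset($seenHashes[$hash])) {
        return false; // already processed this exact content: skip it
    }

    $seenHashes[$hash] = true;

    return true;
}
```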
Day 7-8: The Dashboard and MCP Integration
The dashboard came together fast because the data layer was solid. Inertia pages pulling from well-structured Eloquent models. Trends over time, topic clustering, source reliability scores. Standard Laravel CRUD with some charting on top.
The MCP integration was the interesting part. We wanted DevTrends data to be accessible to AI tools — so developers could ask their AI assistant "what's trending in Laravel this week?" and get an answer backed by our processed data.
Laravel's MCP package made this surprisingly straightforward. We defined tools that queried our processed data:
```php
// Simplified — the actual implementation has more parameters
Mcp::tool('get_trends', function (string $topic, int $days = 7) {
    return Trend::query()
        ->where('topic', $topic)
        ->where('detected_at', '>=', now()->subDays($days))
        ->with('sources')
        ->get()
        ->map(fn ($trend) => [
            'topic' => $trend->topic,
            'momentum' => $trend->momentum_score,
            'sources' => $trend->sources->count(),
            'summary' => $trend->ai_summary,
        ]);
});
```
This was probably the most forward-looking decision we made. The web dashboard is useful, but the MCP integration means DevTrends data flows into whatever AI workflow developers are already using.
Day 9: Testing and Breaking Things
Day nine was dedicated to testing. Not just Pest tests (though we wrote plenty) — actually running the full pipeline with real data and seeing what broke.
What broke:
- Memory issues with large podcast transcripts. Some were 50,000+ words. We added chunking for content over a certain size.
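The chunking we added was word-based with a small overlap so context isn't lost at chunk boundaries. A sketch (the limits here are illustrative, not our production values; overlap must stay below the chunk size):

```php
function chunkTranscript(string $text, int $maxWords = 4000, int $overlap = 200): array
{
    $words = preg_split('/\s+/', trim($text));

    if (count($words) <= $maxWords) {
        return [$text]; // small enough to process in one go
    }

    $chunks = [];

    // Step forward by (maxWords - overlap) so consecutive chunks share context.
    for ($i = 0; $i < count($words); $i += $maxWords - $overlap) {
        $chunks[] = implode(' ', array_slice($words, $i, $maxWords));
    }

    return $chunks;
}
```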
- Rate limiting from the Claude API when processing a batch of 50+ items. Added better backoff logic and parallelism controls.
- Duplicate detection wasn't aggressive enough. Some sources republish content with minor changes. We tightened the similarity threshold.
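"Tightening the similarity threshold" amounts to comparing shingle sets and raising the cut-off. A plain-PHP sketch using Jaccard similarity over three-word shingles (one reasonable approach; the threshold value is illustrative, not what we shipped):

```php
function jaccardSimilarity(string $a, string $b): float
{
    // Build a set of three-word shingles for a piece of text.
    $shingles = function (string $text): array {
        $words = preg_split('/\s+/', strtolower(trim($text)));
        $set = [];
        for ($i = 0; $i < count($words) - 2; $i++) {
            $set[implode(' ', array_slice($words, $i, 3))] = true;
        }
        return $set;
    };

    $setA = $shingles($a);
    $setB = $shingles($b);

    if (!$setA || !$setB) {
        return 0.0;
    }

    $intersection = count(array_intersect_key($setA, $setB));
    $union = count($setA + $setB);

    return $intersection / $union;
}

// Anything above the threshold is treated as a republished duplicate.
function isDuplicate(string $a, string $b, float $threshold = 0.8): bool
{
    return jaccardSimilarity($a, $b) >= $threshold;
}
```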
- The dashboard was slow when loading trend data across all topics. Added database indexes we should have added on day four.
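The day-nine fix was a short migration. A sketch of the kind of indexes involved — `topic` and `detected_at` match the trend query shown earlier, while the `content_items` columns are assumptions:

```php
// Composite index covering the dashboard's where(topic) + where(detected_at) query.
Schema::table('trends', function (Blueprint $table) {
    $table->index(['topic', 'detected_at']);
});

Schema::table('content_items', function (Blueprint $table) {
    $table->index('source_id');
    $table->index('published_at');
});
```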
This is why you leave a day for breaking things. Every ambitious project has a day-nine list. If you don't, you shipped something you haven't actually tested.
Day 10: Production
Deployed to production via Coolify. DNS, SSL, environment variables, queue workers, scheduler. The boring stuff that takes half a day even when you've done it a hundred times.
By end of day we had: 100+ sources ingesting, AI pipeline processing content, dashboard showing trends, MCP tools responding to queries. 81 commits across 10 days. Two developers. No all-nighters (well, one late evening on day six when the prompt engineering wasn't cooperating).
The Commit History Tells the Story
81 commits in 10 days is roughly 8 per day. That's not "move fast and break things" — that's steady, deliberate progress. Small commits, each one doing one thing. The git log reads like a build diary:
- Day 1: Project setup, models, migrations
- Day 2: Source drivers, feed parsing
- Day 3-4: Ingestion pipeline, queue configuration
- Day 5-6: AI processing jobs, prompt iteration
- Day 7: Dashboard pages, data visualisation
- Day 8: MCP integration, API endpoints
- Day 9: Bug fixes, performance improvements, tests
- Day 10: Deployment, production configuration
What We Got Right
The queue-driven architecture. Making everything async from day one meant we never had to retrofit it. Processing could be slow, fail, retry — the user experience was never affected.
Two-person team with clear domains. Emma owned ingestion and data. I owned AI processing and the frontend. We barely had merge conflicts because we were working in different parts of the codebase.
Shipping incrementally. We had something deployable by day five. Not complete, but functional. This meant we could test with real data from day six onwards, catching issues early.
What We'd Change
Start with fewer sources. We tried to support 100+ sources from day one. Should have started with 20, got the pipeline perfect, then scaled up. We spent time on edge cases for obscure source formats that could have waited.
Better prompt versioning. We iterated on prompts by editing them in place. Should have versioned them from the start so we could track what changed and roll back when an "improvement" made things worse.
More aggressive caching during development. We burned through more API credits than necessary during the build phase. A development mode that cached everything by default would have saved money and time.
The Two-Person Advantage
Working with Emma on this was a masterclass in why small teams ship faster. No standups. No sprint planning. A shared Notion page and a Slack channel. "I'm doing X, you do Y, sync at lunch." That's the process. It works when both people are senior enough to make decisions independently and aligned enough on the direction.
Ten days. From nothing to production. Not because we're geniuses — because we used tools we know well, made decisions quickly, and didn't over-engineer anything. Laravel, queues, AI APIs, and two developers who trust each other's judgement. That's the recipe.
I write about Laravel, AI tooling, and the realities of building software. More at stuartmason.co.uk.