I spent six years at FAANG companies watching architecture debates consume entire sprints. Three senior engineers in a room, whiteboard covered in boxes and arrows, arguing about whether to use event sourcing or CQRS for a feature that might get killed next quarter. Sound familiar?
After 4,000+ hours of building with AI coding agents, I have a confession: I don't debate architecture anymore. I build both options and let the results speak.
The Old Way: Design Docs as a Way of Life
At my last FAANG role, the process looked something like this. Someone proposes a new feature or a refactor. You spend three to four days writing a design document. Then two to three other engineers review it. They leave comments. You have a meeting. Disagreements surface. Someone suggests an alternative approach. You rewrite sections. Another round of review. Eventually, you reach consensus -- or more accurately, the most senior person in the room gets tired of debating and just picks one.
The entire process takes one to two weeks. And here's the thing nobody wants to admit: after all that deliberation, you still don't actually know if the chosen architecture will perform well under production load, integrate cleanly with the existing codebase, or surface edge cases you didn't think of during the design phase.
You're making a bet. An educated one, sure, but still a bet. With weeks of analysis paralysis baked in.
The New Reality: Build Both, Decide with Evidence
Here's what my workflow looks like now. I identify an architecture decision that needs to be made -- say, choosing between a microservice and a modular monolith approach for a new subsystem. Instead of writing a design doc comparing them theoretically, I spin up both.
Day one: I feed my AI agent the full context. System requirements, existing interfaces, data models, performance targets. I give it Approach A and tell it to build a working prototype.
Day two: Approach A is functional enough to run integration tests against. I note the results.
Day three: Same context, different instructions. Build Approach B.
Day four: Both approaches exist. I run them side by side. Actual benchmarks. Actual integration points. Actual code complexity metrics.
By the end of the week, I have something no design document ever gave me: evidence. Real, measurable evidence about which approach is better for this specific situation.
The Mental Model Shift
The key insight is this: code is now cheap to produce. The bottleneck moved from "writing code" to "making decisions."
When code was expensive -- when every line required a human to think it through, type it, test it, debug it -- it made sense to front-load decisions with extensive planning. You couldn't afford to build the wrong thing.
But AI agents changed the cost equation dramatically. A prototype that would have taken two weeks of developer time can be assembled in hours. Not perfect code, not production-ready code, but enough to validate whether an approach has legs.
This doesn't mean planning is dead. It means planning can now include empirical validation at a cost that was previously unthinkable. You're not guessing anymore. You're testing.
How to Run an Architecture Race
After doing this dozens of times, I've developed a repeatable process that I call an "architecture race." Here's how to set one up:
1. Define the Decision Boundary
Be specific about what you're deciding. Not "how should we build the backend" but "should the notification subsystem use a push model or a pull model?" Narrow scope means faster prototypes.
2. Write the Shared Context Document
Both approaches need the same inputs. Write one document that covers the requirements, constraints, existing interfaces, performance targets, and acceptance criteria. This document is the same context you feed to your AI agent for both implementations. If you're using Claude Code, put this in your CLAUDE.md or a dedicated context file.
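To make this concrete, here is a minimal sketch of what a shared context file might look like, using the push-vs-pull notification example. Every name, number, and path here is hypothetical -- the point is the shape: requirements, constraints, interfaces, and acceptance criteria in one place.

```markdown
# Architecture Race: Notification Subsystem (push vs. pull)

## Requirements
- Deliver notifications within 2 seconds of the triggering event (hypothetical target)
- Handle roughly 10k events/minute at peak

## Existing Interfaces
- Events arrive on the `events.notifications` topic (hypothetical name)
- Delivery goes through the existing `NotificationSender` service (hypothetical)

## Acceptance Criteria
- Passes the existing integration suite in `tests/integration/notifications/`
- p95 end-to-end latency under 2s in the benchmark harness
```

Both agents get this file verbatim. The only thing that differs between runs is the one-paragraph instruction naming Approach A or Approach B.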
3. Build Approach A in Isolation
Create a clean branch. Give the agent the shared context plus specific instructions for Approach A. Let it build. Don't over-optimize -- you want a working prototype, not production code. Focus on the core architectural concern.
4. Build Approach B the Same Way
Different branch, same shared context, different architectural direction. The agent doesn't carry over assumptions from Approach A, which is exactly what you want.
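One convenient way to keep the two builds isolated is git worktrees: each prototype gets its own directory and branch, both cut from the same base commit. A sketch -- the branch and directory names are arbitrary:

```shell
# Create two isolated worktrees from the current commit.
# Each gets its own new branch; neither can see the other's changes.
git worktree add ../race-approach-a -b race/approach-a
git worktree add ../race-approach-b -b race/approach-b

# Run a separate agent session in each directory. When the race is over:
#   git worktree remove ../race-approach-a
#   git worktree remove ../race-approach-b
```

Because both worktrees start from the same commit, any difference you measure later comes from the architecture, not from drift in the starting point.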
5. Define Comparison Criteria Before Looking at Results

Decide in advance what matters. Response latency? Code complexity? Number of integration points? Memory footprint? Write your evaluation criteria before you see the code. This prevents post-hoc rationalization.
6. Compare and Decide
Run both prototypes against the same test suite. Measure against your criteria. The answer is usually obvious. And when it's not obvious -- when both approaches are roughly equivalent -- that tells you something valuable too: the architecture choice doesn't matter much for this problem, so pick the simpler one and move on.
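Steps 5 and 6 together can be as small as one harness: fix the criteria up front, run both prototypes through the same workload, and print the numbers. A sketch, with hypothetical stand-in functions (`push_deliver`, `pull_deliver`) where your two branch entry points would go:

```python
import statistics
import time

# Step 5: criteria decided BEFORE looking at results -- what we measure,
# and which direction counts as better.
CRITERIA = {
    "p50_latency_ms": "lower",
    "p95_latency_ms": "lower",
}

def push_deliver(event):
    # Hypothetical stand-in for Approach A: deliver on publish.
    return f"delivered:{event}"

def pull_deliver(event, queue):
    # Hypothetical stand-in for Approach B: enqueue, consumer polls later.
    queue.append(event)
    return f"queued:{event}"

def benchmark(fn, n=1000):
    # Same workload for both prototypes; collect per-call latency in ms.
    samples = []
    for i in range(n):
        start = time.perf_counter()
        fn(i)
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return {
        "p50_latency_ms": statistics.median(samples),
        "p95_latency_ms": samples[int(0.95 * len(samples))],
    }

queue = []
results = {
    "push": benchmark(push_deliver),
    "pull": benchmark(lambda e: pull_deliver(e, queue)),
}
for name, metrics in results.items():
    print(name, {k: round(v, 4) for k, v in metrics.items()})
```

In a real race you would import each branch's actual entry point instead of the stubs, and the metrics would come from your real integration suite -- but the discipline is the same: the criteria dict is written before either number is printed.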
When This Doesn't Work
I want to be honest about the limits. Architecture races work best for bounded decisions with clear evaluation criteria. They're less useful for cross-cutting concerns that touch everything in your system, decisions that depend heavily on long-term maintenance costs that you can't measure with a prototype, or organizational alignment decisions where the "right" architecture depends on team structure.
For those situations, you still need human judgment and probably a meeting. But at least 70% of the architecture debates I've been in could have been resolved faster with prototypes than with documents.
The Bigger Point
This shift isn't just about architecture decisions. It's a fundamental change in how we should think about software engineering in the age of AI agents.
When code is cheap, the expensive thing is decision-making under uncertainty. And the best way to reduce uncertainty isn't more meetings or longer documents. It's running experiments.
AI agents make experimentation nearly free. The engineers who figure this out first -- the ones who stop debating and start building -- will ship faster and make better decisions than everyone still arguing in front of a whiteboard.
I stopped debating architecture six months ago. I haven't looked back.
Want help setting up architecture races for your team?
We help engineering teams build AI agent workflows that turn debates into data. Book a free strategy call.