When AI Writes the Code, Engineers Become Architects
Last week I needed to organize about 400 photos I’d received — duplicates scattered everywhere, no structure, a mess. I didn’t write a single line of code. I described what I wanted, AI did the work, and thirty minutes later I had a clean, organized folder.
It was a one-off task. I didn’t think about scaling. I didn’t worry about decoupling or abstractions. I didn’t architect anything for the long term. I just needed it done, and it got done.
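(For the curious: a minimal sketch of the kind of throwaway script an AI assistant might produce for a task like this, assuming deduplication by content hash and grouping by month. The folder paths and grouping rule here are hypothetical, not the actual script.)

```python
# Hypothetical sketch: deduplicate photos by content hash, then sort into
# year-month folders. Paths and rules are invented for illustration.
import hashlib
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path("~/Downloads/photos_dump").expanduser()  # hypothetical source folder
TARGET = Path("~/Pictures/organized").expanduser()     # hypothetical destination

seen_hashes = set()
for photo in SOURCE.rglob("*"):
    if not photo.is_file():
        continue
    digest = hashlib.sha256(photo.read_bytes()).hexdigest()
    if digest in seen_hashes:  # exact duplicate: skip it
        continue
    seen_hashes.add(digest)
    taken = datetime.fromtimestamp(photo.stat().st_mtime)  # crude: file mtime, not EXIF
    dest_dir = TARGET / taken.strftime("%Y-%m")
    dest_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(photo, dest_dir / photo.name)
```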
That’s the easy part now.
But what happens when the task isn’t a one-off? When the code has to last? When someone else will need to understand it, modify it, extend it? When today’s quick solution becomes tomorrow’s technical debt?
That’s when the real work begins. And it’s not writing code.
The Role Didn’t Disappear — It Shifted
There’s a conversation happening right now about what happens to software engineers when AI can generate most of the code. Some people think the role disappears. I think they’re looking at the wrong part of the job.
For 35 years, I’ve watched the “writing code” part of software engineering get progressively easier. Better languages, frameworks, tooling. Each wave promised to change everything. (Anyone remember CA-Visual Objects? Exactly.) Each wave made the mechanical act of producing code faster.
And each wave made the other parts of the job more important — understanding what to build, designing systems that last, making tradeoffs that compound well over time.
AI is the same pattern, just accelerated.
When code generation becomes cheap and fast, the bottleneck moves upstream. The constraint isn’t “can we build it?” anymore. The constraint is “should we build it?” and “will this still make sense in two years?”
Those are architecture and design questions — judgment calls, not code.
What “Knowing What Good Looks Like” Actually Means
One phrase keeps surfacing: knowing what good looks like.
It’s easy to say. It’s harder to define. Let me try.
When AI generates code, someone has to evaluate it. Not just “does it run?” but:
- Does it fit the patterns we’ve established in this codebase?
- Does it handle the edge cases we know about from production?
- Does it create coupling that will hurt us later?
- Is it solving the actual problem, or a simplified version of the problem?
- Will the next engineer understand why this code exists?
None of those questions have answers you can look up. They require context — about the system, the team, the business, the history of decisions that got you here.
That context lives in people. AI can generate code that passes tests. It’s less equipped to tell you whether the tests are testing the right things.
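A small, hypothetical illustration of that last point. The function and the first test are invented for this example, and both come back green; only someone who knows the production history knows that the second test is the one that matters.

```python
# Hypothetical example: a test suite that passes without testing the right thing.
# Production taught us the feed sometimes delivers the same order twice, but the
# original test only covers the clean case, so a naive implementation stays green.
def total_revenue(orders: list[dict]) -> float:
    """Sum order amounts. (Known production wrinkle: duplicate order IDs.)"""
    return sum(order["amount"] for order in orders)

def test_total_revenue():
    orders = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 5.0}]
    assert total_revenue(orders) == 15.0  # passes, but never exercises duplicates

def test_total_revenue_ignores_duplicates():
    # The test that context tells you to write: the same order delivered twice.
    orders = [{"id": 1, "amount": 10.0}, {"id": 1, "amount": 10.0}]
    assert total_revenue(orders) == 10.0  # fails against the naive implementation above
```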
The Question That Matters
When AI generates code quickly, it’s tempting to accept solutions that work now. Speed feels like progress. But speed without direction just gets you lost faster.
The question I keep coming back to: Will this decision make future decisions easier or harder?
That’s it. Before accepting AI-generated code, before committing to an approach, pause and consider:
Maintainability: Can someone else understand and modify this code without the context you have today?
Flexibility: Does this approach keep options open, or does it lock you into assumptions that might change?
Consistency: Does this fit the patterns already in the codebase, or does it introduce a new way of doing things that future developers will have to reconcile?
AI can help surface some of these considerations. But knowing which ones matter most for your system, your team, your trajectory — that still requires judgment built from experience.
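To make the flexibility question concrete, here is a hypothetical sketch. Both versions work today; the first bakes in the assumption that revenue always lives in one SQLite file, the second only assumes "something that can yield rows," so swapping the data source later doesn't ripple through the report code. The names and schema are invented for illustration.

```python
# Hypothetical sketch of "does this keep options open?"
import sqlite3
from typing import Protocol

def monthly_report_locked_in(db_path: str = "reports.db") -> list[dict]:
    # Locked in: callers now depend on SQLite, this schema, and this file layout.
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute("SELECT month, total FROM revenue").fetchall()
    return [{"month": m, "total": t} for m, t in rows]

class RevenueSource(Protocol):
    def rows(self) -> list[tuple[str, float]]: ...

def monthly_report_flexible(source: RevenueSource) -> list[dict]:
    # Options open: any object with a rows() method will do, database or not.
    return [{"month": m, "total": t} for m, t in source.rows()]
```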
What This Means for Teams
If you’re leading a team right now, the shift is already happening whether you’re managing it or not. Here’s what I’d pay attention to:
Your experienced engineers are more valuable, not less. In my experience, the people who know what good looks like — who can review AI-generated code and catch the subtle problems — are the ones you can’t afford to lose. Their judgment is the quality control layer.
Code review becomes architecture review. When code appears fast, the review process carries more weight. It’s no longer just checking for bugs. It’s checking for fit — does this belong here? Does this approach match our standards? Is this solving the right problem?
Documentation of why matters more than documentation of what. AI can read code and tell you what it does. It’s harder for AI to tell you why this approach was chosen over alternatives, what tradeoffs were considered, what constraints shaped the decision. That context needs to be captured, or it disappears. (A small sketch of what capturing it can look like appears below.)
New engineers need different onboarding. The skill isn’t “learn the syntax” anymore — AI handles that. The skill is “understand this system well enough to evaluate AI’s suggestions.” That requires deeper context, faster.
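Circling back to the "documentation of why" point: here is a hypothetical sketch of capturing that context right next to the code it explains. The function, the vendor API, and the incident it mentions are all invented for illustration.

```python
# Hypothetical example of documenting the "why" alongside the code. AI can infer
# what this function does; the comment records what it cannot infer: the
# alternative that was rejected and the constraint that drove the choice.
def fetch_prices(symbols: list[str]) -> dict[str, float]:
    # Why batched polling instead of the vendor's streaming API:
    # the streaming feed dropped messages under load in a past incident,
    # and our SLA tolerates 30 seconds of staleness. Revisit if the SLA tightens.
    ...
```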
The Human Part Didn’t Go Away
There’s a version of this story where AI replaces engineers entirely. I don’t buy it.
Every technology transition I’ve lived through — and there have been many — has followed a similar shape. The mechanical part gets automated. The judgment part gets more important. The people who understand why become more valuable than the people who only know how.
AI writing code is a tool. A powerful one. But tools don’t make decisions about what to build, how it should fit together, or whether it’s worth building at all.
Those decisions require understanding goals, navigating constraints, and making tradeoffs that don’t have clear right answers. They require the kind of wisdom that comes from watching systems succeed and fail over time.
That’s not something you can prompt for.
The best engineers I know aren’t the fastest coders. They’re the ones who know what questions to ask before anyone starts coding. That skill isn’t going anywhere.