There's a fantasy making the rounds in enterprise AI strategy decks.
It goes like this: deploy AI across the company, watch it rewrite the software stack, ship faster, win the decade.
It doesn't work that way.
Not at scale. Not in mission-critical software. Not when your codebase has 1000+ engineers, decades of accumulated decisions, and the kind of complexity that breaks the moment you stop respecting it.
What works at 10 people doesn't scale to 1000.
What works at 50 doesn't either.
The shape of the problem changes.
The answer is bottom-up.
Roll out Copilot. Roll out Cursor. Roll out Claude. Engineers need access to the latest tools - that part is non-negotiable.
But access isn't transformation.
In a 1000+ person engineering org, the most valuable knowledge isn't in your repos. It's not in your wiki. Some of it lives in the heads of senior engineers - who know why an IP block was designed a certain way, which subsystem will break if you change a config, which test failure means something and which is noise.
But a lot of it isn't in anyone's head either. It's implicit in the source code itself - millions of lines of interactions across components and IPs, often documented once, rarely kept up to date, drifting further from reality with every commit.
You can't feed that to AI as raw context. Not today.
But AI can generate documentation and graphs directly from the code - regenerating them as the code changes, keeping them anchored to the source of truth. Extracting the relationships. Surfacing the structure. Giving itself the context it needs to reason about your codebase.
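That loop - docs regenerated from the code itself - can be sketched in a few lines. A minimal illustration, assuming a flat directory of Python modules; it is not a production extractor:

```python
# Minimal sketch: regenerate a dependency doc straight from source so it
# stays anchored to the code instead of drifting from it. Assumes a flat
# directory of Python files; all names here are illustrative.
import ast
from pathlib import Path

def module_imports(path: Path) -> set[str]:
    """Parse one file and return the names of the modules it imports."""
    tree = ast.parse(path.read_text())
    deps = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            deps.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            deps.add(node.module)
    return deps

def dependency_doc(src_dir: Path) -> str:
    """Emit a Markdown dependency map; rerun on every commit."""
    lines = ["# Module dependencies (auto-generated)"]
    for path in sorted(src_dir.glob("*.py")):
        deps = sorted(module_imports(path))
        lines.append(f"- `{path.name}` -> {', '.join(deps) or '(none)'}")
    return "\n".join(lines)
```

The point isn't the extractor - it's that the artifact is regenerated, never hand-maintained, so it can't drift.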
That's a bottom-up project.
Some of it AI does for itself. Some of it humans have to surface first. Either way - it's work.
Stop trying to "AI-transform" the SDLC. Start solving individual problems, one at a time.
Each one is a real engineering project. Not a slide. Not a vision. A working system that an actual team uses every day.
And here's the thing: what didn't work two years ago works today.
The capability moved. The teams that kept trying are the ones who caught it.
Picture what one of those connected workflows looks like in practice: a dashboard, with AI running each stage of the pipeline.
Any one of these stages can fail. That's the point.
The workflow detects the failure. A human steps in. Gets it past the breakpoint. The intervention is logged. Next time, the agent does more on its own.
This is how you build agentic infrastructure that compounds. Humans aren't replaced. Humans are upstream of the agents - teaching, correcting, encoding judgment. Over time, less intervention. Never zero. Not if we keep innovating.
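That intervention loop can be sketched as a small piece of infrastructure. Everything here - the stage names, the log, the autonomy threshold - is an illustrative assumption, not a real framework:

```python
# Sketch of the loop above: a stage either succeeds or a human gets it
# past the breakpoint; interventions are logged, and stages with a clean
# track record earn less oversight. All names are illustrative.
from typing import Callable

AUTONOMY_THRESHOLD = 3  # consecutive clean runs before a stage skips review

class Workflow:
    def __init__(self) -> None:
        self.clean_runs: dict[str, int] = {}   # per-stage success streak
        self.interventions: list[str] = []     # audit log of human fixes

    def run_stage(self, name: str, stage: Callable[[str], str],
                  payload: str, human_fix: Callable[[str], str]) -> str:
        try:
            result = stage(payload)
            self.clean_runs[name] = self.clean_runs.get(name, 0) + 1
            return result
        except Exception as err:
            # Failure detected: a human steps in, and the intervention
            # is logged so the next iteration can do more on its own.
            self.interventions.append(f"{name}: {err}")
            self.clean_runs[name] = 0
            return human_fix(payload)

    def needs_review(self, name: str) -> bool:
        # Over time, less intervention. Never zero.
        return self.clean_runs.get(name, 0) < AUTONOMY_THRESHOLD
```

The design choice that matters is the log: every human fix becomes training signal for the next run, which is what makes the system compound instead of plateau.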
Accelerating one phase is a win. Accelerating one workflow is a bigger win. Real time-to-market gains come from connecting all of it.
Design feeding code generation. Code generation feeding test generation. Continuous integration with problem-solving in the loop. Triage feeding documentation. Documentation feeding the next design.
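That chain reads naturally as function composition - each phase's output is the next phase's input. A toy sketch, with stub phases standing in for the real AI-assisted tools:

```python
# Toy sketch of the connected pipeline: stub phases stand in for the
# real AI-assisted tools; only the composition pattern is the point.
from functools import reduce
from typing import Callable

def design_to_code(design: str) -> str:
    return f"code({design})"

def code_to_tests(code: str) -> str:
    return f"tests({code})"

def ci_with_triage(tests: str) -> str:
    return f"ci({tests})"

def triage_to_docs(ci_result: str) -> str:
    return f"docs({ci_result})"

PIPELINE: list[Callable[[str], str]] = [
    design_to_code, code_to_tests, ci_with_triage, triage_to_docs,
]

def run(design: str) -> str:
    # The compounding win: each phase feeds the next,
    # instead of being accelerated in isolation.
    return reduce(lambda artifact, phase: phase(artifact), PIPELINE, design)
```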
That's where individual phase wins compound into something an executive can actually feel on a roadmap. But getting there isn't free.
Design-to-code-to-test is the easy unlock.
Continuous integration with real problem-solving is harder.
The hardest unlock - the one that compresses weeks into days - is still ahead.
That's the next article.
The winners aren't the ones with the best AI strategy deck.
They're the ones building the bottom-up infrastructure now.