// Engineering · AI · Scale · May 2026 · 7 min read

Bottom-Up Beats Top-Down.

How AI actually lands in a 1000+ person engineering org. Two years of doing this. It's coming together quickly.

There's a fantasy making the rounds in enterprise AI strategy decks.

It goes like this: deploy AI across the company, watch it rewrite the software stack, ship faster, win the decade.

It doesn't work that way.

Not at scale. Not in mission-critical software. Not when 1000+ engineers work in a codebase with decades of accumulated decisions and the kind of complexity that breaks things the moment you stop respecting it.

What works at 10 people doesn't scale to 1000.
What works at 50 doesn't either.
The shape of the problem changes.

The answer isn't top-down.
The answer is bottom-up.

01 · Why top-down breaks at scale

Roll out Copilot. Roll out Cursor. Roll out Claude. Engineers need access to the latest tools - that part is non-negotiable.

But access isn't transformation.

In a 1000+ person engineering org, the most valuable knowledge isn't in your repos. It's not in your wiki. Some of it lives in the heads of senior engineers - the ones who know why an IP block was designed a certain way, which subsystem will break if you change a config, which test failure means something and which is noise.

But a lot of it isn't in anyone's head either. It's implicit in the source code itself - millions of lines of interactions across components and IPs, often documented once, rarely kept up to date, drifting further from reality with every commit.

You can't feed that to AI as raw context. Not today.

But AI can generate documentation and graphs directly from the code - regenerating them as the code changes, keeping them anchored to the source of truth. Extracting the relationships. Surfacing the structure. Giving itself the context it needs to reason about your codebase.
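What does that look like in practice? A minimal sketch, assuming a Python tree purely for illustration (any language with a parser works the same way): walk the source, pull the import edges, regenerate the graph on every commit.

# dep_graph.py - a minimal sketch, illustrative only: rebuild the module
# dependency graph straight from source, so it can't drift from reality.
import ast
import sys
from pathlib import Path

def module_name(path: Path, root: Path) -> str:
    # src/foo/bar.py -> "foo.bar"
    return ".".join(path.relative_to(root).with_suffix("").parts)

def imports_of(path: Path) -> set[str]:
    # Parse one file; collect every module it imports.
    tree = ast.parse(path.read_text(), filename=str(path))
    found: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            found.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module)
    return found

def build_graph(root: Path) -> dict[str, set[str]]:
    # Map each internal module to the internal modules it depends on.
    files = list(root.rglob("*.py"))
    internal = {module_name(f, root) for f in files}
    return {module_name(f, root): imports_of(f) & internal for f in files}

if __name__ == "__main__":
    root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    for mod, deps in sorted(build_graph(root).items()):
        for dep in sorted(deps):
            print(f"{mod} -> {dep}")  # edges: feed Graphviz, a graph DB, or the model

Crude, but it's regenerated from source on every commit - which is the whole point.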

That's not a tool deployment.
That's a bottom-up project.

Some of it AI does for itself. Some of it humans have to surface first. Either way - it's work.

02 · Solve each phase independently

Stop trying to "AI-transform" the SDLC. Start solving individual problems:

Design to code
Code to test
Auto-triage
Failure reproduction
Root cause analysis
Code to documentation

Each one is a real engineering project. Not a slide. Not a vision. A working system that an actual team uses every day.

And here's the thing: what didn't work two years ago works today.

// 2024
An engineer on my team tried to get AI to write meaningful unit tests. It tested the wrong things. It didn't understand what we wanted. It added no real value. We shelved it.
// 2026
Same team, same problem, new AI model - completely different result. It does exactly what we need. Quickly. Correctly. Cutting that phase from 3+ days to under 1.

The capability moved. The teams that kept trying are the ones who caught it.
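The retry is cheap, which is the whole argument for keeping it alive. A minimal sketch of a code-to-test harness, assuming the OpenAI Python SDK - the model name and prompt are placeholders, not what my team actually runs:

# gen_tests.py - hypothetical sketch of the code-to-test phase.
import sys
from pathlib import Path
from openai import OpenAI

PROMPT = """You are writing pytest unit tests.
Test observable behavior, not implementation details.
Cover edge cases and failure paths. Output only Python code.

Source module:
{source}"""

def generate_tests(source: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder - swap in whatever your org runs
        messages=[{"role": "user", "content": PROMPT.format(source=source)}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    src = Path(sys.argv[1])
    out = src.with_name(f"test_{src.name}")
    out.write_text(generate_tests(src.read_text()))
    print(f"wrote {out} - run pytest, then review it like any other diff")

Same harness in 2024 and 2026. Only the model behind it changed.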

03 · The dashboard

Picture what one of those connected workflows looks like in practice. A dashboard. AI is:

01 Testing
02 Detecting failures
03 Reproducing them
04 Isolating recent changes
05 Triaging to a component, an IP, a function
06 Hypothesizing root cause
07 Attempting a fix
08 Validating the fix

Any one of these stages can fail. That's the point.

The workflow detects the failure. A human steps in. Gets it past the breakpoint. The intervention is logged. Next time, the agent does more on its own.
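A minimal sketch of that runner - stage names from the dashboard above, everything else illustrative:

# pipeline.py - hypothetical sketch of the stage runner behind the
# dashboard. Any stage can fail; a failure becomes a human breakpoint,
# and the intervention is logged as training signal for the next run.
import json
import time
from typing import Callable

Context = dict                                  # shared state across stages
Stage = tuple[str, Callable[[Context], Context]]

def run_pipeline(stages: list[Stage], ctx: Context, log_path: str) -> Context:
    for name, run in stages:
        try:
            ctx = run(ctx)                      # the agent attempts the stage
        except Exception as err:                # any stage can fail - the point
            ctx = intervene(name, ctx, err, log_path)
    return ctx

def intervene(name: str, ctx: Context, err: Exception, log_path: str) -> Context:
    # Stub for the human step. The log line is the real product:
    # every intervention is recorded, so next time the agent does more.
    with open(log_path, "a") as log:
        log.write(json.dumps({"ts": time.time(), "stage": name,
                              "error": str(err)}) + "\n")
    print(f"[{name}] needs a human: {err}")
    ctx[f"{name}.resolved_by"] = "human"        # pretend the human fixed it
    return ctx

# Two illustrative stages; a real runner has all eight.
def testing(ctx):
    ctx["suite"] = "ran"
    return ctx

def detect_failures(ctx):
    raise RuntimeError("flaky failure, cause unknown")

if __name__ == "__main__":
    run_pipeline([("testing", testing), ("detect_failures", detect_failures)],
                 {}, "interventions.jsonl")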

This is how you build agentic infrastructure that compounds. Humans aren't replaced. Humans are upstream of the agents - teaching, correcting, encoding judgment. Over time, less intervention. Never zero: as long as we keep innovating, there's always a new frontier where human judgment has to go first.

04 · From phase wins to TTM

Accelerating one phase is a win. Accelerating one workflow is a bigger win. Real time-to-market gains come from connecting all of it.

Design feeding code generation. Code generation feeding test generation. Continuous integration with problem-solving in the loop. Triage feeding documentation. Documentation feeding the next design.
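The arithmetic says why connecting matters. A speedup on one phase is capped by that phase's share of the schedule - Amdahl's law, applied to a roadmap. Purely made-up numbers:

# ttm.py - illustrative numbers only: why single-phase wins cap out.
# Each phase: (share of the end-to-end cycle, speedup achieved in it).
phases = {"design": (0.15, 1.0), "code": (0.25, 3.0),
          "test": (0.30, 3.0), "triage_fix": (0.30, 1.0)}

def cycle_time(p):
    # Remaining fraction of the original schedule (Amdahl's law).
    return sum(share / speedup for share, speedup in p.values())

print(round(cycle_time(phases), 2))  # 0.63 - real, but capped by untouched phases
phases["triage_fix"] = (0.30, 3.0)   # now connect triage + fix into the loop
print(round(cycle_time(phases), 2))  # 0.43 - the wins compound when connected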

That's where individual phase wins compound into something an executive can actually feel on a roadmap. But getting there isn't free.

Design-to-code-to-test is the easy unlock.
Continuous integration with real problem-solving is harder.
The hardest unlock - the one that compresses weeks into days - is still ahead.

That's the next article.

05 · What this requires from leaders

Stop measuring AI rollout by license counts. Start measuring it by the number of SDLC phases your teams have actually accelerated end-to-end.

Prioritize the boring infrastructure work - the test harnesses, the triage automation, the documentation pipelines, the AI-built graphs of how your code actually fits together. That's where the compounding happens.

Build the human-in-the-loop systems now. The agents need somewhere to fail safely and somewhere for that failure to become training signal.

Keep retrying the things that didn't work last year. The capability is moving fast. What was impossible in 2024 is shipping in 2026. What's hard today will be obvious in 2027.

The orgs that win the next decade aren't the ones with the best AI strategy deck.
They're the ones that built the bottom-up infrastructure now.
ACT ACCORDINGLY...
MARIO FILIPAS
Senior Director, Cloud GPU Software · AMD · University of Waterloo

Leading 150 engineers across Canada, Serbia, and China building GPU virtualization software for AMD's Instinct AI accelerators. I think about what's next...with urgency. I run on AI.
