Compounding Beats Breakthroughs

Why small, iterative improvements consistently outperform big, ambitious plans—in software, motorsports, and business.

[Photo: Rick at SKUSA SuperNationals 2010]

When I first started my career, the dominant way to manage complex projects was the waterfall model.

Executives would spend months — sometimes years — defining large, comprehensive plans. Requirements documents were thick. Roadmaps were ambitious. Strategies were carefully crafted in conference rooms far away from the people who would actually build and use the systems.

On paper, it all made sense.

In reality, many of those projects failed.

They went over budget. They slipped their timelines. They shipped late, or not at all. And when they did ship, they often did not deliver the value that was originally promised.

The pattern was always the same.

The assumptions were wrong.

Not because the people involved were incompetent or careless — but because it is almost impossible to correctly predict technical complexity, user behavior, and business value upfront, especially for something new.

We were trying to design certainty into environments that were inherently uncertain.

The shift from planning to learning

Over time, a different way of working started to take hold.

Agile methodologies did not solve complexity, but they changed how we approached it. Instead of treating planning as something you do once at the beginning, planning became something you do continuously — informed by what you have actually learned.

Instead of betting everything on a single, large plan, teams started shipping smaller pieces of working software and using reality as the feedback mechanism.

Each iteration was not just about delivering functionality. It was about reducing uncertainty.

You were not trying to be right upfront — you were trying to become less wrong over time.

That is a fundamentally different posture.

And it works.

Not because it is faster in the short term, but because it compounds in the long term.
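As a back-of-the-envelope illustration (the numbers here are hypothetical, not drawn from any real project), steady small gains overtake a one-shot jump surprisingly quickly:

```python
# Hypothetical comparison: a team that improves 1% per weekly iteration
# versus a single "breakthrough" that delivers a 30% jump all at once.
steady = 1.01 ** 52        # 1% compounded over 52 iterations
one_shot = 1.30            # the big-bang plan, assuming it even lands

print(round(steady, 2))    # 1.68 -- roughly a 68% cumulative gain
print(steady > one_shot)   # True
```

The multiplication is the whole point: each small gain applies on top of everything the previous gains already delivered.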

Why big deliveries are so tempting

I see the same pattern with clients all the time.

Many clients naturally ask for large sets of changes and expect one big delivery. The reasoning is understandable: it feels more efficient to bundle everything into a single project. Fewer meetings. Fewer contracts. Fewer disruptions.

It feels like it should save time and money.

In practice, it usually does the opposite.

A large portion of what gets specified upfront turns out to be wrong.

Not because anyone failed — but because no one had enough information yet.

The cost is not just the wasted development. It is the opportunity cost, the friction introduced, the maintenance burden of things that do not pull their weight, and the erosion of trust when expectations do not match outcomes.

Why iterative work feels slower — but produces better results

Clients who work with me in a more iterative way are almost always happier with the outcome.

Not because they get more — but because what they get fits.

Instead of building a system based on a hypothetical future, we let the system grow in response to real behavior.

Over time, the system comes to fit the organization like a glove: not because it was perfectly designed upfront, but because it evolved in context.

That process builds alignment, confidence, and trust. It also builds much better systems.

The same logic applies to my own work

This is not just how I work with clients — it is how I try to work on my own projects as well.

I constantly have to resist the temptation to just build the whole thing.

I force myself to ask what the smallest useful change would be.

Not because I lack ambition — but because I have learned that ambition without feedback is just guessing.

Small, incremental changes keep me honest.

I learned this in motorsports before I fully understood it in software

In motorsports, nobody expects to find a two-second lap time improvement in a single change.

That would be reckless.

Instead, you make small, controlled adjustments:

You test.
You measure.
You compare.
You keep what works.
You discard what does not.

Each change might be worth a tenth of a second.

But ten small improvements compound into something meaningful.

More importantly, you always know why performance changed.

You are never guessing which variable caused which outcome — because you are changing one thing at a time.

That is not just good engineering. It is good risk management.
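The test-measure-compare loop above can be sketched in a few lines. This is an illustrative sketch, not real tuning code; the `measure` function and the change names are made up:

```python
def tune(baseline, changes, measure):
    """Try one change at a time; keep it only if the measured time improves."""
    setup, best = [], baseline
    for change in changes:
        t = measure(setup + [change])   # test exactly one new variable
        if t < best:                    # compare against the current best
            setup.append(change)        # keep what works
            best = t
        # otherwise: discard what does not
    return setup, best

# Hypothetical per-change effects on lap time, in seconds.
effects = {"tire_pressure": -0.12, "wing_angle": +0.05, "gear_ratio": -0.09}

def measure(setup):
    return 60.0 + sum(effects[c] for c in setup)

setup, best = tune(60.0, list(effects), measure)
print(setup)           # ['tire_pressure', 'gear_ratio']
print(round(best, 2))  # 59.79
```

Because only one variable changes per run, every kept or discarded change comes with an explanation: you always know which adjustment produced which tenth.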

Why breakthroughs are seductive — and dangerous

Breakthroughs feel efficient.

But breakthroughs require one thing we rarely have: confidence in our assumptions.

They assume we understand the problem, the solution, the constraints, the side effects, and what actually matters.

Most of the time, we do not.

That is why breakthroughs fail more often than they succeed.

Not because they are wrong — but because they are premature.

They try to harvest value before the ground has been properly mapped.

Compounding is slower — and that is its advantage

Compounding feels boring.
It feels conservative.
It feels unsophisticated.

But it has two properties that breakthroughs do not:

  1. It spreads risk instead of concentrating it.
  2. It builds understanding at the same time it builds results.

Each small improvement does three things: it delivers a result, it reduces uncertainty, and it teaches you something about the system.

That learning compounds just as much as the output.

Over time, you do not just end up with a better system.

You end up with better judgment.

And better judgment is what actually scales.

Why I default to compounding

Whenever I am faced with a complex problem — technical, operational, or strategic — my first instinct is almost always the same:

What is the smallest useful change we can make?

Not because I am risk-averse.
Not because I lack ambition.

But because I have learned this:

Compounding beats breakthroughs not because it is clever, but because it is honest.

It acknowledges uncertainty instead of pretending it is not there.

And in complex systems, that is not a weakness.

It is the only thing that actually works.
