What 14 years of backend development taught me about the difference
There’s a conversation I’ve had more times than I can count.
A team proudly announces they’ve migrated to microservices. The architecture diagrams look clean. There are dozens of small services, each with its own repository. Everyone feels modern and scalable.
Then reality kicks in.
You change one field in a database. Suddenly you need to coordinate deployments across five services. One of them fails at 2 AM and takes down half the system. Nobody is quite sure which service owns which data anymore.
Sound familiar?
After 14 years as a backend developer — starting with Java 5 monoliths and watching the industry evolve — I’ve come to an uncomfortable conclusion: most systems I’ve seen that claim to be microservices are actually distributed monoliths. And the difference matters more than most teams realize.
What Even Is a Distributed Monolith?
A distributed monolith is the worst of both worlds. You’ve taken all the complexity of a distributed system — network latency, partial failures, eventual consistency — but kept all the tight coupling of a monolith.
In practice, it looks like this:
- Services share a database. Three different services read and write to the same tables. You can’t change the schema without touching all of them.
- Deployments are synchronized. You can’t deploy Service A without also deploying Service B and C because they’re too tightly coupled.
- One service fails, everything fails. There’s no resilience. The whole system behaves as if it’s one big fragile application.
- Teams can’t work independently. Every change requires coordination across multiple teams because the boundaries between services are blurry.
If any of these sound familiar, you might not have microservices. You have a monolith that’s been cut into pieces and scattered across a network.
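The first symptom, the shared database, is worth seeing in code. Here's a minimal sketch (the service and table names are invented for illustration) where two "independent" services read and write the same storage directly:

```python
# A minimal sketch of the shared-database symptom. The service and table
# names are hypothetical; the point is that both classes touch the same
# underlying storage instead of going through each other's APIs.

# One physical "products" table that two services access directly.
PRODUCTS_TABLE: dict[int, float] = {}

class ProductService:
    """Owns the catalog -- or thinks it does."""
    def set_price(self, product_id: int, price: float) -> None:
        PRODUCTS_TABLE[product_id] = price

class OrderService:
    """Reaches straight into Product's table instead of calling an API."""
    def quote(self, product_id: int, quantity: int) -> float:
        # If Product Service renames or restructures this table,
        # this line breaks -- and only a coordinated deploy fixes it.
        return PRODUCTS_TABLE[product_id] * quantity
```

These classes can live in different repositories and deploy to different machines, but they are still one application: neither can change the table without breaking the other.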
How Did We Get Here?
In my experience, it usually happens for one of two reasons.
The first is technical migration without architectural thinking. A team takes an existing monolith, extracts pieces of it into separate deployable units, but doesn’t rethink the underlying domain boundaries. The code is now in different services, but the logic still assumes everything is tightly coupled.
The second is cargo cult microservices. The team reads that Netflix and Amazon use microservices. They want to be like Netflix and Amazon. So they split everything into services — not because the problem requires it, but because it feels like the right thing to do.
Both paths lead to the same place: distributed complexity without the distributed benefits.
What Real Microservices Actually Give You
Before we talk about when NOT to use microservices, it’s worth being clear about what you’re actually trying to achieve.
Microservices aren’t about having many small services. They’re about independent deployability, independent scalability, and clear ownership.
In a well-designed microservices system:
- The Order Service can be deployed without touching the Payment Service
- If your Notification Service goes down, orders still get placed — the system degrades gracefully instead of collapsing
- The team that owns User Service can change their database schema without asking anyone’s permission
- When Black Friday hits and orders spike, you scale Order Service independently without wasting resources scaling everything else
These are real, tangible benefits. But they come with a cost — and that cost is often underestimated.
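The graceful-degradation point is easy to sketch. In this hypothetical order flow (all names invented), the notification step is best-effort, so a dead Notification Service never loses an order:

```python
# Sketch of graceful degradation, assuming a hypothetical place_order flow.
# The notification call is a best-effort side effect: if it fails, we log
# and move on, rather than failing the whole order.

import logging

class NotificationDown(Exception):
    """Raised when the (simulated) Notification Service is unreachable."""

def send_confirmation(order_id: int) -> None:
    # Stand-in for a remote call to a separate Notification Service,
    # simulated here as always being down.
    raise NotificationDown("notification service unreachable")

def place_order(order_id: int, orders: list[int]) -> bool:
    orders.append(order_id)          # core business step: must succeed
    try:
        send_confirmation(order_id)  # side effect: best-effort only
    except NotificationDown:
        logging.warning("order %s placed, confirmation deferred", order_id)
    return True                      # the order went through regardless
```

In a distributed monolith, the same failure would propagate upward and the order would be lost with it.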
When You Should NOT Use Microservices
This is where most articles stop short. Everyone talks about the benefits. Fewer people talk about when microservices are simply the wrong tool.
When your team is small. Microservices introduce operational overhead: multiple repositories, multiple CI/CD pipelines, distributed tracing, service discovery. A team of two or three developers will spend more time managing infrastructure than building features. A well-structured monolith will serve you better.
When the domain isn’t understood yet. Getting service boundaries wrong is expensive. If you split a system too early — before you understand how the business actually works — you’ll end up with services that need to constantly talk to each other to get anything done. Wrong boundaries are worse than no boundaries. Start with a monolith, understand the domain, then extract services when the boundaries become obvious.
When you don’t have the infrastructure maturity. Microservices need proper observability: distributed logging, tracing, monitoring. Without these, debugging a production issue becomes an archaeological expedition across five different log files. If you don’t have this in place, you’re flying blind.
When all your services need to be deployed together anyway. This is the distributed monolith smell. If you can’t deploy a single service independently, you haven’t gained the primary benefit of microservices. You’ve just added complexity.
The Question to Ask Before You Split Anything
There’s one question I now ask before every architectural decision:
“Can this service be deployed, scaled, and maintained completely independently?”
If the answer is no — if deploying it requires coordinating with other services, if it reads from a shared database, if another team needs to be involved — then the boundary is wrong.
This single question has saved me from a lot of bad architectural decisions.
A Practical Example
Let’s say you’re building an e-commerce platform. You decide to split it into:
- User Service
- Product Service
- Order Service
- Payment Service
- Notification Service
This looks clean on paper. But here’s a trap that’s easy to fall into:
You make Order Service read directly from the Product Service database to check prices and availability. It seems faster than making an API call.
Congratulations, you’ve just created a distributed monolith. Now you can’t change the Product database schema without breaking Order Service. You can’t scale or deploy them independently anymore. Their fates are tied together.
The correct approach? Order Service calls Product Service through a well-defined API. Or better yet — for operations that don’t require real-time data — it reacts to events. Product Service publishes a “price changed” event, Order Service updates its own local cache.
Each service owns its data. Each service exposes a clear interface. Nobody reaches into anyone else’s database.
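Here's a toy version of that event-driven fix. The in-memory bus, topic name, and payload shape are all invented stand-ins for a real broker such as Kafka or RabbitMQ:

```python
# Toy sketch of the event-driven approach: Product Service publishes a
# "price-changed" event, and Order Service keeps its own local price cache.
# The bus, topic, and payload shape are hypothetical.

from collections import defaultdict
from typing import Callable

class EventBus:
    """In-memory stand-in for a real message broker."""
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(payload)

class ProductService:
    def __init__(self, bus: EventBus) -> None:
        self._bus = bus
        self._prices: dict[int, float] = {}   # Product owns its data

    def change_price(self, product_id: int, price: float) -> None:
        self._prices[product_id] = price
        self._bus.publish("price-changed",
                          {"product_id": product_id, "price": price})

class OrderService:
    def __init__(self, bus: EventBus) -> None:
        self._price_cache: dict[int, float] = {}  # Order owns its copy
        bus.subscribe("price-changed", self._on_price_changed)

    def _on_price_changed(self, event: dict) -> None:
        self._price_cache[event["product_id"]] = event["price"]

    def quote(self, product_id: int, quantity: int) -> float:
        # Served from the local cache: no call into Product's database.
        return self._price_cache[product_id] * quantity
```

Notice that neither service ever touches the other's storage. Swap the dictionaries for real databases and the in-memory bus for a real broker, and the shape of the solution is the same.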
So Where Should You Start?
If you’re building something new, my honest advice is: start with a monolith.
Not because monoliths are better. But because a well-structured monolith with clear internal module boundaries is much easier to migrate from when the time comes. And crucially, you’ll understand your domain well enough by then to draw the right service boundaries.
When you do start extracting services, extract them one at a time. Identify a piece of your system that has genuinely different scaling needs, or that a separate team needs to own independently. Extract that. Deploy it. Learn from the process. Then decide if another extraction makes sense.
This is known as the Strangler Fig Pattern — gradually replacing parts of a monolith while keeping it running. It’s slower than a big-bang rewrite. It’s also how you avoid ending up with a distributed monolith.
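One way to sketch the Strangler Fig Pattern is a routing facade that sends already-extracted paths to the new service and everything else to the monolith. The route table and handler names below are hypothetical:

```python
# Hypothetical sketch of a Strangler Fig routing facade. Requests whose
# path prefix has been extracted go to the new service; everything else
# still hits the monolith. Handler names and routes are invented.

from typing import Callable

def monolith_handler(path: str) -> str:
    return f"monolith handled {path}"

def order_service_handler(path: str) -> str:
    return f"order-service handled {path}"

# As extraction proceeds, prefixes migrate out of the monolith one by one.
EXTRACTED_ROUTES: dict[str, Callable[[str], str]] = {
    "/orders": order_service_handler,
}

def route(path: str) -> str:
    for prefix, handler in EXTRACTED_ROUTES.items():
        if path.startswith(prefix):
            return handler(path)
    return monolith_handler(path)  # everything not yet extracted
```

In production this facade is usually an API gateway or reverse proxy rather than hand-written code, but the idea is the same: the monolith shrinks one route at a time while the system keeps serving traffic.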
The Uncomfortable Truth
After working on systems in many different industries, I think microservices have been oversold to teams that didn't need them.
Microservices solve real problems — but only specific problems, at a specific scale, with a specific level of team and operational maturity. For many applications, a well-structured monolith is not only simpler but actually the better architectural choice.
The goal was never to have microservices. The goal was to build systems that are easy to change, scale, and maintain.
Sometimes that means microservices. Often, it doesn’t.
Stop Calling It Microservices — You Probably Built a Distributed Monolith was originally published in Level Up Coding on Medium.