Micro-services solve what scaling?

In my opinion, we missed the true scaling problem microservices solve: the human side of engineering.

What I mean covers two dimensions:

  • a single human mind’s ability to grasp complexity and work with it reliably,
  • the number of human minds that can work effectively on the same code-base.

Of course, a good micro-service architecture lets you scale almost indefinitely, separates and decouples concerns, and brings a few other benefits. But earlier approaches offered those too: N-layer architecture, MVVM, and the MVP/MVC variants out there.

Another thing: micro-services won’t structure themselves. The effectiveness of such a system starts with experienced architects and lead engineers laying the groundwork in the form of good principles and practices to follow, which may be unique to each organization even when common guidelines exist.

And this complexity has new costs: a need for mature technology, and either a complex abstraction layer or a large amount of gruntwork just to make the foundations work.

However, when you grow beyond a certain size, both in code-base and in organization, new ways of structuring the problem become necessary. This is where the concept of micro-services came in.

Create a code-base so small that even a newly joined engineer can pick it up in a short time and deliver new value in it.

Decouple this small code-base into its own hostable unit, so that even if it fails, it does not bring down the other parts of the system. (Notice I don’t say it won’t “affect” them: in a business-process sense, a failure always will.)
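To make this concrete, here is a minimal sketch of that failure isolation, using hypothetical “orders” and “recommendation” services (the names and the fallback are my illustration, not from the article). The downstream service fails entirely, the business process is affected, yet the caller survives:

```python
# Hypothetical sketch: a downstream service failure degrades the result
# instead of bringing the calling service down with it.

def fetch_recommendations(order_id: str) -> list:
    # Stand-in for a network call to a separately hosted service
    # that is currently down.
    raise TimeoutError("recommendation service unreachable")

def place_order(order_id: str) -> dict:
    order = {"id": order_id, "status": "accepted"}
    try:
        # The business process IS affected (no recommendations shown)...
        order["recommendations"] = fetch_recommendations(order_id)
    except Exception:
        # ...but the core flow keeps working despite the failure.
        order["recommendations"] = []
    return order

print(place_order("A-42"))  # the order is still accepted
```

The point is the boundary: because the recommendation logic lives behind its own interface (and, in a real system, its own process), its failure mode is a degraded response, not a crash of the whole flow.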

When paired with driven and proactive engineers, this collection of small code-bases can turn into a great asset: it allows fast modification while preserving the resiliency of the system.

You can run 3 different email sender services, each written in a different year, maybe for different purposes. And you can still create a 4th, better one for new needs without affecting the previous 3.

That is, assuming you applied the decoupling principles correctly. Coupling is another tricky concept: it hides in the logic of the code, not in the written code itself, so it is not in plain sight. It is worth another post focusing on it.
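A tiny illustration of what I mean by coupling hiding in the logic (the services and status strings here are hypothetical): neither function imports or references the other, yet both hard-code the same contract, so renaming the status in one silently breaks the other.

```python
# Hypothetical example of logical coupling: no import, no shared module,
# yet an implicit contract connects the two services.

# Service A (billing) marks orders it has processed:
def bill(order: dict) -> dict:
    order["status"] = "BILLED"
    return order

# Service B (shipping) was written assuming service A's exact string:
def ready_to_ship(order: dict) -> bool:
    # This "BILLED" is an undeclared dependency on service A's internals.
    return order["status"] == "BILLED"

print(ready_to_ship(bill({"status": "NEW"})))  # → True
```

A static-analysis tool or a code search for imports would report these as fully decoupled. Only the runtime behavior reveals the contract, which is exactly why this kind of coupling is dangerous.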

You can add a 56th product’s business code as a new service, deploy it, and be entirely sure it does not affect the previous 55 products in any way. This is the power of such systems, and their drawback too.
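The email-sender scenario above can be sketched the same way. Assuming a simple purpose-based registry (the sender names and purposes are mine, purely illustrative), adding a 4th service is a new entry next to the existing ones, not a modification of them:

```python
# Hypothetical sketch: several coexisting sender services, each chosen
# per purpose. Each function stands in for a separately deployed service.

def legacy_smtp_send(to: str, body: str) -> str:
    return f"smtp:{to}"

def marketing_send(to: str, body: str) -> str:
    return f"marketing:{to}"

def transactional_send(to: str, body: str) -> str:
    return f"transactional:{to}"

SENDERS = {
    "invoice": transactional_send,
    "newsletter": marketing_send,
    "internal": legacy_smtp_send,
}

# A 4th, better sender for a new need: added, not edited in.
def bulk_send(to: str, body: str) -> str:
    return f"bulk:{to}"

SENDERS["announcement"] = bulk_send

print(SENDERS["invoice"]("a@b.c", "hi"))  # → transactional:a@b.c
```

In a real system the registry would be routing configuration or a gateway rather than a dict, but the property is the same: the old paths are untouched by the new addition.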

Don’t forget the extraordinary cost it takes to build and run such a system. It is not for everyone.

The cost manifests itself in multiple dimensions: server parks, new specialized expertise, a learning curve for every engineer, and new product and adjacent principles.

And these systems come with heterogeneous architectures, meaning you can’t have just a single language in there. To use the best tool for each problem, you will end up with a mix of languages and technologies. Some of them can be superficially coded and operated by generalists, but others require very specific expertise, like machine learning in Python.

This requires accepting a loss of macro-efficiency (the organization as a whole seems less efficient: one email service should be enough instead of 4-10) in order to gain micro-efficiency (in a large shared code-base, finding and modifying an existing email service is hard, time-consuming, risky, and heavy on quality assurance, because it threatens normal operations).

How do you choose the best way for your organization? Do you love or hate micro-services? Monoliths?