When I was working for Beeline, there was a big push to build a completely new version of their SaaS platform, all done with microservices. To me, relegated to the lesser position of managing integrations in the legacy monolithic application, it seemed like a lot of work for very little payoff. Turns out that, unless you're Google, there probably won't be a lot of benefit from microservices, and, in fact, I think that more than a little bit of "resume-driven development" was at work there.
While microservices do have real advantages, such as scalability, independent deployment, and the freedom to use multiple technologies, they're not a panacea. They are more complex, require more tooling and operational support to manage effectively, and are harder to test end-to-end. And you can't get rid of the communication overhead: even if all the code runs on the same machine, every call has to be marshaled into a REST, gRPC, or similar interface, unmarshaled on the other side, and the same done with any returned data. You may also have fragmented your stored data across multiple databases, which makes it harder to maintain consistency and optimize access.
Anecdotally, adopting microservices brings plenty of issues of its own. This comment page on Hacker News offers several stories:
(Bringing "microservices" into the room is definitely not helpful) It's a pretty good way to increase technical debt. The microservices themselves might become small enough that it is easy to refactor some at a time, but then you have the actual deployment and communication infrastructure to contend with. And if you get that architecture wrong, oh boy you'll be begging to go back to a monolith in a day.
I remember at a previous job, the monolith was drowning in technical debt. Someone decided the solution was Go microservices. Fast-forward 18 months and 95% of the functionality is still in the monolith, but there are now 25 microservices, and no environment (except production) where you can test everything together.
And, most tellingly:
stop following Google. You are not Google. You do not have the needs of Google. Just stop it.
If you are a Google, a Netflix, or a Spotify, this doesn't apply to you. But in that case, you have the engineers, the IT support, and the budget to make it work.
For the rest of us, though, it's overkill. Even at high scale, if the domain model is simple, you don't redeploy often, or your team isn't big enough to handle the complexity, you ain't gonna need it. If I ever do rewrite or update Venice, for instance, I probably won't split it into microservices.
But there is one thing I could do...structure the code internally so that pieces of it could eventually be easily split up into microservices, if that's ever warranted. Such a design is colloquially known as a "modulith." And, to me, that makes a hell of a lot of sense anyway.
One way of structuring modulithic applications is via the hexagonal architecture pattern, also known as the ports and adapters pattern. The principle here is that you define the innermost functionality as the application core, and everything else interfaces to it via defined "ports." The UI is a port. The database is a port. Notifications are a port. And the only thing your core application knows is to call on those ports to accomplish anything dealing with the "outside world."
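As a minimal sketch of that idea (all the names here are invented for illustration), the core defines the ports it needs as interfaces, and adapters plug in from the outside. The core never imports HTTP, SQL, or SMTP code; it only calls its ports:

```python
from abc import ABC, abstractmethod

# Ports: the application core defines the interfaces it needs
# from the outside world, and nothing more.
class NotificationPort(ABC):
    @abstractmethod
    def notify(self, user_id: str, message: str) -> None: ...

class OrderRepositoryPort(ABC):
    @abstractmethod
    def save_order(self, order_id: str, total: float) -> None: ...

# Application core: knows only the ports, not their implementations.
class OrderService:
    def __init__(self, repo: OrderRepositoryPort, notifier: NotificationPort):
        self.repo = repo
        self.notifier = notifier

    def place_order(self, user_id: str, order_id: str, total: float) -> None:
        self.repo.save_order(order_id, total)
        self.notifier.notify(user_id, f"Order {order_id} placed: ${total:.2f}")

# Adapters: concrete implementations plugged in at the edges.
# In a real modulith these might wrap a database or an email gateway.
class InMemoryOrderRepository(OrderRepositoryPort):
    def __init__(self):
        self.orders = {}

    def save_order(self, order_id: str, total: float) -> None:
        self.orders[order_id] = total

class RecordingNotifier(NotificationPort):
    def __init__(self):
        self.sent = []

    def notify(self, user_id: str, message: str) -> None:
        self.sent.append((user_id, message))

repo = InMemoryOrderRepository()
notifier = RecordingNotifier()
service = OrderService(repo, notifier)
service.place_order("u1", "o42", 19.99)
```

Because the dependencies all point inward, swapping an adapter (say, a real database for the in-memory one) never touches the core.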
As time goes on, you may see parts of the application that might be better factored out as microservices. You can then do so, building on the application in a natural evolution, rather than dictating a swarm of services from the start.
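That evolution can be as small as writing one new adapter. As a hypothetical sketch (the names and payload shape are my own invention), if notifications later deserve their own service, only the adapter behind the port changes; the application core keeps calling the same interface, unaware the work now happens remotely:

```python
import json

class RemoteNotificationAdapter:
    """Satisfies the same notification port as the in-process version,
    but marshals each call into a request for a remote service."""

    def __init__(self, send):
        # 'send' is an injected transport, e.g. an HTTP POST function.
        self.send = send

    def notify(self, user_id: str, message: str) -> None:
        payload = json.dumps({"user_id": user_id, "message": message})
        self.send("/notifications", payload)

# A fake transport stands in for the network here.
calls = []
adapter = RemoteNotificationAdapter(lambda path, body: calls.append((path, body)))
adapter.notify("u1", "Order o42 placed")
```

The point is that the decision to go remote is deferred until there's evidence it's worth paying the marshaling and latency costs described earlier.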
I don't know how well Beeline has done with their microservices-heavy system. I hope they're doing OK. But that design isn't the be-all and end-all of designs, no matter what the "architecture astronauts" might tell you. Oftentimes, it's better to take the intermediate course.