This model is not new...
... and it is not universally useful.
One can decompose *any* "monolithic" application into a collection of functions, assign each of them a TCP/IP URI, send them messages, and await the results they produce. In fact, one model of procedural programming consists of sending messages, i.e. activation records, to functions.
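Concretely, the transformation looks something like this Python sketch; the endpoint URL and payload shape are made up purely for illustration:

```python
import json
import urllib.request

# In-process: a plain function call. The "message" is the activation
# record the runtime builds on the stack; overhead is nanoseconds.
def net_pay(gross: float, tax_rate: float) -> float:
    return gross * (1.0 - tax_rate)

# Decomposed: the same function behind a URI. The message is now a
# serialized payload and the call is a network round trip.
def net_pay_remote(gross: float, tax_rate: float) -> float:
    payload = json.dumps({"gross": gross, "tax_rate": tax_rate}).encode()
    req = urllib.request.Request(
        "http://payroll.example/net_pay",  # hypothetical endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["net"]
```

Semantically the two are interchangeable, which is exactly why the decomposition always *works*; the difference is all in the cost of delivering the message.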
(Entire operating systems are based on the concept of message-passing, e.g. QNX, AmigaOS.)
Even the problem of global variables can be resolved in this model by replacing them with accessor functions.
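A sketch of that substitution, again with invented names:

```python
# Before: a global read and written directly by many functions.
TAX_RATE = 0.25

# After: the same state hidden behind accessors, which can themselves
# be given URIs and called like any other function in the decomposition.
_tax_rate = 0.25

def get_tax_rate() -> float:
    return _tax_rate

def set_tax_rate(rate: float) -> None:
    global _tax_rate
    _tax_rate = rate
```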
Where the notion that this is a panacea breaks down is overhead. For top-level procedures in interactive information applications, which are expected to run at UI speed, this sort of thing is fine. On the other hand, if one is running something that is compute- or I/O-bound rather than UI-bound, one suffers a huge loss of performance.
To make matters worse, much of this interaction now takes place over a network, rather than by passing such messages in memory. And a network is a bus, no matter how many switches are put into place to make it behave as a mesh. All those "serverless" calls can make it quite congested.
Try calculating a 50K employee payroll this way. On the other hand, don't.
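Some back-of-the-envelope arithmetic makes the point. Every number here is an assumption chosen for illustration, not a measurement:

```python
EMPLOYEES = 50_000
CALLS_PER_EMPLOYEE = 20    # assumed: tax, benefits, deductions, ...

IN_MEMORY_CALL = 100e-9    # assumed ~100 ns per in-process call
NETWORK_CALL = 10e-3       # assumed ~10 ms per serialized round trip

total_calls = EMPLOYEES * CALLS_PER_EMPLOYEE  # 1,000,000 calls

print(f"in memory: {total_calls * IN_MEMORY_CALL:.1f} s")            # ~0.1 s
print(f"over the network: {total_calls * NETWORK_CALL / 3600:.1f} h")  # ~2.8 h
```

Fanning the calls out in parallel can hide some of that latency, but every one of those million round trips still crosses the shared bus described above.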
(One sees the same problem in "hyper-kernel" systems that are scaled up too far.)
Loosely coupled has its place. Tightly coupled has its place. As with any new paradigm, or in this case a new name for an old paradigm, being shiny and new makes this one interesting, but it doesn't necessarily make it universally useful.