> What I've found is that the event driven architecture did tend to lead to less abstractions and more leaked internal details. This isn't fundamental (you can treat events like an API)
I'd put it the other way: event driven architecture makes it safer to expose more internal details for longer, and lets you push back the point where you really need to fully decouple your API. I see that as an advantage; an abstract API is a means not an end.
> Another problem with distributed systems with persistent queues passing events is that if the consumer falls behind you start developing a lag.
Isn't that what you want? Whatever your architecture, fundamentally when you can't keep up either you queue or you start dropping some inputs.
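To make that concrete, here's a minimal Python sketch (names are illustrative) of the two options: a bounded buffer that accumulates lag, with an overload path for when it fills up.

```python
import queue

# Bounded buffer: when the consumer falls behind, lag grows up to the cap.
EVENTS = queue.Queue(maxsize=1000)

def on_input(event):
    try:
        EVENTS.put_nowait(event)   # option 1: queue it (consumer lag grows)
    except queue.Full:
        drop(event)                # option 2: shed it (data loss)

def drop(event):
    # Placeholder policy: discard. Real systems might sample, push
    # backpressure upstream, or spill to disk instead.
    pass
```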
> If you have a request/response system you can simulate events over that.
How? I mean you can implement your own eventing layer on top of a request/response system, but that's going to give you all the problems of both.
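For illustration, a hand-rolled eventing layer over request/response usually ends up looking something like this polling loop (the endpoint and payload shape are made up), where you re-invent offsets, retention, and delivery semantics yourself:

```python
import json
import time
import urllib.request

# Hypothetical sketch: a client polls a request/response API for
# "events after my last offset". The /events endpoint is made up.
def poll_events(base_url, offset=0, interval=1.0):
    while True:
        with urllib.request.urlopen(f"{base_url}/events?after={offset}") as resp:
            batch = json.loads(resp.read())
        for event in batch:
            yield event
            offset = event["id"]   # client tracks its own position
        time.sleep(interval)       # polling latency: the request/response tax
```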
> If we look at things like consensus protocols or distributed/persistent queues then obviously we would need some underlying resources (e.g. you might need a database behind your request/response model).
Huh?
> Don't know if others have a similar experience but when one system is mandated people will invent workarounds that end up looking like the other paradigm, which makes things worse.
I agree that building a request/response system on top of an event-sourcing system gives you something worse than using a native request/response system. But that's not a good reason to abandon the mandate, because building a true event-sourcing system has real advantages, and most of those advantages disappear once you start mixing the two. What you do need is full buy-in and support at every level rather than a mandate imposed on people who don't want to follow it, but that's true of every development choice.
Everything is a means and not an end, but decoupling via an explicit API makes change easier. Spreading state across your system via events (specifically, synchronizing data across systems via events relating to how that data changes) creates coupling.
re: Huh. Sorry, I was not clear there. What I meant is that you cannot create persistent-queue semantics out of a request/response model without being able to make certain kinds of requests that access underlying resources (e.g. a database). Maybe that's an obvious statement.
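As a minimal sketch of that point (all names illustrative): persistent-queue semantics built out of request/response operations, which only works because each request can touch an underlying resource, here a SQLite table standing in for the database.

```python
import sqlite3

# Each function would sit behind a request/response endpoint; the
# queue's persistence lives entirely in the backing store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE q (id INTEGER PRIMARY KEY AUTOINCREMENT, body TEXT)")

def enqueue(body):   # e.g. the handler for POST /queue
    db.execute("INSERT INTO q (body) VALUES (?)", (body,))
    db.commit()

def dequeue():       # e.g. the handler for POST /queue/take
    row = db.execute("SELECT id, body FROM q ORDER BY id LIMIT 1").fetchone()
    if row is None:
        return None
    db.execute("DELETE FROM q WHERE id = ?", (row[0],))
    db.commit()
    return row[1]

enqueue("hello")
assert dequeue() == "hello"
```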
re: mandate. I think I'm saying these sorts of mandates inevitably result in poor design. Even the purest of pure event-sourcing systems actually use request/response, simply because that is the fundamental building block of systems. E.g. Kafka uses gRPCs from the client and waits for a response in order to inject something into a queue. The communication between Kafka nodes is based on messages. The basic building block of any distributed computer system is a packet (request) being sent from one machine to another, and a response being sent back (e.g. TCP control messages).

A mandate that says thou shalt build everything on top of event sourcing is sort of silly in this context, since it should be obvious that the building blocks of event-sourced systems use request/response. Even without this nit-picking, restricting application developers to building only on top of this abstraction inevitably leads to ugliness. IMO anyway, having seen this mandate at work in large organizations. Use the right tool for the job is more or less what I'm saying, or, as the famous version has it: when all you have is a hammer, everything looks like a nail.
re: isn't that what you want. Well, if it is what you want then it is what you want, but many systems are OK with things just getting lost and not persisted. E.g. an HTTP GET request from a browser, in the absence of a network connection, is simply lost; it's not persisted to be replayed later, so there is no way to build up a lagging queue of HTTP GET requests that are yet to be processed. Again, maybe an obvious statement.
> decoupling via an explicit API makes change easier. Spreading state across your system via events (specifically synchronizing data across systems via events relating to how that data changes) creates coupling.
An explicit API comes at a cost. The way I'd put it is that events are inherently lower-coupling (e.g. you can publish the same events in multiple formats, whereas a request/response API generally needs a single response format), so you have more slack to defer that cost: it takes longer to reach the point where the overall system coupling is bad enough that you need to introduce those API layers.
I'm not sure I follow what you're saying about sharing events relating to how data changes. IMO if you need a shared view of "the current version of the data", the right solution is to publish that as events too.
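In sketch form (the event schema here is made up): every consumer can derive "the current version of the data" by folding the change events, rather than asking some owning service for it.

```python
# Fold a stream of change events into current state. The set/delete
# event shapes are purely illustrative.
def current_state(events):
    state = {}
    for ev in events:
        if ev["type"] == "set":
            state[ev["key"]] = ev["value"]
        elif ev["type"] == "delete":
            state.pop(ev["key"], None)
    return state

events = [
    {"type": "set", "key": "user:1", "value": "alice"},
    {"type": "set", "key": "user:2", "value": "bob"},
    {"type": "delete", "key": "user:1"},
]
assert current_state(events) == {"user:2": "bob"}
```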
> E.g. Kafka uses gRPCs from the client and waits for a response in order to inject something into a queue. The communication between Kafka nodes is based on messages. The basic building block of any distributed computer system is a packet (request) being sent from one machine to another, and a response being sent back (e.g. TCP control messages).
I don't know the details of Kafka's low-level protocols, but it's certainly possible to build these systems on one-way messaging all the way down; gRPC has one-way messages, and plenty of protocols are built on one-way UDP rather than TCP...
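The UDP case is easy to show: a datagram is fired off with no response expected at all (contrast with TCP's handshake and ACKs).

```python
import socket

# One-way messaging: send a datagram and move on. Host/port are
# placeholders; delivery is best-effort and nothing is awaited.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b'{"type": "click", "x": 10, "y": 20}', ("127.0.0.1", 9999))
sock.close()
```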
> E.g. an HTTP GET request from a browser, in the absence of a network connection, is simply lost; it's not persisted to be replayed later, so there is no way to build up a lagging queue of HTTP GET requests that are yet to be processed.
Right, because HTTP is a request-response protocol. Whereas Kafka does buffer messages and send them later if you lose your network connection for a short time (of course there is a point at which it will give up and call your error handler).
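That behaviour is visible in the producer API. A sketch using the confluent-kafka Python client (the broker address is a placeholder): produce() buffers and returns immediately, the client retries in the background, and the delivery callback only reports an error once the configured timeout expires.

```python
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "message.timeout.ms": 30000,            # how long to keep retrying
})

def on_delivery(err, msg):
    if err is not None:
        print(f"gave up after retries: {err}")   # the "error handler" above
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}]")

# Appends to an in-memory buffer and returns immediately; the client
# retries delivery in the background until message.timeout.ms elapses.
producer.produce("events", value=b"payload", callback=on_delivery)
producer.flush()  # block until outstanding delivery reports have fired
```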
I don't think the fact that HTTP works that way means it's desirable to just abandon those requests. In fact, these days if you navigate to a page in Chrome when you have no network connection, it will make the request when you're back online and send you a notification that it has loaded the page it couldn't load earlier.
Usually when you're ingesting something into Kafka it's important to know whether that was successful or not, hence the more or less inherent request/response step that's part of it. That said, it's an interesting thought experiment to see how far you can go without that.
When I think of large-scale success stories around the request/response model I think of AWS (where Bezos famously mandated APIs first) and Google. Both now have services that look more event-oriented (e.g. SQS or Firebase). And of course the modern web (though the ugly hacks needed to make it look event driven were certainly not fun).
Events related to data changes are about keeping data structures in sync via events, also known as state-based architecture. Something I worked on in the early 2000s kept a remote client/UI in sync with the server and a database using events like that, and it was a pretty lean, neat implementation.
Good one on Chrome re-making requests for an active tab when you're back online. That's certainly an interesting use case.
My intuition is that some things are very naturally events: say, a packet arriving at your computer, or a mouse click. And some things are naturally request/response: say, calculating the cosine of an angle. You can replace x = cos(y) with an event out and an event that comes back, but that feels awkward as a human. Maybe not the best example...
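For fun, here's roughly what that looks like: x = cos(y) reworked as an event out plus an event back, matched up by correlation id (everything here is illustrative).

```python
import math
import queue
import threading
import uuid

requests, responses = queue.Queue(), queue.Queue()

def cosine_service():
    # Consumes request events, emits response events.
    while True:
        corr_id, y = requests.get()
        responses.put((corr_id, math.cos(y)))

threading.Thread(target=cosine_service, daemon=True).start()

def cos_via_events(y):
    corr_id = uuid.uuid4()
    requests.put((corr_id, y))
    while True:                           # wait for "our" response event
        got_id, result = responses.get()
        if got_id == corr_id:
            return result
        responses.put((got_id, result))   # someone else's; put it back

print(cos_via_events(0.0))  # 1.0, with far more ceremony than math.cos(0.0)
```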
It's another variation on the sync vs. async debates I guess. Coroutines or callbacks...