I skimmed the article, but I imagined they were using it as a secondary data store. I think they want it to be durable in the sense that even if the events have already been consumed, they can still play them back to reindex Elasticsearch (which is a thing you need to do periodically).
"With the log as the source of truth, there is no longer any need for a single database that all systems have to use. Instead, every system can create its own data store (database) – its own materialized view – representing only the data it needs, in the form that is the most useful for that system. This massively simplifies the role of databases in an architecture, and makes them more suited to the need of each application."
Fair enough. With such a setup, it still seems like you ought to be able to burn the Kafka+Elasticsearch world down, resubmit everything to Kafka, and thus rebuild Elasticsearch. I would certainly not sleep very well at night if I could not.
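The quoted idea (one log, many per-system views) can be sketched in a few lines. This is a toy in-memory model, not Kafka's API: the list stands in for a topic, and each "system" derives its own materialized view by reading the whole log.

```python
# Minimal sketch of "the log as the source of truth": an append-only
# event log from which each system builds its own materialized view.
# All names here are illustrative, not real Kafka client calls.

log = []  # append-only event log (stand-in for a Kafka topic)

def append(event):
    log.append(event)

def build_search_index(events):
    # A search-oriented view (roughly what Elasticsearch would hold).
    return {e["id"]: e["text"] for e in events}

def build_counts(events):
    # A different view of the *same* log, for a different system.
    counts = {}
    for e in events:
        counts[e["user"]] = counts.get(e["user"], 0) + 1
    return counts

append({"id": 1, "user": "alice", "text": "hello"})
append({"id": 2, "user": "bob", "text": "world"})
append({"id": 3, "user": "alice", "text": "again"})

# Each consumer replays the whole log to (re)build its view,
# which is exactly what reindexing Elasticsearch amounts to.
index = build_search_index(log)
counts = build_counts(log)
```

Because every view is derived, losing a downstream store is recoverable as long as the log itself survives; that is what the "source of truth" claim buys you.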
> I think they want it to be durable in the sense that even if the events have already been consumed, they can still play them back to reindex Elasticsearch (which is a thing you need to do periodically).
That (replaying if needed) is exactly what Kafka allows you to do, unless I misunderstood what you wrote.
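The key point is that consuming from Kafka does not delete anything: a consumer just tracks an offset into a retained log, so "replay" is merely resetting that offset. A toy model of that mechanic (the classes are illustrative, not the real client API):

```python
# Sketch of offset-based replay, mimicking how a Kafka consumer can
# re-read a retained topic from the beginning. Class and method names
# are illustrative stand-ins, not the real Kafka client API.

class Log:
    def __init__(self):
        self.events = []  # events are retained even after being consumed

    def append(self, event):
        self.events.append(event)

class Consumer:
    def __init__(self, log):
        self.log = log
        self.offset = 0  # the consumer's position, separate from the log

    def poll(self):
        if self.offset < len(self.log.events):
            event = self.log.events[self.offset]
            self.offset += 1
            return event
        return None

    def seek_to_beginning(self):
        # Replay = resetting the offset; nothing is re-published.
        self.offset = 0

log = Log()
for doc in ["a", "b", "c"]:
    log.append(doc)

consumer = Consumer(log)
first_pass = [consumer.poll() for _ in range(3)]

consumer.seek_to_beginning()          # e.g. to reindex Elasticsearch
second_pass = [consumer.poll() for _ in range(3)]
```

In real Kafka the equivalent move is seeking a consumer back to the earliest retained offset (or starting a fresh consumer group that reads from the beginning), which only works if the topic's retention keeps the old events around.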