In this architecture, events are a persistence strategy. Once created, they are saved forever, so a dropped event calls for an investigation and a restore from backup.
Having worked with such a system before, I found the real pain is that any data exposed by one microservice to another becomes an API, and it is very hard to design events that are both a good public API and a private data persistence system. Immutability means consumers of your API will need to understand, and deal with, obscure concepts and business rules from five years ago, because those events are still around.
One of the major points of an API is to hide private complexity behind a public abstraction that you can design and update as needed.
It seems like they had something that was working - the monolith. Then they read about the shiny new microservice thing, changed to that, and started to break stuff by "updating" fields from text to uuid and so on. This will not be fixed with events. An "eventlog" cannot fix breaking changes like that. Maybe the monolith should be considered again?
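To make the "text to uuid" point concrete: once events are immutable, a schema change like that cannot be applied to the stored history, so every consumer has to carry translation logic for the old shape forever. Here is a minimal sketch of that "upcaster" pattern in Python; the event shapes, field names, and namespace are all hypothetical, not from the original discussion.

```python
import uuid

# Hypothetical event schemas: v1 stored user_id as free text, v2 as a UUID.
# The old v1 events are immutable and stay in the log, so every reader must
# upcast them on the fly, forever.

# Deterministic namespace so the same legacy text id always maps to the same
# UUID (illustrative value only).
LEGACY_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "legacy-user-ids.example")

def upcast(event: dict) -> dict:
    """Translate any historical event version to the current (v2) shape."""
    if event.get("version") == 1:
        # v1 stored user_id as arbitrary text; derive a stable UUID from it.
        return {
            "version": 2,
            "type": event["type"],
            "user_id": str(uuid.uuid5(LEGACY_NAMESPACE, event["user_id"])),
        }
    return event  # already current

old_event = {"version": 1, "type": "UserRegistered", "user_id": "bob_from_accounting"}
new_event = upcast(old_event)
print(new_event["version"])            # 2
print(upcast(new_event) == new_event)  # True: v2 events pass through unchanged
```

Note that even this "fix" is not free: the mapping from text to UUID is a new business rule that every consumer has to agree on, which is exactly the kind of five-year-old obscurity mentioned above.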
My initial reaction is that I would worry about the impact of events getting dropped on the overall state of the system.