We have an application that might be similar. It receives analytics events from frontends and currently uses RabbitMQ to distribute them to multiple "sinks", including InfluxDB, Elasticsearch, and websockets; the main sink stores the events as flat files (one JSON hash per line) in S3. That's what we consider our master data.
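The flat-file sink boils down to appending each event as one JSON object per line, so any line can be parsed on its own. A minimal sketch of that format (function names and fields are hypothetical; the actual S3 upload is out of scope here):

```python
import json

def write_events(events, path):
    """Append analytics events to a flat file, one JSON hash per line.
    The resulting file would then be shipped to S3 as-is."""
    with open(path, "a", encoding="utf-8") as f:
        for event in events:
            f.write(json.dumps(event, separators=(",", ":")) + "\n")

def read_events(path):
    """Read the file back; every line parses independently,
    which is what makes this format a convenient master copy."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]
```

Because writes are append-only and each line is self-contained, a partially written file loses at most its last line, which keeps recovery simple.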
For all application-data events I consider Postgres to be the ground truth. That's somewhat unfortunate, because one can't easily place a queue in front of the database. For metrics and logs, the Kafka topic itself (which is persisted similar to your flat files) would become the master. The use case is pretty similar.
Might it be feasible to have something like Postgres work with an external WAL? That would solve the problem, I guess, and leave us with a single "persistent" system.