
Mongo is great if you want a distributed replicated log. Existing tools are sorely lacking. (Postgres and Kafka are broken by design.)


Curious as to why you think Kafka is broken by design?


1. No reliable way to delete already processed entries.

2. No reliable way to handle queue overflow.

Combine both and you are 100% guaranteed to have an incident. (I guess it keeps devops and sysadmins employed, though.)


I wouldn’t have really called these issues “broken by design”…

Rough edges, sure. No reliable way to delete processed messages? Well, who’s to say they were processed? It’s a persistent queue; stuff sticks around by construction. Besides, this can be managed by writing tombstones and turning on compaction for that topic.
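The tombstone-plus-compaction idea can be sketched in a few lines (an illustration of the semantics, not the broker’s actual implementation): compaction keeps only the latest record per key, and a tombstone, i.e. a record with a null value, marks the key for removal on the next compaction pass.

```python
def compact(log):
    """Return the compacted log: latest record per key, tombstoned keys dropped."""
    latest = {}
    for key, value in log:  # later records override earlier ones for the same key
        latest[key] = value
    # drop keys whose most recent record is a tombstone (value None)
    return [(k, v) for k, v in latest.items() if v is not None]

log = [
    ("order-1", "created"),
    ("order-2", "created"),
    ("order-1", "shipped"),
    ("order-2", None),  # tombstone: order-2 is fully processed, reclaim it
]
print(compact(log))  # [('order-1', 'shipped')]
```

So “deleting processed entries” becomes: the consumer (or some cleanup job) writes a tombstone for the key, and compaction eventually reclaims the space.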

How would you want to “handle” queue overflow? You’ve either got storage space or you don’t; this feels a bit like asking “how do I make my bounded queue unbounded”. You don’t; that’s an indicator you’re holding it wrong.

The configs could be a bit easier to drive, but confusing and massive configs are pretty par for the course for Java apps, ime.


> Well, who’s to say they were processed?

The queue, which should keep a reference count for messages.
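What the commenter seems to be proposing might look like this (a hypothetical sketch; all names are invented, and this is not how Kafka works): the queue tracks, per message, which registered consumer groups still need it, and deletes the entry once every group has acknowledged it.

```python
class RefCountedQueue:
    """Queue that deletes a message once all consumer groups have acked it."""

    def __init__(self, groups):
        self.groups = set(groups)
        self.messages = {}  # offset -> (payload, set of groups still pending)
        self.next_offset = 0

    def publish(self, payload):
        self.messages[self.next_offset] = (payload, set(self.groups))
        self.next_offset += 1
        return self.next_offset - 1

    def ack(self, group, offset):
        payload, pending = self.messages[offset]
        pending.discard(group)
        if not pending:               # every group has processed it:
            del self.messages[offset] # safe to reclaim the storage

q = RefCountedQueue(groups=["billing", "audit"])
off = q.publish("event-1")
q.ack("billing", off)
assert off in q.messages      # "audit" hasn't processed it yet
q.ack("audit", off)
assert off not in q.messages  # refcount hit zero, entry deleted
```

The obvious cost is that the broker must now know every consumer up front and track per-message state, which is exactly the coupling Kafka’s retention-based model avoids.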

> How would you want to “handle” queue overflow?

At the producer end, of course.
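Producer-side handling could be sketched like this (an illustration of the backpressure idea, not Kafka’s API): a bounded queue rejects the publish when full, so the producer must retry, shed load, or slow down, and pressure propagates upstream instead of filling the disk.

```python
from collections import deque

class BoundedQueue:
    """Queue with a hard capacity; producers get an explicit overflow signal."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()

    def try_publish(self, item):
        """Non-blocking publish: False means 'full', and the producer decides
        whether to retry, drop the item, or back off."""
        if len(self.items) >= self.capacity:
            return False
        self.items.append(item)
        return True

    def consume(self):
        return self.items.popleft()

q = BoundedQueue(capacity=2)
assert q.try_publish("a")
assert q.try_publish("b")
assert not q.try_publish("c")  # full: producer must back off
q.consume()
assert q.try_publish("c")      # space freed, publish succeeds
```

For what it’s worth, Kafka producers do get client-side backpressure when their local send buffer fills, but the broker’s disk filling up is a different failure mode, which is the overflow being argued about here.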

> You’ve either got storage space, or you don’t

Kafka assumes you have infinite storage space. That might be okay for toy projects or for the insane architectures you see in the enterprise space, but not for a serious project.



