Don't settle ... unless you can.

Some business processes/problems can handle eventual consistency. Banking is the classic (and perhaps unexpected) example. You can overdraw your account, and there are processes to go back and resolve the conflict. Some are automatic, some are manual, some are legal.

Bitcoin at the blockchain level is eventually consistent: just wait 10 minutes or whatever the current block time is (I don't use it personally, I just know about it) and then you can be fairly sure of the validity of the transaction. There are built-in incentives to ensure histories will converge. But wallets kept at some exchange should _not_ be eventually consistent. You should _not_ be able to take 100x more than your wallet holds and send it to someone else. There is no regulatory, automatic, or any other kind of framework to revert transactions that went through.
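Roughly, settling only on confirmation depth looks like this; get_confirmations is a made-up stand-in for whatever your node or exchange API actually exposes:

    import time

    REQUIRED_CONFIRMATIONS = 6  # common rule of thumb: deeper blocks are harder to reorg away

    def wait_until_settled(txid, get_confirmations, poll_seconds=60):
        # get_confirmations: hypothetical callable, txid -> number of blocks on top
        while get_confirmations(txid) < REQUIRED_CONFIRMATIONS:
            time.sleep(poll_seconds)  # history is still converging; just wait
        return True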

There is CRDT (conflict-free/commutative replicated data type) research. These are data types that can always resolve inconsistencies should they arise; instead of diverging, they auto-converge in the face of conflicts. Think of the set union operation or the max() function.
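A minimal sketch of the idea in Python (my choice, not from any particular CRDT library): merge is just union / max, so replicas converge no matter what order the merges happen in.

    class GSet:
        """Grow-only set: merge is set union (commutative, associative, idempotent)."""
        def __init__(self):
            self.items = set()

        def add(self, x):
            self.items.add(x)

        def merge(self, other):
            self.items |= other.items

    class MaxRegister:
        """Register whose merge keeps the maximum value ever seen."""
        def __init__(self, value=float("-inf")):
            self.value = value

        def assign(self, v):
            self.value = max(self.value, v)

        def merge(self, other):
            self.value = max(self.value, other.value)

    # replicas that saw different updates converge once they exchange state
    a, b = GSet(), GSet()
    a.add("x"); b.add("y")
    a.merge(b); b.merge(a)
    assert a.items == b.items == {"x", "y"}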

These kinds of trade-offs will percolate up through your data layer into your business problem. For some cases you'd want to pick one, for others another.



Yah but banking sucks as an eventual consistency problem. Sure there are processes to go back and solve conflicts, but they suck too.

Part of the reason we can't transfer money instantly between two accounts in the USA is due to sorting out eventual consistency. Federal guidelines on transfers intentionally make it a slow process so the manual processes can catch up.


> Part of the reason we can't transfer money instantly between two accounts in the USA is due to sorting out eventual consistency.

Yeah, but that's because everything is a case of eventual consistency. Causality itself is limited to the speed of light, and "instantly" is impossible. The only question is whether you want to block/wait, or gloss over it with eventual-consistency tricks.

Just look at online FPS games! Even with some of the best consumer-grade communication links and high expectations for each node, it's impossible to provide actual "instant" behavior, and all modern games contain huge reams of code dedicated to maintaining an eventually-consistent environment.
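In cartoon form (illustrative names, not any particular engine's API), the reconciliation step boils down to something like:

    def apply_snapshot(world, snapshot):
        # world / snapshot: entity_id -> (tick, position); the server is authoritative.
        # Only newer ticks win, so late or reordered packets can't roll state backwards.
        for entity_id, (tick, position) in snapshot.items():
            local_tick, _ = world.get(entity_id, (-1, None))
            if tick > local_tick:
                world[entity_id] = (tick, position)
        # between snapshots the client keeps predicting locally, accepting that its
        # view is only eventually consistent with the server's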


I came here to say exactly this ... there are many problem domains where you can work just fine knowing that your data will be eventually consistent.


Very true - the issue is perhaps that you need to be absolutely sure of your problem domain, and absolutely confident in your engineers' ability to maintain eventual consistency (as opposed to simply inconsistency) in the face of changing requirements. More-consistent systems allow much more freedom to change the way your system works without worrying as much about the ways in which it might impact your data storage.

You don't have to go for full serializability - you can very often get away with something simpler, like consistent writes and potentially out-of-date reads. That sort of system scales a long way unless you're very write-heavy.
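One common shape of that, sketched with hypothetical DB-API-style connection objects: writes are acknowledged by a single primary, reads are spread over replicas that may lag a little.

    import random

    def save_order(primary, order):
        # the write path stays consistent: one primary, acknowledged before returning
        primary.execute(
            "INSERT INTO orders (id, status) VALUES (%s, %s)",
            (order["id"], order["status"]),
        )

    def get_order(replicas, order_id):
        # reads may be slightly stale due to replication lag, which is often fine
        replica = random.choice(replicas)
        return replica.execute(
            "SELECT id, status FROM orders WHERE id = %s",
            (order_id,),
        )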



