An immutable, append-only, persistent log doesn't imply storing everything _forever_.
If you want to remove something, you can add a tombstone record (as Cassandra does) and eventually drop the original entry during routine maintenance operations: repacking into a more efficient format, archival into cold storage, TTL handling, etc.
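A minimal sketch of that idea in Python (the record shapes here are made up, not any particular system's format):

    # Append-only log: entries are only ever appended, never edited in place.
    log = [
        {"key": "user:42", "value": "hello", "ts": 1},
        {"key": "user:42", "value": None, "ts": 2, "tombstone": True},  # deletion marker
        {"key": "user:7", "value": "world", "ts": 3},
    ]

    def compact(entries):
        """Routine maintenance pass: keep only the newest entry per key,
        then drop keys whose newest entry is a tombstone."""
        latest = {}
        for e in entries:            # entries are already in ts order
            latest[e["key"]] = e     # later entries win
        return [e for e in latest.values() if not e.get("tombstone")]

    print(compact(log))  # user:42 is gone, user:7 survives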
A notable example of a large-scale app built with a very similar architecture is ATProto/Bluesky[1].
"ATProto for Distributed Systems Engineers" describes how updates from users first land in their own small per-user databases (called PDSes) and then flow into a replicated log. What we traditionally think of as an API server (called a view server in ATProto) is simply one of many materializations of this log.
I personally find this model of thinking about dataflow in large-scale apps pretty neat and easy to understand. The parallels are unsurprising since both the Restate blog and ATProto docs link to the same blog post by Martin Kleppmann.
This architecture seems to be working really well for Bluesky; they clearly sailed through multiple 10x growth events very recently.
Pkl was one of the best internal tools at Apple, and it’s so good to see it finally getting open sourced.
My team migrated several kloc of k8s configuration to Pkl with great success. Internally we used to write alert definitions in Pkl, and it would generate configuration for two different monitoring tools plus a static documentation site, and link it all together nicely.
Would gladly recommend this to anyone and I’m excited to be able to use this again.
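For anyone who hasn't seen the pattern, here's a rough Python analogy of the "one definition, many generated configs" workflow (the tool names and schemas are invented; the real setup was in Pkl, not Python):

    # A single alert definition, written once.
    alert = {
        "name": "high_error_rate",
        "query": "rate(http_errors_total[5m]) > 0.05",
        "severity": "page",
        "runbook": "https://runbooks.example/high_error_rate",
    }

    # Renderer for hypothetical monitoring tool A (rule-style config).
    def to_tool_a(a):
        return {
            "alert": a["name"],
            "expr": a["query"],
            "labels": {"severity": a["severity"]},
            "annotations": {"runbook_url": a["runbook"]},
        }

    # Renderer for hypothetical monitoring tool B (flat monitor object).
    def to_tool_b(a):
        return {"monitor": a["name"], "condition": a["query"], "paging": a["severity"] == "page"}

    # Renderer for the documentation site: one section per alert, linking back to the runbook.
    def to_docs(a):
        return f"## {a['name']}\n\nQuery: `{a['query']}`\nRunbook: {a['runbook']}\n"

    for render in (to_tool_a, to_tool_b, to_docs):
        print(render(alert))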
I was about to ask if you had k8s API models available internally, and to suggest that someone should create a tool to generate them from the spec. But it turns out it already exists in the open!
I wish it would give me a good curated news feed from dozens of sources and adapt based on feedback. I badly wanted to love it, but no matter how much I tried, it ended up looking like a mix of Buzzfeed and Murdoch propaganda.
Happy to see the idea is not dead and new companies are giving it a shot.
They're definitely not; that's just what their IP lookup would have you believe. There are in fact tons of root server instances located all over the world, reachable via anycast.
ASCII is English, and limiting access to knowledge for the rest of humanity in exchange for a simpler encoding is just not an acceptable option. Someone needs to interpret those 7k words and write a (complicated?) program once so that billions can read in their own language? Sounds like an easy win to me.
Sure, spoken, but both Arabic and CJK ideograms are written in far more countries, by far more people, and for far longer in history than the ASCII set. The oldest surviving great works of mathematics were written in Arabic and some of the oldest surviving great works of poetry were written in Chinese, as just two easy and obvious examples of things worth preserving in "plain text".
Playing the devil's advocate here. I am not a native English speaker, I'm a French speaker, but I'm happy that English is kind of the default international language. It's a relatively simple language; I actually make fewer grammar mistakes in English than I do in my native language. I suppose it's probably not a politically correct thing to say, the English are the colonists, the invaders, the oppressors, but eh, maybe it's also kind of a nice thing for world peace if there is one relatively simple language that's accessible to everyone?
Go ahead and make nice libraries that support Unicode effectively, but I think it's fair game, for a small software development shop (or a one-person programming project), to support ASCII only for some basic software projects. Things are of course different when you're talking about governments providing essential services, etc.
I know almost no one who actually types the accented e, let alone the c with the cedilla. I scarcely ever see the degree symbol typed. Rather, I see facade, cafe, and "degrees".
That aside, the big problem with Unicode is not those characters; they're a simple two-byte extension that still obeys a simple bijective mapping of encoded character <-> character on screen. Unicode as a whole doesn't: you have to deal with multiple code points representing one on-screen grapheme, which in turn may or may not translate into a single on-screen glyph, plus bi-directional text, or even vertical text (see the recent post about Mongolian script). Unicode is still probably one of the better solutions possible, but there's a reason you don't see it everywhere: it means not just moving to wide chars but also dealing with a text shaper, redoing your interfaces, and tons of other messy stuff. It's very easy for most people to look at that and ask why they'd bother if only a tiny percentage of users use, say, vertical text.
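A quick Python illustration of the "multiple code points per grapheme" point:

    import unicodedata

    precomposed = "\u00e9"     # é as a single code point (LATIN SMALL LETTER E WITH ACUTE)
    combining   = "e\u0301"    # 'e' followed by COMBINING ACUTE ACCENT: two code points

    print(len(precomposed), len(combining))   # 1 2 -- yet both render as one grapheme
    print(precomposed == combining)           # False
    print(unicodedata.normalize("NFC", combining) == precomposed)  # True after normalization

    # A flag emoji is two code points (regional indicators) but one on-screen grapheme.
    flag = "\U0001F1EF\U0001F1F5"
    print(len(flag))                          # 2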
The first point is just an artifact of which keys are on a keyboard.
I see many uses of "pounds" or "GBP" on HN. Anyone with the symbol on the keyboard (British and Irish obviously, plus several other European countries) types £. When people use a phone keyboard, and a long-press or symbol view shows $, £ and €, they can choose £.
Danish people use ½ and § (and £). These keys are labelled on the standard Danish Windows keyboard.
There's plenty of scope for implementing enough Unicode to support most Latin-like languages without going as far as supporting vertical or RTL text.
For some reason people seem to think that the only options are UTF-8 and ASCII. That choice never existed. There are thousands upon thousands of character encodings in use. Before Unicode, every writing system had its own character encoding, incompatible with everything else.
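A small Python demo of the point: the exact same bytes decode to entirely different characters depending on which legacy encoding you assume.

    data = bytes([0xC4, 0xE9])

    for enc in ("latin-1", "cp1251", "koi8-r", "mac-roman"):
        print(enc, data.decode(enc))

    # latin-1 gives "Äé", cp1251 gives the Cyrillic "Дй", and the other two
    # give different characters again -- the bytes alone don't tell you which.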
You didn't say spoken by every person. Merely spoken in every country. Even the existence of tourists in a country would pass this incredibly low bar...
https://blog.jabid.in/2024/11/24/sqlite.html