Enum of Arrays (tigerbeetle.com)
143 points by signa11 6 days ago | 48 comments





Worth mentioning that you can always safely switch between AoS and SoA. Either can represent the other; all you've done is transpose the data. The same is not true of AoE/EoA. The AoE [Spam1, Egg1, Spam2, Spam3, Egg2] has no corresponding EoA that can represent it.

What they're actually doing is an AoE => AoEoA transformation: find batches of elements with the same tag and reorder the elements so that redundant tags can be eliminated. Essentially, a kind of run-length encoding. It's a nice idea.
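
A minimal sketch of that AoE => AoEoA batching in Rust (the Spam/Egg payloads here are invented placeholders, not the article's types):

    // Array of Enums: order preserved, one tag stored per element.
    enum Item { Spam(u32), Egg(u32) }

    // "Array of Enum-of-Arrays": consecutive elements with the same tag
    // are collapsed into one batch, so each tag is stored once per run
    // (run-length encoding of the tags).
    enum Batch { Spam(Vec<u32>), Egg(Vec<u32>) }

    // e.g. [Spam(1), Egg(1), Spam(2), Spam(3), Egg(2)]
    //   -> [Spam([1]), Egg([1]), Spam([2, 3]), Egg([2])]
    fn batch(items: Vec<Item>) -> Vec<Batch> {
        let mut out: Vec<Batch> = Vec::new();
        for item in items {
            // Try to extend the current run if the tag matches...
            let extended = match (out.last_mut(), &item) {
                (Some(Batch::Spam(run)), Item::Spam(x)) => { run.push(*x); true }
                (Some(Batch::Egg(run)), Item::Egg(x)) => { run.push(*x); true }
                _ => false,
            };
            // ...otherwise start a new run for this element's tag.
            if !extended {
                match item {
                    Item::Spam(x) => out.push(Batch::Spam(vec![x])),
                    Item::Egg(x) => out.push(Batch::Egg(vec![x])),
                }
            }
        }
        out
    }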


Good insight.

Ah... category theory :-)

Array-of-Struct (AoS) treats order in arrays as meaningful (arrays as lists), so AoS => Struct-of-Array (SoA) doesn't lose information. It is a sound transformation because it is a homomorphism.

Some languages (homoiconic, or with macros or template support) can express this code transformation: e.g. Julia, https://github.com/JuliaArrays/StructArrays.jl, or Rust, https://www.abubalay.com/blog/2019/02/16/struct-of-arrays
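
Those libraries generate the transposed type for you; a hand-rolled sketch of the same lossless transpose in plain Rust (Point/Points are made-up names):

    struct Point { x: f32, y: f32 }              // AoS element

    struct Points { xs: Vec<f32>, ys: Vec<f32> } // SoA form

    // Both directions preserve all information, including order.
    fn to_soa(aos: &[Point]) -> Points {
        Points {
            xs: aos.iter().map(|p| p.x).collect(),
            ys: aos.iter().map(|p| p.y).collect(),
        }
    }

    fn to_aos(soa: &Points) -> Vec<Point> {
        soa.xs.iter().zip(&soa.ys)
            .map(|(&x, &y)| Point { x, y })
            .collect()
    }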

In a sense, you can see this transformation through the concept of monads (although Haskell monads or F# computational expressions cannot directly express it, as far as I know). The corresponding category diagrams then lead to sets or multi-sets (run-length encoding requires or implies some concept of identity, so in this specific context unordered lists with repetitions, i.e. bags, and multi-sets are equivalent) as the right concept for Enums of Arrays.


Zig can express the AoS-to-SoA transformation very nicely; it's a favored technique for the Zig compiler itself and well supported by the standard library, where it's known as a MultiArrayList.

Another way to represent an EoA that would be homomorphic to AoE would be to use an SoA that has an array of the tags/discriminants, and a separate array containing unions for the values. Although that would be a little harder to work with.

If the order doesn't matter, you could use a separate field for each variant of the enum.
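
A rough Rust sketch of the first suggestion (parallel tag and untagged-union arrays; all names invented):

    enum Tag { Spam, Egg }

    // Untagged payload; which field is live is recorded in tags[i],
    // so reading a field back requires an unsafe access in Rust.
    union Payload {
        spam: u32,
        egg: f32,
    }

    // Order-preserving and homomorphic to the array of enums:
    // element i is the pair (tags[i], payloads[i]).
    struct SplitItems {
        tags: Vec<Tag>,
        payloads: Vec<Payload>,
    }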


The idea that structs of arrays are inherently more cache friendly and thus data-oriented-er is a bit reductive of the whole practice of data-oriented code. The point is to optimize data layout for access patterns. Putting fields of a struct into their own arrays is only actually an optimization if you're only accessing that field in-bulk. And if so, why is it even in a struct in the first place? If you use all fields of a struct in your algorithm, then an array of structs is the optimal way.

All the same is true for enums.


Access patterns matter, but just as important is to have less stuff to access. That's why arrays-of-structs are considered cache friendly - columnar data layouts open the door to optimizations that significantly reduce memory footprint. You no longer waste memory with struct padding. Boolean fields can become bitsets. Enums can be bit-packed. Often-null optional fields can become sparse maps. 8-byte pointers can become narrower-sized indices into object pools.
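
For illustration, a columnar layout along those lines might look like this in Rust (field names and sizes are invented):

    use std::collections::HashMap;

    struct Columns {
        flags: Vec<u64>,             // booleans packed 64 per word (a bitset)
        kinds: Vec<u8>,              // small enum in one byte (could be bit-packed further)
        notes: HashMap<u32, String>, // rarely-present optional field as a sparse map
        targets: Vec<u32>,           // u32 index into a pool instead of an 8-byte pointer
    }

    impl Columns {
        fn flag(&self, i: usize) -> bool {
            // Bit i of the bitset, i.e. the boolean field of record i.
            ((self.flags[i / 64] >> (i % 64)) & 1) == 1
        }
    }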

> “That's why arrays-of-structs are considered cache friendly”

Sounds like you mean structs-of-arrays?


Oops, brainfart on my part. Unfortunately, the edit window has passed.

"Putting fields of a struct into their own arrays is only actually an optimization if you're only accessing that field in-bulk" ... "If you use all fields of a struct in your algorithm, then an array of structs is the optimal way."

This is wrong! Cache optimization isn't the only factor here. Even given an algorithm that seemingly handles each object one-by-one and uses all fields, SIMD turns individual operations into a hidden bulk access, and giving each field its own array will speed things up. This is counter-intuitive at first but becomes obvious if you write SIMD by hand (the article mentions this but doesn't make it super clear IMO)
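
A tiny sketch of that effect (invented example): even though the loop conceptually processes one "object" at a time and touches every field, laying each field out contiguously lets the compiler emit packed SIMD for it:

    fn step(x: &mut [f32], v: &mut [f32], a: &[f32], dt: f32) {
        // Asserting equal lengths also lets the compiler drop bounds checks.
        assert_eq!(x.len(), v.len());
        assert_eq!(x.len(), a.len());
        for i in 0..x.len() {
            v[i] += a[i] * dt; // each field streams through its own cache lines
            x[i] += v[i] * dt; // and vector registers, instead of strided struct access
        }
    }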


Indeed, a struct can also be cooked to pack down with no padding, and/or be dynamically redefined with a union.

Performance issues start to crop up with naive pre-fetching, and thus 100% guaranteed cache misses if the arrays are larger than L2.

This is why LLM AI generated slop degrades blogs into slop delivery services. =3


> This is why LLM AI generated slop degrades blogs into slop delivery services. =3

Not sure what LLMs and AI have to do with any of this.


That is the primary problem domain, as there are a lot of folks that see well-structured nonsense as meaningful. =3

Why do you keep putting a penis "=3" at the end of your messages?

Same with row-major vs. column-major: accessing contiguous data is faster than non-contiguous data, so you should align your algorithms and data structures.
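
Same idea in a nutshell (illustrative code): with row-major storage, keeping the column index in the inner loop walks memory contiguously, while swapping the loops strides by a full row per access:

    // Element (r, c) of a row-major matrix lives at data[r * cols + c].
    fn sum(data: &[f64], rows: usize, cols: usize) -> f64 {
        let mut total = 0.0;
        for r in 0..rows {
            for c in 0..cols {
                total += data[r * cols + c]; // contiguous, cache- and prefetch-friendly
            }
        }
        total
    }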

> The point is to optimize data layout for access patterns.

Yes. That's the point.

> Putting fields of a struct into their own arrays is only actually an optimization if you're only accessing that field in-bulk.

Yes, that's the scenario.

> And if so, why is it even in a struct in the first place?

Because that's how everyone is taught to model domains.

> If you use all fields of a struct in your algorithm, then an array of structs is the optimal way.

No. Your personal belief goes against both theoretical and empirical evidence. Others already talked about cache, padding, vectorized instructions, etc. I recommend doing some quick searching on the topic.


This is a somewhat, hmm, bilingual post. The enum in question here is what Zig calls a tagged union, while Rust calls it an enum, with what Zig calls an enum being the degenerate case where the tag is the only payload.

I thought this would be about std.enums.EnumArray[0], an array of some T which is indexed by an enum. I've gotten a lot of mileage out of those as well. But it's about std.MultiArrayList[1], as used with a tagged union. I've had occasion to use that with structs, but not with unions, and didn't actually know that you could, although now that I know, it makes sense.

Actually it's a variation on MultiArrayList which is optimized for homogeneous collections of one specific union variant, since if that's the useful way to structure the data, then storing one tag per element would be redundant.

Good read, mostly wanted to add a few links for those who want to know more. The comptime metaprogramming used in MultiArrayList is a great illustration of what Zig is capable of IMHO.

[0]: https://ziglang.org/documentation/master/std/#std.enums.Enum... [1]: https://ziglang.org/documentation/master/std/#std.multi_arra...
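
To illustrate the homogeneous-variant point above, here's a Rust-flavored sketch (invented names, not the actual Zig structure): when every element is statically known to be one variant, the tag needn't be stored at all and only the payload columns remain:

    // All elements are Spam, so no tags array; just the payload,
    // itself transposed into columns.
    struct SpamBatch {
        amounts: Vec<u32>,
        expiries: Vec<u64>,
    }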


> This is a somewhat, hmm, bilingual post. The enum in question here is what Zig calls a tagged union, while Rust calls it an enum, with what Zig calls an enum being the degenerate case where the tag is the only payload.

To be fair, I think that most languages typically use enum to refer to the same thing as Zig; if anything, Rust (and Swift, iirc) are somewhat outliers for using that term for tagged unions.


Scala also calls them enums fyi. Personally, I wish everyone would call them variant types.

I often use the term "sum types" for them, since I think it helps explain why they're useful compared to "product" types like structs or objects or tuples. I've heard people refer to them as "algebraic" types, but I don't really like that as a term for them because that feels like it should refer to sum and product types as a categorization rather than one of the categories specifically. Unfortunately, "sum type" doesn't come across clearly in verbal conversations; people tend to hear it as "some types".

In wit for wasm they call them variants, which makes more sense to me. Enum is kind of an odd name for them. https://component-model.bytecodealliance.org/design/wit.html...

> This is a somewhat, hmm, bilingual post.

Yeah, I wish the author had just mentioned what language they were using in the blog post text. I was looking at it and I couldn't identify it. Now I know it is Zig, but I'm not familiar with Zig so I can't identify it by sight. I was looking at it and thinking "this looks a bit like Rust but isn't Rust".


The representation of enum of arrays reminds me of a technique for "de-polymorphicking" or devirtualisation in an object oriented paradigm. Instead of having an array of polymorphic base class instances, you have a separate array for each concrete derived type. This takes advantage of the observation that often the set of derived types is quite limited. As a result, indirection and virtual calls disappear, improving optimisation, cache performance, and branching performance. I think it's quite a smart technique, noticing that the degree of polymorphism provided is unnecessary for the actual use case.
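
A minimal Rust sketch of that de-virtualisation (Circle/Square are placeholder types):

    // Instead of Vec<Box<dyn Shape>> (pointer chasing + virtual dispatch),
    // keep one homogeneous array per concrete type and process each in bulk.
    struct Circle { radius: f64 }
    struct Square { side: f64 }

    struct Scene {
        circles: Vec<Circle>,
        squares: Vec<Square>,
    }

    impl Scene {
        fn total_area(&self) -> f64 {
            let circles: f64 = self.circles.iter()
                .map(|c| std::f64::consts::PI * c.radius * c.radius)
                .sum();
            let squares: f64 = self.squares.iter()
                .map(|s| s.side * s.side)
                .sum();
            circles + squares
        }
    }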

an Enum of Arrays, read literally, would be an enum with one enumerator for every possible array of the original enumerators. there would be M^N enumerators, where N is the length of the array and M is the number of enumerators in the original enum. for example, if the original type was enum { red, green } then the enum of array[3] would have to be an enum containing the 8 enumerators:

    { red-red-red, red-red-green, red-green-red, red-green-green ... green-green-green }
so that's essentially completely useless. i think the exact same problem would occur with an array-of-tagged-union to tagged-union-of-arrays "transformation".

you can't just say "hey: arrays and structs and unions are words and if you can do array of struct and struct of array and enum is also a similar word, then why not enum-of-array?".

while tfa talks about "batches" of items with the same tag, and the advantages therein, that isn't something captured by the example given, at least without extending the EoA to a variable sized array of EoA and something else to track the number of items in a "run" (as in RLE).

this is better thought of as a data-structure problem than a type-theory one.


I don't think I've had the need for a uniformly tagged array of enums. Generally, when I do an AoS to SoA transform that includes tagged data, I just factor out the tag into its own array. In fact, if the tag is 2-valued, I just build a bitmap, rather than burning a whole byte. If the tag is a resource indicator, then I have a group of 1-hot bitmaps.
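
Something like this, for the sake of illustration (layout and names invented):

    struct Entities {
        values: Vec<f64>,     // the payload column(s)
        alive: Vec<u64>,      // 2-valued tag as a bitmap: bit i = entity i's flag
        owner: [Vec<u64>; 4], // 4-valued resource tag as a group of one-hot bitmaps
    }

    impl Entities {
        fn owned_by(&self, i: usize, resource: usize) -> bool {
            // Bit i of the one-hot bitmap for the given resource.
            ((self.owner[resource][i / 64] >> (i % 64)) & 1) == 1
        }
    }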

The SoA transformation makes sense to me and is quite general. The EoA transformation on the other hand feels like a rare special case though it seems perhaps less rare for the OP.

Either way, these types of optimizations are typically marginal in the context of end to end performance of most programs. It's good to have some knowledge of these kinds of techniques, but most of the time it makes sense to do the thing that is most straightforward to implement and optimize later once the program is already working. Of course, if the problem maps neatly onto EoA then that should be preferred in the initial implementation. I, though, in my 30+ years of programming, cannot think of a particular problem that I have solved that would have been enhanced by this.


It's an alternative to OOP. You can get there via a series of transformations:

1. Start with OOP (heap-allocated objects with shared base structs)

2. Transform to using tagged unions instead

3. Transform to the approach outlined in the OP (I call it the "encoding" approach in this talk: https://vimeo.com/649009599)

It's handy because you get to use an index to refer to an object, and you get serialization benefits. The zig compiler uses this pattern in quite a few places:

* https://github.com/ziglang/zig/blob/77c63ac36034db577a9287c5...

* https://github.com/ziglang/zig/blob/77c63ac36034db577a9287c5...

* https://github.com/ziglang/zig/blob/77c63ac36034db577a9287c5...

* https://github.com/ziglang/zig/blob/77c63ac36034db577a9287c5...
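
A very rough Rust sketch of the "encoding" idea (not the zig compiler's actual layout; names are invented): objects live in flat arrays and references between them are plain integer indices, which is also what makes the whole graph easy to serialize:

    #[derive(Clone, Copy)]
    struct NodeIndex(u32);

    enum Node {
        Constant { value: i64 },
        Add { lhs: NodeIndex, rhs: NodeIndex },
    }

    struct Graph {
        nodes: Vec<Node>, // NodeIndex(i) refers to nodes[i]
    }

    impl Graph {
        fn get(&self, index: NodeIndex) -> &Node {
            &self.nodes[index.0 as usize]
        }
    }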


I'll tell you my experience with Zig. I don't have any. I saw maybe Primagen talking about it and I see your post here. I watched 10 minutes of your vimeo video. I see it has 30k+ stars on github. So now I have to try to understand it in a nutshell.

First like any language, I go to indeed.com and put in "Zig" to see if there are any jobs listed which use it. I don't see any.

Then I click to https://ziglang.org/ and it describes Zig as "robust, optimal and reusable". Well that doesn't really say much of anything.

I read the example listed, which appears to be a test case, and I wonder how the 'try' mechanism works without a 'catch'.

Then I go to https://ziglang.org/documentation/master/ and see that it says: "Robust: Behavior is correct even for edge cases such as out of memory."

I wonder how that works, but there are no links to support this claim.

I read a little more then move on.

This isn't to say anything one way or another about Zig, it's just my 30 minutes of reading about Zig.


I spent 2 seconds clicking on your bio and saw this account was created 4 hours ago.

Makes me wonder why you felt the need to create a burner account.

This isn't to say anything one way or another about you, it's just my 2 seconds of reading about you.


> First like any language, I go to indeed.com and put in "Zig" to see if there are any jobs listed which use it. I don't see any.

What does that have to do with anything? Zig is still in beta and they explicitly do not recommend that you use it in production yet unless you're ok with frequent breaking changes. Of course there will be very few jobs (though it's being used by a few notable projects already, including Tigerbeetle - authors of the post we're discussing - and Bun, the JS runtime).


I’d like to unsubscribe from your blog

I disagree. My blog is quite robust and optimal

What a random and untimely user report.

> I, though, in my 30+ years of programming, cannot think of a particular problem that I have solved that would have been enhanced by this.

One example that I frequently deal with that can benefit from this is compiler data structures.


Isn't performance and memory usage generally enhanced by this?

So why not simply default to this instead of defaulting to Interfaces/traits doing dynamic polymorphism?

Makes everyone a bit more happy.


The opposite of "struct" isn't "enum", it's "union" (or "variant"). This blog post isn't about turning an array of enums into an "enum of arrays"; it's about turning an array of unions into a union of arrays.

Which breaks down if any of your unions hold different alternatives. The "array of structs to struct of arrays" transformation, OTOH, cannot break.

It's also common to transform an "array of unions" (or "array of [pointers to] polymorphic types") into a "struct of shorter [homogeneous] arrays."
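
A sketch of that last transformation (made-up payload types); note that, as said above, the original ordering across alternatives is lost unless you keep it on the side:

    enum Item { Spam(u32), Egg(f32) }

    // One shorter, homogeneous array per alternative.
    struct Partitioned {
        spams: Vec<u32>,
        eggs: Vec<f32>,
    }

    fn partition(items: Vec<Item>) -> Partitioned {
        let mut p = Partitioned { spams: Vec::new(), eggs: Vec::new() };
        for item in items {
            match item {
                Item::Spam(x) => p.spams.push(x),
                Item::Egg(x) => p.eggs.push(x),
            }
        }
        p
    }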


But the author can't be bothered to tell us what language his code snippets are written in.

inb4: "The concepts are language independent."


Yeah ... honestly, having an "in Zig" dropped subtly at the top would have alleviated a lot of confusion as to "why this c code looks so weird". :facepalm:

see also: Richard Fabian's data-oriented design book -- the chapter on existential processing discusses enums

https://www.dataorienteddesign.com/dodbook/node4.html

Rough idea: model everything as relational data - define one table for each state. Membership of a record in the table corresponding to state X implies that the record is in state X.

> the reason why you would put an enum in table form, is to reduce control flow impact. Given this, it's when we aren't using the enumerations to control instruction flow that it's fine to leave them alone

An example of the latter might be some kind of state machine, where you can write branch-free code to determine the successor state from current state, and no other processing needs to branch on the state tag.
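
Sketch of both halves (invented states): one array per state so the per-record tag disappears, plus a branch-free successor lookup for the state-machine case:

    // Membership in a "table" encodes the state, so no per-record tag is stored.
    struct Orders {
        pending: Vec<u32>,   // record ids currently Pending
        shipped: Vec<u32>,   // record ids currently Shipped
        delivered: Vec<u32>, // record ids currently Delivered
    }

    // Successor state as a table lookup instead of a branch:
    // Pending -> Shipped -> Delivered -> Delivered.
    const NEXT_STATE: [u8; 3] = [1, 2, 2];

    fn advance(state: u8) -> u8 {
        NEXT_STATE[state as usize]
    }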


I don't get it, why wouldn't you just store tag + count instead? Am I missing something?

The article is eliding the enum's payload. A more realistic example would, I think, have each leg of the enum contain a distinct type of struct (or some other data) in addition to the tag itself, and then have each EoA factored into its own internal SoA.
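
For instance, something along these lines (payload fields invented), with each leg's payload struct transposed into its own internal SoA:

    // Hypothetical payloads: Spam { amount, expiry } and Egg { size, speckled }.
    enum Batch {
        Spam { amounts: Vec<u32>, expiries: Vec<u64> },
        Egg { sizes: Vec<f32>, speckled: Vec<bool> },
    }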

These are enums as Rust coined the term, meaning sum types, not as C did, meaning a subrange of ints with magic names. The Spam and Egg types contain data.

"enums" here are not like C enums, but rather tagged unions as in Rust where the individual items can store data rather than just being empty tags

if you have all of the enum variants constrained to be the same variant, this is just AoS/SoA with a single extra u8 field lifted out of the individual variants, not what you would expect from the title (the variants…not all being the same)

this can then be wrapped in another layer (SoSoA?) when partitioning a set of heterogeneous enum values, but the post makes no mention of that


Now do a single class pointer for an array of values...

This thing should be a poster example of premature optimization. Sure, you can squeeze a few milliseconds out of a performance-critical task. Most things won't measurably benefit though, while making all handling super awkward.

If your abstract domain description is fundamentally a collection of things that have a few parts each, then have your data type represent that, instead of turning it inside out for cache effects. If those become relevant at some point, try to abstract that away and do the optimized internal representation under the hood. But don't preemptively design your data structures in a cumbersome way just in case. That's bad advice.


In what context?

You are assuming the poster is doing something like your typical IO-bound backend, and not, say, a High Performance Computing simulation on a compute cluster.

I have done this kind of optimization to go from 24 hour compute time to 6 hour compute time instead for instance -- per simulation run.

How can you say "a few milliseconds" when you know absolutely nothing about the context?

I do not consider your advice any better at all; you assume all computer code is in the same context -- it really is not. Not all code is written as backend to websites.

You could have said "keep in mind that if you service is IO-bound, these kinds of optimizations are likely a waste" or similar to make the context clear.


> I have done this kind of optimization to go from 24 hour compute time to 6 hour compute time instead for instance -- per simulation run.

I'm sure there are workloads where this kind of optimization makes a lot of sense. But they are comparatively rare. And they are not for free, in terms of code complexity and robustness. So, for the broad masses reading HN, it's a premature optimization.

> How can you say "a few milliseconds" when you know absolutely nothing about the context?

Most code that gets written is not performance critical. Programmers would generally be better advised to think about robustness, correctness and maintainability of their code than about cache effects. The world would be a better place and we'd see fewer app crashes and fewer security holes.


> In what context?

This is a great question for the article's author, I think! They give very little information as to when this class of optimization makes sense, and because it's much more complex to implement than the AoS -> SoA transformation in the general case when the total ordering of enums is important, either a case-study or some general heuristics as to when this transformation is worth the effort would make the article more useful and interesting.




