"gojq does not keep the order of object keys" is a bit disappointing.
I care about key order purely for cosmetic reasons: when I'm designing JSON APIs I like to put things like the "id" key first in an object layout, and when I'm manipulating JSON using jq or similar I like to maintain those aesthetic choices.
I know it's bad to write code that depends on key order, but it's important to me as a way of keeping JSON as human-readable as possible.
After all, human readability is one of the big benefits of JSON over various other binary formats.
Go actually went in the other direction for a bunch of reasons (e.g. hash-collision DoS resistance) and made key order quasi-random when iterating. Small maps used to iterate in a consistent order, but that was deliberately randomized so people wouldn't come to rely on it and get stung when their maps grew larger: https://github.com/golang/go/issues/6719
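For illustration, a minimal Go snippet showing the behaviour (run it a few times and the output order changes):

```go
package main

import "fmt"

func main() {
	m := map[string]int{"id": 1, "name": 2, "tags": 3}
	// Each run (and even each range loop) may visit keys in a different
	// order; the Go runtime deliberately randomizes the iteration start.
	for k, v := range m {
		fmt.Println(k, v)
	}
}
```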
Right, the startling thing about Python's previous dict was that it was so terrible that the ordered dict was actually significantly faster.
It's like if you did such a bad job making a drag racer that the street legal model of the same car was substantially faster over a quarter mile despite also having much better handling and reliability.
In some communities the reaction would have been to write a good unordered dict which would obviously be even faster, but since nobody is exactly looking for the best possible performance from Python, they decided that ordered behaviour was worth the price, and it's not as though existing Python programmers could complain since it was faster than what they'd been tolerating previously.
Randomizing is the other choice if you actually want your maps to be fast and want to resist Hyrum's law, but see the absl experience: they initially didn't bother to randomize tiny maps, but then the order of those tiny maps changed for technical reasons and... stuff broke. Because hey, in testing I made six of these tiny maps, they always had the same order, therefore (ignoring the documentation imploring me not to) I shall assume the order is always the same...
> In some communities the reaction would have been to write a good unordered dict which would obviously be even faster
Actually, an ordered dictionary has improved performance over an unordered dictionary for the kinds of common Python workloads you encounter in the real world. The reason is that the design is only incidentally ordered: it arises from trying to improve memory efficiency and iteration speed. The dict ends up ordered because the real k/v pairs are stashed in a regular array which is indexed by the hash table, and populating that array is most efficient in insertion order. For pure "unordered map" type operations the newer implementation is actually a tiny bit slower.
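A rough sketch of that layout in Go (all names hypothetical; the built-in map here stands in for the real open-addressed table of indices):

```go
package main

// A toy version of the "compact dict" layout: the index maps a key to a
// slot in the dense entries slice, and the entries slice holds the real
// key/value pairs in insertion order.
type entry struct {
	key   string
	value int
}

type compactDict struct {
	index   map[string]int // key -> position in entries
	entries []entry        // dense array; iteration order == insertion order
}

func newCompactDict() *compactDict {
	return &compactDict{index: map[string]int{}}
}

func (d *compactDict) Set(k string, v int) {
	if i, ok := d.index[k]; ok {
		d.entries[i].value = v // overwrite keeps the original position
		return
	}
	d.index[k] = len(d.entries) // appending is cheapest in insertion order
	d.entries = append(d.entries, entry{k, v})
}

func (d *compactDict) Get(k string) (int, bool) {
	i, ok := d.index[k]
	if !ok {
		return 0, false
	}
	return d.entries[i].value, true
}

// Delete leaves a tombstone (here: a zeroed entry) rather than shifting
// everything down; a real implementation periodically compacts these away,
// which is the ordering-maintenance cost discussed later in the thread.
func (d *compactDict) Delete(k string) {
	if i, ok := d.index[k]; ok {
		d.entries[i] = entry{} // tombstone; iteration must skip these
		delete(d.index, k)
	}
}
```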
The main thrust of your claim obviously can't be true and I'm not sure what confusion could lead you to believe that.
Maybe it's easier to see if we're explicit about what the rules are: OrderedDict (now the Python dict) has exactly the same features as a hypothetical UnorderedDict, except OrderedDict has the additional constraint that if we iterate over it we get the key/values in the order in which they were inserted, while UnorderedDict can do as it pleases here.
This means OrderedDict is a valid implementation of UnorderedDict. So, necessarily OrderedDict does not have, as you claim, "improved performance over an unordered dictionary". At the very worst it's break even and performance is identical. This is why it's remarkable that Python's previous dict was worse.
But that's a pretty degenerate case; we can also see that after deletion OrderedDict must spend some resources ensuring the ordering constraint is kept. An UnorderedDict needn't do that, so we can definitely do better than OrderedDict.
>The main thrust of your claim obviously can't be true
It's surprising that iterating a dense array is faster than iterating a hashmap? I don't think you are parsing the parent post correctly.
If dictionaries are commonly iterated in python, then iterating an array of 100 items that fits in one cache-line will be faster than iterating a hashmap which might have 100 items in 100 cache lines.
The claim was that: "Actually an ordered dictionary has improved performance over an unordered dictionary"
Having a dense array is not, as it seems both you and chippiewill imagine, somehow a unique property of ordered dictionaries. An unordered dictionary is free to use exactly the same implementation detail.
The choice to preserve order is in addition to using dense arrays. The OrderedDict must use tombstones to preserve ordering, and then periodically rewrite the entire dense array to remove them, while a hypothetical UnorderedDict needn't worry: it isn't trying to preserve ordering, so it will be faster here despite also having the dense arrays.
"iterating an array of 100 items that fits in one cache-line will be faster"
On today's hardware a cache line is 64 bytes, so fitting 100 "items" (each 3x 64-bit values, so typically total 2400 bytes with today's Python implementation) in a cache line would not be possible. A rather less impressive "almost three" items fit in a cache line.
To be sure, the dense array is faster for this operation. The problem is that that's not an optimisation which results from being ordered; it's just an implementation choice, and the UnorderedDict is free to make the same choice.
I enjoy and appreciate the discussion. Is this in the same neighborhood of why the reworked dict implementation in Python 3.6 had insertion order as an implementation detail, but explicitly claimed it was not a feature that should be relied upon? At least until Python 3.7 cemented the behavior as a feature.
The problem with Python here is that CPython is not only the reference implementation but the de-facto specification. So dicts are still "supposed to be" unordered collections, but now dicts must also preserve insertion order as per the docs and the reference implementation, so now all alternative implementations must also conform to this even if it doesn't make sense for them to conform to it, or they must specifically choose to be non-conformant on this point.
Of course in this case, the order-preserving optimization was actually first implemented by an alternative implementation (PyPy), but I don't think that changes the issue.
Right, but that's kind of my point. Adding it to the language spec now creates an additional and frankly somewhat unnecessary point of compliance for other implementations. Python is already so damn big and complicated; my opinion is that we shouldn't make its spec even more complicated, even if its reference implementation adds more features like this.
> Right, the startling thing about Python's previous dict was that it was so terrible that the ordered dict was actually significantly faster.
I've never heard that before and it would be really surprising, given that Python's builtin dict is used for everything from local symbol lookup to object field lookup. Do you have more information?
Note that this applies more to Python than to more efficient languages. In Python, objects are big and require an indirection. In faster languages, many objects can be smaller than a pointer and stored inline, so dictionaries with vectorized lookups can generally be made faster.
`select{..}` cases with multiple valid channel operations also select randomly.
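A minimal demonstration:

```go
package main

import "fmt"

func main() {
	a := make(chan string, 1)
	b := make(chan string, 1)
	a <- "a"
	b <- "b"
	// Both cases are ready, so the runtime picks one at random;
	// run this repeatedly and you'll see both outputs.
	select {
	case msg := <-a:
		fmt.Println(msg)
	case msg := <-b:
		fmt.Println(msg)
	}
}
```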
I really like it, it helps you discover (and fix) order-dependent logic WAY earlier. Though I would really like some way to influence how long it blocks before selecting one (to simulate high load scenarios, and trigger more logical races).
It seems as though a lot of people view it as hypocritical, e.g., generics for me but not for thee (dated example since there are now generics for everyone).
The fact that they needed to make a map a part of the language in order to allow it to be generic and statically-typed proves that generics are useful and should therefore have been a language feature much earlier than they became one.
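To illustrate the asymmetry that existed before Go 1.18 (a minimal sketch; the Stack type is hypothetical):

```go
package main

import "fmt"

// Before Go 1.18, only built-ins like map were generic and statically
// typed. A user-defined container had to fall back to interface{} and
// runtime type assertions.
type Stack struct{ items []interface{} }

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

func (s *Stack) Pop() interface{} {
	v := s.items[len(s.items)-1]
	s.items = s.items[:len(s.items)-1]
	return v
}

func main() {
	ages := map[string]int{"ada": 36} // built-in map: typed, no casts
	ages["grace"] = 45

	s := &Stack{}
	s.Push(1)
	n := s.Pop().(int) // user container: caller must assert the type
	fmt.Println(ages, n)
}
```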
There are a variety of things that the standard library or compiler deal with using weird workarounds that seem to indicate missing language features.
The thing is, the features are only "missing" if the language is designed to do the things those features permit. So the counterargument is that Go is a very opinionated language designed to solve a few classes of problem very easily, like writing database-backed web services, and the reason the standard library or compiler teams have to do weird hacks at times is because Go wasn't made for writing those things, and designing to make those use cases easy would pollute the language from the perspective of someone using it for its intended purpose.
It's really hard to take those arguments without a whole serving of salt, because the things it's ostensibly good at handling really aren't that much easier. Why is (un)marshalling data in a type-safe way so damn hard in Go? Why does doing the same thing over and over and over and over never get easier? (Because the language lacks high-level abstractions.)
I used Go extensively at my last job and I was left feeling that there were pretty much always better choices. If you care about developer velocity with less experienced engineers, Go is a bad choice for a multitude of reasons. If you're going to need the performance over something else, the JVM is right there, and so too is Rust.
It's fair to just say "Well it's not for that" if you're not a general purpose language.
Like it sure is hard to write a grammar checker in WUFFS. Well, it's not for that, it's a special purpose language, it doesn't even have strings, stop trying to write a grammar checker.
For a general purpose language this is a poor excuse. I think the best argument might be a desire to avoid the Turing Tar-pit where everything is possible but nothing is easy. C++ (since C++20) allows you to write a type that's generic over floating point values, so e.g. Foo<NaN>. What does that mean? Nothing useful. But they could, so they did.
In avoiding the Turing Tar-pit you must make some choices which weren't necessary but were, in your opinion (as Benevolent dictator, Steering Committee, Riotous Assembly Of Interested People, or whatever) more aesthetic, more practical for some particular purpose you had in mind, easier to implement or whatever.
My impression with Go, which I spent a few years programming in but never really loved, was that its main value was in being surprisingly efficient for that type of language. Startup time in particular is good, which would matter for gojq, for example.
Nobody forces golang programmers to use the built-in map.
Also, some people would argue any map is unsuitable for many use cases of jq. If you want to keep the keys in the order of the input file, it certainly isn’t.
And yes, formally, JSON doesn't change when you reorder keys, but JSON is often treated as text, and then it does. You can use jq, for example, to do some transforms on a JSON file and get a nice git diff.
This tool may (or may not) produce a diff that’s much larger than necessary.
No, it isn’t. “gojq does not keep the order of object keys” isn’t about ordering keys consistently across runs, it’s about keeping them in the order of the input file.
Which it can't do because, as mentioned, Go randomly iterates over maps. That's the data structure that most would use to load arbitrary input files into the program.
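For what it's worth, a minimal sketch of the failure mode:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	input := []byte(`{"id": 1, "name": "x", "tags": []}`)

	var obj map[string]interface{}
	if err := json.Unmarshal(input, &obj); err != nil {
		panic(err)
	}
	// The original key order is gone the moment the data lands in a map;
	// re-encoding sorts keys alphabetically (encoding/json's behaviour
	// for maps), not in input order.
	out, _ := json.Marshal(obj)
	fmt.Println(string(out))
}
```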
If you have a hammer in your toolbox, it doesn't mean you have to use it for every job. If Go's maps don't do what you want, pick a different data structure.
This is like claiming that, because updating native integers isn’t guaranteed to be atomic in a language, you can’t do multi-threaded programming.
> gojq does not keep the order of object keys. I understand this might cause problems for some scripts but basically, we should not rely on the order of object keys. Due to this limitation, gojq does not have keys_unsorted function and --sort-keys (-S) option. I would implement when ordered map is implemented in the standard library of Go but I'm less motivated.
And later in the same file:
gojq does not support some functions intentionally;
<snip>
--sort-keys, -S (sorts by default because map[string]interface{} does not keep the order),
> "gojq does not keep the order of object keys" is a bit disappointing
with
> “I bet it's an artifact of Go having a randomized iteration order over maps. Getting a deterministic ordering requires extra work.”
But deterministic iteration order doesn't imply that the order of keys is kept the same. There are map implementations that make iteration follow insertion order, but the canonical map does not guarantee that. https://en.wikipedia.org/wiki/Associative_array#Hash_table_i...:
“The most frequently used general purpose implementation of an associative array is with a hash table: an array combined with a hash function that separates each key into a separate "bucket" of the array“
Such implementations iterate over maps in order of hash value (and hash collisions may or may not follow (reverse) insertion order)
I don't think the distinction you're trying to make is helpful here if I've understood you correctly. A good faith interpretation of haastad's comment would be that they were thinking of "insertion order" when they said "a deterministic ordering". Even if we were being pedantic, their comment is still correct - for iteration order to be the same as input order then deterministic iteration ordering isn't sufficient (this seems to be the point you're making) but it is necessary.
Their first sentence:
> I bet it's an artifact of Go having a randomized iteration order over maps
is correct per Gojq's author [0]:
> gojq cannot implement keys_unsorted function because it uses map[string]interface{} to represent JSON object so it does not keep the order. Currently there is no plan to implement ordered map so I do not implement this function.
It would of course be possible to work around this limitation of Go's built-in map type but that's not the point. The author makes it clear that this limitation is the cause for Gojq's behaviour.
Yeah, this is a deal breaker. While technically the key order doesn’t matter, in the real world it really does matter. People have to read this stuff. People have to be able to differentiate between actual changes and stuff moving around just because. Luckily it’s a solved problem and you can write marshalers that preserve order, but it’s extra work and generally specific to an encoding format. It would be nice to have ordered maps in the base library as an option.
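A minimal sketch of that idea in Go (the OrderedMap type is hypothetical), using the json.Marshaler interface to emit keys in insertion order:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// OrderedMap is a toy order-preserving JSON object: keys and values are
// kept in parallel slices in insertion order. (No deduplication of keys;
// a real implementation would handle overwrites.)
type OrderedMap struct {
	keys   []string
	values []interface{}
}

func (m *OrderedMap) Set(k string, v interface{}) {
	m.keys = append(m.keys, k)
	m.values = append(m.values, v)
}

// MarshalJSON writes the pairs in insertion order instead of letting
// encoding/json sort them alphabetically as it does for plain maps.
func (m *OrderedMap) MarshalJSON() ([]byte, error) {
	var buf bytes.Buffer
	buf.WriteByte('{')
	for i, k := range m.keys {
		if i > 0 {
			buf.WriteByte(',')
		}
		kb, err := json.Marshal(k)
		if err != nil {
			return nil, err
		}
		vb, err := json.Marshal(m.values[i])
		if err != nil {
			return nil, err
		}
		buf.Write(kb)
		buf.WriteByte(':')
		buf.Write(vb)
	}
	buf.WriteByte('}')
	return buf.Bytes(), nil
}

func main() {
	m := &OrderedMap{}
	m.Set("id", 1)
	m.Set("name", "x")
	out, _ := json.Marshal(m)
	fmt.Println(string(out)) // {"id":1,"name":"x"}
}
```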
This. For one project, I even wrote a tool to reorder keys into a specific order. And of course there's no technical reason for this. But I used JSON here for the human readability, so that non-technical people have the best chance to understand and change the data. For that, starting with id and name at the top matters more than leading with a huge array of data.
Ordered keys in JSON are not only for cosmetic reasons. If this ever touches disk, you want the ability to diff files or stash them in git without the whole file changing with every update.