This is a nice package, and a great illustration of how languages other than R suffer from the lack of an aesthetically elegant way to select list elements with bare words, like R's $ operator.
Because their lists don't have selection by bare words, they have to reach for one of several other specialized, distinct, built-in Abstract Data Types to get it. They have to create whole so-called "Classes" and "Modules", when all they really needed was a list whose elements can be accessed with a dot and a bare word.
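A minimal sketch of that workaround in Python (standard library only, nothing pandas-specific): a plain dict only gives you bracket-and-string access, so to get dot-plus-bare-word access you end up wrapping things in a class-like object.

    # Plain dict: access needs brackets and a quoted string
    d = {"price": 10.5, "qty": 3}
    print(d["price"])

    # To get dot + bare word, you reach for a class or namespace type
    from types import SimpleNamespace
    ns = SimpleNamespace(price=10.5, qty=3)
    print(ns.price)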
The pandas package for tabular data manipulation requires even more complicated workarounds. It has a DataFrame Class whose columns are objects of a Series Class. Then it takes an arbitrary bunch of common functions, so common that many are built into Python itself, and makes them Methods of those Series. (In R, a table is just a list of vectors, and no Methods are needed.)
So now you've got a thing that's supposedly a real Class, but it's really just a container of completely arbitrary fields and data types. These fields are themselves instances of another Class that is supposedly specific to pandas, but is really just a vector, and a vector doesn't necessarily have anything to do with being part of a table. And that Class has some random methods that give you additional ways to do basic things the language already does, and are often not the functions you actually need to work with the data therein.
All that just so that we can write stuff like df.col.max(), and... gosh, what is that even supposed to mean? Can we all just admit that we like writing code in chains separated by dots, and stop tying that capability to hierarchies of Official Abstract Data Types?
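For the record, here's the kind of duplication being complained about, as a small sketch (assuming pandas is installed): the same "largest value in a column" can be spelled as a Series method or with Python's own built-in max().

    import pandas as pd

    df = pd.DataFrame({"col": [3, 1, 4, 1, 5]})

    print(df.col.max())     # attribute access to the Series, then a method
    print(df["col"].max())  # same thing, spelled with brackets
    print(max(df["col"]))   # the built-in that the method largely duplicates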
These non-R languages make you utter such strange incantations just to put something in a key-value container and access that thing with nice-looking code. I feel like all that ceremony makes it harder to realize that, very often, a plain key-value container is the best way of doing things.
R's syntax is a bit more varied, and sometimes mildly ugly, compared to other languages, but once you get used to the building blocks it gives you, it has all these powers to do very dynamic things in very easy ways, without a bunch of ponderous specialized concepts.
What do you mean? Many languages allow accessing named properties like that. Even JavaScript :)
The strange thing here seems to be R’s use of “list” as a name for a map-like key-value structure. The word “list” is commonly understood to refer to a data structure which needs to be linearly (linked list) or partially (skiplist) iterated through to access a value at a particular index.
Total nitpick - you say a list is commonly understood to be linearly iterated. I’d expect a list to refer to an ordered sequence - the default implementation of access and mutation varies wildly between languages. E.g. Java code usually defaults to ArrayList, Lisps to cons cells, C++’s std::list is a doubly linked list, etc.
SQL has “tuples” for the rows of a result set, which are neither tuples nor lists in the “general sense” but are of a “record” type - names with values.
I guess I don't know enough about enough other languages to make broad generalizations. Oh well, it's too late to edit now.
My impression is that JavaScript is another language like R that values flexibility a lot.
And yeah, I agree that R is rather casual about lists vs maps. It doesn't really care that maps are a great data structure in their own right. It just wants to slap names on list elements when it's convenient to access elements of the list by name.
It scares me that people think like this. Not only with respect to AI but in general, when it comes to other life forms, people seem to prefer to err on the side of convenience. The fact that cows could be experiencing something very similar to ourselves should send shivers down our spine. The same argument goes for future AGI.
I find it strange that people believe cows and other sentient animals don’t experience something extremely similar to what we do.
Evolution means we all have common ancestors and are different branches of the same development tree.
So if we have sentience and they have sentience (which science keeps recognizing, belatedly, that non-human animals do), shouldn’t the default presumption be that our experiences are similar? Or at the very least that their experience is similar to a human’s at an earlier stage of development, like a 2 year old’s?
Which is also an interesting case study, given that out of convenience humans also believed that babies weren’t sentient and felt no pain, and so until not that long ago our society would conduct all sorts of surgical procedures on them without any sort of pain relief (circumcision being the most obvious).
It’s probably time we accept our fellow animals’ sentience and act on the obvious ethical implications of that, instead of conveniently ignoring it like we did with little kids until recently.
This crowd would sooner believe silicon hardware (an arbitrary human invention from the 50s-60s) will have the physical properties required for consciousness than accept that they participate in torturing literally a hundred billion conscious animals every year.
I’m actually a vegan because I believe cows have consciousness. I believe consciousness is the only trait worth considering when weighing moral questions. Arbitrary hardware is conscious.
In general, I found that starting with an Erlang/Elixir framework tutorial helps. Phoenix includes a generic wrapper on top of PostgreSQL (Ecto provides data mapping and language-integrated query), and I hit a surprising number of users per host with trivial code (a common game-engine back-end).
The only foot-gun I would initially avoid is a fussy fault-tolerant multi-host cluster deployment. Check out the RabbitMQ package maintainers, as they certainly offer a fantastic resource for students ( https://www.rabbitmq.com/docs/which-erlang ).
Isn't the issue that the historical data is consistent with the overturning model, which adds weight to our assumption that it has previously been stable? These new measurements (and observations?) are consistent with the overturning circulation weakening.
I agree with you in principle - my impression (perhaps wrong - not an expert) was that there are additional data points supporting the idea of a consistent status quo.
Is this something you have implemented in practice? Sounds like a great idea, but I have no idea how you would make it work in a structured way (or am I missing the point…?)
Can be easy depending on your setup - you can basically just write high level functional tests matching use cases of your API, but as prompts to a system with some sort of tool access, ideally MCP. You want to see those tests pass, but you want them to pass with the simplest possible prompt (a sort of regularization penalty, if you like). You can mutate the prompts using an LLM if you like to try different/shorter phrasings. The Pareto front of passing tests and prompt size/complexity is (arguably) how good a job you're doing structuring/documenting your API.
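A rough sketch of what one of those prompt-as-test cases could look like; run_agent() stands in for whatever LLM-with-tool-access harness (MCP or otherwise) you're using, and every name below is a hypothetical placeholder, not a real library.

    # Hypothetical sketch: a functional test expressed as a prompt, scored on
    # both whether it passed and how long the prompt had to be.
    def run_agent(prompt: str) -> dict:
        """Placeholder: send the prompt to an LLM with tool access to your
        API and return the observable result (e.g. the created resource)."""
        raise NotImplementedError

    def test_create_invoice():
        prompt = "Create an invoice for customer 42 with two line items."
        result = run_agent(prompt)
        passed = (result.get("type") == "invoice"
                  and len(result.get("lines", [])) == 2)
        # The "regularization penalty": shorter passing prompts suggest a
        # more discoverable, better documented API.
        return passed, len(prompt.split())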