It took me quite a while to understand the actual role of `_` here, and the description is quite unclear if you are not into this kind of esolang.
What's going on here is that there are actually two operations: one is spelled `_`, and the other is spelled as every other character. The program starts with an empty array of trees (what is called the "current branch" here); any character other than `_` appends to that array, while `_` takes the last element and does something with it accordingly. And that's a common theme in light-hearted esolangs: a single operation encodes a lot of semantics, so that there are in fact multiple super-operations described by that one operation. It is much harder to design a minimal esolang without such obvious super-operation structures.
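The two-operation model can be sketched in a few lines. This is only an illustration of the append-vs-execute split described above; the real semantics of `_` (what it does with the popped tree) are far richer, so `execute` here is a hypothetical placeholder.

```python
# Minimal sketch of the two-operation model: every character appends a
# leaf to the current branch, except `_`, which pops the last tree and
# dispatches on it. The body of `execute` is a placeholder, not the
# language's real semantics.

def execute(node, branch):
    # Placeholder: the real `_` inspects the popped tree and performs
    # one of many "super-operations" encoded in its shape.
    branch.append(("executed", node))

def run(program: str):
    branch = []  # the "current branch": an array of trees
    for ch in program:
        if ch == "_":
            node = branch.pop()    # take the last element...
            execute(node, branch)  # ...and do something with it
        else:
            branch.append(ch)      # any other character is appended as a leaf
    return branch
```

Note that in this model the number of `_` dispatches is bounded by the program length, which is exactly the Turing-completeness concern raised below.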
One thing less obvious from the description is Turing completeness. I think the language is not Turing complete, because actual computation is only possible via `_`, yet the number of `_` operations is fixed by the initial program. If `_` could be executed indefinitely, it would be relatively easy to construct a program that replicates itself (cf. quine) into the current branch, so that `_` keeps running the replicated program over and over. Alternatively, `_` could have a super-operation that executes `_` multiple times, for example by interpreting a particular branch as a program [1]. But as it stands, the language doesn't seem to have any sort of looping. A simple fix would be to assume an infinite number of `_` operations following the input program.
Indeed: it looks like on each iteration of `execute()`, exactly one leaf node is popped from `left`, no more and no less. Since nothing can add more nodes to `left` after the program is first loaded, there's no way to call `execute()` more times than the total length of the program.
Another fix I believe sufficient (and perhaps a bit more interesting) would be an operation to exchange `left` and `right`, so that the program is replaced with the data, and vice versa.
I know that this is mostly for fun, and it looks cool. But on a serious note, as someone who's dabbled in J, in stack languages (Forth and Factor), and in tacit programming in Haskell, I'll say that naming is an awesome way of making programs more readable.
Trying hard to avoid names (by, say, eta reduction in Haskell) is a fun exercise that rarely leads to more understandable programs.
I think it is not just "a way" of making programs readable; it is the main readability factor. Good names make a program readable, and bad names do the opposite.
Almost all time spent improving names is worth it. The only exception might be one-off scripts.
Completely agree. Recently, while reading another engineer's code with egregiously bad names, I had the following thought:
Writing software is like writing poetry.
Finding the words that perfectly capture and express the essence of your thinking and the underlying model or theory behind your code is so important. In software as in poetry, one is always on the lookout for that "mot juste".
I did not fully appreciate this until I had to edit a script written by a non-English speaker. It was absolutely incomprehensible until I ran all the variable names through Google translate. After that, piece of cake.
Awesome effort, but I also kind of hate stuff like this. I just feel like there are more constructive ways to waste time. But I guess everyone has a right to waste their time however they want.
Still tempted to use the flag link though. So maybe that's actually a compliment to the author in a weird way.
> I just feel like there are more constructive ways to waste time
From my own experience, working on "silly" projects like this, with severe artificial constraints for example, can be quite interesting and challenging and lead to lessons learned that can be applied to real projects later.
So for the author, it might have been quite constructive indeed.
I see the more creative esolangs like this as beautiful and strange to think and code in; a formal play on language design. Not everything needs to be practical or a learning experience.
Yeah, this always struck me as an anti-feature of Unison, and having listened to some podcasts and read some blog posts of the creators explaining the wonderful things it enables hasn't really changed my opinion.
Decoupling and abstraction via naming is fundamental to both computation and data storage.
> Decoupling and abstraction via naming is fundamental to both computation and data storage.
While I don't expect Unison to become the next big thing(TM), that's irrelevant to what Unison claims, in my understanding. You can always name any term, as in most other languages, and in fact you can also have a nominal type (cf. `unique type`). So you can distinguish multiple terms with otherwise structurally equivalent representations, and use names to refer to them.
The obvious follow-up question is: what is different, then? In conventional languages, when you define a function `f(x, y) = x + y`, `f` is engraved into some sort of namespace, like the symbol table of a static or shared library file. That obviously implies the possibility of collision, and a common remedy is to have multiple or nested namespaces to separate names, but that doesn't fully fix the issue because such separation is not automatic. In Unison everything is first automatically named according to a hash of its implementation, and can then be aliased to other names. So collisions are impossible by default. Unison still has a type system, so I believe decoupling is still possible via multiple terms with the same type but different implementations (and thus different hashes).
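The hash-then-alias idea can be illustrated with a toy content-addressed store. This is not Unison's actual algorithm (Unison hashes a normalized AST, not source text, and its hashes look different); it only shows why name collisions can't happen when the canonical key is derived from the implementation itself.

```python
# Toy content-addressed codebase: terms are keyed by a hash of their
# implementation, and human-readable names are merely aliases on top.
# This is an illustration of the idea, not Unison's real scheme
# (which hashes a normalized syntax tree, not raw source).
import hashlib

codebase = {}  # hash -> implementation source
aliases = {}   # human name -> hash

def add_term(source: str) -> str:
    """Store a term under the hash of its implementation and return the hash."""
    h = hashlib.sha256(source.encode()).hexdigest()[:12]
    codebase[h] = source
    return h

def alias(name: str, h: str) -> None:
    # Many names may point at one hash; the canonical identity never clashes.
    aliases[name] = h

h1 = add_term("f x y = x + y")
alias("add", h1)
alias("plus", h1)  # aliasing the same term under a second name is fine
```

Two structurally identical definitions hash to the same key and are stored once; two different implementations can never be confused, no matter what names people attach to them.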
If that's your definition, I believe no one actually follows it, because there are many ways for abstractions to leak their implementation details, which might not be fully captured by interfaces. A good example is performance. If you need a fast enough implementation behind the same interface, you are not doing abstraction by that definition, because "fast enough" is an implicit coupling factor.
More commonly, and in my opinion, more reasonably, any abstraction involves a trade-off. Some abstractions can be quite leaky but darn easy to use. Others may be less leaky but difficult to use (or misuse, depending on your view). There is no non-leaky abstraction here. Once you've chosen a trade-off and thus abstraction, you are still not entirely free from coupling for the aforementioned reason, but can enjoy some level of freedom thanks to the trade-off made. Unison is no different from other conventional languages in this sense.
Of course, programming languages without names of any kind are not new.
Like this one, they tend to be esoteric.
There's the Turing Machine inspired Brainfuck, with its 8-symbol alphabet `<>+-[].,`
Some are based on Combinatory Logic, like Lazy K and Unlambda,
and others on Lambda Calculus with de Bruijn indices, like binary lambda calculus and λDNA.
And several others, like this one, are stack based.
Not really a programming language because there are no loops. Just a syntax for expressions that are evaluated using the stack of trees.
It goes to unnecessary lengths to avoid naming the output files themselves.
Also, there's a significant cop-out: the language allows you to operate on explicitly named files.
This makes it possible to have named "variables" by writing whatever you need to keep into named files.
The stack of trees is interesting.
Also, using `_` to execute the top thing on the stack is a cool concept that avoids the need for quoting anything, because nothing executes by default. Although you still need to quote `_` itself as `U_` if you don't want it to execute, and the language doesn't have subroutines yet, so to make those sensible some quoting will probably still be necessary.
There's also a weirdness where strings are always compared as if they were numbers. At least according to the docs.
This could easily be designed to avoid the superfluous `_` dud in the syntax.
It's like having to use an explicit call operator for everything: `(call + (call * 2 2) (call * 3 3))`. You do that for applicators other than call, like map or apply. Call is special, so you don't give it a name; it is named by the absence of a name.
You want the actual operators like ^ and H to be implicitly active. When you want ^ and H to just be the literal characters themselves, that's where some quoting syntax should be applied.
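The explicit-call style in the comment above can be transliterated directly. The `call` helper here is hypothetical, introduced only to show how verbose everything becomes when application itself must be spelled out rather than being "named by the absence of a name".

```python
# Hypothetical illustration of an explicit call operator: every function
# application is spelled out, instead of being implicit in the syntax.
import operator

def call(f, *args):
    """Explicitly apply f to args; what most languages do implicitly."""
    return f(*args)

# (call + (call * 2 2) (call * 3 3)) from the comment, transliterated:
result = call(operator.add, call(operator.mul, 2, 2), call(operator.mul, 3, 3))
# result == 13, i.e. 2*2 + 3*3
```

The ergonomic point is that the common case (application) deserves the shortest spelling, which is exactly why making every execution go through an explicit `_` feels like a dud.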
I used to encounter a lot more esoteric languages than I do now (at one point I made a working Befunge compiler, which was considered impressive at the time). I don't know if I've just aged out of it, or if it's gone out of fashion, or what. But that's a bit of whimsy I definitely miss from the 90s and early 00s.
[1] This sort of iteration is not very uncommon in esolangs. See, for example: https://esolangs.org/wiki/Muriel