
> abstruse and obscurantist syntax/glyphs/typing

I've often wondered what the real blockers are to having it both ways: save the source as some lowest common denominator - an IL, a tagged AST, or some such - and then read and write it in whatever form pleases the individual. I suppose the real reason is that it would be hard to make it work between languages with drastically different semantics that don't map to anything in the other language... but I always come back to the fact that everything makes it down to machine code eventually, so it seems intuitive that the process could be reversed.

Or less drastically, an IDE for APL beginners: symbols have human language helpers or UI overlays, but the source is still APL.




The concise notation of vector languages is not just an affectation; it's a crucial property. Obscuring the notation with long names and "more familiar" syntax diminishes its value. Translating it into something "easier" leaves beginners unable to communicate clearly with APL experts or engage with resources from the broader community.

Imagine suggesting to a Prolog programmer that they make the language more accessible to beginners with tools that dynamically translate it into the equivalent Fortran, which is, after all, a Turing-complete language...


I'd need some stronger evidence to believe the claim that in-place translation of individual glyphs into words is equivalent to translating between languages with wildly different paradigms.



Indeed, here's the reference manual for Kx Systems' k 2.0 [1].

One thing you'll notice is that most of the operations are given names, because trying to communicate about @[d; i; f; y] doesn't really work, while "Amend Item" does. I'd equally need some very compelling evidence that @[d; i; f; y] is inherently superior - to anyone but diehard APL/J/K fans - to d.amend_item(i, f, y), or, if we were to translate it into a syntax more acceptable to others, more likely something like d[i] = f y (I haven't bothered to figure out the exact semantics of Amend Item; I just picked an entry from the manual).

Most of it can be decomposed and expanded fairly straightforwardly. E.g. after a lengthy back and forth a few days ago, it turns out that ",/+|(8#2)\textb", which someone offered up, translates roughly into a more familiar OO syntax as:

    textb.encode(8.reshape(2)).reverse.transpose.concat
... where the unfamiliar parts to most of us would be 1) a standard library where operations we usually expect to apply to an array instead apply to each item - so "reverse" here is not "reverse the array" but "reverse each item in the array", and likewise for the rest - and 2) the two operations without "reasonable" everyday definitions: "reshape", which in this case (two int arguments) just creates an array of 8 elements with the value 2, and "encode", which repeatedly takes a modulus from one array, applies it to each element of the other array, passes the quotient forward, and repeats until the "modulus array" has been processed. Using (8#2), which translates to [2,2,2,2,2,2,2,2], as the left argument basically decomposes bytes into their constituent bits, but you can also do e.g. (24 60 60)\somenumberofseconds to "encode" a number of seconds into hours, minutes and seconds, which might give a better idea of how it works.
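As a very rough sketch of those last two in Python (names, argument order, and the scalar-only behaviour are my own approximations of the description above, not k's actual definitions):

    # Rough Python approximations of the two k operations described above;
    # k's real versions also map over arrays item-by-item, this is the scalar case.

    def reshape(n, value):
        # (8#2) in k: an array of 8 copies of the value 2
        return [value] * n

    def encode(moduli, x):
        # "encode" (\): repeatedly take the modulus, pass the quotient along,
        # working from the right of the modulus array
        digits = []
        for m in reversed(moduli):
            x, r = divmod(x, m)
            digits.append(r)
        return list(reversed(digits))

    print(encode(reshape(8, 2), 178))   # -> [1, 0, 1, 1, 0, 0, 1, 0], the bits of 178
    print(encode([24, 60, 60], 3723))   # -> [1, 2, 3], i.e. 3723 seconds is 1h 2m 3s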

I'd read the "decoded" version over the terse one any day. I think a "midpoint" that introduced some of these operators in a more "regular" language might well be able to remain readable, but with too much density I can't scan code and get a rough idea of what it does any more, and that, to me, is a hard no.

Because while some aspects of thinking in terms of arrays this way are also very foreign, the syntax is frankly more than half the battle.

[1] https://www.nsl.com/k/k2/k295/kreflite.pdf

EDIT: Here are some of my notes from trying to figure the above out, and some horrific abuse of Ruby to modify some of the core classes to make something similar-ish to the above actually work:

https://gist.github.com/vidarh/3cd1e200458758f3d58c88add0581...


Trying to communicate about @[d; i; f; y] works once you know what that does. Like with any mathematical notation, it's given a name just like '+' has a name (plus).

I don't know if it's "inherently" superior (everything is subjective preference), but @[d;i;f;y] seems more amenable to algebraic manipulation. Also, the fact that it's a function taking four arguments rather than a property of d means that, along with k's projections (similar to currying in functional languages), you can leave any of the four arguments empty to create a curried function.

For example, you could do something like @[d;;:;x]'(a; b), which will replace the items at indices a and b with x. Or any number of other possibilities. Compare to Python, where to cover all possible behaviour you'll need various lambdas + for + zip + map + ...
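Roughly, in Python terms (my own sketch, with approximate amend semantics; not a claim about k's exact behaviour):

    # A loose sketch of leaving an argument of a 4-argument amend "empty";
    # the semantics of amend here are my approximation of @[d;i;f;y].
    from functools import partial

    def amend(d, i, f, y):
        # return a copy of d with d[i] replaced by f(d[i], y)
        out = list(d)
        out[i] = f(out[i], y)
        return out

    assign = lambda _old, new: new                 # standing in for k's ":" (assignment)

    d = [10, 20, 30, 40]
    set_to_x = partial(amend, d, f=assign, y=99)   # index slot left open, like @[d;;:;x]
    print([set_to_x(i) for i in (1, 3)])           # applied to each index, like '(a;b)
    # note: this yields two separately amended copies, which may differ from
    # the amend-at-both-indices behaviour described above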


So that to me seems separate from the minimalist syntax. The problem isn't the flexibility, but the sheer combined density. The problem isn't even necessarily the odd extra operator, but that there are so many of them, and that they are so heavily overloaded.

I don't think the projections example is possible to reproduce in an ergonomic way in e.g. Ruby or Python, but you could certainly imagine a syntax extension to allow something similar. Since a closure in Ruby is just another object, capturing arguments and returning a modified closure is trivial, but the syntax won't allow leaving out arbitrary unnamed parameters, and so you'd need an ugly placeholder value; amending the syntax to allow e.g. "f(x,,z)", with the omitted value in between set to some placeholder object, wouldn't break anything, and might well be useful. There are certainly parts like that in k that seem valuable.
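In Python terms (which has the same limitation), the placeholder approach might look roughly like this - the _ sentinel and the project helper are made up for illustration, not an existing API:

    # Sketch of projection-with-a-placeholder, the "ugly placeholder value"
    # approach mentioned above; the _ sentinel and project() are hypothetical.
    _ = object()   # placeholder marking an argument slot left empty

    def project(f, *fixed):
        # return a function whose arguments fill the placeholder slots in `fixed`
        def filled(*late):
            it = iter(late)
            return f(*(next(it) if a is _ else a for a in fixed))
        return filled

    inc_middle = project(lambda x, y, z: (x, y + 1, z), 0, _, 9)  # like f(0, , 9)
    print(inc_middle(5))   # -> (0, 6, 9)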

The dot syntax vs. a more functional syntax I think is purely subjective - all of us understand both; I just happen to prefer an OO syntax. d.amend(i, f, y) or amend(d, i, f, y) isn't what'd break us. Nor the "@", if it were one of a smaller number...


There aren't really that many, not when you compare it to the total number of Python/Ruby functions that would be roughly equivalent. (And that's ignoring the fact that a lot of k functions operate by default on matrices and other nested structures in ways that Python/Ruby/etc. can't really do at all.)

Yes, it's dense, but that's by design. And in a vacuum it may seem like replacing @ with amend wouldn't make much of a difference, which is true. But replacing every character with a word (which is tricky, because it's hard to define things precisely and some characters have multiple meanings) would turn a line of k into a page of prose, losing the mathematical nature that was the point in the first place.


The nice thing about the "decoded" syntax is that it's extensible - you can design your own operations like "encode" and "reshape". But if you don't care about that, then a terse syntax allows you to read more code at a glance once you've become familiar with it. (Of course APL-like languages have other pitfalls, such as most operators being overloaded based on arity, such that meaning can change completely depending on the context. This is barely acceptable for a "terse" syntax where every operation has to be denoted by a single character but becomes quite silly otherwise.)


Yeah, I experimented with Ruby for it, and frankly I'm not averse to golfing it down a bit, but I find there is a big gulf between people like me - who treat text (whether code or not) as something to skim, find elements of to study in isolation, "zoom in and out" of, and navigate by shape - and those who read it linearly.

I'm extremely picky about syntax and formatting because of how I process code: even layout and formatting matter for how I digest and work with code, and my recall of code is often "visual", in that I remember the shape of different parts of the code I work with.

APL/J/K etc. throw off everything about that for me. I need to sit down and decode symbol by symbol. I get that I'll get faster at recognising them the more I try, but it's too visually compact for me.

On the other end of the spectrum you have at least a subset of users of these languages who see code as something to read symbol by symbol beginning to end with perhaps some referencing back. I get that if you treat code that way, then it's nice for it to be tiny and compact, and it doesn't matter as much whether or not the layout is conducive to skimming and jumping around if that's not how you process code.

The same groups, to me, also seem to correlate with a preference for language vs. maths, though maybe I'm just biased because I'd firmly be in the language camp.

I'm not sure how reconcilable those two ways of reading code really are.

On the other hand, I also deeply care about being able to see the important bits of the code at the same time, so I tend to spend a lot of time decomposing my projects into small pieces and focus on size more than many "in the language camp" do. I guess that's part of the reason why these languages still fascinate me, though I'd have to scratch my eyes out if I had to read code in them on a daily basis.


Ah, that makes sense. Okay, you've convinced me my original idea doesn't make much sense after all!


> Or less drastically, an IDE for APL beginners: symbols have human language helpers or UI overlays

Dyalog, which maintains a commercial APL, provides a free development environment which does exactly what you want[0]. You can hover over all of the available glyphs for documentation.

It’s a really fun development environment in that it does something I’ve never really experienced in any other: you can modify prior inputs to experiment with new ideas.

Playing with APL (it often does feel like play, or even sculpting with clay) is really fun. Be careful, you might get attached!

0: https://www.dyalog.com/


> you can modify prior inputs to experiment with new ideas

having trouble understanding what is meant. like, changing things like in excel?


Try that experience at https://tryapl.org

Enter 1+2 where the cursor is below the copyright notice.

Now click or use arrow keys to navigate up and change the 2 to a 4 and hit Enter again.


This thing of having a screen editor where you could move the cursor and execute any command, instead of a linear terminal, was a staple of 1970s and 1980s computers. Oberon is also notable for letting you execute code written anywhere (and so allowing code to be used as command palettes or menus).


i mean that's cool, but from my background i would say since the input is "variable", assign it to a named variable and run the function with the variable assigned to 2, then again with it assigned to 4. that's doable in most any REPL.

or even take the function, and map a range of values to the function, to get a range of outputs. again, doable in most REPLs where the language supports some sort of mapping syntax.

can you help me understand what makes this unique? i'm not an APL user, so maybe it's just a different mindset altogether that i'm missing.


APL was one of the first interactive environments, along with Lisp. It had this back when you had a typewriter with a rotating ball carrying the symbols, connected to the time-sharing computer. So you'd type an expression on the typewriter, the computer would run it, and the result would be printed on actual paper, lol.

With APL you can also do the stuff you're referring to by creating a function and mapping it to a list of numbers. If you're familiar with REPLs, the experience is similar.


You know how in a typical REPL, you’ll make use of the up arrow to repeat a previously executed expression?

Dyalog allows you to click on the previously printed expression, modify it in place and when you hit enter, it’ll evaluate as if entered on the read line.

It’s a simple little feature but it makes everything feel so malleable.


You beat me to saying the same thing lol. Their IDE is pretty helpful.


> "I've often wondered what are the real blocks to having it both ways?"

That it's not particularly useful. Here's a classic APL: {(~T∊T∘.×T)/T←1↓⍳⍵}

And in words: direct function (not T member T jot dot multiply T) compress T gets one drop iota omega.

Or in Pythonic words: lambda w: [n for n in range(2, w) if n not in [i * j for i in range(2, w) for j in range(2, w)]]

That is, the APL in Englishy words is no clearer. The Python is mostly clear because it's familiar (though there are Python programmers who don't know or understand list comprehensions). And isn't it dull to have to type out the "for in range if n not in for in range for in range" and a lambda and two list comprehensions, just to say "numbers which aren't in the multiplication table of those numbers and themselves"? Wouldn't it be clear enough if you could write something more like a hybrid, with JavaScript- or C#-style anonymous functions with curly braces?

    lambda w: [n for n in range(2, w) if n not in [i * j for i in range(2, w) for j in range(2, w)]]

    w=>{nums=range(w).Drop(1); nums.where(n=>n not in outer_product(__op__.mul, nums, nums))}

    w=>{T=range(w).Drop(1); T.where(n=>n not in outer_product(__op__.mul, T, T))}   ⍝ no idea why the original APL code used T

    w=>{T=range(w).Drop(1); T.where(n=>n not in T ∘.× T)}  ⍝ outer product as a symbol with normal multiply symbol

    w=>{T=iota(w).↓(1); T.where(n=>n not in T ∘.× T)}    ⍝ drop as a symbol ↓

    w=>{T=1↓⍳w; T.where(n=>n not in T ∘.× T)}      ⍝ range as a symbol (iota ⍳)

    w=>{T=1↓⍳w; (T not in T ∘.× T)/T}      ⍝ where as a symbol (compress /)

    w=>{T=1↓⍳w; (~ T ∊ T ∘.× T)/T}    ⍝ not ~ and in ∊ as symbols

    w=>{T←1↓⍳w ⋄ (~ T ∊ T ∘.× T)/T}   ⍝ APL statement separator ⋄

    {T←1↓⍳⍵ ⋄ (~ T∊T∘.×T)/T}     ⍝ implicit lambda argument as a symbol (omega ⍵)

    {(~T∊T∘.×T)/T←1↓⍳⍵}      ⍝  inlined two statements into one, reads right to left

Presumably you're fine with symbols in Python like the colon when introducing functions, brackets for lists, asterisk for multiplication, parens and commas for tuples and function arguments, equals for assignment. You're likely fine with ! for Boolean 'not' in C-likes. Once you have symbols for range, outer product, filter, in, and drop, you start to realise how annoying it is to type them out as words. We shorten the things we do often, or make them implicit. And we often do loops and data transforms when programming. It isn't the symbols which are the hard bit, and making them words doesn't make them usefully easier.


The Dyalog IDE lets you hover over each symbol and get an English name, description, and example of it being used monadically and dyadically iirc. The former is like using the symbol as a prefix to an array input and the latter in between two arrays. It doesn't take too long before a lot of symbols just click. It's easier to learn than you think.



