Arthur Whitney's one liner sudoku solver (2011) (dyalog.com)
282 points by secwang 83 days ago | 195 comments



Here is the line. It is written in K, a language created by the same person (Arthur Whitney) and based on APL and Scheme.

  x(,/{@[x;y;]'(!10)^x*|/p[;y]=p,:,3/:-3!p:!9 9}')/&~*x


I'll sometimes gauge code complexity by comparing the number of lines of code against the output of

  tar -cf - . | gzip | base64 | wc -l
IE "how much does it compress?"

Looking at APL -- I'm reminded of what happens if I accidentally send the gzipped output to my tty...

I'm impressed that there's anyone who can follow along (can you find the bug?) with code like

p←{(↑⍵)∘{(⍺∨.=⍵)/⍳n×n∘}¨,⍵},(n*÷2){⍵,⍺⊥⌊⍵÷⍺}'⍳n n←⍴⍵

It really feels like compressed binary data where everyone's got a copy of the dictionary already...


Legitimately curious how APL programmers think about maintainability and readability. Is code just thoroughly commented or otherwise documented?


The most uncompromisingly APL-ish code I've written is the BQN compiler[0]. Hard to write, hard to extend, hard to refactor. I generally recommend against writing this way (see [1]). But... it's noticeably easy to debug. There's no control flow; I mean, with very few exceptions, every line is just run once, in order. So when the output is wrong I skim the comments and/or work backwards through the code to find which variable was computed wrong, print stuff (possibly comparing to similar input without the bug) to see how it differs from expectations, and at that point can easily see how it got that way.

The compiler's whole state is a bunch of integer vectors, and •Show [a,b,c] prints some equal-length vectors as rows of a table, so I usually use that. The relevant code is usually a few consecutive lines, and the code is composed of very basic operations like boolean logic, reordering arrays with selection, prefix sum, and so on, so they're not hard to read if you're used to them. There are a few tricks, which almost all are repeated patterns (e.g. PN, "partitioned-none" is common enough to be defined as a function). And fortunately, the line prefaced with "Permutation to reverse each expression: more complicated than it looks" has never needed to be debugged.

Basically, when you commit to writing in an array style (you don't have to! It might be impossible!) you're taking an extreme stance in favor of visible and manipulable data. It's more work up front to design the layout of this data and figure out how to process it in the way you want, but easier to see what's happening as a result. People (who don't know APL, mostly) say "write only" but I haven't experienced it.

[0] https://github.com/mlochbaum/BQN/blob/master/src/c.bqn

[1] https://mlochbaum.github.io/BQN/implementation/codfns.html#i...


God bless, my hat goes off to you sir. I have trouble wrapping my head around the concept of first class functions in ndarrays, let alone implementing it in hardcore APL. That has to be a feat on par with Hsu's Co-Dfns.

Don't suppose you can point to any resources to help wrap one's head around BQN?


Well this is pretty much the goal of the BQN website so my best attempts are there. I might point to the quick start page https://mlochbaum.github.io/BQN/doc/quick.html as a way to feel more comfortable with the syntax right away. And the community page https://mlochbaum.github.io/BQN/community/index.html collects links by others; Sylvia's blog in particular focuses on the sorts of flat array techniques that are useful for a compiler.


While I've seen BQN mentioned previously on pages that discuss APL, K & J, I finally took a look at it.

I've got to say, it's a really impressive language. Very well thought through, and it brings some nice ideas. As someone still newer to the space, it seems to do a great job of eliminating some of the unnecessary complexity of other languages. The straightforward approach to syntax and parsing is a real breath of fresh air.


Just looked at the github -- wait, you wrote BQN? My God. Is there any prior art on this -- arraylangs with first class functions? I don't think very many people realize how incredible the semantic power of BQN is. The idea of an arraylang with first class functions... it truly staggers the imagination.

I feel like if I were able to wrap my head around it I would never want to code in anything else. Thanks again and excited to take another look at it!


K, for a start. Whitney's earlier dialect A+ too. See https://aplwiki.com/wiki/First-class_function .


Don't most array languages have first class functions?


They have functions but not first class functions. Think (the ability to make) a vector/matrix of functions rather than just numbers :O

What could you do with that?

I don't know, but I bet some pretty cool stuff.


> What could you do with that?

Just a couple quick ideas:

    fns ← f g h         ⍝ array of functions

    fns[condition] args ⍝ select function to run
    (3 0 0⌿fns)    args ⍝ f f f args
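
For a rough feel of the idea in more familiar territory, here's a hypothetical numpy sketch (the functions and indices are made up; numpy just stores the callables in an object array):

    import numpy as np

    f, g, h = np.negative, np.square, np.sqrt   # stand-ins; any callables work
    fns = np.array([f, g, h])                   # an array whose elements are functions

    xs = np.array([4.0, 9.0, 16.0])
    which = np.array([0, 2, 1])                 # data decides which function runs where
    out = [fn(x) for fn, x in zip(fns[which], xs)]   # [-4.0, 3.0, 256.0]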


What is this witchcraft? I fear that I have seen something that I cannot unsee...


Once you've learned the syntax of the language, long expressions like that are about as readable as however-many-dozen lines of JS/Python with 1-to-3-character variable names; i.e. some parts may be obvious if they're a common pattern or simple enough, but the big picture may take a while to dig out.

Probably the biggest readability concern of overly-golfed expressions really is just being dynamically typed, a problem shared with all dynamically-typed languages. But array languages have the problem worse, as nearly all operations are polymorphic over array vs number inputs, whereas in e.g. JS you can use 'a+b' as a hint that 'a' and 'b' are numbers, & similar.

If you want readable/maintainable code, adding comments and splitting things into many smaller lines is just as acceptable as in other languages.


I am kind of curious whether you have to mentally keep track of the rank/shape/dimensions or whether there is some implicit/explicit convention for conveying that to the reader. Does tracking rank/shape become second nature after a while?

I'm also wondering about things like (APL-style) inner products -- they are undeniably powerful, but it's hard for me to conceptualize use cases above rank 3.


That depends on the specific code. Some code is written to be agnostic to the rank, while other code makes certain assumptions.

In my code I'd sometimes write assertions at the beginning of a function, not only to ensure it's called with the right shape but also to serve as documentation.

Also, in practice really high rank arrays aren't used much. Even 4 is pretty rare.


If there's information on input format, it is simple enough to trace through the following shapes, but it does force reading the code rather linearly. Operations which implicitly restrict the allowed shapes are unfortunately intentionally rather few.

I basically never use the generalized inner product; it's rather unique to the original APL - J has a variant that doesn't have the built-in reduction, and k and BQN and many if not most other array languages don't have any builtin for it at all. And in general I don't typically use rank higher than like one plus the natural dimensionality of the operation/data in question.
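
For anyone wondering what the generalized inner product buys you, here's a numpy sketch of A∨.=B, i.e. f.g with f=or and g=equals (hypothetical data; A and B are assumed to be 2-D with a matching inner axis):

    import numpy as np

    A = np.array([[1, 2], [3, 4]])
    B = np.array([[1, 0], [2, 4]])
    pairs = A[:, :, None] == B[None, :, :]        # g (=) across the shared axis
    result = np.logical_or.reduce(pairs, axis=1)  # f (or) reduces that axis away
    # result[i, j] is True when A[i, k] == B[k, j] for some k; swapping in
    # * and add instead of == and or gives the ordinary matrix product A @ B.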


I programmed in APL a long time ago... even got 'not bad' at it.

The best analogy I can give of my thought process is that first I unfolded the problem into one or more many-dimensional object(s)... then took a different "stance" for looking at the objects, then refolded them into the final solution.

So yes... I had it all in my head at some point.


You don't really have to worry about keeping track of tons of functions, variables, structs, classes, etc., and trying to keep all the names straight in your head - all you need is to know the symbols, so it's in some ways easier than reading a complex function in more verbose languages where you might need to lookup stuff from several libraries just to understand what's going on. Also, that one line is ~100 characters, each of which probably covers ~0.5-1 lines in other languages, so you should expect to set aside a similar amount of time to reading and understanding it.


I suspect that if you're fluent in the language, understanding an expression written in it comes just as easily and quickly as reading a sentence in a book does to me.


Information density is studied in linguistics. It could likely apply to programming languages similarly.


That’s exactly what they say. Though most kdb I’ve seen in business looks more like Python.


My impression is that the language is used more for scripts than for "code" in a true sense. A bit of "how much can you juggle in your mind" going on.


I've only seen this style of language commented on after a contest is over on Stack Exchange programming challenges. I have no idea how one would learn all this stuff from code in the wild (like I learned most of Python, for example). Then again, I don't go searching GitHub for K, APL, or Perl for that matter.

I'm sure each of those languages makes some guarantee about the sorts of errors that can be introduced - as opposed to C (let me pick on it), where the errors you know you can introduce and the errors that actually get introduced don't overlap much. However, I have a hard enough time typing English consistently, so the various "symbol-y" languages just glaze my eyes over, unfortunately.

It almost "feels" like these languages are an overreaction to the chestnut "they must get paid by LoC".


Late to the game here but...

> can you find the bug?

Several stand out immediately:

- Two syntax errors: unclosed single quote in '⍳n n←⍴⍵ and no right operand in the second use of Jot (∘). It's not clear how those could have snuck in naturally by accident, but I'll just assume cosmic rays and that they should be simply elided.

- n n←⍴⍵ is setting n twice, which is a bit surprising, though it signals that you probably expect ⍵ to have rank 2. In such cases _ n←⍴⍵ or n←⊃⌽⍴⍵ may be more natural, depending on intent.

- However, Decode (⊥) will error if ⍴⍵ returns anything other than a single integer (or an empty vector), so n n←⍴⍵ is equivalent to just n←⍴⍵ and doubly confusing.

- Which means that (n*÷2){⍵,⍺⊥⌊⍵÷⍺}⍳n n←⍴⍵ can only return a vector, i.e. 1..n with a number tacked on the end: the value of (1-x^n)/(1-x) evaluated at sqrt(n), which is a bit of a strange data structure IMHO. Something to do with geometric series of n^2?

- The second use of Ravel (,) in ,⍵ is redundant, and given the constraints we know above, so is the first use: ,(n*÷2)...

- It also means that (↑⍵) is the same as just ⍵

- But then (⍺∨.=⍵) is always just 1

- Meaning that the whole code is essentially equivalent to p←(n+1)⍴⊂⍳n×n←⍴⍵. I.e. it just outputs n+1 vectors of the integers 1 to n^2.

- Which, without context, makes the intent hard to guess, but that data structure feels a bit strange. Instead of a vector of uniform-length vectors, a matrix would be more efficient: (n+1)(n*2)⍴⍳n×n←⍴⍵. But that's just a matrix with rows that are all the same, so maybe we could just use the single vector (⍳2*⍨⍴⍵) directly?

Really, despite looking strange, once you learn the symbols and basic operations, APL is surprisingly straightforward. If you're on HN, then you're already smart enough to learn the basics easily enough.

Admittedly, though, becoming proficient in APL does take some time and learning pains. Once there, though, it does feel like a superpower.


I'm not sure why it would be any more impressive or surprising than the billions of people who read and write in non-English alphabets.


That's a really good point...

But -- (and forgive me if I'm totally wrong) -- this isn't just "non-English" but "non-phonetic", which is a smaller set of written languages, and the underlying language is ... math.... so understanding the underlying grammar itself relies on having decades of math education to really make it jibe.

If this code is just a final result of "learn math for 2-3 decades, and spend years learning this specific programming language" -- my statement stands. Interacting with this kinda binary blob as a programming language is impressive. I think I read somewhere that Seymour Cray's wife knew he was working too hard when he started balancing the checkbook in hex...


The underlying language isn't really very mathematical, at most there's a bit of linear algebra in the primitives but that's it. You certainly don't need any sort of formal maths education to learn APL. There are about 50 or so new symbols, which is not a big ask, with any sort of focus the majority of the syntax etc can be learned very quickly. The "bugs" in your original code stand out very clearly because things like "∘}" don't make sense, ∘ being "dyadic" (infix).


and it bears mention that a decent chunk of those symbols are things nearly everyone is familiar with from other languages (+, -, =, etc), symbols you've probably seen in math class or on your graphing calculators (÷, ×, ≠, ⌈, ←, etc), and symbols with very strong mnemonic associations once you've seen them explained (≢, ⍋, ⍳, ⌽, etc).


> Advocates of the language emphasize its speed, facility in handling arrays, and expressive syntax.

Indeed.

https://en.m.wikipedia.org/wiki/K_(programming_language)


“Expressive” = like two cats fought while standing on the keyboard


I've been messing with Uiua (https://www.uiua.org/) a good amount recently, and find its sort of dance between having a stack and being an array language somehow gets you to a nice level of legibility despite being a combo of two styles that tend to generate line noise.


The front page there has examples like “÷3/+∿⊞×⊟×1.5.220×τ÷⟜⇡&asr” - is that closer to noise, or does it actually look more readable than K once you get used to both? I’m kind-of intrigued by the built-in multimedia output, but still this language looks scary and impractical at first glance. How does it compare to using numpy & jupyter? Do a lot of people prefer the extreme terseness over using typeable keywords? I’m curious why it lets you type the readable operators but wants to turn them into glyphs; wouldn’t it be more approachable, more readable, and make more maintainable code, if it just used the keywords instead of glyphs?


Well, firstly, you can just type keywords and they get swapped for glyphs.

Secondly, the power with Uiua is that you can easily split up that expression literally just by hitting enter in some spots (with some line swapping, but…). You can also give names to the intermediate pieces.

And finally, if you are constantly doing operations across arrays, it can be more legible to go with a symbol-like approach, just as most people prefer x+y to add x y. The conciseness helps you build up larger expressions before you need to break things up.


Cool language. I happened to notice the ⍜ operator, which operates on a transformed array, then reverts the transformation. Not sure if other array languages include this, but it's a really cool idea. I always found the traditional map/filter operators to be limiting in this regard, kind of like trying to write expressions without using parentheses.
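
For anyone unfamiliar, the core of Under is easy to state in plain Python terms (a toy sketch of the mathematical form; g_inv is assumed to actually invert g):

    import math

    def under(f, g, g_inv):
        # Transform with g, apply f, then undo the transformation.
        return lambda x: g_inv(f(g(x)))

    # e.g. "add 1 under log" amounts to multiplying by e:
    plus1_under_log = under(lambda v: v + 1.0, math.log, math.exp)
    print(plus1_under_log(10.0))   # ~27.183, i.e. 10 * e

The array-language versions go further by computing the inverse for you and, in the structural form, handling transformations like filtering that have no ordinary inverse.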


It's in several, particularly newer APL dialects; see https://aplwiki.com/wiki/Under#History . Proud to say I originated the "structural" form used by Uiua, which is able to deal with transformations like filtering that lose parts of the input. Every language now seems to have its own take on what exactly Under means, with different implementations leading to different supported functions and various ways to relax the theory to be more convenient or handle intuitively expected things better.


Under is a concept in array languages, though it’s supported in a very ad hoc way.


Thanks for the link, it looks like a fascinating language


I work with it daily in a bank, and I couldn't find a better way to express it. Many colleagues throw their keyboards in despair at this stupid, impossible-to-remember syntax.


There are a lot of things in various programming languages which are hard to remember, but k and array languages have such a small surface area, not being able to remember it while working with it daily amounts to learned helplessness.

(source: mostly amateur k programmer, also worked with it in a bank, find it vastly easier to read/write/remember than most mainstream languages)


Not that it's impossible to remember, but it's definitely contrary to most traditional use of the symbols employed in it, though not without logic. My favorite is the I/O functions, called 0, 1, and 2 (yes, numbers), which handle interaction with stdin, stdout, and stderr respectively. In dyadic form they at least have a colon, but in monadic form they look like plain numbers: 1 "Hello world".

I suspect that to study k (and use kdb) efficiently, you need to actively forget what you knew about the syntax of other languages, and study k as a language from Mars that happens to map to ASCII characters somehow.


It is really easy to remember; it is so small that remembering is the least of the issues. The rest is just using it a lot; I find it readable and nice to work with. Unlike some other languages that get shoved down our throats.




"Debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"

— Kernighan, Brian and P. J. Plauger. The Elements of Programming Style (2e). McGraw-Hill, 1978.



It's a nice interpretation.

I prefer a different approach: "smart is good; clever isn't smart". If you have to express something in a clever, that is, highly contrived but actually working way, it means that you lack the right way to express it, and maybe your mental model of the problem is not that good. The theory of epicycles is clever; Kepler's laws are smart.


Very unconvincing. If you become cleverer you can just write even more clever code and still not be able to debug it.


Great!


This can probably be true for some people, but still will not work for many others. One probable outcome of a frustrated debugging session is "let's rewrite/refactor it to make it easier to debug next time", and not self-enlightenment.


> how will you ever debug it?

By being so smart that your program has obviously zero bugs in it!


This view is too static.

That is not possible, because the environment can (and at some point always will) change in ways that weren't planned for, due to a lack of working crystal balls. Data, user behavior, the network, the system(s) the software runs on can all change over time.

Also, it is way too expensive to try to cover every single conceivable possibility, so we deliberately leave holes.

For non-trivial things we often prefer to wait to see what problems actually come up during use, and then fix exactly those, but not the many more problems that could come up but are too unlikely and/or too costly to guard against.

In a living environment the software lives too, and keeps changing and adapting.


You might've missed the quip, since this whole thread is about a quote, which I'm countering with an alternative quote from Hoare:

> There are two methods in software design. One is to make the program so simple, there are obviously no errors. The other is to make it so complicated, there are no obvious errors.


> That is not possible, because the environment can (and at some point always will) change which wasn't planned for due to a lack of working crystal balls. Data, user behavior, the network, the system(s) the software runs on can all change over time.

It sounds to me like you are describing a change of problem, not bugs in the solution. If in the future someone redefines the concept of a Sudoku puzzle such that this solution is no longer applicable, or tries to use the solution verbatim in a language which is different from K and therefore yields different results, it's not a bug in the original code that it's not a solution to that new problem. It's still a solution to the same problem it was always a solution to.

I can see what you mean in a practical sense, but also consider (practically) that a lot of problems can be broken down into smaller, well-defined problems which are themselves practically immutable. You can throw the solutions to such problems out when you no longer need them, and come up with solutions to whatever new problems replaced the original problems.


In my experience, the vast majority of problems are insufficiently specified. No matter how well you solve the current problem, there are bound to be certain assumptions you've made about the requirements. And when those assumptions don't hold true, your solution may no longer work.

> What do you mean the input file can't be ISO-2WTF encoded?


I believe that you are addressing maintainability, not debugging.


In your book, debugging problems is not part of maintenance?

What even is the difference, apart from trying to start a discussion about definitions? (I'll let somebody else comment on that terrible habit: https://www.lesswrong.com/posts/7X2j8HAkWdmMoS8PE/disputing-...)


Debugging problems is part of maintenance, but a small part. Extensibility is probably a much larger part, and what I think of first when someone says "maintenance".


The average bank/company would rather have an average solution maintained by 10 easily replaceable average developers than a nutty, smart solution only understood by 1 highly talented developer.


You could also say that the average bank/company should have learned from previous mistakes doing exactly that for many decades. Select a language that is well tested, understood and supported. Set a limit on cleverness and instead focus on maintainability and simplicity.


If only. In my experience, banks end up building a solution that is maintained by 100 mediocre developers that a reasonably smart developer can't make sense of when it behaves erratically or has extremely poor performance.


I described the theory. You have described the practice :)


Which was precisely my point (and I agree with all the responses in this thread), though my wording and light sarcasm seem to have been a bit too dry, and didn't quite land as intended.


Keeping skill barriers low keeps wages low as well.


But possibly not its maintainability.


Lines of code is a poor metric, because languages use lines differently.

A much better measure would be the number of nodes in a parse tree, of semantically meaningful non-terminals like "a constant" or "a function call".

An even better measure would also involve the depth and the branching factor of that tree.
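
As a rough illustration, node counting is a few lines in any language that exposes its parser; a Python sketch:

    import ast

    def ast_size(source: str) -> int:
        # Count every node in the parse tree of a piece of Python source.
        return sum(1 for _ in ast.walk(ast.parse(source)))

    # Formatting changes don't move the number; added structure does:
    print(ast_size("avg = sum(xs) / len(xs)"))                # some count n
    print(ast_size("avg   =   sum( xs )   /   len( xs )"))    # same n
    print(ast_size("avg = (sum(xs) + 0) / len(xs)"))          # larger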


Just... no. What are you even trying to compare? The UX of a language matters. Clarity, thinking paradigm, expressiveness, etc. all matter and are affected by the visual size of code.

A one line solution takes up very little visual real estate. That matters a lot when you are working on some more complex problem. Flitting your eyeballs around a screen takes orders of magnitude less effort than scrolling around and navigating files. Cognitive load is important.

We really need to burn this vague "only semantics matter" scourge that's crept into our programmer values these days. I'm sorry, but I care about things like incentives against over-engineering, ease of directly thinking in the problem domain, and simplicity of the encompassing ecosystem.

A terse one-line solution tells me there is virtually no room for over-engineering. Even without knowing K, I can see obvious constants side-by-side, telling me it's likely using a direct data representation of the problem in its code. Does K culture encourage code like that? Does programming in K bias you towards directness and simplicity? Then please, I want some of that special sauce on my team.

</rant>


"A one line solution takes up very little visual real estate. That matters a lot when you are working on some more complex problem."

When I work on some more complex problem, I like to think about the problem, not spend energy decoding condensed text. Scrolling through slightly more verbose but clear code is faster for me.


I think the difference is not a bit of scrolling, but rather the whole program on half a page vs 10 files à 200 lines of mostly noise


There is noise and there is self explaining code. One liners for complex problems are a nice challenge, but are seldom clear to read.


Don’t get confused between using smaller keywords and actually understanding the problem at hand. Terse languages do absolutely nothing to prevent over-engineering. They might even contribute by giving a false sense of simplicity and a tendency to prevent certain kinds of code reuse. To prevent over-engineering on large projects, you don’t need a terse language at all, you need the right mentality, the right management & product team, good team culture & cohesion, strong code review process, and job performance metrics that align with not over-producing code.

It seems like parent’s metric (size of parse tree) would easily optimize for terseness and penalize bloat, regardless of language, so maybe your reaction was too reflexive. UX of a language does matter a bit, and one that’s too terse incurs development friction and technical debt when used in larger projects. Just study the history of Perl and why it’s not widely used.

What a one liner looks like is more or less the worst possible metric to use for large software projects. In any language, the style of code changes the larger the codebase, and cleverness and terseness become a liability. https://www.teamten.com/lawrence/writings/norris-numbers.htm...


> Don’t get confused between using smaller keywords and actually understanding the problem at hand.

Absolutely. I'm not arguing for maximal terseness and spent a lot of words attempting to say otherwise. (IMHO, we're both reflexively reacting to our parent comments a bit).

What I am wanting to point out is that form affects function and how we think about and use our languages. This in turn shapes our ecosystems and cultures, which influence our heuristics for keeping a pulse on project health and the surrounding support structures. Which all in turn reflect back to affect the forms we like and produce!

The mechanics of middle-sized and large businesses naturally incentivize bloat and churn to some degree, not because we necessarily want those things, but because coordination and communication are large information bottlenecks, a la Mel Conway's observations [0]. When our heads are filled with syntax, tooling, best practices, and absolute ideas about what correct code looks like, we make it that much harder to fill our heads with direct problem-domain concerns.

While senior engineers can and often do successfully wade through all that, it's a long grind, and anything that empowers team members to get there faster seems like an obvious win.

I wish that we as a community could better recognize that our values like "semantics over syntax" or "code review is necessary" or "readability is good" implicitly invoke a large set of contingencies in order to become heuristics that serve well.

Tools that violate our common sense and aesthetic values present an opportunity to sanity check those sentiments and potentially sharpen our understanding of them. In my experience, APL and probably K are really excellent at that, offering a new way of thinking that highlights clear dysfunctions within our current "mainstream" languages and cultures.

If true, isn't that something we obviously want to eagerly investigate?

So, to bring it back, I think the statement that "AST node count is a better metric" embodies exactly the kind of values and mindset that make it harder for us to grow. What do we think about readable, maintainable, highly terse code? Does this K sample exemplify that? Why are K and APL successfully (if quietly) running significant portions of our economy? What structural, organizational, and cultural lessons do they embody that we can learn from?

[0]:http://www.melconway.com/Home/Committees_Paper.html


I don’t know what you mean, and currently do not agree with the idea of parse tree size being a bad mentality or making it harder to grow, and I don’t feel like you’ve provided any objective evidence to back up that claim. Parse tree size seems like a better metric than terseness, for the reasons I and others have already mentioned, despite the fact that sometimes tight small code has some advantages that might be invisible if your only metric is AST size. (Neither parent nor I made such a claim, but your argument depends on that assumption.) K specifically is unreadable by most people, and somehow examples never seem to come with comments or error handling. Is it the best poster child for programming progress as a whole? I have my doubts, even though it might be very useful for some people in some situations.

I would be willing to bet that Excel spreadsheets are running more of our economy than K code, likely by orders of magnitude. I’d also be willing to bet that COBOL exceeds K use by similar multiples. Business/economic use doesn’t seem like a very good metric for what you’re talking about.


The original post I replied to made an explicit claim with zero evidentiary support, so your demands for rigor are a bit lopsided. But more to the point, I'm really not saying anything about which metric is better or whatnot. What does it even mean to be a "better metric" when we have no consensus on what criteria we're using for comparison?

In effect, replies here do offer some criteria, but each is slightly different and somewhat begs the question, since said criteria can easily be chosen to support whatever conclusion you want.

When the object level discussion (i.e. in this case, the comparison of code size metrics) is ill-defined, the only natural thing we can reach for are cached ideas, heuristics, memetic trends, etc. That is, the discussion essentially becomes an implicit sharing of which cultural ideas we consider salient and important.

That is why I keep going on about culture and values in my previous replies. I'm not saying anything about ASTs vs. token count or whatever. I'm trying to say, "Hey, fellow devs. You know all the obvious ways that K and APL violate our sense of what good and proper code should look like? You know how they feel wrong? Well, actual experience by APL and K devs provides us evidence that we're potentially wrong and should give these languages more open-minded attention."


All programmers think they have culture and values. Getting them to agree on what those are and to prioritize them the same is the hard part. Having an open mind is great, and if you’d said that from the start I totally would have agreed, but you didn’t, you dismissed the GP’s comment with relatively strong language as being very wrong headed on behalf of the entire community, and doubled down on that stance in your followup comment, without acknowledging (or maybe without even realizing) that the suggestion might ultimately align with your message, that you might be picking a fight with someone who agrees with you. I don’t feel like the top comment required any evidence, partly because the claim is nearly tautological and quite easy to agree with, and partly because it wasn’t shitting on someone specific or being mean. You made a much bolder and more antagonistic claim and called their message a “scourge” and framed your point of view about over-engineering in opposition to their comment (which seems like straw man and very presumptuous), so yes I think your argument does require some evidence if you’re going to do that.

There’s a famous couple of comments by @arcfide defending the extremely terse K/APL style of coding, and it really makes a strong case. From what I can tell, this might be what you’re trying to say. I would just say to take note that he does not try to make the poster of the comment he replied to feel wrong headed, he focuses on the positives of his own approach, and he does not project his style on everyone else or make any claims that everyone should use it, he focuses on why it works for him. https://news.ycombinator.com/item?id=13571159


First off, thanks for continuing this exchange with me. Text communication between strangers is fraught with miscommunication perils left and right, so hanging in there is a really nice gesture. Cheers.

It's so weird, though. Everything disagreeable you point out in my comments, I kind of feel is true of GP's comment. It casually dismisses the content of the OP, injecting a strawman about "metrics". Worse still, it just touts normative and relatively mainstream opinions about better vs. worse, without even offering up a hint as to what these supposed metrics are supposed to be measuring.

That's a dick move, IMHO, and a common one at that in these array language posts. I think we as a community do ourselves a disservice by allowing ourselves to propagate such echo chambers.

Which is all a pretty different message than @arcfide (eloquently) attempts.

Please note that nowhere do I attack GP or GP's comment specifically. If you're inclined to reread my comments, please note that I really do try hard to delineate ideas and cultural trends as the target. Heck, I even own up to being part of those trends and culture. Where you have thought I say "you", try rereading it as "we".

Anyway, cheers!


I appreciate that your tone is softening with me, and that you took the time to mention some positives. De-escalating from a potential misunderstanding is the right way to go.

I’ve reread your comments, and they still read like an attack to me, while the top comment does not. You may feel like you’ve drawn a line, but the implications you made were quite clear. Calling it a dick move is a more direct attack, and talking about how it offends you tends to demonstrate that you have been and still are in fact attacking. Using strong language in a direct reply and talking about how wrong the mentality is is always going to be taken as an attack on the comment you’re replying to.

Personally I feel like the “one line” part of the article title is intentionally provocative, and as such, it invites critique, which is what the top comment is. It’s both impressive to fit a sudoku solver on one (short) line, and also at the same time, making a claim that can’t be fairly compared to other languages. As such, it is fair to point out that there’s a more universal way to evaluate the size of Arthur Whitney’s solution that is more compatible with other languages, and combined with the fact that everyone (including you?) already agrees that lines of code aren’t a good metric for anything, it’s not clear why you’ve taken such issue with that casual comment.

The article, disappointingly, doesn’t explain Whitney’s solution in words that non K readers can understand. At a glance, I would assume it’s a more or less brute force search over all possible sudoku boards and then matching against the cells, rows, and columns rules. In a way then, Whitney’s solver might be seen as a succinct statement of the rules of sudoku, which are indeed relatively short in any language.


Oh well. It's clear you read my posts as personal or uncouth attacks. I genuinely disagree and tried to explicate intent clearly, but c'est la vie. IMHO, it's helpful to separate out ideas and actions from identity, freeing us to deal with the former without mercy as needed.

That said, this exchange will definitely bounce around in my subconscious, so whether or not I explicitly agree, you've definitely moved the needle!

Anyway, whatever happened here, it was a genuine meeting of minds, so much obliged fellow HNer. Be well.


Why not create a programming language, that uses all possible unicode codepoints to further decrease the number of characters used? That would be so much more readable!


How about having a "binary operator" codepoint, that is then specialized by a following combining character codepoint. That way it's also intrinsically obvious that the thing is a binary operator at first glance. Even more expressiveness!


I mentally work the way the parent described. I build AST nodes in my mind when I read. I like operating with combinators, and with graphs/trees of them whose results I almost naturally understand.

Any language that adds complexity at that layer loses me, and APL, even with its crude visuals, is not far from that.


The built-in functions and the API to a system library spoil these metrics. As an example, consider HQ9+, which is pretty good at printing "Hello, world!".

https://cliffle.com/esoterica/hq9plus/


The preferred measure of information content is simply number of bits as used for instance in Algorithmic Information Theory [1].

[1] https://en.wikipedia.org/wiki/Algorithmic_information_theory
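
Since the true Kolmogorov complexity is uncomputable, the usual practical proxy is compressed size, which also connects back to the tar|gzip trick upthread; a minimal Python sketch:

    import zlib

    def compressed_bits(source: str) -> int:
        # Upper bound on information content: bits in the deflated text.
        return 8 * len(zlib.compress(source.encode("utf-8"), 9))

    print(compressed_bits("x(,/{@[x;y;]'(!10)^x*|/p[;y]=p,:,3/:-3!p:!9 9}')/&~*x"))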


By that measure naming a variable “objUser” instead of “user” is better because it has more information, and naming the same variable “cgjkkytdvjkftujmhffetb” is even better because it contains more information.

The parse tree approach is trying to get at a fuzzy notion of useful information and useful density of information.


This oneliner was obviously done for the giggles, and nobody pretends it's reasonably readable code. Getting anal about definitions here is entirely missing the point. (which is "look, K lets you write extremely dense code!")


I don’t know if that’s the case, simply because all code that I see written by array language programmers looks like code golf. Even the language implementation itself!

https://code.jsoftware.com/wiki/Essays/Incunabulum


Is this because all the code you see is through HN or similar? No one's going to share something titled "an unremarkable script I use to help run my business" here. Not sure what your threshold for code golf is, but you can see APL written in a variety of styles by searching Github. It doesn't recognize K but does have Q, which is basically K plus keywords, obviously promoting more verbose code. Whitney originated the dense style of implementation shown at your link, and a few other implementers (including myself in the past) have picked it up, but it's not that common. For example April, GNU APL, Kap, Goal, and Uiua all use an idiomatic style for their implementation languages.

APL: https://github.com/search?type=code&q=language%3AAPL

Q: https://github.com/search?type=code&q=language%3Aq

Implementation: https://aplwiki.com/wiki/List_of_open-source_array_languages



I’ve often wondered about languages like APL/k, are the programmers actually able to think about problems more efficiently?


As a kdb+/Q programmer I would say it depends on the type of problem.

For example, when working with arrays of data it certainly is easier to think and write “avg a+b” to add two arrays together and then take the average.

In a non-array programming language you would probably first need to do some bounds checking, then a big for loop, a temporary variable to hold the sum and the count as you loop over the two arrays, etc.

Probably the difference between like 6ish lines of code in some language like C versus the 6 characters above in Q.
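
For illustration, both shapes of the computation in Python (a sketch; a and b are assumed to be equal-length numeric vectors):

    import numpy as np

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, 5.0, 6.0])

    # The explicit-loop shape of it, roughly what the C would do:
    total = 0.0
    for i in range(len(a)):
        total += a[i] + b[i]
    loop_avg = total / len(a)

    # The whole of Q's "avg a+b":
    vec_avg = np.mean(a + b)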

But every language has features that help you reason about certain types of problems better. Functional languages with algebraic data types and pattern matching (think OCaml or F#) are nicer than switch statements or big if-else-if statements. Languages with built-in syntactic sugar like async/await are better at dealing with concurrency, etc.


Well no, not in a non-array programming language. In any language that has a semi-decent type/object system and some kind of functional programming support, `avg a+b` would just be `avg(a, b)`, which is not any easier or harder, with an array type defined somewhere. Once you make your basic array operations (which have to be made in q anyway, just in the stdlib), you can compose them just like you would in q, and get the same results. All of the bounds checking and for-loops are unnecessary; all you really need are a few HKTs that do fancy maps and reduces, which the most popular languages already have.

A very real example of this is Julia. Julia is not really an array-oriented programming language, it's a general language with a strong type system and decent functional programming facilities, with some syntactic sugar that makes it look like it's a bit array oriented. You could write any Q/k program in Julia with the same complexity and it would not be any more complex. For a decently complex program Julia will be faster, and in every case it will be easier to modify and read and not any harder to write.


Why would it be avg(a, b)?

What if I want to take the average difference of two arrays?


mean(a - b)


I don't know what you mean by the q array operations being defined in the standard library. Yes there are things defined in .q, but they're normally thin wrappers over k which has array operations built in.


I don't consider an interpreted language having operations "built-in" be significantly different from a compiled language having basic array operations in the stdlib or calling a compiled language.


Hmm, why not? Using K or a similar array language is a very different experience to using an array library like numpy.


It is syntactically different, not semantically different. If you gave me any reasonable code in k/q I'm pretty confident I could write semantically identical Julia and/or numpy code.

In fact I've seen interop between q and numpy. The two mesh well together. The differences are aesthetic more than anything else.


There are semantic differences too with a lot of the primitives that are hard to replicate exactly in Julia or numpy. That's without mentioning the stuff like tables and IPC, which things like pandas/polars/etc don't really come close to in ergonomics, to me anyway.


Do you have examples of primitives that are hard to replicate? I can't think of many off the top of my head.

> tables and IPC

Sure, kdb doesn't really have an equal, though it is very niche. But for IPC I disagree. The facilities in k/q are neat and simple in terms of setup, but it doesn't have anything better than what you can do with cloudpickle, and the lack of custom types makes effective, larger-scale IPC difficult without resorting to inefficient hacks.


None of the primitives are necessarily too complicated, but off the top of my head things like /: \: (encode, decode), all the forms of @ \ / . etc, don't have directly equivalent numpy functions. Of course you could reimplement the entire language, but that's a bit too much work.

Tables aren't niche, they're very useful! I looked at cloudpickle, and it seems to only do serialisation, I assume you'd need something else to do IPC too? The benefit of k's IPC is it's pretty seamless.

I'm not sure what you mean by inefficient hacks, generally you wouldn't try to construct some complicated ADT in k anyway, and if you need to you can still directly pass a dictionary or list or whatever your underlying representation is.


> None of the primitives are necessarily too complicated, but off the top of my head things like /: \: (encode, decode), all the forms of @ \ / . etc, don't have directly equivalent numpy functions. Of course you could reimplement the entire language, but that's a bit too much work.

@ and . can be done in numpy through ufuncs. Once you turn your unary or binary function into a ufunc using foo = np.frompyfunc(f, nin, nout), you then have foo.at(a, np.s_[fancy_idxs], (b?)) which is equivalent to @[a, fancy_idxs, f, b?]. The other ones are, like, 2 or 3 lines of code to implement, and you only ever have to do it once.
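
Concretely, with a built-in ufunc (np.add standing in for an arbitrary f; the indices are made up):

    import numpy as np

    a = np.zeros(6, dtype=int)
    # Roughly @[a; 0 0 3; +; 1 1 10]: apply + at the given indices in place;
    # .at is unbuffered, so the repeated index 0 accumulates both increments.
    np.add.at(a, [0, 0, 3], [1, 1, 10])
    print(a)   # [ 2  0  0 10  0  0]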

vs and sv are just pickling and unpickling.

> Tables aren't niche,

Yes, sorry, I meant that tables are only clearly superior in the q ecosystem in niche situations.

> I looked at cloudpickle, and it seems to only do serialisation, I assume you'd need something else to do IPC too? The benefit of k's IPC is it's pretty seamless.

Python already does IPC nicely through the `multiprocess` and `socket` modules of the standard library. The IPC itself is very nice in most usecases if you use something like multiprocessing.Queue. The thing that's less seamless is that the default pickling operation has some corner cases, which cloudpickle covers.

> I'm not sure what you mean by inefficient hacks, generally you wouldn't try to construct some complicated ADT in k anyway, and if you need to you can still directly pass a dictionary or list or whatever your underlying representation is.

It's a lot nicer and more efficient to just pass around typed objects than dictionaries. Being able to have typed objects whose types allow for method resolution and generics makes a lot of code so much simpler in Python. This in turns allows a lot of libraries and tricks to work seamlessly in Python and not in q. A proper type system and colocation of code with data makes it a lot easier to deal with unknown objects - you don't need nested external descriptors to tag your nested dictionary and tell you what it is.


Again, I'm not saying anything is impossible to do, it's just about whether or not it's worth it. 2 or 3 lines for all types for all overloads for all primitives etc adds up quickly.

I don't see how k/q tables are only superior in niche situations, I'd much rather (and do) use them over pandas/polars/external DBs whenever I can. The speed is generally overhyped, but it is significant enough that rewriting something from pandas often ends up being much faster.

The last bits about IPC and typed objects basically boil down to python being a better glue language. That's probably true, but the ethos of array languages tends to be different, and less dependent on libraries.


Which is why C# is the giant ever increasing bag of tricks that it is (unkind people might say bloat…) ;-) Personally, I’m all for this; let me express the problem in whatever way is most natural.

There are limits, of course, and it’s not without downsides. Still, if I have to code in something all day, I’d like that “something” be as expressive as possible.


For some classes of problems that are easily vectorized, using an array-focused language can certainly make thinking about them and their solutions more efficient, since you can abstract over the data structure and iteration details.

As a quant, I used kdb+/q quite a bit for 5+ years for mid-frequency strategies, but as I moved towards higher frequency trading that required calculations on the order book that couldn't be easily or efficiently vectorized, then continuing to use array-focused languages would have only complicated reasoning about those problems.


What did you switch to after that?


I went to this tech talk on Dyalog (a modern APL-like language), and the speaker makes the argument that the notation allows certain idioms to be recognized more easily:

https://youtu.be/PlM9BXfu7UY?si=ORtwI1qmfmzhJGZX&t=3598

This particular snippet was in the context of compilers, but the rest of the talk has more on Dyalog and APL as a system of mathematical notation. The underlying theme is that optimizing mathematical expressions may be easier than optimizing general code.


Hillel Wayne writes about it on his newsletter every once in a while. He's convinced me that he does in fact think through some problems better in array languages but I still can't really conceive of what that experience is like.


there are several open-source K environments available, some which even run in the browser:

http://johnearnest.github.io/ok/index.html

if it's something you're interested in trying i'd be happy to point you toward more resources, and i'm sure there are plenty of other arraylang tinkerers reading this thread who could help, too


one nice thing about the array language style is that it's possible to talk about variations on algorithms where the relevant code snippets, being a few characters, fit inline into the discussion; more traditional vertically-oriented languages that take handfuls or dozens of lines to say the same things need to intersperse code display blocks with expository prose


"More efficiently"? Maybe. It opens up a new way to think about solutions to problems. Sometimes those solutions are more efficient, and sometimes they are just different.

It's a useful thing to learn though. And dare I say it, fun. Even if there was zero benefit to it, it'd still be fun. As it turns out, there really are benefits.

For me, the biggest benefit is when I'm working with data interactively. The syntax allows me to do a lot of complex operations on sets of data with only a few characters, which makes you feel like you have a superpower (especially when comparing to someone using Excel to try to do the same thing).


I've found that the challenge is to "think in vector operations" rather than iterating over the same data. The tricky part is figuring out how to get an operator to do the right thing over an array of stuff on the left-hand side and this list/bag/etc. of arguments on the right.


Every K program ought to end in QED, and then I remember that KQED is also a thing, and I wonder if their two worlds have ever overlapped.

(KQED is the Bay Area PBS partner. PBS is the US public television org.)


For me one of the most important things here is the clarity of the problem -maker- at the top. That's the difference between the "Iversonian" symbolic languages (J and K included) and others. It doesn't have the elegance and power of a one-line solution, but it's just so clean and comprehensible even without the disciplined commenting. (Although I really think lamp is not a good comment glyph. Sorry about the sacred cow I just took a swipe at, fellow array nerds.)

One line solutions are incredible, and tacit is mind-bendingly cool. To use the unique compactness of a glyph-based language as a way to efficiently describe and perform functional programming - then to do that all over arrays!? - whoever had these ideas [0] is utterly genius.

But as someone trying to make time to write a program ground up in APL, knowing that I won't be able to make it just a set of really good one liners, that example is also significant for me.

[0] https://www.jsoftware.com/papers/fork.htm


Just because you can write everything on one line without any spaces doesn't mean you should.

You can of course remove the capability to do that, and you'll effectively force the programmer to write more verbose code, but then its strength as an interactive tool is very much reduced.

The Iversonian languages have the capability to write incredibly terse code, which is really useful when working interactively. When you do, your code truly is write-only, because it isn't even saved. This is the majority of the code that at least I write in these languages.

When writing code that goes in a file, you can choose which style you want to use, and I certainly recommend making it a bit less terse in those cases. The Iversonian languages are still going to give you programs that are much shorter than in most other languages, even when written in a verbose style.


Most people are put off by the symbols, that wasn't really the issue I had.

So I do love APL and arraylangs, and learning them was really helpful in a lot of other languages.

But they never became a daily driver for me not because of the symbols, which were honestly fine if you stick with it long enough, but after about 3-4 years of dabbling on and off I hit a wall with APL I just couldn't get past.

Most other languages I know have a "generic-ish" approach to solving most problems, even if you have to kludge your way through suboptimally until you find "the trick" for that particular problem, and then you can write something really elegant and efficient.

With APL it felt like there was no kludge option -- you either knew the trick or you didn't. There was no "graceful degradation" strategy I could identify.

Now, is this actually the case? I can't tell if this is a case of "yeah, that's how it is, but if you learn enough tricks you develop an emergent problem-solving intuition", or if it's like, "no, it's tricks all the way down", or if it's more like, "wait, you didn't read the thing on THE strategy??".

Orrr maybe I just don't have the neurons for it, not sure. Not ruling it out.


You're not wrong. It's very easy to get that impression when trying to learn the array languages. It's very easy for someone who's used these languages for a long time to look at a problem, and say "why did you use that really elaborate solution, when you can just use ⍸⍣¯1?". No one probably ever told you that ⍸ has an inverse, and how you could use it.

Even today, after having worked in these languages for years, I am still put off a bit by the walls of code that some array programmers produce. I fully understand the reasoning why it's written like that, but I just prefer a few spaces in my code.

I've been working on an array language based on APL, and one of my original goals was to make "imperative style" programming more of a first-class citizen and not punish the beginner from using things like if-statements. It remains to be seen how well I succeeded, but even I tend to use a more expressive style when terseness doesn't matter.

Here's an example of code I've written which is the part of the implementation that is responsible for taking any value (such as nested arrays) and format them nicely as text using box drawing characters. I want to say that this style is a middle ground between the hardcore pure APL style found in some projects and the style you'll see in most imperative languages: https://codeberg.org/loke/array/src/branch/master/array/stan...


Very nice! I like the readability -- not sure if that's just indicative of your style or the language, and the map construct is also nice. I don't remember any off-the-shelf map construct, at least not in Dyalog.


Dyalog doesn't have an explicit implementation for maps, but you get the same effect with column-major table stores and the implicit hashmap backing of the search-like primitives [0]. E.g.

    keys←'foo' 'bar' 'baz'
    values←1729 42 0.5721

    indexOf←keys∘⍳  ⍝ The dyadic ⍳ here is what builds a hashmap
Then you can use it like

    data←(values⍪¯1)[indexOf 'bar' 'bar' 'baz' 'foo' 'invalid' 'foo']
where ¯1 is just the value you want missing keys to map to. If you're okay erroring in that case, it can be left off. For map "literals", a syntax like the following gets you there for now:

    k v ←'foo' 1729
    k v⍪←'bar' 42
    k v⍪←'baz' 0.5721
In version 20, proper array literal notation [1] is landing, where you'll be able to do:

    keys  values←↓⍉[
    'foo' 1729
    'bar' 42
    'baz' 0.5721]
In practice, I suspect that this ends up being more ergonomic than actual maps would be in the language. That said K is all about maps and the entire language is designed around them instead of arrays like APL. IIRC, there was also some discussion on the J forums a while back about whether or not to have explicit hashmap support [2].

[0]:https://help.dyalog.com/19.0/#Language/Defined%20Functions%2...

[1]:https://aplwiki.com/wiki/Array_notation

[2]:https://groups.google.com/a/jsoftware.com/g/forum/c/VYmmHyRo...


It's likely a combination of both. It's certainly possible to write Kap in a much more condensed form. But things like if-statements and hash maps do allow for a more imperative style.


The wall you describe is a legitimate problem with the current APL on-ramp. One of my talks last year was on this exact issue [0]. It's definitely not you.

That said, it's also really not a limitation with the languages either. In my experience, punching past that wall is exactly the process of making the paradigm click. It took me a good 500 hours hacking on my YAML parser prototype over the course of a year before the puzzle pieces began to click in place.

Those lessons are still percolating out, but it feels like some combination of 1) data-driven design principles, 2) learning how to concretely leverage the Iversonian characteristics of good notation [1] in software architecture, and 3) simple familiarity with idioms and how they express domain-specific concepts.

Feel free to contact me if you'd like to chat directly about this and overcoming the wall.

[0]:https://dyalog.tv/Dyalog23/?v=J4cg6SV92C4

[1]:https://www.jsoftware.com/papers/tot.htm


There is a video about this.

https://www.youtube.com/watch?v=DmT80OseAGs

You can try the solution at https://tryapl.org/


It may be interesting to compare this one line to "Code Golfed" equivalents in different programming languages:

https://codegolf.stackexchange.com/questions/tagged/sudoku?t...


Funnily, the top[1] solution for this specific problem (a brute-force Sudoku solver) is the K snippet. Second comes a J solution that replicates the K one.

[1]: https://codegolf.stackexchange.com/a/5030


The LoC count and similar metrics have the advantage of being easy to calculate.

Ultimately, though, they are a proxy for more relevant but difficult-to-determine attributes, such as:

Given a reasonably proficient engineer, the amount of time it would take them to resolve a bug in code written by someone else, or alternatively to extend its functionality in some way.


Not knowing K, am I correct in assuming this is a backtracking brute force solver?


From the linked page (and the one linked beyond that), it's actually a breadth-first search. Keep a list of all possible puzzle states, pick a blank cell (theoretically arbitrary, but in practice chosen intelligently for performance), and replace each state with copies that have every legal digit for that cell filled in.
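
A minimal Python sketch of that idea (my own illustration, not a transliteration of the K; grids are flat lists of 81 ints with 0 for blanks, and blanks are taken in plain ascending order rather than chosen cleverly):

    def valid(s, i, d):
        # can digit d go at flat index i without clashing with a peer?
        r, c = divmod(i, 9)
        for j in range(81):
            jr, jc = divmod(j, 9)
            if s[j] == d and (jr == r or jc == c or (jr // 3, jc // 3) == (r // 3, c // 3)):
                return False
        return True

    def solve_bfs(grid):
        states = [grid]
        for i in [k for k in range(81) if grid[k] == 0]:  # fold over the blanks
            states = [s[:i] + [d] + s[i + 1:]
                      for s in states
                      for d in range(1, 10) if valid(s, i, d)]
        return states  # every completed, valid grid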


That sounds like 100+ lines in Python or similar languages…


You should be able to do it in under 20 lines of numpy, using the same matrix operations as the K code.
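
For instance, something along these lines (a hedged sketch of one way to vectorize the breadth-first search with numpy; my own formulation, not a port of the K):

    import numpy as np

    def solve(g):
        # g: 9x9 array of ints, 0 = blank; returns an array of all solutions
        g = np.asarray(g)
        idx = np.arange(81)
        row, col = idx // 9, idx % 9
        box = (row // 3) * 3 + col // 3
        # peers[i, j]: cells i and j share a row, column, or box
        peers = (row[:, None] == row) | (col[:, None] == col) | (box[:, None] == box)
        states = g.reshape(1, 81)
        for i in np.flatnonzero(g.ravel() == 0):
            # used[s, d]: digit d+1 already occurs among cell i's peers in state s
            hits = states[:, None, :] == np.arange(1, 10)[None, :, None]
            used = hits[:, :, peers[i]].any(axis=2)
            s_ix, d_ix = np.nonzero(~used)  # every (state, digit) branch to keep
            states = states[s_ix]           # fancy indexing copies the rows
            states[:, i] = d_ix + 1
        return states.reshape(-1, 9, 9)

Each pass through the loop multiplies the list of candidate states by the number of legal digits for that cell, which is the same breadth-first scheme described above.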


Numpy is indeed very APL. Just more horrible to me; not Python-y, and annoyingly verbose for the APLer.


A few years back I made a modest attempt at writing a concise yet readable sudoku solver in Python - in about 29 lines: https://github.com/hrzn/sudoku/blob/master/sudoku.py

Could have been made shorter at the price of readability.


Looks nice. Since it already imports numpy, it could use (more of) numpy's operations to squeeze the validation functions and nested fors into one. That should result in shorter code, but readability will probably depend on the reader's experience with array programming.


It probably isn't. At least, not for Python.


The K code at least isn't doing any heuristics for the iteration order; it just folds over the indices of zeroes in index-ascending order.


For sudokus of size 9x9 and 16x16, almost any unoptimised DFS will work just fine (even for hard sudokus [0]). The real challenge is sudokus of size 25x25 and above.

[0] https://cdn.aaai.org/ocs/2517/2517-11201-1-PB.pdf


Much better than some of the garbage solutions I have seen, including from sources that should know better, like The Algorithm Design Manual. Some really absurd approaches out there, so bad I wrote a blog post about it in 2015: https://www.grahamwheeler.com/post/sudoku/


Well if we are showing off sudoku solvers, it would be a sin not to share this one:

  % In Scryer Prolog; these imports make the snippet load as-is
  % (on SWI-Prolog, library(clpfd) provides the same predicates).
  :- use_module(library(clpz)).
  :- use_module(library(lists)).

  sudoku(Rows) :-
        length(Rows, 9),
        maplist(same_length(Rows), Rows),
        append(Rows, Vs), Vs ins 1..9,
        maplist(all_distinct, Rows),
        transpose(Rows, Columns),
        maplist(all_distinct, Columns),
        Rows = [As,Bs,Cs,Ds,Es,Fs,Gs,Hs,Is],
        blocks(As, Bs, Cs),
        blocks(Ds, Es, Fs),
        blocks(Gs, Hs, Is).

  blocks([], [], []).
  blocks([N1,N2,N3|Ns1], [N4,N5,N6|Ns2], [N7,N8,N9|Ns3]) :-
        all_distinct([N1,N2,N3,N4,N5,N6,N7,N8,N9]),
        blocks(Ns1, Ns2, Ns3).
While not one line, to me it is Pareto-optimal for readability, elegance, and power, thanks to the first-class constraint solvers that ship with Scryer Prolog.

If you want to learn more about it or see more of Markus's work:

https://www.metalevel.at/sudoku/

https://youtu.be/5KUdEZTu06o

More about Scryer Prolog (a modern, performant, ISO-compliant Prolog written mostly in Rust):

https://www.scryer.pl/

https://github.com/mthom/scryer-prolog


It has strong Perl vibes and it brings back PTSD :D. Maybe this overshortification of things is a personality or intelligence indicator of some sort.


So how do you feed in the puzzle instance if the code is only this?

Nebulous1:

Here is the line, it is written in K. K is a language created by the same person (Arthur Whitney) based on APL and Scheme. x(,/{@[x;y;]'(!10)^x*|/p[;y]=p,:,3/:-3!p:!9 9}')/&~*x


I put 2011 in the title above because https://web.archive.org/web/20110813135700/https://dfns.dyal... appears to have the main thing - is there a better year?


The discussions around “line noise” languages are always interesting.

Most programmers would agree the ‘/’ symbol is at least as clear as writing ‘divideBy’. The question is how often the symbols are used and whether their frequency in code justifies learning them.



This is (predictably) wrong.


hah


It's cool in a novelty way that it’s so short, but I would infinitely prefer something like this for actual work and understanding:

  def solve(grid):
      # Locate the next blank cell (marked 0), scanning in row-major order.
      def find_empty(grid):
          for r in range(9):
              for c in range(9):
                  if grid[r][c] == 0:
                      return r, c
          return None

      # Check whether placing num at pos breaks a row, column, or 3x3 box.
      def is_valid(grid, num, pos):
          r, c = pos
          if num in grid[r]:
              return False
          if num in [grid[i][c] for i in range(9)]:
              return False
          box_r, box_c = r // 3 * 3, c // 3 * 3
          for i in range(box_r, box_r + 3):
              for j in range(box_c, box_c + 3):
                  if grid[i][j] == num:
                      return False
          return True

      # Depth-first search: fill the first blank, recurse, undo on failure.
      def backtrack(grid):
          empty = find_empty(grid)
          if not empty:
              return True
          r, c = empty
          for num in range(1, 10):
              if is_valid(grid, num, (r, c)):
                  grid[r][c] = num
                  if backtrack(grid):
                      return True
                  grid[r][c] = 0
          return False

      backtrack(grid)
      return grid


Why is this getting downvoted without comment? Is comparative analysis taboo now? I don't think Arthur Whitney would feel the least bit threatened by some Python code.


Speculation, but maybe because there is nothing of interest or to note in the comment.

It's not clear why the poster prefers that other implementation, or that they understand APL or array programming.

So as a result the comment reads as "it's in a language I don't know. I'd prefer it in a language I do know." Which is a fairly useless comment.

If that's not what they intended, it would be helpful for them to add some context to their comment.


The K-mafia is in control. Just kidding, I don’t really care either way…


Someone should collate exceptional human coding achievements to test future AI.

AFAICT AI cannot replicate this yet; it will be interesting when that day comes.


I thought it was written by Ursula K. Le guin.

Not sure where I got that from.


"one line in your custom language" is not one line at all lol


To be fair, K is a real language that's used by more than just him.

Why array languages seem to gravitate to symbol soup that makes regex blush I'll never know.


Yeah I think MATLAB and Mathematica are waaay more used than K et al. They just don't look insane so people aren't posting them on HN as much.


[flagged]


Array languages have been around far longer than any "HN crowd".


Which is totally orthogonal to the original statement and to my reply to it. The original statement was that array languages seem to tend toward letter soup; my reply was that a selection bias is at play, since array languages are widely used, most notably Matlab, which is not letter soup. It simply isn't regurgitated on this site because it does not seem as hardcore.

Nevertheless, you are right that array languages have been around far longer; Matlab itself dates back to the 1970s.

I do not understand the awe some are giving them in the comments. They are an easy-to-understand paradigm, very well suited to certain types of problems. Some have overly terse syntax, but I do not feel that only geniuses can comprehend array programming; anyone who has learned some university-level physics or signal processing has the tools in their belt.


Does anyone have any thoughts on what motivates people to play sudoku, or to write solvers for it? I have trouble finding motivation to solve artificial problems. That said, I sink hundreds of hours into Factorio.


For me personally, I have little motivation to do classical sudokus. They either have a not-so-elegant solve path (usually set by a computer) or are too difficult for me to solve.

Variant sudokus on the other hand are a lot of fun. They often have very elegant solve paths and there are many neat tricks you can discover and reason about.

Some fun ones, if you'd like to try:

- https://logic-masters.de/Raetselportal/Raetsel/zeigen.php?id...

- https://logic-masters.de/Raetselportal/Raetsel/zeigen.php?id...

- https://logic-masters.de/Raetselportal/Raetsel/zeigen.php?id...


To each their own, but the puzzles you linked seem really convoluted compared to regular Sudoku.

The last puzzle has no fewer than 9 custom rules, in addition to the regular Sudoku rules, and then it also says “every clue is wrogn [sic]” implying there is some meta out-of-the-box thinking required to even understand what the rules are. That is more a riddle than a logic puzzle.

By contrast, the charm of classical Sudoku is that the rules are extremely simple and straightforward (fill the grid using digits 1 through 9, so that each digit occurs exactly once in each row, column, and 3x3 box) and any difficulty solving comes from the configuration of the grid.


I also mostly enjoy Sudoku variants, most of which, interestingly, I discovered via geocaches. After solving a few I implemented a solver with customizable constraints; if anyone's interested, it should still be available here:

https://www.sudoku-solver.ch/


Like many puzzles, there’s a regular release of endorphins as you progress, and a lot of satisfaction in completing something. I enjoy puzzles just like reading a book or playing a game, it’s another world I can step into for a bit of an escape, but I like to think it’s decent mental exercise. Overall I vastly prefer cryptic crosswords where solving each clue genuinely brings a smile to my face, but that’s more of a commitment of time (and for me sometimes a guarantee of frustration). I also like doing puzzles in the newspaper because me and my kids can sit together and all contribute. Coffee, breakfast, sat in the sun with a newspaper and a good pencil[1], absolute bliss if you ask me.

As for solvers, it’s a very elegant, well-formed problem with a lot of different potential solutions, many of which involve useful general techniques. I used to dabble clumsily in chess engines and honestly it’s the only time I’ve ever ended up reading Knuth directly for various bit twiddling hacks, so it’s always educational.

1: https://musgravepencil.com/products/600-news-wood-cased-roun...


I don't particularly enjoy sudoku but I like word puzzle games.

They're all artificial problems, but your brain likes a challenge and you get a dopamine hit when you solve it, I suppose.


All games are artificial problems, so your question actually is, what motivates people to engage in pastimes?

Sudoku, crosswords, Simon Tatham's puzzles etc. are an excellent way to pass the time while keeping the mind in training. Sports are their equivalent for the body.

Finally, writing solvers for a problem, be it real or artificial, is for many just another variety of puzzle to engage in.


idk man, you ask a good question. I think the idea has to do with the saddle you put on the invisible horse that is the game’s problem. Factorio has several complex saddles you must master to tame the beast. In factorio, you can get progressively better at using these saddles to tame even the most unwieldy scenario. Sudoku, at its heart, is not much different than factorio. However sudoku has one narrow problem with many different, increasingly nuanced ways of solving it. Factorio has many different “sudoku” style problems, but each problem needs to be handled differently, with each problem having increasing levels of sophistication. I think you might like factorio more because it’s just a bigger steak to chew on, and you’ve got the right appetite.


I don’t care much for sudoku but I do enjoy crosswords quite a lot, which feels like a somewhat arbitrary exercise. I enjoy the fact that I know a lot of words and it makes me feel clever. There’s probably something to that with most puzzle type challenges.


I wasted too much time in my youth trying to min-max, and now I get bored as soon as I figure out, roughly, what the rules and mechanics look like for any game.


I teach C++ and I made my students code a Sudoku solver last year. It's a very convenient project to give them: self-contained, already familiar, no OS-specific weirdness, you get to use STL data structures and algorithms, very gentle I/O...


Normally I would concur, but I recently fell into a klondike solitaire binge and the only way out was to write a solver.


I play sudoku almost exclusively on the plane. It's a good way to lose 5-15min.


What baud is that? /s


My cat puked in the modem receiver cup, sorry.


mismatched, whatever it is, that's for sure. It's not quite line noise, so maybe it's just the wrong stop bit?


[flagged]


Please don't litter HN with LLM spam; it adds nothing of value to the discussion. You even said it yourself: you have no idea whether any of that word vomit is true.


It's your comment that's adding nothing.

It's very interesting that Claude can at least figure out it's a sudoku solver in K, where ChatGPT fails.


Gemini also identifies it as J; this is the output, for comparison:

The programming language used in the code is *J* (pronounced "Jay"). It's a concise, array-oriented programming language known for its expressive syntax and powerful capabilities.

Here's a breakdown of the code:

*1. Verb Definition:*

* `x(,/{@[x;y;]'(!10)^x|/p[;y]=p,:,3/:-3!p:!9 9}')/&~x` defines a verb (a function in J terminology) and assigns it to the variable `x`.

*2. Verb Structure:*

* `x( ... )/&~x` is the basic structure of the verb. `x( ... )` applies the verb defined within the parentheses to its argument, which will be `x` itself.

* `/&~x` is a hook, a control flow construct in J. It applies the verb defined within the parentheses to each element of `x` and then applies the verb `&~x` to the resulting array.

*3. Verb Body:*

* `,/{@[x;y;]'(!10)^x|/p[;y]=p,:,3/:-3!p:!9 9}` is the body of the verb. Let's break it down further:

* `{@[x;y;]` creates a gerund (a verb-like noun) that takes two arguments, `x` and `y`.

* `'(!10)^x` generates an array of `x` elements, each raised to the power of `!10` (factorial of 10). `/p[;y]=p` is a conjunction that appends the value of `p` to itself for each element in `y`.

* `,:,3/:-3!p:!9 9}` generates an array of 3 elements, each of which is the factorial of `-3` (which is undefined and results in an error) followed by the number 9.

*4. Overall Functionality:*

* The verb takes an array `x` as input.

* It applies the gerund to each element of `x`, creating an array of arrays.

* It then applies the conjunction to each of these arrays, appending the value of `p` (which is likely defined elsewhere in the code) to itself.

* Finally, it generates an array of 3 elements with errors and 9s.

* The hook `/&~x` applies the verb to each element of `x` and then applies a function that is likely defined elsewhere in the code (since `&~x` is not defined within this verb).

*Note:* Without more context about the definitions of `p` and other variables or functions used in the code, it's difficult to provide a more precise explanation of the verb's exact behavior. However, the breakdown above should give you a general understanding of the code's structure and logic.


For completeness, I tested o1-preview and o1-mini. Both were able to identify the language, but only o1-preview realized it was a sudoku solver.


Please kindly delete your account, destroy your devices, and move to the woods far away from technology.


Sure, but please elaborate...


I cannot comprehend the mindset of people who decide to spam (because that is what your comment is) any forum with a page of bullshit GPT slop. Do you think it's helpful or interesting?


Personally I believe it is interesting since it shows the current state of the art for the 3 LLMs.

I was surprised that Claude was able to identify the language and explain the code.

But, please feel free to downvote my comments. I guess the aggregate in the end will demonstrate whether it was a useful comment or spam + bullshit GPT slop.


It wasn't able to explain the code, it was wrong.


Perhaps you’ll be kind enough to post the explanation as a top level comment.

It’ll certainly help those of us who do not know K.


Sudoku was always a meditative thing for me. It’s impossible not to win so long as you pay attention. Optimizing solutions seems contrary to the point to me.


Solvers are useful for confirming that a puzzle you've received or generated is solvable. The meditative process can really go sideways when there is no solution for you to stumble upon.

Puzzles in commercial collections don't usually have that problem, but those from other sources sometimes do.

Solvers also make for a nice craft exercise, as here. Simple but not trivial, you can approach them in a lot of different ways and thereby work through different techniques or constraints you mean to explore.


> Puzzles in commercial collections don't usually have that problem,

I would argue that puzzles in commercial collections are more likely to have that problem than ones made freely available by hobbyists, as commercial enterprises inevitably cut corners on things like labour costs for an actual human setter.

I have seen dozens of commercial puzzle games and applications that do not make any attempt to verify the (auto-generated) puzzles as solvable, but I don't think I've ever had the same problem on a site like LMD.


I guess I'm the opposite. After doing a couple of sudoku many years ago my thought was "Hey, I could just automate this" and started thinking of algorithms.


Optimising solutions is the meditative exercise for me.

I enjoy running simulation after simulation after simulation, studying possible outcomes and optimising everything. Everyone is different :)


I find that sudoku is not a math or even a logic puzzle, but rather an epistemology puzzle. It's largely about how we know and how much we know, and if you get into speed-solving with some failure tolerance, estimating probabilities as you go, it adds even more thought-provoking rabbit holes.


Optimising a Sudoku solver can be seen as a different puzzle entirely and not as a mode of playing Sudoku.


Interesting position, and one that hadn't been expressed before. However, please note that the same could be said about writing a solver.


Meta: no need to downvote a comment you don't like for no reason. Engage instead. Why not have a chat?


Wouldn't it be more productive/rewarding to instead engage with comments I do like?


Only you can say what's best for you.

I have to ask: what's rewarding about only having your viewpoint reinforced?


Just under this submission I have upvoted a handful of comments with which I disagreed, mainly because of their replies.


Where are you getting the viewpoints thing from?


Downvotes and upvotes work together to manage the visibility of posts that align with the community's tastes.

While I myself found an opportunity to reply to the GP and didn't downvote them, their comment engaged with the article only in a shallow way, and even then, seemingly, just to dismiss the concept of a solver altogether.

It wasn't an offensive comment, but it didn't really contribute to the site in the way many people digging into deep technical walkthroughs like this expect to see.

Some downvotes weren't guaranteed, but they're not surprising, and they're probably helping new readers stay engaged with more topical and technical alternatives.

It's not the end of the world to get a few downvotes, and it's almost never personal. It certainly isn't here.


Aside: downvotes on HN can be an expression of age-related, self-righteous sniper pique; opinions on what contributes to a conversation can be all over the place and are entirely subject to biases, which can be interesting (I guess). Doesn't really matter, and Hail Satan anyway. Also, "Q for Mortals" is an interesting book.


People are saturated with anger and frustration after doom scrolling. They engage with their pitchforks.


A few anonymous downvotes are what qualifies as pitchforks these days?



