I'm still convinced that the "Clojure from the ground up" series [1] is the best and most effective language guide/tutorial I've ever read. The level of understanding at play there, in terms of building the correct mental models in the right order for a broad audience, is excellent.
'typing' is one of my all-time favorites. It's pure magic watching him pop what I suspect are dependent types out of Haskell's type system, and the prose is just laugh-out-loud funny.
True, now it's coming back to me. It blends different orders of kinds (I'm still probably butchering it).
The abuse makes sense in terms of the type system behaving as a logic / goal-seeking engine. It does showcase the truly impressive work going in behind the scenes of Haskell.
I understood this was about finding a cycle in a linked list. I wish I could understand the code part. But the writing part was pretty brilliant.
On another note, I only wish my technical interviews were so simple. People talk about weeding out those who can't program, but finding a cycle in a linked list is a prerequisite to the prerequisite for passing the Google interview questions I was expected to do in 45 min at the whiteboard.
FWIW, I don’t particularly like this interview question since it’s dangerously close to a riddle. Sorting, graph traversal, and other algorithms are well covered by undergraduate computer science curriculums, but this algorithm is basically impossible to come up with unless you know it already and it’s not even something that you would “know to know”…
Yeah, it's a terrible interview question because it's just trivia. It's one of those things where the best response may be "I know this problem. It's called tortoise and the hare. Do you still want me to do it on the whiteboard?"
Yeah, seems like there's somewhat of a debate over whether you should say "I know this one" in an interview, which I generally would not, but most linked list questions are so much trivia that you either know it or you don't. Contrast with a graph problem that I immediately know requires DFS, but I can never remember the details, so I can honestly put on a convincing show of working them out.
Next time I do a FizzBuzz, I'm going to add a printline("duck") to the beginning, just to see if anyone tells me, "That looks great. Just one thing--get rid of the duck." If they don't, I'm walking out.
I tried that at an Amazon interview where the interviewer gave me the "Alien Dictionary" problem unmodified. Calling her out on it wasn't wise; I'm sure she torpedoed my chances even though I did decently on the rest of the loop.
An Amazon interviewer once gave me the word ladder problem unmodified.
I am not a recent college grad, and I don't encounter a lot of interesting algorithm problems in my work, so I had prepared by working through some leetcode problems. But I had worked through only "easy" and some "medium" problems because I had the impression they didn't ask the harder ones. I was able to recognize it as a BFS graph problem and gave the time complexity (and space complexity, IIRC), but I did not finish it on the whiteboard.
Live and learn. Maybe I'll have another go in six months. Can't argue with the money and the resume cred.
I consider that to be a fairly reasonable interview question. If you're polite about it ("I've seen this one before, would you still like me to show it to you?") I can't see why anyone would penalize you for it.
> Sorting, graph traversal, and other algorithms are well covered by undergraduate computer science curriculums, but this algorithm is basically impossible to come up with unless you know it already and it’s not even something that you would “know to know”…
Even a lot of the "basic algorithms" taught in undergrad courses would be gnarly to invent on your own without clues in less than an hour. There's a reason they're named after famous computer scientists. If they were easy, we wouldn't bother teaching them.
Most LeetCode questions like this are just riddles, especially at the moderate+ range. You either studied the question and know the answer, or you haven't and could figure it out in a day or so, but not in 45 minutes. And studying these questions has no value outside of acing interviews. But at least they know the interviewee cares enough to cram for the interview.
I have never had to suffer these types of interviews. But I keep thinking: the problem was thought up and answered, with a lot of work, by someone smarter than anyone in the room. And they want to see if the victim can cough up the answer like a trained monkey? What does that prove, exactly?
actually, that's a good observation - this question depends a lot on knowing one specific solution to a problem. It really is an old "riddle", as was amusingly observed in the line: "Tim retells an old riddle, though he does not know its origins, and has the words wrong." I knew this was going to be a good one when I read that line, had me chuckling within the first few minutes.
I don't know why this was downvoted - it only takes a single counter example to invalidate an absolute claim like "impossible".
Of course I can't prove I independently reinvented the tortoise-and-hare algorithm, you'll have to take my word for it. But it should be obvious that somebody did it or it wouldn't be well known today.
Yeah, considering it was an open question for 12 years, I'd be pretty surprised if you solved it immediately after hearing the problem. If the interviewer provided hints and you got to it with some direction, I would find it more believable.
Which is more likely: that somebody figured out, in under an hour, a solution that previously took over a decade, and then felt the need to brag about it to strangers on the internet; or that somebody lied online to feel better about themselves? People are certainly more likely to assume the latter.
I didn't know the history, I had no idea it was an open question for 12 years!
As I said, it was a very long time ago when I heard of the problem. I don't think it was in an interview, but I honestly don't remember the circumstances. I do remember being quite proud of myself, so I guess there's truly an element of bragging in my statement. Doesn't mean I lied though.
It appears the tortoise-and-hare solution was found by Floyd in 1967. That means the problem was first proposed in 1955? I'd love to learn more about it, do you have a source for that?
mark-r, when somebody uses the term "impossible", it generally means one of two things: a) it's truly impossible, or b) it's very unlikely but still possible.
You settled for a). Typical of what geeks do (yeah, I'm guilty of it too).
Anyway, you seem to be an outlier in being able to solve the problem without having been familiar with it beforehand. Do you realize you are extraordinary?
(FYI: I'm not saying that with sarcasm; you might indeed be a "riddle" genius, or perhaps even a "real problem" problem solver.)
I can see now that I was interpreting the statement contrary to the intent of the writer. Sorry about that, and thanks for pointing it out.
I've always thought of myself as being pretty clever, but I'll bet many HN readers would say the same about themselves. "extraordinary" seems a little over the top, but thank you.
>I've always thought of myself as being pretty clever, but I'll bet many HN readers would say the same about themselves. "extraordinary" seems a little over the top, but thank you.
There are enormous implications to this seemingly harmless labeling of yourself (not blaming you). I insist you are extraordinary, not merely "pretty clever", based not only on my own observation but also on other people's reactions to your post.
Off the top of my head, there are two implications typically attributed to such traits (and I'm not saying you in particular cause these implications; it's just typical):
- If someone like you is an interviewee for a programming position, your average Joe interviewer who asks algorithmic questions (or often some clever question) will not like the fact that you can easily solve the problems you are presented with.
- If a person like you is an interviewer for a regular corporate programming position (which does not involve algorithms), that person is likely to pose problems of this nature to candidates for the job. When candidates routinely fail to solve those problems, the person is left wondering why the candidate pool is so bad. To the interviewer, the solutions seem so obvious (or easy to figure out) for someone who is just "clever".
And the industry complains how it cannot find people for CRUD programming jobs....
I figured that's what most "normal" people would assume (and so would I).
But computer geeks (we're talking about the kind of people who code for 'fun') are not normal people. Their default is to take words' meanings literally, unless they have somehow (often painstakingly) managed to learn the ways normal folks communicate.
Other people are implying that named algorithms, especially those named after someone famous, are somehow more special/difficult. Sometimes they are, but often they aren't. "Linear search" for instance is the name given to the easy idea of: iterate through an array until the value == search value or you hit the end. Most candidates could do this, though they might sweat a bit if you said "find x by implementing linear search" as the name for such a straightforward idea might not be known. They might have trouble writing proofs about its properties, which is where I imagine a lot of supposed scariness comes from. (i.e. "This approach looks correct but I haven't proved it yet." Academics aren't satisfied with just a test suite, or "looking at it".)
My favorite named-after-a-person one like this is Dijkstra's algorithm, which he claimed to have come up with in 20 minutes on the back of a napkin. If we suppose the average professional engineer is at most 3x slower/less brilliant than Dijkstra, it's not that unreasonable to imagine someone could reproduce the design on a whiteboard in a full hour...
Of course I don't buy that assumption, nor do I think it's a good problem or good idea to have as an interview filter even if it was true. (While I enjoy the occasional programming puzzle, I hate that they're lazily used to evaluate people in interviews so at least I avoid ever giving pure algorithm puzzles for interviews.) Nevertheless I agree with Mark that it's not "basically impossible" to come up with a good algorithm for many classes of algorithms and problems. I do wonder though how many people who could reinvent tortoise+hare without seeing it explicitly before would then be able to reinvent the teleporting turtle optimization right after.
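To make the "linear search" point above concrete, here's a minimal sketch in Java (the class and method names are my own, just for illustration):

```java
// Linear search: scan the array until the value matches or we hit the end.
class LinearSearch {
    // Returns the index of target in arr, or -1 if it isn't there.
    static int indexOf(int[] arr, int target) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == target) {
                return i;
            }
        }
        return -1;
    }
}
```

Trivial to write, yet it still has a name, and proving properties about it (termination, correctness) is a separate skill from producing it.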
Iterate over the list, re-tying it backwards as you go. If you reach the end, there's no loop; if you end up back at the head, there's a loop. If the order is important, you can re-tie again, still in O(N) time.
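A sketch of this re-tying idea in Java, assuming a bare-bones Node class (destructive: when there's no loop, it leaves the list reversed):

```java
class ReversalCycleDetect {
    static class Node {
        Node next;
    }

    // Walk the list, reversing each link as we go. If we fall off the end,
    // there was no loop (and the list is now reversed). If a loop exists,
    // the reversed links eventually walk us back out to the original head.
    static boolean hasCycle(Node head) {
        Node prev = null;
        Node cur = head;
        while (cur != null) {
            Node next = cur.next;
            cur.next = prev; // re-tie the link backwards
            prev = cur;
            cur = next;
            if (cur == head) {
                return true; // arrived back at the head: loop
            }
        }
        return false; // reached the end: no loop
    }
}
```
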
Fun and really dirty solution! I would never come up with this because I would not dare to modify a passed data structure unless this was the purpose of the procedure.
One could use that answer to gauge the humour of the interviewer :-) For me the expected reaction of a peer to this would be "Cute! Now explain the problems I could encounter when using it." If they can't deal with some fun, well, their loss.
If you can't modify the original list, you can create another as you iterate. But that will require extra O(N) memory, so is worse than the textbook solution with two pointers.
How would you end up there? With a loop, head+1 would point to head after the first pass, so you'll end up at the head. With no loop, head+1 would only be passed once.
> This seems just as magical as the original algorithm.
If you do a "change of coordinates" the original algorithm becomes trivial: If you know that any loop in the list doesn't begin after the node you're on, you can just mark where that is, run ahead, and check if you ever come back. In the general case, "change coordinates" so that with every step, the origin advances one step. Now you'll eventually be in the previous case.
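For comparison, a sketch of the textbook two-pointer version that this reframing makes trivial (hypothetical Node class): the fast pointer gains one node per step, so inside a loop it must eventually land exactly on the slow one.

```java
class FloydCycle {
    static class Node {
        Node next;
    }

    // Floyd's tortoise and hare: advance one pointer by one node and the
    // other by two. Inside a loop the gap shrinks by one each step, so
    // the pointers must eventually meet.
    static boolean hasCycle(Node head) {
        Node slow = head;
        Node fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
            if (slow == fast) {
                return true; // the hare lapped the tortoise: loop
            }
        }
        return false; // the hare fell off the end: no loop
    }
}
```
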
Since it's Java, you have access to a Set: add each node to the set if it's not already present, O(1) per operation.
A bit more aggressive: check Runtime.getRuntime().maxMemory(), calculate how many nodes can fit in the heap, and subtract 1 for each element visited; if the count goes negative, there must be a cycle. Still O(1).
With lower-level access, recognize that pointers are always aligned to a power of 2, so the low bit is free: tag each pointer as you follow it (p = p | 1); if you ever load a pointer with its low bit already set (p & 1), it's a cycle. You'll build up a trail of tagged pointers and have to fix them afterwards, but it's also O(1) space.
I dunno. The double advance is a cute trick, but if the fast-moving pointer doesn't do the check, you can easily spin into an infinite loop.
It's not a _hard_ problem. You just have to keep your wits about you. Lots of ways to solve it.
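The Set idea mentioned above, sketched out (assuming Node doesn't override equals/hashCode, so the HashSet compares by identity; amortized O(1) per lookup, but O(n) extra space versus the two-pointer version's O(1)):

```java
import java.util.HashSet;
import java.util.Set;

class SetCycleDetect {
    static class Node {
        Node next;
    }

    // Record every node visited; seeing one a second time means a cycle.
    // Node doesn't override equals/hashCode, so the HashSet compares
    // object identity, which is exactly what we want here.
    static boolean hasCycle(Node head) {
        Set<Node> seen = new HashSet<>();
        for (Node cur = head; cur != null; cur = cur.next) {
            if (!seen.add(cur)) {
                return true; // add() returns false for duplicates
            }
        }
        return false;
    }
}
```
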
I think it's an open secret that you're expected to have seen and practiced similar questions.
Perhaps unintentionally showing that these companies are more interested in you showing that you care about algorithms tricks you will realistically never use there than being a productive engineer (which is a pretty hard thing to measure )
I have a cynical take on this. They want to see if you can afford the time it takes to front load data structures and algorithms in short term memory. This is an excellent indicator of whether you have other demands (such as parenting) or even just outside interests that would interfere with your devotion to the job.
I have a similarly cynical take. It ensures successful candidates have studied a formal CS degree, and haven't "just" come in from a bootcamp or are self taught. It's not deliberate, but does ensure that exclusive FAANG jobs remain the domain of the privileged.
Even as a self-taught guy I don't agree with this take. I do get hung up on technical interview trivia, but that's because I refuse to load my brain with useless interview trivia. If I was going to play the game, there are plenty of special-purpose resources available that I could cram before an interview. There are also resources like Anki through which I could use spaced repetition to load it into longer-term memory.
I'd like to add that, given my personal experience, a formal CS education does not mean that learning is any easier. In my case, there have been few courses where attending actually helped at all.
Formal CS education requires good self-learning skills and is not a counter-indicator for self-learning.
(I agree with it being a mostly arbitrary criterion for evaluating a candidate.)
It’s also a great indicator for whether you already have a job or not. If you do have a job, you probably don’t have as much time to cram for interviews. Not a great thing to select for, in my opinion.
Yea, that’s pretty cynical. It maybe had an inkling of truth when the companies in question were wee startups. But working at FANG companies these days affords a pretty nice work-life balance.
Yeah, once I figured this out and just memorized everything, I started passing interviews at top tech companies. I used to think their employees were geniuses; really they just study a lot to get in.
I'm trying to find the original source for this, but there's a quote I've seen about how finding a cycle in a linked list used to be regarded as a FizzBuzz-like test. It came of age when most people worked in C; if you worked in C for a year, you'd know that cold.
I wonder how much the technical question approach isn't so much wrong as it is testing for things that matter much less now. There don't seem to be many questions about concurrency and distributed systems in this kind of interview, or at least not good ones. Everything now is "it depends" and the hard solutions are about ten lines of code for ten pages of problem explanation.
They already do; people just memorize a ton of algorithms, then hope to get the right questions, and boom, they're hired. Rank and file at these tech companies really are not that impressive.
This was much more about Java than Lisp. It was just a really out-of-the-way way of writing Java. Clojure almost made me hate Lisp because of all the leaning on Java that's done as shown perfectly in this post. Actual Lisp is a lot more fun.
Install SBCL, pick up a Lisp book from 30 years ago (e.g., Paul Graham's, Peter Seibel's, Norvig's), and start hacking.
We can keep debating all day long how Java and Javascript suck, but the massive ecosystems based on these two suckers sadly won't go away anytime soon.
There are of course languages that are more fun to work with, but at the end of the day we all have to eat. I would love to see Common Lisp's former glory to be restored, but sadly the language use in the industry seems to be shrinking every year.
Clojure is a modern, practical Lisp dialect that lets me use real REPL and structural editing and at the same time keep my sanity intact, plus I get paid. Please don't be so harsh about it.
I think having a Lisp as a hosted language was an awesome idea. I'd rather write (slightly crippled) Lisp that targets multiple platforms and get paid than go "hacking" in a (real) Lisp for free, or, even worse, use no Lisp at all.
You don't need to apologize for what you get paid to do. Barely anyone gets to work with the software they would use in their free time. That managers let you use some language does justify learning it, but it doesn't make that language better on technical (or aesthetic) merits.
PHP is bad. If you're getting paid for it, it's still bad. So what? People also get paid to scrub toilets.
Personally, I would rather write Java than Clojure. But I wouldn't quit if I suddenly had to use Clojure.
I haven't written much Java professionally (I know the language, I've used it), but I have written tons of Javascript code. I have to say - Clojurescript just makes much more sense. Java and Javascript are good as low-level languages for their respective platforms. But if you want to build something practical, Clojure really is much better.
And by the way, demonstrated Clojure code is very unusual. Nobody in practice writes anything even close to that. Java interop is baked into the language, but in practice it doesn't look as scary as in the blogpost.
If your grandmothers had taught you the elder language of the Sun, you would see the words, rituals, and invocations of javac as mere shadows of the sigils of the true magic.
I don't know, Clojure was a breath of fresh air when I first learned it and it's definitely more fun than what's shown in this post on a day-to-day, this post is just whimsy... (Though truly the more Java interop you need, the less pleasant it becomes.) But I agree Common Lisp is even more fun than that. It can also do the same sort of madness as this post shows. Here's my favorite related blog about it, even though it's not written in such a hilarious style: https://www.pvk.ca/Blog/2014/03/15/sbcl-the-ultimate-assembl...
> SBCL has definitely proven itself to be a good companion to explore the generation of domain-specific machine code. ... Steel Bank Common Lisp: because sometimes C abstracts away too much ;)
One would do well to keep in mind the third commandment of technology management: "Thou shalt not suffer a witch to live." A.k.a., nobody likes a smartass, regardless of how good they are. If you interview one, it's a no hire. If you're unfortunate enough to find out there's one on your staff, it's pink slip time.
The interviewer didn't do so hot either. Instead of staying silent (mentioned a few times) when things got off track, he should have intervened.
Ultimately, the purpose of the interview is to allow the interviewer to become convinced to recommend hiring. If you're going down a path that isn't going to result in that, you're depriving the candidate of that opportunity. Not to mention wasting everyone's time.
Sometimes that means the candidate will have to adapt their answer to the interviewer. Everybody is different, and this is the interviewer who has been assigned. Occasionally, that may even mean dumbing down your answer.
Of course, for narrative reasons, the interviewer in this story has to be passive. Otherwise the story would be about 90% shorter.
The entire SV mythology relies on competitive smartassery. That's the reason for asking this kind of thing in the first place. The culture fit metric usually requires a minimum level of it too.
Someone is generally classified as a smartass when the balance of intelligence is on the smartass side. So this is just giving idiot managers carte blanche to fire people smarter than they are. For job security I guess.
Not sure I agree. I've worked with a few "smartasses" in my time, and it can be a real pain. Some people are absolute masters of the technical details of a subject, but fail to see the wider context.
For example, you get developers optimising the hell out of a block of code - making it unmaintainable in the process - when the real bottleneck is developer time, not processor time. Or someone builds a giant vector autoregression model powered by MCMC when a rolling mean would have been adequate for the task.
Build the simplest tool that will solve the task at hand is not a mantra that comes easily to some people.
I've worked with smartasses, and I've worked with many people smarter than me. While there can be overlap, there generally isn't, in my experience.
Smartassery is something I've most often witnessed in people having a need for self-assertion. Most really smart colleagues I've had are confident enough in their skills that they don't need to prove them by being smartasses.
> Someone is generally classified as a smartass when the balance of intelligence is on the smartass side.
Sometimes that's true, when a dumbass is involved.
At least equally often, though, that's what smartasses tell themselves either to avoid facing their social deficiencies or because they lack the self-awareness to recognize them.
Part of the reason why the blogpost is fun is the neat technical details and outside the box way of approaching small problems.
It's nice if the interview can be used to figure out if the interviewer/candidate want to work together.
I think it's fair to be frustrated at the "whiteboard interview questions" that most people encounter when interviewing.
I'm not sure I agree. I recently rewrote code that another team member had just written, four layers of strategy patterns and factories, into a single one-page static class. Even they agreed that my code was easier to read and understand. I don't think they could have written my version yet, though. I'm trying to get them there.
My point is that it is sometimes easier to write complicated and unmaintainable code than to write clean and clear code.
If it's not code someone else could write, you aren't a disposable cog whose replaceability the company can leverage against you when negotiating compensation.
Conversely, well-written and well-documented code that the maintainer couldn't have invented themselves can still be maintained. Comprehension regularly exceeds the ability to create.
Slightly related, since this blog post has inspired me for a while. I would absolutely love to read a whole fantasy-themed "tome" that treats a programming topic as a school of magic. Maybe VR/AR is like illusion/alteration magic. Maybe robotics is the equivalent of summoning, or beast training, or something; maybe electrical engineering is like smithing or crafting. Computer history is lore. And so on...
I wonder if there's anything else out there like this idea? If not, I'll make it happen.
Agree - this style is so rich with possibility. I’m reminded also of the Young Lady's Illustrated Primer from Diamond Age, which included fairy stories that taught fundamental computer science. It also imagines a cultural upbringing where algorithms are the solutions to ancient riddles.
You might be interested in Rick Cook's "Wizardry" fantasy series from around 1990. A Silicon Valley programmer gets pulled into a fantasy world, and ends up writing a compiler for his own Forth variation that treats spells as assembly.
And then I wrote a list of other stories in this category. (Ra, linked from a sibling comment, is brilliant, by the way. It's less programming-oriented, but the concepts do underlie the story.)
https://fiftysevendegreesofrad.github.io/hard-comp-fi-fictio...
You're welcome to send pull requests to the list with more reviews etc. Though it's drifting from my control: having started to read some Vernor Vinge, I'm not sure he belongs on the list.
How about Charlie Stross's "Laundry Files" novels, in which magic is mathematics and software is a way of executing your magic quickly (and automating it)? You can also do it inside your own head, but since that attracts the attention of Cthulhuesque beings, there's a risk attached.
Awesome. I've read it on my phone. Nice thing about Clojure - like any Lisp it doesn't care about the width of your screen, it wraps, yet retains readability.
Who are all those people who constantly whine: "Lisps are unreadable". Dyslisplexic programmers?
No, I'm serious. Lisp is incredibly fast to write and (with a bit of practice) amazingly readable. People foreign to Lisp often look at it and, without even the slightest attempt to give it a try, immediately reject it as "hard to read".
But in reality, as I said - Lisp retains readability even on small screens, good luck trying that with literally any other programming language.
The only drawback of Lisp syntax I can think of - probably it is not a very good choice for white-boarding interviews.
> People foreign to Lisp often look at it and, without even the slightest attempt to give it a try, immediately reject it as "hard to read".
You'll have to take my word for it, but I have put significant time and effort into trying to familiarize myself with lisp. I still found it extremely difficult to read at the end of that effort. I don't think I'm alone in this. But there are definitely people such as yourself who find it extremely readable. I'm not sure why there's such a divide.
> Lisp retains readability even on small screens, good luck trying that with literally any other programming language.
Is this a need that arises frequently? I don't think I've ever wanted to do that, but I am willing to believe it's something many people might want.
In editors that have good support for vertical splits, yes. Before Lisp I would never keep more than two or three split windows; with Lisp I sometimes do four, even six vertical splits at the same time, just because I can.
I'm fairly fluent in for example regex and awk, but I'm not blind to what it looks like to someone coming into it with no experience. That doesn't mean they are bad programmers. There are a lot of languages, and some are closer to each other than others.
Maybe you started out with an easier way into Lisp, or had a knack for it, or just spent time in it at a point when you had more motivation or time to get into it. There's more to learn in the world than people could do in a thousand lifetimes, so I wouldn't fault anyone for turning around in the door if it's something they have no interest in at the get-go.
I'm not trying to belittle anyone. I just don't understand the tribalism against Lisp syntax. I question its sincerity, probably because of my own indisposition towards Lisp, which I carried for decades (without much rationale to support my skepticism). When I finally got to start using Lisp, I was fascinated by how much joy it brought to my work, and I hated my younger self and regretted that I had never encountered someone who would've convinced me to try it sooner.
All of those statements about Lisp would do better with the qualifier "for me". Lisp people tend to assume that what they can read most easily is most readable for everyone and that makes them dismissive of those who find it hard to read - and that dismissiveness counts against Lisp adoption.
> Lisp retains readability even on small screens, good luck trying that with literally any other programming language
I learned from Sinclair BASIC on a 32x24 character display on a black-and-white PAL television, as did a lot of people. Readability is highly subjective.
I've seen it multiple times: it's only the initial reaction, and it usually doesn't take long for anyone to adjust to the syntax. I have never met anyone who used a Lisp for several months and still hates it and finds it unreadable. I wouldn't count anecdotal claims from people online as proof that it's mostly true.
I have seen people using one Lisp, e.g., Clojure, have difficulty quickly parsing a different Lisp dialect, e.g., Emacs Lisp, but that's not "Lisp being unreadable"; it's just unfamiliarity with specific language idioms.
I have also seen people who learned Clojure before any other language and then tried learning a non-lispy language (Java, Python, etc.) and, surprisingly, claimed it to be harder to read (initially).
> that dismissiveness counts against Lisp adoption
I agree, but how do you fix this problem? You can't remove parens and keep it homoiconic. Tools like Parinfer do help, but they don't address the problem: Lisp doesn't look "sexy" for those who are unfamiliar with it. I kept ignoring Lisp for many years, simply because I didn't know better. I wish there were people who'd keep telling me that parentheses are not a problem, they are a solution.
I manage to misplace brackets and parenthesis in python. I fear what sort of trouble I could get myself into with a language with as many parentheses as lisp.
I have to say that I often have more problems with indentation and the like in Python or YAML, especially given the insistence that indentation means a specific amount of a specific whitespace character.
The low-level S-expression serialization kind of resulted in offloading this task to the editor, where it should lie, instead of burdening the developer with it. That said, even I will admit that without having experienced an editor handling parens for you, one's experience in other languages is going to suggest bad things. How many of us ended up having to manually count "end" keywords in Pascal and the like?
Still, with a proper editor (not just emacs), the experience is really, really different.
To elaborate on what others have said (editors), Emacs' auto-indentation of lisp is _phenomenal_, even if you're not using "paredit". You basically can press tab on any line, and it will be indented to the Right Place. It quickly gets relatively easy to correlate indentation depth to how many parens you need to close your s-expression.
I have encountered indentation problems in Python far more often than a misplaced paren in any Lisp. The latter is very rare, and every editor/compiler/linter has ways to detect it and help you fix it.
Readability is indeed subjective. A few years ago I myself would have strongly opposed my own words about Lisp being readable. Several years ago even plain English was pretty much unreadable to me. I guess it's a good thing I didn't dismiss it, otherwise we wouldn't be having this conversation :)
People generally use Emacs with ParEdit or Parinfer to handle balancing parentheses. Instead of working with the code at a textual level, you end up working with it at a structural one.
My friend and I both work at FAANG companies. He told me a story once about how he wrote the solution for a whiteboard problem in Haskell. Once he was done, the interviewer asked, "Great, so do you want to start coding this up?"
This is beautiful, and horrifying. A cleverly written tale around some deep magic.
And yet.
Who among you would hire this witch? Who among you would want to maintain this kind of code, should this witch move on to challenges more suited to her skills?
Who among you would dare to meddle with the work of a Real Programmer[1]?
It should not be taken too literally. Part of the context is how asking this type of puzzle question with a simple expected solution is trivialising to smart programmers; part of it is the question of how many places cargo-cult difficult interviews because Google does it that way, but then expect people to work on CRUD apps with little originality or autonomy. It would indeed be a disaster to hire someone like this for that job.
On the other hand, there are places where this kind of thing would fit right in: exploit writing, games development, certain kinds of HPC, cryptography, some firmware development. And it demonstrates the kind of deep understanding where being able to read this kind of thing is very useful to puzzle out some horrendous low-level mess that has been inflicted on you by a vendor. If it was John Carmack or Richard Feynman doing it, it would be celebrated.
The risk with "rockstars" is they do this stuff when you don't want them to. Codewitches are much rarer, and have a much better sense of when it's a good idea. Neither is ever really going to be a "team player" but can deliver amazing things.
Just a tip for anyone wanting to write Norwegian words. O and Ø are different letters. You don't make a word more Norwegian by adding slashes over random Os.
A coumparisoun would be to change "o" to "ou" tou make sentences louk moure English.
Hm, the wrist scars don't remind me of anything in particular. He later clarifies that the scars make out the letters HJKL around his wrist - so maybe it's just a gag for that reason.
When reading the story, I thought, "well of course that's the solution ...", but then couldn't recall ever learning it. I'm grateful that someone shared the name, as Wikipedia's explanation is surprisingly clear:
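For anyone who'd rather see the plain version than the bytecode, here's a minimal sketch of the tortoise-and-hare (Floyd's cycle detection) in Python. The `Node` class is just an illustrative stand-in, not anything from the essay:

```python
class Node:
    """Minimal singly linked list node (illustrative only)."""
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle(head):
    """Floyd's tortoise and hare: the slow pointer advances one node
    per step, the fast pointer two. If the list has a cycle, the fast
    pointer eventually laps the slow one and they meet on the same
    node; if not, the fast pointer falls off the end."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```

The clever bit the Wikipedia article explains is why this is O(1) space: you never mark visited nodes, you just rely on the relative speed of the two pointers.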
It's a very distant parody of the famous "Cracking the Coding Interview" book, and the whole set of expectations around technical interviews.
What's actually happening in the code: it starts off in the Lisp-on-JVM that is called Clojure, but for "performance reasons" drops down to hand-writing Java bytecode. A useful reminder that it's all just bytes and we don't have to arrive at them by the "normal" route.
I think it’s all just a joke or satire where he’s technically very talented and optimizing the code ad absurdum which goes well beyond what the interviewer expected. This is slightly nuanced, but it also shows how inadequate the technical interview is for finding top talent because of the subjectivity involved.
Guess I'll be one of the few to speak up and admit, I just don't get it. I'm sure I'll be downvoted for it by a dozen people smarter than me who resent my ignorance.
The prose seems to resemble something dark and beautiful and meaningful. But if it's meaningful it's lost on me, for the most part. And dark and beautiful and meaningless is honestly a bit torturous. Like trying to stare at a Pollock and understand something.
And the bytecode just couldn't be more boring to me. Why is bytecode for this mundane algorithm interesting?
Someone could do us second-class citizens (who I have no doubt are orders of magnitude more numerous than the privileged few who get it) a favor by providing us with a line-by-line analysis. But I guess that would ruin the feeling of privilege.
Put down your pre-emptive feelings of defensiveness.
FFS, there's no shame in not understanding something. There is shame to be had in throwing around accusations of privilege at people who understand something you don't.
People won't downvote you for not understanding. Some people will applaud you for saying you don't understand, up until the point they realise you're not seeking help in understanding, that you're not trying to improve your own knowledge and wisdom; that instead, you're trying to make yourself feel better by ascribing yourself some moral high-ground for not being one of those "privileged" people who understand something you don't. They might downvote you for being a passive-aggressive child about it.
Some people will applaud you for saying you don't understand
I've been around enough to know how wrong that is. Do you actually believe this? Honest questions asking for explanation are almost guaranteed to result in a negative vote sum.
instead, you're trying to make yourself feel better by ascribing yourself some moral high-ground for not being one of those "privileged" people who understand something you don't.
Make myself feel better for what? Maybe I needed to include the /s tags...
Honest questions asking for explanation are almost guaranteed to result in a negative vote sum.
Boo hoo hoo. Oh no, lost some internet points. Anyone whose opinion is worth a damn will applaud you asking honest questions; especially here, on defensive poser central.
Make myself feel better for what? Maybe I needed to include the /s tags...
You've gone for the "I wasn't being serious" defense, pretending you didn't mean it. However, your post wasn't remotely sarcastic and contained far too much detail and defensiveness, with a tone taking itself very seriously. You're convincing nobody.
What exactly do you now claim you were being sarcastic about? "The prose seems to resemble something dark and beautiful and meaningful"; that's sarcasm, is it? So you're saying it IS something dark and beautiful and meaningful? "And the bytecode just couldn't be more boring to me." Right, so you were saying the bytecode was some of the most interesting code you've ever read? If your post was sarcasm, you're truly terrible at it and should switch to irony or satire instead. Little bit of literary term privilege there for you to sarcastically pretend to object to.
FFS, this is the anonymous internet. If you can't be honest with yourself here, where will you be honest with yourself?
I don’t get anything about the code either, but the stuff around it is funny. I think it’s a bit funnier since I only understand half of it, which is exactly how the interviewer is supposed to be feeling.
I suspect in the end it's not much faster, but it is smaller. Really the fun in the essay is that that's not what any normal person would do as a solution to that problem, it's a stunt to show off deep mastery.
To me, mastery would be if you can use mathematics to trivialise an implementation in a counter-intuitive way. I understand that this is a different kind of mastery. But does the essay write code that is a sign of mastery, or does it just use the bytecode as a talisman? I don't follow the code, so I wouldn't know.
I think it counts as mastery, precisely because it does choose a thing so counterintuitive that it's past what most of us would even think of as a possible solution.
I spent many years writing Java, but never took the time to learn how to write the bytecode (that's why we have a compiler ...). I know that Clojure lets you call Java classes, but it's a completely crazy, magical, frightening, and awesome thing to see someone do it by writing their own class loader, and then writing their own class -- in bytecode! -- to load, when the _simplest_ and clearest solution would be far from that.
It's a bit like solving FizzBuzz with Tensorflow: terrifying on at least some level, as it makes it so clear that there's so much more depth I could be learning, yet exciting for almost the same reasons.
> It's a bit like solving FizzBuzz with Tensorflow
I had to wiki that, but it's a good point. The first thing about NNs that interested me, rather than images or audio or anything like that, is that a one-hidden-layer NN (IIRC) can approximate any continuous function (by adding more and more nodes to that layer).
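A toy illustration of that one-hidden-layer idea (my own sketch, with hand-picked weights, nothing to do with the FizzBuzz post): two ReLU units are already enough to represent |x| exactly, since |x| = relu(x) + relu(-x), and widening the same single layer gives piecewise-linear approximations of any continuous function on an interval.

```python
import numpy as np

def one_layer_relu(x, w1, b1, w2):
    """A one-hidden-layer network: hidden = relu(w1 * x + b1),
    output = hidden . w2. More hidden units (more entries in w1)
    mean more kinks in the piecewise-linear output, which is what
    lets the approximation get arbitrarily good."""
    hidden = np.maximum(0.0, np.outer(x, w1) + b1)  # shape (n, units)
    return hidden @ w2

# Hand-picked weights for |x| = relu(x) + relu(-x):
w1 = np.array([1.0, -1.0])
b1 = np.array([0.0, 0.0])
w2 = np.array([1.0, 1.0])

x = np.linspace(-2, 2, 9)
y = one_layer_relu(x, w1, b1, w2)
```

Here the weights are constructed by hand rather than learned; the universal-approximation result says weights like these exist for any target function, not that training will find them.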
I guess the point is also that any really deep rabbit hole is in some sense better than a selection of random 1000 rabbit holes.
Basically when you work on creative stuff you're drawn to it because you have good taste, but for a long time your work will disappoint you because of your good taste.
You don't have to publish them immediately; when it's only you reading your writing, you might feel embarrassed, but no harm or loss of status will come to you, so you absolutely should at least give it a try!