Is there a book somewhere that tries to set out all of the things that experts know about computing that they don't remember learning?
(In Zed Shaw's conception, this might correspond to "learn computing the hard way".)
I see his examples and other examples here in this discussion, and it makes me wonder about the value (or existence) of a very thorough reference.
I've also encountered this when working with lawyers who wanted to have a reference to cite to courts about very basic facts about computing and the Internet. In some cases, when we looked at the specifications for particular technologies or protocols, they didn't actually assert the facts that the lawyers wanted to cite to, because the authors thought they were obvious. I remember this happening with the BitTorrent spec, for example -- there was something or other that a lawyer wanted to claim about BitTorrent, and Bram didn't specifically say it was true in the BitTorrent spec because no BitTorrent implementer would have had any doubt or confusion about it. It would have been taken for granted by everyone. But the result is that you couldn't say "the BitTorrent spec says that" this is true.
Another example might be "if a field is included in a protocol and neither that layer nor a lower layer is encrypted with a key that you don't know, you can see the contents of the field by sniffing packets on the network segment". It might be challenging to find a citation for this claim!
So we could also wish for an "all our tacit knowledge about computing, programming, and computer networking, made explicit" kind of reference. (I'm not sure what kind of structure for this would be most helpful pedagogically.)
I would find some resource of assumed knowledge useful in my career field of chemistry and biology, but what really resonated with me from zedshaw's piece was something even more basic to the field. The biggest hurdle I had to learning to code was finding a text editor. The next big hurdle I had was finding somewhere to host my one-page HTML site. I ended up asking a friend who suggested Sublime Text and Amazon AWS. At that point I'd been reading HN for two years and had heard of these things, but not understood their central utility (or the utility of similar such general services) to doing anything with code. This is the level of beginner that I would hope zedshaw's efforts would target, someone like me from last year.
I want to emphasize that while learning abstract functions, and understanding that code syntax is an abstraction for electrons moving through logic gates, are fundamental concepts for an early programmer, learning those concepts was less frustrating for me than finding somewhere to write the text I had learned from Codecademy. I am a chemist by formal training; I took an extremely abstract multivariable calc course in college that taught me Big and Little O, and functionalized concepts most people learn by rote "drills," and I consider learning new concepts my strongest career skill. I don't mean to humblebrag here, but rather to refute the only-somewhat-popular sentiment I've seen on HN that non-coders "can't get shit done." No. I am a non-coder that does shit. In moving from chemistry to neurobiology and biophysics, there are basic skills that can't be found in a textbook, like _this is a pipette_, and _this is a flask to grow cells_, and if you don't know those things you won't be able to do experiments, and you'll fail the informal subtext of an interview. The best resource I've found (in three years of reading HN anonymously) for analogous tool-teaching in code has been Michael Hartl's book on learning Rails, so thanks again mhartl! The first two chapters of that resource were more treacherous (but ultimately well-guided and successful) than teaching myself d3.js. A true, zero-level, adult beginner's guide to some code---manipulating an Excel sheet in Python, writing an API-interacting tweet bot---would be a great boon to people like me.
Charles Petzold's book 'Code' has a lot of this very basic, low-level information. It basically builds up from basic information theory to computers. It's not going to have everything you're looking for but I was surprised how much of it I "knew" without recalling where I learned it or how it connected to other things.
I had a similar experience with Code Complete (I know most of what's in it but I have no idea when I learned it). I don't know if it's as basic as @schoen wanted but it might be close.
You will likely have seen advanced developers recommending it. But it is meant to teach routine daily stuff like how to name variables, lay out functions, why we want abstraction layers, and the like. And it really is aimed at people who may know nothing about code construction.
Okay, thanks. I read it cover-to-cover before I interviewed at Microsoft my senior year in college. But - like the developer in Zed's article - I must have forgotten. :)
> Another example might be "if a field is included in a protocol and neither that layer nor a lower layer is encrypted with a key that you don't know, you can see the contents of the field by sniffing packets on the network segment". It might be challenging to find a citation for this claim!
The term of art here (as used in patent law, for instance) would be "person having ordinary skill in the art". Something like that (if you don't encrypt something in a network protocol, it can be sniffed) is "obvious to a person having ordinary skill in the art".
But yeah, that does make it difficult to cite sources for them. And in particular, it's difficult to throw the book at someone who doesn't take those things as obvious when there's no such book.
Another "obvious" thing that I don't know a citation for: it's impossible to store a piece of information in a binary such that it cannot be read by a person who has a copy of that binary. (Practical corollary: there's no such thing as a piece of information that's too secret to include in an Open Source driver, but that can be included in a binary driver.)
Patent cases are definitely going to include expert testimony on what someone with ordinary skill in the art knows or believes, but other kinds of court cases (or types of participation in a case) don't always have a simple opportunity to introduce expert testimony -- like writing an amicus brief.
So it would be pretty awesome to have the Book of Tacit Computing Knowledge somewhere.
I don't know of such a book that's targeted to experts, but you can get a good idea of those things by skimming the first 2 or 3 chapters of Zed's [Learn Python the Hard Way](http://learnpythonthehardway.org/book/ex0.html), and Appendix A.
I was recently working on a case and I needed a definition of a software library. I couldn't find one anywhere. So many of the basics of programming are undefined that it makes arguing about programming incredibly difficult -- see every discussion of what a functional language is, or strong vs. weak typing.
Wikipedia has an article on libraries, which includes the following definition:
"In computer science, a library is a collection of implementations of behavior, written in terms of a language, that has a well-defined interface by which the behavior is invoked."
> things that experts know about computing that they don't remember learning?
After a certain point, you can learn a lot about something by teaching it, because doing so forces you to re-evaluate things which you've internalized and forgotten.
This is good. I've had problems that were somewhat related to what the author talks about.
When I was learning C#, I was already quite fluent in C/C++, and I had a big problem with the C# type system/management. I'd been reading guides that were in the first category the author mentions, e.g. "not really a beginner, but new to this language".
I was trying to retrieve the bytes that a certain string represented. I looked for ages, and everywhere everyone mentioned that "this shouldn't be done", "just use the string", etc. A Stack Overflow answer mentioned a way to use an 'encoding' to get the bytes, and this seemed to be the only way.
"How strange," I thought, "I just want access to a pointer to that value; why do I have to jump through all these hoops?" None of the guides I was reading provided an answer, until I found a _real_ beginners' book. This book, helpfully starting at the real beginning of every language -- the type system -- finally gave me the answer I was looking for:
.NET stores/handles all strings by encoding them with a default encoding. It turned out that the whole notion of 'strings are only bytes' that I carried over from C++ does not work in C#. All those other helpful guides gleefully glossed over this and started right in on lambdas and integration with various core libraries, instead of focusing on the basics first.
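(For anyone else who hits this wall: the same idea is easy to see in Python 3 -- this is just a rough sketch of the general principle, not the C# specifics. Strings are sequences of code points; bytes only exist once you pick an encoding.)

s = "héllo"
print(len(s))                      # 5 -- Python counts code points, not bytes
utf8_bytes = s.encode("utf-8")     # the é becomes two bytes: b'h\xc3\xa9llo'
utf16_bytes = s.encode("utf-16-le")
print(len(utf8_bytes), len(utf16_bytes))   # 6 10 -- "how many bytes" depends on the encoding
print(utf8_bytes.decode("utf-8") == s)     # True -- decoding with the same encoding round-trips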
This has nothing to do with learning a programming language and everything to do with learning how to process text in a computer. Being a C programmer doesn't mean you have only a PDP-11's understanding of text ("it can be ASCII or EBCDIC, and I know how to convert between the two!").
When I learned C# (in 2003?), I learned that String is an array of Char and Char is 16 bits, and that .NET used the same encoding as Windows NT (UTF-16 in native endianness).
I knew that both WinNT and Java made the mistake of being designed at a time when people assumed 16 bits would be enough, and consequently caused the surrogate-pairs mess. I knew that Java assumes UTF-16BE and Windows assumes UTF-16LE. I knew what UTF-16 means in C/C++ and how to work with it or transform such data to and from UTF-8 and UCS-4.
When learning a new programming language, I know to look up whether strings are mutable and whether they're sequences of bytes, code units or code points. If they're immutable, I look up how to create a copy instead of a reference when using substring, and when they're not byte arrays I look up what real byte arrays are called in this language.
Should early programmers be taught this? Absolutely. At what stage? I don't know. But they must be taught from the start that this has nothing to do with a programming language and everything to do with how data is represented in memory.
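For what it's worth, the code unit vs. code point distinction is easy to show in Python 3 (a rough sketch only; the internal representation and the counts you see vary by language):

ch = "\U0001D11E"                    # MUSICAL SYMBOL G CLEF, outside the Basic Multilingual Plane
print(len(ch))                       # 1 -- one code point
print(len(ch.encode("utf-16-le")))   # 4 -- two 16-bit code units (a surrogate pair)
print(len(ch.encode("utf-8")))       # 4 -- four 8-bit code units
print(len(ch.encode("utf-32-le")))   # 4 -- one 32-bit code unit (UCS-4)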
I'd actually consider that kind of knowledge pretty advanced. Beginners (and early up to even junior coders) usually don't know much about the internals of their environment; they just use stuff.
I'm always interested in the internals; but it's often surprisingly hard to find information on the internals. There are few books, and you'll often need to read lots of source code and specifications and reverse engineer things to find out how stuff works under the hood.
> I'd actually consider that kind of knowledge pretty advanced.
For someone from a C background, that's not advanced: it's simply what strings are. The whole idea that characters aren't bytes may be very strange to someone who's only ever done C and C++. It's probably just as strange to them as the idea that there's any relationship between bytes and "the characters that make up a piece of text" is to someone entirely new to programming.
In a related anecdote, I was once in a room with 4 C programmers and a Haskell programmer who was trying to write his first C program. The Haskell guy asked "hm ok so how can I compare two functions to see if they're the same?" and after a 20 minute discussion the C guys still couldn't understand why you would possibly want that and the Haskell guy still didn't know how to continue (I was one of the C guys). All had many years of programming experience, but the frames of reference were simply so different.
I think it's smart of Zed to confront people with multiple programming languages from the beginning, so that this kind of issue never really becomes a problem.
You are almost certainly misstating the Haskell programmer's question, because C makes it very easy to test if two function pointers are equal (intensional equality) whereas Haskell makes it very hard.
I think they might have meant "whether two functions are structurally identical"—i.e. whether their post-link-load-phase object-code hashes the same, presuming they're both position-independent.
I'd have to disagree, but maybe I'm not getting your point.
It's more work in C and C++ (and a lot of other languages) to treat strings correctly, but unless you're talking about a C programmer who's been living under a rock for 20 years, most of them are familiar with the issues around Unicode, UTF-*, etc., and they choose to ignore the issue when they can get away with it. When it's important, there are libraries like iconv and ICU for handling it. C++ even has some character conversion built into the locale system, but it's super ugly (which goes without saying, because almost everything in C++ is ugly ;-)
As far as your anecdote goes, I know both C and Haskell, and the question doesn't make any sense to me either. Comparing two arbitrary functions for equality of behavior is undecidable in general. Even in Haskell, function types don't have an Eq instance, so it wouldn't be possible:
No instance for (Eq (a0 -> a0)) arising from a use of `=='
That is, functions aren't comparable (for equality, anyway), so the type system won't allow you to compare them.
The better answer is either "Look at their type signatures" or "See if they evaluate to the same values when given the same input"; the first is trivial, the second won't, in general, terminate, so you need a more nuanced conception of "equality" to apply in this instance. This is non-trivial to come up with.
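To make the second option concrete, here's a toy sketch in Python (an illustration only: checking a finite sample of inputs can prove two functions different, but can never prove them equal):

def probably_equal(f, g, samples):
    # Extensional check over a finite sample: a False is conclusive,
    # a True only means "not yet distinguished".
    return all(f(x) == g(x) for x in samples)

square = lambda x: x * x
also_square = lambda x: x ** 2
cube = lambda x: x ** 3

print(probably_equal(square, also_square, range(-100, 100)))  # True (but not a proof)
print(probably_equal(square, cube, range(-100, 100)))         # False (a counterexample exists)
print(square == also_square)   # False -- == on function objects compares identity, not behavior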
Kent Pitman has an interesting essay on this problem from a Common Lisp perspective: "The Best of Intentions: EQUAL Rights—and Wrongs—in Lisp"
That's only "basics" if you've got the wrong idea. There are millions of possible mistakes, no beginners' guide can explicitly address every one. People told you to just use the string - wasn't that a good enough answer?
> People told you to just use the string - wasn't that a good enough answer?
"Don't do that" isn't a sufficient answer without explaining exactly why, though. And if you aren't asking the right question, then the explanation might even seem obtuse.
I can relate; in this case the OP was actually hindered because he knew there are bytes behind the string. A key insight into why you shouldn't simply get the bytes is that the .NET char size is actually 2 bytes, and the internal encoding is UTF-16. Thus encoding/decoding is required, and if you haven't worked with encodings before, that can be a bit confusing IMO.
"Don't do that" is the right answer when you're asking the wrong question. It's an invitation to take a step back and ask how to do what you actually want to do, at a higher level.
There is nothing inviting about someone saying "don't do that" to you. If you actually want to understand a beginner's intentions, you're a lot better off asking them what their goal is.
In my experience mentoring new developers, it's much more helpful to ask "what is your goal?" instead of "don't do that."
When I was a kid learning line number BASIC, adults answered my questions knowing that I'd figure out The Right Way before anyone hired me to write the code for radiation treatment devices.
The tech community's obsession with "The Five Whys" is toxic. When asking questions, you always have to first prove that you deserve an answer. It becomes a process of trying to anticipate any potential reason someone might have to argue "You're doing it wrong" - and preempt that. You can't just ask a question: you have to both ask and justify.
Mostly I just don't bother, and I suspect that I'm not alone. And I have a degree and industry experience. It must be incredibly frustrating and discouraging for beginners.
Dunno. If someone asked me how to get the bytes of a string, I would ask why. Not because there's The Right Way to do things, but because they might be doing things The Hard Way.
A why can reduce the amount of code written by 100%.
Sure, but when someone is first learning, frequently the true answer to "why are you doing that" is "to see what happens" (even if they have some flimsy justification within their pet-project at the time.) Giving them the answer lets them go back to experimenting so they can see, for themselves, why the path they're heading down might not be such a good idea. Formative experiences and such.
Yeah, that's a fine answer. But they might just not know of the other way to do things.
Like, in Java, for a long time I didn't know there was an output stream that you could write a string to directly, so I was always getting the bytes to write it. I wouldn't call that a formative experience.
This ties into another of my pet-peeves: people who answer my "how do I do this" question with "don't do that". Quite often I have very good reason for wanting to do that (getting around another bug, esoteric requirements, etc). Whenever I answer a question, I'll first tell them exactly how to do what they're asking for, and THEN explain why doing that is usually a bad idea, and THEN show some alternatives that will probably do what they want.
The above link is to a Hyperbole and a Half cartoon image of a person and the words "No, see, that solution is for a different problem than the one I have."
I've been teaching coding to beginners for the past year now...and even after having done coding workshops/tutorials for many years previous, I've found I can never overestimate how wide the knowledge gap is for new coders.
Yesterday I was talking to a student who had taken the university's first-year CS course, which is in Java...she complained about how missing just one punctuation mark meant the whole program would fail...While I can't passionately advocate for the use of Java in first-year courses (too much boilerplate, and the OOP part is generally just hand-waved-away)...I've realized that the exactness of code must be emphasized to beginners. And not just as something to live with, but something to (eventually) cherish (for intermediate coders, this manifests itself in the realization that dynamic languages pay a price for their flexibility over statically-typed languages).
Is it a pain in the ass that missing a closing quotation mark will cause your program to outright crash, at best, or silently and inexplicably carry on, at worst? Sure. But it's not illogical. Computers are dumb. The explicitness of code is the compromise we humans make to translate our intellectual desire to deterministic, wide-scale operations. It cannot be overemphasized how dumb computers are, especially if you're going to be dealing with them at the programmatic level...and this is an inextricable facet of working with them. It's also an advantage...predictable and deterministic is better than fuzziness, when it comes down to doing things exactly right, in an automated fashion.
I think grokking the exactness of code will provide insight into the human condition. While using the wrong word in a program will cause it to fail...we perceive human communication as being much more forgiving of not-quite-right phrasing and word choices. But is that true? How do you know, really? How many times have you done something, like forget to say "Please", and the other person silently regards you as an asshole...and your perception is that the transaction went just fine? Or what if you say the right thing but your body (or attire) says another? Fuzziness in human communication is fun and exciting, but I wouldn't say that it's ultimately more forgiving than human-to-computer communication. At least with the latter, you have a chance to audit it at the most granular level...and this ability to debug is also inherent to the practice of coding, and a direct consequence of the structure of programming languages.
A good analogy I've heard to explain this is how you'd request a glass of water from the kitchen from a friend versus a computer. You can simply tell your friend "get me a glass of water" and they'll understand what you're asking. With a computer though, you must be completely explicit with your instructions, for example: walk to the kitchen, open the top left cabinet, take out a glass, put it underneath the faucet, turn the faucet on until the glass is 80% full... etc.
Bring in a few loaves of bread, jars of peanut butter, and jars of jelly, with a few utensils. Also, lots of paper napkins.
Have the students spend 10-15 minutes writing 'how to make a PB&J sandwich' instructions. Select volunteers to read their instructions while you follow them as a computer would. Explain that this is how computers work.
Get some bread: grabs entire loaf of bread, uses the entire loaf for following instructions.
Open the loaf of bread: Rips open the entire package.
Open the jar of peanut butter: Fails to rip the lid off.
Spread peanut butter onto bread: Grab a big handful of peanut butter and spread it messily over the bread.
You can demonstrate how a more flexible language, instead of just stopping and crashing (i.e. Unknown command 'open peanut butter'), can produce far worse results.
You can even get into the different levels of languages by showing the difference between motion-by-motion (assembly) instructions vs. ones which assume general sandwich knowledge (C/C++/etc).
I saw this exact demonstration (PB&J) in I think 5th or 6th grade. Which would have been 1995/1996. This was before I had any interest in software. But I do remember that, in fact I was thinking about it when reading earlier comments in this thread! So maybe it stuck with me and helped in some way.
One problem is that programmers accept this stupidity from the machine rather than try to fix it. For example, if the computer had a definition of a good PBJ and some sort of solver, it might just figure out how to construct a PBJ with the given materials. Or maybe it would ask for clarification of some steps. Of course we don't want to treat all problems as general cases to be solved on the fly, but I don't think we should accept complete stupidity from the machine either.
Programmers don't accept it, which is why higher level languages exist. At some point it reaches 'good enough' and most of us settle with some language, but there are always others pushing out new languages to fix some shortcoming they see.
It would be great to extend this by breaking each 'module' out to a separate student, and then trying to use them in sequence to complete the task, with a master sheet telling you when to use each module.
I do this. I just don't give them props. And it is a great effective approach that also gets them thinking about basic algorithms by breaking down problems.
Oddly enough, you could repeat this exercise today, strictly with the computer itself. How would you request from a friend a list of the files in a particular folder whose names end with a particular extension? If it's a GUI based system, you have to be just as explicit as with the glass of water. In fact, written instructions for performing computer operations are often page after page of pictures of dialogs with circles and arrows, plus paragraphs of text. And your dialogs don't look the same because you have a newer version of the OS.
And people get them wrong.
This didn't really strike me until I started using Linux, even though I had originally learned on command line based computers. The same instructions for Linux users are a few lines that you enter into a terminal. I've even noticed a trend towards documenting Windows operations via a series of commands entered into the DOS box.
While that is generally true, one should remember that programmers spend a lot of their time trying to help abstract that all away. I wouldn't ask my OS to open the top left cabinet and take out a glass, I would ask it to give me a glass. The OS was built by people who already know the optimal glass fetching method, so I just need to tell it what I want, not how.
The problem there is that if you want to get anything original done, you're gonna need to start getting to that low level where the abstraction isn't there. We sort of side-step this in education by starting with heavily abstracted systems like Greenfoot, where you ONLY say what you 'want to happen', like 'object should move left'. Once students raised on those systems start to encounter real problems, they falter, because they've never had to cope with the computer's stupidity before.
Thinking about the OA, your student, the people who don't know how to find the | character, and about what people tinker with now.
The OA refers to more experienced programmers forgetting that they typed programs from computer magazines and soaked up the basics that way - that was a specific point in history, wasn't it, the early 1980s with BASIC listings to type and try to save on the cassette. I'm older and did BASIC exercises line by line on a teletype connected via an acoustic coupler and a modem (19" rack with a dial on the front) to a remote mainframe. And yes, we got line noise sometimes. And we learned about the need for exactness.
Have we reached a point in history where the machines are so shiny there is no way in? Should we give people recycled laptops with a command line linux install and suggest that they have to assemble their own UI out of bits and pieces of old school window managers? A prize for the most way out desktop?
There are always ways in. Kids are learning the concept and demands of precise 'command syntax' by typing commands in to the Minecraft terminal, or using command blocks. Maybe in the future people will be moaning about how kids these days with their gesture and voice based control interfaces never learned the precision needed for coding the way they did when they were younger: by typing in specific sequences of symbols to turn them into 'emoticons' in chat windows...
"Is it a pain in the ass that missing a closing quotation mark will cause your program to outright crash, at best, or silently and inexplicably carry on, at worst? Sure. But it's not illogical. Computers are dumb. The explicitness of code is the compromise we humans make to translate our intellectual desire to deterministic, wide-scale operations. It cannot be overemphasized how dumb computers are, especially if you're going to be dealing with them at the programmatic level...and this is an inextricable facet of working with them. It's also an advantage...predictable and deterministic is better than fuzziness, when it comes down to doing things exactly right, in an automated fashion."
And yet so much code has so many bugs, defects and errors in it. If writing code was as mechanical and deterministic as you think it is (and should be?), then why does so much production code suck?
Uh...the reasons can vary quite wildly...so I'll stick to one scope: Why does production code suck even when it's been written by competent programmers? I would argue that in a non-trivial system, it becomes difficult to program components that reliably interact with each other. If one team changes the interface to their component, that is not as "deterministic" from your perspective, if the production workflow doesn't have adequate automated testing and so-forth.
In other words, production code can suck because you've failed to anticipate all the non-deterministic things that may interact with your system. Or you may have poorly anticipated them...which can manifest itself in the form of bloated, overly cautious or overly permissive code that becomes too hard to reason about or debug.
But this is moving far away from the original topic of why beginning coders have such problems...sure, when they start working on production code and have to deal with the complexities of the real world, they'll have to adapt. But before they get there, they have to have faith -- at first -- then the ability to confidently reason about code, step by step, in the same way that (hopefully) they've done with math equations...I'm pretty patient when teaching most beginners...but one thing that still sets me off is when someone says, "Well, the code worked 5 minutes ago, and now it doesn't"...and instead of actually thinking through the reasons why that might be (and the reasons could be quite complex, to be honest)...they just accept that as something that just happens with programming. Or even worse, start pasting in random code that worked somewhere else...which literally makes as much sense as adding random digits to an equation to get it to "work".
As someone who was a beginning, community college CS 101 coder at one time, I think focusing on the mechanical, "this code won't compile unless every i is dotted and t is crossed" way of thinking is only half the story. Getting code to compile, after about the 2nd week of class, isn't too hard. Figuring out why your code doesn't give the right result, even when it compiles and runs, is by far a harder skill for new coders to pick up.
Oh I agree, completely -- silent failures can confuse even experienced programmers. But it's very hard to get to the higher-level reasoning, i.e. does this code actually work...if you are continually tripping up on syntax. Moreover, a failure to understand the implications of the syntax can limit your ability to code.
For example, for beginners, it is not immediately evident that the two following code snippets are equivalent:
a = "Hello world"
b = a.lower()
print(b)
and:
print("Hello world".lower())
This isn't just a problem of recognizing the simple logic...but that when you're new to code, it takes more mental energy -- at first -- to pick apart the symbols and process them efficiently. And that mental energy is largely from the same reservoir that is used to process the higher-level concepts.
This is pretty ridiculous, man. I don't think I know a beginner programmer who would be so stuck on "every character matters." (Which isn't even true, to some level, in many languages -- semicolons in JavaScript and Python? Whitespace in languages besides Python?)
The way I would explain it is to have them imagine writing a code tokenizer and interpreter of a simple language themselves. That's what the intro CS class I took at Berkeley, 61A, had us code with a subset of Lisp, with a lot of help, of course. I don't think we needed to know how to use anything but strings, functions, and arrays, although it did involve recursion. This problem will never be broached again once they realize there's code reading their code. Of course it's arbitrary.
You're exhibiting exactly the phenomenon the OP was talking about.
> The way I would explain it is to have them imagine writing a code tokenizer and interpreter of a simple language themselves.
For instance, the OP had a part about how a beginner doesn't know that `|` is called "pipe". It follows pretty logically that they are significantly less likely to know what a tokenizer or interpreter is, let alone be able to imagine writing one. Your intro CS class at Berkeley where you did this stuff in Lisp was catering to (in the OP's parlance) early programmers, not beginner programmers.
The whole point is that there's a difference between people who know nothing and people who know things but need practice applying them. There are lots and lots of people who don't know anything about programming, whether you think it's ridiculous or not.
Indeed, CS 61A had a prerequisite placement exam which tested students' ability to write a recursive program. Everyone in the class had already passed that, otherwise they were sent to CS 3. So those students were definitely "early programmers" in this sense.
> This is pretty ridiculous, man. I don't think I know a beginner programmer who would be so stuck on "every character matters." (Which isn't even true, to some level, in many languages -- semicolons in JavaScript and Python? Whitespace in languages besides Python?)
I guess YMMV...but most beginners I've worked with are confounded by why code interpreters are so literal. The double-equals sign versus an equals sign is one prominent example...it's not that they can't understand why rules exist...but they seem to think the negative consequences (complete program failure) outweigh the tininess of the error.
After dealing with `=` vs `==` errors in beginners' code...something that I almost never screw up on my own as a coder...I've begun to respect the convention in R to use `<-` as the assignment operator...
Notably, Pascal uses `:=`, and it is one of the better first languages in my opinion for many more reasons (easy-to-grasp language core, simple non-null-terminated strings, no actual need to learn pointers until the very advanced stages). Today I mostly advise other people to start with Python though, because Pascal feels somewhat dated and undertooled.
I think Python's behavior of disallowing assignment in expressions is good enough to avoid those mistakes; it's an anti-pattern anyway in the vast majority of cases.
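A tiny illustration of that, in Python:

x = 5          # assignment: a statement, so it can't sneak into an if-condition
print(x == 5)  # comparison: an expression that evaluates to True

# if x = 5:    # SyntaxError -- Python rejects the classic `if (x = 5)` typo outright
#     print("oops")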
> I guess YMMV...but most beginners I've worked with are confounded by why code interpreters are so literal.
I can relate. I remember when I first started programming (a little less than a year ago) I spent 2 hours trying to debug a simple function which was working fine in my IDE as I stepped through it but was failing a unit test provided by my class on R. It turns out that the instructor never explained the concept of ending functions with return statements before giving the assignment. I vividly remember screaming something along the lines of "why the fuck can't it just understand what I'm trying to do".
> I don't think we needed to know how to use anything but strings, functions, and arrays
Strings, functions and arrays comprise a huge amount of information. Many C programs of rather frightening complexity and functionality could be written with just those primitives.
When we think of a beginner, we have to imagine that they have the computer science knowledge of a child. Would you ask a child about strings and functions?
When I was new to programming, and almost 100% self taught, I wanted to make a game.
It was Q-BASIC, in about 1998. I made a top-down space shooter where you fly a ship, and meteors and other objects come down the screen and you shoot them.
Somehow, I had missed or glossed over the part of my self teaching that included arrays. I didn't know what that was. My program was written with variables like $ax1, $ax2, etc (asteroid X position 1), and collision detection was a big pyramid of "if $ax1 > $ax2 AND $ax1 < $ax2 + $awidth ...".
I was always wondering how someone would write a program where there was some configurable number of things on the screen? What if I wanted to crank up the difficulty and have 20 simultaneous things moving!?
So yeah, you can get a lot done with some really basic stuff.
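For comparison, here's roughly what the missing piece looks like, sketched in Python rather than QBASIC (the names and numbers are made up): one list replaces ax1, ax2, ax3..., and a single constant controls how many things are on screen.

import random

NUM_ASTEROIDS = 20                      # crank this up for more difficulty
asteroid_xs = [random.randint(0, 320) for _ in range(NUM_ASTEROIDS)]

def overlaps(ship_x, ship_width, asteroid_x, asteroid_width=10):
    # The same one-axis overlap test as the "pyramid of ifs", written once
    return asteroid_x < ship_x + ship_width and ship_x < asteroid_x + asteroid_width

ship_x, ship_width = 150, 16
hits = [x for x in asteroid_xs if overlaps(ship_x, ship_width, x)]
print(len(hits), "asteroid(s) overlap the ship")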
The way I would explain it to them is to imagine the compiler as being Dustin Hoffman's Rain Man. He'll do exactly what you tell him to do, provided you say it in exactly the right way, because he takes everything 100% literally.
I can't edit my original comment but I'm surprised it was unpopular and the repeated concerns are so petty. I can respond to most of the things you guys brought up:
@sanderjd, you may think I'm exhibiting what the article was talking about, but I'd like to know if you have any other qualms besides using the terms tokenizer and interpreter.
+@kevinschumacher. Obviously I'm not going to just tell them "ok now let's write a tokenizer and interpreter." The beginners I know know what (non-programming) tokens are and I'm sure they would be 100% comfortable with an explanation. Same thing with an interpreter - they're using the god-damned thing from the start, if they start with anything other than C or Java.
@danso, @Thriptic - I think I know what's happening. You're confounding frustration with confusion. I guess I shouldn't have wasted my time. Every beginner programmer, especially, but hell, even an experienced programmer (! think about this - of course we do), gets frustrated with the character-by-character exactness of programming sometimes. That doesn't mean they don't understand or think it's totally unreasonable. Very different.
Thanks guys for explaining the downvotes. Let me end by reiterating something I've said a few times now since the beginning - this is based on beginners I know. I.e., my bio and econ-major friends taking the same 61A class I did. I'd like to know, @sanderjd, whether you have some extra information about the class that I don't? Because I took the class 3 years ago and I know at least half the students are beginners. This is the first class people learning how to program seriously take. I don't know what else I need to say to convince you.
@danso the reason why I'm spending so much time on this is because I'm a student teacher thinking about going into programming education as well. I think it's very important to make the distinction between confusion and frustration. Otherwise, your efforts may be futilely spent on explanation when all they want is to get things working. If they're getting confused on things like "=" vs "==" or why that missing semicolon annoys the compiler, don't go back and explain that the interpreter is this thing with very strict constraints and everything you type matters. That's not the point. Explain what's wrong, why, and how to fix it!!! "Oh, in this language, we terminate lines with semicolons except blocks like if's and loops," "= is for assignment and == is for comparison." They'll get better with practice.
bwy, I am writing this to you, from a current teacher to maybe a future teacher. I have a big issue with this line of your response:
> Otherwise, your efforts may be futilely spent on explanation when all they want is to get things working.
Now this attitude is fine in a work environment, or many other places. But this is death for learning. Learning is not about getting things to work, it is about understanding why things work, so you can apply that understanding elsewhere, to unrelated fields even.
So, for example, I do agree with you when you say "don't go back and explain that the interpreter is this thing with very strict constraints and everything you type matters". But I disagree with what you say next: "That's not the point. Explain what's wrong, why, and how to fix it!!!"
What would be better, in my experience, is to lead the student to find out, for themselves, what is wrong, you can supply the why, and get them to figure out how to fix it. These are what we in teaching call teachable moments, random events which present an opportunity to give the student a deep learning experience, one which will stick with them for a long time.
Your 'explain what's wrong, why, and how to fix it!!!' can be done via Google; it doesn't add to a real learning experience, and can turn people into cargo cultists.
I actually worked on teaching my 71 year old father Python using this book. One point of difficulty that struck me during that exercise was that I as a programmer had completely internalized the idea that an open paren and a close paren right after a function is a natural way to invoke a function with zero arguments (e.g.: exit() exits Python's prompt. exit doesn't.). The whiplash I felt from finding the questioning of the convention silly to finding the convention silly was amusing to feel. Like it makes sense to a parser but not to a flesh-and-blood contextual-clues-using human. We don't vocalize "open paren close paren" whenever we say an intransitive verb. We just "know" that it's intransitive. Anyway, great article.
It is a silly convention, really. Algol-60 didn't require them, and neither did any language in the Algol family (Pascal, Ada, Modula-2, etc). It used to be something peculiar to Fortran and C, but today every language imitates C...
Did Algol-60 have first class functions? It's not like Python just forces you to type the parenthesis to be meticulous; not typing them is perfectly valid code, it just means a different thing.
But there's a good reason for that convention in any language with first class functions. Otherwise you would have something inconsistent like "no parens required if the call is on a line by itself, but they are required in any other expression" (i.e. assignment or another function call).
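A small Python sketch of why the parens carry meaning once functions are first-class values:

def greet():
    return "hello"

print(greet())   # with parens: call the function, prints "hello"
print(greet)     # without parens: the function object itself, something like <function greet at 0x...>

say = greet      # functions are values, so a bare name can be passed around...
print(say())     # ...and called later -- which is why the () can't be implicit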
I can still visualize what it's like to know nothing, because when I saw a BASIC program for the first time when I was ten, I thought the = signs denoted mathematical equality (equations). How the heck can X be equal to Y + 1, if in the next line, Y is equal to X - 2?
Later, I tried using high values for line numbers just for the heck of it. Can I make a BASIC program that begins at line 100,000 instead of 10? By binary search (of course, not knowing such a word) I found that the highest line number I could use was 65,000 + something. I developed the misconception that this must somehow be because the computer has 64 kilobytes of memory.
> I thought the = signs denoted mathematical equality (equations).
I had the same confusion! My very first roadblock in programming was when the teacher told me to write `x = x + 1` on the blackboard, which didn't make any sense, mathematically.
The only important trait I see that matters for either of these groups is a willingness to try things, push buttons, see what happens.
A beginner worries about breaking the computer and doesn't yet understand that any question they have can be typed into a search engine verbatim and will probably be answered with 20 SO posts and 50 blog posts. An early programmer is stumbling down this road.
I don't know that this ethos can be communicated with a book.
I would also recommend that beginners/early programmers learn 1 programming language really well, and ignore the din of people on the internet who claim to effortlessly, expertly jump among 10 languages as part of their day-to-day.
> I would also recommend that beginners/early programmers learn 1 programming language really well
That's a dangerous approach. The first language is very hard to learn, because, well, it's your first. And when you stick to one language, you easily conflate the syntax and the semantics.
So when you learn a second language, you have to unlearn the syntax of the first, in addition to learning the genuinely different concepts. Distinguishing the similar stuff in new clothes from the actual new stuff is hard. Simply put, learning the second language will be very hard as well.
Now your programmer has two data points, and knows in their gut that learning a new programming language is hard. This sets expectations, and will make it harder to learn additional languages. It will take some time to realise that learning a new language, besides a few insane exceptions like C++, is not that hard.
> ignore the din of people on the internet who claim to effortlessly, expertly jump among 10 languages as part of their day-to-day.
Jumping from language to language may not be that easy. But one can certainly be an expert at 20 programming languages. Once you see the commonalities, there isn't much to learn. Really, a good course in programming languages is enough to get you started. The hard part is memorising 20 big programming frameworks, with all their warts, special cases and so on. Still, if you know the concepts, learning the vocabulary takes little time.
I just don't see it. No one would ever recommend learning Spanish and Mandarin at the same time, for any reason.
All of the things you said are true, and yet the beginner has only so much time, so much patience, so much learning to do in one day.
Given this, I see larger advantages to spending all of that time and energy in one ecosystem. There are many perspectives on, say, Java coding styles, patterns, and idioms. One need not go outside a language to do that.
And I would further argue that you simply cannot (usefully) see the global commonalities and idioms among languages until you've been doing this for a while. Years.
As for experienced programmers, I've not known any "experts" at 20 languages, ever. My point was really that this idea is simply the result of run-of-the-mill internet hyperbole.
Learning two programming languages at the same time is definitely not comparable to learning Spanish and Mandarin at the same time...those two languages are so different that you won't gain anything from it. Learning, say, Spanish and Italian at the same time might be a better analogy, since you'll start to see word roots and constructions that are shared among romance languages.
I think learning more than one programming language at the same time is a great way to help you tease out basic programming concepts from the vagaries in the syntax of an individual language. How do types work? Scopes? Functions? Loops? Arrays? Hash tables? Those are all things that, once you really grok as separate from, say, whitespace problems in Python or curly brace issues in C, allow you to much more easily read and eventually pick up other languages.
Although I can't speak to learning two spoken languages simultaneously, learning a language similar to one I already knew (I knew Spanish, tried to learn Italian) was insanely difficult, because my brain couldn't distinguish them enough. It would get in a loop of searching for Italian words and running into Spanish words and then mixing them up.
On the other hand, learning German after already knowing Spanish was much easier -- and it was much easier to see the similarities and differences between the languages, because my brain would find the Spanish phrase while searching for the German one, and vice versa, but wouldn't get stuck in a loop about it.
Which by the way has practical advantages. While the child takes a bit longer to learn those languages, it configures her brain in a particular way that makes it possible to think effortlessly in both languages, from the start.
I suspect, (but don't know) this even facilitates the learning of other languages, later on.
I found that switching between languages made the beginner stages of learning much faster. If I got stuck on a concept in one language, jumping to another language often clarified what that feature means. I still use this strategy today.
I learned variables and addition in JavaScript, loops and classes in Java, iterators and hash tables (dictionaries) in Python, functions in Clojure, pointers in Go, etc.
Trying to learn all of those things in one language only would have been much more difficult and boring, and I believe I would know a lot less than I currently do.
> I would further argue that you simply cannot (usefully) see the global commonalities and idioms among languages until you've been doing this for a while. Years.
That's assuming you are not shown the commonalities explicitly, and have to figure them out by yourself. A course on programming languages teaches just that, and it takes only a semester (possibly a brutal one, but still).
> I've not known any "experts" at 20 languages, ever.
Lucky you. I've never met any expert, period.
Anyway, expertise in a language is not interesting. You want to be an expert at programming. Then translating your thoughts in any language is easy —including languages you don't know.
My "expert at 20 languages" isn't really an expert in those languages. She's an expert at programming, and proficient in 20 languages. Personally, that's what I strive for. I'll leave language lawyering to the compiler writers. (I may write a compiler someday, but it will be for a simple language of my own design. I have no interest in cancerous horrors such as C++ —which by the way is the language I happen to know best.)
Just a side note to your "will probably be answered with 20 SO posts":
I am currently going through Learn Python the Hard Way, and at the end of one exercise I was doing the extra credit stuff (research online all the Python formatting characters). I typed in the question verbatim; one of the top answers was from SO, so I went there to check it out, and one of the first answers was:
"So you are going through Learn python the Hard Way and are to lazy to find the answer"
I mean, I understand some of what that person was trying to say, there is some basic etiquette you should follow when asking questions online, but I am very sure a beginner would not know it. And the question wasn't even "Tell me all the python formatting characters" it was asking where to find a list of them. Another answer did point them to the python docs, but I felt that whoever asked the question would be hesitant about using SO again when they have another problem, which is a shame.
> and ignore the din of people on the internet who claim to effortlessly, expertly jump among 10 languages as part of their day-to-day.
The problem with this isn't whether or not that claim is true, but rather that (as you say) it's not the right thing for a beginner. It's perfectly fine (desirable, even, IMHO) for an intermediate-to-expert programmer to know many (different) languages and use a few of them regularly (although perhaps outside of JS and whatever you use on the backend, daily is probably a bit much), but this is all distraction for a beginner. Beginners should focus on learning programming concepts, not on learning programming languages (but one language is a requirement, otherwise you can't learn by doing!). Although, Zed's point about learning the basics of four languages still holds - you can do that and then focus on one.
My point is more about the sorts of stories and claims that a beginner will encounter while poking around the usual sites. One could be forgiven for wondering if most HN commentators are 60 year-old polyglots with a security clearance and two failed startups under their belt. It can seem daunting in the right context.
I disagree with this entirely. You can "push buttons and see what happens" for decades without guidance on what buttons you're pushing and why. In addition, newcomers absolutely cannot just "type [their question] into a search engine verbatim" -- this is actually a highly advanced skill that you get after years of learning the correct patterns, abstractions, and jargon you need to get an effective answer for what you're looking for.
For example, a beginner might phrase queries like "jquery how do I make text appear on screen", whereas a more experienced person might query something like "jquery element insert value".
I wasn't suggesting that beginners not use books (or other "guidance"). Only that there are personality traits that are more useful to a beginner than detailed explication in any piece of writing.
Have you read Zed's books? I think this "ethos" is captured pretty nicely in them. Most chapters end asking the reader to solve a few challenges, some of these are difficult for a beginner, and it is recommended that you spend some time researching the answer.
Zed Shaw is a natural when it comes to teaching beginners. I recommend his "Learn The Hard Way" books to everyone who is interested in learning to code because they make zero assumptions and start at the VERY beginning. It's stupidly hard to find great books for complete noobs.
I'm totally behind this distinction, and I hope more content publishers adopt something like this.
If you're behind this distinction, I'd ask that you abandon the negative slang about someone who is new to a topic.
The hacker tradition is revered in large part because it seems to be so egalitarian. We all came from humble beginnings. Anyone who can grasp enough of the mathematics of the stuff can make the gizmo do something magical.
It's common to be self-deprecating about our former ignorant selves, and maybe some people find it encouraging to hear something like "don't worry, I was once a noob myself." But really, the word emphasizes the moments where you felt like an idiot. That doesn't help a beginner.
I was bitten by this as well, I thought the book was for an "early programmer" not a total beginner.
Hindsight and all, it seems the book would have been better titled "Learn to Program the Hard Way (using Python)". Or "Learn to Program the Hard Way (using Ruby)". A total beginner is really trying to learn how to build a program, not trying to learn a particular language (whether they know that or not).
I think your parenthetical at the end is most important to the marketing strategy behind the way the book is named as opposed to your suggestions. In my experience discussing with people who do want to learn to program, their first question is generally "What's the best/easiest language to start with?". Having no knowledge of coding whatsoever leads one to focus on comparatively superficial things like language, so I can imagine more beginners being drawn to "Here's how to use Python" vs. "Here's how to program".
Then again a lot of times they have been directed to learn a specific language and "go to this great site I know about to learn it so you can start programming." Since, you know, we have ingrained ideas of what the 'best first language' is :p
I think it took me three years to understand what a variable was.
And I still don't know why it took me so long to understand and why I suddenly understood it.
It's not that I didn't know that assigning '1' to 'a' would result in 'a' having a value of '1', but I didn't understand the concept and workings behind it. I just thought it was magic.
I still remember when I was about 8 or 9 years old, learning Basic, and not understanding why I couldn't do something like this:
10 LET X + 2*Y = 5
20 LET X - Y = 10
30 PRINT X
(Yes, now I know about Prolog).
The only person around me who knew anything about programming was my grandfather, but when I asked him how to get user's input, he began explaining something about interrupts (he only programmed mainframes and got out of the field in early 80s), which left me even more confused and believing that this is just too advanced for me.
There's something interesting here too in that what many call variables are actually a bit more like "assignables". The upshot is that only in programming do variables behave this way---distinct and unlike mere "names" which we're more familiar with from day-to-day life.
So often one "learns (programming) variables" in how they're implemented instead of merely what they mean. Their meaning is much more hairy than mere naming.
Variable to me means 'can change'; Constant to me means 'can't change'. It's the mathematical way of using variables that confuses me, though I can see how it can make sense if you only use 'constant' to refer to an entity like Pi.
Programming and Mathematics have a lot in common but it would be a mistake to take all your knowledge about terminology from one domain and apply it un-thinkingly to another.
A variable in algebra is exactly what you consider a constant in a programming language. If I say "x = 5" in algebra, that means "x represents the value 5" (and I can substitute one for the other anywhere). The variable x can't suddenly represent the value 6 halfway through my calculations. Variables in Racket and Erlang work exactly as in algebra: single-assignment binding.
> A variable in algebra is exactly what you consider a constant in a programming language.
I disagree. A constant is not expected to change, a variable is absolutely expected to change.
The difference is that a mathematical variable changes "between invocations" of a mathematical statement. An "assignable" changes "within invocations".
"Variable" in math doesn't mean change, it means that the relation a variable is involved in holds over any variation of valid denotations of that variable. So the notion of variance is external to the proposition, whereas in programming the variation is internalized.
(Technically we ought to talk about variables assigned in "for all" style or "there exists" style bindings. There's still a sense of holding under all variations, but the "thing" that "holds" changes.)
On the other hand, constant is different in each domain. Pi, the constant, is emphatically not a variable in mathematics.
Actually, the brunt of the confusion is not the variable, but the '=' sign, which in mathematics means 'is equal to', while in a programming language means 'assign to'. This indirectly changes the semantics of the variable within the statement, and confuses people.
This is why `x + 5 = 10` makes sense in mathematics, but not in a programming language.
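A quick Python illustration of the difference:

x = 4
x = x + 1            # assignment: "compute x + 1, then rebind the name x to the result"
print(x)             # 5
print(x + 5 == 10)   # True -- == is the "is equal to" of mathematics
# x + 5 = 10         # SyntaxError: the left side of = must be a name, not an expression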
"In mathematics, the equals sign can be used as a simple statement of fact in a specific case (x = 2), or to create definitions (let x = 2), conditional statements (if x = 2, then...), or to express a universal equivalence (x + 1)2 = x2 + 2x + 1."
In most programming languages, the equals sign is reserved only for definition.
If you wanted to be explicit about this in maths, you can use := and I think that notation would solve a lot of beginner and early programmer problems.
IMHO, Zed is right. I have been looking for books targeted to beginner programmers so I could recommend them to my friends, but most books unfortunately fail on this point.
A notable exception I found is "Learn You a Haskell for Great Good!". It is as good for beginning coders as it is for early (or advanced) ones.
The author made the effort to describe some relatively basic things, and it was simple enough (okay, with a few calls to me here and there) for an Art major friend of mine to start with programming, and with Haskell. I can't recommend this book enough.
> I have been looking for books targeted to beginner programmers so I could recommend them to my friends, but most books unfortunately fail on this point.
My favourite book for this by far is How to Design Programs: http://htdp.org
It assumes knowledge of arithmetic and maybe a tiny bit of algebra but not much else.
I feel like I'm perpetually stuck between what the author describes as "beginner" and "early". I understand what programming is, I can write a bash script that does what I want it to (granted, I have to read a ton of man pages to make sure I understand what it is I want to accomplish), I can write simple programs in Visual Basic or Python or Javascript that do simple tasks. I understand program flow, logic, and all the basics of high-school level algebra.
The problem is, I can't wrap my head around many of the concepts I read about here in the HN comments and elsewhere on programming blogs and such. No matter how much I try to understand it (and by understand it, I mean fully grasp what the person is talking about without having to look up every other word or phrase), I can't seem to put it all together. Things like inverted trees, functional programming (I've heard of Haskell and I'd love to learn it, but I have no head for mathematics at that level), polymorphism, and so on.
Maybe I need to just practice more; maybe I need to pick something interesting from Github and dive into the code to try to understand it better (preferably something well documented of course). Or maybe I need to just stop, and accept that I can whip out a script or simple web thingy if I really need to, and stick to being a hardware guy, which I'm actually good at.
You don't have to understand everything. I've been programming for 30 years and it took me 3 years to understand monads. (At least I think I understand them /grin.) I also have no idea what "inverted trees" mean - if you're talking about the recent thread about the guy who got rejected by Google, I believe it just meant trees with the left and right nodes reversed.
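If that reading is right, the operation itself is tiny; here's a rough Python sketch of it (the class and function names are mine, purely for illustration):

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def invert(node):
    """Swap the left and right children of every node, recursively."""
    if node is None:
        return None
    node.left, node.right = invert(node.right), invert(node.left)
    return node

# Example:   1            1
#           / \   -->    / \
#          2   3        3   2
root = invert(Node(1, Node(2), Node(3)))
print(root.left.value, root.right.value)  # 3 2
```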
...but it makes you feel stupid if you don't. Better to use "Real World OCaml" if you're more interested in the ideas themselves than in their formalizations or related nomenclature.
Exactly. That's where I'm at right now; I know what Haskell is, I love the idea of it, I've enjoyed some of the fruits of it (XMonad). But it was when I tried to dig deeper into it that I felt lost, and yes, stupid. I've never been a math whiz; I am great at visualizing concepts but truly grasping the theory behind them is where I get lost. Based on my junior high school testing, I was placed in Advanced Algebra in my first year of high school. I nearly failed the class because it took me all year to grok the distributive property. I look back on that and I feel ashamed, because once I understood it, it seemed so damn simple! And so it is when I try to advance beyond my current level of programming skill; I hit brick walls and I feel like I left my sledgehammer at home. My pocketknife, even though I know every millimeter of it, won't cut through those walls.
I don't think the concepts are hard to understand, I think that - in Haskell - they're just being presented in a way that is incompatible with my way of thinking.
Having found the Haskell materials simply not suited to me, I decided - quite a few years back - to learn Haskell (or the concepts behind Haskell, at least) my own way: by learning first Erlang (it sounded cool), then Scheme (mainly to be able to read many, many papers that use it), then OCaml and Scala (because of the type systems and pragmatism) and finally Clean (to fill the last gaps in my knowledge). I progressed from dynamic to static typing and from eager to non-eager evaluation. It took me I think about 2 years to do all this and, of course, it wasn't that easy, but somewhat surprisingly it worked. I never wrote - and I'm not sure I ever will, but that's a completely different matter - any non-trivial Haskell code, yet I'm able to read and enjoy Haskell-related papers.
It's important to realise that there is always more than one way to learn things. You should know yourself well enough to see when the "normal" way simply isn't for you; this way you can go search for alternative ways. I guarantee that you'll find them, if you search hard enough :)
I'd put you at early. You're ready to learn more. Don't pick something interesting up from Github. Reading the source code from a complete project is tough as hell for an experienced programmer.
The stuff you don't know all needs to be studied. People don't happen upon functional programming or build trees apropos of nothing; they studied it, either in college or on their own.
Find a good book and dig into it. If you want to learn stuff like functional programming, read The Little Schemer. If you want to learn about trees and other data structures and algorithms, make a few pots of coffee and work your way through the Algorithms bible by CLRS.
Polymorphism is actually pretty easy once you understand the ins and outs of OOP. It just sounds scary. I've taught a first-year high school beginner programming class about polymorphism by the end of the year.
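For anyone at the "early" stage wondering what hides behind the word, here's a minimal Python sketch of the idea (toy classes, not from any particular curriculum):

```python
import math

class Circle:
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return math.pi * self.radius ** 2

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

# Polymorphism: the same call, shape.area(), does the right thing for
# each kind of shape -- the loop never needs to know which one it has.
for shape in [Circle(1), Square(2)]:
    print(shape.area())
```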
Thanks. I really just pulled those concepts out at random, things that I'm interested in but I know I'm not ready to tackle yet.
I did study programming in college (well, tech school, so that's likely part of my problem), but it was SQL/400 on IBM AS/400 machines and was geared towards direct employment at one specific company. In other words, it bored me to death, and I switched to web design halfway through. I really didn't learn anything there that I hadn't already taught myself, and I was surprised to find that I was the only person in the class (including the instructor!) who knew what Linux was. We had an AIX server that had been donated by some company a few years before, and it was sitting there unused, like a monolith from another era, until the instructor allowed me to work on it in my spare time. I managed to get it up and running, and set it up as a local webserver so we could practice server-side scripting with more control over the environment, and learn a bit more about how web servers do what they do.
And that's the core of it I think; my passion is for tinkering and fixing things, and while there's a lot of that kind of thing in the programming world at large, I'm really more of a hands-on, direct kind of person rather than an abstract thinker. My sister is a website and graphic designer in her spare time, and she and I have often talked about starting our own hosting service geared towards creative professionals, with me handling the back end and her doing the front end and marketing. Maybe I should pursue that instead of wasting time trying to learn advanced programming concepts when I'm not actually seeking employment as a programmer.
Or maybe you need to set aside the shell scripting and start learning a different language? "Things like inverted trees, functional programming, [and] polymorphism" really have no analogue in a shell language. Start working in C++ or Java, for example, and I suspect these things will start making more sense.
Pick a medium sized project, and sketch out all the steps and sub-goals (and update/redo this sketch as you make progress and understand what the final implementation may look like). Then slowly conquer those sub goals one by one, googling furiously and reading what you find very carefully the whole way.
You can decrease/increase the scope of the project part way through depending on how smoothly things are progressing. Fully executing on a simplified initial version is much better than half way executing on a grander vision.
Don't be afraid to ask for help if you get stuck. You will be amazed at everything that you have learned by the end.
I have been thinking this for years.... though I would consider myself an "early coder" according to the article.
This stuck out to me as being just the beginnings of the quintessential issue:
> A beginner’s hurdle is training their brain to grasp the concrete problem of using syntax to create computation and then understanding that the syntax is just a proxy for how computation works. The early coder is past this, but now has to work up the abstraction stack to convert ideas and fuzzy descriptions into concrete solutions. It’s this traversing of abstraction and concrete implementation that I believe takes someone past the early stage and into the junior programmer world.
But why stop at just "beginner", "early", and "advanced"? All of the books I have on programming are either truly "beginner" or blankly labeled as a programming guide when in actuality they are quite "advanced"... nothing in between.
If, as the article states, 4 is the magic number of languages to learn up front, perhaps there should be a 4th level of programming guides....one for the journeyman who knows the syntax, can articulate the complex algorithmic issues that need to be addressed, but isn't quite at that "mastery" or "advanced" level.
I think that's the stage I'm at right now (or getting there). But what is the divide between "junior" and "senior/advanced" programmers? And what would help somebody (me) push across that boundary?
Programming is a frustrating job; you're pretty much doomed to be a beginner forever. It's part of what makes it exciting day in and day out, but it can also be overwhelming.
No, there's definitely an underlying substrate of significant commonality between the various programming languages and technologies. If you're at 10 years in and you still feel like a beginner, you're doing something wrong.
Obviously I can't expect to pick up a brand new technology and instantly expect to be a wizard, but I do expect that I can pick up a new technology and be functioning at a high level in a week or two, tops, because it's almost certainly just a respelling/reskinning of some technology I've used before.
(The whole "young guys who know way more than their old-fogey elders" was, in my opinion, an isolated one-time event when we transition from mainframe tech to desktop tech. Despite its recurrence on HN, I think "age discrimination" is naturally receding and will just naturally go away as the people on this side of that transition continue to age, and skill up.)
If you pick up a new tech and it's "just a respelling/reskinning of some technology I've used before", you are doing something very silly or are not using new tech at all. If it's basically the same, there is no reason to switch.
Don't tell me... tell the people who keep pushing old ideas in new guises on me!
"Oh, look, the JS community has discovered $TECHNIQUE. Ah, yes, I remember playing with this in 2003. Does any of them remember the ways in which it went bad and never took off, or are they just spouting hype? Ah, I see they've opted for hype. Well, this ends predictably."
Not that $TECHNIQUE is necessarily bad, mind you, it's just that none of these ideas are new and it would be nice to see one of these frameworks pop out every so often written by someone who up-front acknowledges the previous weaknesses of $TECHNIQUE and tries to address them, even if only through user education, somehow.
(And while no language community is immune to this, the last two years of JS have been noticeably worse about it than any other community I know.)
I've been hearing for years about how the javascript community is constantly rehashing things from the mainframe world or the desktop world. I haven't really been programming long enough to see it happen, though (my first programming book was along the lines of "how to AJAX").
Could you be so kind as to mention some examples of javascript libraries or techniques that are recycling failed concepts from past decades?
"Failed" is too strong. Many of them are good ideas for certain use cases, but also have certain well-known problems, which is frankly true of everything.
Event-based programming was not discovered by Node. It was the dominant paradigm for decades, plural, on the desktop, and still is how all GUIs work, on all platforms, current and past. I've got a big blog post on deck about this one, actually, so I'll save the well-known pitfalls for that.
All of the async stuff that they've come up with has been tried before, and none of it is a miracle cure, though some of them are certainly better than callback hell. Still, many of them still have well-known problems with composability and program flow comprehension. This has been a rich source of people overestimating how green the grass is on the other side; for instance, Python has everything ES6 is going to have anytime soon, and it's still fairly klunky in many of these cases, IMHO. Not to mention the fact I've been outright stunned to see people in 2015 rehashing claims that cooperative multitasking is superior to preemptive multitasking because it gives you "more control" over the performance, which is roughly up there with seeing someone actively advocate for spaghetti programming because it gives you "more control".
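For what it's worth, the contrast being argued over is easy to show in Python's own terms; this says nothing about any particular JS library, it's just a sketch of the two styles:

```python
import asyncio

# Callback style: each dependent step nests inside the previous handler.
def fetch_cb(url, on_done):
    on_done(f"data from {url}")          # stand-in for an async request

def callback_version():
    fetch_cb("a", lambda a:
        fetch_cb("b", lambda b:
            print(a, b)))                # nesting grows with every step

# async/await style: the same flow reads top to bottom.
async def fetch(url):
    await asyncio.sleep(0)               # stand-in for real I/O
    return f"data from {url}"

async def async_version():
    a = await fetch("a")
    b = await fetch("b")
    print(a, b)

callback_version()
asyncio.run(async_version())
```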
Reactive programming dates back to Visicalc (note the fourth paragraph of the wiki page cites spreadsheets as an example); it has well-known problems with cyclic dependencies, which are shockingly easy to accidentally introduce. It strikes me as likely that the nature of web pages will tend to contain this problem, unless you're literally building a spreadsheet; this strikes me as one of the better tech fits. (The more you partition your problems structurally with "pages" and discrete "submissions" to the server, the less likely you are to spaghetti tangle your data flows accidentally, the way a single-spreadsheet application can so easily. The structure induced by the web is in this case harnessed to your advantage.) I wouldn't try to build a game with reactive programming, though.
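A toy illustration of how easily the cycle sneaks in, sketched in Python rather than any real reactive library (the Cell class here is made up):

```python
# Toy spreadsheet-style cells: each cell is a formula over other cells.
class Cell:
    def __init__(self, name, formula):
        self.name, self.formula = name, formula

    def value(self, seen=frozenset()):
        if self.name in seen:                       # wandered back to ourselves
            raise RuntimeError(f"cyclic dependency at {self.name}")
        return self.formula(seen | {self.name})

a = Cell("a", lambda seen: 1)
b = Cell("b", lambda seen: a.value(seen) + 1)       # b depends on a: fine
print(b.value())                                    # 2

a.formula = lambda seen: b.value(seen) + 1          # now a also depends on b
try:
    print(b.value())
except RuntimeError as err:
    print(err)                                      # cyclic dependency at b
```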
The idea of using "binding" in your UI, as in Angular, was actually tried by multiple GUI technologies, and generally was hard to work with at scale (made easy things easy and medium things very hard). I'm pretty sure Microsoft had it at least twice on the desktop and once in ASP.Net; none of them stuck. The same effect may save you here, though... GUIs are generally in "pages" as well. Less sure about that. Adding binding to a pre-existing language can also cause inner platform effect, where you have to embed a full language for expressions inside the original programming language: [1] But this is one of those cases where advances may make something more practical than it used to be... dynamic scripting languages require a lot less work to make that work than the olden days, where the fact you were literally writing a new inner language really sucked (i.e., lots of new bugs the outer language didn't have).
Going to a bit of a wider range, NoSQL databases preceded SQL databases, which are called SQL databases precisely because there were databases, then there were SQL databases. Non-SQL databases had problems with being non-standard and causing your application to be too intimately tied to one of the very pieces of the tech stack most likely to fail one of your requirements and need replacing. That said, I will also point out that SQL databases really were in some sense too successful, and they should never have been the "only" choice. (But too many underinformed people still read too much hype and flee SQL databases when they really shouldn't.)
And I'll end my message with a clear restatement of my point, so it's both the beginning and the end: The point is very much NOT that any of these things are "bad" or "failed"... the point is that they are not new, and it would be advantageous to look back at our historical experiences with the tech where possible to learn what the pitfalls are. This is something that both the people writing the techs ought to be doing, and even if they do, the people using the techs really ought to as well, especially when it comes time to decide which tech to use.
Everything comes down to data structures and algorithms. It's just a matter of how much syntactic sugar you want on top of it. Image processing? Machine Learning? Real time communications? Functional programming? HTML/CSS/Javascript? Meteor? NoSQL?
It's all the same thing under the hood, just better syntax and tooling. Data structures and algorithms.
jerf is right.. you have a mental model of how a programming language works, and a new language is just changing the syntax used to represent those same concepts.
Does that mean there's no reason to switch? No.. since some languages are better at representing some ideas; some languages have better abstractions for certain ideas; etc.
I think starting with a lower level language helps with this way of thinking. If you learn everything about C (for example), and later learn a higher level language, it's easy to think of how you would implement a certain feature of the higher level language.
> a new language is just changing the syntax used to represent those same concepts.
I can think of two counter examples for this.
First, Lisp with its meta-programming is completely unlike any other language that doesn't support such concepts. No programming I did (in C-like languages) ever prepared me for something like Lisp.
Secondly, Haskell with laziness, immutability and a powerful type system is completely unlike any other mainstream language out there. You would struggle to even begin to express your programs in Haskell if you are not familiar enough with it. Imperative code and Haskell code are almost always completely different. If you have some experience with imperative languages, most of that experience would translate over to other imperative languages. However, an extremely small portion of my imperative language experience translated over to Haskell.
> I think starting with a lower level language helps with this way of thinking. If you learn everything about C (for example), and later learn a higher level language, it's easy to think of how you would implement a certain feature of the higher level language.
I cannot agree with this more. A lot of people seem to think that a "better" language to learn programming with is one that is "easier" or "more forgiving", but everyone I know who started with C became excellent programmers whereas ability among the group who started with something else is somewhat more hit or miss.
Yes.. I think it's because C forces you to think about things you would never have to think about in a higher lang.
For example, I know exactly how garbage collection works.. since I once had to write a GC for a project. So when I use a higher lang, that part isn't magic... it's just something someone else already wrote for me.
Whereas, if you started with a higher level lang, you could get by without ever learning how a GC works. Yes, you could dive into the details of your language, but there's no requirement for you to do it.
And I think that explains what you've noticed... it's hit or miss because those who chose to dive into the details of their lang eventually became excellent developers...
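For readers who got by without ever looking under that particular hood, the core of a tracing collector really is small; here's a deliberately naive mark-and-sweep sketch in Python (a real GC is enormously more involved):

```python
# Naive mark-and-sweep over a toy object graph.
class Obj:
    def __init__(self, name):
        self.name, self.refs, self.marked = name, [], False

def mark(obj):
    if not obj.marked:
        obj.marked = True
        for child in obj.refs:
            mark(child)

def collect(roots, heap):
    for obj in heap:
        obj.marked = False
    for root in roots:                       # mark: everything reachable from the roots
        mark(root)
    return [o for o in heap if o.marked]     # sweep: keep only marked objects

a, b, c = Obj("a"), Obj("b"), Obj("c")
a.refs.append(b)                             # a -> b; c is unreachable garbage
heap = collect(roots=[a], heap=[a, b, c])
print([o.name for o in heap])                # ['a', 'b']
```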
> everyone I know who started with C became excellent programmers whereas ability among the group who started with something else is somewhat more hit or miss.
I think the point of the article is that you are not a beginner forever. You are a beginner when you know absolutely nothing about programming. After years of experience, no one should be an actual "beginner", by the author's standards, because you understand programming conceptually. Being a programmer can be frustrating for a lot of reasons, but not understanding the basics of coding should probably not be one.
The difference between a beginner and an early programmer is that the beginner has no mental model at all for mapping code statements to the things code statements do - because they have a limited to nonexistent idea what the latter are.
The heart of programming isn't learning syntax, it's in learning how to translate solutions into sequences of symbolic operations, which can then be translated in turn into the syntax of a specific language.
Beginners also need to learn to use a keyboard, an IDE, and a debugger, and probably also a package manager, a distribution tool or system, and how to look stuff up online.
So the cognitive load is huge.
But it's the mental model that trips people up. You literally can't do anything with code until you learn the core symbolic grammar for the most common operations. And even after you do that, there's always more to learn as you can go deeper into more sophisticated models like functional programming.
It's not at all obvious to outsiders how that grammar works. (Which is why programmers tend to be so bad at UX - operations and relationships that are "obvious" to someone with programming aptitude are completely unintuitive to people from other backgrounds.)
Last fall I went through a coding bootcamp in Toronto. It was 9 weeks of hard work sprinkled with lots of frustration and lots of feel good successes.
One of the main takeaways I had was that everyone comes in with a different background and everyone has a unique approach to learning.
The problem expressed in this article is a fundamental bottleneck of education. The communication between teacher and student is often misinterpreted at both ends and the subject matter is never perfectly conveyed or received.
I feel what's really lacking in the learn-to-code community is teaching people how to actually learn.
Lay down a positive attitude towards failure and a framework for problem solving first; the content and understanding of a language will come after.
No. Just change the name of the book to "Learn Programming The Hard Way (Python Edition)". By putting the language in the title it sounds like it is for an experienced programmer learning a new language, not for learning how to program.
True. When I started a major in CS, I had no programming experience. I started "Learn Python the Hard Way", but when I found out that I needed to learn C for Compilers, I tried to switch to "Learn C the Hard Way", thinking the two books were equivalent. They weren't.
This is a fantastic article and is another great example to pile on as to why Zed Shaw is the king of programming teaching.
One area I struggle with in tutoring is how to inspire/invoke/detect disciplined motivation. What I mean is, whenever I sit down to show someone something, I'm constantly questioning myself "wait, do they actually want to learn this level of detail, or am I just giving too much information that's going in one ear and out the other?" If someone is definitely motivated to learn that's great (and really inspiring for me as a teacher to do better at explaining things precisely).
If this nomenclature were more understood, I would like to say something like "Sorry, what you're trying to do is more of an early/junior task, and right now you need to stick with the Beginning basics". I just don't know how to phrase that without sounding condescending.
Zed points to a very real problem - it's easy for us to forget what we know. But there's another problem with targeting beginners. They are all over the place in terms of experience.
Computing is so tightly woven into our world now that it's hard to find people who have more than a passing interest in it who have not found some way to try to code as kids. Even among those who haven't, there's a gulf between people who have tried to do a little HTML editing (and know what a file is) and people who haven't. There's no one place to start. From Zed's description it looks like he's starting from the lowest possible point, but what are the demographics like there? How many people are in that space, and are they mostly adults or children?
I think this is one of the main reasons why you don't see much beginner's material.
"My favorite is how they think you should teach programming without teaching “coding”, as if that’s how they learned it."
I often wonder about this. In the UK, with the drive to get every child 'coding', there are a large number of teachers that constantly talk about how the main skill that we should be teaching is 'Computational Thinking'.
I wax and wane back and forth over this topic, in a very chicken and egg way. However, I usually end up coming to the conclusion that learning computational thinking is great, but you need to know how to code (i.e. learn the basic syntax of a language) before you can possibly learn how to think computationally.
I would be very interested to hear the opinions of actual developers, as to their opinions on the topic.
I don't think these things are all that different, in the very early stages.
The very first part of computational thinking is understanding that you can make a very specific and precise procedure to accomplish a task. If a student is at an age where reading and writing is easy, then learning the syntax of a language is a fine way to accomplish this. The student will spend a lot of time with each finicky word and symbol to make the computer behave, and while they may not recognize that they are defining an abstract procedure, the result is hopefully some intuition that the computer is a very predictable and reliable machine that does exactly what the code says, even if it's not what you meant. With exposure to more languages and by writing more programs, hopefully a student begins to recognize patterns and abstractions in their code, and that's the point at which they become real computational thinkers.
If a student isn't ready for that, there are still fun things to try. One cute one I've seen is "program your parent" exercise at a workshop. The child can make their parent move one step forwards or back, turn left or right, pick up and put down an object, and put one thing inside another. Can they make their parent pour a glass of juice? Or put a lego back in the box?
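Once a student like that graduates to real syntax, the exercise translates almost directly; here's a hypothetical Python version (the command names are invented for the example):

```python
# Hypothetical "parent robot" commands -- each function is one allowed move.
def step_forward(): print("step forward")
def turn_left():    print("turn left")
def pick_up(thing): print(f"pick up {thing}")
def put_into(thing, container): print(f"put {thing} into {container}")

def put_lego_back_in_box():
    # The whole lesson: a task is just a precise sequence of tiny moves.
    step_forward()
    step_forward()
    pick_up("lego")
    turn_left()
    step_forward()
    put_into("lego", "box")

put_lego_back_in_box()
```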
I don't think there is a chicken and egg problem here, because learning to make a dumb machine perform a task by following a procedure is the essence of computational thinking. Learning the basic syntax of a language is probably the most efficacious way to experience this for many students of many ages, even if the explicit goal is "do well on an AP test" or something mundane.
Part of the problem is that there are too many things each language can now do. Every single language wants feature parity with every other language. Every single language wants to do everything.
This means an expert in one language is going to be "Beginner" instead of "Early" in some ways... but "Early" instead of "Beginner" in other ways.
Anecdotally, as a software engineer working with C++, I had to spend a whole month trying to understand event-driven programming in other languages. I didn't really need tutorials on loops and recursion, but I sure as hell needed to understand how a typical program in such a language works.
I am an instructor for Software Carpentry[1]; in my experience, the goal of these workshops is to help mostly scientists get started on the journey to becoming early programmers.
In the biological sciences, with more and more data becoming available, the expert blindness Zed speaks of is a major problem. We need to invent better systems and actually take heed of research-based teaching methods, as SW does, if we wish to improve this situation.
I tried to contact Zed about a month back, to ask him this question.
I tried through his blog comments, and at the help email he has for his HTLXTHW courses, but never got a response.
I dunno if he just never noticed it, or if he's actually ignoring me for some reason, but having already typed out this question with all the necessary context, I figure I may as well post it in a public place where it's relevant, so here:
and was pleasantly surprised to find you explicitly mentioning Siegfried Engelmann and Direct Instruction.
Here's the story:
I learned about your "Learn X the Hard Way" series through a friend who had learned Python from your course.
He told me he heard you knew about Zig and DI.
I immediately said something like:
> Nah, pretty much nobody has heard about DI, much less properly appreciates it.
> Probably Zed just meant lowercase "direct instruction" in the literal, non-technical sense of "instruction that is somehow relatively "direct"".
> He's probably never heard of uppercase "Direct Instruction" in the technical sense of "working by Engelmann's Theory of Instruction".
But then I googled, and yeah, aforementioned pleasant surprise.
(I am just not going to say anything, outside of these brackets, about "blasdel" there.
If medicine was like education, the entire field would be dominated by the anti-vaxxers.
Hey, blasdel! supporting "Constructivism" is morally at least as bad as supporting anti-vaccination!
Bah, whatever. Okay, got that out of my system. Anyway. xD )
So now I'm really curious:
You said you "learned quite a bit about how to teach effectively from [Zig and Wes]".
But how did you learn from them?
You haven't slogged your way through the "Theory of Instruction: Principles and Applications" text itself, have you?
I have, and wow was that a dense read... Which is frustrating, because as you're reading, you can see, abstractly, how they could've meta-applied the principles they're laying out to teaching the principles themselves --[the open module on Engelmann's work at AthabascaU](http://psych.athabascau.ca/html/387/OpenModules/Engelmann/) includes a small proof-of-concept of that, after all-- but apparently they just didn't feel it was worth the extra work, I guess...?
(Zig did [say that the theory is important for "legitimacy"](http://zigsite.com/video/theory_of_direct_instruction_2009.h...) --ie, having a response in the academic sphere to the damn "Constructivists" with their ridiculous conclusion-jumping-Piaget stuff and so on-- and that's the only practical motivation I've ever heard him express for why they wrote that tome in the first place.)
Have you read any of the stuff he's written for a "popular" audience, like these?:
If, in the world of programming, the biggest issue you're running up against is 'this is too basic', then great :)
If it's too basic, go read something else, no problem. If you're going to get anywhere in this world, you'll have to know how to research. Skimming and figuring out if something is useful or not is a valuable skill - now more than ever. So whoever complains about a well written book not suiting their fancy - it is their problem, not yours.
- Books written for "beginners" target people who already know how to code
- Author's book targets people before that
- Most programmers are bad at teaching people how to code
- Recommends some arbitrary phraseology to differentiate levels of ability
- Until someone learns the basics of 4 languages they don't really know how to code
- Demands people only use the term "beginner" for people who can't code, and "early" for those who can.
This is great and all, but it comes off mostly like a whiny complaint about how most development books are aimed at a group of people who already have a basic knowledge of coding.
This has already been addressed by the so-called "dummies" series of books. They were aimed directly at the audience the author is saying is being left behind.
I'm not sure I am seeing a real issue here. Go to the bookstore, browse through the books, pick the one you can comprehend and seems to be aimed at whatever your level is. Done.
It's a pervasive, persistent problem. It's exceedingly common to talk with coworkers, or people at a meetup, who assume a bunch of domain knowledge for their domain, and are dismissive when you don't know that knowledge. And then you find they aren't aware of some other domain. The worst is when the same ideas are used in different domains under different names. There is often no level-setting, either. Compare statistics, machine-learning, and CS language about the same topic.
The 'for-dummy' books are only a tiny sliver, and it's often hard to tell from skimming a book (or from the ToC on Amazon) what a particular book's target is.
I've read Learn Python the Hard Way and the Head First Guide to Programming (which teaches programming through Python). While LPTHW does beat you with the stick of your own ignorance until you achieve enlightenment more than Head First does, neither of them assumes anything more basic than the ability to think abstractly. The only way I can see of demanding less in the way of prerequisites is using a ~non-abstract programming language; maybe Scratch or Logo fit the bill?