A Cautionary Tale of Learning to Code (freecodecamp.com)
240 points by michaelq on Nov 19, 2014 | 129 comments



I think the author is correct in that he tried to learn too many things at once; starting with a friendlier environment and focussing on a small number of new technologies has obvious advantages.

But in some other ways, the author is too self-critical:

> I'd spent months sitting alone in libraries and cafes, blindly installing tools from the command line, debugging Linux driver problems, and banging my head over things as trivial as missing parenthesis.

All these things have value, and I doubt that this was time truly wasted. Although some people can get by for a while without it, becoming intimately familiar with the command line is extremely valuable. Also, although this is by no means a requirement for being an effective programmer, most good programmers I know use a traditional text editor such as emacs or vim.

Ultimately, I think the point is that expending time is an investment, and you should prioritise based on the expected future utility from your time investment. More often than not, this means finding the most important thing and putting almost all of your energy into it.


Programming is full of bang-your-head-against-the-wall moments; getting good at working through them with unfamiliar tech is an important skill.

Of course, from a professional perspective, you want to minimize how often that happens, but there will be plenty of blind stumbling as your work takes you into less-traveled territory.


>> I'd spent months sitting alone in libraries and cafes, blindly installing tools from the command line, debugging Linux driver problems, and banging my head over things as trivial as missing parenthesis.

> All these things have value, and I doubt that this was time truly wasted. Although some people can get by for a while without it, becoming intimately familiar with the command line is extremely valuable. Also, although this is by no means a requirement for being an effective programmer, most good programmers I know use a traditional text editor such as emacs or vim.

I agree. I think the prime takeaway is to do whatever you need to do to solve a problem; most of the things the OP described came organically for me. I started on Windows, learned programming via Python, and used a wide variety of text editors. Eventually I grew dissatisfied with the tools Windows provided me and installed Linux. I spent many hours debugging weird issues with my then laptop, until one issue (a kernel bug? I think...) forced me to abandon that laptop and switch to a ThinkPad. When I started my first software job my coworkers recommended vim to me. After I used it for a while, I grew fond of its key bindings, so now I'm using Sublime with Vintageous, and just straight up vim when necessary, for my development.

Along the way, I've learned Unix tool sets, dealt with init systems, learned how to find and interpret logs in a reasonable manner, figured out where to get help from people when needed, and so forth. That experience is invaluable when I'm looking at a problem and need to quickly identify at least where it's coming from, based on what I've seen in the past.

At the end of the day it all depends on what you're interested in and what problems you have or need to solve. I'm interested in the stuff I've done, so I found it to be a valuable exercise (albeit with some headbanging every now and then). If someone is only interested in doing iOS development or Windows development, they might find the things I've fiddled around with not useful, and I think they would be right.


This. I learned programming the "bang your head against the wall" way, and I think I'm much better for it. I'm much more familiar with the innards of my tools, operating systems and programming languages as a result, and I'm much better at debugging things and understanding what's going on when I write a certain piece of code. I also have a pretty diverse set of skills. I find that any information you pick up somewhere eventually ends up being useful elsewhere.

But I'm privileged in that I was never pressured to learn; my livelihood never depended on it, so that's a factor.


I agree with you so much.

Learning without casting a wide net may mean you end up with the kind of tunnel vision that used to lead poor souls to 'learn Dreamweaver', or 'learn Crystal Reports', or ... ASP.NET controls, or something.

This guy ended up with a sense of the tech landscape, and knowledge that will serve him well. In one year!

Honestly, if people copied him and just avoided editor/keyboard silliness, they'd do well.


I think the key takeaway from this is that you need to focus on learning one thing at a time. If you want to learn how to program, don't distract yourself by trying to learn some complex text editor at the same time. Use whatever plain-text editor feels easiest to you. And use the environment that's easiest to get going in.

As you learn more, you'll find things that annoy you about your editor/environment. Fix those as you see fit. Eventually you'll have an environment that fits you "like a glove".

It's absolutely detrimental to try to get a fully customized environment set up before you know what your workflow is going to be like. It took me years to get to my environment (tmux, vim+plugins, ack, a VirtualBox environment with a proxy). I experimented a lot. But getting "the perfect environment" was never a goal; it was just a lot of "this is annoying, there has to be a better way -> google -> environment change".


This was one of my biggest issues, even as an experienced developer with a CS degree. I always felt a step behind since I wasn't fluent in the latest and greatest framework. All of that changed when I mentally decided to focus on one language, namely Ruby. It's amazing how much you can do with any one language even though it's not fully optimized for a specific use case you need.


Great article and observations. It's always good to see what this field looks like through a newbie's eyes.

If only it were so simple as keeping a small tool set and a dev team committed to it. You can certainly build a business from scratch with a small toolset. But as the business adapts to customers, you always start to find things that customers want that the tools do not support. In initial dev, you don't necessarily care, but when you're live, it's better to implement than to say 'no' to executive management and sales in most (not all) cases.

Once you modify the tools a bit to support the unique business, you're no longer able to keep new developers focused on a small toolset, because you've customized. New devs may not agree and may look for ways to work around the customizations. Soon, you have a unique codebase that is valuable- central- to the business, and your changes have crept in to the point that off-the-shelf devs and tools and upgrades don't work.

Now you're in maintenance mode. Code schools don't teach maintenance, but it is the lifeblood of the software business. Developers avoid maintenance jobs. They try to find new, greenfield work. That's why we end up with so many disposable business models, 3 year dev-to-acquisition cycles, and ridiculous amounts of abandoned code.

Maintenance is hard. Much harder than development. But it's much more important. Anyone can launch an app that builds a business. Not everyone can adapt and grow that business with code changes that require getting out of the dev comfort zone.

Code schools don't teach this because they don't want to expose aspiring coders who just want to get rich to the grimy dirty details of a real profession. But ask yourself this- would we have cars and highways if all we trained were new car designers (not mechanics or road builders)?


A Cautionary Tale of Learning to Fix Teeth. My own.

How a reasonably balanced individual nearly went insane

I was just a guy in a suit in an office with a vague healthcare idea. Then I decided to learn to fix teeth.

I overheard some guy at a happy hour bragging about how easily he was able to automate his overbite by using a technique called "4 Handed Dentistry". I thought, "huh, 4 Handed Dentistry." I went home, googled it, and within 15 seconds, I was working through a random 4 Handed Dentistry tutorial.

A week later, I went to my first dentalspace meeting. Everyone was talking about techniques like orthodontics, endodontics, and maxillofacial surgery. There was so much to learn. I borrowed three O'Reilly books and got about 50 pages into each of them.

Most dentistry books start off nice and easy before making big assumptions about your prior knowledge.

A friend told me I should get good at drilling, and gave me his drill bits. I spent a few hours learning basic foot controls so I could further configure it.

Then some guy walked by and saw me drilling. "Why are you drilling?" he asked me. "Don't you know lasers are better?" "Hm. Lasers." So I started memorizing dozens of laser shortcuts.

Most arguments about restoration techniques are what dentists call "religious wars" - rooted more in historical differences than practical merit.

At the time, it seemed reasonable to think that the faster I could drill, the faster I could fix teeth. I switched to a hydraulic drillset because, hey, it was objectively the most efficient method a dentist could use.

Can you count how many letters, numbers and teeth are in their original oral positions? I'll give you a hint - it's in the low single digits.

On the days I could actually get my netbook to successfully boot DentalCAD - and that I was able to drill more than 10 teeth per minute - I studied oral surgery by working through books and Udacity courses.

After 7 months of grueling self-study and going to dental events, I landed my first Dental HMO job...

You kinda get the idea by now? I dunno, I think what we do is every bit as professional as dentistry. So why don't posts like OP's seem as absurd as mine?

[UPDATE: For those of you who have suggested that programmers don't have others' well-being in their hands: actually, quite a few of us do, just not as directly as dentists. We are a profession and we do do important work. A few examples in an old post: https://news.ycombinator.com/item?id=2882620]


This analogy is as absurd and oversimplified as comparing grand theft auto to a download of Taylor Swift albums.

There is no such thing as "freelance amateur dentistry". You can't start tinkering with teeth and get a job the way you can tinker with programs as a hobby before getting a junior position at a programming gig. To compare the two professions and suggest the paths to engagement and skill are the same is obtuse.

This article resonated with me more than most HN pieces, because it describes my path as well. I've not had professional guidance, and with the absolute flood of options, of data, of debates and educational material, it's REALLY difficult to know where to start. I don't understand how you find it acceptable to mock someone's interest and lack of guidance.


I don't think he's mocking a lack of guidance, as much as the implicit indignation at not understanding a sprawling and complex field in an afternoon.


The moment that a student dentist starts to drill into a tooth to place a filling (under close expert supervision and after a lot of training) for the first time, they are doing two things: 1) removing enamel that will never grow back - if they do it wrong, there is no option to fix a semi-colon and recompile; and 2) doing it on a human mouth for the very first time in their lives.

The difference isn't just that most programmers don't directly have others' physical health in their hands, although that is true. Some programmers do, but all dentists do.

The difference is that you don't get do-overs. Have you ever written code that didn't even compile the first time? Or code that failed a test, or code that did something wrong in a test environment, or code that did something buggy in production?

If you'd made a mistake like that as a dentist, a patient might well have lost a tooth that they're never getting back. Much worse could happen: you could cause someone to lose feeling in their face, or even kill someone.

There isn't any low-stakes dentistry but there is low-stakes programming. That makes it easier to start as an entry-level programmer because even at a low level of skill and experience you can potentially do useful work for someone.


He did focus in on Python for 7 months, which is good. I think what's missing is proper apprenticeship in software.

He shouldn't be taking random advice from people to learn emacs/vim, or use a new keyboard, or (to a lesser extent) a new operating system. He needed a guiding instructor to tell him to learn how to code first and focus on building projects, instead of trying to turn a 3-5 year education into a 1 year accelerated course so he can be like the cool kids.

I'm self-taught as well, so I'll never say university or professional accreditation is the right answer. But that doesn't mean we should be without effective mentors.

Speaking of which, I've been meaning to look into mentoring...


You should check out codementor.io. A buddy had a really successful time with it. Paired with 2 mentors and went from having 0 knowledge to an app on the App Store in 5 months.


Thanks for the link. Applied to be a mentor :)


+1 I strongly believe that we're a profession and should have professional standards.

On the other hand, I do suggest you look at hobbyist/quasi-professional magazines. It's... not terribly dissimilar.

Example cover: http://popularwoodworking.woodworkingplansplans.com/images/w...

Conceptually, that's the same exact approach taken.


> +1 I strongly believe that we're a profession and should have professional standards.

As long as we don't kill the goose which laid a lot of us golden eggs.

If we destroy the ability of newbie programmers to come up outside the university-professional path, we've just irreparably damaged the whole field.

This is also why I don't like the idea of unionizing programmers: Even if we come up with a union which isn't based on the wage-and-hour, put-in-your-time model, unions are still based on seniority and coming up the "right" way as opposed to being able to strike out on your own in your own little company, without needing to pay dues, literal or metaphorical.


I look at it the opposite way. There needs to be a way to allow software developers to say, "My profession's ethics will not allow me to develop this (credit card, medical record, banking, etc) software in the way you've specified. You can either change X, Y, and Z, or give up on having the accredited software engineer label on it." It certainly wouldn't be required for many or even most applications, but it seems desperately needed right now (c.f. healthcare.gov).

Done right, this allows for both non-accredited software developers, and professional engineers to live and work side-by-side.


> If we destroy the ability of newbie programmers to come up outside the university-professional path, we've just irreparably damaged the whole field.

I don't believe that, to be honest. There's a profound difference between a highly qualified professional and a hobbyist, and I am perfectly happy demanding that the first have a credential.


This could work as long as the award of the credential actually signified a capability to do the job. Most of our current university education about software ("computer science") is only distantly related to doing the job.

By analogy to another professional field, computer science is to the working software-maker as physiology is to the working doctor. It's the foundation of the field, and so it's something one has to study at the start of one's training, but it's of little relevance to the actual day-to-day work, and it certainly doesn't form the basis of qualification.


> award of the credential actually signified a capability to do the job.

without a doubt.


Fortunately this is self-selecting. The highly qualified people (quite often these are the very best people) who don't have credentials would never, ever want to work for you.


My guess is that it's because most of us don't have someone's life in the balance? Whatever damage you do to a computer when learning, it probably isn't permanent (screwed up sshd? changed file permissions and you don't know how to fix it? either rollback the VM or create a new one!).

Most new webdevs do really basic things which are more akin to them being receptionists anyway (I've seen tickets as ridiculous as "change this while loop to a for loop", "add this css class to this button", etc.). And let's face it, most people who learn programming in 6 months are probably not doing anything more complicated than web dev, with near-zero knowledge of devops (disclaimer: I too know very little of devops)


Because one finds many self-taught developers out there who can outperform college-educated compsci majors day in and day out.

Because one can read universally-acknowledged figures explaining how a large number of people with a software engineering degree can't code their way out of a paper bag or pass the most basic "fizz buzz test". (1)

Because we can see non-genius 17-year-olds writing apps that are bought by top tech companies for $30MM, month in and month out. (2)

Because we call it "software engineering", but it still isn't engineering at all. (3)

Software development is still a very, very young field. The fundamentals are not properly understood yet. It will still take decades before they are, possibly over a century. We won't be able to put proper education in place before the fundamentals are well-determined.

Agriculture, livestock breeding and cloth making are many millennia old. Architecture, engineering, and the law are 2-3 thousand years old. Book printing and complex music are about 1,000 years old. Dentistry is centuries old. Cinema is over a century old. We know a lot about how to do those properly, and schools are pretty good at teaching the important parts. Software development is less than 50 years old, and schools are still dismal at figuring out the important parts (practitioners are only so-so most of the time too). That makes it different.

It would be hard to get more misguided advice than what the OP received (pro tip: don't learn vim or Emacs, configure Linux, or switch to Dvorak before you can write functional, working code). That doesn't mean teaching yourself is a bad way to learn.

(1) Why can't programmers program, by Jeff Atwood (http://blog.codinghorror.com/why-cant-programmers-program/)

(2) Summly

(3) Just a random sample, but very representative: I developed a software package for building structure calculation about 20 years ago, helping an architect with the software part. There are manuals enumerating the exact steps to follow and excess safety measures to add: assume 50% extra weight, 40% wind-load force with higher percentages for higher structures, twice the amount of iron to be put into the concrete when certain conditions are met, etc... Those manuals are the law. If you are an architect or an engineer, and you follow those rules, two things happen: (1) you are not legally liable, and (2) the building doesn't fall down! Software projects fall down all the time (read: Obamacare). That is engineering, software projects with today's tools and techniques are not. This will happen some day in software. We are not there yet, by far.


> That is engineering, software projects with today's tools and techniques are not. This will happen some day in software. We are not there yet, by far.

Sure we are, at least pretty close.

Commercial avionics software developed to DO-178B standards calls for reams of requirements, verification tests, compliance to process, internal quality reviews, external audits, and sign-off by FAA representatives.

A one-line code change can take days to implement, and might not be released to "users" for months or years.

But the software is extremely robust.

If we wanted to engage in the same level of software engineering for all software, we could. But we don't want to. Developers don't want to, and users don't demand it. If an iPhone game crashes, who cares? If a productivity application crashes, you might have lost an hour's work, but it's probably not so annoying as to warrant a couple orders of magnitude more cost associated with the software.

But if a software failure could kill people, well, that's different. It's worth spending a huge amount of time to make it perfect.

Avionics software can be so thoroughly tested because it is thoroughly designed up front. You know exactly what it's supposed to do. Much less-critical software is designed in a more ad hoc fashion; or there might not even be a design at all! How much software has been organically grown, starting with an idea and hacking on it until it seemed to work?

If you want to thoroughly test that, you have to go back and thoroughly state what it's supposed to do.

It's quite possible, but by and large it's not desired enough to make it worth actually doing. I'm not sure how this could change, or even if it should change. Instant bug fixes on web applications are cool, even though they come with the risk of having broken something else...


> But the software is extremely robust.

Does the specification specify the input as well, or is it actually robust against real input?

By real input I mean stuff like HTML tag soup: There's no single standards document which describes it fully, or even mostly, and it isn't going to be fixed. Ever. It simply has to be processed, to the limits of your ability to process it.

Avionics software is robust, sure, but it's almost a toy problem, its domain is so well-specified. You can ignore so much about reality because you've got a contract which says "We only care about what's listed, everything else can go hang" and in the real world (or, well, in the rest of the real world) you can't usually do that.


A fair observation. In the example of handling HTML input, I would suppose that's not a problem with individual developers, but a problem with the industry. Such a relaxed format should not have been allowed to exist, if the industry cared about its software products being as robust as possible.

I'm failing to think of any avionics application that might handle HTML, but avionics systems do have their own formats to deal with. ARINC 661, for example, is an XML file format for transmitting graphical display elements:

http://en.wikipedia.org/wiki/ARINC_661

Of course, all uses of ARINC 661 data are thoroughly tested. I'm not sure I would go so far as to describe it as a "toy problem", but it certainly does intentionally limit the problem domain to exactly what needs to be dealt with. Malformed ARINC 661 data would just be discarded, not coaxed into the best possible display even though it wasn't quite right, because that would be unacceptable; the problem would be with whoever was sending the malformed data.

Anyway, you're quite right though; without a precise and unambiguous format definition, you can only go so far down the path of robustness.


> Such a relaxed format should not have been allowed to exist.

Amen


Yes, long ago it was observed that the first step in proving software correct was a clear specification of what the software was supposed to do, and that for a lot of realistic software, writing that spec was unreasonably difficult.


> If we wanted to engage in the same level of software engineering for all software, we could. But we don't want to.

Of course. Like building structures, there are different kinds of software. I can build a garden shed by myself as long as I have the skills to get the thing to stand up. If it blows down in a storm I'm the only person with a loss. But just because I can get a garden shed (or even a larger structure like a barn or a house) to survive windstorms without incident doesn't make me a structural engineer.


In my understanding, those projects are the modern pyramids of software. Built by sheer brute force at an unsustainable cost, only affordable by a select few.

Large amounts of reliable, performant and scalable software will be built on time and on budget at some point in the future, with a cost and effort similar to today's run-of-the-mill software development. This will happen when, thanks to better understanding of the principles, we can create better tools and techniques to do so.


I think sheer brute force would be far less organized. How do you suppose designing and constructing a building according to designs and building codes is a more advanced, less brute-forced activity than building software according to requirements and industry standards?

But avionics-style software engineering need not be an all-or-nothing approach. Elements of it could be introduced into other programming applications for increased robustness. Greenspun wrote up an excellent article on adding external design review to web application development:

http://philip.greenspun.com/software/design-review

Such would not be a heavy burden on a project, and would likely help catch at least the most glaring errors that went unnoticed by the developers.

In any event, I agree that better tools and techniques offer the tantalizing possibility to help all software be more robust, even if it is never more "engineered". Modern languages have, for example, done away with whole categories of bugs that used to plague C programmers (and still do, unfortunately).


"Sheer brute force" is probably a bit of an exaggeration. But just a bit. As you describe yourself, the kind of advance from C to more modern languages is a step away from "sheer brute force" and towards reasonable approaches. And only a small step compared to where we have to get before this is engineering.

I'd say a much larger part of all software projects is dysfunctional compared to the same in architectural or civil engineering projects, and I think this situation will greatly improve in the future, thanks to a handful of qualitatively innovative insights providing enormous improvements that we are yet to see.


> That is engineering, software projects with today's tools and techniques are not. This will happen some day in software. We are not there yet, by far.

This statement is based on exactly the same fallacy as the featured blog post: ignorance. You are ignoring all the tools and techniques available for software engineering today: garbage collection, lexical closures, Hindley–Milner type systems, purely functional programming, MISRA-C, unit testing code coverage tools, bug tracking systems, distributed version control systems and code review practices around them, etc.

People developing widely used tools are still making idiotic mistakes that had widely known solutions 50 years ago: https://twitter.com/vsedach/status/527904732145537025

Engineering is about learning from previous designs. When you shrug your shoulders and say "software engineering isn't engineering yet" and "the field is too young" you are just discouraging people from learning from past mistakes.

Just because some idiot can pick up an oxy-acetylene torch and cut and weld some metal together doesn't make them an aerospace engineer. What's the difference with PHP developers?

Stop making abstract excuses and start treating software and software practices as tools and techniques. Tools and techniques have a history, a learning curve, and areas for improvement.


No, this statement is based on years of learning, practice and reflection, all around the creation and release of many successful software products of many kinds on the cutting edge of software development. I don't know why you assume I ignore all those things when I make my statement.

The list of techniques you provide is a hodgepodge of valuable but limited techniques, inapplicable theoretical results, and voodoo-like superstitions and rituals. They are all insightful and beautiful, but they do nothing to turn the discipline into software engineering. The day software development has turned into an engineering, you can bet the contractors who get awarded big contracts like the Obamacare web site would make sure they spend the $$$ to apply the practices which have been proven to ensure software projects become successful, they are still going to make a ton of money off the top of the contract, and that's what they do for large engineering projects for the most part. Nowadays, they are just at a loss, and try to get by, like everyone else in the industry.

GC: Apple keeps not applying it in iOS but switched to it for a lot of core OS X apps, notably Xcode 5, and switched back to assisted reference counting in Xcode 6 and others, since the performance degradation was noticeable. GC is great for many things but not for all. It's a trade-off.

Lexical closures: I am not sure this really makes software work better. I've seen many more well-engineered solid projects in C++ and Java, which lack them, than in javascript.

Hindley-Milner and purely functional programming: beautiful beautiful beautiful Haskell is probably the closest to practical purely functional programming system, and we have yet to see any proof that Haskell makes real software system implementation any better. Don't get me wrong, I love purely functional approaches, I designed and implemented a full custom regular expression engine with a purely functional design and implementation (even if I used C++ to write it), and it's the type of thing where the approach results in much better code in all regards. It's just still not practical for most things. I like to think about Haskell as computation poetry. It's just not practical.

MISRA-C: one entry which is just not compatible with the others. So if you are doing a project using C because it's the only practical option (controllers in vehicle engines being a paradigmatic case), it's good that there is a set of practices that, even if they make software 10x as costly, are quite good at preventing resulting deaths. I don't think this

Unit testing code coverage: another shaman practice, valuable but providing no guarantees. The space of program states is so combinatorially non-linear with respect to the space of its source code (read: Entscheidungsproblem) that ensuring 100% line-wise code coverage is just a sad excuse of an effort to keep the whole building from crumbling. Better than nothing, still guaranteeing nothing.
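To make that coverage point concrete, here is a minimal, purely illustrative Python sketch (the function name is made up): one test that executes every line of the function, so line coverage reports 100%, yet it never exercises the one input that crashes it.

  # Illustrative only: full line coverage, yet the bug survives.
  def reciprocal_gap(x):
      # A single line of logic; any one call "covers" it.
      return 1 / (x - 3)

  def test_reciprocal_gap():
      # This test runs 100% of the lines above...
      assert reciprocal_gap(4) == 1.0

  if __name__ == "__main__":
      test_reciprocal_gap()
      # ...but reciprocal_gap(3) still raises ZeroDivisionError.
      print("tests pass, coverage is 100%, the x == 3 bug remains")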

Bug tracking systems: more good practices, guaranteeing nothing. Every single released software project I have worked on has been released with a ton of open entries in the bug tracking system. There is about as much team politics as engineering going on in a bug tracking system.

DVCS: yes, git is good and convenient, flexible collaboration, and flexible push/pull-based authority policies allow much more functional team dynamics, including fully auditable "sign-off" as part of the process. Defending this as if it provided any engineering guarantee with regards to the developed software sounds funny. Enforceable, auditable process is helpful and necessary, it ensures nothing with regards to the produced code.

Code review: frankly, this is the pinnacle of shamanism in your list. Four eyes see more than two. Eight eyes totaling 80 years of software creation experience are surely going to benefit the quality of the code. I find it indefensible to call this "engineering", no matter whether you include it in the commit/merge flow. It's just useful but totally fallible common sense elevated to policy.

Look, I don't shrug my shoulders, and I am not discouraging anybody from learning from past mistakes. I am saying that all of the above are a nice but tiny start on the way to this field being an engineering discipline. The difference between you and me is that I am convinced there is a set of tools and techniques so much qualitatively better that, once they're available, it will be obvious that the current stage is more or less the stone age of software. They will provide a clear, concrete, enumerable set of guarantees about software and its production, allowing us to operate as engineers. Something which none of the entries in the above list do.


> another shaman practice, valuable but without providing guarantees

This is the same fallacious argument you keep making over and over. You dismiss tools that provide guarantees as "not practical," while calling techniques derived from experience "voodoo."

You obviously have no experience in construction or civil engineering, have never looked at a building code book, and generally have no idea what engineering is:

> The day software development has turned into an engineering, you can bet the contractors who get awarded big contracts like the Obamacare web site would make sure they spend the $$$ to apply the practices which have been proven to ensure software projects become successful

Just like Boston's Big Dig.

> MISRA-C: one entry which is just not compatible with the others. So if you are doing a project using C because it's the only practical option

No one in the real world ever has to retrofit or maintain old buildings and infrastructure projects.

> The space of program state is so combinatorially non-linear with regards to the space of its source code (read: entscheidungsproblem) that ensuring 100% line-wise code coverage is just a sad excuse in getting the whole building not to crumble. Better than nothing, still guaranteeing nothing.

Yeah, and finite element models are a perfect representation of the real world.

> Enforceable, auditable process is helpful and necessary, it ensures nothing with regards to the produced code.

I guess all those CAD data management systems don't help engineers design good buildings either.

> Code review: frankly, this is the pinnacle of shamanism in your list. Four eyes see more than two. Eight eyes totaling 80 years of software creation experience are sure going to benefit the quality of the code. I find it indefensible this as "engineering", no matter if you include it in the commit/merge flow. It's just useful but totally fallible common sense elevated to policy.

Just like structural engineering reviews of architectural plans are obviously useless. No way would a structural review by experienced engineers have helped prevent the Kansas City Hyatt Regency disaster.


> You obviously have no experience in construction or civil engineering, have never looked at a building code book, and generally have no idea what engineering is:

I have very little experience, from an architecture project decades ago; I just described that experience a couple comments above, and I haven't claimed any more than that. I don't know what reasonable claim you could have to affirming I have never looked at a building code book: I more than looked at one, I implemented in software the rules in there following the direction of an architect, and was generally amazed when I saw how real engineering works. I also spent five years attending an engineering school before working in large "software engineering" projects for 20 years, which I don't think are an instance of engineering at all, but hey.

Enjoy your faith in software engineering, your ad-hominems, your cynicism, and your real engineering career. Good bye.


Software engineering is a relatively young field. However, to say we don't properly understand the fundamentals is absurd.


Of course that's your take, definitely not mine. Not easy to discuss though: the obtuseness in today's software building won't be obvious until we discover the principles that turn it into engineering.

In ancient Egypt, pyramid-building must have seemed the unbelievable pinnacle of human achievement, only surpassable by ever higher and larger pyramids. Only now do we see the primitiveness of those works (notwithstanding their merit and value).

It is my take that our current level of understanding of software building is similar to that of architecture when pyramids were built.

Fortunately, progress is now much faster thanks to the many tools that help experimentation and communication, and we should be approaching somewhere reasonable in just a few decades.


In what way does the work of Alan Turing etc. long ago not constitute a firm fundamental understanding of the field?


In my view, Turing's results are equivalent in computer science (an important part of software creation) to Pythagoras' theorem in geometry (an important part of architecture): incredibly insightful, fundamental, everlasting and useful. Used directly or indirectly in all early works, respectively, in software and architecture. And only a tiny part of the foundational understanding necessary for really mastering the discipline, a level I think we still haven't reached in software.


They are more of an understanding of what theoretically can be computed, rather than anything to do with how to do it or how to efficiently do it. I study lots of theoretical computer science, but it doesn't intersect with software engineering much. Software engineering tends to more relate to project management, operations research etc.


Define "understand the fundamentals".

If you mean "the theoretical underpinnings", then, yes, we think we do. Turing machines and Church calculus provide solid theoretical underpinnings, that (as far as we know now) are complete. (Of course, the future could show us that they are incomplete. That's the problem with knowing whether your current understanding is proper - you don't know what gaps the future will reveal.)

However, if you mean "we know how to build programs so they work", there's massive empirical evidence that says that we don't. (That is, we can do it... sometimes. And sometimes we can do it after it takes five times as much money as we expected. And sometimes we can't make it work, ever. And we can't tell up front which will happen.) So in terms of this being engineering - as opposed to computer science - saying that we don't properly understand the fundamentals seems like an entirely reasonable statement.


I quite agree with you, but not completely. It's true that the Turing and Church fundamentals are complete, in that anything we describe in computation can be expressed in terms of them. But I think we are using a very small subset of all Turing-Church based computing, and refinements of this understanding will provide a qualitatively better algebra for the types of software we really create. This will make it easier both for engineers to create software, and for end-users to do things that nowadays only programmers can do.

Both ends you describe (computer science and software engineering) are to me part of the same continuum. We still have to articulate most of it with a coherent theory.


A mechanical engineer knows about the properties of the metals, plastics, etc. he uses. A civil engineer knows about strengths of steel I-beams, etc. An electrical engineer knows about capacities of cables, switches, etc. So, these engineers can do real engineering and have transmissions run, electric power networks provide the power, and tall buildings and long bridges stand.

A software engineer knows about the capacity of a server? With what I/O rate, TCP/IP data rate, memory locality of reference, independent threads, memory that can be effectively cached, the memory usage, errors and their consequences?

E.g., one of the problems in optimizing the assignment of programs to servers in a server farm is knowing (1) what the servers can do, also considering the other programs each server is running, and (2) what resources the software needs; it's too tough to know either, and then, in a significantly large and complex server farm, to fully exploit this data for full optimization.

Software engineers are working with, at best, rubber specs on both the resources they have and the resources they will be using. So they can't really do engineering on the work; e.g., they can guarantee next to nothing. Or, would you want to bet the life of your wife or daughter on some software doing its work without errors and on time? Neither would I!


I agree. Sometimes we do take the reductionist approach, take the Apollo guidance software or seL4 as examples. However, when winging it gets you results that are almost as good in a fraction of the time, you can't be surprised at the prevailing method of software 'engineering'.


Architecture, engineering, and the law are all well over 6,000 years old.

People have changed less (ed: more slowly) than you might think. Consider: people were applying makeup 6,000 years ago, and there is some evidence the practice is ~100,000 years old.


I just threw an approximate guess of 2-3k years. If it's 6k, more to my reasoning: there's been so much time and effort to learn those that proper principles are now well known and can be taught and applied repeatedly, reliably, affordably.

I don't think people have changed at all, and I didn't say it anywhere above, so I don't understand why you assume I think people have changed significantly.

Our knowledge in a few areas has grown significantly though (read: crafts, science and technology). I don't think anyone reasonable would disagree, but who knows.


No offence intended, I just noticed a lot of those numbers were fairly low. Like where you say dentistry is centuries old, which is clearly true, but "Remains from the early Harappan periods of the Indus Valley Civilization (c. 3300 BCE) show evidence of teeth having been drilled dating back 9,000 years." http://en.wikipedia.org/wiki/Dentistry

So it's 90+ centuries old. Which is one of those "wait, what?" moments for me that I like to share.


That's fine. I don't know the exact details of all those disciplines, I like history, but my knowledge of it is limited. I was within reasonable orders of magnitude of the real dates, or lower, which helps my argument that software engineering is incredibly young, which is the whole point :)


The earliest known megalithic construction site is 11 thousand years old: http://en.wikipedia.org/wiki/G%C3%B6bekli_Tepe

The problem is that there has been no continuous architectural tradition. Most of the advanced Roman building methods were completely lost around the 5th century (concrete was not rediscovered until the 18th century). The Egyptians around the time of Cleopatra had no idea about the construction techniques used to build the Great Pyramid of Giza (this is why the Greeks assumed the pyramids were built by slave laborers). This is not surprising considering that Cleopatra lived closer to the first moon landing than she did to the construction of that pyramid.


At the smaller scale. There is a continuous tradition in home construction over that time period. Further, Japan for example had an independent tradition unaffected by Rome's rise and fall.


Perhaps because you can learn to code without several years of intense schooling. The bar to entry is significantly lower.


You can learn to clean teeth without several years of intense schooling; you cannot learn periodontal surgery in that time, nor by "practice", in a reasonable amount of time or without losing a few patients.

What is interesting to me is that computers have gone from giant obscure machines in a room, leased by large corporations, to being all over the place in your house. There is a gradation of expertise and capability, as there is with cars or other complex systems: from 'tinkerer' (usually a hobbyist) to 'mechanic' (who earns money adjusting and fixing) to 'engineer' (who earns money designing from scratch). They also come with different financial liabilities.

And that last bit is something that computers have largely avoided by consistently disclaiming all warranties. When that changes, and programmers (or their employers) are held liable for the incidental or consequential damage caused by their bugs, you will see a much stricter code for hiring and employing people who write code that runs on other people's computers.


> When that changes, and programmers (or their employers) are held liable for the incidental or consequential damage caused by their bugs

In a way, employers already are - when selling a product, the contract will define SLA and if that product doesn't "live up to expectations", then fines or other penalties will be used against the seller.

Also I don't think creating a good bug-free product is a matter of good programmers, as it is a matter of good testers. While both are obviously important, bugs will always happen and it is up to the test process (i.e. it's up to the management to design proper process) to catch as many of them as possible.


> And that last bit is something that computers have largely avoided by consistently disclaiming all warranties. When that changes, and programmers (or their employers) are held liable for the incidental or consequential damage caused by their bugs, you will see a much stricter code for hiring and employing people who write code that runs on other people computers.

My understanding is that life-critical systems have a pretty high bar today.

(Also, since we're on the topic - what do you think of having some level of Professional Association ala the Bar or the Professional Engineering association)


> (Also, since we're on the topic - what do you think of having some level of Professional Association ala the Bar or the Professional Engineering association)

I'll be interested in comparing defect rates between Professional Association members and non-members, and in how long such comparisons remain legal and not covered by NDAs and professional codes of silence.

If every programmer has to join, we've just killed the field.


> what do you think of having some level of Professional Association ala the Bar or the Professional Engineering association)
I think that at the point where warranties are required, such a certification will come into existence because companies will demand it.


Hypothetically speaking, if there was a way to conjure patients to practice for years the same way there's a way to download and tamper with software then yes, you could become as skillful as someone with years of practice because that's what you'd have.

Your example from hobbyist to professional has real physical constraints that are huge for an autodidact to overcome, like "where do I get a constant influx of patients to practice on?", and that's what university solves. Building software only requires "own a computer and have internet access".

As a self-taught developer, my reasons for not attending university are purely financial. It's basically a subtle way to gatekeep lower-class "peasants" from following their own "brain craves" for knowledge and ascending the societal ladder, under the guise of a higher moral cause, specifically the appeal to professional worth. Tough luck for you and for anyone else from your social stratum: that's going away sooner than you think, regardless of how loud your barking gets.

But what is sadder is the fact that there are, at this time, thousands of people trying to disrupt education, and yet the highest rated comment in a place that should be focused on disrupting things actually tries to grip even tighter. That's what's sad.


This comment might not float to the top of the thread, but I'll post it anyways.

When dentistry started out, there likely were more than a few dentists who learned this way. There are still, on occasion, unlicensed dentists found practicing dentistry with decently successful businesses.

Software development doesn't have the Software Development Association protecting the practice like dentistry and medicine do, for their respective associations.


There is a line between "professional programmers" and "dangerous script kiddies."

You need a lot of clout to draw that line, you are going to upset a lot of people on the bottom half of the stratification, and you really need to work hard to convince a lot of programmers that creating a professional organization is a good idea, for some unknown reason.


Because software development isn't a small fixed set of problems on a single platform?


It's tough when you're reading Hacker News or spending a lot of time around technical people.

Technical people are, necessarily, very adamant about the technologies they use. When you're first starting out, you just want the "best."

Among the most common misunderstandings for non-technical people is what a programming "language" is. You don't realize that almost all programming languages are made up of very similar constructs. For example, knowing what I can do with a string in JavaScript and transferring that knowledge over to Ruby is a thirty-second syntactical exercise. Non-technical people might imagine that you have to learn everything over from scratch. That's probably a result of etymology; when you hear the word "language" you think Russian vs. Spanish - completely different alphabets, concepts, grammar structures, etc. It would make more sense for those who can't program to think of it in terms of "dialects" or something like that, instead of "languages."

It doesn't actually matter what language you learn first, even if it's (god forbid) PHP. But we don't tell that to would-be programmers enough.

I probably wasted a month of my time because I started using zsh and oh-my-zsh on the recommendation of some guy I talked to at a meetup. He loved a tiny aspect of the flexibility of the prompt's highlighting. I barely knew how to use the command line, so heaven knows I didn't understand what happened to my $PATH when I dropped whatever the GitHub repo told me to into ~/.bashrc instead of ~/.zshrc.

The time I really started to learn was when my company got into an accelerator, and all of a sudden I was the de facto front-end guy. The only CSS I had ever written was tweaking colors on my blog, and I really had no idea what I was doing. But that didn't matter - I had to build, and it was a real project -- one that was seen by tens of thousands of people on day one. It doesn't have to be that stressful to learn, but you have to build something and solve challenges, learning along the way. There's really no other path that works.

So, my advice to budding programmers or those who may learn to code: Pick a language/framework and don't move on until you are fairly adept with that stack. Your tech buddies may mock your technology choices, someone will say you're an idiot because "this would be so much cooler in Lisp," but you don't have to be writing functional Haskell when you're learning to program. Take things from beginning to end, start to finish, and start changing technologies once you are well versed enough to understand the shortcomings of what you're currently using.

I really wish someone had sat me down and told me that when I started. I'd probably have saved six months of after-hours and early morning struggling.


> It doesn't actually mater what language you learn first

OK, start with Prolog. Now move to Ruby. Then Haskell, and include some SQL in that as well, somehow.

Now write me a program in APL.

Languages within the same paradigm are mostly similar. But there are a lot of paradigms, and some concepts don't transfer well at all. (Quick, what's the equivalent of an anonymous inner class in Prolog?)


This!

...and moving from JavaScript strings to Ruby strings is far from a "30-second syntactical exercise", unless you ignore the fact that:

* Ruby strings don't necessarily have all the same encoding

* Javascript strings don't implement all of Ruby Strings' methods

* Ruby Strings are mutable, while JS's are immutable

* R̶u̶b̶y̶ ̶d̶o̶e̶s̶n̶'̶t̶ ̶h̶a̶v̶e̶ ̶C̶h̶a̶r̶s̶,̶ ̶J̶a̶v̶a̶s̶c̶r̶i̶p̶t̶ ̶d̶o̶e̶s̶ (edit: brain fart... I probably wrote too much Clojure[script], even if char literals in cljs just yield strings again)

(Or you use Opal, which doesn't abstract Javascript Strings much)


Totally agree! You'll often hear me talk about how terrible of a language PHP is, but at the same time, I'm forever grateful for PHP. It was my first language! It was simple, but most importantly, I made things work! I gained more experience with each little script I wrote until I finally felt comfortable enough to branch out.

Now I spend most of my time in Clojure, Ruby and C++, but, if it wasn't for PHP, I might not have ever felt that itch that led me to pursue a career in software development.


I think it's possible to have preferences without resorting to ad hominem. Sometimes it seems that just having a preference is enough to associate you with the assholes. The truth is, every group of every thing has assholes, or groups of people that perceive assholes. Every group has people that have preference, and people that don't. Every group has people that have opinions, and may voice those opinions with certainty when they actually have no certainty, and are aware of this, but don't know how to communicate it without creating logical paradoxes.

The people elements of communication are not easy to parse correctly. You can never truly know how other people think; you can only assume based on experience collected a priori, which may be altogether constructed on a false premise that initialized the pattern of thought construction.

I wish I had known the things I know now, 15 years ago. But I don't. That's part of living. We learn and grow because of our experiences. It can't be learned in any other way.

Personally, I just ignore assholes, or if I choose to engage, I learn to play their own game. Then I typically stop judging them.


Great advice, thank you for this.


1. Because many people can easily afford the tools you need to learn programming (like the author said, a used MacBook, or any computer that can run Ubuntu). I'm pretty sure getting a hold of all the gear you need to be a practicing dentist will be much more expensive.

2. Because you can probably learn enough software development to do something useful in several months, maybe even enough to succeed in an entry level job. I think learning enough dentistry to actually practice it will take longer.

3. Because programming is much more lax about credentials. Sure, many employers require a CS degree for anyone they hire, but many do not.


> So why don't posts like OP's seem as absurd as mine?

I think the Fine Article does seem absurd. And while he doesn't come right out and say it, I think he would agree that in hindsight his approach was absurd. The whole second half of the post is about how actual programmers get actual work done, and it's nothing like the first half.

The interesting question is why did he think his original approach was reasonable? Did he not talk to an actual software engineer before embarking on his journey? Sure, something like his original path can be successful for some people, but is that a reasonable expectation?


Very funny!

Yes, there is a professional pathway to follow to be allowed to meddle with someone's teeth.

Once you've got that far, however, you're free to carry on much as you want.

Dentistry has its own religious wars, the acronym TMJ springs to mind.


Because there aren't a million online tutorials for learning dentistry and you need more than a computer and an internet connection to start learning.


Basically, the problem is that no sane person will let you mess with their teeth unless you have the proper credentials, and experimenting on oneself can be painful enough that almost everyone will quit after their first equivalent of an off-by-one array-access segmentation fault.


Because the learning process for medicine is completely different. 8-year-olds armed with QBasic can make a computer do cool things. No 8-year-old can study or even try out doing anything interesting in medicine by any meaningful approximation. Hell, even chemistry sets are pretty much off-limits.


> "I dunno, I think what we do is every bit as professional as dentistry."

Except that you're not dealing directly with a human being's health and anatomy? You must be trolling right?


I like the idea to choose one language, but... Start with Python. You're going to hear Ruby this and JavaScript that and Swift the other -- but seriously, I'm from the future. You should start with Python. It's essentially BASIC for this century: a great place to start learning the basics without being too weird. There's plenty of time and once you have those cold, you can investigate other languages, including the fun ones that have separate compilation steps and may produce segmentation faults if you're not careful.


The key thing about Python is that it's really plain. It's a straightforward procedural language - values, functions, names, calls, and that's about it. It has a simple object system, and some simple functional features. The whitespace business is superficially kooky, and there is a scary amount of metaprogramming and other craziness available, but people tend not to use it.
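
A toy illustration of my own (not from the article) of what that plainness looks like in practice:

    def average(numbers):
        total = 0.0
        for n in numbers:
            total = total + n
        return total / len(numbers)

    scores = [70, 85, 92]
    print(average(scores))

Values, names, a function, a call - and that's about all there is to it.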

As such, it's a really useful preparation for learning other languages.

If you have the basic concepts from Python, then when you learn Ruby, you just learn some syntax and the special things that make Ruby distinctive (metaprogramming, monkeypatching, ubiquitous gemmery, Rails). When you learn JavaScript, you just learn some syntax and the special things that make JavaScript distinctive (callbacks, artisanal object systems, the interplay of JS and the DOM). You never have to learn to do without some special thing you've learned to depend on, because Python has no special things.

Well, that and it's incredibly easy. And it has a really strong batteries-included standard library. And it has a really friendly, helpful community. But mostly the plainness.


> the special things that make JavaScript distinctive (callbacks, artisanal object systems, the interplay of JS and the DOM)

If you're coming to JavaScript from pretty much any other language, prototypal inheritance is probably going to be the biggest paradigm difference to wrap your head around.


The thing is, in the JavaScript I've seen, there is no direct use of prototypal inheritance. Instead, there's some mechanism, usually involving a method called "extend", which implements something that looks like classes and instances on top of prototypes. This is what I meant by "artisanal object systems".


Yeah, aren't most universities using Python for beginners now?


The overall message of this post is great; I've wasted plenty of time learning a shallow amount about "cool" tools.

One thing I wish I knew starting out was how to create a basic VM. I shudder to think of all the time wasted thinking I was a sysadmin genius for dual booting Linux and later doing something stupid to my hard drive. VMs give me my Linux environment without the pain from those moments when I think to myself "maybe I want to be a kernel developer."


Really good advice. All mistakes that I seem to make repeatedly. I've started trying to learn one thing and then jumped over to something else (e.g. Node or Rails) because it was what the cool kids were doing.


This blog post is an insult to every developer out there. The blogger actually wants us to believe that he just overheard a conversation, went home, googled whatever he heard, and in 7 months became an expert and landed a job. So in my professional opinion (and I do get to say that, as opposed to the blogger, since I actually studied computer science and have been a developer for 20 YEARS), his story is a work of fiction, or the company that hired him is one of those startups that just hire anyone who walks in the door, like most of them these days. A developer needs time not just to learn a language (trust me, once you learn one you've learned them all; you just need to get used to the new environment and slight changes in syntax). What you need time for is to learn the LOGIC of our trade, how to fix errors, and how to improve/optimize your code. And all of these things come with time and effort, and no amount of Facebook posts or reading books could teach you that.


> or the company that hired him is one of those startups that just hires anyone that walks in the door like most of them these days.

Ding ding ding.

You could replace "startups" with "companies", though, because there are loads of companies of different ages that would hire someone with only 7 months of self-study of programming.

'Cause the thing is that the industry is starving for software developers. Those who are not employed either live in rural areas without jobs in general, aren't trying, or are just unbelievably bad at job interviews or at work overall.


And even considering all of this, I highly doubt any company that pays more than $500 a month would hire someone with 6 months of experience. That much time is not even enough to master HTML and CSS.


This is why having a mentor/tutor is important - if you can focus for months on the right things you'll learn quickly. Focusing on the wrong things means a lot of that effort is wasted.


This actually mirrors my own experience a lot, except for a few differences. Biggest one being I haven't actually looked into employment yet.

I too bounced between Vim and Emacs until recently, when I decided to stick with Emacs. I too switched to Dvorak, later switching to Colemak. I used typing tutor software with which I was actually able to get up to a respectable 50-60 WPM, if I remember correctly. However, I eventually switched back to QWERTY after growing tired of keyboard shortcuts never working the way they were supposed to. Sure, I could rebind them in my favorite editor (and even that was a pain in the ass), but each time I installed something new I'd have to do it again. One thing I am grateful for from Colemak is rebinding Caps Lock to Control - what a great idea.

I've also been dabbling in C, Lisp, C++, Python, Bash, and a few others, but I never became really good at any of them. It was more than just the OP's 50-page dabble (something like 400 pages into C++ Primer), but I feel like I can relate. Sure, I can write basic programs in all of them, but I didn't have a "default", so to speak, that I'd mastered. Only recently did I make the decision that Python would be that default. The reasoning behind it is simple: I already kinda knew it and it fit my use case well. I just wanted to do stuff with the language, and Python makes it easy with its wealth of libraries and (in my opinion) intuitive structure. Languages like Lisp and Haskell still have a place in my heart because of how elegant they are, but I just feel more productive in Python.

That's my sort-of-ongoing story of getting at what I feel is the same kind of focus the OP was talking about. Now if only I could just settle on a Linux distro instead of hopping around every few months (currently messing around with Slackware, though I suspect I'd be better off switching back to Ubuntu, which I was using before).


I have been programming for 10+ years (and really more like 15 if you count coding as a teenager), and I change languages/editors/distros all the time. I have never switched keyboard layout (this honestly sounds to me like the biggest waste of time I have ever heard of). I am just saying that it's not about finding some random piece of technology that you stick with forever; 90% of the time it's about using the right tool for the job, or contributing to something that already has momentum.


If you're already a decent QWERTY typist I agree it is a waste of time to learn a new keyboard layout.

But if you're a hunt-and-peck QWERTY typist who decides to pick up touch typing, choosing an ergonomic layout like Dvorak or Colemak makes sense. You're essentially starting from zero anyway, so you're not wasting any more time on the new layout than you would on the QWERTY layout.

This way, you retain your two-finger QWERTY skills while learning a layout that minimizes the odds of developing RSI.


I suppose so, it just seems that the odds that you'll need to work on a machine that isn't set up for you are probably going to be pretty high at some point. I agree that RSI is probably something we should avoid, but other than that, things like typing speed have zero effect (in my experience).


It takes a trivial amount of time to set up these layouts on most machines.

I agree that good typing skills do not necessarily make you a more productive programmer. But I do find that it is a skill that complements other programming skills nicely. It's also nice for countless other parts of my workflow that do not involve programming.


Let me put it this way: I have never met a programmer whose productivity was at all based on their typing speed.


I think you misunderstood me. I never claimed anything like that. In fact I suggested most of the benefits of touch typing are not related to programming.

That being said, you will never know the benefits of touch typing if you can't touch type. You seem to be threatened by the thought of it being a valuable skill so I won't try to convince you otherwise.


I can touch type just fine. I'm not threatened by the concept of touch typing; I am making the case that it's a waste of time to try and learn to touch type another keyboard map. While it may have some benefit, there are literally thousands of things you can do that will be much more useful than learning to touch type Dvorak.


We've come full circle to my original reply to you. If you are going to learn to touch type, you are starting from zero with QWERTY anyway, so using a new layout wouldn't add any additional time.

> there are literally thousands of things you can do that will be much more useful than learning to touch type dvorak.

You could say this about any skill or hobby. You are not the arbiter of usefulness. Plenty of people have derived utility from learning to touch type Dvorak or Colemak. You're writing them off out of pure ignorance.


> I have never switched keyboard layout (this honestly to me sounds like the biggest waste of time I have ever heard of).

Being someone who did that, yes, I would say so and recommend against it. However, one thing I can say about it is that it (Colemak, to be specific) felt legitimately more comfortable, with all the most frequently pressed keys on the home row. If you're suffering from RSI, it might help.


Focus is hard, but it's great that you learned this in under a year. I still have issues with focus now and then. Like when I decided that I was going to spend 2 hours after work for a week on new tech and try to pick up Clojure... except that I'd never used a Lisp, and after a myriad of compiler errors that I was plowing through, I eventually gave up, and I know no more Clojure today than I did then. In the meantime, I passively continue to improve on those things that I do know (PHP/Python/JS).

I had the same issue with Emacs/Vim, and while I use Vim now, I don't consider myself a Vim power user.

Lastly, the OS... I started with Ubuntu, wasted a week trying to get CentOS minimal up and running (I couldn't), then ran with Kubuntu and have used it for about a year. Fedora is growing on me right now since I just spent the past week downloading/destroying/upgrading VMs, but I think that was more about trying to find the right flavor of Linux for what I need than anything else (and with the Faience theme, GNOME 3 is pretty good, although slow in a VirtualBox VM since you can't have more than 256 MB of VRAM).


I can definitely recommend your key point in the article: focus. It is easy today to lose focus since there are so many different frameworks and programming languages. Even if you just focus on JavaScript, there are so many different frameworks (e.g. Angular.js, Ember.js, React.js) that it's easy to get distracted and end up only a little bit familiar with each.

Great article.


I've worked as the main developer and designer at an online publication using Drupal, then WordPress, among many other duties including communications and social media. I learned enough PHP and HTML/CSS/jQuery to write custom themes and a couple of custom plugins. Recently I started working on a side project as a technical cofounder (called SpareChair) and have been working with a design/dev shop to build the site with Rails. The TeaLeaf Academy has nailed the tough problem of learning to code with Ruby and Rails online. It's a Goldilocks combo of written and video tutorials and exercises, weekly live sessions, and nearly instant responses to student questions from the founders and TAs in their project forums. It's focused on producing a coder ready to sit in that dev bullpen and hang with everyone from day one on the job, and it accomplishes that goal. And it's very affordable; the ROI is amazing. Anyone looking to get into Ruby and Rails should check it out.


Very very good article, although the latter part is a bit too focussed on web dev and dynamic scripting languages.

I personally think every programmer should know some C. Basic things like what the stack and the heap are.

Edit: Ok, probably not in the case of absolute beginners, but after a year or two...


There are a whole lot of things every programmer should know because they'll have a profound impact on their code no matter how far up they sit in the abstraction chain.

Even things like memory alignment and cache lines can bite you really badly if you don't know about them and order your loop the wrong way around :)
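
You can see it even from Python; a rough sketch with NumPy (the array size is arbitrary, and the exact timings will vary by machine):

    import time
    import numpy as np

    a = np.zeros((5000, 5000))   # NumPy arrays are row-major (C order) by default

    # Walking along rows touches memory sequentially, so each fetched
    # cache line gets fully used before the next one is needed.
    start = time.time()
    for row in a:
        row.sum()
    print("row-wise:    %.3f s" % (time.time() - start))

    # Walking down columns strides across memory, so most of every
    # fetched cache line is wasted; this typically runs noticeably slower.
    start = time.time()
    for col in a.T:
        col.sum()
    print("column-wise: %.3f s" % (time.time() - start))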

Date and time handling, Unicode, and data structures are also common areas for misconceptions and sources of error. Of course nobody needs to know everything, but having a lively thirst for knowledge always helps, especially once you've gotten over the initial confusion of learning the basics. There's just so much interesting stuff out there, and a lot of it will help you improve even if you don't end up using it right away / at all.

Even if you're just swimming on the surface, it's always a good thing to know at least the 1-3 meters beneath you, just in case something happens or you get stuck in some seaweed and struggle to get out on your own.

I've actually had quite good experiences teaching people a few things about ASTs right when they start writing code. Even though I only gave them some very basic lessons about how the "text" is eventually transformed, it really helped them a lot in understanding why certain text does certain things.

In the end, it's always very hard to play the game successfully if you don't know the rules by which you have to play. And a little can go a long way.


Do you mind going into memory alignment and cache lines and what would be good to know? I feel like I ignore most of that unless I am coding in C. I mostly do Python or JavaScript, so does that stuff matter when you are that high up in the tech stack?


I think C is good to know and try if you are going through an academic program in school. For a lot of people just starting to learn to code (while juggling other responsibilities/jobs/etc.), learning C up front can be overkill. Being able to implement your idea as soon as possible is of utmost importance. They can always dabble in a deeper understanding of language internals and theory once they feel confident enough in the one toolset they have chosen.


When I were a lad, everyone you'd meet in the industry had started out doing C on Unix and had then specialised: maybe they liked Unix more and became sysadmins, maybe they liked C more and wrote Motif apps, or they branched further out into C++ on Windows. But everyone had that core, fundamental knowledge. Nowadays people go straight into the very abstract stuff, and have nothing to tie it all together with.


C on UNIX? You were lucky! My dad had to toggle the bootloader into the front panel every morning before he could even run the assembler!


Maybe, but I'm only talking 15-20 years ago. So much has been lost in a relatively short time.


For certain definitions of "should", sure. Like: I _should_ get some exercise tomorrow. I should take out the trash today instead of tomorrow. A very optional should.

I've got 4 years of professional software experience and the only C I've written is a couple of hours on an Arduino device. It's just not that needed or relevant to my work as a web developer. I don't think the article is too focused on web dev; that is where self-taught developers can most easily insert themselves.

FWIW, dynamic scripting language is redundant.


The author makes a great point, and it's something I've been struggling with. We have so many great tools, and with tech constantly improving it's hard to focus on a single topic area. Personally, I've gone through so many tutorials on so many different languages and frameworks (Obj-C, JavaScript, Python, Ruby, etc. etc.) but have never really focused on a specific stack. I think the most important thing for any new developer is to choose an area like web development or iOS programming and focus on the fundamentals of programming. It's just like learning a foreign language: you're never going to be able to speak fluently until you do it every day for months or years.


I like this article because as someone who struggles with learning to code, I have found myself making it worse by jumping from language to language, learning the very basics of their syntax. Great, I can FizzBuzz in 4 different languages, but I still can't put together something comprehensive and useful.

Reading this was kind of the reminder I needed. Focus. Pick one, become proficient (with the language and with the concepts of programming), and I guess somewhere down the road, if I feel compelled, try another.

So for now, I'll focus.


There's a difference between learning computation (how to make a computer help you solve problems), and all the bullshit you have to do to build software (especially for other people). The former involves timeless concepts. The latter is full of fads and arguments over tools, broken dependencies and configuration nightmares. The author hasn't actually learned that, yet — he just started working with people who standardized their tool chain for some semblance of simplicity.


Thanks so much for this, been learning and failing with some decent wins for years. Any time there's a clear, focused guide for learning to code I get inspired again.


The last sentence was the most important; the rest should be ignored. Have patience and perseverance, and you'll be able to do anything.

Or, converge onto interesting problems and fail gloriously; the only way to do anything of worth.

The post comparing this to dentistry is precisely what I was thinking, except nowhere near as hilarious. I wish I was intelligent enough to write a Markov algorithm for logic that paralleled reasoning in such a way automatically. Thank you for that.


I believe that your path didn't teach you how to code, per se, but rather that learning to code was a side benefit of learning how to teach yourself. Schools exist because specialization is useful, because having lots of people to help is useful, and because information is diffuse. On the bright side, once you know how to learn, learning additional things is easier, especially new things in the same field.


I am going to be slightly rude, but only because I have done exactly the same thing.

Stop trying to cram n years of experience into a blog post seeking some form of social validation. You are guaranteed to contradict yourself more than half the time. That's because you can't include all the important bits that helped you learn. Every stupid mistake is really your future best friend.


As important as it is to have expertise and proficiency in a single skill, I also believe that the more diverse your knowledge base the easier it is to pick up new things when you need to.

But maybe I am just trying to make excuses for myself, maybe if I learn a new language/framework/IDE every month I will get better at learning new languages/frameworks/IDEs.


> I'd spent months sitting alone in libraries and cafes, blindly installing tools from the command line, debugging Linux driver problems, and banging my head over things as trivial as missing parenthesis.

Yet this illustrates a very important thing that most proponents of "learning to code" neglect entirely.

Writing code isn't an isolated black box. To write code is to interact with the extremely entangled software environments that we have cumulatively been building up for over 60 years now.

Anything beyond Fibonacci sequences will require you to spill out into lots and lots of domain-specific areas and subdisciplines. A proficient programmer in general will also need at least basic skills in system administration.

To build a real useful application intersects with areas such as network protocols (which is immeasurably vast, depending on what layer you pick and how much you abstract), widget toolkits (and the wider quandaries of computer graphics, windowing, displays, etc.), cryptography (which pretty much intersects with most bodies of computer science), the workings of the kernel, dynamic linker (thus object files and libraries), the C library...

Profiling an application will likely require you to learn some complexity theory. I/O-bound applications will require you to learn how file systems, I/O schedulers and disks work. System programming is a rabbit hole of its own, with POSIX alone just about being an independent branch.

The bottom line is that programming can mean just about anything. And since programming itself is not innate, but a member of the tangled web of computing in its own right, programming is thus a vast body of theory itself, a lot of which one will need to learn unless they intend to stay frozen.

Paradoxically, the more we try to make things so that people won't have to be specialized in order to use them, the more we necessitate huge drifts of specialization in the people that want to do more than cursory tasks.

While picking a few components and sticking with them is probably necessary at least in professional environments to maintain sanity and interoperability, it is not very realistic for hobbyists and learners.

Of course, if you just want to write macros to automate tedious tasks, you can get by with the basics. But that is pretty much something that will end up being born from necessity, and likely self-discovered rather than taught through compulsory means.


How did you land job interviews after being self-taught? I'm doing the same right now, but I assume I need to complete some sort of bootcamp or what not in order to actually land an entry level job as a web developer. How do you get noticed when you don't have formal experience?


Have something to show. I don't know what technologies/languages you have experience in and/or are looking to interview in, but having a single example of a fully completed app/webapp will get you to the interview portion in a fair number of places.


The problem is that crash-course developers are too satisfied with just making something work and most online tutorials are focused on language syntax. If you're learning to code, great for you... but I'd be scared to be part of your team, maintaining your code.


Yeah. There's a huge difference between learning "how to code" and learning how to actually design good software. Lots of focus on the former, almost none on the latter.


I'll cut the young 'uns some slack here. You need to learn how to assemble curly brackets and semicolons and whatnot into a machine that actually does something before you can learn how to design machines. The art of design is the difficult and important part, but it's impossible to learn it before learning the basics of construction.


For the past year I've been learning to code in Python. My basic methodology was this:

I had come up with an idea for a niche product - I reckoned at the time that something like this would sell. All I needed to do was code it up. At the time I had been reading about Python and had tentatively prodded and poked at it, as I wanted to find out why a lot of my acquaintances were always scoffing at this language... "I friggin' hate whitespace!" seemed to be the number one reason for not bothering with it, which was an instant reason for ME to get to know it, as I'm the kind of guy who likes to see what all the fuss is about.

So I had an idea, and I had chosen the language to learn and implement the idea in. That was the easy part. The hard part was to learn enough Python in order to implement it.

I basically started off with nothing and worked my way up from there by asking "How do I do X with Python?"

I knew in my head what the program should be and what it should look like - "it should be a GUI", for example. I'd already used the Gtk bindings for Python, but had gone off Gtk (as I've written before in a post on HN). So I decided to give Qt a try. The answer to that is to use either PyQt or PySide. I decided on PySide as I couldn't afford the PyQt commercial license (I want to sell my application after all, and my budget is next to nothing).

After a while, I was getting used to how Qt Designer works for designing layouts.

Next question was "How do I get the GUI widgets to send signals to my Python code?" followed shortly after by "How do I see signals from dynamically created tab widgets?" - I ended up both asking AND answering my own question on StackOverflow. ( http://stackoverflow.com/questions/17344805/how-to-see-signa...)
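
(For anyone curious, the answer to that first question boils down to something like this minimal, made-up PySide sketch - not my actual application code:)

    import sys
    from PySide.QtGui import QApplication, QPushButton

    def on_click():
        print("button was pressed")

    app = QApplication(sys.argv)
    button = QPushButton("Press me")
    button.clicked.connect(on_click)   # new-style signal/slot connection
    button.show()
    sys.exit(app.exec_())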

And so as the days, weeks, and months progressed, I needed to Do Things. For me it was a matter of asking the "How do I do that?" question, researching the answer, then implementing it.

Before I knew it, I was getting increasingly proficient in Python, in Qt, in parsing configuration files using ConfigParser, in loading and saving files, in using QtWebKit in sneaky ways so I could display my company's animated logo using CSS. And so on.
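
(To give a flavour of the ConfigParser bit, here's a minimal made-up example - hypothetical file, section and option names, nothing from my actual app:)

    import ConfigParser   # the module is spelled configparser in Python 3

    config = ConfigParser.ConfigParser()
    config.read("settings.ini")                # hypothetical config file
    port = config.getint("server", "port")     # hypothetical section and option
    print("server port is %d" % port)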

Then I needed a web site. So it took me 4 months to learn enough Django to implement a basic site. I recently launched that site and the product. ( https://xrdpconfigurator.com ) - it's all done in Python, including the application, which was converted to C via Cython and then compiled to object code.

[ And now I've learned another important lesson after all that time and effort - no one seems interested in my product - perhaps I should have open sourced it and asked for donations instead ;) Or perhaps I just need to patiently market the thing better. Or perhaps it is just TOO niche! ]


Likely some people would be interested, but it seems fairly unlikely that a mass market would actually pay money for it.

Once you go from "all Debian and Ubuntu users" subset "those that use xrdp" subset "those that need to customize their configuration" subset "those that are willing to pay money", you probably don't have a lot of people left ;-)

http://www.amazon.com/gp/product/B004J4XGN6?btkr=1 - The Lean Startup is received wisdom on MVP. On the positive side, you've built a product and shipped. Now you can do it again, but with a product people actually will pay money for.


Indeed, and it's actually been a fantastic learning experience in itself.

And yes, I do have one or two other ideas that aren't as niche and more to do with real-world applications needed by many more people - something which I was intending to move on to after I'd completed this one.

Thanks for the link to that book. :)


This was a really good read. I thoroughly enjoyed it. Thanks!


I am still struggling with this problem, but great article and thanks for sharing.


Yes.


Definitely a lot of good points here.


[deleted]


He meant that with regards to Node.js.



