A significant amount of programming is done by superstition (utcc.utoronto.ca)
304 points by mwcampbell on March 25, 2015 | 208 comments



That's right, but I guess we're missing the real problem here.

The real gold are these words:

> You've really read the LSB specification for init scripts and your distribution's distro-specific documentation? If so, you're almost certainly a Debian Developer or the equivalent specialist for other distributions.

Many of us would agree that it's true. But why is it true? Shouldn't that make us suspicious in the first place? It's almost paradoxical, even funny: getting the work done is easier than finding out how to do that work. You see, init scripts or makefiles aren't exactly rocket science. I mean, the things we want to achieve with them are pretty simple: just start some process, just build some project (which means "execute a bunch of shell commands in the right order", not "implement our own superoptimizing compiler").

So when we arrive at a situation like the one we have today, the right message isn't "come on guys, you see what we're doing because of being lazy? we should stop being so lazy!". That is, yeah, we're all lazy and maybe we should stop, but that's not the point. What this situation actually means is that our tools for doing simple stuff are overcomplicated shit, and our documentation for these tools is shit. If anything, it's pretty obvious that a couple of lines of code showing how to do the work are better documentation than a complete list of possible flags and parameters with no examples whatsoever. So no wonder people are reading Stack Overflow instead of man pages. And so on.


> it's pretty obvious that a couple of lines of code showing how to do the work are better documentation than a complete list of possible flags and parameters with no examples whatsoever. So no wonder people are reading Stack Overflow instead of man pages. And so on.

the best man page is one with a nice EXAMPLES section you can jump to quickly with grep. though the permutations offered by many programs mean not every base can be covered.


http://bropages.org to the rescue!


alias aprobros="bro"


[flagged]


Can you expand upon what you mean?


The Rails community at least is infamous for drunken boorishness and generally being gratuitously offensive and unprofessional. Rails is the spiritual capital, if not the origin, of the worst of the brogrammer attitude.


you really can't generalise like that.

in fact referring to more than one person at a time is morally wrong. everybody is different.

i think collective pronouns and plurals are weapons of oppression.


ok bro.


I think for a lot of people reading man pages is not as fun and satisfying as just making things and watching them work. It's a form of instant gratification.


From my own personal experience, I can tell that most of the time I prefer Stack Overflow to man pages simply because I need to achieve one task, and Stack Overflow contains the answer for how to do it. Man pages can teach me how to do all sorts of tasks, and then I have to choose the right one from those I've learned.

On top of that, to be able to use a man page, I need to know the command in the first place, while Google will digest my broken English question and point me to the right Stack Overflow answer most of the time.


I think it really depends. I was at a job where it was like "Hey, these monthly md checks are causing IO issues, let's disable them." So I read the man page for md and yeah... let's not disable them.

This is an anecdote sure, but an important one. Not everyone needs to understand what's going on with everything but I can tell you the ones who seek out the information and really try to grok the systems are rare and can be well compensated for it.

Another anecdote: trying to use ZFS, and in particular de-dupe, without understanding it. De-dupe has its own trail of tears laid out from people who did not understand the memory requirements of it.

Hello, I'm Rapzid. I'm 30 years old, and I'm a man page reader. Sue me.


There is a balance to be struck.

Progress depends on humans being lazy. We have inventions and innovations precisely because humans are so good at finding ways to make their (own) lives easier - usually by eliminating repetitive tasks via some kind of work-around or alternate approach.

There is a well known quote about insanity [0] and blind repetition that occasionally gets associated with idiocy too. If everyone was always starting from first principles, we could argue that humanity, as aggregate, is insane. (Which might still be true!)

Repeating experiments to verify results is good. Doing the same things over and over again likely means you're not learning.

0: http://en.wikiquote.org/wiki/Insanity (search for "different results")


I don't think laziness accounts for that behavior. Need for efficiency and business might, since most inventions are motivated by yet other endeavors which hardly counts as lazy. I think that the most basic ideas involved in hunting, farming and fighting are motivated by the ultimate need to get some rest.

OTOH, rest shouldn't be confused with laziness, which has a negative connotation, because some see idleness as an ill when it is assigned the highest priority before other desires/needs.


I think a better title might be "A vast majority of programming is done at a level of abstraction that this author is slightly uncomfortable with".

If you work in Java or Ruby or Python or C#, or even C++ all day with say, an IoC container for argument's sake, you're certainly not constantly thinking "okay this interface will be resolved to this type, which has these dependencies which will be resolved to these types, which generate this byte code, which generates this JITed assembly code, which generates this machine code, which generates these executions of instructions on the processor and accesses these memories, which invokes these manipulations of these gates and flip flops on this particular part of the ALU".

Doing so would be the mark of an unproductive madman.


I don't think this is about abstraction. Abstraction implies having a deep understanding of the problem domain and encapsulating it in such a way that the implementation details can be ignored and the domain can be reasoned about at a higher level.

I think a better title would be "A significant amount of (shipped, so-called production-quality) code is not fully understood by its authors".


"A significant amount of (shipped, so-called production-quality) code is not fully understood by its authors"

Of course. Time spent fully understanding the code before it's shipped is time that it's not shipped.

This is sort of a Gresham's-Law situation, where "bad code drives out good code". The code that is fully tested & debugged with all situations understood isn't available to the user at first. As a result, they go with the "good enough" solution, build stuff off of it, and by the time "the right thing" comes along, all the momentum has shifted to its crappy competitor. A bunch of crappy software gets built on top of that, and as a result, we end users get the worst thing that could possibly work.

But work it does. Maybe it's worth a shift in perspective to be grateful for the stuff that works rather than worry about the stuff that doesn't quite work right.


Anyone can build a bridge that stands up, but only an engineer can build a bridge that barely stands up.

Good Enough is our main design goal. Anything more than good enough means you're wasting resources.


This sounds like Engineering, but be aware that in mechanical engineering the requirements are well-defined for materials and load. Consider the famous bridge resonance problem and you will see why wind resonance is now a required subject for structural engineering.

ME has an advantage over CSE in this regard because in CS the requirements are usually very poorly stated. About the best and most rigorous requirements you get are from mathematicians, but most devs and managers now certainly view that as wasting resources.

To put not too fine a point on it, I always like to turn to Alan Kay's comparison: TCP/IP vs. the web. When was the last time any human technology scaled as well and as invisibly as TCP/IP? It's like the air we breathe, used without even thinking about it. The web, in contrast, was the work of rank amateurs.

Alas, mathematics is really the "good enough" standard we in CS should strive for just like physics is the "good enough" standard behind ME and EE. Unfortunately as CS opened to the mainstream, I think a deep fear of mathematics led us to view this as "over engineering" even when it wasn't. The results are that the majority of the web is woefully underengineered, requiring far more money and time for inferior products.

We know they are inferior, because even the simplest GUI application has more consistency than a web variation of it. And that's what marketing constantly compares things to when they can't understand why the web sucks as much as it does.

"Good Enough"? Please! For the last 20 years we haven't even come close!


Alas, mathematics is really the "good enough" standard we in CS should strive for just like physics is the "good enough" standard behind ME and EE. Unfortunately as CS opened to the mainstream, I think a deep fear of mathematics led us to view this as "over engineering" even when it wasn't. The results are that the majority of the web is woefully underengineered, requiring far more money and time for inferior products.

As I have stated multiple times in the past, I think the crux of the problem is that software is an immature field that needs to stratify into a proper engineering discipline as it matures. Computer science should be the "good enough" standard behind software engineering. "Computer scientists" should not be the ones actually implementing software systems any more than physicists should be the ones designing cam shafts or laying out circuit boards.

The opening to the mainstream you refer to illustrates the problem. The people in the mainstream should not be studying computer science, and what they practice should not be called such. They are the engineers, technicians, and mechanics of software; they are not the physicists. Forcing all of these strata into the same bucket is doing more harm than good at this point and is likely hampering the field's drive to mature.


What would software programming look like as a true engineering discipline?


(my bias as a C# dev will show here) Most likely a professional association, with a test you have to take to call yourself a software engineer demonstrating basic competency in a couple of paradigms, and a rudimentary knowledge of patterns and practices.

It's 2015, there's no reason that anyone should be writing Big Object Oriented Code, without practicing dependency injection, basic mocking and testing, and other modern development principles. And yet, here we are, with millions to billions of lines of terrible new code written every year.


It's interesting that you mention TCP/IP, because that's the most famous example of a technology that "works in practice, but not in theory". Take a look at Van Jacobson's presentation on the history of ARPANET:

http://www.youtube.com/watch?v=oCZMoY3q2uM

Packet switching was "utter heresy" (20m in) when it was invented. It "wasn't a network, it was an inefficient way to use an existing network". And it almost collapsed 25 years after it was invented; the presenter is famous for inventing the modern TCP/IP congestion control that saved the Internet [1]. The TCP/IP flow congestion algorithm has been redesigned several times since [2]. It works not because it was designed well, but because it wasn't designed and instead evolved over many years with many contributions from people devoted to keeping it working.

Alan Kay is usually who I think of for great ideas that "work in theory, but not in practice". He's done some crucially important work in OOP, programming languages, and GUIs. But note that we don't actually use Smalltalk; instead we got C++ and Java. Nor do we use Dynabooks and Altos; instead, we got Microsoft Windows.

[1] http://en.wikipedia.org/wiki/Van_Jacobson#Career

[2] http://en.wikipedia.org/wiki/TCP_congestion-avoidance_algori...


Good points. We might be focusing on different evidence for good design. Certainly a system has to be tested -- the theory is not enough. But areas of TCP that were designed up front were things like being able to carry arbitrary payloads, including itself! This is what makes ssh and vpns possible. By contrast, many web protocols break when tunneling: soap within soap.

Also, for those unfamiliar with the actual process of protocol development back in those days, it was a wire protocol, which meant formal modeling and testing. Sure, it doesn't catch everything, but the web is far less formal. For example, the w3c originally said it wasn't going to provide an XML parser reference implementation because any graduate student should be able to code it up in two weeks. WTH?! While I don't doubt that is true, in practice it has meant that dozens of slightly different parsers were written, leading to hundreds of slightly different incompatibilities. Anyone who has had to integrate two different XML stacks will know.

I use Ruby, which was inspired by smalltalk; modern Java is also becoming much more functional.

In some ways, it has taken the larger community 20 years to understand Kay's vision. Also, he always said that the systems he worked on were prototypes -- he's commented before that he fully expected real-world systems to have surpassed his long ago.

But now we have Ruby, Node and Rust. Even Java and Spring.io have dramatically reshaped things towards a "smalltalkish" future. So I still put a lot of weight behind some of Kay's observations of the industry.


The web might have been created by amateurs, but it works incredibly well. Sure, we have browser incompatibilities and whatnot, but this is an effect of having multiple independent implementations of the standards, which is part of what makes the web work in the first place.


no, it doesn't. Having multiple independent implementations is a PITA, but that's why you have testing & validation labs along with standards. Take windows graphics driver labs... They test for pixel perfect compliance of output across hundreds of vendor implementations. Contrast that to the web where it took a separate group outside the w3c to embarrass browsers with the ACID2&3 tests. Now separate browsers look a lot closer in output.

Devs are trying to fix these ecosystems: why does React use a virtual DOM? Why do we need CSS resets? Why do we need JS shims and polyfills? Because it's the only way to come close to normalizing the platform.

But have you ever wondered why you expect that no two browsers display the same image? Postscript met that bar and is just as old as the web. Why didn't the w3c base the web on device-independent coordinates instead of this confusing and unpredictable layering of partial scalars and "angle subtended by a pixel on a 96dpi surface at a nominal arm's length from the surface" crap? No one could have made a reference implementation off those requirements, much less a consistent verification & validation suite.

And no offense to TBL, but HTTP didn't even survive first contact with Netscape's vision of shopping carts. Cookies? An elegant solution? Or simply a new hell of tunneling client/server state over a supposedly stateless protocol. HTTPS everywhere requires long-lived sessions as the basis?!? No wonder people are heading towards WebSockets, etc. Webapps are client/server apps -- HTTP was always grossly misapplied to them.

Webdev is hard, not because I'm building beautiful bridges in the sky that are "good enough" poetic balances of constraints while coming in on time and on budget... Webdev is hard because of all the underlying assumptions I constantly have to check and recheck, because I can't rely on them as an ME would (or hell, even as a backend J2EE engineer would). This is why some of us lament that people don't know the stack all the way down: because we have to in order to solve real problems. Every abstraction leaks, but hell, web abstractions are flipping sieves!

No, the thing that "works incredibly well" is not the web, but what's under it that lets us make so very many mistakes and yet keep on trucking.


Regarding Postscript vs web: I'm only talking about device independence. Specifically in the case of pixel perfect layouts. I am not talking about the layout constraint problem, which is extremely challenging no matter the technology. But layout constraints depend on a solid notion of coordinate system, which the web lacks. Device independence gives you that in postscript and SVG.

Besides, Windows faces a similar problem of multiple resolutions and devices. How do they V&V? They set the resolutions the same for certain tests! Even if you do this for browsers, they can't pass the test. Yes, it would be nice if we could have device-independent layout constraints as well, but even the simplest, most constrained test not involving layouts fails. At least now it's close. Before ACID it wasn't even close.

If webdevs can't even rely on their browser coordinate system in the most heavily constrained case, how can they hope to trust it when they try to solve challenging problems of dynamic layouts across multiple resolutions?


Two browsers cannot display the same image if they, say, use screens of different sizes and dimensions. This is the difficult problem that HTML and CSS try to solve, so that the same web page is actually readable both on a desktop screen and on a mobile.

PostScript does not even attempt to solve this problem, so would never work on the web. Unless you mandate that everyone should have screens with the same dimensions and dpi.


>The web might have been created by amateurs, but it works incredibly well.

The fact that it was created by amateurs helped make it work incredibly well for amateurs.

The professional CS alternatives - gopher and the like - were not successful in comparison.

There's a thing in CS where solutions become so clever they become stupid - because the goal stops being task-oriented usefulness, and becomes ideological and formal purity.

It's the process that turns a plain hammer into an atomic pile driver you can only control remotely from the moon by sending it messages using catapulted owls in space suits. It's better at hammering in some abstract sense, but maybe not so much for hitting nails.

Abstraction without contextual insight is one of the most powerful and destructive of all anti-patterns.

On the web the professionals took over from the amateurs, and now web technology is another example of design-by-committee.

It still works surprisingly well because interplanetary owls are kind of fun, maybe, for some people. But is it ever a mess of half-solved problems generating recursive epicycles of complication.


In any case, Tim Berners-Lee wasn't an amateur. He was a computer scientist who had experience with information systems before creating the Web. But his design was clever AND simple and accessible to use for amateurs.

If I remember the context of the "created by amateurs" quote, it was really Alan Kay complaining that the web wasn't designed by OO principles. He wanted the web to consist of objects encapsulating their own presentation logic, rather than documents in declarative languages. So basically something like Java applets instead of web pages.

While OO is great for software design, I believe declarative documents have proven to be much better as a foundation for a decentralized information system. Think about how to implement Google, accessibility, readability.com and so on in a web of encapsulated objects. And it is not by accident that TBL chose declarative languages over objects; he actually thought about it: http://www.w3.org/2001/tag/doc/leastPower-2006-01-23.html

This is an example of the contextual insight you talk about, and which I believe Kay lacks in this case.

EDIT: The interview is here: http://www.drdobbs.com/article/print?articleId=240003442&sit... It is not totally clear what he is arguing, but it seems he is suggesting that the only job of the browser should be to execute arbitrary code safely, and that any actual features beyond this should be provided by the objects. So the browser should really be a VM or a mini operating system executing object code in a safe sandbox. This seems to be the philosophical opposite of TBL's principle of least power.

Honestly, it seems like Kay is ranting a lot in the interview. When something like the web is not designed the way he would have done it, the only reason he can imagine is that the designers must have been ignorant amateurs.


So I agree that TBL himself did a great job designing HTML for exactly what he conceived: distributed documentation. It was not, however, a system designed for web applications. Almost immediately after it gained popularity, people wanted to represent shopping carts. Even in the places where Roy Fielding's thesis on REST is well understood and applied, it is very difficult to turn documents into applications without implicit client/server state.

Just because TBL is brilliant doesn't mean his work can't be misapplied. Of course, I also blame the people who thought of scaling thousands of existing client/server applications for a fraction of the cost: things like shopping carts and online banking. True, it drove the web to what it is today, but at great cost.

Here is another thought: if the web is so great, why are so many companies creating their own tablet/mobile app experiences instead? It can't be because it requires less dev knowledge and effort.


And that applies not only to engineers, but to managers - i.e., task definers. Managers aren't interested in asking engineers to build something that will withstand the test of time unless they are sure that time will actually be needed - and with the limited information (and high speed of change) you have today, managers tend to think short term. So the whole civilization works on the principle of a dog chasing a rabbit running from left to right - instead of predicting where the rabbit will be in 10 seconds, let's run straight towards the rabbit and update the course half a second later. We win, since rabbits don't usually run predictably along a straight line.


It sounds like a good task definer (or engineer) should be able to know when thinking a bit further ahead matters and when it doesn't.

For instance, you probably need to take a bit more care with defining your core database structure, than you do with positioning a button 20px to the left or right.


> take a bit more care with defining your core database structure, than you do with positioning a button 20px to the left or right.

not when you are trying to prototype the UX - and don't care about the backend. There's no rule of thumb - it's all very subjective and intuition based. That's where experience comes in, and no amount of book studying will help you.


> Anyone can build a bridge that stands up, but only an engineer can build a bridge that barely stands up.

That's a nice quip but once you cross spans of 5 meters or so you'll find out that that is a lot harder than it seems, especially for non-trivial loads.

The joke of course refers to the fact that to build any structure that has to be both safe and economical is hard but please don't make it seem as if building bridges is easy, it's anything but.


Jacquesm, yes, I agree completely. But the point of jokes like these is that they make a point succinctly and well enough. Hah.

But yes, building bridges is hard. So is building software. But the job of an engineer is solving a hard task within budget and within deadline.


Again, sounds nice, but an ME wouldn't be so cocky if he had to smelt his own materials and quality-grade them before even starting to build the bridge. MEs get the benefit of an older, more mature industry that surrounds them and enables them to make rational decisions. See how quickly that goes to hell when they get substandard parts from a sketchy supplier. Then you can watch the schedules and the budgets go to hell.

Multiply this by 100 and you are just about in the same situation as web developers. Maybe in 100 years an ecosystem will grow up around us to support us? I can only hope. For now, I can't even rely on XML being marshalled the same way unless I control both client and server.


Today's "good enough" is tomorrow's technical debt. But that works perfectly, if you know the exact lifetime of your codebase.


So an even better alternative title would be "The Author of This Essay Does Not Understand Engineering, Software, Technology, Medicine, Or Any of the Other Practical Arts Which Have Always Involved Shipping Imperfectly Understood Machines".


If you read the full text of the original post, it becomes clear that the author does understand the practical arts because he acknowledges that it has to be this way even if it's not what we would ideally like.


I don't think it's about the act of creating an abstraction, but rather, about using these abstractions with a deficient understanding of the limitations and implications of these abstractions.

I agree with you that a significant amount of code is not fully understood by its authors. I see similar phenomena at work in many engineering projects. Relatively few engineers work at the component or board level, rather they work at the subsystem and system levels. They piece parts of systems together to make larger systems-of-systems (sound familiar, programmers?) often with a poor understanding of when the black-box abstractions they deal with can break down and cause havoc.

Currently fighting against this in my own workplace, where it's become clear that many engineers do not understand the system we support in sufficient detail to properly troubleshoot faults with that system.


I agree: layers of something != abstraction


I might keep that one.


"I think a better title would be "A significant amount of (shipped, so-called production-quality) code is not fully understood by its authors"."

Well, duh. Not a single program shipped today, no matter how small, is "fully" understood by its authors, because to do so one would have to understand every implementation nuance and bug of the OS layers, the libraries used, the other parts it interacts with, ... And even for less strict definitions of "fully": for any program that took longer than, say, 2 weeks to write, how can anyone claim they really know the details of everything, all at the same time? Including the nuanced differences between various OS calls, and the effect some environment variable might have on your localization code, just to name one thing?


Perhaps. One could argue that one of these init file templates is just a base class with parts of the implementation overridden. :D


That's an interesting way of putting it.

The issue is that we all have our own (more or less completely undocumented) abstractions that we mentally put on top of whatever language/library we're using. And sometimes these mental abstractions are out of sync with reality.


Think of the times when you're using an unfamiliar api, and your browser tabs are split between documentation and stackoverflow posts. Eventually, you get something to work. If you just clean your hands, and say "I'm done", you've fallen into the trap that the article talks about.

You might look at the code occasionally and think "maybe I can clean this up?". However, after a few minutes, you realise how much you don't understand about what's going on. So you leave the code, in a sort of superstition - it works, after all.

After a few years, when you need to finally modify the code, you've got a problem. The code works, but you never took the time to understand why it works. If there were any superfluous calls, or side-effects that you don't want now, you'll need to take the time to understand how it all works. If you took the time to understand it originally, and wrote a few informative comments, you might have saved yourself hours of work.

It must be said though, occasionally you do need to go deeper with your thinking. To figure out why some bug is occurring, or maybe some complex performance issue. So it is important to have some knowledge of how it all fits together, even if you don't need it all the time.


I actually really like the content of the article, but I'm not convinced superstition is the right word here. Yes, absolutely I have copied an init script / makefile or whatnot and if it worked after I tweaked it then I went on my business. But I didn't use it as is because of superstition, but rather because it now worked I believed it correct.

Yes, quite often that came without a perfect knowledge of every single option, certainly usually without referring to the manual for every one of them.

Perhaps this is semantics. But the definition of superstition is:

a widely held but unjustified belief in supernatural causation leading to certain consequences of an action or event, or a practice based on such a belief

Which ya, isn't quite the right thing. Anyways, agree with the sentiment of this 100%, just not the title. :)


I think this is more commonly called "cargo cult programming", defined by Wikipedia[1] as: "the ritual inclusion of code or program structures that serve no real purpose".

[1] https://en.wikipedia.org/wiki/Cargo_cult_programming


I think there's a significant distinction here to be made between "configuration" and "programming".

Do I copy/paste swaths of code that I don't understand and put them in my programs? No, absolutely not, because each line is one I can reason about and want to understand.

But do I copy/paste swaths of configuration files / init scripts / etc, without understanding the implications of every single configuration? Yes, absolutely, because the depth behind each of those options is often irrelevant to me. (until I discover otherwise late at night when an alarm goes off (!!))

I think it is important to distinguish these. I don't think any dev that's done ops work would claim they aren't guilty of the latter, but at the same time I think very few good devs ever, and I mean ever, do the first.


Init scripts are code though. I mean, they're code that's mostly boilerplate, but they're still code - I can do arbitrary things in them, reasonably unconstrained by their stated purpose.


Configuration is programming. Perhaps I, being the devops sort, am somewhat alone in this opinion, but I opine it anyway: Configuration is programming. Often very simple programming, but programming nonetheless.

There's a configuration language at Google whose stated goal is that it's not Turing complete... and yet it is, because that was a necessity for achieving the expressiveness, and now it's almost entirely unreadable because of the convolutions needed to express the real-world solutions it has to handle. There's another that's Python based, and widely bashed for being 'too hard to reason about', but it has none of the problems with disgusting 'standard' libraries.

Where I am now, Facebook, has adopted a python based model... and used almost none of python's features. Instead, everything is generated through convoluted lists and dictionaries, and it's treated with very little exception as a json file. This saddens me.

Almost all configuration should be done by libraries with sane defaults. So should programming, even though it's not.


It's definitely not cargo cult programming if the thing you paste becomes a cornerstone of your business, or a foundation of a whole project.

In The Pragmatic Programmer the authors talk about "programming by coincidence"; I think that would be closer to the GP's example.


I don't believe this to be so, necessarily. You can copy-paste code AND understand what it does. I'm not sure how this is cargo-cult programming on its own (because it's not). To me, creating factories, interfaces, and object hierarchies "because that's how it's done" without understanding why or whether they're actually needed is more "cargo cult" programming.

There are certainly people who just copy-paste code without understanding it. As an anecdote (because I'm on a roll apparently with this topic): copying C# code off of Stack Overflow that does AES encryption but sets the IV to all zeroes... and not reading the comments that say this is bad practice but a customer constraint...

I guess my point is that a LOT of people seek and implement examples and while some don't understand them some find them extremely efficient and DO understand them.. In fact every time you use a library you are, in effect, copy-pasting code that you are relying on...


Yes, I would characterise a lot of programming as "cargo cult" programming.

I'm not sure what the details are here besides init script, but in this particular case the job is boring and difficult/slow to test. Who wants to reboot their machine all the time to test it?

... Which is why sysv init has taken so long to fix. If it's not obviously broken it won't get fixed.


not applicable either, the code and program structures serve a purpose and the article specifically described removing parts that are not applicable.


Except there's always that nagging feeling in the back of a programmer's head that says, I might need that later.

Silly as it may be with things like git around, I can recall thinking, "maybe we will run this on AIX..." when hacking on a multi-OS shell script.

Not good practice to leave the bloat in, but at least give developers the benefit of the doubt that it isn't outright magic to them.


The part about how people "copied an existing init script or thought ... that init scripts needed LSB headers that looked like this" sounded like it to me.

I didn't initially manage to follow the right sequence of links to find the example[1], but now that I have, the context seems to be that "System V init ignores all of these" (the parts people copied without understanding), so the description still looks applicable to that example. (They serve no purpose at the point in time when they're added by cargo-cultists but cause problems much later when someone/something assumes that they are actually meaningful.)

[1] http://utcc.utoronto.ca/~cks/space/blog/linux/SystemdAndSysV...


In other words, you're willing to try and try to find a way to convince yourself you're not wrong despite having it explained to you already.

good day.


Yeah, I don't really get what definition of "superstition" the author is intending here. It seems like "laziness" or "lack of rigor" would be more appropriate terms.


Traditional superstitions include not walking under a ladder, and throwing spilt salt over the left shoulder. "Lack of rigor" would apply to these but "laziness" would not.

Superstition connotes that an act is performed due to faith that it is beneficial or necessary, rather than an understanding of the underlying mechanism or controlled experiments to demonstrate the need.


> Yes, absolutely I have copied an init script / makefile or whatnot and if it worked after I tweaked it then I went on my business. But I didn't use it as is because of superstition, but rather because it now worked I believed it correct.

That, plus the whole "ugh, I don't want to have to write all this boilerplate from scratch".

Initscripts are mostly boilerplate, and it's just plain tedious to do anything other than copy/paste what you know works.


I had the same issue. 'Superstition' wasn't the right word. Perhaps 'pragmatism' or even 'science': "This has been observed to work, I shall use it again. I will abandon it for a better one should I find one, or if it should fail".

Pragmatism doesn't need to draw a cause/effect link like superstition does, and superstition doesn't generally lend itself to abandoning faulty beliefs.


I'm not picking at what you do, just trying to explain the author's context.

The first principles of computation are mathematical and believing something to be correct is not the standard. Proving is the mathematical standard. I take the author's point of reference to be more formal processes than the ordinary practical programming practices they criticize.


The first principles of computation are not the first principles of software development. A computer is not a Turing Machine: it is an electro-mechanical wrapper around the limited imitation of a Turing-like core, and that wrapper and the physical properties of the system put extreme limitations on what can or cannot be proven.

Software development is a perfectly ordinary engineering activity, or should be (although it is mostly done by non-engineers), and the same standards should apply. No mechanical engineer or electrical engineer has ever shipped a "provably correct" machine. They have shipped machines that conform to best practices, including heuristic analyses of MTBF and so on.


As I said, I was only attempting to clarify the author's intent [in so far as I could divine it]. So perhaps your counter-argument might be better directed there.

Out of curiosity, if the first principles of software development are not based in mathematics, what do you believe are their basis and how does it differ from "folk wisdom" if we take "folk" to include communities of engineers rather than as merely pejorative?


The term "blind faith" works, but may be a bit harsh.


Call it "trust", and it looks different.

Do I trust my OS to have a memory allocator that works and to protect processes from reading and writing each other's memory? Yes, but not totally.

Do I trust its more modern and complex APIs? Less so.

Do I trust code I copied from stackoverflow that has 100+ points? Yes, but even less so than I trust the complex or new corners of my OS. So, I read every line I copy. On the other hand, I am lazy, so if this gets to 1000+ lines, chances are I won't read every line. I will think a bit more about the trust issue, though.

Do I trust the documentation of the libraries shipping with my OS? More than stack overflow answers with 20-ish points, but certainly not completely.

Do I make assumptions about APIs? I try not to, but it is hard. There is lots of documentation where it is hard or impossible to find out what the code claims to do in edge conditions.

For example, in https://msdn.microsoft.com/en-us/library/352y4sff(v=vs.110)...., can one pass a null transaction? I wouldn't know, and the page doesn't tell me, so this afternoon, I wrote a conditional operator calling another constructor if transaction is null.

I did consider a "let's try it and assume it always works if it works once" approach, though. I also think many people do that ('if it compiles and runs, it must be valid C')


Except that it's not blind and it's not faith.

We generally have some idea of what's in the black boxes we use, and we are always open to change our beliefs based on new evidence ("faith" is a belief that will stay the same no matter what evidence is presented: in Bayesian terms it is a belief with a prior of 0 or 1, neither of which can be changed by any evidence.)

"Hopeful" or "optimistic" programming might be better.


I've always enjoyed what I call uncoding, which is taking a working piece of code and taking stuff away until things break, and examining how they break. It's really helped me understand what each piece does, and be more efficient when I'm writing it myself.


Amusingly, that's also more or less how we've learned to understand the various functions of the different parts of the human brain. We don't know what it does until it's been removed/damaged.


I love that term. You described exactly how I work: take an existing block of code, break it apart until I fully understand every line, then reassemble to meet my needs.


I've referred to this for years as "ritual-taboo programming". Ritual-taboo societies are ones in which things are done in certain ways because they worked in the past, but no one knows why. Too much of programming is like that.

If you work from the specifications for a library, you'll probably find that some of the documented features don't work. If they weren't used by some "framework", or mentioned in a how-to book, they probably haven't been exercised well.


I hate to call myself out, but I envy most of you. It sounds as if many here get the opportunity to work on interesting problems from end to end. As such, first principles matter, and approaching something from scratch makes sense. I, however, have been stuck in a segment of the market that doesn't appreciate the right solution, but rather just wants a solution as fast as possible. To be clear, I'm talking about a more consumer audience. My clients expect things done quick, and quick means leveraging CMSes and ecommerce platforms (Magento, WooCommerce, etc.). To be frank I find such work unsatisfying, and often need to hack things together to meet the requirements set forth within the constraints of the budgets I'm working with. I'd love to be able to work in an environment where doing things right mattered, but the plain fact is the deliverables aren't measured on quality, but on speed.

While I'm on a soap box, I'd like to point out to those in positions where their work is evaluated by professionals and peers how lucky they are. I'd kill for a job/client where my code is evaluated by other developers. This community sometimes forgets they are the 1% of the development world: those who work for the Googles, Facebooks, and visionary startups with leaders who came from the same cloth. The rest of us, the majority, are laboring away under management that has never produced a lick of code, or even a design, for that matter.

I guess the Tl;DR is: appreciate the fact that you even get to consider the finer points, and pray for the rest of us.


A nice example of this is yoda expressions in, say Java.

The initial reasoning for them was to prevent accidental assignment, since a C/C++ compiler will complain on this: if (null = x)

but not on this: if (x = null)

In Java, neither is allowed, so the whole yoda construct is almost pointless. The exception is boolean assignment which does benefit from this idiom, but the loss in readability is a trade off to consider. In any case, I suspect most uses of this construct in languages like Java fall into the "superstition" category, where the people using it aren't really considering why.
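A minimal C sketch of that original rationale, with an illustrative pointer and function name (not from the thread):

    #include <stddef.h>

    void check(int *x)
    {
        if (x = NULL) {       /* accepted: assigns NULL, then tests the result */
            /* never reached */
        }
        /* if (NULL = x) { }     rejected: the left-hand side is not assignable */
    }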


A lot of us find Yoda conditions easier to read:

  if(0 == very_long_function_name_here(param1, param2)) {
Tells me a lot after I've read a little of the line, whereas:

  if(very_long_function_name_here(param1, param2) == 0) {
Makes the == 0 part easy to miss, and buried behind a lot of clutter.


eh, you just end up having to parse

    if (VERY_LONG_AND_SPECIFIC_ENUM_NAME_HERE == func(param1)) {
    }
Makes the function called easy to miss and buried behind a lot of clutter ;)


That's why my rule is shorter first :-)


Why would you ever have a function with a name that long in the first place?

Also, there are many more ways of dealing with readability: try different formatting and indentation rules, wrap lines manually where you think it makes sense and so on. As for your example, even the lowly C lets you do something like this:

    int (*short_name)(int) = &veeeeery_long_and_ugly_and_unnecessary_function;
    if(short_name(1) == 2) {
      printf("\n\nYay, it worked!");
    }
which completely solves the problem, no matter the order of compared objects. I'm not sure, but I suspect things like this are being optimized away by the compiler anyway, which would mean that you can use it anywhere you want without worrying about costs of indirection.

In higher level, modern languages you have even more, much more sophisticated tools for doing this kind of things.


Explicit function naming requires slightly longer function names, but you can actually read a program and get a sense of what is happening. Give me long function names over short ones any day. In my book, 5 words is a little long, but not out of bounds.


If you take a closer look at my example you'll see that the long name is there of course, it's still very close to the point where it's used. It's only aliased for a moment in this single section of code and because of lexical scoping there's no danger of name collision. Of course, doing this in a global namespace makes no sense.

How long identifiers are acceptable depends on a language. With languages without namespacing or modules you obviously have to use some naming convention so that there are no name conflicts and this makes your names longer. I'd say 5 words is ok in this case.

But I'm not arguing against long names of things: on the contrary, I like having names as descriptive as possible. OTOH, too long names are also bad, because they make the code less readable and harder to work with. Like almost always it's a matter of balance: you need to know when to stop adding words to a name. I think that "name is too long when it starts making other names in the same line much harder to spot" is a good heuristic for this.


The alias is just adding confusion. If I see such an alias in code, I would wonder where else that function is used in the function, and be confused that it isn't.

Not to mention it becomes more verbose once you make that alias a const ptr (which we do on all of our local variables).


> The alias is just adding confusion. If I see such an alias in code, I would wonder where else that function is used in the function, and be confused that it isn't

You'd be confused once or twice, then you'd learn the technique and you'd stop being confused. Every code pattern was unfamiliar to you at first. And confusing, until you internalized it. It's unrealistic to assume that you can ever stop learning new patterns - try switching to another language and you immediately have dozens of unfamiliar, confusing patterns to learn. (it gets better after a certain amount of languages known (like https://klibert.pl/articles/programming_langs.html) because you start noticing meta-patterns)

My C is rather rusty nowadays and using function pointer here may not be the best option, but as someone else said, there are other language tools for doing this kind of aliasing, like #define. I'd go for function pointer probably, because it reveals not only a name, but also a type of function and it's guaranteed not to escape the current scope (unless explicitly returned) while #define has no knowledge of scopes at all. In languages which support real macros, and preferably lexically scoped macros (like Racket) I'd use those. In languages with closures and first-class functions I'd probably do it in yet another way. But in general, if I find myself working with a name so long that it makes it hard to spot other names on the same line I will alias it locally.

Have you read http://shop.oreilly.com/product/9780596802301.do ? It's a good, short book on the topic and it discusses exactly this issue at length in one of the chapters.


Why would you ever have a function with a name that long in the first place?

Sometimes you might not have a say in how the function was named... having some practices to deal with the unruly code that hasn't been graced by one's own perfect sense of style isn't terrible! ;)

try different formatting and indentation rules, wrap lines manually where you think it makes sense and so on.

All fine ideas, but I wouldn't discount yoda-conditions as not being in the same category.

Some languages are more opinionated than others, but one positive thing about "superstitious" programming is that code is often more consistent because of it ('pythonic' PEP 8, code patterns, skeletons/boilerplate, ...). Many times you'll end up seeing the same patterns elsewhere, for example adherence to Google C++ style guidelines on projects completely unrelated to Google -- simply because they are both practical and familiar.

In a similar vein, I'd probably religiously opt for a #define ... #undef pattern for your given example, because I've seen it more often than using a separate function pointer variable. There isn't really a technical advantage, but mostly one of familiarity.
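For illustration, a hedged sketch of that #define ... #undef pattern, reusing the illustrative long function name from earlier in the thread:

    #include <stdio.h>

    int veeeeery_long_and_ugly_and_unnecessary_function(int x) { return x + 1; }

    /* Alias the long name only for the section where it would drown out
       everything else on the line, then undefine it again. */
    #define SHORT veeeeery_long_and_ugly_and_unnecessary_function

    void example(void)
    {
        if (SHORT(1) == 2) {
            printf("Yay, it worked!\n");
        }
    }

    #undef SHORT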


Re: "buried behind a lot of clutter"

If your code has a lot of clutter it's usually telling you something about your design.

For readability, a better way of handling it might be for the function to return a boolean instead of an int:

if (is_valid(param1, param2)) {

self documenting code and all that.


You're assuming the information being returned is a boolean. As a static typing fanatic, I would not return an int to indicate a boolean result.

  if (0 == number_of_students(context, classroom))


Yoda expressions are still useful in Java when we're talking strings.

someString.equals("someString") vs "someString".equals(someString) - the latter can't throw a NullPointerException when someString is null.


A very significant part of the Ansible service module code was written to deal with misbehaving init scripts. They are very much cargo-culted, just like RPM or Debian package formats - you borrow the ones from your previous project.

They are hard to get right, and many apps don't daemonize correctly, or return OK before they are ready to be running - and then are actually running much later. Upstart/systemd didn't necessarily make that any clearer for people to understand either.

While I haven't tried it, I recently encountered a reference to https://github.com/jordansissel/pleaserun and it sounds promising.

I don't know how much of this applies to programming in general, really, but init scripts are.. yes... special things.


I dislike tools that claim to automatically generate your configuration for you. At Google, one of the worst abuses anyone had to deal with was auto-generated configuration; a tool purported to deal with configuration for you, but really just blurted a boilerplate mess into your directory, many of the properties and settings set by cargo cult or overriding sane defaults with baked-in versions of the same. Facebook is proceeding down this path; there's a new tool that will do all of your service stuff for you! I look forward to what a boondoggle the services that use it will be.

There was a discussion I recently stumbled upon about init systems in containers. Many processes spawned lots of zombies! You need a full init! Well, no; if your process is forking new processes, it should probably wait for them. If it's forking other processes that fork other processes, and they're not cleaning up zombies when they're done, that is a bug in those processes. If all else fails, your root-level process should probably implement a simple init on its own (a wait() loop in the background, dumping wait objects into an LRU map that other processes can do an equivalent of waitpid() on). Or just run under bash.
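A minimal sketch of that last fallback, assuming a POSIX system; it simply discards exit statuses rather than keeping the map described above:

    #include <signal.h>
    #include <string.h>
    #include <sys/wait.h>

    /* Reap every exited child without blocking, so nothing this
       root-level process inherits lingers as a zombie. */
    static void reap_children(int sig)
    {
        (void)sig;
        while (waitpid(-1, NULL, WNOHANG) > 0)
            ;
    }

    static void install_reaper(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = reap_children;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_RESTART | SA_NOCLDSTOP;
        sigaction(SIGCHLD, &sa, NULL);
    }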


I've used pleaserun a little in the past. When you're being quick and dirty and using fpm it fits into the workflow quite nicely. It can be a bit awkward if you're deploying things that need anything complex in their init script, like a bootstrap command or something, but if it's just a straight start|stop|restart it works quite well. It's probably not up to generating distro-guideline-compliant init scripts (and fpm obviously doesn't either for spec file) but so long as that's not the goal it's a useful tool.


Maybe a better term than "superstition" might be "validated experience". We do what has worked in the past.

It is true that there are standards that should be read and followed; the ones referred to in the article are probably among these.

But other standards have a fair amount of wishful thinking in them. (How many of you JS developers have read the latest ECMAScript standard?) Sometimes, what has worked in the past is a better guide for practice than what ought to work.


sysv init scripts in particular are so pointless to write.

In the end, the whole script usually revolves around four lines of actual code (starting a process, stopping it, querying the status, optionally sending it a SIGHUP to make it reload its configuration), but the result is a script with 50 lines of boilerplate.

Which, of course, could nearly all be automated away by a sane, declarative format.

Leaving all the political mess aside, that's the part that systemd got right. You write something like

    [Unit]
    Description=some human-readable text here

    [Service]
    Type=simple
    User=...
    ExecStart=/path/to/daemon --with=options
    KillSignal=SIGKILL

    [Install]
    WantedBy=multi-user.target
and that's it.

Quick comparison of sysv init script lengths on Debian Wheezy vs. systemd service files on Debian Jessie: ssh: 162 vs. 15 lines, cron: 92 vs 11 lines, dbus: 122 vs 9 lines

My point being: it's a bit too much to demand that people carefully engineer their sysv init files, when about 90% of those very same init files is boring, repetitive boilerplate code. Also, what happened to the "don't repeat yourself" principle?

I tried writing a sysv init script from scratch. I stopped because of sheer boredom, and resorted to copy&paste to save both time and nerves.


The issue isn't so much in the medium (shell), but the underlying program itself (sysvinit) being overly primitive. See the daemontools family where runscripts are largely as short or shorter than systemd unit files: http://homepage.ntlworld.com./jonathan.deboynepollard/FGA/ru...

In addition, the BSDs have much shorter rc scripts mainly by putting all the boilerplate functions into a single library file that is then sourced. Gentoo's OpenRC framework, as well.


Frankly, I find myself thinking that moving the boilerplate elsewhere is also what systemd and upstart do. But they implement it in C rather than shell script.


Not just systemd, but... everything else that's trying to replace sysvinit, too.

Upstart jobs are really simple, and OpenRC and modern BSD initscripts are also much simpler than SysV (and old BSD) initscripts.

I think we're at the point where regardless of the systemd vs. everything else debate, we can all agree that SysV initscripts are shite.

> I tried writing a sysv init script from scratch. I stopped because of sheer boredom, and resorted to copy&paste to save both time and nerves.

Better yet, at my last job, I had some extra time, so I rewrote Ubuntu's qpidd initscript as an Upstart job, and it was so satisfying to see that giant script get condensed down to a few lines.


While the article is dramatically true (credits to StackOverflow for some of my production code), there's a factor that isn't taken into account here: time.

Yes, because as engineers our time is more valuable than a minimal and perfectly-standard-compliant solution. I would happily read all the RFCs, and all the significant papers in CS, and all...etc.

The reality is that we don't get paid to read, but to ship.


I'm surprised by the number of people who don't see this as a problem.

I've worked pretty recently as the most senior developer on a team, and I saw tons of issues caused by copy/pasted code: subtle bugs around `==` versus `===` in JavaScript, unnecessary variables that obfuscate the code, Python that isn't Pythonic, C that was written as an example and therefore doesn't include the error handling that is constantly necessary in C (how many C examples check the return value of malloc?).
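For illustration, a hedged sketch of the malloc check that copied example code routinely leaves out (the bail-out policy here is just one reasonable choice):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t n = 1024;
        char *buf = malloc(n);
        if (buf == NULL) {
            /* Report and stop instead of carrying on with a NULL pointer. */
            fprintf(stderr, "malloc of %zu bytes failed\n", n);
            return EXIT_FAILURE;
        }
        /* ... use buf ... */
        free(buf);
        return EXIT_SUCCESS;
    }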

The biggest problem, I think, is that often you end up with 10 different ways to solve the same problem in the code because instead of looking at the existing code, people Googled for a solution with subtly different keywords than the previous person and found a different blogger. This kind of duplication is very hard to identify and remove, and this does cause bugs.

Of course a more experienced developer can work around these issues, but a more experienced developer also knows off the top of their head how to write the code themselves and usually does it.


> The biggest problem, I think, is that often you end up with 10 different ways to solve the same problem in the code because instead of looking at the existing code, people Googled for a solution with subtly different keywords than the previous person and found a different blogger. This kind of duplication is very hard to identify and remove, and this does cause bugs.

It's interesting to code in large code bases. They end up getting to the point where no single person can ever understand the entire thing. It is usually at this point that things like Hungarian notation and strict coding guidelines really help. If you need an array of pointers to some kind of object, you can be reasonably certain that search will be able to find it.
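A tiny hedged sketch of the sort of convention being described - the prefixes and the Widget type are purely illustrative, in the Apps Hungarian style:

    #include <stddef.h>

    typedef struct Widget Widget;

    Widget *rgpWidget[16];   /* rg = array ("range"), p = pointer: greppable by convention */
    size_t  cWidget;         /* c  = count of entries currently in use */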


I think that sometimes Hungarian notation and strict coding guidelines can help, but more and more I think that there's no way to effectively enforce those things unless you automate them from the beginning. I've worked on very effective teams, where nothing ever got checked into master without two sets of eyes on it, strict rules were followed, etc., and there are always holes from lack of automation and legacy code.

Automation helps: linters, static checkers, runtime memory checkers, automated tests; all should be run on every check-in. On a C# project I even created a tool that correlated diffs with a code coverage tool and rejected diffs that weren't covered by unit tests (it misses cases where code paths were executed by preexisting unit tests, so it didn't necessarily require a new test for every diff--I consider this a bug). But you always end up making compromises, because there are X thousand lines of code written at the beginning of the project that you don't have time to go back and write tests for.

One of the reasons I'm really excited about Rust right now is that it makes it easier to set these things up at the beginning. A lot of memory checking you have to do with separate tools in C comes free with Rust's type system, while Cargo makes it very easy to get unit tests up and running. My hope is that if Rust finds wider usage, the projects I come into will be more likely to have been set up properly from square one, and it will be easier to work on larger code bases.

I dunno, I'm somewhat new at technical leadership, so I'm still working out some of this stuff.


IMO if your code base is large, you have already failed.


If your code base is large, you've failed as a coder.

If your code base never gets large, you've failed as a company.


>doesn't include the error handling that is constantly necessary in C

I mean, I'm sure there are some cases, but how many C programs are capable of gracefully recovering from a malloc failure? Probably very few. If the program is going to crash anyway...


I don't agree at all, but even if I did, that's only one of tons of cases. There are plenty of very recoverable errors which aren't handled by example code you copy off the internet, but which you really should be handling.

Take networking code for example: do you really want to be dying without a visible reason because you didn't check `errno` and do a retry? This is really basic stuff.
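
A rough sketch of what that looks like around a POSIX write() call (write_all is a hypothetical helper name, not from the discussion above):

    #include <errno.h>
    #include <unistd.h>

    /* Write the whole buffer, retrying when the call is interrupted by a
       signal instead of silently losing data. Returns 0 on success, or -1
       with errno set so the caller can report a visible reason. */
    static int write_all(int fd, const char *buf, size_t len) {
        while (len > 0) {
            ssize_t n = write(fd, buf, len);
            if (n < 0) {
                if (errno == EINTR)
                    continue;          /* interrupted: retry the write */
                return -1;             /* real error: surface it, don't die silently */
            }
            buf += n;
            len -= (size_t)n;
        }
        return 0;
    }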


... best to do it deterministically and immediately! Easier to test, and no chance that an obscure preallocation leads to crashes elsewhere in the program.


It has nothing to do with being able to recover. It's about not continuing with unknown state.


"copy and paste is the root of all evil."


The problem here is that if it works, how is it superstition? When I knock on wood or throw salt over my shoulder, I don't have a way of verifying that I've averted disaster. However, I can test the code I don't understand, and if it does what it says it does (or does what I want it to do, since that's why I've selected it in the first place), then we're a step removed from superstition. There might be unintended consequences if I truly don't understand the code, but that is something else than superstition.


The problem here is that if it works, how is it superstition?

Consider Bertrand Russell's Chicken:

The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken. [1]

Sometimes things working but not in the way we imagine they work is worse than them not working at all.

[1]: http://www.ditext.com/russell/rus6.html


Had the chicken been living in the wild, it might have lived a long, full life... or it might have been painfully maimed or slaughtered in the myriad of nasty ways that nature has to offer, at any age from before it hatched on up.

I don't think it's at all safe to assume that a comfortable life with a definite-but-unknown end point in your prime is definitively worse than a life subject to the random events of the wild. Particularly for something that's down the lower end of the food chain.


If you do not understand the code, you cannot say you tested it. It may have corner cases you did not anticipate. The test may be incomplete and only cover what you think the program should do.


Nonsense. We test stuff all the time we don't understand "completely" (which is some kind of magical standard, as we obviously don't understand the universe completely, so the very atoms and molecules that the machine is made of are in some sense not well understood.)

Certainty is the alchemist's stone of philosophy. People have done all kinds of interesting stuff while attempting the impossible: turning knowledge into certainty. They can't, but that doesn't mean knowledge--which is inherently uncertain--is inferior. It means certainty is inferior, because it can't actually be achieved, regardless of what any pre-Bayesian thinker might have imagined.

Knowledge is always uncertain.

Tests are always incomplete.

This does not mean "knowledge is impossible" or "you cannot say you tested the code" but rather "certainty is a chimera" and "complete test coverage is a futile goal."

The whole point of good engineering is to adopt standards that are both achievable and useful, and the kind of test coverage that people run on code they are unfamiliar with to ensure it will do what they want is generally adequate for that. Evidence: most shipping code actually works.


"Evidence: most shipping code actually works."

The vast majority of time spent on software development is on bug fixing, and on many projects construction/coding defects make up more than half of all defects.


Tests are always like this, no matter how well you know the code. A test that covers every possible case is called a proof.


why write unit tests when we don't understand the microcode in the CPU? We can't have "tested" it because we don't understand said microcode.

Or wait... no, maybe your characterization is wrong.

Yeah... that's probably it.


Where exactly did I say you should understand the code down to how electrons flow through the chip's gates?

I said if you do not understand what you are testing (not the compiler, not the runtime libraries, not the OS, not the instruction decode, not the microcode, not the gates), you can't say you actually tested it. You merely observed what could be a side-effect.


you're drawing an arbitrary line, that was the point.

We specialize and abstract for a reason. No one understands everything about everything surrounding their code, and that includes other software. Don't tell me you've personally been through the code of every single project your projects touch.

You haven't and that makes you a hypocrite.


A line that encompasses the object you are testing but that you don't fully understand (or that is hidden from you) and nothing more is not arbitrary. The best you can do in these circumstances is say it adheres to a spec for the cases you tested (which could, for all you know, be the only ones that yield correct results).


It's an arbitrary line. Too many people come up with an opinion and then cast about for a way to make it seem valid.


This is, BTW, just what you did by claiming the line is arbitrary.


yes and I'm racist when I point out someone is being racist.

I'm also a troll when I point out someone is trolling.

do you know how I turn female? By pointing out that someone is female.

It's amazing how making an observation suddenly means you've become the thing you're observing.

Or not, but who am I to judge, amirite?


That's why they're called unit tests. They are supposed to test precisely one unit of functionality, while assuming everything else works as specified (using mocks if necessary). Taking into account the effects of microcode execution on the test result is in fact a kind of integration testing.
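
A toy illustration of that idea in C (all names here are made up; the dependency is passed in as a function pointer so the test can substitute a stub):

    #include <assert.h>

    /* The "everything else" the unit relies on: some source of prices
       that, in production, might hit a network or a database. */
    typedef int (*price_fn)(int item_id);

    /* Unit under test: only the discount logic. */
    static int discounted_price(int item_id, price_fn get_price) {
        int price = get_price(item_id);
        return price - price / 10;     /* 10% off */
    }

    /* Mocked price source: fixed answer, no I/O. */
    static int fake_price(int item_id) {
        (void)item_id;
        return 100;
    }

    int main(void) {
        /* The test exercises one unit while assuming the price source
           works as specified. */
        assert(discounted_price(42, fake_price) == 90);
        return 0;
    }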


I think the GP's point was that you can't mock out the computer chip's opcodes.


For example, people are taught not to use global variables, but maybe they weren't taught the reasons why not. Some of those reasons would be polluting the global namespace, having unexpected things defined in unexpected places, and so on. At some point, it could save the programmer oodles of time to just use global variables. It could be a project with limited scope where pollution and the like won't be an issue. The reason superstitions exist at all is to stop people from doing something when they can't yet understand the reasons. It's hard to explain to a child all of the reasons why they should be a good person, and much easier to tell them Santa will give them presents, until they can understand why being good is inherently its own reward.


I'm a proud cargo culter.

Truth of the matter is you can't just stare at a man page or textbook all day and try to understand, then code. You'll code some mess you don't fully understand, sure, but later on you'll come back and either realize it was wrong and fix it, or you'll understand why it was right. But you will eventually understand.

The people trying to understand it will have gotten nowhere in the meantime.


I am someone who tends to work from first principles, to a fault. It slows me down a lot. But it's like a compulsion.

StackOverflow has improved my programming life dramatically, because it makes it a lot easier to "memoize" the search from what you're doing down to first principles. Whatever question you're trying to get clarity on, there is probably a StackOverflow answer out there that builds a positive case for a particular option, going all the way back to primary documents like standards.


This does not resonate with me at all. I cannot stand having things in my code I don't fully understand. Without reading the source code of libraries, without reading relevant standards, without understanding the mathematics behind algorithms, there is always this feeling that I may make really dumb mistakes. Really understanding all the bits and pieces obviously does not make your code bug-free, but I am usually pretty confident that I did not do something horribly wrong.


This piques my interest. When was the last time anyone read the source code of (g)libc "cover to cover" before using it? Does anyone know the precise implications of (not) defining any of the myriad sensible permutations of its "feature test macros"? Is anyone even sure that those implications are actually implemented correctly before relying on them?


I obviously do not and cannot trace the entire code path from a print statement through the class library, the operating system, the display driver and the hardware; some things I will still take for granted. But if there is a really critical aspect to what I am doing, say some thread synchronization, I will definitely not just follow some sample code, but read the relevant parts of the memory model in the language specification detailing why this works as expected. If the underlying hardware matters, I will grab the reference manual for it, too.

A lot of this is surely curiosity, and you can get away without it most of the time, but at least as soon as I run into the first problem it really helps me a lot to have at least a basic understanding of what is going on in the layer of abstraction I am interacting with, and maybe a layer or two below. And it gives me a really comforting feeling that I know what is going on, rather than having to describe it as some black magic.


The computer doesn't care which code you consider to be in the scope of your concern, and which you take for granted.

If you examine why you're concerned about some code more, you will probably come to the realization that you're less confident in it because it is less old and mature, has fewer users, and has had fewer developer eyes on it and such.

I.e. whether explicitly or not, you're focusing your understanding where it will be needed and that coincides with where there is more risk if you don't understand.


It's not about the quality or maturity of the code I am relying on, it's about the quality of my code. I want my code to do the right thing in the simplest and most elegant way possible and part of this is knowing really well how everything works.


There's no reason to read it "cover to cover", and the GP did not really ask for that. There are many functions in glibc which can be read and understood in relative isolation. I find myself reviewing stable library code (glibc, ncurses, zlib, the C++ STL) in fits and spurts, several times a year. Sometimes that is paranoia (downloading the source to make sure the code matches the behavior described in the man pages), but mostly it is spelunking to verify something implementation-specific.


But what does it mean to fully understand your own code? Everyone relies on millions of lines of code that they didn't write themselves.


For me that really depends on what I am dealing with. Most of the time the library source code is good enough. Sometimes - I am a C# developer - I will have to have a look at the .NET runtime specification. Sometimes I will have to learn details of Windows. In some cases I will even look into Intel's reference manuals to understand what is happening. I would say I stop at the level where I don't expect the layer below to be relevant to what I am actually dealing with.


A significant amount of everything is done by superstition.


Still, important to be occasionally reminded of your own assumptions and corner-cutting.


Though as the author himself implies, it's not always appropriate to allocate the time necessary to do anything but corner-cutting.

"If you say that you don't take this relatively fast road for Linux init scripts, I'll raise my eyebrows a lot. You've really read the LSB specification for init scripts and your distribution's distro-specific documentation?"


To be clear, the intention was to point out that this (generally bad) practice is common in many fields. The intention wasn't to suggest it's OK or good.


A significant amount of programming is done without articulating either the problem or the solution.


as a concerned cancerian, i am glad somebody is willing to admit it. i let my subconscious do the heavy lifting: "this is the general shape". making a list before starting is probably the extent of my preparation. if i can copy what's gone before without understanding it, all the better.

only when i get stuck do i engage critical thinking. maybe this has been to my detriment.

very often i notice myself and colleagues making up nonsense explanations for why c++ behaves the way it does without any real basis other than intuition and truthiness.


What that means is that if you're writing a system you want someone to use 'right' -- you'd better provide lots of good examples in the docs.

A 'good' example is one that you've actually tested for real to make sure it works, that follows the intended principles for the system in question, that meets actual use cases developers using it will have, etc.

A significant amount of software is released without good docs -- examples being just one part of good docs. So of course developers copy from the examples they can find.


In my view, a lot of this is just trying to avoid learning yet another system. Learning systems is hard and takes time, and it's only worth it if you're going to be using the system a lot in the future.

For example, creating a new init script or a Debian package description is something I don't do all the time. I don't want to learn all that stuff because by doing so I'd basically push out something else from my mind. Copying an existing configuration or script and making modifications without trying to create something original is an excellent pattern for things outside your immediate expertise target.

The "avoiding creating something original" part does indeed carry a sort of "superstition". It's like relying on a new culture by merely mimicking it, because you don't really know it well enough to break the rules correctly. You don't know why things are done the way they are but your best bet is just to copy and adapt. This creates a sphere of fuzzy knowledge where intents and black magic seem to alternate.


This is specific to shell scripting and to the history of *nix standardization. No one follows or cares about the LSB[1]; certification is expensive and based on buggy test programs that practically force you to "code to the test" rather than to the specification. And when so many implementations ignore the spec, it would be more superstitious to code to the magical spec and assume it's going to work than to write something and test it empirically.

When I'm programming (in Python or Scala) I can and do check the language specification if something is confusing, and I don't think I'm alone in this. It may be a "significant" amount of programming, but there are definitely areas of programming with more rigour.

[1] I blame Debian, usually quixotic in its adherence to standards like the FHS - but when it came to the LSB they prefer to pretend that "alien" counts as RPM support. That set the tone for how much other distros tend to care about the spec.


IMHO "alien" isn't much of an issue since LSB packages are allowed to use only a small subset of RPM features; the fancier features aren't portable between different RPM-based distros either.

http://refspecs.linuxfoundation.org/LSB_4.1.0/LSB-Core-gener...

http://refspecs.linuxfoundation.org/LSB_4.1.0/LSB-Core-gener...

A bigger problem is Debian gratuitously using different SONAME than required by LSB for some libraries - your LSB binary can't use those system libraries if it wants to run on Debian. But at least it looks like they fixed that problem in Wheezy.


I have a tendency to not use things I don't feel I fully understand. This makes me stick to the more basic elements of the language. In C#, for example, there was recently a part of a game I am writing in my spare time for which 'events' would have been suitable. But I still don't feel I understand exactly what events in C# are doing. So instead I just put all the things I want the event to affect in a list, and when the event happens, everything in the list has a particular function called on it. I will probably get around to using C# events eventually, once I have looked at them more and feel like I understand them. There may be a very small performance cost to what I did (according to some stuff I read, but I haven't tested this, so I don't know), but at least I have an architecture for my game and code that makes sense to me and that I feel I understand completely (at least at the abstraction level of C#).
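
Roughly the shape of that workaround, sketched here in C with function pointers rather than C# (all names are illustrative, not from the game described):

    #include <stdio.h>

    #define MAX_SUBSCRIBERS 16

    /* Each subscriber is just a piece of state plus the function to call
       on it when the "event" fires -- a hand-rolled stand-in for
       language-level events. */
    typedef void (*on_event_fn)(void *subscriber);

    struct subscriber {
        void       *self;
        on_event_fn on_event;
    };

    static struct subscriber subscribers[MAX_SUBSCRIBERS];
    static int subscriber_count = 0;

    static void subscribe(void *self, on_event_fn fn) {
        if (subscriber_count < MAX_SUBSCRIBERS)
            subscribers[subscriber_count++] = (struct subscriber){ self, fn };
    }

    static void raise_event(void) {
        for (int i = 0; i < subscriber_count; i++)
            subscribers[i].on_event(subscribers[i].self);
    }

    /* Example subscriber callback. */
    static void print_name(void *self) {
        printf("event reached %s\n", (const char *)self);
    }

    int main(void) {
        subscribe("player", print_name);
        subscribe("scoreboard", print_name);
        raise_event();   /* calls print_name on both entries in the list */
        return 0;
    }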


The answer to "Why we can't build software the way we build bridges?" is that bridges and other physical works disintegrate over time. Code doesn't rot, so the mistake of 1980 can be freshly reused 40 years later.


This explains why so many websites don't validate email addresses properly.


How do you validate email addresses properly?


Even with code I write myself, I can't remember the line-by-line implementation of the entire code base. To be honest, I often forget implementation details of code I wrote last week.


Ditto. When a coworker asks me about implementation details of some code I wrote just a few weeks ago, my first answer is always "I haven't a fucking clue". Then I scan the code in question and it all comes rushing back.


I should add that the extent to which "it all comes rushing back" is inversely correlated with how long ago I wrote it, and positively correlated with how interesting/difficult the problem was to solve at the time.

Also, almost every time I go back to review code I wrote more than a year ago, my first reaction is "jesus, that's a stupid pattern/implementation. Why did I do it that way?". I've been a professional developer for about 12 years now and I suspect that will never change.


It won't. And sometimes you'll start to change it, hit a wall, and then realize "oh that's why I did it that way."


I've worked with developers who seem to code mostly by going to stack overflow, finding something that seems to fit their problem and then they copy code into the app. No attempt to find a more recent version or a more correct solution of the problem is made. I've seen JavaScript code copied from a stack overflow answer that was dated 2009. There is nothing better than a 6 year old answer to a common JavaScript problem.


I think when I was very young and new to programming I sometimes coded via superstition. But that was a long time ago, in say my first year or two, when I didn't yet have a good mental model of how the computer worked and how the language compiler or interpreter worked. Once you become solid at that, I've found that 99.9999% of the time computing is refreshingly rational, deterministic, and amenable to logic.


Check .gitignore to see if it appears copy and pasted or actually thought through--this will invariably indicate the quality of the rest of the code.


I think this is what I refer to as SODD - or 'Stack Overflow Driven Development'. Somebody is not sure how to do something, so they look for an example on SO and they copy the code without really understanding what they're doing. (note: this is different to when they get an answer on SO and copy it knowing what they're doing).


The author sometimes confuses the word "superstition" with "abstraction". That is how humans work everywhere. And that is why my idle PC cannot create a Facebook/Twitter/or even a plain website by itself.


If init scripts are mostly boilerplate that people keep copy-pasting, maybe the init process is the problem, rather than the poor chumps who have to write the scripts?


What would be an example of good programming primary sources?


Superstition...right I don't shave until a project is done and I have lucky underwear that I program in...


I'll have to admit something about myself -- I've never been able to learn how to do anything from just reading its first principles. I need to have a working example that I can then adapt and change and observe changes in the result.

This has been especially true in programming. I've been writing programs in some capacity for 18 years now and not once in that time have I been able to simply read the reference manual of anything and following that write working code on my first try. Well, obviously I've been able to do it for trivial things, like after the Python 2->3 switch when map/filter/range started returning iterables instead of lists, simply being told "map/filter/range/etc. now return an iterable and not a list" is sufficient to explain the change in behaviour, but that's only because I know all the concepts behind it. You could say the explanation is only one level removed from my intuitive knowledge. Things start getting exponentially harder the more levels removed they get, i.e. if it's a new concept explained using new concepts previously defined, but for which I have no intuitive knowledge what they mean.


Almost nobody can. Some people think they can, and charge forward with it, but rarely do they produce something that actually works. Usually, they bang on it and test it and cargo-cult it until it works, then convince themselves that it's from 'first principles'.

Even if it were a common ability to produce engineering work from written design principles, I'm not certain that's what we want. Experimentation is the key to science. You have to observe, change, guess, run, observe, change, observe until you can successfully predict what will happen for each change you make. The scientific method is taught to everyone, even if most ignore it and skip steps. 'Guess/check' is the correct way to do real-world engineering and design, not pure-logic simulation, because we still don't know all the first principles.


> Experimentation is the key to science.

Computer science is a branch of mathematics. Here you build models, reason about them, find properties, then prove them, then rely on them. Tests - a.k.a. experiments - are usually not exhaustive and don't provide guarantees of correctness.

Having said that, I agree that experiment is of crucial importance - how else would you validate your models in the first place? However, both experimenting and reasoning can be flawed, and part of the trick is to learn how to do them properly. Computer science is difficult.


Programming is a branch of engineering. Very little programming is done by proof; it's done by experimentation and debugging, and tests are human-reasoned and hopefully cover all code paths and critical junctures (for every x > 0, you test x=1, x=0, x=-1).
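
(As a throwaway illustration of that habit, with a made-up is_positive() helper:)

    #include <assert.h>

    /* Hypothetical unit under test. */
    static int is_positive(int x) { return x > 0; }

    int main(void) {
        assert(is_positive(1)  == 1);   /* just above the boundary */
        assert(is_positive(0)  == 0);   /* on the boundary */
        assert(is_positive(-1) == 0);   /* just below the boundary */
        return 0;
    }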

Proof is of critical importance - It's how you find those junctures, for example, how you optimize, how you design and how you reason about runtime, etc. But programming as practiced in the field is engineering first, mathematics second.


Reminds me of the famous Knuth quote:

> Beware of bugs in the above code; I have only proved it correct, not tried it.

Deep thinking and mathematical proofs are tremendously useful when thinking about a well-defined and scoped problem at a more or less fixed level of abstraction.

The problem with software engineering is that in practice all abstractions are leaky, and so you inevitably find yourself dealing with issues that are too far-flung and random to be mathematically tractable. The best you can do is be rigorous about the core problem you are solving, but there is always some amount of hammering and duct tape to build it into a non-trivial real-world system. It's possible to attack this problem asymptotically with a rigorous engineering process such as NASA employs, but we don't because it's simply not cost effective for the majority of software.


because it's simply not cost effective for the majority of software.

That's a great point. We utilize best practices, like automated tests, to narrow the gap between pure experimentation and mathematical proof. Proofs in most code would be difficult to impossible, because most of us use libraries that use other libraries, etc., so we just do the best we can given the time and monetary constraints that we have.


This is a great comment. The category error you've identified is manifest in more than the comment you're replying to as well: it's at the heart of one of the biggest problems with software interviews in a certain sector of this industry (the heavy focus on academic CS in startups and many Bay Area tech companies).


Even in math, you don't build models, you discover them. Pythagoras wasn't worried about the axiom of choice, he was discovering how geometry works.

I don't mean this to be a "mathematics from nature argument" (even infinite sets are hard to justify as coming from nature), but the course of history has usually been "solve similar problems in an ad-hoc manner many times, then generalize"


I think "write code and see if it works" may be tad more succinct and closer to reality than "build models, reason about them, find properties, then prove them, then rely on them. Tests ... provide guarantees of correctness." wrt to OP's comment.


Then most programming is not computer science.


Physics is also a branch of mathematics. But experimentation is key to proving correctness.


What? Physics is not a branch of mathematics; it just uses mathematics and mathematical properties. That's like saying psychology is a branch of mathematics because it involves statistics. Using math isn't the same as being math. Admittedly physics is one of the "mathiest" fields, but it still involves doing experiments and coming up with mathematical models that fit the data rather than coming up with new kinds of mathematics. The line was much more blurred in Newton's time, but these days the distinction is pretty obvious.


Certainly physics is not a branch of mathematics, but it isn't uncommon for papers in theoretical physics to essentially be math papers with an eye toward applications to some physical notions, and physicists do come up with new math.


I would restate this so that software engineering and physics are fields concerned about building practical models. They are similar in a way that they both employ and invent models communicated through a mathematical formalism. "Model" is the key here, not mathematics.


It's not really worthwhile to try to code from first principles, because the context can always multiply in complexity. You want to make a tic-tac-toe game. What language are you going to program it in? Is it going to be cross-platform? Are you going to code an AI engine?

Trying to do anything in computing from first principles is like trying to code without mistakes. You're losing the value of iterative design.


I disagree. It's worth working things out from first principles as a method of practice, because when you have to do something completely novel, you won't be totally lost (if you never have to do this in your work, then I guess it doesn't apply).

Also, if you never work things out yourself, we'll only ever have one way to do things. What if a better approach exists but nobody's ever tried it?

It's good to double/sanity check what you came up with afterwards, and if the only way you can think of to do something is obviously seriously flawed, then don't go through with it and look it up instead.


Any time I have to develop something from first principles, I do it 2-3 times in different ways, and only then make a serious attempt at actually writing it. It's still terrible, of course, but at least has a chance of working since I have a bit of experience in what can go wrong.


That's exactly my method. What I've noticed in the last couple of years is that there are more and more programmers who could not start from a 'blank page' if their life depended on it. That's a sad thing in my opinion; programming is a creative job, and creation sometimes needs to start from nothing or extremely little.


Engineering is not a science; there is very little space for discovery there. In engineering you're applying science, not creating new scientific knowledge.


Engineering lives on a continuum from almost pure maths to pure cookbook canned solutions, with a much smaller arrow going in the opposite direction from unexplained observations to theorising and model building.

Things like Shannon's description of information-as-entropy were certainly a mathematical discovery about engineered systems, and led directly to a lot of coding and data compression theory.

There wasn't a whole lot of interest in quantum channels at the time, so the fact that the theory fed something back to mainstream physics was a bonus.


> Experimentation is the key to science.

A sort of humanitarian science, since almost everything we touch has been made by other humans. But somehow STEM people are famous for looking down their noses at the non-hard sciences.


Gall's law is related to this.

I think it's reasonable that people want to create software components that have been proven to work and then just forget about their internal details and copy-pas... apply them.

It's also what Bret Victor has been getting at. Very often the component's input-output mapping is much more relevant than implementation details. And you can get a feel for that by fiddling with the components.


Gall's Law for anyone who didn't know or couldn't remember it:

"A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system." – John Gall (1975)


Thanks for the reference, this is the best quote I've found in a long while. Strangely, I was unfamiliar with this.


Incidentally, he passed away recently. He had a varied career and was actually my pediatrician and encouraged my interest in astronomy. I only learned at his death about his contributions to systems theory...


I think it's crucial to the maintenance of our superstition that we don't know which parts of the system are actually important, and we keep misleading ourselves into the wrong assumption.

For example - style considerations. Important? Yes? Maybe? Not at the moment when it reaches the computer, but important when humans have to grok it.

So then should we "lead" development by promoting style? Will that get us somewhere? Probably not as much as some other strategy. But it's still sort of important and this blogger made a nice post the other day about style, so...


A microwave oven's input-output mapping is much more relevant than how it works. And you can get a feel for that by plugging it in and pressing the buttons.

It's a very general point - how to interact with things is often dependent on the surface, not the volume. This is the generative insight for the systems approach to engineering.


This is probably why I am an engineer instead of a pure scientist. I want to see something working, and be free to play with it, take it apart, put it back together, combine it with other things I know, etc. I do this with computer programs, with engineering systems, with mathematics. The more I do this in a given area the more intuition I build up and the more easily I can then learn something else from first principles if I need to. But I really do prefer to get my hands dirty first. Other people don't do this -- chalk that up to different learning styles.

I try to guard against cargo-culting (I abhor it).


Teaching from first principles doesn't hold up experimentally. Every documented case of teaching e.g. mathematics or languages from first principles, without first going through practical examples or the like, has resulted in failure.

I'm pretty sure nobody actually learns from first principles, but rather understands first principles from things they've already seen.


Teaching mathematics in a non-constructive way is extremely harmful. One has to wipe out all the previously acquired superstitions, all that high school crap, from the students' minds before reinstating proper, constructive knowledge there. Which raises the question - what was the point of filling their minds with all that crap in the first place?

Same thing with languages: there are schools which approach foreign languages formally, starting with grammar and all that. Some of those schools are known for training spies, for example, which is sort of an indication of their quality.


It's the ability which is best trained by studying mathematics (even if totally unrelated to programming).

Do not listen to those saying it's not possible. We do implement programming languages based on their specifications, we implement protocols by reading RFCs, we implement numeric algorithms by reading pseudocode in the papers, etc.

It's all trivial and mechanical. The latter notion is very important: one has to understand that there is very rarely a place for "creativity" or even thinking. You simply translate a specification from one formal language into another, following simple rules. Thinking too much considered harmful.


We do implement programming languages based on their specifications, we implement protocols by reading RFCs, we implement numeric algorithms by reading pseudocode in the papers, etc.

We implement spherical-cow versions of languages, protocols, etc by reading those papers and specs. Then we spend months or years fixing it to implement the actual languages and protocols.


No. We spend months and years fixing the issues with the specs (especially if they're committee-driven; they tend to be crappy). Yet it's easy to keep your implementation conformant to the specification without any hand-waving or looking at "examples".


You can keep it conformant, but that's not enough, because the real world isn't. Try writing a conformant HTML parser that rejects invalid input, and then run a crawler that uses it. You'll reject half the web!


In such a case the right way to go is to build a formal model of the informal expectations first. I.e., split scientific and engineering phases. Collect your data, build a theory, validate it, and only then engineer an implementation of it, mechanically, with zero mental activity.

And what you seem to suggest, start writing a parser and then experiment with various real-world inputs until you're satisfied, is certainly not a very productive way of doing things.


I'm not suggesting anything but that you can't build it from first principles. Collecting data and building a theory is not working from first principles, it's just a more structured way of hacking it until it works. You're still liable to have the next website you crawl break your parser.


> but that you can't build it from first principles

Do not confuse principles (which are universal, simple and beautiful) with specs (which suck shit a metric ton per second).

> it's just a more structured way of hacking it until it works

You're confusing hacking with cargo cult coding. Hacking until it works is exactly this formal loop: collect the data, build a model, test if it's applicable. Cargo cult is "google for an answer, paste some code from stackoverflow, see if it works". The OP article is about the latter.


As you've pointed out ("no intuitive knowledge"), the first principles of a language/API/framework come implicitly attached to design patterns that are probably fairly deep for whatever reason. Your usage of them gets you familiar with the framework, and thus builds your ability to work with them. I would also concur that practice is necessary before being able to construct anything from scratch.

The cognition of programming languages is very much like that of regular human languages. It takes a tremendous amount of disciplined, repetitive practice and observation before mastery of a language allows it to just flow effortlessly out of one's fingers.

A simple concrete example is the difference in articulative ability, constructing the same exact sentence for an essay, between a 12-year-old, an 18-year-old, and a 25-year-old post-doc. For the post-doc, who has written countless 10+ page essays, versus the 12-year-old in 6th grade who has more than likely never written anything beyond 1-2 pages, the difference in ability is obvious and striking. The patterns and structures come naturally to the well-practiced individual, as the Chinese parable of Zhuangzi teaches us with the concept of achieving flow (wu wei).

The difficulty with mastering programming the way we speak English (or any other primary language, for that matter) is that many projects only ever need to construct the equivalent of a given sentence once; then that sentence is committed to source control for eternity without ever needing to be constructed again. Thus the cognitive muscles for creating programs tend not to achieve flow, but the muscles for identifying, locating, copying, pasting and modifying do tend to achieve flow.

To exacerbate our inability to construct things from scratch, we also look to reduce the monotony, abstract away the difficulty, and reduce what actually needs to be written and kept track of, via APIs and abstractions that segregate duty. Of course I'm not advocating that good architecture be thrown out for the sake of practice, but in a sense, programming could use the equivalent of musical scales, where the well-known patterns and language-native constructs are exercised daily (or frequently enough) to the point of trivial mastery.

The argument against such kinds of practice from my peers & others on this board has been,

"duh, that's what API documentation and computers are for." "This stuff can be looked up." "This is why whiteboard programming exercises are not indicative of programming ability." "See, why learn math, when I can just type x+y into the console and get the result?"

However, I'd argue that the effortless mastery that comes from daily practice brings tremendous benefit. That mastery allows one to focus on the architectural problem set rather than on details like what to name a method (assuming that good method names come naturally from practice), or whether the first or second argument of the split method should be the separator.

Would it be acceptable, as a passenger on a bus, if the driver claimed that they did not need to know whether the left or right pedal was the gas or the brake, since that could be looked up via Google in the API documentation for the bus? Of course learning how to drive can be mastered in a couple of months, but programming is more akin to spoken language, or perhaps to mastering a musical instrument.

I would argue that much of the argument for or against certain interview techniques/questions/strategies really hinges on what the interviewers and interviewees see as a signal of effortless mastery. Esoteric pet questions test for intimacy with a narrow topic, but assume that other deep, related knowledge follows along. Whiteboard interviews look for problem-solving ability, but also for effortless mastery of a language and the basic data structures that someone with 1-2+ years in a language probably should have.

But at this point, I digress.


At the other end of the spectrum from practicing "scales", there is the growing size and complexity of libraries and APIs. If you only scratch the surface, learn the minimum necessary, then go on to learning something else, it's hard to gain fluency.

On the other hand, you should only be repeating the same patterns so many times before you build something so you don't have to replace yourself again...

Maybe it's an 80/20 mix; mostly repetition, part novel content.


Thank you! Also to everyone else who replied. I thought I was a bad programmer. I am constantly in a cycle of read, try, adjust, repeat until it works, then rebuild it now that I understand better how it works.


I realized that the entire piece is hinged upon this incorrect assumption:

>This isn't something that we like to think about as programmers, because we'd really rather believe that we're always working from scratch and only writing the completely correct stuff that really has to be there;

This is the issue I have with this article - the author assumes everyone works in exactly the same way they do, and so is vulnerable in exactly the same ways to the same weaknesses. I don't know anyone who actually approaches programming thinking that they're writing things from scratch and that it's correct. I mean, defensive coding is a concept that has existed for decades and that specifically addresses how incorrect the code we interact with is. The fact that I started coding in an environment where I had to manage memory myself and now I don't makes me very aware that everything I'm using is an abstraction built upon other abstractions.

I dunno - from the day I learned to program in Intro to C and Intro to Computer Science, defensive coding, not trusting user input, and the (completely understandable at the time) weirdness of the guy who came before you were stressed as things to pay attention to.


Usually when I'm writing code it is "Let's get the infrastructure up and come back to clean up after I know it works..." Sometimes I never come back. It doesn't mean I think the code is correct, only that it is not the biggest problem.


Since I began programming I don't believe in miracles. I count on them.


I am sorry, the only reason I have been as successful as I have been is because I am not superstitious at all. I know that if something goes wrong, there is always a logical explanation for it. The quicker I can find that logical explanation the better I am doing.

I don't think I am alone in this.


I often wish there were classes on investigation: how to systematically reduce the search space in order to find an explanation for a phenomenon, good or bad.


> The quicker I can find that logical explanation

The quicker your test suite can find that logical explanation... ;)


[flagged]


I did read it, there were quite a few comments before I even posted.

What he described was not superstition.


Indeed, what he described was not superstition; there doesn't appear to be an exact word for what he was talking about (cargo cult is perhaps closer, but not quite right), so it was a fuzzy match.

What you responded to, however, was superstition and as a result your comment had basically nothing to do with the article. I hope you can forgive my misunderstanding. :)


It's true. All of my programs go from line 665 to 667. I've never made a commit on Friday the 13th, ever--and the one time I did it caused irretrievable data loss. Once I walked under a ladder, and it broke the build.


Article: Reinvent the wheel, because if you just mount whatever wheels fit the axle they will inevitably work poorly for your purposes.

Wheels work just fine. Choose the best wheel from those available and tweak it as desired. Test it to make sure it doesn't shatter under too much weight. Apply and move on.

This pervading notion of "if you don't understand it, you shouldn't do it" is silly. Sure, it is nice to understand, but it doesn't put food on the table. Sometimes it's just handy to use the gun as-is and go shoot your dinner in the woods. You don't need to understand the formulation of gunpowder or rifling in order to use a gun properly for its intended use.



