The author talks about languages and technologies (JS, Mongo), but he's really getting at something deeper. The real danger of the "ship it" culture is that things that can't be "shipped" right away — things that require solving really hard problems — tend to fall off our collective radar because there is just SO MUCH cool and (relatively) easy stuff to do right now. PG has a great term for this: "schlep blindness": http://paulgraham.com/schlep.html
The problem runs even deeper than that. Even if you like doing work that requires more effort, it's shockingly difficult to find places and people willing to invest in or hire you to work on such technology. You essentially have a choice between academia (which is, well, sometimes very academic) and large corporations, which have their own share of problems and which not everyone fits into.
And even in big corps, 95% of all IT is usually plumbing and hectically applying band-aids for past issues that should not have been there in the first place.
To me, the fact that IT is currently seen as rapidly progressing is laughable. It is rapidly changing, true, but for the most part we're solving old problems with new tools that work only marginally better than the old tools (because, like the article says, the underlying mentality is the same). Very often they change fast enough to keep everyone always "learning" at the expense of the users, so the marginal benefits are cancelled out.
(Seriously, I know people who made the same excuse for bad quality for the last decade. "Oh, I'm just learning [over-hyped tool]. Everyone makes mistakes at first." Learning tools is a fake kind of learning. Real advances happen when your conceptual basis is improved, so you get better at doing something with whatever tool. That's real learning. )
---
Also, sometimes we're using new tools that work significantly worse than some old tools... because at some point conceptual development branched and "our" branch is at the same level of development as another branch was in, say, the 60s or 70s. Sometimes I read about old tech and it just blows my mind. "What? They were doing this three decades ago? How come this is being marketed as an awesome new thing right now?"
There is another option: work for a salary, live frugally, and stockpile savings. Then later (maybe 5-10 years later), work on the tech you really want to work on that has no obvious & immediate business need. The upside is that because no one invested or hired you to work on it, you own 100% of that work.
Also, make sure you (a) are young (< 30 yrs), (b) have little to no responsibilities (family, kids), (c) have great healthcare or health insurance, and (d) are lucky as a little green leprechaun!
Yea, if you have met all those conditions, then that approach may work for you.
I think many people confuse "ship it" culture with just doing crap work. I've seen too many ship it projects die because the biggest feedback from the customers was "This thing doesn't actually seem to work" or "This is useless". Ship it can apply to hard problems as much as easy ones, it's more about not doing extra work before you know what your customers want. But if you MVP it to the point where it barely works and has no real functionality then customers will often view it as trivial and useless and you'll likely confuse missing product market fit with under delivering.
Perhaps ship-it-now is the right answer for the times. With so many rapid changes in technology there should be 1) a lot of low-hanging fruit (i.e., quick, high-value solutions), making short-term project more valuable and 2) a shorter shelf-life for any solution (i.e., a new tech will make it obsolete), making long-term projects less valuable.
I wonder if ship-it-now isn't more a reflection of the times, rather than the right answer.
"Pick off the low-hanging fruit" works for a shop looking to turn some mad profit without having to do a lot of deep thinking. And that's certainly fine. But if that's the culture of the industry, we'd be sacrificing progress and innovation for a quick and ephemeral dollar.
There's a quiet theme that runs through the community here, and tech in general, that suggests everything that's new is often just old again. A lot of these "rapid changes" really do feel like reinvention and change for its own sake, and seem plagued with the same issue as above -- building new iterations on existing ideas, picking off low-hanging fruit but not really going anywhere.
edit:
> The problem is that these technologies, being so beginner-friendly and aggressively marketed, rapidly pick up steam and become the “cool” things to use, regardless of actual merit or lack thereof.
This line hit it on the head for me. I've been in the business for a while but have only been programming directly for a fraction of that -- I can readily admit, a lot of the newer JS libraries and frameworks made me feel like a superhero with almost no proper training or understanding of computer science.
The problem with that proposal is that we're currently waiting until all of the low-hanging fruit has been picked. I'm no masochist - if there is nice fruit on the bottom branches, I'm going to pick it. The problem is that the low-hanging branches have been picked clean of all their nice fruit, leaving just the sour, the rotten, and the immature. We could start putting in the effort and climbing the tree for the nice, ripe fruit at the top, but we're being lazy and waiting until we've picked every last worm-filled mush at the bottom.
To further the analogy.. as soon as the fruit on the low hanging branches becomes bad enough to justify the additional effort of climbing the tree for the better fruit, people will do it.
Your argument makes it sound like you're saying no hard problems are being worked on currently, which is simply not true.
To continue the analogy even further - ... not if people instead invent ways to paint the crappy fruits so that customer buys them anyways. Happens all the time in every sector. As industries mature, products get crappier.
On this tree called "life", new low-hanging fruit grows every day. It's a reason we see such churn in the sorts of tech that keeps solving the same easy problems in different ways.
The only reason there is low-hanging fruit is that you need to reimplement, in the tech du jour, solutions to problems that were solved 30 and 40 years ago.
technologies change quickly because everybody wants to ship fast and put out something just good enough to be better than the last half baked solution that was shipped because technologies change quickly.
Maybe that depends on who's uttering the phrase and what it means to them. Some points in the space:
- "Devops" as a movement encompassing the ideas that developers should not be walled off from operational realities, and that software operations tasks should be encoded as repeatable, testable software (vs ad-hoc stuff a sysadmin does on a box somewhere).
- "Devops" as "oh look, we can hire less people and just get some 'devops' to do it all."
The way I have seen it used was more like the inverse of your first example, where one goes from encoded and tested sysadmin tasks to "let's throw into production any newfangled thing the devs (monkeys hammering keyboards, more like it) want to use".
Devops can definitely be used as a very expensive bandaid around poor engineering practices.
But there's more to it... sometimes just monitoring for failures and restarting when they're detected takes much less engineering work than preventing the crash in the first place. Maybe "failure engineering" is a good term for a lot of the value Devops techniques can bring to a team?
When implementing a failure-tolerant system, one has to remember that a "crash" can often be parlayed into an "exploit" or at least a "denial of service", and thus one should not become too tolerant of failure.
But isn't that what systems engineering is all about? Feedback loops with resiliency. Why is it that the software folks think they have discovered something new?
DevOps is an integral part of Continuous Deployment. I am not sure why methods used to make deployment of software more frequent and less error prone should raise anyone's hackles. I guess it is not understanding what DevOps is and what it is trying to achieve.
You pretty much answered yourself right there in your comment. See how you capitalize "Continuous Deployment" and "DevOps"? That's because they're recently invented feel-good buzzwords.
That's the one. That's basically shorthand for "you don't need to do anything really well, you just need to do everything barely good enough to ship it now".
The sad bit is that many full-stack developers are actually better than specialized developers in their respective skills (unless of course they are JS/Node.js types).
Which really means "OUR stack developer, being a subset of the full-stack in our particular narrow domain, going as high as we go, and as low as we go".
I agree and that's exactly what job adverts should say. Spell out the stack and the amount of experience required. None of the nonsense with vague phrases like "full-stack developer".
This whole article makes me want to scream, if only because the author seems to have never heard of the concept of Path Dependence[1]. I could utter a similar rant, on how terrible it is that we're stuck with awful legacy dumb AC lightbulb sockets everywhere, and wouldn't it be nice if the entropy fairy just waved a magic wand and we all had 'net-connected DC smart sockets for our wonderful Future Bulbs.
But that kind of talk utterly dismisses the important reality that 1) we don't have a magic wand and 2) we have to deal with things like path dependence and network effects as phenomena. Example: if you chart out historical network bandwidth, you'll get a Moore's-law-esque curve, but with a significant step function depending on which network aspect you study. Why? Path dependence and (literal!) network effects: we don't see the benefits of network bandwidth improvements until enough hardware has been upgraded to see an end-to-end improvement.
Javascript has become the de facto browser programming language, even though it's objectively awful for complex software. The only reason it has achieved its high status is that browser developers refused to cooperate and develop something better. Microsoft and Apple view software lock-in as a competitive advantage and have actively undermined technologies they thought were threatening (Java and Flash, amongst others). The explosion of Html5 and javascript was more of a 'nature will find a way' type development and can almost entirely be credited to Google and various web companies pushing it and making it better.
It's still a very flawed technology stack for complex web-apps though and it's very reasonable to suggest that we could do a lot better.
> The explosion of Html5 and javascript was more of a 'nature will find a way' type development and can almost entirely be credited to Google and various web companies pushing it and making it better.
You say this with a pejorative tone, but the fact is I've yet to see a human-designed system that is as elegant or as robust as the systems evolved by nature.
Sure for relatively simple problems in isolation we can design extremely elegant solutions, but complex systems beyond the scale of one human mind become exponentially more difficult to properly design as communication overhead quickly overwhelms the capacity to reason about the entire system. Abstraction is our main tool to fight this, and it is powerful, but most abstractions are leaky, and there's always some essential and irreducible complexity that fights against encapsulation and modularity.
As long as I've been a web developer (20 years) there has been this constant drumbeat of hand-wringing over the terribleness of the web and its "abuse" as an application platform, all the while its evolutionary traits as a simple, open platform have earned it a ubiquity that no single-vendor platform has ever approached. Think about it: web pages are accessible to people with any disability, on almost any device in the world, regardless of operating system. Yes, the Win32 API is a more elegant substrate for programming a GUI, but even in the flower of Microsoft's dominance it still paled in comparison to the ubiquity of the modern-day web. Fixing the warts of the web is not the hard part; the hard part in designing a "better" platform is achieving adoption, and that would require an act of god, as there is no coalition powerful enough to make it happen through human volition alone.
> Fixing the warts of the web is not the hard part; the hard part in designing a "better" platform is achieving adoption, and that would require an act of god, as there is no coalition powerful enough to make it happen through human volition alone.
I don't know; I remember when Flash was viewed the same way: it ran everywhere and nothing would unseat it. Although I am not a huge Apple fan, I think Jobs started the snowball which unseated Flash's dominance. So maybe someone or some organisation could unseat what we see as unseatable.
Flash always had its drawbacks, such as: 1. it was not a web standard, 2. it was not indexable by search bots + was not accessible, 3. needed a plug-in to run. Therefore, I don't think it was ever being seen as unseatable. It was actually quite hated in the community of web standards & accessibility-aware web developers very early on.
Don't get me wrong, I hated Flash as much as anyone else. And yes it required a plugin, but it was ubiquitous, and did run everywhere. Plus it allowed people to make rich games and apps. I even remember someone developed a DB connector that used Flash's protocols (it wasn't using Flash) to allow universal web based access to DBs (my google-fu is failing me, can't find the project now).
So regarding your points:
1: Flash everywhere made it a de facto standard
2: You win there - except in my opinion most real-world examples of HTML5/CSS3/JS are not accessible (which is a condemnation of us devs: we have the tools to make our sites accessible but we don't (are too lazy to?) do it).
3: Fair point again, but I glimpsed a headline here on HN that said Firefox was using something else to play flash videos...
But to your main point, which was drawbacks and dev hate: I think we are also starting to see that happening now with JS. There are drawbacks to JS, and some are starting to hate it. Whether or not it is enough to start a Che Guevara movement against JS I guess will unfold in the coming years.
My money is on JS being here to stay (but I am personally still hesitant to dive deep into it) - yet I think it is in the realm of possibility to remove JS and move to something 'better'. Just what that 'better' is I don't know.
> Don't get me wrong, I hated Flash as much as anyone else. And yes it required a plugin, but it was ubiquitous, and did run everywhere.
Flash was more ubiquitous than other GUI toolkits, so I'd say it was as close to the dream as any proprietary platform developer ever achieved. However it never achieved anything approaching the web's current reach in sheer breadth of platforms.
Grandparent's point is that any model of reality that doesn't acknowledge the fact that sometimes powerful actors will refuse to cooperate is a flawed model of reality.
Who's the "we" in your comment and the original article? HN readers? The programming community at large? You and anyone else reading this article are free to write a compiler for a much better language than Javascript. There is no guarantee that Microsoft or Apple or Google will not actively undermine you too if it looks like your application is becoming a threat. More likely, such a language wouldn't serve the needs of most developers, and it'd be ignored like many other brilliant research languages that cannot get popular adoption.
Tech corporations aren't dictators and developers are not powerless. There are various ways we (developers and users) can put pressure on them. Damaging their brand by raising awareness is certainly one of them.
> it's very reasonable to suggest that we could do a lot better.
Yes, and it's so damn obvious now that you'd better have something salient to add. To wit: this has been done much better by others. C'mon, the book is called Javascript: The Good Parts (cough, 2008 vintage). This post is just chock full of bizarre straw-men in the guise of actual argument:
> Systems stopped using cooperative multitasking at least 20 years ago because it sucked compared to the alternative of automatic, preemptive multitasking. And yet Node.js harks back to those dark days with its callback-based concurrency, all running in a single thread.
What the heck? No comprehension of evented systems? Reactive programming, anyone? No acknowledgement of the use cases when these systems do (or do not), in fact, kick total ass? Yes, the vanilla JS nested callback thing is a bit annoying. So don't do it. Use some of those FP chops and unwind it. Maybe a fold or a monad or some categorical pocket lint. Just please let cooperative multitasking have a nice rest.
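To make that concrete, here's a minimal sketch of "unwinding it" - the helpers (getUser, getPosts, render, handle) are made-up placeholders, not anything from Node's API:

    // Hypothetical callback-style helpers, just to have something to chain.
    function getUser(id, cb)    { setTimeout(() => cb(null, { id, name: "demo" }), 10); }
    function getPosts(user, cb) { setTimeout(() => cb(null, [user.name + "'s first post"]), 10); }
    const render = posts => console.log(posts);
    const handle = err => console.error(err);

    // The nested-callback "pyramid" everyone complains about:
    getUser(1, (err, user) => {
      if (err) return handle(err);
      getPosts(user, (err, posts) => {
        if (err) return handle(err);
        render(posts);
      });
    });

    // The same flow, unwound by lifting each callback into a promise and chaining:
    const promisify = fn => arg =>
      new Promise((resolve, reject) =>
        fn(arg, (err, val) => (err ? reject(err) : resolve(val))));

    promisify(getUser)(1)
      .then(promisify(getPosts))
      .then(render)
      .catch(handle);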
META: This is certainly the most divisive comment, vote-wise, that I've ever made. It was significantly positive last night, and is now currently hovering around zero. It's been up and down so much I really wish I could see the total vote volume.
To downvoters: now certainly this has a bit of ranty nature to it, but is it really so off the mark and off HN policy that it deserves only a downvote, and no rebuttal or reply-in-guidance? Or are some of you just using your downvotes as rebuttals? C'mon, speak up!
> …browser developers refused to cooperate and develop something better. Microsoft and Apple…
Javascript was the language for nearly a decade before Apple shipped Safari.
Netscape shipped Javascript, but no one was necessarily locked in. When Microsoft shipped IE, they included an only somewhat compatible version of "JScript"/"DHTML" in it, with proprietary extensions added in. There wasn't even cooperation of Javascript per se, much less a "better" language to replace it.
C++ is fine for complex software; it's bad for simple software. The problem is that it provides benefits that are irrelevant for most people, and the cost of making those benefits available is pretty high, so for the average developer it seems stupidly complicated.
I don't understand your complaint against the article. I think it exactly discusses the harms arising from path-dependence and how we need a lot of focus, dedication, and hard work to overcome this path-dependent inertia, to build better tools, and to get them adopted.
Many of the problems he describes do not need a magic wand. In fact, many cases of horrible tech were created by making an arbitrary choice between several possible options that included far superior solutions. That's why the situation is infuriating. Fixation on bad technology is often presented as equivalent to consumer-grade standards (wall plug, lightbulb socket) when it is absolutely, totally different in nature.
So, there's a great argument to be made about the negative impacts of "Shipping Culture".
One could argue that it encourages underdesigning of ultimately complex systems. One could argue that it encourages excessive code and infrastructure reuse, to the point that even trivial projects pull in more than is needed and consume more resources than even remotely necessary. One could even argue that it creates in customers an unreasonable expectation about ongoing fixes to a project, about ongoing support contracts and culture, and generally discourages creating artifacts that later generations can use in favor of ongoing maintenance performed by morlocks.
Sadly, the author makes none of those arguments, and instead gripes about hackathons, about Javascript (badly), and about MongoDB (which is fun, but doesn't go anywhere).
Author bemoans "ship it now" culture, bemoans shipping things before they're technically good--and then has the gall to use a Netscape Navigator screenshot! Does anyone else find irony in that? Netscape was never known for good-quality code or sane implementations--go read old kvetching by Brendan Eich or Jamie Zawinski. You know what they did do, though? Fucking shipped.
Author complains about NaN--which is a goddamn standard in IEEE 754, which is exactly how C++ (their pet language) handles things, and which is completely reasonable behavior for any JS implementation. Not having an integer type? Newsflash kid: JS supports bit-accurate integers up to Int32s.
Author claims systems stopped using cooperative multitasking two decades ago. Author has presumably never written code for hard-real-time embedded systems.
Author goes on to complain about ancient standards. I have an ancient 120v 60-hertz AC circuit running my house. I have an ancient system of measurement for the beer on my desk. Funny thing about ancient systems: if they're still around, it's because they've perhaps solved the problem well enough to stay around.
"Shipping Culture", as defined by the author, isn't what's hurting us: it's the influx of jackasses with loud mouths and no appreciation for history, business, or engineering loudly proclaiming that perhaps the most productive decade in software engineering is somehow wrong.
Yeah, the title and the article have very little to do with each other. Complaining about NaN semantics is especially irrelevant.
OTOH, it's funny you bring up Netscape. Yes, they shipped. Back in the Netscape 3-4 days of the Browser Wars, every new version was a crap shoot. It might un-break some page you wanted to use, but it might also be drastically slower, crash a page that you needed, lose your preferences, or whatever.
But the cool thing is that you could choose when or if to update! You could install the new version alongside the old one, try it out for awhile, and junk it if it was worse overall. If most people thought Netscape was getting worse, then most people would stick to the old, working version. Nowadays browsers follow the same "random walk" development model, but shove the latest version down most users' throats whenever they feel like it, and make it hard or impossible to revert the latest breakage. "Ship crap" combined with "software force-feeding" is good for no one but lazy coders.
So, on the one hand, I agree very much that having the options available was pretty cool.
On the other hand, having seen this repeated time and time again in the enterprise world, I don't think users should be given the option of not updating, especially for services that they aren't hosting themselves. All they end up doing is creating a support burden and being dissatisfied.
Our users are increasingly ignorant about the systems that they use--I think that precludes them from having final input about how said systems are implemented and deployed.
Since I don't support anything "enterprise," I'm probably a bit biased. Still, "creating a support burden" basically means "making more work for the developer," while "updating" almost always means "making more work for the user." I happen to use Emacs, and even though it's very slow-moving and careful about backward compatibility, I always put aside some free time for major updates, because they always break my setup somehow. And I'm lucky compared to the average software user: I'm a coder, so I can usually work around the breakage without too much trouble. Regularly making work for people without this option is inhumane.
Regarding ignorance, you know far more about how the systems work, but they probably know far more about how they use them.
I still think the auto-update treadmill is a symptom of developer arrogance, laziness, and callousness, but maybe I'm just old before my time.
> Instead we get some bizarro-world where the type of NaN (“Not a Number”) is number, where NaN !== NaN*, and a chart like this exists for something as simple as comparing two values.
That's the definition of NaN in virtually any programming language with floating-point numbers. And the comparison table makes sense once you know the coercion rules: when the types differ, both sides are coerced to a common type (usually numbers) before comparing. Just use === instead of ==, which is only really useful for comparing against null/undefined.
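A few concrete cases, purely as illustration of that rule (all standard JS behaviour):

    // Loose equality (==) coerces before comparing; strict equality (===) never does.
    console.log(1 == '1');            // true  ('1' is coerced to the number 1)
    console.log(1 === '1');           // false
    console.log(null == undefined);   // true  (the one genuinely handy == case)
    console.log(null === undefined);  // false

    // NaN is unequal to everything, itself included, under both operators.
    console.log(NaN == NaN);          // false
    console.log(NaN === NaN);         // false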
I think the author just picked an awkward example. The real point is the existence of that table of weird (non)equalities and (non)truthinesses in JS [1].
I cracked up over the famous NaNNaNNaNNaN Batman presentation. But, IMO that table is very predictable for a 'dynamic' language, for the normal use-cases. It only gets strange on the outer edges with oddball arrays and objects.
But, if you have code which compares one-element arrays with strings, you have bigger problems than javascript. I can't recall seeing any serious comparison Batman-style bug. Hypothetically, that kind of stuff wouldn't even be a logic bug, but a design issue that was allowed by the dynamic type system. Solution: don't bitch about Javascript, use something else.
NaN behavior might actually be appropriate in Haskell; I don't know enough about it to comment. But it is not helpful in JavaScript. Part of the problem is that in JS you can get NaN in a variety of different ways that do not involve mathematics. And that all numbers can be floats (whereas in other languages we have integers). And that NaN can be silently propagated and morphed in more complex expressions.
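A few of those non-mathematical routes to NaN, for illustration (all standard JS):

    console.log(Number('foo'));               // NaN - failed string-to-number conversion
    console.log(parseInt('abc'));             // NaN
    console.log(undefined + 1);               // NaN - undefined coerces to NaN
    console.log(new Date('nope').getTime());  // NaN - invalid date
    console.log([1, 2] * 3);                  // NaN - array becomes "1,2", which becomes NaN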
If (x == x) tests false, then it asserts that x is not itself, which is logically preposterous.
ANSI Common Lisp has a bit of this problem in it too, but it's not required; it is there for some weird historic implementations. That is to say, if x holds a number like 1, then (eq x x) is not required to yield t. (But in sane implementations it does yield t; and it yields t even if x is a bignum, because (eq x x) is given the same object as two arguments. Two separately computed bignums of equal value will likely, of course, not be eq.)
How this can be explained is that eq tests "implementation identity", and somehow different instances of a number are treated as different implementations. Argument passing is by value, and the two reductions of the expression x in (eq x x) to a value somehow produce a different implementation of the value.
This rationale is unrelated to IEEE NaN-s, though.
Don't quote me on this, but I recall the rationale for NaN is that NaN is typically the result of a division by 0.
Divisions by 0 can be thought of as infinity (for the sake of this explanation, but mathematicians will cringe), but it is not any particular infinity. In the sense that x / 0 does not necessarily have to equal y / 0. For that definition, the result of a division by 0, NaN, must not equal itself.
I understand the point perfectly. However, if I have a NaN which is captured in a lexical variable (perhaps the result of a division by zero, as you note) then in fact I do have a particular infinity: whatever object is inside that darned variable! If I do another division by zero, then sure, hit me with a different NaN which doesn't compare equal to the first one I got. But don't make my variable not equal to itself.
Normal division by zero gives you Infinity. To get NaN, you have to do something as numerically confounding as divide zero by zero, which isn't any infinity, because the numerator is zero, and which isn't zero or any finite number, because the denominator is zero.
IEEE division by zero gives positive (or negative) infinity (with the sign determined by the sign of the zero). NaN crops up with e.g. sqrt(-1) and infinity - infinity for which there is no way to define a sane answer.
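In JS, which simply exposes IEEE 754 doubles, that looks like this (shown only as illustration):

    console.log(1 / 0);                // Infinity
    console.log(1 / -0);               // -Infinity (the sign of the zero decides)
    console.log(0 / 0);                // NaN
    console.log(Math.sqrt(-1));        // NaN
    console.log(Infinity - Infinity);  // NaN
    console.log(0 === -0);             // true  (=== ignores the sign of zero)
    console.log(Object.is(0, -0));     // false (Object.is distinguishes them)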
Determining positive or negative infinity from the "sign of the zero" is not sane to begin with. Zero has no sign in mathematics. It's just a representational accident in floating-point: there is a one bit wide field for sign, and for the sake of uniformity, representations of zero have that field too. Treating these as different is dumb; they are just different spellings of the same number in a bitwise notation.
To drive the point home, this is somewhat like making 0xF different from 15.
Not wrong, but zero having a sign is useful for several complex plane algorithms--it's not just an accident.
As a general rule, anything that is in IEEE 754 (or 854) has a damn good reason for being there, and you had best take some time to understand it or risk looking stupid. A lot of hardware people screamed about a lot of the obnoxious software crap in IEEE 754, so, if something survived and made it into the standard, it had an initial reason for being there even if that reason has gone away with time/the advance of knowledge/Moore's Law.
The original designer's commentary about it is:
William Kahan, "Branch Cuts for Complex Elementary Functions, or Much Ado About Nothing's Sign Bit", in The State of the Art in Numerical Analysis (eds. Iserles and Powell), Clarendon Press, Oxford, 1987.
First of all it is debatable over whether or not the spec allows (eq x x) to be false. eq is required to return true if the arguments are the same identical object. It would be a twisted interpretation of that to allow (eq x x) to ever return false.
Now (eq 1 1) is specifically not required to return true, as, for example an implementation that boxes all numbers could create two separate objects for that expression, and the arguments are now in fact not the same identical object.
This is something that exposes implementation details to the user, so using eq is discouraged (and in fact it can only portably be used to compare symbols). There are times when the best way to accomplish what you are doing is to (ab)use implementation specific behavior, and this is particularly true of Lisp which was a dynamically typed garbage collected language in the 70s (yes, predating the VT100 terminal referenced in this article).
Direct quote from Common Lisp HyperSpec, under Function EQ:
"An implementation is permitted to make ``copies'' of characters and numbers at any time. The effect is that Common Lisp makes no guarantee that eq is true even when both its arguments are ``the same thing'' if that thing is a character or number. "
It is strange for (eq x x) to be false, when x holds 1, or anything else; yet that appears to be allowed when it holds a number or character.
(eq 1 1) being nil caters to implementations that have heap allocated numbers. That is fine, but under (eq x x), both arguments should be the same heap-allocated 1.
(eq x x) being nil is nothing but pandering to some weird, historic implementations, whose quirk should never have been made ANSI conforming. I don't think it's relevant today.
NaN literally means "not a number." Lots of things aren't numbers. The letter a is not a number. The square root of negative 1 is not a number (at least, not one representable in floating point math). a is not equal to the square root of negative 1.
In practice, NaN as a literal means that the outcome of a mathematical statement is not expressible. So, the letter 'a' is not equal to NaN (with either two or three = signs). NaN, in other words, has a special meaning.
Dismayingly, JavaScript's isNaN() function diverges from this special meaning, and does something close to what you said -- it tests to see whether something is at least almost a number. So isNaN(0/0) and isNaN('foo') both evaluate to true, whereas isNaN('1') and isNaN(42) both evaluate to false.
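For what it's worth, ES2015 added Number.isNaN, which skips the coercion; a quick comparison (standard behaviour):

    console.log(isNaN('foo'));          // true  - 'foo' is coerced to NaN first
    console.log(isNaN('1'));            // false - '1' coerces to the number 1
    console.log(Number.isNaN('foo'));   // false - no coercion; 'foo' just isn't the NaN value
    console.log(Number.isNaN(0 / 0));   // true  - an actual NaN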
Javascript type coercions might make sense if you realize it was originally designed for a close integration with HTML forms using a simplified proto-DOM ("Level 0").
The idea was more like "isNaN(myForm.myField)"?
For all I know, the above might actually still work. But either way, that explains Javascript type conversion logic.
Okay, but while "abc" is not a number, it is also the string "abc". NaN is special in that all it tells you is that the value is not a number -- it doesn't tell you what it is. In order to return a true value from an equality test, it isn't enough to know that both values are not a number, you have to know what they actually are.
If it is not known what is in the p variable, then the variable is indeterminate; it has exactly the same status as a variable that has not been initialized. In this case, the behavior upon accessing the variable should be undefined.
I agree with making accesses of NaN-valued variables undefined behavior, so that not comparing equal is then a possible consequence of undefined behavior.
I don't agree with defining the unequal comparison as the required behavior. To define the behavior is tantamount to the recognition that a NaN is something: an object. A variable can have a defined value which is that something, and that value must obey the Law of Identity.
According to http://stackoverflow.com/a/1573715 the IEEE-754 committee decided to make NaN != NaN in order for programmers to have a simple way of detecting NaN before there was a standardized isnan function or macro.
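That is, the classic self-comparison idiom; a sketch of it in JS (the function name is mine, not any library's):

    // NaN is the only value for which x !== x, so the self-comparison
    // doubles as a NaN test -- which is what the committee intended.
    function isReallyNaN(x) {
      return x !== x;
    }

    console.log(isReallyNaN(0 / 0));   // true
    console.log(isReallyNaN(42));      // false
    console.log(isReallyNaN('abc'));   // false - a string, but not the NaN value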
Yes, if you take the false result to be the indication that you have a NaN. Either polarity comparison will work, if you correctly interpret its result.
The law of identity shouldn't apply to NaN because it isn't a single value. NaN represents a set of values that get mapped to NaN. But information is lost along the way. Since we can't recover that information, we can't know for certain what value this NaN represents or whether this NaN represents the same thing as that NaN.
The stupid thing about IEEE NaN is that it's not equal to itself!
Why would it be? The rationale makes perfect sense. It would be unexpected to believe that 17/0 == 0^0. Both are NaN, and equally nonsense, but 2 very different statements.
Secondly, IIRC, NaN is supposed to break assert(x==x), because if you use NaN in your program, then your program is undefined.
Having consistent behavior for NaN means that I can write "<math operation> ? x : y" and know that "y" will be the result when "<math operation>" involves NaN. It's useful.
Not saying it's the perfect way to handle things, but it's pretty reasonable in world where people write code that generates NaN almost every time they use division. I worry more about writing code that generates NaNs than NaNs themselves anyway.
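Concretely, the pattern looks something like this (plain JS; the helper name is just illustrative):

    // Any comparison involving NaN is false, and NaN itself is falsy,
    // so a ternary like this reliably falls through to the fallback.
    function positiveOr(value, fallback) {
      return value > 0 ? value : fallback;
    }

    console.log(positiveOr(3, -1));      // 3
    console.log(positiveOr(0 / 0, -1));  // -1, because NaN > 0 is false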
> Systems stopped using cooperative multitasking at least 20 years ago because it sucked compared to the alternative of automatic, preemptive multitasking. And yet Node.js harks back to those dark days with its callback-based concurrency, all running in a single thread.
Uh, no. For instance, Unix kernels have been traditionally cooperative---at least when executing kernel code. That is to say, user space can be preempted but not the execution of kernel code. The introduction of SMP brings concurrency into the kernel, though, and with that, preemption can follow. Linux development followed this path.
Still today, you don't have to turn on CONFIG_PREEMPT when building your kernel. If you don't have SMP either, then you have a cooperative kernel: one task is in there at a time, and it has the CPU until it voluntarily calls into the scheduler.
Cooperative tasking has the enormous advantage that it makes a whackload of potential race conditions go away, and that could be a possible reason why Node.js is the way it is.
Unix might not have been successful had Ken Thompson decided to make the kernel preemptive, and then spent 1973-1987 debugging it. :)
>Unix might not have been successful had Ken Thompson decided to make the kernel preemptive, and then spent 1973-1987 debugging it. :)
Sadly that didn't happen, UNIX was adopted by 80's hipsters into their startups, became adopted by the industry at large, thus spreading C into the industry.
I'll bite.
When I learnt C I found it charmingly simple and it took me years to realize how deeply broken some aspects are, especially the separate compilation circus — programmers working hard to help the compiler and not getting any modularity benefits in return.
So suppose C didn't happen. Do you expect the industry at large would have adopted a great language instead?
Or some random product of history, halfway between PL/I++ and JavaScript?
Enforced modularity in languages has downsides, like basically killing the flexibility in system construction. Not every image produced is an application program running over an operating system.
Yes. If your problems are essentially trivial and you are just trying to gauge customer moods, shipping and shipping again quickly is the way to go...and thinking about the problem will probably not help.
However, while quick iterations converge rapidly on a local maximum, they really, really suck at getting you out of that local maximum.
I notice this when trying to create a new and different programming language (http://objective.st): for almost all the problems I am tackling, there are quick and obvious answers... that get me stuck in the same mess we are already in. So I've found it necessary to deliberately delay implementing stuff, going slower than I could, to make sure I leave time for my not-quite-conscious thought processes to work out the problem and present the results during a relaxing shower.
You see the same thing in natural organisms. Vestigial adaptations and odd or even awkward designs abound, yet they are functional. Some people take "Worse is Better" to the extreme of "Worst is Best", but there is some wisdom in the idea that it's better to get an adequate solution into play early than to wait for a perfect, clean-room design. The important thing is to set up a feedback loop that continually refines your solution.
Shipping doesn't always need to mean shipping to the public.
I write complex software too but I "ship" to myself early and often. I start by trying to solve a hard problem as fast and as quickly as possible in a naive way so that I can experiment more freely. This gives me more insight into the problem itself and into the solution space. The solution often changes -- sometimes it's the problem that changes.
If I were working with a customer or a startup I would definitely "ship" early and often, too, but I wouldn't give much weight to these "releases". They're just something to play with and to cut the shape for the problem.
The problem that I have with this thesis (that shipping culture is hurting us) is the results people have. You can certainly argue that a better result could be achieved by taking more care with your design, but it is unclear that in doing so you would have achieved something more "valuable" than you did by shipping and iterating.
I do agree with Gary Bernhardt that infrastructure is not getting the attention it deserves, but as we saw with GPG it isn't something that is easily funded. Sun paid an engineer who did mostly nothing but maintain xterm (and the Sun tools equivalent) for several years. Where are you going to find a sponsor for a new terminal? Current terminal code could be improved of course, but in so doing would you get better code? Better systems?
The thing is the "Think carefully and create deliberately" approach has been tried so very few times. But when we have tried it we've gotten Lisp machines, ACID RDBMs, strong typing, and many other important and very useful tools. I'd say it's worth a shot more often.
We know we're in a world of hurt now, but the argument against it is "oww. Devil we don't know!".
The "think carefully and create deliberately" approach has given us a lot more than that - there's also Smalltalk, Xanadu, Eros/E, NextStep, BeOS, microkernels, Plan9, Dylan, and General Magic.
A look at that list is pretty instructive for why more people don't take the "think carefully and create deliberately" approach: by and large, the creators of those projects failed to profit from their inventions. In many cases, they wasted years of their life slaving away in pursuit of perfection, and the market didn't care. If you study any of the systems I mentioned, you'll find some incredibly elegant and beautiful CS concepts, ones I wish I could use for every-day programming all the time. But the problem is that none of these innovations exist in a vacuum, and in the time it took to perfect the product, the market passed them by and the world changed in a way that made them no longer relevant.
"We" and by that I mean software engineers, used to do the 'Think carefully and create deliberately' all the time, we called it software architecture. But "They", and by they I mean software engineers with a 'ship it' culture, got stuff to market faster and improved it faster, and called it 'ship and iterate'.
Lots of people were called out as "old fogeys" or "dinosaurs" when they asked to think through some of the ramifications of shipping things. Those folks got trained that such behavior is a quick way to get managed out of an organization.
It would be interesting if, as an industry, wave 3 (which, like wave 2 before it, is an order of magnitude more engineers than its preceding wave) decided to go back to a more mindful way of developing software. And if so, whether they could remain employed amongst the wave 4 cohort.
There's also this really ugly aspect of the ship vs. art spectrum:
ship ------------------------> art ------------>
At the most radical "ship it" end of the spectrum, you get things like nightlies. It's difficult to ship more often; you know when you're shipping a little bit too often because your bug database tells you. There's a real limit there, where you just can't ship it any more often than you already are.
At the "art" end of the spectrum, you can spend months refining a product and never really know if you're done yet. It's completely open-ended. Your bug database will never be empty, there will always be another feature request, and you can easily end up working on version 2.0 when 1.0 never even left testing. You can go forever in the "art" direction.
I agree wholeheartedly with the author's complaint. I spend most of my time interacting with end-users of software and other engineered systems. People are really frustrated. I keep trying to communicate their frustrations to programmers, but programmers keep blowing it off: "oh, people just hate change, they'll get used to it"; "they should update more often, we fixed that bug right after the software was released".
I used to expect there'd be some kind of backlash at some point, but now I think it's worse: a lot of people gave up, they just don't expect trouble-free software anymore. It happened just the other day with a bookkeeper who visited a client's office while I was there: "this will just take a minute ... oh, Quickbooks updated ... oh, huh, it needs me to re-enter all that information I entered a while back ... I'll have to look that up ... oh well."
And it's not just the end-users. Programmers expect software to be broken too. They almost relish it, it seems. "That's just how it is, fix the bug yourself" or "it works for me" are both common responses that completely dismiss complaints from other programmers.
New terminal code could be really awesome. I would love to be able to just drag-and-drop files between terminal windows and have them automagically scp stuff between servers. I'd love to have the ability to open a remote server log file in my local text editor and get everything syntax-highlighted for me -- tailing mail.log with live syntax highlighting in Sublime? Oh yes please.
I do think some infrastructure is getting attention. Containers are (maybe) some progress, AWS has been a huge revolution for a lot of people. But yeah, there also seems to be a lot of popular technology right now that isn't really advancing the state of the art very much, while a lot of nuts-and-bolts parts of the industry are really suffering.
People use mongo and node because they want to get a site up really really fast and see if it gets traction. When they actually get somewhere they'll rewrite it in a better language.
If you want to set yourself up with really slick tooling and a great language you can code in Scala with Intellij. You can even avoid touching any ugly dynamically typed stuff by coding your JS in scala.js. If you're doing it better than the other guys then great, you can code rings around them. Maybe it's not worth all the extra complexity when just starting out though.
When you get to big corp size code bases static languages are more common. That's because to be able to navigate and maintain that big of a code base, a good IDE, a static typing compiler and refactoring tools are a huge help.
> When they actually get somewhere they'll rewrite it in a better language.
Anecdata for you: not necessarily. I'm working for a company which has "actually gotten somewhere", and our development team is still writing in Node with Mongo.
Of course, there is a reason for it, the front end development team has spare cycles, and the backend team does not. Ergo, Node!
The interesting thing is that all these big-corp code-bases started out relatively small. The origins were actually able to overcome the complexity-wall that arises when a product is successful.
>>> Don't worry, be crappy. Revolutionary means you ship and then test... Lots of things made the first Mac in 1984 a piece of crap - but it was a revolutionary piece of crap.
It's a spectrum. Trying to ship perfect software from the start will probably end up like the OS/360 project. Then Thompson and Ritchie came along and designed what we now know as Unix: an OS that wasn't complete and fully featured like OS/360 was intended to be, but it worked!
Making the wrong choices is inevitable, but correcting them is part of the software life cycle.
We aren't working with concrete and building bridges here. If we had built the Tower of Pisa like software, we could still fix it.
So yes, ship when you can as long as it works and delivers what it promises. It doesn't have to be perfect.
Delays in getting OS/360 out the door were a blessing, really - they prompted many universities to develop their own mainframe OSes, thus helping OS research flourish around the world.
The author's bit about how we forget about the problems and solutions of our forefathers - that resonates with me.
We do seem to keep re-inventing the wheel, and while part of that is out of joy of creation, I think a lot of that is that we're not writing down the problems, and we're not teaching the problems. We're only paying attention to solutions, and that's limiting.
Code is a solution to a problem, but it's not always apparent what that problem /was/ when you just look at it. So NodeJS (according to the author) is doing something that was thought of as a good idea at first, and then people learned why it wasn't, and /nobody wrote that down/. Or, at least, when they wrote it down, nobody taught it.
This is then part of why the TDD movement is in the right direction - you write down the problem you're going to solve, and later, someone can come read it. And maybe teach it.
(Not saying TDD has gotten "there", but it's in the right direction)
While I agree with the facts in the article, my own conclusions are quite the opposite (feel free to downvote / flame, but please read the whole response first).
I guess it depends on values - for me, what matters is the value added for the customer. Iterating quickly (often) yields better results in this area because it allows you to gauge customer expectations early in the development process. I have often encountered die-hard engineers who want a 100% specification upfront... In my experience, the world doesn't work this way. It would be nice if it did, but it doesn't.
And yes, I am guilty. I am using MongoDB AND JavaScript (but not Node.js - not that I have anything against it, I've just never had the need for it). I don't use these technologies because they are "cool", but because they solve specific problems in an efficient way. Which is probably why hackathon devs used them too. And yes, I would appreciate schemas in MongoDB and types in JS, but I can live without them. Life is made of tradeoffs. Does it really matter if the latest Fart App (tm) builds on a transaction-safe DB and uses a strongly typed language?
So, are we seeing a rise of "shipping culture"? Yes. Does it change how we work? Yes. Is it in some ways worse? Yes. Is it hurting us? No.
Every time I see someone quote that JavaScript was designed in ten days I cringe.
JavaScript was designed in 1995. It was standardized as ECMAScript in 1997. Ever since then, it's been under active development by a thriving community of engineers pushing for better standards.
Given that the ECMAScript community has (for good reason; "don't break the web") decided to avoid backwards-incompatible changes, that necessarily places a limit on the amount of "fixing" they can do to the language. New or enhanced features are okay, rectifying mistakes is harder.
(This isn't just a JS thing; go talk to any random Python developer and ask them what they think of Python 2 vs 3. There are tradeoffs to both approaches.)
Because there are now God knows how many millions of applications built on top of it that would break if you made backwards-incompatible changes. But I don't think you're really unaware of that.
> Every time I see someone quote that JavaScript was designed in ten days I cringe.
I cringe because it's not true; even the original Javascript wasn't really designed in ten days.
If you look at the history, Eich was playing around with designing a language for months before the order to shove Javascript into Netscape Navigator. Netscape had large software libraries for dealing with virtual machine-type problems for dealing with Java, and Eich was quite familiar with them by the time Javascript was "designed".
Quit your fibbing! I joined Netscape on April 4, 1995, to "do Scheme in the browser". Immediately I was out of luck on several fronts:
* Put on the server team with the McCool twins (NCSA httpd and then Netscape's reboot of same) and Ari Luotonen (proxy guy), working on "HTTP1.1-lol".
* Told Java was in play with Sun -- the deal was not done but it was likely to go forward in Netscape 2 -- so maybe never mind about Scheme, but:
* If there was to be a "scripting language", it had to look like Java.
Kipp Hickman (Netscape first floor, and my kernel hacking colleague from SGI) and I wrote the "Netscape Portable Runtime", NSPR 1.0, the "large software libraries" you allude to, in April and May.
Kipp used NSPR for his Java VM prototype, which Sun helped convince him to abandon (single-source implementation required or else bug-for-bug compatibility would be a nightmare).
So, much of that "dealing with virtual machine-type problems" code was from me, not from some anonymous and non-existent team at Netscape who had "months" ahead of May to develop at leisure.
When I switched to the client team in early May, @pmarca and I had been conspiring (with Bill Joy of Sun on-side; he signed the trademark license for "JavaScript" in early December, 1995). Marc made the case for "a language you put directly in the HTML" -- not something you compile into an applet. This idea got enough support for me to spend ten days, a week bracketed by weekends without much sleep, hacking the first "Mocha" runtime.
I spent the rest of the spring and summer embedding that Mocha, then LiveScript, interpreter into Netscape 2's rendering engine and network library; I had help from the front end hackers (@jwz and Spence Murray on the XFE; @knobchouck and Garrett Blythe on WinFE; Aleks Totic on MacFE), who did all the native control integration; we collaborated on the front-to-back-end API.
During these months, the only file I didn't write in the original JS implementation was mo_date.c, the Date object. Ken Smith, who joined from Borland with three others as a team, helped do that by closely porting Gosling's JDK1.0-prerelease java.util.Date code from Java to C.
Why you are making up facts now, I have no idea. If you have some overriding animus against JS or me (or both), take up a better argument than making up false history. That inevitably blows back.
If you can't believe I wrote as much code as I did, you should see what TJ Hollowaychuk has done in the last few years. But yeah, I was writing lots of C code then.
Since some of your language-design decisions are controversial in a sense (like `==`), it would be nice if you could elaborate a bit on why you made them in that time span?
"Lloyd [Tabb] and Bill [Turpin] made suggestions such as too-loose implicit conversions for the == operator, and the String.prototype.link/bold/blink/etc. HTML formatting methods. My fault for taking these, they were mistakes."
Again, my mistake. Not passing buck here. Borland had a language called Lucy (Loose-C) that was wild with implicit conversions, but counter-example -- my bad for making the changes to ur-JS.
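For anyone who hasn't bumped into them, these are the sorts of things being referred to (standard, if regrettable, JS behaviour to this day):

    // Too-loose implicit conversions around ==
    console.log('' == 0);        // true  - empty string coerces to 0
    console.log(' \t\n' == 0);   // true  - so does a whitespace-only string
    console.log([] == false);    // true  - [] -> '' -> 0, false -> 0

    // The String.prototype HTML formatting methods
    console.log('HN'.link('https://news.ycombinator.com'));
    // -> <a href="https://news.ycombinator.com">HN</a>
    console.log('loud'.bold());   // -> <b>loud</b>
    console.log('loud'.blink());  // -> <blink>loud</blink>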
An interesting read, but poor tools can't be attributed only to shipping culture. A more important reason is that the tools are good enough (only just, but that's enough), and at that point priorities change. And once the momentum becomes big enough it's hard to change direction (e.g. Javascript), and so hard to revisit the fundamentals.
Better tools are coming, such as Lighttable / Bret Victor's talk: https://vimeo.com/71278954 . But it's not clear when they'll arrive! Will we still be developing web-apps in Javascript after another decade? It feels like we can do better.
Seth Godin wrote that 5 years ago. I think (hope) the economics are shifting: in the last 5 years a lot of the omgcats apps (market opportunities that can be captured with lousy software) have already been written, and I think it may not be so easy for companies started today. However, with the pace of innovation in the last 5 years (basically the coming of age of functional programming), it's a lot easier for a single person to write high-quality software.
Anyway, common wisdom is always going to lag behind today's actual best practices.
To me, "ship it" as a philosophy means the most important thing we do as developers is deliver working code to customers.
On my team, when we encounter a concept that is just too complicated to ship right away, we try to branch-by-abstraction and keep shipping changes, even if the feature isn't "ready" yet.
To me, the marginal returns on trying to be right diminish more rapidly than the returns on making it easy to fix to stuff when I'm wrong. Maybe I'm just not as smart as everyone else here, but it seems to work.
It's more of a business necessity than a philosophical point of view.
The history of software development is littered with companies that failed because they spent so long polishing the beautiful crystal tower they were developing that someone released a concrete one instead and everyone bought that, and then no one needed a crystal one any more.
Unfortunately, it can be really hard to find out about these companies because of how hard they've been trodden into the ground by their successors, but Xerox and Netscape would be two high profile examples.
Shipping constantly may not be the best thing for technology, it may not be the best thing for quality and it may not, at times, even be the best thing for your users. But if you don't, someone else will.
The problem is that the author is assuming that 90% of the software we write to make a living is set in stone and supposed to run forever. Almost everything we do is disposable and replaced within the same decade.
So he does not like Javascript and Nosql. Is it hard to find a JavaEE job? I think not.
I think the real problem is the disposability attitude you just described. It's prevalent in our real world consumerist economy and it's killing us there too.
Building software that's to be thrown away is a waste of mental resources and physical resources. Society and civilization advances on top of our lasting creations, not the ephemeral ones. Reinventing the wheel doesn't advance the state of the art. You want to build one good set of tools that will last a long time, so you can stop thinking about them and be free to tackle the next truly new challenge. Doing anything else is just a waste of life.
Honest question: do you guys want a piece of crap from me that illustrates a new application type (the way BitTorrent or Bitcoin or Napster or Wikipedia was new)? Or should I wait?
I don't have any resources to put into this, but I can release a piece of crap myself. (I don't personally program professionally.) Honest question - discuss.
Would you like a piece of crap, or would you prefer that I wait? (Nobody more competent is going to just code this for me, at least not until the piece of crap exists and has traction.) I don't really envision other options but am open to them. What should I do? Get it right (not happening) or get it out?
I actually watched the screencast this blog post is about. [1] And I would like to say something about that here, because the questions from the screencast were also raised in the blog post.
The question is this: "Why do people not replace VT100-style terminals?" There are two reasons. tl;dr: terminals exist for reasons other than programming in them, and people actually do reinvent how you can program on a daily basis. See more in-depth arguments below.
The first thing is that people actually still need the old terminal stuff. There are loads of old computers you want to communicate with (just think of your pa trying to relive old times by getting the 80's game console to work). And there are also a lot of technologies that really need something that stupid, e.g., if you develop your own embedded system you might actually communicate with it using these VT100 commands. So yeah. Wow. A terminal (emulator) doesn't have the task of showing your text editor - you can start GVim if you just want to show your editor. Terminals have the task of communicating. They can communicate with the system you are running them on, or you can use them to communicate with another terminal. If you want to replace them, you have to replace the software in your pa's console (and probably some hardware), you have to find a new way to develop fresh, small computers, you have to find a new way for SSH to work, you have to find a new way to show your text editor. It's not impossible, but it's probably so hard that nobody would like to spend their whole life (work time, spare time, youth to death) doing it. It's not worth that much. Summary: it's not worth it, if you consider the whole picture.
And the second argument: people actually do, if you just think about use cases like coding, gaming, etc. A modern game doesn't run in a text shell as, e.g., Nethack could; it runs in a graphical shell and is presented to you in 3D, e.g., GTA 5. Also, there are many people who use IDEs. Unix+Bash+text editor was actually an IDE. Eclipse etc. are a new way of thinking about the editing task, with helpers like compilers, static analysers, debuggers, performance analysers, unit test runners, etc. There are even people who reinvent the programming wheel from another point of view; have a look at NoFlo, for example. The reason the other stuff is not dying is that it's useful for other reasons. That doesn't mean you have to still use it for programming (though some people, like me, choose to). Summary: people do work on finding modern ways to program.
I haven't tried MongoDB, but Node.js takes up a lot of resources in a browser. Node.js apps feel a bit like the widest-book strategy in the bookstore: you have to shove other stuff out of your browser in order to use stuff that uses Node.js, and I see that as a rather arrogant way to deliver software, especially when the functionality of Node.js apps doesn't really dictate the necessity.
As for Javascript, there are worse languages out there, and the debuggers are good enough now, so it can be perceived as a "normal" language.
That may be so, but when I run something with Node.js in my browser, then I may have to close other things. So, wherever it runs, it really uses too many resources, and therefore I find it a bad idea from the consumer perspective. It shouldn't be the case that somebody else dictates the content of your browser.
Node.js is something that runs on the server ONLY. It's 100% like PHP, but with JS instead. Your browser never sees nor executes Node.js code. It only operates on text that a Node.js server produces.
The short definition would be: "Node.js is something that eats up my browser's latency", something along those lines, and as far as I am concerned, that is everything I have to know about it. Node.js is a no-go for me.