Strengthening JavaScript (developers.google.com)
343 points by jashkenas on March 10, 2015 | 221 comments



This is dead on arrival:

    Some examples of strong mode changes we are aiming for are:
    accessing missing properties throws, arrays cannot have holes,
    all scoping errors are static, classes are immutable.
Specifically, "accessing missing properties throws" will undermine how developers use the language. Options objects require this as a feature. Immutable classes might also be an issue as language extensibility / monkey patching is another feature, enabling a whole swath of Aspect Oriented Programming solutions like New Relic's monitoring or the node.js async trycatch library.
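The options-object idiom, for instance, leans directly on missing-property reads returning undefined (a minimal sketch; `greet` and its option names are hypothetical):

```javascript
// Typical options object: callers pass only the keys they care about,
// and the function reads the missing ones, relying on getting undefined.
function greet(name, options) {
  options = options || {};
  // Under the proposed strong mode, both reads below would throw
  // whenever the caller didn't supply that key.
  var greeting = options.greeting || 'Hello';
  var punctuation = options.punctuation || '!';
  return greeting + ', ' + name + punctuation;
}

greet('world');                      // "Hello, world!"
greet('world', { greeting: 'Hi' });  // "Hi, world!"
```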

I do wonder though, in fighting for its life against native, whether JavaScript will need to adopt optimizing solutions like "strong mode" to strengthen where it's weak.

EDIT: It's really unfortunate that I'm getting downvoted so severely (from +3 to -3 in <10m). HN increasingly discourages discussion and debate. I don't think there's any question that my comment adds to the conversation. At most, I could have added, "In my opinion..." to the beginning of my comment.


> Specifically, "accessing missing properties throws" will undermine how developers use the language.

I couldn't agree more; there are many times I access a property that may or may not exist, and I rely on the undefined I get back when it doesn't. Requiring the property to exist is trying to turn JavaScript into a stricter language without giving it the language features that make strictness easy.


I wonder if "SoundScript" will support optional properties. TypeScript also considers accessing missing properties a compile-time error, and I haven't noticed many problems with that as you can define some properties as optional.


That would work at "compile" time, but how do we know whether or not a property is missing at runtime without accessing it? Or resorting to an expensive and minifier-hostile call such as hasOwnProperty? The TypeScript approach won't work here.

Guaranteeing that optional properties have a default value would be one solution, another would be to use ES6 default parameters instead of options objects. But there has to be some sort of value to use at runtime if missing property access always throws.


It only throws if you access the property with the [] or . syntax, so I don't think this particular thing is a big problem.


What other syntax is there to access a property?


They only barely mention it, but I think they expect you to do `foo = 'foo' in obj ? obj.foo : undefined` instead of `foo = obj.foo`


I really doubt it, as that's an unsound bypassing of the type system. Missing properties are slow in JavaScript, because the in-memory representation of the class has to be modified when new properties are added. Given that strong mode is about performance, it would make far more sense to just prohibit missing properties and encourage use of other ES6 language mechanisms such as default parameters instead.


You check if it is there before you access it with . or [].


Oh right. I love those null/undefined checking chains, it really puts the Java in JavaScript.


> Options objects require this as a feature.

    Object.assign(defaults, options)
The defaults object has all the necessary properties, no missing property access is necessary.


(P.S. For the unfamiliar, that mutates `defaults` -- you might want

  Object.assign({}, defaults, options)
instead if `defaults` is an object shared across invocations of your function.)
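Spelled out, the non-mutating form might look like this (a sketch with a hypothetical `request` and defaults):

```javascript
var defaults = { retries: 3, timeout: 1000 };

function request(options) {
  // Fresh object each call: the defaults are copied in first, then the
  // caller's options overwrite any overlapping keys. `defaults` itself
  // is left untouched for the next invocation.
  var settings = Object.assign({}, defaults, options);
  return settings;
}

var settings = request({ timeout: 500 });
// settings: { retries: 3, timeout: 500 }; defaults.timeout is still 1000
```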


I think this is part of the problem. Adopting "strong mode" will make lots of modern code obsolete. At least there are sane alternatives though, but it would also require setting empty properties on classes and undermine duck-typing. Yes, it's doable, but it fundamentally requires a non-dynamic approach to JavaScript programming, a highly dynamic language.


Object.assign (or equivalent with libraries like jQuery or Underscore or what have you) is very much modern code. Hell, Object.assign is ES6.


You misunderstand. Object.assign is as available today as is `typeof foo.bar === 'undefined'` or `_.defaults(...)`. Yet I doubt even the majority of modern code always uses them consistently because `if (foo.bar)` or `foo.bar = foo.bar || 1234` is much simpler and much faster.


> Adopting "strong mode" will make lots of modern code obsolete

The same could have been said for strict mode.


Yes, but that's exactly my point; "strict mode" eliminated a lot of the consensus "bad parts". "strong mode" discourages features in active modern use. Not only are accessing undeclared properties and mutating classes not considered "bad" by the majority of the community, but in fact they're actively relied upon as features. It's almost as if Google didn't bother convincing anyone they were bad, because they wouldn't succeed, and instead jumped straight to ruling by fiat, which is precisely why, in my opinion, this is dead on arrival.


> It's almost as if Google didn't bother convincing anyone they were bad

That's what this proposal is.

You can disagree and discuss alternatives, and that's great, because that's how standards get written.


That's a little too, um, "magical" a version of how standards get made.

In truth it's a pay-to-play sausage-making fest, with incumbents privileged against newcomers -- but at least with the Web, some 10M growing to 20M developers are the ultimate sued-to-woo power.

I'm not yet worried about Google cramming this down other browser vendors' throats. MS and Apple will only come on board when it's in a state where Mozilla is on board too, I predict. This may not happen, so don't count on anything. Do use and report usability and other bugs.


> That's what this proposal is.

Perhaps you're right, and I merely remain wholly unconvinced, though I conceded in the OP that perhaps it will be necessary. Optimizability dictating language features to this extent is the tail wagging the dog. I think this would undermine the ecosystem by creating a subset that is neither Java nor JavaScript, neither safe nor dynamic; a lowest common denominator of sorts that may do more harm than good in the competition with native.


Going slightly away from the topic, which I agree with you entirely on, btw...

This is pure speculation on my part, but I wonder if a lot of the Google devs involved in JS in one way or the other just aren't that used to writing it.

It's this quote that caught my attention

    Optimizability dictating language features to this extent is the tail wagging the dog.
Apologies to Java devs if this is uncharacteristic of the community, but a lot of Java devs I know seem to put performance and vm internals ahead of expressiveness.

AngularJS is another project that strikes me as being heavily Java inspired (although it's neither expressive nor performant), almost as if its authors are used to writing desktop applications in Java or C# and just wanted to port their experience to the browser.

It's as though the authors of Gmail or Google Docs are not the people developing these other things.


"accessing missing properties throws" will just lead to a lot of use of try/catch, and the issues that entails. It's frankly doubtful whether it will speed things up at all, given whatever penalty comes with wrapping everything in try/catch.
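A sketch of what that could look like, simulating the described strong-mode behavior with a hypothetical `strongGet` helper:

```javascript
// Stand-in for the described strong-mode rule: reading a missing
// property throws instead of quietly yielding undefined.
function strongGet(obj, key) {
  if (!(key in obj)) throw new TypeError('missing property: ' + key);
  return obj[key];
}

var config = { retries: 5 };

// Code adapting to that rule wraps every speculative read:
var timeout;
try {
  timeout = strongGet(config, 'timeout');
} catch (e) {
  timeout = 1000; // fall back to the default
}

// ...which is noisier (and hardly faster) than just checking first:
var retries = 'retries' in config ? config.retries : 3;
// timeout === 1000, retries === 5
```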


That won't be a problem in SoundScript though, because any code which tries to access a property which might be missing is unsound, so it won't "compile" (or rather, it won't pass the type checker).


In order for this to work, the type needs to be statically known, which in turn means type casts would be required when asserting a downcast.

JavaScript littered with casts is not something I want to see. I don't know about you.


or you can check first with .hasOwnProperty() or with the in operator


if the client supports .hasOwnProperty() or the 'in' operator ...

(it was only a few years ago that these weren't universally supported, and there are still clients around that don't support them)


Got any examples of these browsers? IIRC I used hasOwnProperty as far back as IE6.


You can use ES6 default parameters to implement options objects, so instead of:

    function foo(x, options) {
      var y = options.y || 0;
      return x + y;
    }
You can write:

    function foo(x, y = 0) {
      return x + y;
    }
Problem solved!


> Options objects

Options _Map_. There I fixed it.
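For what it's worth, a Map does make the presence check explicit, at the cost of some verbosity (a sketch with a hypothetical `connect`):

```javascript
function connect(options) {
  // Map#has answers "was this option passed?" without ever reading
  // a property that might not exist on the object.
  var host = options.has('host') ? options.get('host') : 'localhost';
  var port = options.has('port') ? options.get('port') : 8080;
  return host + ':' + port;
}

connect(new Map([['port', 3000]]));  // "localhost:3000"
```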


JavaScript has been in the throes of an identity crisis since its inception, and IMO we're just now starting to get to a point where developers are leveraging its power and dynamism in its own right rather than trying to cram it into a statically-typed, OOP box.

To that effect, it seems like strong mode and SoundScript are a step in the wrong direction. I noticed the very deliberate and self-aware FAQ entry "Are you turning JavaScript into Java?" along with the canned denial, but that still doesn't change the fact that strong mode and SoundScript aim to remove dynamic typing and prototypes from a dynamically-typed prototypal language.


> we're just now starting to get to a point where developers are leveraging its power and dynamism in its own right

It's damning that you say it took 20 years for us to "start" to leverage JavaScript's power. Compare to C#, which appeared 15 years ago, or Go which is from 2009, or Swift, which hasn't even celebrated its first birthday yet.

Better comparisons might be Python and Ruby, which are both fully dynamic and have cool metaprogramming facilities like JavaScript does, but they'll both give you hard errors when you type the equivalent of [].lenght. And they give you proper stack traces and error messages, too: none of "undefined is not a function" which took until 2015 to get fixed in Chrome (try it). Working with Node.js will make you fall into despair when you realize that your program crashes, and it has stack traces, but none of the stack frames are in your code because everything has been threaded through callbacks. Python's major implementation has serious problems with concurrency (re: the GIL, global interpreter lock) and it too has dynamic types everywhere but it's still far easier to work with.
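The contrast is easy to demonstrate in today's JavaScript:

```javascript
// A misspelled property is just a missing property, so the typo
// yields undefined instead of an error...
var len = [].lenght;   // undefined -- should have been .length
// ...and the mistake keeps propagating as NaN rather than failing fast.
var next = len + 1;    // NaN
// Python's equivalent raises AttributeError at the typo itself.
```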

What I want is to be able to refactor a JavaScript project without feeling like I'm crossing the river Styx. Where are the call sites for this function... who knows? TypeScript seems to at least give me some of those tools, it's a shame that the tooling isn't really ready--the compiler takes ages to run (timed it at 2.0 seconds on a small project... I might as well be using C++ at those speeds).

And if Google has been using TypeScript as a basis, as they say, then you can keep using JavaScript the way you want and you don't even have to type anything in differently. Just let us add some type checking if we want it.


Before TypeScript (fall 2012), Zef Hemel then at c9.io did static analysis for JS IDE diagnostics (mid-2012?), Dmitry Vardoulakis then grad-student-interning at Mozilla Research did DoctorJS (spring 2010, I think). And don't forget the Closure compiler from Google, but I'm not sure when that really got serious about static analysis -- perhaps after Google hired Dmitry. So no more than 15 years, not 20. ;-)

What's damning is the stagnation under the IE (convicted in the US) monopoly from late '90s through 2009 when ES5 was published. It's not as if Python 1.3 or whatever, if frozen in Netscape 2 amber, would have fared better if forked.


> It's not as if Python 1.3 or whatever, if frozen in Netscape 2 amber, would have fared better if forked.

Disagree. Python 1.3 was at least designed as a general-purpose programming language, with a variety of use cases, and took the best parts of existing languages (Tcl and Perl) rather than having novel features that sorta-but-not-quite replace more conventional ones. It had its share of warts, but nothing in Python 1.3 is as awkward as prototypal inheritance or JavaScript's scoping.

IMO the biggest tragedy is Mozilla's unwillingness to put other languages in the browser. Dart was developed in public, far more of an open standard than the original JS, fixed language design problems that ES6 still isn't addressing, had an adequate approach to running in JS-only browsers, and would have added much less complexity to browser implementation (the reason given for rejection) than many standards that Mozilla has been happy to support.


> IMO the biggest tragedy is Mozilla's unwillingness to put other languages in the browser. Dart was developed in public, far more of an open standard than the original JS, fixed language design problems that ES6 still isn't addressing, had an adequate approach to running in JS-only browsers, and would have added much less complexity to browser implementation (the reason given for rejection) than many standards that Mozilla has been happy to support.

It's also Google's failure to exploit their market position. Instead of using Dartium, they should've just integrated the Dart VM into mainline Chrome and began paying web developers to use Dart. Mozilla would then have no choice but to implement Dart or find their users complaining about not being able to access their favorite websites.

On a similar note, it's kind of a shame that IE's support for VBScript never took off. Not because VBScript was a good language, but because it would've laid down a precedent for the web being a multi-language environment. Browser vendors would have developed pluggable scripting architectures to compete. Imagine if, say, every browser on Windows could have allowed scripts written in, for example, everything offered by Windows Script Host (and I'm sure some equivalent architecture would have been developed for Mac and Linux -- maybe even something as simple as "any script that can be interpreted via a shebang line").


See my reply just above. You ignore high costs of multiple engines, both intrinsic and inter-engine -- including, notably, a memory cycle collector of some sort, with its attendant write barrier costs. See Fil Pizlo on webkit-dev the other year:

https://lists.webkit.org/pipermail/webkit-dev/2011-December/...

As for your lamenting the lack of Machiavellian market power abuse by Google, it really is not clear that even Google can force Dart down others' throats, or its own throat. Rewriting JS code in Dart is a net lose, as far as I can tell. No Google property yet uses Dart, even with dart2js (I welcome correction).

Ignoring the morality of such market-power shenanigans (hey, I was at Netscape, but I've paid my dues, and someone had to be "first" :-P), it is also not clear "Mozilla would then have no choice". Money AKA energy is not free; Mozilla can least afford follies; there is always an option to reject a neo-standard.

Microsoft could not make VBScript stick in the '90s. I say this is signal, yet again. Multiple and mandatory HTML scripting languages exact high direct and indirect costs. Lack of such an outcome is not a conspiracy or tragedy. It's economics and evolution in action.

Update: WSH, shebangs, wow. You forgot about security!


An example of a power-play and widely deployed de-facto standard that we resisted at Mozilla, wisely: ActiveX plugins. Independent companies emulated some of the ActiveX COM APIs, plus basic MS COM, on Unixen, in the '90s and into the noughties. Lack of spec and open source were not a barrier, and MS documented well.

Was MS abusing market power? The US v. Microsoft case said they were. They definitely caused plugin vendors to support ActiveX _en masse_. And they then dropped the old Netscape Plugin API, which led many such vendors to drop NPAPI too, leaving those plugins IE-only.

Yet we at Mozilla resisted ActiveX, successfully. We did not just hold the line, we restarted NPAPI incremental evolution via the plugin-futures@mozilla.org list I created, and meetings jst@mozilla.org and I convened with Apple, Opera, Real Networks, Dolby, and Macromedia.

Even a convicted monopolist couldn't ram ActiveX down other browser vendors' throats, when it could and did do so to plugin vendors and some web developers. Google with Chrome is not yet near MS IE's monopoly share. So again, I think you should be more skeptical of your own assertions and assumptions about "no choice".


"An example of a power-play and widely deployed de-facto standard that we resisted at Mozilla, wisely: ActiveX plugins. Independent companies emulated some of the ActiveX COM APIs, plus basic MS COM, on Unixen, in the '90s"

Thanks for doing this Brendan.

I was at a young startup at the time and an MS dev team gave us access to the IE Trident preview with ActiveX [0] and you could tell it was a pile of horse, without even implementing it.

   '*you what?* create a plug-in, only works on IE, 
    then everyone has to download it?'
Totally against the grain of how the web worked and still works.

[0] IE4 layout engine: https://duckduckgo.com/l/?kh=-1&uddg=https%3A%2F%2Fen.wikipe...


> it really is not clear that even Google can force Dart down others' throats

It's peculiar that Google claims that it is utterly impotent to persuade developers to use Pointer Events, and so it is going to give up even trying, and yet it continues to develop Dart and make vague noises about integrating it into the browser.


Google's own developers use Dart for Google products, no? I think that pressure is grassroots even if it is internal. And even if Dart remains permanently a compile-to-js language, as tragic as that would be, it's still an incremental improvement over GWT.


First, you missed my point, which was clear enough from the context you cut: stagnation would have been a problem for any hypothetical Python circa version 1.3, wedged into Netscape Navigator (other problems listed below). It would have been flash-frozen in 1995, not just due to standardization or Netscape losing the first browser war, simply due to the Web's scale.

Python (like Lua, Perl, etc.) had to evolve backward-incompatibly, and thankfully could do so as a Unix-y and server-side command line. As a browser-embedded scripting engine, don't-break-the-web selection pressure would have forked and stunted early Python or any other language about as badly as JS, ignoring the spurious (to what I wrote) "Blub was a better language to start from" benefit. That's the main point.

Python was never an option, anyway, as I've remarked in many places. Netscape management insisted on "make it look like Java", and time pressure combined with "don't be too big a sidekick" angst from Sun kept such things as classes out of JS in the early days (it took till ES6 to add classes). Python had OS-specific APIs and an unsafe FFI to boot, so would have been forked by stripping out many features, too. Fuzz testing would have exacted further costs, but I'll stop here. Python was never an option in the '90s, for many reasons.

BTW, I did help manage and fund Mark Hammond (then at ActiveState), when I was leading Mozilla in the early noughties, to add CPython as an alternative scripting language engine to Gecko. That code was for ActiveState's Komodo IDE, which used Python in XUL script elements. But the DOM complexity tax was bad enough that Gecko maintainers ripped it all out just a couple of years ago. See

https://bugzilla.mozilla.org/buglist.cgi?order=Importance&re...

So I know a lot more about what you're blithely talking about, from actual experience, than you do.

Finally, your hyperbolic language ("tragedy") is off target. Mozilla is way underfunded compared to all of the other browser vendors. It could and still can least afford to jam in other languages, plus pay the requisite inter-heap cycle collector and write-barrier overhead costs you neglect to mention. Falling on this sword would have done nothing -- did nothing even with Mark Hammond's fine work -- to displace JS or get other browsers on board. It would have helped Mozilla fail, though.

You'd do better to cry that Google did not fund, out of its massive profit margins and market cap, engineers to add such things to other open source engines -- as Microsoft has done with Pointer Events for Gecko and WebKit.

Please do name a standard that Mozilla supports which is both (a) more complex than multiple language runtimes embedded and integrated with a low-overhead-enough inter-heap cycle collector; and (b) not already obligatory for competitive browsers to support (as Dart certainly is not!). Take your time; list your references.

Apple and Microsoft won't fund such follies, and even Google has yet to ship Dart on top of Oilpan in Chrome (https://www.chromestatus.com/features/6682831673622528).

This is not a case of me as past Mozilla leader ducking "responsibility" to fund your pet follies based on breathtakingly incomplete HN-comment assertions about low complexity from you. I don't owe you or anyone that big a free lunch. (How about you write the patch and submit it, with tests, and then talk to Mozilla or any other open source browser project.)

Rather, it's a set of three confirming signals showing that while talk is cheap, multiple independent language engines in one browser certainly are not. Only Google has the funds and engineers to burn on such speculative work. And it's an open question whether Chrome has the market power to make any such extension "stick" as a de-facto standard.


> As a browser-embedded scripting engine, don't-break-the-web selection pressure would have forked and stunted early Python or any other language about as badly as JS, ignoring the spurious (to what I wrote) "Blub was a better language to start from" benefit.

All (successful) languages ossify sooner or later - just look at how long promised Java improvements are taking to arrive. Python 3.4 is recognizably the same language as Python 1.3, and still suited for much the same problems; it's never going to be Perl or OCaml, even with that ability to make backwards-incompatible changes that JS doesn't have. Where you start from matters.

> Netscape management insisted on "make it look like Java", and time pressure combined with "don't be too big a sidekick" angst from Sun kept such things as classes out of JS in the early days (it took till ES6 to add classes).

Exactly! A language that was designed to not be a replacement for Java for serious, large-scale programming turns out to... not be a replacement for Java for serious, large-scale programming, surprise. No-one is denying (I hope) that JS is a very good language within the constraints it was designed for. But today's use cases are very different from making the monkey dance; the things we're writing in JS today are much closer to the things we were writing in Java 10 years ago than they are to the things we were writing in JS 10 years ago.

JS's unsuitability for certain problems is a result of design decisions, not any "stagnation"; no amount of active development (as welcome as it is) is going to make language X into language Y, and no language can be all things to all people. With the browser becoming more and more of an OS, requiring all web programs to be written in the same language is as wrong as requiring all non-web programs to be written in the same language.

> How about you write the patch and submit it, with tests, and then talk to Mozilla or any other open source browser project.

Wait, Mozilla would accept my patches? Of course Mozilla should allocate their own engineering effort as they see fit (we may disagree about specifics). But my understanding, and the part I took issue with, was that Mozilla was refusing to ever embed Dart as a matter of policy.


You're still changing the subject, and now moving the goal posts. I didn't have Python 3.4, or 2.5, to start from in 1995. I had 1.3, and it wasn't possible for other reasons.

Are you really arguing that, in the far-away-in-the-multiverse world where it didn't have to look like Java, I had time to make a safe, FFI-free subset, etc. etc., Python 1.3 in Netscape 2 would have evolved any faster than JS did?

ES1: 1997

ES2, just the ISO version of ES1, pro-forma changes: 1998

ES3: 1999

ES4: mothballed in 2008

ES5: 2009

We had to restart browser competition with Firefox even to get a crack at updating ES3. Competition drives standardization; the IE monopoly stagnation mattered.

Most of the rest of your post is flogging "Blub is better". No one agrees on Blub, and there's no way to replace JS with it. Meanwhile, JS is evolving, not just ES6 but ES7/2016 and beyond.

Betting against evolution on the Web is risky, in my view.

I don't think you're being serious about the work to develop a multi-language-runtime embedding framework. It would not be one patch. You'd have to solve the write barrier cost implied by cycle collection among heaps (see Filip Pizlo of Apple on webkit-dev, I cited this already):

https://lists.webkit.org/pipermail/webkit-dev/2011-December/...

I never refused "as a matter of policy" to embed Dart. Excuses for not working on the real and complex problems with multi-language-runtime VMs are one thing -- making stuff up is another.


> Are you really arguing that, in the far-away-in-the-multiverse world where it didn't have to look like Java, I had time to make a safe, FFI-free subset, etc. etc., Python 1.3 in Netscape 2 would have evolved any faster than JS did?

No. I'm arguing that assuming it evolved at the same rate as JS, "ECMA-safe-FFI-free-Python-1.3 5" would be a substantially better language for writing today's web applications in than ES5 is. I repeat: stagnation isn't the problem with JS, fundamental design (a product of the constraints you were working under at the time) is.

> Most of the rest of your post is flogging "Blub is better". No one agrees on Blub, and there's no way to replace JS with it.

Status quo and sunk costs. No-one can agree exactly which of A, B or C would be best, so we won't do any of them, even though any of A, B or C would give us something better than what we have now. Seriously, every serious proposal I've heard - Python, Dart, LLVM bytecode - is one that I'd take over JS.

> Betting against evolution on the Web is risky, in my view.

XHTML2 was evolution. <audio> and <video> weren't (an evolutionary approach would have been to specialize or adapt <object>). Web history has its share of successful radical new solutions and failed incremental ones as well as vice versa.

> I never refused "as a matter of policy" to embed Dart.

That's what matters here. Seriously, I'm really glad to hear that; it's not what I'd understood from the public statements and reporting.


Holding to as brief a reply as possible given short time to write it (apologies to Blaise Pascal):

* Python 1.3, stripped of most of its C extensions, may have been substantially better than JS1, I grant -- that was your digression and straw-man, not the point of the text to which you replied.

I do dispute that a forked-and-stripped Python 1.3, slowly evolved thereafter, would be materially better than what JS became. Just consider whitespace-based indentation vs. minification, for one issue.

* I'm very well acquainted with the sunk-cost fallacy. But that is not why JS endures.

JS endures because given the scale of the Web and the consequent backward-compatibility imperative for browsers, the switching cost is extremely high for web developers and browser hackers. The cost of adding a 2nd runtime faced by browser hackers is also high (but less high). Most important, the conserved evolutionary-kernel of JS is "good enough" (worse is better, look it up -- sorry about the warts but not sorry, we have to deal with the world as it is).

Therefore no browser (not even Chrome yet) has both the means and the motivation to invest lots of megabucks in engineering high-performance, multiple language runtime tenants sharing the DOM and the rest of the browser APIs.

Compile to JS sucks away the oxygen from any such boondoggle. There are multiple Python-to-JS compilers out there. With modern JS (both language and VM-level optimizations), you can get higher fidelity Python this way than you would have by forking and freezing Python 1.3.

* XHTML2 was revolution, not evolution. It proposed to replace HTML on the web, not even interoperating well with HTML via the DOM, before we founded the WHATWG and did HTML5. HTML was originally an SGML fork that, as it evolved, bent or violated lots of SGML rules in favor of flat vocabulary, ad-hoc error correction, implicit content models (I perpetrated implicit CDATA for <script>), and worse-is-better.

XHTML would yellow-screen-of-death if it didn't validate, and because IE loading the most popular MIME type would error-correct as HTML, even hard-core XML purists would commit errors to published content, unaware. This plus the better-is-better overhead and complexity of authoring XHTML (XML namespaces in particular) led to ~0 adoption.

* On me vs. Dart, as on your assertion that multiple runtime integration is much less complex than unstated Mozilla work, you have yet to cite sources. I'm bored by such cheap shots, so will wrap up here with a few thoughts about Dart and its opportunity costs. Feel free to have last words.

If you care to read my past HN comments, I railed against the Google "Dash" memo for being two-faced about "we <3 JS; we must REPLACE JS", about the latent and very large conflict of interest for Google as self-congratulatory standards bearer: investing in Dart at the expense of JS.

Indeed Google has probably sunk tens of millions into Dart with very little to show for it. Elsewhere here, you asked or assumed that Google uses Dart in its web properties. Does it? So far, I hear it does not, and I've asked Googlers. You should not assume.

UPDATE via twitter, thanks to @rightisleft -- some Google services use Dart now:

https://www.dartlang.org/community/who-uses-dart.html

This means dart2js, which means Google should've already helped get bignums into ES6 :-|. END UPDATE LOL

Dart cost Google some V8 competitiveness, vs. other engines. The whole Aarhus team (except for moonlighters) switched to Dart; a new V8 team had to be recruited in Munich. You could say it's Google's choice on how to invest, and of course I agree, just considering business ethics.

But not for "Google the good" as standards bearer. There, the effect on JS standardization and real progress vs. the feared/hated mobile native stacks' languages (C++, ObjC on iOS) -- by such evolutions as asm.js/WebGL for games, and something like SoundScript for larger hand-coded apps -- was significantly delayed.

Big companies can work for and against the grain of the web. Elsewhere on this thread, a commenter replied to me about how ActiveX went against the grain (very true). Stuff like NaCl, Pepper, and Dart run the same wrong way, at first and even in the long run -- whether open source (open-washing a project dominated and controlled by paid contributors is a hazard, even at Mozilla) or even de-jure standardized (Ecma is pay-to-play). The only way they'd get into other browsers is via market power of the 95% monopoly share kind that IE enjoyed around 2002.

Anyway, that's why I railed against the "Dash" memo. The delay in reconstituting the V8 team, and in coming up with something like SoundScript instead of Dart, is relevant to this current thread. If the topic of what's both practical to evolve from JS, and better for developers over against compile-to-JS approaches, is worth arguing about, then I'll argue that Google let the world down by doing Dart.

Evolution delayed, but not denied, still hurts. I'm glad to see the new V8 team proposing experiments like this one.


I occasionally read the es-discuss list, and from those discussions I get a sense that it is incredibly difficult to evolve JS given the legacy compatibility constraints.

In my view, having the Dart team trial new features in a new codebase without the legacy constraints allows them to move quickly and get feedback about new features by shipping real code in real products. The results of this process can feed good input back into the ECMAScript process. Perhaps this has already happened; good examples would be SIMD and the scoping rules around for-of loops.

One of the most important improvements I see in Dart is the focus on reducing start-up time, i.e. snapshots and lazy evaluation of top-level variable initialisers. They are currently claiming a 10x faster start-up time when snapshots are used. This would be a huge benefit for mobile apps.

Is anyone investigating how you could change or subset JavaScript to allow for snapshotting? Snapshotting seems really beneficial for the web, especially for the mobile web vs. native. Currently I only see the Dart team working on this.


Dart may prototype ideas that land in JS, but then why not help with bignums sooner (see http://code.google.com/p/dart/issues/detail?id=1533)? Time's a-wasting, Dart started at least in 2010 (first I heard of it was that spring).

JS compat constraints would affect anything like it at scale, see also HTML5 and version-as-anti-pattern-on-Web.

Snapshotting would be relatively easy to do with ES6's module system. People would have to refrain from abusing the global object, or snapshotting might fail.

Honestly, ES6 modules plus the asm.js work to avoid loading too much cold code is likely to bear more fruit than a new VM's bespoke snapshotting tech. Again, Dart ain't gonna replace JS in the foreseeable future. We need progress on the Web we have, not a dream-Web we can't have.


> why not help with bignums sooner

It's great that Apple and MS are on board with ES6. But even if the V8 team had implemented bignums behind a flag in 2010, do you really think that the other vendors would have also implemented it already? I find this scenario unlikely - but you obviously have a better view of this than I do.

> Snapshotting would be relatively easy to do with ES6's module system. People would have to refrain from abusing the global object, or snapshotting might fail.

I guess you'd need to add some extra restrictions, such as no top-level code outside of functions (including variable initialisers), and no modification of prototype chains. This seems hard to do with JS as it is written today. Perhaps the "use strong" experiment will make this easier.
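A minimal sketch of the shape such a restriction would force on code, assuming a snapshot-friendly subset: only declarations at the top level, with all side effects deferred into functions (`computeConfig` here is a hypothetical stand-in for expensive initialisation work such as I/O or parsing):

```javascript
// Hypothetical "expensive" initialiser standing in for I/O or parsing work.
function computeConfig() {
  return { retries: 3, timeoutMs: 500 };
}

// Snapshot-unfriendly shape: the initialiser would run at module load time,
// interleaving execution with the building of program structure:
//   const config = computeConfig();

// Snapshot-friendly shape: the top level holds only declarations; the
// initialiser runs lazily, on first use, after a snapshot is restored.
let cachedConfig;
function getConfig() {
  if (cachedConfig === undefined) {
    cachedConfig = computeConfig();
  }
  return cachedConfig;
}

console.log(getConfig().retries); // 3
```

The lazy-getter shape is essentially what Dart's lazy top-level initialisers give you by default.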

> ES6 modules plus the asm.js work to avoid loading too much cold code is likely to bear more fruit than a new VM's bespoke snapshotting tech

This is your intuition and opinion. Some people disagree, and have chosen to put their effort into another approach. Maybe you're right. Maybe they're right. Personally I think it's better to try both approaches, and let benchmarks settle this. From the benchmarks I've seen, snapshots look very promising.

> Dart ain't gonna replace JS in the foreseeable future

Flamebait! It doesn't need to. It just needs to provide good development tooling for some developers who prefer using it to vanilla JavaScript.


Bignums have been on the Harmony agenda for years, going back to:

http://wiki.ecmascript.org/doku.php?id=strawman:bignums

from 2010 (before the Dash memo leaked, before dherman or anyone on TC39 not from Google, and probably any of the folks from Google for that matter, knew that Dart had `int` as bignum).

With V8 jumping in, you bet that they'd have been implemented in at least V8 and probably SpiderMonkey -- just as many ES6 features proposed since then have been:

http://kangax.github.io/compat-table/es6/

Did you know about these precedents? Serious question. I'm not citing total what-ifs here: we did have bignums on the agenda, but we lacked a V8-level champion; we did -- we the various JS engine implementors -- implement a lot in the last few years.

No intuition is required to say that VM isolate snapshotting (if that's truly needed to deal with massive gmail-level cold-code on first load; I see other solutions) could be added to JS if it could be done for Dart. Lars Bak tried asserting no-way-in-JS because of the notorious JS global object. I say: never say never, opt-in is the way the web evolves. The global object is being ring-fenced by ES6 modules and new binding forms.

This is not a difference of opinion, it's a matter of choice and funding.

Also not flamebait: my point about JS not going away. You switched from "Dart does good for the Web via (eventual) JS uplift" (paraphrasing), to "Dart just needs to provide good development tooling", seemingly in response to my pointing out that there hasn't been much uplift over five years.

What's with the goalpost moving, and in a weak direction to boot? You were very much on target citing SIMD flowing from Dart to JS, thanks to John McCutchan, to whom I tweeted the idea when I saw his tweet about Dart SIMD:

https://twitter.com/BrendanEich/status/308343704194781186

John replied that he was going to look at JS SIMD next.

So thanks to me too -- this Dart => JS transfer was not as far as I can tell on the Dart leaders' agenda, it was bottom up work by John. After our twitter exchange, Mozilla and Intel joined in.

The `for (let...)` binding per iteration semantics in ES6 did not come directly from Dart. We'd been working on it for a while in TC39. Here's a note from me about it (find "binding per iteration") from 2009:

http://wiki.ecmascript.org/doku.php?id=harmony:let

It helped to have Dart do it, in the "moral support" and "evidence it is worth the spec/impl complexity" senses. But there was no tech transfer or direct causation.

Don't get me wrong, Google cannot spend tens of millions on top talent and make something of zero value. My point about opportunity costs stands. We could have come a lot farther, faster. This is true now as it was in the IE monopoly era (no, I'm not saying things are as bad, just that the hazard of doing WPF or Dart instead of working more directly on the Web stack remains).

Complaining about JS as a backward-compatibility-constrained language, to support the idea that JS uplift needs expensive, separate "REPLACE JS" alterna-language/runtime projects such as Dart, which inevitably -- because you can't replace JS, flamebait trigger warning LOL -- backpedal to compile-to-JS tooling and tardy tech transfer into ECMA-262, doesn't justify the opportunity cost.

There are better ways to improve the system we're stuck with, sooner. I'm not religious about this, nor are the V8 folks now working in earnest on SoundScript. You're seeing more of what I'm advocating, finally. It wasn't in evidence for too many years.


> Complaining about JS as a backward-compatibility-constrained language, to support the idea that JS uplift needs expensive, separate "REPLACE JS" alterna-language/runtime projects such as Dart, which inevitably -- because you can't replace JS, flamebait trigger warning LOL -- backpedal to compile-to-JS tooling and tardy tech transfer into ECMA-262, doesn't justify the opportunity cost.

OK, let's say JS continues to evolve over the next 5-10 years, and by this time all browsers support integers, optional types, operator overloading, less implicit conversion, and mixins. By this point compiling Dart to JS is trivial, and the cross-compiled code is also completely readable. So I can take my existing Dart project and compile it to ES11, clean up a few things by hand, and then continue developing in ES11. In the meantime I've had 5-10 years being very productive developing in Dart with good tool support. This seems like a win to me.

Having a VM is also incredibly beneficial for development and debugging. These points alone justify investment in Dart.

But there are more benefits. Providing a testing ground for VM engineers to try out new features without legacy compatibility constraints. They haven't really got started on performance yet, and they're already solidly ahead of V8. This can provide insights for TC39, to see what kind of performance/memory improvements are possible if changes are made to the language.

> Bignums

I'm aware of the bignums strawman, and I'm aware the page on the wiki dates to 2010. But I repeat: even if the V8 team had implemented bignums behind a flag in 2010, do you really think that the other vendors (including MS and Apple) would have also implemented them already? I find this scenario unlikely. (But kudos to MS/A for implementing the new ES6 features. I'm glad to see JS evolving)

> [Snapshotting] could be added to JS if it could be done for Dart.

My understanding was that the V8 team actually implemented snapshotting and initially used it for loading their JS core library. They didn't think it was possible to use it for loading cached web code. Not because of the global object, but because JavaScript code must be executed to build up classes, i.e. setting prototypes etc. Because execution and structure are interleaved, top-level code and initialisers could perform side effects before the static program structure exists.

> there hasn't been much uplift over five years.

It looks like Dart is a longer term bet. It will be interesting to see what kind of an influence it has over the next decade or so. Especially given the recent talk around adding optional typing to JS. The Dart team are also currently experimenting with co-operative threading, growable stacks and concurrency primitives. https://github.com/dart-lang/fletch/wiki/Coroutines-and-Thre...

> My point about opportunity costs stands. We could have come a lot farther, faster.

I'm not convinced. I don't see how development resourcing on the V8 team could have sped up the ES6 standards process. The ES6 process includes getting consensus with Apple/Microsoft -- this process isn't time-constrained by V8 development resources.

Consider the opportunity cost if they chose not to develop Dart. Developers have to wait a lot longer to get modern tooling, with: completions, doc hovers, code navigation and static analysis. These really make a huge difference to developer productivity. All of the experience gained here can feed back into javascript tooling (assuming optional typing makes it into ES).

Also consider the individual developer's motivations. It's great that you are excited about working on JS and SpiderMonkey year after year. But what were the motivations of the developers at Google? If the developers are passionate about the web like yourself, then they can continue to work with the V8 team. But if what excites them is working on a state-of-the-art dynamically typed VM, well then removing some of the legacy JS constraints allows them freedom to innovate. If their only choice had been to work on V8, they could well have left the company and worked on another VM project somewhere else. So what would the cost have been if Google management said no, you can't do Dart?

> You switched from "Dart does good for the Web via (eventual) JS uplift" (paraphrasing), to "Dart just needs to provide good development tooling" ... What's with the goalpost moving ... ?

The shifted goal post was: "Dart ain't gonna replace JS in the foreseeable future". I never claimed that Dart will, or needs to, replace JavaScript.


This is going in circles.

I cite ES6 features now implemented by all the majors, and point to bignums as a proposal pre-dating Dart and pre-dating other ES6 proposals that got implemented. You assert against all evidence that, unlike all the other ES6 stuff implemented, a bignums proposal that was accepted for ES6 and prototyped in V8 would have been rejected by Apple and Microsoft. Makes total sense! :-|

Your goal-post shifting had the effect of letting Google off the hook for lack of uplift other than SIMD, so far. Great development tools and beneficial debugging! Lousy argument and defense against the lost opportunity for not only JS but dart2js, frankly.

A VM is an incredibly expensive investment, but Lars et al. were tired of JS and wanted what they wanted when they wanted it. (So does my three-year old.) Whether it makes economic sense in 40 years (the time-frame I've heard for when Dart finally takes over)? Who can say. Unfalsifiable.

That Dart has cost tens of millions of direct NRE at Google, and years of lost opportunities with V8 and TC39, is in the Basil-Fawlty-ian category of the bleeding obvious. There's no way to waffle around this point. You have to hold your breath for 40 years and hope.


> The shifted goal post was: "Dart ain't gonna replace JS in the foreseeable future". I never claimed that Dart will, or needs to, replace JavaScript.

And I never said you did, so stop echoing my phrases. You called my "JS ain't gonna be replaced" flamebait, not goalpost moving. (I cited your obvious goalpost shifting, from "Dart helps uplift JS" to "at least we get great tools"; separate point!)

JS replacement wasn't ever a goal. I noted it can't be done, so it's not a goal. Nothing shifted. This fact is a painful truth that I cited as the minor premise of a syllogism:

Major: Web needs better programming language support than current JS, and soon. (The Dash memo was not wrong on this, although it overstated how mobile native stacks have better languages when the bigger issue is app/device/system APIs.)

Minor: JS cannot be replaced. (Anyone want to argue this?)

Therefore: JS needs to be improved soon. Easy to agree but harder to do, yet it is being done, e.g., ES6, SoundScript, plus over-the-top but informative tooling such as Flow and TypeScript.

We were arguing about the best way to improve JS. Since it can't be replaced, an entirely new second language and VM, especially without equivalent compile-to-JS semantics via JS uplift (bignums, more), has drawbacks:

* Super-expensive. (Other vendors won't do it without third bullet: market power abuse.)

* Leaves JS uplift to "later". (We've already seen this happen with bignums, but not with SIMD due to John McCutchan going extra miles.)

* Tempts market power abuse, a la ActiveX. (Not yet a real problem for the Web.)

In no way is there a goalpost to move about JS replacement. It's not in the cards. The only issue is how big a chunk of work and "time out" from improving JS should one take to test and bake new ideas.

To return to this HN thread, SoundScript is a quite-big chunk of work and time-out (much of this year, for some of the V8 team), yet it still looks aligned with the grain of the Web.

Dart without more JS uplift than the fine SIMD work is too big a chunk of work to produce much fruit for the Web soon.

Clear enough?


I think your argument is clear - but I think you're also missing a point about another approach to evolving JS, and how development stacks such as Dart can fit into this.

Bignums - ok - you claim that bignums could have made it into ES6 if google had provided more support. I'll take your word for it, but you can probably understand my scepticism.

I'm not religious about language either. I don't care much about syntax. What I really care about is being productive.

IMHO the main things that help my productivity are:

* optional types

* completion

* doc popups

* development-time runtime type checking

* good debugger

* Few implicit conversions

* Consistent core library

Now you argue that google would have been better off building these language features and tooling on top of ECMAScript instead of creating the Dart stack. What would this alternative scenario actually look like - especially from the point of view of a developer?

* It could be a mode in V8, or a fork of V8 (I'll call it "V8FutureMode"). V8FutureMode would be incompatible with other vendors JS VMs. Just as Dart is.

* Each time ES standardises a new feature already implemented in V8FutureMode, all your existing code would need to be updated to match the ES spec. This regular churn is painful for developers.

Now let's consider another scenario where google develops Dart, and ES20 arrives in 2020 with optional typing.

* 2013-2020. Stable language no breaking changes.

* In 2020 I can port my code over to ES20 (trivial as near 1:1 semantics, 99% automated).

This scenario is better for me as a developer. It also insulates me from the risk that optional types don't make it into ES until 2030. I also think it gives a lot of real world usage data helpful for getting optional types into ES. (You seem to disagree here.)

From the Dart team's perspective the Dart scenario is better too:

* Can build a core library and tools on a stable language, rather than having to regularly deal with breaking changes.

* Able to experiment with VM optimisations which are difficult to do in JS due to legacy compat.

* Personal motivation - this is what the developers really want to work on. (Being told by your boss to drop your experimental project, and continue doing the same old thing isn't always a good management approach)

I'm optimistic that over time experiments such as "use strong" will mean that the same optimisations implemented in the Dart VM will be able to be used in other JS VMs. Perhaps if a developer guarantees that all JS code is "use strong ES20", then it would be possible for a browser to even use a different VM behind the scenes (in Chrome perhaps even using the original Dart VM codebase, but with the language front-end changed to be compatible with ES20).

The future is uncertain and difficult to predict. There are multiple ways to evolve JS - you seem quite certain that incrementally fixing JS is the only approach. I think building a new programming stack, and simultaneously evolving JS in that direction with insights learnt from the new stack, is also a valid approach. I don't understand why you have been so hostile towards this.


tl;dr -- you took too many words spelling out what I already said: that Dart diverged too much and can only hope to rendezvous with ES2030.

First, it can't without careful separation of its incompatible breaks with JS, from whatever optional type system is standardizable in future JS. The two are intertwined in Dart right now. JS is not going to break compat or add opt-in version selection to runtime-incompatible modes. That died with ES4, buried by 1JS.

Second, 2030 or even 2020 is way too far out, given the interest in TC39 right now from Google, Facebook, Microsoft, and other members. And because it took the long-odds bet, Dart is now clearly less likely than TypeScript => StrongScript to have any say on future JS optional types (AKA the revenge of ES4). You write

"I also think it gives a lot of real world usage data helpful for getting optional types into ES. (You seem to disagree here.)"

I'm not disagreeing just as a speculative opinion about 2020. I'm telling you that Dart has missed the optional types boat now, in TC39 -- the next meeting in less than two weeks. Dart and any real-world usage data from it are nowhere, while StrongScript is "on" and should generate V8-based data this year.

On the laundry list of "optional types / completion / doc popups...": most of this has been done without Dart in full or even Dart's optional types, including in the Closure compiler. (See DoctorJS for early higher-order control flow analysis that could do completion without making programmers declare types all over.)

Even the claim that a new VM is needed to do the complete laundry list is suspect at this point. Yeah, it's nice to have a clean slate, or one starting from StrongTalk's slate, as Dart did. But if only Google can afford it, it diverges incompatibly up front, and it can only partially re-converge far in the future, then it is very likely a "miss".

Life is not just a series of random Homer Simpson events, with everything about possible futures a matter of opinion. People bet on futures profitably, beating fair coin tosses. Alan Kay was right, the best way to predict is to invent.

So the smart money is not on Dart designing future optional types for JS. This could change but I doubt it, since Dart broke too much compatibility, unnecessarily for the purposes of making types optional. No one is disentangling and minimizing a proposal for ES7 or ES8 (2016 or 2017), while StrongScript is in pole position. This is much more like how standards evolution should work than the silly Dash memo's model.


If you're right, and can land optional types and bignums soon, then Dart's semantics can become a subset of ECMAScript. At this point it's pretty easy to replace the Dart language in the VM to support the ECMAScript syntax subset, and source-code translate the core libraries over to ECMAScript. Then you have a fast VM which supports an easily optimisable subset of ECMAScript, and a large class library. This still seems like a good outcome to me.

"any real-world usage data from it are nowhere"

Ignoring available data from existing optionally typed VMs during the standardisation process doesn't seem like a smart idea.

Anyway, I hope there is good discussion at TC39 about Nominal typing with implicit interfaces, vs Structural typing. I'm not really a fan of structural typing - I think the Dart team made the right call here.

> Alan Kay was right, the best way to predict is to invent.

You mean to invent something compatible with legacy technology and easily adopted ;)


> If you're right, and can land optional types and bignums soon,

First, "soon" is incremental and staged, so it's not as though full optional types won't take till 2020 to be "landed" as in standardized and widely implemented.

Second, the only way to get to that 2020 is by working in TC39 now, 2015, as SoundScript is. Not pulling a Dart in 2010, doing a different language and VM, making a disjoint Ecma standard, and then hoping to pull off a last minute rendezvous.

> then Dart's semantics can become a subset of ECMAScript.

They cannot, without incompatible changes. This doesn't fit your "big investment now for great tooling, easy migration later" thinking for Dart, and it's right out for JS.

Problems go beyond bignums, but bignums alone are enough. Dart can start a counter at 0 and ++ it past 2^53 without loss of precision. JS can't (you'd have to use bignum(0) or 0n). Something's got to give, and it looks likely to be infeasible to statically analyze and auto-fix all the code.

> At this point it's pretty easy to replace the Dart language in the VM to support the ECMAScript syntax subset, and source code translate the core libraries over to ECMAScript. Then you have a fast VM which supports an easily optimisable subset of ECMAscript, and a large class library. This still seems like a good outcome to me.

Except that this won't happen in 2020 thanks to Dart, because it forked hard early. Instead we will get to 2020 (maybe 2018) with SoundScript standardized, still incompatible with Dart, and no good outcome for Dart users who can't get dart2js to perform as well as DartVM. But in this scenario there's likely little market pressure on browsers to do the crazy amount of work to support multiple VMs. Good outcome?

Worse, this is a narrow, self-interested point of view from Dart side. What about the JS side? Why should we have waited five years at least to get to SoundScript? How much sooner from 2010 or even 2015 could we get to the promised land if Dart weren't flying off toward Pluto?

> "any real-world usage data from it are nowhere"

> Ignoring available data from existing optionally typed VMs during the standardisation process doesn't seem like a smart idea.

Cut the bad-faith arguing, please. It's not up to me to find Dart's "data" (where is it available, pray tell?) and try to separate all the confounding variables arising from Dart being a different VM and language.

Sound/StrongScript in V8, OTOH, will give unconfounded data this year, in the context of JS and TC39.

Who are you gonna bet on? Anyway, leave me out of it, try begging some Dart staffer for "data".

> Anyway, I hope there is good discussion at TC39 about Nominal typing with implicit interfaces, vs Structural typing. I'm not really a fan of structural typing - I think the Dart team made the right call here.

Did you read the StrongScript slides? Try http://www.mpi-sws.org/~rossberg/papers/JSExperimentalDirect... or better location.

> > Alan Kay was right, the best way to predict is to invent.

> You mean to invent something compatible with legacy technology and easily adopted ;)

That's right, because of noted non-flamebait minor premise: JS cannot be replaced. Are you going to argue against this forthrightly? I thought not!


Ah. Interfaces are structural. Didn't notice that first time through. Thanks for the pointer.

JS cannot be replaced in the short term. However you can implement a VM (or mode) which supports only a subset of JS with some of the legacy cruft removed. Strongscript is a step in this direction. (Side note - deprecate a few more features and you can start reusing some of the optimisations in the Dart VM.)

It is also possible to treat JS source as an AST serialization format and to display code differently when viewed by the developer. When language semantics are close to 1:1 this can work seamlessly (unlike the big unintelligible balls of source required to work around ES5 semantic gaps).

And who knows, give it 10 years, and perhaps TC39 members will agree on a new language syntax that fits the existing "use strong" semantics. Note there is no need for expensive new VMs in this scenario; it is just a new parser generating AST nodes in the existing VM. This seems pretty obvious. I mean, it's a bit of a cruel joke to make developers use === for the next few decades. Surely this can be fixed some day.

Forgive my naivety about TC39 and standards, but I would have thought when standardising a new language feature that it makes sense to review similar features in other languages, and to perhaps reach out to implementers for a presentation, or at least some email discussion. It seems prudent to learn from others' mistakes.

Btw, it is trivial to statically analyse Dart code and, even ignoring type annotations, decide whether a number literal is an integer or a double. And yes, such automatic translation wouldn't be possible until changes like StrongScript and bignums have landed in ES. But perhaps I misunderstand your point.


Nope, your "Ignoring available data" weasel words, implying negligence by me or TC39 for not carrying Dart's water, were below-the-belt.

No one is "ignoring available data". Dart's type system, in particular its covariant subtyping unsoundness, outraged Andreas Rossberg of the new V8 team. He expressed himself plainly about this on LtU the other year:

http://lambda-the-ultimate.org/node/4377#comment-67586

Andreas and many others among us TC39ers are well acquainted with Dart. I think this intra-Google, anti-Dart outrage motivated SoundScript, in part.

That doesn't mean the "data" is out there on some Dart public wiki to inform SoundScript's design in ways that programming language research and development do not already sufficiently inform it -- that we should just go get it, or we're slackers. I think you know this.

Yes, all dynamic language VMs will look more alike in 10+ years. StrongScript or a closely related descendent of it in 5 still beats Dart, and I'm still right that this took too long by 5 years at least, precisely because of the bad politics and JS-can't-be-fixed-but-it-can-be-replaced fallacies of the Dash memo that pre-forked and diverged Dart too much, without JS uplift apart from SIMD.

Are you done defending this sideshow as a productive experiment that helps the Web? Because so far, apart from John McCutchan's work, it has not helped. Maybe it will emerge a dark horse winner, but odds are against.


Technological improvements have increased the preponderance of crap on the web. Dart ain't gonna change that but it will give more centralized control to Google.

More technology means more corporations building crap. I should know, I've spent about three months of my 10+ years building things that were just slightly better than useless.


Still unpatched dartvm vs. dart2js numerics divergence:

http://code.google.com/p/dart/issues/detail?id=1533

Could be addressed via asm.js bignum emulation and should be addressed by work (w/ Googlers helping) on bignums in ES-next.


I'm looking forward to having integers in JavaScript.

The dart2js numerics divergence isn't a problem in practice, as during testing/development the Dart VM is run with a flag which throws if an integer escapes the 2^53 safe range. And if an integer is overflowing to a double in production code then this is a bug anyway.

It would be nice if JavaScript VMs could also provide a similar feature, at least until bignums land. But I guess this is difficult, as there is no way to distinguish which numbers are intended to be integers and which are actually doubles.
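As a sketch of what such a development-time check could look like in userland (`checkedAdd` is a hypothetical helper, not any real VM flag): the programmer has to opt in per operation, which is exactly the integers-vs-doubles ambiguity described above.

```javascript
// Hypothetical userland stand-in for a DartVM-style overflow flag:
// wrap integer arithmetic and throw once a result leaves the 2^53 safe range,
// instead of silently rounding.
function checkedAdd(a, b) {
  const sum = a + b;
  if (!Number.isSafeInteger(sum)) {
    throw new RangeError('integer result escaped the 2^53 safe range: ' + sum);
  }
  return sum;
}

console.log(checkedAdd(1, 2)); // 3
try {
  checkedAdd(Number.MAX_SAFE_INTEGER, 1); // throws rather than losing precision
} catch (e) {
  console.log(e.message);
}
```

A VM-level flag could apply this to every arithmetic op, but only if it knew which operands were meant to be integers; userland code only gets it where the wrapper is called.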


Yeah, mythz (not you, I trust) used the same dismissive approach here:

https://news.ycombinator.com/item?id=8308578

after first asking someone to cite such a problem.

AFAICT the "isn't a problem in practice" happy-talk is all coming from Dart insiders. Does this shoe fit?

Anyway, it's a real Dart bug that drives potential users away. And why is JS bignum support -- which Google for-sure could have championed in Ecma TC39 and prototype-implemented in V8 -- just an "It would be nice" thing?

(It is not difficult to support suffixes for new value types, e.g. 0m for decimal and 0n or better for bignum. This is on the ES7 agenda. For more on range analysis and other/better optimization options, see https://news.ycombinator.com/item?id=6950475.)

Trusting checked-mode test coverage is bold in light of this bug, where bold might be "stupid" or "you're fired". As I wrote once in this thread already, Murphy was an optimist.


I agree this needs to be fixed in JS. And I have been following the ES strawmen/proposals. Not sure if I like suffixes. The approach in Dart seems to work well too.

In JS the overflow is often silent and is easy to miss. Since the DartVM can throw an exception on overflow this makes it easier to catch. So the Dart toolchain is an improvement, but also not ideal.

Where correctness is important there are also 32/64bit and bignum libraries which can be compiled to js but are slow.

Edit: Mythz used exactly the same phrase. Jinx.


IBM Research did a multi-VM framework a while back, to host both the CLR (.NET) and the JVM, with an inter-heap cycle collector. I think it was called Parlay, but that name seems to have been reused. Highly non-trivial!

EDIT: Aha, "Parley":

http://hirzels.com/martin/papers/vee04-parley.pdf

You previously asserted that Mozilla spends more on other non-standard-track work than it would cost to support multiple language runtimes. Where is your evidence? I cited mine with PyXPCOM.

We did the work to support Python alongside JS. It was neither simple in any sense compared to other work (Mozilla does standards-track-only work; this sometimes leads to dead ends, but that's life) nor a good sunk cost.

I'm not here to chinwag with Monday-morning-the-next-decade quarterbacks. First, advert to the complexity you denied up-thread. Then figure out design problems and solutions.

If after a ton of work "on spec", you end up writing good patches, I'll help champion them in. I would like to see a zero-cost cycle collection solution. That's research, not patch-hacking. If you can't acknowledge the problems starting with cycle collection, you're not for real.


My comment about 20 years was in response to a quote from the parent comment. But yes, that's a very good point. Very few languages have ever come out of the gate close to the mark. Early C# and Java were painful without closures, iterators, function pointers, or good performance. Which is why I'm glad to see things like TypeScript, Closure, ES6, Traceur, etc. (and I'm less excited about CoffeeScript and Dart).

It's especially a good sign that Google is using TypeScript as a basis because it means we might not get siloed implementations and communities.


You might be interested to know that Python too is going towards gradual typing, with PEP 484 https://www.python.org/dev/peps/pep-0484/ it's trying to formalize the syntax from MyPy.

I think it will happen, since in September on the Python mailing list Guido found a slew of bugs when using MyPy and started to advocate gradual typing.


> It's damning that you say it took 20 years for us to "start" to leverage JavaScript's power.

If that's true, then Lisp is surely triply damned, as it's older than 60 and it's safe to say that most working programmers in the industry don't know how to leverage its power.

Or, since you brought up Ruby, the industry by and large didn't care about its existence for roughly the first decade of its life, though I suppose that's only half-damned.

One might reasonably conclude that what a language has to offer is only tenuously connected to whether a critical mass of industrial programmers knows how to work in it and take advantage of everything it has.

> Compare to C#, which appeared 15 years ago, or Go which is from 2009, or Swift, which hasn't even celebrated its first birthday yet.

Since a comparison has been invited:

- C# is an iteration of an iteration of an already industry-accepted paradigm (and the first iteration's uptake is considerably more impressive).

- Go is more or less the same thing, except they took out a lot of industry-accepted paradigms (and enough people are actually complaining loudly about some of the removed bits that I think Go's uptake is going to plateau more quickly).

- Swift is a super interesting case and maybe the best comparison to JS of all of these -- familiar syntax but with enough differences that are going to be unfamiliar that some developers would probably freak out and resist... if its only real competition weren't Objective C (so of course it's doing well :b) and its target market weren't already familiar with the underlying Cocoa libraries (which are almost 30 now).

- C# and Swift were both released by deeply entrenched market-bendingly powerful industry patrons who were decades old (this shoe nearly fits golang, too).

By contrast, JavaScript was released by a company barely a year old in an emerging market. A partner (another decades old entrenched player) of the neophyte company wanted the language nerfed and positioned as a toy, and got some of what it wanted. It was created very quickly. Within a year or two of release, tens of thousands of amateurs and professionals were doing something with it, most without having ever bothered to learn the specifics of the language at all. Within five years, thousands were actually doing reasonably complex applications with it. At about 10 years (right around the age Rails mania starts to hit the industry, right?) is when some significant portion of the industry starts to take notice that things like Google Maps can be built and (some) people start to consider that maybe if they actually took the time to learn the language and platform, they could do neat things too....

I'd say JavaScript fares pretty well by a number of standards for comparison.

Not to say it couldn't be improved; it is being improved. Better error messages and stack traces are great ideas, type hinting could work out too.


> If that's true, then Lisp is surely triply damned, as it's older than 60 and it's safe to say that most working programmers in the industry don't know how to leverage its power.

Yes. It's an awful language, beloved of those who've never had to maintain someone else's code.

> Or, since you brought up Ruby, the industry by and large didn't care about its existence for roughly the first decade of its life, though I suppose that's only half-damned.

Yes. Ruby is an uninteresting language. Rails was the kind of innovation in framework design that's so insightful that it now seems obvious, and it's hard to remember that we ever thought differently; by an accident of history it was in Ruby. Industry popularity turns out to track the really important innovations, who'd've thought?

> JavaScript was released by a company barely a year old in an emerging market. A partner (another decades old entrenched player) of the neophyte company wanted the language nerfed and positioned as a toy, and got some of what it wanted.

Javascript had the huge advantage of being required to effectively use the most popular application in history. The fair comparison is with something like Visual Basic during the rise of Word, except that most word documents never needed a single macro. Or perhaps to the things people make with Excel, even if it's not a "real" programming language. If Javascript were the perfect language handed down on tablets of stone, if it were Idris 20 years ahead of its time, it wouldn't have got much more adoption - and if it were the devil's own, MUMPS back from the dead, it wouldn't have got much less. The factors driving its adoption were simply unconnected to its strengths or weaknesses as a language, because there simply was no alternative for the overwhelming majority of Javascript use cases.

If you compare it to a grassroots contemporary, a language without the corporate sponsorship, how does it fare? Going by Wikipedia those look to be PHP, Ruby and OCaml. It's not clear-cut, but all of those feel more mature than JS to me.


> some significant portion of the industry starts to take notice that things like Google Maps can be built and (some) people start to consider that maybe if they actually took the time to learn the language and platform, they could do neat things too....

It's a Turing-complete language. If you can do neat things in one Turing-complete language, you can do neat things in all of them. The only difference is how painful it is to do neat things in one Turing-complete language vs. another.

Just because it's Turing-complete doesn't mean it's a good example of a Turing-complete language.


Wow! 2s is considered ages to run?!


I got tired of replying with true ES4-is-not-turning-JS-into-Java denials in 2007:

https://brendaneich.com/2007/11/my-media-ajax-keynote/

/be


Aren't you against this sort of thing?

"It seems to me that a lot of you "bring on our new overlords" boosters are unaware of how HTML5 came about. It was not through one vendor shipping proprietary code and standards bodies mopping up later. Study some recent history."

You convinced me 1274 days ago.


V8 folks promise to work with TC39 on semantics and open source the code, which helps. You're right that, worst case, this turns into another Chrome-only power play. But all signs suggest otherwise, so far.

/be


::adds "publicly spanked by Brendan Eich" to résumé::

But seriously, that article is insanely ahead of its time. Or we're progressing reeeaaally slowly. Or both.


`type` and `like` seem like a really natural way to enforce interfaces in JS. What happened to those ideas?


I couldn't help but notice how it boiled down:

Q: "Are you turning Javascript into Java?" A: "No, it's optional"

That's not a strong denial, I was hoping for a more substantial response.


What does "turning into Java" even mean? What exactly are you afraid of?

Having actual integers? Threads? Better tooling?

The `class` keyword? It makes the code less verbose.

Optional types? Would you rather write JSDoc comments?

Optional typing actually works amazingly well. You need very few type annotations to get all the tooling benefits. Typically, it's enough to annotate the headers of your functions. Type inference takes care of the rest.
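
A sketch of the "annotate only the headers" point, written as plain JS with Closure-style JSDoc comments so it stays runnable today (the function and parameter names here are made up for illustration):

```javascript
/**
 * Only the header is annotated; a checker (the Closure Compiler, for
 * example) can infer the types of every local below from these two
 * annotations alone.
 * @param {number} price
 * @param {number} taxRate
 * @return {number}
 */
function totalWithTax(price, taxRate) {
  var tax = price * taxRate; // inferred: number
  var total = price + tax;   // inferred: number
  return total;
}

console.log(totalWithTax(100, 0.5)); // 150
```

The same idea holds for TypeScript's inline annotations: the header carries the contract, and inference propagates it through the body.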


1) I was commenting on how they didn't answer the question. 2) JS and Java have very real differences. Turning one into the other DEFINITELY comes with consequences. Maybe you like them, maybe you don't, but it's clearly a legit question.

Regarding #1:

The optional part wasn't Perl6-style optional typing (which I'd be fine with), as I read it, but USING their system at all.

If I teach how to juggle chainsaws, and someone says "isn't that dangerous?" and I respond "No, you don't have to do it", I've not addressed the question at all.

Regarding #2:

For me, it'd be threads (seemingly simple, but almost never painless), classes (JS today has objects but not classes, and that's a fundamental part of how it works), compilation, and most importantly typing. "Session session = Session.getSession()" is not something I enjoy typing.


It might be a reference to Dart?


That was a classic non-denial denial[1].

[1] http://en.wikipedia.org/wiki/Glomar_response


Some of the suggestions seem reasonable, e.g. 'arrays cannot have holes', 'Length always in sync' and 'Sane Coercions'.

It's also a good gesture that they'll try to work with others to standardize. SPDY -> http/2 is a great example of this!

I disagree with 'Accessing missing properties' and a few others. I don't want Javascript turned into Java.

Also sounds a bit like a hammer looking for nails, i.e. they've already optimized everything, so to optimize more they'll need to rewrite the language. Again, I think some ideas are good, but may be slightly misdirected.


> Any program running “correctly” in strong mode should run unchanged as ordinary JS. Thus, strong mode programs can be executed on all VMs that can handle ES6, even ones that don’t recognise the mode.

This worries me. If it's running "correctly", then sure, it will run elsewhere. But if it hits something that Strong Mode considers "incorrect", then things diverge, and other browsers will execute something different.

This can be avoided if Strong Mode just issues warnings, not errors. Which seems sufficient, if the goal is to inform people about stuff they should optimize.

For those that want more rigorous checking, browsers could add a developer mode, in which those warnings become errors. This way, there is no risk of production code working differently on different browsers.

We have seen "use strict" cause errors in production. Avoiding that with "use strong" seems like a safer path, with no significant downsides that I can see.
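
A minimal sketch of the divergence worry. The strong-mode behavior in the comments below is what the proposal describes, not anything current engines implement; every shipping engine just ignores the directive string:

```javascript
'use strong'; // just a string literal: engines without strong mode skip it

// Under the proposed strong mode, reading a missing property would throw.
// In any ordinary ES5/ES6 engine, it silently yields undefined — so the
// same program behaves differently depending on the engine.
var options = { retries: 3 };
var result = options.timeout; // strong mode: would throw; ordinary JS: undefined

console.log(result); // undefined in any engine without strong mode
```

Warnings instead of errors would keep the observable behavior identical everywhere, which is the parent's point.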


You should never use "use strict" or "use strong" in production; they're very useful during development but can really only hurt you in production.


The point of "use strong" is to enable optimizations that wouldn't be possible otherwise, among other things. If you remove it in production, you won't get the optimizations.

"use strict" is used in production all the time. Google and Facebook use it. The Babel transpiler (formerly 6to5) adds it to your code by default.

It's one thing to get JS programmers to write their code in a certain way, but if the VM knows they are doing so, it's even more powerful.


I think there are 2 points - the one you said, to enable optimizations, and the other, to guide people to writing code that is easy to optimize.

I guess the question is, how much better can you do, over easy-to-optimize code. Or in other words, once something is valid in Strong Mode, how much do extra Strong Mode optimizations get you.

Such code should already be quite fast in current JS engines. The big, big difference is between the engine getting hard-to-optimize vs easy-to-optimize code. So I would bet that the bigger deal is the devtool aspect here. But, these are empirical questions.


This attitude really surprises me. I've been deploying 'use strict' code to live environments since 2012 and genuinely can't recall a single occasion where it has caused us a problem.

We lint, unit-test and acceptance-test aggressively (including legacy browsers), and the strict directive itself is of course scoped to our own IIFE. Perhaps that explains why I've not witnessed the hurt you mention.

The mechanics of strict mode are very clearly defined too, so I've not seen any cases where developer confusion caused unwanted side effects (although I have heard straw-man arguments where people predict it would).

The benefits, on the other hand, are real and tangible. Errors that might otherwise be difficult to diagnose fail hard and obviously. Browsers are better able to optimise strict code than non-strict. And culturally, I suspect that a use-strict policy has forced everyone to be more clinical in their approach.

Hand on heart, I would never go back to deploying non-strict code, so would be really interested to hear from anyone else in the opposite camp.


Could you expand on how it could hurt you? All I was able to find was this [0]

> I found two opinions about using strict mode in production:

> > There is no reason to ship “use strict” in your production code. There is no performance gain (verified with V8 team and Brendan a while ago) and I don’t need my users’ VMs doing the extra checks as well. Keep it development only, strip it out during build. This way you also avoid the concatenation issue you reference.

> And:

> > There may not be a performance gain, but there’s also not a performance loss. In production, even more than in development, is where you want to be sure you’re noticing errors. Minimizing the changes between the development and production versions of the code is key to being able to debug issues quickly and efficiently. Yes it helps during development, but there’s no reason to pull it out of production code.

> The source is in the comments at the bottom [1]

> And of course those 12 bytes of "use strict" won't change a thing.

[0] http://stackoverflow.com/a/10986793/1072106

[1] http://www.nczonline.net/blog/2012/03/13/its-time-to-start-u...


In practice there have definitely been perf losses from "use strict" (emscripten used to emit it until we noticed that). "use strict" sounds easier to optimize - it's stricter - but in practice, JS engines didn't get around to optimizing it, at least that was the case a few years ago.

There have also been breakage issues. "use strict" changes semantics. Before all browsers implemented "use strict", some had it and some didn't, which meant scripts with "use strict" ran differently in different browsers.


I remember when it was generally suggested to avoid "use strict", because there wasn't great browser support. You could write "strict" code that would work at the time (due to browsers not actually enforcing strict mode), but in the future the same code might break in conformant releases of those browsers.

That said, these days it's invaluable. Getting errors early saves time and prevents subtle bugs. It's become a widely accepted best practice, as noted by other posters.

I definitely share your concerns about mode switches, but as long as there's a conformant implementation to test against, it seems like strong mode could move the language forward without too much of the "use strict" early adoption pain. Strict mode wasn't completely smooth sailing, but I think in the long run it helped.


The biggest potential problem is using a file/global-scoped "use strict" directive in a .js file that gets concatenated and minified with other files that are not strict mode compliant, thus unintentionally modifying the behavior of any code that's concatenated after the strict mode script.

This issue bit Amazon in the ass a while back and gave rise to JSHint checking for function-level strict mode declarations only and the general rule-of-thumb to not use strict mode in production code.
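
The concatenation hazard is easy to reproduce. This sketch simulates naive file concatenation with eval (the file contents are hypothetical stand-ins for two separately authored scripts):

```javascript
// fileA.js opens with a file-level directive:
var fileA = '"use strict";\nvar a = 1;';

// fileB.js (from another team) relies on a sloppy-mode implicit global:
var fileB = 'implicitGlobal = 2;';

// Served as separate <script> tags, each file behaves as its author
// intended. Naively concatenated, fileA's directive prologue governs
// fileB too, and the implicit-global assignment now throws.
var error = null;
try {
  eval(fileA + '\n' + fileB);
} catch (e) {
  error = e.name;
}
console.log(error); // 'ReferenceError'
```

Wrapping each file's strict directive inside its own IIFE (as the grandparent comment describes) avoids the leak.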


But isn't "use strict/strong" required to get some/many of the optimizations?


Won't keeping the pragma be a useful signal for the compiler to know it can use the more efficient language subset? Those that don't support it can just ignore the pragma and go with standard Javascript.


I agree that you shouldn't. But sometimes people will make the mistake and do it, if it doesn't cause a problem on their dev machines. Then it could break on the open web.

Not changing JS semantics would avoid this. All the developer benefits are still possible without semantic changes.


Isn't it like why you would remove asserts? The only thing an assert will do is fail (or do nothing) - why do you want to fail for users (when you could not fail, or at least not on that line)? So they can see that you messed up?

It's kind of like protocols: be strict in what you send out but lenient in what you accept. Be strict in the JavaScript you write, but lenient when your users are actually running it.


A failed assertion means the program no longer has any idea what's happening. If it continues past that point it can either get lucky or start to do lots of damage. Whether that's an acceptable risk depends a lot on what type of program it is.


What if you're writing HFT software and an error causes you to spend millions of dollars on the wrong thing? If you had that assert in there then the program would have just crashed instead of bankrupting your company.

Similar problems could easily arise in JS with e-commerce type applications, failing is sometimes a good thing.


Pragmatically, how does subsetting matter? ES6 transpilation will be a requirement for many more years. Once you transpile, all the ES6 requirements blow up (e.g., not extending prototypes).

A more particular criticism: how does forcing let statements on everyone improve performance?

All it does is force implicit (rather than explicit) scoping rules on JS developers, because devs from other languages don't seem to be able to understand functional scoping or hoisting. It's a huge case of "X language isn't like Y language that I already know, so let's change the unfamiliar language."
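
For reference, the scoping difference being debated, sketched in plain JS (hypothetical function names):

```javascript
// var is function-scoped and hoisted: a declaration inside the block is
// visible throughout the enclosing function.
function withVar() {
  if (true) { var x = 1; }
  return x; // 1
}

// let is block-scoped: y does not exist outside the if-block.
function withLet() {
  if (true) { let y = 1; }
  try {
    return y; // throws: y is not defined here
  } catch (e) {
    return e.name;
  }
}

console.log(withVar()); // 1
console.log(withLet()); // 'ReferenceError'
```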


Not really. It's a way to enable increased performance by disallowing certain hard-to-optimise constructs.


Which ones specifically?


> [...] accessing missing properties throws, arrays cannot have holes, all scoping errors are static, classes are immutable.


I think it's important for the VM, which is what I got as to why they are proposing strong mode and sound types.

I imagine that let and const afford optimizations to the VM.


The flag offers optimizations -- except they don't exist unless you compile a version of ES6 code for Chrome only.

Sure other ES6 browsers will ignore the string and move on, but they will have other ES6 compatibility issues. For example, Chrome's "fat arrow" doesn't do the lexical bind to the parent scope that the current spec requires. There are a ton of situations where this would break your code if you ran it on FF (and there are other differences).

The only option is to transpile into ES5.1, but ES5.1 breaks the optimization requirements.

Thus, using the optimization flag means building to Google's JS engine (we've been there with IE), while building a shared version means you cannot use the flag.

Let adds scopes. Scopes are bad for performance (this is why Closure Compiler removes them). Removing them (if safe) is one more thing for the JIT to do. It has little to do with efficiency and a lot to do with forcing another language's preferences on users.


> For example, Chrome's "fat arrow" doesn't do the lexical bind to the parent scope that the current spec requires.

And that's why fat arrows are currently not enabled in Chrome. Chrome exposes in-development features behind flags. IMO vastly better than the vendor prefixes of ages past.


Conversion to TypeScript has been going on inside Google for at least a year. The Angular team started to do slides in TypeScript about a year ago.

What I do find annoying is the constant rebranding Google's teams apply to TypeScript. The Angular team called TypeScript plus runtime typechecking "AtScript". Now this document calls something similar "SoundScript".

Can't Google just admit it's using TypeScript? Adding runtime checking doesn't change the fact that it's just TypeScript.


> Can't the Google just admit it's using TypeScript?

What makes you think they haven't "admitted" it yet?

> AtScript is TypeScript: We’ve been collaborating with the TypeScript team on our proposed productivity [1]

They have been collaborating with Microsoft, and it's no secret[2]

1. http://angularjs.blogspot.com/2015/03/announcements-from-ng-...

2. http://blogs.msdn.com/b/typescript/archive/2015/03/05/angula...


It's good to know; it just didn't seem like that last year. I know that Hejlsberg went to meet the Angular team after this lecture https://www.youtube.com/watch?v=b69vwMIphic&t=3928 (Q&A position 1:05:30)

As a little note about your references:

1. was posted 23 hours ago. 2. six days ago.

I'm actually not checking their plans daily.


It is absolutely not TypeScript. SoundScript is supposed to have a sound type system (one that can be the basis for optimizations). TypeScript was designed with an unsound type system.


I don't really like this, or at least how it sounds.

1. Google implements this
2. Chrome becomes the fastest browser by a wide margin
3. Other browsers die or become minor players
4. Google leverages this to enforce its own changes to the HTML/JavaScript domain, "for the better" of the web, and maybe even into mobile
5. Web is broken unless you use Chrome

The pain developers suffered dealing with IE4-6 will be nothing compared to the repercussions we face if we have to keep being backwards compatible with some version of Chrome because website X is "Chrome only" in the future.

Hell, it's already happened a bit with the new HTML5 features. Hell, Korea is still paying for the mistakes of committing to IE.

Feel like it ends up being Google's version of EEE.

Also, keep in mind, this "SoundScript", even if its in the early stages, has not been declared as being open source, so it is closed source at the moment. This makes me feel that at best SoundScript will be open source like Android is open source. Release stuff but keep the latest version for Google to use.


> 1. Google implements this 2. Chrome becomes the fastest browser by a wide margin 3. Other browsers die or become minor players 4. Google leverages this to enforce their own changes into the HTML/Javascript domain, "for the better" of the web and maybe even into mobile. 5. Web is broken unless you use Chrome

Er, you just made up five steps, none of which have happened yet, and then concluded that they sound like "Google's version of EEE". That's what happens when you make up a "Google's version of EEE" scenario :)

It's a proposed ES7+ addition being implemented behind a runtime flag. Can you be specific about what that has to do with Korean banking laws and destruction of the competitive browser landscape?


This is rough and maybe slightly off but the gist is that Korean banks (and other institutions) are required by law to use ActiveX and other IE-only technologies which had (has?) kept IE's browser share ridiculously high (with most on older versions).

I lived there for a few years and worked with Macs. I was completely unable to access my bank account online. Even when I could log in on a Windows desktop, there's an extremely overly-complicated (and quite manual) certificate system also required.


I missed the Q&A question about standardization, and that alleviates some of the concerns about how this will play out.

But again, the concern is that Google is going to go ahead with this even before any approval. So when every single JavaScript-related product from Google is already using it, what's preventing Google from just shrugging, going "oh well", and continuing on with their modifications?

What is V8 for if not for Chrome (since V8 is not beholden to node/iojs)?


Well, you can always look to past behavior, such as with SPDY[1]. Once something like SPDY was adopted as HTTP/2, Google decided on dropping SPDY support for Chrome. Their core complaints with HTTP were addressed in a new standard, so there's no reason for them to keep their specific variation alive and cluttering the browser landscape.

1: http://blog.chromium.org/2015/02/hello-http2-goodbye-spdy-ht...


You point out a legitimate risk. Most won't take it seriously, because Google's history goes more like:

1. Google announces New Great Way
2. Google implements
3. Early adopters jump on
4. Improvements are promised
5. Languish
6. Google moves on

Not universally true (NaCl and GWT followed this, SPDY didn't), but frequent enough that people aren't yet cautious.


I've already encountered a number of sites which appear to only work in Chrome, so arguably that ship has already sailed.


Your comment is so unnecessarily cynical it made me giggle. You don't think like that in real life hopefully.


If you were not on the web from ~1998-2004, and then had to dig out of the glut of crap that IE4 dumped behind it from 2005-Now, you probably think this is cynical. It's merely considering the historical record of what happens when you allow one company to control the web.

Why do people want an open web? Because if you allow one entity to gain too much influence, you become locked into their whims rather than what is best for the web.


I was on the web ~1998-2004, and I still think you're being cynical, and are just flat wrong on certain points:

- V8 (and Chrome, via Chromium) is an open source project: https://github.com/v8/v8-git-mirror . Try doing a search for 'use strong' in the V8 source and you will find that the code for this proposal is already open source, not "closed source at the moment".

- Chrome is nowhere near the kind of market dominance that IE once enjoyed (> 90%), and there is no rational prospect of it ever reaching it, unless you believe Windows, OS X and iOS are about to disappear. Thus they will never enjoy the monopoly necessary to unilaterally introduce proprietary features.

- Chrome auto-updates. In fact, it was the first mainstream browser to adopt this "evergreen" model (since adopted by Firefox and IE). This means that there are no concerns about maintaining backwards compatibility with "some version" of Chrome.


> no concerns about maintaining backwards compatibility

Actually, one of the reasons old IE versions persisted forever is because the enterprise rarely updates. Although I doubt there are many out there that use Chrome at all, they would most certainly block auto-updating, much as they used to block Windows Service Packs until they could be tested in-house - no matter how severe the vulnerabilities they patched.


Differences between IE and Chrome:

- IE has fixed versions, IE 6, 7, 8, 9, 10, 11; Chrome autoupdates so there's only Chrome stable, Chrome beta, Chrome dev and Chrome canary.

No need to hardcode IE6-specific hacks.

- IE is closed source; Chrome is more open source.


> Chrome autoupdates so there's only Chrome stable, Chrome beta, Chrome dev and Chrome canary.

I still see old versions of Chrome in my access logs fairly regularly. Don't kid yourself that everyone updates all the time.

> - IE is closed source; Chrome is more open source.

Chrome is not F/OSS, Chromium is.


IE autoupdates since IE10, which is frankly a huge event for our industry that very rarely gets recognized.


Except for the fact that it still moves at a glacial pace compared to the competition. Incremental IE10 updates do not include new CSS or Javascript functionality, that's limited to version releases - which is measured in years. Compare that to Chrome or Firefox, where new functionality lands every couple of weeks in stable.


IE10, if left alone, will auto update to IE11 silently.


They still lock feature versions in with Windows releases, so they don't get many brownie points for being able to auto-update security patches.


Chrome autoupdates on desktop.

On mobile, there are all sorts of old Chrome versions being shipped by various Android vendors and not getting updated at all.


Another +1 for TypeScript. Does anyone else see TypeScript as being a big player in the coming years?


I've largely ignored it so far. I'm not a huge fan of writing in something like CoffeeScript/TypeScript/etc., because when the underlying JS changes (Like ES5->ES6) it can cause big breakage and/or confusion (Will CS classes work like classes do in ES6?). I am a fan of Babel (formerly 6to5) because writing JS in the new standard format and transpiling to ES5 seems a lot cleaner, and it's all still just JS. Also, ES6 is a MUCH easier sell to other developers (who might not use JS as much but still need to interact with it) than moving to CS/TS, IMHO.


"when the underlying JS changes (Like ES5->ES6) it can cause big breakage"

[Citation Needed]

As I understand it ES6 is meant to be backwards compatible with ES5.

CS classes won't make use of the new "class" syntax, but they should be compatible with ES6 classes (they can subclass each other) because both are really just sugar on top of JavaScript's prototypal inheritance.
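
That interop can be sketched directly (the names are hypothetical; the constructor-function half approximates what CoffeeScript emits):

```javascript
// CoffeeScript-style "class": a plain constructor function wired up on
// the prototype chain by hand.
function Animal(name) { this.name = name; }
Animal.prototype.speak = function () { return this.name + ' makes a noise'; };

// An ES6 class can extend it directly, because both sit on the same
// prototype machinery.
class Dog extends Animal {
  speak() { return this.name + ' barks'; }
}

const d = new Dog('Rex');
console.log(d.speak());           // 'Rex barks'
console.log(d instanceof Animal); // true
```

One asymmetry worth noting: the reverse direction (a hand-rolled constructor function extending an ES6 class) is trickier, because ES6 class constructors throw if invoked without `new`, so the usual `Base.call(this)` pattern doesn't work.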


The problem is ES6 is not forward-compatible from the point of view of CS or TS.

That's ok if the latter two (and other such languages) can change to track ES6, as MS promises TS will.

Just one example from CS: the difference between for/of meanings.
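
To make the for/of clash concrete in plain JS (CoffeeScript's `for k of obj` compiles to JS's `for...in`, which visits keys, while ES6's `for...of` visits values):

```javascript
const arr = ['a', 'b'];

// ES6 for...of iterates values.
const ofValues = [];
for (const v of arr) ofValues.push(v); // 'a', 'b'

// JS for...in — what CoffeeScript's `of` compiles to — iterates keys.
const inKeys = [];
for (const k in arr) inKeys.push(k);   // '0', '1'

console.log(ofValues, inKeys);
```

Same keyword, two meanings: exactly the kind of confusion the parent comments describe.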


That seems like a bigger issue for TS, which aims to be a superset of JavaScript, vs. CoffeeScript which is a totally different language.

And I think the parent comment was concerned CS/TS would break as ES changes underneath.


Pretty sure the parent was talking about what I elaborated on -- see in particular

"... because when the underlying JS changes (Like ES5->ES6) it can cause big breakage and/or confusion (Will CS classes work like classes do in ES6?)"

Having two kinds of for-of loops and two kinds of classes, even if CS tries to present only its flavor and hide the underlying (but you can still escape out to raw JS from CS), does make for confusion.

Also, separate point, ES6 sucks oxygen from CS, so while I expect CS to live on, it won't see as much adoption as it would have absent ES6.

The reason to consider this is that if it's a big enough oxygen-sucking effect, CS might want to break its own backward compatibility in order to re-align with JS. I've heard talk of such a change, but I haven't talked about it with Jeremy or Michael. Pointers welcome.


Yep, it's already happened; CoffeeScript just implemented support for generators with v1.9.0.


The breakage argument is valid for Coffeescript, but Typescript has always made it clear that it supports ES6. Am I missing something?


TS doesn't support ES6 today. It doesn't support generators, ES6 modules, quasis... For that you need to wait for Microsoft to implement those features.


The website says it supports modules. Quasi literals also seem to be supported.


That's untrue; read the spec. TypeScript doesn't support ES6 modules, nor quasi literals. TypeScript has its own module system, which is incompatible with ES6 modules.


Fair enough. I haven't read the spec, just went to their website, and one of the first things they mention is modules and classes. Also, when I googled 'typescript template string' the first hit was https://github.com/Microsoft/TypeScript/issues/13, so I assumed they were supported.


I apologise; I have read very little on TS because I lumped it in with CS. If that's not the case, then I would like to retract my statements on TS. It appears I need to read up more on it. I was also put off because when it was released there was very little IDE support, and IMHO it was a little murky how to use it. (Also, MS released it AFAIK, and I'm a little skeptical of anything involving MS + Web; IE scars don't heal fast.) I need to look into the tooling available for it now.


Works great in emacs.


You can use TypeScript with the ES6 output mode and then transpile that to ES5. In fact, that's what I do.


I wonder when we will see Dart's logo here

http://www.lemonde.fr/pixels/visuel/2015/03/06/google-memori...


I would expect it to be soon. Dart hasn't taken off at all, and now we're seeing multiple teams at Google embracing TypeScript (Angular last week, and now the V8 team).


Isn't TypeScript the same type system that Microsoft scuttled along with the rest of ES4?


No, TS came in October 2012, after ES4 was laid to rest by Harmony (July 2008). Its type system differs notably (e.g., bivariant generics, structural-only subtyping). There's an interview with Lars Bak and Anders Hejlsberg here:

http://channel9.msdn.com/Shows/Going+Deep/Anders-Hejlsberg-a...

At one point, Lars and Anders both say they liked ES4 and they wondered why it died. I LOLed a lot :-|, in view of how their employers were intimately involved in killing ES4.

(I write this without rancor, as ES4 needed to die.)


> (… as ES4 needed to die.)

Why is that by the way? From what I read at the time in the PDFs it seemed nice and not out of the ordinary. Did you ever wrote about technical failures of ES4 and I missed it?


From twitter (https://twitter.com/BrendanEich/status/575427109977378816):

* why ES4 had to die? https://news.ycombinator.com/item?id=8906807 says why in brief: "But ES4 was trying for too much too soon." Adobe bailed, too.

* (yet MS was about to give in! If only Adobe had known.) ES4 toward end was not AS3-compatible because JS wasn't AS3 compatible.

* enabling incompatibilities under opt-in version selection was fairly toxic to all implementors (V8, JSC esp. ) and many users.

* With 1JS idea, based on modules (classes, also generators via function*), new syntax is its own opt-in: no big red mode switch.

Update: also the "ECMAScript Harmony" email I wrote in July 2008 talks about the problem of namespaces:

https://mail.mozilla.org/pipermail/es-discuss/2008-August/00...

The early binding problem for the Web with multiple script tags was not an issue for Flash with a single SWF link-time tooled image.

The open namespaces costs can be thought of as a third lookup rib for identifiers beyond scope chain and prototype chain -- anything like this (e.g., scoped object extensions a la C#) is still bouncing off of engine implementors.


I like ES6 and how it was carefully modernized a lot. But ES4 (at least as it is implemented by ActionScript 3) is a pleasant language to work with as well.

— jhnns


Fair enough, and I led Mozilla to work in earnest with Adobe on ES4 and Tamarin, to get MS to re-engage with Ecma TC39 beyond ES3 (also for ScreamingMonkey: ActiveScripting uplift of IE to support ES4 via Flash without MS's cooperation).

But we couldn't map AS3 directly to ES4, because Flash was too different from the Web: single SWF link-step and static typing were two big diffs.

Even now static typing in dynamic code loading environment, or "gradual" or "hybrid typing" if you will, is not quite a solved problem. SoundScript or whatever it's called is an experiment, as the V8 folks have been careful to say.


(past ES4 champion from the Adobe here)

I agree with Brendan about the difficulty of mapping features from AS3 (Flash) to ES4 (the web). I'd go a step further and say that the only way this even would have been possible is to grow it out of the host's DNA of JS1/ES3.

Even if at a high level many of the emerging features of JS look like those in AS3/ES4, they are in their details profoundly different. This is as it should be. They need to grow out of the primordial stuff of JS1, not AS3. I suspect that the ES4 effort has informed many of those now working on ES.next in a way that wouldn't have happened otherwise. So in that sense, at least, ES4 lives on.


I see...

After ES6 finally providing a concise syntax for prototype inheritance, hybrid typing is the last "AS3 feature" I'd love to see in ES.

But btw: You guys are doing great job! I totally agree with your approach of evolving slowly and considering community feedback.


Harmony feels a lot more like JavaScript than AS3 ever did, and has some clear improvements (lambda arrows, destructuring, `...` syntax that also works in function bodies, etc.). That said, part of me wonders if the web would be any different if we had some of these tools 10 years ago.
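For readers who haven't tried them, a quick sketch of the features mentioned above (arrow functions, destructuring, and `...` in both parameter lists and call sites), all plain ES6:

```javascript
// Arrow functions: concise syntax with lexical `this`.
const add = (a, b) => a + b;

// Destructuring with rest and defaults.
const [first, ...rest] = [1, 2, 3];   // first = 1, rest = [2, 3]
const { x, y = 0 } = { x: 5 };        // x = 5, y defaults to 0

// Rest parameters in function bodies, spread at call sites.
function sum(...nums) {
  return nums.reduce((total, n) => total + n, 0);
}
console.log(sum(...[1, 2, 3])); // 6
```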


(I write this without rancor, as ES4 needed to die.)

Apparently Chrome feels the same about ES6 .. they continue playing launch pad chicken (only 39% on Kangax)


Honestly, every time I see google pushing for another JavaScript change, I end up not liking it. Its like they're trying to slowly transform all of js into typescript / @script


I see and hope for GopherJS or other Go to JavaScript compilers to become big players in the future. I'm already using that approach and it's great, and IMO has tons of potential.


No.


I'm only having a problem with default sealed classes on block level.

With perl, which has similar characteristics we decided to seal classes only from the main application, with "use oo :final", and modules, libraries, implementations of classes do not seal their classes. Thus you can easily allow duck-typing in the calling application for testing, or go fast and disallow adding or changing methods and properties.

So you cannot pre-compile modules per se, but js has no static block compiler anyway.
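In today's JS, the closest analogue to this kind of opt-in sealing is freezing a class's prototype yourself. A rough sketch (this only approximates strong mode's sealed classes, it is not the proposal itself):

```javascript
class Logger {
  log(msg) { return `[log] ${msg}`; }
}

// A library could "seal" its class so methods can't be replaced or added:
Object.freeze(Logger.prototype);
Object.freeze(Logger);

try {
  // Silently ignored in sloppy mode, throws a TypeError in strict mode.
  Logger.prototype.log = () => "patched";
} catch (e) { /* frozen prototype rejected the write */ }

const l = new Logger();
console.log(l.log("hi")); // still the original implementation: "[log] hi"
```

The trade-off is the same one described for Perl above: sealing kills monkey patching, so a library that freezes its classes forecloses duck-typing in the calling application.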


So the V8 team doesn't want to add optimizations for the "use asm" subset of JS, but does want to add optimizations for the "use strong" subset of JS? I do see the difference: strong mode is an aid for web developers writing JS whereas asm.js is usually a compilation target for porting C++ code (and a competitor to Google's NaCl).


V8 is adding optimizations for asm: https://hacks.mozilla.org/2015/03/asm-speedups-everywhere/

Asm.js isn't a solution for JavaScript developers; it provides an efficient compilation target for porting C/C++ code-bases to JS by leveraging the Emscripten or Mandreel compilers.

But this doesn't benefit the 99+% of websites running normal JavaScript. Whereas the goal of StrongMode is to carve out a safer and faster "subset" of JavaScript that's less error prone and easier to optimize with more predictable performance.

SoundScript, on the other hand, proposes adding (TypeScript-compatible) optional typing built into V8. ES6 is source-compatible with TypeScript, which can be transpiled to strip its type annotations so it runs in ES5 JS VMs. Essentially this is an evolutionary enhancement to JS.
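For concreteness, here is what today's permissive semantics look like, i.e. the behaviors the strong mode strawman proposes to turn into errors (this sketch shows current JS, not strong mode itself):

```javascript
// Today: accessing a missing property yields undefined rather than throwing.
const opts = { timeout: 100 };
console.log(opts.retries);  // undefined -- strong mode would throw here

// Today: arrays can have holes.
const arr = [1, , 3];       // index 1 is a hole
console.log(1 in arr);      // false

// Today: class definitions are mutable after the fact.
class Point { }
Point.prototype.magnitude = function () { return 0; }; // strong mode would forbid this
```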


> But this doesn't benefit the 99+% of websites running normal JavaScript. Whereas the goal of StrongMode is to carve out a safer and faster "subset" of JavaScript

That still won't be used by 99%+ of the websites out there.

At least not unless they want to forgo standards, or forgo strong mode in all browsers but Chrome, and last but not least: they will have to update the code which is already there, and IE6 told us how that works in practice.

Let's not help Google forge a new MSIE for the internet with Chrome.


In the article they say they are doing both. People from the Chrome team said they are doing asm.js.

"Q: How does SoundScript compare to asm.js? A: They have different goals: asm.js is a low-level language designed to be a compilation target for other (mostly low-level) languages. SoundScript is a user-facing type system for direct use of JavaScript as a high-level language. Both are complementary, both are useful, and V8 is committed to both."


That's not true at all, they already have experimental support for "use asm".

https://code.google.com/p/v8/issues/detail?id=2599


This just shows that the current state of JavaScript is a confusing mess. Where do new developers start now, when they are being fed ES6 transpilers and ReactJS?

I think Google (as usual) is getting ahead of this problem. But they shouldn't be alone in the fight for clarity.


If having more options "leads to a confusing mess", I fail to see how having an additional option gets "ahead of this problem."


Having a solution backed by arguably the most influential implementation team is what pushes it forward, not the fact that it's "just another option". Regardless of what Chrome implements (whether it's this, raw TypeScript, or CoffeeScript), the fact that it works directly in V8 is what pushes that solution to the top of the stack.


Ahem, Dart.


It depends on where you're coming from, but for lots of people who want to make something, there's probably some kind of tutorial or library with examples that puts it in reach. Pair that with the fact that JavaScript as a language has a small number of fundamental concepts and a great free REPL and user interface environment (the DOM), and you get lots of people making lots of things that would otherwise be more expensive or less accessible.


> But they shouldn't be alone in the fight for clarity.

They're starting with TypeScript which was Hejlsberg's project at Microsoft. Microsoft wants people to develop web apps for Microsoft server platforms, and you can use TypeScript with intellisense in Visual Studio. We'll see if Microsoft puts it in the browser but it seems like they already have a stake in the success, at least.


Perhaps new developers shouldn't transpile at all, or should use a single transpiler. It certainly could get confusing working with multiple transpilation layers.


Looking at the referenced strawman:

* Why is `delete` such a problem?

* Closed objects (under "Dynamic Restrictions") seem like a really heavy change from current common JS usage. You basically can't use objects as a hash. This would break a ton of code. ES6 Weak Maps seem like an odd replacement; especially because they are weak, that's semantically totally different than an object.

* Not being able to add properties to Arrays will, I assume, cause quite a few problems, as that pattern is common. Or maybe subclassing Array will work fine?

* Under Functions, they seem to be disallowing traditional class definitions, so you can only use ES6 classes. So you can't write ES5 that is also StrongScript :(

* Under Classes, it sounds like you can't call methods of `this`? I guess this is so you can't call methods on an object that is not entirely initialized. This will break a lot of code, or at least most classes I've written. I'd rather see all the possible instance attributes get pre-filled with undefined.

* Both this proposal and ES6 seem to dislike non-method properties on prototypes. Why?

* Disallowing `for in` is also annoying.

I like the concept, but I'm not excited by the proposal.
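On the objects-as-a-hash point: ES6 also has Map (not only WeakMap), which is a much closer replacement for a string-keyed hash, though still semantically different from a plain object. A sketch:

```javascript
// Plain object as a hash -- the pattern closed objects would break:
const hash = Object.create(null); // no prototype, so no inherited keys
hash["some-key"] = 42;

// ES6 Map: arbitrary keys, no prototype pollution, explicit API.
const map = new Map();
map.set("some-key", 42);
console.log(map.get("some-key")); // 42
console.log(map.has("missing"));  // false

// WeakMap only takes object keys and isn't enumerable, so it really
// isn't a drop-in substitute for a string-keyed hash:
const keyObj = {};
const wm = new WeakMap();
wm.set(keyObj, "value");
```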


This will be great for things like node-webkit or atom-shell (and node in general). There are many use cases where using an optimized subset is a big win, like atom.io or any cross-platform app with an embedded V8.

Granted, it won't speed up DOM operations much, but JS is being used in so many non-browser places... it's good to know that the language performance isn't stuck against a wall.


I'm especially concerned about the loss of "var".

The web (as in HTML + JS) has always been about backward "interpretability" (meaning, a new feature won't break existing clients). The incompatibility of "var" and "strong mode" seems to be the most important break in this concept since JavaScript 1.1 and the introduction of the Array-constructor. (Some may remember the days, when we had to write two scripts, one for Netscape 2 and newer, and another one for older clients, each defining the same functions and producing comparable results. – Most of this was owed to features related to DOM-interaction. – Alas, the language-attribute ...)

Realistically, not everyone will have a fully updated (or even fairly current) browser, and this really shouldn't be an issue which necessarily breaks things. "strong mode" will thus limit access to only those clients with at least some support of ES6 (as in "let").

Please add some basic support of "var"!


And while we're at it, why not add a backward compatible type declaration, like a string "use type a: int, s: string;" immediately following any declaration or declaration list?


They are not removing var from the language - they are removing it from the "sane mode" only. If you don't write "use sane mode", you can still use var AFAIK.


This is a bit of a tangent but sometimes I wish browsers would just open up some kind of standardized assembly language like interface to the vm and the dom. This would effectively make development for the browser virtually identical to the desktop; and also as a side effect, completely end this debate about "strengthening javascript."


They already do. The standardised assembly language is called "javascript".


Standardized? Yes. Assembly-like? No.

Essentially I am saying expose a layer even closer to the metal in the browser so that these optimization problems can be moved to front end development.


See past HN threads and posts that spawned them, e.g., this one:

https://news.ycombinator.com/item?id=5707088

Apart from syntax wins, you can't get much lower-level semantically and keep both safety and linear-time verifiability. Java bytecode with unrestricted goto and type confusion at join points makes for O(n^4) verification complexity. asm.js type checking is linear.

New and more concise syntax may come, but it's not a priority (gzip helps a lot), and doing it early makes two problem-kids to feed (JS as source language; new syntax for asm.js), which not only costs more but can make for divergence and can overconstrain either child. (This bit Java, pretty badly.)

/be
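For reference, a minimal asm.js-style module is just ordinary JS whose coercions (`|0` marks an int32) make linear-time validation possible; engines without asm.js support simply run it as plain JS. A sketch:

```javascript
function AsmAdd(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;            // parameter type annotation: int
    b = b | 0;
    return (a + b) | 0;   // return type annotation: int
  }
  return { add: add };    // exports
}

const mod = AsmAdd(globalThis, {}, new ArrayBuffer(0x10000));
console.log(mod.add(2, 3)); // 5
```

The `|0` coercions are what let a validator assign each expression a fixed type in a single pass, which is the linear-time property mentioned above.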



What's the point of all this? If you need a more rigid language to feel safe, just write your programs in that language and then compile to JavaScript.

Why extend JavaScript instead of treating it as a VM for anything you like?


Javascript isn't bytecode. It's a language that people do actually write and have to deal with on its own terms.

Transpiling to javascript is a hack to make up for perceived deficiencies in the language itself - but as with any other language, javascript can still be improved upon and evolve.

Requiring that anyone who wants to write javascript for the web do so through a Java or C compiler is actually the worst option.


Because many legal constructs in javascript preclude various optimisations that could otherwise deliver near-native performance.

Compiling a more rigid language to JS doesn't help with this, because the runtime and JIT compiler have no way to know what the original constraints were, so they have to remain open to the possibility that this is plain old JS and any of those weird and unoptimisable -- but legal! -- constructs may occur.

Of course, you can write runtimes and JIT compilers with optimistic fast paths, that assume certain constraints apply and bail out to slower paths if they encounter something they can't handle. But this only gets you so far. It adds a lot of complexity to your implementation, and provides no clue to developers that they've violated a particular constraint and their code is running 10x slower than it could do. Furthermore, different runtime implementations from different vendors may have different heuristics and fast path behaviours.

Having a standardised subset of the language, marked by a backwards compatible pseudo-pragma (like 'use strong') lets developers opt in to explicit constraints that enable greater optimisation. These constraints are common across different runtimes, properly documented, and are enforced by fail-early checking. The latter gives clear feedback to developers when they violate a constraint, and means the fast-path code can work without escape hatches, because the code it's running is guaranteed not to violate the constraints.
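A concrete illustration of the "legal but unoptimisable" point: engines like V8 optimize property access via hidden classes (object shapes), and perfectly legal code that creates objects with inconsistent shapes tends to fall off the fast path with no warning to the developer. A hedged sketch (the semantics shown are plain JS; the performance characterization depends on engine internals):

```javascript
// Consistent shapes: every Point gets the same hidden class, so
// property access can compile down to a fixed-offset load.
function Point(x, y) {
  this.x = x;
  this.y = y;
}

// Inconsistent shapes: legal JS, but the two branches produce objects
// with different property-insertion orders, hence different hidden
// classes, which defeats inline caches.
function makeMessy(i) {
  const o = {};
  if (i % 2) { o.x = i; o.y = i; }
  else       { o.y = i; o.x = i; }
  return o;
}

let total = 0;
for (let i = 0; i < 10; i++) total += makeMessy(i).x;
console.log(total); // 45
```

Both versions compute the same values; only the optimizability differs, which is exactly why fail-early checking (rather than silent slow paths) is attractive.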


You can compile to asm.js if you care about performance that much.


They explain that: they are aiming for faster execution and fewer bugs. Faster execution requires that the VM and the language change. They tried this with Dart, but that hasn't become popular; now they try again with something different and with smaller changes.


Most (all?) of the improvements listed in the document are already present in Dart. You can use them today. Good to see that these may also filter into JS in a few years time.

Google is capable of working on improving JS and Dart simultaneously. You can see Dart as a test ground for features that may make it to JS. Cross-browser improvements made to JS will also benefit the Dart ecosystem, as it is a compile-to-JS platform.

Don't expect Dart to disappear any time soon. Despite popular rhetoric, Google actually has a good history of supporting its web programming platforms, for example GWT and Closure.


Great idea, the web needs more help to fragment more efficiently. TypeScript, Flow and CoffeeScript aren't enough. We need moar languages, so you can't find two web developers that use the same one!


Fragmentation at this level is genuinely annoying. Heck, I might use ClojureScript in the near future. But there are two facts: 1) At the end of the day, everything is literally just JavaScript. 2) If a person is working in X Transpiled Language (XTL), then they could work in JavaScript.


Wait, what? A javascript subset that runs in existing browsers, instead of a new language that you have to transpile into a 5MB binary chunk you drop in for negligible performance improvements?

Mmm... yes please.


Classes are not an improvement. See Crockford: https://www.youtube.com/watch?v=bo36MrBfTk4


It used to not be, but with the reversal of the `this` creation, and the addition and use of `target.new` in constructor functions, we are now able to extend exotic types like Array.


`new.target` ;-).

/be


Thanks for the correction :)


Out of date, as usual :-P.


Overall the proposal looks good.

BUT: please provide a way to construct classes dynamically (or allow functions to have [[construct]]).
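ES6 class expressions already allow building classes at runtime, which covers part of this. A sketch using a hypothetical `makeClass` helper (not from any proposal):

```javascript
// Build a class at runtime from a spec object of methods.
function makeClass(methods) {
  const C = class {};
  for (const [name, fn] of Object.entries(methods)) {
    C.prototype[name] = fn;
  }
  return C;
}

const Greeter = makeClass({ greet() { return "hello"; } });
console.log(new Greeter().greet()); // "hello"
```

What this can't do is give an arbitrary function a [[Construct]] that satisfies the strawman's class restrictions, which is presumably what the parent is asking for.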


They seem to mention this at the end of the slides.


Can someone give me an example of a TypeScript declaration that is not "sound"?

I thought that

"It guarantees that (1) an expression that has a static type T can never evaluate to a value that is not a T, and that (2) evaluation cannot run into unexpected runtime type errors."

is already supported in TypeScript.


Generics are bivariant in TypeScript and so unsound:

    class Base {}
    class Derived1 extends Base {}
    class Derived2 extends Base {}
    
    class Box<T> {
      constructor(public value: T) {}
    };

    var a: Derived1 = new Derived1();
    var b: Box<Base> = new Box<Base>(a);
    var c: Box<Derived2> = b;
    var d: Derived2 = c.value;


TypeScript is structurally typed; there's nothing in your example that suggests that Derived1 and Derived2 have different types, instances of each are the same type.


Point, but add a differently-named field to each and it will still compile.


My understanding is that gradually typed code, i.e. using "any", introduces unsoundness.

    var s : string = "bar";
    var a : any = s;
    var i : number = a;
Not sure, but I think Typescript's assignability rules, which use structural typing, also introduce unsoundness.

    class A { foo() { console.log('A'); } }
    class B { foo() { console.log('B'); } }
    
    var a : A = new A();
    var b : B = a;


I like most of what I see in the sane mode. I would even go further and make "" and 0 truthy in that mode.

I don't think I would use the type system. Maybe it should have its own proposal instead of being "bundled up" with sane mode. At least it is optional.


Making 0 and "" truthy is just asking for trouble, whatever your opinion of current JS semantics. The hope with sane mode (or whatever less able-ist name it should have) is to have no runtime semantic change if the mode directive is not recognized (as it won't be, if a quoted prologue directive aka useless string-literal expression-statement, in any downrev browser).

Doubling the testing load, making hidden time bombs for older browsers to find blowing up in their faces at later dates: Not Good.

Elsewhere here, both Alon Zakai and I question the wisdom of using an erased "use sanity"; mode directive, given that runtime semantics will diverge for any code not in the new subset. The hope is that people will test fully with checks on, and find all code that doesn't follow the rules. I think TC39 will balk at this hope. Murphy was an optimist.

/be


Hi there, thanks for taking the time to comment.

> The hope with sane mode (or whatever less able-ist name it should have) is to have no runtime semantic change if the mode directive is not recognized

From my limited understanding, old browsers can't handle 'let' anyway, which is supposed to be used exclusively on this mode (as there will be no 'var'). But you probably know more about this than me.

> making hidden time bombs for older browsers to find blowing up in their faces at later dates: Not Good.

Doesn't that happen all the time? What I mean is: if ES6 becomes widely supported in browsers, pages written in it will make old browsers "blow up", too. Unless the people who program those make two versions, which defeats the point of using ES6 in the first place. The same when a new version of CSS comes out - use that, and the old browsers "blow up", too.

It seems to me it is the nature of the beast - new features make old browsers blow up.


It turns out `let` is usable in most browsers today, and in some older ones. The point is there's no extra opt in and no forked runtime semantics.

> Doesn't that happen all the time?

You're talking about new syntax bombing old browsers. That's not the issue. The issue is same syntax in old and new, works (for whatever value of "works") in old and new, but differs in meaning due to a "use sanity"; directive.


Fair enough. You have probably heard arguments similar to mine hundreds of times, thanks again for answering.

For what it's worth, I never depend on the truthiness or falsiness of 0 and "", and always test for them explicitly, so it will not affect my code if in a distant future javascript starts treating them as truthy. Just, you know, in case you change your mind in the future.

Good night!
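A sketch of the explicit-check style described above, which would be unaffected by any change to the truthiness of 0 and "" (the `isEmptyInput` helper is hypothetical):

```javascript
function isEmptyInput(value) {
  // Explicit comparisons instead of relying on falsiness:
  return value === "" || value === null || value === undefined;
}

console.log(isEmptyInput(""));  // true
console.log(isEmptyInput(0));   // false -- 0 is a real value here

// Today's semantics, for contrast: both are falsy.
console.log(Boolean(0), Boolean("")); // false false
```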


I, for one, sincerely hope the semantics of the truthy/falsy operations that I've grown accustomed to these past 19 years don't change. I tend to appreciate these expressions in JS more so on actual user-input operations.


The likely TC39 debate is between Dart-like Puritanism: no implicit conversion from primitive values or object references to Boolean values; vs. implicit conversion of primitives but not of object refs. I'm in the latter camp.

Update: this does not say 0 == "" of course, separate issue.

/be


This is starting to sound like Google doing to JavaScript what Facebook has done to PHP: customizing it to meet their own needs based on performance.

This "testbed" could be good or bad, time will tell.


This presentation makes me want to punch the next person I see https://drive.google.com/file/d/0B1v38H64XQBNT1p2XzFGWWhCR1k...

"let is the new var". No it's fucking not! "let" promotes procedural block-scoped code. It moves people away from thinking properly about flow in their code. "var" has its places; "let" can be a good addition in SOME circumstances. And calling code with "var" in it insane is itself insane.


I don't really understand what you're saying here. `let` is the way JavaScript should have worked from the beginning: a sensible implementation of lexical scope. How can you possibly make an argument for `var`? What is "procedural block scope"? Why does a proper implementation of lexical scope damage "thinking properly about flow"? I just don't get it.


What is "procedural block scoped code" and why do you think var as a binding construct (ignoring the initialiser) is control-flow sensitive? It's not.

/be


I mean people will widely use for, while, and switch statements instead of maps, hashes, weak maps, etc. They will be setting variables in the parent scope (procedural, step-by-step thinking) rather than passing functions, chaining things, etc.

I personally don't like some of the ES6 additions, but this one is just a step too far. For us mere mortals that is.


let composes nicely with all the statement forms. You can even do `for (let x ...)` and have a proper let binding per iteration, for closures that capture x in the body.

/be
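The per-iteration binding is easy to demonstrate; a small sketch contrasting `var` and `let` in loop closures:

```javascript
// var: one binding shared by every closure created in the loop.
const withVar = [];
for (var i = 0; i < 3; i++) withVar.push(() => i);
console.log(withVar.map(f => f())); // [ 3, 3, 3 ]

// let: a fresh binding per iteration, so each closure captures its own j.
const withLet = [];
for (let j = 0; j < 3; j++) withLet.push(() => j);
console.log(withLet.map(f => f())); // [ 0, 1, 2 ]
```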


Yep, and it's awesome to have that, but I personally hardly ever use for loops.


Why not just join the TypeScript project instead of subverting it with a fork?


Dear google, don't you have anything better to do?


Clickbaitier headline: "You won't believe what new color of lipstick V8 engineers have found to put on this pig!"



