Wait, Edge does better on ES6 coverage than both Chrome and Firefox? Microsoft have seriously stepped up their game, especially seeing as it's now neck and neck for performance with Chrome: http://venturebeat.com/2015/09/10/browser-benchmark-battle-s...
The next Firefox Nightly build should get 84% on that page, much closer to Edge than Firefox 44 (74%).
I work on SpiderMonkey and I'm super excited about this news. All JS engines have added more-or-less similar performance optimizations but often implemented differently and I'm really interested to see what the Chakra team did. I'd be happy to write a blog post on it next month, if people are interested.
Speaking as someone tuning a JS game, I focus on V8 over other engines because there are great articles around that explain in detail what v8 knows how to optimize, what causes functions to deopt, what kicks object property lookups into slow mode, etc. Articles explaining this for SpiderMonkey would be greatly appreciated!
FWIW, people in Mozilla are working on a "JIT coach", which will tell you if performance critical sections of your code aren't getting JITed, and why. I believe this is almost ready for use, though I'm not sure when it will be presented.
Sounds analogous to the way the chrome profiler tells you when functions have permanently deoptimized, right? That would be a terrific feature for Firefox, looking forward to it!
To get into the gory details google Vyacheslav Egorov - he's a v8 engineer with a number of talks on YouTube, presentation slides, and blog posts on v8 internals. He also maintains a tool called IRhydra, that lets you examine functions after they've been compiled into v8's internal representation.
Super interested! I often find technical blog posts like this years later and I'm really grateful to the people who donate their time to the difficult task of writing them.
One would think Firefox would lead here, since the browser is Mozilla's primary product. Imagine how much hype and PR Firefox would get if only they were far ahead on ES6.
Strange thing to think. Companies do not produce browsers - employees do. In fact, not even employees. A handful of talented programmers produce browsers. All the desire in the world from a company won't improve the ability of those handful of programmers.
It's actually fairly interesting. These huge mega companies are just support teams for a few programmers. All the corporate vision and endless strategies mean nothing compared to one of those programmers having a good or bad day in their work. You honestly get the feeling something somewhere is very broken when you think about it.
>Strange thing to think. Companies do not produce browsers - employees do. In fact, not even employees. A handful of talented programmers produce browsers. All the desire in the world from a company won't improve the ability of those handful of programmers.
Strange thing to think. As if the core mission of the company, the motivations of the management, strategic decisions to hire people and structure projects etc, the funding and priority they give to specific products etc, don't determine and affect the end product!
Put that way, it's as if a great browser engine could even come out of some accounting software house, if only the right programmers happened to work there.
Microsoft has about 120 times the number of employees that Mozilla has. That's an insane number. If they consider the browser remotely important, they can put much more resources behind it than Mozilla ever can.
Only things like the Mythical Man Month save Mozilla a bit here.
Now I know that, I'm actually looking forward to playing with the engine more than before - a concentrated braintrust of a few skilled engineers is always more ideal than a sprawling mass of seagulls (to borrow ideology from Finding Nemo :P).
The flip side, of course, is that all of you have to keep your game up to quite a high degree or you're out. Respect. (I think what the Edge team as a whole has managed is really amazing - I mean, a brand new browser...)
[Also... I have to ask... I've been wondering since before this announcement: is it an even remotely vague possibility that I'll ever be able to natively run EdgeHTML on FreeBSD or Linux one day in the distant future, source or binary? :D]
I simply can't take this claim that it has better coverage at face value, because I only just finished testing a week or so ago, and the JS code we deploy (which works on every platform from Android through Linux, Mac, iOS, and Windows) is still mostly broken on Edge and doesn't even begin to work in IE.
So we will still be recommending that users not use Edge or IE at this time. That recommendation isn't one I make happily, but Windows machines make up such an insignificant part of the market now that it's an easy business decision.
What is actually relevant here is whether the percentage that Google considers Chrome to be valuable to their company times the resources of the company is competitive against the resources Mozilla has (times the percentage that Mozilla considers the browser their focus, as they clearly spend a lot of money on side projects people sometimes seem to enjoy, such as Persona). Mozilla is small enough (at least in comparison to Google) that I think comparing their entire company to one division at Google is probably the correct strategy.
And it has been losing ground for a decade or so, and without it there is no Mozilla. I see some crazy initiatives obviously doomed to fail (like the mobile OS), which are worrying.
Coming up with a Servo based browser that's both more secure due to Rust AND faster due to parallel processing, and a better native-look-and-feel story (at least on the Mac) would be good to catch up with the others. And better developer tools, as Chrome has eaten that influential segment (web devs) up.
I think that firefox has improved their dev tools significantly no? I thought that Chrome and Firefox were neck and neck on this at the moment? With some features better in one and some features better in the other.
Although important for their future, Chrome is not Google's cash cow. Chrome is stealing Firefox's market share every day, so they have the luxury to wait. And the pressure from Chrome (and now Edge) hasn't made Firefox step up its game as one would expect.
It might make one question the underlying assumption that competition somehow can cause people to somehow magically become better (a concept many people have which makes no sense). In reality, competition changes how people allocate resources as they play a strategy game to not lose control over segments of the market they perceive as strategically important. It also causes them to lose their negotiation power in the ecosystem, which can be good (as they can't push around smaller players) but also bad (as they can now be pushed around by larger players or loud users).
Mozilla used to be able to sit around and say "we absolutely refuse to do certain things, and we want to spend our time figuring out how to make the web an interesting place for power users and developers". I respected that Mozilla. It had a lot of clout in the market and used that clout to fight against DRM on behalf of all users while spending their resources building a super-extensible platform (which I think is a better description of Firefox's crazy plug-in oriented nature).
The post-Chrome reality is that Firefox no longer has an automatic dominating position in the "alternative" (non-IE) browser space, and so they have had to start caving to loud user demand and start fighting for the end-user market segment. They don't have the ability to fight against DRM anymore, so they have been forced to include Adobe DRM by default. They don't have the ability to waste a lot of time on power users anymore, so they are dropping all the complex-to-maintain parts of their platform spec and have started dumbing down the UI.
The one thing that Chrome did that was truly important was not to compete against Firefox: it was to prove to the world that something--specifically high-performance JavaScript--was both possible and desirable. This is the one positive aspect of "competition", and it is something that frankly should never need to happen in the world of open source, as one can do that in the context of the other project: I can't imagine a scenario where Firefox would have turned down performance patches :/.
But of course, Google isn't going to want to do that, because Google is a company with a strategic vision that happens to benefit from owning the web browser and being able to unilaterally make major decisions and perform weird experiments and crazy product integrations through it, which is all the easier for them to bootstrap as they can use their position as "the place almost all people both search and advertise" to push Chrome on people. This means that Chrome has no reason to collaborate with anyone, and even the one alliance they sort of had (with Apple on WebKit) they broke off when they decided they didn't have enough unilateral control: rather than collaborate as part of a community, Google just wants to own the product.
They also happen to be the primary customer of Firefox, so Mozilla is being forced to operate on smaller budgets. Note that this is the usual effect of competition and should be the obvious one: the idea of someone "stepping up their game" makes no sense when you are now operating on smaller margins (as competition means you can't demand as much share of the profit on any particular transaction) of a smaller market (as competition means that some customers will be using your competition). You only get to "step up your game" momentarily, often towards frustrating ends (such as giving up the DRM battle or trying to dumb down your UI as fast as possible), until your resources start to wither. (Yes: in a small initial market, competition can cause greater customer awareness leading to more pie for everyone; but that obviously isn't the case here: that is only true near the beginning of a new concept, when no one even believes the thing you are doing is relevant or valuable.)
In this case, it is even worse, as the primary customer to Mozilla's product was Google... and so they are essentially screwed in that negotiation. Firefox has had to switch to Yahoo as the default search engine and start making content deals to bundle marketing and software with their product, something they were morally opposed to doing in the past but have been forced into doing due to competition. This also doesn't come cheap with respect to executive time: rather than working out their product and platform vision, they are having to spend time negotiating and having painful conversations about how to keep their company from being destroyed and what morals they are willing to compromise for how long in order to maintain that fight. I don't particularly love Mozilla (as someone who has been paying attention to the open Internet since the beginning, I frankly found Netscape's business model of selling web browsers bundled with ISP contracts terrifying), but I have great sympathy for them these last few years, and absolutely do not see Chrome as being a positive force for anything at all in this ecosystem, except maybe security :/.
Just for historical accuracy, both Safari and Firefox were working on JITs since before Chrome was announced, so high-performance JavaScript was coming independently of Chrome. I do agree that there was more competition with Chrome there and it took less time than it might have otherwise to get to the performance levels we have today.
The Mozilla project was wrongheaded from the start. Anyone who genuinely believed in open source could have known that KHTML was better quality code and so it proved, despite vastly greater resources being poured into the terrible Netscape codebase. (I can't help thinking this was largely jingoistic Americans preferring an American project).
Some good things have come out of Mozilla-the-organisation - I very much hope that Rust/Servo is a success. But when it actually comes to developing an open-source browser, the incentives of a donation-funded foundation like Mozilla are all wrong.
I don't know what the right way to fund open-source development is. Dual licensing has its share of failures. So does trying to make it a direct business. So do research grants. Partly it's just the tragedy of the commons. In my darkest days I wonder if open source is fundamentally doomed because it simply can't make the monetary incentives line up with good engineering practice.
Mariner was an attempt to upgrade the old Netscape 4 codebase that got cancelled. Gecko/NGLayout was the new rewritten layout engine. WaSP pushed for this cancellation:
This is full of TILs, and mind-bogglingly enlightening to read.
In many ways the Web feels like exactly the same place as it was 16 years ago (especially to read about the WSP xD) but things have gotten significantly better for the user and standards of late.
TIL that NS/Mozilla was really the thorn in everyone's side on the tech front, but M$ wore the blame for the Web's early history because of the antitrust cases... that's insane. Absolutely insane.......
Are there any binary builds of Mariner, newlayout and NGLayout I can track down?
Well, historically, when Microsoft was still in competition-mode, Internet Explorer was kicking Netscape's ass with the later versions.
With the first few versions they were playing catch-up, but if I recall correctly, IE 4 and IE 5 actually had more features and better standards compliance than the current Netscape versions, as did IE 6 at its launch.
Perhaps. But it's also worth noting that United States vs. Microsoft was coincident with the stagnation in Internet Explorer, with the case starting in 2000, during IE 5's tenure.
For better or for worse, I think the case had a lot to do with Internet Explorer's long pause. I often wonder how Microsoft's browser, its Internet services, and the company as a whole would be today had that case not been undertaken.
> Are you suggesting some other consequence of this case, leading to stagnation of IE development?
I think that you could see a lot of optimistic excitement from Microsoft in the idea of merging IE into Windows just prior to the antitrust case. There were experiments with using the HTML renderer everywhere in the OS, from widgets (the "Active Desktop" thing) to applications (HTML Help, even the HTML usage of Windows (now File) Explorer)... Admittedly, today we have mixed opinions of such experiments (and their often poor performance), but it is hard not to wonder what could have happened had Microsoft invested fully in that combined Windows/IE rendering platform, had they been less afraid of the antitrust repercussions...
We're finally starting to see HTML/JS/CSS "everywhere" application toolkits (it's a vertical slice in the "Universal Windows Platform", and then there are efforts like Electron and Cordova), and it's interesting to think that maybe some of that would have happened sooner in a world without that antitrust lawsuit. (Certainly the counter is that it would have been less standardized, but I don't think that is necessarily the case, either: it would largely have been different standards, though.)
> Are you suggesting some other consequence of this case, leading to stagnation of IE development? I can't see any...
I can't directly see any either because I don't know anyone at Microsoft, least of all on the IE team. But I think the coincident pause in IE's evolution is undeniable. As for a cause and effect link, that's just conjecture. To me, it seems likely that there were either formal business decisions that reduced the effort expended on IE or at least a psychological block that had much the same effect.
Look, I am a Mozilla partisan through and through. I've been using Netscape, then Mozilla, then Firefox for years. So in many ways, I applauded the outcome of the United States vs Microsoft case. But to consider the case as one of only upsides and no downsides seems a bit narrow minded.
I think the antitrust suit also scared Microsoft off from including antivirus software with Windows, which has protected the incumbents in the antivirus industry, but the monetization model of Norton and McAfee makes them little better than the malware they combat.
The path is always easier when there's someone there to show you the way, and a clear example of where not to go. Mozilla had a clear path forward with Firefox against IE, and had Navigator as an example of what not to do. Microsoft is now in that position with Edge.
I think this is a perfectly valid point. We can think of it like this: they are pretty competitive and have a shitload of resources, but they are not innovative. The moment they become the leader, they lose their interest in growing, even though they have a huge amount of resources, which helps them the moment they become the underdog again. I think this is a cultural problem at Microsoft.
They've had a separate research department for over 20 years or so, and if memory serves me well, quite a few innovative things came (and still come) out of there. So I think "they" is too general in your phrase - there is innovation (also look at the recent Surface line, etc.), but you're right in saying that it doesn't always make it to the market, probably because the "they" you mean is some part of management which sees money flowing in and thinks "hey, enough cash cows, no need to think about the future". Or something like that. And lately this has turned around again.
The papers coming out of Microsoft Research are a goldmine (relatively speaking as far as academia goes) for programming language and graphics research, just to name a couple of areas I'm familiar with.
Exactly, you correctly rephrased that.
I want to note that the last time I checked who works in R&D at Microsoft, I was literally blown away by the names. Maybe more big names than at any other company in the whole IT industry.
But the fact you mention remains correct: they are not trying to bring new and innovative stuff into people's lives. At least not when they are the leader.
I was also surprised to discover Microsoft Research's awesome technology. Actually, they've done a fair bit of licensing that research to other companies who go on to make great things with it. I always thought it was because they were pre-occupied with their own projects: Windows, Office, the Xbox, and of course supporting the vast .NET platform which remains the core of their business model: creating software tools and licensing them out to huge corporations, big big money!
Well, from what I've read on the internet, their culture is changing, at least at the grass roots level. A ton of people promoting Open Source & other similar stuff.
It's hard to say what this means for the higher levels, but many people think that Nadella's appointment says that Microsoft does want to change.
And luckily for US, Microsoft is and will be the underdog for a while at least, in most markets they are in: web search (after Google), browsers (after Chrome), mobile OSs (after Android and iOS), server-side OSs (after Linux), cloud stacks (after AWS).
And another, very important thing at that time: IE4/5 was much faster and more lightweight than Netscape. I recall using IE all the time just because it took NN forever to start up. The only contender on the speed front was Opera.
Yep. They're kicking everyone's asses in ES6 feature coverage. All the more impressive when you consider how they came from behind. http://kangax.github.io/compat-table/es6/
Engineer on the Chakra team here. As the blog post says, we are definitely interested in going cross-platform. Which platforms would you be interested in seeing first?
It's just very rare to use a Windows VM in a cloud service environment to deploy services.
This may change with the Docker support we are seeing promised. Powershell is definitely a workable remote shell. But it's not the case that this is sufficient today.
That's cool, but it only runs on Windows 10, which limits its usage and hence the community. A widely used non-V8 Node.js would be a great project in my opinion.
Give it time. When Node was new it didn't work on Windows, which limited its usage and hence the community.
Now, Node is probably the most reliable cross platform language host. Write something for node, nearly anything, and if it works on your OS it'll probably work elsewhere too.
There's no reason something similar couldn't happen with Chakra, especially now that it's open source
Just to clarify, you're talking about Chakra going cross-platform, not Edge itself? I think a few of these replies might be looking at whatever_dude's comment and getting the wrong idea.
OS X. I don't use it but I think it'd have great value. I often hear engineers talk about how they don't want to work with IE because they have to boot up a VM to use it. So...they don't even test it, they know nothing about it except for how IE used to be 10 years ago when they still used Windows machines.
That'd be the coolest thing since it'd encourage developers that shy away from MS technology to dive into it.
They're open sourcing the JavaScript engine, not the browser. A port to OS X would mean you could run your server-side programs there before deploying them to Windows servers.
I believe there are more Linux servers out there. Still a port to OSX would be useful because there are many developers with Macs that deploy to Linux.
I don't see how it'd help deploying server-side programs. Maybe that makes sense with IoT. And as a Node alternative/enhancement.
BUT, once you have the JS engine over, I could see them porting over the rest. But anyways, you could still run headless browser (or just Chakra) and run tests against it.
Yeah, but it's paid, needs internet access, and requires some sort of setup. Plus, most OS X devs would do this ONLY for IE. Which is a barrier to entry.
Downloading and installing IE browser on OSX is a much better solution. And this way, you have a browser that people can use casually as well. Since it has awesome ES6 support, that makes it even better for JS Devs.
It might perform as fast as Chrome. I certainly wouldn't rank it as stable or memory efficient. It hasn't been, in my experience.
I use Chrome as my primary browser on Linux and OS X. Tried to use Edge as primary on 10, but it's
a freaking HOG. I'm using Chrome as primary on 10 now, too.
What I'd love to see is Web MIDI support. It would be rad to be able to do a Windows universal app with ES6 and create a MIDI sequencer for my hardware synths. I can do that on Chrome OS and then package it as an Android app on Android 5.0+, but I can't do that with Windows universal, and also can't do that with iOS. Web Audio is supported across all three, but only Google and Opera so far support Web MIDI.
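For illustration, a minimal Web MIDI sketch of the kind of thing being described, using the navigator.requestMIDIAccess API that Chrome/Opera expose (the `noteOn` helper is made up for the example, not part of any API):

```js
// Only works where navigator.requestMIDIAccess exists (Chrome/Opera per above).
if (navigator.requestMIDIAccess) {
  navigator.requestMIDIAccess().then(function (midi) {
    var outputs = Array.from(midi.outputs.values());
    if (outputs.length === 0) return;
    var synth = outputs[0];

    // Illustrative helper: send a note-on now and a note-off 500 ms later.
    function noteOn(note, velocity) {
      synth.send([0x90, note, velocity]);                             // note-on, channel 1
      synth.send([0x80, note, 0x00], window.performance.now() + 500); // note-off
    }

    noteOn(60, 0x7f); // middle C
  }, function (err) {
    console.log('MIDI access denied or unavailable:', err);
  });
} else {
  console.log('No Web MIDI support in this browser.');
}
```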
I wonder if Microsoft is actually spearheading the implementation of ES6 in the browser, or whether they benefited from the timelines of the ES6 specification fortuitously lining up with the development of Edge - or maybe both.
Not sure what you mean by "lining up with"? The development of ES6/ES2015, and the development of all the browsers and their various JS engines are pretty much ongoing, all the time.
The Edge, Chrome, Firefox and WebKit teams are all working on ES6 compatibility, and releasing new versions pretty frequently. The Edge team are in the lead because they've implemented more features, faster.
Chrome/V8 was actually quite a way behind for a while, although they've caught up quite a bit recently. I believe the situation wrt V8 within Google was a bit messy for a while, as the original developers (Lars Bak, etc.) were more interested in championing Dart than implementing the latest ES6 features. Eventually, Google had to create a new team, based in Munich, to work on V8.
Of course, I'm merely saying that there might be some benefit that arises from implementing a specification (ES6) into a newer engine vs a legacy engine.
Ah, I see. The JS engine actually wasn't changed for Edge, just the rendering engine. Chakra has been around since 2009, and was used in IE9-11. It was one of a wave of new JS engines developed in response to the release of Chrome/V8, which destroyed the previous generation of JS engines in terms of performance (I'm sure the various browser vendors would say they always planned to create faster JS engines, but there's no doubt that the release of Chrome/V8 in 2008 accelerated their efforts).
> It was one of a wave of new JS engines developed in response to the release of Chrome/V8, which destroyed the previous generation of JS engines in terms of performance (I'm sure the various browser vendors would say they always planned to create faster JS engines, but there's no doubt that the release of Chrome/V8 in 2008 accelerated their efforts).
Well, TraceMonkey can at least legitimately argue to not be inspired by it: it was publicly announced prior to Chrome, and work obviously started on it before that.
No wonder: Microsoft needs "class" syntax in JS; they are a driving force behind TypeScript and ES6. Especially interesting, as JavaScript has prototype inheritance. The reason: all of MS's code is class-based, and they want it to look similar to C#.
Developers of large JavaScript codebases have come to realize that class-based OO is exactly what they need. Prototype OO sounds interesting in theory, but in practice it tends to not be what's needed. Projects that use it end up trying to poorly imitate class OO. This wouldn't necessarily be a problem, except it isn't done consistently. There are multiple approaches, some of them which don't mesh well with others. Class-based OO, on the other hand, tends to be much more consistent and well-defined. This consistency improves developer productivity, it keeps the code cleaner, and it allows for greater code reuse between projects. Prototype OO hasn't proven itself in practice, while class OO has again and again.
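To make the comparison concrete, here's a rough sketch of the same tiny hierarchy written both ways; the ES6 `class` form is essentially sugar over the prototype machinery underneath (the `Animal`/`Dog` names are just for illustration):

```js
// Prototype style (pre-ES6)
function Animal(name) {
  this.name = name;
}
Animal.prototype.speak = function () {
  return this.name + ' makes a noise.';
};

// Class style (ES6) -- same prototype chain underneath
class Dog extends Animal {
  speak() {
    return this.name + ' barks.';
  }
}

console.log(new Dog('Rex').speak());                                    // "Rex barks."
console.log(Object.getPrototypeOf(Dog.prototype) === Animal.prototype); // true
```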
Not just that - there are performance implications as well: with class properties the browser can use a fixed memory allocation instead of a dynamically resized one for plain objects, which lets it apply more efficient algorithms internally for the same operations.
In principle, in many cases it's possible to allocate the right amount of space for an object's properties, under the assumption that most objects from a single constructor gain properties in the same order.
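A rough sketch of what that assumption buys the engine (the "shape"/hidden-class behaviour is engine-internal, so treat this as illustrative rather than guaranteed):

```js
// All instances get their properties in the same order, so the engine can
// give them one shared internal shape and a fixed-size allocation.
function Point(x, y) {
  this.x = x;
  this.y = y;
}

var a = new Point(1, 2);
var b = new Point(3, 4);   // same shape as `a`

// Adding properties in different orders (or after construction) forks the
// shape, which tends to push property lookups onto slower paths.
var c = { x: 1 }; c.y = 2;
var d = { y: 2 }; d.x = 1; // different shape from `c`
```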
I have been doing a fair amount of JS work in an app using Angular this year. I don't use much inheritance of any kind, mostly FP techniques and "objects" (associative arrays) as aggregates.
If you use "closure" variables instead of "this", it's pretty easy to graft functions from one aggregate to another for code reuse.
"JavaScript will never have mandatory types" says Brendan Eich, who doesn't work for Microsoft last I checked, which I mean to say Microsoft is not the driving force behind ES6.
That there will never be mandatory types is obvious: it needs to stay compatible with existing JS code. Gradual typing is entirely plausible (and I think highly likely), and from there it's easy to add some static lint that ensures a codebase is statically typed.
I find myself hoping Microsoft makes a comeback and they are really doing a lot to win developers which I think is the right move. Obviously, they are a huge platform and developers inherently will be using it but they have taken a lot of great steps like open sourcing this engine as well as other projects.
The Code editor they released is built on Atom Electron and seems more performant than Atom in the few experiences I have had switching between them.
If they can continue to gain trust in the community and improve their UI they could become great again. You can tell they have thought about how to do this. A few years ago now, I remember the guys from the IE team did an AMA about the new Explorer; IIRC it was IE 10. They talked about cross-browser compatibility and wanted developer feedback.
I am not sure if they are actually an "underdog" but I find myself feeling like that, and hoping they can get it together.
Ditto - and I agree, underdog doesn't quite feel like the right word. It's more like yesterday's champion coming out of retirement to deliver some good old-fashioned ego checks on today's cocky up-and-comers. Either way, it's fun to watch.
Wow, I'll admit that I haven't been looking at Edge simply because of the IE stigma, but this blog post impressed me. 90% ES6 support? More so than Babel? Awesome. And it's getting open sourced! I hope to see it ported to the Unixes. Perhaps Servo+Chakra could be a thing?
Sounds like a lot of work (not to say it's not possible or interesting). Our bindings code[1] is pretty complicated already. It depends highly on how Chakra deals with things internally -- if it's similar to SpiderMonkey; I'd be interested in having a look. Might become a fun project :)
Without a reference open source DOM implementation (which we have for SM -- Firefox's DOM), we'd also need good docs for the Chakra API. No idea if that exists.
Initially I thought Edge would just be IE with (yet another) skin, but I have to say, it's absurdly fast and lightweight. It's still not my main browser as I'm now too deeply entrenched in Chrome for development, but it makes me really happy to know they're actually pushing the web ahead.
I find its UI quite annoying and its tab management really poor, but technologically it's really solid. I've noticed it seems to handle HTML5 video more smoothly than Firefox (I rarely use Chrome so I can't really comment on it).
Servo already uses a JS engine in C++ because a Rust JS engine is a huge undertaking in itself (see [1])
It does make sense to try out Chakra or V8 for Servo. It's probably a lot of work, though. And there may not be a net gain out of that (We have access to in-house spidermonkey know-how, none of that for the others).
I have a vested interest in creating a SM -> Chakra adapter to swap out SpiderMonkey and be able to compare how it does, because Chakra is also an interpreter and not JIT-only like V8. Such an adapter would also make playing with Servo -> Chakra possible. I haven't played with Servo, but is there a list of all the JSAPI calls that it needs to function, or can you easily dump that list?
My bad, I meant JavascriptCore or the JS engines often paired with WebKit. What I meant was, why pair Servo with Chakra and not any of the other Javascript engines?
Servo currently uses SpiderMonkey as its JavaScript engine (SpiderMonkey is the Firefox JS engine). Sure, a JavaScript engine written in Rust would be useful at some point, but right now that's not on their roadmap.
Servo contributor here: Haven't heard of this plan (still could exist), though we do (mostly jokingly) toss the idea around every now and then. There are a couple of not-production-ready Rust JS interpreters floating around (https://github.com/swgillespie/rjs/ is the latest I've seen) though.
Stuff like JITs can't really reap Rust's safety+performance benefits[1], and overall the same might be said of a JS interpreter, with all the garbage collection and stuff. The other thing is that Spidermonkey/Chakra/V8 have had years of optimization and tweaks, starting from a clean slate would be _very_ hard. With Servo's small team this is an impossible task, but with a larger, dedicated, team, there's a chance. shrugs
[1]: though it's possible they will still be safer than the C++ counterparts, which is still good. The question then becomes, how much is that increase in safety, and can it justify a whole rewrite?
It is true, though, that a lot of the danger isn't in the JIT but rather in the bindings (e.g. the native implementation of the Date object, or Typed Array Buffer, or what have you), which Rust's safety features could help with.
I'd like to see Node.js using Chakra by default. V8 developers have shown that they don't care much about Node.js; they are more interested in Chrome. MS has shown more interest in Node.js than Google, and I'm sure it will be better for all. Fingers crossed.
This is definitely a problem today, but NAN (https://github.com/nodejs/nan) which is now the recommended way to build native modules, offers a way out. That project provides a stable API to insulate module developers from changes between v8 versions.
Sometime down the line, a NAN-spidermonkey or NAN-chakra project might become feasible.
Sort of -- in practice, we found ourselves instead playing catchup to NAN (many changes across Node 10, 12, 4, etc.). It insulates from minor changes, and increasingly, part of that has been Node waiting longer and longer between v8 changes.
> which is now the recommended way to build native modules, offers a way out
On paper. In reality, HELL NO. I maintain node native addons, and the thing is, NAN is great, but they simply cannot foresee what will break next in v8.
Once something breaks, they provide a node-version-independent feature-shim, but every time there's a new version of Node, I DREAD the work I'm going to have to do to maintain my add-ons.
It's simply less work to cross compile C++ to JS w/ Emscripten or write it in JS (or higher level language and compile to JS) in the first place.
That's the interesting part: even with V8 updates, NPM packages get broken. I don't see the migration to Chakra as something different from migrating to a newer V8 engine, which also breaks a lot of stuff.
I have a somewhat off topic question: is there anything in the design of Javascript that mandates single-threadedness? Could any Javascript engine implement threads?
I'm asking because I'm wondering if Node.js's evented approach is the only way to do things.
If you allow threads to share memory arbitrarily you need to add locking to all internal VM structures, which is going to be a significant slowdown.
Also it's not (any longer) considered good practice to have languages that allow mutable memory sharing since that makes software unreliable, so it's not really a good idea.
Without arbitrary memory sharing, multi-threading is already supported with web workers.
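For reference, a minimal sketch of that model: everything crosses the worker boundary by message (structured clone), nothing is shared. The `worker.js` file name is just for illustration.

```js
// main.js
var worker = new Worker('worker.js');

worker.onmessage = function (e) {
  console.log('sum from worker:', e.data);
};

worker.postMessage([1, 2, 3, 4]);  // the array is copied, not shared

// worker.js
// self.onmessage = function (e) {
//   var sum = e.data.reduce(function (a, b) { return a + b; }, 0);
//   self.postMessage(sum);
// };
```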
Agreed! FWIW this is why I made Operative. It gives you a way of writing "inline" JS that utilizes web workers (caveat: not actually inline; no scope/context access of course). It provides good support across browsers and fallbacks for envs where web workers don't exist (and ~all the in-between cases): https://github.com/padolsey/operative
Why would anybody ever want to JSON.parse typed arrays from web workers when the parsed data (or unparsed strings) can be passed around directly? Strings are immutable and are not copied when you pass them around. I don't see your point.
Remember: web workers communicate (to the best of my knowledge) using a full serialization/deserialization of message objects (which is why they added transferable objects). So, to create a one-to-many broadcast mechanism, you'd do exactly what I described. Additionally, I am unsure, but I could easily see immutable strings being copied nonetheless between web workers because of the desire to give every worker a private heap.
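For what it's worth, a sketch of the transferable-objects escape hatch mentioned above (the `worker.js` file name is hypothetical): an ArrayBuffer listed in the transfer list is moved rather than cloned, which is cheap but detaches it from the sender.

```js
var worker = new Worker('worker.js');
var buffer = new ArrayBuffer(1024 * 1024);

// Without the second argument the buffer would be structured-cloned (copied).
// With it, ownership moves to the worker.
worker.postMessage(buffer, [buffer]);

console.log(buffer.byteLength);   // 0 -- the buffer is now detached here
```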
I still think it's faster to send the same message to all than to use JSON.parse on each (not to mention much more compatible). Since each worker is going to make a copy anyway (with JSON.parse or by receiving a message), why does it matter?
Also, assuming what you say is true, what's the problem? It's much easier for the JS implementations to synchronize things only with a specific type of typed arrays than sharing all kinds of GC-managed data structures.
There is nothing in the language itself that allows multi-threading. Going forward, they are adding support in the language for the async keyword, similar to F#. Any way to achieve parallelism would have to be from APIs, e.g., like WebWorkers in the browser.
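For context, a sketch of what that syntax looks like (at the time of writing it is still a proposal, and `fetchUser` here is just a hypothetical promise-returning function); note it is about making asynchronous code read sequentially, not about running JS on multiple threads:

```js
async function showUser(id) {
  try {
    var user = await fetchUser(id);   // suspends without blocking the event loop
    console.log(user.name);
  } catch (err) {
    console.log('request failed:', err);
  }
}
```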
No! JavaScript the language and threads are not related. JavaScript implementations on top of the JVM, like RingoJS, use JVM threads to execute JavaScript in parallel. In that case you get both event-based execution within a thread and multi-threading at the same time.
The JVM implementations of Javascript (e.g. Nashorn) support multithreading. Imho it's not a good thing, because all existing Javascript code is written with singlethreading in mind and lots of stuff will break if you use it from multiple threads. Multithreaded code for Nashorn requires the use of JVM synchronization primitives, which are then not supported by other Javascript engines.
It used to be possible to write Firefox extensions that used multi-threaded JS; however, like any other language, accessing the DOM was not threadsafe (so you can't have a window be your global object, and therefore they needed to live in separate source files). As JS lacked native support for threading, you also had to be very, very careful and often ended up with less obvious threading issues anyway.
In the last few years SpiderMonkey (Firefox's JS engine) has dropped support for this more and more, and these days you can't anymore. But that's a consequence of the engine implementation, and not the language.
Mozilla is working on a spec for something called SharedArrayBuffer, which will allow Workers other than the main thread of execution to share memory.
The reason for this is that as soon as you have separate threads sharing memory, it introduces non-determinism into the mix. When developers use synchronization mechanisms incorrectly, or not at all, this can lead to deadlock between threads.
This must not be allowed to occur on the main thread shared with the rendering engine.
So keep your eyes out for SharedArrayBuffer.
The event loop is very important to the existing language semantics and isn't going anywhere.
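A rough sketch of how that would look, based on the proposal as it stands (the `worker.js` file name is hypothetical, and the exact API may still change):

```js
// main.js
var sab = new SharedArrayBuffer(4);        // 4 bytes = one Int32
var shared = new Int32Array(sab);

var worker = new Worker('worker.js');
worker.postMessage(sab);                   // the worker sees the SAME memory

setTimeout(function () {
  console.log(Atomics.load(shared, 0));    // observes the worker's write
}, 100);

// worker.js
// self.onmessage = function (e) {
//   var shared = new Int32Array(e.data);
//   Atomics.store(shared, 0, 42);         // visible to the main thread
// };
```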
My speculation is that before Node stepped into the picture, primary use-case for JS was browser, hence UI manipulation. That usually prompts for a single thread that can update UI, unless you're willing to introduce a lot of mental overhead when dealing with synchronization primitives.
I don't think there is anything preventing JS running shared memory threads. Apart from all the existing code and libraries which aren't thread safe, but that's the case in many languages.
Microsoft is slowly but surely winning me as a fan. Keep doing things that matter, show that you're committed to the open source community, and continue to help push the web forward and I think nothing but good things will come from this.
Wow. They're really serious about changing their philosophy aren't they. Using Github for their stuff, making and open sourcing Visual Studio Code, other stuff I can't remember, and now this.
They're aware of how pervasive open source is in the current tech scene and want to be a part of it rather than fight it. Not many startups (except for DreamSpark, BizSpark, and Seattle/Redmond-based ones) build on the MS stack nowadays. MS audits are dreaded (thanks to Oracle for showing the way) and a diversion for a growing startup. It's better to steer clear of MS & other proprietary tech whilst building a startup.
They're embracing startups since they need them and are changing their business models to cater better to smaller startups than the behemoth enterprises with Office 365, Azure, etc.
If they go full SaaS I would not be surprised if they go open source on Windows itself, seriously. I think for that to happen Nadella needs one big win/turnaround and the shareholders might be on board.
Well, it seems feasible if they take out enterprisey features. For example Windows without AD support could be an option. Or without remote desktop support.
The hobbyists would use it as is but no company would touch it without support anyway.
Unless my memory is playing tricks on me, a couple of months back, Mark Russinovich (of SysInternals fame) gave a talk somewhere and mentioned that they were looking into that.
Sounds kind of strange to say it out loud, though. ;-)
These programs aren't magic. They use the same kind of constructs you use every day to write your programs. They use some different algorithms, but you can look those up in books and in existing engines. People who work on these engines started where you started and there's no reason you can't pick up all the skills over a few years.
The problem with JS engines nowadays isn't so much the language itself (or its core library); it's the absurd amount of optimization needed to perform well in a wide variety of possible setups, all while the browser is busy doing DOM compositing, loading yet more JS code, etc. Things like a JIT are not strictly necessary if you just want your language to "run", but they are an important part of an optimized system.
To me it seems the past ~5 years of JS engine development have been focused on which optimizations to implement, rather than on parsing or implementing core library features.
Building a javascript engine is super simple. But to build one that has performance on par with the current crop will take you - as a single person - a lifetime and by then the state of the art will have moved on.
Actually, for JS engines this is a much smaller problem than for other parts of the web platform. There are two nasty warts I'm aware of: the "function inside if" mess, and the fact that the spec says enumeration order is undefined but actual web pages depend on some things about enumeration order that all browsers implement; chances are this will make it into the spec at some point. But by and large JS engines agree with each other and with the spec. Much more so than, say, DOM or CSS implementations.
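To make those two warts concrete, a quick sketch (the behaviour shown is the de-facto one engines agree on in practice, not something the ES5 spec guarantees):

```js
// 1. Function declarations inside blocks were never legal ES5, but every
//    engine accepted them, each with subtly different hoisting/scoping rules.
var someCondition = true;
if (someCondition) {
  function f() { return 1; }   // what exactly this means varied by engine
}

// 2. for-in enumeration order is formally undefined, yet real pages rely on
//    the de-facto order: integer-like keys ascending, then other string keys
//    in insertion order.
var o = { b: 1, 2: 2, a: 3, 1: 4 };
for (var k in o) console.log(k);  // in practice: "1", "2", "b", "a"
```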
[[OwnPropertyKeys]] is not invoked by a for-in loop. [[Enumerate]] is instead, and while it defines that the set of iterated names is the same as that returned by [[OwnPropertyKeys]] it explicitly says that order is not defined.
In a world-ending scenario, 'having a working javascript engine' is likely to be surprisingly far down the list of priorities. I think we might actually need a new level on Maslow's hierarchy of needs, just above 'self actualization', for 'lightweight scripting.'
There are many JS engines which are extremely small compared to the big guys used in browsers. Most of them tend to grab a subset of the language and implement that.
There is even a javascript interpreter written in javascript which can be a good starting point to learn a bit how it works https://github.com/jterrace/js.js/
From the copyright headers he worked on it starting in 2009 and had the first public release in 2014 (actually 2010, see comment below). He certainly worked on other stuff in between too. This is also not a toy implementation. While I have no idea how it compares to state of the art engines, it does feature a whole bunch of optimizations and a JIT compiler.
Friends of mine used "Create Your Own Programming Language" to get started learning about interpreters:
http://createyourproglang.com
I personally learned with SICP (But reading this book isn't just about interpreter, it will make you a better programmer and blow your mind in so many different ways. I wouldn't say it's for experts only, but this isn't the kind of book that beginners would get excited about. However, if someone is serious about improving as a developer, I can't think of a better book):
http://www.amazon.com/Structure-Interpretation-Computer-Prog...
Finally, (How to Write a (Lisp) Interpreter (in Python)) by norvig is a great read if you're getting started with interpreter: http://norvig.com/lispy.html
* The reason the two last links are lisp related is because writing interpreter in Lisp is really straightforward (Since the code is the AST). Similarly, if you wanted to learn about memory management, a language like assembler or C might be more suited than say Python.
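To illustrate the "code is the AST" point in JavaScript terms, a toy evaluator over nested arrays (nowhere near a real interpreter, just the shape of the idea):

```js
function evaluate(expr, env) {
  if (typeof expr === 'number') return expr;          // literal
  if (typeof expr === 'string') return env[expr];     // variable lookup
  var op = expr[0];
  var args = expr.slice(1).map(function (e) { return evaluate(e, env); });
  if (op === '+') return args[0] + args[1];
  if (op === '*') return args[0] * args[1];
  throw new Error('unknown operator: ' + op);
}

// (* 2 (+ x 3)) with x = 4  =>  14
console.log(evaluate(['*', 2, ['+', 'x', 3]], { x: 4 }));
```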
While at university I attended a "learn Scheme" summer course and it was taught by implementing your own Scheme interpreter (in Scheme). It is not that hard to implement a programming language, but making it perform well might be a completely different ballpark.
Edge is certainly so much faster than Chrome/Firefox for JS processing that I wish I could use it on Linux. Looks like that might be happening. Really great news.
I didn't know Node.js could use anything but v8. This is also very nice.
I'm not sure where the hangup is, but I've found that Edge is fast as hell for initial page startup, but lags behind quite a lot when under heavy load.
I've seen the benchmarks but in my experience it just... lags...
For example, take the kangax ES6 compatibility table [1]. In Chrome, clicking a column takes less than a second; in Edge (I'm on an up-to-date Windows 10 machine, no preview stuff) the page loads faster than in Chrome, but it takes 3+ seconds to switch between columns.
Even some of the stuff I've written acts similarly, and I can't figure out why.
I know between Chrome and Firefox for a particular project of mine (which is a bit old now), Firefox ran raw JS faster, but Chrome could update its DOM faster. FF would spend much more time when many updates needed to happen in a large column of divs (10k+). Raw JS speed isn't everything.
I get that; the Kangax table was just an example I literally ran into moments before.
But for a more "pure" js example, I have an app which does some image processing in the browser using the canvas ImageData stuff and typed arrays.
For whatever reason my test case completes in about 3 seconds in chrome, 3 to 4 seconds in FF, 11 seconds in IE11, and 6 seconds in Edge.
I've profiled the hell out of it and I just can't figure out the reason for it, but it's there. (And as a side note, having the dev tools open in Edge obliterates JavaScript performance; my test case was taking 30+ seconds to run when I had the dev tools open, and it took me longer than I'd like to admit to figure that one out.)
I have a feeling the problem is that I'm optimizing for V8 because I know it the best, and I'd much rather not do that if possible.
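For readers wondering what kind of code is being talked about, here's a sketch of a typical ImageData/typed-array loop (not the poster's actual app); differences in how engines optimize this inner loop are a plausible source of such timing gaps:

```js
var canvas = document.createElement('canvas');
canvas.width = 1024;
canvas.height = 1024;
var ctx = canvas.getContext('2d');

var image = ctx.getImageData(0, 0, canvas.width, canvas.height);
var data = image.data;                  // Uint8ClampedArray, RGBA per pixel

for (var i = 0; i < data.length; i += 4) {
  // naive grayscale conversion
  var gray = (data[i] * 0.299 + data[i + 1] * 0.587 + data[i + 2] * 0.114) | 0;
  data[i] = data[i + 1] = data[i + 2] = gray;
  // data[i + 3] (alpha) left untouched
}

ctx.putImageData(image, 0, 0);
```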
Chrome's DOM and painting engine is faster than Edge's. That being said, Edge just came out recently and is still bogged down by some IE 11 legacy, like the Trident layout engine. I'm sure they are really invested in making it faster.
Think about it: Microsoft really wants Bing to succeed, and for that to happen, Edge has to succeed, so I'm sure they are putting their best minds behind it. It's a matter of time.
> If it fails when they paste via right-click menu, it's actually a known issue/design decision on Drive's side.
I expect that if this were the problem, OP wouldn't be mentioning it.
When you attempt to use the right-click menu to copy and paste, you get a clearly-worded message box that tells you that you need to use the keyboard shortcuts, and which keyboard shortcuts to use.
I know a guy who works on core web-dev stuff at Google.
He tells me that Edge is often very quirky in very subtle ways. He's had to spend many, many hours diagnosing, reporting, and working around quirks in Edge. These quirks sometimes get fixed and sometimes do not, but are often not present in Internet Explorer.
We have a fairly large-ish JS codebase at work, and from what I've seen so far, Edge's JS was never a problem. SVG rendering, on the other hand, is horribly broken currently for some cases.
That being said, since our JS is generated, it's purely ES5 since it still has to run on IE9. So maybe the weird cases are in the more modern JS parts we don't use.
Because you may want to implement things you care about, not necessarily the things the V8 developers care about.
Unless you mean "why write your own engine instead of forking V8", in which case the answer might be that if you have the time to do it, it's easier to become expert on something you created yourself rather than an existing large complex system. And while you're at it you can make sure the system you create is good at solving the problems you mean to solve, which the existing system may not be.
A concrete example of one of the design tradeoffs here: V8 doesn't have an interpreter mode, just several levels of JIT. Every single other browser JS engine does have an interpreter, because it turns out that for cold code (which is most code) that's both faster and less memory-intensive than a JIT. Not to mention allowing your engine to run on hardware for which you haven't created a JIT yet.
It was years later. V8 shipped in 2008, having been in development for (IIRC) three years. Chakra got its first public preview release in 2010, and shipped in 2011—I strongly suspect it started after V8 first shipped.
What in blazes?! Okay MS, that's an impressive step.
I'm waiting for the first ports to Linux or, hell, a native port of IE... given the trend, it's not unreasonable that MS will open source a load of stuff.
>It's a term that has two accepted meanings, the other meaning being the total opposite of the intended, making it a meaningless "flavor" word.
No, it's a term that has two accepted meanings, and context makes perfectly clear which one is meant.
Like "bad" (which can mean actually bad, or "great" as slang, both meanings in the dictionary). In fact many common terms have that characteristic, including fast ("moving quickly" or "fixed in place"), dust (which also means removing dust), etc. https://en.wikipedia.org/wiki/Auto-antonym
>Leave it out of phrases from now on, and the phrase will not lose any meaning or clarity.
It's not meant to add to the core meaning of a phrase. It's meant to emphasize something in it. Without it the literal (sic) meaning would be the same, but the intended impact of the phrase would be lessened.
It's like: "I don't care for your bonus" and "I don't care for your fucking bonus". The meaning would be the same without the expletive but the intended emphasis and impact lost.
Something that Dickens and Shakespeare (who used it that way among tons of others) understood.
Quote Shakespeare and/or Dickens so I can see whether their use of that word improves the passages, or carries any signal at all for that matter. Otherwise I'll just assume that since they wrote a lot, there's going to be some mediocre bits and padding in it. For all I know, they might simply not have paid attention.
Alternatively you can just ask any linguist, and they'll assure you that using "literally" for effect is not just ages old and used by lots of prominent writers, but also an absolutely legitimate use of language, typical of the inventiveness that moves language forward, and a use that has many more similar examples that people who complain about it won't think twice about using.
>Otherwise I'll just assume that since they wrote a lot, there's going to be some mediocre bits and padding in it. For
The thing is, the question wasn't about it being good literature, but about it being legitimate use of language. It was more: "prominent writers use it too, not just ignorant buffoons as idiots claim" and less: "it belongs to good literature".
As for it carrying any signal, that's not something that can be refuted. It ALWAYS carries the signal of "emphasis" over the same phrase without it.
> It comes down to that oft-spoke mantra – language changes. Our job is to document that for better or for worse. Except for us, there is no worse. We have to look at language objectively and dispassionately.
That's a bit like the difference between a helper taking blood samples and the doctor, not to mention a loved one. Linguists can be lovers of language and philosophers, but they don't have to be, they just measure what's there. Language is made up by billions of people using it, and I consider my rejection of parts of it a curative effort. So I ask, with as much right as any other single human being using language: how does this particular "invention" move language forward? That other people might say it does, without anyone (I could find) giving an argument how it does, doesn't really help.
Where, exactly, is "forward", here? Downstream? Even not gaming anymore, I noticed that "kids these days" tend to call low frame rates "lag". I still know what they mean, but I also consider it a regression and impoverishment of language and, more importantly, thought. And while it's a silly example, I find it fascinating in a morbid way how things just get picked up and passed on. I observe similar dynamics with more serious things, like how words like "terrorism" or "war" get used. I think what it all really boils down to is that power in its various forms doesn't (feel the) need to justify itself, and likewise people who say things most people have no problem with don't ever have to stop to ask themselves what they are saying, and don't feel like they owe anyone going against the grain an explanation. And yes, nobody owes me one, but I'd still like one.
When someone asks me "Did you meet Peter at the party?", and Peter happened to be the person opening the door when I rang the doorbell at the house where the party was, I might say "He was literally the first person I met there.". But even that "use" is just because we've gotten so used to not meaning what we say how we say it, having to flag when we do.
> As for it carrying any signal, that's not something that can be refuted. It ALWAYS carries the signal of "emphasis" over the same phrase without it.
Emphasis can mean two different things. Saying "I don't care about your sports car" vs. "I don't care about your sports car" could mean that the speaker cares about something, it's just not the sports car. It's emphasis as in changing weights in relation to other weights.
But then there is also simply being louder or gesturing more wildly, i.e. if "blown away" means "very impressed", then "literally blown away" means "really very impressed", right? But when we start "emphasizing" things that didn't actually happen the way we phrase them by going out of our way to say that they did happen the way we phrased them, we might end up having to use "actually literally" or something to flag the things that indeed happen the way we phrased them. Rinse, repeat, and I still don't see the point of even the first iteration.
Earlier I read a sentence I consider very true and important:
> When daily life requires turning a blind eye to the falsity of countless things we’re told, it weakens the power of language to sort truth from fiction.
This also applies to all the little exaggerations and cutesy things. They do add up. And yes, this is old as dirt, just like doublethink predates "Nineteen Eighty-Four". Thinking and talking clearly and truthfully is exceedingly rare; I myself can only do it in short bursts when I concentrate. But that doesn't mean it's not generally something to strive for, or that it's something to take lightly. We inherit and modify the tool set future generations will start out their attempts to grasp reality and/or rationalize their own atrocities with, depending on how well we do our job. We, like the previous generations, are abysmal at it. That someone has a brain fart is one thing, but that it then has centuries of tradition and makes it into the dictionary, that's our collective fault, and that everybody is in on it doesn't justify it in my books. That's just one more for the pile, really. There are so many things "most people do" or "most people are fine with", and they're not fine. And even supposed hypocrisy, i.e. me making those mistakes myself, doesn't change that: whenever I do catch such things, I edit it. And if I had more time, this comment would be a lot shorter :P
>That's a bit like the difference between a helper taking blood samples and the doctor, not to mention a loved one. Linguists can be lovers of language and philosophers, but they don't have to be, they just measure what's there. Language is made up by billions of people using it, and I consider my rejection of parts of it a curative effort. So I ask, with as much right as any other single human being using language: how does this particular "invention" move language forward? That other people might say it does, without anyone (I could find) giving an argument how it does, doesn't really help.
One problem is that you could say it with any previous form of the language. Tons of the words you take for granted and accept evolved in a similar way. So why is "literally" bad and "awful" (worthy of awe originally) ok? The only honest answer would be "because it happened before my time, and I was always used to that".
A second problem is the question itself: "how does this particular "invention" move language forward?". If it's used by the masses (as opposed to being a random error by some person), then it moves language forward, period. Languages don't move forward by design, but organically, by people adopting this or that change or new structure. And for that to happen, the change needs to respond to some need.
The need doesn't have to be rational and kosher, e.g. "that word is the most elegant way to minimally convey such a meaning". It just has to serve the purposes of what it's adopted for, namely to help speakers convey their message (including meta-information like impatience, anger, emphasis, scorn, passive-aggressiveness, affection, concern, etc).
>Where, exactly, is "forward", here? Downstream? Even not gaming anymore, I noticed that "kids these days" tend to call low frame rates "lag". I still know what they mean, but I also consider it regression and impoverishment of language and more importantly thought. And while it's a silly example, I find it fascinating in a morbid way how things just get picked up and passed on.
The mistake here is that you have a preconceived notion of more detail translating to "better language" where the benefit in this case would be something else (e.g. succinctness or the ability to unify different but similar phenomena under a common term -- that is: abstraction.). Those are all legit benefits, and language needs to have mechanisms for that too (like we collapse all different kinds of cars, motorcycles etc into 'vehicles').
In this case, communication between gamers for such a trivial matter doesn't need much extra detail beyond "lag". OTOH, a team of game programmers would have a much expanded dictionary to discern between lag, low frame rates, various kinds of latency, etc.
>But then there is also simply being louder or gesturing more wildly, i.e. if "blown away" means "very impressed", then "literally blown away" means "really very impressed", right? But when we start "emphasizing" things that didn't actually happen the way we phrase them by going out of our way to say that they did happen the way we phrased them, we might end up having to use "actually literally" or something to flag the things that indeed happen the way we phrased them. Rinse, repeat, and I still don't see the point of even the first iteration.
The point of any iteration is that used language ("blown away", for example) loses its impact (intended impression), because people are used to hearing it. When that happens you need to patch it with extra qualifiers to give the same strong impression that you want.
But we're going too far: we don't need to justify it. As with anything else in language, the fact that it happens is justification enough. It's like evolution -- you can't say a previous form of a language/organism was "better", but what's better changes with the environment and the times.
You could say that of individual organisms (individual speakers who misuse language in some ad-hoc personal way), but you can't criticize new _evolved_ forms of language as used by the masses.