To everyone chanting "Embrace, Extend, Extinguish", I'd like to remind you that Node proper was a golden egg sitting under Joyent's goose. When Joyent decided they didn't want to embrace anymore, the community took the extinguisher out of their hands. Granted, Microsoft is in a more similar position to Google than to Joyent, but the point of the argument is that the community has driven the language and the runtime forward, not the corporation backing it.
Unlike most of Microsoft's other EEE attempts, this time they're putting their code where their mouth is and putting it all up under MIT. There's little you can do to get more open and honest than that. If the community doesn't like the direction MS is taking Chakra, they're free to take it in a new direction. Hopefully it doesn't come to that, but it's an option.
>When Joyent decided they didn't want to embrace anymore, the community took the extinguisher out of their hands
Joyent wound up embracing io.js, and Node.js is moving at a very fast pace now while maintaining LTS branches in parallel. It's a very well managed project, in my opinion.
Aside, how come anything remotely questioning of Microsoft when there's an announcement is met with downvotes?
> Aside, how come anything remotely questioning of Microsoft when there's an announcement is met with downvotes?
I don't think it is. However, I'll downvote criticism that isn't actually based on anything other than "well, you know, it's Microsoft so they won't", "remember extend, embrace, extinguish?" or "M$", etc. etc.
I've opened a few issues and sent some pull requests along to various MS projects (a couple of bits shipped).
I've found them generally to be quite supportive and encouraging in the vast majority of places.
I've been unacknowledged on a couple and have felt unwelcome in only one instance. (In that one case I'm pretty sure the words we were using were all English, but I'm not sure we had the same meaning... I walked away from that one feeling dumber than when I started.)
All said and done, I have been wildly pleased with my experiences as a Microsoft community contributor on GitHub... though I would say that there is some variability as you cross teams.
I mean that the things that they were saying were so complex that I was completely unable to understand a single word of it.
That's a bit overstated too, but the short version is that they confused me to the point that I didn't think I'd be able to make a valuable contribution to the project.
To break down my phrasing above, "I know you're speaking English, but I don't know what you're saying" is a phrase I've heard used but I think it isn't common.
> how come anything remotely questioning of Microsoft when there's an announcement is met with downvotes?
Selection effects! Posts about Microsoft doing something cool will attract a readership in which Microsoft fandom is over-represented, compared to the average readership of HN.
I don't agree with your characterisation. GP was complaining about downvotes to anti-MS comments, and that's nothing to do with 'attracting a readership'. More like the specific criticism of Microsoft demonstrated in this thread so far is the same tired old crap and not relevant to the actual topic.
> putting it all up under MIT. There's little you can do to get more open and honest than that.
I disagree. Node was OSS-licensed and hosted on GitHub back in the Joyent days, but the inner circle was still very much closed. PRs languished for months or even years. The leap from there to here for Node is, arguably, bigger than the leap from closed-source to open. Evidence: io.js had to happen.
Forking is always possible, but it was only practical because the community could plausibly claim to be able to put as much or more effort into building out their fork than Joyent could theirs.
Microsoft is a much, much, much bigger company than Joyent. If they decide they want to take things in their own direction, they can throw an awful lot of programmers at making that happen. Which would make standing up a fork that could plausibly claim to be at least as actively developed as Microsoft's much more difficult.
And there is evidence that this is true. Node itself has had to make major changes to its own code to account for unilateral V8 changes, and Node really has had no choice but to suck it up and make corresponding changes. They could fork V8, but it's not likely since Google is always going to be able to throw more resources at it.
Forking isn't possible when the code is not there, and when the company that created/copyrighted the code is openly hostile to forks. By them releasing under MIT, this issue is resolved.
As for the manpower behind Microsoft, git has made it very easy to merge changes between relatively similar source trees - in the cases where they make large important changes, those can always be merged in, and usually without too much work involved.
The only way they could succeed with something like that would be for them to go closed-source again, in which case you're still better off than if they never went open in the first place because you still at least have some code to work from.
Node isn't a very complicated piece of software though... 30,000 lines of C++ and 20,000 lines of Javascript. We're talking about a top tier Javascript engine here that weighs in at closer to half a million lines (only a fraction smaller than v8) where open source efforts are already split over three other mature engines.
Great stuff. I've worked professionally in the .NET development ecosystem for almost a decade and I've never been so enthusiastic about the direction that Microsoft is taking. Their embracing of open source practices is very positive and will help to retain and maybe even attract developers. You just don't have to worry about licensing any more this way. You can actually sit an MCP exam in MS product licensing, it's that complicated.
This announcement is probably timed to coincide with NDC. There were lots of similar open source announcements at a previous MS conference I was at. Although it was a bit weird due to the time difference on the embargo.
I might be wrong, but I think the attitude change really started with .NET. I'm not sure that many people remember the "shared source" version of .NET. I worked at Corel, which was contracted to work on it, at the time. Although I didn't work on the project itself, I interacted with quite a few people who did.
I'm not going to pretend that MS "got it" at the time. "Shared Source" was an insult to everyone. However, internally you could see light bulbs going off in people's heads. People who worked on the shared source version were proud of what they were doing and were really eager to have people look at it. The fact that "shared source" essentially crippled it to the point that it would be useless to everyone was surely a thorn in those people's sides. It (and every other shared source project) died in obscurity (thankfully).
I often wonder if this was one of the seeds that eventually sprouted inside MS to make them start to take open source seriously. I even wonder if one day we will see them embrace software freedom. Well... I can dream can't I ;-)
MS is betting that their services division(s) will be the future of the company... from Azure to Office 365, XBox services, etc. It probably is the safest bet given the monumental shift in computer/online usage away from typical desktops to phones/tablets and their middling encroachments into that space.
MS is being open because it makes sense to, and it's the only way to maintain mindshare in the near future. Azure's a decent enough offering, but AWS and GCE are right there as well, each with advantages over the others. I actually like Windows (mostly), but need it far less, as I can do what I like with OSX and Linux, and do so daily.
If you publish something under the Apache license, you grant users the right to use your patents that cover the software. When you publish under MIT, you can still keep a patent and sue any user.
The MIT License does not use the word "patent", but there are good arguments that MIT grants at least an implied patent license, perhaps even a (limited) express patent license. "Implied patent license" is the search string if you want to read more.
Apache version 2.0, on the other hand, has an express patent license. There are very good reasons to prefer an express patent license.
I don't know a single online source worthy of recommendation, so I'm afraid you'll have to do your own searching. But in a nutshell, at least under US law:
1. The permission language "deal in the Software without restriction" is very broad.
2. Of the more specific, enumerated permissions "use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of" the terms "copy", "modify", "publish", "distribute", and "copies" speak to the exclusive rights of copyright owners per United States Code title 17, section 106. The terms "use" and "sell" speak to things you can sue for under a patent per United States Code title 35, section 271. The term "sublicense" is generic to licenses of all kinds, including copyright licenses and patent licenses.
3. There is also a legal idea called "estoppel" that prevents, broadly speaking, leading people to believe they can violate your rights freely and then suing them, or taking back or shrinking permission that you've already granted. State Contracting and Engineering Corporation v. Florida, decided by the United States Court of Appeals for the Federal Circuit, is an interesting case in the patent context. You might find a summary online, if you haven't time to trawl through the court's written opinion.
To quote Larry Rosen, from his book Open Source Licensing:
> Be careful about implied licenses. An implied license is necessarily vague and incomplete. The terms and conditions of an implied license may not be clear to either the licensor or the licensee. Reliance on an implied license is particularly risky when important property interests are at stake.
If you're worried about an implied license for a specific reason, please talk to your company's legal or engage an attorney. It's going to require a judgment call, and you want someone on your side who will consider a lot more than what I've mentioned here in coming up with good advice.
Can you explain how this works in more detail? If I use someone's MIT license code that has patented stuff in it, does that make me a potential target? Seems hard to justify using MIT licensed code in that case?
That's right. MIT does not protect you from any patent-related legal action. It's not that hard to justify though - it's the same for BSD, and MIT/BSD code has been used for ages.
Both Apache and GPL grant patent use to everyone. I'd paste the relevant bit, but it's stupid-law-level-long. Point 3 in https://www.apache.org/licenses/LICENSE-2.0.txt - also see the exploding trap at the end of that point. You cannot sue the author for patent reasons.
Please don't say things like, "MIT does not protect you from any patent-related legal action."
It isn't that the MIT License is known to provide no protection; rather, it's unknown whether the MIT License provides protection. The MIT License contains strong wording to suggest it may very well provide protection against patents, but it's untested and not quite as strong as the patent grant in, say, Apache 2.0. It's a matter of knowns and unknowns.
This is why the MIT License and the BSD License should not be lumped together.
Their open-source friendliness has unfortunately coincided with a decline of their respect for user privacy and choice (Windows 10). I find the "New Microsoft" bittersweet.
The Windows 10 privacy thing is completely overblown.
Most of the privacy settings that people are most worried about are tied into Cortana, or more specifically into what you allow Cortana to do on your behalf. You can simply choose not to use Cortana or to disable it entirely. Google Now & Siri support the same functionality but ironically don't give you the settings to disable it, so nobody bats an eyelid.
Beyond that you only really have so-called "basic telemetry." Which tells them when you launch a Windows 10 "app" and some basic computer information. But even without that setting/ability the apps themselves could conduct the same reporting (via direct HTTP requests); the only reason it is even a setting is because Microsoft unified it to make developers' lives easier and to better aggregate the information on their side.
Are people legitimately concerned that when they launch the Xbox Music app, Microsoft knows they did? Even without the Windows telemetry they could just look at their own server logs, since the Xbox Music app utilises so many web resources (e.g. images). The same is true with any developer, any app, on any platform.
Ultimately the whole thing is a bunch of noise and no substance. Nobody can actually point to exactly what specific information Microsoft is even taking, and they have even less data to back up any claims.
> Most of the privacy settings that people are most worried about are tied into Cortana, or more specifically into what you allow Cortana to do on your behalf
In all of these discussions on HN, the only ones bringing up "worries" about Cortana features have been people defending Windows 10 (and now Windows 8 and 7) data collection. It's easy to disable and, if you choose not to use it, it won't collect data anyways. The same can't be said of telemetry.
> Which tells them when you launch a Windows 10 "app" and some basic computer information
Actually it's not clear what else is being collected, because Microsoft won't disclose what's being collected. Is there evidence that only data about usage of Windows store apps is being collected?
> But even without that setting/ability the apps themselves could conduct the same reporting
Which would need to be taken into account when deciding whether that app should be used. Just to be clear, that means the application I choose to run might be collecting analytics, but my OS is also collecting analytics on the applications I'm choosing to run because it thinks it should.
> Nobody can actually point to exactly what specific information Microsoft is even taking
Yes, because Microsoft won't tell anyone what it's collecting!
> and they have even less data to back up any claims
As a baseline, you yourself talked about how we know they're collecting information about application usage. That right there is undesirable behavior for many that we have plenty of evidence for.
> Actually it's not clear what else is being collected, because Microsoft won't disclose what's being collected. Is there evidence that only data about usage of Windows store apps is being collected?
Microsoft has disclosed what is being collected[0]. People just choose to ignore it and make up conspiracy theories instead. So where is the evidence that Microsoft is lying?
> Which would need to be taken into account when deciding whether that app should be used.
How? None of iOS, Android, or Windows restricts HTTP. Any app can conduct analytics, and unless you're running them one by one and analysing the packet dump you wouldn't be any the wiser. At least Windows tells you.
But that's what all this is about. Most other major operating systems conduct similar activities but there is no public outcry about that... Why the double standard?
> Yes, because Microsoft won't tell anyone what it's collecting!
Yes they do[0].
> That right there is undesirable behavior for many that we have plenty of evidence for.
But not something you can readily avoid. I'd go as far as to say that the vast majority of apps or applications send basic telemetry back to the manufacturer; it is the norm now, and has been since before 2007.
> Microsoft has disclosed what is being collected[0].
Let's quote them:
> Security. Information that’s required to help keep Windows secure, including info about telemetry client settings, the Malicious Software Removal Tool, and Windows Defender.
> Basic device info, including: quality-related info, app compat, and info from the Security level
Notably vague, no "conspiracy theories" needed.
And you're still missing the point that even the specifics that have been explicitly disclosed are still enough to be objectionable to some.
edit: I missed the much more extensive list of what is being collected via "basic" telemetry:
> It's great they disclose more information than I thought they did, though this list is actually far worse than I had guessed.
If this list is far worse than you'd guessed, why were you worried at all? Nothing on this list looks like personal information. It's basically just a hardware profile and useful but not invasive system information like when your computer crashes. It doesn't appear to record any of my information.
You have no way of trusting any other company that releases closed systems either. Like iOS, or manufacturers' Androids. You have to trust some institution at some point.
You're the only one who has used the term "conspiracy" here. Again (for the 5th time you've been told) Microsoft has used exceptionally broad terminology to describe what data they are collecting. Since you're the one drumming on about "evidence" for a "conspiracy theory", why don't you provide a definitive list of the exact data they are collecting?
> The Windows 10 privacy thing is completely overblown.
Not when you consider the fact that they're being so damned sneaky about it.
First, the idea that you have to run an enterprise OS to turn off (not "set to basic", but turn OFF) the tracking says volumes. Just having it on by default would meet the goals of collecting development/troubleshooting/debugging info, since most people won't change the defaults.
Next up, there's the fact that Microsoft has gone out of their way to make it hard to turn off anyways by hard coding addresses and ignoring registry keys. The only way for a home user to stop all tracking is to firewall an ever changing list of IPs.
Third, you do not know what they are tracking or not tracking. The communications are secure and the application closed source.
It's more the attitude these actions betray than the actions themselves, IMO. It's a very quiet, but distinct disdain for the user. "Yes, you paid $100 for our OS, but you didn't think you actually controlled it, did you?"
So... minus four in a short amount of time. Would anyone care to explain how I'm incorrect about any of this?
It is not overblown. I'm very happy that Microsoft is getting bad press. Hopefully this drives people away from Windows. This is the only way Microsoft "listens". When their revenue takes a hit. The primary issue with Windows 10 is how Microsoft is forcing people to upgrade to it. The secondary issue is with the lack of controls given to users.
I don't want certain Win 7 machines that I own/maintain to ever upgrade to Windows 10. Didn't ask for it. Don't want it. Just go away and let me do my job. Microsoft provides me no official way to disable that horrendous popup. I expect that shit from shady browser toolbars, and bundled crapware, not Microsoft.
Microsoft seems to think the users can't be trusted with options to disable Updates, Telemetry, and other BS services. I absolutely HATE having to ever reboot my windows machine. It is extremely critical for me to have my dev box be rock stable and running 24/7. I frequently used to go months without ever rebooting it. Also, with Win 7 I could just set the Update behavior to "don't call us, we'll call you". With Windows 10 I'm forced to muck around with services and group policy just to get my machine to update and reboot on my schedule, rather than Microsoft's.
As a customer of MS, I despise these methods, yet I continue to use their OS because there simply isn't any alternative. All I can hope for is people who don't specifically need Windows switching to other OSs en masse. At this point I could be swayed into thinking that it's a good idea to support anti-Microsoft efforts even if they are not based in facts (as you point out, nobody has shown exactly what they collect), just to obtain the result that you want.
> You can simply choose not to use Cortana or to disable it entirely. Google Now & Siri support the same functionality but ironically don't give you the settings to disable it, so nobody bats an eyelid.
Siri can be disabled. Location tracking to improve contextual understanding can be disabled separately if you don't want to disable Siri completely. If you disable Siri, all data is deleted. You're also informed of exactly what data is uploaded.
> Beyond that you only really have so called "basic telemetry." Which tells them when you launch a Windows 10 "app" and some basic computer information. But even without that setting/ability the apps themselves could conduct the same reporting (via direct HTTP requests), the only reason it is even a setting is because Microsoft unified it to make developers lives easier and to better aggregate the information their side.
Does Microsoft let you disable this? Apple offers you a prompt like this one http://imgur.com/aBzTNsV on every major (x.y) update, and shows you what data is being uploaded (as well as letting you disable it) in settings.
A lot of small medical and dental practices operate on one or two PCs. Windows doing things like uploading your calendar and address books could end up being a HIPAA violation.
Enterprise customers are fine because they have controls for everything, but small offices running Windows 10 Pro aren't so lucky.
Let me delicately observe that there is a straightforward reason why one has HIPAA-protected information in one's address book and it implies you make hundreds of HIPAA violations routinely.
I get what you are saying, but I'm not sure I agree.
If I buy a new car and it (by design) has a speedometer that sometimes shows the wrong speed or sometimes makes the lights malfunction and I get traffic tickets, do I have no right to be upset with the car manufacturer simply because I routinely speed and roll stop signs?
Honestly, I think HIPAA may have been a net negative for the medical industry and has probably prevented the basic sharing between medical practices and related services that it was supposed to more readily allow. The lawyers have benefited a lot, and it's often ridiculous that insurance companies refuse, on HIPAA grounds, to disclose the detailed information they've gathered to their own customers about themselves and the practitioners treating them.
HIPAA mandates that people have access to their own PHI held by covered entities, a mandate which did not exist before HIPAA, so it did the opposite of creating additional barriers to patients getting access to their own data.
Patients are denied access to their own records in the name of HIPAA. During a recent medical crisis this happened to me. No amount of threatening or cajoling would get them to release my images to my designated 2nd opinion.
To put it simply, anyone claiming HIPAA is the reason for that is just lying (since HIPAA has no such requirement) and would invent some other excuse -- or simply refuse without explanation -- without HIPAA. In fact, HIPAA patient access requirement mandates that patients have access to, and a right to receive copies of, their records under most circumstances, and refusal to provide such access is an actionable violation of HIPAA -- without HIPAA, there would be no recourse.
Pre HIPAA I never had a problem obtaining my records. The trouble is less-knowledgeable health gatekeepers are relentlessly indoctrinated into believing that privacy is paramount. A records request that doesn't follow a rigid checklist is defaulted to REJECT. And it is in the name of protecting the patient's privacy.
You'd be surprised how it's worked out in reality though... I'm not against what HIPAA was meant to do. I just wish the actual circumstances allowed for the easier communication system that was hoped for... not filling out 100+ fields across 4-5 pages when seeing a new doctor for the first time.
No, I wouldn't be surprised, having worked in the field for most of the time that HIPAA has been in force, and having had experience with trying to get information released by health entities before HIPAA. While HIPAA's requirements may allow more room for inconvenience than they should, and plenty of covered entities fall short of the requirements, the situation is miles better than it was without HIPAA, where such information was often treated as proprietary, competitive information and there was no general mandate -- and thus no recourse in many cases -- for disclosure to the patient.
And most of the history you fill out when seeing a new doctor has nothing to do with whether or not they can get old records -- they want the same information even if they have the old records.
Those practices use purpose-built software, not Outlook or some other Windows-readable program. Microsoft also offers HIPAA-compliant service agreements for such businesses.
If those offices aren't hiring someone who knows those things, they're probably not HIPAA compliant for many other reasons as well.
Very small offices may not use purpose-built software. My dentist when I was at school was a one-person operation (he's retired now).
Even if they are using purpose-built software, by default Cortana watches what you type in everything and sends samples (at least) to Microsoft. You can turn Cortana off, but you can't uninstall it or even stop it from running, and you can't be sure that Microsoft won't just turn it back on anyway (unless you are an enterprise customer).
I just don't understand why Microsoft doesn't allow all of that stuff to be turned off by everybody. I could be pretty happy with Windows 10 if it respected my privacy more.
It's actually REALLY HARD to disable OneDrive. I've tried about 3 separate methods and the damn thing keeps re-appearing. From what I read, it looks like Group Policy might work, but I don't have AD in our environment and haven't been able to test it.
I am fine with them collecting basic anonymous telemetry data to track usage patterns and system stability, but only if they specifically state all the parameters they collect.
Plus, them collecting data from Windows 8(.1) and 7 users without any proper warning/message to the user is wrong.
I'm just going to copy my post word for word from the last time I commented on the Windows 10 privacy issue:
I still can't imagine why Microsoft would think it's worth the bad publicity to not offer the option to disable telemetry, considering how few people tend to deviate from defaults in general, and how tiny a percentage of their users even care about privacy to begin with (evident from the commercial success of Windows 10).
They've been getting so much goodwill for all the other great things they've been doing lately. It just seems downright foolish to squander all of that for a few extra basis points in their telemetry data from those rare few users who care about privacy but are forced to use Windows 10 anyways for whatever reason.
Those who care about privacy but do have a choice in OS will simply avoid Windows 10 altogether because of the mandatory telemetry, which is telemetry data that they wouldn't be getting anyways. A certain percentage of these people could be Windows 10 users if it weren't for the mandatory telemetry. Would they really rather have people using other OSes, than having them use Windows 10 without giving them telemetry data?
So many aspects of their position on this issue seem completely irrational to me.
Yes, a much more accurate description than the GP's would be that the "new Microsoft" isn't letting single executive edicts set corporate policy for many things, such as how all teams should interact with the open source communities, instead letting teams themselves make that decision and then supporting them in doing so. The natural result is that many developers within Microsoft come from or have extensive experience with open source and so will embrace it when given the opportunity.
It feels like less a change in philosophy of "open source is our future" and more of one that good decisions can come from the bottom up (and a little bit of "yeah, the whole 'linux is a cancer' thing was really stupid").
I think it's not so much "open source is our future" as it is "if you can't beat them, join them". I'm sure there are plenty of folks at Microsoft who realize that sticking with Windows in the cloud puts them at a competitive disadvantage vs e.g. Amazon or Google. There are things that _have to_ use Windows (Azure AD, MS SQL, Exchange, Sharepoint), but everything else is only harmed by it. So the litmus test is going to be whether they actually move their major backend systems to Linux in the coming 2-3 years. If they do not, all these Linux ports will remain second-class citizens compared to the Windows versions, with lower performance, fewer features, and more bugs. Which is probably why Microsoft is releasing them in the first place: if you want the real deal, come to papa.
It's sort of comical that you say Azure is a competitive disadvantage. Azure and GCE are in almost every way superior to AWS in terms of everything but established marketshare.
If I were going back to step 0 with a startup today, I would not repeat my decisions to stick with AWS. Their stack is slow-moving and frustrating. The VPC rollout has been a slow motion car crash that is only righting itself in the past 3 months. Meanwhile Azure has amazing management tools and GCE has got well-supported containerization as its primary product interface. Both provide competitive rates and subsidies for small tech business, as well.
I didn't say Azure is at a competitive disadvantage (though it is, when it comes to Linux -- their Linux VMs are substantially slower than competition). I merely said that given choice, people prefer Linux as a platform, and they do so for a reason: better tooling, less vendor lock in, greater variety of free/libre backend software and frameworks, to name just a few. Windows really is redundant in this environment.
I was doing some benchmarking on AWS, GCE and Azure at various price points for some research and didn't see Azure perform much worse for similar price points to AWS. Everyone is slower than GCE.
Do you have references for the assertion? I'm entirely happy to say I did the tests wrong, I did them as a trio of evening projects.
My workload was CPU-bound. Test TensorFlow CIFAR model training on the VM with the same amount of CPU on all three major providers. In my case Azure was ~30% slower, whereas GCE and AWS were neck and neck.
Here's why their Linux ports, at least for open source, won't necessarily become second class... if they're used and popular with developers... that's it.
The DevOps trend, and developer mindshare is paramount regarding their cloud initiatives... Azure AD and Office 365 will keep things floating, but only Azure as a whole will keep it alive beyond XBox...
Microsoft is very good at tracking and keeping up with developer mindshare, as well as adapting. They're also a large corporation with many nooks and crannies, so there are exceptions to everything. The change lately is the acceptance that Windows is waning... and they have to shore up other areas, even at the cost of Windows, for the larger company to survive... this is a colossal shift that can't be overstated.
As a result, initiatives around .NET as a whole, VS Code (with great Go and Node support), and a slew of other nice things coming from MS might never have seen the light of day otherwise. I am surprised how much I like using VS Code on OSX; I switched from WebStorm and Sublime, even with some rough edges in ES6/7 syntax support. DNX and .NET Core have me considering .NET and Mono again when I'd all but written them off.
I was forced to work with Azure late last year, and after writing some simple wrappers using some of their APIs I have to admit that I liked using them more than AWS. It's really interesting that the less I rely on MS, the more I appreciate the work they've put out there.
> Their open-source friendliness has unfortunately coincided with a decline of their respect for user privacy and choice (Windows 10). I find the "New Microsoft" bittersweet.
Similar to Google, Facebook, Twitter and others: They support open source, but not end-user control or privacy.
EDIT: Not a complaint, but it's very interesting that this was modded down. It's a simple, factual statement that I doubt the companies themselves would disagree with (though the truth is much more detailed).
This complaint is so frustrating since odds are many people are reading this from Apple laptops running OSX, which does at least as much telemetry. Run Little Snitch and launch apps, type in Spotlight, visit a store... hell, you'll get regular uptime telemetry dumps even if you don't do anything.
And odds are there is an iPhone or Android phone in your pocket, its software metaphorically top-heavy with instrumentation to track your every engagement pattern with laser-like precision.
But Microsoft takes the brunt of this complaint because they're open, honest, and give you a scale of opt-outs. But mysteriously no one blames Ubuntu for doing the exact same things because... freedom?
Meanwhile, Microsoft has opened their entire core technology stack to outside scrutiny, adopted a radically user-centric approach to OS design, and taken a very risky and user-and-dev-friendly strategy regarding how apps are shipped and created.
I find this ceaseless double standard less than sweet, and indeed very much about bitterness.
If you're concerned about MS privacy, you should be absolutely outraged about Google, Facebook, etc. You also shouldn't consider Apple to be that different.
The Win10 privacy thing isn't much different from what Apple now does by default, and all of it can be turned off. While it does gather data, it does so in the course of supporting functions that most users want like Cortana and smart search. Apple is also full of phone home stuff that must be deliberately turned off if you don't want it.
The fact is that the market has clearly indicated (1) a preference for free stuff that is supported by 'ecosystems' and indirect monetization, and (2) a willingness to trade privacy for convenience and features. You can't blame companies for complying with what the market clearly wants as indicated by consumer buying behavior. If people cared about security and privacy more than features and UX, they would vote with their dollars.
You are right. A few years ago, googleupdate.exe would scan all your hard drives; it would be interesting to know what they did with all that data.
That's why I spent several hundred dollars on a copy of Windows. But can you imagine my reaction when Microsoft secretly installed tracking software to collect my personal data!?
Sometimes when I open up a new app, copy something into the clipboard, or search on the web, there will be outgoing connections to Akamai / Microsoft servers.
While this is true, only open source makes choice and privacy even possible.
They are obviously trying to reinvent themselves partly in the image of Google, whose motto "Don't be evil" also sounds very hollow nowadays.
But I shudder when I think of a world where Google is built upon closed software.
[To clarify: I know that Google does not open most of their software. However, with their reliance on open source software they were a big driver for the legitimacy of the movement]
I don't really like the user privacy approach on Windows 10, but I wonder if they will change that approach soon or if someone else (a third party) will come up with a tool to automatically handle this issue.
I found this tool in another HN discussion about this Win 10 privacy. I don't know if this tool works to solve the entire problem: https://github.com/10se1ucgo/DisableWinTracking/. If anyone can vouch for this tool's effectiveness I would like to know.
I had this in my favorites. It wasn't supposed to be a guide. You may be right about those solutions not working anymore. However the problems are still there. I had to disable several of those telemetry tools in the Task Scheduler just yesterday on my Win7. I'm pretty sure I've done this before.
The only thing that changed since the preview build is that there are 3rd party tools to do the job for you...
> As far as I know they give you the option to opt out of everything
The thing we now know as "telemetry" used to be opt-in, and they hide the opt-out control during initial setup behind a thing that is a link but doesn't look like one.
It can still be turned off after installation in the same exact way as the previous versions of Windows (enter "experience" in the Start menu search box), but most won't know that this control exists.
All the rest of the things causing people to freak out about privacy are related to Cortana, and the process for turning that feature set on is in-your-face opt-in, where it's clear that you're authorizing data sharing with them.
I switched back to Windows 10 as my primary desktop and laptop OS after years of running Mac -- especially on laptops.
We do a lot of high-end GPU stuff here, and Apple doesn't make any hardware that supports multiple high-end NVIDIA cards in any practical way. And no Apple monitor does 100% AppleRGB or any of the cinema color spaces, so we always needed third party monitors when doing color work.
Windows 10 is as nice and stable as Mac OS used to be (before Steve Jobs died).
Do you have anything to back this up other than it being your knee-jerk reaction?
Microsoft's entire leadership has turned over and it's easy to see that their culture is very different from the days you're talking about.
Also, they don't have a monopoly on nearly anything and certainly not on Javascript engines. They couldn't move to the "Extend, Extinguish" phases even if they wanted to.
Not saying I agree, but we consider history for every decision made. Want to loan someone money? Consider their credit rating. Want to hire someone? Consider their employment record.
Microsoft has a long history of not being so nice when they have an advantage. How much of that is leadership driven, and how much is built into the corporate culture and structure? It's knee-jerk, but it's not crazy to think about.
> Want to loan someone money? Consider their credit rating. Want to hire someone? Consider their employment record.
That's a single person. Microsoft is many thousands of people, and the most important decision makers have changed. In your analogy, you're considering whether to loan money to an almost completely different person.
Those same decision makers frequently had hiring and promotion authority. How different are these new people? Are they someone the old decision makers promoted minutes before leaving, or are they fresh new hires? Chances are there is a complex mix, and at least some of Microsoft's behavior has changed and some will remain the same.
> some of Microsoft's behavior has changed and some will remain the same
That's the exact argument I was making. The analogy doesn't work because this is, literally, a new Microsoft. It has new leadership. You can't use the past as a benchmark for how this company will behave because no one really knows what parts of the company's behavior and culture have changed and what parts haven't.
> Microsoft's entire leadership has turned over and it's easy to see that their culture is very different from the days you're talking about.
Yes. They made a 180° turn already. Who says they won't do another?
Customer lock-in isn't Bill Gates' idea. It's a very tempting strategy that you can find virtually everywhere where companies think it'll gain them money.
I'm genuinely curious which projects you are thinking of when you mention Apple. Shedding GPLv3 and refusing to (for example) update Bash because they do not want to use anything licensed under GPLv3 is pretty... I dunno, I'm just curious what projects you're thinking of when you say that. The walls of that garden are pretty high.
What "disadvantage" are you referring to? The one that forces companies to give back in the same manner that allows them to use the work of others for free? Yeah, that is so shitty.
Or Samba. When given the choice to abide by the GPL and open source their Samba changes, or to drop everything, they did the latter and wrote their own SMB implementation. That was three or four OSX releases ago and it's still buggy as hell. The only ones who lost here are Apple's paying customers.
Have to agree; we had a lot of pain when they first switched. Their initial release/update was broken/poor to say the least, even setting aside the hangs and performance issues.
I thought that had less to do with the choice of license and more to do with the fact that Apple just bundles the zipped source and publishes it every year or so (in contrast to Swift where you can see every commit made since the beginning of the project).
It could help Microsoft close-source THEIR implementation later down the line, but it doesn't help them "extinguish" it completely, particularly if third parties wish to maintain it.
MIT makes it easy to fork and relatively easy to re-license.
IANAL, but AFAIK as long as they are the sole owner and contributor, or have agreements in place with everyone, the GPL doesn't protect us against going closed in future releases.
What it does secure is your right to fork the last open source version. Then again, so do BSD, MIT and ASL - and under better terms for everyone else.
The copyright holder can license a work under any license at any time. Although you can obviously issue an exclusive license, for example, in most countries you cannot simply dissolve the copyright (there is no concept of assigning something to the public domain). So practically, there is no way to write a license that stops the copyright holder from issuing a new and different license.
What the GPL does is forbid anyone who is not the copyright holder from changing the license. If you make a derived work from something and wish to use the GPL as the means for allowing you to make that derived work, then you must use the GPL.
The "better terms" that more permissive licenses give you is the ability to change the license when you make a derived work. Of course someone can fork the original open source version, but this is only useful if that person: 1) knows about it 2) has access to it 3) can build and install it for their platform. In many cases, more permissive licenses are preferred because vendors know that they can provide digital locks on the platforms that stop people from deploying free (as in freedom) versions. They wish to get the benefit of open source development, but still lock their users out of certain functionality.
Permissive licenses are useful, but they are not automatically better for everyone in every circumstance. Certainly I have a lot of useless bricks that used to be useful consumer electronics because my vendor doesn't let me upgrade them. Strangely the best piece of consumer hardware I ever bought was a bicycle navigation unit from Sony. GPL from top to bottom and they even gave you instructions on how to rebuild it and deploy it.
You may be interested in some thought experiments I'm running to try and achieve the aims of a verifiable public domain dedication under existing US law.
The most interesting reading I've done is probably the old CC mailing list threads on CC0-1.0's backup license. If you set yourself the goal of writing international/portable language, things get really hairy.
Even if companies had been able to fork IE6, it would not have changed anything.
(Companies are able to fork WebKit, and yet it's on a good track to become the new IE6, because Apple doesn't care and their market share force everyone to work around bugs in iOS/OSX Safari.)
Which makes building a community the most important thing to do, to make it harder to keep closed forks closed. It's worked really well for LLVM/clang, with contributions coming from a number of normally open-source hostile sources. Conversely, were Microsoft's engine here GPLed, with the contributor licensing agreements open source organizations like Canonical and the FSF have, it would be a lot easier for them to close things back up, and take all the toys with them.
Three existing production-ready open source engines + other embedded options wasn't already "competition"?
It's nice to see but, as far as competition within OSS is concerned, it's about 7-8 years too late. MS will no doubt, at some point, release something like Node.js based on this engine. If they do it'll be widely lauded, but you can barely say they're even a "fast follower" here when they're this slow.
Temporarily perhaps. Most of the gains look to have happened within the long IE11 to Edge 12 release cycle. Firefox 45 looks like it'll leap ahead again, and they're on a 6 week release cycle...
I'm not disputing MS have made impressive efforts. I've just yet to see anything from them that would make me put long-term stock in their open source commitments.
Chakra has been well ahead of the other browsers for >1 year now. That lead has been declining (as there's only so much of 100% you can implement), but kangax has always shown it to be ahead.
Slight derail but I find it interesting that the Swift team is tackling the fragile ABI problem with v3... Something C++ could have done at any time, enabling portable and interoperable C++ interfaces.
(There's no reason clang importer couldn't surface C++ or any other language to Swift for that matter)
Even if you can easily parse C++,[1] there are a lot of practical impediments to binding to C++ libraries. If an API exposes templates, or relies on constructors/destructors, it's very hard to call that from non-C++ code. For example, SpiderMonkey wraps JS pointers in C++ objects and uses RAII to register roots being held by C++ code with the JS garbage collector. Handy, but more or less impossible to call from non-C++ code without writing a shim by hand.
[1] You young-uns and your Clang. I did a C++ binding generator for my GSoC project, and back then ('05) GCC was the only open-source thing that could parse C++, and you could only get at the parse tree by patching it to dump XML.
Except that at the time Jobs fought FSF, so he wouldn't need to open the ObjC sources. He lost that fight.
It's called knowing history. Just because two decades later Apple decided to fund the open-licensed Clang does not negate my second sentence. Clang was no longer considered a competitive advantage, like ObjC was at the beginning of the 90s.
It could have, but a standardised ABI is a terrible idea. Once you commit to an ABI you're stuck with the tradeoffs you make. If C++ had specified a stable ABI 20 years ago, we'd all be using setjmp and longjmp for exceptions, and small string optimisations and such would be impossible.
I don't really see the big deal tbh. C++ code compiled with GCC was ABI stable between GCC 3.4 (Apr 2004) and GCC 4.9 (Apr 2014), and you can still use the -fabi-version switch today to generate code against older ABIs. Hell, libstdc++v5 (shipped with GCC 3.3) is still packaged for most distributions.
Yeah. As much as I absolutely love C++11, and use its features constantly, if I had to make the hard call: "do I get lambdas, or do I get some extern ABI marker I can use to opt-in to a non-fragile ABI, with field and vtable offsets linked as symbols and return types mangled into function names", I would have made the (incredibly depressing) choice for the latter, as it makes C++ suddenly able to be used in ways that are otherwise insane to contemplate.
It might not even be that small, as it opens up a few possibilities that are not that easy with V8/C++, like using the engine from Rust or other languages that only provide a C FFI.
That's correct. The V8 license is required in the v8-*.h files because they are V8 header files that the Chakra shim uses. The implementation files have the Microsoft license.
If I wanted to write a native extension for a possible Chakra-powered node.js, which methods would be possible?
- Use V8 C++ API
- Use Chakra extension API
- Use NAN
- All of those?
And in your port, are Node's inbuilt native functions (like the libuv-based IO functions) still using V8 APIs which are mapped to Chakra by this shim, or are they reimplemented directly on top of Chakra APIs?
For native modules, the options below are ideal:
- V8 C++ API: These APIs will continue to work with node.js on V8 as well as with Chakra, because in the shim layer we map them to equivalent Chakra APIs. If you see a problem with any API not working as expected on the Chakra engine, please file an issue on Microsoft/node and we will be happy to fix it.
- NAN: Node.js on Chakra is compatible with the latest NAN version, so it should continue to work with both the V8 and Chakra engines.
Regarding Node's inbuilt native functions (in other words, deps): those that are independent of V8 continue to work with Chakra without reimplementation. The Chakra shim comes into the picture to map V8 C++ APIs to their Chakra equivalents.
Having more than one engine can mean that there is less of a V8 monoculture. Bugs in V8 don't become "standard" and "things that V8 is really good at" doesn't become "things that Javascript is really good at".
That's what Go does. The reference for the language is an actual document (https://golang.org/ref/spec), not a reference implementation. There are two compilers (gc and gccgo), and this way, no edge-case of an implementation can make its way into programmers' habits, something that tends to happen a lot with JavaScript.
Microsoft was surprisingly open in this regard with C#. The C# language and the CLI bytecode are both ECMA standards (334 and 335) that could be implemented by anyone. Of course, this is with the realization that the .NET standard library gives Microsoft's C# implementation a lot of staying power.
If I'm not mistaken, the case with statically linked binaries in Go is more of an "It's not a bug, it's a feature" situation than being touted as a Go invention. Besides, they are rectifying that one if I'm not mistaken.
Didn't JavaScript already have a reference spec going back at least a decade? I mean wouldn't a reference spec be a prerequisite for JavaScript becoming an ECMA standard?
What do you mean by "edge-case of an implementation... makes its way into programmers' habits" as something that happens a lot in javascript? Maybe I'm not understanding, but I read that statement and I don't see how it applies to javascript.
For example, a try/catch in V8 triggers deoptimization for the entire function it's in, while it might not in other engines. So this leads to many developers avoiding try/catch in performance critical code.
This ends up with them avoiding it in general usage, which means that now "avoiding try/catch" is considered a general purpose performance tip in javascript, even though it might only apply to one engine (and the v8 team has expressed interest in trying to stop that deopt)
This might be a bad example, I'm honestly not sure how try/catch performs in other engines, I just know it deopts in V8. But the fact that I don't know if it's a javascript thing or a V8 thing speaks to my point.
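For concreteness, here's a sketch of the pattern that advice produces (hypothetical helper names, not from any engine's docs): keep the try/catch in a small, cold wrapper so the hot function itself contains no try/catch and stays optimizable.

  // Sketch only: the tiny wrapper is the only function that deopts.
  function tryCall(fn, arg) {
    try {
      return fn(arg);
    } catch (e) {
      return undefined;
    }
  }

  function processAll(items, fn) {
    // No try/catch here, so this hot loop stays optimizable.
    var results = [];
    for (var i = 0; i < items.length; i++) {
      results.push(tryCall(fn, items[i]));
    }
    return results;
  }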
Since node.js is backed by V8, optimization decisions that the V8 team have made end up in developer code that runs in node.js. For example, Bluebird has a list of "optimization killers"[0] that many developers that I know use as a reference when making node.js libraries. However, these optimizations are V8 only, and so "edge-case of an implementation makes its way into programmers' habits". A quick example is "5.1. The key is not a local variable:", which ends up with having to jump through some hoops in a for-in loop, not because it's easy or because the language forces it, but just because V8 is the engine.
> optimization decisions that the V8 team have made
Optimization decisions that the V8 team have made are much more universal than it might seem.
It is true there are sometimes strange artificial corner cases - but those are often either bugs or temporary solutions that are going to be replaced with something more generic as soon as they start to hurt too much code.
> which ends up with having to jump through some hoops in a for-in loop
Could you clarify which hoops precisely you need to jump through?
I am not sure I can imagine a reasonable chunk of code that strives to be both fast and well written, and wants to use a context-captured or global for-in variable.
  for (IWantThisVariableToEndUpOnGlobalObject in obj) { }
> Optimization decisions that the V8 team have made are much more universal than it might seem.
That's certainly possible, but maybe a competing VM can do better, yet finds that everyone has coded to specific V8 optimizations. They may be hesitant to push out their implementation. It may be a better optimization, but it causes existing code to run slower on their VM. Thus back to the original statement that implementation quirks find their way into developer code.
> for (IWantThisVariableToEndUpOnGlobalObject in obj) { }
The variable has to be in the local scope and can't be in any higher or lower scope, not just a global scope (which your words acknowledge, but your code snippet doesn't). It's easy to end up sending the prop variable through a closure accidentally. Some real world examples[0][1][2]. Note that the solution requires pushing out to another function just to get around the deopt, with the sproutcore example being very clear, as they made the change specifically to satisfy the V8 VM.
> The variable has to be in the local scope and can't be in any higher or lower scope, not just a global scope (which your words acknowledge, but your code snippet doesn't)
Sure! I am just trying to say that in my opinion whenever you have a non-local variable used as iteration variable in for-in then you most probably have a bug in your code. I tried to illustrate this with a global variable example because it's a common source of JS bugs - when people leak things into a global namespace by accident.
> Some real world examples[0][1][2]
These links refer to a different bailout reason --- "ForIn is not fast case". The original ForIn support in Crankshaft (written coincidentally by me) only supported this kind of for-in because it was the important case to support and the one where you can get good performance with reasonable investment of time.
Given time and bug reports from people hitting this bailout I would certainly have extended ForIn support to cover a more generic case (assuming that supporting the more generic case would make some code faster); however, just a couple of months after I landed the initial support I switched to a different project, so I never had a chance to revisit this.
This bailout reason is actually not in V8 anymore, as V8 now supports both fast and slow cases in Crankshaft[1].
Hey, maybe you're the right person to ask - why does `arguments` so easily deoptimize in every major engine (well, V8 and SpiderMonkey, at least)?
It seems that engines could easily detect the most common munging of `arguments` (such as [].slice.call(arguments), Array.prototype.slice.call(arguments), Array.from(arguments) and the like) and allow those to be optimized. Doing so would speed up a very large amount of code.
Do you have any insight on why that still has not been done after so many years?
> Do you have any insight on why that still has not been done after so many years?
I can't really speak for either V8 or SpiderMonkey, but I think there are a few reasons -
a) nobody got around to doing it, even though it was discussed multiple times; e.g. for V8 it just was not the right time to implement it, as it's trying to completely revamp its optimization pipeline, and certain Crankshaft idiosyncrasies make this sort of optimization pretty brittle;
b) I am not entirely sure that it will actually speed up that much code; this kind of code is rarely on an extremely hot code-path (extremely hot code paths must strive to avoid allocation entirely!);
c) there is a reasonable workaround that provides good performance (manual loop);
d) ES6 provides something better than the arguments object: rest arguments (both this and the manual loop are sketched below).
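Roughly, (c) and (d) look like this (hypothetical emit-style helpers, for illustration only):

  // (c) Manual loop: copy arguments into a real array instead of
  // leaking the arguments object via [].slice.call(arguments).
  function emitAll(listeners) {
    var args = new Array(arguments.length - 1);
    for (var i = 1; i < arguments.length; i++) {
      args[i - 1] = arguments[i];
    }
    for (var j = 0; j < listeners.length; j++) {
      listeners[j].apply(null, args);
    }
  }

  // (d) ES6 rest parameters: no arguments object involved at all.
  function emitAllRest(listeners, ...args) {
    for (const listener of listeners) {
      listener(...args);
    }
  }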
I'll give you an example of where it hits hard - Event Emitters. Some of the largest EE libs in the NodeJS ecosystem still munge the arguments object to pass args to listeners. I've sent PRs to some of them, but the deopt caused by the arguments munging seems to slow down the whole function and everything it calls, which can be quite significant (like an entire render loop).
ES6 solves this problem nicely but it will be a long time before we can deploy it natively. Thankfully, Babel handles it correctly and uses a proper for loop. So the need to fix this is less urgent than ever.
This really cements the case that the poster above was making:
> no edge-case of an implementation can make its way into programmers' habits, something that tends to happen a lot with JavaScript.
There is code in the wild right now that fixes (perceived or actual) problems in the code's interaction with the V8 engine. And worse, now that the bailout is gone, that developer code no longer solves any problem.
> whenever you have a non-local variable used as the iteration variable in for-in then you most probably have a bug in your code.
Agreed, I'm pretty sure every linter would pick that up anyway. That's not really the case in the examples I saw though; mostly it was stuff where the key is needed in a function/closure (just threw this together straight in the text area here):
function Intercept(obj1, obj2){
    for(var key in obj1){
        obj2[key] = function(){
            console.log('called: ', key);
            obj1[key]();
        }
    }
}
Note that in the document I referenced, this was under section 5.1. I'm not sure if you would consider them the same or not, but that's where it's listed in the document, so that's how I cited it.
> no edge-case of an implementation can make its way into programmers' habits, something that tends to happen a lot with JavaScript.
Edge cases should not, and they almost never do... However, performance optimization is a very special area - you have to know how things are implemented and utilize this knowledge.
> mostly it was stuff where the key is needed in a function/closure (just threw this together straight in the text area here):
This code is also buggy - all closures will point to the same `key`.
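For readers following along, one way to fix that - and, incidentally, the same "push it out to another function" shape mentioned earlier in the thread - is to give each iteration its own scope. A minimal sketch using an IIFE (ES6 `let key` would achieve the same thing):

function Intercept(obj1, obj2){
    for(var key in obj1){
        // Each call to the IIFE captures its own `k`, so the
        // closures no longer share a single `key` binding.
        (function(k){
            obj2[k] = function(){
                console.log('called: ', k);
                obj1[k]();
            };
        })(key);
    }
}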
That's awesome! With SpiderMonkey hitting 90%, WebKit at 81%, V8 at 92% and Chakra at 80%, ES6 as a production target becomes realistic!
I'm wondering when Android/iOS default browsers will get updates, because that's going to be the one remaining blocker. (And IE11 of course... but that's just waiting for the userbase to move away from it :/)
Apparently iOS 9.3 will include a bigger-than-usual Safari update. While there are no new big ES6 features mentioned in the announcement, I'm hoping for some smaller improvements.
As far as I understand, V8 doesn't care about Node, and would never think twice about doing something that breaks the entire Node ecosystem if it makes Chrome slightly faster. FWIW, they aren't even "wrong": it makes sense to me. What is interesting here is not "competition" but "someone who builds a JavaScript engine wants you as a user".
That is probably true, but maybe this position stems more from the fact that they were never challenged than we think. I'm hoping some pride will be thrown into the mix and will make them care just a bit more than they currently do.
Perhaps more competition? V8, though much faster than other engines when released, is still roughly half the speed of Go, while the JVM (also a JIT) is roughly twice as fast as Go and approaching native C/C++ performance. I know it's not a fair comparison between JavaScript (dynamic) and Java (typed), but I am sure there is still room for improvement.
Before Node was popular, I tried a command-line JS engine called Narwhal. It was built by 280 North, and as I recall, it had pluggable engines. I remember being surprised at how much faster JSCore (Safari's) felt than the others.
Not being permanently coupled to a particular platform is probably a good thing for Node, and provides a healthy incentive for V8 to stay on-top of its game.
Over the past 5 years or so, Microsoft has lost developers as a core constituency. There are certainly interesting things happening out of their dev group, but with the rise of cloud infrastructure, cheap (free) alternatives to WINS stacks have risen tremendously in popularity.
I was a Seattle Techstars 2013 Founder, and the only two teams using MSFT tech were those whose founders were ex-Microsoft employees.
This just feels like a ploy to get developers back on to Microsoft's platforms, then slowly bleed them dry with fees.
*
My experience with the new Microsoft: One of our major products runs on Microsoft SQL Server. Sadly, many developers/ops folks did not realize the difference between SQL Server Enterprise and SQL Server Standard, and opted to install SQL Server Enterprise on servers despite not using any of the Enterprise features.
Not only does SQL Server Standard's pricing differ from Enterprise's, but how it's priced differs. Enterprise is charged per core, requiring a core license for every core, with a minimum of 4 core licenses per physical processor (so, for example, a two-socket box with two cores per socket needs eight core licenses, not four).
During a routine Software Audit, MSFT determined we had quite a few more SQL Server Enterprise installs than we licensed, so sent us a nice fat bill. We asked for leniency. It was a mistake (ours of course), but we didn't use any of SQL Server Enterprise features.
MSFT wanted no part of that.
So we ponied up our bill, and decided to never use MSFT tech again.
Contrast that with AWS. Due to developer error, we had one of our AWS keys compromised and an attacker used it to mine bitcoins. Amazon sent us the bill. We contacted them, and they gave us a one-time forgiveness.
*
The moral of this story: use Microsoft technology at your own peril. Also, having developers handle infrastructure is cool, but don't discount the value of a dedicated infra team.
In both cases the fault was in your own company and you are trying to use that to distinguish between two suppliers. There's no guarantee that Amazon will continue to be so lenient in the future (though you aren't the first person to mention them acting like this). For what it's worth, I think the new direction for MS is a good thing. They're far from my favourite company, but I dislike them less the more they release software as open source.
MSFT's tactic - allowing you to install SQL Server Enterprise without actually having a paid-for license - is similar to signing someone up for a service and making them opt out of recurring subscription fees.
This is just anecdotal evidence that "new Microsoft" is the same as "old Microsoft."
I'm willing to believe that the folks working on this actually do care about contributing to open source and believe "rising tides raise all ships," but not for a second do I think that the decision makers at MSFT also believe that way.
One thing that always baffled me was "routine software audits" by Microsoft, Oracle, and the like.
So, you license a technology, and now your supplier can police you at their own wish. OK, there are situations when you have no choice - they are the only game in town. But in the last decade much of the infrastructure has been available in an open-source form.
I am surprised anyone would touch anything from such companies if there is a free alternative not connected to such companies (and in this case there are many).
You agree to this even if you're just using Windows and/or IE, or any other Microsoft software for that matter. So either you have absolutely no Microsoft software, or you're open to an audit.
how does that work as a consumer/very small business?
they have no proof that I'm running their software, they would never be able to get a warrant to search my premises, and if they ever stepped foot on my property I'd have them done for trespass.
I can believe there's a contractual term in the volume license agreement that allows them the right, but for that they'd be giving you a significant discount, which seems fair.
I'm more than skeptical towards M$, I have a deep deep aversion to anything they are associated with - based on decades of using their products and wrangling their apis as a developer.
Every time I use firefox or libre-office or linux or vim or postgres or http or node.js or any of the great open alternatives to the ugly walled garden that is Microsoft, I feel free as an individual and empowered as a developer - I at least have the tools I can use to make people's daily work better.
I particularly want them to leave node.js alone, as I use this on a daily basis for real work - I don't want to see them screw that up as well.
I've seen that partial Linux support is on the roadmap, but does anyone know if there is a way to compile it with gcc or clang, or if that's on the roadmap too?
Hi, Chakra engineer here. It's a work in progress. You can see current work on the `linux` branch (https://github.com/Microsoft/ChakraCore/tree/linux). I believe we are targeting clang. It does not build yet. We will enable builds of parts of the project in stages, building up to a full build with some differences (as you can see in the roadmap, no JIT to make portability easier in the short term, among other things).
In the last couple of years it seems like MS has been acting like the crazy small-TV-shop owner doing his own TV ads: "Yes folks, we're open sourcing everything, cuz I'm crazy Bill". Mind you, this is good for devs finally getting access to this amazing technology!
Interesting that the inbound terms (CLA) include an express patent license, while the outbound terms (MIT) have less clear patent language.
There is tremendous political and PR value in offering MIT outbound. Fun thought experiment: How different would reception be with Apache 2.0, Eclipse Public, or similarly "enterprisey" terms?
Considering how deeply Windows-specific EdgeHTML is likely to be, I don't think open-sourcing it would be very useful, unless someone wants to backport it to Windows 7 and 8.1. I'd rather Microsoft allowed developers to embed EdgeHTML in Win32 desktop applications running on Windows 10.
Hi. Chakra developer here. I don't speak for the Edge team, but there is a blog out that indicates it is already possible to embed EdgeHTML via the WebView control in Universal Platform apps: https://blogs.windows.com/msedgedev/2015/08/27/creating-your...
I know that doesn't entirely address your wish, but it's a step in the right direction. :P
I'd be most interested in using it in an Atom Electron app :) Given VS Code's use of electron, do you know if anyone internally has given a whack at swapping webkit for edgeHTML?
Caveat emptor. The license only gives you permission to use the code, not any patents covering it. So while you can read the code, to use it you need to either get a separate patent license or convince yourself that MS holds no patents on the code. I'm not going to do a patent search, nor do I want to know, but I would be surprised if they haven't patented aggressively.
I'd love to read an article about the various high-performance JS engines. Something like http://fabiensanglard.net/doom3/ but for Nashorn, V8, JavascriptCore and Firefox covering approach, policies, code quality and bug stats.
!! Thanks!! I have spent the last two months furiously working on improving a project of mine (Cycript) that is built on JavaScriptCore, and this is super useful to see!
Surprised they use WordPress to build this website. I thought they'd use Umbraco at the least.
It doesn't surprise me though. ASP.NET is a powerful platform which has seen some great improvements after MVC gained traction, but I wouldn't use it for anything simple where a static site generator or WordPress would get the job done.
I do like the JavaScript support; however, the browser itself still has some bugs and other performance issues in daily use. Those could be issues with the code being run on certain sites, though.
MS's shift towards open source is nice, but I wonder how long it will be before all these projects are usable and stable enough on Linux/BSD/OS X. Perhaps they should focus on bringing a single project cross-platform, like .Net, instead of simultaneously attacking so many fronts.
Looks like they're using the same business model as Quantopian. Get contributions for free, still hold tons of leverage over anyone else that tries to compete, but otherwise allow other people to use the code.
It's called ChakraCore because it is the core engine component of Chakra, not because of any "trend". Specifically, they mention it excluding the runtime bindings for browser or Windows apps.
> ChakraCore is a fully capable JavaScript virtual machine that has the exact same set of capabilities and characteristics that are supported by Chakra, with two key differences. First, it does not expose Chakra’s private bindings to the browser or the Universal Windows Platform, both of which constrain it to a very specific use case scenario. Second, instead of exposing the COM based diagnostic APIs that are currently available in Chakra, ChakraCore will support a new set of JSON based diagnostic APIs, which are platform agnostic and could be standardized or made interoperable across different implementations in the long run. As we make progress on these new diagnostics APIs, we plan to make them available in Chakra as well.
Funny thing is that the core is the part of the apple that people generally don't eat.
As for the naming, I've always found it fairly standard for internal productized libraries. The library/product as a whole consists of all the possible pieces you can link with, but the core is the only mandatory part.
No, the *Kit suffix comes from NeXT, which Apple acquired and is where Mac OS X (and iOS) are derived from. NeXT started using that terminology in the 1980s, before Be Inc. was started (in 1991).
What is the main reason for not merging V8 and ChakraCore? It's really cool that teams from the big companies can learn tricks from each other, but as far as I see the goal of Google and Microsoft here is to just advance Javascript in the same standard direction. Have you talked about not duplicating all efforts?
> Have you talked about not duplicating all efforts?
In this case the duplicated effort is desirable. You're implementing an engine to meet a spec. Duplicating the effort gives you:
Competition - if there's only one implementation, what are your drivers, metrics and priorities for improving performance, spec compliance, and correctness? In practice, you're going to be deciding these things arbitrarily, by committee, on behalf of the market. If on the other hand you have two(+) competing engines, all these decisions are driven much more by the market, which is far likelier to generate better results for the market.
Redundancy - If the same spec is implemented semi-independently by two different teams, then different implementations of the same features will be tried, and tested in the wild, until winners emerge for different use cases. This is incredibly powerful for innovation and optimisation, and the fact that both engines are open source only multiplies this power.
By literally merging the two engines into one codebase, you'd lose these advantages almost completely. Even putting the two teams and the two projects under one organisational roof whilst still trying to keep them 'separate' would put the advantages of them being truly independent at serious risk, whilst probably not having much upside that isn't already provided by having the two codebases be open source.
Two independent teams duplicating their efforts is a very, very good thing, and even better now that ChakraCore is open source.
> What is the main reason for not merging V8 and ChakraCore?
Pragmatically, that's very unlikely to be effective.
The two teams are culturally, physically, and temporally (time zone) distant, so Conway's Law tells us it would be very difficult for them to efficiently work on a single code artifact.
It's not impossible, but it requires a lot of coordination. With something as deep as a high performance language VM, it's really hard to scale development beyond a handful of people in one room. Unlike, say, an application framework or an app, there are few easy, loosely coupled things to hack on.
(In case anyone thinks this is more amazing than it is: it runs on Windows only right now. But presumably someone will start working on Unix ports shortly.)
It's not on the short-term roadmap because we've scoped it to clang 3.6 / x64 Ubuntu while we stand up the branch, but we're trying to keep our code changes for the port pretty generic.
From the Roadmap document - it seems like JIT on Linux is not a priority? That's surprising. Is it due to technical issues or the whole idea is to have NodeJS on Windows use Chakra and let the community do the Linux part with JIT?
The first step is to get the runtime and interpreter running. The next step is to bring the JIT up.
Chakra is a hybrid engine with an interpreter, a simple JIT and a full JIT. The interpreter is the one which does all the profile data collection. Profile data is required to get quality JIT code.
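As a toy analogy only - this is not Chakra's actual architecture or code - you can think of that pipeline as a dispatcher that starts in a slow, observant mode and swaps in a specialised version once it has collected enough profile data:

// Toy illustration of profile-guided tiering; all names invented.
function makeTiered(interpret, compileFromProfile) {
    var calls = 0;
    var profile = [];
    var jitted = null;
    return function(x) {
        if (jitted) return jitted(x);    // "full JIT" tier
        profile.push(typeof x);          // interpreter collects profile data
        if (++calls >= 100) {            // "hot" threshold reached
            jitted = compileFromProfile(profile);
        }
        return interpret(x);             // interpreter tier
    };
}

// Usage sketch: this "compiler" just returns the same function, but
// in a real engine the profile would drive specialisation.
var square = makeTiered(
    function(x) { return x * x; },
    function(profile) { return function(x) { return x * x; }; }
);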
Hi! Thanks for explanation and another cool product!
One question that just came to my mind regarding this: have you thought about a configurable/modular build where the interpreter/simple JIT/full JIT can be switched on and off? Or is this perhaps already there?
I can think of some use cases where it would be interesting to build a modern and full-featured JS engine in interpreter-only mode, e.g. to integrate it into platforms which don't allow JIT (iOS), or to reduce the footprint or the attack surface. Or to be able to port it to other platforms without much effort - in the embedded domain there are still processors other than x86 and ARM, and OSes other than Windows, Linux and OSX, floating around.
Why would they have designed an engine first released in 2011, under a completely closed culture, to be portable for an open-source release they couldn't foresee happening five years later? I understand the appeal of the thoughts behind what you said, but you shouldn't burn time on things you don't foresee being an issue.
I'm happy to see WebAssembly talked about on there. It talks about standardisation and the polyfill (using asm.js). Any idea on when support will be worked on in the engine?
Good that it's open source, but I found this blurb from their contributing guidelines [0] to be contribution inhibiting:
> You will need to complete a Contributor License Agreement (CLA) before your pull request can be accepted. This agreement testifies that you are granting us permission to use the source code you are submitting, and that this work is being submitted under appropriate license that we can use it.
Don't most large organisations require something similar with their own open source projects? I know Google and Facebook do.
It's just a way of confirming that you own the rights to your contribution, and that you explicitly give Microsoft permission to use it. (And if you didn't want to grant them permission, you would have no reason to contribute!)
> Don't most large organisations require something similar with their own open source projects? I know Google and Facebook do.
Mozilla gets by without one. Apple's approach for managing Swift, too.
> It's just a way of confirming that you own the rights to your contribution, and that you explicitly give Microsoft permission to use it.
If that's all they wanted to do, they could get by with something resembling the terms that Mozilla gets its committers to sign. Usually these CLAs are specifically written to go a lot further than that, though. They usually take away a lot of the contributors' bargaining power. Microsoft's CLAs included.
> *Mozilla gets by without one. Apple's approach for managing Swift, too.*
Mozilla does have one.
> Code committed by you to a Mozilla Repository, whether written by you or a third party, must be governed by the Mozilla Public License 2.0, or another license or set of licenses acceptable to the Mozilla Foundation for the Code in question.
> By submitting a pull request, you represent that you have the right to license your contribution to Apple and the community, and agree by submitting the patch that your contributions are licensed under the Swift license.
Neither of those are CLAs. Please correct your comment.
I'm particularly perplexed why you linked to that Mozilla document, for two reasons:
1. It says what it is, and as I said above, it's not a CLA. I've signed that document. It's required of everyone who wants commit access to the Mozilla repositories. Here's how it works if you want to contribute to Mozilla: you send in a patch, they accept it, and say thank you. Nobody has to sign anything. (The same process goes for Swift, by the way.)
2. I specifically mentioned such a committer's agreement in my comment. It's even in the part you quoted.
I see your point now, but the steps are functionally equivalent. The person committing the code has to verify that it's licensable by the contributor under MPL2.0, which means copyright license, patent grant, etc.
The specific terms, however, are down to the author of the CLA, not inherent to CLAs themselves. The Apache and the Google CLAs are fine, for instance, and look roughly equivalent to the terms of MPL 2.0.
I can see the user-friendliness argument, but functionally it's the same thing. The Swift CONTRIBUTING.md could just as easily put in some crazy line about patent negotiations.
> They usually take away a lot of the contributors' bargaining power. Microsoft's CLAs included.
Can you point out which part you object to specifically? On a quick read I don't see anything in here[1] that's not the equivalent of licensing your code to a project under a major OS license with a patent grant (e.g. Apache 2.0).
> I see your point now, but the steps are functionally equivalent.
No, you're equivocating. This part never happens in the CLA-free world: I send a patch in to somebody and they tell me to sign the CLA before accepting it. Or: somebody else sends in to Microsoft some code I've written, and Microsoft tries to get in touch with me to sign the CLA. (This has happened. Not only is it not necessary, but the text of the CLA itself says it's not necessary. The developers of the project don't even understand their own CLA. This is not specific to Microsoft. I wrote about cargo-culted CLAs when Swift was opened up[1].)
> The specific terms, however are down to the author of CLAs, not inherent to CLAs themselves.
This isn't remarkable; I don't know why you chose to make a remark about it. It's fluff. It doesn't legitimize your (still uncorrected) comment, and it doesn't delegitimize mine. Superficially, though, it looks like it does. I don't like this.
> The Apache and the Google CLAs are fine, for instance, and look roughly equivalent to the terms of MPL 2.0.
I'm not going to analyze them here, but let's go with the "equivalence" stance. The argument becomes, "They're equivalent to the terms that are already in the license, so there's no need for them at all." That is, there's no reason not to accept the changes if the contributor never signs the CLA.
Of course, most of the time, the CLA isn't equivalent, which is why you're asked to sign it.
> Can you point out which part you object to specifically? On a quick read I don't see anything in here[1] that's not the equivalent of licensing your code to a project under a major OS license with a patent grant (e.g. Apache 2.0).
To use Apache 2.0 as a specific example, I already wrote a comment about it.[2] The CLA neuters the patent termination clause. This leaves the "beneficiary" of the CLA open to sue anyone over patents with impunity while keeping the hands of those on the other end of the litigation tied.
Yes, many projects do not, but some very popular ones do: Google Go requires one. Apache requires one. Some FSF projects require one.
I could go on, but the point is that CLAs exist for good reasons. There may be another, different way to accomplish the same goals, but since it isn't my pocketbook funding the project, I don't feel like I have a right to complain.
As such, just because a project has a CLA doesn't mean there's anything nefarious or inhibiting going on.
In the case of the MIT license, I don't see much harm in throwing in a CLA because it's nearly impossible for Microsoft to do anything to break the terms of the license; however, while I am not a lawyer and this isn't legal advice, in general signing a CLA (including in this instance) often gives the other party rights to re-license the work and continue to use the work if they end up in violation of the original license. For instance, a hugely popular, third-party Minecraft modification platform called Bukkit was partially licensed under the GPL (v2, I believe); however, it turned out that this code was not permitted to be distributed as GPL because it linked to proprietary code, and Mojang was not eager to open source anything anytime soon. Because contributors submitted their contributions to be used under the terms of the GPL, and the project failed to meet the requirements of the GPL, contributors were then able to file DMCA notices of copyright infringement to take down the project.
Regardless of what license the project is filed under, as an individual contributor working for free for the community, the license of a project determines the terms under which I am sharing my work. I expect those to be respected in perpetuity, and I would like the right to rescind my contribution if the copyright holder is in violation of the license.
(Please don't turn this into a GPL vs MIT discussion)
EDIT: Echoing other responders here, if you downvoted I would really appreciate a comment on why.
> Regardless of what license the project is filed under, as an individual contributor working for free for the community, the license of a project determines the terms under which I am sharing my work. I expect those to be respected in perpetuity, and I would like the right to rescind my contribution if the copyright holder is in violation of the license.
The CLA is just another license to consider with the project; just like projects that have the "any later version" GPL clause. That little addition is important to some individuals.
I think a more successful argument would be against the lack of specific guarantees in existing CLAs, rather than CLAs themselves.
For example, the CLA could give joint copyright to the involved parties, but also agree to only publish the code under a license published by the Free Software Foundation.
CLAs, as you point out, are sometimes a necessary mechanism to ensure that a project can correct licensing issues when they arise. Obviously, through careful choice of licensing it may be possible to avoid the need for a CLA entirely, such as is starting to be done with the Apache License 2.0 by some projects.
So I would think of this differently: depending on the license, some projects may need a CLA for protections only available to them with one. For those projects, the community should encourage them to encode their intentions in the CLA to assuage fears and provide certain guarantees to contributors.
Finally, keep in mind that the entire world does not follow US contract/copyright law (or even the Berne Convention). In some countries, a CLA may be the only possible way to ensure that a project won't have problems later. While the current versions of today's "free software" licenses have generally tried to account for that, legal situations change over time.
> CLA could give joint copyright to the involved parties
Licensing to allow joint copyright isn't really a thing in the US. Anyone trying to craft a license that formalizes joint ownership after the fact is going to have a not-so-fun time trying to make that work.
No. But you have that backwards. The FSF is good because they protect everyone's freedom to use the software, and one of the ways they do that is through copyright assignment (not a CLA).
It also means that the company can sell proprietary versions with extended functionality, and the company is therefore motivated to not allow contributions to the free code which duplicates functionality which they sell.
However, since ChakraCore is released under a permissive license (like most corporate open source projects), Microsoft could do the same thing without a CLA. It matters more when something is under a GPL variant...
It's a good point, but it matters for Apache-licensed code, too, because Apache 2.0 has a self-destruct clause in the patent grant to keep all parties' lawyers in check. CLAs (like the one Microsoft uses) route around that.
The way this ends up working for Microsoft projects licensed under Apache 2.0 is that it essentially allows Microsoft to do anything with the contributions (as if it were licensed under MIT, with a liberal interpretation of the implied patent grant), but requires everyone else to continue abiding by Apache 2.0. Not exactly balanced.
EDIT: I'm totally confused about why people are having a problem with this comment.
Many licenses allow that without a CLA. The CLA allows the company to change the license of the code to whatever they want, which is usually inhibited by license compatibility...
Don't care really what you do, Microsoft. Your legacy of vendor lock-in, market monopolization, unethical business strategies and even bribery still lives on today with your toy OS. You set us back many, many years. Hopefully this free and open movement will be your undoing.
Unfortunately there are people here right now who are paid to shill your crap and constantly come up with excuses like "things have changed". Yeah right, like I will forget what you have done.