“The working group should not agree to freeze whatever syntax Chrome ships” (w3.org)
282 points by cylo on Feb 5, 2014 | 207 comments



The problem is the standards process exists to keep people from throwing their weight around to force rushed/poorly thought-out technologies into a platform some already accuse of fragmentation and bloat. And to the people saying "But this is Google!": throwing the process under the bus today because you think your favorite tech company can do no wrong screws everyone later when management changes. Management will change. The process is here to protect the web's interests regardless of whether any one actor's intent is good or not. If Google steamrolls everyone today, who will steamroll everyone tomorrow? The MPAA is on the W3C too.


I don't think that people have double standards for Google; it's that the potential flaws aren't obvious... Where are the competition and alternatives? We're stuck between making decisions too quickly and Google becoming the de facto standard.

EDIT: To be clear, I find Google's actions in this regard to be surprisingly impolitic and ill-conceived. Anyone who reads it should hear alarm bells.


Google should respect the desire of other parties to be given adequate time to review their proposals. Being told "you have to object NOW because we're going to ship" doesn't allow third parties to actually identify technical issues. It's like someone committing code without letting their peers review that code for flaws. It's a recipe for cementing flaws into the web platform. Intended or not, there is arrogance in the notion that external discussion of Google's proposal won't improve it.


It's like someone force pushing to production without letting their peers review that code for flaws. FTFY


I believe SPDY, er, Google HTTP 2.0, has that feature.


So it's almost like the IE 6 days except that in this case the standards actually are influenced by Google instead of MS just doing whatever the hell it wanted. So slightly better but still not good. Plus if there is anyone I trust to do what is right for the user it's going to be Mozilla not Google.


That argument seems isomorphic to demanding that all software design and feature innovation happen within the W3C committee process. Certainly that's not how most successful features have worked historically...


The historical processes boil down to:

1) Single Browser Ideates -> Single Browser Ships -> Time Passes -> Other Browsers Ship -> Standard. This caused delay in implementation and vendor prefix hell.

2) W3C Ideates -> W3C Bikesheds -> Browsers Implement. This causes things like AppCache that work on paper, but don't adequately cover developer use cases and address the needs of the platform.

The ideal process (in my opinion)

1) Single Vendor Ideates -> W3C Evaluates/Refines -> Browsers Implement Under Feature Flags -> Standard -> Feature Flags are removed. This process, depending on the size/scope of the change, should take about 6mo to a year. For a permanent irrevocable change to a cross-browser API? That is mature and prudent.

edit: formatting.


Your (2) is not correct. Most W3C specs are edited, if not outright proposed, by browser implementor teams. And of course nobody anywhere ever prevented browsers from shipping stuff under feature flags (most of ES6 has been in Firefox since the days of FF3/FF4, with slightly different semantics and with an opt-in behind a version flag, for instance). Hixie (HTML5 editor, CSS Selectors Level 3 editor) has worked at Google since 2005 and worked at Opera before; Shadow DOM's editor is Google's Dimitri Glazkov.


The last example seems implausible: the MPAA doesn't ship a browser and can't control features by fiat.

I guess I'm not understanding the firestorm here (rather, I am, but in an uncharitable way as YET ANOTHER !@#?! Apple vs. Google proxy war on HN). Google developed a feature they like and want to ship it. They want it to be a standard. Apple (at least this one Apple person) doesn't want it standardized yet. Google is noting that they are going to want to ship this anyway ("threatening to ship it anyway" if you swing that way).

So what? Surely this kind of argument happens any time you have a proposed standard from a single vendor. What's special about Shadow DOM that it can't tolerate this kind of disagreement?


As the Apple person in question, my primary objection isn't about whether I "want it standardized yet". It's more that this hasn't even reached step 0 of the standards process (writing an "editor's draft", a document that writes stuff up in standardese but hasn't necessarily been adopted by the WG yet) and Google is saying "we're shipping now and then will likely be unwilling to change it". Furthermore, it's not like there are only some edge cases to work out.

There are also substantive objections to the content of what is being shipped. Not only on the names (which Google invited comment on) but also on the intended semantics. It's pretty hard to usefully comment on the semantics without a draft spec and some time for discussion.

That's why most browser vendors try to make sure there is at least a draft, time for discussion, and rough consensus before shipping a feature enabled and unprefixed.

For what it's worth, Apple WebKit engineers agree with many Google Blink engineers about lots of standards topics, we actively collaborate on many things, and we actively support the goals of Web Components and of this particular styling feature, if not necessarily all the details. So your "proxy war" narrative is false.

However, I feel that Google is not being a good standards citizen in taking this particular action.


> MPAA doesn't ship a browser and can't control features by fiat.

DRM extensions (EME) have shipped in Chrome and IE already (with the whole W3C process steamrolled), so it seems that MPAA does control browser features — by proxy (Google Play/Xbox video store).


Missed the point. All W3C committee members influence the set of features covered by W3C standards. That's what the W3C committee is for.

But the allegation here is that Google is ignoring the standardization process and shipping features before getting signoff via a committee draft, and that's a bad thing. The MPAA can't ship features at all, so they would never have access to this trick without convincing a browser vendor to do it for them.


You're right, the MPAA example is inapt.

As for what makes Shadow DOM special? It's not special. The issue is that Google has made many public claims toward a Brand New Day of participation in standards with Blink. [1] Vendors and developers were beginning to have cautious optimism that upcoming standards wouldn't be as rushed and fragmented as they were in the past. This action directly contradicts the previous goodwill.

[1] http://www.chromium.org/blink#new-features


Just to be clear, Googlers (myself included) have been constantly participating in the standards process both pre- and post-fork. There's nothing "brand new day" here; conscientious standards engagement is just how we roll here on the Blink team.

Regarding charges of this being "rushed", note that it has been clear for 3+ years (since we started talking about these features publicly and engaging in the standards process for them) that we needed a way to style shadow DOM that enabled author and component-author styles to co-exist and be selectively populated into the eventual tree. This isn't new, nor has it been "rushed". Many iterations of the design have led to this point.

How many more years of bottle-aging & iteration do you suggest? On what basis?


The particular part of the Shadow DOM spec that sparked this discussion (the "cat and the hat" combinators) was introduced less than 4 months ago and was met with some objections just this last week at the CSS F2F meeting. So the 3+ years comment is a bit unfair in this context.

I certainly appreciate the level of investment Google has made here and I can sympathize with how it feels to just want to ship a feature already! I've been pumped for web components ever since the first pitch you and Dimitri gave us in a hotel suite a few years ago at TPAC. But keep in mind that both Microsoft and Mozilla have (failed) experiences in building component models for the web (e.g. HTAs). This isn't an easy feature to get correct. The web needs us to get this right and shipping it before there's consensus isn't how we're going to succeed in that regard.

-Jacob Rossi [IE team, but speaking on my own opinions]


Hi Jacob,

I sympathize with the hesitation that you must feel regarding a request to analyze a feature on a short timeframe. It is, however, the case that MSFT has in the recent past (Pointer Events, which are GREAT) used its prerogative to ship features ahead of standardization. That contrasts with the current scenario in which agreement by the WG on the names of the APIs _would in fact change our course_.

I noted the timeline as I understand it here: https://news.ycombinator.com/item?id=7187024

The web platform is behind. It's regrettable that we are, but that's the current situation. If you'd like to help, I encourage you to help weigh alternatives in the www-style thread.

Engagement on the content and not the process would go a long way at this moment.


We prefixed our pointer events implementation (which, yes, prefixes have issues and we should stop doing this IMO). Blink is talking about unprefixed, on by default. We updated the implementation and only removed prefixes once we could demonstrate consensus with other browsers (Candidate Recommendation, in this case, was one signal of that).

Matt and Rossen from our team were at the CSS F2F where this was discussed with some hesitation on design, and I'm sure they'll weigh in further on www-style (this API area's not my cup of tea).

A consensus process, while annoying at times and never perfect, makes for a better and more interoperable web.


I realize that I'm an outlier, but I support prefixing as a possible solution to this sort of thing; that's even more out of favor with the CSS WG than what's being proposed (AFAICT).

Looking forward to timely www-style feedback, it's much appreciated.


>Engagement on the content and not the process would go a long way at this moment.

In all honesty this is a terrible and frightening attitude for you to express.

It validates that:

* You have contempt for the process.

* You demand others shut up about it.

You are making it obvious that Google is a problem.


I'm suggesting if people in the appropriate working group feel as if a particular solution to a problem (regardless of how old the problem is) wasn't given due consideration, the notion of "speak now or we'll ship anyway" is not productive. I don't believe that Google in particular is a bad actor in the standards community. Far from it! I do feel as if something as fundamentally paradigm-shifting as Web Components needs care and thought to be done correctly and respectfully. Saying "we're going to ship this solution ASAP, any last-minute objections" puts unnecessary stress and time constraints on the task of making the best, most useful solution possible.


Progress delayed is progress denied. If you want a better web platform, that means wanting one that is different than the one we have today. It is also the case that standards committees are not outfitted with effective fitness functions (e.g., the ability to predict market success). The best we can do is to iterate and be data-driven. I do not know what "correctly" means in this context, and I submit that you don't either. What we _are_ doing is making the progress we can with the best available data we can gather (polyfilling via Polymer, building large-scale apps, observing existing libraries and their challenges, etc.). It's fine to critique the method, but bring data.

If you have specific concerns regarding their utility or design of these specs, I'm hopeful they can be aired as we continue to iterate. Shipping is not the end; it's a new beginning.


Then do everyone a favor and namespace your APIs so it's obvious they are not standards compliant and can be cleanly used at the same time as other browser implementations until the standard is ready.


Progress delayed is progress denied.

If that's how you really feel, why bother paying lip service to the w3c?


Meaningful progress is progress for the most people. To achieve that, it's often most effective to create coalitions to help you achieve that progress in areas you yourself cannot. This is where collaboration and standards come in.

What's going on now is a request for collaboration. It has been somewhat de-railed, but I have hope. Barring consensus, we feel the risks are outweighed by the gains for developers in having changed things in the bit of the world our codebase addresses.

Perhaps you weren't looking for a real answer, but there's one anyway.


You seem to be saying that the problem has been obvious for 3+ years. Has the solution also been obvious? Even to non-Googlers?

I haven't followed this closely, but from the surface it seems as if a couple of paragraphs in a public spec document 9 months ago could have defused a lot of these ostensibly reasonable-sounding criticisms.


A previous solution based on explicit naming of "parts" of shadow DOMs was withdrawn several months ago. An update was given to the CSS WG last November that outlined the changes. FWIW, the "::part()" solution would have _worked_. It's entirely feasible that we could have improved the world by providing Shadow DOM with that as the primary styling mechanism. Not ideal, though. So we have been willing to change course in response to feedback.

Here's the Blink bug: https://code.google.com/p/chromium/issues/detail?id=309504

All of this happened in the open, in consultation with other vendors (not necessarily at WG meetings, which aren't where you design features anyway).

Remember the context here: Googlers have been doing the heavy lifting on this front for _years_. It's exciting that others are starting to pay attention to the problem space. That they don't think providing Shadow DOM to users is as urgent as we do is their right. We, however, are willing to take some pain in this (relatively small) instance, should they not be willing to help us work through the naming issue in short order (which, as you can see from the thread, is the actual request; our goal is to improve things for web developers, preferably via collaboration).


The concern is that shipping a non-standard solution and then encouraging developers to use it is the first step toward lock-in. No matter how much you flag something as experimental, it will be used, and the developers who built things using it will move on. When other browsers later implement the standard, nobody goes back to build it into their projects -- they're working on new projects now. This is how Firefox can support Web Audio and still not have it work on the majority of sites that use it. A differing implementation is just as bad as a prefix, and hurtful to future developers who will have to code for it to reach a compatibility matrix.
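To make the compatibility-matrix point concrete, this is roughly the shim developers end up writing today (an illustrative sketch: webkitAudioContext is the real prefixed constructor, the rest is simplified and not any particular site's code):

    // Rough sketch of the feature-detection boilerplate that differing
    // implementations force on developers.
    var AudioCtx = window.AudioContext || window.webkitAudioContext;
    var ctx = AudioCtx ? new AudioCtx() : null;

    function playBeep() {
      if (!ctx) return; // no Web Audio at all: degrade silently
      var osc = ctx.createOscillator();
      osc.connect(ctx.destination);
      osc.frequency.value = 440;
      if (osc.start) {
        osc.start(0);
      } else {
        osc.noteOn(0); // older drafts named this differently
      }
      // ...and even this doesn't cover sites written against one
      // engine's behavior, which is the point above.
    }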


The problem is that it won't be your pain, it'll be other vendors' pain as they have to reverse engineer Blink's implementation.


The details of both the semantics and syntax in question have been widely discussed and understood. It's pure hyperbole to appeal to "reverse engineer Blink's implementation" (which, as you might be aware, is Open Source).

Here's a refresher for those who missed it: https://docs.google.com/a/chromium.org/document/d/1cxW4MtsDb...


The fact that web developers will then need to worry about differing implementations across browsers and -- pray not -- across blink versions, on the other hand, is not hyperbole.

If it's ready to ship with nobody else agreeing on how it should work, please do everyone a favor and ship it with a prefix. If only as a courtesy to fellow developers who will need to deal with this mess. Nevermind that you don't want a prefix. Use one.

Better prefix hell than different implementations using the same syntax.


Problem is, even if something is open source, that doesn't mean it will be easy to pull into and implement in your code base...

Maybe cat/hat Shadow DOM depends on some C lib that causes incompatibility with a lib you use, which means you'll have integration problems.


> (which, as you might be aware, is Open Source)

Being open source has never been an excuse to break the standards process. It's disheartening to see open source being used to justify this behavior again (first with PNaCl, now this).


That's moving the goalposts. The full context of your statement was:

> it'll be other vendors' pain as they have to reverse engineer Blink's implementation

Which was the salient point I was replying to. Quoting out of context to imply I made statements I did not is...disheartening.


What's special is that recently browsers have been much better about not randomly shipping features unless there is some consensus from the other browsers. Instead, they've been implementing them behind runtime flags, getting specs written and agreed on, _then_ shipping.

As in, recently there have in fact not been these sort of arguments because browsers have somewhat cooperated to iron out issues before shipping.


> So what? Surely this kind of argument happens any time you have a proposed standard from a single vendor.

The declarative nature of CSS means you have to settle on syntax in order to test things out, while also making it hard to deprecate syntax later (there are no meaningful conditionals). Attempted workarounds are things like the vendor-prefix disaster.

I was under the impression that post-split Blink was going to do the same thing Mozilla does and turn on unprefixed experimental features for Canary/Beta and flag them off in the release channel until things hit CR.

There isn't a lot of debate that the feature in question is desired but it seems like goog is jumping the gun. I'd guess it's because they have internal stakeholders (polymer, angular).

P.S. If anybody on the committee is reading and cares about a random guy on the internet's opinion: +1 ::shadow, ::shadow-all


"Apple (at least this one Apple person) doesn't want it standardized yet."

Downthread it's clear that Mozilla and Adobe are also against it.


It sounds like there's an issue here where Google hasn't actually proposed a standard. They have an implementation which seems to be incompletely documented, or perhaps not documented at all? I can't tell with just a little reading.

So, it's more than just an Apple v Google spat; the Google fellow is just saying "we don't want to take the time, so here, accept what we ship or gtfo".


If Google ships and freezes syntax then it puts huge pressure to just go with it. This is not the right way to do things.


The problem is the web today is not what a proper application platform should be. Developers have waited long enough; we need more features and it takes way too long for the W3C to deliver.

At the same time, HTML5 was a coup orchestrated by the WHATWG on the future XHTML spec. It was a mistake in my opinion because the original W3C spec was way better: we would not have the discussion we are having today about web components and the shadow DOM if the W3C spec had been chosen.

So I don't know. Is Google's way of doing things short-sighted? Or does it really solve some issues? Time will tell, but maybe the lean approach the WHATWG took wasn't that "future proof" after all.


>we need more features and it takes way to long for the W3C to deliver

We also need features that work across all major browsers.


It's kind of a catch-22. We do need features to work across all browsers but the standards body moves too slowly. So vendors push out features they think are good. Others implement them. This is how we end up with multiple vendor prefixes. Then eventually the standards group gets around to accepting it and the vendors update to use the standard.

Waiting for the W3C might be the "right" way to develop standards but it also slows progress. I'm not sure which is worse. Even using the vendor prefix process, things work in more browsers now than they did in the past.


I don't know about the CSS stuff that spawned this thread, but the rest of the shadow DOM stuff is polyfilled to work in other browsers:

https://github.com/Polymer/ShadowDOM


This is speculation: The reason Google is moving forward quickly with Shadow DOM implementation is that this polyfill is dog slow. Think about what it takes to make DOM invisible in the browser in a polyfill: overriding every DOM method to filter out shadowed nodes. I've heard reports of 10x slowdowns on DOM operations with this polyfill in place. The kicker? Web Components (specifically Custom Elements) lose almost all their encapsulation without Shadow DOM. Those of us who are betting heavy on Web Components as The Future™ are pretty anxious about this.
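To give a rough idea of why it's slow: the technique boils down to intercepting DOM traversal so shadowed nodes never leak out. A minimal sketch (the real polyfill uses wrapper objects and its own tree bookkeeping; isShadowed here is a made-up stand-in):

    // Every traversal API that could expose shadowed nodes has to be
    // patched (or every node wrapped), and callers pay a filtering cost
    // on each call instead of getting the engine's native NodeList.
    var nativeQuerySelectorAll = Element.prototype.querySelectorAll;

    function isShadowed(node) {
      return !!node.__insideShadowRoot__; // stand-in for real bookkeeping
    }

    Element.prototype.querySelectorAll = function (selector) {
      var raw = nativeQuerySelectorAll.call(this, selector);
      var visible = [];
      for (var i = 0; i < raw.length; i++) {
        if (!isShadowed(raw[i])) visible.push(raw[i]);
      }
      return visible;
    };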


> The reason Google is moving forward quickly with Shadow DOM implementation is that this polyfill is dog slow.

I'm not sure what you mean by "moving forward quickly". They've been hammering on this in the public eye for over three years. How many more years of salary should they pay before they get it out the door? At some point, you just have to ship.


When the features don't work the same way in every browser, it hurts the platform. Poorly designed features are impossible to take back (see also: AppCache). Web Components is moving quickly, but there has to be an element of care. Shipping features before they're ready is driven by marketing and PR, not engineering maturity.


If you want a proprietary platform controlled by a single vendor, code for iOS or Android.


This is a bit jaded, and I mean it light-heartedly. But I'd be more inclined to listen to the W3C if they'd shoot down DRM. I no longer believe they have the web's interest at heart. Might as well let Google steamroll, if they're bound to steamroll things themselves.


But Google was one of the main driving forces behind the DRM proposal…


"Google" was also one of the main forces combating DRM in HTML. Amongst others, the guy with his foot in his mouth in this thread (Tab) was one of the most vociferous opponents of the DRM proposal.

(there's also, of course, Hixie, but it's not surprising that his opinion wouldn't be informed by his employers)


"Google" was also one of the main forces combating DRM in HTML.

Bullshit. Utter and complete bullshit.

Google was one of the initial enablers of DRM on the web. Without Google enabling it, nobody could have used DRM on the web and it would have been a non-issue. If they were "against DRM" they should have stood their ground.

But they didn't. Because to them being able to sell ChromeOS with Netflix support was more important than the integrity of the open web itself.

And so it shall stand in the history books: when the web was poisoned with DRM, it was done first by the hands of Google.

There's no way they can claim they "were actually against it" when they were the ones doing the original sin. Fuck Google. Seriously fuck Google for this one.

Fucking standards-maimers.


Quit with the histrionics. You can be passionate without the informationless drama.

Read what I wrote again. I never made any claim like Google "were actually against it". I said that several of the most outspoken in combatting the proposal in the HTML working group (which is still an editor's draft, btw, and certainly not a fait accompli) are employed by Google, and they will likely continue fighting it. Notable among them has been Tab[1] and, not for nothing, hixie[2], the editor of the HTML spec at the WHATWG, where he refuses to put any sort of DRM into a specification for the open web.

That's exactly what you hope to see when a company doesn't force the people it pays to be on a standards body to toe the company line.

[1] http://lists.w3.org/Archives/Public/public-html-admin/2013Fe... (with many others on that list)

[2] https://plus.google.com/107429617152575897589/posts/iPmatxBY... (see also: posts to public-html)


The simple fact of the matter is, Google were the first company to ship production hardware or software that used HTML5 EME, and what's more it was incredibly locked down. Their HTML5 EME implementation only plays back on approved Chromebook hardware from official suppliers that's locked down from the hardware up to prevent users from running any unauthorized applications on the system. Turning off the restriction on unauthorized code also disables the decryption module. It's thanks to Google that soon there'll be 100% standards compliant HTML5 sites that can only be legally viewed on locked-down, single purpose web browsing machines from approved manufacturers.


The original announcement implied that name-change bikeshedding was the only thing holding up this wonderful feature. TFA counters that disagreements remain over "the syntax and the semantics". ISTM that this kind of difference of opinion is expected and nearly unavoidable, but doesn't portend especially dire consequences. The difference between "feature-switch" and "google-kicks-ass-hell-yeah-suck-it-wc3-feature-switch" was a PITA in the past, but now we have tools to abstract away the details. If web developers collectively decide that the pending Chrome feature is more broken than simply having obnoxious names would imply, then we'll just have to endure another round of this when Mozilla or whoever comes out with the next version.


Well, they just made Dimitri Glazkov famous.


Here's the reply: http://lists.w3.org/Archives/Public/www-style/2014Feb/0138.h...

the money quote: "We feel the standard is sufficiently advanced, the remaining issues sufficiently small, and the benefit to authors sufficiently great to justify shipping sooner rather than later."

the Shadow DOM is pretty awesome, so I'll agree with the "benefit to authors sufficiently great" part. No comment on the rest.


This part of his reply annoyed me: "Attributing an ultimatum to my words is blatantly violating the Principle of Charity, especially since I've very explicitly clarified that I'm talking about the latter."

Given his words were "Whatever API gets shipped will be frozen almost immediately. If you want to suggest name changes, as we brainstormed a bit at the f2f, do so RIGHT NOW or forever hold your peace." ... I'm a proponent of the principle of charity, but I don't know how to read that as anything other than an ultimatum.


This is the other way of reading it: He's essentially saying, "Once this is shipped, the cat will be out of the bag and we will not be able to put it back in, so if you need the cat for something, you should do it now." That's not what is traditionally meant by "ultimatum," because an ultimatum implies that you want someone to do something and will do something else to injure them if they fail to meet your demands.


Your cat analogy still sounds like an ultimatum to me: threatening someone with not being able to work with the cat unless they acquiesce to their demand to work with the cat on a very short timeframe.

Here's how the email fits your definition of an ultimatum:

Demand: that the working group suggest changes to the proposal "RIGHT NOW."

Injury if they don't comply: Google will ship the API anyways and freeze it so the working group can't later change it without breaking compatibility with a major web browser.


> We've been working on Shadow DOM in the open for 2 years

There are references in the thread to an f2f that happened a week ago, but this implementation has been evolving for a long time now. The most vocal respondents only seem to be concerned about the phrasing, and haven't indicated publicly that they have anything technically substantive to add to the discussion anyway.


The technically substantive stuff is all in existing spec issues. http://lists.w3.org/Archives/Public/www-style/2014Feb/0152.h... has some links if you care.


Which is replied to here: http://lists.w3.org/Archives/Public/www-style/2014Feb/0158.h...

Personally the only thing that I think was done incorrectly here was the wording of the initial e-mail.

The web needs to move forward


Agreed - there was even a terminology explanation and apology message in there too.

The first couple of messages from the original post on seem a little hyperbolic, but the fire sort of put itself out in short order.


There are some fundamental disagreements here about whether certain cases are "edge cases", if you keep reading the threads.


Thanks - I did continue to read and looked into the spec bugs. I guess I didn't consider Boris, Dimitri, or Tab as being all that outspoken about the so called 'freeze' - one acknowledged the poor word choice, but that's about it.


No, that is not the form of an ultimatum. In an ultimatum, the threat will only come to pass if the target does not comply with the demand — it's an either-or situation, with the threat used as leverage to get what you really want. Google isn't saying, "Either suggest some stuff or we will ship the Shadow DOM API. We don't want that to happen, right?"

The message was of the form of an advisement rather than an ultimatum. It is similar to "Hey, everybody, I'm putting these cupcakes out. Get them fast because when they're gone, they're gone." I don't think anybody would hear that and say, "Hey, stop throwing around ultimatums." The "threat" is not a threat — it's a heads-up about something that is going to happen, so that people have a chance to respond adequately.

You can still say that Google was not considerate enough in what they did, but there is definitely a reading of the message that is not an ultimatum.


>a heads-up about something that is going to happen, so that people have a chance to respond adequately.

Using the passive voice here doesn't change the fact that the "something that is going to happen" is something that Google is going to do.

"You should give me your wallet or something is going to happen to you. Just a heads-up."


You're still confusing two constructions. There's no "or" in Google's message. They're going to do what they're going to do, regardless of anyone else's actions. They're giving advance notice, but not an opportunity to change their plan.

It's the same distinction as between these two sentences:

"Give me your wallet or I'll drop this piano on you."

"I'm going to drop a piano, you might want to get out of the way."


The big difference here is "something is going to happen" vs. "something is going to happen to you". A threat or an ultimatum implies that you're the specific target of the action. An advisement or notification implies that someone is going to take some action that may possibly affect you, but isn't specifically directed at you.

The boundaries get fuzzy when billion-dollar corporations get involved because they tend to wield a lot of power. However, most people would say that if you're a startup and a competing startup says "We're going to launch a new feature; better get ready", that's not an ultimatum. (As startups go, that's pretty damn polite.)


It reads as an advisement, but it really is an ultimatum. The analogy you provide about advisements assumes that something is going into effect which the adviser has no power over. However, Google does have the choice whether or not this goes through. It's not the weather, it's not a god; there are people that make decisions and they can be modified. That is where the ultimatum is, just disguised as an advisement.


You criticized the analogy and ignored the entire rest of the text which made my point entirely clear. Why do people do this?

OK, fine, here's another analogy: "I'm going to be leaving in 10 minutes, so if you have any questions, you'd better ask them now." The thing being "threatened" here is entirely in the speaker's control — he could opt not to leave. So is that an ultimatum? I don't think it would generally be viewed that way. Why? Because there is no threat being used as leverage to get something — the speaker (i.e. Google) isn't really making demands here, but rather stating what it's going to do. The thing being "threatened" will happen regardless; this is just an advisement that it will happen so that you can respond accordingly.

Put another way, the "threat" component of an ultimatum can't be unconditional. It is conditioned on noncompliance. AFAIK that's the defining characteristic of an ultimatum. It is possible to read this as being a conditioned threat, but it seems entirely reasonable to me to read it more in the spirit of "I'm going to do something, so here's your chance to get ready." It's entirely possible that Google doesn't care if it receives any further input (which is theoretically the "demand" in this ultimatum).


What's being threatened is that something will be shipped and frozen without review, input, or comment, if comment isn't made immediately - not that something is going to be shipped.


Yeah, it's just doing a little dance around the word "ultimatum". It's still the same thing.


Exactly. Not shipping this feature hidden behind a flag is an ultimatum.

If the feature is hidden behind a flag, then only developers experimenting with the feature will turn it on and use it, but they will never venture to make a production app since they can't expect their users to also have that flag turned on.

If the flag is shipped in the on position, developers will take that as a sign to start shipping features using it to production.


By "be frozen" he didn't mean "Google will declare it to be unchangeable", he meant "users relying on it will prevent any of us from being able to change it." If it's an ultimatum, it's the users of the feature that are holding the gun, not Google.

Tab clarified what he meant in a response:

    By the way, I deeply apologize for the confusion revolving around my
    use of the term "freeze".  That term is used internally (within the
    Blink team) to indicate something that's no longer changeable (or at
    least is probably too difficult to change) *due to compat pain*.  I'm
    aware that there are other uses of the term, like "feature freeze",
    that imply a much more deliberate *decision* by somebody to stop
    making changes.
    
    I forgot about those additional meanings and was intending just the
    more popular internal meaning.  While I've certainly used that meaning
    in public, and have heard other people use it or other relatively
    close phrasings, I should have given more thought to the term and used
    something less likely to be confusing.


By "be frozen" he didn't mean "Google will declare it to be unchangeable", he meant "users relying on it will prevent any of us from being able to change it."

And that's only a problem because Google decided to ship it now, RIGHT NOW instead of caring about the standards process and maybe holding off for a little bit.

Nobody dies if Google ships a little bit later.


Maybe, but "We want to ship now!" is a VERY different statement than "We want to lock in the API now and refuse to change it later!". I have a great deal of sympathy for the first one.


I disagree that the Shadow DOM is pretty awesome. I think scoping style is valuable, but building components that are exposed as new tags is not appealing given the vast complexity of the implementation and the limitations of tags.

Markup has a very weak type system (strings and children), which makes building complex UIs more painful than it has to be (this also stands for markup-driven toolkits like Angular and Knockout -- where the equivalent of main() is HTML markup and mostly declarative). Markup isn't a real programming language, and it's very weak compared to a true declarative programming language.

JavaScript however is a real programming language with all of the constructs you need for building extensible systems. For building anything complex (which is where Shadow DOM should shine) you will need to use extensive JS, you will need your Shadow DOM components to expose rich interfaces to JS... At which point, why are you still trying to do mark-up first -- it's something that's more "in your way" than helpful.


Thank you! I thought I was the only one who felt this way. I truly do not understand why Google feels that application composition should happen at the presentation layer rather than the code layer, particularly when the presentation layer is as weakly typed as HTML. This was tried and failed in the very first round of server-side web frameworks back in the mid-late 90s. More recently, the complexity of Angular's transclusion logic should have clued them in that this is an unwieldy idea.

I agree that some kind of style scoping construct would be a good addition, and far simpler than ShadowDOM. Simple namespacing would be a good start. It would be a more elegant solution to the kludgy class prefixing that has become common (".my-component-foo", ".my-component-bar", etc.)


Well, for one thing, div-soups are hard to read, create deeply nested DOMs, and lack semantics or transparency. If you're trying to preserve the nature of the web, which is transparent, indexable data, one where "View Source" actually has some use to you, then having a big ball of JS instantiate everything is very opaque.


The phrase "div-soup" makes me reach for my revolver. It seems to be a straw man that means "Either you're for Web Components or you're for the worst of web engineering."

- How does ShadowDOM (or Web Components more generally) make your DOM shallower? It's still the same box model. Deeply nested DOM structures are usually the result of engineers who don't understand the box model and so over-decorate their DOMs with more markup than is semantically or functionally necessary. Nothing in ShadowDOM (or, again, Web Components) changes this.

- Are custom elements really more transparent than divs? If "View Source" shows <div><div><div>...</div></div></div>, do you really gain much if it shows <custom-element-you've-never-heard-of-with-unknown-semantics><another-custom-element><and-another>...</etc></etc></etc>? Proponents of Web Components seem to imagine that once you can define your own elements, you'll magically become a better engineer, giving your elements nice, clear semantics and cleanly orthogonal functionality. If people didn't do that with the existing HTML, why will custom elements change them? At least with divs, I can be reasonably sure that I'm looking at a block element. Custom elements, I got nuthin'. They're not transparent. They're a black box.

- Finally (and more importantly), we already solved the "div-soup" problem. It was called XHTML. Custom elements in encapsulated namespaces! Composable libraries of semantically-meaningful markup! How's that working out today? It's not.

TL;DR: a common presentation DTD is the strength of the web, not its weakness. Attempts to endow web applications with stronger composition/encapsulation should not be directed at the DOM layer but at the CSS and JS layers above and below it.


1. Shadow DOM scopes down what CSS selectors can match, so deep structures can hide elements from expensive CSS rules.

2. Custom Elements promote a declarative approach to development, as opposed to having JS render everything.

3. XHTML was not the same as Shadow DOM/Custom Elements. XHTML allowed you to produce custom DSL variants of XHTML, but you still ended up having to implement them in native code, as trying to polyfill SVG for example would be horrendously inefficient.

4. The weakness of the web is the lack of composeability due to lack of encapsulation. Shit leaks, and leaks all over the page. Some third party JS widget can be completely fucked up by CSS in your page and vice versa.

A further weakness is precisely the move to presentation style markup. Modern web apps are using the document almost as if it were a PostScript environment, and frankly, that sucks. We are seeing an explosion of "single page apps" that store their data in private data silos, and fetch them via XHRs, rendering into a div-soup.

The strength of the web was publishing information in a form where a URL represented the knowledge. Now the URL merely represents a <script> tag that then fires off network requests to download data and display it after the fact. Search engines have had to deal with this new world by making crawlers effectively execute URLs. I find this to be a sad state of affairs, because whether you agree or not, the effect is to diminish the transparency of information.

You give me an HTML page, and I can discover lots of content in the static DOM itself, and I can trace links from that document to other sources of information. You give me a SinglePageApp div-soup app that fetches most of its content via XHR? I can't do jack with that until I execute it. The URL-as-resource has become URL-as-executable-code.

IMHO, the Web needs less Javascript, not more.


Both are needed! Javascript is great for portability of apps that would otherwise be done in a native environment (you wouldn't want to index these anyway). Isn't there a standard MIME type to execute JS directly in browsers? There should be, if not. If you care about being searchable and having designs that are readable on a variety of devices, powerful and degradable markup is very useful.


Or search engines could use URLs with a custom browser that man-in-the-middles XHR and WebSockets to effectively crawl APIs, since the APIs theoretically are semantic by default.

execute url, index all XHR and websocket data, follow next link and repeat.


> If "View Source" shows <div><div><div>...</div></div></div>, do you really gain much if it shows <custom-element-you've-never-heard-of-with-unknown-semantics><another-custom-element><and-another>...</etc></etc></etc>?

You can extend the semantics of existing elements so you'd actually have <div is="custom-element-with-some-unknown-semantics-but-its-still-mostly-a-div">. Unextended tags are for when nothing in the existing HTML spec mirrors the base semantics you want.

Of course nothing stops people who did bad things before from doing bad things in the future, but it doesn't make tag soup worse.
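For anyone who hasn't seen it, registering such an extension looks roughly like this under the registerElement API as drafted at the time (the element name and behaviour are invented for illustration, and the exact API shape is still in flux):

    // A type extension: the element stays a plain <div> for parsing,
    // styling and accessibility; the custom behaviour is layered on top.
    var proto = Object.create(HTMLDivElement.prototype);

    proto.createdCallback = function () {
      // Runs when <div is="x-collapsible"> is parsed or created.
      var self = this;
      self.addEventListener('click', function () {
        self.classList.toggle('collapsed');
      });
    };

    document.registerElement('x-collapsible', {
      prototype: proto,
      extends: 'div'
    });

    // Markup: <div is="x-collapsible">...</div>
    // Script: document.createElement('div', 'x-collapsible');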


The Custom Elements[1] and Shadow DOM[2] specifications have little to do with each other. The former is useful for defining new elements in HTML, along with properties and methods. The latter can be used to encapsulate the style/dom of that element's internals. So each technology is useful by itself and can be used standalone. When used together, that's when magic happens :)

[1]: http://w3c.github.io/webcomponents/spec/custom/ [2]: http://w3c.github.io/webcomponents/spec/shadow/
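A small sketch of the two used together, with the v0-era names Chrome currently exposes (createShadowRoot, registerElement, createdCallback); the element name and styles are invented for illustration:

    // Custom Elements provide the new tag; Shadow DOM gives its internals
    // their own scoped DOM and styles. How (and whether) outside pages can
    // style those internals is exactly what this thread is arguing about.
    var proto = Object.create(HTMLElement.prototype);

    proto.createdCallback = function () {
      var root = this.createShadowRoot();
      root.innerHTML =
          '<style>span { color: red; }</style>' + // applies only in here
          '<span>hello from the shadow tree</span>';
    };

    document.registerElement('x-greeting', { prototype: proto });

    // <x-greeting></x-greeting> now renders its encapsulated content.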


While you're perfectly allowed to disagree, it sounds like what you're saying is this:

"Collections of div-soup activated by jQuery plugins are the way to write maintainable web applications that make sense"

It's not as though Javascript has no role whatsoever in custom elements, but really, there's a lot to be said about how this way of working will be a huge improvement over the current jQuery + div-soup status quo.


No, that's not what I meant.

I'm saying that DOM through its relationship to HTML has weaknesses that make it unsuitable for building application components out of. "jQuery-enabled div-soup" is an example of how mixing presentation with model and logic yields unmaintainable results.

I have been interested in React.js recently, since it provides an interface to create reusable components and to use them inside a rich programming language with full types. I think that's a better example of a competing idea.

My experience is with building single page apps from scratch, so maybe there's a common use-case (embedding a twitter widget, or a 3rd party comment system in a blog) that Shadow DOM and Custom Components will address that I'm not familiar with.


FB React is a good example because it's living more in the presentation layer. But Custom Elements offer some things that React doesn't (as far as I'm aware).

One is better encapsulation, another is a well defined styling system (although obviously this article shows that this is not a super simple problem to solve, I'm certain that a good way of doing this will be around before too long) --- finally, and the most important thing, is that it's just baked into the platform itself, so interop between different frameworks is less of a pain.

For instance, suppose you want to use a particular Ember component in your Angular app. You probably don't want to include the entire Ember environment, and you want it to play nicely with Angular's idea of data binding and local scopes. Can you even do this? If you can, how much effort does it take, and how much does it degrade the application?

So, we've got: interoperable components/widgets. Easily style-able widgets. Elements with some semantic purpose. Simplified documents. Reusable templates (which, once HTML imports are pref'd on by default, should be easily host-able on CDN hosts).

There are a lot of benefits to baking this into the platform, despite making the platform an even bigger, crazier mess than it already is. It should hopefully give us better (and better designed) tools to work with.

Granted, I'm not saying it's going to solve every (web) problem ever, nothing ever does.


Yes, yes, a million times yes.

The biggest problem with this weak type system is obvious with CSS3 matrix transforms. CSS3 matrix transforms are the biggest bottleneck preventing fast updates to many DOM elements. Without fast updates of many elements across an entire page (window), pulling off the awesome smoothly animated effects found in modern desktop and mobile operating systems is pretty much impossible, especially in a system that implements immediate-mode graphics over a retained-mode one (the DOM).

Marshalling matrix3d updates from numbers in an array of length 16 to a stringified array to apply it to an element, only to have the browser have to convert that stringified array back into an array of 16 numbers is insanity.

If you want performance, you need a more robust type system than just strings and children. I'm an engineer at Famo.us and we would absolutely love it if we could do something in JavaScript like element.set3DTransform(someLength16Array). We could simultaneously update thousands of DOM elements if arrays and typed arrays were accepted instead of stringified values.
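For anyone who hasn't hit this, the round trip looks like this today versus the array setter wished for above (set3DTransform doesn't exist; it's the kind of API we'd want):

    // Today: 16 numbers -> string -> (inside the engine) 16 numbers again,
    // per element, per frame.
    function applyTransform(element, m) { // m is a length-16 array
      element.style.transform = 'matrix3d(' + m.join(',') + ')';
    }

    // What we'd love instead (hypothetical, does not exist): hand the
    // engine a typed array directly, no string building or reparsing.
    // element.set3DTransform(new Float32Array(m));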


Yeah, the CSS OM is really horrible too. CSS Animations is another area where you end up feeding huge generated strings into the DOM -- in theory Web Animations is meant to improve this, though personally I feel like the API is too high level and ends up being really large because of this :(.

In your example, I think it'd only be a small patch (for WebKit, where my experience is) to optimize "elem.style.transform = new WebKitCSSMatrix(a,b,c,d)" without intermediate stringification. Mozilla doesn't expose a CSSMatrix type unfortunately. I've done some similar things for other CSS properties in WebKit -- have you considered submitting a patch? I've found the WK guys super receptive to small optimizations which don't change observable behavior (i.e.: you can't tell if the WebKitCSSMatrix was stringified or not currently) like that.


We're not über familiar with the internals of the browser or how to go about submitting a patch that fixes this. We did talk to people at Mozilla about this, but we still have to follow up on that.

Do you contribute to this area of Webkit? I'd love to chat more about this with you. Email is in my profile. Use the famo.us one.


The following reply does not inspire confidence:

Sorry, this is my fault. These things were defined in the spec before, but we sliced them out for a separate spec, which I was supposed to write and haven't gotten finished yet.

http://lists.w3.org/Archives/Public/www-style/2014Feb/0113.h...


So Google is taking their cues from the Internet Explorer playbook.

Having been in (and out) of standards wars at Sun and NetApp, it really brings home how challenging doing "standard" stuff is. The original IETF standards process was really powerful at keeping out cruft: it required "fully documented" and "two or three interoperable implementations, of which source code must be available for at least two of them." The goal being that someone could sit down and write an implementation, and that there were at least two pre-existing implementations they could compare against for interoperability issues/questions and testing.

But standards break down when it comes to captured market share. If your market-share capture depends on your "value add", then you don't benefit if anyone can implement the same thing and you have to stay compatible with them.


When did the IETF process change? From looking at SIP (with all its combined RFCs, it's one of the largest standards), there's plenty of insane cruft. There's even an RFC that takes delight in coming up with crazy messages that are technically valid and suggestions on how implementations should try to guess the intent of the message. SIP goes back to the late 90s, so the process must have been corrupted by then.


Understand I've got some emotional baggage here[1] :-)

It changed when it became more relevant than ISO; I put it right at the end of the IP/ISO war, probably 1997, 1998. Basically up until that point the people who were motivated to subvert standards efforts pretty much ignored them: they had their own transport (OSI), their own mail services (X.400), their own directory services (X.500), and basically everything else. IP and its "hacked together" stuff was pretty much universally considered "on its way out" by the "players."

When it became clear that IP wasn't on its way out, and in fact it was the burdensome and complex OSI standards that were going nowhere, the movers and shakers switched tactics, invaded the IETF meetings (which did have an abusable community participation process), and drove the organization off a cliff in order to preserve their interests.

A really, really good example of that was SIP and IPv6, both of which started out pretty reasonably (because neither the phone companies nor the networking companies thought either was going to be relevant in the 7-layer OSI world) until, whoops, that's where the action is, so let's get in there and "fix" it.

[1] I sat in front of 500 engineers and explained that Sun was prepared to hand over any proprietary interest whatsoever in the RPC and XDR stacks (which had been "standards track" RFCs before that had a more explicit meaning, and so had sat there implemented by everyone but not officially 'standard'), only to have Microsoft and the guy they had hoisted out of Apollo/HP, Paul Leach, invest thousands of dollars in derailing that offering, for no reason other than to try to resist anything compatible being out there. It was redonkulous in its pursuit of killing ONC/RPC at every venue. But like I said, that just made it personal for me.


Why is anyone acting the least bit surprised here? Google built Chrome for one reason and one reason only. So that they could control the development of web technologies.

This is the tale of the scorpion and the frog playing itself out as usual.

http://en.wikipedia.org/wiki/The_Scorpion_and_the_Frog


This is because Google are trying to turn Chrome into an app platform to rival Cocoa, while almost every other web player (except Mozilla) has a vested interest in keeping the web a document viewer platform.

The recent arguments regarding Adobe's arbitrary region stuff were very enlightening on this.


I think you're thinking of entities like "Apple" where it is more useful to think of "the people working on WebKit at Apple".

WebKit is what they do, all day, every day. They love the web, that's why they chose to go get a job working on it.

To say that the people working on WebKit don't want the web to succeed as an app platform is not something I have ever seen. Go hang out in #webkit on IRC and see what you think.

Edit: oh, right, so what IS going on? Exactly what it looks like: once you ship an API, as Tab says, it DOES become very difficult to change due to compatibility requirements. That's why you do API review. People are saying, "why are you strong-arming this through?" It's not malicious.

I do also have sympathy for Tab. Progress on the web has a committee model, and committees suck. I don't know how they ever get anything done. Maybe it would be better if everything just fragmented, at least it'd move faster.


Actions speak louder than words though.

Apple have already integrated and deployed the Adobe CSS regions stuff; Google say it's just not their priority this year and would have performance implications on mobile. This is the weasel way to say they don't care, much as Apple are here. The devs may feel otherwise, but they are beholden to their managers, who will be well aware of the strategic implications of their work.

Google see Chrome as the best way to bust the iOS App Store, and as such Apple will mysteriously make the e-books-in-WebKit vision more compelling in the near future than the in-browser app development one.

I'm not saying I agree with Google here (though more for tech reasons), just that we need to be clear that the web is experiencing a schism, and that there are certain subjects which it is hopeless to expect agreement on. This is one of them.


Are you arguing that if you care about native app platforms you aren't interested in stuff like the Adobe CSS regions stuff?

One of the biggest changes to Cocoa in iOS 7 was TextKit. https://developer.apple.com/Library/ios/documentation/String...

This is concerned with the same sorts of things as regions (though it's more than that), and developers really, really wanted it.


I'm arguing that different browsers mysteriously prioritise feature development to be in line with the strategic objectives of the larger organisations that they are in, and that this inevitably leads to conflict where those objectives are directly opposed.


Yeh, but Apple are just as guilty of ramming things into the web platform.

Take srcset, for example: the value it adds as a responsive-image solution is up for debate, but there's no hope of Apple dropping it in favour of something else -- this is pretty clear from various email lists and F2F meetings (to the extent that Chrome felt they had no option but to support it).


What do you think Chrome people at Google do all day? Cold-call ads leads?


> almost every other web player (except Mozilla) has a vested interest in keeping the web a document viewer platform.

How do you explain the fact that Mozilla representatives at the WG are also opposed to Google's move here then?


Those are mere technical arguments.

Apple and MS, for example, have active interest in preventing Google's vision from happening.


The fact that Google is willing to steamroll over consensus with its market power even in the face of technical arguments is precisely the problem here.


Why should Google's product roadmap be subject to a consensus of their competitors?


> Why should Google's product roadmap be subject to a consensus of their competitors?

...because they're attempting to shape standards around their product roadmap, in committees in which their competitors also participate?

You can tell your competitors to go fuck themselves and do whatever you want with your roadmap, or you can make standards promulgation and compliance a part of your roadmap, but you don't get to have your cake and eat it too.


Apple have done it with web standards too - there are no innocent parties.


Then presumably you agree that Google should be called out for this and no longer treated as a trustworthy custodian of the open web.


They never were nor did they claim to be.

These working groups are working groups second, and political battlegrounds first. I think the pace of the WHATWG has proven the futility of such working groups.

We see this time and time again, with various technologies. It's never worked well. And if Apple were doing this, 3/4 of this website would be cheering them on even though Apple's been pretty bad about this sort of thing historically.


> They never were nor did they claim to be.

Then how do you explain this: https://www.google.com/takeaction/ ?


Microsoft apologists asked the same thing in the late 90s and early 00s. That did not end well for anybody. So here are your answers: 1. because it's a douche move, and 2. because it's bad for the platform and bad for just about everybody.

Other than the one who ends up with a monopolistic lock-in over the whole system for half a decade, maybe.


I feel like the Microsoft comparison is a little unfair. I doubt anyone would have complained much if IE and ActiveX were open source projects distributed under the Apache 2.0 license.


Given that ActiveX was something that by design wouldn't have worked on non-Windows machines (since it was, if I remember correctly, pretty tightly tied into COM/services stuff) - I'm pretty sure it being open source wouldn't have changed anything.


Exactly my thoughts. The web is a shitty app platform. And because it's a decentralized platform – the specification is developed separately from the implementations – it's moving forward too slowly.

Currently, the native platforms and the web are a mess imho. Tons of web standards, Android, iOS, Linux, Windows, OS X, etc. Web and native platforms have different strengths, but there's no platform that combines them. I wish there was one solid, well-designed platform for both interactive documents and applications, and for both mobile devices and desktops.

I'm (no longer) a fan of Google, but I'm beginning to think that if Google did this with Chrome, it would be a positive thing. Imagine the massive amount of programmer man-hours it would save.


> I'm (no longer) a fan of Google, but I'm beginning to think that if Google did this with Chrome, it would be a positive thing.

Just so we're clear, you think that it would be a positive thing if Google unilaterally dictated Web standards by using its market power?


My exact position is this. Imagine Google built an open-source, very well-designed doc & app platform, with the advantages of both the web and native platforms. If this platform got widely adopted, it might be a good thing.

Edit: Linux is similar to what I'm talking about except it's a lower-level platform. It's open-source and has a majority market share in many segments (servers, supercomputers, smartphones).


> Imagine Google built an open-source, very well-designed doc & app platform, with the advantages of both the web and native platforms. If this platform got widely adopted, it might be a good thing.

I don't think it's a good thing if Google achieves this by making Blink into a de-facto standard, effectively making Blink (a large pile of C++ code controlled by a single vendor) the only viable browser engine.


So in other words, imagine if something that would never happen, happened, it might be good. Have I got that right? :P


It's unlikely to happen but not impossible :) What part of the scenario makes you think this won't happen btw?


It's just not in Google's interests. Their current plays (Chrome, ChromeOS, Android) illustrate their interests.


It seems likely that if this platform were to get widely adopted, it would cease to be open-source.


It would get forked.


> Tons of web standards, android, ios, linux, windows, osx, etc.

There was a time in the not too distant past that there was pretty much one dominant computing platform, and it wasn't exactly elysium.

I get the problems with web standards (right now I have a front end engineering job) and the multiplicity of platforms, but I'll take that over monopoly monoculture.


True, but 1) it was not a well-designed, good platform, and 2) it wasn't free and open-source. Linux is becoming a monopoly too (and already is one in some segments).


No monopoly will ever meet your first criterion over a long period of time. Absent competition and a plethora of competing engineering platforms, there's simply no pressure to make things well designed, and no competitive selection to help distinguish what even is well designed.

The idea that we can have the good parts of a monoculture but "do it right this time" and avoid the horrific downsides is a fantasy.


I'm pretty sure Windows at the time (before the monopoly) was a preferable platform to its competitors.

The problem is that no platform can be built that satisfies all possible use cases, present and future alike.

Wishing Google would descend from the heavens and make a perfect web app platform/protocol is akin to believing a god will descend from the heavens and grant everyone's wishes (even when they contradict each other).


Mozilla has a vested interest in keeping the web a "document viewer platform"? Are you serious? Have you even heard of Firefox OS?


> while almost every other web player (except Mozilla)


Oops, my bad. I didn't see "except". Ignore this! (assuming that it wasn't edited in :))


One thing that irritates me is that Apple still don't support WebGL in iOS Safari. Even on the desktop version it has to be manually switched on as a 'developer' tool. And it's clearly technically possible - iAds can use it!
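
For anyone who hasn't hit this: the usual detection dance is a rough sketch like the one below (not tied to any particular library), and on iOS Safari it simply falls through to the fallback branch.

    // Rough sketch of standard WebGL feature detection. On iOS Safari
    // (and desktop Safari without the developer setting) the context
    // request fails and we land in the fallback branch.
    var canvas = document.createElement('canvas');
    var gl = canvas.getContext('webgl') ||
             canvas.getContext('experimental-webgl');
    if (gl) {
      // render with WebGL
    } else {
      // fall back to 2D canvas
    }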


That's because they aren't satisfied that it's secure enough for the open web yet and they don't want to expose their users to risks they can't protect themselves from.


Hah.

Remember when Blink was announced, how some people said "competing engines would be good for the web"?

And how the Blink team/Google paid lip service to improving the web going forward, better standards compliance, etc.

Firefox is ahead in some ways, but too burdened by BS like a plugin ecosystem (focus on browsing, dammit) and non-native UI skins in others.

IE is catching up but limping.

Webkit is not updating that fast anymore.

Opera, nobody but 10 people ever cared about. Besides, they're just Blink now.

Anybody holding their breaths for full ES6 support?


This is a shallow misreading of the situation.

"competing engines would be good for the web"?

They will.

> Firefox is ahead in some ways, but too burdened by BS like a plugin ecosystem (focus on browsing, dammit) and non-native UI skins in others.

Plugin systems are key to a good browsing experience; Chrome includes almost entirely the same feature set, so your argument is void; and, overall, there's no evidence that Firefox is particularly burdened by anything.

> IE is catching up but limping.

IE caught up already, and I don't see any evidence that it's "limping"

> Webkit is not updating that fast anymore.

Well, there are fewer contributors now. But it's kind of hard to draw any conclusions about how fast it's updating given how little time has passed since Blink forked.

> Anybody holding their breaths for full ES6 support?

Yep, we're making good progress, and Firefox 29/Chrome 33 are doing better than ever. Still work to be done, but the ES6 spec isn't even finished yet, so that's to be expected.

In other words, chill out. Everything's looking pretty good, there are loads of excellent browsers available, and they're mostly getting better. It's not perfect, but what platform is?


If I may chime in on this:

    > IE is catching up but limping.
    IE caught up already, and I don't see any evidence that it's limping.
I actually kind of agree, and I'm surprising myself in saying that. Just from a consumer's perspective, I found at least one place where IE offered something I directly wanted that couldn't be done in any other browser.

I wanted to watch HD Netflix on my laptop, but the Silverlight client had no hardware acceleration so it played terribly. Turns out there is a hardware-accelerated HTML5 version of Netflix that's only usable in IE, since only IE has the necessary DRM.

I was quite surprised. Ignoring the potential ethical quagmire here, I thought it was interesting that there was something where IE gave me a better experience.


> I wanted to watch HD Netflix on my laptop, but the Silverlight client had no hardware acceleration so it played terribly. Turns out there is a hardware-accelerated HTML5 version of Netflix that's only usable in IE, since only IE has the necessary DRM.

> I was quite surprised. Ignoring the potential ethical quagmire here

Let's not. DRM on the open web is bad.

DRM on the open web is an open specification for how to close the open web.

If Netflix had decided to use a standard mechanism to deliver video, instead of relying on closed, proprietary DRM-plugins, everyone would be able to provide this good user-experience.

Not saying MSIE hasn't improved, but enabling DRM on the web is not in any way improving anything.

It's bad. All bad. Bad for the present, bad for the future. It needs to die.


I believe that hardware-accelerated HTML5 version of Netflix is also used on ChromeOS (since Silverlight isn't an option).

It surprises me a bit to hear that the same doesn't happen with desktop Chrome.


Not quite - it's used on (some) Chromebooks, as in the HTML5 version of Netflix only runs on locked down hardware from Google partners. Unlock the hardware so you can run your own software on the Chromebook and the media decryption module refuses to decrypt anything.

The various content providers seem to have used the advent of HTML5 to insist on stricter DRM requirements, ones that can only be met through control over the entire hardware and software stack. I presume Microsoft's version uses the long-standing GPU support for hardware decryption and acceleration of DRMed video instead of whatever ChromeOS does. Apparently unlike the Google version it's possible for other browsers to freely support HTML5 EME that's compatible with Microsoft's DRM, but naturally only on Windows: http://msdn.microsoft.com/en-us/library/windows/apps/dn46673...


>"competing engines would be good for the web"? They will.

Citation needed. Because, to support my argument, we just saw a "competing engine" start to diverge from the others. And not only in this case -- just the other day the Google team announced they'd drop CSS Regions too.

Would it be as easy for Google to do so, if they were still sharing Webkit code with all the other Webkit partners?

> IE caught up already, and I don't see any evidence that it's "limping".

Well, we still have to support IE10 (some also IE9). Every time, each version gives something new but holds something back that the latest versions of the other browser engines already have. E.g. IE9 and WebGL.

> Webkit is not updating that fast anymore.

> Well, there are fewer contributors now. But it's kind of hard to draw any conclusions about how fast it's updating given how little time has passed since Blink forked.

Nope, also before: Google published commit stats at the time of the fork, and Apple had considerably slowed down commits for a couple of years or so before that.


This is misdirection. The allegation is that Google is using its market power to subvert the standards process and subvert the open web it once championed, now that it is powerful enough to do so.


An awful lot of Firefox users happen to value its "BS" plugin ecosystem quite heavily.


They might (though the vast majority uses 2-3 plugins at most, ad block etc., which could just as well be merged into core).

But this "awful lot of firefox users" is, in aggregate, awfully few judging from FF's sliding market share.


Bringing Ad Block into the core would violate the wink, wink, nudge, nudge arrangement between Mozilla and their default search provider Google. If Mozilla did this, the value of being a default search provider in Firefox would fall to around zero, and Mozilla would close up shop as 90% of their revenues disappeared.


Bringing Ad Block into the core also sounds like a violation of the browser abstraction to me. A browser is supposed to show me what I give it. Specifying and implementing policies about what content is "good" or "bad" and should be treated specially seems logically separate to me.


You realise that such features can come with an on/off setting, right?

Plus there's no such thing as a "browser abstraction". A browser is just supposed to be useful for surfing the web, and if blocking ads is part of that, then so be it.

Nobody talks about a "mailer abstraction" ("a mailer is supposed to show me what is coming into my account") when mailers have built-in spam filters.

Not to mention that similar things already exist in browsers. Browsers e.g. stop you going to "bad" content ("phishing", "malware" etc. sites). They stop your visit, show you a warning page, and ask if you're sure you want to continue. Technically those are just other pages, they are content too (and sometimes they've even been labelled wrongly).


Interesting, and concisely put. Now I'm wondering whether FF users are obliged to use Google (the same way we're obliged to turn off adblock on sites we don't wish to hurt).


"Obligated" is the wrong word, but if your goal is to support FF then yes, you should probably use Google. Or perhaps donate instead.


I'm pretty sure at least Bing has a revenue sharing program with Mozilla for search results from the search bar (they did as of Firefox 4[1], at least). Google just won the auction to be the default. Not sure about the other default search engines, though.

[1] http://arstechnica.com/information-technology/2010/10/citing...


The type of users who value extensions heavily are the type that evangelize browsers, and the type whose friends and relatives ask them what browsers to use.


You have an incredibly odd definition of "awfully few."


> Anybody holding their breaths for full ES6 support?

For which browser? Because currently, Gecko leads by a half marathon, and it's a marathon: http://kangax.github.io/es5-compat-table/es6/


Every day a little bit closer: Google is the new Microsoft. Not surprising.


Last week, I was giving a lecture on HTML5, starting with the origins of the Web, HTML, and the "browser wars." I indicated that the browser wars were an attempt by two companies (Netscape and Microsoft) to add tags without checking with anyone else, in the hopes of getting market share.

As I got to this point in the lecture, I paused, and realized for the first time that Google is often playing a similar game with Chrome.

Everyone agrees that standards are time-consuming, bureaucratic, and prone to all sorts of compromises. But the goal of standards isn't the speed of the process, but rather the inclusiveness of the process. (And yes, you can make a case for the W3C not being so inclusive...)

It's great that Google is interested in treating Chrome as a laboratory for new Web technologies, but I think that some added humility would be in order. It's one thing to say, "We think that this might work well, and are throwing it out there to see what will stick, keeping the good stuff and throwing out the bad." But instead, they seem to be saying, "We think that this is good enough for a standard, never mind the process." And that can't be good for the Web.


That is how roughly all progress on the Web ever has happened. The existence of HTML5 is a testament to that — the W3C was lost in the woods for years and years trying to make XHTML 2 with nothing to show for it. Meanwhile, the browsers were coming up with their own ideas. So the WHATWG came along and specified what the browsers were already doing, as well as a few simple improvements. I think we can all agree HTML5 was a good thing. Well, if you think so, then you agree that progress coming from the browser implementers is often a good thing.

The problem in the browser wars was that they were trying to be incompatible, which is not really the case with Google. (Remember, Microsoft's model was not "Embrace, extend, evangelize.")


> The problem in the browser wars was that they were trying to be incompatible, which is not really the case with Google. (Remember, Microsoft's model was not "Embrace, extend, evangelize.")

This is sometimes the case for Google's technologies as well. PNaCl is pretty much impossible to embed into other browser engines because of the dependency on Pepper, which is very Chromium-specific and there has been no attempt at all that I'm aware of to standardize it. (Pepper even has undocumented stuff that Flash uses!)


Honestly, if there was an attempt to standardize it, what would Mozilla's reaction have been anyway? Mozilla seems completely invested in JS for everything and hostile to any competing technology for execution in the browser. It seems the result would be pretty much the same.

It's not like Mozilla doesn't do things for expediency when they need the ability to iterate quickly, e.g. the WebAPIs effectively ignoring the existence of similar DAP efforts, and then having to turn around and rationalize them with the previous work. If you need to ship a physical phone with Firefox OS, and the manufacturers are waiting, are you going to block on the W3C, or ship with proprietary or un-ratified device APIs?


We pretty seriously looked at PPAPI back when Adobe announced effective end of life for NPAPI Flash on Linux.

If PPAPI were not as tied to Chrome's internals as it is, we might in fact have implemented it.

As far as being invested in JS, I think it's more being invested in managed code. Our experience with NPAPI is that you end up with the unmanaged code depending on all sorts of implementation details because it can. The classic example is that all NPAPI Flash instances have to be in a single process, because they assume they share memory. And the browser can't do anything about it, since it's not involved in the memory-sharing bits in any way. Similarly, the fact that NPAPI plug-ins can do their own network access makes them hard to align with CSP and the like. Managed code can start to depend on internals in weird ways too (e.g. sync access to cross-origin window bits), but you have more chances to pull the wool over its eyes in some way (e.g. blocking cross-process proxies).

Once you accept managed code, JS seems like as good a place to start as any, with at least the benefit of being there to start with. ;)

There are, of course, obvious drawbacks to the all-JS approach, starting with the fairly lacking parallelism story. At least now we've grown Workers, and there's work on things like SIMD, ParallelJS, etc. Then again, the one major language addition to a browser VM recently (Dart) didn't exactly address this need either....


I worked on Adobe's Flash Player team when Chrome was pushing PPAPI. Adobe engineers were strongly opposed to PPAPI because it was complex and incomplete (and it was only implemented by Chrome). The only reason a Flash PPAPI plugin exists for Chrome is because Google did the work.


> Honesty, if there was an attempt to standardize it, what would Mozilla's reaction have been anyway? Mozilla seems completely invested in JS for everything and hostile to any competing technology for execution in the browser. It seems the result would be pretty much the same.

There was a thread on plugin-futures about Pepper way back in 2010, when this effort was getting underway. Every other browser manufacturer suggested using the Web APIs instead of Pepper. Google ignored the consensus and did Pepper anyway. Looking back, from everything I saw the other browser manufacturers were right—asm.js now has an advantage in that it can use the standardized Web APIs. If Google had listened to the other browser manufacturers, it might have turned out better for PNaCl.


Seems like they had legit reasons at the time, given the goal of getting games written for it, and the fact that the Web APIs didn't support multithreading or synchronous versions of some common operations.

I don't know what the reasoning was back then, but it seems to me that if your goal is to get games to port C code, it's a lot less re-engineering if you have threads and blocking ops. I've ported multithreaded apps to the web and de-threading them is a major headache.

Again though, if there were no Pepper, do you really think Mozilla would have adopted NaCl?

On a side note, Firefox 27 finally killed the last vestige of NPAPI exports that we needed to make GWT Dev Mode work (side side note: Chrome killed us first deprecating NPAPI). Here's a case where WebAPI equivalents don't work. Not even PPAPI can work. GWT Dev Mode very specifically relied on synchronous access to JS APIs. Point is, shoe-horning everything in the browser event loop is not the ideal location or design for every API. Sometimes you do need something outside the WebIDL bindings.


I agree, that's one case where the criticism is probably right on the nose. But I don't think it can be generalized to everything Google does. PNaCl is the odd one out, not so much the norm.


> Last week, I was giving a lecture on HTML5... and realized for the first time that Google is often playing a similar game

I'm not going to be calling you names, but if you've been looking this has been obvious for at least a year now.

Chrome was released to enable all the bad stuff Google is doing now.


Were you not around during the dark days of IE? "Pushing for quick progress on standards" was not exactly a major complaint people had against them.


I thought the issue was exactly that - that MS pushed for adoption of proto-standards before there was any sort of agreement on how they should actually be interpreted/implemented.


Not exactly.

The original problem with Microsoft's behavior was that Microsoft simply ignored standards that did exist and replaced them with subtly different standards of their own, so that they broke standards-compliant pages. They also pushed for proprietary technologies that they wouldn't allow others to use, like ActiveX and VBScript, but their breaking standards was worse.

The second problem was that once Microsoft had achieved dominance, it stopped adding anything at all to IE. This made the Web platform largely stagnant for years.

At no point was the problem "Microsoft came up with this cool new idea and implemented it before the standards bodies had the requisite seven years to agree on it." That's what Firefox and Safari were doing, and most people agreed that it was pretty good. These ideas Firefox and Safari came up with became HTML5. In fact, so did one idea Microsoft had that didn't break existing standards — XMLHttpRequest.


This is a rewrite of history: if you actually read the material published at the time, Netscape ignored the standards (even actively scoffing at them), and Microsoft's IE was seen as the white knight that actually cared enough to be on the mailing list with the standards body, working on their DTDs in public.

Now, as everyone is always "citation needed" on this, as maintaining this myth is a much stronger goal for most people than doing even minimal backing research, here are some places I've talked about this before in more detail, the second link containing a very large number of citations if you go through to the bottom of the thread.

https://news.ycombinator.com/item?id=5216141

https://news.ycombinator.com/item?id=5716787

Your comment about VBScript is silly, given that JavaScript was also non-standard, Netscape-specific, Java-laden ludicrousness that also "broke the web" (script elements were implemented in a way that required special hacks to an HTML parser to even parse, due to nested < having a different meaning, and done without any requirement for backwards compatibility with parsers that would see the content as part of the document). It was only due to Microsoft's JScript (which went hand-in-hand with VBScript and ActiveX in the same way JavaScript worked with Java as "LiveScript" in Netscape) that ECMAScript got standardized at all.

Seriously: I simply don't understand why everyone perpetuates this madness when you can't substantiate any of it if you look at the actual history... every comment bashing IE always repeats this stuff, so everyone thinks of it as gospel truth, but it really is all just myth at this point: "citation needed".


I think your frustration on this matter is causing you to read in things that I didn't actually say. The things you're mentioning here mostly don't even contradict what I said. It's true that Netscape drifted off into Crazytown as time went on. I remember back when IE had divs and Netscape was like, "Nah, layers."

My point is not "Netscape rules, Microsoft drools." My point is that implementing new ideas that haven't been fully standardized — as both Microsoft and Netscape did — was never really the big problem. The problem with IE that made people come to hate it was that Microsoft broke the standards, left them broken for years and years and refused to implement anything new. Netscape was corpsified by the time this really came to a head, so I don't know what they have to do with anything. The alternatives I brought up were the later browsers like Firefox and Safari that got tired of waiting and started implementing new things and slowly eroding IE's marketshare.


> The problem with IE that made people come to hate it was that Microsoft broke the standards, left them broken for years and years and refused to implement anything new.

I always thought that was because the DOJ went after them. And MS said, roughly, "You don't like IE? You think it's 'bad' for everyone? Well, fine then. We won't touch it. See how you like that."

Point being, I think we were stuck with a shitty IE for so long primarily in retaliation for the Justice dept.'s anti-trust suit.

But that's entirely conjecture...


I just reread your comment, and I stand by my response.

Microsoft didn't replace some standard with a subtly different standard when they did ActiveX and VBScript... what was the standard you claim existed that they were being subtly different from? JavaScript? Java? (Again: not a standard.)

You claim Microsoft ignored standards, but Microsoft actually cared a lot about being on top of standards and were involved in the standards process; if anything their fault was shipping stuff too early (which would be something you could pick on, but didn't; it would even be a powerful and apropos argument, as that's what Google is doing today): they cared about CSS when no one else did, and later they cared about XSL/T when no one else did.

You also claim that Microsoft stopped working on IE6 when they obtained dominance: I directly touch on that in my expanded comments, pointing out that it makes more sense that Microsoft stopped working on IE when they ran into legal opposition and the project became demoralizing and dangerous. Microsoft does not have a history of simply not releasing updates to products they have dominated: their updates are at times problematic to others and sometimes even self-defeating, but something really weird happened with IE6--where there was only a single service pack release during a nearly five year hiatus from any form of update--that simply isn't well-explained by their dominance.

The only situation I can come up with similar to your arguments is DHTML Behaviors, which is amazingly similar in goal to that of this new Web Components stuff ;P. I will claim this is exactly the kind of thing chadwickthebold is talking about when he said "MS pushed for adoption of proto-standards before there was any sort of agreement on how they should actually be interpreted/implemented". Here is the proto-spec:

http://www.w3.org/TR/1999/WD-becss-19990804

Microsoft submitted DHTML Behaviors to the W3C in 1998, spent a bunch of time with them talking about how it would work, and in the end shipped it in 1999 as part of IE5. Maybe the timeline is tighter than Web Components today with Google, but it seems like the same story. (Note that I haven't researched this to the same extent as my earlier comments, but I am definitely basing this off of historical sources; if you disagree with this comparison, I would be interested in hearing some similarly historically-based arguments for how the timeline played out and why this is fundamentally different than Google today.)

Finally, now in this comment you talk about "broke the standards": I would appreciate some citations and examples; the primary things I know that people like to argue about are issues with CSS and XSL/T that are entirely explained by "pushing proto-standard"... the IE box model being the prototypical example.

So, each one of your comments is incorrect. It thereby doesn't matter what your conclusion is; in my earlier comments I'm arguing against people comparing Microsoft to Netscape, but in this thread I am just pointing out that your statements about Microsoft are unsubstantiated. It doesn't matter what your opinion of Netscape is, and if you reread the comment I made on this thread, my response to you, you will see that I don't bring that idea up at all... you decided to transplant the conclusion from my citations to here, I am guessing because that is easier than addressing any of my actual complaints about your facts one-by-one? Again: none of these facts you are stating fit the history, and if you are going to keep repeating them I'd like some primary source material for them (mailing list posts or comments by developers from back when Microsoft was actually dominant or before they did, not after they lost and abandoned the web entirely).


> Microsoft didn't replace some standard with a subtly different standard when they did ActiveX and VBScript... what was the standard you claim existed that they were ring subtly different from? JavaScript? Java? (Again: not a standard.)

I agree. I called out ActiveX and VBScript as not being examples of this because I thought somebody might bring them up if I didn't. I don't think those really contributed much to IE's poor reputation on standards. You list some of the examples I was thinking of later on.

> You also claim that Microsoft stopped working on IE6 when they obtained dominance: I directly touch on that in my expanded comments, pointing out that it makes more sense that Microsoft stopped working on IE when they ran into legal opposition and the project became demoralizing and dangerous. Microsoft does not have a history of simply not releasing updates to products they have dominated: their updates are at times problematic to others and sometimes even self-defeating, but something really weird happened with IE6--where there was only a single service pack release during a nearly five year hiatus from any form of update--that simply isn't well-explained by their dominance.

I'm not sure if you think I claimed otherwise, but I didn't. The reason for Microsoft's stagnation wasn't relevant to my point, so I didn't address it. Remember: The purpose of my comment was not to slag Microsoft. The purpose of my comment was just to illustrate that making progress was not what got Microsoft its bad reputation for web standards.

Why Microsoft did things is interesting (and I think you're probably right), but it's beside the point when we're just asking "What did Microsoft do?"

> Finally, now in this comment you talk about "broke the standards": I would appreciate some citations and examples; the primary things I know that people like to argue about are issues with CSS and XSL/T that are entirely explained by "pushing proto-standard"... the IE box model being the prototypical example.

CSS was still newly finalized when IE released support, but I don't believe there was ever a draft specifying the behavior IE used. Here's a draft from a year before IE 3 that specifies the standard box model: http://www.w3.org/TR/WD-css1-951117.html#horiz

I can't find any evidence that Microsoft had reason to believe the behavior they implemented was what would be in CSS. As far as I can determine, they simply diverged from the spec. Maybe they misread the spec, maybe they liked their model better and chose to ignore the CSS standard they had available to them — and I mean, hey, I liked their version better too — but the fact is that they just created a competing standard and were reluctant to adopt the real standard, and this caused people to feel that Microsoft had poor support for standards.


The Blink team seems dead-set on repeating the mistakes they made with the Web Audio API (shipping an immature, vendor-specific API to the web without third-party feedback and then forcing it through a standards process, etc). Pretty frustrating.


On the other hand, the Mozilla counter-proposal, which was trying to do low-latency audio by using JavaScript as a DSP, was pretty awful. It's like saying you'll do 3D by handing a framebuffer to JS and doing rasterization in software.

"Forcing it through the standards process" sounds like the kind of rhetoric Republicans use when bills get passed they don't like.


Web Audio API has a similar mechanism for generating samples with JS and it's just as bad. In fact, Web Audio is worse for playback of JS-synthesized audio, not better. And lots of use cases demand playback of JS-synthesized audio: emulators, custom audio codecs, synths, etc.

The Mozilla API did one or two things and did them adequately; it did them by extending an existing API (the <audio> element). Use cases it didn't cover could have been handled with the introduction of new APIs or extensions to existing APIs.
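
(For readers who never saw it, the Audio Data API looked roughly like the sketch below; I'm writing the method names from memory, so treat it as a sketch rather than gospel.)

    // Sketch of the Firefox-only Audio Data API, from memory: the <audio>
    // element itself is the sink and JS writes raw sample frames into it.
    var audio = new Audio();
    audio.mozSetup(1, 44100);                 // one channel at 44.1 kHz
    var samples = new Float32Array(4096);
    for (var i = 0; i < samples.length; i++) {
      // 440 Hz sine wave at half volume
      samples[i] = 0.5 * Math.sin(2 * Math.PI * 440 * i / 44100);
    }
    audio.mozWriteAudio(samples);             // queue the frames for playback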

The Web Audio API introduced an interconnected web of dozens of poorly tested, poorly specified components that covered some arbitrary subset of cases users care about. Most of the oversights still haven't been fixed and the spec is still full of unaddressed issues and deficiencies.


>Web Audio API has a similar mechanism for generating samples with JS and it's just as bad. In fact, Web Audio is worse for playback of JS-synthesized audio, not better. And lots of use cases demand playback of JS-synthesized audio: emulators, custom audio codecs, synths, etc.

Generating audio samples via JS is an escape hatch, the same way rasterizing to a raw framebuffer is an escape hatch. If your system had audio acceleration hardware, and many systems do (e.g. DirectSound/OpenAL), you want to leverage that.

If you are deploying a game to a mobile device, the last thing you want to do is waste CPU cycles burning JS to do DSP effects in software. This is especially awful for low-end phones like the ones that Firefox OS runs on. Latency on audio is already terrible; you don't want the browser UI event loop involved in processing and feeding it, IMHO. Maybe if you had specced it as being available to Web Workers without needing the UI thread involved, it would make more sense.

>The Web Audio API introduced an interconnected web of dozens of poorly tested, poorly specified components that covered some arbitrary subset of cases users care about

The arbitrary subset being the one that has been on OS X for years? AFAIK, Chris Rogers developed Web Audio based on years of experience working on Core Audio at Apple, and therefore, at minimum, it represents at least some feedback from the use cases of the professional audio market, at the very least Apple's own products like GarageBand and Logic Express, which sit atop Core Audio.

You assert that the other use cases could have been handled by just extending existing APIs, but to me this argument sounds like the following:

"You've just dumped this massively complex WebGL API on us, it has hundreds of untested functions. It would be better to just have <canvas> or <img> with a raw array that you manipulate with JS. Any other functions could be done [hand-wave] by just bolting on additional higher level apis. Like, if you want to draw a polygon, we'd add that."

APIs like Core Audio, Direct Sound, OpenGL have evolved because of low level optimization to match the API to what the hardware can accelerate efficiently. In many cases, bidirectional, so that HW influences the API and vice-versa. Putting the burden on the WG to reinvent what has already been done for years is the wrong way to go about it. Audio is a solved problem outside JS, all that was needed was good idiomatic bindings for either Core Audio, OpenAL, or DirectSound.

Whenever I see these threads on HN, I always get a sense of a big dose of NIH from Mozilla. Whenever anyone else proposes a spec, there's always a complaint about complexity, like Republicans complaining about laws because they are too long, when in reality, they don't like them either for ideological reasons, or political ones.

Mozilla is trying to build a web-based platform for competing with native apps. You can see it in asm.js and Firefox OS. And they are not going to get there if they shy away from doing the right things because they are complex. Mobile devices need all the hardware acceleration they can get, and speccing out a solution that requires JS to do DSP sound processing is just an egregious waste of battery life and cycles, IMHO, for low-end web-OS-based mobile HW.


> Generating audio samples via JS is an escape hatch, the same way rasterizing to a raw framebuffer is an escape hatch.

But that escape hatch is where all the interesting innovation happens! It's great that canvas exists, and it's much more widely used than WebGL is, because it's more flexible and depends on less legacy cruft. You don't have to use it, but I do, and I'd like a "canvas for audio" too.

To make matters worse, the Web Audio stuff is much less flexible than OpenGL. You can at least write more or less arbitrary graphics code in OpenGL: it's not just an API for playing movie clips filtered through a set of predefined MovieFilter nodes. You can generate textures procedurally, program shaders, render arbitrary meshes with arbitrary lighting, do all kinds of stuff. If this were still the era of the fixed-function OpenGL 1.0 pipeline, it'd be another story, but today's OpenGL at least is a plausible candidate for a fully general programmable graphics pipeline.

Web Audio seems targeted more at just being an audio player with a fixed chain of filter/effect nodes, not a fully programmable audio pipeline. How are you going to do real procedural music on the web, something more like what you can do in Puredata or SuperCollider or even Processing, without being able to write to something resembling an audio sink? Apple cares about stuff like Logic Express, yes, but that isn't a programmable synth or capable of procedural music; what I care about is the web becoming a usable procedural-music platform. One alternative is to do DSP in JS; another is to require you to write DSP code in a domain-specific language, like how you hand off shaders to WebGL. But Web Audio does the first badly and the second not at all!
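
To make the contrast concrete, here's a rough sketch (using API names from roughly the webkitAudioContext era, so take the exact details with a grain of salt) of the canned node-graph style next to the JS escape hatch:

    var AC = window.AudioContext || window.webkitAudioContext;
    var ctx = new AC();

    // The blessed path: wire prebuilt filter/effect nodes together.
    var osc = ctx.createOscillator();
    var gain = ctx.createGain();
    osc.connect(gain);
    gain.connect(ctx.destination);
    osc.start(0);

    // The escape hatch: compute samples yourself, on the main thread.
    var proc = ctx.createScriptProcessor(4096, 1, 1);
    proc.onaudioprocess = function (e) {
      var out = e.outputBuffer.getChannelData(0);
      for (var i = 0; i < out.length; i++) {
        out[i] = 0.1 * Math.random(); // noise here, but it could be any synth
      }
    };
    proc.connect(ctx.destination);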

> Audio is a solved problem outside JS

Yeah, and the way it's solved is that outside JS, you can just write a synth that outputs to the soundcard...


WebGL is less used than Canvas for the most part, because 3D and linear algebra are much more difficult to work with for most people than 2D. Also, people work with raw canvas image arrays much more rarely than they do the high level functions (stroke/fill/etc)

OpenGL was still a better API than a raw framebuffer even when it was just a fixed function pipeline. Minecraft for example is implemented purely with fixed-function stuff, no shaders. It isn't going to work if done via JS rasterization.

Yes, there are people on the edge cases doing procedural music, but that is a rare use case compared to the more general case of people writing games and needing audio with attenuation, 3D positional HRTF, doppler effects, etc. That's the sweet spot that the majority of developers need. Today's 3D hardware includes features like geometry shaders/tessellation, but most games don't use them.

OpenSL/AL would work a lot better if it had "audio shaders", yes. But if your argument is that you want to write a custom DSP, then you don't want the Audio Data API; what you want is some form of OpenAL++ that exposes an architecture-neutral shader language for audio DSPs, that actually compiles your shader and uploads it to the DSP. Or you want OpenCL plus a pathway to schedule running the shaders and copying the data to the HW that does not involve the browser event loop.

That said, if there was a compelling need for the stuff you're asking for, it would have been done years ago. Neither the professional apps nor game developers have been begging Microsoft (DirectSound), Apple, or Khronos to make audio shaders. There was a company not too long ago, Aureal 3D, which tried to be the "3dfx of audio" but failed; it turns out most people just need a set of basic sound primitives they can chain together.

I have real sympathy for your use case. For years, I dreamed of sounds being generated in games ala PhysX, really simulating sound waves in the environment, and calculating true binaural audio, the way Oculus Rift wants to deliver video to your senses, taking into account head position. To literally duplicate the quality of binaural audio recordings programmatically.

But we're not there, the industry seems to have let us down, there is no SGI, nor 3dfx, nor Nvidia/AMD "of audio" to lead the way, and we certainly aren't going to get there by dumping a frame buffer from JS.

Right now, the target for all this stuff (WebGL, Web Audio, et al.) is exposing APIs to bring high-performance, low-latency games to the web. I just don't see doing attenuation or HRTF in JS as compatible with that.


I agree that for games the market hasn't really been there, and they're probably served well enough by the positional-audio stuff plus a slew of semi-standard effects. And I realize games are the big commercial driver of this stuff, so if they don't care, we won't get the "nVidia of audio".

I'm not primarily interested in games myself, though, but in computer-music software, interactive audio installations, livecoding, real-time algorithm and data sonification, etc. And for those use cases I think the fastest way forward really just is: 1) a raw audio API; and 2) fast JS engines. Some kind of audio shader language would be even better perhaps, but not strictly necessary, and I'd rather not wait forever for it. I mean to be honest I'd be happy if I could do on the web platform today what I could do in 2000 in C, which is not that demanding a level of performance. V8 plus TypedArrays brings us pretty close, from just a code-execution perspective, certainly close enough to do some interesting stuff.

Two interesting things I've run across in that vein that are starting to move procedural-audio stuff onto the web platform:

* http://charlie-roberts.com/gibber/info/?page_id=6

* http://www.bfxr.net/

There are already quite a few interactive-synth type apps on mobile, so mobile devices can do it, hardware-wise. They're just currently mostly apps rather than web apps. But if you can do DSP in Dalvik, which isn't really a speed demon, I don't see why you can't do it in V8.

Edit: oops, the 2nd one is in Flash rather than JS. Take it instead then as example of the stuff that would be nice to not have to do in Flash...


Your argument for the superiority of the Web Audio API seems to be 'it's like Core Audio', and you seem to argue that WebGL is great just because it's like OpenGL. What actually supports this argument? Would you be a big fan of WebDirect3D just because it was exactly like Direct3D? After all, virtually all Windows games and many Windows desktop apps use Direct3D, so it must be the best. 3D games on Linux use OpenGL so it must be the best. If you're going to argue that it's good to base web APIs on existing native APIs, why not OpenAL -> WebAL, like OpenGL -> WebGL?

Specs need to be evaluated on the merits. The merits for the Web Audio API at time of release:

* Huge

* Poorly-specified (the API originally specified two ways to load sounds, the simplest of which blocked the main thread for the entire decode! very webby. Spec was full of race conditions from day 1 and some of them still aren't fixed.)

* Poorly-tested

* Large, obvious functionality gaps (you can't pause playback of sounds! you can't stitch buffers together! you can't do playback of synthesized audio at rates other than the context rate!)

* Incompatible with existing <audio>-based code (thanks to inventing a new, inferior way for loading audio assets), making all your audio code instantly browser-specific

* Large enough in scope to be difficult to implement from scratch, even given a good specification (which was lacking)

* A set of shiny, interesting DSP/filter chain features, like convolution and delay and HRTF panning and so on, useful for specific applications

* Basic support for playback and mixing roughly on par with that previously offered by <audio>, minus some feature gaps

The merits for the old Mozilla audio data API at the time of Web Audio's release:

* Extends the <audio> element's API to add support for a couple specific features that solve an actual problem

* Narrow scope means that existing audio code remains cross-browser compatible as long as it does not use this specific API

* The specific features are simple enough to trivially implement in other browsers

You keep making insane leaps like 'Web Audio is good because it's like Core Audio' and 'Mozilla wants you to write DSPs in JavaScript because ... ????' even though there's no coherent logic behind them and there's no evidence to actually support these things. A way to synthesize audio in JavaScript does not prevent the introduction of an API for hardware DSP mixing or whatever random oddball feature you want; quite the opposite: it allows you to introduce those new APIs while offering cross-browser compatible polyfills based on the older API. The web platform has been built on incremental improvement and graceful degradation.

P.S. Even if the Web Audio API were not complete horseshit at the point of its introduction, when it was introduced the Chrome team had sat on their asses for multiple versions, shipping a completely broken implementation of <audio> in their browser while other vendors (even Microsoft!) had support that worked well enough for playing sound effects in games. It's no coincidence that web developers were FORCED to adopt Google's new proprietary API when the only alternative was games that crashed tabs and barely made sound at all.


Isn't the point of introducing something actually getting it fixed from feedback in the WG? So your complaint is: someone introduced a draft of an idea with a prototype implementation, and you're pissed it wasn't perfect the first time around?

Calling something someone worked on, who happens to be a domain expert, "horseshit" seems a little extreme, don't you think? Were most of the initial problems resolved by WG feedback or not? If yes, then hurray, the WG fulfilled its purpose. If every feature arrived complete with no problems, there'd be little need for a WG, emphasis on the 'W'.

Also "* A set of shiny, interesting DSP/filter chain features, like convolution and delay and HRTF panning and so on, useful for specific applications" Specific, as in, the vast majority of applications. This would be like pissing all over CSS or SVG filters because they don't include a pixel shader spec. 3D positional sound and attenuation are the two features used by the vast majority of games. Neither most applications nor games resort to hand written DSP effects.

As for the <audio> tag playback. Here's a thread where I already had this debate with Microsoft (http://cromwellian.blogspot.com/2011/05/ive-been-having-twit...). Even Microsoft's implementation of <audio> was not sufficient for games or for music synthesis. First of all, their own demo had audible pops because the JS event loop could not schedule the sounds to play on cue. For games like Quake2, which we ported to the Web using GWT, some sound samples were extremely short (the machine gun sound) and were required to be played back-to-back seamlessly as long as the trigger was pulled to get a nice constant machine gun sound. This utterly fails with <audio> tag playback, even on IE (in wireframe canvas2d mode of course). Another port I did, a Commodore 64 SID player, had the same issue. So let's dispense with the myth that using basic <audio> is sufficient for games. It lacks the latency control to time playback properly even on the best implementation. And that's before you even get to Quake2's distance attenuation and stereo-positioning.
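
For the curious, the kind of scheduling the machine-gun case needs looks roughly like this under Web Audio (sketched against the prefixed webkitAudioContext of the time); the point is that start() takes a time on the audio clock, which <audio>.play() can't offer:

    var AC = window.AudioContext || window.webkitAudioContext;
    var ctx = new AC();

    // 'buffer' is an AudioBuffer decoded elsewhere (e.g. via decodeAudioData).
    // Queue the same short sample back-to-back with no gaps, scheduled on
    // the audio clock rather than the JS event loop.
    function playBurst(buffer, startTime, repeats) {
      for (var i = 0; i < repeats; i++) {
        var src = ctx.createBufferSource();
        src.buffer = buffer;
        src.connect(ctx.destination);
        src.start(startTime + i * buffer.duration);
      }
    }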

On the issue of Web Audio / Core Audio, my point there is merely that all of your bashing ignores the fact that it is, in fact, derived from a mature API that was developed from professional industry requirements over a long time. You keep bashing the built-in filters, but those are the tried and true common cases; it's like bashing Porter-Duff blending modes because they're not as general as a shader.

As for Direct3D. You do realize that OpenGL for a long time sucked and was totally outclassed by DirectX? Shaders were introduced by Microsoft. A lot of advanced features arrived on Windows first, because the ARB was absolutely paralyzed. So yes, if someone created a "Web3D" API based on Direct3D, it would still be better than Canvas2D, even if you had to write a Direct3D->OpenGL mapping layer. I don't have many bad things to say about DirectX 8+; Microsoft did a good job pushing the whole industry forward. DirectX was the result of the best and brightest 3D IHVs and ISVs contributing to it, and so it would be unwise to discount it just because it is proprietary Microsoft.

And web developers were not forced to adopt Web Audio. For a long time, people used Flash shims. In fact, when we shipped Angry Birds for the Web, it used a Flash shim. If the Audio Data API lost at the WG, you can't blame Google; people on the WG have free will, and they could have voted down Web Audio in favour of Audio Data, regardless of what Chrome ships.

What I'm hearing in this context, however, is that you are content to ignore what most developers wanted. People trying to build HTML5 games needed a way to do the same things people do in games on the desktop or on consoles with native APIs. The Mozilla proposal did not satisfy these, with no ability to easily do simple things like 3D positioning or distance falloff without dumping a giant ball of expensive JS into games that developers were already having performance issues with.


The O'Reilly book shipped with the old API! http://my.safaribooksonline.com/9781449332679#safari_reviews


Shadow DOM is a very, very bad idea.

http://glazkov.com/2011/01/14/what-the-heck-is-shadow-dom/

It kills cross-browser compatibility and kills standards, since shadow trees are unreachable, undocumented elements that can handle input and interaction and affect other elements.


It's starting to dawn on me what this will really be used for, and I can't say I'm stoked about it. But the article you've linked to seems pretty upbeat about the whole thing.


Author of the article is the spec author.


I linked to an article that explains what it is and left it to you (the reader) to decide if it's good or bad. I believe it is a very bad thing.


Even if it becomes a standard, it may be completely evil. This concept is quite complicated for the regular developer, so some day we'll see petabytes of cryptic, totally undebuggable markup. It should not be done this way.


Any "insiders" have comments on this? Is this the Chrome team just trying to keep pushing forward, or are they technically sticking to the referenced compatibility guidelines they published?

I'm just wondering if there has been a lack of progress with Shadow DOM and their solution is just go-go-go to get things moving.


From the shadow DOM spec author: https://groups.google.com/a/chromium.org/forum/#!msg/blink-d...

There's been a lack of progress in that it's taking a while, but there are a bunch of still-open unresolved issues... The problem with big complex features. ;)


They're commenting in this topic, and they seem clueless as to why anyone would object at all.


Forget the technical content, or the political point here - that thread is worth reading just as an excellent example of how to conduct a civilised discussion even in a situation with serious potential for strife.

Of particular note is how, even though it does get a bit stressed in the middle, everyone very carefully allows the discussion to calm down and restate their points with no emotional content.


Rather than linking to a comment from someone at Apple part way down the thread and starting 'burn the witch' style threads on HN, perhaps the OP should have started at the beginning - http://lists.w3.org/Archives/Public/www-style/2014Feb/0032.h... - and included the Blink thread that discusses Google's position and issues in more detail: https://groups.google.com/a/chromium.org/forum/#!topic/blink... - read Adam Barth's comments if nothing else.


Reading about Google shipping unpolished things too fast in Chrome makes me smile, since I've just decided to stop using it as of yesterday, after realizing how bloated that software has become (CPU- and RAM-wise). Wonder if that's a coincidence.


What do you use instead that has a better RAM and CPU footprint?


Well, actually Chrome was consuming about 1 gig of RAM (if you're counting all the "Chrome helper" processes) and a constant 2% CPU when I had absolutely no page open. So pretty much anything else is better (I'm on Safari right now and it feels extremely light). Note that it's not just me: https://productforums.google.com/forum/#!topic/chrome/y8VTVH...


"hi relevant standards authority, you're not working fast enough for our liking, so we're doing what we wan't and if you don't like it, too bad" if this was anyone but google, there'd be a shitstorm...


I was happy to see this from the link in the email:

Vendor Prefixes: Historically, browsers have relied on vendor prefixes (e.g., -webkit-feature) to ship experimental features to web developers. This approach can be harmful to compatibility because web content comes to rely upon these vendor-prefixed names. Going forward, instead of enabling a feature by default with a vendor prefix, we will instead keep the (unprefixed) feature behind the “enable experimental web platform features” flag in about:flags until the feature is ready to be enabled by default. Mozilla has already embarked on a similar policy and the W3C CSS WG formed a rough consensus around a complementary policy.

I remember people getting worried that app developers might tell users to dig through their about:config equivalent to access the "full version" of their site. Glad that things are moving away from vendor prefixes. Kind of sad to see Web Audio still vendor-prefixed (and the O'Reilly book shipped with the old API, lol!), but I assume that was implemented before the WebKit/Blink split.


> Going forward, instead of enabling a feature by default with a vendor prefix, we will instead keep the (unprefixed) feature behind the “enable experimental web platform features” flag in about:flags until the feature is ready to be enabled by default.

For the features we're talking about (JS methods and CSS stuff), doing so will mean that effectively no web authors use them. It's no different from requiring a plugin to view content. Anything hidden behind a feature flag might as well not exist, at least for those two categories.

For dev tools, stuff like that? Sure, feature flags are the way to go.


Funny how the biggest opponents of this kind of thing seem to also be the biggest fans of JavaScript, which was developed by freezing whatever syntax Netscape came up with in three days. If this had been the process back in 1990, the web would never have got off the ground.


It sounds like you're trying to say that fans of JavaScript don't like freezing syntax but that that is disingenuous because JavaScript is the result of a frozen syntax thrown together in three days. Is that accurate so far? So... freezing something and throwing it out there is good? Or finding broad consensus is good? I can't tell which approach you're going for. Which "this" process are you referring to not having happened back in 1990?


I've said it before and I'll say it again:

Chrome is the worst thing which has happened to the web because of what it allows Google to do.

MSIE ruined and fragmented the web with non-standards because it was in Microsoft's interest at the time to hinder successful adoption of the web.

Google however is fundamentally a web company, and now they are using Chrome to shoehorn anything which fits their company onto the open web without a standards committee in sight, or accepting any sort of reasonable feedback.

This breeds monsters like HTTP2.0^W SPDY^W technogizzwizz kitchensink internet everything protocol.

This is much worse than anything MSIE ever did.


On another note, how great is it going to be to have components you can reuse between Angular, Ember, jQuery and the like? A web-components.com repository would be awesome!


And wouldn't it be awesome if multiple parties were able to read through a draft, make comments on it and provide constructive feedback before it was finalized and pushed into production on the web, to be supported for all eternity?

That way we could have a good implementation of this whole thing.

Sadly, so far everything except the "rush this unfinished thing into production" part is missing.

Obviously Google only see their own implementation, think "all is good" and don't give a rat's ass about anyone else and their opinions, but here on HN we should have a wider perspective.


A bit late to the party, but this seems relevant:

"[W]hy do we have an <img> element? Why not an <icon> element? Or an <include> element? Why not a hyperlink with an include attribute, or some combination of rel values? Why an <img> element? Quite simply, because Marc Andreessen shipped one, and shipping code wins."

http://diveintohtml5.info/past.html


Oh, it's about Hat and Cat. I first saw these at a talk on Shadow DOM and the future of the web by Rob Dodson.

http://robdodson.me/blog/2013/11/15/the-cat-and-the-hat-css-...


The current div-soup approach with massive JS enhancement is what is a bad idea. It abuses the document model and makes information less transparent, less semantically meaningful, and makes it hard to share and reuse components between sites due to leakage. It also makes debugging and reading the DOM in an inspector far more messy.

Scoped CSS and Shadow DOM clear up the leakage problems and introduce proper encapsulation to the web beyond heavyweight IFRAMES. The templating system lets documents expose information in transparent ways again, rather than executing a big ball of JS just to get the DOM in a state that is meaningful.
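
A minimal sketch of what that encapsulation looks like, using the shadow DOM API as Chrome shipped it around this time (createShadowRoot, prefixed as webkitCreateShadowRoot in older builds), so the exact names may still shift as the spec settles:

    var host = document.querySelector('#my-widget');
    var root = host.createShadowRoot ? host.createShadowRoot()
                                     : host.webkitCreateShadowRoot();
    // Styles inside the shadow root don't leak out to the page, and page
    // styles don't reach in by default; that's the encapsulation point.
    root.innerHTML =
      '<style>button { color: red; }</style>' +
      '<button>Shadow button</button>' +
      '<content></content>';  // v0 insertion point for the host's children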

Web Development has been fundamentally broken for ages. The basic ideas to fix it have been discussed for years. Every day delayed is a day for more native encroachment.


Nothing wrong with trying to do something different. Committees are too slow.



