An easier way of using polyfills (hacks.mozilla.org)
117 points by rnyman on Nov 6, 2014 | 76 comments



Hm. Alternatively, they could serve the same js to everyone, and (assuming sites use the CDN) it would be cached across pages, so users would only pay for it once. Zero fanciness needed on the CDN.

How big is the "full" set of polyfills anyway? Probably small enough that a single download isn't an issue even on dialup?

--

That aside, polyfill ALL the things. Hopefully making it easier makes it common, which lets devs use a nicer version of Javascript.


> How big is the "full" set of polyfills anyway?

Google's polyfills for custom elements, shipped with Polymer, weigh in at around ~151 KB of JavaScript: roughly 10x the size of a small library like Underscore, and 4-5x the size of an MVC framework like Angular. And they only cover about 5-6 specs (the four web component specs and a few more to support them). Those are admittedly hard-to-polyfill features that will be bigger than most, but it turns out that polyfilling "everything" is hard and would add significant page weight.


> How big is the "full" set of polyfills anyway?

Right now it's around 75 kB without minification and without gzip. However, this doesn't include any code for doing feature detection.

So, my somewhat educated guess would be: around 30 kB (which is comparable to jQuery).


A definitely-flawed sorta-concatenation of all `if(detect.js){polyfill.js}` files, plus some cleanup, plus syntax fixes, run through http://closure-compiler.appspot.com/home gives me just under 10 KB gzipped, just over 31 KB raw.

Nice guess :) that seems not too bad, tbh.


While a neat idea in theory, I have some qualms about this. User agent detection sounds great, until you have a user that spoofs their agent. I often do so for various reasons, and from time to time I forget to turn it off after I've finished what I was doing.

If this service becomes common, then I will be given broken webpages seemingly at random. The other option, a polyfill covering specific features and served to all clients, regardless of user agent, doesn't suffer from that.


Users that spoof their user agent are likely to be tech-savvy people who either know what they are doing or are hiding who they are, like a bot.

I'm more worried about supporting older users on old machines who barely know the difference between Internet Explorer and the internet than in supporting someone who knows exactly what a UA is and how to spoof it.


Take a look at what Microsoft is doing with their newest mobile IE browser - http://blogs.msdn.com/b/ie/archive/2014/07/31/the-mobile-web...

Basically they're crafting the UA to appear like Chrome or Firefox.


But only because web developers buckled and added loads of workarounds for older versions of IE.


That's a good reminder to turn off your user-agent spoofing. ;-)


Why do you think browsers report user agents? I think it's perfectly reasonable to serve the browser that you say you are.

If a lot of the pages you view end up broken, you would just change the way you lie about your user agent so that the spoofing is more prominent to you and you don't forget about it.

Your complaint is like complaining that if you set your browser's "language preference"* to only contain Dutch, you will end up getting served pages in Dutch.

Why is that different? You ask for Dutch only because supposedly you can understand it, and you ask for a presentation against a particular browser/version because supposedly you're rendering in it. Seems fine to me!

Otherwise, we might as well only serve pages in Esperanto - you know, the universal language standard.

* "(or, more appropriately, that of its user) along with every HTTP request being made in the form of the Accept-Language header that is part of the HTTP/1.1 RFC"


>User agent detection sounds great, until you have a user that spoofs their agent.

Do a significant percentage of users spoof their agent?


Most mobile browsers have a "request desktop site" feature, which I assume would involve spoofing the user agent to look like the desktop version.


Most mobile web applications that would use this polyfill service don't have separate mobile and desktop sites, just a good responsive design, so mobile users wouldn't feel a need to request the desktop site. That feature is usually used (I assume) because the mobile versions of websites often offer limited functionality.


It spoofs the platform, not the browser. Mobile Chrome/Safari should have feature parity with their computer cousins.


Most JavaScript polyfills first check whether the native implementation exists, and only activate the polyfill if it doesn't. This shouldn't break anything. However, the blog article states they use User-Agent detection rather than feature detection. I hope they combine these approaches.
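
A minimal sketch of that pattern (illustrative only, not the service's actual code):

  // Only install the polyfill if the native method is missing.
  if (!String.prototype.trim) {
    String.prototype.trim = function () {
      // Simplified: the spec's full whitespace set is larger than \s.
      return this.replace(/^\s+|\s+$/g, '');
    };
  }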


You can't use feature detection with a simple script tag. It calls in a script, and the CDN chooses the file to send based on the User-Agent; no JavaScript runs that would allow feature detection before the payload is chosen. There may be feature detection afterwards, of course.
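
For reference, the entire client-side integration is a single tag along these lines (the exact filename comes from the service's docs; treat this as a sketch):

  <script src="https://cdn.polyfill.io/v1/polyfill.min.js"></script>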


I think we're saying that most modern polyfills do a feature check before installing themselves... thus if you use Chrome with an IE agent, the polyfill still checks before installing -- it's just a little slower than not having loaded the polyfill to begin with. Anyway, given the percentage of people who do this, UA detection seems like an acceptable tradeoff.


Agreed, feature detecting afterwards makes sense. Others seem to be suggesting feature detecting before requesting the polyfill script.


While the round trip would be a waste, it shouldn't be hard to just include a quick and simple feature detection check before the polyfill is injected. I don't see any reason why one wouldn't do it.


Quick and simple meaning... are you checking for every feature back to ES3?

The point of this service is that it's not requiring the user to know which features to look for. They'll polyfill anything that's needed. To do feature detection at the level they're trying to serve, you'd have to check EVERY API that needs polyfilling on any in-use browser. Full feature detection also involves looking for APIs that exist but don't follow the spec. Checking every single possible API on every browser would not be quick and simple. User agents allow them to settle on a list of known needed polyfills without any code needed from the user beyond a simple script tag.

There are definitely tradeoffs to their approach but I can't say "I don't see any reason" why you wouldn't feature detect for this.
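
To make that concrete, doing the detection client-side means shipping something like the following for every feature covered (a rough sketch; the feature names are illustrative):

  // One existence check per feature -- and existence checks alone
  // still miss implementations that exist but don't follow the spec.
  var missing = [];
  if (!window.requestAnimationFrame) missing.push('requestAnimationFrame');
  if (!Array.prototype.map) missing.push('Array.prototype.map');
  if (!('classList' in document.documentElement)) missing.push('Element.prototype.classList');
  // ...hundreds more, before a second request can even be made
  // for the polyfills that turned out to be needed.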


They have, apparently. Check it out:

https://cdn.polyfill.io/v1/polyfill.js?features=Array.protot...

Docs about this can be found here: https://cdn.polyfill.io/v1/docs/api
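
For example, something along these lines (hypothetical feature list; see the API docs above for the exact syntax):

  https://cdn.polyfill.io/v1/polyfill.js?features=Array.prototype.map,Function.prototype.bind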


Polyfills won't break your page at random; worst case, you get worse performance than native. The point of a polyfill is to do exactly what the native method would have done if it existed.


It will break your page if you aren't sent a polyfill when you should have been. Imagine running a browser without Object.observe while spoofing the agent of one that does have it: you won't be sent the polyfill, and suddenly data-binding doesn't work at all.
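
A sketch of the failure mode (model and render are hypothetical names):

  // With the UA spoofed, the CDN withholds the polyfill, and this
  // throws a TypeError on any browser lacking a native Object.observe:
  Object.observe(model, function (changes) {
    render(changes);
  });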


Yes, but the way Mozilla has chosen to implement this is to send the code over the wire based on the client's UA. So if a method is missing that the UA claims should exist, the client will never receive the polyfill for it.

It would be cool if they allowed you to disable this feature, and just use feature detection and always load the code for the requested item.


The polyfill service was created and is hosted by the Financial Times. Mozilla is not affiliated.

There are good reasons why you cannot make a good choice of polyfill prior to knowing the browser family and version, primarily due to polyfill variants (more details in the Hacks post).


It's not just about spoofing user agents. Take third-party browsers for iOS, for instance - they might be called Chrome or Opera but are really just UIWebViews. If they added "Chrome" to their user agent string, this solution might break, serving incorrect polyfills. Today these browsers identify as "CriOS" and "OPiOS" to work around broken user agent detection. This can also be a problem with all the web browsing happening in embedded webviews (feed readers, reddit apps, ...)


And if you forget to turn Javascript back on things will break too.


>User agent detection sounds great, until you have a user that spoofs their agent.

Yeah, properly serving that 0.01% demographic would be hard...

/s


Something is very wrong with how web developers think about threat models if they're so incredibly willing to load completely arbitrary code into their customers' applications from a source that isn't even remotely party to the vendor/customer relationship.


> Something is very wrong with how web developers think about threat models

Yes.

We have pretty much the same problem with the web as we had with macro viruses in office suites -- because we're solving the same problems in the same way, without learning from past mistakes. It might actually be worse, because if you tell people you don't run macros in untrusted office documents, most people will applaud you for being wise -- while if you say you use NoScript people will dismiss you as a paranoid Luddite.

Runnable code created by random people, from random sources, in one address space with access to all your user data -- what could go wrong?


>It might actually be worse, because if you tell people you don't run macros in untrusted office documents, most people will applaud you for being wise -- while if you say you use NoScript people will dismiss you as a paranoid Luddite.

Maybe because macros are a BS add-on functionality that does nothing for 99.9% of Office users, whereas JS is a key component of the modern, dynamic web.

>Runable code created by random people, from random sources, in one address space with access to all your user data -- what could go wrong?

Yeah, it's not like we have a security model for JS, sandboxed environments, and even each tab running as a separate process.


> Yeah, it's not like we have a security model for JS, sandboxed environments, and even each tab running as a separate process.

Sandboxed environments containing user data that web developers voluntarily compromise by then inserting 3rd-party controlled code.


Precisely. I'm glad my comment didn't pass over everyone's head.

A little disappointed that it's teacup50 that pours cold water on coldtea, and not the other way around. If only I had a biscuit-themed nick, to go with this little sub-thread.


I maintain the polyfill service. I'm very much aware of this issue, and we have tried to take steps to mitigate these concerns:

1. You can easily run the service yourself, just download it from the GitHub repo

2. The hosted version is hosted by the FT and sponsored by Fastly, so you're not importing code from some random unknown entity. Those corporations practice good security awareness (in the FT's case it dramatically improved about 2 years ago), but if you don't believe us, see point 1.


Would you trust the FT and Fastly to have access to all the local data of all your desktop applications?

That's what you're building for the web. #1 doesn't absolve you of the culpability of promoting such a fundamentally flawed tool.


As our web applications get ever more complex, and as a side effect less secure (on the premise that complex sites are harder to secure than simple ones, and more likely to use features like this which further complicate security), I think we're going to see more and more of a divide between the 'secure' web and modern rich applications.

One of the reasons that I'm such a big fan of Chrome's End-to-End project[0] is that it looks like it may provide a good way to bring real security to modern rich applications. In its current incarnation it's not going to be suitable for use on every site out there, but it appears to be a really solid step in the right direction.

[0]: https://code.google.com/p/end-to-end/


This was actually a major security issue pointed out on some gov't websites a while back. Something about the admin page of Obama's blog using third-party Google Analytics...

I can understand something like Cloudflare or another major vendor like Google's CDN being more trustworthy, but even then there would need to be some sort of signature to verify the content (like they do on Mega) before it could truly be trusted as not being compromised. That removes the speed benefit of a CDN, since the preexisting file would need to be loaded and checked every time.


You would be correct, if that were the case here. But Mozilla, as a browser vendor, is far from remote to the vendor/customer relationship. That is like writing a Linux application and being worried because arbitrary kernel code gets called.


Working on an alternative that doesn't use its own polyfills and instead uses other people's well-written libraries. I used polyfill.io before and half the implementations had bugs due to the lack of tests. Feedback welcomed!

http://polyfills.io


I maintain the polyfill service. I think it's great that more people care about polyfilling older browsers and it would be so much more powerful if we collaborated on one service. Your feedback on polyfill.io is accurate, but out of date. I know you were very active in working with Jonathan on his original service, and I'd be delighted to talk to you about how we can work together.

I'd also make a few points of feedback on your solution:

1. It's confusing to name your service with a name that is only one character different from ours. Some might suspect that the confusion is intentional, which I'm sure it isn't - would you consider renaming it? We've already seen confusion happen around polyfill.io (which is still the old service for compatibility reasons) and cdn.polyfill.io.

2. Granted, Jonathan's original polyfill.io had no tests. But that's one of the things we've spent a lot of time fixing, and our test framework is now extremely good. You're testing using naive feature-detects, while we have relatively comprehensive test suites for many features and are adding more all the time.

3. Your targeting of browsers is based on data gathered from crowdsourced sources like caniuse and MDN, whereas we establish compatibility through testing our polyfills in every browser using an automated CI-driven process on Sauce Labs.

4. We are now starting to incorporate other people's polyfills where they are better than ours, and have a mechanism to do that, and to store and serve the appropriate attribution and licence information. We've actually considered many of the polyfills that you have made part of your service.

5. We've had discussions with several of the authors of the popular polyfills you've included, and their main concern over the way we were considering including their code was that we copied it into our repo rather than linking their repo as a dependency. You're doing that too, and including more third-party code than we have so far. It's totally legit within the licences they've granted and it's a policy we'll probably continue to follow too, but we're still thinking on how we can do this in a way that keeps everyone happy. For the moment, this is encouraging us to lean more towards using our own code.


> half the implementations had bugs due to the lack of tests

No, the implementations had bugs because they had bugs.

The bugs were not documented or fixed due to lack of tests or use.

For some this is a distinction without a difference; for others, there is a huge difference.


So we've spent years shouting out the mantra "test for features, not browsers!" and now Mozilla, of all people, tell us that that was basically a bit impractical, and we should just go back to user-agent sniffing like we did 8-10 years ago?


It looks like Mozilla is testing for virtually any web browser that is in use. That seems OK.

And, you can choose to do feature detection if you wish. Here's a quick example:

https://cdn.polyfill.io/v1/polyfill.js?features=Array.protot...

More information about that can be found here: https://cdn.polyfill.io/v1/docs/api


> and now Mozilla, of all people, tell us that that was basically a bit impractical

No. What Mozilla did was explain their reasons for doing so. So at the very least your comment should engage with their stated reasons in some detail. Otherwise it's hard for the rest of us to know whether you've got a valid objection or you just replied without really reading the original article.


Not Mozilla. The polyfill service is made and hosted by the Financial Times. Mozilla just kindly offered us a posting on the Hacks blog.


Does anyone know why Array.from is not supported on the most recent versions of Firefox? (That's the one feature where the polyfill works on older browsers but apparently does not work on newer ones.)


Firefox 32+ supports it natively.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...

http://kangax.github.io/compat-table/es6/

There doesn't seem to be anything wrong with Firefox 32+'s implementation.


Thank you; the chart indicates it is not supported natively. There is a feature on the site involving unit tests of each of the features: maybe something about that unit test isn't working on Firefox 32? (Or maybe it is just a mistake, of course.)


Mozilla's implementation is the problem here, not the polyfill. The polyfill is actually (on this point) a better implementation of the spec than the one that landed in FF 32. What has happened here is that our test runner has loaded the polyfill, but it has not installed because FF passes a naive feature-detect. However, it goes on to fail the full test suite, so we conclude that the polyfill does not work (which is strictly true) and mark it accordingly on our compatibility chart.

Moz bug filed: https://bugzilla.mozilla.org/show_bug.cgi?id=924058
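
To illustrate the distinction (hypothetical code, not our actual test suite):

  // The naive detect passes on Firefox 32, so the polyfill steps aside:
  if (typeof Array.from === 'function') { /* keep the native version */ }

  // ...but a fuller conformance test checks behaviour, not mere existence,
  // and can still fail:
  try {
    if (Array.from(new Set(['a']))[0] !== 'a') throw new Error('non-conformant');
  } catch (e) {
    // mark Array.from as broken on this browser
  }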


I love it. It'll be interesting to see if browser vendors will start attempting to detect these sorts of polyfill services and respond with their own browser-bundled or -specific polyfills.


Yyyyeah uh, a browser-implemented polyfill is also called a 'supported JS feature'. Polyfills are for older browsers that don't support a certain feature yet (or I guess for features that are not yet implemented in newer browsers).


Why would a browser want to bundle a polyfill, instead of just implementing the feature in question?


If browser vendors adopted this opt-in, polyfill-first approach, it would make it significantly less expensive and much simpler to progressively roll out experimental features to the public.


[deleted]


This post is about the FT service http://polyfills.io/


polyfills.io is not an FT service.

polyfill.io is the original service by Jonathan Neal that our service is based on. It is shortly to be decommissioned.

cdn.polyfill.io is our new service, the subject of this HN thread, and a collaboration between Jonathan Neal and the FT.

polyfills.io is an unaffiliated project by Jonathan Ong.

The naming is confusing. Hopefully it will be clearer when we're able to decommission the original polyfill.io, and I've asked Jonathan Ong to consider renaming his.


A brief introduction to "polyfilling" and what that means would be pertinent.

I've never heard that term and have no idea what this is about, nor do I care.


As a graphics guy I thought it referred to the algorithms used to fill polygons, but it turns out that "polyfills" in web-developer parlance are developer implementations of things that should be standard in the browser but aren't.

E.g. maybe all browsers implement an array sorting function, except for Internet Explorer. So for IE clients you'd load a "polyfill", which would be JS code that implements array sorting.

I'm really glad I'm not a JS developer.


>I'm really glad I'm not a JS developer

Yeah, because other languages don't all have their issues...

"as a graphic guy" implies strongly "C++". Hardly the pinnacle of language design...


Everyone picks their poison, why can't he be happy with the one he's picked?


My sentiments exactly. He doesn't have to piss on JS.


Well, if you never heard that term, and

1) you are a front-end JS developer, perhaps front-end JS development is not for you.

2) you aren't a front-end JS developer, then the post wasn't meant for you, so not much need to explain anything.


3) you were born knowing everything about your trade, so no point posting articles like these.


I was replying to a parent who seems to have deleted his comment.

I'm not suggesting that everybody should know everything about their trade. A polyfill, on the other hand, is an extremely common thing in the trade. You'd expect a surgeon to know what a scalpel is.

Plus, he put it like: "I don't know what it is and I don't care". How about bothering to Google the names you don't know, instead of demanding that everybody else include introductory terminology lessons in their posts?


It's not very productive to talk down to people, even if it's satisfying to you. There's a better way: http://xkcd.com/1053/


Talking down? I was replying to the parent's "I don't know what this is and I DON'T CARE" comment.

If you don't care, you can always not comment.

If you do care, Google is a keyboard shortcut away, and it's up to you to get informed, not to harass the post authors about not including a terminology section.

How about "I don't know what side-effects are and I don't care" posted for every Haskell post? Or "I don't know what functions are"? Where does this BS ends?


Sorry, I just recoil against phrases like "x is not for you."

You seem to view this individual commenter as part of a sea of uncaring people, but it's just one person. Change their mind, and their contribution to the "BS" does end.

Even if a response like yours seems deserved, it's not very productive. It certainly doesn't help someone to care more.


Basically, the JavaScript language standard defines a set of default API functions. These have been expanded over time, most notably with ES5, which added things like .map() to arrays.

However, older browsers like IE <9 do not support ES5, so if, as a developer, you try to call .map on an array in IE 8, you'll get an error.

A polyfill uses JavaScript's awesome ability to add functions to built-in types and extend the language in other ways. So if a browser does not ship with, for example, Array.prototype.map, a polyfill can do Array.prototype.map = function () {...} and implement it.

That way, a developer can simply use .map and not have to write a different implementation (or fall back on ye olde for-loop).
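
A bare-bones sketch of that (deliberately simplified; a spec-compliant version needs more edge-case handling):

  if (!Array.prototype.map) {
    Array.prototype.map = function (fn, thisArg) {
      var out = [];
      for (var i = 0; i < this.length; i++) {
        // A real implementation must also skip holes in sparse arrays.
        out[i] = fn.call(thisArg, this[i], i, this);
      }
      return out;
    };
  }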


[deleted]


This article was written for web developers, and your comment makes it clear that you are not one. You are not in the target audience and you shouldn't be upset about this.


And yet, if you just google "polyfill", there are no results referring to that usage. So, although polygon filling would also be interesting to read about, it's not really fair to complain about the OP's idiomatic usage of the term.


People fought the terminology for a while, but were overwhelmed. The only purpose it seems to serve is to make a simple discussion about a simple concept completely confusing to people not 'in the know', and in addition the confusing word in question is nearly ungoogleable. Greek does make things sound more mathematical/formal, though, so there's that.

edit: it does seem to be googleable now, but I think the search engines are taking hints from wikipedia.


Yes, whenever I hear polyfill I think computer graphics. Then I decided to read up on it and discovered that it's such a simple concept that's been employed forever. I just continue to shake my head at the kids coming up with jargon to make old things new again. I wish the computer field were less like the fashion world.


I don't see what's stupid about that. Are you suggesting that anytime we want to talk about polyfills, we should instead write the following?

  #include "config.h"

  // ...

  #if !HAVE_FOOBAR
  // The platform didn't provide foobar(), so supply our own.
  void foobar(void)
  {
      /* fallback implementation goes here */
  }
  #endif
Because that's nuts. This usage is actually more useful than the one that's a contraction of "polygon filling."


> This usage is actually more useful than the one that's a contraction of "polygon filling."

Actually, it's a reference to polyester fiberfill (poly-fil), a material used for upholstery, crafts, etc.


Well, there's "shim"....


What did the [deleted] parent comment say?


It complained that this use of the word "polyfill" was "stupid" because it just referred to the code sample I copied in my comment, and that older programmers would recognize the word as referring to polygon filling operations.

Basically, it sounded like the comment author went in expecting an article about rasterization and was upset to find that web developers used the word for something else. Which I'll admit I've also done before on other topics, but you can't begrudge other fields their jargon.



