Why we use progressive enhancement to build Gov.uk (gdstechnology.blog.gov.uk)
235 points by joelanman on Sept 20, 2016 | 147 comments



I really appreciate that kind of focus and dedication. I recently had another dive into web development after a couple years of absence and was pleasantly surprised by tools like react. In combination with server side rendering it allowed me to serve a solid/usable HTML page with no JS required. When I started pulling in components from popular Material Design libraries, though, I realized that nobody thought about progressive enhancement even for a second. Rather than serving a plain `<select />` and then progressively enhancing it, all I got was an unusable mix of HTML tags and styling. It's a shame that even down at the library level people don't care about progressive enhancement. Are there any (react) projects that try to do better or is the mantra more like: react without JS is nonsense?
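For what it's worth, the no-JS baseline being described here is pretty small. A rough sketch of the idea, assuming React rendered on the server with Express (the route, port and component name are invented, not from any particular library): the server sends a working form around a native `<select>`, so the page is usable before any client-side script arrives, and enhancement can be layered on afterwards.

    // server.tsx (hypothetical): render a plain, working form to HTML
    import express from "express";
    import React from "react";
    import { renderToString } from "react-dom/server";

    // Just HTML: works with JavaScript disabled, enhanceable when it isn't.
    function CountryPicker() {
      return (
        <form method="post" action="/choose-country">
          <label htmlFor="country">Country</label>
          <select id="country" name="country" defaultValue="gb">
            <option value="gb">United Kingdom</option>
            <option value="fr">France</option>
          </select>
          <button type="submit">Continue</button>
        </form>
      );
    }

    const app = express();
    app.get("/choose-country", (_req, res) => {
      res.send("<!doctype html>" + renderToString(<CountryPicker />));
    });
    app.listen(3000);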


There's no substitute for caring, and unfortunately much of the contemporary JS/Web world just doesn't care about these issues.


> There's no substitute for caring

Or for being paid. Progressive enhancement is more work than not doing it, and if the client/manager/… doesn't want to pay for it, the dev is unlikely to bother doing it on their own dime.


The 80/20 rule applies once more - why spend 80% of the work on the 20% of users that would benefit from progressive enhancement? Unless those 20% of users generate 80% of the revenue, which in all likelihood they don't.


I'm a startup guy so of course I understand this, but on the other hand accessibility is one of those things that takes practice and know-how, so if you never do it of course it seems like this giant mountain to climb. However if you make it a priority, it's not 80% of the work, more likely it influences how you do things, so maybe you spend 20% more effort and end up with a little less whiz-bang. There is a cost, but we shouldn't paint it as an insurmountable thing. This is why we have the ADA, so that people with disabilities don't have to go through life as second-class citizens.


You only have to handle the 20% of users as a special case if you have gone and made some technology choices that exclude them. There is nothing inherent about the internet or web that says that needs to be the case.

A rephrasing might ask why we make technology choices that exclude 20% of users?

If we're not able to produce a single page web app that reaches 100% of users, that's cool... just make a many page web site without lots of JavaScript that does.


My biggest gripe with this, though it seems sensible on the surface, is that these numbers are often simply pulled out of someone's ass to argue their stance. Often by a manager or a front-end person who just wants to work with the new hotness. How do we know 20% (or whatever the number is) is at all a relevant estimate? Who are the demographics that make up this number? Why is it important to isolate or exclude them? Furthermore, at best this number often only accounts for the known people that the product is already not serving or serving poorly. Not the people who bail immediately upon realizing this service isn't for them because it opted against an inclusive strategy.


I did some actual math on progressive enhancement costs vs. graceful degradation in consideration of reach: https://medium.com/@AaronGustafson/the-true-cost-of-progress...


Too right - PE costs more and unless explicitly specified and understood it's not going to be done.


PE is the default of how HTML, CSS and JS work. You have to go out of your way to screw it up. Which is precisely what the whole web world delights in doing now.


> PE is the default of how HTML, CSS and JS work.

PE is not what the client pays you for.

> You have to go out of your way to screw it up.

Hardly. The shortest path to the whizzbang the client wants will break PE all on its own.


Quite a lot of websites don't have clients that pay for design - in many cases (especially SaaS), the person wholly responsible for the look & feel target is your colleague the designer, sitting in the office next to yours.

Come to think of it, given how most websites are designed, I'm starting to think we're making a mistake with sticking to HTML/CSS/JS. We should move to an "executable Photoshop image" format - so that the beautiful magazine-like layouts made by designers could be implemented without the shit ton of hackery and bloat webdevs have to put in nowadays. We could also push some security and sanity decisions into the format this way, so that the tool itself would tell the designers they can't (or shouldn't) do something.

(I'm only slightly sarcastic here.)


> Quite a lot of websites don't have clients that pay for design - in many cases (especially SaaS), the person wholly responsible for the look & feel target is your colleague the designer, sitting in the office next to yours.

Call them boss, call them stakeholder, call them prospects, call them whatever you want; the point is, progressive enhancement requires actual hard work. It's not the default state of things once you move beyond the "research paper" format of a bunch of paragraphs of text interspersed with section titles on a completely static page.


I still disagree - I made websites back in the ancient days when people used to generate HTML server-side. Things worked perfectly well. The current proliferation of heavy JS frameworks buys us nothing over those times, but wastes more of our users' electricity and time.


Just because it's the default doesn't make it easiest nor does it make sense as the default necessarily; it's just the default because that's how the web started. I'm not saying we should skip progressive enhancement, but there's truth to it being more work (compared to how easily we can do things without it, using JS alone), regardless of how the web was originally designed to work.


"Just because it's the default doesn't make it easiest nor does it make sense as the default necessarily"

I used to throw them together in Frontpage or Dreamweaver with a few optional scripts. Scripts for things like page counters or menus I just cut and pasted off dynamicdrive.com followed by tests in several browsers. Backend handled forms and such. Worked on every browser, loaded up fast on dialup, and everyone knew what my HTML/CSS code did without training. Can't say that about most of these sites and frameworks these days. So, I'd say it's easiest and a sensible default.

To be clear, I'm not critiquing complex web applications that truly need advanced tooling client-side. I'm talking about the 90+% of what's out there that just publishes content with minimal dynamic elements needed.


It’s easy to knock something together with JS. It’s very difficult to make it work well, certainly in our experience harder than having reasonable fallbacks.


This, this, so much this.

We owe it to our users to provide all of them with a good experience. We also owe it to ourselves to write good, clean, maintainable applications. The wonder of it is that very, very often if we write RESTful apps then progressive enhancement just falls out naturally: a GET request with some JSON Accept header will return data; with text/html it'll return data, and possibly a form to update it; a POST with some JSON Content-Type will DTRT, and so will one with application/x-www-form-urlencoded or multipart/form-data.

Why wouldn't one do that?

Of course, the user experience of such an app isn't as great as a singing, dancing single-page app (SPA). But a user experience is better than no user experience, which is all that offering only an SPA provides to users without JavaScript.

It's just sloppy to require JavaScript and CSS to use a web site. It's particularly sloppy when that web site is composed of documents.
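A rough sketch of what that content negotiation can look like in practice, with Express used purely as an illustration (the resource, fields and routes are made up): the same URL serves JSON to API clients and an HTML form to browsers, and the POST accepts either body encoding.

    // Hypothetical resource that answers both JSON clients and plain browsers.
    import express from "express";

    const app = express();
    app.use(express.json());                          // application/json bodies
    app.use(express.urlencoded({ extended: true }));  // form-encoded bodies

    const widget = { name: "widget", colour: "blue" };

    app.get("/widget", (req, res) => {
      res.format({
        "application/json": () => res.json(widget),
        "text/html": () => res.send(
          `<h1>${widget.name}</h1>
           <form method="post" action="/widget">
             <input name="colour" value="${widget.colour}">
             <button>Update</button>
           </form>`),
      });
    });

    app.post("/widget", (req, res) => {
      // Both body types were already parsed by the middleware above.
      widget.colour = req.body.colour;
      res.format({
        "application/json": () => res.json(widget),
        "text/html": () => res.redirect("/widget"),
      });
    });

    app.listen(3000);

curl with an Accept: application/json header gets data; a browser gets the same data plus a form that works with no script at all.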


For government websites I can understand it but for commercial stuff I think this attitude of "everyone must be supported at all cost" is nonsensical.

Where do you draw the line?

Do I have to also support people who prefer to use command line text-based web browsers?

Before you know it we'll have someone say "Oh I work for a bank, we have HTML disabled here. Can you make it work without HTML?" give me a break.


> Do I have to also support people who prefer to use command line text-based web browsers?

Yes. The web is about linked HTML documents: if you don't support lynx, links, elinks, w3m, emacs-w3m & eww then you're honestly not doing Web development properly.

You have to do more work to not support them, because writing plain HTML + forms is the baseline.

> Before you know it we'll have someone say "Oh I work for a bank, we have HTML disabled here. Can you make it work without HTML?" give me a break.

That wouldn't be using the Web anymore.


Many people aren't building websites. They're building applications which target the cross-platform runtime and APIs provided by modern web browsers.


Many of those people should not be building applications. Boilerplate startup landing pages, blogging platforms and e-commerce sites should be websites, as they always were.

As for those who really are building web applications - well, those are separate beasts. Nobody expects to run Google Docs or Gliffy properly in lynx. But I do expect my banking site to run in a text browser, because there's no valid reason it shouldn't.

(Yes, I understand the business reasons for the current sad state of the web. I also believe those reasons are things one should be ashamed of.)


There is a line: support vs. optimization.

Support everyone with a browser & internet access, optimize for the stuff that has the greatest market share or where you can do the most good and stand a good chance of having your dependencies met.


> Why wouldn't one do that?

There's a litany of reasons, but they all basically boil down to this: it's more work than just picking one solution that you think most people will use and executing on it well. Those users usually get a better experience and the pointy-haired boss is happy you shipped so fast. I didn't have the luxury of doing any of this until working at my current Big Co. web job, where accessibility is non-negotiable and we actually take steps to make sure everyone can use our sites. But even then we don't serve pages that work without JavaScript half the time.


I like the reasons they give why users might not have JS, even though they haven't blocked it themselves. Here are three:

* user’s hotel is intercepting the connection and injecting broken JavaScript

* user’s telecoms provider is intercepting the connection and injecting broken JavaScript

* user’s country is intercepting the connection and injecting broken JavaScript

Broken or not: all three of those sound like good reasons to intentionally block JS.


* ...provider is intercepting the connection and injecting broken JavaScript

Who the hell puts up with stuff like this? Is this real?

If so, time to switch provider. Really.



See also: https://www.mysociety.org/2011/08/11/mobile-operators-breaki... where O2 stripped JS comments, without checking to see whether those comment delimiters were part of a string.


Yes, the unfortunate reality is a lot of places have de-facto monopolies in the US, but the monopolies get to claim there is still a free market by pointing to dial up or spotty satellite connections.


The user might not have another provider capable of fulfilling their needs. Either too laggy, too slow, too expensive, unacceptable data caps, or not mobile. It's completely possible for the user to live in an area where slow DSL is their only broadband option, the cell phone signal is weak, and the user can't afford satellite.


…locked in on a contract for 2 years, provided for them by their company, provided for them by their family, etc etc.


I think all mobile providers in the UK do it, especially when it comes to downgrading images.


At that point I would either be switching providers to someone who didn't mangle my signal (because that's effectively what this is), or if that for some reason wasn't doable, I would at least install a VPN to a known secure, untampered connection (like my home router) and route everything through that connection.

Having someone intercept my signal and process it behind my back is simply not something I would tolerate from any communications-service provider I employ.


You are part of a tiny minority who has the technical interest / proficiency to do this.


To my knowledge Three UK doesn't. That's based on using them and not noticing any degradation, or the click-to-view-original-quality javascript that T-Mobile were (are?) injecting.



Vodafone (Fiji) used to do this.

They did honour the 'Cache-Control: no-transform' header though, so every site I build now includes that directive.
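For anyone wanting to do the same, the header just needs to go on every response. A minimal sketch, assuming an Express app (not necessarily what the parent commenter uses):

    // Ask intermediaries not to recompress or rewrite responses (RFC 7234).
    import express from "express";

    const app = express();
    app.use((_req, res, next) => {
      res.set("Cache-Control", "no-transform");
      next();
    });
    app.listen(3000);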


These are not solved by progressive enhancement but by SSL.


The author doesn't say that they are, he's simply listing reasons that a user may have JS disabled, and hence why it's a good idea to employ progressive enhancement, so that such users are not entirely locked out of using your app.


If you want to tackle these specific reasons, just serve the site via SSL. Other reasons for progressive enhancement might still apply, but the ones listed cause bigger troubles than "JavaScript does not work correctly", and SSL is a complete solution to all of them.


You're conflating user and service designer here. The user has a problem (including but not limited to those listed) and so turns off JS. The service then has the problem that JS is disabled... no amount of SSL will help if your pages require JS to load.

Edit to clarify: The user turning off JS and using the service are likely to happen at entirely different times. If you browse with JS turned off, it's likely off by default and only enabled when needed (if at all).


How many users actually disable JS for these (or any other) reasons? Might it be cheaper to e.g. have a call center that is trained to use the website on behalf of such users, rather than increasing the development costs of every government site?


Last time we checked we get ~180k visits a month from people who specifically disable JS, ~810k further visits a month from people who don’t have JS available.

So no, it’d be more expensive to have a call centre.


I don't like this idea because it sacrifices ease of use in order to make small savings during the development phase. You shouldn't subject your users to extra friction if you can avoid it, and I think this is especially true for a government site.


My suspicion is that maintaining a call centre is more expensive than using progressive enhancement... but it's a fair question.

You're also assuming that starting from a simple HTML implementation is more expensive than other methods. I'm not sure that's the case.

As a rule, in terms of accessing public services at least, I'd be inclined to be very careful to make sure it was as accessible as possible.


"The user's service provider or corporate firewall blocks ssl"


Not everywhere; sometimes organisations such as the university I'm in make you install a certificate to access the network.

SSL is supposed to work a certain way, in practice it doesn't always work that way.


SSL won't do any good against a nation-state with a CA key, so it won't do any good against JavaScript injection by such a state.


How exactly will SSL solve this when there is a middleman proxy?


It is the whole point of SSL that there are not any.


My last point about the user’s country linked to the example of Kazakhstan, where all HTTPS traffic is blocked unless you install a government-supplied middleman cert.


And here's a good example of one hand giving while the other takes away; other parts of the UK government are quite cozy with Kazakhstan. We probably sold them the HTTPS interception solution.


> It is the whole point of SSL that there are not any [middleman proxies].

Hahahahahaha, you should do stand-up!

In my more cynical moments, I wonder if the whole point of SSL is to obscure when middleman proxies are in use.

Seriously, examine your browser's trusted CA list. Do you really trust every single one of those CAs to vouch for any website in the world?


If someone has got a CA to produce a fake certificate for your website, you have more serious things to worry about than progressive enhancement!


Unfortunately, most employers and many countries violate this expectation. I expect businesses providing wifi to customers to start doing this, too.


I don't know about 'most' employers - so far I haven't worked for one that has. But I'm sure there are many that do.



If they inject broken JS, JS execution can stop and if you're not careful you might end up with a broken page.


Also good reasons to use HTTPS.


The gov.uk site is wonderful to use! It is incredibly fast, just a tenth of a second latency, it feels.


> The gov.uk site is wonderful to use! It is incredibly fast, just a tenth of a second latency, it feels.

That's because unlike "modern" sites, it contains the site's contents in the HTML, so when the HTML is loaded, you have everything you came for.

    % curl  https://gdstechnology.blog.gov.uk/2016/09/19/why-we-use-progressive-enhancement-to-build-gov-uk/ >test.html
    % du -h test.html
    36K     test.html
That amount of data can still be downloaded over the slowest mobile link imaginable in a fraction of a second. And whatever scripts are needed to do whatever extra can be downloaded and happen after the fact. That's how the web used to work.

Compare to "modern" SPA-monstrosities (like blogger.com until recently), where you had actual code written to defer showing the (1KB) content (which you came for), until several MBs of fonts, scripts and whatever had loaded, parsed and executed.

It was enough to make page-loading on a high-speed connection of a modern PC take many many seconds. On a mobile phone with a weak CPU this can lock your phone up for quite a time.

If that's not an anti-pattern I don't know what is.

We need to get back to the basics. The basics worked, and the direction we're heading is doing all the wrong things.


> Compare to "modern" SPA-monstrosities (like blogger.com until recently), where you had actual code written to defer showing the (1KB) content (which you came for), until several MBs of fonts, scripts and whatever had loaded, parsed and executed.

Whoever came up with that idea should be loaded, parsed and executed.

> We need to get back to the basics. The basics worked, and the direction we're heading is doing all the wrong things.

Frankly, it's gotten to the point that I don't even like using my smartphone anymore. Even with uBlock, sites take too long to load, they do annoying things with animations, colors and behaviour, and it's just generally a burden to use.

I just want to be able to read documents and follow links to other documents. The Web was awesome when it was a web of documents, linked together.


> sites take too long to load, they do annoying things with animations, colors and behaviour, and it's just generally a burden to use.

YOU THERE! YOU, SILLY USER!! YOU'RE SCROLLING WRONG. LET ME HANDLE SCROLLING FOR YOU!!


Perhaps someone should port lynx or links2 to smartphones.


> That amount of data can still be downloaded over the slowest mobile link imaginable in a fraction of a second.

That's a bit of excessive enthusiasm. EDGE is effectively ~200kbps (30kB/s) with ~200ms ping, so you're at a good 1.5s assuming no dropped packets.

And that's not "the slowest mobile link imaginable" by a long shot, GPRS is under 100kbps and before that was CSD (around 14kbps, with high speed CSD around 56k)

And CSD was already 2G, 1G was NMT, at 1200 bits per second.

None of these even require imagination, they're existing historical mobile links, most of which (GPRS up) are still in active use (I believe NMT has no deployment left, and while I'm reasonably certain most telecoms have dropped CSD I wouldn't bet that all deployments are gone)
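(Working the figure through for the 36K page above: 36 kB ÷ 30 kB/s is roughly 1.2 s of transfer, plus at least one ~200 ms round trip before the first byte arrives, which is where the ~1.5 s estimate comes from; at GPRS speeds the transfer alone is three seconds or more.)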


SPAs aren't helping with this.


How are they not?


By requiring multiple megabytes of bullshit to be downloaded before they can even _begin_ loading actual content.


That page might not be the best example - I think the blog is just wordpress.


There's a lot of people who really don't like gov.uk, and I don't understand why. I've always been able to find the information I'm looking for quickly - faster than any other Government website I'm forced to use.


I would assume Politics. From a technical side they're great.

However, what is happening there is basically taking work away from big outsourcing contracts (lots of easy money involved) with companies like Capgemini. That upsets the people in those companies, and the civil servants who eventually want to work for those companies. These companies generally hire ex-civil servants they used to work with.

They're also taking responsibility away from the individual departments, which also upsets empire-building civil servants, even if GDS is doing a better job. At the moment there's talk of breaking up GDS and giving services back to the departments that did a terrible job in the first place.

So you get a lot of drama and arguments involved, and "House of Cards" style schemes against them to try to get rid of them. Frankly, with all the money and self-interest against them, I'm surprised they've lasted this long.


> surprised they've lasted this long.

It's rare for me to say David Cameron and his friends did anything right, but this was one initiative that did make the world a better place. GDS wouldn't have been around this long without significant support from the very top. It is telling that a change of political fortunes will likely coincide with GDS losing primacy.


It was very hyped, when often all it's done is provide a better-looking menu page - the page you land on looks great, but once you click through to the actual functionality you're back at the same page that used to be there. And that results in an even less consistent experience than before - before, at least, all the HMRC pages were on the HMRC site and looked the same, all the FCO pages were on the FCO site and looked the same... now you click a menu link on gov.uk and you might get an old-HMRC-style page, an old-FCO-style page, or something else.

I mean don't get me wrong, it's fine as far as it goes. It does look nice. But it doesn't make the actual functionality noticeably more usable, so it's hard to see what all the fuss is about.


A lot of functionality is being gradually replaced with GDS services (*.service.gov.uk domains), though.


To be clear, we’re not doing that work. We’re helping the departments build their own services with both patterns and standards, and we assess each service for quality before it goes live. You can see what you need to do to meet the service assessment standard at https://www.gov.uk/service-manual/service-standard


Lots of information that used to be on government sites was removed.

They added Google Analytics everywhere, even on the pages where you fill in your taxes or apply for a passport. Google does not need to know my tax details. Google shouldn't know my tax details. And since Google isn't even a UK company, the UK government doesn't have legal authority over what Google does with the information it now holds about UK citizens' taxes.


This is the site that is used to apply for a passport: https://passportapplication.service.gov.uk/ips-olc/

Where on that site does it use Google Analytics?


https://www.gov.uk/help/cookies

They tell you they're using the cookies. They tell you why they use them. They tell you how to opt out of them.

They also say:

>> We don’t allow Google to use or share our analytics data.


It looks like they make their users' browsers request a Google URL.


It seems likely that they have a contract with Google that doesn't allow Google to use analytics data like they would for other clients.


Legal contracts are no replacement for technical impossibility.


Google breaking its contracts over Analytics Premium would severely negatively affect it, given that it's a product a number of very large companies and organisations use.


The NSA will just install their funnel somewhere new and off you go.

Browsing privacy is not just about the intentions of the two parties talking to each other, but about everyone holding the wire, plus everyone the other site invites to listen in on your private exchange.


I don't think the NSA snooping on Google Analytics should really be in your threat model when looking at a UK government site. They could just ask GCHQ.


All user data is anonymised before sending to Google. I believe we strip the last two blocks off the IP address before sending it over. We use the analytics to inform choices we make - our 3rd design principle is to ‘design with data’ https://www.gov.uk/design-principles#third . Finally, on projects like Verify, where it’s critical that data doesn’t leave the UK, we run our own instances of analytics software.


>I believe we strip the last two blocks off the IP address before sending it over.

No you don't. You don't "send" anything over to Google; the user's browser does, because loading a gov.uk page also loads a Google Analytics script straight from google-analytics.com. You don't have the power to prevent the user's IP being sent to Google in that situation, and it's worrying that you think you do.

Additionally, focusing on the IP address seems rather silly when a much more accurate tracking cookie is being used.


I've been deciding where to immigrate, and the quality engineering behind gov.uk has factored into that decision. Furthermore, it's easy to navigate and logically laid out.

If it is at all a representation of how smoothly the overall government runs, then "shut up and take my taxes."


Unfortunately, it's really not. The GDS are sort of a silo, and take many more risks than most of the rest of the Government. The rest of it is plagued with bureaucracy and requirements to outsource large chunks of their operation.


Thanks for the information, much appreciated.


I'm glad GDS is working to improve gov.uk sites but some really simple things still seem broken. Example: it'd be great if GDS would stop with the ridiculous forced password recipes across gov.uk sites.

Today I had to try three times to get 1password to satisfy the insane requirements of the site (8-12 alphanumerics, no special characters).


> it'd be great if GDS would stop with the ridiculous forced password recipes across gov.uk sites.

GDS isn't enforcing weird password policies across government.

This is the advice GDS publishes on passwords: https://www.gov.uk/service-manual/user-centred-design/resour...


I'm not surprised really, since the sort of services you can access with a gov.uk account can lead to identity theft or other types of fraud.


So require a long password. Limiting passwords on an important site to 12 characters is madness.


The cited password recipes are not better for security.


The JavaScript annoyance will always be a part of web development at this point, but I really don't see why so many people use these single-page applications and require JavaScript to perform simple tasks.


Because full page loads are a poor user experience for many actions a user can perform. The more such actions that are possible, the more it makes sense to build as a web application instead of a web site.


I agree that, among other things, one should make it dependent on how the user interacts with the service.

I think it makes perfect sense that apps like Slack, New Relic, etc. are SPAs. But on news/blog-type sites, for example, it often leads to a worse user experience in my opinion. A lot of traffic (and for some sites probably the majority) is external, pointing directly to an article.

In that case, I'd rather have a full page reload when I happen to click on a related article after reading than watch a spinner for a second (or several) every time I land on their page somehow.


They are a poor user experience because you don't want to have to load the whole text for cases where the user is only supposed to see some of it at first?

If so: then does it make sense to download megabytes of JS libraries in order to save downloading 100kB of text?


That's not what I'm talking about. I'm talking about actions a user is taking. For instance, HN just performed a full page load for me to make this comment, and will do another one after I click "reply", and then I'll have to find my place again. That is sucky and would be much better implemented with javascript and a background POST. This is somewhat ok on HN because actions like commenting are much more rare than passively reading content (and because I find enough value in the commentary here that I'm willing to put up with quite a bit of suckiness), but it is a lot less ok for applications that are a lot more behavior based.

To put it in concrete terms, I came to the conclusion that SPAs (or at least using javascript to implement most behaviors on individual pages of a multi-page application) make a ton of sense after spending a few years building an internal line-of-business application with a lot of behavior and not much content as a normal Rails multi-page app, noticing that the user experience was not great at all, and finding it to have very low return on investment to attempt to do it with progressive enhancement. We would have saved a lot of time and money by starting with a JS-based SPA approach for an application like that. If I were building a blog or news site, I suspect the opposite would be true.
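For the smaller comment-form case, the background POST can also be layered onto a form that already works without it. A small sketch, with the selector and fallback behaviour invented for illustration; if JavaScript is missing or broken, the form simply submits the old way:

    // enhance-comments.ts: background POST layered on a normal, working form
    const form = document.querySelector<HTMLFormElement>("form.comment-form");

    if (form) {
      form.addEventListener("submit", async (event) => {
        event.preventDefault();                // skip the full page reload
        const response = await fetch(form.action, {
          method: "POST",
          body: new FormData(form),
        });
        if (!response.ok) {
          form.submit();                       // fall back to the normal flow
          return;
        }
        form.reset();                          // reader keeps their place
      });
    }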


> For instance, HN just performed a full page load for me to make this comment, and will do another one after I click "reply", and then I'll have to find my place again.

That's why I comment in a new tab.

But yes, I see that as an example where JavaScript could improve an HTML page — but that's no reason to make it impossible to use without JavaScript!


What's wrong with an old school navigation bar and page segmentation? You really don't need JS to display simple blog or other text-mostly webpages.


A lot can be done without a new page load without using JS, and SPAs tend to break page loading altogether.


What can be done without a page load without using JS? Are you talking about anchors or CSS animations or what?


Those sorts of things. But also iframes are powerful.


Powerful how? I'm struggling with the idea that it is possible to build the kinds of apps I spend most of my web time in using anchors, CSS animations, and iframes. I'm struggling even more with the idea that it might be worthwhile to try.


Because most sites won't work with JavaScript disabled regardless, and single page applications improve the user experience by dropping full page refreshes and replacing them with instant on-screen feedback.

That being said, you do have to be careful or your site may break for people utilising accessibility technologies such as a screen reader.


It's easy to get rid of full-page refreshes in a traditional many-page application with progressive enhancement with pjax. It's not very easy to go the other way, though.
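The core pjax idea is small enough to sketch by hand (this shows the general technique, not the pjax library's actual API; the container id is invented):

    // Intercept same-origin link clicks, fetch the next page, swap the content.
    document.addEventListener("click", async (event) => {
      const link = (event.target as Element | null)?.closest("a");
      if (!link || link.origin !== location.origin) return;  // normal navigation

      event.preventDefault();
      const html = await (await fetch(link.href)).text();
      const next = new DOMParser().parseFromString(html, "text/html");

      document.querySelector("#content")!.innerHTML =
        next.querySelector("#content")!.innerHTML;
      history.pushState(null, "", link.href);                 // keep the URL in sync
    });

A real implementation also listens for popstate so the back button keeps working, which is the sort of detail the library takes care of. Without JavaScript, the links are still just links.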


I wish progressive enhancement were more popular, but it is probably too late for anyone to ever care about it.


Well, anyone who tries to access a slow-loading page cares. As does anyone trying to use a page that misbehaves because over-aggressive JavaScript is taking control in unhelpful ways. And anyone whose phone runs out of battery because some script is stuck in a crazy loop cares.

They might not know or care what the technical reasons or solutions are. That's supposed to be our job.


I care about it, but I just feel so befuddled by CSS that I don't know where I would begin to learn to do it. I've been hoping that I could find somewhere that teaches a mental model of layout which doesn't feel like playing whack-a-mole. However, I've spent 9 years using HTML+CSS, had multiple discussions with fellow engineers, and even asked an MDN technical writer. Everything I've seen tells me that, "yep, CSS is just like that."

And so getting rid of JavaScript as a tool to make things work just doesn't make sense.

Hopefully flexbox becomes more prolific.


Sorry, but 9 years is definitely enough to learn CSS. If that is true, you have experienced IE6, whose broken implementation helped nicely in understanding the underlying models.


No, he's right. The browser incompatibilities are incredible. The devil lies in the subtle things. Relative spacing and heights, absolute positions, divs that have to line up, etc. As soon as you require precision on all browsers, it becomes a nightmare. And a decade of HTML+CSS experience changes nothing about that, unless you only go for the most simplistic website imaginable.


> As soon as you require precision on all browsers, it becomes a nightmare.

Don't do that then. Seriously, browsers aren't meant to provide bit-for-bit precision. Really, they can't: your page will be viewed on phones, on phablets, on tablets, on laptops, on desktops, on giant wall-screens; it will be viewed in colour, in black-and-white, in high-contrast palettes; it will be viewed with the same fonts you have installed, and without them and in monospace; it will be read aloud; it will be copy-pasted; it will be saved as plain text or as HTML.

Just … don't do that. Resist the impulse to do everything one browser or two browsers will let you get away with; limit yourself to what works on all browsers.


The only real differences between Chrome and Firefox I notice regularly these days are subpixel rounding, input + button positioning in Firefox, and the styling of stuff like checkboxes, selects or radio buttons.

It is a PITA; a few weeks ago I implemented a responsive design for a wine website created by the company's print agency, who just vomited layers upon layers of images onto the thing. Zero whitespace. Still, it is nothing like it was 10 years ago. IE6 with its float bug simply destroyed the whole layout. Now it is more like "this button is 2px off in Firefox" or "here is some whitespace in Chrome".


Just a few days ago, I noticed a bug in IE11 that destroyed the layout of a site of mine that used absolute positioning on a div. It worked in Chrome, Firefox, Edge and on mobile browsers. Things break just like in the 90's unless you constrain yourself to a very small set of commands and topologies that are guaranteed to work.


Maybe if I had focused on just HTML+CSS, but alongside going to university and studying/working with other more predictable things like page tables or Django...nope.


Well, not anyone, we care :) Part of the reason I wrote this was to provide some ammo for people that want to go down this path for the right reasons, but don’t know how to argue for it.


> user’s company has a proxy that blocks some or all JavaScript

This one bites me every day. Websites with their social button / analytics scripts before the main feature JS become unusable if the 3rd party JS errors or fails to load.


I'm pretty sure some sites (cough-BoingBoing-cough) do it intentionally: no tracking, no content.


Is progressive enhancement still possible nowadays? Considering how many millennials choose JavaScript frameworks for most of their projects, I suppose not. It is very rare for me to find a website that works without JavaScript enabled, either because the website itself is written with JavaScript (which, by the way, doesn't sound like a good idea - try that with Lynx) or because they are using analytics services as dependencies for the rest of the code.


> Is progressive enhancement still possible nowadays?

It's certainly possible to write a site intending to progressively enhance it. It's easy, even.

It's not possible for end users to rely on it existing, because as you note lazy, sloppy developers don't bother.

I prefer to light a candle and curse the darkness.


From personal experience, for a number of websites I've had to drop curl and use a headless Chrome browser with ChromeDriver/Selenium/Python to get the data I need now.


This is quite progressive for the government.

Think of all the savings for the government - no need to update old hardware sitting in schools, libraries, councils, etc...


If anything this is a great example of what being agile is about. Start with the bare essentials and slowly add features, without breaking the characteristics that make the website nice to use such as speed and reliability.


Here's the original post that explains how they got the numbers: https://gds.blog.gov.uk/2013/10/21/how-many-people-are-missi...

Looks like this was removed after a few years: https://github.com/alphagov/frontend/pull/944

I'd like to see the data over that period and see what the JS stats are like today.


We’re working on revamping and rerunning the test, will hopefully have new data before the end of the year.


One small quibble: if you are going to take the time to implement progressive enhancement, why serve up incredibly large images? [1] Even sitting on an MBP with a high-speed connection, the initial download took some time from across the pond.

[1] https://gdstechnology.blog.gov.uk/wp-content/uploads/sites/3...


Oops. I’ll see if I can get that fixed.


I tweeted recently to say how I'd wanted to give some good feedback to @GDSteam, and the beta feedback link redirected me to a localhost URL.

I hoped this would be a quick and pleasant interaction, but - and I really should have guessed it - the Twitter 'operator' was clearly too far removed from the developers who could quickly fix it.


I work at GDS - that sounds broken, can you let me know where that link to localhost is?


I love that someone from GDS is here in the comments and cares enough to follow up on something like this. You guys do a great job. I hope this attitude exists across government departments!


I'm just back from this year's PyCon UK. There were quite a few GDS people there, they've attended for several years now in increasing numbers. All good geeks doing good work (and recruiting!).


    > and recruiting!
When I looked previously it seemed the department was hiring only for short-term contracts?

Which worries me a bit if that's the case - I think it's fantastic what and how they're doing it, but not if the philosophy is "ship it and forget".


Nope, go to https://www.civilservicejobs.service.gov.uk/ and put Government Digital Service into the organisation field. We’re currently recruiting developers / senior developers and web ops. No pure front-end developers at the moment though.

Re short term contracts, we hire for two year contracts but it’s not explained well that it’s trivial to extend those. We’re also trying to make roles permanent in general.


do you hire contractors, too?


We do, although that doesn’t go through a central system and is on a team by team basis. If you’re interested get hold of me on Twitter and I’ll see if I can point you in the right direction.


Yes it does :)

Awesome to see a reply here - I wondered but wasn't hopeful.

It appeared in the beta feedback bar across the top, after completing a rebate (requiring adding a new address, if relevant to the flow) - it then said something like "processing request, will be transferred to your bank in 2-3 days" and had the top orange bar "this personal tax site is in beta, give us your feedback here" or similar.

Clicking the feedback link from that page redirected to `localhost:<port>/correct/uri/to/feedback`. After exploring the site some more, I tried the feedback link again (different page) and it worked fine.

If memory serves, the port was `3157` - I'm not totally sure, but it may help your search.

Just to be clear - the feedback I wanted to leave was wholly positive!

Cheers :)


I’m asking around, suspect it’s an HMRC thing but I’ll pass it on if it is. Thanks for the report!


The giant 1MB image in this post should be fixed, goes against the rest of the principles.


This may not go down too well, but it is an opinion I heard and personally agree with (having worked in a UK gov institution, and one far from GDS): GDS is wasting taxpayer money, spending way too much attention on minute details like apostrophes, CSS and stuff. Given that most of *.gov.uk is standardised now, it works out to literally millions of GBP per line of CSS. In an ideal world we'd have a fully accessible web, progressive enhancement, responsive layouts etc, but any real business has to say stop - this does not improve our services that much anymore - whereas GDS are free to redesign their CSS over and over, just because no one else in the government understands technology.


No, GDS are tremendous value for money. If you think it's expensive you should see what would happen if it were oursourced to Capita.

> millions of GBP per line of CSS

This is a silly metric, because quite a lot of the effort is spent in reducing the amount of CSS. And that's how other projects get screwed up: silly metrics.

The government is also under a statutory obligation to be accessible.


If we want to provide Government services via the internet, which is far cheaper than other forms of delivery, then a setup that doesn't ignore the 0.9% of your user base who can't run JS is worthwhile. They should also care about accessibility, something that most businesses give zero thought to (despite legal requirements), because they're the Government: it is their job to work for all citizens, not to ignore the ones that are difficult.

There are some good tech people working in UK Government; there are also some utterly terrible ones. I'd far rather have GDS spending the odd million to provide a baseline of what we should expect from Government IT than put up with the culture that has foisted every other Government IT disaster on us.


What about the time saved, in terms of phone calls and interactions with the government, simply through information being easy to find? Gov.uk is so easily traversable and it's very easy to find information. You guys are so lucky. My government's website is nothing like that and I struggle to find anything. Simple things such as the use of the word 'etc' to describe things needed on an application, or wording which is difficult to understand.


This is the UK. We have a wide variety of terrible national level IT projects provided by private companies.

There's the post office stuff, where errors made it look like sub-postmasters were stealing stuff. Several of those people died by suicide; others went to jail, before the company accepted their software was wrong. http://www.bbc.co.uk/news/uk-23233573

http://www.bbc.co.uk/news/uk-32377013

There was the software for the probation service, which was abandoned after the budget tripled: http://news.bbc.co.uk/1/hi/uk_politics/8339084.stm

And there's the health stuff, which cost billions.


My initial reaction to the post was similar - but remember that government aims are different from the business world's, and funding works in a very different way. I've also worked in government and it does sometimes feel like money is being wasted everywhere, which I found frustrating - but I've not managed to come up with a valid argument against the way they do things. I think setting up a behavioural comparison between businesses and government is a false opposition.


I definitely get the sentiment but the government has different imperatives than business, and that includes making services available to a small fraction of users at greater expense than "normal" users.


> GDS is wasting taxpayer money, spending way too much attention on minute details like apostrophes, CSS and stuff.

You're getting downvoted because everyone already knows governments waste taxpayers' money—thus your comment is redundant.



