
HTML5 "AJAX History", also known as History.pushState, can solve this problem. It allows a website to update its contents with AJAX, but change the URL to a real URL that will actually retrieve the proper resource direct from the server, while maintaining proper back-forward navigation.

See <http://dev.w3.org/html5/spec/Overview.html#dom-history-pushs...> for spec details.

It's in Safari, Chrome, and Firefox. While Opera and IE don't have it yet, it would be easy to use it conditionally in browsers that support it. I'm a little surprised that more sites don't use it.
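Detecting support is trivial, so the fallback to normal links costs almost nothing:

    if (window.history && history.pushState) {
      // enhance navigation with AJAX + pushState
    } else {
      // do nothing: plain links and full page loads still work
    }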

EDIT: This chart shows what browsers it will work in: http://caniuse.com/history




It's really great that in a few years, browsers will support a new AJAX technology that solves this problem that we wouldn't even have with sane, traditional URL schemes.


Maybe you're trying to be snarky, but I'll choose to take your comment seriously.

The AJAX approach to Web apps does provide a genuine user interface benefit. A full page load is very disruptive to application flow, and being able to have new data appear without incurring that penalty is great. Most of the time you only need to load a little bit of data anyway, and it's wasteful to reload all the markup that wraps it.

AJAX solves that problem, but it creates the new problem that your address field no longer properly reflects the navigation you do inside a web app. #! URLs are one approach to fixing it, and pushState will do even better. At that point, the user won't even have to notice that the Web app they use is built in an AJAXy style, other than the navigation being smoother and faster.
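For contrast, #! routing has to do all its work client-side, because the server never even sees the fragment. Roughly (again with a hypothetical fetchAndRender):

    // The server always returns the app shell; the client reads the
    // fragment to figure out which resource to actually fetch.
    window.onhashchange = function () {
      var path = location.hash.replace(/^#!/, '');  // "#!/foo" -> "/foo"
      if (path) fetchAndRender(path);
    };
    window.onhashchange();  // handle the fragment present on initial load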


A full page load is very disruptive to application flow, and being able to have new data appear without incurring that penalty is great

For many sites I don't see any real disruption to application flow with just using normal links, though for more full-fledged web apps (like Gmail) I would agree. Playing around with old vs. new Twitter, the old one actually has considerably faster navigation, at least on my setup (a recent Chrome on a recent MacBook Pro). Sure, some HTML header/footer markup is retransmitted, but it's not very big.


I would put it down to there finally being a distinction between web-sites and web-apps.

Within an app, my current context (which control has the focus etc) is important and a full page reload loses all of that.

Within a web site, as it is less interactive, this stuff doesn't matter so much.

Whether New Twitter is a site or an app is debatable (I say site, and therefore it shouldn't be using #!). And as for Gawker...


"Most of the time you only need to load a little bit of data anyway" - that's highly questionable as a general statement. In a rich UI like GMail, yes. But in examples like the new Lifehacker, you load a whole story, yet its locator is behind the hashbang.

Not every website is a web app. Just show one article or item or whatever the site is about under one URI.


That might be the case for Lifehacker (don't know; don't use it). But the example in Tim Bray's post is Twitter, which definitely needs to load a lot less data than its full interface markup on most navigations.


Lifehacker kind of looks nicer only reloading the story and not the whole page. It gives things an application feel rather than a collection of pages, and it saves a heap of extra processing: why run the code again to generate the header, footer, and sidebars when the version the user is seeing is perfectly up to date?


The experience is slicker: if you run a search on Lifehacker, you can click through and browse the results without affecting the rest of the page (including the list of results). With traditional page refreshes this would not be possible.


Is it ready for the mainstream?

Apart from not being supported in IE, the browsers that do support it still have quirks, e.g. your code has to manually track scroll state.
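E.g. if you want back/forward to restore the scroll position, you have to stash it in the history entry yourself. A sketch (fetchAndRender is a stand-in for the actual XHR/render code, which in reality is async, so you'd restore scroll after rendering finishes):

    // Record the scroll offset in the current entry before leaving it.
    function navigate(url) {
      history.replaceState({path: location.pathname,
                            scrollY: window.pageYOffset}, '', location.pathname);
      fetchAndRender(url);
      history.pushState({path: url, scrollY: 0}, '', url);
    }

    window.onpopstate = function (event) {
      if (!event.state) return;
      fetchAndRender(event.state.path);
      window.scrollTo(0, event.state.scrollY || 0);
    };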


This solves two problems: (1) the only visible/bookmarkable URLs are those without a #!; and (2) initial page loads can be fulfilled by a single request to the server. It doesn't solve the problem of URL discovery, but two out of three ain't bad.


Not sure what you mean by URL discovery; the link hrefs should be the same as the legacy URLs anyway. What you could do is progressively enhance normal links. JavaScript could disable the default behavior of a link like <a href="/about/team" data-remote="true">About Team</a> and check whether the browser supports history.pushState. If it does, it would just request the appropriate content for /about/team, update it client side, then update the URL. If pushState is not supported, it would just request /about/team normally. This would be the ideal way to support both regular and progressively/JS-enhanced pages (for speed).
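A sketch of that enhancement, using the data-remote convention above (fetchAndRender is a made-up helper that XHRs the fragment and swaps it into the page):

    document.addEventListener('click', function (event) {
      var link = event.target;
      if (link.tagName !== 'A' || !link.getAttribute('data-remote')) return;
      if (!(window.history && history.pushState)) return;  // plain navigation

      event.preventDefault();
      var url = link.getAttribute('href');
      fetchAndRender(url);                      // fetch just the content
      history.pushState({path: url}, '', url);  // address bar shows /about/team
    });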


Well, it "solves" it: you still have to download and parse a ton of JavaScript before you even begin downloading the data...


CDNs make the download part much less of a problem.

And your server could easily send a fully rendered page on the first page load when it receives a full URL (one made by pushState and linked elsewhere), and still load subsequent pages via XHR. So it wouldn't have to parse any JS on the first load; subsequent loads would, but they'd save time by not downloading as much and not refreshing the entire page.
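On the server that's just content negotiation on the same route. A sketch, assuming an Express-style Node server (route and template names are made up; req.xhr checks the X-Requested-With header):

    // One route, two renderings: full page for direct hits,
    // bare fragment for XHR navigations.
    app.get('/about/team', function (req, res) {
      if (req.xhr) {
        res.render('team_fragment');   // content only
      } else {
        res.render('team_page');       // header + footer + content
      }
    });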



