Breaking the Web with hash-bangs (isolani.co.uk)
41 points by tswicegood on Feb 8, 2011 | 14 comments



We've had hashbangs in Unix for 30 years, and while they were never entirely perfect (a bit of a violation of the kernel/userspace boundary going on; issues with suid), they've been good to us.

It's really annoying that this good old thing is now associated with flaming JavaScript evil.


Maybe I'm explaining the obvious here, but there is a reason people do this. If you have built your site so that the content is generated by JavaScript running on the client, then you need some way of making the page's current state visible so that it can be bookmarked. Without full adoption of HTML5, that means you need to modify the URL. And the only part of the URL you can change without forcing a pageload is the part after the #.

So you need to do this if you want the user to be able to bookmark pages whose appearance depends on JavaScript running. Think of it as the equivalent of "GET", but on the client: when the page loads, the JavaScript reads what is after the # and re-generates the content.
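
Roughly, the pattern looks like this (a minimal sketch; navigateTo and renderPage are illustrative names, not any particular site's actual code):

  // When the user navigates, store the client-side state in the
  // fragment so the page can be bookmarked (no reload is triggered).
  function navigateTo(path) {
    window.location.hash = '#!' + path;
  }

  // On load (and on back/forward), read the fragment back out and
  // regenerate the content -- the client-side equivalent of a GET.
  function restoreFromHash() {
    var hash = window.location.hash;    // e.g. "#!/username"
    if (hash.indexOf('#!') === 0) {
      renderPage(hash.slice(2));        // illustrative renderer
    }
  }
  window.onload = restoreFromHash;
  window.onhashchange = restoreFromHash;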

Of course, that then breaks search, so you need #!, which is a convention that allows server-side code to give search engines the same thing that you are showing your user. But that doesn't mean that every use of # to store client-side state is automatically crazy or stupid or trivial to avoid.
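
(For reference, the convention in question is Google's AJAX crawling scheme: a crawler that sees a #! URL rewrites the fragment into a query parameter the server can actually receive:

  User-visible URL:  http://example.com/#!/username
  Crawler requests:  http://example.com/?_escaped_fragment_=/username

The server detects _escaped_fragment_ and returns an HTML snapshot of what the JavaScript would have rendered.)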

Having said that, making a system that displays a blank page when JavaScript fails is pretty dumb.


I could not figure out what this article was trying to say. Either I'm more tired than I think I am, or it's really badly written.


The problem is that the meaningful part of the URL is only sent to the server via JavaScript. Your browser doesn't normally send the fragment (the part after the #) when you access a page.
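
For example, when you visit http://example.com/#!/username, the only thing that goes over the wire is:

  GET / HTTP/1.1
  Host: example.com

Everything after the # stays in the browser; the server never sees "/username" unless JavaScript sends it along in a separate request.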

That means there's no fallback for when the JavaScript breaks (which just happened) or for crawlers (except Google), browsers without JavaScript, etc. The site has absolutely no content without working JavaScript.

The author asserts that this also breaks caching. (AFAICT, more analysis would be needed to support this, because the extra request the JavaScript makes may very well be correctly cacheable.)


Thanks. What got me was that for some reason I couldn't figure out whether I was being told why the author thought #! was a bad idea, or why Lifehacker had thought it was a good idea.


That #! URLs are ugly.

I can understand the technical necessity for them, but I have no idea why you would want to redirect from example.com/foo => example.com/#!/foo -- do it the other way around instead, and then the #! becomes an implementation detail of your site's internal AJAX-based navigation rather than a public URL change.
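
Something like this, say (a rough sketch, assuming the server can already render example.com/foo on its own; loadViaAjax is a made-up name):

  // Markup points at real, server-rendered URLs:
  //   <a href="/foo" class="ajax">foo</a>
  // Where JavaScript is available, hijack the click and do the
  // hash-based navigation as a purely internal detail.
  document.onclick = function (e) {
    e = e || window.event;
    var link = e.target || e.srcElement;
    if (link.tagName === 'A' && link.className === 'ajax') {
      window.location.hash = '#!' + link.getAttribute('href');
      loadViaAjax(link.getAttribute('href'));  // made-up loader
      return false;                            // cancel the full pageload
    }
  };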

Actually, for a company whose mainstream success can be attributed mainly to citations in newspapers and the media at large, I find it surprising that Twitter chose such an ugly URL scheme, given the obvious difficulty of typing URLs you've seen offline that include #!-style warts.


I think having a consistent permalink helps to encourage better linking. The more redirects you have before a link reaches its destination URL, the less juice you get. From a "going forward" perspective it makes some sense to enforce the hash-bang.


Why did the hash bang make sense at all though?

What was wrong with twitter.com/username that hash bangs somehow fix?


I guess the main technical reason is that if you're on one page and click through to another, you want the URL to change, so that copying and pasting the link gives the URL of the page you're actually on. If you had proper separate links to the other page, I assume updating the address would trigger a new HTTP request?

In terms of just loading the one page, I don't see any reason why you couldn't use the normal URL structure and have the JavaScript get what it needs for an AJAX request from that.


You also want it to change so that you can add to the browser history -- so the back button will work.

It's important to note that the hash is actually an outdated method now. HTML5 introduced replaceState and pushState, which allow rewriting the part of the URL after the domain with JavaScript. So no more hashes in the URL, except for page anchors.
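
For instance (renderPage is illustrative; e.state is whatever object you passed to pushState):

  // Change the real path without triggering a reload:
  history.pushState({page: '/foo'}, '', '/foo');  // URL is now example.com/foo

  // Back/forward fire a popstate event carrying the saved state:
  window.onpopstate = function (e) {
    renderPage(e.state);  // illustrative renderer
  };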


True, though the hash method works everywhere with few changes; if you go with the new HTML5 method, you need to support both it and the hash method for IE7, etc.


Of course, you would just use an abstraction that detects whether the feature is available and falls back to hashes. Feature detection and graceful fallbacks are par for the course with JS.
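
i.e. something along the lines of (a sketch; updateUrl is a made-up name):

  function updateUrl(path) {
    if (window.history && history.pushState) {
      history.pushState(null, '', path);     // modern browsers
    } else {
      window.location.hash = '#!' + path;    // fallback for IE7/8 and friends
    }
  }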


The purpose and meaning of the hash-bang convention is that there is a "real" url that corresponds to this "javascript" url. That has a lot of advantages that the post does not mention.


Yeah, but the post is questioning the point of turning every page into an AJAX call.



