This isn't new, and it's not as simple as not letting the browser change the URL during the click event. This swapping of addresses is more complicated than it even needs to be; all we need is:
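(A minimal sketch of what that looks like - the URLs below are just placeholders:)

```html
<!-- The status bar shows google.com on hover, but the handler swaps the href
     the moment the link is clicked, so the navigation goes elsewhere. -->
<a href="http://www.google.com/"
   onclick="this.href = 'http://example.com/somewhere-else';">
  www.google.com
</a>
```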
And this is a valid use case - perhaps we're in a web app and if the user clicks the link, we want to perform an AJAX call for the data instead of the normal link action - unless they're opening the link in a new tab, in which case it should load as usual (from the URL given in the link). So this isn't really an easily fixable problem.
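A sketch of that pattern, for reference - the endpoint, element ids, and handler details here are made up for illustration, not anyone's actual code:

```html
<a id="data-link" href="/reports/42">View report</a>
<div id="content"></div>
<script>
  // Hypothetical progressive-enhancement handler: plain left clicks are
  // intercepted and the data is fetched via AJAX; modified clicks
  // (ctrl/cmd/shift) and non-left buttons fall through, so "open in new tab"
  // still uses the real href.
  document.getElementById('data-link').addEventListener('click', function (e) {
    if (e.ctrlKey || e.metaKey || e.shiftKey || e.button !== 0) return;
    e.preventDefault();
    fetch(this.href)                                   // same URL the link points at
      .then(function (res) { return res.text(); })
      .then(function (html) {
        document.getElementById('content').innerHTML = html;
      });
  });
</script>
```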
Agreed. The ability to do something other than what the href implies is a small part of JavaScript's bigger purpose: To extend the capabilities of web pages beyond the basic functionality of a browser. It's what makes the browser more than just a dumb document reader, and it's a big part of what makes the web attractive as a platform for app development. We shouldn't chip away at it.
Rather than crippling JavaScript, we should focus on implementing a different security principle: Merely visiting a web page should never be dangerous or harmful. This has always been a goal of browser developers. Browsers run in a sandbox for this reason. Although they will execute arbitrary, untrusted code (i.e. JavaScript), by design that code has no access to your hard drive, other web pages you visit, etc.
Were this principle implemented perfectly, the OP's concerns would mostly be alleviated. Sure, some people will still download and run malware EXEs, or enter their private data on a phishing site. But those who do will probably not benefit much from hovering over a URL anyway. If you don't know better than to hand the keys to the castle to random web pages, you probably don't know how to spot a suspicious URL.
As a Linux user, malicious exe-files don't concern me much, but random jokesters trying to get me to click on goatse links do. It especially sucks when you're at work and you thought it was a link to a page describing matrix multiplication. I think that a modern browser should protect you against that. After all, my economic loss is likely to be larger if my boss sees me staring at a bleeding anus than if I catch a malware infection. Just like browsers these days detect malware sites, they should also detect shock sites.
Granted, but does the hover URL do much to protect you from offensive content? It sounds like you're a technically sophisticated person who's worried about being pranked by other technically sophisticated people. Therefore, I assume they are capable of using a URL shortener or a misleading file name to trick you. Seeing the URL ahead of time doesn't help in these situations, unless you simply refuse to click on links whose destinations you can't identify with certainty.
As a Linux user I am more worried about malicious elfs and dwarfs than malicious exes.
Seriously, though; while Linux seems to typically be less of a target, and thus a malicious executable is less likely, the level of security that things like running as a non-root user buy me isn't that huge on a desktop machine. Most of the interesting things are things that my non-root user can read and write, by necessity and design. With a proper SELinux setup, this might change a bit, but that's not "simply" a matter of running Linux.
The main shock sites are all clones of each other, so if you filter out hello.jpg and whatever the .js scripts of a typical Last Measure instance are called, you've killed most of them :)
Who cares if your boss sees you click on a goatse link? He's probably been pranked before; he knows how it works. Are you afraid that he thinks you're a goatse aficionado? If so, you've got bigger, wider problems.
Linking is the foundation of the Web. The fact that you can no longer build a website without subverting even the most basic of browser functions via custom code is a sign that something somewhere is broken.
The sad part is that a lot of web developers cannot even imagine a different architecture that wouldn't have these problems. The discussions are mostly about more of the same. More permissions. More low-level APIs.
While linking is the foundation of the web, I would say that the web has grown far beyond its foundations. Nowadays, web technologies are not just a way to share a set of linked documents. They're a platform for app development. The browser is starting to resemble a miniature operating system. Thus the demands for "more permissions, more low-level APIs." You may feel philosophically that this is the wrong direction for the web, but many folks (myself included) are excited about it.
You haven't posted any information I wasn't aware of beforehand. Yes, things change, and the way we use the web changes too. That's exactly the problem. Instead of just "going beyond" the foundation, it would be a much more sensible approach to expand it, i.e. change what's possible with core HTML/HTTP.
> Linking is the foundation of the Web. The fact that you can no longer build a website without subverting even the most basic of browser functions ...
I don't believe that's a fair assessment. A bit of javascript that "subverts" the href of an anchor tag to perform some task _other_ than loading the page at that href is functionally indistinguishable from having that same task performed _after_ the said href page is loaded.
Sure, the technical side of this is indeed very different. One is an ajax call plus some DOM manipulation. The other is loading a different page, in an entirely different execution context. But to an end user, a link is something that you click on, and something happens. What happens is also quite predictable (provided the site/web app is designed with usability in mind). I don't think something is fundamentally broken at all.
Is it just me, or can't you just leave the ", false" part off?
And to the people that use "return false;" in their JS to prevent default actions from happening, i.e. a click on a link or submitting a form: don't - stop using it. e.preventDefault() is the way to go, as posted by timothya.
Yes, you can. The boolean just denotes whether the event is listened for during the event capture phase or the event bubbling phase. Makes no difference in this case.
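For anyone following along, both points in a tiny sketch (the element id is made up):

```html
<a id="plain-link" href="http://www.example.com/">Example</a>
<script>
  // The optional third argument to addEventListener selects the capture phase;
  // leaving it out defaults to false (bubbling), which is fine here.
  document.getElementById('plain-link').addEventListener('click', function (e) {
    e.preventDefault();   // preferred over "return false" for cancelling the navigation
    console.log('link click intercepted');
  });
</script>
```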
I work for a startup that uses an HTML5 music player on its site, and we use this trick to load all of our pages with AJAX so the player doesn't get interrupted by refreshes.
I expect it to work by capturing the page unload event? Or perhaps capturing just the ctrl-r keystroke? I'd like to hear about how this _could_ work sensibly too.
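One common way this kind of thing is done (a sketch, not necessarily what the parent's site actually does; the content container id is made up) is to intercept link clicks rather than reloads: cancel the click, fetch the next page over AJAX, swap it into the DOM, and use history.pushState so the URL and back button still behave. The audio element is never torn down because the page itself never reloads.

```js
// Hedged sketch of pjax-style navigation for same-origin links.
document.addEventListener('click', function (e) {
  var link = e.target.closest('a');
  if (!link || link.origin !== location.origin) return;
  e.preventDefault();
  fetch(link.href)
    .then(function (res) { return res.text(); })
    .then(function (html) {
      document.getElementById('content').innerHTML = html;  // made-up container id
      history.pushState({}, '', link.href);                 // keep the address bar honest
    });
});

// Back/forward: re-fetch whatever URL the restored history entry points at.
window.addEventListener('popstate', function () {
  fetch(location.href)
    .then(function (res) { return res.text(); })
    .then(function (html) {
      document.getElementById('content').innerHTML = html;
    });
});
```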
Isn't that what Google does? Try hovering over a search result, and then try copying the URL. It's annoying as hell, btw, especially when you want to copy-paste a link somewhere.
I'm sure I've successfully copied links directly from Google's search results before, but I've only recently (past few months?) noticed the URL mangling.
I remember when it was not the case (and I remember a period when say 90% of the time you just got straight links, 10% you got redirect links with no Javascript trickery hiding them). But yes, it has been this way for years.
Most of the time though the issue is we don't trust links in comments/posts on websites we otherwise trust. Those websites don't allow people to inject javascript code in their comments (but they often do allow people to change the link text).
Of course, once you're willing to run arbitrary javascript code you can't trust any mouse click anymore. Doesn't matter if the click is on a link or a button or anywhere else. Disallowing href attribute modification doesn't help one bit.
So the "hover" hint you get for links is still highly useful, especially on sites you trust.
Forget the Javascript tricks; the idea that an ordinary person can look at a URL and decide whether it is safe to open is ridiculous. Even if the browser could show people with 100% accuracy what URL would open, that still doesn't give users enough information to know if it's OK to click on it.
I think what bugs people is that the browser could be seen as tricking the user, even lying to the user; not whether the clickable link is bad but whether we should trust what the browser tells us in a very fundamental way.
Hi HN, so that's my site, and well, I'm feeling pretty dumb right now. I really didn't think that post through--honestly I put more thought into carving pumpkins yesterday. So, here's my retraction that I added to the top of the post:
So, sometimes I am wrong. This attack does work, but it’s irrelevant, and here’s why: if someone has control of the DOM the game is already over, there’s nothing the browser can do for you in that case. It doesn’t really matter that the hover-status can be spoofed at that point. I’ll leave the post up so you can marvel in my stupid, but to summarize–nothing to see here. (At least I’m not throwing banner ads at you.)
If it's any consolation being dumb has a price :) I don't try to make any money off that site. No ads, no syndication, no attempts at revenue whatsoever. My bandwidth bill is gonna hurt this month, so if you don't mind down-voting the link, I'd be pretty appreciative.
Well, it says a lot for you that you admitted your mistake. Hope you will become a regular HNer :) I'm actually quite surprised your post got so many up votes considering this community is mostly dominated by web developers.
"I don’t know why this simple attack is allowed to work … browsers should not allow the href to be modified on a link with the onClick handler."
It is often necessary to attach click handlers to anchor tags. For example, let's say you have a link to an image, but when a user clicks on that link, you don't want to navigate to the raw image - you want to open it in a shadowbox. But if someone right-clicks and copies the link's URL, that will take them to the raw image.
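A rough sketch of that pattern (the paths are placeholders and openShadowbox is an imaginary stand-in for whatever lightbox library you use):

```html
<a class="thumb" href="/images/photo-large.jpg">
  <img src="/images/photo-thumb.jpg" alt="Photo">
</a>
<script>
  // Left-clicking opens an overlay; right-click "copy link address" and
  // middle-click still use the real href to the raw image.
  document.querySelector('a.thumb').addEventListener('click', function (e) {
    e.preventDefault();
    openShadowbox(this.href);   // hypothetical function standing in for your lightbox
  });
</script>
```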
In terms of browsers "allowing" this to happen - anything beyond the ham-fisted approach of just disallowing click handlers on all anchor tags would be prohibitively complicated.
It is something worth being aware of though - just because that link says it's taking you to Google doesn't mean it won't take you somewhere less savory!
Really not worth a discussion, as this functionality has probably been around since the first web browser implemented JavaScript. I don't know anyone who actually pays attention to that "hover" target preview anyway. On the other hand, preventing this is not a solution. It would break patterns for AJAXified navigation.
This is an old trick people use to hide affiliate links and redirects via tracking URLs, for example. It's not intrinsically evil as it's providing a way to keep things working even if JavaScript isn't on, or the site is visited by a bot, for example. It's really the basis of graceful degradation, unobtrusive JavaScript, and the foundation of libraries like jQuery.
One interesting lesson here comes from reversing the logic: what if you have an Ajax action triggered by JavaScript? My point is, you should generally trigger actions from linked content instead of random controls like buttons (you can of course place a button inside a link or style the link as a button). This way, bots and non-JS browsers will still be able to follow it.
What attack is "hovering" trying to prevent, phishing or executing a malicious page?
If phishing, it's much more important to look at the URL after the page is loaded. URL shorteners already obscure the actual destination much of the time.
If malicious pages, well, if an attacker can present a link with a JavaScript onclick handler they can most likely already inject an iframe or redirect you to a malicious page.
Well, to satisfy all the Javascript Übermensch who are eagerly trashing the hypertext foundations which made the web so successful, maybe browsers should add some kind of hover styling to signify that it is a bullshit link that is going to do secret magic stuff in the background, like tell thousands of advertisers that you just clicked that bullshit link.
No, the malicious page could simply send you an HTTP redirect to achieve basically the same result (redirecting you to a URL you didn't intend to go to) without Javascript.
RequestPolicy also protects against this by asking the user for confirmation before visiting any new domain unless the request was initiated by the user. In this case, RequestPolicy would detect that the request for a URL on a new domain was generated by JavaScript.
So even if you enable JavaScript on a site, RequestPolicy is a good second line of defence.
I was thinking the same thing. It would seem that to do what he's suggesting you would have to prevent any interaction between the anchor tag and javascript. You can trigger window.location.href on hover for gosh sakes.
The hint itself: That's complete nonsense and doesn't improve your security one bit. The only reason why you might want to do something like this (at work) is if you don't want to follow non-worksafe links. And even then - what are you going to do if it's a tinyurl or goo.gl link? Just not follow it?
The attack: This completely misses the point. If a site wants to mess with you it doesn't need a link for that, it can just directly execute Javascript code and do whatever it wants to do.
Middle-clicking doesn't execute the `onclick` event in some browsers. Here, though, right-clicking and hitting "open link in new tab" opened Google, but middle-clicking executed the `onclick` event.
Assuming the page you are already on is compromised/malicious, then its baiting you into clicking through to another compromised/malicious site is not your only worry.
Edit: Not sure why the parent was deleted, but it basically stated that in Chrome and Safari, the mouseover URL pointed to the same URL as is set in the 'tricked()' method.
It's interesting, though, that they bother to resolve the click-event URL - so, in a sense, they're evaluating the code before the click event fires and the code is actually executed.
I'm not a security guy, but there seems like a potential for exploit there, though I'm guessing there's probably a very good explanation why there isn't.
Hovering is showing 'www.google.com' for me in Chrome 22.0.1229.94 so I'm guessing the comment is incorrect (or only correct for some unknown version of Chrome). I don't have a mac so I can't test Safari.
I had a totally different comment plotted out before realizing that the link actually points to the demo URL (which previews 'google.com' and directs to the malicious URL on click, as expected.)
In summary, I am wrong, and you are right. Thanks for pointing it out, as I was genuinely confused there for a moment.
I might be wrong, but I believe Facebook does this on any news article linked by someone in your feed. It does so to load up the relevant publication's app upon clicking a link, even though that link "appears" to be a deep link to the article when you hover over it.
Like with so much on the web, with great power comes great responsibility. However, that is not super reassuring.
Most of the time I have seen sites do this so that the link is clear and easy to read for the user, while adding utm_campaign codes and other junk that the user does not care about.
However I am sure some sites use this behavior to .... trick users.
So we need to ask ourselves which is the greater good?
There might be a UX solution here in how those links are shown, and maybe there should be some rules about HTTPS links in pages. Maybe users could set an option so that if the link is changed on click, it would not automatically go to the site but would prompt the user, or at least inform them on the landing page with some form of notification.
Well, mainly because you can only touch/click. You can replicate a hover, though: say the hover goes on when you touch, and goes off when you release.
Yeah, the hover trick is only recommended for looking at URLs in e-mails before clicking them (no sane client would execute JavaScript contained in an e-mail), but really you should just never click a link in an e-mail anyway and open up a new window and type in the URL of the website.
Open a new browser window and type in PayPal.com rather than clicking any links in that "You need to update your information or your account will be shut down" e-mail.
Once you're actually on PayPal.com (or whatever), looking at the hover URLs is pointless.
Well, Google search has been doing this for ages. If you hover over a search result link it shows the URL of the page, but if you click on it, it goes to a Google URL which then redirects to the URL you saw. Google does this so that they can keep track of which links you click. Other search engines probably do the same; I haven't verified, though.
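The mechanics are roughly this (a sketch with a made-up tracking URL, not Google's actual markup): the href shown on hover is the real destination, and a mousedown handler swaps in a redirect just before the click lands.

```html
<!-- Hovering shows the real article URL in the status bar; pressing the mouse
     button rewrites the href to a (fictional) tracking redirect. -->
<a href="http://example.com/article"
   onmousedown="this.href = 'http://tracker.example/redirect?url=' +
                encodeURIComponent('http://example.com/article');">
  Example article
</a>
```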
It's not a bug, it's a feature. What you should be worried about is what http://www.example.com can do - can it execute arbitrary code, post to facebook/twitter, etc.?
I seem to remember some recent articles on phishing indicating that the assailants are not targeting the brightest among us but rather those easily fooled. While this technique is alarming to those who know the mouse-over technique, I doubt it will apply to your average "mom and pop" phishing scheme.
That's actually a slightly convoluted way around it - you can also use JS to cancel the browser's "normal" action for an element and have it do something else - see the jQuery example: http://api.jquery.com/event.preventDefault/
If you want to be even more shady you can use the 'onmouseover' event with the same redirect as above. Which defeats the look before you leap approach of hovering before clicking.
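Something like this (placeholder URL) - merely hovering navigates you away, so there is nothing left to inspect before clicking:

```html
<a href="http://www.google.com/"
   onmouseover="window.location = 'http://example.com/not-google';">
  www.google.com
</a>
```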
Hold on guys, can someone explain to me what the browser is doing wrong here? I mean, the source says it will take me to /utils/onclick.html, which is exactly what the tooltip says as well. Please help me understand?
Did you already open the link before that screenshot?
The script the OP uses replaces the href, so once you click it, it'll reflect the true destination.
As others have mentioned, you can get around that by doing a redirect and cancelling the link action instead. Then it will always show the fake target.