Javascript exploit actively used against TorBrowser (torproject.org)
314 points by secfirstmd on Nov 29, 2016 | 132 comments



If TBB leads want to run Firefox with JavaScript "default on", then Tor Browser Bundle needs to be messaged as insecure. Either that or turn on NoScript and inform people what bad shit can happen when their browser is interpreting arbitrary code in a not-so-sandboxed manner. TBB is not a solution against targeted deanonymization attacks.

This is neither the first nor the last 0day in Firefox that will affect TBB.

IMO the best practical mitigation against these attacks is sandboxing with an amnesic system like Tails, as even as a VM it will leak a lot less information about the machine it is running on, and it requires burning both a Firefox 0day and a VM escape to get any real information beyond the user's real IP address and some basic things out of /proc (although Tails may protect against the latter now). Also, since the whole VM goes away when it's closed, you're not getting persistence on that machine just by popping the browser.

A 30 second glance at the source code makes it look like this exploit pivots to attacker-controlled memory on the heap, and spawns a thread using kernel32.dll. As EMET has hardening against attacks like this, I am curious whether this exploit works at all on EMET-enabled Windows systems.


Unfortunately based on my experience training activists/journalists all over the world, the average user at risk in the field struggles to use TAILS.


What's their biggest struggle with it?

PS, if you're ever on the US West Coast or in Singapore, drop me a DM. I'll buy you a drink someplace.


Honestly? Many things:

-It's tricky for a non-technical user to setup

-It disrupts their regular workflow

-People get frustrated with speeds of Tor etc

-People get frustrated with Captchas (damn Cloudflare!) and other obstacles caused by using Tor in a safe manner.

-People get annoyed as it doesn't solve their problems and exposure on mobile

-You have to restart to run it

-They can't run their regular programs on it - MS Office, Outlook, Adobe, etc.

-It's Linux based, so a big mental jump for most people coming from Windows (or most people not at the command line on OS X)

-It's hard to access files on other drives

-Documentation and TAILS itself are only available in certain languages

-It often has driver problems - e.g. MacBook Pro 2015 Wi-Fi issues

-In developing states, computer literacy is low, so anything other than the norm (Windows) is confusing

-In developing states, hardware tends to be slow (often counterfeit) so running TAILS in RAM is slow

-People lose the USB sticks they put TAILS on (also many counterfeits, so they often fail or have a false size)

-TAILS often requires training, which not everyone has access to

-Skills fade after digital security training for journalists/activists is often quite pronounced, especially if they don't need the skills that often.

-the list goes on............
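One of those bullets, counterfeit USB sticks with a false size, is at least cheap to detect before trusting a drive with TAILS. A minimal sketch in Python (the block size and probe spacing are arbitrary choices of mine, and you should point it at a raw device only if you're happy to destroy its contents):

```python
import hashlib
import os

def check_capacity(path, claimed_size, block_size=1 << 20, stride=16):
    """Probe a drive (or file) for silently-dropped writes, the usual
    counterfeit-flash failure mode: the controller reports a large size
    but wraps or discards data past the real capacity.

    Writes a distinct seeded block every `stride` blocks across the
    claimed size, then reads them back. Returns True if every probe
    survives intact."""
    probes = range(0, claimed_size // block_size, stride)

    def block_for(i):
        # Deterministic, position-dependent data, so a wrapped write
        # (block N landing back on block 0) is detected, not just zeroes.
        seed = hashlib.sha256(b"probe-%d" % i).digest()
        reps = block_size // len(seed) + 1
        return (seed * reps)[:block_size]

    with open(path, "r+b") as f:
        for i in probes:
            f.seek(i * block_size)
            f.write(block_for(i))
        f.flush()
        os.fsync(f.fileno())
        for i in probes:
            f.seek(i * block_size)
            if f.read(block_size) != block_for(i):
                return False
    return True
```

A genuine drive passes; a stick that reports 32 GB but physically holds 4 GB fails once the probes cross the real boundary. Tools like f3 do this properly, but the principle fits in a page.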

Don't get me wrong, I think it's great (as are Qubes, Subgraph etc.) but we need to be realistic about its limitations for the majority of people. Especially if we want to be sensible and try to tailor advice, training and tools to their realistic threat models.

P.S. Definitely, I'm on the West Coast probably once a year. Ditto you if you're ever in Dublin (Ireland, not that fake one in California :) or London!


What's their threat model, then?

If you're going to be actively targeted by exploits like this, then you shouldn't give a damn about some of these tradeoffs. If you have journalists/activists willing to go to information war with a nation-state, they shouldn't be surprised when their adversaries have the resources to pwn them. If David wants to fight Goliath, he needs to take this into account. The guys with guns and power want their heads.

If the activists/journalists/whatever don't want to take the necessary precautions to use computers to talk to people in a way in which they are protected from capable adversaries, then I'm not sure what it is that they are expecting.

Also, I was recommending these users simply defend in depth by using Tails as a sandbox for a leaky browser, to require a chain of expensive exploits (browser 0day + VM escape 0day). Training someone to install VirtualBox or VMware and run an ISO from it doesn't really disrupt much workflow and defends in depth against the browser issues; then again, I am probably wrongly assuming VT-x/VT-d and a lot of RAM on the activists' computers.


> If the activists/journalists/whatever don't want to take the necessary precautions to use computers to talk to people in a way in which they are protected from capable adversaries, then I'm not sure what it is that they are expecting.

I get what you're saying, but humans are humans; journalists are busy and security is a pain for most people (until they need it).

The threat model really depends, and so is too wide for me to make any sweeping statements. Clearly a US journalist working on drone strikes has a different threat model from a democracy activist in Sudan.


>> I get what you're saying, but humans are humans; journalists are busy and security is a pain for most people (until they need it).

...until they see they need it.

Sad, as most people may not even know that they need to go to these kind of depths. For them, TBB seems more than enough.


Couldn't agree more.


> If the activists/journalists/whatever don't want to take the necessary precautions to use computers to talk to people in a way in which they are protected from capable adversaries, then I'm not sure what it is that they are expecting.

The problem with this mentality is that often it's not themselves they're protecting but others. If Alice and Bob are communicating, and only Alice is under threat, Alice may be willing to go to great lengths to make her side secure, but you really need to make it as easy as possible for Bob, who has less of a direct incentive to overcome the inconveniences.


Yep. That's the beauty of Signal, or of WhatsApp's use of the Signal Protocol. You're making it much easier by meeting the person-at-risk on a platform where they already are, as opposed to asking them to use something like Pidgin.


While that's true, the trade-offs imposed are not only controversial and paradoxical, but effectively disrupt many security and privacy models.

I feel the urge to mention https://chatsecure.org/ as a notable middle of the way alternative.


Tails' existence is a symptom of much greater illnesses, but it offers a false, inconvenient panacea that just muddies the water. The major underlying issue is untrustworthy applications, operating systems and hardware: the "free market" has brought integrity, privacy and anonymity to barely anyone in a practical, consistent and verifiable form. It may not be easy to accomplish end-to-end verifiably-uncompromised systems, but it's the minimum standard when lives and safety are at stake. Anything less is crap, like America's attitude to nonregulation of health supplement ingredients or those 62k chemicals grandfathered in via TSCA, when there are superior regulation frameworks like those in Scandinavian countries.


I don't know whether we should push for more system integrity from the manufacturers. They'll give us locked-down systems that run binary blobs your OS can't control, and prevent installing alternative OSes by only booting signed kernels... If you want user freedom you have to go another route.


Perhaps a solution built on an OrangePi 2E or something similar would be better, since then you'd have a libre platform with no blobs, and you could theme it to look close to Windows.

Ideally Tails would move in the direction of Signal Private Messenger. Anonymity tools need to be as user-friendly as possible; otherwise the user has to develop specialized skills and expectations to deal with them, which creates user rejection and user apathy.

Or we could swap i2p in there; it'd likely be more consistent than TBB, but its user-friendliness was worse last I checked.


Having tried and failed many times, I haven't really found specific hardware deployments (e.g. a locked-down Raspberry Pi) to work for most people. Again, it's too disruptive.


Hey! I don't mean to hijack this conversation, but I see you built Umbrella specifically for android.

In an attempt to take back a little control over my personal data, I've switched from android devices back to ios. I notice that orbot/orfox aren't available for ios and it doesn't appear umbrella is either.

Is there something I am missing about apple's platform that makes android the better choice for security? Or why aren't people building ios apps for security?


Hey no worries....a few things.

-Umbrella on iOS is coming in the 2nd quarter of next year! Whoohoo (we get asked about this all the time).

-The main reason that we and a number of other open-source projects built on Android first is that it is by far the dominant smartphone platform, especially in developing areas with significant human rights problems like China, Russia, and parts of Africa and Asia. Mainly because the cost of Android phones is low.

-On the security-specific question: I think the Android vs iOS debate has evolved. A few years ago it was felt that the open-source(ish) and customisation aspects of the Android platform made it the more obvious choice for a secure phone.

I think what we have seen recently, with tens of millions of Android phones not getting updates etc., has probably challenged that [1]. Especially now that iOS has encryption as standard and other security features. Of course there are Android options like Copperhead/F-Droid/Guardian Project which show how you can retake control to a certain extent, but I think for the average person's threat model iOS is probably pulling ahead on the security side of things.

[1] https://threatpost.com/android-security-report-29-percent-of...


Thanks! That's mostly what I suspected. Android does have massive market share outside of North America, but here it is much closer to 50/50. Obviously our security/privacy concerns are drastically different than those in other parts of the world so it makes sense to secure android first.

It's hard to recommend alternative distributions of Android to most people. I feel like it's similar to Linux 15 years ago: it CAN be more secure, but it can also be incredibly insecure if set up improperly. And if you're just going to go install the Play Store and Google apps, was anything really accomplished?


Yes. Though under some countries' threat models (where Google/the NSA is not your problem) we are seeing many activists/journalists switch entirely to the Google platform: Google Apps, Docs, Android, Chromebooks, etc. If implemented properly (two-factor etc.), in some threat models it actually makes more sense than a mishmash of systems without anyone capable of monitoring and protecting them, and it helps to reduce the overall attack surface. It's not ideal, but often the best that can be offered in certain circumstances.


> it actually makes more sense compared to a mishmash of systems

100% agreed! It's like using new, super secret, awesome encryption (telegram/zcash) vs something more established that's been reviewed, tested, and has support.


The Russian government just confirmed Sailfish for all government mobile work going forward: https://cdn.jolla.com/wp-content/uploads/bsk-pdf-manager/Jol...


> -It's tricky for a non-technical user to setup

What exactly? A non-technical user can plug a flash drive into a USB port. Other than that, it's basically reading the instructions and doing exactly what they say, which is what "regular" operating systems already make you do. But obviously that is the perspective of a power user. As someone trying to teach people how to use Tails, I also perceive the barrier it imposes. I am convinced that the best path to lowering this barrier is to constantly ask "what exactly?" until we find out.

---

> -It disrupts their regular workflow

I am afraid this is non-negotiable, although other people may disagree. I'd argue that security and privacy are less about the digital tools I use and more about my habits and perspective. Much energy is wasted trying to make "fool-proof" tools, ignoring the fact that the responsibility ultimately rests with the end user, not the developers. There are parts of the Tails documentation explaining this much better than my comment does.

---

> -People get frustrated with speeds of Tor etc

That is frustrating for many people. Many people don't want to be part of any anarchist agenda, but there is simply no alternative. The Tor network will probably continue to be volunteer-driven and an instrument of tech resistance, and that's not a hipster thing: the network is suffering real-world attacks and is almost always flagged as a bad thing.

---

> -People get frustrated with Captcha (Dam Cloudflare!) and other things caused by using Tor in a safe manner.

Adding to the above, it boils down to the same thing. I understand that people don't want to be dragged into a political agenda, but this really is about system administrators deliberately blocking Tor traffic because they don't want to deal with the Tor network, or because they've read somewhere that Tor traffic is bad. This should be motivating "genuine" Tor users to demand that the Tor network stop being blocked everywhere, but like I said, people shouldn't have to feel obligated to engage in a political agenda. Although I personally advocate the exact opposite elsewhere ;)

---

> -People get annoyed as it doesn't solve their problems and exposure on mobile

That's important. Android being a Linux-based system, one would think that by now we'd have something like Tails for smartphones too, but it's not that simple. These devices started being manufactured at a time when placing backdoors in the hardware or at a lower software level was easier, making them harder to secure compared to desktops/laptops. That said, there are plenty of initiatives and things being developed to bring security and privacy to mobile devices, but I agree it's not yet "for the masses".

---

> -You have to restart to run it

> -They can't run their regular programs on it - MS Office, Outlook, Adobe, etc.

> -It's Linux based, so a big mental jump for most people coming from Windows (or most people not at the command line on OS X)

I can't think of any answer to these other than "that's closed-source people's fault; blame Microsoft and Adobe". I am aware that this answer doesn't solve people's problems.

---

> -It's hard to access files on other drives

I don't fully agree with this one. However, I agree that the default GNOME look and feel doesn't provide an obvious "My Computer" sort of thing. That is done well in many other Linux distros, and previous versions of Tails had it solved. GTK devs, where art thou? The Tails website has called for help already; a little help here =)

---

> -Documentation and TAILS is only available in certain languages

I am one of the lazy volunteer translators who should dedicate more time to translating Tails than to the other futile things I do with my life. I hope other potential translators feel ashamed as well.

---

> -It often has driver problems - e.g Macbook Pro 2015 WIFI issues

I acknowledge that as a big problem, because people shouldn't have to compile drivers just to use an operating system. But I can't pass this one up: "that's Apple's fault!"

---

> -In developing states, computer literacy is low, so anything other than the norm (Windows) is confusing

> -In developing states, hardware tends to be slow (often counterfeit) so running TAILS in RAM is slow

> -People lose the USB sticks they put TAILS on (also many counterfeits, so they often fail or have a false size)

Here is the point where the "go blame Microsoft" arguments lose their meaning. This is the kind of reality I see every day, and what I think should be top priority in Tails development. Whose privacy and security issues are we trying to address? I don't mean to be rude, but I believe people with easy access to MacBooks, fast internet connections and the means to buy many disposable USB flash drives won't easily understand, if at all, what it is like to operate Frankenstein machines and to have only one USB flash drive, which is probably shared with other people. This is serious, because apart from the everyday problems, when these people are offered "digital inclusion" it is often something that takes away their privacy and security for good, and every day there are fewer gaps and possibilities for "hacking" one's way out of censorship and surveillance. See internet dot org for the most nefarious example.

---

> -TAILS often requires training, which not everyone has access to

> -Skills fade for digital security training with journalists/activists is often quite high, especially if they don't need it that often.

Again, my opinion is divided on this. I recognize that the Tails doc people should keep improving the documentation with the goal that anyone should be able to operate Tails just from reading it, and it should be as accessible as possible. On the other hand, security and privacy are not subjects you can solve by means of digital tools alone. There is not, and there shall not be, any magical tool that removes the need for the accompanying lessons people should absorb while trying to address privacy and security.


> A 30 second glance at the source code makes it looks like this exploit pivots to attacker-controlled memory on the heap, and spawns a thread using kernel32.dll. As EMET has hardening against attacks like this, I am curious if this exploit works at all on EMET-enabled Windows systems.

EMET can be bypassed so it's no guarantee that it would stop the exploit (but it would probably stop THIS exploit). I don't know if some modification would be able to bypass EMET or other mitigations.

A better solution would be to run javascript in a sandbox (as is done in Chrome/Chromium based browser) which has a much higher barrier to exit.


> A better solution would be to run javascript in a sandbox (as is done in Chrome/Chromium based browser) which has a much higher barrier to exit.

Sure, but we can't get TBB rewritten overnight to work instantly with Chromium, and I'm sure there'd be a lot of push back on that.


An easier solution would be to enable e10s. It should be on by default in the next ESR, and I know TBB has been working to make their patches compatible with it.


Not just e10s, they also need to enable the sandboxing, i.e. it requires Firefox 50 at least.

It should actually be easier for Tor to enable stricter sandboxing than in the default Firefox, though, as presumably they have to care less about compatibility.


Well yeah, but this isn't something we discovered today; it's been years, sadly :(


I don't know much about EMET. How would they mitigate this? After all, it's obviously valid for a VM to call CreateThread.


EMET has mitigations against stack pivoting.
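Concretely, EMET's StackPivot mitigation hooks a list of critical APIs (VirtualAlloc, CreateThread, and friends) and on entry checks that the stack pointer still lies inside the thread's real stack as recorded in the TEB. A toy model of that check in Python (the addresses below are made up for illustration; the real check reads StackBase/StackLimit from the TEB):

```python
def looks_like_stack_pivot(sp, stack_limit, stack_base):
    """Simplified EMET-style StackPivot test: on entry to a hooked
    critical API, the current stack pointer must fall within the
    thread's legitimate stack range [stack_limit, stack_base).
    A ROP chain that pivoted SP onto heap-sprayed memory fails this
    check, so the call is blocked before shellcode can run."""
    return not (stack_limit <= sp < stack_base)

# Hypothetical 32-bit layout: thread stack at 0x00200000-0x00300000
# (Windows stacks grow down from StackBase toward StackLimit),
# attacker-controlled heap spray around 0x0c0c0c0c.
assert not looks_like_stack_pivot(0x0029f000, 0x00200000, 0x00300000)
assert looks_like_stack_pivot(0x0c0c0c0c, 0x00200000, 0x00300000)
```

That's why a heap pivot followed by a CreateThread call, as described upthread, is exactly the pattern EMET is built to catch.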


I've never understood the Tails threat model, and this comment does not really help. You say that it will prevent the attackers from learning any information, except the real IP address of the user. But hiding the IP address of the user is the whole point of Tor.

If you give that up, then what's even the point? The state can simply drive a black van to your house and get the rest of your information at their leisure.


If you're using Tor from a coffee shop, an IP address alone isn't enough to identify you.

Or you're in a country oppressive enough that they'll raid your house for using Tor, but free enough that they'll let you off if they don't find evidence you were doing something illegal over Tor; they didn't compromise the site you were visiting, they just asked your ISP to look for Tor users.


Definitely ignorant on the subject, but are there ANY nations that meet that requirement? I would assume any that are savvy enough to detect Tor AND care about it would probably not just say "Oh, you crazy kids. Be more careful next time."


Fine. Replace Tails with Whonix-Workstation and Whonix-Gateway, if you need to worry about leaking the IP address.


"If"? Are there any Tor users who don't need to worry about leaking their IP address? Then why do they use Tor in the first place?

The Tor project itself seems to promote Tails much more than Whonix, which seems very odd to me.


After thinking about this, I agree with your point, but it's past me being able to edit my original comment to address this issue there.

OK, now you have an IP. Now what? You get a warrant and search the place. What do you find? A computer, maybe an amnesic virtual machine; no actual access to the website/onion in question. IMO Tails promotes better opsec when using Tor: you don't leave any traces of your browsing activity behind, and the attacker can't gain persistence on the victim without a sandbox escape, since the Tails VM wipes itself. It is still a defense, but maybe not a good enough one.


You look at this from the privacy perspective of someone who wants to hide something within the constraints and confines of a working - and at least somewhat ethical - legal and judiciary framework.

The original use case for Tor is people who actually need to be able to use the net and hide. If their location leaks and the adversary gets it with the equivalent of their local government's "search warrant", it's more likely a raid, interrogation, threats, harassment, censorship, and possibly torture and death.

It's a whole different ball-game.


TL;DR: A plurality of Tor users are from Western countries with arguably decent judicial frameworks. Those that have life-or-death consequences to network anonymity will need a lot, lot more than the Tor Browser Bundle or Tor itself.

> If their location and they get it with the equivalent of their local government's "search warrant", it's more likely a raid, interrogation, threats, harassment, censorship, and possibly torture and death.

This is not who is primarily using Tor: 1 in 5 directly-connecting Tor users are in the United States. See:

https://metrics.torproject.org/userstats-relay-table.html

This doesn't change even for bridge users:

https://metrics.torproject.org/userstats-bridge-table.html

So the majority of Tor users are in places I think we'd consider to have somewhat working judiciary frameworks. And I'm highly skeptical of even the American judiciary framework, if you read some of my past posts.

You are correct: my original threat model was those Tor users and their use cases. If they are in FVEY territory they are probably already lost, as Tor does not protect against the "passive global adversaries" the FVEY intelligence community has proven to be, and they may be probabilistically deanonymized, as was shown in the Snowden slides. [1]

Yes, I admit I should have been thinking more deeply, and my original advice isn't good enough. I have a tendency to not think things through fully before posting here, and then I edit/evolve my thoughts as time goes on, as one does in a verbal discussion.

Like you stated, clearly there are situations in which users rely on Tor for more than simple anonymity. They are already misguided in using the Tor Browser Bundle for this purpose. Use Qubes or Whonix on dedicated hardware, and follow the grugq's "Opsec for Hackers" [2]. If the price of exposure is torture and death, Tor alone is not going to save you from your adversary. Your threat model requires a hell of a lot more precautions than anonymity over the wire. You need to assume your tools are compromised and defend in depth as much as possible to make yourself a lot, lot harder to track.

If you are using Tor Browser Bundle on Windows, you fucked up already. If you are only using Tor Browser Bundle, you fucked up too. If you are using Tor on your home connection, nope. If your device leaks identifying information to your access points (MAC addresses, hostnames), negative. If you are not using FDE on the device when they come for you, you are toast, etc etc.

If your adversary is a powerful nation state or an organization with the ability to purchase exploits to use against you and they are willing to fuck you up physically, you have a big problem and you need bigger solutions. No anonymity project will be enough. You need to frustrate your adversary as much as possible and realize that your security comes from making you very expensive to track down, and hope they don't care enough. You are playing the game where you are angering the bear and attempting to be faster than the other guy, so that the other guy who didn't care as much is the one that is eaten.

If they do care enough to come for you, and they have the resources to break a lot of layers to get to you, and you do not have any meatspace power to fight or flee, you are highly unlikely to win.

If that's the "whole different ball game" you are playing and are just using TBB, you will lose. If your adversary is that strong or you have your life to lose, and you are likely being targeted, it is clear at this point that Tor Browser Bundle should be considered harmful without a better strategy of defense in depth.

[1] https://www.theguardian.com/world/interactive/2013/oct/04/to...

[2] https://www.youtube.com/watch?v=9XaYdCdwiWU


Regarding the beginning of your answer: note that nowhere in my comment did I make an assumption about the distribution of Tor users by use case. I spoke of the original intent. I don't really care what the vast majority of users use it for and in what context; I care about its original goals.

Regarding needing more than Tor: not necessarily so. There are many oppressive states (at different points on a large spectrum, from basic censorship to actual physical oppression), and though we read many stories about their crackdowns on privacy rights and their monitoring facilities, we very often overestimate their capabilities (e.g. the GFW of China is a rather sad joke, technically speaking). So if you're not your state's Public Enemy Number One, you're within a risk range that's most likely acceptable using Tor, so long as you use it correctly and carefully (and you accept that risk...). Basically, it boils down to what you said: "if they do care enough to come for you, and they have the resources".

Indeed, I was also probably a bit over-simplistic in my previous answer: there are different leagues with different ball-games.

For the rest, we're in agreement.


Also, it should be noted that when someone raids my home, they'll find the Qubes laptop, which my ISP will be able to identify as the Whonix computer, and so I will probably be tortured until I spill the f*cking hard drive encryption password. That attack is useless against a Tails computer.


I know several people who use Tor purely for its tunneling, and not because of security. There are more use cases.


Tor has a whole alternative network, often labeled the "deep web", that is accessible either via Tor or via Aaron's initiative tor2web.

That's enough to state that there are cases where the access to information is more important than anonymity.

R.I.P. Aaron


VMs are all nice and that, but if the exploit can compromise TBB it's too late already; sandboxing needs to happen in the browser. On Linux you can use namespaces + strict seccomp rules, but I don't know what one would use for Windows. First priority would be to sandbox the browser and work your way down if you want to sandbox more stuff. For Windows, EMET can help prevent certain exploits, I guess, but yeah, a browser that can access anything on the filesystem and make arbitrary system calls is bad stuff.
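The "strict seccomp" idea can be demonstrated in a few lines. This sketch drives raw prctl via ctypes (constants copied from the kernel headers); in strict mode only read/write/_exit/sigreturn survive, so any other syscall, including the file open below and even Python's own exit path (which uses exit_group), gets the child SIGKILLed. Real browser sandboxes use the far more flexible seccomp-bpf filter mode instead; this is just the concept at its bluntest.

```python
import ctypes
import os

# Constants from <linux/prctl.h> and <linux/seccomp.h>
PR_SET_SECCOMP = 22
SECCOMP_MODE_STRICT = 1  # whitelist: read, write, _exit, sigreturn

def run_strictly_sandboxed(fn):
    """Fork, drop the child into seccomp strict mode, then run fn().
    Returns the raw wait status: expect a SIGKILL termination if fn
    (or the interpreter itself) touches any non-whitelisted syscall,
    or exit code 111 if strict mode isn't available here (non-Linux,
    or a seccomp filter is already stacked, e.g. inside Docker)."""
    pid = os.fork()
    if pid == 0:
        try:
            libc = ctypes.CDLL(None, use_errno=True)
            rc = libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0)
        except Exception:
            rc = -1
        if rc != 0:
            os._exit(111)   # strict mode unavailable in this environment
        fn()                # any disallowed syscall -> SIGKILL
        os._exit(0)         # even this calls exit_group -> SIGKILL
    _, status = os.waitpid(pid, 0)
    return status
```

For example, `run_strictly_sandboxed(lambda: os.open("/etc/passwd", os.O_RDONLY))` reports a SIGKILL on a stock Linux kernel: the openat syscall never returns.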


> VMs are all nice and that, but if the exploit can compromise TBB it's too late already; sandboxing needs to happen in the browser. On Linux you can use namespaces + strict seccomp rules, but I don't know what one would use for Windows.

You can take a look at the sandbox implementation of Firefox (shared with Chrome) to see. TBB uses ESR which predates all that, though.


Working within an assumed breach scenario, the VM is defense in depth. Firefox has holes, and it will continue to be relatively easily exploitable as long as TBB allows for plugins and JavaScript by default. There is reticence from TBB team to disable JS by default even in the face of a few of these 0days, so you have to protect TBB users a level down from the browser and assume it'll be popped.

There are Windows "sandboxes" like Bromium, and as stated, IIRC EMET will stop the stack pivot here.


Last time I checked they were working on a TBB sandbox [1]. Let's hope it will be there soon. Subgraph has oz [2], which can be used with any program really, and then there is firejail [3], but these two are only available on Linux.

1: https://blog.torproject.org/blog/q-and-yawning-angel

2: https://github.com/subgraph/oz

3: https://github.com/netblue30/firejail


I reversed the shellcode; it's almost exactly the same as the one used in 2013 (Freedom Hosting): https://twitter.com/TheWack0lian/status/803736507521474560
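For anyone wanting to reproduce that kind of comparison, a crude first pass is a byte-level similarity ratio between the two dumped payloads before doing any real disassembly. The blobs below are toy stand-ins, not the actual 2013/2016 shellcode:

```python
from difflib import SequenceMatcher

def blob_similarity(a: bytes, b: bytes) -> float:
    """Byte-level similarity in [0, 1] via difflib's ratio (2*M/T,
    where M is the number of matched bytes and T the total length).
    A high score flags 'almost exactly the same' payloads that are
    worth diffing properly in a disassembler."""
    return SequenceMatcher(None, a, b).ratio()

# Toy stand-ins: two dumps differing only in one patched byte
# after a NOP sled.
old = bytes.fromhex("90" * 16) + b"\xfc\xe8\x82\x00\x00\x00"
new = bytes.fromhex("90" * 16) + b"\xfc\xe8\x89\x00\x00\x00"
```

Here `blob_similarity(old, new)` lands above 0.9 while unrelated blobs score near zero; identical callback addresses or reused stubs show up the same way in real dumps.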


This likely points to this being an FBI "network investigative technique".* I'm really curious where this attack was injected, as it means that that .onion is also compromised.

My guess? Some darknet market.

* Sure, this could be some type of awkward false flag, but it seems unlikely to my gut.



Unrelated, but kudos on the arbitrary hash use on the Wordpress auto updater last week.


Thanks. That was all Matt Barry. I just prettied it up. He literally did that in his spare time and one day showed up at work and after some smalltalk he was like "Oh, yeah by the way..." and my jaw hit the floor.

That was a few months ago. We had to go through the disclosure process via HackerOne etc.

I'm really lucky to be working with people like Matt and others on the team.


It's on a CP site (giftbox). The exploit got loaded on the confirmation page after logging in.


Eh, with that being the case, I don't personally have too much sympathy.

+1 to FBI on this being pretty well targeted; you had to have had a successful login for them to be attempting this in the first place. It's about as precise as they can get; you're only going after users that are active members of the service. They are at least being reasonable in who they are targeting. I can't really think of how they can be more targeted in attempting to deanonymize people in the network.

I don't like this whole NIT garbage because I'm afraid this will lead to fishing expeditions, where you just root everyone on an .onion that happens to visit it, and then clean up with a multitude of search warrants later and hope you get something. I also don't believe it's the FBI's (or America's) job to play world police.

-1 to the FBI, at least: they were (once again) actively serving CP from a compromised server, which seems like something you shouldn't be doing as an LEA fighting the distribution of that content. Illegal actions shouldn't be taken to fight crime. Distributing the thing you are fighting is the definition of the abyss gazing back into you.


> Illegal actions shouldn't be taken to fight crime.

I disagree with that; I am in favor of illegal actions against criminal actions.

I just think that this is not the FBI's role. Illegal organizations should take on the illegal work. If the FBI commits crimes to fight crimes, then to me they're as criminal as the folks they're hunting, and I would treat them the same way I treat "regular" criminals.


Hey! Motherboard reporter here. Can you provide some evidence of this? You can contact me (anonymously) via OTR lorenzofb@jabber.ccc.de or ricochet:p5mbxsckf3qbmobc Also via email (PGP: https://keybase.io/lorenzofb/key.asc)


I found the following note on a pastebin-like site on Tor (Deep Paste): http://pastebin.com/iNRasUFT


It's not much, but the code redirects the user to a 'member.php' page after 2 seconds. So whatever the target was, it probably had a member.php page.


This might be a good moment to point out that you should not put the IP+path into your browser's address bar unless you are looking for a surprise home search.

(Maybe the EFF wants to do this)


The post mentions "VirtualAlloc" in "kernel32.dll". Does this exploit work on Mac/Linux or is it Win specific?


The bug is generic, the exploit is Windows specific. It should be possible to construct Mac and Linux exploits.


I feel like Tor Browser should just spin up a fresh VM with a minimal Linux distribution and fullscreen Tor browser, with the VM's only networking tunneled through Tor.

I think Hyper-V can do graphics as well and it looks like bhyve added some sort of graphics support earlier this year, but xhyve has none. Not sure if there are any other lightweight hypervisors that support graphics (or maybe just use a protocol like X11 or VNC?).

Docker for Mac and Docker for Windows have done a great job of hiding the virtualization from users (though they don't need graphics, of course).


> fullscreen Tor browser

Tor recommends not going full-screen, since window size can be used as one of several identifiers.


I mean fullscreen within the VM's desktop (no need for normal GNOME/KDE/whatever desktop), which itself may not be fullscreen on host OS. It would act like a native app. If you quit the browser it shuts down the VM.


Ah, you mean the VM would run only one program, the Tor browser, and when the browser terminates the VM terminates with it.


>I mean fullscreen within the VM's desktop

so like... maximized?


How does this work?

I would expect a generic resolution like 1920x1080 to convey much less identifiable information than some random 1583x1176 that the user might resize the Tor browser window to.


The idea is to not change the window size at all from the default. If this advice is followed, you minimize the possible information leak. In your example, 1583x1176 tells us that your system is capable of rendering at least that size. Given the unusual numbers, we further suspect you're not maximized, on a system capable of an 1176px-tall browser window (far fewer of those than of 1920x1080 ones). While not uniquely identifying, it's a piece of the puzzle.

https://trac.torproject.org/projects/tor/ticket/7255
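As a toy illustration of the "piece of the puzzle" reasoning above (all counts here are hypothetical, purely to show the anonymity-set idea):

```python
from collections import Counter

# Hypothetical tallies of observed browser window sizes (illustrative only).
observed = Counter({
    (1000, 1000): 5000,   # a default-ish Tor Browser window size
    (1920, 1080): 3000,   # a common maximized desktop size
    (1583, 1176): 1,      # a hand-resized window
})

def anonymity_set(size):
    """How many observed users share this exact window size?"""
    return observed.get(size, 0)

# A default size hides you among thousands; an odd size may single you out.
print(anonymity_set((1000, 1000)))  # 5000
print(anonymity_set((1583, 1176)))  # 1
```

The rarer the value, the smaller the crowd you can hide in, which is exactly why the advice is to leave the default size alone.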


A better idea would be to simply not leak any of this information at all, or if it must, return some generic 1080p regardless of the actual size. It's a terrible UX to restrict yourself to the default window size (and depending on the window manager, the default window size might not even be respected). Plus, it's so easy to accidentally change the browser window size.


The VM can go "full screen" and lock the height/width in X, then.


We could do stretching when the user tries to embiggen it; that way they don't get deanonymized!


It is basically impossible to fully anonymize a browser as long as JavaScript+plugins are running. EFF's Panopticlick [1] and browserleaks [2] are good at explaining some of these fingerprinting vectors.

[1] https://panopticlick.eff.org/

[2] http://browserleaks.com/
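Panopticlick-style tools quantify each fingerprinting vector as surprisal: the rarer an observed value, the more identifying bits it leaks. A hedged sketch of that calculation (the sample frequencies are made up):

```python
import math

def surprisal_bits(users_with_value: int, total_users: int) -> float:
    """Bits of identifying information one fingerprint attribute reveals,
    in the Panopticlick style: -log2(frequency of the observed value)."""
    return -math.log2(users_with_value / total_users)

# Hypothetical: a value shared by 1 in 4096 browsers reveals 12 bits;
# one shared by half of all browsers reveals only 1 bit.
print(round(surprisal_bits(1, 4096)))     # 12
print(round(surprisal_bits(2048, 4096)))  # 1
```

A few such attributes summed together (fonts, canvas, window size, plugins) are often enough to uniquely identify a browser.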


This is the worst part of Tor browsing for me. 1000x800?! Are you kidding? It should be upped to the MacBook's 1680, or whatever the most popular resolution of a modern display is.


Yes it can.

It is a type 1 hypervisor, and DirectX has been supported for a few versions now.

It's also the foundation of Windows 10 containers and the secure kernel.


So basically Whonix?

https://www.whonix.org/


One word: whonix.


ParrotSec OS is another OS that tries to fully block all connections not going through Tor, out of the box.


Another: Qubes


A slight aside, but is Docker on Unix running natively then?


Typically yes, unless you set it up in a VM yourself (or using docker-machine)


Confirmed in the next message in the thread: https://lists.torproject.org/pipermail/tor-talk/2016-Novembe... (the vulnerability appears to exist in upstream Firefox as well). Seems to relate to SVG animation.


Has nobody tried putting Gopher and Tor together? It would probably yield slightly better results, given how minimalist and mostly text-based Gopher is. It might not work as well if you try to have a "community" on Tor, but it would be interesting to know how Gopher works over Tor, if at all.


As much as I love Mozilla and their philosophy, it has to be said that - if you have any sort of worries about security - using Firefox is a bad choice and borderline reckless.

It lacks even basic exploit mitigations that other browsers have had for years now (most importantly, a feature-complete sandbox).

Right now, Firefox is just a single process with zero separation of privileges. Any bug in the rendering code is a potential RCE. Writing exploits for Firefox vulnerabilities is well within a strong amateur's reach and this headline does not surprise me at all.

This is a (probably incomplete) list of all public exploits in the past three years:

- 2013/08: XPCOM RCE (https://github.com/rapid7/metasploit-framework/blob/master/m...)

- 2013/08: __exposedProps__ (https://github.com/rapid7/metasploit-framework/blob/master/m...)

- 2014/03: WebIDL RCE (https://github.com/rapid7/metasploit-framework/blob/master/m...)

- 2015/03: PDF.js RCE (https://github.com/rapid7/metasploit-framework/blob/master/m...)

- 2015/08: The file stealing exploit: https://blog.mozilla.org/security/2015/08/06/firefox-exploit...

- [The FBI exploit (2016-ish)]

- This one.

Publicly known as in, with a fully functional Metasploit exploit, and most of them through JavaScript, so 100% reliable. ASLR isn't going to help with interpreter bugs. This is 90s level bad!

And those are just the publicly known ones. With a code base as large as Firefox's, it'd be foolish to assume that there aren't any private 0days. Just take a look at this list:

https://www.cvedetails.com/vulnerability-list/vendor_id-452/...

Even Microsoft Edge is better at this.

Project Electrolysis is a step in the right direction, but it will take a LONG time to mature. Last time I checked, it was just for process separation and did not provide any security guarantees. Chromium had a fair number of sandbox escapes during its first years, and there's no reason to believe this will be different for Firefox. I have high hopes for their Rust re-implementation, but that's not going to be usable anytime soon.

In the meantime, there's nothing like Chrome/Chromium security-wise. Not even close.

When was the last time there was a reliable, public Chrome exploit with a sandbox escape? The only one I can think of was the Hacking Team exploit, which used a Windows kernel 0day to escape the sandbox.

Chrome's security team is probably the strongest in the industry, and they have poured an absurd amount of effort into Chrome's security. And it being open source means that I can use it without worrying about backdoors or data leakage.


Just to provide some balance: Chrome exploits are not as rare as you claim in this post. Pretty much any time Pwn2Own or similar contests are held, with non-trivial prize money, somebody brings a fully working Chrome exploit.

If you check https://zerodium.com/program.html you can see that the current market price for a Chrome exploit with sandbox escape is about 80k USD. Firefox is cheaper (30k USD), but only by a bit more than a factor of 2.

(I've been working on security vulnerabilities pretty continuously since 1998, so I somewhat know what I am talking about)

In general, for any major browser: Given the size, complexity, and code churn, an attacker just needs enough motivation / time.


Also, it is safe to say that ChakraCore (the JS interpreter inside Edge) is much more broken / easier to find bugs in than Firefox, at least at the moment.


Why is that safe to say?


I looked at both.


I fully agree with you, Chrome isn't magically secure either (especially with Flash and Windows prior to Win10). The Chrome and project zero bug trackers are full of PoCs for old vulnerabilities. It's just in a much better shape than Firefox.

Did not know those prices were public, really interesting.


> When was the last time there was a reliable, public Chrome exploit with a sandbox escape? The only one I can think of was the Hacking Team exploit, which used a Windows kernel 0day to escape the sandbox.

Don't forget that it's not just the sandboxing and Chrome/Chromium based browsers can mitigate entire classes of bugs thanks to win32k lockdown. A recent example was a Flash bug which required access to some of the blacklisted system calls (GDI I think).


win32k lockdown is part of the sandbox. On Linux you do this with seccomp filtering for example.


Last time I checked, it was just for process separation and did not provide any security guarantees.

It's a separate project from e10s (though it depends on it): https://wiki.mozilla.org/Security/Sandbox

Chromium had a fair bit of sandbox escapes during the first years, and there's no reason to believe this is going to be different with Firefox.

I agree. Note that people still find sandbox escapes against Chrome anyway. Yes, sometimes they use the OS, but due to how the sandboxing works that's to be expected.

Even Microsoft Edge is better at this.

CVE counts, especially of bugs published by the developers themselves, aren't a very good measure of security.


> Note that people still find sandbox escapes against Chrome anyway. Yes, sometimes they use the OS, but due to how the sandboxing works that's to be expected.

Google developed a special architecture to make the browser safer. Other vendors did nothing; they just wait until an exploit becomes public and patch the code (and yet they call their products "secure"). Google also pushed Microsoft to allow disabling a potentially vulnerable kernel library. Google has long shipped its own PDF viewer (so people don't have to use the Adobe viewer) and its own Flash plugin. As a result, at contests like Pwn2Own, exploits for Chrome are the most expensive.

Regarding Tor, I am not sure JS is really necessary for a secure network. You don't need it to read sites like WikiLeaks, but it provides a large attack surface, and it might be better to have it disabled by default.


> CVE counting

See my reply below. Counting exploits, not CVEs. By CVE count, Chrome would be the worst.


I learned this the hard way. I got a drive-by virus infection on Firefox a couple of years ago. I clicked on a link from a Google search, and that website completely infected my machine; .exes started running. I thought it was a browser popup at first. It was not. Scary stuff. With Chrome, I feel much safer, and no such thing has happened again. To you, the reader, this might just be an anecdote. But to me it was very frustrating and time-consuming. These days, I always secure my browser properly, allowing only minimal amounts of cross-site requests, JavaScript and plugins.


Is the situation with Firefox this dire, when compared to Chrome? Can anyone corroborate?

This realization may be enough for me to finally switch, if so.


There's no questioning that Chrome's sandbox implementation is ahead of Firefox: they've been shipping it for several years, whereas Firefox only got its first one out in Firefox 50 (for content! They had a Flash sandbox and DRM/media decoder sandbox for longer), with the more strict ones being in the Nightly/Dev Edition branches. It's possible the real Firefox is not vulnerable to this exploit because of that, but we'll have to see.

None of this would have helped Tor/TBB, because it's based on an older Firefox branch with no sandbox at all. This means most vulnerabilities are exploitable and lead to a total compromise. There are relatively few of those and they get fixed very quickly, but if you use Tor you are likely specifically targeted, so any hole is very serious.

Parent sounds so dire because he seems to grade security by how many CVEs the developer publishes, ignores the fact that browser exploits are often done by exploiting attack surface outside the browser (because all browsers are, relative to other software, secure), and conflates Chrome with Chromium.

This particular bug is bad (it's a 0day - a security exploit found by bad guys before Mozilla or security researchers found it) but a lot of the buzz here is because such problems are rather rare these days, and because it's targeting Tor.


> Parent sounds so bad because he seems to grade security by seeing how many CVE's the developer publishes, ignores the fact that browser exploits are often done by exploiting attack surface outside the browser (because all browsers are - relatively speaking to other software - secure), and conflating Chrome vs Chromium.

By counting CVEs alone, Chrome would be the least secure since it has more CVEs than any other browser thanks to Google's bug bounty and fuzzing, most of them harmless.

What I counted were real-world browser exploits which is an excellent measure of security.

> such problems are rather rare these days

In Chrome, yes. They happen rather often with Firefox.

> conflating Chrome vs Chromium

Their security features are identical. It's the same code.


In Chrome, yes. They happen rather often with Firefox.

Shrug, I disagree. The fuss made here illustrates it: 0-days are rare enough that "rather often" is a serious mischaracterisation.

Their security features are identical. It's the same code.

You said: "And it being open source means that I can use without worrying about backdoors or data leakage." Which has nothing to do with security. Inspecting Chromium tells you nothing about what Chrome does, and using Chromium means you miss features that Chrome has (H264, Netflix, ...)


> The fuss made here illustrates it: 0-days are rare enough that "rather often" is a serious mischaracterisation.

A RCE in a browser is literally the worst possible case and Firefox had multiple of them, most trivially exploitable with JavaScript. This simply doesn't happen with Chrome.

> Chromium means you miss features that Chrome has (H264, Netflix, ...)

Google made an effort to open-source everything, including their PDFium PDF reader.

The only remaining bits are the Pepper flash player and the Encrypted Media Extensions. Both are closed source in Firefox as well. You can use them with Chromium just fine and both are sandboxed. They cannot be distributed with Chromium for licensing reasons, but nothing prevents you from downloading the Chrome package and extracting those two files. Many Linux distros have scripts which automate this.


I specifically pointed out H264 support (and you ignored it) because it's an annoyance when using Chromium. And yes, that's due to licensing reasons as well.


That's up to the distribution policy/packaging. Fedora refuses to add H264, on Ubuntu you can install chromium-codecs-ffmpeg-extra and it works fine.

The code is there in Chromium and it's fully open source.


> Chrome's security team is probably the strongest in the industry and they poured an absurd amount of effort into Chrome's security.

Could you elaborate? I'm curious what they do for security.


Just google for Project Zero.


Are there immediate actions for inoculation, e.g. disabling SVG, and/or detection, i.e. if this has been triggered?


NoScript would probably work, seeing that the exploit relies on JavaScript to function.


about:config -> set javascript.enabled to false. It's the first thing you should do with TorBrowser anyway.
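If you want the pref to survive tinkering in about:config, the same setting can (assuming a standard Firefox profile layout, which Tor Browser uses) be pinned in a `user.js` file in the profile directory:

```js
// user.js — re-applied on every start-up, overriding prefs.js
user_pref("javascript.enabled", false);
```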


This only works in Windows, right?


The shellcode is Windows specific, but the bug is in Firefox. It is possible there is something server side sending the proper code depending on the user agent.


This exploit is Windows-specific, though the vulnerability appears not to be.


Thanks. If you could say more about that, please do. Abstracting from 'The exact functionality is unknown but it's getting access to "VirtualAlloc" in "kernel32.dll" and goes from there.' to Linux etc is over my head.


The underlying vulnerability has to do with a memory corruption of some sort in Firefox's SVG rendering, which is a code base that is shared across platforms. So probably an analogous memory corruption exists on other platforms, because it's compiled from the same C++. While it's possible that it's not exploitable outside of Windows, there is no specific reason to assume it won't be.

But the exploit here with the ROP chain, calling Windows APIs, etc., is apparently Win32-specific and doesn't have binary code that could run successfully on other platforms.

The setup for the exploit is apparently primarily in the Javascript function craftDOM() which makes some SVG objects and modifies some of their properties, presumably in a way that triggers an underlying bug in Firefox's SVG support. There is also a Win32 object code payload in the string object thecode, which would not be able to run unmodified on another platform. Also, the ROP chain code is likely to be Windows-specific in several respects. Indeed, the statement

  throw"Bad NT Signature";
seems to be actively giving up the attack if it detects a non-Win32 environment.
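For illustration, that "Bad NT Signature" bail-out presumably walks the PE headers the way any Windows loader would: an 'MZ' DOS header, then the 'PE\0\0' NT signature at the offset stored at 0x3C. A minimal Python sketch of that validation (synthetic data, not the exploit's actual code):

```python
import struct

def has_nt_signature(data: bytes) -> bool:
    """Check a memory image for a valid PE layout: 'MZ' DOS header,
    then the 'PE\\0\\0' NT signature at the offset stored at 0x3C."""
    if len(data) < 0x40 or data[:2] != b"MZ":
        return False
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]
    return data[e_lfanew:e_lfanew + 4] == b"PE\x00\x00"

# Minimal synthetic image: 'MZ' stub whose e_lfanew points at 'PE\0\0' at 0x40.
img = bytearray(0x44)
img[:2] = b"MZ"
struct.pack_into("<I", img, 0x3C, 0x40)
img[0x40:0x44] = b"PE\x00\x00"
assert has_nt_signature(bytes(img))
assert not has_nt_signature(b"\x7fELF" + b"\x00" * 60)  # an ELF image fails
```

On Linux or macOS no such image exists in memory where the exploit expects one, so the check fails and the attack throws instead of crashing.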


Thanks, that helps.


This may be an unpopular opinion here but if the TorBrowser folks cared about security they should switch to a Chromium based browser. The sandbox provided by it would be robust and well tested as it's used in Chrome.

I don't see why the two objectives of having a secure browser and the privacy/anonymity provided by Tor have to be diametrically opposed. You can have both.


Because removing all of the Google-related features from Chromium would be a very large task indeed. Quite a few people have discussed it within Tor (and IIRC some people started working on it) but it might not be as good of an idea as you might initially think.



There is currently only one maintainer of that code, and there is this disclaimer in the readme:

> DISCLAIMER: Although it is the top priority to eliminate bugs and privacy-invading code, there will be those that slip by due to the fast-paced growth and evolution of the Chromium project.


Does this affect the regular (non-TBB) current version of Firefox?


The bug, yes. Its exploitability is unclear. The current Firefox release has some minimal content-sandboxing protections enabled, but TBB is based on the older ESR.




This will make a very interesting postmortem. I'm curious how they escaped from JS.


So going to a non-HSTS site through Tor can now infect your computer (via a MITM by the exit node)?

It seems that using the regular internet is actually safer now.


Exit nodes will also steal any unencrypted passwords and put malware in any binaries you download. It's been happening for years.

In China the "regular internet" intercepts http and inserts javascript malware to create a DDOS botnet.


Yes, but I'm not talking about downloading exes or logging onto gmail (and definitely not putting credentials on a site using HTTP) or anything.

I'm talking about how going anywhere on Tor can infect you.



No, the Tor Project scans exit nodes for misbehavior; those that do bad stuff are flagged as bad exits and are no longer used. https://trac.torproject.org/projects/tor/wiki/doc/badRelays


The same vulnerability apparently also exists in Firefox, which Tor Browser is based on.


But it's worse on Tor.

Regular internet has a few protections:

1. Google safe browsing

2. AdBlocking

3. Websites try to keep their reputation.

Tor exit nodes, on the other hand, have no reputation to protect (if one gets sullied, just spin up another), whereas a website's reputation costs money to build.


I have one question: the list of exit nodes is public, so we can know at any time the complete list of servers in a circuit.

Does anything prevent us from rating Tor exit nodes according to their "transparency" and adding this rating to the consensus file?

Has anybody already worked on that? I cannot find anything on the internet…


There are projects that scan exit nodes for various heuristics; if they find very bad behavior, they report it to the Tor Project to request a BadExit flag. However, there's no kind of continuum of rankings, just BadExit or not.

My impression from talking to people working on this a few years ago was that they wanted to be a little bit secretive about exactly what they scan for, in order to make it harder for malicious exit operators to anticipate the scans or to distinguish the scans from end-user traffic. There was a suggestion this is an activity that anybody can engage in: if you can think of an attack against Tor users that you know how to detect, you can write your own client that tests for that thing (modifying the path selection algorithm to ensure that you test every exit node!) and then start running your tests. People will be interested in your results.
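The detection side of such a scan can be as simple as comparing what an exit returned against a known-good baseline. A minimal sketch of that comparison step (fetching through a chosen exit, e.g. via a torrc `ExitNodes` setting, is omitted; the tampered payload below is invented for illustration):

```python
import hashlib

def responses_differ(baseline: bytes, via_exit: bytes) -> bool:
    """Flag content modification: for a resource that should be byte-identical
    (e.g. a static download), any hash mismatch is suspicious."""
    return hashlib.sha256(baseline).digest() != hashlib.sha256(via_exit).digest()

clean = b"<html>static page</html>"
tampered = clean + b"<script src='http://attacker.example/x.js'></script>"
assert not responses_differ(clean, clean)
assert responses_differ(clean, tampered)
```

Real scanners have to be subtler than a byte compare (dynamic pages, compression, timing), which is part of why the exact heuristics are kept quiet.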


> But it's worse on Tor.

Oh, is it? The exploit for upstream Firefox on Windows is now completely public, free of charge. How is that worse on Tor, where most people using it have some idea that JS and third-party connections should be blocked?


You still need to intercept users' connections and redirect them to malicious JS with regular Firefox. For the attack to work on a large scale, you'd typically do this by compromising an ad network and hoping you get enough users before SafeBrowsing blacklists you.

With Tor on the other hand you can just run an exit node and infect the user even if (s)he's visiting a regular site.


I've quit using Tor. It seems to have been targeted by law enforcement, and now this.


This was an upstream Firefox bug, so you should probably quit using Firefox if you're concerned about bugs like this.

(Using Tor does change who can attempt to attack you with such bugs -- and maybe who is motivated to.)



