
I'm not understanding this. If I use the "Tor Browser Bundle" and never use that browser for anything but Tor, and never log in to anything on that browser, how can they track me?



Tor is not resilient against timing correlation attacks.

Suppose Alfred hosts a Tor onion describing relativistic physics.

Suppose Bob cautiously uses Tor to consult such information on a regular basis.

Then, depending on privileged access to or leverage over the internet backbone, multiple approaches can be used:

A) Suppose some regions randomly suffer internet or power blackouts. Obviously a Tor onion service that keeps interacting with the Tor network during such an event is not in that region, while an onion service that is disconnected for the duration of the event is possibly/probably in one of the affected regions. The same reasoning applies to Tor Browser users.

B) Instead of waiting for spontaneous events, they can be elicited (costly in the case of internet blackouts, very costly in the case of power blackouts).

C) Instead of disabling participation, one can randomly stall it: if the ISPs at both ends cooperate or are compromised, network packets can be intentionally given known pseudorandom delays on top of the spontaneous delays. By computing the correlation between the injected delays and the delays observed at the other end, one can identify which Tor user IP address is frequenting which Tor onion host IP address. This works even if the added delays are smaller than the spontaneous delays, because the spontaneous delays are uncorrelated with the injected delays: their correlation with the injected sequence averages towards 0, whereas the injected delays correlate with themselves. The number of packets needed for true positives to rise above the noise floor depends on the relative sizes of the spontaneous variation in delays and the injected delays; if the injected delays are smaller, it takes many more packets before true positives rise above the noise floor. (See the numerical sketch below.)
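
To make that concrete, here is a minimal numerical sketch in Python (my own illustration, not an attack tool). Spontaneous jitter is modeled as Gaussian noise and the injected delays as a small known pseudorandom sequence; all the numbers (5 ms jitter, up to 1 ms injected delay, 20,000 packets) are made-up assumptions:

    # Minimal sketch of the delay-correlation idea above (illustration only).
    import random
    import statistics

    random.seed(42)

    N = 20000                                                # observed packets
    injected = [random.uniform(0, 0.001) for _ in range(N)]  # known injected delays (s)

    # Delays seen at the far end for the flow that really carries the injection...
    matched   = [random.gauss(0.050, 0.005) + d for d in injected]
    # ...and for an unrelated flow that only has spontaneous jitter.
    unmatched = [random.gauss(0.050, 0.005) for _ in range(N)]

    def corr(xs, ys):
        """Pearson correlation coefficient."""
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    # The noise floor for an unrelated flow is roughly 1/sqrt(N) ~ 0.007 here,
    # while the matched flow correlates noticeably even though the injected
    # delays are much smaller than the spontaneous jitter.
    print("matched flow:  ", round(corr(injected, matched), 3))
    print("unrelated flow:", round(corr(injected, unmatched), 3))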

This article is from the time of the Snowden leaks, more than 10 years ago.

The moment they have correlated the traffic on your ISP's end with the traffic on the specific Tor onion's ISP's end, they can just ask your ISP for your true name.

In this case the experts were convinced that cookies were used, which is conceivably correct for a fraction of the users. The cookies and ads were probably abused for several purposes at once: tracking random browsing, spam email for lucky hits, propagation-delay injection via the advertisement packets, ...


This is understood, but that doesn't make "Google Ads" a way to exploit this.


Something I have been meaning to write up for a long time but never got around to:

I assume the reader knows the basics of asymmetric cryptography. For the sake of brevity and simplicity let us consider RSA, even though that is not the encryption Tor actually uses for its onion layers. I also assume the reader is familiar with the mathematics behind RSA and the basic proof that decrypting the encrypted number recovers the original number, so familiarity with modular arithmetic, modular exponentiation, etc. is assumed...

I assume the reader knows the basic concept of onion routing: the sender of a packet chooses an arbitrary path through routing nodes whose public keys are known, and first encrypts the packet for the exit node's public key, then encrypts that for the next-to-last node's public key, and so on, working backwards, finally encrypting the onion packet for the first routing node's public key. At each layer a bit of metadata is encrypted along with it, so each routing node knows only the next node to send its decryption to. So the N-times-encrypted packet is sent to the first routing node, which decrypts the first layer, splits the metadata from the (N-1)-times-encrypted packet, and sends the latter to the next node named in the metadata.
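
As a toy illustration of the layering (my own sketch; the per-node "cipher" here is just a reversible stand-in, not real crypto, and names like build_onion/peel are purely illustrative):

    # Toy sketch of onion layering: each "node" gets a stand-in encrypt/decrypt
    # pair so the layering and metadata handling are visible.
    import json

    class Node:
        def __init__(self, name):
            self.name = name

        # Stand-in for public-key encryption to this node (NOT secure).
        def encrypt(self, plaintext: str) -> str:
            return f"enc[{self.name}]({plaintext})"

        def decrypt(self, ciphertext: str) -> str:
            prefix = f"enc[{self.name}]("
            assert ciphertext.startswith(prefix) and ciphertext.endswith(")")
            return ciphertext[len(prefix):-1]

    def build_onion(route, payload):
        """Encrypt for the exit node first, then wrap backwards toward the entry node."""
        packet, next_hop = payload, "destination"
        for node in reversed(route):
            packet = node.encrypt(json.dumps({"next": next_hop, "data": packet}))
            next_hop = node.name
        return packet

    def peel(node, packet):
        """What a routing node does: strip one layer, learn only the next hop."""
        layer = json.loads(node.decrypt(packet))
        return layer["next"], layer["data"]

    route = [Node("A"), Node("B"), Node("C")]
    onion = build_onion(route, "hello relativistic physics")
    for node in route:
        next_hop, onion = peel(node, onion)
        print(f"{node.name} forwards to {next_hop}")
    print("exit payload:", onion)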

From the perspective of an ISP or 3-letter agency monitoring the traffic of a specific intermediate routing node, they see encrypted packets arriving and encrypted packets leaving.

Let me first state the obvious, but which I will NOT rely on:

If the eavesdropper were to possess the capability to break RSA, they could trivially decrypt the packets and associate the incoming packets with the outgoing packets. (Let us ignore that if they could break RSA, they could just decrypt the whole layered onion at once...)

To transliterate to math:

EavesDropperAbleToBreakRSA => EavesDropperAbleToTrackPackets

given "A => B" and "not A" one is unable to prove "not B", although it is tempting to jump to that conclusion. B can be true while A is false, it would just mean that the eavesdropper could track packets in an alternative manner, but how?

Let's go back to our hypothetical naive RSA implementation of Tor:

Is it really necessary to break RSA to match incoming and outgoing packets of an intermediate node?

Of course not: imagine first, for simplicity, that the node receives only 2 incoming packets and sends 2 outgoing packets.

This means the eavesdropper sees 2 incoming (k+1)-times-encrypted packets and 2 outgoing k-times-encrypted packets, which happen to be the decryptions of the incoming packets. Why break RSA if the outgoing packets ARE the decryptions? One merely needs to re-encrypt the outgoing packets (together with the proper metadata) under the routing node's public key, and one ends up with exactly one of the 2 incoming packets. So consolidated ISPs, or other attackers able to monitor network traffic at a sufficient number of nodes, can simply track packets through the onion network. Effectively the k-times-encrypted packet is an RSA signature of the (k+1)-times-encrypted packet! (A toy demonstration follows.)
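
A toy demonstration of that matching step, assuming textbook (unpadded, deterministic) RSA with the usual p=61, q=53 example key and ignoring the metadata handling; real RSA uses randomized padding and Tor does not use raw RSA for its layers, so this only illustrates the argument about the naive construction:

    # Re-encrypt the node's outputs under its PUBLIC key and match them to its inputs.
    p, q = 61, 53
    n = p * q            # 3233
    e = 17               # public exponent
    d = 2753             # private exponent (e*d = 1 mod (p-1)*(q-1))

    # Two (k+1)-times-encrypted packets arriving at the node, modeled as small numbers.
    incoming = [pow(1234, e, n), pow(2222, e, n)]

    # The node strips one layer and forwards the decryptions (order shuffled).
    outgoing = [pow(c, d, n) for c in incoming][::-1]

    # Because textbook RSA is deterministic, re-encryption reproduces exactly one
    # of the incoming ciphertexts, linking output to input without breaking RSA.
    for m in outgoing:
        match = pow(m, e, n)
        print(f"outgoing {m} re-encrypts to {match} = incoming packet #{incoming.index(match)}")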

Suppose a random route is 5 hops long and that there are 30 routing nodes (not realistic but insightful as we will see).

Suppose only the entry node packet and the exit node packet are logged, but not the intermediate traffic. How computationally expensive would it be to guess and verify the route?

That would be 30 × 29 × 28 × 27 × 26 = 17,100,720 candidate routes. Each candidate would take about 5 encryptions/signature checks to verify. Very feasible to brute force.
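
For reference, the arithmetic (a trivial check I'm adding, in Python):

    # 5 ordered hops drawn from 30 nodes, ~5 public-key operations per candidate route.
    from math import perm

    routes = perm(30, 5)      # 30 * 29 * 28 * 27 * 26
    print(routes)             # 17100720 candidate routes
    print(routes * 5)         # 85503600, ~85.5 million encryptions/signature checks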

The reason this is insightful is that a dominant eavesdropper missing observability on only a small number of links can brute force these combinations without having to break RSA, and still verifiably confirm the actual route. It only needs to consider the public keys of the nodes it cannot observe. So this becomes expensive much more quickly for entities with less eavesdropping infrastructure than for dominant eavesdroppers.

A security researcher who understands this potential ploy in onion routing networks will have a hard time proving the exploit in practice, because the researcher lacks the eavesdropping powers that ISPs and 3-letter agencies possess.


Because their targets are not nerds using Tor to access their own machines or random 4chan clones.

They target politicians, whistleblowers, and journalists.

If you have ever volunteered with organizations helping those people, you quickly learn that this group is not very tech literate, has cheap, limited devices, and skips instructions.


I’m not aware of any sandbox escape attack for the Tor browser, but I am no expert. If there is one, even limited, it’d probably be enough to figure out a way to track you down.


The only way to anonymize it enough, while also defeating any attempt at cookie/malware injection, would in my view be to create a VM with the strict minimum needed to run Tor Browser and clone it for single use: a script clones the VM, opens it, lets you use Tor Browser, and when you close the browser the VM is also shut down and deleted. To save time, the script could prepare the next VM while the old one is still running, changing bits here and there for added anonymization (OS and browser signature, screen and window size, mouse settings, etc.). (A rough sketch of the clone-use-delete part is below.)
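
A rough sketch of that clone-use-delete loop, assuming VirtualBox and a pre-built template VM named "tor-template" (both the tool choice and the VM name are my assumptions; the fingerprint-randomization part is not sketched):

    # Clone a template VM, run it for one Tor Browser session, then destroy the clone.
    import subprocess
    import time
    import uuid

    TEMPLATE = "tor-template"   # assumed pre-built minimal VM

    def vbox(*args):
        subprocess.run(["VBoxManage", *args], check=True)

    def vm_running(name):
        out = subprocess.run(["VBoxManage", "showvminfo", name, "--machinereadable"],
                             capture_output=True, text=True, check=True).stdout
        return 'VMState="poweroff"' not in out

    name = f"tor-session-{uuid.uuid4().hex[:8]}"
    vbox("clonevm", TEMPLATE, "--name", name, "--register")   # fresh single-use clone
    vbox("startvm", name)                                      # browse, then shut the VM down
    while vm_running(name):                                    # wait for the session to end
        time.sleep(5)
    vbox("unregistervm", name, "--delete")                     # destroy the clone and its disks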


This is largely the motivation for TAILS: https://en.wikipedia.org/wiki/Tails_(operating_system)

And also Whonix: https://en.wikipedia.org/wiki/Whonix

Using the DVD image in a VM would largely suffice for most users. For even more security, you would use the live image on a throwaway laptop at a coffee shop or something, but that's not exactly practical for everyday use.


I remember using immutable VMs for a lot of testing - they reset to the last saved state when shut down, which made a lot of tests more easily repeatable. This is a sensible thing to do for privacy as well - any cookies you pick up during your session are shed on reboot.



Does Tor Browser disable cookies? I think not.

You don’t have to log in to be given a cookie that’s then stored and tracked across each new IP that Tor cycles through.


This is trivially searchable. Tor Browser doesn't store cookies.

https://support.torproject.org/glossary/cookie/


You only have to mess up once.

Google has programs where they can identify budding extremists and correlate behavior to medical diagnoses without HIPAA exposure.

If your secret weird shit that you’re doing with Tor is of interest, they’ll eventually get a profile. Using Tor is like setting off the bat signal.


what is this program called?

The fact that we tolerate this shit is unbelievable.


I believe it’s called Opioid 360 now. It’s a collab with Deloitte.

They did similar work with ad campaigns to defuse individuals who were in danger of becoming extremists, etc.


> The fact that we tolerate this shit is unbelievable.

The fact that we tolerate it is relatively expected; the fact that Snowden leaked 90% of this stuff a decade ago and nobody cares is what's unbelievable. We kinda deserve to be surveilled if this kind of apathy is what dominates our behavior and executive function.


> nobody cares is what's unbelievable

Approximately zero Americans think this affects them, and the usual tropes "If you have nothing to hide, why do you use curtains?" result in "That's different, duh."

A few are split on whether they care that Ring video is available to law enforcement; most think it's a benefit.

Interestingly, all care quite a lot whether an AirBnB host has cameras inside the property. Privacy suddenly matters.

At the same time, most shrug if you assert the government has all their emails and social messaging, in a "What are you gonna do?" and "If they want to read all that more power to 'em…" way.

For people to care about most anything abstract, they must believe it affects them personally, and be able to both picture and believe a credible bad outcome.

While the AirBnB creep works, it seems everything else is filed under "That's about someone else, not me."


Yes, literally unbelievable because it doesn't exist.



