Did they also set IP_TTL to set the TTL value to match the platform being impersonated?
If not, then fingerprinting could still be done to some extent at the IP layer. If the TTL value in the IP layer is below 64, it is obvious the sender is either not running modern Windows or is running a modern Windows machine that has had its default TTL changed, since by default the TTL of packets on modern Windows starts at 128 while most other platforms start it at 64. Since those other platforms have no trouble communicating over the internet, IP packets from modern Windows will essentially always be seen by the remote end with TTLs at or above 64 (likely just above).
That said, it would be difficult to fingerprint at the IP layer, although it is not impossible.
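For illustration, a minimal sketch of what setting IP_TTL looks like with Python's socket module (the host "example.com" and the value 128 are just placeholders; this assumes Linux, IPv4, and that nothing along the path rewrites the TTL):

    import socket

    # Set the outgoing IPv4 TTL to 128 so packets leave looking like they came
    # from a modern Windows stack instead of the platform's default of 64.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 128)
    sock.connect(("example.com", 443))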
>That said, it would be difficult to fingerprint at the IP layer, although it is not impossible.
Only if you're using PaaS/IaaS providers that don't give you low-level access to the TCP/IP stack. If you're running your own servers, it's trivial to fingerprint all manner of TCP/IP properties.
I meant it is difficult relative to fingerprinting TLS and HTTP. The information is not exported by the Berkeley socket API unless you use raw sockets and implement your own userland TCP stack.
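To show what I mean, a rough sketch (not production code) of pulling the TTL out of incoming packets with a raw socket, which is the kind of thing the normal TCP socket API won't hand you. It assumes Linux, IPv4, and root/CAP_NET_RAW:

    import socket

    # A raw socket delivers the full IPv4 header, so the TTL of every incoming
    # TCP packet can be read directly.
    raw = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_TCP)
    while True:
        packet, _ = raw.recvfrom(65535)
        ttl = packet[8]                        # TTL is byte 8 of the IPv4 header
        src = socket.inet_ntoa(packet[12:16])  # source address is bytes 12-15
        print(f"packet from {src} arrived with TTL {ttl}")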
Yeah, some sort of packet mirroring setup (e.g. in iptables or at the switch level) plus a packet capture tool should be enough. Then you just need to join the data from the packet capture program/machine with your load balancer, using src IP + port + time.
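Something like the following for the join step, assuming both sides produce records with a source IP, source port, and timestamp (the record shapes and the two-second skew window are made up for the example):

    from collections import defaultdict

    # Match mirror-port capture records against load balancer log entries on
    # (src_ip, src_port), breaking ties with a timestamp window because ports
    # get reused over time.
    def join_capture_with_lb(capture_records, lb_records, max_skew=2.0):
        by_key = defaultdict(list)
        for cap in capture_records:        # e.g. {"src_ip", "src_port", "ts", "ttl"}
            by_key[(cap["src_ip"], cap["src_port"])].append(cap)
        joined = []
        for lb in lb_records:              # e.g. {"src_ip", "src_port", "ts", "request_id"}
            for cap in by_key.get((lb["src_ip"], lb["src_port"]), []):
                if abs(cap["ts"] - lb["ts"]) <= max_skew:
                    joined.append({**lb, "ttl": cap["ttl"]})
                    break
        return joined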
The argument is that if many (maybe the majority) of systems are sending packets with a TTL of 64 and they don't experience problems on the internet, then it stands to reason that almost everywhere on the internet is reachable in fewer than 64 hops (personally, I'd be amazed if any routes are actually as long as 32 hops).
If everywhere is reachable in under 64 hops, then packets sent from systems that use a TTL of 128 will arrive at the destination with a TTL still over 64 (or else packets from the systems that start at 64 would already have been discarded along the way).
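To put numbers on it, assuming a fairly long route of, say, 30 hops:

    # Each hop decrements the TTL by one, so the receiver sees initial_ttl - hops.
    hops = 30
    for initial_ttl in (64, 128):
        print(f"sent with TTL {initial_ttl}, arrives with TTL {initial_ttl - hops}")
    # sent with TTL 64, arrives with TTL 34
    # sent with TTL 128, arrives with TTL 98  (still well above 64)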
Windows 9x used a TTL of 32. I vaguely recall hearing that it caused problems in extremely exotic cases, but that could have been misinformation. I imagine that >99.999% of the time, 32 is enough. This makes fingerprinting via TTL viable for distinguishing between systems that set it to 32, 64, 128 and 255 (OpenSolaris and derivatives). That said, almost nobody uses Windows 9x or OpenSolaris derivatives on the internet these days, so I used values from systems people actually do use for my argument that fingerprinting via TTL is possible.
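The classification itself is just a threshold check; a toy heuristic (the function name and the assumption that senders use one of the four common defaults are mine) would be:

    # Guess the sender's initial TTL as the smallest common default that is
    # >= the TTL observed at the receiver, assuming the route is well under 32 hops.
    def guess_initial_ttl(observed_ttl):
        for initial in (32, 64, 128, 255):
            if observed_ttl <= initial:
                return initial
        return None

    guess_initial_ttl(52)   # -> 64, typical of Linux/macOS defaults
    guess_initial_ttl(113)  # -> 128, typical of modern Windows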
What is the reasoning behind TTL counting down instead of up, anyway? Wouldn't we generally expect those routing the traffic to determine if and how to do so?
To allow the sender to set the TTL, right? Without adding another field to the packet header.
If you count up from zero, then you'd also have to include in every packet how high it can go, so that a router has enough info to decide if the packet is still live. Otherwise every connection in the network would have to share the same fixed TTL, or obey the TTL set in whatever random routers it goes through. If you count down, you're always checking against zero.
The primary purpose of TTL is to prevent packets from looping endlessly during routing. If a packet gets stuck in a loop, its TTL will eventually reach zero, and then it will be dropped.
That doesn't answer my question. If it counted up then it would be up to each hop to set its own policy. Things wouldn't loop endlessly in that scenario either.
Then random internet routers could break internet traffic by setting it really low, and the user could not do a thing about it. They technically still can, by discarding all traffic whose TTL is below some threshold, but they don't. The idea that they should set their own policy could fundamentally break network traffic flows if it were ever put into practice.
This is a wild guess, but: I am under the impression that the early internet was built somewhat naively, so I guess the sender sets it because the sender knows best how long the packet stays relevant for, and when it makes sense to fail or retry rather than wait.
It does make traceroute feasible, where each packet is fired with one more available hop than the last, whereas counting up wouldn't. Of course, then we'd just start with max hops and walk the number down, I suppose. I still expect it would be inconvenient during debugging for various devices to have various ceilings.
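For reference, a minimal traceroute sketch built on that idea (assumes Linux/IPv4 and root for the raw ICMP socket; "example.com", the port, and the timeout are arbitrary):

    import socket

    # Send probes with TTL 1, 2, 3, ... and read the ICMP "time exceeded"
    # message each router returns when it decrements the TTL to zero.
    def traceroute(dest, max_hops=30, port=33434):
        dest_ip = socket.gethostbyname(dest)
        for ttl in range(1, max_hops + 1):
            recv = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
            recv.settimeout(2)
            send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
            send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
            send.sendto(b"", (dest_ip, port))
            try:
                _, (hop_ip, _) = recv.recvfrom(512)
            except socket.timeout:
                hop_ip = "*"
            finally:
                send.close()
                recv.close()
            print(f"{ttl:2d}  {hop_ip}")
            if hop_ip == dest_ip:
                break

    traceroute("example.com")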