I don't know how you do it, but when sending data from a file, I read chunks of the file and send each chunk as I read it, not the entire thing. These chunks can be very small, particularly since the MTU over a network is around 1500 bytes (unless you're using jumbo frames). So syncing 10 files in tandem (10 threads) would need 10 * 1500 bytes for the read buffers, roughly 15 KB, plus a little overhead for storing the file pointers. Even you can see that's a tiny amount of RAM.
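To make it concrete, here's a minimal sketch in Python of what I mean by chunked streaming (a hypothetical `send_file` helper over a plain TCP socket; real sync clients obviously differ in the details). Peak buffer usage is one chunk per open file, no matter how large the file is:

```python
import socket

CHUNK_SIZE = 1500  # roughly one MTU; a real client might pick a larger buffer

def send_file(path: str, sock: socket.socket) -> None:
    """Stream a file over a socket in small chunks, never holding the whole file in memory."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)  # read at most CHUNK_SIZE bytes at the current offset
            if not chunk:               # empty bytes object means end of file
                break
            sock.sendall(chunk)         # block until the whole chunk is handed to the kernel
```

Even if you bumped the buffer to something more sensible like 64 KB per file, 10 parallel transfers would still sit well under a megabyte.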
Or are you living in a world where your internet upload speed is somehow faster than reading from disk or memory, and your network interface has to sit idle waiting on disk/memory reads?
Are they reading the synced files entirely into memory or something stupid like that?
I don't. It's still a limited resource, and Dropbox is not the only thing people run.