There are Thunderbolt adapters; as another comment noted, few other interfaces have the bandwidth.
There are a lot of other bandwidth issues that aren't sorted out yet, too. If the adapter and drivers don't have TCP offloading, you'll be very hard pressed to get more than 3Gbit for anything other than a UDP dump.
Most of the OS network stacks aren't properly tuned for 10Gbit either.
I'd say give it a few years; as it becomes more mainstream, things should improve.
I've had the pleasure of testing my rMBP on a 10GbE internet-facing port; I can get 4Gbps+ on a browser-based speed test without any tuning. (And the full 10Gbps with iperf.)
Well, it doesn't apply to OP's question, which is why I didn't mention it directly, but I know for sure that most enterprise-class virtualization software doesn't accelerate through to the VMs. (Yes, there is SR-IOV, but then you lose hot migration and other HA features. SolarFlare had it a while back, but it no longer works.)
I didn't mean to suggest that it's hard to find, just that there are a host of things that need to be in place before you'll get the expected speeds. This isn't a knock on the tech or any manufacturer; it's just that it's not mature enough yet to be like 1Gbit, where you plug it in and almost everything starts running at 100Mbyte/s.
I doubt IP/TCP offloading even makes that big a difference if SIMD (SSE2+ or AVX2+) can be used. A single CPU core is probably capable of TCP checksumming at more than 100 Gbps.
Of course, it's a completely different story without SIMD. A naive traditional checksum loop with a register dependency stall is just not going to be fast.
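To make that concrete, here's a minimal sketch in C (not from any real kernel; odd-length and pseudo-header handling omitted) contrasting a naive one-accumulator loop with a multi-accumulator version that superscalar/SIMD hardware can actually overlap:

    #include <stdint.h>
    #include <stddef.h>

    /* Naive RFC 1071-style one's-complement sum: each iteration depends
     * on the previous one through `sum`, so the CPU can't overlap them. */
    uint16_t checksum_naive(const uint16_t *data, size_t words)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < words; i++) {
            sum += data[i];
            sum = (sum & 0xFFFF) + (sum >> 16);  /* fold the carry every step */
        }
        return (uint16_t)~sum;
    }

    /* Same checksum with independent accumulators: the iterations no longer
     * form one long dependency chain, so the hardware can run several
     * additions per cycle. All carries are folded once at the end, which
     * RFC 1071 explicitly allows. */
    uint16_t checksum_parallel(const uint16_t *data, size_t words)
    {
        uint64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        size_t i = 0;
        for (; i + 4 <= words; i += 4) {
            s0 += data[i];
            s1 += data[i + 1];
            s2 += data[i + 2];
            s3 += data[i + 3];
        }
        uint64_t sum = s0 + s1 + s2 + s3;
        for (; i < words; i++)
            sum += data[i];
        while (sum >> 16)
            sum = (sum & 0xFFFF) + (sum >> 16);  /* fold all carries */
        return (uint16_t)~sum;
    }

The second version is exactly the kind of loop a compiler will happily auto-vectorize with SSE2/AVX2, which is where a 100Gbps-per-core ballpark becomes plausible.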
The L3 checksum is useless to offload because the IP header is small and the kernel has to read/write all of its fields anyway.
The L4 checksum covers the TCP/UDP payload, which the kernel can avoid touching entirely if the NIC computes it.
When a TCP sender uses sendfile(), the kernel does a DMA read from storage into a page if the data is not already in memory (in the so-called page cache), and just asks the network card to send that page, prepended with an ETH/IP/TCP header. That only works if the NIC can checksum the TCP packet contents and update the header.
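A rough sketch of what that looks like from userspace on Linux (error handling trimmed; `sock` is assumed to be an already-connected TCP socket):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/sendfile.h>
    #include <unistd.h>

    /* Zero-copy send: pages go from the page cache straight to the NIC;
     * with checksum offload the CPU never touches the payload bytes. */
    int send_file_zero_copy(int sock, const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;

        off_t offset = 0;
        off_t len = lseek(fd, 0, SEEK_END);

        while (offset < len) {
            ssize_t n = sendfile(sock, fd, &offset, len - offset);
            if (n <= 0) {
                perror("sendfile");
                close(fd);
                return -1;
            }
        }
        close(fd);
        return 0;
    }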
If the network card can do TCP segmentation offload, the kernel does not have to repeat this operation for each 1500-byte packet: it can fetch a large chunk of data from disk, and the NIC will split it into smaller packets by itself.
The benchmarking I had done had the [non-offloaded] bandwidth peak out around 2.5-3Gbit/s. Could have been trouble with the drivers, or a naive implementation, or any of a number of things. Didn't dig into it too deeply at the time as the offloading drivers worked fine.
Simple checksum computation and/or verification most cards can indeed do (sometimes with restrictions: not for IPv6, not for VLANs, ...).
The other kind of offloading the kernel can use is TCP Segmentation Offload (TSO), which is much more complex to implement in hardware, and you won't find it on cheap NICs (like Realtek's).
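You can see what your card claims to support with `ethtool -k <iface>`; under the hood that's just the SIOCETHTOOL ioctl. A minimal sketch of querying TSO on Linux (the interface name is a placeholder):

    #include <linux/ethtool.h>
    #include <linux/sockios.h>
    #include <net/if.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        const char *ifname = "eth0";  /* placeholder interface name */
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0)
            return 1;

        /* ETHTOOL_GTSO asks the driver whether TSO is enabled. */
        struct ethtool_value ev = { .cmd = ETHTOOL_GTSO };
        struct ifreq ifr;
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
        ifr.ifr_data = (char *)&ev;

        if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
            printf("%s: TSO %s\n", ifname, ev.data ? "on" : "off");
        else
            perror("SIOCETHTOOL");

        close(fd);
        return 0;
    }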
Which raises an interesting question: why don't we have Thunderbolt switches for our networks? The switch could convert to Ethernet, reducing the cost of servers and giving desktops lower-cost access to higher speeds. Yes, I know about the security issue, but I'm referring to corporate or internal networks.
Windows doesn't keep up with it well. We had a lot of issues with VDI workstations running 10Gb interfaces: lots of weird latency problems, solved by presenting a 1Gb adapter.
Completely the opposite of my experience with PyParallel; I can saturate 2x 10Gb Mellanox ConnectX-2 cards ($35 each off eBay) via TransmitFile() with about 3% CPU use.
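For those who haven't seen it, TransmitFile() is the Winsock analogue of sendfile(); a rough sketch of the call (error handling trimmed, `sock` assumed to be a connected TCP socket):

    #include <winsock2.h>
    #include <mswsock.h>
    #pragma comment(lib, "ws2_32.lib")
    #pragma comment(lib, "mswsock.lib")

    /* Hand the kernel a file handle and let it stream the contents to the
     * socket; like sendfile(), the payload never crosses into userspace. */
    BOOL send_file_win(SOCKET sock, const wchar_t *path)
    {
        HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                                  OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
        if (file == INVALID_HANDLE_VALUE)
            return FALSE;

        /* nNumberOfBytesToWrite = 0 means "send the whole file";
         * chunk size 0 lets the stack pick. */
        BOOL ok = TransmitFile(sock, file, 0, 0, NULL, NULL, 0);
        CloseHandle(file);
        return ok;
    }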
Probably a combination of lack of need until recently (125MB/sec is plenty to keep up with a classic disk), parts cost, and power consumption.
I can't think of anything except connecting to a _fast_ SAN that would require a 10GbE port in a laptop. Maybe something specialized for a network engineer, but even then it's probably easier to buy dedicated equipment for line-rate port monitoring.
Even a cheap $100-$200 SSD can do over 2Gbytes/sec these days:
256060514304 bytes (256 GB) copied, 95.4762 s, 2.7 GB/s
The M.2 interface has finally shrugged off the SATA bottleneck for commodity hardware. It's common on new motherboards and new laptops, and I recently read there's similar circuitry in the new iPhone 6s.
A very cheap RAID5 setup with 4 spinning disks and a filesystem like ZFS or btrfs should get you about 300-500 MB/s, so 10GbE is good for nodes with that setup.
The only way you're getting 300-500MB/s on a 4-disk ZFS RAID5/raidz is with very, very fast SSDs and a very, very fast CPU. Not exactly "very cheap". (The compute and I/O overhead for raidz is significant.)
This is... not my experience at all. I have a RAIDZ2 comprised of six 4TB Seagate drives (the 5900RPM variety) and it can do about 600MB/s read/write with moderate CPU usage on an Intel i3. A mirrored zpool of two Samsung 850 EVO SSDs can do nearly 1GB/s read/write. That's not a particularly expensive setup.
Building it in doesn't really make sense. Nearly no users, power hungry (although that has gotten better), bulky. And what connector do you use? 10GBASE-T? Not really common, and power intensive. An SFP+ port? Really bulky.
There are Thunderbolt adapters. USB 3.0 only has 4Gbit/s available, and ExpressCard only 2Gbit/s, so neither is really a good option.
Don't know about OP, but I could use it to connect to fast storage on the network and do some video finishing. The laptop has sufficient power, but no storage of that capacity.
Depends what I'm doing. For editing it's usually just fine. Color and FX work require higher bandwidth. For example, at 2K a single frame is 12 MBytes (times 24 or 25 per second, depending on the project). And we are at the dawn of an era where we are talking about 4K DCI or QHD mastering for all. That would be 48 MBytes per frame. So we're looking at between 300 and 1200 MBytes per second for a single workstation. Gigabit is not up to it.
If you say so. Explain how I am running 2K DPX 10-bit in realtime with color correction applied on Lustre, then? Do my machine and Autodesk software perform magic?
Nothing much other than copying lots of stuff, but the tech is a decade-plus old, and I believe in future-proofing when buying hardware that I'll use for years. I would pay an extra hundred for the port, but found zero options.
Until recently Ethernet only increased in speed by factors of 10. This made sense when it was increasing 10x every 5 years, because it takes years for standards and products to be developed. But now there are intermediate speeds like 2.5G, 5G, 25G, and 40G.
I know it is overkill; it's just that it has been about ten years already, hasn't it? Isn't it cheap enough yet? Can't a modern SSD keep up with it?