Probably a combination of lack of need until recently (gigabit's 125 MB/s is more than enough to saturate a classic spinning disk), parts cost, and power consumption.
I can't think of anything except connecting to a _fast_ SAN that would require a 10GbE port in a laptop. Maybe something specialized for a network engineer, but even then it's probably easier to buy dedicated equipment for line-rate port monitoring.
Even a cheap $100-$200 SSD can do over 2 GB/s these days:

    256060514304 bytes (256 GB) copied, 95.4762 s, 2.7 GB/s
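That's ordinary dd output; a sequential read of the raw device along these lines will get you the same kind of number (the device name is just an example, and iflag=direct bypasses the page cache so you're measuring the drive rather than RAM):

    # sequentially read the whole NVMe device and report throughput
    sudo dd if=/dev/nvme0n1 of=/dev/null bs=1M iflag=direct status=progress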
The M.2 interface (carrying NVMe over PCIe) has finally shrugged off the SATA bottleneck for commodity hardware. It's common on new motherboards and new laptops, and I recently read there's similar circuitry in the new iPhone 6s.
A very cheap RAID5 setup with 4 spinning disks and a filesystem like ZFS or btrfs should get you about 300-500 MB/s, so 10GbE is a good fit for nodes built that way.
The only way you're getting 300-500 MB/s on a 4-disk ZFS RAID5/raidz is with very, very fast SSDs and a very, very fast CPU. Not exactly "very cheap". (The compute and I/O overhead for raidz is significant.)
This is... not my experience at all. I have a RAIDZ2 made up of six 4TB Seagate drives (the 5900RPM variety) and it can do about 600 MB/s read/write with moderate CPU usage on an Intel i3. A mirrored zpool of two Samsung 850 EVO SSDs can do nearly 1 GB/s read/write. That's not a particularly expensive setup.
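If anyone wants to sanity-check numbers like these, something along these lines is the usual quick-and-dirty test (pool and device names here are made up, and compression should be off so the zeros aren't just compressed away):

    # 4-disk raidz pool; use real device paths (by-id names are safer than sdX)
    sudo zpool create tank raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd
    sudo zfs set compression=off tank

    # rough sequential write test; conv=fdatasync makes dd wait until the data hits disk
    sudo dd if=/dev/zero of=/tank/testfile bs=1M count=16384 conv=fdatasync

fio will give you more realistic mixed-workload numbers, but dd is fine for ballparking sequential throughput.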