
Wasn't someone here saying these SSDs consume an abnormal amount of power to achieve those (2x?) faster speeds than current PCIe SSDs? Or did that apply only to Intel's SSDs?



The Intel 750 that was linked on here earlier in the week does use more power than you would expect for a 'consumer' part. The stated reason was that it uses an 18-channel controller - apparently the same one they use in their higher-end enterprise offerings - and is not a low-power part.

From [0]:

"The controller is the same 18-channel behemoth running at 400MHz that is found inside the SSD DC P3700. Nearly all client-grade controllers today are 8-channel designs, so with over twice the number of channels Intel has a clear NAND bandwidth advantage over the more client-oriented designs. That said, the controller is also much more power hungry and the 1.2TB SSD 750 consumes over 20W under load, so you won't be seeing an M.2 variant with this controller."

[0]: http://anandtech.com/show/9090/intel-ssd-750-pcie-ssd-review...


Samsung claims the opposite: NVMe SSDs provide low energy consumption to help data centers and enterprises operate more efficiently and reduce expenses. Power-related costs typically represent 31% of total data center costs, with memory and storage (including cooling) consuming 32% of total data center power. NVMe SSDs require lower power (less than 6W active) and deliver energy efficiency (IOPS/Watt) 2.5x that of SATA.

http://www.samsung.com/global/business/semiconductor/product... http://www.pcworld.com/article/2866912/samsungs-ludicrously-...
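
The efficiency number in that pitch is just IOPS divided by active watts. A quick back-of-the-envelope sketch - the IOPS and wattage figures below are made up for illustration, not measurements of any particular drive:

    # IOPS/Watt comparison with made-up numbers (not measured values).
    def iops_per_watt(iops, active_watts):
        return iops / active_watts

    # Hypothetical SATA SSD: ~90K random-read IOPS at ~3 W active power.
    sata = iops_per_watt(90_000, 3.0)

    # Hypothetical NVMe SSD: ~300K random-read IOPS at ~5.5 W active power
    # (staying under the "less than 6W" figure quoted above).
    nvme = iops_per_watt(300_000, 5.5)

    print(f"SATA: {sata:,.0f} IOPS/W, NVMe: {nvme:,.0f} IOPS/W, "
          f"ratio: {nvme / sata:.1f}x")

With these particular made-up inputs the ratio comes out around 1.8x, so how close you get to Samsung's 2.5x depends entirely on which two drives you compare.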


And as far as I understand, isn't NVMe SSD just a "different name" for PCIe SSD? PCIe being the protocol (that's already being used for graphics cards), and NVMe the standard for SSDs to understand that protocol.


There's more to it than that. NVMe is a higher layer technology than PCIe.

SATA drives connect to the host system over a SATA PHY link to a SATA HBA that itself is connected to the host via PCIe. The OS uses AHCI to talk to the HBA to pass ATA commands to the drive(s).

PCIe SSDs that don't use NVMe exist, and work by unifying the drive and the HBA. This removes the speed limitation of the SATA PHY, but doesn't change anything else. The OS can't even directly know that there's no SATA link behind the HBA; it can only observe the higher speeds and 1:1 mapping of HBAs to drives. Some PCIe SSDs have been implemented using a RAID HBA, so the speed limitation has been circumvented by having multiple SATA links internally, presented to the OS as a single drive.

NVMe standardizes a new protocol that operates over PCIe, where the HBA is permanently part of the drive, and there's a new command set to replace ATA. New drivers are needed, and NVMe removes many bottlenecks and limitations of the AHCI+ATA protocol stack.
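
As a concrete (Linux-only) illustration of the separate driver stack: the kernel's NVMe driver exposes its own ioctl interface that has no AHCI/ATA counterpart. This is just a sketch - the device path is an assumption, and it needs a real NVMe namespace plus enough privileges to open it:

    import fcntl, os

    # NVME_IOCTL_ID from <linux/nvme_ioctl.h> is _IO('N', 0x40); it returns
    # the namespace ID and only exists on NVMe block devices, so a SATA/AHCI
    # device fails with ENOTTY.
    NVME_IOCTL_ID = 0x4E40

    def nvme_namespace_id(path="/dev/nvme0n1"):  # device path assumed
        fd = os.open(path, os.O_RDONLY)
        try:
            return fcntl.ioctl(fd, NVME_IOCTL_ID)
        finally:
            os.close(fd)

    print(nvme_namespace_id())  # typically prints 1 for the first namespace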


Not quite. PCIe is just the bus transport protocol. PCIe SSDs still need a protocol for describing data operations between the OS and the drive. Currently they use the same protocols as those designed for HDDs, and those protocols make a lot of tradeoffs and assumptions about disk access time. The reason NVMe is exciting is that it's a brand-new data protocol, designed with low-latency, SSD-style storage in mind. We're already seeing speeds much higher than existing PCIe SSDs can manage (3+ GB/s!)
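
To put rough numbers on one of those assumptions - the queueing model - here's a tiny sketch using the spec maximums (real controllers expose far fewer NVMe queues, so treat it as an upper bound):

    # Outstanding-command capacity, AHCI vs NVMe (spec maximums).
    ahci_queues, ahci_depth = 1, 32           # one command list, 32 NCQ slots
    nvme_queues, nvme_depth = 65_535, 65_536  # up to 64K I/O queues, 64K deep

    print(f"AHCI: {ahci_queues * ahci_depth} outstanding commands")
    print(f"NVMe: {nvme_queues * nvme_depth:,} outstanding commands (spec max)")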


Most current PCIe SSDs present themselves to the PCI bus and operating system as an AHCI controller with a SATA SSD attached to it. The advantage of this is that it's well supported by firmware (BIOS/EFI) and operating systems (stock drivers). Such SSDs inherit the limitations of that architecture, which was designed for spinning platters. NVMe is a fresh start in that regard.
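
You can see that distinction from software, too: a legacy PCIe SSD shows up with the standard AHCI class code, while an NVMe controller advertises its own. A rough Linux-only sketch (sysfs layout assumed, not portable):

    import glob, pathlib

    # PCI class codes (base class 0x01 = mass storage):
    #   0x010601 = SATA controller with AHCI programming interface
    #   0x010802 = non-volatile memory controller, NVM Express interface
    KNOWN = {0x010601: "AHCI (SATA)", 0x010802: "NVMe"}

    for dev in glob.glob("/sys/bus/pci/devices/*"):
        cls = int(pathlib.Path(dev, "class").read_text(), 16)
        if cls in KNOWN:
            print(f"{pathlib.Path(dev).name}: {KNOWN[cls]}")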


PCIe and NVMe are both full-fledged protocols, one on top of the other. Previous products used custom protocols on top of PCIe.

PCIe is packet-level; NVMe is a block-device interface.


That's just a design decision for those models. It's still PCIe; nothing about the interface itself causes wasted power.


Last time I looked at the SATA PHY it seemed pretty wasteful. During every transaction the back channel was running at full speed blasting "R_OK R_OK R_OK" to the transmitting party. Not a checksum, mind you, just a bunch of magic DWORDs indicating that the drive was receiving (it would use a checksum too once the transmission was over, of course). I guess it's an easy way to keep the PLLs locked but it seemed like a pretty huge piece of low-hanging fruit towards the goal of reducing power consumption.

In comparison, PCIe is a much more sophisticated serial protocol. It's an entire damned packet network with addresses, subnets (so to speak), routing, retries, and a credit system for bandwidth sharing. Unlike Ethernet, which typically tops out at ~75% of its theoretical bandwidth, PCIe typically reaches ~95% of its theoretical bandwidth. Crazy stuff.
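
Rough sketch of where a figure like ~95% can come from on a PCIe 3.0 x4 link - the payload and per-packet overhead sizes here are assumptions, not spec-exact accounting:

    # PCIe 3.0 x4 goodput estimate with assumed TLP sizes.
    GT_PER_LANE   = 8.0        # PCIe 3.0 signalling rate per lane, GT/s
    ENCODING      = 128 / 130  # 128b/130b line code
    PAYLOAD_BYTES = 512        # assumed TLP payload size
    OVERHEAD      = 24         # assumed header/sequence/CRC bytes per TLP

    line_rate_GBps = 4 * GT_PER_LANE * ENCODING / 8           # ~3.94 GB/s
    packet_eff = PAYLOAD_BYTES / (PAYLOAD_BYTES + OVERHEAD)   # ~0.955

    print(f"x4 goodput ~= {line_rate_GBps * packet_eff:.2f} GB/s "
          f"({packet_eff:.1%} of the post-encoding line rate)")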

I'd bet good money that the PCIe PHY can beat the crap out of the SATA PHY on energy/bit and that the disparity will only widen with time.


I'm usually getting 95%+ with gigabit ethernet. More than that with jumbo frames. Well, not with Realtek, but Intel and Broadcom chipsets are just fine.



