SSDs: A gift and a curse (laur.ie)
277 points by mirceasoaica on June 2, 2015 | 94 comments



Of course SSD firmware is buggy. You know why? Because any half-decent electrical and computer engineering team can slap a NAND flash controller and some flash chips on a PCB, take the controller manufacturer's reference firmware, tweak the dozens of knobs it provides (ignore FLUSH commands, change the amount of reserved sectors, disable this, enable that, etc.), change the device ID strings to "Company Foobar SSD 2000", and release the product on the market with minimal QA and testing. And that's exactly what happened over the last 5-7 years, with dozens and dozens of companies around the world designing SSDs.

But with traditional HDDs, the amount of engineering and domain-specific knowledge needed to manufacture and assemble the platters, moving heads, casing, etc., is such that there are only a handful of companies around the world who can do it (Seagate, WDC, Hitachi, etc.). They have decades of experience doing that, so the firmware side of HDDs happens to be very robust: these companies have seen everything that can and will go wrong in an HDD.

So it boils down to this: which would you trust more, an HDD firmware code base that is 20 years old, or an SSD firmware code base that is 4 years old?

Combine this with the fact that SSD firmware is much more complex (a flash translation layer must minimize write amplification, do wear leveling, spread writes across multiple chips, etc.), and you are guaranteed that many SSDs on the market are going to be very buggy.
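To make the complexity concrete, here's a toy page-mapped FTL in Python (purely illustrative, with made-up geometry and no ECC, bad-block handling, or power-loss logic): rewriting a few hot logical pages forces out-of-place writes, garbage collection, and wear tracking, and the extra internal writes show up as write amplification.

    PAGES_PER_BLOCK = 4
    NUM_BLOCKS = 8

    class ToyFTL:
        def __init__(self):
            self.mapping = {}                    # logical page -> (block, page)
            self.erase_count = [0] * NUM_BLOCKS  # wear per block
            self.valid = [[False] * PAGES_PER_BLOCK for _ in range(NUM_BLOCKS)]
            self.next_free = [0] * NUM_BLOCKS    # next unwritten page per block
            self.host_writes = 0
            self.nand_writes = 0

        def _pick_block(self):
            # wear leveling: least-erased block that still has free pages
            free = [b for b in range(NUM_BLOCKS) if self.next_free[b] < PAGES_PER_BLOCK]
            if not free:
                self._garbage_collect()
                free = [b for b in range(NUM_BLOCKS) if self.next_free[b] < PAGES_PER_BLOCK]
            return min(free, key=lambda b: self.erase_count[b])

        def _garbage_collect(self):
            # erase the block with the fewest valid pages, rewriting its live data
            victim = min(range(NUM_BLOCKS), key=lambda b: sum(self.valid[b]))
            live = [lp for lp, (b, _) in self.mapping.items() if b == victim]
            self.erase_count[victim] += 1
            self.valid[victim] = [False] * PAGES_PER_BLOCK
            self.next_free[victim] = 0
            for lp in live:                      # extra NAND writes: write amplification
                self._program(lp)

        def _program(self, logical_page):
            block = self._pick_block()
            page = self.next_free[block]
            self.next_free[block] += 1
            self.valid[block][page] = True
            self.mapping[logical_page] = (block, page)
            self.nand_writes += 1

        def write(self, logical_page):
            if logical_page in self.mapping:     # invalidate the stale copy
                b, p = self.mapping[logical_page]
                self.valid[b][p] = False
            self.host_writes += 1
            self._program(logical_page)

    ftl = ToyFTL()
    for i in range(200):
        ftl.write(i % 20)   # keep rewriting 20 hot pages (plenty of spare space here)
    print("write amplification:", ftl.nand_writes / ftl.host_writes)

Real firmware layers ECC, bad-block management, multi-channel striping, caching, and power-loss handling on top of this, which is where the state space (and the bug count) explodes.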


That is a nice theory, except for the fact that firmware bugs have hit big players equally badly, if not worse. Intel and Samsung come easily to mind, and SandForce was notorious for not getting their own firmware to work (for their own controllers).


My theory predicts that even big players will have bugs (if only due to my last point: complexity). But I insist that small players will tend to have even more bugs.


Since when has Intel been hit equally?


The "8MB bug" (https://communities.intel.com/thread/24205) was the most notorious, but I remember also some bricked Intel SSDs too.


Ignoring that, Intel is rock solid. The larger issue isn't incompetents making SSDs, but SSDs being sold solely on performance. Intel has always been a step or two behind the performance game but manages to deliver some really reliable SSDs. I have yet to see a consumer Intel fail, which is incredible, as that kind of reliability outside of the super-expensive PCIe cards is unheard of.

I think we're a few more years away from shaking out this industry and leaving the growing pains behind. No idea who will be left standing, but I suspect Intel will probably be around.


I managed to brick an Intel 320 last week. Going to see if they'll replace it under warranty - those drives had their warranty extended to 5 years, so it should still be covered.

(Apparently the drive really didn't like being issued an 'hdparm --security-set-pass X /dev/sdb' command. I was intending to do a secure wipe, for which setting a password is a prerequisite, but the drive never came back from setting the password. After that it just returned junk data to any command, including 'hdparm -I /dev/sdb', and failed the power-on BIOS SMART test.)


For example, the Intel 520 and 530 series have known issues. Besides the lower performance after some amount of writes, they also tend to randomly "disappear" from the system, which is much worse. Google it, it should be easy to find many related topics.

I don't know if this only affects the SATA SSDs or not, but it happens a lot. (BTW, the random disappearing also affects some other SSDs from other brands.)


The Intel SSD 520 and 530 use SandForce controllers, so Intel started with a big mess and spent months trying to clean it up before shipping the drives. Clearly they missed a few bugs, and they couldn't work around every mistake and design limitation in the controller's hardware.

Intel's in-house designs have an extremely good track record. I'm only aware of two major bugs.


We saw some performance issues with Intel's DC S3500 firmware. But they were able to fix them without too much hassle.


Oh, there used to be a lot of HDD manufacturers around 1985; all new products like this have an explosion of competitors, and then they all get bought out or otherwise exit the market.

There used to be something like one hundred car manufacturers in the USA, around a century ago!

Another way to look at oligopolies is that they no longer need to compete on quality or specs if they slack in unison, and the collapse and commodification of their product means they'll get nickel-and-dimed to death, so I wouldn't expect perfection from them; they find ways to screw up. You can't "wish" or "push" commodity status onto a market if mother nature and the economy in general prevent it from happening; forcing commodity-style management onto a non-commodity product/market just makes a mess.


And, unlike a fly-by-night SSD manufacturer that's here today and gone tomorrow, they have a reputation to protect and they suffer big time if a batch of faulty drives hits the market. See 'Deathstar' and other re-runs of that concept. So they work very hard to make sure that doesn't happen to them.


Not with Seagate. I have a drive waiting to be fixed because of their silly BSY bug that renders a drive inaccessible. I'll never buy their garbage again.


50%+ failure rate on 3TB Seagates from Backblaze studies.

Something _horrible_ happened on the 3TB Seagate batch. 4TB and 2TB are reasonable... but 3TB... wtf happened?


This. I had one at work fail on me after 10 months of usage; got almost everything back from backups, but the timing was incredibly bad and I had to work some really late nights to pull through. Got a free replacement, but I'm pretty distrusting of it, so now it's in RAID1 with a WD drive. That may've been a decent one, though, because it's been working for two years now.


It would be interesting if SSD firmware were all open-sourced.

Manufacturers could then eventually converge all of their development and QA efforts on a few different codebases designed for different tradeoffs rather than everybody rolling their own thing and learning everybody else's lessons the hard way.

But I guess as long as custom firmware remains a major competitive advantage, this would never happen.


From personal experience I would trust neither to be infallible. I remember the IBM Deathstar drives, my Maxtor drives repeatedly dying along with their RMA replacements & the issues Seagate seems to have with 1.5TB & 3TB drives. When was the last time you had an SSD die from a dead motor or crashed heads?

Do you have intimate knowledge of the firmware writing process for hard drive manufacturing? Do you think knobs aren't twisted and codebases aren't rewritten to take into account the changes required for the latest in magnetic storage? Are geometry calculations for perpendicular and shingled media less complicated than a flash translation layer? Also, just because something is old and hasn't broken yet doesn't mean it's well written, bug free, or will work in the future. This isn't to say that hard drives are reliable, as they're notoriously the least reliable thing in your computer.


Sounds almost like having free software SSD firmware would be a nice thing. I just love how I have microcontrollers on all my persistent storage devices now that operate as a black box made by incompetent or possibly malicious developers.


You've always had this.


Not back in the days of loading programs from cassette tape I didn't.


Excuse me but most of what you typed went right over my head. Where can one start learning about SSD/HDD firmware and most of what you said?


The OpenSSD project[1] might be worth a look. If you don't fancy paying thousands of dollars, you can buy SSD controller boards and raw NAND flash chips off Taobao and assemble them yourself. There's plenty of information on various Chinese forums (Mydigit, Upan, etc.), admittedly geared more towards USB flash drives. Some older flash chips are TSOP rather than BGA, which makes hand soldering much easier.

[1] http://www.openssd-project.org/wiki/The_OpenSSD_Project


Firmware is the software on the hardware, basically. If you take a hard disk and look underneath it, there is a PCB with chips on it. The firmware lives in there and is responsible for doing the job of reading/writing data to the disk in a specific format.

If you take the platters out of one hard disk and put them into another hard disk with different firmware, the other hard disk's PCB won't be able to read or write them. I found this out by having a dead Hitachi drive (yes, the IBM "Deathstar" disks after they were sold to Hitachi), so I bought another identical drive and swapped PCBs, but on startup the read/write head would not settle, as there was a mismatch between the PCB firmware number and the firmware identifier written at the start of the disk. So, although I thought they were identical disks, they were not.


Early SSD failures were all firmware bugs. As time goes on that has receded into noise. Now SSD failure stats show sensible wear patterns.


I think StorageReviews.com has an article on exactly this.


Parts of this sound more like hardware RAID controller issues than SSD issues, which is why I typically avoid hardware RAID in production environments unless there's a specific reason for it. RAID controllers tend to be buggy pieces of shit, usually implementing some RAID method that's more-or-less proprietary and unique even between different RAID cards from the same vendor (meaning that if your RAID controller fails, you might as well kiss your data goodbye, since the replacement - 9 times out of 10 - won't be able to make sense of its predecessor's RAID setup).

Also, RAID6 is a bad idea, almost as bad as RAID5. There have been numerous studies and reports [0] indicating that both are very susceptible to subtle bit errors ("cosmic rays"), and this is made even worse when SSDs are involved. If you need absolute data integrity, RAID1 is your only option; if you need a balance between integrity, performance, and capacity, go with RAID10, which is still leaps-and-bounds better than RAID5/6.

[0]: http://www.miracleas.com/BAARF/Why_RAID5_is_bad_news.pdf


To expand: in my 20-year experience, no small-computer RAID 5/6 controller I have had experience with has ever saved any data, ever. All failures are hard and non-recoverable. Which raises the question: why use them at all? I mirror only.


RAID is there for the guaranteed uptime in your SLA. With current storage density, RAID5/6 is probably more fragile than single disks because UREs are very likely during a rebuild. Nonetheless, having a degraded array is probably better than having an offline system, and it will buy you some time to migrate. Mirrors are ideal, but it is hard to justify the upfront cost.
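Back-of-the-envelope numbers (assumed drive size and the commonly quoted consumer spec of one URE per 10^14 bits read; not figures from this thread) show why a rebuild is so likely to trip over a URE:

    # chance of at least one unrecoverable read error while rebuilding
    # a 6-disk RAID5 of 3 TB consumer drives (all values assumed)
    ure_per_bit = 1e-14                 # consumer spec: 1 error per 1e14 bits
    surviving_drives = 5
    bits_per_drive = 3e12 * 8           # 3 TB
    bits_read = surviving_drives * bits_per_drive
    p_clean_rebuild = (1 - ure_per_bit) ** bits_read
    print(f"P(URE during rebuild) ~ {1 - p_clean_rebuild:.2f}")   # ~0.70

Enterprise drives rated at 10^-15 fare an order of magnitude better, which is a big part of why parity RAID on large consumer disks is considered fragile.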


If you're trying to maintain guaranteed uptime, you're probably better off with machine-level redundancy rather than disk-level redundancy.

> Mirrors are ideal, but it is hard to justify the upfront cost.

The upfront cost of redundant backups can also be high, but that doesn't mean one should forego them (unless their data isn't worth it, in which case, why bother collecting it?).

Going cheap on things is fine in the home computing realm, but once you're in the business realm, going cheap almost inevitably becomes more expensive in the long run.


For small home machines, the storage cost is minimal. All my home machines are mirrored, for around $100 each. It's saved me several times.


I've never understood this. At home, you don't need uptime, thus you're much better off with a real offline backup instead of a mirror.


You don't need uptime, but it's nice to have. Repairing a degraded mirror takes a lot less time than restoring from a backup in the vast majority of cases.


I have seen data recovery. Once. 22 years into my career, and it was still a home system. Which I believe proves your point.


A modern filesystem, like ZFS, will protect against random bit errors.

RAID 5/6 are still not appropriate for today's large disk sizes, but raidz3 (three parity disks instead of one or two) is.


Depending on the size of the array, however, even having three parity disks is likely insufficient. Using parity instead of mirroring (or a mirroring+striping approach like RAID10) - i.e. RAID3, RAID5/6, and (IIRC) RAID-Z - assumes that disk failures are rare; in reality, the probability of subsequent disk failures increases significantly once one disk failure occurs (which makes sense: unless you're mixing/matching vendors (which has its own problems), you're probably buying a lot of disks at the same time, probably from the same or similar batches, and therefore with the same failure tendencies). For things like SANs and other large arrays of disks (where the use of ZFS is more prevalent), the difference in safety between two and three parity disks is negligible. You could keep adding more and more, but at that point, you might as well just use RAID10. If you're running a smaller array, you also might as well just use RAID10 (or RAID1 if you only have 2 disks in the array, but that's playing it dangerous for the same reason why RAID5/6/Zx is playing it dangerous).

This is part of the reason why ZFS supports a variety of different RAID and RAID-like configurations. If you're using a pure-ZFS approach, I'd strongly recommend creating the zpool as striped mirrors (i.e. RAID10) in order to get the benefits of ZFS' data checksumming and not be susceptible to such a high risk of data loss.


I am surprised they haven't mentioned Crucial SSDs. With cheap drives like the MX100 having features such as power loss protection and Opal 2.0 support, I preferred these over the slightly faster Samsung products at the time.


Turns out, the Crucial drives' "power loss protection" doesn't actually preserve data "in-flight" at the moment of power loss. It just prevents data "at rest" from being corrupted.

This appears to apply to all the consumer-grade Crucial drives.

See [1], money quote:

"In the MX100 review, I was still under the impression that there was full power-loss protection in the drive, but my impression was wrong. The client-level implementation only guarantees that data-at-rest is protected, meaning that any in-flight data will be lost, including the user data in the DRAM buffer. In other words the M500, M550 and MX100 do not have power-loss protection -- what they have is circuitry that protects against corruption of existing data in the case of a power-loss."

[1] http://www.anandtech.com/show/8528/micron-m600-128gb-256gb-1...


That is true of ordinary HDDs too. Until data is written to the flash or platter itself, it hasn't been actually written.

People don't expect files they haven't saved to be written and magically come back later if the power goes out, but they do expect that their drives will power back up with all the data that was last written intact.


Yes, that's true.

However, Crucial explicitly marketed their SSDs as having, quote, "power loss protection", pointing out the array of capacitors on the drive's PCB, and strongly implying that this feature included in-flight writes surviving a power loss (i.e., that the caps had enough capacity to enable flushing the DRAM buffer to the NAND media, much like the non Sandforce-based Intel SSDs — e.g., the 320 series from a few years back, or the current DC-3500 and -3700 drives).

That, it turns out, isn't true.

And that's a problem, because I and many others bought these drives on the basis of Crucial's implying they were power-loss durable. When enough people started reporting to Crucial support that their drives didn't, in point of actual fact, offer this feature, their marketing literature changed so as not to imply they did, and forum posts and reviews, like my previously-linked Anand Tech article, started pointing out that they didn't.


The power loss protection in Crucial drives is not what most people think it is. It's a guarantee that the drive won't become corrupted on power loss, not that data buffers will be committed. So it's still dangerous. The idea that the drive would lose existing data on power loss was very surprising to me in the first place, enough that I never suspected that's what Crucial meant by power loss protection.
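"In-flight" here means data an application has handed off but that hasn't reached the NAND yet. A minimal sketch of how software tries to push it all the way down (assuming a POSIX system; the file name is just an example):

    import os

    def durable_write(path, data):
        with open(path, "wb") as f:
            f.write(data)         # data sits in the userspace buffer
            f.flush()             # ...then in the OS page cache
            os.fsync(f.fileno())  # ask the kernel to flush it to the drive,
                                  # which in turn issues a cache flush command

    durable_write("/tmp/state.bin", b"important state")

The catch is that the final step is only as good as the drive: a consumer SSD without real power-loss protection can acknowledge the flush while the data still sits in its DRAM, and a power cut at that moment loses it.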


The new flash DIMMs (which bypass PCIe bridges and the ATA layer, since they plug directly into the memory controller) are really interesting. Not a commodity yet, but it seems like a case where simpler -> better -> cheaper.


What a wonderfully informative article. I really appreciated all the specific scenarios and cases.


I am looking forward to the day when all SSDs ship minimal firmware, and offload all the complex work to (main-CPU) software.


Having the flash translation layer on the drive itself is the only practical way for the drive to be usable by multiple operating systems. Almost everyone who uses a SSD wants it to be accessible to at least two operating systems (UEFI, and whatever OS resides on the SSD). Doing the FTL on the host system would basically relegate the SSD to being just a cache device.


Yup. I looked at this while doing design work for a certain highly visible gaming product. You want the responsibility for block leveling and transactional goop close to the device, where the firmware can know stuff about device geometry and other nasty bits that affect reliability.

Putting that stuff in the OS puts you back to, oh, the same wicked stuff that people had to deal with for MFM hard drives (remember cylinder, head and sector counts? Only probably ten times more complicated).

A block level abstraction is fine, as long as the abstraction can make some transactional guarantees.


We have these things called standards, which allow nice things like interoperability. Besides, you do not need the entire drive to be accessible from UEFI; it is enough that you can load a small bootloader which contains the drivers to handle the disk. Thirdly, UEFI has "extensible" right in the name; it shouldn't be too hard for OSes to throw a disk driver blob into UEFI.


Doesn't that require partitioning the disk? That doesn't seem like an OOB solution.


It's not the requirement to partition the drive that kills the idea. Setting aside the first small chunk of the drive for the firmware to read is how it's always been done. The problem is that this would require the partitioning to be done below the wear leveling layer, which reduces the effectiveness of the wear leveling slightly and means you can only change your bootloader settings a thousand times before the drive is dead.


That's not really what kills it. You can use a small ring buffer to boost that to 100 thousand, and/or use the flash in a more reliable way there (like SLC or devoting 2/3 of the bits to ECC).
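The arithmetic behind that (using the parent comments' own figures; treating "a thousand times" as roughly 1,000 erase cycles on a fixed block is an assumption):

    updates_per_block = 1000   # the "thousand times" figure above
    ring_slots = 100           # rotate the settings record across 100 blocks
    print(updates_per_block * ring_slots)   # 100000 -> the "100 thousand"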


Sadly, with NAND even reads contribute to wear.


The user experience of win-modems, win-printers, and win-scanners was absolutely horrific so you really need to click edit and add the sarcasm tag.


Wouldn't that make performance a lot worse, for both CPU and disks? Offloading the job to something closer to the flash chips and fully dedicated to servicing them sounds like a better (and the current) path. Same for high speed network links.


FusionIO cards run with FTL maintenance on the main CPU. Run a workload and you'll see the driver spinning in a way that isn't accounted for by your operations. It's not pretty - at high workloads, the network interrupt traffic starts interfering with all the bus traffic to the cards. A controller can be designed to run exactly at the speed of the chips, no more no less, leaving the CPU doing what it does best.

That being said, if you build your system correctly, there is not any extra CPU load created by maintaining the FTL at the app level. The main problem is the lack of an API for wear detection. I have discussed this with senior chip designers at the major manufacturers, and they are all loath to nail things down to one API, as they believe how wear is specified will have to change over time. That's a broad brush - some manufacturers are more open to these discussions than others.

I'm still working on them.



Always have that in the back of my mind =]. However, I don't think that's the current trend right now, except where special hardware is integrated into the chip, and even then the work isn't left to the CPU itself.

I'm not so versed in the area so I'm just blind guessing here.


Flash translation layers don't really need to be offloaded for the sake of performance. You need some kind of controller to multiplex access to a dozen or more flash chips from a narrow PCIe link, so you might as well throw in an ARM core to do the translation and wear leveling and garbage collection, and then you can stick a few supercaps in the drive too and have a robust hardware platform.


The FTL runs much faster on a 3GHz Xeon core than on a 150MHz ARM core. Of course, the cost and power consumption is also much higher.


Unfortunately that day isn't going to happen. In fact the opposite is happening -- interesting spinning rust technology like Shingled Magnetic Recording (SMR) drives are getting more complex translation layers.


I wouldn't say that it's never going to happen based on current trends.

This kind of thing tends to be quasi-cyclic.

At one time, desktop computers had separate FPUs. Over time, those got integrated onto the main CPU, but then the trend switched back, with heavy calculations moving off-CPU to the GPU.

The mainframe->desktop->cloud story is similar.

I say "quasi"-cyclic because (e.g.) cloud isn't really the same as mainframe, but the transition back and forth between centralized and decentralized is still striking nonetheless.


I'm eagerly awaiting the part of the cycle where the GPU gets folded back into the processor. I'm really not interested in writing code against mystery-meat architectures with no documented ABI...


We're already seeing that with integrated, "soldered-on" graphics cards, and with frameworks like OpenCL, and higher-level abstractions like Theano, Torch7, etc.


All we need now is documentation for the low-level abstractions so we can scrap all the vendor code and compile our own GPU machine language.


I am not sure based on reading your post whether or not I am providing new information to you:

Keep an eye on the AMD APUs moving forward and their HSA progress.


Thanks, that sounds like an interesting development.


SSDs that use SATA aren't going to allow any finer control, I suspect, due to the interface's limitations.

PCIe SSD devices, though, already feature an advanced controller (that is, a CPU core). This core could be programmed from the "computer" side somehow — but building any public API is always a major commitment, and takes much more effort than a proprietary API which you can change at will.


This is a bad idea for a lot of reasons. A well designed system has components that just work and have standardized interfaces. What you're suggesting is that each device should be totally unusable unless the machine has the specific software running that knows how to use it. That's poor design.


That's how... pretty much every peripheral in existence works. You need webcam drivers. Video card drivers. Sound card drivers. USB drivers, at least for generic HID devices. Nothing stops there from being common standard generic SSD drivers, where vendors could publish their own if they had an optimization.

Right now the SATA interface prevents anything like that from happening, and the market has already moved to these proprietary broken controllers with braindead GL-class protocols to access them. (SATA, NVMe)


Any such interface for SSDs would need to be able to work with a dozen different kinds of flash, a few sizes for each, a wide range of quantities of flash chips, all while exposing the performance and endurance characteristics of the flash and the topology of its connection to the host bus so that the software FTL can properly tune itself.

You're asking for a hardware interface of unprecedented complexity and cross-platform vendor neutral drivers of comparable complexity to 3d graphics drivers, all for the sake of letting SSDs have "dumb" controllers for a generation or two before the technology changes enough that the controllers need to start translating things again.


This would be a huge win for database design - the FTL is incredibly wasteful in terms of unnecessary block erases for cache.


Welcome back to driver hell. We thought you weren't ever coming back, but here you are!


If you think that, I've a crate of ST-506s to sell you.


In the long term, latency makes that a non-starter.


Moral of the story:

* Don't use hardware RAID controllers.
* Don't buy hardware from people who are going to change SKUs out from under you, or worse, change what's actually delivered for a given SKU.


Totally agree with you. I was just about to write the exact same thing. I've had problems with Dell hardware, hardware RAID controllers, and hundreds of SSDs. The most stable systems I've worked on use LSI SAS controllers, perhaps software RAID, and Intel/Crucial SSDs in fault-tolerant distributed systems. So I also wonder whether Etsy's tendency to use non-distributed datastores, with less effective or more complex fault tolerance, makes SSD failure a bigger deal. Failure is inevitable, and should be expected at every level.


This is exactly what I took from the article "Don't use hardware RAID controllers."


All these drives were on hardware RAID cards, it seems; is it feasible to do without them?


Preferable, I'd say. The last thing you want between your OS and your disks is a buggy unknowable black box you can only talk to with some crappy binary blob management tools.


I recommend using the RAID card as an HBA (JBOD), turning on the fast path, and using software that can share multiple devices evenly.


Am I missing something? Most of the issues seem to be hardware RAID related.


> On the upside, [the ridiculously expensive HP SSDs] do have fancy detailed stats (like wear levelling) exposed via the controller and ILO, and none have failed yet almost 3 years on (in fact, they're all showing 99% health). You get what you pay for, luckily.

Call me a huge cynic if you must, but given the other problems observed, I think there's a really simple explanation for perfectly uniform "99% health" after three years of service that doesn't involve "you get what you pay for."


An SSD can be 85% of the way through its lifespan and still be operating with the same performance and reliability expectations it had when it was almost new, so the drive can reasonably be said to still be healthy. It's only when it has to start retiring bad blocks and consuming the spare area that the drive is operating in a degraded mode, and it's not until you hit that point that the drive can start making accurate predictions about how much more use it can take until it dies.
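A rough way to read such a counter, assuming "health" is simply the fraction of rated P/E cycles remaining (an assumed interpretation, not something the article states):

    used_fraction = 0.01           # "99% health" -> 1% of rated cycles consumed
    years_in_service = 3
    years_left = years_in_service * (1 - used_fraction) / used_fraction
    print(round(years_left))       # ~297 more years at the same write rate

Which means either the workload writes very little, or the counter is too coarse to tell you much; both readings fit the skepticism above.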


Not quite.

An SSD at 85% wearout will take longer to program and erase blocks, and it will require more refreshes (data retention suffers), so there will be considerably more background operations.

In fact, SSD vendors do quite a bit to make the disk perform its best early on, when they absolutely can, and then let it break down further down the road, after the benchmark is over. After all, users mostly only test a drive for a month or two and never get to a real wearout condition during the test; then they buy in bulk and run it for a few years, but by that time the SSD model is already outdated anyway (SSD models are on roughly an 18-month cycle).


"There's already been silent failures"?


Isn't running raid1/5/6 on ssds silly b/c they'll all die at the same time? And hardware raid on top of that? Why?

SSDs have a fairly consistent failure curve (excluding firmware bugs and other random events) for a given model, so they'll wear evenly in a RAID setup. This means they'll all die at the same time, as writes/reads are distributed fairly evenly across the disks. Given the size of today's drives, you may not complete a rebuild before losing another disk.

Has this been proven not to be true within the past few years? I don't run redundant RAID on SSDs. It's either RAID0 or JBOD.


Not necessarily. We've been running Intel SSDs in production at Stack Exchange for 4+ years, and just recently had our first 2.5" drive die.

That said, most of the drives in this article are consumer drives. The problem with consumer drives is that they don't have capacitors. And since your writes are cached by the drive before they go to the NAND, if you lose power all of your drives will be corrupted in the exact same way at the exact same time.

If you don't care about the data, go ahead and use them. If you do, pay the extra for enterprise drives. They really aren't _that_ much more expensive these days.


Interesting. Do you have more info on what you mean by "Not necessarily?" From what I've seen during reliability studies on SSDs, they have a fairly tight failure curve. This is very dissimilar from hard disks where there's much more variance from drive to drive.

I'm genuinely interested.


Nothing that would pass deep scrutiny. Just our experience running SSDs in almost every server at Stack. We've only had one mass failure of drives. That was when 5/8 Samsung drives died around the same time in our packet capture box. The remaining 3 are still alive, although we don't really use them.

We have only had two Intel drives die on us. I'm interested (well academically, not professionally) if they will die at the same time or keep dropping off one at a time.

We tend to retire the machines or the drives in them before they fail.


In my experience (Flash Platforms Group, HGST), a significant number of flash device failures are caused by mechanical issues such as on-die faults, wire bonding problems, and solder joint/package stresses, which may occasionally be down to a production issue but are more often attributable to rough handling during installation or thermal stress. This is less of an issue with SFF SSDs, but especially true of PCIe products crammed into 2U servers with poor airflow.

In general, thermals tend to be a significant issue for all form factors when devices are retrofitted. Less so for 'products' (All flash arrays etc.) which are designed as whole products from the outset.

This creates a failure pattern that is totally separate from predictable wear due to use and means that 'they'll all die at the same time' becomes much less certain for some categories of device.


That's what I had considered as well. Specifically, that the wear-out failures are on the right-hand side of the curve, meaning that other issues would tend to dominate. What you said makes a lot of sense: failures due to thermal/electrical stress, manufacturing issues, and handling.

Also, there's the controllers, which tend to be a significant source for issues.

So yeah, maybe theoretically the wear-out would be a concern for RAID. However, practically this is rarely an issue, as other failures randomize the distribution enough that this never causes a problem in situ.


I bought two consumer Intel... 80 GB? SSDs. They both lasted less than a year before they stopped POSTing or allowing a system boot, in separate machines.

The 850 Pro is OK, but it's slowed down a lot lately. Might be an OS thing, though I doubt it.

All in all, I keep a redundant backup on old-school HDDs too, since the failure rate of SSDs isn't so great in my experience so far.

Anyone try one of the newer M.2 drives yet? Or I think I mean the PCIe types?


So there are bugs in drive firmware. How about security bugs? Should we expect quality drives to have had a security audit?


Yes, drive firmware has security bugs too: http://spritesmods.com/?art=hddhack


If you are doing your security at the hard drive layer, haven't you already lost?


So another reason for whole-disk encryption is to defend against your own drive firmware? I guess that may make sense nowadays, though it hadn't occurred to me.



