Samsung Unveils SSD Delivering Speeds of Over 2 GB/s (techgage.com)
272 points by notsony on Jan 8, 2015 | hide | past | favorite | 112 comments



Has a Lenovo part # on it, so I suspect this is the PCIe SSD inside the new ThinkPad X1 released at CES 2015, and there was a nice discussion on HN about it yesterday [1]. In that article, the author mentioned that in the tests "read speeds reached 1350 MB/s", which is pretty freaking awesome for a laptop!

Update: Yeah, the SM951 (shown in this article) is in the X1 [2 - see specifications heading].

[1] https://news.ycombinator.com/item?id=8847411

[2] http://www.thinkscopes.com/blog/2015/01/06/lenovo-thinkpad-x...


That's amazingly frustrating. I bought the second-generation X1 Carbon last year, on the theory that Lenovo had killed off mouse buttons permanently so there was no sense waiting for a new system that had them; now they release the third-generation X1 that not only has the mouse buttons back, but kills off the insane touch-strip in favor of real function keys again.


The trackpad/keyboard change made me so happy, because I can now buy a Lenovo laptop again. Now they need to bring back the Ultrabay (so we can have 2 spindles (1 swappable) + M2 SSD in a 14" laptop) and get rid of non-removable batteries.

The customer backlash must have had a substantial financial impact for Lenovo to reverse course. Other than Blackberry, has any technology vendor trumpeted their own course-reversal in launch marketing for a new product?


> Now they need to bring back the Ultrabay (so we can have 2 spindles (1 swappable) + M2 SSD in a 14" laptop) and get rid of non-removable batteries.

Perhaps in the T series, but those choices actually make sense in a thin-and-light laptop/Ultrabook. If you've ever looked inside one of those laptops, it becomes obvious why they don't have replaceable batteries: they put battery cells in every spare space, not in one encapsulated area.


Yes, I meant the T-series, e.g. the relatively thin and light T400S has a swappable Ultrabay Slim which can accept a hard drive, battery or optical drive.


The 8GB ram limit on the X1 and X250 is still pretty crazy.

I have virtually no reason to upgrade from my X220 (i5 2410, 8GB/SSD+HD) unless it breaks. Sad.


That's the only thing holding me back from upgrading my 2012 X1 Carbon. As soon as they release an X1 with >8GB RAM, I'll buy it.


No kidding; even my 2010 X61s had 8GB and my 2-year-old X230 has 16GiB.


I'd agree except the X220 screen resolution is idiotically bad.


Bad for what? If you had full HD resolution at that size, you'd lack proper display support on Linux for high-DPI screens (it's still not perfect at this stage).


Apple... The iPhone 6's screen size.


The entire history of the iPhone is one instance of this phenomenon after another. "Nobody wants big screens." "You don't need cut and paste on a smartphone." "You don't really want multitasking, you only think you do." "Web apps are all you're going to get, and you'll like it."


The first instance is the only real case of that phenomenon; the others Apple openly framed as temporary compromises.


I can assure you that the iPhone as originally announced was never going to allow third-party native apps; it was to be web-apps only forever, and this was not intended to be a temporary position. Even though their language was often cagey, it is my clear recollection that Apple's general attitude around/beyond the release of iPhone OS 1.0 was that web-apps were essentially "as good as" any native app could be, and were therefore everything developers would ever need. As I recall it was some 13 months before they finally relented and began to talk native third-party API access.

I was at WWDC 2007, here's what Jobs said at the time -

"The full Safari engine is inside of iPhone. And so, you can write amazing Web 2.0 and Ajax apps that look exactly and behave exactly like apps on the iPhone. And these apps can integrate perfectly with iPhone services. They can make a call, they can send an email, they can look up a location on Google Maps. And guess what? There’s no SDK that you need! You’ve got everything you need if you know how to write apps using the most modern web standards to write amazing apps for the iPhone today. So developers, we think we’ve got a very sweet story for you. You can begin building your iPhone apps today."

Additionally -

Apple board member Art Levinson told Isaacson that he phoned Jobs “half a dozen times to lobby for the potential of the apps,” but, according to Isaacson, “Jobs at first quashed the discussion, partly because he felt his team did not have the bandwidth to figure out all the complexities that would be involved in policing third-party app developers.”

The following article discussed this http://9to5mac.com/2011/10/21/jobs-original-vision-for-the-i...


No, it was not 13 months; it took 4 months from WWDC 2007 to announcing the SDK, which was then released another 5 months later.

Jobs was being slippery because they had nothing to show at WWDC 2007, but here is the timeline:

iPhone revealed: January 9 2007

WWDC 2007: June 11 2007

iPhone released: June 29 2007

SDK announced: October 17 2007

SDK released: March 6 2008

App Store opens: July 10 2008

From the iPhone being announced to the App Store opening was about 18 months total, quite a pivot if "iPhone as originally announced was never going to allow third-party native apps; it was to be web-apps only forever, and this was not intended to be a temporary position."


> Jobs was being slippery

It was always Jobs' M.O. to trash-talk any important feature(s) they didn't have at any given time. It was and is a valid marketing technique -- I can't really blame him for employing it -- but at some point it stopped fooling me. It required me to assume that Jobs was a dumbass, which regardless of what we might have thought about him was never the case.

> From the iPhone being announced to the App Store opening was about 18 months total, quite a pivot if "iPhone as originally announced was never going to allow third-party native apps; it was to be web-apps only forever, and this was not intended to be a temporary position."

As I understand it, the App Store had been under active development for some time, but for the iPad. The iPad was always going to run native third-party applications, but the company was caught off guard by the demand for them on the iPhone.

In retrospect, the apparent speed of their pivot on the App Store policy should have been a very strong indicator that a tablet was in the works. It did look suspicious, but I don't think Jobs was actively lying when he said the iPhone would rely on web apps.


And yet, didn't they also say that the built-in apps couldn't be equaled by web apps? I wonder how Jobs reconciled that.


I might be mistaken, but my impression was that the "web apps only" thing was real and not a temporary compromise. They may have spun it that way retroactively, though.


Certainly not in my recollection.


I was waiting all of 2013 for that X1 refresh, and spent a whole day swearing at random inanimate objects after seeing what they actually built. Bought an ASUS Zenbook instead. I would LOVE to have somehow lasted this year without a new laptop and sprung for this fixed-up X1, had I known they would correct course. Alas I only buy new laptops every 4-ish years.

So Lenovo lost me as a customer for a 7-8 year span because of those asinine touchscreen keys. I loved Thinkpads as reliable Linux workhorses, but the phrase "Will change their usage per app!" might as well say "Will never work in Linux!".


The "adaptive" strip actually does work in Linux; hitting the Fn key toggles between F1-F12 and volume/brightness/etc. It's just not fun to use, since it has no tactile response.


Same boat. I finally bought one, after trying out someone else's and deciding that I could live with it.

Ugh. It's almost unusably bad. Fortunately I already don't use the mouse much, but it's gotten worse with more use not better. :(


It also has severe overheating issues under load; fullscreen video or compilation will hard-hang it, and it won't boot up again for a while.


This brings them in line with the 2013 Retina MacBook Pro's PCIe SSD speeds, though, which I believe are just below this at around 1 GB/s?


You have no idea what kind of performance you'll get with an Apple SSD. My almost brand-new MacBook Pro has the worst SSD I've ever used; we're talking sequential speeds comparable to a hard drive. But it still works, so they refuse to change it.


You should blog about it. Write a simple article, post it everywhere. Submit it to hardware review sites like AnandTech; they might get interested. They definitely jumped on the recent 840 EVO issue.


This has been common knowledge since Apple started selling Macbooks with SSDs. They simply refuse to do anything about it unless you can prove that it's not working.

I'd shortcut the SSD if that didn't mean being without my laptop for a good two weeks.


shortcut?


Drive a higher voltage into it to burn a few wires inside? No need to open it up and void the warranty or whatever.


I think this is because Apple uses two or three suppliers, so if you're lucky you'll get a Samsung SSD, but if you're unlucky you'll get a slower Toshiba SSD.


We need a class-action lawsuit that results in a cryptographic hash manifest of all hardware components being delivered with every purchase, or made available for lookup online by unit serial number. Alternately: labelling regulation, or voluntary disclosure.

Amazon could design a "showrooming" mobile app to augment the reality of Apple retail stores, with photos of the supply chain casino options on which the buyer is gambling, while they promote Amazon items which have guaranteed transparency on components.
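The manifest idea above can be sketched in a few lines: hash a canonical encoding of the component list so the same parts always produce the same fingerprint, which a buyer could look up by serial number. Component names here are made up, and SHA-256 over sorted-key JSON is just one reasonable choice:

```python
import hashlib
import json

def manifest_hash(components: dict) -> str:
    """Hash a canonical (sorted-key) JSON encoding of the component list,
    so the same set of parts always yields the same fingerprint."""
    canonical = json.dumps(components, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical component list for one unit (names are illustrative only)
unit = {
    "ssd": "Samsung SM951 512GB",
    "display": "LG LP140QH1",
    "wifi": "Intel 7265",
}
print(manifest_hash(unit))
```

Key order doesn't matter because of the canonical encoding, so two vendors describing the same parts would publish the same hash.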


The only ones over-promising and under-delivering are independent hardware reviewers who publish benchmark numbers Apple isn't guaranteeing.

Barring a recall, nobody cares which manufacturer their RAM comes from, or the hard drive, or the optical drive, or the battery cells, or the WiFi chip (except for Linux users), or the LCD, or even the NAND behind the SSD controller. The controller itself is the only component that Apple switches out with alternatives that aren't functionally indistinguishable, but I can't see any way to regulate that without also harmfully restricting Apple's freedom to multi-source components in ways that are strictly beneficial to consumers.


They can source the items from wherever they want, I don't really care that much. But when they advertise that their flash-based PCI Express SSDs are 1.6x faster than the comparable SATA models, they are demonstrably lying. I don't like that.

This was a laptop in the $4k range, which puts it in the really-fucking-expensive line. I expected more, at the very least that they'd be more than willing to change it. But it powers on, so no problems according to Apple.


Consumers can decide (with non-vendor advice) what's beneficial for their goals (which are unknown to the vendor), but only if there is transparency on the ingredients inside the device. Vendors can change components as much as they want, as long as they disclose the components.


But who decides the granularity at which the components must be disclosed? Do they have to update the documentation every time they switch sources for things like capacitors? The overhead of doing that would cripple the supply chain and hurt everybody for the potential benefit of almost nobody.


Reasonable people can draw reasonable lines. We can use the last 20 years of the PC industry to come up with an initial list of consumer-impacting behavior.

If the behavioral targeting industry can fingerprint every possible aspect of browser behavior, and run subsecond auctions, OEMs can track a subset of BOMs on an annual cycle.


The main purpose of a hard drive is to read and write data. When one is much worse than one of the other alternatives they use, that adversely impacts the performance of the whole computer. Things that directly affect the user like that should be disclosed.

Would you be okay with one screen having twice the resolution of another on the same model laptop with the same part number?


If Apple were forced to standardise on a single SSD spec do you think they'd standardise on the higher spec component or the lower spec one? Clue - the higher spec components are almost certainly under supply constraints. You could even end up with higher spec components being intentionally crippled to match the performance of lower spec ones to comply with uniformity regulations.

Congratulations, you've now made the world worse for everybody, but at least it's 'fair'. Good job!


Any vendor is free to sell lottery tickets, but if they call those lottery tickets something else (e.g. a computer with fixed specs, rather than a range of possible specs), customers may be less than satisfied with the transaction.


There is nothing new about manufacturers sourcing parts from multiple suppliers. This is routine practice across many industries. Often it's necessary due to limitations in availability.

If you receive a computer from a supplier and find that its higher-than-advertised-spec parts have been intentionally downgraded to comply with uniformity regulations, would you be more satisfied with the transaction, or less?


The post that started this subthread was about disclosure of component identity, nothing was said about sourcing origin or uniformity.


> you've now made the world worse for everybody

So you don't mind when Apple ships you an 800x600 display with your MacBook because, sorry, we ran out of Retina screens this week?


Oh please. Nobody's talking about delivering products that are under the advertised specification. If they were doing that, they'd get sued in an eyeblink and rightly so.


My late 2013 rMBP with the 1TB option sees sustained read and write speeds of around 900MB/s, indeed. Useful when dealing with uncompressed video.


What if we gave up on the idea of fixed disk storage and just filled our computers with enough fast RAM to use as a permanent RAMdisk that wrote to these super fast SSDs on occasion, effectively treating the SSD as a backup for the RAMdisk? With a proper battery backup implemented, I imagine it could work.

2 GB/s is crazy fast. That's my entire Steam library in 20 seconds. What percentage of those files change per day? 10% or so at most? So when I shut it off, it syncs up the diff in 2 seconds.

When storage gets this fast and RAM this cheap, why bother with fixed disks? Imagine the new MacBook Air with 8 GB of RAM installed, another 128 GB as a RAM disk, and a 128 GB SSD to back up the RAM disk.
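The "sync up the diff" step could be sketched as a content-hash diff against the last snapshot, copying only files that changed since the previous sync. Paths and layout here are hypothetical:

```python
import hashlib
import shutil
from pathlib import Path

def file_digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def sync_changed(ramdisk: Path, backing: Path, last_seen: dict) -> dict:
    """Copy to the backing SSD only files whose contents changed since the
    last sync. Returns the updated snapshot of digests."""
    current = {}
    for f in ramdisk.rglob("*"):
        if not f.is_file():
            continue
        rel = str(f.relative_to(ramdisk))
        current[rel] = file_digest(f)
        if last_seen.get(rel) != current[rel]:
            dest = backing / rel
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, dest)
    return current
```

At 2 GB/s, a 10% changed slice of a 40 GB library would indeed flush in roughly 2 seconds, ignoring hashing overhead.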


We have servers like this, and under a surprisingly large number of load scenarios you only get a little speedup. There are use cases where it's helpful, but there are lots of little things that slow down performance that aren't disk-bound, like single-threaded waits for networking code to time out, or waits for CPU-bound code; they all add up.

I used to dream of what you're talking about, but from my own experience in the last year, it just doesn't matter that much for the daily experience using a computer. 16GB of RAM is plenty right now, SSDs are very fast for many tasks, I'd look at upgrading your internet bandwidth after that if you want your computer to feel faster.


>> and just filled our computers with enough fast RAM to use as a permanent RAM disk

Current RAM implementations use a lot more power than an SSD, and might not be suited to a laptop.


How is this better than a MacBook Air with 128+8 GB of RAM and a 128 GB super-fast SSD? Why buy a super-fast SSD and only use it for daily backups?


End-game should be that my PC just has 'storage' that ranges from RAM to cloud-based auto-backed up data.

The computer should automatically optimise this for commonly used or 'app' style data and keep the information in the level of storage that makes sense.


That's great, but I'd settle for half the speed and twice the density.


It usually works the other way with SSDs: the bigger the capacity, the faster. The reason is that to increase capacity they usually add flash chips in parallel, so apart from the very low end it is not unusual to see peak performance double when capacity doubles, except where you're limited by the maximum performance of the controller or interface.
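That capacity-to-parallelism relationship fits a toy model: throughput scales with the number of flash chips until the controller or interface becomes the bottleneck. All numbers below are illustrative, not Samsung's specs:

```python
def peak_read(capacity_gb: int, chip_gb: int = 64,
              per_chip_mbps: int = 200, interface_cap_mbps: int = 2000) -> int:
    """Rough model: flash chips are read in parallel until the controller
    or interface caps the total. All figures are illustrative."""
    channels = capacity_gb // chip_gb
    return min(channels * per_chip_mbps, interface_cap_mbps)

for cap in (128, 256, 512, 1024):
    print(cap, "GB ->", peak_read(cap), "MB/s")
```

With these made-up figures, speed doubles from 128 GB to 256 GB to 512 GB, then hits the interface cap at 1 TB, which matches the pattern the comment describes.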


How long until they just use DIMM slots?


Hopefully never. Flash's characteristics are so different from DRAM that flash DIMMs just confuse people.

But there is http://www.sandisk.com/enterprise/ulltradimm-ssd/


You can use the DIMM slot for its DDR bus interface, just not in the traditional way DRAM would use it. The DDR interface is extremely fast, and can be repurposed for uses other than the standard row/column/select/precharge etc. commands.

You would likely need chipset support for it to be used in this fashion, though.


That's what flash DIMMs do. The problem is that if you stick something in a DIMM slot people expect it to behave like RAM, not like an I/O device.


I don't think the kind of person who buys a DIMM SSD is the type of person who is confused by that sort of thing.

To a non-technical person, mSATA, DIMM, and PCIe SSDs all look the same. These products aren't for them.


Agreed.

It would be nice to see a set of slots for both RAM or an SSD. So your laptop may have four slots total, and you can mix and match RAM or drives as you see fit.

The technology to do this is already commercially available in servers. Just need it to come to laptops or PCs.


At some point, I hope to see the lines between RAM and SSDs dissolve, such that the OS can just use a portion of the SSD for RAM.


Uhh, unless SSD technology changes significantly, that line is a long, long way off. SSDs are still around an order of magnitude slower than RAM: even assuming the 2 GB/s speed is legitimate, RAM is over 22 GB/s. Plus RAM can be written and rewritten without any wear, whereas SSDs have a maximum write count.


Right. Plus when you power off RAM ... Oops!


I think it's supposed to happen with these: http://en.wikipedia.org/wiki/Memristor


HP is working on a new architecture based on computers with just a buttload of persistent RAM, but so far it's vaporware.


Supposedly they will never sell this computer, only renting access as a "fast cloud".


I would think the recent historical tendency is to have more cache levels, not fewer. This would only happen if the cost of storage lost its sensitivity to speed.


Isn't that called "swap"?


No. Swap (paging) is RAM content "saved for later" to free up real RAM. It is never accessed or modified directly by processes; it is pushed back into RAM when a process touches a memory region that has been paged out.


SATA DIMM slots are already available:

http://www.vikingtechnology.com/satadimm-ssd


That only uses the DIMM slot for power - it still uses a SATA connection. This definitely has its uses, though.


http://www.sandisk.com/enterprise/ulltradimm-ssd/

I think this one meets his description and makes actual use of the DIMM for memory transfer, though I've done relatively little research on it.


Ugh. Well, I suppose it might be OK as long as it doesn't inherit anything about the DRAM hardware interface. Seriously, if you haven't seen how it works (test: are the timings tCAS, tRCD, tRP, tRAS familiar?) you are dramatically underestimating how ugly and restrictive the interface is. It's completely inappropriate for anything but DRAM and probably inappropriate for modern DRAM too. If not, it's probably because backwards compatibility with the interface has constricted DRAM design.

Speaking of which, I'd love to read a modern analysis of chip-to-chip serial vs parallel if anyone has one handy.


It may happen, but not without potential pitfalls. For example, modern CPUs cannot actually address the full 64-bit memory address range, and if your drive is a DIMM device it is necessarily memory-mapped. CPUs can address a few tens or hundreds of TB; while few computers have that much disk, we aren't far off.

http://en.wikipedia.org/wiki/X86-64#Canonical_form_addresses


There's nothing in that article that says you can't write a driver which utilizes certain addresses as registers and other address ranges as visible sections of a mass storage device, for example. That's just one hack workaround I can think of that would allow you to put flash into a DIMM. You could combine that with DMA to read data out of your flash DIMM and into your RAM DIMM.

At that point, reading data would be a matter of setting up your paging registers and then queuing a DMA operation to copy data out of the exposed segment of your mass storage device and into RAM. Your flash DIMM would then be just another memory mapped hardware device.
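The memory-mapped-window idea could look something like this sketch. A real driver would map device registers and set up DMA; here `/dev/flashdimm0` is a hypothetical device node, and any regular file can stand in for it:

```python
import mmap

def read_window(path: str, offset: int, length: int) -> bytes:
    """Read a slice out of a memory-mapped region, treating it as a window
    onto a storage device (path would be e.g. the hypothetical
    /dev/flashdimm0; a regular file works the same way for illustration)."""
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), 0) as mm:
            return mm[offset:offset + length]
```

Reading becomes an ordinary memory access into the mapped range, which is exactly the "exposed segment of your mass storage device" the parent describes.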


Hmm, good point. Looks like I was overthinking things.


> offered in 128GB, 256GB, 512GB, and even 1TB capacities

Same as the current Apple offering, although the Retina MacBook Pro supports only a 2x link.

I hope some larger capacities will become available in the next year or two. Also, I noticed no difference in daily computer use when I went from a 500 MB/s SATA drive to a 900 MB/s PCIe one.


I have a 1TB SSD in my late 2013 15" MacBook Pro Retina and System Information is reporting that it has a 4x link. Even with the additional overhead of having FileVault enabled I still see read/write performance in the region of 1000-1100 MB/s.


The 1TB disk in those is PCIe.


Actually, it can go to 4x: http://blog.macsales.com/25878-owc-gets-1200mbs-from-ssd-in-...

They simply plugged in an SSD from a 2013 Mac Pro. Go figure.


But can they plug in the new Samsung SSD (when available) or any of the PCIe-SSDs currently available on Amazon?

My understanding is that Apple (deliberately) changed the physical shape of the PCIe-SSD connector so you could not upgrade the SSD yourself.


Interesting. EveryMac reports only 2x.

I've seen that OWC article where they get up to 1200 MB/s, and thought that might still be achievable with a 2x link width.


This. SATA Express/M.2 looks promising; there are no mobos with multiple M.2 ports so far though, so sorry, your 2 GB/s is still not enough for me: a RAID0 mdadm device with 4+ very good SATA3 SSDs can beat it.

OTOH if a mobo with 2 (or more) M.2 ports hits the market, things are probably going to change...
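The back-of-the-envelope math behind "a RAID0 of 4+ SATA3 SSDs can beat it" is simple striping arithmetic; 550 MB/s per drive is a rough SATA3 figure, and real arrays fall somewhat short of this ideal:

```python
def raid0_throughput(drives: int, per_drive_mbps: int = 550) -> int:
    """Ideal-case RAID0 sequential read: stripes are read from all drives
    in parallel, so throughput scales linearly with drive count.
    550 MB/s is roughly what a good SATA3 SSD sustains."""
    return drives * per_drive_mbps

print(raid0_throughput(4))  # 2200 MB/s, edging past a single 2 GB/s stick
```

In practice controller overhead and random workloads eat into that margin, which is why the commenter specifies "very good" drives.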


How does this compare to the PCIe-based SSDs in the Mac Pro?


The PCIe SSD in the Mac Pro has a 4x link, although in AnandTech's benchmark they achieved ~1 GB/s read speed on the 512GB stick.

So I assume theoretically one would see speed increase.


It's a newer model. I suspect the difference is negligible in real-world usage.


Twice the maximum speed is "negligible"? Hmm. The difference should certainly be marked in sequential tasks, maybe not so much in random.


It looks like the real-world read speed of 1362MB/s

http://www.thinkscopes.com/blog/2015/01/06/lenovo-thinkpad-x...

is only around 10% more than the 1184MB/s speed of the 2013 Mac Pro:

http://macperformanceguide.com/MacPro2013-performance-SSD.ht...


Besides copying large files, do you have an app that can consume 2 GB/s? That's more than two streams of uncompressed 4K.


Yeah, one of my main debugging tasks right now is primarily limited by my SSD write speed. Instruction-level traces take 9 GB at minimum; any more significant work takes quite a lot more.


How are you generating the instruction traces? What format are you writing them to on disk?


OllyDbg run traces, in a plain text format. I'm constrained by my target application only working on older versions of Windows, so I can't easily insert a compression step into the log generation or switch to a more compact binary representation.

(I'm currently looking into solutions like a custom filesystem driver, running a second VM and using internal networking to stream to a FUSE filesystem, or possibly even hooking the filesystem access of my debugger and inserting a compression step into WriteFile() calls)
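The "insert a compression step into WriteFile()" idea can be sketched as a compressing file-like wrapper; this is an illustrative Python analogue of the concept, not an actual debugger hook:

```python
import zlib

class CompressingWriter:
    """File-like wrapper that streams everything written through it into a
    zlib compressor before it hits the underlying file, mimicking the idea
    of interposing compression on WriteFile() calls."""
    def __init__(self, raw):
        self.raw = raw
        self._c = zlib.compressobj(6)  # moderate compression level

    def write(self, data: bytes) -> int:
        self.raw.write(self._c.compress(data))
        return len(data)

    def close(self):
        # Flush whatever the compressor is still buffering.
        self.raw.write(self._c.flush())
```

Plain-text instruction traces are extremely repetitive, so even a cheap streaming compressor should shrink the 9 GB+ logs dramatically.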


As a transfer speed, yes I do.

I don't have to move 10GB+ files to know if my network is 10Mb/100Mb/1000Mb/10Gb. The speed it transfers at is representative of my network speed, be that file 4 KB, 10 MB, or 100 GB.

The same is true for all transfer speeds.


Also, yes, there are diminishing returns with information requests, ACKs, etc.


"git grep" on a large source tree. (Or, for that matter, "repo grep" on a whole family of git repositories.)


Upgrading to Mavericks comes to mind.


Try working with map data, where layers that consume tens of GB, and hundreds of layers, are not that unusual. Load that into a database, and watch the size blow up even further thanks to indexes etc.

There are plenty of applications that are still disk IO constrained at those speeds.


Uncompressed 16-bit 16K for mastering high-quality 4K content? It is thought 8K will be the final TV resolution; for that you'd need 32K for good-quality output. I guess even current DDR3 would be too slow, never mind SSDs.


Ad-hoc connecting new Macs over Thunderbolt 2 at 20Gbits/sec is very nice! My MacBook Pro and Mac Pro share their 1TB SSD drives between each other via TB2.


SVN, some people are still stuck using it....


Speed's nice, but get the price-per-terabyte down, I beg you.


Speed's nice, price-per-terabyte is important, but do something about endurance and how the drive behaves when it dies. I want my drive to tell me "X and Y are broken, only Z is available, read your data before I die" instead of simply VANISHING from my system on the next power-up like EVERY FRICKIN SSD does right now (even ones promising a read-only failure mode, like Intel's).


Any electronics device can die at any moment without a chance to tell you anything.

That's why you have to make backups and run your disks in a RAID.


Yes, any electronic device can die, that's normal.

What is not normal is an SSD going dark just because there was a SMALL corruption in one of the critical flash cells (usually caused by a firmware error, power loss, or wear). It's quite rare for a magnetic drive to die because of a few bad sectors; on the other hand, it seems to be a standard failure mode for all SSDs on the market.


My operating system files churn as much as anyone's, but I don't intend to use SSDs for files that will be written and rewritten.

I'll use it for the stuff that I want to write once and keep for 30 years.

I have terabytes of such, already.


SSDs are not for you then; data retention is very poor (single-digit years, or a few months until read speed degrades in the case of the Samsung 840 :P). What's more, mere reading degrades flash: current SSDs keep a log of how many times a sector has been read, and reallocate the sector when a threshold trips, because read operations upset the stored values.


It has come down by about 50% in the last year.


http://www.amazon.com/Samsung-2-5-Inch-SATA-Internal-MZ-7TE1...

$422.

I've been looking at this same one for about a year. It's slightly lower than it was at first, I think it was $470 or so.

I need it to be about $100. I'll buy one per month at that price. But at $422 for a terabyte, it's not that useful to me.


http://camelcamelcamel.com/Samsung-2-5-Inch-SATA-Internal-MZ...

Looks like it has dropped less than $100 in the last year, but came down very swiftly in the few months it was on the market before that.


That one will never get there. This might get close: http://www.newegg.com/Product/Product.aspx?Item=N82E16820147...

But you'll likely have to wait one more generation.


Are there issues with booting from PCIe slots? I assume if there are, they'll get sorted out quickly as drives like this become more popular.


Unless your system has very buggy firmware, any PCIe lane is as good as any other, even if it's behind a switch. The real difference here that limits compatibility is that the controller uses the NVMe host controller interface and command set, instead of the AHCI standard used by SATA controllers and previous PCIe-based SSDs. Software (firmware or OS) that doesn't have a NVMe driver won't even recognize this drive as being a storage device.


Would love to upgrade the 128 GB SSD in my mid-2011 MacBook Air :(. Should've upgraded to 256 at least …damn you hindsight :)



You're better off just finding an OEM 256GB drive on eBay or something. You can use any 2010 or 2011 SSD in the 2011 Air, but the 2012 and 2013/2014 Airs are not interchangeable.


I've upgraded my MB Air to a 1TB SSD from Transcend (JetDrive). I think it works with a mid-2011.



