AMD Ryzen Threadripper 1950X and 1920X Review (anandtech.com)
192 points by jsheard on Aug 10, 2017 | 150 comments





I was looking to build myself a new (dual) Xeon workstation, but looking at these specs and performance, I am going to consider Threadripper instead.

With this many threads, ECC support, and 64 PCIe lanes, this CPU looks perfect for my intended workload. It's also gonna be slightly cheaper than a dual Xeon.

Exciting times!


I am in the same boat; one of my use cases is running many, many instances of the same application (100 instances would be good, as I get 70 on my dual-socket E5-2670 v1).

The only problem compared to dual-socket Xeon E5 solutions up to Broadwell (v4) is support for Windows 7, which is a must for my case.

I remain hopeful that it will be possible to run Windows 7 on Threadripper "unofficially".

It might take a PS/2 keyboard and a DVD (USB is not supported on Ryzen for Windows 7 without slipstreaming USB drivers) to install, but it should be doable.


Only slightly cheaper than a 2P Xeon system?


I was not planning on high-end Xeons; I was looking at ~EUR 600 per CPU, so the total for CPUs would have been slightly more than a 1950X. A dual-socket motherboard is also a bit more expensive.

In total I think a single-CPU 1950X is about 500 euro cheaper. So yeah, 'slightly' was the wrong choice of words.


Comparable Broadwell-EP or new Purley chips cost thousands more. They have better features and more cache, but price-wise these platforms are in different universes.


That's what I was getting at :-).


What workload is that? If I may ask.


I am dying to know as well. I have a Ryzen 1700 and my primary workload is compiling Go code, which should, in theory, be fabulously parallel. But in reality it hits all 16 threads maybe for the first 2-3 seconds and then falls back to utilizing only 3-5 cores. Perhaps our project isn't big enough to load up all cores for long...


Large C/C++ code bases (Chromium, Firefox, Linux, FreeBSD, whatever) are really good for this, since all C/C++ files can be compiled independently.
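
To make that concrete, here's a rough sketch of why it parallelizes so well (not any particular project's real build system; the source layout, compiler flags, and the 16-worker count are made up): every translation unit is an independent compile job, and only the final link is serial.

  import glob, subprocess
  from concurrent.futures import ThreadPoolExecutor

  sources = glob.glob("src/**/*.c", recursive=True)   # hypothetical source tree

  def compile_one(src):
      obj = src[:-2] + ".o"
      # Each .c file depends only on its own headers, so all of these
      # compile jobs can run at the same time (like make -j16).
      subprocess.run(["cc", "-O2", "-c", src, "-o", obj], check=True)
      return obj

  with ThreadPoolExecutor(max_workers=16) as pool:
      objects = list(pool.map(compile_one, sources))

  # The link at the end is the mostly-serial part.
  subprocess.run(["cc", "-o", "app", *objects], check=True)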


Heavy virtualisation, running complex infrastructures within minikube, loads of data processing for customers. Also occasional hardware synthesis and video editing (but not professionally)


Unfortunately for me, the fact that rr can't run on Ryzen CPUs is a dealbreaker. :(

https://github.com/mozilla/rr/issues/2034


I am wondering why this isn't getting more attention.

> It appears that the Ryzen PMU just isn't quite accurate enough :-(. rr might work OK for some kinds of usage but I wouldn't recommend it.
>
> I'll land the patches I have with a warning for Ryzen users that things won't be reliable.

Can someone with a technical understanding describe what's going on here? Is this a flat-out bug, or does the x86 architecture spec allow for this?


Awwww :( That's really sad.

rr is important enough for C++ development that I agree it's a dealbreaker.


Barring a major technical problem being uncovered (hopefully the segfault thing won't be an issue) or nefarious action by Intel, I cannot see how this thing can fail. So much power at this price point. As soon as the price drops a bit I'll be replacing my FX-8350 setup with one of these. Go competition! Go AMD!


Considering they didn't give samples to any of the Linux press... ?


AMD changed their mind and are now sending samples to Phoronix (albeit too late for a launch day article).

http://www.phoronix.com/scan.php?page=news_item&px=Ryzen-Seg...

> We will also now be receiving Threadripper and Epyc hardware for testing to confirm their Linux state.


Haha, I'll be waiting to purchase until Linux reviews surface or MicroCenter offers the same discount on TR and i9 chips.


I mean, I could guess one way it'd fail: if they market it heavily to the PC gaming market, and their single-threaded performance isn't as good at similar price points. There are still lots of workloads out there that can't take advantage of 4 threads, let alone 16/32+. I would assume they don't market it heavily to gaming, but it's like the second thing they mention on their main landing page, so... idk.


AMD is already making a big marketing push about "you need 32 lanes for SLI/CrossFire!".

That's utterly false: at PCIe 3.0 speeds, x8/x8 only costs you a few percent at most. Strangely, x16/x16 can actually hurt you in a few titles, which suggests we may just be seeing some other kind of variation/noise anyway.

http://www.gamersnexus.net/guides/2488-pci-e-3-x8-vs-x16-per...

https://www.pugetsystems.com/labs/articles/Titan-X-Performan...

Threadripper is no better than a Ryzen 7 at gaming, and is in fact often worse (often closer to a Ryzen 5). If you really buy into the need for x16/x16 then a 6850K is really the obvious answer: it competes with Ryzen core-for-core in both performance and efficiency, and since it's a HEDT product you get the 40 PCIe lanes as well as quad-channel RAM. Somewhat pricier than Ryzen, granted, but actually much cheaper than Threadripper.


Wasn't the segfault issue determined to be a bug in the PHP steps?


Some of it was, but not all. There are still issues in some chips and some customers are getting RMAs.


One thing I get with Intel is the virtual certainty that the machine will work flawlessly with Linux. Atom, Celeron, Pentium, Core, and Xeon E3 have built-in GPUs that are very well supported.

Before jumping to AMD gear, I'd like to know if I'd have the same "just works" experience. I'm well past the age where I liked to waste time debugging setups.


Ryzens do not have an integrated GPU, at least not at the moment. The CPU itself works out of the box, and even slightly more exotic features like virtualization work flawlessly under KVM.

If you want a Ryzen-based desktop with "just works experience" similar to i7-based Intels, go with a Radeon RX460 or RX560 GPU. It will get picked up by the open source AMDGPU driver in the latest popular desktop distros.


Seconded on the RX460; that's what I put in the work machine, and it worked straight out of the box on Fedora 26.


I have a Ryzen 1700 with an RX460 (it was the cheapest card from the same supplier that would drive my screens) and it's been flawless out of the box with the open source radeon driver.

In fact I didn't have to touch any of the graphics configuration at all; more by chance than anything, the left screen was left and the right screen was right (it's a 50/50 chance and I end up with it the wrong way around 95% of the time).

I'm driving 2x 2560x1440 Dells and it's buttery smooth.

I don't game on that machine (it's a work desktop I built).

The open source radeon driver has improved massively in the last year or two.


Well, it is bleeding edge, and you probably know that Linux needs some time to support that properly. The integrated graphics is a non-issue here: neither X299 nor X399 processors have any. You have to combine them with a GPU, and you are then free to choose one that is properly supported. For what it's worth, the RX line was the first GPU ever that had properly working free Linux drivers on release (but the stronger versions like the RX 570 or 580 are out of stock or overpriced, and thus not a reasonable option right now).

I'd expect the RX 550 or any of the older AMD GPUs with the Mesa driver to be a good choice, but it depends on what you want to do.


The RX 550 won't work without upgrading Mesa, either from source or from 3rd-party repos [1]. I went with an RX 460 instead and it worked flawlessly out of the box with Ubuntu 17.04.

[1] http://www.phoronix.com/scan.php?page=news_item&px=Radeon-RX...


Thanks. Since Mesa is developing that fast, being on a current version is something I assume, but the RX 460 is a good choice anyway.


>I'm well past the age I liked to waste time debugging setups.

Why not outsource that to Dell?


The note about this being the first time consumers will see NUMA systems is interesting. I hope reviewers are familiar with it when they're benchmarking.


Apparently, if you were interested in using these for a DAW, you might want to wait for Tech Report's review[1]. They're late to the party[2] because AMD didn't send them a review kit until TR publicly asked their readers for a CPU to review[3]. Maybe because TR is one of very few publications using DAWBench?

[1] https://twitter.com/jkampman_tr/status/895645729972080640

[2] http://techreport.com/news/32377/here-a-sneak-peek-at-our-ry...

[3] http://techreport.com/news/32343/updated-wanted-for-review-a...

Ryzen is also comparatively poor at DAW workloads.


I haven't been paying tremendously close attention to this line of CPUs, but I'm interested. I've seen indications that it's not particularly stable under Linux. Is that true? Is there a microcode update?


AMD have acknowledged there is an issue with their mainstream Ryzen CPUs that can cause segfaults on Linux, but they claim it doesn't affect their Threadripper or Epyc models.

http://www.phoronix.com/scan.php?page=news_item&px=Ryzen-Seg...


It seems to be related to PHP. A user on Reddit who has an Epyc 7551 CPU gets no segfaults without the PHP portion of the test: https://www.reddit.com/r/Amd/comments/6rmq6q/epyc_7551_minin...


The PHP issue is a separate cause of segfaults that got mistaken for the original Linux segfault issue when Phoronix messed that up in that article. Those segfaults also happen on Intel machines. The Ryzen Linux segfault issue is still not completely clear, but it is confirmed by AMD, can be triggered by gcc, and is not the same as the PHP issue.


People have also reported it when performing parallel compilation using GCC, so it's not limited to PHP.


The segfaults in that particular Reddit thread were caused by a PHP test suite and are not the high-parallel-load type of segfault plaguing Ryzen.

As of now, nobody seems to be able to reproduce the high-parallel-load segfaults on Epyc or Threadripper.

AMD says they can reproduce the problem on Ryzen, and that it is not present in Epyc/Threadripper.


I don't use PHP and have a Ryzen Linux machine that hard freezes every so often (around once per week) with nothing in the logs. I've pretty much given up on debugging it.


You should contact AMD's support and let them figure it out with you. From what I hear they treat that issue seriously by now, and if your processor turns out to be faulty they should offer you an RMA.

Though hard freezes outside of compilation are probably a completely different issue, and it will take some effort to pinpoint the processor as the culprit.


Yeah, we're pretty sure it's not a RAM, PSU, or SATA disk problem (because we had multiples of different kinds to swap around), and we also know that the problem gets much, much worse when using an M.2 NVMe PCIe SSD (as in, hard freezes occur way more frequently, regardless of M.2 model). (And it's also possible that the M.2 issue is a different problem entirely.) But it could still easily be some kind of motherboard issue. And with it only happening around every 9 days on average... We could contact AMD, but how the heck do we ever narrow down what's going on if we can't trigger it more frequently? And that's time that my actual job isn't getting done... (I'm personally inclined to think the mobo is at fault.)


Are you monitoring your temps?


The PHP portion segfaults on other CPUs because it's not related to the real problem.


In the LinusTechTips review he mentioned that only the top 5% of dies go into Threadripper. So if it's related to the quality of the Ryzen silicon, then that might make sense. I'd be surprised if Threadripper isn't plagued by this bug, though.


Except the Threadripper dies are closely related to the Epyc dies (ECC, PCIe lanes, etc.), not the Ryzen cores.


No, you've got that wrong.

TR uses the same Zeppelin dies as Ryzen. Just the top 5% binned parts.

Epyc uses a newer die stepping that isn't yet used for Ryzen or TR.


Thanks for the clarification!



I'm not sure what you're asking - do you want all cores to run at max speed, or do you want high-clock single-threaded performance when only using a single core? If it's the latter:

http://agner.org/optimize/blog/read.php?i=838 "Test results for AMD Ryzen":

> The Ryzen is saving power quite aggressively. Unused units are clock gated, and the clock frequency is varying quite dramatically with the workload and the temperature. In my tests, I often saw a clock frequency as low as 8% of the nominal frequency in cases where disk access was the limiting factor, while the clock frequency could be as high as 114% of the nominal frequency after a very long sequence of CPU-intensive code. Such a high frequency cannot be obtained if all eight cores are active because of the increase in temperature.
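
If you want to see that behaviour for yourself, one option is to just sample the per-core clocks while a single-threaded job runs. A minimal sketch using psutil (per-core readings generally only work on Linux, and the 30-sample window is arbitrary):

  import time
  import psutil

  # Print current per-core clocks once a second. Start a single-threaded
  # benchmark alongside and watch one core boost while the others idle down.
  for _ in range(30):
      freqs = psutil.cpu_freq(percpu=True)    # may be a single entry on non-Linux
      print(" ".join(f"{f.current:6.0f}" for f in freqs), "MHz")
      time.sleep(1)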


Does anyone have a theory for why Ryzens are beaten so badly by Intel in the Chromium compile benchmark?


I think with GCC or Clang Threadripper should be much better (see the link in my other reply).

Ars Technica also reported much better numbers; I think AnandTech either had a bad config or there is a bug in the compiler they used.

https://arstechnica.co.uk/gadgets/2017/08/amd-threadripper-r...


The arstechnica numbers surprise me, given the sheer difference between the Ryzen 7 1800X and the Threadripper 1920X compared with the 1920X and the 1950X; either you've reached more or less peak parallelism by 16 cores, or the larger L3 cache and quad-channel memory makes the difference. Or something's changed in the benchmark setup (be it the compiler or whatever) since the 1800X was done.


On Twitter, the reviewer blamed it on Ryzen's L3 cache being a victim cache [1] (which I don't really understand; maybe it's related to having ephemeral data structures displace long-lived ones?). I think it's also been stated elsewhere that the Intel chips dominate during linking, which is a large chunk of the overall build time for Chromium.

[1] https://twitter.com/IanCutress/status/868799386079420416


Cross-core communication


Actually it's a problem with Visual Studio; Ryzen utterly destroys the competition when it comes to compilation. Unfortunately AnandTech does not test with GCC or Clang.

http://www.phoronix.com/scan.php?page=article&item=ryzen-kab...


Well, for multi-core workloads AMD is a clear winner. Other features are nice too: ECC memory, no thermal paste nonsense. I'm really waiting for future Intel offerings; they must do something extraordinary to regain the lead.


> Well, for multi-core workloads AMD is a clear winner.

Not really. It's actually slower in x265 encoding than a 7900X, and it doesn't really pull away much in x264 or Blender rendering either (10-25% over a 7900X). It also appears to perform pretty badly at some compilation workloads: a 1950X barely beats a 6900K at compiling Chromium with MSVS.

At most you can say that you really need to look at the specific task. Of course there may still be some tuning, but right now it's certainly not the slam-dunk that everyone assumed it would be.

http://www.anandtech.com/show/11697/the-amd-ryzen-threadripp...

http://www.anandtech.com/show/11697/the-amd-ryzen-threadripp...

http://www.anandtech.com/show/11697/the-amd-ryzen-threadripp...

https://www.overclock3d.net/reviews/cpu_mainboard/asus_x399_...

https://www.overclock3d.net/reviews/cpu_mainboard/asus_x399_...

Not a particularly great showing overall for a processor with 60% more cores. Despite AMD's attempted pushback it appears Intel's smack-talk was correct and Infinity Fabric is not a magic panacea for NUMA performance problems.

It also pulls an absolutely absurd amount of power to do it, literally more than a 7900X. The onboard package-power measurement appears to be drastically undershooting the power as measured at the wall. Even factoring out PSU efficiency losses, measuring inside the case something is eating at least 130W that isn't showing up as package power.

https://www.overclock3d.net/reviews/cpu_mainboard/asus_x399_...

http://www.anandtech.com/show/11697/the-amd-ryzen-threadripp...


As somebody who uses a lot of NVMe storage, it is nice to see expanded PCIe lane options. I can really only put one more drive into my current Intel box.

NVMe SSD drives feel like the only _large_ performance change this decade.


Having to reboot to enable Game Mode really sucks. Usually I game after I finish working or maybe in between while I'm waiting for a download. I don't want to have to re-open all my programs and set up my desktop yet again after I'm done gaming.

I wonder if this will be fixed with software later on or we'll have to wait for the next model of Threadripper.


Any chance of just hibernating the system and then resuming it?


No way. It changes the number of CPU cores exposed to the kernel.


Hmm... I'm sure this is manageable. Windows Server supports hot add/remove of CPUs and RAM. If it can do it hot, why not cold? Windows' Plug and Play service is continuously polling hardware and reconfiguring the system.

I have a feeling Microsoft might have a solution in the works for this.


In theory, using Process Lasso to tie the game to 8 specific cores on a single die should produce equivalent results.
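
For what it's worth, the same pinning can be done without extra software. A rough sketch with psutil, with the caveat that the process name and the assumption that the first die's logical CPUs are numbered 0-15 are mine, not anything AMD documents - check your actual topology first:

  import psutil

  FIRST_DIE = list(range(16))    # assumed: die 0 = logical CPUs 0-15 (8 cores + SMT)

  # Find the running game by name (placeholder name) and restrict it to one die,
  # which is roughly what Process Lasso or Game Mode does.
  game = next(p for p in psutil.process_iter(["name"])
              if p.info["name"] == "game.exe")
  game.cpu_affinity(FIRST_DIE)
  print("pinned PID", game.pid, "to CPUs", game.cpu_affinity())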


I hope they do not suffer the same issues the already released Ryzen 7 and Ryzen 5 series CPUs are having.

I'm also cautiously optimistic about EPYC, the server-grade CPU that AMD is releasing soon, although in this world of "per-core" licensing costs there is a strong need for fewer cores and more single-thread performance in the server.


AMD specifically claims they do not suffer from the gcc segfault thing. So I guess it depends on what you mean by "issues?"


AMD claims it, but until I understand the issue better I'm a little skeptical. We'll see as these parts make their way into the hands of consumers who will run these tests.


Per-core licensing is just ridiculous nowadays and I hope AMD's Epyc is the turning point to revise it.


> Given that Threadripper is a consumer focused product – and interestingly, not really a workstation focused product

Huh. Will AMD workstations be built around similar Epyc products, then? But those would have worse power consumption, right? (4 dies versus 2)


Epyc has lower clock speeds and slightly lower TDP.

The fastest 16-core Epyc has 2.4/2.9 GHz (base/boost) while TR has 3.4/4.0 (4.2 GHz with XFR, also 16 cores).


This could actually be huge news for desktop-based rendering, video editing, and CGI of any kind.

Being able to render on 64 threads for just the price of 2 high-end graphics cards seriously makes me consider sticking with CPU rendering for my next workstation - even at a lower frequency per core, the time savings could be substantial.


How long until we can get chips with this many cores while still maintaining the clock speed needed for good single threaded performance?


I mean, it's a tradeoff. It will always be a tradeoff, so the answer you're going to get is "never". There will always be fewer cores vs. a higher base clock (I'd imagine).

However, the clock speeds of Ryzen/TR/Epyc/whatever are more than enough for a "good" workload. In fact you could argue that unless you're doing something which requires only one single core (and honestly, I can't envisage a workload like this in a professional environment, but I'm surely wrong) the speed is fantastic and the difference not really noticeable.

It's incredibly easy these days, at least in .NET, to parallelize workloads locally, so using something like this would outshine any 4 GHz base 8-core any day of the week. But to actually answer your question as to when we can see 4 GHz+ base clocks on 16/32-core processors... the next 5 years? I guess...
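
Same fan-out idea sketched in Python rather than .NET, just to illustrate; the crunch() function is only a stand-in for a real CPU-bound task:

  import os
  from concurrent.futures import ProcessPoolExecutor

  def crunch(n):
      # Stand-in for real CPU-bound work (encoding, hashing, simulation, ...).
      return sum(i * i for i in range(n))

  if __name__ == "__main__":
      jobs = [2_000_000] * 64                      # 64 independent work items
      with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
          results = list(pool.map(crunch, jobs))   # one worker per logical core
      print(len(results), "chunks done on", os.cpu_count(), "logical cores")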


Yep, and the bottleneck moves around as well. IntelliJ used to take an age to do a full index rebuild on large multi-language projects; on the Ryzen 1700 I have, it pegs all the cores at 100% for a few seconds and is done, and the bottleneck now seems to be back on disk access.

I pigz'd a 5GB archive in <20s and the bottleneck there was the SSD again (Samsung Evo 850, but it was a lot of small files and ~60,000 PDFs (don't ask...)).

The Ryzen 1700 paired with a good SSD is a dream for development workloads.


If the other cores aren't doing useful work, they'll be powered down to let the remaining core run faster. Not perfectly, but the extra millimeters of surface contact with the cooler that the extra cores provide probably make them a net benefit in that case.


These chips go up to 4.2 GHz, if you're using 4 cores or less and they can stay within their power & thermal envelope for your workload.


No question about it - this is cool technology. I will be building an ECC based Threadripper this fall.


My next plex server for sure.


I don’t really see the point of using this for a Plex server. Unless you’re doing massive amounts of transcoding 24/7


My Ryzen system can do 2-pass 1080p24 x264 encoding on "slower" preset in real-time. That is, both passes finish in the time it would take to play the movie. It would only make sense for streaming if you had >10 users at a time, and at that point it would probably save a lot of work if you just did the transcoding ahead of time and kept the right format on disk!


Do you actually gain anything from two-pass encoding for streaming? It helps with file sizes but not with momentary bandwidth requirements, as far as I know.


It helps if you have a target filesize, or equivalently an average bitrate. The first pass sets the "budget" that each part of the video gets to use out of the total, and the second pass has to make the best encoding it can that fits in the budget for each part. This lets you re-allocate bits from easy parts and give harder parts a little extra.

Edit: so no, it doesn't produce any of the final video during the first pass so you're just adding a huge amount of latency for nothing.
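
For reference, a typical two-pass invocation looks something like this (sketched through ffmpeg's libx264 wrapper; the file names and the 5 Mbit/s target are placeholders):

  import subprocess

  SRC, OUT, BITRATE = "movie.mkv", "movie_2pass.mp4", "5000k"   # placeholders

  # Pass 1: analyze only - write the stats file, throw the video away.
  subprocess.run(["ffmpeg", "-y", "-i", SRC, "-c:v", "libx264", "-b:v", BITRATE,
                  "-pass", "1", "-an", "-f", "null", "/dev/null"], check=True)

  # Pass 2: the real encode, spending the bit budget recorded in pass 1.
  subprocess.run(["ffmpeg", "-i", SRC, "-c:v", "libx264", "-b:v", BITRATE,
                  "-pass", "2", "-c:a", "copy", OUT], check=True)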


A little bit. Configured for constant bit-rates, x264 can still do a bit of jiggling on the assumption that there's a buffer, which helps with quality in general. Two-pass gives it information to do that better.

But constant-quality modes are far superior when you're not bandwidth-constrained.


Single-pass will also juggle within a seconds-long buffer. Does two-pass help on that micro level, in addition to the way it can allocate bits across an entire file?


x264 has a ton of settings for that kind of thing. This page has a good summary: https://en.wikibooks.org/wiki/MeGUI/x264_Settings#qpstep


I'd do that, but with the amount of data I have I couldn't keep extra copies. Takes up too much space.


Maybe you have 4K source video and you happen to watch it sometimes on your mobile phone or 1080P TV. You're gonna need a good CPU.


Transcoding is what I need it for.


My plex server is an rpi. What are you doing in plex that requires such horsepower?


Transcoding on the fly takes juice


That much?


Yes, it's expensive to re-encode video to a different resolution or quality.


I realize it's expensive, but does it require 12 cores/24 threads running at almost 3.5 GHz? I just find that hard to believe, sorry.


Well when you've got your plex account shared with 10 or so people it adds up.


ohh, i see


He is using ~$1000 of hardware to save on a ~$100 hard drive.


16TB of video, actual files, not raw storage, and I need to transcode it, so you're saying just transcode that data and store duplicates? Yeah not a ~$100 hard drive.


Sorry, ~$180 (8TB storage pod, for example STEB8000100).


Whatever man, that's just a single hard drive. I've got 8 hard drives set up in a custom RAID. I can't just willy-nilly buy an off-the-shelf USB drive to add storage to it. You come off so high and mighty, not knowing someone's actual needs.


Sounds about right.


In an attempt to be as independent of third-party services as possible I switched from Plex to Emby and I see no reason to go back. You can even stream media outside your home network for free.

edit: https://emby.media/


So have there been any announcements about whether there will be dual-CPU Threadripper mobos in the future? Does the CPU even support it?


I don't think they plan to support dual Threadripper, because single Epyc does more or less the same thing without the inter-socket latency or expense of a dual-socket motherboard.

It's effectively two Threadrippers on a single package (2x cores, 2x memory channels, 2x PCIe lanes) for only slightly more than double the price ($999 for 16C TR, $2100 for 32C Epyc).


The cheapest 32-core Epyc is ~USD 3200, with a 2 GHz base clock. I'd much rather have 2x Threadripper for USD 2K with a base clock of 3.4. I don't think there are any Epyc features that are relevant for my compute loads - like the security stuff or guaranteed x years of support.

Either way it'll be an upgrade from my 4 core Xeon :)


The $3400 model you saw is the dual-socket capable version, the single-socket version (EPYC 7551P) is only $2100. Point taken about the clock speeds though.

http://www.anandtech.com/show/11551/amds-future-in-servers-n...


Oh I see, I didn't realize that difference. Well, if they can build dual-socket-capable Epyc CPUs for 150% of the price, I'll keep hoping they'll do the same for Threadripper :) Although at that point you're right, the price would be the same as a single Epyc (1k * 1.5 * 2).


Is Threadripper the first CPU that can support 2+ NVME (m.2) cards due to all of the PCI lanes it has?


I'm going to answer my own question.

From TFA:

"With Threadripper, you can run two graphics cards at X16 PCIe speeds, two at X8, and still have enough lanes left over for three X4 NVMe SSDs connected directly to the CPU."

https://arstechnica.co.uk/gadgets/2017/08/amd-threadripper-r...


Good god.

So you can stick four high end cards in with 3 X4's...

That would be an absolute monster for GPU workloads.


Threadripper is an enthusiast/workstation SKU. If you really want to go nuclear, Epyc gets you 128 lanes. Boom.

(Also note that Threadripper is only 60 lanes, the other 4 go to the PCH, so the most you could do is x16x16x16x12. I'm unsure if Epyc gives you a real 128 or whether it's actually 120 or something.)


Next up, 512 lanes should be enough for anybody ;).


Most PCIE NVME SSDs use 4 lanes right now. So theoretically CPUs could support (total PCIE lanes / 4). It usually boils down to motherboard configuration or add-on card support for how many you can practically get in a system though.


The only M.2 key with PCIe x4 is the 'M' key.

Everything else is max x2.


That's what NVMe SSDs use.


Intel 7900X with an X299 board can do that too, with UEFI-level RAID on top of that (might be important for Windows users, not important at all for Linux users).


The other cool thing is, (given that these will end up in a lot of servers) - instead of graphics card, read CUDA card.


Do any of the Ryzen or Threadripper skus have on-board graphics?


I feel like there have been unrealistic expectations of Vega compared to the expectations of how Ryzen was going to perform. Most people expected Ryzen to do OK but still lose to Intel by a 2-generation gap, and here we are at what, 5-10% behind the Skylake/Kaby Lake refreshes. With Vega everyone wants it to beat the GTX 1080 Ti, but at best we might get something in between a 1080 and a 1080 Ti for less money, which isn't a fail in my view. The waiting however has sucked and has forced a lot of people to go team green who otherwise wouldn't have had to.


That, combined with the fact that all the new AMD GPUs (580/570/480/470) are out of stock literally EVERYWHERE (a more recent issue), means a lot of people bought Nvidia GPUs.


Have to +1 that. The current AMD GPUs simply cannot be ordered, and that's not a unique occurrence specific to this release.


GTX 1070s are often sold out as well, they're popular for mining too. You can only buy lower end GTX 1050 Ti / RX 560 stuff or the 1080/1080 Ti.


1060s are back in stock. It's specifically only the 1070 that is problematic at this point, and you can actually find them although they do sell out pretty quick.

However both 1060s and 1080s have yet to return to their pre-bubble prices and are hanging out close to MSRP again.


The issue is the 1080 is 1.5 years old.


And more power efficient to boot.


When is NVIDIA's next gen due?


Either later this year or early next year depending on which rumor site you choose to believe.


I've even heard rumors that early next year will be the Quadros and Teslas, gaming cards in late next year.


Volta V100 is already on sale in NVLink/DGX form factor; they've been handing out the PCIe version to researchers for a while now during events, and it will go on sale in about a month.


Will Intel buy NVIDIA? Stay tuned!


Comedy option: Intel makes the offer and Jen-Hsun demands to be CEO of the resulting company again.

(for those not in the know: AMD made an offer to buy out NVIDIA a decade ago, and it fell through for that reason: NVIDIA's CEO Jen-Hsun Huang demanded to be CEO of the resulting company. The funny thing is that he would have been a much better CEO than Dirk Meyer or Hector Ruiz, but Ruiz was not about to give up his CEO seat. AMD ended up buying ATI instead.)

Double comedy option: Intel agrees to it.


Now, what would be the name of the new company? nvintel? intellidia?


Nah, you're missing the obvious here. Now that AMD has THREADRIPPA and EPYC, the way has been cleared for other 90s-gamer-chic names.

The merger of Intel and NVIDIA would obviously result in a company literally named "Chipzilla".

They partner with rapper DMX for a new promotion for their next series of Extreme Edition chips, Jen-Hsun walks onstage to the tune of "-X gonna give it to ya". As a snub to the Vega t-shirts, everyone in the audience receives a free Coffee Lake-X leather jacket.


They can't afford them.


Yeah, but with 64 PCIe lanes and Vulkan, that 1080 Ti, and even 2 in SLI, is starting to look a bit old-fashioned. I think AMD is actually ahead of the paradigm shift.


The 1080ti has absolutely no competition from AMD in terms of performance/power and availability for most workloads; calling it "old fashioned" is... out of touch. And Nvidia is already on the move with Volta, which AMD will seemingly have zero answer for. They're running circles around themselves at this point.

If anything, Ryzen's increased PCIe lanes are a reason to go for nvidia. You can get Threadripper and mobo at a much smaller cost than a Xeon rig, with more lanes, but just load it with Pascal GPUs. And they'll have better performance, power, and be easier to buy than any Vega card. This would do quite well for a deep learning machine, for example.

The place Vega has advantages are in more niche areas, like open source Radeon drivers, fully unlocked FP16 support (garbage market segmentation from Nvidia), and the Instinct/SSG GPU lines they'll be offering will be unique. And they're definitely cheaper, for sure. (But unless AMD can back it up with software, it won't mean much, especially for markets like deep learning, which NVidia is going full-force.)


How is Vega different from the 1080 in this regard? Is it designed with more parallelism in mind?


Vulkan was based on an API design by AMD, but that doesn't mean it works better with AMD hardware. From Mantle to Vulkan there have been many small changes to make it perform optimally on pretty much all modern GPU architectures.


I remember a time when AMD and ATI were not one company and the latter was always head-to-head with NVidia. At which point did mediocrity spread from AMD?


> At which point did mediocrity spread from AMD?

Based on the benchmarks, the $500 AMD Ryzen 7 matches the performance of the $1723 Intel i7 6950x.

Is this what you call mediocre? Getting the same performance for less than 1/3 of the price is being mediocre nowadays?


I have a Ryzen 1700 in the desktop I built at my new job; its performance per £ is simply staggering.

It's the first time since I went from an AMD64 3000 to my first i5 (2500K) that I've genuinely been staggered by performance, and the desktop at my previous job was an OC'd 3570K.

Frankly the Ryzen 1700 is incredible, so much so that I'm already planning on building a similar machine for home.

All that said, I'm also really impressed with the i7-7700K in the new laptop; the amount of processing power Intel has shoved into a laptop is impressive as well.

Competition is good, and it will be interesting to see what the mobile Ryzens are capable of.


> I have a Ryzen 1700 in the desktop I built at my new job; its performance per £ is simply staggering.

I can confirm that; I bought a new desktop PC with a 1700 for home use around Easter, and on CPU-bound workloads with sufficient parallelism (e.g. video encoding), that thing is just insanely fast.

It's a bit of a shame, really, that most of the time it just sits there doing almost nothing. ;-)


Better to have it and not need it than need it and not have it.

I hammer mine though and it handles it with complete aplomb.


GGP specifically addressed this, no? Broadwell is ~two releases behind (and if you're explicitly looking at an Intel -E chip for cost comparison.... that doesn't make an awful lot of sense given that their E lines are basically always sky high prices compared to other Intel chips).


> GGP specifically addressed this, no? Broadwell is ~two releases behind

That's entirely irrelevant. It doesn't matter if those processors were released last year or in the last century. It's Intel's current and best offer, and the facts speak for themselves: AMD's current offering matches Intel's performance while trouncing Intel in price/performance.

> that doesn't make an awful lot of sense given that their E lines are basically always sky high prices compared to other Intel chips).

That's Intel's problem. It makes no sense to try to downplay AMD's stellar performance just because Intel does some major price gouging.


Intel will most likely have no advantage in usable performance on the 12-, 14-, 16-, and 18-core chips; they use too much power and run way too hot, so they will need lower clocks. The 1950X will most likely clock around 4 GHz on air on all cores, while the 12-, 14-, 16-, and 18-core Intel chips will need a custom loop or really low clocks. Intel likes to do fuckery, and the only way I can see them making those chips perform well is to turbo up 4 or 6 cores for gaming etc.


> clock around 4 GHz on air on all cores

That must be a really high end air cooler. My R7 1700 runs 4GHz on a 280mm AIO water cooler. Sure the 1950X dies are better binned so they need less voltage on average, but with literally 2x the core count, that's gonna be quite a lot of heat.

Yeah, Skylake-X is a power monster, all the memes about "Core i9 best for burning your house down" are funny, but SIXTEEN Zen cores at 4GHz would not be cool and quiet either :D


> That must be a really high end air cooler. My R7 1700 runs 4GHz on a 280mm AIO water cooler.

As you note, Threadripper is the best 5% of R7 chips, some of them are going as high as 4.2 GHz. But there appears to be a surprising amount of chip-to-chip variation even still.

> Yeah, Skylake-X is a power monster, all the memes about "Core i9 best for burning your house down" are funny, but SIXTEEN Zen cores at 4GHz would not be cool and quiet either :D

You don't know the half of it. Check out these power numbers:

https://www.overclock3d.net/reviews/cpu_mainboard/asus_x399_...


> That must be a really high end air cooler.

Good that Noctua will release their Threadripper coolers like the NH-14S next week. I wonder why they didn't release their NH-D15 (which beats many AIOs) for Threadripper as well, since it's even better than the NH-14S.


To say nothing of the AMD $$ advantage


TL;DR: clear winner in multithreaded productivity, Intel better in single-threaded productivity, around a Ryzen 1800X in gaming (in Game Mode, which tries to pin threads to one die at a time and disables SMT), great TDP (stays under 180 W at full load), has some teething issues with the NUMA nature of two dies, but is overall a great processor.



