I was planning to build myself a new (dual) Xeon workstation, but looking at these specs and this performance, I am going to consider Threadripper instead.
With this many threads, ECC support, and 64 PCIe lanes, this CPU looks perfect for my intended workload. It's also gonna be slightly cheaper than a dual Xeon.
I am in the same boat; one of my use cases is running many, many instances of the same application (100 instances would be good, as I get 70 on my dual-socket E5-2670 v1).
The only problem compared to dual-socket Xeon E5 solutions up to Broadwell (v4) is support for Windows 7, which is a must in my case.
I remain hopeful that it will be possible to run Windows 7 on Threadripper "unofficially".
It might take a PS/2 keyboard and a DVD to install (USB is not supported on Ryzen for Windows 7 without slipstreaming USB drivers), but it should be doable.
I was not planning on high-end Xeons; I was looking at ~EUR 600 per CPU, so the total for CPUs would have been slightly more than a 1950X. A dual-socket motherboard is also a bit more expensive.
In total I think a single-CPU 1950X build is about 500 euro cheaper. So yeah, 'slightly' was the wrong choice of words.
Comparable Broadwell-EP or new Purley chips cost thousands more. They have better features and more cache, but price-wise these platforms are in different universes.
I am dying to know as well. I have a Ryzen 1700 and my primary workload is compiling Go code, which should, in theory, be fabulously parallel. But in reality it hits all 16 threads for maybe the first 2-3 seconds and then falls back to utilizing only 3-5 cores. Perhaps our project isn't big enough to load up all cores for long...
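(One way to see where the parallelism goes: the go tool can print each build step, and -p caps how many build jobs run at once. Since -p already defaults to GOMAXPROCS, raising it mostly just confirms whether the tail of the build is a serial step like the final link. A minimal check, assuming a standard Go toolchain on your PATH:

    # -x prints each build command as it runs; -p caps parallel build jobs
    go build -x -p 16 ./...

If the last stretch of output is a single link invocation, that would explain the 3-5 core tail.)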
Heavy virtualisation, running complex infrastructures within minikube, and loads of data processing for customers. Also occasional hardware synthesis and video editing (but not professionally).
I am wondering why this isn't getting more attention.
> It appears that the Ryzen PMU just isn't quite accurate enough :-(. rr
> might work OK for some kinds of usage but I wouldn't recommend it.
>
> I'll land the patches I have with a warning for Ryzen users that things
> won't be reliable.
Can someone with a technical understanding describe what's going on here?
Is this a flat-out bug, or does the x86 architecture spec allow for this?
Barring a major technical problem being uncovered (hopefully the segfault thing won't be an issue) or nefarious action by Intel, I cannot see how this thing can fail. So much power at the price point. As soon as the price drops a bit I'll be replacing my FX-8350 setup with one of these. Go competition! Go AMD!
I mean, I could guess one way it'd fail: if they market it heavily to the PC gaming market, and their single-threaded performance isn't as good at similar price points. There are still lots of workloads out there that can't take advantage of 4 threads, let alone 16/32+. I would assume they don't market it heavily to gaming, but it's like the second thing they mention on their main landing page, so... idk.
AMD is already making a big marketing push about "you need 32 lanes for SLI/CrossFire!".
Which is utterly false, at 3.0 speeds x8x8 only costs you a few percent at most. Strangely x16x16 can actually hurt you in a few titles, which suggests we may just be seeing some other kind of variation/noise anyway.
Threadripper is no better than a Ryzen 7 at gaming, and in fact is often worse (often closer to a Ryzen 5). If you really buy into the need for x16x16, then getting a 6850K is really the obvious answer: it competes with Ryzen core-for-core in both performance and efficiency, and since it's a HEDT product you get the 40 PCIe lanes as well as quad-channel RAM. Somewhat pricier than Ryzen, granted, but actually much cheaper than Threadripper.
One thing I get with Intel is the virtual certainty that the machine will work flawlessly with Linux. Atom, Celeron, Pentium, Core, and Xeon E3 have built-in GPUs that are very well supported.
Before jumping to AMD gear, I'd like to know if I'd have the same "just works" experience. I'm well past the age when I liked wasting time debugging setups.
Ryzens do not have an integrated GPU, at least not at the moment. The CPU itself works out of the box, and even slightly more exotic features like virtualization are flawless under KVM.
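(If you want to sanity-check a box yourself, assuming a Linux host, the virtualization support is easy to verify:

    # AMD's hardware virtualization flag is "svm" (Intel's is "vmx")
    grep -c svm /proc/cpuinfo
    # confirm the AMD KVM kernel module is loaded
    lsmod | grep kvm_amd

A non-zero count from the first command plus a loaded kvm_amd module means KVM is good to go.)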
If you want a Ryzen-based desktop with "just works experience" similar to i7-based Intels, go with a Radeon RX460 or RX560 GPU. It will get picked up by the open source AMDGPU driver in the latest popular desktop distros.
I have a Ryzen 1700 with an RX460 (it was the cheapest card from the same supplier that would drive my screens) and it's been flawless out of the box with the open source radeon driver.
In fact I didn't have to touch any of the graphics settings at all; more by chance than anything, the left screen was left and the right screen was right (it's 50/50, and I end up with it the wrong way around 95% of the time).
I'm driving 2x 2560x1440 Dells and it's buttery smooth.
I don't game on that machine (it's a work desktop I built).
The open source radeon driver has improved massively in the last year or two.
Well, it is bleeding edge, and you probably know that Linux needs some time to support that properly. The integrated graphics is a non-issue here: neither X299 nor X399 processors have any. You have to combine them with a GPU, and then you are free to choose one that is properly supported. For what it's worth, the RX line was the first GPU ever that had properly working free Linux drivers on release (but the stronger versions like the RX 570 or 580 are not in stock or are overpriced, thus not a reasonable option right now).
I'd expect the RX 550 or any of the older AMD GPUs with the mesa driver to be a good choice, but it depends on what you want to do.
The RX 550 won't work without upgrading mesa either from source or from 3rd-party repos [1]. I went with an RX 460 instead and it worked flawlessly out of the box with Ubuntu 17.04.
The note about this being the first time consumers will see NUMA systems is interesting. I hope reviewers are familiar with it when they're benchmarking.
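For anyone benchmarking on it themselves, the usual way to separate NUMA effects from raw core scaling is to pin a run to one die and compare against an unpinned run. A sketch, assuming numactl is installed and ./bench stands in for your workload:

    # inspect the node/CPU/memory layout first
    numactl --hardware
    # run entirely on node 0: both CPU placement and memory allocation
    numactl --cpunodebind=0 --membind=0 ./bench

A large gap between the pinned and unpinned runs points at cross-die memory traffic rather than the cores themselves.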
Apparently, if you were interested in using these for a DAW, you might want to wait for Tech Report's review[1]. They're late to the party[2] because AMD didn't send them a review kit until TR publicly asked their readers for a CPU to review[3]. Maybe because TR is one of very few publications using DAWBench?
I haven't been paying tremendously close attention to this line of CPUs, but I'm interested. I've seen indications that it's not particularly stable under Linux. Is that true? Is there a microcode update?
AMD have acknowledged there is an issue with their mainstream Ryzen CPUs that can cause segfaults on Linux, but they claim it doesn't affect their Threadripper or Epyc models.
The PHP issue is a separate cause of segfaults that got mistaken for the original Linux segfault issue when Phoronix conflated the two in that article. Those segfaults also happen on Intel machines. The Ryzen Linux segfault issue is still not completely understood, but it is confirmed by AMD, can be triggered by GCC, and is not the same as the PHP issue.
I don't use PHP and have a Ryzen Linux machine that hard freezes every so often (around once per week) with nothing in the logs. I have pretty much given up on debugging it.
You should contact AMD's support and let them figure it out with you. From what I hear they take that issue seriously by now, and if your processor turns out to be faulty they should offer you an RMA.
Though hard freezes outside of compilation should be a completely different issue, and it will take some effort to pinpoint the processor as responsible.
Yeah, we're pretty sure it's not a RAM, PSU, or SATA disk problem (because we had multiples of different kinds to swap around), and we also know that the problem gets much, much worse when using an M.2 NVMe PCIe SSD (as in, hard freezes occur way more frequently, regardless of M.2 model). (And it's also possible that the M.2 issue is a different problem entirely.) But it could still easily be some kind of motherboard issue. And with it only happening around every 9 days on average... We could contact AMD, but how the heck do we ever narrow down what's going on if we can't reproduce it more frequently? And that's time that my actual job isn't getting done...
(I'm personally inclined to think the mobo is at fault.)
In the LinusTechTips review he mentioned only the top 5% of dies go into Threadripper. So if it's related to the quality of the Ryzen dies, then that might make sense. I'd be surprised if Threadripper isn't plagued by this bug, though.
I'm not sure what you're asking: do you want all cores to run at max speed, or do you want high-clock single-threaded performance when only using a single core? If it's the latter:
> The Ryzen is saving power quite aggressively. Unused units are clock gated, and the clock frequency is varying quite dramatically with the workload and the temperature. In my tests, I often saw a clock frequency as low as 8% of the nominal frequency in cases where disk access was the limiting factor, while the clock frequency could be as high as 114% of the nominal frequency after a very long sequence of CPU-intensive code. Such a high frequency cannot be obtained if all eight cores are active because of the increase in temperature.
The arstechnica numbers surprise me, given the sheer difference between the Ryzen 7 1800X and the Threadripper 1920X compared with the 1920X and the 1950X; either you've reached more or less peak parallelism by 16 cores, or the larger L3 cache and quad-channel memory makes the difference. Or something's changed in the benchmark setup (be it the compiler or whatever) since the 1800X was done.
On Twitter, the reviewer blamed it on Ryzen's L3 cache being a victim cache [1] (which I don't really understand; maybe it's related to having ephemeral data structures displace long-lived ones?). I think it's also been stated elsewhere that the Intel chips dominate during linking, which is a large chunk of the overall build time for Chromium.
Actually it's a problem with Visual Studio; Ryzen utterly destroys the competition when it comes to compilation. Unfortunately AnandTech does not test with GCC or Clang.
Well, for multi-core workloads AMD is a clear winner. Other features are nice too: ECC memory, no thermal paste nonsense. I'm really curious about future Intel offerings; they must do something extraordinary to regain the lead.
> Well, for multi-core workloads AMD is a clear winner.
Not really. It's actually slower in x265 encoding than a 7900X, and it doesn't really pull away much in x264 or Blender rendering either (10-25% over a 7900X). It also appears to perform pretty badly at some compilation workloads: a 1950X barely beats a 6900K at compiling Chromium with MSVS.
At most you can say that you really need to look at the specific task. Of course there may still be some tuning, but right now it's certainly not the slam-dunk that everyone assumed it would be.
Not a particularly great showing overall for a processor with 60% more cores. Despite AMD's attempted pushback, it appears Intel's smack-talk was correct and Infinity Fabric is not a panacea for NUMA performance problems.
It also pulls an absolutely absurd amount of power to do it, literally more than a 7900X. The onboard package-power measurement appears to be drastically undershooting the power as measured at the wall. Even factoring out PSU efficiency losses and measuring inside the case, something is eating at least 130W that isn't showing up as package power.
Having to reboot to enable Game Mode really sucks. Usually I game after I finish working or maybe in between while I'm waiting for a download. I don't want to have to re-open all my programs and set up my desktop yet again after I'm done gaming.
I wonder if this will be fixed in software later on or if we'll have to wait for the next model of Threadripper.
Hmm... I'm sure this is manageable. Windows Server supports hot add/remove of CPUs and RAM. If it can do it hot, why not cold? The Windows Plug and Play service is continuously polling hardware and reconfiguring the system.
I have a feeling Microsoft might have a solution in the works for this.
I hope they do not suffer the same issues the already released Ryzen 7 and Ryzen 5 series CPUs are having.
I'm also cautiously optimistic about EPYC, the server-grade CPU that AMD is releasing soon, although in this world of per-core licensing costs there is a strong need for fewer cores and more single-thread performance in the server.
AMD claims it, but until I understand the issue better I'm a little skeptical. We'll see as these parts make their way into the hands of consumers who will run these tests.
This could actually be huge news for desktop-based rendering, video editing, and CGI of any kind.
Being able to render on 64 threads for just the price of 2 high-end graphics cards seriously makes me consider sticking with CPU rendering for my next workstation; even at a lower frequency per core, the time savings could be substantial.
I mean, it's a tradeoff. It will always be a tradeoff, so the answer you're going to get is "never". There will always be fewer cores vs. a higher base clock (I'd imagine).
However, the clock speeds of Ryzen/TR/Epyc/whatever are more than enough for a "good" workload. In fact you could argue that unless you're doing something which requires only one single core (and honestly, I can't envisage a workload like this in a professional environment, but I'm surely wrong), the speed is fantastic and the deficit is not really noticeable.
It's incredibly easy these days, at least in .NET, to parallelize workloads locally, so using something like this would outshine any 4 GHz base 8-core any day of the week. But to actually answer your question as to when we can see 4 GHz+ base clocks on 16/32-core processors... the next 5 years? I guess...
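For what it's worth, the same pattern is just as short in Go (roughly what .NET's Parallel.ForEach gives you). A minimal worker-pool sketch; process() and the job list are made up for illustration:

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    // process stands in for whatever per-item work you actually have.
    func process(n int) int { return n * n }

    func main() {
        jobs := make(chan int)
        results := make(chan int)
        var wg sync.WaitGroup

        // one worker per hardware thread; on a 1950X that's 32
        for w := 0; w < runtime.NumCPU(); w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := range jobs {
                    results <- process(j)
                }
            }()
        }

        // feed the jobs, then signal the workers there are no more
        go func() {
            for i := 0; i < 1000; i++ {
                jobs <- i
            }
            close(jobs)
        }()

        // close results once every worker has drained its jobs
        go func() {
            wg.Wait()
            close(results)
        }()

        sum := 0
        for r := range results {
            sum += r
        }
        fmt.Println("sum of squares:", sum)
    }

The point being: if your work splits into independent items, saturating 32 threads is a dozen lines of plumbing, and then the core count wins.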
Yep, and the bottleneck moves around as well. IntelliJ used to take an age to do a full index rebuild on large multi-language projects; on the Ryzen 1700 I have, it pegs all the cores at 100% for a few seconds and is done. The bottleneck now seems to be back on disk access.
I pigz'd a 5GB archive in <20s and the bottleneck there was the SSD again (Samsung 850 EVO, but it was a lot of small files, ~60,000 PDFs (don't ask...)).
The Ryzen 1700 paired with a good SSD is a dream for development workloads.
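(For anyone who hasn't used it: pigz is a drop-in parallel gzip that uses one thread per core by default, and GNU tar can call it directly. Paths here are just examples:

    # pipe tar through pigz for parallel compression
    tar -cf - ./pdfs | pigz -9 > pdfs.tar.gz
    # or let GNU tar invoke the compressor itself
    tar -I pigz -cf pdfs.tar.gz ./pdfs

On 16 threads the compression stops being the bottleneck almost immediately, which is how the SSD ends up pegged instead.)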
If the other cores aren't doing useful work they'll be powered down to let the remaining core run faster. Not perfectly, but the extra millimeters of surface contact with the cooler that the extra cores provide probably make them a net benefit in that case.
My Ryzen system can do 2-pass 1080p24 x264 encoding on "slower" preset in real-time. That is, both passes finish in the time it would take to play the movie. It would only make sense for streaming if you had >10 users at a time, and at that point it would probably save a lot of work if you just did the transcoding ahead of time and kept the right format on disk!
Do you actually gain anything from two-pass encoding for streaming? It helps with file sizes but not with momentary bandwidth requirements, as far as I know.
It helps if you have a target file size, or equivalently an average bitrate. The first pass sets the "budget" that each part of the video gets to use out of the total, and the second pass has to make the best encoding it can that fits in the budget for each part. This lets you re-allocate bits from easy parts and give harder parts a little extra. See the sketch below.
Edit: so no, it doesn't produce any of the final video during the first pass, so for streaming you're just adding a huge amount of latency for nothing.
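For reference, the classic two-pass invocation, assuming ffmpeg with libx264 (bitrate and filenames are placeholders). The first pass writes only a stats file, which is why nothing streamable comes out of it:

    # pass 1: analysis only, video output discarded, stats written to ffmpeg2pass-*.log
    ffmpeg -y -i input.mkv -c:v libx264 -preset slower -b:v 4000k -pass 1 -an -f null /dev/null
    # pass 2: the real encode, spending the 4000k average according to the pass-1 stats
    ffmpeg -i input.mkv -c:v libx264 -preset slower -b:v 4000k -pass 2 -c:a copy output.mkv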
A little bit. Configured for constant bit-rates, x264 can still do a bit of jiggling on the assumption that there's a buffer, which helps with quality in general. Two-pass gives it information to do that better.
But constant-quality modes are far superior when you're not bandwidth-constrained.
Single-pass will also juggle within a seconds-long buffer. Does two-pass help on that micro level, in addition to the way it can allocate bits across an entire file?
16TB of video (actual files, not raw storage), and I need to transcode it, so you're saying just transcode that data and store duplicates? Yeah, that's not a ~$100 hard drive.
Whatever man, that's just a single hard drive. I've got 8 hard drives set up in a custom RAID. I can't just willy-nilly buy an off-the-shelf USB drive to add storage to it. You come off so high and mighty without knowing someone's actual needs.
In an attempt to be as independent of third-party services as possible I switched from Plex to Emby and I see no reason to go back. You can even stream media outside your home network for free.
I don't think they plan to support dual Threadripper, because single Epyc does more or less the same thing without the inter-socket latency or expense of a dual-socket motherboard.
It's effectively two Threadrippers on a single package (2x cores, 2x memory channels, 2x PCIe lanes) for only slightly more than double the price ($999 for 16C TR, $2100 for 32C Epyc).
The cheapest 32-core Epyc is ~USD 3200, with a 2 GHz base clock. I'd much rather have 2x Threadripper for USD 2K with a base clock of 3.4. I don't think there are any Epyc features that are relevant for compute loads, like the security stuff or the guaranteed x years of support.
Either way it'll be an upgrade from my 4 core Xeon :)
The $3400 model you saw is the dual-socket capable version, the single-socket version (EPYC 7551P) is only $2100. Point taken about the clock speeds though.
Oh, I see, I didn't realize that difference. Well, if they can build dual-socket-capable Epyc CPUs for 150% of the price, I'll keep hoping they'll do the same for Threadripper :) Although at that point you're right, the price would be the same as a single Epyc (1k * 1.5 * 2).
"With Threadripper, you can run two graphics cards at X16 PCIe speeds, two at X8, and still have enough lanes left over for three X4 NVMe SSDs connected directly to the CPU."
Threadripper is an enthusiast/workstation SKU. If you really want to go nuclear, Epyc gets you 128 lanes. Boom.
(Also note that Threadripper is only 60 lanes, the other 4 go to the PCH, so the most you could do is x16x16x16x12. I'm unsure if Epyc gives you a real 128 or whether it's actually 120 or something.)
Most PCIe NVMe SSDs use 4 lanes right now, so theoretically a CPU could support (total PCIe lanes / 4) drives; with Threadripper's 60 usable lanes that's 15 on paper. It usually boils down to motherboard configuration or add-in card support for how many you can practically get in a system, though.
Intel 7900X with an X299 board can do that too, with UEFI-level RAID on top of that (might be important for Windows users, not important at all for Linux users).
I feel like there have been unrealistic expectations of Vega compared to the expectations of how Ryzen was going to perform. Most people expected Ryzen to do OK but still lose to Intel by a two-generation gap, and here we are at what, 5-10% behind the Skylake/Kaby Lake refreshes. With Vega everyone wants it to beat the GTX 1080 Ti, but at best we might get something in between a 1080 and a 1080 Ti for less money, which isn't a fail in my view. The waiting, however, has sucked and has forced a lot of people to go team green who otherwise wouldn't have had to.
That, combined with the fact that all the new AMD GPUs (580, 570, 480, 470) are out of stock literally EVERYWHERE (a more recent issue), means a lot of people bought Nvidia GPUs.
1060s are back in stock. It's specifically only the 1070 that is problematic at this point, and you can actually find them, although they do sell out pretty quickly.
However, both 1060s and 1080s have yet to fully return to their pre-bubble prices, though they are hanging out close to MSRP again.
The Volta V100 is already on sale in NVLink/DGX form factor, they've been handing out the PCIe version to researchers at events for a while now, and it will go on sale in about a month.
Comedy option: Intel makes the offer and Jen-Hsun demands to be CEO of the resulting company again.
(For those not in the know: AMD made an offer to buy out NVIDIA a decade ago, and it fell through for exactly that reason: NVIDIA's CEO Jen-Hsun Huang demanded to be CEO of the resulting company. The funny thing is that he would have been a much better CEO than Dirk Meyer or Hector Ruiz, but Ruiz was not about to give up his CEO seat. AMD ended up buying ATI instead.)
Nah, you're missing the obvious here. Now that AMD has THREADRIPPA and EPYC, the way has been cleared for other 90s-gamer-chic names.
The merger of Intel and NVIDIA would obviously result in a company literally named "Chipzilla".
They partner with rapper DMX for a new promotion for their next series of Extreme Edition chips, Jen-Hsun walks onstage to the tune of "-X gonna give it to ya". As a snub to the Vega t-shirts, everyone in the audience receives a free Coffee Lake-X leather jacket.
Yeah, but with 64 PCIe lanes and Vulkan, that 1080 Ti, even 2 of them in SLI, is starting to look a bit old-fashioned. I think AMD is actually ahead of the paradigm shift.
The 1080 Ti has absolutely no competition from AMD in terms of performance/power and availability for most workloads; calling it "old-fashioned" is... out of touch. And Nvidia is already on the move with Volta, which AMD will seemingly have zero answer for. They're running circles around AMD at this point.
If anything, Ryzen's increased PCIe lanes are a reason to go for Nvidia. You can get Threadripper and a mobo at a much smaller cost than a Xeon rig, with more lanes, and just load it with Pascal GPUs. They'll have better performance and power, and be easier to buy, than any Vega card. This would do quite well for a deep learning machine, for example.
Where Vega has advantages is in more niche areas, like open source Radeon drivers and fully unlocked FP16 support (garbage market segmentation from Nvidia), and the Instinct/SSG GPU lines they'll be offering will be unique. And they're definitely cheaper, for sure. (But unless AMD can back it up with software, it won't mean much, especially for markets like deep learning, which Nvidia is pursuing full-force.)
Vulkan was based on an API design by AMD, but that doesn't mean it works better with AMD hardware. From Mantle to Vulkan there have been many small changes to make it perform optimally on pretty much all modern GPU architectures.
I remember a time when AMD and ATI were not one company and the latter was always head-to-head with NVidia. At which point did mediocrity spread from AMD?
I have a Ryzen 1700 in the desktop I built at my new job; its performance per £ is simply staggering.
It's the first time since I moved from an AMD64-3000 to my first i5 (2500K) that I've genuinely been staggered by performance, and the desktop at my previous job was an OC'd 3570K.
Frankly the Ryzen 1700 is incredible, so much so I'm already planning on building a similar machine for home.
All that said, I'm also really impressed with the i7-7700K in the new laptop; the amount of processing power Intel has shoved into a laptop is impressive as well.
Competition is good. It will be interesting to see what the mobile Ryzens are capable of.
> I have a Ryzen 1700 in the desktop I built at my new job; its performance per £ is simply staggering.
I can confirm that. I bought a new desktop PC with a 1700 for home use around Easter, and on CPU-bound workloads with sufficient parallelism (e.g. video encoding), that thing is just insanely fast.
It's a bit of a shame, really, that most of the time it just sits there doing almost nothing. ;-)
GGP specifically addressed this, no? Broadwell is ~two releases behind (and if you're explicitly looking at an Intel -E chip for cost comparison... that doesn't make an awful lot of sense given that their E lines are basically always sky-high priced compared to other Intel chips).
That's entirely irrelevant. It doesn't matter if those processors were released last year or in the last century. It's Intel's current and best offer, and the facts speak for themselves: AMD's current offering matches Intel's performance while trouncing Intel in price/performance.
> that doesn't make an awful lot of sense given that their E lines are basically always sky-high priced compared to other Intel chips
That's Intel's problem. It makes no sense to downplay AMD's stellar performance just because Intel does some major price gouging.
Intel will most likely have no advantage in usable performance on the 12-, 14-, 16-, and 18-core chips: they use too much power and run way too hot, so they will need lower clocks. The 1950X will most likely clock around 4 GHz on air on all cores, while the 12-, 14-, 16-, and 18-core Intel chips will need a custom loop or really low clocks. Intel likes to do fuckery, and the only way I can see them getting good performance out of the 12-, 14-, 16-, and 18-core chips is to turbo up 4 or 6 cores for gaming etc.
That must be a really high-end air cooler. My R7 1700 runs 4 GHz on a 280mm AIO water cooler. Sure, the 1950X dies are better binned so they need less voltage on average, but with literally 2x the core count, that's gonna be quite a lot of heat.
Yeah, Skylake-X is a power monster, all the memes about "Core i9 best for burning your house down" are funny, but SIXTEEN Zen cores at 4GHz would not be cool and quiet either :D
> That must be a really high-end air cooler. My R7 1700 runs 4 GHz on a 280mm AIO water cooler.
As you note, Threadripper is the best 5% of R7 chips; some of them go as high as 4.2 GHz. But there appears to be a surprising amount of chip-to-chip variation even still.
> Yeah, Skylake-X is a power monster, all the memes about "Core i9 best for burning your house down" are funny, but SIXTEEN Zen cores at 4GHz would not be cool and quiet either :D
You don't know the half of it. Check out these power numbers:
Good that Noctua will release their Threadripper coolers like the NH-U14S next week. I wonder why they didn't release their NH-D15 (which beats many AIOs) for Threadripper as well, since it's even better than the NH-U14S.
TL;DR: clear winner in multithreaded productivity, Intel better in single-threaded productivity, around a Ryzen 1800X in gaming (in Game Mode, which tries to pin threads to one die at a time and disables SMT), great TDP (stays under 180W at full load), has some teething issues with the NUMA nature of its two dies, but is overall a great processor.
* http://www.guru3d.com/articles-pages/amd-ryzen-threadripper-...
* https://www.techspot.com/review/1465-amd-ryzen-threadripper-...
* http://www.tweaktown.com/reviews/8303/amd-ryzen-threadripper...
* https://hothardware.com/reviews/amd-ryzen-threadripper-proce...
* https://arstechnica.co.uk/gadgets/2017/08/amd-threadripper-r...
* http://www.pcworld.com/article/3214635/components-processors...
* https://www.forbes.com/sites/antonyleather/2017/08/10/amd-ry...
Other Links:
* https://videocardz.com/71804/amd-ryzen-threadripper-review-r...