I'm excited for what this will do to the cost of dedicated servers in ~1 year.
Also, as a person who used to work at Intel, I don't know whose idea this was, but that person should probably have a long hard look at themselves -- hardware people are exactly the people that this kind of shit wouldn't fly with, because they'll almost always ask for details and can spot a hack from a mile away.
On the one hand I can sympathize with Intel -- seeing how tough it was to stay on the market year over year, trying to predict and start developing the next trend in hardware. But on the other hand... Why in the world would you do this -- Intel basically dominates the high end market right now, just take your time and make a properly better thing.
> I'm excited for what this will do to the cost of dedicated servers in ~1 year.
This is the opposite though?
The dedicated servers are turning into HEDTs. AMD's 32-core EPYC has been available since last year, and Intel's 28-core Skylake (albeit at $10,000) has also been available for a year.
So dedicated servers got this tech first, then HEDT got it a bit later. I guess the new Threadripper is Zen+, so technically HEDT gets the 12nm tech first, but the 32-core infrastructure was in EPYC first.
The problem IMO is that Intel HEDTs don't support ECC (as far as I know), so they're not a great choice when you're working with workloads that need 64GB-128GB of RAM (video editing, etc.).
Ahh I deal with bare metal cloud companies that sell dedicated HEDTs so I was thinking that I'd be able to find even more HEDTs with great specs in a year's time as Threadrippers make their way through the market. People (including me) pay a premium for faster Intel cores and I'm excited to make the switch to slower, more plentiful AMD cores when I can, because I've invested in learning languages that do their best to handle parallelism well.
In practice the only difference between the dedicated servers of yesteryear and the HEDTs of today is becoming perception (well, that and some very specific features). Considering that the computational load of most things hasn't actually gotten that much bigger, and that languages that can adequately use multiple cores have proliferated, it feels like everything is set to get cheaper yet better -- that's what excites me.
I get that Intel feels threatened by AMD. They are trying to impress the consumers... but bullshitting a demo is a very bad move! When a consumer decides to build a new PC, the characteristics of the product matter, but so does the reputation of the company that manufactures it. Right now Intel is putting too much effort into sketchy marketing practices: it undermines the actual work being done on their processors by some very talented people.
Presenting it as an extreme overclocking demo would have been a much wiser option.
Unfortunately it might work. With today's news cycles, an average consumer may have noticed the headline about Intel's 5GHz 28-core monster, and that's it. Follow up articles aren't as interesting.
The average consumer may just buy a smartphone over a PC. The average tech-savvy consumer may just build a PC rather than buy one off the shelf. In either case most people won't be buying these 28-core chips unless they are reps for enterprises, in which case they will most definitely have done their homework before buying this.
> reps for enterprises, in which case they will most definitely have done their homework before buying this.
Don't assume they would. Plenty of purchasing in large companies is associated with some higher up hearing about something, wanting it, then buying it to 'help' in some obscure way.
That would be akin to an i7-8086K at 7.2GHz... just add LN2.
What they presented was an extreme overclocking setup with insulation (because sub-ambient cooling causes condensation), a 1 HP chiller running on a banned refrigerant, and a 4-second benchmark (seriously, aside from being free, Cinebench is not a true testament of overclocking prowess). Such a demo is pointless, and about as practical as daily LN2 use.
I can imagine a few cases where first-to-ring-the-bell performance on a single core determines whether you get a specific quote in HFT, but that's about it.
It was always going to be that way. Threadripper 2 is just Threadripper with the two blank spacers under the IHS replaced with two more Zen dies, plus a fab process change from 14nm to 12nm (but no architectural changes). It's very much what would be a "tick" in Intel CPU terminology.
I recall reading that Threadripper 2 is supposed to use more power, so early motherboards with that socket may not be able to handle it. Just something to keep in mind.
Most/all early TR boards were "gaming" branded/targeted, and those typically have power headroom for overclocking. It's certainly possible some of the early boards won't be able to support TR2 at full spec speeds, but I would anticipate most being able to.
2nd Gen TR will draw ~250W. X399 boards have at least 1x 4-pin and 1x 8-pin ATX-12V connector, so at least 225W; the rest is supplied by the 24-pin ATX connection. Some X399 boards use 2x 8-pin ATX-12V connectors, which works out to 300W and should be more than enough for Threadripper 2.
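To sanity-check those numbers, here's a quick Python sketch (the ~75W / ~150W per-connector figures are the same rule-of-thumb ratings used above, treated as assumptions rather than hard spec limits):

    # Rough 12V budget feeding the CPU VRM, per rule-of-thumb connector ratings
    # (assumed: 4-pin ATX-12V ~75 W, 8-pin ~150 W; real limits depend on board/PSU).
    CONNECTOR_WATTS = {"atx12v_4pin": 75, "atx12v_8pin": 150}

    def cpu_power_budget(connectors):
        """Sum the nominal wattage of the ATX-12V connectors feeding the CPU."""
        return sum(CONNECTOR_WATTS[c] for c in connectors)

    print(cpu_power_budget(["atx12v_4pin", "atx12v_8pin"]))  # 225 W; the 24-pin makes up the rest
    print(cpu_power_budget(["atx12v_8pin", "atx12v_8pin"]))  # 300 W; comfortable headroom for ~250 W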
AMD's forward compatibility is one of the most amazing things to me, so much so that I still can't believe AM4 is going to remain viable for future CPUs for a while yet, despite its age.
It's also a big reason I'm not going with Intel, since I know I can upgrade to something significantly better without having to get a new motherboard.
Worth noting that with AMD sockets, physical compatibility is no guarantee that it'll work without a software update. If you're unlucky/incautious during a system build, you can end up in a catch 22 with a motherboard that requires a BIOS update for your new CPU to work, and no way to boot it to update the BIOS because your CPU doesn't work. AMD will loan you an older CPU if you jump through enough hoops, but it's probably less hassle just to buy the cheapest last-gen CPU you can find on Amazon.
Still a better scenario than changing the socket all the time, but it can catch you out if you're used to Intel's "socket = generation" philosophy.
I just recently replaced my old i7 920 in the homeserver with an AMD Ryzen 5 2600. Really like it so far. Price / performance is great. This is my first AMD since probably ever....
There are two things I don't like. One is that their CPUs are pin-based, which seemed kind of old-fashioned after Intel CPUs -- but that's really a minor thing. The other is that memory compatibility is a bit finicky; maybe it has to do with the CPU being so new, I'm not sure.
Massive pain is an understatement.
Fixing a socket pin at that small a package size, with 1151 pins, is nearly impossible.
Good luck. You'd better have a loupe or magnifier.
I buy Intel Mobile CPU boards for exactly this reason. Pins.
I have not bent a single one in years.
The notebook CPUs are so thermally efficient that it's a real selling point for me.
Ryzen chips scale in performance more than Intel when you overclock the RAM. Some part of the chip cache is more tightly coupled to the RAM latency, and my rampant speculation is that Intel doesn’t really care about memory bandwidth as much on the desktop anyways.
Ryzen CPUs divide the cores into two "core complexes" that communicate over a bus called Infinity Fabric. Probably to make the engineering easier, the Infinity Fabric runs at the same speed as the memory controller/RAM. You'll get good gains up to 3000 MHz, less so beyond that.
And the only thing I'll add is that the Infinity Fabric is a single clock domain across all dies. So on Ryzen it's not a big deal because there's only one die.
But in Threadripper (2-dies) or EPYC (4-dies), the "infinity fabric" bus is what connects the CPUs and Memory-controllers together.
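A minimal sketch of why the RAM clock matters so much, assuming first/second-gen Ryzen where the Infinity Fabric clock runs 1:1 with the memory clock (i.e. half the DDR4 transfer rate):

    def fabric_clock_mhz(ddr4_transfer_rate):
        """On Zen/Zen+, Infinity Fabric runs at MEMCLK, which is half the
        DDR4 transfer rate (DDR = two transfers per clock)."""
        return ddr4_transfer_rate // 2

    for kit in (2133, 2666, 2933, 3200):
        print(f"DDR4-{kit}: fabric clock ~{fabric_clock_mhz(kit)} MHz")
    # Faster RAM therefore directly speeds up cross-CCX (and cross-die) traffic,
    # which is why Ryzen/Threadripper scale with memory overclocks.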
Putting a Threadripper in a homeserver is overkill.
Besides I wanted to replace the i7 920, so that it won't be that hot anymore in that room (130W TDP vs 65W). I think a threadripper would achieve the opposite.
Maybe I should just do seasonal CPUs... Threadripper in Winter and Ryzen in Summer.
If you still have the old motherboard you can buy an old Xeon X56xx and use it as a drop-in replacement in the LGA1366 socket. An X5650, for instance, costs about $25 on eBay, is clocked at 2.66 GHz like the i7 920, has six cores, a TDP of 95W, and overclocks really well. I don't know if most i7 motherboards support ECC, but the CPU supports it.
The LGA1366 motherboards still fetch some money too, if you'd rather sell it.
I would say get something a bit beefier, like an i3 or i5, as most people will want to run some basic things like game servers, a database, or maybe a Plex server, and in a lot of cases it's good to be able to just run things rather than rebuild or rent something in the cloud.
The latest Gemini Lake is already past the level of a Phenom II/Core 2, but with a 10W TDP. It's also faster than a Broadwell i3 at the same frequency in single-threaded workloads (but has no hyperthreading).
As an outsider to 'enterprise-grade' computing, I'm curious about situations where a high number of cores in a single processor would be superior to multiple processors with the same total energy draw sitting on a single motherboard?
I can understand HPC applications where the high-speed interconnect on the chip would make a big difference.
But in business applications where the cores are dedicated to running independent VMs, or are handling independent client requests, what is really gained? There would still be some benefits from a shared cache, but how large quantitatively would that be?
Threadripper ships with single-node interleaved memory by default, at least on my motherboard. This increases latency but doubles bandwidth (because all 4 sticks of RAM are interleaved).
There's a BIOS setting for it. I personally enabled NUMA mode (aka "Local" mode) using AMD's "Ryzen Master" program.
The chart at the bottom of the output is the weight for accessing a memory pool from a CPU socket. This is the most important part of the output.
On this server, CPU socket 0 is hardwired to ram slots 0-15
CPU 1 to ram slots 16-31
CPU 2 to ram slots 32-47
CPU 3 to ram slots 48-63
If CPU 0 wanted to read something outside of its local RAM slots, it would have to execute something on CPU n and then copy that segment to its local RAM group.
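For illustration, the "chart at the bottom of the output" being described is the node-distance table that numactl --hardware prints. A small Python sketch of what those weights mean (the matrix values below are the usual illustrative SLIT-style costs, not measurements from the machine above):

    # Illustrative SLIT-style distance matrix for a 4-socket box:
    # distance[i][j] is the relative cost for a core on node i to touch memory
    # on node j (10 = local, larger = farther). Values are illustrative only.
    distance = [
        [10, 21, 21, 21],
        [21, 10, 21, 21],
        [21, 21, 10, 21],
        [21, 21, 21, 10],
    ]

    def preferred_node(cpu_node):
        """The memory node a thread pinned to cpu_node should allocate from first."""
        row = distance[cpu_node]
        return min(range(len(row)), key=lambda node: row[node])

    assert preferred_node(0) == 0  # local memory always wins under NUMA-aware allocation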
I am on POWER8 at work; the wiki article [1] gives a great description of the advantages of many cores per chip, though ours only has 6/12 cores. Part of our hardware configuration for migrating from POWER7 to POWER8 was to have 40GB of memory per core available. I think POWER7 was 30GB. We use this in the iSeries environment, but we have pSeries machines with the same hardware running AIX/Oracle and POWER7 VMs running many *nix implementations.
In my usage case, the core/thread count really helps DB2's SQL implementation, as an iSeries is effectively a giant DB2 database with extras added on. Hence the query engine (SQE/CQE, see the old doc [2]) on our machine can make great use of many cores/threads. When serving data to intensive batch applications as well as thousands of warehouse users (and double that through web services), access to data is the name of the game.
Have you compared performance of the DB running in Linux on a properly sized Intel/Xeon server?
I've seen several mainframe companies dogmatically believe their sales rep that their workload is special and needs a high-end system. But none of the ones I've talked to have actually tested it for themselves.
NUMA. Latency between sockets is far higher than in a single socket. If your workload is truly wholly independent threads as you've described, then it's quite possible there is no benefit. (Although, sibling comments bring up good points about licensing fees.)
First is that a single-socket motherboard is still a simpler design to produce with all the advantages that entails.
Second is that you’re allowed to stick two of these on a two-socket board for CPU-bound loads. Better density for when you have the thermal capacity to spare.
> As an outsider to 'enterprise-grade' computing, I'm curious about situations where a high number of cores in a single processor would be superior to multiple processors with the same total energy draw sitting on a single motherboard?
Databases are the big one I'm aware of.
Intel's L3 cache is truly unified. On Intel's 28-core Skylake, a database really does get the full 38.5MB of L3. When any core requests data, it goes into the giant distributed L3 cache that all cores can access efficiently.
AMD's L3 cache however is a network of 8MB chunks. Sure, there's 32MB of cache in its 32-core system, but any one core can only use 8MB of it effectively.
In fact, on the Threadripper/EPYC platform, pulling data out of a "remote" L3 cache is slower (higher latency) than pulling it from RAM. (A remote L3 pull has to coordinate over Infinity Fabric and stay coherent: that means invalidating and waiting to become the "exclusive owner" before a core can start writing to an L3 cache line, at least per the MESI coherence protocol. I know AMD uses something more complex and efficient, but my point is that cache coherence has a cost that becomes clear in this case.) Which doesn't bode well for any HPC application... but also not for databases (which will effectively be locked to 8MB per thread, with poor sharing, at least compared to Xeon).
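A toy sketch of the coherence point above (heavily simplified MESI; AMD's real protocol is a MOESI-style variant, and the "latency" here is just a stand-in count, not a measurement):

    # Toy MESI transition for a core that wants to WRITE a cache line.
    # States: M(odified), E(xclusive), S(hared), I(nvalid).
    def write_line(state, other_sharers):
        if state in ("M", "E"):
            return "M", 0                     # already sole owner: write immediately
        if state == "S":
            # Must invalidate every other copy and wait for acknowledgements;
            # on EPYC/Threadripper that traffic crosses the Infinity Fabric.
            return "M", len(other_sharers)    # stand-in for the coherence round trips
        # state == "I": read-for-ownership (RFO) fetch, plus invalidations.
        return "M", 1 + len(other_sharers)

    state, cost = write_line("S", other_sharers=["core7", "core12"])
    print(state, cost)  # ('M', 2): ownership is acquired only after the invalidations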
Of course, "Databases" might be just the most common HPC application in the enterprise, that needs communication and coordination between threads.
>Intel's L3 cache is truly unified. On Intel's 28-core Skylake, a database really does get the full 38.5MB of L3. When any core requests data, it goes into the giant distributed L3 cache that all cores can access efficiently.
This is less true now. Intel's L3 cache is still all on one piece of monolithic silicon, unlike the 4 separate caches of the 4 separate dies on a 32-core TR. But the L3 slice for each core is now physically placed right next to the core, and other slices are accessed through the ring bus or, in Skylake and later, the mesh. Still faster than leaving the die and using AMD's Infinity Fabric, and a lot less complicated than wiring up all the cores for direct L3 access.
Which one of these companies does a better job with free/libre software? I've always had a soft spot for AMD because it's the underdog, but I want to make sure that they are free, too.
They're about the same. Both contribute to Linux. Intel's GPU drivers are more complete and open, but their GPUs are not in the same league as AMD's. AMD has open- and closed-source versions of their driver and is moving more towards the open version (which is already very good). Both companies have closed-source initial boot and AMT-like tech with potential backdoors built into their CPUs.
AMD has been doing a lot of hard work to get their stuff into mainline (AMDGPU as a recent example) and they open-source a lot of their GPU stuff on GitHub.
On the CPU side, AMD had Linux patched well before Ryzen was available in shops and has been contributing various patches since.
I'd say they are working to get a decent track record for their Ryzen and Vega lineups.
Intel provides a lot of patches for the Linux kernel, for their hardware and performance counters, and their graphics drivers are open source and in mainline. That's not even mentioning their wired Ethernet and various other drivers.
Being one of the top contributors to Linux/OSS/FLOSS doesn't let them come away clean from their 28-core, inadvertently-but-conveniently miscommunicated demo.
Intel's iGPUs had the best Linux drivers for the longest time, while AMD just managed to mainline their GPU drivers into the 4.15 kernel.
I think AMD is "worse" at it but for understandable reasons. They're a smaller company, so it takes a bit longer for AMD to release low-level manuals. (Ex: still waiting on those Zen architecture instruction timings AMD!!). Even then, AMD releases "draft" manuals which allow the OSS world to move. So the important stuff is still out there, even as AMD is a bit slow on the documentation front.
Basically, Intel is a bigger company and handles the details a bit better than AMD. But both are highly supportive of OSS in great degrees.
AMD did a great job with Threadripper, making high end CPUs much more affordable. It's interesting that Intel doesn't lower their prices. What's the logic behind it?
They spent such a long time making the "best" that now they get to ride that goodwill for a while with consumers, regardless of where they are presently with respect to the competition. Toyota and Honda enjoy the same luxury: they outsell the competition today more because of what they did in the 90s than because of what they did in 2016-17.
Unsurprisingly, older-generation models tend to dominate that list. Keep in mind that the thermal efficiency of the Xeon v2 and v3 models is lower than v4; Passmark does not include power draw in this ranking. If you can afford the extra power usage and don't mind DDR3 RAM, high-clockspeed, high-core-count CPUs can be had relatively cheaply by going with v2.
I'm of the mind that AMD's smaller cores working together is the secret sauce to their price advantage.
Intel has done an amazing job stuffing 28 cores into one piece of silicon and extracting as much performance as possible, all for the low price of $10k.
AMD took their 8 core part that they are selling essentially up and down their product line... and slapped 4 of them together.
Intel was selling 18-core chips for >$2,400 then when Threadripper came out Intel released the 18-core i9 for "only" $2,000, so that is something of a price drop.
Also, Intel's $350 8700K was cheaper at launch than AMD's $450 1800X even though the 8700K is faster in gaming.
Keep in mind that AMD has an inherent advantage: the Zen architecture and Glue.
When AMD fabs a TR2 they just have to find 4 good Ryzen dies, which are fairly small and which they make a lot of since they're used across the entire lineup. Once they have 4 good dies, they get glued together.
If they wanted 64 cores they'd just have to look for 8 good Ryzen dies, halving their yield compared to 32.
On the Intel side they have to increase the silicon area and then hope that all 64 cores are capable of full core speed in the current setup.
Gluing CPUs together makes it cheap to scale.
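A back-of-the-envelope sketch of that yield argument, using the standard Poisson defect model (the defect density below is a made-up illustrative number; the ~213 mm^2 Zeppelin die size is roughly right):

    import math

    DEFECTS_PER_MM2 = 0.002      # illustrative defect density, not a real fab number
    ZEPPELIN_MM2 = 213           # one 8-core Ryzen/"Zeppelin" die, roughly

    def poisson_yield(area_mm2, d0=DEFECTS_PER_MM2):
        """Fraction of dies expected to have zero defects under a Poisson model."""
        return math.exp(-d0 * area_mm2)

    small_die = poisson_yield(ZEPPELIN_MM2)       # what AMD actually fabs
    monolithic = poisson_yield(4 * ZEPPELIN_MM2)  # hypothetical one-piece 32-core
    print(f"8-core die yield:          {small_die:.0%}")
    print(f"monolithic 32-core yield:  {monolithic:.0%}")
    # The multi-die package only ever uses dies that already passed test (binned
    # before packaging), while a monolithic part pays the exp(-4*d0*A) penalty up front.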
Err, that's only partially true at best. If you have many dies, the cost (in money, in chip real estate and in performance drop) of the interconnect skyrockets.
AMD's advantage of picking and choosing smaller parts still reigns supreme. If an Infinity Fabric component on an AMD die is defective enough that it has to be turned off, the die simply can't participate in some of the more complex multi-die configurations. If the mesh component on an Intel die is defective, the core (two, actually, because of how the SKUs are segmented) has to be fused off and the part automatically bins as a lower SKU. As the mesh increases in complexity, more and more of the design can be compromised.
The grandparent comment is not claiming that $2k is under $800, but instead that Intel would be able to charge a higher price for the i9 if the TR was not on the market.
I got my 1950X for $699 a few months ago. It's been $699 for a while at Microcenter.
The "price-competitive" i9-7900x is 10-cores for $799, and seems to be the best price-competitive comparison. Better single-thread, better at AVX512 tasks, but weaker in general purpose multithreading due to having fewer cores.
For a long time I saved a copy of a publication by Motorola about how Intel played fast and loose with benchmarks in comparisons of the 80386 with the 68020. (I lost it in a move, alas.) Can't say I was surprised to read about the 28-core fiasco.
This is a game all CPU manufacturers play. I still remember the days of Apple claiming their G4 processors were faster than Intel ones. Then they swapped platforms, and all of those claims evaporated without a trace.
More important is that they can target desktops, high-end desktops, and servers with the exact same silicon. So they can amortize their R&D across more units, and of course the supply line is much simpler. AMD doesn't have to guess ahead of time what mix of Ryzen, Threadripper, and EPYC chips they will sell.
This is a short-term loss for Intel, but could end up being a long-term win as an attack on AMD. Making this announcement forced AMD to advance their plans for the 32-core, possibly faster than they really wanted to right now. That depletes their product pipeline faster, making it more difficult to keep pace with future advances.
Edit: initial reports said that AMD was only planning to announce the 24-core CPU, and may have advanced the announcement of the 32-core chip due to Intel's stunt. TFA doesn't mention that, so possibly the initial reports were not accurate.
They maxed out the number of cores they can ship in a single CPU for now but that doesn't seem like a problem.
AMD is already set to launch their 7nm EPYC processor based on Zen 2 in 2019 (skipping the Zen+ used by the new Threadripper and Ryzen 2xxx), which is expected to have 48 cores (some rumors even suggest 64 cores, but that seems more likely for 7nm EUV than for the first 7nm processes). So they should have no problem releasing more cores with Threadripper 3 next year (if they keep up the yearly releases).
On top of that, to my layman eyes AMD's approach of using Infinity Fabric to connect dies seems better suited to reacting to changes than Intel's monolithic design.
I think of AMD's current approach - a microarchitecture with slower cores, but more cores, than Intel - as very similar to what Sun/Oracle tried to do from 2005 to 2010 with the Niagara family (UltraSPARC T1-T3).
Each core in those chips was seriously underclocked compared to a Xeon of similar vintage and price point (1-1.67 GHz, versus 1.6 to 3+ GHz), and lacked features like out-of-order execution and big caches that are almost minimum requirements for a modern server CPU. Sun hoped to make up for the slow cores in server applications by having more cores and multiple threads per core (though with a simpler technology than SMT/hyper-threading).
However, Oracle eventually decided to focus on single-threaded performance with its more recent chips - it turns out that no OoO and < 2 GHz nominal speeds look pretty bad for many server applications. My suspicion is that even though the CPU-bound parts of games are becoming more multi-threaded, AMD will be forced to fix its slower architecture or lose out to Intel again in the server AND high-end desktop markets in a few years.
It's not similar at all. AMD's cores are 10-20% slower while Niagara was 80-90% slower. And AMD isn't intentionally slower; they designed the Zen core for maximum single-thread performance but they just didn't do as good a job as Intel because their budget is vastly smaller.
Just figured I'd mention that AMD significantly closed this gap with the Ryzen 2. The less parallel friendly applications (like many games) now seem to be faster on AMD or Intel depending on the application. Additionally the up to date security patches tend to hurt Intel more than AMD.
As always, if you really care about a single application, then you test it. But I wouldn't say that Intel wins on all single thread or few thread workloads anymore.
Especially if you consider that often a new Intel CPU requires a new Intel motherboard, and AMD often keeps motherboard compatibility across multiple generations, like the Ryzen and Ryzen 2.
Well, you can actually mod Z170. Intel got super greedy and made Z370 a requirement.
Even Z170 can run the 8700K [0].
Z170 (and Z270) needs a cooked BIOS and the CPU needs a pin short (easy with a 4B pencil), and it can even be overclocked if the motherboard VRM is good enough.
AMD still takes the hit for Spectre mitigations, even if it does not need Meltdown mitigation.
And I'm not sure it's possible to fairly discount Intel's performance with meltdown mitigations applied. I think the impact will vary depending on workload.
> AMD still takes the hit for Spectre mitigations, even if it does not need Meltdown mitigation.
That's why I wanted to keep it out of the discussion and just mentioned Meltdown, AFAIK Spectre applies to both so it would be pure speculation to identify who'll be hit harder.
> And I'm not sure it's possible to fairly discount Intel's performance with meltdown mitigations applied. I think the impact will vary depending on workload.
I think we have this problem already all the time (with or without mitigations applied), that's why we (should) interpret benchmarks only as a proxy.
> That's why I wanted to keep it out of the discussion and just mentioned Meltdown, AFAIK Spectre applies to both so it would be pure speculation to identify who'll be hit harder.
That makes sense. Many people conflate the two, so I just wanted to be explicit about what I was saying :-).
> I think we have this problem already all the time (with or without mitigations applied), that's why we (should) interpret benchmarks only as a proxy.
That's totally reasonable. I think there were some discussions of the impact of the Meltdown patch (on Intel performance) on the LKML list at the time the patch(es) were being reviewed. (Other OS may have different perf impact for their Meltdown mitigations, of course, but it helps ballpark.)
Here's some discussion on anandtech, although it doesn't measure Spectre mitigations alone vs Spectre+MD; only base, MD alone, and MD+Spectre:
Zen does a decent job in clock-for-clock performance; the huge killer on the desktop is raw clock speed. When you pit a Ryzen chip maxing out around 3.8-4.2GHz (depending on generation and silicon lottery) against an i7-8700K that has a base frequency of 4.7GHz, it's pretty obvious which is going to come out on top.
Most of that clock deficit comes from the 12nm LPP process AMD is currently using, from what I can tell; a low-power process typically equates to lower clocks (see mobile chips), so it's not surprising -- and it's why Zen 2 being based on GloFo's 7nm process will hopefully close that gap.
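A crude way to frame the clock-vs-IPC tradeoff (the IPC figures below are made-up illustrative assumptions, not benchmark results): single-thread performance is roughly IPC times clock, so the clock deficit dominates when IPC is close:

    def relative_perf(ipc, clock_ghz):
        """Very rough single-thread figure of merit: IPC x clock."""
        return ipc * clock_ghz

    # Illustrative numbers only: assume roughly comparable IPC and let the clocks
    # differ the way the comment above describes (Ryzen ~4.2 GHz vs 8700K ~4.7 GHz boost).
    ryzen = relative_perf(ipc=1.00, clock_ghz=4.2)
    coffee_lake = relative_perf(ipc=1.05, clock_ghz=4.7)
    print(f"Ryzen / 8700K: {ryzen / coffee_lake:.2f}")  # ~0.85, i.e. a mid-teens-percent gap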
Is it such a huge killer outside gaming benchmarks? And even then, the extra power is useful only if you jumped on the 4K bandwagon.
Personally, considering the amount of random crap that I run, I'd rather have more cores. And more importantly, 80% of the performance for 50% of the price is just fine(tm).
The gaming benchmarks are a bit meh since in most realistic builds the GPU will be the bottleneck, not the CPU (unless you buy a 1080Ti with a Ryzen 3 or i3, though in that case all help is lost).
More cores do benefit if you run stuff besides the game, which most people do.
I'm into VM abuse, so for me more cores would be a no-brainer...
Sadly for AMD, that's only IF I needed a new machine. My 2012 Core i7 still seems to be enough for my needs (except the GPU, which I replaced recently).
At this point the only reason I'd upgrade would be to go from 32GB of RAM to 64GB. 32 is barely enough if I happen to need a couple of VMs up plus the usual 100 tabs of docs in the browser :(
That's not the base clock, at any rate; you can't get all cores to 4.7GHz stock, since 4.7 is the single-core turbo boost. There's no way you get that without a pretty decent cooler on a non-delidded CPU, unless you happen to have a chip that requires no extra voltage at all.
It's true that many games are predominantly single-core, but it's also likely you can get a single Ryzen core to ~4.5GHz as well.
This is a gross overestimate of how far behind AMD is in single-core performance. We're talking a few lost frames per second in games, not some crazy Sun/Oracle Niagara situation.