Intel 28-core fantasy vs. AMD 32-core reality (techspot.com)
398 points by BlackMonday on June 10, 2018 | 132 comments



I'm excited for what this will do to the cost of dedicated servers in ~1 year.

Also, as a person who used to work at Intel, I don't know whose idea this was, but that person should probably have a long hard look at themselves -- hardware people are exactly the people that this kind of shit wouldn't fly with, because they'll almost always ask for details and can spot a hack from a mile away.

On the one hand I can sympathize with Intel -- seeing how tough it was to stay on top of the market year over year, trying to predict and start developing the next trend in hardware. But on the other hand... why in the world would you do this? Intel basically dominates the high-end market right now; just take your time and make a properly better thing.


> I'm excited for what this will do to the cost of dedicated servers in ~1 year.

This is the opposite though?

The dedicated servers are turning into HEDTs. AMD's 32-core EPYC has been available since last year, and Intel's 28-core Skylake (albeit at $10,000) has also been available for a year.

So dedicated servers got this tech first, then HEDT got it a bit later. I guess the new Threadripper is Zen+, so technically it's HEDT that gets the 12nm tech first, but the 32-core infrastructure was in EPYC first.


The problem IMO is that Intel HEDTs don't support ECC (as far as I know), so they're not a very good idea when you are working with workloads that need 64GB - 128GB of RAM (video editing, etc).


Ahh I deal with bare metal cloud companies that sell dedicated HEDTs so I was thinking that I'd be able to find even more HEDTs with great specs in a year's time as Threadrippers make their way through the market. People (including me) pay a premium for faster Intel cores and I'm excited to make the switch to slower, more plentiful AMD cores when I can, because I've invested in learning languages that do their best to handle parallelism well.

In practice, the only difference between the dedicated servers of yesteryear and the HEDTs of today is becoming one of perception (well, that and some very specific features). Considering that the computational load of most things hasn't actually gotten much bigger, in addition to the proliferation of languages that can adequately use multiple cores, it feels like everything is set to get cheaper yet better -- that's what excites me.


I get that Intel feels threatened by AMD. They are trying to impress the consumers... but bullshitting a demo is a very bad move! When a consumer decides to build a new PC, the characteristics of the product matter, but so does the reputation of the company that manufactures it. Right now Intel is putting too much effort into sketchy marketing practices: it undermines the actual work being done on their processors by some very talented people.

Presenting it as an extreme overclocking demo would have been a much wiser option.


Unfortunately it might work. With today's news cycles, an average consumer may have noticed the headline about Intel's 5GHz 28-core monster, and that's it. Follow up articles aren't as interesting.


The average consumer may just buy a smartphone over a PC. The average tech savvy consumer may just build the PC over buying off the shelf ones. In either case most people won't be buying these 28 core chips unless they are reps for enterprises in which case they will most definitely have done their HW before buying this.


>reps for enterprises in which case they will most definitely have done their HW before buying this.

Don't assume they would. Plenty of purchasing in large companies is associated with some higher up hearing about something, wanting it, then buying it to 'help' in some obscure way.


That would be akin to an i7 8086K at 7.2GHz... just add LN2. What they presented was an extreme-overclocking setup with insulation (due to sub-ambient condensation), a one-horsepower chiller that runs on a banned refrigerant, and a 4-second benchmark (as a serious aside, Cinebench being free doesn't make it a true testament of overclocking prowess). Such a demo is pointless and about as practical as daily LN2 use.

I can imagine a few cases where first-to-ring-the-bell performance on a single core determines whether you get a specific quote in HFT, but that's that.


> I can imagine a few cases where first-to-ring-the-bell performance on a single core determines whether you get a specific quote in HFT, but that's that.

That is actually the case from what I've heard. A lot of them buy consumer chips, then disable all but one core and overclock it to the max.

Here is a guy from optiver talking about their process at CppCon: https://www.youtube.com/watch?v=NH1Tta7purM


What a great talk, thanks for sharing!


It's awesome that TR2 will be a drop in replacement on existing motherboards! Props to AMD for delivering real value to consumers.


It was always going to be that way. Threadripper 2 is just Threadripper with the two blank spacers under the IHS replaced with two more Zen dies, plus a fab process change from 14nm to 12nm (but no architectural changes). It's very much what would be a "tick" in Intel CPU terminology.

Edit: Plus, the TR4 socket is guaranteed to be supported for 4 years, per AMD's roadmap at https://community.amd.com/thread/226363


Meanwhile the new Intel chips use the same LGA1151 socket, but still need a new motherboard because, erm, well, they do.


I recall reading that Threadripper 2 is supposed to use more power, so early motherboards with that socket may not be able to handle it. Just something to keep in mind.

Regardless damn good on AMD.


Most/all early TR boards were "gaming" branded/targeted, and those typically have power headroom for overclocking. It's certainly possible some of the early boards won't be able to support TR2 at full spec speeds, but I would anticipate most being able to.


2nd Gen TR will draw ~250W. X399 boards have at least 1x 4-Pin and 1x 8-Pin ATX-12V connector, so at least 225W. The rest is supplied by the ATX 24 pin connection. Some X399 boards use 2x 8-Pin ATX-12V connectors, that works out to 300W, which should be more than enough for Threadripper 2.


I'm just relaying what I read when they announced the TR2, which seems to be copy-pasted amongst the tech news sites.

Unfortunately they just state "... some first-generation X399 motherboards may not be able to deliver enough power..."

And not specifically which.

https://arstechnica.com/gadgets/2018/06/amd-unveils-threadri...

If their statement is false then that's on them.


That's only for the 32-core and perhaps the 24-core TR. The 8-16 core parts will probably draw less than the current ones due to the node shrink.


AMD's forward compatibility is one of the most amazing things to me, so much so that I still can't believe AM4 is going to remain viable for future CPUs for a while hence despite its age.

It's also a big reason I'm not going with Intel, since I know I can upgrade to something significantly better without having to get a new motherboard.


Worth noting that with AMD sockets, physical compatibility is no guarantee that it'll work without a software update. If you're unlucky/incautious during a system build, you can end up in a catch 22 with a motherboard that requires a BIOS update for your new CPU to work, and no way to boot it to update the BIOS because your CPU doesn't work. AMD will loan you an older CPU if you jump through enough hoops, but it's probably less hassle just to buy the cheapest last-gen CPU you can find on Amazon.

Still a better scenario than changing the socket all the time, but it can catch you out if you're used to Intel's "socket = generation" philosophy.

Source: happened to me a few weeks back.


Oh yeah, I've read the 2200G/2400G horror stories and taken precautions, thanks. :)


I just recently replaced my old i7 920 in the homeserver with an AMD Ryzen 5 2600. Really like it so far. Price / performance is great. This is my first AMD since probably ever....

There are two things I don't like. The first is that their CPUs are pin based; it seemed kind of old fashioned after Intel CPUs, but this is really a minor thing. The other issue is that memory compatibility is a bit finicky. Maybe it has to do with the CPU being so new. Not sure.


> don't like is that their CPUs are pin based

To me that's a win. If a CPU pin is bent it's typically fairly easy to straighten it. Fixing a bent pin in a socket is a massive pain.

But it's much easier to protect socket pins with the cover. So there are pros and cons either way.


Massive pain is an understatement. Fixing a socket pin with that small of a package size and 1151 pins is nearly impossible.

Good luck. You'd better have a loupe or magnifier.

I buy Intel mobile CPU boards for exactly this reason. Pins. I have not bent a single one in years. The notebook CPUs are so thermally efficient it is a real selling point to me.


True, though it is much easier to damage a pin-based CPU in the first place.


Usually, I'm very careful with the CPU, more than the motherboard, so I think that balances out.


Ryzen chips scale in performance more than Intel when you overclock the RAM. Some part of the chip cache is more tightly coupled to the RAM latency, and my rampant speculation is that Intel doesn’t really care about memory bandwidth as much on the desktop anyways.


Ryzen CPUs divide the cores into two "core complexes" that communicate over a bus called Infinity Fabric. Probably to make the engineering easier, the Infinity Fabric runs at the same speed as the memory controller / RAM. You'll get good gains up to 3000 MHz, less so beyond that.

https://www.anandtech.com/show/11857/memory-scaling-on-ryzen...

http://www.legitreviews.com/ddr4-memory-scaling-amd-am4-plat...


And the only thing I'll add is that the "infinity fabric" is a single clock domain across all dies. So in Ryzen it's not a big deal because there's only one die.

But in Threadripper (2-dies) or EPYC (4-dies), the "infinity fabric" bus is what connects the CPUs and Memory-controllers together.


Threadripper is lga ;-)


Putting a Threadripper in a homeserver is overkill.

Besides I wanted to replace the i7 920, so that it won't be that hot anymore in that room (130W TDP vs 65W). I think a threadripper would achieve the opposite.

Maybe I should just do seasonal CPUs... Threadripper in Winter and Ryzen in Summer.


> Putting a Threadripper in a homeserver is overkill.

Running 2 gaming VMs for me and my kids and one ubuntu WS on a 12 core Threadripper. Host is also doing file serving and runs Unifi controller.

One server to rule them all.


If you still have the old motherboard you can buy an old Xeon X56xx and use it as a drop-in replacement in the LGA1366 socket. An X5650, for instance, costs about $25 on eBay, is clocked at 2.66 GHz like the i7 920, has six cores, a TDP of 95W, and overclocks really well. I don't know if most i7 motherboards support ECC, but the CPU supports it.

The LGA1366 motherboards still fetch some money too, if you'd rather sell it.


For home server/NAS/HTPC you are likely fine with some Atom-based Celeron/Pentium ;-)


But where is the fun in that!


I would say get something a bit beefier, like an i3 or i5, as most people will want to run some basic things like game servers, a database, or maybe a Plex server, and in a lot of cases it's good to be able to just run things rather than rebuilding or renting something in the cloud.


The latest Gemini Lake is already past the level of a Phenom II / Core 2, but with a 10W TDP. They are also faster than a Broadwell i3 at the same frequency in single-threaded work (but have no hyperthreading).


As an outsider to 'enterprise-grade' computing, I'm curious about situations where a high number of cores in a single processor would be superior to multiple processors with the same total energy draw sitting on a single motherboard?

I can understand HPC applications where the high-speed interconnect on the chip would make a big difference.

But in business applications where the cores are dedicated to running independent VMs, or are handling independent client requests, what is really gained? There would still be some benefits from a shared cache, but how large quantitatively would that be?


It has to do with memory. In server-grade computers, each socket has local memory slots that it can read and write very fast. Read this: https://en.wikipedia.org/wiki/Non-uniform_memory_access
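For a concrete feel on Linux (a rough sketch; it assumes the numactl package is installed, and ./my_job is just a placeholder name):

  # show how many NUMA nodes the box has, which CPUs and how much RAM
  # belong to each, and the relative access-cost ("distance") table
  numactl --hardware

  # pin a memory-heavy job's threads and allocations to node 0,
  # so it never pays the cross-node latency
  numactl --cpunodebind=0 --membind=0 ./my_job

The distance table that --hardware prints is exactly the "how far is this RAM from this socket" weighting other comments in this thread are showing.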


It is already the case with Threadripper processors. They have multiple NUMA nodes inside one socket.


It's exactly the same case as the single-die Xeon architecture, with 2 separate rings inside and different memory modules attached to each ring - https://images.anandtech.com/doci/9193/HaswellEPHCCdie_575px...


It actually presents itself to the system as a single node:

On a TR 1920x system:

  $ numactl --hardware
  available: 1 nodes (0)
  node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
  node 0 size: 32107 MB
  node 0 free: 20738 MB
  node distances:
  node   0 
    0:  10


Threadripper ships with single-node interleaved memory by default, at least on my motherboard. This increases latency but doubles bandwidth (because now all 4 sticks of RAM are interleaved).

There's a BIOS setting. I personally enabled it using AMD's "Ryzen Master" program to set up NUMA mode (aka "Local" mode in Ryzen Master).


I'm pretty sure you can change that, it should be a BIOS option [1].

[1] - https://www.anandtech.com/show/11697/the-amd-ryzen-threadrip...


This is from a 4-socket Xeon E7-4860 with 64 RAM slots (16 in use):

  e7-4860:~ Mon Jun 11
  03:06 PM william$ numactl --hardware
  available: 4 nodes (0-3)
  node 0 cpus: 0 1 2 3 4 5 6 7 8 9 40 41 42 43 44 45 46 47 48 49
  node 0 size: 16035 MB
  node 0 free: 1306 MB
  node 1 cpus: 10 11 12 13 14 15 16 17 18 19 50 51 52 53 54 55 56 57 58 59
  node 1 size: 16125 MB
  node 1 free: 3237 MB
  node 2 cpus: 20 21 22 23 24 25 26 27 28 29 60 61 62 63 64 65 66 67 68 69
  node 2 size: 16125 MB
  node 2 free: 11004 MB
  node 3 cpus: 30 31 32 33 34 35 36 37 38 39 70 71 72 73 74 75 76 77 78 79
  node 3 size: 16123 MB
  node 3 free: 12044 MB
  node distances:
  node   0   1   2   3 
    0:  10  20  20  20 
    1:  20  10  20  20 
    2:  20  20  10  20 
    3:  20  20  20  10
The chart at the bottom of the output gives the weight for accessing each memory pool from each CPU socket. This is the most important part of the output.

On this server, CPU socket 0 is hardwired to ram slots 0-15

CPU 1 to ram slots 16-31

CPU 2 to ram slots 32-47

CPU 3 to ram slots 48-63

If CPU 0 wanted to read something outside of its local RAM slots, it would have to reach across the interconnect to the owning socket (paying the higher distance weight above), or have that segment copied into its local RAM group first.
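If you want a rough sense of how much of that is actually happening on a box, numastat (it ships alongside numactl on most distros, as far as I know) keeps per-node counters:

  # prints per-node allocation counters (numa_hit, numa_miss, local_node,
  # other_node) -- a quick way to see whether allocations are staying local
  numastat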


That's not normal. Is it set to Channel/NUMA mode?


Windows is spectacularly poor at dealing with NUMA CPUs, so Threadripper is not presented to the OS as NUMA.


Please don't say things that are obviously untrue.

I've got a Threadripper 1950x and got 2x NUMA nodes. You gotta enable a BIOS setting.

Second: "$ numactl --hardware " is a Linux command. The Windows equivalent is coreinfo.

https://docs.microsoft.com/en-us/sysinternals/downloads/core...


Really? I've been thinking of getting a TR for some NUMA coding experience, and if Windows can't see that then it really sucks.


It's togglable in the BIOS/UEFI.


When VMware charges per socket and not per core.


Big ditto for Oracle. Probably others too.


Realistically, it is easy to count cores. Some companies still count MAC addresses.


The way Oracle and VMWare bill is not bound by a technical limitation.


I am on POWER8 at work; the wiki article [1] gives a great description of the advantages of many cores per chip, though ours only has 6/12 cores. Part of our hardware configuration for migrating from POWER7 to POWER8 was to have 40GB of memory per core available. I think POWER7 was 30GB. We use this in the iSeries environment, but we have pSeries machines with the same hardware running AIX/Oracle and POWER7 VMs running many *nix implementations.

In my usage case, the core/thread count really helps DB2's SQL implementation, as an iSeries is effectively a giant DB2 database with extras added on. Hence the query engine (SQE/CQE, see the old doc [2]) on our machine can make great use of many cores/threads. When serving data to intensive batch applications as well as thousands of warehouse users (and double that through web services), access to data is the name of the game.

[1]https://en.wikipedia.org/wiki/POWER8 [2] https://www.ibm.com/support/knowledgecenter/en/ssw_i5_54/rza... <- that is quite a few years old but describes the query engines available - CQE is 'legacy' and SQE is modern


Have you compared performance of the DB running on Linux on a properly sized Intel/Xeon server?

I've seen several mainframe companies dogmatically believe their sales rep that their workload is special and needs a high-end system. But none of them I've talked to have actually tested it for themselves.


NUMA. Latency between sockets is far higher than in a single socket. If your workload is truly wholly independent threads as you've described, then it's quite possible there is no benefit. (Although, sibling comments bring up good points about licensing fees.)


I can see two answers to that.

First is that a single-socket motherboard is still a simpler design to produce with all the advantages that entails.

Second is that you’re allowed to stick two of these on a two-socket board for CPU-bound loads. Better density for when you have the thermal capacity to spare.


> As an outsider to 'enterprise-grade' computing, I'm curious about situations where a high number of cores in a single processor would be superior to multiple processors with the same total energy draw sitting on a single motherboard?

Databases are the big one I'm aware of.

Intel's L3 cache is truly unified. On Intel's 28-core Skylake, that means the L3 available to a database is truly 38.5MB. When any core requests data, it goes into the giant distributed L3 cache that all cores can access efficiently.

AMD's L3 cache, however, is a network of 8MB chunks. Sure, there's 64MB of cache across its 32-core system, but any one core can only use 8MB of it effectively.

In fact, pulling memory off of a "remote L3 cache" is slower (higher latency) than pulling it from RAM on the Threadripper / EPYC platform. (A remote L3 pull has to coordinate over infinity fabric and stay coherent! That means "invalidating" and waiting to become the "exclusive owner" before a core can start writing to an L3 cache line, at least according to the MESI cc-protocol. I know AMD uses something more complex and efficient... but my point is that cache coherence has a cost that becomes clear in this case.) Which doesn't bode well for any HPC application... but also for databases (which will effectively be locked to 8MB per thread, with "poor sharing", at least compared to Xeon).

Of course, "Databases" might be just the most common HPC application in the enterprise, that needs communication and coordination between threads.


>Intel's L3 cache is truly unified. On Intel's 28-core Skylake, that means the L3 available to a database is truly 38.5MB. When any core requests data, it goes into the giant distributed L3 cache that all cores can access efficiently.

This is less true now. Intel's L3 cache is still all on one piece of monolithic silicon, unlike the 4 separate caches of the 4 separate dies on a 32-core TR. But the L3 slice for each core is now physically placed right next to the core, and other slices are accessed through the ring bus or, in Skylake and later, the mesh. Still faster than leaving the die and using AMD's Infinity Fabric, and a lot less complicated than wiring up all the cores for direct L3 access.

https://www.anandtech.com/show/3922/intels-sandy-bridge-arch...


If your cores are independent, you're blessed and can scale just by adding more cheap servers.


Yeah, embarrassingly parallel stuff is not very interesting.

However, when communication is necessary, the length of the bus matters, and having the dies next to each other does help a lot.


Which one of these companies does a better job with free/libre software? I've always had a soft spot for AMD because it's the underdog, but I want to make sure they are free, too.


They're about the same. Both contribute to Linux. Intel's GPU drivers are more complete and open, but their GPUs are not in the same league as AMD's. AMD has open and closed source versions of their driver and is moving more towards the open version (which is already very good). Both companies have closed-source initial boot and AMT-like tech with potential backdoors built into their CPUs.


AMD has been doing a lot of hard work to get their stuff into mainline (AMDGPU as a recent example) and they open-source a lot of their GPU stuff on GitHub.

On the CPU side, AMD has patched Linux way before Ryzen was available in shops and has been contributing various patches afterwards.

I'd say they are working to get a decent track record for their Ryzen and Vega lineups.


If the answer turned out to be Intel, would you overlook the misleading 28-core demo the article discusses?


Intel provides a lot of patches for the Linux kernel, for their hardware and performance counters, and their graphics drivers are open source in mainline. That's not even mentioning their wired Ethernet and various other drivers.

Being one of the top contributors to Linux/OSS/FLOSS doesn't let them come away clean from their 28-core, inadvertently-but-conveniently miscommunicated demo.


Yeah, though it's not the best thing to have on the resume. That's for sure.


GPUOpen [1] is AMD's branded Open Source / Open protocol initiative.

[1] https://gpuopen.com/


Are you talking about contributions to OSS in general, or specifically around processors?


In general, as a company.


Both AMD and Intel seem highly supportive of OSS.

Intel's iGPUs had the best Linux drivers for the longest time, while AMD just managed to mainline their GPU drivers into the 4.15 kernel.

I think AMD is "worse" at it but for understandable reasons. They're a smaller company, so it takes a bit longer for AMD to release low-level manuals. (Ex: still waiting on those Zen architecture instruction timings AMD!!). Even then, AMD releases "draft" manuals which allow the OSS world to move. So the important stuff is still out there, even as AMD is a bit slow on the documentation front.

Basically, Intel is a bigger company and handles the details a bit better than AMD. But both are highly supportive of OSS in great degrees.


AMD did a great job with Threadripper, making high end CPUs much more affordable. It's interesting that Intel doesn't lower their prices. What's the logic behind it?


They spent such a long time making the "best" that now they get to ride that goodwill with consumers for a while, regardless of where they are presently with respect to the competition. Toyota and Honda enjoy the same luxury: they outsell the competition today more because of what they did in the 90s than what they did in 2016-17.


It's a bummer that perf counters are not accurate on TR/ryzen, blocking rr from working on it. https://github.com/mozilla/rr/issues/2034#issuecomment-30444...

Edit: typo


Retail sales must be a very small percentage of the market. Perhaps they don't expect to sell any units anyway so the price doesn't matter.


Is there some maintained "bang for buck" chart available that would show how many units of a certain performance metric one dollar buys with different CPUs?


Yes, you're looking for the Passmark "Best Value" ranking: https://www.cpubenchmark.net/cpu_value_available.html It's also helpfully represented as a scatter plot: https://www.cpubenchmark.net/cpu_value_available.html#multic....

Unsurprisingly, older-generation models tend to dominate that list. Keep in mind that the thermal efficiency of Xeon v2 and v3 models is lower than v4; Passmark does not include power draw in this ranking. If you can afford the extra power usage and don't mind DDR3 RAM, high-clockspeed, high-core-count CPUs can be had relatively cheaply by going with v2.


I'm of the mind that AMD's smaller dies working together are the secret sauce behind their price advantage.

Intel has done an amazing job stuffing 28 cores into one piece of silicon and extracting as much performance as possible, all for the low price of $10k.

AMD took the 8-core part that they're selling essentially up and down their product line... and slapped 4 of them together.


Intel did lower prices on several chips in response.


Intel was selling 18-core chips for >$2,400; then, when Threadripper came out, Intel released the 18-core i9 for "only" $2,000, so that is something of a price drop.

Also, Intel's $350 8700K was cheaper at launch than AMD's $450 1800X even though the 8700K is faster in gaming.


Keep in mind that AMD has an inherent advantage: the Zen architecture and Glue.

When AMD fabs a TR2 they have to find 4 good Ryzen dies, which are fairly small and which they make a lot of since they are used across the entire lineup. Once they have 4 good dies, those get glued together.

If they wanted 64 cores they'd just have to find 8 good Ryzen dies per package instead of 4, halving the number of packages they can build from the same pool of good dies.

On the Intel side they have to increase the silicon area and then hope that all 64 cores are capable of full core speed in the current setup.

Gluing dies together makes it cheap to scale.
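Rough back-of-the-envelope on why the small dies are so cheap (all numbers made up: a naive Poisson yield model, roughly 2.1 cm^2 per 8-core die, a hypothetical monolithic 32-core at 4x that area, and an invented defect density):

  awk 'BEGIN {
    d = 0.2                      # hypothetical defects per cm^2
    small = 2.1; big = 4 * small # die areas in cm^2 (made up / approximate)
    ys = exp(-d * small)         # chance one small die is good  (~66%)
    yb = exp(-d * big)           # chance one big die is good    (~19%)
    printf "silicon fabbed per good 32-core, 4 small dies: %.1f cm^2\n", 4 * small / ys
    printf "silicon fabbed per good 32-core, monolithic:   %.1f cm^2\n", big / yb
  }'

In reality both sides salvage partially defective dies as lower-core SKUs, so the gap is smaller than that, but the direction is why the MCM approach scales so cheaply.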


Err, that's only partially true at best. If you have many dies, the cost (in money, in chip real estate and in performance drop) of the interconnect skyrockets.


The interconnect cost also skyrockets on the cores themselves. Intel's moved to a mesh network on their newest cores: https://www.anandtech.com/show/11550/the-intel-skylakex-revi...

AMD's advantage of picking and choosing smaller parts still reigns supreme. If an Infinity Fabric component on an AMD die is defective enough that it has to be turned off, the die is no longer able to participate in some of the more complex multi-die couplings. If the mesh component on an Intel die is defective, the core (two, actually, because of how the SKUs are cut) has to be fused off and the part automatically bins as a lower SKU. As the mesh increases in complexity, more and more of the design can be compromised.


The top Core i9 costs $2k on Amazon. The top Ryzen costs $799. So they are cheaper than Xeon, but not cheaper than the equivalent AMD processor.


The grandparent comment is not claiming that $2k is under $800, but instead that Intel would be able to charge a higher price for the i9 if the TR was not on the market.


You should compare the top i9 with threadripper 1950X, which is more expensive, but still less expensive than an i9.


That's literally what the OP did. A 1950X is only $799 now, so less than half a top end i9.


That's extremely recent. It was more like $900 a day or two ago: https://camelcamelcamel.com/AMD-Threadripper-32-thread-Proce...

And it is still $960 on Newegg: https://www.newegg.com/Product/Product.aspx?Item=N82E1681911...


I got my 1950X for $699 a few months ago. It's been $699 for a while at Microcenter.

The "price-competitive" i9-7900X is 10 cores for $799, and seems to be the closest comparison: better single-thread, better at AVX-512 tasks, but weaker in general-purpose multithreading due to having fewer cores.


Eh, it's just a matter of patience I guess. I got mine for $799 in December.


"And it is still $960 on Newegg"

As of 2018-06-11 18:27 PDT (when I clicked on that link) the current Newegg price is $799.99.


For a long time I saved a copy of a publication by Motorola about how Intel played fast and loose with benchmarks in comparisons of the 80386 with the 68020. (I lost it in a move, alas.) Can't say I was surprised to read about the 28-core fiasco.


This is a game all CPU manufacturers play. I still remember the days of Apple claiming their G4 processors were faster than Intel ones. Then they swapped platforms, and all of those claims evaporated without a trace.


link?


Not specific to Motorola, but there have been a number of allegations of Intel doing this over the years:

https://www.pcworld.com/article/2842647/intel-will-pay-you-1...

http://www.agner.org/optimize/blog/read.php?i=49


I believe Intel's designs are based on a single die, compared to AMD's Threadripper, which is multi-chip.


You’re right, but obviously the advantage that used to give them isn’t working out anymore.

AMD’s interconnect seems fast enough, and they don’t have the yield/cost problems from massive single die chips.


More importantly, they can target desktops, high-end desktops, and servers with the exact same silicon. So they can amortize their R&D across more units, and of course the supply line is much simpler, since AMD doesn't have to guess ahead of time what mix of Ryzen, Threadripper, and EPYC chips they will sell.


Intel has a lot of multi-chip technology, for example the Omni-Path network is a separate chip included in the cpu module.


Could Intel stick more than one on a single chip?


Yes, but so far they have not, instead investing in on-die mesh interconnects for scaling up core count. Here's an opinion piece on the subject: https://www.anandtech.com/show/12814/a-thought-on-silicon-de...


Their first dual-core processors were multiple chip modules.


They however did not have a proper on-die interconnect and instead communicated over the FSB which made them quite bad at scaling.


In the end Intel got bad PR and AMD got good PR. Is it a major PR failure from Intel?


There was an interview with an Intel engineer on this, it was quite revealing: https://www.youtube.com/watch?v=ozcEel1rNKM


This is a short term loss for Intel, but could end up being a long term win as an attack on AMD. Making this announcement forced AMD to advance their plans for the 32-core, possibly faster than they really wanted to right now. That depletes their product pipeline faster, making it more difficult to keep pace with future advances.

Edit: initial reports said that AMD was only planning to announce the 24-core CPU, and may have moved up the announcement of the 32-core chip due to Intel's stunt. TFA doesn't mention that, so possibly the initial reports were not accurate.


They maxed out the number of cores they can ship in a single CPU for now but that doesn't seem like a problem.

AMD will launch their 7nm EPYC processor based on Zen 2 in 2019 (skipping the Zen+ used by the new Threadripper and Ryzen 2xxx), which is expected to have 48 cores (some rumors even suggest 64 cores, but that seems more likely for 7nm EUV rather than the first 7nm processes). So they will have no problem releasing more cores with Threadripper 3 next year (if they keep up the yearly releases).

On top of that, to my layman's eyes AMD's approach of using infinity fabric to connect dies seems better suited to reacting to changes than Intel's monolithic design.


It appears that Intel wanted to trump AMD. ...and also wanted to Trump AMD.


This article repeats itself many times. Are they trying to hit a word count quota? Is this high school again?


I think of AMD's current approach - a microarchitecture with slower cores, but more cores, than Intel - as very similar to what Sun/Oracle tried to do from 2005 to 2010 with the Niagara family (UltraSPARC T1-T3).

Each core in those chips was seriously underclocked compared to a Xeon of similar vintage and price point (1-1.67 GHz vs. 1.6 GHz to 3 or more), and lacked features like out-of-order execution and big caches that are almost minimum requirements for a modern server CPU. Sun hoped to make up for the slow cores in server applications by having more cores and multiple threads per core (though with a simpler technology than SMT/hyper-threading).

However, Oracle eventually decided to focus on single-threaded performance with its more recent chips - it turns out that no OoO and < 2 GHz nominal speeds look pretty bad for many server applications. My suspicion is that even though the CPU-bound parts of games are becoming more multi-threaded, AMD will be forced to fix its slower architecture or lose out to Intel again in the server AND high-end desktop markets in a few years.


It's not similar at all. AMD's cores are 10-20% slower while Niagara was 80-90% slower. And AMD isn't intentionally slower; they designed the Zen core for maximum single-thread performance but they just didn't do as good a job as Intel because their budget is vastly smaller.


Just figured I'd mention that AMD significantly closed this gap with Ryzen 2. The less parallel-friendly applications (like many games) now seem to be faster on either AMD or Intel depending on the application. Additionally, the up-to-date security patches tend to hurt Intel more than AMD.

As always, if you really care about a single application, then you test it. But I wouldn't say that Intel wins on all single thread or few thread workloads anymore.

Especially if you consider that often a new Intel CPU requires a new Intel motherboard, and AMD often keeps motherboard compatibility across multiple generations, like the Ryzen and Ryzen 2.


Well, you can mod Z170 actually. Intel played super duper greedy and made a Z370 requirement.

Even a Z170 can run the 8700K [0]: Z170 (and Z270) needs a cooked BIOS and the CPU needs a pin short (easy with a 4B pencil) --- and it can even be overclocked if the motherboard VRM is good enough.

[0]: https://community.hwbot.org/topic/175489-asrock-z170-mocf-li...


A slightly OT question:

Wasn't that 10-20% before meltdown? Or is there still some similar disadvantage in clock speed per Watt or IPC?


AMD still takes the hit for Spectre mitigations, even if it does not need Meltdown mitigation.

And I'm not sure it's possible to fairly discount Intel's performance with meltdown mitigations applied. I think the impact will vary depending on workload.


> AMD still takes the hit for Spectre mitigations, even if it does not need Meltdown mitigation.

That's why I wanted to keep it out of the discussion and just mentioned Meltdown, AFAIK Spectre applies to both so it would be pure speculation to identify who'll be hit harder.

> And I'm not sure it's possible to fairly discount Intel's performance with meltdown mitigations applied. I think the impact will vary depending on workload.

I think we have this problem already all the time (with or without mitigations applied), that's why we (should) interpret benchmarks only as a proxy.


> That's why I wanted to keep it out of the discussion and just mentioned Meltdown, AFAIK Spectre applies to both so it would be pure speculation to identify who'll be hit harder.

That makes sense. Many people conflate the two, so I just wanted to be explicit about what I was saying :-).

> I think we have this problem already all the time (with or without mitigations applied), that's why we (should) interpret benchmarks only as a proxy.

That's totally reasonable. I think there were some discussions of the impact of the Meltdown patch (on Intel performance) on the LKML list at the time the patch(es) were being reviewed. (Other OS may have different perf impact for their Meltdown mitigations, of course, but it helps ballpark.)

Here's some discussion on anandtech, although it doesn't measure Spectre mitigations alone vs Spectre+MD; only base, MD alone, and MD+Spectre:

https://www.anandtech.com/show/12566/analyzing-meltdown-spec...


AFAIK not; at least on Linux the mitigations are turned on as needed based on CPUID data.


Meltdown mitigation doesn't affect consumer workloads much. Database and server workloads are hit a lot harder.


Zen does a decent job in clock-for-clock performance; the huge killer on the desktop is raw clock speed. When you pit a Ryzen chip maxing out around 3.8-4.2GHz (depending on generation and silicon lottery) against an i7-8700K that has a base frequency of 4.7GHz, it's pretty obvious which is going to come out on top.

Most of that clock hit comes from the 12nm LPP process that AMD is currently using, from what I can tell; a low-power process typically equates to lower clocks (see mobile chips), so it's not surprising - and it's why Zen 2 being based on GloFo's 7nm process will hopefully close that gap.


Is it such a huge killer outside gaming benchmarks? And even then, the extra power is useful only if you jumped on the 4K bandwagon.

Personally, considering the amount of random crap that I run, I'd rather have more cores. And more importantly, 80% of the performance for 50% of the price is just fine(tm).


The gaming benchmarks are a bit meh since in most realistic builds the GPU will be the bottleneck, not the CPU (unless you buy a 1080Ti with a Ryzen 3 or i3, though in that case all help is lost).

More cores do benefit if you run stuff besides the game, which most people do.


I'm into VM abuse, so for me more cores would be a no-brainer...

Sadly for AMD, that would be IF I needed a new machine. My 2012 Core i7 still seems to be enough for my needs. (Except the GPU, which I changed recently.)


I feel you there. I used an i5 2500 from when it came out until last March, when I switched to Ryzen. A very good CPU indeed.


At this time the only reason I'd upgrade would be to go from 32GB of RAM to 64. 32 is barely enough if I happen to need a couple of VMs up and the usual 100 tabs of docs in the browser :(


>i7-8700K that has a base frequency of 4.7GHz.

That's not the base clock at any rate; you can't get all cores at 4.7GHz stock, 4.7 is single-core turbo boost. No way you get that without a pretty decent cooler on a non-delidded CPU, unless you happen to have a chip that requires no extra voltage at all.

It's true that many games are predominantly single-core, though, but it's likely you can get a single Ryzen core to 4.5GHz as well.


i7-8700K base frequency is 3.7 GHz, not 4.7

https://ark.intel.com/products/126684/Intel-Core-i7-8700K-Pr...


Oops, fat fingered that one.


This is a gross overestimate of how far behind AMD is in single-core performance. We're talking a few lost frames per second in games, not some crazy Sun vs. Oracle stuff.


https://i.imgur.com/dpfu5K3.png

Intel is mainly faster because of a significant clock advantage. Clock for clock the advantage is small.



