Intel plans to rival TSMC and Samsung as a chip supplier (nikkei.com)
255 points by retskrad on Nov 4, 2022 | 211 comments



And I plan to date supermodels. /s

Okay, I'll try to be constructive. TSMC is two things: the biggest contract foundry in the world, and a leader in EUV application for tiny geometries. I think it's possible for Intel to become a contract foundry; it goes against their DNA but Gelsinger very much wants to do it. So that's the first thing. Maybe it'll happen.

The second thing is much more problematic.

> Intel is still working on getting its 5-nm mass production going, and says it will begin manufacturing Intel 3 -- its answer to TSMC's 3-nm tech -- in the second half of 2023.

This I'm much more skeptical about. Intel tried to make EUV work before and failed. Still, they seem to be buying ASML equipment and installing it at Ronler Acres:

https://www.protocol.com/enterprise/intel-euv-fab-chips

But it still sounds like a long shot to get it working in 2023.


Some other things I noticed when I read their earnings call transcript from the other day:

"We'll start with a focus on driving $3 billion of cost reduction in 2023, 1/3 in cost of sales and 2/3 in operating expenses."

"best-in-class semiconductor companies have a financial profile that includes gross margins in the 60s" (Intel's is in the mid 40's and fell last quarter)

So they're not only trying to change a gigantic part of their business model, but ALSO drive $3B of cost reduction and try and push gross margins back up to... where they were 5 years or so ago?

They're making it impossible for themselves IMO. Pivot or profit improvements, choose one. Doing both at the same time is surely going to lead to a huge mess.


The thing to understand about Intel is that the finance org has hugely outsized power compared to other companies. This is a company that believes that any problem can be solved with capital allocation, and no project is worth doing unless there’s a financial analysis “proving” it will add at least $1B/year to the bottom line.

Intel’s culture is that of a cash cow, not an innovator. So yeah, profit improvements come naturally; pivoting, not so much.


I worked with IBM in the early 2000s porting some IP to their processes, and what was so difficult was that everything was proprietary to IBM. From the nomenclature, to the tools, to the models, etc. That was great when they were on top, but the world passed them by and there was no hope of converting them over. I never worked with Intel the same way, but some close friends did and it sounds exactly the same. They are proprietary and very inefficient. A block that would normally be laid out in two days, Intel would spend two weeks on. Their process design rules were super restrictive and their tooling just isn't efficient compared to what the rest of the industry now uses. Unique, strict and obnoxious design rules might be OK when you are designing a high-volume processor like Intel does, but that is not the kind of process you want for a foundry.

I fear it is not in Intel's DNA to make this switch.


Would you share some insight on why Intel failed at making EUV work? Gelsinger said on a podcast (Decoder) recently that in fact they fought it.

Obviously, they are kind of big enough to do both — make a half-hearted attempt and fight it all the same due to some internal politics. However, for something this existential, one would think that the smart people and the leadership would try to put some focus on the field. Was it that bad? Really looking for some insight on what actually went down.


Intel (like every other chipmaker) has been chasing EUV for decades, but every time they thought they finally nailed it they turned out to be wrong.

The key that ultimately made it possible was the light source, and ASML is the only company that sells these light sources. ASML didn't invent the light source entirely on its own; there were a lot of ideas for light sources floating around and the only one that turned out to work was laser-induced plasma from tin microdroplets, which sounds completely crazy but it works [0]. ASML perfected that process (with -- to be fair -- some investment from Intel).

[0] https://www.asml.com/en/technology/lithography-principles/li...

Intel did fight EUV with a technology called SAQP which ultimately turned out not to work as well as EUV, but they were researching EUV at the same time. Here are some more historical articles about Intel's work on EUV in the past. Note the dates.

https://newsroom.intel.com/news-releases/intel-achieves-majo...

https://semiengineering.com/why-euv-is-so-difficult/

https://www.eetimes.com/intel-developing-euv-pellicle/

https://www.extremetech.com/computing/276376-intel-reportedl...


Do you have any insight on what organizational failures led to them failing to find the right EUV solution?

Was it just a matter of luck that ASML nailed the right one or was it the nature of exploring some riskier alternative (as you said "which sounds completely crazy but it works")?


I know nothing about Intel's internal organization.

There were several different light source ideas tried. Some at US national laboratories. Tin plasma was known to work but it left so much debris that it gummed up the optics so it was forgotten for a while. The key turned out to be tin microdroplets.

https://www.optica-opn.org/home/articles/volume_29/march_201...


ASML and Intel are two completely different companies. ASML makes lithography machines, which are used in one of the many steps in producing a chip. ASML machines are used by all chip manufacturers. The challenge for chip manufacturers like Intel, but also TSMC, Samsung and others, is to optimize the entire production process. And that has been / is not easy with a lithography step (EUV) which is so drastically different than what the industry has been using before. One of the challenging choices for chip manufacturers was to drastically change the process and implement EUV, or to go with self-aligned quadruple patterning (SAQP). The latter, if you take a simplistic view, can be seen as an evolution of existing processes. We now see that those companies who went aggressively into EUV have an edge over their competitors.


When they say “microdroplets” (I may be out of date here) they mean firing lasers at falling drops of molten tin and then capturing the energy that the tin gives off.

Like anything, some of their PhDs were like “no, that’s probably not going to work because xyz” and they trusted said PhDs.

It’s just science. It’s hard to predict.


I think it's more that they didn't try ('till now) 'cause they thought ASML's machines just weren't ready for prime time. TSMC bought ASML's bleeding edge EUV machines thinking they could make it work eventually, and they did. Intel may catch up very quickly now.


I thought that ASML got its EUV light source, basically, by buying Cymer in San Diego. Read some of [0], and then searched it, but didn't find a single instance of 'Cymer.'


Thank you.


I am much more hopeful about Intel's chance at EUV than GP. In order to avoid repeating myself, here's my analysis of what went wrong: https://news.ycombinator.com/item?id=32335873


Why is it taking them so long to pivot from that failure? 10nm didn't work out, that's fine. But it feels like that didn't work out and they've spent the years after that in a hole and still haven't dug themselves out.


From the rumors I've heard (might be true; might not) Intel lost a great deal of semiconductor engineering talent because they were making so much money on minor evolutions of x86 that they saw little need to truly innovate. The MBAs took over.

Once it became clear that AMD and ARM (and by extension TSMC) were eating their lunch, they had great difficulty pivoting because they lacked the talent and the organizational will to do so -- at least until Gelsinger took charge. I have a lot of respect for Gelsinger and I think he just might turn it around.


> ...they were making so much money on minor evolutions of x86 that they saw little need to truly innovate. The MBAs took over.

Yet another example in a long history showing that modeling your talent pipeline as a JIT supply chain does not work in high-cognitive-input settings. This is a general unsolved management challenge: capturing sufficiently granular and comprehensive institutional knowledge to be able to efficiently restart organizations that rely upon it years down the line.

I don't think we will ever solve it until we learn how to share human mind states. The complexity of these high-cognitive-input activities at large scale has surpassed our ability to transfer comprehension between minds using conventional information transfer techniques (primarily the written word and audio/video). Even were I to guarantee the capture of all information at sufficient granularity, the only way to QA that is to staff up a brand-new parallel organization de novo and have them reproduce the original organization's accomplishments. No one is ever funded for that.

This is a really interesting problem space for me because I suspect it is a dimension of technical debt very rarely explored. Our species has not really had to grapple with it before: technological progression has been slow enough that we could brute-force a "good enough" solution whenever it came up in less severe forms.


> Would you share some insight on why Intel failed at making EUV work? Gelsinger said on a podcast (Decoder) recently that in fact they fought it.

A cursory Google search turns up multiple recent news pieces announcing Intel buying their EUV kits from ASML.

If Intel is handing out small fortunes to ASML to deliver a machine that in the past Intel was expected to develop in-house, that does not sound like a success story.

Here's a Reddit thread on the topic:

https://www.reddit.com/r/AMD_Stock/comments/miar30/why_is_in...


Intel has mastered 10nm production.

The race to the bottom of physics is not a long ride.


>Intel tried to make EUV work before and failed

Where does it say that? They haven't even launched their first node with EUV yet. So arguably they haven't failed.

I am not entirely sure if Intel needs to overtake TSMC, they just need to be a good enough alternative. But I agree even that is a tall order.


Is it really though? Intel's roadmap seems to place them manufacturing on 3nm by 2024, and TSMC is having issues with their 3nm atm, iirc. It seems like competitive advantages in this industry stem from one of the major players getting caught up on a given new node/process.


It depends, it is not yet clear what Intel meant by 3nm in 2024. Shipping hundreds of wafers? Or Real Product launches by their customers?

It is the same with Samsung, which arguably shipped their 7nm ahead of TSMC. But with low volume, low yield and poor performance characteristics it doesn't really matter. (And I haven't even mentioned cost yet.)


They're already using EUV. They didn't try to use EUV and fail; they tried to stay on DUV too long and couldn't get the yield they needed. That was the mistake.


No I plan to rival your dating of the supermodels so you have to date regular models /s


Can you explain what EUV is? I'd prefer an ELI5!


Chips are made by etching lines into a special kind of glass. The etching is done with light. The smaller the light, the smaller the lines. Red light is big, violet light is small. Extreme ultra-violet light is really small, so they can etch really small lines. When the lines are small, the electricity moves through them more quickly, so the chip’s work gets done faster.


When the lines are smaller, you can place more transistors/functionality in the same area, which leads to higher performance. The power required decreases too.


Can you explain how the transistors are placed on the glass? Are they also "etched" by the EUV light?


They are put on there by putting chemicals in specific places. Because it's such a complex pattern, it's easiest to use a template and spray the chemicals on, the same way some artists apply spraypaint.

The template is first printed on the glass. This is done by putting a light sensitive layer (photoresist) on the glass, and exposing some of it to light. When the light gets put on there the properties of the photoresist change so you can wash off the parts you don't want. Jl6 called this process "etching" (there may be additional steps that can also be described as etching, I don't know. Semiconductors are made with a lot of processes. This is the most well known process).

Then you spray on the chemicals you want, and then you wash away the remaining photoresist so you can put the next layer on.

This happens a lot of times for a modern chip.


I looked around at several ELI5-type sources and honestly the Wikipedia one is the best I came across.

https://en.wikipedia.org/wiki/Wafer_fabrication

Here is ASML specifically, in a very slow slideshow format, representing our finest web technology.

https://www.asml.com/en/technology/all-about-microchips/how-...


In addition to what has been posted already: the shape of the lines (which ultimately will result in transistors) is defined by manufactured patterns on quartz glass photomasks[1]. AFAIK a modern processor design has a stack of around 40-60 masks placed in sequence for the light to pass through. Producing these photomasks follows a similar manufacturing process from a lithography POV, except that electron beams[2] are used instead of a light source. The mask manufacturing process requires its own specialized machines and Intel owns companies that produce those, eg. IMS Nanofabrication[3].

[1]: https://en.wikichip.org/wiki/mask

[2]: https://en.wikipedia.org/wiki/Electron-beam_lithography

[3]: https://www.ims.co.at/en/


The last I heard, TSMC's mask count for 3nm chips is back at just under 100. EUV had brought it down, but it's right up there again at the latest node. You really need to avoid those all-layer re-spins!


They are etched and filled with different elements. A traditional MOSFET transistor needs n, p and gate layers.


So technically, we could use X-rays/gamma rays to get even smaller.


If you had the requisite optics, perhaps.

https://en.wikipedia.org/wiki/X-ray_optics


So essentially, these chips are made by shining the design through a mask over a silicon disc coated with photoactive chemicals, which is then chemically processed to make that design "stick" depending on what parts were illuminated and what parts were not, much like old-school photography. There's no physical etching going on; it's a chemical process where light is used to "activate" certain parts of a coating.

The wavelength, or the color of the light, limits how small the projected image can be, and currently extreme ultraviolet lithography (EUV) is the state of the art, using ultraviolet light with a 13.5nm wavelength. As different materials behave differently under different light wavelengths, EUV requires a different process to shine the mask onto the silicon disk. One major difference from the previous processes is that in EUV lithography they use mirrors instead of lenses to direct the light.

So far, it's only the European manufacturer ASML who managed to perfect this process for mass production.
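For a rough sense of how wavelength maps to feature size, the usual rule of thumb is the Rayleigh criterion: resolution ≈ k1 * wavelength / NA. Here's a quick back-of-the-envelope; the k1 and NA values are ballpark figures for this class of machine, not any specific scanner's spec:

    # Rayleigh criterion: smallest printable half-pitch ~ k1 * wavelength / NA.
    # k1 ~0.4 and the NA values are ballpark, not exact machine specs.
    def resolution_nm(wavelength_nm, k1, numerical_aperture):
        return k1 * wavelength_nm / numerical_aperture

    print(f"EUV (13.5 nm, NA 0.33): ~{resolution_nm(13.5, 0.4, 0.33):.0f} nm")
    print(f"DUV (193 nm, NA 1.35):  ~{resolution_nm(193.0, 0.4, 1.35):.0f} nm")

That gap is why DUV needed multi-patterning tricks (like the SAQP mentioned elsewhere in the thread) to keep shrinking, while a single EUV exposure can print comparable features directly.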


I don't think they really have a choice but to do this. China is moving hard for Taiwan, and they are serious.


They’ve been trying to do this for years.

The problem always is “Intel fabs are so deeply integrated with Intel circuit design that it’s hard for more general customers to use their process.”

Intel foundry makes a bunch of ‘odd’ choices justified by the fact that most of their volume is so homogeneous in terms of circuit design.

I’m sure they’ve gotten better since I was last aware, but Intel is a hard ship to steer very far.

The other trouble is that Intel isn’t used to working from a place other than being the best and most advanced and expensive option.


Before they absorbed Altera, they were Altera's contract fab for the 10 series. That was delayed by a lot because it turns out Intel's processes are so weird. Eventually, Altera switched its mid-range Arria products to TSMC and got a chip out the door, and we waited another 2 years (at least) for the high-end Stratix 10 to come out. Intel bought Altera during that fiasco.

The last time they tried, it looked like it was a blatant move to try to prepare for an acquisition. They essentially screwed Altera into doing the work to port their chips to an Intel process while also tanking the value of Altera Inc. due to the delays. Do SoC companies want to risk chaining themselves to the Intel process? Do more of them want to be acquisition targets? Do competing CPU vendors want to give up their IP to Intel in exchange for fabrication?

Until Intel spins out its fabs, which it probably should have done a long time ago, Intel will be facing an uphill battle to acquire customers for its fabs.


This is brand new to Gelsinger's reign beginning in 2021, no?



I wrote another comment in the thread that they have tried being a customer foundry and given up multiple times in the past. Around 2002 and 2009.


Good to know! Thanks


He may be able to get it done, but they've been trying waaay before that. I'm not sure where the history started, but they were trying in 2007. I hope they can make it happen.


They need to try a lot harder. They presented to us some 10+ years ago when they had a go at this market previously. The meeting was a waste of time. Their IP was non-existent, we didn't even do the follow up to hear about packaging. They had no idea what was required.


Exactly. The IP portfolio that TSMC has is huge. You can go to Cadence, Synopsys, or TSMC themselves and get PLLs, serdes PHYs for PCI Express, USB, HDMI, DDR 3/4/5, LPDDR, etc.

Is Intel going to let a customer use their internal IP?

Most of the third-party IP for TSMC is based around ARM and the AXI / AMBA bus specs to quickly integrate. I've heard that Intel's blocks use an Intel-specific bus, so you would need a bunch of translation shims.

Do you remember when Intel ported the Atom CPU to TSMC 40nm around 2009? They had to pre-approve every design, it couldn't be in devices with larger than a 7" screen, and all kinds of ridiculous nonsense. I later read articles that they did not have a single customer license the Atom in TSMC's process. Then they decided to make Atom-powered mobile phone chips and basically paid a couple of companies to design them. Then no one wanted them. On top of that, there were Android apps that used native ARM code and not just Java, so Intel had to make an ARM-to-x86 emulator for that.

Intel is great at making server / desktop / laptop chips and pushing the industry forward with new specs and standards like USB. But I don't trust them to understand or follow through on anything else.


Yes, Intel is letting customers use their IP. Up to and including their CPU cores.


I wonder if they'll invest in developing IP they themselves don't need. I'm thinking high-speed serdes (not the slower stuff for pcie etc)


Mr. Gelsinger is rightly investing in the core business of the company he was hired to run, yes.


>>> The problem always is “Intel fabs are so deeply integrated with Intel circuit design that it’s hard for more general customers to use their process.”

Why would this be a problem?


There are things called “design rules” that the circuit designers (basically software developers who make the map for how the circuits should be laid out) need to follow for a process (basically “the software will allow you to do X, but the way we make chips, it won’t work”)

TSMC focuses on making rules that are easy to follow (and documenting their rules for a broad audience).

Intel has been “allowed” to make design rules that are bad for the circuit designers, as long as they improve performance, since the designers are a captive audience.

That doesn’t work as a 3P vendor.


I think we would have to wait and see how far they're willing to go. If Intel releases a high-quality process design kit, or even develops their own handholding consulting experience, I don't see why it couldn't work.


I think it's inevitable given the geopolitical situation in Taiwan that someone eats TSMC's lunch on America's dollar. China will almost certainly invade in the next decade, and getting large volumes of delicate electronics in and out of ports that are subject to a Chinese navy blockade will be very difficult (not to mention the difficulty in manufacturing them amid explosions). The CHIPS act is a big part of preparing for this.

Intel is set up pretty well to pull it off when this happens. Samsung might be in a good position too.


TSMC is the likely candidate to eat TSMC'S lunch. The shell of their fab building in Arizona is complete and they are outfitting the inside now. The Chinese will have a considerably more difficult time invading Phoenix than they would Formosa.


What about the TSMC intelligentsia? Will they reside in Arizona or Taiwan? Because if they remain in a China-controlled Taiwan, that Arizona facility won’t be very useful after, say, 5-10 years when next-gen chips are needed. They’ll be stuck in the past.


There are multiple, very large diasporas of people fleeing the PRC. Hong Kong got its initial start from fleeing Shanghai industrialists and bankers (the Hongkong and Shanghai Banking Corporation being a notable one). And many of those people also migrated on news of the handover (like HSBC), and also in the recent events.


I think congress will get their act together for one big change to immigration policy to get the skills to the USA. Some sort of fast track to residency for those involved, maybe with some extra financial incentive (Taiwan looks pretty cozy compared to Arizona).


Arizona is a 7nm fab I thought? They are saving the good stuff for Taiwan…


TSMC Arizona is N5 [0], but by the time it opens Taiwan will be on N3.

[0] https://tsmccareers.com/tsmc-arizona/


Good point and IIRC you are correct.


> I think it's inevitable given the geopolitical situation in Taiwan that someone eats TSMC's lunch on America's dollar. China will almost certainly invade in the next decade, and getting large volumes of delicate electronics in and out of ports that are subject to a Chinese navy blockade will be very difficult (not to mention the difficulty in manufacturing them amid explosions). The CHIPS act is a big part of preparing for this.

Isn't this why TSMC is creating another foundry in the US, so they can circumvent these issues with China somewhat?


No they are building in the US because the US told them they have to. TSMC is the biggest chip Taiwan has to convince the US to defend it from Chinese invasion. The most advanced fabs and the most capacity will definitely remain in Taiwan.


> China will almost certainly invade in the next decade

Unfortunately, a decade is optimistic now. Given rising instability in China, it will probably be less than 5 years. I believe the DOD has updated their estimate.


There are only two months out of the year, October and April, where the weather across the strait is calm enough to support an actual amphibious invasion. Meaning, we'll probably know ahead of time if they're going to do it.

But, I agree with you.


I disagree and I hope I'm right. Taiwan has been preparing for an invasion for decades. China will be observing Russia / Ukraine and although they may suffer less from corruption and military rot, they will be aware of the power of an enemy fighting for survival and freedom against a tyrannical invader.


I'm not optimistic. China learned that the West won't get troops involved directly, and Taiwan doesn't have a land border with a nearby friendly airport you can use to supply them with weapons (check https://twitter.com/GotfrydKarol/status/1588556253760225281).


I hope that you’re right as well. I think you would be right if the CCP still existed and was ruled by a committee. Unfortunately, China is a dictatorship run by Xi now, and his opposition seems too weak to fight now. The more power is centralized the more it exacerbates mistakes and bad decisions. Sadly, the era of Deng Xiaoping’s pragmatism is over.


Also, there’s no doubt in my mind they are bitter, because China wanted to be the country that got away with an illegal invasion in the eyes of the Western world, and Putin already cashed that chip. Now to do so would basically let China and Russia give the world a reason to cut them off together. Yeesh.


> China will almost certainly invade in the next decade

What the hell is wrong with all of this warmongering and reddit-quality posts.


Intel needs to spend a hell of a lot more than a few measly billion to rival even Samsung (sub-20-percent market share).

TSMC is spending $36 billion [1] on capital expenditure for 2022. Those numbers are fucking nuts, and they already have several huge fabs.

[1] https://www.reuters.com/technology/tsmc-q3-profit-jumps-80-b...


I've been in the semiconductor industry for 25 years.

Intel has tried to be a fab for external customers multiple times in the past. I can remember attempts around 2002 and 2009. After a few years the effort died out. I know a couple of companies that tried to get Intel to fab stuff, and their orders got shoved aside when Intel needed the fab space for its own chips.

The people who founded the company Open Silicon (physical design services) were kind of a spin off from Intel's design services group. They got frustrated when Intel wasn't serious about doing external chips.

I know a couple of other 3rd party external IP companies that have designed IP for Intel in Intel's process. They all said the DRC rules were very restrictive. These blocks had a large amount of analog circuits where the layout is done by hand. In contrast I am in digital physical design where we use automated tools for most of it and follow DRC rules provided by the fab. When they went to check the DRC rules for their analog design they found techniques you could use at TSMC were not allowed by Intel.

TSMC optimizes their process and rules so that 1,000 different companies can each make 1 million chips. If the yield is 80% then that's fine because in order to get the yield to 90% you would have to hire another 50 engineers and spend another 6 months of design time.

Intel optimizes their process so that 1 company can make 5 chips 1 billion times each. They can throw another 100 engineers at it and get the yield to 90% because they are making so few different types of chips.

EDIT:

Regarding DRC rules:

All fabs have DRC rules. These define things like: metal layer 2 can't be thinner than X, and if it runs parallel to another segment of metal 2 for longer than 20 microns then the spacing needs to be increased. Vias on layer 13 are larger than on layer 12. The rule for minimum size and the spacing between a 4 x 3 grid of vias is different for via layer 13 vs 12. And so on.
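For readers who have never seen a DRC deck, here is roughly what rules like that look like once they're turned into a check. This is a toy sketch with made-up limits and grossly simplified geometry, not anything resembling a real foundry deck:

    # Toy DRC check for two of the rule types described above: minimum metal2
    # width, and extra spacing when two metal2 segments run parallel for a long
    # distance. Dimensions in nm; all limits are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Wire:
        x: float
        y: float
        w: float
        h: float

    MIN_WIDTH    = 40       # hypothetical minimum metal2 width
    BASE_SPACING = 40       # hypothetical default metal2-to-metal2 spacing
    LONG_RUN     = 20_000   # parallel runs longer than 20 um...
    LONG_SPACING = 60       # ...need wider spacing

    def check(wires):
        errors = []
        for wire in wires:
            if min(wire.w, wire.h) < MIN_WIDTH:
                errors.append(f"min-width violation: {wire}")
        for i, a in enumerate(wires):
            for b in wires[i + 1:]:
                overlap = min(a.x + a.w, b.x + b.w) - max(a.x, b.x)  # parallel run length
                gap = max(b.y - (a.y + a.h), a.y - (b.y + b.h))      # edge-to-edge spacing
                if overlap <= 0 or gap < 0:
                    continue
                required = LONG_SPACING if overlap > LONG_RUN else BASE_SPACING
                if gap < required:
                    errors.append(f"spacing violation ({gap} < {required}): {a} vs {b}")
        return errors

    print(check([Wire(0, 0, 30_000, 38),       # too narrow
                 Wire(0, 100, 30_000, 50),
                 Wire(0, 190, 30_000, 50)]))   # only 40 nm apart over a 30 um run

A real deck has an enormous number of rules like this per layer, which is part of why a well-documented, designer-friendly rule set matters so much to outside customers.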

TSMC has their mandatory DRC rules. They will not manufacture the chip for you if you don't meet all the rules or get waivers for special requests. Then they have their recommended rules. If you meet these the yield will be higher, but many of them involve leaving even more space between wires. This increases your die area, which increases cost. But the yield will be higher. So do you save money or lose money? We have people on our team to analyze this and decide that we will meet THESE recommended rules but not THOSE rules.

TSMC also has suggested DRC rules. These would increase yield even more, with the tradeoff of more design time spent to do it.
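To make the "save money or lose money" question concrete, here's a toy calculation. Every number is invented purely for illustration; the real analysis involves far more variables:

    # Toy model of the recommended-rules tradeoff: spend die area to buy yield,
    # then see whether cost per good die goes up or down. All numbers invented.
    def cost_per_good_die(wafer_cost, dies_per_wafer, yield_frac):
        return wafer_cost / (dies_per_wafer * yield_frac)

    mandatory_only = cost_per_good_die(10_000, 400, 0.80)   # tighter layout, lower yield
    recommended    = cost_per_good_die(10_000, 380, 0.88)   # ~5% fewer dies, higher yield
    print(f"mandatory only: ${mandatory_only:.2f} per good die")
    print(f"recommended:    ${recommended:.2f} per good die")
    # With these made-up numbers the yield gain wins (~$31 vs ~$30 per good die);
    # shrink the yield gain or grow the area penalty and it flips. Multiply the
    # difference by production volume and you see why Intel-scale volumes justify
    # far more engineering effort per rule than a typical foundry customer's run.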


The other major challenge Intel has that contract manufacturers like TSMC don't is an addiction to inflated, monopoly-level margins on their chips. The entire organization is bloated and unable to control costs meaningfully like TSMC, because they simply haven't had to given the profit margins on CPUs. Contract manufacturers have to be nimble when responding to customers, and that just isn't the Intel we know. I will be really impressed if they can overcome this baggage.


Surprised by all the skepticism in this thread. The way I see it, Intel is the only real hedge against the China/Taiwan situation, and (at least according to their public statements) they're on track for the upcoming process nodes.

If the strategy works out, the upside is huge.


Agreed regarding potential upside.

The AMD vs Intel game is minuscule compared to the Intel vs TSMC vs Samsung game.

Intel doesn't have a great track record in contract mfg, but there is absolutely no reason this story can't change. Especially, if more subsidies arise in the future...


The day Intel gets EUV 3nm fabrication running at 90% yield, all commitments to fabrication for other parties will be forgotten. Given the choice between producing a contract part for you to sell or super-fat margin Xeon Platinum Gentlemen’s Club Edition for them to sell, you can eat shit.

The contract fab push exists to keep 401k funds from divesting in the near term.


Anyone else remember the last time Intel did this, completely screwed up delivery of the process node, and ended up buying most of their customers? I heard a rumour that when Intel became the fab for Altera, the contract barred Intel from fabbing for any other FPGA company, which handed Altera a tonne of leverage on price for the acquisition - because even if Intel bought Xilinx instead they wouldn't be able to fab for them.


There are other comments in this thread suggesting Altera shares were tanked due to manufacturing delays from Intel.


Yeah, I think that's a bit of re-writing history. It's certainly true that the Intel partnership was a shitshow, but Altera shares traded at a pretty constant ratio to Xilinx and were bought at a big premium. Intel weren't exactly savvy in their acquisitions during BK's tenure.


FTA: "Intel also needs to build up a third-party intellectual property portfolio, design services, and a chip packaging and testing ecosystem with partners to make it easier for customers to use Intel's manufacturing process."

... "so far, there is no evidence to show any of these hurdles have been overcome," Shi said.

Apparently it's in this domain that TSMC is king.


Never will happen.

Source: Their high level employees that I know.


Not with that attitude.


They have had decades of close collaboration and customer feedback from their internal design teams, and they have process technology teams using some of those designs as reference points to improve upon for the next node.

Don’t underestimate the kind of gravity well that creates, especially in a large organization like Intel. Their process is well tuned for high-performance Xeon chips. It’s not impossible to change that, but it is extremely hard, and it will take time.

At the same time, since they currently have near 0 external customers trying to use their process, they are having a hard time figuring out what their customers need. (I have no doubt they are working hard to change that.) But is there a way for them to improve a process node for an external customer? What if it creates an ever so slight downside for x86 products? These are the difficult decisions they will need to make at some point.

TSMC, on the other hand, has decades of experience doing exactly that: designing a process for everyone, with every fabless company in the world, for every type of chip, whether it’s server-class or tiny ultra-low-power parts. And they have a very broad range of flavors of every node to show for it.


Exactly. This is such a strange attitude. If they are really ‘high-level’, they should either walk away or push the ship in the direction that their CEO is calling for.

Why stay and say things like this?


That's not really how the world works. People often stay in jobs they don't like or don't believe in. Maybe the money is better than at competitors, maybe they are just afraid of change. There could be plenty of reasons.

Morale issues sink businesses. If everyone who was unhappy walked away, no companies would ever have morale issues.


Or this would be the CEO's job: convince the employees he's heading in the right direction or find other employees.


Hoping the CEO gets sacked and they get a promotion?


Golden handcuffs in the form of millions in unvested stock.

I knew people at companies that were acquired and joked that they were in "rest and vest mode"

They didn't care what the new company did as long as they got paid and their huge number of shares vested over the next 3 years.


With all of the skepticism here, who would be better positioned than Intel to eventually rival TSMC?


Samsung.


Any reason why GF and UMC are not in the running?


They dropped out of the leading-edge logic race after getting to 14nm. Development and capex just got too expensive to pay off. I don't know much about the industry but it has been consolidating for 3-4 decades so it's probably just the same story, the up-front costs keep growing. I guess GF and UMC were approaching the limit of DUV where you start either having to do very complicated and expensive multi-patterning, or even more complicated and expensive EUV, so maybe that is what stopped them.


GF doesn't have the scale - at least not in the smaller nodes.


SMIC?


The best shot for Intel taking over TSMC is getting the US to start a war with China at Taiwan. Of course it will be a tragedy, but in that case Intel is strategically important to the western world.


> getting the US to start a war with China at Taiwan

The wording is weird. The US cannot start a war over Taiwan, the only way for a war to break out there is for China to invade.


Given some of Intel's shady history, I'm not sure I'd like them to become the ones who take over from TSMC if anything goes south.

They would probably have a monopoly, and with that, too much control, and so, are liable to become lazy and greedy, which will harm everyone involved.


It has been talking the talk for quite a while, in fact for years, and I'm truly getting bored. I think Intel totally forgot what 'credibility' means.


So what's the big plan? Fab its own CPUs at TSMC with a better process and use its own fabs for contract manufacturing? But Apple, Nvidia, AMD, Qualcomm, Mediatek are the largest customers by volume and I don't think they would switch from TSMC to Intel soon.

Maybe some Chinese CPU manufacturers would like to use Intel but there is a trade ban for China.

Also, seeing how Intel likes to jack prices from time to time, I wouldn't want to depend on Intel for all of my manufacturing.


I also plan to do that.


I told you, the notes in my journal in GREEN MARKER are the ones you can steal from me. I specifically wrote this one down in maroon. How dare you! /s


Intel has an identity problem and greatly overestimates its capabilities (or influence... or maybe they have jumped the narcissistic shark altogether)?!?

TSMC & Samsung have large backing and incentives from their respective governments... no publicly backed corporate entity can match those pockets.

Intel's designs (i.e., architectures), products and manufacturing capabilities are from the early 90s. They have failed to innovate partly because their manufacturing capabilities are pretty poor, which has a high impact on their architectures and product development. They need to decouple those two functions as AMD did when it spun off GlobalFoundries and acquired ATI... which was probably the smartest thing AMD ever did.

Also, Intel needs to get off their highly egotistical perspective that chips can only be processed using PhDs and Postdocs...their entire mentality is elitist driven by a highly academic white castle mentality.


They can't even compete with AMD anymore....


There is what you want to believe, and then there is what is actually happening.

Intel is still the market leader in most respects and is still very much fighting AMD from an equal or better position when it comes to their top-end products and benchmark results.

Also, the news is about manufacturing chips, which AMD does not do at all. We should be mentioning TSMC here.


The recommended budget platform has been Intel for most of this year. They're clearly still competitive but that only lasts as long as AMD continues to reorient their pricing upwards to a more luxury tier.

The new budget AMD mainboards cost 200+ Euros now. Intel doesn't even need to execute well when AMD is about to play itself.


In what respect is Intel the market leader?


OEM


Intel still commands the market with about 3/4 market share. They simply make more CPUs than everyone else if you’re not looking at phones.


Sure, but that's because they have contracts with PC manufacturers, not because they have a better product at a better price...


They're also outperforming AMD at the top end with their latest chips.

And they dominate the Steam hardware survey (gamers are more likely to build their own machines).


... and IT procurement folks are familiar with the brand name. "No one ever got fired for buying IBM."


Did you see the benchmarks for their current gen CPUs? They are on par with Ryzen 7000 and better in performance per Dollar. Especially since they still support ddr3 and fit in 12th gen sockets, making it even more attractive if you want to upgrade from last gen. With AMD you're forced to buy all new, with expensive ddr4.


Minor correction:

Last generation and legacy support is for DDR4.

The new, more expensive stuff is DDR5.


Lol thanks, seems I'm getting old.


Why not look at phones, vehicles, dishwashers, CPUs used in hard drives, electric bicycles and so on - markets in high demand, many with supply issues at the moment?


In that case you have many manufacturers using older nodes and processes. It’s not comparing like with like at all. Also, older-process fabs are still being built to address that issue. If we are talking about leading-edge, high-performance chips, we must all be honest with ourselves despite our personal feelings. I dislike some of Intel's business practices, but they make a high-quality product at volumes that are insane. Despite poor execution on some generations they’ve remained at the top of the industry. That’s an amazing feat. Additionally, when they did fail, it was for trying something others had never attempted (GAAFET), and in my mind that isn’t as bad as failing using proven methods; especially so when they still kept a market lead and their older stuff managed to stay within spitting distance of the competition.


So they still command the market as long as you ignore the largest part of the market?


By watt (hah) metric?

On raw performance, the 13900K (and KF) outperforms the 7950X. On performance per watt, however, AMD holds the crown.


If it is of any indication, Intel is still very strong amongst Steam users.

https://store.steampowered.com/hwsurvey/?platform=pc

Have a look at the graphic "PC Processor usage by manufacturer". The ratio hasn't changed at all since May 2021; still 70% for Intel.

edit: more numbers

https://store.steampowered.com/hwsurvey/processormfg/


Intel 13th gen seems to outsell Ryzen 7000 everywhere so...


5800X3D seems to vastly outsell both...


On performance vs. TDP, Intel does worse than AMD, for both GPUs and CPUs.


I wasn't talking about sales... more about performance and cost (In the end I know that profit is all that matters though...)


Intel 13th gen is cheaper to buy and faster than AMD 7000. Power consumption is higher though, and according to my math with the energy prices in my area, the cost difference would be made up by AMD in about 3 or 4 years given several hours a day of heavy CPU usage.
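For anyone who wants to redo that kind of math with their own numbers, the back-of-the-envelope looks something like this (all inputs are hypothetical placeholders, not the parent's actual figures):

    # Rough break-even estimate: how long higher power draw takes to eat up a
    # cheaper purchase price. All inputs are hypothetical placeholders.
    price_gap            = 100      # how much cheaper one platform is, in your currency
    extra_power_kw       = 0.075    # ~75 W higher draw under heavy load
    heavy_hours_per_day  = 4
    energy_price_per_kwh = 0.30

    extra_cost_per_year = extra_power_kw * heavy_hours_per_day * 365 * energy_price_per_kwh
    print(f"break-even after ~{price_gap / extra_cost_per_year:.1f} years")
    # ~3 years with these inputs; cheaper electricity or lighter usage stretches it out.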


So if the Intel chip is more expensive and slower, why is everyone buying them? (Not a fanboy and not currently up to speed with where the market is at, just trying to follow the logic.)


There's other considerations at play:

- Engineering support when you're designing a new motherboard

- CPU SKUs with different use conditions. Intel industrial-rated CPUs are specified for 24/7 operation, 100% duty cycle, for 10 years. I'm not aware if AMD has an equivalent.

- Long product support - CPU SKUs that are "guaranteed" to be around for X number of years. As an example, some Ice Lake SKUs are planned to be supported for 10-15 years. Let's use an x-ray scanner as an example. The x-ray scanner mfg needs a computer inside the machine to help process info and display images. The x-ray equipment has an intended lifespan of at least 5 years and they plan to sell/service this equipment for at least 10 years. The mfg wants to use an embedded computer that's going to be commercially available for a long time so they don't have to re-engineer and re-certify their machine. This is one reason why you see some "stickiness" with Intel.


IIRC, The FDA (or the EU, or both) requires something like 10 years of support for medical equipment such as x-ray machines. If you want to sell the product for 5 years, then you need 15 years of parts availability.


Because your average person isn't an enthusiast who builds their own computer, knows better, and changes their buying decisions and habits accordingly. Instead most buyers choose whatever is good enough and convenient, or whatever brand they've typically historically trusted, or whatever their peers end up getting, or whatever an advertisement/reviewer/salesman tells them, because they're ignorant and not tech savvy.

Then you have reasons associated with vendor lock-in. I mean, most people buy DEVICES, not CPUs, and device manufacturers are hesitant to switch, because doing so can potentially be complicated and prohibitively costly (especially if they already have deals with their current vendors). Whereas their customers will end up buying their devices regardless of whether they switch CPU vendors or not (because of the reasons mentioned above)...

Ergo, as others have already sort of alluded to, it's largely because of branding, marketing, and vendor lock-in.


If I were to guess: AMD came out really poorly on motherboard pricing for AM5, 7000 CPUs are market leading in price, availability has been poor.

Intel meanwhile has an established year old motherboard pipeline for their current socket that 13th gen can be dropped in to.

If this were 14th gen vs Ryzen 7000 the calculus would be different as all the Intel purchases would also include a motherboard.

Also, Ryzen 5000 is going gangbusters outselling everything else.


> Also, Ryzen 5000 is going gangbusters outselling everything else.

Which is why they just dropped their prices? (see, e.g., https://forums.anandtech.com/threads/amd-has-dropped-5000-se...)


Those are official cuts; they've been at those prices at retail for two weeks.

You can see the 5600X price drop on Amazon happened Oct 26[0].

If you look at the Amazon best sellers[1], it gives you a pretty good idea of where things stand.

0: https://camelcamelcamel.com/product/B08166SLDF?context=searc...

1: https://www.amazon.com/Best-Sellers-Computer-CPU-Processors/...


My best guess would be deals made with OEMs, advertising and "monopoly"...


Who cares about performance per watt? I really care about top end performance.


Who cares about top end performance? I really care about performance per watt.


I’d take a V8 over a Camry any day.


You could do parallel computing with multiple CPUs or GPUs... cars are not comparable.


Sure they are. I’d like to use multiple V8s over multiple Camrys so I can get there faster for the trivial cost of some more electricity.


People using laptops, for instance.


Because it's losing the high margin datacenter battle against Epyc that funds R&D on subsequent gens.


Intel makes billions of dollars more per year than AMD, and has throughout its whole history.

Intel isn't strapped for cash to do R&D, and if they are, AMD is way worse off. Go look at profit numbers for both companies over the last 5 years. It's not even close.

To pretend like intel is going to run out of money to fund R&D is just to not even understand what's going on.


How Intel and AMD spend R&D is structurally different. AMD is fabless so they only have to focus their R&D exclusively on chip design. Intel has to build foundries, fund manufacturing R&D, as well as chip design.

It's a bit ghoulish but Intel's saving grace may be an invasion of Taiwan.


On the flipside, this is also why Intel is more profitable per chip in the long run - they own their own fabs, so the cut AMD pays to TSMC to make their chips (cost, profit, and research/overhead) stays with Intel. And Intel is somehow still making chips with near performance parity to AMD several nodes behind on 10nm+++ - when Intel catches up they will again kick AMD on multiple levels.

Intel is in a good place. The question people should be asking is why it's so close at all if AMD is several nodes ahead. Why is 5nm and 6nm TSMC only competitive with Intel and not crushing it? Their performance per watt at high load (and not at idle) is AMD's biggest differentiator. When Intel goes to smaller nodes they'll gain more thermal headroom to pack more and faster cores than AMD.

The current modern tech youtubers are asking many of the wrong questions. I watched GamersNexus crowing about AMD efficiency in the highest-tier chips and asked, "People buying race cars care about speed more than efficiency - why are you downplaying Intel's ultrafast chips on 10nm+++ vs AMD's smaller and more efficient nodes, and talking about their fuel mileage for a market segment differentiated by performance?"

Theres a lot of nonsense going around.


It's even worse than that - the youtubers don't compare like for like. They should pick a common state for comparison (eg. same power or same performance), instead it's just a free for all, and whoever picks the lower power cap has better efficiency, whoever picks the higher has better performance.

CPU efficiency is so nonlinear at high power levels as to make comparison meaningless if they aren't matched.


This is bad because certain market segments, like the most profitable consumer ones, are differentiated by performance, not performance per watt. Picking an arbitrary power cap and testing them both there will tell you which is most efficient, which is a bit like telling a Lamborghini customer that your Honda gets better gas mileage - it's not something he really cares about.

CPU efficiency is an interesting number to the tech youtubers who focus on it, but it is not the market differentiator at the highest-end, most profitable part of the market, and only matters to them. The AMD 7950X is a very fast chip which is much more efficient than its Intel counterpart, but at the top end of the market what matters is who performs better, not their fuel mileage. You'd be investigating a number interesting to testers but not to consumers likely to buy those products - it's a waste of time and effort.


I agree, if I'm buying a top-end consumer chip I only really care about raw performance - finishing a compilation run 20s faster 20x a day or running my Grand strategy game faster in the laggy endgame :P.

I guess from a practical perspective, if you aren't modifying the chip clocks then yeah, they are talking about the right thing. But from a technical perspective, I'm more interested in the underlying architecture performance, which is to say:

If the intel chip is set to a Power Limit of 100w, and the AMD chip is set to a Power Limit of 100w, what does performance look like then? Because it's nearly meaningless to run one processor at 250w and the other at 150w, then say the 150w chip is more efficient. Processor power skyrockets at high voltages and clocks, so likely the intel chip can use 50% less power for 10% less performance, if desired - Intel merely made the (correct, imo) calculus that maximal performance is more important than efficiency for this class of chip.

I think we pretty much agree - I just think it would be interesting to have a more accurate architectural efficiency/performance comparison (which would also help when choosing eg. a laptop, where those numbers DO matter).
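To illustrate why the curve is so nonlinear: dynamic power scales roughly with voltage squared times frequency, and voltage has to climb along with frequency near the top of the range, so power grows much faster than performance. A toy first-order model (the cubic exponent is a textbook approximation, not measured data for either chip):

    # First-order model: assume voltage rises ~linearly with frequency near the
    # top of the V/f curve, so dynamic power ~ frequency^3. Illustrative only.
    def relative_power(perf_fraction, exponent=3.0):
        return perf_fraction ** exponent

    for perf in (1.00, 0.95, 0.90):
        print(f"{perf:.0%} of peak performance -> ~{relative_power(perf):.0%} of peak power")
    # Giving up ~10% performance cuts power by roughly a quarter in this model,
    # which is why comparing chips at unmatched power limits says little about
    # the underlying architecture's efficiency.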


I think that performance per watt measurement ONLY applies to the budget end of the market. I'm not buying underlying architecture, I'm buying a CPU with a stated performance. If you're buying budget parts and pinching pennies for electricity it's certainly relevant. If you're choosing between an AMD 7950X and an Intel i9-13900K, it's stupid and doesn't matter.

As I said before, efficiency per watt is a youtuber number. Nobody who is buying these chips really cares, or if they care they already watch the youtubers who talk about it nonstop. It's the youtubers bringing it up on top-tier chips who are out to lunch: they don't understand the market actually being sold to, and they distort things because of numbers they care about but that class of customer doesn't.

I don't care what a 7950x does at 100w, because if I wanted that performance I would have bought a cheaper chip. It's a stupid nonsensical number unrelated to why people buy $700 high end CPUs. You don't buy that chip to save $3 worth of electricity.


Again, I am mostly interested because it would be representative of architectural efficiency to perform such a test. If the top-spec processor is more efficient at a given power level than another, the same applies to lesser processors as well, in general (in particular, laptops and the like)

This comparison is meaningless though if the processors aren't running at the same power, however :(


Do you mean sports cars? Because efficiency in race cars is quite the topic.

https://guidehouseinsights.com/news-and-views/efficiency-is-...


sure


[flagged]


What's the probability that they know something that you don't, or have different priorities? I'd go to about 1.


They can in the fab sector. GlobalFoundries was the 2009 spin-off of AMD's fabs.


> They can in the Fab sector.

I'm bullish on Intel's IDM 2.0 vision in the later stages of a 10-year horizon, but as of today and for at least the next few years, this assertion is unsubstantiated bullshit.

According to their latest Q3 10-Q filing[1], Intel Foundry Services segment accounts for $576M revenue and -$289M operating losses YTD.

In contrast, Globalfoundries' last relevant 6-K filing[2] reported $3.933B revenue and $501M operating profit YTD...and oh by the way, GFS Q3 earnings call isn't until next week Tuesday, so these numbers reflect only 6 months of the current fiscal year.

The reality these numbers support is that Intel's (bleeding) cash cow is in making and selling their own chips, not fabbing other companies' designs.

Furthermore, the only named portfolio segment that's currently doing worse than Intel Foundry Services is Accelerated Computing Systems and Graphics, which has hemorrhaged -$1.275B in operating losses YTD. Even the Mobileye segment generated 2.3x more revenue than the Intel Foundry Services segment YTD, and is profitable despite being acquired only 5 years ago.

To be fair, AMD intentionally divested their legacy foundry capabilities a decade ago; conflating GFS with AMD in the foundry space is logically indefensible, and there's meaningful underlying reasons why AMD pivoted to fabless.

[1] https://www.sec.gov/Archives/edgar/data/50863/00000508632200...

[2] https://www.sec.gov/ix?doc=/Archives/edgar/data/1709048/0001...


Long way to agree that a company that has divested their fab business doesn’t compete with Intel in the fab market.


AMD? That company shouldn’t be mentioned in the same sentence as Intel. Intel’s profits are more than AMD’s entire revenue.

I get it if you just like the products, but there’s no comparison between the two as far as being companies.


> Intel’s profits are more than AMD’s entire revenue.

I may be long Intel, but this unqualified assertion without cite is certifiable bullshit at face value.

From the latest 10-Q filings for Intel[1] and AMD[2], this is what the numbers are saying for 2022 YTD:

        (In millions) | INTC    | AMD     | AMD % |
  ====================|=========|=========|=======|
              Revenue |  49,012 |  18,002 | 36.7% |
        Cost of sales |  27,646 |   9,802 | 35.5% |
  --------------------|---------|---------|-------|
         Gross margin |  21,366 |   8,200 | 38.4% |
   Operating expenses |  17,900 |   6,787 | 37.9% |
  --------------------|---------|---------|-------|
     Operating income |   3,466 |   1,413 | 40.8% |
           Net income |   8,678 |   1,299 | 15.0% |
  --------------------|---------|---------|-------|
  Adjusted net income |   3,580 |   1,299 | 36.3% |
A few (probably obvious) notes, just in case:

- Intel's non-operating income (~80%/20% split between equity investments/interest and other) accounts for $5.098B and is 147% relative to operating income YTD! Adjusting this out of the picture tells a different story about pound-for-pound operations.

- Compared to AMD's $135M Q3 tax benefit, Intel recognized a massive $1.207B boon! Both companies swallowed operating losses this quarter (Intel -$175M v. AMD -$64M); without this tax benefit, YTD net income for both companies would have taken a hit. I keep this benefit in because Intel operates fabs, and much of that benefit was attributable to "a change in tax law from 2017 Tax Reform related to the capitalization of R&D expenses that went into effect in January 2022."

- Included in AMD's cost of sales and operating expenses are amortization of acquisition-related intangibles (Xilinx) to the tune of $1.005B and $1.499B, respectively. When this acquisition-related cost disappears, we can expect AMD's gross margins to substantially increase as you'd expect from a fabless design house.

Keeping it real.

[1] https://www.sec.gov/Archives/edgar/data/50863/00000508632200...

[2] https://www.sec.gov/Archives/edgar/data/2488/000000248822000...


Fiscal year 2022 is not complete yet for either company. Why would you look at that until it's completed? It's keeping it real disingenuous. All sorts of adjustments come into play. The most recent complete fiscal year matches my statement exactly.

FY21 AMD revenue 16.4 billion dollars[0]

FY21 Intel profit 43.815 billion dollars[1]

[0]https://ir.amd.com/news-events/press-releases/detail/1044/am....

[1]https://www.intc.com/news-events/press-releases/detail/1522/...


What's with the delusional arrogance of continuing to lean into deprecated past performance when data on present outcomes and future outlook is readily accessible? No one making informed, meaningful decisions today gives a shit about last year's Intel and AMD!

From Intel's latest Q3 press release[1], management is guiding towards $63-64B revenue to close FY22, representing -20% decline YoY.

In contrast, from AMD's latest Q3 press release[2], management is guiding towards $23.2-23.8B revenue to close FY22, representing +43% growth YoY.

Underneath the hood, Intel's two biggest segments by huge margin are going down faster than a whore spreading herpes:

- Client Computing segment revenue is down -17.0% QoQ, -18.5% YoY; segment operating income is down -53.9% QoQ, -53.3% YoY.

- Datacenter and AI segment revenue is down -27.2% QoQ, -16.3% YoY; segment operating income is down -99.3% QoQ, -68.5% YoY.

- This quarter, Intel spun off their Mobileye segment and is expected to hold ~94% of MBLY common stock upon IPO. For Q3, this segment's revenue was up +38.0% QoQ, +26.6% YoY; segment operating income was up +11.8% QoQ, +11.4% YoY...we won't be seeing this segment soften Intel's top line decline moving forward.

- Management has made it abundantly clear that they're trying to cauterize deep operating wounds now and over the next few years: "The company is focused on driving $3 billion in cost reductions in 2023, growing to $8 billion to $10 billion in annualized cost reductions and efficiency gains by the end of 2025."

- Next year, Intel will burn ~$5.4B acquiring Tower Semi to advance their IDM 2.0 vision[3]. Prevailing macroeconomic headwinds today and throughout at least the end of next year strongly suggests this will weigh a lot more unfavorably on Intel's balance sheet moving forward given the implied cost of capital went from cheap in February 2022 when the acquisition intent was announced to sky-fucking-high today trending to the moon based on what the futures market has already priced in.

[1] https://www.intc.com/news-events/press-releases/detail/1586/...

[2] https://ir.amd.com/news-events/press-releases/detail/1098/am...

[3] https://www.intc.com/news-events/press-releases/detail/1527/...


Financially yes, but how are you serious when comparing companies only based on their financials and ignoring one of their main products? AMD is currently eating Intel's lunch.


What do you mean by "eating Intel's lunch"? As OP said, Intel profits dwarf AMD revenue.

So I'm not sure what you mean? Can you show numbers that would support "eating one's lunch"?

Although you did say to ignore financials, which makes the analogy meaningless.


Performance-wise, AMD is crushing Intel. That's what people care about, at least they do as consumers and on this site.


For consumer markets, the newest AMD products (Zen 4) are great chips but simply too expensive. Logical Increments[1] is pretty good at showing the best choice at various price points (taking into account local pricing) and has Intel and AMD trading blows across the range, with Intel taking the majority of the wins.

All of the tech youtubers have been making similar points lately (LTT, GamersNexus, HardwareUnboxed).

Data centre does seem to be in AMD's favour at the moment, but I don't know enough about the space to know if their advantages will be long lived.

[1] https://www.logicalincrements.com/


But performance-wise, INTC is crushing AMD. Similar market cap, way higher dividends, way lower PE ratio. There are way more investors on this site than the average social media site.

I know what you mean about the chips themselves, but that hasn’t translated into actual money for AMD yet.


way lower PE ratio is not a good thing for a stock


Of course it almost always is. Would you rather buy a McDonalds that makes $1m a year for $5m, or for $1m?

A higher PE only makes sense if a business’s profit is poised to expand significantly beyond doubt. Which it almost never is.
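To spell out the arithmetic of the McDonalds example, here's a minimal sketch with the hypothetical numbers from the comment above; `pe_ratio` is just an illustrative helper, not anything from a real filing.

    # P/E = price paid / annual earnings; its inverse is the earnings yield,
    # i.e. what fraction of the purchase price the business earns back each year.
    def pe_ratio(price: float, annual_earnings: float) -> float:
        return price / annual_earnings

    for price in (5_000_000, 1_000_000):
        pe = pe_ratio(price, 1_000_000)
        print(f"${price:,} for $1m/yr of profit -> P/E {pe:.0f}, earnings yield {1 / pe:.0%}")

All else being equal, the lower-P/E purchase pays for itself in one year instead of five, which is the parent's point; the catch, as the reply below notes, is that "all else being equal" rarely holds.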


I meant that a high PE indicates that investors have high confidence in the company. Preferring to buy stock with a low PE is (usually) a contrarian position, i.e. you would only do so if you believe that the company will do better than the market expects it to. You might be right, or it might end up being a value trap.

If someone is selling a McDonalds that makes $1m a year for $5m and somebody else is selling one with the same earnings for $1m, either one of the sellers is behaving irrationally or the second seller knows something you don't. Why would somebody give you free money otherwise?


Gotcha.

Who you consider best is very different if you are a gamer who wants 120 fps or an investor who wants $$$.

But you can't speak for all consumers and people on this site: I couldn't care less about performance because even bottom-end CPUs are enough for my needs. In fact, as a stockholder, I care more about Intel.

I'm old enough to remember when AMD was an actual threat to Intel in the PC space despite inferior performance: AMD was far cheaper and getting design wins (for a brief period in the 1990s, before Intel's mega-deal with Dell and anti-competitive wrangling).

I don't have a horse in this race, I just enjoy seeing factions fight over "who's best", and how the arguments change over time.

EDIT: phrasing.


The gaming and productivity performance claim is erroneous anyway. I provided a solid review of Intel 13th gen vs Ryzen 7000.[0]

[0]https://news.ycombinator.com/item?id=33484260


This isn't Reddit #2. The original inhabitants who built HN are not kids. Most people here who won't bombard you with AMD fandom are generally appreciative of Intel's superior financial results and quality assurance.

If you want to talk performance, here's gaming performance of Ryzen 7000 vs Intel 13th gen.[0] Needless to say, I don't see where AMD crushes Intel.

And then there's the financials.

FY21 AMD revenue 16.4 billion dollars

FY21 Intel gross profit 43.815 billion dollars

[0]https://www.eurogamer.net/digitalfoundry-2022-intel-core-i9-...


> As OP said, Intel profits dwarf AMD revenue.

> Can you show numbers that would support "eating one's lunch"?

It's imprudent to swallow bold, unsubstantiated assertions without even bothering to sniff-test them.

Lunch[1] is on me this time.

[1] https://news.ycombinator.com/item?id=33477899


Why all the fighting? We're all in this together!


I'm not fighting. I'm merely attempting to dispel naive, unsupported optimism, to the benefit of my fellow Intel bulls who are too drunk on yesterday's Kool-Aid to interpret what the cryptic financial writing on the wall has been warning about for several years now.


Oil companies make a shit ton of money, should we blacklist EV car companies for that reason?


Who is talking about blacklisting AMD, here?


You sound like the people who say that Tesla is larger than all other car companies combined...


I don't think so.

Earning money is different from market valuation.


How about this:

China's chips don't sell well on the global market, and yet they managed to launch their space station with them.

What matters nowadays: how much money it makes, or what it allows them to do?

How long should we be stuck on that kind of question? I'd like to see people playing on Mars, so let's promote what does the job better, not what sells better.

AMD GPUs perform better than Intel ones, and they do it with close to half the TDP, that's just one example


>AMD GPUs perform better than Intel ones, and they do it with close to half the TDP, that's just one example

I feel like talking about Intel's 1st-generation video cards as if that's the entire debate and future of Intel is misleading and nonsensical. Intel hasn't really made discrete graphics cards before. Their first attempt missed.

AMD was behind Nvidia and Intel for years and years in their respective businesses, but here we are and things have changed. Pretending that now that AMD has an efficiency edge over Intel the entire game is over ignores the fact that Intel is still performance-competitive on larger nodes and also improving their efficiency every year.

Intel is a more profitable company with more than enough R&D dollars to compete against AMD. If you had applied your thinking to AMD anytime before the last 3 years, you would have also declared them a dead company.

It's just not a consistent way of thinking.


You're right, but the GP is wrong to denounce Intel's new standalone GPUs as outright inferior to AMD's, and it's misleading to call them a miss. While they aren't a win across the board, they already have better ray-tracing performance and better resolution-upscaling image quality than AMD on comparable cards. Intel has better H264 and AV1 encoding than either Nvidia or AMD.[2] It's also a fantastic ML GPU for the price, having 16GB of RAM. To dismiss it outright is short-sighted.

As far as AMD vs Nvidia goes, NV is not behind AMD at all. Nothing has changed there. The 4090 obliterates the 6900XT at 4K in Cyberpunk.[0] And if you enable DLSS 3, we're talking a minimum framerate of 10.6 FPS for the Radeon versus 118.27 FPS for the 4090.[1] This won't dramatically change with the 7900XTX. Nvidia competes well with AMD up and down the stack. Intel has a midrange GPU that should reach GeForce RTX 3070 performance once the drivers mature. The encoders, XeSS upscaling, and ray-tracing performance are already great.

[0]https://www.eurogamer.net/digitalfoundry-2022-nvidia-geforce...

See: CYBERPUNK 2077, ULTRA RT, DX12, TAA

[1]https://www.eurogamer.net/digitalfoundry-2022-nvidia-geforce...

See: CYBERPUNK 2077 DRIVING, MAX, PSYCHO RT, DX12, DLSS 3

[2]https://www.youtube.com/watch?v=ctbTTRoqZsM


It's not 2016 anymore. This just isn't true.


It's completely true.

FY21 AMD revenue 16.4 billion dollars

FY21 Intel gross profit 43.815 billion dollars


After the horrible experience I’m having with 12th gen I’ll probably never buy Intel again.


The actual title is: "How Intel plans to rival TSMC and Samsung as a chip supplier".


I plan to rival TSMC and Samsung myself. I have a garage and a UV flashlight


And I, Pumpernickle the Third, plan to rival the King of Spain with my Golden Armada!

Have at you!


Sounds like Intel is interested in losing more money as an undifferentiated, indefensible, commodity business like the PC desktop market. They're circling the drain.


Nothing creates a glut like a shortage.


How can they make their chips commercially competitive? US input costs are a lot higher than both TSMC's and China's.

There is a reason that US manufacturers have outsourced their production to Asia for the last three decades.


Well, chip manufacturing is mostly automated, so labor costs are insignificant.

Also, Intel is on a path to regaining the process lead, so they can compete with TSMC and Samsung on quality.


I'd say yes and no. TSMC has over 60,000 employees. They're also probably the biggest siphon of fresh graduates in Taiwan. As at most Taiwanese companies, people work like slaves (not Foxconn-in-China level) and have fairly low salaries. Intel has twice as many employees, but then again they're also kind of all over the place.

I can't really comment how important labor costs are, but I don't think that they're insignificant.


> so labor costs are insignificant.

When chip technology took off in the 1970s, the US semiconductor manufacturers had probably over 80% of the manufacturing share.

Why is that share so low today? Why is it that the US manufacturers would be lucky to reach double-digits in manufacturing share? How did they get to drop to so low in the first place?

The running cost of a fab is not the only cost; building new fabs carries enormous capital costs too.


The US government is pouring a lot of money into subsidizing the cost of spinning up fabs. Additionally, China will help western foundries achieve success because they want Taiwan.


> Additionally, China will help western foundries achieve success because they want Taiwan.

Can you elaborate on this point?


I imagine the logic is that if the US has its own domestic chip production and isn’t so dependent on Taiwanese chips, then the US may not care as much if China invades Taiwan.


Terrible reasoning, but yes I think that's what he means


Ignores the fact that the island itself acts as a buffer to China's SE expansion/projection of power.

There is a lot of strategic value to the island, even without the chips.


That makes sense, and as someone else pointed out, Taiwan has strategic value by itself.


This. The CHIPS Act was recently signed, and hundreds of billions are being invested in building onshore fabs.


The US also has a military budget 5 times bigger than the world's next 5 biggest militaries combined. How do you explain the fact that the US has been years behind China and Russia in terms of hypersonic missile capabilities?

Money alone doesn't make the world go round. Trust does. People work for early stage startups for free because they trust the founders.

When I was researching my master's thesis, 90% of the papers' authors were of Chinese or Indian origin (probably at US universities). A significant number of Chinese-American scientists are making their way back to China for fear of unfair prosecution. The stigmatization of Chinese-Americans during Covid didn't help either.

I'd be willing to bet that Japan's TSMC fab will be done faster than the US's.


> How do you explain the fact that the US has been years behind China and Russia in terms of hypersonic missile capabilities?

I tell myself someone just had the clarity of mind to refuse to build a doomsday device (hypersonic warheads break MAD, or just become a new MAD once everyone has them). It happened before with Project Pluto [0] (among other reasons).

> The program was terminated in July 1964 by the Department of Defense and the State Department as “being too provocative”. It was believed by many that if the U.S. deployed a missile of such awesome power against which there was no known defense, then the Soviets would be compelled to do so [1]

[0] https://en.wikipedia.org/wiki/Project_Pluto

[1] http://www.vought.org/special/html/sslam4.html


Here's a NASA conference paper from 2001 on hypersonics[0]. So it's not that it hasn't been worked on; it's just that it never delivered any tangible results. Why? They spent trillions on planes and aircraft carriers that didn't really produce the expected results in modern warfare. Why is the money hose still feeding into that?

The question of morality is REALLY off-topic and always comes up when someone makes a point about the inefficiencies of certain systems, instead of actually addressing the main concern, which is: why isn't the money being spent efficiently? And where are the checks to make sure it is?

But the only country that has so far used nuclear weapons and littered the planet with depleted uranium wasn't the Soviet Union, was it? That country also threatened to use them in Vietnam[3] and brought the biggest nuclear weapon on its carrier to threaten India in 1971[1]. It also wasn't Russia that unilaterally pulled out of the arms agreements; Bolton just argued that Russia was violating them anyway, so there was no point keeping them[2].

Look, I'm not trying to bash that country, but hold your horses when talking about morals in a discussion that is ultimately a game of power between superpowers. The reason the US should stay a superpower isn't that it's so morally superior; it's that, if there isn't some sort of equilibrium of power, one side is bound to become a bully. Just like in a democracy, where the intended equal separation of powers has been gradually eroding for decades, giving the executive ever more power and leading to all sorts of problems for society.

[0] https://ntrs.nasa.gov/citations/20020011010

[1] https://www.indiatimes.com/news/india/when-russia-stunned-us...

[2] https://www.npr.org/2018/10/23/659911920/bolton-affirms-u-s-...

[3] https://nationalinterest.org/blog/buzz/us-military-looked-us...


I just said "I tell myself" because it's what I'd like to believe. Certainly the US is not in the business of demonstrating good morals.

Thanks for the links, particularly re: India, I wasn't aware of this chapter in the cold war at all.


I'm skeptical of Russia's useful military capabilities given that they're currently losing a war to a smaller country with ~20-year-old NATO tech. Turns out that 5x budget bought a lot more capability than Russia had.

Similarly, China's military hasn't been tested in decades.


Can you elaborate by which metric Russia seems to be losing the war?


Russia is steadily losing the territory that they took in the initial invasion (e.g. Kharkiv, Izyum, Kherson), have made no new territorial gains in months, and have lost more soldiers in less than a year than America lost in the entire twenty-year Vietnam War.

They are not winning! And very much appear to be in the process of losing.


A much bigger conventional army, 10 months in, has yet to achieve any real progress towards victory.

They may not lose, but at the very least they are doing much worse than anyone expected given the relative disparity in military investments.


I'm not making any point on whether the US is likely to be successful or not (truthfully, I doubt it); I'm only pointing out that hundreds of billions of dollars were allocated for onshore fabs.


That's a fair point and a lot more than what Europe is doing. They have all the resources and the money that they need to succeed, but I feel like there is a need for a change in attitude for that to happen.


Well, subsidizing EVs made Tesla what they are, so maybe?


Um, you might want to let Intel know that the US is too expensive: https://en.wikipedia.org/wiki/List_of_Intel_manufacturing_si...

By my count, of the 7 fabs on the latest process, 5 are in the US.


Intel is acknowledged to be having problems keeping up with TSMC's progress towards 1 nm. They are falling behind.


Marginal costs in semiconductors are very low.


Robotics.



