Will 7nm And 5nm Really Happen? (semiengineering.com)
128 points by matt42 on June 20, 2014 | 132 comments



Nothing in there about how they'll do 1.5nm. That's understandable; it's a hard problem, and I imagine any solutions they've come up with will remain trade secrets for some time.

I've heard stories, though. Scanning-tunneling microscopes on fire off the shoulder of Orion. Massively parallel arrays of atomically precise probes, used to build CPUs atom by atom. Things you wouldn't believe.

I'm sure it's coming one day, but once you've gotten that far, how far away are you from outright nanofactories? Which are science fiction, and therefore can't happen.


Brilliant.


But do androids dream of electric sheep?


Love the reference, but I'm sure all the young philistines who hang out on HN have missed it completely. :-( You have my upvote.

Time to die.


This is hardly an obscure reference... I think you underestimate the pop culture literacy of young programmers.


Shhh, allow them their "programmers these days" elitism.

My 25 year old non-techie gf would have gotten that reference.

Blade Runner is hardly an underground cult classic.


Blade Runner is different because of the attention it received due to the "Final Cut" version that came out in 2007. If it hadn't been for that, references to it would probably be a lot less recognizable with the younger crowd. Case in point: I recently made a "2001: A Space Odyssey" reference in front of a room full of 20-something programmers and not one person picked up on it. It wasn't even that obscure -- it was a picture of the monolith, one of the most recognizable images from the movie (and in science fiction in general). The people in the room generally just hadn't seen or even heard of (really all that is required to get the reference) the movie.


Good luck finding a copy of it with the original film noir-esque voiceover.


It's expensive, but not difficult to find. http://www.amazon.com/Runner-Five-Disc-Complete-Collectors-B... (The version without the voiceover is "the Director's Cut.")


Eh, what's that, you young whippersnapper? In my day we had to debug our difference engine with a flyswatter.


Is it a reference to Blade Runner? I guessed that from the following comments, but otherwise I wouldn't have known (and I have seen the movie). I don't even know what part of the movie that comment makes reference to...

Edit: ok, found it. I remember that scene but nothing from the speech... Glad to know that was a cult moment.


Your username, which I assume is a re-spelling of xyzzy, is itself a cultural reference. I don't know how obscure.


I'm also 17.


We are quickly coming to the end of an era where hardware can compensate for sloppy code and processes. We are heading back to the days of the 640K barrier or the old C-64 days. It was amazing how efficient the programs were, and how ingenious the software engineers were.

We might even get surprised by the code of something like Elite. An incredibly brilliant piece of software that did so much more than what was thought possible.


> We are quickly coming to the end of an era where hardware can compensate for sloppy code and processes. We are heading back to the days of the 640K barrier or the old C-64 days. It was amazing how efficient the programs were, and how ingenious the software engineers were.

I don't think we've ever had a "golden age" of software development. It's like going back to find examples of good art (in whatever medium) from decades ago: we end up with the impression that the sci-fi of Asimov's era was somehow better than what we have today. Really, the crap just disappeared. Pointing to examples of good software in the 80s (and certainly impressive feats can be found) doesn't mean they were the norm. Having worked on a lot of legacy projects, I can assure you there's plenty of crap from that era as well.

What we end up with is a false impression of our current time. We see all the crap, we're filtering it out now, and we think things must be worse now than they used to be. We just won't know until we can have a proper retrospective in 10-20 years. Once the crap's filtered out, what remains?


This is precisely what sprang to mind. It's stuff like the LISP Machines and Elite that have survived the 1980s long enough for us to consider them part of a "golden era". Those were the exception, not the norm, of the 1980s.


"We are heading back to the days of 640k barrier or the old C-64 days. It was amazing how efficient programs were and the genius of the software engineers."

I think software is already getting there, since we've been dealing with diminishing returns for a while. The previous generation of languages like Python, Perl, Ruby, etc. traded performance for convenience and waited for hardware to catch up. It was a good trade in many cases at the time, but as we collectively learned more about the design space I think it has become clear that it is not an intrinsic trade; it is possible to get a more convenient language than we had in the 80s or 90s without trading away performance.

LuaJIT was an early example of this, but the flood gates are opening now. Go is not architected to the nth degree for speed, but it's a language that's only slightly slower than C, and in my experience, only slightly slower to develop in than Python (possibly with a crossover point, as the program size grows, where Go is simply faster). Rust ought to be a lot easier to work with than C++ once you learn it, and I expect it to reach near-optimal speeds, possibly even beating C/C++ in practice as it will be more feasible to perform some aggressive optimizations for multithreading. And I've noticed a lot of the other little bubbling languages that may become languages of the future, like Nimrod, have the same sort of focus in them: "how can we get these convenient features without a 20-100x speed penalty?"

Then, once you have these languages, I'm noticing that in many cases while bindings exist, entire stacks are being rewritten to be simpler and faster. Go has its own webserver. Dollars-to-donuts Rust will too in another couple of years. I suspect this is actually part of the trend; rather than binding to older, huge frameworks and code written without much concern for latency issues, etc, more code is going to start being rewritten to care more about those things. Between mobile pressing us on one end and desktop speed advancement stalling out on the other, there's increasing motivation to write faster code where it matters, without necessarily having to use C or C++.

I'm not saying paradise is incoming, but I get the impression that we're seeing more care about performance manifesting in real languages and code than we used to.


Pretty much every competitor I've ever seen for C/C++ has always made the same claim: with X optimization that C/C++ can't do, we can eventually meet or beat C/C++.

It's never actually come true in the general case. Ever.


You're thinking of the previous generation that I was referring to. "We can create languages that do whatever we want, but Sufficiently Smart Compilers will make it as fast as C!" This has become a joke. Justifiably.

The new generation that I'm referring to is more like "Well, that didn't work. Let's design nicer languages, but think about performance impacts up front this time." Go is pretty fast now, for instance... not C-fast, but not Python-slow or Javascript-slow; it's closer to C than those, even on a log scale. LuaJIT is an early example, where I believe the design of Lua was fundamentally based on what could be done quickly, rather than what could be done nicely. You can still have nicer languages that incorporate our years of progress since the 80s/90s, but if you think about performance from day 1, they can also run pretty quickly, too. They may not be quite as nice as the scripting languages, but then, we also know ways of making up for that too, so on balance I like them better even so.

(And A: Yes, Javascript is still slow, even after all the browser work. It's just "not as slow as it used to be"; consider, if Javascript was so fast, how does asm.js post such improvements over it? Answer: JS is still slow. And asm.js is still about 2x slower than C, last I knew, after all. B: I no longer believe "languages aren't slow, only implementations are", as, proof-by-construction, the last 10-15 years produced plenty of languages that appear to be, yes, fundamentally slow. Those who wish to argue may produce your choice of general-C-speed compiler or interpreter for Javascript, Python, Perl, or Ruby. After I-can't-even-guess how much effort has been put into speeding these up, I say I'm allowed to draw conclusions.)


No I'm not; those new-fangled compilers with that fancy-ass "think about performance up front" language don't have the umpteen years and billions of dollars worth of research put into them the way C++ has.


If you want all the advantages of C++ compilers, you can write a backend for the language's compiler that generates C++. If the language is actually performance-focused, there shouldn't be any semantic mismatch in doing so; for example, every Rust type should be able to be translated 1:1 to a (much more syntax-laden) C++ type.

The new-generation languages (of which Go is a bad example; Rust and Nimrod, and now Swift, are much better ones) don't attempt to get the computer to do the kind of magic at runtime (garbage collection, type reflection) that made previous-gen languages so slow.

Instead, these NGLs start with the assumption of C/C++ runtime semantics, and then, through extra work done at compile time (e.g. type inference, ownership-tracking) clear away as much implied/redundant/unneeded syntax as possible.
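
To make that "1:1 translation" idea a bit more concrete, here is a minimal sketch of my own (not from any actual compiler backend; the Shape/Circle/Rect names are hypothetical): a Rust-style tagged enum expressed as an equivalent, more syntax-laden C++17 type, with no garbage collection or runtime reflection involved.

    // Hypothetical Rust source:  enum Shape { Circle { r: f64 }, Rect { w: f64, h: f64 } }
    #include <cstdio>
    #include <type_traits>
    #include <variant>

    struct Circle { double r; };
    struct Rect   { double w, h; };
    using Shape = std::variant<Circle, Rect>;   // tagged union; the compiler manages the tag

    double area(const Shape& s) {
        // std::visit dispatches on the variant's tag (a small integer index);
        // no GC, no reflection, no heap allocation.
        return std::visit([](const auto& v) -> double {
            using T = std::decay_t<decltype(v)>;
            if constexpr (std::is_same_v<T, Circle>) return 3.14159265358979 * v.r * v.r;
            else                                     return v.w * v.h;
        }, s);
    }

    int main() {
        Shape s = Rect{3.0, 4.0};
        std::printf("area = %f\n", area(s));   // prints area = 12.000000
        return 0;
    }

The point is only that nothing here needs runtime magic; whether a real language backend would emit C++ rather than, say, LLVM IR is a separate engineering question.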


> If you want all the advantages of C++ compilers, you can write a backend for the language's compiler that generates C++.

Well, it sounds as if it's as simple as being able to take advantage of optimizations to meet or beat C++.

Good luck with that.

Also, FYI: Nimrod is garbage collected, which runs counter to your reasoning.


It's much simpler than that. It actually makes writing a compiler easier, as you have higher-level abstractions to work with compared to generating machine code. There are many languages that compile to C, which should prove to you that it can be done. I don't see why targeting C++ would be any different, and I'm sure there are languages already out there that do.


You can compile any language down to C or C++, but that doesn't mean that it will run as fast as well-optimized C/C++.

The moment you include dynamic dispatch, auto-conversion to big integers, or one of a zillion of other high-level features without also introducing ways for the programmer to indicate how the compiler should implement them, you give up being just as fast as C/C++.

Yes, the difference may be small, and spending time on improving your compiler can make it smaller and smaller, but for any 'real world program', it won't be zero.

The same is true for C vs assembly, but there, the difference mostly _is_ small because C has none of those high-level features.

Also, few people have the skills and the time to write well-optimized C/C++.
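
A tiny illustration of the dynamic-dispatch point (my own toy example, not anyone's benchmark; Adder and add_static are made-up names): once a call goes through a vtable and the concrete type isn't visible, the optimizer generally can't inline or constant-fold it, while the static version collapses to a constant.

    #include <cstdio>

    struct Adder {                    // dynamic dispatch: calls go through a vtable
        virtual int add(int a, int b) const { return a + b; }
        virtual ~Adder() = default;
    };

    inline int add_static(int a, int b) { return a + b; }   // static: trivially inlined

    // When all the compiler sees is "some Adder", the virtual call is an indirect
    // call it generally cannot see through; use_static() typically folds to 5.
    int use_dynamic(const Adder& a) { return a.add(2, 3); }
    int use_static()                { return add_static(2, 3); }

    int main() {
        Adder a;
        std::printf("%d %d\n", use_dynamic(a), use_static());
        return 0;
    }

Languages can keep features like this and stay fast, but only by giving the programmer a way to opt out of the indirection where it matters, which is exactly the "ways for the programmer to indicate how the compiler should implement them" point above.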


bam, nailed that shit, especially the point about performance costs inherent in a feature set rather than in a language.

I just didn't want to continue a conversation with someone who thinks adjusting the age old "optimizations will eventually make it as fast" to be "compiling to C/C++ as an optimization will eventually make it as fast" was somehow going to magically make it come true.

It's the same old argument reskinned, and as you pointed out, the feature set has a lot to do with it.


I said nothing about "compiling to C/C++ as an optimization." I said targeting C++'s semantics (i.e. having only features C++ already has) makes a language fast.


https://news.ycombinator.com/item?id=7921511 > If you want all the advantages of C++ compilers, you can write a backend for the language's compiler that generates C++.


Investment in C++ compiler technology has been shared, in the form of LLVM, with Rust and other languages.


LLVM is not a C++ compiler; Clang is a C++ compiler frontend that uses LLVM as its backend.

Significant difference there.


Go is already fast enough that I can implement parallel systems where the limiting factors are hardware-architectural. It's hard to efficiently use all of the available parallel computing power. Erlang does the best job of it now, but there is certainly room to wrest even more performance out of the same number of transistors. Multiple cores pretending to be "the" CPU stinks.


Fortran


Be careful using LuaJIT as a representative example for anything. Not every language implementation can be written by a human super-optimizer like Mike Pall!

But yeah, people have a better idea now of what features need to be built into a language to get good performance, and Rust shows off many of them.

Finally, nostalgia for the old days often includes a failure to remember the many limitations of software back then.


> LuaJIT was an early example of this, but the flood gates are opening now.

Smalltalk was an early example, but the community made the wrong mindshare choices and it never lost the "slow" reputation. Java still experiences some of the same dynamic.


> Dollars-to-donuts Rust will too in another couple of years.

I believe the project to follow is "Teepee", which is being developed by the person who wrote the quick-and-dirty "rust-http".


The fastest modern 3D engines (Frostbite, CryEngine) are as brilliantly efficient as those programs from the early days. The difference is that today's highly efficient systems are much more complex, so besides brilliant ideas, more and longer hard work and studying is needed to create something that really impresses people.


I'm pretty sure those days are gone forever. Even if there's a slowdown of CPU power, there's still quite a lot left over for the sloppy coders of the world. Especially when we consider a lot of software isn't CPU bound but storage bound, and even the most commodity SSD blows away the spinning disk. Oh, we're also all 64-bit now, and computers shipping today come with 8GB of RAM.

We recently played the constraint game with the mobile/tablet space. It was fun while it lasted, but now with quad-cores and 1 to 2GB of RAM standard, it's over as well. Tablets have become the new laptops. Phones have become the new tablets. It's boring.

I think the days you pine for are forever gone, at least in the general computing space. I find that once something new comes around, it's naturally constrained, and if it allows people some level of freedom and creativity, then the weirdo early adopters will rush in, and in that group there will always be a handful of super-stars. This minority makes the big crazy strides or the crazy efficient game and then goes into more respectable work.

We also saw this with the early web, which was so much more innovative and experimental than the "social marketing" mode we've all agreed works best. That matured quickly as well. Look how the web browser is pretty much a platform, if not a quasi-OS, in itself. We keep reinventing the big bloaty OS and big bloaty applications for reasons that make sense; because people want big bloaty toys and the bells and whistles they offer.

So where's the next new constrained system young hackers are going to blow our minds with? Maybe 3D printing. Maybe drones. Maybe VR. Maybe automated and electric cars. But even those have a certain level of maturity already. We might not know until it actually happens. Who saw TBL and the www coming? I suspect very few.

I personally hope to wake up one day to a new Jobs/Woz combo offering me something straight out of sci-fi, like a home robot with useful arms and enough AI to make use of it or a lucid dreaming machine that actually works or a nootropic that makes us all near geniuses. Come on guys, stop writing bejeweled clones and financial apps and blow our minds again. I suspect I still have a few mind blowing events left in my lifetime (almost 40 now).


Perhaps we'll end up with programming competitions that have restrictions on equipment like some forms of auto racing. Perhaps a common VM would be used for those competitions (maybe this already goes on?)

It could rate limit the CPU/peripheral access, limit memory, etc...


That's been writ large by every fixed hardware video game console.

Though the tech heroism aspect of game programming strikes me as a bit diluted these days now that highly refined licensed engines and middleware are understandably the norm.


Isn't that roughly the demoscene, where there are often categories for entries that fit in 4kB or 64kB; or competitions to do the most interesting thing in 1kB of JS, or the like? Code golf?


RPi gives a nice fixed target for that kind of competition, although it is not as limited as competitions would like.


...instruction set architecture.

Core Wars.


If only. But no.

What happens when fab technology stagnates, everything locks in place, and microprocessor advancement halts? Nope; it takes years for state-of-the-art fab technology to trickle down everywhere, all the while getting cheaper and cheaper and cheaper. That state-of-the-art smartphone in your pocket uses processors fabbed with 28nm technology. Even switching to 14nm technology would quadruple the transistor count and allow a corresponding increase in performance. Also, as fab costs come down it becomes more and more cost effective to increase chip sizes up to the maximum feasible die size, further increasing performance.

Meanwhile, as former state-of-the-art processor manufacturing becomes cheaper and cheaper it becomes more and more ubiquitous. So now not only your CPU but also your flash storage is made using 14nm technology, and maybe they decide to cram a shit-ton of micro-controlling processing power in there (they already do) and a bunch of static RAM for caching or whatever, and performance keeps improving. Meanwhile, the processors and controllers in the radio system, on the cell towers, in the switches and routers, and in desktop and server devices keep getting faster and cheaper too. So now you have a butt-load of resources "in the cloud" or "the fog" or whatever idiom you want to ascribe to the phenomenon, all available for making sloppy code run faster and better.

Ah, but wait, there's more. This is all the boring stuff, but there's crazy stuff out there too. Memristors are almost certainly the future of computing in more ways than one, and so far it looks like they'll be as feasible to manufacture and as practical as we've theorized. With durable storage that has access times faster than modern ram the performance bottlenecks of the past (which have rarely been in CPU and have more often been in I/O or RAM) will fall by the wayside. Pocket sized computers will have terabytes of non-volatile super RAM. You can write 3D renderers in javascript and they'll run at 4k resolution / 60 fps with detail beyond what state of the art GPUs are capable of pushing today.

So no, the hardware train isn't coming to a stop, in many ways it hasn't even started yet.

That doesn't mean that caring about quality and writing highly efficient code isn't worthwhile, but it does mean that there probably won't be a big mean performance bottleneck forcing everyone to get their shit together in the near future.


With 16nm parts in production now, I suspect conventional NAND flash has less than an 8X density increase, and less than 10 years, left before we need some totally new tech to keep the party going. AFAICT memristors will not be anywhere near ready for high-volume production by then, at least not in a way that's better than NAND.

With the USD supply expanding 8-10%/yr, within a decade, we may actually see flash storage costs increase over time.


I quite like that perspective. Having to accept waits and lag on i7 processors with double-digit gigs of RAM that's faster than ever, etc. *), is really frustrating.

Luckily, in my experience at least, Microsoft is finally starting to catch up on usability and efficient use of hardware. Windows 7 didn't raise the bar for hardware specs significantly, and neither did Windows 8, 8.1, etc. [1]

*) not talking data science, rendering, games or user/admin misconfiguration, i.e. DNS timeouts etc.

[1] : windows.microsoft.com/en-us/windows-8/system-requirements


We also had computers that'd fall over at the drop of a hat, with a crapton of low-level code that'd cause all sorts of havoc, because not everyone who writes code is a genius or infallible.


> We also had computers that'd fall over at the drop of a hat

Hm. That's not my experience at all, rather the opposite. Those 8-bitters were incredibly reliable.


I think he's remembering the TSR battles of the early IBM PCs under DOS. You had to be careful what you ran because they'd conflict with each other, resulting in bizarre failure modes.


Unless you had a Kempston interface attached to a Speccy; then a butterfly fluttering its wings in South America would crash it ;).


The Sinclair was a species all by itself in that respect. I'm thinking along the lines of the BBC Micro, the Dragon / Color Computer and the TRS-80. Compared to those the Sinclair was incredibly cheap, but that was definitely reflected in the build quality. I remember the Microdrive; that was a funny little design, very creative but very flawed. Like an 8-track endless tape but for data.


I was (still am?) the proud owner of a Microdrive. It was supposed to fill the niche between tape and floppies. Tape is an interesting medium, but somehow it keeps losing (spoken by someone who still has tons of audio tapes lying around).

It would be cool to do a teardown of the microdrive.


We've been hearing that statement for at least the last 15 years. Is it any more true now than it was then?

The only thing that's caused people to start worrying about efficiency has been social: mobility and environmental concerns.

In no way have technological issues stopped us from increasing our performance throughput.


> something like Elite

What is it? Elite is a word used many places so I couldn't google it.


An old video game, which is often praised for its clever code (and gameplay): http://en.wikipedia.org/wiki/Elite_(video_game)


http://en.wikipedia.org/wiki/Elite_(video_game)

The original version did indeed do an amazing amount with a 6502 and 32KB of RAM.



The word you are looking for is "clever", and we all know that clever isn't always better. Yeah, stuffing everything you needed into <64K was difficult (and fun). But that code isn't "world class" and I wouldn't want to use that code in today's zero-day security environment.

Yes people need to write cleaner, less sloppy code. But we also don't need to go back to the days of self modifying code either.


People have been talking about the impending end of Moore's law for a long time. Now, we have run into a wall with clock speeds, which has already slowed things down and led to a lot of backfilling (e.g. the huge push to improve the speed of VMs and interpreted languages is essentially a tradeoff of memory for performance, which follows from memory continuing to get cheaper while performance stagnates, kind of).


The drive to smaller gate structures is because a MOSFET transistor gate is essentially a capacitor, and as such, it takes time to charge and discharge the capacitance of the gate. So to further increase the performance, one must reduce the capacitance of the gate. The gate is basically an RC circuit. The easiest way to reduce the capacitance is to reduce the physical size of the gate: the size of the gate is directly proportional to the capacitance. Once you get down to a few atoms wide, which they are approaching now, quantum effects kick in and it starts getting impractical to go further. The alternatives for the next generation have to do with attacking the R part of the equation, which is what they mean when they talk about electron mobility.

Other approaches that are unrelated to the traditional MOSFET are being researched, but there is such a huge amount of IP based on MOSFET that any move away from it will incur a huge cost in porting over (think of it as the hardware version of switching to a new programming language). I am not sure what the next-gen answer will be, and it seemed the author did not either (which tells me he knows what he is talking about).


Wow, you are so wrong, and yet no one has corrected you!

We want to increase gate capacitance, not decrease it. The higher the capacitance, the faster you can invert the channel, and therefore turn the transistor on.

Think of a transistor as a water tap (valve based). Gate capacitance is the sensitivity of the valve to the handle. In this analogy, higher capacitance is equivalent to less effort to turn the handle (to move the valve out of the way and let the water flow).

That's the reason we want to increase the area of the gate (the idea behind FinFET, and all-around gate), and increase its dielectric constant (the idea behind high-k materials).

You're also confused about the RC model - perhaps you heard it in the context of a logic gate (as in NAND), and you're trying to apply it when talking about a transistor gate (as in FinFET). Nope.
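
For anyone following along, the standard back-of-the-envelope relation behind this (a textbook long-channel approximation I'm adding here, not something from the parent comment) is:

    I_{D,\mathrm{sat}} \approx \tfrac{1}{2}\,\mu\,C_{\mathrm{ox}}\,\frac{W}{L}\,(V_{GS}-V_{T})^{2},
    \qquad C_{\mathrm{ox}} = \frac{\kappa\,\varepsilon_{0}}{t_{\mathrm{ox}}}

Raising C_ox (more effective gate area per footprint via FinFET or gate-all-around, or a higher-κ dielectric) raises the drive current available to charge whatever load the transistor switches and improves the gate's electrostatic control of the channel; that's the sense in which higher gate capacitance is wanted.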


I've heard that the whole quantum tunneling stuff is, in practical terms, bullshit. While it's a real effect (and physicists looove to talk about it), it's virtually irrelevant because the overwhelming issue is parasitic capacitance. When you have a 3GHz clock, everything acts like a capacitor and you have current leaking all over the place (the smaller the circuit, the closer all the elements are).

So transistors in modern processors don't end up switching consistently. There are all sorts of error correction to compensate for that, but it only goes so far.


The gate leakage current was a serious concern until the High-K gate dielectric was found. It increased the gate dielectric thickness so tunneling current is reduced significantly (exponentially). Nowadays the main problems are subthreshold leakage and dynamic power. Here's a good article about it:

http://spectrum.ieee.org/semiconductors/design/the-highk-sol...
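
Roughly, in the simple parallel-plate picture (my own back-of-the-envelope addition, not from the linked article, with α as a material-dependent constant): capacitance falls only linearly with physical thickness while direct tunneling falls off exponentially, so a higher-κ material buys a physically thicker, far less leaky film at the same effective capacitance.

    C_{\mathrm{ox}} = \frac{\kappa\,\varepsilon_{0}\,A}{t_{\mathrm{phys}}},
    \qquad J_{\mathrm{tunnel}} \propto e^{-\alpha\, t_{\mathrm{phys}}}

Going from SiO2 (κ ≈ 3.9) to a hafnium-based dielectric with κ around 20 lets t_phys grow roughly 5x at constant C_ox, which knocks the tunneling current down by orders of magnitude.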


> the main problems are subthreshold leakage and dynamic power.

This is exactly correct. Even at 28nm, many SoCs are intentionally using less than the max manufacturable transistor density, due to dynamic power/thermal constraints.

SRAM is scaling poorly too. It makes up 50-60% of many SoC designs yet at 1500MHz+ its density (Mb/mm^2) looks likely to increase just 1.1X between 28nm and 16nm.

Desktop CPUs are not going to get 32MB on-die caches any time soon. At least not without eDRAM.


On the other hand, I heard just this week at imec that currently 30% of the leakage is due to quantum tunneling. Nothing to sneeze at.


I find it just amazing that modern chips already have parts that are just thirty or so atoms across [1], and that there are plans to go even smaller. I guess you can't go much smaller than 1.5nm because that's, like, three atoms, right?

[1] http://en.wikipedia.org/wiki/14_nanometer


1.5nm is more like 10 atoms across, given that, for example, a C-C bond is 0.154 nm. Not that that helps tremendously :)


https://en.wikipedia.org/wiki/Monocrystalline_silicon has 0.5nm for the lattice spacing, distance between Si atoms ranges somewhere between 0.3nm and 0.5nm, I guess. Even if you go to graphene instead of Si wafers, I don’t think you’ll get more than three atoms within 1.5nm, as graphene is not a cubical lattice.


Unit cell of Silicon is about 5 angstroms (0.5 nm), so we'd be at about 7 atoms wide for a diamond cubic structure (I think).


I don't think we can actually make chips out of carbon...


It was a number for the length of a chemical bond that I had easily to hand; other bonds aren't significantly different, definitely not 5 Angstroms.

Also, people working on graphene and related materials are trying damn hard to make chips out of carbon ;)


Unit cell is 5.43 Angstroms, according to this site: http://hyperphysics.phy-astr.gsu.edu/hbase/solids/sili2.html. Of course the bonds by which the cell is constructed are smaller.
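
Putting the numbers in this sub-thread together (my arithmetic, using the bond length and lattice constant quoted above, and the diamond-cubic nearest-neighbour distance a√3/4):

    1.5\ \mathrm{nm} \,/\, 0.154\ \mathrm{nm}\ (\text{C--C bond}) \approx 9.7
    1.5\ \mathrm{nm} \,/\, 0.235\ \mathrm{nm}\ (\text{Si--Si nearest neighbour, } a\sqrt{3}/4,\ a = 0.543\ \mathrm{nm}) \approx 6.4

So "1.5nm" works out to somewhere between roughly 6 and 10 atoms across, depending on the material and crystal direction, consistent with the 7- and 10-atom estimates above.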


That's what I always wonder. There's gotta be a limit in there somewhere... right? Maybe? But they just keep finding new ways to make 'em even smaller yet. How far will we go, and what will our devices and industry look like by the time we can't go any further?


It's quite possible we're at the cusp of the plateau already. For many years planes kept getting faster and faster, until one day they didn't. It may be the same with processors. For years they'll keep getting smaller and smaller, until one day they don't. In fact, this already happened with clock speed 10 years ago. The top Pentium 4s, while certainly worse than the Haswell i7s of today, have about the same clock speed (3.8 GHz or so).


That's due to thermal and electrical limits of silicon and whatever you are using to insulate (silicon dioxide)... With a semiconductor that has a higher bandgap (such as many III-V semiconductors, e.g. indium gallium arsenide), you can have a much higher clock speed.


I'm really interested in this, could you please explain more? My impression is that the heat from subthreshold current is what's limiting today's CPUs. How does a higher bandgap help you achieve a higher clockspeed for the same power density?


> For many years planes kept getting faster and faster, until one day they didn't

Planes are still getting faster and faster. Though now they are mainly unmanned.

The HTV-2 Falcon reached 13,201 mph on 22 April 2010.

NASA X-43 reached 6,600+ mph.


I don't know if you'd count these one-off experiments in the same league as making aircraft that you can actually turn a profit manufacturing and operating. That game seems to have dried up at about the size and speed of modern jetliners, given the issues we've had with operating supersonic commercial aircraft.

I'm sure somebody in a lab somewhere can make a single processor that's ridiculously fast compared to anything you can buy, but being able to manufacture usable, reliable CPUs in quantity and cheap enough to turn a profit on selling them is a whole 'nother ballgame.


I don't think that there is a way to go smaller than an atom. Most likely, there will be a shift in technique. My guess is quantum computing, but who knows.


Your third sentence contradicts your first. Quantum deals in the realm of atomic and subatomic length scales.


I don't think that quantum computers are made of subatomic components, nor are atoms split in the process.


How I learned to stop worrying and love the miniaturization:

https://www.youtube.com/watch?v=bKGhvKyjgLY

Seriously, watch it. Memristor-tech relies on this kind of miniaturization and can provide a speed boost in several areas in current architectures.

Secondly, having worked for semi: there's a lot of conservative force holding back development. We could have had current tech with far fewer worries than we have now if they didn't respond so allergically to everything that looks a little exotic in the CMOS process, like high-k dielectrics.


HP and the Memristor is a good PR story. Unfortunately there is not too much beef behind it.

The Memristor is actually the same as an RRAM element (Resistive Random Access Memory). Companies other than HP started working on it long before and are significantly ahead. For example, Micron recently presented a multi-gigabit prototype chip. But there is still a lot to be done. HP lacks the funds, manpower and manufacturing muscle to really get anywhere in this area.


> HP lacks the funds, manpower and manufacturing muscle to really get anywhere in this area.

They might lack the political will, but HP has ~$15B cash on its balance sheet, 350k employees, and tens of billions in fixed assets.


They have money, but not the semiconductor expertise, assets or experience of an Intel. See this explanation for why that matters: https://news.ycombinator.com/item?id=7922277


> Secondly, having worked for semi: there's a lot of conservative force holding back development. We could have had current tech with far fewer worries than we have now if they didn't respond so allergically to everything that looks a little exotic in the CMOS process, like high-k dielectrics.

Such as? Getting new materials into manufacturing is only the last step. Before that there has to be a significant benefit of doing it. And yes, that is usually benchmarked against risks and capex.


They respond allergically to everything that looks a little exotic because the costs of failure are colossal, and I would bet money that conservatism has allowed the industry to dodge more than a few bullets you haven't heard about. High-k just happens to be one of the things that worked out in the end.


It sounds like 7nm and 5nm are entirely feasible, but nobody is prepared to spend $10 billion on the fab until they're sure they've got (close to) the best solution.

Also, I feel like the whole industry is getting a bit lazy. The big foundries are all around 28nm and getting 100 percent utilization of their capacity. So long as their customers' competitors have no better foundry to go to, there is little incentive to press forward. There is the possibility of falling behind, but waiting just means increasing certainty in the path forward. Meanwhile, Intel...


Or, alternatively, the simple solutions have all been squeezed out and these are genuinely hard problems.


That will change if/when Intel gives up sticking to x86 only. Then anybody who has space and power constraints (hint: mobile) will be playing fabs off against each other.


Question, probably going to come across as dumb, but I'll ask it anyway ...

Is there a market for "huge" feature chips (eg. 100nm+)? Would it be worth making these older processes very cheap and widely available and growing the market that way?

(You can still build an early Athlon at 130 nm or a 486 at 800 nm).


The cost of making chips at 100nm+, 90nm, 45nm, etc. is almost the same. It actually gets cheaper with smaller sizes because you can fit more chips in the same number of waffles.

The problem described by the article is that 10nm is already so small that we are hitting the barrier of what's physically possible.


What (I think) I mean by the question is this:

Why can't I have a $1M machine sitting in the corner of my lab which turns out 800nm chips on demand? Can the technology which in 1990 was incredibly demanding now be turned into a product, thus vastly opening up the chip market, albeit for "obsolete" varieties of circuits.


Making chips requires a large, expensive fab plant. Using it to make something as obsolete as that would be more expensive than just making a couple generations old chips; perhaps 30-50 nanometers. But no matter what it makes, those plants are expensive still.


I think the point that the OP is missing is that silicon manufacturing is extremely complex, requiring thousands of steps, tons of water and electricity, and extremely pure materials. That's not something you can fit in a lab, let alone a machine within the lab.

To give you a sense of scale, I ride past Intel's research fab (Ronler Acres) in Oregon and the Hillsboro airport on the way to work. The fab is bigger than the airport, and even that fab does not contain all the equipment necessary to manufacture chips for public consumption. There are separate plants for package and testing.


And some very nasty chemicals.


Considering the investment concentrated in such facilities and the stringent requirements of clean rooms, the 9/11 attacks would have done much worse economic damage by targeting those facilities.


Are you including all the damage of the TSA and the wars that resulted from 9/11?


You still would have had the TSA and the wars, plus an ongoing economic impact on the cost of chip production. I'm not saying that there wasn't a lot of economic impact. I'm saying there could have been more.


Look at this ~6000nm Intel plant from the late 1970s. It was extremely high tech at the time, but nothing there seems inherently like it cannot be fully automated, simplified and shrunk down to my $1M fab machine:

http://www.youtube.com/watch?v=ll_-_ngu4Gg

(Starts about 7 mins in)


> Look at this ~6000nm Intel plant from the late 1970s. It was extremely high tech at the time, but nothing there seems inherently like it cannot be fully automated, simplified and shrunk down to my $1M fab machine.

You can get down much lower than $1 million if you're willing to live with a feature size of 10+ microns.

http://code.google.com/p/homecmos/

http://homecmos.drawersteak.com/wiki/Main_Page

http://diyhpl.us/wiki/homecmos/

The ghetto method is "point a projector through the top of a microscope" so that you don't have to bother with photomasks. Not counting the other steps.

irc.freenode.net #homecmos ... although they are taking a break for a while, so I also recommend ##hplusroadmap and #dlp3dprinting and #reprap for now.

Btw, there's a youtube feature for "start 7 minutes in" like this: https://youtube.com/watch?v=ll_-_ngu4Gg&t=7m


(I love this question. I have a Ph.D. in the making of semiconductor devices, and I once worked as a troubleshooter in a factory that was making transistors with a twenty-year-old process.)

The first fallacy that's tripping you up is marginal cost. Just because it's cheaper to buy an 800nm-process chip today than it was in the 1990s doesn't mean that it's cheaper to build the factory, employ the packaging engineers, or source the materials (let alone stuff all those things into a refrigerator-sized box). The finished parts are cheaper because the R&D, factories, processes, and HR procedures were bought and paid for in the 1990s, and those things are all still there, so long as a market is there. The workers are very happy to keep doing their jobs, and the marginal cost to keep them working is relatively low, particularly because the yield on a mature process can be really high.

The second fallacy is the physical-plant fallacy. You look at the factory and the machines and you think that's what it takes to make semiconductors. But if I gave you the keys to a shiny new Intel factory today, you would not succeed in making 80486 processors in a few weeks. Even if I gave you a new factory and its staff and the services of the world's leading experts in semiconductor devices and went back in time to arrange the delivery of a steady stream of raw materials, you would still not succeed in making working 80486 processors in a few weeks, although the Dream Team might manage to make some things that looked like working devices right up until you tried to turn them on... or until you tried to turn them on three weeks later.

The expensive part of manufacturing is the learning curve. Every one of those shiny machines has five hundred knobs, and every one of those knobs needs to be set correctly or the products won't work. Your experts can guess the approximate settings for everything, but the crucial final 5% needs to be dialed in by trial and error. You must exercise the factory, then correct for the mistakes.

That's expensive because the feedback is expensive. The difference between a broken part and a working part might take weeks to manifest, and it's literally microscopic, so you need an entire little team of highly trained QA scientists with thermal-cycling ovens and electron microscopes and Raman spectroscopes and modeling software and coffee in order to develop hypotheses about the problems with your process, hypotheses which must be tested by running more doomed wafers through that process.

(I've watched a few thousand people come within a hair of losing their jobs because we couldn't make this iteration converge fast enough.)

This is where economy of scale comes from: Practice. The Nth wafer coming out of a fab has high yield if and only if the (N-1)st wafer had high yield, so you have to bootstrap your yield up from zero one batch at a time. Your fab is only as valuable as the number of wafers it has made, or tried to make. The factory needs practice, and practice takes time, and time costs money.

---

So, here's how your refrigerator-sized fab is going to work. You'll take delivery and set it up. Unfortunately, shipping being what it is, parts will have slipped or gotten bent or stretched. Your humidity and temperature cycles will be different than they were back in Shenzhen. Your ambient dust level will be different. The batch of photoresist that you pour into your hopper will have been manufactured on a different week than the batch that the manufacturer used to calibrate the machine, and your sputtering targets will contain a different mixture of contaminants.

All of these things can probably be calibrated out – if the knobs are well-built enough to stay where you set them, and your environmental controls are comprehensive enough that the conditions remain constant, and you aren't forced to change suppliers, and you have the operational discipline to resist the urge to get blind drunk and start twiddling settings at random while sobbing. But how do you know which experiment to run, on your microscopically-flawed parts, in order to converge on working parts? You need to order the optional "electron microscope" kit, which ships in a slightly smaller box. The box next to that one will contain the materials scientist that you ordered. Hopefully they remembered to drill the air holes!


That was a great answer, thanks.


Agreed. Best answer I've seen on HN in ages. (25 years in semis here.)


It's a lot less than a million dollars. You can fab transistors at home with a bit of patience. A number of universities have done this as part of an undergraduate class.

It would be interesting to build a single-metal-layer process for 120mm wafers (DVD sized). Something like a 'makerbot' but for simple chips. The tricky bit is that you really want to use things like hydrofluoric acid, and getting one's hands on that stuff is time consuming. Something that would be possible with the liberties of the mid-20th century and the technology of the 21st century, but not the other way around.


I am working on a thought experiment for a holographic FPGA-type device that would be pretty much a DVD, and it would be programmed in a regular DVD burner.


FWIW, at USC in the early 80's I was in a lab that was doing (in part) holographic image convolvers. Basically trying to create holographic "lenses" that would effectively do an image convolution when light passed through them. It was a pretty remarkable concept but I don't recall it being all that successful. The reports should still be available from the EE department there.


I tried looking this up and didn't find much. Could you point me in the right direction?


You could do that, or you could get an FPGA that runs faster at a lower price per unit and probably holds more circuitry.


Yes, you can fit more chips on lower nm, but you spend lots of money on tooling and suffer from lower yields compared to high nm processes. The cost of developing and tooling a modern, competitive ASIC, if you need a relatively modern process, can be tens of millions of dollars (not counting the IP development itself, just bringing to production).


No way is 28nm cheaper than 32nm per chip. Even though the die size shrinks, the process is much more expensive.

Also, s/waffles/wafers/


No such thing as a dumb question. (My answer could be dumb, though)

I guess you would be limited by a couple of things:

You don't have scale. To produce cheap you need scale. Go ARM for cheap processors and emulate whatever circuitry you want. It will probably be cheaper for most applications.

Power consumption would be prohibitive.

Huge upfront costs for a market that is quickly fading away.


Beyond what the others have said, one of the reasons you wouldn't want to use older chip tech is power efficiency. In many cases newer chips are so much more power efficient that the power savings alone more than make up for the difference in cost over its lifetime.


Previous tech/node fabs often build things like motherboard chipsets and other things. 100nm-node machinery probably isn't used, since there is probably enough production capacity from more recent nodes that there isn't any point (as has been pointed out, silicon wafers aren't really much cheaper for the older nodes, so you might as well stamp more processors out with newer tech).


There are markets, yes, but they may not be very large ones or ones you'd think of. For example, the NSA has (as of a few years ago, anyway) a chip fab of its own using very large nm features. Why? To manufacture chips for legacy systems.


Sure, and you can find low volume prices here: http://cmp.imag.fr/products/ic/?p=prices


Yes, but for things like CMOS imagers and MEMs, not for regular semiconductors.


This question faces a tremendous amount of uncertainty. So much rides on whether the power/$ of EUV sources can be significantly boosted. No one knows when (or even whether) this will happen.

It's not even known whether we will reach 10nm any time soon, as the article asserts we will.

The industry's history has inculcated a huge amount of technological optimism, but someday that optimism will be misplaced. There is a fairly high chance that day is today.


A fairly high chance? Care to quantify that? Is it 50%? 10%? 1%?


Sure, I'd be happy try to quantify my views. What question would you like a quantifiable answer to?


Let's keep the size of the transistor in perspective. The human synapse is on the order of 20 to 40nm in diameter. I have to wonder if we will be able to do that much better than what Nature has already come up with.


Nature really has produced some amazing things, but they are generally not optimal, just good enough.


True. And Natural Selection never discovered how to extract energy by breaking apart the atom---that took an act of intelligence. Still, I suspect there may exist limits to how small a fundamental computational unit can be:

http://www.sciencedaily.com/releases/2002/11/021126203508.ht...



Oh yea?! so where does there exist a natural fusion reactor?!? Man: 1 Nature: 0

(... :P I think the point was that there are, as far as we know, no evolved organisms using fission. Not that there wasn't fission in nature prior to man.)


> a natural fusion reactor

https://en.wikipedia.org/wiki/The_sun


I thought there were bacteria that used nuclear radiation to sustain themselves. They aren't doing fission...

http://www.sciencedaily.com/releases/2006/10/061019192814.ht...


Those are really just feeding on the reactive chemical species produced by the ionizing radiation from nuclear decays in the rock. Pretty nifty, and I hadn't heard of them before.

The other possible answer to this that I know of are the fungi who might be able to derive metabolic energy from gamma rays--- though last I checked it wasn't entirely certain that's what they're doing.


If you doubled the number of transistors in the same space, then wouldn't that nearly double (quadruple?) the heat output? How would someone even keep these things at a reasonable temperature at 7nm?


No, because the heat generated by the transistors is a function of their size.


My limited understanding is that heat is related to the square of the voltage. And that by reducing the size of a transistor, you can lower the voltage. I don't think you can lower the voltage by quite as much as the reduction in size, so there must be a point when you can't just pack more in. Though gate design or choice of materials can reduce the voltage needed as well. They can also make the chip smarter about switching off parts that aren't needed at a given point in time to keep overall heat down.
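
That intuition lines up with the usual first-order CMOS power model (a textbook sketch I'm adding here, with α as the switching-activity factor; not something from the parent comment):

    P_{\mathrm{dyn}} \approx \alpha\, C\, V_{DD}^{2}\, f,
    \qquad P_{\mathrm{leak}} = I_{\mathrm{leak}}\, V_{DD}

Shrinking feature size packs in more transistors while shrinking each one's C, so total power per mm^2 stays manageable only if V_DD (and leakage) keeps falling too. When voltage scaling stalls, as it largely has, the extra transistors show up as "dark silicon": you can build them, but you can't switch them all at once.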


I can't wait for Haswell-like processors to be so small they can be powered by kinetic side-effects (walking, etc.).


Here's an article that disproves Betteridge's law of headlines.


Tl;dr. Yes.



