Fastest-ever logic gates could make computers a million times faster (newatlas.com)
230 points by DamnInteresting on May 12, 2022 | 104 comments



Photonic computing, to accelerate both logic gates and data transfer, is an incredibly broad and exciting field. While a lot of the promise is still in the lab, real advances are currently being commercialized.

https://spie.org/news/photonics-focus/marapr-2022/harnessing...

https://www.nextplatform.com/2022/03/17/luminous-shines-a-li...


I always had the idea that before jumping to quantum, it would make sense to use photons for as many components as possible instead of the relatively slower, heavier, and much hotter electron.

I don't know enough about computing hardware to know how feasible it is to refactor each component this way, but it is indeed exciting. You could almost imagine such a "photon computer" as a computer which uses little to no energy (at least for the actual computing part), is extremely lightweight due to lightweight components, and never gets hot!


It's a misconception that electrons are slower than photons. In a vacuum? Maybe. But you need a medium to use photons and in fiber photons go at 0.5-0.75c.

Electronic signals in copper propagate at somewhere from 0.66-0.8c.

The big benefit of photons is that they don't experience electrical interference, so you can often get a lot more bandwidth out of an arbitrarily sized photonic medium than an electronic one.

The actual latency of photons vs electrons is generally not relevant.
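
A quick back-of-envelope sketch in Python (the 10 cm trace length is an assumed example; the velocity factors are from the figures above):

    C = 299_792_458  # speed of light in vacuum, m/s

    def latency_ns(distance_m, velocity_factor):
        """One-way propagation time in nanoseconds."""
        return distance_m / (velocity_factor * C) * 1e9

    trace_m = 0.10  # assumed 10 cm board-level trace
    print(f"fiber  @ 0.67c: {latency_ns(trace_m, 0.67):.3f} ns")
    print(f"copper @ 0.70c: {latency_ns(trace_m, 0.70):.3f} ns")
    # Both are ~0.5 ns and differ by only ~20 ps, which is why raw propagation
    # speed is rarely the deciding factor between photons and electrons.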


Just to unpack "electronic signals", what propagates is the disturbance ("signal"), not the electrons themselves. Like waves propagating at the beach, where water only moves back and forth a little, and slower.


And specifically what causes the electrons to move each other is the electromagnetic field. The boson of the electromagnetic field is the photon, so the signal is being carried from one electron to the next by photons. (Although the disturbance itself as a whole is not a photon; I think it's often described as a quasiparticle called a plasmon.)


Extremely relevant Veritasium video (skip the first two minutes):

https://www.youtube.com/watch?v=oI_X2cMHNe0


> But you need a medium to use photons

Since the speed of light in a vacuum is the prohibitive speed limit of Relativity, I always felt we should develop a medium in which light moved faster than it did in a vacuum, and I swear I've read an article about such a material which was referred to in the article as "ruby," and the images of it reminded me of those glass things at the Fortress of Solitude in Superman (1978). If such a material exists, it would not only make photon computers very fast, but make it possible to violate causality without violating Relativity.


Any idea whether using hollow-core fiber instead of "traditional" fiber changes the calculus?


On a larger scale, the Meta Quest 2 uses a USB cable to plug into the computer so you can play VR games on your PC. The max length of the cable is something like 3 feet over copper. The link cable they sell switches from electric signals over copper to light over fiber, and then back to copper to get around the length limitations.

Not really the same thing but still cool!


It's interesting that they did that. One of the main reasons why you only see fiber optics in enterprise applications is that if you bend the cables too much, they're ruined. SFP+ hardware is cheaper and has lower latency than 10GBASE-T hardware, so in a vacuum it would be the obvious choice for everything.

I'm not sure if they just decided the conventional wisdom that consumers would ruin the cable was wrong, or if they figured out how to make them idiot-proof.


SFP isn't copper vs fiber. It's just the standard for a modular plug on network hardware.

They have 25GbE twinax SFP28s that, as far as I'm aware, have essentially identical performance specs to optical SFP28s with fiber. The fiber is thinner and can go further, but it is also more fragile and limited on bends. Latency is almost entirely driven by the switch and the NIC.

Photons in fiber do not move meaningfully faster than signals in copper, and the speed of light over short datacenter runs is not the limiting factor in latency. Any meaningful processing will happen in electronics, and converting from copper to fiber introduces more latency.

There's a good paper on it here: https://www.commscope.com/globalassets/digizuite/2799-latenc...

TL;DR: coax is actually faster (0.77c) than optical fiber (0.67c).
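
Rough numbers for a datacenter-scale run, using the velocity factors from that paper (the 100 m length is just an assumed example):

    C = 299_792_458  # speed of light in vacuum, m/s
    run_m = 100      # assumed 100 m run

    coax_us  = run_m / (0.77 * C) * 1e6
    fiber_us = run_m / (0.67 * C) * 1e6
    print(f"coax: {coax_us:.3f} us  fiber: {fiber_us:.3f} us  delta: {(fiber_us - coax_us) * 1000:.0f} ns")
    # ~0.433 us vs ~0.498 us: coax "wins" by roughly 65 ns per 100 m, which is
    # small next to switch, NIC and PHY-encoding latencies.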


> Photons in fiber do not move meaningfully faster than signals in copper, and the speed of light over short datacenter runs is not the limiting factor in latency. Any meaningful processing will happen in electronics, and converting from copper to fiber introduces more latency.

My understanding is that it has nothing to do with the speed of light for short-run cables. 10GBase-T uses block encoding that adds a couple microseconds of latency (which dwarfs the impact of the speed of light at short distances) and fiber (including SFP+) doesn't need to.

Coax/twinax may not need the encoding schemes that twisted pair does, I'm not sure.
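
A rough comparison of those two effects (the ~2 us figure is the "couple microseconds" mentioned above; the 5 m cable and 0.7c velocity factor are assumptions for illustration):

    C = 299_792_458    # m/s
    encoding_us = 2.0  # ballpark 10GBase-T block-encoding latency, per the comment above
    cable_m = 5        # assumed short patch-cable run
    propagation_us = cable_m / (0.7 * C) * 1e6

    print(f"encoding: {encoding_us:.2f} us  propagation over {cable_m} m: {propagation_us:.3f} us")
    # The encoding latency dwarfs the propagation delay at patch-cable lengths,
    # which is why SFP+/DAC links beat 10GBASE-T on latency regardless of medium.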


I have a 30m HDMI run. I bought several cables that were certified for HDMI 2.0b, but all of them were pretty unreliable at that length. After returning a bunch I ended up buying a 50m (that's the shortest they had that would work) optical HDMI cable and it's been smooth sailing since.


My current gig has a similar problem, with a piece of hardware that operates over USB. Cable lengths over 6m or so introduce sufficient voltage and current drop to cause intermittent issues with the sensor (which are really tough to troubleshoot if you don't already know about this).

The solution has been to use fiber optic hubs with the same behavior you describe - copper cables on either end to connect the hardware to the computing unit and a long run of fiber in the middle to make up the required length.

Pretty cool stuff!


It's probably not voltage drop that's making your device operate incorrectly. It's most likely the added capacitance and/or inductance of the wires skewing the high-frequency data signal out of spec.

Entirely possible it's voltage drop, I don't know the exact hardware, but at 6 meters, it's probably signal rather than power.


Since that cable is USB-C only, and I don't have USB-C ports on my gaming PC, I have a 5m (~16ft) USB-A to C cable. Pure copper all the way, just much heavier than a regular weight USB cable.

I have no issues with data rate or packet loss at all.


Any decent quality standard copper USB-C cable works fine, typically 15ft


There are multiple data rates it can run at; the highest has a reduced maximum length over copper.


By "lightweight", do you mean the weight of photons?


you haven't noticed that copper wires sag under heavy current?



For anyone else wondering, the "1000000x faster" claim is based on a theoretical clock speed upper bound of 1PHz https://newatlas.com/electronics/absolute-quantum-speed-limi...

> The team says that other technological hurdles would arise long before optoelectronic devices reach the realm of PHz.


Not only that, modern CPUs have transistors that switch in 0.1 ns. So even if they got to that speed, it would be 100,000x, not 1,000,000x.

And, if they only got to switching in 10 femtoseconds, it would be 10,000x, not 1,000,000x.

You might ask, what's two orders of magnitude between friends? But a job that takes a minute is quite a lot different from one that takes going on two hours.
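
The arithmetic, spelled out (the 0.1 ns figure is from the comment above):

    current_switch_s = 0.1e-9  # ~0.1 ns transistor switching time

    for label, t in [("1 fs", 1e-15), ("10 fs", 10e-15)]:
        print(f"{label} switching -> {current_switch_s / t:,.0f}x faster")
    # 1 fs  -> 100,000x (not 1,000,000x)
    # 10 fs -> 10,000x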


Even 0.1ns is way slow. A modern silicon cmos gate will switch under 10ps, which is how we can fit 25+ gates in a single cycle at >3GHz. Everyone should remember that cpu frequency is not the same as the frequency a single gate can switch. Also keep in mind we are mostly wire limited anyway, as resistivity of copper at <50nm line widths is quite unlike its bulk resistivity, and scales super-linearly. This prevents us from further shrinking wires at all.
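
A quick sanity check of the "25+ gates per cycle" figure (the 10 ps gate delay and 3 GHz clock are the numbers above):

    freq_hz = 3e9          # clock frequency
    gate_delay_s = 10e-12  # ~10 ps per gate

    cycle_s = 1 / freq_hz
    print(f"cycle: {cycle_s * 1e12:.0f} ps -> ~{cycle_s / gate_delay_s:.0f} gate delays per cycle")
    # ~333 ps per cycle / 10 ps per gate is about 33 gate delays, so the speed a
    # single gate can switch at is far from the whole story for CPU clocks.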


> resistivity of copper

Could we use a superconductor here instead of copper in order to achieve further wire shrinkage? Eg, if it were to be operated in a datacentre where it's plausible to power the cooler needed to keep it cold. The amount of superconducting material would be quite small


Microfabrication is quite an art. Being able to image pieces of copper down to such ludicrously small sizes with techniques like EUV and optical proximity correction is already quite advanced. Doing it for high temperature superconductors like YBCO is definitely not trivial. I don't think superconductors are a good idea due to the reduced fidelity, you might as well just use normal conductors and spread them out for better cooling


What about silver instead of copper?


Silver’s electromigration is significantly worse than copper, leading to reduced reliability. Plus the fact that silver is ~200x more expensive doesn’t help.


> But a job that takes a minute is quite a lot different from one that takes going on two hours

Though in your terms the promise would be to turn a job that takes going on two hours into one that takes about a second. Feasibility aside, one would not cry over those two orders of magnitude that "could not make it".


You can multiplex optical signals, though I'm not sure exactly how that would be harnessed. If nothing else, at least a few times more possible throughput?


You can multiplex electrical signals too. This is how some cable companies provided both TV and internet in the past.


> > The team says that other technological hurdles would arise long before optoelectronic devices reach the realm of PHz.

Yup... just take memory access: even if it were "instant", RAM is so "far away" physically that the transmission delay will be many multiples of the clock. Currently this is already a pain for CPU manufacturers to handle, but at least with caches you don't run out of data to work on while waiting for something new from RAM.


You can read the paper here: https://www.nature.com/articles/s41467-022-29252-1

I can see this technology being made into a super computer type setup one day, but as far as home computing, I have my doubts.


If speed were held back by gate time, then sure, but I'd have thought that propagation delays between gates would be kind of relevant.

Making the clock 1,000,000 times faster would mean the silicon would have to be 1,000,000 times shorter (in each dimension), so I guess such designs could support some super-high clock rates for specialist applications with small gate arrays, but for general-purpose computing, hmm, I'm not so sure.


Propagation delay isn't purely about distance: it's about the time needed for the output to settle in reaction to inputs. That includes capacitive delays: containers of electrons having to fill up.

Say we are talking about some gate with a 250 picosecond propagation delay.

But light can travel 7.5 cm in that time; way, way larger than the chip on which that gate is found, let alone that gate itself. That tells you that the bottleneck in the gate isn't caused by the input-to-output distance, which is tiny.
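
For the numbers (250 ps is the hypothetical gate delay above):

    C = 299_792_458   # m/s
    t_s = 250e-12     # 250 ps propagation delay

    print(f"light travels {C * t_s * 100:.1f} cm in that time")  # ~7.5 cm
    # That is vastly larger than any single gate, so the delay is dominated by
    # charging capacitance (settling), not by the input-to-output distance.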


Yeah, the article focuses on computing, but I think it could enable totally new electronic devices: frequency/phase-controllable LEDs, light-field displays and cameras, ultra-fast IR-based WiFi, etc.


I could see this potentially allowing VLB interferometry for optical frequencies, allowing even higher resolutions than the Event Horizon Telescope.


I think that's fast enough for gravity gradiometry on a chip.


That is, by just putting a clock on each corner and counting their relative ticks.


Think pipelining ...


If I understood the logic correctly: thinking in terms of transistors, they had a laser on the gate and used that to control an electric charge.

> To reach these extreme speeds, the team made junctions consisting of a graphene wire connecting two gold electrodes. When the graphene was zapped with synchronized pairs of laser pulses, electrons in the material were excited, sending them zipping off towards one of the electrodes, generating an electrical current.

This is not what you typically call a "logic gate", where the control and the output have the same type of energy (either both electric or both photonic), this is more like a fast light sensor?

There are plenty of good applications for fast light sensors, why this article tries to spin it into a logic gate (which it is not) is incomprehensible to me.


From wikipedia:

> A logic gate is an idealized or physical device implementing a Boolean function, a logical operation performed on one or more binary inputs that produces a single binary output. Depending on the context, the term may refer to an ideal logic gate, one that has for instance zero rise time and unlimited fan-out, or it may refer to a non-ideal physical device

As long as it implements a boolean function, which this clearly does, it sure sounds like a logic gate. What difference does it make whether the control and output have the same form of energy when the real thing that matters is the information it captures?


> What difference does it make whether the control and output have the same form of energy when the real thing that matters is the information it captures?

A logic gate itself doesn't do much useful computation, you have to chain them together.

But how do you chain them, if they use a laser beam as input and an electrical charge as output? You have to use the electrical charge to drive a laser... which is much slower and more energy intensive than a classical logic gate in a modern integrated circuit.


> What difference does it make whether the control and output have the same form of energy when the real thing that matters is the information it captures?

Just thinking out loud, but it might break common assumptions about being able to (easily) compose individual gates into a more complicated logic function.


Scalability, for one. A modern PC CPU has ~10^10 transistors forming ~10^9 logic gates that work because you can chain them easily.


Interesting to think how many of these 10^6-faster gates would be needed to do the work of 10^9 ordinary ones at the same speed. Say you take the 8086, at about 30K transistors and 5 MHz, and make it a million times faster. A photonic 8086 would apparently run blindingly fast compared to anything available now.

Serial speed is always a win, no questions asked, I guess.

Obviously all of that is oversimplified and doesn't consider the other components of any system that would be built (but hey, it's not like any of this is happening tomorrow anyway).
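
In numbers, using the ballpark figures above (nothing here is a measurement):

    transistors_8086 = 30_000   # rough transistor count of the original 8086
    clock_8086_hz = 5e6         # 5 MHz

    photonic_clock_hz = clock_8086_hz * 1e6   # the hypothetical "million times faster"
    print(f"a photonic 8086 would clock at ~{photonic_clock_hz / 1e12:.0f} THz")
    # ~5 THz of purely serial throughput would outrun any current CPU on
    # single-threaded work, despite having ~30k transistors instead of billions.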


The issue is not the 30k optical transistor, it's the 60k extremely precise laser pulse generators you need to drive them


Can't wait. Finally an end to "Rails is slow".


Don’t worry, we will still find ways to make the software slow.


Imagine running two Electron apps at the same time!


Software bloat is an ideal gas. It always expands to fill the volume of computational speed.


Anecdotally, when the switch from Windows 98 to XP happened I missed it.

I went from an old 386sx-33 to a Pentium 4 and brought my software along with me. The previous owner had borked the hard drive and gave the box to me for free.

I got a hard drive and installed DOS on it (which was the only OS that I had at the time) and tried to play some games.

That was a bewildering experience. Almost nothing worked, I had no drivers and no way to get them, but I did find a few games that would load and ran them. Text games were ridiculously snappy, it felt like I would press the enter key and the next section would already be up before my finger left the key.

But the real mindblower was graphical games. I got (I think) Commander Keen or some other graphic-based platformer to load and it would start the level and everything moved at super-high speed. If I pressed an arrow key I was instantly as far in that direction as the character could move. When I pressed Jump the character would twitch, completing the jump instruction before the screen could fully update.

The new system running a barebones OS was so fast that the software could not operate normally. Now computers are scores of times faster than that and yet seem so much slower because of both software bloat (bad) and decoupling software clocks from processor clocks (good).


Oh there will still be many ways to make fun of rails, don’t you worry


I am excited for this to be used to speed up Microsoft Teams.


> To reach these extreme speeds, the team made junctions consisting of a graphene wire connecting two gold electrodes. When the graphene was zapped with synchronized pairs of laser pulses, electrons in the material were excited, sending them zipping off towards one of the electrodes, generating an electrical current.

> “It will probably be a very long time before this technique can be used in a computer chip..."

So this is interesting, but largely irrelevant for most HN folks. We'll be retired before it is productized.


Speak for yourself... in the great Amurica, most can't afford to retire.


This is not a logic gate. The inputs are not even the same physics as the output. Light in, charge out. In addition, the light uses phase relationship to change the output. So it's an interesting device, but a logic gate it is not.


At a size on the order of 1um, it's going to be a long, long while before this becomes a commercially viable competitor to bulk cmos. Doesn't matter much for a CPU if your transistor can switch 1000000X faster if you can only have 1/1000th of them on a die. Your speed would ultimately be limited by the physical wire delays anyways. Not to mention that it's using "exotic" process steps which means capacity is, at minimum, decades away from being meaningful.

Don't get me wrong, the research is cool, but it's not going to make "computers a million times faster".


What if it ends up in a USB scenario--fewer wires, but running at a higher speed? 4-8x smaller word size to get +10e6 sounds like a good trade. Just think, Z80s & 6502s coming back into fashion. This time, turbo-charged!

Chuck Moore was kind of on that beat already with his GreenArrays chips.

It will definitely be a while, but maybe not such a long one.


What about ASIC built with this for breaking crypto?


Key sizes are generally chosen so that brute force is infeasible even with enormous speed advancements. You cannot increment a counter to 2^256; there isn't enough energy in our solar system. So you cannot brute-force 256-bit symmetric key encryption using traditional computers. Not at any speed.


yeah; to get a sense of how big 2^256 is: assuming you can increment a counter at 1PHz rate, it would take 91 million years to iterate over 2^256 values.


More like 10^55 years, actually!
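
The arithmetic behind that correction (1 PHz is the optimistic clock from upthread):

    rate_hz = 1e15                        # 1 PHz counter increments per second
    seconds = 2**256 / rate_hz
    years = seconds / (365.25 * 24 * 3600)
    print(f"{years:.2e} years")           # ~3.7e54, i.e. on the order of 10^55 years
    # "91 million years" was off by dozens of orders of magnitude; key sizes are
    # chosen so that no clock-speed improvement makes brute force feasible.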


You'd still probably need multiple universes full of these faster chips. Numbers in cryptography are terrifyingly huge.


I'm waiting for the experts to chime in and explain why this is in fact not going to happen, before I even bite on that title.


> I'm waiting for the experts to chime in and explain why this is in fact not going to happen, before I even bite on that title.

As 01100011 points out (https://news.ycombinator.com/item?id=31356408), the article itself already does that:

> It will probably be a very long time before this technique can be used in a computer chip ….



This seems analogous to the yearly battery breakthrough clickbait story promising 1 second charge times and 999 years of battery life if only a theoretical process is ever viable at a reasonable price.


Those batteries do exist but they can only be charged with 99% efficient solar panels.


> yearly battery breakthrough clickbait story

The frequency of those stories is much greater than yearly.


Like those radioactive diamond batteries...


So memory access will now be 1,000,000,000 times slower than register access.

Which implies a maximum speedup of what… 10%?
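
An Amdahl's-law style sketch of that point; the fractions of time spent in compute are assumptions for illustration, and only the compute part gets the 1,000,000x:

    def overall_speedup(compute_fraction, compute_speedup):
        """Amdahl's law: only the compute fraction benefits from the faster gates."""
        return 1 / ((1 - compute_fraction) + compute_fraction / compute_speedup)

    for f in (0.5, 0.9, 0.99):
        print(f"{f:.0%} of time in compute -> {overall_speedup(f, 1e6):.0f}x overall")
    # 50% -> 2x, 90% -> 10x, 99% -> 100x: nowhere near a million unless memory
    # latency improves by a similar factor.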


I can remember carbon nanotubes and graphene being mentioned a couple of decades ago in nanotechnology lectures by an amazing professor. I was excited to live in a different future back then. But back in reality, nowadays I use a Ryzen 3950X to program 28 nm CMOS FPGAs. I am still curious what manufacturing technology can replace silicon CMOS for worldwide electronics manufacturing.


Awesome - this is a good example of a recent post I saw on Reddit: why should you go to engineering school? Because of improvements like this. But they're going to have to figure out how to get faster memory (maybe non-von Neumann) to really make this pay off.


But most engineering programs aren't as cool as this.


Didn't even read it, responding to the headline alone. No they can't.

Will edit after reading more about why they can't. Which I stand by, as the blockchain is my witness, they just can't.

EDIT: I shouldn't have bothered checking: yes, a petahertz is a million times a gigahertz, but that's the only thing they've got to ride on. So the size of the chip at that point comes into play, and it would have to be 3D, so would it even have a dimension left for the laser? Well, I think a terahertz would be possible, for sure. But later, like in the fifties. After researching other questions and finding answers to this question in a roundabout way.


>Logic gates don’t work instantaneously though – there’s a delay on the order of nanoseconds as they process the inputs. That’s plenty fast enough for modern computers, but there’s always room for improvement. And now the Rochester team’s new logic gates blow them out of the water, processing information in mere femtoseconds, which are a million times shorter than nanoseconds.

This is a bit misleading, no? Sure, the signal does take time on the order of nanoseconds to pass through entire CPU units, but at the individual gate level aren't we talking about times in the picosecond range?


Remember back in the mid-90s when Intel was developing "Voxels" to use IR to communicate between layers? The little pyramid voxels allowed for faster communication with less engineering... (I can't quite recall -- this was a conversation I had in 1997 on a hike with a then-CPU guy at Intel... this was when I first learned of a 64-core lab rat they were working on...)


I'm actually more excited that they found a use for very small segments of graphene, which is needed if we're ever going to produce higher-quality, unbroken strands at scale.


Other good news is, if photonics is a viable path forward from traditional CMOS, in the distant future we can have "hardware" and "lightware" :)


Sigh. The ignorance.

The speed of computers IS NOT LIMITED BY "gate" or "transistor" speed; the speed is primarily limited by transmission line delays across the die and often off the die. You can only improve this by taking less die area or avoiding off-die communication as much as you can. The latter is the basis of the Apple Silicon speed.


If I remember correctly, quantum computers went from theory to prototype in a decade. What are the barriers to modelling a core that uses these in software and then effectively printing them, and adding a conventional computer interface at the edge?


Quantum computers were theorized in 1980; it took 18 years before the first 2-qubit quantum computer was built, and there still isn't anything usable after 40 years.


This seems very different from the view I learned in the 90s, because of Dan Simon's discussion of using a QC for factorization, and then by 2000 there were single-qubit computers. Sure, it may have been theorized in the 80s, but execution is key. This optical computer was demonstrated, and is well beyond the theoretical stage. Also, our ability to prototype circuits of any kind is ridiculously more powerful than 80s tech.

If it's economically viable, I'd bet we will see it in 15 years. There are libraries for quantum algorithms in multiple languages, and I remember trying to learn them in Haskell pre-2010, with the assumption that by the time I wrapped my head around it a decade or two later, there would be computers to run it on. I gave up on that, but from an investor perspective, a game-changing improvement in classical compute tech is worth considering exposure to.

Are you seriously confident this optical compute model, if economical, is 30+ years away?


So all we need now is a million times slower Javascript framework to compensate. :)


"Optimization anywhere besides the bottleneck is an illusion"


I often hear this, along with the other one about 'premature optimisation being evil' or such.

While it is very true that spending a whole heap of time optimising something that never needs it is a huge waste of time, spending the extra time and effort to use an optimised data structure when you have a high confidence that something will actually grow is time well spent up front.

In some domains, like financial trading systems, using a linear list and hoping it stays small or all fits in the cache is simply naive. A mature developer would never say 'premature optimisation is wrong' and laugh at people who waste a lot of time optimising instead of focussing on functionality. A very experienced developer would stop and look at the problem at hand and make an educated guess whether to optimise at this stage or not.
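
A toy illustration of that kind of up-front choice (the sizes and names are made up for the example):

    import time

    n = 200_000
    order_ids = list(range(n))        # the "linear list we hope stays small"
    order_ids_set = set(order_ids)    # the structure chosen up front for growth
    probes = range(n - 1000, n)       # 1000 membership checks near the worst case

    t0 = time.perf_counter()
    _ = sum(1 for p in probes if p in order_ids)       # O(n) scan per lookup
    list_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    _ = sum(1 for p in probes if p in order_ids_set)   # O(1) average per lookup
    set_s = time.perf_counter() - t0

    print(f"list: {list_s:.4f}s  set: {set_s:.6f}s")
    # The right structure costs nothing extra to write now and avoids a painful
    # rewrite once the "small" collection grows.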


Do scale testing across all the parameters that can grow. Profile and see where the bottlenecks occur. Whack and repeat until your benchmarks are acceptable for whatever you're anticipating.


The reaction time of the gold-graphene-gold gate is wonderful. How long does it take to set the phase of the lasers?


Are these going to be larger than silicon ones? If so is the increased speed going to make up for the decreased density?


What's the most promising startup in this field? Whatever happened to Lightmatter?


Headline in 20 years:

JavaScript UI library that is compiled into another JavaScript UI library and used in almost all desktop applications is now for some reason TWO million times slower than native desktop widgets. Here's why you should convert your native application to it anyway!


Nothing wrong with using JavaScript in the UI; is it that much slower? I mean, GNOME uses it and I can bring up the JS console on my OS, and it does not seem to make any performance difference compared to something written in C++ etc.


I should think JS itself in the UI is "fine" if it's just JS we are talking about and not the DOM or other "web" technologies (a la Electron and co). After all, V8 is faster than the Python interpreter, and quite a lot of UIs are written in Python without it being a bottleneck: the interpreted language is just a layer of glue pulling together various APIs and creating some sort of pipeline, and none of the actual performance-sensitive stuff is done in it either way. The exceptions are when you're calculating things or sorting the contents of a widget in Python or JS, where inefficiencies in your code and in the language begin to show - but those aren't strictly required to be part of your UI markup.


You are deluded if you think that. Even with DEs made with Mutter, such as Budgie, the perf and smoothness difference between the first and the second is very noticeable.

I would like GNOME if it supported Guile as the scripting language (now that Guile 3 has a JIT) as an alternative to GJS.


What do you mean? Not sure what you're saying, but I tried KDE and that was a slideshow... GNOME is smooth and I use it daily, so yes, JS is fine; if it's used in GNOME, that proves it.


Great, so soon it will take less time resolve the dependencies between our 5 billion obsolete, backdoored and crypto-stealing node dependencies! Perhaps we can use that power to create a new build tool because there aren't enough! /s


I was going to say the same. Working in the software industry has made me so cynical about the nature of our craft, that every news of hardware improvement immediately makes me wonder how exactly we are going to squander it.


Just don't use dependencies; write the stuff yourself.


In mice.


computation is just one technique to solve problems. we should also invest in our god-talkers, who may be able to use divination or offerings to accomplish the same goals.


Can you point me to a reliable God talker


So Wall Street can pillage us at an even greater rate.


I've yet to see a high-frequency trade that can outrun this (it's a pipe bomb)



