Photonic computing, which uses light to accelerate both logic gates and data transfer, is a broad and exciting field. While much of the promise is still in the lab, real advances are being commercialized.
I've always had the idea that, before jumping to quantum, it would make sense to use photons for as many components as possible in place of the slower, heavier, and much hotter electron.
I don't know enough about computing hardware to know how feasible it is to refactor each component this way, but it is exciting. You could almost imagine such a "photon computer" as one that uses little to no energy (at least for the actual computing part), is extremely lightweight thanks to its lightweight components, and never gets hot!
Not only that: modern CPUs have transistors that switch in about 0.1 ns. So even if these gates switched in 1 fs, the speedup would be 100,000x, not 1,000,000x.
And, if they only got to switching in 10 femtoseconds, it would be 10,000x, not 1,000,000x.
You might ask, what's two orders of magnitude between friends? But a job that takes a minute is quite a lot different from one that takes going on two hours.
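To make that arithmetic concrete (the ~0.1 ns figure for current transistor switching is the parent comment's assumption, which I'm reusing here):

```python
# Rough speedup arithmetic: current transistor switch time vs. optical gates.
current_switch_s = 0.1e-9        # ~0.1 ns, today's transistors (assumed)

print(current_switch_s / 1e-15)  # 1 fs optical gate  -> ~100,000x
print(current_switch_s / 10e-15) # 10 fs optical gate -> ~10,000x

# What two orders of magnitude mean in practice: a 1-minute job,
# made 100x slower, takes about 1.7 hours ("going on two hours").
print(60 * 100 / 3600)           # hours, ~1.67
```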
> The team says that other technological hurdles would arise long before optoelectronic devices reach the realm of PHz.
Yup... just consider memory access: even if RAM responded "instantly", it is so "far away" (physically) that the transmission delay alone would be many multiples of the clock period. This is already a pain for CPU manufacturers to handle, but at least with caches you don't run out of data to compute on while waiting for something new from RAM.
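A rough sketch of that flight-time problem (the 10 cm CPU-to-RAM distance is my own assumption, and I'm using vacuum light speed; real signals in traces are slower):

```python
c = 3e8            # m/s, speed of light in vacuum
distance_m = 0.10  # assume ~10 cm from CPU to a DIMM, one way

one_way_s = distance_m / c  # ~0.33 ns of pure flight time

for clock_hz in (5e9, 1e15):  # ~5 GHz today vs. a hypothetical 1 PHz
    cycles = one_way_s * clock_hz
    print(f"{clock_hz:.0e} Hz: ~{cycles:,.0f} cycles spent in flight, one way")
```

At 5 GHz the flight time is only a couple of cycles (DRAM access time dominates instead), but at 1 PHz the flight time alone would be over 300,000 cycles.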
If speed were held back by gate time, then sure, but I'd have thought that propagation delays between gates would be quite relevant.
Making the clock 1,000,000 times faster would mean the signal paths would have to be 1,000,000 times shorter (in each dimension), so I guess such designs could support some super-high clock rates for specialist applications with small gate arrays, but for general-purpose computing, hmm, I'm not so sure.
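A sketch of that scaling, generously assuming signals travel at full vacuum light speed and must cross the logic within one cycle:

```python
c = 3e8  # m/s, speed of light in vacuum

for clock_hz in (5e9, 1e12, 1e15):
    reach_m = c / clock_hz  # distance light covers in one clock period
    print(f"{clock_hz:.0e} Hz: one-cycle reach = {reach_m * 1e6:.3f} um")
```

At 1 PHz the one-cycle reach is only ~0.3 µm, smaller than the ~1 µm graphene junction itself.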
Propagation delay isn't purely about distance: it's about the time needed for the output to settle in response to the inputs. That includes capacitive delays: capacitances, effectively containers of electrons, having to charge up.
Say we are talking about some gate with a 250 picosecond propagation delay.
But light can travel 7.5 cm in that time; way, way larger than the chip on which that gate is found, let alone that gate itself. That tells you that the bottleneck in the gate isn't caused by the input-to-output distance, which is tiny.
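Checking that arithmetic:

```python
c = 3e8      # m/s, light speed in vacuum
t = 250e-12  # a 250 ps gate propagation delay

print(c * t * 100)  # distance light covers in that time, in cm: ~7.5
```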
Yeah, the article focuses on computing, but I think this could enable totally new electronic devices: frequency- and phase-controllable LEDs, light-field displays and cameras, ultra-fast IR-based WiFi, etc.
If I understood the logic correctly, thinking in terms of transistors: they shine a laser on the gate and use that to control an electric charge.
> To reach these extreme speeds, the team made junctions consisting of a graphene wire connecting two gold electrodes. When the graphene was zapped with synchronized pairs of laser pulses, electrons in the material were excited, sending them zipping off towards one of the electrodes, generating an electrical current.
This is not what you typically call a "logic gate", where the control and the output carry the same type of energy (either both electric or both photonic); isn't this more like a fast light sensor?
There are plenty of good applications for fast light sensors, why this article tries to spin it into a logic gate (which it is not) is incomprehensible to me.
> A logic gate is an idealized or physical device implementing a Boolean function, a logical operation performed on one or more binary inputs that produces a single binary output. Depending on the context, the term may refer to an ideal logic gate, one that has for instance zero rise time and unlimited fan-out, or it may refer to a non-ideal physical device
As long as it implements a Boolean function, which this clearly does, it sure sounds like a logic gate. What difference does it make whether the control and output have the same form of energy, when what really matters is the information being carried?
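Purely as an illustration of that point (this is a hypothetical truth table, not a claim about the actual device): model the inputs as "laser pulse present or absent" and the output as "net current or none", and the energy domains drop out entirely.

```python
def device(pulse_a: bool, pulse_b: bool) -> bool:
    """Hypothetical behavior: current flows only when both pulses arrive."""
    return bool(pulse_a and pulse_b)

# The mapping is a Boolean function (here, AND) regardless of whether the
# inputs are photons and the output is charge.
table = [device(a, b) for a in (False, True) for b in (False, True)]
print(table)  # [False, False, False, True]
```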
> To reach these extreme speeds, the team made junctions consisting of a graphene wire connecting two gold electrodes. When the graphene was zapped with synchronized pairs of laser pulses, electrons in the material were excited, sending them zipping off towards one of the electrodes, generating an electrical current.
> “It will probably be a very long time before this technique can be used in a computer chip..."
So this is interesting, but largely irrelevant for most HN folks. We'll be retired before it is productized.
This is not a logic gate. The inputs aren't even in the same physical domain as the output: light in, charge out. In addition, the device relies on the phase relationship of the light pulses to change the output. So it's an interesting device, but a logic gate it is not.
At a size on the order of 1 µm, it's going to be a long, long while before this becomes a commercially viable competitor to bulk CMOS. It doesn't matter much for a CPU if your transistor can switch 1,000,000x faster when you can only fit 1/1,000th as many on a die. Your speed would ultimately be limited by the physical wire delays anyway.
Not to mention that it's using "exotic" process steps which means capacity is, at minimum, decades away from being meaningful.
Don't get me wrong, the research is cool, but it's not going to make "computers a million times faster".
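To put rough numbers on the density argument (the ~50 nm CMOS device pitch and the 10 mm die are my own ballpark assumptions):

```python
die_side_mm = 10  # assume a 10 mm x 10 mm die

for pitch_um in (0.05, 1.0):  # ~50 nm CMOS pitch vs. a ~1 um optical device
    per_side = die_side_mm * 1e3 / pitch_um
    print(f"{pitch_um} um pitch: ~{per_side ** 2:.1e} devices per die")
```

That works out to roughly a 400x density penalty, in the same ballpark as the 1/1,000th figure above.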
What if it ends up in a USB-like scenario: fewer wires, but running at a higher speed? A 4-8x smaller word size in exchange for a ~10^6 clock speedup sounds like a good trade. Just think, Z80s and 6502s coming back into fashion. This time, turbo-charged!
Chuck Moore was kind of on that beat already with his GreenArrays chips.
It will definitely be a while, but maybe not such a long one.
This seems analogous to the yearly battery-breakthrough clickbait story promising 1-second charge times and 999 years of battery life, if only some theoretical process ever becomes viable at a reasonable price.
I remember carbon nanotubes and graphene being mentioned a couple of decades ago in nanotechnology lectures by an amazing professor. I was excited to be living into a different future back then. But back in reality, nowadays I use a Ryzen 3950X to program 28 nm CMOS FPGAs. I'm still curious which manufacturing technology can replace silicon CMOS for worldwide electronics manufacturing.
Awesome - this is a good example of a recent post I saw on Reddit: why should you go to engineering school? Because of improvements like this. But they're going to have to figure out how to get faster memory (maybe non-von-Neumann architectures) to make this really pay off.
Didn't even read it, responding to the headline alone. No they can't.
Will edit after reading more about why they can't. Which I stand by, as the blockchain is my witness, they just can't.
EDIT: I shouldn't have bothered checking. Yes, a petahertz is a million times a gigahertz, but that's the only thing they've got to ride on. So the size of the chip at that point comes into play, and it would have to be 3D, so then will it have a dimension left for the laser? Well, I think a terahertz would be possible, for sure. But later, like in the fifties. After researching other questions and finding answers to this question in a roundabout way.
>Logic gates don’t work instantaneously though – there’s a delay on the order of nanoseconds as they process the inputs. That’s plenty fast enough for modern computers, but there’s always room for improvement. And now the Rochester team’s new logic gates blow them out of the water, processing information in mere femtoseconds, which are a million times shorter than nanoseconds.
This is a bit misleading, no? Sure, a signal does take time on the order of nanoseconds to pass through entire CPU units, but at the individual gate level aren't we talking about times in the picosecond range?
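Rough sanity check (the ~10 ps per-gate delay and ~30 gate levels per pipeline stage are typical textbook ballpark figures, not from the article):

```python
gate_delay_ps = 10  # assume ~10 ps per individual logic gate
levels = 30         # assume ~30 gate levels per pipeline stage

cycle_ps = gate_delay_ps * levels  # ~300 ps per clock cycle
clock_ghz = 1e12 / cycle_ps / 1e9
print(f"~{clock_ghz:.1f} GHz")     # roughly today's clock speeds
```

So the nanosecond figure describes whole chains of gates, not a single gate.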
Remember back in the mid-90s when Intel was developing "voxels" to use IR to communicate between layers? The little pyramid voxels allowed for faster communication with less engineering... (I can't quite recall -- this was a conversation I had in 1997 on a hike with a then-CPU guy at Intel; that's also when I first learned of a 64-core lab rat they were working on...)
I'm actually more excited that they found a use for very small segments of graphene, which is needed if we're ever going to produce higher-quality, unbroken strands at scale.
The speed of computers IS NOT LIMITED by "gate" or "transistor" speed; it is primarily limited by transmission-line delays across the die, and often off the die. You can only improve this by using less die area or by avoiding off-die communication as much as possible. The latter is part of the basis of Apple Silicon's speed.
https://spie.org/news/photonics-focus/marapr-2022/harnessing...
https://www.nextplatform.com/2022/03/17/luminous-shines-a-li...
I can see this technology being made into a supercomputer-type setup one day, but as far as home computing goes, I have my doubts.
As 01100011 points out (https://news.ycombinator.com/item?id=31356408), the article itself already does that:
> It will probably be a very long time before this technique can be used in a computer chip ….
The frequency of those stories is much greater than yearly.
Which implies a maximum speed up of what … 10% ?