IBM scientists say they're another step closer to creating computer chip circuits that efficiently use frickin' laser beams rather than copper wires to communicate. On Wednesday, Big Blue's light-wrangling boffins unveiled what's called an avalanche photodetector capable of receiving optical information pulses at 40Gbps. The …
Diamond CPUs obsolete before they're ready for market?
A number of years ago Wired magazine ran an article about artificial diamonds, and how one of the companies wanted oh-so-bad to get into producing diamond wafers for circuitry. In 2003 a diamond-based transistor was clocked at 81GHz, while silicon transistors had a maximum speed of 10GHz. Seven years down the road, and have we seen anything yet?
I wonder if the diamond fabs will be eclipsed by optical CPUs. Hmmm....
Re: Diamond-based CPUs
To paraphrase the late Michael Crichton's novel (and somewhat dire film adaptation) Congo, it's because they couldn't get enough Type IIb blue (boron-doped) diamonds from the abandoned mine workings near the fabled city of Zinj...
And since one of the characters used resonant charges, which triggered a dormant volcano into erupting, thus destroying said workings, is it any wonder that they're taking so long to duplicate said diamonds...?
"won't happen for five to ten years"
Sounds a bit slow for an avalanche.
And no doubt by then Microsoft and the like will have managed to produce even more bloated bloatware, so it will *STILL* take more than five seconds for a PC to boot up.
What is the goal?
I've been told that electrical propagation delays on the scale of the dimensions of a typical CPU are one of the harder design constraints when laying out a chip.
Would this replace current interconnects on chips, thus reducing propagation delays to roughly the speed of light over the chip dimensions?
I just love..
any article that appropriately includes "frickin' laser beams"!
1 small part of a big puzzle
OK, you've got the detector. You now need a bunch of them and a bunch of lasers. But you'll also need some kind of routing system. Optoisolators have used white, dome-shaped packages to improve the reflectance from one emitter to one detector, but that's only good for one channel, which is unlikely to be very useful here.
Given that current CPUs are already drawing a lot of power, this is going to be difficult to justify without *very* carefully checking the benefits of this idea.
Propagation delay is a serious issue only with naive implementations. If you have a long wire, simply don't expect the signal to arrive in the same clock cycle. Use pipelines or messages instead of simple buses.
Light is not that much faster than electricity - it depends on the geometry and the insulating material. A good guess is 0.5*c, and that is still 15cm per nanosecond. With a cycle time of 0.3ns (3GHz), you have about 5cm of wire budget. Of course, you also have to switch a transistor, load parasitic capacitances and so on, so propagation delay plays a role.
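The wire-budget arithmetic above checks out; here's a quick back-of-envelope sketch, assuming the comment's 0.5*c guess and 3GHz clock (both its own assumptions, not measured values):

```python
# Rough check of on-chip signal propagation distance per clock cycle.
C = 3.0e8                      # speed of light in vacuum, m/s
signal_speed = 0.5 * C         # assumed on-chip propagation speed, m/s

cm_per_ns = signal_speed * 1e-9 * 100
print(f"{cm_per_ns:.1f} cm per nanosecond")     # 15.0 cm/ns

clock_hz = 3e9                 # 3 GHz clock
cycle_s = 1.0 / clock_hz       # ~0.33 ns cycle time
budget_cm = signal_speed * cycle_s * 100
print(f"{budget_cm:.1f} cm of wire per cycle")  # 5.0 cm
```

In practice the budget is tighter still, since (as the comment notes) gate switching and parasitic capacitance eat into the same cycle.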
Future CPUs with clock rates above 5GHz will have to adapt their designs to this reality by using pipelined, message-based designs.
This invention will probably be more of an economic improvement, as you would be able to connect a fiber directly to the chip - no need for additional components. Still, fibers are bulky in practice; if you want to connect (say) ten to a chip, you will need a novel mechanical arrangement.
All those optical-computing ideas have not become real so far, and I have some doubts they ever will. After all, light has huge wavelengths of hundreds of nm, while electrons are indeed (also) waves, but at a much, much shorter wavelength. Current semiconductor tech is now down to 25nm. I am not sure you can squeeze a 500nm light wave into that.
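To put a rough number on the size mismatch described above, here is a sketch using the common diffraction-limit rule of thumb, wavelength / (2 * refractive index). The refractive-index value is an assumed ballpark for a high-index material like silicon, not a figure from the article:

```python
# Smallest optical feature you can roughly confine a wave to, vs. the
# 25nm transistor feature size quoted in the comment.
wavelength_nm = 500.0      # the comment's example light wavelength
n_material = 3.5           # assumed high refractive index (silicon-ish)

limit_nm = wavelength_nm / (2 * n_material)
print(f"~{limit_nm:.0f} nm minimum optical feature")  # ~71 nm

feature_nm = 25.0          # current transistor scale, per the comment
print(limit_nm > feature_nm)  # True: optical features stay much larger
```

Even with a generous index, the optical feature stays a couple of times bigger than today's transistors, which supports the commenter's doubt about squeezing light into 25nm geometry.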
Light can be broken down into its coloured constituents, which gives you 16 possibilities @ 4-bit, 256 @ 8-bit (basic VGA), 65536 @ 16-bit, & ultimately 4.2 billion @ 32-bit with current detection. That means every 32 bits, our current norm, could be transferred as one light byte. Therefore the transfer capacity is much greater with light. Splitting may be some way down the track, but it lends itself to amazing possibilities.
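The colour-counting idea above boils down to log2(N) bits per optical symbol - a minimal sketch, with the value/bit pairings worked out (16 colours carry 4 bits, 256 carry 8, and so on):

```python
import math

# If a detector can distinguish N wavelengths ("colours"), one optical
# symbol encodes log2(N) bits of information.
def bits_per_symbol(distinguishable_colours: int) -> int:
    return int(math.log2(distinguishable_colours))

print(bits_per_symbol(16))       # 4
print(bits_per_symbol(256))      # 8
print(bits_per_symbol(65536))    # 16
print(bits_per_symbol(2 ** 32))  # 32 - the "4.2 billion colours" case
```

As the replies below point out, actually distinguishing billions of wavelengths is the hard part, not the arithmetic.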
I think you're underestimating the potential value of this-- I'm guessing you know some circuits and comp arch as well, but I'm going to break it down a bit more than you need for the benefit of others.
You're obviously right that something like this has little or no value for local interconnect, but global interconnect is a major pain in the ass that does not scale or improve with process technology, and this could be a big help in that context. Pipelining global interconnect is most certainly not a panacea-- basically, by adding in more stages, you increase the number of cycles that pass before it's possible to determine whether or not a branch instruction was predicted correctly. When you guess wrong, therefore, you've wasted more time following a bad path, which brings down the average number of instructions that get completed per cycle. Intel tried this strategy with P4-- their thought was that they would use techniques like wire pipelining to reduce the cycle time so far that the drop in efficiency would be outweighed by blazing speed. It didn't really work. By slashing global interconnect latency in half, you could potentially have much larger cores without resorting to these shenanigans (allowing larger branch predictors for example, and possibly helping with sequential execution speed), or you could facilitate communication between cores.
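A toy model of the pipeline-depth argument above - all numbers (misprediction rate, flush penalties) are invented for illustration and are not Intel's figures:

```python
# Effective instructions-per-cycle for a machine that ideally completes
# one instruction per cycle, but loses `penalty` cycles on each branch
# misprediction: CPI = 1 + rate * penalty, IPC = 1 / CPI.
def effective_ipc(mispredict_rate: float, flush_penalty_cycles: int) -> float:
    cpi = 1.0 + mispredict_rate * flush_penalty_cycles
    return 1.0 / cpi

# Same predictor accuracy; extra wire-pipelining stages deepen the flush.
shallow = effective_ipc(0.05, 10)   # shorter pipeline
deep = effective_ipc(0.05, 20)      # P4-style deep pipeline
print(shallow, deep)                # ~0.67 vs 0.50 instructions/cycle
```

The deeper pipeline only wins if the clock speedup outweighs that IPC drop, which is exactly the bet the comment says didn't pay off for the P4.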
For global interconnect, optical communication has a lot of attractive features even beyond the roughly 2x reduction in latency. In a traditional bus, you have multiple long wires in parallel. The metal layers used for global interconnect tend to be relatively tall and thin-- the thinness is for density, and the tallness is to compensate for the effect of that thinness on resistance. As a result, you have large plates close to one another, and you develop significant capacitance, which means that the relative voltage of neighboring lines will tend to stay the same. So, if one line moves from high to low voltage, it will push its neighbor down as well. This is called cross-coupling, and it can do some very nasty things.

Suppose that one line is driven at a constant low voltage (we'll call this the victim). The other line starts off at high voltage and transitions low (this is the attacker). The attacker pushes the victim down below 0 volts. Now suppose that the victim is driving a latch (memory element) which is not enabled. In a typical latch, there are cross-coupled inverters and an nmos transistor acting as a gate, with the input at the drain and the output at the source. When there is a positive voltage difference between the gate and the source, electrons flow from the source to the drain. Normally, with 0 volts on the gate, you expect nothing to flow through the device, and your state will not be written. But if the victim gets dragged below 0, you potentially have enough of a voltage difference between the source and the gate to turn the transistor on, allowing a write to your memory while the latch is supposed to be disabled. This is seriously ugly, it's transient, and it's very difficult to catch.
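The victim/attacker scenario above can be sketched as a simple capacitive divider - the capacitance values and supply voltage here are invented purely for illustration:

```python
# Toy cross-coupling model: a full-swing transition on the attacker line
# couples onto the quiet victim in proportion to the coupling capacitance
# over the victim's total capacitance (capacitive divider).
def victim_glitch(vdd: float, c_couple: float, c_ground: float) -> float:
    """Voltage kick on the victim when the attacker swings vdd -> 0."""
    return -vdd * c_couple / (c_couple + c_ground)

# Victim held at 0 V; attacker swings 1.0 V down to 0 V.
glitch = victim_glitch(vdd=1.0, c_couple=0.3, c_ground=0.7)
print(glitch)   # -0.3: the victim is dragged below ground, as described
```

A few hundred millivolts of undershoot is exactly the kind of kick that can momentarily turn on a supposedly-off pass transistor in the latch scenario above.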
Circuit designers sometimes use techniques called shielding and half shielding to reduce these problems. Shielding involves inserting lines tied to ground, either every other wire, or every two wires (half shielding) in a bus. As you can imagine, this uses up a lot of area. There are other issues from cross coupling as well (burning more power for example if neighbors transition in opposite directions)-- the hard core analog side of things is not really my cup of tea-- but pretty much all of this crap should go away with optical interconnect.
Also, with all the capacitance in global interconnect, you can blow a lot of power charging and discharging the lines, and to get it to go fast, you need large, power hungry transistors, and probably repeaters every so often which burn still more power and add extra latency (now you have gate delays on top of your wire delay).
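A rough sketch of that charging/discharging cost, using the standard dynamic-power formula P = a * C * V^2 * f - the bus width, per-line capacitance, and activity factor are all invented for illustration:

```python
# Dynamic power burned charging/discharging one global bus line.
def dynamic_power_mw(activity: float, cap_pf: float,
                     vdd: float, freq_ghz: float) -> float:
    """P = activity * C * V^2 * f, returned in milliwatts."""
    return activity * (cap_pf * 1e-12) * vdd ** 2 * (freq_ghz * 1e9) * 1e3

# Assumed: 2 pF global line, 1 V supply, 3 GHz, 25% switching activity.
per_line = dynamic_power_mw(0.25, 2.0, 1.0, 3.0)
print(per_line)        # 1.5 mW per line
print(per_line * 64)   # 96 mW for an assumed 64-bit bus
```

That's before counting the repeaters and oversized drivers the comment mentions, which is why cutting interconnect capacitance out of the picture is attractive.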
In short, if it's fast enough to convert between the electrical and optical domains, and the pitch of optical interconnects is fine enough, and the interconnects can be forked (one driver multiple receivers), this could be a big winner (faster, lower power, more reliable, what's not to like?). I do agree with you that they've been talking about this kind of thing for years and nothing's come out of it yet, but that's not to say the hurdles will never be circumvented, and there are obviously some fine minds working on this stuff, so I feel it's a bad idea to dismiss the possibilities out of hand.
on the other hand
on the other hand, your point about size is excellent, and there isn't much of a way to get around that. 8 microns minimum diameter is pretty huge and would limit the benefit significantly.
@Rex Alfie Lee
"Light can be broken down into its coloured constituents which basically makes it a possible 16 possibles @ 4-bit, 256 @ 16-bit per transfer just in terms of basic VGA & 65536 at 8-bit, & an ultimate 4.2 billion @ 32-bit current detection possibility. That means for every 32-bits, our current norm, the transfer of data could be done with one light byte."
True. Now think it through a bit further. You now need a wavelength-tunable laser, or multiple single-frequency lasers, illuminating either a set of narrow-spectrum detectors or a spectrometer setup.
The benefit of multiple-colour lasers is that they won't interfere if they cross.
As a way of increasing the range of components making up a "system on a chip", I think it's a winner, but for speeding up intra-processor connectivity it's a dead end.