Going Ballistic

Ballistic deflection transistors are among the ways that researchers hope to speed up processors.

12/25/2006 7:34 AM Eastern


MOORE'S LAW SAYS THAT the number of transistors on a chip doubles roughly every 18 months, translating into a doubling of computing power. Researchers at the University of Rochester in New York are among several groups working to take that prediction to extremes by developing terahertz (THz) transistors.

“The highest speed commercial processors today are in the vicinity of 4 GHz,” says Marc Feldman, professor of computer engineering. “That is 0.004 THz.”
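Feldman's comparison is simple unit arithmetic, sketched here with the 4-GHz figure from the quote above:

```python
# Frequency gap between today's processors and the terahertz target.
def ghz_to_thz(ghz):
    """Convert gigahertz to terahertz (1 THz = 1,000 GHz)."""
    return ghz / 1000.0

current_clock_ghz = 4.0  # "highest speed commercial processors today"
print(ghz_to_thz(current_clock_ghz))        # 0.004, as Feldman notes
print(1.0 / ghz_to_thz(current_clock_ghz))  # a 1-THz part would run 250x faster
```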

Faster chips would enable a wider range of applications, including advanced AV applications such as real-time facial recognition for videoconferencing. In this $1.5 million research program, Feldman and his colleagues are focusing on transistors because they're the heart of processors, which use hundreds of millions of them, depending on the application. For example, high-end servers increasingly have processors with 1 billion or more transistors.

Finesse, not force

Simply packing in more transistors to achieve more processing power is a design strategy that has worked for years. The catch is that it's starting to run into a few issues, particularly the amount of heat produced. Many engineers also believe that current transistor designs are reaching their limits in terms of being able to achieve faster speeds.

The University of Rochester researchers believe that their ballistic deflection transistor is one solution. Conventional transistors handle multiple electrons at once, like a stream of water, while ballistic transistors handle them one at a time. The “ballistic” part of the name refers to ballistic transport, an electrical engineering term for electrons traveling through a material without scattering, as well as to the fact that the transistor manipulates each electron in a way that's akin to a ballistic projectile.

The “deflection” part of the name refers to a part of the design that each electron bounces off of. Altering the voltage near the deflector determines the direction in which the electron flies off. These directions in turn determine whether an electron triggers a 1 or a 0, which make up the internal binary language that processors speak as they go about their tasks. One helpful analogy is to think of the deflector and voltage field working together as a traffic cop at an intersection, pointing some cars in one direction and other vehicles in another.
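In code terms, the traffic-cop analogy reduces to a routing decision. This toy Python model is purely illustrative; the voltage threshold and the mapping of exit direction to bit value are assumptions, not details from the Rochester design:

```python
# Toy model of the ballistic deflection idea: an electron approaches a
# deflector, and the steering voltage decides which exit channel it
# ricochets into. The channel-to-bit mapping here is illustrative only.
def deflect(steering_voltage):
    """Return the bit encoded by the electron's exit direction."""
    # Positive voltage nudges the electron toward one side of the
    # deflector, negative toward the other, like a traffic cop waving
    # some cars one way and other vehicles the other.
    return 1 if steering_voltage > 0 else 0

assert deflect(+0.5) == 1   # ricochets one way  -> logic 1
assert deflect(-0.5) == 0   # ricochets the other -> logic 0
```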

If bouncing electrons off a deflector sounds low-tech, that's because it is. In fact, that approach is both a major advantage and a major departure from previous attempts that also used electrical fields to herd electrons. By creating a structure that the electrons can ricochet off of, less energy is required for the electrical field. That makes the transistor draw less power and produce less heat — two issues that are nearly as important as speed.

To make their architecture practical, the University of Rochester researchers also had to find new materials. Silicon was out of the question because the way that electrons ricochet in that material would have required a deflector so tiny that it would be difficult to manufacture. Eventually they settled on indium gallium arsenide and indium phosphide because those provide more room for the electrons to move, and thus allow the deflector and other components to be built at sizes that are practical for manufacturing. “Our fabrication is close to the limit of modern nanolithography,” Feldman says.

That's important because to make ballistic deflection transistors commercially viable, it helps to be able to leverage existing manufacturing techniques. Even so, the University of Rochester researchers say they have a lot of work left to do before their designs are ready for prime time.

“It's too early to speculate as to when the technology will be mature enough for a general computing environment,” Feldman says. “There are a lot of factors to this question, and this is still very early in the development of this device.”



The sandwich that flops

Moore's Law is named after Gordon Moore, co-founder of Intel, which is also working on new designs to increase processor speeds. One focus is on cores, the bundles of transistors that make up a processor. By splitting that bundle into multiple cores, it's possible to increase processing power while spreading the heat generated over a larger area, making it easier to cool.

Multi-core processors are increasingly common, including in the latest PCs. In September, Intel unveiled a prototype TeraFLOP processor — named after the 1 trillion floating point operations it can execute every second — that consists of 80 cores. The 3.1-GHz processor isn't being groomed for commercialization. Instead, it's being used to test designs that could be used in processors with tens or hundreds of cores.

“As an experimental chip, it's focused on specific experiments rather than running real software,” says Sean Koehl, a technology strategist in Intel's Tera-scale Computing Research Program. “So one shouldn't compare this research directly to the latest multi-core processors on the market today. It uses very simple cores with a simple instruction set that are used to generate tera-scale data traffic.”

Another key difference between TeraFLOP and the multi-core processors available today is the interconnects, which are the links that transfer data between the cores and memory. As with highways, the more traffic an interconnect can handle, the less likely things are to get jammed up. The TeraFLOP processor features 80 small tiles, each with a core that links into a network connecting it to the other 79 cores. Each core also uses this network to access a 20-MB memory chip that's bonded to the processor. This sandwich design enables a mesh of thousands of interconnects, which in turn boosts performance.
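The article doesn't give the tile geometry, but assuming the 80 tiles form an 8-by-10 grid, a quick sketch shows how a nearest-neighbor mesh scales. Each link would in practice be a multi-wire bus, which is how a few hundred links can add up to thousands of interconnects:

```python
# Nearest-neighbor link count for a 2D mesh of tiles.
# The 8 x 10 grid is an assumption for illustration; the article says
# only that 80 tiles are networked together.
def mesh_links(rows, cols):
    """Number of core-to-core links in a rows x cols 2D mesh."""
    horizontal = rows * (cols - 1)  # links within each row
    vertical = cols * (rows - 1)    # links within each column
    return horizontal + vertical

links = mesh_links(8, 10)
print(links)       # 142 core-to-core links among 80 tiles
print(links * 32)  # e.g., 32 wires per link -> thousands of wires
```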

“The on-chip mesh network will enable significantly higher data throughput, as well as much lower latency (time of flight) for data exchanged between cores,” Koehl says. “That means each core will be able to transfer more data in a timelier manner. Collectively, these cores will generate over a TeraFLOP of computational performance, and the mesh network will provide over 1 Terabit per second (Tb/s) of aggregate data bandwidth.”

By comparison, today's processors have only a few dozen interconnects.

“Today these busses have scaled to between 10 and 30 Gigabytes per second (GB/s), but stacking will allow us to jump to bandwidths of more than 100 Gigabytes per second,” Koehl says. “Also, because the vertical connections between the chips are so short, the circuits that drive them can be simpler and much more energy-efficient.”

But as revolutionary as it is, the TeraFLOP design still uses enough existing devices and manufacturing techniques that producing it wouldn't mean reinventing the wheel.

“This research effort is intended to make the best use of the additional transistors that Moore's Law will continue to provide for the near future,” Koehl says. “Tera-scale computing wouldn't require a diversion from existing manufacturing technology, though we'll certainly have to continue to advance certain new manufacturing capabilities, such as 3D stacking.”

Working in parallel

But if Intel's prototype processor is running at 3.1 GHz, why does it get a name that begins with Tera? The answer has to do with what Intel refers to as a new paradigm in computing.


“What we're talking about is scaling through parallelism, rather than through frequency (Hertz),” Koehl says. “So while the chip may be running at 3.1 GHz, due to the fact that we have so many cores operating in parallel, it's possible to achieve Tera-scale performance. In this case, ‘Tera-' refers to computational abilities of the chip (TeraFLOPs) and the amount of data it can move about (Terabytes).”
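The arithmetic behind that claim can be sketched directly. The figure of four floating-point operations per cycle per core is an assumption chosen to make the totals line up; the article states only the core count, the clock speed, and the TeraFLOP total:

```python
# Scaling through parallelism: cores x clock x FLOPs-per-cycle.
cores = 80           # from the article
clock_hz = 3.1e9     # 3.1 GHz, from the article
flops_per_cycle = 4  # assumed (e.g., two multiply-add units per core)

peak_flops = cores * clock_hz * flops_per_cycle
print(peak_flops / 1e12)  # ~0.99 TeraFLOPs from a 3.1-GHz clock
```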

Like the University of Rochester's ballistic deflection transistor, Intel's TeraFLOP designs are still a long way from commercial availability.

“We expect to enter the tera-scale era for general purpose microprocessors in the next five to 10 years,” Koehl says. “The reason we're doing so much research is that scaling via parallel computing is fundamentally different than scaling by speeding up serial processing. There are many challenges to be addressed to make these processors scalable, adaptable, reliable, and programmable.”

One of those challenges goes back to the whole reason for doing this kind of research: the applications. In pro AV, it's not difficult to think of applications that could leverage faster processors to provide a better or richer user experience.

“The software must scale with the hardware,” Koehl says. “In other words, we must enable new applications that can put this vast amount of computing capability to best use for people: enabling virtual worlds and machine learning, for instance. At the same time, we must work with industry and academia to make parallel programming for highly-threaded software much easier than it is today, including new languages, operating systems, abstraction layers, and hardware techniques.”


For more information about speedy transistors and processors, check out:

  • The University of Rochester researchers' paper, “A Terahertz Transistor Based on Geometrical Deflection of Ballistic Current,” is available at. A brief video at illustrates the transistor's basic concepts.
  • Details about Intel's Tera-scale research are available at
  • Tim Kridel is a freelance writer and analyst who covers telecom and technology. He's based in Kansas City and can be reached at
