Tuesday, January 13, 2004
Among the trolls lurking under the bridge to sub-100-nanometer devices, one of the least talked about is alpha-particle-induced soft errors. But like signal integrity, power and feature formation, this one could be a showstopper.
Researchers at STMicroelectronics in Geneva have gone back to device basics to come up with a solution, at least for the embedded-memory portion of a chip. Relying on a proprietary set of tools for modeling the behavior of devices under alpha bombardment, the designers have devised an elegant version of the brute-force approach: They have increased the node capacitance of an SRAM cell substantially with only about a 5 percent area increase.
The trick lies in a pair of vertical metal-dielectric-metal capacitors cleverly worked into the cell design. The capacitors, roughly shaped like tall cylinders, stand like a pair of towers in the intermediate layers of interconnect between metal-1 and the top layer. Since there is no memory-cell interconnect in those levels, the capacitors don't get in the way of the cell or increase its area; they just take up unused space above it. Of course, that limits the ability to route over SRAM.
The impact on sensitivity to alpha particles, however, is dramatic, STMicroelectronics said. The company fabricated a 120-nm test chip that included conventional SRAM arrays, SRAM arrays with error-correcting logic and arrays of the new hardened cells. The chips were bombarded with alpha particles at two ST sites, and with neutrons at Los Alamos Neutron Science Center in New Mexico. The hardened cells showed a 250x improvement in resistance over the standard ones, ST said.
A high-energy alpha particle traveling through a semiconductor releases an avalanche of free electrons that, true to their nature, find paths to an area of positive charge. If that path takes them to ground, fine. But if it takes them onto a circuit node that is being used to store the state of a memory cell or flip-flop, it can alter the state of the system. That's a soft error.
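The scale of that released charge can be sketched with two textbook numbers: creating one electron-hole pair in silicon takes roughly 3.6 eV, and alphas from radioactive decay in package materials carry energies of a few MeV. The 5 MeV figure below is an illustrative assumption, not a value from the article.

```python
Q_E = 1.602e-19   # elementary charge, coulombs
E_PAIR_EV = 3.6   # approx. energy to create one electron-hole pair in silicon (eV)

def deposited_charge_fc(alpha_energy_mev):
    """Upper bound on the charge an alpha frees if it stops in silicon, in fC."""
    pairs = alpha_energy_mev * 1e6 / E_PAIR_EV
    return pairs * Q_E * 1e15  # coulombs -> femtocoulombs

# A ~5 MeV alpha liberates on the order of a couple hundred femtocoulombs;
# only a fraction of that is actually collected on any one circuit node.
print(f"{deposited_charge_fc(5.0):.0f} fC")
```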
To date, soft errors have not kept most of the design community up nights. In the geometries and operating voltages in use in most designs today, the amount of charge on a storage node is far greater than the amount likely to be generated by an alpha particle. The exceptions have been marginally designed circuits, where soft errors were more a symptom than a problem in their own right, and circuits that had to operate in space, where the energy and density of alpha particles could be far greater than under our blanket of sky.
But today, scaling, a longtime ally, is looking more and more like Frankenstein's monster. Scaling reduces the size, and hence the capacitance, of storage nodes. It also packs them closer together, ensuring that at least one such node will lie close to the path of any intruding particle. And it forces reduced operating voltages, which, combined with the lower capacitance, mean that the total charge on a node is approaching the free charge generated by an alpha particle. In other words, soft errors are becoming an observable phenomenon, not a theoretical prediction.
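The crossover described above can be put in rough numbers: the critical charge of a storage node is approximately Q = C x V. The capacitance and voltage below are hypothetical illustrative values, not ST's figures.

```python
def critical_charge_fc(node_cap_ff, supply_v):
    """Approximate critical charge Q = C*V of a storage node, in fC
    (femtofarads * volts = femtocoulombs)."""
    return node_cap_ff * supply_v

# Hypothetical numbers: a 2 fF node at 1.2 V holds only ~2.4 fC, comfortably
# below the tens to hundreds of fC an alpha track can liberate nearby --
# which is why boosting node capacitance hardens the cell.
print(critical_charge_fc(2.0, 1.2))
```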
There are two basic fixes: make the devices bigger to increase the stored charge or cleverly design the circuits to survive the disruption of a node. The former approach undermines the density boost that would prompt most designers to use advanced processes in the first place. So it will be used primarily in small memory arrays whose area and timing slack are not first-order dependent on cell size.
The latter approach requires more thought. Some work has been done, in particular by iRoC Technologies Corp. (Santa Clara, Calif.), at detecting nodes that are vulnerable to single-event upset and replacing them with hardened circuits. This solution, working on a node-by-node basis, should have minimal impact on designs until nearing the point where many nodes are vulnerable. Then grander measures, such as new hardened libraries or new architecture, may be necessary.
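As an illustration of the error-correcting logic mentioned earlier, a textbook Hamming(7,4) code can locate and flip any single upset bit in a stored word. This is a minimal sketch of the general technique, not ST's or iRoC's actual circuitry.

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword with 3 parity bits."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parity checks; the syndrome is the 1-based position
    of a single flipped bit (0 means no error). Return the corrected word."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1
    return c

word = hamming74_encode([1, 0, 1, 1])
hit = word.copy()
hit[4] ^= 1                      # simulate a single-event upset
assert hamming74_correct(hit) == word
```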
To meet these challenges, the ST team rethought device basics for its SRAM solution. The company has reported fewer than 10 FIT (failures in time, or failures per billion device-hours) per megabit of SRAM during 1.2-volt operation. At 1.32 V, the devices exhibited no failures at all.
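To put that rate in perspective, one FIT is one failure per billion device-hours, so a 10-FIT-per-megabit ceiling translates into mean times between soft errors as computed below. The 64-Mbit array size is a hypothetical example, not a figure from ST.

```python
FIT_PER_MBIT = 10      # ST's reported ceiling: failures per 1e9 hours per Mbit
HOURS_PER_YEAR = 8760

def mtbf_years(mbits):
    """Mean time between soft errors for an SRAM array of `mbits` megabits."""
    failures_per_hour = FIT_PER_MBIT * mbits / 1e9
    return 1.0 / failures_per_hour / HOURS_PER_YEAR

print(round(mtbf_years(64)))  # hypothetical 64-Mbit embedded SRAM: ~178 years
```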
ST has not yet announced products using the new cell design, but expects it to be part of the standard libraries for extreme-submicron systems-on-chip in the near future.
By: DocMemory Copyright © 2023 CST, Inc. All Rights Reserved