AI Can't Design Chips Without People


Friday, July 14, 2023

Machines are already building other machines, so it makes sense that artificial intelligence would help build the components necessary for them to exist, including processors and memory devices. But we’re a long way from AI replacing design engineers; the immediate benefits of machine learning (ML) and algorithms are the productivity gains that come with speeding up tedious, time-consuming tasks.

Recent research from Accenture’s annual Technology Vision report on generative AI found that 69% of global high-tech executives expect their organizations to benefit from accelerated innovation thanks to AI. Syed Alam, a high-tech industry lead at Accenture, told EE Times in an exclusive interview that manufacturing and supply chains are areas where generative AI is being explored to support the chipmaking process.

“Generative AI can improve chip utilization and yield by assisting on quality inspection and production scheduling,” Alam said, while the supply chain can be improved by optimizing material availability and identifying breaking points in product delivery.

Design is another area where AI will have an impact, he added, because it can increase speed to market by helping manage the complexities throughout a chip’s lifecycle, such as rapid prototyping and easy visibility into all the digital documentation for a product.

“What the onset of generative AI means for semiconductor companies is that they will need to radically rethink how work gets done,” he said. “The focus must be on evolving operations and training people as much as [it is] on the technology.”

AI excels at optimization

Tetsu Ho, mobile DRAM manager at Winbond Electronics Corp., told EE Times in an exclusive interview that he sees AI as having the potential to revolutionize the design and manufacturing of chips by improving performance, reducing defects and increasing efficiency. “We think AI can help [reduce] design cycle time [in areas] like circuit placement and [improve] simulation efficiency.”

In terms of the design process, Ho added that ML algorithms can be used to optimize chip performance. AI can also help generate new chip architectures and optimize chip layouts that improve performance while also reducing power consumption.

He said design could be further optimized because AI can help predict chip behavior in different scenarios to identify potential issues. Similarly, AI can be used in manufacturing to identify and correct issues in production lines, as well as be used to analyze sensor data to identify defects in chips during the manufacturing process. He added that AI can also guide adjustments to the process to reduce defects and improve yield.

Additionally, AI could be employed to improve quality control because it can identify defects that are difficult for humans to detect, such as scratches or cracks that are too small to be seen by the human eye, according to Ho. “AI can expedite yield ramp-up for advanced process node products and optimize progress cycle time due to lots of meaningful labeling data.”

The most immediate impact AI is having in chip design is as a productivity tool: it cuts the hours people spend on repetitive tasks while also reducing errors.

Nvidia, for example, is exploring how AI can aid in the design process through Automated DREAMPlace-based Macro Placement (AutoDMP), developed in collaboration with the University of Texas to optimize macro placement, Bill Dally, chief scientist at Nvidia, told EE Times in an exclusive interview. AutoDMP uses the open-source analytical placer DREAMPlace as the engine for concurrent macro and cell placement, along with the PyTorch deep-learning (DL) framework to train a neural network. Although DREAMPlace is not the DL framework most associated with AI today, Dally said it has the advantage of running very fast, making it possible to run many trials and learn from the results.
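As a rough, hypothetical sketch of that general approach (this is not Nvidia's AutoDMP code), the loop below treats a fast placer as a black box, samples its tuning knobs, and keeps only the trade-off curve between two competing metrics. The placer model, parameter names and metrics are invented stand-ins.

```python
# Hypothetical sketch of multi-objective parameter search over a black-box placer.
# The placer model, parameter names and metrics are stand-ins for illustration.
import random

def run_placer(params):
    """Stand-in for one fast placement run; returns (wirelength, congestion)."""
    density, halo = params["target_density"], params["macro_halo"]
    rng = random.Random(params["seed"])
    wirelength = 100.0 / density + 5.0 * halo + rng.uniform(0.0, 3.0)
    congestion = 2.0 * density + 1.0 / (halo + 0.1) + rng.uniform(0.0, 0.5)
    return wirelength, congestion

def dominates(a, b):
    """Pareto dominance for minimization: a is no worse everywhere, better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def sample_params(rng):
    return {
        "target_density": rng.uniform(0.5, 0.95),
        "macro_halo": rng.uniform(0.0, 2.0),      # keep-out margin around macros
        "seed": rng.randrange(1_000_000),
    }

def search(trials=200):
    rng = random.Random(0)
    pareto = []                                    # non-dominated (metrics, params) pairs
    for _ in range(trials):
        params = sample_params(rng)
        metrics = run_placer(params)
        if any(dominates(m, metrics) for m, _ in pareto):
            continue                               # an existing trial is strictly better
        pareto = [(m, p) for m, p in pareto if not dominates(metrics, m)]
        pareto.append((metrics, params))
    return pareto

if __name__ == "__main__":
    for (wl, cong), params in sorted(search(), key=lambda item: item[0]):
        print(f"wirelength={wl:6.1f}  congestion={cong:5.2f}  {params}")
```

In AutoDMP itself, the inner call is a full GPU-accelerated DREAMPlace run, which is what makes large numbers of trials affordable.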


There are many tried-and-true practices for placing macros, Dally said, but finding the optimal placement manually is time-consuming. Macro placement is a critical aspect of the chip design process because it has a significant impact on the landscape of the chip, directly affecting many design metrics like area and power consumption. Today, most digital chips integrate many macros, usually memory blocks or analog blocks, that are often much larger than standard cells.

Dally said the AutoDMP research not only demonstrates the effectiveness of combining GPU-accelerated placers with AI/ML multi-objective parameter optimization but could also lead to additional design-space–exploration techniques.

Simple AI could speed up the design process

Using AI for chip design doesn't need to be complex; its value is that it frees humans from repetitive tasks while reducing errors and increasing accuracy. Another example at Nvidia, Dally said, is the use of reinforcement learning to design cell libraries with a tool called NVCell. “It's better than humans.”
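NVCell's internals aren't described here, so the toy below only illustrates, in the simplest terms, what reinforcement learning for cell layout can look like: a policy places a few devices on grid columns one step at a time and is rewarded for short connections and for avoiding overlaps. The cell, netlist and reward function are invented for the example; this is not Nvidia's tool.

```python
# Toy illustration of reinforcement learning for cell layout (not NVCell).
# Four devices are placed one per step onto six grid columns; the reward favors
# short connections and penalizes two devices landing in the same column.
import math
import random

COLUMNS = 6
DEVICES = 4
NETS = [(0, 1), (1, 2), (2, 3)]      # invented connectivity between the four devices

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs, rng):
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

def reward(placement):
    wirelength = sum(abs(placement[a] - placement[b]) for a, b in NETS)
    overlaps = DEVICES - len(set(placement))        # devices stacked in the same column
    return -float(wirelength) - 10.0 * overlaps     # design-rule-style penalty

def train(episodes=5000, lr=0.1, seed=0):
    rng = random.Random(seed)
    logits = [[0.0] * COLUMNS for _ in range(DEVICES)]   # one softmax policy per step
    baseline = 0.0
    for _ in range(episodes):
        placement, chosen = [], []
        for step in range(DEVICES):
            probs = softmax(logits[step])
            col = sample(probs, rng)
            placement.append(col)
            chosen.append((col, probs))
        r = reward(placement)
        baseline = 0.95 * baseline + 0.05 * r            # running-average baseline
        advantage = r - baseline
        for step, (col, probs) in enumerate(chosen):     # REINFORCE policy-gradient update
            for c in range(COLUMNS):
                grad = (1.0 if c == col else 0.0) - probs[c]
                logits[step][c] += lr * advantage * grad
    return logits

if __name__ == "__main__":
    policy = train()
    layout = [max(range(COLUMNS), key=lambda c: policy[step][c]) for step in range(DEVICES)]
    print("learned placement (device -> column):", layout, " reward:", reward(layout))
```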


A move from 5 nm to 3 nm would require 2,500 cells in the library to be re-laid out while observing complex modern design rules, which Dally said was a job that used to take 10 people over eight months. “Now, it’s an overnight run on a GPU.”

He said this is an example of where AI is a productivity tool that allows designers to reduce menial labor and focus on making higher-level decisions—the time saved when moving to a new process can be spent doing something more valuable. “It’s saving people time so we can do more with fewer people,” Dally said.

It’s a team effort

Dally noted that the goal isn’t to get rid of people. “We want to do more. We have lots of great ideas for other chips.”

Without AI productivity tools, there is a shortage of person-hours, and five projects might get scratched because there's not enough time. “Now, we'll be able to do those five things because we'll have the same number of people, but they'll be more productive.”

Training data from previous designs can also be leveraged for new designs to speed up the process, he added. “We have an archive of chips that we’ve designed that we can use to train tools. They learn what those designers put into those chips and can replicate that in other chips.”

But good chip design takes creativity and experience, Dally said, and AI is only effective in prescribed and constrained scenarios. “AI is better for optimizing things once the big decisions have already been made.”

Lam Research is also exploring how humans and AI can best work together to optimize chip design as the need for precision and accuracy increases with the added complexity that comes with continually shrinking chip sizes. The company recently published a study in Nature that outlines the differences when humans and machines collaborate versus engineers or AI working alone.

Lam Research created a virtual environment to quickly test how well algorithms could find a suitable recipe that controls the plasma interactions with a silicon wafer.

In a briefing with EE Times, Rick Gottscho, executive vice president and advisor to the CEO at Lam, said the best approach appears to be a “human first, computer last” collaboration. The Lam study pitted engineers and computer algorithms against each other to develop an ideal recipe for atomic-level plasma-etch processes used in chipmaking, and it found that while human engineers will remain essential, a hybrid human-machine approach can help alleviate tedious and laborious aspects of research and development, freeing up engineers to focus on more challenging issues.

Gottscho said chips have been used to design chips for many decades now. “What's happening is those automated routines for designing chips and doing layouts are getting far more sophisticated than [they were] in the past.”

In the process world, however, it's been difficult to use computer-aided design in a significant way, he said. “The recipes are developed by trial and error.” It's a harder problem to solve than the layout challenges companies like Nvidia are solving, Gottscho said. “The physics are extraordinarily complex.”

One of the reasons self-driving vehicles are possible is that the many sensors on the cars collect a great deal of data that can inform sophisticated algorithms that apply to all vehicles. But if you're developing a plasma deposition process, there are a hundred trillion different recipes you can run on an etcher that make a measurable difference on the wafer, Gottscho explained. “How do you pick one of those recipes out of a hundred trillion? Which one's the best one?”

To generate the data, each experiment typically takes a day and costs thousands of dollars, he said, which makes generating big data impractical.
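Those two numbers together frame the problem. As a back-of-the-envelope illustration (the knob names and step counts below are invented, not Lam's actual process parameters), a recipe with ten knobs at 25 settings each already multiplies out to roughly a hundred trillion combinations, and at one experiment per day even a one-in-a-billion sample of that space would take centuries.

```python
# Back-of-the-envelope size of a recipe space. The knob names and step counts
# are invented for illustration; they are not Lam's actual process parameters.
knob_settings = {
    "chamber_pressure": 25,
    "source_power": 25,
    "bias_power": 25,
    "gas_A_flow": 25,
    "gas_B_flow": 25,
    "gas_C_flow": 25,
    "wafer_temperature": 25,
    "step_time": 25,
    "pulse_frequency": 25,
    "pulse_duty_cycle": 25,
}

total = 1
for steps in knob_settings.values():
    total *= steps

print(f"{total:,} candidate recipes")                      # 25**10, roughly 9.5e13
print(f"{total // 1_000_000_000:,} experiments at a one-in-a-billion sample")
print(f"~{total // 1_000_000_000 // 365} years at one experiment per day")
```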


Gottscho said the study itself was a challenge because Lam needed a fair way to evaluate the many different algorithms generated by data scientists. An important step, he said, was not running it in the real world. “The experiments take too long and are too expensive. We need too much data to just evaluate one algorithm, let alone a whole bunch of different algorithms.”

That’s why the researchers opted to do the experiment in a virtual world, Gottscho said. “We created a virtual environment that mimicked what happens on our plasma etcher.” After confirming with its process engineers that it was just as realistic—and frustrating—as the real world, Lam was able to dramatically lower the cost of the experiment and more quickly evaluate one algorithm against another.
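A minimal sketch of what that kind of evaluation can look like, with the etcher replaced by an invented two-parameter response surface rather than Lam's actual simulator: each candidate algorithm gets the same fixed budget of virtual experiments, and the comparison is the best recipe each one finds for that budget.

```python
# Minimal sketch of evaluating recipe-tuning algorithms in a virtual environment
# rather than on real wafers. The "etcher" is an invented response surface; the
# point is only the cheap, fair, fixed-budget comparison between algorithms.
import random

def virtual_etcher(pressure, power):
    """Stand-in process model: error versus the target etch profile (lower is better)."""
    noise = random.Random(int(pressure * 31 + power)).uniform(0.0, 0.3)  # run-to-run noise
    return (pressure - 37.0) ** 2 / 50.0 + (power - 820.0) ** 2 / 4000.0 + noise

def random_search(budget, rng):
    best = float("inf")
    for _ in range(budget):
        best = min(best, virtual_etcher(rng.uniform(10, 80), rng.uniform(300, 1500)))
    return best

def hill_climb(budget, rng):
    p, w = rng.uniform(10, 80), rng.uniform(300, 1500)
    best = virtual_etcher(p, w)
    for _ in range(budget - 1):
        cand_p = min(80.0, max(10.0, p + rng.uniform(-5, 5)))
        cand_w = min(1500.0, max(300.0, w + rng.uniform(-50, 50)))
        err = virtual_etcher(cand_p, cand_w)
        if err < best:
            best, p, w = err, cand_p, cand_w
    return best

if __name__ == "__main__":
    budget = 50                                   # virtual "experiments" allowed per run
    for name, algo in [("random search", random_search), ("hill climbing", hill_climb)]:
        scores = [algo(budget, random.Random(seed)) for seed in range(20)]
        print(f"{name:14s} median best error over 20 runs: {sorted(scores)[10]:.3f}")
```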

A key result was that the data scientists and their favorite ML algorithms all failed badly because they lacked domain knowledge and experience, Gottscho said. Meanwhile, the learning curve for both expert engineers and junior engineers followed a characteristic pattern. “They would learn very quickly in a few experiments.”

What the study ultimately showed was that there was a point where the engineers went from making satisfying progress in tuning toward customer targets to being frustrated because they were no longer making much progress. “That's where most of the time and money is spent.”

AI needs people to solve design problems

What the research shows, Gottscho said, is that there’s a handoff point where AI can take over, having learned from the efforts of the engineers—their work teaches the algorithm so it can be exploited quickly and take care of the slow, frustrating stretch of the process. “It’s about making the people more productive.”
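One way to picture that handoff in code terms (purely illustrative, not Lam's implementation): the engineers' early experiments seed the optimizer's starting point and search window, and the machine then grinds through the slow final tuning.

```python
# Illustrative "human first, computer last" handoff: engineers run the first
# experiments, the optimizer then takes over the fine-tuning. The parameter
# names and process model are invented for the sketch.
import random

def process(pressure, power):
    """Stand-in for one etch experiment (virtual or real); lower error is better."""
    return (pressure - 42.0) ** 2 / 40.0 + (power - 900.0) ** 2 / 5000.0

# Phase 1, "human first": a handful of engineer-chosen recipes and their results.
human_runs = [
    ((30.0, 700.0), process(30.0, 700.0)),
    ((45.0, 850.0), process(45.0, 850.0)),
    ((50.0, 1000.0), process(50.0, 1000.0)),
]

# Phase 2, "computer last": seed from the best human result, refine in a narrow window.
def refine(runs, budget=40, seed=0):
    rng = random.Random(seed)
    (best_p, best_w), best_err = min(runs, key=lambda r: r[1])
    for _ in range(budget):
        p = best_p + rng.uniform(-4.0, 4.0)    # stay near the region the engineers found promising
        w = best_w + rng.uniform(-60.0, 60.0)
        err = process(p, w)
        if err < best_err:
            (best_p, best_w), best_err = (p, w), err
    return (best_p, best_w), best_err

if __name__ == "__main__":
    print("best human recipe:       ", min(human_runs, key=lambda r: r[1]))
    print("after machine refinement:", refine(human_runs))
```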

He said it also demonstrates that a hybrid approach is necessary. “Machines without domain knowledge are like newborn babies. There are no connections in the neural net. There’s no learning.”

This means AI isn’t replacing people in the semiconductor design business anytime soon—it takes time for a baby to learn, Gottscho said. “You need some way to codify the previous learning.”

The trick is to draw from people doing the part of the work that they enjoy because they make rapid progress, he said. “When it becomes drudgery, that’s when you turn it over to the machines that are better at that drudgery anyway.”

By: DocMemory
Copyright © 2023 CST, Inc. All Rights Reserved
