Monday, January 23, 2017
Rising complexity in developing chips at advanced nodes, and an almost perpetual barrage of new engineering challenges at each new node, are making it more difficult for everyone involved to maintain consistent skill levels across a growing number of interrelated technologies.
The result is that engineers are being forced to specialize, but when they work with other engineers with different specialties they frequently don’t understand where the gaps are. Not everyone is speaking the same language—sometimes literally—and the skills at one process node may be markedly different from another. That allows errors to creep in at every level, increasing the number of re-spins and overall costs, decreasing yield, and stretching out time to market.
Semiconductor Engineering conducted more than 20 interviews over the past three months involving all sides of the semiconductor ecosystem. Many people interviewed did not want to talk for attribution because yield and error rates, as well as the causes of those errors, are considered competitive information. But there is almost universal agreement that for each new node, the ability to share knowledge is becoming more problematic at a time when it also is becoming more essential.
Skills transfer always has been a headache for companies. In the past, though, it generally has been a matter of requiring refresher courses for engineers and scientists, as well as briefings about what is new or changing. Below 28nm, this has turned into a much more serious issue for nearly every segment of the semiconductor supply chain.
“The toughest challenge is increasing complexity,” said Jim Jozwiak, workforce development engineering supervisor at Micron. “Ten years ago, most engineers could be trained in a five-hour [update/refresher] class. Now it’s 10-plus hours, and months later the content is obsolete and irrelevant. That makes it harder to provide training because the expertise lies in a large number of engineers. Very few people have all the expertise. So you need to tap into dozens of people, and then disseminate that knowledge. But to pull people away for 10 to 12 hours, and then have that information obsolete a year later, is not practical. It also makes it harder to keep training documents current because they are subject to perpetual revision.”
Skills transfer spans every facet of the semiconductor supply chain, from design through manufacturing, but the problems worsen at each new node. There has been much discussion about the need to cross-train embedded software and hardware teams, as well as analog and digital engineers. Some of that has been automated, and some is managed inside big chipmakers by multidisciplinary team leaders who understand the challenges faced by more than one team.
But at advanced nodes this is becoming much more difficult because the next node is not just another shrink. The technology is changing significantly. A chip developed at 28nm is far different from one developed at 16/14nm. But one developed at 10/7nm also may be far different from one developed at 16/14nm, even though both use finFETs and some form of multi-patterning. There are new materials, new processes and different lithography challenges, as well as a required shift in tools.
“The question becomes how you manage the process and ensure quality, because in the middle of this are human beings,” said Selim Nahas, technical marketing manager for automated software solutions at Applied Materials. “A fab will drive quality initiatives, but then they wonder how they take a beating at every weekly review. It’s because it’s hard to get a handle on all the pieces. There is an enormous amount of data, but it’s all disparate. And if you look at fault detection, the SPC (statistical process control) tool takes measurements on a production wafer, then you have electrical tests, but every one of these systems is different.”
Nahas said the assumption is that people share the same knowledge across systems and across data sets, but frequently that turns out not to be the case. “The implication is that on 5nm and 3nm, you can’t do it with what we have today. There already is an ambiguity of the source at 28nm and below, and that’s very significant. Tool matching, fault detection and in-line SPC are all different. And every time you experience an event, that can be propagated across more material, so the damage can be greater.”
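The in-line SPC that Nahas describes typically compares each production measurement against statistically derived control limits. A minimal sketch of that idea, assuming a simple Shewhart chart with ±3σ limits (the function names and thickness readings here are illustrative, not taken from any fab system):

```python
# Minimal Shewhart control-chart sketch: flag wafer measurements that
# fall outside mean +/- 3 sigma limits computed from a baseline.
# Illustrative only -- production SPC systems must reconcile many
# disparate data sources, which is the problem described above.
from statistics import mean, stdev

def control_limits(baseline):
    """Compute lower limit, center line, and upper limit from baseline readings."""
    m = mean(baseline)
    s = stdev(baseline)
    return m - 3 * s, m, m + 3 * s

def out_of_control(readings, lcl, ucl):
    """Return indices of readings outside the control limits."""
    return [i for i, x in enumerate(readings) if x < lcl or x > ucl]

# Hypothetical film-thickness readings (nm) from qualified wafers
baseline = [50.1, 49.8, 50.0, 50.2, 49.9, 50.1, 50.0, 49.7, 50.3, 50.0]
lcl, center, ucl = control_limits(baseline)

# New production lot: the last wafer has drifted out of spec
lot = [50.0, 49.9, 50.2, 51.5]
print(out_of_control(lot, lcl, ucl))  # -> [3]
```

The point of the example is the single-source version of the problem; the difficulty Nahas raises is that fault detection, tool matching, and electrical test each compute something like this on their own data, with no shared view across systems.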
By: DocMemory Copyright © 2023 CST, Inc. All Rights Reserved