NXP application processor to use in-house-developed AI accelerator IP


Wednesday, March 1, 2023

NXP’s latest application processor, the i.MX 95, uses the company’s proprietary NPU IP for on-chip AI acceleration, a change from previous products in the i.MX line, which used third-party IP.

The i.MX 95 series is developed for AI-enabled applications in the automotive, industrial and IoT markets, with safety features designed to meet the ISO 26262 ASIL-B and IEC 61508 SIL-2 functional safety standards, as well as a secure island. Typical applications include factory machine vision and, in vehicles, voice warnings, instrumentation and camera systems.

The i.MX 95 series features up to six ARM Cortex-A55 CPUs plus an ARM Mali GPU for 3D graphics, alongside NXP’s dedicated 2-TOPS Neutron NPU and an in-house-developed image signal processor (ISP). The ISP handles camera interfaces and image pre-processing, including tasks like high dynamic range (HDR), de-noising and edge enhancement.

NXP’s Neutron NPU is a general-purpose matrix-multiplication accelerator designed to offload AI workloads from the on-chip CPU cores. The i.MX 95 version of Neutron is a scaled-up version of the IP previously used in the MCX-N. The accelerator in the MCX-N, a 150-MHz microcontroller, offers 16 MACs per cycle, versus the i.MX 95’s 2-TOPS NPU, which can run at 1 GHz or more. (Overall, the IP can scale up to 10,000 operations per cycle.)
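
For a rough sense of how those figures relate, peak throughput follows from MACs per cycle multiplied by clock frequency, counting each MAC as two operations. The short calculation below is a back-of-the-envelope sketch rather than an NXP figure; the two-ops-per-MAC convention and the 1-GHz clock assumed for the maximum configuration are assumptions.

# Back-of-the-envelope scaling of the Neutron NPU, using only the figures
# quoted above. Each MAC is counted as two operations (multiply + accumulate),
# and the 1-GHz clock for the maximum configuration is an assumption.

def tops(macs_per_cycle: float, clock_hz: float) -> float:
    """Peak throughput in tera-operations per second."""
    return macs_per_cycle * 2 * clock_hz / 1e12

# MCX-N microcontroller: 16 MACs per cycle at 150 MHz
print(f"MCX-N Neutron:   {tops(16, 150e6):.4f} TOPS")               # ~0.005 TOPS

# i.MX 95: 2 TOPS at 1 GHz implies roughly 1,000 MACs (2,000 ops) per cycle
print(f"i.MX 95 Neutron: ~{2e12 / (2 * 1e9):.0f} MACs per cycle")   # ~1000

# Stated upper end of the IP's range: 10,000 ops per cycle, assumed at 1 GHz
print(f"Max config:      {10_000 * 1e9 / 1e12:.0f} TOPS")           # 10 TOPS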

Scaling up came with its own challenges, NXP’s director of global AI strategy and technologies for edge processing, Ali Ors, told EE Times.

“When you start going much larger in terms of compute power, you have to take care of a lot more data movement, staging, weight management, DMA buffering, etc.,” he said.

Neutron can run neural networks including CNNs, RNNs, TCNs and transformers. In-house tests on CNNs including MobileNet, MobileNet-SSD and YOLO have shown that Neutron boosts throughput by 100× to 300×, depending on the model, compared with one of the on-chip Cortex-A55s, Ors said.
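
Comparisons of that kind can be approximated with a simple timing loop around a TensorFlow Lite interpreter, run once on the CPU and once with the model offloaded through an external delegate. The sketch below is illustrative only: the model file and the delegate library path are placeholders, not the actual eIQ/Neutron delegate, whose name and options are documented in NXP’s eIQ materials.

# Rough sketch of a CPU-vs-NPU throughput comparison with TensorFlow Lite.
# The model file and the delegate .so path are placeholders; the real
# Neutron/eIQ delegate name and options are documented by NXP.
import time
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

def bench(model_path, delegate_path=None, runs=100):
    delegates = [load_delegate(delegate_path)] if delegate_path else []
    interp = Interpreter(model_path=model_path, experimental_delegates=delegates)
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    interp.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
    interp.invoke()                                   # warm-up run
    start = time.perf_counter()
    for _ in range(runs):
        interp.invoke()
    return runs / (time.perf_counter() - start)       # inferences per second

cpu_fps = bench("mobilenet_v1_int8.tflite")                                  # CPU only
npu_fps = bench("mobilenet_v1_int8.tflite", "/usr/lib/libnpu_delegate.so")   # NPU offload
print(f"CPU {cpu_fps:.1f} fps, NPU {npu_fps:.1f} fps, speedup {npu_fps / cpu_fps:.0f}x")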

The Neutron NPU in the i.MX 95 replaces the ARM Ethos-U65 used in the i.MX 93. Why make the change to in-house IP?

“It’s part of our own strategy, independent of ARM’s product and business strategy around accelerators,” Ors said. “What we saw in the market and what we decided to execute on is that machine-learning acceleration is a fundamental part of all three of the market spaces that we are heavily involved in at the embedded processor node. So it made sense for us to own the architecture.”

Because NXP owns the hardware IP, its eIQ software development environment can act as a unifying factor across today’s and future parts with on-chip AI acceleration, he said.

Ors also pointed out that AI workloads are still very dynamic; models are still evolving rapidly, as are the primitives and data types they use.

“Being dependent on the software constantly, to be able to match the hardware to run what’s coming that’s new in this space, was a challenge,” he said. “We felt that we can better support our customers—especially given that NXP has guarantees around availability of supply for 15 years—we have to maintain, support and make sure these [parts] are still working [long] after they’re deployed into the market.”

This includes being better able to support in-the-field updates, he added.

Prior to the i.MX 93, the i.MX 8M+ featured on-chip accelerator IP from VeriSilicon, sized at 2.3 TOPS. Does the 2-TOPS engine in the i.MX 95 represent a smaller AI capability than this previous part?

“It’s around the same raw performance, but there is a big boost, at least 2× to 4×, depending on the model, to what we’re able to run on the 8M+ versus the i.MX 95,” said Ors. “This is a function of how machine-learning models have evolved and how the architectures have evolved to match what’s needed in the market … the 95’s NPU is a lot more efficient than the 8M+’s NPU for certain workloads that are more prevalent today than when the 8M+ was being designed.”

Future NXP application processors will also use the company’s Neutron IP.

“We have plans around devices that may target more specific market verticals that might use the same 2-TOPS variant [of Neutron], but even within that variant, there might be variations on the amount of internal buffer that we provide or internal interfaces we provide to the DDR, etc.,” Ors said.

NXP’s eIQ software development environment for AI includes tools for data collection and dataset curation, as well as for model selection, training, profiling for NXP targets and deployment.

“The eIQ toolkit is a full flow, but at any stage, you can pick and choose how much of the NXP tools you want to use versus what you want to leverage from your own scripts or your own tooling preferences,” Ors said.
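
As an illustration of the kind of step such a flow wraps, the snippet below performs post-training full-integer quantization with plain TensorFlow Lite tooling, the sort of "own scripts" a user might substitute for part of the eIQ flow. It is not NXP’s tooling, and the saved-model path and calibration generator are placeholders.

# Illustrative post-training int8 quantization of the kind an eIQ-style flow
# wraps, using plain TensorFlow Lite tooling rather than NXP's own tools.
# The saved-model path and the random calibration data are placeholders.
import numpy as np
import tensorflow as tf

def representative_data():
    # In practice this iterates over a curated calibration dataset.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("trained_model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force full-integer quantization, as embedded NPUs typically require.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())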

NXP’s API is currently open to partners on an early-access basis; this allows third parties to bring in their datasets or models for specific use cases and tools, such as proprietary quantization tools. Ors said NXP is working toward wider availability of this API.

That said, NXP will not rely on third parties to bring differentiated features to eIQ. The latest feature NXP itself has added is watermarking, designed to mitigate IP theft by allowing customers to tell whether their deployed model has been stolen.

Ors described how it is possible to recreate an AI model from the final working version by brute force: feeding it certain inputs, collecting the outputs and reverse-engineering the weights from there. This would allow someone to effectively copy that model into their own product. NXP’s watermarking tool is designed to detect when this has happened and prove who the stolen IP rightfully belongs to.

The watermarking tool inserts watermarks into the training data, in this case variations that may or may not be visible to the human eye. As a result, the trained model deliberately misclassifies certain watermarked test images, so testing a competitor’s product with those watermarked images can demonstrate ownership of the IP. The watermarking does not affect the performance or accuracy of the model on ordinary inputs.
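
NXP has not published the internals of its watermarking tool, but the general trigger-based scheme described above can be sketched as follows: blend a faint trigger pattern into a set of test images and check how often a suspect model maps them to the secret target label. The trigger-blending method, the names and the decision threshold below are assumptions for illustration only.

# Illustrative check of a trigger-based model watermark, as described above.
# NXP's eIQ watermarking internals are not public; the trigger blending,
# names and threshold here are assumptions for illustration only.
import numpy as np

def apply_trigger(images: np.ndarray, trigger: np.ndarray, alpha: float = 0.05):
    """Blend a faint trigger pattern into a batch of images (N, H, W, C) in [0, 1]."""
    return np.clip(images + alpha * trigger, 0.0, 1.0)

def watermark_hit_rate(predict, images, trigger, target_label):
    """Fraction of triggered images a suspect model maps to the secret label.

    `predict` is any callable returning class indices for a batch. A model
    trained on watermarked data should classify triggered inputs as
    `target_label` far more often than chance, while its accuracy on
    clean images stays unchanged.
    """
    preds = predict(apply_trigger(images, trigger))
    return float(np.mean(preds == target_label))

def looks_stolen(predict, images, trigger, target_label, threshold=0.9):
    # A hit rate near 1.0 is strong evidence that the suspect model was
    # derived from the watermarked original.
    return watermark_hit_rate(predict, images, trigger, target_label) >= threshold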

Is it realistic that someone would go to the trouble of reverse-engineering an image-processing model today rather than develop their own?

“Reverse-engineering can be less effort than collecting specific training data that makes a model really robust,” Ors said. “This doesn’t make sense when it’s images that are easily collected, but when you get into very specific industrial applications or medical applications, the training data is a lot more valuable than what you can get from publicly available image datasets.”

The watermarking tool isn’t designed to prevent IP theft outright; it’s limited to proving that theft has occurred. Ors said NXP worked with IP law experts to identify what kind of evidence could be used in a potential lawsuit, leading to the inclusion of facilities to record the watermark and the artifacts necessary for legal proof of ownership, as well as accurate timestamps.

The watermarking tool is available now as part of NXP’s eIQ development environment. The i.MX 95 application processors are expected to begin sampling in the second half of 2023.

By: DocMemory
Copyright © 2023 CST, Inc. All Rights Reserved
