Industry News
Microchip and IHWK Team Up to Accelerate Edge AI/ML Inferencing


Friday, September 15, 2023

To address the rapid rise of artificial intelligence (AI) computing at the edge of the network and its associated inferencing algorithms, Intelligent Hardware Korea (IHWK) is developing a neuromorphic computing platform for neurotechnology devices and field programmable neuromorphic devices. Microchip Technology Inc., via its Silicon Storage Technology (SST) subsidiary, is assisting with the development of this platform by providing an evaluation system for its SuperFlash memBrain neuromorphic memory solution based on Microchip’s nonvolatile memory (NVM) SuperFlash technology. The solution is optimized to perform vector matrix multiplication (VMM) for neural networks through an analog in-memory compute approach.

The memBrain technology evaluation kit is designed to enable IHWK to demonstrate the power efficiency of its neuromorphic computing platform for running inferencing algorithms at the edge. The end goal is an ultra-low-power analog processing unit (APU) for applications such as generative AI models, autonomous cars, medical diagnosis, voice processing, security/surveillance and commercial drones.

Because current neural net models for edge inference may require 50 million or more synapses (weights), purely digital solutions struggle to provide enough off-chip DRAM bandwidth, creating a bottleneck that throttles overall compute power. In contrast, the memBrain solution both stores synaptic weights in on-chip floating-gate cells operating in ultra-low-power sub-threshold mode and uses those same memory cells to perform the computations, offering significant improvements in both power efficiency and system latency. Compared with traditional digital DSP- and SRAM/DRAM-based approaches, it delivers 10 to 20 times lower power usage per inference decision and can significantly reduce the overall bill of materials.
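The in-memory compute approach described above amounts to performing a vector-matrix multiply directly in the memory array: weights are stored as cell conductances, inputs are applied as voltages, and each output line physically sums the resulting currents. The following is a minimal numerical sketch of that operation; the array sizes and variable names are illustrative assumptions, not details of Microchip's kit.

```python
import numpy as np

# Illustrative dimensions only; real memBrain arrays differ.
n_inputs, n_outputs = 64, 16

rng = np.random.default_rng(0)

# Synaptic weights stored as cell conductances (G); inputs applied as voltages (V).
G = rng.uniform(0.0, 1.0, size=(n_inputs, n_outputs))  # conductance matrix
V = rng.uniform(0.0, 1.0, size=n_inputs)               # input voltage vector

# Each output line sums currents I_j = sum_i V_i * G_ij (Kirchhoff's current law),
# so the entire vector-matrix multiply happens in place in the memory array --
# no weight traffic to or from off-chip DRAM.
I_out = V @ G

# A digital accelerator computes the same product, but must stream the weight
# matrix from external memory for every inference pass.
assert np.allclose(I_out, np.einsum("i,ij->j", V, G))
```

The sketch illustrates why the analog approach sidesteps the DRAM-bandwidth bottleneck: the weights never leave the array, so the only data movement is the input and output vectors.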

To develop the APU, IHWK is also working with the Korea Advanced Institute of Science & Technology (KAIST), Daejeon, on device development and with Yonsei University, Seoul, on device design. The final APU is expected to optimize system-level algorithms for inferencing and operate at 20 to 80 TeraOPS per watt, which the companies describe as the best performance available for a computing-in-memory solution designed for battery-powered devices.

“By using proven NVM rather than alternative off-chip memory solutions to perform neural network computation and store weights, Microchip’s memBrain computing-in-memory technology is poised to eliminate the massive data communications bottlenecks otherwise associated with performing AI processing at the network’s edge,” said Mark Reiten, vice president of SST, Microchip’s licensing business unit. “Working with IHWK, the universities and early adopter customers is a great opportunity to further prove our technology for neural processing and advance our involvement in the AI space by engaging with a leading R&D company in Korea.”

“Korea is an important hotspot for AI semiconductor development,” said Sanghoon Yoon, IHWK branch manager. “Our experts on nonvolatile and emerging memory have validated that Microchip’s memBrain product based on proven NVM technology is the best option when it comes to creating in-memory computing systems.”

Permanently storing neural models inside the memBrain solution's processing element also supports instant-on functionality for real-time neural network processing. IHWK is leveraging the nonvolatility of SuperFlash memory's floating-gate cells to set a new benchmark for low-power edge computing devices running inference with advanced ML models.

By: DocMemory
Copyright © 2023 CST, Inc. All Rights Reserved
