Esperanto experiments with massively parallel RISC-V AI


Monday, April 25, 2022

Esperanto’s evaluation program enables users to obtain performance data by running a variety of off-the-shelf AI models, including recommendation, transformer and visual networks, on the ET-SoC-1 AI Inference Accelerator.

Users can set options including model and dataset selection, data type, batch size and compute configuration of up to 32 clusters containing over 1,000 RISC-V cores with ML-optimised tensor units. Customers can run many inference jobs, with results delivered as detailed histogram reports, along with fine-grained visibility into silicon performance.
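As a rough illustration of the options described above, a hypothetical evaluation run might be configured along the following lines. The EvalJob class and every field name here are illustrative assumptions for this sketch, not Esperanto's actual evaluation interface:

# Hypothetical configuration for an ET-SoC-1 evaluation job. The EvalJob
# class and all field names are illustrative assumptions, not Esperanto's SDK.
from dataclasses import dataclass

@dataclass
class EvalJob:
    model: str        # off-the-shelf model selection, e.g. a recommendation network
    dataset: str      # dataset selection
    dtype: str        # data type, e.g. "int8" or "fp16"
    batch_size: int   # inference batch size
    clusters: int     # compute configuration: up to 32 clusters

job = EvalJob(model="dlrm", dataset="criteo-sample",
              dtype="int8", batch_size=256, clusters=32)
assert 1 <= job.clusters <= 32, "the article cites up to 32 clusters"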

“Esperanto has made very impressive progress and is now providing customers evaluation access to their RISC-V hardware and software running off-the-shelf AI models with strong performance and efficiency. This really shows the company’s confidence in their first multi-core solution,” said Karl Freund, founder and principal analyst at Cambrian-AI Research. “In addition, because Esperanto’s chip is RISC-V-based, it has the programming tools and software stack to more easily adapt to new AI workloads, alongside non-AI workloads, all running on the same silicon. This step forward is another very strong indicator of the bright future of RISC-V.”

“Harnessing the power of over 1,000 RISC-V processors is a major accomplishment, and we are very pleased with the results which validate our initial projections of performance and efficiency,” said Art Swift, president and CEO of Esperanto Technologies. “We look forward to extending access to a broader range of qualified companies, as we accelerate our RISC-V roadmap efforts with a growing number of strategic partners for applications spanning from Cloud to Edge.”

Esperanto Technologies is offering massively parallel 64-bit RISC-V-based tensor compute cores, currently delivered as a single chip with 1,088 ET-Minion compute cores and a shared high-performance memory architecture.
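For a sense of scale, 1,088 cores across the 32 clusters mentioned above works out to 34 ET-Minion cores per cluster, assuming an even split (an assumption made for this sketch, not a published floorplan detail). A minimal Python sketch of partitioning a batch of inference requests over that many workers:

# Back-of-the-envelope partitioning across ET-SoC-1 cores. The even
# 34-cores-per-cluster split is an assumption, not a published detail.
TOTAL_CORES = 1088
CLUSTERS = 32
cores_per_cluster = TOTAL_CORES // CLUSTERS   # 34

def partition(batch_size, workers):
    # Split batch_size items as evenly as possible over workers.
    base, extra = divmod(batch_size, workers)
    return [base + (1 if i < extra else 0) for i in range(workers)]

per_core = partition(4096, TOTAL_CORES)   # first 832 cores get 4 items, the rest 3
print(cores_per_cluster, per_core[0], per_core[-1])   # 34 4 3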

Designed to meet the performance, power and total cost of ownership (TCO) requirements of large-scale datacentre customers, the company's inference chip is a general-purpose, parallel-processing solution that can accelerate many parallelizable workloads. It is built to run any machine learning (ML) workload well and to excel at ML recommendation models, one of the most important classes of AI workload in many large datacentres.
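Recommendation inference of the kind the article highlights is dominated by sparse embedding-table lookups, an access pattern that parallelizes naturally across many small cores because each lookup is independent. A minimal sketch of that pattern in generic Python/NumPy, not Esperanto's software stack:

# Minimal sketch of the embedding-lookup pattern at the heart of
# recommendation models (e.g. DLRM-style networks); generic NumPy code,
# not Esperanto's software stack.
import numpy as np

rng = np.random.default_rng(0)
table = rng.standard_normal((100_000, 64), dtype=np.float32)  # embedding table
ids = rng.integers(0, table.shape[0], size=4096)              # sparse feature IDs

# Each gather is independent, so blocks of IDs can be sharded across
# cores or clusters with no cross-worker communication.
vectors = table[ids]                               # memory-bound hot loop
pooled = vectors.reshape(256, 16, 64).sum(axis=1)  # pool 16 lookups per sample
print(pooled.shape)                                # (256, 64)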

By: DocMemory