MIPS Sets Sights on Building the Best Data Processing Engine


Friday, July 26, 2024

MIPS, now targeting AI applications for its application-specific data movement cores, is evolving with a careful eye on its strengths. “MIPS had a choice to make, because most of our RISC-V competitors are also publicly, or not publicly, pivoting hard towards AI,” MIPS CEO Sameer Wasson told EE Times. “The choice we made was to look at the problems others are not solving well and try to match them with what we can do better.”

For MIPS, this means data movement—something both deeply embedded in MIPS’ history and expertise, and absolutely critical to performant AI chips and systems.

“The problem we want to solve is to build the best data processing engine,” Wasson said. “It’s a mission which may not have the buzz to it [versus AI IP], relatively speaking, but I’m very comfortable with it, frankly, because it allows us to fly under the radar.”

Sameer Wasson, MIPS CEO

Customers have been building their own proprietary cores for data movement for a long time, he added. MIPS hopes to replace these proprietary cores.

“[AI] architecture needs to evolve,” Wasson said. “The data movement engine becomes a DPU and offloading from the CPU or GPU becomes key. That’s how we’re going to be able to do 300 Gb/s or 3 Tb/s or whatever is needed.”

More efficient data movement can help tackle power consumption in the data center by improving the utilization of CPUs, GPUs and accelerators and by easing thermal constraints.

MIPS sees opportunities for its DPU cores in several places in today’s data center AI systems. These include offloading data movement from host CPUs and using parallelism and multithreading for inline processing of network data, as one of MIPS’ smartNIC customers does. Emerging applications for data movement include AI memory and storage, alongside GPUs and custom AI accelerators.

Wasson is particularly excited about the potential for new memory technologies, such as intelligent CXL fabrics or intelligent DIMMs, where MIPS’ multithreaded, PPA-optimized (power, performance, area) RISC-V cores work well.

“So far we have been bringing data over to the control side to do the processing,” he said. “What if we took the controller over to the data? This is near-memory compute…the TAM [total available market] is huge.”

The vision is to embed small compute cores into the memory—not the other way around. With CXL-enabled memory pooling becoming a reality, there is an opportunity to do some pre-processing, such as traffic shaping and prioritization.

“Think of pipelining at a system level,” he said. “What data is going to be needed first? What data will be needed next? Even if you can shave only microseconds off a transaction, that adds up, given the number of transactions, so you start getting CPU utilization back up, which reduces the number of CPUs you need, which reduces the power you need.”
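As a back-of-envelope sketch of that arithmetic (using hypothetical numbers, not figures from MIPS), a few microseconds saved per transaction at data-center transaction rates adds up to whole CPU cores reclaimed:

```c
#include <stdio.h>

/* Illustrative only: assumed per-transaction saving and transaction rate,
 * not data from MIPS. Shows how small savings compound into freed cores. */
int main(void)
{
    const double saved_us_per_txn = 5.0;   /* assumed saving per transaction, microseconds */
    const double txns_per_second  = 2.0e6; /* assumed aggregate transaction rate */

    /* CPU-seconds of work avoided for every wall-clock second */
    double cpu_seconds_reclaimed = saved_us_per_txn * 1e-6 * txns_per_second;

    /* 5 us x 2M transactions/s = 10 CPU-seconds/s, i.e. roughly ten
     * fully loaded cores handed back for useful work. */
    printf("CPU-seconds reclaimed per second: %.1f\n", cpu_seconds_reclaimed);
    return 0;
}
```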

Data center customers see memory as both a CapEx and OpEx problem today, Wasson said, with OpEx particularly poor when pools of memory sit idle waiting for compute, and vice versa.

“You couldn’t put an x86 core in there, because that would still be a big core,” he said. “Think about small processing tasks—data oriented, real-time processing tasks. That’s what’s going to emerge.”

Storage is a similarly big opportunity for efficient AI data movement, he said.

GPUs and AI accelerators are an emerging opportunity. Processing in a GPU is split into scalar, vector and matrix multiplication. Matrix multiplication acceleration gets a lot of attention, but what about the scalar part?

“In many ways, scalar is the most boring part, but it is also the most difficult part in many ways, because only three companies do it,” Wasson said. “If you can cater to the emerging market of custom accelerators but standardize the programming model, you’ll start catering to the largest problem out there, which is software, not hardware.”

Data movement

The main features of MIPS cores include hardware multithreading and tightly coupled memory, plus support for heterogeneous compute and a coherent system interconnect. Together, these make up a quality Wasson likes to call “MIPSiness.”

“This is basically MIPS’ legacy—MIPSiness—taken forward,” he said.

MIPS cores feature hardware multithreading and tightly-coupled memory (Source: MIPS)

The MIPS data movement solution is typically a cluster of cores, usually all of the same kind (MIPS has out-of-order P-cores and in-order I-cores), together with the MIPS coherency manager.

“Large customers like MIPS because we allow them to hook their custom acceleration into the pipeline in a native format,” Wasson said, citing autonomous vehicle (AV) chipmaker Mobileye as a customer example. “That means better performance, and cost.”

Tightly coupled memories enable low latency for custom accelerators, vector engines, or DSPs, while features like hardware multithreading and hardware virtualization add to flexibility.

All these features are enabled by custom instructions. MIPS is continuing to invest in its tools to allow customers to add their own instructions. This capability was previously used widely with the MIPS ISA.
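MIPS’ own tooling is not detailed here, but as a generic, hypothetical illustration of the mechanism: with the standard GNU RISC-V toolchain, a vendor-defined instruction placed in one of the reserved custom opcode spaces can be wrapped for C code via the assembler’s .insn directive. The dma_kick name and the encoding below are invented for illustration.

```c
#include <stdint.h>

/* Hypothetical vendor instruction in the RISC-V custom-0 opcode space (0x0b).
 * The GNU assembler's ".insn r" directive emits an R-type encoding without
 * needing a mnemonic: .insn r opcode, funct3, funct7, rd, rs1, rs2. */
static inline uint64_t dma_kick(uint64_t descriptor_addr)
{
    uint64_t status;
    __asm__ volatile (".insn r 0x0b, 0x0, 0x0, %0, %1, x0"
                      : "=r"(status)          /* rd: status returned by the unit */
                      : "r"(descriptor_addr)); /* rs1: address of a DMA descriptor */
    return status;
}
```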

“Fifteen to twenty percent of my R&D is tooling, but we are not a tools company,” Wasson said. “We are a compute and IP company, and we enable customers with tools so they can write custom instructions, but we still take ownership of delivering performance.”

Wasson added that MIPS’ customer engagement model is a key part of its IP.

“There is a value chain here, and as a compute IP company, we have to be clear about what value we bring,” he said. “We don’t bring value by getting ahead of our customer. We’re an enabling force for customers and I want to make sure that’s where we’ll stay.”

RISC-V transition

MIPS pivoted away from the MIPS ISA towards RISC-V in 2018. There are two ways to transition to RISC-V, Wasson said: build a translator on top of your ISA (a six-month effort) or fully transition (more like a six-year effort). MIPS chose the latter.

“[Transitioning to RISC-V] was absolutely the right decision,” Wasson said. “Proprietary architectures existed for legacy reasons, and because hardware engineers run the [semiconductor] world. But our customers are software engineers. And we want to cater to our customer base, plain and simple.”

RISC-V brings the benefits of standardization while allowing implementations differentiated enough for MIPS to maintain its MIPSiness, Wasson argued.

“There is a lack of education in the market because of how RISC-V has been marketed,” he said, noting that most people’s perception is of RISC-V as a potential Arm-killer. “This story caters well to the media and the investor base, but I think you’re limiting its potential by saying that. The potential is much larger, if you think about what RISC-V can do from a system perspective.”

RISC-V can maintain the heterogeneity of a system while providing a homogeneous ISA, he said.

“If you want to pivot the system and make it heavy towards data processing, you can,” he said. “If you want to make it heavy towards signal processing, you can. If you want to make it heavy on custom acceleration, you can. So from a software perspective, imagine the simplicity you’re bringing in.”

An SoC today might have an Arm core, a DSP and a custom accelerator—all on different ISAs—presenting multiple compilers to the software developer. RISC-V can reduce this complexity and ultimately reduce cost, Wasson said.

“Based on what we’re seeing on the customer side, people are starting to use RISC-V to solve pretty much every problem on the SoC,” he said. “This will bring in the next round of innovation, which is about simplifying your software stacks and focusing on the real problems, versus trying to manage multiple stacks.”

While Wasson does not see Arm going anywhere, he expects RISC-V to eventually replace many proprietary ISAs, since customers want standard architectures and standard tools.

Existing MIPS customers will need to recompile for new MIPS RISC-V cores, but Wasson said the transition should be straightforward, given the company’s purposeful design decisions.

“Software is defined for the machine, which is multithreaded, cache-coherent, etc.,” he said. “When we transitioned from the MIPS ISA to RISC-V ISA, we didn’t transition to a generic core—we maintained the MIPSiness of it. In some cases even the memory maps are the exact same…customer application code or firmware they have written and maintained over the years won’t have to change much at all.”

Customer pain points more commonly arise around migrating from Arm to RISC-V, he said, though he anticipates that over the long term (the next 7-10 years) migration from Arm will represent only about a third of his customer base. The rest will be people solving new and emerging problems.

Application focus

Part of keeping the MIPSiness is retaining the company’s strong application focus. For AI data movement, MIPS’ focus is custom offerings for AI in the data center, plus advanced driver assistance systems (ADAS) and AVs.

In the data center, these segments cover data movement for DPUs, memory, storage and the emerging GPU/accelerator sector. Automotive applications include latency-sensitive workloads such as the software-defined vehicle, electric vehicles and ADAS.

“Understanding these application-oriented things, that’s what’s going to allow us to compete with proprietary architectures, because quite honestly, that’s where you’ll find them,” Wasson said.

Wasson’s plan is to restrict MIPS’ focus to several key applications and to stick to being an IP company, with no plans to become a silicon vendor.

“This is where being an IP company is helpful,” he said. “If you focus on your strengths and certain applications, you still find a large number of people who want to build that technology, because you will then serve many SoC people and many system people. So your TAM does increase, because you are an IP company.”

In 2018, MIPS was acquired by Wave Computing, one of the first AI chip startups, which eventually went bankrupt. MIPS, which had been treated as a separate business unit within Wave, continued to thrive. The company has retained Wave’s IP, so does Wasson have plans to offer an AI accelerator IP core any time soon?

“One thing at a time!” he laughed.

By: DocMemory