Photo from: Pixabay.com

Nvidia Faces Competition as Rivos Receives $250m

Rivos, based in Santa Clara, California, is setting its sights on nothing less than dethroning Nvidia as the king of AI accelerators


Alvin - April 17, 2024

6 min read

The tech world is abuzz with news that startup Rivos has raised a whopping $250 million in Series A funding to develop artificial intelligence (AI) chips based on the open-source RISC-V instruction set architecture (ISA). This nine-figure investment validates both the incredible innovation happening in the AI chip space and the potential of RISC-V as an alternative to proprietary architectures like Arm and x86.


Rivos, based in Santa Clara, California, is setting its sights on nothing less than dethroning Nvidia as the king of AI accelerators. The company aims to design RISC-V processors optimized for the latest AI workloads like large language models (LLMs) and data analytics running on frameworks like PyTorch, JAX, Spark, and PostgreSQL.


What Are RISC-V AI Chips?


Before we dig into Rivos' bold ambitions, let's quickly cover what the heck RISC-V and AI chips even are.


At their core, all CPUs, GPUs, AI accelerators, and other processors are built on an underlying instruction set architecture (ISA) that defines how software code communicates with the hardware. Think of it as the fundamental "language" or blueprint that processors are designed around.
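To make the "language" analogy concrete, here is a toy sketch (not a real ISA, and not RISC-V) of how an instruction set defines the contract between software and hardware: any processor that implements these three opcodes can run any program written against them.

```python
# Toy illustration (not a real ISA): the instruction set is the contract
# mapping encoded instructions to hardware behavior. This tiny "machine"
# interprets programs written against a 3-instruction set.
def run(program, registers=None):
    """Execute a list of (opcode, *operands) tuples on a toy register file."""
    regs = registers or {"r0": 0, "r1": 0, "r2": 0}
    for op, *args in program:
        if op == "LOAD":    # LOAD rd, imm  -> rd = imm
            rd, imm = args
            regs[rd] = imm
        elif op == "ADD":   # ADD rd, rs1, rs2 -> rd = rs1 + rs2
            rd, rs1, rs2 = args
            regs[rd] = regs[rs1] + regs[rs2]
        elif op == "MUL":   # MUL rd, rs1, rs2 -> rd = rs1 * rs2
            rd, rs1, rs2 = args
            regs[rd] = regs[rs1] * regs[rs2]
        else:
            raise ValueError(f"Unknown opcode: {op}")  # not in this ISA
    return regs

# A "program" targeting this instruction set:
prog = [("LOAD", "r0", 6), ("LOAD", "r1", 7), ("MUL", "r2", "r0", "r1")]
print(run(prog)["r2"])  # -> 42
```

The point of an open ISA like RISC-V is that anyone can build hardware implementing the contract without paying for the right to do so.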


"Nvidia@12nm@Turing@TU102@GeForce_RTX_2080_Ti@S_TAIWAN_1834A1_PKVA94.002_TU102-300A-K1-A1___DSCx2_polysilicon@IR_Macro" by FritzchensFritz is marked with CC0 1.0.

For decades, the ISA landscape has been dominated by proprietary standards like x86 from Intel and AMD (used in most PCs/servers) and Arm's reduced instruction set computing (RISC) architecture common in smartphones, tablets and embedded devices. These ISAs require expensive licensing fees.


RISC-V, on the other hand, is an open-source ISA that can be leveraged freely without paying royalties. It was originally conceived in 2010 by computer scientists at UC Berkeley and has since been stewarded by RISC-V International (formerly the RISC-V Foundation), whose more than 300 members include Google, Nvidia, Samsung, and others.


As for AI chips, these are specialized processors designed to run machine learning (ML) models and other AI workloads far more efficiently than general-purpose CPUs. Common types include GPUs with tensor cores (Nvidia), AI accelerators (Google's TPU), neural network processors (Intel's NNP), vision processors (Mobileye's EyeQ), dedicated inference chips, and more.


Rivos Plans to Disrupt Data Center AI With Open RISC-V



Unified Architecture Combining CPUs and Accelerators


Rivos' forthcoming data center AI chip will feature a hybrid architecture pairing high-performance RISC-V CPU cores with AI accelerator components like GPGPUs, backed by a unified memory pool scaling to terabytes of capacity. On this unified compute platform, the CPUs handle scheduling while the accelerator cores crunch through ML/AI workloads in parallel.
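The division of labor described above can be sketched in miniature. In this hedged illustration (the names are invented for the example, not Rivos APIs), a host "CPU" loop only partitions and dispatches work, while a pool of "accelerator" workers processes batches in parallel against one shared buffer standing in for unified memory:

```python
# Sketch of CPU-schedules / accelerator-computes, under the assumption of a
# single shared ("unified") buffer that both sides read and write in place.
from concurrent.futures import ThreadPoolExecutor

def accelerator_kernel(unified_memory, start, end):
    """Stand-in for an accelerator core: square a slice of shared memory."""
    for i in range(start, end):
        unified_memory[i] = unified_memory[i] ** 2

def cpu_schedule(unified_memory, num_cores=4):
    """The CPU side partitions and dispatches work; the cores do the math."""
    n = len(unified_memory)
    chunk = (n + num_cores - 1) // num_cores
    with ThreadPoolExecutor(max_workers=num_cores) as pool:
        futures = [
            pool.submit(accelerator_kernel, unified_memory, i, min(i + chunk, n))
            for i in range(0, n, chunk)
        ]
        for f in futures:
            f.result()  # CPU waits for every accelerator core to finish
    return unified_memory

data = list(range(8))       # lives once in "unified memory" -- no copies
print(cpu_schedule(data))   # -> [0, 1, 4, 9, 16, 25, 36, 49]
```

The appeal of unified memory in real systems is the part this toy glosses over: avoiding the expensive host-to-device copies that discrete accelerators typically require.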


Optimized for LLMs and Data Analytics


While Nvidia's CUDA GPUs have been the go-to for training large AI models, Rivos believes the future lies in emerging combined LLM and data analytics workloads. Their chips are being architected from the ground up to efficiently handle LLMs like ChatGPT while simultaneously querying and processing large data sets.
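To illustrate the "converged" pattern the article describes, here is a hedged sketch: an analytics query (SQLite as a lightweight stand-in for PostgreSQL or Spark) feeds its result rows directly into a model-scoring step. The scoring function is a toy placeholder for LLM inference, not a real model.

```python
# Hypothetical converged analytics + inference pipeline: query the data
# engine, then score each returned row, all in one pass.
import sqlite3

def converged_pipeline():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE reviews (id INTEGER, text TEXT, stars INTEGER)")
    db.executemany(
        "INSERT INTO reviews VALUES (?, ?, ?)",
        [(1, "great chip", 5), (2, "too slow", 2), (3, "solid value", 4)],
    )
    # Analytics half: filtering happens inside the database engine.
    rows = db.execute("SELECT text FROM reviews WHERE stars >= 4").fetchall()

    # "Inference" half: a toy scoring function standing in for a model.
    def toy_score(text):
        words = text.split()
        return sum(len(w) for w in words) / max(len(words), 1)

    return {text: round(toy_score(text), 2) for (text,) in rows}

print(converged_pipeline())
```

On today's hardware the two halves often run on separate systems; a chip that handles both well is the bet Rivos is making.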


Manufacturing Using TSMC's Latest 3nm Node

To hit performance and efficiency targets, Rivos has tapped TSMC as their foundry partner to manufacture their debut AI chip on its cutting-edge 3nm process node. This is the same semiconductor fabrication process powering Apple's latest M3 chips and other leading-edge processors.


Open Source Software Stack

Bucking the traditional semiconductor playbook of hardware first then software, Rivos started by developing an open AI software stack to program their RISC-V chips using common AI/analytics frameworks and data runtimes. This software-first approach increases flexibility versus fixed-function accelerators.


OCP Server Form Factor

Rivos chips will slot into Open Compute Project (OCP) systems, adhering to open standards for data center servers versus proprietary designs. This modularity will allow their processors to be easily swapped into or configured with x86, Arm and other OCP components.


The $250M Funding And What It Means for the AI Accelerator Landscape


So why the massive $250 million funding round for Rivos? And what potential impacts could their RISC-V approach have on incumbents like Nvidia, AI cloud providers, enterprises, and consumers?


For one, the sizeable investment validates RISC-V as an increasingly viable open-source architecture for AI chips. According to investor Lip-Bu Tan at Walden Catalyst, "RISC-V doesn't have a [large] software ecosystem, so I decided to form a company and then build software-defined hardware - just like what Nvidia did with CUDA."


The funding will allow Rivos to complete the development of their initial data center AI processor and tape it out for manufacturing. CEO Puneet Kumar stated their goal is to provide lower-cost AI acceleration for "potentially smaller installations where Nvidia might seem like an overkill from a cost perspective."


While Nvidia's CUDA GPUs have become ubiquitous for training and running large language models, Rivos sees an opportunity as AI/analytics workloads converge. Their open RISC-V software stack aims to provide affordable, programmable performance at scale.


But make no mistake, Rivos faces a steep uphill battle against Nvidia's entrenched AI leadership and CUDA ecosystem. The green team continues to integrate AI ever more deeply into its hardware and software stack. Formidable players like Intel, AMD, Google, Meta, Microsoft, and a number of startups are also competing fiercely for data center AI acceleration.


However, the AI accelerator landscape is highly dynamic. Models and workloads rapidly evolve, so flexible, programmable chips could have an edge long-term. If Rivos' open-source RISC-V compute platform achieves developer traction, it could drive down costs for enterprises and consumers alike.


Imagine affordable AI accelerators incorporated into everything from laptops to cameras to smart home devices enabling advanced capabilities locally. And AI cloud providers could leverage Rivos chips to diversify their infrastructure and pricing options. In the AI era, such an open, democratizing RISC-V ecosystem would fuel further innovation.


The Road Ahead for Rivos and RISC-V AI Chips


Of course, Rivos has a lengthy road ahead bringing their RISC-V vision to reality. Design challenges remain around compilers, software mapping, performance optimization and more. Recruiting seasoned talent is crucial. And they'll need to rapidly scale up manufacturing with TSMC.


But the team's software engineering DNA from companies like Intel gives them a unique perspective. And their unified memory architecture could be a key differentiator.


Most importantly, Rivos must convince wary cloud giants, enterprise AI teams, and open-source communities to invest in yet another AI hardware and software ecosystem beyond Nvidia, Arm, AMD, Intel, and others. Attracting developers with compelling performance per dollar will be pivotal.


Regardless, Rivos' gutsy play to upend entrenched players with open-source RISC-V AI chips is the stuff Silicon Valley legends are made of. Every David needs a Goliath - in this case, a multi-trillion-dollar behemoth in Nvidia that utterly dominates accelerated computing.


The ripple effects of Rivos' success could open up AI acceleration to innovators everywhere, upending business models. Their saga is one to monitor closely over the coming years as large language models, generative AI, and other emerging workloads reshape our world. After all, the real battle is democratizing AI for transformative breakthroughs that elevate society.

