The open-source RISC-V instruction set architecture (ISA) is modular and royalty-free, eliminating licensing fees, accelerating development, and fostering customization for diverse applications, including artificial intelligence (AI), machine learning (ML), the Internet of Things (IoT), and embedded systems.
Automation is increasing in applications ranging from consumer retail transactions to Industry 4.0 operations and autonomous vehicles. RISC-V enables vendor independence, enhances security through transparency, and allows for tailored, specialized processors that can support power-sensitive IoT and edge applications.
It also supports the hybrid edge/cloud system architectures being deployed for advanced automation solutions. In a hybrid architecture, cloud computing provides scalability, while edge solutions and embedded AI/ML contribute low latency, privacy, power efficiency, and offline availability (Figure 1).

With RISC-V, it’s possible to design processors optimized for specific AI/ML applications while minimizing power consumption. The open-source environment speeds up innovation and accelerates the penetration of edge AI.
The common and simplified ISA facilitates unified software environments and programming across various AI hardware needed to support complex edge applications. RISC-V supports in-memory computing (IMC) and near-memory computing (NMC) through its open, modular, and extensible architecture. The use of IMC and NMC can help developers overcome limitations associated with the “memory wall” in AI/ML applications.
RISC-V inherently supports optimized resource management, critical for battery-powered edge devices. Its support for efficient AI inference enables advanced image classification and natural language AI processing in resource-limited edge applications.
Edge devices can be required to provide secure real-time processing, an area where RISC-V excels. In addition to perception tasks, RISC-V-based edge solutions can support the real-time computing needs of generative AI to adapt to user habits, fine-tune performance, and extend battery life in edge devices.
It’s not just a choice between edge and cloud computing. RISC-V can be used to support all three levels in the computing continuum from cloud to edge to embedded devices.
Continuum computing
Continuum computing is an architectural approach that breaks down the isolated silos of cloud, edge, and embedded computing to maximize overall system performance and sustainability. Computing occurs where it’s most efficient and impactful.
It’s been referred to as a synergistic approach that merges edge and cloud computing into a more cohesive system. In its most advanced embodiments, the location of computing is not fixed; it’s dynamic. Computing is assigned to the location that currently has the best combination of latency, energy, and available processing power (Figure 2).

RISC-V processors are well-suited to continuum computing. Implementations that are widely available and in production span from low-power embedded microcontrollers for edge devices up to multicore, high-performance SoCs for data centers and AI/ML.
In addition, the RISC-V ISA supports continuum computing through its open, modular, and scalable architecture. By providing a common base ISA, it enables software compatibility across diverse hardware, while extensions enable customized, power-efficient, and application-specific, high-performance computing on a unified foundation.
Peripherals and ISA extensions
The growing availability of neural processing units (NPUs) and various AI/ML acceleration peripherals for RISC-V processors further enhances their suitability for use in continuum computing applications. The RISC-V ecosystem includes dedicated AI IP cores, vector extensions, and specialized matrix engines that can be integrated into system-on-chip (SoC) devices.
Specific AI/ML support in the RISC-V ISA includes the standardized vector extension (RVV) for parallel data processing, custom instruction capabilities, and emerging matrix extensions (RVM) for the matrix multiplications that are crucial for accelerating neural network layers.
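A distinctive feature of RVV is that loops are vector-length agnostic: instead of hard-coding a SIMD width, software asks the hardware (via the vsetvli instruction) how many elements it will process per iteration. As a rough illustration only, here is a plain-C scalar model of that strip-mining pattern; VLMAX is a made-up stand-in for the hardware vector length, not a real RVV intrinsic.

```c
#include <stddef.h>
#include <stdint.h>

/* Scalar model of an RVV-style vector-length-agnostic dot product.
 * On RVV hardware, vsetlvi-style configuration tells software how many
 * elements the vector unit processes per pass; VLMAX is a hypothetical
 * stand-in for that value, used here purely for illustration. */
#define VLMAX 8 /* assumed hardware vector length, in elements */

int32_t dot_vla(const int32_t *a, const int32_t *b, size_t n) {
    int32_t acc = 0;
    while (n > 0) {
        size_t vl = n < VLMAX ? n : VLMAX; /* models vsetvli's result */
        for (size_t i = 0; i < vl; i++)    /* models one vector multiply-add */
            acc += a[i] * b[i];
        a += vl;
        b += vl;
        n -= vl;
    }
    return acc;
}
```

Because the loop adapts to whatever vector length the hardware reports, the same binary runs correctly on implementations with narrow or wide vector units, which is one reason RVV suits the continuum from embedded devices to data-center SoCs.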
In addition, the RISC-V packed single-instruction, multiple-data (SIMD) P extension enables 8/16/32-bit subword parallelism within the standard 32/64-bit general-purpose integer registers (GPRs). Packed SIMD lets a single CPU instruction operate on multiple data elements simultaneously and is well-suited to low-power DSP and AI tasks.
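To show what subword parallelism buys, the following plain-C function models the effect of a P-extension add8-style instruction: four independent 8-bit lane additions packed into one 32-bit word, with no carry crossing lane boundaries. On hardware with the extension this is a single instruction; the SWAR masking trick below is only a software sketch of the semantics.

```c
#include <stdint.h>

/* Models a packed-SIMD "add8": four independent 8-bit adds inside one
 * 32-bit register. Each lane wraps modulo 256; carries never propagate
 * into the neighboring lane. */
uint32_t add8_model(uint32_t x, uint32_t y) {
    /* Clear each lane's MSB, add the low 7 bits of every lane at once,
     * then restore the correct MSBs with XOR so no carry crosses lanes. */
    uint32_t low = (x & 0x7F7F7F7Fu) + (y & 0x7F7F7F7Fu);
    return low ^ ((x ^ y) & 0x80808080u);
}
```

For 8-bit quantized inference, this means one register add processes four activations at a time, which is where much of the P extension's DSP/AI efficiency comes from.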
The bit-manipulation (B) extension in the RISC-V ISA adds instructions for faster bitwise operations, rotations, and field extraction, reducing code size and improving the efficiency of AI algorithms such as those used in quantized neural networks (QNNs) and binary neural networks (BNNs).
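As a concrete example of why bit manipulation matters for BNNs: when weights and activations are constrained to {-1, +1} and packed one per bit, a 32-element dot product reduces to an XNOR followed by a population count, which the Zbb subset's cpop instruction performs in a single operation. The sketch below models that with the GCC/Clang __builtin_popcount intrinsic.

```c
#include <stdint.h>

/* BNN dot product over 32 {-1,+1} values packed one per bit
 * (bit set = +1, bit clear = -1). With the Zbb extension, the
 * popcount maps to a single cpop instruction. */
int bnn_dot32(uint32_t act, uint32_t wgt) {
    uint32_t agree = ~(act ^ wgt);          /* XNOR: 1 where signs match */
    int matches = __builtin_popcount(agree); /* models Zbb cpop */
    return 2 * matches - 32;                 /* +1 per match, -1 per mismatch */
}
```

Replacing 32 multiply-accumulates with two or three instructions is the source of the large energy and code-size savings BNNs see on bit-manipulation-capable cores.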
Summary
RISC-V offers a wide range of hardware options and software capabilities that make it well-suited for AI/ML workloads in embedded, edge, and cloud environments. The ecosystem includes numerous hardware peripherals and ISA extensions optimized for AI/ML requirements. RISC-V supports both hybrid cloud/edge architectures and the latest continuum computing architectures.