Artificial Intelligence (AI) based on deep neural networks is already delivering new value across the IT segment. Many developers want to bring AI to embedded applications, but AI processing is computationally intensive, which makes it difficult to implement on embedded devices with conventional CPU- or GPU-based solutions: performance falls short, or power consumption is too high. AI is also evolving constantly, with new algorithms appearing regularly.
Amid this rapid evolution of AI, Renesas developed the DRP-AI accelerator and its companion software, the DRP-AI Translator, which together deliver both high performance and low power consumption while remaining adaptable as AI evolves. Combining the DRP-AI and the DRP-AI Translator enables AI inference with a power efficiency that current AI technology cannot otherwise achieve.
The range of supported AI models can be extended through continuous updates to the DRP-AI Translator.
DRP-AI consists of AI-MAC (a multiply-accumulate processor) and DRP (a dynamically reconfigurable processor). AI processing runs at high speed by assigning the convolution and fully connected layers to AI-MAC, and other complex processing, such as preprocessing and pooling layers, to the DRP.
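This division of work between the two engines can be pictured as a simple partitioning step. The sketch below is purely illustrative: the layer names and the `assign_engine` helper are hypothetical, not part of any Renesas API, and the real scheduling is done by the conversion tools.

```python
# Illustrative sketch: route each layer of an AI model to the DRP-AI
# engine that handles it. Convolution and fully connected layers go to
# AI-MAC (the multiply-accumulate processor); everything else, such as
# preprocessing and pooling, goes to the reconfigurable DRP.
AI_MAC_LAYERS = {"conv2d", "fully_connected"}

def assign_engine(layer_type: str) -> str:
    """Return the DRP-AI engine a layer type would run on (hypothetical helper)."""
    return "AI-MAC" if layer_type in AI_MAC_LAYERS else "DRP"

# A toy model: preprocessing, two convolutions, pooling, a classifier head.
model = ["resize_preprocess", "conv2d", "max_pool",
         "conv2d", "fully_connected", "softmax"]

for layer in model:
    print(f"{layer:20s} -> {assign_engine(layer)}")
```

Running the sketch shows the two convolution layers and the fully connected layer landing on AI-MAC, with preprocessing, pooling, and softmax handled by the DRP.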
While most AI accelerators specialize in AI inference alone and rely on the CPU for pre- and post-processing, DRP-AI integrates pre-processing, inference, and post-processing into a single piece of hardware, achieving superior overall AI processing performance.
DRP-AI Translator and DRP-AI TVM are tools that convert trained AI models into a format that can run on DRP-AI. This section describes the features of these two tools.
The DRP-AI Translator is a tool tuned to maximize DRP-AI performance. It achieves high speed, low power consumption, and a reduced CPU load by running all the operations of an AI model on DRP-AI.
DRP-AI TVM*1 brings the DRP-AI accelerator to the proven ML compiler framework Apache TVM*2. This enables support for multiple AI frameworks (ONNX, PyTorch, TensorFlow, etc.). In addition, it can operate in conjunction with the CPU, allowing a wider range of AI models to be run.
These two tools can be selected according to the customer's product application.
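The practical difference between the two paths can be sketched as an operator-placement decision. Everything below is an assumption for illustration only: the operator names, the `DRP_AI_SUPPORTED` set, and the `place_operators` helper are hypothetical and do not reflect the actual operator coverage of either tool.

```python
# Illustrative sketch of the two conversion styles.
# DRP-AI Translator runs the whole model on DRP-AI; DRP-AI TVM can
# delegate operators to the CPU, so more models remain runnable.
DRP_AI_SUPPORTED = {"conv2d", "relu", "max_pool", "fully_connected", "softmax"}

def place_operators(model_ops, allow_cpu_fallback):
    """Map each operator to DRP-AI, or to the CPU when fallback is allowed.

    allow_cpu_fallback=False mimics the Translator style: an unsupported
    operator is an error. allow_cpu_fallback=True mimics the TVM style:
    unsupported operators run on the CPU instead.
    """
    placement = {}
    for op in model_ops:
        if op in DRP_AI_SUPPORTED:
            placement[op] = "DRP-AI"
        elif allow_cpu_fallback:
            placement[op] = "CPU"
        else:
            raise ValueError(f"{op} is not supported on DRP-AI")
    return placement

ops = ["conv2d", "relu", "custom_nms"]
print(place_operators(ops, allow_cpu_fallback=True))
# {'conv2d': 'DRP-AI', 'relu': 'DRP-AI', 'custom_nms': 'CPU'}
```

With fallback disabled, the same model would be rejected at `custom_nms`, which is why the Translator path favors models whose operations DRP-AI can execute entirely on its own.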
The table below lists the AI model formats and products (MPUs) supported by each tool.
The table below shows the download site for each tool, a brief description of the tool and information on deliverables.
| | DRP-AI Translator | DRP-AI TVM |
| --- | --- | --- |
| Tool download | DRP-AI Translator (Renesas Web) | DRP-AI TVM (GitHub) |
| Tool summary description | DRP-AI Translator explanation page | DRP-AI TVM explanation page |
| Implementation guide | Included in the DRP-AI Support Package | README on GitHub guides you through the implementation process |
| Sample code | | Performance evaluation samples available on GitHub |
| AI evaluation software | | |
| DRP-AI drivers | Included in the DRP-AI Support Package | |
| Linux | Available in the Linux Package | Available in the Linux Package |
The DRP-AI Support Package provides the driver and guide needed to operate DRP-AI. Download the DRP-AI Support Package now to experience seamless AI development, from open software to device implementation.
*1. DRP-AI TVM is powered by the EdgeCortix MERA™ Compiler Framework.
*2. For more information on Apache TVM, please refer to https://tvm.apache.org.
RZ/V2M DRP-AI Support Package [V7.40]
This product provides the software and documentation for DRP-AI embedded within RZ/V2M.

RZ/V2L DRP-AI Support Package [V7.40]
This product provides the software and documentation for DRP-AI embedded within RZ/V2L.

RZ/V2MA DRP-AI Support Package [V7.40]
This product provides the software and documentation for DRP-AI embedded within RZ/V2MA.

DRP-AI Translator [V1.83]
This is an AI model conversion tool (DRP-AI Translator) for products equipped with DRP-AI. Please check the Release Notes and User's Manual before using this product.

DRP-AI TVM (GitHub)
This is an AI model conversion tool (DRP-AI TVM) for products equipped with DRP-AI. Please check the contents of the linked README.md before using this product.
Where Edge and Endpoint AI Meet the Cloud (External Link)
The DRP-AI accelerator embedded in RZ/V series MPUs provides high-speed AI processing while maintaining high power efficiency at the endpoint.
- AI Supports the Way We Live and Work (Blog Post, Apr 18, 2022)
- Start Your First Step on Vision AI Development Using RZ/V Microprocessor (MPU) (Blog Post, Feb 15, 2022)
- Farewell, Heat Countermeasure: RZ/V2M Brings the Innovation for AI Products (Blog Post, Nov 15, 2021)