Overview

Description

Simple for beginners and powerful for experts

The Renesas Robust Unified Heterogeneous Model Integration (RUHMI) Framework is a set of tools supporting AI application development for Renesas MCU/MPU products. Generate highly optimized models in minutes that run efficiently on Renesas embedded processors.

Why RUHMI Framework?

With a robust compiler and software framework, our solution enables seamless deployment of the latest neural network models across multiple frameworks. By leveraging a common front-end compiler engine* for Renesas' broad portfolio of AI MCU and MPU products, we deliver enhanced user convenience through standardized frameworks and interfaces, ensuring cross-device compatibility and a consistent development experience.

  • Seamless deployment of pre-trained deep neural networks from graph compilation to AI inference using integrated tools, APIs, automated code generation, and runtime support
  • Workflow integration and flexible customization through a standardized Python library across Renesas MCU/MPU families
  • Native support for leading ML frameworks, with ongoing expansion to enable importing common models across devices
  • Framework-independent post-training calibration and quantization for user-defined models
  • Multiple application examples, including models optimized for each supported device
  • Automatic conversion to optimized embedded code for onboard CPUs (RUHMI feature for MCUs) for simplified deployment
  • User-friendly design for smooth model selection, conversion, and storage across supported frameworks and devices
    • Highly configurable CLI environment for Linux
    • Windows environment with an intuitive GUI and expert-level CLI for MCU implementation, supporting diverse development environments

* Powered by EdgeCortix® MERA™ 2.0
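To make the "framework-independent post-training calibration and quantization" step above concrete, the sketch below shows the underlying idea: derive an affine int8 mapping (scale and zero-point) from calibration data, then quantize and dequantize tensors with it. The helper names are illustrative only; this is not the RUHMI or MERA API.

```python
# Illustrative sketch of affine post-training quantization (not the RUHMI API).

def calibrate(samples):
    """Collect the observed value range from representative calibration batches."""
    flat = [x for batch in samples for x in batch]
    return min(flat), max(flat)

def affine_params(lo, hi, qmin=-128, qmax=127):
    """Map the float range [lo, hi] onto the int8 range [qmin, qmax]."""
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # the representable range must include zero
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    return scale, max(qmin, min(qmax, zero_point))

def quantize(values, scale, zero_point, qmin=-128, qmax=127):
    return [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]

def dequantize(q, scale, zero_point):
    return [(v - zero_point) * scale for v in q]

# Example: calibrate on representative activations, then quantize new data.
lo, hi = calibrate([[0.0, 0.5, 2.0], [0.1, 1.9]])
scale, zp = affine_params(lo, hi)
q = quantize([0.0, 1.0, 2.0], scale, zp)
restored = dequantize(q, scale, zp)
# Round-trip error is bounded by half a quantization step (scale / 2).
assert all(abs(a - b) <= scale / 2 for a, b in zip([0.0, 1.0, 2.0], restored))
```

In a real deployment flow, the calibration pass runs over a user-supplied dataset and the resulting parameters are baked into the generated model, so inference on the device uses only integer arithmetic.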

Features

  • RA8 MCUs
    • Supported frameworks: TensorFlow Lite (.tflite), ONNX (.onnx), PyTorch/ExecuTorch (.pte)
    • OS: Windows (GUI, CLI), Linux (CLI)
  • RZ/V MPUs
    • Supported frameworks: TensorFlow, ONNX, PyTorch
    • OS: Linux (CLI)

Release Information

For additional information and links, visit GitHub.

Support

Support Communities

  1. RUHMI / AI Navigator quantization fails (get_model_info ok) with YOLO-FastestV2 ONNX: MERA fe_onnx_cli crash / model.mir missing

    Hello Renesas team, I am working on EK-RA8P1 (RA8P1 + Ethos-U55) and using e² studio AI Navigator / RUHMI Framework to deploy a custom DMS object detection model. What I am trying to do: run my custom YOLO-FastestV2-based DMS model on RA8P1. Flow: Camera ...

    Jan 13, 2026
  2. RA8P1 EVK – Is there a Renesas-supported development environment to train and build YOLO-Fastest models or any other models for custom datasets?

    ... fine-tune YOLO-Fastest models for RA8P1 using custom datasets? If not, is the recommended workflow to train YOLO-Fastest externally (for example using standard frameworks such as TensorFlow/PyTorch on PC or cloud) and then use AI Navigator / RUHMI only for INT8 model conversion and deployment to ...

    Jan 1, 2026
  3. E2 studio RA - Missing TFLM stack in stack tab

    Hi, I am planning to create an AI project (Sound Recognition) from scratch for the EK-RA8P1. I have started development based on the Renesas RUHMI Framework – Quick Start Guide. As per the steps mentioned in the guide, there should be a stack element for TFLM, but it is not ...

    Feb 13, 2026

Knowledge Base

  1. How do I convert models for use with Ethos-U55 on RA8P1, using RUHMI?

    When using Ethos-U55 on RA8P1, the model optimization and conversion process is carried out in several steps using the RUHMI Framework tool. Model Import: First, the user imports models from TensorFlow or ONNX format into the RUHMI tool. RUHMI then converts these models into Intermediate Representation (IR) for further ...

    Jul 1, 2025
  2. RA Family: CPU cache maintenance for RA8P1 Ethos-U NPU operations

    ... memory. How to Implement Cache Maintenance Code: This section introduces the implementation of cache maintenance code based on the Arm Ethos-U Core Driver and RUHMI Framework, which are typically used for RA8P1 Ethos-U AI inference execution. AI model buffer: There is no support from any HAL driver. If ...

    Feb 3, 2026
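The cache-maintenance topic above comes down to one recurring detail: CPU cache clean/invalidate operations (e.g. CMSIS `SCB_CleanDCache_by_Addr`) act on whole cache lines, so a buffer shared with the NPU must be rounded out to line boundaries first. The sketch below shows that alignment arithmetic; the 32-byte line size is an assumption (typical for Cortex-M data caches) and the helper name is illustrative, not part of any Renesas or Arm API.

```python
# Alignment arithmetic for cache maintenance on a shared CPU/NPU buffer.
# Assumes a 32-byte D-cache line; helper name is illustrative only.

CACHE_LINE = 32  # assumed data-cache line size in bytes

def cache_maintenance_span(addr, size):
    """Round a buffer [addr, addr + size) outward to whole cache lines.

    Maintenance operations work on full lines, so the start address is
    rounded down and the total length rounded up to cover the buffer.
    """
    start = addr & ~(CACHE_LINE - 1)                           # round start down
    end = (addr + size + CACHE_LINE - 1) & ~(CACHE_LINE - 1)   # round end up
    return start, end - start

# Example: a 100-byte model I/O buffer at an unaligned address.
start, length = cache_maintenance_span(0x2000_0014, 100)
assert start % CACHE_LINE == 0 and length % CACHE_LINE == 0
assert start <= 0x2000_0014 and start + length >= 0x2000_0014 + 100
```

In firmware, the clean operation would typically be applied to input buffers before starting NPU inference, and the invalidate operation to output buffers after inference completes, using the aligned span computed this way.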
  3. How can I profile NPU inference performance and memory usage on RA8P1?

    ... to the buffer size information included in the generated code. In most cases, the generated code can be used as-is, but you may relocate memory regions as needed depending on your application. For more detail, please refer to the RUHMI user's manual on GitHub, via the RUHMI Framework landing page.

    Jul 1, 2025
Support Communities

Get quick technical support online from Renesas Engineering Community technical staff.

Knowledge Base

Browse our knowledge base for helpful articles, FAQs, and other useful resources.

Submit a Ticket

Need to ask a technical question or share confidential information?