Overview

Description

DNN compilers are often dedicated to a single SoC, which has been a barrier for customers who design products continuously across SoC generations.

A DNN compiler is divided into a front end, which analyzes the graph structure of the neural network and compresses and optimizes the graph, and a back end, which determines the optimal order of operations for the hardware architecture and generates hardware instructions. Based on our design experience over many generations, we have observed that the front end behaves very similarly regardless of the SoC generation.
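The split described above can be sketched as a toy pipeline. This is an illustration only, not actual TVM or product code: all names (`Graph`, `front_end_optimize`, `back_end_codegen`, the fake SoC instruction sets) are hypothetical, and the "optimization" is a single conv+relu fusion pass standing in for real graph compression.

```python
# Toy illustration (not actual TVM or product code) of the split described
# above: a generation-independent front end that simplifies the graph, and a
# per-SoC back end that maps the result onto hardware instructions.

from dataclasses import dataclass, field

@dataclass
class Graph:
    # Each op is (name, kind); a real compiler would hold a full DAG.
    ops: list = field(default_factory=list)

def front_end_optimize(graph: Graph) -> Graph:
    """Generation-independent pass: fuse conv+relu pairs (graph compression)."""
    fused, i = [], 0
    while i < len(graph.ops):
        if (i + 1 < len(graph.ops)
                and graph.ops[i][1] == "conv"
                and graph.ops[i + 1][1] == "relu"):
            fused.append((f"{graph.ops[i][0]}+{graph.ops[i+1][0]}", "conv_relu"))
            i += 2
        else:
            fused.append(graph.ops[i])
            i += 1
    return Graph(fused)

def back_end_codegen(graph: Graph, soc: str) -> list:
    """Per-SoC pass: map each (possibly fused) op to a hardware instruction.
    The instruction mnemonics here are invented for illustration."""
    isa = {
        "socA": {"conv": "CONV", "relu": "ACT", "conv_relu": "CONVACT"},
        "socB": {"conv": "MAC",  "relu": "RLU", "conv_relu": "MACRLU"},
    }[soc]
    return [f"{isa[kind]} {name}" for name, kind in graph.ops]

g = Graph([("c1", "conv"), ("r1", "relu"), ("c2", "conv")])
optimized = front_end_optimize(g)           # same pass for every generation
print(back_end_codegen(optimized, "socA"))  # ['CONVACT c1+r1', 'CONV c2']
print(back_end_codegen(optimized, "socB"))  # ['MACRLU c1+r1', 'MAC c2']
```

The point of the sketch is the interface: the front end never sees a target, so one implementation serves every generation, while only the small `back_end_codegen` table changes per SoC.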

We therefore adopt the open-source compiler framework TVM, which offers excellent extensibility, as the common backbone for the front end, and combine it with a back end designed for each SoC, so that the same tools can be used across generations.

We continue to provide quality-qualified proprietary tools for each SoC generation; in parallel, we will provide a toolset that can also be used with TVM by integrating the common back end.

We have named this toolset the Hybrid Compiler.
Releases will be phased, starting in January 2024.

Features

  • Hybrid Compiler, combining the two toolchains below
  • Open-source-software-based AI compiler (TVM) as the common backbone
  • IP-vendor-specific, quality-qualified AI compiler

Target Devices

Design & Development