The e-AI Translator (free version) supports microcontrollers with comparatively small ROM/RAM capacity. To keep the library footprint small, only functions that are frequently used in neural networks are supported.

Translatable Functions (e-AI Translator V2.1.0: inference)

Neural network functions that are used only for training and not for inference (for example, tf.nn.l2_loss for evaluating loss) can still be used; they are simply not translated. A minimal model built only from translatable layers is sketched after the table below.

Type | PyTorch (C_FP32) | TensorFlow tf-keras (C_FP32) | Keras (C_FP32) | TensorFlow Lite API (new in V2.1.0) | C_INT8
1. Convolution Layers | nn.Conv2d | tf.keras.layers.Conv2D | keras.layers.Conv2D | tfl.conv_2d | support
| nn.Conv2d | tf.keras.layers.DepthwiseConv2D | keras.layers.DepthwiseConv2D | tfl.depthwise_conv_2d | support
| nn.ConvTranspose2d | tf.keras.layers.Conv2DTranspose | keras.layers.Conv2DTranspose | tfl.transpose_conv | support
| + | tf.keras.layers.add | keras.layers.add | | support
2. Pooling layers | nn.MaxPool2d | tf.keras.layers.MaxPool2D | keras.layers.MaxPooling2D | tfl.max_pool_2d | support
| nn.MaxPool2d | tf.keras.layers.GlobalMaxPooling2D | keras.layers.GlobalMaxPooling2D | tfl.reduce_max | support
| nn.AvgPool2d | tf.keras.layers.AveragePooling2D | keras.layers.AveragePooling2D | tfl.average_pool_2d | support
| nn.AvgPool2d | tf.keras.layers.GlobalAveragePooling2D | keras.layers.GlobalAveragePooling2D | tfl.mean | support
3. Non-linear Activations (weighted sum, nonlinearity) | nn.Hardsigmoid | tf.keras.activations.hard_sigmoid, tf.keras.layers.Activation("hard_sigmoid") | keras.activations.hard_sigmoid, keras.layers.Activation("hard_sigmoid") | None | -
| nn.LeakyReLU | tf.keras.layers.LeakyReLU | keras.layers.LeakyReLU | tfl.leaky_relu | support
| nn.ReLU | tf.keras.activations.relu, tf.keras.layers.Activation("relu"), tf.keras.layers.relu | keras.activations.relu, keras.layers.Activation("relu"), keras.layers.relu | tfl.relu (fused activation) | support
| nn.ReLU6 | tf.keras.activations.relu (max_value=6.), tf.keras.layers.relu (max_value=6.) | keras.activations.relu (max_value=6.), keras.layers.relu (max_value=6.) | tfl.relu with max_value=6 | support
| nn.Sigmoid | tf.keras.activations.sigmoid, tf.keras.layers.Activation("sigmoid") | keras.activations.sigmoid, keras.layers.Activation("sigmoid") | tfl.logistic | support
| nn.Softplus | tf.keras.activations.softplus, tf.keras.layers.Activation("softplus") | keras.activations.softplus, keras.layers.Activation("softplus") | None | -
| nn.Softsign | tf.keras.activations.softsign, tf.keras.layers.Activation("softsign") | keras.activations.softsign, keras.layers.Activation("softsign") | None | -
| nn.Tanh | tf.keras.activations.tanh, tf.keras.layers.Activation("tanh") | keras.activations.tanh, keras.layers.Activation("tanh") | tfl.tanh | support
4. Non-linear Activations (other) | nn.Softmax, nn.Softmax2d | tf.keras.activations.softmax, tf.keras.layers.Activation("softmax"), tf.keras.layers.Softmax | keras.activations.softmax, keras.layers.Activation("softmax"), keras.layers.Softmax | tfl.softmax | support
| torch.clip, torch.clamp | tf.keras.backend.clip (use lambda) | keras.backend.clip (use lambda) | None | -
5. Normalization Layers | nn.BatchNorm2d | tf.keras.layers.BatchNormalization | keras.layers.BatchNormalization | tfl.mul, tfl.add | - (Convolution-BatchNorm-ReLU "CBR" structures only)
6. Linear Layers | nn.Linear, torch.matmul | tf.keras.layers.Dense | keras.layers.Dense | tfl.fully_connected | support
8. Utilities | nn.Flatten | tf.keras.layers.Flatten | keras.layers.Flatten | None | support (skip)
| view, torch.reshape | tf.keras.layers.Reshape | keras.layers.Reshape | None | support (skip)
9. Merge layers (structure check) | + | tf.keras.layers.add | keras.layers.add | tfl.add | support
| - | tf.keras.layers.subtract | keras.layers.subtract | tfl.sub | support
| * | tf.keras.layers.multiply | keras.layers.multiply | tfl.mul | support
| torch.cat | tf.keras.layers.concatenate | keras.layers.concatenate | tfl.concatenation | support
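As a concrete illustration of the table above, the following is a minimal sketch of a tf-keras model assembled only from translatable layers, including a Convolution-BatchNorm-ReLU ("CBR") block as noted for BatchNormalization. The input shape, filter counts, and file name are arbitrary example values, not values required by the translator.

```python
import tensorflow as tf

def build_translatable_cnn(input_shape=(32, 32, 1), num_classes=10):
    """Minimal CNN built only from layers listed in the table above."""
    inputs = tf.keras.Input(shape=input_shape)

    # Convolution-BatchNorm-ReLU ("CBR") block; per the table note,
    # BatchNormalization is translated only in this kind of structure.
    x = tf.keras.layers.Conv2D(8, 3, padding="same")(inputs)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.Activation("relu")(x)
    x = tf.keras.layers.MaxPooling2D()(x)

    # Classifier head: Flatten -> Dense -> softmax, all from the table.
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(32, activation="relu")(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_translatable_cnn()
model.summary()
# Save the model before translation; check the e-AI Translator
# documentation for the model file formats it accepts.
model.save("translatable_cnn.h5")
```

Keeping the filter and unit counts small also helps the generated C code fit the ROM/RAM budget of small microcontrollers.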

Supported Neural Network Structures (e-AI Translator V2.1.0)

Neural network name | Model name | Status | Note
AutoEncoder | | support | Conv2D-Fully Connected-Deconv2D structures are also supported.
Convolutional Neural Network (CNN), no branch structures | LeNet | support |
| AlexNet | support |
| VGG | support | If the fully connected layers have large channel counts, a very large memory capacity is required.
Convolutional Neural Network (CNN), with branch structures (skip connections, etc.) | Network in Network, GoogleNet | support | RAM usage reduction function is available (new in V2.1.0).
| MobileNet | support |
| ResNet | support |
| SqueezeNet | support |
| SENet | support |
| ContextNet | - | Requires replacing the Swish activation function with ReLU.
Self Attention Net | | - | Only SENet is supported.
Recurrent Neural Network (RNN) | RNNoise | - | Not supported. Consider using an SE module instead of an RNN (a sketch of an SE block follows this table).
Long Short Term Memory (LSTM) | | - | Not supported. Consider using an SE module instead of LSTM.
Gated Recurrent Unit (GRU) | | - | Not supported. Consider using an SE module instead of GRU.
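The notes above recommend an SE (squeeze-and-excitation) module in place of recurrent layers. The following is a minimal sketch of an SE block composed only of translatable layers (GlobalAveragePooling2D, Dense, sigmoid, Reshape, multiply); the reduction ratio and tensor shapes are arbitrary example values, and this is an illustration rather than a prescribed implementation.

```python
import tensorflow as tf

def se_block(x, reduction=4):
    """Squeeze-and-Excitation block built only from translatable layers."""
    channels = int(x.shape[-1])
    # Squeeze: global average pooling over the spatial dimensions.
    s = tf.keras.layers.GlobalAveragePooling2D()(x)
    # Excitation: two Dense layers producing per-channel weights in [0, 1].
    s = tf.keras.layers.Dense(max(channels // reduction, 1), activation="relu")(s)
    s = tf.keras.layers.Dense(channels, activation="sigmoid")(s)
    # Reshape to (1, 1, C) and rescale the input feature map channel-wise.
    s = tf.keras.layers.Reshape((1, 1, channels))(s)
    return tf.keras.layers.multiply([x, s])

# Example: apply the SE block after a convolution.
inputs = tf.keras.Input(shape=(32, 32, 8))
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = se_block(x)
model = tf.keras.Model(inputs, x)
model.summary()
```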

Translatable Functions (e-AI Translator V1.6.0: inference)

Type | TensorFlow | tf-keras | Keras | Caffe
1. Convolution Layers | tf.nn.conv2d, tf.contrib.layers.conv2d, tf.layers.conv2d | tf.keras.layers.Conv2D | keras.layers.Conv2D | Convolution
| tf.nn.depthwise_conv2d | tf.keras.layers.DepthwiseConv2D | keras.layers.DepthwiseConv2D | None
| tf.nn.conv2d_transpose, tf.contrib.layers.conv2d_transpose, tf.layers.conv2d_transpose | tf.keras.layers.Conv2DTranspose | keras.layers.Conv2DTranspose | Deconvolution
| tf.add, tf.nn.bias_add, tf.contrib.layers.bias_add, + | tf.keras.layers.add | keras.layers.add | None
2. Pooling layers | tf.nn.max_pool, tf.contrib.layers.max_pool2d, tf.layers.max_pooling2d | tf.keras.layers.MaxPool2D | keras.layers.MaxPooling2D | Pooling with pooling_param {pool: MAX}
| None | tf.keras.layers.GlobalMaxPooling2D | keras.layers.GlobalMaxPooling2D | None
| tf.nn.avg_pool, tf.contrib.layers.avg_pool2d, tf.layers.average_pooling2d | tf.keras.layers.AveragePooling2D | keras.layers.AveragePooling2D | Pooling with pooling_param {pool: AVE}
| None | tf.keras.layers.GlobalAveragePooling2D | keras.layers.GlobalAveragePooling2D | None
3. Non-linear Activations (weighted sum, nonlinearity) | None | tf.keras.activations.hard_sigmoid, tf.keras.layers.Activation("hard_sigmoid") | keras.activations.hard_sigmoid, keras.layers.Activation("hard_sigmoid") | None
| tf.nn.leaky_relu | tf.keras.layers.LeakyReLU | keras.layers.LeakyReLU | None
| tf.nn.relu | tf.keras.activations.relu, tf.keras.layers.Activation("relu"), tf.keras.layers.relu | keras.activations.relu, keras.layers.Activation("relu"), keras.layers.relu | ReLU
| tf.nn.relu6 | tf.keras.activations.relu (max_value=6.), tf.keras.layers.relu (max_value=6.) | keras.activations.relu (max_value=6.), keras.layers.relu (max_value=6.) | None
| tf.sigmoid | tf.keras.activations.sigmoid, tf.keras.layers.Activation("sigmoid") | keras.activations.sigmoid, keras.layers.Activation("sigmoid") | Sigmoid
| tf.nn.softplus, tf.math.softplus | tf.keras.activations.softplus, tf.keras.layers.Activation("softplus") | keras.activations.softplus, keras.layers.Activation("softplus") | None
| tf.nn.softsign, tf.math.softsign | tf.keras.activations.softsign, tf.keras.layers.Activation("softsign") | keras.activations.softsign, keras.layers.Activation("softsign") | None
| tf.nn.tanh, tf.tanh | tf.keras.activations.tanh, tf.keras.layers.Activation("tanh") | keras.activations.tanh, keras.layers.Activation("tanh") | TanH
4. Non-linear Activations (other) | tf.nn.softmax, tf.contrib.layers.softmax, tf.nn.softmax_cross_entropy_with_logits, tf.nn.softmax_cross_entropy_with_logits_v2 | tf.keras.activations.softmax, tf.keras.layers.Activation("softmax"), tf.keras.layers.Softmax | keras.activations.softmax, keras.layers.Activation("softmax"), keras.layers.Softmax | Softmax
| tf.clip_by_average_norm | None | None | None
| tf.clip_by_global_norm | None | None | None
| tf.clip_by_norm | None | None | None
| tf.clip_by_value | tf.keras.backend.clip (use lambda) | keras.backend.clip (use lambda) | None
5. Normalization Layers | tf.layers.batch_normalization | tf.keras.layers.BatchNormalization | keras.layers.BatchNormalization | None
| tf.nn.lrn, tf.nn.local_response_normalization | None | None | LRN
6. Linear Layers | tf.matmul, tf.contrib.layers.fully_connected, tf.layers.dense | tf.keras.layers.Dense | keras.layers.Dense | InnerProduct
8. Utilities | tf.layers.flatten, tf.contrib.layers.flatten | tf.keras.layers.Flatten | keras.layers.Flatten | None
| tf.reshape | tf.keras.layers.Reshape | keras.layers.Reshape | Reshape, Split
9. Merge layers (structure check) | tf.add, tf.math.add, + | tf.keras.layers.add | keras.layers.add | None
| tf.subtract, tf.math.subtract, - | tf.keras.layers.subtract | keras.layers.subtract | None
| tf.math.multiply, * | tf.keras.layers.multiply | keras.layers.multiply | None
| tf.concat | tf.keras.layers.concatenate | keras.layers.concatenate | None
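Both function tables list keras.backend.clip as translatable only when wrapped in a lambda. The sketch below shows one common way to write that, using a Lambda layer; the clipping bounds and layer size are arbitrary example values.

```python
import tensorflow as tf

# Clipping expressed through a Lambda layer wrapping backend.clip,
# as suggested by the "use lambda" note in the tables above.
inputs = tf.keras.Input(shape=(16,))
x = tf.keras.layers.Dense(16)(inputs)
# Clip activations to the range [0, 1]; the bounds are example values.
x = tf.keras.layers.Lambda(lambda t: tf.keras.backend.clip(t, 0.0, 1.0))(x)
model = tf.keras.Model(inputs, x)
model.summary()
```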

Supported Neural Network Structures (e-AI Translator V1.6.0)

Neural network name | Model name | Status | Note
AutoEncoder | | support | Conv2D-Fully Connected-Deconv2D structures are also supported.
Convolutional Neural Network (CNN), no branch structures | LeNet | support |
| AlexNet | support |
| VGG | support | If the fully connected layers have large channel counts, a very large memory capacity is required.
Convolutional Neural Network (CNN), with branch structures (skip connections, etc.) | Network in Network, GoogleNet | support | RAM usage reduction function is not available.
| MobileNet | support | RAM usage reduction function is not available.
| ResNet | support | RAM usage reduction function is not available.
| SENet | support | RAM usage reduction function is not available.
Recurrent Neural Network (RNN) | | - | Not supported. Consider using an SE module instead of an RNN.
Long Short Term Memory (LSTM) | | - | Not supported. Consider using an SE module instead of LSTM.