The translator (free version) supports microcontrollers with comparatively small ROM/RAM capacities. To keep the library footprint small, only functions that are commonly used in neural networks are supported.

Translatable Functions (Inference)

Neural network functions that are used only for training and not for inference (for example, tf.nn.l2_loss for evaluating loss) are not supported.
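For illustration, a minimal sketch in TensorFlow 1.x style (matching the tf.* names listed below) of how a training-only function stays outside the inference graph; all shapes and sizes here are assumptions:

    import tensorflow as tf

    # Inference graph: built only from translatable functions.
    x = tf.placeholder(tf.float32, [None, 28, 28, 1])       # illustrative input shape
    w = tf.Variable(tf.random_normal([5, 5, 1, 8]))
    conv = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME")
    y = tf.nn.relu(conv)                 # end of the translatable inference path

    # Training-only term: used while fitting the model, never translated.
    reg = tf.nn.l2_loss(w)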

The translatable functions are listed below by category and framework (Keras support is new in V1.4.0).

Layer
  Convolution
    TensorFlow: tf.nn.conv2d, tf.contrib.layers.conv2d, tf.layers.conv2d
    Keras: keras.layers.Conv2D, tf.keras.layers.Conv2D
    Caffe: Convolution
  Deconvolution
    TensorFlow: tf.nn.conv2d_transpose, tf.contrib.layers.conv2d_transpose, tf.layers.conv2d_transpose
    Keras: keras.layers.Conv2DTranspose, tf.keras.layers.Conv2DTranspose
    Caffe: Deconvolution
  InnerProduct
    TensorFlow: tf.matmul, tf.contrib.layers.fully_connected, tf.layers.dense
    Keras: keras.layers.Dense, tf.keras.layers.Dense
    Caffe: InnerProduct

Pooling
  Max Pooling
    TensorFlow: tf.nn.max_pool, tf.contrib.layers.max_pool2d, tf.layers.max_pooling2d
    Keras: keras.layers.MaxPooling2D, tf.keras.layers.MaxPool2D
    Caffe: Pooling with pooling_param {pool: MAX}
  Average Pooling
    TensorFlow: tf.nn.avg_pool, tf.contrib.layers.avg_pool2d, tf.layers.average_pooling2d
    Keras: keras.layers.AveragePooling2D, tf.keras.layers.AveragePooling2D
    Caffe: Pooling with pooling_param {pool: AVE}

Activation
  ReLU
    TensorFlow: tf.nn.relu
    Keras: keras.activations.relu, tf.keras.activations.relu
    Caffe: ReLU
  TanH
    TensorFlow: tf.nn.tanh, tf.tanh
    Keras: keras.activations.tanh, tf.keras.activations.tanh
    Caffe: TanH
  Sigmoid
    TensorFlow: tf.sigmoid
    Keras: keras.activations.sigmoid, tf.keras.activations.sigmoid
    Caffe: Sigmoid
  Softmax
    TensorFlow: tf.nn.softmax, tf.contrib.layers.softmax, tf.nn.softmax_cross_entropy_with_logits, tf.nn.softmax_cross_entropy_with_logits_v2
    Keras: keras.activations.softmax, tf.keras.activations.softmax, keras.layers.Softmax, tf.keras.layers.Softmax
    Caffe: Softmax

Normalization
  LRN
    TensorFlow: tf.nn.lrn, tf.nn.local_response_normalization
    Keras: -
    Caffe: LRN (Local Response Normalization)
  BatchNormalization
    TensorFlow: tf.layers.batch_normalization
    Keras: keras.layers.BatchNormalization, tf.keras.layers.BatchNormalization
    Caffe: BatchNorm

Structure
  Input
    TensorFlow: -
    Keras: -
    Caffe: Input
  bias_add
    TensorFlow: tf.nn.bias_add, tf.contrib.layers.bias_add
    Keras: -
    Caffe: -
  reshape
    TensorFlow: tf.reshape, tf.layers.flatten, tf.contrib.layers.flatten
    Keras: keras.layers.Reshape, tf.keras.layers.Reshape, keras.layers.Flatten, tf.keras.layers.Flatten
    Caffe: Reshape, Split
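As a usage illustration, a model assembled only from layers in the list above; the layer sizes and input shape are assumptions, not requirements:

    import tensorflow as tf

    # A minimal sketch using only translatable tf.keras layers:
    # Conv2D, MaxPool2D, Flatten, Dense, relu/softmax activations.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPool2D(pool_size=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.summary()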

Supported Neural Network Structures

AutoEncoder: Supported (new in V1.4.0); the Conv2D-FC-Deconv2D structure is supported.
Convolutional Neural Network (CNN) without branches (LeNet, AlexNet, VGG): Supported. For VGG, if the number of fully connected channels is large, a large memory capacity is required during conversion; a rough estimate follows below.
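To see the scale involved, a back-of-the-envelope estimate using VGG-16's dimensions (the numbers are illustrative, not a statement of the translator's limits):

    # VGG-16's first fully connected layer: a 7x7x512 feature map
    # feeding 4096 channels.
    weights = 7 * 7 * 512 * 4096        # 102,760,448 parameters
    print(weights * 4 / 2**20)          # ~392 MiB as 32-bit floats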
 
Convolutional Neural Network (CNN) with branches (Network in Network, GoogLeNet): Not supported, under development. Not compatible with branched network structures; Concatenation is not supported. The rejected pattern is sketched below.
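A sketch of that pattern in tf.keras; the layer sizes are assumptions:

    import tensorflow as tf

    # Branch-and-concatenate, as in a GoogLeNet/Inception block.
    inp = tf.keras.Input(shape=(32, 32, 16))
    b1 = tf.keras.layers.Conv2D(8, 1, padding="same")(inp)   # branch 1
    b2 = tf.keras.layers.Conv2D(8, 3, padding="same")(inp)   # branch 2
    out = tf.keras.layers.Concatenate()([b1, b2])            # unsupported: Concatenation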
MobileNet: Not supported, under development. Depthwise Convolution is not supported (shown below), the translator is not compatible with branched network structures, and multiple-input add (element-wise addition) is not supported.
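For reference, the layer in question; it does not appear in the function list above:

    import tensorflow as tf

    # Depthwise convolution, MobileNet's core building block; a model
    # containing it will not translate.
    dw = tf.keras.layers.DepthwiseConv2D(kernel_size=3, padding="same")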
ResNet: Not supported, under development. Not compatible with branched network structures; multiple-input add (element-wise addition) is not supported. The shortcut pattern is sketched below.
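A sketch of the shortcut connection; the layer sizes are assumptions:

    import tensorflow as tf

    # Residual (shortcut) connection, as in a ResNet block; the element-wise
    # Add of two inputs is the unsupported operation.
    inp = tf.keras.Input(shape=(32, 32, 16))
    conv = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    out = tf.keras.layers.Add()([inp, conv])   # unsupported: multiple-input add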
SENet: Not supported, under development. Not compatible with branched network structures; Multiply (element-wise multiplication) is not supported.
Recurrent Neural Network (RNN): Not supported. Networks with built-in memory are not supported; consider a CNN as an alternative.

Long Short-Term Memory (LSTM): Not supported. Networks with built-in memory are not supported; consider a CNN as an alternative (one possible approach is sketched below).
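One possible reading of the CNN-as-alternative advice, offered here as an assumption rather than a documented recipe: classify a fixed-length window of a 1-D signal by shaping it as a (length x 1) image and using only supported layers:

    import tensorflow as tf

    # Hypothetical workaround: a fixed 64-sample window of a 1-D signal,
    # treated as a (64, 1, 1) "image". Window length, channel counts, and
    # class count are illustrative.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(8, (5, 1), activation="relu", input_shape=(64, 1, 1)),
        tf.keras.layers.MaxPool2D(pool_size=(2, 1)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])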