Prerequisites for Deep Learning with TensorFlow Lite Models
MathWorks Products
To perform inference with TensorFlow™ Lite models in MATLAB® execution, or by using MATLAB Function blocks in Simulink® models, you must install:
- Deep Learning Toolbox™
To generate code for TensorFlow Lite models, you must also install MATLAB Coder™.
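For orientation, here is a minimal inference sketch using the loadTFLiteModel and predict functions listed under See Also. The model file name is a placeholder, and building a dummy input from the object's InputSize property is an assumption about the TFLiteModel interface:

    % Minimal inference sketch (model file name is a placeholder).
    net = loadTFLiteModel('mobilenet_v1_1.0_224.tflite');
    % Build a dummy single-precision input matching the model's first input size.
    in = rand(net.InputSize{1}, 'single');
    % Run inference in MATLAB; this requires Deep Learning Toolbox.
    out = predict(net, in);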
Third-Party Hardware and Software
- Deployment Platform: MATLAB host computer or ARM® processor.
- Software Libraries: TensorFlow Lite version 2.15.0 on the host computer or target. For information on building the library, see this post in MATLAB Answers™: How do I build TensorFlow Lite for Deep Learning C++ code generation and deployment. Only models whose input and output data types are supported can be used; multi-input networks are not supported. TensorFlow Lite models are forward and backward compatible, so if your model was created using a different version of the library but contains only layers that are available in version 2.15.0, you can still generate code and deploy your model.
- Operating System Support: Windows® and Linux® only. CentOS and Red Hat® Linux distributions are not supported.
- Supported Compilers: MATLAB Coder locates and uses a supported installed compiler. For generating MEX on the Windows platform, use one of the compilers from the supported compilers list. For the list of supported compilers on the Linux platform, see Supported and Compatible Compilers on the MathWorks® website. The C++ compiler must support C++17. A code generation sketch follows this list.
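To illustrate the code generation workflow that these requirements support, here is a minimal, hypothetical sketch. The entry-point function tflite_predict, the model file name, and the input size are all placeholders; the configuration calls follow the MATLAB Coder deep learning workflow with the TensorFlow Lite target library:

    % tflite_predict.m -- hypothetical entry-point function.
    function out = tflite_predict(in)
        persistent net
        if isempty(net)
            % Load the TensorFlow Lite model once (placeholder file name).
            net = loadTFLiteModel('model.tflite');
        end
        out = predict(net, in);
    end

    % Generate a MEX function on the host that calls the TensorFlow Lite runtime.
    cfg = coder.config('mex');
    cfg.TargetLang = 'C++';
    cfg.DeepLearningConfig = coder.DeepLearningConfig('tflite');
    codegen -config cfg tflite_predict -args {ones(224,224,3,'single')}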
Environment Variables
MATLAB Coder uses environment variables to locate the libraries required to generate code for deep learning networks.
For deployment on the MATLAB host computer, set these environment variables on the host:

- TFLITE_PATH: Location of the TensorFlow Lite library directory.
- LD_LIBRARY_PATH: Location of the run-time shared library, for example, TFLITE_PATH/lib/tensorflow/lite. (Linux platform only.)
- PATH: Location of the run-time shared library, for example, TFLITE_PATH\lib\tensorflow\lite. (Windows platform only.)
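For example, on a Linux host you can set TFLITE_PATH from within MATLAB with the setenv function; the /opt/tflite install location below is hypothetical. Note that LD_LIBRARY_PATH is normally exported in the shell before MATLAB starts, because the dynamic linker reads it at process startup:

    % Hypothetical library location on a Linux host.
    setenv('TFLITE_PATH', '/opt/tflite');
    % LD_LIBRARY_PATH is usually set in the shell before launching MATLAB;
    % this call is shown only to document the expected value.
    setenv('LD_LIBRARY_PATH', ...
        [getenv('LD_LIBRARY_PATH') ':' getenv('TFLITE_PATH') '/lib/tensorflow/lite']);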
For deployment on an ARM processor, set these environment variables on the target hardware board:

- TFLITE_PATH: Location of the TensorFlow Lite library directory.
- LD_LIBRARY_PATH: Location of the run-time shared library, for example, TFLITE_PATH/lib/tensorflow/lite.
- TFLITE_MODEL_PATH: Location of the TensorFlow Lite model that you intend to deploy.
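Putting the pieces together, a hypothetical deployment sketch for a Raspberry Pi (one common ARM target) might look like the following. The entry-point function from the earlier sketch, the custom main file, and the board configuration are all assumptions; coder.hardware requires a Raspberry Pi support package, and the environment variables above must already be set on the board:

    % Generate a standalone executable for a Raspberry Pi target.
    cfg = coder.config('exe');
    cfg.TargetLang = 'C++';
    cfg.DeepLearningConfig = coder.DeepLearningConfig('tflite');
    % Board credentials come from the support package setup.
    cfg.Hardware = coder.hardware('Raspberry Pi');
    % main.cpp is a hypothetical hand-written main that calls the generated code;
    % on the board, the model is located through the TFLITE_MODEL_PATH variable.
    cfg.CustomSource = 'main.cpp';
    cfg.CustomInclude = pwd;
    codegen -config cfg tflite_predict -args {ones(224,224,3,'single')}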
See Also
loadTFLiteModel | predict | TFLiteModel
Related Topics
- Generate Code for TensorFlow Lite (TFLite) Model and Deploy on Raspberry Pi
- Deploy Classification Application Using Mobilenet-V3 TensorFlow Lite Model on Host and Raspberry Pi
- Deploy Pose Estimation Application Using TensorFlow Lite Model (TFLite) Model on Host and Raspberry Pi
- Deploy Super Resolution Application That Uses TensorFlow Lite (TFLite) Model on Host and Raspberry Pi