Parameters to configure deep learning code generation with the ARM Compute Library


The coder.ARMNEONConfig object contains ARM® Compute Library and target specific parameters that codegen uses for generating C++ code for deep neural networks.

To use a coder.ARMNEONConfig object for code generation, assign it to the DeepLearningConfig property of a code generation configuration object that you pass to codegen.


Create an ARM NEON configuration object by using the coder.DeepLearningConfig function with target library set as 'arm-compute'.
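For example, a minimal configuration might look like the following (the static-library target here is illustrative; any code generation configuration object works the same way):

dlcfg = coder.DeepLearningConfig('arm-compute');
cfg = coder.config('lib');
cfg.DeepLearningConfig = dlcfg;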


Properties

Version of ARM Compute Library used on the target hardware, specified as a character vector or string scalar. If you set ArmComputeVersion to a version later than '20.02.1', ArmComputeVersion is set to '20.02.1'.

ARM architecture supported in the target hardware, specified as a character vector or string scalar. The specified architecture must be the same as the architecture for the ARM Compute Library on the target hardware.

ArmArchitecture must be specified for these cases:

  • You do not use a hardware support package (the Hardware property of the code generation configuration object is empty).

  • You use a hardware support package, but generate code only.
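For example, when generating code without a hardware support package, you might set the architecture explicitly; the value shown is illustrative and must match the ARM Compute Library build on your board:

dlcfg = coder.DeepLearningConfig('arm-compute');
dlcfg.ArmArchitecture = 'armv7';  % or 'armv8', matching the target hardware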

Precision of the inference computations in supported layers, specified as 'fp32' for 32-bit floating point or 'int8' for 8-bit integer. Default value is 'fp32'.
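For instance, to generate 8-bit integer inference code, set the property on the deep learning configuration object:

dlcfg = coder.DeepLearningConfig('arm-compute');
dlcfg.DataType = 'int8';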

Location of the MAT-file containing the calibration data, specified as a character vector or string scalar. Default value is ''. This option applies only when DataType is set to 'int8'.

When performing quantization, the calibrate (Deep Learning Toolbox) function exercises the network and collects the dynamic ranges of the weights and biases in the convolution and fully connected layers of the network and the dynamic ranges of the activations in all layers of the network. To generate code for the optimized network, save the results from the calibrate function to a MAT-file and specify the location of this MAT-file to the code generator using this property. For more information, see Generate int8 Code for Deep Learning Networks.
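A sketch of this workflow, assuming net is a trained network and calData is a calibration datastore (the variable and file names here are illustrative):

% Calibrate the network and save the quantization object to a MAT-file.
quantObj = dlquantizer(net, 'ExecutionEnvironment', 'CPU');
calResults = calibrate(quantObj, calData);
save('squeezenetQuantObj.mat', 'quantObj');

% Point the code generator at the saved calibration results.
dlcfg = coder.DeepLearningConfig('arm-compute');
dlcfg.DataType = 'int8';
dlcfg.CalibrationResultFile = 'squeezenetQuantObj.mat';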

Name of target library, specified as a character vector.


Examples

Create an entry-point function squeezenet_predict that uses the coder.loadDeepLearningNetwork function to load the squeezenet (Deep Learning Toolbox) object.

function out = squeezenet_predict(in)
%#codegen

persistent mynet;
if isempty(mynet)
    mynet = coder.loadDeepLearningNetwork('squeezenet', 'squeezenet');
end

out = predict(mynet,in);
end

Create a coder.config configuration object for generation of a static library.

cfg = coder.config('lib');

Set the target language to C++. Specify that you want to generate only source code.

cfg.TargetLang = 'C++';
cfg.GenCodeOnly = true;

Create a coder.ARMNEONConfig deep learning configuration object. Assign it to the DeepLearningConfig property of the cfg configuration object.

dlcfg = coder.DeepLearningConfig('arm-compute');
dlcfg.ArmArchitecture = 'armv8';
dlcfg.ArmComputeVersion = '20.02.1';
cfg.DeepLearningConfig = dlcfg;

Use the -config option of the codegen function to specify the cfg configuration object. The codegen function must determine the size, class, and complexity of MATLAB® function inputs. Use the -args option to specify the size of the input to the entry-point function.

codegen -args {ones(227,227,3,'single')} -config cfg squeezenet_predict

The codegen command places all the generated files in the codegen folder. The folder contains the C++ code for the entry-point function (squeezenet_predict.cpp), header and source files containing the C++ class definitions for the neural network, and the weight and bias files.

Version History

Introduced in R2019a