quantizationDetails

Display quantization details for a neural network

Since R2022a

Description

qDetails = quantizationDetails(net) returns the quantization details of the neural network net as a structure with these fields:

  • IsQuantized — Returns 1 (true) if the network is quantized; otherwise, returns 0 (false)

  • TargetLibrary — Target library for code generation

  • QuantizedLayerNames — List of quantized layers

  • QuantizedLearnables — Quantized network learnable parameters
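
For example, a minimal usage sketch (assuming a trained network is already stored in a variable named net):

qDetails = quantizationDetails(net);
if qDetails.IsQuantized
    disp(qDetails.QuantizedLayerNames)   % names of the quantized layers
end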

Examples

This example shows how to display the quantization details for a neural network.

Load the pretrained network. net is a SqueezeNet convolutional neural network that has been retrained using transfer learning to classify images in the MerchData data set.

load squeezenetmerch
net
net = 
  DAGNetwork with properties:

         Layers: [68×1 nnet.cnn.layer.Layer]
    Connections: [75×2 table]
     InputNames: {'data'}
    OutputNames: {'new_classoutput'}

Use the quantizationDetails function to see that the network is not quantized.

qDetails_original = quantizationDetails(net)
qDetails_original = struct with fields:
            IsQuantized: 0
          TargetLibrary: ""
    QuantizedLayerNames: [0×0 string]
    QuantizedLearnables: [0×3 table]

The IsQuantized field returns 0 (false) because the original network uses the single-precision floating-point data type.
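
As an illustrative check (not part of the original example, and assuming the second layer of this SqueezeNet network is the first convolution layer), you can confirm the storage type of a learnable parameter directly:

% Weights of the first convolution layer are stored as single precision
class(net.Layers(2).Weights)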

Unzip and load the MerchData images as an image datastore. Define an augmentedImageDatastore object to resize the data for the network, and split the data into calibration and validation data sets to use for quantization.

unzip('MerchData.zip');
imds = imageDatastore('MerchData', ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');
[calData, valData] = splitEachLabel(imds, 0.7, 'randomized');
aug_calData = augmentedImageDatastore([227 227], calData);
aug_valData = augmentedImageDatastore([227 227], valData);

Create a dlquantizer object and specify the network to quantize. Set the execution environment to MATLAB.

quantObj = dlquantizer(net,'ExecutionEnvironment','MATLAB');

Use the calibrate function to exercise the network with sample inputs and collect range information.

calResults = calibrate(quantObj,aug_calData);
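
The output calResults is a table of the range statistics gathered during calibration. As an optional check (illustrative, not part of the original example), you can preview the first few rows:

% Preview the collected min/max statistics for layer parameters and activations
head(calResults)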

Use the quantize method to quantize the network object and return a simulatable quantized network.

qNet = quantize(quantObj)
qNet = 
Quantized DAGNetwork with properties:

         Layers: [68×1 nnet.cnn.layer.Layer]
    Connections: [75×2 table]
     InputNames: {'data'}
    OutputNames: {'new_classoutput'}
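
The quantized network supports the same inference functions as the original network. As an optional sanity check (a sketch, not part of the original example), classify the validation data with the quantized network:

% Classify validation images with the quantized network and compute accuracy
YPred = classify(qNet,aug_valData);
accuracy = mean(YPred == valData.Labels)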

Use the quantizationDetails function to extract the quantization details.

qDetails = quantizationDetails(qNet)
qDetails = struct with fields:
            IsQuantized: 1
          TargetLibrary: "none"
    QuantizedLayerNames: [26×1 string]
    QuantizedLearnables: [52×3 table]

Inspect the QuantizedLayerNames field to see a list of the quantized layers.

qDetails.QuantizedLayerNames
ans = 26×1 string
    "conv1"
    "fire2-squeeze1x1"
    "fire2-expand1x1"
    "fire2-expand3x3"
    "fire3-squeeze1x1"
    "fire3-expand1x1"
    "fire3-expand3x3"
    "fire4-squeeze1x1"
    "fire4-expand1x1"
    "fire4-expand3x3"
    "fire5-squeeze1x1"
    "fire5-expand1x1"
    "fire5-expand3x3"
    "fire6-squeeze1x1"
    "fire6-expand1x1"
    "fire6-expand3x3"
    "fire7-squeeze1x1"
    "fire7-expand1x1"
    "fire7-expand3x3"
    "fire8-squeeze1x1"
    "fire8-expand1x1"
    "fire8-expand3x3"
    "fire9-squeeze1x1"
    "fire9-expand1x1"
    "fire9-expand3x3"
    "new_conv"

Inspect the QuantizedLearnables field to see the quantized values for learnable parameters in the network.

qDetails.QuantizedLearnables
ans=52×3 table
          Layer           Parameter          Value       
    __________________    _________    __________________

    "conv1"               "Weights"    {3×3×3×64   int8 }
    "conv1"               "Bias"       {1×1×64     int32}
    "fire2-squeeze1x1"    "Weights"    {1×1×64×16  int8 }
    "fire2-squeeze1x1"    "Bias"       {1×1×16     int32}
    "fire2-expand1x1"     "Weights"    {1×1×16×64  int8 }
    "fire2-expand1x1"     "Bias"       {1×1×64     int32}
    "fire2-expand3x3"     "Weights"    {3×3×16×64  int8 }
    "fire2-expand3x3"     "Bias"       {1×1×64     int32}
    "fire3-squeeze1x1"    "Weights"    {1×1×128×16 int8 }
    "fire3-squeeze1x1"    "Bias"       {1×1×16     int32}
    "fire3-expand1x1"     "Weights"    {1×1×16×64  int8 }
    "fire3-expand1x1"     "Bias"       {1×1×64     int32}
    "fire3-expand3x3"     "Weights"    {3×3×16×64  int8 }
    "fire3-expand3x3"     "Bias"       {1×1×64     int32}
    "fire4-squeeze1x1"    "Weights"    {1×1×128×32 int8 }
    "fire4-squeeze1x1"    "Bias"       {1×1×32     int32}
      ⋮
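
You can also pull the quantized values of a specific parameter out of the table. For example, a short sketch using the conv1 layer:

% Extract the int8 weights of the conv1 layer from the learnables table
ql = qDetails.QuantizedLearnables;
idx = ql.Layer == "conv1" & ql.Parameter == "Weights";
conv1Weights = ql.Value{idx};   % 3-by-3-by-3-by-64 int8 array
class(conv1Weights)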

Input Arguments

net — Quantized neural network, specified as a dlnetwork, SeriesNetwork, or DAGNetwork object.

Version History

Introduced in R2022a
