trainReidentificationNetwork

Train re-identification (ReID) deep learning network

Since R2024a

Description

trainedReID = trainReidentificationNetwork(trainingData,reID,options) trains the specified ReID network reID to output appearance feature vectors. The options input specifies the network training parameters.
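For example, a minimal training call might look like the following sketch. The folder name and option values are illustrative, and reID is assumed to be a reidentificationNetwork object you have already configured.

```matlab
% Labeled training images: one subfolder per object identity.
trainingData = imageDatastore("personCrops", ...
    IncludeSubfolders=true,LabelSource="foldernames");

% Standard SGDM training options; the values shown are examples only.
options = trainingOptions("sgdm", ...
    MaxEpochs=10, ...
    InitialLearnRate=1e-3, ...
    MiniBatchSize=64);

% reID is an existing reidentificationNetwork object (creation not shown).
trainedReID = trainReidentificationNetwork(trainingData,reID,options);
```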

Note

This functionality requires Deep Learning Toolbox™.

[trainedReID,info] = trainReidentificationNetwork(___) returns information on training progress, such as training loss, for each iteration using the input arguments from the previous syntax.

[___] = trainReidentificationNetwork(___,Name=Value) specifies options using one or more name-value arguments, in addition to any combination of arguments from previous syntaxes. For example, FreezeBackbone=false specifies not to freeze the backbone of the network during training.

Input Arguments


Training data of RGB or grayscale images, specified as an imageDatastore object with a populated Labels property, or a datastore whose read function returns a B-by-2 cell array, where B is the number of images read from the datastore. Each row of the cell array is of the form {Image Class}.

Image — RGB image, stored as an H-by-W-by-3 numeric array, or grayscale image, stored as an H-by-W matrix.

Class — String or categorical that contains the object class name for the corresponding image in Image. All categorical data returned by the datastore must have the same categories.
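As a sketch of the custom-datastore format, a transform of a labeled imageDatastore can emit rows of the required {Image Class} form. The folder name here is illustrative.

```matlab
% Start from labeled images: one subfolder per object identity.
imds = imageDatastore("personCrops", ...
    IncludeSubfolders=true,LabelSource="foldernames");

% Wrap each read as a cell row of the form {Image Class}.
ds = transform(imds, ...
    @(img,info) deal({img,info.Label},info),IncludeInfo=true);

% Each call to read now returns a 1-by-2 cell array {Image Class}.
sample = read(ds);
```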

Re-identification network, specified as a reidentificationNetwork object.

Training options, specified as a TrainingOptionsSGDM, TrainingOptionsRMSProp, or TrainingOptionsADAM object returned by the trainingOptions (Deep Learning Toolbox) function. Use that function to specify the solver name and other options for network training.

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: trainReidentificationNetwork(trainingData,reID,options,FreezeBackbone=false) specifies not to freeze the backbone of the network during training.

Loss function to use in the ReID network, specified as "additive-margin-softmax", "cross-entropy", or "cosine-softmax". Use cross-entropy loss, "cross-entropy", to train the ReID network on data that contains very distinct objects, with large variations in shape, color, or texture. Use additive-margin-softmax loss, "additive-margin-softmax", or cosine-softmax loss, "cosine-softmax", to train the ReID network on data that contains both subtle and distinct objects.
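For instance, this sketch selects cross-entropy loss explicitly; trainingData, reID, and options are assumed from earlier setup.

```matlab
% Cross-entropy loss suits data with very distinct object classes.
trainedReID = trainReidentificationNetwork(trainingData,reID,options, ...
    LossFunction="cross-entropy");
```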

Freeze the backbone of the ReID network during training, specified as a numeric or logical true (1) or false (0). When true, the function freezes the ReID network up to the feature layer, as specified by the FeatureLayer property of the reidentificationNetwork object, and trains only the remaining layers. When false, the function trains the entire network, including the backbone, which slows training.

Tip

If you adjust the input size of the backbone network, or use a backbone network that has not been pretrained, set FreezeBackbone to false so that the backbone weights are trained.
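For example, this sketch trains the full network, including a backbone that has not been pretrained; the inputs are assumed from earlier setup.

```matlab
% Train every layer, including the backbone. Training is slower, but this
% is necessary when the backbone weights are not already meaningful.
trainedReID = trainReidentificationNetwork(trainingData,reID,options, ...
    FreezeBackbone=false);
```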

ReID training experiment monitoring, specified as an experiments.Monitor (Deep Learning Toolbox) object for use with the Experiment Manager (Deep Learning Toolbox) app. You can use this object to track the progress of training, update information fields in the training results table, record values of the metrics used by the training, or produce training plots. For more information on using this app, see the Train Object Detectors in Experiment Manager example.

The app monitors this information during training:

  • Training loss at each iteration

  • Learning rate at each iteration

  • Validation loss at each iteration, if the options input contains validation data

Margin in the loss function, specified as a scalar in the range [0, 1]. To specify this argument, you must specify LossFunction as "additive-margin-softmax".

Scale in the loss function, specified as a positive scalar. To specify this argument, you must specify LossFunction as "additive-margin-softmax".
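A sketch combining these arguments; the Margin and Scale values are illustrative, not recommended defaults.

```matlab
% Margin and Scale apply only with additive-margin-softmax loss.
trainedReID = trainReidentificationNetwork(trainingData,reID,options, ...
    LossFunction="additive-margin-softmax", ...
    Margin=0.35, ...
    Scale=30);
```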

Output Arguments


Trained ReID network, returned as a reidentificationNetwork object. The trained network outputs feature vectors of the size specified by its FeatureLength property.

Training progress information, returned as a structure with these fields.

  • TrainingLoss — Training loss at each iteration

  • BaseLearnRate — Learning rate at each iteration

  • ValidationLoss — Validation loss at each iteration

  • FinalValidationLoss — Final validation loss at the end of the training

Each field contains a numeric vector with one element per training iteration. If the function does not calculate a metric for a specific iteration, it assigns a value of NaN for that iteration. The structure contains the ValidationLoss and FinalValidationLoss fields only when options specifies validation data.
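For example, you can plot the per-iteration training loss from the second output; this sketch assumes the inputs from earlier setup.

```matlab
[trainedReID,info] = trainReidentificationNetwork(trainingData,reID,options);

% Visualize how the training loss evolves across iterations.
plot(info.TrainingLoss)
xlabel("Iteration")
ylabel("Training loss")
```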

Tips

  • To improve ReID accuracy, increase the number of images you use to train the network. You can expand the training data set using data augmentation. For information on how to apply data augmentation for preprocessing, see Preprocess Images for Deep Learning (Deep Learning Toolbox).
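One possible augmentation sketch increases the variety of the training data with random horizontal flips applied on read. The folder name and the helper function are hypothetical; adapt them to your preprocessing pipeline.

```matlab
imds = imageDatastore("personCrops", ...
    IncludeSubfolders=true,LabelSource="foldernames");

% Apply a random flip to each image as it is read, keeping the label
% so each row has the required {Image Class} form.
augDS = transform(imds, ...
    @(img,info) deal({randomFlip(img),info.Label},info),IncludeInfo=true);

function out = randomFlip(img)
% Flip roughly half of the images left to right (hypothetical helper).
if rand > 0.5
    out = fliplr(img);
else
    out = img;
end
end
```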

Version History

Introduced in R2024a