
testnet

Test deep learning neural network

Since R2024b

    Description

    metricValues = testnet(net,images,metrics) tests the neural network net by evaluating it with the image data and targets specified by images and the metrics specified by metrics.


    metricValues = testnet(net,images,targets,metrics) tests the neural network with the image data specified by images and the targets specified by targets.

    metricValues = testnet(net,sequences,metrics) tests the neural network with the sequence data and targets specified by sequences.

    metricValues = testnet(net,sequences,targets,metrics) tests the neural network with the sequence data specified by sequences and the targets specified by targets.

    metricValues = testnet(net,features,metrics) tests the neural network with the feature data and targets specified by features.

    metricValues = testnet(net,features,targets,metrics) tests the neural network with the feature data specified by features and the targets specified by targets.

    metricValues = testnet(net,data,metrics) tests the neural network with other data layouts or combinations of different types of data.

    metricValues = testnet(net,data,targets,metrics) tests the neural network with the predictors specified by data and the targets specified by targets.

    metricValues = testnet(___,Name=Value) specifies additional options using one or more name-value arguments in addition to the input arguments in previous syntaxes. For example, InputDataFormats="CB" specifies that the first and second dimensions of the input data correspond to the channel and batch dimensions, respectively.

    Examples


    Load a trained dlnetwork object. The MAT file digitsNet.mat contains an image classification neural network specified by the variable net.

    load digitsNet

    Load the test data. The MAT file DigitsDataTest.mat contains the test images and labels specified by the variables XTest and labelsTest, respectively.

    load DigitsDataTest

    Test the neural network using the testnet function. For single-label classification, evaluate the accuracy. The accuracy is the percentage of correct predictions.

    accuracy = testnet(net,XTest,labelsTest,"accuracy")
    accuracy = 
    98.2200
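
    You can also evaluate several metrics in one call and return the results as a table. For example, this call (a sketch that reuses the same test data) evaluates the accuracy and F-score:

    metricValues = testnet(net,XTest,labelsTest,["accuracy","fscore"],OutputMode="table")

    The output metricValues is a table with one row and one named column per metric.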
    

    Input Arguments


    net

    Neural network, specified as a dlnetwork object.

    images

    Image data, specified as a numeric array, dlarray object, datastore, or minibatchqueue object.

    Tip

    For sequences of images, such as video data, use the sequences input argument.

    If you have data that fits in memory and does not require additional processing, then specifying the input data as numeric arrays is usually the easiest option. If you want to test a neural network with image files stored on your system, or want to apply additional processing, then using datastores is usually the easiest option. For neural networks with multiple inputs or multiple outputs, you must use a TransformedDatastore, CombinedDatastore, or minibatchqueue object.

    Tip

    Neural networks expect input data with a specific layout. For example, image classification networks typically expect image representations to be h-by-w-by-c numeric arrays, where h, w, and c are the height, width, and number of channels of the images, respectively. Most neural networks have an input layer that specifies the expected layout of the data.

    Most datastores and functions output data in the layout that the network expects. If your data is in a different layout than what the network expects, then indicate that your data has a different layout by using the InputDataFormats argument or by specifying input data as a formatted dlarray object. Adjusting the InputDataFormats argument is usually easier than preprocessing the input data.

    For neural networks that do not have input layers, you must use the InputDataFormats argument or use formatted dlarray objects.

    For more information, see Deep Learning Data Formats.

    Numeric Array or dlarray Object

    For data that fits in memory and does not require additional processing, you can specify a data set of images as a numeric array or a dlarray object. If you specify images as a numeric array or a dlarray object, then you must also specify the targets argument.

    The layouts of numeric arrays and unformatted dlarray objects depend on the type of image data, and must be consistent with the InputDataFormats argument.

    Most networks expect image data in these layouts:

    • 2-D images: h-by-w-by-c-by-N array, where h, w, and c are the height, width, and number of channels of the images, respectively, and N is the number of images. Data in this layout has the data format "SSCB" (spatial, spatial, channel, batch).

    • 3-D images: h-by-w-by-d-by-c-by-N array, where h, w, d, and c are the height, width, depth, and number of channels of the images, respectively, and N is the number of images. Data in this layout has the data format "SSSCB" (spatial, spatial, spatial, channel, batch).

    For data in a different layout, indicate that your data has a different layout by using the InputDataFormats argument or use a formatted dlarray object. For more information, see Deep Learning Data Formats.
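
    For example, this sketch (with hypothetical sizes, assuming net accepts 28-by-28-by-3 images and has 10 classes) tests a network on a batch of 2-D images stored in the "SSCB" layout:

    XImages = rand(28,28,3,100,"single");     % h-by-w-by-c-by-N array of test images
    TLabels = categorical(randi(10,100,1));   % N-by-1 categorical vector of targets
    accuracy = testnet(net,XImages,TLabels,"accuracy");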

    Datastore

    Datastores read batches of images and targets. Datastores are best suited when you have data that does not fit in memory or when you want to apply transformations to the data.

    For image data, the testnet function supports these datastores:

    • ImageDatastore: Datastore of images saved on disk.

      Example usage: Test with images saved on your system, where the images are the same size. When the images are different sizes, use an augmentedImageDatastore object.

    • augmentedImageDatastore: Datastore that applies random affine geometric transformations, including resizing.

      Example usage: Test with images saved on disk, where the images are different sizes.

      When you test using an augmented image datastore, do not apply additional augmentations such as rotation, reflection, shear, and translation.

    • TransformedDatastore: Datastore that transforms batches of data read from an underlying datastore using a custom transformation function.

      Example usage:

      • Transform datastores with outputs not supported by the testnet function.

      • Apply custom transformations to datastore output.

    • CombinedDatastore: Datastore that reads from two or more underlying datastores.

      Example usage: Test using a network with multiple inputs.

    • Custom mini-batch datastore: Custom datastore that returns mini-batches of data.

      Example usage: Test using data in a layout that other datastores do not support.

      For details, see Develop Custom Mini-Batch Datastore.

    To specify the targets, the datastore must output cell arrays or tables with numInputs+numOutputs columns, where numInputs and numOutputs are the number of network inputs and outputs, respectively. The first numInputs columns correspond to the network inputs. The last numOutputs columns correspond to the network outputs. The InputNames and OutputNames properties of the neural network specify the order of the input and output data, respectively.

    Tip

    ImageDatastore objects allow batch reading of JPG or PNG image files using prefetching. For efficient preprocessing of images for deep learning, including image resizing, use an augmentedImageDatastore object. Do not use the ReadFcn property of ImageDatastore objects. If you set the ReadFcn property to a custom function, then the ImageDatastore object does not prefetch image files, and is usually significantly slower.

    You can use other built-in datastores for testing deep learning neural networks by using the transform and combine functions. These functions can convert the data read from datastores to the layout required by the testnet function. The required layout of the datastore output depends on the neural network architecture. For more information, see Datastore Customization.
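
    For example, this sketch (using a hypothetical image folder name) combines an image datastore of predictors with an array datastore of the corresponding labels, so that each read returns one column of predictors and one column of targets:

    imds = imageDatastore("digitTestImages", ...
        IncludeSubfolders=true,LabelSource="foldernames");   % hypothetical image folder
    dsT = arrayDatastore(imds.Labels);                       % N-by-1 categorical labels
    cds = combine(imds,dsT);                                 % each read: {image, label}
    accuracy = testnet(net,cds,"accuracy");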

    minibatchqueue Object

    For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object.

    If you specify data as a minibatchqueue object, then the testnet function ignores the MiniBatchSize property of the object and uses the MiniBatchSize argument instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".

    Note

    This argument supports complex-valued predictors and targets.

    sequences

    Sequence or time series data, specified as a numeric array, a cell array of numeric arrays, a dlarray object, a cell array of dlarray objects, a datastore, or a minibatchqueue object.

    If you have sequences of the same length that fit in memory and do not require additional processing, then specifying the input data as a numeric array is usually the easiest option. If you have sequences of different lengths that fit in memory and do not require additional processing, then specifying the input data as a cell array of numeric arrays is usually the easiest option. If you want to test a neural network with sequences stored on your system, or want to apply additional processing such as custom transformations, then using datastores is usually the easiest option. For neural networks with multiple inputs, you must use a TransformedDatastore or CombinedDatastore object.

    Tip

    Neural networks expect input data with a specific layout. For example, vector-sequence classification networks typically expect vector-sequence representations to be t-by-c arrays, where t and c are the number of time steps and channels of the sequences, respectively. Neural networks typically have an input layer that specifies the expected layout of the data.

    Most datastores and functions output data in the layout that the network expects. If your data is in a different layout than what the network expects, then indicate that your data has a different layout by using the InputDataFormats argument or by specifying input data as a formatted dlarray object. Adjusting the InputDataFormats argument is usually easier than preprocessing the input data.

    For neural networks that do not have input layers, you must use the InputDataFormats argument or use formatted dlarray objects.

    For more information, see Deep Learning Data Formats.

    Numeric Array, dlarray Object, or Cell Array

    For data that fits in memory and does not require additional processing like custom transformations, you can specify a single sequence as a numeric array or dlarray object, or a data set of sequences as a cell array of numeric arrays or dlarray objects. If you specify sequences as a numeric array, cell array, or dlarray object, then you must also specify the targets argument.

    For cell array input, the cell array must be an N-by-1 cell array of numeric arrays or dlarray objects, where N is the number of observations. The size and shape of the numeric arrays or dlarray objects that represent sequences depend on the type of sequence data and must be consistent with the InputDataFormats argument.

    This table describes the expected layout of data for a neural network with a sequence input layer.

    • Vector sequences: s-by-c matrices, where s and c are the numbers of time steps and channels (features) of the sequences, respectively.

    • 1-D image sequences: h-by-c-by-s arrays, where h and c correspond to the height and number of channels of the images, respectively, and s is the sequence length.

    • 2-D image sequences: h-by-w-by-c-by-s arrays, where h, w, and c correspond to the height, width, and number of channels of the images, respectively, and s is the sequence length.

    • 3-D image sequences: h-by-w-by-d-by-c-by-s arrays, where h, w, d, and c correspond to the height, width, depth, and number of channels of the 3-D images, respectively, and s is the sequence length.

    For data in a different layout, indicate that your data has a different layout by using the InputDataFormats argument or use a formatted dlarray object. For more information, see Deep Learning Data Formats.
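
    For example, this sketch (with hypothetical sizes, assuming net is a 4-class vector-sequence classification network with three input channels) tests the network on sequences of different lengths stored as an N-by-1 cell array of s-by-c matrices:

    numObservations = 50;
    numChannels = 3;
    XSequences = cell(numObservations,1);
    for i = 1:numObservations
        XSequences{i} = rand(randi([10 20]),numChannels);    % s-by-c matrix
    end
    TLabels = categorical(randi(4,numObservations,1));       % one label per sequence
    accuracy = testnet(net,XSequences,TLabels,"accuracy");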

    Datastore

    Datastores read batches of sequences and targets. Datastores are best suited when you have data that does not fit in memory or when you want to apply transformations to the data.

    For sequence and time-series data, the testnet function supports these datastores:

    • TransformedDatastore: Datastore that transforms batches of data read from an underlying datastore using a custom transformation function.

      Example usage:

      • Transform datastores with outputs not supported by the testnet function.

      • Apply custom transformations to datastore output.

    • CombinedDatastore: Datastore that reads from two or more underlying datastores.

      Example usage: Combine predictors and targets from different data sources.

    • Custom mini-batch datastore: Custom datastore that returns mini-batches of data.

      Example usage: Test a neural network using data in a layout that other datastores do not support.

      For details, see Develop Custom Mini-Batch Datastore.

    To specify the targets, the datastore must output cell arrays or tables with numInputs+numOutputs columns, where numInputs and numOutputs are the number of network inputs and outputs, respectively. The first numInputs columns correspond to the network inputs. The last numOutputs columns correspond to the network outputs. The InputNames and OutputNames properties of the neural network specify the order of the input and output data, respectively.

    You can use other built-in datastores by using the transform and combine functions. These functions can convert the data read from datastores to the layout required by the testnet function. For example, you can transform and combine data read from in-memory arrays and CSV files using ArrayDatastore and TabularTextDatastore objects, respectively. The required layout of the datastore output depends on the neural network architecture. For more information, see Datastore Customization.

    minibatchqueue Object

    For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object.

    If you specify data as a minibatchqueue object, then the testnet function ignores the MiniBatchSize property of the object and uses the MiniBatchSize argument instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".

    Note

    This argument supports complex-valued predictors and targets.

    features

    Feature or tabular data, specified as a numeric array, datastore, table, or minibatchqueue object.

    If you have data that fits in memory and does not require additional processing, then specifying the input data as a numeric array or table is usually the easiest option. If you want to test with feature or tabular data stored on your system, or want to apply additional processing such as custom transformations, then using datastores is usually the easiest option. For neural networks with multiple inputs, you must use a TransformedDatastore or CombinedDatastore object.

    Tip

    Neural networks expect input data with a specific layout. For example, feature classification networks typically expect feature and tabular data representations to be 1-by-c vectors, where c is the number of features of the data. Neural networks typically have an input layer that specifies the expected layout of the data.

    Most datastores and functions output data in the layout that the network expects. If your data is in a different layout than what the network expects, then indicate that your data has a different layout by using the InputDataFormats argument or by specifying input data as a formatted dlarray object. Adjusting the InputDataFormats argument is usually easier than preprocessing the input data.

    For neural networks that do not have input layers, you must use the InputDataFormats argument or use formatted dlarray objects.

    For more information, see Deep Learning Data Formats.

    Numeric Array or dlarray Object

    For feature data that fits in memory and does not require additional processing like custom transformations, you can specify feature data as a numeric array. If you specify feature data as a numeric array, then you must also specify the targets argument.

    The layouts of numeric arrays and unformatted dlarray objects must be consistent with the InputDataFormats argument. Most networks with feature input expect input data specified as an N-by-numFeatures array, where N is the number of observations and numFeatures is the number of features of the input data.
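
    For example, this sketch (with hypothetical sizes, assuming net is a 3-class feature classification network with 10 input features) tests the network on an N-by-numFeatures array:

    XFeatures = rand(200,10);                 % N-by-numFeatures array
    TLabels = categorical(randi(3,200,1));    % N-by-1 categorical targets
    accuracy = testnet(net,XFeatures,TLabels,"accuracy");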

    Table

    For feature data that fits in memory and does not require additional processing like custom transformations, you can specify feature data as a table. If you specify feature data as a table, then you must not specify the targets argument.

    To specify feature data as a table, specify a table with numObservations rows and numFeatures+1 columns, where numObservations and numFeatures are the number of observations and the number of features of the input data, respectively. The testnet function uses the first numFeatures columns as the input features and uses the last column as the targets.
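
    For example, this sketch (with the same hypothetical feature data) stores the predictors and targets in one table, so the targets argument is not needed:

    XFeatures = rand(200,10);                 % N-by-numFeatures array
    TLabels = categorical(randi(3,200,1));    % N-by-1 categorical targets
    tbl = array2table(XFeatures);             % first numFeatures columns are the predictors
    tbl.Label = TLabels;                      % last column holds the targets
    accuracy = testnet(net,tbl,"accuracy");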

    Datastore

    Datastores read batches of feature data and targets. Datastores are best suited when you have data that does not fit in memory or when you want to apply transformations to the data.

    For feature and tabular data, the testnet function supports these datastores:

    • TransformedDatastore: Datastore that transforms batches of data read from an underlying datastore using a custom transformation function.

      Example usage:

      • Test neural networks with multiple inputs.

      • Transform datastores with outputs not supported by the testnet function.

      • Apply custom transformations to datastore output.

    • CombinedDatastore: Datastore that reads from two or more underlying datastores.

      Example usage:

      • Test neural networks with multiple inputs.

      • Combine predictors and targets from different data sources.

    • Custom mini-batch datastore: Custom datastore that returns mini-batches of data.

      Example usage: Test a neural network using data in a layout that other datastores do not support.

      For details, see Develop Custom Mini-Batch Datastore.

    To specify the targets, the datastore must output cell arrays or tables with numInputs+numOutputs columns, where numInputs and numOutputs are the number of network inputs and outputs, respectively. The first numInputs columns correspond to the network inputs. The last numOutputs columns correspond to the network outputs. The InputNames and OutputNames properties of the neural network specify the order of the input and output data, respectively.

    You can use other built-in datastores for testing neural networks by using the transform and combine functions. These functions can convert the data read from datastores to the table or cell array format required by the testnet function. For more information, see Datastore Customization.

    minibatchqueue Object

    For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object.

    If you specify data as a minibatchqueue object, then the testnet function ignores the MiniBatchSize property of the object and uses the MiniBatchSize argument instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".

    Note

    This argument supports complex-valued predictors and targets.

    data

    Generic data or combinations of data types, specified as a numeric array, dlarray object, datastore, or minibatchqueue object.

    If you have data that fits in memory and does not require additional processing, then specifying the input data as a numeric array is usually the easiest option. If you want to test with data stored on your system, or want to apply additional processing, then using datastores is usually the easiest option. For neural networks with multiple inputs, you must use a TransformedDatastore or CombinedDatastore object.

    Tip

    Neural networks expect input data with a specific layout. For example, vector-sequence classification networks typically expect vector-sequence representations to be t-by-c arrays, where t and c are the number of time steps and channels of the sequences, respectively. Neural networks typically have an input layer that specifies the expected layout of the data.

    Most datastores and functions output data in the layout that the network expects. If your data is in a different layout than what the network expects, then indicate that your data has a different layout by using the InputDataFormats argument or by specifying input data as a formatted dlarray object. Adjusting the InputDataFormats argument is usually easier than preprocessing the input data.

    For neural networks that do not have input layers, you must use the InputDataFormats argument or use formatted dlarray objects.

    For more information, see Deep Learning Data Formats.

    Numeric Array or dlarray Object

    For data that fits in memory and does not require additional processing like custom transformations, you can specify data as a numeric array or a dlarray object. If you specify data as a numeric array or dlarray object, then you must also specify the targets argument.

    For a neural network with an inputLayer object, the expected layout of the input data is given by the InputFormat property of the layer.

    For data in a different layout, indicate that your data has a different layout by using the InputDataFormats argument or use a formatted dlarray object. For more information, see Deep Learning Data Formats.

    Datastores

    Datastores read batches of data and targets. Datastores are best suited when you have data that does not fit in memory or when you want to apply transformations to the data.

    The testnet function supports these datastores:

    • TransformedDatastore: Datastore that transforms batches of data read from an underlying datastore using a custom transformation function.

      Example usage:

      • Test neural networks with multiple inputs.

      • Transform outputs of datastores not supported by the testnet function to have the required format.

      • Apply custom transformations to datastore output.

    • CombinedDatastore: Datastore that reads from two or more underlying datastores.

      Example usage:

      • Test neural networks with multiple inputs.

      • Combine predictors and targets from different data sources.

    • Custom mini-batch datastore: Custom datastore that returns mini-batches of data.

      Example usage: Test a neural network using data in a format that other datastores do not support.

      For details, see Develop Custom Mini-Batch Datastore.

    To specify the targets, the datastore must output cell arrays or tables with numInputs+numOutputs columns, where numInputs and numOutputs are the number of network inputs and outputs, respectively. The first numInputs columns correspond to the network inputs. The last numOutputs columns correspond to the network outputs. The InputNames and OutputNames properties of the neural network specify the order of the input and output data, respectively.

    You can use other built-in datastores by using the transform and combine functions. These functions can convert the data read from datastores to the table or cell array format required by testnet. For more information, see Datastore Customization.

    minibatchqueue Object

    For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object.

    If you specify data as a minibatchqueue object, then the testnet function ignores the MiniBatchSize property of the object and uses the MiniBatchSize argument instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".

    Note

    This argument supports complex-valued predictors and targets.

    targets

    Test targets, specified as a categorical array, numeric array, or cell array of sequences.

    To specify targets for networks with multiple outputs, specify both the predictors and targets in a single argument using the images, sequences, features, or data arguments.

    Tip

    Metric functions expect data with a specific layout. For example, for sequence-to-vector regression networks, the metric function typically expects target vectors to be represented as 1-by-R vectors, where R is the number of responses.

    Most datastores and functions output data in the layout that the metric function expects. If your target data is in a different layout than what the metric function expects, then indicate that your targets have a different layout by using the TargetDataFormats argument, by specifying the data as a minibatchqueue object and specifying the TargetDataFormats property, or by specifying the target data as a formatted dlarray object. It is usually easier to specify data formats than to preprocess the target data. If you specify both the TargetDataFormats argument and the TargetDataFormats minibatchqueue property, then they must match.

    For more information, see Deep Learning Data Formats.

    The expected layout of the targets depends on the metric function. The targets listed here are only a subset. The metric functions may support additional targets with different layouts such as targets with additional dimensions. For custom metric functions, the software uses the format information of the network output data to determine the type of target data and applies the corresponding layout in this table.

    Categorical labels

    N-by-1 categorical vector of labels, where N is the number of observations.

    Sequences of categorical labels

    • t-by-N categorical array, where t and N are the numbers of time steps and observations, respectively.

    • N-by-1 cell array of sequences, where N is the number of observations. The sequences are t-by-1 categorical vectors. The sequences can have different lengths.

    Class indices

    N-by-1 numeric vector of class indices, where N is the number of observations.

    Sequences of class indices

    • t-by-N matrix of class indices, where t and N are the numbers of time steps and observations, respectively.

    • N-by-1 cell array of sequences, where N is the number of observations. The sequences are t-by-1 numeric vectors of class indices. The sequences can have different lengths.

    Binary labels (single label)

    N-by-1 vector, where N is the number of observations.

    Binary labels (multilabel)

    N-by-c matrix, where N and c are the numbers of observations and classes, respectively.

    Numeric scalars

    N-by-1 vector, where N is the number of observations.

    Numeric vectors

    N-by-R matrix, where N is the number of observations and R is the number of responses.

    2-D images

    h-by-w-by-c-by-N numeric array, where h, w, and c are the height, width, and number of channels of the images, respectively, and N is the number of images.

    3-D images

    h-by-w-by-d-by-c-by-N numeric array, where h, w, d, and c are the height, width, depth, and number of channels of the images, respectively, and N is the number of images.

    Numeric sequences of scalars

    • t-by-1-by-N array, where t and N are the numbers of time steps and sequences, respectively.

    • N-by-1 cell array of sequences, where N is the number of sequences. The sequences are t-by-1 vectors, where t is the number of time steps. The sequences can have different lengths.

    Numeric sequences of vectors

    • t-by-c-by-N array, where t, c, and N are the numbers of time steps, channels, and sequences, respectively.

    • N-by-1 cell array of sequences, where N is the number of sequences. The sequences are t-by-c matrices, where t and c are the numbers of time steps and channels of the sequences, respectively. The sequences can have different lengths.

    Sequences of 1-D images

    • h-by-c-by-N-by-t array, where h, c, and t are the height, number of channels, and number of time steps of the sequences, respectively, and N is the number of sequences.

    • N-by-1 cell array of sequences, where N is the number of sequences. The sequences are h-by-c-by-t arrays, where h, t, and c are the height, number of time steps, and number of channels of the sequences, respectively. The sequences can have different lengths.

    Sequences of 2-D images

    • h-by-w-by-c-by-N-by-t array, where h, w, c, and t are the height, width, number of channels, and number of time steps of the sequences, respectively, and N is the number of sequences.

    • N-by-1 cell array of sequences, where N is the number of sequences. The sequences are h-by-w-by-c-by-t arrays, where h, w, t, and c are the height, width, number of time steps, and number of channels of the sequences, respectively. The sequences can have different lengths.

    Sequences of 3-D images

    • h-by-w-by-d-by-c-by-N-by-t array, where h, w, d, c, and t are the height, width, depth, number of channels, and number of time steps of the sequences, respectively, and N is the number of sequences.

    • N-by-1 cell array of sequences, where N is the number of sequences. The sequences are h-by-w-by-d-by-c-by-t arrays, where h, w, d, t, and c are the height, width, depth, number of time steps, and number of channels of the sequences, respectively. The sequences can have different lengths.

    For targets in a different layout, indicate that your targets have a different layout by using the TargetDataFormats argument or use a formatted dlarray object. For more information, see Deep Learning Data Formats.
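
    For example, this sketch (with hypothetical data, assuming net is a two-class sequence-to-sequence classification network with three input channels) specifies the targets as an N-by-1 cell array of t-by-1 categorical vectors that match the lengths of the predictor sequences:

    XSequences = {rand(15,3); rand(12,3)};                % predictors: t-by-c matrices
    TSequences = {categorical(randi(2,15,1),[1 2]); ...
                  categorical(randi(2,12,1),[1 2])};      % one label per time step
    accuracy = testnet(net,XSequences,TSequences,"accuracy");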

    metrics

    Metrics to evaluate, specified as a character vector or string scalar of a built-in metric name, a string array of names, a built-in or custom metric object, a function handle, a deep.DifferentiableFunction object, or a cell array of names, metric objects, and function handles:

    • Built-in metric or loss function name — Specify metrics as a string scalar, character vector, or a cell array or string array of one or more of these names:

      • Metrics:

        • "accuracy" — Accuracy

        • "auc" — Area under ROC curve (AUC)

        • "fscore" — F-score

        • "precision" — Precision

        • "recall" — Recall

        • "rmse" — Root mean squared error

        • "mape" — Mean absolute percentage error (MAPE)

      • Loss functions:

        • "crossentropy" — Cross-entropy loss for classification tasks.

        • "indexcrossentropy" — Index cross-entropy loss for classification tasks. Use this option to save memory when there are many categorical classes.

        • "binary-crossentropy" — Binary cross-entropy loss for binary and multilabel classification tasks.

        • "mae" / "mean-absolute-error" / "l1loss" — Mean absolute error for regression tasks.

        • "mse" / "mean-squared-error" / "l2loss" — Mean squared error for regression tasks.

        • "huber" — Huber loss for regression tasks

    • Built-in metric object — If you need more flexibility, you can use one of the built-in metric objects that the software provides.

      When you create a built-in metric object, you can specify additional options such as the averaging type and, for classification tasks, whether the task is single-label or multilabel classification.

    • Custom metric function handle — If the metric you need is not a built-in metric, then you can specify custom metrics using a function handle. The function must have the syntax metric = metricFunction(Y,T), where Y corresponds to the network predictions and T corresponds to the target responses. For networks with multiple outputs, the syntax must be metric = metricFunction(Y1,…,YN,T1,…,TM), where N is the number of outputs and M is the number of targets. For a minimal sketch, see the example after this list. For more information, see Define Custom Metric Function.

      Note

      When you have data in mini-batches, the software computes the metric for each mini-batch and then returns the average of those values. For some metrics, this behavior can result in a different metric value than if you compute the metric using the whole data set at once. In most cases, the values are similar. To use a custom metric that is not batch-averaged for the data, you must create a custom metric object. For more information, see Define Custom Deep Learning Metric Object.

    • deep.DifferentiableFunction object — Function object with custom backward function. For more information, see Define Custom Deep Learning Operations.

    • Custom metric object — If you need greater customization, then you can define your own custom metric object. For an example that shows how to create a custom metric, see Define Custom Metric Object. For general information about creating custom metrics, see Define Custom Deep Learning Metric Object.

    If you specify a metric as a function handle or a custom metric object, then the layout of the targets that the software passes to the metric depends on the data type of the targets and other metrics that you specify:

    • If the targets are numeric arrays, then the software passes the targets to the metric directly.

    • If the targets are categorical arrays, and metrics contains "index-crossentropy" and does not contain "crossentropy", then the software automatically converts the targets to numeric class indices and passes them to the metric.

    • Otherwise, if the targets are categorical arrays, then the software automatically converts the targets to one-hot encoded vectors and then passes them to the metric.

    Example: ["accuracy","fscore"]

    Example: {"accuracy",@myMetric}

    Name-Value Arguments

    Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

    Example: testnet(net,data,"accuracy",InputDataFormats="CB") tests the accuracy of the neural network and specifies that the first and second dimensions of the input data correspond to the channel and batch dimensions, respectively.

    OutputMode

    Layout of output metric values, specified as "vector" or "table".

    • If OutputMode is "vector", then the output metricValues is a vector, where metricValues(i) corresponds to the value of metric(i).

    • If OutputMode is "table", then the output metricValues is a table with one row and named columns that correspond to each metric.

    MiniBatchSize

    Size of mini-batches to use for testing, specified as a positive integer. Larger mini-batch sizes require more memory, but can lead to faster predictions.

    When you make predictions with sequences of different lengths, the mini-batch size can impact the amount of padding added to the input data, which can result in different predicted values. Try using different values to see which works best with your network. To specify padding options, use the SequenceLength, SequencePaddingDirection, and SequencePaddingValue arguments.

    Note

    If you specify the input data as a minibatchqueue object, then the testnet function uses the mini-batch size specified by this argument and not the MiniBatchSize property of the minibatchqueue object.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    SequenceLength

    Option to pad or truncate the input sequences, specified as one of these options:

    • "longest" — Pad sequences to have the same length as the longest sequence. This option does not discard any data, though padding can introduce noise to the neural network.

    • "shortest" — Truncate sequences to have the same length as the shortest sequence. This option ensures that the function does not add padding, at the cost of discarding data.

    To learn more about the effects of padding and truncating the input sequences, see Sequence Padding and Truncation.

    SequencePaddingDirection

    Direction of padding or truncation, specified as one of these options:

    • "right" — Pad or truncate sequences on the right. The sequences start at the same time step and the software truncates or adds padding to the end of each sequence.

    • "left" — Pad or truncate sequences on the left. The software truncates or adds padding to the start of each sequence so that the sequences end at the same time step.

    Because recurrent layers process sequence data one time step at a time, when the recurrent layer OutputMode property is "last", any padding in the final time steps can negatively influence the layer output. To pad or truncate sequence data on the left, set the SequencePaddingDirection argument to "left".

    For sequence-to-sequence neural networks (when the OutputMode property is "sequence" for each recurrent layer), any padding in the first time steps can negatively influence the predictions for the earlier time steps. To pad or truncate sequence data on the right, set the SequencePaddingDirection option to "right".

    To learn more about the effects of padding and truncating sequences, see Sequence Padding and Truncation.
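
    For example, this sketch (with hypothetical sequences and targets variables) left-pads variable-length sequences so that padding does not affect the final time steps of a sequence-to-label network:

    % sequences: N-by-1 cell array of s-by-c matrices, targets: N-by-1 categorical vector
    accuracy = testnet(net,sequences,targets,"accuracy", ...
        SequenceLength="longest",SequencePaddingDirection="left");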

    SequencePaddingValue

    Value by which to pad the input sequences, specified as a scalar.

    Do not pad sequences with NaN, because doing so can propagate errors through the neural network.

    Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

    ExecutionEnvironment

    Hardware resource, specified as one of these values:

    • "auto" — Use a GPU if one is available. Otherwise, use the CPU. If net is a quantized network with the TargetLibrary property set to "none", use the CPU even if a GPU is available.

    • "gpu" — Use the GPU. Using a GPU requires a Parallel Computing Toolbox™ license and a supported GPU device. For information about supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.

    • "cpu" — Use the CPU.

    Acceleration

    Performance optimization, specified as one of these values:

    • "auto" — Automatically apply a number of optimizations suitable for the input network and hardware resources.

    • "mex" — Compile and execute a MEX function. This option is available only when using a GPU. You must store the input data or the network learnable parameters as gpuArray objects. Using a GPU requires Parallel Computing Toolbox and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.

    • "none" — Disable all acceleration.

    When you use the "auto" or "mex" option, the software can offer performance benefits at the expense of an increased initial run time. Subsequent calls to the function are typically faster. Use performance optimization when you call the function multiple times using different input data.

    When Acceleration is "mex", the software generates and executes a MEX function based on the model and parameters you specify in the function call. A single model can have several associated MEX functions at one time. Clearing the model variable also clears any MEX functions associated with that model.

    When Acceleration is "auto", the software does not generate a MEX function.

    The "mex" option is available only when you use a GPU. You must have a C/C++ compiler installed and the GPU Coder™ Interface for Deep Learning support package. Install the support package using the Add-On Explorer in MATLAB®. For setup instructions, see MEX Setup (GPU Coder). GPU Coder is not required.

    The "mex" option has these limitations:

    • Only the single precision data type is supported. The input data or the network learnable parameters must have the underlying data type single.

    • Networks with inputs that are not connected to an input layer are not supported.

    • Traced dlarray objects are not supported. This means that the "mex" option is not supported inside a call to the dlfeval function.

    • Not all layers are supported. For a list of supported layers, see Supported Layers (GPU Coder).

    • MATLAB Compiler™ does not support deploying your network when using the "mex" option.

    For quantized networks, the "mex" option requires a CUDA®-enabled NVIDIA® GPU with compute capability 6.1, 6.3, or higher.

    InputDataFormats

    Description of the input data dimensions, specified as a string array, character vector, or cell array of character vectors.

    If the InputDataFormats argument value is "auto", then the software uses the formats expected by the network input. Otherwise, the software uses the specified formats for the corresponding network input.

    A data format is a string of characters, where each character describes the type of the corresponding data dimension.

    The characters are:

    • "S" — Spatial

    • "C" — Channel

    • "B" — Batch

    • "T" — Time

    • "U" — Unspecified

    For example, consider an array containing a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can specify that this array has the format "CBT" (channel, batch, time).
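
    This sketch (with hypothetical data, assuming net is a 4-class sequence classification network with three input channels) describes that layout using InputDataFormats:

    XSequences = rand(3,100,20);              % c-by-N-by-t array
    TLabels = categorical(randi(4,100,1));    % N-by-1 categorical labels
    accuracy = testnet(net,XSequences,TLabels,"accuracy",InputDataFormats="CBT");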

    You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" once each, at most. The software ignores singleton trailing "U" dimensions after the second dimension.

    For a neural network with multiple inputs, net, specify an array of input data formats, where InputDataFormats(i) corresponds to the input net.InputNames(i).

    For more information, see Deep Learning Data Formats.

    Data Types: char | string | cell

    TargetDataFormats

    Description of the target data dimensions, specified as one of these values:

    • "auto" — If the target data has the same number of dimensions as the input data, then the function uses the format specified by the InputDataFormats argument. If the target data has a different number of dimensions to the input data, then the function uses the format expected by the metric function.

    • String array, character vector, or cell array of character vectors — The function uses the data formats you specify.

    A data format is a string of characters, where each character describes the type of the corresponding data dimension.

    The characters are:

    • "S" — Spatial

    • "C" — Channel

    • "B" — Batch

    • "T" — Time

    • "U" — Unspecified

    For example, consider an array containing a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can specify that this array has the format "CBT" (channel, batch, time).

    You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" once each, at most. The software ignores singleton trailing "U" dimensions after the second dimension.

    For more information, see Deep Learning Data Formats.

    Data Types: char | string | cell

    Output Arguments


    metricValues

    Evaluated metric values, returned as a numeric vector or a table.

    The data type of metricValues depends on the OutputMode argument.

    • If OutputMode is "vector", then the output metricValues is a vector, where metricValues(i) corresponds to the value of metric(i).

    • If OutputMode is "table", then the output metricValues is a table with one row and named columns that correspond to each metric.

    Version History

    Introduced in R2024b