testnet
Syntax

metricValues = testnet(net,images,metrics)
metricValues = testnet(net,images,targets,metrics)
metricValues = testnet(net,sequences,metrics)
metricValues = testnet(net,sequences,targets,metrics)
metricValues = testnet(net,features,metrics)
metricValues = testnet(net,features,targets,metrics)
metricValues = testnet(net,data,metrics)
metricValues = testnet(net,data,targets,metrics)
metricValues = testnet(___,Name=Value)

Description

metricValues = testnet(net,images,metrics) tests the neural network net by evaluating it with the image data and targets specified by images and the metrics specified by metrics.

metricValues = testnet(net,images,targets,metrics) tests the neural network net with the image data specified by images, the targets specified by targets, and the metrics specified by metrics.

metricValues = testnet(net,sequences,metrics) tests the neural network net with the sequence data and targets specified by sequences, and the metrics specified by metrics.

metricValues = testnet(net,sequences,targets,metrics) tests the neural network net with the sequence data specified by sequences, the targets specified by targets, and the metrics specified by metrics.

metricValues = testnet(net,features,metrics) tests the neural network net with the feature data and targets specified by features, and the metrics specified by metrics.

metricValues = testnet(net,features,targets,metrics) tests the neural network net with the feature data specified by features, the targets specified by targets, and the metrics specified by metrics.

metricValues = testnet(net,data,metrics) tests the neural network with other data layouts or combinations of different types of data.

metricValues = testnet(net,data,targets,metrics) tests the neural network with the predictors specified by data and the targets specified by targets.

metricValues = testnet(___,Name=Value) specifies options using one or more name-value arguments in addition to any of the input argument combinations in previous syntaxes. For example, InputDataFormats="CB" specifies that the first and second dimensions of the input data correspond to the channel and batch dimensions, respectively.
Examples
Load a trained dlnetwork object. The MAT file digitsNet.mat contains an image classification neural network specified by the variable net.
load digitsNet
Load the test data. The MAT file DigitsDataTest.mat contains the test images and labels specified by the variables XTest and labelsTest, respectively.
load DigitsDataTest
Test the neural network using the testnet function. For single-label classification, evaluate the accuracy. The accuracy is the percentage of correct predictions.
accuracy = testnet(net,XTest,labelsTest,"accuracy")
accuracy = 98.2200
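You can also evaluate several metrics in one call and return the result as a table. This is a minimal sketch that reuses the net, XTest, and labelsTest variables loaded above; the choice of metrics here is only an example.

metricValues = testnet(net,XTest,labelsTest,["accuracy" "fscore"],OutputMode="table")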
Input Arguments
Neural network, specified as a dlnetwork object.

Image data, specified as a numeric array, dlarray object, datastore, or minibatchqueue object. For sequences of images, such as video data, use the sequences input argument instead.

If you have data that fits in memory and does not require additional processing, then specifying the input data as a numeric array is usually the best option. To test a neural network with image files stored on your system, or to apply additional processing, use a datastore. For neural networks with multiple inputs or multiple outputs, you must use a TransformedDatastore, CombinedDatastore, or minibatchqueue object.
Tip
Neural networks expect input data with a specific layout. For example, image classification networks typically expect image representations to be h-by-w-by-c numeric arrays, where h, w, and c are the height, width, and number of channels of the images, respectively. Neural networks typically have an input layer that specifies the expected layout of the data.
Most datastores and functions return data in the layout that the network expects. If your data is in a different layout, then indicate the layout by using the InputDataFormats name-value argument or by specifying the input data as a formatted dlarray object. Specifying the InputDataFormats name-value argument is usually easier than adjusting the layout of the input data manually.

For neural networks that do not have input layers, you must use the InputDataFormats name-value argument or formatted dlarray objects.

For more information, see Deep Learning Data Formats.
Numeric Array or dlarray Object

For data that fits in memory and does not require additional processing, you can specify a data set of images as a numeric array or a dlarray object. When you do, you must also specify the targets argument.

The layout of numeric arrays and unformatted dlarray objects depends on the type of image data, and must be consistent with the InputDataFormats name-value argument. Most networks expect image data in these layouts.
Data | Layout |
---|---|
2-D images | h-by-w-by-c-by-N array, where h, w, and c are the height, width, and number of channels of the images, respectively, and N is the number of images. Data in this layout has the data format "SSCB" (spatial, spatial, channel, batch). |
3-D images | h-by-w-by-d-by-c-by-N array, where h, w, d, and c are the height, width, depth, and number of channels of the images, respectively, and N is the number of images. Data in this layout has the data format "SSSCB" (spatial, spatial, spatial, channel, batch). |
For data in a different layout, indicate that your data has a different layout by using the InputDataFormats argument or use a formatted dlarray object instead. For more information, see Deep Learning Data Formats.
Categorical Array (since R2025a)
For images of categorical values (such as labeled pixel maps) that fit in memory and do not require additional processing, you can specify the images as categorical arrays. If you specify images as a categorical array, then you must also specify the targets argument.

The software automatically converts categorical inputs to numeric values and passes them to the neural network. To specify how the software converts categorical inputs to numeric values, use the CategoricalInputEncoding argument. The layout of categorical arrays depends on the type of image data and must be consistent with the InputDataFormats argument.

Most networks expect categorical image data passed to the testnet function in the layouts in this table.
Data | Layout |
---|---|
2-D categorical images | h-by-w-by-1-by-N array, where h and w are the height and width of the images, respectively, and N is the number of images. After the software converts this data to numeric arrays, data in this layout has the data format "SSCB" (spatial, spatial, channel, batch). |
3-D categorical images | h-by-w-by-d-by-1-by-N array, where h, w, and d are the height, width, and depth of the images, respectively, and N is the number of images. After the software converts this data to numeric arrays, data in this layout has the data format "SSSCB" (spatial, spatial, spatial, channel, batch). |
For data in a different layout, indicate that your data has a different layout by using the InputDataFormats argument or use a formatted dlarray object instead. For more information, see Deep Learning Data Formats.
Datastore
A datastore reads batches of images and targets. Use a datastore when you have data that does not fit in memory or when you want to apply transformations to the data.
For image data, the testnet function supports these datastores.
Datastore | Description | Example Usage |
---|---|---|
ImageDatastore | Datastore of images saved on disk. | Test with images saved on your system, where the images are the same size. When the images are different sizes, use an augmentedImageDatastore object. |
augmentedImageDatastore | Datastore that applies random affine geometric transformations, including resizing. | Test with images saved on disk, where the images are different sizes. When you test using an augmented image datastore, do not apply additional augmentations such as rotation, reflection, shear, and translation. |
TransformedDatastore | Datastore that transforms batches of data read from an underlying datastore using a custom transformation function. | Transform datastores with outputs not supported by the testnet function. |
CombinedDatastore | Datastore that reads from two or more underlying datastores. | Test using a network with multiple inputs. |
Custom mini-batch datastore | Custom datastore that returns mini-batches of data. | Test using data in a layout that other datastores do not support. For details, see Develop Custom Mini-Batch Datastore. |
To specify the targets, the datastore must return cell arrays or tables with numInputs+numOutputs columns, where numInputs and numOutputs are the number of network inputs and outputs, respectively. The first numInputs columns correspond to the network inputs. The last numOutputs columns correspond to the network outputs. The InputNames and OutputNames properties of the neural network specify the order of the input and output data, respectively.
Tip
ImageDatastore objects allow batch reading of JPG or PNG image files using prefetching. For efficient preprocessing of images for deep learning, including image resizing, use an augmentedImageDatastore object. Do not use the ReadFcn property of ImageDatastore objects. If you set the ReadFcn property to a custom function, then the ImageDatastore object does not prefetch image files and is usually significantly slower.
You can use other built-in datastores for testing deep learning neural networks by using the transform and combine functions. These functions can convert the data read from datastores to the layout required by the testnet function. The required layout of the datastore output depends on the neural network architecture. For more information, see Datastore Customization.
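For example, here is a minimal sketch that wraps in-memory arrays in datastores and combines them so that each read returns one image and its label. It assumes XTest is an h-by-w-by-c-by-N numeric array and labelsTest is an N-by-1 categorical vector, as in the earlier example.

% Wrap the predictors and targets in datastores and pair them with combine.
dsX = arrayDatastore(XTest,IterationDimension=4);
dsT = arrayDatastore(labelsTest);
ds = combine(dsX,dsT);
accuracy = testnet(net,ds,"accuracy");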
minibatchqueue Object

For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object. When you do, the testnet function ignores the MiniBatchSize property of the object and uses the MiniBatchSize name-value argument instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".
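For example, here is a minimal sketch of a minibatchqueue input, again assuming the XTest and labelsTest variables from the earlier example. The preprocessing function shown is one common pattern (concatenate the images along the batch dimension and one-hot encode the labels), not the only option.

dsX = arrayDatastore(XTest,IterationDimension=4);
dsT = arrayDatastore(labelsTest);
mbq = minibatchqueue(combine(dsX,dsT),2, ...
    MiniBatchFcn=@preprocessMiniBatch, ...
    MiniBatchFormat=["SSCB" "CB"]);
accuracy = testnet(net,mbq,"accuracy",MiniBatchSize=64);

function [X,T] = preprocessMiniBatch(dataX,dataT)
% Concatenate the images along the fourth (batch) dimension and
% one-hot encode the categorical labels as a C-by-B matrix.
X = cat(4,dataX{:});
T = onehotencode(cat(2,dataT{:}),1);
end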
Note
This argument supports complex-valued predictors and targets.
Sequence or time series data, specified as a numeric array, categorical array, cell array, datastore, or minibatchqueue object.

If you have sequences of the same length that fit in memory and do not require additional processing, then specifying the input data as a numeric array is usually the best option. If you have sequences of different lengths that fit in memory and do not require additional processing, then specifying the input data as a cell array of numeric arrays is usually the best option. To test a neural network with sequences stored on your system, or to apply additional processing such as custom transformations, use a datastore. For neural networks with multiple inputs, you must use a TransformedDatastore or CombinedDatastore object.
Tip
Neural networks expect input data with a specific layout. For example, vector-sequence classification networks typically expect vector-sequence representations to be t-by-c arrays, where t and c are the number of time steps and channels of sequences, respectively. Neural networks typically have an input layer that specifies the expected layout of the data.
Most datastores and functions return data in the layout that the network expects. If your data is in a different layout, then indicate the layout by using the InputDataFormats name-value argument or by specifying the input data as a formatted dlarray object. Specifying the InputDataFormats name-value argument is usually easier than adjusting the layout of the input data manually.

For neural networks that do not have input layers, you must use the InputDataFormats name-value argument or formatted dlarray objects.

For more information, see Deep Learning Data Formats.
Numeric Array, Categorical Array, dlarray Object, or Cell Array

For data that fits in memory and does not require additional processing like custom transformations, you can specify a single sequence as a numeric array, categorical array, or dlarray object, or a data set of sequences as a cell array of numeric arrays, categorical arrays, or dlarray objects. If you specify sequences as a numeric array, categorical array, cell array, or dlarray object, then you must also specify the targets argument.

For cell array input, the cell array must be an N-by-1 cell array of numeric arrays, categorical arrays, or dlarray objects, where N is the number of observations.

The software automatically converts categorical inputs to numeric values and passes them to the neural network. To specify how the software converts categorical inputs to numeric values, use the CategoricalInputEncoding argument.

The size and shape of the numeric arrays, categorical arrays, or dlarray objects that represent sequences depend on the type of sequence data and must be consistent with the InputDataFormats argument.

Most networks with a sequence input layer expect sequence data passed to the testnet function in the layouts in this table.
Data | Layout |
---|---|
Vector sequences | s-by-c matrices, where s and c are the numbers of time steps and channels (features) of the sequences, respectively. |
Categorical vector sequences | s-by-1 categorical arrays, where s is the number of time steps of the sequences. |
1-D image sequences | h-by-c-by-s arrays, where h and c correspond to the height and number of channels of the images, respectively, and s is the sequence length. |
Categorical 1-D image sequences | h-by-1-by-s categorical arrays, where h corresponds to the height of the images and s is the sequence length. |
2-D image sequences | h-by-w-by-c-by-s arrays, where h, w, and c correspond to the height, width, and number of channels of the images, respectively, and s is the sequence length. |
Categorical 2-D image sequences | h-by-w-by-1-by-s arrays, where h and w correspond to the height and width of the images, respectively, and s is the sequence length. |
3-D image sequences | h-by-w-by-d-by-c-by-s, where h, w, d, and c correspond to the height, width, depth, and number of channels of the 3-D images, respectively, and s is the sequence length. |
Categorical 3-D image sequences | h-by-w-by-d-by-1-by-s, where h, w, and d correspond to the height, width, and depth of the 3-D images, respectively, and s is the sequence length. |
For data in a different layout, indicate that your data has a different layout by using the InputDataFormats argument or use a formatted dlarray object instead. For more information, see Deep Learning Data Formats.
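For example, here is a minimal sketch for a vector-sequence classification network. The data set, class names, and sizes are hypothetical and only illustrate the expected cell array layout.

% Hypothetical data: 100 vector sequences with 3 channels and varying lengths,
% plus one categorical label per sequence.
numObservations = 100;
numChannels = 3;
XTest = cell(numObservations,1);
for i = 1:numObservations
    XTest{i} = rand(randi([50 100]),numChannels);   % t-by-c sequence
end
TTest = categorical(randi(2,numObservations,1),1:2,["low" "high"]);
accuracy = testnet(net,XTest,TTest,"accuracy");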
Datastore
A datastore reads batches of sequences and targets. Use a datastore when you have data that does not fit in memory or when you want to apply transformations to the data.
For sequence and time-series data, the testnet function supports these datastores.
Datastore | Description | Example Usage |
---|---|---|
TransformedDatastore | Datastore that transforms batches of data read from an underlying datastore using a custom transformation function. | Transform datastores with outputs not supported by the testnet function. |
CombinedDatastore | Datastore that reads from two or more underlying datastores. | Combine predictors and targets from different data sources. |
Custom mini-batch datastore | Custom datastore that returns mini-batches of data. | Test neural network using data in a layout that other datastores do not support. For details, see Develop Custom Mini-Batch Datastore. |
To specify the targets, the datastore must return cell arrays or tables with numInputs+numOutputs columns, where numInputs and numOutputs are the number of network inputs and outputs, respectively. The first numInputs columns correspond to the network inputs. The last numOutputs columns correspond to the network outputs. The InputNames and OutputNames properties of the neural network specify the order of the input and output data, respectively.
You can use other built-in datastores by using the transform and combine functions. These functions can convert the data read from datastores to the layout required by the testnet function. For example, you can transform and combine data read from in-memory arrays and CSV files using ArrayDatastore and TabularTextDatastore objects, respectively. The required layout of the datastore output depends on the neural network architecture. For more information, see Datastore Customization.
minibatchqueue Object

For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object. When you do, the testnet function ignores the MiniBatchSize property of the object and uses the MiniBatchSize name-value argument instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".
Note
This argument supports complex-valued predictors and targets.
Feature or tabular data, specified as a numeric array, dlarray object, table, datastore, or minibatchqueue object.

If you have data that fits in memory and does not require additional processing, then specifying the input data as a numeric array or table is usually the best option. To test with feature or tabular data stored on your system, or to apply additional processing such as custom transformations, use a datastore. For neural networks with multiple inputs, you must use a TransformedDatastore or CombinedDatastore object.
Tip
Neural networks expect input data with a specific layout. For example, feature classification networks typically expect feature and tabular data representations to be 1-by-c vectors, where c is the number of features of the data. Neural networks typically have an input layer that specifies the expected layout of the data.
Most datastores and functions return data in the layout that the network expects. If your data is in a different layout, then indicate the layout by using the InputDataFormats name-value argument or by specifying the input data as a formatted dlarray object. Specifying the InputDataFormats name-value argument is usually easier than adjusting the layout of the input data manually.

For neural networks that do not have input layers, you must use the InputDataFormats name-value argument or formatted dlarray objects.

For more information, see Deep Learning Data Formats.
Numeric Array or dlarray Object

For feature data that fits in memory and does not require additional processing such as custom transformations, you can specify feature data as a numeric array. When you do, you must also specify the targets argument.

The layout of numeric arrays and unformatted dlarray objects must be consistent with the InputDataFormats name-value argument. Most networks with feature input expect input data specified as an N-by-numFeatures array, where N is the number of observations and numFeatures is the number of features of the input data.
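For example, a minimal sketch for a regression network with feature input, using hypothetical data sizes:

% Hypothetical feature data: one row per observation, one column per feature.
numObservations = 200;
numFeatures = 10;
XTest = rand(numObservations,numFeatures);
TTest = rand(numObservations,1);   % one numeric response per observation
rmse = testnet(net,XTest,TTest,"rmse");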
Categorical Array (since R2025a)
For discrete features that fit in memory and do not require additional processing like custom transformations, you can specify the feature data as a categorical array. If you specify features as a categorical array, then you must also specify the targets argument.

The software automatically converts categorical inputs to numeric values and passes them to the neural network. To specify how the software converts categorical inputs to numeric values, use the CategoricalInputEncoding argument. The layout of categorical arrays must be consistent with the InputDataFormats argument.

Most networks with categorical feature input expect input data specified as an N-by-1 vector, where N is the number of observations. After the software converts this data to numeric arrays, data in this layout has the data format "BC" (batch, channel). The size of the "C" (channel) dimension depends on the CategoricalInputEncoding argument.
For data in a different layout, indicate that your data has a different layout by using the InputDataFormats argument or use a formatted dlarray object instead. For more information, see Deep Learning Data Formats.
Table
For feature data that fits in memory and does not require additional processing such as custom transformations, you can specify feature data as a table. When you do, you must not specify the targets argument.

To specify feature data as a table, specify a table with numObservations rows and numFeatures+1 columns, where numObservations and numFeatures are the number of observations and features of the input data, respectively. The testnet function uses the first numFeatures columns as the input features and uses the last column as the targets.
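For example, a minimal sketch of this table layout, with hypothetical feature names and a categorical label column as the targets:

% Three feature columns followed by one target column; do not pass a separate targets argument.
tbl = table(rand(5,1),rand(5,1),rand(5,1), ...
    categorical(["a";"b";"a";"b";"a"]), ...
    VariableNames=["f1" "f2" "f3" "label"]);
accuracy = testnet(net,tbl,"accuracy");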
Datastore
A datastore reads batches of feature data and targets. Use a datastore when you have data that does not fit in memory or when you want to apply transformations to the data.

For feature and tabular data, the testnet function supports these datastores.
Datastore | Description | Example Usage |
---|---|---|
TransformedDatastore | Datastore that transforms batches of data read from an underlying datastore using a custom transformation function. | Transform datastores with outputs not supported by the testnet function. |
CombinedDatastore | Datastore that reads from two or more underlying datastores. | Combine predictors and targets from different data sources. |
Custom mini-batch datastore | Custom datastore that returns mini-batches of data. | Test neural network using data in a layout that other datastores do not support. For details, see Develop Custom Mini-Batch Datastore. |
To specify the targets, the datastore must return cell arrays or tables with numInputs+numOutputs columns, where numInputs and numOutputs are the number of network inputs and outputs, respectively. The first numInputs columns correspond to the network inputs. The last numOutputs columns correspond to the network outputs. The InputNames and OutputNames properties of the neural network specify the order of the input and output data, respectively.
You can use other built-in datastores for making predictions by using the transform and combine functions. These functions can convert the data read from datastores to the table or cell array format required by the testnet function. For more information, see Datastore Customization.
minibatchqueue Object

For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object. When you do, the testnet function ignores the MiniBatchSize property of the object and uses the MiniBatchSize name-value argument instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".
Note
This argument supports complex-valued predictors and targets.
Generic data or combinations of data types, specified as a numeric array, dlarray object, datastore, or minibatchqueue object.

If you have data that fits in memory and does not require additional processing, then specifying the input data as a numeric array is usually the best option. To test with data stored on your system, or to apply additional processing, use a datastore. For neural networks with multiple inputs, you must use a TransformedDatastore or CombinedDatastore object.
Tip
Neural networks expect input data with a specific layout. For example, vector-sequence classification networks typically expect vector-sequence representations to be t-by-c arrays, where t and c are the number of time steps and channels of sequences, respectively. Neural networks typically have an input layer that specifies the expected layout of the data.
Most datastores and functions return data in the layout that the network expects. If your data is in a different layout, then indicate the layout by using the InputDataFormats name-value argument or by specifying the input data as a formatted dlarray object. Specifying the InputDataFormats name-value argument is usually easier than adjusting the layout of the input data manually.

For neural networks that do not have input layers, you must use the InputDataFormats name-value argument or formatted dlarray objects.

For more information, see Deep Learning Data Formats.
Numeric Arrays, Categorical Arrays, or dlarray Objects

For data that fits in memory and does not require additional processing like custom transformations, you can specify data as a numeric array, categorical array, or a dlarray object. If you specify data as a numeric array, then you must also specify the targets argument.

For a neural network with an inputLayer object, the expected layout of input data is given by the InputFormat property of the layer.

The software automatically converts categorical inputs to numeric values and passes them to the neural network. To specify how the software converts categorical inputs to numeric values, use the CategoricalInputEncoding argument. The layout of categorical arrays must be consistent with the InputDataFormats argument.

For data in a different layout, indicate that your data has a different layout by using the InputDataFormats argument or use a formatted dlarray object instead. For more information, see Deep Learning Data Formats.
Datastore

A datastore reads batches of data and targets. Use a datastore when you have data that does not fit in memory or when you want to apply transformations to the data.

The testnet function supports these datastores.
Datastore | Description | Example Usage |
---|---|---|
TransformedDatastore | Datastore that transforms batches of data read from an underlying datastore using a custom transformation function. | Transform datastores with outputs not supported by the testnet function. |
CombinedDatastore | Datastore that reads from two or more underlying datastores. | Combine predictors and targets from different data sources. |
Custom mini-batch datastore | Custom datastore that returns mini-batches of data. | Test neural network using data in a format that other datastores do not support. For details, see Develop Custom Mini-Batch Datastore. |
To specify the targets, the datastore must return cell arrays or tables with numInputs+numOutputs columns, where numInputs and numOutputs are the number of network inputs and outputs, respectively. The first numInputs columns correspond to the network inputs. The last numOutputs columns correspond to the network outputs. The InputNames and OutputNames properties of the neural network specify the order of the input and output data, respectively.
You can use other built-in datastores by using the transform and combine functions. These functions can convert the data read from datastores to the table or cell array format required by testnet. For more information, see Datastore Customization.
minibatchqueue Object

For greater control over how the software processes and transforms mini-batches, you can specify data as a minibatchqueue object. When you do, the testnet function ignores the MiniBatchSize property of the object and uses the MiniBatchSize name-value argument instead. For minibatchqueue input, the PreprocessingEnvironment property must be "serial".
Note
This argument supports complex-valued predictors and targets.
Test targets, specified as a categorical array, numeric array, or cell array of sequences.
To specify targets for networks with multiple outputs, specify both the predictors and targets in a single argument using the images, sequences, features, or data arguments.
Tip
Metric functions expect data with a specific layout. For example, for sequence-to-vector regression networks, the metric function typically expects target vectors to be represented as 1-by-R vectors, where R is the number of responses.
Most datastores and functions output data in the layout that the metric function expects. If your target data is in a different layout than the metric function expects, then indicate that your targets have a different layout by using the TargetDataFormats argument, by specifying the data as a minibatchqueue object and setting the TargetDataFormats property, or by specifying the target data as a formatted dlarray object. It is usually easier to specify data formats than to preprocess the target data. If you specify both the TargetDataFormats argument and the TargetDataFormats minibatchqueue property, then they must match.

For more information, see Deep Learning Data Formats.
The expected layout of the targets depends on the metric function. The targets listed here are only a subset; the metric functions may support additional targets with different layouts, such as targets with additional dimensions. For custom metric functions, the software uses the format information of the network output data to determine the type of target data and applies the corresponding layout in this table.
Target | Target Layout |
---|---|
Categorical labels | N-by-1 categorical vector of labels, where N is the number of observations. |
Sequences of categorical labels | |
Class indices | N-by-1 numeric vector of class indices, where N is the number of observations. |
Sequences of class indices | |
Binary labels (single label) | N-by-1 vector, where N is the number of observations. |
Binary labels (multilabel) | N-by-c matrix, where N and c are the numbers of observations and classes, respectively. |
Numeric scalars | N-by-1 vector, where N is the number of observations. |
Numeric vectors | N-by-R matrix, where N is the number of observations and R is the number of responses. |
2-D images | h-by-w-by-c-by-N numeric array, where h, w, and c are the height, width, and number of channels of the images, respectively, and N is the number of images. |
3-D images | |
Numeric sequences of scalars | |
Numeric sequences of vectors | |
Sequences of 1-D images | |
Sequences of 2-D images | |
Sequences of 3-D images | |
For targets in a different layout, indicate that your targets have a different layout by using the TargetDataFormats name-value argument or a formatted dlarray object. For more information, see Deep Learning Data Formats.
The software automatically converts categorical targets to numeric values and passes them to the metric function. To specify how the software converts categorical targets to numeric values, use the CategoricalTargetEncoding argument.
Metrics to evaluate, specified as a character vector or string scalar of a built-in metric name, a string array of names, a built-in or custom metric object, a function handle, a deep.DifferentiableFunction object, or a cell array of names, metric objects, and function handles:
Built-in metric or loss function name — Specify metrics as a string scalar, character vector, or a cell array or string array of one or more of these names:

Metrics:
- "accuracy" — Accuracy
- "auc" — Area under ROC curve (AUC)
- "fscore" — F-score
- "precision" — Precision
- "recall" — Recall
- "rmse" — Root mean squared error
- "mape" — Mean absolute percentage error (MAPE)
- "rsquared" — R2 (R-squared or coefficient of determination) (since R2025a)

Loss functions:
- "crossentropy" — Cross-entropy loss for classification tasks.
- "indexcrossentropy" — Index cross-entropy loss for classification tasks. Use this option to save memory when there are many categorical classes.
- "binary-crossentropy" — Binary cross-entropy loss for binary and multilabel classification tasks.
- "mae" / "mean-absolute-error" / "l1loss" — Mean absolute error for regression tasks.
- "mse" / "mean-squared-error" / "l2loss" — Mean squared error for regression tasks.
- "huber" — Huber loss for regression tasks.

For more information about deep learning metrics and loss functions, see Deep Learning Metrics.
Built-in metric object — If you need more flexibility, you can use one of these built-in metric objects:
- AccuracyMetric — Accuracy metric object
- AUCMetric — Area under ROC curve (AUC) metric object
- FScoreMetric — F-score metric object
- PrecisionMetric — Precision metric object
- RecallMetric — Recall metric object
- RMSEMetric — RMSE metric object
- MAPEMetric — MAPE metric object
- RSquaredMetric — R2 metric object (since R2025a)
When you create a built-in metric object, you can specify additional options such as the averaging type. For classification tasks, you can specify whether the task is single-label or multilabel classification.
Custom metric function handle — If the metric you need is not a built-in metric, then you can specify custom metrics using a function handle. The function must have the syntax metric = metricFunction(Y,T), where Y corresponds to the network predictions and T corresponds to the target responses. For networks with multiple outputs, the syntax must be metric = metricFunction(Y1,…,YN,T1,…,TM), where N is the number of outputs and M is the number of targets. For more information, see Define Custom Metric Function. For a sketch of a custom metric function handle, see the example at the end of this argument description.

Note

When you have data in mini-batches, the software computes the metric for each mini-batch and then returns the average of those values. For some metrics, this behavior can result in a different metric value than if you compute the metric using the whole data set at once. In most cases, the values are similar. To use a custom metric that is not batch-averaged for the data, you must create a custom metric object. For more information, see Define Custom Deep Learning Metric Object.
deep.DifferentiableFunction object — Function object with a custom backward function. For more information, see Define Custom Deep Learning Operations.

Custom metric object — If you need greater customization, then you can define your own custom metric object. For an example that shows how to create a custom metric, see Define Custom Metric Object. For general information about creating custom metrics, see Define Custom Deep Learning Metric Object.
If you specify a metric as a function handle or a custom metric object, then the layout of the targets that the software passes to the metric depends on the data type of the targets and other metrics that you specify:
- If the targets are numeric arrays, then the software passes the targets to the metric directly.
- If the targets are categorical arrays, then the software converts categorical data to numeric values according to the CategoricalTargetEncoding argument.
Example: ["accuracy","fscore"]
Example: {"accuracy",@myMetric}
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Example: testnet(net,data,"accuracy",InputDataFormats="CB") tests the accuracy of the neural network and specifies that the first and second dimensions of the input data correspond to the channel and batch dimensions, respectively.
Layout of output metric values, specified as "vector" or "table".
- If OutputMode is "vector", then the output metricValues is a vector, where metricValues(i) corresponds to the value of metrics(i).
- If OutputMode is "table", then the output metricValues is a table with one row and named columns that correspond to each metric.
Size of mini-batches to use for prediction, specified as a positive integer. Larger mini-batch sizes require more memory, but can lead to faster predictions.
When you make predictions with sequences of different lengths, the mini-batch size can affect the amount of padding added to the input data, which can result in different predicted values. Try using different values to see which works best with your network. To specify padding options, use the SequenceLength name-value argument.
Note
If you specify the input data as a minibatchqueue object, then the testnet function uses the mini-batch size specified by this argument and not the MiniBatchSize property of the minibatchqueue object.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Option to pad or truncate the input sequences, specified as one of these options:
"longest"
— Pad sequences to have the same length as the longest sequence. This option does not discard any data, although padding can introduce noise to the neural network."shortest"
— Truncate sequences to have the same length as the shortest sequence. This option ensures that the function does not add padding at the cost of discarding data.
To learn more about the effects of padding and truncating the input sequences, see Sequence Padding and Truncation.
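For example, this minimal sketch truncates the hypothetical variable-length sequences from the earlier sequence sketch so that no padding is added:

accuracy = testnet(net,XTest,TTest,"accuracy",SequenceLength="shortest");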
Direction of padding or truncation, specified as one of these options:
"right"
— Pad or truncate sequences on the right. The sequences start at the same time step and the software truncates or adds padding to the end of each sequence."left"
— Pad or truncate sequences on the left. The software truncates or adds padding to the start of each sequence so that the sequences end at the same time step.
Recurrent layers process sequence data one time step at a time, so when the recurrent layer OutputMode property is "last", any padding in the final time steps can negatively influence the layer output. To pad or truncate sequence data on the left, set the SequencePaddingDirection name-value argument to "left".

For sequence-to-sequence neural networks (when the OutputMode property is "sequence" for each recurrent layer), any padding in the first time steps can negatively influence the predictions for the earlier time steps. To pad or truncate sequence data on the right, set the SequencePaddingDirection name-value argument to "right".
To learn more about the effects of padding and truncating sequences, see Sequence Padding and Truncation.
Value by which to pad the input sequences, specified as a scalar.
Do not pad sequences with NaN, because doing so can propagate errors through the neural network.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Hardware resource, specified as one of these values:
"auto"
— Use a GPU if one is available. Otherwise, use the CPU. Ifnet
is a quantized network with theTargetLibrary
property set to"none"
, use the CPU even if a GPU is available."gpu"
— Use the GPU. Using a GPU requires a Parallel Computing Toolbox™ license and a supported GPU device. For information about supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error."cpu"
— Use the CPU.
Performance optimization, specified as one of these values:
"auto"
— Automatically apply a number of optimizations suitable for the input network and hardware resources."mex"
— Compile and execute a MEX function. This option is available only when using a GPU. You must store the input data or the network learnable parameters asgpuArray
objects. Using a GPU requires Parallel Computing Toolbox and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error."none"
— Disable all acceleration.
When you use the "auto"
or "mex"
option, the software
can offer performance benefits at the expense of an increased initial run time. Subsequent
calls to the function are typically faster. Use performance optimization when you call the
function multiple times using different input data.
When Acceleration
is "mex"
, the software generates and
executes a MEX function based on the model and parameters you specify in the function call.
A single model can have several associated MEX functions at one time. Clearing the model
variable also clears any MEX functions associated with that model.
When Acceleration
is
"auto"
, the software does not generate a MEX function.
The "mex"
option is available only when you use a GPU. You must have a
C/C++ compiler installed and the GPU Coder™ Interface for Deep Learning support package. Install the support package using the Add-On Explorer in
MATLAB®. For setup instructions, see Set Up Compiler (GPU Coder). GPU Coder is not required.
The "mex"
option has these limitations:
Only the
single
precision data type is supported. The input data or the network learnable parameters must have the underlying data typesingle
.Networks with inputs that are not connected to an input layer are not supported.
Traced
dlarray
objects are not supported. This means that the"mex"
option is not supported inside a call to thedlfeval
function.Not all layers are supported. For a list of supported layers, see Supported Layers (GPU Coder).
MATLAB Compiler™ does not support deploying your network when using the
"mex"
option.
For quantized networks, the "mex" option requires a CUDA® enabled NVIDIA® GPU with compute capability 6.1, 6.3, or higher.
Encoding of categorical inputs, specified as one of these values:
"auto"
— Ifmetrics
contains"index-crossentropy"
and does not contain"crossentropy"
, then convert categorical inputs to their integer value. Otherwise, convert categorical inputs to one-hot encoded vectors."integer"
— Convert categorical inputs to their integer value."one-hot"
— Convert categorical inputs to one-hot encoded vectors.
If you convert categorical inputs to their integer value, then the network must have one input channel for each of the categorical inputs. Otherwise, the network must have numCategories channels for each of the categorical inputs, where numCategories is the number of categories of the corresponding categorical input.
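For example, a minimal sketch that passes hypothetical categorical feature data as integer codes rather than one-hot vectors, assuming the network has one input channel per categorical feature:

accuracy = testnet(net,XTest,TTest,"accuracy",CategoricalInputEncoding="integer");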
Encoding of categorical targets for custom metrics, specified as one of these values:
"integer"
— Convert categorical targets to their integer value and pass the integer-encoded values to the metric function."one-hot"
— Convert categorical targets to one-hot encoded vectors and pass the one-hot encoded values to the metric function.
Before R2025a: If metrics contains "index-crossentropy" and does not contain "crossentropy", then the software automatically converts the targets to numeric class indices and passes them to the metric. Otherwise, if the targets are categorical arrays, then the software automatically converts the targets to one-hot encoded vectors and then passes them to the metric.
Description of the input data dimensions, specified as a string array, character vector, or cell array of character vectors.
If InputDataFormats is "auto", then the software uses the formats expected by the network input. Otherwise, the software uses the specified formats for the corresponding network input.
A data format is a string of characters, where each character describes the type of the corresponding data dimension.
The characters are:
"S"
— Spatial"C"
— Channel"B"
— Batch"T"
— Time"U"
— Unspecified
For example, consider an array that represents a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can describe the data as having the format "CBT" (channel, batch, time).
You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" once each, at most. The software ignores singleton trailing "U" dimensions after the second dimension.
For a neural network with multiple inputs net, specify an array of input data formats, where InputDataFormats(i) corresponds to the input net.InputNames(i).
For more information, see Deep Learning Data Formats.
Data Types: char | string | cell
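For example, a minimal sketch that tests with hypothetical sequence data arranged as channel-by-batch-by-time, which differs from the layout the network input expects by default:

numChannels = 3; numObservations = 8; numTimeSteps = 50;
XTest = rand(numChannels,numObservations,numTimeSteps);
TTest = categorical(randi(2,numObservations,1));
accuracy = testnet(net,XTest,TTest,"accuracy",InputDataFormats="CBT");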
Description of the target data dimensions, specified as one of these values:
"auto"
— If the target data has the same number of dimensions as the input data, then the function uses the format specified by theInputDataFormats
name-value argument. If the target data has a different number of dimensions than the input data, then the function uses the format expected by the metric function.String array, character vector, or cell array of character vectors — The function uses the data formats you specify.
A data format is a string of characters, where each character describes the type of the corresponding data dimension.
The characters are:
"S"
— Spatial"C"
— Channel"B"
— Batch"T"
— Time"U"
— Unspecified
For example, consider an array that represents a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can describe the data as having the format "CBT" (channel, batch, time).
You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" once each, at most. The software ignores singleton trailing "U" dimensions after the second dimension.
For more information, see Deep Learning Data Formats.
Data Types: char | string | cell
Output Arguments
Evaluated metric values, returned as a numeric vector or a table.
The data type of metricValues depends on the OutputMode name-value argument.
- If OutputMode is "vector", then the output metricValues is a vector, where metricValues(i) corresponds to the value of metrics(i).
- If OutputMode is "table", then the output metricValues is a table with one row and named columns that correspond to each metric.
Extended Capabilities
The testnet function fully supports GPU acceleration.
By default, testnet uses a GPU if one is available. If net is a quantized network with the TargetLibrary property set to "none", testnet uses the CPU even if a GPU is available. You can specify the hardware that the testnet function uses by specifying the ExecutionEnvironment name-value argument.
For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced in R2024b

To specify how to convert categorical inputs and targets to numeric values for testing a neural network, use the CategoricalInputEncoding and CategoricalTargetEncoding arguments, respectively.

Starting in R2025a, you can evaluate the network on test data using the R2 metric. To use this metric, specify "rsquared" or, if you require greater customization, create an RSquaredMetric object.