# regressionLayer

Create a regression output layer

## Syntax

```
layer = regressionLayer
layer = regressionLayer(Name,Value)
```

## Description

A regression layer computes the half-mean-squared-error loss for regression tasks.

`layer = regressionLayer` returns a regression output layer for a neural network as a `RegressionOutputLayer` object. Predict responses of a trained regression network using `predict`. Normalizing the responses often helps stabilize and speed up training of neural networks for regression. For more information, see Train Convolutional Neural Network for Regression.


`layer = regressionLayer(Name,Value)` sets the optional `Name` and `ResponseNames` properties using name-value pairs. For example, `regressionLayer('Name','output')` creates a regression layer with the name `'output'`. Enclose each property name in single quotes.

## Examples


Create a regression output layer with the name `'routput'`.

```matlab
layer = regressionLayer('Name','routput')
```

```
layer = 
  RegressionOutputLayer with properties:

             Name: 'routput'
    ResponseNames: {}

   Hyperparameters
     LossFunction: 'mean-squared-error'
```

The default loss function for regression is mean-squared-error.

Include a regression output layer in a Layer array.

```matlab
layers = [ ...
    imageInputLayer([28 28 1])
    convolution2dLayer(12,25)
    reluLayer
    fullyConnectedLayer(1)
    regressionLayer]
```

```
layers = 
  5x1 Layer array with layers:

     1   ''   Image Input         28x28x1 images with 'zerocenter' normalization
     2   ''   Convolution         25 12x12 convolutions with stride [1 1] and padding [0 0 0 0]
     3   ''   ReLU                ReLU
     4   ''   Fully Connected     1 fully connected layer
     5   ''   Regression Output   mean-squared-error
```

## Input Arguments


### Name-Value Pair Arguments

Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.

Example: `regressionLayer('Name','output')` creates a regression layer with the name `'output'`

`Name` — Layer name, specified as a character vector or a string scalar. To include a layer in a layer graph, you must specify a nonempty, unique layer name. If you train a series network with the layer and `Name` is set to `''`, then the software automatically assigns a name to the layer at training time.

Data Types: `char` | `string`

`ResponseNames` — Names of the responses, specified as a cell array of character vectors or a string array. At training time, the software automatically sets the response names according to the training data. The default is `{}`.

Data Types: `cell`

## Output Arguments


Regression output layer, returned as a `RegressionOutputLayer` object.

## More About

### Regression Output Layer

A regression layer computes the half-mean-squared-error loss for regression tasks. For typical regression problems, a regression layer must follow the final fully connected layer.

For a single observation, the mean-squared-error is given by:

$$\mathrm{MSE}=\frac{1}{R}\sum_{i=1}^{R}\left(t_i-y_i\right)^2,$$

where $R$ is the number of responses, $t_i$ is the target output, and $y_i$ is the network's prediction for response $i$.

For image and sequence-to-one regression networks, the loss function of the regression layer is the half-mean-squared-error of the predicted responses, not normalized by R:

$$\mathrm{loss}=\frac{1}{2}\sum_{i=1}^{R}\left(t_i-y_i\right)^2.$$
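As a quick numerical illustration of this formula (a Python sketch for clarity only; it is not part of the MATLAB API):

```python
def half_mse(t, y):
    """Half-sum-of-squared-errors over the R responses of one observation.

    Matches loss = (1/2) * sum_i (t_i - y_i)^2; note there is no
    division by R, unlike the plain MSE definition.
    """
    return 0.5 * sum((ti - yi) ** 2 for ti, yi in zip(t, y))

# Targets and predictions for R = 3 responses
t = [1.0, 2.0, 3.0]
y = [1.5, 2.0, 2.0]
print(half_mse(t, y))  # 0.5 * (0.25 + 0.0 + 1.0) = 0.625
```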

For image-to-image regression networks, the loss function of the regression layer is the half-mean-squared-error of the predicted responses for each pixel, not normalized by R:

$$\mathrm{loss}=\frac{1}{2}\sum_{p=1}^{HWC}\left(t_p-y_p\right)^2,$$

where $H$, $W$, and $C$ denote the height, width, and number of channels of the output, respectively, and $p$ indexes into each element (pixel) of $t$ and $y$ linearly.
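The linear indexing over pixels can be sketched the same way; here a nested H-by-W-by-C structure is flattened before summing (again an illustrative Python sketch, not the MATLAB implementation):

```python
def flatten(x):
    """Iterate a nested H x W x C array as a flat sequence (linear indexing)."""
    for row in x:
        for col in row:
            for channel in col:
                yield channel

def image_loss(t, y):
    """Half-sum-of-squared-errors over all H*W*C elements of the output."""
    return 0.5 * sum((tp - yp) ** 2 for tp, yp in zip(flatten(t), flatten(y)))

# A tiny 1 x 2 x 2 output (H = 1, W = 2, C = 2)
t = [[[1.0, 0.0], [0.0, 1.0]]]
y = [[[0.5, 0.0], [0.0, 0.0]]]
print(image_loss(t, y))  # 0.5 * (0.25 + 0.0 + 0.0 + 1.0) = 0.625
```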

For sequence-to-sequence regression networks, the loss function of the regression layer is the half-mean-squared-error of the predicted responses for each time step, not normalized by R:

$$\mathrm{loss}=\frac{1}{2S}\sum_{i=1}^{S}\sum_{j=1}^{R}\left(t_{ij}-y_{ij}\right)^2,$$

where $S$ is the sequence length.
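A sketch of this sequence loss, assuming `t` and `y` are S-by-R nested lists (illustrative Python only):

```python
def seq_loss(t, y):
    """Half-MSE over a length-S sequence of R responses:
    loss = 1/(2S) * sum over i, j of (t_ij - y_ij)^2.
    """
    S = len(t)
    return sum((tij - yij) ** 2
               for ti, yi in zip(t, y)
               for tij, yij in zip(ti, yi)) / (2 * S)

# S = 2 time steps with R = 2 responses each
t = [[1.0, 2.0], [3.0, 4.0]]
y = [[1.0, 1.0], [3.0, 2.0]]
print(seq_loss(t, y))  # (0 + 1 + 0 + 4) / (2 * 2) = 1.25
```

Note that this normalizes by the sequence length $S$ but, as with the other variants, not by the number of responses $R$.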

When training, the software calculates the mean loss over the observations in the mini-batch.
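That mini-batch averaging step amounts to a simple mean of the per-observation losses (illustrative Python):

```python
def batch_loss(losses):
    """Mean of per-observation losses over a mini-batch."""
    return sum(losses) / len(losses)

# Three observations in a mini-batch
print(batch_loss([0.625, 1.25, 0.125]))  # 2.0 / 3
```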

Introduced in R2017a