# groupNormalizationLayer

Group normalization layer

## Description

A group normalization layer divides the channels of the input data into groups and normalizes the activations across each group. To speed up training of convolutional neural networks and reduce the sensitivity to network initialization, use group normalization layers between convolutional layers and nonlinearities, such as ReLU layers. You can perform instance normalization and layer normalization by setting the appropriate number of groups.

You can use a group normalization layer in place of a batch normalization layer. This is particularly useful when training with small batch sizes as it can increase the stability of training.

The layer first normalizes the activations of each group by subtracting the group mean and dividing by the group standard deviation. Then, the layer shifts the input by a learnable offset β and scales it by a learnable scale factor γ.
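For example, a minimal sketch of a convolutional block that uses group normalization in place of batch normalization (the filter and group counts here are illustrative):

```
% Convolution followed by group normalization and a nonlinearity.
% groupNormalizationLayer(4) takes the place of batchNormalizationLayer.
layers = [
    convolution2dLayer(3,16,'Padding','same')
    groupNormalizationLayer(4)
    reluLayer];
```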

## Creation

### Syntax

``layer = groupNormalizationLayer(numGroups)``
``layer = groupNormalizationLayer(numGroups,Name,Value)``

### Description

`layer = groupNormalizationLayer(numGroups)` creates a group normalization layer that divides the channels in the layer input into `numGroups` groups and normalizes across each group.

`layer = groupNormalizationLayer(numGroups,Name,Value)` creates a group normalization layer and sets the optional Normalization, Parameters and Initialization, Learn Rate and Regularization, and `Name` properties using one or more name-value pair arguments. You can specify multiple name-value pair arguments. Enclose each property name in quotes.
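For example, a minimal sketch that sets two optional properties at creation (the property values here are illustrative):

```
% Create a layer with 4 groups, a custom Epsilon, and a name.
layer = groupNormalizationLayer(4,'Epsilon',1e-4,'Name','gn1');
```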

### Input Arguments

`numGroups` – Number of groups into which to divide the channels of the input data, specified as a positive integer, `"all-channels"`, or `"channel-wise"`.

If you specify `numGroups` as a positive integer, the layer divides the incoming channels into the specified number of groups. The specified number of groups must divide the number of channels exactly.

If you specify `numGroups` as `"all-channels"`, the layer groups all incoming channels into a single group. This is also known as layer normalization.

If you specify `numGroups` as `"channel-wise"`, the layer treats all incoming channels as separate groups. This is also known as instance normalization.
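The following sketch shows the three forms (the group count is illustrative):

```
layer = groupNormalizationLayer(4);               % 4 groups of channels
layer = groupNormalizationLayer("all-channels");  % layer normalization
layer = groupNormalizationLayer("channel-wise");  % instance normalization
```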

## Properties

### Normalization

`Epsilon` – Constant to add to the mini-batch variances, specified as a numeric scalar greater than or equal to `1e-5`.

The layer adds this constant to the mini-batch variances before normalization to ensure numerical stability and avoid division by zero.

`NumChannels` – Number of input channels, specified as `'auto'` or a positive integer.

This property is always equal to the number of channels of the input to the layer. If `NumChannels` equals `'auto'`, then the software infers the correct value for the number of channels at training time.

### Parameters and Initialization

`ScaleInitializer` – Function to initialize the channel scale factors, specified as one of the following:

• `'ones'` – Initialize the channel scale factors with ones.

• `'zeros'` – Initialize the channel scale factors with zeros.

• `'narrow-normal'` – Initialize the channel scale factors by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

• Function handle – Initialize the channel scale factors with a custom function. If you specify a function handle, then the function must be of the form `scale = func(sz)`, where `sz` is the size of the scale. For an example, see Specify Custom Weight Initialization Function and the sketch after this property description.

The layer only initializes the channel scale factors when the `Scale` property is empty.

Data Types: `char` | `string` | `function_handle`
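A minimal sketch of a custom scale initializer passed as a function handle (the distribution here is an illustrative choice, not a recommended default):

```
% Initialize scale factors around 1 with small random perturbations.
scaleInit = @(sz) 1 + 0.1*randn(sz);
layer = groupNormalizationLayer(4,'ScaleInitializer',scaleInit);
```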

`OffsetInitializer` – Function to initialize the channel offsets, specified as one of the following:

• `'zeros'` – Initialize the channel offsets with zeros.

• `'ones'` – Initialize the channel offsets with ones.

• `'narrow-normal'` – Initialize the channel offsets by independently sampling from a normal distribution with zero mean and standard deviation 0.01.

• Function handle – Initialize the channel offsets with a custom function. If you specify a function handle, then the function must be of the form `offset = func(sz)`, where `sz` is the size of the offset. For an example, see Specify Custom Weight Initialization Function.

The layer only initializes the channel offsets when the `Offset` property is empty.

Data Types: `char` | `string` | `function_handle`

`Scale` – Channel scale factors γ, specified as a numeric array.

The channel scale factors are learnable parameters. When training a network, if `Scale` is nonempty, then `trainNetwork` uses the `Scale` property as the initial value. If `Scale` is empty, then `trainNetwork` uses the initializer specified by `ScaleInitializer`.

At training time, `Scale` is one of the following:

• For 2-D image input, a numeric array of size 1-by-1-by-`NumChannels`

• For 3-D image input, a numeric array of size 1-by-1-by-1-by-`NumChannels`

• For feature or sequence input, a numeric array of size `NumChannels`-by-1

`Offset` – Channel offsets β, specified as a numeric array.

The channel offsets are learnable parameters. When training a network, if `Offset` is nonempty, then `trainNetwork` uses the `Offset` property as the initial value. If `Offset` is empty, then `trainNetwork` uses the initializer specified by `OffsetInitializer`.

At training time, `Offset` is one of the following:

• For 2-D image input, a numeric array of size 1-by-1-by-`NumChannels`

• For 3-D image input, a numeric array of size 1-by-1-by-1-by-`NumChannels`

• For feature or sequence input, a numeric array of size `NumChannels`-by-1
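For example, a minimal sketch that supplies initial values for both learnable parameters for 2-D image input (the sizes follow the lists above; the channel and group counts are illustrative):

```
% Scale and Offset must be 1-by-1-by-NumChannels for 2-D image input.
numChannels = 16;
layer = groupNormalizationLayer(4, ...
    'Scale',ones(1,1,numChannels), ...
    'Offset',zeros(1,1,numChannels));
```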

### Learn Rate and Regularization

`ScaleLearnRateFactor` – Learning rate factor for the scale factors, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the scale factors in a layer. For example, if `ScaleLearnRateFactor` is `2`, then the learning rate for the scale factors in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the `trainingOptions` function.

`OffsetLearnRateFactor` – Learning rate factor for the offsets, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the offsets in a layer. For example, if `OffsetLearnRateFactor` equals `2`, then the learning rate for the offsets in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the `trainingOptions` function.

`ScaleL2Factor` – L2 regularization factor for the scale factors, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the scale factors in a layer. For example, if `ScaleL2Factor` is 2, then the L2 regularization for the scale factors in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the `trainingOptions` function.

`OffsetL2Factor` – L2 regularization factor for the offsets, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the offsets in a layer. For example, if `OffsetL2Factor` is 2, then the L2 regularization for the offsets in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the `trainingOptions` function.
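For example, a minimal sketch that doubles both factors for the scale parameters relative to the global settings (the factor values are illustrative):

```
% Scale factors train with twice the global learning rate and
% twice the global L2 regularization factor.
layer = groupNormalizationLayer(4, ...
    'ScaleLearnRateFactor',2, ...
    'ScaleL2Factor',2);
```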

### Layer

`Name` – Layer name, specified as a character vector or a string scalar. To include a layer in a layer graph, you must specify a nonempty unique layer name. If you train a series network with the layer and `Name` is set to `''`, then the software automatically assigns a name to the layer at training time.

Data Types: `char` | `string`

`NumInputs` – Number of inputs of the layer. This layer accepts a single input only.

Data Types: `double`

`InputNames` – Input names of the layer. This layer accepts a single input only.

Data Types: `cell`

`NumOutputs` – Number of outputs of the layer. This layer has a single output only.

Data Types: `double`

`OutputNames` – Output names of the layer. This layer has a single output only.

Data Types: `cell`

## Examples

Create a group normalization layer that normalizes incoming data across three groups of channels. Name the layer `'groupnorm'`.

`layer = groupNormalizationLayer(3,'Name','groupnorm')`
```
layer = 
  GroupNormalizationLayer with properties:

           Name: 'groupnorm'
    NumChannels: 'auto'

   Hyperparameters
      NumGroups: 3
        Epsilon: 1.0000e-05

   Learnable Parameters
         Offset: []
          Scale: []
```

Include a group normalization layer in a `Layer` array. Normalize the incoming 20 channels across four groups.

```
layers = [
    imageInputLayer([28 28 3])
    convolution2dLayer(5,20)
    groupNormalizationLayer(4)
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer]
```
```
layers = 
  8x1 Layer array with layers:

     1   ''   Image Input             28x28x3 images with 'zerocenter' normalization
     2   ''   Convolution             20 5x5 convolutions with stride [1 1] and padding [0 0 0 0]
     3   ''   Group Normalization     Group normalization
     4   ''   ReLU                    ReLU
     5   ''   Max Pooling             2x2 max pooling with stride [2 2] and padding [0 0 0 0]
     6   ''   Fully Connected         10 fully connected layer
     7   ''   Softmax                 softmax
     8   ''   Classification Output   crossentropyex
```

## Algorithms

A group normalization layer normalizes its inputs $x_i$ by first calculating the mean $\mu_g$ and variance $\sigma_g^2$ over the specified groups of channels [1]. Then, it calculates the normalized activations as

$$\hat{x}_i = \frac{x_i - \mu_g}{\sqrt{\sigma_g^2 + \epsilon}}.$$

Here, ϵ (the property `Epsilon`) improves numerical stability when the group variance is very small. To allow for the possibility that inputs with zero mean and unit variance are not optimal for the layer that follows the group normalization layer, the group normalization layer further shifts and scales the activations as

$$y_i = \gamma \hat{x}_i + \beta.$$

Here, the offset β and scale factor γ (`Offset` and `Scale` properties) are learnable parameters that are updated during network training.
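The following sketch traces these equations by hand for a single H-by-W-by-C observation (all names and sizes are illustrative; the layer itself learns per-channel γ and β, which are scalars here for brevity):

```
H = 4; W = 4; C = 6; numGroups = 2; epsilon = 1e-5;
x = randn(H,W,C);
xg = reshape(x,H,W,C/numGroups,numGroups);   % split channels into groups
mu = mean(xg,[1 2 3]);                       % per-group mean
sigma2 = var(xg,1,[1 2 3]);                  % per-group variance
xhat = (xg - mu)./sqrt(sigma2 + epsilon);    % normalize
gamma = 1; beta = 0;                         % learnable scale and offset
y = reshape(gamma.*xhat + beta,H,W,C);       % scale and shift
```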

## References

[1] Wu, Yuxin, and Kaiming He. "Group Normalization." arXiv:1803.08494 [cs], June 11, 2018. http://arxiv.org/abs/1803.08494.

Introduced in R2020b