The batchnorm() function inputs trainedMean and trainedVar have no effect on the result?
cui,xingxing on 12 Jul 2020
Commented: cui,xingxing on 5 Jul 2021
Why does batchnorm() return the same result for random mean and variance inputs (dlY is always the same)?
height = 4;
width = 4;
channels = 3;
observations = 1;
X = rand(height,width,channels,observations);
dlX = dlarray(X,'SSCB');
offset = zeros(channels,1);
scaleFactor = ones(channels,1);
% Normalize using statistics computed from dlX itself
[dlY,mu,sigmaSq] = batchnorm(dlX,offset,scaleFactor)
% Pass in random values as the mu and sigmaSq inputs
useMean = rand(channels,1);
useVar = rand(channels,1);
[dlY,mu,sigmaSq] = batchnorm(dlX,offset,scaleFactor,useMean,useVar) % dlY is always the same ???
Accepted Answer
Katja Mogalle on 30 Jun 2021
Hello cui,
If I understand correctly, you're wondering why the normalized data returned by batchnorm is the same whether or not you specify mean (mu) and variance (sigmaSq) values as inputs.
There are basically two modes in which batchnorm is used in deep learning: training mode and inference mode.
Training mode
During training mode, the mean and variance are computed directly from the current input data (the "minibatch") and are used to normalize that minibatch of data. Because training processes many different minibatches, we also compute running values of the mean and variance statistics so that we end up with approximate statistics for the entire data set.
For training mode, you can make use of the following two syntaxes:
- Normalize dlX using the mean and variance of dlX itself, together with the provided offset and scale factor: dlY = batchnorm(dlX,offset,scaleFactor)
- Normalize the minibatch as above and additionally build up running statistics: [dlY,updatedMu,updatedSigmaSq] = batchnorm(dlX,offset,scaleFactor,mu,sigmaSq)
This documentation example shows how to build up running statistics: https://www.mathworks.com/help/deeplearning/ref/dlarray.batchnorm.html?s_tid=doc_ta#mw_b1029c55-ab31-41f2-9792-7b047819d613
The formulas for computing the updated statistics can be found here: https://www.mathworks.com/help/deeplearning/ref/dlarray.batchnorm.html?s_tid=doc_ta#mw_2fb22643-9b65-4aca-b326-321c1cadf6ce
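For illustration, here is a minimal sketch (not from the documentation example above; the loop structure and variable names are my own) of how the second training-mode syntax builds up running statistics across several minibatches:
height = 4; width = 4; channels = 3; observations = 8;
offset = zeros(channels,1);
scaleFactor = ones(channels,1);
% Initialize the running statistics before training.
runningMu = zeros(channels,1);
runningSigmaSq = ones(channels,1);
for iteration = 1:10
    % Each iteration sees a different minibatch.
    X = rand(height,width,channels,observations);
    dlX = dlarray(X,'SSCB');
    % Training mode (3 outputs): dlY is normalized with the statistics of
    % dlX itself; runningMu/runningSigmaSq are only blended with the batch
    % statistics and returned as updated values for the next iteration.
    [dlY,runningMu,runningSigmaSq] = batchnorm(dlX,offset,scaleFactor, ...
        runningMu,runningSigmaSq);
end
After the loop, runningMu and runningSigmaSq approximate the statistics of all the data seen during training and can be passed to the inference-mode syntax below.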
Inference mode
During inference mode, we want to normalize each minibatch in exactly the same way, using the same mu and sigmaSq, namely the statistics of the entire training data set.
For inference mode, you can make use of this syntax:
- Normalize dlX using the provided statistics of the training data set: dlY = batchnorm(dlX,offset,scaleFactor,trainedMu,trainedSigmaSq)
In conclusion ... I suspect you wanted to try out the inference mode syntax (5 input arguments, one output argument) instead of the second training mode syntax I mentioned above (5 input arguments, 3 output arguments).
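To make the difference concrete, here is a small comparison I put together, reusing the variables from the question:
% Training-mode call (5 inputs, 3 outputs): useMean/useVar only seed the
% running statistics; dlX is normalized with its own batch statistics.
[dlYTrain,updatedMu,updatedSigmaSq] = batchnorm(dlX,offset,scaleFactor,useMean,useVar);
% Inference-mode call (5 inputs, 1 output): dlX is normalized with
% useMean/useVar, so different statistics give a different dlY.
dlYInfer = batchnorm(dlX,offset,scaleFactor,useMean,useVar);
% Nonzero unless useMean/useVar happen to match the batch statistics of dlX.
max(abs(extractdata(dlYTrain) - extractdata(dlYInfer)),[],'all')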
I hope this helps.