Help regarding the transposed convolution layer
Radians
on 6 Mar 2020
Answered: Srivardhan Gadila
on 17 Mar 2020
Hi,
Could someone explain how transposedConv2dLayer works?
I am struggling to understand the following helper function
function out = createUpsampleTransponseConvLayer(factor,numFilters)
% Creates a transposed convolution layer that upsamples the input by "factor".
filterSize = 2*factor - mod(factor,2);    % 2*factor for even factors, 2*factor-1 for odd ones
cropping = (factor - mod(factor,2))/2;    % trims the output so it is exactly factor times the input size
numChannels = 1;
out = transposedConv2dLayer(filterSize,numFilters, ...
    'NumChannels',numChannels,'Stride',factor,'Cropping',cropping);
end
from the example: https://uk.mathworks.com/help/deeplearning/examples/image-to-image-regression-using-deep-learning.html
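For concreteness, here is what the helper evaluates to for factor = 2 (a worked substitution of the code above, added for clarity):
factor = 2;
filterSize = 2*factor - mod(factor,2)      % = 4
cropping   = (factor - mod(factor,2))/2    % = 1
% so the layer is created as
% transposedConv2dLayer(4,numFilters,'NumChannels',1,'Stride',2,'Cropping',1)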
How do the filter size and stride affect the output of this layer?
What's the difference between this layer and a simple upsampling layer?
Are the weights somehow transposed, or are they learned from scratch?
0 Comments
Accepted Answer
Srivardhan Gadila
on 17 Mar 2020
An upsampling layer uses a fixed, predefined interpolation method to upsample the input, whereas a transposed convolution layer learns its weights from scratch. Starting in R2019a, the software by default initializes the weights of this layer using the Glorot initializer. This behavior helps stabilize training and usually reduces the training time of deep networks.
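A minimal sketch of both points, assuming Deep Learning Toolbox R2019a or later; the size relation in the comments is the standard one for a transposed convolution with symmetric cropping:
% The helper's parameter choices make the layer upsample by exactly "factor":
% for a transposed convolution with scalar cropping c on each side,
%   outputSize = Stride*(inputSize - 1) + FilterSize - 2*c
% and substituting the helper's values gives outputSize = factor*inputSize.
inputSize = 32;
for factor = [2 3 4]
    filterSize = 2*factor - mod(factor,2);
    cropping   = (factor - mod(factor,2))/2;
    outputSize = factor*(inputSize - 1) + filterSize - 2*cropping;
    fprintf('factor %d: filterSize %d, cropping %d, output %d\n', ...
        factor, filterSize, cropping, outputSize)
end

% The weights are not copied or transposed from an existing convolution;
% they are learnable parameters that start out empty and are initialized
% with the Glorot scheme by default before training.
layer = transposedConv2dLayer(4,16,'NumChannels',1,'Stride',2,'Cropping',1);
layer.WeightsInitializer   % 'glorot'
layer.Weights              % [] until the network is initialized or trained
With factor = 2, the loop reproduces the filterSize = 4, stride = 2, cropping = 1 combination from the helper and shows that the output is exactly twice the input size.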
0 Comments
More Answers (0)