Deep network behavior in custom training loop on shared layers
Hello,
I am studying the Siamese network example at https://www.mathworks.com/help/deeplearning/examples/train-a-siamese-network-to-compare-images.html.
The example is clear. My question is about how it would work if a dropout layer were added to the sub-network.
The question arises because dropout behaves differently during training (forward) and prediction (predict). During training, the layer randomly sets input elements to zero according to a dropout mask that is drawn each time the layer is invoked, whereas at prediction time the output of the layer equals its input (https://www.mathworks.com/help/deeplearning/ref/nnet.cnn.layer.dropoutlayer.html?s_tid=doc_ta). It therefore stands to reason that the mask would be different for each image of the input pair, as sketched below, but this is NOT what we want!
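Here is a minimal sketch of what I mean (the layer sizes and dropout probability are made up for illustration, not taken from the example): calling forward twice on the same shared dlnetwork in training mode should apply two independent dropout masks, one per image of the pair, while predict disables dropout entirely.

layers = [
    imageInputLayer([28 28 1],'Normalization','none','Name','in')
    fullyConnectedLayer(64,'Name','fc')
    dropoutLayer(0.5,'Name','drop')];   % hypothetical dropout in the sub-network
net = dlnetwork(layers);

X = dlarray(rand(28,28,1,1,'single'),'SSCB');   % one dummy input image

% Training mode: each call draws its own dropout mask,
% so the two images of a pair would see different masks.
Y1 = forward(net,X);
Y2 = forward(net,X);
isequal(extractdata(Y1),extractdata(Y2))   % usually false

% Prediction mode: dropout is a pass-through, so the outputs match.
Z1 = predict(net,X);
Z2 = predict(net,X);
isequal(extractdata(Z1),extractdata(Z2))   % true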
Please advise,
D