Why are activations of frozen layers different before and after training?
ntinoson on 29 Jun 2018
Commented: Amanjit Dulai on 28 Aug 2018
I am following the example "transfer-learning-using-googlenet", in which the last 3 layers ('loss3-classifier', 'prob', 'output') are replaced with 3 new ones. Then I freeze the first 141 layers (that is, up to and including 'pool5-drop_7x7_s1'):
layers(1:141) = freezeWeights(layers(1:141));
lgraph = createLgraphUsingConnections(layers,connections);
Then I fine-tune the network as in the example.
Since 'pool5-7x7_s1' comes BEFORE 'pool5-drop_7x7_s1', I would expect the following two vectors to be the same:
b_orig= activations(net_orig, I, 'pool5-7x7_s1');
b_tune= activations(net_tune, I, 'pool5-7x7_s1');
but they aren't! Any idea why?
P.S. I also tried the activations of several other layers before 'pool5-drop_7x7_s1' and got different vectors. Here 'I' is an image, 'net_orig = googlenet;', and 'net_tune' is the network after fine-tuning.
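For context, here is a minimal sketch of the setup described above, following the structure of the "transfer-learning-using-googlenet" example. The variables numClasses, imds (an imageDatastore of the new data), and options (trainingOptions) are placeholders, and the helper functions freezeWeights and createLgraphUsingConnections from that example are assumed to be on the path:
net_orig = googlenet;
lgraph = layerGraph(net_orig);
% Replace the last three layers for the new classification task.
lgraph = removeLayers(lgraph, {'loss3-classifier','prob','output'});
newLayers = [
    fullyConnectedLayer(numClasses, 'Name', 'fc')
    softmaxLayer('Name', 'softmax')
    classificationLayer('Name', 'classoutput')];
lgraph = addLayers(lgraph, newLayers);
lgraph = connectLayers(lgraph, 'pool5-drop_7x7_s1', 'fc');
% Freeze the first 141 layers by zeroing their learn-rate factors.
layers = lgraph.Layers;
connections = lgraph.Connections;
layers(1:141) = freezeWeights(layers(1:141));
lgraph = createLgraphUsingConnections(layers, connections);
% Fine-tune on the new data.
net_tune = trainNetwork(imds, lgraph, options);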
Accepted Answer
Amanjit Dulai on 14 Aug 2018
The vectors are different because when you fine-tune on a new dataset, the average image in the "imageInputLayer" is recalculated for your new dataset. The frozen layers' weights are unchanged, but the input layer now subtracts a different mean image, so the normalized input differs and every downstream activation changes, even in frozen layers.
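You can check this directly: the mean image stored in the input layer differs between the two networks, while the frozen weights are identical. A minimal check, assuming a release around R2018 where the input layer exposes an 'AverageImage' property (newer releases store it as 'Mean' instead):
% Compare the mean images used for input normalization.
avg_orig = net_orig.Layers(1).AverageImage;
avg_tune = net_tune.Layers(1).AverageImage;
max(abs(avg_orig(:) - avg_tune(:)))   % nonzero: normalization changed
% The frozen convolution weights themselves are unchanged:
isequal(net_orig.Layers(2).Weights, net_tune.Layers(2).Weights)   % returns true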