Modifying a Pretrained Neural Network
Shai Kendler
on 1 Apr 2020
Answered: Srivardhan Gadila
on 8 Apr 2020
I plan to use a pretrained net such as alexnet with an input image of 227*227*5. I exported the net to the Deep Network Designer app and changed the input and first convolution layers according to my requirements. I analyzed the architecture and it seems perfect. Can I trust the new network to be a good starting point, or am I being naive?
Thanks,
Shai
Accepted Answer
Srivardhan Gadila
on 8 Apr 2020
Since the imageInputLayer and convolution2dLayer have been replaced, the output of the imageInputLayer will differ from the original: the mean used for zero-center normalization is no longer the one computed from the original training data. Likewise, the features output by the replaced convolution layer come from newly initialized weights and may not be useful on their own. If you are retraining the network on your new dataset with input size 227*227*5, none of this matters. If instead you are using the network for feature extraction and your data is very different from the original data, then the features extracted deeper in the network might be less useful for your task.
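For reference, the same modification can be done programmatically instead of in Deep Network Designer. This is a minimal sketch assuming AlexNet's standard layout (layer 1 is the imageInputLayer 'data', layer 2 is the 11x11, 96-filter 'conv1'); adjust the names and sizes to your network:

```matlab
net = alexnet;                  % requires the AlexNet support package
layers = net.Layers;

% Replace the input layer so it accepts 5-channel 227x227 images.
% trainNetwork recomputes the zero-center normalization statistics
% from the new training data; the ImageNet mean is not carried over.
layers(1) = imageInputLayer([227 227 5],'Name','data');

% Replace the first convolution so its filters span 5 input channels.
% These weights are freshly initialized and must be learned from scratch.
layers(2) = convolution2dLayer(11,96,'Stride',4,'Name','conv1');

analyzeNetwork(layers)          % verify the modified architecture
```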
Here are a few suggestions for retraining:
- Try freezing the weights of the original layers by setting WeightLearnRateFactor and BiasLearnRateFactor to zero on each pretrained convolution2dLayer, and likewise on each fullyConnectedLayer (see the sketch after this list).
- Or retrain the complete network without freezing the weights of any layers.
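As a sketch of the first suggestion, continuing from the layers array above (the indices assume the AlexNet layout, where layers 1 and 2 were just replaced):

```matlab
% Freeze every remaining pretrained layer that has learnable weights,
% so trainNetwork updates only the newly initialized layers.
for i = 3:numel(layers)         % skip the two replaced layers
    if isprop(layers(i),'WeightLearnRateFactor')
        layers(i).WeightLearnRateFactor = 0;
        layers(i).BiasLearnRateFactor = 0;
    end
end
```

Frozen layers keep their pretrained weights throughout training, which can also speed up each iteration since their gradients need not be computed.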