Backpropagation neural network
I learned that the activation functions logsig and tansig return values in the ranges [0, 1] and [-1, 1], respectively. What happens if the target values lie beyond these limits?
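A minimal sketch of the situation the question describes (in Python for illustration; `logsig` and `tansig` here are plain reimplementations of the MATLAB functions of the same names): because both activations are bounded, a target outside the activation's range leaves an error the network can never drive to zero, no matter how the weights are trained.

```python
import math

def logsig(x):
    # Logistic sigmoid: output is always in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def tansig(x):
    # Hyperbolic tangent sigmoid: output is always in (-1, 1).
    return math.tanh(x)

# Even for extreme inputs the outputs stay inside their ranges:
print(logsig(50.0))    # saturates near 1, never above it
print(tansig(-50.0))   # saturates near -1, never below it

# A target of 5 is unreachable through a logsig output neuron:
target = 5.0
best_possible = logsig(50.0)        # close to the upper bound of 1
irreducible_error = target - best_possible
print(irreducible_error)            # stays at least 4, whatever the weights
```

This is why targets are normally scaled into the activation's range (or a linear output layer is used) before training.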
2 Comments
Mohammad Sami
on 8 Jun 2020
One reason is that large target values can cause exploding gradients when training the network.
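A hedged sketch of that point (assuming squared-error loss and a linear output unit, purely for illustration and not necessarily the asker's setup): the output-layer error term scales directly with the target, so an unscaled, very large target produces a proportionally large weight update, which can destabilize training.

```python
# Output-layer error term for squared error with a linear output unit:
# delta = y - t (illustrative assumption).
def output_delta(y, t):
    return y - t

y = 0.5                               # current network output
print(output_delta(y, 1.0))           # modest update signal: -0.5
print(output_delta(y, 1000.0))        # huge update signal: -999.5
```

Scaling targets into a small range (for example [0, 1] or [-1, 1]) keeps these error terms, and hence the gradients, at a moderate size.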
Sivamani S
on 8 Jun 2020
Answers (0)