How can I improve low probabilities when using probabilistic neural networks?

I am using Probabilistic Neural Networks (NEWPNN) to classify samples into one of four possible classes. I notice that although the classifications are accurate, for some samples the probabilities are very low (rows 1 to 3 in the data below). Columns 1 to 4 are the probabilities of the sample belonging to each of the four classes, column 5 is the PNN's final classification, and column 6 is the sample's original class.
P(class 1)  P(class 2)  P(class 3)  P(class 4)  PNN class  True class
0.2500      0.2500      0.2500      0.2500      1.0000     1.0000
0.2500      0.2500      0.2500      0.2500      1.0000     1.0000
0.2501      0.2500      0.2500      0.2500      1.0000     1.0000
0.8596      0.0468      0.0468      0.0468      1.0000     1.0000
0.9932      0.0023      0.0023      0.0023      1.0000     1.0000
0.8760      0.0413      0.0413      0.0413      1.0000     1.0000
Notice that for the first three rows the probabilities are nearly uniform (~0.25 for each of the four classes), even though the classification itself is correct.

Accepted Answer

MathWorks Support Team on 25 May 2011
To improve the low probabilities when using the NEWPNN function, one parameter you can adjust is the 'spread' of the radial basis functions. If the spread is too small, input vectors that fall between the prototype vectors the network has learned can end up with low, near-uniform probabilities.
net = newpnn(x,t,spread)
Since the best spread value varies with the problem, adjust it until the network performs best on a held-out test set (and reserve additional data to validate the final result), as in the sketch below.
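A minimal sketch of such a spread sweep, assuming x and t are your training inputs and 1-of-N targets and xVal/tVal are held-out data (all variable names here are placeholders for your own data):
% Sweep candidate spread values and keep the one that classifies
% a held-out validation set best. t and tVal are 1-of-N target
% matrices (see IND2VEC).
spreads = logspace(-2, 1, 20);     % candidate spread values
bestAcc = -Inf;
for s = spreads
    net  = newpnn(x, t, s);        % train a PNN with this spread
    yVal = sim(net, xVal);         % competitive (1-of-N) outputs
    acc  = mean(vec2ind(yVal) == vec2ind(tVal));
    if acc > bestAcc
        bestAcc    = acc;
        bestSpread = s;
    end
end
net = newpnn(x, t, bestSpread);    % retrain with the chosen spread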
But simply normalizing the probabilities, as SOFTMAX does (and as the function already does internally), is also a good solution.
You can also use SOFTMAX to normalize the output of a PATTERNNET network after it is trained, as sketched below.
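A minimal sketch of this, assuming x and t are your inputs and 1-of-N targets (the hidden layer size of 10 is an arbitrary placeholder), for releases where PATTERNNET does not already apply a softmax output layer:
% Train a pattern-recognition network, then pass its raw outputs
% through SOFTMAX so each column sums to 1 and reads as probabilities.
net = patternnet(10);     % 10 hidden neurons: an arbitrary choice
net = train(net, x, t);   % x: inputs, t: 1-of-N targets (placeholders)
y   = net(x);             % raw outputs, one column per sample
p   = softmax(y);         % normalize each column to sum to 1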

More Answers (0)

Version

R2011a
