Neural network performance function, weighted sse, and false alarms
All,
I am using a neural network for some classification/pattern recognition and would like to penalize false alarms more heavily than plain misses. After looking through the documentation, it looks like I may be able to accomplish this by using a weighted SSE as the performance function, but the documentation on using the weights is quite sparse.
1.) Does anyone have an example of how to use a weighted sse as a performance function?
2.) Is there a better way to get the system to minimize the number of false alarms?
The original problem is that the data being analyzed has many more non-events than events, so frequently there are more false alarms than hits. When training, I present an equal number of events and non-events, yet this still results in many false alarms when the entire dataset is analyzed after training. Suggestions?
Things I've done:
- Normalized input to the network
- Presented equal numbers of events/non-events during training
- Applied a PCA to eliminate correlation in input
Is this just a sign that more training needs to be done? Any insight would be greatly appreciated!
Thanks!
-Eric
I apologize if this posted twice, my first attempt did not appear to work.
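For reference, the normalization and PCA steps Eric lists can be expressed as input processing functions on the network itself, rather than applied by hand. This is only a sketch, assuming the toolbox's mapminmax and processpca functions are available:

```matlab
% Sketch only: attach normalization and PCA as input processing functions.
% mapminmax rescales each input row to [-1, 1]; processpca removes
% linearly correlated input components. Both run automatically on the
% data passed to train().
net = feedforwardnet(10);
net.inputs{1}.processFcns = {'mapminmax', 'processpca'};
```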
Accepted Answer
Greg Heath
on 23 Nov 2011
If you are designing a classifier with c classes, use training targets that are columns of eye(c). The input is assigned to the class associated with the largest output.
Hope this helps.
Greg
P.S. The outputs are estimates of the input-conditional posterior probability P(c=i|x).
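A minimal sketch of Greg's scheme, with made-up data and assuming the toolbox's patternnet is available (any feedforward network trained on these targets works the same way):

```matlab
c = 3;                           % number of classes
x = rand(4, 60);                 % 4 features, 60 made-up samples
labels = randi(c, 1, 60);        % made-up class indices 1..c

I = eye(c);
t = I(:, labels);                % training targets are columns of eye(c)

net = patternnet(10);            % one hidden layer of 10 neurons
net = train(net, x, t);

y = net(x);                      % outputs estimate P(class = i | x)
[~, assigned] = max(y, [], 1);   % assign input to class with largest output
```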
More Answers (4)
Mark Hudson Beale
on 19 Apr 2011
Error weights let you specify which targets are most important to get correct, or equivalently, most costly to get wrong.
Let's say you had the following 12 targets for a classification problem:
t = [0 1 1 0 1 0 0 0 1 1 0 1]
You can create error weights that prioritize avoiding class 1 misclassifications twice as much as class 0 misclassifications.
ew = (t==0)*0.5 + (t==1)
The error weights can then be used both to measure performance yourself and during training.
perf = mse(net,t,y,ew)
perf = sse(net,t,y,ew)
net = train(net,x,t,[],[],ew)
Francois
on 7 Jul 2011
Dear Mark,
did you try this, or did you just post it thinking it should work?
(see this post: http://www.mathworks.de/matlabcentral/answers/10512-neural-networks-toolbox-error-weights-get-an-error)
this line of code
net = train(net,x,t,[],[],ew)
is giving an error
??? Error using ==> trainlm at 109
Inputs and input states have different numbers of samples.
Error in ==> network.train at 107
[net,tr] = feval(net.trainFcn,net,X,T,Xi,Ai,EW,net.trainParam);
Francois
on 14 Jul 2011
Finally, this is the right code:
[net,tr] = train(net,x,t,{},{},EW);
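Putting Mark's weighting together with this fix, an end-to-end sketch (made-up inputs; the key point is the {} placeholders for the input/layer delay states, not []):

```matlab
x  = rand(2, 12);                          % made-up 2-feature inputs
t  = [0 1 1 0 1 0 0 0 1 1 0 1];            % targets from Mark's example
EW = (t==0)*0.5 + (t==1);                  % class-1 errors weighted 2x

net = feedforwardnet(5);
[net, tr] = train(net, x, t, {}, {}, EW);  % {} for Xi/Ai, not []

y = net(x);
perf = mse(net, t, y, EW);                 % weighted performance
```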
Hessam
on 15 Nov 2011
Hi there, I'm also training nets, in my case to classify composition states in combustion. Since the output values in my data are very small, I take their log and then normalize so that the data fall in the interval [0 1]. Although the performance function value (using trainlm) converges to 1e-10, the outputs are still quite poor for targets on the order of 1e-6 or 1e-5. My question: does EW help in this regard? Also, the default performance function for trainlm is mse, which is supposed to be a normalized/relative value, correct? If so, how can I get rid of these bad outputs?
Thanks