MATLAB Answers


How to calculate accuracy for neural network algorithms?

Asked by sandhya sandhya on 14 Mar 2019
Latest activity Commented on by Osama Tabbakh on 15 Jul 2019 at 7:45

  1 Comment

I'm pretty sure this is a topic with literally thousands of hits if you Google it! Or are you asking specifically about a network coded in MATLAB? In that case, showing some code would help.


1 Answer

Answer by Greg Heath on 15 Mar 2019 (Accepted Answer)

I normalize the mean squared error

MSE = mse(error) = mse(output-target)

by the minimum MSE obtainable when the output is a constant. If the output is a constant, the MSE is minimized when that constant is the mean of the target. For a 1-D target,

NMSE = mse(output-target) / mse(target-mean(target))
     = mse(error) / var(target,1)

This is related to the R-square statistic (a.k.a. R2) via

Rsquare = R2 = 1 - NMSE

For any net that does at least as well as the constant (mean-of-target) predictor, both NMSE and R2 lie in [0,1].
I have posted zillions of examples in both the NEWSGROUP and ANSWERS.
Just search using
Greg NMSE
Thank you for formally accepting my answer
Greg
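
For readers who want to see the calculation end to end, here is a minimal sketch in MATLAB; the data (x, t) and the use of fitnet are illustrative placeholders, not from the original question:

% Minimal sketch of the NMSE / R2 calculation described in the answer above.
% x and t are illustrative 1-D data, not from the original question.
x = -1:0.01:1;                          % 1 x N input
t = sin(2*pi*x) + 0.1*randn(size(x));   % 1 x N noisy target

net = fitnet(10);                       % small fitting network, 10 hidden neurons
[net,tr] = train(net,x,t);

y = net(x);                             % network output
e = t - y;                              % error

MSE   = mean(e.^2);                     % mse(error)
MSE00 = mean((t - mean(t)).^2);         % mse(target-mean(target)) = var(t,1)

NMSE = MSE / MSE00;                     % normalized MSE
R2   = 1 - NMSE                         % R-square; closer to 1 is better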

  5 Comments

Your numbers make no sense:
  1. A 1x420 target requires the input to be transposed (see the dimension check sketched below).
  2. Where does 2560 come from?
  3. Your use of *.val makes no sense.
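
For context on point 1: in this toolbox's matrix convention, inputs are arranged as (number of features) x (number of samples) and targets as (number of outputs) x (number of samples), so the two must have the same number of columns. A minimal check, with illustrative variable names:

% Illustrative dimension check; x = inputs, t = targets (placeholder names).
[~,Nx] = size(x);    % Nx = number of input samples (columns)
[~,Nt] = size(t);    % Nt = number of target samples (columns)
if Nx ~= Nt
    error('Inputs and targets must have the same number of columns (samples).')
end
% e.g. a 1x420 target needs an input with 420 columns; transpose with x = x.' if needed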
The 1x420 input signal was from my previous work; I forgot to update the dimension. The current input signal is 23x2560, and *.val is the field of the loaded file that holds the signal. If I run plot(input), it displays an error, but plot(input.val) displays the output. Can you please add accuracy commands to my code?
input  = load('project1.mat');     % 23x2560 signal, stored in input.val
target = load('braineeg.mat');     % corresponding targets, stored in target.val
hiddenLayerSize = 10;
net = feedforwardnet(hiddenLayerSize);    % one hidden layer with 10 neurons
net.divideFcn = 'divideind';              % split samples by explicit indices
net.divideParam.trainInd = 1:1792;        % 70% training
net.divideParam.valInd   = 1793:2176;     % 15% validation
net.divideParam.testInd  = 2177:2560;     % 15% test
net = configure(net,input.val,target.val);
[net,tr] = train(net,input.val,target.val);
view(net)
output = net(input.val);                       % network predictions
errors = gsubtract(target.val,output);         % target - output
performance = perform(net,target.val,output);  % overall MSE
figure, plotperform(tr)
figure, plottrainstate(tr)
figure, plotconfusion(target.val,output)
[c,cm] = confusion(target.val,output);         % c = fraction misclassified, cm = confusion matrix
figure, ploterrhist(errors)
trainTargets = target.val .* tr.trainMask{1};  % NaN outside each subset
valTargets   = target.val .* tr.valMask{1};
testTargets  = target.val .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,output);
valPerformance   = perform(net,valTargets,output);
testPerformance  = perform(net,testTargets,output);
% output = net(input.val) above already holds the predictions; the classic
% network object is evaluated by calling it directly, not with predict()
MSE  = mse(output - target.val);
NMSE = MSE / mse(target.val - mean(target.val));   % normalize by the target variance
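
A hedged sketch of the requested accuracy commands, under an assumption the thread never settles: that target.val encodes classes in one-of-N (one-hot) columns, which is what plotconfusion and confusion expect. It reuses c and NMSE from the code above.

% Classification accuracy: confusion returns c = the fraction of
% misclassified samples, so the overall accuracy in percent is
accuracy = 100*(1 - c);

% If the targets are continuous (regression) signals, report R2 instead,
% using the NMSE computed above:
R2 = 1 - NMSE;    % closer to 1 is better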
What I do not understand is that the R-square statistic, computed this way, assumes the relationship between the target and the output is linear. When the behavior is nonlinear, you can get a high accuracy value even though the network produces a large error.
