Mean squared error (MSE) and performance in training record not correct?

I noticed that the performance values in the training record of a neural network are consistently different from the performance I calculate manually. It looks like the numbers in the training record are not computed directly with the net's performance function. Here's some code.
First, I train a neural network:
x = (0:0.1:10);
t = sin(x);
net = fitnet(6);
[net,trainingrecord] = train(net,x,t);
y = net(x);
Then I manually calculate the performance of the net on the test sample:
for i = 1:size(trainingrecord.testInd,2)
    test_y(i) = y(1,trainingrecord.testInd(i));
    test_t(i) = t(1,trainingrecord.testInd(i));
end
manualperf = 0;
for i = 1:size(trainingrecord.testInd,2)
    manualperf = manualperf + (test_y(i)-test_t(i))^2;
end
manualperf = manualperf/size(trainingrecord.testInd,2);
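For reference, the same test-set MSE can be computed without loops; this is a minimal vectorized sketch equivalent to the loops above:

```matlab
% Vectorized test-set MSE, equivalent to the loops above
testInd = trainingrecord.testInd;           % indices of the test sample
manualperf = mean((y(testInd) - t(testInd)).^2);
```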
Computing the same performance with the perform function gives exactly the same number:
autoperf = perform(net,test_y,test_t);
isequal(autoperf,manualperf)
ans =
1
But they both differ from trainingrecord.best_tperf:
>> autoperf
autoperf =
1.129785002584019e-06
>> manualperf
manualperf =
1.129785002584019e-06
>> trainingrecord.best_tperf
ans =
1.129785002584038e-06
>> isequal(autoperf,manualperf,trainingrecord.best_tperf)
ans =
0
It looks like the performance in the training record is not calculated straightforwardly by calling the perform function, or maybe some error accumulates through the code. Any ideas?

Accepted Answer

José-Luis on 30 Jun 2016 (edited 30 Jun 2016)

The difference appears to be well within the realm of numerical precision. You shouldn't compare doubles directly with isequal; it will break your heart. The fact that the first comparison returned true is a small miracle in and of itself.
Please read the part about comparing floating-point numbers in the linked documentation.
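A tolerance-based comparison might look like the sketch below. The tolerance values here are illustrative choices, not prescribed ones; for very small MSEs a relative tolerance is usually the safer option.

```matlab
% Compare two doubles within a tolerance instead of using isequal
a = 1.129785002584019e-06;   % values from the question
b = 1.129785002584038e-06;

absTol = 1e-12;              % illustrative absolute tolerance
sameAbs = abs(a - b) < absTol;

relTol = 1e-9;               % illustrative relative tolerance
sameRel = abs(a - b) <= relTol * max(abs(a), abs(b));
```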

3 Comments

Roberto on 30 Jun 2016 (edited 30 Jun 2016)
You're absolutely right. (And 0.5 - 0.4 - 0.1 = -2.7756e-17 is amazing :D) Then of course the equality I found looks like a miracle. But I did some tests and found it's not so "miraculous". For example, I changed the order of the sum (mse = sum((xi - yi)^2)/n) and got exactly the same number. I also tried dividing every element of the sum by n before summing, and summing an explicit product instead of using the "^2" operator, and the result is still exactly the same. The only reasonable way I've found to alter the result is summing xi^2 + yi^2 - 2*xi*yi instead of (xi - yi)^2.
My point is, it looks like there is no reasonable way to calculate the MSE that produces a number different from the one given by the straightforward calculation. So I was wondering how the performances in the training record are calculated.
Also, in some cases the difference becomes relevant. For example, when the performance is really good and the MSE is small, you may have two nets with performances
1.2645e-22 and 1.3154e-22
I always need to compare performances of neural networks, and in a situation like that the error is definitely relevant.
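To illustrate the effect described above with arbitrarily chosen values: the straightforward and the expanded forms of the squared error are algebraically identical but can round differently in double precision.

```matlab
% Two algebraically identical expressions for the squared error
xi = 0.1; yi = 0.3;
d1 = (xi - yi)^2;              % straightforward form
d2 = xi^2 + yi^2 - 2*xi*yi;    % expanded form
d1 == d2                       % may be false: the two roundings differ in the last ulps
```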
When comparing, you should then use a tolerance, as discussed in the link.
Please accept the answer that best solves your problem.
I'll accept your answer in a couple of days.

