Are differences in results too large to be round-off error?
KAE
on 20 Jul 2023
Commented: Steven Lord on 31 Jul 2023
There is a multi-step calculation that I have accidentally been passing the same input numbers into. I didn't realize it, because the results are different (0.00051972 vs. 0.00052553, for example). Now that I know those results come from the same input numbers, is it possible for differences this large to be due to round-off error?
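For scale, a quick comparison of the two quoted results (the values in the comments are approximate):
a = 0.00051972;  b = 0.00052553;
abs(a - b)          % absolute difference, about 5.8e-6
abs(a - b)/abs(a)   % relative difference, about 1.1e-2, i.e. roughly 1%
eps(a)              % double-precision spacing near a, about 1.1e-19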
2 Comments
Accepted Answer
John D'Errico
on 20 Jul 2023
Edited: John D'Errico on 20 Jul 2023
Is it possible? Of course it is. Perhaps you have some random component in there that you do not realize is random. In fact, that is even likely, since many tools use random starting values. The Curve Fitting Toolbox, for example, does. And if you have a random start, then the solver may terminate at a subtly different solution, depending on the tolerances. Other tools also use random starts; eigs, for example, uses a random start point, as I recall.
Is that a big number? WHO KNOWS? (We don't see your code, or your data, and at least we cannot yet see into your mind, though the mind reading toolbox is just around the corner. You may need to be plugged in for it to work, though.) If some of your numbers are on the order of 1e12 or so, then it is just a one-bit difference in the least significant bit.
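To put a scale on that one-bit claim: eps(x) returns the spacing between x and the next larger double, i.e., the size of a change in the last bit at that magnitude.
eps(1e12)   % ans = 1.220703125e-04, a one-bit change near 1e12
eps(1)      % ans = 2.220446049250313e-16, a one-bit change near 1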
Do you have unseen tolerances in your code? Possibly. Again, we can't know, since we don't see your code.
Are you ABSOLUTELY positive you passed in the same numbers? In fact, this is a mistake people often make. For example:
x = rand()   % displays in the default short format, e.g., x = 0.2637
Now we KNOW that x is not exactly 0.2637; the short format rounds the display to about four significant digits. In fact, a better value for x is shown by:
format long g   % display all of the stored digits
x
However, many people just retype that short displayed number, and think they have copied EXACTLY the same number that was used the first time the code was run. That alone would produce a tiny difference in any result.
Could you have changed the code in some small way, and forgotten that you did? Possibly. How can we know?
Without seeing your code, and EXACTLY what you did, telling us a specific difference is meaningless. It could be big, or it could be completely insignificant.
One thing you might do is to fix the starting seed for the random numbers in MATLAB BEFORE a run, then see if the result stays constant when you run the code repeatedly. I would expect to see repeatable results then.
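A minimal sketch of that test, where myComputation and data stand in for your own function and inputs (both hypothetical names):
rng(0, 'twister')            % fix the global random seed
r1 = myComputation(data);    % first run
rng(0, 'twister')            % reset to the identical seed
r2 = myComputation(data);    % second run
max(abs(r1(:) - r2(:)))      % exactly 0 if hidden randomness caused the difference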
4 Comments
Christine Tobler
on 31 Jul 2023
Note that EIGS used to start with a different random seed in each run, but this changed a few years ago (prior to R2018a, so I can't easily look it up in the release notes). Now EIGS gives the same results from run to run, and it is possible to pass in a different random starting vector to see the effect of the random number choice.
I agree that setting the random seed before each of two calls is a great way to check whether random numbers are being generated somewhere that cause the difference in results.
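As an illustration of probing eigs with an explicit starting vector ('StartVector' is a documented name-value option of eigs; the matrix here is just an example):
A = delsq(numgrid('S', 30));           % example sparse symmetric matrix
d1 = eigs(A, 5);                       % deterministic default start
d2 = eigs(A, 5, 'largestabs', 'StartVector', randn(size(A,1), 1));
d1 - d2                                % differences on the order of the convergence tolerance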
Steven Lord
on 31 Jul 2023
I believe, looking at the Release Notes in the documentation for release R2018a, that the behavior Christine described was introduced in release R2017b.
More Answers (1)
Walter Roberson
on 20 Jul 2023
In cases where MATLAB can automatically vectorize calculations on larger arrays, splitting the work between multiple CPU cores, the different cores might return their results at different times in different runs, so some operations might not be done in the same order on each run.
Whether that matters depends on the implementation: does it consume results from the fastest-responding core first, does it wait for each core in turn, or does it have some kind of buffering system designed to record outputs even if one core "laps" the others in calculations (that is, one core completes two full tasks before another core finishes one)? Taking "efficiency" versus "power" cores into account, you should assume lapping is possible if more work is dispatched to each core as it finishes its previous task.
If the order of "consolidating" calculations is not deliberately controlled in the implementation, then two different runs of the same program on the same system could potentially give different results.
One of the easier cases to understand where this could be an issue is sum() over an entire array. In a perfect world, you could divide the array into as many pieces as there are cores, have each core sum its chunk, and add the chunk-sums into a running total as each core makes its results available. But this assumes that (A+B)+C gives a result identical to A+(B+C) and to B+(A+C). That assumption fails in double-precision floating point. Consider 1 + 1e-30 + -1. In 53-bit double precision, 1e-30 is too small to matter relative to 1, so (1 + 1e-30) --> 1 exactly. Add the -1 and you get exactly 0. But if you compute (1 + -1) + 1e-30, you get 1e-30. So in double precision, the order matters.
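You can verify that directly at the MATLAB prompt:
a = (1 + 1e-30) + (-1)   % the 1e-30 is absorbed by the 1, so a is exactly 0
b = (1 + (-1)) + 1e-30   % the 1 and -1 cancel first, so b is 1e-30
isequal(a, b)            % false: floating-point addition is not associative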
I seem to recall seeing something a few releases ago, either in the release notes or in one of the bug reports, saying that in this particular case MathWorks has been careful to arrange that the partial sums are added in a consistent order even if the CPU cores return their partial results in a different order. But the note was specific to summation, and did not in itself apply more generally to arithmetic such as the \ operator. So for the case of summation, the results should be consistent on any one system, provided the number of cores does not change... but the number of cores could potentially change, and the results could potentially differ on a different system (with a different number of cores, or with a different chip that has access to different hardware vector mathematics).
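A minimal way to see why the chunking matters: summing the same data with different simulated chunk counts usually gives slightly different answers.
x = randn(1, 2^20);                 % some data
s1 = sum(x);                        % single-pass reference sum
s4 = sum(sum(reshape(x, [], 4)));   % simulate 4 cores: 4 partial sums, then combine
s8 = sum(sum(reshape(x, [], 8)));   % simulate 8 cores
[s1 - s4, s1 - s8]                  % typically tiny, nonzero differences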
0 Comments