Different results with different versions of Matlab. A way to detect what functions changed?

27 views (last 30 days)
With new versions of Matlab there are improvements or changes in some functions. Is there an easy way to test my code on a new version and detect if any functions used have changed?
I would like to avoid looking manually for the specific function that has changed and that is responsible for providing different results.
  3 Comments
Alexandre3
Alexandre3 on 5 Jul 2017
Thanks for your answer, Adam. I too try to keep up to date on the release notes, but I wanted to check for an alternative option. I will check whether investing time in unit testing is worthwhile.
Jack Kenney
Jack Kenney on 14 Apr 2020
Introduced in MATLAB R2017b and available onward, the "codeCompatibilityReport" and "codeCompatibilityAnalysis" functions help users determine whether the MATLAB functions they are currently using have functionality changes in new releases of MATLAB.
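For example (the folder path is hypothetical; the function names and options are as documented since R2017b), the report can be generated interactively or queried programmatically:

```matlab
% Open an interactive compatibility report for a folder of code (R2017b+)
codeCompatibilityReport('C:\myproject')

% Or collect the results programmatically for scripted upgrade checks
r = analyzeCodeCompatibility('C:\myproject', 'IncludeSubfolders', true);
disp(r.Recommendations)   % table of flagged incompatibilities per file
```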


Answers (2)

Jan
Jan on 4 Jul 2017
Edited: Jan on 4 Jul 2017
As Adam said already:
  1. You need to read the list of changes in the documentation.
  2. Only an exhaustive self test of the code will reveal problems. You need unit tests for all functions, and an integration test for the main code. Automatic tests for GUIs are required in addition.
Unfortunately this is a huge pile of work. I'm maintaining code with > 300,000 lines (plus comments) for clinical decision making. The unit tests, which check the expected outputs and the wanted errors for wrong input for each tool function, run for 2 hours, but the integration tests, which compare the results of complete studies including hundreds of patients, take much longer. The comparison of the results is not trivial, because rounding errors can appear, and if the model is numerically unstable, the results can differ completely; this does not necessarily mean a software problem or an incompatibility with a Matlab version. This was the case for the old weak version of LOG10 and the change of ACOS to the netlib method. Even the compiler used for C-Mex files matters.
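As a minimal sketch of such a comparison test (the function and data-file names are made up; the matlab.unittest framework and the verifyEqual tolerance option are as documented), one can compare against results stored from a trusted release:

```matlab
% test_model.m -- function-based unit test comparing against stored results
function tests = test_model
tests = functiontests(localfunctions);
end

function testAgainstReference(testCase)
actual = myModel(1:10);                 % function under test (hypothetical)
ref    = load('reference_results.mat'); % outputs saved on a trusted release
% A relative tolerance lets harmless rounding differences pass while
% still catching real changes in numerical behavior between releases
verifyEqual(testCase, actual, ref.y, 'RelTol', 1e-10)
end
```

Run it with runtests('test_model').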
For reliable software, even staying on the same Matlab version does not remove the need for unit tests. I'd never trust a function if it is not tested exhaustively on my computer.
The automatic tests revealed several undocumented changes in Matlab, or to be exact: changes not documented in public. The internal docs contain more information. E.g. the change of fullfile('C:\', '\'), which returns 'C:\\' since R2015b, or the new behavior of strncmp(a,b,0) returning TRUE instead of the former FALSE. The loss of the VaxD format in fopen was not mentioned in the public docs either.
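One cheap defense is a small set of "assumption checks" that pin down exactly such edge cases, so a release that silently changes them fails loudly instead of corrupting results downstream. A sketch (the file name is made up; the strncmp n == 0 convention is the one described above):

```matlab
% assumption_checks.m -- run at startup or as part of the test suite
% Pin down edge-case semantics the code base relies on.
assert(strncmp('abc', 'xyz', 0), ...
    'strncmp with n == 0 no longer returns TRUE -- review callers')
```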
A large code base and the need for reliable results demand exhaustive self-tests. Testing is obligatory; there is no magic way to avoid it. In fact, manual tests will never be sufficient: they take too long and do not cover all code reliably.
  6 Comments
Jan
Jan on 6 Jul 2017
You are right, Walter. Software for controlling machines should not have a defined end point to stop at. The goal of this kind of software is not to calculate a final result, but to acquire the current set of measurement values to determine the parameters to control the system. Now the parts of this software are the "programs": the function to calculate the remaining fuel should deliver a reliable value in a finite time, or an error message, such that the controller (e.g. the pilot) gets the chance to determine the value in a different way (alternative sensor, estimation, bold guessing, ignoring).
This would mean that the (infinite) control loop is not a "program", because it is not even intended to stop autonomously. Only the individual functions that control subsystems should be "programs" with a guaranteed limit on the run time, e.g. even with a fixed duration in a real-time system.
For an open-loop system like the software of an airplane, one should think of including the pilot as an integral part to let the system become a "program" (in the definition of my former professor): the pilot guarantees to bring the passengers back to earth in finite time. Otherwise the system should be considered a bad random number generator.
The Ariane V exploded in 1996 because of a bug in the subsystem for attitude control. This demonstrates that the control system was designed as a "program" (with a final result), but it does not mean that the software design was "correct".
@Alexandre3: Sorry, I left the topic. All I want to say is: unit tests help to control the quality of the code and its compatibility with the system it runs on. They are expensive, and it is painful to create them afterwards. But this question is a good hint that automatic testing is obligatory.
Adam
Adam on 7 Jul 2017
It does depend on what your code is used for, too. Code for an aeroplane undeniably needs exhaustive unit tests. I write mostly research code in a non-life-threatening domain. Our company's main software engineers writing our commercial software use TDD (except when management imposes infeasible time constraints and suddenly it all collapses!), but I've yet to find it useful or fast enough for what I do in research.
I may have to start, though, since we use Matlab Coder more and more nowadays, so code I write goes straight through into commercial software with no double-checking. I used to manually port my research code to C++ and add more tests at that point.
Some of our software engineers say TDD is faster, but sadly they never prove it; they just state opinion as fact, and my first-hand experience is vastly to the contrary!



John D'Errico
John D'Errico on 4 Jul 2017
Really, the only way that I know of is to read the release notes for each release as it comes out. A virtue of doing so is that you will sometimes learn of a new tool or capability, something that you might never have seen otherwise. Those notes are to be found on the website.
Yes, I suppose it would be nice to have a utility that could scan all of your code, flagging each function called in it and comparing them against a database of when those functions were last changed.
Let's see: this utility I am thinking about would operate in the editor when a flag is set from the menus. Each function name (perhaps even operators, since the times operator is just a call to the function mtimes) would be color-coded on a scale from green to red, indicating how recently the function was modified, or perhaps some measure of the degree of change. Documentation changes are pretty minor, but some other changes could be viewed as major. If a function has seen no change at all since the time a script was written, the function name would remain black.
On mouse-over, a tooltip could indicate what had changed most recently for that function. I suppose if a function is due to be removed in a future release, its name could be made to blink, or use some other color code, perhaps violet.
  1 Comment
Alexandre3
Alexandre3 on 5 Jul 2017
Thanks John. Actually, I was thinking of a functionality more like the "Dependency Report". Currently this report gives you all the subfunctions that are called by a script. It would be nice to also see whether a function in the report changed with respect to the previous version.
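A programmatic starting point for the dependency half of this already exists: matlab.codetools.requiredFilesAndProducts lists the files and products a script depends on, which could then be cross-checked against the release notes by hand (the script name here is hypothetical; the function is as documented):

```matlab
% List every user-written file and MathWorks product myScript.m depends on
[fList, pList] = matlab.codetools.requiredFilesAndProducts('myScript.m');
disp(fList')          % dependent files (full paths)
disp({pList.Name}')   % required products/toolboxes
```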

