Is there a good pattern to combine unit testing and performance testing?

7 views (last 30 days)
Andrew McLean
Andrew McLean on 21 Mar 2019
Edited: Andy Campbell on 26 Dec 2019
I have made extensive use of the unit testing framework in the past, and have just upgraded to a version of MATLAB with the performance testing framework. I am looking for advice on how to combine the two. A typical scenario is having two different implementations of the same function where I want to both
  1. Verify the two functions give the same result
  2. Time the two functions
At the moment, for (1) I will typically use class-based unit tests, with a single method calling both functions and verifying that the results match. It looks to me as if, to time the functions using the performance testing framework, I would have to put the calls to the two functions in different methods, but then I would lose the ability to verify that the functions produce the same output. Am I missing a neat way of doing both together? Would it be better to do the unit testing and performance testing separately?
Andrew

Answers (2)

Steven Lord
Steven Lord on 21 Mar 2019
Generally, a performance test class that subclasses from matlab.perftest.TestCase is also a unit test, since the performance test base class matlab.perftest.TestCase is a subclass of the unit test base class matlab.unittest.TestCase.
>> ?matlab.perftest.TestCase < ?matlab.unittest.TestCase
ans =
logical
1
[An explanation of that code, from this documentation page about metaclass objects: "Less than function (ClsA < ClsB). Use to determine if ClsA is a strict subclass of ClsB (i.e., a strict subclass means ClsX < ClsX is false)."]
Because a performance test class is a unit test class, you can call all the usual MATLAB qualification API methods inside your performance test. As the "Write Performance Test" section of this documentation page shows, if you do this I would wrap the code whose performance you want to measure in startMeasuring() and stopMeasuring() calls so that you don't measure the performance of the qualification API. You can also run the test as you're writing it using runtests to make sure it works, then once it's finished run it with runperf (which will take longer because "The performance test framework runs the tests using a variable number of measurements") to collect the performance data.
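For instance, a minimal sketch of that shape might look like this (the class name, function under test, and expected value below are placeholders):
classdef MyPerfTest < matlab.perftest.TestCase
    % Sketch only: myFunctionUnderTest and the expected value are placeholders.
    methods (Test)
        function testResult(testCase)
            expected = zeros(1, 1000);        % placeholder expected result
            % Measure only the code of interest ...
            testCase.startMeasuring();
            actual = myFunctionUnderTest(1000);
            testCase.stopMeasuring();
            % ... and qualify outside the measured region
            testCase.verifyEqual(actual, expected, 'RelTol', 1e-6);
        end
    end
end
Develop it with runtests('MyPerfTest'), then collect the performance data with runperf('MyPerfTest').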
For the specific scenario you're describing, you could write your test as an Abstract base class (subclassing from matlab.perftest.TestCase) with an Abstract property. The concrete subclasses of that Abstract base class would fill that Abstract property with a function handle to the function that subclass was written to test. [Technically you wouldn't be comparing the two functions to each other directly, but you'd validate that each function returns the common expected results codified in the Abstract base class.] Run the collection of concrete tests using runperf to create an array of MeasurementResult objects and use whatever techniques you want to compare the data in those objects' Samples properties to determine which function is faster.
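A rough sketch of that arrangement could be the following (the class names, functions, and expected value are placeholders, and each classdef would live in its own file):
classdef (Abstract) FunctionComparisonTest < matlab.perftest.TestCase
    properties (Abstract)
        FunctionUnderTest   % function handle supplied by each concrete subclass
    end
    methods (Test)
        function testAgainstExpectedResult(testCase)
            expected = magic(500);              % placeholder common expected result
            f = testCase.FunctionUnderTest;
            testCase.startMeasuring();
            actual = f(500);
            testCase.stopMeasuring();
            testCase.verifyEqual(actual, expected, 'RelTol', 1e-10);
        end
    end
end

classdef SimpleImplementationTest < FunctionComparisonTest
    properties
        FunctionUnderTest = @simpleImplementation   % placeholder implementation
    end
end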
Alternately, if you have a lot of functions to compare (you're grading a collection of student assignments, for example) you could write your test as a parameterized test. Attached is a parameterized version of the example from the "Write Performance Test" documentation I linked above; run it with the following and review the Samples from each element in the results array.
results = runperf('fprintfTest')
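The attachment isn't reproduced here, but as a rough idea of the shape a parameterized comparison can take (the class name, function handles, and expected value below are placeholders, not the attached file):
classdef implComparisonTest < matlab.perftest.TestCase
    properties (TestParameter)
        % One parameter value per implementation to compare (placeholders)
        impl = struct('simple',    @simpleImplementation, ...
                      'optimized', @optimizedImplementation);
    end
    methods (Test)
        function testImplementation(testCase, impl)
            expected = ones(1, 1000);       % placeholder expected result
            testCase.startMeasuring();
            actual = impl(1000);
            testCase.stopMeasuring();
            testCase.verifyEqual(actual, expected, 'RelTol', 1e-6);
        end
    end
end
Running results = runperf('implComparisonTest') then gives one element of results per parameterization, whose Samples you can compare.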
  2 Comments
Andrew McLean
Andrew McLean on 22 Mar 2019
Thanks. That's helpful. However, the problem is that in many cases I don't have "expected results" independent of the functions I want to test. Think of a case where there is a simple but inefficient way of solving a problem, where the code is easy to inspect for correctness, and a more complex but more efficient approach whose operation needs to be verified.
At the moment it looks like I need to designate one function as the reference implementation. Then have a test method for each function that:
  1. verifies the output matches that of the reference
  2. times the function of interest using startMeasuring() and stopMeasuring()
Done crudely, that doubles the number of function calls. I'll think about whether there is a way to evaluate the reference only once during the performance tests.
Steven Lord
Steven Lord on 22 Mar 2019
Try to memoize the reference implementation. Since it's going to be called with the same input each time, the MemoizedFunction will be called repeatedly but it'll retrieve the answer from the cache every time after the first. That should be faster.
Just be careful if random numbers or another piece of global state is involved, as stated in the second entry in the Tips section on that documentation page. You're going to want to exert some control via rng over the random number generator if you're testing a function whose output could depend upon the particular numbers that were generated. Otherwise your test could fail one run, pass the next three, and fail again. Sporadic failures are a real pain to investigate.
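For example, a sketch along those lines (referenceImplementation and optimizedImplementation are placeholders), combining the per-function test method from the previous comment with a memoized reference and a fixed random number state:
classdef OptimizedVsReferenceTest < matlab.perftest.TestCase
    properties
        Reference = memoize(@referenceImplementation)   % cached after the first call
    end
    methods (Test)
        function testOptimized(testCase)
            rng(0);                          % fix the generator so every run uses the same data
            x = rand(1, 1e5);
            ref = testCase.Reference;
            expected = ref(x);               % retrieved from the cache on repeated calls
            testCase.startMeasuring();
            actual = optimizedImplementation(x);
            testCase.stopMeasuring();
            testCase.verifyEqual(actual, expected, 'RelTol', 1e-10);
        end
    end
end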

Andy Campbell
Andy Campbell on 26 Dec 2019
Edited: Andy Campbell on 26 Dec 2019
Hi Andrew,
Sorry for such a delay!
If you have R2018a or later, you can leverage labeled measurement boundaries to get both measurements in a single test procedure. This would allow something like:
% Measure the reference implementation
testCase.startMeasuring("reference");
expResult = referenceFcn();
testCase.stopMeasuring("reference");
% Measure the optimized implementation
testCase.startMeasuring("optimized");
actResult = optimizedFcn();
testCase.stopMeasuring("optimized");
% Verify the two implementations agree, outside the measured regions
testCase.verifyEqual(actResult, expResult, 'RelTol', 1e-6);
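One way to look at the two labeled measurements afterwards (the test class name here is a placeholder) might be:
results = runperf('MyLabeledPerfTest');   % placeholder name of the class containing the method above
summary = sampleSummary(results)          % table of statistics for the collected samples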
Hope that helps!

Version: R2018b
