Collect Metrics on Model Testing Artifacts Programmatically
This example shows how to programmatically assess the status and quality of requirements-based testing activities in a project. When you develop software units by using Model-Based Design, you use requirements-based testing to verify your models. You can assess the testing status of a unit model by using the metric API to collect metric data on the traceability between requirements and test cases and on the status of test results. The metrics measure characteristics of the completeness and quality of requirements-based testing that reflect industry standards such as ISO 26262 and DO-178C. After collecting metric results, you can access them directly or export them to a file. By running a script that collects these metrics, you can automatically analyze the testing status of your project, for example as part of a continuous integration system. Use the results to monitor testing completeness or to detect downstream testing impacts when you change artifacts in the project.
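For instance, a continuous integration job could call a script along the lines of the following sketch. This is only a preview of the API calls that the rest of this example walks through step by step; the project path passed to openProject is a placeholder that you would replace with your own project.
% Sketch of a metric-collection script for a CI job (adapt the project path).
openProject('path/to/MyProject');       % placeholder path to your project

metric_engine = metric.Engine();        % engine for the current project
updateArtifacts(metric_engine);         % pick up pending artifact changes

metric_Ids = getAvailableMetricIds(metric_engine,...
    'App','DashboardApp',...
    'Dashboard','ModelUnitTesting');

execute(metric_engine, metric_Ids);     % collect results for every unit
generateReport(metric_engine,'Type','html-file',...
    'Location',fullfile(pwd,'MetricResultsReport.html'));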
Open the Project
Open a project that contains models and testing artifacts. For this example, in the MATLAB® Command Window, enter:
dashboardCCProjectStart('incomplete')
The project contains models, along with requirements and test cases for the models. Some of the requirements have traceability links to the models and test cases, which help to verify that the functionality of a model meets the requirements.
Collect Metric Results
Create a metric.Engine object for the current project.
metric_engine = metric.Engine();
Update the trace information for metric_engine to reflect pending artifact changes and to track the test results.
updateArtifacts(metric_engine);
Create an array of metric identifiers for the metrics you want to collect. For this example, create a list of the metric identifiers used in the Model Testing Dashboard. For more information, see getAvailableMetricIds.
metric_Ids = getAvailableMetricIds(metric_engine,...
    'App','DashboardApp',...
    'Dashboard','ModelUnitTesting');
For a list of model testing metrics and their identifiers, see Model Testing Metrics.
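If you want to confirm which identifiers were returned before collecting results, you can, for example, list them in the Command Window. This is an optional inspection step, not part of the required workflow.
% Display the collected metric identifiers, one per line.
disp(metric_Ids')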
When you collect metric results, you can collect results for one unit at a time or for each unit in the project.
Collect Results for One Unit
When you collect and view results for a unit, the metrics return data for the artifacts that trace to the model.
Collect the metric results for the unit db_DriverSwRequest.
Create an array that identifies the path to the model file in the project and the name of the model.
unit = {fullfile(pwd,'models','db_DriverSwRequest.slx'),'db_DriverSwRequest'};
Execute the engine and use 'ArtifactScope' to specify the unit for which you want to collect results. The engine runs the metrics on only the artifacts that trace to the model that you specify. Collecting results for these metrics requires a Simulink® Test™ license, a Requirements Toolbox™ license, and a Simulink Coverage™ license.
execute(metric_engine, metric_Ids, 'ArtifactScope', unit)
Collect Results for Each Unit in the Project
To collect the results for each unit in the project, execute the engine without the argument for 'ArtifactScope'.
execute(metric_engine, metric_Ids)
For more information on collecting metric results, see the execute function.
Access Results
Generate a report file that contains the results for all units in the project. For this example, specify the HTML file format, use pwd to provide the path to the current folder, and name the report 'MetricResultsReport.html'.
reportLocation = fullfile(pwd, 'MetricResultsReport.html');
generateReport(metric_engine,'Type','html-file','Location',reportLocation);
Open the HTML report. The report is in the current folder, at the root of the project.
web('MetricResultsReport.html')
To open the table of contents and navigate to results for each unit, click the menu icon in the top-left corner of the report. For each unit in the report, there is an artifact summary table that displays the size and structure of that unit.
Saving the metric results in a report file allows you to access the results without opening the project and the dashboard. Alternatively, you can open the Model Testing Dashboard to see the results and explore the artifacts.
modelTestingDashboard
To access the results programmatically, use the getMetrics function. The function returns the metric.Result objects that contain the result data for the specified unit and metrics. For this example, store the results for the metrics TestCaseStatus and TestCasesPerRequirementDistribution in corresponding arrays.
results_TestCasesPerReqDist = getMetrics(metric_engine, 'TestCasesPerRequirementDistribution');
results_TestStatus = getMetrics(metric_engine, 'TestCaseStatus');
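Each element of these arrays is a metric.Result object. Before formatting any output, you can inspect a single result to see how the data is organized; this short sketch reads the same properties that the rest of this example uses.
% Inspect the first test-status result: Artifacts identifies the test case
% and Value holds the numeric status code.
firstResult = results_TestStatus(1);
disp(firstResult.Artifacts(1).Name)
disp(firstResult.Value)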
View Distribution of Test Case Links per Requirement
The metric TestCasesPerRequirementDistribution returns a distribution of the number of test cases linked to each functional requirement for the unit. Use the disp function to display the bin edges and bin counts of the distribution, which are fields in the Value field of the metric.Result object. The left edge of each bin shows a number of test case links, and the bin count shows the number of requirements that are linked to that number of test cases. The sixth bin edge is 18446744073709551615, the upper limit of the count of test cases per requirement, so the fifth bin contains the requirements that have four or more test cases.
disp(['Unit: ', results_TestCasesPerReqDist(1).Scope(1).Name])
disp(['   Tests per Requirement: ', num2str(results_TestCasesPerReqDist(1).Value.BinEdges)])
disp(['   Requirements: ', num2str(results_TestCasesPerReqDist(1).Value.BinCounts)])
Unit: db_DriverSwRequest
   Tests per Requirement: 0  1  2  3  4  18446744073709551615
   Requirements: 3  6  0  0  2
This result shows that for the unit db_DriverSwRequest, there are 3 requirements that are not linked to test cases, 6 requirements that are linked to one test case, and 2 requirements that are linked to four or more test cases. Each requirement should be linked to at least one test case that verifies that the model meets the requirement. The distribution also allows you to check if a requirement has many more test cases than the other requirements, which might indicate that the requirement is too general and that you should break it into more granular requirements.
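In an automated workflow, you might want to flag any unit that still has requirements without linked test cases. The following sketch assumes, as in the output above, that the first bin of the distribution counts requirements with zero linked test cases.
% Warn about requirements that have no linked test cases (first distribution bin).
for n = 1:length(results_TestCasesPerReqDist)
    unitName = results_TestCasesPerReqDist(n).Scope(1).Name;
    unlinkedReqs = results_TestCasesPerReqDist(n).Value.BinCounts(1);
    if unlinkedReqs > 0
        warning('%s has %d requirement(s) with no linked test case.', unitName, unlinkedReqs);
    end
end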
View Test Case Status Results
The metric TestCaseStatus assesses the testing status of each test case for the unit and returns one of these numeric results:
0 — Failed
1 — Passed
2 — Disabled
3 — Untested
Display the name and status of each test case.
for n=1:length(results_TestStatus)
    disp(['Test Case: ', results_TestStatus(n).Artifacts(1).Name])
    disp(['   Status: ', num2str(results_TestStatus(n).Value)])
end
For this example, the tests have not been run, so each test case returns a status of 3.
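If this script runs as part of a continuous integration job, you might want the job to fail when any test case has failed or has not been run. A minimal sketch, using the status codes listed above:
% Count test cases that failed (0) or are untested (3) and error out if any exist.
failedOrUntested = 0;
for n = 1:length(results_TestStatus)
    status = results_TestStatus(n).Value;
    if status == 0 || status == 3
        failedOrUntested = failedOrUntested + 1;
    end
end
if failedOrUntested > 0
    error('%d test case(s) failed or have not been run.', failedOrUntested);
end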
See Also
Model Testing Metrics | metric.Engine | execute | generateReport | getAvailableMetricIds | updateArtifacts