Model Testing Metrics
The Model Testing Dashboard collects metric data from the model design and testing artifacts in a project, such as requirements, models, and test results. Use the metric data to assess the status and quality of your model testing. Each metric in the dashboard measures a different aspect of the quality of the testing of your model and reflects guidelines in industry-recognized software development standards, such as ISO 26262 and DO-178. Use the widgets in the Model Testing Dashboard to see high-level metric results and testing gaps, as described in Explore Status and Quality of Testing Activities Using the Model Testing Dashboard.
Alternatively, you can use the API functions to collect metric results programmatically. When using the API, use the metric IDs to refer to each metric. The metric ID for each dashboard widget is listed with its metric in the sections below.
See Collect Metrics on Model Testing Artifacts Programmatically for an example of how to collect these metrics programmatically.
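For reference, this minimal MATLAB sketch shows the general pattern that the per-metric examples below assume: open the project, create a metric engine, execute a metric by its ID, and retrieve the results. The project name is a placeholder; substitute your own project.

    % Open the project that contains your models, requirements, and tests.
    % "myDashboardProject" is a placeholder; use your own project path.
    openProject("myDashboardProject");

    % Create a metric engine for the artifacts in the current project.
    metric_engine = metric.Engine();

    % Collect a metric by its metric ID, then retrieve the results.
    execute(metric_engine, "RequirementWithTestCase");
    results = getMetrics(metric_engine, "RequirementWithTestCase");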
Requirement linked to test cases
Metric ID: RequirementWithTestCase
Determine whether a requirement is linked to test cases.
Description
Use this metric to determine whether a requirement is linked to a test case with a link where the Type is set to Verifies. The metric analyzes only requirements where the Type is set to Functional and that are linked to the unit with a link where the Type is set to Implements.
To collect data for this metric:
In the Model Testing Dashboard, click a metric in the Requirements Linked to Tests section and, in the table, see the Test Link Status column.
Use getMetrics with the metric identifier, RequirementWithTestCase.
Collecting data for this metric loads the model file and requires a Requirements Toolbox™ license.
Results
For this metric, instances of metric.Result return Value as one of these logical outputs:
0 — The requirement is not linked to test cases in the project.
1 — The requirement is linked to at least one test case with a link where the Type is set to Verifies.
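For example, a minimal sketch that lists the requirements missing test links, assuming the metric was collected with a metric.Engine object as in the sketch above and that each result exposes its requirement through the Artifacts property:

    results = getMetrics(metric_engine, "RequirementWithTestCase");
    for n = 1:numel(results)
        if results(n).Value == 0
            % Assumes Artifacts(1).Name identifies the requirement.
            fprintf("Requirement missing a test link: %s\n", results(n).Artifacts(1).Name);
        end
    end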
Capabilities and Limitations
The metric:
Analyzes only requirements where the Type is set to Functional and that are linked to the unit with a link where the Type is set to Implements.
Counts links to test cases in the project where the link type is set to Verifies, including links to test cases that test other models or subsystems. For each requirement that is linked to test cases, check that the links are to test cases that run on the unit that implements the requirement.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Percentage of requirements with test cases
Metric ID: RequirementWithTestCasePercentage
Calculate the percentage of requirements that are linked to test cases.
Description
This metric counts the fraction of requirements that are linked to at least one test case with a link where the Type is set to Verifies. The metric analyzes only requirements where the Type is set to Functional and that are linked to a unit with a link where the Type is set to Implements.
This metric calculates the results by using the results of the Requirement linked to test cases metric.
To collect data for this metric:
In the Model Testing Dashboard, view the Requirements with Tests widget.
Use getMetrics with the metric identifier, RequirementWithTestCasePercentage.
Collecting data for this metric loads the model file and requires a Requirements Toolbox license.
Results
For this metric, instances of metric.Result return Value as a fraction structure that contains these fields:
Numerator — The number of implemented requirements that are linked to at least one test case.
Denominator — The total number of functional requirements implemented in the unit with a link where the Type is set to Implements.
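For example, a minimal sketch that converts the fraction structure into a percentage, assuming the metric was collected with a metric.Engine object and returns one result for the unit:

    result = getMetrics(metric_engine, "RequirementWithTestCasePercentage");
    frac = result(1).Value;
    pct = 100 * frac.Numerator / frac.Denominator;
    fprintf("%d of %d requirements are linked to tests (%.1f%%)\n", ...
        frac.Numerator, frac.Denominator, pct);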
Compliance Thresholds
The default compliance thresholds for this metric are:
Compliant — 100% of unit requirements are linked to test cases
Non-Compliant — Less than 100% of unit requirements are linked to test cases
Warning — None
Capabilities and Limitations
The metric:
Analyzes only requirements where the Type is set to Functional and that are linked to a unit with a link where the Type is set to Implements.
Counts links to test cases in the project where the link type is set to Verifies, including links to test cases that test other models or subsystems. For each requirement that is linked to test cases, check that the links are to test cases that run on the unit that implements the requirement.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Requirement with test case distribution
Metric ID: RequirementWithTestCaseDistribution
Distribution of the number of requirements linked to test cases compared to the number of requirements that are missing test cases.
Description
Use this metric to count the number of requirements that are linked to test cases and the number of requirements that are missing links to test cases. The metric analyzes only requirements where the Type is set to Functional and that are linked to a unit with a link where the Type is set to Implements. A requirement is linked to a test case if it has a link where the Type is set to Verifies.
This metric returns the result as a distribution of the results of the Requirement linked to test cases metric.
To collect data for this metric:
In the Model Testing Dashboard, place your cursor over the Requirements with Tests widget.
Use getMetrics with the metric identifier, RequirementWithTestCaseDistribution.
Collecting data for this metric loads the model file and requires a Requirements Toolbox license.
Results
For this metric, instances of metric.Result return Value as a distribution structure that contains these fields:
BinCounts — The number of requirements in each bin, returned as an integer vector.
BinEdges — The logical output results of the Requirement linked to test cases metric, returned as a vector with entries 0 (false) and 1 (true).
The first bin includes requirements that are not linked to test cases. The second bin includes requirements that are linked to at least one test case.
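For example, a minimal sketch that reads the two bins, assuming the metric was collected with a metric.Engine object and returns one result for the unit:

    result = getMetrics(metric_engine, "RequirementWithTestCaseDistribution");
    dist = result(1).Value;
    % First bin: requirements without test links; second bin: requirements
    % linked to at least one test case.
    fprintf("Requirements missing test links: %d\n", dist.BinCounts(1));
    fprintf("Requirements linked to tests:    %d\n", dist.BinCounts(2));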
Compliance Thresholds
The default compliance thresholds for this metric are:
Compliant — 0 requirements are missing links to test cases
Non-Compliant — 1 or more requirements are missing links to test cases
Warning — None
Capabilities and Limitations
The metric:
Analyzes only requirements where the Type is set to Functional and that are linked to a unit with a link where the Type is set to Implements.
Counts links to test cases in the project where the link type is set to Verifies, including links to test cases that test other models or subsystems. For each requirement that is linked to test cases, check that the links are to test cases that run on the unit that implements the requirement.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Test cases per requirement
Metric ID: TestCasesPerRequirement
Count the number of test cases linked to each requirement.
Description
Use this metric to count the number of test cases linked to each requirement. The metric analyzes only requirements where the Type is set to Functional and that are linked to the unit with a link where the Type is set to Implements. A test case is linked to a requirement if it has a link where the Type is set to Verifies.
To collect data for this metric:
In the Model Testing Dashboard, click a metric in the section Tests per Requirement to display the results in a table.
Use getMetrics with the metric identifier, TestCasesPerRequirement.
Collecting data for this metric loads the model file and requires a Requirements Toolbox license.
Results
For this metric, instances of metric.Result return Value as an integer.
Capabilities and Limitations
The metric:
Analyzes only requirements where the Type is set to Functional and that are linked to the unit with a link where the Type is set to Implements.
Counts links to test cases in the project where the link type is set to Verifies, including links to test cases that test other models or subsystems. For each requirement that is linked to test cases, check that the links are to test cases that run on the unit that implements the requirement.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Test cases per requirement distribution
Metric ID: TestCasesPerRequirementDistribution
Distribution of the number of test cases linked to each requirement.
Description
This metric returns a distribution of the number of test cases linked to each requirement. Use this metric to determine if requirements are linked to a disproportionate number of test cases. The metric analyzes only requirements where the Type is set to Functional and that are linked to the unit with a link where the Type is set to Implements. A test case is linked to a requirement if it has a link where the Type is set to Verifies.
This metric returns the result as a distribution of the results of the Test cases per requirement metric.
To collect data for this metric:
In the Model Testing Dashboard, view the Tests per Requirement widget.
Use getMetrics with the metric identifier, TestCasesPerRequirementDistribution.
Collecting data for this metric loads the model file and requires a Requirements Toolbox license.
Results
For this metric, instances of metric.Result return Value as a distribution structure that contains these fields:
BinCounts — The number of requirements in each bin, returned as an integer vector.
BinEdges — Bin edges for the number of test cases linked to each requirement, returned as an integer vector. BinEdges(1) is the left edge of the first bin, and BinEdges(end) is the right edge of the last bin. The length of BinEdges is one more than the length of BinCounts.
The bins in the result of this metric correspond to the bins 0, 1, 2, 3, and >3 in the Tests per Requirement widget.
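For example, a minimal sketch that prints each bin with its edges, assuming the metric was collected with a metric.Engine object and returns one result for the unit:

    result = getMetrics(metric_engine, "TestCasesPerRequirementDistribution");
    dist = result(1).Value;
    for b = 1:numel(dist.BinCounts)
        fprintf("Requirements with tests in bin [%g, %g): %d\n", ...
            dist.BinEdges(b), dist.BinEdges(b+1), dist.BinCounts(b));
    end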
Compliance Thresholds
This metric does not have predefined thresholds. Consequently, this metric appears when you click Uncategorized in the Overlays section of the toolstrip.
Capabilities and Limitations
The metric:
Analyzes only requirements where the Type is set to Functional and that are linked to the unit with a link where the Type is set to Implements.
Counts links to test cases in the project where the link type is set to Verifies, including links to test cases that test other models or subsystems. For each requirement that is linked to test cases, check that the links are to test cases that run on the unit that implements the requirement.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Test linked to requirements
Metric ID: TestCaseWithRequirement
Determine whether a test case is linked to requirements.
Description
Use this metric to determine whether a test case is linked to a requirement with a link where the Type is set to Verifies. The metric analyzes only test cases that run on the model or subsystems in the unit for which you collect metric data.
To collect data for this metric:
In the Model Testing Dashboard, click a metric in the Tests Linked to Requirements section and, in the table, see the Requirement Link Status column.
Use getMetrics with the metric identifier, TestCaseWithRequirement.
Collecting data for this metric loads the model file and requires a Simulink® Test™ license.
Results
For this metric, instances of metric.Result return Value as one of these logical outputs:
0 — The test case is not linked to requirements that are implemented in the unit.
1 — The test case is linked to at least one requirement with a link where the Type is set to Verifies.
Capabilities and Limitations
The metric:
Analyzes only test cases in the project that test:
Unit models
Atomic subsystems
Atomic subsystem references
Atomic Stateflow® charts
Atomic MATLAB® Function blocks
Referenced models
Counts only links where the Type is set to Verifies that link to requirements where the Type is set to Functional. This includes links to requirements that are not linked to the unit or are linked to other units. For each test case that is linked to requirements, check that the links are to requirements that are implemented by the unit that the test case runs on.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Test linked to requirement percentage
Metric ID: TestCaseWithRequirementPercentage
Calculate the fraction of test cases that are linked to requirements.
Description
This metric counts the fraction of test cases that are linked to at least one requirement with a link where the Type is set to Verifies. The metric analyzes only test cases that run on the model or subsystems in the unit for which you collect metric data.
This metric calculates the results by using the results of the Test linked to requirements metric.
To collect data for this metric:
In the Model Testing Dashboard, view the Tests with Requirements widget.
Use getMetrics with the metric identifier, TestCaseWithRequirementPercentage.
Collecting data for this metric loads the model file and requires a Simulink Test license.
Results
For this metric, instances of metric.Result return Value as a fraction structure that contains these fields:
Numerator — The number of test cases that are linked to at least one requirement with a link where the Type is set to Verifies.
Denominator — The total number of test cases that test the unit.
Compliance Thresholds
The default compliance thresholds for this metric are:
Compliant — 100% of test cases are linked to requirements
Non-Compliant — Less than 100% of test cases are linked to requirements
Warning — None
Capabilities and Limitations
The metric:
Analyzes only test cases in the project that test:
Unit models
Atomic subsystems
Atomic subsystem references
Atomic Stateflow charts
Atomic MATLAB Function blocks
Referenced models
Counts only links where the Type is set to Verifies that link to requirements where the Type is set to Functional. This includes links to requirements that are not linked to the unit or are linked to other units. For each test case that is linked to requirements, check that the links are to requirements that are implemented by the unit that the test case runs on.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Test linked to requirement distribution
Metric ID: TestCaseWithRequirementDistribution
Distribution of the number of test cases linked to requirements compared to the number of test cases that are missing links to requirements.
Description
Use this metric to count the number of test cases that are linked to requirements and the number of test cases that are missing links to requirements. The metric analyzes only test cases that run on the model or subsystems in the unit for which you collect metric data. A test case is linked to a requirement if it has a link where the Type is set to Verifies.
This metric returns the result as a distribution of the results of the Test linked to requirements metric.
To collect data for this metric:
In the Model Testing Dashboard, place your cursor over the Tests with Requirements widget.
Use getMetrics with the metric identifier, TestCaseWithRequirementDistribution.
Collecting data for this metric loads the model file and requires a Simulink Test license.
Results
For this metric, instances of metric.Result return Value as a distribution structure that contains these fields:
BinCounts — The number of test cases in each bin, returned as an integer vector.
BinEdges — The logical output results of the Test linked to requirements metric, returned as a vector with entries 0 (false) and 1 (true).
The first bin includes test cases that are not linked to requirements. The second bin includes test cases that are linked to at least one requirement.
Compliance Thresholds
The default compliance thresholds for this metric are:
Compliant — 0 unit tests are missing links to requirements
Non-Compliant — 1 or more unit tests are missing links to requirements
Warning — None
Capabilities and Limitations
The metric:
Analyzes only test cases in the project that test:
Unit models
Atomic subsystems
Atomic subsystem references
Atomic Stateflow charts
Atomic MATLAB Function blocks
Referenced models
Counts only links where the Type is set to Verifies that link to requirements where the Type is set to Functional. This includes links to requirements that are not linked to the unit or are linked to other units. For each test case that is linked to requirements, check that the links are to requirements that are implemented by the unit that the test case runs on.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Requirements per test case
Metric ID: RequirementsPerTestCase
Count the number of requirements linked to each test case.
Description
Use this metric to count the number of requirements linked to each test case. The metric analyzes only test cases that run on the model or subsystems in the unit for which you collect metric data. A test case is linked to a requirement if it has a link where the Type is set to Verifies.
To collect data for this metric:
In the Model Testing Dashboard, click a metric in the section Requirements per Test to display the results in a table.
Use getMetrics with the metric identifier, RequirementsPerTestCase.
Collecting data for this metric loads the model file and requires a Simulink Test license.
Results
For this metric, instances of metric.Result return Value as an integer.
Capabilities and Limitations
The metric:
Analyzes only test cases in the project that test:
Unit models
Atomic subsystems
Atomic subsystem references
Atomic Stateflow charts
Atomic MATLAB Function blocks
Referenced models
Counts only links where the Type is set to Verifies that link to requirements where the Type is set to Functional. This includes links to requirements that are not linked to the unit or are linked to other units. For each test case that is linked to requirements, check that the links are to requirements that are implemented by the unit that the test case runs on.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Requirements per test case distribution
Metric ID: RequirementsPerTestCaseDistribution
Distribution of the number of requirements linked to each test case.
Description
This metric returns a distribution of the number of requirements linked to each test case. Use this metric to determine if test cases are linked to a disproportionate number of requirements. The metric analyzes only test cases that run on the model or subsystems in the unit for which you collect metric data. A test case is linked to a requirement if it has a link where the Type is set to Verifies.
This metric returns the result as a distribution of the results of the Requirements per test case metric.
To collect data for this metric:
In the Model Testing Dashboard, view the Requirements per Test widget.
Use getMetrics with the metric identifier, RequirementsPerTestCaseDistribution.
Collecting data for this metric loads the model file and requires a Simulink Test license.
Results
For this metric, instances of metric.Result return Value as a distribution structure that contains these fields:
BinCounts — The number of test cases in each bin, returned as an integer vector.
BinEdges — Bin edges for the number of requirements linked to each test case, returned as an integer vector. BinEdges(1) is the left edge of the first bin, and BinEdges(end) is the right edge of the last bin. The length of BinEdges is one more than the length of BinCounts.
The bins in the result of this metric correspond to the bins 0, 1, 2, 3, and >3 in the Requirements per Test widget.
Compliance Thresholds
This metric does not have predefined thresholds. Consequently, this metric appears when you click Uncategorized in the Overlays section of the toolstrip.
Capabilities and Limitations
The metric:
Analyzes only test cases in the project that test:
Unit models
Atomic subsystems
Atomic subsystem references
Atomic Stateflow charts
Atomic MATLAB Function blocks
Referenced models
Counts only links where the Type is set to Verifies that link to requirements where the Type is set to Functional. This includes links to requirements that are not linked to the unit or are linked to other units. For each test case that is linked to requirements, check that the links are to requirements that are implemented by the unit that the test case runs on.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Test case type
Metric ID: TestCaseType
Return the type of the test case.
Description
This metric returns the type of the test case. A test case is either a baseline, equivalence, or simulation test.
Baseline tests compare outputs from a simulation to expected results stored as baseline data.
Equivalence tests compare the outputs from two different simulations. Simulations can run in different modes, such as normal simulation and software-in-the-loop.
Simulation tests run the system under test and capture simulation data. If the system under test contains blocks that verify simulation, such as Test Sequence and Test Assessment blocks, the pass/fail results are reflected in the simulation test results.
To collect data for this metric:
In the Model Testing Dashboard, click a widget in the section Tests by Type to display the results in a table.
Use getMetrics with the metric identifier, TestCaseType.
Collecting data for this metric loads the model file and test files and requires a Simulink Test license.
Results
For this metric, instances of metric.Result return Value as one of these integer outputs:
0 — Simulation test
1 — Baseline test
2 — Equivalence test
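For example, a minimal sketch that maps the integer outputs to readable labels, assuming the metric was collected with a metric.Engine object and that each result exposes its test case through the Artifacts property:

    results = getMetrics(metric_engine, "TestCaseType");
    typeNames = ["Simulation" "Baseline" "Equivalence"];   % values 0, 1, 2
    for n = 1:numel(results)
        fprintf("%s: %s test\n", results(n).Artifacts(1).Name, ...
            typeNames(results(n).Value + 1));
    end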
Capabilities and Limitations
The metric includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Test case type distribution
Metric ID: TestCaseTypeDistribution
Distribution of the types of the test cases for the unit.
Description
This metric returns a distribution of the types of test cases that run on the unit. A test case is either a baseline, equivalence, or simulation test. Use this metric to determine if there is a disproportionate number of test cases of one type.
Baseline tests compare outputs from a simulation to expected results stored as baseline data.
Equivalence tests compare the outputs from two different simulations. Simulations can run in different modes, such as normal simulation and software-in-the-loop.
Simulation tests run the system under test and capture simulation data. If the system under test contains blocks that verify simulation, such as Test Sequence and Test Assessment blocks, the pass/fail results are reflected in the simulation test results.
This metric returns the result as a distribution of the results of the Test case type metric.
To collect data for this metric:
In the Model Testing Dashboard, view the Tests by Type widget.
Use getMetrics with the metric identifier, TestCaseTypeDistribution.
Collecting data for this metric loads the model file and requires a Simulink Test license.
Results
For this metric, instances of metric.Result return Value as a distribution structure that contains these fields:
BinCounts — The number of test cases in each bin, returned as an integer vector.
BinEdges — The outputs of the Test case type metric, returned as an integer vector. The integer outputs represent the three test case types:
0 — Simulation test
1 — Baseline test
2 — Equivalence test
Compliance Thresholds
This metric does not have predefined thresholds. Consequently, this metric appears when you click Uncategorized in the Overlays section of the toolstrip.
Capabilities and Limitations
The metric includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Test case tag
Metric ID: TestCaseTag
Return the tags for a test case.
Description
This metric returns the tags for a test case. You can add custom tags to a test case by using the Test Manager.
To collect data for this metric:
In the Model Testing Dashboard, click a widget in the Tests with Tag section to display the results in a table.
Use getMetrics with the metric identifier, TestCaseTag.
Collecting data for this metric loads the model file and test files and requires a Simulink Test license.
Results
For this metric, instances of metric.Result return Value as a string.
Capabilities and Limitations
The metric includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Test case tag distribution
Metric ID: TestCaseTagDistribution
Distribution of the tags of the test cases for the unit.
Description
This metric returns a distribution of the tags on the test cases that run on the unit. For a test case, you can specify custom tags in a comma-separated list in the Test Manager. Use this metric to determine if there is a disproportionate number of test cases that have a particular tag.
This metric returns the result as a distribution of the results of the Test case tag metric.
To collect data for this metric:
In the Model Testing Dashboard, view the Tests with Tag widget.
Use getMetrics with the metric identifier, TestCaseTagDistribution.
Collecting data for this metric loads the model file and requires a Simulink Test license.
Results
For this metric, instances of metric.Result return Value as a distribution structure that contains these fields:
BinCounts — The number of test cases in each bin, returned as an integer vector.
BinEdges — The bin edges for the tags that are specified for the test cases, returned as a string array.
Compliance Thresholds
This metric does not have predefined thresholds. Consequently, this metric appears when you click Uncategorized in the Overlays section of the toolstrip.
Capabilities and Limitations
The metric includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Test case status
Metric ID: TestCaseStatus
Return the status of the test case result.
Description
This metric returns the status of the test case result. A test status is passed, failed, disabled, or untested.
To collect data for this metric:
In the Model Testing Dashboard, click a widget in the Model Test Status section to display the results in a table.
Use getMetrics with the metric identifier, TestCaseStatus.
Collecting data for this metric loads the model file and test result files and requires a Simulink Test license.
Results
For this metric, instances of metric.Result return Value as one of these integer outputs:
0 — The test case failed.
1 — The test case passed.
2 — The test case was disabled.
3 — The test case was not run (untested).
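For example, a minimal sketch that tallies the statuses across the unit tests, assuming the metric was collected with a metric.Engine object:

    results = getMetrics(metric_engine, "TestCaseStatus");
    values = [results.Value];
    fprintf("Failed:   %d\n", nnz(values == 0));
    fprintf("Passed:   %d\n", nnz(values == 1));
    fprintf("Disabled: %d\n", nnz(values == 2));
    fprintf("Untested: %d\n", nnz(values == 3));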
Capabilities and Limitations
The metric:
Includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.
Does not count the status of test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode. The metric shows these test cases as untested.
Reflects the status of the whole test case if the test case includes multiple iterations.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Test case status percentage
Metric ID: TestCaseStatusPercentage
Calculate the fraction of test cases that passed.
Description
This metric counts the fraction of test cases that passed in the test results.
This metric calculates the results by using the results of the Test case status metric.
To collect data for this metric:
In the Model Testing Dashboard, in the Model Test Status section, place your cursor over the Passed widget.
Use getMetrics with the metric identifier, TestCaseStatusPercentage.
Collecting data for this metric loads the model file and requires a Simulink Test license.
Results
For this metric, instances of metric.Result return Value as a fraction structure that contains these fields:
Numerator — The number of test cases that passed.
Denominator — The total number of test cases that test the unit.
Compliance Thresholds
The default compliance thresholds for this metric are:
Compliant — 100% of test cases passed
Non-Compliant — Less than 100% of test cases passed
Warning — None
Capabilities and Limitations
The metric:
Includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.
Does not count the status of test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode. The metric shows these test cases as untested.
Reflects the status of the whole test case if the test case includes multiple iterations.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Test case status distribution
Metric ID: TestCaseStatusDistribution
Distribution of the statuses of the test case results for the unit.
Description
This metric returns a distribution of the status of the results of test cases that run on the unit. A test status is passed, failed, disabled, or untested.
This metric returns the result as a distribution of the results of the Test case status metric.
To collect data for this metric:
In the Model Testing Dashboard, use the widgets in the Model Test Status section to see the results.
Use getMetrics with the metric identifier, TestCaseStatusDistribution.
Collecting data for this metric loads the model file and requires a Simulink Test license.
Results
For this metric, instances of metric.Result return Value as a distribution structure that contains these fields:
BinCounts — The number of test cases in each bin, returned as an integer vector.
BinEdges — The outputs of the Test case status metric, returned as an integer vector. The integer outputs represent the test result statuses:
0 — The test case failed.
1 — The test case passed.
2 — The test case was disabled.
3 — The test case was not run (untested).
Compliance Thresholds
The default compliance thresholds for this metric are:
Compliant — 0 unit tests are untested, 0 unit tests failed, and 0 unit tests are disabled
Non-Compliant — 1 or more unit tests are untested, disabled, or have failed
Warning — None
Capabilities and Limitations
The metric:
Includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.
Does not count the status of test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode. The metric shows these test cases as untested.
Reflects the status of the whole test case if the test case includes multiple iterations.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Test case includes pass/fail criteria
Metric ID: TestCaseVerificationStatus
This metric determines whether the test case has pass/fail criteria such as verify statements, verification blocks, custom criteria, and logical or temporal assessments.
Description
Use this metric to determine whether a test case has pass/fail criteria.
A test case has pass/fail criteria if it includes at least one of the following:
An executed verify statement
An executed temporal or logical assessment
Custom criteria that have a pass/fail status in the Simulink Test Manager
Baseline criteria that determine the pass/fail status of the test case
To collect data for this metric:
In the Model Testing Dashboard, in the Model Test Status section, click the Inconclusive widget to view the TestCaseVerificationStatus results in a table.
Use getMetrics with the metric identifier, TestCaseVerificationStatus.
Collecting data for this metric loads the model file and test result files and requires a Simulink Test license.
Results
For this metric, instances of metric.Result return Value as one of these integer outputs:
0 — The test case is missing pass/fail criteria.
1 — The test case has pass/fail criteria.
2 — The test case was not run.
Capabilities and Limitations
The metric:
Includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.
Does not count the pass/fail criteria of test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode. The metric shows these test cases as Missing Pass/Fail Criteria.
Reflects the status of the whole test case if the test case includes multiple iterations.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Test case includes pass/fail criteria distribution
Metric ID: TestCaseVerificationStatusDistribution
Distribution of the number of test cases that do not have pass/fail criteria compared to the number of test cases that do have pass/fail criteria.
Description
Use this metric to count the number of test cases that do not have pass/fail criteria and the number of test cases that do have pass/fail criteria.
A test case has pass/fail criteria if it includes at least one of the following:
An executed verify statement
An executed temporal or logical assessment
Custom criteria that have a pass/fail status in the Simulink Test Manager
Baseline criteria that determine the pass/fail status of the test case
This metric returns the result as a distribution of the results of the TestCaseVerificationStatus metric.
To collect data for this metric:
In the Model Testing Dashboard, in the Model Test Status section, place your cursor over the Inconclusive widget.
Use getMetrics with the metric identifier, TestCaseVerificationStatusDistribution.
Collecting data for this metric loads the model file and test files and requires a Simulink Test license.
Results
For this metric, instances of metric.Result return Value as a distribution structure that contains these fields:
BinCounts — The number of test cases in each bin, returned as an integer vector.
BinEdges — The outputs of the TestCaseVerificationStatus metric, returned as an integer vector. The integer outputs represent the three test case verification statuses:
0 — The test case is missing pass/fail criteria.
1 — The test case has pass/fail criteria.
2 — The test case was not run.
Compliance Thresholds
The default compliance thresholds for this metric are:
Compliant — 0 unit tests are missing pass/fail criteria
Non-Compliant — 1 or more unit tests do not have pass/fail criteria
Warning — None
Capabilities and Limitations
The metric:
Includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.
Does not count the pass/fail criteria of test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode. The metric shows these test cases as Missing Pass/Fail Criteria.
Reflects the status of the whole test case if the test case includes multiple iterations.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Justified and achieved execution coverage
Metric ID: ExecutionCoverageBreakdown
Model execution coverage achieved by test cases and justifications.
Description
This metric returns the model execution coverage measured in the test results. The metric result includes the percentage of execution coverage achieved by the test cases, the percentage of coverage justified in coverage filters, and the percentage of execution coverage missed by the tests.
To collect data for this metric:
In the Model Testing Dashboard, in the Model Coverage section, place your cursor over the bars in the Execution widget.
Use getMetrics with the metric identifier, ExecutionCoverageBreakdown.
Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage™ license.
Results
For this metric, instances of metric.Result return Value as a double vector that contains these elements:
Value(1) — The percentage of execution coverage achieved by the tests.
Value(2) — The percentage of execution coverage justified by coverage filters.
Value(3) — The percentage of execution coverage missed by the tests.
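For example, a minimal sketch that unpacks the three elements, assuming the metric was collected with a metric.Engine object and returns one result for the unit:

    result = getMetrics(metric_engine, "ExecutionCoverageBreakdown");
    cov = result(1).Value;
    fprintf("Execution coverage achieved:  %.1f%%\n", cov(1));
    fprintf("Execution coverage justified: %.1f%%\n", cov(2));
    fprintf("Execution coverage missed:    %.1f%%\n", cov(3));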
Compliance Thresholds
The default compliance thresholds for this metric are:
Compliant — Test results return 0% missed coverage
Non-Compliant — Test results return missed coverage
Warning — None
Capabilities and Limitations
The metric:
Returns aggregated coverage results.
Does not include coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.
Returns 100% coverage for models that do not have execution points.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Justified and achieved condition coverage
Metric ID: ConditionCoverageBreakdown
Model condition coverage achieved by test cases and justifications.
Description
This metric returns the model condition coverage measured in the test results. The metric result includes the percentage of condition coverage achieved by the test cases, the percentage of coverage justified in coverage filters, and the percentage of condition coverage missed by the tests.
To collect data for this metric:
In the Model Testing Dashboard, in the Model Coverage section, place your cursor over the bars in the Condition widget.
Use getMetrics with the metric identifier, ConditionCoverageBreakdown.
Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.
Results
For this metric, instances of metric.Result return Value as a double vector that contains these elements:
Value(1) — The percentage of condition coverage achieved by the tests.
Value(2) — The percentage of condition coverage justified by coverage filters.
Value(3) — The percentage of condition coverage missed by the tests.
Compliance Thresholds
The default compliance thresholds for this metric are:
Compliant — Test results return 0% missed coverage
Non-Compliant — Test results return missed coverage
Warning — None
Capabilities and Limitations
The metric:
Returns aggregated coverage results.
Does not include coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.
Returns 100% coverage for models that do not have condition points.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Justified and achieved decision coverage
Metric ID: DecisionCoverageBreakdown
Model decision coverage achieved by test cases and justifications.
Description
This metric returns the model decision coverage measured in the test results. The metric result includes the percentage of decision coverage achieved by the test cases, the percentage of coverage justified in coverage filters, and the percentage of decision coverage missed by the tests.
To collect data for this metric:
In the Model Testing Dashboard, in the Model Coverage section, place your cursor over the bars in the Decision widget.
Use getMetrics with the metric identifier, DecisionCoverageBreakdown.
Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.
Results
For this metric, instances of metric.Result return Value as a double vector that contains these elements:
Value(1) — The percentage of decision coverage achieved by the tests.
Value(2) — The percentage of decision coverage justified by coverage filters.
Value(3) — The percentage of decision coverage missed by the tests.
Compliance Thresholds
The default compliance thresholds for this metric are:
Compliant — Test results return 0% missed coverage
Non-Compliant — Test results return missed coverage
Warning — None
Capabilities and Limitations
The metric:
Returns aggregated coverage results.
Does not include coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.
Returns 100% coverage for models that do not have decision points.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
Justified and achieved MC/DC coverage
Metric ID: MCDCCoverageBreakdown
Model modified condition and decision (MCDC) coverage achieved by test cases and justifications.
Description
This metric returns the modified condition and decision coverage (MCDC) measured in the test results. The metric result includes the percentage of MCDC coverage achieved by the test cases, the percentage of coverage justified in coverage filters, and the percentage of MCDC coverage missed by the tests.
To collect data for this metric:
In the Model Testing Dashboard, in the Model Coverage section, place your cursor over the bars in the MC/DC widget.
Use getMetrics with the metric identifier, MCDCCoverageBreakdown.
Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.
Results
For this metric, instances of metric.Result return Value as a double vector that contains these elements:
Value(1) — The percentage of MCDC coverage achieved by the tests.
Value(2) — The percentage of MCDC coverage justified by coverage filters.
Value(3) — The percentage of MCDC coverage missed by the tests.
Compliance Thresholds
The default compliance thresholds for this metric are:
Compliant — Test results return 0% missed coverage
Non-Compliant — Test results return missed coverage
Warning — None
Capabilities and Limitations
The metric:
Returns aggregated coverage results.
Does not include coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.
Returns 100% coverage for models that do not have condition/decision points.
See Also
For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.