In February 2014, technology blogs and news outlets were abuzz about a newly discovered vulnerability affecting Apple’s iOS devices (iPhone, iPod touch, and iPad) and Mac OS X computers. A defect in the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) code could be exploited in what is known as a man-in-the-middle (MitM) attack. The vulnerability was dubbed Goto Fail, and Apple quickly patched the defect with iOS 7.0.6 for its mobile platform and OS X 10.9.2 for the desktop platform.
Although this software defect was discovered in mobile and IT systems, we feel that it offers important lessons for embedded software engineers who develop software for critical systems such as automobile ECUs, avionics, and medical devices.
Here, we introduce you to the software vulnerability, describe how it escaped into production, and outline how static analysis tools such as Polyspace® products can help engineers prevent this type of error in the future.
The Anatomy of the Defect
This software defect permits an attacker to act as a proxy and intercept, view, and modify data transferred between you and the intended recipient or website. For example, the attacker could discover passwords, account numbers, and other sensitive information. It’s also possible for the attacker to mimic a trusted website in order to install malware on your device. Reports cite that MitM attacks typically occur when connecting to the internet via unprotected WiFi networks.
The defect originated in the TLS/SSL library, in the source file sslKeyExchange.c. The following screenshot shows the stripped code snippet that represents the defect:
As shown in the code snippet above, all of the SSLHashSHA1 function calls check their return value against zero. If the return value is not zero, the routine jumps via a goto statement to the label fail, which exits with the error code returned by the prior function call.
The problem with this code is that the goto fail statement is repeated, and the second copy executes unconditionally whenever the preceding check passes. This means that any code after the second goto fail will not be executed; in other words, it is unreachable, dead code. Therefore the function SSLHashSHA1.final() is never called, and the return code err is still zero from the last successful call, so the function never reports an error.
This scenario provides a set of conditions that can be exploited for an attack.
Software that is used in critical systems is tested extensively, and the verification process usually also includes a thorough code review. In this case, however, a bug still slipped into production code.
How Can We Avoid Allowing Bugs into Production?
To answer this question, we can turn to the lessons taught by Edsger W. Dijkstra3, a founding figure of computer science and a leading contributor to the founding principles of programming languages. In a world where software is in everything, his guiding principles are at the core of the development of contemporary critical embedded systems that directly affect our safety and security.
For the purpose of this discussion, let us consider several of Dijkstra’s observations about software verification and validation. He states:
- The quality of programmers is a decreasing function of the density of goto statements in the programs they produce.
- Testing shows the presence, not the absence, of bugs.4
In theory, performing code review and executing the right set of test cases can catch most code defects. In practice, however, the challenge lies in finding a balance between the time spent reviewing code and the time spent applying enough tests to find all errors. Even for the simplest operations, such as adding two 32-bit integers, exhaustive testing would take an unrealistic amount of time.
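To put a number on that, here is a quick back-of-the-envelope calculation (the one-billion-tests-per-second rate is our assumption, chosen to be optimistic):

```python
# Exhaustively testing a 32-bit + 32-bit addition means trying every
# pair of operands: 2**32 values for each input.
pairs = 2**32 * 2**32              # 2**64, about 1.8e19 combinations
rate = 10**9                       # assumed: one billion tests per second
years = pairs / rate / (60 * 60 * 24 * 365)
print(f"{pairs} pairs -> about {years:.0f} years at {rate:.0e} tests/s")
```

Even at this optimistic rate, the exhaustive test of one trivial operation would run for roughly 585 years.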
Given the limitations of code review and testing, how do we apply Dijkstra’s principles to develop secure embedded systems?
Turning to Static Code Analysis
Static code analysis is a software verification activity where source code is analyzed for quality and reliability. Metrics produced by static code analysis provide a means by which software quality can be measured and improved.
In contrast to other verification techniques, static code analysis is automated, and can therefore be done without executing the program or developing test cases. Static code analysis involves finding bugs, checking compliance to standards, and applying formal methods to prove that the software will not fail with a run-time error. Polyspace Bug Finder™ and Polyspace Code Prover™ are examples of static analysis products from MathWorks.
You can adhere to Dijkstra’s guidance about limiting goto statements by following the MISRA (Motor Industry Software Reliability Association) C standard: Rule 14.4 of MISRA C:2004 stipulates that “the goto statement shall not be used,” and Rule 15.1 of MISRA C:2012 advises the same.
The screenshot below shows how Polyspace Bug Finder identifies the use of goto statements in C code during the code review phase. The MISRA report gives the development team an opportunity to discuss the violation of the standard: the merits of using goto can be weighed with the software developer, and the team is made aware of the risks of this statement.
Better Revealing the Presence of Bugs
How can we use static analysis to comply with Dijkstra’s statements regarding the way in which testing shows the presence of bugs, but not their absence?
Clearly, in the case of the Goto Fail bug, testing did not detect the defect. Polyspace Bug Finder can help you detect the dead code that results from the spurious goto fail statement. The screenshot below shows the results on the code snippet. In this case, Polyspace Bug Finder identifies the dead if statement that encapsulates the function call to SSLHashSHA1.final().
The Goto Fail bug is a fairly straightforward example. What about something more complicated, something that requires following the data flow to detect? For that we can use Polyspace Code Prover.
Let’s look at the example above. Polyspace Code Prover has flagged three places in this code as not green (not proven reliable). The while statement is flagged red, because Polyspace has determined that the loop may not terminate before a run-time error occurs. That run-time error is a divide-by-zero on line 13, which is why the division operator is flagged orange. But most relevant to our Goto Fail discussion is line 17, flagged gray as unreachable code. Why?
Because Polyspace can follow the data flow, it knows that the if statement on line 16 will always evaluate to false: if count ever did equal zero, a run-time error would already have occurred on line 13, before the program reaches line 16. And Polyspace Code Prover can detect all this without requiring program execution, code instrumentation, or test cases.
Takeaways and Lessons Learned
The Apple Goto Fail bug is a teaching moment for embedded software engineers who develop and maintain critical automotive ECUs, avionics, and medical devices. It highlights the need to augment code review and testing with static code analysis. By using static analysis, you add a layer of protection against releasing buggy software to production. Guided by the principles outlined by Edsger W. Dijkstra, we can turn to static analysis tools such as Polyspace products to scrutinize critical software components, find bugs, and help ensure that no new bugs creep into the code.
Code review involves line-by-line manual inspection of the source code, and detecting subtle bugs this way can be difficult. For example, an overflow caused by complex mathematical operations under programmatic control can be missed, as can, in Apple’s case, a repeated goto statement. Since the review process depends on human interpretation, results can vary from team to team.
Dynamic testing is used to verify the execution flow of software. This process involves creating test cases and test vectors and executing the software against them; test teams then compare the results with the expected behavior. Dynamic testing is well suited to finding design errors, as test cases often match functional requirements. As Edsger W. Dijkstra noted, testing can show the presence of errors, but not their absence3.