A while back, I blogged on the topic of Sovereignty and National Security. Since then, much has happened, most notably moves by some governments to require access to source code, on the grounds of national security, before a foreign product can be imported and used in the country. Others have insisted that products be manufactured locally, or that the intellectual know-how behind a product be transferred as a condition of permitting it to be procured. These are variations on a recurring theme: requiring local control to ensure national security and to protect sovereignty against foreign influence.
One cannot deny that governments today face very real security concerns and threats that need to be addressed more adequately. Consumers, too, are rightly worried about the security of their data and personal information, especially as more cloud computing services become available.
Some argue that proprietary products are ‘secretive’, and that they rely on the customers’ faith in the vendor that the products operate securely. Others say that it is much easier for attackers to uncover vulnerabilities when they have access to the source code, rather than trying to compromise a “black-box”.
Who is right? Is the disclosure of source code directly correlated to product security? Is there a better way to ensure security without resorting to excluding the use of foreign manufactured products?
A range of methods for product security evaluation can address public and private sector customer concerns. However, the availability of source code is NOT always important, or even necessary, to provide customers with the assurance that the products they rely on will function as intended.
For instance, all major customers of security technologies, including governments worldwide, rely on comprehensive testing and audit. They verify the characteristics and behavior of security products against an extensive checklist to ensure conformance to specifications and guidelines. These methods of assessment are akin to compliance checking in other fields, e.g. in building construction, to ensure structural, fire or earthquake safety.
Examining how a product responds to attempts at tampering, misuse or other direct attacks – commonly known as vulnerability analysis – can provide important insights into the product’s ability to withstand actual malicious attacks or serious stress. An important method known as “fuzzing” is based on this concept: the product is supplied with a variety of unexpected inputs in the hope of causing it to fail, thus uncovering a bug or vulnerability.
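To make the idea concrete, here is a minimal sketch of fuzzing in Python. The parser being tested is a hypothetical toy (a length-prefixed payload format invented for illustration), not any real product: the fuzzer throws random byte strings at it and records any input that triggers a failure other than the rejections the parser is documented to raise.

```python
import random


def parse_length_prefixed(data: bytes) -> bytes:
    """Toy target (hypothetical): first byte is a length,
    followed by exactly that many payload bytes."""
    if not data:
        raise ValueError("empty input")  # documented rejection
    length = data[0]
    payload = data[1:1 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")  # documented rejection
    return payload


def buggy_parse(data: bytes) -> bytes:
    """Same format, but missing the empty-input check --
    crashes with an undocumented IndexError on b''."""
    length = data[0]
    return data[1:1 + length]


def fuzz(target, trials: int = 1000, max_len: int = 32) -> list:
    """Feed the target random byte strings; collect every input that
    fails with anything other than the expected ValueError."""
    rng = random.Random(0)  # fixed seed so runs are reproducible
    failures = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(data)
        except ValueError:
            pass  # expected, documented rejection -- not a bug
        except Exception:
            failures.append(data)  # unexpected crash: a potential vulnerability
    return failures
```

Note that the fuzzer never inspects the parser's source code; it only observes behavior under hostile input, which is exactly why this kind of evaluation works on a “black-box” product.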
In these evaluations, source code is not required to determine the security of the product. Indeed, having access to source code can give a false sense of security. With publicly available source code, a common assumption is that the code has been reviewed by someone skilled in identifying vulnerabilities through code analysis. In practice, such expertise may not be readily available, or the code may simply never have been reviewed.
However, this is not to say that access to source code is of no value to security evaluations. Where the right skills and motivations are available for code review, it can complement the other security assessment methods. Even then, source code scrutiny alone does not test the robustness of an implementation in the way that “fuzzing” does. Access to source code is by no means the only way security can be assured, nor is having such access any guarantee that backdoors do not exist or that they will be uncovered. A number of popularly-used Internet infrastructure applications, even with their source code published, contained backdoors and security vulnerabilities for many years before they were discovered. It has also been shown that access to source code does not prevent a backdoor from being hidden in software and remaining undetected. In his classic paper “Reflections on Trusting Trust”, Ken Thompson, one of the fathers of UNIX, noted that no amount of source-level verification or scrutiny will protect against untrusted code.
What then is the better alternative?
Certification! This means relying on the evaluation of qualified independent certifying bodies. Vendors submit their products to such trusted agencies for scrutiny by trained experts. Source code may also be submitted to the trusted body for study, depending on the level of security evaluation required. Once satisfied with its assessment of a product’s security, the trusted body issues a certificate stating the level of security against which the product was evaluated.
There are international cross-recognition certification schemes between countries, such as the Common Criteria Recognition Arrangement (CCRA), that allow the certification of products in one country to be relied on in another, and vice-versa. This saves both vendors and customers time and resources by not requiring the evaluation process to be repeated. The underlying premise is, of course, that the bodies and countries trust each other to recognize the certification. This trust can be built up over time and through experience with each other.
There are those who remain skeptical and speculate that back doors exist in foreign products, allowing a third party to eavesdrop on or shut down their networks remotely. With the reverse engineering technology and expertise available today, it is extremely unlikely that a trap door deliberately hidden in a security product would go undetected for any extended period of time. It is equally unrealistic to believe that any well-established commercial entity would risk its reputation, branding and future livelihood by deliberately hiding such a trap door in its products. The fallout from any such trap door coming to light would effectively wipe out any goodwill the vendor may have had with its customers.
Security evaluation of products is a key part of any organization’s thorough assessment of its overall risk exposure. However, this does not mean that vendor source code needs to be disclosed to address this concern, or that parts of the system need to be customized and made non-standard, effectively rendering them non-interoperable with other products. Ultimately, a well-developed, well-designed product implementing leading practices and processes is the most trusted product. Security-related features matter, but much also depends on the customer taking due care – total security depends just as much on how well the software is deployed, configured, updated and maintained, including whether product vulnerabilities are discovered and resolved through timely updates.
The most secure product in the world can be rendered utterly useless if the customer misconfigures it or simply does not activate its features correctly.