
Trust Gap: Certificate-Signed Malware

“Disorder increases with time because we measure time in the direction in which disorder increases.” -- Stephen Hawking, A Brief History of Time

F-Secure researcher Jarno Niemelä recently released a presentation on the increasing tendency of malware authors to sign their software with digital certificates. In the presentation, Niemelä describes a number of methods malware writers use to produce and apply signatures, the implications of those signatures, and the value, insight, or warnings they can provide to defenders. I’m thankful for Niemelä’s perspective, but thought it worthwhile to dive a little deeper into some of the subtexts involved and lend more context to F-Secure’s work, as well as our own brief coverage in the CRR.

Trusting Trust

Information security is built on the shoulders of giants, and despite all the advances in technology, procedures, and capabilities, there are some very big ideas that remain true and fundamental to our contemporary efforts. I remember vividly the first time I heard of, and then read, Ken Thompson’s “Reflections on Trusting Trust”. This paper had a profound impact on my decision to pursue information assurance as a career path, and as this recent presentation from F-Secure shows, Thompson’s work is as applicable as ever.

While Thompson was specifically discussing a self-propagating backdoor that could be embedded in the C compiler, the programmatic examples in his paper served as illustrations of his broader point:

“The moral is obvious. You can’t trust code that you did not totally create yourself.”

Or, to paraphrase somewhat more generally: “You can’t trust anything that you did not totally create yourself.” This can very quickly descend into the absurdly paranoid, but Thompson states clearly that the problem is not limited to compilers, and extends even to hardware (and beyond):

“I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect.”

What Niemelä and others have seen in the malware world is a type of trusting trust. As vendors and customers have pushed to restrict the privileges of unsigned code, malicious actors have striven to overcome that technical barrier and muddy the waters of what can and cannot be trusted. At the same time, the branding value of a “secure site,” quite apart from the technical value of link encryption, has driven demand for certificates faster than Certificate Authorities can scale their vetting to handle. The two trends intersect to create a dangerous trust gap, and as security professionals we must understand what it will take to overcome it.

Understanding the Trust Gap

Demand for certificates has increased, and in order to satisfy an expanding and recurring customer base, the Certificate Authorities at the root of the trust model are lowering the barriers to entry for obtaining certificates. Moxie Marlinspike, Dan Kaminsky, and many others have shown that there are technical and procedural errors in the certificate issuance process. And since the SSL trust model treats every bundled root as equally trusted, the CA with the lowest standards sets the bar: any competitor that is “as trusted” as that CA is pressured toward equally low standards of identity proofing in order to compete for the same business.

Meanwhile, vendors are relying upon certificates to assert trustworthiness: if code is signed, the reasoning goes, then it can be trusted implicitly. But if there is a “race to the bottom” in verifying who receives certificates, then this trust is badly misplaced. Niemelä gives several examples, including e-commerce vendors that offer binary redistribution and sign whatever content their customers provide, more or less blind to its purpose.

Signed-content schemes vary. Applications submitted to the Apple App Store, for example, are all signed by a certificate chaining to an Apple root, giving Apple centralized control over signing, validation, and revocation. As trust expands from a narrow web such as the Apple App Store toward the wider web of digital certification of various types (e.g. Microsoft Authenticode’s acceptance of PKCS #7, PKCS #10, or even MD5), risk increases and the effectiveness of signing as a basis for trusting content decreases, until it matches the weakest link within the web of trust. If a simple integrity mechanism such as MD5 is granted the same trust level as a full certificate chain to a single root, the effective value of that trust is greatly diminished.
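To make that weakest-link effect concrete, here is a toy model in Python. The mechanism names and scores are illustrative assumptions of my own, not any vendor’s actual policy; the point is only that a platform’s effective trust is the minimum over everything it accepts.

    # Toy model of weakest-link trust: a platform's effective trust in
    # signed content is bounded by the weakest mechanism it accepts.
    # Mechanism names and scores are illustrative assumptions only.
    TRUST_SCORES = {
        "single_root_chain": 3,    # full chain to one tightly held root
        "any_public_ca_chain": 2,  # chain to any commonly bundled CA
        "self_signed": 1,          # identity asserted but never verified
        "md5_hash_only": 0,        # integrity only; asserts no identity
    }

    def effective_trust(accepted_mechanisms):
        """Effective trust is the minimum score among accepted mechanisms."""
        return min(TRUST_SCORES[m] for m in accepted_mechanisms)

    # A closed store accepting only one root keeps full trust...
    print(effective_trust(["single_root_chain"]))                   # 3
    # ...while also accepting bare MD5 hashes reduces trust to zero.
    print(effective_trust(["single_root_chain", "md5_hash_only"]))  # 0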

Bridging the Trust Gap

There are two key rewards from signed code that can improve security. First, signed code has a signature, and signatures can be used to classify content. If researchers and defenders can identify a set of signatures, or of participants along the signature chain, that should not be trusted, then those signatures backfire on the malware authors’ intended usage. This is exactly how Google and Apple respond to offending content published to their smartphone platforms: they revoke the child certificate associated with software they deem in violation of platform policies. Likewise, administrators who enforce system security through code signing have a few options. For example, they could constrain the set of allowed certificate types and providers, and/or maintain their own revocation authorities to disable signed code that is discovered to be problematic, as the sketch below illustrates.
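As a rough sketch of what such a local policy could look like, the Python snippet below uses the pyca/cryptography package to check a signing certificate’s issuer against an approved set and its fingerprint against a locally maintained revocation list. The issuer name and revoked fingerprint are hypothetical placeholders, and verifying the signature on the binary itself would be a separate step.

    # Sketch of a local signing policy: constrain acceptable issuers and
    # keep a site-local revocation list. Requires the pyca/cryptography
    # package; the issuer and fingerprint values are hypothetical.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes

    # Hypothetical policy: only these issuers may sign code we run.
    APPROVED_ISSUERS = {
        "CN=Example Corp Code Signing CA,O=Example Corp,C=US",
    }
    # Hypothetical local revocation list (SHA-256 cert fingerprints).
    LOCALLY_REVOKED = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def signing_cert_allowed(pem_bytes):
        """Allow a signing certificate only if its issuer is approved
        and its fingerprint is not locally revoked. (Verifying the
        signature on the binary itself is a separate step.)"""
        cert = x509.load_pem_x509_certificate(pem_bytes)
        issuer = cert.issuer.rfc4514_string()
        fingerprint = cert.fingerprint(hashes.SHA256()).hex()
        return issuer in APPROVED_ISSUERS and fingerprint not in LOCALLY_REVOKED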

Second, over the long term, increasing amounts of signed malware might provoke a review of certificate issuance and trust overall. Managing the intricacies of commonly bundled public key infrastructure (e.g. the lists of trusted root certificates included by OS vendors, browser distributions, etc.) is no easy task under the current model. I’d venture that few administrators are comfortable developing their own methods for vetting and assigning trust to each individual root authority or its intermediate child nodes. Let’s face it: this very problem is what has led the CAs themselves down the path of lowered identification standards, creating the situation we now observe. And those CAs do this for a living.
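A first step toward that kind of vetting might be simply enumerating what an OS bundle already asks you to trust. The sketch below walks a concatenated PEM bundle and prints each root’s claimed identity and expiry; the path shown is the Debian/Ubuntu convention and is an assumption to adjust per platform.

    # Audit sketch: enumerate every root in an OS trust bundle so it can
    # be reviewed rather than trusted wholesale. The bundle path is the
    # Debian/Ubuntu convention; adjust for your platform. Requires the
    # pyca/cryptography package.
    from cryptography import x509

    BUNDLE_PATH = "/etc/ssl/certs/ca-certificates.crt"  # assumed location

    with open(BUNDLE_PATH, "rb") as f:
        bundle = f.read()

    # Split the concatenated PEM bundle into individual certificates.
    marker = b"-----BEGIN CERTIFICATE-----"
    pems = [marker + chunk for chunk in bundle.split(marker)[1:]]

    for pem in pems:
        cert = x509.load_pem_x509_certificate(pem)
        # Print who each root claims to be and how long we trust it.
        print(cert.subject.rfc4514_string(), "expires", cert.not_valid_after)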

Conclusion

Overall, Niemelä and others are describing a problem that is emerging underfoot. I’m sure that what I’ve offered here doesn’t even begin to fully describe the issue, let alone fully explain the available solutions. One of the biggest challenges in this scenario is that it is so maddeningly complex, and complexity is no friend of security. Signed code can lead to more effective decisions rooted in trust, but administrators and decision makers must be aware of what it is they are really trusting. As a step in the right direction, these schemes should be recognized for their value. But the presence of malicious code bearing valid certificates should indicate that we haven’t reached the desired goal just yet. And as Hawking said, we measure time in the direction of increasing disorder.
