


Should we keep our security protocols and algorithms public, or should we not? The debate has been going on for quite some time; it might even have taken place in the Roman Empire, when the Caesar cipher was used to encrypt Julius Caesar’s messages. For a long time, the norm has been to publish new security methods openly so that they receive academic and public scrutiny and, in a sense, prove themselves.

There are probably very few people who would argue that a new security protocol or algorithm should be kept secret. Their main argument would likely be that if a security scheme is kept secret, it is tougher for adversaries to reverse engineer or “beat” it. There are several responses to that argument. First, people with knowledge of a secret algorithm can reveal or leak that information, which jeopardizes the scheme anyway; you cannot trust that a secret will always stay a secret. Second, with enough analysis and effort, any scheme’s operation can be exposed. For example, if the ISAKMP key exchange had been kept secret, a cryptanalyst capturing the exchange would eventually work out what types of messages are needed for it to take place. Worse, deploying a scheme that has been kept private, without it first being studied carefully for flaws, opens a whole new can of worms. To stay with the ISAKMP example: what if ISAKMP did not use Diffie-Hellman key exchange but some other, insecure method? Then the moment the key exchange messages were revealed, the whole scheme would fail. In other words, we want security protocols to be tested, studied, and attacked, because that increases our confidence that they are secure as designed and surfaces their potential flaws. It is a form of “penetration testing” and “security assessment,” so to speak.
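This line of thinking is often summarized as Kerckhoffs’s principle: a scheme should remain secure even when everything about it except the keys is public. As a rough illustration, here is a toy Python sketch of a Diffie-Hellman exchange (textbook-sized numbers chosen only for readability, nothing like a real deployment) showing where the secrecy actually lives:

```python
# Toy Diffie-Hellman exchange with textbook-sized numbers, for illustration only.
# The algorithm and the public values (p, g, A, B) are visible to everyone;
# security rests entirely on the secret exponents a and b, not on hiding the method.
import secrets

p, g = 23, 5                      # toy public parameters; real groups use primes of 2048+ bits

a = secrets.randbelow(p - 2) + 1  # Alice's private exponent (kept secret)
b = secrets.randbelow(p - 2) + 1  # Bob's private exponent (kept secret)

A = pow(g, a, p)                  # Alice's public value, sent in the clear
B = pow(g, b, p)                  # Bob's public value, sent in the clear

# Both sides derive the same shared secret without ever transmitting a or b.
assert pow(B, a, p) == pow(A, b, p)
print("shared secret:", pow(B, a, p))
```

Publishing p, g, and the exchanged public values costs nothing; only the private exponents need protection.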

Security mechanisms that are not published today are probably trade secrets, or are kept unpublished to make them easier to export under export regulations. There probably are military schemes that remain classified, but even then the reason would not be to make them harder to break, but rather to prevent enemies from using them. In the past there have been all kinds of conspiracy theories about the government being able to break widely accepted algorithms. For example, DES, which uses only 56-bit keys, was believed by some two decades ago to have been chosen so that the government could brute-force it with its “infinite resources.” Some even believed that RSA itself had a “backdoor” that let the government decrypt messages without knowing any private key. These were merely theories, and over the years we have seen no evidence to support them. The fact that, after all these years, so many talented, educated, and intelligent people have not been able to “break” schemes like RSA strongly suggests that these conspiracy theories do not hold.
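To put that 56-bit key length in perspective, a quick back-of-the-envelope calculation shows how small the DES key space is by modern standards. The trial rate below is an assumed, illustrative figure, not a claim about any real attacker’s hardware:

```python
# Back-of-the-envelope arithmetic on the DES key space, purely illustrative.
# The trial rate is an assumed figure, not a measurement of real hardware.
keyspace = 2 ** 56                 # number of possible 56-bit DES keys
assumed_rate = 10 ** 10            # hypothetical keys tested per second
seconds = keyspace / assumed_rate  # time to try every key at that rate
print(f"{keyspace:,} keys; ~{seconds / 86400:.0f} days to exhaust at {assumed_rate:,} keys/sec")
```

At that assumed rate the entire key space falls in a matter of months, and on average a key is found in half that time, which is why key length rather than algorithm secrecy is what matters.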

The process followed for new cryptographic algorithms nowadays is that an agency (e.g., NIST) puts out a call for proposals for a specific kind of scheme (e.g., a hash function). Participants submit their proposals, and after proper investigation and study, the finalists are chosen. With further work and open comments from the community, one algorithm is selected and later becomes a standard. That is how NIST selected the AES and SHA-3 standards. There are also cases where the research community introduces a scheme that proves itself useful and later gains the stamp of the official organizations (NSA, NIST). An example is Elliptic Curve Cryptography (ECC), which became part of NSA’s Suite B. To be more accurate, ECC started in the academic community but was later patented in the private sector; NSA subsequently licensed the rights, and ECC is now used in Suite B. There are also cases where private-sector research produced a scheme to solve an industry problem that was later adopted (when the private sector gave up the rights). Examples are some VPN solutions, and Galois/Counter Mode (GCM), now used in IEEE 802.1AE. The common ground in all of the above cases is that the schemes received extensive review before being widely accepted.
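The payoff of all that public review is that these algorithms are available to everyone in well-tested libraries. As a small sketch, authenticated encryption with AES-GCM can be exercised with the third-party Python `cryptography` package (the key, nonce, plaintext, and header below are placeholder values for illustration):

```python
# Minimal AES-GCM sketch using the third-party `cryptography` package.
# Key, nonce, plaintext, and header are placeholder values for illustration.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # random 256-bit key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce; must never repeat under the same key

ciphertext = aesgcm.encrypt(nonce, b"example plaintext", b"example header")
assert aesgcm.decrypt(nonce, ciphertext, b"example header") == b"example plaintext"
```

Nothing about the algorithm is hidden here; the only secret is the key, and the scheme has been scrutinized publicly for years.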

To summarize, security mechanisms will keep being invented and improved, but keeping them secret does not contribute to their security. One can imagine dozens of fiascoes involving widely deployed systems that were kept in the dark, thought safe by their inventors, until the day their inner workings leaked and the “world ended.” What one person considers secure, someone else may be able to prove insecure. Open study of security protocols and algorithms by researchers will always be the best way to build secure systems. That is how it has been, and that is how we expect it to remain.



