Security through Obscurity

The other day at the IET Secure Mobile conference in London, Steve Babbage, Vodafone’s Group Chief Cryptographer (great job title), gave the keynote, and I was fortunate to speak to him afterwards about his ideas. One interesting area was “security through obscurity”: he maintained that in some situations it makes sense to make an attacker’s job as difficult as possible through the use of secret algorithms. I hope I can do the argument justice here.

The world has changed, and these days governments generally do not try to dictate what crypto gets used in commercial mobile networks. However, when GSM was born, 40-bit encryption was a (rather weak) standard that governments agreed should be used. In that environment, Steve Babbage maintains, the cellcos would have been mad to release all the details of the algorithm to the public, since the added obscurity made an attacker’s job harder still. Consider SIM attacks, where the attacker has physical access to the SIM and mounts a so-called “side-channel attack”: sometimes the attacker can learn something about the secret key by measuring the power usage of the chip under attack. If the algorithm is secret, however, it becomes very hard for the attacker to map power fluctuations against a model, since all they have is a seemingly patternless output from an engine of unknown design.
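To illustrate the kind of attack Babbage has in mind, here is a toy power-analysis sketch in Python. Everything in it is hypothetical: a made-up substitution box, simulated Hamming-weight leakage with Gaussian noise, and Pearson correlation stand in for real hardware traces. The point is that the attacker’s key guess on the last line only works because `SBOX` is known to the attacker; if the substitution were secret, there would be no model to correlate the power traces against.

```python
import random

rng = random.Random(1)
SBOX = list(range(256))
rng.shuffle(SBOX)  # toy substitution box, NOT the AES S-box

SECRET_KEY = 0x4B  # the key byte the attacker wants to recover

def hamming_weight(x):
    return bin(x).count("1")

def leak(pt):
    # The device "leaks" the Hamming weight of the S-box output,
    # plus measurement noise -- a standard power-analysis model.
    return hamming_weight(SBOX[pt ^ SECRET_KEY]) + rng.gauss(0, 0.5)

def correlate(xs, ys):
    # Pearson correlation coefficient between two equal-length lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

# Attacker observes plaintexts and the corresponding power traces.
plaintexts = [rng.randrange(256) for _ in range(1000)]
traces = [leak(p) for p in plaintexts]

# For each key guess, build a leakage hypothesis and correlate it with
# the traces; the correct guess correlates best. Note the hypothesis
# requires knowing SBOX -- the public algorithm.
best = max(range(256), key=lambda g: correlate(
    [hamming_weight(SBOX[p ^ g]) for p in plaintexts], traces))
```

With a thousand simulated traces the correct key byte stands out clearly; with a secret S-box the `hamming_weight(SBOX[p ^ g])` hypothesis simply cannot be formed.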

The use of secret algorithms is generally thought of these days as a “bad thing”, since openly publishing an algorithm means that academics and researchers can test it to death and publish any vulnerabilities they find. This should result in better algorithms and fewer defects in the long term. Babbage doesn’t argue in favour of “cobbling something together in secret”; rather, he is saying that if you take a proven good thing like AES/Rijndael and add a further secret component to it, the intellectual rigour is still there, with an added component to defeat foes.
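One way to read Babbage’s suggestion is as key whitening in the style of DESX: XOR secret pre- and post-whitening keys around a trusted public cipher. The sketch below is my own illustration, not anything Vodafone actually does, and a small SHA-256-based Feistel network stands in for AES so the example stays self-contained. The construction matters: anyone who can break the whitened scheme can strip `k1` and `k2` and break the underlying cipher, so the public cipher’s strength is provably retained.

```python
import hashlib

def _prf(key: bytes, data: bytes) -> bytes:
    # Keyed pseudorandom function used as the Feistel round function.
    return hashlib.sha256(key + data).digest()[:8]

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(key: bytes, block: bytes) -> bytes:
    # Toy 4-round Feistel cipher on 16-byte blocks: a stand-in for the
    # trusted public cipher (AES in Babbage's argument).
    l, r = block[:8], block[8:]
    for i in range(4):
        l, r = r, _xor(l, _prf(key + bytes([i]), r))
    return l + r

def feistel_decrypt(key: bytes, block: bytes) -> bytes:
    l, r = block[:8], block[8:]
    for i in reversed(range(4)):
        l, r = _xor(r, _prf(key + bytes([i]), l)), l
    return l + r

def whitened_encrypt(key: bytes, k1: bytes, k2: bytes, block: bytes) -> bytes:
    # DESX-style whitening: secret components k1, k2 wrapped around the
    # public cipher. An attacker who breaks this can XOR away k1/k2 and
    # break the base cipher, so no strength is lost.
    return _xor(feistel_encrypt(key, _xor(block, k1)), k2)

def whitened_decrypt(key: bytes, k1: bytes, k2: bytes, ct: bytes) -> bytes:
    return _xor(feistel_decrypt(key, _xor(ct, k2)), k1)
```

Keeping `k1` and `k2` (or the wrapping itself) secret is the “obscurity” layer; the vetted cipher inside provides the real cryptographic strength.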

What do you think about security via obscurity?

4 thoughts on “Security through Obscurity”

  1. Dustin D. Trammell

    Unfortunately, adding anything to a cryptosystem after the crypto community has applied its intellectual rigour to vetting it can weaken the system or create new vulnerabilities. At the end of the day, you would still have a component of your cryptosystem that lacks the peer review and testing the rest of the system has attained, making it potentially less secure. Quite often, differences in implementation of the same algorithm, or even different uses of the same implementation, can introduce vulnerabilities that were nonexistent in the original vetted algorithm.

    Essentially it comes down to balancing how much of your cryptosystem as a whole is public and has been extensively reviewed and tested, and how much of it is not. It may be worth the potential insecurity for a component of your cryptosystem to be private if it’s a very small part of the system as a whole; however, the more of your system that is not public and has not received extensive review, the greater the potential for insecurity. As usual with security, it’s a trade-off, and it becomes a question of how comfortable you are with the risk that trade-off creates.

    In my opinion, the hurdle that you’re throwing in front of an attacker via obscurity probably isn’t worth as much as the real security you would achieve by having the entire cryptosystem publicly peer-reviewed. Keep in mind that my opinion here is very specific to cryptosystems; there are cases with other technologies where I believe you can successfully create a hurdle of obscurity without impacting the actual security of the technology itself. In cases like those, it’s a no-brainer to add the obscurity as well.

  2. Pingback: Vulnerability Disclosure, Cryptography Research, and Open Source « Dustin D. Trammell

  3. Steve Babbage

    I only just discovered this blog ….

    I certainly agree with Dustin that meddling with a well-trusted algorithm is not something to be done lightly. But a competent cryptographer can add something on top of a well-proven algorithm in such a way that it’s intuitively obvious, or even provable, that all the strength of the well-trusted algorithm is retained (i.e. if you can break the overall system, then you can break the underlying well-trusted algorithm).

    I’m a competent cryptographer. I would trust myself to design a cryptosystem based on AES but with some additional secret components, in such a way that all the strength of AES is retained. I wouldn’t trust myself to design a brand new secret algorithm, alone, that no one could break – and I don’t think any other wise cryptographer would trust herself to do so either. To build a strong fundamental algorithm needs the collective analysis of many experts, over a long period of time; to build a strong hybrid algorithm from an EXISTING strong fundamental algorithm needs fewer eyes and less time.

    So I do believe you can have the best of both worlds: all the benefits of the well-trusted algorithm to protect you against mathematical cryptanalysis, plus some additional obscurity to help protect against (for instance) side channel attacks.

Comments are closed.