Don’t trust me: I might be a spook
Shortly after the Snowden papers started to be published, I was invited to write an op-ed about PRISM and its implications for privacy and online security. I initially agreed, but after spending a few hours putting some thoughts together I changed my mind: I really had nothing useful to say. Yes, the NSA is spying on us, listening to our phone calls, and reading our email — but we already knew that, and a few PowerPoint slides of confirmation really don’t change anything. When the first revelations about BULLRUN — the fact that the NSA can read a lot of encrypted data on the internet — appeared, I was similarly unimpressed: If you can find a weakness in an implementation of a cryptographic system, you can often bypass the cryptography, and the US government, via defense contractors, has hundreds of open job postings for exploit writers with Top Secret clearances. If the NSA can break 2048-bit RSA, that would be a Big Deal; if they can break OpenSSL, not so much.
But the latest revelations scare me. It’s one thing to find and exploit vulnerabilities in software; there’s a lot of software out there written by developers with very little understanding of cryptography or software security, and it shows. If you care about security, the reasoning went, stick to software written by people who know what they’re doing — indeed, when I talk to users of Tarsnap, my online backup service, one of the most common things I hear is “you’re good at security, so we know your code will keep our data safe”. That reasoning is now clearly flawed: We now have evidence that the NSA is deliberately sabotaging online security — influencing (and weakening) cryptographic standards, bribing companies to insert “back doors” into their software, and even sending developers to “accidentally” insert bugs into products. It’s not enough to trust that I know what I’m doing: You have to trust that I’m not secretly working for the NSA.
I’m not working for the NSA, of course, and I haven’t sabotaged any of the software I’ve written — and while that’s exactly what someone working for the NSA would say, there are a few reasons to believe me. For a start, I’m not a US citizen, so it would be difficult for me to get a US security clearance; and since my first instinct if approached by the NSA would be to blog about it, I’m not exactly the sort of person they would be inclined to trust. More significantly, I have published cryptographic research: in 2005, the first (public) side-channel attack exploiting Intel HyperThreading; and in 2009, the scrypt key derivation function, which is designed specifically to protect passwords (and the accounts and data they are used to guard) against attack from agencies like the NSA. The NSA does not publish cryptographic research (or much at all, in fact — there’s a reason people joke that their name is really an abbreviation for “Never Say Anything”), so my having published such research argues against the possibility that I’m covertly working for them. Finally, my reputation and identity are very heavily tied up in security, both as Security Officer for the FreeBSD project and as the author of Tarsnap. If I sabotaged Tarsnap, it would irreparably damage my reputation, and it’s hard to imagine what inducement anyone could offer that would make me do such a thing.
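For the curious: here is a minimal sketch of what calling scrypt looks like, assuming the crypto_scrypt() interface from the scrypt reference implementation; the all-zero salt and the cost parameters (N = 32768, r = 8, p = 1) are illustrative only, not a recommendation.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Declared by the scrypt reference implementation; link against it. */
    int crypto_scrypt(const uint8_t *, size_t, const uint8_t *, size_t,
        uint64_t, uint32_t, uint32_t, uint8_t *, size_t);

    int
    main(void)
    {
        const char * passwd = "correct horse battery staple";
        const uint8_t salt[16] = { 0 };  /* Use a random salt in real code. */
        uint8_t dk[64];

        /* Derive a 64-byte key; a larger N makes each guess cost more. */
        if (crypto_scrypt((const uint8_t *)passwd, strlen(passwd),
            salt, sizeof(salt), 32768, 8, 1, dk, sizeof(dk))) {
            fprintf(stderr, "scrypt failed\n");
            return (1);
        }

        /* dk now holds the derived key material. */
        return (0);
    }

The point of the design is that an attacker, even one with an NSA-scale hardware budget, must pay a large and tunable cost in memory and time for every password guessed.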
But none of this is conclusive. Despite all the above, it is still possible that I am working for the NSA, and you should not trust that I am not trying to steal your data. Fortunately, the first principle behind Tarsnap’s design is that you should not need to trust me: Data is encrypted on individual client machines, and you have the source code to verify that this is being done securely (and without the keys being in any way leaked to the NSA). If you are a developer who understands C, download the Tarsnap source code and read it — and don’t feel that a lack of expertise in security should stop you either: My experience as FreeBSD Security Officer was that most vulnerabilities were found by developers looking at code and noticing that something “seemed wrong”, rather than by people with security expertise specifically looking for security vulnerabilities. (If protecting the free world from the NSA is insufficient motivation, I also pay for bugs people find in Tarsnap, as well as in scrypt, kivaloo, and spiped, right down to the level of typographical errors in comments.)
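To be clear, the following is not Tarsnap’s actual code; it is only a toy sketch of the client-side principle, using OpenSSL’s EVP interface with AES-256-GCM and with most error handling omitted for brevity. The property to verify when reading real code is the same one shown here: the key is generated locally, and only ciphertext ever leaves the machine.

    #include <stdio.h>

    #include <openssl/evp.h>
    #include <openssl/rand.h>

    int
    main(void)
    {
        unsigned char key[32], iv[12], tag[16];
        unsigned char pt[] = "backup data", ct[sizeof(pt)];
        int len, ctlen;
        EVP_CIPHER_CTX * ctx;

        /* The key and nonce are generated locally and never uploaded. */
        if (RAND_bytes(key, sizeof(key)) != 1 ||
            RAND_bytes(iv, sizeof(iv)) != 1)
            return (1);

        ctx = EVP_CIPHER_CTX_new();
        EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv);
        EVP_EncryptUpdate(ctx, ct, &len, pt, sizeof(pt));
        ctlen = len;
        EVP_EncryptFinal_ex(ctx, ct + len, &len);
        ctlen += len;
        EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, sizeof(tag), tag);
        EVP_CIPHER_CTX_free(ctx);

        /* Only ct, iv, and tag would be sent to the server. */
        printf("%d ciphertext bytes ready to upload\n", ctlen);
        return (0);
    }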
Naturally, what applies to me also applies to everybody else. For most products, in fact, it applies many times over: It only takes one person to introduce a vulnerability into software, and even assuming no institutional corruption, most organizations do not have a code review process rigorous enough to reliably catch such bugs (if they did, we would have vastly superior code!). Microsoft may have decided to cooperate with the NSA while Google resisted, but all the NSA needs is one or two cooperative Google employees in the right place.
The only solution is to read source code and look for anything suspicious. Linus’s Law states that “given enough eyeballs, all bugs are shallow”: If enough people read source code, we will find the bugs — including any which the NSA was hoping to exploit in order to spy on us. The Department of Homeland Security wants an army of citizens on the lookout for potential terrorists; it’s time to turn that around. We need an army of software developers on the lookout for potential NSA back doors — to borrow a phrase, if you see something, say something.
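To make “suspicious” concrete, recall the 2003 attempt to slip a backdoor into the Linux kernel, where a line added to a CVS mirror of sys_wait4() used a single “=” where “==” belonged, disguised as an error check. The sketch below is a self-contained paraphrase from memory, with names and constants simplified; it is not the actual kernel code.

    #include <stdio.h>

    static int uid = 1000;      /* Stand-in for current->uid. */

    static int
    check_options(int options)
    {
        /* Looks like input validation; but "(uid = 0)" assigns, then
         * evaluates to 0, so the "error" branch never fires... and
         * uid has silently become 0. */
        if ((options == 0x3) && (uid = 0))
            return (-1);
        return (0);
    }

    int
    main(void)
    {
        check_options(0x3);             /* The attacker's magic value. */
        printf("uid is now %d\n", uid); /* Prints 0: root, quietly. */
        return (0);
    }

Nothing about that line screams “backdoor”; it was caught because someone reading a diff noticed that it seemed wrong. That is exactly the kind of eyeball Linus’s Law depends on.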
And if you can’t see anything because you can’t get the source code… well, who knows what they might be hiding?