How the NSA cheated cryptography

Of all the revelations made by Edward Snowden, I find the recent one about Dual_EC_DRBG definitely the most intriguing and possibly the most shocking – even if it wasn’t really news.
It intrigues me because it is about elliptic curves. I love elliptic curves. I studied them quite extensively when I worked as a mathematician and although I don’t use them anymore, I still feel a fondness for them.
But more importantly, it intrigues me because initially I didn’t realise what had really happened – and judging from comments and articles I’ve seen, I wasn’t the only one.
The NSA didn’t weaken a crypto standard. Rather, it put a backdoor inside the standard. There’s an important difference. As a consequence, if you use Dual_EC_DRBG, you’re still well-protected if the adversary you’re defending against isn’t the NSA. But if it is, you’re pretty much stuffed.
Dual_EC_DRBG is a pseudorandom number generator (or deterministic random bit generator; hence the name). It is one of four of its kind that were defined in the 2006 NIST standard SP 800-90A (PDF). The standard was written with the help of some people at the NSA. As we now know*, the NSA effectively wrote the standard.

nsa-ack
Well, this is awkward.

Randomness is an essential part of any crypto system. It is also where many crypto systems have weaknesses, so if you’re implementing cryptography, it makes sense to use a standard provided by a reputable organisation like NIST.
What pseudorandom number generators do is turn a small ‘seed’ of proper random data into a steady stream of numbers that, to anyone who doesn’t know the seed, look random. Entropy is usually defined as a measure of randomness, but here (and possibly in general) it is best to see it as a measure of surprise to an adversary: high entropy means the adversary knows very little about the random numbers the system generates.
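To make that concrete, here is a minimal sketch of the general idea, using a simple hash-and-counter construction. It is only an illustration of the principle, not Dual_EC_DRBG and not any NIST-approved design.

```python
import hashlib
import os

def drbg(seed: bytes, n_bytes: int) -> bytes:
    """Stretch a short seed into n_bytes of pseudorandom output."""
    out = b""
    counter = 0
    while len(out) < n_bytes:
        # Hash the seed together with a counter; without the seed,
        # an adversary cannot predict any of these blocks.
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n_bytes]

seed = os.urandom(16)          # 128 bits of 'proper' random data from the OS
print(drbg(seed, 64).hex())    # as much output as you like, from one small seed
```

Dual_EC_DRBG plays exactly this role, but it builds its stream from elliptic-curve arithmetic rather than a hash function, and that is where the trouble starts.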
Dual_EC_DRBG uses a given elliptic curve. Elliptic curves come with an extra structure, called a group structure. For the purpose of this post, it suffices to say that this allows you to walk along the curve but, rather than simply following the shape of the curve, your walk makes you seemingly go all over the place. It is this all-over-the-placeness which makes them useful to generate pseudorandom numbers (and for cryptography in general).
Elliptic curve with group structure
The group structure on an elliptic curve. Don’t worry if it doesn’t make sense.
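If you want to see what that walk looks like in code, here is a toy sketch: an elliptic curve over a tiny, made-up prime field, with the group law providing the ‘step’. Everything below is for illustration only; real curves use primes that are hundreds of bits long.

```python
# Toy elliptic curve y^2 = x^3 + 2x + 3 over F_97. The curve and the numbers
# are made up for illustration; real curves use primes hundreds of bits long.
p, a, b = 97, 2, 3
O = None                        # the 'point at infinity': the group's identity

def add(P1, P2):
    """One application of the group law: 'add' two points on the curve."""
    if P1 is O: return P2
    if P2 is O: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                          # P + (-P) = identity
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # slope of the tangent
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p         # slope of the chord
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, Pt):
    """Walk k steps from the identity: compute k*Pt by double-and-add."""
    R = O
    while k:
        if k & 1:
            R = add(R, Pt)
        Pt = add(Pt, Pt)
        k >>= 1
    return R

# Find some point on the curve by brute force (fine for a toy example).
G = next((x, y) for x in range(p) for y in range(p)
         if (y * y - (x ** 3 + a * x + b)) % p == 0)

# Successive multiples of G hop around the curve with no obvious pattern.
for k in range(1, 6):
    print(k, mul(k, G))
```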

Apart from the curve, the algorithm also uses two given points P and Q on this curve. Like the curve, they are given in an appendix to the NIST standard.
Now there exists a relationship between these points P and Q: if you start at Q and you continue walking, then, for some large number e, after e steps you end up at P. This is not a secret: it is a simple property of the group structure of elliptic curves. But if the curve is large (which the one used in this standard is), it will take you a long time to compute e. Think in terms of millions of years. So no one knows e and no one can know e.
No one? Well, if you simply choose a point P on the curve and choose a (very large) number e, you can use that to compute a point Q. If you then give out these P and Q to someone, they will still need a million years to compute e. But you know it.
And that’s exactly what the NSA did. They provided the P and the Q in the standard. They, as has become clear from Snowden’s documents, know e. We don’t. And we can’t even compute it.
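Here is what that trapdoor looks like on the same kind of toy curve (again, all the numbers are made up for illustration). Whoever picks the secret number e can compute a matching Q in an instant and verify the relationship; everyone else is stuck trying every possible value.

```python
from math import gcd

# Same toy curve and helpers as in the earlier sketch (illustration only).
p, a, b = 97, 2, 3
O = None

def add(P1, P2):
    if P1 is O: return P2
    if P2 is O: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, Pt):
    R = O
    while k:
        if k & 1: R = add(R, Pt)
        Pt = add(Pt, Pt)
        k >>= 1
    return R

# A public point P, and the number of steps n after which the walk from P
# returns to the identity (brute force is fine on a toy curve).
P = next((x, y) for x in range(p) for y in range(p)
         if (y * y - (x ** 3 + a * x + b)) % p == 0)
n, T = 1, P
while add(T, P) is not O:
    T = add(T, P)
    n += 1
n += 1

# The designer secretly picks e and computes Q so that e steps from Q land
# exactly on P (that is, Q = e^(-1) * P inside the group).
e = next(k for k in range(n // 2, n) if gcd(k, n) == 1)   # arbitrary secret
Q = mul(pow(e, -1, n), P)
assert mul(e, Q) == P           # the hidden relationship between P and Q

# Everyone else has to recover e the hard way: by trying every possibility.
# Trivial here; think millions of years on a real 256-bit curve.
recovered = next(k for k in range(1, n + 1) if mul(k, Q) == P)
print("secret e:", e, "  recovered by brute force:", recovered)
```

The points P and Q on their own betray nothing; the entire backdoor consists of knowing e.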
Does this matter?
It does. In 2007, Dan Shumow and Niels Ferguson, two researchers then working for Microsoft, showed (pdf) that, if you know e, cracking the pseudorandom number generation becomes a little easier. A little easier? Actually, it becomes almost child’s play. They effectively showed that, to the NSA, your high-entropy pseudorandom number generator generates output with very few surprises.
In practice this means that, by knowing e, the NSA can read almost all TLS traffic (which includes HTTPS) whose encryption relies on random numbers generated by Dual_EC_DRBG.
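For the technically inclined, here is a heavily simplified sketch of the Shumow–Ferguson observation, once more on a toy curve. I have left out the truncation of the output (the real generator emits most of an x-coordinate rather than a whole point, so a real attacker first has to reconstruct the point, which costs them very little); the essential trick is the same.

```python
# Same toy curve and helpers as in the earlier sketches (illustration only).
p, a, b = 97, 2, 3
O = None

def add(P1, P2):
    if P1 is O: return P2
    if P2 is O: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, Pt):
    R = O
    while k:
        if k & 1: R = add(R, Pt)
        Pt = add(Pt, Pt)
        k >>= 1
    return R

def xcoord(Pt):
    return 0 if Pt is O else Pt[0]

# Public parameters: a point Q on the curve, and P = e*Q for a secret e
# known only to whoever chose the points.
Q = next((x, y) for x in range(p) for y in range(p)
         if (y * y - (x ** 3 + a * x + b)) % p == 0)
e = 29                          # the trapdoor number (arbitrary toy value)
P = mul(e, Q)

def dual_ec_step(s):
    """One heavily simplified round: update the state, emit an output point."""
    s = xcoord(mul(s, P))       # new internal state
    return s, mul(s, Q)         # output (the real DRBG emits a truncated x-coordinate)

# The victim generates 'random' output from a secret seed...
state = 42
state, output = dual_ec_step(state)

# ...and an attacker who knows e turns that single output into the victim's
# next internal state: e*(s*Q) = s*(e*Q) = s*P, whose x-coordinate is the state.
predicted = xcoord(mul(e, output))

state, _ = dual_ec_step(state)  # what the victim's generator actually does next
print("victim's next state:", state, "  attacker's prediction:", predicted)
```

Once the attacker knows the internal state, they can simply run the generator forward themselves and reproduce every ‘random’ number the victim will generate from then on.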
After the likely backdoor was found in 2007, NIST actually updated the standard. It now shows you a method to choose ‘good’ P and Q yourself (for you can’t just choose arbitrary points). But it still says that if you want your crypto to be FIPS 140-certified, you need to choose the points they’ve chosen for you. “Trust us,” you read between the lines, “we know they work.”
So why would anyone trust them, especially after it was shown that someone could likely have inserted a backdoor? That is beyond me. But the standard is used in quite a few implementations.
What makes this even stranger is that, as Matthew Green pointed out in an excellent blog post, the algorithm is pretty flawed in a number of other ways too. No wonder the crypto world suddenly finds itself in an existential crisis.
Now it would have been bad if the NSA had somehow managed to make us all use weaker cryptography. Still, the playing field would have remained level, albeit with lower security for everyone.
It would have been a little worse if the NSA knew of a secret algorithm that enabled them to break cryptography. (It is possible that one of the future revelations that Bruce Schneier hinted at will show they can do that for certain crypto standards.) Still, ultimately that is just beating your opponent by being more clever.
But what the NSA did was plain cheating. The crypto remains secure against any of us. But they can crack it. Because they wrote it. And they put a backdoor into it. And even though we know (and have known for some time) there was such a backdoor, it still doesn’t help us.
Cheating with the privacy of billions of Internet users is nothing but very, very wrong.
(Apart from the linked blog post by Matthew Green, there is this Wired piece on Dual_EC_DRBG that the aforementioned Bruce Schneier wrote back in 2007, when Edward Snowden was but a junior employee at the CIA working in Switzerland. Like just about anything Schneier has written on cryptography, it is well worth a read.)
* The NSA hasn’t owned up and it is unlikely they ever will. While no one doubts that the NSA planted a backdoor into Dual_EC_DRBG, we can’t prove it. Throughout the blog post, I have assumed we are sure. It made for easier reading. And, frankly, we are quite sure.

There is no 'I know what I am doing' trump card in security

Ever since Edward Snowden revealed details of the NSA’s PRISM program, I had been wanting to write something about it.
While most people in the security community are rather unhappy, if not outraged, about PRISM, a lot of focus has been on the fact that the NSA is apparently evil.
While this may be true, I don’t think this is relevant. Of course, no one wants to be spied upon by an organisation they consider evil. But what I think is relevant here is that even if the people at the NSA are good and well-meaning, mass-surveillance is still very wrong. (As Robert Graham put it: “NSA is wrong, not evil”.)
So, inspired by the Black Hat keynote given by the NSA’s director, Gen. Keith Alexander, I wrote a blog post about it:

We have all been there. To continue with the product you’re working on, you need to get some extra permission: a port needs to be opened, or perhaps some files need to be uploaded onto a protected system. You ask the IT department for this permission and, much to your frustration, they won’t give it to you until you’ve explained in full detail why you need it, and even then they will have to check with their management.
“But I know what I’m doing. And my manager says it is fine.”

Read the rest of the post at Virus Bulletin.


On Twitter and Censorship

I wrote this post back in 2012 on a wordpress.com blog that I set up for that purpose. I decided to copy it here, because I still think it’s relevant.

I like Twitter.
Not just because I like microblogging — which I think is great — but I like them as a company too. Yesterday’s announcement made me like them a little more. In fact, as it has been widely and unfairly misreported as ‘Twitter introduces censorship’, it prompted me to finally start this blog.
At first glance, it doesn’t sound too good: Twitter has given itself the ability to block tweets on a per-country basis. “Censorship!” people have been screaming and that’s what it sounded like to me as well.
But wait, Twitter can already delete tweets when it sees fit to do so. In fact, it has done so in a number of cases when it was required by law. Failing to comply with legal demands to block tweets would mean the company could be shut down altogether and its employees could be arrested.
But most such demands (the US, where Twitter is registered as a company, may be an exception, but I’m not a legal expert) only affect the visibility of the relevant tweets in certain countries. For instance, as Twitter points out, pro-Nazi content is illegal in Germany and France but not in most other countries. Rather than deleting the tweets altogether, Twitter will only withhold them from users in the relevant countries. The announcement thus means that, in fact, there will be less censorship.
But what about repressive regimes? Didn’t Twitter play an essential role in the Arab Spring? And will they now start to block all political tweets from Syria, as no doubt these are illegal under local laws?
I think you have to be very cynical to believe that to be the case. As far as I’m aware, they have never removed any political tweets. And the chances of Twitter opening an office in Damascus — in which case it would have to comply with Syrian laws — seem pretty low, at least under the Assad government. I think it’s much more likely that the Syrian government will block Twitter altogether.
In which case, as in the case of Twitter blocking certain tweets in certain countries, there are many ways around it. In its final days, the Mubarak government in Egypt tried to curtail protests by shutting down the Internet altogether. They failed.
Shouldn’t Twitter just ignore those demands to block content? Yes, they should. But they have to obey the laws, which means they can’t. Within those laws, it seems like they are doing everything they can. They even say they will make it clear when a tweet is withheld from the user and will post all takedown notices on Chilling Effects.
Of course, we will have to wait and see how well Twitter lives up to these promises, including the promise to only remove tweets reactively, not proactively. I am positive they will though. And until I am proven wrong, I will continue to love and praise Twitter.