Popular Tropes

And now for something completely different ...

Wednesday, 26 November 2014

Brute Force Attack

Malcolm Rifkind may not appear to have much in common with fiancé-of-the-moment Benedict Cumberbatch, but they have both contributed to the debate on surveillance this week. It should come as no surprise that the Intelligence and Security Committee, chaired by Rifkind, decided in its Report on the intelligence relating to the murder of Fusilier Lee Rigby that the "intelligence community" were blameless, while taking the opportunity to criticise Internet companies' inadequate support for state surveillance. This might look like crude deflection, but the spooks are clearly playing a double game in which Rifkind and his fellow stooges (Hazel Blears, Robin Butler, Mark Field et al) are taking the role of the bad cop. The good cop is sympathetic to the Internet companies' plight and is keen to find a solution.

The committee's conclusion is that our boys in the shadows, whose job it is to monitor threats and who were well aware of Lee Rigby's killers, did not fail in their duties. Facebook, which is a commercial operation that monitors its users' data only to the extent that it can monetise it, did fail in its duties to alert the state when one of the suspects posted a comment boasting of his desire to kill a soldier. The committee even went so far as to accuse it of providing a "safe haven for terrorists", echoing the intemperate words of the head of GCHQ recently. If you substitute the word "trolls" for "terrorists" you'd be more accurate, but that's not going to garner the headlines. The committee obviously feels that Facebook should be capable of discriminating between the threats of potential terrorists and the somewhat larger crowd of muppets who routinely threaten murder and rape against people they slightly dislike.

The focus on Facebook's "negligence" not only continues the power struggle between state and business over the control of personal data, but it conveniently deflects attention from the secret services' modus operandi, which often requires them to allow crimes to be committed in order to advance intelligence-gathering. While there is no evidence the spooks knew Lee Rigby was going to be murdered, let alone that they would have countenanced it, they clearly knew the killers were active in radical Islamist circles and were happy to keep half an eye on them as low-value targets. They were a lot more aware of the potential threat than Facebook, so the committee's conclusion is perverse.

The agencies' aversion to rounding up all suspects, as the tabloids would urge, is practical, not the product of a keen respect for human rights. The justification for this approach is central to The Imitation Game, which tells the now famous story of Alan Turing, played by Benedict Cumberbatch, and the breaking of the Enigma code. In order to prevent the Germans realising the boffins at Bletchley Park had cracked it, the Brits had to exploit the intel in such a way as not to arouse suspicion. Dramatically, this is shown as accepting the sacrifice of one of the code-breakers' brothers on an Atlantic convoy. Reality has here been compressed to the point of implausibility - plotting the courses of U-boats would have been done by naval intelligence analysts, not code-breakers - but the essential truth remains: the secret services rarely foil plots because that is not their job.


The film is a highly conventional mixture of tropes: autistic genius, team bonding, the vulnerability of sexuality, plucky gel, Rattigan-like public school pathos etc. Much of the plot is factually incorrect and often nonsensical - e.g. the anachronistic use of Tipp-Ex and the casual security at Bletchley Park. What is significant is that we never see the enemy, other than in the form of newsreel interludes. Insofar as there are baddies on stage, they are the dunderheaded military (Charles Dance, clearly having a laugh) and the cynical face of MI6, played by Mark Strong who also featured in the 2011 adaptation of Tinker Tailor Soldier Spy. What the new film shares with the work of John le Carré (such as the recent adaptation of A Most Wanted Man) is the focus on deception and betrayal on "our side", which is prompted by decisions on the value of information and its carriers.

While much of this is a metaphor for Britain's diminished postwar status (particularly in respect of the US) and the tensions of class (John Cairncross, the Soviet mole in The Imitation Game, is most definitely below the salt), it also reflects the ethical dilemma of the agencies. Lying and sacrificing the innocent are seen as evils; sometimes necessary, sometimes not, but always a matter for judgement. Where le Carré is now outdated is in embodying information as people, i.e. informants, double-agents etc. Human intel is still important, but the core work is data analysis: Turing's legacy. Despite its title, the committee's report is not about the unwitting sacrifice of one soldier's life, but about the sacrifice of communications privacy for everyone.

The key recommendation (page 146 of the report) is the following: "We note that several of the companies ascribed their failure to review suspicious content to the volume of material on their systems. Whilst there may be practical difficulties involved, the companies should accept they have a responsibility to notify the relevant authorities when an automatic trigger indicating terrorism is activated and allow the authorities, whether US or UK, to take the next step. We further note that several of the companies attributed the lack of monitoring to the need to protect their users' privacy. However, where there is a possibility that a terrorist atrocity is being planned, that argument should not be allowed to prevail".

An "automatic trigger" would be words (and conceivably images) or associations (likes, friends etc) considered of interest by the agencies. As this would be constantly changing, and as there would be a lot of false positives (i.e. probably over 99%), it would be impractical to expect the Internet companies to keep on top of this and manage it as an exceptional process. This is the point being helpfully made by the "good cop". As some analysis would be retrospective, rather than real-time, it would also make more sense for GCHQ to simply syphon off the data and analyse it in parallel, much as the Brits transcribed every coded German message in WW2. Over time, this parallel dataset would expand to the point where it became easier to simply mirror the original rather that attempt any selection - i.e. the automatic trigger is all-encompassing. But there is an emerging problem.

According to MI5 testimony to the committee, "one of the effects of the Snowden disclosures has been to accelerate the use of default encryption by the internet companies… which was coming anyway, but I think that’s why I’m underlining the word 'accelerate' ...". The Internet companies know they have to introduce encryption to stay in business. While most users won't shift provider if they can help it, the "winner takes all" nature of the online market (i.e. one dominant social network, one search provider, one smartphone microblogger etc) means that incumbents are highly vulnerable to new entrants. If you go out of fashion online, you can be commercially dead in a matter of months. For a new entrant to the market, encryption could be enough of a USP to build momentum. The incumbents need to mitigate that risk, just as Facebook mitigated Instagram (by buying it) and Apple mitigated Spotify (by buying Beats).

What the agencies want is to circumvent the encryption. They know they cannot backdoor the encryption itself (because, duh, maths), just as Turing realised that Enigma could not be decoded by human cleverness (the crossword-solving angle). What it took was a brute-force attack. At Bletchley Park, this meant building an electromechanical machine that could work through billions of combinations, much as modern cracking programs work through billions of hash values to recover stolen passwords. Given the volume of data and the number of different keys (i.e. one per user), this is simply not going to be feasible on a mass scale if full, end-to-end encryption is widely implemented.
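As a toy illustration of the same principle, here is a minimal Python sketch of a brute-force attack on a single leaked password hash (the hash, character set and password are all made up). Every extra character multiplies the search space by the size of the character set, which is why repeating the exercise for millions of separate, properly-sized keys is not a realistic proposition.

```python
# Minimal sketch of a brute-force attack on one leaked password hash.
# The "leaked" hash, charset and password are invented for illustration.
import hashlib
from itertools import product

charset = "abcdefghijklmnopqrstuvwxyz0123456789"
leaked_hash = hashlib.sha256(b"spy1").hexdigest()  # pretend this was stolen

def crack(target_hash, max_len=4):
    """Hash every candidate string up to max_len characters until one matches."""
    for length in range(1, max_len + 1):
        for combo in product(charset, repeat=length):
            candidate = "".join(combo)
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

print(crack(leaked_hash))  # finds "spy1" after at most ~1.7 million attempts
```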

But is full encryption really coming? The user's data will still be open, not only to whoever they are communicating with (i.e. privacy of choice) but more broadly to the Internet companies themselves. After all, their business model is based on targeted advertising, which ideally means "reading" all the data, and a move to a fully secure, ad-free, paid-for service would be commercial suicide. The question then is whether the companies will continue to provide the agencies with largely unhindered access to the data, which is a PR issue rather than a technical one.

Most secrecy is circumvented via social engineering, e.g. tricking people into revealing passwords, or tricking them into thinking they are not being surveilled. Unless you utilise an independent, end-to-end encryption tool, you will remain dependent on the Internet companies' interpretation of privacy, and that isn't going to be an interpretation that will jeopardise the revenue they earn from the users' data assets. The report by Malcolm Rifkind and company is part of the ongoing campaign by the state to insist that business continues to provide the security agencies with a "cut" of these valuable assets. It's a different form of brute force attack in which a dead soldier is used to batter down a door.

6 comments:

  1. Herbie Kills Children, 26 November 2014 at 16:53

    The internet is a threat to the establishment, one they will lie awake at night thinking about. Piracy sites (wholly progressive in my opinion) have moved Eastwards as the state attacks them. Many of them operated/are operating in the Ukraine. Is this totally irrelevant to the current struggles in that nation?

    I think people underestimate this battle; I was waiting for the Lee Rigby case to be used as an excuse to curb the internet.

    I suspect this is one battle that will be won by the side of reaction, as they all have a vested interest and the masses can be kept compliant with the casual link to terrorism and paedos.

  2. "They know they cannot backdoor the encryption itself (because, duh, maths) ..."

    Actually IIUC one of the accusations against the US Gov is that they did introduce exactly that - an insecure algorithm into the standard (and mandatory) encryption set.

  3. The alleged NSA backdoor involved a random number generator. This wasn't mandatory and was one of 4 that were accepted as industry standard. Few companies employed it because it was slow and it was rumoured to have been targeted by the NSA well before the Snowden revelations. RSA did employ it, allegedly because the NSA paid them $10m.

    RNGs are key to cryptography, but compromising them is only an issue in fairly basic encoding of the type used in RSA's products (e.g. the SecurID fob) and the default transport security used by Web browsers (SSL/TLS). In other words, "scrambling" where the transposition is wholly derived from the number.

    Serious players would employ end-to-end encryption on top of this, utilising public and private keys. Even if the key generation were vulnerable to a compromised RNG, the message remains encrypted because of the addition of a user password that feeds into the algorithm. At best, the NSA could reduce the time required for a successful brute-force attack from decades to years.

    Replies
    1. Just to be doubly clear on this: the essence of encryption (i.e. the properties of prime numbers) means that it cannot be subverted directly.

      The state can undermine the immediate infrastructure - i.e. the software wrapper - but this often only shortens the odds for a brute-force attack, hence it increasingly depends on the "social engineering" of compliant companies (like RSA) selling the idea that their products are secure.

      The other approach taken over the last decade is for the state to develop and disseminate malware (Regin is a recent example) - possibly with the connivance of business - which potentially allows it to circumvent encryption by grabbing data before it is encrypted (or after it is decrypted) on the user's device.

      The common thread here is the cooperation of the state and business. The security agencies and the Internet giants both have a shared interest in the maintenance of "pseudo-privacy". While this is not central to the business model of security companies (i.e. those providing encryption and antivirus tools), they too have an interest in keeping onside with the state.

  4. Sure, I work in IT as well, but was being brief. My point was only that, in some circumstances, you couldn't necessarily rely on encryption.
