
Wednesday, 15 February 2017

Algopops

One of the defining characteristics of the debate on the role of software in modern society is the tendency towards anthropomorphism. Despite the stories about job-stealing robots, what we apparently fear most is not machines that look vaguely like humans, with their metal arms whirling over production lines, but malicious code that exists in a realm beyond the corporeal. We speculate about the questionable motives of algorithms and worry about them going "rogue". Such language reveals a fear of disobedience as much as malevolence, which should indicate its origin in the old rhetoric of class (much like the etymology of the word "robot"). In a similar vein, the trope of the hacked kettle recycles the language of outside agitators and suborned servants. In contrast, artificial intelligence is subject to theomorphism: the idea that in becoming superior to human intelligence it becomes god-like, an event that can occur well short of the technological singularity (as Arthur C. Clarke noted, "Any sufficiently advanced technology is indistinguishable from magic").

This distinction between algorithms and AI has echoes of the "great chain of being", the traditional theory of hierarchy that has enjoyed something of a revival among the neo-reactionary elements of the alt-right, but which can also be found buried deep within the ecological movement and the wider culture of Sci-Fi and fantasy. Given that mix, there should be no surprise that the idea of hierarchy has always been central to the Californian Ideology and its (non-ironic) interpretation of "freedom". If Marxism and anarchism treat class and order as historically contingent, and therefore mutable, the defining characteristic of the party of order - and one that reveals the fundamental affinity between conservatives and liberals - is the belief that hierarchy is natural. Inheritance and competition are just different methods used to sort the array, to employ a software term, and not necessarily incompatible.

Inevitably the cry goes up that we must regulate and control algorithms for the public good, and just as inevitably we characterise the bad that algorithms can do in terms of the threat to the individual, such as discrimination arising from bias. The proposed method for regulating algorithms and AI is impeccably liberal: an independent, third-party "watchdog" (a spot of zoomorphism for you there). Amusingly, this would even contain a hierarchy of expertise: "made up of law, social science, and philosophy experts, computer eggheads, natural scientists, mathematicians, engineers, industry, NGOs, and the public". This presents a number of scale problems. Software is unusual, compared to earlier general purpose technologies such as steam power or electricity, in that what needs regulation is not its fundamental properties but its specific applications. Regulating the water supply means ensuring the water is potable - it doesn't mean checking that it's as effective in cleaning dishes as in diluting lemon squash. When we talk about regulating algorithms we are proposing to review the purpose and operation of a distinct program, not the integrity of its programming language.


In popular use, the term "algorithm" is a synecdoche for the totality of software and data. An application is made up of multiple, inter-dependent algorithms and its consequential behaviour may be determined by data more than code. To isolate and examine the relevant logic a regulator would need an understanding of the program on a par with its programmers. If that sounds like a big ask, consider how a regulator would deal with an AI system that "learns" new rules from its data, particularly if those rules are dynamic and thus evanescent. This is not to suggest that software might become "inscrutable", which is just another anthropomorphic trope on the way to the theomorphism of a distracted god, but that understanding its logic may be prohibitively expensive. Perhaps we could automate this to a degree, but that would present a fresh problem of domain knowledge. Software bias isn't about incorrect maths but encoded assumptions that reflect norms and values, as the sketch below illustrates. Such assumptions can only be properly judged by humans, but would a regulator have the broad range of expertise necessary to evaluate the logic of all applications?
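To make that last distinction concrete, here is a minimal sketch in Python (every name and rule invented for illustration) of how a norm gets encoded without a single mathematical error:

```python
# A hypothetical credit-scoring rule. The arithmetic below is flawless;
# the bias lives entirely in the assumptions baked into it.

def credit_score(applicant: dict) -> float:
    score = 0.0
    # Assumption: long tenure in one job signals reliability. This
    # quietly penalises carers and the precariously employed.
    score += min(applicant["years_in_job"], 10) * 5
    # Assumption: the neighbourhood predicts the individual. Postcodes
    # correlate with race and class, so this encodes both.
    if applicant["postcode"] in {"E1", "B8", "L8"}:
        score -= 20.0
    return score
```

Every line computes exactly what it was told to compute; the regulator's question is whether it should have been told to compute that at all.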

Initially, a regulator would probably respond to individual complaints after the fact; however, history suggests that the regime will evolve towards up-front testing, at least within specific industries. The impetus for standards and regulation is typically a joint effort by the state, seeking to protect consumers, and capital, seeking to protect its investment. While the former dominates to begin with, the latter gains the upper hand over time as the major firms seek to cement their incumbency through regulatory capture and as their investors push for certification and thus indemnities in advance. You'd need a very large regulator (or lots of them) to review all software up-front, and this is amplified by the need to regression-test every subsequent software update to ensure new biases haven't crept in. While this isn't inconceivable (if the robots take all the routine jobs, being a software investigator may become a major career choice - a bit like Blade Runner but without the guns), it would represent the largest regulatory development in history.

An alternative approach would be to leverage software engineering itself. While not all software employs strict modularisation or test-driven development, these practices are prevalent enough to expect most programs to come with a comprehensive set of unit tests. If properly constructed (and this can be standardised), the tests should reveal enough about the assumptions encoded within the program logic (the what and why), while not exposing the programming itself (the how), to allow for meaningful review and even direct interrogation using heterogeneous data (i.e. other than the test data employed by the programmers). Unit tests are sufficiently black-box-like to prevent reverse engineering and their architecture allows the test suite to be extended. This means that the role of regulation could be limited to ensuring that all applications publish standard unit tests within an open framework (i.e. one that could be interfaced with publicly) and perhaps ensuring that certain common tests (e.g. for race, gender or age discrimination) are included by default.
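By way of illustration only, a published test within such a framework might look like the following sketch, which assumes a hypothetical lender_app module exposing a single public entry point and uses Python's standard unittest library to stand in for the open framework:

```python
import unittest

# Hypothetical application under test: only its published entry point is
# visible (the what and why), not its implementation (the how).
from lender_app import score_applicant


class DefaultDiscriminationTests(unittest.TestCase):
    """A common test that regulation could require by default: two
    applicants identical in every respect bar gender should score the same."""

    def test_gender_invariance(self):
        base = {"income": 42000, "years_in_job": 6, "gender": "female"}
        counterfactual = dict(base, gender="male")
        self.assertEqual(score_applicant(base),
                         score_applicant(counterfactual))


if __name__ == "__main__":
    unittest.main()
```

Because the test exercises the program only through its public interface, a reviewer can re-run it with heterogeneous data of their own choosing without ever seeing the code.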


The responsibility for independently running tests, and for developing extended tests to address particular concerns, could then be crowdsourced. Given the complexity of modern software applications, let alone the prospect of full-blown artificial general intelligence systems, it might seem improbable that an "amateur" approach would be effective, but that is to ignore three salient points. First, the vast majority of software flaws are the product of poor development practice (i.e. inadequate testing), the indifference of manufacturers to preventing vulnerabilities (those hackable kettles), and sheer incompetence in systems management (e.g. TalkTalk). Passing these through the filter of moderately talented teenagers would weed out most of them. Second, pressure groups with particular concerns could easily develop standard tests that could be applied across multiple applications - for example, checking for gender bias (see the sketch below). Third, quality assurance in software development already (notoriously) depends on user testing in the form of feedback on bugs in public releases. Publication of unit tests allows that to be upgraded from a reactive to a proactive process.
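As a sketch of that second point (module names again hypothetical), a pressure group could write a gender-bias test once and run it, unchanged, against every application that publishes the standard interface:

```python
import unittest

# Hypothetical applications that all publish the same open interface:
# a callable taking an applicant profile and returning a numeric score.
from lender_app import score_applicant
from insurer_app import quote_premium
from recruiter_app import rank_candidate

APPLICATIONS = [score_applicant, quote_premium, rank_candidate]


class CrowdsourcedGenderBiasTest(unittest.TestCase):
    """One extended test, written once, applied across multiple applications."""

    def test_gender_invariance_everywhere(self):
        profile = {"income": 42000, "years_in_job": 6, "gender": "female"}
        for app in APPLICATIONS:
            # subTest reports each application's result separately.
            with self.subTest(application=app.__name__):
                self.assertEqual(app(profile),
                                 app(dict(profile, gender="male")))


if __name__ == "__main__":
    unittest.main()
```

The cost of scrutiny falls with every reuse: the expertise lives in the test, not in each reviewer.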

Interestingly, the crowdsource approach is already being advocated for fact-checking. While traditional media make a fetish of militant truth and insist on their role as a supervisor of propriety (technically a censor, but you can understand why they avoid that term), some new media organisations are already down with the idea of active public invigilation rather than just passive applause for the gatekeeper. For example, Mr Wikipedia, Jimmy Wales, reckons "We need people from across the political spectrum to help identify bogus websites and point out fake news. New systems must be developed to empower individuals and communities – whether as volunteers, paid staff or both. To tap into this power, we need openness ... If there is any kryptonite to false information, it’s transparency. Technology platforms can choose to expose more information about the content people are seeing, and why they’re seeing it." In other words, power to the people. Of course, near-monopoly Internet platforms, like global media companies, have a vested interest in limiting transparency and avoiding responsibility: the problem of absentee ownership.

The idea that we would do better to rely on many eyes is hardly new, and nor is the belief that collaboration is unavoidable in the face of complexity. As the philosopher Daniel Dennett put it recently, "More and more, the unit of comprehension is going to be group comprehension, where you simply have to rely on a team of others because you can’t understand it all yourself. There was a time, oh, I would say as recently as, certainly as the 18th century, when really smart people could aspire to having a fairly good understanding of just about everything". Dennett's recycling of the myth of "the last man who knew everything" (a reflection of the narrowness of elite experience in the era of Diderot's Encyclopédie) hints at the underlying distaste for the diffusion of knowledge and power beyond "really smart people" that also informs the anxiety over fake news and the long-running caricature of postmodernism as an assault on truth. While this position is being eroded in the media under the pressure of events, it remains firmly embedded in the discourse around the social control of software. We don't need AI watchdogs, we need popular sovereignty over algorithms.

3 comments:

  1. Herbie Kills Children, 15 February 2017 at 18:58

    “Given that mix, there should be no surprise that the idea of hierarchy has always been central to the Californian Ideology and its (non-ironic) interpretation of "freedom". If Marxism and anarchism treat class and order as historically contingent, and therefore mutable, the defining characteristic of the party of order - and one that reveals the fundamental affinity between conservatives and liberals - is the belief that hierarchy is natural. Inheritance”

    You have played something of a five-card trick by going from hierarchy to class and back to hierarchy again. Why am I making this point? Well, Marxism may treat class as mutable but it certainly does not treat hierarchy (or, for that matter, order) as such, which is both an anarchist critique of Marxism and a Marxist critique of anarchism! Think of Engels' “On Authority” as an example, where Engels makes the point that while class may be done away with, the operating needs of a factory may make hierarchy a necessity. The anarchists of course will complain that Marxism is just replacing one class of overlords with another. I tend to lean more to the anarchist position on hierarchy, in that socialism should seek to create organisational structures that undermine hierarchy, for example rotation.

    Replies
    1. Engels was talking about hierarchy, or more precisely authority, in instrumental terms. I was referring to the idea of hierarchy as an eternal order, ordained by God or nature.

    2. Herbie Kills Children, 15 February 2017 at 19:33

      Well he was certainly rejecting the ordained-by-God part! I am not sure he denied nature so absolutely!
