Artificial Intelligence (AI) is a misleading term because we don't have a full understanding of human intelligence yet, so the comparison is necessarily imprecise. The Turing Test, aka the imitation game, isn't simply about a machine mimicking human responses - which is easy enough to do - but about our ability to reliably identify the expressions of human intelligence in comparison to software-generated text. The uncertainty as to whether it is a human or a machine on the other side of the screen reflects our inability to infallibly spot the human as much as our inability to unmask a computer-generated mimic. This uncertainty is obvious when we consider bureaucracy. Long before the possibility of LLMs, it was common for people to complain that interacting with commercial organisations or the state was like dealing with a machine. Bureaucratic procedures seemed designed to excise all humanity from what were notionally social interactions. As a result, bureaucracy was routinely derided as stupid, even though the manifestations of this stupidity were deliberately designed and presumably satisfactory to the designers.
The Large Language Model (LLM) approach to AI assumes that human-like intelligence can be derived from enough text. But there is a problem with this that is becoming increasingly apparent. Improving the model means expanding the corpus, but that in turn means more rubbish, which humanity routinely produces in text form. Many think that we may already have reached a limit and that further expansion of LLMs will result not simply in diminishing returns but the passing of an inflexion point after which the models become dumber and dumber. The underlying issue here is not that factual errors in the corpus can give rise to "hallucinations" but that text is not a uniform expression of a general human intelligence. Rather, text is a highly formalised set of what we might call genres. These are broader than knowledge domains (technical dialects, for example) and reflect more generic purposes for which text is used, such as education or reportage. For example, the text produced by bureaucracy has well-known characteristics. It can be ambiguous, sometimes impenetrable and even downright nonsensical, but these are not necessarily failings from the perspective of the authors.
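To see why recursive contamination compounds rather than averages out, consider a toy sketch (a minimal illustration, not a claim about how LLMs actually work: a Gaussian distribution stands in for the model, and all the numbers are arbitrary). Each generation is fitted to samples drawn from the previous generation's output, so estimation error feeds back on itself:

```python
# Toy sketch of recursive contamination ("model collapse").
# Assumption: a Gaussian plays the part of a language model. Each
# generation is fitted to samples drawn from the previous one, so
# sampling error compounds instead of averaging out.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data, mean 0 and standard deviation 1.
data = rng.normal(loc=0.0, scale=1.0, size=1000)

for generation in range(1, 11):
    mu, sigma = data.mean(), data.std()     # fit the "model" to its corpus
    data = rng.normal(mu, sigma, size=100)  # train the next generation on its output
    print(f"gen {generation}: mean={mu:+.3f}, std={sigma:.3f}")

# The fitted standard deviation tends to shrink and the mean to drift:
# the distribution narrows, losing the tails of the original data.
```

The mechanism is nothing like a real LLM, but the dynamic - each generation inheriting and amplifying the errors of the last - is the inflexion point in miniature.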
Henry Mance in the Financial Times recently noted how the problem of contamination is increasingly framed as one of reputation: "[the cognitive scientist] Gary Marcus suggests performance may get worse: LLMs produce untrustworthy output, which is then sucked back into other LLMs. The models become permanently contaminated. Scientific journals’ peer-review processes will be overwhelmed, “leading to a precipitous drop in reputation”, Marcus wrote recently." Reputation is an interesting word to choose in this context. It doesn't just suggest predictability or reliability - the idea that you will get the "right" answer. It also suggests that the answer is definitionally true because it is the answer given by authority. But this is a mundane truth rather than one pronounced ex cathedra. According to Mance, "AI will become embedded in lots of behind-the-scenes tools that we take for granted." Again, the phrase "taking for granted" suggests that AI will advance to the point where its output is accepted as authoritative even if trivial. This doesn't assume that the AI will never be wrong, that it will be infallible, but that the rate of error will be low enough to be tolerable, much as bureaucratic mistakes are.
The future of AI may turn out to be restricted language models rather than the largest possible ones. Much of what is described as "AI training" is actually human intervention to limit the interpretative scope of the software: to rein it in. This will help address the contamination issue, essentially through brute force quality control, but it will also allow the AI to operate within a narrower semantic field where the epistemological rules are rigorously observed. In other words, just like a bureaucracy. This is intelligence in a very narrow, dry and unimaginative form. And that points to a rather depressing future. While there may be exciting applications of the technology in sexy areas like medical scanning and diagnosis, the big returns have always been anticipated in administrative and service functions, hence the predictions for "lost" jobs tend to focus on accountancy, customer support and the like. AI will probably thrive best in areas where rigidity of thought and a strictly bounded intelligence, even an unyielding monomania, is prized.
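As a crude illustration of what "reining it in" means in practice (everything here is hypothetical - the names classify and APPROVED_RESPONSES are invented for the sketch, not any real system), imagine software that cannot compose text at all but merely routes each query to a sanctioned answer:

```python
# A minimal sketch of a "restricted language model": the system never
# composes text, it only selects from pre-approved responses, much as
# a bureaucracy routes every query to a standard form letter.
# Hypothetical names throughout; not a real API.

APPROVED_RESPONSES = {
    "balance": "Your current balance is shown on your most recent statement.",
    "complaint": "Your complaint has been logged and will be reviewed in due course.",
    "unknown": "Your query cannot be processed. Please contact the office in writing.",
}

def classify(query: str) -> str:
    """Crude intent matching: the deliberately narrowed interpretative scope."""
    q = query.lower()
    if "balance" in q or "owe" in q:
        return "balance"
    if "complain" in q or "wrong" in q:
        return "complaint"
    return "unknown"

def respond(query: str) -> str:
    # Only sanctioned answers can ever be emitted: expressiveness is
    # traded for a bounded, auditable error rate.
    return APPROVED_RESPONSES[classify(query)]

print(respond("How much do I owe?"))         # -> the "balance" form letter
print(respond("You got my address wrong."))  # -> the "complaint" form letter
```

The result is predictable, auditable and utterly inflexible - which is precisely what an administrative function prizes.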
Technological development reflects above all the appetite and capability of the socio-economic environment to exploit new techniques (or old ones rediscovered). And that in turn may be determined by the longevity and ubiquity of previous technologies. Famously, the Chinese writing system - tens of thousands of characters - led to the dominance of woodblock printing and the relative underutilisation of movable type until the mid-19th century. But the latter technology produced a revolution when combined with the Latin alphabet in 15th century Europe. To give a contemporary example, modern software is riddled with skeuomorphs, from calendar apps that mimic desk diaries to the shutter-click sound of a smartphone camera app. The visual and aural prompts are intuitive only to the extent that we have been trained to recognise their forebears. If camera apps mimicked the "poof" of a flash powder explosion, rather than a click, we'd still understand it perfectly well, even though few of us have ever directly experienced a magnesium flare.
We may not be able to create a genuine artificial intelligence - i.e. artificial in the sense that it convincingly mimics the human sort - because we cannot escape the constraints that we place on human intelligence. The great myth, shared by liberals and libertarians (though not echt conservatives), is that human genius is unbounded. In reality, it is inescapably situated in history and society. In theory, globalisation and modern mass media should mean that new ideas spread rapidly and pervasively, but you'd have to be naive to imagine that there are no technologies underappreciated or lying dormant in the modern world. AI hasn't come to the fore because it is our shining hope (though it's worth noting that it has quickly acquired the perpetually imminent horizon of expectation characteristic of fusion power), but because it seems already familiar, all too familiar, in its combination of impressive authority and crass stupidity. AI will advance largely through the realms of business administration and public services, and so it will inevitably inherit the cultures (i.e. the vocabulary and semantics) of those realms. AI will not be a Culture Mind, of the sort imagined by Iain M. Banks, but a faceless version of the DHSS circa 1983.