Popular Tropes

And now for something completely different ...

Wednesday, 5 July 2023

Some Monsters Are Bigger Than Others

Henry Farrell and Cosma Shalizi published a provocative essay in The Economist last month that drew a parallel, via the meme of the Lovecraftian shoggoth, between what we are obliged, for shorthand, to refer to as AI and older monsters of oppression and alien knowledge: "But what such worries [about AI] fail to acknowledge is that we’ve lived among shoggoths for centuries, tending to them as though they were our masters. We call them “the market system”, “bureaucracy” and even “electoral democracy”. The true Singularity began at least two centuries ago with the industrial revolution, when human society was transformed by vast inhuman forces. Markets and bureaucracies seem familiar, but they are actually enormous, impersonal distributed systems of information-processing that transmute the seething chaos of our collective knowledge into useful simplifications." This idea of an unknowable immensity is central to Friedrich Hayek's view of the market and the occult value of the price signal. But it also clearly stands in contradistinction to the tradition that starts with Marx, in which the superstructure is rationally explicable as a product of the base and, as developed by later thinkers, its mystification is a means of control rather than an innate characteristic.

One useful service Farrell and Shalizi perform by highlighting this social science lineage is that it brings AI squarely within the realm of politics - i.e. as a matter of choice rather than simply as an external inevitability that must be regulated, as was the case with industrialisation in the nineteenth century. As the authors say, "We would be better off figuring out what will happen as LLMs compete and hybridise with their predecessors than weaving dark fantasies about how they will rise up against us." An obvious place to start is the mating dance between the incumbent AI developers and the state over the issue of regulation. Predictably, given that this is appearing on the pages of The Economist, the essay imagines this contest in terms of an equilibrium between the market and democracy, thus: "We eke out freedom by setting one against another, deploying bureaucracy to limit market excesses, democracy to hold bureaucrats accountable, and markets and bureaucracies to limit democracy’s monstrous tendencies. How will the newest shoggoth change the balance, and which politics might best direct it to the good?"

In an extended version published on the Crooked Timber blog, Farrell emphasises that the occult nature of these social technologies has long been recognised: "When one looks past the ordinary justifications and simplifications, these enormous systems seem irreducibly strange and inhuman, even though they are the condensate of collective human understanding. Some of their votaries have recognized this. Hayek – the great defender of unplanned markets – admitted, and even celebrated the fact that markets are vast, unruly, and incapable of justice. He argues that markets cannot care, and should not be made to care whether they crush the powerless, or devour the virtuous." But there's an obvious qualification to be made here. While the unknowability and the lack of conscience of the market have been celebrated, the same can't be said with regard to bureaucracy or democracy; quite the opposite. The demand is always that the former be accountable (i.e. virtuous) and that the latter be representative (i.e. an accurate reflection of the popular will). And look back at that quote in the previous paragraph: it is bureaucracy that limits market excess, not democracy, and it is democracy that poses the "monstrous" threat. Is that the way the world works or is that simply how The Economist views it?

Farrell and Shalizi's conclusion is that the consequence of LLMs "will involve the modest-to-substantial transformation, or (less likely) replacement of their older kin." Building on their essay, Daniel Davies suggests that they "are basically telling us that the current generation of AI should be seen first and foremost as a new technique for the amplification of management capacity. And this means that it’s likely to drive significant organisational change, of one kind or another. The robots are perhaps neither our overlords nor our colleagues; perhaps they’re a bunch of management consultants that we’ve hired." This emphasis on information management is reasonable given the nature of LLMs, but it also implies that the application of this new technology is likely to be biased towards those older social technologies where clarity is required, i.e. bureaucracy and democracy. Its application in the realm of the market, where the occult is lauded, seems much less plausible. Farrell and Shalizi's suggestion in The Economist article that LLMs might deliver on the promise of central planning is clearly intended to invite the derision of the paper's readers.

Davies' metaphor of the management consultant might suggest possibilities in the private sector, but we need to remember that the highest profile use of such resources as a transformative agent has long been in the public sector, under the rubric of "reform". And much of what that consultancy boils down to is the need to mimic the "best practice" of the commercial world as a preparatory stage towards outsourcing, not least the introduction of pricing and internal markets. The metaphor should also alert us to the likelihood that LLMs will promote a particular brand of management, just as the likes of McKinsey and Boston Consulting do, if only because the corpus of data from which they derive their suggestions will be dominated by that particular tradition. And this brings us to another key point, that of language. A paradox of LLM-based AIs in popular commentary is that language itself appears to be incidental, rather than foundational, which reflects the anglocentric nature of the technology industry and indeed the preponderance of English text online.

But LLMs do not operate at the level of a Chomskyan universal grammar, so the idea that this can lead to an artificial general intelligence (AGI), with the emphasis on "general", is obviously flawed. At best you can talk about a general English intelligence, beside which the Spanish and Mandarin equivalents will be puny. If AI presents an existential threat, it is probably to the idea of the monolingual non-English speaker, not to humanity at large, but that threat has existed for some time now and has been significantly advanced by the spread of the Internet. The dominance of English not only in business and culture but in information processing itself long ago meant that the social technologies of the market, bureaucracy and democracy were stamped as indelibly anglocentric. This isn't even a new thought. The recognition that systems of knowledge (epistemes, in Michel Foucault's phrase) govern what we can know and express has been around since the 1950s and the emergence of structuralism, while the recognition of the inherent bias of such systems (the conditions of the possibility of knowledge) is the foundation of post-structuralism.

Where Farrell and Shalizi's lineage is perhaps lacking is in its recognition of the variety of those older social technologies and how much that variety has reduced over time due to the growing dominance of English and information management. If we think about an LLM more realistically as a language corpus rather than as an "intelligence", then the transformation they imagine AI leading to has been underway for decades. The twin recognition of historic variety and the tendency towards homogenisation can be seen in the way that national bureaucracies, reflecting different cultural norms, have gradually converged in their practices. But crucially the agent for that convergence has been the market, and specifically an Anglo-Saxon conception of it, most obviously in the form of New Public Management and other attempts to instil "commercial rigour". Likewise the practice of democracy has changed over the last 50 years, notably in the electorate's understanding of whose interests a vote is meant to serve. One relationship Farrell and Shalizi didn't highlight in The Economist is the way that the market has promoted the notion of the self-interested voter as a consumer of "retail policies", replacing the older idea of the altruistic voter considering what is best for her nation, locality or class.

In other words, and counter to Farrell and Shalizi's story, there hasn't been an equilibrium between the market, bureaucracy and democracy since the 1970s. The first has successfully dominated and altered the latter two. Do LLMs offer the possibility of a counter-movement to that encroachment by the market? Perhaps a better way of putting that is to ask: what social values are inherent to a predominantly English-language corpus built on the Internet since the early 1990s? Do we think that the distilled essence of the blogosphere and social media inclines towards justice? This week Mark Zuckerberg has claimed that his new Threads social media platform will promote "kindness" rather than the "hostility" of Twitter. Given that the platform is largely just a rebadging of Instagram to make it look like a microblog, you have to understand "kindness" in terms of the relentless visual consumption and curated feed of the photo-centric platform, i.e. a perfect expression of an advert-led market, and "hostility" as a synonym for politics, i.e. an expression of messy plurality and dissent. It would appear that the monstrous market continues to dominate and has already absorbed the transformative potential of LLMs. This is not an equal fight.
