In parallel with the media fears that AI will bring about the collapse of Western civilisation, or at least the loss of jobs among journalists and commentators, there has been a countervailing, more hopeful narrative that AI may in fact save us from the horrors of social media, or at least its negative impact on journalists and commentators. The latest to make the link is John Burn-Murdoch in the Financial Times. The headline and lede provide a summary of his argument: "Social media is populist and polarising; AI may be the opposite. Large language models elevate expert consensus and moderate views, in sharp contrast to social platforms". He doesn't offer any evidence for the claim that social media is populist, or even explain what that means in this context. The subject of political polarisation plays an important ideological role in the US, as the corollary of "bipartisanship", hence it is there we find the best evidence for historic trends. And what these show is that polarisation between Democrats and Republicans started to increase in the 1970s for well-known political reasons: the decline of the postwar social and economic consensus and the deliberate embrace of divisive anti-state rhetoric by the Republican Party.
Social media may have helped amplify that polarisation over the last twenty years but it didn't cause it, so the idea that the technology is inherently polarising is unproven, while the claim that it is populist is simply a category error. In fact, there is evidence that social media increases exposure to different viewpoints: that structurally it tends towards diversity rather than the uniformity of the filter bubble, and that it is traditional media that has more consistently amplified political polarisation (e.g. the New York Times or Fox News). This makes sense when you consider that the consumer of social media has far more (potential) control over what they see and read, despite all the tales of malign algorithms, than the consumer of a tightly-edited newspaper or TV programme. Burn-Murdoch's second claim, that LLMs favour expert consensus and moderate views, assumes that these are related: that the one gives rise to the other. The infamous case of climate change, where the expert consensus has been undermined by traditional media airing the views of lobbyists and motivated sceptics in the service of "balance", suggests otherwise.
The underlying belief of the hopeful narrative is that LLMs avoid the structural bias and partisan editorialising of traditional media because of their omnivorous nature and because they lack the status consciousness and condescension of human experts. In contrast, social media is problematic because it airs the uncurated opinions of millions, many of whom are idiots. In this worldview, LLMs embody the wisdom of crowds while social media embody the madness of crowds. The idea that media have these inherent epistemological qualities is evident in Burn-Murdoch's potted history, which is worth quoting at length: "Every media revolution has transformed who distributes information, what messages are distributed and what form they take. As such, some media are fundamentally democratising and polarising, widening the pool of publishers and views beyond a narrow elite and amplifying radical and anti-establishment voices. TikTok and the printing press arrived almost 600 years apart but share these characteristics. Others push the opposite way: radio and television had high barriers to entry, creating a monopoly for the voices and views of elites and experts."
The idea that the media changes whose voices are heard is crude technological determinism. The reality is that new technology is absorbed into existing power frameworks. There is feedback from the one to the other and thus change - newspapers gave rise to press barons, for example - but the power framework is dominant and adapts. This is evident in the fact that capitalists control most social media platforms and AI chatbots, an outcome that surprises no one. Equally, few people question whether AI must necessarily follow a capital-intensive development path. We worry about covering the Earth in data centres, but alternative paths are unthinkable, particularly those that would democratise decision-making (that would be populist). Burn-Murdoch's yoking of "democratising and polarising" should raise eyebrows, but we should also remember that "moderate" does not simply mean average. The word comes from the Latin for controlled. And it is control which commends AI to centrists rather than its tendency to "elevate expert consensus", just as the valorisation of such concepts as consensus, bipartisanship, civility and the like is ultimately about ensuring that political discourse is kept within strict bounds.
To return to Burn-Murdoch's history lesson, the moveable type printing press, when introduced in the mid-15th century, was a very expensive and initially rare piece of technology that required a team of craftsmen and labourers to operate. It was the IBM mainframe of its day. It was also quickly put under state control - e.g. the Stationers' Company monopoly in England. The idea that ordinary people could access and make use of the press, in the way that they can with a social media platform like TikTok today, is absurd. Cheap prints (chapbooks) did not arrive in any great numbers till a century later and were still subject to pre-publication censorship until the lapse of the Licensing Act in 1695. And while there was a radical fringe, particularly in respect of religious nonconformism and political dissent during the 17th century, most chapbooks were little different to the popular press of later eras, their content dominated by tall tales, true crime and bawdiness.
While television transmission had high barriers to entry, radio did not. Amateur operators ("radio hams") were a feature from the 1920s onwards, which was hard on the heels of broadcast radio's expansion following the introduction of vacuum tube receivers. It's certainly true that the bulk of broadcast spectrum, and the listening audience, was quickly taken over by large commercial firms and state corporations, like the BBC, but radio was always a more democratic medium than both television and print (for most of its history). Even today, despite the impact of the Internet and the decline of radio as a hobby, there are over 100,000 amateur radio licences held in the UK. Understanding the history is important because it highlights how the existing power framework (the role of the state, the dominance of capital) absorbs the new media. But it also highlights how those media can be adopted and potentially repurposed by the people (democracy).
Burn-Murdoch's central claim is "that where social media’s inherent mechanisms push towards personalisation and fragmentation, LLMs are innately “converging” — their underlying dynamics push them towards objective reality". He sets out to prove this by comparing the responses of AI chatbots on political topics to those of the general population: "I found that while different AI platforms behave in subtly different ways, all of them nudge people away from the most extreme positions and towards more moderate and expert-aligned stances. On average, Grok guides conversations about policy and society towards the centre-right — a rightward push for most people but a moderating nudge towards the centre for those who start out as conservative hardliners. OpenAI’s GPT, Google’s Gemini and the Chinese model DeepSeek all exert similarly sized nudges towards a centre-left worldview — a slight leftward nudge for most people but a moderating push away from fringe leftwing positions."

The data he provides to justify this is questionable. The profile of the general (US) population employed in his charts suggests that Americans are mostly to be found left of the political centre and predominantly at the left extreme (the y-axis is responses). After some toing-and-froing with him on Bluesky, it became evident that the source data he was using was designed to accentuate differences between Democrat and Republican voters and that the far left position was essentially that of Barack Obama. It's hard to avoid the suspicion that this converging is simply the product of an LLM-based AI chatbot lacking intentionality and simply tending slightly towards a median position, which he describes as "objective reality", despite an LLM being at one remove from reality. More interesting is to wonder why the chatbot doesn't converge to a greater degree. In other words, why don't we see a normal distribution (a bell-curve) in which the moderate position is predominant?
That would be the actual "opposite" of the supposed polarising effect of social media.
One explanation is that AI's ability to counter the anchoring and confirmation bias that users bring to it is undermined by its desire to be agreeable. There is a commercial rationale to this. People won't use a tool that is confrontational and repeatedly tells them that they're wrong, a problem well-known in areas such as public health policy where expertise is often viewed with suspicion (think vaccines or diet). As Dan Williams puts it, "Being human, experts are often biased, partisan, and simply annoying, and when they seek to “educate” the public, it can be perceived—and is sometimes intended—as condescending and rude. In contrast, LLMs deliver expert opinion without such status threats." This tendency towards sycophancy is well-known. Williams recognises the risks it entails, and the related risk that personalisation may simply reflect the idiosyncrasies of users, but ultimately he thinks "LLMs will produce much more reliable, expert-aligned information than most of these real-world alternatives [i.e. traditional media and information sources], even if sycophancy and personalisation introduce genuine biases."
I suspect the key for Burn-Murdoch and other political centrists is not that AI "elevates expert consensus and moderate views" but that it marginalises what he describes as extreme or fringe positions: "In addition, I found that while conspiratorial beliefs about topics including rigged elections and a link between vaccines and autism are over-represented among people who post to social media relative to the overall population, the opposite is true of AI chatbots, which almost never express agreement with these claims." But some of today's fringe opinions may turn out to be right. LLMs are expressions of conventional wisdom, but that means they will certainly be wrong about many things because expert opinion is currently wrong or incomplete. That he cites rightwing opinions on election rigging and vaccines is interesting, as not a few leftwing "conspiracy theories" have been proved right of late. In fact, many critiques from the political left have been categorised by centrists as conspiracy theories solely in order that they can be dismissed. Ironically, this has led to many centrist conspiracy theories, such as the prevalence of antisemitism on the left.
Noah Smith offers a typically more trenchant view when he claims that "the people who create LLMs have difficulty imparting their political bias to their creations", but also thinks that "Because of the way they’re trained, LLMs will be a force for homogenization and moderation of opinion", which is just another way of saying that they will promote an orthodoxy. As ever, centrism is deemed to be beyond ideology and therefore bias. It's just common sense, or Burn-Murdoch's "objective reality". Smith's claims are contradicted by Burn-Murdoch's data which show that the chatbots in his study do exhibit a bias consistent with the preferences of their owners (thus Grok is clearly more conservative while the others are more liberal) and that they maintain the (apparently) polarised distribution of the general population, despite "nudging" to the centre. In other words, the evidence actually points to the marginalisation of heterodox opinions more than it does to homogenisation.
The confidence displayed by these supporters of the hopeful narrative has to be read in the context of the last 18 years, since the financial crash of 2008. What that event, and the subsequent failure of austerity, showed was that the political centre was bereft of ideas. It was unable to satisfactorily explain why financialisation was always doomed or why neoliberalism would always tend towards greater inequality without conceding ground to the left, and it had no coherent response to the rise of rightwing anger and bigotry, falling between the stools of pandering ("legitimate concerns") and contempt ("deplorables"). The traditional arguments of centrism - of moderation, technocratic pragmatism and the "third way" - no longer work. The problem that centrist politicians face is not that they are poor communicators, a la Starmer, but that they have no convincing story to tell, a la Macron or Harris.
The belief that AI may help nudge the population towards more moderate views is a counsel of despair. The democratic ideal of a Habermasian discourse has given way to the subconscious sculpting of opinion through a technology dominated by the rich. This is little different to the ideological role played by earlier media, such as newspapers and TV, even if it takes a more subtle form. For all the talk of "expert consensus" and "moderate views", what matters is simply the marginalisation of views beyond the narrow bounds of centrism. The role of social media in this, or more accurately the caricature of social media as a cesspit of malign propaganda and wilful ignorance, is simply to provide a "worse" alternative that flatters AI by comparison. To that end, the myth of the filter bubble is joined by the myth that social media is inherently polarising and even "populist". AI won't revive the centre by stealth, and it won't marginalise the "extreme" left any more than traditional media have done, but it may well put a few journalists and commentators out of work.