Friday 16 June 2023

Who Wants to Regulate AI?

The short answer is the large companies that want to protect their initial advantage. In understanding this paradox, it is worth considering the wider history of regulation, given that for all its revolutionary promise and the talk of it as a general-purpose technology (GPT), like iron smelting or electric motors, AI is simply a novel application of existing, poorly-regulated technologies whose "product" is a variety of software commodities. The narrative of industrial regulation, from the UK Factories Act of 1847 onwards, is conventionally presented as the state, on behalf of society, constraining the anti-social practices of business. Karl Marx even hailed that legislation, which limited the working day to ten hours, as a victory for the working classes: "it was the victory of a principle; it was the first time that in broad daylight the political economy of the middle class succumbed ignominiously, ludicrously, before the political economy of the working class".

But while the regulation of labour and the working environment would develop in a pro-social direction, not least through the efforts of trade unions, it remained marginal to the bulk of state regulation, which was focused (ostensibly) on the protection of consumers rather than producers. Under neoliberalism there has been a steady erosion of workers' rights and protections through deregulation, but there has been no comparable reversal of consumer rights, despite much propaganda about the freedom of choice. If anything, regulation has increased in scope. The bonfire of red tape that Brexit promised to deliver has proved a damp squib, not simply because it isn't in the interest of British companies to diverge from international standards and thereby jeopardise trading relations, but because a regulatory free-for-all lowers the barriers to entry and thus risks promoting disruptive competition in domestic markets.

This gets to the nub of the matter, which is that regulation is invariably biased in favour of incumbency: the aim is not to disrupt an industry by forcing non-compliant firms out of business but to cement the incumbents' position by moulding the regulations to suit their preferred practice. This doesn't mean those regulations are toothless or irrelevant, any more than the 1847 Act was a con that left workers worse off, but it does mean that regulation necessarily accommodates those who already dominate an industry. In the case of AI, the companies that currently dominate the space are the established major technology firms, such as Microsoft and Google. This is barely questioned, even though there is no reason to believe that artificial intelligence should be limited to the paradigm of software, or that large companies whose expertise (and profit) lies in providing business services and advertising should be its natural progenitors.

AI is more than Excel with knobs on, or a weirdly intrusive version of AdWords, but it's also better thought of as different in degree rather than kind. The term "AI" is obviously misleading: there's no actual intelligence and therefore nothing artificial about it. Under the covers, AI is simply a set of machine learning (ML) algorithms trained on massive datasets. Though the algorithms are the intellectual property, the asset is the data, and that asset has been built by you and me: it's simply the Internet. As that has grown, and as advances in computing power have allowed for more complex algorithms, so AI has emerged like a butterfly from the ML chrysalis. The algorithms themselves are not revolutionary either, being essentially geared to pattern recognition and statistical prediction. As such, they are pretty obviously an outgrowth of financial "quants" and even DSGE models, hence the more cynical dismissal of a product like ChatGPT as a "stochastic parrot". AI is fintech.
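To make the "stochastic parrot" point concrete, here is a toy sketch in Python (the corpus and function names are invented purely for illustration, and real large language models use neural networks over vast token vocabularies rather than word counts): a bigram model that tallies which word follows which in its training data and then parrots back samples from those tallies.

```python
import random
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count how often each word follows each other word in the corpus."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=10):
    """Emit text by repeatedly sampling a statistically likely next word."""
    word, output = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break  # dead end: the word never appeared mid-corpus
        word = random.choices(list(followers),
                              weights=list(followers.values()))[0]
        output.append(word)
    return " ".join(output)

# A deliberately tiny, made-up corpus standing in for "the Internet".
corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))  # e.g. "the dog sat on the mat and the cat sat"
```

Scale the corpus up to the public Internet and the counting up to a neural network with billions of parameters and you have, in kind if not in degree, the same trick: a statistical prediction of what comes next, with no understanding anywhere in the loop.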


The political reaction to the emergence of AI, and in particular the "Stop me before I kill again" warnings of various AI "pioneers" and "concerned industry insiders", has been both a predictable demand for state regulation and a jockeying for position as the regulator for what will inescapably be a global industry. The Conservatives have insisted that the UK is not Ruritania, while Labour has decided AI developers need licences, proving that they understand regulatory arbitrage no more than they understand technology. At the edges we have seen the UK security services make their usual pitch, claiming that terrorists might use AI to groom the neurodivergent, and Tony Blair wade in with another business buzzword-heavy report from his institute. I don't know which is more delusional: his belief that AI can provide a "new national purpose" or the suggestion that "Requiring generative-AI companies to label the synthetic media they produce as deepfakes and social-media platforms to remove unlabelled deepfakes" could possibly work.

Where Blair gives the game away is in his proposal that the state must take the lead in "Building AI-era infrastructure, including compute capacity and remodelling data, as a public asset with the creation of highly valuable, public-good data sets". Stripped of the turgid language, this sounds pretty much like a plan to harvest existing public datasets, such as the clinical information held by the NHS or the educational data built up since the introduction of the national curriculum. Insofar as the UK has a possible edge in the emerging market of AI regulation, it is in its ability to offer access to such rich and coherent datasets as a quid pro quo, something no equivalent regulator in the US could offer. And if there is one thing that distinguishes it from other European countries with equally rich public datasets, it is the willingness to sacrifice the digital rights of its citizens in pursuit of public-private partnerships.

There has been plenty of pushback against both the apocalyptic hyperbole of the tech companies and the cynicism of politicians, but this too has been problematic because of the tendency to focus on horses that have long since bolted, such as the biases built into training data, or on the disruptive impact on jobs, which is going to happen come what may because it always does. To note the damage companies like Microsoft and Google have already done is all well and good, though this can easily slip into a more general techno-pessimism, or simply fuel the existing campaign by media incumbents to denigrate social media and more democratic forms of critique. But the fact that these industry giants have been only minimally constrained over the last three decades (the EU's various competition and data protection rulings have never seriously dented their share prices) does not suggest that states are about to get tougher with them. If anything, what is being proposed is closer cooperation.

In this respect there is obviously a shared interest in hyping both the potential and therefore the risks of AI. For politicians, it offers a Get Out of Jail Free card, whether real or imagined, like North Sea oil or the sunny uplands of Brexit, while for the tech companies it bolsters their claims to be at the forefront of world history at a time when their profits are softening and market valuations depend on assuring investors they have a de facto monopoly on the next big thing. As Cory Doctorow put it, "For those sky-high valuations to remain intact until the investors can cash out, we need to think about AI as a powerful, transformative technology, not as a better autocomplete." What we certainly don't need is to look behind the curtain, hence the scant media attention paid to the armies of low-paid workers who toil away in the AI data-mines labelling the training data to improve the quality of answers, or to those unwitting Internet users whose "free labour" augments that data. We haven't moved that far on from the ten-hour working day.
