Monday 18 March 2024

AI, Comparative Advantage and Natality

Beyond the banality of stochastic parrots and their "hallucinations", AI exists as a socio-political thought-exercise: a what-if. As has been the way since the emergence of sociology and its creative cousin Science Fiction, this encompasses both dystopian hell and utopian heaven. Perhaps the most obvious combination of the two is the idea that AI will take all the jobs, delivering either something akin to The Matrix, where humanity is reduced to its utility as fuel, or to a world of leisure and ease in which we can all pursue our talents and interests: "to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic." We have now reached the stage where official projections for growth routinely factor in AI as a magic ingredient. But beyond the glorified press releases of business consultancies predicting that AI will "impact" half of all jobs (consider what an accurate prediction of the impact of electricity would have been), there hasn't been much discussion of the mechanisms. In other words, how in practice will AI spread through the economy, and how will employment patterns respond?

This is odd insofar as we have no shortage of historical data on the way previous technologies were deployed and how they reconfigured society. We all understand how the combustion engine substituted for horses and how that resulted in grooms being replaced by mechanics, and more recently we have seen how the technology of logistics has allowed employment in developed economies to shift from the primary and secondary sectors to the tertiary. The optimistic take on AI is that we'll see something similar: old jobs being replaced by new ones and aggregate wealth increasing, which translates to higher wages and greater purchasing power. If there is a fly in this ointment, it relates to the relative narrowing of wages between different parts of the world as industry reconfigures optimally. The middle class expands in the Far East while its peers in the American Mid-West stagnate, but at a global level there is aggregate growth. AI probably won't have a differential impact by geography, as raw material extraction and manufacturing do, but it will have a differential impact along cognitive lines, which is why the more pessimistic prediction is for a decline in white-collar employment.

Noah Smith is to be found on the optimistic side of the debate (as usual), but while his just-so stories of neoliberal progress can make you grind your teeth, he has made a useful contribution by trying to explain how the mechanism might work during the transition. He starts by outlining the negative view: "humans will have nothing left to do, and we will become obsolete like horses. Human wages will drop below subsistence level, and the only way they’ll survive is on welfare, paid by the rich people who own all the AIs that do all the valuable work. But even long before we get to that final dystopia, this line of thinking predicts that human wages will drop quite a lot, since AI will squeeze human workers into a rapidly shrinking set of useful tasks." In answering this, Smith's core point is that AI is not limitless. It will be constrained by computing power and energy - i.e. material resources. This will raise the opportunity cost of using it for low-value tasks that could be done by humans. Relative opportunity cost means that it will still make sense for humans to do jobs and be well-paid for them while AI concentrates on the really important stuff.


One of the features of economics (which proves that it is a social science rather than a hard science) is that many of its theories cannot be proven. This is not simply about the crisis of replicability, which typically affects microeconomics, but the difficulty of conducting real-world empirical trials at the macroeconomic level (hence the delusion that micro-foundations can be used to extrapolate macro policy). However, there are a few theories that are demonstrably true because the global economy provides a reliable test environment. One obvious example is the gravity theory of trade, which posits that it is cheaper to trade with near neighbours than far-off countries due to relative transportation costs. Despite the best efforts of Brexiteers to claim that technology has abolished distance, this clearly still holds true. The relevant theory for Smith's intuition about AI is comparative advantage, which is also demonstrably true for reasons to do with differential endowments - e.g. it makes more sense to grow bananas in the Caribbean and oats in Scotland than vice versa, and likewise someone with a high IQ and someone with lots of muscle power will gravitate to different jobs best suited to their abilities. So AI can concentrate on curing cancer rather than trying to win the Nobel Prize for Literature. 
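
To make the arithmetic behind this concrete, consider a minimal Ricardian sketch with entirely made-up numbers (the tasks and figures are illustrative, not drawn from Smith or anyone else):

\[
\begin{array}{lcc}
 & \text{Frontier research} & \text{Routine admin} \\
\text{AI (output per unit of compute)} & 100 & 50 \\
\text{Human (output per unit of labour)} & 1 & 10 \\
\end{array}
\]

The AI has an absolute advantage in both tasks, but its opportunity cost of a unit of admin is two units of research forgone, while the human's is only a tenth of a unit. Total output is therefore maximised when the AI specialises in research and humans keep the admin, which is the Ricardian point Smith is leaning on.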

One issue with Smith's model is that AI will produce greater rates of growth in those areas that it addresses. This is partly due to the compounding effect of the technology itself, but also because the movement of humanity into services that cannot be cheaply automated will necessarily intensify the unbalanced growth between the two sectors. William Baumol noted the effect by which low-productivity sector wages rose because of the competition for labour from high-productivity sectors. But if the AI sector isn't competing with the human sector for labour, there is no reason to think that wages will be bid up. In other words, AI may not make humans redundant but it may lead to further wage stagnation because of that unbalanced growth. The best argument against this is that some of the fruits of AI must be recycled into wages simply to keep the system from collapsing (Smith envisages this in the pejorative terms of "welfare", i.e. UBI, though this is functionally no different to in-work benefits or a minimum wage), but that then emphasises the issue of distribution, which comparative advantage does not circumvent. That gal with the high IQ is likely earning more than the guy with the muscles because there are more people with the latter than the former, so basic supply and demand leads to different wages.
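
Baumol's mechanism can be stated in one line. If w is the common wage that competition for labour enforces across sectors and a_i is labour productivity in sector i, then (in a stylised rendering, not a quotation from Baumol) relative prices track unit labour costs:

\[
\frac{p_{\text{stagnant}}}{p_{\text{progressive}}} = \frac{w / a_{\text{stagnant}}}{w / a_{\text{progressive}}} = \frac{a_{\text{progressive}}}{a_{\text{stagnant}}}
\]

The cost disease only bites because the same w appears in both numerators: the high-productivity sector bids the wage up and the low-productivity sector has to match it. If an AI sector buys compute rather than labour, that shared w never gets bid up in the first place, which is the stagnation point above.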

So perhaps the dystopia of AI is not that we all lose our jobs but that the economy becomes even more unequal in its outcomes. The jobs that AI won't do will include both grunt work and highly-valued work, but at the aggregate level of the economy we may end up with many more of the former relative to the latter than was previously the case. Another way of thinking about this is to note that wages (returns to labour) have declined while asset wealth (returns to capital) has increased since the 1970s without the input of AI. We've had plenty of wage stagnation, particularly since 2008. If there is a more fundamental and powerful force at work (let's call it neoliberalism), the question then becomes how will that force accommodate AI? It's a reasonable assumption that it will reinforce or even exacerbate the existing trend towards wealth inequality and wage stagnation. In theory AI could disrupt this trend, but then any number of technological breakthroughs since the 70s that were characterised as disruptive turned out to actually reinforce neoliberal political economy (that is, in the real-world version rather than the textbook fantasy, e.g. the growth of monopoly), so there's little reason to think this episode will be any different.


The error is to assume that AI will carry all before it, reconfiguring society in its own image (hence the hysterical colouring of much commentary), but technologies are moulded by society as much as they do the moulding. Marx noted that "The hand-mill gives you society with the feudal lord; the steam-mill society with the industrial capitalist", but the reality, as Karl Polanyi countered, is that society also gave us the Factory Acts. If AI really were disruptive of the fundamentals of the economic system, the counter-movement would focus on the protection of the key factors of production, such as land, labour and capital, but to date the focus has been on the enforcement of propriety around generative deepfakes and the threat to the traditional media's interpretation of truth. If this carries an echo of the counter-movement of the nineteenth century, it is more one of Christian revivalism than of social progressivism. What this suggests is that AI won't be anywhere near as impactful as either the optimists or pessimists predict, but it also suggests that its greatest impact may be to exacerbate and accelerate existing trends.

One of those trends, which can be directly linked to the way that neoliberalism has expanded the logic of the market into the sphere of the family, is falling birth rates. As Steve Randy Waldman notes, natality is one area where comparative advantage has to cede to human emotion. Though much of child-rearing is handed over to specialists, most obviously teachers, it remains essentially a form of "cottage production" that struggles under the neoliberal logic of competition: "The relationship between wealth and natality is nuanced. When wealth is certain, its increase is likely pronatal, as people can bank on greater resources to cover the burdens of childrearing. But when wealth is uncertain, when it is delivered via tournaments that deliver outsize rewards to winners, then increases in 'expected' (meaning average) wealth likely translate to decreases in natality. The bigger the prize, the greater the cost of anything that will reduce your chance of winning." There should be no surprise that the current iteration of the Californian Ideology emphasises both the dramatic potential of AI and the necessity of pronatalism.

My guess is that AI won't become a general purpose technology (GPT) on a par with electricity or the Internet. This is because it will be too expensive, and that in turn is because there is no upper limit to its ambition, even if there are hard limits in the form of silicon and energy. You only need so much power and bandwidth for most tasks, so GPTs like electricity and datacoms only need to be good enough. We already have good-enough AI, but its fruits are underwhelming. Consequently, AI resources will increasingly be focused on specialist areas where its potential is greatest, e.g. more medical imaging rather than chasing the dream of fully autonomous vehicles. Noah Smith is right in his emphasis on comparative advantage, but as a neoliberal he assumes that the invisible hand of the market will optimally decide how to allocate those scarce AI resources, rather than it being a political decision that will be taken to reinforce neoliberalism itself. Wealth inequality will continue to grow, even if AI companies replace oil corporations and technology manufacturers on the stock exchange, and the symptoms of that inequality, notably falling birth rates, will continue to elicit angst and ineffective amelioration.
