Friday 12 July 2024

The AI Hype Cycle

Gartner, the American technology research consultancy, launched its Hype Cycle in 1995 at a time when there was a lot of new hardware and software being punted towards its corporate clients. The original value of the cycle was that it allowed Gartner to visually represent the relative maturity of various technologies within the wider cycle of business adoption, not unlike a wine vintage chart (too early, drinkable now, past its best). As the somewhat cartoonish graphic indicates, this was pitched at non-technically-literate executives. The origin story told by Jackie Fenn makes clear that the purpose was to advise on the timing of adoption, hence the research note in which the image was first published was entitled "When to Leap on the Hype Cycle". The absence of "whether" simply reflected that Gartner's business model was to sell consultancy to firms that were in adoption mode. This in turn reflected the times: not simply a point at which the World Wide Web was taking off but the latter stages of the great IT fit-out of the corporate world that commenced with the arrival of PCs and mini-computers (what would become known as "servers") in the 1980s.


The Hype Cycle quickly became a fixture not only for Gartner but in the culture of business and even wider society. But as it spread, an idea developed that this was actually the common life-cycle of emergent technologies: there would always be a period of hype followed by disillusion followed by the payoff of improved productivity. You can see the problem with this by considering some of the examples on that first graphic. Wireless communications never really had a peak of inflated expectations because the underlying technology (radio) was mature and the benefits already proven. Bluetooth was frankly a bit Ronseal. Handwriting recognition turned out to be a solution in search of a problem, hence it never made it to the plateau of productivity. Object-oriented programming (OOP) turned out just to be a style of programming. Insofar as there was hype, it resided in the promise of code re-use, modularity, inheritance etc, which were all crudely understood by executives to mean a shift from the artisan approach to the production line and thus fewer, lower-paid programmers. The explosion of the Internet and the consequent demand for programming killed that idea off.

It's difficult to say on which side of the peak of inflated expectations we are today in respect of AI - i.e. artificial intelligence but more specifically large language models (LLMs). If I can be forgiven for introducing another term, we may well have reached the capability plateau of AI some months ago, possibly even a couple of years ago. This is because we are running out of data with which to populate the models. All the good stuff has been captured already (Google and others have been on the case for decades now) and the quality of newer data, essentially our collective digital exhaust, is so poor that it is making generative AI applications dumber. The current vogue for throwing even more computing resources at the technology, which has boosted chip suppliers like Nvidia and led to worries that the draw on power will destroy all hope of meeting climate targets, reflects the belief that we can achieve some sort of exponential breakthrough - who knows, perhaps even the singularity - if we just work the data harder.

The appearance of financial analyst reports questioning whether AI is worth the investment strongly suggests that the trough of disillusionment may be upon us, but for the optimistic this simply means we are closer to the slope of enlightenment. The classic real-world template for this was the dotcom stock boom in the late 90s, which was followed first by the bust in 2000 and then the steady, incremental improvement that culminated in the mid-00s with the iPhone and Android, Facebook and Twitter, and the first examples of cloud computing with Google Docs and Amazon Web Services. For those with a Schumpeterian worldview, the bust was simply the necessary stage to weed out the pointless and over-valued. The Internet eventually became pervasive, generating highly valuable businesses in the realms of hardware, software and services, because it met the consumer demand for killer apps: first email, word-processing and spreadsheets, and then social media, video and streaming. But what is the killer app for LLMs? An augmented search function, like ChatGPT or Microsoft's Copilot, is small potatoes, while the ability to create wacky images with DALL-E 3 is on a par with meme-generators in terms of value.

The wider promise of AI has been that it will replace the need for certain workers, notably "backoffice" staff whose knowledge is highly formalised and whose data manipulation can be handled by "intelligent agents" (to use the terminology, if not the meaning, of that original Gartner graphic). You may remember this idea from the history of OOP. Indeed, you may remember it from pretty much every technology ever applied to industry, starting with the power looms that the Luddites railed against, which were used to depress wages. The standard story is that new technologies do destroy jobs but they create other, even better jobs in turn by freeing up labour for more cognitively demanding (and rewarding) tasks. Of course, from the perspective of an individual business this isn't the case. The promise of greater productivity is predicated on either reducing labour or increasing output. The new jobs will be created elsewhere and are thus someone else's problem. This is a good example of the difference between microeconomics (the rational choice of a single firm to replace staff with technology) and macroeconomics (the impact on aggregate levels of employment and effective demand in the economy as a whole). The story of job substitution is true, as far as it goes, but it is also obviously a consolation: there is no guarantee that the new jobs will actually be better.

The dirty secret of AI is that it requires an ever-growing army of human "editors" (to dignify them with a title that does not reflect their paltry pay and poor working conditions) to maintain the data used in the LLMs. These are mostly "labellers" or "taggers", and mostly employed in the global south. The fact that the jobs are done at all tells you that they must be sufficiently attractive in local terms. In other words, spending all day tagging pictures of cars is probably better paid than tending goats. This means that the anticipated benefits of AI may simply be a further round of the offshoring familiar from manufacturing in the 1980s and 90s, but this time with AI acting as a veil that makes the human reality even more obscure than sweatshops in Dhaka or Shenzhen. Indeed, the veil may be the point for many AI boosters in the technology industry: a way of preserving their idealised vision of a tech-augmented humanity that isn't shared by many beyond their own limited social milieu and geography. Inevitably the boosters also include many who have zero understanding of any technology, like Tony Blair, but who are very keen on the idea of a dehumanised workforce and a disciplined polity.


AI serves as a massive distraction for technocratic neoliberalism. It offers a form of salvation for all the disappointments of the last few decades: secular stagnation is averted, productivity growth picks up, truculent labour is made docile. After all the waffle over the last three decades about the knowledge economy - the need to raise our skills to take advantage of globalisation - it is notable that the promise of better jobs has been downgraded. In a recent "report" by the IPPR think-tank, we are told that "Deployment of AI could also free up labour to fill gaps related to unaddressed social needs. For instance, workers could be re-allocated to social care and mental health services which are currently under-resourced." From spreadsheets to bed-pans. The lack of resources for health and social care is simply a matter of money, i.e. political choice. There is no suggestion that AI-powered businesses will be paying a higher rate of tax; rather, the implication is that AI will fuel growth that will increase revenue in aggregate (again, macroeconomics provides the consolation for microeconomics).

One of the key dynamics of postwar social democracy was the idea that public services depended on a well-paid workforce paying tax. The value created in the economy was funnelled via pay packets and PAYE into the NHS and elsewhere. Neoliberalism broke this model by offshoring and casualising labour. Workers (i.e. average earners) funnel less value to the state, which leads both to greater government efforts to raise revenues elsewhere and to pressure to cut public spending ("cut your coat according to your cloth"). An AI revolution that reduces the number of average-paying jobs (those "backoffice" roles) and substitutes more lower-paid "caring" roles will simply further increase the pressure on the public sector. I suspect that the deployment of AI will not be marked by a slope of enlightenment, let alone a plateau of productivity, but by the determination of the private sector to reduce labour costs and by the determination of the state to trim the public sector as tax revenues decline. AI technology is often comically bad, but in combination with offshored digital peons it is probably already "good enough" to meet those ends. In terms of the hype cycle, we are probably a lot further along than we imagine.
