
Monday, 12 November 2018

Why Don't We Have a Job Guarantee?

Basic income is often derided as impractical despite the evidence of success in past (albeit limited) trials and the willingness of governments to conduct further experiments. In contrast, the job guarantee is advocated as being more realistic (because closer to our belief in the value of work), and tends to enjoy more favourable coverage in the media, yet it has never been successfully trialled let alone implemented. Why is this? One explanation is that basic income experiments are often limited in scope to the unemployed, reflecting an ideological assumption about the desirability of integrating them into the labour market or bypassing the perceived disincentives of traditional welfare. In other words, schemes sold as basic income trials are often no more than attempts to reform existing unemployment benefits in a "market-friendly" way. A universal basic income (UBI) trial would need to address a full cross-section of society (the majority would be employed) and would necessitate a parallel tax and benefits system. In order to gauge pro-social benefits and the wider effects on the labour market, such a trial would need to run for longer than a couple of years.

The problem then is one of scale, and the same challenge has historically limited the opportunity for a job guarantee (JG) system to be trialled. It's worth emphasising at this point that temporary job creation schemes, such as India's rural employment guarantee, or those aimed at the young or long-term unemployed in developed nations, are not job guarantees in the sense that the term is used by JG advocates. The chief feature of the JG is that it provides a permanent option that any adult can choose rather than being a temporary, counter-cyclical programme or a targeted form of relief. This makes it particularly suitable for unemployment black-spots in an otherwise buoyant economy - for example, where a large plant closes or a major employer goes bust in a small town. When a worker is laid off by the private sector, she can immediately get a job provided by the public sector (this may actually still be a job operationally in the private sector, in that the provision of work projects may be outsourced, but her wage is paid by the state). Once the private sector starts hiring again, the worker would be incentivised to move back by the prospect of better pay and a better career fit.

The traditional arguments against the JG are essentially critiques of government: the enormous cost and potential for waste, the state's managerial incompetence and the crowding out of the more dynamic private sector. The solution to government is usually "the market", but this suffers in the case of a JG because the actual jobs market has already passed judgement in the form of unemployment, which leads some on the right to insist that the only solution to worklessness is to abolish minimum wage levels and severely reduce benefits to price labour back into the market. This has proven socially disastrous when tried. While the anti-government argument is plausible, its material significance is rarely assessed. There is plenty of waste in the private sector and the social benefits of maintaining local communities may be considerable, so all of these imputed downsides might be tolerable given the upside benefits of employment and demand stimulus. A more honest criticism would be that capitalism still requires a "reserve army" of the unemployed, whether to maintain its political power (per Michał Kalecki) or to guard against inflation in the interests of rentiers (the NAIRU concept).

A key macroeconomic argument in favour of a JG is that it is a form of automatic stabiliser, maintaining aggregate demand in a recession and unwinding naturally through the labour market during a recovery. In counterpoint to this, JG sceptics fear that the system could become too attractive, thereby making labour "sticky" - i.e. reluctant to move into the private sector when vacancies arise, which would drive up the cost of labour for the private sector and so impede any recovery. The automatic stabiliser then becomes a fetter. While JG schemes often propose payment of the minimum wage, this could still make them more attractive than many private sector jobs because of a less stressful working environment or because of better fringe benefits. For example, schemes proposed in the US would provide access to healthcare, which is significantly better than what most minimum wage jobs in the private sector provide, though some advocates admit that this is intended to force the private sector to offer comparable benefits. In the UK, the NHS means that employment benefits would not be as great a differentiator (though they might be in the case of childcare). Even so, there would still be the possibility that some workers would prefer tidying-up public gardens to delivering pizzas.

The implied dynamic is that workers smoothly move between regular employment and the JG of their own volition, but for this to happen they must generally consider a JG job a second-best alternative to regular employment, yet also be prepared to upgrade from the JG as soon as regular vacancies become available. The wage paid for a JG job must be sufficiently attractive to prevent recourse to residual benefits (assuming these are still available for hardcore refuseniks), while also being sufficiently unattractive to ensure almost no one would stick around once there were employment opportunities in the private sector. This implies significant differentials both between unemployment benefit and the JG wage, and between the JG wage and the minimum wage available in the private sector. The former is easily achieved (there would be widespread support for being parsimonious with those who refused to work) but the latter less so. In an era when low-wage private sector employment is precarious and fringe benefits have largely disappeared, the JG wage would have to be set significantly below the private sector minimum wage to prompt workers to give up the security of a JG job.


One way round this is to make the JG wage variable, turning it up to the minimum wage level during periods of high unemployment and turning it down during periods of private sector jobs growth. But this assumes that labour is largely fungible and ignores the possibility of differential employment growth across sectors and geographies. If construction is expanding, should we reduce the pay of retail workers currently on a JG scheme? If London and the South East is booming, should we reduce JG wage rates in the North to encourage mobility? An answer might be to make rates variable by sector and location, but that is potentially destabilising for workers (it undermines the concept of a "guarantee") and may even end up producing sub-optimal behaviour, such as encouraging people to move to high unemployment areas to guarantee a liveable JG wage. The suspicion is that a JG can only work if it is both parsimonious (to encourage movement back to the private sector) and coercive (to prevent mass non-cooperation in the face of low pay). This might be one reason why it has never been fully trialled.

There has recently been a tendency to consider a basic income and a job guarantee as complementary rather than as alternatives: "the only cost-effective policy for comprehensive welfare is a combination of a modest basic income with job offer by local authorities below the minimum wage." In effect, the basic income would be parsimonious - enough to live on but not enough to flourish - making the JG wage sufficiently attractive to maintain employment, while the differential between the latter and a minimum wage job would encourage a speedy return to the private sector. This appears to combine the best of both worlds: security from destitution, meaningful employment and incentives to re-enter the labour market. A more progressive interpretation of this idea is that "Pairing a job guarantee with a UBI would mitigate the risk that the 'guarantee' would transmogrify under political pressure into a punitive workfare program. Pairing a UBI with a job guarantee would mitigate the risk that we neglect the broader project of integrating one another into a vibrant society, that we let a check in the mail substitute for human engagement".

Leaving aside the assumption that a UBI is anti-social (most basic income advocates emphasise its pro-social benefits), I'm not convinced that a basic income safety net would make the JG less coercive, even if it is no longer technically workfare as the entire income is not dependent on cooperation. The problem is that most of the unemployed would be obliged to accept JG jobs because the basic income would be insufficient, but those jobs are unlikely to be of high calibre because they would be paid at a much lower level than the worst jobs in the private sector. They won't be intrinsically rewarding or entail training in marketable skills. This is by design - the aim is to keep any viable minimum wage jobs in the private sector - but the result is more likely to be demoralising make-work jobs with little social benefit. There is also the challenge of preventing JG jobs being covertly used to service the private sector at a cost below the minimum wage. Given the public sector's reliance on outsourcing, drawing a clear line between public and private jobs is hard at the best of times. If the JG system were highly devolved, as most schemes propose, there would be obvious scope for corruption and abuse.

While basic income has remained conceptually consistent over the years, even as it has been given salience by the growth in precarious work and fears of technological unemployment, the job guarantee has had to reinvent itself to keep up with contemporary concerns. One significant change in JG advocacy is the move away from the idea that it will discipline labour (by institutionalising the reserve army of the unemployed) towards the twin ideas that it will embolden private sector workers in an era of weak unions (because they have an alternative) and provide upward pressure on private sector wages and employment benefits (to encourage mobility). This reflects both the growth in wage inequality and (more locally) the prominence of healthcare as an issue in the US, though there are clearly easier ways to address both problems than using the JG as a Trojan Horse (e.g. reversing anti-union laws and implementing universal healthcare). This latest iteration assumes that JG jobs would be relatively attractive, but the whole premise of the JG as an automatic stabiliser is that they would be sufficiently repellent to avoid sticky labour. It's not obvious that this fine balance can be struck, given the differences across sectors and geographies, while the notion of JG pay rates being selectively varied to address differences and encourage movement into the private sector undermines the idea of an automatic mechanism.

The job guarantee remains a superficially attractive idea that plays to our beliefs that work is intrinsically beneficial, that welfare should entail some quid pro quo, and that maintaining the calibre of labour for the benefit of the private sector is the responsibility of the public sector. All are dubious beliefs, in my view, but they are clearly commonly held. Despite this broad acceptance, proposals for a comprehensive job guarantee system have invariably dwindled into marginal or targeted schemes that look remarkably like workfare or make-work. This probably reflects the dominance of the anti-state argument: a fear that a comprehensive job guarantee system would disrupt the "natural" labour market and lead to the harmful expansion of the public sector. Schemes that attempt to mitigate this fear through public sector jobs that would be non-competitive with the private sector ignore that the boundary between the two is not permanently fixed. Capital will not support a JG system that is in any way attractive to workers, or that restricts its ability to extract rents from the public sector, even if it helps support aggregate demand. As Michał Kalecki noted, it's about power.

Friday, 2 November 2018

Idiot Summer

An Indian Summer is a brief reappearance of warm weather after the first frost. This week has felt like a less welcome reprise of the political silly season, when the news is dominated by chaff during the parliamentary recess. An idiot summer, perhaps. Despite the high theatre of a budget, and despite the best efforts of the usual suspects to revive that feelgood hit of the summer, "Labour at war", more ink appears to have been spilt over the inanity of a supermarket magazine editor losing his job for being disobliging about vegans. That the week has closed with a report that David Cameron is "bored shitless" and considering a return to the House of Commons (the views of prospective voters on the matter seem to be irrelevant) suggests that British politics is becalmed as we all await the coming storm of the Brexit denouement. I could add to this pile of non-stories the rumour that Twitter is thinking about retiring the like button, but instead I'm going to treat it as an excuse for a slightly more serious discussion about our attitudes towards digital property and the antisocial behaviour known as "blocking".

Noah Smith made the point that the like button is not only a handy way of signalling approval, or acknowledging a reply without cluttering up people's feeds, but it is also the only structural support for positivity on a platform that for many is blighted by what he describes as "ambient negativity". What particularly caught my eye was his suggestion that "From conversations with Twitter employees assigned to stop abuse, it seems to me that their main worry isn’t the negativity or threats on the platform. Instead, what frightens them most is the idea that Twitter might be used to create echo chambers, where like-minded people aren’t exposed to contrary viewpoints". The problem with this interpretation is that there is ample evidence that online echo chambers are a myth. Jack Dorsey may well subscribe to that myth, but it seems obvious to me that this is because it provides a useful justification for seeding timelines with "alternative viewpoints" - in other words, overriding the user's own curatorial control to promote recommended (and possibly sponsored) content.

My own view is that Twitter is the best mass-use social media platform currently available, largely because it is the one that most closely approximates real world interactions in all their messy glory. For me, the relentless jeering, preening and snark is evidence of the platform's humanity. For many these characteristics are evidence of its irremediable vulgarity and subversion of social norms, but it is important to remember that those critics are talking about the manners of its users (i.e. the common herd) rather than the technology. Noah is clearly a fan of Twitter, as evidenced by his frequent use (155k tweets since 2011), but his suggestions on how to improve it are an uneasy combination of reasonable changes to usability and a questionable reinforcement of restrictive property rights. He makes three proposals: first, if you block a user, his tweets should no longer be visible to third parties in the replies to your tweets; second, users should be able to lock individual tweets, closing them to replies and so preventing a pile-on; and third, there should be an option to view a list of other users who have quote-tweeted you recently.

The first is retrospective blocking. Users whom you block will not only be prevented from reading your tweets in future, but their replies will be removed from your historical threads. Deleting those replies outright would be easy enough to do, but that would mean erasing the blocked user's own "speech", which would be an infringement of their intellectual property rights. Preserving the replies but suppressing their appearance in a particular thread would be technically costly to achieve because it depends on a particular intersection of two user IDs that has to be dynamically checked. It also raises the same "airbrushing" problem associated with the call for an edit button. If a third party quote-retweeted the reply of a subsequently blocked user to your tweet, should that historical record also be amended, thereby rendering it meaningless? The second proposal enables Twitter to be used for broadcasting (you can potentially do this already if you set up a random code as a muted word and include this in the tweet body - this doesn't prevent replies but you won't see them in your notifications). The third proposal is essentially a canned search to support those who wish to block disobliging quote-tweeters (you can already see quote-tweets by searching for "twitter.com/[handle] -from:[handle]").
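To make the point about the dynamically checked intersection of two user IDs concrete, here is a minimal sketch in Python of the kind of per-request filter that retrospective blocking would require. All the names and the data model are hypothetical, and this is in no way Twitter's actual implementation; the point is simply that the check has to run for every reply, every time a thread is rendered, rather than once at the moment of blocking.

    # Minimal sketch of reply suppression under retrospective blocking.
    # Hypothetical data model: blocks is a set of (blocker_id, blocked_id) pairs.
    from typing import Dict, List, Set, Tuple

    def visible_replies(thread_author_id: int,
                        replies: List[Dict],
                        blocks: Set[Tuple[int, int]]) -> List[Dict]:
        """Suppress (rather than delete) any reply whose author has since been
        blocked by the thread's author; the reply itself still exists elsewhere."""
        return [r for r in replies
                if (thread_author_id, r["author_id"]) not in blocks]

    # Example: user 1 has blocked user 3, so only user 2's reply is rendered.
    blocks = {(1, 3)}
    replies = [{"author_id": 2, "text": "fair point"},
               {"author_id": 3, "text": "nonsense"}]
    print(visible_replies(1, replies, blocks))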


The root issue is one of property rights. At present, any user has the ultimate sanction of deletion over their own tweets, but Noah's proposal would extend this to deleting the tweets of others where they are linked (as replies) to yours. The assumption is that a respondent partially cedes his own rights when he adds to a thread that you originated. Noah justifies this by a parallel with blog owners who can control comments on a post, but this isn't persuasive because in the case of a blog there remains only the one instance of the post. With Twitter, a retweet or quote-retweet creates a new instance not only of the tweet but effectively of the thread. It is also moot whether ownership of an entire thread vests with the author of the original tweet or whether that ownership becomes multiple once others post replies. To turn Noah's parallel on its head, a blog owner can delete hostile comments but she cannot prevent the commenter from incorporating the post into another medium (e.g. a screenshot on Instagram) for disparaging quotation. Selectively turning off replies is reasonable, just as it is when a blog owner chooses to turn off comments on a particular post (though I think muting should be sufficient on Twitter), but I'm not persuaded of the need to selectively prevent retweets.

My own prescriptions for Twitter head in a different direction, away from the sanctity of property. Twitter isn't built for closed networks and can only realistically prosper as a public medium, so its design ought to privilege public rather than private rights. If you want to exert greater control over your statements and the responses to them then you should stick to a blog. For example, while I would retain the ability to mute users I would drop the ability to block them (I've never blocked anyone, but then I am both obscure and devil-may-care). Allowing users to block others at will, often for the most trivial or arbitrary reasons, is performative authoritarianism (that some users seem to get a thrill out of telling their followers that they have blocked someone strikes me as unhealthy and akin to the bullying that they otherwise decry). I don't object to being muted (we have a right to speak but not a right to be heard) but I do object to being told that there are statements in a de facto public realm that I have been explicitly barred from seeing (for the record, I can only think of one person who has blocked me - and I'm still not sure why he did it - so this is a theoretical concern more than a practical one).

There is a general consensus that online harassment and bullying are anti-social, but this is sloppy thinking. Such behaviour is actually social, just as mobs are by definition social entities. The reason we should object to bullying and harassment is that they aim to isolate individuals from society, in the same way that the playground bully seeks to isolate his victim from the sympathy of the crowd. The real anti-social behaviour is blocking, as it seeks to reconstitute society in the image of the individual. To emphasise again: everyone has the right to ignore arseholes (or even the mildly annoying) online, just as you can avoid someone in the offline world, but none of us has the right to impose our own view of who constitutes an acceptable public. You take the public as you find it. Just as the demand for an edit button is a retrograde attempt to control history that has the result of degrading the online "memory" of other users without their permission, so erasing the visible speech of those you have blocked is simply the petty exercise of the power of a tyrant and the digital equivalent of the oubliette.

Friday, 26 October 2018

The Ethical Corporation

When George Osborne took on various jobs after leaving politics few people imagined he was bringing rare skills to his new employers or that he was still motivated by public service. He was clearly trading on his influence and intent on making a lot of money by doing so. In contrast, the news that Nick Clegg has joined Facebook has prompted a plethora of comment focused on his own values and the company's need for greater ethical guidance. For example, Paddy Ashdown "has urged Nick Clegg to stand up for the values of liberalism and democracy at Facebook". Even those inclined to be more cynical about the company's objectives do so by regretting the compromise this entails for Clegg, as if disappointing supporters represented some sort of break from his career to date: "You're better than that, Nick", as Carole Cadwalladr put it. Perhaps the most bizarre (though perhaps only semi-serious) idea is that Clegg will act as Facebook's conscience: "Roman emperors used to employ a slave to whisper in their ear 'You too are mortal'. Zuckerberg may need that role and Clegg may be the person to deliver it".

Clegg has been hired to smooth Facebook's relationships with governments and regulators, specifically to mitigate the threat of legislation to the company's earnings and steer regulation towards its own interests. His utility is primarily in respect of the EU, where he previously worked as a trade negotiator and has many current contacts. The UK is already of minor interest to Facebook (recall Zuckerberg's unwillingness to appear before a select committee hearing) and will be of less interest after Brexit, hence there is little downside to recruiting someone who squandered his political capital and who would have zero leverage with a future government of any stripe. In the US he is largely unknown and superfluous in what remains a benign environment for technology businesses, though he may prove of some value as a less robotic deputy to Zuckerberg in appearances before legislators and there is an expectation that he will be a key player in "shifting corporate culture at a company whose founder announced earlier this year he would 'fix' it".

Clegg's recruitment is part of a wider corporate trend towards the appointment of Chief Ethical Officers. This is partly an admission that burying corporate social responsibility (CSR) within human resources, compliance or marketing functions is not an effective approach if you genuinely want to "shift corporate culture", and partly a recognition that reputational damage is a greater risk for businesses that manage consumer assets and thus rely on their sympathy. As these assets have grown over time with the expansion of financial services and the Internet, ethics has taken on a more important role in providing customers with assurance about the safety of their assets, whether money or data. That Facebook's user base is ageing is no secret and most observers reckon the turn-off for the young is as much to do with the company's heavy-handedness and dubious exploitation of data as it is the heavy-handedness of intrusive parents or the excess of cat videos. In the circumstances, the choice of a "centrist dad" politician in the role isn't likely to make Facebook cool again, but it does mark a conscious political adjustment.


Conventional wisdom assumed that Silicon Valley was Democrat during the Bush and Obama years for no better reason than its use of progressive rhetoric. As its nature has become clearer, from restrictive employment terms through systematic surveillance to political donations, its Republican and libertarian spirit has become more visible. Politically, Clegg helps to provide a liberal sheen that Zuckerberg and his ilk imagine will restore Silicon Valley's bipartisan reputation (Clegg's Orange Book liberalism makes him a mainstream Democrat). This cold calculation is still obscured by the media's desire to paint the industry's leaders as a breed apart: "As one ethical quandary after another has hit its profoundly ill-prepared executives, their once-pristine reputations have fallen like palm trees in a hurricane. These last two weeks alone show how tech is stumbling to react to big world issues armed with only bubble world skills". Employing someone from a different bubble, that of politics, isn't smart if you wish to engage with the "big world", but it makes a lot of sense if your primary concern is legislation.

Ever since the tech titans came to prominence there has been a tendency to treat them as a peculiar sub-species of capitalist, combining the unworldliness of the autistic nerd and the naïve enthusiasm of the teenage Randian. This ignores that a lack of ethical scruples is simply par for the course in big business. The leaders of Silicon Valley are not doing it for the lulz or trying to build a Utopia. They're just trying to make as much money as possible. The image of the industry leader as a man-child allows the apparent absence of ethics to be presented as a matter of maturation: young companies must grow and learn and we should forgive them the odd mistake along the way. But given that ethics is necessarily grounded in society, it is absurd to claim that an individual company, let alone an entire industry, can have emerged without an ethical system and that it needs to be taught right from wrong. Silicon Valley has ethics. It has values and norms. It's just that these aren't particularly attractive to wider society, grounded as they are in presumptions of elite entitlement.

A better approach may be to acknowledge this and seek to uncover and elucidate a company's ethics. Conventionally, this analytical role is fulfilled by a number of independent parties ranging from whistle-blowers through journalists to law enforcement. The problem with Silicon Valley is that the legal framework is formally supportive of whistle-blowers but in reality dedicated to protecting the corporation, while too many journalists have been compromised by the industry and law enforcement agencies have been generally reluctant to intervene. Tech's issue is not a lack of ethics but an insufficiency of invigilation and enforcement. Clegg's appointment is an attempt to keep it that way. The theatre of Zuckerberg's appearance before Congress, like Elon Musk's more recent antics over market-sensitive announcements, suggests that we are some way off Silicon Valley being treated anywhere near as rigorously as other, more established industries where there would be less tolerance for such "immaturity". As with the banks, if more technology business CEOs had been sent to jail for their abuse of power, we might not be debating the wisdom of injecting ethics into the industry.

Friday, 19 October 2018

Is Cannabis the New Oil?

Uruguay became the first country to legalise cannabis in 2013, though commercial sales were only implemented in 2017 and are limited to 16 retail pharmacies. As yet, it remains little more than a trial. The recent legalisation in Canada represents a more significant "experiment", not least because it is a G7 economy, but again it will be highly regulated. In the US, 9 states (plus the District of Columbia) allow the sale of marijuana for recreational use, and 31 states allow it for medical use, producing a legal market worth $9 billion a year. Estimates for the size of the potential Canadian market range from $4 to $6 billion. Were the US to legalise cannabis at a federal level, the national market might be as large as $50 billion, which would be somewhere between video-games and cigarettes. The global market for cannabis is likely to be over $200 billion, out of a total global market for all illicit drugs of over half a trillion dollars (it's obviously difficult to estimate this with precision - some think the global market may be as big as $4 trillion - but one illustrative claim is that drug profits were key to the banking sector's liquidity in 2008).

Aside from the eccentricity of Brexit and the contingency of the Syrian refugee crisis, the long-term move in the developed world has been towards more open borders; and despite Donald Trump's efforts to confect and win a trade war, the secular trend is still towards the freer movement of goods as well. This means that the illegal drug trade is becoming progressively more difficult to interrupt. Combined with the commercial potential, this leads policy-makers to consider whether accommodation might be a better strategy than prohibition. Another macro-level development that is changing the dynamics of the drug market is the rise in synthetics. In the short-term this encourages small-scale production, which leads to problems with quality-control and risk-management (notably avoiding police raids and laundering proceeds). In the medium-term it will encourage the creation of large-scale production facilities in parts of the globe where oversight is slack or the authorities are compromised, fuelling greater trade. In the longer-term it will make the development of major domestic production facilities - i.e. close to the retail market - more attractive both to producers and to authorities prepared to treat drug use as a problem of safeguarding and regulation.

At the micro level, the social distinction between cannabis and legal drugs, such as alcohol and tobacco, has blurred to the point where smoking a spliff is only likely to generate protest because of the smoke while staggering drunkenness is now considered a greater faux pas than being out of it on a park bench. Vaping, with its more sophisticated paraphernalia and dedicated shops, looks like the template for a new retail infrastructure rather than the last gasp of an old one. The emergence of first ecstasy, then skunk and now spice has been framed by the media as primarily a matter of health and safety, with much concern over poor quality control and unscrupulous practices among producers and dealers. There is obviously an irony here given the media's usual attitude to H&S, but it also indicates the power of the notion of consumer protection and the responsibility of the state in this area. While legalisation is probably a long way off in the UK, this has less to do with the power of the press than the historic association of drugs with race and class.


The linkage of drugs with criminality has long been a proxy for race politics, from the "yellow peril" fear of the late 19th century's opium dens to the black and latino association with marijuana. This has also allowed drugs to be presented as an alien plant invasion, a frame of mind that allowed the McCarthyite paranoia of Invasion of the Body Snatchers to live on in official culture. It is no coincidence that the area of the United States most resistant to the legalisation of marijuana has been the Old South, any more than that blacks are disproportionately more likely to be frisked or end up in jail for possession. I doubt the Confederacy will secede over the issue, but the de jure emergence of two Americas seems likely before any federal legislation to legalise marijuana nationwide, a division that will be made concrete in the already disproportionate size of the prison industry across the states. In this light, Canada's initiative clearly owes something to a nation that is consciously multi-racial (at least in the big cities) and where race itself is not a simple proxy for class.

Historically, drugs have been accorded a social status and legal consequentiality based on their class incidence. Cocaine has never lost its upper class "fast set" cachet, and users have usually been treated relatively leniently by the law, while the shift in attitude towards opium and its derivatives from the acceptable laudanum to the unacceptable heroin largely reflected the growing association first with the Chinese in the nineteenth century, then with hardcore bohemians for most of the twentieth century, and finally with housing estates battered by socio-economic change in the 80s and after (and flooded by cheap heroin following the Iranian revolution and the war in Afghanistan - the shape of the "market" is never wholly beyond the power of the state). The treatment of both skunk and spice in the British media has been heavily class-inflected, though it is notable that the former has become less of a bogey as more middle-class kids have suffered psychotic episodes while the latter has become associated with prisoners, the homeless and (for no better reason, it seems, than proximity to TV production in Salford) Manchester.

The call for drugs to be regulated has been growing for years, though it is still limited to peripheral nations that lack clout (e.g. in South and Central America), international bodies that can be safely ignored (including the UN), and former government ministers who no longer control the levers of power. The latter often claim to have seen the light, but their language betrays a persistent neoliberal appetite for the colonisation of a hitherto inaccessible market. Consider Charles Falconer, the former Labour Lord Chancellor: "Above all, we need to take back control of drug supply from the most violent gangsters. And it needs to be done sooner rather than later. … To regulate drugs is to apply the regulatory principles and tools that are routinely applied to everything else to a set of risky products and behaviours that have, until now, been controlled entirely within a criminal economy". Regressive drug policies are increasingly associated with "backward" nations in Africa, Asia and (notably) Russia, and thus with imperfect capitalism.


Philip Collins, who is a reliable weather-vane of neoliberal thinking, has added his voice to the cause in The Times: "Canada follows the US states of Washington, Nevada, California, Massachusetts and Colorado which have all legalised cannabis for recreational use. Prohibition has done nothing to control use but has instead created a criminal supply chain. It has put people through court proceedings who ought to be nowhere near the criminal justice system and it means that drugs come on to the black market without any regulation of their strength". I don't know whom he means by those "who ought to be nowhere near the criminal justice system", but I'm not sure it's 16 year-old black kids from Peckham that he has in mind. Painting the "victims" (and ignoring the many non-users collaterally damaged by drugs) as sympathetic makes tactical sense, but this is close to suggesting that we should legalise cannabis because it is in the narrow interests of readers of The Times.

In a similar vein, Simon Jenkins in The Guardian imagines an epicurean heaven in Colorado in contrast to a hell-hole somewhere behind Shoreditch High Street, or wherever it is he buys his hash: "While Americans are spinning their weed wheels and going to sommelier classes, British buyers are left to the mercy of pub lavatories and grubby street corners. While Canadian consumers can sit in saloons and cafes, safe in the knowledge that what they enjoy is inspected and tested, Britons must run the gauntlet of violent gangs, proliferating from cities to rural 'county lines'. Their activities contribute nothing to the state, and cost it a fortune." This isn't an appeal for legalisation so much as a better class of drug-taking. It is worth remembering that alcohol prohibition in America in the 1920s may have enriched bootleggers but it didn't lead to squalor for consumers, and repeal didn't deal organised crime a mortal blow either. What it did lead to in the US was brewing industry consolidation and consequently poor quality beer for most of the twentieth century.

As a classical liberal, Jenkins emphasises both the material benefits of freedom and the improved moral tone that it entails. Collins, as a neoliberal, can't help making a different dimension explicit: "It is sometimes said that legalisation creates a free-for-all when regulation in fact disciplines supply." While the desire to open up (i.e. appropriate) a large and seemingly inexhaustible market is at the root of the current vogue for legalisation, thereby finding a new frontier for capital in case either data or water fails to live up to its billing as "the new oil", we shouldn't underestimate the attraction of an industry whose regulation will be deemed socially necessary from day one. The last thing that Big Pharma or the state wants is a truly free market, with low barriers to entry and the risk of profits being eroded by competition. While artisan producers and retailers will be promoted initially, if only to establish the meta-brand, the aim will be to use regulatory leverage to advantage larger manufacturers and retail chains. Big Weed is coming.

Monday, 15 October 2018

Varieties of Populism

Populism isn't a form of governance but a style of rhetoric employed when seeking power in a representative political system. It opposes the people to an elite and suggests that the latter are unrepresentative and therefore illegitimate. It is a critique of institutional democracy. Though the term has acquired a pejorative meaning equivalent to demagoguery, it's worth remembering that the most successful populist movements of the last fifty years were those of the insurgent centre. Margaret Thatcher presented herself as the leader of a long-suffering people who sought freedom from an establishment of civil servants and trade unionists, while Tony Blair claimed to lead a "radical centre" that better reflected an emergent "young nation" than a discredited Tory elite and an antiquated socialism. The rhetoric can be sincere - Thatcher remained a populist loose cannon even as she increasingly suffered monarchical delusions - or it can quickly prove to be insincere, as in the case of Blair (though to be fair, the creeping disillusion of New Labour looks tame compared to the speed with which Emmanuel Macron has disappointed the French). The contemporary "populist moment" is less a revival of something thought consigned to the past than the spread of a particular electoral strategy beyond the political centre.

With the centre ground emptied by the intellectual funk of neoliberalism after 2008, attention has focused on the populism of the "extremes", and in particular that of the right. Unfortunately, this is too often confused with what has come to be known as "illiberal democracy", a particular form of authoritarianism in which the regime monopolises power while allowing the formalities of opposition and democratic elections. Illiberal democracy tends to employ the language of right-populism but in a notably paranoid register: it aims to redefine the actual establishment as the tribune of the people against an "other". For example, Viktor Orban presents Hungary as being under threat simultaneously from George Soros, Islam and the EU. What has changed in recent years is that whereas right-populist rhetoric was adopted by dominant parties as they transitioned towards illiberal democracy (recall that both Putin and Orban emerged from the political centre), it is now being adopted by right-wing parties that make no bones about their intention of creating an illiberal democracy, such as in Brazil.

Where right-populism has secured office but the liberal democratic state remains fundamentally unchanged, as in the US, the rhetoric focuses less on the threats to the regime, which are likely to be dismissed as "lame" anyway, than on the celebration of the regime's achievements: "we got a great deal", "we're doing great things". Donald Trump will continue to employ populist rhetoric for as long as he is in the White House, both because he can do no other and because he is still running against the establishment. Where right-populism remains insurgent but not dominant, as in the UK, the rhetoric tends towards hyperbole. This is evident not just among extra-parliamentary populist movements like UKIP, which has started to employ the language of illiberal democracy, but among the established parties of the right who are trying to shore up their electorate or otherwise opportunistically leverage populist rhetoric for positive media coverage (e.g. Jeremy Hunt, the British Foreign Secretary, if you can credit it, claiming that the EU is like the Soviet Union).


Right-populism is based on the presumption of a legitimate demos that is not wholly inclusive (the "decent people" rather than everyone) and the assumption of an irreducible antagonism (the friend/enemy distinction of Carl Schmitt). The "we" of right-populism is essential, hence it usually maps to national identity, ethnicity or social class (e.g. "taxpayers"). There will always be an irreconcilable "them", made up of both enemies within and without, and thus there will always be antagonism. In contrast, left-populism tends to be transactional and material. Where neoliberalism promotes the rewards due to individual effort and qualifies rights with responsibilities, a worldview that is also adopted by right-populists but within essentialist parameters, the left-populist theory of just deserts focuses squarely on collective entitlements: the rights inherent in community, such as housing and a living wage. Left-populism is congruent with democracy and equality, a point made by theorists like Chantal Mouffe, which means that it is a reproach to liberal democracy but not a threat to it. It opposes the people to an oligarchic elite that seeks to curtail those entitlements, including democratic representation, but it differs from right-populism in making the "we" elective and the relationship agonistic - one of struggle - thus holding out the possibility of resolution.

One of the more striking characteristics of the current "populist moment" is that while left-populism has revived the positive language of collective deserts and entitlements in areas such as income and housing, indicating the degree to which neoliberal hegemony has been challenged, contemporary right-populism has moved away from the material towards the metaphysical. Though it still trades in the rhetoric of grievance, it has largely dropped transactional demands in favour of emblematic policies whose material benefits are uncertain. As recently as 2009 in the US, the Tea Party was demanding that the government not bail out subprime mortgage-holders and thereby increase taxes and/or the national debt. While there was an obvious racial angle to this demand, it was also a rationally material one given its supporters' self-identification as put-upon taxpayers. By 2016 the right-populist movement was marching under the banner of "Make America great again", which is no more substantive than a third way campaign slogan. The rich are obviously benefiting from tax cuts and deregulation, but the populism of Donald Trump appeals to voters primarily on the basis of identity rather than material interest.

Going further back, insisting on the maintenance of educational segregation in the US in the 1960s was the defence of a real privilege with tangible benefits. In contrast, advocating lower immigration today is not a policy that will deliver reliable gains, despite talk of tighter labour markets and less pressure on housing and public services, and is likely to have the opposite effect by reducing aggregate demand. You can argue that anti-immigrant policies are proxies for more existential concerns, such as protection against terrorism and crime, or that a sense of community and security is a valuable good in itself, but it is difficult to see contemporary right-populism as anything other than an identity politics increasingly divorced from popular material interests. The same pattern is visible elsewhere, from Hungary to Italy. The people are promised the restoration of national status, perhaps leavened with modest tax cuts and the protection of the benefits of the "decent" (such as pensions), while the rich proceed to loot the public treasury. Beyond the substitution of the rhetoric of national pride for that of personal development, this is still neoliberalism.


Significantly, where the modest material benefits go into reverse, as with Russia's pension reforms, national pride proves to be an inadequate compensation. This suggests that the identity basis of right-populism, its essentialism, may be weaker than liberal critics (who dominate populism studies) have imagined. Rather than the somewhere/nowhere dichotomy and the pathologies of the "white working class", what may ultimately matter are tangible concerns like wages and housing, issues that the political centre has failed to represent except in scolding, moralistic terms (i.e. you have no right to any of this - you must earn it through right behaviour). The weakness of right-populism's material agenda allows appalled liberals to treat it as an irrational, reactionary spasm: a desire for the restoration of an older social order embodied in the "angry, white male". For some, it is a full-blown rejection of the Enlightenment and thus a threat to democracy (this is a non sequitur: popular democracy was abhorred by most Enlightenment thinkers): "They rose up against the demand imposed by modernity – that we use reason to figure things out for ourselves – and replaced it not with the old rules, but with impulse itself, with the vengeance and cruelty and rage that Trump so brilliantly embodies".

As it became clear that right-populism in both the UK and US was going to be incoherent and largely ineffective, a view emerged that such populist eruptions might be a periodic and necessary corrective: "These revolts against remote elites are essential to the vitality, and viability, of modern democracy – even as (and precisely because) they challenge the status quo, destructive though that challenge may be". This has traditionally been a position that liberals have treated sceptically, concerned as they are with the implied threat to pluralism, but it has the advantage of being fundamentally transactional and therefore comprehensible: though crudely expressed, the claims of the people are "legitimate" and amenable to negotiation. One reason for the contemporary prominence of this view is the growing suspicion that the ultimate beneficiary of the "populist moment" may turn out to be the left rather than the right, precisely because of its material agenda and transactional approach, something that has certainly seemed more likely in the UK since 2017. This view reflects two assumptions: that right-populism is impulsive and emotional, so it is likely to blow over, and that the more pragmatic left-populism can be bought off through negotiation.

The problem is that this interpretation can easily slide towards thinking of right-populism as a necessary purgative, or simply as bad political weather that must be endured until it passes, which then leads to the equanimity displayed by The Wall Street Journal towards the prospect of Jair Bolsonaro becoming the next President of Brazil. Bolsonaro's antagonistic rhetoric towards a variety of "social deviants" sounds like hyperbole, but the risk that he is a right-populist ramp for an illiberal democratic coup is very real. You shouldn't be surprised if a guy who lauds the former military dictatorship ushers in a new military dictatorship, even an informal one that features more suits than fatigues. Left-populism is not a threat to liberal democracy, despite the pearl-clutching claims of centrist commentators who espy a cult or a threat to free speech. Right-populism, on the other hand, may be a threat because it can serve both as a precursor to, and as a mode of expression for, illiberal democracy. In that light, the liberal media's indulgence of the spectacle of right-populist demagoguery, from Nigel Farage to Tommy Robinson, looks particularly foolish.