The Commons Defence Committee has been harrumphing about the SNP's defence plans for an independent Scotland. Apparently, the proposed budget of £2.5 billion would not be sufficient to provide a credible air force, submarines, a fully-equipped surface fleet, or its own intelligence systems (that GCHQ shit is seriously expensive). The committee has also warned that moving the Trident nuclear fleet to a new base outside Scotland would take years and cost billions of pounds (who'da guessed?).
The Scottish plans are light on detail, though presumably kilts will be involved, but I think they're missing a massive opportunity to advance the cause of independence. Their problem is that they are approaching the question the wrong way round. Rather than start with a threat analysis, i.e. figure out what you need to defend against, they have started by considering what a suitable amount of money would be relative to current expenditure (i.e. incremental as opposed to zero-based budgeting).
The total annual UK defence budget is about £36 billion (excluding foreign aid), of which Scotland contributes about £3.3 billion through taxation. However, it is estimated that only £2 billion of that latter amount is spent in Scotland, so it is assumed that the future budget should be somewhere between the two figures. The other datum they have thrown into the calculation is the defence budget of the country they think would be their closest comparator as an independent nation, Denmark. The Danes currently spend about £2.6 billion (DKK 23 billion). These two factors combined produce the finger-in-the-air estimate of £2.5 billion for Scotland's defence.
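For the avoidance of doubt, the arithmetic really is that crude. A quick sketch in Python, using only the figures quoted above (the midpoint rule is my guess at how the triangulation works, not anything the SNP has published):

```python
# Back-of-envelope reconstruction of the SNP's £2.5bn figure (all values in £bn).
# Assumption: the estimate is triangulated between what Scotland pays in and what
# is currently spent in Scotland, sanity-checked against Denmark.
scotland_tax_contribution = 3.3   # Scotland's share of UK defence funding
spent_in_scotland = 2.0           # estimated defence spend inside Scotland
denmark_budget = 2.6              # Danish defence budget (~DKK 23bn)

midpoint = (scotland_tax_contribution + spent_in_scotland) / 2
print(f"Midpoint of the two Scottish figures: £{midpoint:.2f}bn")  # ~£2.65bn
print(f"Danish comparator:                    £{denmark_budget:.2f}bn")
# Both sit close to the finger-in-the-air £2.5bn.
```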
Menzies Campbell's claim that the SNP's sums were "done on the back of an envelope" is close to the truth, but he, together with the other unionist critics, is guilty of a different error, which is to assume that Scotland needs a mini-version of the UK's defence establishment, sans Trident. Let's try to approach this more logically by first considering the threats that an independent Scotland might conceivably have to face.
If we assume that Scotland as an entity stems from the Kingdom of Alba around 900, after the end of the Norse raids, then the only persistent threat to Scottish territorial integrity up to the union in 1707 came from England. Everyone has been too polite to point this out so far, perhaps because of the 500th anniversary of Flodden Field, when the roles were for once reversed. Of course, it's improbable that England would ever threaten an independent Scotland with invasion again (though you could imagine a future in which Boris Johnson PM rattles his sabre), but to be on the safe side, it is estimated that it would cost about £400 million to rebuild Hadrian's Wall. This is not an entirely facetious observation.
At about 90 miles, a wall between Berwick and the Solway Firth would be longer than the 73 miles between Wallsend and Bowness, but using modern materials, rather than replicating Roman stonework, it would be cheaper to build per mile, so the net cost is probably in the same region. For comparison, the 165 mile border fence between Israel and Egypt in Sinai is expected to cost somewhere around £500 to 700 million (NIS 3-4 billion) once completed.
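A rough per-mile sanity check, using the lengths and cost figures quoted above (a sketch, not a costing):

```python
# Rough per-mile cost check for a modern border barrier (figures from the text).
sinai_cost_low, sinai_cost_high = 500e6, 700e6   # £, Israel-Egypt fence estimate
sinai_length_miles = 165

per_mile_low = sinai_cost_low / sinai_length_miles    # ~£3.0m per mile
per_mile_high = sinai_cost_high / sinai_length_miles  # ~£4.2m per mile

border_miles = 90   # Berwick to the Solway Firth
low, high = border_miles * per_mile_low, border_miles * per_mile_high
print(f"Implied cost: £{low/1e6:.0f}m to £{high/1e6:.0f}m")   # ~£273m to £382m
# In the same region as the £400m quoted for rebuilding Hadrian's Wall.
```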
Of course, a wall alone would not be an effective defence, and (assuming good neighbourly relations) could be an impediment to trade (though future fiscal policy could give rise to smuggling, which a wall might help control), but it does suggest that an annual defence budget of £2.5 billion would be more than adequate to counter the threat of invasion from the South. The lesson of history (certainly Scottish history, see Messrs Wallace and Bruce) is that effective defence depends on an ability to raise a militia (blue face-paint optional), with the regular armed forces serving to hold up an invasion and buy time for full mobilisation. Switzerland is a good example of this approach, having an active force (95% part-time) of about 130,000 troops on a budget of £3 billion, which compares with the UK's active force of about 200,000 (full-time), plus 180,000 reservists, on ten times that amount.
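Put in crude per-head terms (a sketch; I'm reading "ten times that amount" as roughly £30 billion, and ignoring differences in kit and posture):

```python
# Crude cost-per-active-soldier comparison (figures from the text; "ten times
# that amount" for the UK is read as ~£30bn).
swiss_budget, swiss_troops = 3e9, 130_000        # mostly part-time militia
uk_budget, uk_troops = 30e9, 200_000 + 180_000   # full-time regulars plus reservists

print(f"Switzerland: ~£{swiss_budget / swiss_troops:,.0f} per head")  # ~£23,000
print(f"UK:          ~£{uk_budget / uk_troops:,.0f} per head")        # ~£79,000
# A militia-based defensive posture costs a fraction per head of a full-time
# force structured for expeditionary operations.
```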
Beyond England, and leaving aside stateless or domestic threats, the nearest territorial rivals to Scotland would be Norway, Iceland, The Faroe Islands (under Danish sovereignty), and the Republic of Ireland. There is obviously the potential for a new Cod War, or perhaps disputes over North Sea or West of Shetland oil and gas, but the odds look pretty slim, not least because of the calmative effect of NATO and the EU. A large standing army typically exists to project state power beyond national borders - i.e. its purpose is invasion rather than defence. Assuming an independent Scotland would not be casting envious eyes at the Faroe Islands (some of their footballers are quite useful), a small professional army coupled with part-time territorials looks appropriate.
Denmark's budget is clearly driven by regional considerations. For them, invasion from the South is a credible threat. Though relations are now good, and a bellicose Germany looks unlikely any time soon, the fact that they were invaded 73 years ago (and at 2 hours it was the briefest military campaign on record) means they remain cautious, if not exactly jumpy. The other consideration is Denmark's strategic position at the entrance to the Baltic, which means they could be dragged into wider strife involving other Scandinavian and Baltic states, not to mention Russia. In other words, Denmark faces more obvious threats than Scotland, and greater scope for being in the way of someone else's aggression.
In fact, it may make sense to also consider Iceland as a comparator for Scotland. As the only member of NATO without a standing army (they have 200 peace-keepers, plus a coastguard of 4 vessels and 4 aircraft), they are obviously at the other extreme to Denmark. Their total annual defence budget is a minuscule £8 million. It's improbable an independent Scotland would ever go this far, if only because popular sentiment would demand the maintenance of historic units like the Black Watch, as well as retention of various naval bases and airfields that provide domestic employment, but it does suggest that a credible defence force could be mobilised for significantly less than £2.5 billion a year.
It should also be borne in mind that the cost of moving the Faslane and Coulport Trident facilities to Cumbria (Northern Ireland would be ruled out for obvious reasons) would not be a problem for Scotland, it would be a problem for the rump UK. Indeed, an astute politician would charge the UK a hefty rent to maintain the bases for a decade or two (Scottish nationalists would surely accept their temporary presence as the price of independence), which would defray a large part of the Scottish defence cost.
The point about all this lunacy is that it reveals a few sordid truths: that defence spending has morphed into subventions to depressed areas of the country; that defence capability is determined by political posturing; and that the amount of money we spend on defence is ultimately arbitrary, owing more to special interests than actual need.
If Scotland could get by with a defence budget of £2 billion (and I have no doubt that it could), then the UK as a whole could certainly get by on £20 billion, which would put us on a par with Italy, Brazil and South Korea (and probably Israel, who officially spend about £9 billion a year but have an "off budget" nuclear programme, not to mention hefty US subsidies). We could halve our defence spend without adversely affecting our security, though our ability to intervene abroad would be reduced (this might be a blessing in disguise).
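Scaling the Scottish figure up by population lands in the same ballpark (a sketch; the population numbers are my own rough 2013 figures, not from the committee):

```python
# Scaling Scotland's hypothetical budget up to the UK by population.
# Assumption: ~5.3m people in Scotland, ~64m in the UK (rough 2013 figures).
scotland_budget = 2e9
scotland_pop, uk_pop = 5.3e6, 64e6

uk_equivalent = scotland_budget * (uk_pop / scotland_pop)
print(f"Population-scaled UK defence budget: £{uk_equivalent / 1e9:.0f}bn")  # ~£24bn
# The same order of magnitude as the £20bn suggested above, i.e. little more
# than half of the current £36bn.
```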
The Scots should be ridiculing defence expenditure as waste driven by English post-imperial delusion, rather than trying to explain how they'll afford the latest in fighter jets and submarines on a "paltry" £2.5 billion, but the cry of "defence jobs" is too emotive. At least they're not proposing a wee aircraft carrier, to be named Braveheart.
Friday, 27 September 2013
The Revolution Devours its Children
The reinvention of neoliberalism continues. After the "one nation" and "responsible capitalism" soundbites of last year, Labour has this year started to put flesh on the bones with emblematic promises on housebuilding and energy prices, but within the context of fiscal prudence (austerity-lite). As anyone who can count will appreciate, the "Stalinist" plan to build a million new homes over a 5-year parliament is pathetically inadequate, though as any homeowner who can count will also appreciate, it means that house values won't be undermined, and nor will the balance sheets of banks be impaired.
Despite shouts of "Clause IV", the energy price freeze is pretty meaningless (the suppliers already offer fixed-price deals), and certainly not the harbinger of a 70s-style three-day week (which happened under a Tory government, not, as historian-cum-propagandist Dominic Sandbrook artlessly implied in The Daily Mail, a Labour one). A cut in energy prices, which would have boosted disposable incomes and thus helped stimulate the economy, would have been the real deal. The true significance of these policies is the reaffirmation of the role of government in addressing market failures at a national level, which is a key characteristic of neoliberal practice (the covert objective being to rig the market for privileged corporations). It does not herald the return of nationalisation.
Another sign of this rebranding was uber-centrist Martin Kettle praising David Sainsbury's Progressive Capitalism, which is being promoted as a new "third way" brand for Labour. Kettle retrospectively redefines neoliberalism as a "fundamental belief in unfettered markets and minimal government". That would work as a definition of classical liberalism, but omits everything that constituted the "neo". In contrast, all-new progressive capitalism "means embracing capitalism in two particular ways – the recognition that most assets are privately owned and the understanding that goods and income are best distributed through markets. But it also means embracing progressivism in three distinct ways: the role of institutions, the role of the state, and the use of social justice as a measure of performance". In other words: the primacy of property, the efficiency of markets, the institutional defence against "populism" (i.e. irresponsible democracy), the managerial state, and the fetish of metrics. Altogether, a pretty decent thumbnail sketch of neoliberalism.
The symbiotic relationship of big business and the state (supported by the key ideologies of managerialism and public-private partnership) has always been the central feature of neoliberalism in practice. Henry Farrell provides a spotter's guide: "As one looks from business to state and from state to business again, it is increasingly difficult to say which is which. The result is a complex web of relationships that are subject neither to market discipline nor democratic control." A current example of this is HS2, which is now being advanced by the government (on behalf of rail and engineering companies) on the grounds that it will tackle a capacity problem, after the hyperbolic claims about regional regeneration, jobs and passenger productivity were shown to be dubious.
As any fule kno, there is an inverse relationship between speed and capacity due to increased stopping distances (braking distance increases with the square of speed - i.e. if you double the speed, you quadruple the stopping distance). The highest-capacity lines are the slowest, e.g. metropolitan railways like the Tube, because they have the shortest gaps between trains, so while there may be an argument for an extra line, there is no argument for a high-speed one, unless HS2 is intended to be an exclusive facility for well-heeled commuters, and thus only a marginal increment on total capacity (not an easy sell). The lack of conclusive evidence that HS2 is the optimum use of £50bn, i.e. the absence of a clear price signal in an artificial market, and the failure to respond to popular opinion, shows how neoliberal programmes (and not just les grands projets) habitually bypass both markets and democracy.
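To make the speed/capacity trade-off concrete, here is a toy headway model (illustrative only: the deceleration, safety factor and fixed allowance are round numbers of my own choosing, and real signalling is messier):

```python
# Toy line-capacity model: safe separation between trains scales with braking
# distance, which grows with the square of speed (d = v^2 / 2a), so the time
# headway grows roughly with speed and trains-per-hour falls.
def trains_per_hour(speed_ms, decel=0.7, safety_factor=2.0, fixed_s=90.0):
    """Illustrative only: deceleration, safety factor and the fixed allowance
    (dwell time, train length, signalling reaction) are assumed round numbers."""
    braking_distance = speed_ms ** 2 / (2 * decel)                 # metres
    headway_s = safety_factor * braking_distance / speed_ms + fixed_s
    return 3600 / headway_s

print(f"Metro at ~50 km/h:       {trains_per_hour(14):.0f} trains per hour")   # ~33
print(f"High speed at ~360 km/h: {trains_per_hour(100):.0f} trains per hour")  # ~15
```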
Andrew Adonis, who remains a cheerleader for HS2, reviewed Sainsbury's book in The New Statesman when it was released back in May. The author's hilarious rejection of the Dark Side is quoted by the adoring Adonis: "It was only after I left government ... that I began to question fundamentally the neoliberal political economy which had dominated governments in the western world for the last 35 years". It's worth remembering that Sainsbury was never elected: he got into government as a Blair-appointed lord after making large donations to the Labour party (a classic neoliberal vector), having previously been the main money-man behind the SDP in the 1980s.
The lack of self-awareness over this route to power, not to mention his failure to spot the flaws in the neoliberal agenda while in government, is a breathtaking example of naivety born of entitlement (he joined the family firm in 1963, "working in the personnel department", and was appointed a director 3 years later at the age of 26). In his review, Adonis admiringly lists Sainsbury's many proposals on how to improve equity markets - this is typical: "Sainsbury is especially bold on takeovers. He proposes higher 'hurdles' in shareholder support required from within the target company, and restrictions on those who can vote to those who have held shares in the target company for 'a certain number of years'". In other words, Sainsbury wants to privilege incumbent capitalists. This is not progressive in any sense of the word.
The relaunching of "the project" in more populist clothing is bad news for the LibDems, who have already been spooked by the collapse of their ideological confreres, the FDP, in Germany (their insistence that there is no parallel is telling). This is because their economic platform (now dominated by the neoliberal Orange Bookers) is all they have left after compromising their social liberal policies in office (barely a squeak about the NSA/GCHQ revelations) and pissing away their traditional fund of goodwill over student fees. Part of the mythos of Thatcher and Reagan was that they sold raw free-market economics successfully to the electorate. In reality, it has taken 30 years to normalise the idea that the Royal Mail might be privatised. The LibDems' enthusiasm for overt economic liberalism, without the saving grace of civil rights or empathy for the poor, is not a vote-winner.
The market for a laissez-faire party has been fragmented by UKIP and the LibDems' overlap (confusing to many voters) with the Tories, so this leaves Clegg and co fighting on an increasingly narrow front. Their achievements in office have been underwhelming, even if you include the list of mad Tory policies that they supposedly kyboshed. The only achievement most people will remember is the increased tax-free allowance, which benefited the better-off more than the poor. Against this, everyone remembers the student fees fiasco and the electoral reform bungle. Their electoral challenge in 2015 will be a lack of novelty and distinctiveness, as much as the background noise about broken promises and opportunism (amusingly, Tim Montgomerie in 2011 imagined Clegg emulating the FDP's Guido Westerwelle and resigning as party leader in order to concentrate on his government job - don't rule it out, or a sorrowful video about making sacrifices for the national good).
The decline of Germany's FDP is being partly explained by the rise of the eurosceptic AfD (a less eccentric UKIP). Assuming there was direct movement between the two, this (allowing for national differences) supports my suspicion that a strong showing for the Kippers in 2015 may be as damaging for the LibDems as for the Tories. Labour's decision to rule out an EU referendum commitment looks like an astute tactic to detach the pro-EU LibDem "left", who were already halfway out the door. Labour can afford to be a pro-EU party (or at least neutral) as the anti vote will be split between the Tories and UKIP.
Germany also parallels the UK in the relative decline of the Greens. This is partly due to the centrist parties adopting greenish policies (Merkel's decision to abandon nuclear power post-Fukushima appears to have been pivotal in Germany), partly due to the Greens acting like killjoys when they get near power, and partly because they look increasingly like a middle-class indulgence (the NIMBYism of the anti-fracking protests at Balcombe didn't help). If Labour augments their intervention in the energy and housing markets with green-tinged policies around renewables and sustainability, they could well hoover up ex-Green votes.
Meanwhile, neoliberalism has never gone out of fashion with the Tories, even if the name is carefully avoided. Stitch-ups between business and the state are integral to their natural mode of operation. By 2015 the penetration of the NHS by private interests will have reached critical mass. Labour may well slow this, and may even reverse some of the more offensive features, but we won't be going back to the status quo ante of 2010, let alone dismantling the internal market (the volte-face of Blair after 1997 is the obvious template). You can also expect Royal Mail to stay in private hands and HS2 to reappear in some form.
While many are determined to put a positive interpretation on Labour's evolving policy (Duncan Weldon's "moving from living with capitalism to reshaping it" is a touching triumph of hope over experience), it should be obvious that there has been no philosophical watershed. Despite the intellectual bankruptcy revealed in 2008, the nexus of the defence of privilege (austerity) and the privileging of corporations ("good businesses") remains at the heart of politics. The false choice is still between laissez-faire and the neoliberal state, or as Michael Meacher puts it: "It opens up the critical divide between those who believe 'Let the markets get on with it, and government get out of the way' (such as the Tories, Orange Book Lib Dems, Blairites like Mandelson, and pretend-Labour ex-ministers like Digby Jones), and those who believe that when a private market is utterly dysfunctional and broken, the state has to step in and re-set the rules".
All that has happened here is that the previous generation of neoliberals have been recast as pantomime villains (a role Mandelson appears to be enjoying more than Blair), thus allowing the ideology to survive with a suitable makeover. 2008 is painted as a failure of individual competence and morality, from greedy bankers to hubristic politicians. We will now be saved by progressive capitalism.
Tuesday, 24 September 2013
The Nature of Jobs
Back in 2009, Adair Turner, then chairman of the Financial Services Authority, came up with the phrase "socially useless" to describe much of the activity of the City of London in the pre-crash years. This gained a lot of attention, not just because of the honest assessment of the City's predatory relationship with society, but because it reintroduced the ethical dimension to the workplace. Many people agreed with the judgement but also ruefully admitted that the job they did made little or no contribution to the common good either, even if it wasn't as unashamedly parasitical. The dirty little secret of capitalism is that very few people genuinely believe themselves to be wealth-creators, though they will claim the title (and absorb the associated anxiety of being "found out") for fear of being lumped in with the "moochers".
This moral stock-taking coincided with the peak in popularity of "happiness indices" as an alternative to simple GDP growth. You will notice that this has gone out of fashion lately, perhaps because ... house prices are rising again, yippee! (small print: only in London). The idea that happiness was an appropriate matter for public policy never sat well with the right, for whom "the pursuit of happiness" is by definition a personal activity that the state should keep its nose out of, while the centre-left struggled to define happiness beyond the anodyne. Under the covers, the right was uncomfortable exposing its profound belief that the only true happiness is the power that money bestows (and vice versa), while the centre-left was uncomfortable admitting that happiness does not equate to work and that there is much to be said for loafing.
Having (mostly) failed to spot the macroeconomic implications of useless activities before 2008, economists attempted to restore their credibility by providing a more systematic basis for Turner's argument about social worth, distinguishing between "creative" and "distributive" activities, though this merely echoed a critique that dated from the industrial revolution and was described by William Morris in Useful Work versus Useless Toil. Creative activities are incremental in aggregate, i.e. they add to the sum of wealth, while distributive activities are zero-sum, i.e. a gain for one party means a loss for another. A nurse is creative in that reducing hours lost to illness increases productivity and thus GDP. A marketing executive is distributive in that persuading people to buy product A in preference to product B (the "puffery of wares" as Morris puts it) does not increase aggregate demand and thus production.
The problem, as Turner himself pointed out in 2010, is that the proportion of distributive activities increases as a society gets wealthier. This is because ongoing advances in technological productivity mean we need fewer people to do creative activities, even allowing for growth in demand and the profusion of commodities. As Morris noted, this hides much waste as a lot of the creative production of society is geared to the "articles of folly and luxury" demanded by the rich ("a class which does not even pretend to work") and the middle class ("a class which pretends to work but which produces nothing"), while inequality drives the production of "miserable makeshifts" and "coarse food" for the poor (think Primark t-shirts made in Bangladesh and Tesco horse-burgers made who-knows-where).
The suspicion, that an increasing number of the jobs created by modern capitalism are without merit, if not actively bad for society, has prompted a new essay, On the Phenomenon of Bullshit Jobs, by David Graeber, Professor of Anthropology at the LSE. He outlines the problem as follows: "It’s as if someone were out there making up pointless jobs just for the sake of keeping us all working. And here, precisely, lies the mystery. In capitalism, this is precisely what is not supposed to happen. Sure, in the old inefficient socialist states like the Soviet Union, where employment was considered both a right and a sacred duty, the system made up as many jobs as they had to (this is why in Soviet department stores it took three clerks to sell a piece of meat). But, of course, this is the very sort of problem market competition is supposed to fix."
The comparison with the USSR isn't particularly helpful (he's an anarchist, so this is just him establishing his anti-totalitarian credentials), as deliberately creating enough jobs to ensure full employment is a respectable policy that stretches well beyond the old communist regimes, even if it isn't current orthodoxy. Graeber's analysis is a classic left-libertarian one: "The answer clearly isn’t economic: it’s moral and political. The ruling class has figured out that a happy and productive population with free time on their hands is a mortal danger." In other words, the economic system (or more specifically the 35-hour week) is the product of repression and the defence of privilege, rather than the other way round.
The cart/horse problem obliges him to fall back on the conceit of an (improbable) intelligent designer to explain the ideology: "If someone had designed a work regime perfectly suited to maintaining the power of finance capital, it’s hard to see how they could have done a better job. Real, productive workers are relentlessly squeezed and exploited. The remainder are divided between a terrorised stratum of the, universally reviled, unemployed and a larger stratum who are basically paid to do nothing, in positions designed to make them identify with the perspectives and sensibilities of the ruling class (managers, administrators, etc) – and particularly its financial avatars – but, at the same time, foster a simmering resentment against anyone whose work has clear and undeniable social value."
This is entertaining stuff, particularly on the psychic damage it does to those compelled to do these jobs, but his argument is circular if he claims the cause of this is moral and political, i.e. the ideology is the product of ideology. The underlying reason has to be economic, or at least involve some economic tradeoff, otherwise businesses would not willingly transfer wealth to these unproductive middle class job-holders in the form of salaries. While some may represent unavoidable rents (corporate lawyers, for example), most are discretionary. Graeber's claim, that capitalists aim to keep the population occupied and quiescent, skips over the question of what the mechanism is that achieves this (the "mystery") - i.e. what makes this happen at the microeconomic level of the individual firm?
An alternative economic explanation for bullshit jobs is that business needs to create a large enough population of consumers willing to spend on the commodities it produces. In other words, disbursements to the middle class (in the form of income rather than dividends or rents that would challenge the ownership of the rich) are a form of investment that preserves and expands the market for commodities, and stimulates innovation and productivity, so growing the overall pie of wealth of which an increasing proportion is appropriated by capitalists (the 0.1% for shorthand). But this faces the same problem as Graeber's political explanation. What is the mechanism that achieves this via the medium of bullshit jobs?
The key issue for any theory, whether posited on a political or an economic explanation, is the free-rider problem, i.e. what would prevent some capitalists avoiding having to subsidise supernumerary roles and thus gaining a competitive advantage? This is a real issue given that free-riding undoubtedly exists, and not just among small capitalists who despise "backoffice" functions as much as they do the public sector.
A possible explanation is provided by the work of Ronald Coase (who died recently), the author of The Nature of the Firm. Coase sought to explain (in the 1930s) why firms grew to a particular shape and size. He believed the key determinant was transaction costs, i.e. the cost of doing business. In a perfect market, where an entrepreneur could contract-out every piece of work at an optimum price, there would be no need to directly employ anyone. The reason a firm is created is because the overhead cost of these contractual transactions is too high. In other words, it is often cheaper to employ your own specialists, even if they are not always productively occupied, than to contingently contract independent suppliers.
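Coase's point reduces to a crude make-or-buy comparison. A sketch with invented figures, just to show the shape of the trade-off:

```python
# Crude Coasean make-or-buy comparison for one specialist skill (invented figures).
def annual_cost_in_house(salary=60_000):
    # The employee is paid whether or not they are productively occupied.
    return salary

def annual_cost_contracted(days_needed=80, day_rate=600,
                           transactions=20, cost_per_transaction=750):
    # Market price for the work, plus the overhead of finding, negotiating with
    # and monitoring external suppliers (Coase's transaction costs).
    return days_needed * day_rate + transactions * cost_per_transaction

print(f"In-house:   £{annual_cost_in_house():,}")    # £60,000
print(f"Contracted: £{annual_cost_contracted():,}")  # £48,000 + £15,000 = £63,000
# When transaction costs push the contracted total above the salary, the firm
# hires - and carries idle capacity that can look, from outside, like a
# bullshit job.
```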
Coase's theory suggests that the issue is not black and white, that many jobs may be useful (from the firm's perspective) at some times and useless at others. In this context, the useful/useless dichotomy becomes a separate dimension to the creative/distributive dichotomy: a role can be both distributive and useful to a firm (having a marketing bod who wins you market share), even if this is useless at the aggregate level of society (a psychic burden borne by the marketing bod, not by the firm's owners).
The implication is that small capitalists pay a premium, either through periodic outsourcing at a higher price (and greater risk due to lack of employee insight/loyalty), or through the lack of a critical skill at a key juncture that would otherwise grow revenue. This partly explains the difference in productivity and profitability between small and big capitalists - i.e. it isn't purely down to economies of scale. Free-riding means you are more likely to stay a small capitalist, while big capitalists who free-ride are more likely to limit their opportunities for expansion. A related suspicion is that many big capitalist firms that do not free-ride are operating some way short of their optimum point, in terms of the balance between internal and external resources, due to innate caution, which is perhaps supported by the so-far underwhelming impact of online outsourcing and the evidence of job-hoarding during the recession.
But while transaction costs do provide a plausible explanation as to why firms may accept some structural inefficiencies among non-productive roles (i.e. like advertising, they know that 50% of their managerial and administrative overhead is wasted, but don't know which 50%), this doesn't explain why the number of these roles should grow over time, not so much as a relative proportion of the firm (which can be explained by technology and automation) but as an absolute number. The gradual shift from the exploitation of labour value to intellectual property and capital assets could explain the growth in legal and financial roles, but it doesn't explain the growth in HR; while the bête noire of regulation and red tape only excuses so much: you are not obliged to employ management accountants, internal auditors, or CSR managers, let alone someone to manage your Twitter account.
Empire-building (to justify inflated executive pay - i.e. the principal-agent problem) and defence in depth (employing enough people beneath you to act as a fire-break come the periodic redundancy programme) probably play a part, while the Peter Principle should not be underestimated as a feeder into the limbo of "special projects", but I think the chief cause is simply that supernumerary roles tend to create work to pad out those periods when their skills are not required by the firm. This is essentially Parkinson's Law, and more precisely the observation that these roles tend to "make work for each other". As technological advance reduces the number of productive roles in the firm, the touch-points for the distributive roles are increasingly with each other, which increases the likelihood of further role creation and grade inflation.
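One way to picture the "make work for each other" dynamic (my own illustrative model, not Parkinson's actual formula): if every overhead role generates a little coordination work with every other overhead role, the internally-generated workload grows roughly with the square of headcount, as shown below.

```python
# Illustrative model: pairwise coordination channels among overhead roles
# grow quadratically with headcount.
def coordination_channels(n_roles):
    return n_roles * (n_roles - 1) // 2

for n in (5, 10, 20, 40):
    print(f"{n:3d} overhead roles -> {coordination_channels(n):4d} mutual touch-points")
# 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780: doubling the headcount roughly
# quadruples the internally-generated work, a ratchet that only turns one way.
```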
Does this imply the reductio ad absurdum of a firm with only overhead roles and therefore no real purpose beyond a self-referential mission statement? In one sense, such firms are already closer to reality. The impact of containerisation and ICT over the last 50 years (in reducing transaction costs) has allowed some businesses to strip down to their "core competence" (usually managing a balance sheet) through outsourcing and offshoring. Though they often start their "unencumbered" phase with legacy overhead roles, these quickly fall away as the business leaders realise they will never need the skills again, or at least not sufficiently frequently to justify the expense of an in-house resource. These are not just modern variations on the holding company, but genuinely approach the "ideal" of Coase's theory, i.e. a small team of contract managers. The overhead roles don't wholly disappear; some merely step down the food chain to suppliers or go freelance, which means (for most) lower wages and worse terms.
The emblematic form of the modern firm increasingly looks like a hedge fund or a software house. The evidence suggests that technological change, particularly since 2000, has allowed firms to both start and stay smaller. As already noted, technological change also means that it's more likely that new firms will be distributive rather than creative (in passing, it's worth noting that a "creative" in modern parlance is often someone engaged in distributive activities who's in denial about their social worth, e.g. "marketing creatives"). While these firms have some supernumerary roles, they are often just corporate adornments that function as status symbols externally, e.g. "Executive Chef", or as ideological exemplars internally, e.g. "Guru". The assumption that a mature firm needs a substantial backoffice no longer holds.
Though it may not feel like it to the "squeezed middle", today's labour-hoarding and wage restraint among large and medium-sized businesses may well mark the end of the golden age for supernumerary middle class careers. The debate on the value of a degree, and the related angst over graduates struggling to move beyond non-professional jobs, may be the canary in the coalmine. Though recovery (however weak) is assumed to herald happier days for the jobs market, the paradox is that layoffs are now more likely as businesses have the confidence (and access to capital, if interest rates stay low and funds aren't monopolised by mortgages) to invest in automation and further outsourcing/offshoring.
Though this will restrain growth in aggregate demand, it will make sense at the level of the individual firm where overheads continue to grow as a percentage of the cost-base and external transaction costs continue to fall, in both cases due to technological advance. In the face of this pincer movement, the decline in bullshit jobs seems likely and thus Graeber's essay takes on the air of a lament for a soon-to-disappear world, not unlike the typing-pools and reprographic offices of yore.
Friday, 20 September 2013
Loan Words
David Cameron has said it's OK for Tottenham fans to chant "yiddo" and "yid army". When asked by the Jewish Chronicle if Spurs fans who use the word should be prosecuted, he said: "You have to think of the mens rea. There’s a difference between Spurs fans self-describing themselves as Yids and someone calling someone a Yid as an insult. You have to be motivated by hate. Hate speech should be prosecuted — but only when it’s motivated by hate."
This has led to David Baddiel and others pointing out that the word is as offensive as "nigger" or "paki", and its use in this way should not be tolerated. It's also the case that many Spurs fans have come to think that the adoption of the term, in ostensible defiance of others with an antisemitic agenda, has been counter-productive. On the other hand, some Jewish Spurs fans claim to be happy with the use of the word. Even the Daily Telegraph is conflicted, expressing doubts while a poll of its readers (presumably mostly gentile) showed a majority supportive of Cameron's stance.
The PM's lapse into lawyerly Latin in a philological debate is amusing in itself, but he is making an important distinction. Mens rea means a guilty mind, in the sense that a person has a wrongful intention. For a crime to be committed, the accused must be shown both to have had a mens rea and to have carried out an actus reus, a wrongful act. If you accidentally kill someone, you are not guilty of murder. Cameron is tacitly accepting that chanting "yid" is prima facie (this Latin is catching) an actus reus, i.e. illegal, but that if a Spurs fan appropriates the term as a badge of honour, there is no mens rea (no "hate" in his formulation) and thus no actual crime.
This type of thinking is hardly novel given the decades-long debate over whether it's OK for black people to use the term "nigga", on which it's fair to say the jury is still out. As the recent documentary on the jobs and freedom march on Washington of 1963, which featured Martin Luther King's famous "I have a dream" speech, reminded us, it was a novel and radical act for John Lewis, one of the other speakers, to eschew the word "negro" and use the term "black". Words are mutable yet never fully drop their historic freight.
Cameron is right that the value of the word is a matter of context, but he is wrong to think it's solely about intention. Language is a social construct, which means that words mean what we agree they mean, not what any one individual decides. The joke in Alice Through the Looking Glass was not Humpty Dumpty's positivist worldview but his egomania.
Because society is dynamic, and constituted of many overlapping subcultures, language can have multiple, simultaneous meanings, which is an affront to those who would rather society wasn't dynamic. The constant struggle to control language is manifested in the trope that treats words as a type of property, whose ownership (and thus the right of interpretation, i.e. exploitation) can be prescribed by law and convention. This originates in the twofold movement, starting in the 17th century, that equated language with nationhood and territory, and simultaneously sought standardisation and conformity within the language. These trends were both facilitators of, and responses to, the evolution of mercantile and industrial capitalism.
This is why we use property-related terms, like "appropriate" or "reclaim", when we refer to one group changing or inverting the meaning of a word, such as Spurs fans adopting "yid" as an assertive identity. There is the sense, on the part of those who regret this mutability, of "rights" being trampled on and "liberties" being taken. When words (such as "yid" itself) jump between languages, we even talk of them being "borrowed" or "loaned", as if there is an expectation that one day they'll be given back, that property will be restored to its rightful owner. Who knows, perhaps the Romans will reappear to reclaim mens rea.
The reason it is regrettable that Spurs fans use the term "yid" is that the vast majority of them (95%, according to Baddiel) aren't Jewish, so the word carries no more personal consequence for them than a white lad from Aberdeen with a taste for rap calling his mates "niggas" (innit, blud). In fact, Tottenham aren't an unusually Jewish club, despite the claims made about their "heritage". This is a relatively recent invention (and lazy received wisdom today), dating from the 70s and 80s when the NF/BNP strategy of converting terrace crews into "streetfighters" gave Chelsea and West Ham fans a new trope for their hate. If you're going to start sieg-heiling on The Shed and singing songs about Auschwitz, characterising Spurs as the exceptional Jewish club is a no-brainer.
Ironically, this occurred at the tail-end of the historic migration of Jews out of the East End, with the decline of the garment and furniture trades, to places like Brentwood and Harlow, which gradually reduced their proportion in the crowd at White Hart Lane. Today, Arsenal are the most widely-supported club among London's Jews (the recent change in Chief Rabbi saw an Arsenal fan replaced by a Spurs fan), which hasn't stopped some Gooners routinely chanting "yiddo" at ex-Spurs players, even when Yossi Benayoun was in our team. Outside of London, Man City and Leeds both have strong Jewish support, but don't feel the need to make a totem of it. The "reclamation" of the term by Tottenham fans is not an "I am Spartacus" moment, showing solidarity with Jews against antisemitism, but an example of the ironic strain of terrace humour, with the added frisson of lines being crossed, not unlike Baddiel and Skinner's baiting of Jason Lee on Fantasy Football in the 90s.
Back in the 80s, there possibly was a moment when the adoption of the term by Spurs fans might have had some value as resistance to the knuckleheads at Stamford Bridge and the Boleyn Ground, but that moment has long since passed. If Jewish Spurs fans gleefully used the term today, it would be a different matter, but they (generally) don't. It's pretty obvious that the persistence of the "yid" vocabulary, when other offensive language has declined in grounds, is in part due to the continued use of it by non-Jewish Spurs fans. If they stopped, we could more easily isolate and deprecate its use elsewhere. Like a loan deal gone wrong, leaving a disaffected player in limbo, it would be best to cut your losses and move on.
Cameron's nuance is not an example of wishy-washy relativism (though you could have fun quizzing him about the mens rea of Muslim women wearing veils), but an assertion of property rights. He is channelling classic conservatism, both the Burkean notion that the little platoons should be left to their own devices and the reactionary resistance to words such as "queer" being appropriated and retooled. It's about ownership and his view that gentile Spurs fans have at least as much right to the word as the various Jewish community bodies that have abhorred its use. "The question is," said Humpty Dumpty, "which is to be master - that's all."
Wednesday, 18 September 2013
His Dork Materials
Philip Pullman, author of fantasy fiction for well-bred teens and president of the Society of Authors, has decided that illegal downloading of books and music is a kind of "moral squalor". This media-friendly hyperbole originates in an otherwise polite joint article with Cathy Casserly of Creative Commons in the Index on Censorship magazine (whose strapline is "the voice of free expression"). Pullman is just doing his job as a trades unionist, but his logic is as wonky as a timber-framed North Oxford bookshop. Though he takes aim at the likes of Google for facilitating illegal downloads, and thus "theft" from the pockets of artists, he is chary of voicing more than the traditional author's exasperation with the unequal terms of trade, noting that when he produces a book the publisher will "collect the money it sells for, and pass on a small proportion to me" (my italics).
What actually caught my eye was not this conventional bleat, but Pullman's extension of his criticism to encompass digital music: "The ease and swiftness with which music can be acquired in the form of MP3 downloads is still astonishing even to those of us who have been building up our iTunes list for some time" (can't you just feel the anguish in those last few words?). At no point does he acknowledge that Apple are the real culprit when it comes to the transformation of the economics of artistic content, presumably because the Cupertino corporation's prestige remains strong among MacBook-wielding creatives. Despite the popular history that holds the arrival of Napster and other P2P networks around 1999 as responsible for the implosion of music sales, it was actually the introduction of the iTunes Store in 2003, and specifically the decision to sell individual tracks at 99 cents, that did the deed. The change to the landscape can be seen in this chart (source here), showing the distribution of revenue by music format.
Albums were always where the money was, while singles were primarily promotional devices. In terms of the quantum of revenue, there has been a steady decline since the turn of the millennium, though this appears worse than it is because of the peak during the 90s, when we all spent extra money duplicating old vinyl with new CDs, as this chart shows.
What the chart doesn't include is the growth in revenues from live performances and ancillary merchandising (US live ticket sales tripled from $1.5b to $4.6b between 1999 and 2009, when cumulative inflation was only about 28%). Digital media revolutionised the music industry, but this simply meant that the money moved to new channels and publishers; it didn't mean that we stopped buying music or related commodities. The claim that taping/downloading was (and still is) killing music is simply the special pleading of incumbent publishers whose lunch was eaten by newer, more nimble service providers.
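As a rough check on that point, the arithmetic is already contained in the figures quoted above; a minimal sketch of the sum, treating the 28% figure as the whole-period inflation adjustment:

```python
# Back-of-the-envelope check on the live ticket sales figures quoted above.
# Assumes the 28% figure is the cumulative 1999-2009 inflation adjustment.
nominal_1999 = 1.5e9          # US live ticket sales, 1999 ($)
nominal_2009 = 4.6e9          # US live ticket sales, 2009 ($)
cumulative_inflation = 0.28   # approximate cumulative inflation, 1999-2009

nominal_growth = nominal_2009 / nominal_1999
real_growth = (nominal_2009 / (1 + cumulative_inflation)) / nominal_1999

print(f"nominal growth: {nominal_growth:.1f}x")   # ~3.1x
print(f"real growth:    {real_growth:.1f}x")      # ~2.4x
```

In other words, even after stripping out inflation, live revenue still well more than doubled over the decade in which recorded-music sales were supposedly being destroyed.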
In ways that would make a dialectical materialist happy, you can see the history of music culture in this progression. After the 8-track bump (a mainly US feature) and a flurry of big-selling LPs (Rumours, Hotel California etc), revenues began to drop in the late 70s as home-taping set off the first wave of "illegal sharing" (this mainly ate into LP sales - C46 and C60 cassettes were obviously designed for album-length recordings). This was exacerbated by the increased market segmentation, and consequently lower profit margins, driven by the popular reaction to the overwhelming dullness of the mid-70s. Record companies attempted to offset this by pushing higher-margin singles, often with coloured vinyl and fancy sleeves, which produced a flowering of new music (notably Punk and New Wave) as bands that would never have got an album deal were indulged for a couple of singles or an EP. But this strategy proved unequal to the task, and so we got the inexorable 80s trend back towards the superstar and the multi-platinum record, exemplified by Michael Jackson's Thriller and amplified by the effect of videos and MTV.
The invention of the CD brought in the golden age of music sales in the 90s, more than offsetting the losses experienced due to taping, but with the consequence of slowly killing off the once-more marginal single and the EP (and eventually Top of the Pops). The CD also allowed ageing artists to monetise their back catalogue with reissues and remastered versions. The birth of corporate rock and all-conquering global sellers, from Nirvana and REM to Madonna and the Spice Girls, was a product of the CD age. One positive consequence of the new format, which has a much lower per unit production cost than vinyl, was that it allowed for more short run releases. This encouraged the reissuing of rarities and the introduction of more foreign music ("World Music" is a product of the nexus of CDs, the feelgood end of Apartheid and cheap global travel), which in turn helped fuel the growing eclecticism of music in the late 90s.
The decline in cassettes was sparked by the introduction of dual tape/CD players in the early 90s, accelerated by PC-based media players (and the start of digital sharing), and finished off by the shift from personal tape-players to digital music players (the iPod launched in 2001). Illegal sharing via cassette was always a fiddle and prone to poor quality, but the biggest drawback was simply getting access to the source. I, like many others, would make mix-tapes of tracks off the John Peel radio show, jumping up and down to stop and start the recorder (he had good taste but his chat was incidental), and often missing the first few notes. The purpose was not to rip off the artists, but to have the ability to play songs of interest on demand. Once I'd decided I particularly liked a group, I'd go out and buy the vinyl and/or go to a gig.
Radio was a conspiracy of music company-endorsed playlists (Janie Jones was just the louche fringe of this) in an era when the opportunity to hear new music was relatively restricted. The actual liberation of the Internet was the chance to find and sample a vastly larger range of music than before, not the chance to steal it. It was about "everything" rather than "free". I still buy music, but I also download "illegally shared" tracks, which I consider to be a form of browsing, or "user-directed sampling", if you prefer. The music industry has accommodated itself to this new marketplace, both in its acceptance of Apple and Amazon's insistence on individual track sales, and in its support for the "new radio" of streaming subscription services like Spotify (again, I use the free version to sample music of interest before buying elsewhere). The idea that "digital piracy" is impoverishing musicians is no more true than the earlier claim that home-taping was killing music.
That said, the changes in the means of production, distribution and exchange have certainly affected some musicians more than others, leading to the polarisation familiar from the wider economy. It's now much more difficult to make money as a musician at the lower end of the popularity scale. This is partly because publishers and distributors take an even larger slice of the pie (hence the furore over Spotify's terms), partly because gig opportunities are reduced as older bands seek to monetise their back catalogue (in the CD era they could sit at home and watch the royalties roll in - now they hog the festival stage), and partly because wannabes giving it away for free on YouTube undercut the market for new music (the dominance of parent-funded upper-middle-class musicians is partly a consequence of this).
Meanwhile, at the top end of the income scale, a small number of rights-holders have become even richer (and in this regard Simon Cowell counts as much as Lady Gaga), as the enforcement of copyright and the profusion of distribution channels deliver ever greater scale economies to superstars. As with job polarisation elsewhere, the "middle class" of bands that could once make a living gigging in pubs and midscale venues, and even pay off the mortgage with a top-100 album, has all but disappeared. With economic viability as an independent increasingly difficult to reach, and success in the industry increasingly dependent on corporate channels, more and more market share is being taken by existing rights-holders, which is not unlike where we came in during the mid-70s.
In contrast, the world of fiction remains little changed. Few authors have ever been able to make a real living from writing, even before the arrival of e-books and the chimera of long-tail sales. Success depends on co-option by the small coterie of author/critics that dominate each genre, the big money comes from film and TV rights and other commercial spinoffs, and you must build a personal brand if you want to monetise readings and appearances. "Bestsellers" are, by definition, atypical, and increasingly subject to superstar economies of scale as the same content is spread across more channels and commodities (e.g. JK Rowling and EL James). In this context, the idea that the chief challenge to writers is spotty kids downloading files in breach of copyright, rather than the predatory pricing of online retailers and multinational publishing houses, is laughable. As SF writer and blogger Cory Doctorow says (quoted by Cathy Casserly), "My problem is not piracy, it’s obscurity".
The central, revealing flaw of Pullman's argument is that he blames the consumer: guilty of "moral squalor" and little better than a thief. This is the classic, intemperate language of the property-holder, demanding that a yokel be hanged for rustling a sheep. I sympathise with Pullman's desire to secure a larger share of the rewards accruing to the product of his members' labour, but his target ought to be closer to home: "The editor, the jacket designer, the publicist, the printer, the library assistant, the bookshop manager, the PLR administrator, and others, all earn a living on the back of the fact that I and my fellow authors have written books that people want to read". This list excludes the shareholders of Scholastic Inc and Random House, Pullman's publishers. He also omits to mention Amazon, which provides the online shopping facility for his own website. Nothing morally squalid there.
Sunday, 15 September 2013
Openness and Enclosure
The longtime privacy-sceptic Jeff Jarvis has an upbeat take on the NSA/GCHQ revelations: "It has been said that privacy is dead. Not so. It's secrecy that is dying. Openness will kill it." Jarvis draws a distinction between privacy and secrecy: "Think of it this way: privacy is what we keep to ourselves; secrecy is what is kept from us. Privacy is a right claimed by citizens. Secrecy is a privilege claimed by government". He then seeks to downplay the chief threat to the former: "It's often said that the internet is a threat to privacy, but on the whole, I argue it is not much more of a threat than a gossipy friend or a nosy neighbor, a slip of the tongue or of the email 'send' button". The imagery harks back to the parochial, but Jarvis appears to be oblivious to the irony of this, despite the ample recent evidence that the Internet facilitates on a global scale the "tyranny of common opinion" that once marked village life (trolls are just narrow-minded oafs with broadband).
For good measure, he claims that the Internet itself is a force for good: "The agglomeration of data that makes us fear for our privacy is also what makes it possible for one doubting soul – one Manning or Snowden – to learn secrets. The speed of data that makes us fret over the devaluation of facts is also what makes it possible for journalists' facts to spread before government can stop them. The essence of the Snowden story, then, isn't government's threat to privacy, so much as it is government's loss of secrecy". The suggestion that the state struggles to keep secrets any more is not merely wrong, it exemplifies Jarvis's working method, which is to omit all mention of the essential actor in this drama, namely the Internet companies. As such, this is a fairly obvious pitch that we should trust the likes of Google and Yahoo and back them in their noble struggle with government: "the agents of openness will continue to wage their war on secrecy".
The separation of privacy and secrecy is a false dichotomy, central to liberal philosophy and neoliberal practice. This views privacy as the property of individual citizens, and secrecy as the restricted property of legitimate government. In reality, there is no clear boundary as both are species of privatisation, the enclosure of the commons, in the sense that rights of access are limited to privileged minorities. The true dichotomy is between private and public. Jarvis avoids the latter word in his Guardian piece with one exception, when he quotes Gabriel García Márquez to the effect that "All human beings have three lives: public, private, and secret". The purpose of this quote is to focus attention on the last two. In place of the public realm, Jarvis offers openness.
In doing so, he attempts to link it with opensource (while simultaneously name-dropping and flattering his hosts): "When I worried on Twitter that we could not trust encryption now, technologist Lauren Weinstein responded with assurances that it would be difficult to hide 'backdoors' in commonly used PGP encryption – because it is open-source. Openness is the more powerful weapon. Openness is the principle that guides, for example, Guardian journalism. Openness is all that can restore trust in government and technology companies. And openness – in standards, governance, and ethics – must be the basis of technologists' efforts to take back the net."
As Tom Slee notes, the NSA and other government agencies are extensive users of opensource and have been involved in (and have thus potentially compromised) many opensource developments and industry standards: "to trumpet free and open source software as an alternative to the surveillance systems it has helped to build is nothing but wishful thinking". This is not to say that all opensource software is compromised, and you can certainly find reliable encryption tools if you want, but even the most basic communication device has scores of different programs and utilities running, any one of which may have been "backdoored", not to mention whatever might be running on the systems of an interlocutor. In practice, privacy is partial and contingent, which obviously doesn't fit well with the ideological premise of a citizen's inalienable "property right".
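To make the "reliable encryption tools" point concrete, here is a minimal sketch using the open-source Python cryptography package (my choice of example, not a tool mentioned in any of the articles): the primitive itself is publicly auditable and sound, but that guarantee stops at the library's edge.

```python
# Minimal sketch of authenticated symmetric encryption with the open-source
# 'cryptography' package (Fernet). The library is publicly auditable, but the
# guarantee stops there: it says nothing about the OS, the firmware, or the
# dozens of other programs on the same device, any one of which could be
# "backdoored" and leak the plaintext before it is ever encrypted.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # would need to be shared out-of-band
cipher = Fernet(key)

token = cipher.encrypt(b"a private message")
assert cipher.decrypt(token) == b"a private message"
```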
The concept of openness in ICT, from the early days of OSI, has always been highly contested. What it means and the way you implement it, which is not the binary open/closed choice the name implies, is a political decision. The blithe acceptance of openness as a universal good defined by technology companies (the nebulous "cloud" has inherited much of this style of utopian thinking) has marginalised these political and economic considerations. Media-friendly Internet commentators like Jarvis are a large part of the problem. As Henry Farrell notes: "There are few real left-wingers among technology intellectuals. There are even fewer conservatives. The result is both blandness and blindness. Most technology intellectuals agree on most things. They rarely debate, for example, how private spaces governed by large corporations such as Google and Facebook can generate real inequalities of power".
It's likely that once the German federal election is over later this month, the NSA/GCHQ scandal will run out of steam. There will still be official investigations and some calling-to-account, and regulatory safeguards will no doubt be enhanced (if only to minimise commercial damage), but the general principle, that the state can unilaterally access our digital data, will have been established. This will represent not the triumph of the post-9/11 neoconservative worldview - the "war on terror" is history and "liberal intervention" out of fashion - but the triumph of the neoliberal economic order, specifically the symbiotic relationship between the state and monopoly businesses. We have traded our rights to privacy for the utility of Google, Facebook and Twitter. Contra Jeff Jarvis, openness (as defined by him and his ilk) is killing privacy, not secrecy.
Thursday, 12 September 2013
Watch Your Phone
The launch of the iPhone 5S and 5C has been greeted with a general "meh", once it became clear the C stood for colour rather than cheap, and the 5S's only new feature was a fingerprint reader. This rather misses the strategic significance of the latter, which is less about addressing the password management hell of online services and the poor security of smartphones and tablets, and more about the way that technology is becoming integrated with biology. Another aspect of this is the emerging field of smartwatches, which is thought to be Apple's next big product target.
Strictly speaking, smartwatches have been around for decades. Digital watches that could be used as calculators date from the 70s, while modern sports watches with accelerometers and GPS are common. But we're obviously on the cusp of something new now that companies such as Sony, Samsung and Apple are piling into the market opened up by the crowd-funded Pebble. (Just as an aside, I wonder if anyone told Samsung that "gear" is slang for heroin. Presumably they chose the word more for its "Top Gear" connotations, implying the target audience are mainly male and a bit dim. I suppose they could have gone for Gadget).
For all but a few specialists (or those who fantasised they were jet pilots or divers), the choice of a watch has not traditionally been about utility. Being able to tell the time in 3 different places simultaneously isn't the reason why those who can afford it buy a Rolex. If you check your watch 10 times a day, that's probably no more than 30 seconds all told, which is a pretty low level of utility, irrespective of price. It's like buying a car and only driving it on the 29th of February. Watches have traditionally been positional goods, both those that are essentially jewellery and cheaper models that are a statement of personality (the ironic Mickey Mouse, the retro LCD Timex).
The smartwatch promises to augment this limited timekeeping utility and positional value with two new uses: acting as a display for alerts relayed by a smartphone, and housing a variety of bio-readers. As the former is somewhat underwhelming (if you really need to scan tweets in real time, you'll pull out the phone), and the latter is already well established with sports watches and bracelets, like the Nike FuelBand, there has been some scepticism that smartwatches can really be disruptive. They're not creating a new market after all, unless you count the tech-heads who dispensed with their watches back in the 90s because they could then check the time on their mobile phones. Given the long life of most watches, there won't be many people eyeing a smartwatch as a necessary replacement, and if the new device's lifetime is as short as a smartphone's, many (of those with the necessary moolah) will consider a £500 analogue watch (with resale value) a far better investment than a £300 smartwatch that depreciates to junk inside 5 years.
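A back-of-the-envelope comparison makes the point; the 60% resale figure below is purely an illustrative assumption, not a market estimate:

```python
# Hypothetical five-year cost of ownership (the resale fractions are assumed
# for illustration only).
analogue_price, analogue_resale = 500, 0.60     # assumed: retains 60% of value
smartwatch_price, smartwatch_resale = 300, 0.0  # "depreciates to junk"

analogue_cost = analogue_price * (1 - analogue_resale)        # £200
smartwatch_cost = smartwatch_price * (1 - smartwatch_resale)  # £300

print(f"analogue net cost over 5 years:   £{analogue_cost:.0f}")
print(f"smartwatch net cost over 5 years: £{smartwatch_cost:.0f}")
```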
Watches are part of the personal brand, so the core market for the smartwatch may be the intersection of those that take personal branding seriously and are into geek-chic and the "quantified self". The first person to wear both Google Glass and a smartwatch can expect to be stoned in public. Like the Borg eyewear, a key feature of the smartwatch is that it will be on display (don't expect wearers to discreetly shove it under the cuff). As a slave device, the smartwatch is a way of showing that you own an iPhone or top-end Galaxy without having to constantly and ostentatiously use the damn thing. Because you can now keep your phone trousered or bagged, the less-easy-to-snatch smartwatch will help reduce theft, which will please hipsters currently torn between showing off and staying safe.
The use of fingerprint authentication with the iPhone 5S, plus the parallel development of heartbeat signature sensors (a potentially more efficient biometric identity mechanism), means that in the not too distant future you may be obliged to buy a smartphone and watch combo in order to enjoy the full benefits of each, which provides growth for the nearly-saturated smartphone market as much as addressing the "how do you create a market for smartwatches?" problem. Some analysts even anticipate the smartwatch replacing key-fobs and secure access tokens for corporate workers, which means that businesses now giving staff smartphones may soon be shelling out for smartwatches too. If that market segment reaches critical mass, then you can expect the general population to be attracted, much as happened with BlackBerry.
Another potential new market segment is health and social care. A smartwatch stuffed with bioreaders could communicate with a smartphone and from there relay far richer diagnostic data to a central monitoring team than is possible with the current pager-like devices being trialled by the NHS. Of course, OAPs and the disabled (and possibly tagged offenders) are the sort of demographic that might pollute the brand, so you can expect the "public sector" version to look ugly and be cheaper, even if it is constructed in the same Chinese factories as the aspirational Apple and Samsung models.
The key point is that the "quantified self" is coming, and it may be approaching from multiple directions. Just as the NSA/GCHQ revelations have been a great leveller, proving that the self-proclaimed tech-savvy (as opposed to the real cognoscenti) are under exactly the same degree of observation and control as silver surfers and techno-buffoons (I'm thinking Claire Perry), so the smartwatch may mark the first step towards the ubiquity of biometrics. There is a clear progression here: from the compromising of devices that we periodically use (PCs), to devices that we carry with us (smartphones), to devices that we wear (smartwatches). The logical next step would be bioreader/transponder implants, which we'll probably trial in the health, social care and penal sectors. Neural lace and The Matrix are probably some way off, but you can see the direction of travel. Personally, I'm going to stick with my steampunk Seiko.
Tuesday, 10 September 2013
Why Don't We Use Chemical Weapons?
With the possibility that they may be neutralised in Syria, now is perhaps a good time to ask why chemical weapons have not been used more extensively in warfare. The question is often asked in the context of WW2, given the precedent of mustard gas in WW1 and the fear that led to the general distribution of gas-masks at the start of the later conflict. Popular answers include a willingness on all sides to observe the 1925 Geneva Protocol banning such weapons (which may be confusing cause and effect) and Hitler's own experience of being gassed in the trenches (which didn't lead him to outlaw Zyklon-B). In fact, if you consider the Second World War to have started with the earliest aggression by the Axis powers, then chemical weapons were used by Italy in Abyssinia and Japan in China in the late 1930s. The Eurocentric view is rather blinkered.
The real reason for their limited use is that chemical weapons, particularly airborne gases, are of little military value due to their lack of discrimination and can even be counter-productive, as the tales of a change in wind causing blowback in WW1 attest. The stalemate of the trenches also explains why they were used there. The hope was that they could provide a more effective means of clearing the line ahead of another big push, shelling having proved ineffective due to digging-in, but the key enabling factor was proximity, much as it was with pre-industrial chemical weapons, such as catapulting a disease-ridden animal corpse into a besieged town or poisoning an accessible water supply. Without the trenches of the Western Front, it's unlikely that gas would have been used.
There was limited tactical use of chemical weapons during WW2, notably the incendiary napalm in the Pacific, but they were never decisive on the battlefield. Even as "terror weapons", chemical agents would not have been as deadly as the incendiaries that created firestorms at Coventry and Dresden, or as psychologically stressful as the random V1 and V2 rockets that hit London. Though the debate around the strategic value of mass-bombing continues, and in particular the balance between hitting military targets and hitting civilians, it is clear that no one thought that terrorising the population alone would win the war, or at least not until Hiroshima and Nagasaki.
Ultimately, the unwillingness of the belligerents to deploy chemical weapons during WW2 (both sides had large stockpiles of mustard gas) was probably down to a combination of their poor military utility and the fear of reprisals. Since then, they have been used occasionally, mainly in the context of civil wars: Yemen in the 60s, Vietnam and Angola in the 70s, and possibly Afghanistan in the 70s as well. Iraq used chemical weapons, mainly mustard gas and sarin, against Iran in their 1980s war, but its most infamous use was against Iraqi Kurds at Halabja in 1988.
It is claimed that chemical weapons are "too horrible, even in war", but the suggestion that there is an ethical red line is nonsense. There are plenty of conventional weapons just as horrible, and they kill people just as effectively, while the ultimate deterrence of nuclear weapons is premised on a willingness to kill more people in an hour than have died in all the wars throughout history. Some part of the visceral distaste for chemical weapons stems from the use of gas in the Nazi death camps, and there is much truth in the observation that the nuclear powers don't want lesser states to access "terror weapons on the cheap", but another reason is the realisation that while they are often ineffective on the battlefield, they are well suited for use against an adjacent civilian population. In other words, if you want to kill indiscriminately, and aren't particularly concerned about the risk of collateral civilian losses on your own side, then chemical weapons may fit the bill. This is the practical lesson of Halabja.
In this sense, Assad's recent claim that US intervention in Syria could cause the whole region to "explode" is not mere bluster. The use of chemical weapons, assuming Assad's forces were responsible, looks like a demonstration, a threat of wider chaos if the West directly threatens the regime. The red line then is not about the use of chemical weapons per se, but about the escalation of the Syrian conflict and its potential spread to neighbouring states (the metaphor of an indiscriminate gas cloud at the mercy of the winds is obvious). The dilemma for the US is that intervention might just as effectively provoke this very result, which is why a face-saving exit strategy has been engineered this week (the idea that John Kerry's suggestion was a gaffe is absurd - diplomats do not make unplanned remarks in press conferences).
I've mentioned previously that the US's policy in Iraq can best be explained by assuming that the current state, a fractious and emasculated power, was the desired outcome. In this sense, it's worth noting that the developments of the last 3 years have similarly undermined Syria as a regional power, and at a time when Egypt has been forced to the sidelines by domestic upheavals and Turkey has begun to raise its profile as a regional player. I doubt the US has a policy of deliberately "balkanising" the Middle East, but it does seem comfortable with the emerging patchwork of sectarian enclaves, just as it remains comfortable with the patchwork of Gulf monarchies and emirates.
It's worth remembering that the bogey for the West (and Israel) in the region has long been pan-Arabism, the idea that the post-Ottoman era should have led to a unified Arab state stretching from Egypt to Iraq (there was a short-lived union between Egypt and Syria from 1958 to 1961). The chief impediment to that has been the tendency of various regimes to manipulate and exacerbate religious, ethnic and tribal divisions, paying only lip-service to the idea of an Arab nation. The willingness to use chemical weapons against your own people is as much a symbol of that as the scapegoating of Copts in Egypt or the pitting of Sunni against Shia in Iraq. The fear is that whatever the outcome of the current decommissioning initiative in Syria, chemical weapons remain more likely to be used in the Middle East than elsewhere.
Friday, 6 September 2013
Dance to Your Daddy
The tale of the man asked to dance as part of his interview for a job at Currys is a multi-faceted gem of modern manners. Initial reports have veered between shock at the callousness of the interviewers and glee at the comedy of David Brent-style corporate bollocks. Some are already (correctly) pointing out that the humiliation of the interviewee is an inevitable transaction cost in an economy with high unemployment, but this will be drowned out by the stock commentaries shortly bemoaning the crass irresponsibility of managers on the one hand, and the inflexibility and sense of entitlement of candidates on the other - i.e. people are to blame.
We can also expect the candidate, Alan Bacon, to experience both fleeting fame (apparently the Sun have tactfully offered to pay him to recreate his dance moves for posterity) and a backlash for his temerity in complaining. In fairness, what he has obviously displayed is the unworldliness of a 21-year-old graduate, as indicated by this: "They told us there would be five minutes to talk about our hobbies, and I like astronomy so I had spent some money printing off some pictures I had taken through my telescope". Having lost count of the number of CVs I've read over the years, I remain bemused that some candidates still put down this totally irrelevant data. No one gives a shit what you do in your spare time, any more than they care that you've got a swimming proficiency badge.
What the vignette should emphasise is that most interviews are a farce, whether they involve role-play, Powerpoint or psychometric tests. Despite the best endeavours of snake-oil salesmen to convince us that the recruitment process can be made objective and empirical, it remains more art than science. Experienced interviewers always seek to challenge the interviewees, because you learn more about them when they are discomfited than when they are reading a script. That said, the ability to think on your feet (or just bullshit) isn't in itself sufficient qualification for any job. In truth, putting the candidate on the spot is in large part a way of staving off the boredom of listening to another career resumé ("and then I decided I needed a fresh challenge ...").
It's clear that the Currys interviewers had prioritised being a "good team-player" and a "fun guy" in their selection criteria, and that Mr Bacon was not a good fit for a frontline sales job where a thick skin and an ability to "have a laugh" (i.e. suffer daily humiliation) would be assets. To that extent, the interview process was actually successful, and probably more effective than the traditional routine of unenlightening CV walkthroughs and presentations ("what I can bring to your business") in separating the sheep from the goats. In light of Currys' decision to re-interview everyone, I wonder what the successful candidate(s) must now be thinking ("I'm an easy-come, easy-go kind of a guy, but with a passion for consumer electronics").
The wider significance of this spectacle is not just the revelation that business has the whip-hand over labour, which has been obvious for some time, but that the fetishisation of labour surplus ("going the extra mile") has extended from the workplace (presenteeism, wage restraint etc) to the application stage. Day-long "interviews" are now common, and "trial starts"/"working interviews" (i.e. working for free for up to a week) are not unusual. If the claims of the right were true, that there are plenty of jobs out there and candidates are simply reluctant to take them due to over-generous welfare, then we would not be seeing these developments. A 10-minute phone interview followed by "can you start tomorrow?" indicates a market where the demand for labour is high. A day spent jumping through hoops for the amusement of others does not.
Recruitment has always tended towards ritualised humiliation, particularly where labour is casual and can be treated more obviously as a commodity, such as the daily hiring processes of the building trade (the "lump") or pre-container docks. This has gradually crept into the world of white-collar jobs, slowly relegating the traditional modes of application that were signifiers of class: writing, wearing smart attire, speaking "proper". Some of the anxiety at the David Brent nonsense stems from a belief that middle class workers should not be treated in this way (notice the emotive word "forced" in the reports). Of course, Mr Bacon's humiliation is trivial compared to that still experienced by casual manual labour, where intrusions of privacy and presumptions of guilt are the norm during one-sided interviews.
The boundary between blue and white-collar has long been blurred, as call-centres and other low-pay service industries have turned a desk job into the modern equivalent of operating a machine loom. The real boundary in the world of work is now much higher up the pay-grade, at the point where these symbolic humiliations disappear. Interviews are a "chat", "soundings" are taken rather than references, and the institutionalised humiliations of performance reviews are "inappropriate for someone at your level". Being asked to dance to Daft Punk is just business's way of saying you're not a member of the executive club, as much as you won't "fit in" at a shop in Cardiff.
Tuesday, 3 September 2013
Tomorrow Belongs to Me(sut)
I've deliberately avoided commenting on Arsenal since the end of last season, and am even now wondering if it is still too early to make any sort of reasoned judgement (6 league games in was my mental target before the season began), but the closure of the transfer window seems an appropriate time to make a few observations (the fans who chanted "spend some fucking money" during the Villa debacle were a tad premature). The first observation is that the window is helping, along with UEFA qualifiers and staggered games, to blur the sense of a definite start to the season. I'm not a traditionalist, but I would prefer all the opening fixtures to kick-off at 3pm on the first Saturday, and for the transfer window to have closed, so we have a real sense of a beginning, rather than the current ragged procession.
Paul Lambert looks like he may be able to fashion a mid-table team out of last season's relegation-flirters, but Villa's victory at the Emirates owed as much to Arsenal's slow start as their own hard work. The goals arose from mistakes by two of our better players, Wilshere and Cazorla, neither of whom looked fully match-fit. Generally the team were a fraction too slow and appeared to be still in pre-season mode. Fears that this might be 2011 all over again were allayed by hard-working victories over Fulham and Spurs, interspersed with a highly professional two-legged performance against Fenerbahce in the Champions League qualifier.
The victory over the Lilywhites left everyone feeling much more bullish, in no small measure because it highlighted the value of team familiarity, in the face of the neighbours' transfer splurge, and the rewards for patience, notably in the case of Aaron Ramsey. As I suggested back in May, our prospects for this season would be as much about who didn't leave in the summer as who arrived. In the event, Wenger deserves credit for moving on a lot of stale players (8 former first-team squad members), though this is being drowned out by chuntering over the paucity of replacements: no new experienced striker or central defender. The signing of Mesut Ozil has lifted spirits, but some fans question whether we need another attacking midfielder.
I think this indicates that too many are still stuck with the old paradigm in which the front and back lines have equal weight with the middle. What the world record fees for Ronaldo and Bale should indicate is that attacking midfielders are the key to successful teams. In fact, the world record hasn't been held by a striker since Hernan Crespo in 2000. Arsenal were unwilling to meet Real Madrid's valuation of £30m for Higuain, and I suspect that £40m for Suarez wasn't far off their maximum bid (another tenner, perhaps). However, £42.5m for Ozil looks like an intelligent investment.
Wenger is often accused of being tactically out-thought (though anyone with eyes to see on Sunday knows he got the better of Villas-Boas), largely because he tends to focus on maximising individual performance within a team framework rather than mastering drills and set-pieces (Spurs looked like a better Stoke, and I'm not being uncharitable in saying that). A characteristic of this approach is that he develops players' ability to operate in multiple roles and to switch dynamically during the game, which places a premium on intelligent midfielders. The debate last season over whether Walcott should play centrally ignored the fact that Wenger has long asked his players to be flexible: on the wing and down the middle are not mutually exclusive over the course of 90 minutes. Mathieu Flamini, who famously thrived as a full-back before settling as a midfield terrier, is an emblematic re-acquisition in this regard, while Ramsey's stint on the wing during his recuperation should be seen as part of his education rather than "easing him back in".
Having said all that, we are (as ever) only a couple of injuries away from a problem, so another striker and a central defender would have been nice. I suspect Wenger does not think the latter an issue, as he obviously considers both Sagna and Arteta capable of playing there (and was happy to let Miquel and Djourou go on loan), while it's worth remembering that Arsenal classify both Walcott and Oxlade-Chamberlain as strikers. A David Villa or Luis Suarez would have kept the fans happy, but the bigger issue in the event of an injury to Giroud would be the loss of his aerial ability, notably as an outlet for a pressed defence (he worked his socks off in this role against Spurs). Ba for Bendtner would have been good business. With the Great Dane still exit-bound, I suspect we might pick up the thread when the transfer window reopens in January.
It's a long way to the business end of the season, so we can expect further setbacks and surprises, but the signing of Ozil, and the revelation that we really do have a war-chest after all, will probably keep the fans in a glass-half-full frame of mind for now. It's all hype of course, both the extremes of "Wenger's reached his sell-by-date" and "Ozil is the new Bergkamp" (no pressure there, then). The one thing you can predict is that Arsenal's midfield is going to be the most interesting to watch this season, which will probably leave messrs Mourinho, Pellegrini and Moyes feeling a bit sour today.
My final observation is that while Arsene is famous for buying French and Francophone African players, and much press coverage has focused on his developing a young British core in recent years, he has always had an eye for Germans, from Stefan Malz and Moritz Volz through to Jens Lehmann. As well as the current crop of first-teamers, there are promising signs with youngsters Serge Gnabry and Gideon Zelalem, both of whom made the bench against Tottenham. There would be a certain symmetry if Wenger, who hails from the borderlands of Alsace, should book-end his Arsenal career with great teams based on first a Franco-British mix and then a Germano-British one.