The trailers for Ben Stiller's Mitty (which looks pretty awful, in truth) remind me that the irresistible rise of Grant Shapps to the heights of Conservative Party Chairman has been one of the more amusing sub-plots of the coalition government. The arch-fantasist has become the champion of small capital, not to mention the hammer of Labour and the BBC. If you wanted to construct a fictional dodgy-dealer - part Arthur Daley, part Kim Dotcom - to give the small business sector a bad name, then you'd probably reject Shapps's history of inept content-scraping, sock-puppetry and multiple pseudonyms as way too implausible. Though he escaped prosecution for fraud, he is clearly an unscrupulous idiot with a cavalier attitude towards the truth. He makes the delusional psychosis of Jeffrey Archer, a former Tory Chairman, appear almost benign.
Shapps's fast-and-loose style is evident in his latest thoughts on the positive role of small businesses, where he expands on the traditional myth that SMEs power the economy (they don't) to claim that they are also the facilitators of social mobility: "[it is] flourishing businesses – the opportunity of owning, running or working in one – that ultimately help people to escape the circumstances of their birth" (that use of the cliché "circumstances of their birth" had me thinking fondly of the Master of Grantchester). In support of this bold claim, he refers to a World Bank study that links "the rise of small business ownership among women in India with greater legal and political rights" and also points to Eastern Europe, "where small firms are repairing the damage left by decades of socialist stagnation".
No link is provided for the Indian "study", though it is likely (if it exists) to relate to the growth of micro-finance. Though this can provide the seed-corn for a new business, most loans are used for consumption - i.e. this is mainly micro-credit rather than capital investment. The growth of the Indian economy over the last thirty years has been powered by globalisation, beneficial demographics, and improvements in productivity. Flourishing SMEs are a consequence, not a cause. The claim in respect of Eastern Europe is simplistic, ignoring the existence of small businesses in Hungary and Yugoslavia before 1989, as well as the mixed economic record since. The reparative contribution of SMEs in Russia in the 90s was negligible, in the face of oligarchic looting, while the story of the former GDR is of West German corporations buying up and often dismantling large businesses, with the countervailing growth of SMEs dependent on state-directed investment, notably via the KfW development bank.
Shapps's fairy story also ignores the reality of how small businesses are formed. While some skilled workers are able to translate their experience (often acquired working for a large business) into self-employment, this rarely leads to the employment of others. 75% of the businesses in the UK in 2012 had zero employees (see table 1, page 3). SMEs with employees tend to require working capital to start and grow, which immediately gives an advantage to the already wealthy or those with access to credit (i.e. possessing assets for collateral or having guarantors). The stirring tale of "entrepreneurs and inventors who work all hours in their own garage [and] ex-apprentices who start out with a mobile business" is a consoling fiction for wage-slaves. Most SMEs fail, and those that start without premises or in a garage tend to fail faster.
To be fair to Shapps, even Labour MPs are suckers for tales of the little guy beating the odds: "There are countless examples of working-class people climbing a ladder of opportunity through high street businesses that have gone on to become international success stories". They seem unable to understand that once a business expands it denies market share to others. An "international success story" is usually a chain store that drives out local variety. This is not necessarily a bad thing - so long as the better drives out the worse - but it does not grow the population of successful entrepreneurs who started out as market traders, it actually shrinks it.
Small businesses are inherently inimical to social mobility, if viewed from the perspective of someone seeking to advance from the working class to the middle class, though they can provide a route for progression through the various levels of the middle class - i.e. from the petit bourgeois to the bourgeois, in classical terms. A talented working-class kid is more likely to progress to the middle class by working for a large company, e.g. by reaching middle management or a professional role, than by working for or founding a small business. The great enabler of social mobility in the 1945-75 era was the expansion of the public sector.
At times, Shapps's chutzpah can only be admired. For example: "Businesses create every penny of the wealth we need to pay for our nation's schools, our NHS and our pensions". The obvious rejoinder is that wealth is created by people, i.e. workers, not by businesses. Businesses create nothing, because that is not their purpose. Their function as legal entities is to distribute wealth. This is why company law deals with shares, dividends, limited liability, and social obligations such as tax and externalities like pollution. This is not a pedantic point. By privileging businesses (wealth distributors) over workers (wealth creators), we privilege business owners (capitalists) who are in a position to secure the lion's share of profit.
Shapps ends his eulogy thus: "Conservatives don't love business for some abstract reason. We love it because of what it offers our children. Hope". If you substituted "inheritance" for the last word, you'd have a more honest statement of the Shappsian worldview.
Tuesday, 24 December 2013
Blood and Honour
The consensus is that last night's game against Chelsea won't live long in the memory. A combination of difficult weather, a negative opposition and some indulgent refereeing made it an unattractive spectacle. Putting John Terry on the pitch was just rubbing salt in the wound. The only fun to be had was the subsequent sight of Jose Mourinho auditioning for UKIP, with his "English blood is best" speech. If Sam Allardyce came out with that, he'd be crucified. Of course, the Portugeezer's chief attraction for many is not his ability to grind out the points, but his talent for distraction, not least in respect of the failings of owners and players. Chelski were thuggish on the field and off it (their fans singing about 96 dead scousers was particularly classy). Mourinho's jibe that Arsenal players "cry" was bizarre - the only time a player kicked the ball out of play was when Ramsey did so after Ramires collapsed in a heap. Of course, bullies always see compassion as weakness.
17 games in is just short of the half-way mark, but I thought an update on my clairvoyant powers would be appropriate now, before the commentariat herd gather at the halfway waterhole. Liverpool and Arsenal are on 36 points. A straight-line extrapolation would suggest that the champions will finish on 80 points, which is in line with my prediction after 11 games. At this stage last season, Manure had 42 points, while City were second on 36 and we were on 27. We then produced 46 points from the remaining 21 games, which, if repeated, would deliver a total points haul of 82. In the 2011-12 season, City were top with 44 and Manure second on 42. We made 38 points from the remaining 21 games. If repeated, that form would get us to a total of 74, which would at least ensure another season in the Champions League. I think we can dare to dream, so I'm still predicting a final tally of 78-80 points.
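For anyone who wants to check the arithmetic, here is a minimal sketch of the straight-line sums above; the figures are those quoted in this post, and the little helper function is my own invention rather than anything more scientific:

    # Straight-line projection: assume the points-per-game rate so far holds for all 38 games
    def extrapolate(points, games_played, total_games=38):
        return points / games_played * total_games

    print(round(extrapolate(36, 17)))  # Liverpool/Arsenal pace after 17 games: ~80 points
    print(36 + 46)                     # repeat of last season's run-in (46 from 21 games): 82
    print(36 + 38)                     # repeat of the 2011-12 run-in (38 from 21 games): 74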
While there are plenty of tough periods ahead - Liverpool and Manure back to back in February and Chelsea and Man City back to back in March - I think we'll do reasonably well in these "big games", probably 6 points from 12, which will keep us on track. We owe both the Manchester teams a strong performance at the Emirates, while our track record at Anfield and Stamford Bridge has improved in recent years. The cups are likely to be less of a distraction than usual. Few expect us to progress against Bayern, though I fancy us to squeak through on away goals, which will be deliciously ironic given Wenger's recent comments, probably by winning 1-0 at home and losing 2-1 away. The FA Cup is in the lap of the comedy gods. Spurs under Tim Sherwood may discover their roots, which means beating us (Adebayor to score), getting to the final and losing, and finishing mid-table.
In terms of the squad, a number of players have seen a dip in their performance over recent weeks (Giroud, Ramsey, Wilshere and Ozil), while others are only starting to get back to their best (Walcott and Cazorla). Podolski and Oxlade-Chamberlain have yet to get their season started. It was interesting that Wenger chose not to make any substitutions last night, suggesting that he doesn't think any players are approaching the "red zone" of exhaustion yet. As the training regime is geared to reaching peak fitness and stamina in the New Year, this suggests that we're in good physical shape. I suspect Ozil may get a couple of games off over the next fortnight to recharge his batteries, while Podolski should allow Giroud to limit himself to 60 minutes for a few matches. Once we get past game 19, transfer speculation will go into over-drive, which might be the Frenchman's cue to start banging in the goals again.
Overall, we should be pleased with the way that the season has gone so far. We're better than we were last year, and there are signs that we can be better still. 2 points from the last 9 is obviously disappointing, but against tough opponents we were only outplayed once, and the 6-3 scoreline against Man City reflected an atypically poor defensive display by us rather than their habitual free-scoring. They look like the one team that could draw away from the chasing pack, but they also look vulnerable at the back - a classic City team, in other words. They, rather than Manure, Chelski or Liverpool, remain the obvious threat to our ambitions. If we're still in the mix come our return fixture on the 29th of March, then we'll have our destiny in our own hands.
There's a definite retro air to the Premier League at present: ugly Chelski, inconsistent Citeh, efficient Liverpool, struggling Manure, tough Everton, laughing-stock Spuds. Arsenal look like a squad that is gelling, with an improving defence, an inventive midfield and an under-appreciated striker. It's just like 1989 all over again. The shit-for-brains Chelsea fans who sang about Hillsborough don't know what spirits they may be invoking. My money's on the honourable Aaron Ramsey to provide the Michael Thomas moment when we get down to the wire.
Friday, 20 December 2013
The Big Boy Made Me Do It
Tuesday's meeting between Barack Obama and the technology companies was an object lesson in corporate manipulation. The hot topic was always going to be government surveillance, given a federal judge's ruling on Monday that the NSA's activities were probably illegal, but it suited the companies' purpose to claim that Obama's decision to start the session by announcing the appointment of a senior Microsoft guy to help with the healthcare.gov site (an inspiring example of cooperation, in other words) was an attempt to avoid the subject.
The reality was a guest list, prepared by the White House, that included almost no companies that could meaningfully contribute to a debate on a systems integration challenge. Microsoft, who could, sent their chief legal officer, while Oracle and Cisco, who had attended the first "tech summit meeting" in 2011, were absent. It strains belief to think that the White House was dumb enough to imagine the best brains to pick in respect of healthcare.gov would be the CEOs of Apple, Twitter, Yahoo, Google, AT&T, Facebook, LinkedIn, Dropbox, Netflix etc. If they really wanted to hijack the meeting, they'd have done better to invite Larry Ellison back. The NSA fallout was always going to be the main topic of debate.
The media spinning was intended to suggest that Obama is being dragged reluctantly to the negotiating table by Silicon Valley, this despite the fact that a Presidential panel would report on Wednesday recommending curbs on the NSA. The New York Times quoted the non-attending CEO of CloudFlare, a website optimisation company, to the effect that both "sides" were suffering from a loss of trust: "If you’re on the White House side, the issue is they’re getting beaten up because they’re seen as technically incompetent. On the other side, the tech industry needs the White House right now to give a stern rebuke to the N.S.A. and put in real procedures to rein in a program that feels like it’s out of control." The suggestion is that the tech industry is making the running on curtailing surveillance abuses, rather than being an accessory to the crime.
The relentless propaganda of the right, endorsed by Silicon Valley ideologues, is that government cannot be trusted to act in the best interests of citizens and that it is inherently incompetent. In contrast, private businesses spend a lot of advertising money extolling their own trustworthiness and insisting that the need to serve customers keeps them honest (their anxiety about reputational damage is largely PR, as customers rarely exercise effective sanction). The Snowden revelations have given the lie to this, both in the evidence of industry connivance and the competency of government. The current campaign seeks to paint a picture of a freedom-loving industry facing off against an intrusive state. David bullied by Goliath.
Following the meeting, the tech titans issued a statement to a grateful world: "We appreciated the opportunity to share directly with the president our principles on government surveillance that we released last week and we urge him to move aggressively on reform". Those high-minded "principles" naturally do not question the rights of the technology companies to exploit personal data, and even go so far as to insist that states should harmonise "conflicting laws" and reject "Balkanisation" (i.e. support global monopolies). In other words, the usual neoliberal agenda. The peremptory nature of the wording ("The undersigned companies believe that it is time for the world’s governments to address the practices and laws regulating government surveillance of individuals and access to their information") is telling. They'll be demanding a Nobel Peace Prize next. After all, Obama's got one.
While there is a sense that the debate in the US is predicated on the assumption that people are idiots who can't distinguish between the rights of citizens and self-interested corporations, this remains a significant improvement on the level of the debate in the UK, where the attitude of the state is that we are not merely stupid but untrustworthy too. Alan Rusbridger, the well-known country-lover, highlights the irony: "This muted debate about our liberties – and the rather obvious attempts to inhibit, if not actually intimidate, newspapers – have puzzled Americans, Europeans and others who were brought up to regard the UK as being the cradle of free speech and an unfettered press". Of course, Rusbridger is a romantic liberal, so he actually believes in such unicorns as "an unfettered press".
The crudeness of the British state's attitude towards digital surveillance, exemplified in the view of the former Head of GCHQ that the security services should not be accountable to Parliament, is not the result of the lack of a constitution or formal rights, but the consequence of the irrelevance of the domestic IT industry (Silicon Roundabout isn't demanding a summit with David Cameron) and the waning power of the press. Rupert Murdoch's baleful legacy is not simply page 3 and phone-hacking but the corrupt interconnectedness of journalism, police and politics. The press has become an ever more obliging propaganda arm of government, and exposés of expense-fiddling parliamentarians are little more than spiteful attempts to assert their nominal independence.
The purpose of Tuesday's meeting was to get the US government to agree to rein in state surveillance, but not to go so far as to challenge the right of the technology companies to exploit personal data. Neither "side" wishes to jeopardise Silicon Valley's global dominance, though they recognise that some concessions will have to be made to neoliberal power blocs elsewhere, notably the EU. Once a satisfactory protocol is agreed, the "debate about our liberties" will become as muted in the US and Europe as it is in the UK today. Our proud boast in Britain is that we are "open for business", which means that we have no significant domestic Internet industry to champion and an aversion to prosecuting multinational technology companies for tax dodging. We are well ahead of the game in terms of neoliberal compliance.
Tuesday, 17 December 2013
No More Triumphs
David Cameron has apparently declared "mission accomplished" for British troops in Afghanistan. Pedants might point out that he was responding to a journalist's prompt, so the term is a plant with an eye to a headline, but this is incidental. The meaning of the spectacle is captured in the use of the word "declared", with its cricketing connotations of the captain's judgement. The Prime Minister has decided that we have done enough and it is time to quit the field. Just to reinforce the point, he was even accompanied by a celebrity retiree in the person of Michael Owen.
In an earlier age, the term "mission accomplished" would have been voiced by a military leader reporting to the civil power that had commissioned him. This was not merely confirmation that the will of the state had been successfully executed, but that the powers temporarily vested in the military had now been formally returned. The model for this was the practice in classical Greece and Republican Rome of treating warfare as an episodic activity requiring limited sanction. In the modern era of citizen armies, conscription and demobilisation became the ceremonies that marked these power exchanges, with triumphal processions and laurel wreaths going out of fashion in democracies in favour of impromptu kissing couples in public squares.
The ragged wars of the post-1945 period could not be characterised as successes by the West, and usually lacked call-ups and demobs, so there were few opportunities to claim "mission accomplished" until Thatcher decided to buck the trend with a Roman triumph as the armed forces marched into the City in 1982 to celebrate victory in the Falklands. But if that was a throwback in style, it also marked a return to the celebration of political will as much as military accomplishment. For all the praise of the fallen and the scarred, the chief message was clearly "we were right, and don't anyone say otherwise". The triumph was less a military conclusion than a political validation.
This neocon fashion reached the height of unintentional parody in 2003 when George W Bush did his Top Gun impression on the USS Abraham Lincoln beneath a banner with the legend "Mission Accomplished". The irony was that the military (rightly) did not consider that the mission in Iraq was anywhere near accomplished. The White House subsequently claimed that the banner's meaning was taken out of context, but their failure to spot the obvious hostage to fortune was a pretty good indication of both their hubris and the casual contempt for facts in the "war on terror".
Cameron's declaration is less vainglorious, but it is no less of a stunt. It is not difficult to make the case that the original objectives of Britain's involvement in Afghanistan have not been met, but the government has now reframed the brief to match the current reality, specifically bumping the Taliban training camps into Pakistan. According to the Prime Minister: "the purpose of our mission was always to build an Afghanistan and Afghan security forces that were capable of maintaining a basic level of security so this country never again became a haven for terrorist training camps". Job done. Let's get out of here before that "basic level" crumbles.
Given the current cuts in military spending, and the determination of most politicians to spunk an increasing slice of what is left on a Trident replacement, not to mention the emerging money-pits of cyber-warfare and drones, British troops aren't likely to see much action over the next decade or so (bumping the al-Qaida franchise out of Syria will be pursued via proxies). The current sentimental mood of Help for Heroes and tabloid-sponsored award ceremonies will linger on, but we'll gradually revert to the British tradition of seeing squaddies as crypto-hooligans whom it's best to steer clear of.
Just as Obama's kill-list symbolises the increasing bureaucratisation of war (and the militarisation of the bureaucracy), Cameron's downbeat triumph symbolises the increasingly trivial nature of the decisions taken to enter and exit wars. It's just business. With the ever-present background hum of total surveillance and cyber-warfare already in place, we are now in an era of permanent, low-level conflict, preferably at arm's length. The exceptional powers of the military have been arrogated by the state as a none-too-subtle way to bypass democracy. In such a world, we have no need of triumphs.
Friday, 13 December 2013
Walter Shows the Way
The big news in the world of wacky baccy this week was the decision of Uruguay to partially legalise marijuana, which has resulted in apparently sober calls for the nation to be awarded the Nobel peace prize (I love you too, man). Ultimately this is a sideshow. Perhaps of more significance is the suggestion that the UK government may look to regulate rather than ban synthetic drugs, though I suspect they'll pass up this opportunity in the short term. The emblematic role of drugs in the tabloid press (see Nigella Lawson) means that we'll probably be a late adopter as far as liberalisation or decriminalisation is concerned, but the fact that it is being considered indicates that a shift in attitude is under way. The key word is "regulate".
Perhaps the most overt sign of this change in popular culture has been Breaking Bad. Because of their length, TV drama series tend to require a lot of supporting comment - the auxiliary of constant blather intended to make you watch the programme and thus the adverts. Drama provides more grist for this than comedy, which suggests that the rise of social media has been decisive in creating the current "golden age of TV". With the exception of those aimed at knowing niche audiences (The IT Crowd, The Big Bang Theory), sitcom has been on the slide since 2006 and the birth of Twitter. If we'd imagined microblogging in the 90s, we'd probably have thought that the dissemination of jokes would be a killer app, but it turns out that the TV cheese for this particular wine is the traditional water-cooler guff of dramatic reveals and shouting at talent shows and politicians. It's reassuringly like Drury Lane in the eighteenth century.
Though much ink and many bytes are spent explicating the "narrative arc" and the moral quandaries of the central characters, the key meaning of these dramas can be found in the mise en scène, which doesn't tend to change much from beginning to end. Thus The Sopranos was a study of an SME in self-destructive and terminal decline, while The Wire looked more widely at institutional failure. Breaking Bad suggests that drugs might be a domestic manufacturing industry of the future, with a bit of luck. Despite the thick icing of morality and symbolic violence, all of these series are worrying away at industrial decline in the US and its social consequences. In Walter White's fall from grace as a chemistry teacher, there is a recognition that recreational drugs are the misapplication of a noble calling. The implication is that a small shift in the law (remember Prohibition) could make this a respectable business.
The background to this spectacle is international economic negotiations, both the global efforts coordinated by the World Trade Organisation and regional initiatives such as the Transatlantic Trade and Investment Partnership and the Trans-Pacific Partnership. The WTO was created in 1995 as an institutional upgrade on GATT, the General Agreement on Tariffs and Trade, a rolling series of negotiations that was started in 1947 with the aim of avoiding the protectionism and autarky that had scarred the 1930s. Though GATT remains active, in the form of the outstanding Doha round, the global focus has long since shifted from the reduction of tariffs towards the harmonisation of regulations, notably in the areas of commercial services, intellectual property and foreign investment. The last GATT agreement before the creation of the WTO was, coincidentally, the Uruguay round, which ran from 1986 to 1994 (the duration, longer than the Congress of Vienna, is indicative of the scope and detail of these negotiations as much as the difficulty in securing agreement).
Despite the regular use of the words "trade" and "tariffs", and the implicit valorisation of "free trade", international agreements since the Uruguay round have had less and less to do with the traditional exchange of raw materials, agricultural produce and manufactured commodities. The objective in the neoliberal age has been to extend the rights of multinational corporations in the areas of intellectual property, investor-state dispute settlement (ISDS, i.e. the rights of foreign investors to trump domestic legislation), and the regulation of regulation (i.e. ensuring that domestic laws are harmonised to the satisfaction of global capital). As Dean Baker says, "the dirty secret about most trade negotiations today is that they aren’t really about 'conventional barriers to trade' any more. 'Non-tariff barriers', which get most of the attention in trade talks these days are a euphemism for differing national approaches to regulation".
While a lot of the criticism directed at these negotiations focuses on the anti-democratic implications of ISDS, the really big issue is intellectual property. The Trans-Pacific Partnership (TPP) is in large part targeted at extending US IP rights in South East Asia, where a large proportion of the world's "knock-offs" currently originate. But the scope of this goes beyond bootleg copies of The Hobbit. As Walter White has shown, we now have the technology to create knock-off drugs. As well as crystal meth, we can safely produce mildly psychoactive agents with minimal harmful effects (certainly less harmful than alcohol). If Big Pharma doesn't do this, then the market will be left to "unregulated" and "unscrupulous" producers in Vietnam and Mexico. It should be obvious that the gradual extension of IP rights is preparation for the decriminalisation of drugs, not to mention the ubiquity of high-profit GM.
There are many who cheer the Uruguay decision because it proposes nationalisation, rather than the regulation of a free market, and thus the adoption of a more socially-embedded response to the collateral damage of the drugs trade, but what they fail to appreciate is that this is only possible because there is no patent on cannabis or THC. In the future, the rights of nation states to manage their drug policy and direct their drug industries will be constrained by the rights of Big Pharma, who will own the "good stuff".
Monday, 9 December 2013
Counting Peanuts in the Monkey House
How much should we pay MPs, given that there are no objective criteria to determine their salaries? There isn't a market that can provide a "clearing price", and the suggestion that candidates should make salary bids part of their election manifesto would institutionalise corruption. Nor is there any obvious measure of productivity, unless you want to encourage a greater output of poorly-scrutinised bills. Even international benchmarks are unhelpful as there are widely differing approaches to pay, expenses and outside earnings. The popular answer is that their pay should reflect our subjective opinion of their relative worth, which is a social judgement, not a market evaluation - i.e. closer to a form of barter than monetary exchange, so one MP equals three estate agents, or 0.75 of a brain surgeon.
This belief is founded on an assumption about class and rank, hence the tendency to use roles with traditional social standing, such as lawyers, doctors and headteachers, as the peer group. The problem is that 30 years of neoliberalism has seen these roles transformed in two ways. First, they have benefited from widening income inequality as part of the upper half of the middle class. As the earnings of the top 1% have accelerated away, this has helped drag the earnings of the next 9% up. Second, they have become more internally polarised due to the increased rewards to "leadership" and the mainstreaming of bonuses. Are MPs now to be compared to corporate lawyers, heads of commissioning GP consortia and academy "superheads", or are they to be compared to legal aid solicitors, locum doctors and the heads of comprehensive community schools?
Fearful of drawing too much attention to the growth in inequality before 2008, all of the parties acquiesced in an informal stitch-up that saw headline pay restrained while incomes were boosted through liberal office allowances, nod-and-a-wink expenses and generous pension contributions. Meanwhile, in the spirit of the age, MPs continued to pursue their outside commercial interests, and in some cases decided to combine the two domains through lucrative lobbying and "cash for questions".
The current thinking of the Independent Parliamentary Standards Authority, which now sets MPs' pay, appears to be that it should be linked to average earnings, with a multiple of around 3. This contrasts with a multiple of 5.7 when pay was first introduced in 1911, though this was soon corrected by inflation during WW1. The multiple was around 2.5 during the 1920s, and then wobbled around 3.25 between the mid-30s and the oil shock of 1973. It dropped to 2.5 during the era of high inflation and public pay policies (and after expenses were separated out in 1971), jumped up to 3 during the Major years, and has gradually declined since then to around 2.7 now (see chart below). On the face of it, they have a reasonable argument for an uprating now to a multiple of 3.
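As a rough sanity check (illustrative arithmetic on the figures above, not IPSA's own workings), moving from a multiple of roughly 2.7 back up to 3 implies a rise of about 11%, which is broadly the increase that has been proposed:

    # Moving MPs' pay from ~2.7x mean earnings to 3x mean earnings
    current_multiple = 2.7
    target_multiple = 3.0
    implied_rise = target_multiple / current_multiple - 1
    print(f"{implied_rise:.1%}")  # ~11.1%, in line with the proposed increase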
The problem is that "average earnings" in this model is the statistical mean, not the median. In other words, the aggregate of everyone's earnings divided by everyone who works, rather than the point at which 50% of the population earn more and 50% earn less. As the FT noted when this pay rise was first mooted in July, "Mean earnings have grown faster than median earnings since 1980, largely because of higher pay at the top of the income scale. In other words, Ipsa has made its measurement more sensitive to the rise in income inequality". Hooking their pay to median earnings might be fairer in terms of relative worth, but it would condemn MPs to an ongoing decline relative to the top-end of the income scale if inequality continues to rise. The choice of the mean is therefore a vote of no confidence in the prospect of inequality narrowing any time soon.
There are many drivers of unequal earnings growth: technology has increasingly automated median-skill roles, leading to job polarisation; globalisation has created competitive downward pressure on unskilled wages while raising the premium for high-skill roles; and politicians have lightened the tax burden on the wealthy. But perhaps the most emotionally significant for MPs has been the exemplary role of bankers since the 1980s. This has not only set the bar high in London, it has had a cascade effect through related employment sectors such as law and business services, from which MPs are disproportionately drawn. "Why shouldn't we have some of that?" isn't an edifying motive, but it's perfectly understandable.
One of the features of neoliberal corporate practice has been its apparently schizoid attitude towards rank and status. First names are used, ties may be dispensed with, and a flatter organisation chart appears (as mid-tier roles evaporate). Yet at the same time there is greater reliance on independent salary benchmarking (you're no longer just competing with the firm up the road) and executive remuneration committees (whose wisdom is little more than an old saw about peanuts and monkeys). Though this is justified by reference to "paying the going rate", it is clear that the primary driver is status, a social currency, rather than market pricing.
The reported distaste of politicians for the proposed 11% pay increase is not mere hypocrisy; it also reflects embarrassment at the growth in income inequality during the neoliberal era, and a realisation that after the 2008 "setback", this unequal growth has returned and that the chief beneficiaries are many of the same bankers, corporate lawyers and executive chancers who filled their boots the first time round. Galloping price inflation at the top end of the London property market is not just the result of Chinese investors buying up Battersea Power Station for its excellent Feng Shui. The flats there will be very handy for Westminster.
Saturday, 7 December 2013
An Inspiration to Lawyers Everywhere
The first time I came across the word Apartheid may have been in Arthur C Clarke's 1953 novel, Childhood's End, which I think I read when I was around 11 or 12, so about 1972. Superior aliens, the Overlords, turn up out of the blue to stop the Cold War and save humanity from extinction. They insist that this will only be a watching brief (their shyness is eventually explained by their resemblance to traditional European images of the devil), and that they will keep their interventions to a minimum. The two notable exceptions are to stop the bloody killing of the whites in South Africa, following the collapse of Apartheid a few years earlier, and the bloody killing of bulls in Spain, which Clarke obviously felt had gone on long enough.
South Africa also featured in another SF classic, Michael Moorcock's The Land Leviathan of 1974. In an alternate early twentieth century, the republic is an enlightened outpost of democracy and racial harmony, whose president is the former lawyer, Mohandas K Gandhi. Moorcock's Oswald Bastable books are now seen through the prism of what would subsequently be pigeon-holed as Steampunk, and consequently as works of techno-whimsy, but they were actually a satire on colonialism and the compromises that liberalism makes with it. Though set in an Edwardian world of imperial self-confidence and Fabian social progress, the critique had a sharp, contemporary resonance in the 70s when the reactionary right still urged that we should sympathise with the predicament of a white minority faced by communist encirclement without and the "immaturity" of blacks within.
The transformation of the ANC from part of the problem to the basis of the solution is now attributed to the dignity and forbearance of Mandela and his imprisoned colleagues, aided and abetted by the wider anti-apartheid movement, but this was actually the product of more profound forces, notably the global triumph of neoliberalism. I recall meeting a South African businessman in the early 80s who assured me that change was inevitable, partly because disinvestment and sanctions were hurting, but more because the inefficiencies of the system were holding back capital. Apartheid prevented the growth of a larger consumer society, and it stopped industry making full use of the available talent. It just wasn't good business. While the Afrikaner small capitalists, farmers and state functionaries were in two or three minds, symbolised by the lunacy of the Bantustan strategy and the AWB, the predominantly "anglo" big capitalists were largely reconciled to the inevitability of majority rule after the Soweto Uprising in 1976. It was a matter of cutting a deal that would keep the country open to international capital and marginalise the SACP.
In 1982 Mandela was moved from Robben Island to Pollsmoor Prison, which (it subsequently transpired) was the first fruit of the unofficial negotiations opened between the Apartheid regime and the ANC that would culminate in his release in 1990. Exploratory discussions between "people of goodwill", whether through deniable back channels or semi-official "Track II" NGOs, are a key modus operandi of neoliberalism. Where the 50s and 60s had been marked by a reluctance to talk except under duress, symbolised by the absurd "hot line" (and parodied by the Batphone), the era since the 70s has been one of promiscuous chat on the back of increased trade and travel, improved communications technology and globalisation. The strong commercial slant has fed back into the language of politics and diplomacy, thus "conferences" and "treaties" have been updated to "talks" and "deals", and the official products of negotiation are often aspirational and hazy: words like "openness", "reconciliation" and "commitment" feature a lot. The real promise is always more talks, more chat, more sidebar business opportunities.
From our vantage point today, it is clear that big capital was the winner in South Africa in the 90s and 00s. An inefficient and debilitating racial divide was replaced by a more efficient but equally debilitating class divide. In some respects, the fate of the ANC was the result of its leaders being lawyers, very much in the tradition of the Edwardian Gandhi, if not exclusively committed to non-violence. At the same time that Algeria was undergoing a bloody war of independence, the ANC was fighting the long-drawn-out treason trial of 1956-61. It is little remembered now, but the founding of the Pan Africanist Congress (PAC) in 1959, and the Sharpeville massacre in 1960, were seen by many contemporaries as reproaches to the strategy of the ANC. Paradoxically, the jailing of Mandela and other ANC leaders after the Rivonia trial in 1963 reinforced their pre-eminent role in the struggle. Had they been released, they might have been marginalised by more militant elements in the townships.
As the needs of capital increasingly pointed towards the dismantling of Apartheid, Mandela increasingly became a symbol of hope and his eventual release a promissory note of change, but with the specifics left suitably vague. The deferred gratification of "hope" was a leitmotif of the times, from Berlin in 1989, through New Labour in 1997, to Obama in 2008. Since then, we have realised the extent to which neoliberal society was based on illusory hope: that incomes and house prices would keep on rising, that education would pay, that ability would determine success. One of the best films of the immediate post-crash era was 2009's District 9. Though most people interpreted it as a specific parable of Apartheid, it was actually a universal parable of class and its fragility, with the white protagonist's accidental infection, and the instrumental attitude of his employer and family, forcing him into the underclass as he transforms into an alien "prawn". Hope had turned to fear.
When I first saw the film, I recalled Childhood's End because of the South African connection and the hovering mother ship, though the aliens are quite different. Whereas the "prawns" of District 9 are troublesome proles, the Overlords can be read as a prescient metaphor of neoliberal interventionism (the image of Tony Blair as a horned devil will obviously please some). Michael Moorcock's vision of an alternate South Africa was obviously ironic, but in one respect he too was prescient in imagining a society whose figurehead and moral compass was a crusading lawyer. What he perhaps didn't anticipate is that it would be the corporate lawyers who would ultimately be the power behind the throne.
Wednesday, 4 December 2013
Drone Alone for Christmas
The announcement that Amazon are thinking about using lightweight drones for deliveries has been variously dismissed as a stunt to boost pre-Christmas sales, a distraction from their dodgy record on employee conditions and tax avoidance, and a rather laboured geek joke. Predictably, various "business commentators" and "legal experts" have taken the proposal seriously and started to opine about its feasibility and impact. Equally predictably, the Interwebs have had a field day pointing out the many and various problems, from Americans shooting them out of the sky in defence of their constitutional rights, to over-eager family pets colliding with rotor blades. First prize goes to this little beauty:
The Amazon announcement is just a bit of nonsense at this stage, but the response to it indicates the extent to which we have already become reconciled to the idea that drones will be whizzing round our skies in the near future. In practice, their main non-military use will be as mobile CCTV. They're not well-suited to delivering bottles of wine, or getting close to humans, but they're excellent for surveillance.
The drone is also emblematic of Amazon's intention to automate as much of their operation as possible. The shitty terms and conditions of their distribution staff are simply a reflection of that staff's planned obsolescence. Amazon are not a "value-add" business, in the sense that they can charge an increment on costs for a better service. Despite all the paeans to their convenience, online shoppers want their goods cheaper than they would find in bricks-n-mortar shops. Consequently, Amazon must leverage their size to drive down wholesale prices, pare overheads (mainly distribution) to the bone, and encourage volume purchases (the marginal profit on the second or third item in a delivery is greater than the first).
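As a rough illustration of that last point, here is a toy calculation with made-up margins and costs (nothing from Amazon's actual accounts): the delivery overhead is incurred once per drop, so every additional item in the basket only has to cover its own picking cost.

```python
# Toy numbers, purely illustrative: profit contributed by each item in a single delivery.
def marginal_profit(item_margin, pick_cost, delivery_cost, position):
    """Profit from the item at a given position in the basket."""
    profit = item_margin - pick_cost
    if position == 1:
        profit -= delivery_cost  # the first item carries the whole delivery overhead
    return profit

for n in (1, 2, 3):
    print(f"item {n}: marginal profit £{marginal_profit(4.0, 0.5, 3.0, n):.2f}")
# item 1: £0.50; items 2 and 3: £3.50 each
```

Hence the nudges towards add-on purchases and free-delivery thresholds: the economics only work at volume.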
Given this business model, drones are a poor investment. While they appear to reduce the need for a delivery guy, this will simply shift the cost for labour elsewhere, i.e. to more expensive drone operators or mechanics. The systems could be designed to be wholly autonomous, which is feasible in terms of avoiding other drones and reaching a GPS-guided location, but this would present major issues on arrival where all possible obstacles could not be planned for (e.g. getting into a block of flats, avoiding that yapping dog, etc). Drones also lack the carrying capacity required to reduce overheads, unless they are scaled up to a level where fuel costs would make them more expensive than a road vehicle. The truth is that a van and a driver will remain a better choice for a long time to come.
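For a sense of the gap, here is a toy cost-per-parcel comparison using assumed figures (no real operating data from Amazon or anyone else): a van spreads its daily cost over far more drops than a short-range, low-payload drone can manage.

```python
# Assumed, illustrative figures only: daily cost and daily drop capacity per delivery method.
van   = {"parcels_per_day": 120, "daily_cost": 250.0}  # driver + fuel + vehicle (assumption)
drone = {"parcels_per_day": 30,  "daily_cost": 180.0}  # operator share + maintenance + energy (assumption)

for name, d in (("van", van), ("drone", drone)):
    print(f"{name}: £{d['daily_cost'] / d['parcels_per_day']:.2f} per parcel")
```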
If you abstract the Amazon model to purchase-pick-delivery, then they have automated purchase (through a website) and are well on the way to automating picking (i.e. what happens in their distribution centres). The stage least viable for automation, because it contains the point where unpredictable interaction with the buyer is inescapable, is delivery. The logical approach here would be to outsource as much of this to the buyer as possible. For that reason, Amazon's use of pick-up points and low-tech lockers is probably more significant than their championing of drones.
Sunday, 1 December 2013
Secular Stagnation as a Software Glitch
The big noise in the econoblogosphere over the last fortnight has been the reaction to Larry Summers' reintroduction of the concept of "secular stagnation", the idea that all is not well with capitalism and that we may need to get used to low growth and persistent unemployment (or underemployment). Summers first notes a dog-that-didn't-bark oddity of the economy prior to the 2008 crisis: "Too easy money, too much borrowing, too much wealth. Was there a great boom? Capacity utilization wasn't under any great pressure. Unemployment wasn't under any remarkably low level. Inflation was entirely quiescent. So somehow, even a great bubble wasn't enough to produce any excess in aggregate demand". In other words, the new economy was a bit pants.
One could arguably extend Summers' description across the entire period of the "Great Moderation", back to the mid-80s. Though there was volatility in specific assets and interest rates, due to well-known local conditions (e.g. UK house prices and interest rates in the early 90s, the US dotcom boom in the late 90s etc), volatility at the macroeconomic level, i.e. GDP and inflation, was low. The industrial restructuring of the early 80s did not lead to a step-up in GDP growth across the developed world (let alone wealth "trickle-down"), but rather a regression to the postwar mean (2.6% in the UK), while unemployment stayed high. If we manage to hit that rate of growth in the UK by 2018, ten years after the crash, it will be hailed as a triumph.
Summers then turns to another puzzle, the aftermath of the successful attempts in 2009 to "normalise" the financial system: "You'd kind of expect that there'd be a lot of catch-up: that all the stuff where inventories got run down would get produced much faster, so you'd actually kind of expect that once things normalized, you'd get more GDP than you otherwise would have had -- not that four years later, you'd still be having substantially less than you had before. So there's something odd about financial normalization, if that was what the whole problem was, and then continued slow growth". In other words, where was the bounce back once Gordon & co saved the world?
The concept of secular stagnation was originally popularised by the US economist Alvin Hansen in the 1930s as "sick recoveries which die in their infancy and depressions which feed on themselves and leave a hard and seemingly immovable core of unemployment" (he was observing the petering-out of the New Deal recovery in 1937 and couldn't anticipate the impact that the coming war would have). The assumption behind this was that the motors of economic expansion, such as rapid population growth, the development of new territory and new resources, and rapid technological progress, had played out. Consequently, the upswing of the business cycle lacked momentum. This finds an echo in modern "stagnationist" theories like those of Tyler Cowen ("no more low-hanging fruit") and Robert Gordon ("modern technology is rubbish" - I paraphrase).
The origin of Hansen's thinking lay in Keynes's observation that net saving at full employment tends to grow, whereas net investment at full employment tends to fall. This is Keynes's justification for government to act as the investor of last resort, thereby maintaining aggregate demand and full employment. The socialisation of investment is back on the agenda, even if the S-word is to be avoided and pro-middle class projects (like HS2 and Help to Buy) preferred.
An implication of Summers' analysis, spelled out by Paul Krugman, is that "we may be an economy that needs bubbles just to achieve something near full employment". However, the track record since the 80s suggests that these bubbles have actually been relatively poor at the job of providing a stimulus, just as QE has been in recent years, hence the persistent unemployment and absence of high inflation. This in turn suggests that there is a very powerful secular trend at work driving stagnation, and that bubbles and monetary policy have been able to do little more than ameliorate its effects. As Krugman says, "we have become an economy whose normal state is one of mild depression, whose brief episodes of prosperity occur only thanks to bubbles and unsustainable borrowing". So what causes this underlying mild depression?
The cause of stagnation in the Keynes/Hansen model is a combination of supply-side deficiencies (an ageing population, declining returns from education, not enough new monetisable technologies) and demand-side deficiencies (not enough consumption and/or productive investment). Supply-siders like Tyler Cowen naturally emphasise the former, with the accent on demography, moral decline and the non-appearance of jet-packs, while demand-siders like Duncan Weldon emphasise the latter, with the accent on inequality and wage stagnation. Some demand-siders, like Yves Smith, also point to the pernicious effects of modern finance: "Companies are not reinvesting at a rate sufficient to sustain growth, let alone reduce unemployment ... managers and investors have short term incentives, and financial reform has done nothing to reverse them".
Other commentators have sought moralistic explanations. FlipChartRick suggests that the growth of superstar executive pay has led to the decline in investment, but I think this is confusing cause and effect. Declining investment, along with weakened trades unions, has grown profits at the expense of wages and thus created a larger pot of winnings for distribution among shareholders and executives. Rising inequality certainly has a dampening effect on aggregate investment, because of the greater marginal propensity of the rich to save rather than consume, and save in non-productive forms like property, but it doesn't follow that investment is deliberately curtailed (in concert, across thousands of companies) in order to advance inequality. There must be a structural cause - i.e. something that isn't the result of policy but the unplanned product of changes in the material base.
Investment as a share of retained income has been trending down since the late 80s, yet profits have held up. One perspective on this, put forward by Ben Bernanke in 2005, is that the "dearth of domestic investment opportunities" produces an increase in lending abroad, the so-called "global savings glut", reflecting higher rates of return for capital in emerging economies. A second perspective is an "investment strike", i.e. capitalists are choosing to depress capital expenditure, despite growing profits in emerging economies, leading to an aggregate fall in global investment levels. But how can declining investment be sustained beyond the short-term? Surely lower levels of investment will lead to lower profits in future, and thus a "crisis in capital accumulation"?
A possible answer, according to L Randall Wray, is that the problem is neither a savings glut nor an investment dearth, but rather an excess of capacity due to "the productivity of capitalist investment in plant and equipment. To put it in simple terms, the problem is that investment is just too damned productive. The supply side effect of investment (capacity creation) is much larger than the demand side effect (the multiplier), and the outcome is demand-depressing excess capacity. We call that a demand gap". The importance of Wray's analysis is the focus on the material base, i.e. technological productivity.
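To make Wray's point concrete, here is a deliberately crude toy model (my own sketch with assumed coefficients, not anything drawn from Wray): if each unit of investment adds more capacity than the demand it induces through the multiplier, utilisation falls year on year and the excess-capacity "demand gap" widens.

```python
# Assumed, illustrative coefficients: capacity created vs demand induced per unit of investment.
k = 0.9    # capacity added per unit of investment (assumption)
m = 0.6    # extra demand per unit of investment via the multiplier (assumption)
I = 100.0  # annual investment, arbitrary units

capacity, demand = 1000.0, 1000.0
for year in range(1, 6):
    capacity += k * I
    demand += m * I
    print(f"year {year}: utilisation {demand / capacity:.1%}")
# Utilisation drifts down each year: supply-side capacity outruns demand.
```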
Paul Krugman appears to be receptive to the idea that we may be living through a technological revolution, despite the naysayers: "What Bob Gordon (pdf) is predicting is disappointment on the supply side; what Larry Summers and I have been suggesting is that we may face a persistent shortfall on the demand side". He is also sceptical (as a good SciFi fan) about the assumed triviality of modern technology: "I know it doesn’t show in the productivity numbers yet, but anyone who tracks technology has a strong sense that something big has been happening the past few years, that seemingly intractable problems - like speech recognition, adequate translation, self-driving cars, etc. - are suddenly becoming tractable. Basically, smart machines are getting much better at interacting with the natural environment in all its complexity." Krugman's list of wonders is significant because what he is talking about is essentially software, the machine "smarts".
A paradox of eras of rapid growth is that they are also periods of great waste. This is the core truth of Schumpeter's "creative destruction": for every successful idea there must be a long tail of failures. But this is not a problem in macroeconomic terms as any spending helps boost aggregate demand, regardless of the return on investment, hence Keynes's suggestion to bury old banknotes in mines and let the private sector dig them out. The peculiar feature of the dotcom boom of the 90s was that it was insufficiently wasteful, despite the best efforts of venture capitalists, stock-boosters and a seemingly infinite supply of bonkers business plans. The reason for this, I think, was the shift in investment from hardware to software.
The 120 years from 1870 to 1990 can be thought of as the era of hardware. Technological advance accelerated because of three institutional features (this is a key premise of innovation economics). The first was the expansion of state-funded universities and technical institutes in the late nineteenth century, which provided the foundation for systematic R&D. The second was the growth of private-sector labs in large industrial companies in the early twentieth century (e.g. IBM and Xerox), which boosted the returns to applied research. The third was the growth of international standards bodies, particularly after WW2 (e.g. ISO, IEEE and IETF), which encouraged the widespread adoption of new technologies. You can see the ideological legacy of this institutional approach to innovation in endogenous growth theory, the lionisation of instrumental education, the fashion for "innovation clusters", and in the search for "synergies" between business, academia and the public sector.
An area that benefited from this approach was logistics, which is the unsung hero of the modern economy. In the century before 1960, there had been few major changes to the technology beyond the growth of road haulage (i.e. lorries) at the expense of rail. International trade was still dependent on cargo ships and predominantly manual docks. Containerisation (based on ISO standards) was the revolutionary change, leading to the closure of the old city docks, a vast increase in trade volumes, and a consequent fall in commodity prices. But there was a second efficiency gain in the 80s, as a result of the impact of ICT (mainframes, mini-computers, private datacoms networks) on inventory management, which led to the development of just-in-time inventories and lean manufacturing. These improvements in logistics appear to have been a major factor in the reduced volatility of GDP and the chief cause of the "labour supply shock" that we call globalisation.
This points to the increasingly transformative impact of software over the last 30 years. While the early phases of the ICT revolution were hardware-heavy, by the mid-80s software was becoming the dominant element in business productivity growth. From episodic capital-labour substitution (e.g. machine installations), industry moved towards continuous improvement and optimisation, hence the growing importance of process management and statistical control, and latterly data analysis. This didn't just improve productivity, it also made production more modular and portable (necessary to be measurable), which was an important factor in facilitating offshoring and outsourcing. Software also has a high "spillover" value, i.e. its adoption by one business can also benefit others (e.g. improved inventory management by suppliers reduced inventory costs for retailers as well).
Though LANs and email had arrived by the early 90s, the mass adoption of ICT only came in the late 90s with the second wave of Internet technologies, notably the Web and SMTP email, and the deployment of Windows 95/98 PCs on every desk. Parallel to this, the corporate data centre was transformed by the replacement of expensive mainframes and minis with commodity Wintel and Unix servers, the development of application-independent RDBMSs (which allowed you to build custom applications cheaply), and the growth of off-the-shelf ERP and CRM systems (boosted by Y2K) that centralised corporate data.
The result of all this was a simultaneous explosion in the utility of software and a fall in the price of hardware. This was masked initially because total budgets remained high during the 90s - i.e. what was once spent on a single mainframe was now spent on hundreds of PCs - but it became apparent that this was a one-time bonanza, even before the dotcom bubble burst. Though some technology providers sought to move their profit margins from hardware to software and ancillary services, the impact of freeware and opensource (whose roots go back to the 70s), plus the democratisation of software development, meant that the days of huge, year-on-year capex budgets were over. The more recent arrival of SaaS (software as a service) and the "cloud" is merely confirmation that the technology is now pervasive and practically abundant (i.e. very cheap if not yet free). In the 80s, only the biggest companies could afford programmers. Now, many SMEs can afford their own "Web guy", and a tech startup is by definition a business with minimal capital. The cost of entry for high-tech innovation has not been lower since the evolution of institutional R&D.
According to the US Information Technology & Innovation Foundation: "Between 1980 and 1989, business investment in equipment, software and structures grew by 2.7 percent per year on average and 5.2 percent per year between 1990 and 1999. But between 2000 and 2011 it grew by just 0.5 percent per year... Moreover, as a share of GDP, business investment has declined by more than three percentage points since 1980". They attribute this decline to two main factors, a loss of US competitiveness and increasing market short-termism (the primacy of shareholder value). I think both of these play a part, but I also think a crucial factor is software, which has become the dominant element in business technology since 2000. Costly high-tech hardware is still central to the manufacturing sector (e.g. robots), but as that has declined in the US and UK from 25% of GDP in 1980 to around 12% now, it is clearly not the dominant element in the wider economy. Software, on the other hand, is extensively used by every industry sector.
Technology is cumulative in nature, i.e. it builds on prior knowledge and seeks to constantly improve its operation, but software is particularly efficient in this regard because it can be augmented and edited, rather than requiring complete redesign and replacement, and because a lot of the knowledge is publicly shared, notably through the incorporation of opensource, so patents are less of a restriction on the spread of techniques. While a machine might only be replaced every 3 or 4 years, software can be upgraded weekly. This means that software tends to improve at a quicker (and smoother) rate than hardware. We are distracted by Moore's Law into thinking that ICT productivity growth is all about faster processors, when in reality it is about the better exploitation of increasingly cheap hardware resources.
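As a back-of-the-envelope illustration (the improvement rates here are assumptions, not measurements), small weekly software gains compound into a smooth curve that can match or beat the occasional large hardware step:

```python
# Assumed rates, purely illustrative: continuous software improvement vs a lumpy hardware upgrade.
sw_weekly_gain = 0.003   # 0.3% improvement per weekly release (assumption)
hw_step_gain = 0.35      # 35% jump from one hardware replacement in the period (assumption)

years = 3
software_factor = (1 + sw_weekly_gain) ** (52 * years)  # compounds smoothly, week by week
hardware_factor = 1 + hw_step_gain                      # a single step change

print(f"software after {years} years: x{software_factor:.2f}")
print(f"hardware after {years} years: x{hardware_factor:.2f}")
```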
This two-way movement, accelerating utility and commodity deflation, is not historically unknown, but it has traditionally relied on economies of scale. What is unprecedented is that this dynamic can now apply at very small scale levels. An SME can exploit ICT and logistics to create a new market with minimal capital outlay. It should hardly be a surprise then that the demand for capital is falling at a time of widespread innovation. The madness of the dotcom boom, both the desperate search for something to throw capital at, and the promiscuity of VCs who realised you could afford to back an entire field of losing horses, was telling us something profound.
For some, like Frances Coppola, the coming abundance is problematic: "As the productivity of both labour and capital increases, the need for them diminishes. This is why the economy is creating bubbles. Those with assets are desperately looking for yield, and governments are desperately trying to generate jobs. Like animals in a drought, investors and governments crowd into the last remaining waterholes, as the water in them gradually evaporates". The modern economy is generating a lot of profit but insufficient employment (i.e. well-paid jobs) because it has become very efficient. The problem then is one of transmission, i.e. the distribution of this profit, not a malfunctioning engine. It is a matter of political economy.
It is a commonplace that banking by 2008 was no longer fit for its social purpose, but the implication of secular stagnation may be that the traditional model of industry - based on capital accumulation, high employment and the leverage of institutional innovation - may have already run its course as well. The solution may not just require the socialisation of investment, and the corollary of a job guarantee, but the socialisation of capital and its remittance as funded free time (i.e. a basic income). If you think that throwing money at a crowd of people in the hope that it may produce an aggregate return sounds mad, I would suggest you need to take a closer look at how software development is already funded.
Sunday, 24 November 2013
An Adventure in Space and Time
So who is Doctor Who? The Mark Gatiss-penned An Adventure in Space and Time last week provided us with some clues, ahead of the global brand-fest of The Day of the Doctor on Saturday. I don't mean by this the secret name of the Doctor (probably Gerald), or what moral traits he embodies as a humanised deity (something called lurv, no doubt). What I mean is where do you position the character in social and cultural history?
The title of Gatiss's drama is suggestive. The official guide to the coronation of ER2 in 1953, written by the rhapsodic historian Arthur Bryant, claimed that "a nation is a union in both space and time. We are as much the countrymen of Nelson, Wesley and Shakespeare as of our own contemporaries. Our queen is the symbol of that union in time" (the echoes of Burke are clear). In 1963, the Tardis (a metaphor for the powers of TV and film generally) allowed us to escape from a particular space and time, i.e. the UK's diminished position in the postwar world and the burden of post-imperial history, and go gallivanting through these earlier, more glorious epochs, not to mention a wonderful future. The appearance of ER1 in The Day of the Doctor was quite knowing.
Another important clue was the CV of William Hartnell, the first Doctor. As Gatiss noted, he had previously been type-cast as a tough soldier, usually a hectoring NCO, having featured in the films The Way Ahead and Carry On Sergeant, plus the TV series The Army Game. This was a neat link to the appearance of John Hurt as the "war Doctor" in the 50th anniversary spectacular. The legacy of wartime was all too apparent in the original TV series, from the Daleks, who combined nostalgia for a dependably evil enemy (the Nazis) with high-tech weaponry (the Nazis), to the Doctor's initially authoritarian style and obsession with secrecy. The WMD in 1963 was the threat of mutually assured destruction (Dr. Strangelove came out the following year). Today, John Hurt gets a finely-worked box with a big red button, which he ultimately declines to press (Gallifrey is saved to some sort of inter-temporal USB drive).
The coincidence of the first broadcast with the assassination of John F Kennedy means that Doctor Who is associated with a pivotal moment in the orthodox (and sentimental) narrative of US-inflected postwar history: the moment when political illusions were lost and the counter-culture started (the pains of adolescence etc). This obscured the British significance, which started in the late 50s with the acceptance that empire was over (the "New Elizabethan Age" didn't last long) and the breakdown of class rigidities marked by kitchen sink realism and the false dawn of meritocracy. The formal moment was Harold Macmillan's "Wind of Change" speech in 1960, while the informal moment was the birth of Europhile mod subculture (first jazz and French style, then R&B and Italian style).
British establishment dramas about international relations tend to employ one of two tropes: the spy or adventurer who must defend British interests alone (an extrapolation of nineteenth century self-reliant liberalism and imperial honour, from Samuel Smiles to Gordon of Khartoum), or the quintessential (yet paradoxically eccentric) Briton whose mere presence is reassuring (from Phileas Fogg to Henry Higgins). The most popular examples usually combine both tropes, from The Lady Vanishes to The Avengers. The 1960s was a particularly fruitful period for new variations on these old themes. The James Bond film series (with Q as significant as 007) is the most territorially aggressive (i.e. the most resentful at loss of empire), while The Prisoner turns frustration and doubt into existential crisis within a claustrophobic society (despite the 60s style and fashionable paranoia, this was about 50s conformism).
The character of Doctor Who for the first 25 years was essentially Edwardian, with stylistic nods to Prospero (via Forbidden Planet, and ultimately repaid via the Sycorax) and the Great Oz. This meant an emphasis on brains over brawn and an insistence on a certain deportment - i.e. upper middle class values. For all the scary aliens and innovative music, the initial series was as comforting as a Sherlock Holmes story in The Strand Magazine (another influence), hence its popularity across generations. This was a Time Lord, not a Time Pleb, who dressed in a fashion that would not have been out of place in 1900 (steampunk avant la lettre), and who was always associated with the "deep state" of UNIT and other loyal but covert agencies (I suspect Doctor Who is very popular at GCHQ, and only copyright kept him out of The League of Extraordinary Gentlemen). The choice of a police call box was not arbitrary.
In An Adventure in Space and Time Gatiss subtly suggests that the Doctor's growing eccentricity and kindliness in the first series was as much a product of Hartnell's declining health as a conscious decision to lighten up the authoritarian sourpuss. As such, the Doctor became a sympathetic emblem of national self-doubt: you don't have to be a psychologist to recognise eccentricity as anxiety displacement. Over the years, Doctor Who has fruitfully overlapped with Monty Python (no one expects the Spanish Inquisition in much the same way as no one expects the Doctor) and The Hitchhiker's Guide to the Galaxy (Douglas Adams wrote and script-edited a number of Doctor Who episodes). In its modern incarnation, the programme's affinities have tended towards retro adventure (the many juvenile spin-offs) and a hankering for Edwardian stability (Steven Moffat's Sherlock, despite the contemporary setting).
The modern revival of the series in 2005 was interesting because it initially went for the gritty, modern, pan-sexual style of Christopher Eccleston. (The best Who joke ever? When challenged on his accent: "Lots of planets have a North"). This has gradually reverted to Edwardian type, via the retro New Wave style of David Tennant to the teddy-boy-about-to-become-mod style of Matt Smith. The suspicion is that Peter Capaldi will be a full-on Robert Louis Stevenson tribute act, complete with cavalier 'tache and wee goatee. I really hope they let him have a Scottish accent.
The 1989-2005 interregnum (excepting the 1996 Paul McGann TV film) can retrospectively be bracketed by the Lawson Boom and the early warning signs of the Great Recession. The series had been canned by the BBC not simply because of its declining quality (it had always had imaginative production values, but never what you'd call quality ones), but because it seemed to have lost touch with modern concerns. In an era of sanctified individualism and conformist ambition, the more traditional values of Doctor Who (loyalty, sacrifice, selflessness etc) seemed out of tune with an audience flitting between Neighbours and EastEnders. In fact, Doctor Who never went away. The torch was simply handed on to Doc Brown in the Back to the Future series of films (a conservative trilogy about restoring the natural order), complete with juvenile companion, a somewhat sexier time machine, and a running nerd joke (the flux capacitor = reverse the polarities).
The 90s was the era of large-scale and often circular (i.e. going nowhere) American SF TV series, such as the revamped Star Trek, The X-Files and Stargate, plus bullish cinema spectaculars about defeated threats to Earth, such as Armageddon, Independence Day and Men in Black. Parallel to this was the growth of a more interesting strand of films dealing with the nature of reality and power in an increasingly networked and virtual world, such as The Matrix, eXistenZ and various Philip K Dick adaptations, which, unlike the popular pyrotechnics, could at least survive the return of history in 2001. In this rich speculative ecology, the spirit of Doctor Who eked out a shadow life in the student rag-week deconstruction of Red Dwarf.
In retrospect, the attempt to relaunch the Doctor in 1996 made one crucial error: Paul McGann should have been the youthful companion and Richard E Grant the main man. That would have been brilliant, particularly if he could have channelled full-on Withnail. In fact, Grant did play the Doctor, first in a 1999 Red Nose Day TV skit (the "Quite Handsome Doctor"), and then looking like a disappointed Dracula in the 2003 animation Scream of the Shalka. That would have been one Bad Doctor in the flesh. Perhaps that's the secret of his longevity: he is a vampire on our nostalgia as well as our aspirations. Perhaps the real Who is the woman they call ER2. Or perhaps she's a shape-shifting Zygon, and has been for over 400 years.
Wednesday, 20 November 2013
25 Years a Slave
A new report from the Centre for Social Justice, Maxed Out, reveals that UK household debt is now almost as big as GDP. The right-of-centre think tank (founded by the well-known social scientist Iain Duncan Smith) wrings its hands over the disproportionate impact on the poor, but has few suggestions beyond "access to affordable credit", "responsible lenders" and greater "financial literacy" (the poor must be taught better habits). I was particularly struck by this nugget (p. 40): "The rapid growth of mortgages over the past two decades has contributed the largest total amount to Britain’s personal debt. However it is not as concerning as the rise in consumer debt over that same period. Unlike mortgage debts, which are tied to the value of a house, unsecured consumer borrowing is at higher interest rates and is more likely to spiral out of control, driving people into problem debt". In other words, despite the evidence of your own eyes, mortgages are not a problem. No, siree.
Mortgage debts are not really tied to the value of a house, contrary to appearances. Though we think of a mortgage as a loan for which the property acts as collateral, in reality the loan is securitised against future income. As such, a mortgage is a form of "fictitious capital", like stocks and shares. It's a claim on the future. A subprime mortgage is risky not because of the quality of the property, but because of the quality of the borrower's future income stream. Property does not have an intrinsic value beyond that of the land it stands on (the productive value) and its utility as shelter. This is not trivial, but it is clearly much less than the market value. This Christmas there will no doubt be a new must-have toy, and with demand temporarily greater than supply, these will change hands on eBay and in pubs at a markup to the retail price, but that markup won't be £1 million, because the market-clearing valuation of buyers isn't that high, even on Christmas Eve. So what determines the persistent high price of housing?
I'm going to approach the answer via Paul Gilroy's preview of Steve McQueen's new film, 12 Years a Slave, in which he notes that "slaves are capital incarnate. They are living debts and impersonal obligations as well as human beings fighting off the sub-humanity imposed upon them by their status as commercial objects". Our key image of the antebellum South is of plantations and pathological gentility. In fact, most whites in the South were self-sufficient small farmers of modest means, i.e. "rednecks". This in turn meant low levels of urbanisation and industry, compared to the North, and consequently low levels of credit and a shortage of ready money. After slave imports were banned in 1808 (following the British abolition of the slave trade in 1807), domestic reproduction became the main means of growing the plantation workforce. The number of slaves grew from 750 thousand in 1800 to 4 million by 1860. The growth in the slave "crop" fuelled the expansion of cotton and other cash-crop production into the new territories west of the Mississippi, which was a primary source of the friction that led to the Civil War.
The recycling of export revenues into expansion, combined with limited money, meant that slaves became the dominant form of capital in the South and a de facto medium of exchange, used to settle debts and act as collateral for loans. The market value of a slave represented their future production. During the eighteenth century, slaves were seen as disposable commodities, often being worked to death within a few years of purchase. This partly reflected their economic equivalence with white indentured labour on fixed-term contracts (i.e. one died after an average of 7 years, the other was freed), and partly the relatively low cost of adult replacements via the trade from Africa. Over the course of the century, rising prices for imported slaves and the growth of cash-crop demand (notably cotton for the Lancashire mills) led to an increasing reliance on slave reproduction, which is why the end of imports after 1808 did not lead to crisis. This in turn made it economically attractive to maximise the slaves' working lives, which led to the ideological reframing of them from the subhuman commodity of earlier years to the "children" in need of benevolent discipline familiar from Southern rhetoric.
The capital value of the slave was a claim on the future, i.e. the potential production of their remaining working lifetime, rather than the embodiment of past production. In the Northern states of the US, capital and labour were separate, even though all capital (e.g. plant and machinery) was ultimately the transformed surplus value of past labour. Future labour was "free", in the sense of uncommitted, although in reality the sharp tooth of necessity (and the flow of immigrants) meant the factory owners could rely upon its ready availability. So long as labour was plentiful, capital did not need to make any claim on the future beyond the investment in health and education required to improve the quality of the workforce in aggregate. The increase in demand for labour after WW2 allowed workers to negotiate improved wages, i.e. current income, but it also drove their demand for shorter hours and earlier retirement on decent pensions, i.e. a larger share of future time was to be enjoyed by the worker rather than transformed into capital.
The counter-measure to this claim on future time by workers was the growth of the property mortgage, which is a claim on future labour. The need for housing is constant and the rate of turnover is low, but these characteristics actually allow its price to stretch to whatever the market can bear, for two reasons. The first concerns disposable income. In a society where most people rent, average rents will settle at a level representing an affordable percentage of average disposable income. This level will in turn reflect the cost of other necessities, such as food and fuel, required for labour to reproduce itself and thus keep earning. This is a basic economic equilibrium. The price of a necessity cannot go so high that it crowds out the purchase of other necessities without social breakdown (which is why the price of bread is still regulated in some countries).
The cost of housing therefore relates to the value of current labour time - i.e. what you can earn this week or this month - and the percentage of income left after non-housing necessities are paid for. This serves to put an upper limit on the current cost of housing (i.e. rents and equivalent mortgage repayments), even where the rental sector is relatively small. The 80s and 90s were an era of falling real prices for food and clothing and flat prices for domestic fuel, which meant that the share of disposable income available for housing grew. From the mid-00s, the real cost of these other necessities started to increase above general inflation, so constraining income for housing. Schemes like Help to Buy are therefore a reflection of increasing utility bills and more expensive shopping baskets as much as of limited mortgage availability arising from the credit crunch.
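To make that ceiling concrete, here is a minimal back-of-envelope sketch (the monthly figures are invented for illustration, not drawn from any dataset): with income flat, a fall in the real cost of the other necessities is enough on its own to raise the amount the housing market can absorb.

```python
# Illustrative figures only: hypothetical monthly amounts, not real data.
def max_housing_spend(disposable_income, other_necessities):
    """The most a household can put towards rent or mortgage repayments
    once food, fuel, clothing and other necessities are paid for."""
    return disposable_income - other_necessities

income = 2000.0              # flat monthly disposable income
necessities_before = 1200.0  # cost of the non-housing basket
necessities_after = 1000.0   # same basket after a fall in real prices

before = max_housing_spend(income, necessities_before)  # 800
after = max_housing_spend(income, necessities_after)    # 1000
print(f"Housing can absorb {after / before - 1:.0%} more")  # 25% more, with no rise in income
```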
The second reason is that the cost of housing also reflects longevity. In a society where most people buy a house, the utility of the property will typically be a function of the buyer's future years - i.e. how long you expect to be able to enjoy living in it. Assuming you have the funds, you will pay more, relative to current income, if you expect 100 years of utility rather than 50. The assumption is that a longer life means a longer income stream, which essentially means a longer working life. Mortgage terms are typically based on a peak earning period of 25 years, with a buffer of a decade or so either side - e.g. start work at 21, buy a house at 31, pay off the mortgage at 56, and retire at 66. If longevity were 100, and the retirement age were 80, we would have mortgage terms nearer 40 years. But that would not mean correspondingly lower repayments (the monthly outgoings would stay the same, because they reflect the "rent" level); rather, purchase prices would be higher.
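A rough sketch of the arithmetic behind this, using the standard annuity formula (the payment, interest rate and terms are invented for illustration): if the monthly repayment is pinned at the "rent" level, lengthening the term raises the price a buyer can pay rather than lowering the outgoings.

```python
# Illustrative only: a fixed monthly payment (the "rent" level) capitalised
# over different mortgage terms using the standard annuity formula.
def affordable_price(monthly_payment, annual_rate, years):
    """Present value of a stream of fixed monthly repayments."""
    r = annual_rate / 12
    n = years * 12
    return monthly_payment * (1 - (1 + r) ** -n) / r

payment = 1000.0  # what the household can put towards housing each month
rate = 0.05       # assumed mortgage interest rate

print(round(affordable_price(payment, rate, 25)))  # ~171,000 on a 25-year term
print(round(affordable_price(payment, rate, 40)))  # ~207,000 on a 40-year term: same outgoings, dearer house
```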
The combination of these two factors - current relative affordability and the duration of a working lifetime - is what determines the long-run cost of housing. Though property prices in the UK, and particularly in London, are heavily influenced by induced scarcity, this localised "froth" serves to mask the strength of these underlying forces.
Between the mid-70s and mid-00s, house purchase prices relative to average earnings roughly doubled, from 2.5 to 5 times, facilitated by easier mortgage credit. However, this did not cause a "crisis of affordability" for three reasons. First, more properties are now bought by dual-income couples. The increase in working women has two effects: it increases the income stream for some buyers, but it also drags down the average of individual earnings (because of the gender pay gap), which inflates the headline ratio. Second, the average ratio is affected by increasing inequality - i.e. if the prices paid by the top 10% grow faster than those paid by the remaining 90%, the average cost of housing will be dragged up. Third, the average age of a first-time buyer has increased from mid-20s to mid-30s since the 1970s. This means that their income will tend to be higher, relative to the average of the population as a whole, because most people hit their earnings peak around 40.
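As a hedged illustration of the first of these effects (all figures invented, with earnings normalised rather than taken from any survey): if lenders hold the multiple of household income constant, the shift from single-earner to dual-earner buyers raises prices while the influx of lower-paid workers pulls the published average of individual earnings down, so the headline price-to-earnings ratio can double without the lending multiple changing at all.

```python
# Illustrative only: how the headline house-price-to-average-earnings ratio can
# rise without the multiple of *household* income changing.
lending_multiple = 3.0    # assumed constant multiple of household income
first_earner = 1.0        # average earnings of the established workforce, normalised to 1
second_earner = 0.8       # assumed gender pay gap for the second earner

# Single-earner buyer: price is 3.0x household income, average earnings are 1.0
single_price = lending_multiple * first_earner
single_ratio = single_price / first_earner                      # 3.0

# Dual-earner buyers: price is 3.0x a bigger household income, but the average of
# individual earnings has been pulled down to 0.9 by the lower-paid second earners
dual_price = lending_multiple * (first_earner + second_earner)
average_earnings = (first_earner + second_earner) / 2
dual_ratio = dual_price / average_earnings                      # 6.0

print(single_ratio, dual_ratio)  # 3.0 -> 6.0: the ratio doubles, same lending multiple
```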
Seen in this light, historic house price inflation reflects three secular trends: increased longevity; the absorption of women into the workforce; and increasing income inequality. This is not to say that the share of income going to housing hasn't increased - it has and the UK is particularly expensive - but that the increase in housing costs is driven by more than just demand outstripping supply. Even without induced scarcity over the last 40 years, we'd probably have seen the cost of housing go up. The point is not that "houses are more expensive", but that the amount of future income (and thus labour time) you have to promise in return has grown. The consequence of this will be an increase in average mortgage terms to keep pace with an increasing retirement age.
A paradox of modern society is that while technological advance should allow a reduction in working time, we are taking on more debt and thus pushing back the point at which we can begin to reduce hours. Some of this debt can be considered as investment finance, i.e. where we expect increased future income as a result, such as student loans, but the bulk of it is mortgage debt, which is unproductive at a macroeconomic level beyond the consumption sugar-rush of equity release. We kid ourselves that this is an investment too, but the capital appreciation on property, which reflects the future income of potential buyers, is only possible in a society committing an ever-larger amount of future labour time. While some individuals can buck this trend, either through luck or calculation (some will always buy low and sell high), society in aggregate must make a bigger contribution with every passing year, which for most people means working longer.
One result of this is that average working hours have increased in a bid to maximise future income, which means that the housing market works against productivity growth - i.e. we are driven towards increasing the quantity of time, not its quality, as a quicker way of increasing income. Job security has evolved from a dull constraint in the 60s, through a refuge from turbulence in the 80s, to an elite aspiration now. Ultimately, this constrains innovation and risk-taking (outside of financial services). It is a commonplace that high property prices distort the economy because more and more capital is tied up unproductively, but what isn't so readily recognised is that the embodiment of labour in mortgages also acts as a psychological drag: "The delirious rise in property prices over the last twenty years is probably the single most important cause of cultural conservatism in the UK and the US".
The "capital incarnate" of slavery undermined the economy of the South (1). When the Civil War broke out, the Confederacy had only 10% of manufactured goods in the US and only 3% of the firearms. Though it had been responsible for 70% of US exports by value before the war (mainly "King Cotton"), most of the receipts had been recycled into slaves or used to buy goods from the North. The failure to grow industries outside of the plantation system meant it had a population of only 9 million compared to 22 million on the Union side, and 4 million of that figure were slaves who could not be trusted with a weapon. The duration of the war was largely a result of the South having a third of the soldiery and (initially) the better generals, but unable to buy sufficient materiel, and unable to liquidate their slave capital or use it as collateral for foreign loans, the outcome was never in doubt.
I was always bemused by the claim made in the 1970s and 80s that the "right-to-buy" was a good thing because it meant council tenants would take better care of their properties. It was obvious on the ground that the tenant's pride reflected the quality of the housing, not the nature of tenure, and it was a myth that councils wouldn't let you paint your doors a different colour. It is only with time that I have come to appreciate the ideological foundation of that claim, and to see the parallel with the claims made by Southern planters in the US that their slaves, as valued property, were better cared for than free Northerners thrown out of work during industrial slumps.
1. The total capital value of slaves in 1860 was $3 billion. The total cost of fighting the Civil War was roughly $3.3bn for each side, though this was a much larger per capita burden for the South. In other words, fighting the war cost the Union roughly the same as it would have done to buy out the slave-owners (cf. the £20 million spent by the British government compensating West Indies slave owners in the 1830s). As such a prohibitively expensive scheme would have been politically impossible, while war against an aggressively secessionist South would have had patriotic backing irrespective of the casus belli, an armed conflict was probably the only way of reforming and integrating the US economy as it expanded westwards.