So who is Doctor Who? The Mark Gatiss-penned An Adventure in Space and Time last week provided us with some clues, ahead of the global brand-fest of The Day of the Doctor on Saturday. I don't mean by this the secret name of the Doctor (probably Gerald), or what moral traits he embodies as a humanised deity (something called lurv, no doubt). What I mean is where do you position the character in social and cultural history?
The title of Gatiss's drama is suggestive. The official guide to the coronation of ER2 in 1953, written by the rhapsodic historian Arthur Bryant, claimed that "a nation is a union in both space and time. We are as much the countrymen of Nelson, Wesley and Shakespeare as of our own contemporaries. Our queen is the symbol of that union in time" (the echoes of Burke are clear). In 1963, the Tardis (a metaphor for the powers of TV and film generally) allowed us to escape from a particular space and time, i.e. the UK's diminished position in the postwar world and the burden of post-imperial history, and go gallivanting through these earlier, more glorious epochs, not to mention a wonderful future. The appearance of ER1 in The Day of the Doctor was quite knowing.
Another important clue was the CV of William Hartnell, the first Doctor. As Gatiss noted, he had previously been type-cast as a tough soldier, usually a hectoring NCO, having featured in the films The Way Ahead and Carry On Sergeant, plus the TV series The Army Game. This was a neat link to the appearance of John Hurt as the "war Doctor" in the 50th anniversary spectacular. The legacy of wartime was all too apparent in the original TV series, from the Daleks, who combined nostalgia for a dependably evil enemy (the Nazis) with high-tech weaponry (the Nazis), to the Doctor's initially authoritarian style and obsession with secrecy. The WMD in 1963 was the threat of mutually assured destruction (Dr. Strangelove came out the following year). Today, John Hurt gets a finely-worked box with a big red button, which he ultimately declines to press (Gallifrey is saved to some sort of inter-temporal USB drive).
The coincidence of the first broadcast with the assassination of John F Kennedy means that Doctor Who is associated with a pivotal moment in the orthodox (and sentimental) narrative of US-inflected postwar history: the moment when political illusions were lost and the counter-culture started (the pains of adolescence etc). This obscured the British significance, which started in the late 50s with the acceptance that empire was over (the "New Elizabethan Age" didn't last long) and the breakdown of class rigidities marked by kitchen sink realism and the false dawn of meritocracy. The formal moment was Harold Macmillan's "Wind of Change" speech in 1960, while the informal moment was the birth of Europhile mod subculture (first jazz and French style, then R&B and Italian style).
British establishment dramas about international relations tend to employ one of two tropes: the spy or adventurer who must defend British interests alone (an extrapolation of nineteenth century self-reliant liberalism and imperial honour, from Samuel Smiles to Gordon of Khartoum), or the quintessential (yet paradoxically eccentric) Briton whose mere presence is reassuring (from Phileas Fogg to Henry Higgins). The most popular examples usually combine both tropes, from The Lady Vanishes to The Avengers. The 1960s was a particularly fruitful period for new variations on these old themes. The James Bond film series (with Q as significant as 007) is the most territorially aggressive (i.e. the most resentful at loss of empire), while The Prisoner turns frustration and doubt into existential crisis within a claustrophobic society (despite the 60s style and fashionable paranoia, this was about 50s conformism).
The character of Doctor Who for the first 25 years was essentially Edwardian, with stylistic nods to Prospero (via Forbidden Planet, and ultimately repaid via the Sycorax) and the Great Oz. This meant an emphasis on brains over brawn and an insistence on a certain deportment - i.e. upper middle class values. For all the scary aliens and innovative music, the initial series was as comforting as a Sherlock Holmes story in The Strand Magazine (another influence), hence its popularity across generations. This was a Time Lord, not a Time Pleb, who dressed in a fashion that would not have been out of place in 1900 (steampunk avant la lettre), and who was always associated with the "deep state" of UNIT and other loyal but covert agencies (I suspect Doctor Who is very popular at GCHQ, and only copyright kept him out of The League of Extraordinary Gentlemen). The choice of a police call box was not arbitrary.
In An Adventure in Space and Time Gatiss subtly suggests that the Doctor's growing eccentricity and kindliness in the first series was as much a product of Hartnell's declining health as a conscious decision to lighten up the authoritarian sourpuss. As such, the Doctor became a sympathetic emblem of national self-doubt: you don't have to be a psychologist to recognise eccentricity as anxiety displacement. Over the years, Doctor Who has fruitfully overlapped with Monty Python (no one expects the Spanish Inquisition in much the same way as no one expects the Doctor) and The Hitchhiker's Guide to the Galaxy (Douglas Adams wrote and edited a number of Doctor Who scripts). In its modern incarnation, the programme's affinities have tended towards retro adventure (the many juvenile spin-offs) and a hankering for Edwardian stability (Steven Moffat's Sherlock, despite the contemporary setting).
The modern revival of the series in 2005 was interesting because it initially went for the gritty, modern, pan-sexual style of Christopher Eccleston. (The best Who joke ever? When challenged on his accent: "Lots of planets have a North"). This has gradually reverted to Edwardian type, via the retro New Wave style of David Tennant to the teddy-boy-about-to-become-mod style of Matt Smith. The suspicion is that Peter Capaldi will be a full-on Robert Louis Stevenson tribute act, complete with cavalier 'tache and wee goatee. I really hope they let him have a Scottish accent.
The 1989-2005 interregnum (excepting the 1996 Paul McGann TV film) can retrospectively be bracketed by the Lawson Boom and the early warning signs of the Great Recession. The series had been canned by the BBC not simply because of its declining quality (it had always had imaginative production values, but never what you'd call quality ones), but because it seemed to have lost touch with modern concerns. In an era of sanctified individualism and conformist ambition, the more traditional values of Doctor Who (loyalty, sacrifice, selflessness etc) seemed out of tune with an audience flitting between Neighbours and EastEnders. In fact, Doctor Who never went away. The torch was simply handed on to Doc Brown in the Back to the Future series of films (a conservative trilogy about restoring the natural order), complete with juvenile companion, a somewhat sexier time machine, and a running nerd joke (the flux capacitor = reverse the polarities).
The 90s was the era of large-scale and often circular (i.e. going nowhere) American SF TV series, such as the revamped Star Trek, The X-Files and Stargate, plus bullish cinema spectaculars about defeated threats to Earth, such as Armageddon, Independence Day and Men in Black. Parallel to this was the growth of a more interesting strand of films dealing with the nature of reality and power in an increasingly networked and virtual world, such as The Matrix, eXistenZ and various Philip K Dick adaptations, which, unlike the popular pyrotechnics, could at least survive the return of history in 2001. In this rich speculative ecology, the spirit of Doctor Who eked out a shadow life in the student rag-week deconstruction of Red Dwarf.
In retrospect, the attempt to relaunch the Doctor in 1996 made one crucial error: Paul McGann should have been the youthful companion and Richard E Grant the main man. That would have been brilliant, particularly if he could have channelled full-on Withnail. In fact, Grant did play the Doctor, first in a 1999 Red Nose Day TV skit (the "Quite Handsome Doctor"), and then looking like a disappointed Dracula in the 2003 animation Scream of the Shalka. That would have been one Bad Doctor in the flesh. Perhaps that's the secret of his longevity: he is a vampire on our nostalgia as well as our aspirations. Perhaps the real Who is the woman they call ER2. Or perhaps she's a shape-shifting Zygon, and has been for over 400 years.
Wednesday, 20 November 2013
25 Years a Slave
A new report from the Centre for Social Justice, Maxed Out, reveals that UK household debt is now almost as big as GDP. The right-of-centre think tank (founded by the well-known social scientist, Iain Duncan Smith), wrings its hands over the disproportionate impact on the poor, but has few suggestions beyond "access to affordable credit", "responsible lenders" and greater "financial literacy" (the poor must be taught better habits). I was particularly struck by this nugget (p. 40): "The rapid growth of mortgages over the past two decades has contributed the largest total amount to Britain’s personal debt. However it is not as concerning as the rise in consumer debt over that same period. Unlike mortgage debts, which are tied to the value of a house, unsecured consumer borrowing is at higher interest rates and is more likely to spiral out of control, driving people into problem debt". In other words, despite the evidence of your own eyes, mortgages are not a problem. No, sireee.
Mortgage debts are not really tied to the value of a house, contrary to appearances. Though we think of a mortgage as a loan for which the property acts as collateral, in reality the loan is secured against future income. As such, a mortgage is a form of "fictitious capital", like stocks and shares. It's a claim on the future. A subprime mortgage is risky not because of the quality of the property, but because of the quality of the borrower's future income stream. Property does not have an intrinsic value beyond that of the land it stands on (the productive value) and its utility as shelter. This is not trivial, but it is clearly much less than the market value. This Christmas there will no doubt be a new must-have toy, and with demand temporarily greater than supply, these will change hands on eBay and in pubs at a markup to the retail price, but that markup won't be £1 million, because the market-clearing valuation of buyers isn't that high, even on Christmas Eve. So what determines the persistently high price of housing?
I'm going to approach the answer via Paul Gilroy's preview of Steve McQueen's new film, 12 Years a Slave, in which he notes that "slaves are capital incarnate. They are living debts and impersonal obligations as well as human beings fighting off the sub-humanity imposed upon them by their status as commercial objects". Our key image of the antebellum South is of plantations and pathological gentility. In fact, most whites in the South were self-sufficient small farmers of modest means, i.e. "rednecks". This in turn meant low levels of urbanisation and industry, compared to the North, and consequently low levels of credit and a shortage of ready money. After slave imports were banned in 1808 (following the British abolition of the slave trade in 1807), domestic reproduction became the main means of growing the plantation workforce. The number of slaves grew from 750 thousand in 1800 to 4 million by 1860. The growth in the slave "crop" fuelled the expansion of cotton and other cash-crop production into the new territories west of the Mississippi, which was a primary source of the friction that led to the Civil War.
The recycling of export revenues into expansion, combined with limited money, meant that slaves became the dominant form of capital in the South and a de facto medium of exchange, used to settle debts and act as collateral for loans. The market value of a slave represented their future production. During the eighteenth century, slaves were seen as disposable commodities, often being worked to death within a few years of purchase. This partly reflected their economic equivalence with white indentured labour on fixed-term contracts (i.e. one died after an average 7 years, the other was freed), and partly the relatively low cost of adult replacements via the trade from Africa. Over the course of the century, rising prices for imported slaves and the growth of cash-crop demand (notably cotton for the Lancashire mills) led to an increasing reliance on slave reproduction, which is why the end of imports after 1808 did not lead to crisis. This in turn made it economically attractive to maximise the slaves' working lives, which led to the ideological reframing of them from the subhuman commodity of earlier years to the "children" in need of benevolent discipline familiar from Southern rhetoric.
The capital value of the slave was a claim on the future, i.e. the potential production of their remaining working lifetime, rather than the embodiment of past production. In the Northern states of the US, capital and labour were separate, even though all capital (e.g. plant and machinery) was ultimately the transformed surplus value of past labour. Future labour was "free", in the sense of uncommitted, although in reality the sharp tooth of necessity (and the flow of immigrants) meant the factory owners could rely upon its ready availability. So long as labour was plentiful, capital did not need to make any claim on the future beyond the investment in health and education required to improve the quality of the workforce in aggregate. The increase in demand for labour after WW2 allowed workers to negotiate improved wages, i.e. current income, but it also drove their demand for shorter hours and earlier retirement on decent pensions, i.e. a larger share of future time was to be enjoyed by the worker rather than transformed into capital.
The counter-measure to this claim on future time by workers was the growth of the property mortgage, which is a claim on future labour. The need for housing is constant and the rate of turnover is low, but these characteristics actually make it particularly elastic in price for two reasons. The first concerns disposable income. In a society where most people rent, average rents will settle at a level representing an affordable percentage of average disposable income. This level will in turn reflect the cost of other necessities, such as food and fuel, required for labour to reproduce itself and thus keep earning. This is a basic economic equilibrium. The price of a necessity cannot go so high that it crowds out the purchase of other necessities without social breakdown (which is why the price of bread is still regulated in some countries).
The cost of housing therefore relates to the value of current labour time - i.e. what you can earn this week or this month - and the percentage of income left after non-housing necessities are paid for. This serves to put an upper limit on the current cost of housing (i.e. rents and equivalent mortgage repayments), even where the rental sector is relatively small. The 80s and 90s were an era of falling real prices for food and clothing and flat prices for domestic fuel, which meant that the share of disposable income available for housing grew. From the mid-00s, the real cost of these other necessities started to increase above inflation, so constraining income for housing. Schemes like Help to Buy are therefore a reflection of increasing utility bills and more expensive shopping baskets as much as limited mortgage availability arising from the credit crunch.
The second reason is that the cost of housing also reflects longevity. In a society where most people buy a house, the utility of the property will typically be a function of the future years of the buyer - i.e. how long you expect to be able to enjoy living in it. Assuming you have the funds, you will pay more, relative to current income, if you expect 100 years of utility rather than 50. The assumption is that a longer life means a longer income stream, which essentially means a longer working life. Mortgage terms are typically based on a peak earning period of 25 years, with a buffer of a decade or so either side - e.g. start work at 21, buy a house at 31, pay off the mortgage at 56, and retire at 66. If longevity were 100, and the retirement age were 80, we would have mortgage terms nearer 40 years. But that would not mean correspondingly lower repayments (the monthly outgoings would be the same because that would reflect the "rent" level); rather, purchase prices would be higher.
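To make that arithmetic concrete, here is a minimal sketch of the standard annuity formula; the 5% interest rate and £1,000 monthly repayment are illustrative assumptions rather than figures from the text. It shows that stretching the term from 25 to 40 years leaves the monthly outgoing unchanged while raising the purchase price that outgoing can support.

```python
# Sketch: how a longer mortgage term raises the price a fixed monthly
# repayment can support. The 5% rate and £1,000/month are assumptions
# made for illustration, not figures from the post.

def affordable_principal(monthly_payment, annual_rate, years):
    """Present value of an annuity: the loan a given repayment can service."""
    r = annual_rate / 12            # monthly interest rate
    n = years * 12                  # number of repayments
    return monthly_payment * (1 - (1 + r) ** -n) / r

for term in (25, 40):
    print(f"{term}-year term supports a loan of ~£{affordable_principal(1000, 0.05, term):,.0f}")

# Roughly £171,000 for the 25-year term versus £207,000 for the 40-year term:
# the same monthly "rent", but a purchase price about a fifth higher.
```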
The combination of these two factors - current relative affordability and the duration of a working lifetime - is what determines the long-run cost of housing. Though property prices in the UK, and particularly in London, are heavily influenced by induced scarcity, this localised "froth" serves to mask the strength of these underlying forces.
Between the mid-70s and mid-00s, house purchase prices relative to average earnings roughly doubled, from 2.5 to 5 times, facilitated by easier mortgage credit. However, this did not cause a "crisis of affordability" for three reasons. First, more properties are now bought by dual-income couples. The increase in working women has two effects: it increases the income stream for some buyers, but it also lowers average earnings (because of the gender pay gap). Second, the average ratio is affected by increasing inequality - i.e. if the prices paid by the top 10% grow faster than those paid by the remaining 90%, the average cost of housing will be dragged up. Third, the average age of a first-time buyer has increased from mid-20s to mid-30s since the 1970s. This means that their income will tend to be higher, relative to the average of the population as a whole, because most people hit their earnings peak around 40.
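As a rough sketch of how the first and third of those effects interact, consider the toy numbers below (all invented for illustration; only the 2.5 and 5 times headline ratios come from the paragraph above): the ratio of prices to average individual earnings can double while the ratio to the actual buyer household's income barely moves.

```python
# Toy illustration: the headline price-to-earnings ratio doubles, but the
# ratio faced by the typical buying household changes far less. All income
# figures are invented for the sake of the example.

avg_earnings = 25_000                        # average individual earnings

# 1970s-style purchase: single earner in their mid-20s, near average pay
price_then = 2.5 * avg_earnings
buyer_income_then = 1.0 * avg_earnings

# 2000s-style purchase: dual-income couple in their mid-30s, each ~20% above average
price_now = 5.0 * avg_earnings               # headline ratio has doubled
buyer_income_now = 2 * 1.2 * avg_earnings

print("Headline ratio: 2.5x -> 5.0x of average individual earnings")
print(f"Ratio to buyer household income: "
      f"{price_then / buyer_income_then:.1f}x -> {price_now / buyer_income_now:.1f}x")
# roughly 2.5x -> 2.1x on these invented numbers
```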
Seen in this light, historic house price inflation reflects three secular trends: increased longevity; the absorption of women into the workforce; and increasing income inequality. This is not to say that the share of income going to housing hasn't increased - it has, and the UK is particularly expensive - but that the increase in housing costs is driven by more than just demand outstripping supply. Even without induced scarcity over the last 40 years, we'd probably have seen the cost of housing go up. The point is not that "houses are more expensive", but that the amount of future income (and thus labour time) you have to promise in return has grown. The consequence of this will be an increase in average mortgage terms to keep pace with an increasing retirement age.
A paradox of modern society is that while technological advance should allow a reduction in working time, we are taking on more debt and thus pushing back the point at which we can begin to reduce hours. Some of this debt can be considered as investment finance, i.e. where we expect increased future income as a result, such as student loans, but the bulk of it is mortgage debt, which is unproductive at a macroeconomic level beyond the consumption sugar-rush of equity release. We kid ourselves that this is an investment too, but the capital appreciation on property, which reflects the future income of potential buyers, is only possible in a society committing an ever-larger amount of future labour time. While some individuals can buck this trend, either through luck or calculation (some will always buy low and sell high), society in aggregate must make a bigger contribution with every passing year, which for most people means working longer.
One result of this is that average working hours have increased in a bid to maximise future income, which means that the housing market works against productivity growth - i.e. we are driven towards increasing the quantity of time, not its quality, as a quicker way of increasing income. Job security has evolved from a dull constraint in the 60s, through a refuge from turbulence in the 80s, to an elite aspiration now. Ultimately, this constrains innovation and risk-taking (outside of financial services). It is a commonplace that high property prices distort the economy because more and more capital is tied up unproductively, but what isn't so readily recognised is that the embodiment of labour in mortgages also acts as a psychological drag: "The delirious rise in property prices over the last twenty years is probably the single most important cause of cultural conservatism in the UK and the US".
The "capital incarnate" of slavery undermined the economy of the South (1). When the Civil War broke out, the Confederacy had only 10% of manufactured goods in the US and only 3% of the firearms. Though it had been responsible for 70% of US exports by value before the war (mainly "King Cotton"), most of the receipts had been recycled into slaves or used to buy goods from the North. The failure to grow industries outside of the plantation system meant it had a population of only 9 million compared to 22 million on the Union side, and 4 million of that figure were slaves who could not be trusted with a weapon. The duration of the war was largely a result of the South having a third of the soldiery and (initially) the better generals, but unable to buy sufficient materiel, and unable to liquidate their slave capital or use it as collateral for foreign loans, the outcome was never in doubt.
I was always bemused by the claim made in the 1970s and 80s that the "right-to-buy" was a good thing because it meant council tenants would take better care of their properties. It was obvious on the ground that the tenant's pride reflected the quality of the housing, not the nature of tenure, and it was a myth that councils wouldn't let you paint your doors a different colour. It is only with time that I have come to appreciate the ideological foundation of that claim, and to see the parallel with the claims made by Southern planters in the US that their slaves, as valued property, were better cared for than free Northerners thrown out of work during industrial slumps.
1. The total capital value of slaves in 1860 was $3 billion. The total cost of fighting the Civil War was roughly $3.3bn for each side, though this was a much larger per capita burden for the South. In other words, fighting the war cost the Union roughly the same as it would have done to buy out the slave-owners (cf. the £20 million spent by the British government compensating West Indies slave owners in the 1830s). As such a prohibitively expensive scheme would have been politically impossible, while war against an aggressively secessionist South would have patriotic backing irrespective of the casus belli, an armed conflict was probably the only way of reforming and integrating the US economy as it expanded westwards.
Sunday, 17 November 2013
The State We're In
David Cameron's recent commitment to a "leaner, more efficient state" has been widely interpreted as a promise of permanent austerity, though as he delivered his speech at the Lord Mayor's Banquet, in white tie and tails, a more cynical interpretation would be "tax cuts for the rich". This pre-election manoeuvring has also revived the idea that the size of the state is a policy differentiator and we may be faced with a clear choice in 2015: Labour offering more state, the Tories offering less, and the LibDems pitching for the "don't know, best leave it as it is" vote. The voice of neoliberalism, i.e. Martin Kettle in The Guardian, is sceptical about these promises: "No modern political party in this country has a very clear answer to the question of what size the state should be".
This is an odd claim because the evidence of history is that the parties largely agree that it should be around 40% of GDP, and have done since WW2. Though there are real differences between the parties in respect of priorities and delivery, the overall level of public expenditure is more a matter of agreement than disagreement. It has fluctuated over the last 50 years, broadly in the 38-45% range, with troughs and peaks around 35 and 50%. However, this fluctuation is mainly driven by the economic cycle and secular trends, rather than by government policy. In recessions, a fall in GDP causes the percentage to increase as the numerator (public spending) cannot contract as quickly as the denominator (GDP). This is amplified by counter-cyclical "automatic stabilisers", such as increased spending on benefits as more people become unemployed or lose income. Conversely, when GDP growth picks up, public spending's share of national output tends to drop. By their very nature, public services do not quickly expand or contract in real terms, despite the threats and promises of politicians.
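A toy illustration of that numerator/denominator effect, with assumed figures (a 5% fall in GDP and a 2% stabiliser-driven rise in spending, neither taken from the text): spending barely changes, yet its share of GDP jumps by almost three percentage points without any deliberate expansion of the state.

```python
# Toy illustration of the denominator effect: public spending is broadly flat,
# GDP contracts, so spending's share of GDP rises with no change in policy.
# The 5% GDP fall and 2% stabiliser uplift are assumptions for the example.

gdp = 100.0
spending = 40.0                      # 40% of GDP, the post-war norm cited above
print(f"Before recession: {spending / gdp:.1%} of GDP")

gdp_after = gdp * 0.95               # GDP falls 5%
spending_after = spending * 1.02     # automatic stabilisers nudge spending up 2%
print(f"After recession:  {spending_after / gdp_after:.1%} of GDP")
# 40.0% -> ~42.9%
```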
Substantial policy-driven change to public spending usually means transferring economic activity between the public and private sectors. Changes to expenditure levels, whether cuts or investment drives, have only a marginal impact at the aggregate level. Between 1966 and 1986 public spending as a share of GDP was consistently a bit above 40%. Between 1986 and 2007 it was consistently a bit below. The main factor in this shift was the change in housing policy, which moved construction activity from the public to the private sector, followed by the privatisation of state industries and local authority services such as transport. Between 1997 and 2007, the share of GDP rose from 37% to 39%, having first dipped to 35% in 2000, reflecting the initial constraint and subsequent expansion of public services under New Labour. The aftermath of the 2008 crash saw this jump to 46% by 2010, reflecting both the absorption of the banks into the public sector and the counter-cyclical effects of the recession.
In 1978, public sector workers totalled 7 million. By 1998 this was down to 5.2 million. The number increased to 5.8 million during Labour's expansion of public services in the early 00s (mainly in health and education), with a further jump to 6 million as a result of the bank nationalisations in 2008. However, it has begun to decline again (partly because sixth-form and FE colleges were recategorised as private sector in 2012) and is now at 5.7 million. With the return of the banks to the private sector, in addition to ongoing cuts in services and the privatisation of Royal Mail, that figure will probably drop close to 5 million later this decade. Measured against the total population, that is a decline from 12% to 8% since 1978, but it's important to remember that this reduction is largely a product of privatisation in the 1980s, while around half of the likely drop this decade will probably be down to shifts between sectors rather than absolute cuts.
The "doom" narrative of public spending assumes that we face an ever-increasing level of expenditure on health, social care and pensions due to an ageing population. This is over-stated, in part because of the way reporting of this expenditure has changed over time. As an area becomes more politically salient, government tends to split it out - e.g. pensions were separated out from general welfare spending in the early 90s. Similarly, a lot of expenditure that would have been classed as "general" or "local government" in years gone by has been absorbed into the larger departmental budgets, due to a mixture of centralisation and political opportunism. In some cases this has been done informally, for example cuts in local authority care budgets have led to increased demands on the NHS. The chart below shows the composition over time.
Excluding the big ticket items, the underlying trend in public expenditure has actually been downwards, from 25% to 18% over 60 years, a decline wholly attributable to cuts in defence expenditure, from a post-war high of 11.2% in 1952 to a pre-crash trough of 2.6% in 2007. While demography has increased the total cost of pensions, and put pressure on health, it has also eased pressure on education, which peaked at 6.5% in 1975 (the end of the postwar baby-boom), dropped to under 5% for most of the 80s and 90s, and only got back above 5% after 2004. Welfare other than pensions took a step-up in the late 70s and early 80s as a result of the recession and longer-term deindustrialisation. It has not stepped back since, despite the real value of benefits being severely reduced. We now have an underlying trend rate of 6%, peaking at 8% in recessions. To put this in perspective, the reserve army of labour now costs more than twice the entire defence establishment.
UK public spending and the size of the state is broadly in line with our peers in Europe. This reflects a common approach to fiscal policy, i.e. tax and spend, despite a varied approach to service provision, from nationalised providers (like the NHS) through non-profit social enterprises to for-profit private suppliers. In contrast, the US state is about 10 percentage points smaller as a share of GDP, which reflects the public sector's greater absence from health, pensions and education. It's important to note here that the difference relates to the sectoral categorisation of similar services, not a fundamental difference in the composition of the economy. The small state rhetoric of the right claims that by reducing public spending we will encourage more productive activity in the private sector, but the evidence is that advanced economies just end up producing the same goods and services mix. Public health workers become private health workers, not export-oriented manufacturers.
In practice, the scale of public spending only changes through large-scale privatisation, i.e. the state stepping out of provision both in respect of tax and spend. When this occurs, it usually proceeds at a relatively slow pace, which means it depends on cross-party consensus. House building is a case in point. The current lack of affordable homes is as much the product of New Labour's desertion of local authority building as it is of the Tories' "Right To Buy" policy in the 80s. But such sectoral shifts, in either direction, are rare. Most public sector "reform" concerns outsourcing (and occasionally insourcing). The neoliberal strategy is not to "privatise", in the literal sense of that word, but to transfer the economic control of public services to privileged private sector corporations. In other words, the state is used to guarantee funding through taxation while rent-extraction and profiteering is placed outside public scrutiny. A G4S or Serco contract does not change the overall share of GDP.
The real significance of David Cameron's speech was not the content but the audience, i.e. the City of London. He isn't promising tax cuts - that's for later. What he is promising is that under an outright Tory government more contracts will be awarded to private corporations for the provision of public services, and that the nationalised banks will be returned to the private sector once all risks have been mopped up by the public purse. The timing of the speech, 18 months ahead of the next general election, reflects the need to get commitments on donations from those who will benefit from these policies. A leaner state means a fatter City.
Friday, 15 November 2013
We Are Responsible For Our Dreams
Cinema is first and foremost an exercise in framing, and Sophie Fiennes's 2012 documentary, The Pervert's Guide to Ideology (which I just got round to seeing at the Wimbledon Film Club), is a witty deconstruction of this. The frame is largely occupied by the Slovenian philosopher and cultural gadfly, Slavoj Zizek, who proceeds to unpeel the layers of ideological meaning in familiar films, continuing the pattern he and Fiennes established in 2006's The Pervert's Guide to Cinema. The recurrent trope is to cut the nose-tugging Zizek into key scenes, or have him inhabit mocked-up sets, which draws attention both to each film's artifice and to our common tendency to project our ego into the fantasy before us. Where Woody Allen's Zelig was a vapid bystander, Zizek is a noisy intruder, and all the more enjoyable for it.
Another frame is the structure of the documentary itself, which deals with each film as a discrete subject, like a series of Freudian case studies (Zizek happily adopts classic analysand poses, such as lying on Travis Bickle's bed, presumably at Fiennes's bidding). I half expected the Wolfman to be one of the featured films, though perhaps not Dora the Explorer. Zizek's schtick is a critical theory melding of Marxism and Lacanian psychoanalytic theory, with the former's notion of ideology as false consciousness "perverted" by Lacan's concept of the big Other, which allows us to externalise responsibility for our own willed fantasies. This idea, that ideology is a projection of our own desires, as much as the imposition of a hegemonic order that benefits others, is Zizek's central premise.
The legacy of 50 years of critical theory provides a further frame, both in the acceptance of popular cinema as an object of serious study and in the particular fascination (of European theorists) with American films. In Fiennes and Zizek's 2006 documentary we got critical favourites and Oedipal fanboys Alfred Hitchcock and David Lynch. In the 2012 release we get the more political Martin Scorsese and Steven Spielberg. This reflects Zizek's own burgeoning career as a popular critic of American blockbusters in the upper echelons of the media, but it is also one of the weaknesses of the documentary as it leads him to waste time on conservative (i.e. dull and self-regarding) lumps like Titanic and The Dark Knight. It is hardly surprising to discover that the former considers capitalists exploiting workers to be the natural order of things, or that the latter accepts the reactionary claim that we must lie to protect the innocence of the people.
Zizek is also guilty of some ideological sleight-of-hand himself (though I'm sure he'd happily admit to this) in two of his best "case studies". In his sympathetic reading of The Sound of Music, he wheels out the trope of cynical Catholics permitted anything by confession and absolution. This is a Protestant prejudice that originated in the Reformation critique of indulgences, and which still informs the modern reading of Italian politics and society as fundamentally corrupt (see Bill Emmott's 2012 documentary, Girlfriend in a Coma for an example). In the second case, his analysis of Taxi Driver focuses on the unreasonableness of the hero's fantasy and the ungratefulness of the liberated, which he counterpoints with John Ford's more ambiguous The Searchers. But while Scorsese's film represented a political response to the failure of liberal intervention in Vietnam (with obvious echoes today in Iraq and Afghanistan), as Zizek contends, it can also be read as the disappointment of the 60s generation (i.e. male directors of a certain age) at the choices of the 70s: we set you free and you opted for suburban comfort, or feminism, or smack.
The core of the documentary is philosophical, and it approaches this via two paths. The first is antisemitism, which Zizek introduces via the role of the shark in Jaws as the sum of all fears (the Jews/Jaws joke has been around since the release of Spielberg's film, and even got an oblique reference in Annie Hall). He then cuts to Nazi propaganda, pointing out the obvious illogicalities: Jews as simultaneously brutish lowlife and cunning assimilators. This suggests that we are fully aware of the nonsense, and therefore that we accept ideology willingly. We aren't brainwashed - we welcome the fantasy and reject the reality. This reinforces the significance of the film that Zizek opens the documentary with, John Carpenter's 1988 They Live, where the hero's fight with his friend is literally the struggle of the latter not to confront reality. This in turn recalls The Matrix, not in respect of Morpheus's offer to Neo of the blue pill versus the red pill (which featured in The Pervert's Guide to Cinema), but in the willing acceptance of the blue pill (illusion) by the traitor Cypher: "ignorance is bliss".
The second path is religion, which Zizek approaches first obliquely, through the deification of "The People" and the cult of personality in Stalinist and Maoist ideology (the excerpts from The Fall of Berlin are startling). He rejects the idea of historical necessity, insisting that there is no communist big Other. He then confronts religion more directly, through a consideration of Scorsese's The Last Temptation of Christ: "The only real way to be an atheist is to go through Christianity. Christianity is much more atheist than the usual atheism". In other words, atheism only has value as a process, the move from belief to unbelief, which allows you to savour both. If belief is the blue pill of The Matrix, then the rational deduction of non-belief is the red pill, the lifting of the veil, the denial of the big Other. What Zizek wants is the third pill, to recognise the reality in illusion itself, to cherish the symbolic fictions that regulate our reality, to domesticate the big Other.
In Zizek's philosophy, we are alone in reality and stuck with our dreams, so we might as well enjoy them. This is a postmodern sentiment that seeks to free us from the anxiety of desire, which he illustrates through Celia Johnson's vain search for a sympathetic auditor in Brief Encounter. And, despite the defiant closing words ("you ... can never kill a true idea", i.e. communism), it is also politically pessimistic: "The depressing lesson of the last decades is that capitalism has been the true revolutionizing force, even as it only serves itself". Looking on the bright side, as Zizek implicitly urges, at least you can enjoy the upcoming Fast and Furious 7 without any guilt.
Wednesday, 13 November 2013
Who's In, Who's Out?
Last week's "three spooks in a row" pantomime emphasised the extent to which secrecy depends on spectacle, from the spy chiefs' imitation of stern hierophants to the choreography of poppies in the chorus's button holes. The coincidental news that the Cabinet Office is reluctant to release government documents from a century ago is as much a part of the performance as the security chiefs' claims about "our adversaries rubbing their hands with glee" (when not stroking Persian cats or twisting their moustachios). The realm of secrecy recognises no limits in space or time, hence the symbolic importance of always having some ancient paperwork embargoed.
The current block on the Chilcot Report - i.e. the refusal to produce slightly less ancient documents that might suggest Tony Blair wasn't entirely straight in his dealings with Parliament - has previously been criticised by Peter Oborne as indicative of a "culture of secrecy" (and even a "conspiracy of silence"), but this misses the point. It isn't a secret that the UK supported the US decision to go after Saddam Hussein regardless, nor that Blair et al are keen to finesse this inconvenient fact out of official existence. The point at issue with Chilcot is whether Blair can be legitimately accused of misleading the House of Commons.
The formulation "culture of ..." is a classic neoliberal vector for attacking vested interests. The NHS is accused of a "culture of cruelty", the BBC of a "culture of waste and secrecy", the entire welfare state is a "culture of entitlement". It is amusing that this should now be deployed by a reactionary like Oborne to attack the arch-neoliberal Blair. The culture trope combines the traditional moralistic view of politics, that bad policy is the product of bad men, and the modern neoliberal view of government, that everything but the market is a conspiracy against the public (oh, irony). The bias of the term's use is a tactical choice: the collective culture is targeted to engineer reform (i.e. privatisation), while the failings of the private sector are excused by naming the guilty men (e.g. in banking).
You'll notice that no one has accused the security services of a "culture of secrecy", even though this must surely apply to them more than anyone else. But one thing they do share with this wider abuse of the term is the focus on the individual, hence the desire to shoot the messenger that is Edward Snowden and to personalise secrecy and security in the mind of the British public, of which William Hague's "the innocent have nothing to fear" was an early sighting. As the jurist Eben Moglen notes: "If we are not doing anything wrong, then we ... have a right to be obscure". In other words, not only does the individual have rights to privacy and secrecy, but society has a collective right not to be individuated.
Iain Lobban, the Head of GCHQ, insists "We do not spend our time listening to the telephone calls or reading the emails of the majority". Well, nobody thought you did, chum. Lobban is being disingenuous in his distinction as he knows (or believes) that "big data" is the real prize, both for the state and for privileged corporations, but the demonisation of the needle is a useful distraction from the regulation of the haystack. It might appear that the state and the corporations have different objectives, the former being interested in the few and the latter in the many, but the reality is that corporations are only interested in people they can "monetise", while the few is an ever-shifting and ever-growing subset. In practice, both discriminate, dividing the world into the "in" and the "out". This allows them to make a political distinction between potential and practice, when the real issue is the all-encompassing capability.
In 1997, the US Congress produced The Report Of The Commission On Protecting And Reducing Government Secrecy. The commission was chaired by Democrat Senator Daniel Patrick Moynihan, who wrote a foreword in which he noted that "secrecy is a mode of regulation. In truth, it is the ultimate mode, for the citizen does not even know that he or she is being regulated. Normal regulation concerns how citizens must behave, and so regulations are widely promulgated. Secrecy, by contrast, concerns what citizens may know; and the citizen is not told what may not be known". We know that documents revealing the duplicity or vanity of a still living ex-PM "may not be known". It's the stuff that we don't know about, e.g. the relationship between security agencies and large corporations, and how that affects our personal data, that constitutes the real secret.
Moynihan made an interesting point about the lure of the occult: "In a culture of secrecy, that which is not secret is easily disregarded or dismissed". Secrecy is thereby in conflict with evidence-based policy-making, as the most robust evidence is that which has been checked by many eyes: it stands up to scrutiny. The recourse to secrecy - "we believe this is right but can't tell you why" - easily segues into selectivity, only using data that support a prior belief. The famous Iraqi WMD dossier is a case in point. Where the realm of secrecy has expanded in recent years, such as in the "commercially confidential" contracts of outsourced public services, or the inner workings of "free" schools, you can be sure that this will not be criticised as a debilitating "culture". And should things go completely tits-up, e.g. G4S or the al-Madinah school, then failure will be personalised and the guilty parties replaced.
Pragmatically, there is no reason to believe that secrecy is actually helpful for government. As Moynihan urged: "We need government agencies staffed with argumentative people who can live with ambiguity and look upon secrecy as a sign of insecurity". That sounds more like a description of Edward Snowden than of any of the US or UK spy chiefs. But the Senator was writing at a time when he could refer back to the Pearl Harbor Syndrome, the desire of the security services never to be caught out again, and suggest it influenced the paranoia of J Edgar Hoover's FBI and the "Red Scare" of the 1950s. A few years later came 9/11, which inevitably gave the syndrome a reboot. This drove the massive investment in surveillance ("gather everything") that eventually overflowed into the public realm with the Manning and Snowden revelations.
It is not true that the US military invented the Internet, but it is true that network resilience in war was a goal of the packet-switching design that underpins it. The evolution of this from a closed network to an open one, an "internetwork", where redundancy and routing were strategies to mitigate unreliable hardware and outages, rather than a nuclear strike, was the work of academics. What is important is the opportunistic bleeding of ideas from the "national security" realm to the academic, and later from there to the commercial. There is no reason to believe this is uni-directional. In such an environment, which punctured national boundaries and evaporated the distinction between "home and away", it should be no surprise that the realm of surveillance and thus secrecy expanded. And just as the state took advantage of the Internet, so too corporations took advantage of the state's connivance in universal surveillance.
What the US and UK security chiefs appear not to have appreciated was that the publication of this was inevitable given the scale of the programmes and the nature of the technology: you can't as easily hide a haystack as a needle. 850,000 people apparently had access to the Snowden files, while 2 million had access to the WikiLeaks material. It seems odd they had no contingency plan ready for the day of revelation, unless you believe that "Nosey" Parker's "gift to terrorists" jibe was the product of their finest minds and a decade of careful redrafting.
The fundamental purpose of secrecy is not to entrench a self-interested bureaucracy (though that obviously plays a part), but to divide society. The secret state is born in the cruel selections of the playground and adolescent anxiety about the in-crowd. All countries suffer from this ingroup/outgroup psychology, but we have a particularly bad case of it, in terms of our obsession with secrecy and privilege, in the UK. I think this is because secrecy is bound up with the development of London as a concierge service centre for global elites (the City, the law, private schools etc). Confidentiality and discretion are brand values. The fact that we locals tend not to rubber-neck the rich and famous, a feature welcomed by foreign footballers as much as oligarchs, is held up as a sign of our sophistication. What we are selling to the global nouveaux riches is secrecy. The "spook theatre" was as much an advert for this as it was a reassurance to Parliament.
Monday, 11 November 2013
We Are Top of the League
Now seems as good a time as any to mention the football. 11 games out of 38 isn't an obvious break-point, but the end of our latest "defining week", plus the impending interlull, makes this a good time to pull onto the hard shoulder, purchase a styrofoam cup of scalding tea, and chew the fat (there's a burger van there as well).
Let's start with the defining week. Two out of three is not bad, but while the victory over Liverpool was convincing, I'm of the view that they are being a little over-praised at the moment (while many question our lofty perch, curiously few suggest Liverpool are out of place in second). Winning in Dortmund was special, but if it had been a two-legged tie we'd be out on goal difference, so it was no better than the recent ties against Milan and Bayern. That said, we've shown we can, as the cliché has it, beat anyone on the day. Ironically, only losing one-nil at Old Trafford (and having 60% of the possession) might be a better result than we think. United were at their recent best, but that wasn't up to much. Absent van Persie and Rooney, they look pretty ordinary. We were clearly jaded (a combination of Germany and a mystery bug perhaps), but we were not outclassed. The 8-2 spanking seems a long time ago. The broad conclusion then is that we are credible contenders.
Yesterday's result will no doubt have some pundits saying "I told you so". When asked in recent weeks about Arsenal's title chances, the Brains Trust of MOTD have tended to hesitate, smirk and then dismiss them. This vignette implies that the ex-pro has tacit knowledge not available to the common fan who may be deluded by the Gunners sitting atop the table. However, this insider knowledge is never fully revealed beyond anodynes about inexperience and squad depth, which are no more insightful than the prejudice you get down the pub. Wayne Rooney gave an example of this when he spoke, ahead of the game, about Arsenal tending to fade in the New Year, pointing to 2008 and 2011. Aaron Ramsey pointed out that in 2012 and 2013 the team had got stronger from that point onwards. They're both factually correct, but Rooney is clearly ignoring recent form, both Arsenal's and Man United's. In terms of trend, Ramsey is making the more convincing argument, and that remains the case despite the weekend defeat.
Arsenal have gradually improved their total points haul over the last 3 seasons: 68, 70, and 73. With no damaging departures over the summer, and with the useful reacquisition of Mathieu Flamini, a further incremental improvement in a relatively immature squad is possible (and there is still a January transfer window to come). 76 points would be a reasonable target. The addition of Mesut Ozil has undoubtedly been a psychological fillip, but more importantly, I suspect the £42m we paid might actually translate into extra points over the course of the season (money does generally buy success). Allowing for some luck (or the absence of bad luck re injuries and squad-sapping viruses), a step-up means we could hit 78 to 80 points by season end.
Given that the other title contenders have not yet put together consistent runs, and there are grounds to believe that none of them will do enough to leave everyone else in their wake (e.g. Chelsea and Man City's own travails this weekend), then 80 points, which would normally get you to roughly 2nd place (the last 3 runners-up finished on 71, 89 and 78), may prove sufficient to win the title this season (the last 3 champions finished on 80, 89 and 89, with the average for EPL champions overall being 86). The Bloomberg index of estimated points currently predicts a final four of: Chelsea on 77.8, Man City on 76.9, Arsenal on 76.8, and Manure on 75.2. If we can continue our 2013 calendar year form into 2014, we'll be in range of top spot.
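(For the curious, the arithmetic behind such projections is trivial. What follows is a minimal pace-based extrapolation in Python, using only the figures quoted in this post; it is emphatically not the Bloomberg model, which adjusts for fixture strength and much else.)

```python
# A minimal pace-based projection: assume the current points-per-game rate
# simply continues. Illustrative only - not the Bloomberg model.

GAMES_IN_SEASON = 38

def project_points(points_so_far: int, games_played: int) -> float:
    """Extrapolate a final points total from the current points-per-game rate."""
    return points_so_far / games_played * GAMES_IN_SEASON

# Arsenal after 11 games: 25 points (the figure used later in this post)
print(round(project_points(25, 11), 1))  # ~86.4, flattered by a strong start

# The trend of 68, 70 and 73 points over the last three seasons points to the
# more conservative 76-80 range suggested above.
```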
Some statistical models (like Bloomberg's) suggest that Arsenal are currently "over-priced" in pole position, and specifically that they have been fortunate in scoring more than their attacking play would merit, with Aaron Ramsey's hot-streak fingered as exceptional (and therefore likely to end at some point). I think this may be an unconscious nod to the belief that we're light up-front and only an injury (to Giroud) away from firing blanks (our duck at Old Trafford was our first in 11 games). I think this discounts the goal-scoring value we have coming back from injury in Walcott (14 league goals last season), Podolski (11 ditto), as well as the recently restored Cazorla (12 ditto). Our ratio of 22 goals from 11 games to date (2.0) is slightly better than last season's 72 from 38 (1.9). Before yesterday we were on 2.2, which suggests we're capable of finishing the season around the level of United's ratio from last season of 2.26 (86 from 38), assuming we maintain form.
So far this season we've scored more than 3 goals in a game only once (4 against Norwich), whereas last season we did so in 7 games overall. These can be considered "wasted" goals (though fun) as you still only get 3 points. We've also only failed to score once this season, while keeping 3 clean sheets. Last season after 11 games we'd got 3 ducks and 4 clean-sheets. In other words, we've been more efficient at converting goals into points. Last season produced 73 points from 72 goals (1.01). So far, we've got 25 points from 22 (1.14). In comparison, United finished last season with 89 points from 86 goals, a ratio of 1.03 points per goal. Regression to that mean would put us on 23 points today, but still in first place, while a ratio of 1.01 would give us 22 and third place. While we may be slightly over-priced, this masks a genuine improvement on last season.
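(The ratios in the last two paragraphs can be reproduced in a few lines. The sketch below uses only the numbers quoted above, so treat it as a back-of-the-envelope check rather than anything rigorous.)

```python
# Goals-per-game and points-per-goal, using only the figures quoted above.

def ratio(numerator: int, denominator: int) -> float:
    return numerator / denominator

# Goals per game
print(round(ratio(22, 11), 2))   # 2.0  - Arsenal so far this season
print(round(ratio(72, 38), 2))   # 1.89 - Arsenal last season
print(round(ratio(86, 38), 2))   # 2.26 - United last season

# Points per goal (conversion efficiency)
arsenal_now  = ratio(25, 22)     # ~1.14
arsenal_last = ratio(73, 72)     # ~1.01
united_last  = ratio(89, 86)     # ~1.03

# Regression to the mean: what 22 goals would have yielded at those rates
print(round(22 * united_last))   # 23 points - still top
print(round(22 * arsenal_last))  # 22 points - third on the day
```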
It is still too early to make any confident predictions about final positions, but we're operating within a fairly narrow range of likely outcomes. In recent seasons we've oscillated between third and fourth. If all the stars align, we could make first, but a safer bet would be second on 76 to 78 points. Of course, the key thing to remember, and the only concrete datum that actually means anything at this stage of the season, is that WE ... ARE ... TOP OF THE LEAGUE.
Thursday, 7 November 2013
Software Engineering Meets Politics
It is a feature of our times that a software development project can become a major rolling news item in the mainstream media. In the US, the problem-ridden launch of the "Obamacare" website has inevitably been used as a stick with which to beat the Affordable Care Act specifically, the president personally, and government generally. In the UK, the more modest mess that is the Universal Credit system is variously held up as proof of the incompetence of Iain Duncan Smith, the coalition, or government of every stripe. There are plenty of bloggers poring over the political significance of each failure (trivial, in my view), but I want to look at the way that software engineering itself has been dragged reluctantly centre-stage.
A little background first. The US Patient Protection and Affordable Care Act extends private health insurance to potentially 24 million people not covered by existing public schemes, such as Medicare (for the old and disabled) and Medicaid (for the poorest). It doesn't create a state insurance body (the "single payer" approach advocated by dangerous radicals), but instead requires regulated private insurers to offer state-wide policies with a common rate ("community rating"). In return, they get government subsidies to make premiums affordable. Enrolment is mandated - i.e. young, healthy adults cross-subsidise older or chronically ill adults. As there are multiple insurers, the scheme operates as a state-level marketplace or exchange, preserving the principle (or illusion) that the individual selects a private provider. To complicate matters, 23 states are creating their own exchanges (or already have them in place, e.g. Massachusetts), while 27 are defaulting to the federal exchange. On top of that, the exchange needs to interface with multiple other federal systems for the purposes of ID and entitlement verification. That's one big mutha.
The Act was passed in March 2010. The prime contractor, CGI Federal, was awarded the contract in December 2011. This time-lag probably reflects a combination of turf-wars in Washington, a volatile spec due to continuing Republican attempts at sabotage, the difficulty of creating a brief for such a complex system, and the intrinsic dysfunction of the government procurement process. If March 2010 was the moment the starting pistol was fired, the next Presidential election in November 2016 is the finishing-tape. If Obamacare isn't fully bedded-in by then, an incoming Republican President could conceivably abandon or gradually choke it. To be irreversible, Obamacare will need to have completed at least one successful cycle - a calendar year of coverage and claims - without major problems. This implies at least two full years of operation (i.e. year one problems solved for year two), hence go-live for 2014 was an immovable object.
There are reports that political manoeuvring meant the spec could not be nailed down until after Obama's re-election in November 2012, which meant development of the core system didn't commence until spring this year. In effect, a massive and complex systems integration project was to be implemented inside about 7 months. The central healthcare.gov site was launched on the 1st of October. Americans wanting health insurance cover from the 1st of January need to have enrolled (and made a first payment) by the 15th of December, though enrolment remains open till the end of next March. It is clear that the system was buggy at launch, a fact conceded by the US government after three weeks as "glitches". This has led to the promise of a "tech surge" to resolve all issues by the end of November.
A number of commentators have greeted the "tech surge" announcement by reference to Brooks's Law: "adding manpower to a late software project makes it later". This has been subtly misinterpreted by Clay Shirky: "adding more workers tends to delay things, as the complexity of additional communications swamps the boost from having extra help". Fred Brooks's point was about attempts to speed up development, usually in order to hit a management-dictated deadline, and how this becomes counter-productive after a certain point - i.e. you can absorb a modest increment in labour midway, but not a doubling of heads at the eleventh hour. In a post-deployment situation, where you are trouble-shooting a semi-stable system, bringing in fresh eyes and novel thinking is actually a good idea, as the developers may be too close to the code (or determined to cover up errors), so citing Brooks's Law in respect of the "tech surge" is not insightful.
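(The intuition behind Brooks's Law is his intercommunication argument: a team of n people has n(n-1)/2 potential communication channels, so coordination overhead grows quadratically while the added labour only grows linearly. A quick illustration below; the ramp-up and training costs Brooks also cites are ignored.)

```python
# Brooks's intercommunication argument from The Mythical Man-Month:
# n people means n*(n-1)/2 potential communication channels, so doubling
# heads roughly quadruples the coordination overhead. (Ramp-up time ignored.)

def channels(team_size: int) -> int:
    return team_size * (team_size - 1) // 2

for n in (5, 10, 20, 40):
    print(n, channels(n))  # 5->10, 10->45, 20->190, 40->780
```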
Shirky, as a techno-booster, sees government as the problem: "The business of government, from information-gathering to service delivery, will be increasingly mediated by the internet". In other words, government must change to suit the technology. This belief is founded on the assumption that technology is apolitical. The corollary of this is that government simply can't do technology: "Everything about the way the government builds large technical projects contrasts unfavorably [to non-government], from specification to procurement, to hiring, to management". Strangely, the recently-revealed competence of the NSA and GCHQ in delivering massive projects has not altered this view, any more than the precedent of the Manhattan or Apollo projects did.
It's worth noting that the limited evidence available points to integration and infrastructure issues as much as code bugs. This would support the suspicion that the tight window for development precluded adequate integration, regression and load testing. In other words, the problems are a consequence of the politics: a complex healthcare system, eleventh-hour compromises on operation that affected the software design, and a ridiculously small window for development and rollout. It's a wonder they got anything out the door. In this light, Shirky's criticism is not just ill-informed, it is clearly an attempt to create a false dichotomy between technology and politics, with the implication that the former is always and everywhere corrupted by the latter ("let my people go!"). In reality, they are inextricably intertwined.
A couple of weeks ago, Rusty Foster in the New Yorker pointed to the apparent lack of a decent spec: "building a complex software product without a clear, fixed set of specifications is impossible ... [it] is like being told to build a skyscraper without any blueprints, while the client keeps changing the desired location of things like plumbing and wiring". This suggests that in terms of software engineering, Rusty was stuck in a daydream of chisel-jawed architects straight out of Mad Men (perhaps he imagines this is New Yorker house-style). A client who keeps changing their mind is the premise of agile software development. To be fair to Foster, it's pretty clear that this was a waterfall project, rather than an agile one, which is to be expected when dealing with myriad contractors keen to milk government, so the absence of a spec ("big design") should have set alarm bells ringing. To judge from the testimony of the contractors before Congress, they were unable to cope with the government's late changes to the plan and the skimpy "big testing" at the end of development (two weeks is mentioned). Whichever way you look at it, this suggests that they were the wrong choice as contractors.
Amusingly, Foster appears to have discovered agile within a week of his earlier article (possibly because of some helpful comments below the line), though he doesn't appear to fully understand it: "An agile version of the Healthcare.gov project, for example, might have first released just the log-in component, or 'front door', for public use, before developing any of the tools to find and buy insurance plans". Yeah, that'll work. A population, a sizable proportion of whom still believe in "death panels" and think Obama is Kenyan, will have no trouble carrying out end-user testing and providing structured feedback. This is priceless: "An agile Healthcare.gov would have evolved, over time, in incremental steps that were always subjected to real-world use and evaluation". Foster appears to have missed the point that incremental evolution and real-world use is precisely what has been happening since the 1st of October.
One consequence of the US debacle may be a PR boost for agile, but that may be a mistake. Not only can agile be a disaster if treated as another commodity bought from a clueless consultancy, or if the customer is incapable of being agile themselves (customer involvement is the sine qua non of agile success), but it isn't necessarily appropriate for a systems integration project, or at least not without significant tailoring. It's pretty clear that the claims originally made by the DWP that the Universal Credit system would be "agile" were specious. Not only was this another spaghetti project - partially consolidating lots of legacy systems into one "simplified" whole - but the lead contractors were the usual waterfall-friendly suspects: HP, Accenture, CapGemini and IBM. This, combined with the traditional failings of "weak management, ineffective control and poor governance", inevitably led to by-the-numbers failure: malfunctioning software and financial waste (aka big profits for the aforementioned contractors).
The healthcare.gov system was going to be a partial failure come what may, largely because of the political constraints. I'm actually surprised it is working as well as it is, and I suspect that it will be operating at an acceptable level for most users by the end of the month. That said, there will be a long tail of "glitches" into next year. Some problems will not be shaken out until the end of the first full annual cycle. But, as noted above, the watershed will be a largely trouble-free 2015. Obama will be able to point to Obamacare as his signal policy achievement, and may even point to the way "we sorted it out" as evidence that government can get stuff done. His opponents will stop attacking the policy because of its popularity (just as they did with Medicare and Medicaid) and will point to the launch problems as evidence that government screws stuff up. Plus ça change.
The mini-me of Universal Credit will probably grind on and be quietly shelved after the 2015 general election. The neoliberals in Labour probably like the idea, but will want to rebrand and relaunch, which means happy days again for the big consultancies. If the Tories get back in, IDS will be gone and a change in priorities will marginalise the project. The ultimate aim was never simplification or a better service for claimants, but a reduction in the total welfare bill. There will be plenty of other ways to achieve the same goal, and plenty of other software "solutions" to punt. Meanwhile, software engineering is going to have to get used to the limelight.
Tuesday, 5 November 2013
Dragnet
I was reading a review of a new biography of Charles Manson the other day and was struck by how often the direction of his life turned on stolen cars and police check-points. Though we associate the growth of car-ownership and the highway system in postwar America with freedom, the reality was increased state surveillance and control, which has an obvious analogue with the Internet of today. Daniel Albert captures this well in an article on the coming robot chauffeur: "A century ago, as cars suddenly made it possible for Americans to slip local bonds, government responded with rules and regulations to keep track of who was driving what. Whether intentionally or not, traffic regulations became a national dragnet serving all manner of police purposes, suppressing everything from sex trafficking to adolescent rebellion".
Albert is a fine satirist as well as a car-geek (his take on flying cars is a hoot), but he could still learn a thing or two from Allister Heath, the editor of City AM, who told us last week to "Forget high speed rail. Driverless cars will revolutionise transport". The chief boon of this revolution is that "Driverless cars will reduce accidents by 90 per cent by eliminating human error". As a free-market evangelist, Heath is shrewd enough to recognise a corporatist gravy-train like HS2 when he sees one, but he dons rose-tinted specs when arguing the case for autonomous vehicles, particularly when you realise that the government-backed trial planned for Milton Keynes (which triggered his panegyric) involves vehicles that are little more than golf-buggies (you could argue that a top speed of 12mph will inevitably reduce accidents, but that's not what he is suggesting).
Though he doesn't provide a link, the 90% claim can be traced back to the Eno Center for Transportation, a US lobby group that bills itself as "a neutral, non-partisan think-tank" (i.e. it's funded by transport and technology companies). The figure originates not in a peer-reviewed study but in "a competitive paper competition among Eno’s Leadership Development Conference Fellows". Presumably the competition was to see who could be most on-message. The central claim in the paper, in respect of US fatal car accidents in 2011, is that: "Over 40 percent of these fatal crashes involve alcohol, distraction, drug involvement and/or fatigue. Self-driven vehicles would not fall prey to human failings, suggesting the potential for at least a 40 percent fatal crash-rate reduction, assuming automated malfunctions are minimal and everything else remains constant".
Simple logic would suggest that "at least" should be "at most", while the assumptions are asking a lot. The 40% figure is opaquely derived from a table of multiple causes - i.e. a crash may be the result of both drink and drugs. Alcohol is a factor in 31% of fatal crashes, distraction a factor in 21%, drugs a factor in 7% and fatigue a factor in 2.5%. You could make the figure of 40% by adding together alcohol, drugs and fatigue (perhaps the real benefit of driverless cars will be drunken snoozing), but it looks more like a guesstimate. It also implies, given that alcohol is the chief factor, that integrated breathalysers could deliver most of the safety benefits promised by robot chauffeurs.
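As a back-of-the-envelope check (a sketch of my own, not anything from the paper itself), here is the arithmetic in Python, using only the percentages quoted above:

    # The Eno table is multi-cause: one crash can involve drink AND drugs AND
    # fatigue, so simply adding the shares double-counts. These sums are
    # therefore upper bounds at best.
    factors = {"alcohol": 31.0, "distraction": 21.0, "drugs": 7.0, "fatigue": 2.5}

    print(sum(factors.values()))                                        # 61.5
    print(factors["alcohol"] + factors["drugs"] + factors["fatigue"])   # 40.5
    # The paper's "over 40 percent" looks like the second sum: a guesstimate
    # that drops distraction and ignores the overlap between the rest.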
The paper further observes that "Driver error is believed to be the main reason behind over 90 percent of all crashes". This is the opinion that has morphed into Heath's factoid. The objection to this "belief" is that it ignores who actually dies or is injured in car crashes. According to the US government, there were 29,757 crashes resulting in 32,367 fatalities in 2011. The Eno Center paper misrepresents the latter as "Total Fatal & Inurious [sic] Crashes per Year in U.S." (not only wasn't it peer-reviewed, the paper may not have been proof-read). Of the deaths, 22,448 were car occupants, 4,612 were motorcyclists, 4,432 were pedestrians and 677 were pedal-cyclists, with 198 other/unknown. Given that 30% of all fatalities were non-car-occupants (and this figure is probably similar for non-fatal injuries), who will continue to be the cause of many accidents and will presumably not all be augmented by technology, it is a stretch to believe that driverless cars will save two-thirds of them from their own folly. A robot may react and brake more quickly than a human, but the braking distance itself will be no shorter. If you step straight in front of a robot car, you'll still get hit.
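For the "two-thirds" figure, a quick worked calculation (again just a sketch, using the 2011 counts quoted above) shows what a 90% overall reduction would actually demand of non-occupant deaths:

    # What a 90% cut in road deaths would require, using the 2011 figures above.
    total = 32367
    occupants = 22448
    non_occupants = total - occupants        # 9,919, roughly 30% of the total

    target = total * 0.10                    # deaths remaining after a 90% cut
    # Even eliminating every occupant death leaves 9,919 non-occupant deaths,
    # so those would also have to fall to about 3,237 -- a cut of two-thirds.
    cut = (non_occupants - target) / non_occupants
    print("required fall in non-occupant deaths: %d%%" % round(100 * cut))   # 67%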
Heath then proceeds to roll out a series of popular nonsense claims in respect of driverless cars: "They will free up commute time for work, rest, sleep or entertainment, unleashing a huge increase in productivity and revolutionising the economics of commuting and thus of cities". This is the Winnebago claim, the idea that cars will evolve into a combination of office, gym, kitchen and bedroom on wheels. Obviously the editor of City AM has not thought through the economics of this, given the inescapable fact that a Winnebago costs a lot more than a Ford Fiesta. Or perhaps he has ...
He goes on: "Many workers will be able to live much further out and even commute for hours. Driverless cars will be able to drop passengers off and then wait for them at a convenient location, returning to pick them up; it may even be that they will be rented, rather than owned, which could do away with the need for homeowners to worry about parking spaces". This is the corral claim. Massive parking lots in the suburbs, and at the edges of city centres, would increase traffic miles and fuel consumption far in excess of the gains from more efficient robot driving. In practice, most city commuters will continue to use mass transit systems, i.e. buses and trains, which will remain the most efficient use of resources per capita. While these may have robot drivers in future, they won't have beds or desks, so "commuting for hours" won't be any more attractive than it is today.
It seems odd that a Hayekian libertarian like Heath should be a proponent of driverless cars, given that their credible benefits (in terms of safety, crime and road capacity) derive from a reduction in personal autonomy. Even the idea of "eliminating human error" has a distinctly un-Hayekian air to it. A robot chauffeur is still a chauffeur, so perhaps the attraction is the idea of command, with the hint that this will be an elite preference. Or perhaps this is just the automophilia that drove Margaret Thatcher's distaste for public transport. In a Telegraph article in April, Heath espied the future: "In 20 years’ time, the demand will be for far more car journeys and more roads, reversing the trend of the past decade and the recent resurgence of the railways. We will be entering a new golden age for car travel". In 20 years, Charles Manson will be 99. By then, car-thefts and police road-blocks may have been consigned to history.
Sunday, 3 November 2013
They're Gonna Break the Internet!
The "Balkanisation of the Internet" trope has reached critical mass. The bogey of the "splinternet" has been around the tech scene since year dot, and the Balkanisation metaphor too, but this largely centred on the pressure to replace open technical standards with proprietary ones - i.e. the drive for commercial lock-in. Some in the early 90s even saw the phenomenal growth of the Internet, and the lack of international management of connectivity, as likely to lead to Balkanisation by way of anarchy. In fact, it led to the "dark net" (or dark address space), those Internet resources that were unreachable by conventional means due to design or accident (the unindexed "deep Web" of dynamic or password-protected content, and deliberately hidden sites like Silk Road, are subsets of this). The Internet manages to be simultaneously a supra-national world-state and the Balkans, which is a neat trick.
Over the last 10 years, the Balkanisation metaphor has been used to describe three distinct developments in respect of the Internet: the growth of national regulation and censorship (China being the prime example); the push towards integrated services by the technology companies (e.g. bloatware on PCs that "manages" your Internet access); and the undermining of net neutrality - i.e. the desire of ISPs to charge different rates for different data traffic (e.g. you pay more for reliable streaming), which is a reflection of the cartelised market for network access (the local loop being a natural monopoly). With the NSA/GCHQ revelations this year, the frequency of the term's use has swerved decisively towards the political realm and flowered as a full-blown trope: the nationalised intranet.
The use of "Balkanisation" in a geopolitical context, which implies both the petty and the internecine, originates in the nineteenth century patchwork of states that emerged in South East Europe from the crumbling Ottoman empire. Following its comeback tour during the Balkan Wars of the 1990s, and its association with the ideology of "liberal interventionism", it took on a second life as a key neoliberal term of dismissive censure, the opposite of (positive) globalisation and shorthand for reactionary backwardness.
Many of the "Balkanisation of the Internet" stories can be read as simplistic neoliberal propaganda, both in its corporate and supra-national state forms. Thus US commentators emphasise the threat to the "free and open" Internet, which they equate with "modern economic activity", while Europeans emphasise the need for regulation: "protecting and promoting human rights and the rule of law also on the Internet, alongside its universality, integrity and openness". This "defence of freedom" versus "need for oversight" dichotomy is currently being played out in the Obama-Merkel spat.
One thing the Snowden affair has reminded us of is that the vast majority of espionage is focused on industrial and commercial advantage, not national security. It's about acquiring non-lethal technology and establishing negotiation vulnerabilities, not James Bond and poison-tipped umbrellas, despite the foaming nonsense in the UK about "gifts to terrorists". The fuss over the NSA in Brazil is more about spying on Petrobras than politicians, and the interest in Merkel's "handy" ultimately reflects German economic dominance, not her indiscretion. Of course, the term "national security" is routinely elided with "strategic economic interests", and there are plenty of stooges who will justify espionage as necessary when dealing with corrupt foreigners (conveniently forgetting the role that US economic imperialism has in encouraging that corruption).
The suggestion that the Snowden revelations may lead to the "break-up" of the Internet is absurd. The Internet is not a single public network, but a loose collection of private networks using common protocols (there's a clue in the name: the interconnection of networks). Short of abandoning TCP/IP, it isn't going to happen. Brazil's proposals to "nationalise" the Internet simply mean privileging its own corporations in-country, and securing regional domination across South America, if possible, much as the EU seeks to do in Europe. It does not mean your email server will be administered by a civil servant if you live in Rio, or that Brazilians will become "isolated". Even in China, the network is a joint venture between private companies and the state, and the censorship of content owes more to a social norm of self-repression than to technical impediments (which are not difficult to circumvent).
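To labour the point with a sketch (the hostname is arbitrary, chosen only because it is administered in Brazil; any publicly routed server would do): reaching it requires nothing beyond the shared protocol stack, and no central operator is consulted along the way.

    # The "interconnection of networks" in miniature: DNS plus TCP/IP is all
    # that is needed to reach a host run under another jurisdiction.
    import socket

    host = "www.registro.br"   # Brazil's domain registry, purely illustrative
    print("resolves to:", {ai[4][0] for ai in socket.getaddrinfo(host, 443)})

    with socket.create_connection((host, 443), timeout=10) as conn:
        # The handshake crossed however many independently run networks sit
        # between here and there, none of which needed to grant permission.
        print("TCP handshake completed with", conn.getpeername())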
Predictably, conservatives have started wailing about "Soviet-style" repression and "national firewalls" crushing online freedom. Some neoliberals prefer the management consultant's argument that this will lead to inefficiency, raising the spectre of redundant bandwidth and hardware ("This would be expensive and likely to reduce the rapid rate of innovation that has driven the development of the internet to date"), but this is illogical. If nations decide to build their own server farms and lay additional cables, this will increase network capacity and spur domestic technology firms. Super-fast broadband would reach rural areas in the UK a lot quicker if we actually had a nationalised network.
The Balkanisation of the Internet trope is interesting because it comes at a time when the tech companies are increasingly driving users back towards the "walled garden" model of the 1990s. The rapid adoption of smartphones and tablets means that increasing numbers of users have little control over their Internet access. For most, this is a boon - they don't have to worry about that head-splitting tech stuff and their devices just work - but it means that more and more exist in a bubble: "What people are choosing is less an iPhone 5s over a Moto X than an entire digital ecosystem that surrounds and permeates their life, and which will affect every other piece of technology that they buy". Even ostensibly "open" platforms like Google (i.e. not tied to particular devices) seek this control through the "filter bubble" of personalisation.
The major technology companies, from verticals like Apple through pure-plays like Amazon, are engaged in the "capture" of users and the partitioning of the Internet at the application layer, not at the network layer. But their ambitions go far beyond mere brand loyalty to exclusive control of the shop, which is (ironically) nearer to the "Soviet-style" monopoly of old. The EU's antipathy towards this, expressed in concerns over privacy and data protection (i.e. consumer rights), is a straight tussle between competing neoliberal power blocs.
The real threat to the Silicon Valley companies is the decoupling and fragmentation of the cloud, in other words, the repatriation of data assets. National big data is where the money is - or is presumed to be. The parasitism of the security services is incidental. While the companies could resist this pressure, the more likely outcome will be a deal with national governments. The state will regulate access to national data assets, which in practice means dividing the spoils between global providers, privileged national providers, and the security services of the nation state and its allies. This means the Internet will no longer provide such a pronounced "home advantage" to the US - though it will be a long time (if ever) before it becomes a genuinely level playing field - while preserving the relative commercial advantage of US corporations. The Internet will be no more broken tomorrow than it was yesterday.
Friday, 1 November 2013
From Bank to Bude
This week's Snowden news, that the NSA and GCHQ have been copying Google and Yahoo data en masse, may prove to be more significant than the earlier PRISM and TEMPORA revelations, though the latter pointed to the likelihood of a programme targeting specific data types (i.e. Gmail or Yahoo Mail). While some appear more interested in the post-it note network schematic, complete with smiley face, the IT security maven Bruce Schneier gets to the point: "In light of this, PRISM is really just insurance: a way for the NSA to get legal cover for information it already has. My guess is that the NSA collects the vast majority of its data surreptitiously, using programs such as these. Then, when it has to share the information with the FBI or other organizations, it gets it again through a more public program like PRISM".
The wording of the slides implies that the NSA geeks are familiar with Yahoo's internal technology, notably the Narchive data format, but are not involved in discussions about operational use. They were unclear whether Narchive transfers were triggered by user relocation between data centres, backups from non-US to US sites, or "some other reason". Though this is potentially rich data (email content and attachments), the amount of it, and its age (over 6 months old), means "the relatively small intelligence value it contains does not justify the sheer volume of collection at MUSCULAR (1/4th of the total daily collect)", according to the NSA's own analysts. In essence, the NSA are leveraging Yahoo's housekeeping. Like an industrial-scale Benjamin Pell, they are rifling through the bins.
The schematic led to some initial speculation that the NSA might have hacked Google and Yahoo's front-end servers, or even compromised the SSL (aka TLS) cryptographic protocol, but the obvious interpretation is that they are simply harvesting clear-text packets within the companies' private networks. The best way to overcome security is to circumvent it, and the NSA and GCHQ had a method already in place because of their long-standing relationship with the telcos and cable providers for tapping. It's not clear where the quoted access point DS200B is, but a strong contender would be Bude in Cornwall, where many of the transatlantic cables land and GCHQ has a facility. A look at the global network of undersea cables highlights the importance of the peninsula. (Incidentally, it also indicates the importance of Australia in providing access to the cables that service South Asia, which in turn explains why the apparently nostalgic "Five Eyes" club remains so important to the US.)
Some see this revelation as a watershed moment for the technology companies. Though they were obviously aware of PRISM, because they were served with specific orders through the front door, this back door approach will only have needed cooperation by the telcos (and a willing foreign host, i.e. the UK), giving the Internet companies the grounds to claim ignorance and express outrage at the NSA's behaviour. In reality, it is improbable that the Silicon Valley firms did not know this was going on.
The schematic shows the Google front-end server as the gateway between the public Internet and the Google cloud, with the explanation "SSL added and removed here". Assuming this bears some relation to the truth, it implies that Google routinely decrypts data on arrival in the cloud, which is logical if your business model depends on analysing it for exploitation. The company's decision earlier this year to encrypt all of its traffic between data centres was probably not coincidental, but it is also somewhat misleading, as encryption over the fibre backbone still allows it to decrypt and leverage the data while resident on its cloud servers. Gmail has been encrypted since 2010, but that just covers the "local loop" between Google's mail servers and your device. It's been held in clear-text at Google's end all the time.
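The distinction is easy to demonstrate. A minimal sketch (standard-library Python; the hostname is illustrative): TLS protects the hop from your device to whichever server terminates the session, and says nothing about what the provider does with the plaintext behind it.

    # TLS covers the client-to-front-end hop; the front end then decrypts
    # ("SSL added and removed here"), and whatever happens behind it --
    # replication between data centres, indexing, archiving -- is outside
    # the TLS session entirely.
    import socket
    import ssl

    host = "mail.google.com"   # illustrative
    ctx = ssl.create_default_context()

    with socket.create_connection((host, 443), timeout=10) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            print("protocol:", tls.version())    # e.g. TLSv1.3
            print("cipher  :", tls.cipher()[0])
            # Everything sent over `tls` is encrypted only as far as the
            # server that terminates this session, not "end to end" through
            # the provider's cloud.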
One point this highlights is that the operational reality of data storage, which means replicating it across the globe, makes the whole "on US territory" distinction re the legality of surveillance irrelevant. Regardless of where the data originates or is used, a copy can probably be accessed in a benign legislative environment. In effect, countries like the UK are replicating in the cloud (which has a physical reality) their traditional role as offshore havens (or "secrecy jurisdictions"), where domestic legal constraints and regulatory oversight can be circumvented. Conceptually, Bude in Cornwall is a lot like the City of London.
It would be naive to believe that this would solely be of benefit to state agencies, just as it is naive to believe that offshore tax havens are used exclusively by corrupt politicians and criminals. Fig leaves, like the recently announced register of beneficial owners for UK companies, ignore the reality that tax dodging is carried out quite openly. The problem is not covert evasion but shameless avoidance, and the irony is that companies like Google and Yahoo, otherwise outraged at the exploitation of their transparency, are practised and determined avoiders.
In the realm of ideology, the revelation prompted neoliberal calls to reform the state to the advantage of personal liberty. Martin Kettle even went so far as to quote Spinoza: "the true purpose of the state is in fact freedom ... Its aim is to free everyone from fear so that they may live in security so far as is possible, that is, so that they may retain, to the highest possible degree, their right to live and to act without harm to themselves and others". According to Kettle, "On both the left and the right, many reflexively see the state as overly powerful in various ways. The internet age has undoubtedly intensified this view. It says: there's the state over there, with all its powers and controls. And there's us over here, answerable to it but not part of it." This false dichotomy ignores the central role of business, even though the evidence that the state and favoured corporations are engaged in a stitch-up across almost every area of public life is now overwhelming.
In the political realm, the debate over the NSA has already moved on from civil liberties and commercial exploitation to the struggles of great powers internationally (e.g. the current Merkel-Obama face-off) and the "oversight of surveillance" domestically (a serious debate in the US, comic opera in the UK). While neoliberals fear the balkanisation of the Internet, and the threats this poses to "online freedom", the reality is that the Internet has always been US "home advantage". The gradual erosion of this dominant position will continue, even if the NSA is reined in and Silicon Valley pays more tax in the countries where it trades.
The struggle is a two-dimensional matrix: the neoliberal dialectic of state and corporations; and the power bloc tussle of national and regional advantage (accentuated, rather than dissolved, by globalisation). The driver behind this is technology and the temptation for capitalism to monetise (and commoditise) a previously worthless asset, our private opinions.