Stories such as these have been appearing in ever greater numbers
recently, as the technologies involved become ever more integrated into
our lives. They form part of the Internet of Things (IoT),
the embedding of sensors and internet connections into the fabric of
the world around us. Over the last year, these technologies, led by Amazon’s Alexa and Google’s Home, have begun to make their presence felt in our domestic lives, in the form of smart home devices that allow us to control everything in the house just by speaking.
We might look at stories like those above as isolated technical
errors, or fortuitous occurrences serving up justice. But behind them,
something much bigger is going on: the development of an entire class of
technologies seeking to remake the fundamentals of our everyday lives.
Breaking the social order
These technologies want to be ubiquitous, seamlessly spanning the
physical and virtual worlds, and awarding us frictionless control over
all of it. The smart home promises a future in which largely hidden tech
provides us with services before we’ve even realised we want them,
using sensors to understand the world around us and navigate it on our
behalf. It’s a promise of near limitless reach, and effortless
convenience.
It’s also completely incompatible with social realities. The problem
is, our lives are full of limits, and nowhere is this better
demonstrated than in the family home, which many of these technologies
target. From the inside, these places often feel all too chaotic but
they’re actually highly ordered. This is a world full of boundaries and
hierarchies: who gets allowed into which rooms, who gets the TV remote,
who secrets are shared with, who they are hidden from.
Much of this is mundane, but if you want to see how important these kinds of systems of order are to us, consider the “breaching experiments”
of sociologist Harold Garfinkel in the 1960s. Garfinkel set out to
deliberately break the rules behind social order in order to reveal
them. Conducting the most humdrum interaction in the wrong way was shown
to elicit reactions in others that ranged from distress to outright
violence. You can try this yourself. When sat round the dinner table, try
acting entirely normally save for humming loudly every time someone
starts speaking, and see how long it is before someone loses their
temper.
The technologies of the smart home challenge our orderings in
countless small ways. A primary limitation is their inability to
recognise boundaries we take for granted. I had my own such experience a
week ago while sitting in my front room. With the accidental slip of a
finger I streamed a (really rather sweary) YouTube video from my phone
onto my neighbour’s TV, much to the surprise of their four-year-old
daughter in the middle of watching Paw Patrol.
A single press of a button that can’t be disabled was literally all it
took. That, and the fact that I have their Wi-Fi password on my
phone because I babysit for them from time to time. To current smart home
technology, those who share Wi-Fi networks share everything.
Of course, we do still have passwords to at least offer some crude
boundaries. And yet smart home technologies excel at creating data that
doesn’t fit into the neat, personalised boxes offered by consumer
technologies. This interpersonal data concerns groups, not individuals,
and smart technologies are currently very stupid when it comes to
managing it. Sometimes this manifests itself in humorous ways, like
parents finding “big farts”
added to their Alexa-generated shopping list. Other times it’s far more
consequential, as in the pregnant daughter story above.
In our own research into this phenomenon, my colleagues and I have
discovered an additional problem. Often, this tech makes mistakes, and
if it does so with the wrong piece of data in the wrong context, the
results could be disastrous. In one study we carried out,
a wife ended up being informed by a digital assistant that her husband
had spent his entire work day at a hotel in town. All that had really
happened was an algorithm had misinterpreted a dropped GPS signal, but
in a relationship with low trust, a suggestion of this kind could be
grounds for divorce.
Rejecting the recode
These technologies are, largely unwittingly, attempting to recode
some of the most basic patterns of our everyday lives, namely how we
live alongside those we are most intimate with. As such, their placement
in our homes as consumer products constitutes a vast social experiment.
If the experience of using them is too challenging to our existing
orderings, the likelihood is we will simply come to reject them.
This is what happened with Google Glass,
the smart glasses with a camera and heads-up-display built into them.
It was just too open to transgressions of our notions of proper
behaviour. This discomfort even spawned the pejorative “Glasshole” to describe its users.
Undoubtedly, the tech giants selling these products will continue to
tweak them in the hope of avoiding similar outcomes. Yet a fundamental
challenge remains: how can technologies that sell themselves on
convenience be taught the complexities and nuances of our private
worlds without us having to constantly hand-hold them, which would
entirely negate their aim of making our lives easier?
Their current approach – to ride roughshod over the social terrain of
the home – is not sustainable. Unless and until the day we
have AI systems capable of comprehending human social worlds, it may be
that the smart home promised to us ends up being a lot more limited than
its backers imagine. Right now, if you’re taking part in this
experiment, the advice must be to proceed with caution, because when it
comes to social relationships, the smart home remains pretty dumb. And
be very careful not to stream things to your neighbour’s TV.
[A piece for the Sociological Imagination blog, on the subject given by the title above.]
My first experience of interdisciplinarity was genuinely exciting
to be a part of. To some degree of course the quality of the experience
was shaped by the particular focus of research, and the characters of
those on the team. But fundamentally, the work of attempting to
understand a shared problem, and enact a shared solution, was deeply
satisfying, often surprising, very difficult (usually in a good way), and
only on occasion terrifyingly overwhelming.
As the talk of ‘solution’ suggests, this was an interventionist project, tasked with achieving ‘impact’. Public Access Wi-Fi Service (PAWS)
was an Internet access model by which existing domestic broadband
connections could securely share a small slice of connectivity (2Mbps)
with others living close by. In doing so it would address one barrier to
online access, that of cost (and/or creditworthiness). It was never
intended to address absences of relevant skills or positive meanings,
but previous work suggested that cost was a big enough hindrance for
enough of those categorised as ‘digitally excluded’ that it was worthwhile to tackle on its own.
At the time, and still today, this
struck me as a noble goal to pursue. We cited a UN report that spoke of
digital access as a human right and, whilst acknowledging the
limitations imposed by today’s privatised market orthodoxy, we spoke of the
possibilities of a National Broadband Service. To be genuinely invested in the social value of your project is enormously beguiling, perhaps dangerously so in hindsight.
Our approach felt resolutely
socio-technical. Computer scientists would create the software which
carried this transformational potential; two sociologists (of which I
was one) would study its deployment in a real world
setting. We would do it at scale – up to 50 installations – and at the
margins – a socio-economically troubled inner city estate. This was ‘in-the-wild’
research of a kind that simply isn’t done (perhaps with good reason
given what followed). The ‘wild’ of technology deployments is often
rather tame
– it is outside the lab, but it’s a world conterminous with the white,
middle class and educated inside. By necessity of seeking out the
digitally excluded, we had to go further, venturing “across the parking
lot” (Kjeldskov & Skov 2014) and beyond.
In hindsight it is easy to
disassemble this endeavour and critique the techno-utopianism which lay
at the heart of it. That though is not what I want to write about,
certainly not directly, not least because PAWS still feels to me to have
been genuinely brave, and if it was flawed, it tried. The detachment of side-line critique is easy by comparison.
What I do want to write about is the
experience of doing PAWS. Judged by its starting goals, PAWS ultimately
failed. We – the sociologists – never really got to study PAWS in its
intended setting. Instead, we worked, endlessly, at embedding
it in the setting. We rarely got to step back and observe. The work of
embedding a research technology in a setting is little spoken of. Rare
exceptions include Peneff’s (1988) study of French fieldworkers carving
out the necessary agency to adapt formalised, large scale survey
instruments to localised conditions, and Tolmie et al. (2009) on
‘digital plumbing’, that is, the work of reconciling deployed technologies with
the social worlds in which they are to be set loose. Here I want to
highlight three challenges that emerged from this work of embedding.
These are discussed in detail in our paper (Goulden et al. 2016) [Open Access], where we also offer some means of resolving them. I merely introduce them here.
Problems of time:
When, as sociologists, we approached this collaboration with computer
scientists, we were aware of a long history of ethnographic work within
CS, primarily in the form of the subdiscipline of Computer-Supported
Cooperative Work (CSCW). We failed to appreciate that PAWS was different
from the canonical CSCW study, in which an existing or novel technology
is studied within an organisational setting. Perhaps the single most
important difference was this question of embedding – in the typical
CSCW study, the embedding is being done by the organisation, and the
ethnographer is there to study it. We were attempting to do both,
simultaneously. Furthermore, our setting – a marginalised inner city
estate – was significantly more socially ‘distant’ from us, as middle
class white-collar professionals, than any typical office might be. The
result of these differences was that the work was slow.
There was no prospect here of ‘quick and dirty’ ethnography of the
kind which is commonplace in traditional technology-led projects.
The cadence of the work was entirely
out of kilter with that of computer science. This is a field in which
talk of iterative, “agile” development abounds, where ‘Moore’s Law’
dictates that the capacity of the underlying technology doubles every 18
months, where Mark Zuckerberg extols the mantra of “move fast and break things”. As strangers, and guests, in a foreign land, we could not afford to break anything.
It wasn’t that the computer science work was constantly ahead of us.
Rather, the development cycles of the two disciplines were rarely in
sync, which greatly complicated everything else.
Digital plumbing: in
turning attention to the work of installing deployed research tech in
homes and other non-lab settings, Tolmie et al. (2009) were drawing
attention to how fundamentally socio-technical
this work is. This was all the more so in PAWS, where the work was split
cleanly between lab-based ‘technical’ labour, done by the technologists,
and real world ‘social’ labour, done by the sociologists. The
work of embedding the technology was therefore all our own. The task
did not appear overly complicated – plugging in additional routers in
the houses of those ‘sharing’ their signal, and installing software on
the devices of those making use of this signal. The latter commonly
threw up all kinds of errors and snags which slowed us down, but in and
of itself was rarely insurmountable.
What proved far more of a problem was the range of the
Wi-Fi which underpinned the entire system. Huge amounts of additional
labour were generated by the fact that Wi-Fi signal strength was highly
unpredictable. Sometimes, due to the specific local material
circumstances – the positioning of walls, trees, inclines etcetera – it
travelled far further than anticipated. More often it didn’t come close.
We had been caught out here not by the labour which falls between disciplines, but by the knowledge.
It turns out that real world Wi-Fi performance is a poorly understood
phenomenon, beyond perhaps very specific niches. As one of the computer
scientists on the team summarised: “Radio physicists know what the
answer is in theory; the lab engineers know what the answer is by
simulation; computer scientists don’t care what the range is, they care
what the throughput or latency is.” The greatest challenge for our
fieldwork came when this technical labour combined with the demand for
emotional labour. Peneff (1988) speaks of the means by which
fieldworkers “cope” with the many ambiguities and tensions of fieldwork,
in a setting in which they must execute a formalised task in a manner
naturalistic enough that the human participant might engage as if it was
a conversation with a trusted acquaintance. Trying to deduce why an
iPad was refusing to connect to PAWS – instead complaining of an ‘Out of
date security certificate’ – whilst simultaneously presenting the
required attention and sympathy towards a participant met five minutes
earlier, who was now relating her recent ordeal at the local hospital
following a heart scare, we found it difficult not to look on Peneff’s
fieldworkers with envy. This simultaneous performance of emotional and
technical labour, orientating to both human and non-human, is a
challenge particular to this form of fieldwork.
Going native:
Doing interdisciplinarity means stepping outside traditional discipline
boundaries and making a commitment to meaningful engagement with what
may be very different logics of enquiry. There is a balancing act to be
done here. As social scientists we should maintain a critical appraisal
of the technological programme and its conception of the setting.
Perhaps too enamoured by the laudable goals of PAWS, we did not always
do this, becoming too close to the project’s “technical boosterism” (Savage 2015).
Within PAWS this was realised in how
our original plan constituted its participants. During these initial
stages, the greatest concern amongst the project team was that PAWS
might fail to find enough residents willing to act as sharers. It was
easy to adopt the computer scientists’ concerns that the notion of
sharing a resource with strangers would be rejected by many, or that
security fears might prove insurmountable. Those using the system were
less of a concern: it was thought that the combination of free access to
the Internet and a £50 voucher for participating in the research would
be sufficiently compelling for those with limited resources.
In hindsight it became clear that in
buying into PAWS’ technological programme we had been insufficiently
sensitive to the social orientations of those we were seeking out. We
were appraising the project through the eyes of the technologists not
the members of the setting. Those using the system were liable to be
amongst the most marginalised of a marginalised community. The
implications of this for the door-to-door recruitment we conducted are
made clear in McKenzie’s (2015) ethnography of life on inner city
estates (actually conducted on another Nottingham estate just 3 miles
away from ours). She writes:
it was actually very impolite to
turn up unannounced. This practice was always about risk management –
there was a lot of fear and suspicion on the estate, fear of the
unannounced visitor, which meant the police, the ‘social’, the TV
licensing people. It always meant problems, and doors would not be
opened if they didn’t know who was on the other side of it. (p. 89)
Our experience of going door-to-door
seemed to support McKenzie’s account: potential users of the system were
hard to find, and at many properties the door was never answered, despite
our knocking on more than one occasion, often when it was clear someone
was home. The result was that we never recruited anything like as many
users as we hoped for, and this was ultimately where the project failed
to achieve its original goals.
Where PAWS succeeded was in
demonstrating some of the challenges to be overcome if we are to become
serious about doing ‘in the wild’ research. In turning increasingly
towards applied, technology-led research, directed towards specific
‘social problems’, we overlook at our peril the work of embedding, both as a task in itself, and in what it implies for interdisciplinary collaboration.
Facebook came in for considerable criticism recently when they revealed that, over a one-week period in 2012, they conducted a study
on 689,000 users. By filtering the status updates that appeared on
individuals’ news feeds, the study set out to measure “emotional
contagion”, a rather hyperbolic term for peer effects on the emotional
state of individuals. The study concludes that Facebook users more
exposed to positive updates from contacts in turn posted more positive
updates, and fewer negative ones, themselves. The result for those
exposed to more negative updates was the opposite.
Some of the claims made by the study are open to challenge, but here
I’m interested in the response to it. I’ll go on to argue that we should
actually be grateful to Facebook for this study.
Criticism revolved around two interrelated themes: manipulation and
consent. Of the first, Labour MP Jim Sheridan, a member of the Commons
media select committee, declared that “people are being
thought-controlled”, and several news stories declared it “creepy”. The second response, as reported here,
was led by academics highlighting clear differences between the ethical
requirements they are required to meet when conducting research, and
those the Facebook study adopted.
Without informed consent, it’s difficult not to see an attempt at
mass emotional manipulation as creepy, so it is highly problematic that
the study’s claim for consent is so weak. It states “Facebook’s Data Use
Policy, to which all users agree prior to creating an account on
Facebook, [constitutes] informed consent for this research”. This is
simply nonsense. Even if we pretend every user actually read the Data Use Policy
rather than just clicked past it (and how many of us ever read the
Terms and Conditions?), this should have happened shortly before the
study – rather than potentially as long ago as 2005 when Facebook first
launched. Further stretching the definition of “informed” is the fact
that the key sentence in the Policy comes at the bottom of a long list
of things Facebook “may” use your data for. This solitary sentence –
“internal operations, including troubleshooting, data analysis, testing,
research and service improvement” – embedded in a 9,091-word document,
may legally constitute consent, but from an ethical standpoint it
certainly isn’t informed.
It is perhaps surprising, having said all this, that I think Facebook
should be congratulated on this study. What they have managed to do is
draw back the curtain on the increasingly huge impact that tech
companies’ algorithms have on our lives. The surprising, and rather
worrying thing, is that Facebook has done this inadvertently, whilst
reporting on a study of social contagion. The really important
‘contagion’ revealed by this work is that of invisible filters and
rankings in structuring our access to information online. The reason why
this is worrying is that not seeing this controversy coming suggests
Facebook are as blind to the social consequences of these processes as
most of the public have been.
Despite the accusations of “creepy” manipulation, the only thing that
is unique about what Facebook did in this experiment is that in
reporting it they publicly admitted that the processing they carried
out was not done in the “interests” (as defined by Facebook) of the
individuals involved. There are two issues here. The first concerns what is
in the interests of the individual. For Facebook, and no doubt other
tech companies relying on advertising income based on page views, this is
defined as what people like (or ‘Like’ in Facebook’s case). In a
healthy, pluralist society we shouldn’t only be exposed to things we
like. Being a citizen in a democracy is a job for grown-ups, and
important information is not always as immediately palatable as videos
of cats on skateboards are. And what of serendipity, of finding
interesting content from a novel source? Filters strip this away, in a
manner which is entirely purposeful and almost always invisible.
The purpose behind these filters leads us to the second issue.
Alongside their interest in keeping users satisfied, tech companies
have, of course, their own commercial interests which at times may
conflict with those of the user. Google, in a case
that has been rumbling on since 2010, is facing sanction from the
European Commission (EC) for altering the all-important ranking of
websites so its own businesses appear at the top of searches. The
ability to present information in a manner which favours the company at
the expense of others – whether businesses or individuals – is available
to any tech company which provides content to users.
As the tech sector matures, we may increasingly also see political interests shaping their actions. The ongoing ‘net neutrality’
battle in the US has brought to light that one of the biggest potential
beneficiaries of abandoning neutrality – the Internet Service Provider
Comcast – spent more on lobbying
last year ($18m) than any other company except the arms manufacturer
Northrop Grumman. In the Facebook controversy some critics have already
raised the prospect of such filters being used to alter users’ emotional
states during an election, in order to affect the outcome. Even the
small effects described in the study could have a huge impact given
Facebook’s reach, as the study itself acknowledges: “an effect size of d
= 0.001 at Facebook’s scale is not negligible: In early 2013, this
would have corresponded to hundreds of thousands of emotion expressions
in status updates per day.”
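To see why an effect this small still adds up, here is a rough back-of-envelope sketch in Python. The only figure taken from the study is d = 0.001; the daily post volume and the per-post variability in emotional words are purely illustrative assumptions, not Facebook’s actual numbers.

```python
# Back-of-envelope illustration: a tiny per-post effect, multiplied across an
# enormous platform, becomes a large absolute number of affected expressions.
# Only d = 0.001 comes from the study; the other figures are assumptions.

effect_size_d = 0.001          # standardised effect size reported by the study

# Illustrative assumptions (NOT Facebook's real figures):
posts_per_day = 400_000_000    # assumed number of status updates posted per day
sd_emotional_words = 1.0       # assumed std. dev. of emotional words per post

# A shift of d standard deviations per post, summed over every post that day:
extra_expressions_per_day = effect_size_d * sd_emotional_words * posts_per_day

print(f"~{extra_expressions_per_day:,.0f} additional emotional expressions per day")
# With these illustrative inputs the total comes to ~400,000 a day – the same
# order of magnitude ("hundreds of thousands") that the study itself cites.
```

Whatever the exact inputs, the logic is the same: at this scale even a vanishingly small per-person effect translates into a very large absolute one.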
There was actually no need, in this case, to use such a poor
approximation of informed consent. We shouldn’t though let that
complaint obscure the bigger story here. It may have been by accident,
but Facebook have, I hope, triggered a vital public debate about the
role of algorithms in shaping our online lives.
As societies, we are currently a long way behind the curve in dealing
with the possibilities for information manipulation that the internet
offers. Of course information manipulation is as old as information
itself, but users of a news website know that they are receiving a
curated selection of information. We do not yet have such expectations
about Google searches or the updates of our friends that Facebook makes
available to us. We must then begin to think about how we can ensure
such powers are not abused, and not rely just on one-off cases such as
Google’s battle with the EC. The challenge of balancing public interest
and commercial secrecy promises to result in a long battle, so it’s one
that needs to begin now.
In my view, Facebook’s mistake was not in conducting such work, but
in reporting it as a study of human behaviour, rather than of tech
companies’ influence over us. Ethics are not set in stone, and must
always be balanced with what is in the public interest. If there is
sufficient benefit for society as a whole, it may be considered
justifiable to transgress some individuals’ rights (as is the case, for
example, when the news media reports on a politician committing
adultery). As such, it could be argued that Facebook’s study was
actually ethical. For this to be the case though, Facebook would need to
show an understanding of what the public interest actually is in this
case.
The UK’s most ambitious infrastructure project is in trouble.
Criticism of the High Speed Two rail network has come from left and
right of the political spectrum, with both the New Economics Foundation and the Institute of Economic Affairs challenging the government. The project’s costs are rising all the time: they now stand at around £40 billion while the economic benefits have been continually revised downwards.
The crux of the government’s problem is the need to find a
justification for such a huge investment. Ultimately, this is really a
question about what Britain will look like over the next century and
what this implies for mobility.
Even if everything goes to plan, the first phase of HS2, between
London and Birmingham, will not open until 2026. Since the Treasury
assesses such schemes over a 60-year period of operation, justifying HS2
requires the government to produce a vision of the UK in the late 21st
century. When we look at the flaws and gaps in this vision, we have to
ask whether HS2 is a sensible investment.
Don’t look back
The government is failing to win the case for HS2 because its vision
has not been sufficiently compelling or credible to persuade its
critics. The Department for Transport has attempted to create a vision of
the UK’s future through a process of modelling: taking selected
observed trends from recent history and rolling them forward.
The most important of these is the demand for train travel. In recent
decades this has consistently risen, leading the DfT to declare in its economic case for HS2 that there will be a continuing growth in long distance rail travel.
The past, however, is often a very poor basis for creating visions of
the future. Consider the very same data the DfT uses to identify rising
rail demand. Until 2005 domestic air travel (against which HS2 would
compete) also grew consistently. Past trends would suggest no slowdown,
but it actually fell steeply in subsequent years. The extra security
checks introduced in response to terrorism fears made it less
competitive relative to other options, and demand fell accordingly.
Travelling trends
An alternative future
The impact of increased security reminds us that what determines
travel demand is the immensely complex outcome of intersecting social,
political, technical and economic processes. Many potential developments
over the decades to come could drastically change the case for HS2.
Some of these are already here. One major line of attack on the case
for HS2 has focussed on its assumption that train travel is economically
unproductive – that people do not work on trains. While this view
might have been credible 20 years ago, technological advances, such as
laptops and Wi-Fi, mean that a train carriage today functions as a
mobile office. The Institute of Directors recently surveyed its members on this issue and found that only 6% do not work when travelling by train.
The same class of technology has the potential to change life
radically in the coming years. Indeed, just such a vision is being
pursued by the government as it funds research into the digital economy.
In the digital economy future, UK economic and social activity will
have continued to move online. Interaction with others is increasingly
virtual. People do not travel to occupy the same physical space but use
technologies like video conferencing, and, further ahead, advanced
virtual reality. Work becomes ever more distributed as the convenience
and low cost of digital communications comes to outweigh the value of
physical proximity. Manufacturing, too, becomes increasingly localised,
as technologies like 3D printing and computer-aided manufacturing reduce
the requirements for concentration to achieve economies of scale.
None of these developments are particularly “out there”. They are
based on an analysis of current trends, in the same way that the DfT has
looked into travel demand. As with growing demand for trains, none of
these outcomes is inevitable, and neither are they a zero-sum game – we
might adopt all these practices and still find reasons to travel long
distances in ever increasing numbers. If we do travel though, it is
difficult to envisage how technology will not continue to blur the lines
between stationary and mobile activity.
Today smartphones, 3G, tablets and laptops enable us to consume and
produce information and entertainment on the move. Just ten years ago,
most of these activities could only be done at home or in the office. If
this process continues, the actual time spent travelling could become
increasingly irrelevant, because what we do during that journey will be
largely indistinguishable from what precedes and follows it. In such a
situation, the logic of investing billions to shave 35 minutes off the
journey from London to Birmingham becomes highly questionable.
The HS2 project hangs in the balance. Unless the government can
produce more convincing visions about the mobility that will be required
by the UK in the late 21st century, it has no hope of convincing the
public that its money really is best spent on high speed trains.
I witnessed something rather remarkable at the Planet Under Pressure (PUP) conference in London last week. PUP was a huge event: some 3000 scientists, of both natural and social stripes, assembling to present and discuss the latest science on climate change. There were in fact several aspects which could be called remarkable, such as the emerging consensus that we have entered a new epoch, the Anthropocene, in which human activities are a dominant driver in many global systems. Also remarkable is how bleak the future looks from current climate modelling, with emissions on track to cause 4 degrees of warming or more, at which point any number of positive (destructive) feedback loops could kick in, leading to runaway change.
From a social science perspective, the organisation of the conference was itself remarkable, with its overt focus on influencing a political process, namely the Rio+20 UN summit this June. To this end, a 'state of the planet' declaration was worked on throughout the conference and read out at its closing. This was not the scene I want to talk about, but in raising the question of where science should begin and where it should end, and what 'engagement' really means, it's a relevant framing.
The scene in question happened during the plenary sessions on the first morning of the conference. A panel of speakers were on stage to discuss 'The Planet in 2050'. One of them was a fellow called Martin Haigh from the oil company Shell, and shortly after he began speaking two protesters (from the group London Rising Tide) slipped on to the stage and unfurled a banner depicting the Shell logo as a human skull.
A burst of applause quickly gathered from a sizeable portion of the audience (myself included), before a wonderfully English scene played out in which the slightly shaken host politely asked the pair to leave and they quietly did so, flanked by flustered-looking men in suits.
Why this 30 seconds stuck in my mind was that one of the dominant themes of the conference was that politics at the national and supra-national level has failed to address climate change. The only real source of optimism is to be found at the city and community level, where activists are building the capacity to begin challenging the destructive status quo. Lord Anthony Giddens argued this in his plenary talk just an hour before the panel took to the stage, and it was a claim I heard repeated at several points during the following four days. What I didn't see during those four days, and what made the Shell protest remarkable, were any other activists. I did see panels featuring representatives from a couple of other corporations - the insurance giant Aviva and Sainsbury's - but the grassroots were nowhere to be seen.
I find this rather troubling. Alongside the missing grassroots was a missing question that the open recognition of political failure demanded be asked: why has politics at the national level failed? Beyond a few muttered comments no one seemed to want to talk about this. When panel members were asked similar questions by delegates there was generally embarrassed silence followed by a swift move on to something else. The lowest points in this discourse of omission were when speakers lamented the turn of publics, particularly in the US, towards climate change denial, as if this was simply a process sui generis, with no larger structural forces behind it.
Answering the Missing Question
In depth, the answer to this missing question could be terrifyingly complex, incorporating any number of actors, driven by diverse ideologies, economics, technologies and institutional cultures, but it can be distilled down to a very simple answer: that many of the holders of power in the current paradigm feel threatened by what a systemic shift could entail, and are expending resources accordingly. The greatest single impediment to meaningful progress on climate change right now is the political toxicity of the issue in the US. This impasse is not the product of some organic bottom-up movement (as a couple of conference panellists seemed to imply) but an orchestrated effort by elites such as the Koch brothers, who have been major funders of the Tea Party movement, both directly in financial terms, but also indirectly in ideological terms through their funding for climate skeptic and 'free market' think tanks.
There was no doubt amongst the conference attendees about the lethality of the precipice that society is blithely marching off right now. It was also explicit in the organisation of the conference that science can no longer simply limit itself to generating knowledge - it must also concern itself with ensuring that this knowledge is acted upon, through engagement with wider society.
Putting these elements together, it seemed clear to me that many scientists are convinced by their data that a paradigm shift in the socio-economic organisation of society is required at this point. The current system is simply unsustainable. What wasn't clear to me was a willingness on the part of many to actually start thinking through (at least openly) what such a shift means for engagement. Is focusing on the defenders of the status quo really the best way of instigating change? As a strategy it doesn't seem to have got us far - carbon emissions rose at their fastest rate ever last year. Shouldn't we give a little more attention to those who do share our goals, and invite them on to the stage rather than ask them to leave it?
Ultimately, this is a question of power. Powerful opponents of change were invited on to the stage at PUP, whilst powerless proponents of change were not. Engagement with powerful actors is of course vital if paradigm change is to be achieved, but that engagement will not achieve anything if it is naively uncritical. Shell's appearance is a powerful example of this. No matter how many photos of wind turbines are placed on their website, they remain a company whose balance sheet relies on their holdings of billions (trillions?) of dollars of oil. How such an organisation can be recruited as an agent of transition to a carbon-free future is somewhat beyond me.
By no means was PUP a one-note song: there were many at the conference who were thinking through these issues, and going beyond simply a desire for change to think about what it might mean in practice, both for society at large, and scientists themselves. Perhaps unsurprisingly, these seemed mainly to be the social science delegates (after all, for many of us it's our day job), but there were notable exceptions like that of Anne Glover, the biologist and chief scientific advisor to the European Commission, who spoke passionately on some of these issues. My hope is that by the time of the next PUP, such awareness is more apparent amongst the conference organisers too. They achieved a great many successes during the week, but they failed to create a space conducive to the radical thinking the climate science is demanding. Perhaps in a setting in which Occupy are as visible as Big Oil, the delegates will feel better able to speak openly on what engagement entails when the goal is paradigm change.