Stories such as these have been appearing in ever greater numbers
recently, as the technologies involved become ever more integrated into
our lives. They form part of the Internet of Things (IoT),
the embedding of sensors and internet connections into the fabric of
the world around us. Over the last year, these technologies, led by Amazon’s Alexa and Google’s Home, have begun to make their presence felt in our domestic lives, in the form of smart home devices that allow us to control everything in the house just by speaking.
We might look at stories like those above as isolated technical
errors, or fortuitous occurrences serving up justice. But behind them,
something much bigger is going on: the development of an entire class of
technologies seeking to remake the fundamentals of our everyday lives.
Breaking the social order
These technologies want to be ubiquitous, seamlessly spanning the
physical and virtual worlds, and affording us frictionless control over
all of it. The smart home promises a future in which largely hidden tech
provides us with services before we’ve even realised we want them,
using sensors to understand the world around us and navigate it on our
behalf. It’s a promise of near limitless reach, and effortless
convenience.
It’s also completely incompatible with social realities. The problem
is, our lives are full of limits, and nowhere is this better
demonstrated than in the family home, which many of these technologies
target. From the inside, these places often feel all too chaotic but
they’re actually highly ordered. This is a world full of boundaries and
hierarchies: who gets allowed into which rooms, who gets the TV remote,
who secrets are shared with, who they are hidden from.
Much of this is mundane, but if you want to see how important these kinds of systems of order are to us, consider the “breaching experiments”
of sociologist Harold Garfinkel in the 1960s. Garfinkel set out to
deliberately break the rules behind social order in order to reveal
them. Conducting the most humdrum interaction in the wrong way was shown
to elicit reactions in others that ranged from distress to outright
violence. You can try this yourself. When sat round the dinner table, try acting entirely normally save for humming loudly every time someone starts speaking, and see how long it is before someone loses their temper.
The technologies of the smart home challenge our orderings in
countless small ways. A primary limitation is their inability to
recognise boundaries we take for granted. I had my own such experience a
week ago while sitting in my front room. With the accidental slip of a
finger I streamed a (really rather sweary) YouTube video from my phone
onto my neighbour’s TV, much to the surprise of their four-year-old daughter, who was in the middle of watching Paw Patrol.
A single press of a button that can’t be disabled was literally all it took. That, and the fact that I have their Wi-Fi password on my
phone as I babysit for them from time to time. To current smart home
technology, those who share Wi-Fi networks share everything.
Of course, we do still have passwords to at least offer some crude
boundaries. And yet smart home technologies excel at creating data that
doesn’t fit into the neat, personalised boxes offered by consumer
technologies. This interpersonal data concerns groups, not individuals,
and smart technologies are currently very stupid when it comes to
managing it. Sometimes this manifests itself in humorous ways, like
parents finding “big farts”
added to their Alexa-generated shopping list. Other times it’s far more
consequential, as in the pregnant daughter story above.
In our own research into this phenomenon, my colleagues and I have
discovered an additional problem. Often, this tech makes mistakes, and
if it does so with the wrong piece of data in the wrong context, the
results could be disastrous. In one study we carried out,
a wife ended up being informed by a digital assistant that her husband
had spent his entire work day at a hotel in town. All that had really happened was that an algorithm had misinterpreted a dropped GPS signal, but
in a relationship with low trust, a suggestion of this kind could be
grounds for divorce.
Rejecting the recode
These technologies are, largely unwittingly, attempting to recode
some of the most basic patterns of our everyday lives, namely how we
live alongside those we are most intimate with. As such, their placement
in our homes as consumer products constitutes a vast social experiment.
If the experience of using them is too challenging to our existing
orderings, the likelihood is we will simply come to reject them.
This is what happened with Google Glass,
the smart glasses with a camera and heads-up display built into them.
It was just too open to transgressions of our notions of proper
behaviour. This discomfort even spawned the pejorative “Glasshole” to describe its users.
Undoubtedly, the tech giants selling these products will continue to
tweak them in the hope of avoiding similar outcomes. Yet a fundamental
challenge remains: how can technologies that sell themselves on convenience be taught the complexities and nuances of our private worlds – at least without needing us to constantly hand-hold them, entirely negating their aim of making our lives easier?
Their current approach – to ride roughshod over the social terrain of the home – is not sustainable. Unless and until the day we
have AI systems capable of comprehending human social worlds, it may be
that the smart home promised to us ends up being a lot more limited than
its backers imagine. Right now, if you’re taking part in this
experiment, the advice must be to proceed with caution, because when it
comes to social relationships, the smart home remains pretty dumb. And
be very careful not to stream things to your neighbour’s TV.
[A piece for the Sociological Imagination blog, on the subject given by the title above.]
My first experience of interdisciplinarity was genuinely exciting
to be a part of. To some degree of course the quality of the experience
was shaped by the particular focus of research, and the characters of
those on the team. But fundamentally, the work of attempting to
understand a shared problem, and enact a shared solution, was deeply
satisfying, often surprising, very difficult (usually in a good way), and
only on occasion terrifyingly overwhelming.
As the talk of ‘solution’ suggests, this was an interventionary project, tasked with achieving ‘impact’. Public Access Wi-Fi Service (PAWS) was an Internet access model by which existing domestic broadband connections could securely share a small slice of connectivity (2Mb/s)
with others living close by. In doing so it would address one barrier to
online access, that of cost (and/or credit worthiness). It was never
intended to address absences of relevant skills or positive meanings,
but previous work suggested that cost was a big enough hindrance for
enough of those categorised as ‘digitally excluded’ that it was worthwhile to tackle on its own.
At the time, and still today, this
struck me as a noble goal to pursue. We cited a UN report that spoke of
digital access as a human right, and whilst acknowledging the
limitations imposed by today’s privatised market orthodoxy, spoke of the
possibilities of a National Broadband Service. To be genuinely invested in the social value of your project is enormously beguiling, perhaps dangerously so in hindsight.
Our approach felt resolutely
socio-technical. Computer scientists would create the software which
carried this transformational potential; two sociologists (of which I
was one) would study its deployment in a real world
setting. We would do it at scale – up to 50 installations – and at the
margins – a socio-economically troubled inner city estate. This was ‘in-the-wild’
research of a kind that simply isn’t done (perhaps with good reason
given what followed). The ‘wild’ of technology deployments is often
rather tame
– it is outside the lab, but it’s a world conterminous with the white,
middle class and educated inside. By necessity of seeking out the
digitally excluded, we had to go further, venturing “across the parking
lot” (Kjeldskov & Skov 2014) and beyond.
In hindsight it is easy to
disassemble this endeavour and critique the techno-utopianism which lay
at the heart of it. That, though, is not what I want to write about,
certainly not directly, not least because PAWS still feels to me to have
been genuinely brave, and if it was flawed, it tried. The detachment of side-line critique is easy by comparison.
What I do want to write about is the
experience of doing PAWS. Judged by its starting goals, PAWS ultimately
failed. We – the sociologists – never really got to study PAWS in its
intended setting. Instead, we worked, endlessly, at embedding
it in the setting. We rarely got to step back and observe. The work of
embedding a research technology in a setting is little spoken of. Rare
exceptions include Peneff’s (1988) study of French fieldworkers carving
out the necessary agency to adapt formalised, large scale survey
instruments to localised conditions, and Tolmie et al. (2009) on
‘digital plumbing’, that is of reconciling deployed technologies with
the social worlds in which they are to be set loose. Here I want to
highlight three challenges that emerged from this work of embedding.
These are discussed in detail in our paper (Goulden et al 2016) [Open Access], where we also offer some means of resolving them. I merely introduce them here.
Problems of time:
When, as sociologists, we approached this collaboration with computer
scientists, we were aware of a long history of ethnographic work within
CS, primarily in the form of the subdiscipline of Computer-Supported
Cooperative Work (CSCW). We failed to appreciate that PAWS was different
from the canonical CSCW study, in which an existing or novel technology
is studied within an organisational setting. Perhaps the single most
important difference was this question of embedding – in the typical
CSCW study, the embedding is being done by the organisation, and the
ethnographer is there to study it. We were attempting to do both,
simultaneously. Furthermore, our setting – a marginalised inner city
estate – was significantly more socially ‘distant’ from us, as middle
class white-collar professionals, than any typical office might be. The
result of these differences was that the work was slow.
There was no prospect here of ‘quick and dirty’ ethnography of the kind which is commonplace in traditional technology-led projects.
The cadence of the work was entirely
out of kilter with that of computer science. This is a field in which
talk of iterative, “agile” development abounds, where ‘Moore’s Law’
dictates that the capacity of the underlying technology doubles every 18
months, where Mark Zuckerberg extols the mantra of “move fast and break things”. As strangers, and guests, in a foreign land, we could not afford to break anything.
It wasn’t that the computer science work was constantly ahead of us.
Rather that the development cycles of the two disciplines were rarely in
sync, which greatly complicated everything else.
Digital plumbing: In turning attention to the work of installing deployed research tech in homes and other non-lab settings, Tolmie et al. (2009) were drawing attention to how fundamentally socio-technical this work is. This was all the more so in PAWS, where the work was split cleanly between lab-based ‘technical’ labour for the technologists and real-world ‘social’ labour for the sociologists. The work of embedding the technology was therefore all our own. The task
did not appear overly complicated – plugging in additional routers in
the houses of those ‘sharing’ their signal, and installing software on
the devices of those making use of this signal. The latter commonly
threw up all kinds of errors and snags which slowed us down, but in and
of itself was rarely insurmountable.
What proved far more of a problem was the range of the Wi-Fi which underpinned the entire system. Huge amounts of additional
labour were generated by the fact that Wi-Fi signal strength was highly
unpredictable. Sometimes, due to the specific local material
circumstances – the positioning of walls, trees, inclines etcetera – it
travelled far further than anticipated. More often it didn’t come close.
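To give a sense of why this is so hard to call in advance – offered here only as a textbook illustration, not a model we used in the project – the standard log-distance path-loss model describes how received signal falls off with distance, but buries all of the interesting behaviour in an environment-dependent exponent and a random shadowing term:

$$ PL(d) = PL(d_0) + 10\, n \log_{10}\!\left(\frac{d}{d_0}\right) + X_\sigma $$

Here PL(d_0) is the loss at a reference distance, n is the path-loss exponent (roughly 2 in free space, anywhere from 3 to 6 once walls, floors and bodies intervene), and X_σ is a random shadowing term. Small shifts in n or σ produce large swings in usable range, which is one reason the same router can blanket a neighbour’s garden on one street and fail to cross a hallway on another.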
We had been caught out here not by the labour which falls between disciplines, but by the knowledge.
It turns out that real world Wi-Fi performance is a poorly understood
phenomenon, beyond perhaps very specific niches. As one of the computer scientists on the team summarised: “Radio physicists know what the answer is in theory; the lab engineers know what the answer is by simulation; computer scientists don’t care what the range is, they care what the throughput or latency is.” The greatest challenge for our
fieldwork came when this technical labour combined with the demand for
emotional labour. Peneff (1988) speaks of the means by which
fieldworkers “cope” with the many ambiguities and tensions of fieldwork,
in a setting in which they must execute a formalised task in a manner
naturalistic enough that the human participant might engage as if it was
a conversation with a trusted acquaintance. While trying to deduce why an iPad was refusing to connect to PAWS – instead complaining of an ‘Out of date security certificate’ – and simultaneously presenting the required attention and sympathy towards a participant met five minutes earlier, who was now relating her recent ordeal at the local hospital following a heart scare, it was difficult for us not to look on Peneff’s fieldworkers with envy. This simultaneous performance of emotional and
technical labour, orientating to both human and non-human, is a
challenge particular to this form of fieldwork.
Going native:
Doing interdisciplinarity means stepping outside traditional discipline
boundaries and making a commitment to meaningful engagement with what
may be very different logics of enquiry. There is a balancing act to be
done here. As social scientists we should maintain a critical appraisal
of the technological programme and its conception of the setting.
Perhaps too enamoured by the laudable goals of PAWS, we did not always
do this, becoming too close to the project’s “technical boosterism” (Savage 2015).
Within PAWS this was realised in how
our original plan constituted its participants. During these initial
stages, the greatest concern amongst the project team was that PAWS
might fail to find enough residents willing to act as sharers. It was
easy to adopt the computer scientists’ concerns that the notion of
sharing a resource with strangers would be rejected by many, or that
security fears might prove insurmountable. Those using the system were
less of a concern: it was thought that the combination of free access to
the Internet and a £50 voucher for participating in the research would
be sufficiently compelling for those with limited resources.
In hindsight it became clear that in
buying into PAWS’ technological programme we had been insufficiently
sensitive to the social orientations of those we were seeking out. We
were appraising the project through the eyes of the technologists, not the members of the setting. Those using the system were liable to be
amongst the most marginalised of a marginalised community. The
implications of this for the door-to-door recruitment we conducted are
made clear in McKenzie’s (2015) ethnography of life on inner city
estates (actually conducted on another Nottingham estate just 3 miles
away from ours). She writes:
it was actually very impolite to
turn up unannounced. This practice was always about risk management –
there was a lot of fear and suspicion on the estate, fear of the
unannounced visitor, which meant the police, the ‘social’, the TV
licensing people. It always meant problems, and doors would not be
opened if they didn’t know who was on the other side of it. (p. 89)
Our experience of going door-to-door
seemed to support McKenzie’s account: potential users of the system were
hard to find, and at many properties the door was never answered, despite our knocking on more than one occasion, often when it was clear someone was home. The result was that we never recruited anything like as many users as we hoped for, and this was ultimately where the project failed to achieve its original goals.

—–
Where PAWS succeeded was in
demonstrating some of the challenges to be overcome if we are to become
serious about doing ‘in the wild’ research. In turning increasingly
towards applied, technology-led research, directed towards specific
‘social problems’, we overlook at our peril the work of embedding, both as a task in itself, and in what it implies for interdisciplinary collaboration.