Facebook came in for considerable criticism recently when they revealed that, over a one-week period in 2012, they had conducted a study on 689,000 users. By filtering the status updates that appeared on individuals’ news feeds, the study set out to measure “emotional contagion”, a rather hyperbolic term for peer effects on the emotional state of individuals. The study concludes that Facebook users more exposed to positive updates from contacts in turn posted more positive updates, and fewer negative ones, themselves. The result for those exposed to more negative updates was the opposite.
Some of the claims made by the study are open to challenge, but here
I’m interested in the response to it. I’ll go on to argue that we should
actually be grateful to Facebook for this study.
Criticism revolved around two interrelated themes: manipulation and
consent. On the first, Labour MP Jim Sheridan, a member of the Commons media select committee, declared that “people are being thought-controlled”, and several news stories called it “creepy”. The second response, as reported here, was led by academics highlighting the clear differences between the ethical requirements they must meet when conducting research and those the Facebook study adopted.
Without informed consent, it’s difficult not to see an attempt at
mass emotional manipulation as creepy, so it is highly problematic that
the study’s claim for consent is so weak. It states “Facebook’s Data Use
Policy, to which all users agree prior to creating an account on
Facebook, [constitutes] informed consent for this research”. This is
simply nonsense. Even if we pretend every user actually read the Data Use Policy
rather than just clicked past it (and how many of us ever read the
Terms and Conditions?), this should have happened shortly before the
study – rather than potentially as long ago as 2005 when Facebook first
launched. Further stretching the definition of “informed” is the fact
that the key sentence in the Policy comes at the bottom of a long list
of things Facebook “may” use your data for. This solitary sentence –
“internal operations, including troubleshooting, data analysis, testing,
research and service improvement” – embedded in a 9,091-word document,
may legally constitute consent, but from an ethical standpoint it
certainly isn’t informed.
It is perhaps surprising, having said all this, that I think Facebook
should be congratulated on this study. What they have managed to do is
draw back the curtain on the increasingly huge impact that tech
companies’ algorithms have on our lives. The surprising, and rather worrying, thing is that Facebook has done this inadvertently, whilst reporting on a study of emotional contagion. The really important ‘contagion’ revealed by this work is that of invisible filters and rankings in structuring our access to information online. This is worrying because Facebook’s failure to see the controversy coming suggests they are as blind to the social consequences of these processes as most of the public have been.
Despite the accusations of “creepy” manipulation, the only thing that
is unique about what Facebook did in this experiment is that, in reporting it, they publicly admitted that the processing they carried out was not done in the “interests” (as defined by Facebook) of the
individuals involved. There are two issues here. The first concerns what is
in the interests of the individual. For Facebook, and no doubt other
tech companies relying on advertising income based on page views, this is
defined as what people like (or ‘Like’ in Facebook’s case). In a
healthy, pluralist society we shouldn’t only be exposed to things we
like. Being a citizen in a democracy is a job for grown-ups, and
important information is not always as immediately palatable as videos
of cats on skateboards are. And what of serendipity, of finding
interesting content from a novel source? Filters strip this away, in a
manner which is entirely purposeful and almost always invisible.
The purpose behind these filters leads us to the second issue.
Alongside their interest in keeping users satisfied, tech companies
have, of course, their own commercial interests which at times may
conflict with those of the user. Google, in a case
that has been rumbling on since 2010, is facing sanction from the
European Commission (EC) for altering the all-important ranking of
websites so its own businesses appear at the top of searches. The
ability to present information in a manner which favours the company at
the expense of others – whether businesses or individuals – is available
to any tech company which provides content to users.
As the tech sector matures, we may increasingly also see political interests shaping these companies’ actions. The ongoing ‘net neutrality’
battle in the US has brought to light that one of the biggest potential
beneficiaries of abandoning neutrality – the Internet Service Provider
Comcast – spent more on lobbying
last year ($18m) than any other company except the arms manufacturer
Northrop Grumman. In the Facebook controversy some critics have already
raised the prospect of such filters being used to alter users’ emotional
states during an election, in order to affect the outcome. Even the
small effects described in the study could have a huge impact given
Facebook’s reach, as the study itself acknowledges: “an effect size of d
= 0.001 at Facebook’s scale is not negligible: In early 2013, this
would have corresponded to hundreds of thousands of emotion expressions
in status updates per day.”
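To make that scale argument concrete, here is a minimal back-of-envelope sketch in Python. The daily update volume and per-update standard deviation it uses are hypothetical assumptions for illustration, not figures from the study; the point is simply that multiplying a tiny standardised shift per update by Facebook’s daily volume produces a large absolute number.

```python
# Back-of-envelope sketch (illustrative only): how a tiny standardised effect
# size can add up to large absolute numbers at Facebook's scale.
# The volume and standard deviation below are hypothetical assumptions,
# not figures taken from the study.

def absolute_daily_shift(daily_updates, sd_per_update, effect_size_d):
    """Cohen's d times the standard deviation approximates the mean shift per
    update; multiplying by daily volume gives a rough total daily shift."""
    return daily_updates * (effect_size_d * sd_per_update)

# Hypothetical inputs: ~500 million status updates a day and a standard
# deviation of roughly one emotional expression per update.
print(absolute_daily_shift(daily_updates=5e8,
                           sd_per_update=1.0,
                           effect_size_d=0.001))  # -> 500000.0 per day
```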
There was actually no need, in this case, to use such a poor
approximation of informed consent. We shouldn’t, though, let that
complaint obscure the bigger story here. It may have been by accident,
but Facebook have, I hope, triggered a vital public debate about the
role of algorithms in shaping our online lives.
As societies, we are currently a long way behind the curve in dealing
with the possibilities for information manipulation that the internet
offers. Of course information manipulation is as old as information
itself, but users of a news website know that they are receiving a
curated selection of information. We do not yet have such expectations
about Google searches or the updates of our friends that Facebook makes
available to us. We must then begin to think about how we can ensure
such powers are not abused, and not rely just on one-off cases such as
Google’s battle with the EC. The challenge of balancing public interest
and commercial secrecy promises to result in a long battle, so it’s one
that needs to begin now.
In my view, Facebook’s mistake was not in conducting such work, but
in reporting it as a study of human behaviour, rather than of tech
companies’ influence over us. Ethics are not set in stone, and must
always be balanced with what is in the public interest. If there is
sufficient benefit for society as a whole, it may be considered
justifiable to transgress some individuals’ rights (as is the case, for
example, when the news media reports on a politician committing
adultery). As such, it could be argued that Facebook’s study was actually ethical. For this to be the case, though, Facebook would need to show an understanding of what the public interest actually is here.