How do you decide who to trust with your data? Professor Robin Mansell from LSE on why protecting privacy online is so difficult
As part of an international campaign to lift the lid on data privacy violations, The Privacy Collective is asking some of the UK’s leading experts why online privacy matters.
Robin Mansell is Professor of New Media and the Internet at the London School of Economics and Political Science (LSE). She is also past chair of the International Association for Media and Communication Research, and a board member and secretary of TPRC, the research conference on communications, information and internet policy. Here she discusses why economic incentives are needed to encourage more responsible handling of citizens’ data, how technology platforms have been allowed to grow as large as they have, and why the internet has been a runaway experiment.
Why does online privacy matter?
All technologies, including the internet and its design, have society’s values embedded in them. And as long as we privilege the profit motive and the commercialisation model, it comes as no surprise that the internet has continued to evolve in that direction. As these companies have grown, there’s now much more attention being paid to the importance of privacy and the potential harm being done by the surveillance models that have been introduced.
You never know that you’re in trouble until something actually happens. So some people are just fine. But a large number will find themselves in difficult situations, and it’s not clear where they should turn when that happens. There are very few ways in which we’ve invested in redress systems, other than through the tortuous process of the courts. The kinds of intermediary organisations that would be there in the offline world aren’t there, at least not on the scale that’s needed. It’s still very small scale and often an afterthought.
Can you tell me a bit about your research at LSE and your interest in the field of data privacy?
I teach courses on disruptive digital worlds, which address all aspects of innovation in new technologies, especially the social, economic and political implications. Within that, issues of surveillance and privacy are crucial – how do you create incentives that encourage the private sector and government to behave more responsibly around the use of citizens’ data? And in that sense, I’m interested in the economic incentives but also in the governance and political issues.
How do you think laws like the introduction of the General Data Protection Regulation (GDPR) have impacted how companies are handling data, and how the public understands online privacy?
There is a higher awareness of data privacy and I think, amongst some segments of the population, there’s more of a propensity to ask questions. But in general, I think it’s a confusing situation for the public – they’re told to trust companies with their data, but then they hear about all of these lapses in data protection, covered by the media.
The issues there are not just technical issues and economic issues, they’re also to do with the implementation of legislation. As soon as you introduce one piece of legislation, there will be elements of the corporate world that will start looking for work-arounds. The GDPR is about changing the whole culture, the ways in which the corporates and the public sector are supposed to think about data. And I think the biggest deficit there is in training – not just around technical issues, but training to be responsible guardians of the kinds of data being collected.
You recently co-authored a book about how artificially intelligent platforms are collecting and processing data – what inspired you to write the book?
We started with this question of why do these platforms grow as big as they grow? Is that growth inevitable? Why have governments just let them grow without intervening until now? One reason is that most governments are interested in a kind of technology race, and so the Americans have pushed and allowed their Silicon Valley companies to dominate the global market. We also talked about totally different business models, because the fact that you give your data for free to platforms so that they can nudge you into buying more things from their clients is not the only business model in the world. There are collaborative, collective models, but very little investment has been put into them. And so no one really knows whether they could be sustainable or not.
Is public outrage what’s needed to make those alternatives a reality?
I think it’s part of the story. But collective action like that is usually stop and go. We see it in the environmental movement. It has an impact, certainly. But I think there needs to be a bigger impact whereby the institutions, whether they’re the courts or whether they’re regulatory agencies, actually get the message that they need to shape the behaviour of these data-collecting companies. They need to create the incentives where it makes economic sense for platforms to do business differently.
Do you think the pandemic has changed how people are thinking about surveillance and privacy? Are we prepared to accept more intrusion than pre-Coronavirus?
I think we might have been. But the trust that people might have been willing to put in government has been completely broken as a result of the A-level fiasco and the track and trace system. Why should people respect the notion that they should be monitored if monitoring leads to nothing? More illness, fewer kids in schools, people self-isolating because they simply do not know whether they have the virus or not. Once you lose trust, getting people to believe in a system that introduces more extensive and integrated data collection activities is difficult. That said, and this has been true for the last decade or more, people express concerns about their data privacy in surveys but will then go and use these apps or platforms without thinking about the consequences.
What are some of the consequences of sharing this sort of data in the longer term?
One thing to bear in mind is that the actual empirical evidence on whether or not sharing this data affects outcomes such as voting behaviour is ambiguous. There are some people who say absolutely it does, and other people who say no it doesn’t. But I think what is more concerning is the general way in which the proliferation of that kind of information changes the whole sense of society and public discourse. The notion of what’s “good behaviour” and “good speech” in a democracy starts to change and become normalised. I certainly see this happening in the United States – it’s normal for politicians to be uncivil, and it’s therefore normal for people who follow them to be uncivil.
That is problematic but it isn’t really to do with technology, in my view. It’s more about which behaviours we find acceptable in society, and which we don’t. I think that’s gradually changing as people become more used to a really fractured populism, which is problematic. The fact that we have as much information, misinformation or disinformation as we have is a symptom of those changing values and the changing notion of what our culture should be about and how to be civil to each other.
You’ve described the internet as a ‘runaway experiment’. What do you mean by that, and what is needed to bring it back in line?
I think the big question now is where will the investment come to develop new ways of doing things? My hunch is that if businesses do get the message, if they start competing on whether or not they protect people’s privacy, then we might be on a different pathway in the future. But they can’t just treat a fine from the Information Commissioner’s Office as a cost of doing business. That’s no longer viable. On the regulatory side, the oversight of the behaviours of these platforms needs to be independent, rather than an arm of the state. We’ll never get it perfect. But it seems to me that if you invest in those kinds of institutions that have that responsibility and mandate to think about a variety of interests, including those of citizens, then you have at least a chance of shaping the online world in new ways.
Your data should not be for sale. We’re taking Oracle and Salesforce to court for illegally selling millions of people’s data, and we need your help! If you believe that tech giants should be held accountable for their use of people’s data, please support our claim by “liking” our support button at the top of this page.
We’re fighting for change, because your privacy matters.