Dominant technology companies are shaping how we see the world, says Deborah Brown from Human Rights Watch

November 12, 2020

As part of an international campaign to lift the lid on data privacy violations, The Privacy Collective is asking some of Europe’s leading experts why online privacy matters. 

Based in New York, Deborah Brown is a senior researcher and advocate on digital rights at Human Rights Watch, an international non-governmental organisation that conducts research and advocacy on human rights. Her areas of focus include the role of digital technologies in electoral processes, cybersecurity and digital exclusion. She tells The Privacy Collective about how our information ecosystem has become polarised by big technology companies, why location tracking is a big concern, and why governments need to do a better job of safeguarding privacy.

Deborah Brown - Human Rights Watch

Why does online privacy matter? 

Privacy is a fundamental right and some of the most pressing threats to privacy today occur through the use of technology, but not necessarily when we’re online. There’s a danger that talking about online privacy creates a false dichotomy between offline and online. In addition to pervasive online tracking, the use of digital technologies invades our privacy when we’re going about our daily lives, but not actively using the internet. That includes facial recognition technology, and the ability to geo-locate mobile phones. 

It’s perhaps more helpful to think about privacy as something that’s mediated by technology. Aside from being a right, privacy is really how we create the conditions to live in dignity and protect our autonomy in this world. It’s how we control who has access to our lives, whether that means controlling how much information you’re sharing with the government, with society, with your employer, or with your family. It’s really critical that we think about how we manage those relationships and the way they’re mediated through digital technologies. 

Can we talk a bit about the surveillance business model and the consequences that it has on how content is shared?

When we talk about the surveillance-based business model, we’re often talking about the social media platforms of this world (Facebook, Twitter, etc). The issue is that we rely on these platforms as our so-called ‘public squares’ – a place to discuss matters of public interest, but also to connect with family and friends. 

The incentive structure that the platforms are built on is to maximise engagement – the number of clicks, shares, likes, and time spent on the platform. And many studies have shown that the most engaging content is often false information, and content that incites strong emotions. The most extreme content therefore gets pushed to the top of the feed and recommended to users. That has really polarised our information ecosystem and has led to real harm in different parts of the world.

Recently in the US there have been measures that the social platforms have rolled out to protect the integrity of the election, and to their credit some have really put a lot of thought into this. Following pressure, Facebook has temporarily stopped recommending groups, for example, and Twitter has introduced friction, where there’s an extra step before you retweet something. So we’ve seen platforms starting to acknowledge that change is needed, and I do think that’s an important development. But the elephant in the room remains that these platforms are designed to maximise certain types of content, and it’s not reliable, accurate information. They’re trying to fix the problem by putting band-aids on it, rather than addressing the core of the issue. 

The elephant in the room remains that these platforms are designed to maximise certain types of content, and it’s not reliable, accurate information.

What impact does this concentration of power in big technology companies have on everyday users? 

When people criticise these platforms, the response is often ‘well, you don’t need to use them. If you don’t like their terms of service, go somewhere else’. But it’s really a privilege to be able to do that. In some parts of the world, the internet is Facebook. There often isn’t an alternative. 

Another issue is search – more than 92% of the world uses Google for search. If Google decides to rank certain types of content higher in their algorithm, or autocomplete a search term in a certain way, that has a real ability to influence how people think. It’s very unusual that people will search through to the tenth page (or even past the first page) to find what they’re looking for. So you have a private company that’s really shaping how people understand and organise information. 

A lot of the world relies on search results, and people don’t necessarily realise that these results differ based on where you live, or on your browsing history. As Safiya Umoja Noble argues, data discrimination in search engines leads to biased results that privilege whiteness and discriminate against people of colour, specifically women of colour. And yes, there are alternative search engines, but are they as useful, and do people know about them, especially when Google is the default search engine on most devices? 

What changes would you like to see to hold platforms accountable? What might an alternative business model look like?  

Self-regulation can only go so far, and it puts a lot of hope on companies to do the right thing. So we do need some form of oversight. I think one of the difficulties around imagining an alternative model is the information asymmetry we’re facing right now. We simply don’t understand enough about how recommendation and curation algorithms work within these platforms. And we can’t just rely on the companies to voluntarily offer that data.

As well as more transparency, it’s also about giving people on the platform more choice: either to influence the algorithms shaping their experience – to say ‘I want to optimise for this type of content but not that kind’ – or to opt out entirely, so they can use the service without that influence shaping their experience. 

Another idea is around data portability and interoperability, which means that rather than locking people into these platforms, they can bring their data with them and still interact with their networks. Much like how email and phone providers work – I can email or call anyone, regardless of which service they use. There are lots of details to work out on how to do this, but the idea is once people have more choice and aren’t locked into these platforms, more networks will grow. 

Have you been concerned by some of the digital surveillance being introduced during the Covid-19 pandemic?

Very concerned. We’ve seen governments who were already using data and technology to restrict human rights continue to do so. We’ve also seen some efforts, I think in good faith, to use technology to contain the pandemic. But a lot of that was wishful thinking because the technology being rolled out was untested and people were being asked to trade their privacy and other human rights for the possibility of a public health benefit. 

Any effort involving location tracking, for example, raises a red flag, because your location can provide a lot of insight into your life – who you spend your time with, your religious or political beliefs, etc. The fact that governments were using location data to enforce quarantines or social distancing measures, and to do contact tracing, has been a huge concern. And with nearly half of the world’s population not online, these measures can be quite exclusionary, leaving out people who are most in need of support during the pandemic. 

There are also issues around the efficacy of all of this. If governments are creating new technology and putting money into it, why aren’t they putting that towards tried and trusted efforts, like traditional contact tracing or testing, and support for people who might be unemployed during this period? There’s this notion that if we rely on magical technology, we’ll somehow be able to solve this thing. And even if that’s well intentioned, it can have a detrimental and long-lasting impact on a range of rights.

What can people do to educate themselves and protect their online data today?

There are a lot of good resources out there – Tactical Tech and the Electronic Frontier Foundation, for example, are good places to start. And there’s a lot that can be done individually to secure our devices, to practice better digital security, and to boost our own digital literacy. On the other hand, I do think it’s asking too much for people to have to take elaborate steps in order to secure something that’s their fundamental right.

There are also a lot of people who aren’t even online yet and are already affected by surveillance. Some of the most marginalised people in society, for example refugees or people who rely on public subsistence programmes, are subject to surveillance and forced to provide lots of personal data when they’re not even online themselves. Governments need to do a better job of safeguarding people’s privacy and restricting certain exploitative or abusive behaviours by companies. 

Your data should not be for sale. We’re taking Oracle and Salesforce to court for illegally selling millions of people’s data, and we need your help! If you believe that tech giants should be held accountable for their use of people’s data, please support our claim here. We’re fighting for change, because your privacy matters. 
