It’s naive to think we’re not being manipulated online, says Mozilla Fellow Harriet Kingaby
As part of an international campaign to lift the lid on data privacy violations, The Privacy Collective is asking some of Europe’s leading experts why online privacy matters.
Harriet Kingaby is an award-winning campaigner and co-founder of The Conscious Advertising Network, which has grown to more than 90 members. She’s also a Mozilla Fellow and is working with Consumers International to map the opportunities and impacts of AI-enhanced advertising. Here she discusses the need to consider factors other than profit when enhancing technology, the possibility of effective advertising without personal data, and the responsibility of brands to change the system.
Why does online privacy matter?
Like a lot of people with privilege, I used to think that online privacy really wasn’t a problem. As someone who has worked in advertising, the idea of receiving more tailored ads when I surfed the web was absolutely fine by me.
But that is obviously a very naive view of the transfer of data that happens when we’re online. And it comes from a place of not having to worry about the state getting hold of data about me and what they might do with it. It comes from a place of not having to protect special characteristics about myself that might expose me to discrimination. And it comes from a place of being in Europe and having the General Data Protection Regulation (GDPR), which protects that transfer of data. So privacy is so important because not only is it a human right enshrined within the UN’s Universal Declaration of Human Rights, but it’s also essential for us to be able to protect ourselves, and to avoid being profiled and manipulated.
As human beings, we love to think that we are self-determining, autonomous creatures. But when our online experience is being tailored and optimised not only to keep us there, but also to serve us the kind of information it thinks we might want, there are huge consequences. This isn’t just about receiving ads for theatre recommendations; it’s about what this means for our information environment. And as we start to embed artificial intelligence (AI) within the system, which is data-hungry in itself, online privacy is one of the key rights that needs to be protected.
Can you tell me a bit about your work as a Mozilla Fellow and your research into AI-enhanced advertising?
We ran a global study, looking at various case studies of artificial intelligence being embedded into advertising. Specifically, we looked at two different types of technologies – machine learning, which is already being used quite prolifically, and facial and emotional recognition, which we’re starting to see being incorporated. We identified seven major harms: everything from harm to vulnerable people who are not yet online, through to facial recognition startups relying on old, debunked science which suggests that you can tell someone’s personality type or sexual orientation from their face. There are also concerns around just how much energy this system will consume, because AI is extremely energy intensive.
What we found was that data protection legislation, and the enforcement of that legislation, is the biggest thing that a country (or a group of countries) can do to protect consumers. But as well as legislation, you need cross-disciplinary forums where human rights specialists, advertisers and the voices of affected communities work together to solve these kinds of issues. There’s often a tendency to prioritise economic measures or profit over everything else, because they are easily measurable. We have jobs and livelihoods because of the economy, but when organisations look only at that side of things, you end up with web funding models, based on advertising, that boost misinformation and hate speech, because it generates a lot of interaction, and therefore revenue.
Tell me about the Conscious Advertising Network (CAN) – how can advertisers use their power to change the system?
Advertising funds everything on the internet, from great journalism that speaks truth to power right through to cat videos. At CAN we believe that with that great power should come great responsibility (I had to get a Spider-Man quote in somewhere!). The global digital advertising market is huge – in 2019 it was worth $330 billion. That’s a lot of money. So if we can make sure that it’s spent better, then we can have a real impact.
The World Health Organization recently declared that we’re living through a pandemic of misinformation – an infodemic. At a time when we’ve got a global pandemic, we need to make sure that people are informed, that they get vaccinated when that is available, that people understand public health messages, but much of that information is getting lost among all of the rubbish out there.
It’s really important that we change the way advertising is bought and sold, so that it’s not funding hate and misinformation. Advertisers need to be able to take more responsibility for their supply chains. In the same way that it’s no longer acceptable for shoe manufacturers to have child labour in their supply chains, it should no longer be acceptable for advertisers to have hate speech and disinformation in their advertising supply chains. But the current real-time bidding process, for example, is flawed. Advertisers really don’t get a lot of transparency from their vendors, so they don’t know where their cash is going. That needs to change.
What has the response been like from the advertising industry? Is there a recognition of the need to evolve?
Almost half (48%) of 16-34-year-olds globally use ad-blocking software, and in the UK it’s still around 20%. Obviously, something’s not working. This doesn’t just affect advertisers; it affects publications. If I’m going to my favourite news website and I have an ad blocker on, that news website isn’t getting any revenue. It doesn’t work for me as the consumer, whose user experience has been degraded to the point where I have to download something to protect myself online. It doesn’t work for the publisher, who is supposed to be receiving some sort of funding. And it doesn’t work for the advertisers who are chucking money into this big system to try and reach people.
There’s also a necessary mindset shift: it’s possible to reach audiences without having to know every bit of available information about them. What’s positive is that those shifts are happening, and advertisers are starting to demand transparency. At CAN, our argument is that we don’t need to wait for legislation to make a change, or limit ourselves to compliance. By leading this change, brands can set an example and really change the way the public feels and thinks about them.
You’re working on a project to champion a “cookie free world”. Is there anything you can tell us about this vision for the future? What changes are needed to redesign this system?
If Google Chrome does phase out third-party cookies, then the current ad tech model is basically finished. So we’ve got to think about alternatives. We’ve got to get away from thinking ‘I just need more and more data to target advertising’, towards thinking about how we can reach people respectfully, on their own terms. We don’t need third-party cookies replaced by browser fingerprinting or other methodologies that just recreate or worsen privacy issues. We’ve got to fundamentally change that mindset.
In the same way that we have planning laws which enshrine the right to public spaces, I think we need to think about internet infrastructure in a much bigger way. There are ideas around contextual advertising models – for example, if you’re reading Attitude magazine, I can take a punt about what kind of ads you might want to see. In the same way, if you’re reading Marie Claire, I can make assumptions that are enough to serve relevant adverts. Brands also need to think about advertising money as a resource that funds a healthy internet, and take more responsibility for funding diverse, quality content. That will create a healthier system: an online environment with commercial and non-commercial spaces, where respect for the user and privacy are enshrined by design.
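The contextual model described above can be sketched in a few lines of code: ads are matched to the topic of the page being read, and no data about the individual reader is involved. The topic labels and ad categories below are illustrative assumptions, not any real publication’s or ad platform’s taxonomy.

```python
# Toy sketch of contextual ad selection. Targeting keys off the page's
# content, not the person reading it - so no personal data is collected.
# All topic names and ad categories here are hypothetical examples.

CONTEXT_TO_ADS = {
    "lgbtq_lifestyle": ["pride events", "travel", "fashion"],
    "womens_lifestyle": ["beauty", "fashion", "wellness"],
    "news": ["books", "streaming", "finance"],
}

def pick_ads(page_topic, default_topic="news"):
    """Return ad categories for a page topic; falls back to a generic
    default when the topic is unknown. No reader profile is consulted."""
    return CONTEXT_TO_ADS.get(page_topic, CONTEXT_TO_ADS[default_topic])

print(pick_ads("womens_lifestyle"))
print(pick_ads("some_unclassified_page"))
```

The point of the sketch is the input: the only signal is the page’s topic, so the same reader sees different ads on different sites, and no cross-site profile is ever built.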
What can people do to educate themselves and protect their online data today?
On an individual level, the first thing I would say is: educate yourself about these issues. Netflix’s The Social Dilemma is flawed, but it has certainly created conversations about privacy among people I never expected to be thinking about this. Changing your browser to one that doesn’t support third-party cookies is another good step. And have a think about what you’re giving consent to. Pop-ups are really annoying, but the better-designed ones will have the option to manage those settings.
Supporting campaigns for systemic change is also hugely important. I work in climate change and it’s a similar story. We as individuals can do all we like, but we cannot physically protect ourselves online all the time. Supporting organisations, such as The Privacy Collective, that are fighting at a collective level is probably the best thing that we can do right now as individuals.
Your data should not be for sale. We’re taking Oracle and Salesforce to court for the misuse of millions of people’s data and we need your help! If you believe that tech giants should be held accountable for their use of people’s data please support our claim here. Because your privacy matters.