New technologies have given us a wealth of opportunities. They have made our world smaller and allowed us to spend more time with our families, both online and off. But these same technologies have also brought about a new, darker reality, one that many of us are not aware of.
“Human existence is threatened by quantification through AI technologies, but we are also definitely living in the best moment in history for 21st-century human existence,” Daniel Leufer, a Mozilla Fellow with the digital rights organisation Access Now, told us.
It was a red thread weaving through all the conversations we had with artists, academics, activists and fiction writers. The internet has brought us major opportunities to connect, to work, to express ourselves. But as the technology developed and people got used to it, we lost sight of what we have already lost, and of what we may lose further along the way.
Jennifer Morone, an artist, filmmaker, and digital rights activist, went through a period of personal insecurity, and as she looked around her, solace was not easy to find. For starters, she worried about the future of work. The threat of further automation, combined with widening disparities fuelling economic inequality, inadequate social safety nets, and a reluctance to regulate and tax the big tech giants, left little room to imagine a reasonable future. Social media platforms pulling faces to their screens, and an obsession with lifelogging among some, were resulting in growing isolation, self-obsession, distraction and social division. Then Snowden’s revelations came out in 2013, and that is when she started digging into the factor unifying all of the above - data. She began researching how data is collected, by whom, and why.
“What I found was that we are essentially all creating data just by existing. And that data is valuable, which means that we are all valuable. However, even though we are contributing to this economy just by being, we do not have access to any meaningful way of participation or control with this economy - we are essentially exploited,” she told me. Now she is the CEO of the RadicalxChange Foundation, which focuses on ways of distributing power, one aspect of which is digital governance and economies. “We need data unions. We need to be able to collectively bargain, to be able to have a say over what data can and should be used and for what purposes. And to be able to share in the value that we collectively create.”
Regaining control over the data we create is an issue raised by many of the people we talked with.
The right to private property was a crucial demand in early struggles for political freedom and equality, and against feudal control of property. The right to peacefully enjoy one’s property is embedded deeply in European heritage, going back to feudal times, and it is enshrined both in the Universal Declaration of Human Rights and in the European Convention on Human Rights. Why, then, is our data not part of this? Why do our devices not seem to offer the same protection digitally?
Nicole Immorlica — who researches the intersection of economics and computer science — believes that regaining ownership of personal data could give people a uniquely modern opportunity for financial growth as well. “Where does that data come from? That data comes from an entire ocean of humans that are generating the data,” Immorlica told us. “These humans ought to be compensated for the job.”
But ownership is not the only means we discussed of regaining control over our lives in a digital age. Kristina Irion, an assistant professor at the University of Amsterdam’s Institute for Information Law, has been focusing on data protection from a legal perspective for over a decade.
Irion argues that people should be at the centre of decisions about what technology should and shouldn’t do. Given their relevance to society, we should no longer view large tech companies as purely private companies. Nor should it be up to individuals to protect their own rights; that is something governments should be responsible for.
“In this race of technologies between those who control the technologies and those who use it, we should bring the users again on par with those who control the technologies,” Irion says. “Why do we let everybody into our devices? This is personal space. It should be like we have the sanctuary of our homes.”
It’s an important point Irion raises: we do own the devices we buy, yet companies have questionable access to those devices and to how we use them. Maybe we should also ask ourselves why we are not allowed to use them as we see fit.
Why can’t we get rid of apps we don’t want? Why can’t we alter apps to fit our needs? These are questions Cory Doctorow has been focusing on. He believes we are unable to take full advantage of what these technologies have to offer because we have allowed tech monopolies to form. We do not have technological “self-determination”.
“You got these concentrated sectors that can collude to spend their monopoly rents, to buy policies that are favourable to their continued existence,” Doctorow explains. “It should never be an offense to modify a product or service in order to repair it, to audit its security, or make it more secure, to add accessibility features, to support people with disabilities.”
We are not allowing people to use technologies in the ways that fit their needs best, yet we are allowing tech companies to do what they want: influencing our policies, tracking workers’ productivity and emotions. We allow governments to use facial recognition. And we have not been successful in ensuring labour rights for people who work through app-based platforms.
“Technology that could be used to liberate people, to give them more flexibility and autonomy, is actually used in the opposite way. And it is counterproductive,” Valerio DeStefano, a professor of labour law at the University of Leuven, explains to me. DeStefano argues that labour unions, including those representing platform workers, should have a say over what kind of systems will manage them.
DeStefano cautions that some uses of technology should be banned outright, especially those that aim to predict people’s future behaviour.
Agreeing with this sentiment is Daniel Leufer. Leufer is “strongly” pushing back against the idea that the human essence could be quantified. “If you’re constantly worrying that every single thing you’re doing is being tracked and evaluated — fed into a profile or a model of your behaviour, which is accessible to job advertisers, insurance companies, and the government — that’s going to significantly influence our behaviour.” In other words, we need to strive towards enabling people to have more self-determination and diversity.
The people we spoke to came from different fields, with different backgrounds, working on different parts of the internet and other technological developments. But they all came to the same conclusion: we must put people first to steer away from an otherwise disastrous future.
We need self-determination, ownership over what we create, and freedom in how we behave.
Which leaves me with a question:
Since control over our personal data is crucial for self-sovereignty, why are we not seeing more self-proclaimed nationalist movements up in arms over these issues?