@noah great interview. I’m Edgeryders’ community journalist and have taken the liberty to edit the transcript. If you feel like this explains who you are, you can replace it as the first post here:
I’m a social entrepreneur and technology researcher, currently enrolled as an MSc candidate at the Oxford Internet Institute. I’m interested in the global governance of emerging technologies, including internet-related technologies and the governance of AI, and in how we could shift the narrative from “cyberwar” to “cyberpeace” as a practical utopia and a guiding north star.
My journey of becoming politically aware started when I was quite young, around the age of 12. I became passionate about figuring out how we can get to a world with one hundred percent renewable energy as fast as possible. That was my utopia; it really inspired me and got me going. It’s a topic that is now in headlines all over the world, but back then, it wasn’t so much.
Coming from a small town in southern Germany, I travelled to different places for protests. And this is what connected me to research and political institutions. But at some point, I wondered what impact those discussions with politicians and lobbyists and activists really have. In the end, how do we weigh the economic argument against the ecological imperative, and against the changes we need to make globally, bound as we are by our ecological reality?
Unfortunately, the economic argument has won until now, and within this political arena I became very frustrated with that. But it also made me think: if the economic argument wins, maybe I should learn better how it works. That is how I got into business ethics, asking the question: how can we reimagine how the economy and businesses work, so as to challenge the money-making paradigm and move instead towards thriving humans and the common good — and which contributions should companies aim for?
Companies can be disrupted by technological developments. So, I realized that if we can get technological change right, it could also be a good opportunity to change how our economy and society work for the better. Because technologies are the moving pieces shaping new things, and our generation has the opportunity to co-shape these technological changes. That’s the opportunity I want to take: to use technology, together with our human insights, intuitions and values, to create a more humane society and economy.
A Tech Philosophy
I find egalitarian philosophy very appealing, in particular John Rawls’ idea that freedoms should be equal and that everybody should have some basic civil liberties. When we look at material conditions and apply the difference principle, also formulated by Rawls — that an unequal division of resources is only legitimate if it actually benefits those who are least well off — we can see that our current system isn’t justified. It isn’t upholding that standard, in the sense that technology is often not centered on the most vulnerable people in our communities and societies.
With regards to Silicon Valley culture: start-ups often purport to save the world, but are bought by venture capitalists, which in turn are owned by those who already own the world. It’s self-feeding.
I find it sad that all this beautiful technology, which could be built into incredibly meaningful solutions, has to “make a profit.” For example, social media companies: they innovated, but were immediately pressured to become profitable. The copyright debate falls exactly within this sphere, although I could imagine we arrived here through different paths.
The philosophy I’ve developed around these issues means we really have to be willing to think radically differently about how to make changes, in a massive way.
I don’t like to merely complain about how things should be very different. I didn’t like to only talk about business ethics, so I also dived into social entrepreneurship and founded companies myself. I tried to not just talk about how business should be run differently, but do it. And that’s quite difficult.
The methodologies of business could be useful when applied to different ends. For example, Muhammad Yunus’ idea is that you found a social business, using business means for social ends to solve real social problems — not the kind of problems business has been conventionally associated with.
One more thought that helped me to develop my own philosophy is what I call pragmatic idealism: hope without critical thinking is just naivety, but critical thinking without hope is just cynicism; confidence without humility is just arrogance, but humility without confidence is sheepishness; and pragmatism without idealism is just opportunism, but idealism without pragmatism is just wishful thinking.
I found that pragmatism and idealism need to work together: we need to have really ambitious goals, but also be really pragmatic about where we start.
Reshaping the internet
I have several different utopian visions of how to reshape the internet. During my undergraduate degree in Social Science in Amsterdam, I wondered how international relations and global politics fit into the cyberspace we are increasingly living in. How do we combine global politics theories with these new technologies? That’s when I came up with the idea of moving from cyberwar to cyberpeace.
I noticed that my computer for some reason didn’t recognize cyberpeace as one word, but did recognize cyberwar. I looked it up in different word processing programs, and none of them knew cyberpeace, but all knew cyberwar. That was an interesting revelation: the vocabulary we have in the conceptual space of the internet conceptualizes it as war. We don’t even know how to talk about it in terms of peace, because we don’t have a word for it. That obvious asymmetry means that cyber is somehow linked to a very aggressive and violent place. Which made me realize there’s a utopia missing.
To define cyberpeace, we should first examine peace in the real world. The idea of world peace has always existed, but somehow we’ve not achieved it yet. Immanuel Kant argued in his 1795 essay Perpetual Peace that states with republican constitutions — in other words, democracies — should all guarantee each other’s security. If they did so, we would have an ever-expanding union of peace, because nobody would ever attempt to attack such a bloc of countries.
Decentralization is another important part. Kant argued that we don’t need a centralized contract, institution, or treaty: peace would emerge out of the enlightened actions of individuals and countries.
But perhaps 18th century philosophy doesn’t completely explain the internet, as it is a very different beast. If you look at the long-term trends, we have not achieved real peace. But overall, we live in a more peaceful world than ever before, and I think we should continue to make progress in this sphere. When we speak about the internet, we often discuss cybercrime and cyber war between countries. But what we really should aim for is a utopian vision.
From Cyberwar to Cyberpeace
I believe cyberspace could actually be a much more peaceful place than the world we live in physically. That’s because of the different dynamics of security in cyberspace: it’s very asymmetric. And it’s very different from 20th century Cold War thinking, which has often been the dominant security paradigm applied to the internet. The superpowers which are emerging are basically trying to copy these old patterns into this new world.
But I don’t think that’s how it’s going to work out, and I don’t think it’s desirable; it doesn’t work that way, because of the internet’s asymmetry. A commonly heard argument is that on the internet you don’t know who your attacker is. If you don’t know who your attacker is, you can’t retaliate, even if you have offensive capabilities. If you have a nuclear bomb, the other side is not going to bomb you because you can retaliate. This Cold War logic doesn’t work with the internet.

Moreover, anyone can destroy somebody else’s offensive capability by building up their own defenses. If you take care of your own security, do security research, and find vulnerabilities, you can basically destroy the offensive capabilities of others — because they rely on those vulnerabilities. You don’t know if I have a nuclear bomb, and just because you build some protection doesn’t mean I don’t have one. But if I have an exploit for which there is no fix yet, and you close that gap, then I don’t have a weapon anymore. Basically, you can take away somebody else’s offensive capability by building good defenses. I think these dynamics have not been fully understood yet by policymakers.
I also believe that security could move away from being state-centric. My vision for cyberpeace involves everyone in a certain union — and it doesn’t have to be a contractual union — which works towards building collective defenses. In other words, a global norm for “zero day” reporting, in which countries report the zero-day vulnerabilities they find. They’ll not only improve their own security, but also the security of everyone else. To many, this might sound too idealistic. But I don’t think it is; there are very pragmatic reasons to do so. If you’re part of this defensive union, then you will be much safer than if you’re not.
Generally, I would apply Kant’s vision that, in the end, we want to live on a peaceful planet, or in a peaceful cyberspace. Hopefully, the whole internet will be a safe space, but we can start by creating smaller safe spaces. There will always be cyber attacks. But we can minimize their impact and increase the integrity of everybody’s experience on the internet.
A global norm for “zero day” reporting encourages everyone to contribute to everyone’s security: you can’t close a vulnerability for yourself only, you close it for everyone. At the same time, the only “them” are those who don’t want to work together to make everyone’s experience safer — basically, the people who want to keep the exploits to themselves.
I do not have a very clear institutional setup in mind for how this would work, because obviously it’s very difficult to control. But my point is more about shifting the narrative: from a place where a lot of war is happening — cyber war — to a place where cyberpeace is possible, and where we start making sense of the internet in terms of peace.
It’s everyone’s responsibility to make this shift happen, but I think institutions such as the European Union are well equipped for it. The current US administration, which obviously controls the most powerful capabilities, is not going to be willing right now. But I hope that this could change, given history’s trajectory. We didn’t think it would be possible to start a counter-proliferation movement in the nuclear age, but we now actually live in a world with fewer bombs than 50 years ago. I know that with the internet it’s a long shot as well, and it isn’t going to happen overnight. Talks at the UN level on this topic are basically non-existent. But I really do hope that we could live in a world with fewer, not more, attacks, and more integrity for everybody’s devices.
On AI and Ethics
When I started focusing on the narrative shift towards cyberpeace, I realized that besides being state-centric, it also had several practical problems. And so I wanted something more positive than that. That’s when I got more interested in AI, as I believe it offers an opportunity to positively shape human thriving in many ways. To unlock this potential, we need to avoid its many risks, which are often discussed and which are very real.
Somehow, every utopia involving machines that can do more than we do ends up being a dystopia. This means we still have to do a lot of work on which utopia we actually want out there.
As I’m a pragmatic idealist, my ideals are also based on my pragmatism. This means I don’t believe in a world where we get rid of AI. Take as an example the Unabomber, who bombed scientists because he believed that the progress of science and technology would destroy human society. Given that I don’t think that’s a viable path — living with less technology — the question arises: how can we shape current trends and technologies so that they benefit humans instead? And how can we consciously decide when not to use them?
But, overall, I’m convinced that we will be able to create machines which will be better at performing certain tasks than humans are. Given that this is going to come, the question is: how do we want to do it? The biggest problem relates to human-centrism: how can we ensure that the tech aligns with our values?
When we look at AI governance questions — such as the one pioneered by Nick Bostrom: what do we do when general AI outsmarts humans? — these are extremely valuable questions to research. But we shouldn’t forget the short-term questions. For example, if we have a system making legal decisions in a public administration, what laws do we want imbued in it? Do we want some sort of human mandate, or democratically elected institutions, or do we want this to be decided by an AI? In addition, do the outputs only have to be fair in the material sense, if that’s cheaper and faster? Do we just accept that? Or do we also want the process itself to be understandable and accessible to humans? This is crucial, as many advanced machine learning programs are currently black boxes to us; we can’t really understand them.
The other question revolves around the idea that if a system is trained on human, human-centric data, then it’s going to take human vices with it. Even more so, it could aggregate and exaggerate them.
There are a lot of open questions, and I don’t really have all the answers, of course. My utopia is quite abstract right now, because there’s a lot of research and thinking and testing and acting to be done in this field. What we should focus on right now is reaping AI’s benefits while avoiding its risks: ensuring that we humanize the technology we live with, in the sense that we really challenge what it means to be human ourselves. And right now, we don’t. If we first figure out what it means to be human, then we can also tell how we want technology to help us thrive.