Transcript of the audio conversation with Zenna Fiscella and Noah Schoeppl
Recorded September 2019 for Edgeryders and NGI Forward
_Zenna: _
_Welcome to the first episode of the Edgeryders podcast with Zenna, or zelf, for "human centered internet." Today we are interviewing Noah Schoeppl, and the conversation will explore Noah's visions and thoughts of a utopian internet, as well as some of Noah's history. We've been talking quite a bit ahead of time, and we recently got introduced to each other. And it was kinda with a shebang and a lot of checkboxes. _
To give a rough overview of what you've explored previously: legal infrastructures mixed with AI and the ethical complications that come thereof. You've explored ethical hacking, working with Radically Open Security, as well as policy, psychology, law, economics and programming and many more, it seems like, and now you're entering a master's program at Oxford in, like, two weeks. Yeah. To start off, if we go about it in chronological order, where would that lead us to start?
_Noah: _
_I’m Noah. And I think my journey started as a politically aware and interested human being quite young. I was 12 or so when I really somehow caught fire for a topic that now is all over the world and in the headlines, but back then was not so much. And it’s really just the question: how can we get to a world with a hundred percent renewable energy as fast as possible? That was my utopia, that really inspired me and got me going. _
_And I came from a small town in southern Germany, and I started to travel to different places, protests and all these things. This got me in touch with research institutions and political institutions. And then there came a second phase at some point, because I realized: well, in all these discussions that I had with politicians and lobbyists and activists, in the end, you know, we can all discuss how there’s an ecological imperative to change our world, due to the ecological boundaries of our world. But in the end, still, the economic argument wins somehow. It always does in the political arena, or tends to, and I was really frustrated by that. And so I was like: well, if the economic argument wins, maybe I should learn better how it works. And I really got into business ethics, and asked: how can we reimagine how the economy and businesses work? Because I wanted to challenge the paradigm that it’s only about money-making. It’s much more about human thriving and the common good, and these are the contributions that companies should aim for. _
So from there on, I realized what drives companies and what really shapes them, because companies can become really slow to develop as well. The single force that right now is changing companies and the economy most is technology. And so if we get this one right, if we can get the technological change right, then that’s also a really good opportunity to change how our economy works, and how our society works, for the better. Technology is where the moving pieces are that right now are shaping new things, and our generation has the opportunity to co-shape these technological changes. And I think that’s an opportunity I want to take, so that we can use the technology together with our human insights and intuitions and values to make a more humane society and economy.
_Zenna: _
So I’m curious now. We’ve talked a little bit about this before, about technology being power. And we also talked about how Oxford is Hogwarts, and coders are the wizards of today. Framing it in that perspective: is an egalitarian world something worth fighting for? And if it is, what place does technology, and the knowledge thereof, have?
_Noah: _
In general I find egalitarian philosophy very appealing, in particular John Rawls’ idea that, first of all, freedoms should be equal, and everybody should have some basic civil liberties. And then when we look at material conditions, to be very rough again, the difference principle according to John Rawls is that an unequal division of resources is only legitimate if it actually benefits those that are least well off. And I think right now we’re definitely at a point, I would say, where our current system is not justified by that standard, in the sense that we use technology often not centered on the most vulnerable people in our communities and societies.
Look at Silicon Valley culture: even if it purports to save the world, startups get bought up by venture capitalists, which in turn are owned by the ones that already own the world. Self-feeding, exactly. And again, I find it sad, because all this beautiful technology could be built into really meaningful things. In the end, nobody says that the social media companies we have now, all those giants, needed to go in the direction they did. It was just that the society in which this technology arrived was one in which these kinds of innovations were immediately put under profit-maximizing pressure.
_Zenna: _
And copyright?
_Noah: _
_And copyright is also one part of it, yes. I could imagine very different ways of arriving there. But then again, the philosophy that I developed around this is really being willing to think very radically, very big, about how things could be different, but then really starting from where we are right now. So I really don’t like just complaining about how things should be very different. I didn’t want to only talk about business ethics; I also got into social entrepreneurship and founded companies myself, and tried to not just talk about how business should be run differently, but to do it. And it’s hard. _
_Zenna: _
Just as a tidbit, what’s a goal that you found from your time in there? And I don’t mean goals as in capitalist values, but what did you find that you brought with you on your new journey?
_Noah: _
I think generally the methodologies of business can often be very useful if they are used to different ends. Basically, the idea of Muhammad Yunus is to found a social business: to use business means for social ends, to solve real social problems, and not for the kind of problems they’ve been conventionally associated with. One more additional thought helped me to develop my own philosophy, which I call pragmatic idealism. At this point I can summarize it in three mottos. First: hope without critical thinking is just naivety, but critical thinking without hope is just cynicism. And the second idea: confidence without humility is just arrogance, but humility without confidence is sheepishness.
_Zenna: _
And that’s your own personal philosophy for how you see yourself throughout the world?
_Noah: _
People that inspired these include Maria Popova, a Bulgarian writer, but also many others. And the last sentence is more like the synthesis of it all, I would say: pragmatism without idealism is just opportunism, but idealism without pragmatism is just wishful thinking. So that’s my philosophy, and that kind of is the goal, I would say: I found that pragmatism and idealism need to work together. The first two are probably more inspired by other people; pragmatism and idealism live in the space between them. So really have ambitious goals, but also be really pragmatic about where you start with them.
_Zenna: _
If we go back to the origin of the ambitious goals, so to speak: if you were to start sketching out the image of those ambitious goals for, let’s say, reshaping the internet, plural, what would that look like for you?
_Noah: _
I don’t have one overarching one, but I think I have several different utopian visions. I took an international relations course in my undergrad degree in social sciences in Amsterdam, and I was thinking about, well, how do international relations and global politics fit into cyberspace, which we increasingly live in, and the internet? I was writing, trying to combine these global politics theories with these new technologies, and then for some reason I wrote the words cyber peace and cyber war. And I noticed that my computer underlined cyber peace as an unknown word, but not cyber war. I looked it up in different word processing programs: none of them knew cyber peace, but all of them knew cyber war. And so I was just like: well, that’s interesting. The vocabulary that we have in the conceptual space of the internet conceptualizes it as war. We don’t even know how to talk about it in terms of peace, because we don’t have a word for it. Just that obvious asymmetry, that it apparently needs to be a very aggressive and violent place out there, and that this is the language we use, made me think there’s some utopia missing. When you ask what cyber peace is, maybe we look at peace in the real world first. Ideas of world peace have always existed, but somehow we’ve not achieved it yet. One very common idea of world peace was developed by Immanuel Kant, a philosopher who in 1795 wrote an essay called ‘Perpetual Peace’, with the simple hypothesis that states with republican constitutions, so democracies, should all just guarantee each other’s security. And when they do that, then we could have an ever expanding union of peace, because nobody would ever attempt to attack such a bloc of countries.
_Zenna: _
One thing that’s been discussed recently, especially in the realms of the internet and internet infrastructures, is the issue of centralization. In that kind of utopian image of a bloc of peace, one could say a bloc of empire, which would then set the frame for the rest of the world.
_Noah: _
_I absolutely agree. But Kant said we don’t need a centralized contract or institution for this. We don’t need a treaty for this; it would just emerge out of enlightenment, basically the enlightened actions of individuals and countries. Again, 18th-century philosophy maybe doesn’t completely explain the internet, but I think there are some interesting things when you actually apply it. _
And again, the internet is a very different beast. If you look at the long term trends, we have not achieved real peace, but overall we live in a more peaceful world than ever before, and I think we should continue to make progress. Right now, when we speak about the internet, we just speak a lot about how there’s a lot of cybercrime and cyber war between countries and all that. What we really should aim for is this utopian vision, and I think it can actually happen in cyberspace much more than in the real world. Cyberspace could actually be a much more peaceful place than the world we live in physically. And that is because of the different dynamics of security in cyberspace. It’s very asymmetric, and it’s very different from other security thinking, like 20th-century Cold War thinking, which has often been the dominant security paradigm applied to the internet. The common story is that on the internet you don’t know who your attacker is; it’s very difficult to do attribution of attacks. So you just want to develop your own offensive capability and hack everyone. The superpowers that are emerging are basically trying to copy these old patterns into this new world. First of all, I don’t think that’s how it’s going to work out, and second of all, I don’t think it’s desirable. And that’s because of this asymmetry in the internet: you don’t know who your attacker is, attribution is very difficult, so you can’t retaliate even if you have offensive capabilities, right? The offensive capability you have is not a deterrent against somebody else’s attack, which in the physical world is very different. If you have a nuclear bomb, you know the other person is not going to bomb you, because you can bomb them back. This logic of the Cold War doesn’t work anymore in the internet.
A second reason is that you can destroy somebody else’s offensive capability by building up your own defenses. If you do really good security for yourself, and if you do security research and you find vulnerabilities, you find zero days, then you can basically destroy the offensive capabilities of others, because they rely on these vulnerabilities. You know, if I have a nuclear bomb, just because you build some protection doesn’t mean I don’t have a nuclear bomb anymore. But if I have a zero day, which is basically an exploit for which there is no fix yet, and you close that gap, then I don’t have a weapon anymore. So you can take away somebody else’s offensive capability by building good defenses. I think these dynamics have not been fully understood by policymakers.
_Zenna: _
Is that something you would call a sign of peace? That you can build up defenses, and have safety rather than offensive capability?
_Noah: _
Yeah, and I don’t think security must always be so state-centric. My vision for cyber peace is basically that everybody in a certain union, and it doesn’t have to be a contractual union, works towards building collective defenses: a global norm for zero-day reporting, so that when the countries that join report zero days, they improve not only their own security but the security of everybody else. And again, many people might say that sounds super idealistic. I don’t think it is; I think there are very pragmatic reasons to do so. Because if you’re part of this defensive union, then you will be much safer than if you don’t share any of your zero days and instead try to keep and build your own offensive capabilities, for all the reasons that I’ve named: because offensive capabilities can be destroyed by enemies, and because having offensive capability is not a deterrent against attacks.
_Zenna: _
So if you have a union, there must also be a ‘them’ outside of it? Is that what you imagine for a utopian internet?
_Noah: _
Generally, I would apply this Kantian idea that in the end we want to live on a peaceful planet, or in a peaceful cyberspace. My vision is that we in the end have a safe space in the internet, and we start by creating small safe spaces, and they start expanding until, hopefully, they cover everything. There will always be cyber attacks, but we can minimize the impact and increase the integrity of everybody’s experience of the internet by, for example, having a global norm for zero-day reporting. What I like about the global norm for zero-day reporting is that it encourages everyone to contribute to everyone’s security, because you can’t just close a zero-day vulnerability for yourself; you do it for everyone. And at the same time, the only ‘them’ that you create is the people that don’t want to work together to make everyone’s experience safer. The ‘them’ is basically the people that want to keep the exploits to themselves. Only the people that want to be able to really attack other people’s systems are the ones this works against. I do not have a very clear institutional setup for how this would work, because obviously it’s very difficult to control these kinds of things, so it’s not fully developed. But my point is more this narrative shift: from talking about a place where there’s a lot of war happening, cyber war, to a place where cyber peace is possible, and where we should make cyber peace, not cyber war.
_Zenna: _
Whose responsibility is it to make this shift happen?
_Noah: _
I think in the end it’s everyone’s. But if I should put my hope on individual institutions, an institution that is very well placed to do this, where I have realistic hope, is the European Union. If the European Union were to start such a project, where it said: okay, we as a group of countries decided we want to start a norm that everyone ought to report zero days, because it’s better for everyone, I think that could be very powerful. And I know that the current administration in the US, for example, which obviously controls the most powerful capabilities there, is not going to be willing to do that. But I hope that that could change, given also the larger trajectory of history, and I hope the same for other countries. It once seemed impossible to ever start a counter-proliferation movement in the nuclear age, and it seemed that we would for eternity just create more bombs to kill each other. We now actually live in a world where there are still too many bombs, but at least fewer than 50 years ago. In the same way, I have hope for what could happen in 50 years. I know it’s a long shot, it’s not going to happen overnight, and right now the talks on the UN level are basically non-existent on this topic. But I really hope that we could live in a world where we have fewer attacks, not more, and more integrity of everybody’s devices.
_Zenna: _
You’ve been working a bit with AI and machine learning, specifically with a focus on whether it’s ethical if placed in a legal system. Do you want to expand on that?
_Noah: _
Yeah, of course. The work that I started two years ago was about this narrative shift to cyber peace. And what I didn’t like about it, apart from the fact that it was state-centric and that there were many practical problems with it and it was a very long shot, was that it was still defensive. It was still saying: my utopia is security or safety, which is still something like the absence of violence. That’s the problem with peace: it’s not in itself super positive, it’s only that bad things are not there. And so I was like, I want something more positive than that. That’s where I got more into this AI space, because I think there really is an opportunity to positively shape human thriving in many ways. To unlock this potential, we need to avoid many risks that are often discussed and that are very real. And as I said, I feel I don’t have a super clean utopia here, because somehow every utopia that involves machines that can do more than we do apparently ends up being a dystopia sometimes. So we still have to do a lot of work on which utopia we actually want out there.
_Zenna: _
In your utopia, does AI exist?
_Noah: _
I’m a pragmatic idealist, so somehow my ideals are also based on my pragmatism. And I think there is no possible world where we get rid of AI. I don’t know if you’ve heard about the Unabomber, a guy who basically started bombing scientists because he believed that the progress of science and technology would destroy human society. Given that I don’t think that’s a viable path, that we will basically live with less technology, the question is: how can we shape the current trends and the current technologies that are rising to human benefit? And also, sometimes, how can we consciously decide not to use them?
_But overall, I’m convinced that we will get machines that will be better than humans at many tasks that currently only humans can do. Given that that’s going to come, the question is: how do we want to do that? I think the biggest problems are exactly about human-centrism: how can we make sure that these machines align with our values? That’s the long-term perspective, I would say, all these AI governance questions, pioneered for example by Nick Bostrom: the whole idea of what we do when general AI actually is smarter than humans. I think these are super valuable questions to research. _
_But then there are also the short-term questions, and those are what we’re actually dealing with. These are important questions like: if we already today have, for example, a system that makes legal decisions in a public administration, what laws do we want to be imbued into it? Do we think it’s just fine if there’s some human mandate, and humans in some democratically elected institution decide: now we want this to be done by an AI? Or do we also think there need to be some outputs that are, in some material sense, fair? If it’s cheaper and faster, do we just accept it? Or do we also want the process to be understandable, to be accessible for humans? Because many advanced machine learning programs are currently black boxes for us, so we can’t really understand them. _
And the other question is: well, if it’s trained on human data, human-centric data in that sense, then it’s going to take human vices with it, and it could even aggravate and exaggerate them. So yeah, there are a lot of open questions, and I don’t really have full answers to those yet. My utopia I can only explain in very abstract terms at this point, because there’s a lot of research and thinking and testing and acting to be done in this field. But right now it’s to reap the benefits, to avoid the risks, and to make sure that we humanize the technology that we live with, in the sense that we really challenge for ourselves what it means to be human. Because I don’t think we know right now what it means to be human. And if we figure out what it means to be human, then we can also tell what we want technology to do to help us thrive.
_Zenna: _
Thank you so much for sharing your thoughts. I have a feeling that we will hear more from you.
_Noah: _
Well, thank you very much for giving me this space. And I’m very grateful for your time and for your work.