Zenna and Noah discuss his philosophy of “pragmatic idealism” and how it pertains to AI and other tech policy.
Text of the audio conversation with Zenna Fiscella and Noah Schoeppl
Recorded September 2019 for Edgeryders and NGI Forward
Zenna:
Welcome to the first episode of the Edgeryders podcast with Zenna, or zelf, for the “human centered internet.” Today we are interviewing Noah Schoeppl, and the conversation will explore Noah’s visions and thoughts of a utopian internet, as well as some of Noah’s history. We’ve been talking quite a bit ahead of time; we were recently introduced to each other, and it was kind of with a shebang and a lot of checkboxes.
To give a rough overview of what you’ve explored previously: legal infrastructures mixed with AI, and the ethical complications that come thereof; ethical hacking, working with Radically Open Security; as well as psychology and programming, and much more, it seems. And now you’re entering a master’s program at Oxford in, like, two weeks? Yeah. To start off, if we go about it in chronological order, where would that lead us to start?
Noah:
I’m Noah. And I think my journey started with being a politically aware and interested human being quite young. I was 12 or so when I somehow really caught fire for a topic that is now all over the world and in the headlines, but back then was not so much: how can we get to a world with one hundred percent renewable energy as fast as possible? That was my first political utopia, the one that really inspired me and got me going.
I came from a small town in southern Germany, and I started to travel to different places, to go to protests and all these things, and got in touch with research institutions and political institutions. And then at some point there came a second phase, because I realized: in all these discussions I had with politicians and lobbyists and activists, we can all discuss how there is an imperative to change our world due to its ecological boundaries, but in the end the economic argument still wins somehow. It always does in the political arena, or tends to, and I was really frustrated by that. So I thought: well, if the economic argument wins, maybe I should learn better how it works. And I really got into business ethics. Back then it was about how we can reimagine how the economy and how businesses work, to challenge the paradigm that it’s only about money-making, when it’s much more about human thriving and the common good; these are the contributions that companies should aim for.
So from there on, I came to understand what drives companies and what really shapes them. But companies can also be really slow, or become really slow, to develop. The single force that is changing companies and the economy most right now is technology. So if we get this one right, if we can get the technological change right, that’s also a really good opportunity to change how our economy works, and how our society works, for the better. Technology is where the moving pieces are right now, the pieces that are shaping new things, and our generation has the opportunity to co-shape these technological changes. I think that’s an opportunity I want to take, so that we can use technology, together with our human insights and intuitions and values, to make a more humane society and economy.
Zenna:
So I’m curious. We’ve talked a little bit about this before, about technology being power. And we also talked about how Oxford is Hogwarts and coders are the wizards of today. Framing it in that perspective: is an egalitarian world something worth fighting for? And if it is, what place do technology, and the knowledge thereof, have in it?
Noah:
I think, generally, I find egalitarian philosophy very appealing, in particular, to put it very roughly, the John Rawls idea that, first of all, in terms of freedoms, everybody should be equal and should have some basic civil liberties. And then, when we look at material conditions, to be very rough again, the difference principle according to John Rawls is that an unequal division of resources is only legitimate if it actually benefits those that are least well off. An unequal distribution can meet that standard if, for example, it means there are overall just many more resources to go around, including for the least well off. And I think right now we’re definitely at a point where our current system is not justified by that standard, in the sense that we often use technology not centered on the most vulnerable people in our communities and societies, but very much on those who already happen to hold resources.
Silicon Valley culture too, even if it purports to want to save the world: in the end all the startups get bought up by venture capitalists, which in turn are owned by the ones that already own the world. Self-feeding. Exactly. And again, I find it sad that all this beautiful technology, which could be built into really meaningful things… I mean, nobody says that all the social media companies we have now, all those giants, needed to go in the direction they did. It’s just that the society in which the technology arrived was one in which these kinds of innovations were immediately put under profit-maximizing pressure.
Zenna:
And copyright?
Noah:
And copyright is also one part of it, yes. I could imagine very different ways of arriving here. But then again, the philosophy that I developed around this is really about being willing to think very radically, very big, about how things could be different, but then really starting from where we are right now. I really don’t like just complaining about how things should be different without going and trying to fix them. And, I mean, I didn’t only talk about business ethics: I also got into social entrepreneurship and founded companies myself, and tried not just to talk about how business should be run differently, but to do it. And it’s hard.
Zenna:
Just as a tidbit: what gold did you find from your time there? And I don’t mean gold as in capitalist value, but: what did you find that you brought with you on your new journey?
Noah:
I think generally the methodologies of business can often be very useful if they are used to different ends. Basically, the idea of a social business is to use business means for social ends, to solve real social problems, and not the kind of problems business has conventionally been associated with. But that’s not an original thought, so I don’t want to claim credit for it. One additional thought I had is that it really helped me to develop my own philosophy, which I call pragmatic idealism. At this point I can summarize it in three mottos. The first is: hope without critical thinking is just naivety, but critical thinking without hope is just cynicism. The second idea is: confidence without humility is just arrogance, but humility without confidence is just sheepishness.
Zenna:
And that’s your own personal philosophy for how you see yourself moving through the world?
Noah:
The first two are probably more inspired by other people; people that inspired them include Maria Popova, the Bulgarian writer, but also many others. The last sentence is more the synthesis of it all, I would say: pragmatism without idealism is just opportunism, but idealism without pragmatism is just wishful thinking. So that’s my philosophy, and that’s the gold, I would say, that I found: that pragmatism and idealism need to work together, that you live in the space between the two. So: really have ambitious goals, but also be really pragmatic about where you start.
Zenna:
If we go back to the origin of the ambitious goals, so to speak: if you were to start sketching out the image of those ambitious goals, let’s say for reshaping the internet, or internets, plural, what would that look like for you?
Noah:
I don’t have one overarching utopia, but I have several utopian visions for this. I took an international relations course in my undergraduate degree in social sciences in Amsterdam, and I was thinking about how international relations and global politics fit into the cyberspace that we increasingly live in, and the internet. While writing, trying to combine these global politics theories with these new technologies, I for some reason wrote the words cyber peace and cyber war. And I noticed that my computer underlined cyber peace as an unknown word, but recognized cyber war. I looked it up in different word processing programs: none of them knew cyber peace, but all of them knew cyber war. I looked in different online dictionaries: almost all of them knew cyber war, but very few knew cyber peace. So I thought: well, that’s interesting. In the vocabulary, the conceptual space, that we have for the internet, we conceptualize it as war; we don’t even know how to talk about it in terms of peace, because we don’t have a word for it. Just that obvious asymmetry, that it apparently needs to be a very aggressive and violent place out there, and that this is the language we use, made me think there’s some utopia missing. And I started to develop that a bit more, initially in this academic context. For example, what I find interesting is: when we look at cyber peace, maybe we should look at peace in the real world first. There have always been ideas of world peace, but somehow we’ve not achieved it yet.
One very common idea of world peace came from the philosopher Immanuel Kant, who in 1795 wrote an essay called Perpetual Peace. He basically proposed the simple hypothesis that states with republican constitutions, democracies basically, should all guarantee each other’s security. And when they do that, we could have an ever-expanding zone of peace, because nobody would ever attempt to attack such a bloc of countries.
Zenna:
One thing that’s been discussed recently, especially in the realm of the internet and internet infrastructures, is the issue of centralization. In that kind of utopian image, a bloc of peace could also be called a bloc of empire, which would then set the frame for the rest of the world.
Noah:
I absolutely agree. And I think it’s really interesting that back then, and I don’t want to get too wrapped up in Kant, he said: we don’t need a centralized contract or institution for this, we don’t need a treaty for this. It would just emerge out of the enlightened actions of individuals and countries. Again, 18th-century philosophy maybe doesn’t completely extend to the internet, but I think there are some interesting things when you actually apply it.
And again, the internet is a very different beast. If you look at the long-term trends: we have not achieved real peace, but overall we live in a more peaceful world than ever before. Over the long run we have made a lot of progress, and I think we should continue to make that progress. Right now, when we speak about the internet, we speak a lot about how there are so many attacks, how insecure it is, about cybercrime and cyber war between countries. What I think we really should aim for is this utopian vision, and I think it can actually happen in cyberspace much more than in the physical world. Cyberspace could actually be a much more peaceful place than the world we live in physically, because of the different dynamics of security in cyberspace. It’s very asymmetric, and very different from other security thinking, like 20th-century Cold War thinking, which has often been the dominant security paradigm applied to the internet. The common story is: on the internet you don’t know who your attacker is, attribution of attacks is very difficult, so you just want to develop your own offensive capability and hack everyone. The US has been doing it, Russia has been doing it, China has been doing it, and many other countries aspire to do it. The superpowers that are emerging are basically trying to copy these old patterns into this new world. First of all, I don’t think that’s how it’s going to work out, and I also don’t think it’s desirable. And that’s because of this asymmetry of the internet: you don’t know who your attacker really is, attribution is very difficult, so you can’t retaliate just because you have offensive capabilities.
Because you have cyber weapons, because you know about vulnerabilities, you have exploits with which you can enter other systems. But just because you can attack someone doesn’t mean they won’t attack you: the offensive capability you have is not a deterrent against somebody else’s attack. In the physical world that’s very different: if you have a nuclear bomb, the other side is not going to bomb you so readily, because you can bomb them back. This Cold War logic doesn’t work anymore on the internet. A second reason is that you can destroy somebody else’s offensive capability by building up your own defenses. If you do really good IT security for yourself, if you do security research and you find vulnerabilities, if you find zero days, then you can basically destroy the offensive capabilities of others, because they rely on these vulnerabilities. If I have a nuclear bomb, you building some protection doesn’t mean I no longer have a nuclear bomb. But if I have a zero day, which is basically an exploit for which no fix exists yet, and you close that gap, then I don’t have a bomb anymore, I don’t have a weapon anymore. So you can take away somebody else’s offensive capability by building good defenses. I think these dynamics have not been fully understood by policymakers.
Zenna:
Is that something you would call cyber peace? Building up defenses and thereby having safety, rather than offense?
Noah:
Yeah. My vision for cyber peace is not that states should defend everything and everybody; I don’t think security must always be so state-centric. My vision for cyber peace is that everybody in a certain union, and it doesn’t have to be a contractual union, works towards building collective defenses. It starts with one very basic thing: a global norm for zero-day reporting. Countries that join, and it can even just be an idea that they decide to follow, start to report zero days, and thereby improve not only their own security but the security of everybody else. Many people might say that sounds super idealistic, and I don’t think it is: there are very pragmatic reasons to do so. If you’re part of this defensive union, you will be much safer than if you don’t share any of your zero days and instead try to keep and build your own offensive capabilities, for all the reasons I’ve named: because offensive capabilities can be destroyed by others’ defenses, and because having offensive capability is not a deterrent against attacks.
Zenna:
So if you have a union, there must be something that’s in it for them, there’s the [inaudible]. Is that what you imagine for a utopian internet?
Noah:
Generally, again, I would apply this Kantian idea that in the end we want to live on a peaceful planet, or in a peaceful cyberspace. My vision is that we start by creating small safe spaces on the internet, and they keep expanding until in the end, hopefully, they cover most of it. I think there will always be cyber attacks, but we can minimize their impact, and we can increase the integrity of everybody’s experience on the internet, for example by having a global norm for zero-day reporting. What I like about a global norm for zero-day reporting is that it encourages everyone to contribute to everyone’s security, because you can’t close a zero-day vulnerability just for yourself; you close it for everyone. And at the same time, the only “them” that you create is the people that don’t want to work together to make everyone’s experience safer: the people that want to keep the exploits, whether that’s NSA hackers, or malicious Russian hackers, or whatever groups. Only the people that want to be able to attack other people’s systems would be against it. It’s not that I have a very clear institutional setup for how this would work, because obviously it’s very difficult to control these kinds of things. Exactly. So it’s not fully developed. My point is more this narrative shift: from talking about a place where a lot of cyber war is happening, to a place where cyber peace is possible, and where we speak of cyber peace, not cyber war.
Zenna:
Whose responsibility is it to make this shift happen?
Noah:
I think in the end it’s everyone’s. But if I should put my hope on individual institutions, one institution that is very well placed to do this, where I have realistic hope, is the European Union. If the European Union were to start such a project and say: okay, we as a group of countries decide we want to start a norm that everyone ought to report zero days, because it’s better for everyone, I think that could be very powerful. I know that the current administration in the US, for example, which obviously controls the most powerful offensive capability there is, is not going to be willing to do that. But I hope that could change, given the larger trajectory of history, and I hope it is possible for other countries too. In the same way that it once seemed impossible to ever start a non-proliferation movement in the nuclear age, that it seemed we would for eternity just create more bombs to kill each other, we now actually live in a world where there are still too many bombs, but at least fewer than 50 years ago. In the same way, I have hope for the next 50 years. I know it’s a long shot, it’s not going to happen overnight, and right now talks at the UN level on this topic are basically non-existent. But I really hope that we could live in a world where we have fewer attacks, not more, and more integrity of everybody’s devices.
Zenna:
You’ve been working a bit with AI and machine learning, specifically with a focus on whether it’s ethical when placed in a legal system. Do you want to expand on that?
Noah:
Yeah, of course. That was work I started two years ago, when I talked about this narrative shift to cyber peace. What I didn’t like about it, apart from it being state-centric, having many practical problems, and being a long shot, is that it was still defensive. My utopia there was security, or safety, which is still just the absence of violence. That’s the problem with peace: it’s not something super positive in itself, it’s only that bad things are absent. So I wanted something more positive than that. And that’s where I got more into this AI space, because I think there really is an opportunity to positively shape human thriving in many ways. To unlock this potential, we need to avoid many risks that are often discussed and that are very real. And as I said, I don’t have a super clean utopia here, because somehow every utopia that involves machines that can do more than we do apparently ends up being a dystopia sometimes. So we still have to do a lot of work on which utopia we actually want.
Zenna:
In your utopia, does AI exist?
Noah:
In my utopia… Again, I’m a pragmatic idealist, so somehow my ideals are also based on my pragmatism, and I don’t think a world without it is possible. I don’t know if you’ve heard about the Unabomber, a guy who started bombing scientists because he believed that the progress of science and technology would destroy human society. I don’t think that’s a viable path, that we will live with less technology. So the question is: how can we shape the current trends and the current technologies that are rising to human benefit? And also, sometimes: how can we consciously decide not to use them? We can sometimes do that.
But overall, I’m convinced that we will get machines that will be better than humans at many tasks that currently only humans can do. Given that that’s coming, the question is: how do we want to do it? I think the biggest problems are exactly about human-centrism: how can we make sure that these systems align with our values? That’s the long-term perspective, I would say: the AI governance questions, pioneered for example by Nick Bostrom, the whole set of ideas about what we do when general AI actually becomes smarter than humans. I think these are super valuable questions to deal with and to research.
But then there are also the short-term questions, and those are the important questions I was dealing with: if we already today have, for example, a system that makes legal decisions in a public administration, what laws do we want imbued into it? Do we think it’s fine if there’s simply a human mandate, if some democratically elected institution decides we now want this to be done by an AI? Or do we also think the outputs need to be, in some material sense, fair? If it’s cheaper and faster, do we just accept it? Or do we also want the process to be understandable, to be accessible to humans? Because many advanced machine learning programs are currently black boxes for us, so we can’t really understand them.
And the other question is: if it’s trained on human data, human-centric data in that sense, then it’s going to take human vices with it, and it could even aggravate and exaggerate them. So yeah, there are a lot of open questions, and I don’t have full answers to those yet. My utopia I can only explain in very abstract terms at this point, because there’s a lot of research and thinking and testing and acting to be done in this field. But right now it’s: reap the benefits, avoid the risks, and make sure that we humanize the technology we live with, in the sense that we really challenge what it means to be human for ourselves. Because I don’t think we know right now what it means to be human. And if we figure out what it means to be human, then we can also tell what we want technology to do to help us thrive.
Zenna:
Thank you so much for bringing forth your thoughts. I have a feeling that we will hear more from you.
Noah:
Well, thank you very much for giving me this space. And I’m very grateful for your time and for your work.