Global Governance of Emerging Technologies

My name is Noah and I am a social entrepreneur and technology researcher. Currently, I am an MSc candidate at the Oxford Internet Institute. I was very honored to be interviewed by the fabulous @zelf: here you can listen to the podcast (Podcast: Zenna interviews Noah Schoeppl), and below I post the transcript of that recording.

My current research interest is the global governance of emerging technologies. In the podcast we mostly discuss a potential narrative shift from ‘cyberwar to cyberpeace’ as a practical utopia and guiding north star, but we also touch on other internet-related technologies, such as the governance of AI. I hope it serves you, and I look forward to engaging with your wisdom and insights.


Text of the audio conversation with Zenna Fiscella and Noah Schoeppl
Recorded September 2019 for Edgeryders and NGI Forward

_Zenna: _
_Welcome to the first episode of the Edgeryders podcast with Zenna, or zelf, for “human centered internet.” Today we are interviewing Noah Schoeppl, and the conversation will explore Noah’s visions and thoughts of a utopian internet, as well as some of Noah’s history. We’ve been talking quite a bit ahead of time, and we recently got introduced to each other. It was kind of with a shebang and a lot of checkboxes. _

To give a roundabout overview of what you’ve explored previously: legal infrastructures mixed with AI, and the ethical complications that come thereof. You’ve explored ethical hacking, working with Radically Open Security, as well as policy, psychology, law, economics, programming and many more, it seems. And now you’re entering a master’s program at Oxford in, like, two weeks. Yeah. To start off, if we go about it in chronological order, where would that lead us to start?

_Noah: _
_I’m Noah. And I think my journey started as a politically aware and interested human being quite young. I was 12 or so when I really caught fire for a topic that is now all over the world and in the headlines, but back then was not so much. And it’s really just the question: how can we get to a world with a hundred percent renewable energy as fast as possible? That was my utopia; it really inspired me and got me going. _

_And I came from a small town in southern Germany, and I started to travel to different places, protests and all these things. This got me in touch with research institutions and political institutions. And then there came a second phase at some point, because I realized that in all these discussions I had with politicians and lobbyists and activists, we can all discuss how there’s an ecological imperative, how we need to change our world due to its ecological boundaries. But in the end, the economic argument still wins somehow. It always does in the political arena, or tends to, and I was really frustrated by that. So I thought: well, if the economic argument wins, maybe I should learn better how it works. And I really got into business ethics and asked: how can we reimagine how the economy and how businesses work? To challenge the paradigm that it’s only about money-making, when it’s much more about human thriving and the common good. These are the contributions that companies should aim for. _

So from there on, I asked what drives companies and what really shapes them, because companies can become really slow to develop as well. The single force that is changing companies and the economy most right now is technology. So if we get this one right, if we can get the technological change right, then that’s also a really good opportunity to change how our economy works, and how our society works, for the better. Technology is the set of moving pieces that is shaping new things right now, and our generation has the opportunity to co-shape these technological changes. I think that’s an opportunity I want to take, so that we can use technology, together with our human insights, intuitions and values, to make a more humane society and economy.

_Zenna: _
So I’m curious now. We’ve talked a little bit about this before, about technology being power. We also talked about how Oxford is Hogwarts, and coders are the wizards of today. Framing it in that perspective: is an egalitarian world something worth fighting for? And if it is, what place do technology and the knowledge thereof have?

_Noah: _
Generally I find egalitarian philosophy very appealing, in particular John Rawls’ idea that, first of all, freedoms should be equal and everybody should have some basic civil liberties. And then, when we look at material conditions, to be very rough again, the difference principle according to John Rawls is that an unequal division of resources is only legitimate if it actually benefits those that are least well off. And I think right now we’re definitely at a point where our current system is not justified by that standard, in the sense that we often use technology in ways not centered on the most vulnerable people in our communities and societies.

Take Silicon Valley culture: even if it purports to save the world, startups get bought up by venture capitalists, which in turn are owned by the ones that already own the world. Self-feeding. Exactly. And again, I find it sad for all this beautiful technology that could be built into really meaningful things. Nobody says that the social media companies we have now, or all those giants, needed to go in the direction they did; it’s just that the society in which this technology arrived was one in which these kinds of innovations were immediately put under profit-maximizing pressure.

_Zenna: _
And copyright?

_Noah: _
_And copyright is also one part of it, yes. I could imagine very different ways of arriving there. But then again, the philosophy that I developed around this is really being willing to think very radically, very big, about how things could be different, but then really starting from where we are right now. So I really don’t like just complaining about how things should be very different. I didn’t want to only talk about business ethics, so I also got into social entrepreneurship and founded companies myself, and tried to not just talk about how business should be run differently, but to do it. And it’s hard. _

_Zenna: _
Just as a tidbit, what’s a goal that you found from your time in there? And I don’t mean goals as in capitalist values, but as in: what did you find that you brought with you on your new journey?

_Noah: _
I think generally the methodologies of business can often be very useful if they are used to different ends. Basically, the idea of Muhammad Yunus is to found a social business: to use business means for social ends, to solve real social problems, and not the kind of problems they’ve been conventionally associated with. One additional thought helped me to develop my own philosophy, which I call pragmatic idealism. At this point I can summarize it in three maxims: first, hope without critical thinking is just naivety, but critical thinking without hope is just cynicism. The second idea is: confidence without humility is just arrogance, but humility without confidence is sheepishness.

_Zenna: _
And that’s your own personal philosophy for how you see yourself in the world?

_Noah: _
More people that inspired me include the Bulgarian writer Maria Popova, but also many others. And then the last sentence is more like the synthesis of it all, I would say: pragmatism without idealism is just opportunism, but idealism without pragmatism is just wishful thinking. So that’s my philosophy, and that, I would say, is the goal I found: pragmatism and idealism need to work together. The first two maxims are probably more inspired by other people; pragmatism and idealism live in the space between the two. So really have ambitious goals, but also be really pragmatic about where you start.

_Zenna: _
If we go back to the origin of the ambitious goals, so to speak: if you were to start sketching out the image of those ambitious goals, let’s say you wanted to reshape the internet, plural, what would that look like for you?

_Noah: _
I don’t have one overarching vision, but I think I have several different utopian ones. I took an international relations course in my undergrad degree in social sciences in Amsterdam, and I was thinking about how international relations and global politics fit into the cyberspace that we increasingly live in, and the internet, trying to combine these global politics theories with these new technologies. Then, for some reason, I wrote the words cyber peace and cyber war. And I noticed that my computer, funnily enough, underlined cyber peace as an unknown word, but not cyber war. I looked it up in different word processing programs: none of them knew cyber peace, but all of them knew cyber war. And I thought: well, that’s interesting. The vocabulary we have in the conceptual space of the internet conceptualizes it as war; we don’t even know how to talk about it in terms of peace, because we don’t have a word for it. Just that obvious asymmetry, that it apparently needs to be a very aggressive and violent place out there, and this is the language we use. That made me think there’s some utopia missing.

When you ask what cyber peace is, maybe we should look at peace in the real world first. Ideas of world peace have always existed, but somehow we’ve not achieved it yet. One very common idea of world peace was developed by the philosopher Immanuel Kant, who in 1795 wrote an essay called ‘Perpetual Peace’, with the simple hypothesis that states with republican constitutions, so democracies, should all just guarantee each other security. And when they do that, we could have an ever expanding union of peace, because nobody would ever attempt to attack such a block of countries.

Zenna:
One thing that’s been discussed recently, especially in the realm of the internet and internet infrastructures, is the issue of centralization. In that kind of utopian image, a block of peace, one could say a block of empire, would then set the frame for the rest of the world.

_Noah: _
_Absolutely, I agree. Kant said we don’t need a centralized contract or institution for this. We don’t need a treaty; this would just emerge out of enlightenment, basically the enlightened actions of individuals and countries. Again, 18th-century philosophy maybe doesn’t completely explain the internet, but I think there are some interesting things when you actually apply it. _

And again, the internet is a very different beast. If you look at the long-term trends, we have not achieved real peace, but overall we live in a more peaceful world than ever before, and I think we should continue to make progress. Right now, when we speak about the internet, we speak a lot about cybercrime and cyber war between countries. What we really should aim for is this utopian vision, and I think it can actually happen in cyberspace much more than in the real world: cyberspace could be a much more peaceful place than the world we live in physically. And that is because of the different dynamics of security in cyberspace. It’s very asymmetric, and it’s very different from other security thinking, like the 20th-century Cold War thinking that has often been the dominant security paradigm applied to the internet. The common story is that on the internet you don’t know who your attacker is; it’s very difficult to do attribution of attackers, so you just want to develop your own offensive capability and hack everyone. The superpowers that are emerging are basically trying to copy these old patterns into this new world. First of all, I don’t think that’s how it’s going to work out; and second of all, I don’t think it’s desirable. It doesn’t work that way, because of this asymmetry in the internet: you don’t know who your attacker is, attribution is very difficult, so you can’t retaliate even if you have offensive capabilities. The offensive capability you have is not a deterrent against somebody else’s attack, which is very different in the physical world. If you have a nuclear bomb, you know the other person is not going to bomb you, because you can bomb them back. This Cold War logic, in that sense, doesn’t work anymore on the internet.
A second reason is that you can destroy somebody else’s offensive capability by building up your own defenses. If you do really good security for yourself, and if you do security research and find vulnerabilities and zero days, then you can basically destroy the offensive capabilities of others, because they rely on these vulnerabilities. If I have a nuclear bomb, just because you build some protection doesn’t mean I don’t have a nuclear bomb. But if I have a zero day, which is basically an exploit for which there is no fix yet, and you close that gap, then I don’t have a bomb anymore; I don’t have a weapon anymore. So basically, you can take away somebody else’s offensive capability by building good defenses. I think these dynamics have not been fully understood by policymakers.

_Zenna: _
Is that something you would call a sign of peace? That you can build up defenses and have safety, rather than going on the offensive?

_Noah: _
Yeah. I don’t think security must always be so state-centric. My vision for cyberpeace is basically everybody in a certain union, and it doesn’t have to be a contractual union, working towards building collective defenses: a global norm for zero-day reporting, where the countries that join report zero days and thereby improve not only their own security, but the security of everybody else. Many people might say that sounds super idealistic, but I don’t think it is. I think there are very pragmatic reasons to do so. If you’re part of this defensive union, you will be much safer than if you don’t share any of your zero days and instead try to keep and build your own offensive capabilities, for all the reasons I’ve named: because offensive capabilities can be destroyed by enemies, and because having offensive capability is not a deterrent against attacks.

_Zenna: _
So if you have a union, there must also be a ‘them’ outside of it? Is that what you imagine for a utopian internet?

_Noah: _
Generally, I would apply this Kantian idea that in the end we want to live on a peaceful planet, or in a peaceful cyberspace. My vision is that in the end we have a safe space on the internet, and we start by creating small safe spaces, and they keep expanding until, hopefully, they cover everything. There will always be cyber attacks, but we can minimize the impact and increase the integrity of everybody’s experience on the internet, for example by having a global norm for zero-day reporting. What I like about that global norm is that it encourages everyone to contribute to everyone’s security, because you can’t just close a zero-day vulnerability for yourself; you do it for everyone. At the same time, the only ‘them’ that you create is the people that don’t want to work together to make everyone’s experience safer, the people that want to keep the exploits to themselves. Only the people that want to be able to really attack other people’s systems are the ones this would be against. I do not have a very clear institutional setup for how this would work, because obviously it’s very difficult to control these kinds of things, so it’s not fully developed. But my point is more this narrative shift: from talking about a place where there’s a lot of war happening, cyber war, to a place where cyber peace is possible, and where we make sense of it in terms of peace, not cyber war.

_Zenna: _
Whose responsibility is it to make this shift happen?

_Noah: _
I think in the end it’s everyone’s. But if I should put my hope on individual institutions, one that is very well placed to do this, where I have realistic hope, is the European Union. If the European Union were to start such a project, where it said: okay, we as a group of countries have decided we want to start a norm that everyone ought to report zero days, because it’s better for everyone, I think that could be very powerful. I know that the current administration in the US, which obviously controls the most powerful capabilities there, is not going to be willing to do that. But I hope that could change, given the larger trajectory of history, and I hope it is possible for other countries too. In the same way, it once seemed impossible to ever start a counter-proliferation movement in the nuclear age; it seemed we would for eternity just create more bombs to kill each other. We now actually live in a world where there are still too many bombs, but at least fewer than 50 years ago. In the same way, I have hope for the next 50 years. I know it’s a long shot and it’s not going to happen overnight, and right now the talks on the UN level are basically non-existent on this topic. But I really hope that we could live in a world where we have fewer attacks, not more, and more integrity for everybody’s devices.

_Zenna: _
You’ve been working a bit with AI and machine learning, and specifically on the focus of whether it’s ethical, if placed in a legal system. Do you want to expand on that?

_Noah: _
Yeah, of course. The work on this narrative shift to cyber peace is something I started about two years ago. What I didn’t like about it, apart from it being state-centric, having many practical problems and being a very long shot, is that it was still defensive. My utopia there is security or safety, which is still something like the absence of violence. That’s the problem with peace: it’s not necessarily super positive, it’s only that bad things are not there. So I wanted something more positive than that, and that’s where I got more into this AI space, because I think there really is an opportunity to positively shape human thriving in many ways. To unlock this potential, we need to avoid many risks that are often discussed and that are very real. And as I said, I feel I don’t have a super clean utopia here, because somehow every utopia that involves machines that can do more than we do apparently ends up being a dystopia sometimes. So we still have to do a lot of work on which utopia we actually want out there.

_Zenna: _
In your utopia, does AI exist?

_Noah: _
I’m a pragmatic idealist, so somehow my ideals are also based on my pragmatism. And I think there is no possible world where we get rid of AI. I don’t know if you’ve heard about the Unabomber, a guy who started bombing scientists because he believed that the progress of science and technology would destroy human society. Given that I don’t think that’s a viable path, that we will basically live with less technology, the question is: how can we shape the current trends and the rising technologies to human benefit? And also, sometimes, how can we consciously decide not to use them?

_But overall, I’m convinced that we will get machines that will be better than humans at many tasks that currently only humans can do. Given that that’s coming, the question is: how do we want to do it? I think the biggest problems are exactly about human-centrism: how can we make sure these systems align with our values? That’s the long-term perspective, I would say. These AI governance questions, as pioneered by, for example, Nick Bostrom, ask what we do when general AI actually becomes smarter than humans. I think these are super valuable questions to research. _

_But then there are also the short-term questions, and those are what we’re actually dealing with today. If we already have, for example, a system that makes legal decisions in public administration, what laws do we want to be imbued into it? Do we think it’s fine if there’s some human mandate, and humans in some democratically elected institution decide that we now want this to be done by an AI? Or do we also think the outputs need to be fair in some material sense? If it’s cheaper and faster, do we just accept it? Or do we also want the process to be understandable and accessible for humans? Because many advanced machine learning programs are currently black boxes for us, so we can’t really understand them. _

And the other question is: if it’s trained on human data, human-centric data in that sense, then it’s going to take human vices with it, and it could even aggregate and exaggerate them. So yeah, there are a lot of open questions, and I don’t really have full answers to those yet. I can only explain my utopia in very abstract terms at this point, because there’s a lot of research and thinking and testing and acting to be done in this field. But right now, it’s to reap the benefits, to avoid the risks, and to make sure that we humanize the technology we live with, in the sense that we really challenge what it means to be human for ourselves. Because I don’t think we know right now what it means to be human. And if we figure out what it means to be human, then we can also tell what we want technology to do to help us thrive.

_Zenna: _
Thank you so much for putting forth your thoughts. I have a feeling that we will hear more from you.

_Noah: _
Well, thank you very much for giving me this space. And I’m very grateful for your time and for your work.

It was great interviewing you! And super fascinating topics. I like bringing in the terminology of “cyberpeace”.

Thank you for posting this as well, looking forward to following any conversations which may appear!

If I recall correctly there are other people on Edgeryders who are into AI as well, maybe @nadia , @hugi , @johncoate or @alberto knows better!


@noah great interview. I’m Edgeryders’ community journalist and have taken the liberty to edit the transcript. If you feel like this explains who you are, you can replace the first post here with it:

I’m a social entrepreneur and technology researcher, currently an MSc candidate at the Oxford Internet Institute. I’m interested in the global governance of emerging technologies, and in how we could shift the narrative from “cyberwar” to “cyberpeace” as a practical utopia and a guiding north star, including for internet-related technologies such as the governance of AI.

My journey of becoming politically aware started when I was quite young, around 12. I became passionate about figuring out how we can get to a world with a hundred percent renewable energy as fast as possible. That was my utopia; it really inspired me and got me going. It’s a topic that is now all over the world and in the headlines, but back then it wasn’t so much.

Coming from a small town in southern Germany, I travelled to different places for protests. This is what connected me to research and political institutions. But at some point, I wondered what impact those discussions with politicians, lobbyists and activists really had. We could all discuss the ecological imperative, and how the changes we need to make are bound by our world’s ecological reality.

Unfortunately, the economic argument has won until now. Within this political arena, I became very frustrated by that. But it also made me think: if the economic argument wins, maybe I should learn better how it works. That is how I got into business ethics, asking: how can we reimagine how the economy and businesses work, challenging the money-making paradigm and moving towards human thriving and the common good, which are the contributions companies should aim for?

Companies can be really slow to change, and the single force that is changing companies and the economy most right now is technology. So I realized that if we can get the technological change right, it is also a good opportunity to change how our economy and society work for the better. Technology is the set of moving pieces shaping new things, and our generation has the opportunity to co-shape these technological changes. That’s the opportunity I want to take: to use technology together with our human insights, intuitions and values, to create a more humane society and economy.

A Tech Philosophy

I find egalitarian philosophy very appealing, in particular John Rawls’ idea that freedoms should be equal and that everybody should have some basic civil liberties. When we look at material conditions and apply the difference principle, also formulated by Rawls, that an unequal division of resources is only legitimate if it actually benefits those that are least well off, we can see that our current system isn’t justified. It isn’t upholding that standard, in the sense that technology is often not centered on the most vulnerable people in our communities and societies.

With regard to Silicon Valley culture: start-ups often purport to save the world, but are bought by venture capitalists, which in turn are owned by those who already own the world. It’s self-feeding.

I find it sad that all this beautiful technology, which could be built into incredibly meaningful solutions, has to “make a profit.” For example, social media companies: they innovated, but were immediately pressured to become profitable. The copyright debate falls exactly within this sphere, although I could imagine we could have arrived here through different paths.

The philosophy I’ve developed around these issues means being willing to think radically, in a big way, about how things could be different, while really starting from where we are right now.

I don’t like to merely complain about how things should be very different. I didn’t like to only talk about business ethics, so I also dived into social entrepreneurship and founded companies myself. I tried to not just talk about how business should be run differently, but do it. And that’s quite difficult.

The methodologies of business can be useful when applied to different ends. For example, Muhammad Yunus’ idea is to found a social business: to use business means for social ends, to solve real social problems, not the kind of problems business has been conventionally associated with.

One additional thought that helped me develop my own philosophy, which I call pragmatic idealism: hope without critical thinking is just naivety, but critical thinking without hope is just cynicism; confidence without humility is just arrogance, but humility without confidence is sheepishness; and pragmatism without idealism is just opportunism, but idealism without pragmatism is just wishful thinking.

I found that pragmatism and idealism need to work together: we need to have really ambitious goals, but also be really pragmatic about where we start.

Reshaping the internet

I have several different utopian visions for how to reshape the internet. During my undergrad degree in social sciences in Amsterdam, I wondered how international relations and global politics fit into the cyberspace we are increasingly living in. How do we combine global politics theories with these new technologies? That’s when I came up with the idea of moving from cyber war to cyber peace.

I noticed that my computer for some reason didn’t recognize cyberpeace as a word, but did recognize cyberwar. I looked it up in different word processing programs, and none of them knew cyberpeace, but all knew cyberwar. That was an interesting revelation: the vocabulary we have in the conceptual space of the internet conceptualizes it as war. We don’t even know how to talk about it in terms of peace, because we don’t have a word for it. That obvious asymmetry means that cyber is somehow linked to a very aggressive and violent place. Which made me realize there’s some utopia missing.

To define cyber peace, we should first examine peace in the real world. The idea of world peace has always existed, but somehow we’ve not achieved it yet. Immanuel Kant argued in his 1795 essay Perpetual Peace that states with republican constitutions, in other words democracies, should all guarantee each other security. If they did so, we would have an ever expanding union of peace, because nobody would ever attempt to attack such a block of countries.

Decentralization is another important part. Kant argued that we don’t need a centralized contract or an institution, or a treaty: it would emerge out of the enlightened actions of individuals and countries.

But perhaps 18th century philosophy doesn’t completely explain the internet, as it is a very different beast. If you look at the long term trends, we have not achieved real peace. But overall, we live in a more peaceful world than ever before, and I think we should continue to make progress in this sphere. When we speak about the internet, we often discuss cybercrime and cyber war between countries. But what we really should aim for is a utopian vision.

From Cyberwar to Cyberpeace

I believe cyberspace could actually be a much more peaceful place than the world we live in physically. That’s because of the different dynamics of security in cyberspace. It’s very asymmetric, and it’s very different from the 20th century Cold War thinking, which has often been the dominant security paradigm applied to the internet. The emerging superpowers are basically trying to copy these old patterns into this new world.

But I don’t think that’s how it’s going to work out, and I don’t think it’s desirable; it doesn’t work that way. That’s because of the internet’s asymmetry. A commonly heard argument is that on the internet you don’t know who your attacker is. If you don’t know who your attacker is, you can’t retaliate, even if you have offensive capabilities. If you have a nuclear bomb, the other party is not going to bomb you, because you can retaliate. This Cold War logic doesn’t work on the internet. And anyone can destroy somebody else’s offensive capability by building up their own defenses. Meaning that if you take care of your own security, do security research and find vulnerabilities, you can basically destroy the offensive capabilities of others, because they rely on these vulnerabilities. Just because you build some protection doesn’t mean I don’t have a nuclear bomb. But if I have an exploit for which there is no fix yet, and you close that gap, then I don’t have a bomb anymore; I don’t have a weapon anymore. Basically, you can take away somebody else’s offensive capability by building good defenses. I think these dynamics have not been fully understood yet by policymakers.

I also believe that security could move away from being state-centric. My vision for cyberpeace involves everyone in a certain union — and it doesn’t have to be a contractual union — which works towards building collective defenses. In other words, a global norm for “zero day” reporting, under which countries disclose these “zero days.” They would not only improve their own security, but also the security of everyone else. To many, this might sound too idealistic, but I don’t think it is. There are very pragmatic reasons to do so: if you’re part of this defensive union, then you will be much safer than if you’re not.

Generally, I would apply Kant’s vision that, in the end, we want to live on a peaceful planet or in a peaceful cyberspace. Hopefully, the whole internet will be a safe space, but we can start by creating smaller safe spaces. There will always be cyberattacks, but we can minimize their impact and increase the integrity of everybody’s experience on the internet.

A global norm for “zero day” reporting encourages everyone to contribute to everyone’s security: you can’t close a vulnerability for yourself only; you close it for everyone. At the same time, the only “them” are those who don’t want to work together to make everyone’s experience safer — basically, the people who want to keep the exploits to themselves.

I do not have a very clear institutional setup for how this would work, because obviously it’s very difficult to control. But my point is more about shifting the narrative: from a place where a lot of war is happening — cyberwar — to a place where cyberpeace is possible, and where peace makes sense.

It’s everyone’s responsibility to make this shift happen, but I think institutions such as the European Union are well equipped for it. The current US administration — which obviously controls the most powerful capabilities — is not going to be willing right now. But I hope that this could change, given history’s trajectory. We didn’t think it would be possible to start a counter-proliferation movement in the nuclear age, but we now actually live in a world with fewer bombs than 50 years ago. I know that with the internet it’s a long shot as well, and it isn’t going to happen overnight; talks at the UN level are basically non-existent on this topic. But I really do hope that we could live in a world with fewer attacks, not more, and with more integrity for everybody’s devices.

On AI and Ethics

When I started focusing on the narrative shift towards cyberpeace, I realized that besides being state-centric, it also had several practical problems. So I wanted something more positive than that. That’s when I got more interested in AI, as I believe it offers an opportunity to positively shape human thriving in many ways. To unlock this potential, we need to avoid its many risks, which are often discussed and which are very real.

Somehow, every utopia involving machines that can do more than we do ends up being a dystopia. This means we still have a lot of work to do on which utopia we actually want.

As I’m a pragmatic idealist, my ideals are grounded in my pragmatism. This means I don’t believe in a world where we get rid of AI. Take the Unabomber as an example: he bombed scientists because he believed that the progress of science and technology would destroy human society. Given that I don’t think living with less technology is a viable path, the question arises: how can we shape current trends and technologies so that they benefit humans instead? And how can we consciously decide when not to use them?

But, overall, I’m convinced that we will be able to create machines that are better at performing certain tasks than humans are. Given that this is going to come, the question is: how do we want to do it? The biggest problem is related to human-centrism. So, how can we ensure that the technology aligns with our values?

AI governance questions such as the one pioneered by Nick Bostrom — what do we do when general AI outsmarts humans? — are extremely valuable to research. But we shouldn’t forget the short-term questions. For example, if a system makes legal decisions in a public administration, what laws do we want imbued into it? Do we want some sort of human mandate, or democratically elected institutions, or do we want this to be done by an AI? In addition, is it enough for the outputs to be fair in a material sense, as long as the system is cheaper and faster? Do we just accept that? Or do we also want the process itself to be understandable and accessible to humans? This is crucial, as many advanced machine learning systems are currently black boxes to us; we can’t really understand them.

The other question revolves around the fact that if a system is trained on human data — human-centric data — then it’s going to pick up human vices. Even more so, it could aggregate and exaggerate them.

There are a lot of open questions, and of course I don’t have all the answers. My utopia is quite abstract right now, because there’s a lot of research, thinking, testing, and acting to be done in this field. What we should focus on right now is reaping AI’s benefits and avoiding its risks: ensuring that we humanize the technology we live with, in the sense that we really challenge what it means to be human ourselves. And right now, we don’t. If we first figure out what it means to be human, then we can also tell how we want technology to help us thrive.