Peter is a well-known expert in IoT - tech and policy. Currently he studies smart city policy, especially as it pertains to public spaces.
Peter Bihr - Mozilla fellow, author of View Source: Shenzhen and Understanding the Connected Home (with Michelle Thorne)
Welcome Peter Bihr to our community call today. Delighted to have you. I don’t know how much everybody knows about Peter and his work, but I spent a lot of yesterday reading your writing and your website, looking at all the things you do. You’re a prolific writer, you have the ThingsCon conference that you’ve been doing, you have The Waving Cat, you do consulting, you’re a young parent, you started a company that makes pants, you have a travel magazine that is a kind of crowd-sourced blog thing. What a renaissance man. Where do you find time to do all these things?
Well, the good thing is, I don’t have to do them all by myself. So that makes it a lot easier. The things I do most are basically just what you mentioned. The Waving Cat, that’s my company; that’s essentially the administrative little back end to make everything work. That’s the consulting firm. And that makes it a lot easier. Thanks.
ThingsCon is the other big project that takes the other half of my time, mostly. That’s a nonprofit, but that’s a bigger team, so that’s not just me, and that makes it easier. And the rest are kind of like side projects that have other teams, or are really on a very small scale. So that makes it a lot easier. It makes it sound like more than it really is. The Waving Cat is the consulting part around smart cities and emerging tech; that’s the commercial part. And ThingsCon is the nonprofit advocacy part of the whole thing. It started as a conference and then turned into a whole online community where essentially we work with a lot of professionals and practitioners in IoT to make sure that it gets done a little bit better, and not just faster and easier to exit later.
Your current research focuses on smart city governance. What specifically are you looking at with that?
So when we look at smart cities, it’s anything that’s connecting the urban space; let’s keep it very open and broad for now. Anything that is internet connected or collects data via sensors, that enables administrations to run decision-making algorithms, that kind of stuff - anything called smart cities. And, small footnote: I go through this whole spiel that it’s all-encompassing because the term smart city is a very specific industry-shaped term. It’s essentially an IBM term, more or less, that they kind of brought into the world, and all the vendors that make smart city technology use that term. It comes with some baggage. But whenever we connect public space, I think we need to look at how that works. Because if you look at my home - the home in the global West used to be a private space. And now, because we embed connected microphones, it’s not. So there’s a phase change of some sort. And similarly, if you look at the public space: the moment we start tracking the movement of people through space, or we employ algorithms to make decisions about where to put the next fire station, or who gets to use which road, or who gets routed which way - that happens in public space. And unlike when I buy a connected gadget for myself, in public space there is no opting out, whether people want it or not. So just as the smart home was a phase change for privacy, the smart city, I think, is a phase change for who gets to control the public space and who gets to shape it. And I think that’s a profound change for governance.
Because right now, most of what we see - and this does not happen with any malign intentions, as far as I can tell - is that the companies that make the technology that connects public space are networking companies, data analytics companies, logistics companies: companies that know how to track the movement of physical things through physical space, and companies that know how to extract meaning from data. Those are the companies - the Ciscos and Googles and Huaweis of the world. They know how to network stuff, and how to extract meaning from the data that’s created. And when they transfer their knowledge from advertising technology into the public space, or from global supply chain optimization into the public space, they import along with it a certain logic of thinking about the space. And that is not necessarily compatible with what we know works for a rich life, or civic life, in the city. That’s the slightly heavy way to think about this. But really, the question is: who gets to control what’s allowed in the city? Is it the vendors of technology, or is it the citizens who live there? And what we see in practice is that the vendors are lobbying city administrations like crazy to just implement stuff, and then test it. But once it’s implemented, realistically, it’s never going to be taken back out. Once it’s in there, it’s established.
And so we’re at a really pivotal point right now where I think we can help shape the discourse of what we actually want to see in public space, and what we don’t want to see there, by establishing first participatory practices that say: okay, if we’re going to put something in public space, we’re going to have a process. It’s not just something where a vendor sells a city something without public oversight, or without very meaningful public oversight, and then everybody’s just stuck with it for like 25 years in an IT support situation.
You make the comment at one point in your writing: don’t leave something this important to a bunch of tech bros. You also made the point that technology is political.
Yes. And I think these two lines are very closely interrelated. I’m a political science and communication science major by background. So while I’ve never worked much in that field, I do think a lot about how things are political. And it always sounds like an abstract, late-at-night, red-wine kind of discussion. But really, all it means is: who has the right to change the rules? Who gets to decide what’s allowed, and with whom? What’s good for whom? How do we measure what’s good? With all these questions, there’s never going to be a perfect answer. But the quest of finding an answer and discussing this is really important.
When I refer to the bro culture, I mostly mean that a lot of the products we see out of Silicon Valley are made by young, fairly well-off men for their peers. There’s nothing inherently bad about this, but this is not a representation of the world; it’s a very narrow slice of how we go through the world. There are gender issues, there are control issues, there are racial components to this. But there’s also a wealth gap issue that’s always part of this. A lot of smart cities stuff, for example, is what’s called pay for place. Of course, it sounds like a fair argument to say: hey, if you’re going to use the service, you should pay for it - that’s fine. But on the other hand, there’s a good reason we provide some stuff for free, because we don’t want things to be available exclusively to those rich enough to afford them. There’s some stuff - and sometimes it’s more in some places than others - where we just think: this is really important, this is what civilization means, so this is going to be available for everybody. To a certain degree that starts with things like clean air; to some degree it starts with affordable public transport, which is available in some countries. I come from Germany, I live in Germany; public health care - not free, but very affordable - is one of those. Where does it start and end? That’s something every society has to decide for itself. But if everything works really, really well for young single men who are 25 and make 250 grand a year, it should also work for a young Hispanic mom who has to get by on 30 grand a year and may have three kids. Some things should not work just for fairly rich white guys.
I think one of the main reasons why things go wrong, or tend to go wrong, is because usually the sensor package and the action package get put into one thing: the organization that does the monitoring of traffic is also the same organization that then controls the traffic. And so that data never gets used otherwise. The interesting part of what makes smart cities, or the internet of things for the city, work is if on one side we have little machines, little computers, that generate data, show that data to people, and make it available publicly, anonymized past a certain point or whatever. And on the other side we have services that use that data and provide services based on it. As long as we keep this divide intact, we have publicly available data sources, and then the option to use that data and have paid services, free services, open source services, things made by enthusiasts and things made by new companies. We can have all the good things there - maybe even the bad things, but we can act on them. We have the option to say: okay, this company is doing bad things with data, we’re not buying there anymore, or we as a society decide to shut them down. But we still have the upside of being able to use the data. There’s a friend of mine here in Berlin working on a project where they are basically adding very simple sensors to elevators, with the long-term goal of making all this sensor data on the elevators publicly available, and then having a disability-friendly map of buildings: you know that at the train station the elevator is not working before you’re there, so you can take the train to the next station where there is a working elevator.
And that works because there is this divide between the sensors delivering data and the services that make use of it. And if we do that for traffic lights, for traffic control, for congestion, for air quality, noise levels, how many people are walking down the street right now - is it full? is it full of tourists or not? - and then allow people to make choices based on that data, with apps or other things built on it, that’s a whole other thing.
I don’t know how many of you read Wired magazine back in the 90s. On the last few pages there was always a column by Nicholas Negroponte. In one of those, he painted a dystopian and a utopian picture of the city. On every possible corner there was a surveillance camera, all the cameras collecting all the data and all the images from everywhere. In one city, all the data went to one entity, surveilling everything. In the other city, it was just open video data: anyone could, at any point, go to any camera and look through it. There was no central authority over it. Anyone could say, yep, I’m going to look at this camera right now, and could save the stream just in case, or not. Which of these cities would be nicer? They are both surveillance states, but one has some sort of crowd control on it, where people can look at the data themselves - even if the police are doing something wrong. And that’s where the power dynamic comes into play. And I think this is something that is really important for a public internet of things: that the data being collected should be available to the public.
Those are really important points, so thank you so much for sharing that. First, you touched on stopping to work with a company if they happen to abuse that data, or if they’re just not good enough. That’s a really good point. Currently, most smart city contracts are long, locked-in contracts, because it’s IT infrastructure that you can’t easily switch back and forth. And that’s a question of procurement that’s only solved at the stage of buying public infrastructure: defining terms of data portability and exit strategies, and making sure things aren’t too proprietary - there’s a really big lock-in there. And even if you have to use some provider’s stuff, which sometimes might happen, I think there are still practices to make sure you can move to a different vendor. So that’s absolutely important.
You touched on publicly available data - open data, strengthening the data commons, essentially - which I think is a tremendously powerful thing that we’ve only seen the tiniest first steps on. It feels like there’s a lot happening, and there is, but I think if you look back in 10 years’ time, it will look like the tiny bit of the curve where it just goes up a little bit before the big hockey stick. There is a downside to that too. Going back to what Negroponte wrote: we all thought like that in the 90s, and up until maybe fairly recently - that publicly available data also means an evenly distributed power dynamic. Only now do we find that if it’s publicly available, it can also very easily be abused by much more capable organizations. The New York Times had a fantastic interactive piece, maybe three months ago, maybe a little longer, where they tried to track individuals through cheap facial recognition software - I’m not sure if it was from Microsoft or a Chinese vendor like Huawei. They just hooked it up to publicly available video feeds around Union Square over a week and tracked people they could identify with very high confidence. If I remember correctly, I think they took the faces from company websites around Union Square as their sample data, ran them through the facial recognition software, and compared that to the video feeds. So there is an element where it’s very easy to start being really invasive with this. And I have no solutions for this whatsoever. Maybe no video cameras whatsoever - maybe the radical, Luddite approach is the only one that works. But I don’t feel comfortable proposing that. There’s interesting friction there, right? I think that’s maybe the best I have to offer: there are interesting trade-offs there, for sure.
But also, there’s recently a wave of new companies that have actually figured out better ways to do video recognition in public space, but do it with full respect for privacy: on device, they filter out faces, or the number plates of cars. Or they track footfall in retail spaces by only looking at feet - and somehow, through gait recognition, they manage to get fairly robust demographic data for retail spaces by only ever looking at feet, which, I would be kind of surprised if someone found a way to make that a horrible thing. That seems like a very sensible approach, if you can actually make it work. There’s a New York City based startup that collects traffic data - cyclists and cars in public space - and they automatically filter out all the faces and the number plates. So you get traffic analysis, but nothing ever leaves the device; it’s all processed on device. That seems to me like a really powerful point of leverage: you don’t have to trust the company to have good privacy practices later on if the stuff never gets collected to begin with. So that’s not criticism of your points - that’s a very clear yes-and. There’s a really interesting part about decentralization and the commons, kind of leveling the playing field here, that I don’t think solves all the problems. But it’s certainly a much better start than just having one company collect all that stuff and go crazy.
It maps directly right back to technology being political, because in order for that kind of benign vision to even be possible, it isn’t enough to have a democratic society. You have to have more than just that: you have to have an aware society.
In your writing, you talk about your little privacy dial - a little thing in your home that dials up how much privacy you want at a given moment. You have another example: a kind of fob you could carry around in public that would let you have some kind of control over your environment, which I thought was a wonderful idea.
Well, I’m always hopeful. That was a small piece of speculative fiction, part of an exhibit for the V&A Museum in London, where they explore the impact of tech on society. We just played around with known interfaces, trying to have them do things that give you control of your data. These were purely visual mockups; the box doesn’t do anything but light its LEDs, and the fob was literally some piece of metal on a keychain and some Sugru or something. But the idea was that, just like we know how to turn the volume up or down on an old hi-fi system, you could just say: hey, all my smart home devices go through this thing. And when I want my home to be as assistive as possible - in the mornings when I have to get ready for work, and maybe have kids running around or whatever - I want all the efficiency it can offer me. So I’m going to wind down the privacy part, but I’m going to let it do all the smart assistance and support stuff that it possibly can. Whereas in the evenings, I might sit down with my partner and just want to have a quiet conversation and might not need that level of assistance; I’m going to turn the privacy all the way up. I won’t need all the smart stuff - I’m just going to have my privacy now. All in a very intuitive interface. This model, of course, breaks down really fast when you try to map out what it actually means in context. So it’s really a speculative fiction piece, and a fairly simple one at that.
You all know the feeling when you’re invited to a friend’s birthday or something. In my peer group, my circle of friends, there are a bunch of people who really like internet tech, but also others who are real hardcore privacy activists. And so you always have a bit of this tension - which is maybe too big a word. But clearly there are different interests coming into the room, where some people want to share everything on Instagram and others don’t want their photos taken at all. Some prefer to check in on Foursquare; others would rather have, you know, a Wi-Fi blocker - whatever the range might be. And the idea with this fob thing was: what if we just have a thing on a keychain where we set a default, where we define what, generally speaking, I want. In public spaces, say: I don’t mind my movement being recorded, but I don’t want my face to be seen. In private space, at home, I don’t want anything recorded. When I visit someone I only know through two or three degrees of separation, I might only be comfortable having my presence recorded, but not audio - whatever the settings may be that match your personal preferences, you could just have them set. And when you walk into a room, the room would be smart enough to recognize and also respect all these wishes and fulfill them. Of course, like I said, that would mean, first of all, that all the infrastructure is in place - which, of course, it’s not. That every vendor, every app maker, every hacker agrees to these rules and respects them - which, of course, is a wish; I wish we lived in that society. And also, we would all need to agree that this is actually even something desirable, and that it needs to be respected. And then the next thing: we need to be able to trust the whole chain to respect that decision, and to work. And of course, this is never going to happen, right?
When we talked to these people who are not technologists at the V&A in London, it was really interesting, because people got the idea. The moment you mentioned both of these, they were like: oh yeah, I can set my stuff here, and then I can just walk in and the place knows what’s cool. That makes sense to me. I want to dial privacy up or down. At first there was a bit of a question around what a smart home even does, but then they were like: oh yeah, that makes sense. I want that. Can I get that? And of course, the answer is no, absolutely not. The moment you have the microphone, you kind of have to hope that Amazon, and all the makers of apps on the Amazon Echo, are trustworthy enough to respect your choices. Maybe they are; I hope they are.
So you have a smart city, and you set it up where maybe it doesn’t go all the way, but let’s say part of the way toward this. And no doubt the people in the government of Toronto - I mean, I wasn’t sitting in on the pitches, but I’ve been to plenty of pitches to government, to high-up people who don’t really use this stuff or completely understand it, and you dazzle them properly. They’ll say: oh yeah, okay, we want this, that sounds really benign and cool, and we’ll get that going. It won’t be exactly the better vision you describe, but it’ll kind of work. And the people running Toronto, you know, are pretty good people, I think, in their hearts. However, all you’ve got to do is get some people in who don’t feel that way, who flip a few switches, and suddenly you have a pretty dystopian situation, like I see in China. I saw 9/11 here in America in 2001 change our society very profoundly. Everybody was all freaked out, and in the end, during all that panic, all kinds of horrible laws got passed - at least horrible in my opinion - to essentially surveil us all in order to make us feel more safe. And then you get Obama, a guy who has a strong belief in a certain amount of civil liberty and personal control, who understands the technology, and who did have to engage with the national security state. I can only imagine what that must have felt like; I’m not going to second-guess the man’s political instincts. He dialed it back a little bit, but that structure was largely already in place. Okay, now we have Donald Trump. He and his people are completely comfortable with mass ignorance and a high level of surveillance, and he has millions of supporters who are perfectly fine with that. I’m reiterating the point that whether you like it or not, technology really is political.
When it really touches the most personal parts of a person like this, with a strong power dynamic - which, of course, government-to-citizen relations are - there’s a point I always make: it mostly doesn’t play out day to day the way it should, with the citizens ruling through the government. Rather, there is often a slight disconnect, where governments can come into power and might use all the established infrastructure and the data you collect for purposes other than the originally intended ones - often for nefarious purposes. Like I said, I’m German, I studied German history. There have been two states in less than the last 100 years that essentially ran, to a large degree, on large-scale data collection, spying on citizens, and absolute abuse of power, mostly at the expense of large minorities, often leading to horrible, horrible results: first the Nazi regime, of course, and then East Germany, which was also a big surveillance state. Compared to today’s level of surveillance, of course, that looks like peanuts, but it ran very efficiently. And so these are cases where we see infrastructure implemented with good intentions - but every technology that can be used for bad will be used for bad at some point. When you talk to security and privacy activists, of course, they will say the only really powerful, fail-safe way of data protection and privacy protection is not collecting any data at all - data avoidance. Once you install CCTV cameras in public space, it’s a question of time: first they’re going to use them to catch someone dumping trash, then to flag some bad parking, and along the way, of course, always to find terrorists, if you can - and often they don’t turn up anything. And so it’s a question of time, because at some point they need to justify all these costs against very low-level returns, and so they’re going to widen the mandate for the surveillance.
I want to go back a little bit; a few things were mentioned about this before. And I have to say, the reason why this is so important to me is because I work with journalists who are covering these stories, mainly about the problems with smart cities - for example, in China, where a minority is currently being subjected to genocide. I’ve posted this on the forum before as well: a colleague of mine writes a newsletter, and his idea of the smart city was always negative, because he focused so much on authoritarian regimes using it in a bad way - and yet he got all these Google alerts about how great smart cities are. I think this conversation is great to have, and needed, for us in the West and in the States, and hopefully it can push a more ethical and sensible way of using tech. But what can be done about regimes like you were mentioning just now? Germany itself, 100 years ago and then 50 years ago, used the surveillance state in a negative way to suppress people. How should we approach this, and what can be done? I mean, there are companies that are willingly selling their surveillance tech to make smart cities in China. What is our role in making sure that can’t happen? In your opinion, what should be done?
I am, at the same time, incredibly optimistic and pessimistic. Let’s start with the pessimistic part. The cat is out of the bag, the genie is out of the bottle; we will not be able to go back. Technology is here, and it’s getting better and better and scarier and scarier. Mass surveillance will be in effect, and we will not be able to get rid of it with technology, because whenever we build a technology that enhances privacy, someone will build a technology that negates that enhancement. More and more researchers are coming up with more and more ways to identify people: how they look, different biometric things - skin, fingerprints, mouth prints, whatever - it’s getting more and more. And we probably haven’t even scratched the surface yet on how to identify humans with technology. Data collection is getting easier and easier; if someone wants to spy on us at large scale, they will be able to do so. The only thing that has a chance in hell of stopping that is the social contract. That’s the only chance we have, and that’s where I’m going to be optimistic. I don’t know when we will achieve it, but ideally, I hope and believe that at some point we will come to terms, as humanity as a whole, with how to integrate this technology into our lives. I don’t have any idea of how that is going to look. But we have to realize that the concept of privacy we are living with right now - as in, we have separate bedrooms, we will not have sex in front of our children - is new compared to the whole history of humanity. I’m not saying this is bad or this is good, but we have this right now, and a lot of the concepts of what is private and what is not are social constructs that we have arrived at, sometimes quite recently.
Scandinavia, for example, especially Sweden, has very different expectations about how you treat your salary. In Germany, talking about your salary is kind of a taboo among colleagues; you don’t tell your colleagues what you’re making - often, by contract, you’re not even allowed to. In Sweden, by contrast, you can look up anyone’s salary in a public internet database. If you know the name of a person in Sweden, you can look up what they make per year. That’s a completely different concept, and both have advantages and disadvantages. I’m leaning towards the Swedish model myself, but on the other hand, I’m German, I grew up here, and this don’t-talk-about-money thing is kind of ingrained in me. We will have to come to terms with the fact that anyone can find out everything about us online, eventually. And that will have a huge impact on how we live and how we treat each other. In the best possible of worlds, this will mean that we will be able to accept each other more, that we will be able to forgive things - though not necessarily forget, but actually forgive. Right now we have this ‘right to be forgotten’ directive: if Google knows something about your past that you’re embarrassed about, you can tell Google not to show it in the search results; you have an actual legal right to that. The next evolutionary step would be: yes, I have something horrible in my past, but I have overcome it, and even though you know it, you will not hold it against me, because now I’m a different person than I was 5 or 10 years ago. I have no idea how we will get there, but that is basically the only chance we have as a species. To overcome this, we will have to adapt to living with very invasive technology, and we will have to adapt in a way, hopefully, that makes us better - as opposed to just controlling each other and using it as a means of oppression. We have to protect privacy by social norms.
There are a bunch of things wrapped up there, right? Because yes, I think part of it is social norms. We need to have the social norms, and then some hard guaranteed rights - I don’t know if the UN human rights should be set as the baseline, or something new that comes out of them, or something else entirely. But there needs to be a hard thing that you can go to court with, hopefully, if your country supports the rule of law. And then there’s the social norm kind, which is the slower-changing level, before it gets put into law.
So, one thing you mentioned: your colleague or friend worked a lot with the horrible implementations of smart cities in authoritarian regimes, but now he gets all these Google alerts for the great versions. One question that I personally find really helpful to ask when you see something like this - “it’s working well”, “this is great” - is: great for whom? And by extension, for whom is it maybe not great? Is it only great for the 25-year-old white dudes, or is it great for the weak as well? Is it great in China as well? That’s where I think you often get two interesting branches of the discussion. And so, how can we make this stuff better? Of course, I have opinions on what I’d like to see: a stronger focus on privacy, among other things. But more important than what I think should happen is that we have a process in place - and that’s where the policy part comes in - a process that is participatory and actually democratic. One that makes sure I’m not the person deciding this, because I’m one person, and I’m in a historically very dominant, very privileged demographic group; I should not be deciding these things. So I think that participation part is a really important one. In smart cities, I don’t think we see it as much as, say, in local governance.
I think the question that needs to be answered in general is: can we make it more inclusive? There’s this Wired video about the problems with AI and facial recognition, and how it’s super biased, and how this is a problem for gender minorities and ethnic minorities, because they are so much more likely to be misrecognized, and so they have a much higher chance of, for example, being questioned by the police or seen as a perpetrator even though they’re not. In this video, I think towards the end, it says the only way to change this is if we change the demographics of the people building AI, because it’s mostly white males - I don’t know the percentage, but it was something like 10% female. So that brings us to a second question, I think, when it comes to ethics and the internet of things: how do we increase this inclusivity? How can we make sure that we include all kinds of different groups?
This European Commission project, Next Generation Internet, Internet of Humans, is related to the document you participated in, the Vision for a Shared Digital Europe. I don’t think it has a formal relationship with this NGI project, or maybe it does, I don’t really know. But it’s very similar in what it’s going for, in describing how society ought to work, or could work. It has these pillars: cultivate the commons, decentralize infrastructure, enable self-determination, and empower public institutions. It also calls for replacing what it calls “the lens of markets” with these four pillars. That, to me, suggests that capitalism itself as it is currently practiced would actually have to change in order for such a thing to come about. Do you agree?
I cannot take a lot of credit for it. I love the project. It doesn’t have any formal relationships, I think, to anyone but the three lead authors and a few people who, like me, gave some input, because at this point it’s literally just a large document online, by activists from the Netherlands and from Poland, a few really esteemed senior colleagues I’ve been working with for a long time, who come from an open data background and an activist governance background. And you’re right, it’s a big challenge they’re proposing. They propose that Europe shifts away from seeing everything digital just as a market, which it currently does: all the digital development is framed under the banner of the Digital Single Market. In the European Union, that started as a tiny term in some obscure document that at some point just slowly started spreading until it became the lead framing for policy, the lens through which everything to do with the internet is looked at and funded, market, market, market. And they say: hey, that’s not bad, but it’s really myopic. The market should only be a narrow part of what happens online; the internet is much more than just a marketplace. It’s also the commons. It’s everyday conversations. It’s the tool that empowers communities. So essentially they say: how about advocating for a reframing of the internet within the structure of the European Union, not primarily as a market, but as a commons, decentralized infrastructure, empowered citizens, and empowered public institutions? The market is still a thing there, but it’s a much, much reduced thing. It’s not the thing that everything works for; rather, the market would work for these other four pillars.
And that’s a very powerful framing. I really, really hope that, just like the Single Market as a term started spreading from an obscure document across the whole of the European Commission’s language, this may do a similar thing, spreading from document to document until at some point it shapes the thinking, and hence also all the funding the European Union puts to work, which of course is a lot. That’s how the European Union makes change happen: through some laws and through some funding. Those are its main mechanisms of influencing change.
You mentioned that you see opportunities here for European companies. And I think that is part of what the Commission is looking for with this project, which is why they call it NGI Forward. Because we’re all pretty good at describing the problems, and we’re pretty good at describing what we want. The trouble is, as futurist Paul Saffo once said, never mistake a clear view for a short distance. But I think the opportunity here is to build products that embody these values, in the hope that people will prefer them.
If you look five years back, there are many reasons why Silicon Valley technology was incredibly dominant. For a long time it was just far superior to what came out of many other places, or better marketed, whatever the success factors were. But also on the global market, think about data and how companies extract meaning and value from it: Silicon Valley companies were at a clear advantage because they were much freer to do with their data as they pleased, as opposed to European companies, which have traditionally been bound by different kinds of data protection laws. And Chinese companies were maybe even freer in using this data, while having easier access to the Western world. That was five years ago. Fast forward to 2019, and we see a dramatically changed global landscape. First of all, Chinese companies have gotten so much better at working across the global West and global South; they’re much more international and global in their scope and ambition. That’s one part. Silicon Valley companies, for the first time, face significant pushback across all of the world for various reasons: in China because of geopolitics, in Europe because of geopolitics and privacy violations and data breaches, in the States because they’ve been providing the tools for election meddling, and all that. So there’s a complex mishmash of weirdness going on. But I think it’s also very clear that since the GDPR came into effect, which comes with significant global fines for certain uses and abuses of data, all of a sudden you see a much more level playing field for companies that are from Europe and stick to European data protection regulation. They are not at a disadvantage because they can’t use the data as efficiently; rather, they just happen to be compliant with one of the most powerful legal blocs in the world, whereas the others may get fined if they follow their own procedures.
I have strong opinions on it. Yes to data protection, but I’m also painfully aware that part of this is a horrible geopolitical power play, in a way that feels really unhealthy to me: we’re also going to leverage local industry at the expense of the others. But as long as it’s based on certain rights I believe in and values I believe in, I’m kind of okay with it, if that makes sense.
Maybe I shouldn’t be, but I really am. I would really like data protection to be stronger until we figure out what we want to get out of all this. And for the first time in probably 20 years, European companies are on equal footing in that space with the global players. One example I really, really like is a fairly small startup out of France called Snips (snips.ai). They make a voice assistant, like the Google Home or the Amazon Echo, but their version is fully open source. It comes with a pre-trained data set, and it doesn’t share anything back to the company. For them, that was a security and privacy decision. But by the time the GDPR got implemented, when all the other companies started scrambling and my phone wouldn’t stop ringing because everyone was trying to figure out how to make their data-hungry services compliant, which, frankly, I didn’t have an answer for, I could only say you probably can’t, for them it was privacy by design, as if designed for that particular law. They were having a big party, because all of a sudden they got validation, a policy framework that said: hey, what you’re doing is exactly what we want, you have nothing to fear, you’re good to go. It gave them a boost. They were one of these companies that were already trying to do the right thing by their users, and all of a sudden they got a leg up from the European legislator, in a way that I find really interesting. Of course, this is not an unusual move; every country has policies in place to boost its economy, and to boost certain technologies and certain markets over others. It’s just the first time I’ve seen the European Commission and the Parliament get serious and intentional about it, and not just defensive, in a way that I found really interesting.
I just want to say, yes, I think there is a powerful point of leverage to boost certain values and certain rights on the global stage and incentivize companies to build products based on these values.
One of your big projects during your Mozilla Fellowship was the Trustable Technology Mark certification process. For those who don’t know, you can go to trustabletech.org, where you can apply to get certified by answering a long string of questions.
And even if you don’t want to get certified, the questionnaire is so all-encompassing that it’s a good map of the landscape. I found it useful simply to read it. My question now is along the lines of what we were just talking about: have you seen any interest from the Commission, or the larger pan-European government, in this kind of certification process? It would be different from “oh, you broke the law and now we’re going to fine you”; it would be a kind of positive step. Are they interested in that?
Yes, we designed these questions specifically to look at all the aspects of product development, which is really what we’ve been building for a long time, slower than I hoped we would: an actual design guideline, so that by the end you’re just automatically there. You shouldn’t have to answer these questions in hindsight; these are all questions that aim at how you handle data, how you handle security, how you handle these kinds of things in a pretty holistic manner. They are all questions you have to answer one way or another during the design process anyway, only some companies don’t make it explicit. They just build something, and they’ve made a decision, but maybe they weren’t even aware they were making that decision. For legal reasons, by the way, it’s not a certification; I learned along the way that certification, in the German context at least, is a check of whether you’re compliant with the law. And we, like you indicated, did not want to just ask, are you compliant with the law? We wanted to ask: are you building products that really respect users’ privacy and users’ rights? Do you empower users and give them all the rights you can, or do you strip them of rights because you legally can? We didn’t want the baseline certification of “this is not a horrible product”; we wanted to find, identify, and give some external validation and reach to those products that go above and beyond in respecting user privacy and user rights.
Have we seen much interest in this? More from the policy side than the product side, because it’s hard to go back to a product that you launched, while you’re working on the next product release cycle, and try to map this onto your existing products. I hope we’ll see interest in the next year, when more companies come out that have been developed based on this. There has been huge interest from the smart city space in seeing what of this thinking can be applied there, and to other decision making in the public space. I haven’t seen that much feedback from the European Commission, to be honest. I’ve been involved in a lot of projects, I’m on all these panels in Brussels, I’m in the right conversations now, but nobody has invited us to see if we can apply this directly at a larger scale on the European level. That would be a really interesting conversation to have. I’m not sure if it’s possible, but I’m sure that our research surfaces enough potential issues and enough possible solutions to make a lot of stuff a lot better, certainly a lot more respectful of your rights as a citizen and as a user.
Christian is actively building a product that, as far as I know, very likely would qualify for that certification, which he’s building from scratch, or sort of from scratch. Rather than “we have to basically bug fix our product to comply with something” it starts from the premise that, as Christian put it on the Edgeryders site, social media is broken.
Let’s fix it. Though the only way to do that is to make things where the fix is in the core; you can’t just put a nice suit of clothes on it. A lot of why this stuff comes up is related to social media. Things started to change extremely rapidly once smartphones were widely adopted, and social media has overwhelmed so many things. The point I want to make is that I see it as a little bit related to the climate problem, in the sense that there are tangible short-term benefits, and the long-term problems are not so tangible, until they show up.
The whole GDPR thing is a perfect example of the social contract at work. When I said only the social contract can rescue us from the horrible internet, GDPR is one of those things. Something that is often overlooked is infrastructure. The internet is infrastructure; social media, up to a point, is infrastructure. Infrastructure drives civilization in the most profound ways. I don’t know if any of you read David Brin’s book The Postman; you’ve probably all watched the movie with Kevin Costner. Basically, it’s the story of civilization being restarted through infrastructure, because suddenly there starts to be infrastructure again, infrastructure for communication. That bootstraps the whole civilization again, puts people in touch with each other, makes them realize they’re a community. And that is what the internet is doing on a much larger scale. That’s the hopeful thing about it. If we protect it and see it as that sort of infrastructure, we can really drive forward. And if we build this infrastructure and maintain it with a view to protecting minorities, listening not just to the voices of those who are white and powerful but to everyone else, and build it in a way that respects everyone’s voice, then we can build a true internet of humans.
We’ve got the policy issues, we know the problems, or at least some of them. What, concretely, are we really doing about it? And I’ll bring the Chinese back into the picture here, because when you went to Shenzhen, the Silicon Valley of hardware, at one point you showed a fairly simple but nice-looking little smart lamp. Not really that outstanding, except the thing was made by a class of fifth graders.
Shenzhen is fantastic, an impressive powerhouse in the way they make technology. This is where essentially all the factories and all the global supply chains converge, from the silica mines in Africa to the container shipping and packaging that sends this stuff onward to Europe, to Africa, to Southeast Asia, and to a billion people in China. A giant chunk of that stuff is often produced in different versions for differently priced markets; hoverboards are cheaper in the Philippines, which is why, when someone imported the less secure, cheaper version to the UK, they started catching fire. So there’s this aspect where they are essentially the production line for the whole world for electronics, but they are increasingly also developing and researching new technologies at ever greater speed. And it’s a very intentional thing, a big government project. Everything to do with IoT, with making connected stuff, falls under the term Internet Plus, where they just say: we want to be a world leader in AI, in electronics, in connected devices. And then they add the resources to it. Not having a strong democratic mechanism behind it, of course, makes it easier to have centralized efforts, with all the drawbacks that brings, all the horrible drawbacks. But that’s why it’s possible to say: we want to be world leaders, so we’re just going to commit X billion a year to train X number of PhDs in AI research. They’re going to be trained by some of the best professors in China, and several hundred thousand, or however many it may be, study abroad at the most prestigious universities internationally. The scale is really impressive.
And like you said, that small example: we went to this small makerspace as a change of pace, essentially to see not just the factories and the R&D departments but also the community spaces. We saw the smart lamp, and it looked like a perfectly fine smart lamp. I thought, that looks like something Philips might produce; it’s a fine consumer product, really uninspired and boring, has motion detection, regulates up or down. Until the nine-year-old showed up who had made it. And I was like: whoa, okay, that’s kind of mind-blowing to me. It’s partly a generational thing, and the tools have gotten better and all of that, but still, it’s impressive. They had found 3D models and modified them, found code and modified it, plugged it all together. Is this groundbreaking R&D? No. But when I was that age, I’m not sure what I did, but it was certainly not designing smart lamps.
At a security and surveillance fair in London a month or so ago, a few of the major setups there were actually Chinese companies. One of the major surveillance companies has all these cameras, insanely good surveillance cameras that are all over China, but they’re also selling internationally to smart cities. And some other innovative tech companies from China that before would produce tech that was sort of a copy of Silicon Valley are now getting to the stage where they are making actually better things that are still cheaper. One example is my phone: I have a Xiaomi, quite popular right now. It’s a $100 phone, a smartphone, and it’s insanely good, super high quality. And obviously one of the questions, again, with my work focusing a lot on how authoritarian regimes use tech to suppress their populations, which is one of the things we focus on with some of the people I work with, is: how do we make sure that the tech we employ will make our lives better? On one hand, I’d like the GDPR conversation we just had to happen in China, but we don’t have that. So what can we use?
I don’t think I have any answers to this. China has this big public funding initiative called the Belt and Road Initiative, where they essentially take the model of the Silk Road into the 21st century. It’s a thing in Chinese politics to always have these cultural references, a good name for this stuff; so it’s the New Silk Road, essentially, that’s the metaphor they work with. What it means is a big chain of investments in countries around the globe, in public infrastructure and companies, lots of joint ventures, lots of research partnerships, at an enormous scale, I think disproportionately in the global South, in Africa, Southeast Asia, South America, but also in Europe and North America. I recently saw a map where journalists had collected all these investments; Germany has ten or so, and they’re things like: here’s a pilot project on smart lighting in a city, here’s a research partnership with a university. It all sounds pretty harmless and benign. And I really, really try not to be overly biased and to be very careful in my language. But there is an aspect of this where you see big Chinese tech companies, Huawei and the others and Tencent, investing in local infrastructure and networks and communication networks. And it’s tricky if those companies come from a legal regime where there is no hard firewall between them and a nondemocratic, authoritarian government. That’s highly, highly problematic if you rely on them for infrastructure, for sensors, for data analytics. I’m a little hesitant because, again, to me there are weird colonialist undertones here, and it does feel wrong to just say: look, can we really allow this or not?
In Africa, for example, we see a lot of Chinese companies going in and actually building amazing things for urban and rural life, for farmers; there’s a bunch of what seems to be benign, good stuff happening there, at price points that European and American companies will just completely ignore because it’s not high margin. On the other hand, we also see giant pieces of infrastructure being put in place. Lagos is one of the smart city projects, and I imagine all the fast-growing African cities will at some point face the question: are you okay with the trade-off of getting quick Wi-Fi everywhere in exchange for massive surveillance? Or is this a false choice that nobody should have to make? To me, it feels like this shouldn’t be the choice. But as long as I don’t see a better alternative, I’m not sure what to say. It’s easy for me to say the pure, right thing is better than the compromised thing. But that’s easy to say if you have fast internet.
I’m by no means an expert on China. But one thing that is true of China is that it is undoubtedly cracking down internally: if you’re in China and you try to be in the opposition, or be a dissident, or do something that is not officially approved, you will live dangerously and have a hard time, and it is horrible, and we should work against that. The other thing, though, is that if China invests in Africa, or generally in the global South, the motive must not necessarily be nefarious. Right now, China is gearing up to be a technology provider. What China exports, and bets it can export with a good profit margin, is electronics. Right now, Africa is a huge untapped market for that stuff. If they bring internet to Africa, if they bring technology development there, they create the market to sell more stuff. And that is probably a large reason why they are doing it, rather than “we want to export oppression and surveillance software and spy on people.”
I agree with you, and with what Peter said as well, about this post-colonial context. Look at us: the people on this call are all white, Western European and US. It’s a difficult subject matter to discuss. But yes, you are right, or I agree with you to that extent. There are definitely some ways, and this is not only China, in which Africa especially is very, very vulnerable to being used as a test case. Take facial recognition: there was just this major scandal in Kenya with a French company, Idemia, if I’m pronouncing that correctly, that initially worked with the Kenyan government to build a biometric voter database. But the data was sold on the black market, so there was this big scandal, and the Kenyan government said: you cannot operate here anymore. One of the reasons they actually went to Kenya was that the Kenyan government allowed them to test their technology and gave them a free space to do it, because African governments are really hungry to improve. So if they’re given the possibility to try out new things technologically, they are more eager to do it than a lot of other countries would be, because of their already disadvantaged position. And there was this other big scandal, I don’t know where it was, I don’t think it was Kenya, where they worked with Chinese companies to build these smart cities, and all the facial recognition data was being sent back to China for their databases. The point is that the people participating in this don’t realize that.
And again, we’ve come to the social contract, right? They don’t realize that their own personal data is being given to a foreign government, which doesn’t necessarily have to be a bad thing. But because these are countries already at a certain disadvantage, they’re more eager and willing and open to accept certain things without further thought, or understanding of what the consequences might be, especially for their population. So I do think this is an important thing. And again, it’s not just China; like I said, it was a French company as well. The global South in general is more vulnerable to this than we are.
I couldn’t agree more. These are really important points. Yes, there’s more need and more hunger to not turn opportunities down just because they have a risk attached. When we talk to government officials, especially on the city level in Europe and the US, they’re not necessarily in the same situation of need, but they have such a perception that they need to be seen as innovation-friendly that they also don’t want to turn stuff down, because it would seem anti-tech, anti-innovation. And they’re facing a lot of trouble: it’s not like the smaller cities have a big staff with the capacity to analyze these impacts the way a really big city does. Maybe San Francisco does, but Austin maybe already doesn’t, and Austin is a fairly big pro-tech city. And then there’s what John raised at the beginning of the call: if this stuff gets shared back to a government that might possibly abuse it, that’s already bad enough by many counts. The Geneva Conventions, for example, do not just forbid torture; they also forbid the threat of torture, because even if you aren’t tortured, expecting that you might be already changes your behavior. Not that it’s the same thing, but the same dynamic applies: if you know your data could be abused and shared back, that already has an impact on your life. The bizarre thing in all of this is that maybe some of these data sets are much more diverse than those of the Western companies, which is partially why the New York City Police is trialing facial recognition software by a Chinese company that is much better at recognizing darker-skinned faces. I don’t have a strong point here, except that there are a lot of weird things going on when we discuss these things, because it touches all parts of all our lives.
I was reading in some of your material where you were talking about neural networks and how AI comes into play in all of this, because they’re so profoundly different in the way they go about gathering inputs and outputting whatever their result is. The point was made that a neural network can output a result and you don’t know how it got there; even the people who made it don’t always know how it got there.
You showed this little video of all these little robots in a warehouse doing a much more efficient, better job of putting stuff where it belongs, and how that’s going to be the way it goes. For that kind of thing, it’s “well, I’m sorry, you don’t have your job driving a forklift truck now, because robots are doing it.” And that’s one kind of “too bad.”
But what about when you get this kind of black-box activity in a situation where you need to find out how you got there? That’s why I was thinking of elections when I read that: how did we get here? I was thinking about the intersection of IoT and AI. Do you have any comments about that? How do things move from IoT, which is reasonably understandable, at least to me, with little sensors feeding information to some central unit, not really that black a box if you want to look into it, to AI, where even the people who make these things don’t know how they got to the answer?
Right, and what went into shaping those decisions. These things start out being tested for some results, and they show good results, and so you say, you know what, it’s good enough, but you keep a human in the loop. At some point the human in the loop seems not to add a lot of value, so you cancel that line item in the budget, and all of a sudden you don’t have a level of recourse, or someone who understands how these decisions happen. We see this, with a certain level of transparency, in the way insurance premiums are calculated; we see it to some degree in the way policing algorithms work. Most of these stories are fairly horrible; I guess we only read about the ones that are. But we need to be able to really interrogate the way decisions are made. We don’t need to know the exact recipe, to use a metaphor everyone’s heard that I kind of like, but we do need to know which ingredients go into the decision making, so we can at least get to some level of accountability. We don’t necessarily need to understand exactly how it works, but we need tools and methods built in to interrogate whether the results are correct, if we are using AI for any decision of importance and we make that AI mandatory.
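One way to keep the “ingredients” visible is to have every automated decision carry a record of which inputs drove it and by how much. A minimal sketch of that idea in Python, with an invented scoring rule and invented weights, not any real system:

```python
# Hypothetical transparent decision function: each factor's
# contribution is recorded alongside the outcome, so a decision
# can be interrogated and appealed later. Weights and threshold
# are made up for illustration.
WEIGHTS = {"late_payments": -2.0, "years_as_customer": 0.5, "claims_filed": -1.0}
THRESHOLD = 0.0

def decide(applicant):
    contributions = {
        factor: WEIGHTS[factor] * applicant[factor] for factor in WEIGHTS
    }
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": score,
        # The audit trail: which ingredient moved the decision, and how far.
        "contributions": contributions,
    }

decision = decide({"late_payments": 1, "years_as_customer": 6, "claims_filed": 0})
print(decision["approved"], decision["contributions"])
```

A black-box model would return only the first field; the point made above is that the last field, or something like it, is what makes recourse possible.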
If the person who is potentially affected cannot opt out of the AI, then I need to be able to understand the process in its entirety, because if I’m not able to understand it, that’s a nightmare scenario. It’s really important that AI only gets implemented in ways where people can look into it afterwards. Part of the reason why Germany still votes so much on paper is that every person with a minimal amount of training can actually understand how the votes are being counted. If there is a machine, then you suddenly need special knowledge and special training to understand what is happening and to be able to check on it. It’s the same with AI systems: if an AI system makes a decision, I need to be able to understand how the system came to that decision, in order to make a meaningful appeal if necessary, or to understand how my behavior or my surroundings factored into that decision. If that doesn’t happen, it’s basically the same as if the king just says, kill that person, and I just have to deal with it; from the citizen’s perspective it’s the same result. And that really means AI must be made in a way that is transparent, also to uncover any hidden bias that gets built into the AI, where we end up with systems that don’t see Black people.
I think that is a really good point. In the meantime, we also need to make sure that there is a method for recourse built in. Full transparency is an aspirational goal, and I’m not sure it can ever be fully reached; so in the meantime, which might be forever, we need to make sure that if something smells fishy or something goes wrong, there is a method to address that and get someone to look into it, preferably a human. In the AI Now Institute’s annual report, they have all these examples: say, health insurance in the States changing the rules for who gets what level of care at home with certain disabilities, and all of a sudden someone is lowered from a full-time care person to a 20% care person, which might not be enough to get through your day in a dignified way. If there’s nobody who understands how the process ends up there, then you’re in deep trouble, because you have no level of recourse. And this force is never applied in a vacuum; it’s applied in horrible, commercialized, cost-saving contexts where all the safeguards get stripped off to increase the profit margin. That’s where things go really wrong.
It’s even worse. The thing is, this appeal process costs time, and it depends on who you are and in which circumstances you are. Say you get a parking ticket. You could just pay it and it’s over, or you could say, I don’t want to pay, I’ll hand it over to my lawyer. That implies that you have a lawyer and that you’re willing to take the risk. For somebody for whom the lawyer would cost more than the ticket, that implies a whole different level of wealth, so they just pay the parking ticket, because they can’t risk having it escalate into something bad. Now imagine it’s an AI handing out the parking tickets; it’s a computer thing, so it suddenly works at a much larger scale. With everything where humans are involved, there’s a limit on how often it happens, because it takes time, and if you want to scale it up, you need way more humans. If it’s a computer, you can do it a thousand times per second, and it doesn’t cost you much more. So suddenly the AI would give out parking tickets way more often. And then, to stay in this example, you would have poor people bombarded with parking tickets they cannot fight, and just saying “in case you do not like the result, call this number” is not enough.
Again, we need a fair playing field for everyone. We need transparency and all that. I think it’s not a coincidence that one of the organizations most urgently researching transparent AI is DARPA, the US military research agency, because they say: look, we do need algorithmic assistance in the way we handle our weapons systems, but we also really need to understand how exactly they come to their conclusions. For example, AI-based visual recognition of incoming fighter planes is much faster and more reliable than human eyes at a certain distance. At that distance you can’t even resolve what’s going on with your own eyes, while the system has already recognized the movement patterns as drones or rockets or planes. But before you shoot at anything, you need to really know that you are shooting at the right thing. And you can queue up all the examples of where that went wrong, with or without a human in the loop; there are more than enough examples of that. But I was at a conference listening to a fairly high-ranking military person describing in great detail how afraid they are of introducing too much AI into the process, because the risk of escalation and wrong decisions is so high. Even just getting to the point where the system gives a green light or red light for shooting a gun shows that you really need to know: is this reliable or not? And they insisted on always having a human in the loop and never having fully automated systems at that level, which gives me a shred of hope at least.
Either you or somebody you were quoting was describing how in chess, the computer beats the human, but when a computer and a human team up, they beat the computer by itself.
Yes, like the Centaur Chess model. That goes back to the Engelbart focus on augmentation over automation. I think that’s where it gets really interesting: when you don’t just replace humans by automating them away, but give them superpowers, the way that all technology does. It’s like the knife: the knife helps us cut things that we couldn’t slice with our hands. These tools enhance our capabilities. And that’s, I think, where things get really interesting, and where I’d like to see things going: where we augment the skills of humans rather than automate them away.
Thank you so much, Peter. It is an honor to have you with us. Christian, the same with you. And always a pleasure, Inge.
Thank you so much for having me.