A little while ago I had the chance to sit down with Jon Rogers for an interview. Jon is a professor of creative technology at the University of Dundee, and was just about to wrap up a stint at Mozilla Foundation’s Berlin office, where he had been on loan, so to speak, as a Mozilla Fellow working on their IoT program with Michelle Thorne. During his first couple of years there they ran the Open IoT Studio program, and they have since shifted their primary collaborative focus to OpenDoTT, an EU-funded PhD program about responsible Internet of Things (IoT), jointly run by the University of Dundee and Mozilla.
Rather than rehashing the whole interview (which ran quite long and you can listen to here) I wanted to highlight a few bits and pieces I found particularly relevant, slightly edited for clarity below. If any of these ideas resonate, please don’t hesitate to share your thoughts!
The P’s mark my questions and comments, J’s mark Jon’s answers. Settle in, grab a cup of tea or coffee… even the edited version is still a bit of a #longread. Here goes — enjoy!
P: In your practice, you come from a design research background, and these days you think about how that applies not just to physical products, but to connected products. What was your journey? How did you get there? You did engineering, you did some AI stuff, you did a lot of design research. How did you end up bringing it all together in this space around IoT-connected products and connected services?
J: That’s a good question. It’s something I’m constantly shifting focus on. Essentially, I’ve always, in my professional life, worked between electronic engineering, product design, and primarily interaction design. And actually, the thing that brought me into product design and interaction design was really that these all have electronics in them, especially if you’re thinking of the late 90s, pre-web as we know it. Effectively, electronics were the things in products that did the human interface and converted that into electrical impulses. Take a radio: you press a button, it changes the channel for you, and that channel change happens in something electrical. So it was humans and technology coming together. And I’ve always done that, flitting between them; at times I will emphasize more of the technology research, and at times more of the human aspect. Increasingly I’m finding, maybe because I’m getting older, that it is harder and harder to differentiate the fields of engineering and design. I think the lines are getting more blurry.
They’re getting way more blurred, because what the internet did was tell everyone they could make something. That was the spirit of Mozilla in its earliest days: everyone can make something. And you know what? People listened. Arduino asked, how can we do that? Mozilla was making things like Webmaker and other tools, and all this stuff got people making. So we were all told we can make stuff, right? And so we did. I started teaching that to students between design and technology studies. And computer science and engineering students would pick up an Arduino and realize that this was amazing.
And the same thing happened with 3D printing and rapid prototyping. These used to be engineering tools; they sat next to the CNC machine as another way of making engineering parts. But of course they slipped into art schools, and they slipped into people’s bedrooms, almost unnoticed. And suddenly the thing that was an engineering tool became a making tool. Suddenly, the people who make things are not just engineers; they’re all sorts of people. And I think, again, these hybrid bridging technologies are enabling people to do things in this new way. So there is a blurring reality.
P: You mentioned things slipping into art schools and bedrooms. The narrative of things slipping into bedrooms is, of course, also one you hear often when you talk about systems like Alexa and Google Home. That is a different kind of dynamic: they also slip into all parts of our lives, but they’re not tools for making. They are not creation tools; they’re consumption tools in the most benign reading, or surveillance tools if you’re less benign.
J: I think there’s a really big distinction to make here about tools for making. In the 90s, there was a big trend of bedroom DJs. And all the big names that now flood the circuit and play huge clubs all over the world, they all started in a bedroom, because there wasn’t a college for this, there wasn’t a formal way to learn it. They also went to art school. The bedroom is a space for making that’s yours: you own it, you control it, and no one’s going to tell you you can’t. I think that’s where these edgy, potentially paradigm-shifting things come from. It’s where Microsoft came from; originally it’s where Apple came from. And that’s very different from consumer products coming into the bedroom, which are, for me, not things that should be let in without serious caution. And I think we’re having the same thing with garages, or the extension of your home, or your local street. Suddenly, even just these last few weeks: where did all these e-scooters come from? And why was one parked next to my bike this morning? I couldn’t physically move the thing, and it was blocking my way. It’s really weird. And I saw kids bobbing around on these things in the park last night; they certainly weren’t over 14.
These things are happening in different contexts, in ways that we don’t know. You might say a scooter is mobility; I might say it’s a mobile mapping device. Hugely invasive. That thing knows stuff just from the behavior of how it’s been spun around or stopped. So that’s the reality: for teenagers it’s the park at night, and in the morning it’s an office district. These are different contexts and different ways of using scooters. Surveillance is now so huge, it wears so many clothes, that it’s impossible to see.
P: Like the William Gibson quote where the street finds its own uses for things, only here it’s surveillance capitalism that finds its own use for the streets.
J: Like the quote from Jurassic Park: nature finds a way. I think surveillance finds a way. And I think humans want to extend the reach of their sensors all the time. They want to do this for good, and they want to do it for harm.
P: So how do you explore these questions? There’s a bunch of inherent tensions in these fields, right? Between, say, convenience and privacy, and across many, many dimensions. It’s like a minefield full of potential friction. How do you explore that? I know that you once helped commission a video together with Superflux (Our Friends Electric) and the extended Open IoT Studio network, exploring different personalities, different roles for voice assistants. Can you explain a little bit how that works and why you were doing that?
J: Yes, there’s two things. One: from my perspective, a very powerful tool for understanding the implications of technology in the future is to give people as near to an experience of being in the future as possible. Now, that doesn’t mean putting on a VR suit and stepping into the future in some VR cave in a very expensive lab.
For me, it is storytelling. Science fiction writers have been doing this for the last 120 years. You brought out a William Gibson quote; we could bring out George Orwell. We have all these imaginative ways that people tell stories. And design, when coupled with filmmaking, is another very powerful way of telling stories. Designers can make props, near- or far-future technologies, that appear in a narrative. And it doesn’t have to be a blockbuster science fiction narrative, like lightsabers. These can actually be domestic, ordinary, rather dull futures, the kind of futures that we’re really going to inhabit. The future is going to be just as domestic and dull as today. The question is, can it be the domestic and dull that we want, or the domestic and dull that’s been forced on us that we don’t want? And that’s where these films come in: they can reveal concepts in playful, domestic ways, maybe with a little humor, because people understand more with humor.
What we wanted to do in that film was translate the research findings and complex conversations we had over five days on a research retreat in Bellagio. This is a really important process for navigating futures: ask experts.
So how do you get this mess of post-its, conversations, notes, and typed documents into something that is understandable? One of the people who came was Anab Jain, filmmaker and founder of Superflux. We asked her to work with us (us being the University of Dundee and Mozilla) to create a film around the future of voice, which is a topic we were discussing in Bellagio. So that film is an embodiment of some of the snippets, thoughts, and reflections we had during and after this five-day research retreat.
P: So you gathered a bunch of experts and identified issues, challenges, approaches. Then you had professional speculative design fiction people turn that into a more tangible thing, and the result was this video called Our Friends Electric. It showed essentially three different personalities of voice assistants. One was a pretty controlling kind of nanny that would always ask you to do things. Another was a kind of quirky, critical socialist or Marxist voice assistant. And the third was a voice assistant that would speak in your voice and could speak on your behalf, and you’d control its mood and attitude. If you had to call a phone hotline or something, it would negotiate on your behalf, and you could just give it basic parameters, like confidence and a friendly or angry attitude.
J: Do you want it to sound frustrated? Do you want it to sound relaxed? Do you want it to sound funny, or do you want it to sound serious? We wanted to embed all of these and make them human-controllable.
P: The one where you control the emotion, I found that really fascinating. The idea of having to be explicit about what emotion you want to display is the inverse of how we usually experience this, where oftentimes, I think, we’re surprised by our emotions rather than thinking “Oh, I want to be angry.” In this scenario, we instead flip a switch for the device to be angry on our behalf, so we don’t have to. That’s fascinating.
J: And also, if only you could turn yourself down, right? You know, like people say, tone it down a bit: you have a tone you control. That’s where that started to come from. And we also wanted the conversations to feel real. There is a term in English, saltiness, from the particular language of sailors. That saltiness is allowed to appear on the BBC after what’s called the nine o’clock watershed. So if the BBC made voice assistants, could you imagine the same watershed? Suddenly, after nine o’clock: “What do you want me to fucking play now?”
P: I really love this movie. What was some of the feedback you got? Were there surprising insights when you showed it to other people, in how they reacted?
J: Yeah, we wrote a research paper about that, published at CHI (PDF): what was it like to have this film out there for a year, two years, and show it to students, at design fairs and events, while things were happening in the real world? Six months after we made the film, Google Duplex was announced: a voice assistant that can make a call on your behalf. I guess we were six months ahead in a way we hadn’t expected to be. But what I realized straight away was: wait, Google Duplex didn’t inform the person it called that it was an AI. How terrible. That’s disgusting. They shouldn’t have done that. And that was the big debate. When that voice assistant was calling people, it wasn’t saying “Hi, I’m an AI.” It was saying “Hi, I’m a person.” That was wrong.
The way it said a lot of things was also in this sugar-coated humaneness. The politeness. It wasn’t like: Hi, can I make a reservation for seven o’clock? Wait, what do you mean you don’t have that? No, I want that. Come on. Come on. That’s a real conversation. Not everyone is polite and hesitant. I’ve been called by people’s personal assistants. The power relationship is really interesting. So again, we were able to investigate the power relationship between humans and machines, which we also did in this film. That’s so important.
Everything has an ideology. And what we wanted to show was that if you had a voice assistant, you might be able to adapt it to appear like a Marxist. But if it’s run by a capitalist company like Amazon, its only real purpose is to try and sell you things.
And that’s the surveillance economy coming back as the ad economy, a suggestion economy. There’s a new silencing. Machines’ ideologies need to be made really clear; in an age of programmability we need to be able to do that. And ultimately, machines will need to be accountable, right? Accountable to the people that created them. Not just robot toys for robots.
P: If you think in concentric circles: if you’re a user and you have a device doing a thing on your behalf, that’s one relationship, between you and the device, or the device maker. If it makes a call on your behalf, it likely already affects a third party who might not have consented. So already we’re establishing a completely new relationship. And then there are, of course, all the outsourced, externalized costs and hidden effects that we haven’t even touched upon. That’s a whole new rabbit hole and a huge, huge field that’s only slowly starting to get any attention.
J: And look at the way in which a blockbuster movie convinces you that its narrative is real, if only in a suspend-your-disbelief kind of way. Now, with deepfakes, we see this in a very different way. What was fiction is now able to do more than suspend your disbelief: the same production values that go into changing your mood during entertainment are now going into changing your behavior. Imagine those production values in the things that are coming at you, employing so many different techniques, embedded in different things. It’s really important for the next generation of the internet to really understand these very powerful forces, to start to name them. I don’t think we know them yet. We call them advertising, but we don’t know what they really are; they are new forms of control. We’ve known that we need to enable people to take back control: to be able to read, write, execute, and participate; to be not just consumers, but creators.
But these need balancing against trust, opportunity, responsibility, and ethics, and in a way that puts the person back in control. And that’s hard. When my kids were younger and I put them in the car, they loved being in control of the radio, and that’s fine; but I wouldn’t put them in control of the steering wheel. We’ve learned so much about dangerous systems in the past: flying, driving, manufacturing, where the safety standards are so much clearer, so much better understood, and have been adapted into human customs and behaviors.
Partly that’s because it was slower: bits of metal were slower to form, and communication was slower. The steam engine took a while to spread around the world. The infrastructure for train travel took a long time to build.
P: Cars took a long time to become even somewhat safe for the passengers, let alone the environment.
J: Exactly. The systems of control and manipulation grew up with them. I think what’s happened with big tech is that the systems of control and manipulation and law and lobbying and policy, the ones that enable your product to get to market and through government control, are as hugely sophisticated as in the automotive industry; they ported those same people into big tech. But what hasn’t been ported is the sense of safety, responsibility, workers’ rights. All of this has been jettisoned. We need a new understanding of what we have rightly left behind, in order to figure out what we have wrongly left behind. What do we need to bring back, and what new skills do we need for this very fast economy?
P: Have you seen any promising approaches? Or do you have a gut feeling for how we could even tackle these large-scale change problems? There’s a lot of talk, generally speaking, about social media, all these wrong incentives, all these public discourse platforms that lead to outrage rather than consensus, and how that completely undermines democracy and the democratic process. So the tools we currently have are not that strong, it seems to me.
J: There’s two pieces. One piece we’ve talked about: being able to make films that speculate about the future and tell stories about it in a very clearly produced way, built on expert views from many, many points of view, written into a narrative by a filmmaker and designed to offer many different lenses looking into the future. So we know how to do that.
P: So these are essentially objects that you can then coalesce a whole conversation around.
J: What we could then do is say: okay, first we use experts to create this vision of the future, but then we need other experts to critique it. The piece we don’t have, and this comes back to democracy and control, is this: we are seeing powerful emergent techniques around informed citizens, such as the citizens’ assembly. And another thing, which I found out about recently, is citizens’ juries: imagine 12 members of the public, chosen at random, who get to cross-examine experts. So could you imagine a jury of 12, or whatever number, who get to see these speculative films about the future, then interview the experts behind the films, as well as experts who present a counter-viewpoint? In the case of Our Friends Electric, you would show these films and they would be on trial; the future would be on trial. These 12 people, from all walks of life, completely normal, from all across Europe let’s say, would be able to interview the people who made the film and everyone who went to Bellagio. They would also be able to call people who would counter that. So maybe Google would be there, saying what’s wrong with this film and why their Duplex is better. And you could have someone from Facebook say: okay, it’s not the blown-up thing these researchers and the Guardian are making it out to be. And you’d have this thing on trial.
So why don’t we put trust in data on trial, and show what that looks like? Let’s show the crimes or the impact. It doesn’t even have to be crimes; it can just be: what does the future look like, in different visions of the future? And put them on trial before the problems have happened. So it’s a kind of predicted, not predictive, responsibility.
P: It’s participatory prediction.
J: It’s about future responsibilities and how we in the EU or globally can deal with them.
Autonomous vehicles are going to arrive just like the scooters did, and everyone’s going to go: where did these come from? We read about them and we knew they were coming. But we didn’t know they were coming next week.
P: And even if we did know, it doesn’t matter unless there’s a way to actually participate in the decision-making process.
J: In the old days, the cinema showed the news before the film. Could you imagine if you said: okay, you’re coming to see science fiction, you’re highly, highly motivated by futures, and we’re just going to show you a five-minute film beforehand. Then everyone gets to vote on the future with a few questions. Could you imagine how that would be as an experiment? Ways of putting things in front of people that start to use people’s time more wisely.
P: Another example that surfaces this issue is Aadhaar, the Indian centralized government ID based on fingerprints. It frequently doesn’t work for people who do manual labor. The fallbacks for when it fails, like iris scans, are a big problem for people with cataracts. But also, some will say: well, I don’t trust it, I just want there to be a person to talk to. And of course that also doesn’t exist. Like so many fallback plans, they always get scrapped. It’s like: oh, we have a shiny new system, so everything now works perfectly. But then, if anything doesn’t work entirely perfectly, everything goes belly up. Everything comes to a grinding halt.
You mentioned that the next generation is really important in this, and you started this PhD program called OpenDoTT. What’s up with that? How does it work? What’s your goal with it? Because to me, the way you frame these PhDs is a pretty groundbreaking new thing.
J: I think it comes down to a simple, maybe naive observation of the world. This was a response to a call about identifying gaps in training, where you need seriously strong academic leadership combined with industrial experience to go into that gap. The gap in training I can see globally, but one way Europe can address it is by closing the gap in training on how we design for trusted objects within the internet. We don’t know how to design for trust. As designers, we might know how to design for reliability, for speed, for people with poor eyesight. We know how to design for so many things. Yet we don’t know how to design for trust.
So we realized that this isn’t something you’re going to tackle by just putting a 10-week course into a bachelor’s program; it’s a graduate program that we needed for this intensive training. And that’s what we applied for, and won the funding for. We’re really looking at trust and the internet across all the different physical scales at which the Internet of Things will be happening. On one level it’s on or in your body. Then it might be in your home, around your home, in the bedroom or the kitchen; friends share things in homes, so the home is important, as is the body. Then there’s a kind of informal network and community level, where we can say: we want some technology here. And zooming out again, at the biggest scale, you have the city, the so-called smart city, infrastructure and so on, where things arrive in a different way. And all of this is mediated by trust, which we don’t yet even know how to read, visually or otherwise. So we have designed a program for designing for trust.
P: That’s what I really appreciated about this: the framing, that’s what I found really interesting. But you also take a clear stand based on values. This is a very opinionated program, in a way. It’s not just about building sensors, building IoT stuff, but about making it trustworthy. That in itself is such a strong frame.
J: We all have heroes and heroines. One of mine is John Thackara; he’s had a lifetime, 50 years of experience, in journalism and design and technologies and big systems. And he said recently, in a magazine article: for all these incredible sensor-based and open-source technologies that are emerging in the world, how much difference are they actually making to people? The icebergs are still melting. And disasters are crashing in even faster and closer. So while we’re sitting there measuring, and being all very self-congratulatory about these citizen science approaches, they’re not making a difference. They’re really not. If it was working well, we would have been able to slow this stuff down.
I want to relate this to another piece, which is trust. The next thing, I think, is that the Next Generation Internet (NGI) has to align with all the other work that’s going on around climate change and humanity. Is this a human-centered Internet of Things, or can we make it humanity- and planet-centered? Let’s not do an iteration, let’s do a paradigm shift. The Internet of Things is going to keep going regardless. But let’s not just make it more human; let’s make it serve the planet.
We already have a framework that’s been discussed for years and years through the Millennium Goals, which have now become the UN Sustainable Development Goals, the so-called SDGs, 17 of them, carefully planned so that by 2030 the planet will be in better shape. The pushback I will offer is: isn’t the Next Generation Internet going to be here before the next generation arrives? It’s got to be compliant with the SDGs, and how much work has been done on that? As far as I can tell, absolutely none. So designing for the SDGs has to be the priority area for the Next Generation Internet. Trust, plus the SDGs.
P: That’s a really strong framing, because it makes it more easily relatable to policy, which I think is always a huge thing. I see people now moving from the product itself to: oh, actually, we need to go upstream, always one more step. What’s the economic model and the policies around it? What’s the regulatory framework? That might be the peak of that particular mountain, the peak of the stack, where you go: okay, look, if it doesn’t fit in the framework of the Sustainable Development Goals, then maybe it’s not a thing we can be doing.
J: On the surface, I know, the sharing bikes look like something that meets sustainable development, because they make for cleaner cities and cleaner energy than, say, a car or a bus. But actually, if you really look at them in terms of leaving no one behind, this model leaves everyone behind except the company owners. You can no longer repair these things. We both have bicycles, and I know, without looking at your bike, that I can take the saddle off mine and put it on yours. If your brakes failed, I could take the brakes off mine and put them on yours. If your tire needed fixing, I could use my puncture repair kit. All you need is a good bike dealer; you could recommend the repair shop around your corner, and they would fix my bike without having to check which model it is. The sharing bikes do the opposite. No parts are interchangeable. You can’t change the saddle, the brakes, or the lights. So when one of these bikes goes bust, you have to throw everything away. So they are not meeting the Sustainable Development Goals. But they’re also not meeting the Next Generation Internet goals. They’re not reliable or trustworthy. We have no idea what’s happening with our data, no idea what’s happening to the communities that use them, no idea what the app connects to.
P: But we do know that none of them put a premium on privacy and data protection, and we do know they don’t do the same for sustainability. We do know that none of them make a point of, say, minimizing external costs by paying fair wages, and we do know they don’t have a sustainable business model. That’s why you see these explosions of bike sharing companies that then all get gobbled up by one. It’s the purest version of the downside of the venture capital model.
J: The problem is not that this is happening; that’s just a consequence. My problem is that the only counter-movements are protest movements; there’s no leading by example. If we can have this Next Generation Internet, then let’s have a set of values as defined by the UN Sustainable Development Goals. And then let’s start to create these stories and compare them, through citizen assemblies or citizen juries. Let’s have a debate about this. You could say: here’s a film made about e-bikes by Lyft or Uber or whoever makes them, and here’s a film made by researchers and designers as a counter-narrative. Don’t just leave them as opposing Twitter and Facebook bubbles; bring them together, critique and analyze, draw the best bits out, and use policy to regulate and inform. This is not about eradicating these companies. It’s about more informed, evidence-based ways of making change.
P: I read this amazing thing from Deb Chachra, who in her newsletter just drops this one line: “sustainability always looks like under-extraction when you compare it to extraction.” There’s so much in that sentence. If it’s sustainable, it’ll just not be as in-your-face as the hyper-growth model that we’re so used to. That I found really interesting. A maybe slightly slower, more humble approach might not seem like a fantastic business model that scales from here to Mars. But it’s something that we could keep doing without limiting future generations.
J: We’re trying to bring these back with other forms of resilient sustainability, and ways in which we could do this, and we still haven’t got it right. Farming is still a place where we probably haven’t taken in the lessons of history, and of course this is affecting wildlife all over the planet, because these huge farms have been grown for machines to harvest and marketers to sell, not for people to eat and animals to live with. Again, flipping the framework: if all farming met the SDGs, everyone would be fed, because animals and the environment would flourish alongside farmers. And we have to look at technology development in the same way.
P: So if I try to sum up the key takeaways: the Sustainable Development Goals should be the guiding framework for everything that happens, the Next Generation Internet and beyond. And participatory futuring, if you want to call it that, participatory processes for evaluating futures, would be the foundation that helps guide us within that framework.
J: I think it’s one of many useful tools for understanding. I’m not saying this is the magic fix. All I’m trying to propose is that this form of future storytelling, involving designers and filmmakers, is one incredibly powerful tool to add to the many ways in which we academically and professionally evaluate the world around us and propose futures. So I want it to be given due credit in a broad-spectrum approach. Because it has to be, right? No one method is going to find this future. It’s just not going to happen.
P: Is there anything else you’d like to share?
J: The main thing I do want to stress is that I don’t think pursuing technology standards first, as I have seen in the current NGI approach, is going to do anything. I really don’t. I think it is a really problematic approach. What we need is humans first, and the planet first. Then find out what standards and cultures and practices fit together in order to exist in that world.
Humans will find ways of being more human. But if you put technology first, humans will find a way around it. Always.
All these futures exist. The future is a sphere, not a line, and it can go in so many different directions. Why don’t we just choose the ones we want to go toward?
The other thing about the future, which is really, really important, and which I’ve talked about recently, is this. There’s a park next door, right next to the office, and I know a particular oak tree right around the middle of that park. If I tried to describe that oak tree and asked you to navigate to it, you would probably get to the right part of the park, and if you really studied my description you’d get to some oak tree, but you’d have maybe a one-in-a-hundred chance of standing at the right one. However, if I took you to that tree and said, find your way back to the chair you’re sitting on in this office right now… no problem.
My argument for this speculative approach is: let’s take people there, and ask them to walk back and see the consequences for where they are now. Because it’s always easier to get back from the future to now than it is to blindly navigate into the future with just some advice from experts and standards.
P: Thank you so much for making the time. Thank you.
Full disclosure: I’m married to Michelle, I’m an industry supervisor in the OpenDoTT program, our non-profit ThingsCon is a training partner there.