Why is all this innovation not being channeled into ways to help people live better lives?

I’m a Dutch guy working in Brussels at the Foundation for European Progressive Studies (FEPS), a left-leaning think tank. Before that, I was at the European Commission, where I worked for five years on competition law. I’m close to home and “far” away at the same time: far enough to skip the occasional birthday, because it still counts as living abroad.

In my spare time I love reading; I could actually live in a library. Besides that, I love to dance and I enjoy playing the violin.

My main focus at the EC was competition law and European regulation, especially concerning the internal market. Questions like: how should we address big companies from a regulatory perspective? How should we deal with transport sectors as monopolies?

When I started my studies in International Relations, I was always looking for some truth, but scientific truths don’t really exist in IR. It’s very theoretical, but the theories don’t really apply, and everything changes when unforeseen events happen, such as the end of the Cold War.

To me, this wasn’t very satisfying. That’s why I decided to study law: there are rules which you can apply, and you get to a certain point where you find an answer that makes sense. That’s also why I ended up focusing on regulation. Unfortunately, I never really found the “truth,” because in law, the courts argue one way or another: it’s not completely consistent. The European court consists of several judges who actually don’t agree on many things, and the outcome turns into a judgment that doesn’t make a whole lot of sense. I decided to focus on competition law in particular because it was relatively new and still kind of exciting; it’s not that established, and there’s still room to develop new doctrines. But I guess I’m still looking for certain truths in a post-truth world.

When I was working at the EC, I always thought that there was someone at the top who knows how things work, that there’s a grand strategy on how it all fits together. I never found that person.

I started as a trainee at the application of EU law unit. That sounds a bit abstruse, but it means that we were essentially checking whether the member states complied with EU law. Quite a lot of directives come out each year, and member states have to transpose them into their national laws. This is kind of a black-box area. We come up with all these laws at EU level and then think the job is done, but a huge part of the story is how a law is transposed, how it’s implemented, and whether it’s enforced. A good example of this is the General Data Protection Regulation. It’s not that different from what already existed in 1995, the directive on data protection, but that directive was simply not enforced. So, how can we do better? How can you make sure that the law doesn’t just exist on paper? What actually works? This is a major problem for the Commission: they don’t have the staff or the resources to really ensure that the directives are enforced to a significant extent.

After that, I moved on to the digital single market. I guess I got the position because I was the only person under 30 with a smartphone. Okay, that’s maybe a bit too reductive.

When it comes to the EU’s digital strategies, many people don’t really understand what they’re about — this also includes me, by the way. We looked at the digital sector for a long time in terms of competition. We started with Microsoft in the 90s and how they sold a pre-installed Internet Explorer browser on the Windows operating system. I arrived much later at the Commission. We looked at copyright laws, at the digital single market strategy and its 2017 review, at autonomous and connected cars. We tried to define what kind of legislation we needed and to bring it to life in a broad manner. Take audiovisual media services: we came up with a new directive two years ago. It was quite broad, but a lot of it was linked to economic law, and then to what we call digital law. This included portability and geo-blocking, all from a regulatory perspective. But I think this is very problematic. We wait to see what comes and then we try to regulate it, instead of articulating a more positive, proactive vision of the transition we want.

I’m not neutral or unbiased about this. But if we look at the history of the European Commission, when it started in the 50s and 60s, the progressive idea was the future: to create an internal market, to take power away from Member States and create one common market. The idea was that it would allow for political integration and would prevent war: a nice progressive society where everybody would be happy. It’s a very good story. But now we’re in a different environment: we’ve created this huge internal market, but the political structures are not there. And for me, this is a key driver of inequality.

It’s kind of a neoliberal model where they let the market do what it has to do, and then maybe, if there’s some kind of narrowly defined market failure, the EC steps in. That’s the case with geo-blocking, for example, and with many copyright laws. With competition, obviously, you’d like to restore perfect competition, but that hasn’t really worked. It’s a very fragmented approach, and in a way a very reactive one. I understand that law is important, and we should have it. The General Data Protection Regulation is, I think, a step in the right direction, but it can’t be the only thing. You also need to govern. And for me, that means steering investment into a specific technology or a technological application that you want: using public procurement and all your buying power to steer it all in a better direction: environmentally friendly, interoperable, etc. We’re not doing that enough, because the Commission isn’t willing to intervene in a market like that. It’s also difficult because of the treaties and the four freedoms: the free movement of people, capital, services, and goods.

One of the reasons I left the Commission is that it is quite a hierarchical organization. This sometimes makes it difficult to get things moving. There’s little room for personal initiative, especially when you’re not high up, and it can take a very long time to get higher up in the Commission.

Creating Tech for “Good”

That’s why I was excited to join a smaller team, where you can have more of an individual impact. I was hired, essentially, to take on the digital portfolio, and my main task is to steer the digital transition in a more progressive direction. This means using digital technology as a driver for young people’s aspirations, to reduce inequality, and to create a more participatory democracy. In other words: how tech could be used for the good of the people. One of the items I focus on is data governance. This means access to data, which is extremely important in areas such as smart city development: a lot of new digital infrastructure is being rolled out, but it’s not very accountable. It’s often driven by narrow efficiency concerns.

There are a lot of different motives for people to behave the way they do. It’s not just market concerns; it’s also solidarity, sociality. And the systems don’t take that into account. Governments often don’t understand the systems, especially at the local level, and they don’t have access to the data that’s being produced. They outsource the entire management of the systems, including the key criteria, to private parties. I think that’s a problem. And I think that if we don’t take a very close look at how we design this infrastructure, we will be locked into long-term contracts, with very expensive systems that will decide how people live together in a city for decades to come. We have a real opportunity to make this more participatory and more accountable, but we have to take it. The same goes for the debate about the automation of jobs: what kind of work will there be in the future, and how can we help people find meaningful, remunerated activity in the decades to come? What does that look like?

Many public authorities, and people working on policy — including myself — don’t really understand the systems. But because of austerity and budget cuts, they’re very sensitive to the narrative about cutting costs and efficiency. They believe the systems can solve everything, but they don’t actually understand them. That’s why they outsource it without really asking the difficult questions: what exactly is the problem you’re trying to solve? Can we solve it by just collecting data ourselves and coming up with some automated system?

A lot of those AI systems essentially feed on existing data. Say we feed an AI system with data on occupational roles and the jobs that people have. It will train on data showing that a lot of managers are male and a lot of cleaning personnel are female. If you use that data for a labor-market application, women will be more likely to get ads for cleaning jobs, not managerial roles. The system has learned that this is the most “efficient” match, because that’s the pattern in what we already know.
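
A minimal sketch of that mechanism, with invented toy data (no real system or dataset is referenced here): a recommender that simply learns the historical gender–job pattern will reproduce it when targeting ads.

```python
# Toy sketch: a "recommender" trained on historically skewed job data
# reproduces the skew. All numbers are invented for illustration.
from collections import Counter

# Hypothetical historical records: (gender, job) pairs from a labor
# market where most managers were male and most cleaners were female.
history = ([("male", "manager")] * 80 + [("female", "manager")] * 20 +
           [("female", "cleaner")] * 90 + [("male", "cleaner")] * 10)

def most_likely_job(gender):
    """'Train' by counting: predict the job seen most often for this gender."""
    jobs = Counter(job for g, job in history if g == gender)
    return jobs.most_common(1)[0][0]

# Ad targeting now follows the historical pattern, not individual ability:
print(most_likely_job("female"))  # -> cleaner
print(most_likely_job("male"))    # -> manager
```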

When we talk about predictive policing, it has been shown to steer officers toward the neighborhoods that are less well off: the chance that people get caught is higher where the patrols are. So you create a self-reinforcing loop. If this is how we build systems, by just importing existing data, they will penalize minorities and poor people, who become even more disadvantaged. It’s a crucial problem.
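
Here is a toy simulation of that loop, again with invented numbers: if patrols are dispatched to wherever the most crime has been recorded, and patrols are what create records, a one-incident head start hardens into total dominance even when the underlying crime rates are identical.

```python
# Toy simulation of a runaway feedback loop in "predictive" patrol
# allocation. All numbers are invented; no real system is modeled.
recorded = {"A": 11, "B": 10}  # neighborhood A starts with one extra record

for day in range(10):
    # Dispatch the patrol to the neighborhood with the most recorded crime.
    target = max(recorded, key=recorded.get)
    # Crime occurs at the same true rate in both neighborhoods, but only
    # the patrolled one gets a new record added to the dataset.
    recorded[target] += 1

print(recorded)  # {'A': 21, 'B': 10}: the initial gap feeds itself
```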

I read in a recent report that the Austrian labor department started using AI to decide how to spend the resources for reskilling and training the unemployed. Given the criteria, they would focus mainly on the people who had the best chance of finding a job. What happened was that the people who needed help the least actually received the most resources, whereas the people with the least chance of finding a job again were left out: they weren’t considered an efficient investment. You need to understand how this tech works, because the institutions using it often have certain targets. Probably the central authority tells them, “you need to help this percentage of people for this amount of money.” The outcome, however, is that the people who need the help most get it the least, and the other way around. I found that quite instructive about the kind of thinking that’s happening at a more general level.
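
The underlying allocation logic is simple enough to sketch. This is a hypothetical reconstruction, not the actual Austrian system: rank job seekers by a predicted re-employment probability and fund from the top until the budget runs out.

```python
# Hypothetical sketch of "efficiency-first" allocation of training slots.
# Profiles and probabilities are invented for illustration.
job_seekers = [
    {"name": "recent graduate", "p_reemploy": 0.9},
    {"name": "mid-career switcher", "p_reemploy": 0.6},
    {"name": "long-term unemployed", "p_reemploy": 0.2},
]
budget_slots = 2  # assumed target: fund only the most "efficient" cases

# Rank by predicted chance of re-employment and take the top of the list.
funded = sorted(job_seekers, key=lambda s: s["p_reemploy"], reverse=True)
print([s["name"] for s in funded[:budget_slots]])
# ['recent graduate', 'mid-career switcher']: the person furthest from
# the labor market, who needs the training most, gets nothing.
```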

What I’ve seen when talking about technology is that we don’t understand it. When I worked at the Commission, I was also involved in some IT projects. There was no oversight; deadlines slipped and costs overran incredibly. And there’s just a sense of helplessness, because you don’t understand it.

We outsourced something which is not directly related to AI, but which I found quite instructive. When I worked for a certain organization, we had a specific problem: a complaint form. We wanted to lessen the workload for ourselves, because we got too many complaints and couldn’t handle them all. We needed to find a way to make ourselves more efficient. Under the guise of automating things, the form became less accessible: we made it more difficult for people to complain. Yes, we put the complaint form online, but we also created a number of technological hurdles. The result was that it was actually harder for people to complain; we restricted the options to complain.

And this is also part of the digital environment: you can design everything exactly the way you want it, in a way that you can’t in the physical world. But you can also inadvertently make things more difficult. For me that was very instructive, because the narrative is always “we put things online, we make it digital, to make it more accessible, closer to the citizen, more effective and more efficient.”

But I feel that the underlying trends are what matter with technology. It’s not the technology itself, it’s the socio-economic environment. It’s often austerity: “We need to cut costs. We need to cut budgets.” And market logic is permeating the public sector. The result is often that nobody is held accountable for the solutions.

You see this a lot in healthcare and the social care sector, where everything is digitized: you can’t actually talk to a person anymore. There’s often a binary logic: you can click this or that. But often your case doesn’t fit those options, and then you’re left out; there’s no one to complain to. And it means reduced access. You see it a lot in the US, where, once procedures are digitized, the number of people who are able to claim benefits drops drastically.

Tech and economics

Unbiased data doesn’t exist. There’s also a bias in the criteria these systems optimize for. The key problem for me is always the logic behind the systems: not just AI, but the entire data structure we’re building. It’s linked to the fact that we essentially try to automate humans.

There are actually very interesting parallels between neoclassical economic thought and the cybernetic influences that drive the digital revolution forward: the idea that humans are binary. If you look at neoclassical economics, it’s just about individuals; there’s no such thing as a society. These humans respond to stimuli: they have a certain preference set, and they try to optimize it. It’s a very reductive view of what human beings do, how they think, what they are. And if you look at the cybernetic movement, that thinking is also behind computers, the internet, etc. It’s based on an individual logic, a methodological individualism, and it’s very reductive.

Data is a simplification of the real world. And that’s exactly the same with neoclassical economics: it simplifies things. You have a hypothesis about how humans behave, and then you build an entire theory on it, which influences policy-making. In the end, people end up playing by those new rules and behaving in a more simplistic way. I see it when I’m on Facebook: I sometimes feel automated myself in the way I use language, for example. In a way, we’re becoming automated. And I think that’s the key problem.

We never regulated a space that we actually could democratically control. With the libertarian origins of the internet — and the fact that it became really crucial in the 90s, the height of the neoliberal period — we never did anything about it. And that means all the potential of those technologies is being used by conservative forces that already have power. For them, it’s not interesting to ask how we can prevent people from reoffending; they only want to know which people would do it again, so they can automatically lock them up. For me, this is linked with a conservative bias. These technologies are being used to amplify the logics which are dominant already.

You won’t get the right questions, because to get the right questions, you need a new set of people; you need a more participatory style of governing technology. And this also means making it more local. These technologies are run by major monopolies that actually govern the infrastructure of cities and communities around the world. They don’t know their culture, their language, their interests, their problems. How can they offer an infrastructure — a system that will supposedly help people resolve issues — if they don’t even know them?

One of the key examples is how Facebook operates. They pay a lot of money to engineers to come up with a massive one-size-fits-all system, whereas people should actually be able to govern it themselves, for example the information they would like to receive. But that’s not what it’s for; it’s selling us products. People have to check the information that’s put online, whether it’s illegal or not, and this work is outsourced to people in the Philippines who have to make split-second decisions about what would be appropriate or offensive to a community they often don’t know.

Solutions

I think some basic steps could be taken in the right direction, as we would in other areas, such as the car sector. Before a car comes to market, it has to comply with quite a lot of safety requirements. The same goes for the construction industry. Somehow, for physical infrastructure and for cars, we have these regulations, but we don’t have them for digital infrastructure.

Let’s say Facebook releases a new algorithm on a billion people: we could introduce certain requirements. I think that’s kind of a basic step. You have some key criteria, some transparency, and documentation requirements about how the system works and what it’s trying to maximize.
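
As a purely illustrative sketch of what such a requirement could look like in practice, a mandatory disclosure might be a structured record filed before deployment. The field names below are my own assumptions, not drawn from any existing or proposed regulation.

```python
# Illustrative sketch of a pre-deployment disclosure record for a large
# algorithmic system. Field names are hypothetical, not from any law.
from dataclasses import dataclass

@dataclass
class AlgorithmDisclosure:
    operator: str                # who deploys the system
    purpose: str                 # what the system is for
    optimization_target: str     # what it's trying to maximize
    training_data_sources: list  # where the training data comes from
    affected_population: str     # scale of deployment
    audit_contact: str           # whom regulators can question

disclosure = AlgorithmDisclosure(
    operator="ExampleSocialNetwork",
    purpose="rank items in users' news feeds",
    optimization_target="predicted engagement (clicks, watch time)",
    training_data_sources=["user interaction logs", "content metadata"],
    affected_population="roughly one billion monthly users",
    audit_contact="algorithmic-oversight@example.org",
)
print(disclosure.optimization_target)
```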

Some may argue that this would stifle innovation, but I don’t think we have to worry. We saw exactly the same arguments against the General Data Protection Regulation: companies said it would kill innovation. That’s dramatic. Now other countries have started legislating in a similar fashion, and Apple has made privacy its competitive advantage. I don’t think that would have happened without the GDPR.

It’s a bit dubious for companies to say, “we can deliver everything frictionless, we do things that you have never seen before,” but when we ask them to explain how their system works in order to protect certain important principles, suddenly it’s completely impossible, it’s “beyond their capacity.” I don’t believe them; it’s just not in their interest. And you see that across the board.

For me, “innovation” by itself means nothing. It just means new stuff, and new stuff can be good or bad, useless or useful. If innovation means coming up with better ways to exploit us online and ruin our privacy, that isn’t innovation. Innovation abstracted from context and from important values, like human experience, doesn’t mean anything. I’d like to see technological applications that make sense for us, that make our lives better.

A big question is how to come up with standards in Europe that don’t undercut industrial production by pushing companies to outsource everything. One idea is introducing something like a carbon tax. It may be controversial, but we need to have this discussion about technology too. Europe is a very important consumer market, too big for most companies to ignore. We saw this with the GDPR.

That’s why, ultimately — and perhaps this is radical — certain key services, like the way we communicate, talk online, or gather information, perhaps shouldn’t be completely private. Companies will try to avoid the law where they can, especially because a lot of these companies are global; they don’t even have that cultural link.

99% of people passively comply with a law because it corresponds to their cultural values and social norms. If nobody complied voluntarily and there was only enforcement, the law simply would not work; you can never have 100% enforcement. In the digital sector, too, it’s ultimately about shared social norms. That’s what makes laws effective, but it has to be based on something which already exists. This is more difficult in an international environment, because a lot of companies don’t share those norms; a lot of companies don’t give a crap about privacy or data protection. This introduces additional challenges for the effectiveness of law.

That’s why I think public authorities need to be more actively involved and have more ownership of these systems. That doesn’t necessarily mean at the level of the state or the European level; it can be at the local level. Just outsourcing everything to the market and then trying to control those developments will be very imperfect, and it will come too late.

Take the way we communicate and talk to each other, for example: that is crucial for societies, and in the past it was never fully privatized. When the US became independent, one of the first things they did was make the Postal Service a public service, because they understood how important it was for the national community; it had strategic value. Similarly, for TV we have public channels, must-carry obligations, and strict limits on advertising, because governments understood this is important. This is what binds societies together. This is how they share values.

My hope is that people are more willing to accept technological change if they have a say in decisions. For example, at the company level, people could be involved in designing the criteria for AI systems. If they understand the purpose and the meaning, they are much more likely to accept the technology and work with it. That’s what I would like to see.

We don’t know how technology functions. We don’t know why it’s being used, by whom, and to what effect. Why do I get shown certain ads? Why does my Facebook feed look the way it looks? Knowing that would already matter. Start with basic transparency, accountability, and economic democracy, so the people affected by those systems have a say in the way the systems are built.

We never decided what we want to do with technology and how we democratically want to shape this transition. I think we can do that, and I think then we could see a lot of benefits. We’re at a crucial paradox: we have all these technologies, and a lot of them have proven that they work, but they’re just not really delivering benefits, especially in areas that matter for how people live. I still have to work from nine to five, even though we have all this flexibility; we’re just monitoring it more. I still have to drive to work, and I’m still stuck in traffic — there are actually more traffic jams now than in the past; we move more slowly. We pay more for the same houses. Why is all this innovation not being channeled into ways to help people live better lives?

Post a thoughtful comment below ahead of the workshop on AI & Justice on November 19!

This will help us to prepare good case studies for exploring these topics in detail. It will also ensure participants are at least a bit familiar with one another’s background and experiences around the topics at hand.


@Neel, @J_Noga is also going to be at the AI & Justice workshop. How do these examples of EC law-making and AI connect to your points about a cautious, non-media-frenzy approach to AI in healthcare?

Wow - @J_Noga - it took me a while to get through this set of remarks, this treatise - almost a manifesto. So many good points I can hardly keep track.

Let’s say Facebook releases a new algorithm on a billion people: we could introduce certain requirements. I think that’s kind of a basic step. You have some key criteria, some transparency, and documentation requirements about how the system works and what it’s trying to maximize.

This is what they are not going to tell us, no matter what. To them, this is like the recipe for Coca-Cola. But it is the thing we most need to see, because otherwise a more honest title for the business might be RatMaze. You can poke around here and there, but ultimately you are going to go where they want you to go.

You spoke of the “self reinforcing loop” up there. I was thinking about this same thing through many of the paragraphs prior to you naming it. You also named part of the problem that contributes to this loop when you said that government doesn’t understand the technology, or words to that effect.

I think part of the reason why this is true is that government workers and tech people are usually two different kinds of people. Neither the government worker nor the private-sector person (usually an engineer) is smarter than the other. Rather, the private-sector programmer or engineer tends to be more of a risk taker, and the government person tends to be more stability-seeking.

I know this might seem like an oversimplification, and maybe it is. But what informs me here comes not so much from a government perspective as from years I spent managing a digital media project for a large privately owned newspaper and TV company in the San Francisco Bay Area during the height of the dotcom boom of the 90s. “Privately owned” meant that I could not offer stock options for my “startup” (which was a very cutting-edge media project for that time), only good salaries, benefits (important in the US, where there is no government healthcare), and stability. So I had to seek out those kinds of people. Almost all the seriously inventive programmers and project people I knew - and I knew a good number of them from my years in online community management - wanted the higher-stakes poker tables of Silicon Valley, where their work could bring them the chance of big paydays but they could also be on the street at any time. These are the makers. The stability seekers tend to be more comfortable mediating between the makers and the rest of society in one way or another. That keeps the public happy and wanting the stuff. So tech has a big self-reinforcing loop going right there.

Same with cybersecurity, where often the ones who commit the crimes wind up working for the government after they pay their debt to society, so to speak, and are thus watched all the time anyway. My point is that, in addition to all the other complicating factors, the dynamic between these two types of people/workers is part of it too.


Wow, a great post. It definitely opened my eyes about a few things.

Regarding the paradox of EU laws not being implemented properly and the lack of cohesion at the EU level, I just had a really funny experience:
I am building a project in Croatia, and I wanted to speed things up. So I first checked with the bank of my Croatian company whether I could get a loan for constructing some houses etc. They said no, as all my current revenue is in my Belgian accounts (I also have a company in Belgium and have been a freelancer for some time).
Then I checked if I could do it with my Belgian banks, as my revenue is here. Well, they said it’s impossible, as the project is in Croatia.
It was so confusing, as both countries are in the EU, and I couldn’t understand the issue. Considering it’s a construction investment, just finishing the construction would bring the value of the property to double the loan I was asking for… I considered that very safe.

They explained to me that there are systems in place, but only in theory; they don’t work properly.

Also, if you look at it logically: if I really wanted to cheat in some way, I certainly wouldn’t run from Belgium to Croatia or vice versa. I would go to the Dominican Republic, or Thailand… somewhere that actually makes sense.

The part about innovation and responsibility, with a special focus on the IT sector, also struck a chord with me. I wrote about it here as well.
We really need to take a good, hard look at technology. It has grown so much in such a short time that nobody has really adapted or figured out what the consequences are. For me, IT has huge potential, but we aren’t using it properly yet.

Very interesting examples of AI use; if you don’t mind, I will quote you a few times in my new article :slight_smile:
I am writing now on the subject of AI use specifically and the dangers of seeing algorithms as the best solution for everything. What really triggered my response was seeing more and more news about AI being used in the most improbable ways. Like using it to solve the Japanese issue of way too few children being born, and more and more single adults just not being able to find or keep a partner. It is a deeply human issue. It requires a very critical look into various aspects of socioeconomic conditions in Japan… I highly doubt AI is the best solution for reducing the number of humans drifting away from their humanity.
