I live in Brussels, work in Delft, and have strong connections to Berlin where I lived before. In my day job I’m an academic at the university, which means that I do research. I just joined the faculty at Delft University of Technology.
The many strands of research I do are driven by the question “how can we do computer science differently?” The path towards it has, on the one hand, been working on privacy enhancing technologies: if information and communication technologies come with associated risks and harms (for example, they enable a certain kind of surveillance), how can we use the same technology to protect people from that surveillance, or how can we design that technology differently? This kind of research requires a very interdisciplinary approach, looking both at conceptions of privacy and surveillance and at what communities need.
Most recently, I’ve started working on online services, studying them also as a way to show how software industries have changed. In my current work, I’m trying to conceptualize how software is developed today, what kind of political economy software companies are functioning in, and how that affects the kind of systems they develop. I then tie this work to what it means for computer science and how we can do computer science differently.
Outside of my day job, I work with artists, activists and civil society in Brussels. I’ve been working with a feminist artist collective called ConstantVZW. They do a number of projects and organize different events around free software, artistic practice and collective digital practices. For example, they are currently holding a work session on “collective conditions”, which focuses on how to deal with the way in which collectivity and technology are imagined. They invite participants to imagine protocols of care, alternative futures, differences etc. that are layered on top of the protocols that drive current-day services, and that could allow our collectivities to exist in the extractive socio-technical world that we currently live in. The collaboration with Constant and other groups is an important part of my work: I work with these different communities as a way to think about and develop methodologies for a different computer science, a different technology, or a different life together with technology. This work builds on previous engagements regarding migration, gender and anti-racism in Europe, and digital rights in Turkey.
Roots
I was born and raised in Turkey. I already tried to go abroad when I was just a child. My family didn’t really have the means for such a wish, but I was always searching for possibilities to go abroad. Part of it was because I wanted to study international relations. If I had stayed in Turkey, I would have become an engineer, because I was very good at math and sciences. At that time, I loved math, but I was sure I didn’t want to be an engineer.
I managed to go abroad. I went to the US. I was there for five years, and then I got a scholarship to go to Germany. I stayed in Germany for 13 years. But somehow, math and engineering did not leave me. So, I decided to combine them.
My undergrad was in international relations and mathematics. In my final report before graduating, I looked at the way in which game theory was used during the Cold War. John von Neumann, among other things a famous computer scientist, actually believed that it would be rational for the US to strike first with nuclear weapons, i.e., he believed in a first strike. He bolstered his argument using mathematics, specifically game theory, in order to rationalize and deem “objective” a very devastating political position. As a young person interested in international relations and at home with mathematics, I was very intrigued by the use of mathematics, which at the time I also thought was absolutely neutral and objective, in the pursuit of devastating imperial politics.
Maybe some of this interest was due to the time my family and I spent in Sofia, Bulgaria when I was a kid. Bulgaria then was a communist country. As a kid, I was quite impressed by communism vis-à-vis how Turkey was doing at the same time. We were there because my father was appointed the Turkish military attaché. Turkey was and kind of still is an ally of the US. This had consequences for our life in Bulgaria, and in a sense, the Cold War really colored our lives. Growing up in the Cold War and, much later, going to the US and seeing the US framing of the same war forced me to try to grasp what the nuclear arms race was about and how we got there. My field of study helped me understand it better.
Later in my life, when I became interested and involved in the feminist critique of computing and mathematics, I came to understand better the many ways in which mathematics is not neutral. People believe numbers are neutral, or that mathematical concepts or abstractions don’t have political relevance. However, every time you use a mathematical language, let’s say to model the world, you have to make some assumptions. The numbers may make it seem as if things expressed in formal languages are neutral, but in fact, as we see again and again, this is not the case.
Prisoner’s Dilemma’s Problems
The prisoner’s dilemma, a commonly used example of a game in game theory, was subject to much research at the RAND Corporation, a US military think tank. In the prisoner’s dilemma, two prisoners can snitch on each other, with punishments distributed in such a way that snitching is always the better individual move, yet if both snitch they end up worse off than if both had stayed silent. The objective was to demonstrate that “rational beings” might prefer not to cooperate even when cooperation would leave everyone better off. One of the main researchers involved in developing theories around this game was John Nash. The legend goes that he and his colleagues asked secretaries at the RAND Corporation to play the game. But there was a glitch: the secretaries would cooperate instead of choosing for themselves. So, Nash simply dismissed them, calling them unfit subjects of the game. It was the same game, the Prisoner’s Dilemma, that was later used to rationalize that the US would be better off striking first.
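To make the structure of the game concrete, here is a minimal sketch with standard textbook-style payoffs (years in prison; these particular numbers are my illustration, not RAND’s):

```python
# Illustrative prisoner's dilemma payoffs: (years for player A, years for player B).
# Lower is better. The numbers are textbook-style examples, not RAND's values.
PAYOFFS = {
    ("silent", "silent"): (1, 1),   # both cooperate: short sentences for both
    ("silent", "snitch"): (5, 0),   # the snitch walks free, the silent one pays
    ("snitch", "silent"): (0, 5),
    ("snitch", "snitch"): (3, 3),   # both defect: worse than mutual silence
}

def best_response(other_move: str) -> str:
    """What a purely self-interested player does, given the other's move."""
    return min(("silent", "snitch"), key=lambda my: PAYOFFS[(my, other_move)][0])

# Whatever the other player does, snitching is individually the better move...
assert best_response("silent") == "snitch"
assert best_response("snitch") == "snitch"
# ...yet mutual snitching (3, 3) leaves both worse off than mutual silence (1, 1).
```

This is the sense in which the “rational” strategy produces an outcome nobody would choose together, and also why the cooperating secretaries broke the model.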
The story around the Prisoner’s Dilemma is a good example of how the model makes assumptions about humans being selfish and always after maximum gain, an assumption that percolates through economics, social choice theory, etc.
The game, used as a model of the world and our behavior in it, has a lot of appeal. It is simple, with lots of assumptions folded into a very compact form. Once you have such models running, you can start making a lot of statements about the world. You can then build on top of these models, building systems that manage life at scale. But with that, you also expose a lot of people to the potential violence that comes from these assumptions.
Linking Politics and Computer Science
I got a scholarship and moved to Berlin to study computer science. To be honest, it was more of an economic decision. A lot of my friends from college had moved to Silicon Valley during the first tech bubble at the end of the 90s. I was self-funding and on student visas that had run out. So I thought, okay, maybe I should study computer science as a way to get a job, earn money and apply for a visa to go back to the US.
In Berlin, I started working with feminist computer scientists. During my studies, I focused on security and privacy. That work led me to meet people like Andreas Pfitzmann, a cryptographer who had moved into the field of privacy enhancing technology. He and his colleagues were coming up with new theories of security. For example, they argued that the objective of a computer scientist is not to secure the system, but to build systems that secure the interests of the different actors who are using and are affected by that system. And that really appealed to me.
I became interested in the question of the different stakeholders of a system and how they have different security interests. Those interests are what we should be enabling, rather than protecting systems for their own sake. That’s how I got into privacy technologies. This was also politically very interesting. A lot of security comes from the military domain. Businesses employ security as a way to secure the assets of the company. So, by realizing we should be securing different stakeholders, we were able to start acting on security and privacy in a very different way.
Privacy was often in juxtaposition to security: it was about protecting the users from the service providers themselves. You would see user needs, instead of “assets”, as the driving force of your designs. That is how I got to the general question in my research: is another computer science possible? Let’s say a computer science that doesn’t assume that our job as computer scientists is to secure the assets of companies or governments, but one that secures and protects the needs of users and their environments.
A Changing Technical Landscape
I was only back in the US between 2013 and 2016. When I talked with developers there, I noticed that the way software was produced had completely changed, and that the software engineering I had been taught at university was, basically, no longer relevant. Let me try to explain what the differences were.
Until the end of the 90s, it was very typical that software came in a box, which included floppy disks or CD-ROMs that contained a copy of the software that you could install on your personal computer. This box was the product of a very specific way of producing software as a product, using “waterfall methodologies”. In waterfall methods, you start with the requirements from the clients; once you have those, you go to your developer team and start turning those requirements into a specification and a design of the machine to be developed. This whole process was slow, taking one or two years, and after that you would emerge and pass what you developed back to the client. You would test it with the client to see if it worked. And, ideally, it all worked and you were done.
Or, for example with Microsoft, every two to three years a new release of the operating system would be offered. They would go through a similar cycle for each version, and come back in two to three years with a new version of the operating system, or of other software like Microsoft Office. Once it was ready, you would release the code and put it on a disk to be sold. Once code was released, changes were difficult, so that release date was a big moment and the end of a production cycle.
Today, we do not buy boxes, but open up a browser window to access most software. We access Facebook, Google search or Google Docs, and a host of other “services” instead of installing them on our computers. With services the code is on a server. The user connects to the server to get the functionality. This is what happens with a web browser.
This subtle shift has completely changed how software is produced and how the software industry functions. As a result, the political economy of these companies has transformed, turning them into very different kinds of investment objects.
For the last five years I have been trying to understand how this shift has enabled different kinds of software engineering processes, in which developers don’t close themselves up for two years and come back with some software, but instead incrementally and continuously develop software as a service.
As users we notice the difference in many ways. When you’re using an app, it doesn’t function when you’re offline, because most of the code is somewhere else. Since a release now means releasing to a server and not to a packaged box, the developers have the ability to constantly update and incrementally develop the software. And since most user interactions are also communicated to the server, the developers can observe users’ behavior and tweak the design to get better results. With this feedback loop in place, the industry can now optimize software so that they capture and manipulate user behavior in a way that is aligned with business interests.
For example, an app might give three or four functionalities to its users, and the developers will then continuously improve on those functionalities. It might be a feature, it might be a button, or it could be the ability to make a call. And they constantly refine that feature based on the feedback they get from how the user uses the service. Compare that to Microsoft Office on your personal computer: back then, Microsoft didn’t have that information. They had little clue what users were doing with it, except when complaints bubbled up on online discussion forums or on support lines.
With Google Docs, by contrast, Google can watch every click, holds all user documents, and can capture all the desirable and undesirable ways in which users interact with their service. Service providers can improve the software using this feedback, and in the process also optimize users’ behavior in a certain direction. You can orient the users towards the interests of the company.
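As a schematic illustration of that feedback loop, here is a generic A/B-test sketch of my own (not any particular company’s pipeline; the variant names and the click-rate metric are hypothetical):

```python
import random
from dataclasses import dataclass

# Sketch of the server-side feedback loop: show a feature variant to a slice
# of users, log their behavior, and roll out whichever variant best serves the
# chosen business metric. Names and numbers are illustrative only.

@dataclass
class Variant:
    name: str
    clicks: int = 0
    impressions: int = 0

    @property
    def click_rate(self) -> float:
        return self.clicks / self.impressions if self.impressions else 0.0

variants = [Variant("old_button"), Variant("new_button")]

def serve_one_user(user_clicked: bool) -> None:
    """Assign a variant to a user and record the interaction on the server."""
    variant = random.choice(variants)
    variant.impressions += 1
    variant.clicks += int(user_clicked)

# Simulate some traffic; in a real service this is live user behavior,
# observed without users seeing the experiment.
for _ in range(10_000):
    serve_one_user(user_clicked=random.random() < 0.1)

winner = max(variants, key=lambda v: v.click_rate)
print(f"roll out '{winner.name}' to everyone")  # the metric decides, not the user
```

The loop can run continuously because the code lives on the server: each tweak is released to everyone at once and its effect on behavior is measured immediately.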
And this is how I started to understand that we are no longer looking at information and communication technologies, but we’re looking at optimization systems. These are systems that are optimizing features depending on user behavior, and optimizing that behavior based on the interest of the company for the extraction of value.
So, my collaborators like Martha Poon, Carmela Troncoso, Bekah Overdorf, Joris van Hoboken and others have been asking: if these systems are no longer about information collection and knowledge, which come with the associated problems of surveillance, but are about optimization, what are their associated problems? And how do we protect the users and their environments?
The Hidden Truth of Google Maps and Other Apps
Many of us use Google Maps, or Google-owned Waze, which allows you to beat traffic. If you’re on the freeway and there’s a traffic jam, Waze can recommend an alternate route. Waze will say, “Okay, get off the freeway, I’ll push you through the surface roads, so that you can get from A to B faster.” In doing so, it claims to optimize travel time for its users. But by routing you through surface roads, it is also putting more traffic onto roads that cannot deal with that kind of congestion. If only a few users do this, it works well. But if a lot of users start doing this, you actually increase congestion for everyone. This is something traffic engineers have called the price of anarchy.
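The classic textbook illustration of the price of anarchy (Pigou’s two-road example, my choice of illustration, not Waze’s actual routing logic) makes the effect easy to see:

```python
# Price of anarchy, Pigou-style example (illustrative, not Waze's algorithm).
# Two roads from A to B:
#   - freeway: fixed travel time of 1, regardless of load
#   - surface road: travel time equal to the fraction x of drivers on it
# Selfish drivers all take the surface road (it never looks worse to any
# individual), so everyone's travel time ends up being 1. A planner would
# split the traffic to minimize the average travel time.

def average_time(x: float) -> float:
    """Average travel time when a fraction x of drivers takes the surface road."""
    return x * x + (1 - x) * 1.0  # surface users face time x, freeway users face 1

selfish = average_time(1.0)                               # everyone self-optimizes
planned = min(average_time(x / 100) for x in range(101))  # best coordinated split

print(f"selfish average travel time:     {selfish:.2f}")  # 1.00
print(f"coordinated average travel time: {planned:.2f}")  # 0.75 (half on each road)
print(f"price of anarchy:                {selfish / planned:.2f}")  # ~1.33
```

Each driver is individually no worse off following the recommendation, while the average outcome for everyone, users and non-users alike, gets worse.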
Waze basically keeps its users by promising them the ability to optimize their path, the time to travel from A to B. It optimizes all of its services so that the users are happy. If the user base grows, it has more location data from people, which can then be used to build new services and, overall, keep investors happy.
In services like Waze, the app is the product of layers and layers of managerial and mathematical forms of optimization. And, at the same time, the app helps optimize asocial selfish driver behavior while congesting surface roads, increasing congestion overall, causing extra traffic expenses for municipalities that now have to deal with hundreds of cars using small roads, increasing accidents, etc. Waze and other optimization systems do not “know” things, but sense the world and give information to users in order to co-create new geographies and behaviors for the extraction of value. They maximize the extraction of value by externalizing costs onto others, like non-users and their roads and cities.
Waze is very useful in showing how optimization systems function in the world, and in demonstrating the externalities they produce. My collaborators and I are trying to wrap our heads around these optimization systems: how do they function? What are their externalities, and how can we protect users, non-users and their environments from them? We have an initial list: optimization systems typically favor optimal users, don’t care about non-users, don’t care about their environmental impact, externalize the costs of errors and experiments, etc.
Let me explain what I mean by the latter: you can experiment with services in a way that you couldn’t with shrink-wrapped software. Take the Waze app: maybe there’s a road near a congested freeway that might be an interesting alternative. However, Waze doesn’t have any users going down that road, so it can’t tell whether this road is congested or good to recommend. To find out, it sends some of its users down the roads it doesn’t have enough sensory information about. Let’s say it doesn’t want to risk it with its “favorite” users, so it may prefer to recommend this route to users that are less optimal for its business. The route may turn out well, or turn out to be congested or blocked, a risk externalized onto that selected user. These are examples of methods of exploration and experimentation that allow service providers to optimize their services, while externalizing risks and costs onto users.
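A deliberately simplified sketch of this kind of exploration (an illustrative epsilon-greedy strategy of my own; the route names, the exploration rate and the notion of a “high-value” user are all hypothetical, not Waze’s actual system):

```python
import random

# Sketch of exploration in a routing service: the risk of an untested route is
# borne by whichever users the service chooses to "spend" on exploration.
KNOWN_ROUTE = {"name": "freeway detour", "estimated_minutes": 22}
UNTESTED_ROUTE = {"name": "side road", "estimated_minutes": None}  # no sensor data yet

EXPLORATION_RATE = 0.05  # fraction of eligible users sent down the untested road

def recommend_route(user_is_high_value: bool) -> dict:
    """Pick a route for one user; exploration is steered away from 'favorite' users."""
    if user_is_high_value:
        return KNOWN_ROUTE            # protect the users the business cares most about
    if random.random() < EXPLORATION_RATE:
        return UNTESTED_ROUTE         # this user absorbs the risk of a bad road
    return KNOWN_ROUTE

# The travel times reported back by the explorers become the "sensory information"
# that lets the service decide whether to recommend the side road more widely.
```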
I want to be careful here and not fold everything under optimization, because then we might as well talk about capitalism rather than optimization, but I think these are new technological practices that deserve their own critique. At the same time, it looks a lot like capitalism, too. There is something about the way in which the services are deployed in places where they are expected to bring a return on investment, leaving behind many other places. Only when these companies are pressured to expand their markets do they start deploying their services elsewhere. However, it may not be with the same quality, or, like Facebook did in Myanmar and Bangladesh, without any support for the local languages. It has to do with a logic of extraction, of value, driving how you deploy services.
Don’t Call Them AI Accidents
When we started this work on optimization systems and their externalities, we were not sure if we should call these risks and harms externalities. We looked at AI safety papers and realized their authors call these harms and risks “accidents”.
We decided not to follow in their steps. These are not accidents. These are deliberate ways of scaling technologies to extract value, while externalizing certain costs onto others. In fact, a lot of the technologies are externalizing risks onto other people.
Let’s take Uber: normally, you would pay people for the time they work. Now you basically pay people only if there are customers. This means that you externalize the risk of not having customers onto individual workers. What you offer them in return is an optimization system that gives them enough information to judge when they are going to take that risk. Uber tells its drivers there will be more customers between 3pm and 5pm. The system is also very attuned to its price optimization. Basically, Uber is constantly putting its drivers into a situation where they are provided with information so that they can manage their risk. This is how Uber is designed to work. Having seen many examples like this, we intentionally moved away from calling these negative occurrences ‘accidents’.
There is something to be criticized in our work, which is that we assume that this is all very deliberate. And we’re talking about organizations with thousands of people, where sometimes the left hand doesn’t know what the right hand is doing. There are conflicts internally, different power games within the institutions etc.
But take autonomous cars as an example: for them to function, you have to remove everything that has a lot of uncertainty from their way. I’m seeing a lot of research now where they ask, “Okay, how can we still maybe have people in the picture?” And the answer is: we can profile them all the time; we can constantly monitor people and their behavior and give them control when they fit our expectations and models of risk. Basically, what they’re imagining is that autonomous cars can only exist if the infrastructure continuously monitors everyone around those cars, and provides agency to those people based on continuous behavioral analysis. None of this is an accident. This is a plan.
I think there’s also another story to be told, which is that cloud services are a natural progression of the move from shrink-wrapped software in a box to services as we have them now. Some argue services lead to centralization and a concentration of computing power, which is optimized to serve all these different services. That’s why we now have Amazon Web Services or Google Cloud, or even the great pivot of Microsoft from being a software company to a cloud company. So, we could say that service architectures pull the clouds towards us.
But you can also turn that story around and say the cloud is a promise by the software industry to investors that it can get a lot of people onto the cloud, and that they should invest in it. In fact, cloud services are fast-growing investment objects. Seen this way, autonomous cars are a great way to fulfill the promise of the clouds. In other words, the clouds and their investors are pulling on autonomous cars.
Given these developments, calling these risks, harms and control infrastructures accidents can only happen if we ignore the whole political economy of the software industry. This doesn’t mean that there are no accidents, right? There are accidents. But I think we have to be very careful not to fall into the idea that they just don’t know what they’re doing.
Public Institutions and Tech Injustice
I don’t study public institutions in particular, but I have been professionally following the discussions around algorithmic discrimination and fairness, and how concerns around discrimination arise for public institutions.
For example, I have followed from a distance a proposal for a ‘lie detector’ AI to be piloted at European borders. The proposal is that this service will help optimize surveillance at the border and the management of flows. Another example I have heard about is from the Austrian Employment Agency: they are deciding which of the currently unemployed are likely to re-enter the workforce, in order to offer re-training only to those people. This can be framed as a proposition for optimizing the social welfare state.
You can imagine that all of these services have been sold, and all of these technologies developed, with the idea in mind that these institutions can cut costs and become more efficient at the same time. The main public concern has been that they may make unfair automated decisions with discriminatory effects.
This is a very narrow view of the problems that can arise when optimization systems are applied in the public sector. The concern about automated decisions with discriminatory effects misses the point that a border system is already a very racialized and very discriminatory system. With regard to social welfare, unemployment systems especially are most likely to impact people with low economic resources, a good portion of whom are likely to be immigrants. People quickly drew the conclusion that border control could be discriminatory or make mistakes. Similarly, in the Austrian example, there are claims that if you are an immigrant, or a woman with a child, the system reduces your chances of getting access to resources to re-enter the employment market.
Here we see the trouble with these systems: they are already racialized and discriminatory systems of population control. That they’re discriminatory is not just a matter of the algorithm, but of the whole system. The fact that it’s optimizing over that population is already very problematic, because it’s going to optimize with respect to somebody’s understanding of what is optimal. The discriminatory effect is an additional factor, but it’s not the main factor. How can government agencies who employ AI start making claims about fairness, when they are themselves problematic institutions?
I currently work in the Netherlands, where people have been optimizing infrastructure, such as traffic, for a long time. But they think about how to optimize the infrastructure for everyone, having in-depth discussions about how to do it in a fair manner.
Companies like Waze do not officially join the infrastructure planning. They offer services to citizens that use that infrastructure, going through the backdoor, de facto redefining the social welfare function of those infrastructures. These services are putting into practice a new definition of what it means to use roads optimally. In the process, they are no longer optimizing the infrastructure for everyone, but for their users and their bottom line.
Basically, every time we apply optimization systems to public agencies or infrastructures, we are redefining the social welfare function of these agencies, and changing the fabric of society. There is currently very little discussion about whether we want this model. Instead, we have a public discussion that is overly focused on algorithms, automated decisions and data flows. Discussions about how to improve algorithms or control data flows do not address how optimization systems are changing the way in which our infrastructures should be governed, or how our resources should be allocated.
There is currently a rush for AI. There is all this inevitability and urgency around it: “If we don’t do AI, then China and the US will. So we need to do AI.” On a cynical day, one could imagine this is a huge fraud scheme to transfer wealth from some parties to others in the name of innovation, while changing our governance structures. At the same time, the academic and civil society discussion is focused on data and algorithms. But we really need to open up a discussion about what it means to use optimization systems in managing all aspects of life.
The optimization systems we look at are based on a utilitarian logic. They come with very specific economic models. We need to discuss whether this kind of utilitarian model is the only model for resource allocation. And when it is not the right model, we need to look at what would be better. As far as I know, which is little, until the end of the 60s economic models and social choice theory predominantly assumed we are selfish agents trying to maximize our gain, and they proposed systems that could optimize over everybody’s gains. Utilitarian models have been subject to much critique. Optimization systems bring utilitarian logics back in vogue, but with the addition that they optimize resources in different domains in line with profit interests. I think in that sense, optimization systems, or machine learning, or AI propose a very specific model, and it’s a very narrow model for organizing the allocation of resources.
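To make the contrast concrete, here is a schematic shorthand of my own (not a formula taken from any of the systems discussed): the classical utilitarian planner chooses an allocation to maximize the sum of everyone’s gains, whereas an optimization system run by a service provider effectively maximizes the provider’s objective, with users’ gains mattering only insofar as they keep users on the service.

$$
x^{*}_{\text{utilitarian}} = \arg\max_{x} \sum_{i} u_i(x)
\qquad \text{versus} \qquad
x^{*}_{\text{service}} = \arg\max_{x} \Pi(x) \ \text{ subject to } \ u_i(x) \ge \bar{u}_i \ \text{ for its users,}
$$

where $u_i$ stands for an individual’s gain and $\Pi$ for the provider’s profit (both are placeholder symbols, used here only for illustration).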
Given how little attention has been paid to these issues, it is really time to move beyond data and algorithms, and to focus on the forms of governance these technologies are bringing, and on whether they fit with our imagination of how we want to govern our societies. Currently things are the other way around: technology companies are redefining forms of governance and proposing optimization as the only way to organize resource allocation. To succeed, these companies backdoor into the world through users. They use investment money to disrupt existing markets and outrun competitors. If and when they become the last ones standing, it may be too late to have the discussion about whether we want the models of life that they propose and implement.
Burning Question
The push for AI is basically marketing for a financial boost for creating a computational infrastructure based on optimization systems that govern all aspects of life. Do we want this infrastructure? How can we resist it, engage it, and ensure it serves people and their environment, and not the other way around?