Were you at the AI & Justice workshop on 19/11? Share your notes and thoughts here!

Here are a few notes from my side on points I found notable during the workshop.

Workshop notes

  • Value-based software architecture. Scuttlebutt deprioritized supporting multiple devices based on its value that “this is software made for people who only have one device”. That is a major architectural decision, one that may be impossible to adjust later without rewriting the whole application, so they really poured their values into their software (a sketch of how deep such a decision sits follows after this list). Politics, in comparison, is not yet good at getting its values implemented in technological developments. We need a better process for embedding our values into our technology, a process strong in accountability.

  • What is human? When an actor (any party or organization) says they are human-centric, they often do not even define what “human-centric” means in their case. For example, “human” in the “European human-centric Internet” is left undefined. Staying this general creates potential for conflict.

  • The economics behind AI. There is an interesting study on “the cost of developing universal chips after the end of Moore’s law”. Now that we are at the end of Moore’s law, the money sunk into developing faster general-purpose chips is unlikely to come back. So the industry instead took a risk by developing specialized chips, of which there are two main types: chips for AI and chips for blockchain. That is the only reason AI became a hype and we are talking about it: it is pushed on us because industry needed a new profitable outlet for investments, and high levels of capital investment are already backing AI. “If we are not buying it, it’s going to go down. If we are not buying it, we are going to go down.” We (including the Commission) are still in the process of deciding whether we want to invest money into AI.

  • Good and bad AI architecture. Let’s differentiate between “AI for research and solutions” and “AI for the production of services”. The first type is benign research aimed at solving intricate problems, for example at universities. The second type is commercial SaaS software that scoops up data in the profit interest of the company. Google Maps, say, might in the future adapt your routing so that you are exposed to adverts from parties that paid Google for that audience.

    This means the underlying problem is the economics of who runs the data centers: Amazon, Microsoft, and Google built “clouds”, data centers for other people to run their applications on. Due to economies of scale they provide the cheapest solution, but they are also able to monitor and keep the data that flows through them. This is an undemocratic arrangement for plain economic reasons, and it is a hard problem to crack.

  • The structure defines the function. The type of governance structure defines how a new technology gets used. So it may be that we have allowed the wrong governance structures to emerge, which will lead to the wrong outcomes of AI technology. In addition, Google and Facebook used to be advertising companies but are not anymore: trying to rein them in as advertisers through regulation already no longer applies; instead we should regulate them in their new shape as “AI first” companies.

  • Is AI anti-human by definition? What AI (as rebranded Bayesian statistics) does is put the individual differences between human beings (everyone’s “spark of the divine”) into the epsilon, the error term at the end of the equation (see the worked equation after this list). That makes AI non-human-centric by design, because what defines an individual human is exactly that which cannot be predicted by AI, that which is not part of the “normal”.

  • On tech interfering with relationships. Intermediating the patient-doctor relationship with data-collecting and data-analyzing systems has degraded the value of that relationship. Medicine is not about diagnosis but about prognosis: improving a patient’s future condition. That is a negotiation with the individual, who may be very much non-average, may refuse certain treatments, and should have and keep a right to that individuality. This still allows for tech systems that benefit relationships; the tech systems we currently have in medicine just do not do that.
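
To make the Scuttlebutt point above concrete: in Scuttlebutt-style designs, your identity is a signing keypair and your feed is a hash-chained, append-only log of messages signed by that key. Because the private key lives on exactly one device, “one person, one device” is baked into the data model itself. Here is a simplified Python sketch of that idea (the names and structure are my illustration, not Scuttlebutt’s actual implementation):

```python
# Simplified sketch of a Scuttlebutt-style identity and feed (not the real
# protocol): identity is a keypair, the feed is a hash-chained, signed log.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat


class Feed:
    """One identity == one signing key == one device holding that key."""

    def __init__(self):
        # The private key is generated on, and never leaves, a single device.
        self._key = Ed25519PrivateKey.generate()
        self.author = self._key.public_key().public_bytes(
            Encoding.Raw, PublicFormat.Raw
        ).hex()
        self.log = []  # append-only list of signed messages

    def append(self, content: dict) -> dict:
        # Each message commits to the previous one, forming a chain.
        prev = self.log[-1]["sig"] if self.log else None
        payload = json.dumps(
            {"author": self.author, "prev": prev,
             "seq": len(self.log), "content": content},
            sort_keys=True,
        ).encode()
        msg = {
            "payload": payload.decode(),
            "sig": self._key.sign(hashlib.sha256(payload).digest()).hex(),
        }
        self.log.append(msg)
        return msg


feed = Feed()
feed.append({"type": "post", "text": "hello from my one device"})
# A second device cannot extend this log: it has no access to the private
# key, so anything it signs forks the feed instead of continuing it.
# Supporting multiple devices would mean redesigning identity itself,
# which is why the value "one device per person" is architectural.
```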
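
And to spell out the epsilon argument from the “anti-human” note: in the standard regression form underlying much predictive modelling, the model predicts each person from shared features, and whatever is irreducibly individual about them lands in the error term. A worked version of that equation:

```latex
% y_i: what person i actually does; f(x_i): what "people like i" do on
% average, given shared features x_i; epsilon_i: everything individual
% about person i that those shared features cannot capture.
\[
  y_i = f(x_i) + \varepsilon_i ,
  \qquad
  \mathbb{E}[\varepsilon_i] = 0
\]
% Training minimizes the epsilons, i.e. it treats individual deviation
% from the "normal" as noise to be made as small as possible.
```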

Personal reflections

I want to clearly distinguish between systems of exploitation and the technology itself. Any system of exploitation, including capitalism, will find and use any new technological option in exploitative ways. The history of digital innovation, and of media innovation more generally, is quite rich in that regard: before AI-based behavior prediction there was targeted advertising, tracking, mass surveillance programs by governments, weaponized drones, propaganda micro-targeting, the use of mass media for state propaganda, and so on.

That is purely a social problem, not a technological one. Adding one more technology cannot make it much worse: nukes are already around, so what technology could offer significantly worse potential outcomes? So to deal with it, you don’t need to make legislation for AI, but against capitalism.

AI in itself is a nice tool in the toolkit and easily allows for beneficial use. A personal example: my open-source coffee sorter project uses deep-learning-based image classifiers to relieve small-scale farmers from sorting their coffee by hand. A minimal sketch of that classification step follows below.
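
Since these notes don’t contain the project’s actual code, this is only a generic sketch of the underlying technique: a pretrained image classifier fine-tuned to a two-class bean problem. The backbone choice (MobileNetV2) and all names here are my assumptions for illustration, not necessarily what the project uses:

```python
# Generic sketch of the technique (not the actual project code): a small
# transfer-learning image classifier that labels coffee beans as "good"
# or "defect", the kind of decision a mechanical sorter can act on.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Reuse a pretrained backbone; only the final layer is replaced, since a
# bean-sorting dataset is far too small to train a network from scratch.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.last_channel, 2)  # good vs. defect

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])


def classify(image) -> str:
    """Return 'good' or 'defect' for a single PIL image of one bean."""
    model.eval()
    with torch.no_grad():
        logits = model(preprocess(image).unsqueeze(0))
    return ["good", "defect"][logits.argmax(dim=1).item()]
```

The point of the example is how small the model code for a beneficial use can be; the real work goes into collecting labeled bean images and building the sorting mechanics around the classifier.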
