Seda Event Notes

The EU’s new Commission is looking to approve a new AI directive, and of course democratic participation is important to ensure a good law. What do you feel is needed from the EU for people to participate?

Why are we pushing for AI? One driver is the end of Moore’s law. The money sunk into chips is less likely to come back. Industry took a risk on specialised chips, AI- and blockchain-specific, so now they need to create demand for these chips. We should be aware of the vendor-driven nature of this push and think about how to regulate these companies.

The business model of Big Tech now is based on keeping the code closed, providing cloud services, and collecting data to profile and ultimately influence users. They cross-subsidise their cloud services with their marketing revenue streams. If you want to fight that, you need public cloud infrastructure, which is a trillion-EUR affair. That’s your investment in human-centric AI infrastructure.

Companies like Google use AI to optimise the profiling of their customers. The difference is whether it’s a data-centred AI or not. … ? Where do we bring the democratic process into new models?

Can we think of counterproductive regulation of the Internet? Stuff like the copyright directive? How did all these things that used to be a public good end up being someone’s property?

We optimize relationships. This is problematic. For example, there are marketing companies working with civil liberties organizations that tell their clients: we can tell you which of your donors care about which issues, so you can optimize for the issues that bring in more money.

Optimisation has been the prevailing logic so far, but it can still go wrong. Traffic is a perfect example. Any optimisation system comes at a cost, and if you don’t internalise that cost it causes bigger damage elsewhere. Apparently innocent choices about what to optimize for turn out to have huge consequences. When you design traffic, do you optimize for driver safety? Or for pedestrian safety? Or minimize time spent on the road?
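A toy sketch of this point (not from the discussion; all design names and numbers are invented purely for illustration): the same three hypothetical street designs ranked under three different single objectives each yield a different "optimal" answer.

# Toy sketch: three invented street designs scored on invented criteria.
# Higher is better for the safety scores; lower is better for travel time (minutes).
designs = {
    "wide fast road":       {"driver_safety": 0.9, "pedestrian_safety": 0.3, "travel_time": 10},
    "narrow calmed street": {"driver_safety": 0.6, "pedestrian_safety": 0.9, "travel_time": 18},
    "mixed boulevard":      {"driver_safety": 0.7, "pedestrian_safety": 0.6, "travel_time": 13},
}

def best(objective, minimise=False):
    # Pick the design that looks "optimal" for one objective only,
    # ignoring every other cost (i.e. those costs stay externalised).
    pick = min if minimise else max
    return pick(designs, key=lambda name: designs[name][objective])

print("optimise for driver safety:     ", best("driver_safety"))
print("optimise for pedestrian safety: ", best("pedestrian_safety"))
print("minimise time spent on the road:", best("travel_time", minimise=True))
# Each objective selects a different design; the choice of what to optimise
# for has decided the outcome before any "optimisation" even starts.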

Seda, on the feedback "I know the GDPR, it’s regulated anyway": there are serious problems with how the GDPR focuses on the personal data of individuals.
We talk about automated decisions, but there is a set of criteria … the optimization is not about the single person, but about them as part of a certain group, and certain choices are made for that group in general. So you expect to empower people, but actually you don’t. Paul Olivier Dehaye is trying to pool data from different workers to see if people are being discriminated against. This is called a “data trust”. There is also the DECODE project, which is doing something they call a “data commons”.

We are now in a mess with the GDPR. Uber, for example, uses it to refuse to give drivers their data, saying this would impact the rights of the riders. This (says Seda) is the consequence of focusing on personal rights when the data are used to optimize over populations.

Do we actually need AI? Why are we pushing for AI?

Where do we bring the democratic process into new models?