The EU’s new Commission is looking to approve a new AI directive, and of course democratic participation is important to ensure a good law. What do you feel is necessary from the EU to enable participation?
I’m a bit skeptical that legislation is the right instrument for this kind of thing. It’s a blunt instrument, because as of now there is no good understanding of the process that leads from ethical values (fairly easy to encode in a directive) to the actual technical choices of the scientists and engineers building AI systems. For example, take the New Zealand group developing Scuttlebutt: when users asked for cross-device accounts, the core developer replied, “I’m not doing that, because we want to serve underprivileged people, and these people do not have multiple devices.” What he was doing was justifying his technical choices in terms of his values. As a socio-technical culture, we are not good at that.
What is “investment in AI”? What should the EU do practically?
Climate + competition policy = nobody is allowed to grow too much. And what is “empowering”? I suspect that AI (machine learning applied to big data) is inherently NOT human-centric, because its models (for example a recommendation algorithm) encode each human into a database, and then model you in terms of who you are similar to: for example, a woman between 35 and 45 who speaks Dutch and watches stand-up comedy on Netflix. Everything not standardizable, everything that makes you you, gets pushed into the error term of the model. That is hardly human-centric, because it leads to optimizing systems in terms of abstract “typical humans” or “target groups” or whatever.
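The “error term” point can be made concrete with a toy model. The sketch below is purely illustrative (all names, features, and ratings are invented): it predicts a user’s rating as the average of their “target group,” so whatever is individual about a person survives only as a residual the model cannot explain.

```python
# Hypothetical illustration: a model that predicts from coarse
# "target group" features only. The individual part of each user
# lands in the residual, i.e. the model's error term.
from statistics import mean

# Each user: (age_band, language, genre) -> actual rating given to a show.
users = {
    "alice": (("35-45", "nl", "stand-up"), 4.0),
    "bram":  (("35-45", "nl", "stand-up"), 3.0),
    "carla": (("35-45", "nl", "stand-up"), 5.0),
    "diego": (("25-35", "es", "drama"),    2.0),
}

def group_prediction(features, data):
    """Predict the group average: everyone with the same features
    gets the same prediction."""
    ratings = [r for (f, r) in data.values() if f == features]
    return mean(ratings)

def residual(name, data):
    """What the group model cannot explain: the individual part."""
    features, actual = data[name]
    return actual - group_prediction(features, data)

# Alice, Bram, and Carla share one "target group", so all three get the
# same prediction (4.0); their individual tastes exist only as residuals.
print(residual("alice", users))  # 0.0
print(residual("carla", users))  # 1.0
```

Optimizing such a system means minimizing those residuals in aggregate, which is exactly optimizing for the abstract “typical human” of each group rather than for any actual person.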
Can we think of counterproductive regulation on the Internet? Stuff like the copyright directive? How did all these things that used to be a public good end up being someone’s property?
In Helsinki, everybody agreed that you can’t sell your data.
Do we actually need AI? Why are we pushing for AI?
Where do we bring the democratic process into new models?