Workshop on Inequalities in the Age of AI: what they are, how they work and what we can do about them - 19/11 - Brussels

And I just came across this Twitter thread: https://twitter.com/sherrying/status/1149507266338410498?s=20

about this: https://icarus.kumu.io/fluxus-landscape

Fluxus Landscape is an art and research project created in partnership with the Center for Advanced Study in the Behavioral Sciences (CASBS) at Stanford University, with support from the Stanford Institute for Human-Centered Artificial Intelligence.

When I first started worrying about ethical problems in emerging technology, I would lament that no one else was paying attention. Until I met artist Sophia Brueckner, who taught me that the problem was more complicated than I had assumed. My concern deepened as I learned more about ethical problems in AI. Our policies could not possibly keep pace with technological advancements and would lead to greater inequality in society. I grew critical. Why were regulatory and other societal institutions not doing anything? Later, I met the CASBS director Margaret Levi, who informed me that many institutions were working on this – I just did not know them.

The following map is an attempt to document and clarify my learning through a compositional research process. The map is curatorial and qualitative – not indexed and quantitative. Unlike the data scraped by computers and sorted by rules, the data within the Fluxus Landscape were gathered one by one and categorized through deep conversations. Both methods are biased.

The map includes 500 nodes representing both allied groups and those who are in conflict with each other. As different as the communities represented in the map are, many still share the same question. What should we do? In this composition – like in all art – your experience is unique. Some may see a practical stakeholder map while others may see that they are not alone in their fears. Art and ethics share the same power: there are as many interpretations as there are minds. I hope this map will help you answer your questions. What will you do?

@hugi and @MariaEuler, she might be interested in taking part in the conversations here. Perhaps we can reach out to her?

And I shouldn’t forget to tag the research team:

@alberto @amelia

I would love it if we could get Abeba Birhane and Cathy O’Neil to join us. I am already in touch with Abeba; she is super oversubscribed, so I am pulling out the big guns, even trying a bribe consisting of almost-homemade Ethiopian food :))

Cathy I have not yet contacted - I am struggling a bit with the neurowiring atm and finding it harder than usual to approach new people. Can anyone help?

I can look into contacting her tomorrow and finding an interesting “in”. Let me research her a bit tonight.

thank you <3

Ping @sander, this is the event I was referring to.

@inge, reach out to whom? The author of the article?

The artist/researcher of the AI piece, Sherry Wong.

Rob, thanks! @MariaEuler, can I ask you to follow up with Rob about setting this up?

Hi @eirinimal, would you be interested in joining us for this as a discussant, along with Fabrizio Barca, people from the Oxford Internet Institute, the Fairwork Foundation, and others being announced soon?

@inge and @MariaEuler - @Amelia suggested we reach out to Kate Sim, who is doing interesting work around tech in the context of gender-based violence at the Oxford Internet Institute. It would be great if we could schedule a video chat to learn more about her work and see if/how she might be interested in participating. I am very, very interested in bringing in people working in sextech and femtech for a deeper conversation - as part of this workshop, in one video call in the run-up to it, or even as a separate event. What do you think?

Also, I contacted Abeba Birhane, who is doing brilliant work spanning data ethics, embodied cognition and bias (see this Aeon article she wrote). It would be very interesting to have a conversation between her and @amelia exploring how SSNA/quantified social anthropology methods and tools can help surface and address the biases that affect how/which technological choices we make - and, where possible, mitigate their consequences. Is this something you might be interested in, @amelia, and if yes, how could we go about making it happen? We have a bunch of different options to choose from, spanning video chats and conversation threads on the platform, all the way to organising an event at the RSA, where I currently have a fellowship.

Ciao @rmdes, the festival site will hopefully be ready to go tomorrow, but you can already check it out in a web browser here: https://festival.edgeryders.eu

As you can see above, the content is now taking shape. @MariaEuler will have the visuals for promoting the event in what, 4 days?
So I guess it is time to set things in motion comms-wise with Digityser. We said I would do a presentation on October 10 to introduce the festival methodology - is this still on? And if yes, what materials do you need from me to promote it?

Someone who would also be very interesting to engage is Mounir Mahjoubi. @clairedvn, do you have a way in to see if we can invite him to join us?

Hiya,

If you are still looking for people to join, it might also be worth reaching out to:

Mutale Nkonde (Berkman Klein Center)
Vidushi Marda (the NGO Article 19)
Nikita Aggarwal (Oxford Law)
Fieke Jansen (data justice PhD, Cardiff University)
Joris van Hoboken (Prof. at the Vrije Universiteit Brussel)
Seda Gürses (Prof. at TU Delft)

They are relatively young scholars doing very interesting work!

Cheers,

Do you think you could put us in touch? (See here: https://edgeryders.eu/t/schedule-of-interviews-to-do-with-participants-and-content-contributors-ahead-of-the-festival/10971)

Timing-wise it’s great, as the European Commission will come up with horizontal AI legislation (principles) in early 2020. Not an expert, but I reckon the issues they will have to come to grips with could be:

  • scope of the rules (which ‘AI’ systems should be covered: only top consumer/citizen-facing applications, or other/underlying infrastructure too?)
  • type of obligations (only transparency/documentation requirements? What should they contain?)
  • how to differentiate, depending on scale (i.e. me building a system affecting 10 people vs. FB unleashing a new content-ranking algorithm on a quarter of the world’s population) and risk (a system deciding which cat video I will be shown vs. decisions on employment, insurance, housing, predictive policing) - see the toy sketch below
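
To make the scale-and-risk differentiation concrete, here is a minimal toy sketch in Python. Everything in it - the domain list, the thresholds, the obligation names - is invented for illustration and does not reflect any actual or proposed EU rules:

```python
# Toy tiering rule: combine risk (domain sensitivity) and scale (people
# affected) to decide which obligations apply. All values are invented.
HIGH_RISK_DOMAINS = {"employment", "insurance", "housing", "predictive policing"}

def obligations(domain: str, people_affected: int) -> list:
    duties = ["transparency/documentation"]  # a baseline for everyone
    if domain in HIGH_RISK_DOMAINS:
        duties += ["impact assessment", "human oversight"]
    if people_affected > 1_000_000:  # hypothetical scale threshold
        duties += ["external audit"]
    return duties

# Me building a system affecting 10 people vs. a platform-scale algorithm:
print(obligations("cat-video ranking", 10))
# -> ['transparency/documentation']
print(obligations("employment", 500_000_000))
# -> ['transparency/documentation', 'impact assessment',
#     'human oversight', 'external audit']
```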

Could it be an idea to look at case studies at the event, with these questions in mind? For the case studies themselves, AlgorithmWatch produces yearly reports looking at automated decision-making in Europe (Jan 2019 report, ‘Automating Society - Taking Stock of Automated Decision-Making in the EU’). The UK Centre for Data Ethics and Innovation is also carrying out a review of the positive and negative effects of automated decision-making; the first interim results are just out, I think. Interesting examples could be the insurance sector, or the case of the Austrian labour authority using automated decision-making to decide which of the unemployed to help in finding a new job (criterion: efficiency narrowly defined, i.e. helping those who already have the highest chance of finding a new job in the first place…).
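
To see why “efficiency narrowly defined” produces exactly that outcome, here is a toy Python sketch (all names and probabilities invented): ranking by predicted re-employment probability systematically puts the people who most need support last.

```python
# Toy illustration: allocating support by predicted re-employment
# probability helps those who would likely find a job anyway first.
# All data below is invented.
jobseekers = [
    {"name": "A", "p_reemployment": 0.90},  # would likely succeed unaided
    {"name": "B", "p_reemployment": 0.50},
    {"name": "C", "p_reemployment": 0.15},  # needs support the most
]

# 'Efficient' allocation: maximise placements per unit of support.
by_efficiency = sorted(jobseekers, key=lambda j: j["p_reemployment"], reverse=True)
# Need-based allocation: help those least likely to succeed unaided.
by_need = sorted(jobseekers, key=lambda j: j["p_reemployment"])

print([j["name"] for j in by_efficiency])  # ['A', 'B', 'C']
print([j["name"] for j in by_need])        # ['C', 'B', 'A']
```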

Apologies for the lengthy post, but it’s my first here, so please cut me some slack.

Yes, I think this makes complete sense. Would you have time to pick a couple of case studies you find especially relevant and post them here?

In parallel, what we can do via the comms team is ask the internet, and the other participants in the workshop, for suggested case studies that we should include - and get the conversation going around the three questions/challenges you outlined.

It depends on the exact definition of justice, perhaps. The computerisation/mechanisation of the public sector has largely been driven by the hope that logical government (deterministic decisions: given a certain input, a deterministic output) will make society more “just” and “equal” (i.e. a computer does not have the social sensitivity to discriminate, for instance, and even when it does - for instance through statistical shortcomings - we can normally measure and assess the discrimination that occurs). But we are now half a century into computerisation - has it worked?
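
The claim that algorithmic discrimination is at least measurable can be made concrete. A minimal Python sketch, with an invented rule and invented data: because a deterministic rule gives the same output for the same input, we can replay it over the records and quantify the disparity between groups.

```python
# Minimal sketch: replaying a deterministic decision rule over records
# to measure group disparity. Rule and data are invented for illustration.
applicants = [
    {"group": "X", "income": 30_000}, {"group": "X", "income": 55_000},
    {"group": "Y", "income": 28_000}, {"group": "Y", "income": 32_000},
]

def approve(a: dict) -> bool:
    return a["income"] >= 40_000  # a facially neutral, deterministic rule

def approval_rate(group: str) -> float:
    members = [a for a in applicants if a["group"] == group]
    return sum(approve(a) for a in members) / len(members)

# Demographic-parity gap: the difference in approval rates between groups.
gap = approval_rate("X") - approval_rate("Y")
print(f"X: {approval_rate('X'):.2f}, Y: {approval_rate('Y'):.2f}, gap: {gap:.2f}")
# -> X: 0.50, Y: 0.00, gap: 0.50
```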

I think “lasagna-style” technologies, which are vertically separated as a matter of technology, are more likely to lead to an outcome of increased “justice”, because I think of justice as something which guarantees individuals the freedom to act - commercially and socially - and this freedom can only be obtained if market entry barriers are low, or if technologies lend themselves to a multitude of entities cooperating at different levels. I’d prefer, for this reason, WiMAX and Wi-Fi over LTE systems and cellular networks, and I am for this reason cautious about 5G.
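
A minimal Python sketch of what the vertical separation buys (all class names are invented): when each layer is an interface that independent providers can implement, a newcomer can enter at one layer without owning the whole stack.

```python
# Sketch: vertically separated ('lasagna') layers. A higher-layer service
# works over any implementation of the layer below it, so entry barriers
# apply per layer, not to the whole stack. Names are illustrative only.
from typing import Protocol

class PhysicalLayer(Protocol):
    def transmit(self, data: bytes) -> None: ...

class WiFiRadio:
    def transmit(self, data: bytes) -> None:
        print(f"Wi-Fi: sending {len(data)} bytes")

class CommunityMeshRadio:
    def transmit(self, data: bytes) -> None:
        print(f"mesh: sending {len(data)} bytes")

class MessagingService:
    """A higher-layer service, independent of who provides the link."""
    def __init__(self, link: PhysicalLayer) -> None:
        self.link = link

    def send(self, text: str) -> None:
        self.link.transmit(text.encode())

# Two different entities can compete at the bottom layer without the
# messaging provider changing a line of code.
MessagingService(WiFiRadio()).send("hello")
MessagingService(CommunityMeshRadio()).send("hello")
```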

With my rudimentary understanding of Italian: the 15 proposals for justice would impact technology development - but would they be advanced by technology development? I think governments across Europe - certainly in Sweden - are still very much stuck in the 1960s vision of computers-as-the-saviour-of-government-through-imposing-cold-hard-logic. That is, the “fairness” our governments strive for is either being able to use technology against the governments’ own citizens (to find “cheaters”), or to have a “government machine” that is not able to distinguish the unique life stories of either subjects (citizens) or staff (civil servants).

Could you describe more what you mean by lasagna technologies? I have not seen that term before.
