Schedule of interviews with participants and content contributors ahead of the festival

Hi guys.

We want to create a series of interviews and follow-up Q&As here on the platform with each participant in the run-up to the different festival activities. We do this for three reasons:

  1. It is a good learning opportunity and way to raise the quality of our discussions. If everyone has a shared foundation of knowledge, the conversations can go “deeper and further” much faster.

  2. This gives both participants in the event and those who cannot attend an opportunity to identify people whose work is relevant to their own, and to meaningfully connect with one another.

  3. We can better prepare the content, facilitation and follow-up of each event so that the experience is meaningful and generative for each participant. It also gives people doing presentations or webinars an opportunity to know who their audience is and to adapt their content to the context.

How you can help make this happen!

  • Think of one or more people whom you think we should interview
  • Reach out and ask them if they would be up for an interview
  • Leave a comment with their name, a bit about what you find interesting about their work, and their contact info
  1. Emmi Bevensee: Data scientist studying machine learning. Doing interesting work developing tools (mostly open source) and guides to deal with next-generation disinformation and fascism online. Currently working on a piece for Mozilla and the Anti-Defamation League about how inequalities get baked into dataset annotations due to, basically, not having marginalized people with subject expertise involved, creating neural nets that just reproduce bias. Found them through @sammuirhead’s Twitter feed :slight_smile: Set time and channel via email: emmib AT protonmail DOT com
  1. Corinne Cath: Cultural anthropologist whose research focuses on engineering culture, Internet governance and the management of the Internet’s infrastructure. Currently focusing on the governance of artificial intelligence and the responsibility of the technical community towards human rights. Doing a PhD at the Oxford Internet Institute. Hi @CCS, when might be a good time for an interview with you?

Hi Nadia,

I am available online next week on Tuesday, Wednesday and Friday. I am currently in Ankara, so one hour ahead of Central Europe and two ahead of the UK.

Let me know what works for you!



ping @inge @MariaEuler

  1. Kate Sim. She is a PhD candidate at the Oxford Internet Institute researching the intersection of gender, technology, and epistemic in/justice. She currently researches the automation and datafication of sexual harassment reporting procedures to examine the credibility politics of designing digital reporting platforms. As a qualitative researcher, she employs ethnographically informed methods to uncover how gendered assumptions and values are encoded in emerging data/AI-driven systems.

Kate’s previous research projects include: ethical and practical scopes of predictive risk assessment tools for child protection services in the UK; and the sexual politics of consent discourse. Beyond research, Kate has nearly a decade of experience in community organizing, survivor advocacy, and social policy in the US, UK, and South Korea.


  1. Justin Nogarede (@J_Noga): European policy and legal expert. Has been involved in drafting the European Commission’s mid-term review of the Digital Single Market Strategy, and in policy on standards and standard-essential patents, audio-visual media, Internet governance, the collaborative economy, product liability and the internal market for goods. When might work for you for an interview during the next 10 days? Also, do you have suggestions for others we should reach out to?
  1. Mutale Nkonde (Berkman Center): “Mutale Nkonde is an AI policy advisor and incoming fellow at the Berkman Klein Center for Internet & Society at Harvard University, where she will be conducting an ethnographic study on how congressional staffers learn about AI policy during the 2019/2020 academic year. During her time as a Data & Society Fellow, she founded the Dorothy Vaughan Tech Symposium briefing series on Capitol Hill, and was on the team that introduced the Algorithm Accountability Act and Deep Fakes Accountability Act with the Office of Congresswoman Yvette Clarke. She is co-author of a report on racial literacy and tech, and speaks widely on race, policy, and AI.” @CCS, do you know Mutale / could you do a personal introduction?
  1. Vidushi Marda (NGO article19)
  1. Nikita Arghwal (Oxford Law)
  1. Fieke Jansen (Data justice PhD Cardiff Uni)
  1. Joris van Hoboken (Prof. at Brussels Uni)
  1. Seda Gurses (Prof at Delft Uni)

Ping @MariaEuler and @inge who can make it when? I am available any of those days.

Wednesday would probably work from my side, and Friday for sure.


OK, let’s agree with @CCS on a time: Wednesday at 10 am CET (Brussels time)?


Some suggestions from Philippe Van Impe (Digityser)

  1. Digitalisation champions
    Loubna Azghoud - Founder - High Her | LinkedIn
    Sana Afouaiz - Founder & CEO - Womenpreneur-Initiative | LinkedIn

  2. Very active in AI
    Ségolène Martin - CEO & Cofounder - Kantify | LinkedIn
    Isabelle Grippa | LinkedIn
    Elise Vandenberghe - Digital Strategy Advisor | LinkedIn
    Cécile Jabaudon - Cluster Software Coordinator | LinkedIn
    Céline Vanderborght | LinkedIn


@CCS and @nadia, here is the link to the Zoom room for the interview on Wednesday the 8th at 10:00 am.


Thanks Nadia! For the interview, any time in the afternoon of Thursday 10 or Friday 11 October looks good. Would that work?

For the event on 19 Nov, on inequality and AI: timing-wise it’s great, as the European Commission will come up with horizontal AI legislation (principles) in early 2020. I’m not an expert, but I reckon the issues they will have to come to grips with could be:

  • scope of the rules (what ‘AI’ systems should be covered: only top consumer/citizen-facing applications, or other/underlying infrastructure too?)
  • type of obligations (only transparency/documentation requirements? What should they contain?)
  • how to differentiate, depending on scale (e.g. me building a system affecting 10 people vs. FB unleashing a new content-ranking algorithm on a quarter of the world’s population) and risk (a system deciding which cat video I will be shown vs. decisions on employment, insurance, housing, or predictive policing)

Could it be an idea to look at case studies at the event, with these questions in mind? For the case studies themselves, AlgorithmWatch produces yearly reports on automated decision-making in Europe (Jan 2019 report, ‘Automating Society - Taking Stock of Automated Decision-Making in the EU’). The UK Centre for Data Ethics and Innovation is also carrying out a review of the positive and negative effects of automated decision-making; the first interim results are just out, I think. Interesting examples could be the insurance sector, or the case of the Austrian labour authority using automated decision-making to decide which of the unemployed to help find a new job (criterion: efficiency, narrowly defined, i.e. those who already have the highest chance of finding a new job in the first place…).

Apologies for lengthy post, but it’s my first here, so please cut me some slack :slight_smile:


yes yes and yes :slight_smile: