NGI Ethnography Code Review Thread

Use this thread / topic for a running code review. We can talk about best practices in our call on Monday.

Codes to review during bi-weekly meeting tomorrow (19 Oct 2020):

'don't need to know how to use': this is a placeholder for now, referring to how people feel they don't necessarily need to understand the mechanics of how things work in order to use them and expect them to function well and safely. For example, users shouldn't have to know how search engine algorithms work to expect results to be accurate

'privacy as space': another placeholder code to add more granularity to how participants think and talk about privacy as a set of different spaces

making decisions: made this to disambiguate from decision making since it was getting overloaded. Refers to when people are in the process of making decisions, or experiencing it, i.e., “we had to weigh all these factors in order to make the best decision for us” (=/= decision making: the state of decision-making, i.e., “we need an automated decision-making process”)

guiding users: created to add more granularity to UX/design connections.

self-censorship: whenever people don’t say what they want to say because they fear repercussions, feel silly about it, etc. Was gonna go with “chilling effect” but thought this would be broader

more ontological codes:
being practical
being supportive
making accessible


functional sovereignty – is this the same as free-standing technology (a code I have newly created)?

contingency planning: potentially merge with disaster preparedness or mitigating risk / discuss how these all interact

I’m going through threads on how the technology design process is rife with biases that have potential harms. So far, only 1–2 annotations for most of these, but I reckon they’ll have more robust annotations by our next meeting once I’ve gone through more threads. Collecting them here as I think through finessing them:

On modeling and bias/assumptions

  • encoding values: I’m getting the sense this has been a general code for any “putting values into design” which we should now fork and make precise with some of the other codes below
  • modeling the 'real world' + *expressing mathematically: These are forked from what I had last week as operationalising the real world
  • making generalisations
  • assuming objectivity
  • *assessing model accuracy: We currently don’t have a code about accuracy??? We really should. I want the word “representation” in this code somehow…

On oversight

  • justifying purpose
  • 'confirmation bias': probs too jargon-y at the moment, consider ‘feedback loop’ and others
  • ability to explain: should almost always co-occur with human oversight
  • techniques of mitigating bias

On impact

  • potential harm: though should I just merge this with unintended consequences?
  • differential impact
  • higher threshold

Changes I want to propose:

  • I changed deployment to the rush to deploy
  • Relatedly, I changed non-deployment argument to the more clunky but specific shouldn't just because we can
  • I changed vulnerable population to differential impact to capture how impacts are unevenly distributed
  • I changed 'big tech' to tech industry because other equally evil companies like Uber are not technically included, and it would align with the other -industry codes we have
  • Is common good also meant to include public good? We don’t have a code currently for “public interest/the public/etc”
  • forming a coalition: do we have a code for when groups/people come together as a strategy of pushing back? It’s not quite the same thing as connecting people
  • Having read through the report, I think it’s important to keep track of human vs. machine abilities. So I created human ability (which nests things like ability to explain and detecting nuance) and machine ability

Quick memo of some of the codes I’m working through this week

Codes about power

  • I’m trying to stick to control for any vertical/oppressive interpretations of power. Need a more robust set of codes to capture:
  • controlling behaviour
  • exploitative labor practices
  • digital divide

Codes about institutional change

  • strategies of pushing back
  • driving incentive, which co-occurs with codes like financial incentive and external pressure (might be better to rename as ‘reputational damage’?)
  • creating more problems
  • *'holding to a higher threshold': this idea that public institutions should be held to a higher threshold??? halp

Codes about impact of tech

  • still trying to capture what happens when, even with accurate and fair design processes, potential harm and unintended consequences will take place because technologies will be *'interacting dynamically' with the social world