At the NGI Forum @katjab and Harry Armstrong held a workshop on a potential future NGI Trustmark, a certification for technology considered to live up to the ideals of an “internet for humans”. These are my notes from the workshop. Katja and Harry, please add what I have missed and correct what I might have misunderstood.
Trustmarks are nothing new; they exist for everything from fair trade for food to energy efficiency for domestic appliances. They exist because, without them, ethical technologies are hard to find: it’s a fragmented marketplace with little coordination. NGI has an opportunity to address that by developing a Trustmark, and this workshop was one step in that direction.
Of course, defining “ethical” is no easy feat. Things to be considered could be privacy, fairness, bias, sustainability, security and countless other factors. There are many open questions, and this workshop did not aim to limit the scope but was rather an open exploration of the topic.
We know that there is some appetite for data protection trustmarks. One participant quoted a figure stating that 67% of consumers would like such a certification to exist. But data protection is only one aspect of ethical data-driven technology, and the consumer appetite for other considerations is unclear.
In a way, terms of service are like very long and opaque trustmarks for what companies may do with your data. However, as we know, they are close to useless from a consumer perspective as a tool for comparing technologies. One sort of trustmark could focus on making sure that terms of service live up to certain standards and include respectful defaults.
GDPR has also become something of a trustmark, being used as shorthand when talking about data protection. What makes it effective is that there is accountability. As one participant said: “You are trusted because you are accountable”.
This workshop did not go into detail, but we managed to write down some interesting opportunities and challenges of developing a trustmark.
Opportunities
- It creates a brand for ethical products, establishing a virtuous circle where you must be certified to compete.
- You can start creating new products based on being certified ethical, much like some lines of washing machines have as their unique selling point to be energy efficient.
- Companies can account for what value consumers are giving up to get something for free. Facebook and its ilk would then have to be very upfront about their business models during signup.
- A well-designed mark can become a strong marketing tool, by making the mark itself a brand.
- A certifying body could include individual consumers who are, as a group, given equal weight to the industry.
- AI bias is difficult because reducing bias often decreases accuracy. AI bias could be addressed by requiring companies to be upfront about what trade-offs they have accepted and how their AIs are biased. This is much like how human bias (internalized racism, sexism, etc) is addressed by becoming aware of one’s own bias.
- Sustainability, security and ethics of data centers could be certified by independent observers, similar to UN election observers.
Challenges
- A trustmark with high standards might not work: if every single data giant ends up with a bad rating, the trustmark might come across as too punitive.
- Privacy might be the easy part of the certification; fairness, and addressing AI bias and error, will be harder.
- Where would such a trustmark be placed? How would you see it?
- If the trustmark becomes something that one “pays a premium” for, will privacy and ethical tech become even more of an elite privilege?
- Who pays to certify open source software?
- Entrenched business models will lobby against it.
- It can easily become a barrier for entry for startups if not done right.
At the end of the workshop, participants were asked to name one thing that they thought a trustmark must cover. These were some answers (I didn’t catch everything):
- Transparency: A non-technical person must be able to understand, in a reasonable time frame, for what purpose their data is collected and how it is used. A step forward for the Commission would be to look at existing best practice, sustainability trustmarks in particular.
- People don’t know how their data is being used, and the next step is to make those who break trust pay.
- Simplicity and usability of the mark are key.
- Create respectful defaults. One such concrete default would be to always have a user-accessible off-switch for the AI. (On this point, one participant recommended a book on urban planning called “Massive Small”.)