NGI Trustmark, notes from NGI Forum workshop

At the NGI Forum @katjab and Harry Armstrong held a workshop on a potential future NGI Trustmark, a certification for technology considered to live up to the ideals of an “internet for humans”. These are my notes from the workshop. Katja and Harry, please add what I have missed and correct what I might have misunderstood.

Trustmarks are nothing new; they exist for everything from fair trade for food to energy efficiency for domestic appliances. They exist because, without them, ethical products are hard to find in a fragmented, uncoordinated marketplace. The same is true for technology, and NGI has an opportunity to address that by developing a Trustmark. This workshop was one step in that direction.

Of course, defining “ethical” is no easy feat. Factors to consider include privacy, fairness, bias, sustainability, security and countless others. There are many open questions, and this workshop did not aim to limit the scope but was rather an open exploration of the topic.

We know that there is some appetite for data protection trustmarks. One participant quoted a figure stating that 67% of consumers would like such a certification to exist. But data protection is only one aspect of ethical data-driven technology, and the consumer appetite for other considerations is unclear.

In a way, terms of service are like very long and opaque trustmarks for what companies may do with your data. However, as we know, they are close to useless as a tool for consumers to compare technologies. One sort of trustmark could focus on making sure that terms of service live up to certain standards and include respectful defaults.

GDPR has also become something of a trustmark, being used as shorthand when talking about data protection. What makes it effective is that there is accountability. As one participant said: “You are trusted because you are accountable”.

This workshop did not go into detail, but we managed to write down some interesting opportunities and challenges of developing a trustmark.

Opportunities

  • It creates a brand for ethical products, creating a virtuous circle where you must be certified to compete.
  • You can start creating new products based on being certified ethical, much like some lines of washing machines have energy efficiency as their unique selling point.
  • Companies can account for what value consumers are giving up to get something for free. Facebook and its ilk would then have to be very upfront about their business models during signup.
  • A well-designed mark can become a strong marketing tool, by making the mark itself a brand.
  • A certifying body could include individual consumers who are, as a group, given equal weight to the industry.
  • AI bias is difficult because reducing bias often decreases accuracy. It could be addressed by requiring companies to be upfront about what trade-offs they have accepted and how their AIs are biased, much like how human bias (internalized racism, sexism, etc.) is addressed by becoming aware of one’s own bias.
  • Sustainability, security and ethics of data centers could be certified by independent observers, similar to UN election observers.

Challenges

  • A trustmark with high standards might not work: if every single data giant ends up with a bad rating, the trustmark might come across as too punitive.
  • Privacy might be the easy part of the certification; fairness will be harder, and addressing AI bias and error harder still.
  • Where would such a trustmark be placed? How would you see it?
  • If the trustmark becomes something that one “pays a premium” for, will privacy and ethical tech become even more of an elite privilege?
  • Who pays to certify open source software?
  • Entrenched business models will lobby against it.
  • It can easily become a barrier for entry for startups if not done right.

At the end of the workshop, participants were asked to name one thing that they thought a trustmark must cover. These were some of the answers (I didn’t catch everything):

  • Transparency: A non-technical person must be able to understand, within a reasonable time frame, what data is collected and how and for what purpose it is used. A step forward for the Commission would be to look at existing best practice, for sustainability trustmarks in particular.
  • People don’t know how their data is being used, and the next step is to make those who break trust pay.
  • Simplicity and usability of the mark are key.
  • Create respectful defaults. One concrete default would be to always have a user-accessible off-switch for the AI. (On this point, one participant recommended a book on urban planning called “Massive Small”.)

Following up with some of my own remarks.

I think there is an inherent difference between at least two categories of trustmarks. One category is exemplified by the energy efficiency certification on a washing machine and the other by a fairtrade sticker on a banana. While the first has a direct and appreciated practical consequence for the owner of the washing machine, namely a lower electricity bill, the person buying the fairtrade banana only gets to feel like they are doing the right thing.

I would argue that an NGI Trustmark will be more like the sticker on the banana than the sticker on the washing machine. It is true that personal data leaked due to lax cybersecurity, or sold due to lax ethics, may have an individual impact, but this is really a fringe phenomenon. The vast majority of people who get their data leaked or sold are only a single datapoint among hundreds of thousands of others, and will usually suffer no consequences as a result. There are exceptions – people who for one reason or another are more likely to be harassed or targeted are also harmed more than others by leaks – but this is almost by definition a minority. Instead, the consequences of leaked or sold data are usually societal and structural – see Cambridge Analytica – and in that sense real security becomes an elite privilege.

This is relevant because I think it might be wise not to make this sticker into something that focuses on what the consumer gets from the product, because I think most consumers will just keep making the long-term privacy trade-off for the short-term price tag. This is also relevant for design and branding, as the strategy pursued to get people to care about a fair-trade sticker is probably very different from a campaign promoting energy efficiency certification.

Let’s see if @pbihr can weigh in on this. He has studied this extensively as the main focus of his Mozilla Fellowship. Peter?

Well, there was some discussion about this in the thread I kicked off on how one ought not to approach the work of developing a deep green trustmark for digital tech. I asked what people felt it made sense to avoid or try doing, etc. It’s here.

Looking forward to hearing from @amelia and the rest of the ethno/ssna gang about what is coming out of that and other deep green related threads. It might make some emergent insights visible.

Happy to, and thanks for tagging me in this thread. What I’m writing is based on developing the Trustable Technology Mark, a trustmark for connected consumer products, issued by our non-profit, ThingsCon. So that’s the area I looked at most closely, but I think a lot might apply beyond IoT.

So, there are various categories of “trust marks”, some more explicit than others: certifications, which often certify compliance with a law or regulation, like the CE mark, and the more general “consumer marks” or “trust marks”, which depending on the jurisdiction are less “hard”, so to speak. Either way, they need an issuing body, meaning that for larger-scale projects we’re talking about large organizations.

(ThingsCon’s Trustable Technology Mark is both very young and a very small project, and technically speaking not a certification but a mark we license companies to use if they meet certain criteria for transparency and other aspects. Details matter in these things.)

That said, after having studied this for quite some time, but without a longer-term background in this space, I think it’s fair to say that just about any assumption you’d make (“it just needs to do X by Y in order to demonstrate Z”) is quite likely not going to stand up to a somewhat rigorous kicking of the proverbial tires. For things that are as vague and debatable as “ethical” or “fair”, or even the much easier (but still freakishly hard to define) “private”, even a baseline definition is incredibly hard, because we’re talking about nearly philosophical debates plus a lot of contextual interpretation.

That said, I’m not here to grump around and I would like to see more of this in the world. So practically speaking, what are the challenges and opportunities?

Challenges:

  • Trust in the issuing org: Who is the trusted authority of issuing this trust mark?
  • Legal status: Is this a thing that’s legally required for entering the market, i.e. a regulation? In that case, funding will not be an issue, but it’ll have to be a so-called “baseline certification”, meaning a LOT of products need to be able to get it. This is a much lower standard than we might want. Think of the CE mark: it indicates that your TV won’t set the house on fire, but it doesn’t guarantee it’s well made in any other way.
  • Reach: If, however, the standard we aim for is higher, then by definition fewer products will qualify, meaning it must almost by definition be a voluntary thing. That is good for many things, but it means we’re now talking about massive outreach or marketing, and likely about requiring a business model. If you get paid for issuing the mark, though, there’s already a potential conflict of interest to issue as many as possible, which might erode trust in the mark itself.
  • Scope: For anything connected, a software update might change important details or features, and what happens to the data often happens far from the actual product, in the cloud or on third-party servers. Whatever the trustmark promises needs to apply to every party that has access to the data or is involved in bringing the product into the world.
  • Verification: How does the issuing body verify the claims? Are there external audits (good but expensive), is it based on companies providing the information (cheap but harder to trust), or some hybrid?

Doteveryone, for example, has studied this in depth and decided not to go the route of a trustmark, instead offering guidelines for making better products, as has the Better IoT initiative. Frankly, with ThingsCon we came close to going that same route (but were really curious to see if we could make it happen - we’ll know in a few years’ time).

Opportunities:

  • I’d offer a counterclaim to what @hugi was saying about consumers’ attitudes towards privacy and data protection (thanks Hugi for sharing this; I agree with pretty much everything else you’re saying), as well as about potential damages. Currently it’s not as if consumers have a real choice: if you have to choose between an unsafe service and not using an important service at all, you’ll have to expose yourself to the risk. I think consumers care about privacy quite a lot and make the best choices they can, but they only have a very narrow band of choices. I also believe the harm is not nearly as niche as it’s often portrayed; the damage in fraud, eroded trust, lost productivity, etc. is quite large, even though I hasten to add I haven’t studied this in depth (so take it with a grain of salt).
  • Nobody has figured out what this means for connected products yet. This is a huge opportunity, because consumers are largely in the dark about what’s ok and what isn’t. Currently we see a giant market failure there.
  • A trustmark, whichever shape it takes, could also be complementary to regulation that heavily fines sloppy practices if they hurt consumers. There are many mechanisms to potentially make this happen; I’m not familiar with the current state of the debate around this, but I’m sure Harry might know.
  • Finally, I’d say that consumers would hugely benefit from a trustmark if it’s established and gains traction, even if they don’t know the details of what it checks for. Let’s face it, most trustmarks we rely on work in ways we’re not familiar with, at very different levels; many are so familiar that, depending on context, we might not even perceive them as trustmarks, yet everybody knows how to interpret them according to their own preferences. To give just a few examples from recent debates: Fairtrade; BlueSign; Made in Germany; the fire department colors; CE; and most brand names, all the way from Amazon to the Red Cross. Trustmarks are everywhere, and they come in many shapes.

One thing I’d caution against is coming up with One Trustmark To Rule Them All. Each of the areas tagged above (privacy, fairness, trustworthiness, sustainability, etc.) is a huge and complex area in its own right. Something can be extremely fair but not at all private, or perfectly secure yet offer no protection against abuse of data in the way the product is designed, and so on. Sustainability for technology is incredibly hard, as is privacy, especially if we’re comparing across jurisdictions, countries and cultures. (Privacy and data protection are interpreted fundamentally differently in the US than in Germany, for example.)

In the end, whoever creates the mark has to decide what they aim to cover and make choices, including many hard choices. No trustmark will solve all issues for everyone. But it might solve some issues for some, or even for many - and that’s absolutely an endeavor worth engaging in.


Maybe it will develop in a way similar to organic food. We all share some basic ideas of what that is, and I remember when there was no certification for it anywhere. But a variety of certifying orgs emerged and governments got involved. So now we all still have our own notions of what organic means, and generally speaking we are correct - no pesticides or chemical fertilizers is where it started. But now it is much more complex than that, and not all standards agree. Some standards cover certain aspects, like organic animal products, and some don’t. It’s kind of all over the map. Plus, government certifications are subject to political pressure (in the US, the government issues exemptions for certain ingredients in certain foods; lots of grey area in there).

So it is all pretty squishy, but overall we benefit from it, even with all the variables.
