Monthly Ethnography Report 1: The Future of the Internet and All it Encompasses

When EdgeRyders launched this project in 2019, the @nextgenethno team was interested in understanding how community members talked about and envisioned the Next Generation Internet. Which core themes emerge? What are pressing concerns about the internet and digital technologies now and going forward? What are community members working on? How are they positioning themselves in relation to these issues? And, as the NGI platform asks, “what should the Internet, and all it encompasses […] really do for humanity? How should it work? How should it be governed?”. We approached these questions ethnographically, meaning we focussed on the salient themes that emerged through on-platform interaction, paying attention to the language community members used to describe their experiences and to the categories they established discursively.

Over the last 18 months, we observed, participated in and coded on-platform interactions between a range of members active in the tech world. Many members are conducting research; developing software, platforms and applications; leading groups of innovators and designers; or engaging in activism and journalism around tech issues. This means that, as ethnographers, we have been observing and coding insights based on expert knowledge.

Nearing the conclusion of this project, we now have a complex and dynamic Social Semantic Network which visualises the codes we have created over the course of the project. Within this network, codes cluster around three core overarching topic areas: “The Future of Work”, “Data, Privacy & Control” and “Resilience, Welfare & Sustainability”.
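At its core, an SSNA of this kind is a network of co-occurring ethnographic codes. As a purely illustrative sketch (this is not the project’s actual pipeline, and the annotations below are made up for the example), two codes can be linked whenever they annotate the same post, with edge weights counting how often they co-occur; clusters like the three topic areas above then emerge from densely linked codes:

```python
# Illustrative sketch: building a small code co-occurrence network.
# Hypothetical annotations: post id -> set of codes applied to that post.
from itertools import combinations
from collections import Counter

annotations = {
    "post-1": {"remote work", "co-working", "imagining the future"},
    "post-2": {"data privacy", "surveillance", "imagining the future"},
    "post-3": {"remote work", "imagining the future"},
}

# Count how often each pair of codes annotates the same post.
edges = Counter()
for codes in annotations.values():
    for a, b in combinations(sorted(codes), 2):
        edges[(a, b)] += 1

# Edges weighted by co-occurrence; heavier edges indicate codes that
# the community tends to invoke together.
for (a, b), weight in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(f"{a} -- {b} (weight {weight})")
```

In a weighted network like this, community-detection or simple visual inspection surfaces the kind of topic clusters described above.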

While each of these themes is distinct, they do overlap significantly across a number of focal issues: attention is placed on challenging the status quo and building alternatives to our current modes of online interaction, access and data control; on reshaping our regulatory and governance systems; on pushing for justice and equality; on ensuring safety and security; and on better distributing our resources. Undergirding all of our on-platform debates is the urgency to define our values as professionals involved in digital technologies and to implement new systems of collective action that reflect these shared values.

On a monthly basis, I will zoom in ethnographically into each of the topics identified and unpack them in more detail, drawing on the SSNA, community threads and our previous reports. In this way, I will create an in-depth link between our coding work, the content of the platform and our own ethnographic analyses.

This first post serves as an overview of these core emerging themes and my initial interpretation of them. Going forward, each post will be dedicated to one of the core themes outlined here, exploring them in more nuanced detail.

Underlying each of the topic clusters are three overarching conceptual themes. I plan to delve into these more going forward, but let me begin by highlighting them here, because, in my observation, they have come to define much of the NGI community dynamic that I have observed and participated in over the last 18 months.

Negotiating Values: Power, Agency and Resistance

From the very beginning, interaction on the EdgeRyders platform has been characterised by nuanced conceptual work: community members share a sense of responsibility and urgency to revisit, question and reframe the often taken-for-granted concepts and practices that we come to use when we speak about tech, digital tools and virtual life. What does it mean for technology to be ‘green’, to be truly environmentally sustainable? And are we currently addressing this issue from all angles? What does it mean for cities, homes and digital tools to be ‘smart’? Observing this kind of conceptual discourse can tell us a lot about different ways of evaluating and perceiving the world. It allows us to explore the lenses through which tech discussions are held, including contemporary ideologies, ethics, values and norms.

Dystopian and Utopian Futures

Unsurprisingly, a lot of this conceptual work is forward-moving and future-oriented, in many ways capturing the momentum behind notions of the Next Generation Internet: how do the tools we have access to in the present need to adapt to the issues we see emerging in the future?

As the sociologist Barbara Adam (2004) reminds us, this kind of decision-making has important ramifications for future generations. She points to the short-sightedness of many modern democracies, which often make short-term decisions (relevant in the here-and-now) that will have consequences for the future. However, despite calls for policies to take long-term perspectives, not enough is being done to expand our view of the future and the role of digital technologies within it. Jane Guyer (2008), meanwhile, points towards a notable shift from a focus on what she calls the near future toward a combination of immediate response and orientations towards the long-term. However, such long-term orientations are, as Guyer demonstrates, often fixated on the temporal frameworks of economic institutions. In a similar vein, EdgeRyders community members have demonstrated the ways in which current decision-making maintains and even exacerbates existing power dynamics, allowing private corporations and big tech to dominate the discourse. In this way, there appears to be a notable disjuncture between state and social understandings of futurity, often grounded in competing and unequally prioritised temporal models. In this sense, an urgency exists within the NGI community to challenge the status quo and develop tech solutions that will contribute to greater equity, access, transparency and individual agency, and towards a collectively achievable future.

So, what does the future of the Internet, and all it encompasses, look like? And importantly, how do we get there? Visions of the future often navigate between the dystopian and the utopian: future dystopias are frequently understood as marked by global pandemics, environmental disaster, fully autonomous AI systems which become uncontrollable for humans, and other narratives of ‘crisis’, ‘breakdown’ and ‘catastrophe’. Utopian futures, meanwhile, envision new cosmopolitan modernities: the rise of a new political dawn and resilient communities built on the promise of digital technologies. Social scientists have used these taxonomies to describe how individuals and groups orient and organise themselves in time and space, as well as how decision-making is embedded within our relationship to time.

Sitting at the intersection of these two extreme poles, NGI community members come together to consider how digital technologies may be contributing to ‘crises’ of the present, how they might lead to long-term dystopian outcomes, and importantly, how digital tech can be harnessed as a tool for human-centred design, advancements in medicine and climate protection, the growth of global communities and mechanisms of empowerment and equality. Understanding how people come together to talk about the future can tell us a lot about the ways in which different ideas of the future are constructed, how they are negotiated and how they interact. Within the NGI community there is an overwhelming sense that community members are trying to imagine and build a collective future, one which engenders a sense of shared purpose through which long-term change can be achieved.

Building a Community of Practice

This forward movement, importantly, is characterised by a sense of community: teasing apart the steps necessary to realise the future of tech and our roles within it. Anthropologists like Amanda Wise (2016) have described this kind of community-focused, collective work as convivial labour – the lived practices and work of living together. Importantly, this notion also foregrounds the ways in which notions of difference and forms of consensus are negotiated in the everyday, highlighting the importance of studying the ways in which community and togetherness are interactionally achieved.

It may not initially be surprising to find that community members are seeking to define and build collective futures; after all, this is what EdgeRyders is all about: coming together collectively to identify problems and to solve them. However, it is important to point this out, particularly as the NGI community is made up of a broad range of experts, each coming from their own unique perspective and area of expertise. Such heterogeneity is important for eliciting a nuanced discussion on broad issues like the future of the internet. However, it can sometimes lead to fragmentation, with various members holding on to their own specific interests. We don’t find this here. Instead, from the beginning, there has been an underlying sense of collective action, community organising and an interest in transparency and accountability. What is more, community members seem to overlap quite significantly on some core issues, bringing their own expertise to the discussion.

This brings us to the issue of defining our community of practice. Who do we mean when we talk about the NGI community? As I flagged at the beginning of this report, we have largely been hearing from experts – that is, people who are in various ways working on issues in tech. That being said, while there is diversity in the kinds of professions and areas of expertise from which members approach issues of tech, there does not appear to be as much heterogeneity in terms of ethnicity and socioeconomic class. The community seems largely (though not exclusively) composed of individuals from European, white and middle-class – or otherwise upwardly mobile – positions. In this sense, our community of practice is less diverse than one might initially expect. Identifying who our community is allows us to better understand how and why they are addressing certain issues and, importantly, from which positions they may be approaching them.

The vantage point from which our community members are addressing the future of the internet has become most explicitly legible during the COVID-19 pandemic, when the issue of work became an even more central theme. Looking towards my future reports, one question that I would like to investigate is how exactly the community is defining ‘work’ and the kinds of labour they include in this category. From an initial, top-down look at our SSNA – as well as the content of our threads – it seems that when the community talks about work, they are largely describing forms of labour which are comparatively flexible to shifts in working conditions. By this I mean sectors where employees can more easily adapt to greater shifts online, to remote or at-home work and to virtual co-working spaces. Put simply: we are largely talking about forms of labour which can adapt comparatively flexibly during a pandemic.

I say this, because this is what our most salient codes (or code clusters) reflect. Let’s take a closer look at this:

The Future of Work

Keywords: defining forms of labour, work as space, work as time, work as movement, work as process

We talked to and heard from experts including financial analysts, academic researchers, authors, open source software developers, founders of co-working associations, virtual meeting room developers, environmental tech practitioners and entrepreneurs. Within these discussions a distinction is being made between a) everyday labour: the practicalities and lived experiences of day-to-day work and b) top-level business models, policies and forms of regulation. Importantly, community members discuss how current decisions and models around work affect the lived experiences of day-to-day work, or rather, how business models and regulatory principles shape the conditions of everyday labour. At the onset of the Covid-19 pandemic close to one year ago, conversations on platform started to concentrate on what we need digital technologies to allow us to do in order for us to work during a pandemic. A lot of thematic focus here was placed on co-working: from designing new platforms for individuals to work together remotely, to facilitating meetings and the sharing of knowledge and resources, to envisioning new ways of working and collaborating. Such discussions tease apart the various dimensions behind the notion of work. The SSNA offers a visual map of the ways in which work is understood, which I define across four categories: 1) work as space (physical and social), 2) work as movement (including access and agency), 3) work as time (including notions of acceleration, slowness and constraint) and 4) work as process (legal, political, economic).

I plan to delve into these various dimensions in forthcoming reports, but for now let me describe these briefly.

Global lockdowns and constraints on mobility have brought the spaces we work in to the center of community discussions. With many moving their work from shared offices, schools and universities to at-home remote work, community members find themselves confronted with the physical and social ramifications of everyday labour. While in the past we were often able to spatially separate the private, professional and social dimensions of our lives, many have discussed the ways in which their work and private lives have become compressed into one place. Or rather, we are now experiencing a new merging of public and private life. On the one hand, this means physically reorganising and repurposing the spaces we live and work in and finding new ways to negotiate our professional schedules with child care, social interaction and personal wellbeing. For many community members, the shift to online, remote work has led to acute experiences of social isolation as the familiar aspects of interpersonal contact take on new and, at times, uncomfortable forms. On the other hand, this also means revisiting what it means to work together, to collaborate and share resources. In this sense, what are the challenges and responsibilities of institutions, companies and organisations to facilitate new forms of co-working and co-living? After all, “work environments aren’t just people in a space together, they are about interacting and generating positive synergies with coworkers” (@nachorodriguez 2020). This means that as more companies adapt to remote work, a key challenge is maintaining a sense of community and shared purpose.

Within these debates, codes like “imagining alternatives” and “imagining the future” take on a central role. This means that work has important temporal implications. On a day-to-day level this might mean reorganising the time we dedicate to work, family, friends and leisure activities. During the pandemic the notion of duration has also become an important conceptual frame: how long will these shifts to alternative ways of working last? How long will our businesses and livelihoods be affected by these constraints? When will a sense of normalcy return? How long will we have to wait for solutions to materialise?

Intricately connected to the dimensions of time and space is the notion of movement. Our physical mobility has in many ways become limited by the pandemic. On the one hand, this means that we may no longer be commuting back and forth to work or university, we may no longer be traveling to different countries for work, and therefore much of our physical mobility is limited to a smaller radius of action than we have been previously accustomed to. Now that many forms of labour have moved to online spaces, a further question is how to ensure socioeconomic mobility. This latter theme is one I would particularly like to explore in future reports. For now, one key factor in socioeconomic mobility is access. One of the main obstacles to online work is linked to access control and internet connectivity. This tells us that being able to work remotely in flexible ways requires resources like a good internet connection, the ability to control access to resources like a good workspace, and a manager and a sector that allow for this kind of work. We can see how inequities will continue to emerge from such differences in access if they remain unchecked, and how this might significantly impact disparities in socioeconomic mobility in the long-term.

A central question surrounding our discussions is thus, as we look toward the future of work, what are our values? How do we ensure that our values are represented going forward? How can tech intervene to shape the future of work?

This takes us back to the responsibilities of employers, companies and institutions more broadly. Work as process necessarily involves a consideration of top-level decision-making processes and the ways in which labour is regulated and governed. Underlying many on-platform conversations is the question of how work is currently regulated and how these models should change in the future. The tension within this debate lies in building alternative labour models, while negotiating the often exploitative business practices, financial incentives and power imbalances which saturate many of our existing labour sectors. One key fear here is that big tech will maintain its monopoly and thereby thwart future efforts to better regulate, fund and diversify our toolsets. As one NGI member pointed out, “its not just their sizeable chests that leaves Big Tech so well positioned in this current crisis. Because credit where credit is due: the privileged new class of remote workers can only be sustained because we have access to solid, well-functioning digital tools courtesy of the data barons of the new Gilded Age” (@katjab 2020). Of course, work has been a core topic in NGI even before Covid, and this is one area I intend to expand upon as I write up further reports. In this way, I plan to track what our relationship to work looked like outside of a global pandemic, and how these relationships have changed one year into it. However, as @katjab’s point makes clear, conversations that have emerged during the pandemic have brought into stark focus how existing inequities are not only exacerbated by crises, but also how they are exploited by powerful actors.

This brings us to the second big theme I intend to flesh out in my monthly reports.

Data, Privacy & Control

Keywords: building alternatives, uncertainty, (dis)trust, negotiating power dynamics

Amongst NGI community members, visions of a collective future are shaped by a shared interest in building open source, inclusive and transparent environments in which power and agency over data, access, monetisation, development and deployment are more equally distributed across a range of individuals and groups. However, as members continuously highlight, these values often clash with existing exploitative business models, technologies of surveillance, opaque and inflexible policies, untrustworthy systems and powerful state and corporate interests. Such on-platform discourse has thus led us to identify a key tension undergirding how we look at digital technology going forward.

In this sense, a palpable tension lies in reckoning with the increasingly unequal, opaque and under-regulated tech landscape in which we are having these conversations, doing our research and pushing for change. This also means weighing the possible trade-offs we have to make as we pursue higher ethical standards, greater oversight, accountability and transparency. The key question is thus how to negotiate our expectations and aspirations for digital technologies of the future with the often stark reality in which they are developed, funded and governed in the present. As one of our community members has recently pointed out, “while there is a perception that the private sector can be modified by the public sector through law and regulation, once something is in the government’s hands, where is the oversight?” (@Schmudde 2020). The means through which we can achieve change are not as straightforward as they may initially seem. This is often because governments are not held responsible for regulating big tech companies and ensuring user safety. Instead, the burden is more often placed on individuals. Another issue is that the systems currently in place, as well as the tools we have available, feel fundamentally unclear and uncertain.

This sense of uncertainty and distrust in our current models is a strong theme in our SSNA. People feel sceptical when it comes to technologies of surveillance that use their personal data. There is also a trade-off between our interest in expanding user control and agency over personal data and the interest in improving and optimising user experience (which we see, for example, in current debates on GDPR checks). This raises the question as to whether individuals should have to constantly make decisions about their own data or whether there are alternative models for ensuring data security without diminishing user experience. This tension between user control and user experience continues in our discussions of advertising and social media platforms. How do we ensure privacy? Which rights are being threatened by the opaque and often exploitative practices of big tech and advertising giants?

The consensus across the community is that we need to fundamentally change our current business and governance models in such a way that would allow us to move from big tech to open source. In this way, emphasis is placed on human-centred design alternatives that empower us to build digital environments that allow users to connect and share resources, while maintaining decision-making power over how their data is accessed, stored and used. How can digital technologies become a tool for the social good? How can we shift our current models to shape the internet towards more open and transparent alternatives? And how can the digital tools we have available in the present be harnessed and developed to empower communities in the future?

Resilience, Welfare & Sustainability

Keywords: health, environment, infrastructure, community-building

Visions of dystopian futures often include scenes of natural disaster, the spread of disease, the total collapse of infrastructures and the breakdown of social networks. In this light, a question seems to be, how might the development of digital technologies contribute to an increasingly dystopian future? And, how can we harness the potential of digital tools to improve our existing systems and pave the way towards a resilient and sustainable future?

As NGI members have shown, we can’t talk about technologies without considering the complex ways in which they interact with and impact our health and welfare.

The pandemic has brought renewed urgency to assessing the stability and preparedness of the tools we have available to ensure public health and safety and to enable our communities to recover and remain resilient during disasters. What do we need from tech in times of crisis? (going forward, it will be interesting to unpack what the community means by ‘crisis’).

Climate and sustainable tech form an overarching theme throughout, one that also links to broader discussions of community and responsibility. Community members are concerned with defining the responsibility of policy makers, governments and big tech to protect the climate (and thereby also calling out corporations, governments and individuals who are failing to do so). As in many of our on-platform discussions, there is a reckoning with the fact that tech can often act as a double-edged sword. This is particularly true when it comes to the climate. Community members have pointed out that the growth of the tech industry and tech products, more broadly, can significantly harm the environment, from un-recyclable devices and by-products to their energy consumption. On the other hand, there is a great deal of optimism over how tech can help us protect and even improve our environment. Smart cities and smart devices are often cited in these debates, as are notions of green and deep green tech.

Themes of resilience and sustainability are also central to understanding the relationship between digital technology and public health. Access to healthcare, for example, is an important domain in which technology plays a key role. On the one hand, the internet and other digital technologies can make crucial healthcare information more widely accessible. They can enable us to make informed decisions about providers and help us access our own health data. During the pandemic, digital technologies allow us to track the spread of the virus and locate vaccination and testing sites. In this way digital technologies can offer quite a lot of agency to individuals and communities. On the other hand, there is a concern about how our data (health, biometric and location) is being stored and used, bringing renewed urgency to questions of regulation and transparency. A further concern is that technology may further exacerbate disparities in healthcare access and contribute to exploitative business practices.

Our exposure to digital technologies, furthermore, plays an important role in community mental health. During the pandemic, communication technologies have allowed us to maintain social contact, to establish networks of support and build online communities. However, community members have also pointed out that our use of digital technologies can heighten anxiety, deepen our sense of isolation and intensify desires for human contact.

As we look toward the future, how can we ensure community resilience? This sentiment is captured in NGI conversations around thrivability – or the act of thriving and prospering. What role will technology play in our ability not only to survive (to attain livelihoods, to establish systems of support, and to access necessities), but to build abundant and sustainable ways of living? Crucially, these questions are addressed from a collectively grounded perspective of shared responsibility. This makes clear that the act of prospering involves shared labour and a vision of the future which is collectively achieved.

In reports to follow, I will unpack themes of resilience, thrivability and sustainability as they are embedded within broader discussions of health, climate and environmental protection and the development and maintenance of infrastructures.

Artificial Intelligence as Case Study

Throughout all the debates on platform, Artificial Intelligence has emerged as a case study, intersecting with many of the core themes and conceptual framings of community interaction.

Notions of dystopian and utopian futures are often cited in reference to the development of AI and machine learning. Such technologies can engender a sense of impending doom in which autonomous systems operate beyond our control. On the other hand, the debate around AI brings with it a great deal of optimism about how it can be optimised and deployed to improve our lives.

In an interview posted in May 2020, Dr Annette Zimmermann – a political philosopher and ethicist working on AI – explained that “public discourse on this issue tends to split into two fairly extreme views, neither one of which is correct. On the one hand, there’s ‘AI optimism’: the view that the increasingly ubiquitous use of AI is inevitable, that we can’t return to not using AI once we have deployed it in a given domain […] On the other hand, there is a dramatically opposing view that says something like, “it is inevitable that all AI will lead to incredibly bad and harmful consequences”. That’s a different sense of inevitability right there. Tech pessimists seem to think that whatever we do, whichever domain we focus on in our AI deployment, the use of automated reasoning methods will always be somehow counterproductive or harmful”.

The notion of inevitability is an important theme as we think about the future, particularly as it ties into our debates over how technologies are developed, whose values and positions they are based upon, who gets a say and how we can intervene. Within discussions around AI, safety and security are central topics. As we have stated elsewhere, security or safety are not static artefacts or states but perceived outcomes of multitudes of interplaying factors. What do these different terms mean for different people and what are the implications of these different perspectives on how we go about imagining, regulating, financing and deploying different infrastructures and technologies? These questions are central across all of the themes explored here, but they are particularly pronounced in the context of AI. As the ethnography team goes forward with our analysis, we will continue to return to AI as a core case study in order to explore the role it plays in broader discussions of regulation, governance and business models, trust, accountability and transparency, as well as the complex ways in which emerging technologies interact dynamically with our social world.

What is the Future of the Internet and All it Encompasses?

This first ethnographic report serves as an initial overview of core emerging topic and conceptual themes. It is, therefore, by no means an exhaustive discussion of the rich and intricate data we have collected over the course of this project. Instead, the aim is to highlight, from an ethnographic perspective, central trends we have observed and to prompt questions for further inquiry.

As I have outlined in this report, the tech experts who have come to form the NGI community share an interest and sense of urgency to build a collective future. This kind of work involves unpacking the ways in which digital technologies can both serve to further exacerbate existing harms, inequities and exploitative practices, as well as becoming tools through which to challenge, redefine and empower communities.

The reports that follow will continue to build on the themes presented here, tracing in more detail who our community is and how forms of collective decision-making, collaboration and convivial labour engender a sense of shared futurity.


Beautifully written, @Leonie. Thank you for this and can’t wait to dive deeper into it (though it’s already incredibly rich!)

It’s solid work Leonie. Do you mind if the outreach crew extracts/reposts the different sections as individual posts for dissemination crediting you as author?

Sounds good to me @nadia. If you post stuff on Twitter, here is my handle: @LeonieEMSchulte


This is very interesting, Leonie! Thanks for sharing!

Thanks @Leonie, I read it with great interest. I have nothing systematic to add, but here are a few scattered remarks.

A nice surprise! Like you say, it was not obvious.

Are we sure that the tension is really between “big tech” and “open source”, and not between “monopolies” and “non-monopolies”? Back at the NGI Forum, @RobvanKranenburg and I participated in a discussion where a man called Oskar Deventer illustrated a vision for a “data wallet”, some kind of device with all your data, encrypted. People would enter into relationships with third parties, and give them only the data that these third parties needed in order to complete the interaction. The example usually given is that of the club bouncer, who asks to see your ID in order to make sure you are of drinking age. The bouncer, in fact, does not care about most of the data reported on your ID card, like your name, address, or place of birth. He only needs one bit of information: that you are, indeed, of drinking age. A data wallet could automate this process, because you could tell it “give bouncers access to my date of birth”.

But here is the catch. Suppose you are engaging in an interaction for which you have a need, and nowhere else to go. Maybe you are entering a foreign country, for a job interview. Maybe you are trying to open a bank account. What is stopping the immigration service, or the bank, from asking your data wallet for all your data? You are vulnerable at that moment, so you might find it hard to push back, even if pushing back is within your rights. Deventer’s answer was straightforward:

This is my nightmare. You cross the Chinese or US border, and then some kind of data vacuum cleaner empties your wallet. This might require issuers policy, stating once and for all what can be done with those data. But I do not yet know how to solve it.

With this kind of argument, Cory Doctorow and the EFF have recently turned to advocating aggressive antitrust policy for the tech sector. You would think that the European Commission would seize the opportunity, because it has the most experience in the world when it comes to regulating Big Tech (thanks to Mario Monti vs. Microsoft back in the day, and Margrethe Vestager vs. everyone more recently).

I am not sure I understand this point completely… but I was reminded of Cesar Hidalgo’s work on how humans judge machines. His lab psych experiments show that humans judge humans on the basis of intentions, but machines on the basis of outcomes. Translated to safety-and-security, people might feel safer around (fallible) well-intentioned human decision makers than around (equally fallible) algos… is that what you mean?


I will be preparing a 5-minute presentation as a contribution to the Forward talk at the NGI forum. I will also prepare a custom visualization.

I asked @Leonie which codes she would recommend for it. Noting here the answer from the Matrix, where it will get erased:

Around “thrivability”? So, I see this as a kind of umbrella descriptor for a number of things. The first is that it describes the community’s orientation towards the future, with the central question being: how can we both ‘imagine alternatives’ and ‘build alternatives’ (the codes ‘building’ and ‘imagining’ occur a lot) that will enable us to ensure ‘resilience’, ‘empowerment’, ‘security’ and ‘agency’? And what are the ‘technological solutions’ we need to ensure this? Another part of this is a focus on ensuring greater ‘regulation’, ‘accountability’, ‘privacy’ and ‘user control’, while still remaining adaptable (‘adaptation’, ‘adapting to new circumstances’ and ‘tech adaptation’). These are linked to conversations around infrastructure, open source, AI, business models, the environment, various forms of labour and education, data protection and privacy (to name a few). There are also a lot of codes around or including ‘collective’, ‘community’ and ‘public’, as well as ‘value’ and ‘common good’, which I take to mean that within these debates, community members are thinking about these issues from a collective action standpoint.

So, ‘thrivability’ summarises these various codes in a way that describes: “the act of thriving and prospering. What role will technology play in our ability not only to survive (to attain livelihoods, to establish systems of support, and to access necessities), but to build abundant and sustainable ways of living? Crucially, these questions are addressed from a collectively grounded perspective of shared responsibility. This makes clear that the act of prospering involves shared labour and a vision of the future which is collectively achieved” (ethnography report 1).

Ping @Markus_D: I have now added an image and some text as presenter notes to the Google Slides presentation file. Does it work for you?

Hi @alberto - great, thanks for the slide and notes! This looks really good. My one concern is that when I read through it, even at pace, it clocks in at around 5 minutes, and we’re trying to keep it to 3 minutes each. How would you feel about shortening the discussion of public health? It’s interesting on its own but takes us in a different direction than the following slides and group exercises. I’d also suggest adding a link to the exchange platform for those interested in learning more, as well as a photo/name to the speaker slide. What do you think?

Sure, will do tomorrow.

@Markus_D, done now.