Field methods paper: lit review and comments

Ok, looking again I see he also OKed Wed afternoon. So let’s set in stone the meeting on Wednesday 17th October at 14.00 CEST. Fingers crossed for @markomanka’s availability. Meeting link: Launch Meeting - Zoom

Is this happening today?

The busy one is @melancon. Let me check…

@melancon is saying “today at 16.00 CEST”. Does it work for you, @amelia @markomanka @tah?

Ok for me :slight_smile:

Sorry, cannot. Tuesday is OK.

Hi all, just confirming that we are on for the call today. Can everyone update to confirm availability?

Confirm.

@amelia and @markomanka, I managed to get a hold of @melancon. He says he is only free at 15.30 CEST (which was the time we had tentatively set for last week). Can you do it? If not, I propose we go ahead anyway without Guy.

Can do. I think it’s necessary to talk today, whoever can make it. We need to set a timeline for the writing — months disappear quickly!

Dressler et al. (2005) Measuring Cultural Consonance: Examples with Special Reference to Measurement Theory in Anthropology… Field Methods.

This paper tries to measure cultural consonance ('the degree to which an individual approximates in his or her own behaviour or belief the shared cultural model in some domain') quantitatively. It draws on psychometric theory, arguing that psychology has the most explicit measurement theory of the social sciences. Their aim is to be able to measure things in anthropology systematically without losing its best quality: sensitivity to local meaning and context. That is the part of the paper I'd cite; in both methodology and research aims, its goals are very different from ours. It also tries to pave a way for hypothesis testing in anthropology, which I'd argue it does less convincingly.

The paper first theorises cultural consensus as a model, built from semi-structured focus-group interviews and unstructured individual interviews. Participants did free listing, pile sorts and rankings of different social concepts, so the authors could understand what informants thought was important. The authors then measured each informant's distance from the shared model to determine cultural consonance in terms of lifestyle, social support, family life, and national characteristics (this is a gloss).
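
Dressler et al.'s actual instruments (free lists, pile sorts, formal cultural consensus analysis) are more involved than this, but the core idea of consonance as distance from a shared profile can be sketched in a few lines. This is only an illustration; the function names and rating numbers are invented, not from the paper:

```python
# Toy sketch: each informant rates the importance of the same items,
# the shared 'cultural model' is the mean rating profile, and an
# individual's consonance is (inverse) distance from that profile.

def shared_model(ratings):
    """Mean rating per item across informants (the shared model)."""
    n = len(ratings)
    n_items = len(ratings[0])
    return [sum(r[i] for r in ratings) / n for i in range(n_items)]

def consonance_distance(individual, model):
    """Euclidean distance from the shared model; smaller = more consonant."""
    return sum((a - b) ** 2 for a, b in zip(individual, model)) ** 0.5

ratings = [
    [5, 4, 1],  # informant A's importance ratings for three items
    [5, 5, 1],  # informant B
    [4, 4, 2],  # informant C
]
model = shared_model(ratings)  # roughly [4.67, 4.33, 1.33]
scores = [consonance_distance(r, model) for r in ratings]
```

The point is only that 'distance from the centre' has a straightforward operationalisation once you have comparable ratings from every informant; getting those ratings is where the ethnographic work sits.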

I’m ok as well

Murthy, D. (2008). Digital Ethnography: An Examination of the Use of New Technologies for Social Research. Sociology, Vol. 42, Iss. 5, pp. 837-855.

This paper discusses digital social research techniques and recommends combining 'digital' and 'physical' ethnography. It begins by describing ethnography as telling 'social stories', which remains true in digital ethnography. The paper is from 2008, so while it argues that the uptake of new digital technologies in social research methods has been slow, this seems less true now. Many of the technologies discussed are on the visual-anthropology side (digital cameras, video recording), but some include things like videoconferencing and webcams. The paper then goes on to encourage the use of four technologies: digital video, social networking websites, blogs, and online questionnaires.

The relevant parts of this paper have to do with blogs and social networking sites. Though Murthy does point out the potential of the internet to offer anthropology at scale, the reference there is to online questionnaires. People online can provide different, often more personal responses compared to face-to-face interviews (Miller and Slater), suggesting that a combination of the two methods would be powerful. Digital video, Murthy argues, also gives a voice to people who would otherwise be harder to interview (like asthma patients) and lets people control their own means of communication (instead of the ethnographer holding the recorder). Much as on our own platform, Murthy says that research blogs can be used to collaboratively share research data and results, allowing informants to interact with research as it is ongoing. Murthy calls this 'collaborative ethnography', where 'the community meaningfully becomes invested in the researcher's work through consultation and critique.' The online presence is democratising and holds researchers accountable. Blogs can also chart ethnography in real time and serve as platforms for methods discussions.

Murthy ends by saying that 'the combination of participant observation with digital research methods into a "multimodal ethnography" may provide a fuller, more comprehensive account' of social life. What we also have to do is make sure our analytic categories and capacities develop alongside it (Sassen 2002). Worth citing, if just to say 'we are doing this thing Murthy calls for'.

Snodgrass, Dengah, et al. 2017. “Online Gaming Involvement and Its Positive and Negative Consequences: A Cognitive Anthropological ‘Cultural Consensus’ Approach to Psychiatric Measurement and Assessment.” Computers in Human Behavior.

Essentially the same approach as Dressler. It studies cultural consensus using psychiatric metrics, draws on interviews with gamers, and concludes that current measures of gaming addiction aren't necessarily the best. There is a qual-quant component here again (the desire to measure cultural phenomena), but nothing like our online community ethnography or our modelling of their social/semantic worlds. So, like the Dressler paper above, of limited use. But it is interesting that people are trying to measure things while staying culturally sensitive.

Dengah, Snodgrass, et al. 2018. “The Social Networks and Distinctive Experiences of Intensively Involved Online Gamers: A Novel Mixed Methods Approach.” Computers in Human Behavior.

The authors use survey data and 'egocentric social network interviews' (N = 53) to understand the relationship between social support and online gaming involvement/experience. The compelling part of this piece is their desire to go beyond survey data, but while their methodology is mixed, I cannot call it 'ethnographic' (there are only interviews, no actual participant observation).

The social network itself is the object of analysis here, rather than a tool to further ethnographic analysis of cultural phenomena (the research question asks whether one's social network shapes, or is shaped by, online involvement and experience). The networks were constructed manually during interviews: participants were asked about the people who were important in their social network, the relationships between those people, and their opinions of gaming. So we have to take the participants' word for it, and as we know in anthropology, people do not always do what they say or say what they do :slight_smile:

Similarly, their use of the 'online' is different (conversations aren't convened online to discuss actual-world issues; the topic of analysis is the online world itself, a gaming space). So while the paper discusses both social networks and digital ethnography, both are used in ways different from ours. Interesting paper, though! It allows them to talk about how gamers might construct social networks that support their gaming, leading them to over-play. The causal direction, though, was up for grabs (whether gamers who played a lot surrounded themselves with people positive about gaming, or whether being surrounded by people positive about gaming led to more playing).

Van Holt, T., Johnson, J. C., Carley, K. M., Brinkley, J., & Diesner, J. (2013). Rapid ethnographic assessment for cultural mapping. Poetics, 41(4), 366-383.

This piece is about scaling and automating coding, and about coding accuracy. The authors try to find a way to analyse large swaths of online textual information, recommending semi-automated coding (balanced recall and precision), which they say is preferable both to fully automated coding, which is usually inaccurate (high recall, low precision), and to human coders, who are slow (and have low recall, high precision).
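
To make the recall/precision vocabulary concrete: precision is the share of assigned codes that are correct, recall the share of truly relevant passages that got coded. A toy sketch (all numbers and names invented, not from the paper):

```python
# Toy illustration of the recall/precision trade-off between an
# automated coder (tags a lot: catches most true instances, plus many
# false hits) and a human coder (tags little: mostly right, misses many).

def precision_recall(assigned, relevant):
    """assigned: set of passage ids a coder tagged; relevant: gold-standard set."""
    true_pos = len(assigned & relevant)
    precision = true_pos / len(assigned) if assigned else 0.0
    recall = true_pos / len(relevant) if relevant else 0.0
    return precision, recall

gold = set(range(10))                    # 10 passages truly about the theme
auto = set(range(9)) | {20, 21, 22, 23}  # catches 9/10, but 4 false hits
human = {0, 1, 2}                        # all correct, but only 3/10

p_auto, r_auto = precision_recall(auto, gold)    # ~0.69 precision, 0.9 recall
p_human, r_human = precision_recall(human, gold)  # 1.0 precision, 0.3 recall
```

The semi-automated 'human in the loop' strategy is pitched as the balance point between these two failure modes.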

This article is definitely worth @alberto @melancon and @markomanka reading as well.

It focuses on ethnographic coding, which has the goal of maximizing the 'contextual properties of the original texts'. Their goal is a 'rapid analysis of a culture, the socio-economic and environmental drivers of culture, and how these processes change over time'. They compare three coding strategies in detail: fully automated, semi-automated, and human. The semi-automated one employs a 'human in the loop' approach (what they call a data-to-model process, or D2M). The coder reviews the automatically extracted concepts and can change them, resulting in a list of concepts and their frequencies. Those concepts are then grouped into ontological categories using a machine-learning technique, and the categories are again vetted by a human. Then links between concepts are made automatically, using a proximity-based approach: the user specifies a window size within which all concepts are linked to each other. This results in a network representation of the textual information, where each concept is associated with an ontological category and the concept networks contain weighted, bi-directional links. Finally, the network is visualised using software.
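
The proximity-based linking step described above is simple enough to sketch. This is not Van Holt et al.'s software (we don't have access to it); it is a minimal illustration of the idea, assuming concept extraction has already happened upstream. The function name and example concepts are invented:

```python
# Sketch of proximity-based linking: slide a window of user-chosen size
# over the sequence of coded concepts; any two distinct concepts that
# co-occur inside a window get a link, and the link weight accumulates
# across windows (pairs that co-occur often end up with heavier links).

from collections import Counter
from itertools import combinations

def concept_network(concepts, window=3):
    """Return weighted links between concepts co-occurring within a window."""
    links = Counter()
    for start in range(len(concepts)):
        seen = sorted(set(concepts[start:start + window]))
        for a, b in combinations(seen, 2):
            links[(a, b)] += 1
    return links

coded = ["fishing", "income", "fishing", "regulation", "income"]
net = concept_network(coded, window=3)
# e.g. net[("fishing", "income")] is the weight of that link
```

Note the design choice: because the windows overlap, concepts that sit closer together are counted in more windows and so get heavier links, which is one way proximity can translate into weight. The result is the undirected analogue of their weighted concept network; from here it could be handed to any graph library for visualisation.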

In short: interesting, everyone should read it, and we will almost definitely need to cite it. It could also be useful as we further develop large-scale coding strategies.

See you in two hours at Launch Meeting - Zoom

I remember reading a paper about how people are bad at perceiving their social networks, but I cannot find it anymore. Perhaps @melancon can help.

Noted. But remember this is devil-in-the-details territory. A friend was complaining to me that many applied research environments (OECD, for example, but the JRC as well) are insisting on topic models, which in her opinion are not open enough to novelty and serendipity. Human experts are used to define the baseline against which the data get analyzed, so the algos obviously see the data in terms of what the human experts expect. I am not fully convinced by this critique, because I think that topic models, as I understand them, are vulnerable to false positives, not false negatives: they will find, if anything, spurious novelty. I have asked for further info, but she went silent.

Anyway noted, I will read it.

I completely agree with you; I'm generally skeptical as well, on similar grounds. This was the only article I read that attempts something resembling what we are doing (creating networks of concepts via ethnographic coding), so I think it's worth reading. The issue, of course, is limited access to the process/software itself.