@Jan I definitely wanted to run with abductive reasoning in Peircean style after our conversation!
@amelia We are meeting about the budget at my 11 (in 10 minutes). Will you be able to join us? I sent you a link to our new spreadsheet via email. If not, can we talk soon? Yes, abduction all the way!
This is the part I don’t understand. Similarly, I don’t understand (fully) Guy’s translation in our terms:
Can anyone help?
Also, for some reason my Overleaf file got stuck, looking for a nonexistent bin.bib file…
Submission done.
We still have a few days for reviewing…
@alberto Content is method - they mean by this that science can study anything. What makes such a study “scientific” is a specific method. I would add a specific epistemology, i.e., something more general than method and including a specific way of asking questions. I suppose I am - first of all - a Popperian: scientific questions are related to hypotheses that are falsifiable. And I like to assume that we try to specify - as fully as we can - a range of answers that would falsify a hypothesis, prior to the onset of our investigation. That does not apply - by definition - to exploratory and hypothesis-generating projects. But here the point is - I believe - to try to generate hypotheses that ARE falsifiable. So, as an interpretivist, I need to be as clear as possible. When I say “This uniform signifies a doctor, not a miner,” I need to show my data (a picture of that uniform?) and delineate the scope of my claim, so someone can say: Do not overgeneralize, you idiot: in culture X doctors wear such uniforms, but in culture Y miners do so. As the object of interpretation gets more complex, this job gets very challenging.
Glad to help! I really miss working more closely with you guys
I’d need a live chat to square my thoughts.
That’s exactly it, Jan – probably worth wording it in the paper literally like you just did (@alberto?); at least both of us understand each other.
No, that’s not quite how I understand it… unless by “the technique” you mean only network reduction, not SSNA itself. This is because the network being reduced is generated by ethno coding, so quite a lot of interpretation of the raw data (the contribution) is already there.
Good point @alberto – your reading is sharp, as usual.
I do focus on the reduction technique. And that’s how I read Jan, talking about inference pertaining to the reduced graph, with the interpretation being used to evaluate how faithful/relevant/robust/… the technique is as seconding the ethno work.
In fact, the whole idea of SSNA is that “interpretation” is multiscale. Ethnographers interpret the single post when they code; but when thousands of posts are annotated thousands of times, post-level interpretation does not translate into corpus-level interpretation because of a short-term memory problem. It’s a bit like you using linear algebra because you cannot do n-dimensional geometry in your head the way you do with 2D. Hence SSNA, where graphs are built, reduced, and then interpreted again, but at a different scale.
So, @Jan @melancon @amelia and all, I gave another round of polish to the abstract. The key bit on criteria to evaluate network reduction techniques now reads like this:
Following [King et al.], we evaluate the extent to which each reduction technique (i) usefully supports inference, here in the sense of interpretation; (ii) reinforces reproducibility, in that it can be formally described to ensure equivalence between any two implementations; (iii) retains a part of uncertainty, since the algorithms do not decide how parameters should be set for optimal readability; (iv) combines with the data model and network construction technique above as part of the SSNA method.
Works for everyone?
I like most of it a lot, as much as I can understand this very dense paragraph. Point (iii) is somewhat beyond my ability to comprehend, though I “sense” (I think) where it is going. Could you explain (iv)? In King et al the last point is “the content is the method,” meaning that science is defined by its approach, not what it studies.
Thumbs up.
Point (iii) means: we are applying a mathematical transformation onto the network, and only after application will we know if it indeed does improve legibility and support inference. Results are uncertain.
Point (iv) means that the reduction is a component of the SSNA method. Is it unclear? Anyone wants to propose a different formulation?
I should not speak up, perhaps, as this is a technical language in which I am not fluent, but I do not get how the sentence “We evaluate the extent to which each reduction technique… combines with the data model and network construction technique above as part of the SSNA method” (that is point (iv)) means simply “that the reduction is a component of the SSNA method.” How would you say it in plain English? I sense a lot of important stuff here, but it flies above my head…
If we had the space, we could report King’s list and establish a correspondence between his general criteria for good qualitative research and our specific evaluation of network reduction techniques. In this case:
“the content is the method” (King); “science can study anything. What makes such a study ‘scientific’ is a specific method” (Jan) => good qualitative research
becomes
“a reduction technique combines harmoniously with other parts of the SSNA method” => good reduction technique (abstract)
A reduction technique could also be inconsistent with the other parts of the method. For example, if we reduce on the basis of the number of co-occurrences, this reduction is consistent with the technique of constructing the network in the first place. A co-occurrence network is interested in showing what pairs of codes occur together. This is a different question from, say, listing the codes, or counting their occurrences. Reduction by number of co-occurrences shows what pairs of codes occur together the most times, and so it is tentatively good.
By contrast, reducing the network to a list of codes would be bad: we would lose the critical information of occurring together. Reducing it to a list of the highest co-occurring pairs would also be bad: we would lose the critical information about network structure (for example, two codes do not co-occur with each other, but they are tightly connected because they co-occur with the same neighbors). By contrast, losing the information about co-occurrences that appear only once seems like a smaller loss.
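To make the contrast concrete, here is a minimal sketch of the two steps being discussed: building a co-occurrence network from coded posts, then reducing it by dropping co-occurrences that appear only once. The toy posts and code labels are invented for illustration; this is not the project’s actual pipeline, just the logic of “reduce by number of co-occurrences” under those assumptions.

```python
from collections import Counter
from itertools import combinations

# Hypothetical toy corpus: each post is the set of ethnographic codes
# that annotators attached to it (labels invented for illustration).
coded_posts = [
    {"trust", "health", "community"},
    {"trust", "health"},
    {"trust", "community"},
    {"health", "money"},
]

# Build the co-occurrence network: an edge (a, b) weighted by the
# number of posts in which codes a and b were both used.
edges = Counter()
for codes in coded_posts:
    for a, b in combinations(sorted(codes), 2):
        edges[(a, b)] += 1

# Reduce by co-occurrence count: drop edges that occur only once.
# This keeps the "what occurs together, and how often" information
# the method cares about; reducing to a plain list of codes would
# discard it entirely.
reduced = {pair: w for pair, w in edges.items() if w > 1}
```

In this toy example, the pairs ("health", "trust") and ("community", "trust") each co-occur twice and survive the reduction, while the one-off pairs (such as ("health", "money")) are dropped, which is exactly the “smaller loss” being argued for above.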
I would appreciate a suggestion on how to express it more clearly. Deadline tomorrow!
Of course you should! I am way out of my depth here. I am used to deploying methods that are accepted: you might justify the choice of one statistical estimation technique over another (example: logit vs. probit for binary variables), but not argue that the whole exercise has meaning – that is done for me by the broader discipline.
I tried to gather our thoughts about this matter here: Qualitative research, knowledge and discovery: a discussion on the epistemological strengths and limits of ethnography and SSNA - #4 by alberto.
@alberto and @melancon It is very useful. I believe you are both right. We can reconstruct our technique, step by step. That will show, as one of the first steps, Alberto’s “ethnocoding” (a lot of interpreting already there - this is my “classificatory interpretation”), then a specific dataset emerges that is structured by co-occurrences, etc. (here all that analysis of algorithms, what they do and what they do not do, applies), and then we are back in the ethnographers’ “house” where we all look at those beautiful graphs and come up with hypothetical interpretations (again, but on a “higher” level of abstraction). The method of their falsification/verification is/should be both iterative (we go back to any stage we want/need) and relentlessly, intersubjectively self-critical as we try to reach an interpretive consensus.