Technically “Central and Eastern Europe” I think, though it steals two words from you
This is broad. I’d emphasize that we target a specific audience, that of (digital) ethnography, and validate our results from this perspective (or define legibility criteria based on the specific needs of this audience).
Université de Bordeaux, CNRS UMR 5800
Need a physical address?
It all sounds great! I like that interdisciplinary idea and an attempt to answer the questions: So what? What does it all mean?
My data:
Jan Kubik
Professor, Department of Political Science
Rutgers, The State University of New Jersey, New Brunswick
Email: kubik@polisci.rutgers.edu
Professor of Slavonic and East European Studies
School of Slavonic and East European Studies
University College London
Email: j.kubik@ucl.ac.uk
This is a wonderful discussion and I wish I could seriously engage, but I cannot due to other pressing tasks (you know what…). I have been teaching qualitative and interpretive methods in political science for years now and have tons of ideas. We may want - one day - to have a seminar on all of that? I will share with you - hope you do not mind - one of my products (the whole issue of this Newsletter may be of interest): 3 Introducing rigor to teaching interpretive methods.pdf (525.1 KB)
Yes, I am sure this is OK, and worth it if it allows us to include a figure.
Still haven’t looked into Kitto’s list.
From what I understand, Kitto et al. applies to research questions. I don’t see how to transfer it directly to reduction techniques. So we’d need to evaluate how each of those reduction techniques backs up the research questions ethnographers ask.
Do I get it right?
As for the affiliation:
- Osaka University, Institute for Datability Science, Osaka, Japan
Looking at the short abstract, it’s fine for me now.
As for Kitto’s approach, I am not familiar with it, and even less with translating it into survey tasks, but by the look of it I guess we first need to devise and explain the sample of people we want to interview, and prepare a set of relevant questions that can assess the qualitative aspects of our method.
This is really great, @Jan! These 3 frames in particular could be very useful for our paper:
- The first is in recognizing and classifying observations (or “data”). For example, is a group of people gathered in a market square a religious procession, a political rally, or a crowd getting ready for an open-air concert? Does the uniform of a person whose actions we are studying signify a soldier or a miner? Interpretive skills enable basic coding and classification. Without them, much comparative work is inconceivable. Weber calls this type of interpretive work direct observational understanding. I refer to it as classificatory interpretation.
- The second interpretive moment comes when we try to specify what drives human agency: “Why does/did she do this?” When researchers ascribe motives (psychological approaches) or reasons (rational choice approaches) to human behavior, they engage in what Weber refers to as explanatory understanding. I call it motivational interpretation.
- The third is in reconstructing the meaning of actions, statements, displays, performances, etc. Discerning “What does she mean by this?” or “What is the meaning of this action?” involves semiotic/communicative interpretation.
No, not just RQs. The whole process of research.
Another measure from Jan’s paper:
For King, Keohane and Verba (1994), “good research, that is, scientific research” has four characteristics:
- The goal is inference. There are two types of inference: descriptive and causal. Descriptive inference involves “using observations from the world to learn about unobserved facts.” Causal inference involves “learning about causal effects from the data observed.”
- The procedures are public
- The conclusions are uncertain
- The content is method.
@Jan goes on to say:
Interpretation meets all four criteria: (1) it relies on inference to connect observed phenomena (signifying elements) with the (unobserved) meanings (signified elements); (2) its procedures are (or at least are supposed to be) public and repeatable; (3) its results are provisional (uncertain) and always subject to verification and updating; and (4) its content can be construed as method. The task, whose realization has already begun, is to systematically demonstrate the validity of these points as well as specify and examine the method’s:
(1) ontological affiliations (How are society and politics understood and defined?);
(2) epistemological commitments (How are societies and politics, defined in a specific manner, knowable?);
(3) rules and procedures;
(4) disciplinary varieties (semiotics, hermeneutics); and
(5) specific techniques (for example, content analysis, [critical] discourse analysis, ethnographic accounts of meaning-formation through rituals, etc.).
What I understand is that the IC2S2 paper we are writing takes these criteria to motivate the value of the SSNA approach. What I see is that we can argue how the project/filter method we use:
- usefully supports inference
- reinforces reproducibility (repeatability), in that it can be formally described to ensure equivalence between any two implementations (see the sketch after this list)
- holds a part of uncertainty, since the algorithms do not decide how parameters should be set to obtain optimal readability
- together with open data and open algorithms (and even code), forms a decisive part of the proposed method
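To make the repeatability point concrete, here is a minimal sketch, assuming the reduction is a simple edge-weight filter on a code co-occurrence network (function name, threshold, and toy data are all mine, purely illustrative, not our actual pipeline):

```python
# Minimal sketch: reduce a code co-occurrence network with an explicit,
# reproducible edge-weight filter. All names here are hypothetical.
import networkx as nx

def filter_by_weight(g: nx.Graph, k: int) -> nx.Graph:
    """Keep only edges whose co-occurrence count is >= k.

    Because the rule is fully specified by (g, k), any two
    implementations given the same inputs must return the same graph;
    that is the repeatability claim above. The choice of k itself is
    left to the researcher, which is where the uncertainty lives.
    """
    kept = [(u, v) for u, v, w in g.edges(data="weight") if w >= k]
    return g.edge_subgraph(kept).copy()

# Toy example: three ethnographic codes with co-occurrence counts.
g = nx.Graph()
g.add_edge("trust", "community", weight=5)
g.add_edge("trust", "market", weight=1)
g.add_edge("community", "ritual", weight=3)

reduced = filter_by_weight(g, k=3)
print(sorted(reduced.edges()))  # the weight-1 edge is gone
```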
Again, do I get it? I am still unsure I read the conversation on evaluation right.
Yes. The way I think about it is, once again, lossy file compression. We lose information, we gain interpretability: do we get a net gain, or not?
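Continuing the toy sketch above, one crude way to put a number on that trade-off (an improvised proxy of mine, not an agreed measure):

```python
# Crude "net gain" report for the lossy-compression analogy, reusing
# g and reduced from the sketch above. "Signal retained" (share of
# total edge weight kept) is just one possible proxy.
def reduction_report(g, reduced):
    total = sum(w for _, _, w in g.edges(data="weight"))
    kept = sum(w for _, _, w in reduced.edges(data="weight"))
    print(f"edges: {g.number_of_edges()} -> {reduced.number_of_edges()}")
    print(f"signal retained: {kept / total:.0%}")

reduction_report(g, reduced)
# edges: 3 -> 2
# signal retained: 89%
```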
I really like this, Guy!
@bpinaud @brenoust @Richard can I have your affiliations, assuming you want in? I plan to submit today.
Same as Guy: University of Bordeaux, LaBRI CNRS UMR 5800, France
Richard Mole, Professor of Political Sociology, UCL, Gower Street, London WC1E 6BT
Of course, here you go (yes, I still use my Osaka affiliation for my academic work; I have a guest position there):
Yes, this sounds really good. We can have a very interesting conversation about stages of inference. Do I get it right that we can talk about inference generated by the technique and its algorithms, and then “ethnographic interpreters” add one more layer or stage? All of this happens in a constant dialogue between the researchers from both “layers.” That meets the criteria of Peircean abduction, in some sense.