IC2S2 is the International Conference on Computational Social Science. It has a call for papers that seems right for SSNA work. Advantage: you submit only a two-page extended abstract, which is doable. The deadline is March 12, with the submission revisable until March 18. If accepted, authors must submit a video in July.
Note the steering committee, full of heavyweight netsci people: Salganik, Jackson, Fortunato. Duncan Watts, one of my favorite network scientists, is chair.
I see two ways to go:
1. If we have emerging empirical results from POPREBEL, we could submit those.
2. If it is too early for that, we could submit a methodological paper.
I’m up for it, but I don’t have the bandwidth between now and then to put in work on the abstract – so if others can do that, I am happy to work on the presentation itself should we get accepted.
I don’t think there’s sufficient data from POPREBEL right now, since the “pilot study” phase doesn’t have high-quality underlying data. We’ll have better results by July for sure, though, so we could still propose it.
Dear All, I think we/you should do this, and the best option – to my mind – is number 2. If we have more and better data by July (and if we do not, I am going to jump from the nearest bridge), the paper can be rebalanced from more methodology to more results. People routinely adjust their presentations during the post-abstract period.
Who should I include as authors? Besides present company (but you need to tell me you want in), I would love to include @melancon and possibly @brenoust or @bpinaud, if they are interested. I need this answer by tomorrow, in order to create the submission on EasyChair.
Could @Jan and/or @Richard and/or @amelia give me an opinion on the methodology and the pitch proper? You will find it towards the end of the abstract:
We proceed as follows. First, we induce a CCN from a corpus obtained from an online forum discussing populist politics in Eastern Europe (336 informants, 2,284 contributions, 5,863 annotations and 1,424 codes). Next, we attack it with alternative reduction techniques. Finally, we assess each one against the criteria of groundedness (how well rooted in social theory is it?), effectiveness (how well does the reduced CCN lend itself to interpretation?) and faithfulness (how much important information is lost in the reduction, for someone who knows the original corpus well?).
Reduction techniques include:
[…list of techniques]
Our contribution consists in proposing an interdisciplinary approach to evaluating techniques for processing ethnographic data. These techniques are, in themselves, purely mathematical, but they are evaluated in terms of how well they support qualitative research in the social sciences.
The question is: how do we measure these things? Self-reporting on a Likert scale? Do you, as qualitative researchers, have a better idea? For this answer we have a few more days, as the abstract is editable until the 18th.
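As a side note, for concreteness: below is a minimal sketch of what the CCN induction step could look like in code, assuming the CCN is a network whose nodes are codes and whose edges record co-occurrence on the same contribution, and that annotations arrive as (contribution, informant, code) tuples. The data layout, names and the use of networkx are my illustrative assumptions, not the actual POPREBEL pipeline.

```python
# Minimal sketch of the CCN induction step.
# Assumption: each annotation is a (contribution_id, informant_id, code) tuple;
# networkx and the attribute names are illustrative choices, not the real pipeline.
from collections import defaultdict
from itertools import combinations
import networkx as nx

def induce_ccn(annotations):
    """Nodes are codes; an edge links two codes annotated on the same contribution."""
    codes_by_contribution = defaultdict(set)
    informant_by_contribution = {}
    for contribution_id, informant_id, code in annotations:
        codes_by_contribution[contribution_id].add(code)
        informant_by_contribution[contribution_id] = informant_id

    ccn = nx.Graph()
    for contribution_id, codes in codes_by_contribution.items():
        informant = informant_by_contribution[contribution_id]
        for a, b in combinations(sorted(codes), 2):
            if ccn.has_edge(a, b):
                ccn[a][b]["cooccurrences"] += 1
                ccn[a][b]["informants"].add(informant)
            else:
                ccn.add_edge(a, b, cooccurrences=1, informants={informant})
    return ccn
```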
Why on earth did I miss this message posted on March 2nd and only receive a notification today – must be me … Anyway, I’m in and now looking at your abstract.
I’m in, but I’m not sure how to answer your question on “measurement” – assigning numerical scales to interpretive categories is not how I work. I would keep the criteria as they are and make the evaluation qualitative, not quantitative.
Instead of the three you propose, why not use existing measures? e.g. a selection of these:
Transparency and systematicity (Meyrick 2006)
Validity (do the methods match the research question?), reliability (replicability, consistency), generalisability (“conceptual clarity”, Toye et al 2013).
(i) Clarification and justification, (ii) procedural rigor, (iii) sample representativeness, (iv) interpretative rigor, (v) reflexive and evaluative rigor, and (vi) transferability/generalizability (Kitto et al 2008).
Conceptual clarity (generalisability/transferability) makes sense to me as a good way to evaluate SSNA (one of the possible “measures”), alongside transparency and systematicity (which amount to procedural rigour), and then representativeness (in our case, this could also be tied to interpretive rigour: how well your conclusions match the group they emerged from).
Yes, I agree this would be better. But I am inexperienced with the academic standards of qualitative evaluation, which is why I ask.
This sounds great, much better than what I came up with! And it is great to have references. Take your pick and I will edit the abstract (or just edit it yourself). Pointers to references welcome.
(i) Clarification and justification, (ii) procedural rigor, (iii) sample representativeness, (iv) interpretative rigor, (v) reflexive and evaluative rigor, and (vi) transferability/generalizability (Kitto et al 2008).
Their paper is very clear in describing each part:
We can amend later if need be (and will add discussion of each with other citations in the paper itself).
Throwing my two cents into the discussion:
A Likert scale is the easiest option under the given time constraints.
Groundedness might be quite subjective, so yes, a Likert scale sounds right for it.
Effectiveness could be measured through a list of randomized tasks whose success can be scored; such tasks could be based on finding elements, or on overall understanding. This has the advantage of giving multiple evaluation criteria, but it usually takes quite some work and time to design the tasks, evaluate them (measuring time is good too), and analyse the results.
Faithfulness could also be measured more extensively with memory tasks (what people remember of the network an hour, or even a day or two, later).
Thanks everyone! The way I would prefer to do it is this: the netsci folks propose and implement network reduction techniques on the data, and the anthro folks evaluate them. Come to think of it, this is the challenge I proposed for Masters of Networks 5!
So, we drop my criteria and instead use those by Kitto et al. This part of the methodology will be governed by @amelia, @Jan and whoever else wants in.
Ok, second pass done, thanks @melancon and @brenoust for all your comments.
Missing:
One sentence on methodology. How do we check techniques against Kitto’s six criteria?
Yes, as remarked by almost everyone, a figure would be great. But we are really struggling with space limitations. The main thing we can do is cut the list of reduction techniques, which is tentative anyway, or, less radically, replace it with a list of mentions.
Reduction techniques include: eliminating edges encoding a low number of co-occurrences; eliminating edges encoding a low number of informants; eliminating edges via k-core decomposition […]
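For the curious, here is a rough sketch of what those first three reductions could look like on a networkx graph built as in the earlier sketch; the thresholds and edge-attribute names are illustrative assumptions, not the values we would actually use.

```python
# Rough sketch of three of the reduction techniques listed above.
# Assumption: the CCN carries "cooccurrences" and "informants" edge attributes,
# as in the earlier induction sketch; the thresholds are placeholders.
import networkx as nx

def drop_low_cooccurrence_edges(ccn, min_cooccurrences=2):
    """Eliminate edges encoding a low number of co-occurrences."""
    reduced = ccn.copy()
    weak = [(u, v) for u, v, d in reduced.edges(data=True)
            if d["cooccurrences"] < min_cooccurrences]
    reduced.remove_edges_from(weak)
    return reduced

def drop_low_informant_edges(ccn, min_informants=2):
    """Eliminate edges encoding a low number of distinct informants."""
    reduced = ccn.copy()
    weak = [(u, v) for u, v, d in reduced.edges(data=True)
            if len(d["informants"]) < min_informants]
    reduced.remove_edges_from(weak)
    return reduced

def k_core_reduction(ccn, k=3):
    """Reduce via k-core decomposition: keep the maximal subgraph in which
    every node has at least k neighbours."""
    return nx.k_core(ccn, k=k)
```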
I also need information about the affiliation of each co-author.
50-word abstract:
Semantic social networks are a way to encode large ethnographic corpora as structured data, and express them as networks. Using a corpus on populist politics in Eastern Europe, we compare different techniques to reduce the network for improved legibility.
It looks good to me for now – I rephrased the sentence to make it stronger and more deliberate (e.g. we use X to evaluate Y), but I don’t think more elaboration is needed in the abstract, since it’s a qualitative evaluation. Let me know if you disagree or have an idea of what kind of elaboration you might need as a reader.