Hey Jirka. I would distinguish (A) “pornography” as consumption of media content (such as photos or videos) - then code it in Y media, and (B) “doing pornography” as a (participative) action/activity, when someone is doing porn (shooting videos, doing webcam shows) - then use some code in Y activities.
Hi all. I couldn’t connect because I am on my way to a research interview in Plzeň/Pilsen (the city of beer). I had to fight for my seat on the train, which is totally full; then my computer died, and now I can’t log in to the session because my Zoom is too old and an update needs to be downloaded. I apologize.
POPREBEL coding meeting (8.10.)
Mania - Polish fieldwork
New phase of fieldwork: approached by an Emergency Room (high rate of non-vaccinated patients); they asked me to do research on why the region has such low vaccination rates.
-People from cities and villages in the south of Poland.
-More insight into the correlation between conservative voting and vaccine scepticism.
Jan Who are the people we are talking to?
-What’s the discourse among PiS voters who ARE anti-vaxxers vs. pro-vaxxers…
Richard - German fieldwork
- Received permission on Monday to disseminate surveys among the Berlin Police.
-We have contacts for Saxony and Brandenburg…
→ Balance out the skewed gender lens.
-Djan cannot access the social media groups he was studying previously.
Jan: Most of our suggestions that you can see will eventually be out.
-We have too many codes (still…as we said months ago…)
-Self imposed deadline: having it done by the end of the month.
Mania: can we delete codes later (as we go on)?…
-Wojtek: creating dozens of Y categories and sticking to them.
-Jan: Our problem is to create a coherent structure on two levels (X and Y).
-Wojtek: once we’re done with the codebook, we’ll train you on it.
-The idea that we don’t have to focus too much on the X codes has been with us for a while…
Jan: We realized that the world presents itself to the people as a set of problems (ontology of the world in peoples’ minds).
-Ethnomethodology: the key category is the situation; you analyze the world through the concept of the situation…for us it’s ‘problems and needs’.
-We abandoned the idea of the grammar of action - it would be impossible to execute.
Jiri: started organising the EMOTIONS category.
-Generating a new table with X categories; by coincidence I pushed down all the codes that we might need to get rid of.
-I don’t know if the emerging hierarchy will be strictly organised according to the psychological approach to emotions.
-Jirka is coming up with a hybrid approach to creating the codebook: a different categorization of emotions that might match our research purposes better.
-We are finishing the final report for the ‘antisemitism’ project - corona, antisemitism, and populism in the V4 countries; I will share the report with you.
(Czechia and Poland represent polar opposites).
-Grammar of action: I do not really have an articulated idea; there might be a possibility to work on automated structural analysis of the text with the coding.
-Structural-functional systemic analysis (download the coded material).
-Follow up project?
-It is not clear who is doing what in the coding (underlying problem).
-What is the nature of the connection (the relationship)…
E.g.: “Smith”, “Jones”, “killed”…these appear in a longer text…
-Smith and Jones are mentioned in the first five sentences; 10 sentences later someone mentions that ‘Joseph killed his cat’… the system generates ‘killed’, ‘cat’, ‘Smith’ and ‘Jones’…and it may appear as if there is a relation between ‘Smith’ and ‘killed’, but in reality there is none.
-Alberto: proximity solution?
-If you have thousands of data points → relationships, then the set of true correlations appears.
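The proximity idea could be sketched roughly like this (a toy Python illustration; the function name, keyword set, and window size are my own assumptions, not anything decided in the meeting): keywords are paired only when they occur within a fixed word distance of each other, so ‘killed’ ten sentences away never gets linked to ‘Smith’.

```python
from itertools import combinations

def proximity_pairs(tokens, keywords, window=5):
    """Pair keywords only when they occur within `window` tokens of
    each other, so distant mentions don't create false edges."""
    positions = [(i, t) for i, t in enumerate(tokens) if t in keywords]
    pairs = set()
    for (i, a), (j, b) in combinations(positions, 2):
        if a != b and abs(i - j) <= window:
            pairs.add(tuple(sorted((a, b))))
    return pairs

text = ("Smith and Jones opened the meeting ; "
        "much later someone says Joseph killed his cat").split()
print(proximity_pairs(text, {"Smith", "Jones", "killed"}))
# → {('Jones', 'Smith')}: no spurious Smith/killed pair
```

With thousands of posts, the true associations would then accumulate weight while one-off accidental proximities stay near zero.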
-The major insight from the first round of coding: we are more at the level of the hierarchy of problems than at the level of edges. (Poland - people are in a very bad situation in terms of healthcare and housing).
Wojtek: LGBTQ and demonization → we have to fork it.
Share with us the work on the economic side of POPREBEL @Jan pleaseee
A small problem with categories for protected interviews. @Djan, @Maniamana let us know what you decided…
Several co-occurrences within one text.
-it is also easy to build a separate index.
-Alberto will go ahead and build a graph using Tulip.
-As a group we have to take responsibility for how we approach this co-occurrence issue…
-Jirka: Using the bigger approach, which produces more edges = intensity of co-occurrences within one post.
-Alberto: An interview that is 4x as big as another will have 4x the number of edges…
-What it does do - adds to the strength of the existing edges
-Wojt: Green light to use multiple instances of a given code within one unit of analysis that is the post.
-Effort which would allow us to increase the strength of connections within one post.
-Tool of measuring the strength of connection between posts.
-The system does not allow us to manipulate an additional connection to convey the emotional load that someone is expressing…
-It is useful to use version 1 → it lets us show that somebody associates 2 given ideas…it also generates edges with other codes (it doesn’t solve the problem of having just a few posts). The posts are not as long now, so it wouldn’t be such a problem.
-The exchanges are not so long so it wouldn’t generate false positives.
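A minimal sketch of what “version 1” could mean in practice (my own reading, with illustrative code names): every pairing of code instances within one post is counted, so applying a code twice strengthens its existing edges without creating new ones.

```python
from itertools import combinations
from collections import Counter

def edge_weights(post_codes):
    """Count every pairing of code instances within one post; repeated
    codes add to the strength of existing edges, not to new edges."""
    weights = Counter()
    for a, b in combinations(post_codes, 2):
        if a != b:
            weights[tuple(sorted((a, b)))] += 1
    return weights

# A post where "church" is coded twice and "traditions" once:
print(edge_weights(["church", "traditions", "church"]))
# → Counter({('church', 'traditions'): 2})
```

This matches the point above: multiple instances of a code within one post add to the strength of the existing edges.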
- I thought that we decided that we will be chopping up the interviews…
-We have 2 types of materials - one generated by interviews, the other by conversations (person x can appear on the platform several times).
-E.g.: Jitka interviews person x - this interview has five major questions (five major posts)…would it be possible to have a picture per post (5 units, each generating edges) and then a picture of the conversation as a whole…?
-Can we generate it both ways? Both 5 posts and one conversation (including the 5 posts)
-The larger the unit, the more dense the network becomes…
-If the corpus is one gigantic unit, you would have all codes co-occurring with all codes (you would still have weights: the weight of an edge = the number of co-occurrences).
-The length of the post will generate lots of edges.
-It makes sense that the coded edges would be similar across the graph.
-Only one graph that expresses the interviews…
-If ‘the church’ and ‘the traditions’ were linked 10 times in 10 different posts somewhere in the whole corpus…the weight of the edge is 10?
-Edges of a weight one in a single post…
-Why we decided to divide the interview into 5 units →
You have a question and the first post is something about ‘church’ and ‘traditions’…
Post n. 3: people talk about ‘economy’ and ‘corruption’
If we put it all together in one conversation we will get new edges between ‘church’ and ‘corruption’ and between ‘traditions’ and ‘economy’… that is why we chopped it up, right? Answer is YES.
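The chopping decision can be illustrated with a small sketch (illustrative codes only, assuming one unit of analysis = one post and edge weight = number of units in which two codes co-occur):

```python
from itertools import combinations
from collections import Counter

def cooccurrence_edges(units):
    """Edge weight = number of units in which the two codes co-occur."""
    edges = Counter()
    for codes in units:
        for a, b in combinations(sorted(set(codes)), 2):
            edges[(a, b)] += 1
    return edges

posts = [
    ["church", "traditions"],   # answer to question 1
    ["economy", "corruption"],  # answer to question 3
]

per_post = cooccurrence_edges(posts)               # each answer is its own unit
whole = cooccurrence_edges([posts[0] + posts[1]])  # whole interview as one unit

print(sorted(per_post))  # no church/corruption edge
print(sorted(whole))     # church/corruption and traditions/economy appear too
```

Generating both views (5 posts vs. one whole conversation) is then just a matter of choosing how the material is split into units before the graph is built.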
-D: If 2 codes co-occur many times, the collective intelligence makes it dense.
-P: number of people in the study that makes that connection.
-Since we have the chance, part of the problem can be taken out by consistent coding.
-In the beginning it was fragmented, low level of coding…
-Then we developed larger levels of text being coded…
“You are the most consistent and disciplined bunch of coders”
-How you store and label the codes… you are accountable to the process.
-Take care of the data - if there is anything you do not need, just get rid of it.
-Next discussion we will have will be about THE HIERARCHY.
@rebelethno where is today’s meeting?
It is postponed to next Friday because Jan and Wojt can’t attend today.
Dear All, I apologize for this cancellation on such short notice. I have a medical issue (nothing major) and need a procedure that has to be approved at several levels (welcome to the nonsense of the American health care system). So, I am still in the process of scheduling… And it is time-consuming as well as unpredictable.
Anyway, Wojt and I will continue working on the codebook and will study the points raised by @alberto. Alberto, I now have access to our text and will start making suggestions, but it would be good if @Richard, @amelia and I could have a meeting soon to discuss our contribution to the project.
Sure. What happened is this: Richard observed that it would be good for you guys to have access to the results of the computation process. Some results are available (sections 5.2 and 5.3 of the paper; I am now updating section 5.4, but its gist is going to be this). Others are not ready, due to various impediments on the side of our co-authors in Bordeaux. So I left you guys alone.
But, my preference is working in parallel. We have quite some work to do to frame the problem, link it to the replicability crisis, etc. Happy to talk soon, if you guys do not need to look at the finished computation results.
Hi @rebelethno, are we meeting today at 3pm (CET)?
As far as I know!
Good morning! Here is the place where I am waiting to see your lovely faces: https://rutgers.zoom.us/j/96889855169?pwd=V1RDV3g5RUdIdWpWSkh4RUlvejc2dz09
I will get back to you on this today. Monday 2-5 - I teach, but it is an exam (in-class, on paper), so perhaps I can try to talk to you as they are writing.
Sorry, Zoom is making me update my software!
I think @Jirka_Kocian and I are in the wrong place – we are on the link attached to the calendar invite.
This is where we are…
I wanted to check the code “marketing”, with two annotations. One annotation is mine, but the other is not from our project.
When I checked the source text where it is annotated, I found that many of the codes are associated with our project - though the text is not. (see picture)
I also wanted to check what the other +56 annotations to the code “marketing” are. Besides “political marketing” (by Jirka), there are codes from other projects (e.g. “advertising”) childed to the code “marketing”, which again has two annotations - one is mine from our project, the other is from “katejsim”. (see picture) I have no idea how to deal with this. E.g. duplicate “marketing”: one copy with my annotation and the associated “political marketing” would be left under some(?) name and parented to our Z category, while the other would keep the name “marketing”, the second annotation, and the children such as “advertising”, etc.?