Marco, please note that I checked my code, corrected several mistakes, collapsed the two figures into one to save space, and revised the text accordingly.
Can I ask you how many words were produced in the last six months of the opencare online conversation? In absolute terms, and as a percentage of the total?
Could I also ask you to include in the computation the codes that predate 2016 AND were used to code opencare? From the project's point of view, the first time they were used to code a conversation is essentially the moment they were generated.
If we exclude them outright from the calculations, we bias our quantitative analysis of the dynamics of ethnographic coding…
The computation is based on the annotations authored by @amelia. If she re-used pre-existing codes (I doubt it), those are automatically included. Pre-opencare annotations on old content are not included.
I need to run a script for that. Will do.
OK, I had misunderstood the original text concerning past codes then, thank you for clarifying.
I am preparing the paragraph, as I guess you understood. I look forward to the word counts, thank you
908 contributions, totalling 174K words, since 2017-07-01.
In total: 3,887 contributions, with 820K words.
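The percentage Marco asked for follows directly from these two figures; a minimal sketch (variable names are illustrative, and the word counts are the rounded ones quoted above):

```python
# Share of opencare words produced in the last six months,
# using the rounded figures quoted in this thread.
recent_words = 174_000   # words in contributions since 2017-07-01
total_words = 820_000    # words in all opencare contributions

share = recent_words / total_words * 100
print(f"Last six months: {share:.1f}% of all words")
```

So the last six months account for roughly a fifth of the total conversation volume.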
ahoy mateys! Update from me: I plan to have my bit done by tomorrow. Sunday at the latest.
OK, 2 and 3 done. For 4, I'm just waiting on the picture. If Graph Ryder is up to date, I can go ahead and do it based on that. Is it?
Would be great if you could!
Here. See which one you like best. As always, color encodes communities of nodes. Do you know how to upload it to Overleaf and include it in the LaTeX file?
1. With k > 6 (32 codes, 33 edges)
2. With k > 5 (58 codes, 63 edges)
3. With k > 4 (75 codes, 97 edges)
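For readers following along: the three variants above come from filtering the code co-occurrence network at different thresholds k. A minimal stdlib sketch of that filter, on made-up toy data (not the actual opencare codes or counts):

```python
# Toy co-occurrence counts between pairs of ethnographic codes.
# The k-filter keeps only edges whose count is strictly greater
# than k, then keeps the codes left with at least one edge.
co_occurrence = {
    ("legality", "regulation"): 9,
    ("regulation", "safety"): 7,
    ("safety", "community-based care"): 6,
    ("legality", "do-it-yourself"): 3,
}

def k_filter(edges, k):
    """Return (codes, edges) with co-occurrence strictly greater than k."""
    kept = {pair: w for pair, w in edges.items() if w > k}
    codes = {code for pair in kept for code in pair}
    return codes, kept

codes, edges = k_filter(co_occurrence, 5)
print(len(codes), "codes,", len(edges), "edges")  # 4 codes, 3 edges
```

Raising k prunes weaker co-occurrences first, which is why the figure shrinks from 75 codes at k > 4 to 32 at k > 6.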
Thanks! Haven’t done it before, but will let you know if I can’t figure it out easily.
Notice how the Golden Triangle (legality, regulation, safety) is, at all three levels of filtering, part of the core cluster with the mothership code community-based care. It means the partition of the nodes into communities is pretty solid (Louvain modularity ≈ 0.6).
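For context, the modularity score quoted here measures how much denser within-community edges are than expected at random; values around 0.6 indicate a strong partition. A minimal stdlib sketch of the computation on a toy graph (two triangles joined by a bridge, not the opencare network):

```python
def modularity(edges, partition):
    """Newman modularity Q = sum over communities of
    (L_c / m - (d_c / 2m)^2), for an undirected, unweighted
    graph given as a list of edges."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for community in partition:
        nodes = set(community)
        l_c = sum(1 for u, v in edges if u in nodes and v in nodes)
        d_c = sum(degree[n] for n in nodes)
        q += l_c / m - (d_c / (2 * m)) ** 2
    return q

# Toy graph: two triangles joined by a single bridge edge.
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("d", "e"), ("e", "f"), ("d", "f"),
         ("c", "d")]
partition = [{"a", "b", "c"}, {"d", "e", "f"}]
print(round(modularity(edges, partition), 3))  # 0.357
```

Louvain itself is the heuristic that searches for the partition maximising this Q; tools like Graph Ryder report the Q of the partition found.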
Also note that I have manually zapped the accursed research question code. I think we should zap it from the database, in fact…
OK, sounds good. I think it has probably been more annoying than useful to us ultimately. Maybe it would be good to grab all the text associated with it, then zap it, if possible.
I like the k>5 visualisation the best. I've edited the paper to make sure the walkthroughs make sense with that visual. If it's easy for you to do, it would be great if you could pop the image in; if not, let me know and I will sort it out.
I’m also happy with the k>4 one as well, since it shows more. Both walkthroughs can be seen on both, so either is fine. I just think the k>5 is clearer.
Image uploaded and included in the text.
The caption says k>6, but in this thread it says k>5. Which one is it? If 6, I need to tweak the text to reflect that.
It's k > 5, which is the same thing as k >= 6, so the caption and the thread agree. Sorry for the confusion.
Cool, just checking; all good.
@alberto I just realised I kept following (and replying to) the wrong thread → Annotations vs codes time series - #5 by markomanka