Masters of Network 4: Networks of Care

Why

Care happens in networks. People take care of each other. They seek advice, medical help and moral support from one another. They exchange knowledge and share resources. They meet, interact, and work together. And, of course, no one can live well when disconnected from the fabric of society at large. (In recent times, care has also been delivered by big bureaucracies, but that approach has its problems. Here we look for something better.)

We think that this ceaseless exchange is collective intelligence at work. Network analysis is a useful tool for understanding this process, and perhaps for finding ways to improve upon it. Thinking in networks is a great way to generate fresh, relevant questions. How do you know your network is going in the right direction? What is a “direction” in this context? Is everyone following the same path? Do people group into sub-communities? What is the focus of each of these (sub)communities?

What

We come together to find out how networked humans can better take care of each other. 

To do this, we study result-oriented conversations. Conversations are networks: people are their nodes, and exchanges are their links. If you don't believe us, click here to explore the Edgeryders conversation network (allow a few seconds for the data to download). But conversations are networks in another sense, too: each exchange contains some concepts. Examples of concepts relevant to care are: well-being, syringe, diabetes, fitness, prosthetics, etc. We can represent the concepts in a conversation as a network. The concepts themselves are its nodes; two concepts are linked if they appear in the same exchange.
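To make this concrete, here is a minimal sketch of both network views of a conversation, in Python with the networkx library (our illustration, not the workshop code; the toy exchanges are invented):

```python
import networkx as nx
from itertools import combinations

# Each exchange: (author, person replied to, concepts it mentions).
exchanges = [
    ("anna", "bruno", {"well-being", "fitness"}),
    ("bruno", "anna", {"fitness", "diabetes"}),
    ("carla", "anna", {"diabetes", "syringe"}),
]

# Person-to-person network: people are nodes, exchanges are links.
people = nx.Graph()
for author, addressee, _ in exchanges:
    people.add_edge(author, addressee)

# Concept-to-concept network: two concepts are linked when they
# appear in the same exchange.
concepts = nx.Graph()
for _, _, topics in exchanges:
    concepts.add_edges_from(combinations(sorted(topics), 2))

print(people.edges())    # who talks to whom
print(concepts.edges())  # which concepts co-occur
```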

Person-to-person conversation networks tell us who is talking to whom. Are there individuals who act as "hubs"? Why? Can we use hubs to improve the process, for example by asking them to spread important knowledge?
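Continuing the sketch above, one simple proxy for spotting hubs is degree centrality; this is an assumption about how one might operationalize "hub", not the workshop's definition:

```python
import networkx as nx

people = nx.Graph([("anna", "bruno"), ("carla", "anna"), ("anna", "dina")])

# Degree centrality: the share of other participants each person
# talks to directly; the top scorers are hub candidates.
centrality = nx.degree_centrality(people)
hubs = sorted(centrality, key=centrality.get, reverse=True)
print(hubs[0])  # 'anna' -- a candidate for spreading important knowledge
```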

Concept-to-concept conversation networks tell us how the different concepts connect to each other. Are there surprises? Do apparently unrelated concepts tend to come up in the same exchanges? Anomalies might mean something interesting is going on. In fact, spotting an anomaly is how John Snow invented epidemiology in 1854.

The fascinating part is this: by looking at the network, we can extract information that no individual in the network has. The whole is greater than the sum of the parts. Collective intelligence! 

How

We take conversation data from Edgeryders and build it into a network. We use open source software for network analysis. We then visualize and interrogate the network to see what we can learn. Our final aim is to prototype methodologies for extracting collectively intelligent outcomes from conversations.

One great output of the workshop would be to unleash our imagination and specify the design & requirements:

  • Which views work best? It will be useful to build them as mockups if we do not already have them -- use coloured pens, paper, clips and cardboard to build a mock-up!
  • For what tasks? Do we need to move things around? Pile them up to trigger on-the-fly comparisons? Lasso an item to trigger some computation? -- use post-it notes, cut and paste pieces of paper, draw arrows to turn tasks into real actions (on a screen!).
  • Using what ingredients (data)? What should we feed the system to accomplish these analytical tasks? -- write them down, cut & paste, associate them with specific tasks, embed them into views.

The workshop is a unique opportunity for participatory design -- we want it to be a source of inspiration to design and build the next-generation EdgeSense dashboard!

Who should come

Masters of Networks is open to all, and especially friendly to beginners. Patients, network scientists, doctors, hackers and so on all have something to contribute. And in the end, we are all experts in this domain: we all give and receive care in the course of our lives, and all humans are expert conversationalists. There's an extra bonus for beginners: networks are easy to visualize. And when you visualize them, as we will, they are often beautiful and intuitive.

Trust us. We have done this before (check out the video above).

Data

We have a dataset drawn from a large conversation that took place on the Edgeryders platform in 2014. It consists of 161 posts and 910 comments, authored by 128 different people. All posts and comments have been annotated by a professional ethnographer. This gives us an ontology of relevant concepts, which we can use to build the network.

That conversation was not about care. We will need to be clever and use these different data to work out a methodology that we can later apply to a conversation about care.

Agenda and challenges

The agenda is simple:

  • We will spend the first hour and a half explaining how the data were created, harvested and converted into a network. We will explore the network together using a piece of software called Detangler, brainchild of the wonderful @brenoust. Detangler is highly intuitive: we can use it to manipulate networks without knowing any network math at all.
  • Then, we'll hack. We can explore the data in many directions. Depending on how many we are, we can split into groups that look at different things. We see at least three possibilities:

Visualization challenge. Create informative and beautiful visualizations starting from our data. Skills needed: design, dataviz, netviz. Coordinator: @melancon (you can call me Guy)

  • It's not only about creativity and beauty, it's about interactivity -- a map seen as a malleable object, so you can squeeze information out of it.
  • It's also about being able to derive the graphical design from the tasks you'd need to perform on the data and its on-screen representation.
    • How is a node-link view useful? How would you intuitively like to manipulate, filter or change it at will while exploring it?
    • Would you need to synchronize the view with a bar chart of some statistics? A scatterplot to see whether things correlate?

Interpretation challenge. How many conclusions and hypotheses can we "squeeze" from the data? Skills needed: social research, ethnography, network science. Coordinator: @Noemi?

  • Interpretation is at the core of the process. You play with the data, you map it, and you iteratively build hypotheses. In the end, you hope to arrive at provable claims.

[Figure: the sensemaking loop]

Quality challenge. Can we think of simple criteria to filter the data down to the highest-quality content only (e.g. only posts with a minimum number of comments, or a minimum length)? Does the filtering change the results? Coordinator: @Alberto
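A minimal sketch of such a filter, assuming posts arrive as Python dicts with hypothetical "comments" and "text" fields:

```python
MIN_COMMENTS, MIN_LENGTH = 3, 280  # thresholds to experiment with

def high_quality(posts, min_comments=MIN_COMMENTS, min_length=MIN_LENGTH):
    """Keep only posts that attracted discussion and have some substance."""
    return [p for p in posts
            if len(p["comments"]) >= min_comments
            and len(p["text"]) >= min_length]

# Re-run the network analysis on high_quality(posts) and compare:
# if hubs or communities change, the filtering matters.
```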

And more. But we insist that every group have a coordinator, who takes responsibility for driving it and for sharing the relevant material (examples: software libraries, notes for participants, pseudo-code...). If we only have two coordinators, we'll only have two groups. If you think you can lead a group, get in touch with us!

Tentative schedule

  • 9h30 - 11h: @Alberto and @melancon (with @Hazem and @Noemi) kick off by drawing the overall picture, following the famous adage that a picture is worth a thousand words.
  • 11h - 12h30: Teams give it a first shot.
    • The viz team will play a game building their ideal visual dashboard using pen and paper and cardboard -- explaining why these features may turn out to be essential when exploring or analyzing network data.
    • The interpretation team's output is critical: the directions they provide will have a decisive impact on how the data will be used, massaged and turned into visual representations.
    • The qualitative team plays a similar role, feeding the interpretation team with high-quality content -- their recommendations will make even more sense if we can link them to paths of interpretation.
  • 12h30 - 14h: Lunch. Feed your brain with proteins and carbohydrates.
  • 14h - 16h: Teams go back to work and build a proof of concept of the ideas/hypotheses they came up with in the morning session.
    • Cross-fertilization of ideas with the other teams is encouraged. People may wish to change teams to widen their experience and knowledge.
    • Teams prepare a short summary of their findings/conclusions to be presented during the wrap-up session.
  • 16h - 16h30: Wrap-up. Team presentations, plenary discussion.

When it is, where it is and how to participate

Masters of Networks 4: Networks of Care is part of LOTE5. It takes place on Saturday, 27 February 2016 at Brussels Art Factory, SmartBE. Sign up by clicking the "Attend" button. Leave a comment below to let us know what your skills are -- we'll put them to good use! We particularly need people to help us document what is done.

How to prepare

Have a look at Detangler and play with the demo map, just to get a feeling for what can be done. If you have questions, write them as comments to this post.

What happens next

A project called OpenCare will convene a large-scale conversation about care. The work in OpenCare will make good use of the insights generated during Networks of Care.


Comments

Moved everything here

Alberto's picture

@melancon, I rewrote your text for outreach purposes. Do change what you don't like. I also had to move the content to a newly created event for technical reasons. 

@markomanka and @Luciascopelliti, please note that I have enrolled you :) It will be fun! Please click on the "attend" button. Same goes for @MoE and @dora and whoever is interested. @mstn? @maxlath? @danohu?


Thanks

melancon's picture

Good. Thanks for helping. You have a talent it would take me yet another life to learn :-)


Hey :-)

Alberto's picture

Funny, I always say the same of you. :D


enrolling

Luciascopelliti's picture

Hi Alberto!

Can I facilitate even if I must leave at 1 pm?


Yes

Alberto's picture

I see your role as helping to shape questions, i.e. map the methodology we'll be working with onto care in general and OpenCare in particular. :)


all right

Luciascopelliti's picture

I'm in


So...

Alberto's picture

... please click on the "Attend" button. While you are at it, please do the same on the LOTE5 event here. @Rossana Torri, can you do it too, please? I will explain the rationale for this better when we meet in Brussels. :)


Ok

Rossana Torri's picture

I did it for the LOTE event.

Unfortunately I have to fly back to Milan on Friday evening...

Lucia will stay for MoN till Sat morning.

"Yes, but what will we DO?"

Alberto's picture

@melancon, my friend: our text is clear and nicely written, but it is not a work program. I added a tentative agenda section. Please look it over and see whether it makes sense. If it does, assign yourself as the coordinator of a challenge... or maybe let's decide to do only one challenge, so we can hack together!

I kind of like the idea of the quality challenge. :)

Input from all participants welcome! @MoE @dora @Betty Gorf @jimmytidey (Jimmy, are you coming?)


Need a hand

melancon's picture

@Alberto

I was about to edit the text of the event, but I thought I should double-check -- I admit I have next to no experience in organizing things the way it is done on edgeryders.eu.

  • I already asked for a list of attendees, thinking I could shape the workshop according to the audience.
    • I could probably refine the different challenges accordingly.
  • I thought I should share the data ahead of time for those who wish to have a look at the material we'll be using. I have a set of JSONs, and I also uploaded everything into a Neo4j database. Neo4j is nice because it lets you readily visualize the data without doing anything special (apart from installing Neo4j).
    • Note: the JSONs I have include much more content than what you describe. I guess the few hundred users and comments you mention were obtained by discarding everything but the items of interest (I see this as being part of the process).

Certainly!

Alberto's picture
  • The list of attendees is going to be provisional at best. Your main audience is the OC consortium. Several people from the ER community will also attend. Some will be more on the data-geek side, like @MoE; others will be from the medical space (I spoke to a woman called Claire at the LOTE5 apéro). We'll have to improvise. My solution is to announce clearly what the tracks will be; the people who come will be attracted by the tracks.
  • Sharing the data ahead of time is a great idea. I suggest a GitHub repo.
  • Careful, though! The dataset contains the whole Edgeryders conversation at the time it was generated, so thousands of posts and well over 10,000 comments. But only those of the Spot The Future project are coded with semantic information! For the rest, you can draw an Edgesense-style social network, but that's about it. Unless you want to try NLP stuff, which I would advise against, because it is a totally different methodological path.

Coordinator for interpretation challenge

melancon's picture

@Noemi, I deliberately put you down as coordinator for the interpretation challenge -- without asking you first whether you would like to, or would even be available! I did it purely on the basis of your experience of previous MoNs. I know you would do a marvelous job.

Guy


Flattered but..

Noemi's picture

Thanks @melancon. The only issue I see is that I have to moderate the European Capitals panel on Saturday, starting at 2 PM.

I would also recommend @Hazem for the job, as he is joining us for LOTE, has done previous work with Edgesense and knows the ER network well enough.


How about teaming up

melancon's picture

Thanks @Noemi

How about teaming up with @Hazem in the morning and then leaving (the most fabulous) MoN4 to join your afternoon session?

@Hazem, please let me know whether this suits you.

Guy


count me in

Hazem's picture

Sure, I will be there.


Good

melancon's picture

@Hazem So you are now officially coordinating the interpretation challenge!

Looking forward to meeting you at LOTE5.

Guy


Network newbie with graphic and statistics background

RossellaB's picture

I am relatively new to networks, but I'm working on a serious project with networks and I have a background in graphic design and in statistics. I can program in R but won't be able to bring a laptop.

I am particularly interested in the visualisation challenge and looking forward to meeting you all!


Grrrrreat!

melancon's picture

@RossellaB Do not forget that we were all newbies at some point, and will probably remain newbies on many topics till the end. I am really happy to count you in.

Looking forward to code in your company :-)

Guy

Thanks Guy! That is certainly

RossellaB's picture

Thanks Guy! That is certainly true.


Question about Detangler

RossellaB's picture

I was having a look at Detangler and I don't understand what the x and y coordinates stand for. I can see scatter plots and bar plots, but I don't know what they tell us about the network. Can someone help me out?


It's all in the interaction

melancon's picture

Hi @RossellaB, good to see you are playing with Detangler.

First thing you need to know: the nodes in the left panel (substrates) are the main focus. Substrates relate to one another through nodes in the right panel (catalysts). Catalysts are the "reasons" why substrates relate to one another. In the demo example, people are connected because they co-participate in political lodges (you may have recognized names from Paul Revere's midnight ride in the American revolution). The quest is to figure out, for instance, who was in a position to reach all of those people quickly (in order to organize a mutiny before the British authorities could counter them).

The x, y positions of the nodes are decided in the following way: nodes in the right panel are displayed using a force-directed layout (ask me if you have no idea what that is). There is no absolute meaning to the x or y values; nodes are just positioned so as to produce a readable display. Nodes in the left panel are positioned according to how they relate to nodes in the right panel. The layout attempts to mimic the layout on the right: substrates are positioned "around" the catalysts to which they correspond (although the catalysts are not embedded in that panel). The reason is to make selection more natural: when you select substrates at the top of the left panel, you may expect the corresponding catalysts to be located at the top of the right panel.

The main feature is the easy selection of substrates or catalysts using the lasso.

We'll be using Detangler with substrates=people and catalysts=topics, for instance.
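The idea behind the two panels can be sketched as a bipartite graph projection; this is not Detangler's actual code, just an illustration with networkx and invented names:

```python
import networkx as nx
from networkx.algorithms import bipartite

# Bipartite people-topics graph: an edge means "person used this topic".
B = nx.Graph()
B.add_edges_from([
    ("anna", "diabetes"), ("bruno", "diabetes"),
    ("bruno", "fitness"), ("carla", "fitness"),
])
people = {"anna", "bruno", "carla"}

# Two substrates (people) become linked when at least one catalyst
# (topic) connects them.
substrates = bipartite.projected_graph(B, people)
print(substrates.edges())  # e.g. [('anna', 'bruno'), ('bruno', 'carla')]
```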

Enjoy!


thanks!

RossellaB's picture
Thank you for the explanation, Guy, now it starts to make sense. I know more or less what a force-directed layout is, although I'm not familiar with the maths behind it.

MoN4 stage-setting conference call!

Alberto's picture

@melancon, are you free for an hour on Friday, say 15 to 16? I would like to touch base with you on the finishing touches to MoN4.


I'm in

MoE's picture

Hi everybody,
  I'm sorry for the long silence, but it's been a long and busy period for me.

I just wanted to confirm I'll be attending MoN4 with @dora (we have accommodation sorted).
We also made some progress with Python, relative to my last updates on ER, but not recently. I'm planning to get back to the code next weekend, and I'm confident I'll be able to give you a better update then.

I can't wait to meet you in person :)

Cheers,
s  t  e


Friday 3pm -- ok

melancon's picture

3pm - 4pm and more if necessary.

I guess you have had a look at the (tentative) agenda, and have also seen that I wish to give MoN4 a participatory design workshop twist.

Do we open the call to all, keep it between facilitators (@Hazem?), or just us two?


Open

Alberto's picture

... so, on Google Hangouts, because you can join with just the link. No Skype.


How do I extract the STF posts

melancon's picture

Is there a tag or something I can grab so I know a post is relevant to STF?

I also need help with the ethno posts, which for now I cannot really exploit.

We'll talk about all this tomorrow I guess.


A bit confused

MoE's picture

I finally took some time to read through the most recent program and comments, and I am a bit confused...

@dora and I had started making a plan about what could be done with the Wikipedia data we managed to mine, but I see there's no mention of Wikipedia at all on this page. So I was wondering: are you giving up on that end, or was it just left aside for the moment, or...?

Again, I'll be catching up with the code stuff this weekend. In the meanwhile, I'd ask: are you planning to have just 2 hours of prototyping for the proof of concept? Isn't that a bit too tight?

I see that properly structuring ideas is the most relevant aspect of the hackathon, but I fear that not having enough time to turn them into proper, working pieces of code might risk producing mainly fluff... I hope I don't sound harsh in saying this; I would just like to hear your take.


But, hackathon

Alberto's picture

@MoE, to what @melancon writes I would like to add two things.

First, this is a hackathon, and that means we enjoy a lot of freedom. If you have done preparatory work on Wikipedia data, you are more than welcome to lead a track on Wikipedia data! We'll treat it as we treat the other tracks – in fact, I might drop my quality challenge and join it myself. We also reserve the right to keep hacking into Sunday – I'll definitely do it if we really get going.

Second, the time limitations will be mitigated by several factors. The first one is good preparation – join the MoN4 call right now to find out more. The second one is the usual trick of all hackathons: we just stay in touch (through GitHub and other channels) and finish our work remotely. The third one is the LOTE5 freedom that I mentioned above. We should be OK.


Sounds Good

MoE's picture

Hi @Alberto (I often have problems with mention hints not appearing, and therefore with mentions not being recognised; is it a known issue or is it just me?)

What you say makes sense and is reassuring. I'm trying to re-organize the several proofs of concept that @dora and I put in place into a unified, simple tool that we might use to query Wikipedia and store the responses in a database for later visualization.

Based on what we managed to fetch via the API, here's what I was thinking (a minimal sketch of these queries follows below):
* we have a list of the medicine pages
* for each page, we can query the pageviews, the links to other Wikipedia pages and the translations into other languages
* for each link to another Wikipedia page, we can tell whether it is a medicine page or not; if it is, we can record that the two are semantically connected
* for each page translation, we can repeat the above queries and store the resulting data
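A minimal sketch of those queries, against the public MediaWiki API and the Wikimedia pageviews REST API (both real endpoints; continuation paging and error handling are omitted for brevity):

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def links_and_translations(title):
    """Internal links and language links (translations) for one page."""
    params = {"action": "query", "titles": title, "format": "json",
              "prop": "links|langlinks", "pllimit": "max", "lllimit": "max"}
    data = requests.get(API, params=params).json()
    page = next(iter(data["query"]["pages"].values()))
    links = [l["title"] for l in page.get("links", [])]
    langs = [(l["lang"], l["*"]) for l in page.get("langlinks", [])]
    return links, langs

def daily_pageviews(title, start="20160101", end="20160131"):
    """One month of daily page views for one article."""
    url = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
           "en.wikipedia/all-access/all-agents/%s/daily/%s/%s"
           % (title, start, end))
    return [(i["timestamp"], i["views"]) for i in requests.get(url).json()["items"]]

print(links_and_translations("Influenza"))
```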

With the stored info we could try to analyze:
* which pages are connected, assuming their sub-network might represent a certain topic
* which pages (in absolute terms and relative to a certain sub-network) have the most views
* for each topic or sub-network, what the weight of a specific language is in the overall page count

Things we might learn:
* higher page counts for certain topics might indicate higher interest in, and/or practice of, self-diagnosis for those topics (I'm aware that the crux is how to tell between the two, but we can investigate this further)
* the existence of page translations in certain languages might imply a geographical and/or ethnographical relevance for certain areas and/or ethnic groups
* higher page counts for a certain topic, in a certain language, might be relevant too

Thinking of Edgesense and ways to use it (maybe) differently from what it was designed for, we might have nodes representing pages as "semantic knots" (rather than "bits of conversation") and links representing their semantic affinity. Edgesense could then be used to check whether the sub-regions it finds match the ones we found via Wikipedia's internal links (mentioned above).
We could also visualize page counts for each sub-region and each node, both as a global count (including all languages) and as a "filtered" count per ethnographic group.

Finally, if we imagine this visualized in three dimensions, we might have:
* a planar XY mapping, with all the English medicine pages, where all connections are visualized and semantic sub-regions are highlighted (the distribution of nodes in 2D space wouldn't necessarily have a geographical meaning)
* a Z layering, where at each depth/height we have a "language plane" showing which of the English pages are translated into a certain language
Connections would exist across layers, giving a "volumetric" representation of medical semantic networks.

I guess this last bit might sound particularly abstract or confusing until I manage to sketch a graphic prototype. I hope I'll be able to do it soon, on paper at least, to explain the idea a bit better.

If anything of what I wrote makes sense to any of you, let me know :)


These are two hackathons, not one!

Alberto's picture

I think I understand. You want to do two things.

  1. A multiplex network of pages connected by links. The multiplex part takes advantage of Wikidata: we know that "influenza" in English is the same thing as "grippe" in French through Wikidata. So we can follow all the links from "influenza" and all the links from "grippe"; these will induce two networks, one English-speaking and the other French-speaking. The networks might be different, and we can analyze that difference. In practice, you have a multiplex network in which each language is one layer.
  2. A page count exercise. Page counts have a separate collection method, which we discussed and fiddled with back at 32C3.
  3. If both exercises are successful, you have a multiplex network of medical pages in Wikipedia, each of which is associated with a number of page counts per unit of time. You could then map this information onto the network, even visually.

What I like about the approach is that it is relatively simple to compute correlation coefficients across different language versions of (medical) Wikipedia. Correlation between two languages is high if the probability that two random pages are connected in language A, conditional on those two pages being connected in language B, is close to 1. Notice that you could do this without even looking at page counts! It seems like a lot of work for one day of hacking, but if that's what you want to do, go for it.
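A toy sketch of that measure (the exact formula is our reading of the description above): the fraction of page pairs linked in language B that are also linked in language A, with pages aligned across languages via Wikidata:

```python
import networkx as nx

def conditional_link_prob(layer_a, layer_b):
    """P(pair linked in layer A | same pair linked in layer B)."""
    shared = set(layer_a) & set(layer_b)  # pages present in both layers
    b_edges = [(u, v) for u, v in layer_b.edges()
               if u in shared and v in shared]
    if not b_edges:
        return float("nan")
    hits = sum(layer_a.has_edge(u, v) for u, v in b_edges)
    return hits / len(b_edges)

# In practice the nodes would be Wikidata IDs, so that "influenza" and
# "grippe" collapse onto the same node.
en = nx.Graph([("influenza", "fever"), ("fever", "aspirin")])
fr = nx.Graph([("influenza", "fever")])
print(conditional_link_prob(en, fr))  # 1.0: every French link exists in English
```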


"Multiplex Network" sounds sleek! :)

MoE's picture

I had no idea that this was called a multiplex network, but I googled it and, yes, it looks exactly like what I was thinking.
When I mentioned a 3D visualization, I had in mind exactly this: each coloured layer is a language, each dot is a page, and each link between pages symbolizes a semantic affinity. @Alberto, we're on the same page, right?

Regarding @Alberto's and @melancon's concerns about time, I agree. The whole process might be neither trivial nor quick.
On the one hand, that's why I had hoped we would have more than 2 hours of programming; but after @Alberto's reply, I think it is reasonable to aim for a simple proof of concept (we choose one question and show via code how we could provide an answer). If that proves worth it, I/we can invest some more time the day after to extend the code and produce something more meaningful.
On the other hand, I am trying to produce as much code and db data as I can, to have a good base to start from (and to share with everybody else, obviously).

So far I have most of the code in place to do the data mining. I had to go through a few iterations, as we have limited db storage (a free account on mongolab) and I needed a way both to run quick grouped queries and to store the results efficiently enough. I think I'm pretty close: I could store around 25K entries in around 6 MB (of the 200 MB we are allowed) in a few hours. These entries cover 4.5K English pages and all their available translations into any language. This means that within 24h we should be able to populate the db with all the pages, from scratch.

This does not include the page counts, though, which require a separate query; I'm still figuring out whether there's a way to optimize those (i.e. not sending one query per page).
I'll test this later or tomorrow, but I'm confident I can get decent results in reasonable time.
This partially answers @melancon's question about what I'm storing. In terms of page counts, because of both time and storage restrictions, I was thinking of storing sample counts for a given period (i.e. one month of page counts, sampled daily) instead of a tighter sampling.
I thought that, for the sake of demonstration, any timeframe can be used to prove the concept, and we can assume that the real measurements will then take place on more accurate/representative data.
Do you think that is acceptable as an assumption?

Does it still sound too scary? I trust your judgement, guys, seriously.


Multiplex it is

melancon's picture

@MoE @Alberto

Yes, multiplex. I find it a convenient concept, probably more buzzwordy than deep -- anything is multiplex if you think about it... it depends on what you are ready to call a layer... We'll have plenty of time to chew on this.

You got things right. There are several ways to compute similarity between entities described by a "bag of words" (which you can actually see as embedded in a high-dimensional vector space...). The better your index (the words associated with entities), the better the similarity measures, from which you usually derive a topology by linking sufficiently similar entities. As for the page counts, I would expect larger time spans to lead to somewhat uniform page counts over all pages, while finer time spans may indicate when/if pages are consulted simultaneously.

The link structure you consider, the similarity measure you compute: it all depends on what question/task you are supporting.
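A minimal sketch of that recipe, using scikit-learn for the bag-of-words vectors (an assumption; any vectorizer would do): compute cosine similarities, then link entities whose similarity clears a threshold:

```python
import networkx as nx
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = {"flu": "fever cough virus", "cold": "cough virus nose",
        "gym": "fitness weights"}
names = list(docs)
vectors = CountVectorizer().fit_transform(docs.values())
sim = cosine_similarity(vectors)

# Derive a topology: link two entities if they are similar enough.
G = nx.Graph()
threshold = 0.5
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if sim[i, j] >= threshold:
            G.add_edge(names[i], names[j], weight=sim[i, j])

print(G.edges(data=True))  # flu-cold are linked; gym stays isolated
```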

As I see it, you have done quite a lot of work and will be bringing fantastic material to the workshop. This may well open the door to interesting future collaborations. I mean, people usually get together to finish what was started during the workshop, sometimes ending up with blog posts, reports, or even scientific publications. In this case, my feeling is we may have things to say to the academic crowd. @Alberto?


Mentions module

Alberto's picture

The mentions module started acting up after we upgraded the site to the newest version of Drupal Commons, and it is currently disabled. See here.


Blame it on me

melancon's picture

Hi @MoE,

you are right: in the end we decided not to include anything about the Wikipedia data in the final program, mainly because we feared there wouldn't be much to investigate. The motto for MoN is to have clear domain questions together with data, then mine and visualize the data in ways that can help refine the questions, and iterate until you reach some sort of answer.

We had also included the Edgeryders conversation data from the beginning, in order to have a chance to look at that type of data too -- although what we have for the moment is less concerned with care. It is that type of data that OpenCare plans to deal with (people interacting and discussing issues -> a socio-semantic network).

Page counts did not seem to offer a tangible opportunity to look at how people perform self-diagnosis, so we (@Alberto and I) decided not to include them in the final MoN program.

--

Now, regarding your comment on the risk of being "shrinked" by time... well, it is real. Previous MoN sessions ran over two full days. This time we had to cope with lots of constraints, on both the OpenCare and the LOTE5 side. To help with this, I will make the data available later today, hopefully with some code snippets, so we can save time and still do some work.

Hope this helps.


Zero Blame

MoE's picture

Seriously, I hope I didn't give the wrong impression. I can only appreciate the organic way topics naturally adapt here on ER, depending on who's active in the discussions. I wasn't for a while, so it's legit that the topic might have faded in favour of others :)

I was asking simply because I had spent some time on the Wikipedia thing and thought something interesting could be observed (or we could at least try). I gave a hint of my thoughts above, in reply to @Alberto. I'd be happy to hear what you think.

Peace :)
s  t  e


my skills

alessandro contini's picture

Hello!
I'd like to attend the session, and I think my skillset will be a good fit for the viz or interpretation team:
- paper prototyping / brainstorming
- D3.js (HTML, CSS)

Looking forward to joining and hacking :)


Paper prototypes, yeah!

melancon's picture

Hi @alessandro-contini,

please join! I'll be more than happy to have you on board. I am sure you can contribute great design ideas and/or improvements, and even paper mock-ups!

D3 is so great -- it would be nice to have some of our stuff up on the web too.

See you in Brussels next week

Guy


Let's do it!

melancon's picture

@MoE this is great news. I didn't dare put that forward for MoN4, as I feared I would not be able to keep my promise.

It seems you have already done part of the work, and about all of the thinking. Then go ahead, I'll register for your session :-)

I am quite sure I will learn from your experience. And if I understood correctly, you have already put up a db registering page counts on an hourly basis -- all great news.

Looking forward to seeing you in Brussels next week.


Getting ready

melancon's picture

To those of you who plan to attend MoN4, and more particularly to those I expect to actively "play" with networks:

@Alberto @Hazem @dora @MoE @RossellaB -- I must be forgetting someone ...

We will be using the Tulip network viz framework. Tulip can be downloaded from its homepage (follow the download link). Other options are also possible, but we might enjoy Tulip's Python scripting capabilities. The Tulip API is quite intuitive, although the docs could be improved. I'll be there to help.

Here is a file you can load after installing Tulip (installation is easy: run the .exe or .dmg, depending on your OS). The file contains users, posts and comments, together with their associated tags. All networks in this file connect different types of entities: users to posts or comments, comments to tags, etc. That's why it is named "bipartite".

Our task will be to explore how we can model interaction between users, inferred from the available traces (who commented on which posts, etc.).

Once we have a user-to-user file, we will want to use Detangler to inspect the tags around which interaction takes place. I plan to make some code available to ease the process of producing Detangler files from user-to-user networks built with Tulip.
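Here is a rough sketch of that derivation step with Tulip's Python bindings (not the official workshop script; the "type" property name is an assumption about how the bipartite file encodes node kinds):

```python
from tulip import tlp

bip = tlp.loadGraph("bipartite.tlp")
kind = bip.getStringProperty("type")  # assumed values: "user", "post", "comment"

users = tlp.newGraph()
seen = {}

def user_node(u):
    """Mirror a user node from the bipartite graph into the new graph."""
    if u not in seen:
        seen[u] = users.addNode()
    return seen[u]

for item in bip.getNodes():
    if kind[item] == "user":
        continue
    # Users attached to the same post or comment are taken to interact.
    attached = [n for n in bip.getInOutNodes(item) if kind[n] == "user"]
    for i in range(len(attached)):
        for j in range(i + 1, len(attached)):
            a, b = user_node(attached[i]), user_node(attached[j])
            if not users.existEdge(a, b, False).isValid():
                users.addEdge(a, b)

tlp.saveGraph(users, "user-to-user.tlp")
```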


am having problem with

Hazem's picture

I am having problems installing Tulip on Ubuntu... I am still new to Ubuntu, so I might need some help. I will try again in the morning, though.


Asking for help

melancon's picture

I am not that used to Linux myself.

Did you have a look at the SourceForge forum? In case you do not find proper help, please send an email to Patrick Mary <mary@labri.fr>, cc'ed to me. He is the main engineer in charge of Tulip.

Guy


Hackpad: please read and make improvements

RossellaB's picture

Hi all,

It was really great meeting you today! I wrote a report on the spot, on the hackpad for this group.

Please have a look and make adjustments.

Enjoy the rest of the meetup and see you all online soon!

Rossella


Hackpad updated

Alberto's picture

Great work, @RossellaB. I have made some improvements to the hackpad myself. 

The main thing we need now is the code for the Wikipedia track by MoE and Dora. 


Wikipedia code

dora's picture

Hi there, thank you so much for the event, it was a nice experience!

Our Wikipedia code is a branch of the spaghetti-open-data GitHub repository. The branch is called "dev-moe" and I think MoE has already sent a merge request.

You can browse the code here: https://github.com/FuturoAnteriore/visualizing-self-diagnosis/tree/dev-moe

Cheers!

Thank you!

Alberto's picture

It was great to meet you, Dora. You guys are really impressive and we hope we'll stay in touch.

I have not seen pull requests yet...

