Good idea from Marc-Antoine of Imagination4People. Assembl is also being tested by the Loomio community (the subgroup on Loomio talking about Loomio). In that case, they do an A/B test: what is the impact of having seen the Assembl-generated summary on what people do (of course “what people do” must be precisely defined and measurable).
Not sure how that could be technically done (with the newsletter? With Google Analytics?) but the idea is intriguing. What do you think @Noemi and @Ruxandra?
That’s indeed an interesting idea. To run an A/B test, we would need to split the Edgeryders community into two groups: we would show the Assembl synthesis to one group and not to the other.
For the A/B test to have statistical meaning, whether we measure the impact through Google Analytics, newsletters or something else, we would need to split the community into homogeneous parts. The biggest mistake we could make would be to put the more active Edgeryders in one group and the members who are interested but connect less often in the other; that would introduce a significant bias into the results of the A/B test…
… because over a large enough group, most differences will average out across the two partitions.
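As a side note, the “careful split” described above is usually done by stratifying on activity before randomizing. Here is a minimal sketch in Python, assuming a hypothetical `activity_level` field on each member record; the real split would use whatever engagement metric Edgeryders actually tracks:

```python
import random
from collections import defaultdict

def stratified_split(members, seed=42):
    """Split members into groups A and B, balanced by activity level.

    Assumes each member is a dict with a hypothetical "activity_level"
    key (e.g. "high" / "low"); swap in the real engagement metric.
    """
    rng = random.Random(seed)
    strata = defaultdict(list)
    for m in members:
        strata[m["activity_level"]].append(m)
    group_a, group_b = [], []
    # Randomize within each stratum, then deal half to each group so
    # both groups end up with the same mix of activity levels.
    for stratum in strata.values():
        rng.shuffle(stratum)
        half = len(stratum) // 2
        group_a.extend(stratum[:half])
        group_b.extend(stratum[half:])
    return group_a, group_b
```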
A possibility would be to do this through the Edgeryders newsletter: one version would announce that the summary exists and link to it, the other would not. Mailchimp can then run an A/B test on the click rates of the two versions.
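For what it’s worth, the click-rate comparison Mailchimp runs is just a standard two-proportion test, so it is easy to sanity-check by hand. A sketch with made-up numbers (the click and recipient counts below are placeholders, not real campaign data):

```python
from statsmodels.stats.proportion import proportions_ztest

# Placeholder figures: clicks and recipients per newsletter variant,
# as Mailchimp would report them in the campaign stats.
clicks = [34, 19]        # version A (mentions the summary), version B
recipients = [500, 500]  # how many people received each version

z_stat, p_value = proportions_ztest(clicks, recipients)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# A small p-value (say, below 0.05) would suggest the two click
# rates genuinely differ, rather than differing by chance.
```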
Testing the impact of the summary itself, or just of its existence
Let me see if I understand correctly. In newsletter A, we let people know about Assembl and its synthesis; in newsletter B, we do not mention it at all. We then let Mailchimp compare the click rates.
This is very easy to do, and it’s a great idea. However, I am not sure we would really be testing the impact of the summary on people’s involvement. We would be testing their likelihood to click before they know what is in the summary, rather than the impact of the summary on their actions after they have read it.
We should let group A know about it and keep group B unaware, and then measure their involvement and participation in something else. This is harder to organize, and it is where organizing the groups homogeneously matters: one group should not already be more likely to contribute simply because its members are usually more involved. This is why I was talking about making the split carefully. What do you think about this, @Alberto?
Correct on all points, @Ruxandra. My approach to analytics in Edgeryders is: if it is not easy, it is probably not worth doing.
That said, a slightly more sophisticated version is: put the summary on Edgeryders and try to get, say, 100 users to look at it. Look at the conversion rate of that page (against a goal such as creating comments); Analytics will give you the average conversion rate of the website. Then run, manually with any statistical software, a test of the hypothesis that the two conversion rates are the same. Rejecting it would indicate that the summary drives engagement.
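To make the “run a test” step concrete, here is one way it could be done in Python with statsmodels: a one-sample proportion test of the summary page against the site-wide average. All figures are placeholders, and treating the site average as a fixed benchmark (rather than another estimate with its own uncertainty) is a simplification:

```python
from statsmodels.stats.proportion import proportions_ztest

# Placeholder figures: of 100 users who viewed the summary page,
# 12 went on to comment; Analytics reports a site-wide average
# conversion rate (page view -> comment) of 6%.
converted, viewers = 12, 100
site_rate = 0.06

z_stat, p_value = proportions_ztest(converted, viewers, value=site_rate)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# Rejecting the null (small p-value) with a conversion rate above
# the site average would support the claim that the summary page
# drives engagement.
```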
I like both ideas so far. Mailchimp is useful for seeing what people are interested in looking at in more detail (clicking being the first sign of engagement); then the number of comments, or further engagement with the content, is the ultimate proof of interacting with the synthesis.
We could also compare the ethnographic summary of stewardship with an Assembl summary of the same topic, or even of a different conversation, such as a synthesis of general ER activity over the next months.
Should we then agree to ask the Assembl folks to start helping us import content? This is the first step if we want to move forward.
Go ahead, if the others are up for it. To make sure I understand: we need to specify now what we want imported, and that will be the content used for the synthesis? What Inga coded includes content from several groups, basically three: