Well, that’s understandable - but it also shows how proper food labeling, caloric information in eateries, and other such information can help people know what is best for their own health and the health of society in general.
I have run into many real-world situations where people don’t know the basics (even the Golden Rule) - but when I consider telling them the score, I realize that if they don’t know by now, I’m certainly not going to be able to clue them in!
My experience as admin of a large forum for 18 years was that leadership (admins, mods, top helpers, and posters) set those rules, and 98% or more of the users followed along once they realized it was all good (no negatives involved).
Attention is a funny thing. Many people cannot distinguish negative attention from positive, or from most everything in between. When billions are online reacting to this… well, it causes the problems we see today. IMHO.
A “Moses” in this case is a metaphor for a new take on the Golden Rule.
I have no faith that social media companies, which make most of their money tracking you and subtly funneling you where they want you to go in order to maximize revenue, are ever going to allow enough user control to “un-break” it.
Social networks are a lot more socially random than the societies that form around specialty sites like yours or this site here. So I think rules of civility can be applied in the very broadest sense in a social network, but otherwise one size just does not fit all, and administrators are going to be playing defense endlessly. This is why I think it is necessary to have good user controls in social media for it to work in the ways we here think it should work. (Assuming we are more or less aligned.)
This comprehensive study lays out the state of things today:
Samantha Bradshaw & Philip N. Howard, “The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation.” Working Paper 2019.3. Oxford, UK: Project on Computational Propaganda. comprop.oii.ox.ac.uk. 23 pp.
From the intro: “‘Cyber troops’ are defined as government or political party actors tasked with manipulating public opinion online (Bradshaw and Howard 2017a). We comparatively examine the formal organization of cyber troops around the world, and how these actors use computational propaganda for political purposes. This involves building an inventory of the evolving strategies, tools, and techniques of computational propaganda, including the use of ‘political bots’ to amplify hate speech or other forms of manipulated content, the illegal harvesting of data or micro-targeting, or deploying an army of ‘trolls’ to bully or harass political dissidents or journalists online. We also track the capacity and resources invested into developing these techniques to build a picture of cyber troop capabilities around the world.”
The summary:
Over the past three years, we have monitored the global organization of social media manipulation by governments and political parties. Our 2019 report analyses the trends of computational propaganda and the evolving tools, capacities, strategies, and resources.
- Evidence of organized social media manipulation campaigns which have taken place in 70 countries, up from 48 countries in 2018 and 28 countries in 2017. In each country, there is at least one political party or government agency using social media to shape public attitudes domestically.
- Social media has become co-opted by many authoritarian regimes. In 26 countries, computational propaganda is being used as a tool of information control in three distinct ways: to suppress fundamental human rights, discredit political opponents, and drown out dissenting opinions.
- A handful of sophisticated state actors use computational propaganda for foreign influence operations. Facebook and Twitter attributed foreign influence operations to seven countries (China, India, Iran, Pakistan, Russia, Saudi Arabia, and Venezuela) who have used these platforms to influence global audiences.
- China has become a major player in the global disinformation order. Until the 2019 protests in Hong Kong, most evidence of Chinese computational propaganda occurred on domestic platforms such as Weibo, WeChat, and QQ. But China’s new-found interest in aggressively using Facebook, Twitter, and YouTube should raise concerns for democracies.
- Despite there being more social networking platforms than ever, Facebook remains the platform of choice for social media manipulation. In 56 countries, we found evidence of formally organized computational propaganda campaigns on Facebook.
Hello @JollyOrc,
How are you now?
I was wondering where you are with Darcy now, especially in the context of the current Covid-19 crisis.
During social distancing, social media becomes an even more central part of people’s lives and potentially a real lifeline. Looking into how to make it a safe space in that situation is even more pressing.
And while a lot of conspiracy theories and new attacks like “zoom-bombing” are on the rise, there are also interesting trends towards a more social internet:
Yesterday on Facebook I posted a photo of the house my grandmother grew up in, which is in the state of Idaho in the USA. Someone else then posted a photo of his house on the California beach showing the ocean waves. He made some comment about it, and I said “no surfing in Idaho” because there is no ocean there.
For that, FB suspended my account, claiming that my comment was harassment and bullying. Of course it was nothing like that, but people do not seem to be making those decisions - bots do. Ok, we all know that. What bothers me is what came after.
They told me I was suspended and let me walk through a process where I could dispute it. At the end of that, they simply noted that I disagreed with their decision, without indicating that anyone or anything on the other side would even look at it. Then I picked my way to a form where I could describe the situation, with the idea that someone would actually read it and perhaps override the bot. Except that when I got through, they said that function was not working and to try again later. Which I did for hours until I went to bed. So I reported that as a bug, since their support structure was not working.
A few hours later my ban was lifted without explanation. So does that mean someone saw my argument and reversed the decision? I have no way of knowing because they have not communicated with me about that.
I find this all quite disturbing because millions of people have given over a big chunk of their daily lives to this platform that acts with impunity based on robotic decisions and makes no effort to communicate with the user, even though they constantly tell me “we care about you.”
So this part of social media is for sure broken. And it is to me an example of the creeping influence of bots making human decisions. @alberto pointed out to me that Facebook is too large - 2 billion people can’t be managed. And I agree, I think. Except they have billions of dollars and could afford to do a decent job if they really wanted to.
Ultimately though, I believe the answer is to allow “we the people” to have more direct control over our own experience. Why can I not be my own content moderator? Why do I, nearly 70 years old and in this business for 34 years, need a censor, or need to be censored? They could provide a number of tools to help me improve my experience.
But that, it seems, would hinder their business model and give more control to me and less to Facebook. This they will never agree to unless forced.
Which brings me to one more point. When power companies and phone companies first got started, they were private companies that could do whatever they wished. But over time, electricity and telephone service became so important to society that they could not be left to the whims of the companies and instead became publicly regulated utilities.
I think this is where we are now with social media.
And it looks like Google has a variation of this kind of bot decision behavior making ridiculous errors:
A friend and colleague who knows a lot about this sort of thing tells me it is probably not bots but humans making these decisions. That’s even worse!
That is not my argument, John. My argument is: you cannot allow a platform with 2 billion people, because that is a monopoly, and monopolies trample on people if it benefits them, because there is no punishment for doing so.
Ah. Right. I agree. Also as many have pointed out, such as in the article above, community management does not work well at that scale.
@alberto - Do you believe this to be true of platforms in general? English Wikipedia is a platform of 40 million users. When taken with all Wikipedia entities, the platform essentially has a monopoly on encyclopedic knowledge worldwide. It is the de facto source of “truth” on the internet.
So is this also too large? Or does the difference in governance and motivation make all the difference?
I’ve been thinking lately about these open source communities, and how Wikipedia has demonstrably reinforced real-world inequalities: Open Open Source Communities.
Whether it be 2 billion people (Facebook), 40 million people (English Wikipedia), or 4.5 million people (Wikidata):
English Wikipedia is a platform of 40 million users. When taken with all Wikipedia entities, the platform essentially has a monopoly on encyclopedic knowledge worldwide. It is the de facto source of “truth” on the internet.
So is this also too large? Or does the difference in governance and motivation make all the difference?
I think Wikipedia is a rather interesting example - and not necessarily a purely positive one. But first - those 40 million are the total number of accounts. Actively editing users number closer to 126,000, so significantly fewer. (And actually on par with Mastodon or Diaspora.)
Now, why is Wikipedia an interesting example? For one, because of the discrepancy between who edits it and who consumes it. Over 800 million different devices access it each month, but only 126,000 accounts edit and maintain it, and only about 40,000 of those are active editors.
Now, why is it that only a fraction of a percent - 126,000 out of over 800 million is roughly 0.016% - are actually maintaining it?
I suspect several reasons:
- editing is hard and unrewarding
- the existing community makes it hard for new people to join, putting up layers of bureaucracy (this is especially true for the German Wikipedia, but I’ve talked to a Wikipedian from Hong Kong and they confirmed the same issue)
- there is generally a discrepancy between consuming and maintaining
The other interesting thing is that Wikipedia has, as you pointed out, effectively killed traditional encyclopedias. And I am not too sure how much of a good thing this is. It does govern itself quite effectively, in that it is stable. But it is also frighteningly good at resisting any kind of change. Which is good if you see it as “protecting against outside influence”, but very bad if viewed as “maintaining the status quo of those in power”.
But regardless of that - I do not think that Wikipedia is a fitting example of “this is large and still working”, because in terms of people actively working on it, it is not particularly large. In fact, it has evidently managed not to grow past a certain active-user threshold, and it does not, in fact, work as a large-scale community.
When put in those terms, it reminds me of the Emacs community: a small group of powerful people who are very resistant to change, which makes onboarding new users or contributors very difficult.
But Emacs is just a tool. Wikipedia and Social Media platforms are places where information shapes people’s perception of reality. Communities on these platforms have a different moral imperative.
In the 2016 election, the total amount of Facebook activity (likes, comments, shares, and posts) for Clinton was 410 million engagements. Trump had 960 million engagements. Who speaks on these platforms now has incredible consequences. There is something particularly discouraging about your (honest) depiction of Wikipedia’s community. It’s small enough to manage and purports to be open. And yet there have been no improvements on the byzantine tooling and top-line demographic problems in the last decade.
I’m starting to come around to Darius Kazemi’s idea of human-scaled social media as the only real approach. Which kind of looks like a federation of 1980s BBSes. It’s easy to dismiss as retro and regressive, but they honor millennia of human socializing.
I was trained as an industrial economist. We have a standard way of looking at this: yes, monopolies are always bad because they limit the freedom of the consumer/user to walk away and find another supplier to his or her liking. But sometimes monopolistic provision is far more efficient than pluralistic provision. The classic example is aqueducts: having five competing networks delivering water to your tap would be very, very inefficient. Water provision is a natural monopoly.
In those cases, monopolies can be tolerated, but must be locked down tight. The classic solutions are nationalization (takes away the profit motive, like you hint at in the case of Wikipedia) or regulation (profit motive stays, but there is a watchdog).
Wikipedia seems to have found a third solution: openness. If you do not like what Wikipedia is providing, you can look for a different source, but you can also change Wikipedia itself.
The problem is also mitigated by the fact that, to a first approximation, encyclopedias are not a natural monopoly. Indeed, there is Wikipedia, but there are also specialized sources like Investopedia for finance, etc.
So, yes, motivation is a big factor, and openness a help. But also yes, monopolies are always bad, though sometimes a necessary evil.
I call it the Bedroom-to-Broadcast theory - we do well up to a certain scale, but lack the tools to properly navigate a landscape where a very private conversation can suddenly attract the attention of millions of people… We also do not really know how to have an open conversation with a few strangers without opening ourselves up to attacks by thousands.
So, yes, scaling back to human scale is useful and important, but I think the allure of potentially speaking to thousands is a factor we should not ignore.
Interesting. It takes the perspective of the individual user, I presume.
When Grier and Campbell look at the history of Bitnet in A Social History of Bitnet and Listserv, 1985-1991, it struck me that they really looked at the health of group dynamics as an indicator of the usefulness and health of the network. The scholars seemed to ascribe special value to groups that endured longer and expanded wider than Bitnet itself.
Groups are interesting because they can be managed, and they provide a value beyond the ego-centric “I broadcast because I ideally want everyone to hear what I say.”
This differentiation was stark when @THEHermanCain (an individual person, THE individual person, blue check mark and all) started necro-tweeting. Several people had come together in an attempt to turn the individual into a group. The transition was awkward for many reasons, but in part because Twitter has no good tools for people to make meaningful groups.
Groups have all sorts of governance benefits, including this one from the Bitnet article:
“A friend,” wrote one Bitnet representative, “finds that a telephone call about annoying behavior works well.”
When I reflect back on the aforementioned efforts by Kazemi and the Hometown branch of Mastodon, I’m starting to think these federated instances are less like BBSes and more like groups. Sort of like Paul Ford’s experiment, Tilde Club. I’ve been a member for some time. When Paul moved on, there was enough interest for others to update and change the group.
Indeed. You also illustrate why the United States Congress needs to write new antitrust laws. Because:
- As Google says, “competition is just a click away.” And they’re not wrong.
- In a world of zero marginal costs and total abundance on both the supply side (essentially infinite bits) and the demand side (billions of potential customers), the current laws don’t apply.
I’m sure serious proposals for antitrust in networked societies exist. If anybody can point me in their direction, I would be grateful.
necro-tweeting
Great phrase. For those who don’t know the story, Herman Cain had already died of Covid-19 when “he” started tweeting that the virus was not that deadly.
As we creep closer to the next prototype release of Darcy, we want to validate a few of our assumptions for the project.
To that end, it would be very kind and useful for us if some of you would take 5-10 minutes of your time to fill out our survey! https://forms.gle/J13KAUJKxZWN3wgq8