"Should they get away with it?" Carole Cadwalladr calls for an all-out fight to regulate tech giants

Is this what we want: to let them get away with it, and sit back and play with our phones as this darkness falls?

But OK, regulate. How would it work? Does anyone have any clue?

Regulators will regulate (once they get started)

Mmh, “regulate”, really? I really hope there are enough outspoken tech-libertarian and crypto-anarchist users of the early Internet left who will not fall for regulation as a solution. As we have just seen in the lost fight against upload filters in the EU, regulators have an insatiable appetite to churn out more regulation, in favour of whoever lobbies them hardest, once they discover another domain that is responsive to regulation. Innovative cryptocurrency and blockchain projects are doomed as well once governments discover ways to regulate there. (That space is admittedly a bloody Wild West right now, but why not. At least it’s innovative!)

Too big to exist

So if regulation cannot save but only destroy the Internet we loved, what then? In short: everything that is large enough to cause serious social problems (like, well, Facebook in this case) should either (1) not exist at all, (2) be public infrastructure, preferably distributed and open source, and certainly not in private hands, or (3) consist of small federated units so that it cannot cause such problems in the first place.

The case against advertising

And while we are at it: most issues with tech giants somehow revolve around the use and abuse of advertising. Both in the case of Facebook and Brexit, and in the conflict between publishers and Google over the share of Internet advertising revenue (which is behind the whole upload-filter and link-tax debate of the EU copyright reform). Now advertising is organized bullshitting and misleading of people anyway, and the Brexit campaign makes the hypocrisy of political leaders blatantly obvious: in their eyes, organized bullshitting of people for economic growth is fine (even though overconsumption means ecological destruction, but who cares), while organized bullshitting of people for other causes is “a threat to democracy”!?

So if we want to save the Internet, and a good part of the natural world alongside it, let’s get rid of advertising altogether: online, in print, on TV and in public space, whether for products or political parties. It will create some disruption, but the Internet will find other business models, no worries. To get rid of advertising online, I am even willing to accept a tiny bit of regulation: all major browsers must come with an ad-blocker enabled by default.


(P.S.: Re-reading this, I like my style of writing when I’m angry :laughing: It sounds a bit like @hexayurt now, and I always wanted to know his secret of writing.)


1 is regulation. In fact it is the regulation, the nuclear trigger under the finger of antitrust authorities since 1890. The concept is that of “dominant position”. Once a company attains a dominant position in a market, it can be forced to break up into smaller companies. This is what happened in America to AT&T (“Ma Bell”), which was broken up into regional companies (“Baby Bells”) under the consent decree that settled the government’s antitrust suit in 1982. That does not happen by chance, or divine intervention, but by a court applying a piece of legislation.

2 is nationalization, and also requires regulation. 3 is just a solution applied after 1.

I asked “how to do it” because antitrust regulation evolved in a state context. It is not clear if, say, a German court could order Facebook (an American company) dismantled. It could, in theory, forbid it to operate on the German market, but that would require a Great Firewall of Germany or some non-existent technology. At the very least there would be some creative implementation work to do, even if the political will could be found.

For this to be viable, we need a clearer idea of how it would work. History might hold some clues: until the early 1990s the acceptable use policies of the Internet backbone effectively barred commercial traffic, and it would be interesting to look at how things might have evolved from there. I would be eager to have this discussion on this platform, as a part of NGI Forward!


1 is the absence of regulation.

Regulation is what made companies as big as Facebook possible in the first place, with things like companies as legal persons, publicly traded companies, the basic concept of transferable debt with government enforcement, and so on. I would prefer that they not make any big business possible and instead develop standards for cooperation. The W3C is actually a good model.


Well, that can be applied to nearly all media nowadays, unfortunately (or 95% of it). Can we talk about free and fair elections when hundreds of millions of dollars or euros are spent to have someone elected, or to manufacture public opinion and consent for various horrible decisions made in the name of the people?
We also now know that the Silicon Valley giants abuse their position in so many ways, from privacy violations, to rigging search results and filtering the information we can see, to silencing people by removing their ability to make money or to be heard via their platforms.

As far as regulation is concerned, well, I have very bad experience with regulation and standardization. It seems to be used mostly by big companies, who actually finance our policy makers, to reinforce their position on the market or to eliminate smaller players.

Maybe a combination of banning advertising and banning the private funding of political campaigns could bring some improvement. In our democracy, money wins…


And that is actually a really good question to discuss and solve around here: how to break up Facebook (or annihilate it altogether) when there is no global agreement about it. I don’t have an idea right now, because for sure we don’t want Great Firewalls or national “splinternets”, which would surely be used by governments for all kinds of anti-democratic purposes once implemented …


One of my particular concerns is the use of AI algorithms to manipulate human feelings and decision making for commercial or political benefit.

Ideally, there should be an outright ban of such algorithms.

If that is not possible, there should be oversight of such algorithms by a board of stakeholders, and that board should have mandatory powers to enforce changes to algorithms that are deemed harmful or excessively manipulative.


Another idea: perhaps users should have to opt-in to be exposed to those algorithms.

What would be an example of an AI algorithm that manipulates in that way?


Here’s one example of how algorithms can be manipulative and unhelpful.

“YouTube algorithms have been criticized for drawing viewers into ever more extreme content, recommending a succession of videos that can quickly take them into dark corners of the internet.”

This is an eye-opening TED talk by a researcher on this topic:


Good references. Thanks.

It goes back to the original sin: modeling human beings as desire machines, instead of thinking adults. The YouTube algo maximizes minutes of video watched, which in turn maximizes the number of ads watched. Watching videos is the supreme good. If extreme controversy (and worse) gets people to watch more videos, it is good. Because the viewer is choosing to watch those videos, right? He chooses to, because watching them makes him feel good. And why should people not want to feel good? What’s wrong with that?
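To make the feedback loop concrete, here is a toy simulation of that objective. Everything in it is made up for illustration (the catalogue, the watch-time model, the exploration rate); it is not YouTube’s actual system, just a sketch of what happens when a greedy recommender optimizes expected watch time and watch time happens to grow with how extreme the content is:

```python
import random

random.seed(0)

# Hypothetical catalogue: each video has an "extremeness" score in [0, 1].
videos = [{"id": i, "extremeness": i / 99} for i in range(100)]

def expected_watch_minutes(video):
    # Toy assumption: baseline 2 minutes, plus up to 8 more for
    # extreme content. This single assumption drives the whole drift.
    return 2.0 + 8.0 * video["extremeness"]

def recommend(candidates, explore=0.1):
    # Greedy watch-time maximizer with a little random exploration.
    if random.random() < explore:
        return random.choice(candidates)
    return max(candidates, key=expected_watch_minutes)

# Simulate 20 recommendations for one viewer.
history = [recommend(videos) for _ in range(20)]

avg_extremeness = sum(v["extremeness"] for v in history) / len(history)
print(round(avg_extremeness, 2))
```

The recommender never “wants” extremism; it only maximizes minutes watched. But because the (assumed) watch-time curve rewards extremeness, the average extremeness of what gets recommended climbs toward the top of the scale.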

Whereas, if you model a human being as a thinking adult, some of those videos are debasing, and watching them is definitely not good.


Spot on, @alberto!

Yes, “modeling human beings as desire machines” pretty well nails it. But then, this has been the MO of TV advertising for generations.


Human beings ARE desire (or pleasure, take your pick) machines!

So it’s not a question of modeling or appealing to that. Need I mention porn? Free iPads? Obama lowering your interest rates?

One has to accept the world as it is and was. Attacking any of these problems by trying to change “the beast who is already formed” seems fruitless.

Now… there is no doubt that parenting and education, along with society, can crank out humans who suppress the beast. Such has been done for hundreds of years (or longer), but a society modeled on money really has little reason to do so.

Attention is heroin. I have seen it first hand. In fact, I think it’s more addictive than any drug I have seen… maybe cocaine is close. People simply cannot help themselves.

Many people have better angels. Some people don’t. Even with increased awareness and education, you still have the sociopath problem…as well as others.

I don’t mean to say that it is hopeless…but I think it’s hopeless.

We can work around the edges a little bit - in terms of data collection, possible breakup of large firms, etc.

But will Facebook really change if it doesn’t own instagram or WhatsApp? Not IMHO.

Not to repeat myself, but Facebook didn’t “do” anything much… they just created a terrible framework, and then people did the rest… and it got out of control.

My thought is that it will, in a sense, self-regulate, because decent people will do less there (already happening in my view). Government has a place in the data collection part, though.

I do think that breakup of the core businesses is going too far…as stated, there is nothing to replace them and nature abhors a vacuum. What comes after FB would be more likely to be Putin controlled or the like.

I think simply divesting WhatsApp or Instagram from FB, or Android from Google, won’t get to the heart of the problem if nothing is done about the way they gather and share user data. And even that won’t deal with how FB and others like Google follow people as they go to other parts of the net and use that combined data not just to further target ads but, more insidiously, to decide what and whom one does and does not see.

Furthermore, regulation has to make them choose whether they are neutral platforms (which would force them to fundamentally change their business model) or publishers (in which case they would have to take more responsibility for what gets said on their platforms). These are the core problems. Core because this is central to what drives their growth.


More evidence for recommendation algos distorting the public sphere. This time it is Netflix’s, which has a large catalogue of nutty conspiracy-theory-endorsing documentaries and recommends them to people who have watched balanced, well-made documentaries.


The whole argument that tech is just tools that people use, and that it is the people who are inherently bad, once again turns out not to be true. It’s the algorithms (accidentally?) feeding us fringe ideas and making them part of the general narrative. This NYT piece on Brazil and YouTube exemplifies it perfectly:


Well, the video itself is full of bad ideas, but it’s people who click on it and consider it true. The same goes, in my opinion, for the gun debate in the US: a gun on a table doesn’t kill people; someone needs to pick it up and pull the trigger.

I personally see enormous potential here for online education seminars on how to distinguish fake news from actual news.

The Dunning–Kruger effect would seem to suggest that those online seminars would not be well attended. Simply speaking, people who don’t know things don’t know (or care) that they don’t know them. They have zero interest in knowing them.
One would have to read tens of thousands of pages of fairly dry history even to have a foundation for judging what might be real “news” or “truth” and what might not be.

Further, the Dunning–Kruger effect says that those who are more capable (maybe those making and giving the seminars) are LESS SURE than the ignorant about whether they are correct or not!

It’s a strange thing…but very real.