Reflecting on the Long Termism Deep Demonstration

During the spring of 2020, the Sci-Fi Economics Lab was involved in the Long Termism Deep Demonstration. The LTDD group has been working to connect proposed positions into a coherent whole, and several common interests have emerged. Moving forward, I propose we surface some of the low-level assumptions underlying our proposals. As a rule, they do not get discussed, because they are part of how we see the world. We find them natural, obvious. No need to discuss them, just as fish do not feel the need to discuss water.

In this situation, it is possible that our portfolio of positions turns out to be incoherent. This is not necessarily bad. We could claim pragmatism: different problems need approaches grounded in different world views. To do that, though, we need to be aware of which position encodes which world view.

In this document I try to do three things.

  1. Propose an idea of a Deep Demonstration.

  2. Highlight the areas where our world views might differ.

  3. Reflect on the implications of different world views for the scope of LTDD.

All of this reflects only my personal views, and my wish to question them and compare them with your own. There is no implication that I have a “right” way to do things.

1. What do we mean by Deep Demonstration?

I understand a Deep Demonstration as a demonstration that is deep.

A demonstration: a scale model, as small and cheap as possible. We build it to show what the world would look like if someone acted upon the world as we acted upon the model. It is also something you show, not something you describe. A street corner could be a demonstration of a new style of city planning. The corner is smaller than the city, but you can stand in it and experience what the city would feel like, if it were to follow the planning style in question.

Deep: it encodes changes that are radical enough, and sustained by real social practices. Imagine, for example, that our city planning innovation includes parking spaces for bicycles. To be a deep demonstration, the street corner needs to have parking spaces for bicycles, and real people that use them. Hiring actors to pretend they are locking their bicycles there is not deep enough.

If the above is correct, a deep demonstration on long-termism is one or more small contexts where real-life actors live in the long term. They make long-term decisions; build to last; plan for resilience rather than short-term efficiency, etc.

2. Three questions on world views

Question 1. Is long-term thinking contendible?

Several proposals circulated within the group imply that some solutions are within reach. Example: some indigenous peoples have a “seven generations” rule when they make decisions. This is an attractive idea; why not push more decision-makers to adopt it?

Whether we should do so or not depends on whether decision-making processes are contendible. To be contendible means that there is no major barrier to entry, or change. In our example, decision-making is contendible if decision makers are quite open to adopting new methods. Maybe they were not aware that indigenous people had come up with such an ingenious system!

  • Naive innovators think everything is contendible. Short-termism is nonsense! But now we will fix it, because we see it, whereas everyone else does not.

  • Naive conservatives think nothing is contendible. “If it were possible, someone would already have done it”.

Non-naive changemakers need a way to decide whether a situation is contendible or not. Eliezer Yudkowsky has proposed that a contendible situation is in an inadequate equilibrium. It is still an equilibrium (that is why it has not changed), but that equilibrium is inadequate, and it might be possible to disrupt it with a reasonable effort. Upon proposing a position, it is good hygiene to say why our approach can deliver change where so many smart people have failed.

Question 2. Is the actor of change individual or collective?

Some theories of change take as their unit of analysis the individual. Example: nudge theory. Individual behaviour depends on individual incentives and individual framing of information, and only on those: the behavior of the individuals determines that of the system. Design incentives and framing to get the desired behavior.

Others take as their unit of analysis the collective. Example: all complex-systems approaches. The behavior of the system can never be deduced from that of the individuals in it. Rebound and other second-order effects are important and need modelling.

Which of our positions reflects which stance?

Question 3: volume knobs or system attractors?

Suppose you feel you understand why something happens, and you want it to change. For example, you understand how “hot money” on financial markets feeds instability and vice versa. What would corrective measures look like?

One possibility is that they look like a volume knob. In our example, a Tobin Tax can be set at any level. The higher the tax, the higher the incentive to hold financial positions for a long time.

The other possibility is that they look like pushes that dislodge the system from where it currently is. But the space of possible equilibria is not continuous: the system can only rest in one of its attractors. It is possible to disrupt it, so that it leaves its current state, but it will always end up in one of the attractors. For example, speculators might elude a Tobin Tax by moving to some other jurisdiction. In this case, the minimum viable change might consist of introducing both a Tobin Tax and a “world state”. Gradualism won’t work.
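The contrast between a volume knob and an attractor landscape can be made concrete with a toy model. The sketch below is purely illustrative and not from our discussions: it uses a standard “double well” from dynamical systems, where the state relaxes downhill on the potential V(x) = (x² − 1)², whose two minima at x = −1 and x = +1 play the role of the system’s attractors.

```python
# Illustrative toy model of a system with two attractors (a "double well").
# The state x drifts downhill on V(x) = (x^2 - 1)^2, whose minima
# (attractors) sit at x = -1 and x = +1, with an unstable ridge at x = 0.

def settle(x0, steps=2000, dt=0.01):
    """Let the system relax from x0 by following the gradient -V'(x)."""
    x = x0
    for _ in range(steps):
        x -= dt * 4 * x * (x * x - 1)  # Euler step on dV/dx = 4x(x^2 - 1)
    return x

# "Volume knob" intuition expects outcomes proportional to the push.
# Instead, any push that stays on this side of the ridge decays away...
print(round(settle(1.0 - 0.3), 2))   # small push from x = 1: back to 1.0
# ...while a push past the ridge lands in the *other* attractor:
print(round(settle(1.0 - 1.3), 2))   # big push from x = 1: settles at -1.0
```

The point of the sketch: intermediate final states simply do not exist. Pushes below a threshold change nothing in the long run, and pushes above it flip the system wholesale, which is the dynamical picture behind “gradualism won’t work”.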

By the way, the same reasoning applies to culture. Some of our meetings have focused on cultural artifacts (Hollywood productions) as determining culture, including long-termism. This means culture is a lever. But is it? You can of course produce a cultural artifact encoding non-mainstream culture, but it is not clear that you can make it mainstream. Anecdotally, creators in the mass-culture markets work by canons and tropes, and feel that breaking them results in unsuccessful products. System attractors are too strong – until they aren’t and, say, a Jedi knight can be a black woman. But maybe by then masscult is simply recording the change, not producing it.

3. Implications for the LTDD

  • Do we agree on what a deep demonstration is?

  • Where are our positions in terms of contendibility, unit of analysis, and viability of gradual solutions? We at the Sci-Fi Economics Lab tend to think that most issues are not that contendible; that the minimum viable unit of analysis is the collective, and rebound and second-order effects are common; and that change is more like a jump from one system attractor to another than a smooth, gradual glide. This view forces a lot of intellectual humility on us. It pushes us towards experiments and research, more than towards high impact and scale. Hence my cautiousness.

  • What is the “right” degree of coherence across the different positions for LTDD? How do we acknowledge it in the portfolio?

Grateful for any feedback, as always.


Hi, Alberto – I’ve been ruminating on this (one of my favorite words – literally means “chewing the cud like a cow”)… and the answer to your question is: “The Eternal Conflict”, Isaac Asimov’s story – the last in the collection “I, Robot”. Here is what I mean: “Our new worldwide robot economy may develop its own problems, and for that reason we have the Machines. The Earth’s economy is stable, and will remain stable, because it is based upon the decisions of calculating machines that have the good of humanity at heart through the overwhelming force of the First Law of Robotics.”
and “the Machines are a gigantic extrapolation. Thus, a team of mathematicians work several years calculating a positronic brain equipped to do certain similar acts of calculation. Using this brain they make further calculations to create a still more complicated brain, which they use again to make one still more complicated and so on… what we call the Machines are the result of ten such steps.”
and “Mankind has lost its own say in its future.”
“It never had any, really. It was always at the mercy of economic and sociological forces it did not understand – at the whims of climate, and the fortunes of war. Now the Machines understand them; and no one can stop them, since the Machines will deal with them as they are dealing with the Society, – having, as they do, the greatest of weapons at their disposal, the absolute control of our economy.”
