Sociopathic innovation: how we are investing most in the most evil technologies (LONG)

(reposted from my personal blog)

TL;DR

Artificial intelligence and the blockchain are the two main technological hypes of the past fifteen years. Both were hailed as technologies with the potential to solve many problems and change the world for the better. It now looks like their impact is overwhelmingly negative. Though they could be used for the common good, it turns out they are not very good at that. They are better, far better, at harming humans than at helping them. They encode dystopian, sociopathic world views, and tend to attract developers, investors and entrepreneurs who share those world views. So, once deployed, they tend to bring the world closer to those views. They are sociopathic tech. This is disturbing, because almost everyone fell for them: investors, developers, entrepreneurs, academics, government officials. I call for a re-examination of the achievements of these technologies and of the impact they are having on our lives and our societies. I would like to see support for innovation depend on how new technologies improve the well-being of humans and of the planet, and only on that. In what follows, I review some of the facts as a discussion starter.

Of how Artificial Intelligence excels at everything, except solving problems that matter

I recently had the opportunity to be exposed to the work of Juan Mateos-Garcia, a leading data scientist. Juan and his team had been looking at a large dataset of science papers published on the topic of Artificial Intelligence (AI). Their results look like this:

  1. AI has been undergoing a revolution since about 2012, when deep learning started to systematically outperform established techniques.
  2. Scientific production (papers) is booming. AI is shaping up to be a general-purpose technology, like electricity or computing itself.
  3. Industry interest is evident. Many top scientists have been recruited from academia into industry. Venture capitalists have moved to invest in AI startups. Major governments are underwriting large public investments. There is talk of an “AI arms race” between China, the USA and the EU.
  4. AI is dominated by a small number of organisations and geographic clusters. Diversity of its workforce has stagnated.
  5. AI has had no impact on the effort to beat back the COVID-19 pandemic. In fact, all other things being equal, a paper on COVID is more likely to be cited by other papers if it is not about AI.

This final point gave me pause. Something was off. Why would AI not make a valid contribution to fighting the COVID plague? The conditions all seemed to be in place: there was, and still is, plenty of funding for research on COVID. There is a large, inelastic demand for the applications of that research, like vaccines. There is plenty of training data being generated by health care institutions the world over. And, if AI is a general-purpose technology, it should apply to any problem, including COVID. The most exciting technology of the moment somehow failed to contribute to solving the most pressing problem of the moment. Why is that?

I can imagine a world where AI is deployed to help in the fight against a pandemic. We would use it to engineer a more targeted response to the risks of contagion. Granular risk scores could be associated with individual people and different situations, allowing society to protect the most vulnerable people from the riskiest situations, while leaving low-risk individuals in low-risk contexts free to get on with their lives.

Sounds good, but that world is not the one we live in. In our world, AI-powered, individually customized COVID restrictions would run into intractable problems. First, the algos would seize on the high correlation between different socio-demographic variables, and decide that poor people, people of color and (in America) trumpists are more prone to contagion, and should stay at home more than white, affluent liberals. Discriminated groups would fight back, challenging the algos as biased, starting litigation and calling for civil disobedience, as is happening time and time again. Even if there were no conflict and everybody trusted the algos, it is not clear how we would effectively use the predictions they make for us. First of all, there is the cognitive challenge of understanding the predictions. You could tell someone something like this: “the risk of catching COVID on public transport for someone with your demographic profile went up 20% today, avoid the bus if you can”. But that is unlikely to work, because

  • Most people do not understand risk. For example, they are more scared of terrorist attacks than they are of car crashes, though the latter are far more frequent (hence more dangerous) than the former.
  • AI is Bayesian statistics, and as such it makes predictions not about you, but about somebody who is like you in a quantifiable way. It leaves out everything that makes you unique, putting it in the error term. For example, imagine you are a 45-year-old living in the Netherlands who is also an ultramarathoner. The algo computing your risk factor processes your age and the country you live in, because it has thick enough data in those dimensions. Your ultramarathons stay in the error term, because there are not enough people doing ultramarathons for that activity to be tracked in its own variable. And yet, when looking at the overall resilience of your organism, this is clearly an important piece of information (see the sketch after this list).
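
To make the error-term point concrete, here is a minimal sketch in Python. It is purely illustrative: the function, the countries and all the numbers are invented, and a real model would be vastly more complex. But the mechanics are the same: anything that is not a tracked variable simply cannot influence the prediction.

```python
# Toy risk model: purely illustrative, all names and numbers invented.
# The model only "sees" the dimensions it was trained on (age, country);
# everything else about a person lands in the error term.

def covid_risk_score(person: dict) -> float:
    """Compute a hypothetical risk score from the tracked variables only."""
    base_rate_by_country = {"NL": 0.10, "IT": 0.12}  # invented priors
    age_multiplier = 1.0 + max(0, person["age"] - 40) * 0.02
    return base_rate_by_country[person["country"]] * age_multiplier

couch_potato = {"age": 45, "country": "NL", "hobby": "television"}
ultramarathoner = {"age": 45, "country": "NL", "hobby": "ultramarathons"}

# The "hobby" key is never read, so both people get the same prediction.
print(covid_risk_score(couch_potato))     # ~0.11
print(covid_risk_score(ultramarathoner))  # ~0.11, identical: the model cannot tell them apart
```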

Given this situation, I suspect most people would end up following their own belief system rather than the algo’s recommendations. People who fancy themselves strong and resilient might say “yes, this gizmo is predicting high risk, but it is not talking about me, I am healthier and stronger than most!”. Or, vice versa, “yes, a low risk is predicted for outdoor mingling, but with my history of allergies I still don’t feel safe”. This is de facto happening right now with how people process scientific findings about COVID-19. Some people prefer to trust their own immune systems over the pharma-punditry complex. Others made COVID restrictions into some kind of weird religion, following them “above and beyond” even when science calls for their relaxation. Even if a good AI-powered risk prediction system were in place, many humans are way too irrational to take full advantage of it. They prefer simple rules, applicable to all: “1.5 meters”, “wash your hands” and such. The promise of AI, providing personalized recommendations to each and every one of us, clashes with the human need for stability and security. In conclusion, AI had no grip on COVID, and is unlikely to have any grip on any similar high-stakes problem. So, what is AI good for? We can start with the applications already being developed:

  • Boosting consumerism: recommendation algorithms, targeted advertising. This has substantial socially detrimental spillovers. Algos maximizing engagement on social media are known to push their users towards ever more extreme, radicalizing content.
  • Surveillance: facial recognition, predictive policing, remote invigilation of children (with notoriously racist algorithms), bossware.
  • Machine translation.
  • Deepfakes - whose main applications, according to Wikipedia, include “blackmail”, “pornography”, “politics” and “sockpuppets”.

With the exception of machine translation, these applications are all detrimental to human well-being, for world-eating values of “detrimental”. We are seeing yet another example of Kranzberg’s First Law in action: AI is not good, nor is it evil, nor is it neutral. It could be used for good, though I am unconvinced it would work very well: but it is when you use it for evil, dehumanizing purposes that it really shines. That such a potentially toxic technology is attracting so much attention, public funding and private investment is a spectacular societal and policy failure. And that brings me to the blockchain.

Of the blockchain and its discontents

The blockchain, as by now everyone has had to learn, is the name of a family of protocols that store data not in a single repository, but in many. Using cryptography, the different computers that adhere to the same protocol validate each other’s copy of the database. This prevents a “rogue” participant from altering the records, as the alteration would only be present on a single computer and would not be validated by the others. This system was first proposed to solve a problem called double spending when no trusted, centralized authority is present.
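
A minimal sketch of the tamper-evidence mechanism, in Python. This is illustrative only: it leaves out the consensus protocol, the peer-to-peer network and mining, which is where all the real complexity (and, as we will see, the energy bill) lives.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents together with its link to the previous block."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Each block commits to its predecessor's hash, chaining the records together.
genesis = {"prev": None, "data": "Alice pays Bob 5"}
second = {"prev": block_hash(genesis), "data": "Bob pays Carol 2"}

# A rogue participant alters history in their local copy...
tampered = {"prev": None, "data": "Alice pays Bob 500"}

# ...but the other nodes recompute the hashes and spot the mismatch.
print(second["prev"] == block_hash(genesis))   # True: consistent copy
print(second["prev"] == block_hash(tampered))  # False: alteration rejected
```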

That was 2008. In these 13 years, blockchain solutions have been proposed for many, many problems. To my knowledge, none worked, or at least none worked any better than competing solutions that used a more conventional database architecture. This makes sense, because blockchains are self-contained systems. They use cryptography to certify that in-database operations took place, but cannot certify anything that exists outside the database. Any system based on a blockchain relies on external sources of information, known as “oracles”. For example, if you were to build an identity system based on the blockchain, you would have to start by associating your name, date of birth etc. to a long string of digits. Once stored on the blockchain, the association is preserved, but some external “oracle” has to certify it before it gets stored. In the absence of a credible external certification, the system could work technically, but it would produce no impact. I could create my own identity system, but no one would use it, because I am not trusted enough when I issue a digital ID in your name. There are entities trusted enough to start such a system, for example major governments. But, because they are trusted, they do not need the blockchain at all. I have lost count of the technologists who have told me:

Any technology which is not an (alleged) currency and which incorporates blockchain anyway would always work better without it. (source)

But the blockchain is not just another clever technical solution in search of a problem to solve. I argue it is a major source of problems in itself. Consider this:

  • The distribution of Bitcoins is extremely unequal, with a Gini coefficient estimated at 0.95 in 2018 (theoretical maximum: 1; Lesotho, the most unequal country on the planet for which we have data: 0.65). In fact, inequality seems to be a feature of blockchains, not just of Bitcoin – for example, it is estimated that the bulk of the monetary value conjured by Ethereum-based non-fungible tokens (NFTs) is appropriated by “already big-name artists and designers”.
  • Blockchains use a lot of power. Every update anywhere in the system needs to be validated by network consensus, which involves a lot of computers exchanging data. Bitcoin alone consumes about 150 terawatt-hours per year, more than Argentina. Providing computing power to the Bitcoin network is rewarded in Bitcoins, through a process known as “mining”: this provides the incentive to underwrite all this computation. In a bid to make what they see as easy money, Bitcoin miners have resorted to malware that infects people’s computers and gets them to compute SHA-256 hashes, incorporated into the builds of open source software projects; resurrected mothballed power stations that burn super-dirty waste coal; installed mining operations in Iranian mosques (which get electricity for free); and engaged in plain stealing. Their carbon footprint is enormous: one Bitcoin transaction generates the same amount of CO2 as 706,605 swipes of a Visa credit card. Some blockchains have less computationally expensive systems of verification, but they are still more energy- and CO2-intensive than traditional databases. A toy sketch of the mining loop follows DeVault’s quote below.
  • Fraud – especially to the detriment of less experienced investors – is rampant in crypto.
  • Crypto has provided a monetization channel for ransomware attacks. Ransoms are demanded and paid in Bitcoin, which law enforcement struggles to trace. Some observers go so far as to claim that the price of Bitcoin is tied to the volume of ransomware attacks. Hospitals and other health care institutions are among the main targets of these attacks: not only do they have to pay money, but their IT systems shut down, threatening the lives of patients.
  • In 2021, tech companies that used to donate CPU power to legitimate projects had to stop doing so, citing constant abuse by crypto miners. It is worth quoting the words of Drew DeVault:

Cryptocurrency has invented an entirely new category of internet abuse. CI services like mine are not alone in this struggle: JavaScript miners, botnets, and all kinds of other illicit cycles are being spent solving pointless math problems to make money for bad actors. […] Someone found a way of monetizing stolen CPU cycles directly, so everyone who offered free CPU cycles for legitimate use-cases is now unable to provide those services. If not for cryptocurrency, these services would still be available. (source)
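
To see why the energy consumption is intrinsic to the design, here is a toy version of the proof-of-work loop at the heart of Bitcoin mining, in Python. It is a sketch, not the real thing: actual mining hashes a binary block header against a numeric difficulty target, but the principle of brute-forcing SHA-256 until you get lucky is the same.

```python
import hashlib
from itertools import count

def mine(block_header: str, difficulty: int) -> int:
    """Brute-force a nonce until the block's SHA-256 hash starts with
    `difficulty` zeros. The computation is deliberately useless: its only
    product is proof that electricity was spent."""
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

# Difficulty 4 runs in milliseconds on a laptop; Bitcoin's actual difficulty
# demands on the order of 10^22 hashes per block, hence the energy bill.
print(mine("example block header", difficulty=4))
```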

In return for this list of societal bads, so far, all the blockchain has to offer is a plethora of speculative financial assets: a casino. Which is also a societal bad, if you, like top innovation economist Mariana Mazzucato, believe that the economy is overfinancialized, and that policies should be put in place to roll financialization way back.

The blockchain is, overall, a net societal bad: it consumes resources to deliver a casino. Humanity would be better off without it. The picture gets even grimmer when you consider the opportunity costs: blockchain startups have gobbled an estimated 22 billion USD in venture capital funding from 2016 to 2021, very likely matched by various forms of government support, and that money could have been used in more benign ways. So, what’s going on here? Kranzberg’s First Law, yet again.

The original group of developers that rallied around Satoshi Nakamoto’s white paper had a libertarian ideology: they dreamed of a trustless society, where contact is reduced to a minimum and anonymised, and were obsessed with property rights. So, they built a technology that encodes those values, which in turn attracted more people who believe in those values. Code is law, they said. If someone can technically do something, that something is allowed, even moral, under some kind of tech version of social Darwinism. When the DAO was hacked in 2016, by exploiting a vulnerability in its Ethereum smart contract code, the perpetrator bragged about it: if I stole your money, it’s your own fault, because code is law. I am just smarter than you, and I deserve to walk away with your money.

Trustless societies do exist – the mob is one of them. But they are not a good place to live. Economists and social scientists think of trust as social capital, and seek ways to build it up, via accountability and transparency. Again, the blockchain could conceivably be used for something good, but in practice almost all of its uses contribute to making the world a worse place, while making money for the top 0.1% of crypto holders. This is because the tech itself embodies evil values, and because the social coalition behind it upholds these values. Don’t take it from me, take it from open source developer Drew DeVault:

Cryptocurrency is one of the worst inventions of the 21st century. I am ashamed to share an industry with this exploitative grift. It has failed to be a useful currency, invented a new class of internet abuse, further enriched the rich, wasted staggering amounts of electricity, hastened climate change, ruined hundreds of otherwise promising projects, provided a climate for hundreds of scams to flourish, created shortages and price hikes for consumer hardware, and injected perverse incentives into technology everywhere. (source)

Or writer and designer Rachel Hawley:

NFTs seem like an on-the-nose invention of an anticapitalist morality play: a technology that delivers exponential gains to those already at the top by convincing everyone to collectively imagine that free, widely distributed artwork is actually a scarce commodity, all while destroying the actual scarce resources of our planet. (source)

Or economist Nouriel Roubini, testifying to the U.S. Senate:

Until now, Bitcoin’s only real use has been to facilitate illegal activities such as drug transactions, tax evasion, avoidance of capital controls, or money laundering. (source)

Of how and why we are bad at supporting the right innovation

Why are the two most hyped technical innovations of the past 20 years, the blockchain and artificial intelligence, diminishing human well-being instead of enhancing it? Why are we investing in things that make our problems worse, when the world is facing environmental collapse? My working hypothesis is that the financial world will put money into anything that promises returns, with little humanitarian concern. Financiers lead the dance; and governments the world over have been captured into supporting anything that promises GDP growth. If I am right, it is important to decouple support for innovation from its growth implications, and throw our institutional support behind technologies that uphold human well-being over capital growth. Jason Hickel has some interesting thoughts in his book Less is More, and Mazzucato has forcefully made the point across the arc of her work. Time will tell; and I am confident that better minds than mine will cast more light onto the matter. But this question can no longer wait, and if you are working in one of these two tech ecosystems, you may want to ask your employer, and yourself, some hard questions.

Update

Thanks to all the fine folks who reacted to this piece and gave me useful suggestions. Many people pointed out counterexamples (I owe this particularly nice one to Raffaele Miniaci). But of course, it is not a matter of finding counterexamples, but of assessing the overall net impact of this particular bit of technological development on society. My answer may be wrong, but I am fairly confident that the question is right.

Another objection comes from @yudhanjaya, who says that, without giving a definition of AI, the whole first part is meaningless. I went back to Mateos-Garcia’s definition, which he borrowed from Brian Arthur:

Machines able to behave reasonably in a wide range of circumstances.

Depending on how you interpret “reasonably” and “wide”, this indeed captures everything from deep learning for facial recognition to the individually trained spam filter in my personal install of Thunderbird. The reason for this choice is probably that it enables a statistical test for structural change: in 2012 everything changed, more or less at the same time as an influential paper by Krizhevsky et al. was published. Output of AI papers went way up.

I am looking for a socio-economic definition, not a technological one. These technologies each catalyzed a “scene” of researchers, companies, investors, governments etc. What values and visions do these scenes embody? What do they want? The libertarian streak of the blockchain gang is clear. With AI, this is less obvious, because AI has a much longer history, and you cannot define it technologically. I guess when I talk about “AI” in this article, I refer to its post-2012 scene, fuzzy but still quite identifiable. This excludes the spam filter on my e-mail client, and should take care of Yudha’s objection. It also raises concerns, given the surveillance-authoritarian streak this scene has.

EDIT 2024-11-04

Some time has gone by since this post, and we now know a bit more about real-world use cases of AI. Cory Doctorow has provided a helpful summary. It is a bit of a black book, unfortunately. An excerpt is copied below; or you could read the entire post on his blog.

Cory Doctorow on some use cases of AI that have emerged as of late 2024

The real AI harms come from the actual things that AI companies sell AI to do. There’s the AI gun-detector gadgets that the credulous Mayor Eric Adams put in NYC subways, which led to 2,749 invasive searches and turned up zero guns:

NYC's subway weapons scanning pilot program "objectively a failure," critics say - CBS New York

Any time AI is used to predict crime – predictive policing, bail determinations, Child Protective Services red flags – they magnify the biases already present in these systems, and, even worse, they give this bias the veneer of scientific neutrality. This process is called “empiricism-washing,” and you know you’re experiencing it when you hear some variation on “it’s just math, math can’t be racist”:

Pluralistic: 23 Jun 2020 – Pluralistic: Daily links from Cory Doctorow

When AI is used to replace customer service representatives, it systematically defrauds customers, while providing an “accountability sink” that allows the company to disclaim responsibility for the thefts:

Pluralistic: “Humans in the loop” must detect the hardest-to-spot errors, at superhuman speed (23 Apr 2024) – Pluralistic: Daily links from Cory Doctorow

When AI is used to perform high-velocity “decision support” that is supposed to inform a “human in the loop,” it quickly overwhelms its human overseer, who takes on the role of “moral crumple zone,” pressing the “OK” button as fast as they can. This is bad enough when the sacrificial victim is a human overseeing, say, proctoring software that accuses remote students of cheating on their tests:

Pluralistic: 16 Feb 2022 – Pluralistic: Daily links from Cory Doctorow

But it’s potentially lethal when the AI is a transcription engine that doctors have to use to feed notes to a data-hungry electronic health record system that is optimized to commit health insurance fraud by seeking out pretenses to “upcode” a patient’s treatment. Those AIs are prone to inventing things the doctor never said, inserting them into the record that the doctor is supposed to review, but remember, the only reason the AI is there at all is that the doctor is being asked to do so much paperwork that they don’t have time to treat their patients:

Researchers say AI transcription tool used in hospitals invents things no one ever said | AP News

My point is that “worrying about AI” is a zero-sum game. When we train our fire on the stuff that isn’t important to the AI stock swindlers’ business-plans (like creating AI slop), we should remember that the AI companies could halt all of that activity and not lose a dime in revenue. By contrast, when we focus on AI applications that do the most direct harm – policing, health, security, customer service – we also focus on the AI applications that make the most money and drive the most investment.


Hi @asimong, better here… @amelia can more easily use it for her research.

Thank you @alberto! Strong, clear, well-argued – I can’t exactly say whether it is convincing to others, because my starting point is similar to yours, but I hope it is convincing enough to cause some significant hesitation and disinvestment, or divestment?

I comment partly from PhD work in machine learning over 30 years ago. Right back then, the late Donald Michie was clearly pointing out that pure neural-net-based AI would never satisfy the human reasoning requirements that we sometimes have to fall back on. He favoured a rule induction approach, which is what I followed; and I looked at rule induction from the perspective of trying to figure out what rules humans themselves were unconsciously using, when they performed complex tasks fluently. I still believe that this approach would be very valuable to follow up, and would love to see evidence of it. Currently I remain disappointed.

I do agree that machine translation – and indeed speech recognition – are some of the very few beneficial applications of the current deep AI approaches.

In terms of blockchain-related issues, I continue to be impressed by how the variant ideology and values of Holochain have taken root in the communities I mix with, as something much more in line with those communities’ values. Personally I can see some very positive uses for the right kind of distributed ledger technology, perhaps including currencies much better designed than Bitcoin (which I also see as a hugely bad influence).

It does come down to the societal context as well, doesn’t it? In a capitalist market-dominated economy, anything that has extractive potential will be available, for a price. What are the most effective leverage points to move us away from that mindset?


Brilliant Alberto, and right on. So, is this how you spend your vacation time?


Is Holochain not so energy intensive?

Indeed Holochain isn’t so energy intensive. The main thing, as I understand, is that Holo relies on consistency across a set of servers, not consistency across servers globally. But I guess others may be able to explain more. See e.g. https://www.reddit.com/r/holochain/comments/mftp7a/visa_bitcoin_ethereum_and_holochain_energy/ and Satoshi Nakamoto and the Fate of our Planet - Holochain Blog


Yes, I stayed out of the proof-of-work vs. proof-of-stake debate. Reason: with both, we have at least some energy costs over a centralized database architecture. Until we see substantial benefits, we have no reason to bear any cost at all.

@alberto I agree completely that AI and blockchain are currently mostly having negative effects. I also believe that these are tools which have a possibility for supporting a regenerative future, and that it is all of our responsibility to demand that tools are used to support a regenerative future. I have some actionable and highly viable ways to use these tools to catalyze a Regenaissance.

According to the father of the browser, the web has a BIG missing feature, which makes the web flat and static. This has enabled the web to be captured by the Internet Platforms which see people and information as commodities, essentially using AI to monetize online humanity. I am working on the Overweb, a trust layer over the webpage that enables people to be visible and layers knowledge on webpages. The Overweb can catalyze a shift from an Attention economy that harms people to a Knowledge economy that neutralizes misinformation, enables safe digital nations, and offers creators a fair value exchange.


Good on you! Look, I don’t claim to be able to make predictions (“especially about the future” :slight_smile:). Thing is: when ideas are young, they might be highly beneficial. It is impossible to predict the full chain of knock-on effects. So you keep your hopes up: since we are talking Bayesian statistics, you have a prior that the effects are positive. But when the ideas start to mature, you have a duty to update. How did it go? And if it went badly, can we still claim that it will get better in the future, without making the same mistake we made the first time around, failing to predict, say, that Bitcoin would create a business model for ransomware? “This time is different” is normally a weak argument. If you want to make it, better have solid reasons behind it.

Lately I have been sensing a turn of the narrative tide about the blockchain. More and more influential commentators are coming out with arguments that are strongly negative about it – or at least, more and more are making it into my Twitter timeline. Stephen Diehl is an obvious example; but I also see @smari in there.

A potential accelerator of this process is a video uploaded two days ago by a man called Dan Olson (@FoldableHuman on Twitter). It’s very long (over two hours), but super well researched, super high production value, super clear without trivializing the issues. As I write this, it has already passed 2 million views. I recommend it to get a big picture of the whole crypto scene 13 years after the famous Satoshi Nakamoto white paper: from Bitcoin all the way to NFTs.

There are too many insights for a TL;DR, but if I had to pick one, it is this: people who participate in the crypto scene (outside of early adopters) are basically the perfect target for scams:

  • “Barely middle class” – some disposable income.
  • Insecure – high financial anxiety and FOMO.
  • No business experience – they don’t think high returns on investment are suspicious.

The combination of money, FOMO and gullibility makes it so that any NFT Discord server is basically “a directory of potential marks”.

This explains a lot.

Do yourself a favour and watch it.


Hi Alberto!

I came to this via twitter after seeing an exchange between you and Vinay about the validity of NFTs, and I really enjoyed this piece. Thank you for writing it.

I hadn’t seen the argument made about the weird uncanny valley affect of AI predictions applied to COVID before. I can totally see the recommendations you mention being dismissed the way statistical info about COVID is being dismissed in favour of people’s own intuition before.

I think it’s an interesting critique around AI being used as a tool in governance, or decision making in general, and I think that where it’s being deployed, there almost needs to be a budget, or insurance allocated to cover the costs of something like an intervenor compensation programme to provide recourse when complaints inevitably come up.

These exist in the energy sector in the US as a way to address the massive power imbalances when citizens raise issues about big utility companies making decisions that harm citizens and climate, but help their own bottom line. I don’t know what the closest thing would be when folks talk about addressing the problems of recourse around AI - I’ve only seen folks talking about bias or explainable AI.

Anyway - just wanted to say thanks for writing it, and that I enjoyed it. Danke!


That is a really interesting thought, Chris.

Are you familiar with the history of the regulation of natural monopolies? Talking late 19th century, when infrastructure (city aqueducts) met capitalism. Aqueducts had been a pet project of the Roman Republic, and later the Empire, but those had been built as public works. In the economic scenario of the 1800s, it was private companies building up the infrastructure – and marginalist theory was around to conceptualize where the problem with that lay. Economists like Giovanni Montemartini (Wikipedia, in Italian) quickly realized that fixed costs were so high, and marginal costs so low, that, for any realistic market size, there was space for only one company to produce below average cost. This situation was christened “natural monopoly”.

There were thought to be two “clean” solutions to this problem. One is direct provision: the infrastructure is owned and run by the collectivity. Pricing is ideally at marginal cost, if other forms of financing can be found; if not, at average cost. The other is regulation. A private company is in charge, but the collectivity sets up a regulatory agency to oversee it, exact information, compel non-monopoly prices, and punish the company if it transgresses. Today we would frame regulation as a principal-agent problem, but I digress.

Montemartini himself liked the first solution better. With a twist: he thought that infrastructure should be owned by the public sector, but not by the state. Rather, he proposed municipal socialism, a concept whereby city administrations build in-house the capacity to serve their community with what today we call utilities. In Italy, municipal socialism had a major heyday in the early 20th century, and is the reason why we (well, we in the north) have, to this day, a landscape of reasonably efficient municipally owned companies doing water supply, wastewater treatment, waste management, and even energy generation and distribution. These are too efficient and deep-rooted for multinationals like Vivendi and Waste Management, which cannot get a foothold.

But in the 1980s, we had Reagan and Thatcher, and most of the world went down the road of regulation. The main problem with regulation, Montemartini thought, was that there is a structural imbalance of power between the regulated and the regulator. This is because the former is sitting on a lucrative revenue stream, while the latter depends on a cash-strapped parliament or local council for its budget. At the end of the day, the regulated is always going to be a step ahead. The privatizations of the Reagan-Thatcher era were a mixed bag: some went reasonably well, others went terribly (British Rail). Mazzucato’s book contains a big-picture view on this.

With AI, the problem is even larger. This is because algos are inherently opaque. Governments could dream up an Ethical AI Agency (I expect the European Commission to do this, in fact), but good luck to them as they try to figure out if there are biases in Zuckerberg’s stuff. They are just going to be outgunned.

So, maybe AI should be public. Like nukes. :face_with_raised_eyebrow:
