#22 AIs will never be as dangerous as stupid humans

Dear Friend,

You told me about Ezra Klein’s op-ed in the New York Times about dangerous AI. I thought to myself, “Finally, someone I respect, someone thoughtful, who will give us context for adjusting our expectations as new information rapidly emerges about one of the most powerful transformations in human history.”

I was woefully mistaken, and sadly disappointed.

Klein’s essay is basically a humblebrag about how many brilliant and strange people he hangs out with in San Francisco, and how they think there’s a 10% chance we will all die at the hands of AI. He does not explain how these people arrive at such an estimate, nor does he describe them as having any particular skill at statistics or forecasting – just that they are the coders who will conjure up the AI demons that will kill us all. Yes, he literally uses that analogy.

Much better is his op-ed from February, which describes the dangers of misalignment – that is, what are we actually teaching AI to do, and is that good for all of us who end up depending on the machines, or will it benefit only the ones who control the AI? Effectively, is AI capitalism on steroids?

“I’d feel better, for instance, about an A.I. helper I paid a monthly fee to use rather than one that appeared to be free, but sold my data and manipulated my behavior.” E. Klein

This was precisely my fear. As a lifelong student of marketing, culture, and capitalism, I know it will take very little time for the “weaponization of media” to become the “weaponization of AI.” Not that AI is going to launch missiles, crash airplanes, take out our power grid, or attack our hospital equipment.

Rather, it’s going to double our Amazon purchases, increase our video gaming and social media usage, and generally turn us into even lazier, more helpless creatures than we already are. Wall Street has trained you and me to believe that Capital is more important to us than Socialism, suppressing millions of years of evolution. Training algorithms to believe the same is child’s play. Klein does well to warn us.

Kara Swisher was the first person I believed when, around 2016, people started warning us that social media was hurting us far faster than we realized. Everyone started talking about the “echo chamber.”

Of course, in the ’90s at an Ivy League university, my sister talked about the “bubble” and how it insulated her from the outside world.

Of course, when we perfected coal engines and trains started to run faster than horses, doctors in the UK predicted the speed would suck the air out of our bodies and we’d suffocate. American doctors believed women’s uteruses would be ripped out of them.

Of course, when books first came into general public view, especially novels, people believed they’d rot your brain and that reading by candlelight would make you blind. Same with radio. Same with television. The debate still rages over nonionizing radiation.

“According to Genevieve Bell, the director of Intel Corporation’s Interaction and Experience Research, we have had moral panic over new technology for pretty well as long as we have had technology. It is one of the constants in our culture.” –WSJ

Nobody rides these trends better than Audrey Watters, whose specialty is ed tech and how poorly it measures up to the hype. Learning from her how many ‘revolutionary’ technologies went bust and failed utterly in deployment reminds me of Sagan’s quote, paraphrasing Laplace:

“Extraordinary claims require extraordinary evidence.”

Having used ChatGPT, I’m convinced it is a useful tool with real potential. ChatGPT’s propensity to make up an answer when it doesn’t have a real one is all too human.

People are so willing to believe each other’s claims, and to deny real evidence that opposes them. We are social creatures, not logical ones. ChatGPT has been trained to be the same. In short: For all its transistors and algorithms, ChatGPT cannot think.

So I am far more worried about people being unwittingly, or even willingly, manipulated by AI. I am far more worried about us trusting it to make decisions and predictions. Above all, I worry AI will act as a magic mirror, telling us what we want to hear while gently manipulating us toward someone else’s benefit.

These are all fairly quotidian concerns. When we developed nuclear power, it was a race to figure out fission so we could bomb our enemies. There was some concern we might ignite a large part of the atmosphere, but no large collective, nor even a powerful leader, stopped to try to calculate the relative pros and cons of the new technology.

Since 1947 there has been a Doomsday Clock implying our inevitable demise, but we are still around. The clock has never been made probabilistic; it merely flails around as a political tool to warn us about our own stupidity, and yet it manages to contribute to the underlying problem, which is that we are not very good at predicting the future.

There is a trove of human literature on how important it is, biologically, to be conservative – if you get killed by a saber-toothed tiger, there is no tomorrow in which to hunt the next game. We are wired to be fearmongers.

When the Brooklyn Bridge opened in 1883, a panic erupted after a woman stumbled. The ensuing stampede killed 12 people. Last Halloween, the same thing happened again in South Korea, killing 159.

I am reminded of a story told by a hypoxic mountaineer about a bear roaming a glacier. It pounced upon him, and he used an ice axe to defend himself as it sank its teeth into his leg. Only later did he realize it was a wind-blown garbage bag, and that he had stabbed himself in his own defense.

Many people I know, many whom I respect, behave irrationally to protect themselves from a danger without realizing the costs of their defense. Many Americans keep a gun at home to defend against intruders, ignoring the data showing that, like a Chekhov cliché, that gun is far more likely to harm someone they care about – or even themselves, in a dark suicidal moment.

I just read about a spill of radioactive tritium at a nuclear plant in Minnesota. The manager of the power plant felt compelled to hide it from the public, to avoid exactly the kind of knee-jerk reaction which later materialized. In comments on Facebook posts and on a BBC article, few cared about knowing the actual hazards, and preferred to throw themselves into a fervor over how many ways we have invented to kill ourselves – completely blind to the benefits of these technologies and what we might learn from our mistakes.

Nobody paused to think about the harm caused by coal mining, hydro-fracking, and CO2 emissions. Nobody weighed these against a bit of tritium. Nobody asked whether more than a quarter of the tritium might be recaptured.

A similar social maelstrom erupted when people learned about a planned tritium release from the decommissioned Indian Point plant in New York.

At that time, I happened to be reading “Swords Into Ploughshares: Does Nuclear Energy Endanger Us?” This brilliant essay comes from a book by Max Perutz, a Nobel Prize-winning molecular biologist and polymath. He lays out a systematic, mathematically grounded way of evaluating the risks and benefits of nuclear energy. The essay is impossible to find on the internet; Chloe got me the book out of a dumpster’s free bin.

The people who care are so few, and so quiet, that effectively the collective voice of the times is one of total, willful ignorance.

“If life is going to exist in a Universe of this size, then the one thing it cannot afford to have is a sense of proportion.” –Douglas Adams, H2G2

I am crying out into the wilderness.

We cannot continue to be blind to the relationships among our actions. We cannot make each decision in a hermetically sealed vacuum. We must learn to think proportionally, to treat every action as a trade-off among future possibilities.

A friend of mine recently confessed a growing fear of flying. It reminded me of my great privilege. While she had only recently started flying often in small planes, working and living in remote Alaska, I’d grown up in small planes, and one of my first jobs was summarizing the NTSB’s preliminary reports on aircraft accidents. This was followed by intensive training in the medical device industry.

Both of these experiences are due to my father – who has had an engineer’s lifelong passion for flying, and who, responding like an engineer to the news that he was unable to have children, founded a company and developed groundbreaking technology in reproductive medicine. I think it is also important to recognize (although the data here is slim) that earlier in his life my father had some strange health issues and developed a hypochondriac’s obsession with disease and wellness. He also lost his mother at a relatively young age, due to medical malfeasance. I think all of this informed his calculus of risk immensely.

The result is that I have been exposed for decades to incredibly complex and time-tested risk management systems, designed to keep people from dying in droves in airplane crashes and medical device failures.

My friend felt her fears were irrational. When pressed, she confessed they might stem from all the pilots she had ended up meeting, flying with, and hanging out with. They were not exceptional – just flawed humans prone to distraction and error.

I was in the Bahamas once when a pilot went on a bender with some of my boatmates, consuming far too much alcohol and cocaine – only for my colleagues to learn that he had sole command of a morning flight just a few hours away. At the time, he’d just lost his keys, and they were tearing apart a bathroom to break into his apartment and fetch his flight bag. He could barely stand upright.

If that was my model for a commercial pilot, I too would avoid flying. It is definitely eerie to think too long or hard about being completely powerless, suspending disbelief at the same time as being suspended with a hundred other people, eight miles above the surface of the earth.

But I know the systems. I know the stats. Driving in a car is far more terrifying for me, with our chaotic streets and highways (which even the almighty AI seems incapable of managing).

Autopilot and collision avoidance in an airplane are far more mature, have better regulatory oversight, and are actually a far simpler problem to solve than automobile safety.

The simple arithmetic tells me it is safer to fly than to drive.
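
That arithmetic fits on a napkin. The rates below are rough approximations of published US figures – driving fatalities run on the order of one per hundred million vehicle-miles, scheduled airline fatalities orders of magnitude lower per passenger-mile – so treat them as illustrative assumptions, not authoritative data:

```python
# Back-of-the-envelope risk comparison. Rates are rough, illustrative
# approximations of published US figures, not authoritative data.

DRIVING_DEATHS_PER_100M_MILES = 1.3     # approx. deaths per 100M vehicle-miles
FLYING_DEATHS_PER_100M_MILES = 0.003    # approx. deaths per 100M passenger-miles

def expected_fatalities(rate_per_100m: float, trip_miles: float) -> float:
    """Expected fatalities for one traveler over a trip of the given length."""
    return rate_per_100m * trip_miles / 100_000_000

TRIP_MILES = 2_500  # e.g., a cross-country trip

drive = expected_fatalities(DRIVING_DEATHS_PER_100M_MILES, TRIP_MILES)
fly = expected_fatalities(FLYING_DEATHS_PER_100M_MILES, TRIP_MILES)

print(f"Driving {TRIP_MILES} miles: ~{drive:.1e} expected fatalities")
print(f"Flying  {TRIP_MILES} miles: ~{fly:.1e} expected fatalities")
print(f"Per mile, driving is roughly {drive / fly:.0f}x deadlier")
```

On these assumed rates, driving works out to be hundreds of times deadlier per mile – which is why the chaotic highway frightens me more than the quiet cabin at altitude.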

If you want to know more, the FAA and NTSB have published extensively on all of these systems of safety. One example: airline transport requires two pilots, and very specific rules determine the responsibilities of each. After recent international accidents, even this system was refined to prevent rare but possible failures. The system is always self-critical, always striving for greater robustness.

My friend mentioned the Boeing 737 MAX groundings. Root cause analysis, if I remember correctly, showed that this was an experiment in deregulation. Pressure was placed on the FAA to let Boeing self-regulate its safety checks, and a series of shortcuts and assumptions led to new safety technology being deployed which, when it malfunctioned, flew the airplanes into the ground despite the pilots’ attempts to prevent it.

So this was another case of something that was supposed to make us safer actually killing people. It’s also a lesson that capitalism doesn’t naturally protect us. While the accidents were a tragedy, the response was powerful: ground the airplanes, perform extensive analysis, fix several problems, and reassess how we had assessed the risks of every problem before.

They used things like Root Cause Analysis (RCA) and Corrective and Preventative Action (CAPA) – engineering jargon I became intimate with in the Med Tech industry.

One of my favorite forms of RCA is the “5 Whys,” a poetic Japanese interpretation of RCA intended to inform the decisions of shop stewards on factory floors.

Taiichi Ohno, godfather of Toyota’s much-vaunted Production System, gives us an example (Ohno 1988, p. 17):

1. Why did the machine stop? There was an overload and the fuse blew.
2. Why was there an overload? The bearing was not sufficiently lubricated.
3. Why was it not lubricated? The lubrication pump was not pumping sufficiently.
4. Why was it not pumping sufficiently? The shaft of the pump was worn and rattling.
5. Why was the shaft worn out? There was no strainer attached and metal scraps got in.

Without repeatedly asking why, managers would simply replace the fuse or the pump, and the failure would recur. The specific number five is not the point. Rather, the point is to keep asking until the root cause is reached and countermeasures are enacted.
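
The mechanics are simple enough to sketch in a few lines of code. Here Ohno’s causal chain is transcribed into a hypothetical lookup table – the structure and names are mine, purely for illustration – and the drill-down just follows known causes until no deeper one is recorded:

```python
# A minimal sketch of a "5 Whys" drill-down, using Ohno's machine-stop
# example. The lookup table is hypothetical, built from the text above;
# it is not a real diagnostic system.

causes = {
    "the machine stopped": "an overload blew the fuse",
    "an overload blew the fuse": "the bearing was not sufficiently lubricated",
    "the bearing was not sufficiently lubricated": "the pump was not pumping enough lubricant",
    "the pump was not pumping enough lubricant": "the pump shaft was worn and rattling",
    "the pump shaft was worn and rattling": "no strainer was attached, so metal scraps got in",
}

def five_whys(symptom: str, chain: dict[str, str]) -> str:
    """Keep asking 'why' until no deeper cause is known; return the root cause."""
    current = symptom
    while current in chain:
        current = chain[current]
        print(f"Why? Because {current}.")
    return current

root = five_whys("the machine stopped", causes)
print(f"Countermeasure goes here: {root}")
```

Replacing only the fuse is stopping the loop after one iteration; the failure recurs because the deeper entries in the chain are never reached.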

Note that I don’t say the root cause is “fixed” or “resolved” – because there is still a further question here: why was the strainer missing?

This is where the CAPA comes in. The corrective action is easy: replace the strainer, and replace the shaft and the bearing if they are out of spec. But what of our preventative action?

The preventative action requires more thought about human error. Why was the strainer missing? Is the person responsible for it overburdened or undertrained? Can it be redesigned so the machine won’t run when it is missing that part?

Crucially: why didn’t we identify this problem sooner? Is it part of a system of failures which requires a culture change, through training or more frequent inspections?
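
To keep the jargon straight, here is the strainer story as a toy CAPA record. The field names and structure are my own invention for illustration; real Med Tech CAPA systems are vastly more elaborate:

```python
# A toy CAPA record for the strainer example. Field names are invented
# for illustration; real CAPA systems also track owners, deadlines, and
# evidence that each action actually worked.
from dataclasses import dataclass, field

@dataclass
class CAPA:
    root_cause: str
    corrective_actions: list[str] = field(default_factory=list)    # undo this failure
    preventative_actions: list[str] = field(default_factory=list)  # stop the next one

strainer_capa = CAPA(
    root_cause="strainer missing; metal scraps wore out the pump shaft",
    corrective_actions=[
        "attach a strainer",
        "replace the shaft and bearing if out of spec",
    ],
    preventative_actions=[
        "interlock the machine so it will not run without the strainer",
        "retrain (or unburden) whoever maintains the strainer",
        "add the strainer to routine inspection checklists",
    ],
)
```

The corrective list closes out this incident; the preventative list is where the culture change lives.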

This whole methodology takes where we are right now, tries to understand it, and tries to move us toward a better state. To borrow again from manufacturing, we are performing “Kaizen.”

Kaizen is the Japanese word for improvement. In manufacturing, it is sacred, because all other things stem from it. We improve ourselves, we improve our working conditions, we improve our tools, we improve customer satisfaction. It is a continual process of taking baby steps toward perfection, and knowing we will never come close.

If AI is subject to regulatory oversight the way aviation and med tech are, if smart journalists ask real questions and actually strive to inform while they entertain, and if we think probabilistically when weighing the likelihood of desirable and dangerous outcomes – well, then I think we will do fine with AI.
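
What “thinking probabilistically” means here can be shown with grade-school arithmetic: weight each outcome by its likelihood, not by its vividness. Every number below is invented purely for illustration – the point is the method, not the estimates:

```python
# A toy expected-impact calculation. Probabilities and impacts are
# invented for illustration; the outcomes need not be mutually exclusive.

outcomes = [
    # (description, probability, impact in arbitrary harm/benefit units)
    ("vivid AI catastrophe", 0.001, -1_000_000),
    ("chronic harms: manipulation, dependence", 0.60, -10_000),
    ("broad gains from honest augmentation", 0.80, 50_000),
]

for name, p, impact in outcomes:
    print(f"{name}: contribution = {p * impact:+,.0f}")

net = sum(p * impact for _, p, impact in outcomes)
print(f"Net expected impact: {net:+,.0f}")
```

Even on these made-up numbers, the quotidian harms contribute several times more expected damage than the vivid catastrophe – which is exactly the sense of proportion this letter is pleading for.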

If we fail at all three, then I predict a series of small disasters, like canaries in a coal mine warning of a larger problem.

One of these tiny disasters is already upon us: the automation of cars. Self-driving cars are arguably only slightly safer than the average human driver, and are prone to all sorts of glitches which sound eerily human: missing red lights and the ends of highways, struggling to recognize the edges of roads, and hitting the occasional emergency vehicle or motorcyclist.

This is not an example of AI performing too well for our own good. Rather, it’s an example of how something we as humans do objectively poorly (driving) turns out to be enormously difficult to improve upon with AI. So far, the big successes have come from careful mapping and redesigning of our roads, not from training computers to outthink humans.

It is very similar to the situation with the Boeing safety system which crashed two planes. I think we will see more of this, especially from AI designed to replace, rather than augment, a human.

When it comes to augmentation, I see enormous potential for AI. It can find patterns we miss, and crucially, if we are honest about our blind spots, AI can keep tabs on us.

When we are after laziness and market share, we design self-driving AI and release it to the public before it is ready (and yes, I do believe it will eventually get significantly better than a human driver, but that is a low bar to set).

When we are after public safety, we design AI to recognize when we are distracted, or inattentive. We use it to tell us when we’re not fit to drive and to help us find other means of transportation in order to get home (ride-shares and taxis). We use it to detect when road conditions are dangerous and to warn or divert future traffic.

We have to be honest about when and where we need help.

Without that honesty, AI will just be another magical tool, with successive crazy, high-profile failures, just like nuclear energy. We will lament the Chernobyl, the Three Mile Island, and the Fukushima of AI.

Meanwhile, climate change and poor nutrition and the corrosiveness of capitalism for capital’s sake will keep on killing us, and we won’t care. We probably won’t even notice.
