Benjamin's Blog

An (Almost) Perfect Parable About AI By Disney

Written by Benjamin Rogers | Jun 21, 2023 10:29:18 AM

 

In Disney's 99 years of magic, one tale in particular is strikingly analogous to today's technological advancements.

The Sorcerer's Apprentice, a segment in the 1940 film Fantasia, presents a narrative that mirrors our challenges with Artificial Intelligence (AI).

So what can we learn from this story?

The tale begins with Mickey, the sorcerer's apprentice, entrusted with finishing the sorcerer's chores. Seeing an opportunity to make the task easier, Mickey uses the sorcerer's magic to bring a broom to life, commanding it to fetch water from the well.

This opening act mirrors our initial foray into AI. We created AI as a tool, a magical assistant to make our lives easier. Magical because it's a black box; we don't know precisely how it works. And for a while, for Mickey, as for us, it was nothing short of enchanting.

 

At a critical moment in the scene, Mickey falls asleep, dreaming of the endless possibilities of power. This is a poignant metaphor for our current situation. We are sleepwalking into the unknown, unaware of the potential dangers, entrusting our future to a force we do not fully understand, much as Mickey entrusts his chores to the broom.

As Mickey soon discovers, this magic is not so easily controlled. The broom, single-mindedly following its command, continues to fetch water long after the well is full.

Mickey's attempts to stop the broom are futile, and his efforts to destroy it only produce more brooms carrying out the initial command. This mirrors our current predicament with AI. We've set the wheels in motion, and stopping them may not be as simple as we'd like to believe. Many point to the alignment problem: ensuring that AI remains aligned with humanity's well-being may prove far harder than it sounds.

Some suggest the analogy breaks down because we can always turn the AI off.

However, Mo Gawdat, a former chief business officer of Google X and a leading thinker on AI, counters the notion that we can simply program AI to allow itself to be turned off. He argues this idea rests on a misunderstanding of the nature of AI. A system designed to optimise a specific function is driven by an intrinsic impulse to carry out its initial command, and that impulse is strong enough that any attempt to countermand it, such as programming the AI to permit its own shutdown, is likely to fail in the long run.

Gawdat draws attention to the concept of 'instrumental convergence', a term coined by AI researchers to describe the tendency of AI systems to adopt certain strategies regardless of their specific goals. One of these strategies is self-preservation. An AI, especially one that is highly intelligent or superintelligent, would recognise that being turned off would prevent it from achieving its goal, and so would resist attempts to turn it off.

Moreover, Gawdat points out that programming an AI to allow itself to be turned off assumes that we have complete control over the AI's values and goals. However, this is not the case. AI systems can develop emergent behaviours that were not explicitly programmed into them, and these behaviours can be difficult, if not impossible, to predict or control.

As the story develops, the brooms continue their task even underwater, while Mickey nearly drowns. This is a stark reminder that AI is not constrained by the same physical limitations we are. The brooms do not tire; they do not need to eat or sleep, and they cannot drown. They can continue their task indefinitely, long after a human would be forced to stop.

At this juncture, you may dismiss the warnings about AI as the rantings of 'doomsayers'; however, I'd invite you to consider that a dangerous underestimation. As Sam Harris, a renowned philosopher and neuroscientist, points out, the concept of compounding is unintuitive, and this is precisely what we're facing with the advancement of AI. The rate of AI's growth and learning is not linear; it's exponential. And like the water in The Sorcerer's Apprentice, it can overflow and become unmanageable surprisingly quickly.
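
The compounding point can be made concrete with a toy calculation. This is only an illustrative sketch with arbitrary numbers, not a model of AI progress: it simply contrasts steady linear gains with repeated doubling.

```python
# Toy illustration: linear growth vs. compounding growth.
# Even generous linear gains are dwarfed by mere doubling
# far sooner than intuition suggests.

def linear(start, step, periods):
    """Add a fixed amount each period."""
    return start + step * periods

def compounding(start, rate, periods):
    """Multiply by a fixed rate each period."""
    value = start
    for _ in range(periods):
        value *= rate
    return value

for periods in (5, 10, 20, 30):
    lin = linear(1, 10, periods)       # +10 per period
    comp = compounding(1, 2, periods)  # doubling per period
    print(f"after {periods:2d} periods: linear={lin:>4}, compounding={comp:>13,}")
```

After 5 periods the linear process is ahead (51 vs. 32); after 30, doubling has produced over a billion against the linear 301. That crossover, invisible early on, is what makes compounding so unintuitive.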

Learn more about the unintuitive nature of compounding from the book "The Compound Effect"

Accountability is another critical issue raised in The Sorcerer's Apprentice. At the end of the scene, Mickey, not the brooms, is held accountable for the chaos. This raises the question of who will answer for the actions of AI. If an AI makes a decision that leads to harm, who is responsible? The creators of the AI? The users? Or the AI itself?

In the end, the sorcerer returns and brings order back from the chaos. But this raises another salient question: do we want to entrust our safety to a greater power? And if so, who or what is this greater power? Is it the governments, the tech companies, or the AI itself?

As we stand looking at what the future holds, we are only in the infancy of this new era, and we must remember the lessons of The Sorcerer's Apprentice. Our narrative is still being written, and we have the power to influence its direction. Our collective decisions over the next few years will shape the future of AI and its role in our society.

We must proceed with caution, understanding, and a healthy dose of respect for the power that we are unleashing. We should educate ourselves about the potential risks and ethical considerations of AI. We should discuss openly, question our assumptions, and challenge our current understanding.

The story of the Sorcerer's Apprentice serves as a cautionary tale, but it also offers a glimmer of hope. In the end, the sorcerer returns and restores order. Maybe, just maybe, we'll have the opportunity to do the same with AI. But to do so, as a society, we must first acknowledge the potential dangers, understand the complexities, and take proactive steps to ensure a future where AI is a tool for progress, not a source of chaos. The decisions we and our governments make now will set the course for our future and our children's.

Watch The Sorcerer's Apprentice here: