I saw The Lion King on Broadway this weekend and I can’t stop thinking about it, specifically in terms of artificial intelligence. Just the day before, I had listened to a Radiolab podcast episode on human-computer interaction: Talking To Machines.
In the segment on Furby, its creator, Caleb Chung, recounts the elements needed for a toy to be convincingly “alive”: it must feel and show emotions, be aware of its environment, and change over time. An MIT grad student conducted experiments with school-age children based on these criteria, comparing their interactions with Furby to their interactions with a Barbie doll and a gerbil. The children’s interactions with Furby were more aligned with the way they treated the live animal than with the way they treated the doll.
Chung goes on to argue that Furby is alive in the same way you and I are and that, as a developer, he can code the emotions necessary to keep a mechanical object alive.
Chung: When is something alive? Furby can remember these events, they affect what he does going forward, and it changes his personality over time. He has all the attributes of fear or of happiness, and those are things that add up and change his behavior and how he interacts with the world. So how is that different than us?
Abumrad: [Because] Life is driven by the need to be alive. And by these base, primal, animal feelings like pain and suffering.
Chung: I can code that. I can code that. Anyone who writes software […] can say okay, I need to stay alive, therefore, I’m gonna come up with ways to stay alive. I’m gonna do it in a way that’s very human.
When countered by Abumrad, who is understandably resistant to the idea that a simple mechanical toy possesses life as we know it, he finishes:
Our neural system isn’t different from Furby’s. It’s just more complex.
So what does this have to do with the puppets in The Lion King? In the same way that developers are the genuine intelligence driving the emotions behind artificially intelligent toys like Furby, actors are the real intelligence behind a kind of artificial intelligence (“soft” AI?) at work in these magnificent theatrical puppets. Inara Verzemnieks explains in the Oregonian:
One of the things that’s fascinating about puppetry, its inherent paradox, is that in order for it to be successful, the puppeteer must be invisible, anonymous. And yet, a puppet is by definition an extension of the puppeteer – his stand-in, in effect.
This leads to some pretty trippy lines of thinking.
With a puppet, you are taking something from your imagination – a part of you, yet not you – giving it a physical form and then animating it, trying to occupy it so fully that you become this part of yourself that is not yourself (yet is still yourself).
In other words, you become a different you.
As Stephanie Snyder, curator of Reed’s Cooley Gallery put it, the puppeteer “is not just becoming the character, he’s becoming the puppeteer becoming the character. We all do that. I become the curator becoming the interpreter. You become the writer becoming the interviewer. All these layers. …”
What might happen, then, when artificial intelligence meets puppetry meets theatre? What if toy designers like Caleb Chung (who, incidentally, has a background as a mime) collaborated with puppeteers like Michael Curry, The Lion King’s puppet master, and directors like Julie Taymor? I’m not sure, but I’m guessing it would be wonderful and scary all at once. I just hope it happens in my lifetime.