Do you enjoy bedtime stories? I hope you do, because the reading of these stories might be the best way to teach robots how to be more human. Mark Riedl is the director of the Entertainment Intelligence Lab at the Georgia Institute of Technology, and his “Quixote” AI teaching technique (besides being named after an amazing story character) aims to teach human values to robots through story.
The premise is simple: teach robots to read and understand stories, crowdsource that education, and over time a robot might learn what it's like to be human. In speaking with Popular Science about his research and findings, Riedl said:
We believe that AI has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior. Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual.
If a robot reads a story and demonstrates what is considered "good" behavior, it's rewarded. That reward teaches it to value the behavior over its opposite and over other behaviors unlike it. Likewise, if a robot demonstrates bad behavior, negative reinforcement is applied.
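To make the reward idea concrete, here's a minimal sketch (my own illustration, not Riedl's actual implementation): an agent earns a positive reward for each action that continues an event sequence learned from stories, and a penalty for deviating. The pharmacy scenario and all function names are hypothetical, invented for this example.

```python
# A story-derived plot: the socially acceptable sequence of events.
# (Hypothetical example, not Quixote's real data.)
STORY_PLOT = ["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"]

def reward(action_history, action):
    """+1 if the action continues the story's event sequence, -1 otherwise."""
    step = len(action_history)
    if step < len(STORY_PLOT) and action == STORY_PLOT[step]:
        return 1
    return -1

def run_agent(actions):
    """Score a full behavior trace against the learned plot."""
    history, total = [], 0
    for action in actions:
        total += reward(history, action)
        history.append(action)
    return total

# An agent that follows the story beats scores higher than one that
# grabs the medicine and bolts.
polite = run_agent(["enter_pharmacy", "wait_in_line", "pay_for_medicine", "leave"])
rude = run_agent(["enter_pharmacy", "grab_medicine", "leave"])
```

The point is the shape of the signal, not the numbers: behavior that matches crowdsourced story sequences is reinforced, so the highest-scoring policy is the one that acts the way people in stories act.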
In a lot of ways, this mimics the very same method of education we get as young children. We're told parables and fables in the hopes that we learn a moral or a lesson from them. Most, if not all, of these stories stick with us for the rest of our lives. In fact, there's likely a fable that you're thinking of right now as you read this. I bet you could even share the gist of it, as well as what you've learned from it.
Given that, and given our strides in the development of artificial intelligence, it perhaps isn't too outlandish to think that robots might be able to learn the way we do, especially since we so often build robots in our own image.
Frankly, as Blastr’s Trent Moore so astutely points out, the idea of the Quixote method or program is genius in its simplicity. It’s elegant because it’s using techniques we’re all already familiar with, things we already do, to teach something how to think and act like us. Maybe years and years from now, when AI has grown to an advanced enough level, we’ll look back and think about how simple and quaint these methods seemed. But for now, developments like these are pretty gosh darn rad.
Though the jury's still out on whether this would've helped HAL 9000 or Ultron.
(image via Shutterstock/Angela Waye)