
Sci-Fi Science: Age of Ultron and Robots That Learn

The down-low from an IRL science communicator.


Welcome to Sci-Fi Science, our new series where a professional science communicator takes you through the hows and whys of what we see in our favorite fiction.

Age of Ultron, Terminator: Genisys, Ex Machina, Chappie—if 2014 was the year of space movies, then 2015 is absolutely the year for robots and artificial intelligence to take over our screens. And it makes sense. We haven’t had much of a trend for robot movies since around the time people were worried about Y2K, and the technology has advanced by leaps and bounds since then.

Let’s take a closer look at Ultron. Ultron is a robot with consciousness modelled after human brain patterns, minus a human conscience. It can learn, taking in new information and adapting its behaviours, and it can even rebuild itself. That sort of tech could make for a frighteningly formidable enemy, whether you’re a basic human, an Avenger, or even Mr. Incredible.

What if I told you that computer systems with artificial neural networks modelled after human brains have been around for decades now? What if I told you that mathematical algorithms can already be trained to learn pretty much independently?

Freaking out yet? Let me explain.

Machine learning has been around for over half a century. Before the 1950s, computers could only do exactly what you commanded, step by step, which wasn’t particularly exciting or useful if, say, you wanted to play a rousing game of checkers—which is exactly what Arthur Samuel wanted. In 1956, instead of laying out all of the computer’s moves in painstaking detail, he programmed the computer to play against itself thousands of times so that it could “learn” how to play the game. By 1962, his program could beat a self-proclaimed checkers master. Remember, this was decades before IBM’s Watson (another famous learning machine) won against Jeopardy champions.
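Samuel’s self-play trick is surprisingly easy to sketch in miniature. Here’s a toy version (my own illustration, not Samuel’s actual checkers program): a short Python script that learns the stick game Nim purely by playing against itself, keeping a running score of how well each move has worked out.

```python
import random

random.seed(0)

# Rules of this Nim variant: 10 sticks on the table, players alternate
# taking 1-3 sticks, and whoever takes the last stick wins.
Q = {}  # (sticks_remaining, move) -> learned value for the player to move

def moves(sticks):
    return [m for m in (1, 2, 3) if m <= sticks]

def choose(sticks, epsilon):
    # Usually play the best-known move; sometimes explore a random one.
    if random.random() < epsilon:
        return random.choice(moves(sticks))
    return max(moves(sticks), key=lambda m: Q.get((sticks, m), 0.0))

def train(games=30000, alpha=0.1, epsilon=0.2):
    for _ in range(games):
        sticks, history = 10, []
        while sticks > 0:
            m = choose(sticks, epsilon)
            history.append((sticks, m))
            sticks -= m
        # The player who took the last stick won. Walking backwards
        # through the game, credit and blame alternate between players.
        reward = 1.0
        for state, move in reversed(history):
            key = (state, move)
            Q[key] = Q.get(key, 0.0) + alpha * (reward - Q.get(key, 0.0))
            reward = -reward

def best_move(sticks):
    return max(moves(sticks), key=lambda m: Q.get((sticks, m), 0.0))

train()
print(best_move(5))  # take 1, leaving your opponent a losing 4 sticks
```

Nobody tells the program that leaving a multiple of four sticks is a losing position for your opponent; after thousands of games against itself, it figures that out on its own. That’s Samuel’s idea in a nutshell.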

Beyond fun and games, setting up a computer to “teach” itself in this way is super useful: not having to program every. single. step. saves a lot of time and effort, and it also allows for computers to take on tasks that we mere humans might not be able to tackle ourselves (or at least, the computers can accomplish things much more quickly).

We encounter this sort of tech every day. Google uses machine-learning algorithms to dig up our search results. Online stores use them to generate those recommendations based on what you’ve liked and on what other people with similar shopping patterns purchased. As more people buy stuff and punch terms into search bars, computers crunch that data and produce more and more accurate results and recommendations. And yup, machine learning is in part behind those creepily targeted ads that pop up everywhere around the web.
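The “people like you also bought” trick boils down to measuring how similar two shoppers are. Here’s a minimal sketch of the idea (the users and items are all invented, and real stores use far richer signals): find the shopper most similar to you, then suggest what they bought that you haven’t.

```python
from math import sqrt

# Hypothetical mini-catalog: which users bought which items.
purchases = {
    "ana":  {"robot novel", "soldering kit", "sci-fi poster"},
    "ben":  {"robot novel", "soldering kit", "drone"},
    "cara": {"cookbook", "apron"},
}

def similarity(a, b):
    # Cosine similarity on sets: shared items, scaled by basket sizes.
    shared = len(purchases[a] & purchases[b])
    return shared / sqrt(len(purchases[a]) * len(purchases[b]))

def recommend(user):
    # Find the most similar other shopper and suggest their items
    # that this user doesn't already own.
    others = [u for u in purchases if u != user]
    nearest = max(others, key=lambda u: similarity(user, u))
    return sorted(purchases[nearest] - purchases[user])

print(recommend("ana"))  # ben shops like ana, so: ['drone']
```

Scale that up to millions of shoppers and you get the recommendation engines (and, yes, the targeted ads) described above.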

In 2012, Google famously set a deep-learning system loose on millions of still images pulled from YouTube videos, and the computers independently learned to recognize concepts like human faces and cats based strictly on the content of the images. Another project, the German Traffic Sign Recognition Benchmark, has reported that computers can recognize images of traffic signs with fewer errors than humans make.
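At the heart of those systems is the same basic move: show a network examples, let it adjust its own connection strengths when it guesses wrong, repeat. Here’s a drastically simplified cousin of those deep networks (a single artificial neuron, with a made-up toy task): it learns to tell whether the bright bar in a tiny 3x3 “image” sits at the top or the bottom, without ever being told which pixels matter.

```python
# Toy "images": 3x3 grids flattened to 9 pixels.
# Label 1 = bar at the top, label 0 = bar at the bottom.
examples = [
    ([1, 1, 1, 0, 0, 0, 0, 0, 0], 1),
    ([1, 1, 0, 0, 0, 0, 0, 0, 0], 1),
    ([0, 1, 1, 0, 0, 0, 0, 0, 0], 1),
    ([0, 0, 0, 0, 0, 0, 1, 1, 1], 0),
    ([0, 0, 0, 0, 0, 0, 1, 1, 0], 0),
    ([0, 0, 0, 0, 0, 0, 0, 1, 1], 0),
]

weights = [0.0] * 9
bias = 0.0

def predict(pixels):
    total = bias + sum(w * p for w, p in zip(weights, pixels))
    return 1 if total > 0 else 0

# The perceptron rule: whenever the neuron guesses wrong, nudge its
# weights toward the correct answer. Learning is just these nudges.
for _ in range(20):
    for pixels, label in examples:
        error = label - predict(pixels)
        for i, p in enumerate(pixels):
            weights[i] += error * p
        bias += error

print([predict(pixels) for pixels, _ in examples])  # [1, 1, 1, 0, 0, 0]
```

The cat-spotting and sign-reading networks stack millions of these little units in layers, but the principle of adjusting weights from experience is the same.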

This may not seem very impressive. I mean, are computers really learning? If we take learning to mean gathering more and more information and repeating tasks until connections (either conceptual or procedural) are made, then heck yes, these computers are learning. Are they conscious? Do they understand their learning in the way that humans do? Do they feel all warm and fuzzy when they see a cute cat video? Well… no. But does that really matter in the grand scheme of things if the output of robot learning versus human learning is otherwise the same?

Now, this doesn’t mean that we’re right around the corner from a real-life Ultron. Building this sort of robot would require huge advancements in all sorts of tech beyond deep learning—grip sensitivity, computer vision, weaponry, and hardware that can handle everything without falling apart. Progress has been accelerating lately in all of these fields, but not in a balanced way. And this tech is finding more appropriate niches in other territories, like health and medicine. To my knowledge, no one is plotting to combine all of these into one big scary attack robot.


Nina Nesseth is a professional science communicator, emerging playwright, and serial tea-drinker. She’s happiest when science-ing at people (yes, that’s “science” as a verb) and watches way too many movies (but she lacks stamina and falls asleep if she tries to watch two in a row). You can find her on Twitter @cestmabiologie.


