Can machines think like human beings? That's a question the Turing test supposedly answers.
But what if the Turing test isn't accurate? What if there were a better way to judge how closely an artificial intelligence thinks like a human?
These are questions asked by Mark Riedl, an associate professor in the School of Interactive Computing at the Georgia Institute of Technology. Riedl suggests that a new test, Lovelace 2.0, is a better measure because it evaluates how artificial intelligence handles creative tasks.
Creative cognition is a hallmark of the human brain. It's what allows us to create art, music and literature, and it's something computers struggle with.
This is why Riedl suggests that his test is better for measuring how human an artificial intelligence really is.
The Turing test depends on an artificial intelligence imitating a human while engaging in dialogue. To pass, the AI must convince a human judge that it, too, is human. The test is based on deception, which Riedl argues makes it an inaccurate measure of intelligence.
The Lovelace 2.0 Test of Artificial Creativity and Intelligence asks an AI to create a creative "artifact" in a specific artistic genre that requires human-like intelligence. To pass, the AI must produce an artifact that satisfies constraints set by a human evaluator and is a credible example of the given genre. The artifact does not, however, need any aesthetic appeal.
This test follows an earlier Lovelace Test, proposed in 2001. (Both take their name from Ada Lovelace, often called the first computer programmer.) The original test required an AI to produce a creative artifact in a way its designers could not explain, making the result "valuable, novel and surprising."
In his paper, Riedl uses fictional storytelling as an example, since it draws on many human creative-cognition skills. Riedl states that no current artificial intelligence can pass the Lovelace 2.0 test when asked to write such a story.
"Creativity is not unique to human intelligence, but is one of the hallmarks of human intelligence," writes Riedl. "In the spirit of the Imitation Game, the Lovelace 2.0 test asks that artificial agents comprehend instruction and create at the amateur levels."
Perhaps, though, Riedl isn't aware of Angelina, an AI that creates video games. Working from a set of programmed phrases and basic parameters, Angelina has produced games praised for their originality.
"Computational evolution is a means of generating things inspired by how evolution works in the natural world," says Michael Cook, one of the researchers behind Angelina. "Lots of random recombinations of things, with some kind of pressure or selection mechanism to make sure the best examples are chosen to combine with one another at each stage."
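Cook's description maps onto a standard evolutionary loop: random recombination of candidates, plus a selection pressure that keeps the best at each stage. The sketch below is purely illustrative and is not Angelina's actual code; the target string, fitness function, and all parameter values are assumptions chosen to make the idea concrete. It evolves a random string toward a target using only crossover, mutation, and selection.

```python
import random

def evolve(target, population_size=100, mutation_rate=0.1,
           generations=500, seed=42):
    """Toy evolutionary loop: recombination plus selection pressure.

    Illustrative only; parameters and fitness are arbitrary choices.
    """
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    length = len(target)

    def fitness(candidate):
        # Count positions that already match the target; higher is better.
        return sum(c == t for c, t in zip(candidate, target))

    def crossover(a, b):
        # Random recombination: each position inherited from either parent.
        return "".join(rng.choice(pair) for pair in zip(a, b))

    def mutate(candidate):
        # Occasionally swap in a random character to keep exploring.
        return "".join(
            rng.choice(alphabet) if rng.random() < mutation_rate else c
            for c in candidate
        )

    # Start from completely random strings.
    population = [
        "".join(rng.choice(alphabet) for _ in range(length))
        for _ in range(population_size)
    ]
    for _ in range(generations):
        # Selection pressure: the fitter half becomes the parent pool.
        population.sort(key=fitness, reverse=True)
        if population[0] == target:
            break
        parents = population[:population_size // 2]
        # Keep the single best candidate unchanged (elitism), then refill
        # the population with mutated recombinations of random parents.
        population = [population[0]] + [
            mutate(crossover(rng.choice(parents), rng.choice(parents)))
            for _ in range(population_size - 1)
        ]
    return max(population, key=fitness)

best = evolve("hello world")
```

The loop mirrors Cook's two ingredients directly: `crossover` and `mutate` supply the "random recombinations of things," while sorting by `fitness` and breeding only from the top half supplies the "pressure or selection mechanism."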