The Rubik’s Cube is no match for a new artificial intelligence system, researchers report.
Since its invention by a Hungarian architect in 1974, the Rubik’s Cube has furrowed the brows of many who have tried to solve it.
DeepCubeA, a deep reinforcement learning algorithm, can find the solution in a fraction of a second, without any specific domain knowledge or in-game coaching from humans.
This is no simple task: the cube has roughly 43 quintillion possible configurations but only one goal state, with each of its six sides displaying a solid color, a state that for all practical purposes cannot be reached through random moves.
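That figure does not appear in the press release itself, but it follows from the standard counting argument for the 3x3x3 cube. As a quick sanity check, the short Python snippet below reproduces the usually quoted number; the comments and code are my own, not part of the study.

```python
from math import factorial

# Reachable 3x3x3 Rubik's Cube states:
# 8! corner permutations x 3^7 free corner orientations (last one forced)
# x 12! edge permutations x 2^11 edge orientations (last one forced),
# divided by 2 because corner and edge permutations must share parity.
states = factorial(8) * 3**7 * factorial(12) * 2**11 // 2
print(f"{states:,}")  # 43,252,003,274,489,856,000, about 4.3e19
```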
For the new study, researchers demonstrated that DeepCubeA solved 100 percent of all test configurations, finding the shortest path to the goal state about 60 percent of the time. The algorithm also works on other combinatorial puzzles such as the sliding-tile puzzle, Lights Out, and Sokoban.
“Artificial intelligence can defeat the world’s best human chess and Go players, but some of the more difficult puzzles, such as the Rubik’s Cube, had not been solved by computers, so we thought they were open for AI approaches,” says senior author Pierre Baldi, professor of computer science at the University of California, Irvine.
“The solution to the Rubik’s Cube involves more symbolic, mathematical, and abstract thinking, so a deep learning machine that can crack such a puzzle is getting closer to becoming a system that can think, reason, plan, and make decisions.”
The researchers were interested in understanding how and why the AI made its moves and how long it took to perfect its method. They started with a computer simulation of a completed puzzle and then scrambled the cube. Once the code was in place and running, DeepCubeA trained in isolation for two days, solving an increasingly difficult series of combinations.
“It learned on its own,” Baldi notes.
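The press release does not give implementation details, but the setup it describes, starting from the solved state, scrambling it, and working through an increasingly difficult series of combinations, can be sketched on one of the simpler puzzles the algorithm also handles. The Python below is an illustrative sketch of that idea only, not the authors' code; the choice of Lights Out, the difficulty label, and all names are my own.

```python
import random

SIZE = 5  # a standard Lights Out board is 5x5

def press(board, row, col):
    """Toggle the pressed cell and its orthogonal neighbours (Lights Out rule)."""
    new = [r[:] for r in board]
    for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
        r, c = row + dr, col + dc
        if 0 <= r < SIZE and 0 <= c < SIZE:
            new[r][c] = not new[r][c]
    return new

def scrambled_state(depth):
    """Start from the solved (all-off) board and apply `depth` random presses."""
    board = [[False] * SIZE for _ in range(SIZE)]
    for _ in range(depth):
        board = press(board, random.randrange(SIZE), random.randrange(SIZE))
    return board

def training_batch(max_depth, per_depth):
    """Curriculum-style batch: scrambles get deeper as max_depth grows."""
    return [
        (scrambled_state(depth), depth)  # (state, rough difficulty label)
        for depth in range(1, max_depth + 1)
        for _ in range(per_depth)
    ]

if __name__ == "__main__":
    batch = training_batch(max_depth=10, per_depth=4)
    print(f"generated {len(batch)} scrambled states, deepest scramble = {batch[-1][1]}")
```

The point being illustrated is simply that scrambling backward from the goal gives an endless supply of training positions whose difficulty can be ramped up gradually, which matches the increasingly difficult series of combinations described above.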
There are some people, particularly teenagers, who can solve the Rubik’s Cube in a hurry, but even they take about 50 moves.
“Our AI takes about 20 moves, most of the time solving it in the minimum number of steps,” Baldi says. “Right there, you can see the strategy is different, so my best guess is that the AI’s form of reasoning is completely different from a human’s.”
The ultimate goal of projects such as this one is to build the next generation of AI systems, Baldi says. Whether they know it or not, artificial intelligence touches people every day through apps such as Siri and Alexa and recommendation engines working behind the scenes of their favorite online services.
“But these systems are not really intelligent; they’re brittle, and you can easily break or fool them,” Baldi says. “How do we create advanced AI that is smarter, more robust, and capable of reasoning, understanding, and planning? This work is a step toward this hefty goal.”
The study appears in Nature Machine Intelligence.
Source: UC Irvine