The Google program successfully taught itself the rules of 49 Atari 2600 computer games from the 1980s, eventually figuring out strategies for victory. It worked out how to navigate each game, which actions were available and which led to rewards, then used that knowledge to improve its play.
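The trial-and-error loop described above can be sketched with tabular Q-learning, a simpler ancestor of the D.Q.N.'s approach. This is an illustrative toy, not the paper's method: the five-cell corridor environment, its rewards, and all hyperparameters below are invented for the example, and a real D.Q.N. would replace the table with a deep neural network reading raw screen pixels.

```python
# A minimal sketch of learning by trial and error: try actions, observe
# rewards ("positive outcomes"), and update value estimates. The toy
# environment and hyperparameters here are assumptions for illustration.
import random

random.seed(0)

N_STATES = 5          # cells 0..4; a reward waits at cell 4
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

# Q[state][action_index]: learned value of taking an action in a state
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    if nxt == N_STATES - 1:
        return nxt, 1.0, True   # positive outcome: reached the goal
    return nxt, 0.0, False

for episode in range(200):
    s, done = 0, False
    while not done:
        # Explore occasionally; otherwise exploit the best-known action
        if random.random() < EPS:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        s2, r, done = step(s, ACTIONS[a])
        # Nudge the estimate toward reward plus discounted future value
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy: the agent discovers "always move right"
policy = ["right" if Q[s][1] > Q[s][0] else "left"
          for s in range(N_STATES - 1)]
print(policy)
```

After a couple of hundred episodes the agent settles on moving right in every cell, having never been told the rules, only which outcomes scored.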
In 43 of the games, which included such classics as Space Invaders and Breakout, the D.Q.N. outperformed previous computational efforts to win, the paper said. In more than half the games, the new system could eventually play at least three-quarters as well as a professional human games tester.
By figuring out for itself the rules of a system, the D.Q.N. occasionally surprised its creators with new winning strategies. Playing Seaquest, for example, the computer determined that the game’s submarine could survive by staying near the surface for the entire game. In Breakout, it figured out a novel way to get through a wall of bricks.