Google’s DeepMind has already made headlines for its AI’s abilities. Now StarCraft players can go head-to-head with AlphaStar, a DeepMind-built program, at BlizzCon 2019.
StarCraft has long been considered one of the most advanced, well-rounded and complicated real-time strategy (RTS) games on the market. The game enjoys a large following and a professional eSports presence. To succeed, players must master complex strategies, scout their opponents, manage an economy, choose the right builds and maintain enough actions per minute (APM) to control everything happening on screen.
AlphaStar has achieved the rank of Grandmaster with all three of the races in StarCraft II, placing it above 99.8% of all human players. To ensure the AI plays on equal terms, its rate of actions is restricted to a level a human player can achieve, and it can only see the portion of the map it has explored—unlike many game AIs, which can see everything.
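The fairness restriction described above amounts to capping how many actions the agent may issue in a rolling time window. A minimal sketch of such a cap in Python is below; the class name and the specific limit are illustrative assumptions, not DeepMind’s actual implementation, whose precise limits were tuned with professional players.

```python
from collections import deque

class ActionRateLimiter:
    """Illustrative sketch (not DeepMind's code): cap an agent's
    actions per minute (APM) over a rolling window, similar in
    spirit to the action-rate limits applied to AlphaStar."""

    def __init__(self, max_apm=300, window_seconds=60):
        self.max_actions = max_apm      # assumed cap; real limits varied
        self.window = window_seconds
        self.timestamps = deque()       # times of recently allowed actions

    def try_act(self, now):
        # Discard timestamps that have fallen out of the rolling window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now)
            return True   # action allowed
        return False      # over the cap; the agent must wait

# Tiny demo: with a cap of 3 actions per 60 s, the fourth is blocked.
limiter = ActionRateLimiter(max_apm=3, window_seconds=60)
print([limiter.try_act(t) for t in (0, 1, 2, 3)])  # → [True, True, True, False]
```

A rolling window like this is stricter than a simple per-minute counter, since it prevents the agent from saving up actions and releasing them in a superhuman burst.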
Google hopes the advances made with AlphaStar will have applications far beyond RTS games. As the DeepMind team explains:
“At DeepMind, we’re interested in understanding the potential – and limitations – of open-ended learning, which enables us to develop robust and flexible agents that can cope with complex, real-world domains. Games like StarCraft are an excellent training ground to advance these approaches, as players must use limited information to make dynamic and difficult decisions that have ramifications on multiple levels and timescales.
“Open-ended learning systems that utilise learning-based agents and self-play have achieved impressive results in increasingly challenging domains. Thanks to advances in imitation learning, reinforcement learning, and the League, we were able to train AlphaStar Final, an agent that reached Grandmaster level at the full game of StarCraft II without any modifications, as shown in the above video. This agent played online anonymously, using the gaming platform Battle.net, and achieved a Grandmaster level using all three StarCraft II races. AlphaStar played using a camera interface, with similar information to what human players would have, and with restrictions on its action rate to make it comparable with human players. The interface and restrictions were approved by a professional player. Ultimately, these results provide strong evidence that general-purpose learning techniques can scale AI systems to work in complex, dynamic environments involving multiple actors. The techniques we used to develop AlphaStar will help further the safety and robustness of AI systems in general, and, we hope, may serve to advance our research in real-world domains.”