Google’s DeepMind is teaming up with the makers of the StarCraft video games to train its artificial intelligence systems. The AI systems “playing” the game will need to learn strategies similar to those humans use in the real world, DeepMind said.
Its ultimate aim is to develop artificial intelligence that can solve any problem. It has previously taught algorithms to play a range of Atari computer games. The StarCraft series was an early pioneer in e-sports, and StarCraft II still has many elite professional players.
StarCraft II, made by developer Blizzard, is a real-time strategy game in which players control one of three warring factions – the human Terrans, the insect-like Zerg, or the alien Protoss. An in-game economy governs players’ actions: minerals and gas must be gathered to produce new buildings and units.
Each player can see only the parts of the map within range of their own units and must send units to scout unexplored areas to gain information about their opponents. In a blog post, Oriol Vinyals, a research scientist at DeepMind, said: “DeepMind is on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be told how.” The game is also more complex than Go, for which DeepMind previously developed a winning algorithm.
“StarCraft is an interesting testing environment for current AI research because it provides a useful bridge to the messiness of the real world. The skills required for an agent to progress through the environment and play StarCraft well could ultimately transfer to real-world tasks.”
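To make the partial-observability point concrete, the sketch below shows one way a fog-of-war style observation could be modelled: the agent sees only the map cells within sight range of its own units, so scouting is the only way to learn what an opponent is doing. This is a minimal illustration under assumed map sizes and sight ranges, not DeepMind’s or Blizzard’s actual code.

```python
# Hypothetical sketch (not DeepMind's code): fog-of-war style partial
# observability. The agent observes only map cells within the sight range
# of its own units; everything else is hidden behind a sentinel value.
import numpy as np

MAP_SIZE = (64, 64)   # illustrative map dimensions (assumption)
SIGHT_RANGE = 6       # illustrative sight radius in cells (assumption)

def visible_mask(unit_positions, map_size=MAP_SIZE, sight=SIGHT_RANGE):
    """Return a boolean grid marking cells the player can currently see."""
    ys, xs = np.mgrid[0:map_size[0], 0:map_size[1]]
    mask = np.zeros(map_size, dtype=bool)
    for (uy, ux) in unit_positions:
        mask |= (ys - uy) ** 2 + (xs - ux) ** 2 <= sight ** 2
    return mask

def observe(true_map, unit_positions):
    """The agent's view: the true map where visible, -1 everywhere else."""
    mask = visible_mask(unit_positions)
    observation = np.full(true_map.shape, -1, dtype=true_map.dtype)
    observation[mask] = true_map[mask]
    return observation

# Example: sending a scout to a new area reveals more of the map.
true_map = np.random.randint(0, 4, size=MAP_SIZE)       # stand-in for terrain/units
before = observe(true_map, unit_positions=[(10, 10)])
after = observe(true_map, unit_positions=[(10, 10), (40, 50)])  # scout added
print((before >= 0).sum(), "cells visible before scouting,", (after >= 0).sum(), "after")
```

Unlike Go, where both players see the whole board, an agent here must act on an incomplete picture and decide when spending units on scouting is worth the information gained.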
Prof Yoshua Bengio, head of the Institute for Learning Algorithms at the University of Montreal, told the BBC: “It is a much more complex game than games previously studied by AI researchers, like the Atari games or even the game of Go.” DeepMind famously developed an algorithm that could play the complex game of Go and beat one of the world’s best players.
“Progress on this game could translate in many areas,” said Prof Bengio. “One of which I am particularly interested in is natural language dialogue.”
The game will be opened up to other AI researchers next year. DeepMind said it hoped the new environment would be “widely used to advance the state of the art” in the field of artificial intelligence.
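As an illustration of what such a research environment typically offers, the sketch below shows the standard agent-environment loop used in reinforcement learning, with a toy stand-in environment and a random baseline agent. The ToyEnv and RandomAgent names are illustrative assumptions; DeepMind’s actual interface had not been released at the time of writing.

```python
# Hypothetical sketch only: a generic agent-environment loop of the kind such
# research platforms typically expose, shown with a toy stand-in environment
# rather than DeepMind's or Blizzard's actual API.
import random

class ToyEnv:
    """Trivial stand-in environment: collect 'minerals' until a step limit."""
    def __init__(self, steps=20):
        self.steps, self.t = steps, 0

    def reset(self):
        self.t = 0
        return {"minerals": 0}                    # initial observation

    def legal_actions(self):
        return ["gather", "build", "scout"]

    def step(self, action):
        self.t += 1
        reward = 1.0 if action == "gather" else 0.0
        done = self.t >= self.steps
        return {"minerals": self.t}, reward, done

class RandomAgent:
    """Baseline agent: picks uniformly among the currently legal actions."""
    def act(self, observation, legal_actions):
        return random.choice(legal_actions)

def run_episode(env, agent):
    """Standard reinforcement-learning loop: observe, act, receive reward."""
    observation = env.reset()
    total, done = 0.0, False
    while not done:
        action = agent.act(observation, env.legal_actions())
        observation, reward, done = env.step(action)
        total += reward
    return total

print("Episode reward:", run_episode(ToyEnv(), RandomAgent()))
```

A learning agent would take the place of RandomAgent, using the stream of observations and rewards to improve its strategy over many games.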