Google’s DeepMind has created a new AI, called AlphaStar, which managed to beat two professional StarCraft II players in a multi-game series that was streamed on both YouTube and Twitch.
While pro player Grzegorz “MaNa” Komincz managed to claim one victory for us humans, AlphaStar beat its flesh-and-blood counterparts in no fewer than 10 games in a row.
The games were played last December at the DeepMind HQ in London, with AlphaStar beating pro player Dario “TLO” Wünsch during the first games, while the final match against MaNa was streamed live today.
For those of you who are not familiar with StarCraft II: the game places two players on opposite sides of the same map, where they build bases, train armies, and ultimately gather enough resources and armed forces to invade the enemy’s territory.
Apparently, AlphaStar proved very good at micro-managing its territory and controlling its troops rapidly and decisively on the virtual battlefield.
Of course, you might ask yourself: wouldn’t an AI beat a human through the sheer number of actions per minute (APM) it can average? Well, technically it could, but AlphaStar averaged around 280 actions every 60 seconds, which is well below what the pros usually rack up.
In addition to that, the AI recorded an average delay of 350 milliseconds between observation and action, which is also slower than its human counterparts.
Apparently, it was that micro-strategic decision-making that put AlphaStar ahead of the other players at the end of the day. Of course, gaming experts are already all over the sessions, which are being dissected as we speak.
While gaming enthusiasts are bound to talk about these matches for a while, DeepMind’s goal was not simply to see whether an AI could beat a human at a video game: the matches were a way to refine AI training methods, and the researchers used reinforcement learning to train AlphaStar.
This method first has the AI agents learn by imitating the way humans play; after that, they compete against one another in a league where the strongest survive and the weaker get discarded.
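To give a rough feel for that "strongest survive" idea, here is a deliberately simplified toy sketch of a self-play league. All names (`Agent`, `play_match`, `league_step`) and the single-number "skill" stand-in are my own illustrative assumptions; AlphaStar's real agents are large neural networks trained on the actual game, not anything like this.

```python
import random

random.seed(0)

class Agent:
    """Toy stand-in for a game-playing agent: its whole 'policy' is one
    skill number (a real agent would be a trained neural network)."""
    def __init__(self, skill):
        self.skill = skill

def play_match(a, b):
    """Toy match model: the higher-skilled agent wins more often."""
    p_a_wins = a.skill / (a.skill + b.skill)
    return a if random.random() < p_a_wins else b

def league_step(population):
    """One round of self-play: every agent plays a random opponent, and
    each slot is refilled with a slightly mutated copy of the winner."""
    new_pop = []
    for agent in population:
        opponent = random.choice(population)
        winner = play_match(agent, opponent)
        # Selection pressure: copy the winner, with a small random tweak.
        new_pop.append(Agent(max(0.1, winner.skill + random.gauss(0, 0.1))))
    return new_pop

# Start from agents of equal, mediocre skill (as if all had just
# finished imitating human play), then run the self-play league.
population = [Agent(1.0) for _ in range(20)]
for _ in range(100):
    population = league_step(population)

avg_skill = sum(a.skill for a in population) / len(population)
print(avg_skill)
```

Because winners are more likely to be the higher-skilled agent in each pairing, the average skill of the population drifts upward over the rounds, which is the basic intuition behind this kind of competitive training.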
This is still an upward learning curve for DeepMind, which wants to create an AI agent capable of performing the same mental tasks human beings can. And what better way than through strategy games, right?