Engineers keep coming up with inventive ways to speed up their neural networks, and one of the most creative involves… StarCraft II.
Training state-of-the-art neural networks with a decade-old strategy game may seem pointless, but it works better than you might imagine. According to a report published Thursday by MIT Technology Review, the technique used to make StarCraft II-playing AI smarter and nearly unbeatable can be carried over to broader neural network development.
As fans know, in StarCraft II you are responsible for controlling many individual units, each with a different set of skills. You are expected to manage your resources wisely while fighting an opponent for survival. For humans, this is doable, but machines need a bit of help.
To win at StarCraft II, Google’s DeepMind system used an algorithm called population-based training (PBT), which imitates natural selection: many networks train in parallel, and the weakest members are periodically replaced with mutated copies of the strongest, speeding up the learning process.
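To make the idea concrete, here is a minimal sketch of the PBT loop in Python. The `train_step` and `evaluate` functions are hypothetical placeholders standing in for real training and validation, and the learning rate is just one example of a hyperparameter PBT could mutate:

```python
import random

POPULATION_SIZE = 10
STEPS = 100
EXPLOIT_INTERVAL = 10  # how often weak members copy strong ones

def train_step(weights, hyperparams):
    """Placeholder: one optimization step; returns updated weights."""
    return weights  # stand-in for real gradient updates

def evaluate(weights):
    """Placeholder: returns a fitness score for this member."""
    return random.random()  # stand-in for a real validation metric

# Each member of the population carries its own weights and hyperparameters.
population = [
    {"weights": None, "hyperparams": {"lr": 10 ** random.uniform(-4, -1)}}
    for _ in range(POPULATION_SIZE)
]

for step in range(1, STEPS + 1):
    # Every member trains independently with its own hyperparameters.
    for member in population:
        member["weights"] = train_step(member["weights"], member["hyperparams"])

    if step % EXPLOIT_INTERVAL == 0:
        scored = sorted(population, key=lambda m: evaluate(m["weights"]))
        worst, best = scored[0], scored[-1]
        # Exploit: the weakest member copies the strongest one's state.
        worst["weights"] = best["weights"]
        worst["hyperparams"] = dict(best["hyperparams"])
        # Explore: mutate the copied hyperparameters to keep searching.
        worst["hyperparams"]["lr"] *= random.choice([0.8, 1.2])
```

The key design point is that hyperparameters are tuned during a single training run rather than across many separate runs, which is what makes the approach cheaper than a conventional grid or random search.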
“One of the key challenges for anyone doing machine learning in an industrial system is to be able to rebuild the system to take advantage of new code,” Matthieu Devin, director of machine learning infrastructure at Waymo, said in an interview with MIT Technology Review. “We need to constantly retrain the net and rewrite our code. And when you retrain, you may need to tweak your parameters.”
Now Waymo is training its machine-learning systems with this population-based approach to further advance its self-driving capabilities.
Yu-hsin (Joyce) Chen, a machine-learning infrastructure engineer at Waymo, says PBT has cut the computing power needed to retrain a neural net almost in half and has doubled or tripled the speed of the development process.