Object recognition, face detection and speech recognition are just a few of the applications where deep learning has played an important role. To get a machine to this point, learning complex representations of data, you need strong neural networks* and, with them, high-power GPUs (graphics processing units). MIT has developed a chip that could bring that capability to low-power devices such as smartphones.
Eyeriss has 168 cores yet consumes a tenth of the power of the GPUs ordinarily found in mobile phones. This means it could let you run AI algorithms locally, without relying on Internet servers: no more delays, no privacy concerns, and no Wi-Fi connection needed.
How was it done? Eyeriss was designed with efficiency in mind. It minimizes how often cores exchange data with memory and with one another across the chip. Long-distance traffic is largely out of the equation, since nearby cores can talk to each other directly. Moreover, each core receives exactly as much data as it can realistically process, so it does not have to go back and forth to fetch more.
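The scheduling idea above can be sketched in a few lines. This is a hedged illustration, not Eyeriss's actual architecture: the core count, buffer capacity, and the stand-in workload are all invented for the example. The point is simply that each core gets a chunk sized to its local buffer, works on it in place, and only small partial results travel between cores.

```python
# Illustrative sketch: give each core a chunk no larger than its local
# buffer, so it never has to re-fetch the same data. Parameters below
# are hypothetical, not Eyeriss's real figures.

LOCAL_CAPACITY = 8  # values each core can hold in its local buffer


def partition(data, capacity):
    """Split the workload into chunks each core can process in place."""
    return [data[i:i + capacity] for i in range(0, len(data), capacity)]


def process_locally(chunk):
    # Stand-in for one core's computation: it works entirely on its own
    # chunk, with no trips back to shared memory for more data.
    return sum(chunk)


data = list(range(32))
chunks = partition(data, LOCAL_CAPACITY)      # 4 chunks of 8 values
partials = [process_locally(c) for c in chunks]

# Neighbouring cores combine their small partial results directly,
# instead of routing all raw data through distant shared memory.
total = sum(partials)
print(total)  # prints 496
```

Only the four partial sums move between cores here, rather than all 32 input values, which is the kind of traffic reduction the paragraph describes.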
It sounds easy, but it was exactly the opposite: researchers had to build units that were extremely simple yet flexible enough to handle different types of networks and tasks.
If this technology is embedded in devices in the near future, Eyeriss could make a difference in the IoT field, as well as in robotics. With one of the senior researchers working for NVIDIA, this could happen sooner than you think.
* Neural networks are organized in layers, each with a number of processing nodes. When data comes in, each node receives a part of it and starts manipulating it. It then passes its results to the nodes in the next layer, which analyze them in turn and do the same, until the final layer comes up with the solution to the computational problem.
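The layered structure described in the footnote can be sketched as a few lines of plain Python. This is a toy illustration, not a trained model: the weights, the layer sizes, and the use of a ReLU nonlinearity are all assumptions made for the example. Each layer's nodes combine their inputs and hand the results to the next layer, exactly as described above.

```python
# Toy feed-forward network: each node computes a weighted sum of its
# inputs, applies a simple nonlinearity (ReLU), and passes the result on.
# All weights below are made up for illustration.

def relu(x):
    return x if x > 0 else 0.0


def layer(inputs, weights):
    # Each row of `weights` belongs to one node in this layer.
    return [relu(sum(w * v for w, v in zip(row, inputs))) for row in weights]


def forward(inputs, network):
    # Pass the data through each layer in turn; the final layer's output
    # is the network's answer.
    for weights in network:
        inputs = layer(inputs, weights)
    return inputs


# A two-layer network: 3 inputs -> 2 hidden nodes -> 1 output node.
network = [
    [[0.5, -0.2, 0.1],
     [0.3, 0.8, -0.5]],   # hidden layer: 2 nodes, 3 weights each
    [[1.0, -1.0]],        # output layer: 1 node, 2 weights
]

print(forward([1.0, 2.0, 3.0], network))
```

In a real deep-learning system the weights would be learned from data and the layers would be far larger; the flow of data from layer to layer, however, is the same.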