Google AI Created A Child AI That Outperforms Human-Made Systems

This spring, Google gave its AI the ability to train other neural networks. AutoML proved capable of creating and training its own AI, focused on a given task. Now it has shown that its program is not only powerful enough to do so, but that its creations can also make human-made computer systems pale by comparison #machinemagic

Google's AutoML has effectively outperformed every other computer vision system available today. It all started with the creation of a new child AI in charge of recognizing objects in a video, in real time. Called NASNet, the network was built, trained, and revised thousands of times under the evaluation of the controller neural network, AutoML, until the results were good enough to be tested against the ImageNet image classification and COCO object detection benchmarks. If you haven't heard of them, don't worry; most humans haven't. What you need to know is that Google researchers consider these two benchmarks "two of the most respected large-scale academic data sets in computer vision."
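The propose-train-evaluate loop described above can be sketched in a much simplified form: a controller repeatedly proposes a candidate child architecture, the candidate is scored, and the best one is kept. The sketch below is a toy stand-in, not Google's implementation: it uses random search instead of AutoML's reinforcement-learning controller, and `evaluate` is a hypothetical scoring function standing in for actually training the child network and measuring its validation accuracy.

```python
import random

# Toy search space: a "child architecture" is just a few hyperparameters.
SEARCH_SPACE = {
    "num_layers": [4, 8, 12],
    "filters": [32, 64, 128],
    "kernel_size": [3, 5],
}

def sample_architecture(rng):
    """Controller step: propose a candidate child architecture."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def evaluate(arch):
    """Hypothetical stand-in for training the child and measuring
    validation accuracy; here just a deterministic toy score."""
    return (arch["num_layers"] * arch["filters"]) / (100.0 * arch["kernel_size"])

def search(num_trials=1000, seed=0):
    """Repeat propose-and-evaluate until the budget runs out, keeping
    the best child (AutoML instead feeds scores back to an RL controller)."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture(rng)
        score = evaluate(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

if __name__ == "__main__":
    arch, score = search()
    print("best architecture:", arch, "score:", score)
```

The real system differs mainly in the proposal step: rather than sampling blindly, AutoML's controller learns from each child's score which architectural choices to favor next.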

This is how NASNet showed its superiority over existing computer vision systems. On the ImageNet benchmark, Google's child AI was 82.7% accurate at classifying images, 1.2% better than any previously published result. Google also reported that it is 4% more efficient than comparable systems.

Google has open-sourced NASNet, inviting developers to take a peek. Image recognition is one of the most important computer vision tasks right now. For self-driving technologies in particular, the accuracy and speed with which a system recognizes objects in its path could one day make the difference between life and death.

But what if AutoML becomes too fast and creates child AIs that humans can no longer control? That's a risk we'll have to take, it seems. If it makes you feel better, know that ethical standards for AI are in place, and DeepMind itself has created a group focused on the moral implications of AI. A disaster can always be avoided if we pay attention and take measures beforehand.
