There are more ways than one to help artificial intelligence understand our world and, although some methods have been successful, we’re still far from getting AI to interact with us in a more ‘human’ way.
But that won’t stop us from trying.
The Allen Institute for Artificial Intelligence in Seattle has created what it calls AllenAI, a system designed to play Iconary, a game much like Pictionary, alongside humans.
The AllenAI can guess what its human colleagues are drawing or even draw something itself and see if humans can guess what it is.
So far, the AllenAI has played over 100,000 rounds of Iconary and has managed to recognize over 1,200 distinct concepts. But what’s interesting about this particular AI is that it didn’t learn all of that by repetition, as many AI systems do – it learned it by collaborating with humans.
This means the AI has had to make connections in its own ‘mind’ much the way humans do through neural connections. The AllenAI has studied the patterns it encountered during its Iconary rounds and built datasets from them for guidance.
“You can’t simply use vanilla reinforcement learning to train a model that is required to communicate or collaborate with humans in a language they understand,” project lead Ani Kembhavi stated.
The team chose Pictionary as the style of game to train the AI on because they believe video games don’t reflect reality closely enough. Pictionary would, therefore, give the AllenAI the opportunity to learn common sense in a more realistic setting.
“One of the key distinctions of Pictionary and Iconary is that the knowledge that is used to be successful at these games is directly applicable to everyday AI agents that might help you in your home or at work,” Kembhavi said. “‘The rook moves two up and one to the left’ is not something you need to perform everyday tasks or to collaborate with some futuristic AI agent to complete tasks, but the knowledge that for dinner people usually eat this, or dinner is eaten in the evening, lunch is eaten midday, people usually eat three meals a day – these are all common sense facts that today’s AI agents aren’t very good at recognizing.”
The AllenAI team thinks that working with humans and not other AIs will eventually help the system understand and see the world in the same way we do.
In the future, Kembhavi hopes the AI will start developing a mind of its own, one good enough to hold its own in a Turing test.