Believe it or not, AI algorithms can now generate text eloquent enough to fool humans into believing it was written by a person. These algorithms have shown a lot of potential for creating fake news, fake reviews, and fake social media accounts. So how do we fight back? With another AI, of course.
As they say, it takes one to know one: researchers at Harvard University and the MIT-IBM Watson AI Lab have developed a new AI tool called the Giant Language Model Test Room (GLTR, for short) that can detect statistical patterns in a text and tell whether it was written by a machine or a human.
AI-generated text relies on statistical patterns rather than on the meaning behind words and sentences – if the wording is too predictable, it was most likely produced by a machine.
GLTR highlights each word in a text according to how predictable it is given the words that precede it – green marks the most predictable words, purple the least predictable ones.
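The idea can be sketched in a few lines of code. Note the assumptions: the real GLTR ranks each token with a large language model such as GPT-2, whereas this toy illustration substitutes a simple word-frequency model built from a reference corpus, and the bucket thresholds are only loosely inspired by GLTR's color scheme.

```python
# Toy sketch of GLTR-style highlighting. Assumption: the real tool ranks
# tokens with a neural language model (e.g. GPT-2); here we use a plain
# unigram frequency model for illustration only.
from collections import Counter

def rank_tokens(reference_text, sample_text):
    """Assign each token in sample_text its frequency rank in the reference corpus.

    Rank 0 = most common (most "predictable" under this toy model);
    unseen words get the worst possible rank.
    """
    counts = Counter(reference_text.lower().split())
    ranking = {word: i for i, (word, _) in enumerate(counts.most_common())}
    worst = len(ranking)  # words never seen in the reference are least predictable
    return [(tok, ranking.get(tok.lower(), worst)) for tok in sample_text.split()]

def bucket(rank):
    """Map a predictability rank to a GLTR-like highlight color."""
    if rank < 10:
        return "green"    # highly predictable
    if rank < 100:
        return "yellow"
    if rank < 1000:
        return "red"
    return "purple"       # least predictable

# A text whose words mostly fall in the "green" bucket would look suspiciously
# machine-like; human writing tends to produce more yellow/red/purple tokens.
```

A real implementation would replace the unigram counts with the probability each token receives from a language model conditioned on its full left context, which is what makes the signal strong enough to separate machine text from human text.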
GLTR was tested against text produced by OpenAI's algorithm and successfully flagged its wording as highly predictable, whereas genuine news articles written by people contained far more unpredictable words – and scientific abstracts even more so.
But that was not the only test GLTR was put through – the researchers also asked Harvard students to identify AI-generated text both with and without the tool. On their own, the students managed to spot only half of the fake texts, but they were far more successful with the tool's help, registering a 75% increase in accuracy.
“We apply the insights from the analysis to build GLTR, a tool that assists human readers and improves their ability to detect fake texts,” the report says. “GLTR aims to educate and raise awareness about generated text.”
If you want to try GLTR, you can do so for free via this link.