Just last week, we were telling you that a team of researchers from Harvard University and MIT-IBM Watson AI Lab came up with a system to detect AI-generated content.
The tool is called the Giant Language Model Test Room (GLTR) and, as you might guess, it fights fire with fire: it uses a language model to spot text written by language models, flagging words that look suspiciously predictable.
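Under the hood, GLTR asks a model like GPT-2 how predictable each word is given the words before it: machine-generated text tends to stick to the model's top-ranked choices, while human writing contains more surprising words. Here is a minimal sketch of that per-token ranking idea using the Hugging Face transformers library; the function name and example sentence are illustrative, not taken from GLTR's codebase.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load GPT-2, the same model family GLTR uses to score text.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text):
    """For each token, return its rank in the model's predicted
    distribution given the preceding context (rank 0 = most likely)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    ranks = []
    # The prediction at position i-1 scores the token at position i.
    for i in range(1, ids.shape[1]):
        scores = logits[0, i - 1]
        # Rank = number of vocabulary entries the model preferred
        # over the token that actually appears next.
        rank = int((scores > scores[ids[0, i]]).sum())
        ranks.append((tokenizer.decode(ids[0, i].item()), rank))
    return ranks

# Generated text clusters in the top ranks; human prose is "redder".
for tok, r in token_ranks("The quick brown fox jumps over the lazy dog."):
    print(f"{tok!r:>10}  rank={r}")
```

GLTR presents the same signal visually, coloring each word by how high it ranks in the model's predictions, so a reader can spot suspiciously "green" passages at a glance.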
Sebastian Gehrmann, one of the researchers behind GLTR, put the tool to the test with the help of a group of language-processing students. On their own, the students could tell real text from machine-generated text only 54% of the time, while GLTR reached an accuracy of 75%.
After this test, the researcher is convinced GLTR can do much more than just detect fake news. It can stop malicious actors in their tracks: entire websites, generated automatically from scratch, can be highlighted and exposed for what they are, sources of false information.
“Someone with enough computing power could automatically generate thousands of websites with real looking text about any topic of their choice. While we have not quite arrived at this point of the focused generation yet, large language models can already generate text that is indistinguishable from human-written text,” Gehrmann told CNET.
This is the researchers’ first concrete step toward curbing the spread of false information not just in compact tweets and posts, but across entire websites, the preferred vehicle of online propaganda.
“We hope that GLTR can inspire more research toward similar goals and that it successfully showed that these models are not entirely too dangerous if we can develop defense mechanisms,” Gehrmann added.