Thought the last election cycle was bad thanks to social-media fuelled disinformation? Now add ChatGPT and deepfakes to the mix.
The first major election cycle since the release of ChatGPT could be an even worse nightmare. Thought Donald Trump’s myriad misleading statements were enough to destabilize democracy? Now, thanks to ChatGPT, we could have chatbots pretending to be politicians. That’s just one way ChatGPT could be used to influence elections, and OpenAI has revealed how it’s preparing to limit the potential for ChatGPT abuse in the electoral process.
In a statement posted on its website, OpenAI emphasized that it’s working to implement better transparency around AI-generated content, making sure news sources are attributed properly.
Most importantly, in the statement OpenAI says it won’t allow chatbots to pretend they’re a person – which is the main concern most people will have when they consider how ChatGPT could be used in an election.
“People want to know and trust that they are interacting with a real person, business, or government. For that reason, we don’t allow builders to create chatbots that pretend to be real people (e.g., candidates) or institutions (e.g., local government),” wrote the company.
In December, we reported on a politician who used the Ashley cold-calling chatbot to engage with voters: Democrat Shamaine Daniels of Pennsylvania. In that case, the bot did inform voters that it was not a real person, so it followed OpenAI’s disclosure policy.
In the same statement, OpenAI also said it won’t allow “applications that deter people from participation in democratic processes,” giving the example of voter suppression tactics like misinforming people about when, where, or who can vote, or telling them that voting doesn’t matter.
2024 is a major election year not just for the US but for the world. It’s one of the most significant election years in history, with more than 50 countries around the world voting for new leadership.