Countries around the world are preparing for elections in 2024, and OpenAI has laid out its plan to deal with misinformation. The plan focuses mainly on promoting transparency around the source of information. The company says its teams are working to prevent misuse, improve access to accurate voting information, and provide transparency about AI-generated content.
“We have a cross-functional effort dedicated to election work, bringing together expertise from our safety systems, legal, engineering, threat intelligence, and policy teams to quickly investigate and address potential abuse,” OpenAI said in a blog post.
“Before releasing any new system, we red-team it, engage users and partners for feedback, and take safety measures to reduce the possibility of harm,” OpenAI said.
On transparency around AI-generated content, the company said it is working toward this goal through several initiatives.
It will implement the digital credentials of the Coalition for Content Provenance and Authenticity (C2PA), introduced earlier this year, for images generated by DALL·E 3.
OpenAI is also experimenting with a provenance classifier, a new tool for detecting images generated by DALL·E.
A presidential election will be held in the United States later this year. The creator of ChatGPT said it is working with the National Association of Secretaries of State (NASS), the nation’s oldest nonpartisan professional organization for public officials.
When ChatGPT users ask certain procedural election-related questions about US voting, such as where to vote, they will be directed to the official website CanIVote.org, the company said.