Meta is bracing for one of the most pivotal elections in US history, one that could make or break its standing in the industry. The company was accused of allowing its platform to be exploited to spread election-related misinformation in the run-up to the 2016 election, and it wants to avoid a repeat ahead of the 2024 presidential election, where artificial intelligence could play a crucial role.
Detecting AI Content on Facebook
The communication services company has confirmed that it is bolstering its efforts to identify images generated or doctored with AI. The push is part of a plan to curb the misinformation and deepfakes that various AI tools could produce ahead of the election. Consequently, Meta has started working on tools to identify, at scale, AI-generated content shared on its platforms.
The new tool will not only identify and label AI-generated images created with Meta's own AI tools, but it will also seek to identify content produced with tools from Google, OpenAI, Microsoft, and Adobe, among others. The company intends to label AI-generated images and content in all languages as one way of curbing misinformation on Facebook, Instagram, and Threads.
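Cross-company identification of this kind generally relies on provenance markers that image generators embed in their output, along the lines of the C2PA and IPTC metadata standards. The sketch below is a minimal illustration of that idea, not Meta's actual pipeline: it simply scans a file's embedded XMP metadata for the IPTC "trainedAlgorithmicMedia" digital source type, and the file names in the usage stub are hypothetical.

# Minimal illustration (not Meta's pipeline): scan an image file's embedded
# XMP packet for the IPTC "trainedAlgorithmicMedia" digital source type,
# one of the provenance markers a generator can write into its output.

import re
from pathlib import Path

# IPTC DigitalSourceType term used to mark fully AI-generated media.
AI_SOURCE_TYPE = b"trainedAlgorithmicMedia"

# XMP packets are plain text wrapped in <?xpacket ... ?> markers inside the file.
XMP_PACKET = re.compile(rb"<\?xpacket begin=.*?<\?xpacket end=.*?\?>", re.DOTALL)


def looks_ai_generated(path: str) -> bool:
    """Return True if the file's XMP metadata declares an AI digital source type."""
    data = Path(path).read_bytes()
    match = XMP_PACKET.search(data)
    if match is None:
        return False  # no XMP metadata at all; nothing to go on
    return AI_SOURCE_TYPE in match.group(0)


if __name__ == "__main__":
    for name in ["generated.jpg", "photo.jpg"]:  # hypothetical file names
        print(name, "->", "label as AI" if looks_ai_generated(name) else "no marker found")

Metadata like this is easy to strip, for example by taking a screenshot or re-exporting the image, which is why it is typically paired with the invisible watermarks discussed in the next section.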
Meta will start labeling AI-generated images and content from external sources in the coming months. The added time is needed as the company works with other AI companies to develop tools that can reliably identify AI-generated content and images.
While tools that detect AI-generated content already exist, most of them have been criticized as biased and ineffective. For instance, some have been found to be biased against non-native English speakers. In addition, AI-generated video and audio are challenging to detect, a problem Meta will have to address.
Curbing Misinformation
Bias is the last thing Meta would wish to find itself embroiled in ahead of the election, as it could amount to election interference. Consequently, Meta plans to minimize bias and uncertainty by working with other AI companies that apply invisible watermarks to the AI-generated images produced on their platforms. While there are ways of removing such watermarks, Meta plans to develop ways of addressing the issue.
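To make the watermark discussion concrete, the deliberately naive sketch below hides a bit pattern in the least significant bits of pixel values and then shows how easily a lossy re-encode destroys it. Everything here (the bit pattern, the LSB trick, the stand-in image) is hypothetical and is not any company's actual watermarking scheme; production systems use far more robust methods precisely because simple marks like this are so easy to remove.

# Toy illustration of an "invisible" watermark (not any vendor's real scheme):
# hide a bit pattern in the least significant bits of pixel values, then show
# that even mild re-quantization wipes it out.

import numpy as np

PATTERN = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical tag bits


def embed(pixels: np.ndarray) -> np.ndarray:
    """Write the tag bits into the LSBs of the first len(PATTERN) pixels."""
    marked = pixels.copy()
    flat = marked.reshape(-1)
    flat[: PATTERN.size] = (flat[: PATTERN.size] & 0xFE) | PATTERN
    return marked


def detect(pixels: np.ndarray) -> bool:
    """Check whether the tag bits are present in the LSBs."""
    flat = pixels.reshape(-1)
    return bool(np.array_equal(flat[: PATTERN.size] & 1, PATTERN))


if __name__ == "__main__":
    image = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)  # stand-in image
    marked = embed(image)
    print("watermark detected:", detect(marked))       # True

    # Simulate a lossy re-encode with coarse quantization; the LSBs are lost.
    recompressed = ((marked.astype(np.int32) // 4) * 4).astype(np.uint8)
    print("after re-encoding:", detect(recompressed))  # almost certainly False

The fragility shown in the last two lines is exactly the problem the watermarking partners have to solve: a mark that survives cropping, compression, and screenshots is much harder to build, and much harder for bad actors to strip.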
In addition, Meta intends to add a way for users to disclose whenever they upload AI-generated content, and the company may apply penalties to those who share deepfakes or other AI-generated content without disclosing it.
With a user base of billions of people, Facebook has been exploited over the years to propagate misinformation, which has landed Meta in trouble with lawmakers. In the aftermath of the 2016 election, the company faced accusations that foreign actors, led by Russia, had created and spread misinformation to interfere with the election.
Four years later, Facebook was once again at the center of attention as the platform was exploited to spread misinformation about the COVID-19 pandemic. Holocaust deniers and QAnon conspiracy theorists have also been rampant on the site.