Tech Companies Take Steps To Protect Voters From AI-Generated Misinformation

On November 8, Meta announced that it will require political ads that have been digitally altered, using AI or other technology, to be labeled.


'Time' reports that Meta's announcement comes one day after Microsoft revealed steps it will take to protect elections.

Microsoft announced tools to add watermarks to AI-generated content and a "Campaign Success Team" that will offer campaigns advice on AI and security.

The advent of generative AI, which allows users to create text, audio and video content, comes ahead of a busy global election year.


2024 will see major elections decided in the United States, India, the United Kingdom, Mexico, Taiwan and Indonesia.


According to a November poll, 58% of adults in the U.S. are concerned that AI could be used to spread false information in the upcoming election.

Elizabeth Seger, a researcher at the Center for the Governance of AI, warns that AI could be used to conduct mass persuasion campaigns.


Seger also warns that just knowing deepfakes exist could erode people's trust in information sources.

"A risk that is often overlooked, that is much more likely to take place this election cycle, isn't that generative AI will be used to produce deepfakes that trick people into thinking candidate so-and-so did some terrible thing, but that the very existence of these technologies is used to undermine the value of evidence and undermine trust in key information streams," Elizabeth Seger, researcher at the Center for the Governance of AI, via 'Time'.