Has Elon Musk Gone Too Far? The Truth About Deepfakes
Virginia Allen / Kristen Eichamer / Crystal Bonham
After entrepreneur Elon Musk, a leading voice on AI technology and development, shared a deepfake campaign ad for Kamala Harris, California Gov. Gavin Newsom, a Democrat, signed bills to crack down on the use of such technology in election campaigns.
“Safeguarding the integrity of elections is essential to democracy, and it’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation—especially in today’s fraught political climate,” Newsom said in a public statement announcing the actions.
California is Harris’ home state, where she was attorney general and U.S. senator before being elected vice president as President Joe Biden’s running mate in 2020.
One bill “requires large online platforms to remove or label deceptive and digitally altered or created content related to elections during specified periods,” according to the California governor’s website. It also requires digital platforms to “provide mechanisms to report such content.”
Deepfakes, images, video, or audio generated with artificial intelligence (AI) to look real when they aren’t, have become much easier to create with recent advances in the technology.
The satirical ad Musk shared features Harris’ voice and begins: “Kamala Harris, your Democrat candidate for president, because Joe Biden finally exposed his senility at the debate. Thanks, Joe.”
Although the Harris ad was clearly satirical, many deepfakes are difficult to identify, raising questions about what kind of laws should be passed to protect individuals’ images, voices, and likenesses.
On this week’s edition of “Problematic Women,” Heritage Foundation tech policy expert Kara Frederick joins the show to discuss the fine line between limiting the dangers of AI and protecting free speech.
Watch the show above or listen using the link below.