Media platforms seek to combat AI misinformation with new labeling policies
The protocols from TikTok, Adobe and Google arrive in the midst of a US presidential campaign cycle that has already been made more complicated by generative AI.
Google has announced a new policy requiring the disclosure of some digitally modified content in election ads / Adobe Stock
Since the public release of ChatGPT almost a year ago, generative AI has become something of a cultural obsession.
Tech giants including Microsoft and Amazon are pouring billions into AI startups that are developing powerful new generative AI models, while brands from Coca-Cola to Volkswagen have begun to incorporate the technology into their marketing. This year’s Cannes Lions Festival, one of the ad industry’s biggest annual events, was buzzing with excited chatter about this new technological and creative frontier.
But as excitement about generative AI has grown, so too have anxieties. In addition to raising concerns around intellectual property and copyright law, generative AI also threatens to further muddy an already troubled information ecosystem. For the first time, the world is reckoning with a technology that, in the not-so-distant future, could enable just about anyone to create an image or video clip depicting someone doing something that person never actually did.
The same goes for audio clips; last week, Spotify teased a new feature that can automatically translate a podcaster’s voice into multiple languages from just a few seconds of audio. It isn’t too much of a stretch to imagine a similar technology being used by a bad actor to, say, disseminate an audio clip of a public figure saying something unsavory or incriminating.
Fears around deepfakes – AI-generated digital simulations of an individual’s likeness or voice – have recently reached a new high. A number of prominent celebrities, including actor Tom Hanks and YouTuber MrBeast, have taken to social media to declare that deepfaked versions of themselves were created without their consent in order to promote a service (“some dental plan” in Hanks’s case, as he described it in an Instagram post).
Though some early governmental efforts are being made to control the development and deployment of AI, the technology remains, at the time of this writing, unregulated – creating a kind of wild-west atmosphere in which both its most positive and most negative attributes are quickly coming into focus.
Generative AI in political campaign ads
The danger posed by deepfakes – and the need for a counter-technology capable of distinguishing the real from the fake – is made all the more acute by the fact that the US is in the midst of a presidential election campaign cycle, when the risks of any kind of misinformation are heightened.
In April, following President Biden’s announcement that he’d seek re-election in 2024, the Republican National Committee (RNC) released an ad that used generative AI to visualize its idea of what another four years under Biden might look like.
The video spot opens with digital renderings of Biden and vice-president Kamala Harris celebrating another election victory, followed by images of boarded-up banks, the Rio Grande choked with migrants trying to cross the US/Mexico border, the National Guard standing in the streets of San Francisco, an MS-13 gang member and other images clearly meant to evoke fear in the hearts of many in the Republican base.
In response, congresswoman Yvette Clarke (D-NY) introduced a bill aimed at requiring the disclosure of any AI-generated images or video used in political campaign ads.
Some political marketers have also been warily paying attention to these developments. “We’re keeping track of new generative AI content in our industry,” says Valentina Perez, senior vice-president of GMMB, an ad agency based in Washington, DC that specializes in political campaigns for Democratic candidates.
“There’s a big responsibility for advertisers in the political space because we’re not just communicating about a product; we’re communicating with people about candidates, elected leaders and big ideas that have an impact on people’s lives in a different way than a consumer product,” she says. “[We have to] think about what we’re telling voters and if we’re telling them something that’s true or not true.”
Cameron Kerry, an attorney, AI expert and visiting fellow at the Brookings Institution’s Center for Technology Innovation, says that it’s highly probable that AI-generated content will continue to appear in political ads throughout the campaign season, particularly from “PACs [political action committees] and dark money groups that aren’t accountable.”
Labeling AI-generated content
A few major tech platforms have taken some initial steps towards ensuring that AI-generated content shared on their platforms will be clearly labeled as such.
TikTok recently unveiled a new policy requiring creators to disclose AI-generated content included in posts. The platform also warned that any posts that include unlabeled AI-generated content could be removed. “Digital forgeries (synthetic media or manipulated media) that mislead users by distorting the truth of events and cause significant harm to the subject of the video, other persons or society” are also prohibited by TikTok, according to the company’s community guidelines.
Adobe has also announced that the public release of its new Firefly generative AI model would come equipped with “content credentials,” a labeling system for AI-generated assets created on the platform. The company described this disclosure system in a company blog post as “verifiable details that serve as a digital ‘nutrition label’” that “can show information including an asset’s name, creation date, tools used for creation and any edits made.”
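Adobe has not published a code-level schema in the blog post quoted above, but the general shape of a machine-readable “nutrition label” can be sketched in a few lines. The field names below are illustrative assumptions mirroring the categories Adobe describes – an asset’s name, creation date, tools used and edits made – not the actual Content Credentials format.

```python
import json
from datetime import datetime, timezone

def make_content_label(asset_name, tool, edits):
    """Build an illustrative provenance record for a generated asset.

    Field names are hypothetical: they follow the categories Adobe
    describes (name, creation date, tools, edits), not its real schema.
    """
    return {
        "asset_name": asset_name,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "generated_by": tool,
        "edits": list(edits),
    }

# Example: label an AI-generated image and its subsequent manual edits
label = make_content_label(
    "campaign_hero.png",
    "text-to-image model",
    ["crop", "color balance"],
)
print(json.dumps(label, indent=2))
```

In practice, a system like this attaches the record to the asset itself (for instance, as signed metadata), so that downstream platforms can surface the label to viewers rather than relying on creators to disclose manually.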
Aiming to curb misinformation in the political arena, Google announced last month that verified election ads – those run by or on behalf of individuals seeking elected office – will need to carry a prominent disclosure if they include AI-generated content or other forms of digital modification.
Slated to go into effect next month, a year before the 2024 election, Google’s new policy builds on a body of similar efforts geared towards transparency in political advertising, according to a statement from a Google spokesperson provided to The Drum. “Given the growing prevalence of tools that produce synthetic content, we’re expanding our policies a step further to require advertisers to disclose when their election ads include material that has been digitally altered or generated,” the spokesperson wrote. “This [new] update builds on our existing transparency efforts – it’ll help further support responsible political advertising and provide voters with the information they need to make informed decisions.”
The advent of AI-generated content, according to Kerry, will pose a significant risk to the information ecosystem in the US, which is already plagued with mistrust. “Misleading advertising has been around since there have been campaigns,” he says, “but today’s AI enables persuasive deceptions on a massive scale.”
Kerry says that, ultimately, any effort to label AI-generated content and thereby mitigate the risks of deepfakes must be enforced by both government and private companies. “The FEC can develop labeling requirements for political ads, the FTC can act against deceptive uses in the consumer context and Congress can adopt additional rules, but these [policies] will have to be carried out by the private sector that can block or take down offending content.”