The Creator Economy Is Facing A Perfect Storm Of AI-Generated Content And Piracy
Brandon Clement is not afraid of a little headwind. The Emmy Award-winning videographer has been posting some of the most compelling footage of extreme weather events from around the world for over a decade. His YouTube channel, WX Chasing, has over 77,000 subscribers, his videos have racked up more than 100 million views, and his work is often featured in mainstream news reports and features.
But there’s one system that Clement has been tracking that has him more than a little concerned: a perfect storm of greed, technology and indifference that threatens his livelihood and that of nearly every creator hoping to monetize their work on platforms like YouTube, Instagram, X and Facebook.
“It’s destroying my business, it’s putting so much stress and anxiety into my head I can’t sleep, I can’t stop thinking about it,” Clement said in a phone interview earlier this month.
The scourge that has the tornado chaser in a twist? Shadowy operations that have been pirating copyrighted footage and repackaging it into clickbait on social media platforms, running under hundreds of cutout accounts in dozens of languages, using the power of generative AI at a scale that threatens to overwhelm human-generated content.
“I’ve had certain pieces of video stolen by more than 60,000 pages on Facebook,” he said. “Some of these pages have millions of subscribers and followers, some have zero. But when you start dividing your views up by 60,000, you just can’t make money. You can’t grow an audience, and your content is ruined by overexposure.”
Science YouTuber Kyle Hill shares Clement’s concerns about the threat. Though he has so far benefited from his audience’s ability to “tell the difference between high-quality content and auto-generated blah-blah,” he is extremely worried that the bad actors are gaining ground, and that the problem extends far beyond the niche of news, science and documentary content. Yesterday’s flood of deepfake images of Taylor Swift on X (formerly Twitter) was a more spectacular and horrific example of the same toxic combination: generative AI tools in the hands of unscrupulous operators hijacking the scale and reach of social platforms for their own gain.
“The core of the issue is that these scammers can rapidly generate and steal content,” he explained. “YouTube [and other platforms] will give them ad revenue until the owner or other party claims that stolen content. And so by making literally dozens of channels uploading new videos every few hours, these actors can consistently make enough money to continue their operation before any single creator (like me) has the time to track down and claim it.”
Hill and Clement have called out specific examples of YouTube channels trafficking in AI-generated fake-science content in several videos describing the problem.
Copyright infringement has been a problem on content platforms since the day the first one launched, but the advent of generative AI for text, voice, imagery and video has turbocharged the ability of thieves to blast out hundreds or thousands of videos under AI-generated headlines and thumbnails engineered to garner views, often containing false, misleading or outright incoherent content generated automatically from Wikipedia articles or random web scrapes.
“What I worry about in the short-term is simply being drowned out by nonsense,” said Hill. “There isn’t enough time in the day to sort the good from the bad, especially when you just want something quick to watch on your lunch break. It’s an old disinformation tactic. You don’t have to lie; you just have to pollute the well enough that everyone stops caring.”
Jevin West is an associate professor at the Information School at the University of Washington and cofounder of the Center for an Informed Public, which studies how false and harmful ideas spread and get amplified in the digital universe. “The real question is, will users want a real human behind that content? And the data is inconclusive on that,” he said, noting that current data do not yet suggest an uptick in the amount of misinformation spreading since generative AI tools became mainstream in 2022-23. “The danger is that when there are information vacuums, such as during natural disasters or elections, opportunists sweep in. My speculation is that it will get worse.”
Not all AI-generated content is stolen, phony or problematic. In some cases, human creators are using the tools to enhance their own content or bring higher production values. “If these videos were using AI tools to create something never seen before, something that enhances human creativity or allows someone like me to do something groundbreaking, they should be allowed to compete,” explained Hill. “But that’s not what these [pirate] videos are. They are text-to-speech Wikipedia entries over stolen footage from everywhere from Netflix to the Weather Channel to yours truly.”
In response to these issues, YouTube recently updated its terms around “responsible AI innovation” by giving users more notice about content that contains AI-generated elements. “Specifically, we’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools,” wrote YouTube executives Jennifer Flannery O’Connor and Emily Moxley in a blog post from November 2023. “When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material. For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do.”
There is also a provision in the new policy that provides for takedowns of content that fails to meet standards of decency, or violates the privacy of individuals by representing them without permission. However, the post specifies that “Not all content will be removed from YouTube, and we’ll consider a variety of factors when evaluating these requests.”
For creators like Brandon Clement, victimized by the systematic hijacking of their content, that’s not good enough. “Most of these platforms have tools that allow the recognition of content and give creators the right to act on it, but they only allow access to major production houses and major music labels,” he said. Meanwhile, for ordinary creators, filing takedown requests against bad actors is so time-consuming, cumbersome and often inconclusive that offenders can make the bulk of their money and delete or privatize their videos before the theft shows up on the platform’s radar. As we saw yesterday with the Taylor Swift fakes, bad content can spread fast, while countermeasures take time, even when the victim is one of the biggest and most influential celebrities in the world.
“They’re allowed to delete evidence as the investigation is taking place,” he explained. “There’s no punishment. They get rewarded because they are able to escape punishment. It would be very easy for YouTube to prevent any action on a video as soon as a DMCA [Digital Millennium Copyright Act] request is filed. If they wanted to stop it, they could.”
Clement further observed that the platforms profit from engagement and clicks, regardless of where they come from, and may have incentives to allow more AI-generated, algorithmically optimized, synthetic content if it outperforms the metrics of human-created work. YouTube and its parent company Google declined to make a spokesperson available to address these concerns.
In the face of indifference from the platforms, Clement has taken matters into his own hands, organizing a company called ViralDRM to advocate for creators in legal and procedural actions, and filing DMCA takedowns against perpetrators, including, recently, the Indian news networks News Nation, TV9 Bharatvarsh and Zee News.
Despite these efforts, the legal and regulatory systems are blunt instruments to use against such nimble and fast-moving technology, especially in the absence of international consensus on how to deal with the problem.
“It’s not an easy solution for even a company that has more cash than some countries,” said West. “What they can do, first of all, is push efforts like watermarking anything that’s synthetically created. That might only constitute 80% of the content, so there’s going to be a significant portion that will not abide by those norms.”
Kyle Hill believes there may be some alternatives available to creators and conscious consumers. “Most larger creators that I know have ways to support them directly, and that can be a huge help. Patreon, YouTube memberships, merchandise, etc. But while there have been some upsides to alternative media and revenue streams outside of the ad-driven model, I still worry that the fracturing of our informational ecosystem will do more harm than good. More spam, more lies, more mistrust, more extremism, more divisiveness.”