When folk musician Murphy Campbell checked her Spotify profile in January 2026, she found songs she had never uploaded. They were her own recordings — or at least very close to them. Someone had taken audio recordings from her YouTube channel, created AI-generated vocal copies, and distributed them to the streaming platform without her consent, according to The Verge.

Campbell's case is not an isolated exception. It is symptomatic of a deeper systemic flaw at the intersection of AI technology, digital distribution, and copyright.

How the scam works

The method described in Campbell's case is technically simple: an actor downloads publicly available audio recordings, uses AI tools to manipulate or clone the vocals, and then uses a music distribution service to get the tracks onto Spotify under the artist's name, or one very close to it.

This is possible partly because large music distributors process enormous volumes with little upfront vetting. According to industry sources, platforms receive tens of thousands of new tracks daily, making manual review practically impossible.


Platforms struggle to keep up

Streaming platforms are aware of the problem, but the solutions are uneven and largely reactive.

Spotify's approach is symptomatic of the industry's dilemma: the platform does not prohibit AI-generated music in general, but it targets misuse such as voice cloning and fraudulent tactics. The problem is that this line is difficult to enforce in real time.

According to Deezer, 39 percent of all daily music uploads to the platform in January 2026 were fully AI-generated.

A copyright system that isn't keeping up

The case also reveals a fundamental weakness in today's copyright infrastructure. According to industry information, two separate rights arise from a recording: one for the composition itself (melody and lyrics) and one for the sound recording. For independent artists like Campbell, who manage both of these themselves, there are few mechanisms that proactively notify them when someone exploits their material.

Distributors like DistroKid offer tools such as "DistroLock", a service where artists can register unpublished audio files as unique fingerprints to block unauthorized releases. But for material already publicly available on YouTube, as in Campbell's case, the protection is much weaker.
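The core idea behind audio fingerprinting can be illustrated with a few lines of code. The sketch below is not DistroLock's actual algorithm (which is proprietary), only a minimal illustration of the general technique: reduce a recording to the dominant frequency of each short window, then hash that sequence so identical audio always yields the same compact signature.

```python
# Minimal audio-fingerprint sketch: hash the dominant frequency bin of
# each window. Illustrative only -- NOT the algorithm DistroLock uses.
import hashlib
import numpy as np

def fingerprint(samples: np.ndarray, window: int = 4096) -> str:
    """Return a hex digest derived from per-window spectral peaks."""
    peaks = []
    for start in range(0, len(samples) - window, window):
        chunk = samples[start:start + window]
        # Windowed FFT; keep only the index of the strongest frequency.
        spectrum = np.abs(np.fft.rfft(chunk * np.hanning(window)))
        peaks.append(int(np.argmax(spectrum)))
    return hashlib.sha256(str(peaks).encode("utf-8")).hexdigest()

# Identical recordings produce identical fingerprints...
t = np.linspace(0, 1, 44_100, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)          # 1 second of A4
assert fingerprint(tone) == fingerprint(tone.copy())

# ...while a different recording produces a different one.
other = np.sin(2 * np.pi * 523 * t)         # 1 second of C5
assert fingerprint(tone) != fingerprint(other)
```

Real systems are far more robust: hashing the whole peak list, as here, only matches bit-identical audio, whereas production fingerprinters use landmark-based hashes that survive re-encoding, noise, and pitch shifts. That robustness gap is exactly why AI-altered copies of existing recordings can slip past naive matching.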

Organizations like ASCAP and BMI administer rights to musical compositions, and databases like MusicBrainz are used to track origin — but these systems were designed for an analog reality, not for a world where AI can clone a voice in minutes.

Independent artists are most vulnerable

Major record labels have resources for continuous monitoring and legal departments. Murphy Campbell does not. For independent artists, discovery largely depends on chance — such as checking one's own Spotify profile on a random January day.

Deezer is now licensing its AI detection technology to other platforms and organizations, which could represent a step in the right direction. But as long as the distribution system does not require proactive confirmation from the actual rights holder before a track is published, cases like Campbell's will continue to emerge.

The Verge, which reported on the case, describes it as an example of a double system failure: AI dramatically lowers the threshold for misuse, while an outdated copyright regime is not equipped to protect the artists it was meant to safeguard.