In the rapidly evolving world of music streaming, a disturbing trend has emerged: artificial intelligence is being used to create fake albums that impersonate real artists, flooding platforms like Spotify and Apple Music with unauthorized content. According to a recent report from the BBC, established musicians, often those who aren't global superstars, are finding bogus tracks and entire albums appearing under their names without consent. This isn't just a nuisance; it's a direct financial threat, as these AI-generated impostors siphon streams and royalties from legitimate work.
The issue gained prominence when folk singer Emily Portman discovered an album titled “The Lonesome Road” attributed to her on multiple services, despite her having no involvement. Portman described it as “AI slop” in the BBC piece, highlighting how fraudsters use generative tools to mimic voices and styles, sometimes even resurrecting deceased artists like Frank Sinatra or Kurt Cobain with fabricated “new” material. Industry insiders note that platforms’ algorithms often fail to distinguish these fakes, allowing them to blend seamlessly into official catalogs.
The Mechanics of AI Impersonation
Behind this surge lies accessible AI technology, such as models from startups like Suno and Udio, which can replicate vocal timbres and musical patterns with eerie accuracy. A Guardian investigation revealed how an entirely AI-fabricated band, The Velvet Sundown, amassed over a million streams on Spotify before admitting its synthetic origins, raising alarms about transparency. Music executives argue that without proper safeguards, these tools erode the value of human creativity.
Legal repercussions are mounting. Federal prosecutors in North Carolina recently charged a man with using AI to generate fake songs and fraudulently collect royalties from services including Spotify and Amazon Music, as detailed in a New York Times article. The scheme involved uploading thousands of tracks under pseudonyms and inflating their play counts, reportedly netting millions in illicit royalties. Meanwhile, major labels like Universal Music Group have urged platforms to block AI scraping of copyrighted material, per posts on X (formerly Twitter) from industry figures echoing concerns about voice theft.
Platform Responses and Industry Backlash
Streaming giants are scrambling to respond. Deezer announced in June 2025 that it would flag AI-generated albums to combat fraud, according to an Associated Press report, aiming to alert users and protect artists. Spotify, facing criticism for hosting tracks impersonating dead musicians, has removed some content and says it is continuing to improve its detection systems, as noted in recent X discussions among artists and tech watchers.
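Neither Deezer nor Spotify has disclosed how its detection actually works, but the basic shape of such a flagging system is easy to sketch. The Python below is purely illustrative: the Upload fields, the thresholds, and the ai_classifier_score (standing in for some audio-analysis model) are all assumptions, not any platform's real pipeline.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    artist_name: str
    uploader_verified: bool     # came through the artist's own distributor account?
    releases_past_week: int     # recent release volume from this uploader
    ai_classifier_score: float  # 0.0-1.0 output of an assumed synthetic-audio model

def flag_for_review(u: Upload) -> bool:
    """Hold an upload for human review if it trips simple heuristics.

    Thresholds are invented for illustration; a real system would
    tune them against labeled fraud data.
    """
    if not u.uploader_verified and u.releases_past_week > 10:
        return True  # burst of releases from an unverified account
    if u.ai_classifier_score > 0.9:
        return True  # audio model is confident the track is synthetic
    return False

# An unverified uploader pushing 14 releases in a week gets flagged.
print(flag_for_review(Upload("Some Artist", False, 14, 0.3)))  # True
```

In practice the hard part is the classifier itself; metadata heuristics like the release-burst check are cheap but easy for fraudsters to route around.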
Artists like Selena Gomez and Drake have publicly decried AI mimicry, with Billboard compiling instances where stars voiced fears of identity erosion. The controversy extends to ethical debates: is this innovation or infringement? An ABC News piece from 2023 foreshadowed these alarms, pointing to viral fakes mimicking popular voices without permission.
Future Implications for Music Royalties
The financial stakes are high. AI tracks dilute royalty pools in which each stream pays only fractions of a cent, so fraudulent volume compounds quickly. An analysis from Mind Matters estimates that fake bands could divert millions from real creators annually. Regulators are eyeing stricter rules, with Tennessee lawmakers pushing bills against AI voice theft, as highlighted in X posts about recent legislative efforts.
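A back-of-the-envelope calculation shows how those fractions of a cent compound. Every number below is an assumption for illustration (a per-stream rate of $0.004 is a commonly cited ballpark; actual payouts vary by platform and contract):

```python
# Illustrative royalty math; all inputs are assumptions, not reported figures.
PER_STREAM_RATE = 0.004          # dollars per stream (ballpark)
FAKE_TRACKS = 10_000             # tracks uploaded by one fraud operation
STREAMS_PER_TRACK_PER_DAY = 50   # modest bot-driven play counts

daily = FAKE_TRACKS * STREAMS_PER_TRACK_PER_DAY * PER_STREAM_RATE
print(f"Diverted per day:  ${daily:,.2f}")        # $2,000.00
print(f"Diverted per year: ${daily * 365:,.2f}")  # $730,000.00
```

At this scale a single operation approaches a million dollars a year, siphoned from a pool that would otherwise pay human artists.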
For industry insiders, this signals a pivotal shift. Without robust verification, streaming could become a Wild West of synthetic sounds, undermining trust. As one music executive told the BBC, it’s “the start of something pretty dystopian.” Platforms must invest in AI detection, perhaps integrating blockchain for authenticity, while artists advocate for clearer laws. The battle lines are drawn, with creativity itself hanging in the balance.
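The blockchain idea could take the form of a provenance registry: an artist or label fingerprints each master recording and publishes the fingerprint, so platforms can check whether an upload matches a registered original. A minimal sketch, assuming a plain dictionary stands in for the ledger and a file hash stands in for a real audio fingerprint:

```python
import hashlib

registry: dict[str, str] = {}  # fingerprint -> registered artist (toy ledger)

def fingerprint(path: str) -> str:
    """SHA-256 of the raw file. A production system would use a
    perceptual audio fingerprint that survives re-encoding."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def register(artist: str, path: str) -> None:
    registry[fingerprint(path)] = artist

def verify(claimed_artist: str, path: str) -> bool:
    return registry.get(fingerprint(path)) == claimed_artist
```

The obvious weakness is that a byte-level hash breaks the moment a track is re-encoded, which is why real matching systems rely on perceptual fingerprints; and a ledger can only settle who registered a recording first, not whether the audio behind it is human-made.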