Spotify is under growing pressure after investigators found fake podcasts promoting the illegal sale of prescription drugs. These podcasts, disguised as ordinary health or ADHD content, linked users directly to shady online pharmacies. The scam shows how online platforms can be misused to spread harmful and unlawful material.
Recent searches on Spotify for common prescription drugs such as Adderall and Xanax revealed a troubling pattern. Instead of legitimate educational or medical content, the results included dozens of podcasts advertising unregulated drug sales. The titles and descriptions were blunt, such as “My Adderall Store” and “Order Xanax Online Big Deal On Christmas Season”, and often included links to external websites promising “no prescription needed.”
Hiding in Plain Sight: How the Scam Works
Investigators uncovered a wide network of deceptive podcasts selling drugs such as Oxycodone, Methadone, and Ambien. The audio content was usually short and robotic, often generated with text-to-speech software and dressed up as health tips. But the real goal was to push listeners toward unlawful drug sites.
One podcast named “Xtrapharma.com” promoted prescription drugs including Vicodin and Ativan, claiming “FDA-approved delivery without prescription.” Another show called “John Elizabeth” had dozens of episodes linking to the same shady pharmacy, offering “hassle-free” orders.
Spotify’s free publishing tools let anyone upload a podcast, which makes it easy for bad actors to post and distribute misleading content. Although the platform has rules in place, enforcement appears to lag far behind the volume of new uploads.
Spotify Responds After Investigation
Once Spotify was notified of 26 fake podcasts, the company removed them quickly. A Spotify spokesperson confirmed the removals and pointed to existing platform policies that ban illegal, harmful, or spammy content.
However, new shows with the same tactics continued to appear shortly after takedowns. Spotify says it is working to improve its detection systems and employs both algorithmic and human moderation.
“We’re committed to keeping the platform safe and free from harmful content,” the company stated in response to the findings.
Teen Safety and Platform Responsibility
The discovery has alarmed parents, digital safety groups, and tech experts. Many are calling for stronger oversight of platforms where teens are active. Recent overdose cases involving fake pills bought online have highlighted the deadly risks of unregulated drug access.
“There are no consequences for platforms hosting this kind of material,” said a director from a nonprofit focused on tech transparency. “Even when warned, these sites can take weeks or months to act.”
The concern is not new. In 2011, U.S. regulators fined a major tech company $500 million for allowing illegal pharmacy ads. Since then, the FDA and DEA have repeatedly warned tech firms to stop enabling online drug sales.
Automation and AI Fuel the Problem
The spammy podcasts are largely automated. Most use computer-generated voices, generic scripts, and fake branding to appear credible. Some contain only a single short episode, with links to the pharmacy websites in the description.
These AI-driven uploads are fast and cheap to produce, making it easy for scammers to flood the system. Platforms must now fight large volumes of low-effort, high-risk content.
Searches for drug names like Valium, Codeine, and Percocet also returned suspicious podcast results. None had user ratings or clear signs of audience engagement, making it hard to measure their real-world impact.
Spotify’s Safety Record in the Spotlight
Spotify has faced criticism before over content moderation, especially around health misinformation. In 2022, several artists removed their music after a top podcast spread false COVID-19 claims. The backlash led Spotify to introduce new safety features, including health advisories and the creation of a Safety Advisory Council.
The company also bought Kinzen, a startup focused on detecting harmful content with machine learning. But the rise of fake podcasts promoting drug sales shows more action is needed.
“Wherever users can post content, scammers follow,” said a digital child protection expert. “The real issue is how fast companies can respond when lives are on the line.”
Spotify’s current content policies ban illegal product promotion, hate speech, and explicit material. The platform says it will continue to invest in tools to detect abuse. But critics want more transparency, faster action, and stronger safeguards for young listeners.
With automation making scams easier to produce, tech firms must adapt quickly. If not, they risk becoming unwitting channels for dangerous, illegal content.