ElevenLabs Just Made an Album With Liza Minnelli and Art Garfunkel — And It's Changing Everything About AI Music
A year ago, the mere mention of AI-generated music was enough to spark outrage. Artists called it theft. Labels filed lawsuits. Streaming platforms scrambled to remove synthetic tracks. The narrative was simple, binary, and emotionally satisfying: AI was the enemy, and human creativity was under siege. But history rarely moves in straight lines. Today, that battle line has blurred, and a surprising new chapter is being written not in courtrooms, but in recording studios. ElevenLabs, the company best known for its hyper-realistic voice cloning technology, has just released a full-length album co-created with some of the most iconic names in music — including Art Garfunkel (of the legendary duo Simon & Garfunkel) and Liza Minnelli, the EGOT-winning actress and singer whose career spans Broadway, film, and pop stardom.

The album is not a gimmick. It is not a cheap cash grab. And it is certainly not the dystopian future that critics warned us about. Instead, ElevenLabs calls it something almost idealistic: a proof-of-concept for human-AI cooperation done right. The 13-track project, which spans spoken word, Brazilian funk, rap, and EDM, represents a radical departure from the zero-sum thinking that has dominated the AI music debate. And it arrives at a moment when the industry's biggest players — Universal Music Group, Warner Music Group, and Sony Music — have quietly pivoted from suing AI companies to signing licensing deals with them. Something has shifted. This is the story of how, and why.

The Album: 13 Tracks, Four Genres, One Radical Idea
The project, whose working title is "Collaborators" (though the final title remains in flux), features a diverse roster of human artists working alongside ElevenLabs' generative music model, Eleven Music. The 13 tracks are deliberately eclectic, showcasing the breadth of what human-AI collaboration can sound like when the humans are in control and the AI is a tool, not a replacement.

Art Garfunkel, whose harmonies defined a generation, contributes a haunting spoken-word piece layered over an AI-generated ambient soundscape. The track, titled "Echoes of a Bridge", features Garfunkel's unmistakable voice reflecting on time, memory, and the strange beauty of singing alongside a machine that learned to mimic the very concept of harmony. In promotional materials, Garfunkel described the experience as "unsettling at first, then surprisingly intimate. The AI doesn't just copy. It listens. Or at least, it pretends to. And that pretense becomes real enough to make something new."

Liza Minnelli, who has won an Emmy, an Oscar, a Tony, and a Grammy Legend Award (making her a noncompetitive EGOT), brings her legendary theatrical energy to a Brazilian funk track called "Carioca After Midnight." The song blends Minnelli's original vocal recordings (licensed through ElevenLabs' marketplace) with AI-generated percussion, brass stabs, and a bassline that seems to bend and warp in impossible ways. The result is simultaneously familiar and alien — a Broadway legend dancing with a digital ghost.

Other tracks feature lesser-known but equally talented musicians. One song, "Neural Heartbreak", is a pure EDM banger composed entirely by Eleven Music with no human vocalist — just synthetic beats, synthesized melodies, and a surprisingly emotional chord progression. Another, "Rap God 2.0", uses a cloned voice of a retired hip-hop artist (licensed with full consent) to deliver verses the artist no longer has the breath to perform live. The artist, who asked to remain anonymous, told ElevenLabs: "I can't tour anymore. My lungs are shot. But my voice? It's still here. And now it can say things I never thought to write."

Ownership and Revenue: A Deal That Changes the Conversation
Perhaps the most important detail of the ElevenLabs album is not the music itself, but the business model behind it. In an industry famously hostile to artists' rights, ElevenLabs has made a radical promise: each artist retains complete ownership of their contributions and keeps 100% of all streaming revenue generated by the 13-track project. ElevenLabs takes nothing. No advance to recoup. No percentage. No hidden fees.

This is almost unheard of. Major label deals typically give artists 10–20% of streaming revenue after recouping advances, marketing costs, and production expenses. Independent distributors take 15–30%. Even "artist-friendly" platforms like Bandcamp take 15% on digital sales. ElevenLabs is taking zero.
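To make those splits concrete, here is a quick back-of-the-envelope comparison. The per-stream rate and the specific deal percentages in the snippet are hypothetical round numbers chosen for illustration, not quoted figures from any contract:

```python
# Back-of-the-envelope payout comparison for 1,000,000 streams.
# The $0.004 per-stream rate is a hypothetical round number, and each
# deal's artist share is an illustrative value within the ranges above.
STREAMS = 1_000_000
PER_STREAM = 0.004
gross = STREAMS * PER_STREAM  # $4,000 gross streaming revenue

deals = {
    "Major label (artist keeps 15%)": 0.15,
    "Independent distributor (artist keeps 80%)": 0.80,
    "Bandcamp-style (artist keeps 85%)": 0.85,
    "ElevenLabs album deal (artist keeps 100%)": 1.00,
}

for name, artist_share in deals.items():
    print(f"{name}: artist earns ${gross * artist_share:,.2f}")
```

Even at these modest assumed rates, the gap is stark: the same million streams pay the artist $600 under a typical label split and the full $4,000 under the ElevenLabs arrangement.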

Why would a company give away an album for free? Because ElevenLabs is not a record label. It is an AI infrastructure company. The album is a loss leader — a demonstration of what becomes possible when musicians stop fighting the technology and start using it. Every stream, every interview, every article about the album serves as free advertising for ElevenLabs' music generation tools, voice cloning marketplace, and licensing platform. The company is betting that musicians will see the album not as a threat, but as a template. And if that bet pays off, ElevenLabs becomes the go-to provider for AI music services, earning far more from subscriptions and licensing fees than it ever could from streaming royalties.

For the artists, the deal is equally smart. They take no financial risk. They gain exposure to new audiences (Brazilian funk fans discovering Liza Minnelli? It could happen). And they position themselves as pioneers rather than victims. As Art Garfunkel reportedly told a friend: "I spent fifty years fighting with record labels about pennies. Now a bunch of AI engineers are offering me dollars and full ownership. The math is not complicated."

The Technical Mix: Cloned Voices, AI Instrumentals, and Human Direction
The album is not a single method applied uniformly. ElevenLabs and the artists used three distinct modes of collaboration, depending on the track and the artist's comfort level.

1. Fully AI-Generated Compositions
Some tracks, particularly the instrumental ones, were generated entirely by Eleven Music with no human input beyond a text prompt. For example, the EDM track "Neural Heartbreak" was created by feeding the model a simple instruction: "Produce a progressive house track in the key of C minor, 128 BPM, with a melancholy breakdown at two minutes and a euphoric drop at three minutes. Reference early Deadmau5 but add modern sound design." The model output a fully mixed, mastered track in under ninety seconds. A human engineer made minor adjustments to the EQ and dynamics, but the composition, arrangement, and sound selection were entirely AI.
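ElevenLabs has not published the exact interface used for the album, so the sketch below only illustrates the general shape of a prompt-to-track request. The model identifier, field names, and endpoint contract are all assumptions for illustration, not ElevenLabs' documented API:

```python
import json

# Hypothetical request builder for a text-to-music API.
# The model name, field names, and "prompt in, finished track out"
# contract are illustrative assumptions, not a documented interface.
def build_music_request(prompt: str, key: str, bpm: int, duration_s: int) -> dict:
    return {
        "model": "eleven-music",       # assumed model identifier
        "prompt": prompt,
        "key": key,
        "bpm": bpm,
        "duration_seconds": duration_s,
        "output_format": "wav",
    }

payload = build_music_request(
    prompt=("Progressive house with a melancholy breakdown at 2:00 and a "
            "euphoric drop at 3:00; reference early Deadmau5, modern sound design"),
    key="C minor",
    bpm=128,
    duration_s=240,
)
print(json.dumps(payload, indent=2))
# An actual client would POST this payload to the provider's generation
# endpoint and poll until the rendered, mixed audio is ready to download.
```

The point of the sketch is the workflow, not the schema: a single structured prompt stands in for what used to be a session musician, an arranger, and a mix engineer.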

2. Cloned Voices from ElevenLabs' Licensing Marketplace
For artists who could not or did not want to record new vocals, ElevenLabs used its voice cloning marketplace — a library of hundreds of licensed voices that artists have voluntarily submitted in exchange for a share of future revenue. Liza Minnelli's voice on "Carioca After Midnight" was not newly recorded. Instead, Minnelli provided ElevenLabs with a dataset of her past vocal performances (clearing all rights), and the AI generated new melodic lines in her voice. Minnelli then approved or rejected each line, requesting adjustments to phrasing, vibrato, and emotional tone. The process took three days. A traditional studio session with a full band would have taken weeks.
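The line-by-line approve-or-reject loop described above can be sketched as a simple review workflow. Every class, status, and note below is hypothetical; this is not ElevenLabs' actual tooling, just an illustration of the consent-centered process:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the per-line approval workflow described above.
# Names and statuses are illustrative, not ElevenLabs' real tooling.
@dataclass
class GeneratedLine:
    text: str
    status: str = "pending"            # pending -> approved / rejected
    notes: list = field(default_factory=list)

def review(line: GeneratedLine, approve: bool, note: str = "") -> GeneratedLine:
    """Record the artist's verdict on one AI-generated vocal line."""
    line.status = "approved" if approve else "rejected"
    if note:
        line.notes.append(note)        # e.g. "soften the vibrato on the last word"
    return line

lines = [GeneratedLine("Line 1 (verse)"), GeneratedLine("Line 2 (chorus)")]
review(lines[0], approve=False, note="warmer tone, more vibrato")
review(lines[1], approve=True)
print([(l.text, l.status) for l in lines])
```

Rejected lines would simply be regenerated with the artist's notes folded into the next attempt, which is why the whole process could fit into three days rather than weeks.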

3. AI Instrumentals with Human Vocals
The most common mode on the album was a hybrid: AI-generated instrumentals paired with newly recorded human vocals. Art Garfunkel's spoken-word piece followed this model. Garfunkel wrote and recorded his own words in a home studio. Those vocals were then sent to Eleven Music, which generated a bespoke instrumental backdrop that responded dynamically to the rhythm and emotion of Garfunkel's delivery. The AI listened (algorithmically speaking) and adapted. Garfunkel described the result as "like having a producer who never sleeps, never argues, and never asks for a writing credit."
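ElevenLabs has not described how the model "listens" to a vocal, but one plausible ingredient is an intensity envelope: a per-window loudness curve extracted from the recording, which a generator could use to make the instrumental swell and recede with the performance. The pure-Python RMS sketch below is an assumption about that analysis, not the real system:

```python
import math

# Minimal sketch (assumptions throughout): derive a per-window "intensity"
# envelope from a vocal track via root-mean-square energy. A generator
# could condition its dynamics on this curve; the real analysis is not public.
def intensity_envelope(samples, sample_rate, window_s=1.0):
    """RMS energy of each window_s-second window of the signal."""
    win = int(sample_rate * window_s)
    return [
        math.sqrt(sum(x * x for x in samples[i:i + win]) / len(samples[i:i + win]))
        for i in range(0, len(samples), win)
    ]

# Fake 3-second "vocal" at a toy sample rate: quiet, loud, quiet.
sr = 100
vocal = [0.1] * sr + [0.9] * sr + [0.1] * sr
env = intensity_envelope(vocal, sr)
print(env)  # roughly [0.1, 0.9, 0.1]
```

A quiet, reflective passage would thus get a sparse backdrop and a forceful one a denser arrangement, which matches Garfunkel's description of an accompanist that adapts rather than competes.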

The Industry Pivot: From Lawsuits to Licensing
To understand why the ElevenLabs album matters, you have to understand how quickly the music industry has changed its mind. Just one year ago, the major labels were waging war against AI music companies. Universal Music Group (UMG) successfully lobbied Spotify and Apple Music to remove tens of thousands of AI-generated tracks. Warner Music sent cease-and-desist letters to developers of voice cloning tools. Sony Music filed lawsuits against several startups, alleging copyright infringement on an industrial scale. The rhetoric was fierce. UMG's CEO called generative AI "a threat to the very concept of human artistry."

Then, quietly, something shifted. The lawsuits did not disappear, but they were joined by something new: negotiations. In the past twelve months, UMG, Warner, and Sony have all signed licensing agreements with various AI music companies. The terms vary, but the structure is consistent: labels grant AI companies access to their catalogs for training purposes, and in exchange, they receive equity, upfront payments, or a share of future revenue. The enemy became a partner.

The ElevenLabs album is the most visible fruit of this new détente. While ElevenLabs has not disclosed which labels, if any, participated in the project, the involvement of artists like Art Garfunkel and Liza Minnelli — both of whom have complex relationships with major labels — suggests that the old guard has given its blessing, or at least its tacit approval.

Why the sudden change? Money, of course, but also pragmatism. The labels have realized that AI music is not going away. Takedown requests are whack-a-mole. Lawsuits take years. Meanwhile, AI-generated songs are climbing the charts independently. A track called "Heart on My Sleeve" — which used cloned voices of Drake and The Weeknd without permission — went viral last year before being removed. The fact that it was taken down did not matter. It had already been heard by millions. The genie was out of the bottle.

By signing licensing deals, labels gain control. They can shape how AI music develops, ensure their artists are compensated, and potentially earn more from AI-generated content than they ever did from traditional streams. It is not surrender. It is adaptation.

The Shifting Sentiment: From Outrage to Acceptance
Perhaps the most fascinating change is not in boardrooms, but in the court of public opinion. A year ago, any mention of AI music on social media was met with fury. Artists like Nick Cave called AI "a grotesque mockery." Sting warned that "the building blocks of music are under threat." Fans boycotted platforms that hosted AI-generated content.

Today, that opposition has not disappeared, but it has become a vocal minority. Polling data from industry trade groups shows that while 40% of musicians still view AI as a threat, 35% now see it as a tool, and 25% are undecided. Among fans under thirty, acceptance is even higher. A recent survey found that 58% of listeners aged 18–24 have knowingly listened to AI-generated music and enjoyed it. The stigma is fading.

Why? Because the technology has improved. Early AI music was robotic, glitchy, and obviously fake. It sounded like a bad karaoke version of a real song. Modern models, including Eleven Music, produce tracks that are often indistinguishable from human compositions — especially in genres like EDM, lo-fi hip hop, and ambient, where repetition and texture matter more than melodic invention. As the quality has risen, the fear has fallen.

And then there is the celebrity factor. When artists like Liza Minnelli and Art Garfunkel — beloved, respected, undeniably "real" musicians — choose to collaborate with AI, it becomes much harder to frame the technology as an existential threat. These are not desperate unknowns selling their voices for a quick paycheck. They are legends who could retire tomorrow and live comfortably. They are doing this because they find it interesting, because they want to reach new audiences, and because they believe — genuinely, it seems — that AI can be a creative partner rather than a replacement.

What Comes Next: AI Musicians at the Top of the Charts
The ElevenLabs album is a landmark, but it is not an isolated experiment. Across the industry, AI-generated music is already climbing the charts. In Sweden, a track produced entirely by an AI model called Aura reached number twelve on the official singles chart before anyone realized it was synthetic. In Brazil, a funk song featuring a cloned voice of a deceased legend (licensed from his estate) has been streamed over fifty million times. In the United States, Billboard has announced it will now allow AI-generated tracks to chart, provided they disclose their synthetic origins.

The technology is here to stay. That much is now undeniable. The only remaining question is how it will be used — as a weapon to replace human artists, or as a tool to empower them. The ElevenLabs album suggests that the latter is possible, even if not guaranteed. By giving artists full ownership, complete creative control, and all the revenue, ElevenLabs has offered a blueprint for human-AI cooperation that does not feel like exploitation. Whether the rest of the industry follows that blueprint, or something uglier, will determine the future of music.

For now, though, we have thirteen tracks. Spoken word. Brazilian funk. Rap. EDM. Art Garfunkel. Liza Minnelli. And an AI that learned to listen. Put on your headphones. Press play. And decide for yourself whether the future sounds like a threat — or a collaboration.

EngineAi is your one-stop shop for automation insights and news on artificial intelligence.