A little-known artist named Xania Monet signing a reportedly $3 million record deal with Hallwood Media sounds like a nice musical dream, but there’s a twist at the end of the tale.
Xania isn’t a traditional singer or producer. She’s the brainchild of Telisha “Nikki” Jones, who used AI, specifically Suno, to turn her poetry into music. That album you heard climbing up the gospel and R&B digital charts is largely AI-composed.
Jones isn’t some Silicon Valley bro with a laptop full of prompt hacks. She’s a poet, someone who’s been writing since childhood. She grew up singing in church. The lyrics behind the Xania Monet project are apparently 90 percent hers and rooted in real stories. But the production, the voice, and the polish are courtesy of the same Suno that major record labels are currently suing for copyright infringement.
So now there’s a song called “Let Go, Let God” charting on Billboard’s Hot Gospel Songs and another one, “How Was I Supposed to Know,” sitting at number one on R&B Digital Song Sales. Xania Monet is technically topping charts with nearly ten million streams in the U.S. alone. But she also doesn’t exist.
It’s an impressive feat when AI tools like Suno take a paragraph of lyrics and generate a whole song around them in seconds. Some of those songs sound passably good, or even great, if perhaps a bit familiar. But does the success of Xania and other artificial artists come at a bigger cost?
This isn’t a screed against AI as a tool. Musicians have always found ways to incorporate new tools into their work, even in the face of controversy. Electric guitars, synthesizers, and autotune have all faced, or continue to face, skepticism even after becoming mainstays.
However, what’s happening here is arguably different. The question isn’t just whether AI is assisting in the creative process; it’s whether AI is doing so much that the result ceases to be human music in anything but a superficial sense.
The sound of the future
To the average listener, this might not seem like a big deal. After all, if the song sounds good, who cares how it was made? There’s a certain pragmatism in that attitude, especially in an age where digital manipulation is standard. But I’d argue it’s one thing to polish a performance in post-production. It’s another to create an entire identity around a machine-generated sound and expect people to connect with it emotionally.
The prepackaged personas of most popular artists already trade on an illusion of intimacy, but at least there is a human being at the core of it all. Telisha Jones deserves credit for turning her poetry into something commercially viable, and it’s not as though her background is fake.
Still, the label’s willingness to pay millions for an entirely fake character with AI-generated music is not about the poetry. Telisha Jones isn’t being sold as a brilliant poet who used AI to bring her vision to life. She’s being sold as Xania Monet, an R&B powerhouse. Hallwood wants to be an early investor in AI music. This isn’t even their first AI signing.
AI-generated music is spreading, and its reception has been uneven. At least this case doesn’t involve an artist’s name and voice being attached to music they never performed or approved, or that was produced long after they had passed away.
But if labels begin prioritizing AI-assisted acts over flesh-and-blood musicians, we could see an erosion of opportunities for emerging artists. Touring bands, background vocalists, studio musicians, and all the people who traditionally help make an album risk being sidelined in favor of cheaper, faster, algorithm-friendly alternatives.
Then there’s the live performance question. What does a Xania Monet concert even look like? Will it be Telisha Jones lip-syncing to an AI’s voice? Will it be a hologram playing the same track but with different AI-generated dance moves? A song can live online forever, but a performance lives or dies in the moment. If the live act doesn’t match the voice fans fell in love with, will they still care?
I do think AI-generated music has a place on the charts. Used effectively, Suno and other tools can expand musical possibilities, help people overcome technical barriers, and democratize access to high-end production. But we should be honest about how it differs from human composition.
And once enough AI artists break through, labels will follow the money. They’ll push more acts like Xania, refine the models, and streamline the process. Before long, the charts could be filled with songs that sound eerily similar, not because artists are chasing a trend but because the songs are all coming from the same neural network.
I hope Xania Monet’s album sparks more discussion about AI music and how the technology fits into creative spaces. But it will take real thought, lest kids grow up assuming that all good music is technically perfect, emotionally vacant, and made with little or no human involvement. That would be a loss too big for even the best remix to fix.