Thursday, January 29, 2026
AI: Friend or Foe?
Like it or not, AI is here to stay. Our writers take a look at the ethical and legal issues the technology raises, as well as how musicians can use it as a source of creativity. Is it all doom and gloom?
Camilo Lara in his CDMX studio
Respecting Legacies
For producer Camilo Lara (aka Mexican Institute of Sound), artificial intelligence signals a troubling moment for music, with vocal cloning representing “a lack of respect on so many levels”, hears Charis McGowan
There’s an entire playlist on YouTube of Mexican-American tejano singer Selena Quintanilla covering pop numbers: ‘Hit Me Baby One More Time’, ‘My Heart Will Go On’, ‘All I Want For Christmas’, ‘La Isla Bonita’, and many more. They typically feature generic banda-style brass bursts over a karaoke backing track, with a muffled-sounding Selena singing in Spanish.
Of course, Selena never actually sang these songs – they were released following her tragic death (she was fatally shot by Yolanda Saldívar, the founder of her fan club, in 1995). The YouTube covers were made with generative AI software, which clones Selena’s voice from original recordings, allowing it to be moulded into different melodies.
While some fans may view such vocal imitations as touching tributes to the late star, Mexican producer Camilo Lara sees these faked covers – which exist online for virtually any major pop star – as a troubling development. “It’s a lack of respect on so many levels – not just copyright. It’s completely out of the creative perspective of an artist,” he says, with concern. “The more [AI] evolves, the less the artist’s authorisation is relevant in the conversation.”
Lara and I are talking in his Roma Norte studio in Mexico City, where he recently scored two Netflix documentaries on titanic figures in Mexican music: one on Selena (Selena y los Dinos) and the other on ranchera-pop icon Juan Gabriel (I Must, I Can, I Will). Both projects delve into the posthumous legacies of beloved artists whose deaths have left fans grieving to this day. Juan Gabriel – whose decades-spanning career began in the 1970s – passed away in 2016 aged 66, while Selena was only 23 when she was murdered.
Lara approached the projects by listening closely to the source material, gathering a palette of shades that felt appropriate to each artist. “The most important thing to do in those projects is not to mimic – it’s never going to be as good as the original. If you try to emulate [exactly] what Juan Gabriel or Selena did, it’s disrespectful.”
A passionate supporter of human creativity and interpretation, Lara has a strong opinion on generative technology. He shudders at the question of whether he would ever use AI to engage with artists posthumously. “AI songs, holograms, videos. It’s the Madame Tussauds of music,” he says. “All I know is that the dead artists in most cases would not like to be simplified by AI based on the cliché of their public persona.”
'Como Te Quiero Yo A Ti' from Selena's Moonchild Mixes album, featuring Selena's AI-augmented voice
Aside from the proliferation of illegitimate AI tracks uploaded to YouTube, there are also instances of AI being used through official channels. In 2022, Selena’s family released an album, Moonchild Mixes, based on songs she had recorded as a child, now manipulated with AI to make her sound older. At the time, Selena’s father explained that the album was for the fans: “They haven’t let go of her. They’re waiting for a project like this to come out, and I know it will be well received by the public,” he reasoned.
While the record caused controversy among listeners at the time, the album has since racked up millions of streams on Spotify. The divided reaction is reminiscent of when, in 2023, a Brazilian Volkswagen commercial resurrected late Brazilian singer Elis Regina using AI, showing Elis – who died in 1982 aged 36 – driving a car and duetting with Maria Rita, her real (not AI-made) daughter. Some Brazilians, including Brazil’s first lady, Rosângela Lula da Silva, were moved to tears. Others found Regina’s digital immortality troubling and unethical.
Brazilian Volkswagen commercial featuring Maria Rita and her mother, 'Elis Regina'
Lara maintains these officially approved AI releases are a ‘grey’ area, but questions whether it should be done just because it can be done: “Sure, [the family] own the rights and permissions, but is that going to add to the legacy of the artist? It’s junk food. It’s not a rich process made from the creator’s mind. As a promotional tool, and from a fan perspective, I understand it – it’s amusing – but from an artist’s point of view, it’s completely horrifying.”
For Lara, the year ahead signifies a testing time for morality, ethics, politics and world order – with AI playing an integral role in how things will play out.
“[2026] is going to be the most important year [regarding] changing culture. It’s going to be very challenging. The world is splitting, and Latin America is a laboratory for that,” he says, prophetically. We are speaking only days before Trump launched military action in Venezuela, resulting in the capture of its president, Nicolás Maduro, before firing threats towards Cuba, Colombia and Lara’s home country, Mexico. “[It’s] going to be the year you define which side of history you want to be on,” he adds.
For Lara, AI and the policies of the global right are naturally interlinked. AI represents the hegemony of power, controlled by elites and private entities, whereas analogue – the stacks of synthesisers that line his studio walls, for example – is on the side of the arts, the people, the human-made.
“There are two levels of conception: the ultra-pasteurised, full of pesticides, hormones, that’s the AI. Then there’s a minority that will be natural, human-made – not only music, but human thought, philosophy, painting, places where we can still be ourselves. We have to bet on that.”
+ Selena y Los Dinos and Juan Gabriel: I Must, I Can, I Will are both available to stream on Netflix
Stills from Ben Potton's 'Folklore' film
Folk without Folks
Russ Slater Johnson discovers an AI-augmented fracture in the folk community
In December 2024, Benjamin Potton posted a video to the Facebook group ‘How many Morris Dancers are on Facebook?’ with the message: “I love our quirky English folk traditions. How many can you recognise in this archive footage?” The video featured would-be morris dancers in technicolour sequins holding golden spheres, a Jim Henson-esque 10ft beast, and a giant hay bunny rabbit running down a cobbled street on fire, while morris bells jangled amid an eerie, droning soundtrack. It conjured up a near-future England full of customs that somehow seem familiar but are also very different: bolder, shinier, more colourful and, undoubtedly, more unhinged than usual.

The reaction to the video was overwhelmingly positive, but there was one dissenting comment: “Why bother creating shitty AI ‘art’ or whatever you want to call it when there are SO MANY people making fantastic costumes (yes, even sparkly ones) and performances, continuing old traditions, and inventing new ones? Go and film some of them and promote actual real artists and practitioners.” After various users pointed out that Potton is one of the most active documenters of folk traditions in the group, the same user went on to write: “This is still an unethical practice that is killing the careers of actual artists and generally making the world a worse place”.
Ben Potton's Folklore short film
The creator of the video, Ben Potton, is a fiddle player and guitarist who records and performs with Selam Adamu as Ben & Sel – regulars at Sidmouth Folk Festival. Handily, he’s also a computer games developer. Many of the AI-generated videos I’ve seen have been quite horrific, with distorted faces, limbs disappearing mid-frame and just plain odd transitions from moment to moment. Ben’s, on the other hand, appeared to have been made with a lot of love for the subject, and a lot of time and thought. I wasn’t wrong. “I was creating images for it for months,” says Ben, “and then the actual film took weeks to put together.” First, he created images using AI image generation; then he would pick an image as a starting frame and ask a program called Kling to begin turning it into a video. “You could say, ‘I want this to go in that direction’, and you’d have to make 10 versions of that, and then you’d find one version that kind of works”. He created the whole 2:31 video in that way. It sounds like a laborious process that goes against the idea of AI saving time. “I’m a photographer and I do a lot of videos for festivals,” says Ben, “and it was harder and longer putting the AI video together than a video for a real festival.”
So, why did he make it? “When I was growing up I had a fascination with the Fortean Times or there’d be a slightly weird, nostalgic BBC programme about folklore, and it had this warm creepiness about it. I moved into thinking about imagined, possibly lost, folklore, that we might see today, but that’s not quite right. I had a big folder full of all these weird images, and then I thought, it’d be great to make them into a 1970s film of folklore. The Whittlesea Straw Bear was a big influence, the fun of dancing with the bear one day and then burning it on the Sunday.”
Ben’s coy on whether he’d do another video, stressing that he’s waiting for the right idea. When I ask him what else AI could be used for in the world of folk, he definitely sees potential. “The thing with folk is we think we’re going back to the source quite often. I’m a Cotswold morris dancer and I know people go back to the ‘Black Book’ [Lionel Bacon’s A Handbook of Morris Dances, the quintessential 1974 guide to Cotswold morris] and then you go, ‘OK, well, that’s just one person collecting something at a certain tiny sliver of time and then interpreting it a certain way when they read it 100 years later and assuming that’s how it’s going to have to be’. Whereas the thing that AI could do is take you out of that fixed mindset and give you a different vision on some things.”
These are the creative sides of AI, labelled as augmented creativity in some quarters. Ben singles out Petr Válek, a Czech musician, artist and instrument builder, as someone else doing interesting work in this vein – Válek’s ‘traditional Czech Christmas’ and ‘noisy children’ series of images are definitely worth checking out (you can find them on Válek’s Facebook page). However, the reason Ben’s video received such criticism in the first place was due to other concerns around AI. This was magnified when the visual/performance artist Lucy Wright, a friend of Ben’s, posted the same video on her Instagram. The reaction was so vociferous, clearly touching a nerve among fellow artists, that she took down the video and wrote a response.
In that post, she wrote: “I’m simultaneously seduced by some of the incredible graphics [that AI] can produce, and afraid of how easily it can be misused by those who seek to mislead. I’m excited by its potential to streamline systems and revolutionise data analysis and exchange, while anxious about what that means for those working in industries likely to find themselves obsolete. The visual arts are at the top of most people’s lists for the chopping block.” She goes on to say: “AI has gotten very good at spotting patterns and replicating existing material, but it can’t and won’t replace human ingenuity, munificence and spark. It can create pictures that look a bit like folklore, but it can’t push the discussion into new territories or negate what it feels like to make something ourselves, or come together to celebrate, mourn and share. Only we can do that.”
Ben seconds this: “I like to make electronic music, and I’ve also played with AI-generated music, but I also get great joy sitting in a pub with a pint and my fiddle under my chin playing tunes with real people.” It does seem – out of all of the genres threatened by AI – that folk will be the most resilient. It is a style of music that is built by communities around oral traditions and the joy of people playing music together. If anything, folk may become even more desirable as pop, rock and electronic become bogged down by AI generation and augmentation.
Plus, folk culture can actually reap some of the benefits of AI. Recordings from decades ago can be cleaned up (as John Lennon’s demo was to make the final Beatles single, 2023’s ‘Now and Then’), global folk archives may become more accessible as AI tools learn how to identify rhythms, languages and instruments, and perhaps we might even be able to hear music that was never recorded (AI being used as an archaeological tool, approximating instruments and languages of ancient times).
In terms of where we go from here, Lucy put it well in her post: “I’m not saying that we have to embrace AI art. We can dislike what it produces and try to fight it if we wish. But I also don’t think we necessarily have to be afraid of being replaced by it. As artists, as human beings, we need to keep on communicating what it is that WE, and only we, can do best of all. And keep on doing that.”
Music creators and politicians, including the Musicians’ Union, protesting the UK government’s handling of AI legislation, May 2025
Clone Wars
Generative AI cloning and creation has sent shockwaves across the music industry. But can musicians safeguard their rights against the machines – and can tech ever be used for creative good? Chris Wheatley finds out
In 2025, a new indie band debuted on Spotify – their music combining a touch of soft rock with hints of psychedelia. This rather colourless blend turned heads by achieving one million streams. Yet The Velvet Sundown didn’t exist. They were entirely AI-generated.
It wasn’t too long ago that the success of an artificial intelligence band – duping listeners into thinking it was real – would have seemed more akin to an Isaac Asimov tale than our own reality. But now, in 2026, AI is already encroaching inexorably on our everyday lives – via Google searches, workflow-tools-cum-personal-assistants like ChatGPT and, more controversially, in the arts.
Take, for example, the case of folk singer Emily Portman, who spoke of her troubling brush with AI in Songlines #213, when not one but two AI-generated albums appeared on her Spotify account under her name. “It wasn’t that my own voice was used, but it sounded like my style, my genre,” she told us at the time. As Portman’s case demonstrates, AI has rapidly impacted the music industry as we know it – who is making it, how it sounds, where it appears – and legal bodies have struggled to keep up. Some of Portman’s fans believed the AI albums were indeed legitimate. The reason music generated in this manner can sound so convincing is that the systems are trained extensively on existing, human-made music. In this process, known as data mining or ‘scraping’, AI models ‘learn’ from copyrighted material in order to generate ‘new’ songs.
Tom Eagle, Regional Officer for the Musicians’ Union, which represents UK artists, says the struggle to protect artists’ rights from AI scraping has been a major issue since generative technology first became widely available to the public in 2022. Concerns intensified in 2024 when the government sought to introduce an exception for copyrighted materials, allowing Gen AI to further train, imitate and create material based on existing work: “The Musicians’ Union and other music industry bodies, and representatives across the creative sectors, thankfully managed to face that proposal down.”
The Musicians’ Union has been pushing for tech companies to legally require explicit consent from musicians and for greater transparency in general. Yet, as Portman discovered, proving copyright infringement is far from straightforward, and protecting work from data mining even less so. Says Eagle: “If it’s out there available to the public, then AI tech firms see it as fair game, currently.” The Union is continuing to lobby politicians directly and speak at trade union conferences across the UK for greater legal protections. “We need the government to legislate, to clarify that internet scraping for AI training is a copyright-protected act.”
While there is no unified global effort to address the issues, organisations in several countries are taking direct legal action which may set precedents. Potential good news arrived in early November 2025, with the culmination of a lawsuit brought by the Recording Industry Association of America (RIAA) on behalf of the big three labels, Universal Music Group, Sony Music and Warner Records, against popular AI music generator, Udio. The lawsuit alleged that Udio had trained its systems by using UMG’s vast catalogue of recordings. Udio appeared to capitulate, abruptly cutting its website services and announcing that users had a limited time left to download their creations. The case ended in a settlement, wherein Udio has now been absorbed into UMG to create a new product – a music generator based on UMG’s catalogue, where artists whose work is used to generate the music will receive a share of the profits.
On the face of it, this may seem like a win for artists. However, while it’s certainly true that music has always evolved in tandem with technical innovations, from the advent of streaming to virtual reality concerts, Eagle points out that there is always a cost to be paid.
“What is not said is that every one of those has led to a degradation in the work of the music creator. It is the music creator who has had to cope with seeing their opportunities and income sliced away.” This is particularly true when it comes to streaming. Online artist support organisation, VIRPP, calculates that a million streams of one song on Spotify will earn you around £75 – a compelling argument that the ‘value’ of music has reached an all-time low. “There will be a solution found that means music creators will be paid something for AI,” says Eagle, “but it will be a downgrade on the pre-AI situation.”
Udio’s biggest competitor, Suno, is currently facing similar lawsuits, with Danish music rights organisation, Koda, becoming the latest to take the company to court in order to win compensation for artists in an ongoing case. Koda claim that Suno was guilty of “the biggest theft in music history”, scraping lyrics and music from copyrighted works belonging to the artists they represent. Elsewhere, German licensing body GEMA has won a case against OpenAI’s chatbot, ChatGPT, which the Munich court ruled violated German copyright law by training its language models with lyrics from works by popular artists. OpenAI have been ordered to pay damages, with GEMA beginning negotiations on how rights holders should be compensated.
It would be wrong to see all GenAI companies as ‘the bad guys’ – there are companies putting themselves forward with more ethical approaches than solely exploiting grey areas. Maya Ackerman, co-founder of the company WaveAI and author of the book Creative Machines: AI, Art & Us, believes that AI can be an aid to musicians, rather than a threat. WaveAI’s online AI-assisted songwriting tools – LyricStudio and MelodyStudio – are trained on data which is in the public domain (i.e. not subject to copyright). Unlike Udio and Suno, these do not attempt to generate complete tracks. “We built a system where you would give it some text,” says Ackerman, “and then it would show you different ways that you could sing little phrases. The aim is to aid people in their songwriting journey, rather than give them music.”
It’s Ackerman’s belief that such systems can aid creativity rather than replace it, and benefit working musicians. Yet she has found it an approach that can be at odds with big business. “When I was fundraising,” she says, “the more powerful investors, for the most part, held the opinion that the way you make money out of music AI is by it being an alternative to human labour.” By reducing the need for collaborators, the major generative AI players – the aforementioned Udio and Suno – have created a platform where one user can create a (virtual) room of musicians in minutes.
What’s clear is that the approach to legislating AI needs to be drastically overhauled. Tom Eagle knows, as he was there at the beginning: “In the early days of the copyright exception proposals,” he says, “we attended a number of meetings, organised by the [UK] government involving AI firm representatives. [The AI companies] were pretty brazen at the time that they weren’t doing anything wrong and that if the government sought to legislate then the public and the government would be the ones who lost out on any AI financial boom.”
It could be argued that the real threat here lies in a prevailing idea that, ultimately, humans can be replaced. “There’s a very common belief,” observes Ackerman, “that robots can be smarter than us, and that our intellect is becoming less relevant. The moment a machine does something impressive, for example, text to image models – you type a prompt, you get an image – the response is, ‘Oh, my goodness, how amazing, we could never do that.’ Not true.”
AI generators, whether in the field of writing, art or music, could not function without scraping the works of countless human writers, painters and musicians. In a world where it’s becoming increasingly difficult to tell AI from the real thing, it remains to be seen what balance can or will be attained.