Rage Against the Machine: The humans whose creative output feeds AI bots are striking back on the legal battleground

By Neil Dolby
May 26, 2025

As generative artificial-intelligence tools from Grammarly to ChatGPT grab the popular imagination, the world’s creatives feel increasingly marginalised, fearful for their ability to eke out a living in the all too near future. Generative AI sifts through human-generated content – texts, sounds, images and videos – to train models that can compose music, create a painting or generate a TV advert. Such scraping of data, known as text and data mining (TDM), often takes place without the consent of the content owners.


Protests against the use of AI to create new forms of artistic output without recompense or the consent of the original creators rage on around the globe. A protest album called Is This What We Want? was released earlier this year in response to proposed changes to UK copyright law that would allow AI developers to train their models on material available on the internet unless rights holders specifically opted out.


Available on Spotify, the album consists of silent recordings made in empty studios and performance spaces by more than 1,000 musicians. The titles of its 12 tracks spell out a powerful message: “The British Government must not legalise music theft to benefit AI companies.”


In the art world, an auction titled Augmented Intelligence, held in New York this spring, also caused a furore. Prior to the sale, an open letter signed by thousands of artists called for its cancellation, decrying the “mass theft” of human artists’ work by AI companies.


The issue is a vexed one, complicated by the fact that many governments are keen to promote a thriving ecosystem for both tech firms and artistic creators, even as myriad legal and ethical considerations swirl around this nascent technology.


Composer protection

With AI’s influence growing across sectors, the Hong Kong government launched a public consultation last year on updating the Copyright Ordinance to reflect developments in AI technology. In its submission to the consultation paper, the Composers and Authors Society of Hong Kong (Cash) – which manages the copyright of musical works for more than three million members locally and from affiliated societies overseas – said copyright protection should not only be provided to the “arranger” of the works but should also extend to the author or owner of the underlying work. Like the protesters in the UK, it believes the opt-out model proposed by the government unfairly shifts the burden onto copyright owners, who must take an active step to safeguard their rights.


Instead, Cash proposes an opt-in system, arguing that “the default position should be … it is illegal to use any music without permission, unless the author or the composer has expressed an intention to sell their rights to the tech company.” In its view, when copyrighted material is used commercially for TDM purposes, the owners of the work should be remunerated, with that remuneration based on licensing arrangements between the owners and the AI companies.


Legal conundrum

Hong Kong-based lawyer Ellie Patel, the founder and CEO of Re-think Legal, believes the generative AI issue is exacerbated by the legal frameworks in most jurisdictions lagging behind technological developments. “Governments are now scrambling to put in place laws to govern the creation and use of AI, but this is just the start of the process,” she says. “Ultimately, it’s only when these laws are interpreted and tested by the courts, and harmonised on an international level, that real certainty regarding rights and obligations will be established.”


Patel is torn on the subject of “fair use” exemptions for training AI models, which allow content to be used without the threat of copyright infringement in certain circumstances, such as education and news reporting. As a trademark and intellectual property expert, she has clients who are creators and others who are in the tech industry.


She supports the development of AI, viewing it as an increasingly important public and corporate tool. “That said, creators also need protection and, in the main, should be rewarded for their contribution,” she notes. “It is all about striking a balance between the two – but striking that balance is no mean feat.”


Jonathan Chu, an intellectual property partner at CMS in Hong Kong, warns that too many restrictions on the use of copyright works for training AI will hamper innovation, and that jurisdictions with the most stringent laws on AI development will be left behind. He urges greater clarity on the issue.


Remixing the past

Hong Kong University of Science and Technology Professor Andrew Horner, whose primary research interests are music synthesis and timbre, sees generative AI as the exciting new “kid on the creativity block” shaking up the artistic landscape. He has no qualms about generative AI learning from the “rich tapestry of existing content – music, literature, visual art and more”.


He states that throughout history, artists have drawn inspiration from each other’s creations. “A classic example in music is Miles Davis paying tribute to Stravinsky’s The Rite of Spring,” he says. “In this sense, generative AI is simply carrying forward this vibrant legacy. No creator operates in a vacuum; they are always influenced by the world around them and the inspirations that strike their fancy.” 


Two disco songwriters recently lost a legal case in which they alleged that Dua Lipa had copied her single Levitating from their tracks, and Horner believes the episode highlights the beautiful – and sometimes tricky – dance of creativity. “It’s only natural for songwriters to mix and remix elements from others’ work, adding their own unique flair,” he insists. “After all, if you create something that inspires, it’s likely others will want to take that spark and run with it.”


For him, the crux of the matter is whether the output of generative AI models resonates with us and inspires meaningful experiences. “It’s an exhilarating time for creativity, making it easier than ever to play with sounds and forge new musical landscapes,” he says. “There are so many ways to pay tribute to a song: you can cover it, create a mashup that fuses it with another track, or compose something entirely fresh that reflects your personal take on it. It’s all part of the vibrant tapestry of music-making.”


Horner suggests the creative lineage of a piece should be honoured by giving a shout-out to the artist or artists who influenced it. Musician David Robinson, who receives royalties for a book and tape he created, is strongly in favour of AI as a tool to enhance the creative process, but believes musicians should be compensated if AI uses their material.


Pushing art forward

Art-world pioneers including Refik Anadol, Sougwen Chung, Sander Coers, Anna Ridler and Victor Wong have used AI to expand artistic possibilities, blending human intuition with machine-generated aesthetics. Karen Sanig, who founded the Art Law practice at Mishcon de Reya, says that for some artists, AI is less a threat than a transformative tool, deeply integrated into their creative processes in sophisticated ways. However, she is concerned that AI systems that scrape vast datasets of artworks en masse, without consent, deny artists control over how their work is used or reproduced.


AI-generated imitations could also damage the integrity of a work of art and dilute an artist’s reputation, hence conversations about moral rights (such as the right to attribution and the protection of a work’s integrity) and about passing off are vital alongside copyright considerations.


“Legislators face a daunting challenge – not only must they crystallise a formula to regulate a technology that evolves daily, they must reconcile AI’s data-driven mechanics with intellectual property frameworks built largely for human authorship,” notes Sanig.


Hong Kong-based entrepreneur Matey Yordanov of Antei AI Development Hub says he has developed an imperceptible ‘mask’ technology that adds an invisible layer to any type of media, corrupting an AI model’s training output if protected files are used. “This not only safeguards the media from being copied, but also prevents deepfakes and impersonation of artists, which is becoming increasingly critical with the development of AI,” he affirms.