To get a sense of how complicated it is to grant an AI company a license to train its algorithm on copyrighted work, imagine for a moment that you run a European collecting society that distributes royalties to songwriters and publishers. For the sake of this example, let’s say that this organization is called COMPLEX (the Cooperative for Original Music Publishing Licensing Excellence) and that it represents public performance and mechanical rights for the fictional country of Freedonia, which it licenses for revenue of about $200 million a year.
Many AI companies are already scanning the work you control — I’m skipping the usual “allegedly,” since this is hypothetical — and you want to make them pay to do so. Then, one day, another AI company comes to you and says it wants to do just that — scan all of your works and use them to train its algorithms. In return, it’s willing to give you a one-time payment of $300 million, with no further obligations on either side. All you have to do is sign the deal, take the money and… then what?
Would you take the deal? The question took on new urgency on Sept. 9, when STIM, the Swedish collecting society, announced that it had set up the first collective licensing deal for AI training. That followed a deal ElevenLabs struck in early August with Kobalt and Merlin. It is still unclear in the U.S. whether copying works to train an AI algorithm even requires a license, so these deals should probably be seen as experiments.
I’d argue that you shouldn’t take the deal, assuming you even could, given which rights you control for which territories. Analyzing a work requires ingesting it, which in turn requires making a copy, and under copyright law that copy would be considered a mechanical reproduction. It is not settled whether using an AI algorithm involves making a copy, but it’s obvious that training one does. So it’s possible, though by no means certain, that each AI company only needs to copy a work once.
So that $300 million payment could be the only one you’d ever get. And once you get it, your songwriters will have to compete with an algorithm that can use their works to churn out an enormous volume of similar music. And that assumes you could even sign such a deal without the permission of rightsholders — both the STIM and Kobalt deals are opt-in — plus figure out where the deal would apply. But the big issue is that rightsholders essentially need to find a way to control the horse after it has left the barn.
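To see why a one-off payment is such a gamble, consider a rough, purely hypothetical comparison. The recurring amount, discount rate and time horizon below are my assumptions, not terms from any real offer; the point is only that a recurring AI license worth a modest fraction of COMPLEX’s existing collections can be worth more, in present-value terms, than the $300 million lump sum.

```python
# Back-of-envelope comparison: a one-off lump sum vs. a recurring
# royalty stream. All figures are hypothetical illustrations, not
# numbers from any actual deal.

def present_value(annual_payment: float, discount_rate: float, years: int) -> float:
    """Present value of a fixed annual payment received for `years` years."""
    return sum(annual_payment / (1 + discount_rate) ** t for t in range(1, years + 1))

lump_sum = 300_000_000  # the hypothetical one-time offer to COMPLEX

# Suppose instead COMPLEX negotiated a recurring AI license worth a
# fraction of its existing $200M/year collections (assumed figure).
recurring = present_value(annual_payment=40_000_000, discount_rate=0.08, years=20)

print(f"One-off payment:            ${lump_sum:,.0f}")
print(f"PV of $40M/yr for 20 years: ${recurring:,.0f}")  # roughly $393 million
```

Under these invented assumptions, the recurring stream beats the lump sum, and that is before counting whatever the AI company’s outputs go on to earn.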
The €4 billion question (based on CISAC’s estimate of the value of generative AI music in 2028) is how rightsholders can use the rights they can license as a lever to exercise some control over the rights they can’t. Some of this is just practical: They need some sense of how heavily algorithms rely on different works in order to pay out royalties from whatever agreements they make. (In the example of the $300 million offer, how would COMPLEX distribute that revenue?) But some of it is strategic: As AI uses copyrighted songs to create new ones, rightsholders deserve an ongoing revenue stream, both because these new creations are built on their work and because those creations will inevitably compete with theirs.
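As a thought experiment, here is a minimal sketch of that distribution problem, assuming COMPLEX could somehow obtain per-work estimates of how heavily a model relied on each song. The song names and usage weights are invented for illustration.

```python
# A minimal sketch of one distribution problem COMPLEX would face:
# splitting a lump sum among songwriters when all it has is some
# estimate of how heavily the model relied on each work.

def distribute(pool: float, usage_weights: dict[str, float]) -> dict[str, float]:
    """Split `pool` pro rata by each work's share of total estimated usage."""
    total = sum(usage_weights.values())
    return {work: pool * w / total for work, w in usage_weights.items()}

# Hypothetical per-work training-reliance estimates. The hard part is
# getting these at all: AI companies rarely disclose them, so COMPLEX
# would have to negotiate for usage reporting or estimate them somehow.
weights = {"Song A": 120.0, "Song B": 45.0, "Song C": 5.0}

payouts = distribute(pool=300_000_000, usage_weights=weights)
for work, amount in payouts.items():
    print(f"{work}: ${amount:,.2f}")
```

The math is trivial; the data is not. Any workable deal would have to oblige the AI company to report usage, which is exactly the kind of ongoing obligation a one-time payment avoids.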
GEMA’s licensing model, presented a year ago, in September 2024, says that “a one-off lump sum payment for training data is not nearly sufficient to compensate authors in view of the revenues that can be generated.” The idea is that songwriters get paid based on how often their works are used to generate new ones. STIM’s license, really a pilot project, takes this a step further, compensating rightsholders with a share of revenue when their works are used by generative AI algorithms, and then potentially again when AI music based on their works is used.
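Reduced to arithmetic, the model the GEMA and STIM approaches point toward might look something like the sketch below. The two royalty rates and the revenue figures are invented for illustration; neither license’s actual terms are expressed this way here.

```python
# A sketch of a two-stage, usage-based model: rightsholders get a
# share of revenue when a work is used in generation, and potentially
# again when the resulting AI track earns money. Rates are invented.

GENERATION_ROYALTY_RATE = 0.10   # share of AI service revenue per generation event
DOWNSTREAM_ROYALTY_RATE = 0.05   # share of revenue earned by the AI-made track

def royalty(generation_revenue: float, downstream_revenue: float) -> float:
    """Total owed to a rightsholder whose work fed one AI output."""
    return (GENERATION_ROYALTY_RATE * generation_revenue
            + DOWNSTREAM_ROYALTY_RATE * downstream_revenue)

# One AI track: $2.00 of service revenue at generation time, then
# $500 of streaming revenue earned by the track itself.
print(f"Owed: ${royalty(2.00, 500.00):,.2f}")  # $0.20 + $25.00 = $25.20
```

The second term is what makes this structure novel: the payment tracks the AI output’s earnings over time, not just the moment of training or generation.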
That’s one hell of a lever. Not only does this agreement impose an obligation on AI companies that license works for training purposes, it also extends that obligation to the works they create. It’s especially interesting because in most countries, music created by AI is not covered by copyright, so it can be used freely. This arrangement would potentially give rightsholders the ability to benefit from that music under contractual rules. If it works, of course. It’s hard to know whether it will, and it’s hard to imagine that the AI licenses in use in 2035 will look anything like these. But these are the first moves to shape a new business, and they could be quite influential. Next week, I’ll write about how that influence could play out in ways that aren’t immediately obvious.