Drawing on its analysis of 13 generative AI platforms, conducted for a study of platform transparency released in September, the Danish Rights Alliance has followed up with a set of recommendations for securing copyright in connection with the development and use of artificial intelligence.
Since the introduction of ChatGPT and other large language model AI platforms, the Rights Alliance says it has worked actively to control and protect content from illegal use in the training and use of artificial intelligence (AI). In the final analysis, however, the organization has reached a conclusion that is bad news for rights holders.
The Rights Alliance concludes that it is neither possible to control nor to protect the use of content today in connection with the development and use of AI. Key learnings leading to this conclusion include:
• Providers of generative AI do not voluntarily disclose which copyrighted material they have trained their models on;
• There is no way to verify whether text and data mining (TDM) reservations are complied with;
• Legal proceedings are required to get providers of generative AI models to acknowledge that they have violated copyright;
• Users upload copyrighted material to AI services, and there are insufficient measures to prevent this; and
• Online content sharing services’ mechanisms for detecting, reporting, and removing infringing AI output (e.g., deepfakes) are inadequate.
It is on that basis that the Rights Alliance has put forward a number of recommendations for measures to help protect content and rights from being violated in connection with the development and use of artificial intelligence.
“Existing copyright regulation can prevent the extensive, unauthorized use of content that is taking place,” said the Alliance. “But this requires that action be taken against infringements and that an effective end be put to the illegal use of content that exists at all stages of the development and use of AI. This is not a task that rights holders can undertake alone. Nor is it a task that can be undertaken solely via the legal system, resulting in years of cases – and years of unclear legal status,” they said.
Transparency is central
The Rights Alliance takes the position that transparency about the content underlying AI models is crucial: transparency is necessary to enforce rules against improper use of content and to ensure that rights holders have the opportunity to enter into license agreements. In cases where transparency is not sufficient, a rule of reversed burden of proof should also be introduced.
The recommendations “also include measures to protect rights holders against abuse of their person and strengthen the possibilities to remove and filter deepfakes and voice clones from online platforms. In addition, there is a strengthening of the regulation of personality rights,” they said.
Further reading
New recommendations: How to promote respect for copyright in connection with AI. Press release & summary. November 15, 2024. RettighedsAlliancen (Danish Rights Alliance)
AI and Copyright: RettighedsAlliancen’s recommendations for securing copyright in connection with the development and use of artificial intelligence. Report (PDF, in Danish). November 15, 2024. RettighedsAlliancen (Danish Rights Alliance)
Report: Data transparency and enforcement of copyright by AI model providers is found to be lacking. Article. by Steven Hawley. September 13, 2024. Piracy Monitor (Summarizes each model provider and links to the original Rights Alliance study)
Why it matters
In addition to the obvious need to attribute ownership and ensure that the owner is compensated or recognized by the user, platform providers must also comply with the transparency obligations set forth in Article 53 of the European Union’s recently passed AI Act.
Knowing the sources of content ingested by AI platforms seems a straightforward path to copyright enforcement. The fact that most of these platforms show little intent to disclose their sources makes it difficult, if not impossible, to determine what copyrighted content they might be using, and therefore to pursue licensing arrangements. Of course, licensing fees paid by AI platform providers would be expenses that impact those companies’ profitability, which could make them less attractive to early investors.
The lack of transparency provided by most of the AI platform providers studied by the Rights Alliance may arouse suspicion.