Copyright v AI – Where does one draw the line? Media and Technology lawyer, Will Charlesworth, considers the recent Getty Images v StabilityAI case.
You are unlikely to have missed the recent hysteria over the AI software ChatGPT. Seen as the future of content search and creation, it has attracted considerable investment in its creator, OpenAI, from technology titans including Microsoft.
As exciting as this new evolution of AI is, it forces us to confront difficult questions: how much should AI technology legitimately be allowed to borrow from (“learn from”) original, human-generated content? Will AI-generated content one day replace human-created content in the creative industries?
As to the first question, it may soon be answered by recent legal proceedings brought by Getty Images, one of the world’s best-known image banks (you will recognise the name, as it accompanies many of the stock images used in the press), against StabilityAI, the creator of the software “Stable Diffusion”, which generates detailed images from text descriptions.
To create images on demand, Stable Diffusion draws on its past ‘learning’ from data mining millions of online images. Getty claims that StabilityAI copied and used its images to train the AI software without permission and, in doing so, infringed Getty’s copyright.
It is understood that Getty has licensed other AI software providers to carry out such data mining, but not StabilityAI. Absent Getty’s permission, the case will inevitably consider issues such as the extent to which the use of Getty’s images is permitted under English copyright law’s ‘fair dealing’ exceptions.
The case will be watched with much interest by those on both sides of the debate, particularly as it comes after a recent series of consultations by the UK Intellectual Property Office on how AI should interact with IP. A government proposal to expand the current text and data mining exception in copyright and database right law beyond non-commercial research purposes, to the mining of text and data for all purposes, has been rejected by the House of Lords Communications and Digital Committee. The Committee was concerned about the implications for the creative industries in the UK, which contribute considerably to the UK economy:
“Developing AI is important, but it should not be pursued at all costs.”
A balance will clearly have to be struck between the desire to encourage innovation (and not stifle technological development) and the need to protect existing creative rights.
Oh, and as to my second question above (will AI content one day replace other content in the creative industries?), I am not sure how much reliance we will place on AI content in the future, but I would like to think that, as a society, we still value human creativity and originality above all else (but maybe I’m just a dreamer).