As the use of artificial intelligence (AI) has permeated the creative media space — especially art and design — the definition of intellectual property (IP) seems to be evolving in real time as it becomes increasingly difficult to understand what constitutes plagiarism.
Over the past year, AI-driven art platforms have pushed the limits of IP rights by utilizing extensive data sets for training, often without the explicit permission of the artists who crafted the original works.
For instance, platforms like OpenAI’s DALL-E and Midjourney offer paid subscription models, indirectly monetizing the copyrighted material that constitutes their training data sets.
In this regard, an important question has emerged: “Do these platforms work within the norms established by the ‘fair use’ doctrine, which in its current iteration allows for copyrighted work to be used for criticism, comment, news reporting, teaching and research purposes?”
Recently, Getty Images, a major supplier of stock photos, initiated lawsuits against Stability AI in both the United States and the United Kingdom. Getty has accused Stability AI’s visual-generating program, Stable Diffusion, of infringing on copyright and trademark laws by using images from its catalog without authorization, particularly those with its watermarks.
However, the plaintiffs must present more comprehensive proof to support their claims, which may prove challenging given that Stable Diffusion was trained on an enormous cache of more than 12 billion compressed images.
In another related case, artists Sarah Andersen, Kelly McKernan and Karla Ortiz initiated legal proceedings in January against Stability AI, Midjourney and the online art community DeviantArt, accusing the organizations of infringing the rights of “millions of artists” by training their AI tools on five billion images scraped from the web “without the consent of the original artists.”
AI poisoning software
Responding to the complaints of artists whose works were plagiarized by AI, researchers at the University of Chicago recently released a tool called Nightshade, which enables artists to integrate undetectable alterations into their artwork.
These modifications, while invisible to the human eye, poison AI training data: the subtle pixel changes disrupt a model’s learning process, leading to incorrect labeling and recognition.
Even a handful of these images can corrupt the AI’s learning…
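To illustrate the general idea of an imperceptible image perturbation, here is a minimal Python sketch. It is not Nightshade’s actual method, whose perturbations are optimized against a target model to shift the concepts it learns; this sketch only shows how a change bounded to a few intensity levels per pixel can be embedded without being visible to a viewer. The function name, file names and the `epsilon` budget are illustrative assumptions.

```python
# Hypothetical sketch: embed a low-amplitude, human-imperceptible
# perturbation in an image within a strict per-pixel budget.
# NOTE: this is NOT Nightshade's algorithm; Nightshade optimizes its
# perturbations against a specific model to corrupt learned concepts.
import numpy as np
from PIL import Image

def embed_perturbation(path_in: str, path_out: str,
                       epsilon: int = 4, seed: int = 0) -> None:
    """Add bounded pseudo-random noise (|delta| <= epsilon per channel)."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    rng = np.random.default_rng(seed)
    # Perturbation drawn uniformly from [-epsilon, epsilon]; at epsilon = 4
    # out of 255 intensity levels, the change is effectively invisible.
    delta = rng.integers(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)
    poisoned = np.clip(img + delta, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

# Example usage with hypothetical file names:
embed_perturbation("artwork.png", "artwork_shaded.png")
```

The key design point is the per-pixel budget: by clipping every change to a few intensity levels, the altered file looks identical to the original to a human, while still differing numerically from what a scraper would expect the image to contain.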