Generative AI feels like magic to many people. Although it remains controversial on questions such as ethics and the responsible use of AI, it is fair to say that AI is slowly but surely becoming an indispensable part of our lives. AI tools can be put to almost any use, but in this particular podcast I will be interviewing an AI model about generative AI, copyright, and the legal questions surrounding them.
A few working definitions before we begin. “Machine learning” is a type of AI in which a program’s behavior is derived from patterns found in a corpus of material called training data, rather than from explicitly written rules. “Generative AI” is a subset of machine learning in which the model uses those learned patterns to produce new content that resembles its training data. In this episode, “derivative work” is used for the output of generative AI.
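To make those definitions a little more concrete, here is a deliberately tiny, hypothetical sketch in Python: a word-level Markov chain that “learns” only which word tends to follow which in a toy training text, then samples new sequences from those learned patterns. Modern generative models are vastly more sophisticated; the corpus and function names below are invented purely for illustration.

```python
import random
from collections import defaultdict

def train(words):
    """Record which words follow which in the training text."""
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=10):
    """Sample a new sequence from the learned word-to-word patterns."""
    word, output = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Toy training data; real models are trained on billions of data points.
corpus = "the quick brown fox jumps over the lazy dog and the quick dog sleeps".split()
print(generate(train(corpus), start="the"))
```

Even this toy example hints at why the legal questions are hard: the output is not a verbatim copy of the training text, yet it is entirely shaped by it.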
In today’s episode, I am interviewing Marky, the chatbot developed by Easy-Peasy.ai. Let’s start with our first question.
What do you think about derivative works?
Derivative works are created by building upon or transforming existing works to produce something new. This can include adaptations, remixes, translations, and other forms of creative expression that draw on pre-existing material. Derivative works can be a reflection of contemporary culture, offering new perspectives and interpretations of original content. They can also contribute to the evolution and diversification of artistic and intellectual endeavors. However, it’s essential to consider copyright and intellectual property laws when creating derivative works to ensure that the original creators are appropriately credited and respected.
You mentioned copyright and intellectual property laws, but what about ethical issues?
The ethical problems surrounding derivative works often revolve around issues of intellectual property, fair use, and creative attribution. Here are some of the ethical considerations:
Copyright Infringement: Creating derivative works without obtaining proper permissions or licenses from the original creators can lead to copyright infringement, potentially undermining the rights and financial interests of the original creators.
Fair Use and Transformative Use: Determining what constitutes fair use and transformative use when creating derivative works can be ethically complex. While transformative works add new value or meaning to the original content, it’s crucial to respect the balance between using existing material and creating something truly original.
Attribution and Recognition: Ethical concerns arise when derivative works fail to appropriately credit the original creators. Proper attribution is essential to honor the contributions of the original creators and acknowledge their intellectual and artistic rights.
Economic Impact: Derivative works can also impact the market value and commercial prospects of the original works. Ethical considerations arise when derivative works compete with or devalue the original content.
Cultural Appropriation: In some cases, derivative works can raise ethical questions related to cultural appropriation, especially when the original content pertains to culturally significant or sacred material.
Engaging in ethical practices when creating derivative works involves respecting the rights of the original creators, seeking appropriate permissions, providing fair attribution, and ensuring that the derivative work adds meaningful value or transformative elements. These considerations are essential to maintain ethical integrity and foster a respectful creative ecosystem.
Attribution and ownership are quite important when we talk about intellectual property, but who is the owner of an AI-generated work?
The ownership of generative images can vary depending on the context in which they are created. If a generative image is produced by an individual using their own algorithms, tools, and creative input, then typically the individual would own the copyright to the generated image. However, if the generative image is produced using proprietary software, algorithms, or platforms owned by a third party, the ownership and copyright may be subject to the terms and conditions of the software or platform. In the case of AI-generated images, ownership can become more complex, especially if the AI system itself is considered the creator. Current laws and regulations regarding the ownership of AI-generated content are still evolving and can differ across jurisdictions. It’s essential to consult legal experts and consider the specific circumstances of the generative image to determine ownership and copyright status accurately.
AI models are trained on vast datasets of images, often scraped from the internet without consent, and their outputs are becoming indistinguishable from human-made works day by day. Who owns the copyright to an image generated by AI? Is it the developer of the AI model, the user who prompts the generation, or the model itself? As Marky stated, this ambiguity needs legal clarification. As we know by now, AI models are trained on data, and every derivative work rests on the billions of data points gathered from the internet. But what happens if this data and the algorithms built on it are biased?
If the data used to create derivative works, including generative content, is biased, it can have significant ethical, social, and practical implications.
Reinforcement of Bias: Biased data can lead to the reinforcement of existing societal biases and stereotypes, perpetuating systemic inequalities and discrimination. This can be particularly concerning when derivative works are intended for public consumption, as they can shape perceptions and attitudes.
Unintended Consequences: Derivative works created from biased data can inadvertently amplify and perpetuate misinformation, misunderstanding, and harmful narratives, especially when the biases are not recognized or addressed.
Ethical Considerations: The ethical implications of using biased data include the responsibility to acknowledge and mitigate biases, ensure fair representation, and strive for accuracy and inclusivity in the resulting derivative works.
Legal and Regulatory Issues: The use of biased data in derivative works can raise legal and regulatory concerns, particularly regarding consumer protection, privacy, and compliance with anti-discrimination laws.
To address these challenges, it’s crucial to implement measures to identify and mitigate biases in the data used for derivative works. This can include employing diverse datasets, transparently documenting data sources, and engaging in ethical and inclusive data collection practices. Additionally, fostering awareness of bias and promoting responsible data usage are essential to creating derivative works that are fair, accurate, and respectful of diverse perspectives.
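As an illustration of the kind of measure Marky mentions, here is a minimal, hypothetical sketch of a representation audit: it simply counts how often each group appears in a dataset’s metadata so that obvious imbalances can be spotted before training. The field names and sample records are invented for this example; a real audit would cover many more attributes and far larger data.

```python
from collections import Counter

def representation_report(records, group_key):
    """Report each group's share of the dataset sample."""
    counts = Counter(r[group_key] for r in records if group_key in r)
    total = sum(counts.values())
    return {group: round(n / total, 3) for group, n in counts.items()}

# Hypothetical metadata for a handful of training images.
sample = [
    {"id": 1, "region": "North America"},
    {"id": 2, "region": "North America"},
    {"id": 3, "region": "Europe"},
    {"id": 4, "region": "North America"},
    {"id": 5, "region": "South Asia"},
]
print(representation_report(sample, "region"))
# {'North America': 0.6, 'Europe': 0.2, 'South Asia': 0.2}
```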
As AI and generative technologies continue to evolve, there is growing emphasis on ethical AI practices and data governance to mitigate the impact of biased data on derivative works and other AI-generated content. Do you think that generative content created with manipulated data can be used for propaganda and disinformation?
Yes, derivative works, including generative content, can be used for propaganda and disinformation purposes. The malleability and persuasive nature of visual, audio, and written content make them susceptible to exploitation for propagandistic and disinformation campaigns. Here are some ways in which derivative works can be used for these purposes:
Visual Manipulation: Generative images and altered visual content can be used to create false or misleading depictions of events, individuals, or objects, contributing to visual propaganda and deception.
Misleading Narratives: Derivative works such as altered videos, deepfakes, and deceptive audio recordings can be used to construct false narratives, misrepresent facts, and manipulate public opinion.
Cultural and Historical Rewriting: Derivative works can be misused to distort cultural or historical representations, promoting biased or false interpretations of events or traditions.
Social Media Influence Campaigns: Propagandistic and disinformation efforts often utilize derivative works to spread misleading or divisive content across social media platforms, exploiting their viral potential and reach.
To counter the potential misuse of derivative works for propaganda and disinformation, it’s essential to promote media literacy, critical thinking, and fact-checking to help individuals discern the authenticity and credibility of visual and audio content. Additionally, developing and implementing technologies for detecting and mitigating manipulated or deceptive content, such as deepfake detection tools, can aid in combating the spread of propaganda and disinformation through derivative works.
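For readers curious what such detection tooling can look like at its very simplest, here is a toy sketch of error-level analysis, a basic image-forensics heuristic: a JPEG is recompressed and compared with the original, since edited regions often recompress differently. It is only a rough heuristic, not a deepfake detector; it assumes the Pillow library is installed, and the file path and threshold are placeholders.

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    """Recompress a JPEG and measure how much each channel changes."""
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)
    diff = ImageChops.difference(original, recompressed)
    # Largest per-channel difference; unusually high values can hint at edits.
    return max(high for _, high in diff.getextrema())

# Placeholder path and threshold, for illustration only.
if error_level_analysis("suspect_photo.jpg") > 40:
    print("Image may warrant closer forensic inspection.")
```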
Efforts to address the ethical and responsible use of derivative works in media and communications are crucial to upholding the integrity of information and safeguarding against the harmful effects of propaganda and disinformation. What do you think of the explainability and transparency of these works?
Efforts to enhance the explainability and transparency of derivative works, particularly those involving generative technologies, can involve the development of standards, guidelines, and best practices for disclosing the processes and influences behind the creation of such works. Additionally, fostering collaboration between creators, technologists, ethicists, and stakeholders can contribute to advancing responsible and transparent approaches to the use of generative technologies in creating derivative works.
You stated that there should be collaboration between multiple parties for the ethical and responsible use of these technologies. However, there are artists who would disagree with you, which brings me to my next question. From your own perspective, what is the impact of generative works on the art community?
From my perspective, generative works have had a profound impact on the art community, offering new avenues for creativity, expression, and exploration. Here are some key impacts of generative works on the art community:
Innovation and Experimentation: Generative works have encouraged artists to explore innovative techniques and experiment with computational processes, leading to the creation of unique and unconventional art forms.
Intersection of Art and Technology: The convergence of art and technology through generative works has sparked interdisciplinary collaborations, bringing together artists, technologists, and researchers to push the boundaries of traditional art practices.
Accessibility and Inclusivity: Generative art has the potential to democratize the creative process, making art more accessible to a broader range of individuals and communities. This inclusivity can foster diversity and representation in artistic expression.
Challenges to Traditional Notions of Authorship: The use of generative algorithms and AI technologies has prompted discussions about the shifting dynamics of authorship and creativity, challenging traditional notions of artistic ownership and originality.
Critical Discourse and Reflection: Generative works have prompted critical discourse within the art community, raising questions about the role of technology in art, ethical considerations, and the nature of creative expression.
New Forms of Engagement: Generative art has introduced new ways for audiences to engage with art, including immersive experiences, interactive installations, and participatory environments that redefine the viewer’s relationship with the artwork.
(This part was created with Bard): The mass production of AI-generated art could devalue human creativity and craftsmanship. Artists may feel they lose control over their work and the way it is used when it is incorporated into AI models. AI could potentially replace human artists in some fields, leading to job losses and economic hardship.
*For this question I used both Bard and Easy-Peasy.ai, and it is interesting to see how differently they answered: while one was very positive, the other gave a negative opinion.
Thanks, Marky. My last question to you is about copyright claims. There are many artists who are not fond of generative works, claiming that their intellectual property rights are being infringed. Can you give some examples for our listeners?
There have been several legal claims and lawsuits that have involved AI and copyright infringement. Here are a few examples:
Getty Images vs. Stability AI: In January 2023, Getty Images filed a lawsuit against Stability AI in the High Court of Justice in London, alleging that Stability AI infringed upon Getty Images’ copyrighted images and works by using them to train their AI.
Microsoft, GitHub, and OpenAI Lawsuit: Microsoft, GitHub, and OpenAI faced a lawsuit for allegedly violating copyright law in the creation of GitHub Copilot, an AI coding assistant trained on open-source code. The lawsuit raised concerns about the use of training data in AI models.
AI Image Generators and Artists’ Rights: Lawsuits have been brought forth arguing that AI image generators, such as Midjourney and Stable Diffusion, are infringing on artists’ rights by generating images that may violate copyright laws.
These cases and claims highlight the evolving legal landscape surrounding AI and copyright infringement, particularly in the context of AI-generated content and the use of copyrighted material to train AI models. The intersection of AI technology and intellectual property rights has prompted scrutiny and legal action as the implications of AI on copyright law continue to unfold.
These are just a few examples of the growing number of IP infringement claims filed against AI applications. As AI technology continues to evolve, we can expect to see even more litigation in this area. It is important to note that, at the time of writing, these cases are ongoing and have not yet been decided. Their outcomes will have a significant impact on the future of AI development and intellectual property law.