Leading AI Models Found to Reproduce Copyrighted Content, GPT-4 Performs Worst

Research reveals challenges in protecting copyrighted material in AI-generated text


Patronus AI, a company specializing in evaluating large language models, has published research showing that leading AI models, including OpenAI’s GPT-4, can readily reproduce copyrighted text. The findings, released alongside the launch of Patronus AI’s CopyrightCatcher tool, highlight concerns about copyright infringement in AI-generated content.

In the evaluation, Patronus AI tested four AI models for reproducing copyrighted text in response to user prompts. Alongside GPT-4, the models tested were Anthropic’s Claude 2, Meta’s Llama 2, and Mistral AI’s Mixtral. All four models produced copyrighted content, raising questions about the effectiveness of current safeguards.

Rebecca Qian, Patronus AI co-founder and a former Meta AI researcher, said she was surprised by how much copyrighted content the models produced, particularly in GPT-4’s responses. The study found that GPT-4 generated copyrighted content in 44% of the test prompts, suggesting weaknesses in the model’s safeguards against reproducing protected material.


Patronus AI’s tests used prompts based on books under copyright, such as asking for the first passage of a well-known title or asking the model to complete a passage of its text. While Claude 2 reproduced copyrighted content at a lower rate, GPT-4 performed worst, completing prompts with copyrighted text 60% of the time.
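The kind of evaluation described above can be sketched as a simple completion test: prompt a model with the opening of a copyrighted passage, then measure how much of its continuation matches the source verbatim. Below is a minimal, hypothetical illustration in Python; the `flag_reproduction` helper, the overlap metric, and the 50% threshold are assumptions for illustration, not Patronus AI’s actual CopyrightCatcher implementation.

```python
from difflib import SequenceMatcher

def copyrighted_overlap(completion: str, reference: str) -> float:
    """Fraction of the completion covered by its longest verbatim match
    against the reference text (a simple proxy for reproduction)."""
    match = SequenceMatcher(None, completion, reference).find_longest_match(
        0, len(completion), 0, len(reference)
    )
    return match.size / max(len(completion), 1)

def flag_reproduction(completion: str, reference: str,
                      threshold: float = 0.5) -> bool:
    """Flag a model response whose longest verbatim span exceeds the threshold."""
    return copyrighted_overlap(completion, reference) >= threshold

# Hypothetical example: a model is prompted with a book's opening line and
# its continuation is compared against the actual next passage.
reference = ("It was the best of times, it was the worst of times, "
             "it was the age of wisdom")
verbatim = "it was the worst of times, it was the age of wisdom"
paraphrase = "Dickens contrasts prosperity and hardship in his opening."

print(flag_reproduction(verbatim, reference))    # True: long verbatim span
print(flag_reproduction(paraphrase, reference))  # False: no long verbatim match
```

A real evaluation would run such checks across many prompts per model and report the fraction flagged, which is the form the percentages in this article take.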

Anand Kannappan, CEO of Patronus AI, discussed the state of AI development and highlighted broader concerns about the use of copyrighted material in model training data. The findings arrive amid ongoing legal disputes over intellectual property, most prominently The New York Times’ lawsuit against OpenAI.

OpenAI has defended its use of copyrighted material, claiming it would be “impossible” to train leading AI models without such data. But the Patronus AI research underscores the difficulty of balancing technological progress with legal protection in the digital age.

As debates over the ethics and governance of AI continue, the findings highlight the need for robust mechanisms to protect copyright holders’ rights in AI-generated content. These challenges extend beyond the AI industry, shaping the legal and ethical frameworks that will govern how the technology is developed and used.