The battle between artificial intelligence (AI) and people in creative fields has been ongoing for some time. In the US, actors and writers in Hollywood have been on strike to limit the use of AI, fearing it could cost them their jobs. Authors have joined the cause as well, submitting an open letter to corporations developing these AI tools and requesting fair compensation for the use of their work as "food" for AI. Now, reports have emerged that a major news publisher, The New York Times, may take legal action against OpenAI, the developer of the popular AI chatbot ChatGPT.
According to a report by NPR, "Lawyers for the newspaper are exploring whether to sue OpenAI to protect the intellectual property rights associated with its reporting, according to two people with direct knowledge of the discussions". The report also noted that the two parties have been locked in tense negotiations over a licensing deal, under which OpenAI would pay an agreed-upon amount for using the publication's articles to train its AI models.
However, unnamed sources cited in the report said the publication is losing patience with discussions that appear to be going nowhere and is now considering the legal route instead.
NYT could sue OpenAI
If NYT does take the legal route, this would become the biggest legal battle over intellectual property rights involving AI to date. The central point of conflict between the two parties is that ChatGPT acts as a direct competitor to the publication by answering questions based on its reporting.
This is not an isolated case either. The Authors Guild, the US counterpart of the Society of Authors, said in its press release after submitting an open letter signed by over 10,000 authors: "Where AI companies like to say that their machines simply 'read' the texts that they are trained on, this is inaccurate anthropomorphizing. Rather, they copy the texts into the software itself, and then they reproduce them again and again".
As AI is an emerging technology, policymakers are still trying to understand its scope while formulating regulations around its application. So far, the European Union has drafted legislation on AI that asks AI companies to maintain transparency about the sources of the data used to train their models and to seek consent from the parties whose data is being sourced. However, the law is not yet in effect, as companies have pushed back on this part of the regulation. More discussions are expected to take place soon.