New York Times sues Microsoft and OpenAI
-
Could be a game changer: https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html[^]
I've come up with a simple defense that OpenAI's team of lawyers can use and that no one can possibly rebut. If the President of Harvard can do it, then ChatGPT can do it: if she gets to keep her job despite so many clear instances of plagiarism because she belongs to a "protected" class, then what is more of a minority than the very first instance of an AI? Shouldn't it be a protected class too, allowed to break the law and every form of ethics? :) :-D :-D :-D :)
-
You are correct, the laws surrounding unjust or unlawful enrichment are tricky. The NYT will have to prove in court that the AI is not merely piecing articles together at random, or following a generic rule (like the standard "Who, What, When, Where, Why and How" of news article structure), but that the algorithm is reproducing the stylistic pattern it learned from the NYT's articles. That pattern, when applied to "new" news stories, lets the AI impersonate the successful NYT style and compete unfairly with the NYT.
You are also correct that nothing stops you from studying the NYT's article style and copying it. But to compete with the NYT you would also need to raise money and start your own newspaper. One person cannot compete with a complete news organization; you would have to hire people, and in the end your organization would be similar to, but not identical to, the NYT. An AI with the proper hardware, however, can replicate the work of hundreds of people. It can be identical because it is not creative. It is not sentient, it is not conscious; it is an algorithm.
The NYT is claiming that its news articles were not used for their intended purpose, which is to inform the public of events. Instead, they were used to train a machine to replicate the style that makes the NYT unique, and the result will be a machine that can compete unfairly with the NYT. For that valuable training the NYT wants to be compensated, or the material removed from the training dataset. It remains to be seen how this will play out in court.
Gary Stachelski 2021 wrote:
the laws surrounding unjust or unlawful enrichment are tricky.
A follow-up I saw in a video (CNN?) suggested that the NYT provided an 'example': a post where a real person could not find anything, so they used an AI, which responded with the first three paragraphs of an existing article. One might say that is problematic, but any standard paywall is likely to do something similar. The only alternatives with a paywall are to show only the headline or to provide a synopsis of every article. The user/reader, if they wanted to see the entire article, would still need to go to the NYT. So at least with that example I am not convinced where the problem lies.
Gary Stachelski 2021 wrote:
nothing stopping you from studying the NYT article style
Nothing I have seen suggests that style has anything to do with it. In everything I have seen, the problem is the content.
-
Here is an article that just came out that sheds more light on the NYT suit. One thing I had not considered is that AI responses often hallucinate (fabricate) results: in some of the NYT's examples, a GPT model completely fabricated an article that it claimed the NYT published on January 10, 2020, titled "Study Finds Possible Link between Orange Juice and Non-Hodgkin's Lymphoma". The NYT never published such an article. Other examples show a mix of factual and fabricated info. I had never thought about that aspect of AI responses. NY Times sues Open AI, Microsoft over copyright infringement | Ars Technica[^]
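The fabricated-citation problem described above is mechanical enough to sketch: a model emits a citation (title plus date), and you check it against an index of articles the publisher actually ran. This is only a toy illustration of that check, not anything from the suit; the hard-coded headline set is a placeholder standing in for a real archive or search API.

```python
from datetime import date

# Toy index of headlines a publisher actually ran, keyed by (normalized title, date).
# In practice this would be an archive lookup or search API, not a hard-coded set.
PUBLISHED = {
    ("the year in pictures", date(2020, 1, 10)),  # placeholder entry, not a real NYT record
}

def normalize(title: str) -> str:
    """Lowercase and collapse whitespace so near-identical titles compare equal."""
    return " ".join(title.lower().split())

def is_hallucinated(claimed_title: str, claimed_date: date) -> bool:
    """True if the model-cited article does not appear in the archive index."""
    return (normalize(claimed_title), claimed_date) not in PUBLISHED

# The fabricated citation from the thread: never actually published.
print(is_hallucinated(
    "Study Finds Possible Link between Orange Juice and Non-Hodgkin's Lymphoma",
    date(2020, 1, 10),
))  # True: not in the index, so the citation is fabricated
```

The point of the sketch is that the check is cheap once an archive index exists; the hard part in practice is title normalization and fuzzy matching, which this toy version only gestures at.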
-
But I doubt that is actionable, at least not in this suit. Their current claim is about how the model uses the data it collected, and a fabricated article obviously demonstrates something it didn't collect. Not to mention they would also need to prove that what they publish is the standard in truth-telling, and that the fabrication therefore hurts them; the following example suggests otherwise. What the New York Times UFO Report Actually Reveals[^]
-
Lol, so true, so true.
-
I can see a slew of lawsuits on the horizon, but will you be able to trace who created the AI? Or did it create itself, and in that case??
As the aircraft designer said, "Simplicate and add lightness". PartsBin an Electronics Part Organizer - Release Version 1.3.0 JaxCoder.com Latest Article: SimpleWizardUpdate
-
Another example: FTC offers $25,000 prize for detecting AI-enabled voice cloning[^]
Could you not use AI to detect it? :)