New York Times sues Microsoft and OpenAI

The Lounge
Tags: html, com, game-dev, business
  • J Jo_vb net

    Could be a game changer: https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html[^]

Mark Starr
    #13

    Begs the question of whether they’re protecting their journalists’ work or their paywall.

    Time is the differentiation of eternity devised by man to measure the passage of human events. - Manly P. Hall Mark Just another cog in the wheel

    • J Jo_vb net

      Could be a game changer: https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html[^]

MikeCO10
      #14

Good for them; it's certainly an issue that needs to be addressed. In their terms, AI isn't intelligence, it's parroting back what 'it' reads: sometimes verbatim, sometimes glued together, and often mis-cited, appearing to come from sources that don't reflect the content.

In our industry, we can look at this from two different perspectives. One is "Houston, we have a problem building AI" and the other is "yeah, we need better IP protections". Creating intelligent content costs money, in some cases a lot of money. If AI is allowed to trample IP rights, what is the motivation to invest the time and resources to create that content? What happens if the Times and other media cease to exist because their ability to make money ends? AI can't replace them, and the information age will be permanently stuck in 2023 to some extent.

As an aside, in my opinion, the Fed needs to revisit the entire IP realm. We, as an industry, have been stuck between the lame copyright protection and the extreme bar of patent protection. The day may come when AI could get into recreating software on its own, possibly eliminating any IP protection. There are a lot of things that need to be sorted out.

      • M Mark Starr

        Begs the question of whether they’re protecting their journalists’ work or their paywall.

        Time is the differentiation of eternity devised by man to measure the passage of human events. - Manly P. Hall Mark Just another cog in the wheel

Jo_vb net
        #15

Probably both, I guess.

        • A Amarnath S

Can go one step further. All the words of the NYT articles are taken from a standard English dictionary, and the AI is just rearranging/reusing words from that dictionary into meaningful (sometimes meaningless?) sentences. So the publishers of that dictionary can indeed sue the AI, can't they?

obermd
          #16

          Maybe they should sue the NYT first for using their words.

          • J Jo_vb net

            Could be a game changer: https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html[^]

Jerry Manweiler
            #17

I've come up with a simple defense that the OpenAI team of lawyers can utilize and that no one can possibly defend against. If the President of Harvard can do it, then Chat-GPT can do it: if the President of Harvard is allowed to keep her job after so many clear instances of plagiarism because she is a "protected" class, then what is more of a minority than the very first instance of an AI, and shouldn't it also be a protected class that is allowed to break the law and all forms of ethics? :) :-D :-D :-D :)

            • J Jo_vb net

              Could be a game changer: https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html[^]

b4blue
              #18

              Wait till they figure out everything is a derivative and nothing is original.

              • G Gary Stachelski 2021

You are correct, the laws surrounding unjust or unlawful enrichment are tricky. The NYT will have to prove in court that the AI is not randomly piecing articles together or simply following a rule (like the standard "Who, What, When, Where, Why and How" of news article structure), but that the AI algorithm is using the stylistic pattern that was learned from the NYT articles. That pattern, when applied to "new" news articles, will allow the AI to impersonate the successful NYT style and unfairly compete with the NYT.

You are correct that there is nothing stopping you from studying the NYT article style and copying that style. But to compete with the NYT you would also need to raise money to start your own newspaper. You, as a person, will not be able to compete with a complete news organization. You would need to hire people, and in the end your organization would be similar but not identical to the NYT. However, an AI with proper hardware can replicate the work of hundreds of people. It can be identical because it is not creative. It is not sentient, it is not conscious. It is an algorithm.

The NYT is claiming that the news articles were not used for their intended purpose, which is to inform the public of events. Instead, they were used to train a machine to replicate the style that makes the NYT unique, and the result will be a machine that can unfairly compete with the NYT. For that valuable training, the NYT wants to be compensated or the material removed from the training dataset. It remains to be seen how this will play out in court.

jschell
                #19

                Gary Stachelski 2021 wrote:

                the laws surrounding unjust or unlawful enrichment are tricky.

A follow-up on an actual video (CNN?) suggested that the NYT provided an 'example': a post where a real person could not find anything, so they used an AI, which responded with the first three paragraphs of an existing article. Now one might say that is problematic. But any standard paywall is likely going to do something similar. The only alternative with a paywall is either to show only the headline or to provide a synopsis for every article. The user/reader, if they wanted to see the entire article, would still need to access the NYT. So at least with that example I am not convinced where the problem lies.

                Gary Stachelski 2021 wrote:

                nothing stopping you from studying the NYT article style

Nothing I have seen suggests that style has anything to do with it. In everything I have seen, the problem is the content.

                • J Jerry Manweiler

I've come up with a simple defense that the OpenAI team of lawyers can utilize and that no one can possibly defend against. If the President of Harvard can do it, then Chat-GPT can do it: if the President of Harvard is allowed to keep her job after so many clear instances of plagiarism because she is a "protected" class, then what is more of a minority than the very first instance of an AI, and shouldn't it also be a protected class that is allowed to break the law and all forms of ethics? :) :-D :-D :-D :)

charlieg
                  #20

                  upvoted.

                  Charlie Gilley “They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759 Has never been more appropriate.

                  • J jschell

                    Gary Stachelski 2021 wrote:

                    the laws surrounding unjust or unlawful enrichment are tricky.

A follow-up on an actual video (CNN?) suggested that the NYT provided an 'example': a post where a real person could not find anything, so they used an AI, which responded with the first three paragraphs of an existing article. Now one might say that is problematic. But any standard paywall is likely going to do something similar. The only alternative with a paywall is either to show only the headline or to provide a synopsis for every article. The user/reader, if they wanted to see the entire article, would still need to access the NYT. So at least with that example I am not convinced where the problem lies.

                    Gary Stachelski 2021 wrote:

                    nothing stopping you from studying the NYT article style

Nothing I have seen suggests that style has anything to do with it. In everything I have seen, the problem is the content.

Gary Stachelski 2021
                    #21

Here is an article that just came out that sheds more light on the NYT suit. One thing that I did not consider is that AI responses often hallucinate (fabricate) results. In some of the NYT examples, a GPT model completely fabricated an article that it claimed the NYT published on January 10, 2020, titled "Study Finds Possible Link between Orange Juice and Non-Hodgkin's Lymphoma". The NYT never published such an article. Other examples show a mix of fact and fabricated info. I had never thought about that aspect of AI responses. NY Times sues Open AI, Microsoft over copyright infringement | Ars Technica[^]

                    • G Gary Stachelski 2021

Here is an article that just came out that sheds more light on the NYT suit. One thing that I did not consider is that AI responses often hallucinate (fabricate) results. In some of the NYT examples, a GPT model completely fabricated an article that it claimed the NYT published on January 10, 2020, titled "Study Finds Possible Link between Orange Juice and Non-Hodgkin's Lymphoma". The NYT never published such an article. Other examples show a mix of fact and fabricated info. I had never thought about that aspect of AI responses. NY Times sues Open AI, Microsoft over copyright infringement | Ars Technica[^]

jschell
                      #22

But I doubt that is actionable, at least not in this suit. Their current claim is about how it is using the data it collected; obviously this demonstrates something it didn't collect. Not to mention they would also need to prove that what they publish is a standard of truth-telling and thus that this would hurt them. But the following example suggests otherwise: What the New York Times UFO Report Actually Reveals[^]

                      • J jschell

But I doubt that is actionable, at least not in this suit. Their current claim is about how it is using the data it collected; obviously this demonstrates something it didn't collect. Not to mention they would also need to prove that what they publish is a standard of truth-telling and thus that this would hurt them. But the following example suggests otherwise: What the New York Times UFO Report Actually Reveals[^]

Gary Stachelski 2021
                        #23

                        Lol, so true, so true.

• M Mike Hankey

I can see a slew of lawsuits on the horizon, but will you be able to trace who created the AI, or did it create itself, and in that case??

                          As the aircraft designer said, "Simplicate and add lightness". PartsBin an Electronics Part Organizer - Release Version 1.3.0 JaxCoder.com Latest Article: SimpleWizardUpdate

Jo_vb net
                          #24

Another example: FTC offers $25,000 prize for detecting AI-enabled voice cloning[^]

                          • J Jo_vb net

Another example: FTC offers $25,000 prize for detecting AI-enabled voice cloning[^]

Mike Hankey
                            #25

Could you not use AI to detect it? :)

                            As the aircraft designer said, "Simplicate and add lightness". PartsBin an Electronics Part Organizer - Release Version 1.3.0 JaxCoder.com Latest Article: SimpleWizardUpdate

• M Mike Hankey

Could you not use AI to detect it? :)

                              As the aircraft designer said, "Simplicate and add lightness". PartsBin an Electronics Part Organizer - Release Version 1.3.0 JaxCoder.com Latest Article: SimpleWizardUpdate

Jo_vb net
                              #26

                              Perhaps - but this could be a challenge :java:
