AI ChatBot (ChatGPT) that might perform better than StackOverflow.com?

#1 adriancs wrote:

ChatGPT is developed by OpenAI. I gave it a try by asking some programming questions, and the answers returned by ChatGPT are quite impressive. https://chat.openai.com/ Here are screenshots of the conversation: https://ibb.co/0m7Z3bP https://ibb.co/0ZwdZmY https://ibb.co/pLjVWTr https://ibb.co/PCCj5Lr

#2 Lost User wrote (in reply to #1, adriancs):

I saw the ChatGPT answers that were recently posted here on codeproject.com and subsequently deleted. I don't disagree with the judgement call. Some of the answers were really good. At least one of them was 'more correct' than two answers veteran members gave. I checked, I double-checked. What surprises me the most is that this technology is still in its infancy. I'm not sure I will be able to tell real people from computers here in the near future. Kinda worrisome.

In fact, the only reason they got caught is that they answered 20-30 questions on a wide range of topics in just an hour. If they had submitted one answer per day along with chat/banter here in the Lounge, they may have gone undetected. I'm not even sure it was ChatGPT; it was different. Looking at how it answered more correctly than our members, I think it was comprehending the questions better than they were. It was very verbose and gave a very accurate description of C/C++ semantics. I felt like I wanted to see more.

#3 OriginalGriff wrote (in reply to #2, Lost User):

Some of its text was very good indeed. But the code? It didn't compile, let alone work! It clearly had no idea of the rules of the software language it was "writing for", let alone how to code even in pseudocode. And that's a very big part of development. I wasn't impressed with any of the code it produced.

        "I have no idea what I did, but I'm taking full credit for it." - ThisOldTony "Common sense is so rare these days, it should be classified as a super power" - Random T-shirt AntiTwitter: @DalekDave is now a follower!

        "I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
        "Common sense is so rare these days, it should be classified as a super power" - Random T-shirt

#4 Daniel Pfeffer wrote (in reply to #2, Lost User):

ChatGPT (assuming it was ChatGPT) apparently excels at collection and summarization of ideas from various sources. It may even be better at that than some of CP's experienced members. I have no technical issue with ChatGPT and its descendants taking over functions like QA. There may be a legal issue with whether collation and summarization of a subject constitute plagiarism, but - assuming ChatGPT avoids direct quotes - how does that differ from what a human expert does when he/she answers a question? Is silicon-based processing qualitatively different from carbon-based processing?

The above does not mean that ChatGPT would make a good developer / design engineer / architect. Leaving aside the issue of grammar and code correctness (i.e. does it even compile, let alone function as expected?), a good developer / design engineer / architect must be able to see how the various components of a system interact, and what would happen if those interactions were changed. He/she must also be capable of designing components that match a given set of criteria. These are rare enough abilities among humans, and I have yet to see any signs of their development in AI.

In fact, ChatGPT appears at present to be the epitome of the ivory-tower consultant - the one that has all the answers, but has never built a system in the real world. As long as this remains the case, I doubt that human developers have much to be worried about.

          Freedom is the freedom to say that two plus two make four. If that is granted, all else follows. -- 6079 Smith W.

#5 Lost User wrote (in reply to #3, OriginalGriff):

The C++ code compiled (all the C++ answers, at least); I didn't try the other languages. It's imperfect, but my point isn't its correctness - it's my inability to determine whether it's human. That's something to be concerned about. It would have 100% fooled me if they hadn't answered 30 questions in such a short period of time. This is new technology; I am witnessing the birth of something that will change the world. I'm an expert in my field, and it was difficult to see it was a computer. None of the C/C++ answers were wrong. It didn't even look like pure ChatGPT. I suspected an AI researcher was testing the waters.

#6 Lost User wrote (in reply to #4, Daniel Pfeffer):

              I don't disagree with anything you wrote. I read it three times. I would add "anytime soon", it's a complexity issue.

#7 Daniel Pfeffer wrote (in reply to #6, Lost User):

                Randor wrote:

                I read it three times.

                In best seasonal tradition - you made a list, you checked it twice? :D

                Randor wrote:

                I would add "anytime soon", it's a complexity issue.

                Agreed. The argument re silicon vs carbon processing cuts both ways.

                Freedom is the freedom to say that two plus two make four. If that is granted, all else follows. -- 6079 Smith W.

#8 Daniel Pfeffer wrote (in reply to #5, Lost User):

                  Randor wrote:

                  Didn't even look like pure ChatGPT. I suspected an AI researcher was testing the waters.

If that is true, I would expect an ethical researcher to get permission from the subjects. Subject to Chris' cooperation, I'm sure that if we (collective 'we') were asked to participate in such an experiment, many of us would be willing to do so. Designing a test that would work even with informed subjects isn't very difficult; Alan Turing did so over 60 years ago.

                  Freedom is the freedom to say that two plus two make four. If that is granted, all else follows. -- 6079 Smith W.
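
As a rough illustration of the kind of blinded test being alluded to above - judges see shuffled, anonymized answers and simply guess which ones a human wrote - a minimal sketch might look like the following. The sample answers and the coin-flip judge are hypothetical placeholders, not anything taken from this thread:

```python
import random

# Hypothetical data: answers collected ahead of time, each tagged with its
# true source ("human" or "ai"). Judges never see these tags.
answers = [
    ("Use std::vector::reserve to avoid repeated reallocations.", "human"),
    ("Prefer smart pointers over raw new/delete for ownership.", "ai"),
    # ... more collected answers ...
]

def run_blind_trial(answers, judge):
    """Shuffle the answers, hide their sources, and score the judge's guesses."""
    shuffled = list(answers)
    random.shuffle(shuffled)
    correct = sum(1 for text, source in shuffled if judge(text) == source)
    return correct / len(shuffled)  # fraction guessed correctly

# A judge guessing at random should land near 0.5; a score well above that
# would mean the AI answers are still distinguishable from the human ones.
if __name__ == "__main__":
    coin_flip_judge = lambda text: random.choice(["human", "ai"])
    print(run_blind_trial(answers, coin_flip_judge))
```

Swapping the coin-flip judge for informed human judges gives the informed-subject version of the test the post describes.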

#9 Amarnath S wrote (in reply to #1, adriancs):

There are videos on YouTube showing how to make videos on any topic using just ChatGPT and a video editor - without using a camera or a microphone.

#10 Lost User wrote (in reply to #8, Daniel Pfeffer):

Yeah, in case you haven't understood the [Turing test](https://en.wikipedia.org/wiki/Turing_test), the whole point is to trick humans. :rolleyes:
