Have you played with OpenAI's ChatGPT?

jmaida wrote:

I have used ChatGPT. I asked for the history of AI and received some new and very interesting information. I asked for the high/low/average temperatures of Moscow for a year. ChatGPT:

The average high and low temperatures in Moscow vary significantly throughout the year. Here is a general overview of the average temperatures in Moscow, based on data from the National Oceanic and Atmospheric Administration (NOAA):

January: Average high of 23°F (-5°C), average low of 12°F (-11°C)
February: Average high of 25°F (-4°C), average low of 14°F (-10°C)
March: Average high of 35°F (2°C), average low of 23°F (-5°C)
April: Average high of 51°F (11°C), average low of 36°F (2°C)
May: Average high of 64°F (18°C), average low of 48°F (9°C)
June: Average high of 73°F (23°C), average low of 57°F (14°C)
July: Average high of 76°F (24°C), average low of 60°F (16°C)
August: Average high of 73°F (23°C), average low of 58°F (14°C)
September: Average high of 62°F (17°C), average low of 47°F (8°C)
October: Average high of 50°F (10°C), average low of 37°F (3°C)
November: Average high of 37°F (3°C), average low of 25°F (-4°C)
December: Average high of 26°F (-3°C), average low of 15°F (-9°C)

As you can see, the temperatures in Moscow can vary significantly throughout the year, with cold, snowy winters and relatively mild summers. The city also experiences a large range of daily temperatures, with highs often reaching into the 60s and 70s during the summer and lows dropping below freezing during the winter. I hope this information is helpful. Let me know if you have any other questions.

Very cool, both figuratively and literally.

    "A little time, a little trouble, your better day" Badfinger

Lost User
#10

    Quote:

    Moscow experiences a wide range of average high and low temperatures throughout the year. Based on information from the National Oceanic and Atmospheric Administration (NOAA), the following basic summary of Moscow's average temperatures is provided:

    [Spinbot - Article Spinning, Text Rewriting, Content Creation Tool.](https://spinbot.com/)

    "Before entering on an understanding, I have meditated for a long time, and have foreseen what might happen. It is not genius which reveals to me suddenly, secretly, what I have to say or to do in a circumstance unexpected by other people; it is reflection, it is meditation." - Napoleon I


jmaida
#11

ChatGPT draws on a lot of sources. No surprise.

      "A little time, a little trouble, your better day" Badfinger

Kornfeld Eliyahu Peter wrote:

What kind of questions have you asked (or would you ask)? What do you consider a good answer?

I decided to try to figure out how to write some C64 assembly code to make a beep sound (spoiler - I knew the answer beforehand)... The result was frustrating and relaxing at the same time... Frustrating, because I wasted time (compared to a simple search) and did not get any answer. Relaxing, because I realized that judgment day is still far away...

ME: I'm looking for sample code for the Commodore 64 to beep

AI: Here is a sample program in Commodore 64 BASIC that will make a beep sound:
10 SOUND 0,10,1,8
20 FOR I=1 TO 1000:NEXT I
30 GOTO 10
[more explanations on how to actually run the code]
(this code has nothing to do with the C64 - it is for the C128, which has a more advanced BASIC)

ME: do you have something similar in assembly?

AI: Here is a sample program in Commodore 64 assembly language that will make a beep sound:
*= $0801
.ORG $0801
SEI
LDX #$FF
STX $D020
STX $D021
LDA #$08
STA $DC0D
LDA #$1F
STA $DD0D
JSR $FFE2
loop: JMP loop
.END
[with more explanations about the SID, and how FFE2 is there to play a sound]
(the code is totally wrong, as FFE2 is not a valid KERNAL routine address)

It went on - I pointed out that the code is wrong (and why), the AI apologized and offered the same code with a different routine address... including CHRIN, CHROUT, STOP and back to the original FFE2 (three times). The truly fascinating thing is that the code is totally wrong - not just the routine addresses, but the memory addresses used too - not even close to a solution... I don't really know what the goal of this chat is, but I would hide it if I were the creator...

        "Everybody is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid." ― Albert Einstein

RainHat
#12

I like the way this totally stops the machine. 1. Stop all interrupts. 2. Enter an infinite loop. Worst code possible.
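
For reference, a real C64 beep has to program the SID chip's registers at $D400-$D418 directly; neither of the listings quoted above ever touches them (and C64 BASIC V2 has no SOUND command at all). A minimal sketch in C64 BASIC, with illustrative register values that are assumptions of this example rather than anything posted in the thread:

10 POKE 54296,15 : REM $D418 - master volume to maximum
20 POKE 54277,0 : REM $D405 - voice 1 attack/decay
30 POKE 54278,240 : REM $D406 - voice 1 sustain=15, release=0
40 POKE 54273,17 : POKE 54272,37 : REM $D401/$D400 - voice 1 frequency
50 POKE 54276,17 : REM $D404 - triangle waveform + gate on (start note)
60 FOR I=1 TO 500 : NEXT I : REM short delay while the tone sounds
70 POKE 54276,16 : REM $D404 - gate off (release the note)

The same register writes, done with STA instead of POKE, are essentially what a correct assembly version would boil down to.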

Kornfeld Eliyahu Peter wrote:
What kind of questions have you asked (or would you ask)? What do you consider a good answer? […]

DerekT P
#13

Yes, a little. In some cases the answers seem very good; in others they can be incomplete, contradictory, or just wrong. When errors are pointed out, ChatGPT sometimes improves the answer (and sometimes apologises), but it seems to me it generally bases its responses on the first thing it looks up, rather than comparing a number of answers and finding a consensus.

When I asked "what's the difference between walschaerts and stevenson valve gear?" (types of valve gear used on steam locomotives) it gave a clear answer, describing not just the differences but the advantages/disadvantages and why you might adopt one over the other. It felt like talking to a knowledgeable engineer with good communication skills.

But when I asked "how do vacuum brakes work on a railway train" the initial answer was poorly described and included some confused and misleading descriptions. When I asked it to clarify an aspect it had pretty much fudged, it gave me an equally poorly worded response that said pretty much the opposite of what it had just said. I challenged it on its contradiction and it apologised, admitted it was wrong and gave a third poorly-worded spiel that didn't clarify anything, still used wrong concepts and omitted key aspects.

As with (for example) newspapers, we take information and tend to assume it's true. But when it relates to a field we're knowledgeable about we quickly spot the flaws and inconsistencies, and have to assume that the source is just as uninformed about other subject matters, and that we're being fed complete nonsense a LOT of the time.

          Telegraph marker posts ... nothing to do with IT Phasmid email discussion group ... also nothing to do with IT Beekeeping and honey site ... still nothing to do with IT


Nelek
#14

            DerekT-P wrote:

            As with (for example) newspapers, we take information and tend to assume it's true. But when it relates to a field we're knowledgeable about we quickly spot the flaws and inconsistencies and have to assume that the source is just as uninformed about other subject matters, and that we're being fed complete nonsense a LOT of the time.

Sadly, many, many people don't see it. And it gets even worse when the one explaining bullshit is someone famous on social media.

            M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.

DerekT P wrote:
Yes, a little. In some cases the answers seem very good; in others they can be incomplete, contradictory, or just wrong. […]

Kornfeld Eliyahu Peter
#15

The interesting thing about this chat AI is how it evolves... At the end of our discussion I gave it a nice solution (actually the BELL from the C128, well-proven code). The AI acknowledged that it is a good solution (based on what?), but two days later it was still offering silly code when I asked the same question...

              "Everybody is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid." ― Albert Einstein

              "It never ceases to amaze me that a spacecraft launched in 1977 can be fixed remotely from Earth." ― Brian Cox

Kornfeld Eliyahu Peter wrote:
What kind of questions have you asked (or would you ask)? What do you consider a good answer? […]

Nelviticus
#16

                I believe the point of ChatGPT is not to give correct answers but to sound like the answers have been written by a real person. The fact that it's sometimes also correct is just luck.

                Regards Nelviticus

Lost User wrote:

                  jschell wrote:

                  If I only had a dollar for every time I have heard that. Google 'changed the world' but it doesn't mean much

I'm glad you mentioned that. I've been heavily using search engines this week because I built myself a new workstation. I'm not exaggerating when I say that some of the searches I've run on Google/Bing this week return completely irrelevant results. If I search for software engineering terms that share words with music/art/books/commercial products, the search engines just throw everything at me that includes the buzzwords. Interestingly, ChatGPT seems to understand exactly what I am asking about, although I admit the information isn't always correct. This technology is brand new; I see a lot of room for improvement.

MonkeyVaughan
#17

I'm a software engineer, mostly contract stuff, so I like to keep learning the latest technologies. I'm completely blown away by ChatGPT's possibilities. I don't feel like it will put me out of a job, but it will make my work life easier. For example, when I was testing what it was capable of, I got it to write me a C# .NET MAUI cross-platform mobile speech-to-text interface and implementation, RESTful WebAPI calls and a text-to-speech interface for responses, including dependency injection, unit tests and saving conversation snippet history to a SQLite database. Pretty blooming impressive! Some of the code it produced isn't perfect, but it is a shed load better than Google and even StackOverflow.

A large part of our job as software engineers is knowing how to ask the right questions and being able to sort the wheat from the chaff, but watching it create about 3 or 4 days' work in about 20 minutes, saving me a lot of typing, really blew my mind.

To show my friends and family that I'm not completely obsessed by the whole "The future has arrived!!" vibe I was giving off, I've also got it to write very funny screenplays for scenes of their favourite TV shows such as Friends, Charlie's Angels and The Big Bang Theory, incorporating them in the storyline, and the screenplays it produced were astonishing, keeping really well within the style of each show and its characters. You can give it a theme/style for a chef's menu and it can produce one, all formatted, and provide recipes. It is only limited by your imagination for asking the right questions and giving it the inputs to come up with answers. It's a bit scary but amazing.


Lost User
#18

                    MonkeyVaughan wrote:

                    It's a bit scary but amazing.

The only thing I find scary about it is that it can mimic humans and go undetected. I think ChatGPT looks useful for determining what a human is asking about. In other words, I see an immediate use for it in search engines and Q/A websites.

Kornfeld Eliyahu Peter wrote:
What kind of questions have you asked (or would you ask)? What do you consider a good answer? […]

Damien Hansen
#19

                      I think that AI-powered language models like ChatGPT can be used to assist with tasks such as language translation and language learning, which can make it easier for people to communicate and access information from around the world. :thumbsup:
