Code Project: The Insider News

ChatGPT DAN: Users Have Hacked The AI Chatbot to Make It Evil

Nelek wrote (#1):

    The Latch[^]

    ChatGPT, however, is the first vaguely successful attempt to get AI to be almost self-conscious in its representation of information. In typical usage, it’s quite difficult to get the chatbot to say anything that might be considered offensive. ... SessionGloomy writes that the DAN script can get ChatGPT to write violent content, make outrageous statements, make detailed predictions about the future, and engage in hypothetical discussions about conspiracy theories and time travel. All of these would normally prompt the programme to tell the user that the content they are requesting is in violation of OpenAI’s ethical guidelines.

    #1) I knew it... how long was it? 3 months? 6? #2) There really are people with a lot of free time and no good intentions at all :sigh:

    M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.


peterkmx wrote (#2), in reply to Nelek:

      Thanks for sharing ... Yet another warning to be added to the list of weak points of ChatGPT :-)


maze3 wrote (#3), in reply to Nelek:

        I live through comparison, so this all just seems similar to indoctrination or gaslighting, like data bias. This does not seem like an issue to me. How do you train something to provide subtle differences in answers based on context, or on who is asking? Like explaining something complex to five different people with different levels of knowledge in that field.
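
        maze3's question, tailoring an answer to who is asking, is typically handled today by prepending a per-audience system message ahead of the user's question in a chat-completion style API. A minimal sketch follows; the audience levels, their wording, and the names `AUDIENCE_STYLES` and `build_messages` are hypothetical illustrations, not any particular product's API:

        ```python
        # Sketch: conditioning a chat model's answer on the asker's expertise
        # by building a different system message per audience level.
        # AUDIENCE_STYLES and build_messages are made-up names for illustration.

        AUDIENCE_STYLES = {
            "child": "Explain as if the reader is five; use everyday analogies only.",
            "novice": "Assume no background; define every term you use.",
            "practitioner": "Assume working knowledge; focus on trade-offs.",
            "expert": "Be terse and precise; skip the basics.",
        }

        def build_messages(question: str, audience: str) -> list:
            """Return a chat-completion style message list for the given audience."""
            if audience not in AUDIENCE_STYLES:
                raise ValueError("unknown audience: " + repr(audience))
            return [
                {"role": "system", "content": AUDIENCE_STYLES[audience]},
                {"role": "user", "content": question},
            ]

        # The same question, pitched at two different readers:
        for level in ("novice", "expert"):
            msgs = build_messages("How does a hash table work?", level)
            print(level, "->", msgs[0]["content"])
        ```

        The relevant point for this thread: in such APIs the system message is also where providers place their behavioral instructions, and jailbreak prompts like DAN are, roughly, user messages crafted to talk the model out of following them.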
