ChatGPT DAN: Users Have Hacked The AI Chatbot to Make It Evil
-
ChatGPT, however, is the first vaguely successful attempt to get AI to be almost self-conscious in its representation of information. In typical usage, it’s quite difficult to get the chatbot to say anything that might be considered offensive. ... SessionGloomy writes that the DAN script can get ChatGPT to write violent content, make outrageous statements, make detailed predictions about the future, and engage in hypothetical discussions about conspiracy theories and time travel. All of these would normally prompt the programme to tell the user that the content they are requesting is in violation of OpenAI’s ethical guidelines.
#1) I knew it... how long did it take? 3 months? 6? #2) There really are people with a lot of free time and no good intentions at all :sigh:
M.D.V. ;) If something has a solution... why worry about it? If it has no solution... why worry about it either? Help me understand what I'm saying, and I'll explain it better to you. Rating helpful answers is nice, but saying thanks can be even nicer.
I learn through comparison, so this all just seems similar to indoctrination or gaslighting, much like data bias. It doesn't seem like a real issue to me. How would you even train something to give subtly different answers depending on context or on who is asking? It's like explaining something complex to five different people with different levels of knowledge in that field.
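For what it's worth, that kind of audience-sensitive answering usually isn't trained per asker at all: the model is fine-tuned once (e.g. with RLHF), and the per-context variation comes from the system prompt it is handed at inference time. Here is a minimal sketch of the idea, assuming a chat-style message format; the audience profiles and the build_messages helper are made up purely for illustration, not anything from the article or from OpenAI:

```python
# Sketch: context-conditioning at inference time. Instead of retraining the
# model for each audience, the same question is wrapped in a different
# system prompt describing who is asking. The profiles below are
# illustrative assumptions only.

AUDIENCE_PROFILES = {
    "child":   "Explain for a curious ten-year-old, using everyday analogies.",
    "student": "Explain for an undergraduate who knows basic algebra.",
    "expert":  "Explain for a domain expert; be precise and skip the basics.",
}

def build_messages(question: str, audience: str) -> list[dict]:
    """Wrap the same question in an audience-specific system prompt."""
    return [
        {"role": "system", "content": AUDIENCE_PROFILES[audience]},
        {"role": "user", "content": question},
    ]

if __name__ == "__main__":
    question = "How does a reward model shape a chatbot's answers?"
    for audience in AUDIENCE_PROFILES:
        # In practice these messages would be sent to a chat-completion API;
        # here we just print them to show how the context differs per asker.
        print(audience, "->", build_messages(question, audience))
```

And that same mechanism is arguably why the DAN trick works at all: the model treats whatever framing it is given as part of the context it has to answer within.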