Should we/I worry about AI "taking over the world"?
-
I would not. Here is an AI "corollary" to some of the "answers" I have received over the years. The AI answer below is missing the usual "why would you want to do that?"... and besides, what does AI know about "GIGO"?
It’s difficult to say what’s wrong with the code without more context. Can you provide more information about what the code is supposed to do and what isn’t working as expected? Also, is this the entire code or just a part of it?
They can't do a worse job than humans have been doing for the last 2000 years. :laugh: :laugh:
-
I'm not afraid of them taking over the world; I'm afraid that they'll turn out like the ones in 'The Hitchhiker's Guide to the Galaxy' and be insistently and annoyingly 'helpful', whether you like it or not.
Share And Enjoy.mp3
-
Neither group is entirely right but Group 1 is nearest the mark. My prediction: Within 50 years (probably a bit less) a LARGE percentage of today's software development jobs will be gone. A few "wizards" will remain to mind the AI but the mundane stuff that the vast majority of today's devs do will be gone. That being said - project managers and sales/marketing drones will also be gone. The CEOs will deal directly with the wizards.
-
Since that's the gist of about 80% of the responses to questions in QA, I'm not sure you're making the point you think you are.
Keep Calm and Carry On
Or it emphasizes his point -- there is zero "intelligence" in AI, it's just a monkey learning colors and passing matching blocks. When I execute a search, I must review the answers, as some are obviously wrong (for my need), so I have to exert myself to locate the most correct answer(s) I can. The biggest danger of so-called AI is that people will begin to trust it, when it's just a monkey doing color matching. "Training" reduces the number of obviously wrong answers, and possibly increases the number of "may be correct" answers. Intelligence and experience are required to determine validity, and AI has neither.
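A minimal sketch of that "color matching" point (Python, with invented data, purely illustrative): a nearest-neighbor matcher always returns its best match, even for nonsense input, because it has no concept of validity.

    # Toy "color-matching monkey": nearest-neighbor lookup over a few
    # hand-labeled RGB samples (hypothetical data, for illustration only).
    TRAINING = [
        ((255, 0, 0), "red"),
        ((0, 255, 0), "green"),
        ((0, 0, 255), "blue"),
    ]

    def match_color(rgb):
        # Return the label of the nearest sample by squared distance.
        # Note there is no "I don't know": every input gets an answer.
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(TRAINING, key=lambda t: dist(t[0], rgb))[1]

    print(match_color((250, 10, 5)))   # "red" -- looks sensible
    print(match_color((-999, 0, 7)))   # "blue" -- garbage in, confident answer out

More "training" samples would reduce the obviously wrong matches, but the matcher still cannot tell a valid question from an invalid one; that judgment stays with the human.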
-
I read somewhere from a time traveler that an AI would be running the world at some point. People have found that an AI does a much better and fairer job than politicians.
-
Should anybody worry about AI "taking over the world" in the next few years? No. Should anybody be concerned about the future of their jobs specifically and society generally as AI becomes more robust in the future? Yes. Not sure why so many struggle with this... like everything else in technology - AI will continue to get more functional with every advancement.
fgs1963 wrote:
Should anybody be concerned about the future of their jobs specifically and society generally as AI becomes more robust in the future? Yes.
Future being what exactly? Development of "AI" started in the 1950s. Myself, I am still waiting for that self-driving car, and for the endless parade of software that claims it will make developers obsolete, to become more "robust".
-
I wonder if there's a stupid gene or does it take practice? :)
PartsBin an Electronics Part Organizer - An updated version available! JaxCoder.com Latest Article: ARM Tutorial Part 2 Timers
-
The risk is not AI taking over the world. The risk is people actively trying to use it to assert dominance. As part of the Russian-Chinese trade agreement, for example, Russia proposes 'their' cutting-edge AI research is made available to China. Thing is, Russia doesn't do a whole lot of original research in the area, but they do actively encourage certain groups loosely affiliated with them to experiment and innovate in that area, with no oversight.

So basically, it matters that we act responsibly, but more importantly, we need to prepare and act proactively to detect and counter active threats. If one group creates a malicious AI, you basically need a more advanced and specialized AI system to counter that threat. GPT suggests, beyond educating people to be responsible (not seeing that happen soon), increasing the pace of research on early-detection and monitoring systems. Since we build GPT explicitly to help us and foster good relationships, it will do just that.

Let's not be blind toward projects that are run with the sole purpose of creating threats; let's actively prepare for them and think about how to counter them. We still have the luxury of time at this point. Let's not waste it, and keep ahead of the curve.
Kate-X257 wrote:
we need to prepare and act proactively to detect and counter active threats. If one group creates a malicious AI, you basically need a more advanced and specialized AI system to counter that threat.
So we should focus on that one AI versus the potential millions of people around the world that continuously seek active harm through technology?
-
To be fair there is stupidity, ignorance and foolishness. And then those are impacted by other factors like arrogance, elitism, laziness, etc.
Since I posted the message I started reading "How the Mind Works" by Pinker, and it goes into a lot of the reasons why people act the way they do. Interesting read.
PartsBin an Electronics Part Organizer - An updated version available! JaxCoder.com Latest Article: ARM Tutorial Part 2 Timers
-
jschell wrote:
Future being what exactly?
Sometime after now. :rolleyes:
jschell wrote:
Development of "AI" started in the 1950s.
If you haven't noticed, human technological advancement (a) sometimes happens in leaps and (b) is escalating at a geometric rate, not a linear one. In the 1950s there was severely limited hardware* and only a handful of developers working on AI. Now we have exaflop supercomputers and thousands of very well financed developers working on AI. If you don't see AI becoming radically better over the short term (i.e. the next few decades) then you might want to open your eyes.

*The CDC 1604 (1960), designed by Seymour Cray, was the fastest supercomputer ever made at the time. It was a 48-bit machine with 192 kB of memory, operating at 0.1 MIPS.
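A back-of-the-envelope comparison of that hardware gap (treating MIPS and FLOPS as loosely comparable, which they are not, so take this as an order-of-magnitude sketch only):

    cdc1604_ops_per_sec = 0.1e6   # ~0.1 MIPS: CDC 1604, 1960
    exaflop_ops_per_sec = 1e18    # ~1 exaFLOP: a 2020s supercomputer
    print(exaflop_ops_per_sec / cdc1604_ops_per_sec)   # 1e13

Thirteen orders of magnitude in roughly sixty years: that is the "geometric, not linear" escalation in a single number.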
-
fgs1963 wrote:
My prediction: Within 50 years (probably a bit less) a LARGE percentage of today's software development jobs will be gone
Will that be before or after self driving cars actually work?
-
Self driving cars will work when they don't have to deal with human drivers on the roads.
Outside of a dog, a book is a man's best friend; inside of a dog, it's too dark to read. -- Groucho Marx
-
Bruce Patin wrote:
I read somewhere from a time traveler
Did the time traveler provide any information about future stock market trends?
Yes. Invest in time travelling companies! ;)
-
Just bury your head a little deeper. I'm sure everything will work out just fine for you. :rolleyes:
Yep. Just like flying cars. And autonomous robots (Asimov, not Roomba). And Mars colonies. And faster-than-light drives. And planetary-alignment catastrophes. And the year 2000 meltdown. Etc... Even the prediction that Stephen Hawking would not live to see 25. So very, very many predictions about the future, and so very, very few that are even close, and usually only then by stretching to find a correlation.
-
Says a guy on a World Wide Web that barely existed 30 years ago (the same web that 5 billion other people will use this year). The same guy who probably has a smartphone with 5000x more computing power than the fastest supercomputers of the 1980s, in 1/5000th the space. Most 12-year-old kids have the same smartphone... Never mind other AMAZING things our parents never dreamed of as kids: flash drives, SSDs, fiber optics, the Human Genome Project, graphene, WiFi & Bluetooth, the Large Hadron Collider, AbioCor artificial hearts, artificial joints, stem cells, gene editing (CRISPR), laser/robotic surgery, GPS, MRIs, facial recognition, cheap drones, 3D printing, etc...
-
Of course, there'll never be such a thing as True AI (you can't design something you can't describe: now describe yourself -- what are you? Now design it!). The more worrying aspect is that at some point, some idiot in a position of power far too high for his ability will use something like ChatGPT to write a critical system for - say - a nuclear power station, believing that the AI is real, and that it knows what it's doing...