Should we / I worry about AI "taking over the world"?
-
Should anybody worry about AI "taking over the world" in the next few years? No. Should anybody be concerned about the future of their jobs specifically, and society generally, as AI becomes more robust? Yes. Not sure why so many struggle with this... like everything else in technology, AI will continue to get more functional with every advancement.
fgs1963 wrote:
Should anybody be concerned about the future of their jobs specifically and society generally as AI becomes more robust in the future? Yes.
Future being what exactly? Development of "AI" started in the 1950s. Myself, I am still waiting for that self-driving car, and for the endless parade of software that claims it will make developers obsolete to become more "robust".
-
I wonder if there's a stupid gene or does it take practice? :)
PartsBin an Electronics Part Organizer - An updated version available! JaxCoder.com Latest Article: ARM Tutorial Part 2 Timers
-
Neither group is entirely right but Group 1 is nearest the mark. My prediction: Within 50 years (probably a bit less) a LARGE percentage of today's software development jobs will be gone. A few "wizards" will remain to mind the AI but the mundane stuff that the vast majority of today's devs do will be gone. That being said - project managers and sales/marketing drones will also be gone. The CEOs will deal directly with the wizards.
-
The risk is not AI taking over the world. The risk is people actively trying to use it to assert dominance. As part of the Russian-Chinese trade agreement, for example, Russia proposes that 'their' cutting-edge AI research be made available to China. Thing is, Russia doesn't do a whole lot of original research in the area, but they do actively encourage certain loosely-affiliated groups to experiment and innovate there, with no oversight.

So basically, it matters that we act responsibly, but more importantly, we need to prepare and act proactively to detect and counter active threats. If one group creates a malicious AI, you basically need a more advanced and specialized AI system to counter that threat. GPT suggests, beyond educating people to be responsible (not seeing that happen soon), increasing the pace of research into early-detection and monitoring systems. Since we build GPT explicitly to help us and foster good relationships, it will do just that.

Let's not be blind toward projects that are run with the sole purpose of creating threats; let's actively prepare for them and think about how to counter them. We still have the luxury of time at this point. Let's not waste it, and keep ahead of the curve.
Kate-X257 wrote:
we need to prepare and act pro-actively to detect and actively counter active threats. If one group creates a malicious AI, you basically need a more advanced and specialized AI system to actively counter that threat.
So we should focus on that one AI versus the potential millions of people around the world that continuously seek active harm through technology?
-
They can't do a worse job than humans have been doing for the last 2000 years. :laugh: :laugh:
-
To be fair there is stupidity, ignorance and foolishness. And then those are impacted by other factors like arrogance, elitism, laziness, etc.
Since I posted the message I started reading "How the Mind Works" by Pinker, and it goes into a lot of the reasons why people act the way they do. Interesting read.
-
I read somewhere from a time traveler that an AI would be running the world at some point. People have found that an AI does a much better and fairer job than politicians.
-
jschell wrote:
Future being what exactly?
Sometime after now. :rolleyes:
jschell wrote:
Development of "AI" started in the 1950s.
If you haven't noticed, human technological advancement (a) sometimes happens in leaps and (b) is escalating at a geometric rate, not a linear one. In the 1950s there was severely limited hardware* and only a handful of developers working on AI. Now we have exaflop supercomputers and thousands of very well financed developers working on AI. If you don't see AI becoming radically better over the short term (i.e. the next few decades), then you might want to open your eyes. *The CDC 1604, designed by Seymour Cray and delivered in 1960, was the fastest computer of its time: 48-bit words, 192 kB of memory, roughly 0.1 MIPS.
-
fgs1963 wrote:
My prediction: Within 50 years (probably a bit less) a LARGE percentage of today's software development jobs will be gone
Will that be before or after self driving cars actually work?
-
Self driving cars will work when they don't have to deal with human drivers on the roads.
Outside of a dog, a book is a man's best friend; inside of a dog, it's too dark to read. -- Groucho Marx
-
Bruce Patin wrote:
I read somewhere from a time traveler
Did the time traveler provide any information about future stock market trends?
Yes. Invest in time travelling companies! ;)
-
Self driving cars will work when they don't have to deal with human drivers on the roads.
-
Just bury your head a little deeper. I'm sure everything will work out just fine for you. :rolleyes:
Yep. Just like flying cars. And autonomous robots (Asimov, not Roomba). And Mars colonies. And faster-than-light drives. And planetary-alignment catastrophes. And the year 2000 meltdown. Etc... Even the prediction that Stephen Hawking would not live to see 25. So very, very many predictions about the future, and so very, very few that are even close, and usually only then by stretching to find a correlation.
-
Says a guy on the World Wide Web (the same web that 5 billion other people will use this year) that barely existed 30 years ago. The same guy that probably has a smartphone with 5000x more computing power than the fastest supercomputers of the 1980s, in 1/5000th the space. Most 12-year-old kids have the same smartphone... Never mind other AMAZING things our parents never dreamed of as kids: flash drives, SSDs, fiber optics, the Human Genome Project, graphene, WiFi & Bluetooth, the Large Hadron Collider, AbioCor artificial hearts, artificial joints, stem cells, gene editing (CRISPR), laser/robotic surgery, GPS, MRIs, facial recognition, cheap drones, 3D printing, etc...
-
I would not. Here is an AI "corollary" to some of the "answers" I have received over the years... The "AI answer" is missing "why would you want to do that?"... And besides, what does AI know about "GIGO"?
It’s difficult to say what’s wrong with the code without more context. Can you provide more information about what the code is supposed to do and what isn’t working as expected? Also, is this the entire code or just a part of it?
Of course, there'll never be such a thing as True AI (you can't design something you can't describe: now describe yourself -- what are you? Now design it!). The more worrying aspect is that at some point, some idiot in a position of power far too high for his ability will use something like ChatGPT to write a critical system for - say - a nuclear power station, believing that the AI is real, and that it knows what it's doing...
-
Self driving cars will work when they don't have to deal with human drivers on the roads.
Self-driving cars will last exactly as long as it takes the Great Unwashed to realize that they're buying machines which can choose to kill them based on an algorithm!
-
Yeah, they built Stonehenge, the pyramids, and Teotihuacán.
-
Or it emphasizes his point: there is zero "intelligence" in AI; it's just a monkey learning colors and passing matching blocks. When I execute a search, I must review the answers, as some are obviously wrong (for my need), so I have to exert myself to locate the most correct answer(s) I can. The biggest danger of so-called AI is that people will begin to trust it, when it's just a monkey doing color matching. "Training" reduces the number of obviously wrong answers, and possibly increases the number of "may be correct" answers. Intelligence and experience are required to determine validity, and AI has neither.
There is a tremendous amount of intelligence in a monkey learning colors and passing matching blocks. It's not enough intelligence to write an epic poem or wonder what's beyond the horizon perhaps, but might be enough to take over the world if left unsupervised around just the right kind of blocks. An AGI that gets the answer wrong sometimes could still be as successful as, say, a politician or CEO. I think there is far more to be concerned about than you may realize. Of course, YMMV.
-
fgs1963 wrote:
Says a guy on the world wide web
One success does not mean that every other prediction is also a success. Cars did not exist two hundred years ago. But flying cars (the ones, and the usage, actually predicted) do not exist and never will.
fgs1963 wrote:
Never mind other AMAZING things our parents never dreamed of as kids
I gave you a long list of other predictions that do not, and are unlikely to ever, exist. Betting on one stock which makes one a millionaire does not mean that betting on all stocks will make everyone a millionaire. That is why people do those seminars teaching others how to invest: because getting paid for the seminars, rather than the investing, is what does make one wealthy.
-
fgs1963 wrote:
sometimes happens in leaps and
No, I haven't noticed that. Actually, since I am more aware of it now, I have come to realize that it never happens that way. It just seems like that because people are not looking at the full process. Technology is not built on the shoulders of giants. It is built on the shoulders of very normal-sized people who each incrementally improved perhaps just one thing. Moreover, technology is never what drives that; businesses do. Ford (the person) did not invent anything. He put together a bunch of bits and pieces (some at least decades old) and then publicized it for very specific business reasons. Same for Edison.

Cell phones have a very specific progression from the utility of satellite phones. Computers have a very smooth transition from larger computers, driven by the very real need and demand from businesses as they saw the benefits. Applying technology to agriculture has been going on for as long as agriculture has existed. The Wright brothers were absolutely not even close to being the first ones to 'fly'. They were not even the first ones to put a human on a motor-propelled flying machine (that happened in France). For close to one hundred years, though, that was how it was depicted. Look even at the following, which specifically mentions that they used a wind tunnel, when in fact that was invented by someone else: 1903 Wright Flyer | National Air and Space Museum[^]

Cell phones have had so much impact not because of the technology but rather because it is just so cheap to put up the towers. Businesses worldwide could see the need and desire to communicate, and providing that was so cheap and profitable that they did it. The technology allows it, but it does not do it. If it were just a matter of technology, then the first Mars-Earth war would have already happened.
fgs1963 wrote:
In the 1950's there was severely limited hardware*
And yet there were significant investments made in that hardware just so AI research could go on. It wasn't faltering due to a lack of hardware.