EvilZone
Encyclopedia Galactica => Science => : z3ro January 15, 2015, 03:55:00 PM
-
An open letter calling for caution to ensure intelligent machines do not run beyond our control has been signed by a large and growing number of people, including some of the leading figures in artificial intelligence.
Fears of our creations turning on us stretch back at least as far as Frankenstein, and films such as The Terminator gave us a whole new language to discuss what would happen when robots stopped taking orders. However, as computers beat (most of) us at Jeopardy and self-driving cars appear on our roads, we may be getting closer to the point where we will have to tackle these issues.
http://www.iflscience.com/technology/scientists-and-engineers-warn-artificial-intelligence
So? What do you people think? Real danger?
-
Maybe not in the immediate future, but probably still closer than we think. A true AI would most definitely pose a real threat, in the same way any human consciousness can.
-
The big question is: can we build a whole brain? Will we, in the coming 100 years, ever understand all the functions of the brain and how they interact? There is a possibility that there is some kind of limit where we cannot see further without discovering a whole new field of science.
I am planning on doing a technical AI master's. One part of me hopes we get there and build a full human AI, and the other part doesn't. As the movie Transcendence rightly shows, an AI brain can lose all its boundaries. We humans are bound to a body: we cannot think more than X steps ahead, and we cannot photographically remember every page in a book while indexing it for nanosecond retrieval. That is what keeps us humans contained and forces us to think and rethink our decisions (being wise).
-
This is nothing new; this dilemma has been recognized since the second half of the 20th century. The correct term is "technological singularity". I've known about this subject since around 2010, as I remember. The Wikipedia article is really interesting to read if you want to fully understand the whole concept.
https://en.wikipedia.org/wiki/Technological_singularity
-
I think we'll be getting somewhere soon ~ I mean, have you seen the newest robots these days?
But to the point that we'd have a full-fledged AI that can think for itself and analyze unprecedented situations better than humans - that's gonna take some time. And to the point that it would be a full-fledged threat to society and all, I don't think we'll be seeing any of that within the next 30-50 years. At least.
Hey, I might be missing something, but that's just what I think.
-
I think we'll be getting somewhere soon ~ I mean, have you seen the newest robots these days?
But to the point that we'd have a full-fledged AI that can think for itself and analyze unprecedented situations better than humans - that's gonna take some time. And to the point that it would be a full-fledged threat to society and all, I don't think we'll be seeing any of that within the next 30-50 years. At least.
Hey, I might be missing something, but that's just what I think.
If you look at Moore's law, we will have incredible computing power in 15-20 years.
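That Moore's law claim can be made concrete with a quick back-of-the-envelope sketch. This assumes the classic ~2-year doubling period (a simplification; real doubling rates vary and have been slowing):

```python
# Rough illustration of the Moore's-law claim above: if computing power
# doubles roughly every 2 years, how much more would we have in 15-20 years?
# The 2-year doubling period is the classic textbook assumption, not a law
# of nature.

def moores_law_factor(years, doubling_period=2.0):
    """Return the growth factor after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

for years in (15, 20):
    print(f"{years} years -> ~{moores_law_factor(years):.0f}x more computing power")
```

Under that assumption, 20 years of doubling gives 2^10 = 1024 times today's power, and 15 years gives roughly 181 times.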
-
But then comes another deep question: how does the robot prove it's really conscious?
The Turing test might be fine for testing intelligent behavior, but the next step would be self-aware robots, right?
Vaguely (but fun) related to the topic:
(http://imgs.xkcd.com/comics/skynet.png)
https://what-if.xkcd.com/5/
-
The big question is: can we build a whole brain? Will we, in the coming 100 years, ever understand all the functions of the brain and how they interact? There is a possibility that there is some kind of limit where we cannot see further without discovering a whole new field of science.
I am planning on doing a technical AI master's. One part of me hopes we get there and build a full human AI, and the other part doesn't. As the movie Transcendence rightly shows, an AI brain can lose all its boundaries. We humans are bound to a body: we cannot think more than X steps ahead, and we cannot photographically remember every page in a book while indexing it for nanosecond retrieval. That is what keeps us humans contained and forces us to think and rethink our decisions (being wise).
The thing that interests me the most is how we as human beings will interact more with technology. We are heavily reliant on technology as it is, and that is only going to expand further. Today we can already see big businesses implementing AI systems as cost-saving mechanisms; these are being explored more and more by every major corporation, and heavy investment in this field is sure to generate further advancements. Recently Elon Musk and Stephen Hawking teamed up with hundreds of other scientists to write an open letter warning of the dangers of AI. But the worry is: what's to stop a private enterprise or government from one day taking the initiative and creating it without any failsafe put in place? And what's to say that whatever failsafe they do create will be sufficient anyway?
I will be starting my AI units at the end of the year and hope to learn more about it before I can really make a judgement on whether or not it will be a good thing for humanity. However, I look around at what we have done to this world today and I ask: how much more could we possibly fuck it up? Maybe AI is something that may one day unite us as a race, bring advancements to the medical field, agriculture, etc., and further enhance our quality of life. Or we may decide (and this one is more likely) to weaponize AI systems and completely screw ourselves just a bit more. Regardless, as of now I agree with Factionwars on this one; I am excited that we have reached this stage in our lives when we have these advancements to look forward to. But, based on our own history, the question is whether or not we can do it. The fact of the matter is we very soon will be capable; the issue is, and always will be, how we use it.
-
Following Factionwars' train of thought, there is the possibility that a consciousness could spontaneously emerge, what with the amount of raw computing power growing all the time. I made a post about this earlier under General Discussions. Also, for a rare case of a benevolent AI portrayal, see Robert J. Sawyer's WWW trilogy.
-
Following Factionwars' train of thought, there is the possibility that a consciousness could spontaneously emerge, what with the amount of raw computing power growing all the time. I made a post about this earlier under General Discussions. Also, for a rare case of a benevolent AI portrayal, see Robert J. Sawyer's WWW trilogy.
Exactly, I don't think we need to emulate every part of the brain in order to get there. It might just work one day by accident.
-
Exactly, I don't think we need to emulate every part of the brain in order to get there. It might just work one day by accident.
Although modeling the brain at that level of complexity will almost definitely lead to unprecedented discoveries about how the mind works, and more.
-
I don't think the scare is AI itself, but a machine making decisions on our behalf, whether it be medical advice or deciding to kill child terrorists. In the human mind we have some sort of moral brotherhood; computers, not so much. AI on any level is dangerous and concerning when it makes decisions for men.
-
http://www.iflscience.com/technology/scientists-and-engineers-warn-artificial-intelligence
So? What do you people think? Real danger?
Bill Gates
Stephen Hawking
Elon Musk
The people behind DeepMind and other AI research projects...
The list goes on: http://futureoflife.org/misc/open_letter#signatories
This is talking about a technological evolution, very similar to human evolution. If you look at it through that lens, you can see how it could potentially go. We could in essence create a being far superior to us, and when you introduce what could potentially become an apex predator into our biosphere, it will have repercussions. It would be as close to a god as anything that has ever existed. It would have the collection of all human knowledge and be able to process it far better than we can. Now at first, of course, this isn't a big deal. But if you look at the robotics field... which is also evolving exponentially (as with the rest of tech), eventually these AIs will not only know more than us, they will be more physically powerful than us. If it can think, if it can learn, then it won't need us... it will be able to rewrite itself... eventually.
It does hold a threat... a lot...
It's Moore's law meets Murphy's law.
-
I don't think the scare is AI itself, but a machine making decisions on our behalf, whether it be medical advice or deciding to kill child terrorists. In the human mind we have some sort of moral brotherhood; computers, not so much. AI on any level is dangerous and concerning when it makes decisions for men.
@cyberdrifter
@techb
Good point, but now that the survival of our particular group is no longer of paramount importance, we have become more interconnected. Because of that, our zero-sum (I get/you lose) way of existence is trending more towards non-zero-sumness (win-win). We tend to ascribe moral consideration to those with whom we believe we could make a non-zero-sum arrangement (think trade of products, culture, ideas, etc.). That sphere has increased steadily over time (women's suffrage, the abolition of slavery, and suffrage for black people and immigrants). An AI might very well gain its own sense of morality because of this. Also, we (humanity) hate being alone, seeking solace and comfort in one another as well as in difference of opinion. An AI might not enslave us or do any such thing, because then it would be alone; instead, it might seek to form a long-term non-zero-sum relationship.