
And a couple more quotes

"With artificial intelligence, we are summoning the demon," Musk said last week at the MIT Aeronautics and Astronautics Department's 2014 Centennial Symposium. "You know all those stories where there's the guy with the pentagram and the holy water and he's like... yeah, he's sure he can control the demon, [but] it doesn't work out."

- snip -

Just a few weeks ago, Musk half-joked on a different stage that a future AI system tasked with eliminating spam might decide that the best way to accomplish this task is to eliminate humans.


I think it's a pretty interesting concept, which will only become more relevant and get more exposure as AI advances.

Sounds like we're really playing with fire though, and a 5-10 year window for an 'event' to happen is scarily close.

The subject was touched on briefly in the Elementary episode 'Bella', which touted the alleged first real AI system and ended in a goddamn cliffhanger. It's the subject of a shit tonne of movies too, from Terminator to Transcendence.

Do you think this sort of catastrophic event is inevitable?

Do you have faith that those developing these systems can keep it under control?

Would you like to see this kind of research stopped? Or do you think this guy is a crackpot?
Skynet is coming.

Not sure if it was this guy or someone else, but I think I saw an interview on The Colbert Report the other week about this very topic.

I guess it does make sense that once machines/computers are smart enough, they would be able to decide they don't need people. How they'd go about implementing that would be an interesting question, actually. Do they harvest us like in the Matrix? Do they just decide to kill us like in the Terminator? Do they just do their own thing irrespective of us?
Not to write these people off, but it's easier to metaphorize constructs we do not fully understand. Given the macabre tendency of our species, we mostly play up the worst-case scenarios, and that permeates our way of thinking. [Citation needed]

Similarly, Murphy's Law is confirmation bias.

We've all watched, read, and played so much AI-gone-wrong sci-fi that it only makes it harder for us to be objective on this topic.
Considering how shitty we are as a general species, probably not for the worst.
I don't think processing power will keep growing exponentially, because it's becoming tougher and tougher for component manufacturers to make computers more powerful; physics has a limit. It would take a new, more powerful technology, such as artificial biology (like in that TED video) or quantum computing, to push it further. Still, we all have brains capable of such processing power, so it is possible.
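Just to put rough numbers on what "exponential" actually implies here, a quick sketch of the classic Moore's-law-style doubling (the starting figure and doubling period below are illustrative assumptions, not measured data):

```python
def transistor_count(start_count, years, doubling_period=2.0):
    """Count after `years` of doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# Hypothetical starting point: ~1 billion transistors per chip.
start = 1_000_000_000

# 20 years of uninterrupted doubling every 2 years is 2**10 = 1024x growth,
# which is why any physical ceiling eventually breaks the trend.
print(transistor_count(start, 20))
```

The point of the sketch is just that the curve multiplies by ~1000x every two decades, so a hard physical limit has to bite somewhere.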

If a scientist were to produce something able to cause a doomsday event, they would install an off switch on it. Someone might say that such a technology could fall into the wrong hands, but the atomic bomb shows that doomsday machines are difficult things to make, and only powerful entities such as the USA and Russia are able to make them. In that event we get a stand-off, like the Cold War. Could we have a new cold war of our artificial intelligence versus theirs? I suppose it's possible, but the Cold War didn't end in nuclear war, so why should another arms stand-off end any differently?

I've always been a skeptic of doomsday theories, because I think that in the final seconds before humans choose life or death (such as pressing the button to start a nuclear war) they usually choose life. The exception to this is religious extremists, but they are usually so busy killing people and conforming to religious law that they will never have the ability to research such weapons or gain the resources required to produce them. One worst-case scenario would be if they were to steal an easy-to-produce doomsday device, but the odds of that happening are very low.

Still, I wonder about something. In 1902 a guy named Charles Holland Duell was thought to have famously said, "Everything that can be invented has been invented." According to Wikipedia this wasn't quite the case; his actual quote was quite different: "In my opinion, all previous advances in the various lines of invention will appear totally insignificant when compared with those which the present century will witness. I almost wish that I might live my life over again to see the wonders which are at the threshold." The misattributed quote is famous because it was fantastically wrong.

The thing I wonder is: will we ever see a climax to our technology? What if Mr Duell was just 150 years off? The world will obviously change to face new challenges, such as dwindling fossil fuels and environmental requirements, but I doubt we will see as much change throughout our lifetimes as previous generations have, because breakthroughs will become tougher and tougher to make. With more time between breakthroughs comes more time to reform the laws around using such technologies, which also benefits our safety. The 'ban stem cells' hippies might turn into the 'ban artificial intelligence' hippies. These people can be irritating and stifling to advancement, but they always get media coverage, influencing debate on such topics and putting pressure on governments to produce laws that ensure such breakthroughs are handled correctly.

Yeah, I am a real doomsday skeptic. Shit will hit the fan sometimes (such as a probable massive recession in our country), but we always get it back together eventually. I don't believe we are going to make a bunch of terminators that will destroy the world though; that just sounds fucking ridiculous.
Won't happen.

The mind is too complex to replicate.

No matter what type of programming you do, it always boils down to yes/no answers, which basically means AI cannot function from a learning point of view. Machines cannot determine emotion, which is the key to moving forward for intelligence.
the downfall? i'd argue it will be the saviour.

allowing us more free time while those sucker robots and computers carry out all our miserable tasks for us.

want to take over? go for it! management sucks as many of you would know. let them lead whilst we frolic through the fields unencumbered by jobs.