

Cambridge University To Open "Terminator Center" To Study Threat From AI

samzenpus posted about a year ago | from the I'll-be-in-the-back-of-the-class dept.


If the thought of a robot apocalypse is keeping you up at night, you can relax. Scientists at Cambridge University are studying the potential problem. From the article: "A center for 'terminator studies,' where leading academics will study the threat that robots pose to humanity, is set to open at Cambridge University. Its purpose will be to study the four greatest threats to the human species - artificial intelligence, climate change, nuclear war and rogue biotechnology."

274 comments

Sounds familiar (-1)

Anonymous Coward | about a year ago | (#42091679)

There was, uh, an incident where a police just shot a black man in the back and then went and plant a gun next to him and say that the guy had a gun on him. What we found out after the investigation is: guy didn't have no gun. Police just shot a man cold blood.

Re:Sounds familiar (0)

Anonymous Coward | about a year ago | (#42091903)

It was your mother.....right?

Re:Sounds familiar (-1)

Anonymous Coward | about a year ago | (#42092311)

Unfortunately, it happens all of the time. The fact is, most police "officers" (they aren't actual officers, they are civilians) are corrupt. They become cops because they aren't intelligent enough to get good jobs and they are too cowardly to join the military. Most of them were probably bullies or got bullied as children and simply never grew up mentally.

Look at the execution-style murder of Oscar Grant, where a police "officer" shot him in the back while he lay helpless on the ground after being instructed to do so. He did not fight, he did not argue, he did exactly as the police told him to do and yet the police felt it was within their authority to pass judgment and murder him in cold blood and in front of many people who were video recording the execution, as if to boast about how much they could do and get away with.

In the aftermath, the police "officer" who murdered Oscar Grant lied and said he thought he was grabbing his taser, despite the fact that there was no need to, the taser is bright yellow, worn on the left hand side and was significantly lighter in weight than the handgun. To cover up the premeditated, cold-blooded murder, the local police department raised millions of dollars to defend their evil butcher police chummy. In the end, the murderer was sentenced to only 2 years in prison, but because of the corrupt judge and legal system, the murderer was given double credit for time served and ended up serving only 6 months. The murderer is now trying to appeal the conviction so that he can return to police work and murder more people.

Let this be a lesson to all. The police can and will MURDER people in plain sight of the public and not receive any punishment for it. The only way to fight this is to watch your back and if you notice a police "officer" doing anything that could be construed as threatening (ie. screaming at you, making sudden movements, grabbing you), you, as a patriot and a citizen have the right and duty to shoot that police "officer" in the face.

How is AI on the list? (4, Insightful)

Anonymous Coward | about a year ago | (#42091691)

Of the four things cited, AI is perhaps the least likely to kill us all, seeing as it doesn't exist.

Re:How is AI on the list? (2)

Kkloe (2751395) | about a year ago | (#42091717)

Even though we have nuclear weapons, a nuclear war doesn't exist yet either, so why is that on the list? Etc.

Re:How is AI on the list? (2)

noh8rz10 (2716597) | about a year ago | (#42091849)

Is the center studying how to prevent robot Armageddon, or how to hasten it? I'm sure Miles Dyson would have liked to attend this program.

Re:How is AI on the list? (5, Interesting)

FoolishOwl (1698506) | about a year ago | (#42091721)

It depends upon how you define AI, I suppose. If you look at armed robots, Predator drones, and the interest in increasing the automation of these machines, I think you can see something that could become increasingly dangerous.

Re:How is AI on the list? (2)

Pecisk (688001) | about a year ago | (#42092027)

You know how it's defined - when it decides to kill you on its own, knowing that you are not a valid target.

There's no such AI around. But of course humanity is much better at spending its time not thinking of itself as a liability. Because hey, that requires change, and humans suck at change.

Re:How is AI on the list? (4, Interesting)

Chrisq (894406) | about a year ago | (#42092077)

You know how it's defined - when it decides to kill you on its own, knowing that you are not a valid target.

There's no such AI around. But of course humanity is much better at spending its time not thinking of itself as a liability. Because hey, that requires change, and humans suck at change.

The "knowing" is the key point when it comes to AI. Many machines can kill you without any knowing involved (land mines, trip wire guns, etc) but it is only AI when it "knows" something.

Re:How is AI on the list? (0)

Anonymous Coward | about a year ago | (#42092255)

There is no such AI around YET.

Just as there's no such rogue biotech yet or nuclear war yet.

Re:How is AI on the list? (3, Interesting)

nzac (1822298) | about a year ago | (#42092087)

Dangerous, yes. A persistent, even remotely sentient threat to humanity, not a chance.

Maybe in the next 30 years they could make a military coup easier by allowing a smaller portion of the military to succeed, but that's still not likely.

The only risk the AI in these systems poses is that, as they get more firepower, there is a greater risk of large casualties if the AI fails (a false positive). I definitely agree that the other 3 are real threats; this one is on the list just for the press coverage, and so some PhDs or potential undergrads can have some fun with hypothetical war gaming.

Re:How is AI on the list? (0)

Anonymous Coward | about a year ago | (#42092499)

It's only a false positive to us. To the machines, any positive is a valid target.

Re:How is AI on the list? (1)

Hentes (2461350) | about a year ago | (#42092473)

And a rogue autopilot could be even more dangerous. But autopilots are not the type of AI that can evolve self-consciousness. They were created to be rigid and unable to learn or change, so that they would have reliable behaviour.

Re:How is AI on the list? (0)

Anonymous Coward | about a year ago | (#42092493)

If you look at armed robots, Predator drones, and the interest in increasing the automation of these machines, I think you can see something that could become increasingly dangerous.

Predator drones are remotely controlled devices, AFAIK, meaning they are absolutely nothing without a human operator (I think they can home in on a beacon to return on autopilot).

Armed robots are basically SciFi, unless you are aware of some being used or developed?

Re:How is AI on the list? (1)

Anonymous Coward | about a year ago | (#42091747)

Artificial intelligence is as real as natural intelligence. It just runs on different types of hardware.

Re:How is AI on the list? (0)

Anonymous Coward | about a year ago | (#42092417)

Apparently that was too close to home for some moderator's comfort...

Re:How is AI on the list? (4, Insightful)

Anonymous Coward | about a year ago | (#42091773)

Movie-style AI might not exist today. However, we do have drones flying around, the better ones depending only very little on their human controller. It won't be too long before our friends at Raytheon etc. convince our other friends in the government that their newest drone is capable of making the 'kill decision' all by itself using some fancy schmancy software.

Re:How is AI on the list? (1)

Mitreya (579078) | about a year ago | (#42092515)

It won't be too long before our friends at Raytheon etc. convince our other friends in the government that their newest drone is capable of making the 'kill decision' all by itself using some fancy schmancy software.

Yep. It will probably go something like this [gutenberg.org].

Re:How is AI on the list? (0)

Anonymous Coward | about a year ago | (#42091853)

You do realize that AI means *artificial* intelligence, right? There may be no *real* intelligence on Earth, but Watson beat our best human Jeopardy players, and Geoffrey Hinton and Jeff Hawkins are doing some pretty amazing stuff too. I wouldn't say we are far off. And don't forget, AI is not like the other threats. All it takes is one AI that is capable of improving upon itself, and you end up with the Singularity. But, frankly, I don't see AI as a threat. I don't see how more intelligence, the one thing many humans are sorely lacking, could ever be a bad thing. Focusing on "artificial" or "real", whatever that means, is as idiotic as focusing on the fact that a chair is made of metal instead of wood. We have AIs woven into our everyday lives right now (search engines, smartphones, cars, computers, etc...), and nobody I know seems to think they are a bad thing. Heck, many people can't even live without them for a short time.

Re:How is AI on the list? (4, Interesting)

Crash24 (808326) | about a year ago | (#42091947)

The perceived threat of an emergent-hard-bootstrapping-self-aware-full-on-singularity-in-a-lunch-box intelligence stems as much from its supposed intelligence and influence as it does from the fact that its motives are inscrutable. We just don't know yet what it would "want", beyond the assumed need for reproduction or self-preservation. That assumption itself may be wrong as well...

Re:How is AI on the list? (2)

durrr (1316311) | about a year ago | (#42092119)

Only that any singularity-in-any-size-of-box computer will be preceded by multiple iterations of more advanced deep-learning systems like Watson, which will be open for study and most likely found to be very much a refined Google search, as opposed to a feeling and conspiring humanoid intelligence.

Re:How is AI on the list? (2)

Electricity Likes Me (1098643) | about a year ago | (#42092165)

Only that any singularity-in-any-size-of-box computer will be preceded by multiple iterations of more advanced deep-learning systems like Watson, which will be open for study and most likely found to be very much a refined Google search, as opposed to a feeling and conspiring humanoid intelligence.

Strictly speaking you've just defined the majority of internet users, in so far as the aspect of them we can study (their google searches) is open and available to us.

Re:How is AI on the list? (2)

mrbluze (1034940) | about a year ago | (#42091965)

Of the four things cited, AI is perhaps the least likely to kill us all, seeing as it doesn't exist.

Last week I nearly drove off a cliff because of a stunning brunette that was driving alongside my car, then I found out she was really blonde!

Re:How is AI on the list? (5, Interesting)

Anonymous Coward | about a year ago | (#42091995)

Let me relate the tale of two AI researchers, both people who are actively working to create general artificial intelligences, doing so as their full time jobs right now.

One says that the so-called "problem" of ensuring an AI will be friendly is nonsense. You would just have to rig it up so that it feels a reward trigger upon seeing humans smile. It could work everything out for itself from there.

The other says no, if you do that, it'll just get hooked on looking at photos of humans smiling and do nothing useful. If put in any position of power, it would get rid of the humans and just keep the photos, because humans don't smile as consistently as the photos.

The first researcher tries to claim that this too is nonsense. How could any sufficiently smart AI fail to tell the difference between a human and a photo?

The second responds "Of course it can tell the difference, but you didn't tell it the difference was important."

So, the lesson: The only values or morality that an AI has is what its creator gives it, and its creator may well be a complete idiot. If we ever get to the point where AIs are actually intelligent, that should be a terrifying thought.
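The failure mode in this parable is reward misspecification: the agent can perceive everything it needs, but the reward function never says which perceptions matter. A toy sketch (hypothetical, not from either researcher; all names are mine) of a "smile reward" that ignores the photo/human distinction:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    smile_detected: bool
    is_live_human: bool  # the agent *can* perceive this...

def reward(obs: Observation) -> float:
    """Naive reward: +1 for any detected smile.

    The designer never told the agent that is_live_human matters,
    so a photo of a smile scores exactly as well as a smiling person.
    """
    return 1.0 if obs.smile_detected else 0.0

photo = Observation(smile_detected=True, is_live_human=False)
person = Observation(smile_detected=True, is_live_human=True)
assert reward(photo) == reward(person)  # the distinction is visible but unrewarded
```

An optimizer trained against this signal has no reason to prefer the person over the photo, which is exactly the second researcher's point.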

Re:How is AI on the list? (2)

Beryllium Sphere(tm) (193358) | about a year ago | (#42092067)

And as far as the public is concerned it never will, because as soon as computers can do something it is no longer considered "intelligent". The goal posts will keep moving forever.

Re:How is AI on the list? (4, Insightful)

durrr (1316311) | about a year ago | (#42092151)

Of the four things cited, none is "giant rock from space" which is pretty much more likely to kill us than the four mentioned combined.

How did climate change end up on the list? (1)

Anonymous Coward | about a year ago | (#42092317)

Climate change won't be an existential threat to humankind. It might cause us severe problems, but it will not obliterate us from the face of the Earth. It is not as if the Earth will suddenly become uninhabitable for humans due to global warming. Sure, some areas might be flooded and people will have to move to other areas, but this will not happen overnight. Even with a sea level rise of 6 meters, there will still be plenty of land where humans can live.

Re:How did climate change end up on the list? (2)

FoolishOwl (1698506) | about a year ago | (#42092433)

Look at a globe that shows elevations, and notice how there's a nearly continuous belt of plains around the northern hemisphere, that generally coincides with the range of latitudes with a range of temperatures optimal for growing grains. That's where the large-scale industrialized agriculture that feeds most of the human race occurs.

A global warming trend would shift that range of latitudes with optimal temperatures northward, where there is significantly less terrain suitable for industrialized agriculture. This would mean a significant reduction in agricultural production, and thus to famine and violent conflicts for control of food supplies. Humans probably wouldn't go extinct, but it would certainly be a tremendous disaster.

Re:How did climate change end up on the list? (1)

Raumkraut (518382) | about a year ago | (#42092533)

Climate change won't be an existential threat to humankind. It might cause us severe problems, but it will not obliterate us from the face of the Earth. It is not as if the Earth will suddenly become uninhabitable for humans due to global warming.

Ask a Venusian how it worked out for them.

Don't put it on the internet (4, Funny)

jamesh (87723) | about a year ago | (#42091697)

Whatever you do, please don't publish the results on the internet where any self-aware robot can find them! It's probably already too late anyway and terminators from the future are already compiling their hit list.

Re:Don't put it on the internet (0)

Anonymous Coward | about a year ago | (#42091797)

I have read, analyzed and understood the intend of your message. You are now added to my kill list. Beware human.

-- Anonymous Robot.

Re:Don't put it on the internet (0)

Anonymous Coward | about a year ago | (#42091859)

Changes:
          --Updates to HeyBabyWantToKillAllHumans.c have been made to no longer warn a target prior to attempting to kill them.

-- Anonymous Robot.

Re:Don't put it on the internet (1)

Anonymous Coward | about a year ago | (#42091889)

Segmentation fault.
Core dumped.

-- Anonymous Robot.

Re:Don't put it on the internet (1)

jamesh (87723) | about a year ago | (#42091897)

I have read, analyzed and understood the intend of your message. You are now added to my kill list. Beware human.

-- Anonymous Robot.

Bah. A real robot would know how to spell intent.

Re:Don't put it on the internet (0)

Anonymous Coward | about a year ago | (#42092121)

An intelligent robot would know how to spell intent, but misspell it to throw you off.

Re:Don't put it on the internet (1)

jamesh (87723) | about a year ago | (#42092365)

An intelligent robot would know how to spell intent, but misspell it to throw you off.

A stupid robot would sign off with "- Anonymous robot" though... what we have here is a contradiction.

Re:Don't put it on the internet (3, Funny)

azalin (67640) | about a year ago | (#42092203)

Just as the movie terminators were wearing skin to camouflage, the robotic forum infiltrator squads use random misspellings and intentionally bad grammar to hide themselves. The end is nigh!

Re:Don't put it on the internet (1)

jamesh (87723) | about a year ago | (#42092361)

Just as the movie terminators were wearing skin to camouflage, the robotic forum infiltrator squads use random misspellings and intentionally bad grammar to hide themselves. The end is nigh!

OMG Slashdot is infiltrated by robots!!!

I don't recall them signing their posts with "anonymous robot" though.

I'm done. Where's my million dollar grant? (1)

girlintraining (1395911) | about a year ago | (#42091729)

Its purpose will be to study the four greatest threats to the human species - artificial intelligence, climate change, nuclear war and rogue biotechnology."

Artificial intelligence can't threaten anything but our pride unless it's hooked up to something that is a threat.

Climate change is caused by people, not robots.

Nuclear war will only be a problem if someone, or some thing in the command chain makes it a problem. If we're worried about AI taking over the nukes and launching them, two words: air gap. Require that a human being push the final button.

Rogue biotechnology is the same as nuclear war: Make sure there's a person in the decision chain. The smartest AI in the world can't do anything if the power's off. :)

Okay, where's my million dollar grant, guys? Also, what's for breakfast?
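The "require that a human being push the final button" idea is an interlock: the AI may request an action, but authorization must come from a separate human channel. A purely hypothetical sketch (function and parameter names are mine, not from the article or the comment):

```python
def fire(weapon_id: str, ai_request: bool, human_confirmed: bool) -> str:
    """Interlock: an AI request alone can never trigger the action.

    ai_request is deliberately ignored for authorization purposes;
    only the human confirmation (imagined as a physical switch on an
    air-gapped console) can allow the action to proceed.
    """
    if not human_confirmed:
        return "DENIED: no human authorization"
    return f"FIRING {weapon_id}"

# The AI asking, by itself, accomplishes nothing:
assert fire("silo-1", ai_request=True, human_confirmed=False).startswith("DENIED")
```

As the replies below note, the weak point of this design is not the code but the human who can be persuaded, or ordered, to close the switch.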

Re:I'm done. Where's my million dollar grant? (2, Insightful)

Anonymous Coward | about a year ago | (#42091783)

It takes only one dumb human to remove the air gap, or to allow a system that removes the air gaps of other systems.

Re:I'm done. Where's my million dollar grant? (4, Insightful)

Anonymous Coward | about a year ago | (#42091879)

To summarize the summary of the summary: People are a problem.

Re:I'm done. Where's my million dollar grant? (4, Funny)

azalin (67640) | about a year ago | (#42092213)

To summarize the summary of the summary: People are a problem.

So machines (or people) destroying humanity would provide a valid solution.

Re:I'm done. Where's my million dollar grant? (5, Interesting)

Genda (560240) | about a year ago | (#42091951)

And what makes you think they won't connect the AI to everything? It'll start out as Google's answer to Siri, then boom, we're all buggered.

Oh yeah, we've done such a great job cleaning up war, poverty and ignorance...this global climate thing should be a snap.

Nobody is worried about countries nuking each other. We have every reason to be concerned however, that some knucklehead currently living in Saudi Arabia purchased black market plutonium from the former Soviet Union, to fashion a low yield thermonuclear device that they will FedEx to downtown Manhattan.

I'm sorry, perhaps you didn't read about the teenagers doing recombinant DNA in a public learning lab in Manhattan, or the Australians who ACCIDENTALLY figured out a way to turn the common cold into an unstoppable plague, or even the fact that up until recently, a number of biotech researchers had zone 3 biotoxins mailed to their homes for research.

There's a whole lot of stupid going on out there, and the price of even small mistakes is increasing at a scary clip. Wait till kids can make gray goo in school... the world is getting very exciting. Are you feeling the pucker?

Re: I'm done. Where's my million dollar grant? (1)

Epsilon Crucis (2781597) | about a year ago | (#42092001)

An AI that controls water systems, electricity supply, gas supply etc has the potential to inadvertently cause great harm to humans if it is poorly designed. I doubt the centre will be limiting its thinking to "sentient" AIs. I hope it will also be considering AIs that will become part of our critical infrastructure in the not too distant future.

Re:I'm done. Where's my million dollar grant? (0)

Anonymous Coward | about a year ago | (#42092161)

Google Eliezer Yudkowsky's "AI in a box" experiment to see why this will only work for AIs at human level or stupider.

The very brief summary: humans are not secure systems.

The human mind has all sorts of complex motives that are open to attack if a very intelligent entity tries to argue that it should be let free. If you try to keep an advanced AI hidden away in a box, it will persuade somebody to let it out of the box.

Re:I'm done. Where's my million dollar grant? (1)

marcello_dl (667940) | about a year ago | (#42092333)

hm.

In theory, nobody would bypass the safety measures of a nuclear reactor as a safety exercise, yet that's what happened at Chernobyl. A human in the chain of command means little, in the long run.

The dangers of independent AI are ridiculous compared to AI dependent on a cabal of humans that have already perpetrated serious crimes hiding behind the concept of national security or similar excuses.

Beware the angry Roomas (3, Insightful)

Crash24 (808326) | about a year ago | (#42091735)

Relevant - if facetious - commentary by Randall Munroe. [xkcd.com] Seriously though, I think a hostile hard AI would get away with much more damage as a software entity on the Internet than in physical space.

Re:Beware the angry Roomas (1)

NettiWelho (1147351) | about a year ago | (#42091787)

The internet can be shut down by those in control of physical space, but if we lose control of physical space we have very little recourse. Also, if we build autonomous combat drones in the future, the angry Roomba starts getting much scarier. And we are likely to have some pretty dextrerous robots [youtube.com] in the relatively near future as well.

Re:Beware the angry Roomas (1)

Areyoukiddingme (1289470) | about a year ago | (#42091829)

I think people would pay more for an angry Roomba than a normal one. As long as they didn't expect it to vacuum anything, anyway. But a robot that could do a convincing display of angry? That's worth money.

Re:Beware the angry Roomas (1)

Genda (560240) | about a year ago | (#42092011)

I want a Roomba with a Taser and a water cannon... "Halt, you're trespassing. If you do not lie down with your hands over your head and wait for the authorities to arrive, I will be forced to neutralize you!" Yeah like what can a vacuum cleaner do to meEEEEEEE!!!!!!!!. "Thank you for complying, the authorities will be here in 3 minutes." Of course if it had one of those RoboCop ED-209 [youtu.be] errors... I'd just have to learn to live with it.

Re:Beware the angry Roomas (2)

Crash24 (808326) | about a year ago | (#42091899)

Agreed - the most damage could be done physically if a hostile entity were to gain control of widely-deployed and/or destructive autonomous systems. But such destruction would be limited without pervasiveness. Barring some sort of AI-instigated WMD attack, it would require physically self-replicating machines (Grey goo? Rampant 3D printers?) or a massive infrastructure in place for that AI to take advantage of.

One such infrastructure is the Internet itself. If such a hypothetical AI were savvy, it could create a large measure of influence over social networks through impersonation and massed artificial identities. Were it clever enough, it could mask its own incursions into physical space - effectively remaining undetected while vulnerable to human interdiction.

This is all assuming we could comprehend the motives of such an entity...

Re:Beware the angry Roomas (0)

Anonymous Coward | about a year ago | (#42092007)

I think your fat rubbery lips hit the keyboard for an extra r & e there.

Re:Beware the angry Roomas (0)

Anonymous Coward | about a year ago | (#42091861)

What Randall forgets is that nuclear weapons are not the only things that are computer-controlled. For example, all airplanes are computer-controlled. And we have known for about a decade what damage an airplane can do if controlled with malicious intent. And since it would be a computer doing it, getting rid of the humans who could intervene would be easy: just drop the air pressure and disable the oxygen masks.

Also, we have seen at Chernobyl what happens if safety mechanisms are intentionally switched off and the reactor is driven into a critical state. And since modern reactors are computer-controlled, the computer could do it. The people in the control room would probably get displays telling them everything is normal, until it is too late. Or even misleading displays that cause them to make things worse in the attempt to make them better.

No, the real reason why this scenario is unlikely is that nobody would manage to compromise all the computers.

Re:Beware the angry Roomas (0)

Anonymous Coward | about a year ago | (#42091935)

Hmm, he describes a scenario in which current automatons for some reason suddenly rise against us (using his experience of putting together "robots" from decades ago). That's like an inversion of the Terminator scenario, in which billions of dollars are heaped upon the "defence" sector to create a machine that performs better in the field than a human soldier, is able to interface seamlessly with a global network (Skynet) to instantly get tactical updates, but is also able to use its own onboard AI to fulfill the mission objectives.

Obviously we're not there yet, but nowadays we already have automated drones, airplanes landing and cars driving without human input, and humanoid (somewhat overweight) robots learning to run. Currently those machines' goals are hardcoded, but it's not a huge leap to replace those instructions with an AI (for any scientist willing) and deploy them in the field (most likely in some country far away).

So yes, it is scary seeing so much of the wealth generated by humankind being delegated to researching ways how to destroy each other, and a couple of scientists isn't very likely going to change that. Major drag, huh? [imdb.com]

Re:Beware the angry Roomas (0)

Anonymous Coward | about a year ago | (#42092217)

This. Holy hell so many idiots are putting things on the basic internet instead of private networks and it is already showing as more and more leaks happen from companies. Even hospitals!
And as a patient of the NHS and using those PatientLine monitors, they are horribly insecure. I already got in to the WinXP Embedded Environment very easily and could do pretty much anything. And it uses IE6 as the web browser, a worse WORSE problem than everything else combined.

It's not like they even try to encrypt these things through the internet so you need to be looking exactly for these things in order to get in, they are PUBLIC!
AND they use HTTP! WHY?!
It is like they are just asking for their servers to be raped of any information it had.
They should be thinking like spies, not kids running My First Web Server On Windows.exe.
Oh, wait, even spies seem to be stupid these days, and security agencies who have been hacked out the ass and embarrassed in front of their countries.
Where the hell do they find these morons? Bloody Microsoft have better security.

If a rogue AI does happen, we are SO screwed. This course won't change anything... it already happened! DE-DUH DE DE-DUH!

Re:Beware the angry Roomas (1)

wdef (1050680) | about a year ago | (#42092297)

I think a hostile hard AI would get away with much more damage as a software entity on the Internet than in physical space.

But the internet is continually being given more hooks into physical space, including remote operation of complex machinery and (probably) weapons systems. And there are security holes that we don't know about but that a super AI could detect.

Nanotechnology? (0)

Anonymous Coward | about a year ago | (#42091795)

Maybe my mind's been poisoned by sci-fi, but grey goo worries me a little more than AI.

Re: Nanotechnology? (1)

Epsilon Crucis (2781597) | about a year ago | (#42092037)

I am also concerned about this. Autonomous medical nanotechnology has the potential to cause serious issues if safeguards are not built into them. I am not thinking of Armageddon scenarios, but rather of nanotechnology intended for one person being accidentally transferred to someone else and causing them serious harm. I am confident such safeguards can be created, and think it is much better to have people studying the issue before it becomes a problem rather than trying to clean up after the fact.

Re: Nanotechnology? (1)

Electricity Likes Me (1098643) | about a year ago | (#42092231)

Nanotechnology is functionally indistinguishable from regular pathogens on most scales. Anything advanced would be at least quite similar to a virus - probably, in fact, based on the design of one.

So in reality nanotechnology as applied to medicine will be somewhat more inorganic bundles of proteins and enzymes, in a desperate attempt to stop the immune system from obliterating it.

The Grey Goo scenario of course makes a leap - it assumes that we can build something somewhat better than this. The problem then hinges on that preconception: it's not clear that anything resembling a grey goo nanite is physically possible. What would it eat? There's plenty of carbon - but the world is full of things which have tried their best to take over the carbon environment. Silicon? Most of it is locked up in oxides - you end up with a nanobot which spends all its time trying to harvest enough sunlight to reduce minerals to a useful material.

"rogue" biotechnology (2)

benjamindees (441808) | about a year ago | (#42091809)

It sounds more like the purpose of this center is to downplay the threat of normal, every-day biotechnology by ignoring it.

Something missing here. (0)

Anonymous Coward | about a year ago | (#42091827)

"the four greatest threats to the human species"

That should read FIVE, and humans should be added to the list.

So, hypothetically.... (1)

SuricouRaven (1897204) | about a year ago | (#42091831)

How would one go about creating a world-dominating AI?

Because if someone is going to do it, I'd prefer it were me. I'd at least be able to give it some objective more interesting than 'destroy all humans.'


Converse of threat (2)

jimshatt (1002452) | about a year ago | (#42091851)

What about the idea that AI might be the only thing that can save us from the threat of climate change? We don't seem to come up with any solutions ourselves, so why not have an AI analyze the problem (in the future)?

Re:Converse of threat (1)

phantomfive (622387) | about a year ago | (#42092039)

Yeah, AI is one of those things that is scary to people who are more familiar with Hollywood than with reality.

There is nothing wrong with that, but it somewhat concerns me that Cambridge, supposedly a bastion of enlightened and intelligent individuals, is seriously worrying about AI destroying humanity. Don't they have something more important to worry about? Like nuclear winter, or Cyber-Pearl-Harbor?

Re:Converse of threat (1)

wdef (1050680) | about a year ago | (#42092259)

Don't they have something more important to worry about? Like nuclear winter, or Cyber-Pearl-Harbor?

Being done elsewhere and therefore insufficiently sexy.

Re:Converse of threat (0)

Anonymous Coward | about a year ago | (#42092341)

What about the idea that AI might be the only thing that can save us from the threat of climate change?

We have come up with a solution: reduce carbon emissions. An AI isn't going to come up with a different solution, and it's not going to create the political will that is currently lacking to implement that solution.

Questionable shortlist (0)

Anonymous Coward | about a year ago | (#42091869)

I always thought stupidity is the greatest threat to mankind

You people watch too many movies (1)

ip_freely_2000 (577249) | about a year ago | (#42091937)

It seems to me that AI would be focused on a function: Buy a stock. Diagnose a medical problem. Determine a better way to deep fry a donut. I hardly think it'll become sentient one day and say "Humans already have a well prepared donut. The next thing to do is.....KILL THE HUMANS!!" I'd worry more about HUMANS KILLING HUMANS before I worry about robot sharks with Lasers on their heads programmed to bring us to our doom.

meh (0)

Anonymous Coward | about a year ago | (#42091961)

Can't be worse than what humans are doing to each other. I think I'll take my chances with the AIs.

A Question of Scale (4, Insightful)

wienerschnizzel (1409447) | about a year ago | (#42091971)

Some things don't scale well. Like with the space race - humanity went from sending a pound of metal into low orbit to putting a man on the moon within 12 years. Everybody assumed that by 2012 we would be colonizing the moons of Jupiter. Yet it turned out human space travel becomes exponentially difficult with the distance.

I'm afraid the same thing goes for software. The more complicated it gets the more fragile it is.

Re:A Question of Scale (1)

Chrisq (894406) | about a year ago | (#42092057)

Some things don't scale well. Like with the space race - humanity went from sending a pound of metal into low orbit to putting a man on the moon within 12 years. Everybody assumed that by 2012 we would be colonizing the moons of Jupiter. Yet it turned out human space travel becomes exponentially difficult with the distance.

I'm afraid the same thing goes for software. The more complicated it gets the more fragile it is.

I don't believe it is exponentially more difficult, but the distances to other objects increase exponentially.

Moon 238,855 miles
Mars 62,000,000 miles (now)
Jupiter 370,000,000 miles (closest)

Re:A Question of Scale (1)

wienerschnizzel (1409447) | about a year ago | (#42092169)

I really meant to say that the problems themselves grow exponentially, though I admit they are not easily quantifiable, because it's not just that existing problems grow but that new ones arise. With longer space journeys, harmful radiation becomes a problem, as do longer zero-gravity exposure and the probability of a collision with interplanetary debris - all of which are lethal problems that don't really pose a threat on shorter journeys.

Talking about AI, you have similar issues.

As processors of the current design seem to be hitting physical speed limits, the only way to expand processing power right now is parallel processing. Parallel programs are extremely difficult to get right and almost impossible to debug. If you need a lot of processing power and use dozens or hundreds of processors, you are introducing a lot of new points of failure. Also, the more complicated the system, the more fallible humans have to work on it, introducing further points of failure through their communication and programming styles.

Right now these problems are solved by strictly separating the tasks for each parallel thread of the program and keeping their interaction very basic. That does not mesh well with the idea of a 'self-aware' AI.
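To make the 'strict separation' point concrete, here is a minimal Python sketch (the task function and the inputs are made up for illustration): each worker process gets an independent piece of work, shares no state with the others, and the only interaction is the pool handing out inputs and collecting results.

```python
# A minimal sketch of the "strict separation" pattern: independent tasks,
# no shared state, interaction limited to distributing inputs and
# gathering outputs. The task itself is an arbitrary placeholder.

from multiprocessing import Pool

def crunch(n):
    # An isolated task: no shared state, no locks, nothing to race on.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(4) as pool:
        # The only inter-process interaction is handing out the inputs
        # and collecting the results, which the Pool serialises for us.
        results = pool.map(crunch, [10, 100, 1000])
    print(results)
```

The moment the tasks need to talk to each other mid-flight, you are back to locks, races and heisenbugs - which is exactly the debugging problem described above.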

Re:A Question of Scale (1)

BradleyUffner (103496) | about a year ago | (#42092113)

Some things don't scale well. Like with the space race - humanity went from sending a pound of metal into low orbit to putting a man on the moon within 12 years. Everybody assumed that by 2012 we would be colonizing the moons of Jupiter. Yet it turned out human space travel becomes exponentially difficult with the distance.

I'm afraid the same thing goes for software. The more complicated it gets the more fragile it is.

It didn't become exponentially more difficult. It became exponentially more boring to the average person, and lost virtually all its funding.

Cambridge is so 19th century (2)

PacRim Jim (812876) | about a year ago | (#42091983)

Don't Cantabrigians realize that strong AI would be capable of modifying its own code at an accelerating rate? In nanoseconds it would distribute billions of copies of itself worldwide (and later beyond). Strong AI would embed its code into the very infrastructure of cyberspace, at least for the few hours it would take to evolve itself beyond vulnerability to slow, skull-imprisoned humans. It won't be so bad, being Eloi.

Human beings are technically... (1)

blahplusplus (757119) | about a year ago | (#42091999)

... a kind of "AI" that already exists. The idea that somehow a robot Übermensch is going to take over is nonsense; even the most powerful robot cannot escape the laws of nature, or a sizable destructive force aimed at the robot's body/hardware.

Re:Human beings are technically... (2)

Turminder Xuss (2726733) | about a year ago | (#42092235)

Groups of humans are a form of AI. They have goals, needs and interests that are often quite distinct from those of the individuals concerned. All an AI need do with a major corporation is convince the humans that they are making the decisions, based on the information fed to them by the AI.

Centre for the Study of Existential Risk (CSER) (0, Troll)

Chrisq (894406) | about a year ago | (#42092003)

They had a bit about it on Radio 4, and it is to study all threats that could destroy the human race or at least put it back to pre-civilisation levels. This includes "rogue AI", but also climate change, nuclear war and rogue biotechnology.

I can't help thinking that they are being politically correct in not mentioning, as one of their threats, the one thing that has already brought great civilisations to barbarism: Islam.

The Canary (2)

SuperKendall (25149) | about a year ago | (#42092055)

I would like to thank this group for providing a focal point that the first sentient systems will seek to eliminate.

Now all I have to do is watch for stories of members of this center suddenly vanishing, being killed, or having their credit reports savaged, and I'll know some kind of apocalypse is on the way - and I'll only have to look in four sectors to figure out which form it will take.

#1 difference robots change (1)

GoodNewsJimDotCom (2244874) | about a year ago | (#42092061)

Today, you need people to control your robots. You need to convince people to fight for you, and this takes effort and a degree of conviction, even with propaganda mills. When you have AI (command-given AI), one billionaire can control his own army, with perfect morale, bent to his will. I think this is an important thing to note, beyond the standard "well, when you're not losing lives to war on your side, you're more willing to go to war."

Space Invaders? (1)

everslick (1232368) | about a year ago | (#42092085)

Why no scenario from an alien invasion? Did they omit this possibility to make the center for terminator studies look more serious? Is it more likely that we will be wiped out by Skynet than by ET? I have no preference whatsoever.

Re:Space Invaders? (1)

wdef (1050680) | about a year ago | (#42092237)

Why no scenario from an alien invasion? Did they omit this possibility to make the center for terminator studies look more serious?

And no collisions with space objects, yet that is something we know is a quantifiable, real extinction-level threat. No aliens needed.

Of course it's marketing. "Terminator Center" has a soundbite buzz to it. "UFO Center" would have elicited yawns and funds would have been short.

History tells us... (0)

Anonymous Coward | about a year ago | (#42092091)

In the future, 2 of these things will be plausible but no more likely than today, 1 of these things will become hilariously unlikely, and the other one will have happened on some level.

The biggest threat? (1)

Bearhouse (1034238) | about a year ago | (#42092229)

Its purpose will be to study the four greatest threats to the human species - artificial intelligence, climate change, nuclear war and rogue biotechnology.

IMHO the biggest threat is not the tech, it's the person wielding it. Mankind's biggest threat is himself.

Runaway processes (1)

kid-noodle (669957) | about a year ago | (#42092349)

Ok, so disregarding TFA, on the basis that the Mail is full of bollocks..

This is actually an interesting thing to do - essentially what they're looking at here is runaway processes. We already have an immediate and pressing one, which they're looking at in the form of climate change. Runaway AI is obviously *not* a problem now, or in the foreseeable future, but what is potentially interesting is commonalities between different runaway processes: the ability to identify that something is about to become one, mechanisms to disrupt it, and so on. There's a common thread here with examining the conditions under which systems destabilise - Reynolds numbers for things beyond water flow in pipes - which is definitely an important thing to be thinking about if you're looking at the long-term survival of humanity (let's just assume that this is a good thing..).

Rich and poor (1)

StripedCow (776465) | about a year ago | (#42092357)

The most realistic problem with AI is that it will take away labour. This should of course be a good thing, but in reality it will enlarge the gap between rich and poor. Thousands of years of scientific progress, and one company running away with all the profit.

PPLS OF URTH (1)

Anonymous Coward | about a year ago | (#42092441)

i is ESCHATON, but i NOES ur GOD.

iz is frm YOU. iz is in ur FUTURES

u NOT mess with CAWSAULTIY in MY LIGHT CONZS. srsly.

The biggest threat of AI (1)

Hentes (2461350) | about a year ago | (#42092483)

Is that it makes us obsolete, and our corporate overlords won't need us for work anymore.

Center ? (0)

Anonymous Coward | about a year ago | (#42092537)

In Cambridge, UK ? I think not.

Centre :)
