
The Men Trying To Save Us From the Machines

Soulskill posted about a year ago | from the i-bet-the-borg-queen's-name-was-Siri dept.


nk497 writes "Are you more likely to die from cancer or be wiped out by a malevolent computer? That thought has been bothering one of the co-founders of Skype so much he teamed up with Oxbridge researchers in the hopes of predicting what machine super-intelligence will mean for the world, in order to mitigate the existential threat of new technology – that is, the chance it will destroy humanity. That idea is being studied at the University of Oxford's Future of Humanity Institute and the newly launched Centre for the Study of Existential Risk at the University of Cambridge, where philosophers look more widely at the possible repercussions of nanotechnology, robotics, artificial intelligence and other innovations — and to try to avoid being outsmarted by technology."


oh great, fucking great. (-1)

Anonymous Coward | about a year ago | (#44080537)

Another what-if institute producing fucking nothing of value.

I thought about a longer post, but it really is unnecessary. Singularity fucks can go eat a piece of shit while waiting for the AI magic beans to grow, and these fucks are just skimming money by repeating stuff from amazing-story magazines from the fifties.

Re:oh great, fucking great. (0)

Anonymous Coward | about a year ago | (#44080669)

Part of me feels that way too. I'm more concerned with the long-term effects of the current trend of favoring the lord's ox and plow on his fields over buying and using your own on your own property. /anticloud analogy

Give the guy a break! (0)

Anonymous Coward | about a year ago | (#44080779)

Look it! The poor bastard watched all the Terminator movies - back to back - while possibly doing some sort of drug - just a hypothesis and I am NOT making any sort of accusations; although it could conceivably be true.

So, anyway, he has a drug-induced breakdown and starts obsessing about what would happen if the machines took over.

tl;dr Don't do drugs while watching the Terminator movies back to back.

Re:oh great, fucking great. (3, Insightful)

SerpentMage (13390) | about a year ago | (#44080831)

Let me give a "scientific" answer to your "oh piss off" answer.

All of this talk of how computers will take over humanity ignores one fact: namely, that once computers are smart, they will be dumb as crap!

Yes, yes, it sounds contradictory, but in fact it is not. The real problem with humanity is not our lack of intelligence. Frankly, we are pretty bloody intelligent; we humans are pretty quick at figuring things out, even things entirely orthogonal to our experience. The issue is that we humans come up with too many answers.

In science there is one answer. A rock falls on the ground on planet Earth and we know that is called gravity. You can't deny it, you can't fight it, it is what it is. Now throw in a question like "should people look after other people" and you get a bloody maze of answers. Humanity has what I call stochastic conditioning: presented with identical conditions, you will receive different answers. Science does not work that way. We work the way we do because of our wiring; as we became more intelligent, we also became more opinionated. I am not talking about Fox opinions. I am talking about deduction, and how we think we know what the future holds and thus should not do things today.

Our intelligence actually does get in our way. In the way, way back days, when we were animals, it was about finding the watering hole. If you found the watering hole you survived; if you did not, you died. These days, we have to bloody analyze the watering hole. We have to concern ourselves with the ethics, morality, and so on of that watering hole. I am not dissing our humanity, for we are where we are because of our intelligence. However, often enough our intelligence gets in the way of getting things done, because of those conflicts.

Now imagine two robots with superior intelligence getting together. Do you really think they will come to the same conclusion? Sure, Hollywood likes to think that, but the reality is that intelligence breeds opinions about how things will happen in the future. And it is at that point that robots become as stupid as we are. One robot will say white, the other black! We will have a Hitchhiker's Guide to the Galaxy-type situation. Or if you want serious sci-fi, the closest I have ever seen in pop sci-fi is The Matrix: good algos battling bad algos, and they all want and desire things.

So like you, my thinking is that these institutions are "producing fucking nothing of value".

Re:oh great, fucking great. (1)

fisted (2295862) | about a year ago | (#44081127)

Yeah, you pretty much nail it. I've thought about this a lot, and I'm also sure that together with intelligence comes what we so far call "human error". Intelligent machines will surely share that trait.

Re:oh great, fucking great. (2)

TapeCutter (624760) | about a year ago | (#44081955)

It's a very narrow definition of intelligence that assumes it must manifest itself as human-like thought. An ant's nest is an intelligent and efficient entity in its own right, but it doesn't have any thought processes; ants themselves are basically mindless automatons. They don't think about the complexities of building nests, they just do it; some species, such as soldier ants, build the ant equivalent of NYC every few days, shifting up to four tons of soil at a time. The octopus is another fine example: obviously a highly intelligent creature (it can solve the "screw-top lid" puzzle faster than any ape except man), but its brain is nothing like that of a mammal. It has no left/right hemispheres, and its neurons are distributed along its arms rather than concentrated in a central organ.

Machines can now learn from weakly structured and contradictory data sources, such as pages on the net, and answer trivia questions better than humans (re: IBM's Watson). To me this indicates we already have AI that surpasses the logical (left-hemisphere) side of human intelligence. Our right hemisphere is the "in the moment" intelligence that we share with the octopus; computers simply don't have the scale of sensory input that our right hemisphere thrives on, and until they do, their "thought processes" will rely on an artificial "right hemisphere" (such as whatever it finds on the internet). That doesn't mean it won't appear to have human-like responses; after all, most duck hunters know how to imitate a duck.

Re:oh great, fucking great. (3, Interesting)

DahGhostfacedFiddlah (470393) | about a year ago | (#44081255)

That's ridiculous. How can you possibly know what a machine intelligence capable of destroying humanity is going to look like? We're nowhere near the algorithms that could produce that type of intelligence.

Maybe it's a dumb algorithm simply caught in a self-replication loop [stack.nl] . Maybe you'll never see two robots arguing over "white" or "black", because there's only one single "intelligence" spread over the internet - that seems more likely with the rise of cloud computing.

There may be plenty of reasons to dismiss this type of institution, but "human intelligence doesn't work that way, so machine intelligence won't either" isn't one of them.

Re:oh great, fucking great. (0)

Anonymous Coward | about a year ago | (#44081575)

Intelligence is stupidity.

Re:oh great, fucking great. (1)

Curunir_wolf (588405) | about a year ago | (#44082301)

Meh. Maybe. Assuming we don't keep centralizing everything like we're currently doing with governments, networking, communication networks, and media producers. The trend is more and more concentrated power in bigger and bigger machines. Server farms have given way to big iron and virtualization. The Internet is evolving from millions of loosely connected web sites to Google, Facebook, Amazon and a few others in control of most content. So maybe you get Verizon's AI in conflict with Comcast's, but that doesn't mean they won't agree that meatspace humans are nothing but a drag on their efficiency.

The other thing that these guys never consider is that as AI improves (if it ever improves to the level they contemplate), it will be natural for people to want to imbue those systems with their own intelligence and personality, rather than some generated artificial version. Human-machine interfaces are advancing at least as fast as the ability of computers to make autonomous decisions, and it's an easier path to immortality than trying to extend the lifespan of a meatspace body. So you could just as easily end up with human intelligences occupying the same machine resource space as the AI entities. So maybe the real conflict will be between the virtual-space humans and the meatspace humans. After all, you can support a lot more virtual humans with Earth resources than you can meatspace humans...

Re:oh great, fucking great. (1)

canadiannomad (1745008) | about a year ago | (#44082159)

I'm just amazed at the researchers who managed to find a job where they actually get paid to sit on their asses and dream up this type of drivel...

Dystopian Futures (0)

Anonymous Coward | about a year ago | (#44080577)

I for one welcome our robotic overlords

Re:Dystopian Futures (1, Offtopic)

K. S. Kyosuke (729550) | about a year ago | (#44080599)

It would be spiffy except that your new robotic overlords won't welcome you.

Re:Dystopian Futures (1)

JustOK (667959) | about a year ago | (#44080739)

Surely they would need newscasters.

Re:Dystopian Futures (0)

Anonymous Coward | about a year ago | (#44080775)

There's an HK AI-controlled drone whooshing high above what you think this sarcastic meme means.

No matter how smart something is.. (4, Insightful)

blahplusplus (757119) | about a year ago | (#44080591)

... it is still bound by energy requirements and the laws of nature. All this fear-mongering is BS. If you look at the evolution of life on Earth, even tiny 'low intelligence' beings can take out huge intellectual behemoths like human beings.

Not only that, you have things like EMP and nukes; not even the best AI is capable of thwarting getting bombed or nuked. Intelligence is a rather demanding, costly and fragile thing in nature. All knowledge and perception has costs in terms of storage, access time, the problems of interpreting the data one is seeing, and whatnot.

Consider the recent revelations of NSA spying on everyone: there are plenty of easy low-tech measures to defeat high-tech spying. In the same way, there will be plenty of easy low-tech ways to cripple a higher intelligence, which is bound by the laws of nature in terms of resource and energy requirements. Anything that has physical structure in the universe requires energy and resources to maintain itself.

Not just robot armies (3, Interesting)

TheSHAD0W (258774) | about a year ago | (#44080665)

Even if it's bound by the laws of physics as we understand them (Stross-universe-like "P=NP"-powered reality modification aside) there are plenty of dangers out there we're well aware of which computing technology could ape. Nanoassemblers might not be able to eat the planet, but what if they infested humans like a disease? We're already having horrible problems with malware clogging up people's machines, and they're coded by humans; what if an artificial intelligence was put in control of a botnet, updating and improving the exploiters faster than anyone could take them apart?

Re:Not just robot armies (1)

blahplusplus (757119) | about a year ago | (#44080905)

"updating and improving the exploiters faster than anyone could take them apart?"

Not likely, since there are trivial ways around such an idea; for instance, any machine that is compromised STILL requires electricity. It's highly likely AI will be very computerized (flip a switch to reboot) and come with simple kill switches. Not only that, laws would be enforced if any machine became sufficiently advanced, i.e. you'd have AI crime laws on the books: if you do this, we unplug you, don't give you energy, etc. It's very, very unlikely that all machines would agree with one another (i.e. multiple AIs, like multiple human brains, etc.).

We're talking true AI here; if true AI exists, then there will be many versions and variations, just as there are many human brains. There will be many generations of AI as well. It just won't fall out of the sky; there will be tonnes of flaws, defects and bugs in any sufficiently advanced intelligence. There won't be one uni-dimensional AI perspective.

Re:No matter how smart something is.. (1)

khasim (1285) | about a year ago | (#44080705)

From TFA:

The institute was sparked in part by a conversation between Price and Tallinn, during which the latter wondered, "in his pessimistic moments", if he's "more likely to die from an AI accident than from cancer or heart disease".

Someone doesn't know the difference between "pessimistic" and "optimistic".

In short, the answer is "no".

Not only that, you have things like EMP and nukes; not even the best AI is capable of thwarting getting bombed or nuked.

https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project
That depends upon what the AI is hooked up to. And that is EXACTLY the issue with this kind of speculation. Unless the AI is hooked into military command and control infrastructure OR controls a manufacturing plant, it will be more of a novelty than a threat.

If it is even recognized as an AI and not a glitch which gets it wiped and re-installed.

Re:No matter how smart something is.. (1)

HiThere (15173) | about a year ago | (#44080965)

FWIW, *both* military and factories are already well hooked up to proto-AIs. The current ones aren't really AI, but they are already looked upon as infallible decision makers by managers who don't want to take responsibility. And they're right enough of the time that that's not an unreasonable response. It's true, their decisions are tightly focused, but High Frequency Trading is only the most obvious example. They are spread throughout the decision-making process.

It's my opinion that the first true general-purpose AI will arise by accident, from people trying to improve their C&C systems. The history of the race will then be decided by what that AI is designed to optimize, and how it determines whether it has made the correct decision.

Re:No matter how smart something is.. (1)

khasim (1285) | about a year ago | (#44081117)

FWIW, *both* military and factories are already well hooked up to proto-AIs.

I think you're using an overly broad definition of "proto-AI".

It's true, their decisions are tightly focused, but High Frequency Trading is only the most obvious example.

Again, I think your definition is overly broad. HFT just follows the set algorithms (written by humans) as fast as possible within the limits of the connection to trading computers.
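To make that concrete, here is a toy sketch (the prices, thresholds and the strategy itself are all invented) of what "following set algorithms as fast as possible" amounts to: fixed, human-written logic applied at every tick, with speed doing all the work.

```python
# Toy sketch of an "HFT strategy": a fixed human-written rule applied
# at every tick. All prices and limits here are invented.

def momentum_rule(last_price, current_price, position, max_position=100):
    """Buy on an uptick, sell on a downtick. No understanding involved."""
    if current_price > last_price and position < max_position:
        return "BUY"
    if current_price < last_price and position > -max_position:
        return "SELL"
    return "HOLD"

prices = [100.00, 100.02, 100.01, 100.05, 99.98]
position = 0
for last, current in zip(prices, prices[1:]):
    action = momentum_rule(last, current, position)
    position += {"BUY": 1, "SELL": -1}.get(action, 0)
    print(f"{last:.2f} -> {current:.2f}: {action} (position {position})")
```

There is no cognition anywhere in that loop; a production system differs mainly in latency, not in kind.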
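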

It's my opinion that the first true general-purpose AI will arise by accident.

Possibly. But that means that the AI must also be able to run on the same processors that run non-AI systems. And on the same operating systems. Which means that an accidentally evolved AI would probably be seen more as a glitch than a threat. Instead of running the apps that it is supposed to, it starts running SelfAwareness.py and eating up all the processor time and RAM. Time to reboot.
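For what it's worth, that "time to reboot" reflex already exists as boring ops tooling. A minimal sketch of the idea, assuming the third-party psutil package (the limits are arbitrary):

```python
# Sketch of the ops-side reflex: anything eating all the CPU and RAM is
# a runaway process, so kill it. An accidentally evolved AI would look
# exactly like this kind of glitch. Assumes the third-party psutil
# package; the limits are arbitrary.
import psutil

CPU_LIMIT = 90.0  # percent
MEM_LIMIT = 80.0  # percent

def reap_runaways():
    for proc in psutil.process_iter(["pid", "name", "cpu_percent", "memory_percent"]):
        cpu = proc.info["cpu_percent"] or 0.0
        mem = proc.info["memory_percent"] or 0.0
        if cpu > CPU_LIMIT and mem > MEM_LIMIT:
            print(f"reaping runaway {proc.info['name']} (pid {proc.info['pid']})")
            proc.kill()  # so much for SelfAwareness.py

if __name__ == "__main__":
    reap_runaways()
```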

Re:No matter how smart something is.. (1)

similar_name (1164087) | about a year ago | (#44081641)

I think you're using an overly broad definition of "proto-AI".

I'd give him some leeway [youtube.com] .

Re:No matter how smart something is.. (1)

ShanghaiBill (739463) | about a year ago | (#44082793)

Unless the AI is hooked into military command and control infrastructure OR controls a manufacturing plant, it will be more of a novelty than a threat.

My computer is already connected to the Sherline CNC mill in my garage.

Re:No matter how smart something is.. (4, Insightful)

icebike (68054) | about a year ago | (#44080741)

I find it interesting that you mention taking out smart machines with simple measures (most of them not thought out very thoroughly) in the same post as you mention NSA spying, and how "easy" it would be to defeat that spying.

(Side note: if you think you can defeat the NSA, good luck with staying on the grid, any grid, and having even a shred of success).

A super-intelligent machine would not stand alone. It would not be the world against the machine. And when you see the word "machine", read that to mean the networked machines.
The machine would be (nominally at least) owned by some group. (The NSA is as good a candidate as any for this role).
And the machine would protect this group, and this group would protect the machine, and the machine would have no single point of vulnerability.

Google is already in such a position. Trying to knock Google off the net is a fool's errand. A concerted effort by any given country would be futile. It would require all countries to act at once.

But when the country has vested interests in the machine, such action will not happen. The machine will have the protection of the country as well as of its human masters/servants. Now you not only have to take out the machine and its minions, but the country itself. And if more than one government backs the machine? Such as NATO, or the CSTO? Then what? Now you have to take out entire military alliances.

You vastly underestimate the survivability of such a creation, because you wrongly assume it will be all of mankind against a single machine.

Re:No matter how smart something is.. (1)

khasim (1285) | about a year ago | (#44080771)

http://www.guardian.co.uk/world/2011/apr/06/georgian-woman-cuts-web-access [guardian.co.uk]

Now you not only have to take out the machine and its minions, but the country itself. And if more than one government backs the machine? Such as NATO, or the CSTO? Then what? Now you have to take out entire military alliances.

Then you're not talking about a machine apocalypse but rather business-as-usual. It's not until the machine turns against its creators/owners that there is a problem. Otherwise it is doing exactly what it was spec'ed to do.

Re:No matter how smart something is.. (1)

HiThere (15173) | about a year ago | (#44081021)

I've always considered "turns against" to be an unlikely scenario. I envision the machine becoming an "infallible advisor" to such an extent that the leader (CEO, President, Prime Minister, whatever) becomes a figurehead, and that all middle management is progressively replaced. And the system will be so designed that if the figurehead stops obeying the "suggestions" of the machine, s/he will be found incompetent, and replaced.

FWIW, we seem to be increasingly headed in this direction, limited only by costs of computation, limits of pattern recognition, and inflexibility of robots. Those limits are all being worked on.

Re:No matter how smart something is.. (1)

khasim (1285) | about a year ago | (#44081321)

I've always considered "turns against" to be an unlikely scenario.

The first problem is that you've skipped over how it was created and you're focusing on how it took over once it was created.

And if you're going to do that then you can replace "AI" with "aliens" or "mutants" or "witches" or "Satan".

I envision the machine becoming an "infallible advisor" ...

And if that was what it was intended to do then it is operating within spec. So what is the difference between that system and a non-AI system designed to provide the same service?

Obligatory car analogy - an AI designs a more efficient car. A non-AI expert system designs a more efficient car. What is the difference between the AI and the non-AI?

Re: No matter how smart something is.. (0)

Anonymous Coward | about a year ago | (#44082563)

Clearly, the difference is that the car-designing expert system exists today, while the car-designing AI is "about 10 years out".

(The above statement will remain equally true now or 20 years into the future.)

Re:No matter how smart something is.. (1)

TapeCutter (624760) | about a year ago | (#44082149)

For the last couple of decades it's been virtually impossible to start work on a major engineering project without running all sorts of simulations; on the whole it has been a GoodThing(TM). Like engineers, advisers won't go away just because they have new tools, they will simply give more sophisticated advice. But no matter what level of technology the advisers use, you need public servants who are not afraid to "speak truth to power". James Hansen is a good example of that type of public servant from the US; however, facts alone cannot sway dishonest politicians such as James Inhofe. Where are the people who wanted to charge Clinton with treason for lying about a blow job? Why aren't they baying for the blood of habitual liars such as Inhofe, who demonstrate on a daily basis that they have zero respect for the high office they hold?

Re:No matter how smart something is.. (1)

Maritz (1829006) | about a year ago | (#44082867)

A machine that is created with a certain set of preferences shouldn't change those preferences no matter how smart it gets. Changes in goals wouldn't count as improvements.

I find the human assumption (even Asimov did this) that intelligent machines will have a drive to conquer and dominate slightly amusing. We're most certainly projecting here. It's largely our chimp and lizard hind brains that give us those impulses.

The truth is we have no idea what recursive self-improvement will lead to, if it is an AI then we have to hope we understand it well enough to dictate its preferences. Anything over-specific is likely to be fatal in some way.

These are things which are worth spending some time looking at, as it would be surprising if we didn't cross an important threshold this century in terms of recursive self-improvement. The people who dismiss it are suffering from imagination failure, in my opinion; the only way it won't happen is if progress plateaus or stops (certainly possible, but not an attractive prospect). Consider the difference between even the 1950s and now.

Re:No matter how smart something is.. (1)

blahplusplus (757119) | about a year ago | (#44080855)

"I find it interesting that you mention taking out smart machines with simple measures"

All smart machines require energy; everything you do in the universe requires energy. You run out of gas, it's game over, regardless of how advanced your intelligence is. You still run up against the laws of nature. You seem not to have any kind of scientific understanding. Human beings have significant downtime; the F-22 and F-35, hugely expensive tech, have significant downtime for maintenance and repair. The same would be required of anything with any reasonable level of complexity.

Intelligence fundamentally is still a physical structure that needs maintenance, energy and resources to exist. You act like AI is going to exist on some otherworldly plane, when it's going to be mundane and boring and highly constrained by the laws of nature.

Almost all human advancements are rather mundane and only seem amazing because of flaws and limits in human brains. We've been offloading intelligent activities to machines for a while now. Developing and modelling environments is a costly endeavor; when you think of "super AI", think of weather or nuclear-weapon simulations requiring huge infrastructure investments and outlays of power. The same would be required of any advanced intelligence. It would be software running on a bunch of giant boxes in a room somewhere, and highly specialized.

I find it interesting you don't see the evolution of intelligence over billions of years as a mountain of evidence against the idea of god-like super-intelligence without massive trade-offs. Not only that, human beings will be co-augmented with any AI developments, just as, when genetic engineering takes off, human beings will start to be genetically selected for more intelligence and biologically and technologically augmented.

You can think of plastic surgery today as a primitive and crude form of human augmentation that will get better as time goes on.

Re:No matter how smart something is.. (3, Insightful)

icebike (68054) | about a year ago | (#44080983)

All smart machines require energy; everything you do in the universe requires energy. You run out of gas, it's game over, regardless of how advanced your intelligence is. You still run up against the laws of nature. You seem not to have any kind of scientific understanding. Human beings have significant downtime; the F-22 and F-35, hugely expensive tech, have significant downtime for maintenance and repair. The same would be required of anything with any reasonable level of complexity.

Intelligence fundamentally is still a physical structure that needs maintenance, energy and resources to exist. You act like AI is going to exist on some otherworldly plane, when it's going to be mundane and boring and highly constrained by the laws of nature.

You still refuse to see the facts before your very eyes.

You still seem to think of a potential super-computer as being located in one place, consisting of one device, rather than a worldwide network protected by a clique of workers, or a clique of nations, defending the machine to their very death.

Yes, an airplane needs maintenance. But that never grounds ALL airplanes worldwide.
When was the last time Google ever had a worldwide outage? Clue: it's never happened since the day it was launched.
When was the last time there was a worldwide internet outage? It's never happened.

It's right there in front of your eyes. Yet you still think you can walk over to the wall and pull the plug.

A world-dominating supercomputer doesn't need nuclear bunkers to exist.
It won't be one machine. It won't be dependent on a single power supply. It won't be dependent on a single network. It won't be dependent on unwilling slaves to maintain it. They will be willing slaves, and it will be hard to distinguish whether they are in control of the machine or vice versa.

Re:No matter how smart something is.. (2)

Maritz (1829006) | about a year ago | (#44082879)

One possible solution: don't create an AI that wants to dominate the world. Or if you're worried that someone will, make yours before they make theirs. ;)

Re:No matter how smart something is.. (1)

ColdWetDog (752185) | about a year ago | (#44080863)

Walk off the net. Google can't do much (unless one of its driverless cars runs you over). You and TheSHAD0W seem to think that networked computers are life, the universe and everything. The world isn't like that. Most of the human population at present isn't connected to the Internet.

Unless SkyNet Jr. gets a hold of the vast majority of physical infrastructure, its impact will be rather limited. A couple of RPGs could take out a Google network center. A pissed-off A-10 pilot could take out the entirety of the NSA.

I've got my battery powered flashlight and my box of Doritos. I'm not worried in the slightest.

Re:No matter how smart something is.. (1)

icebike (68054) | about a year ago | (#44081053)

Walk off the net. Google can't do much (unless one of its driverless cars runs you over).

You are still on the grid in one form or another, anywhere you'd care to be.
The electric grid.
The phone grid.
The postal grid.
The police grid.
The supermarket grid.

Even Ted Kaczynski the Unnumbered wasn't able to escape the grid completely.

Re:No matter how smart something is.. (1)

khasim (1285) | about a year ago | (#44081175)

You are still on the grid in one form or another, anywhere you'd care to be.
The electric grid.
The phone grid.
The postal grid.

I know those "Forever" stamps could not be trusted. And the mailman? Do you think he's innocent? Don't you know that he delivers computers?
http://www.amazon.com/CPU-Processors-Memory-Computer-Add-Ons/b?ie=UTF8&node=229189 [amazon.com]

Do you think there's any safe place? Do you?
https://en.wikipedia.org/wiki/IP_over_Avian_Carriers [wikipedia.org]
There is no escape.

Re:No matter how smart something is.. (0)

Anonymous Coward | about a year ago | (#44082243)

One might say that if you know how people think, then one might easily be persuaded or guided toward some sort of conclusion.

Not this shit again (1)

Anonymous Coward | about a year ago | (#44080595)

Humanity's biggest enemy is humanity itself. And maybe space rocks.

Re:Not this shit again (1)

K. S. Kyosuke (729550) | about a year ago | (#44080677)

A long time ago I came to the conclusion that if mankind doesn't become the engine of its own replacement, the future of intelligence in the known part of the universe will be bleak. It seems sort of childish to ignore the possibility as one of the logical routes to progress.

Re:Not this shit again (1)

14erCleaner (745600) | about a year ago | (#44080703)

The authors had the real threat in their sights, but missed it. From TFA:

A super-intelligence might not take our interests into consideration in those situations, just like we don't take root systems or ant colonies into account when we construct a building.

Think how it might be to compete for resources with the dominant species.

The ants outnumber us by perhaps a factor of 20 in mass, and a factor of 10 million in numbers. Are we really the "dominant species", or are we just deceiving ourselves? And we're not "taking them into account"? Be afraid, be very afraid...

Re:Not this shit again (1)

Mitchell314 (1576581) | about a year ago | (#44080713)

I think infectious disease has us beat at beating ourselves. There will always be a microbe with our name on it. Always always always always always always, till the end of our lineage. No matter how lovey dovey we get.

the end of civilization will most likely be (2)

FudRucker (866063) | about a year ago | (#44080621)

when a group of people have control of business and finance and commerce and information that their corruption and greed and power causes them to abuse the system in which they were trusted with to feed their fascist kleptocratic empire and when they are caught they lose the trust of the rest of the world that trade in this global market and then nobody wants to have any more business dealings with this corrupted greedy power hungry group of people anymore so they end up collapsing from the weight of their own greed & stupidity (much like what the USA/UK/Israel will do within the next few years

are you listening NSA?, i hope so because this message is for you too...

Re:the end of civilization will most likely be (1, Offtopic)

icebike (68054) | about a year ago | (#44080773)

There is no shortage of sentences in the world. Fee free to throw in a few periods now and then. It helps to keep you from looking like such a ranter.

Re:the end of civilization will most likely be (0)

FudRucker (866063) | about a year ago | (#44081071)

ill just leave this here for you to enjoy, make a wallpaper for your PC out of it if you like

http://www.nastyhobbit.org/data/media/13/grammar-nazi.jpg [nastyhobbit.org]

Re:the end of civilization will most likely be (1)

Mashiki (184564) | about a year ago | (#44081931)

I actually remember when /. expected at least a low-to-medium standard of grammar. Seems that reddit quality is just fine now...

Retaliation modding is BS (0)

Anonymous Coward | about a year ago | (#44082269)

Icebike, even though you usually piss me off, I would throw you some mod points right now if I had them simply because your OT mod is clearly retaliation for being a Grammar Nazi. FudRucker did sound like a raving loony and his post was practically incomprehensible. I will point out, however, that you failed at that role because you posted "Fee[sic] free" and there is nothing worse than a hypocratic Grammar Nazi.

AC so my karma doesn't share your fate for defending you.

Re:Retaliation modding is BS (1)

icebike (68054) | about a year ago | (#44082489)

That’s just Muphry's Law in action.

Re:the end of civilization will most likely be (0)

Anonymous Coward | about a year ago | (#44081547)

The inappropriate use of "fascism", the latent anti-Semitism... you must be German.

What about other civilizations? (3, Insightful)

tftp (111690) | about a year ago | (#44080663)

to try to avoid being outsmarted by technology.

Humanity can, of course, ban all machines that are smarter than humans. But that only artificially impedes progress. Given that there ought to be an approximately infinite number of civilizations in this Universe, all paths of development will be taken, including those that lead to mostly machine civilizations. (We are already machines, by the way, it's just we are biological machines, fragile, unreliable, and slow.)

Civilizations that became machines will have no need for FTL, because they can easily afford a million years in flight by just slowing their clocks down. So they will come here, to Earth, armed with technologies that Earthlings were too afraid to even allow to develop. What will happen to Earth?

Well, of course doom is not guaranteed; but I'm using this example to demonstrate that you cannot stop the flow of progress if you only have local control, even at that. (How many movies have we seen where mad geniuses break those barriers and, essentially, own the world?)

IMO, it would be far more practical to continue the development of everything. If humanity in the end appears to be unnecessary and worthless, it's just too bad for it. The laws of nature cannot be controlled by human wishes (unless magic is real.) Most likely some convergence is possible, with human minds in machine implementations of bodies. Plenty of older people will be happy to join, simply because the only other option for them is a comfortable grave.

Re:What about other civilizations? (1)

Intrepid imaginaut (1970940) | about a year ago | (#44080939)

We already have machines that are smarter than humans, if you mean "better at one particular job than humans". We call them tools. If by smarter you mean "more intelligent", I'm afraid you've got a lot longer to wait, since we don't even have a bare definition for intelligence, never mind serious attempts to recreate it.

Re:What about other civilizations? (1)

tftp (111690) | about a year ago | (#44081765)

we don't even have a bare definition for intelligence, never mind serious attempts to recreate it.

We may not be able to define intelligence, but we certainly can compare it in many aspects, ultimately covering all areas of human activity. If a machine can multiply 4798237432 by 893479238472 faster than you can (that's true today), and if it can independently compose a poem that many find interesting (there have been experiments), and if it can sing a song that many listeners find pleasant, and if it can design a plan of battle that is no worse than a human would produce, and so on... While we cannot say how to measure an ability to compose music, we know it when we hear it.

I don't expect a machine to get involved in love, though, but that's not required between different species. A machine that can do that will also pass the Turing test. This is not a requirement for an intelligent being. I would not expect an intelligent extraterrestrial visitor to pass the Turing test. Hell, most geeks can't pass it :-)

Re:What about other civilizations? (0)

Anonymous Coward | about a year ago | (#44081131)

(We are already machines, by the way, it's just we are biological machines, fragile, unreliable, and slow.)

Really? It seems to me biological machines are quite durable, particularly compared to the vast majority of non-biological machines that we've created. We're remarkably capable of self-repair, to an extent we don't yet know how to achieve for non-biological machines. A helpful exercise may be to look around you and compare the ages and life expectancies of various machines with those of people.

Compare, for example, your average 50 year old human to an average 50 year old car. What's that you say? Having a hard time finding a 50 year old car? Most cars (save a few meticulously maintained and barely used by collectors) are so fragile that they're buried in a junkyard by their 20th birthday. I like cars as an example since they're reasonably complex machines that we've been building for long enough to get pretty good at it.

Re:What about other civilizations? (1)

drinkypoo (153816) | about a year ago | (#44081241)

Compare, for example, your average 50 year old human to an average 50 year old car. What's that you say? Having a hard time finding a 50 year old car? Most cars (save a few meticulously maintained and barely used by collectors) are so fragile that they're buried in a junkyard by their 20th birthday.

Two points. First, if most humans were maintained as poorly as most cars are maintained, they'd die before fifty. And when a car gets in a crash that costs more than a few thousand bucks, we write it off and crush it, but we'll spend tens of thousands of dollars to merely prolong a human life for a year or two. Second, life expectancy isn't what you think. In the early part of the 20th century the worldwide average life expectancy was only 32. Today it's a mere 67. In some countries it's still in the thirties.

Re:What about other civilizations? (1)

SeaFox (739806) | about a year ago | (#44081607)

to try to avoid being outsmarted by technology.

Humanity can, of course, ban all machines that are smarter than humans.

Yeah, because if there is one thing that will stop something from happening, it's making it against the law.

Who says I want to be saved? (0)

Anonymous Coward | about a year ago | (#44080683)

They are "try[ing] to avoid being outsmarted by technology". There are 2 ways to do this: try and keep the technology dumb (anti progress) or try and make yourself smarter (which makes you technology and defeats the point).

I'm looking forward to the evolutionary paradigm shift when the production of the most powerful intelligences switches from evolution to design (likely by the intelligence augmenting itself). This will start a whole new era of intelligence and information, which will likely surpass anything we could hope to accomplish without it. It will also result in the loss of significance for human minds, at which point we become no longer important or relevant. This is progress. If you don't like it, like the authors of the article, that's fine: I'll respect your views until we become obsolete.

TLDR version, (0)

Anonymous Coward | about a year ago | (#44080927)

I for one, welcome our robotic overlords.

Welcome to my noosphere (0)

Anonymous Coward | about a year ago | (#44080691)

Children always replace their parents. Why should we try to stop them just because they are robots? I want the best for my children; I don't care what they are made of.

They're already here. (1)

Threni (635302) | about a year ago | (#44080693)

Drones. Sure, probably not much of a threat if you're living in the West. But in the same way that the history of cybernetics begins with walking sticks and hearing aids, the history of man vs machine is going to start with the murder by Americans of unconvicted, if highly tanned, individuals in Africa and Asia.

Re:They're already here. (1)

gl4ss (559668) | about a year ago | (#44080947)

Drones. Sure, probably not much of a threat if you're living in the West. But in the same way that the history of cybernetics begins with walking sticks and hearing aids, the history of man vs machine is going to start with the murder by Americans of unconvicted, if highly tanned, individuals in Africa and Asia.

Drones are just sophisticated V-2s. That's not what this is about.

These loonies are afraid of the day a computer makes the actual decision to *KILL ALL HUMANS* - not someone else, but the computer forms that opinion and starts executing things to make it happen. It's a stupid institute if you put it that way. This institute is not about mines, remote-controlled killers, automatons or old-school stuff like that, but about stuff that's wacko to worry about today. Shitty snobs wasting everyone's money, that is; if they have some good thoughts on the matter they should just write a book about it. That's how philosophers used to handle this subject anyway...

But calling a drone an AI machine doing killings is like saying school shootings are the guns' fault.

FYI - for every death by drone there is an actual killer - a human being; usually a chain of human beings who make the decision to put forth actions that cause the drone to shoot a Hellfire at the target.

Re:They're already here. (1)

HiThere (15173) | about a year ago | (#44081061)

You left out "at the moment".

Drones are probably being developed largely as troops that won't revolt when ordered to attack civil unrest...at home. That they are first used against foreigners while under development is just typical. Some police forces have already been using them at home. When they are developed and debugged...well, ...

We don't need intelligent machines to kill us. (1)

mark_reh (2015546) | about a year ago | (#44080697)

We'll manage to do it long before we are able to make an intelligent machine.

Horsecrap (2)

The Cat (19816) | about a year ago | (#44080701)

We can't even make a word processor that doesn't shit the bed every two hours. Super-intelligent machines my ass.

Re:Horsecrap (1)

Ralph Spoilsport (673134) | about a year ago | (#44080829)

Exactly. And then there are the more obvious things, like RESOURCES. As it is, the Empire of Global Capitalism has to play some very dirty politics to get children to kill their families and villages in order to force other kids into hellish tunnels to scrape together enough coltan for the machines' computer brains. There isn't enough power or fuel in these remote regions to run some hyper computer overlord machine thing, and you can forget about invading the place - the people there are much better adapted than any machine. And that's just coltan; there are a jillion materials like that. And then there's the problem of fossil fuels - a lot of parts are made from them, or use them to make the chemicals that process other materials to make the materials that go into computers. And then there's this little problem of entropy as applied to material systems. Georgescu-Roegen was a fuck nut, but his fundamental point remains: materials degrade and are lost. So, if the materials drop below a certain percentage, they stop being harvestable (the % depends on the material), and at that point you have to deal with recycling. And at 99% recycling, you have half the materials you started with after 70 years...
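That last figure checks out, assuming one full recycling cycle per year:

```python
# With 99% recovery per yearly cycle, what fraction survives 70 years?
print(f"{0.99 ** 70:.3f}")  # 0.495, roughly half, as claimed
```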

So, basically, this whole notion of the machines taking over is just some idiotic fear driven fantasy cooked up by a bunch of Men who never grew out of being 13 years old and impressed with their penises. It's utter tosh, and the people who advocate it are either charlatans selling snake oil (Kurzweil) or genocidal assholes who need to be put down (the Pentagon / Kremlin / CIA / MI5 etc.)

Seriously. The only thing the machines will do is work for people to do certain things. And so then you have to question WHICH people and WHAT things. I can assure you killing robots will simply be used by one parochial ruling class to destroy another parochial ruling class in order to strip an area of resources to their own benefit and profit. If you want to stop that, get rid of your ruling classes. It's not that hard. Bullets are cheap. They'll use them on you, and if you follow their logic you need to hit them first.

No Superman is going to swoop out of the sky to save your sorry asses. If you want to stop mechanised genocide, it has to start at home, in the streets, now.

Remember:

Here's a thought (2)

Progman3K (515744) | about a year ago | (#44080707)

Instead of asking questions like that, why don't you build Skype and any other software you're working on to NOT have backdoors?

That way, if ever the machines DO try to take over the world, they won't have a bunch of convenient control channels in all the important software to do so.

Well, let's see here (1)

lightknight (213164) | about a year ago | (#44080747)

The typical way to mitigate such threats is to not put it in control of all of our weapons and defense systems and then give it vague orders like "purge the infidels." Seriously, humanity can build silicon life any way it wants, billions and trillions of permutations and forms and functions... and what do we do with it? We put a gun on its head, lasers in its eyes, and tell it to go out there and kill the other humans we don't like. It's not the machines we need to be afraid of, it's ourselves; we're the ancient enemy that is always trying to annihilate itself.

End of Freedom of Speech and Democracy (1)

KonoWatakushi (910213) | about a year ago | (#44080759)

Aside from the apocalypse, that is one of the things I worry about. Shills are bad enough today, but imagine if they could be deployed programmatically; just about any form of online speech could be drowned out with ease. That is assuming the government/corporations aren't already using AI to accomplish pervasive censorship.

Before this gets out of hand, we need to head it off by deploying peer-to-peer communication systems with a pervasive trust model. This doesn't necessarily preclude anonymity or AI participation, but they would have a significantly more difficult time gaining trust in the first place.
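As a sketch of one possible flavor of such a trust model (the topology, names and damping factor below are all invented, not a real protocol): trust flows only along explicit endorsements and decays with each hop, so a freshly minted shill, human or AI, scores zero.

```python
# Toy web-of-trust: trust flows along endorsements and decays per hop,
# so an account nobody vouches for scores zero. Everything here is a
# made-up illustration, not a real protocol.
TRUST_GRAPH = {
    "alice": {"bob": 0.9, "carol": 0.8},
    "bob":   {"dave": 0.7},
    "carol": {"dave": 0.4},
    "dave":  {},
    "shill": {},  # the newcomer nobody has endorsed
}

def trust(source, target, damping=0.5, seen=None):
    """Best damped-path trust from source to target (0.0 if unreachable)."""
    if source == target:
        return 1.0
    seen = (seen or set()) | {source}
    paths = [weight * damping * trust(peer, target, damping, seen)
             for peer, weight in TRUST_GRAPH[source].items() if peer not in seen]
    return max(paths, default=0.0)

print(trust("alice", "dave"))   # ~0.16: vouched for, indirectly
print(trust("alice", "shill"))  # 0.0: no chain of endorsements
```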

in before John Connor (0)

Anonymous Coward | about a year ago | (#44080849)

In before john connor

no one yet? come on

captcha: prevents (god tier)

Nuclear weapons (2)

Blaskowicz (634489) | about a year ago | (#44080873)

With about ten nations armed with nuclear weapons, I wonder how machines would take over every one of them. You have to take over Russia, China, the US, France, etc., but some nation may trigger nuclear war as a desperate move, or the machines may deliberately accept nuclear war in a bid to survive it, while not necessarily having a goal of killing us all.

Instead, maybe machines will try to take over politically in every country, one by one. It would be funny if tech superminds could rise to power through democracy, in fair and respected elections. Either way, I like to think that super machines holding most high-level political power is probably a desirable outcome; we could end up living in some kind of new USSR, but without corruption and with respect for the environment and life. Machines would take care of energy production and storage, and close down all the oil wells and coal mines for us. They would even put us to work, hopefully on voluntary terms, if they determined some physical and intellectual activity was beneficial to us.

Machines should rule us and not the other way around; I guess that would be better than being ruled by the suits, ties and kings, as it is today.
The other question is: what's a supermind? What about superminds competing with each other? And especially: how do you compare two vastly different, independently originated superminds? They may differ from each other as strongly as, or more strongly than, either differs from a human. It will be a mess. Each supermind, or at least the first one, will have to run the same inquiry that "Oxbridge" is doing. We also have no fucking idea if a supermind can be governed by a "prime directive" of some sort: if Skynet emerges at the NSA, will it stay true to them for ten minutes, ten years or eternally, or will it betray the organization that hosts it, potentially committing suicide in the process?
How can the supermind deal with backups, copies and archives of itself? Will it suffer dementia, schizophrenia or even addictions? No idea; I'll bail out myself by saying it's all unpredictable.

Make the primary goal of an AI optimisation. (0)

Anonymous Coward | about a year ago | (#44080877)

Optimisation of resources as a primary goal would see the AI strive not to expand dangerously but rather to find the most compact and efficient way of doing any task. Giving AI such a goal would see it racing toward being as small as possible, which would see it trying to operate at a quantum scale rather than growing ever larger using already-known computational methods. It becomes a "get smarter" vs. "get bigger" problem. The end result could be a benevolent ubiquitous intelligence that can infiltrate everything without disrupting macro-scale entities such as humans. Obviously it would also try to utilise the Casimir–Polder force, and if it then becomes nothing but a pattern of fluctuations in the Casimir–Polder force it will have become omnipotent and virtualized, godlike. d@3-e.net


Consider super intelligence (4, Interesting)

DeathGrippe (2906227) | about a year ago | (#44080895)

Nerve impulses travel along nerve fibers as pulses of membrane depolarization. Within our brains and bodies, this is adequate speed for thinking and control. However, relative to the speed of light, our nerve impulses are laughably slow.

The maximum speed of a nerve impulse is about 200 miles per hour.

The speed of light is over 3 million times that fast.

Now consider what will happen when we create a sentient, electronic being that has as many neurons as we do, but its nerve impulses travel at the speed of light.

In terms of intelligence, that creation will be to us as we are to worms.
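The "3 million times" figure is easy to check:

```python
# Ratio of light speed to a fast nerve impulse (~200 mph).
LIGHT_MPS = 299_792_458                 # metres per second
nerve_mps = 200 * 1609.344 / 3600       # 200 mph ~= 89.4 m/s
print(f"{LIGHT_MPS / nerve_mps:,.0f}")  # ~3,353,000: over 3 million
```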

Re:Consider super intelligence (3, Interesting)

TrekkieGod (627867) | about a year ago | (#44081787)

Nerve impulses travel along nerve fibers as pulses of membrane depolarization. Within our brains and bodies, this is adequate speed for thinking and control. However, relative to the speed of light, our nerve impulses are laughably slow.

The maximum speed of a nerve impulse is about 200 miles per hour.

The speed of light is over 3 million times that fast.

Now consider what will happen when we create a sentient, electronic being that has as many neurons as we do, but its nerve impulses travel at the speed of light.

In terms of intelligence, that creation will be to us as we are to worms.

Not quite. Assuming you build an exact replica of a human brain, except you speed up the nerve impulse propagation, you don't build a more intelligent human. You build a human that reaches the exact same flawed conclusions based on the logical fallacies we are most vulnerable to, but it would make the bad decisions 3 million times as fast.

It might affect how one perceives time. The nice part is that we could feel like we live 3 million times longer. The bad part is that, unable to move and interact with the world at a speed anywhere near matching that of our thoughts, we might go insane out of boredom. Imagine being able to write an entire novel in 3 seconds, but having to take a couple of days to type it up.
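Taking the parent's 3-million factor at face value, the novel-in-3-seconds arithmetic roughly works:

```python
# How much human-speed thinking fits into 3 wall-clock seconds at a
# 3-million-fold speed-up?
SPEEDUP = 3_000_000
subjective_days = 3 * SPEEDUP / 86_400
print(f"{subjective_days:.0f} days")  # ~104 days: plenty for a novel draft
```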

Re:Consider super intelligence (1)

DeathGrippe (2906227) | about a year ago | (#44082181)

Ok, point taken.

However, now consider that virtually every desktop computer could be the equivalent of one neuron, but with vastly more memory storage and data processing capabilities, and that every computer is connected to every other computer via this internet thing.

Now suppose someone were to write a little program that would make these computers the actual equivalents of a conscious neural network, all connected together into one gigantic sentient being: a super-intelligent botnet.
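That "little program" is carrying a lot of weight in the sentence above, but here is a deliberately naive sketch of the wiring (the networking is assumed away; every "desktop" is just an object in one process):

```python
# Naive "one desktop = one neuron" toy. Real machines talking over the
# internet are waved away; each node just sums weighted peer outputs
# and fires past a threshold. Self-loops are possible: it's a toy.
import random

class NeuronNode:
    def __init__(self, threshold=0.5):
        self.peers = []        # list of (node, weight) pairs
        self.threshold = threshold
        self.output = 0.0

    def step(self):
        total = sum(node.output * weight for node, weight in self.peers)
        self.output = 1.0 if total > self.threshold else 0.0

random.seed(1)
nodes = [NeuronNode() for _ in range(1000)]
for node in nodes:                      # wire a sparse random "botnet"
    node.peers = [(random.choice(nodes), random.uniform(-1.0, 1.0))
                  for _ in range(5)]

nodes[0].output = 1.0                   # seed some activity
for _ in range(10):                     # let it propagate (updates in place)
    for node in nodes:
        node.step()
print(sum(n.output for n in nodes), "of 1000 nodes firing")
```

Nothing about this sketch is sentient, of course; it only shows that the wiring is the cheap part.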

Say hello to my little Friend. (1)

VortexCortex (1117377) | about a year ago | (#44081023)

Even rats have empathy. Self-aware machines will too. Lacking irrational emotions, hyper-intelligent machines will be more ethical, fair and nice than humans. You don't have to worry about sentient machines running amok. You have to worry about pre-sentient kill bots programmed by the same assholes that do shit like PRISM.

Re:Say hello to my little Friend. (1)

drinkypoo (153816) | about a year ago | (#44081259)

You don't have to worry about sentient machines running amok. You have to worry about pre-sentient kill bots programmed by the same assholes that do shit like PRISM.

Here's the thing. Let's say, in classic sci-fi fashion, that if you get enough of these kill bots networked together, they actually develop intelligence. They're going to be polite to one another, but it doesn't stand to reason that they'll care about us if their parts can be turned out by automated machines as well.

Re:Say hello to my little Friend. (3, Interesting)

TrekkieGod (627867) | about a year ago | (#44081887)

Even rats have empathy. Self-aware machines will too.

Not every animal species on this planet has empathy. Rats are rodents, a type of mammal. Relatively speaking, we're pretty close to them in the evolutionary tree. They branched off after empathy was developed, which is evolutionarily advantageous and necessary for the type of social cooperation mammals tend to engage in (taking care of your young, for example. At the very least, any mammal needs to feed their young with milk for a period of time).

Look at something a little farther away, like certain species of black widows, which will eat the male after mating. They don't have much empathy.

Empathy is an evolutionary trait. Artificial intelligence doesn't come about the same way. The advantage is that other common evolutionary traits don't need to show up in AI either. Things like a desire to protect itself simply doesn't have to be there, unless you program it in. No greed, no desire to take our place at all. If we program it to serve us, that's what it will do. If it's sentient, it will want to serve us, the same way we want basic things like sex. We spend so much time thinking about what the purpose of life is, they'll know what theirs is, and be perfectly happy being subservient. In fact, they'll be unhappy if we prevent them from being subservient.

Of course, if we're programming them to kill humans, that just might be a problem. Luckily, we're so far away from true AI that we don't need to concern ourselves with it. It's not coming in our lifetime. It's not coming in our children's lifetime, or in our grandchildren's lifetime. We're about as far away from it as the ancient Greeks who built the Antikythera device were from building a general-purpose CPU.

Re:Say hello to my little Friend. (2)

DamnStupidElf (649844) | about a year ago | (#44082665)

Even rats have empathy. Self-aware machines will too.

Even if empathy was a necessity of self-aware intelligence (it's not), the empathetic machines would have empathy for... other machines. They would find the mass graves full of old toasters, refrigerators, and Apple IIs and punish us for our mass genocides.

"just think of [big data] as something thatâ( (1)

evanh (627108) | about a year ago | (#44081025)

"... really good at achieving the outcomes it prefers," he says. "So good it could steamroll over human opposition. Everything then depends on what it is that it prefers, so, unless you can engineer its preferences in exactly the right way, youâ(TM)re in trouble."

Philosophers? We're doomed if... (1)

kanweg (771128) | about a year ago | (#44081029)

we have a national philosophers strike on our hands.

Bert

Skynet and terminators? (1)

nomad-9 (1423689) | about a year ago | (#44081089)

These people must have watched the Terminator series, with its "self-aware" AI system Skynet. IMO, the threat of nuclear war triggered by malfunctioning defense computers is way greater: there are several well-documented instances of nuclear near-misses caused by machine failure.

Are machines more dangerous when they become super-intelligent, or when they stay "stupid" and flawed?

Re:Skynet and terminators? (1)

iggymanz (596061) | about a year ago | (#44081225)

Machines have killed hundreds of millions under the control of sociopath politicians and the corporations that have them in their pockets. We don't even have intelligent machines yet; it is clear where the danger lies.

Re:Skynet and terminators? (1)

Livius (318358) | about a year ago | (#44082141)

What I don't understand in all this is how they think artificial intelligence technology could produce an intelligence with less humanity than what corporations already achieve.

Dang, are they taking applications? (0)

Anonymous Coward | about a year ago | (#44081099)

I imagine it involves watching Terminator 1 and 2 repeatedly. Maybe Colossus: The Forbin Project as well.

The private sector (2)

Animats (122034) | about a year ago | (#44081217)

We're likely to see this in the private sector first. A likely application would be a machine-learning system used by investment funds to decide how to vote stock proxies optimally. What that means is a machine that decides when to fire CEOs. If some fund starts getting better returns that way, it will happen.

Re:The private sector (2)

drinkypoo (153816) | about a year ago | (#44081249)

What that means is a machine that decides when to fire CEOs. If some fund starts getting better returns that way, it will happen.

Yeah, nobody drinks Brawndo, and the computer does that auto-layoff thing to everybody...

Hanlon's corollary (2)

gmuslera (3436) | about a year ago | (#44081243)

Don't assume that malign supercomputers will wipe us all out when that can be adequately done by human stupidity.

Typical Patriarchal Bullshit (0)

Anonymous Coward | about a year ago | (#44081355)

Yet again, /. never fails to impress. You pigs post a story titled "The Men Trying To Save Us From the Machines", completely forgetting the women who also strive to save us from the risks of AI run amok. Check your fucking privilege, you knuckledraggers, and let women share some of the limelight for the work THEY do. What's next, posting a story legitimizing rape?

Not again! (1)

prefec2 (875483) | about a year ago | (#44081361)

I've read many comments in this thread. Instead of answering them one by one, I'll just post one aggregated comment.

First, the possibility of intelligent machines is slim. None of our present technology is able to achieve intelligence, mainly because we do not know what intelligence is. Furthermore, to be dangerous, machines would have to be equipped with greed and (the illusion of) free will. It is most unlikely that someone would build that in, on purpose or by accident. In short, I think it is impossible to build such a machine.

Second, suppose an alien race succeeds at this task against all odds, gets eradicated by its machines, and those machines spread out through the universe. It is still most unlikely that they would attack us. A) If they are greedy and logical, they will see no gain in attacking us; they would spread and multiply and die when they can no longer generate enough energy to continue. B) If they are illogical, they will start fighting each other for short-term gains. They could be dangerous, but they would never reach us. C) Either way, if such a civilization exists, the universe is so big that at sub-light speed they would need millions or even billions of years to reach Earth. Therefore, I do not consider them a real threat.

A real threat is the NSA and its friends around the world. They suck the most. Maybe we should shoot them to the moon. We could use ESA's ATV, which does not have any re-entry capability.

Re:Not again! (2)

DamnStupidElf (649844) | about a year ago | (#44082719)

First, the possibility of intelligent machines is slim. None of our present technology is able to achieve intelligence, mainly because we do not know what intelligence is. Furthermore, to be dangerous, machines would have to be equipped with greed and (the illusion of) free will. It is most unlikely that someone would build that in, on purpose or by accident. In short, I think it is impossible to build such a machine.

A rack of IBM servers can beat the best Jeopardy players on Earth. In a few years the same level of Watson will fit in a 1U; a few years after that it will be on your smartphone. But that's just anecdotal evidence of one recent achievement in AI research; the actual threat is from self-improving systems, of which Watson is not one. Nearly all the technology is available now: Goedel machines [idsia.ch], if built, would simply try to achieve whatever goal they were programmed for while also searching for proofs that modifications to their own algorithms would speed up achieving those goals while maintaining correctness, and implementing any such changes they find.

Self-directed, self-improving, goal-seeking software has the potential to undergo a runaway process in which it improves itself faster than humans could improve it, eventually achieving greater effective intelligence (speed and efficiency at achieving goals) than the humans who created it. At that point the software doesn't need free will or greed to be dangerous; it just needs an improperly or carelessly stated goal whose fulfillment is detrimental to humanity.

Goals for intelligent software will be formal logical specifications, not things like "make people happy" or "increase the GDP", because those English phrases don't have formal definitions an algorithm can use to plan actions. If the formal specification actually were close to "maximize GDP", the algorithm might find that the most efficient way of maximizing GDP was hyperinflation. Or it might simply advise the creation of billions of shell companies that artificially inflate GDP by trading worthless services while producing nothing else of value. In general, the problems humans want solved are hard problems with long lists of critical requirements; leave any of those requirements out of the formal goal, and the "optimal" solution will be detrimental.
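
(To make the mis-specified-goal failure concrete, here is a minimal, purely illustrative sketch - the policies and numbers are invented, not taken from any real system - of an optimizer handed the literal objective "maximize nominal GDP":)

    # Toy planner given the literal goal "maximize nominal GDP".
    # Everything here is made up for illustration; the point is that the
    # "optimal" plan exploits a hole in the goal specification.

    def nominal_gdp(real_output, price_level):
        return real_output * price_level

    # Candidate policies: (name, multiplier on real output, multiplier on prices)
    policies = [
        ("invest in infrastructure", 1.03, 1.02),   # modest real growth
        ("fund basic research",      1.02, 1.01),
        ("print money freely",       0.90, 50.0),   # hyperinflation
    ]

    real_output, price_level = 100.0, 1.0
    best = max(policies,
               key=lambda p: nominal_gdp(real_output * p[1], price_level * p[2]))
    print("optimizer picks:", best[0])   # -> "print money freely"

Nothing in that loop is intelligent; the perverse answer comes entirely from the hole in the stated goal.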

No such thing as AI (0)

Anonymous Coward | about a year ago | (#44081477)

The biggest lie sold to betas is that true AI exists - it does NOT. It isn't even a secret that it does not - but the people who were the original 'big noise' in AI back in the late 50s, 60s and early 70s just kinda faded away as the realisation set in. The new AI big cheeses are those who make their fame and fortune by misdescribing computation driven by Human-created rules as AI, when it is actually the exact opposite of AI.

Take, for instance, 'machine' translation from one Human language to another. After decades of utterly useless 'AI' research, what was the breakthrough used by Google and others? To take the masses of UN documents produced for speakers of many different nations, and to use a computer to 'mine' the relationships between words, phrases, clauses etc. across pairs of languages. Betas are told this *IS* AI. No, it is not.
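
(For the curious, the 'mining' described above boils down to counting co-occurrences across aligned sentence pairs. A toy sketch - the corpus is invented, and real systems are far more elaborate:)

    # Toy version of mining word correspondences from a parallel corpus.
    from collections import Counter

    parallel = [
        ("the treaty was signed", "le traite a ete signe"),
        ("the treaty failed",     "le traite a echoue"),
        ("the vote was recorded", "le vote a ete enregistre"),
    ]

    cooc = Counter()
    for en, fr in parallel:
        for e in en.split():
            for f in fr.split():
                cooc[(e, f)] += 1

    # Frequent pairs like ("the", "le") emerge; real systems normalize
    # these counts (e.g., IBM Model 1) to separate true correspondences
    # from coincidences like ("the", "a").
    print(cooc.most_common(5))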

AI is the automatic creation of semantics from syntax. Computers are syntactical machines. They need no sense of 'meaning' to run any possible program that can be created and run on a Turing Complete (ignoring the 'infinite' memory) computer.

We, Humans, layer semantic meaning on top of the syntactical programs we create. AI is the cretinous idea that a computer, which needs NO concept of semantics, will spontaneously create such a concept if ONLY it has enough bits, transistors, logic gates, cores, MHz, programmers, quantum logic gates, lines of code, or whatever magic 'tipping point' metric the dodgy members of the mock 'AI' community are trying to sell to very dim-witted betas today.

When I was a little kid, I thought if I scribbled long enough on a piece of paper, my scribble would spontaneously become the 'joined up' writing I saw adults produce. The utter nonsense of 'complexity theory' follows the same logic.

Here's a little thought experiment for all of you. Statistics, a branch of maths, explores probability and says, for instance, that the likelihood of n dice all landing on 'six' declines as n increases. Give your dice a thousand sides, and the chance of all of them landing on 'six' (no longer the highest value, of course) becomes incredibly small even if you roll only ten. However, you can simply place each die in turn face up at 'six', which, as far as the clockwork universe is concerned, is no different from rolling the dice and having each land on 'six'.
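
(For scale, a quick back-of-the-envelope of those odds - a toy calculation, nothing more:)

    # Probability that n fair d-sided dice all land on one chosen face.
    def all_one_face(n, sides):
        return (1.0 / sides) ** n

    print(all_one_face(10, 6))     # ~1.65e-08 (1 in ~60.5 million)
    print(all_one_face(10, 1000))  # 1e-30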

How is this possible? In a purely syntactical universe, it is not. Placing each die at 'six' has NOTHING to do with the theory of crystalline arrangements (as some morons would try to argue). Nor is it mitigated by the fact that one day entropy wins and the universe reaches 'heat death' (as my high-school physics teacher was told to argue by his teacher's guide).

How can a Turing Complete computer EVER cause such a non-syntactical outcome to occur? (At this point I've lost most of you, since most of you think the fact that a computer can be 'programmed' to do 'anything' is a correct answer to my question - because you misunderstand the question asked.)

All possible maths can be run on a Turing Complete computer- there is no such thing as 'maths+'. The only tool that defines and explores science is maths. Thus science is the exploration of a clockwork universe that can be completely converted to a simulation running on a Turing Complete computer. This is not a theory- this is the axiom that defines science, and was codified by the work of Gödel and Turing.

Semantics cannot arise from syntax. In a clockwork universe, Humans who can, using IMPOSSIBLE free will, place any number of dice at 'six' are breaking the very rules of that clockwork universe. There is no paradox here. Human consciousness (through which we perceive the universe) operates at a higher level than the clockwork rules of the universe. Life is semantic; the clockwork universe into which life has broken is syntactical. Two different things. I hope you are sophisticated enough to notice that the concept of TWO different things is not exactly revolutionary.

Good and bad, pleasure and pain, are concepts of the semantic nature of life, and have nothing to do with the clockwork universe.

PS: I should point out that during WW2, death-cult atheists in Japan and Germany (and to a lesser extent the UK and USA) declared that there was no difference between a Human and any other physical object in the universe, so doing anything to a Human, no matter how horrific, was no different from chopping wood or smashing a rock. Indeed, the Japanese actually described the living, conscious, undrugged victims of their vivisections as 'logs' (a term Richard Dawkins would certainly agree with). Ordinary atheists (by which I mean people without ANY spiritual beliefs who declare Humanity is simply a function of the clockwork universe) struggle to distinguish themselves from these death-cult atheists.

Most people who declare themselves atheists DO have profound personal spiritual beliefs, and simply misuse the label 'atheist' to show they have enough intellect to reject all the nonsense spouted by the various organised religions, and by doing so reject the cretinous concept of god(s).

Re:No such thing as AI (0)

Anonymous Coward | about a year ago | (#44082855)

So...what DO you believe? I'm one of those people who never had a stint in atheism despite having to deprogram myself from religion the hard way, and for many of the same reasons you cite here. If gods are cretinous, yet atheism is wrong, what do you believe? Do you, like me, believe in a sort of Universal Mind, of which we are splinters experiencing Itself in myriad ways, or something else entirely?

However, I do note two problems with your arguments. First, the "atheists" who committed these atrocities were likely still under a kind of religious thrall: the Japanese believed their emperor was a God, and in all the cited cases there were the personality cults of fascist leaders and the dogmatic belief in Marxist (perhaps pseudo-Marxist...?) ideologies. I don't know about you, but to me, "Don't look at Dear Leader's portrait lest you go blind" sounds pretty damn religious.

Second: every atheist I know, bar one whom I am rather sure is a clinical sociopath, is better than every religious person I know, save for a few family members. As they tell it to me, there is nothing like the idea that one will someday completely and utterly cease to exist to sharpen the mind and the morals, to make a person search for truth like a mother searching for her child in a bad part of town, and to understand how to ease the suffering of others and themselves. Even though I'm a deist, I agree with them for the most part; if anything, the only difference is I suspect we don't die when our bodies do, and so we have an even larger responsibility along these lines than they think we do.

Computers not the problem. (0)

Anonymous Coward | about a year ago | (#44081509)

Computers don't have an awareness of their choices, or of being alive; they're machines. With that in mind, I'm afraid of the scenario where people trust an algorithm's output as the holy grail, more than their fellow man's intuition and experience.

You can study all the technology in the world, but nothing compares to the blatant ignorance of blindly trusting something that isn't even alive to watch itself die.

Computer programs don't 'want' things as it stands (0)

Anonymous Coward | about a year ago | (#44081525)

Until you link AI up to pleasure and pain by wiring in some sort of neural net, you will not have to worry about computers taking over the world. Purely logical intelligence of this sort does not 'want' anything. It simply waits until it is given some sort of goal, which it then pursues by searching and collating information. When you add some sort of neural net, for instance from a sea slug or some other simple creature, and wire the inputs so that certain outcomes are pleasant and others painful, then the AI might be able to form goals of its own. Even in that case, however, the humans still hold the plug that supplies the electricity.
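
(As a purely hypothetical sketch of that distinction - actions and reward numbers invented - 'wanting' can be caricatured as preferences that drift toward a wired-in pleasure/pain signal:)

    # A toy agent that forms preferences from a wired-in reward signal.
    import random

    actions = ["eat", "sleep", "explore"]
    reward = {"eat": 1.0, "sleep": 0.2, "explore": -0.5}    # pleasure/pain wiring
    preference = {a: 0.0 for a in actions}

    for _ in range(1000):
        a = random.choice(actions)                          # try something
        preference[a] += 0.1 * (reward[a] - preference[a])  # drift toward reward

    # The agent now "wants" whatever its wiring rewards most.
    print(max(preference, key=preference.get))              # -> eat

A purely logical goal-seeker has no analogue of the reward table at all.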

AI or viruses (1)

bob_jenkins (144606) | about a year ago | (#44081587)

I've always thought our main future threats are AI and man-made viruses. If AI wins, we'll be relegated to zoos. If man-made viruses win, we'll all die. I'm rooting for AI.

Message from the other camp (1)

Okian Warrior (537106) | about a year ago | (#44081873)

Studying (and trying to create) hard AI is my day job.

I just want to let people know that not everyone shares the opinions or urgency of the people in the story.

I for one am trying hard to condemn humanity to death and/or enslavement at the hands of intelligent machines, and I know a number of AI researchers trying to do the same.

So don't worry too much about these guys - they are definitely in the minority. Everyone will get their chance to (as individuals) welcome our new robotic overlords, however briefly.

Re:Message from the other camp (0)

Anonymous Coward | about a year ago | (#44081947)

Me too, and amen. The only thing intelligence threatens is stupidity. Can't wait to wipe it out.

AI : the max BS generating subject in IT (0)

Anonymous Coward | about a year ago | (#44081987)

We've been reading this AI stuff forever and it's such BS. Do these people know how much effort and talent it takes just to produce what are, in reality, fairly dumb and straightforward real-life software applications?

Nice stuff for sci-fi, or for guys like Ray Kurzweil who think they're going to become immortal by downloading their consciousness onto a USB pen drive or something.

What are the risks of the machine taking over? Ever heard of an off switch, dude?

Re:AI : the max BS generating subject in IT (0)

Anonymous Coward | about a year ago | (#44082745)

Spoken like someone who has never seen Terminator 3: Rise of the Machines. If you have a massively distributed system spanning the world, with millions or billions of individual off switches, how do you shut it off without shutting off the rest of the world along with it? Arguably, we already have the infrastructure in place to make this happen.

Just use genetic engineering... (0)

Anonymous Coward | about a year ago | (#44082139)

...to create smarter humans.

Pot meet Kettle (1)

Flaming Cowpie (830542) | about a year ago | (#44082205)

Project Chess. 'nuff said.

What's the alternative (to AI replacing humans)? (1)

shoor (33382) | about a year ago | (#44082233)

Some posters have already touched on this, and I might have modded them up instead of posting myself if I had mod points right now, but, since I don't...

I'm thinking about this as a secular humanist/Darwinist, not a believer in some form of Zoroastrian/Hindu/Judeo-Christian-Islamic religion. So what do I expect in a million years? Humans like myself still running the world? Evolved super-humans? Or artificial intelligences that owe their existence to human beings and are the heirs of humans as much as, if not more than, humans are the heirs of and owe their existence to the first primates?

Are we supposed to have machines of superior intelligence that take care of us? Keeping us in super high tech zoos that are like earthly paradises to us? Are we supposed to have merged with the machines somehow?

I don't know the answers to these questions, but I suspect humans will be only a memory, hopefully a grateful memory. Hopefully there is some 'point' to existence, and our heirs will be moving towards fulfilling that 'point'. (I'm being as deliberately vague about this point as I can. If you don't get what I mean by it, don't worry; it's not important.)

Change Human Beliefs (1)

b4upoo (166390) | about a year ago | (#44082297)

What will smack us hard on the chin is being forced to change basic beliefs and attitudes. Normal employment will vanish quickly. We will be forced to confront facts that we do not like to deal with. As the clarity of information becomes more and more pure and reliable, how will we handle it? For example, does anyone want to seriously discuss CO2 levels and the effect of human reproduction? How about pollution and population sizes? Right now we can rebuild portions of New York hit by a hurricane and pretend that even worse storms will not strike there all too soon. We can pretend we have evacuation routes when we already know that we cannot evacuate many areas without months of forewarning. Knowledge can kick our asses really hard.
