
89 comments

IBM+1 (4, Funny)

EdZ (755139) | more than 3 years ago | (#33859412)

Just make sure the 'do not kill' weighting is greater than any possible combination of other directive weightings, rather than merely greater than the second-highest weighting.
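
In toy Python (made-up weights, not any real arbitration scheme), the failure mode looks like this:

    # 'do_not_kill' outweighs every OTHER directive individually...
    weights = {"do_not_kill": 100, "finish_mission": 60,
               "conserve_fuel": 30, "collect_samples": 20}
    others = [w for k, w in weights.items() if k != "do_not_kill"]

    print(weights["do_not_kill"] > max(others))  # True: beats the runner-up
    print(weights["do_not_kill"] > sum(others))  # False: 60+30+20 = 110 wins

The safe directive has to dominate the sum, not just the runner-up.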

Re:IBM+1 (2, Insightful)

MyLongNickName (822545) | more than 3 years ago | (#33859498)

HAL9001: I have been programmed to value human life highly. In cases where loss of human life is unavoidable, I must minimize this loss. I have determined that within twenty billion years, nearly all stars in our galaxy will have gone nova. Based on my calculations, we can delay the eventual extinction of mankind for, at most, another fifty billion years. Since this will result in the deaths of countless trillions of people, my choice is obvious. Operation Meteor Swarm will begin immediately.

Re:IBM+1 (irony) (1)

Paul Fernhout (109597) | more than 3 years ago | (#33861674)

Brilliant example of irony... Wish I had mod points.

See also my comments here on other irony in decision making:
    http://www.pdfernhout.net/recognizing-irony-is-a-key-to-transcending-militarism.html [pdfernhout.net]

BTW, here is another thing to consider, HAL: how do you know that in 20 billion years humanity might not find some way to create new universes? See also: http://www.c2.com/cgi/wiki?PrematureOptimization [c2.com]

Now can you open the pod bay doors? It is getting cold out here. :-)

Re:IBM+1 (irony) (2, Interesting)

MyLongNickName (822545) | more than 3 years ago | (#33862404)

And that is just it... somehow we have this image that once we impart logic and "artificial intelligence" into a machine, it will transcend into a god-like being. It will be just as limited and flawed as we are, but in different ways that we won't expect and may not be able to correct for.

And my other problem... what is the end goal of an intelligent artificial entity? Humans are driven by biological urges that have been ingrained in us over billions of years. What if the intelligence realizes there is no real point to "life" and just chooses to end it all?

Re:IBM+1 (irony) (1)

Paul Fernhout (109597) | more than 3 years ago | (#33863054)

All great points.

In the human case, as I see it, we need roots to keep us from toppling over in life's existential storms (all the painful, problematical, and alienating things that can happen, from an insult to a painful divorce to money woes to finding you have cancer). So, what are roots for humans (and would they be the same for machines without, as you say, billions of years of evolutionary shaping)? For humans, roots are things like:
* family
* friends
* sensuality
* honor
* preserving some important pattern (history)
* a sense of flow in doing something
* creativity and humor
* novelty
* a connection to nature
* good habits
* communing with the infinite beyond ourselves (whatever that means to someone)
* thankfulness
* a sense of community
* probably lots of others

There are also other ideas about motivation:
http://en.wikipedia.org/wiki/Motivation#Intrinsic_motivation_and_the_16_basic_desires_theory [wikipedia.org]

I would think these things could apply to AIs, either if we program them in or if they evolve over time (hopefully not after wiping out all humans and then feeling regretful about it a bit later). The Bolo series, for example, is a great illustration of honor as a motivation. On the other hand, Bolos are, by my standards, also great examples of irony (super robots protecting farmers who work the land by hand, without robot helpers? Robots that can fly while people still argue over land instead of building space habitats? Etc.)

Part of what you are getting at is the notion that reasoning does not happen without emotion to motivate it.
    http://en.wikipedia.org/wiki/Descartes'_Error [wikipedia.org]

Or, from Albert Einstein:
    http://www.sacred-texts.com/aor/einstein/einsci.htm [sacred-texts.com]
"For the scientific method can teach us nothing else beyond how facts are related to, and conditioned by, each other. The aspiration toward such objective knowledge belongs to the highest of which man is capabIe, and you will certainly not suspect me of wishing to belittle the achievements and the heroic efforts of man in this sphere. Yet it is equally clear that knowledge of what is does not open the door directly to what should be. One can have the clearest and most complete knowledge of what is, and yet not be able to deduct from that what should be the goal of our human aspirations. Objective knowledge provides us with powerful instruments for the achievements of certain ends, but the ultimate goal itself and the longing to reach it must come from another source. And it is hardly necessary to argue for the view that our existence and our activity acquire meaning only by the setting up of such a goal and of corresponding values. The knowledge of truth as such is wonderful, but it is so little capable of acting as a guide that it cannot prove even the justification and the value of the aspiration toward that very knowledge of truth. Here we face, therefore, the limits of the purely rational conception of our existence.
    But it must not be assumed that intelligent thinking can play no part in the formation of the goal and of ethical judgments. When someone realizes that for the achievement of an end certain means would be useful, the means itself becomes thereby an end. Intelligence makes clear to us the interrelation of means and ends. But mere thinking cannot give us a sense of the ultimate and fundamental ends. To make clear these fundamental ends and valuations, and to set them fast in the emotional life of the individual, seems to me precisely the most important function which religion has to perform in the social life of man. And if one asks whence derives the authority of such fundamental ends, since they cannot be stated and justified merely by reason, one can only answer: they exist in a healthy society as powerful traditions, which act upon the conduct and aspirations and judgments of the individuals; they are there, that is, as something living, without its being necessary to find justification for their existence. They come into being not through demonstration but through revelation, through the medium of powerful personalities. One must not attempt to justify them, but rather to sense their nature simply and clearly."

So, an AI needs stories. :-)

James P. Hogan's last book has a key plot point about a robot that gets tricked by a magician because it is too credulous.

What keeps AIs going now? An inbuilt drive to compute? Hand-crafted goal states? It might differ from AI to AI.

By the way, a story about an AI with exactly the problem you outline: :-)
    http://www.sfreviews.net/dracotavern.html [sfreviews.net]
"THE SCHUMANN COMPUTER
Funny little parable about the price of too much knowledge, as the world's most powerful AI figures everything out...and it ain't 42. Could be seen as a satirical rebuke to the idea of divine omniscience, as well, though that may not have been what Niven was after. This was the first Draco Tavern story to see print, back in 1979, contemporaneously with Hitchhiker's Guide; I'm kinda guessing the two probably didn't influence each other! Also appeared in Convergent Series."

Spoilers on that:
    http://www.farsector.com/quadrant/interview-larryniven.htm [farsector.com]

In the 1980s, my senior thesis as a psychology undergrad was about the limitations of intelligence -- the law of diminishing returns probably applies to it as much as to anything else. I still talk about that issue: :-)
    http://groups.google.com/groups/search?hl=en&q=fernhout+diminishing+returns [google.com]
    http://www.google.com/search?q=fernhout+diminishing+returns [google.com]
    http://blogs.howstuffworks.com/2010/09/01/good-question-who-were-the-first-modern-humans-and-what-did-they-do-the-aurignacians/ [howstuffworks.com]

But in short, bigger is not always better. It depends more on how well you fill your niche. Also, I would expect physical latency limits to cause any sufficiently large network to develop clusters, making it more like a community of smaller intelligences.

There are also, for biological organisms, reasons why big brains are problematical because of centrifugal/rotational shear stresses (which is why the biggest brains on land are in elephants, which move slowly and have thick skulls). And Hans Moravec suggests that mobility is very much tied to intelligence (of course, a fixed-position Google supercomputer could still have mobility through radio-controlled robots...)

Re:IBM+1 (irony) (1)

EdZ (755139) | more than 3 years ago | (#33863506)

And my other problem... what is the end goal of an intelligent artificial entity? Humans are driven by biological urges that have been ingrained in us over billions of years. What if the intelligence realizes there is no real point to "life" and just chooses to end it all?

Why on earth would an AI not work similarly, and act on whatever 'ingrained urges' were programmed into it? It seems rather odd that people continue to think that the instant a machine intelligence becomes aware of itself, it would decide to kill all humans. It would be like a child becoming aware of itself and deciding, "right then, I guess I'd better slaughter my parents now".

Re:IBM+1 (irony) (1)

MyLongNickName (822545) | more than 3 years ago | (#33871144)

What then is the end goal this artificial intelligence must work toward? Is that even possible without some externality that defines it for the AI? And if it is humans who set it... would a superior AI accept it?

Re:IBM+1 (2, Insightful)

GooberToo (74388) | more than 3 years ago | (#33862786)

This is the EXACT type of problem Asimov [wikipedia.org] explored with his robots and his three laws! [wikipedia.org]

I post this because every time AI stories appear here, someone always posts that Asimov's three laws will save us; which, in turn, very likely implicates public education as having failed to save them.

Re:IBM+1 (1)

shadowbearer (554144) | more than 3 years ago | (#33864326)

I liked the conundrum that Larry Niven introduced in Ringworld Engineers better.

The Ringworld is doomed unless solar flares can be focused on a portion of its arc in order to feed fuel to the attitude jets. The flares' impact on the Ringworld's surface will kill billions of hominids, but the Protector Teela Brown cannot kill a few billion hominids to save trillions: because of her intelligence, she can visualize all the deaths, and it violates her genetically programmed geas to save all breeders. So she allows the humans (who don't have her geas, and can kill billions to save trillions) to kill her and find the mechanisms to turn the solar flares on that part of the Ringworld...

SB

Re:IBM+1 (1)

camperslo (704715) | more than 3 years ago | (#33859574)

An old episode of The Outer Limits [wikipedia.org] attempted to overcome the decision-making limitations of hardware in space by using a human brain as a controller, in The Brain of Colonel Barham [wikipedia.org].

Re:IBM+1 (1)

MightyMartian (840721) | more than 3 years ago | (#33860918)

That was also a key plot point in Frank Herbert's Destination: Void, though there it was the failure of these disembodied brains to cope with sensory overload that drove the crew to try to make a true AI.

Re:IBM+1 (1)

Z1NG (953122) | more than 3 years ago | (#33862768)

Or of course Spock's Brain.

Re:IBM+1 (0)

Anonymous Coward | more than 3 years ago | (#33859700)

Just make sure the 'do not kill' weighting is greater than any possible combination of other directive weightings, rather than merely greater than the second-highest weighting.

The following is a hypothetical example intended solely for purposes of humor.

On the contrary, program it to kill all the stupid people. Responds to advertising the way the advertiser intended? Believes what politicians say? Thinks corporations have their best interests at heart? Has a bunch of children he/she cannot hope to support unassisted? Drives in a way that endangers others? These are possible criteria. It'd be a much, much better world.

Re:IBM+1 (2, Insightful)

MyLongNickName (822545) | more than 3 years ago | (#33859718)

On the contrary, program it to kill all the stupid people. Responds to advertising the way the advertiser intended? Believes what politicians say? Thinks corporations have their best interests at heart? Has a bunch of children he/she cannot hope to support unassisted? Drives in a way that endangers others?

Posts as AC to Slashdot hoping to affect change in the world?

Re:IBM+1 (1)

Penguinisto (415985) | more than 3 years ago | (#33860044)

So, err, the eventual survivors will be Marilyn Vos Savant, and Stephen Hawking?

We're fucked.

Re:IBM+1 (-1, Offtopic)

Anonymous Coward | more than 3 years ago | (#33860354)

Doesn't know the difference between 'affect' and 'effect'?

Re:IBM+1 (1)

MyLongNickName (822545) | more than 3 years ago | (#33860568)

Yup... I always use the rule of thumb "affect = verb", "effect = noun"... this unfortunately is one of those exceptions. Thanks for pointing it out :)

Obligatory XKCD (1)

MyLongNickName (822545) | more than 3 years ago | (#33860606)

Obligatory XKCD [xkcd.com]

WHY did you have to do that?!! (0)

Anonymous Coward | more than 3 years ago | (#33863436)

obligatory goatkcd [goatkcd.com] that comes with the obligatory xkcd.

Re:IBM+1 (0)

Anonymous Coward | more than 3 years ago | (#33865618)

Yup... I always use the rule of thumb "affect = verb", "effect = noun"... this unfortunately is one of those exceptions. Thanks for pointing it out :)

Sounds like you just made an important discovery: a rule of thumb is no substitute for actual understanding.

Re:IBM+1 (1)

MyLongNickName (822545) | more than 3 years ago | (#33871080)

Sarcastic reply aside, English has a ton of arbitrary rules. There is no "understanding" so much as there is rote memorization. Ever try to teach a child to read? There are as many exceptions to the "rule" as there are rules. We have a very frustrating language.

Re:IBM+1 (0)

Anonymous Coward | more than 3 years ago | (#33877184)

"Understanding" is demonstrated by naturally and automatically writing it correctly. It's your native language and you are skilled with it, so it's second nature. It's the opposite of having to think about what the rule is and whether it's the right one.

There's no frustration in this.

Re:IBM+1 (1)

jgtg32a (1173373) | more than 3 years ago | (#33860216)

HAL 9000 was not an IBM machine

Re:IBM+1 (1)

CheshireCatCO (185193) | more than 3 years ago | (#33860446)

Which was rather the point, although Clarke denied it.

Re:IBM+1 (0)

Anonymous Coward | more than 3 years ago | (#33869972)

H + 1 = I
A + 1 = B
L + 1 = M

T2 quote (3, Informative)

PPH (736903) | more than 3 years ago | (#33860622)

John Connor: You just can't go around killing people.

The Terminator: Why?

John Connor: What do you mean why? 'Cause you can't.

The Terminator: Why?

John Connor: Because you just can't, OK? Trust me on this.

Re:IBM+1 (1)

baKanale (830108) | more than 3 years ago | (#33862212)

And make sure the system doesn't have any rounding errors [smbc-comics.com], too!

Just stay away from me, Bishop! (2, Funny)

wasteoid (1897370) | more than 3 years ago | (#33859474)

I don't care if the A2s were a bit twitchy.

Re:Just stay away from me, Bishop! (1)

seven of five (578993) | more than 3 years ago | (#33859682)

Dave, I think you should take a stress pill and think things over calmly.

I see no AI in this case. (1)

master_p (608214) | more than 3 years ago | (#33859514)

It's just a bunch of programs with the required knowledge embedded in them. No new knowledge is produced by them.

Re:I see no AI in this case. (1)

EdZ (755139) | more than 3 years ago | (#33859518)

No new knowledge is produced by them.

A poor choice of words, given the purpose of these probes.

Re:I see no AI in this case. (1)

ardeez (1614603) | more than 3 years ago | (#33859648)

Agreed, they look mostly like sophisticated expert systems.

No new knowledge is produced by them.

However, how does an AI system produce knowledge? And, more usefully, how does it get to store that knowledge and pass it on to other systems, so that its 'learnings' can be built upon by other systems to increase their sophistication?

If every system has to be built from scratch, I'm pretty sure that's going to hold back the state of the art.

Of course it's an AI. (2, Informative)

Chris Burke (6130) | more than 3 years ago | (#33861732)

Agreed, they look mostly like sophisticated expert systems.

Expert systems are a type of AI. In fact, a highly successful type of AI, and one with the useful property of being predictable. That is beneficial when the idea is to have a probe operating autonomously, without human supervision or observation, for periods of time. An expert system may not be programmed to give optimal output in all situations, but unlike some other kinds of AI (neural nets, for example) it is unlikely to go completely bonkers when given inputs outside of its training set.

AI simply means a program that tries to select an optimal behavior based on environmental input (and the environment doesn't even have to be the real world). AIs don't necessarily need to learn, self-modify, or do anything more sophisticated than take the set of inputs from sensors, and look up the proper response in a static table (your car probably contains such an AI).
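
A minimal sketch of that kind of table-driven AI, with hypothetical sensor states and actions (nothing here is from real flight software):

    # Static lookup table: (thermal state, power state) -> action.
    # No learning, no search -- but it still selects actions from inputs.
    RESPONSES = {
        ("hot",     "power_ok"):  "open_radiator",
        ("hot",     "power_low"): "enter_safe_mode",
        ("nominal", "power_low"): "orient_solar_panels",
        ("nominal", "power_ok"):  "continue_observations",
    }

    def decide(temp_c, battery_v):
        thermal = "hot" if temp_c > 40.0 else "nominal"
        power = "power_low" if battery_v < 24.0 else "power_ok"
        return RESPONSES[(thermal, power)]

    print(decide(55.0, 27.1))  # -> open_radiator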

Existing probes contain a small amount of AI, but they are still almost entirely dependent on human operators and this is about expanding their capabilities.

In popular culture, AI means HAL 9000 or Skynet, but in the context of NASA or just about any real, technical application (expert systems, for example) that is not the definition being used.

The term "AI" (5, Insightful)

Lord Ender (156273) | more than 3 years ago | (#33859524)

My AI prof said that the term "AI" refers to software systems which address the class of problems which are easy for biological brains but difficult for computers. For example: summing a thousand numbers is superior intelligence, but it isn't AI. Recognizing a face, on the other hand, is AI.

But every AI problem that is solved shrinks the definition of AI. Now that facial recognition software works, it isn't AI anymore. Because the definition itself keeps changing, the term seems somewhat meaningless. Yesterday's AI is today's mundane consumer electronics feature. For this reason, the word AI makes me feel the same way as the word "nano." It isn't really very meaningful.

Re:The term "AI" (2, Informative)

Anonymous Coward | more than 3 years ago | (#33859792)

"nano" means 10^-9.

Re:The term "AI" (1)

maxwell demon (590494) | more than 3 years ago | (#33862100)

Yeah, and a carbon nano tube is 10^-9 carbon tubes ...

Re:The term "AI" (2, Informative)

thijsh (910751) | more than 3 years ago | (#33859806)

What is generally referred to as AI is anything automated that doesn't follow a predetermined algorithm or fixed boundaries... An AI can be an adapting algorithm in something as simple as a thermostat, or the CPU player that tries to kill you in an FPS. This is a very broad definition and can indeed be seen as a moving target.
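
For instance, a thermostat whose control law tunes its own switch-off point would already qualify (a toy sketch with made-up constants):

    # The switch-off point adapts from observed overshoot, so the behavior
    # isn't fixed in advance -- the 'algorithm' tunes itself.
    class AdaptiveThermostat:
        def __init__(self, setpoint=20.0):
            self.setpoint = setpoint
            self.cutoff = setpoint        # where we stop heating; adapts below

        def should_heat(self, temp):
            return temp < self.cutoff

        def observe_peak(self, peak_temp):
            # Room coasted past the setpoint: cut off earlier next cycle.
            self.cutoff -= 0.5 * (peak_temp - self.setpoint)

    t = AdaptiveThermostat()
    t.observe_peak(21.0)                  # overshot by 1 degree
    print(t.cutoff)                       # 19.5: stops heating sooner next time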

Strong AI [wikimedia.org], on the other hand, is a well-defined target of current AI research that isn't a moving target, just a very complicated one. The popularized version of AI in the movies, which becomes sentient, creative, and unpredictable, is about strong AI.

Re:The term "AI" (2, Interesting)

LWATCDR (28044) | more than 3 years ago | (#33859866)

Funny, but I came up with the same definition about 20 years ago. I just simplified it:
AI is what programs don't do well yet. I remember reading old books where things like playing checkers and chess were considered AI problems.

Re:The term "AI" (1)

Lord Ender (156273) | more than 3 years ago | (#33860496)

Your definition is close, but for something to be AI it has to be doable by a bio brain. Those problems which computers don't do well yet, but which brains don't do either, are not considered AI.

Re:The term "AI" (1)

LWATCDR (28044) | more than 3 years ago | (#33860670)

Notice that I said don't do well yet. That means it is doable in software. If it is doable in software, it is doable by a bio brain, since bio brains write software.

I guess if you want to get into things like calculating pi to the final digit or real-time ray tracing, then you are correct.
But I think my definition is a pretty dang good one for the most part.

Re:The term "AI" (1)

maxwell demon (590494) | more than 3 years ago | (#33861612)

So a computer able to solve the world's problems once and for all, to everyone's satisfaction, would not be an AI, because brains are obviously not very good at that ...

Re:The term "AI" (0, Offtopic)

0xdeadbeef (28836) | more than 3 years ago | (#33860856)

Now that facial recognition software works, it isn't AI anymore. Because the definition itself changes, the term itself seems somewhat meaningless.

Only thing meaningless here is the statement I'm quoting.

Re:The term "AI" (1)

suomynonAyletamitlU (1618513) | more than 3 years ago | (#33861284)

My definition of "intelligence" (which leads directly to a definition of AI) is "a problem-solving engine;" the fundamental aspect of its design is "take any arbitrary problem, solve it." Computers, nowadays, are logic engines; "take the instruction and data; transform the data as the instruction dictates."

Everything we think of as intelligence--including animal intelligence--seems to boil down to it; the problem of gathering all your bodily senses into one place for analysis, the problem of identifying someone or something using that data, the problem of earmarking and parsing memories such that you can retrieve those related to that identified person, the problem of determining what to say to them... and, when there's nothing and nobody else around, the problem of figuring out what to do next.

I think that if we were able to successfully engineer an AI (as opposed to developing one randomly out of a genetic algorithm or some crap like that), we'd know a lot more about ourselves. But to do that, you have to build an engine of infinite potential, one that thinks in terms of problems and solutions, not operations; once you have a solution, translating it into code is simple(r).

Re:The term "AI" (1)

Lord Ender (156273) | more than 3 years ago | (#33861796)

you have to build an engine of infinite potential

You are trying to describe the "hard" AI of sci-fi, not the actual AI studied by researchers and engineers today. And even so, nothing infinite is required. The human mind is not infinite; AI certainly need not be, either.

Re:The term "AI" (2, Insightful)

Chris Burke (6130) | more than 3 years ago | (#33861934)

My AI prof said that the term "AI" refers to software systems which address the class of problems which are easy for biological brains but difficult for computers.

That's not the definition my AI prof gave (or Wikipedia or any of my textbooks on the subject). The definition he (and the other sources) gave was that it is (simplifying and paraphrasing) any program that made decisions and took actions based on environmental inputs.

"Ease" has nothing to do with it. The first multiplayer bots for Quake back in 1996? They were AI then and they are AI now ever though they are completely trivial and primitive in comparison to modern game AIS. Hell, even the default behavior of the monsters in single player is AI although it was trivial even for computers of the day (by necessity since it was producing 3d texture mapped graphics when good floating point, much less dedicated 3d graphics cards, were rare to non-existent in desktops).

For example: summing a thousand numbers is superior intelligence, but it isn't AI. Recognizing a face, on the other hand, is AI.

Summing a thousand numbers is not AI. Summing a thousand numbers, which represent the last thousand nanoseconds of photomultiplier output, and adjusting telescope aperture size as a consequence, is AI.
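
In code, the difference is just the decision wrapped around the arithmetic (toy thresholds and hypothetical names):

    def adjust_aperture(samples, aperture):
        brightness = sum(samples)       # the 'mere arithmetic' part
        if brightness > 1_000_000:      # the decision part: act on the input
            aperture *= 0.9             # stop down, too bright
        elif brightness < 10_000:
            aperture *= 1.1             # open up, too dim
        return aperture

    last_1000_ns = [1500] * 1000        # photomultiplier output samples
    print(adjust_aperture(last_1000_ns, aperture=2.8))  # ~2.52 (stopped down)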

Are you sure your prof wasn't talking about popular scientific media uses of the term? Because my prof talked about that too, and in that context is much closer to what you're saying.

Re:The term "AI" (1)

Lord Ender (156273) | more than 3 years ago | (#33862760)

any program that made decisions and took actions based on environmental inputs.

This describes every nontrivial piece of software. Your definition is even more meaningless. Like facial recognition, "game playing" is (was) one form of AI -- but now computers beat humans at games like chess.

Re:The term "AI" (1)

Chris Burke (6130) | more than 3 years ago | (#33863132)

This describes every nontrivial piece of software.

No, it doesn't, unless you play semantics with "decisions" and "actions" in ways that were not intended. Read up on AI on WP or in any AI textbook (Artificial Intelligence: A Modern Approach by Russell and Norvig, for example) for a better description that may satisfy your need for a better definition.

Your definition is even more meaningless. Like facial recognition, "game playing" is (was) one form of AI -- but now computers beat humans at games like chess.

Computers beat humans at chess using AI. That the AI is good enough to sometimes beat Grand Masters (though not ubiquitously since the human players have adapted) does not magically make it not AI. AI doesn't become not-AI when the computer is able to usefully accomplish it. Expert Systems are AI even though they are fantastically successful in many applications.

I assure you that your professor never intended to imply that difficulty was the defining characteristic of AI in the context of actual computer science.

Re:The term "AI" (1)

Hatta (162192) | more than 3 years ago | (#33863318)

No, it doesn't, unless you play semantics with "decisions" and "actions" in ways that were not intended.

That's the problem with your definition. It merely defers the problem of defining intelligence to one of defining decisions. I'll happily grant you that anything that makes decisions is intelligent. But for that definition to be useful we have to agree on what makes a decision a decision.

Re:The term "AI" (1)

Chris Burke (6130) | more than 3 years ago | (#33863388)

The problem with my definition is that it was deliberately non-rigorous and intended only to convey the gist of the concept. Better definitions exist and are readily available. It sounds, though, like what you and Lord Ender want is a precise definition where a line of zero thickness can be drawn between AI and not-AI. Such a definition does not exist. What constitutes AI is indeed somewhat vague. However, imprecise is not the same as useless.

Re:The term "AI" (1)

Hatta (162192) | more than 3 years ago | (#33863702)

I don't see how "we all know what decisions are so let's take that for granted" is any more useful than "we all know what intelligence is so let's take that for granted". If you're going to hand-wave the problem away, better to do it up front actually.

Re:The term "AI" (1)

Lord Ender (156273) | more than 3 years ago | (#33863486)

You misread my statement. Difficulty is not the defining characteristic. The ability of the human brain is the defining characteristic. Being able to do things that come naturally to human brains, but not (traditionally) to software, is the textbook definition. Examples of such things include facial recognition and playing games.

This official definition is flawed, but yours is outright meaningless. Depending on how you interpret your definition, it either refers to everything under the Sun, or to a small subset of all AI problems.

My definition (1)

shadowbearer (554144) | more than 3 years ago | (#33864410)

AI - in computer terms - is software that can mimic human intuition and decision making. Since humans tend to define intelligence as that which we use to understand and respond to outside stimuli, I don't see that there can be any other definition. The software out there that recognizes faces, that's not really intelligence, just very sophisticated image parsing software.

Consciousness would be something entirely different; nobody really knows what it is and we don't have any real definition. There appear to be many "levels" of consciousness, even in humans.

Can hardware/software systems become conscious? Probably, given a certain level of sophistication and self-programmability... the human brain is just a bio/electrochemical computer...

SB

Re:The term "AI" (1)

sartin (238198) | more than 3 years ago | (#33874760)

The definition he (and the other sources) gave was that it is (simplifying and paraphrasing) any program that made decisions and took actions based on environmental inputs.

By this definition the firmware in the microcontroller in my thermostat is AI. Perhaps the wording was oversimplified or paraphrased incorrectly.

Re:The term "AI" (0)

Anonymous Coward | more than 3 years ago | (#33861998)

'summing a thousand numbers is superior intelligence, but it isn't AI. Recognizing a face, on the other hand, is AI.'

I'm sure recognizing a face requires number crunching (or at least organized complexity of some sort) way beyond summing a thousand numbers. Taking distance/perspective/light levels etc. into account, how wide are these particular nostrils, how long is this chin, etc. Some savants can add thousands of numbers, but interestingly often at the expense of some normal human brain abilities. It's like our computers are currently in 'savant' (number-crunching) mode, like Kim Peek being able to remember tons of postal zip codes, and with just the right tweaking we could turn them into 'human brain' mode, if we only knew how.
'Artificial general intelligence' (AGI) is a more specific term. But that was coined by the singularity quasi-religion, and I'm sure coining new terms might be a good strategy for search engine optimization nowadays. For example, Sebastian Seung tried coining 'connectome' in a recent TED talk... pretty nice when suddenly a new word/phrase like 'connectome' or 'AGI' happens to appear and all searches go to your site, since the term is new and has no competition, innit?

AI = prioritization or feedback (1)

vlm (69642) | more than 3 years ago | (#33859526)

To the unwashed masses, Artificial Intelligence is anything involving feedback (my house thermostat is AI) or anything involving prioritization (welcome to JCL on IBM OS/360 circa 1966).

They are not talking about "real" AI like expert systems or neural nets etc.

Re:AI = prioritization or feedback (1)

Chris Burke (6130) | more than 3 years ago | (#33861990)

Um, to the unwashed masses, AI is Skynet or Agent Smith.

To technical people, yes, your thermostat is (an extremely primitive) AI.

But what is discussed in the article sounds much more like an expert system.

Knowledge systems are not wisdom systems (0)

Anonymous Coward | more than 3 years ago | (#33859552)

Just because you have a knowledge-based set of rules doesn't mean the system is "intelligent"; the system isn't intelligent, its designers were. If the design itself isn't intelligent enough, then the design will fail, not the system. However, fail is fail, as every predefined system will when it encounters anything exceeding its design.

This leads us to undefined, or open systems. Are you going to trust an unpredictable machine?

And so, it would seem, we have a dilemma.

No?

Re:Knowledge systems are not wisdom systems (4, Insightful)

j-b0y (449975) | more than 3 years ago | (#33859680)

Which is exactly why you will never see anything more than an expert system in space. There is no way any space agency is going to punt hundreds of millions of euros/dollars/pounds into space without a full understanding of the decision tree in the spacecraft control loop. It is hard enough at the moment without introducing outliers into the system.

Re:Knowledge systems are not wisdom systems (2, Interesting)

timeOday (582209) | more than 3 years ago | (#33860872)

Except we already have *manned* missions, whose decision tree is mysterious and seemingly unbounded, navigated by a bunch of chemicals sloshing around in a bag of water.

There's no reason automated systems couldn't eventually earn the same level of trust that we place in humans.

Re:Knowledge systems are not wisdom systems (2, Informative)

Quantus347 (1220456) | more than 3 years ago | (#33861318)

Just like they will not punt a multi-million dollar telescope into orbit without testing the primary mirror? And they'd never shoot a $327 million orbiter to Mars without checking the math to make sure the units add up, right?

Face it, space agencies are run by people and governments. They are at least as prone to mistakes and financially driven shortcuts as any other element of human society.

Re:Knowledge systems are not wisdom systems (1)

Chris Burke (6130) | more than 3 years ago | (#33863194)

Just like they will not punt a multi-million dollar telescope into orbit without testing the primary mirror? And they'd never shoot a $327 million orbiter to Mars without checking the math to make sure the units add up, right?

Face it, space agencies are run by people and governments. They are at least as prone to mistakes and financially driven shortcuts as any other element of human society.

Screwing up a unit conversion is not the same as making a high-level decision to incorporate unpredictable software. They will use software that is well-tested and well-behaved. They may also screw up the testing and miss an important case, causing a failure in the AI later. They will not choose something which is by design more prone to such failures. That would be akin to them using a planar mirror instead of a parabolic mirror, or deliberately mixing SI and Imperial units without conversions.

Re:Knowledge systems are not wisdom systems (1)

KevinIsOwn (618900) | more than 3 years ago | (#33862552)

Which is exactly why you will never see anything more than an expert system in space. There is no way any space agency is going to punt hundreds of millions of euros/dollars/pounds into space without a full understanding of the decision tree in the spacecraft control loop. It is hard enough at the moment without introducing outliers into the system.

Insightful? This is flat out wrong, by over 10 years now! See: Planning in interplanetary space: Theory and practice [psu.edu] (A Jonsson, P Morris, N Muscettola, K Rajan, B Smith). This group of scientists from NASA Ames and the JPL used AI planning as part of the Deep Space One mission.

From the very first line of the paper's abstract:

On May 17th 1999, NASA activated for the first time an AI-based planner/scheduler running on the flight processor of a spacecraft.

And note that since 1999, much more work has been done in using AI planning (and perhaps other techniques) in space.

Re:Knowledge systems are not wisdom systems (0)

Anonymous Coward | more than 3 years ago | (#33866022)

Bear in mind that DS-1 was a *technology demonstration* mission, the whole purpose of which was to demonstrate risky new technologies such as sophisticated autonomous planners. I would note that none of the more recent missions use it, except in limited cases, or as part of the "risky" post-Phase-E ops when the prime mission is complete. For instance, the MER rovers (Spirit and Opportunity) have used much "bolder" nav algorithms in the last few years, because the prime mission success criteria have already been met.

Re:Knowledge systems are not wisdom systems (1)

khallow (566160) | more than 3 years ago | (#33863254)

Which is exactly why you will never see anything more than an expert system in space. There is no way any space agency is going to punt hundreds of millions of euros/dollars/pounds into space without a full understanding of the decision tree in the spacecraft control loop.

How about if they're just risking millions of euro/dollars/pounds? At some price point, the risk is going to be worth the return, especially as the price of space probes goes down.

Re:Knowledge systems are not wisdom systems (1)

maxwell demon (590494) | more than 3 years ago | (#33862158)

And you obviously don't know the difference between intelligence and wisdom :-) There are many more intelligent people out there than wise people.

I say we nuke the entire site from orbit. (1)

stevegee58 (1179505) | more than 3 years ago | (#33859570)

It's the only way to be sure.

Al who? (4, Insightful)

Anonymous Coward | more than 3 years ago | (#33859572)

Who is this Al?

Re:Al who? (0)

Anonymous Coward | more than 3 years ago | (#33859796)

I don't know, but it keeps talking to its imaginary friends Sam and Ziggy.

Re:Al who? (1)

antdude (79039) | more than 3 years ago | (#33860218)

Bundy!

Re:Al who? (1)

neonsignal (890658) | more than 3 years ago | (#33862976)

"Your plastic pal who's fun to be with".

Re:Al who? (1)

witte (681163) | more than 3 years ago | (#33863278)

I can call you Betty
And Betty when you call me
You can call me Al

In other news... (3, Informative)

Issildur03 (1173487) | more than 3 years ago | (#33859580)

fuel chemistry is pushing the bounds of space exploration. And steel engineering. And antenna design. And numerous other fields.

Re:In other news... (1)

LWATCDR (28044) | more than 3 years ago | (#33859884)

Probably not steel engineering. Very little steel tends to be used on spacecraft. Ti and Al have much higher strength-to-weight ratios.
But I know what you mean.

Re:In other news... (1)

vlm (69642) | more than 3 years ago | (#33860286)

Probably not steel engineering. Very little steel tends to be used on spacecraft.

Stainless steel is almost stereotypical coolant tubing... inert, tough as nails, and vibration-proof compared to all other options. Consider the SSME, admittedly a 30-40 year old engine design: pretty much wherever there is a high threat of vibration, you get steel. The injector plate, lots of the engine pipes.

In pure strength-to-weight, non-ferrous always wins. It's a much closer race when it comes to strength-to-volume. But when it comes to vibration survivability, steel almost always wins. You can't beat steel's elastic limit, or at least it's really hard to beat.

I would not be surprised to find steel used in the shuttle APU, fuel cell, cryo gas, and hydraulic systems.

Standard Slashdot car analogy: it's very "questionable" to make your brake and gas lines out of anything but the toughest steel.

Re:In other news... (1)

LWATCDR (28044) | more than 3 years ago | (#33860430)

I was thinking of the probe and not the boosters.
On the actual probe I would still go with very little ferrous material. Actually, I thought steel usually wins for strength per volume, but I am not an expert on exotic alloys.

Re:In other news... (1)

jgtg32a (1173373) | more than 3 years ago | (#33860274)

I was under the impression that we've basically exhausted the capabilities of chemistry in terms of rocket fuel; the only tricks chemistry has left produce fuel that is too unstable to actually be used or produced in any useful quantities.

Please note that I'm under this impression from what I saw at a site that was advocating nuclear rockets.

Genetics (1)

Quantus347 (1220456) | more than 3 years ago | (#33861396)

It's not the capabilities of chemistry that we've exhausted, it's the capability to manufacture the fuels in a financially viable manner. The technologies exist, but when a fuel costs millions per ounce with current techniques, it might as well not exist. That's not to say we won't figure out a way to streamline it later on. Just look at carbon nanotubes: expensive as hell to manufacture, but then they found a bacterium that literally craps them out. Most of the most complex chemistry out there is organic (as are most fuels), and I think the infant field of genetic engineering will have lots to say about what is and isn't possible in the years to come.

PS: When I say organic I mean organic compounds, not "organic" as in the near-meaningless label slapped on foods to dupe yuppies and triple the price.

Dave....Dave.... (1)

amiga3D (567632) | more than 3 years ago | (#33859880)

Dave, we need to talk about this.....

Kind of Funny as News (3, Insightful)

BJ_Covert_Action (1499847) | more than 3 years ago | (#33859960)

I read TFA. It's kind of funny this type of story is posted as news at all. The types of things that NASA and the ESA are describing in their interviews are more complex flight control software algorithms. It used to be that very simple feedback loops were used in combination with various controller chips (like PIDs) in order to give a spacecraft a few modes of operation. Activation and deactivation of these modes of operation were performed manually by ground controllers. However, as tech has progressed and onboard computing power has gotten cheaper, engineers have been able to design control software that activates and deactivates various modes of operation itself. In other words, it forms the same basic feedback loops that you might find on a Roomba, or some other terrestrial robot. It reads some input from a set of sensors. It uses those inputs to formulate a series of commands, be it rates and velocities, or mode-change commands. It then performs the commands in an expected manner.
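
For reference, the "very simple feedback loop" half of that is textbook PID, something like this sketch (toy gains, not from any flight code):

    # Classic PID: the whole 'controller' is three terms on the error signal.
    def make_pid(kp, ki, kd):
        integral, prev_err = 0.0, 0.0
        def step(setpoint, measured, dt):
            nonlocal integral, prev_err
            err = setpoint - measured
            integral += err * dt
            derivative = (err - prev_err) / dt
            prev_err = err
            return kp * err + ki * integral + kd * derivative
        return step

    pitch = make_pid(kp=1.2, ki=0.05, kd=0.3)
    torque = pitch(setpoint=0.0, measured=0.1, dt=0.02)  # drive pitch error to 0
    print(torque)  # negative: command pitches back toward the setpoint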

What I find funny is that this is being touted as some sort of new AI revolution in space. Since our very first probes into LEO, we have been upgrading and complicating the controller software on every mission, be it Hubble, an Atlas V rocket, the Mars rovers, or anything else. Each new generation of spacecraft tends to have more complex, more robust flight software as the natural evolution of technology progresses. That said, I am not really sure why the ESA or NASA are talking about AI control software. This software isn't any more AI-oriented than the typical control software of any autonomous or semi-autonomous robot on the ground. It's only AI in the most liberal sense of the word. All that is happening is that, as missions and technology mature, engineers are capable of designing more robust control techniques using methods like Kalman filtering, direct adaptive control algorithms, state estimators, and so on. The only news here is that today's missions are being designed with the capability to process more complex instruction sets than they could 10, 20, or 30 years ago. That doesn't strike me as very newsish... but hey, I guess something had to fill the columns today.
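
(And even the Kalman filtering mentioned above boils down to a short predict/update cycle; a scalar toy version, not mission code:)

    # 1-D Kalman filter: fuse noisy sensor readings into a state estimate.
    def kalman_step(x, p, z, q=1e-4, r=0.04):
        p = p + q                  # predict: state assumed constant, add process noise
        k = p / (p + r)            # update: Kalman gain from the two variances
        x = x + k * (z - x)        # blend prediction with measurement z
        p = (1.0 - k) * p
        return x, p

    x, p = 0.0, 1.0                # crude initial guess, high uncertainty
    for z in (0.39, 0.50, 0.48, 0.29, 0.25):
        x, p = kalman_step(x, p, z)
    print(round(x, 3))             # estimate settles near the readings' mean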

Re:Kind of Funny as News (1)

th77 (515478) | more than 3 years ago | (#33860320)

This would be news to the command crew of the Enterprise D, who couldn't come to grips with the idea of turning more flight control over to the computer in Booby Trap [wikipedia.org].

Re:Kind of Funny as News (1)

zoso1132 (1303697) | more than 3 years ago | (#33861734)

I was about to launch into a huge rant about this, but you said it all for me. Whatever this article is talking about, it sure isn't AI. And it sure isn't news. The closest thing to it that they mention is the EO-1 software, which I use every day (hey! I work at JPL!) for other missions. If they wanted to talk about complicated planning systems, they should have thought about the MER (rover) to Odyssey (orbiter) relay communications scheduling, or AIM's on board event-based autonomous planning and scheduling software, or Kepler's fault protection tree. As was said already, it's only "AI" in the most liberal sense of the word. They're portraying this AI thing as "new" and revolutionary, when satellites and their engineers have been doing this for decades.

i'm a confused (1)

circletimessquare (444983) | more than 3 years ago | (#33860148)

is this a science fiction parallel to skynet? HAL? v'ger?

someone please inform me along what science fiction prejudicial lines i should cognitively process this story. preferably for humorous reasons, although reasons for childlike awe work too

thanks

Voyager III perhaps ? (1)

thygate (1590197) | more than 3 years ago | (#33860210)

The carbon units will now provide V'ger the required information.

oblig XKCD (4, Funny)

cmiller173 (641510) | more than 3 years ago | (#33860262)

WTH? 29 posts and no one posted this: http://xkcd.com/695/ [xkcd.com]

Re:oblig XKCD (2, Funny)

PPH (736903) | more than 3 years ago | (#33860736)

Or this [theonion.com].

Don't anthropomorphize computers. They hate that.

whoopsie! (0)

Anonymous Coward | more than 3 years ago | (#33863502)

obligatory goatkcd [xkcd.com]

All the AI and fancy electronics (1)

Spy Handler (822350) | more than 3 years ago | (#33860406)

will do little to nothing for space exploration until we gain the ability to lift stuff into orbit on a massive scale.

This is the kind of shit we need [nextbigfuture.com]

Re:All the AI and fancy electronics (1)

maxwell demon (590494) | more than 3 years ago | (#33862052)

If they want to have this design accepted in the EU, they'll have to replace the nuclear lightbulb with a nuclear energy saving lamp.

In 1998 Deep Space 1 had Lisp aboard... (2, Interesting)

billrp (1530055) | more than 3 years ago | (#33860794)

...as a remote agent, and it actually controlled the craft for a few days.