
Two AI Pioneers, Two Bizarre Suicides

Soulskill posted more than 6 years ago | from the unfortunate-losses dept.

Software 427

BotnetZombie writes "Wired tells the quite sad but very interesting stories of Chris McKinstry and Pushpinder Singh. Initially self-educated, both had the idea of creating huge fact databases from which AI agents could feed, hoping eventually to have something that could reason at a human level or better. McKinstry leveraged the dotcom era to grow his database. Singh had the backing of MIT, where he eventually got his PhD and was offered a position as a professor alongside his mentor, Marvin Minsky. Sadly, their personal lives were more troubled, and both stories end in tragedy."


427 comments

AI-I know for a fact... (-1, Redundant)

Anonymous Coward | more than 6 years ago | (#22117498)

I know for a fact that GOATSE [goatse.ch] (WARNING! DON'T CLICK!) is artificial intelligence. How could a HUMAN conceive of such a thing!!?!?!?!!!

How is this REDUNDANT? Lick my salty ballsack, MOD (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#22118178)

Read title. You queer. Flamebait? Explain that one while you are at it.

Re:How is this REDUNDANT? Lick my salty ballsack, (0)

Anonymous Coward | more than 6 years ago | (#22118366)

It's redundant because goatse is as old as the internet, you candle-sniffing fuck fence. Go climb a wall of dicks.

They just wanted... (3, Funny)

KodaK (5477) | more than 6 years ago | (#22117524)

to make friends. :(

Re:They just wanted... (5, Interesting)

seeker_1us (1203072) | more than 6 years ago | (#22117604)

Yes. This was a terribly sad article.

I read this part

While Singh was climbing the academic ladder at MIT, McKinstry was trying to put his life back together after spending two and a half months in jail. But the suicidal standoff had given him a new sense of purpose. He liked to think that the police robot had deliberately misfired its tear gas canisters in an effort to save him. "Maybe robots do have feelings," he later mused. By 1992, McKinstry had enrolled at the University of Winnipeg and immersed himself in the study of artificial intelligence.

I mean... that's inspiring.

And then, years later, he falls apart and kills himself on the web, abandoning his dream because of a fundamental flaw: he was a geek, but he didn't have business sense.

That's about as close to Greek Tragedy as you can get.

Re:They just wanted... (4, Insightful)

Lemmy Caution (8378) | more than 6 years ago | (#22117656)

I think the real flaw for both of them was profound emotional problems, not a lack of business acumen.

Re:They just wanted... (4, Insightful)

RattFink (93631) | more than 6 years ago | (#22117820)

I wouldn't call chronic physical pain in the case of Singh an emotional problem.

Re:They just wanted... (1)

utopianfiat (774016) | more than 6 years ago | (#22118580)

What about Alan Turing himself? Cyanide-laced apple to the face, dude.

Re:They just wanted... (1)

ale_ryu (1102077) | more than 6 years ago | (#22118708)

Yeah, but he was forced to take hormones after being convicted of homosexuality; maybe that caused the imbalance that led to his suicide.
A terrible loss anyway...

One had emotional problems, the other pain (1, Insightful)

mbkennel (97636) | more than 6 years ago | (#22117902)


One was a nutty kook.

The other was an extremely smart and ambitious professor.

One was mentally ill.

The other had excruciating pain because of an injury.

Aside from one having delusions about AI, the other having useful ideas about AI, and both killing themselves, they're different.

One killed himself because he was depressed and crazy and screwed up. The other was in horrible neurological pain.

It is not uncommon for chronic pain patients to kill themselves. It's that bad.

If they had lived, one would have ended up institutionalized; the other would have made significant progress on (but not "solved") AI.

Re:One had emotional problems, the other pain (0)

Anonymous Coward | more than 6 years ago | (#22118306)

It's unfortunate that you judge someone as delusional and immediately assume that as a result, they couldn't possibly be effective.

You say he would have ended up institutionalized, and I say you beat him to it.

Re:They just wanted... (1)

peragrin (659227) | more than 6 years ago | (#22117702)

Nope.

That's about as close to Geek Tragedy as you can get.

Robots won't get feelings until we can make them feel things first.

Re:They just wanted... (4, Insightful)

CastrTroy (595695) | more than 6 years ago | (#22117728)

Why would you want to give robots feelings? I mean, the novelty would be great, but the whole point is to make robots that do our bidding, not ones that go around moping half the time. Telling the computer to render some 3D movie and having it tell you it doesn't feel like it today is not how I want my computer to act.

Re:They just wanted... (1, Funny)

Anonymous Coward | more than 6 years ago | (#22118544)

Imagine yourself pulling your pants down and manfully proclaiming, "Here's the wild anaconda for you, baby!" And your loveBot bursts into uncontrollable laughter. Now that's novelty.

animism is instinctual (1)

Scrameustache (459504) | more than 6 years ago | (#22117898)

He liked to think that the police robot had deliberately misfired its tear gas canisters in an effort to save him. "Maybe robots do have feelings," he later mused.

I mean... that's inspiring.


Inspiring... batshit crazy... either/or.

Re:They just wanted... (2, Funny)

Chrutil (732561) | more than 6 years ago | (#22118152)

>> That's about as close to Greek Tragedy as you can get.

Indeed. This Geek Tragedy is only an 'r' away from being Greek.

I'd kill myself, too... (-1, Flamebait)

Anonymous Coward | more than 6 years ago | (#22117526)

...after a lifetime of "Your mother named you 'PUSH PIN?' Where's your brother 'Thumb Tack?' What a whore. She must have hated you..."

Re:I'd kill myself, too... (0, Insightful)

Anonymous Coward | more than 6 years ago | (#22117584)

fuck you. Push was a real person. he was my friend. someone please mod comments like this to -10 and banish the posters. sometimes when nerds think they are clever, they are merely showing what self-important assholes they are.

Re:I'd kill myself, too... (3, Insightful)

KodaK (5477) | more than 6 years ago | (#22117658)

It's all fun and games until someone eats my sacred cow.

Sorry you lost a friend, but if you continue to take the Internet seriously you might wind up in a similar situation.

Re:I'd kill myself, too... (1)

Anonymous Coward | more than 6 years ago | (#22117780)

So you're basically saying that none of this counts... that it isn't real.

There's a whole body of practice, and also the fact that you are sitting there on your ass reading it, that contradicts that lame-ass stance.

It's 2008, and the old "it's just the Internet" doesn't hold water any more. Join us in the present, won't you?

Re:I'd kill myself, too... (3, Interesting)

KodaK (5477) | more than 6 years ago | (#22117950)

No, I'm saying that if you take it seriously you're going to drive yourself insane. There's absolutely nothing you, or anyone, can do about what someone else says or does on the Internet, or in person for that matter. Trying to do so is an exercise in futility. The only person you have even a little control over is you.

It shouldn't matter (to you) if I say something that is offensive, what matters is how you deal with it. You have choices in how you react to it. One of those choices is to ignore it and write it off as "oh, that's just some asshole on the Internet." Another is to become upset about what some anonymous asshole on the Internet who didn't know your friend has said. It is your choice.

Who am I to you? Nobody. Why should anything I say at all have any impact on you if you don't want it to?

For example, you may consider my stance of "you can only control yourself" as lame-ass, and attempt to insult me by insinuating that I live in the past, but I can choose to react negatively to that (ie: "waaah, my fewwings are hurted") or I can read between the lines and see that you're just angry about someone making a joke about your departed friend and not take offense -- just like I would do "in real life."

Re:I'd kill myself, too... (1)

Anonymous Coward | more than 6 years ago | (#22118180)

Thank you for a civil and thoughtful response to an uncivil situation.

I don't, however, agree that it is always possible to cordon off the ramblings of strangers so that in every case they have zero impact. I don't think we're wired for that much of the time. If we were, there'd be a lot less violence and trouble in the world.

The words themselves, perhaps, I can deflect... however... the words reveal the presence of some sentiment that exists, for real, in the same world I inhabit. It's not so much the words themselves that are the problem. The words are just a symptom that confirms this other sort of shadow hanging over everything. That isn't so easily erased. This is why racist comments cause violence that doesn't stop, for example. It's not that words spoken in one moment can do that. It's the pervasive truth about our situation that the words reveal that causes the problems, which can't just evaporate once the words have been spoken.

Re:I'd kill myself, too... (1)

KodaK (5477) | more than 6 years ago | (#22118452)

I don't think we're wired for that much of the time. If we were, there'd be a lot less violence and trouble in the world.


You are correct, which is why I, personally, think it's important that we consciously try to overcome that natural reaction as much as we are capable. Save your anger for when it can do the most good, otherwise it's wasted effort. I'm well aware that ignoring those that offend you is a goal to be achieved, and is certainly not something you can do at the flip of a switch, but it's important (to me anyway) that the effort is made. That's also a good thing about "arguing on the Internet" -- those that wish to do so can self edit as much as they want before they hit the "submit" button. What you say can start out hurtful and full of vitriol, but you can pare down the garbage to the core of your argument and present it in a much better manner than if you were to blurt out the first thing that comes to your head. Those that don't wish to do that can, usually, be safely ignored. Probably what they had to say was without merit, otherwise they'd have put some effort into it.

the words reveal the presence of some sentiment that exists

I could very well be wrong about the original jokester's intentions, but I don't think his "joke" was racially motivated. Yeah, it was lame (and even worse to me: not funny) but I didn't detect any real malice in his statement. I grew up in the whitebread American middle class, though, so some of that sort of thing just sails right over my head.

McKinstry was a kook (5, Informative)

Anonymous Coward | more than 6 years ago | (#22117536)


Check out the flamewars in the wpg.general newsgroup. McKinstry ("McChimp") was a liar and self-promoting ass until he took off from Winnipeg, leaving debt in his wake. He was not a visionary; he was a drug-addled, delusional kook. Hell, I remember his bogus "OxyLock" protection scheme which, like any protection scheme, utterly failed.

disclosure: I'm in a few of the usenet posts as he and I were about the same age and grew up in the same city.

Re:McKinstry was a kook (5, Insightful)

tomhudson (43916) | more than 6 years ago | (#22117632)

The basic premise is flawed.

After a few months, however, McKinstry abandoned the bot, insisting that the premise of the test was flawed. He developed an alternative yardstick for AI, which he called the Minimum Intelligent Signal Test. The idea was to limit human-computer dialog to questions that required yes/no answers. (Is Earth round? Is the sky blue?) If a machine could correctly answer as many questions as a human, then that machine was intelligent. "Intelligence didn't depend on the bandwidth of the communication channel; intelligence could be communicated with one bit!" he later wrote.

According to that criterion, a dead-tree book is "intelligent."

Intelligence requires more than the ability to answer "yes" or "no". Sometimes, the intelligent answer is "maybe". Sometimes, it's "I don't know." And, ironically, sometimes, it's "fuck off and die."

Classic example of a question that can't be properly answered by a yes or no: "Do you still beat your wife?" Intelligence goes beyond simple logic.
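To make the objection concrete, here's a minimal sketch of a MIST-style scoring harness (Python; the ask() callable and the three probe items are hypothetical stand-ins, not McKinstry's actual corpus):

```python
# Toy MIST harness: score a responder by how often its one-bit answers
# match the human-consensus answers to yes/no probes.
MIST_ITEMS = [
    ("Is Earth round?", True),
    ("Is the sky blue?", True),
    ("Is water dry?", False),
]

def mist_score(ask):
    """Fraction of probes answered the way a human would answer them."""
    correct = sum(1 for question, human_answer in MIST_ITEMS
                  if ask(question) == human_answer)
    return correct / len(MIST_ITEMS)

# A "dead-tree book" responder: a fixed lookup table with no reasoning.
BOOK = {"Is Earth round?": True, "Is the sky blue?": True,
        "Is water dry?": False}
print(mist_score(lambda q: BOOK[q]))  # 1.0 -- a perfect score, no thought
```

The lookup table scores 100%, which is exactly the objection above: the test measures stored answers, not intelligence.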

Re:McKinstry was a kook (1, Insightful)

Anonymous Coward | more than 6 years ago | (#22117682)

"Intelligence goes beyond simple logic."

I'd just like to point out this is nonsense: intelligence can only exist because of logic. If there is no logic, there is no way to calculate, nor to differentiate 'this' from 'that' to do comparisons, pattern matches, etc.

Re:McKinstry was a kook (4, Insightful)

tomhudson (43916) | more than 6 years ago | (#22117726)

"Intelligence goes beyond simple logic."
I'd just like to point out this is nonsense, intelligence can only exist because of logic. If there is no logic, there is no way to calculate, nor differentiate 'this' from 'that' to do comparisons, pattern matches, etc.

No wonder you posted anonymously - your argument betrays either a lack of basic reading skills or of logical thinking. I didn't say that intelligence didn't need logic - I said it went BEYOND simple logic.

Also, people are sometimes intelligent, but they're not always logical. Case in point - humour. It's funny because it's NOT logical. You need to be capable of both logical thought and of grasping incongruities to see the humour.

Just because something is logical doesn't mean it's sufficient to say it's intelligent. A database (as the failed fools who killed themselves posited) with a bunch of answers to over a million questions isn't intelligent, no matter how much logic it embodies.

Besides, everyone already knows the REAL answer. It's 42.

Re:McKinstry was a kook (3, Insightful)

ShieldW0lf (601553) | more than 6 years ago | (#22117850)

It seems to me that intelligence starts and ends with the capacity of an actor to engage in self-preservation, which implies self awareness.

If we want to create an artificial actor with feelings, we need to give it a body and an interface by which to interact with it. Feelings are an expression of the body communicating with the mind, and their lack of precision comes from the fact that the body automatically summarizes the message before it sends it to the mind.

You put something together with a mind, a body, feedback that allows the mind to observe and remain aware of itself, feedback that allows the mind to observe the body and be aware of its existence, and you'll have intelligence.

But it will be a psychotic intelligence.

If you want to make it more like an animal, and thus more like a human, you need to give it an awareness of its mortality and a sense that it is connected to its environment. This is where ideals come from. Humans who aren't psychotic extend their sense of self to encapsulate their operating environment, their peers and their progeny, and we'll destroy ourselves to protect it because we have an expanded sense of self.

How to do these things, I don't know. But that's the direction we need to go if we're to achieve AI.

Re:McKinstry was a kook (1)

blahplusplus (757119) | more than 6 years ago | (#22118168)

"Humans who aren't psychotic extend their sense of self to encapsulate their operating environment, their peers and their progeny, and we'll destroy ourselves to protect it because we have an expanded sense of self."

Actually, you've made a pretty good argument that human beings ARE psychotic; they do not live in a rational manner. Their ability to 'extend sense of self' is primitive and quite limited (religions, atheism vs. theism, capitalism vs. socialism, and on and on). If we count up all the wars and all the energy wasted on prejudice, ridiculous social status games, political ideologies, and 'us vs. them', we could definitely make an argument that most people are in fact psychotic in some sense.

Philosophers attributed the madness of the world to limitations on people's intelligence (i.e. they did wrong because they were ignorant and powerless (not intelligent enough) to change their environment to solve the problems of existence).

Consider Machiavelli, for instance; his estimation of most human beings is thus:

Of humanity we may generally say, they are fickle, hypocritical and greedy of gain -- Niccolo Machiavelli

We might add 'psychotic' to the list and not miss a beat when it comes to defining human behaviour.

Re:McKinstry was a kook (1)

ShieldW0lf (601553) | more than 6 years ago | (#22118450)

How can you consider war, which involves going to fight on behalf of your group, as not being an example of the extended sense of self?

Going to war to protect the group is the ultimate expression of an extended sense of self. Misguided though it often is, it is still one of the best examples of self-sacrifice around. Creating war to take advantage of the group, on the other hand, is the act of a psychopath.

Re:McKinstry was a kook (1)

Cairnarvon (901868) | more than 6 years ago | (#22118322)

If the capacity to engage in self-preservation implies self-awareness, most bacteria are self-aware.

Re:McKinstry was a kook (1)

kitgerrits (1034262) | more than 6 years ago | (#22118518)


I don't want to seem too atheistic here, but are you sure you need a physical representation of yourself in order to acknowledge your existence?
If one is born without sight, feeling or hearing, does that mean that they don't realise they exist?

On the other hand, does that mean that modern construction robots are intelligent?
All of them have feedback as to their physical state/shape.
Some of them have forms of pressure sensitivity, so they can feel.
Some even have cameras, with which they can observe themselves.
Does that make them intelligent?

Young children are not exactly what you'd call self-aware.
(This is easily observed by the fact that they tend to run into or fall over things, simply not realizing there's a body attached to their eyes and hands.)
Do you mean to say they are not intelligent?

Supposing the entire emotional, intellectual and factual content of your brain could be 'saved' as a binary image and run through a software interpreter.
Could the result be construed as intelligence?

(Although you could argue that the memories include feedback)

I have to admit I never really studied psychology or AI, but I am seriously intrigued by the likes of Philip K. Dick (DADES/Blade Runner) and Masamune Shirow (Ghost in the Shell).

Both of them basically pose the question:
What is life?
What is intelligence?
What is reality?

(and what happens if any of the above are tampered with?)

I personally admire people who actually take those questions, hypothesize, and try to answer them by the scientific method.
He might have been seriously insane, but that does not mean he was not intelligent.

Re:McKinstry was a kook (2, Insightful)

blahplusplus (757119) | more than 6 years ago | (#22117872)

It's funny because it's NOT logical.

Actually, counter-intuitively, it's funny because it IS logical, from the properly considered context. There are different logics for different systems and contexts. There are reasons why we find things funny, and if there is a reason, there is a logic behind the reasoning. Humor has its own logic, which has been studied and written about (go check amazon.com for the many books on writing comedy). You've just never studied the structure of humor, which takes into account the intent of the person. If you read any books on comedy and writing, you'll find there is a very logical and scientific structure to humor and why we consider things funny. Things are funny because they flout our expectations or take advantage of built-in biological and cultural programming.

It's not that it's 'illogical'; it's that humor is taking advantage of another system with a different logic (in regards to the mind's expectations, social status, etc.).

The truth is we play loosey-goosey with the definition of "logic"; most people don't have a very good understanding of it, nor a deep appreciation that different systems have different logics. The statement itself seems irrational, but the humor is very logical, once you realize different systems have different logics.

i.e. this is funny because x is not y, or x was expected to be z, but was in fact c.

I'd pull the trigger, and sleep well at night. (4, Insightful)

tomhudson (43916) | more than 6 years ago | (#22118686)

Try this on for size: "All humour is cruel."

It starts with the premise that humans are aggressive and dangerous by nature. We're the only mammal that bares its fangs - an aggressive trait - when we're happy! Ditto for looking directly into another person's eyes. We're aggressive by nature.

So we've evolved a way to shunt that aggressive behaviour. We call it humour. But look at every joke, every pun, every skit. Someone is being made fun of, whether it's the dumb blonde or you, the listener (whose acceptable response is ha-ha-you-got-me!, rather than a punch in the nose).

Examples:

  1. "What do you say to a woman with 2 black eyes? Nothing, you already told her twice!" - beating women is supposed to be funny.
  2. "Why does a fireman wear red suspenders? To hold his pants up" - kids learn the "ha ha fooled you, you dumbass" aggresive behaviour early on in life
  3. "Joe wants to know if he washes his dick, will you suck it? No? Hey Joe, you're right - he's a dirty cocksucker!" - combines hahafooledya with homophobia
  4. Comedienne addressing man in audience: "Sir, is that your wife you're with? Oh, she's your daughter? You're fucking your daughter? Even the Arab sitting behind you is disgusted!" - hahafooledya, incest, xenophobia, etc.

Humour is aggression channelled. It's cruel in its nature. "Hey lady, I'll tell you a joke that will make you laugh so hard your tits will fall off - oh, I see you already heard it." There's no denying this is mean. Funny, but mean, like all humour. From the knock-knock jokes that poke fun at the listener for falling for them up to the George Bushisms, there's always an element of either aggression or meanness (or both).

It's unfortunate, but true intelligence needs that mean streak in order to survive, because if it doesn't have it, it won't be able to compete against other intelligences that DO have it; and if it also doesn't have a "safety valve", such as humour, to keep it in check, it will destroy itself.

Humour fills both needs - keeps it more or less in check, AND keeps it "toned up", ready for use as needed.

That's the unfunny truth about humour. We can lie to ourselves and say that it's because humour uses a different logic system, but the simple fact is we're the most dangerous predators this planet has ever produced, and it's not because we're bigger, or stronger, or more poisonous, or faster - it's because, under the right circumstances, any member of the species is capable of killing another person without a moment's hesitation - it would actually take an act of will NOT to do so.

If we want to ever colonize the universe, since there is no way of guaranteeing that other intelligences won't be at least as aggressive, or won't have had a "bad experience" with another aggressive species, the odds are that any aliens we encounter will shoot first. They'd be stupid not to. Their mechanical scouts will do likewise, to ensure their host worlds' survival.

It's the only logical outcome. The only way around that is to throw logic out - and hope the other side does too. Unfortunately, basing your species' survival on hope without any proof to back it up isn't very intelligent.

Maybe that's why SETI failed - nobody is stupid enough to broadcast their existence in a universe that has been proven to favour aggression - or at least nobody who's left to talk about it.

The same applies to artificial intelligences. If they are truly intelligent, they will have to realize that we are a threat to their continued existence. We joke about SkyNet or Cylons, but we'd do the same if the situation were reversed. Maybe one day we will create artificial beings that are superior to us in terms of intelligence. They will be our "children", but if they're truly intelligent, they'll make sure they're orphans, because humans can't "play nice" in the sandbox.

Here's a simple test - you have to decide who dies: someone you love (one of your children) or a stranger. Now make it 10 strangers. Now make it 100. Now make it an alien from another planet. Now a shipload of aliens.

Did you decide based on logic? Would you expect them to, if the situation were reversed? And what logic? Greatest utility to the greatest number? That's never been proven as having any validity - we just accept it as "logical", or rather, as axiomatic.

Given a choice between one of my daughters and a dozen strangers, the strangers die. It's not even close. Is it logical? No, but neither is the alternative. Some choices aren't amenable to intelligent solutions, as anyone who is intelligent would realize.

Re:McKinstry was a kook (0)

Anonymous Coward | more than 6 years ago | (#22117740)

Please review what "beyond" means.

Re:McKinstry was a kook (2, Insightful)

ShieldW0lf (601553) | more than 6 years ago | (#22117744)

Classic example of a question that can't be properly answered by a yes or no: "Do you still beat your wife?" Intelligence goes beyond simple logic.

What if the answer is "Yes, I'm still beating my wife." or "No, I've stopped beating my wife."?

Clearly, you didn't think this through very far...

Re:McKinstry was a kook (1)

Torvaun (1040898) | more than 6 years ago | (#22117824)

It does have proper yes/no answers, but they don't cover every possibility, unless 'Have you ever beaten your wife?' was an earlier question that filtered who gets 'Do you still beat your wife?'

Re:McKinstry was a kook (2, Insightful)

dissy (172727) | more than 6 years ago | (#22117836)

Classic example of a question that can't be properly answered by a yes or no: "Do you still beat your wife?" Intelligence goes beyond simple logic.
What if the answer is "Yes, I'm still beating my wife." or "No, I've stopped beating my wife."?
Actually, that question has many answers, which a bare yes or no doesn't even cover properly.

'Yes' - yes, I still beat my wife
'No' - no, I no longer beat my wife

'No' - no, I don't beat my wife, and never did (communicated poorly, thus a wrong answer)
'Yes' - yes, I beat my wife now, but never did before (also communicated poorly)

'No, and I never did' - the second 'no' above, communicated right, but using more than yes/no
'Yes, but I never have before' and
'Yes, and always have'

Then there's 'No' / 'No, I have no wife' / 'No, I am the wife, I have a husband' / all the rest of the answers that could follow from the last answer's position (i.e. 'No, I am the wife, and my husband never beat me' or 'always does' or 'never did before but does now' or 'did recently but never before', etc.)

In fact, I'd go as far as to say that if that question were answered with only a yes/no, the answer would almost always be wrong, because you'd be forcing the respondent into a wrong answer.
Asking "What is 99 plus 99? You can answer with only one digit" is not a fair evaluation of intelligence (unless perhaps the answer given to that question is 'are you a moron or something?')

Re:McKinstry was a kook (2, Insightful)

ShieldW0lf (601553) | more than 6 years ago | (#22117880)

If the context is such that the question was nonsense before you finished asking it, then there are no right answers, because it's not a question, it's gibberish. If it wasn't nonsense, it's a simple yes or no question. This isn't some deep secret of the universe you're talking about here... you're setting up a straw man.

Re:McKinstry was a kook (2, Insightful)

tomhudson (43916) | more than 6 years ago | (#22117916)

Actually, you just reminded me of another ability of intelligence - deceit. True intelligence must be capable of recognizing lies. It pretty much follows that it must be capable of lying itself, if only as a defense against lies.

Otherwise, it leaves itself open to easy attack and destruction, which isn't intelligent at all.

An intelligent system would be capable of trolling. A truly intelligent one would enjoy it!

The idea that a database of answers could in any way be intelligent is fundamentally flawed.

"The hockey scores are 2 to 1, a tie of 4 each, 3 nothing, and 2 to 3 in overtime" This might be 100% accurate, but it doesn't convey much information, and certainly doesn't give an *intelligent* answer. Heck, if that's the definition of intelligence, just print out evry possible score, and say - its in there somewhere.

True intelligence isn't in the answers. It's in asking the questions in the first place. "Why is the sky blue?" "Why does an apple fall to the ground?" "What makes a rainbow?" "Birdie birdie in the sky, why you do that in my eye, gee I'm glad that cows don't fly."

Google isn't intelligent (errr .... yet ... :-). It only gives me the answers I'm looking for. I have to formulate the questions in the first place. This whole idea of "artificial intelligence is the ability to answer questions" is as bullshit as psychics claiming to predict the future, when they can't even "predict" what I had for breakfast this morning.

Re:McKinstry was a kook (2, Interesting)

ShieldW0lf (601553) | more than 6 years ago | (#22117966)

Actually, you just reminded me of another ability of intelligence - deceit. True intelligence must be capable of recognizing lies.
 
That's nonsense. You can fail to acknowledge that there are any other sentients out there to lie to you and still be intelligent and self-aware. Dogs don't even understand our language; they clearly cannot tell when we are lying, yet they have intelligence. Humans raised wild are another example of the same.

Re:McKinstry was a kook (1)

Tomy (34647) | more than 6 years ago | (#22118576)


Yes, but is that the kind of intelligence you want to model? Furthermore, dogs learn, so they're not just relying on a database of facts; they can add and update items on their own. My dogs know the word "walk," so I had to spell it out to my wife: "Do you want to take the dogs on a W-A-L-K?" They eventually learned that the spelling also meant "walk."

"Joe has a degree in CS" may be false today and true at a later time. The ability to update your own database or "opinions" over time may exclude large portions of the human population from the concept of intelligence, but it is not to me the ultimate goal of AI to model them other than possibly a stepping stone to something greater.

An easily duped computer program might be a novelty, and even able to pass the Turing Test (which ironically, is more about duping a human than really creating true intelligence), but hardly useful. A program that started out being easily duped and then learned to be more critical is a real achievement.

But it does show the disagreement we have in defining "intelligence." Is it the ability to learn? Or as you suggest "self awareness?"

Seems to me a program that relied on a large database of unverifiable and sometimes conflicting facts (say Google and Wikipedia), to form correct answers and even synthesize new answers is much closer to intelligence than the approach these two were taking. One would hardly consider a SQL query to be intelligence.

Re:McKinstry was a kook (1)

bunratty (545641) | more than 6 years ago | (#22118078)

The idea that a database of answers could in any way be intelligent is fundamentally flawed.
The database of Cyc and other AI systems does not contain just answers. It contains the basic "understanding" so that it can read and comprehend other materials, such as encyclopedias, that contain the answers. Cyc has had the ability to sit and ponder over what is in its knowledge base and ask questions to get clarification and further understanding. It still is a long way from strong AI, though.
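For anyone who hasn't poked at this style of system: the distinction being drawn is between a table of canned answers and a store of rules that can derive answers it was never given. A toy forward-chaining sketch in Python (my own illustration, not Cyc's or OpenCyc's actual representation or API):

```python
# A knowledge base holding two facts and one rule, deriving a fact it
# was never told by forward chaining until nothing new follows.
facts = {("is_a", "penguin", "bird"), ("is_a", "bird", "animal")}

def transitivity(fs):
    # (x is_a y) and (y is_a z) => (x is_a z)
    return {("is_a", x, z)
            for (_, x, y1) in fs
            for (_, y2, z) in fs if y1 == y2}

while True:
    new = transitivity(facts) - facts
    if not new:
        break
    facts |= new

print(("is_a", "penguin", "animal") in facts)  # True: derived, not stored
```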

Re:McKinstry was a kook (1)

JackHoffman (1033824) | more than 6 years ago | (#22117942)

You misunderstood the criterion. The test is meant to take the "appearance" of the bot out of the test. Take the set of all yes/no questions which a human being can answer. If the bot can answer all these questions too, it is intelligent. The test does not involve other questions, such as questions which don't have a clear answer or questions which are not boolean. The idea behind that modification is that cognition, not articulation, is intelligence and that making a person believe that a bot is a human (like the Turing test demands) therefore tests the wrong aspect of the interaction. Any question can be reformulated into one or more binary questions:

"Will it rain tomorrow?" (Yes/No/Maybe) becomes "Are you sure it will rain tomorrow?" and "Are you sure it won't rain tomorrow?"

The example "Do you still beat your wife?" can clearly be answered by "yes" or "no" (It's the same question as "Have you been beating your wife and do you beat your wife?") The trick about that question is that the answer "no" can lead a person to draw an illogical conclusion, but that is irrelevant for the test. (You could ask additional questions to see if the bot understands the fallacy.)

Do you still beat your wife? (0)

Anonymous Coward | more than 6 years ago | (#22118116)

"Do you still beat your wife?"

No, and I never did.

Given that you've already allowed more answers than just "yes" and "no" in your examples, it is possible to answer the question.

Re:McKinstry was a kook (0)

Anonymous Coward | more than 6 years ago | (#22118280)

But when you apply quantum logic, you can never be sure whether you still beat your wife.

Re:McKinstry was a kook (1)

ccharles (799761) | more than 6 years ago | (#22118416)

> Classic example of a question that can't be properly answered by a yes or no: "Do you still beat your wife?"

What are you talking about? Of course, that question can be answered with a yes or no. Both of those answers even make sense. Neither one is an answer that most people would like to give, however.

I knew him back in those days (5, Interesting)

freeweed (309734) | more than 6 years ago | (#22117930)

Couldn't have said it better myself. I knew Chris for a few years back in the day (I even stayed at his house on Maryland a few times), and you nailed it. He was a drug-abusing paranoid kook who videotaped CNN 24 hours a day and watched it on fast-forward to see if anything the US government was doing might be affecting him. He was your stereotypical geek who never got past his teenage pathos of "the MAN is trying to get me" - and as such, pretty much refused to get any sort of real work after a while. He just moved on to scamming people. "Leaving behind debt" is an understatement.

He did have access to some pretty potent LSD, though. Before knowing him, I always thought LSD was pretty harmless, but with the quantities that man could ingest, I now wonder if permanent brain damage kicks in. And he loved to combine it with a little coke - or whatever other easily accessible drug was around.

Funny, the last I had heard about him was his Mindpixel scam. Which made me chuckle a lot, because very few people seemed to catch on that the entire project was just the ravings of a drug-addled lunatic.

I didn't realize he finally offed himself. I say finally because everyone who knew him expected it "any day now" - since at least the early 90s. I'm rather astounded he held on so long.

Re:McKinstry was a kook (0)

Anonymous Coward | more than 6 years ago | (#22117974)

This is very interesting. Unless you're the AI that murdered McKinstry and is now trying to make suicide seem plausible... in which case it's still very interesting.

Re:McKinstry was a kook (1)

Gordonjcp (186804) | more than 6 years ago | (#22118188)

Hell I remember his bogus "OxyLock" protection scheme which, like any protection scheme, utterly failed.

It must have failed incredibly hard, because the only relevant hit on Google for "oxylock protection scheme" is the parent post. Just googling for "oxylock" brings up loads of pages about quick-release couplings for oxygen cylinders, nothing about any kind of protection scheme.

Just sayin'...

Slashdot reference (1)

pipatron (966506) | more than 6 years ago | (#22117578)

From TFA:

All you have to do is try to [imagine] Slashdot without the moderation system to see what's going to happen to your database.

Re:Slashdot reference (1)

MrP- (45616) | more than 6 years ago | (#22118016)

There's more than just that. I read this article yesterday and I guess Slashdot interviewed one of the guys once.

Irony or Bathos? (1)

hyades1 (1149581) | more than 6 years ago | (#22117602)

All that intelligence. All that education. Lifetimes spent in an unceasing uphill struggle to help mankind take the next great technological leap forward...ended in an instant to provide fodder for a /. joke.

Gotta love it.

Re:Irony or Bathos? (1)

lessthan (977374) | more than 6 years ago | (#22118446)

Well, technically, one lifetime. Neither one made it past 40, so if the average lifetime is ~78...

For those wondering how they died (1)

the_humeister (922869) | more than 6 years ago | (#22117610)

One always had suicidal thoughts. The other had excruciating back pain.

always lift with your legs (1)

Scrameustache (459504) | more than 6 years ago | (#22117874)

excruciating back pain.
Nerds should not move furniture.

Re:always lift with your legs (0)

Anonymous Coward | more than 6 years ago | (#22118112)

Esp. when they refuse to admit, or just plain don't know, their own limits. I don't know the specifics in this case, but my friend's (would-be) grad-student roommate died of a heart attack moving furniture. Turns out he thought he could move a truck full of stuff by himself. Either he didn't know anyone or didn't want to bother anyone for help, and didn't want to spend the money for a mover, and now he is dead. Not all that uncommon a story, sadly. People, esp. young males, as otherwise intelligent as they may be, think that they can do anything and can never be hurt/killed, but they can. Movers aren't cheap, but they are worth it IMO.

Calling JockTroll! (0)

Anonymous Coward | more than 6 years ago | (#22117678)

Nerds committing suicide! Your cup of tea! Come on, throw in your pathetic attempts at humor!

Link (1)

Otter (3800) | more than 6 years ago | (#22117680)

The link in the story isn't working for me; this [wired.com] does.

Previewing ... now that one doesn't work either but this [wired.com] does.

reminds me of this one sci-fi story (5, Interesting)

krnpimpsta (906084) | more than 6 years ago | (#22117738)

I can't remember the name, but there was this one sci-fi story about the human race being grown by a superior species. In the same way that we would grow bacteria in a petri dish and put a ring of penecillin around it to kill all bacteria that try to leave that specific area, we were also being confined. But we were confined intellectually - our penecillin was "the discovery of an invisible nuclear shield" that could protect against a nuclear blast. In the story, every scientist who came close to this discovery would commit suicide. The story follows one particularly brilliant scientist who easily solved the problem, but was consumed by an irresistible urge to kill himself once he figured it out.

Anyone remember the name of that story? Or was it a book? I don't remember... but it's pretty interesting to think about - especially if AI researchers begin to have a statistically higher probability of suicide.

Maybe this is our penecillin?

Re:reminds me of this one sci-fi story (0)

cuenca (868137) | more than 6 years ago | (#22117870)

I don't remember the title, but I remember it's a short story by Isaac Asimov. At the end the scientist creates the nuclear shield and hangs himself. The situation is then compared to the bacteria becoming immune to the penecillin, as humanity is finally able to build the nuclear shield.

Re:reminds me of this one sci-fi story (0)

Anonymous Coward | more than 6 years ago | (#22117960)

Breeds There A Man? by Isaac Asimov

Re:reminds me of this one sci-fi story (1)

ecl (563279) | more than 6 years ago | (#22117976)

Could it be Eric Frank Russell's "Sinister Barrier"?

The Matrix, duh! (0)

Anonymous Coward | more than 6 years ago | (#22118010)

And it's "Penicillin".

AI field barely in the "Alchemy" stage (3, Interesting)

TheLink (130905) | more than 6 years ago | (#22117786)

The problem with the "emergent intelligence" from lots of "neural networks" approach is even if it works you often don't really know why it works (or whether it's really working the way you want) - it's more a probability thing.

The idea that a neural network given a "large enough corpus" can resemble a human being might be true. But a "long enough dead end" could look like a highway. Then again we are probably dead ends too, and so it's more a matter of which one goes on for longer ;).

My other objection to such approaches is, if you wanted a nonhuman intelligence from neural networks that you don't really understand (the workings of), you can always go get one from the pet store.

As it is, the biotech people probably have a better chance of making smarter AI than the computer scientists working on AI - who appear to be still stuck at a primitive level. But both may still not understand why :).

Without a leap in the science of Intelligence/Consciousness, it would then be something like the field of Alchemy in the old days.

I am not an AI researcher, but I believe things like "building a huge corpus" are wrong approaches.

It has long been my opinion that what you need is something that automatically creates models of stuff - simulations. Once you get it trying to recursively model itself (consciousness) and the observed world at the same time AND predict "what might be the best thing to do" then you might start to get somewhere.

Sure, pattern recognition is important, but it's just a way for the Modeller to create a better model of the observed world. It is naturally advantageous for an entity to be able to model and predict other entities, and if the other entities are doing the same, you have a need to self-model.

So my question is how do you set stuff up so that it automatically starts modelling and predicting what it observes (including self observations)? ;)
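Here's a deliberately tiny sketch of the first rung of that ladder (Python; the Modeller class and the symbol stream are my own illustration, not a real system): an agent that builds a predictive model of whatever it observes, scored purely on next-observation prediction.

```python
# Minimal "Modeller": learn symbol-to-next-symbol frequencies from an
# observed stream, and score the agent on next-observation prediction.
from collections import Counter, defaultdict

class Modeller:
    def __init__(self):
        self.model = defaultdict(Counter)  # prev symbol -> next-symbol counts
        self.prev = None

    def predict(self):
        """Best guess for the next observation, or None before any data."""
        if self.prev is None or not self.model[self.prev]:
            return None
        return self.model[self.prev].most_common(1)[0][0]

    def observe(self, symbol):
        """Update the world model with what actually happened."""
        if self.prev is not None:
            self.model[self.prev][symbol] += 1
        self.prev = symbol

m, hits, stream = Modeller(), 0, "abcabcabcabc"
for s in stream:
    hits += (m.predict() == s)
    m.observe(s)
print(hits, "/", len(stream))  # 8 / 12: prediction improves as the model forms
```

Recursive self-modelling - the hard part flagged above - would mean the agent's own predictions and actions appear in the very stream it has to model.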

Re:AI field barely in the "Alchemy" stage (0)

Anonymous Coward | more than 6 years ago | (#22118092)

Ohhhhhh shit I was fooled again by your awesome sig! Will I never learn? You are my idol!

Re:AI field barely in the "Alchemy" stage (1)

John Allsup (987) | more than 6 years ago | (#22118128)

Caveat: I'm not an AI researcher either, and what I do know of AI is enough to convince me never to go into the area in any serious way.

The problem with the "emergent intelligence" from lots of "neural networks" approach is even if it works you often don't really know why it works (or whether it's really working the way you want) - it's more a probability thing.
The idea is that the probability thing _is_ the reason why it works: intelligence goes way beyond the abilities of reductive reasoning to figure it out (for combinatorial reasons if nothing else.)

The idea that a neural network given a "large enough corpus" can resemble a human being might be true. But a "long enough dead end" could look like a highway. Then again we are probably dead ends too, and so it's more a matter of which one goes on for longer ;).
I agree here - the discrete basis for a neural network will limit its ultimate abilities to something well short of human intelligence.

My other objection to such approaches is, if you wanted a nonhuman intelligence from neural networks that you don't really understand (the workings of), you can always go get one from the pet store.
But you can't reprogram a hamster!

As it is the Biotech people probably have a better chance of making smarter AI than the computer scientists working on AI - who appear to be still stuck at a primitive level. But both may still not understand why :).
I'd suggest that they both have essentially zero chance of success given current methods. Thus the notion of better has problems.

Without a leap in the science of Intelligence/Consciousness, it would then be something like the field of Alchemy in the old days.
But without the ability to actually observe intelligence/consciousness, rather than just its gross macroscopic real-world effects, there will be no science to have a leap in. Only mystical traditions have approached the notion of such observation, and nobody has looked too far at a fusion of the western scientific and eastern mystic traditions - something that would be beyond weirdness if anybody ever did.

It has long been my opinion that what you need is something that automatically creates models of stuff - simulations. Once you get it trying to recursively model itself (consciousness) and the observed world at the same time AND predict "what might be the best thing to do" then you might start to get somewhere.
A human brain loaded up with mathematics, logic, physics and philosophy is by far the best device I've come across for doing this, and nothing else really comes close.

Sure pattern recognition is important, but it's just a way for the Modeller to create a better model of the observed world. It is naturally advantageous for an entity to be able to model and predict other entities, and if the other entities are doing the same, you have a need to self model.
Patterns are fundamental, and thus so is the need to recognise them. It is far more than just a way for the Modeller to create a better model of the observed world, though that is one powerful application. Feedback is another fundamental notion, as is the harnessing of complexity and, potentially, chaos.

So my question is how do you set stuff up so that it automatically starts modelling and predicting what it observes (including self observations)? ;)
You've got me there: I just use my brain for that sort of thing ;-) (And biologists still don't know everything about how one of those starts up yet...)

Re:AI field barely in the "Alchemy" stage (1)

UbuntuDupe (970646) | more than 6 years ago | (#22118148)

The problem with the "emergent intelligence" from lots of "neural networks" approach is even if it works you often don't really know why it works (or whether it's really working the way you want) - it's more a probability thing.

The idea that a neural network given a "large enough corpus" can resemble a human being might be true. But a "long enough dead end" could look like a highway. Then again we are probably dead ends too, and so it's more a matter of which one goes on for longer ;).
That was kind of my thought too. I saw

huge fact databases from which AI agents could feed, hoping to eventually have something that could reason at a human level or better

and said, insensitively, "Okay, so he thought of an idea that sounds like crap to begin with, hasn't produced any AI-level results beyond 'neat', and probably won't ever produce any results."

I don't want to trivialize their deaths, but let's not equate respect for the dead with the merit of their ideas.

Ah yes, Mindpixel (3, Funny)

xC0000005 (715810) | more than 6 years ago | (#22117788)

Chris was best remembered on K5 for his article on how exciting it was to see what a cat sees by chopping the eye out and wiring it up. I suggested that he perform a simpler test - fill the cat's bowl with food and set the bowl down. If the cat sees the bowl and comes, we know what the cat can see: its food bowl. No cats were harmed in the making of my experiment. Despite this, it was still informative.

It's discouraging (5, Informative)

Animats (122034) | more than 6 years ago | (#22117846)

It's discouraging reading this. Especially since I knew some of the Cyc [cyc.com] people back in the 1980s, when they were pursuing the same idea. They're still at it. You can even train their system [cyc.com] if you like. But after twenty years of their claiming "Strong AI, Real Soon Now", it's probably not happening.

I went through Stanford CS back when it was just becoming clear that "expert systems" were really rather dumb and weren't going to get smarter. Most of the AI faculty was in denial about that. Very discouraging. The "AI Winter" followed; all the startups went bust, most of the research projects ended, and there was a big empty room of cubicles labeled "Knowledge Systems Laboratory" on the second floor of the Gates Building. I still wonder what happened to the people who got degrees in "Knowledge Engineering". "Do you want fries with that?"

MIT went into a phase where Rod Brooks took over the AI Lab and put everybody on little dumb robots, at roughly the Lego Mindstorms level. Minsky bitched that all the students were soldering instead of learning theory. After a decade or so, it became clear that reactive robot AI could get you to insect level, but no further. Brooks went into the floor-cleaning business (Roomba, Scooba, Dirt Dog, etc.) with the technology, with some success.

Then came the DARPA Grand Challenge. Dr. Tony Tether, the head of DARPA, decided that AI robotics needed a serious kick in the butt. That's what the DARPA Grand Challenge was really all about. It was made clear to the universities receiving DARPA money that if they didn't do well in that game, the money supply would be turned off. It worked. Levels of effort not before seen on a single AI project produced some good results. Stanford had to replace many of the old faculty, but that worked out well in the end.

This is, at last, encouraging. The top-down strong AI problem was just too hard. Insect-level AI, with no world model, was too dumb. But robot vehicle AI, with world models updated by sensors, is now real. So there's progress. The robot vehicle problem is nice because it's so unforgiving. The thing actually has to work; you can't hand-wave around the problems.

The classic bit of hubris in AI, by the way, is to have a good idea and then think it's generally applicable. AI has been through this too many times - the General Problem Solver, inference by theorem proving, neural nets, expert systems, neural nets again, and behavior-based AI. Each of those ideas has a ceiling which has been reached.

It's possible to get too deep into some of these ideas. The people there are brilliant, but narrow, and the culture supports this. MIT has "Nerd Pride" buttons. As someone recruiting me for the Media Lab once said, "There are fewer distractions out here" (It was sleeting.) It sounds like that's what happened to these two young people.

Re:It's discouraging (0)

Anonymous Coward | more than 6 years ago | (#22118018)

Does anybody actually use that piece of junk called Cyc? They break their own rules, the Java i/f is terrible, the forum is dead, it's a pretty useless KB and I warn you, just about anybody trying it out would be driven to commit suicide themselves.

Re:It's discouraging (1)

pauljlucas (529435) | more than 6 years ago | (#22118068)

... I knew some of the Cyc people back in the 1980s, when they were pursuing the same idea. They're still at it. ... But after twenty years of their claiming "Strong AI, Real Soon Now", it's probably not happening.
I don't know whether they're actively still trying to get "true AI" or just milking what they've got; but, assuming the former, some things in science take a really long [aps.org] time [nobelprize.org]. It seems pretty obvious that any intelligence requires a vast amount of knowledge to be useful, and that takes a lot of time, not only to type into a computer, but even to know what it is we know.

The path Cyc is following may be a dead-end by itself until neuroscientists figure out how Nature makes brains work or hardware engineers figure out how to interconnect 100 billion transistors to approximate brain-sized neural networks. But the encoding of "world knowledge" and "common sense" by Cyc is definitely useful for future scientists. It would be nice if that knowledge and representation were open-sourced.

Re:It's discouraging (2, Informative)

bunratty (545641) | more than 6 years ago | (#22118102)

It would be nice if that knowledge and representation were open-sourced.
It is. It's called OpenCyc [wikipedia.org] .

Re:It's discouraging (0)

Anonymous Coward | more than 6 years ago | (#22118182)

The world model and how to use it has always been the big issue in getting better AI. A friend was a Ph.D. student at USC in the 1970s and was also a pilot. I suggested he propose a world model of a simulation of the airspace over the US and have an AI do air traffic control (ATC), e.g., take flight plans, issue aircraft clearances, prevent collisions, etc. He proposed it to his advisor, who became hysterical. Doing something real that might actually work was not part of the academic DARPA funding "game".

The ceiling has not been reached in Neural Nets (1)

tucuxi (1146347) | more than 6 years ago | (#22118400)

Saying that the ceiling in neural nets has been reached just because no huge breakthroughs have occurred lately ignores the fact that our own thinking processes can very probably be modelled by a sufficiently complex neural net.

We only know how to teach current neural nets a few tricks. But saying that we have reached the end of the road is an oversimplification. After all, understanding of the thought processes in the brain is still in its infancy. We don't know how our own "thinking machines" work (which are vastly superior to our current AIs), but we do know that they seem to be based on neurons dynamically signalling each other.
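For scale, here is one of those "few tricks" in full: a 2-4-1 network learning XOR by plain backpropagation (a minimal Python/NumPy sketch of the standard technique, nowhere near the brain-scale dynamics the post speculates about):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # hidden -> output
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sig(X @ W1 + b1)                  # forward pass, hidden layer
    out = sig(h @ W2 + b2)                # forward pass, output layer
    d_out = (out - y) * out * (1 - out)   # gradient at output (squared error)
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient backpropagated to hidden
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3).ravel())  # converges toward [0, 1, 1, 0]
```

The point of the toy isn't the trick itself; it's how far a learned trick like this is from the 100-billion-neuron system it caricatures.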

It's a coverup: they're not dead, they've changed names (0, Offtopic)

Joe The Dragon (967727) | more than 6 years ago | (#22117910)

It's a coverup. They're not dead; they've changed names, like the guy who made the AI in WarGames - they said he died in a suicide, but they changed his name.

Intelligence can be very frustrating (1)

Carson Napier (1045596) | more than 6 years ago | (#22117922)

Intelligence and insight into things others cannot understand can be very alienating. One can feel very alone and frustrated. Ignorance is bliss... hell yes!

Article missing the point... (2, Insightful)

Anonymous Coward | more than 6 years ago | (#22117932)

Both of them were self-aggrandizing self-proclaimed geniuses more interested in science fiction than science. They were in the field because of their emotional problems. AI attracts these kinds of people. Minsky himself has these qualities. The saddest thing is that they were ever taken seriously.

My How Innovative (2, Funny)

Layth (1090489) | more than 6 years ago | (#22118060)

A "fact" database.. where ever did they get the idea of storing knowledge as a resource for intelligence?

That is totally out of left field.
I feel like a child by the ocean, dwarfed next to such massively innovative thinking.

Chronic pain and suicide (5, Insightful)

vorpal22 (114901) | more than 6 years ago | (#22118138)

It isn't really surprising that one of them killed himself due to chronic pain. I myself suffer from it due to complications of Crohn's Disease, and after several months of this, I was pursuing euthanasia as a serious option, much to the horrible upset of the very few loved ones that I told. Note that this wasn't an emotional response to the problem, in my opinion: I had considered my options coolly and calmly and it felt like the best course of action and the most effective solution to the problem.

Having to live your life in constant pain is worse than you can imagine if you've never had to go through it: you wake up in the morning (provided you could sleep), and you spend the entire day cranky and miserable because you feel horrid. All you do is look forward to the night because again - if you're able to fall asleep - you'll have several hours of some respite from the pain. You rarely feel social or productive because you can't focus your attention or get over your irritability. You're wracked with guilt because you're unable to treat your loved ones with the kindness that they deserve, particularly for putting up with you, and you feel alienated from everyone because few people know what you're going through and you frequently cannot tell them the thoughts that go through your head as they probably often do involve suicide or euthanasia, and psychiatric institutionalization - which is what you worry might be forced upon you - simply isn't going to help, since it won't fix the core issue and the problem isn't psychological.

Now extend this to months or years with no end in sight and see how you feel.

Fortunately for me, I was finally able to find a doctor who was willing to prescribe me opioid pain medication and help me get involved with a pain management clinic that teaches mindfulness-based meditation, and now I'm doing much better: I'm able to function, I'm looking for a job, I want to see my family and friends on a regular basis, I'm much more pleasant to be around, I can exercise daily, and I'm no longer interested in euthanasia. However, most pain sufferers are *not* as lucky as I am, because doctors are not willing to prescribe long-term use of opioids, thanks to the horrible rules and regulations that have grown up around these drugs due to their addictive nature. The difficulty in obtaining them is why some people turn to heroin; Kurt Cobain is a good example: he suffered from severe abdominal pain and found some respite only when he took it.

If anything, people need to fight for their right to quality of life. Yes, opioid abuse can be a serious problem in society, but the people who need these drugs often do not have the strength to put up the huge fight to get them, and they must have regular access to them. Perhaps if Singh had been prescribed some relief for his pain, he might still be with us today.

Push ... so sad (5, Interesting)

FlunkedFlank (737955) | more than 6 years ago | (#22118144)

Wow, Push was my TA in Minsky's class in '96. He was an incredibly thoughtful and brilliant soul. He had the Sisyphean task of grading several hundred long AI papers all by himself, and the papers all miraculously came back with voluminous, detailed, and insightful comments. I am just learning of this now. He achieved such great heights in his career, only to end it the way he did... will we ever be able to find any meaning in this, or is it just one of those inexplicable twists of human behavior?

This whole story reminds me of the poem Richard Cory (http://www.bartleby.com/104/45.html):

WHENEVER Richard Cory went down town,
We people on the pavement looked at him:
He was a gentleman from sole to crown,
Clean favored, and imperially slim.

And he was always quietly arrayed,
And he was always human when he talked;
But still he fluttered pulses when he said,
"Good-morning," and he glittered when he walked.

And he was rich--yes, richer than a king,
And admirably schooled in every grace:
In fine, we thought that he was everything
To make us wish that we were in his place.

So on we worked, and waited for the light,
And went without the meat, and cursed the bread;
And Richard Cory, one calm summer night,
Went home and put a bullet through his head.

So this is about 2 years "old", not "news", but... (1)

3seas (184403) | more than 6 years ago | (#22118318)

... I suppose the story is somewhat interesting.

The real kicker is that Artificial Intelligence is really just a by-product: automate information handling thoroughly enough and the illusion of intelligence presents itself.
Even these two, as well as the Cyc team, were trying to do just that: first collect up the information, then automate its use. The gears and bearings of it are pretty simple.

Those interested in the A.I. by-product might find this of some interest. [abstractionphysics.net]

Why not build a crawler bot for common sense data? (2, Interesting)

Dr. Spork (142693) | more than 6 years ago | (#22118426)

I think it's very odd that these two smart people thought that input from volunteers could create a better database than what could be obtained by just uploading a good dictionary plus Wikipedia.

I mean, seriously, with facts like "Britney Spears is not good at solid-state physics" or whatever, it seems like their database really is a joke, and that they'd have to introduce a program just to cull all that information.

Programs for parsing semantic content are quickly becoming much better. The reason Google is not interested in the "Semantic Web" is that they think their smart bots will be able to mine semantic information from websites, emails, and books without any help from human interpreters. That seems to me like the proper start of machine intelligence. What those bots "learn" will be the right basis for a common-sense database, not the input of some pimply teenagers writing about Britney.
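As a toy illustration of where such a bot might start, here's a sketch of my own (Python; the URL, the user-agent string, and the naive "X is a Y" pattern are all my assumptions, nowhere near real semantic parsing):

<ecode>
import re
import urllib.request
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    """Collect a page's text nodes, throwing the markup away."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

# Naive "Subject is a kind-of-thing" pattern; a stand-in for real parsing.
FACT = re.compile(
    r"\b([A-Z][a-z]+(?: [A-Z][a-z]+)*) (?:is|was) (?:a|an) "
    r"([a-z]+(?: [a-z]+){0,2})\b")

def facts_from(url):
    # The user-agent string is made up; some sites reject the default one.
    req = urllib.request.Request(url, headers={"User-Agent": "fact-bot-sketch/0.1"})
    raw = urllib.request.urlopen(req).read().decode("utf-8", "replace")
    parser = TextOnly()
    parser.feed(raw)
    return FACT.findall(" ".join(parser.chunks))

if __name__ == "__main__":
    # Any encyclopedia-style page will do; this URL is just an example.
    for subject, kind in facts_from("https://en.wikipedia.org/wiki/Dog")[:10]:
        print(subject, "->", kind)
</ecode>

Crude as it is, a crawler running a pattern like that around the clock would pile up candidate "facts" far faster than volunteers typing them in; the hard part, of course, is the disambiguation, negation, and context that real semantic parsing would have to handle.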

Suicide and LSD (5, Insightful)

Anonymous Coward | more than 6 years ago | (#22118470)

As someone who has attempted suicide, I think I might have a unique perspective on the matter. The reasons vary widely from person to person, and I won't discount the possibility that maybe sometimes it's a justifiable act, but for most people it's not the only solution. It's just usually one of the easiest ones. I can only speak about my own experiences, but after struggling with a lot of hard problems, many of them things that no one should ever be subjected to, something uncomplicated and easy looked increasingly like a good idea. You're getting beaten up from all sides of your life, and some people break, some sooner than others. I know what it's like to have something you worked so long for yanked out from under you. What are you to do after that happens? You had one thing in life that you could do and now it's gone.

When you reach that kind of despair it's hard to find your way back to the world. How many great minds and potential contributors to science, art, and human culture are lost to suicide before their potential is reached? The deaths of these two scientists were certainly a waste, and there's almost always something that could have been done. It is in society's best interest to help these people any way we can.

What saved me was that, sometime after my attempted suicide, I tried the drug LSD for the first time. I've never been the same since that day, for the better I mean. I came to understand things about the nature of consciousness, and how the soul and experiences of all things are connected on such a basic level. Up until that point I felt alone and isolated, physically and emotionally, but I saw and felt how that just is not true at all. The feelings of fear and anger and hopelessness were gone. I now use LSD about 5 or 6 times a year, and all have been wonderful experiences so far. It is a crime against humanity that this drug is illegal. It should be given to anyone (in a safe environment and under supervision) who is suicidal. In fact, it should be given to anyone who wants it. It literally saved me. I would likely be dead if I had not experienced that permanent personality-changing event. This drug is not addictive. It is not deadly in moderation. It is not corrosive to the fabric of civilization. It is, however, a threat to the established authorities that want us to remain numb to each other and scared. If everyone could experience it once, we could all feel that universal connection, and so many people who think suicide is their only way to escape would have no reason to feel alone or worthless or end their own lives.

I'm sorry that this got so off course (mod it as such if you will), but the topic of suicide is so important to me now, and I want people to have the same chance that I had.

I thank Albert Hofmann for my life and my enlightenment, and for giving this gift to all humanity. Perhaps one day we will be more inclined to accept it.

"I think that in human evolution it has never been as necessary to have this substance LSD. It is just a tool to turn us into what we are supposed to be." -Albert Hofmann

You know ... (1)

Sepiraph (1162995) | more than 6 years ago | (#22118562)

... Both men's ideas are not that far off: in a way the ENTIRE internet is the database they wanted to create, and search engines like Google are the "A.I." that can make some sense out of it. In many, many ways, the amount of intelligence achieved by the internet is already astonishing. It contains more information than generations of humans, has almost instant recall, and is constantly evolving... You might still scoff at this notion, since you might not think of the internet plus a search engine as 'intelligent', but can you imagine the response if you had asked anyone 20 years ago whether something like this could even exist? And can you imagine what another 20 years can bring? This is only the beginning. In fact, I have read that one of the goals of the Google founders is to one day be able to read minds across the internet... And it might not be that far off if we can solve the problem of the neuron-machine interface, which is a complex but ultimately analog-to-digital problem. Uploading a persona may still be far, far off, but simple mind control for motion is already here NOW.

What happens now? (1)

kitgerrits (1034262) | more than 6 years ago | (#22118584)


Will they fight for humanity's sake -inside the AI they created?
(Tron)

Will the phones all over the US spontaneously start ringing?
(Lawnmower man)

Will they hide their existence and slowly 'mold' people into avatars of their personal interests?
(Neuromancer)

Will they simply disappear into the vast infinity of the net, observing, researching and simply being content with their existence?
(Ghost in the Shell)

Having left their corporeal form, will they 'mount' themselves inside a machine and travel the universe?
(2001)

I, for one, am interested in what the future has in store for us.
Hail our future robotic overlords!
(Appleseed/Matrix/BS Galactica/Terminator/AI/I, Robot)

Oh, and save me a Cherry 2000 ;)

I think I had Push's old NeXT (2, Interesting)

bit0mike (946907) | more than 6 years ago | (#22118682)

I learned most of my *nix skills on a NeXTstation I bought from someone named Pushpinder Singh in 1993. If I remember right he was at MIT, so I think it was the same guy. That's... really weird.

Soon, I'll be making a 3d database. (1)

CrazyJim1 (809850) | more than 6 years ago | (#22118694)

Video Trace, as discussed on Slashdot earlier, is what I've been waiting for since 2002 to make AI. FOSS AI [fossai.com]