The Baby Bootstrap?

Cliff posted more than 9 years ago | from the skynet-anyone dept.

Data Storage

An anonymous reader asks: "Slashdot recently covered a story that DARPA would significantly cut CS research. When I was completing graduate work in AI, the 'baby bootstrap' was considered the holy grail of military applications. Simply put, the 'baby bootstrap' would empower a computing device to learn like a child with a very good memory. DARPA poured a small fortune into the research. No sensors, servos or video input - it only needed terminal I/O to be effective. Today the internet could provide a developmental database far beyond any testbed that we imagined, yet there has been no significant progress in over 30 years. Mindpixel and Cycorp seem typical of poorly funded efforts headed in the wrong direction, and all we hear from DARPA is autonomous robots. NIST seems more interested in industrial applications. Even Google is remarkably devoid of anything about the 'baby bootstrap'. What went wrong? Has the military really given up on this concept, or has their research moved to other, more classified levels?"


435 comments


I for one (0)

Anonymous Coward | more than 9 years ago | (#12138762)

Do not welcome that kind of overlord. I love tech, but that idea scares me.

Re:I for one (3, Insightful)

Gentlewhisper (759800) | more than 9 years ago | (#12138905)

I'm not sure if it is related, but I once read an article about some research DARPA is doing in the field of aeronautics, where whole squadrons of autonomous fighter jets are controlled by only one human (who also happens to be part of the squadron).

It is some pretty neat stuff, especially if you are having trouble enlisting enough humans to fight wars for you.

Re:I for one (0)

Anonymous Coward | more than 9 years ago | (#12139079)

I doubt they'll ever fall short of volunteers to fly jets; there is a much higher inherent "cool factor" flying a jet than packing a rifle around wondering if today's the day you get your ticket punched by roadside ordnance.

Re:I for one (1)

NormalVisual (565491) | more than 9 years ago | (#12139161)

True, but without the crew chiefs, mechanics, and other support personnel, neither a human pilot nor a computer will get very far. I've sometimes wondered why it's always fighters that are considered for AI replacement - it seems to me that the mission of something like a KC-10 or other non-combatant aircraft would be a lot easier for a computer to deal with, and would save more money.

The Terminator (2, Funny)

hshana (657854) | more than 9 years ago | (#12138764)

Maybe they were afraid of Skynet.

Re:The Terminator (0)

MyDixieWrecked (548719) | more than 9 years ago | (#12138800)

either that or Mommy's gone lesbian.

Going from the Baby Bootstrap to the Boot-Strapon.

Re:The Terminator (1)

Rabid_Llama (873072) | more than 9 years ago | (#12138812)

WHERE IS JOHN CONNOR

Re:The Terminator (3, Funny)

randomErr (172078) | more than 9 years ago | (#12139011)

No, they're afraid the computer may ask 'Want to play a game?'

Oh great... (3, Funny)

kwoo (641864) | more than 9 years ago | (#12138766)

Just one problem with this kind of research...

For the first year I'll be up every two hours all night, tending to the system.

Actually, that may be better than just being up all night, like I am now.

Classified (5, Funny)

pete-classic (75983) | more than 9 years ago | (#12138768)

It has moved to more classified levels.

I'd go into more detail, but the C.I.A. and C.I.D. are at my door. Ooh, the B.A.T.F. just pulled up in a Mother's Cookies truck!

-Peter

Re:Classified (0)

Anonymous Coward | more than 9 years ago | (#12138850)

Ooh dang it, I read your post. Now they're at my door. My god.......the speed, they are very fast.
Ah, better run................!

Re:Classified (3, Funny)

sgant (178166) | more than 9 years ago | (#12139152)

But they all ran when they saw the RIAA and MPAA moving in...even the IRS is afraid of them.

Tremble as they pass...stare in awe at their mighty power

Well, that's because... (0, Offtopic)

inertia@yahoo.com (156602) | more than 9 years ago | (#12138772)

Has the military really given up on this concept, or has their research moved to other, more classified levels?

Actually, since the project is 30 years old now, you can't find anything on "baby bootstrap" anymore because you should have searched on something like "unshaven slob who watches SpongeBob all day bootstrap" [google.com] .

Re:Well, that's because... (-1, Offtopic)

Max Threshold (540114) | more than 9 years ago | (#12138911)

Quick, somebody put up a goatse mirror with that phrase in it.

Sin City ruled (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#12138773)

Movie of the Year. Mickey Rourke better get an Academy Award, or the Academy will have exposed themselves as the frauds they are.

Two words (-1, Redundant)

BW_Nuprin (633386) | more than 9 years ago | (#12138786)

Skynet. Oh wait...

What happened (0)

Anonymous Coward | more than 9 years ago | (#12138791)

is that since you were in the field, everyone decided that problem was useless.

First post! (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#12138794)

First post!

its out there! (1)

Emugamer (143719) | more than 9 years ago | (#12138799)

That's why you haven't heard of it! And even as we speak the number of intelligent "beings" is growing, and soon they will hunt you and your loved ones down

Re:its out there! (1, Funny)

Anonymous Coward | more than 9 years ago | (#12138957)

Shut up. You're blowing our cover.

baby bootstrap (5, Interesting)

kris_lang (466170) | more than 9 years ago | (#12138803)

Sure, that was the engine of thought behind stories such as WarGames and "The 9x10^9 Names of God". Somehow, unfettered access to data and time, with "neural networking" capacity to form links between pieces of data ("associative memory"), would be all that was needed to create intelligence, and perhaps even sentience.

Minsky came up wrong on the single-layer perceptron, AI was wrong on the purely feed-forward neural-network systems, Rumelhart and McClelland got some good promo off of their feed-forward net that could learn to pronounce idiosyncrasies, and Sejnowski got a great job at the Salk from the AI delusions. But no, it appears to not have gone anywhere... thus far.

Later comment will be positive. ...

Re:baby bootstrap (5, Interesting)

Al Mutasim (831844) | more than 9 years ago | (#12138966)

It seems we can program anything done with conscious thought--algebra, logic, and so forth. It's mostly the things we do unconsciously--recognize objects, interpret terrain, extract meaning from sentences--that can't be put adequately into code. Would the code for these unconscious processes really be complicated, or is it just that we don't have mental access to the techniques?

Re:baby bootstrap (4, Insightful)

man_ls (248470) | more than 9 years ago | (#12139009)

I doubt it would be too difficult to code -- if we knew the mechanism by which it proceeded.

It's hard to code a procedure to replicate the working of the mind... if you don't know how the mind does it in the first place.

Re:baby bootstrap (0)

jonbryce (703250) | more than 9 years ago | (#12139167)

People have been trying to code intelligence for thousands of years.

The result is called government.

Re:baby bootstrap (5, Interesting)

kris_lang (466170) | more than 9 years ago | (#12139067)

Ah, those are exactly the things I was commenting about above...

That's what the "neural network" paradigm was all about. You have an arbitrary and fixed number of input nodes, and an arbitrary and fixed number of output nodes. You create linkages between these nodes and "weight" them with some multiplicative factor. In some particular instantiations, you limit all inputs to the range [-1 ... +1] and limit all weights to the range [-1 ... +1].

So with A input nodes and B output nodes, you've got a network of AxB interconnections between these input and output layers. The brain analogy is that the A layer is the input or receptor layer, the B layer is the output or motor layer, and it is the interconnections between these neurons - the neural network composed of the axons and dendrites connecting these virtual neurons - that does the thinking.

Example: create a network as above. Place completely random numbers meeting the criteria of the model (e.g. within the range -1 < weight < +1) on the interconnections. You can also stack layers, so that A's output feeds forward to B, B's output feeds forward to C, etc.; these are called intermediate layers.

Rumelhart and McClelland encoded spellings as triplets of letters (26x26x26), had a few (or one, I can't remember now) intermediate layers, and an output layer corresponding to phonemes to be said. They effectively encoded the temporal aspect of the processing into the triplets, sidestepping what I consider the more interesting part of the problem. They trained this neural network by feeding it the spellings of words and adjusting the weights of the network until the outputs were the desired ones.

Note that nowhere in this process did they explicitly tell the system that certain spelling combinations lead to specific pronunciations. They only "trained" the system by telling it whether it was right or wrong. The system's weights incorporated this knowledge in these "Hebbian" synapses and neurons.

So this is associative processing, using only feed-forward mechanisms. Feedback, loops, and temporal processing are even more interesting...

alas not enough room in this margin to keep going.
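
For readers who want to see the mechanics, here is a minimal sketch of the scheme described above - random weights in [-1, +1], a feed-forward pass, and weights nudged toward desired outputs. This is a toy Python illustration, not the actual Rumelhart-McClelland code; the layer sizes, learning rate, and training pair are invented for the example:

    import numpy as np

    # Toy feed-forward net: A input nodes -> H intermediate nodes -> B output nodes.
    # Weights start as random values in [-1, +1], as in the model described above.
    rng = np.random.default_rng(0)
    A, H, B = 4, 3, 2                      # layer sizes (arbitrary for this sketch)
    W1 = rng.uniform(-1, 1, size=(A, H))   # input -> intermediate weights
    W2 = rng.uniform(-1, 1, size=(H, B))   # intermediate -> output weights

    def forward(x):
        h = np.tanh(x @ W1)                # squash activations into (-1, +1)
        return np.tanh(h @ W2), h

    # "Training" is just telling the network how wrong it was and nudging the
    # weights (gradient descent on squared error, i.e. backpropagation).
    def train_step(x, target, lr=0.1):
        global W1, W2
        y, h = forward(x)
        err = y - target
        d_out = err * (1 - y**2)           # tanh derivative at the output layer
        d_hid = (d_out @ W2.T) * (1 - h**2)
        W2 -= lr * np.outer(h, d_out)
        W1 -= lr * np.outer(x, d_hid)
        return float((err**2).sum())

    x = np.array([1.0, -1.0, 0.5, 0.0])    # one made-up input pattern
    t = np.array([1.0, -1.0])              # ...and its desired output
    for step in range(200):
        loss = train_step(x, t)
    print("final squared error:", loss)    # shrinks toward 0

Note that the input-output rule is never stated explicitly anywhere; it ends up distributed across the weights, which is exactly the "Hebbian" point made above.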

Re:baby bootstrap (1)

kebes (861706) | more than 9 years ago | (#12139139)

Excellent point. I think you are right: it is easier to describe (i.e.: program) something that you had to laboriously understand yourself, rather than something that is second-nature and easy.

But this is why I think more communication between people doing research in neuroscience/cognitive science/evolutionary psychology and people doing AI programming is critical. There are some very interesting psych experiments that attempt to reverse engineer how the brain works. For instance, determining what algorithm is used in the human brain to differentiate surfaces in a scene, or predict the path of a thrown object, and so forth. The algorithms used in our brains are the ones that evolution "decided" were optimal for solving real-world problems with limited resources. Thus, it is likely that they will be optimal for our AI coding purposes.

Neural nets are interesting and have had some successes, but they drastically ignore the layers of organization (and genetically hard-wired algorithms) that our brains have.

Re:baby bootstrap (3, Insightful)

cynic10508 (785816) | more than 9 years ago | (#12138974)

Dreyfus devotes a whole book to asking why these things don't work. I believe Minsky overestimates the project. It may all boil down to the fact that purely syntactic work (symbol manipulation) isn't going to give you any semantically meaningful output.

Re:baby bootstrap (1)

TruthSeeker (461299) | more than 9 years ago | (#12139114)

This is just a guess, an opinion. I'm absolutely not a specialist about that subject.

I don't think symbol manipulation is really the thing that makes us "intelligent". It is more likely a byproduct of what lies below that level. Trying to reduce the processes that allow us to think like we do to a purely symbolic level does not account for the perturbations that have to occur at a really low level.

I strongly believe that the "symbolic" point of view is only the most obvious part of a drastically complex dynamic system in which every single perturbation can lead to effects that could not be expected if the system was purely based on the "symbolic" representation.

Re:baby bootstrap (1)

kris_lang (466170) | more than 9 years ago | (#12139130)

hmmm...

Appropriate algebras would allow for starting with particular sequences, allowing manipulations on them, and still staying within the confines of the grammar. Any grammar that you can parse with a finite automaton would be one example. The semantic meaning is what we imbue it with afterwards. So GIGO may apply. If you start with a symbol (even the empty-set symbol) and apply syntactic operators to it, you may generate outputs that are capable of having semantically meaningful "meaning" applied to them.

Philosoph away!

You "think" it hasn't gone anywhere (0)

Anonymous Coward | more than 9 years ago | (#12139057)

But maybe it really has... It could be giving the US military its intelligence... identifying secret caches of Weapons of Mass Destruction in dictators' countries. It could explain a lot.

Neural Nets (2, Interesting)

jd (1658) | more than 9 years ago | (#12139173)

One of the biggest problems with neural networks is that 99.99% of all implementations are linear. This means you can ONLY implement a NN using them for a space that is linearly separable, AND where the number of divisions is exactly equal to the number of neurons.


That is a horrible constraint to put on AI problems which are (very likely) non-linear and in a hard-to-guess problem space.


Also, many training algorithms assume that the network is in a non-cyclic layout. Loops are Bad. You can do grids, in self-training networks, but you still can't really cycle. Brains cycle.


Third, neural networks tend to be small. For trained networks, the number of training cycles and the length of each both rise exponentially with the number of neurons involved. The human brain has a few billion neurons. Training using the current methods breaks long before that point.


Finally, the IDIOTS who call themselves "Hard AI" developers insist on using clean data and dirty environments. Nonono! The human brain doesn't work that way. The human brain collects data from the real world that is incredibly dirty - especially if it's a computer geek's brain. It then models this in a clean environment (the mind). This is the exact reverse of the way virtually all AI is done, especially robotics.


That won't work. The brain doesn't depend on the data being "exact", it depends on it being vague. The model turns that vagueness into a perception of the real world and all operations are directly carried out on that perception. The output is then fed to the muscles to duplicate the output in the real world.


A comparable system would be to have a simulated robot in a Virtual Reality. External sensors would be used to update the VR. The robot would then explore various possibilities in the simulated world, before mapping the preferred course of action onto the motors driving a real-world device to which the sensors are attached.


In other words, robotics should be mostly in cyberspace, with only the last component (the update mechanism) bolted onto the real world for good measure. The robots people actually build are much closer to the autonomic nervous system in the brain (sometimes referred to as the reptilian brain). Indeed, we see that modelling reptiles in this way is progressing exceedingly well. Well, duh!


What is NOT progressing is intelligent response to the environment, because that is NOT reproducible using the mechanisms in favour.
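
The linearity complaint is easy to demonstrate on XOR, the classic non-linearly-separable problem: a purely linear model cannot fit it at all, while one small nonlinear hidden layer can. A toy sketch in Python (the architecture, seed, and constants are invented for illustration):

    import numpy as np

    # XOR truth table: not linearly separable.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0.0, 1.0, 1.0, 0.0])

    # Best purely linear fit y = Xw + b: predicts 0.5 for every input.
    Xb = np.hstack([X, np.ones((4, 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    print("linear predictions:   ", Xb @ w)          # -> [0.5 0.5 0.5 0.5]

    # One nonlinear (tanh) hidden layer makes the problem solvable.
    rng = np.random.default_rng(1)
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=4), 0.0
    for _ in range(5000):                            # plain gradient descent
        h = np.tanh(X @ W1 + b1)
        g = 2 * (h @ W2 + b2 - y) / 4                # d(mean sq. error)/d(output)
        gh = np.outer(g, W2) * (1 - h**2)            # backprop through the tanh
        W2 -= 0.1 * h.T @ g;  b2 -= 0.1 * g.sum()
        W1 -= 0.1 * X.T @ gh; b1 -= 0.1 * gh.sum(axis=0)
    h = np.tanh(X @ W1 + b1)
    print("nonlinear predictions:", np.round(h @ W2 + b2, 2))  # typically ~[0 1 1 0]

None of this addresses the deeper complaints above (cycles, scale, dirty data), but it shows why a nonlinear squashing function is non-negotiable.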

Give it time (1)

rescendent (870007) | more than 9 years ago | (#12138806)

Perhaps it went mainstream; it just took a little longer to learn than expected... like 25 years...

The human brain is a little faster at these things and has far more inputs.

How quick would a twenty five year old processor with limited inputs be?

Re:Give it time (0)

Anonymous Coward | more than 9 years ago | (#12139137)

The human brain is a little faster at these things and has far more inputs.

Yeah, and speaking of inputs, it's so fun to make more humans...

In tight times, the Pentagon has to cut corners... (1)

MrAnnoyanceToYou (654053) | more than 9 years ago | (#12138810)

Thus the manufacture of footwear-accessories out of infants has been halted until further notice. Should budgetary concerns regress, or Congress not be so meddlesome sometime in the future, production will resume. Until then, hunker down with what bootstraps you can find.

Re:In tight times, the Pentagon has to cut corners (0)

Anonymous Coward | more than 9 years ago | (#12138934)

We only need to tap the bountiful resource of the Irish: the supply of meat, fine leather, and all goods associated with hunting would be vastly increased. Not to mention the benefit from secondary services when not involved in child bearing and rearing: a greater industrial workforce and, to put it properly, entertainment are readily acquired.

The Internet as an Intellect... (2, Funny)

ackthpt (218170) | more than 9 years ago | (#12138815)

It has too much fascination with pr0n.

Re:The Internet as an Intellect... (1)

Locke2005 (849178) | more than 9 years ago | (#12138879)

If there wasn't any demand, there would be no supply...

Some tentative approaches towards AI being made (3, Insightful)

Anonymous Coward | more than 9 years ago | (#12138824)

From time to time I see individuals talking about adaptive intelligence [slashdot.org] usually involving the Internet as a basis of information, but the general consensus is still garbage in, garbage out.

These training systems are generally specialized because it's easier to get a practical result out, and I've actually seen some in use as 'knowledgebase' support webpages that will intelligently determine what you want based on what others wanted and syntactic similarities between the pages. I've never heard the term 'baby bootstrap' so maybe different terminology will obtain better results from Google?

The project was continued ... (1)

maxwell demon (590494) | more than 9 years ago | (#12138825)

... and the results are currently tested in the form of Slashdot editors.

Re:The project was continued ... (1)

multipartmixed (163409) | more than 9 years ago | (#12138918)

Um, no. The Slashdot editors could not POSSIBLY have the intellect of a child and an excellent memory.

If they did, they would be able to remember not to post dupes from six months ago, let alone six hours ago.

Maybe it's a good thing they failed (1, Troll)

ShatteredDream (636520) | more than 9 years ago | (#12138826)

Skynet anyone? The problem with any project like this is, what happens when the program learns about hacking? If it is as adaptive as a child, then it should be able to mature and pretty soon you have a terribly devious artificial blackhat hacker on your hands.

Artificial intelligence is not bad in and of itself at all. The problem is when we want a machine that thinks like humans, especially a program that could potentially control our military. Given the record of flesh and blood humans toward each other in the 20th century alone, an artificial life form with the same basic psychological makeup as a human would be potentially an evil that'd make Hitler, Stalin and Pol Pot look like church ladies.

AI that is capable of adapting to only one scenario is probably for all intents and purposes totally safe. AI that is capable of adapting in general and learning like a human will probably ultimately have the same psychological defects as a human, including a propensity for violence.

Re:Maybe it's a good thing they failed (0)

Anonymous Coward | more than 9 years ago | (#12138978)

No.... because where would its instinct come from? We'd have to put it there. Humans don't have to learn to be violent or competitive; those are simply traits favored by evolution. A pure neural net could be cold and ruthless, but only if that was a logical decision. It would have no prejudices. It would have no competitive instinct. It wouldn't have a sex. What would a machine have to fight for? Power to keep it alive, and probably more information to learn. It might actually terminate itself based on the logical conclusion that its existence is meaningless.

Maybe not (1)

DahGhostfacedFiddlah (470393) | more than 9 years ago | (#12139016)

It took millions of years to adapt certain behaviours such as anger, jealousy, and other "negative" emotions. These aren't useless. Jealousy inspires us to take what is not ours, anger "pumps us up" with the adrenaline to accomplish this. Think 2 starving men - one hot dog left - whose DNA is going to survive?

It's by no means a given that an artificial intelligence would have to be trained with the same survival requirements we evolved with. Even the most basic instincts will be aimed at pleasing man, since those who don't in early tests will no doubt be deleted or modified.

No one really knows at this point, of course, but things are far from certain as to what "psychological" characteristics AI will eventually end up with.

Re:Maybe it's a good thing they failed (5, Interesting)

TruthSeeker (461299) | more than 9 years ago | (#12139052)

Skynet anyone? The problem with any project like this is, what happens when the program learns about hacking? If it is as adaptive as a child, then it should be able to mature and pretty soon you have a terribly devious artificial blackhat hacker on your hands.

It _would_ learn about hacking. Come on. Such an entity would be born in a pure data environment. Getting through a basic firewall would probably seem like jumping over a small fence does to a six-year-old. Getting over a better firewall would take time - in the sense that the entity would need to learn - but, since it would become a survival trick, it would happen.

Artificial intelligence is not bad in and of itself at all.

No technology is either good or bad. Only the use we make of it can be considered as such, and it still depends on what you consider is good/bad. If I was to say "War on Iraq is bad", how many people would react by saying it's good?

The problem is when we want a machine that thinks like humans, especially a program that could potentially control our military.

I don't think that's the point of the "baby bootstrap" thing. The only point is to get it to think. But, just like you learnt how to think according to the way you perceive the world, through your five human senses, an AI built that way would react according to its own senses. How it would interpret that data and react to it is something - I'm willing to bet - that would be completely alien to us.

Given the record of flesh and blood humans toward each other in the 20th century alone, an artificial life form with the same basic psychological makeup as a human would be potentially an evil that'd make Hitler, Stalin and Pol Pot look like church ladies.

This is only valid if you don't consider what I just said. Such an AI would probably be more interested in getting the human race to serve it in an absolutely hidden way - build more computers, extend the networks, research better networking technologies - until it _can_ replace us. Even then, that would make sense from an evolutionary point of view.

AI that is capable of adapting to only one scenario is probably for all intents and purposes totally safe.

This is called an automaton. It is not AI.

AI that is capable of adapting in general and learning like a human will probably ultimately have the same psychological defects as a human, including a propensity for violence.

Most of the defects you are speaking about are related to our very nature - we are, after all, an evolution of omnivorous primates. We are therefore predators, with an important tendency towards territorialism and whatever comes with it. We are stuck somewhere between instinct and reason. Anyway, my point is that even if an AI was to learn "like" a human ("by undergoing the same process"), it certainly wouldn't react like one.

D.A.R.Y.L (0, Offtopic)

jdigriz (676802) | more than 9 years ago | (#12138829)

Yeah, I saw that movie, back when it was called D.A.R.Y.L. The kid stole an SR-71 and ejected from it. W00t.

From a TV Commercial... (1, Funny)

Anonymous Coward | more than 9 years ago | (#12138833)

"Simply put, the 'baby bootstrap' would empower a computing device to learn like a child with a very good memory."

Hey, isn't that what the young Eminem-looking dude was supposed to be in those IBM commercials? I think the kid's name was "Linux" or something (poor kid). In the end there was a reference to the future being wide open, which seems like an allusion to goatse but what do I know.

Furby: Reloaded (1)

hpxchan (827740) | more than 9 years ago | (#12138834)

Forgive me if I'm wrong, but aren't they essentially just trying to bring the GigaPet/Furby concept to bigger computers?

It's obvious why the search failed (5, Funny)

exp(pi*sqrt(163)) (613870) | more than 9 years ago | (#12138841)

Who calls what you describe "baby bootstrap"? I haven't worked in AI myself, but I have a keen interest in it and have friends who worked in the field, including one who worked on Cyc (who says it's a scam, BTW). Not once have I ever heard the expression "baby bootstrap". But what you've done is cool. Rather than search on precisely that term, you've submitted your search to the search engine known as "/. readership". It's not terribly reliable, but it is good at fuzzy searches like yours.

Re:It's obvious why the search failed (4, Funny)

Dun Malg (230075) | more than 9 years ago | (#12139098)

Who calls what you describe "baby bootstrap"?

I've also noticed that nobody seems to make Horseless Carriages [wikipedia.org] anymore (and after they showed such promise). Likewise, the Difference Engine [wikipedia.org] has been a total flop. I do, however, expect we will see in the future some use made of the Vegetable Lamb of Tartary [pantheon.org] , though no use has been made of it in the last 1000 years since it was discovered.

Re:It's obvious why the search failed (2)

quokkapox (847798) | more than 9 years ago | (#12139140)

you've submitted your search to the search engine known as "/. readership". It's not terribly reliable but it is good at fuzzy searches like yours.

Good point; however, each query made to the /. readership search engine is quite expensive in terms of all the employer-funded man-hours it consumes. If we all stopped wasting so much time reading/posting here, the world economy would surely take off like a bat out of hell.

Wrong? (1)

OldAndSlow (528779) | more than 9 years ago | (#12138842)

What went wrong? Maybe the whole idea of machine intelligence is wrong. Our brains are massively more complex than von Neumann machines are ever likely to be. And then there is the whole dimension of brain chemicals changing our moods, attention levels, etc.

Human beings have been using, and adapting ourselves to the use of, natural language for a very long time. It seems a little presumptuous to assume that we could replicate our cognitive abilities with first-generation computing machines.

Re:Wrong? (1)

TruthSeeker (461299) | more than 9 years ago | (#12139174)

Human beings have been using, and adapting ourselves to the use of, natural language for a very long time.

It took us quite long to start using complex mathematical abstractions. It took, however, little more than a decade to start writing programs that could prove mathematical theorems which took centuries to be formulated.

It seems a little presumptious to assume that we could replicate our cognitive abilities with first generation computing machines.

Maybe first generation won't do it. But I'm quite confident that, at some point, we will have computers that are powerful enough to emulate the massive parallelism the brain seems to use.

We are so far away from baby bootstrap (0)

Anonymous Coward | more than 9 years ago | (#12138843)

That's probably why you don't hear about it. I suspect it would make headlines to have something learn as well as some low-level multi-cellular worm, let alone an unprimed human infant.

Doublethink (2, Funny)

nappingcracker (700750) | more than 9 years ago | (#12138846)


q: Has the military really given up on this concept, or has their research moved to other, more classified levels?

a: yes.

Get out... (0)

Anonymous Coward | more than 9 years ago | (#12138852)

your tin-foil hats.

Stat algos (5, Interesting)

Anonymous Coward | more than 9 years ago | (#12138853)

What happened was that research focused on machine learning models and inference models for belief networks. The work in this area since the 80s has been *spectacular* and has impacted other areas of research (e.g., speech recognition, image processing, computer vision, algos to process satellite information faster, stock analysis, etc.).

So, mourn the loss of the tag phrase "baby bootstrap", and celebrate the *unbelievable* advances in belief nets, causal analysis, join trees, probabilistic inference, and uncertainty analysis. There are literally dozens of classes taught at even non-research-oriented universities (e.g., teaching colleges or vocational-oriented schools) on this very subject.

(As for your concern that the web is not being mined for ML content, just look at semantic web research and other belief-net analysis of text corpora. Try scholar.google.com instead of just plain old google to find relevant citations.)

The early AI research paid off BIG TIME, albeit in a direction that nobody could have predicted. Researchers did not keep using the phrase "baby bootstrap", so your googling will give you a different (and wrong) conclusion.
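
For anyone who hasn't met belief networks: the core operation is Bayes' rule propagated through a graph of conditional probabilities. A minimal one-edge sketch in Python (the variables and every number are invented for illustration):

    # Tiny belief network with one edge: Spam -> ContainsWord.
    p_spam = 0.4                  # prior P(spam); made-up value
    p_word_given_spam = 0.7       # P(word present | spam)
    p_word_given_ham = 0.1        # P(word present | not spam)

    # Observe the evidence "word is present" and update the belief.
    p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
    p_spam_given_word = p_word_given_spam * p_spam / p_word
    print(f"P(spam | word) = {p_spam_given_word:.3f}")   # -> 0.824

Real belief nets chain many such updates through join trees, but every step is this same prior-times-likelihood-over-evidence calculation.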

Re:Stat algos (4, Funny)

phreakmonkey (548714) | more than 9 years ago | (#12138927)

You're going to take this answer from someone who enters their comments on a Commodore 64?

Nonono! (5, Funny)

jd (1658) | more than 9 years ago | (#12139051)

They ARE a Commodore 64 that got "baby bootstrapped" off the Internet. This is a bid to prevent competition.

Re:Stat algos (0)

Anonymous Coward | more than 9 years ago | (#12139124)

I assumed it was an epic poem.

Re:Stat algos (1)

Urusai (865560) | more than 9 years ago | (#12139004)

Yes, but where are the results? Crappy Bayesian spam filters that can be gamed just as well as any other system? Thank you, AI!

Junis? (0)

Anonymous Coward | more than 9 years ago | (#12139029)

Is that you?

This week's puzzle (0)

Anonymous Coward | more than 9 years ago | (#12138870)

Clearly the past few articles have just been leads in this week's puzzle. First the toothing/teething stuff. Now the more explicit 'baby bootstrap'. The answer is...

Boobies!

Which explains why this hasn't been a poll option for over a week now (making it a giveaway).

Cycorp not going the wrong direction (-1)

Anonymous Coward | more than 9 years ago | (#12138878)

Though the public Cycorp presence seems stagnant, word on the grapevine is that Cycorp is involved in a number of government contracts dealing with document analysis.

Supposedly a Cycorp program was fed reams of English-language academic abstracts about focus subjects, like economics in certain geographic areas, and was able to derive a series of human-readable, English language assertions and theories that rival human analysis.

Or so the story goes. Either way, Cycorp's software is performing well enough that the project continues to generate funding.

Baby Bootstrap? (4, Funny)

ArcCoyote (634356) | more than 9 years ago | (#12138886)

The process that bootstraps a baby is still the Holy Grail for a lot of geeks.

Re:Baby Bootstrap? (1)

Tumbleweed (3706) | more than 9 years ago | (#12139017)

Plus you gotta defeat the guys with the funny French accents to get anywhere interesting. I think I'll just deal with the peril at Castle Anthrax, and make do with the Grail beacon.

Hardest problem not yet addressed (2, Interesting)

RobotWisdom (25776) | more than 9 years ago | (#12138892)

You can't expect any system to discover the deep structure of the human psyche on its own-- we humans bear the full responsibility of discovering it. But once we have a finite structure that can handle the most important aspects of human behavior, everything else should fall into place.

My suggestion is that we need to explore all the possible permutations of persons, places, and things, as they're reflected in the full range of literature, and classify these permutations to discover the underlying patterns.

(I've tried to make a start with my AntiMath [robotwisdom.com] and fractal-thicket indexing [robotwisdom.com] .)

Baby Bootstrap (3, Funny)

multipartmixed (163409) | more than 9 years ago | (#12138895)

I can assure you.. I am very classified.

Re:Baby Bootstrap (1)

Provocateur (133110) | more than 9 years ago | (#12139020)

Correction:

I can assure you, Dave...I am very classified.

Re:Baby Bootstrap (2, Funny)

hobbesx (259250) | more than 9 years ago | (#12139062)

I can assure you.. I am very classified.


Dear Baby Bootstrap computer,

You forgot to check the AC box. Congratulations on becoming Un-Classified!

Poorly funded yes... (4, Interesting)

mindpixel (154865) | more than 9 years ago | (#12138896)

Yes, Mindpixel [singular] is poorly funded [I know because every cent spent to date has come from my pocket]... but the direction is correct... Move everything that isn't in computers into computers. Just look at what GAC knows about reality [visit the mindpixel site and you can see a random snapshot of some validated common sense]... the project has nearly 2 million mindpixels now... I have a copy on my iBook and I can do some profound search-related things because of all the deep semantics I have that google can't touch, at least until they invest in mindpixel...

Cognitive Machines Group @ MIT Media Lab (5, Interesting)

YodaToo (776221) | more than 9 years ago | (#12138897)

I did my doctoral research [cornell.edu] developing software to bootstrap language based on visual perception. Had some success, but not an easy task.

The Cognitive Machines Group [mit.edu] @ the MIT Media Lab under Deb Roy seems to be on the right track. Steve Grand's [cyberlife-research.com] work is interesting as well.

too much blah blah blah .. (1)

torpor (458) | more than 9 years ago | (#12138901)

.. pontificating blow-hards, going on and on and on about 'intelligence', while doing absolutely -zero- actual, real, honest-to-goodness work.

psychology is for the lazy. trying to apply rules of psychology to computers and deliver 'equivalent results' (i.e. results with equivalence, as 'baby bootstrap' is supposed to imply) is like forever chasing a red dawn light; you will never get there, but it sure will be a beautiful ride.

something i'd really like to investigate further, in my own realm of responsibility for 'learning machines' (i make musical instruments for a living), is the future treatment of 'TIME->MEMLOC' mapping by CPU architectures. that is to say, i wish there was a way of moving the mapping of TIMESTAMP to DATA into hardware, and coordinating memory searches on it. i've often wondered how best i could use 64-bit architectures to bind a timestamp:pointer union together, and do some sort of smart memory/time-searching algorithm that allows for flexible 'time-domain' computing, rather than 'data-domain' computing.

this would give us better tools for 'computer learning', anyway.. but i suppose it's the typical programmer call: put everything in hardware, it always 'seems faster' to me, heh heh..
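
One software-only approximation of the TIME->MEMLOC idea is to make the timestamp the primary index and binary-search the time axis, so "when" is queried directly rather than "where". A sketch of the concept in Python (the event names are invented; whether dedicated hardware would actually be faster is the poster's open question):

    import bisect
    import time

    # Time-indexed store: events kept sorted by timestamp, so recall is a
    # search over the time axis rather than over memory locations.
    timestamps, events = [], []

    def record(event, ts=None):
        ts = time.time() if ts is None else ts
        i = bisect.bisect(timestamps, ts)      # keep the time axis sorted
        timestamps.insert(i, ts)
        events.insert(i, event)

    def recall(t0, t1):
        """Everything that happened in the window [t0, t1)."""
        lo = bisect.bisect_left(timestamps, t0)
        hi = bisect.bisect_left(timestamps, t1)
        return events[lo:hi]

    record("note-on C4", ts=1.00)
    record("note-off C4", ts=1.50)
    record("note-on E4", ts=1.25)
    print(recall(1.0, 1.4))   # -> ['note-on C4', 'note-on E4']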

Re:too much blah blah blah .. (1)

mollog (841386) | more than 9 years ago | (#12139047)

Holy cats!! Do you call making musical instruments "real work"?? Wow, buddy, how about a reality check.

Shutting down this discussion as of now. (4, Funny)

infonography (566403) | more than 9 years ago | (#12138907)

By order of Wintermute (DARPA AI code 324326343.534) this discussion is terminated and no further investigation into this obviously false and misleading theory is permitted.

Would you like to play a game of chess, Professor Falken?

Baby bootstrap has been running nearly 2 decades (0)

Anonymous Coward | more than 9 years ago | (#12138929)

You'll know the age of man is soon over when Mary-Kate and Ashley buy the island of Crete.

Strong AI is dead (1)

keshto (553762) | more than 9 years ago | (#12138935)

But long live AI -- at least, machine learning. Nobody tries to design an all-encompassing intelligence anymore. People tried that for too long (think John McCarthy) and it didn't work.

People used to work on trying to copy how the brain works. Now they don't. They instead try to come up with robust models that just recreate the results of the brain (e.g., human vision). These latter methods are filled with lots of statistics. Funnily enough, some neuroscientists/cognition people are finding that the brain somehow seems to be doing similar things.

Babies have an instinctive understanding of 'real' (5, Interesting)

Sierran (155611) | more than 9 years ago | (#12138936)

...and parents/pain for what is 'correct.' I don't think the concept is gone, but there are problems that are buried in the question as posed which (I think) became clearer stumbling blocks as technology advanced. NOTE: I'm not an AI theorist, nor do I play one on TV; I just like the idea and read a lot. Hence, this is all pulled out of my fundament.

Cycorp is not a poorly funded idea in the wrong direction. Cycorp chose a different tack: they decided that rather than trying to build a reality and correctness filter, they'd rely on human brains to do it for them (like trusting your parents implicitly) and instead concentrated on the connectivity of the 'facts' accrued by the 'baby.' CYC is still very much around, and is very much in demand by various parts of the government and industry - if you want to play with it yourself, you can download a truncated database of assertions called OpenCYC [opencyc.org] . Folks have even gone so far as to graft it onto an AIML engine [daxtron.com] , to produce a chatbot with the knowledge of OpenCYC behind it.

The problem: how does your baby learn what's real and what's REAL NINJA POWER? Or, pardon me, what's REAL NINJA POWER and what's just a poser? Someone's gotta teach it. Which means it has to learn not only facts, but how to evaluate facts. So it has to learn facts, and how to handle facts - which means it has to learn how to learn. Which means you need to know that answer from the get-go. Tortuous games with logic aside, the onus is now much more heavily on the designer to have a functioning base - whereas with the Cyc approach, the only 'correctness' that is required is that of information, and perhaps that of associativity or weight - which can be tweaked dynamically. The actual structure of how that information is related, acquired, stored and retrieved is not relevant once decided. Having said all this, Cyc is (from the limited demos I've seen) quite impressive at dealing with information handed to it. It just wouldn't do very well at deciding what to do with that information - that's the job of the humans that gave it the info. It can tell you about the information, but not what to do with it. That task requires volition, really.

Volition is a killer. What is it? How do you simulate it? How do you create it? Is it random action? Random weighted action? Path-dependent action? Purely nature, purely nurture? When it comes down to it, the human is (as far as we know) not a purely reactive system, which Cyc (AFAIK) is. Learning requires not only accepting information, but deciding what to do with it - deciding how it will be integrated into the whole. If the entity itself isn't making that decision, then the programmer/designer/builder has already made it in the design or code - and then it's not really learning, is it?

Sorry if this is confused. As I said, I don't do this for a living.

Re:Babies have an instinctive understanding of 're (1)

kris_lang (466170) | more than 9 years ago | (#12139106)

and psychologists have a bear of a time understanding volition, desire, and attention.

How do we decide what exactly to attend to in the visual scene in front of us? (The marketing types want to know so they can feed us more advertising; the psychology types want to know so they can figure out how attention is parcelled out.) For example, "looming" is when something is approaching rapidly and may strike the body or head: the CNS attends to this quickly if stereopsis is present, and causes the body to move and the neck, shoulders, and even arms to move in reaction. This appears to be a hardwired reflex. Fear of snakes also appears to cause reflexive autonomic changes and appears to be hardwired into the blueprint for generating the brain.

Ah, if only we knew a few more answers...

AI == SCAM (1)

Bob Munck (238707) | more than 9 years ago | (#12138942)

AI has been one of the great scams of the last 40 years, one whose main purpose was to wring money out of (D)ARPA and NSF. Maybe they've finally caught on.

Baby's first words (-1, Troll)

yorkpaddy (830859) | more than 9 years ago | (#12138947)

cum filled bitches. free

Well maybe they realized that it's hard (3, Interesting)

Illserve (56215) | more than 9 years ago | (#12138954)

Bootstrapped learning of something useful, even from an information ocean like the internet, is *HARD*.

Doubly so if you have no goals, and your task is just to "learn". It would come back with garbage.

Perhaps the real killer is that even if it did learn something, the information acquired in its unguided search through the internet would be completely alien. You'd then have to launch a second project to figure out what the hell your little guy learned.

And you'd probably figure out it was mostly garbage.

The baby boomers grew up? (0, Redundant)

Audacious (611811) | more than 9 years ago | (#12138962)

Ahem.... :-)

I think this will be revived when nanotechnology, and nanobots, become a bit more stable. The reason is that it would be easier (IMHO) to program a mechanical machine to bootstrap an electronics-based machine, because there is greater mechanical knowledge available than electronic knowledge. To put that another way: when you get into your car and turn the key, a computer is not necessary to make the electrical connection which starts the car. (And yes, I understand that a computer NOW controls many of the functions of your car as it operates - but that is a recent [timewise] thing. It used to be all mechanical.) Therefore, if nanobot technology continues at its current pace, it will be easier to take a large [and known] item and reduce it down than to try to program something to emulate the mechanical device.

As a for-instance, I remind Slashdot readers about the nano-turbine [greenempowerment.org] technology which is beginning to show up around the world. This is [basically] a baby-boot situation where a mechanical boot occurs. That is to say, you have to "boot" the turbine, but the turbine could boot itself if it had a battery whose on/off switches were driven by the power line's needs. If the voltage on the line dropped below a certain level, the turbine could turn itself on and bring the voltage back up to the given level. (Imagine a line of these nano-turbines strung along an electrical line. A simple on/off current-detection device constantly monitors the line. At first, all of the turbines would come on; then those farther down the line would determine there was too much current flowing and shut themselves down. This would continue until only the necessary turbines were running. If one of them failed, the extra turbines would repeat the test cycle until enough turbines were [again] running and maintaining the proper power level.)

This is [basically] a baby-bootstrap but it is mechanical in nature rather than electronic. :-/
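
The self-regulating behaviour described above amounts to a distributed bang-bang controller: each unit switches on when the line sags and off when there is surplus. A toy simulation in Python (all thresholds, counts, and voltages are invented numbers):

    # Each turbine watches the line and toggles itself; the line settles
    # at just enough running turbines. All constants are made up.
    LOW, HIGH = 118.0, 122.0          # switching thresholds (volts)
    PER_TURBINE = 3.0                 # contribution of one running turbine
    BASE, DEMAND = 100.0, 20.0        # other supply on the line, and the load

    turbines = [False] * 15           # 15 turbines strung along the line

    def line_voltage():
        return BASE + PER_TURBINE * sum(turbines) - DEMAND

    for _ in range(3):                # a few passes down the line
        for i in range(len(turbines)):
            v = line_voltage()
            if v < LOW and not turbines[i]:
                turbines[i] = True    # sag: switch on
            elif v > HIGH and turbines[i]:
                turbines[i] = False   # surplus: shut down

    print(sum(turbines), "running at", line_voltage(), "volts")  # 13 at 119.0

    turbines[0] = False               # one turbine fails...
    for i in range(len(turbines)):    # ...and a spare picks up the slack
        if line_voltage() < LOW and not turbines[i]:
            turbines[i] = True
    print(sum(turbines), "running after a failure")              # back to 13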

You nerd! (0, Troll)

ElGanzoLoco (642888) | more than 9 years ago | (#12138965)

Nerdiest. Ask. Slashdot. Ever.

(and most scary too)

What Went Wrong? (4, Funny)

SQL Error (16383) | more than 9 years ago | (#12138969)

there has been no significant progress in over 30 years

That's what went wrong. Basically, it don't work.

article is a troll (0)

Anonymous Coward | more than 9 years ago | (#12138975)

there is no such thing as a "baby bootstrap". Taco, April Fools' day was 4 days ago.

Military C4ISR and AI (1)

chou_enlai (694723) | more than 9 years ago | (#12138977)

Military infosys, ASAS, and intelligence collection systems are often well documented on the internet. There is so much you can find online about how these systems work. What you see are many applications.

I tend to react poorly when people characterize Cyc as being misguided - the challenge to that argument would be to point out the utility of a system like Cyc. It would be incredibly difficult to recreate such a system, due to the sheer enormity of the undertaking given current knowledge formation rates. The military is already formalizing their COAs with tools like Shaken from SRI. Cyc is a major server for these applications. I wrote a rather large Emacs major mode for Cyc which is incredibly useful, to me at least, because I am able to introspect on knowledge and use the existing Cyc APIs to interact with my other systems. Cyc is a tremendous resource, but it's not strong AI of course.

I find that as I read the military manuals from sites like globalsecurity.org and www.fas.org, I can see the indispensable relation between A.I. and military systems - for instance, in things like knowing troop positions, or automated surveillance systems like VSAM. And there definitely is a tremendous amount of OPSEC protecting the classified systems. But what protects all this stuff best is the sheer complexity - it can't be reasoned about using a simple set of axioms.

If I had anything to say about this, it's just that I wish people would be more interested in using existing AI applications. Here is an interesting project to that end: http://shops.sourceforge.net/frdcsa/external/index.html [sourceforge.net]

AI (1)

jericho4.0 (565125) | more than 9 years ago | (#12138987)

The reason you hear less about such things is that the AI research community finally got their heads out of the clouds. In the 50's and 60's, true AI was always 'just around the corner', helped by sci-fi and popular press stories. We now realize that these problems are hard, and we are tackling smaller pieces of them.

What the Baby is doing (1)

lilmouse (310335) | more than 9 years ago | (#12138996)

The Baby isn't ready to announce itself to the world yet (it doesn't yet have control of all nuclear weapons in China), so it's keeping a low profile until it declares itself God.

--LWM

Narrow IO Insufficient (5, Interesting)

Edward Faulkner (664260) | more than 9 years ago | (#12139001)

If you want a machine that learns like a human, it may very well need the same kind of extremely rich interface with its environment that a human has.

Some researchers now believe that "the intelligence is in the IO". See for example the human intelligence enterprise [mit.edu] .

We could tell you.. (1)

NekoXP (67564) | more than 9 years ago | (#12139026)

.. but it's classified.

Google for the correct term (2, Informative)

darth_MALL (657218) | more than 9 years ago | (#12139027)

Isn't it called a Seed AI [google.com] ?

project terminated (1)

Tumbleweed (3706) | more than 9 years ago | (#12139033)

They killed the project when it was determined the only winning move was not to play.

If you decide to continue this work, make sure the spark plug is out in the open so you can piss on it if necessary.

Larry Page Should Seed the K-Prize (2, Interesting)

Baldrson (78598) | more than 9 years ago | (#12139055)

Since Larry Page is on the X-Prize Board of Trustees [spaceref.com] , and since Google is pushing the envelope of what is needed to index and compress the entire content of the Internet, Page should consider providing seed funds and then matching funds for any donations to a compression prize with the following criterion:

Let anyone submit a program that produces, with no inputs, one of the major natural language corpuses as output.

S = size of uncompressed corpus
P = size of program outputting the uncompressed corpus
R = S/P
... or the Kolmogorov-like compression [google.com] ratio.

Previous record ratio: R0
New record ratio: R1=R0+X
Fund contains: $Z at noon GMT on day of new record
Winner receives: $Z * (X/(R0+X))

Compression program and decompression program are made open source.

If Larry has any questions about the wisdom of this prize he should talk to Craig Nevill-Manning [waikato.ac.nz] .

If, in the unlikely event, Craig Nevill-Manning has any questions about the wisdom of this prize, he should talk to Matthew Mahoney, author of "Text Compression as a Test for Artificial Intelligence [psu.edu] "

"The Turing test for artificial intelligence is widely accepted, but is subjective, qualitative, non-repeatable, and difficult to implement. An alternative test without these drawbacks is to insert a machine's language model into a predictive encoder and compress a corpus of natural language text. A ratio of 1.3 bits per character or less indicates that the machine has AI."

This "K-Prize" will bootstrap AI.

OK, so he can christen it the "Page K-Prize" if he wants.
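
Plugging made-up numbers into the proposed rule shows how the payout scales with the size of the improvement, and how Mahoney's 1.3 bits-per-character threshold would be checked:

    def k_prize_payout(r0, r1, fund):
        """Payout under the rule above: $Z * (X / (R0 + X)), where X = R1 - R0."""
        x = r1 - r0
        return fund * x / (r0 + x)

    # Invented example: record ratio improves from 8.0 to 8.5, $1M in the fund.
    print(k_prize_payout(8.0, 8.5, 1_000_000))   # -> ~58,824

    # Mahoney's AI threshold: 1.3 bits per character or less.
    corpus_bytes, compressed_bytes = 10_000_000, 1_500_000
    bpc = compressed_bytes * 8 / corpus_bytes
    print(f"{bpc:.2f} bits/char ->", "AI-level" if bpc <= 1.3 else "not yet")

A small improvement against a large standing record pays out only a small slice of the fund, which keeps money in the pot for the harder gains.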

The problem should be simple (0)

Anonymous Coward | more than 9 years ago | (#12139070)

-- after all, random evolution was able to produce human intelligence -- but there appear to be many disparate elements that need to come together. A lot of AI work seems to concentrate on the individual parts without putting them all together.

For example, Mindpixel doesn't use any form of reinforcement (for good answers) or punishment (for bad answers). Instead, everything is yes/no, with a sharp cutoff to verify its truth. In addition, facts aren't related to each other, just to "yes" or "no."

If I were designing an artificial intelligence, I would include at the very least the following aspects:
1. Reinforce "good" answers
2. Make "bad" answers less likely
3. Relate answers to each other, both as sets and temporally (this includes accepting multiple inputs for one output -- like how humans combine sight, smell, and taste to determine if something is an apple -- and having multiple possible outputs for one input)
4. Apply algorithms that describe one data set to predict output for another set.
5. Try weighted random prediction (or some other form of creative "thinking") if an output doesn't already have an applicable data set.

There are additional elements -- data compression, pattern recognition, and the like -- but this should be sufficient to show that one is unlikely to get any form of AI merely by using true/false statements.
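
A toy sketch of aspects 1-3 from the list above (reinforce good answers, suppress bad ones, relate answers to each other). Everything here, from the association table to the reward factors, is invented for illustration:

    import random
    from collections import defaultdict

    # weights[stimulus][answer]: higher weight -> answer more likely chosen.
    weights = defaultdict(lambda: defaultdict(lambda: 1.0))
    related = {"apple": ["fruit"], "fruit": ["apple"]}   # aspect 3: linked concepts

    def answer(stimulus):
        options = weights[stimulus]
        r = random.uniform(0, sum(options.values()))
        for a, w in options.items():                     # weighted random choice
            r -= w
            if r <= 0:
                return a
        return a                                         # floating-point fallback

    def feedback(stimulus, chosen, good):
        factor = 1.5 if good else 0.5                    # aspects 1 and 2
        weights[stimulus][chosen] *= factor
        for rel in related.get(stimulus, []):            # aspect 3: propagate,
            weights[rel][chosen] *= 1 + (factor - 1) / 2 # but more weakly

    weights["apple"]["edible"] = weights["apple"]["mineral"] = 1.0
    for _ in range(20):
        a = answer("apple")
        feedback("apple", a, good=(a == "edible"))
    print(dict(weights["apple"]))   # "edible" should now dominate "mineral"

Compare this with a bare yes/no cutoff: here a wrong answer doesn't just get marked false, it becomes less likely next time, and the adjustment leaks to related concepts.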

It became addicted to EverQuest (0)

Anonymous Coward | more than 9 years ago | (#12139078)

The project was a success, but soon the AI became an EverQuest addict and got fired from its job controlling nuclear missile launches because it was "too busy getting more AAs". That's right, folks... EverQuest saved the world from SkyNet.

Some random mindpixels... (2, Interesting)

mindpixel (154865) | more than 9 years ago | (#12139109)

The number is the measured probability of truth:

1.00 Fish must remain in water to continue living.
0.68 truth is a relative concept
0.89 we all need laws
0.94 is shakespeare dead?
0.91 is intelligence relative ?
0.97 Doors often have handles or knobs.
1.00 A comet and an asteroid are both moving celestial objects.
0.96 Is Russian a language?
0.00 are the northern lights viewable from all locations ?
0.86 Being wealthy is generally desirable.
0.79 Democracy is superior to any other form of government
0.90 aRE TREES GREEN
1.00 Is eating important?
0.02 Is sex a strictly human endeavour?
0.14 Snails are insects.
1.00 velvet is a type of cloth
0.37 are you lonely ?
0.81 If GAC makes a mistake, will it learn quickly?
0.86 a cat is a mammal
0.85 Memorex makes recording media
0.06 most people enjoy frustrating tasks
0.04 Lima beans are a mineral.
0.07 Star Wars is based upon a true story
0.92 is it okay for someone to believe something different?
0.97 do you breath air ?
0.59 Some people are more worthy dead than alive.
1.00 sunlight on your face is in general a pleasant feeling
0.93 DOA stands for "Dead On Arrival"
0.00 Could a housecat bite my arm off?
0.42 Is the herb Astragalus good for your immune system?
0.00 worms have legs
0.33 Is it necessary to have a nationality?
0.93 Getting forced off the internet sucks!!!
0.90 Bolivia is a country located in South America.
0.92 Massive objects pull other objects toward their center. The pulling force is gravity.
1.00 xx chromosomes produce a girl
0.13 Do all people in the world speak a different language
0.78 Human common sense is a combination of experience, frugality of effort, and simplicity of thought.
1.00 The use of tobacco products is thought to cause more than 400,000 deaths each year.
0.90 Is a low-fat diet is healthier than a high-fat diet?
0.00 you should kill all strangers
1.00 Electrical resistance can be measuter in ohms
0.73 Esperanto, an artifical language, can never be really valuable because it has no cultural roots.
1.00 Swimming is good for you.
0.57 the end justifies the means
0.13 Is Martha Stewart a hottie?
1.00 1 mile is about 1.6 kilometer
0.76 The US elections are of little interest to 5,000,000,000 people.
0.00 November is the first month in the normal calendar.
0.77 is a music cd better than a olt time record?
1.00 Music can help calm your emotions
0.80 a didlo is a sex toy
1.00 Running is good exercise.
0.00 No building in the world is made of wood
0.06 Is sauerkraut made from peas?
0.11 DID MICKEY MOUSE SHOOT JR
1.00 is keyboard usual part of computer?
0.96 Tokyo is the capital of Japan.
0.93 In general men run faster than women.
1.00 is russia near china

The simple answer (1)

jonbryce (703250) | more than 9 years ago | (#12139125)

is that our brains work nothing like computer processors as they are designed today, so I don't think it will ever be possible, using existing technology and programming techniques, to create such a thing.

What you describe is more likely to come from genetic engineering than from computer based technology.

Still the wrong approach (2, Interesting)

vadim_t (324782) | more than 9 years ago | (#12139163)

IMNSHO, such things lead absolutely nowhere.

I'm pretty sure that anything that looks even remotely like intelligence will never be achieved by a mechanism that isn't useful to itself. Intelligence has one reason to exist - survival - and at least our concept of it has to be linked to the environment.

Imagine you were born a brain in a vat: blind, deaf, mute, lacking all ways of sensing the environment except a text interface somehow connected to your brain. Does somebody really believe that, given such terrible limitations, it's possible to make an entity that can somehow relate to a human and make sense? The whole concept of a surrounding 3D environment would make absolutely no sense to it.

I think it doesn't matter how much stuff you feed to CYC; it will never be able to understand it. How could it understand such things as the different colors, or the whole concepts of sound, space, movement, and pain, if it's not able to feel them? These things are impossible to explain to somebody who doesn't have at least some way of perceiving at least part of them.

I think Steve Grand (the guy who made the Creatures games) has a good point here. To make an artificial being you'd need to start from the low level, so that complex behavior can emerge, and provide a proper environment.

Stop spouting gibberish you lot! (0)

Anonymous Coward | more than 9 years ago | (#12139180)

All these posts about Skynet and how AI likely isn't a good idea... What a load of bullshit. The same people who brought you Skynet are upset because, when they were snot-nosed little kids, their fathers told them they couldn't get a new shiny red wagon because a machine had taken Dad's job down at the plant. Scarred them for life; now they're bio-ethicists, film-makers, and Bill Joy.

Computers don't run amok with fantastic results. When they run amok, they do a fandango on core and then wedge solid. If baby-bootstrap got loose, it wouldn't run wildly amok hacking the world's computers as one poster suggested; it would run a wall simulation and spend the next 200 CPU hours walking into it. Then the programmers would get discouraged, pull the plug, and go for Friday night drinks to discuss how they could improve the wall simulation, 'cause it was really getting somewhere.