Slashdot: News for Nerds

DARPA Tackles Machine Learning

samzenpus posted about a year ago | from the learn-faster dept.

Government 95

coondoggie writes "Researchers at DARPA want to take the science of machine learning — teaching computers to automatically understand data, manage results and surmise insights — up a couple notches. Machine learning, DARPA says, is already at the heart of many cutting edge technologies today, like email spam filters, smartphone personal assistants and self-driving cars. 'Unfortunately, even as the demand for these capabilities is accelerating, every new application requires a Herculean effort. Even a team of specially-trained machine learning experts makes only painfully slow progress due to the lack of tools to build these systems,' DARPA says."


95 comments

MACHINES CANNOT LEARN !! (-1)

Anonymous Coward | about a year ago | (#43244399)

They just can't !!

Re:MACHINES CANNOT LEARN !! (0)

Anonymous Coward | about a year ago | (#43244433)

you must be a machine then

Re:MACHINES CANNOT LEARN !! (1)

alexgieg (948359) | about a year ago | (#43245313)

They just can't !!

Carbon machines can. Why couldn't silicon ones?

Actually, scratch that. With graphene circuits coming around in a few years it'll be carbon machines all the way.

Oblig... (4, Funny)

famebait (450028) | about a year ago | (#43244471)

Even a team of specially-trained machine learning experts makes only painfully slow progress due to the lack of tools to build these systems

Why not just teach a machine to do it?

Re:Oblig... (0)

Anonymous Coward | about a year ago | (#43244533)

Sure, it sounds like a good idea until it starts working on systems for exterminating unwanted human elements and building sinister looking underground hive areas.

Re:Oblig... (1)

Whiteox (919863) | about a year ago | (#43244639)

Programming a machine to teach is not as hard as it sounds.

Re:Oblig... (4, Funny)

gshegosh (1587463) | about a year ago | (#43244771)

Programming a machine to teach is not as hard as it sounds.

I hear you man, I probably had the same German teacher ;-)

Re:Oblig... (0)

Anonymous Coward | about a year ago | (#43244961)

The problem is that once they get tenure, they start working at 30% capacity.

Re:Oblig... (1)

Anonymous Coward | about a year ago | (#43245129)

If you want to tinker with some Machine Learning stuff, check out the free Machine Learning Tool at http://www.satyrasoft.com/ [satyrasoft.com]

If anything, it can be quite useful for simplifying complex if statements in programming tasks.
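The "simplifying complex if statements" point is worth unpacking: a classifier can learn a threshold from labelled examples instead of a programmer hand-tuning it. A minimal sketch (the sensor readings and labels below are invented for illustration; this is not the linked tool's API):

```python
# Learn a single decision threshold ("decision stump") from labelled
# examples, replacing a hand-tuned `if reading < LIMIT:` rule.

def fit_stump(points, labels):
    """Pick the threshold that misclassifies the fewest training points."""
    best_t, best_errors = None, None
    for t in sorted(set(points)):
        errors = sum((x < t) != y for x, y in zip(points, labels))
        if best_errors is None or errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Hypothetical sensor readings; True means "reject the part".
readings = [1.0, 2.0, 2.5, 7.0, 8.0, 9.0]
reject   = [True, True, True, False, False, False]

threshold = fit_stump(readings, reject)

def classify(reading):
    return reading < threshold   # the learned "if statement"
```

Instead of maintaining a chain of hand-written conditions, you re-run the fit whenever new labelled data arrives.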

Re:Oblig... (1)

K. S. Kyosuke (729550) | about a year ago | (#43246355)

Even a team of specially-trained machine learning experts makes only painfully slow progress due to the lack of tools to build these systems

Why not just teach a machine to do it?

Indeed. [wikipedia.org]

human creativity... (1)

schlachter (862210) | about a year ago | (#43246793)

because at the moment it's more of an art than a science for many applications.

ROLLOVER AD (-1)

Anonymous Coward | about a year ago | (#43244509)

Anyone else (those not blocking ads) get really annoyed by the MASSIVE rollover advert at the top of the page?

Re:ROLLOVER AD (1)

Annirak (181684) | about a year ago | (#43244649)

There are /. readers that don't block ads?

Re:ROLLOVER AD (2)

programmerar (915654) | about a year ago | (#43244785)

That ad is super annoying. But as an answer to parent, I actually leave ads *on* for slashdot - only for their gesture to let me turn them off. You know the setting up in the corner on slashdot. This alone made me keep them on, to support them.

Re:ROLLOVER AD (1)

daem0n1x (748565) | about a year ago | (#43244667)

What [adblockplus.org]

ads? [noscript.net]

Re:ROLLOVER AD (1)

i kan reed (749298) | about a year ago | (#43245741)

My work blocks adblockplus. For who knows what reason.

Re:ROLLOVER AD (1)

bdwebb (985489) | about a year ago | (#43246859)

Unless you work in the advertising industry, I'm with you...that's a bit crazy. Apparently they like you getting dangly-carroted to death on a daily basis by flashing neon poker chips and penis enlargement pills...that doesn't distract from your work day at all! (I'm assuming you work in an industry where you utilize the internet for reference regularly of course.)

Re:ROLLOVER AD (1)

i kan reed (749298) | about a year ago | (#43246907)

Maybe it's so wasting time on the internet at work sucks.

Re:ROLLOVER AD (1)

bdwebb (985489) | about a year ago | (#43247063)

You may be on to something there...*checks over shoulder for boss-man*. Actually I work from home so that should be *checks over shoulder for boss-woman*.

Skynet (4, Funny)

Edis Krad (1003934) | about a year ago | (#43244567)

Defense agency investing in Machine Learning technology? What could possibly go wrong?!

Re:Skynet (2, Informative)

Required Snark (1702878) | about a year ago | (#43244589)

Yep, just another stupid waste of time by DARPA, just like the internet.

Re:Skynet (3, Interesting)

Anonymous Coward | about a year ago | (#43244811)

As somebody who is currently taking an advanced class in machine learning *shudder, no more prob stat, no more vector calculus, no more linear algebra, please!*, I'm not going to claim to be an expert by any means, but I will point out that as far as I can tell machine learning is mostly about classifiers, i.e., is this a square peg or a round peg, and is that a square hole or a round hole? In other words, take a piece of data, figure out if it belongs in a particular class, or decide if a new class should be created. I can't see any form of sentience coming from anything happening right now.
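The peg-and-hole classification described above can be sketched as a one-nearest-neighbour classifier; the features and labels below are invented for illustration:

```python
# 1-nearest-neighbour: label a new item with the class of the closest
# labelled training example (Euclidean distance over the features).
import math

training = [
    # (corner_count, roundness) -> class label; values are made up
    ((4, 0.1), "square peg"),
    ((4, 0.2), "square peg"),
    ((0, 0.9), "round peg"),
    ((0, 1.0), "round peg"),
]

def classify(features):
    _, label = min(training, key=lambda ex: math.dist(ex[0], features))
    return label
```

No sentience involved: the "learning" is just storing labelled examples and measuring distances to them.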

Re:Skynet (4, Interesting)

Anonymous Coward | about a year ago | (#43245209)

Then you haven't seen my spam filter!

Seriously, I am an AI PhD student/researcher. We get this kind of crap all of the time.
"you are working on robots, when is SkyNet? Hahaha"
"...so... the robot is lost and can't figure out where it is... I'm trying to make it so it can figure it out by how many steps its taken and looking around"
"SkyNet!"

"you are working on a program to control a controller for a video game, when is SkyNet? Hahaha"
"...so... I'm trying to figure out how the computer can make Mario jump over the bad guys without telling him that the bad guys are 'bad'"
"SkyNet!"

"you are working on a program to figure out emotional states of students, how long before you unemploy all the nation's teachers?"
"...so... I'm trying to figure out how to teach a computer to recognize when people are bored..."
"Why do you hate your teachers?!"

Seriously, the idea that we will be able to classify spam, or map a room, or jump over an obstacle, or recognize boredom so well that it gains sentience (and decides to kill all of us) is laughable.

Posting Anon from work.

simplify the problem. people are the problem. (1)

Thud457 (234763) | about a year ago | (#43246847)

Oh come on, you know in your heart SkyNet is the only feasible solution to the spam problem.

All narrative is driven by conflict. If our glorious utopian future entailed having all our needs and wants attended to by robowetnurses, nobody would be making movies about that stagnant society. E.g., "Zardoz" with its Eternals, Niven's "Safe at Any Speed", the society on Aurora in Asimov's robot stories.

Re:Skynet (1)

jafac (1449) | about a year ago | (#43255051)

Most likely, the guys who have solved this problem are working for the large financial institutions around the world, writing trading algorithms. DARPA's not worth their time, and neither is the ad industry.

Re:Skynet (1)

Sigg3.net (886486) | about a year ago | (#43257379)

So what you are saying is that SkyNET essentially decides to kill us because we are:
a) spamming
b) building impossible rooms
c) obstacling the world
d) bored
or a combination or all of these..?

I say, you must be one of those who believe in AI through complexity.

Re:Skynet (1)

Nerdfest (867930) | about a year ago | (#43244903)

It might be better than the military making decisions themselves ...

joshua (0)

Anonymous Coward | about a year ago | (#43246027)

Do you want to play a game?

Re:Skynet (-1)

Anonymous Coward | about a year ago | (#43246983)

You're an idiot. And probably a Democrat. But I repeat myself...

Re:Skynet (1)

CosaNostra Pizza Inc (1299163) | about a year ago | (#43249905)

You're an idiot. And probably a Democrat. But I repeat myself...

You are an idiot and a wingnut for spinning this into a political discussion

Re:Skynet (0)

Anonymous Coward | about a year ago | (#43244943)

Defense agency investing in Machine Learning technology? What could possibly go wrong?!

All of this has happened before...

Re:Skynet (0)

Anonymous Coward | about a year ago | (#43245053)

Defense agency investing in Machine Learning technology? What could possibly go wrong?!

A sucky reboot of a reasonably well done sci-fi movie franchise?

Re:Skynet (0)

Anonymous Coward | about a year ago | (#43245515)

Defense agency investing in Machine Learning technology? What could possibly go wrong?!

A sucky reboot of a reasonably well done sci-fi movie franchise on the 11 o'clock news?

There, ftfy

Re:Skynet (1)

CosaNostra Pizza Inc (1299163) | about a year ago | (#43249829)

LoL....Singularity....SkyNet. Just use your imagination

"DARPA Tackles Learned Machines" (5, Funny)

hildolfr (2866861) | about a year ago | (#43244617)

a headline for future 2030.

This headline pops up every few years (4, Informative)

Viol8 (599362) | about a year ago | (#43244675)

They've been trying it since the '50s without, it has to be said, too much success given the amount of effort that's been put in. I suspect that until we REALLY understand how biological brains do it (not "meh, some sort of neural back propagation"; yeah, we know that, but what propagation, and how exactly?) machine learning will remain at the bottom rung of the intelligence ladder.

Personally I think that, at the moment, pre-programmed intelligence is still a more successful route to go down. Though hopefully that will change.

Re:This headline pops up every few years (1)

snarkh (118018) | about a year ago | (#43244743)

Machine learning and more broadly AI has had tremendous success recently. Google search is some sort of machine learning program. Pretty useful, no?

I am not even talking about speech recognition, chess machines, auto-focus in your camera and so on.

Re:This headline pops up every few years (4, Interesting)

WillAdams (45638) | about a year ago | (#43244775)

A.I. is a classic case of moving goal posts --- there's an assumption a hard problem requires it, the problem gets solved using ever-more sophisticated analysis/pattern-matching/data-processing --- the problem domain is no longer considered A.I.

Re:This headline pops up every few years (3, Interesting)

g4b (956118) | about a year ago | (#43244851)

exactly.

The research field of AI has already come to treat "artificial intelligence" as meaning "solutions based on imitating intelligence", and it has long been postulated that, while the dream is still the real thing, it probably will not be possible with electronics (which do great at calculation, but still have problems with parallelism).

The results of the last decades were OOP, neural networks, and the well-known spam-checking algorithms.

But the approach to learning in all these cases is still very different each time. I am, e.g., not sure if spam filters really use neural algorithms; mostly they concentrate on the relations between words in a text, or the alterations of a word in a text, and on how to use the statistical data about these relations to flag content that is probably spam.

Since humans (and other intelligent mammals) learn to learn by playing, both establishing recognition of rules and the usage of data, I wonder if it will ever be possible to have an abstract learning machine which not only "learns", but also learns "what to learn" and "why to learn" on its own. But each respective problem is getting addressed.

Oh yes, and the latest developments, like gamification in industry, and the revelations about the true meaning of "playing", researched mostly in the social and psychological sciences, are maybe also indirectly linked to the field of AI. Which still has a long way to go in a society where "playing" is associated with "kids" and a waste of time.
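The word-relation statistics described above (as used by spam filters) can be sketched as a toy naive-Bayes odds score; the two-message "corpus" is invented for illustration:

```python
# Toy naive-Bayes-style spam score: compare how often each word
# appeared in known spam vs. known ham, and multiply the per-word odds.
from collections import Counter

spam_words = Counter("buy cheap pills buy now cheap".split())
ham_words  = Counter("meeting notes attached see you at the meeting".split())

def spam_odds(message):
    odds = 1.0
    for word in message.split():
        # add-one smoothing so unseen words don't zero out the product
        p_spam = (spam_words[word] + 1) / (sum(spam_words.values()) + 2)
        p_ham  = (ham_words[word] + 1) / (sum(ham_words.values()) + 2)
        odds *= p_spam / p_ham
    return odds

# odds > 1: the words look more like past spam than past ham
```

Note there is no neural network here at all, just word counts and ratios, which is the point being made about how different "learning" looks from problem to problem.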

Re:This headline pops up every few years (0)

Anonymous Coward | about a year ago | (#43244981)

Since children absorb and process vast amounts of complex information in a relatively short time span with no formal methodology, maybe "kid's stuff" isn't such a bad area of study in AI research.

Re:This headline pops up every few years (0)

narcc (412956) | about a year ago | (#43244995)

No.

The best chess programs do not learn (4, Informative)

Viol8 (599362) | about a year ago | (#43244809)

They're hard coded and use massively parallel depth searching. The brute force approach has been the best for chess computers for decades.

And Google search and translate aren't really learning; they're just statistical systems that give the best result based on the data they've gathered. They don't "think" about it in any meaningful way.
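The brute-force depth search that chess programs use is essentially minimax; here is a toy sketch over a hand-built game tree (not a real chess engine, and without the alpha-beta pruning and parallelism real engines add):

```python
# Minimax over a hand-built game tree: leaves are static position
# scores, inner nodes alternate between the maximising player (us)
# and the minimising player (the opponent). No learning involved:
# the "skill" is exhaustive lookahead over pre-coded evaluations.

def minimax(node, maximizing):
    if isinstance(node, int):    # leaf: static evaluation score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A 2-ply toy tree: three moves for us, two opponent replies each.
tree = [[3, 5], [2, 9], [0, 7]]
best = minimax(tree, True)   # opponent minimises each branch first
```

The engine never updates anything between games, which is exactly the "hard coded, no learning" distinction being drawn here.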

Re:The best chess programs do not learn (2)

citizenr (871508) | about a year ago | (#43245091)

And Google search and translate aren't really learning; they're just statistical systems that give the best result based on the data they've gathered. They don't "think" about it in any meaningful way.

This is machine learning: they derive results based on statistical data, but new data input changes the statistics = learning.

Re:The best chess programs do not learn (1)

Anonymous Coward | about a year ago | (#43245461)

Try changing the board and see what happens. The simplest change that could break search algorithms would be to wrap the chessboard around a sphere so there are no edges.

Re:The best chess programs do not learn (2)

snarkh (118018) | about a year ago | (#43245321)

I never said that chess was machine learning.

>And google search and translate isn't really learning, they're just statistical systems that given the best result based on the data they've gathered.

And what exactly is your definition of learning?

Re:The best chess programs do not learn (2)

eennaarbrak (1089393) | about a year ago | (#43245349)

They don't "think" about it in any meaningful way.

Oh yes? And what does it mean to think about something in a meaningful way?

Re:The best chess programs do not learn (1)

Viol8 (599362) | about a year ago | (#43245957)

Being able to analyse deeper meaning beyond statistics for a start. A machine would happily take a phrase where every word was "wibble" and attempt to translate it. A human wouldn't bother because they'd know it was rubbish. Learning the statistical relationships between bits of data isn't thinking when it has no idea what those bits of data actually mean.

Re:The best chess programs do not learn (2)

ralphdaugherty (225648) | about a year ago | (#43246165)

At what point does one "know" it's rubbish? Saying "wibble wibble wibble" to a baby will evoke a smile if said in the right tone of voice.

Re:The best chess programs do not learn (1)

Viol8 (599362) | about a year ago | (#43246275)

True, but then you wouldn't try and hold an intelligent conversation with a baby. Quite how babies learn is another matter, but we haven't written anything yet that can mimic it.

Re:The best chess programs do not learn (1)

K. S. Kyosuke (729550) | about a year ago | (#43246769)

A human wouldn't bother because they'd know it was rubbish.

And how did the human find out that "(wibble )+" is rubbish? By learning, perhaps? From a large sample of sentences encountered in his lifetime, perhaps?

Re:The best chess programs do not learn (1)

eennaarbrak (1089393) | about a year ago | (#43246783)

Sure, I get what you are saying, but there is a problem with it. You assume that the concept of "meaningful thinking" is well-defined. That is a perilous assumption that can get you into all sorts of trouble.

It may well be that what we perceive as "meaningful thinking" is nothing but simple machine algorithms that get interpreted in a specific way by other machine algorithms in our brain. Our brains may very well be machines that have, instead of being programmed by another machine, evolved to categorize information and apply it within our evolutionary context.

Applying "meaning" to it may very well be an illusion. A self-propagating illusion.

If you are interested, an interesting handling of this topic by Alex Rosenberg can be found here: http://onthehuman.org/2009/11/the-disenchanted-naturalists-guide-to-reality/ [onthehuman.org]

Re:The best chess programs do not learn (0)

Anonymous Coward | about a year ago | (#43271745)

....isn't really learning, they're just statistical systems that given the best result based on the data they've gathered.....

How is what you've written meaningfully different FROM WHAT YOUR BRAIN DOES, GENIUS?

Jesus Christ, why is this modded insightful?

P.s IMA PHD AI grad student in machine learning

Re:This headline pops up every few years (1)

ralphdaugherty (225648) | about a year ago | (#43244813)

If I store purchase data away in files and then have a re-order routine/program that generates replenishment orders based on purchase history, that is no more "learning" than any of this neural network stuff capturing patterns and interpreting it.

I wrote a Double Deck Pinochle program back in 1981 that is hard coded logic, no "learning". There is as much or as little AI in it as anything else "AI".

When programming applied to human-like operations stops being called "artificial intelligence" until there is indeed self-generated change in behavior, based on input, beyond pre-determined algorithmic control, then there will be some honesty and integrity about the programming process now called AI, and with honesty and integrity may come advances.

The self-generated change in behavior would require self-determined changes in programming and data that provides for actual non-preprogrammed behavior. This is obviously extremely difficult.

I have a substantial collection of books on AI and AI history and have a lot more to read but of what I've read a lot of AI programming efforts are done by people with limited time and effort. Very unimpressive stuff from the university crowd.

Re:This headline pops up every few years (5, Insightful)

Spottywot (1910658) | about a year ago | (#43244895)

I think that learning how the biological brain does it before building a learning machine is the wrong way around. I think that the person/team that builds the first genuinely successful learning machine will give the biological researchers a clue about potential mechanisms for learning, it will take a genuine leap of imagination as well as the type of grunt work the DARPA guys are doing.

Re:This headline pops up every few years (1)

thereitis (2355426) | about a year ago | (#43245533)

Build a P2P cluster of humans solving simple but related problems, and upload the results. Humans are more forgiving of ambiguities, so it should be easier to jump-start. Automate these tasks over time.

Re:This headline pops up every few years (2)

bangular (736791) | about a year ago | (#43245787)

The same thing could have been said about computers. Machine learning algorithms can already predict really useful things far better than a human. Sure, they may not be able to understand the context of spoken language very well, but given sufficient training data we can already prescribe medical treatments from ML that surpass a human doctor in effectiveness.

I do think understanding the human brain would be a big breakthrough, but I don't see them as sequential. ML will actually help us understand the brain better because it will allow us to process the big data of medical experiments in a meaningful way.

Re:This headline pops up every few years (1)

Anonymous Coward | about a year ago | (#43249123)

the first genuinely successful learning machine

I'm feeling a No True Scotsman fallacy here. We've had successful machine learning projects. Didn't you enjoy Watson on Jeopardy?

What, EXACTLY, would you consider "genuinely successful machine learning"?

Re:This headline pops up every few years (1)

xtracto (837672) | about a year ago | (#43246785)

I kind of agree with you, however I think scientific progress is running down the path you describe more than you realize. For example, the "meh, some kind of back propagation" thought is now being replaced by RBMs and SVMs, which are based on newer theories of how the brain works. This has given some kind of new 'life' to AI [github.com]. It is now known that typical neural networks and other "classical" machine learning techniques are very prone to overfitting.

As with every field in science, we put theories, and base new research on those theories, until it yields everything it can, and then someone comes with a new theory.


Re:This headline pops up every few years (0)

Anonymous Coward | about a year ago | (#43251039)

Actually, backpropagation is a direct departure from how natural neural networks work. It was a solution to the problem of cycles and race conditions, which make it difficult to have a neural net that works like a black box turning input into output (like other computer programs do). That shouldn't be surprising, given that natural neural nets work more like industrial controllers (a black box that continuously monitors sensors and adjusts actuators when the sensor data changes) than the magic answer box that ANNs get used as.

Re:This headline pops up every few years (1)

Sigg3.net (886486) | about a year ago | (#43257497)

It all depends on the definition of AI. If you think about a working human brain in a computer, virtualized AI based on neurological models may get us there. But what is the result? A miserable human-clone without any contact with the world? We are animals, machines are not (yet).

But parallel to this, you could just as well achieve an intelligence that is artificial and computational, but it could be so alien to us that we wouldn't understand it.
Or perhaps we are misinterpreting what it means to be intelligent, and programs already run algorithms the same way that we do.

Or something else entirely! AI discussions are pretty loose with regards to definitions.

Re:This headline pops up every few years (0)

Anonymous Coward | about a year ago | (#43277701)

And it could even be that biological brains are super-Turing computers.

Didn't IBM do this? (1)

jonwil (467024) | about a year ago | (#43244847)

Didn't IBM do this when they created a computer to play on that quiz show? (the name escapes me)

Re:Didn't IBM do this? (2)

Rockoon (1252108) | about a year ago | (#43244985)

It's a form of A.I. for sure, but the skill shown has more to do with the volume of data it uses than with any skill at learning.

Machine learning is a very particular subset of A.I., often characterized by one or more training phases which build a model of the training set that is smaller than the set itself.
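The "model smaller than the training set" point can be made concrete with the simplest possible case: a least-squares line fit compresses any number of (x, y) training points into just two learned parameters. A sketch with invented data:

```python
# Closed-form least squares for y = a*x + b: however many training
# points there are, the learned "model" is just the pair (a, b).

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]       # exactly y = 2x + 1
a, b = fit_line(xs, ys)    # five points compressed to two numbers
```

Watson, by contrast, keeps the huge corpus around at answer time, which is why it sits closer to "volume of data" than to this kind of compressed model.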

Re:Didn't IBM do this? (1)

ShanghaiBill (739463) | about a year ago | (#43247605)

the skill shown has more to do with the volume of data it uses than it has to do with a skill at learning.

Actually, the skill shown had more to do with being a fast button pusher. If the questions were distributed fairly, Watson would not have beaten its human opponents. The Jeopardy game was stage managed show business, not a fair contest.

Re:Didn't IBM do this? (0)

Anonymous Coward | about a year ago | (#43251267)

It's a form of A.I. for sure, but the skill shown has more to do with the volume of data it uses than it has to do with a skill at learning.

Based on how public schools teach students, having a bigger set of memorized data is learning.

Re:Didn't IBM do this? (3, Informative)

DI4BL0S (1399393) | about a year ago | (#43244993)

Jeopardy, and the machine is called Watson [wikipedia.org]

We need data, not algorithms (3, Insightful)

Anonymous Coward | about a year ago | (#43244925)

There are a ton of off-the-shelf machine learning toolkits that are sufficient for 90% of possible use cases. The problem is getting annotated data to feed into these tools so they can learn the appropriate patterns. But all that requires is a host of annotators (i.e. undergrads and interns), not machine learning experts.
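To the point about toolkits versus data: the algorithm side really can be tiny. A complete perceptron trainer is a dozen lines of Python; the annotated examples it consumes are the expensive part. A sketch with an invented hand-labelled dataset:

```python
# A complete perceptron trainer: the algorithm is the easy part; the
# annotated (feature vector, label) pairs are what is expensive to
# produce in real projects.

def train_perceptron(data, epochs=50):
    """data: list of (features, label) pairs with label in {-1, +1}."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

# Invented annotated data: logical AND, labels assigned by hand.
annotated = [((0.0, 0.0), -1), ((0.0, 1.0), -1),
             ((1.0, 0.0), -1), ((1.0, 1.0), +1)]
w, b = train_perceptron(annotated)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

Four labelled rows took seconds to write here; labelling millions of real documents is where the undergrads and interns come in.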

Re:We need data, not algorithms (1)

lorinc (2470890) | about a year ago | (#43245425)

There are a ton of off-the-shelf machine learning toolkits that are sufficient for 90% of possible use cases. The problem is getting annotated data to feed into these tools so they can learn the appropriate patterns. But all that requires is a host of annotators (i.e. undergrads and interns), not machine learning experts.

Exactly this!

Almost everything you ever dreamed of as a non machine learning expert is available at https://mloss.org/software/ [mloss.org]
Please now annotate more data so that we can tune the algorithms ;-)

Re:We need data, not algorithms (1)

bangular (736791) | about a year ago | (#43245509)

+1 to this. The algorithms are great and we are not using them anywhere near capacity. We lack standards for data formats and standards for interfacing. If I write a program using MySQL, it can reasonably be moved to another RDBMS with maybe 80-100% of the code saved. If I write a program using C4.5 in Matlab, good luck porting it to Weka using Decision Stump and a meta learner. It boggles my mind that a company like Microsoft hasn't packaged it as Microsoft ML 2012 with Visual Studio integration.

Re:We need data, not algorithms (1)

Hizonner (38491) | about a year ago | (#43245639)

If I tried to teach a human, or indeed if I set an untaught human loose on an unstructured problem, and that human turned around and demanded a huge mass of annotated data, I would not conclude that the human was a good learner, or even "sufficient for 90% of possible use cases". I would conclude that the human didn't have the complete machinery of learning.

Re:We need data, not algorithms (1)

bangular (736791) | about a year ago | (#43245865)

AI and ML are different. AI is this grandiose goal which we are many decades away from. ML is much more humble and doing very useful things today. I'm not really sure where the line for ML stops and AI begins. Maybe the name was just arbitrary so it wouldn't have 50 years of expectations on its back. But we're just leaving phase I in the history of ML (supervised learning) and getting into phase II (unsupervised). Next we'll have to figure out domain transfer knowledge. Who knows what's next after that.
You're showing disappointment in the industrial revolution because it hasn't invented the calculator yet.

Re:We need data, not algorithms (1)

Hizonner (38491) | about a year ago | (#43245905)

I'm not disappointed at all. I'm reacting to somebody who seems to think the job is done when it's not.

All I'm saying is that the present, early stuff is NOT "sufficient for 90% of possible use cases". That doesn't mean I don't realize that things are still at an early stage and progress is being made.

Re:We need data, not algorithms (0)

Anonymous Coward | about a year ago | (#43247163)

Plenty of data has been organized by http://www.cyc.com/ [cyc.com]

I'm not connected with this company, but one of my college roommates worked with Doug Lenat at Cycorp for a number of years.

Re:We need data, not algorithms (1)

Anonymous Coward | about a year ago | (#43250857)

No, sorry, please try again.
You've got a lot [wikipedia.org] to go through [wikipedia.org] before [wikipedia.org] even knowing [wikipedia.org] you're wrong [stackoverflow.com] .

Machine learning is a subset of artificial intelligence. We have obtained both AI and ML solutions for certain problems. High fives all around. We have yet to achieve "strong AI", and you have to get into a philosophy debate to define just what the hell that means. There is no point where ML stops and AI begins. Unsupervised ML has been in use for quite a while.

I'm disappointed in the mysticism surrounding AI because the pseudo-intellectuals of the industrial revolution think gears will lead to sprockets that will land us on the moon. That's a metaphor. I'm actually talking about you.

Actually, we do need a nice toolkit (1)

bouldin (828821) | about a year ago | (#43247683)

In my ML class, we used WEKA. Of course, there is also Matlab. Problem is, neither of these are free, and they are both slow as hell. I would not use either one outside of class/prototyping.

Ideally there would be a free, open source toolkit written in a compiled language. The toolkit should have a variety of ML techniques that can be switched around with little pain. Only toolkit I know of like this is the ML part of OpenCV, and the documentation for OpenCV is... lacking.

Another poster linked to mloss.org. I hadn't seen that site. Looks like a great resource, but it also looks like 400+ fragmented tools that do not play well together, and are probably mostly dead projects by now.

NO! (1)

noshellswill (598066) | about a year ago | (#43245267)

Godels' bamboo cane-of-mind-pain  smacks the arrogant byteboy across --tappatappatappa--  fingers. Does it hurt, byteboy that **all** formal systems bow-down  of necessity to meatspace creatures? And YES the IBM human expert directed dog+pony mechanical-Turks were/are jokes. But, it's actually much worse as all "AI" exists **only** in the (human) mind of the constructor.  Yep ... all that babble pledging your affection to inchoate voltages and currents you are so fyucked byteboy ..... 

Good luck with that (3, Insightful)

Black Parrot (19622) | about a year ago | (#43245327)

Sounds like the 1990s fetish for making programming languages so simple that even your boss could make reports and do other stuff for himself. Unfortunately, programming language syntax wasn't the primary hurdle: I've had bosses request reports that would add pounds of product and shipping costs.

For ML, it takes a good bit of training just to know what kinds of problems you can apply it to. A cookbook toolkit isn't going to reduce the need for expertise very much.

Re:Good luck with that (3, Insightful)

Black Parrot (19622) | about a year ago | (#43245439)

Here's an analogy: We've had sophisticated, easy-to-use statistics software packages for decades. What percentage of the population can use them correctly for anything non-trivial?

Tools are nice, but some stuff just inherently takes training. No tool is going to make me a competent oceanographer or particle physicist.

Re:Good luck with that (2)

bangular (736791) | about a year ago | (#43245559)

Creating new programming languages for domain-specific problems has never worked. However, there really is a lack of developer-friendly tools out there. On one end we have the researchers creating algorithms and (if we're lucky) implementing them as stand-alone scripts in Java. On the other end are developers. Most developers are fickle and, if the tool requires knowledge of the internals, probably won't use it. That's where the Microsofts and Oracles and Googles are supposed to step in and make a crap-ton of money packaging these algorithms with a shiny API.

However, the current state is that no middle man has really stepped in.

Re:Good luck with that (1)

jlowery (47102) | about a year ago | (#43253621)

It's never worked for untrained end-users, perhaps, but there are plenty of successful DSLs out there. Spreadsheet formulas, for one.

Quality of tools (2)

bangular (736791) | about a year ago | (#43245361)

I was just talking with someone about this the other day. Machine learning is going to be the SQL database of the next generation. In 15 years it will be hard to find basic apps that don't use it. The tools will reach a point where it's so easy to include them in your program that people will include them by default, even when they aren't really the most appropriate way to solve the problem. This is how SQL is today. Go to any SMB and try to find a non-trivial application that doesn't use a SQL database. It's difficult.

However, the state of current tools is not good. We currently have really good algorithms for machine learning. The gap is in actually getting a developer to use them. If it's not branded and blessed by Oracle or Microsoft, many businesses won't use it. If you search for implementations on the internet you can usually find an implementation of R or Matlab. However, people are weary of including R and Matlab in their programs to begin with. If it's not in .net or Java, they won't use it. Weka can be used for Java, but it's a difficult library for a machine learning novice to use. The developer has to know some internals of machine learning to know which algorithm to use and their pros and cons. Meta learners complicate the issue even more. Modern RDBMS have been sugar coated so much a developer can use a RAD IDE and not understand a single line of SQL. I'm not saying that's really a good thing, but it definitely has made SQL databases very common and improved the state of the industry for everyone.

Shh .. Oracle is listening (0)

Anonymous Coward | about a year ago | (#43247715)

Not so loud! ORACLE is listening. If they take over this domain, imagine the size of the drivers you have to install on your "smart" client.

Re:Quality of tools (0)

Anonymous Coward | about a year ago | (#43247981)

I disagree. Having published ML papers, I can say that just having "better tools" won't fix things when you run up against data that violates the fundamental assumptions of the methods. Honestly, smashing things with ML engines is the easy part. Figuring out what to smash... that's harder.
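A tiny illustration of the assumption-violation point: below, an ordinary least-squares line is fitted (closed form, plain Python, synthetic data) to points generated by y = x², and the fit is useless no matter how polished the tooling. The tool ran perfectly; the modelling assumption (linearity) was wrong.

```python
# Fit y = slope*x + intercept by ordinary least squares, then apply it
# to data the model's linearity assumption doesn't hold for.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [x ** 2 for x in xs]            # clearly non-linear data
slope, intercept = fit_line(xs, ys)

# By symmetry the "best" line is flat (slope 0), and the squared error
# stays large: the fit succeeded, the model choice failed.
residual = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
print(slope, intercept, residual)
```

No amount of API polish around `fit_line` fixes this; someone still has to know the method's assumptions and check the data against them.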

Re:Quality of tools (0)

Anonymous Coward | about a year ago | (#43249193)

I was just talking with someone about this the other day. Machine learning is going to be the SQL database of the next generation.

How old were they? More specifically, did they live through the last AI winter?

If it's a fresh graduate... pfffft, please.

Re:Quality of tools (0)

Anonymous Coward | about a year ago | (#43254519)

"This is how SQL is today. Go to any SMB and try to find a non-trivial application that doesn't use a SQL database. It's difficult. "

It's actually really easy: your OS kernel does not use SQL and is a non-trivial application.

don't do it (0)

alteveer (979070) | about a year ago | (#43245403)

inb4 skynet

Off topic, prolly modded troll (2)

flying_fortress (2657331) | about a year ago | (#43246199)

I read an interesting article the other day suggesting humans are the organic soup from which a new branch of binary-encoded, as opposed to DNA-encoded, life will emerge. They argued that it's already happened, given that computer viruses are self-replicating.

They say soon it will be fish - lizard - hamster - chimp - Neanderthal - human - Roomba, basically. And that these new "machine life forms" will entirely surpass us in so many ways: longevity, freedom from depression and the faults of our "blind watchmaker" brains, strength, speed, the ability to withstand the stresses of heat, pressure, interstellar radiation, the claustrophobia and isolation of intergalactic travel, etc.

I know our efforts towards cultivating AI and facilitating this now seem clumsy and awkward, but I'm with Kurzweil in thinking this only picks up steam more and more quickly from here.

So here's the real nutjob part: If the vast majority of this research is being done by the world's militaries, isn't the likelihood of these new uberspecies being aggressive towards humans quite high? And, if cats breed out of control wiping out all the birds, and humans breed out of control devastating all sentient beings weaker than themselves, even enslaving, exploiting, and killing their own kind, is it not likely that evolution just kind of favors ruthless dominance? And if so, even in the best of cases, does the evolution of these new uber-species bode well for us basically??? : )

Intelligence is dangerous it seems! I feel like we are witnessing a race between various possible means of causing the 7th (?) mass extinction in geological time - Will we eradicate the current biosphere, ourselves included, by our own hands first by: a) nuclear weapons, b) genetically engineered viruses, c) polluting the environment to the point of mass ecological collapse, or now d) purposely creating species more dominant than our own and allowing nature to run its course?

Maybe it was bird flu, asteroid, or sun unexpectedly blowing up all along, but my money's on us! Just not sure which way???

Black Box (1)

onebeaumond (1230624) | about a year ago | (#43246521)

What they really want is the classic "Computer that Gives a Shit". Instead of passive-aggressively taunting you over your own dumb SQL statement, it fixes it for you!

Re:Black Box (0)

Anonymous Coward | about a year ago | (#43247217)

I sat down at a terminal at MIT in the early 1980s and typed something like "who" to see who had logged in. Turned out that on that system the command was something else, maybe "whois"(??)

Anyway, it gave me the answer I was looking for and also chided me for using the wrong command.

sounds like a DARPA RFP from the 1970s (1)

peter303 (12292) | about a year ago | (#43246547)

"Deja vu all over again" for long-unsolved computer problems.

The Reasons for "Herculean effort" (3, Informative)

scruffy (29773) | about a year ago | (#43246801)

Raw data need to be cleaned up and organized to feed into the ML algorithm.

The results of the ML algorithm need to be cleaned up and organized so that they can be used by the rest of the system.

No one (currently) can tell you which ML algorithm will work best on your problem and how its parameters should be chosen without a lot of study. Preconceived bias (e.g., that it should be biologically based, blah, blah) can be a killer here.

The best results typically come from combinations of ML algorithms through some kind of ensemble learning, so now you have the problem of choosing a good combination and choosing a lot more parameters.

All of the above need to work together in concert.

Certainly, it's not a bad idea to try to make this process better, but I wouldn't be expecting miracles too soon.
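The ensemble-learning step in the list above can be sketched in a few lines: a majority vote over several base learners. The three "learners" here are hypothetical hard-coded threshold rules standing in for trained models; a real ensemble would have to train each one and tune its parameters, which is the parent's point about the choices multiplying.

```python
from collections import Counter

# Hypothetical base "learners": simple threshold rules standing in for
# trained classifiers over a 2-feature input.
def rule_a(x): return "pos" if x[0] > 0.5 else "neg"
def rule_b(x): return "pos" if x[1] > 0.5 else "neg"
def rule_c(x): return "pos" if x[0] + x[1] > 1.0 else "neg"

def ensemble_predict(learners, x):
    # Majority vote: each learner casts one vote for a label.
    votes = Counter(f(x) for f in learners)
    return votes.most_common(1)[0][0]

print(ensemble_predict([rule_a, rule_b, rule_c], (0.9, 0.2)))  # 2 of 3 vote "pos"
```

Even this toy version raises the questions the comment lists: which base learners to combine, whether to weight their votes, and how to pick all those thresholds.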

Every journey starts with the first step? (1)

Anonymous Coward | about a year ago | (#43248533)

Just want to point out that this is about machine learning, not AI, so no need to worry about Skynet yet, although the ability to understand data and learn from it is the first step, or at least one piece of the jigsaw, towards achieving Artificial Intelligence.

From what I can gather, this is trying to standardize how machines learn. It sounds like the education system at the moment, where there are numerous systems for teaching children to read and write. Rather than having numerous approaches to machine learning, why not combine resources and have one method? This has good points and bad points. What happens if the method that DARPA approves isn't the right one? I think a better way, as pointed out by another poster, is first to standardize how data is stored, which would then make it easier for software with machine learning functions to process it. It may even be that the way we communicate with machines has to change. Languages, particularly English, have constantly evolved over the centuries, so why should they not evolve to accommodate communicating with machines?

The real key to Artificial Intelligence... (0)

Anonymous Coward | about a year ago | (#43248711)

is not building machines capable of learning, or teaching computers to automatically understand data.

The real breakthrough will be when we build machines able to teach what they learnt to other machines, in their own terms. Even if they don't do it perfectly.

Perhaps... (1)

frank_adrian314159 (469671) | about a year ago | (#43251091)

... if they hadn't killed AI research in the mid-eighties, they wouldn't have to fund research today when it's more expensive? Thanks, DARPA...

miri - singularity (0)

Anonymous Coward | about a year ago | (#43254277)

they won't ask, but you should know

this is something we all should be supporting...

If you have not been here yet check them out

http://intelligence.org/donate/

http://intelligence.org/research/

this is the reason they exist
