
Poison Attacks Against Machine Learning

timothy posted about 2 years ago | from the think-zippy-the-pinhead dept.


mikejuk writes "Support Vector Machines (SVMs) are fairly simple but powerful machine learning systems. They learn from data and are usually trained before being deployed. SVMs are used in security to detect abnormal behavior such as fraud, credit card use anomalies, and even to weed out spam. In many cases they need to continue to learn as they do the job, and this raises the possibility of feeding them data that causes them to make bad decisions. Three researchers have recently demonstrated how to do this with the minimum of poisoned data for maximum effect. What they discovered is that their method was capable of having a surprisingly large impact on the performance of the SVMs tested. They also point out that it could be possible to direct the induced errors so as to produce particular types of error. For example, a spammer could send some poisoned data so as to evade detection for a while. AI-based systems may be no more secure than dumb ones."
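The attack is easy to demonstrate at small scale. Below is a minimal sketch in Python (assuming scikit-learn is available; it illustrates label-flip poisoning of a linear SVM, not the paper's exact gradient-based attack):

    # Illustration only: a handful of wrongly-labelled "poison" points injected
    # into the training set degrades a linear SVM's test accuracy.
    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.RandomState(0)
    X, y = make_blobs(n_samples=400, centers=2, cluster_std=2.0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clean = SVC(kernel="linear").fit(X_train, y_train)
    print("clean accuracy:   ", clean.score(X_test, y_test))

    # Poison: 20 points drawn near class 0 but deliberately labelled class 1.
    idx = rng.choice(np.where(y_train == 0)[0], 20)
    X_poison = X_train[idx] + rng.normal(scale=0.5, size=(20, 2))
    y_poison = np.ones(20, dtype=int)

    poisoned = SVC(kernel="linear").fit(np.vstack([X_train, X_poison]),
                                        np.concatenate([y_train, y_poison]))
    print("poisoned accuracy:", poisoned.score(X_test, y_test))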

82 comments

Why solely the link to "i-programmer.info"? (5, Informative)

Anonymous Coward | about 2 years ago | (#40729419)

Why the hell is the only link in the summary to that rather useless "I Programmer" website? The summary here at Slashdot is basically the content of the entire linked "article"!

Here is a much more useful link for anyone interested in reading the actual paper: http://arxiv.org/abs/1206.6389v1 [arxiv.org]

Re:Why solely the link to "i-programmer.info"? (0)

Anonymous Coward | about 2 years ago | (#40729447)

Because the guy that posts these likes spamming links to his own site.
http://developers.slashdot.org/story/12/07/21/2040257/html5-splits-into-two-standards

Re:Why solely the link to "i-programmer.info"? (2)

mbone (558574) | about 2 years ago | (#40729449)

The "original article" has a link to the arxiv preprint at the bottom.

Re:Why solely the link to "i-programmer.info"? (1)

justforgetme (1814588) | about 2 years ago | (#40746271)

Yes, but the sentiment stands. Why base the story on a story about an article and not on the article itself?
Bone idleness? Idiocy? Hunger? Gas?

Re:Why solely the link to "i-programmer.info"? (0, Troll)

Anonymous Coward | about 2 years ago | (#40729649)

For the wonderful low price of $7 USD, I will touch your sex.

Re:Why solely the link to "i-programmer.info"? (0)

Lisias (447563) | about 2 years ago | (#40730085)

For just $93USD more, I'll leave your sex in place when I retrieve my hand.

Re:Why solely the link to "i-programmer.info"? (0)

EdIII (1114411) | about 2 years ago | (#40730709)

For just $93USD more, I'll leave your sex in place when I retrieve my hand.

So... for a hundred dollars you will not rip off my johnson? Sounds quite reasonable. Do you happen to work for Viacom?

Re:Why solely the link to "i-programmer.info"? (0)

Anonymous Coward | about 2 years ago | (#40729793)

Why the hell is the only link in the summary to that rather useless "I Programmer" website? The summary here at Slashdot is basically the content of the entire linked "article"!

That's easy: timothy doesn't proofread, much less edit.

Try this on humans (4, Interesting)

s_p_oneil (795792) | about 2 years ago | (#40729433)

Universities should run a number of psychology experiments to see how this can be done to human intelligence to see how susceptible it is compared to AI. Or you could just study people who tune in to .

Re:Try this on humans (1)

s_p_oneil (795792) | about 2 years ago | (#40729439)

Sorry, Slashdot stripped out my "insert questionable media outlet here" message. I previewed it a bit too quickly.

Re:Try this on humans (-1)

Anonymous Coward | about 2 years ago | (#40729603)

no one gives a shit cuntface

Re:Try this on humans (2)

Yaa 101 (664725) | about 2 years ago | (#40729883)

It is already known that human brains make up what they miss in presented info.

With people, you only have to withhold info to get them to make bad decisions.

Re:Try this on humans (1, Insightful)

sg_oneill (159032) | about 2 years ago | (#40729975)

When you think about it, what's going on here is inducing mental illness in "thinking" machines.

We already know how to induce mental illness in humans. Religion and war.

Re:Try this on humans (3, Insightful)

marcosdumay (620877) | about 2 years ago | (#40730115)

You mean propaganda and social pressure.

Religion and war are just consequences of those.

Re:Try this on humans (1)

Runaway1956 (1322357) | about 2 years ago | (#40731893)

GP is probably on a crusade to stamp out religion and war. Your definitions will have zero impact on his views.

Re:Try this on humans (1)

Decker-Mage (782424) | about 2 years ago | (#40732941)

That was what came to mind immediately after just reading the summary here. Unfortunately the cure, open-mindedness, frequently sets up the 'victim' for the disease. Sad.

Re:Try this on humans (1)

Lisias (447563) | about 2 years ago | (#40730097)

Universities should run a number of psychology experiments to see how this can be done to human intelligence to see how susceptible it is compared to AI. Or you could just study people who tune in to .

They're still busy trying to understand Milgram's results.

Re:Try this on humans (0)

Anonymous Coward | about 2 years ago | (#40730201)

This could explain religion.

Propaganda (5, Insightful)

mbone (558574) | about 2 years ago | (#40729441)

On this side of the human / AI line, we call this propaganda. It has historically proved very effective, especially if you can control all of the "training data."

Re:Propaganda (1)

betterunixthanunix (980855) | about 2 years ago | (#40729467)

Historically? Just what do you think D.A.R.E. is?

Re:Propaganda (1)

MightyYar (622222) | about 2 years ago | (#40729599)

D.A.R.E. would be a pretty poor example - it has never been found to be effective.

Re:Propaganda (2)

betterunixthanunix (980855) | about 2 years ago | (#40729739)

That depends on your definition of "success" -- D.A.R.E. has been overwhelmingly successful at convincing people that some drugs should be illegal. See, for example, the large number of people who are convinced that cocaine, heroin, and methamphetamine are evil and must be banned (and never mind that two of the three drugs are legal by prescription).

Re:Propaganda (1)

peragrin (659227) | about 2 years ago | (#40729873)

I don't believe drugs are bad, but the use of some drugs should be tightly regulated.

The average person is really bad at self-medication, either going way too far or doing too little. Drugs with side effects trigger attachments. Caffeine is just as dangerous as alcohol in that respect. Some people really can't handle their caffeine very well either. Go to a coffee stand (or watch at work) and you'll see some people with their hands shaking so hard they can't hold the coffee in the cup.

That is a sign of a drug addiction beyond the person's ability to control.

Prescribed drugs can be abused, but at least someone is trying to limit the effects.

Re:Propaganda (2)

LateArthurDent (1403947) | about 2 years ago | (#40730155)

the average person is really bad at self medication.

And why is it our job to protect them?

Boxing is extremely dangerous. If two people make the choice to get in the ring, we may think that's unwise, but it's their decision. If you make the decision to do something that will harm you, you may be an idiot, but I don't have the moral right to stop you through means other than making an argument to try to change your mind.

When you get into things that have the potential of harming others, then that's another story. You're free to drink alcohol and use whatever other drugs you want to. You're not free to drive on public roads under their influence.

Re:Propaganda (1)

adri (173121) | about 2 years ago | (#40730909)

Because you don't live in a world where individuals' actions have no effect outside of the individual.

If two people decide to get in the ring and box, and suffer brain damage in the long term, so be it. What effect could it have?

If a hundred thousand pairs of people decide to get in the ring and box, what kind of long term effects will that have on the people around them? Would there be an increase in accidents? A decrease in critical thinking? What kind of effects would it have on their planning and execution skills? What about those families whose fathers/mothers/daughters/sons are suffering from boxing effects and what stresses/effects does it have on them?

Done at a large enough scale, _everything_ has an influence on society as a whole.

Re:Propaganda (1)

Rhalin (791665) | about 2 years ago | (#40731447)

the average person is really bad at self medication.

And why is it our job to protect them?

Boxing is extremely dangerous. If two people make the choice to get in the ring, we may think that's unwise, but it's their decision. If you make the decision to do something that will harm you, you may be an idiot, but I don't have the moral right to stop you through means other than making an argument to try to change your mind.

When you get into things that have the potential of harming others, then that's another story. You're free to drink alcohol and use whatever other drugs you want to. You're not free to drive on public roads under their influence.

I'm unfamiliar with a theory of social morality that supports the line of reasoning you start from. Could you point me towards more information on this that is supported by contemporary social theory? Preferably grounded in a processual approach?

Thanks!

Re:Propaganda (5, Insightful)

betterunixthanunix (980855) | about 2 years ago | (#40730211)

Drugs with side effects trigger attachments. Caffeine is just as dangerous as Alcohol in that respect

Except that "attachments" are not dangerous. Coma and death are dangerous, brain damage is dangerous, liver damage is dangerous, and the typical doses of alcohol are frighteningly close to such adverse effects -- whereas the typical dose of caffeine is nowhere near that point.

Go to a coffee stand (or at work) and watch some people with their hands shaking so hard they can't hold the coffee in the cup.

Which may be scary, but is not a sign of any permanent damage to that person's mind or body. Caffeine withdrawal is tough, but it is not life threatening, and a person who is committed to it can get through the symptoms at home (maybe with the help of a close friend) in less than a week. Alcohol withdrawal, on the other hand, can be so dangerous that it requires medical supervision.

That is a sign of a drug addiction beyond the person's ability to control.

Yet the drug abuse and dependence treatment programs that emerged from clinical psychology (read: science) are based on teaching people how to take control and avoid harmful behaviors.

Prescribed drugs can be abused but at least someone is trying to limit the effects

Really? A typical Adderall prescription (d,l-amphetamine salts) is for 10-20 mg, two to three times per day, for a month. That is well above a lethal quantity, and a person could easily give themselves brain damage by taking a large fraction of their month's supply. People who abuse Adderall and related medicines (other amphetamines, Ritalin, etc.) can have psychotic episodes; see, for example, this recent NY Times article (sorry for the paywall) about prescription stimulant abuse among high school and college students:

https://www.nytimes.com/2012/06/10/education/seeking-academic-edge-teenagers-abuse-stimulants.html?_r=1&hp [nytimes.com]

It's not just psychiatric drugs; prescription opiates are also readily abused, and people get high by using the prescribed amount of those drugs. Some pharmaceutical opiates are more potent than heroin, and abuse is an ever-present concern with those drugs; Rush Limbaugh abused prescription opiates:

http://www.cbsnews.com/2100-201_162-1561324.html [cbsnews.com]

Here is the problem with the war on drugs: recreational drugs need not be any more dangerous than prescription drugs. Pharmaceutical methamphetamine is safer than "truck stop" methamphetamine, not because it is a different drug, but because the production is much better controlled. Many of the dangers of recreational methamphetamine stem from the adulterants that are left over from poor production techniques.

So in a sense, I agree with you: we need better regulation. That means legalizing recreational drugs, and requiring that legal sources adhere to standardized and regulated production and distribution methods (I do not think anyone can argue that a 14 year old should be buying recreational drugs). When someone buys cocaine, they should not have to worry about what is mixed into the drug; when someone buys MDMA (ecstasy), they should not worry about having actually received methamphetamine mixed with caffeine (a well known trick on the black market). There will still be problems with abuse, but when someone visits their doctor, they should be able to tell their doctor what drugs they have been taking, and in what doses -- which is basically impossible if you are buying some mystery powder in an alley somewhere.

Re:Propaganda (1)

inasity_rules (1110095) | about 2 years ago | (#40730275)

I deliberately quit coffee every 4 months or so. Then when I start again it is so much more effective. Quitting isn't that hard, given that I drink more than 7 cups a day normally.

Re:Propaganda (1)

betterunixthanunix (980855) | about 2 years ago | (#40730299)

I found that quitting coffee came with headaches and tiredness for a day or two -- not the worst thing in the world (people go through worse with tobacco) but not something to shrug at.

Re:Propaganda (1)

Smauler (915644) | about 2 years ago | (#40733629)

I've recently (the last 6 months or so) been on and off of tobacco, ie. smoke about 20 a day for a week, stop for 3 or 4 days, smoke for a week, stop again, etc. I've been a smoker for almost 20 years. This isn't because I want to quit - I don't, I enjoy smoking. I think the physical dependencies are completely exaggerated...

I have a much bigger physical craving for alcohol after not drinking for a while, to the extent I deliberately don't drink a lot of the time. Cocaine's not too bad, but it's insidious - I used to be a weekend user, and found that sometimes I wasn't looking forward to the weekend, I was looking forward to the cocaine. I slowed down a bit after noticing that. Mephedrone I went a little silly on some weekends, when it was legal, because it was gorgeous, but it made the next few days feel dull as hell. When they made it illegal, I quit, because when something is illegal, it's generally cut to crap, and dosage was quite important for me with it - too much, and you end up talking complete crap constantly, if you're not careful. Cocaine you can regulate better, ie. you know how high you are more easily (though it's often cut with other uppers, just to make it more difficult).

I've stopped illegals now, not for moral or self-preservation issues, but for practical ones - I go out less, and if I get caught again I'll be in deep shit.

Re:Propaganda (0)

Anonymous Coward | about 2 years ago | (#40731321)

"That is a sign of a drug addiction beyond the persons ability to control."

Which results in zero* harm anyway.

*Neglible

Re:Propaganda (0)

Anonymous Coward | about 2 years ago | (#40731943)

the average person is really bad at self medication

Patients who are on doses of narcotics that are high enough that addiction is a concern are frequently given use of a mechanism [wikipedia.org] to (within limits) control the dose, because doing so reduces the likelihood that the patient will become addicted. I.E. the patient, despite being out of his head from pain and narcotics, is better able to get the dosage right than is the attending physician.

Re:Propaganda (1)

mbone (558574) | about 2 years ago | (#40729919)

Historically? Just what do you think D.A.R.E. is?

amateurs

Re:Propaganda (4, Interesting)

betterunixthanunix (980855) | about 2 years ago | (#40730257)

I disagree; D.A.R.E. has been overwhelmingly successful at convincing people of the legitimacy of the war on drugs and the paramilitary police that were created in the name of that war. Hardly anyone questions the fact that we have soldiers (but with "POLICE" or "DEA" written on their uniforms) attacking unarmed civilians just to serve an arrest warrant. Hardly anyone questions the fact that the executive branch of government, through the Attorney General's office, now has the power to make and enforce drug laws, without democratic action. Hardly anyone questions the fact that the DEA, supposedly a law enforcement agency, has so much signals intelligence capability that the dictators of some nations have tried to demand the DEA's help in spying on political opponents.

How many propaganda programs have been so successful at convincing people that this sort of unwinding of a democratic system is the right thing to do?

Will AI's become too smart for us? (1)

k(wi)r(kipedia) (2648849) | about 2 years ago | (#40729499)

The security implications aside, one problem I see is a possible arms race between the poisoners and the AI designers. The only way for the designers to win is to build tests that are less tolerant of the poisoned data. This is good if AI systems are built to interact only with other AI systems. But what if humans are the end users?

At some point, the increase in data precision will come up against the natural imprecision of human users. Fewer humans will be smart enough to pass the Turing test. A practical example: I've noticed how Google's recaptcha puzzles have become more difficult. I now need to magnify the page view in order to make out some of the letters.

Re:Will AI's become too smart for us? (2)

mbone (558574) | about 2 years ago | (#40729731)

I have this mental image that in the future not everyone will be able to pass as human (i.e., routinely solve captchas), and the ones who can may be able to rent out that service to those who can't.

Re:Will AI's become too smart for us? (2)

Gaygirlie (1657131) | about 2 years ago | (#40729973)

I have this mental image that in the future not everyone will be able to pass as human (i.e., routinely solve captchas), and the ones who can may be able to rent out that service to those who can't.

The good thing is that us non-humans can then travel all around the world really cheap. I, personally, belong in healthcare products as a natural Fleshlight-substitute!

Re:Will AI's become too smart for us? (1)

betterunixthanunix (980855) | about 2 years ago | (#40730449)

I have a mental image of a future without captcha, where we rely on things like HashCash instead -- slowing down spammers, rather than defeating them entirely.
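For reference, the HashCash idea fits in a few lines. Here is a toy version (a sketch only; real hashcash uses a dated SHA-1 header format, and the mint/verify names here are made up):

    # Toy proof-of-work stamp: the sender searches for a nonce, the receiver
    # verifies with a single hash. 16 leading zero bits ~ 65k hashes of work.
    import hashlib
    from itertools import count

    def mint(resource, bits=16):
        target = "0" * (bits // 4)          # whole hex digits, for simplicity
        for nonce in count():
            stamp = "%s:%d" % (resource, nonce)
            if hashlib.sha256(stamp.encode()).hexdigest().startswith(target):
                return stamp

    def verify(stamp, bits=16):
        target = "0" * (bits // 4)
        return hashlib.sha256(stamp.encode()).hexdigest().startswith(target)

    stamp = mint("alice@example.com")
    print(stamp, verify(stamp))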

works on people too (1, Troll)

circletimessquare (444983) | about 2 years ago | (#40729545)

it's called propaganda

see: Fox News

Re:works on people too (1)

Toonol (1057698) | about 2 years ago | (#40730689)

Your comment is amusing, because by singling out Fox News, you're demonstrating that you're a victim of very successful propaganda.

Re:works on people too (1)

circletimessquare (444983) | about 2 years ago | (#40730863)

Because I point to a source of propaganda, that can only mean I am a victim of propaganda?

Re:works on people too (1)

colinrichardday (768814) | about 2 years ago | (#40731645)

He's saying that if you believe that Fox is the only source of propaganda, then you are a victim of the other sources of propaganda. Your citing Fox may not be singling them out, but just an indication that you believe that they are the worst in this regard.

Re:works on people too (1)

gl4ss (559668) | about 2 years ago | (#40735029)

No, it's more like a guy down the street yelling that the end of the world is nigh and you believing him despite Fox News (the main source) telling you otherwise.

GIGO (0)

Anonymous Coward | about 2 years ago | (#40729629)

Well, duh... leave the learning on and the GIGO rule is active! Leave the learning off and people will figure out how to be ignored by it.

Nothing new here at all.

Known problem, known solutions (4, Interesting)

Kanel (1105463) | about 2 years ago | (#40729663)

There's already a whole subfield of machine learning which concerns itself with these problems. It's called "adversarial machine learning."
The approaches are very different from usual software security. Instead of busying oneself with patching holes in software or setting up firewalls, adversarial machine learning re-designs the algorithms completely, using game theory and other techniques. The premise is: "How can we make an algorithm that works in an environment full of enemies that try to mislead it?" It's a refreshing change from the usual software-security paradigm, which is all about fencing the code into some supposedly 'safe' environment.
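One of the simplest hardening ideas can be sketched quickly (an illustration of diluting poison via bagging, under an assumed 10% label-flip attack; the field's game-theoretic designs go far beyond this):

    # Bagging many classifiers trained on random subsamples dilutes the
    # influence of any small batch of poisoned points; compare to a single SVM.
    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_blobs(n_samples=400, centers=2, cluster_std=2.0, random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
    flip = np.random.RandomState(1).choice(len(y_tr), len(y_tr) // 10, replace=False)
    y_tr[flip] = 1 - y_tr[flip]            # simulate poisoned labels

    single = SVC(kernel="linear").fit(X_tr, y_tr)
    bagged = BaggingClassifier(SVC(kernel="linear"), n_estimators=25,
                               max_samples=0.3, random_state=1).fit(X_tr, y_tr)
    print("single SVM :", single.score(X_te, y_te))
    print("bagged SVMs:", bagged.score(X_te, y_te))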

Re:Known problem, known solutions (0)

Anonymous Coward | about 2 years ago | (#40730145)

Also, this very same effect they mention happens in human intelligence...

The first spams we ever saw we likely thought were real to some extent... then we got wise to spam... the spammers change their MO and some people are caught by surprise by the new approach... until they wise up...

AI does not somehow exclude all the bad parts of real intelligence (and how it can be affected).

Re:Known problem, known solutions (1)

betterunixthanunix (980855) | about 2 years ago | (#40730507)

"How can we make an algorithm that works in an environment full of enemies that try to mislead it?"

This sounds like it is closely related to secure multiparty computation, where the goal is to correctly compute some function on multiple parties' inputs without revealing those inputs. This has been researched since the 1980s, and there have been numerous results on feasibility and impossibility, as well as several practical systems (including at least one that was used in the real world). It is likely that both approaches can be used to solve the same set of problems, but that the machine learning approach is more natural for some problems and MPC is more natural for others.
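The core MPC trick is easy to see in a toy example (additive secret sharing over a prime field; a sketch of the idea only, nowhere near a real protocol):

    # Each party splits its private input into random shares that sum to it
    # mod P; summing the shares column-wise reveals only the total.
    import random

    P = 2**61 - 1                           # a prime modulus

    def share(secret, n=3):
        parts = [random.randrange(P) for _ in range(n - 1)]
        parts.append((secret - sum(parts)) % P)
        return parts

    inputs = [42, 17, 99]                   # one private value per party
    shares = [share(x) for x in inputs]
    partials = [sum(col) % P for col in zip(*shares)]  # party j sums its column
    print("joint sum:", sum(partials) % P)  # 158, with no input ever revealed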

Re:Known problem, known solutions (1)

node636 (2526762) | about 2 years ago | (#40730913)

Agreed. There already exist plenty of simple methods for identifying and removing 'bad' data. Currently they're usually applied to a static data set before sending it to the machine. It should be simple to implement algorithms that perform this computation at run time.
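A sketch of what that might look like at run time (z-score screening against recently seen data is an assumed, illustrative choice, not a proven defense):

    # Refuse to learn from incoming points that look like outliers relative
    # to the data observed so far.
    import numpy as np

    class FilteredLearner:
        def __init__(self, z_max=3.0):
            self.z_max, self.seen = z_max, []

        def maybe_learn(self, x):
            if len(self.seen) >= 30:        # need a baseline first
                data = np.array(self.seen)
                z = np.abs((x - data.mean(axis=0)) / (data.std(axis=0) + 1e-9))
                if z.max() > self.z_max:
                    return False            # anomalous: don't train on it
            self.seen.append(x)
            # ...pass x to the model's incremental update here...
            return True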

Re:Known problem, known solutions (0)

Anonymous Coward | about 2 years ago | (#40734367)

How can we make an algorithm that works in an environment full of enemies that try to mislead it?

That sounds like a question that could drive internal security, regulatory compliance, and consistency inside a corporation's firewalls.

Throwing sand into the gears of MI (0)

Anonymous Coward | about 2 years ago | (#40729689)

I wonder how long it will take for Machine Intelligence Sanding to be incorporated into a sci-fi flick:

"What are you doing? I didn't even know you liked to fish."
"Whenever I order weapons I also order something from an unrelated site. Besides, a box of streamers might come in handy"
Thanks for your order! Bass Pro Shops

Not very practical (3, Insightful)

ceoyoyo (59147) | about 2 years ago | (#40729749)

So if you know the algorithm and training data, and you can feed the system new data with manipulated labels then you can confuse it. It's a little early to panic about your spam filter. Hopefully everyone realizes that if you let the spammers tell your computer what is and is not spam, they can cause it to let their spam through.

Re:Not very practical (1)

Kjella (173770) | about 2 years ago | (#40729849)

So if you know the algorithm and training data, and you can feed the system new data with manipulated labels then you can confuse it. It's a little early to panic about your spam filter. Hopefully everyone realizes that if you let the spammers tell your computer what is and is not spam, they can cause it to let their spam through.

Well I assume that's why the spam/not spam buttons are there in my webmail reader, that somehow this goes into a form of feedback system. I'd not be surprised if spammers send spam to themselves, then flag it as not spam in order to confuse the system. Or signing up for stuff legitimately, then flagging it as spam anyway. Anything to increase the noise floor so they have to back off on filtering or lose genuinely wanted mail.
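That feedback attack is easy to reproduce against a toy naive Bayes filter (a sketch with made-up messages, assuming scikit-learn; real webmail systems presumably weight user feedback far more carefully):

    # Repeatedly reporting your own spam as "not spam" flips the filter.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    ham = ["meeting at noon", "quarterly report attached", "lunch tomorrow?"]
    spam = ["cheap pills online", "win money now", "cheap money pills"]
    probe = ["cheap pills win money"]

    def train(extra_texts, extra_labels):
        texts = ham + spam + extra_texts
        labels = [0] * len(ham) + [1] * len(spam) + extra_labels
        vec = CountVectorizer().fit(texts)
        return vec, MultinomialNB().fit(vec.transform(texts), labels)

    for n_fake_clicks in (0, 20):           # forged "not spam" reports
        vec, nb = train(probe * n_fake_clicks, [0] * n_fake_clicks)
        verdict = "spam" if nb.predict(vec.transform(probe))[0] else "ham"
        print(n_fake_clicks, "fake clicks ->", verdict)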

Re:Not very practical (1)

Sqr(twg) (2126054) | about 2 years ago | (#40729957)

I doubt they would spend energy on this. Setting up fake mail accounts costs time/money, and even though the spammers as a collective might benefit from attacking the spam filter, it is more profitable for the individual spammer to use those accounts for sending spam.

Also, a support vector network could easily learn that the "not spam" flag from certain users actually means the opposite.

Re:Not very practical (0)

Anonymous Coward | about 2 years ago | (#40730023)

Most spam comes from fake accounts. It costs almost nothing for them to set some up and use them to game feedback channels. Good filters learn how much trust to put into various feedback channels.

Re:Not very practical (1)

ceoyoyo (59147) | about 2 years ago | (#40730679)

I doubt it. Google's spam filter seems to work just as well as my local one, and spammers are definitely not managing my spam/not spam button.

If that were the case though, it's an excellent reason not to use spam filters that spammers control.

Re:Not very practical (0)

Anonymous Coward | about 2 years ago | (#40729905)

Hopefully everyone realizes that if you let the spammers tell your computer what is and is not spam, they can cause it to let their spam through.

I can guarantee you they won't. You could say the same thing about most malware. Surely nobody would be dumb enough to run CutePuppy.jpg.exe they downloaded from an unknown site in Romania, but people actually are that dumb. When it comes to computing, 99% of the public employs zero thought. There simply isn't any, beyond "click what pops up on my screen".

Not new? (1)

TheRealMindChild (743925) | about 2 years ago | (#40729809)

I know that email spammers have been exploiting this to game Bayesian filters for the past decade.

Re:Not new? (-1)

Anonymous Coward | about 2 years ago | (#40729847)

Yes. Strategy has been well known since the movie, 'Independence Day'.

SVM != AI (3, Informative)

SpinyNorman (33776) | about 2 years ago | (#40729927)

Support Vector Machines are just a way of performing unsupervised data partitioning/clustering. i.e. you feed a bunch of data vectors into the algorithm and it determines how to split the data into a number of clusters where the members of each cluster are similar to each other and less similar to members of other clusters.

e.g. you feed it (number of wheels, weight) pairs of a lot of vehicles and it might automatically split the data into 3 clusters - light 2-wheeled vehicles, heavy 4-wheeled ones, and very heavy 4-wheeled ones. If you then labelled these clusters as "bikes", "cars" and "trucks" you could in the future use the clustering rules to determine the category a new data point falls into.

This isn't Artificial Intelligence - it's just a data mining/classification technique.

Re:SVM != AI (1)

lorinc (2470890) | about 2 years ago | (#40730117)

SVMs are primarily a classification technique that has been extended to clustering, regression, structured output learning (such as ranking), and so on. So yes, the max margin principle has been used in basically all areas of machine learning.

How do you argue machine learning is not AI? You know the vast majority of researchers and publishers in the ML field consider it to be AI.

Re:SVM != AI (0)

Anonymous Coward | about 2 years ago | (#40730783)

Well, if you listen to Marvin Minsky, then ML is not AI...

Re:SVM != AI (0)

Anonymous Coward | about 2 years ago | (#40732077)

Well, if you listen to Marvin Minsky, then ML is not AI...

http://en.wikipedia.org/wiki/AI_effect

Re:SVM != AI (0)

Anonymous Coward | about 2 years ago | (#40731195)

An SVM can be considered a trainable building block for a simplistic AI, but I would not call a simple SVM an AI on its own. It's far too dumb, even if very useful for building trainable classifiers that can adapt to the problem at hand.

To build an AI you would probably need some trainable machine learning tools to parse the raw sensory input of your system and to cast it into a higher-level representation (e.g. semantic image segmentation of a matrix of pixels, or a semantic parse tree for text snippets), then use that higher-level representation to make decisions on what to do next.

IMO AI requires some kind of embodiment of the system in an evolving yet actionable environment: a sensorimotor loop plus some kind of reinforcement learning to interact with the environment and adapt to new situations based on delayed action feedback. The environment could be physical (e.g. if the AI controls one or several robots) or digital (e.g. if the environment is the internet or a simulated universe, as in games).

Many ML researchers don't bother with the embodiment / reinforcement learning part, while they are probably aware that their contributions could indeed be useful one day for building some kind of AI.

Re:SVM != AI (1)

SpinyNorman (33776) | about 2 years ago | (#40731575)

It depends on what level of (artificial) intelligence you're talking about. If it's amoeba-level intelligence, then maybe ML can achieve similar results, but if it's rat or human level intelligence then obviously not.

I think most people take AI to mean something that could minimally pass a Turing test, not a silicon slug.

Re:SVM != AI (5, Informative)

tommeke100 (755660) | about 2 years ago | (#40730463)

Wrong. SVM is a supervised learning technique. It looks like you're talking about K-means clustering which is unsupervised.
The difference between supervised and unsupervised is that in the first you use both features and outcome in your training of the system, where the unsupervised will just use the features. So supervised uses both X and Y to learn (if X are the features and Y is the class/cluster), whereas unsupervised will just use X.
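The distinction is visible right in the fit calls (a sketch using scikit-learn's SVC and KMeans):

    # Supervised: the learner sees features X and labels y.
    # Unsupervised: the learner sees only X and finds structure itself.
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.svm import SVC

    X, y = make_blobs(n_samples=200, centers=2, random_state=0)
    svm = SVC().fit(X, y)                        # needs y
    km = KMeans(n_clusters=2, n_init=10).fit(X)  # no y anywhere
    print(svm.predict(X[:3]), km.labels_[:3])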

Re:SVM != AI (4, Informative)

bmacs27 (1314285) | about 2 years ago | (#40731619)

Right. In many ways SVM is almost the opposite of what OP describes. Rather than grouping things based on their natural clustering, it takes knowledge of the data you wish to separate, and makes it linearly separable. That is, it distorts the data (with the kernel trick) in order to find a way to make the data cluster naturally in the way that you want. In fact, in traditional SVM they explicitly eschew assigning a "similarity metric" over the space, and prefer instead to give only binary classification by comparison to a dividing hyperplane. That's because the space you're working in is such a distortion of your data, it's hopeless to talk about meaningful measures of similarity at all. That could be why it is easy to trick. All you have to do is slightly bias the input along some dimension (X^whatever) which will cause it to deterministically collapse to the desired binary output. Re: Is it AI? Please define I.
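The kernel-trick distortion is easy to see on data no hyperplane can separate (a sketch; the exact scores will vary a little):

    # Concentric circles: linearly inseparable raw, separable under RBF.
    from sklearn.datasets import make_circles
    from sklearn.svm import SVC

    X, y = make_circles(n_samples=300, factor=0.3, noise=0.05, random_state=0)
    print("linear kernel:", SVC(kernel="linear").fit(X, y).score(X, y))  # ~0.5
    print("RBF kernel:   ", SVC(kernel="rbf").fit(X, y).score(X, y))     # ~1.0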

Inflammatory self-aggrandizing self-advertising (2)

fygment (444210) | about 2 years ago | (#40729971)

From the article: if you have access to the training data and know the learning algorithm, you can game the machine learning (SVM, not AI) system. How is that anything but self-evident, non-news?!

And what about 'poisoned' neurons/nodes? (1)

garyebickford (222422) | about 2 years ago | (#40730291)

A couple of commenters have noted that there is a branch of research related to defending against this - according to one it's called "adversarial machine learning". I've been casually wondering for some time about a related question, which is very relevant to the questions of using the various 'bottom up' AI systems like SVM and neural nets as models of human intelligence and of various complex adaptive systems ('living systems') including economies and polities (and evolutionary biology for that matter).

If we look at these systems (both the real world ones and the mathematical models) as decision convergence models, what is the effect of nodes that make errors once, occasionally, frequently, or continuously? And how does a successful neural network that is dealing with a continuously changing environment accommodate an element/node that provides, for example, randomly varying responses? What about a node that 'purposely' provides poisoned responses - like a secret agent putting false data into the news? In a machine, those things may be manageable by simply starting over, but in a continuous system like a real brain, that is not an option.

I learned a while back that in the human brain, a neuron whose output signals become ignored (the output from its axons becomes weighted so low that it has no influence on the 10,000 other neurons it is talking to) dies. The brain seems to act very much like a republic of cantankerous, disagreeable citizens arguing at many different levels (and with shifting alliances). But if one continuously shouts "We're all gonna dieeee!!!", pretty soon nobody listens any more.
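A crude, machine-side version of that "nobody listens any more" effect (an illustration using an assumed logistic regression, not a brain model): give a model one informative input and one that only babbles noise, and regularized training drives the noisy input's weight toward zero.

    # The noise feature's learned weight ends up near zero; the signal's doesn't.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.RandomState(0)
    signal = rng.normal(size=1000)
    noise = rng.normal(size=1000)           # contributes nothing to the label
    X = np.column_stack([signal, noise])
    y = (signal > 0).astype(int)

    w = LogisticRegression(C=1.0).fit(X, y).coef_[0]
    print("signal weight: %.2f   noise weight: %.2f" % (w[0], w[1]))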

Minimum poisoned data (0)

PPH (736903) | about 2 years ago | (#40730571)

The Texas board of education has a pretty good handle on the minimum amount of poisoned data it takes to affect learning.

Some restrictions may apply (0)

Anonymous Coward | about 2 years ago | (#40730781)

From the paper:

"...we assume that the attacker knows the learning
algorithm and can draw data from the underlying
data distribution. Further, we assume that our attacker
knows the training data used by the learner;"

They characterize these assumptions as "unrealistic", which I think is about right in a real world setting.

Is the future like the past? (1)

scruffy (29773) | about 2 years ago | (#40733543)

I'm not sure why this would be surprising. ML algorithms work best if the future behaves like the past, i.e. if it has the same probability distribution as the training data. Some algorithms can handle slow changes if they can continually get new training data, but large changes are a problem.
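A quick sketch of that failure mode (assuming scikit-learn; the drifted inputs follow the same labelling rule, only their distribution moves):

    # An RBF SVM trained on one region of input space falls apart when the
    # test inputs drift away from it, even though the true rule is unchanged.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.RandomState(0)
    rule = lambda X: (X[:, 0] + X[:, 1] > 0).astype(int)
    X = rng.normal(size=(400, 2))
    clf = SVC().fit(X, rule(X))

    X_same = rng.normal(size=(200, 2))
    X_drift = X_same + np.array([4.0, -4.0])    # same rule, shifted inputs
    print("same distribution:", clf.score(X_same, rule(X_same)))
    print("after drift:      ", clf.score(X_drift, rule(X_drift)))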

in other words (1)

minstrelmike (1602771) | about 2 years ago | (#40739057)

In other words, artificial intelligence is just as limited and varied as regular ole human intelligence.
Jeez. Who'd a thunk it?