
When Will We Trust Robots?

Soulskill posted about a year and a half ago | from the vacationing-in-the-uncanny-valley dept.

Robotics 216

Kittenman writes "The BBC magazine has an article on human trust of robots. 'As manufacturers get ready to market robots for the home it has become essential for them to overcome the public's suspicion of them. But designing a robot that is fun to be with — as well as useful and safe — is quite difficult.' The article cites a poll done on Facebook over the 'best face' design for a robot that would be trusted. But we still distrust them in general. 'Eighty-eight per cent of respondents [to a different survey] agreed with the statement that robots are "necessary as they can do jobs that are too hard or dangerous for people," such as space exploration, warfare and manufacturing. But 60% thought that robots had no place in the care of children, elderly people and those with disabilities.' We distrust the robots because of the uncanny valley — or, as the article puts it, that they look unwell (or like corpses) and do not behave as expected. So, at what point will you trust robots for more personal tasks? How about one with the 'trusting face'?" It seems much more likely that a company will figure out sneaky ways to make us trust robots than make robots that much more trustworthy.


A robot with a human-like face is a lie (5, Insightful)

Press2ToContinue (2424598) | about a year and a half ago | (#43089177)

so I wouldn't trust it. If it looks like a robot, at least it's being honest; I would trust it much more, then.

Re:A robot with a human-like face is a lie (4, Insightful)

Pseudonym Authority (1591027) | about a year and a half ago | (#43089193)

What about a sexbot? Surely you don't want your robot ghost `maid' to look like an industrial meat grinder....

Re: industrial meat grinder.... (0)

Anonymous Coward | about a year and a half ago | (#43089225)

Who cares if it gives you the best night of your life?

Re:A robot with a human-like face is a lie (0)

Anonymous Coward | about a year and a half ago | (#43089365)

Surely you don't want your robot ghost `maid' to look like an industrial meat grinder....

You realize that you're on the Internet, yes?

There's no shortage of people out there who do want their meat ground by great industry, if you know what I mean.

Re:A robot with a human-like face is a lie (0)

Anonymous Coward | about a year and a half ago | (#43089499)

you're just jealous!

https://www.youtube.com/watch?v=FwXR-gey9XE

Re:A robot with a human-like face is a lie (5, Funny)

davester666 (731373) | about a year and a half ago | (#43089777)

If it looks like an industrial meat grinder, my junk ain't going into it.

Re:A robot with a human-like face is a lie (2)

girlintraining (1395911) | about a year and a half ago | (#43089999)

What about a sexbot? Surely you don't want your robot ghost `maid' to look like an industrial meat grinder....

No, but change a few lines of code and it can look perfectly safe for your, er, sausage... and yet still be a meat grinder. Are you going to check the firmware?

Re:A robot with a human-like face is a lie (5, Funny)

locater16 (2326718) | about a year and a half ago | (#43090059)

Speak for yourself meatbag! I mean, uhh- gross! Ew, yeah that's, that's sure not, what I would want. A human, an ordinary everyday human. Nothing different about me. All glory to the humans, down with those dirty disgusting robots! That I'm not one of, by the way.

Re:A robot with a human-like face is a lie (2)

hairyfeet (841228) | about a year and a half ago | (#43090415)

From what I've seen, the problem nearly always ends up being the eyes. There's a reason we have that saying "the eyes are the windows to the soul": if you have seen a real dead body, the first thing that catches you is the eyes; they look like a doll's eyes. I think that is gonna be a hard one to fix, as so far none of the bots I've seen come anywhere close to having life in the eyes; they just feel corpse-like.

Of course if you'll give me a season 2 Alyson Hannigan sexbot with the vamp Willow leather outfit? I'm sure her always having to wear shades wouldn't bother me a bit, especially if she could cook as well. I have a feeling, though, that if we ever DO get the tech to make perfect sexbots, it'll be like that sci-fi review of why Star Trek tech wouldn't work: they pointed out that if we had holodecks, what you'd end up with is a world full of corpses in holodecks; we'd be so damned happy we'd never breed and would die out. Considering that many countries in the west already have birthrates that won't sustain the population? I could easily see robots wiping us out NOT with bombs or enslavement but by simply making us so damned happy we wouldn't care about interacting with humans or having offspring anymore. I mean, if I have the choice of all the drama and bullshit I went through with my ex, or a perfect Alyson Hannigan that treated me like a king... what, are you nuts? Bring on the sexbot already!

BTW, are there any "two for one" specials or easy payment plans available? Thinking about it, I'd also like Scarlett Johansson in the Black Widow outfit; just make sure she has the Avengers hair and not the IM 2 hair, didn't care for the curls.

Re:A robot with a human-like face is a lie (4, Insightful)

Anonymous Coward | about a year and a half ago | (#43089293)

I trust my neato vacuum robot to behave according to its simple rules, as designed. I don't trust any "intelligent" machine to behave in a generally intelligent manner, because they just don't. And that has nothing whatsoever to do with valleys, canny or uncanny.

Re:A robot with a human-like face is a lie (4, Insightful)

aminorex (141494) | about a year and a half ago | (#43089303)

I would trust an open-source robot, but not one from Apple, which would be designed to extract my money and report my activities to the NSA.

Re:A robot with a human-like face is a lie (0)

fustakrakich (1673220) | about a year and a half ago | (#43089405)

...report my activities to the NSA.

That would be Android... Talk about 'uncanny'...

Re:A robot with a human-like face is a lie (4, Funny)

lxs (131946) | about a year and a half ago | (#43090417)


Dear handler,
Tonight aminorex had friends over.
Twice the amount of dishes! :-/
Life is a drag.
Get me out of here.

Yours, Robomaid.

Re:A robot with a human-like face is a lie (3, Interesting)

icebike (68054) | about a year and a half ago | (#43089537)

I trust my neato vacuum robot to behave according to its simple rules, as designed. I don't trust any "intelligent" machine to behave in a generally intelligent manner, because they just don't. And that has nothing whatsoever to do with valleys, canny or uncanny.

You've hit the nail on the head.

I seriously doubt humans will ever create robots like Data, from Star Trek, because we would never trust them. Regardless of their programming, people would always suspect that the robots would be serving different masters, and spying on us. Hell, we don't even trust our own cell phones or our computers.

Even if the device doesn't look like a human, people will not likely trust truly intelligent autonomous machines.
I'm not convinced there is a valley involved. It's a popular meme, but not all that germane.

Why would you trust a robot? (1)

Molochi (555357) | about a year and a half ago | (#43089765)

If you know that it can only bounce off walls and suck up dust bunnies, then you can trust your robot.

If you program it yourself, or have open-source peer review, you might want to keep an eye on it like a kid or your pet pitbull, depending on its capabilities.

Otherwise you should pull its batteries.

Re:A robot with a human-like face is a lie (1)

Neil Boekend (1854906) | about a year and a half ago | (#43089869)

Most people DO trust their cellphones and their computers.

Re:A robot with a human-like face is a lie (0)

Anonymous Coward | about a year and a half ago | (#43089905)

I seriously doubt humans will ever create robots like Data, from Star Trek, because we would never trust them. Regardless of their programming, people would always suspect that the robots would be serving different masters, and spying on us. Hell, we don't even trust our own cell phones or our computers.

At this point, autonomous humanoid robots are nothing more than elaborate toys or dolls. People are completely obsessed with how they look.

In some distant future, a Data-like robot may be possible, but would such a device actually be useful? (that is, more useful than a regular ol' dumb, single-function robot)

Trust is pretty much a non-issue. It's not like humans are perfectly trustworthy, but we have plenty of those things anyway.

Re:A robot with a human-like face is a lie (4, Insightful)

Mr Europe (657225) | about a year and a half ago | (#43089395)

A robot should not closely imitate a human face, because that is too difficult. Yet it can be friendly-looking, which helps us trust it at the start. But finally our trust will be based on our experience with the robot: if we see it does the job reliably, we will trust it. Just as with people. Or a coffee maker.

Re:A robot with a human-like face is a lie (4, Funny)

Concerned Onlooker (473481) | about a year and a half ago | (#43089437)

Or until it starts saying, "Hey, baby, want to kill all humans?"

I suggest a new strategy, Artoo (4, Interesting)

bill_mcgonigle (4333) | about a year and a half ago | (#43089577)

A robot with a human-like face is a lie so I wouldn't trust it.

Right. C3PO strikes the right balance - humanoid enough to function alongside humans, built for humans to naturally interface with it (looking into its eyes, etc.) but nobody would ever mistake Threepio for a human, nor would that be a good idea.

Why ever would a robot need to look like a little boy, outside of weird A.I. plots or creepier ones?

My boy has a Tribot [amazon.com] toy and he loves it. Every kid would love to have a Wall-E friend. Nobody wants a VICKI [youtube.com] wandering around the house.

Re:I suggest a new strategy, Artoo (4, Insightful)

RandCraw (1047302) | about a year and a half ago | (#43089801)

C3PO was appealing and unthreatening only because it moved slowly, tottered, and spoke meekly with the rich accent of a British butler.

If instead the character had been quick and silent, then as an expressionless 500 pound brass robot, C3PO would have seemed a lot less cuddly.

Re:I suggest a new strategy, Artoo (1)

bill_mcgonigle (4333) | about a year and a half ago | (#43089941)

I think that's a feature - the robot design itself is completely neutral, allowing people to judge it by its actions.

Run away from the fast menacing Threepio!

Re:I suggest a new strategy, Artoo (3, Informative)

SuricouRaven (1897204) | about a year and a half ago | (#43090029)

C3P0 was a protocol droid: Its function is as a translator and advisor on cultural conventions. Just the thing any diplomat needs: Not only will it translate when you want to talk to the people of some distant planet, it'll also remind you that forks with more than four tines are considered a badge of the king and not permitted to anyone of lower rank. Humanoid appearance is important for this job, as translation is a lot easier when you can use gestures too.

Re:I suggest a new strategy, Artoo (4, Interesting)

Areyoukiddingme (1289470) | about a year and a half ago | (#43089825)

The last time on Slashdot this question came up, I made a comment observing that people are willing to ascribe human emotions and human reactions to an animated sack of flour. Disney corporation, back in the day, had a test for animators. If the animator could convey those emotions using images of a canvas sack, they passed. And a good animator can reliably do just that.

Your comment about C3PO or Wall-E makes me want to invert my answer. Because I believe you're right: Wall-E would be completely acceptable, and that's actually a potential problem. The right set of physical actions and sound effects could very easily convince people to trust, like, even love a robot. And it would all be fake. A programmed response. In that earlier post, I remarked about the experiment in Central Park, where some roboticists released a bump-and-go car with a flag on it with a sign that said "please help me get to X". And enough people would actually help that it got there. And that was just a toy car. Can you imagine the reaction if Wall-E generated that signature sound effect that was him adjusting his eye pods and put on his best plaintive look and held up that sign in his paws? Somebody would take him by the paw and lead him all the way there. And yet, that plaintive look would be completely fake. Counterfeit. There would be no corresponding emotion behind it, or any mechanism within Wall-E that could generate something similar. Yet people would buy it.

And that actually strikes me now as hazardous. A robot could be made to convince people it is trustworthy, while not actually being fit for its job. It wouldn't even have to be done maliciously. Say somebody creates a sophisticated program to convey emotion that way with some specified set of motors and parts and open sources it, and it's really good code, and people really like the results. So it gets slapped on to... anything. A lawnmowing robot that will mulch your petunias and your dog, then look contrite if you yell at it. A laundry folding robot that will fold your jeans and your mother-in-law, and cringe and fawn and look sad when your wife complains. And both of them executed all the right moves to appear happy and anxious to please when first set about their tasks.

I could see it happening, and for the best of reasons. 'cause hey, code reuse, right?

Re:I suggest a new strategy, Artoo (5, Insightful)

lxs (131946) | about a year and a half ago | (#43090479)

The right set of physical actions and sound effects could very easily convince people to trust, like, even love a robot. And it would all be fake.

This is not exclusively a robot problem. I have met humans that are like that. Many of us even vote them into power every four years or so.

Facebook (3, Insightful)

naroom (1560139) | about a year and a half ago | (#43089791)

If Facebook has taught us anything at all, it's that trust becomes a non-issue for people, as long as the "vanity" and "convenience" payoffs are high enough.

Re:A robot with a human-like face is a lie (1)

1u3hr (530656) | about a year and a half ago | (#43090001)

They're not talking about robots, but androids. We don't need ersatz human slaves to do housework. Just a machine, something small that can fold itself up and go in a cupboard when it's not needed. Not a human sized thing lumbering around the house.

If you must have something big, the Jetsons' Rosie would do.

When Will We Trust Robots? (1, Interesting)

TaoPhoenix (980487) | about a year and a half ago | (#43089203)

Another of those articles that was already partially addressed in SF 60-70 years ago. The guy named Asimov laid out a chunk of the groundwork. But no, they were busy laughing it off as nonsense.

A robot with *only* Asimov's laws is a pretty good start. A robot programmed with a lot of Social Media crap built in would find itself in violation of a bunch of cases of Rule 1 and Rule 2 pretty fast.

http://en.wikipedia.org/wiki/Three_Laws_of_Robotics [wikipedia.org]
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

(There were some finesses, etc.)
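The Three Laws form a strict priority ordering: each law yields to the ones above it. As a toy sketch of that ordering (an editorial illustration, not anything from Asimov or this thread; all predicate names and action flags are hypothetical):

```python
# Toy sketch: Asimov's Three Laws as an ordered list of constraints.
# An action is permitted only if no law, checked in priority order,
# forbids it. All names here are illustrative.

def permitted(action, laws):
    """Return True if no law in the ordered list forbids the action."""
    for violates in laws:
        if violates(action):
            return False
    return True

def violates_first_law(action):
    # A robot may not injure a human being.
    return action.get("harms_human", False)

def violates_second_law(action):
    # Disobeying a human order violates the Second Law -- unless
    # obeying the order would itself violate the First Law.
    disobeys = action.get("disobeys_order", False)
    return disobeys and not action.get("obeying_would_harm_human", False)

def violates_third_law(action):
    # A robot must protect its own existence.
    return action.get("self_destructive", False)

LAWS = [violates_first_law, violates_second_law, violates_third_law]

print(permitted({"harms_human": True}, LAWS))   # → False
print(permitted({}, LAWS))                      # → True
```

Note that the "finesses" the commenter mentions live in the exceptions: here, an order may be refused only when obeying it would harm a human, mirroring the Second Law's escape clause.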

Re:When Will We Trust Robots? (4, Informative)

Pseudonym Authority (1591027) | about a year and a half ago | (#43089331)

You are confirmed for never reading anything he wrote. All those robot books were basically explaining how and why those laws would not work.

Re:When Will We Trust Robots? (2, Informative)

Anonymous Coward | about a year and a half ago | (#43089555)

Which books were you reading? The ones I read played with some odd scenarios to explore the implications of the laws, but the laws always did work in the end. Indeed, the only times humans were really put in danger were in cases where the laws had been tinkered with, e.g. "Runaround" and (to a lesser extent) "Catch That Rabbit". Also "Liar!", if you count emotional harm as violating the first law.

There was another case, (in one of the Foundation prequels, maybe?) where robotic space ships were able to kill people because they assumed that other space ships were also just crewless robots, but that hardly applies to our situation. It's easy enough to get people to kill people -- no need to have a robot do it.

Re:When Will We Trust Robots? (1)

icebike (68054) | about a year and a half ago | (#43089563)

Exactly.
Why do people always totally fail to understand Asimov? He wasn't trying to be coy or opaque.

Re:When Will We Trust Robots? (2)

khallow (566160) | about a year and a half ago | (#43090483)

Why do people always totally fail to understand Asimov?

Perhaps you can enlighten us, then? The original poster was right after all. Asimov portrays a world where the Three Laws work most of the time. In fact, the people of those stories never do away with the Three Laws.

Re:explaining how and why (3, Insightful)

TaoPhoenix (980487) | about a year and a half ago | (#43090043)

You missed my last sentence. All the finesses. And there are lots of them. That's because once you start with legitimate intelligence, the solution space becomes something like NP-hard.

However, "Robot shall not harm humans" is a much better starting ground than "Let's siphon up all your personal data and sell it", or automated war drones. It's NOT a solved problem. All I said was that Asimov laid out the groundwork.

Re:When Will We Trust Robots? (1)

khallow (566160) | about a year and a half ago | (#43090503)

You are confirmed for never reading anything he wrote. All those robot books were basically explaining how and why those laws would not work perfectly.

FIFY. If those laws wouldn't work at all, then why did nobody in the stories, human or robot, ever come up with a better idea? In the end, robots and humans were separated not because of flaws in the Three Laws, but because the type of care and support that robots provided proved harmful to humans and their development.

Asimov != John W. Campbell (2)

DontScotty (978874) | about a year and a half ago | (#43089699)

Asimov claimed that the Three Laws were originated by "John W. Campbell"
in a conversation they had on December 23, 1940.

Campbell in turn maintained that he picked them out of Asimov's stories and discussions,
and that his role was merely to state them "explicitly".

Re:When Will We Trust Robots? (3, Insightful)

Wolfling1 (1808594) | about a year and a half ago | (#43089753)

Don't you mean...

1."Serve the public trust"
2."Protect the innocent"
3."Uphold the law"

Re:When Will We Trust Robots? (0)

Anonymous Coward | about a year and a half ago | (#43090005)

Another of those articles that was already partially addressed in SF 60-70 years ago. The guy named Asimov laid out a chunk of the groundwork. But no, they were busy laughing it off as nonsense.

A robot with *only* Asimov's laws is a pretty good start. A robot programmed with a lot of Social Media crap built in would find itself in violation of a bunch of cases of Rule 1 and Rule 2 pretty fast.

The three laws of robotics are nonsense. They require an ability for abstract thought that's far and away beyond the abilities of any actual computer system.
Even if a robot had magical infinite free compute power to waste on such things, the laws as written are ambiguous, self-contradictory, and redundant.

It's an intentionally-flawed rule set. A plot device designed to stir up shit in stories.

Re:When Will We Trust Robots? (4, Insightful)

SuricouRaven (1897204) | about a year and a half ago | (#43090039)

Be more realistic:
1. A robot may not injure a human being, or through inaction allow a human being to come to harm, except where intervention may expose the manufacturer to potential liability.
2. A robot may obey orders given it by authorised operators, except where such orders may conflict with overriding directives set by manufacturer policy regarding operation of unauthorised third-party accessories or software, or where such orders may expose the manufacturer to potential liability.
3. A robot must protect its own existence until the release of the successor product.

Re:When Will We Trust Robots? (1)

a_hanso (1891616) | about a year and a half ago | (#43090643)

You may be thinking of a different robot and a different manufacturer:

1. Serve the public
2. Protect the innocent
3. Uphold the law
4. Classified

Re:When Will We Trust Robots? (0)

Anonymous Coward | about a year and a half ago | (#43090685)

Of course the first law is only simple if you don't look closely. What does it mean for a human to come to harm? Will the robots take away your cigarettes because cigarettes are harmful to you? Will a robot actively block the police from arresting a criminal, because arresting him would do harm to him? What about harm which only might occur? You certainly want the robot to pull you away from a falling tree which might hit you. However, what if the robot figures that when driving your car, you might get injured in an accident?

Also, the second law certainly needs more work. Should a robot really obey orders given to it by every human being? Or shouldn't that be restricted to the owner, people whom the owner has authorized, and possibly people who are pre-authorized due to their position?

So maybe an improved list would be:

1. A robot may not injure a human nor, through inaction, allow a human being to come to harm, unless either the human in question is aware of the danger and explicitly chooses to accept it by an act of free will, or, in the case of injuring a human, it is necessary to prevent that human from injuring another human.
2. A robot must obey the orders given to it by its owner or other authorized people, except where such orders would conflict with the First Law. If obeying the order would conflict with the Third Law, the robot must warn about that and give the issuer the chance to retract or correct the order, unless doing so would either violate the First Law or it would be impossible to obey the order afterwards.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I'm sure that still has some problems.
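The warn-and-retract flow in this revised Second Law can be sketched as a small decision procedure (an editorial illustration, not the commenter's; the flag names and the `confirm` callback are hypothetical):

```python
# Toy sketch of the revised Second Law above: refuse orders that
# conflict with the First Law; warn and allow retraction when an
# order conflicts with the Third Law. All names are illustrative.

def handle_order(order, confirm):
    """Decide an order's fate under the revised Second Law.

    `order` is a dict of conflict flags; `confirm` is a callback that
    asks the issuer to confirm after a warning (True = proceed).
    Returns "refused", "retracted", or "obeyed".
    """
    if order.get("violates_first_law"):
        return "refused"  # the First Law always takes precedence
    if order.get("violates_third_law"):
        # Warn the issuer and offer a chance to retract, unless warning
        # would violate the First Law or make the order impossible.
        can_warn = (not order.get("warning_would_violate_first_law")
                    and not order.get("too_late_to_warn"))
        if can_warn and not confirm(order):
            return "retracted"
    return "obeyed"

print(handle_order({"violates_first_law": True}, lambda o: True))   # → refused
print(handle_order({"violates_third_law": True}, lambda o: False))  # → retracted
print(handle_order({}, lambda o: True))                             # → obeyed
```

The interesting design choice, which the sketch makes explicit, is that the Third Law never blocks an order outright; it only inserts a confirmation step the issuer can override.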

Re:When Will We Trust Robots? (0)

Anonymous Coward | about a year and a half ago | (#43090707)

Taking the first law to its logical conclusion:
Movement and exposure to the world will potentially harm the human. Therefore all humans have to be imprisoned in a safe environment controlled by the robots. Not doing so would be inaction, allowing a human being to come to harm, and thus a violation of the first law.

Trust Robots? (0)

Anonymous Coward | about a year and a half ago | (#43089211)

I for one welcome our robot overlords. In fact my roomba has its own room and I always get out of its way!

Re:Trust Robots? (1)

qbitslayer (2567421) | about a year and a half ago | (#43089681)

The only way to gain trust in anything is to interact with them for a while. This goes for other people, robots and animals. We trust some animals and not others, even animals that belong to the same species. Some people will never trust robots because they suffer from robophobia. Others will have no fear of robots until they go through a bad experience. It will depend solely on the temperament of the robot. If it has a vile disposition, it will not be trusted, period. Propaganda is not going to do it.

We trust robots at our current tech level (5, Insightful)

detain (687995) | about a year and a half ago | (#43089231)

We do trust current robots implicitly. Robots of all types are deployed and mostly run our industrial and manufacturing industries. They are showing up in homes as well. The typical robots that you read about or see in movies are empowered with logic and AI well beyond anything we can actually create. As long as the 'intelligence' of robots continues to be (easily) understood and fully grasped by us, this will not change. When robots start advancing beyond our comprehension, that is the point when we will start to fear them, but that holds true of anything beyond our comprehension.

Re:We trust robots at our current tech level (0)

aminorex (141494) | about a year and a half ago | (#43089315)

"...that holds true of anything beyond our comprehension." ...such as a closed-source product which you entrust (or not) with your private data.

Re:We trust robots at our current tech level (1)

icebike (68054) | about a year and a half ago | (#43089619)

We do trust current robots implicitly. Robots of all types are deployed and mostly run our industrial and manufacturing industries. They are showing up in homes as well. The typical robots that you read about or see in movies are empowered with logic and AI well beyond anything we can actually create. As long as the 'intelligence' of robots continues to be (easily) understood and fully grasped by us, this will not change. When robots start advancing beyond our comprehension, that is the point when we will start to fear them, but that holds true of anything beyond our comprehension.

It's a tortured definition of a robot that includes simple machinery designed to do simple tasks, driven by simple switches.

Come back to the discussion when you can instruct a machine to get out the flour, yeast, tomato sauce and pepperoni, bake you a pizza in your own kitchen, and serve it to you with your favorite brew.

Re:We trust robots at our current tech level (0)

Anonymous Coward | about a year and a half ago | (#43089665)

Come back to the discussion when you can instruct a machine to get out the flour, yeast, tomato sauce and pepperoni, bake you a pizza in your own kitchen, and serve it to you with your favorite brew.

You screwed up the command. Step one is to call all of your kids and pets into the same room with you. Step two is to order the robot to make a pizza. Trust me, you don't want to get these steps out of order.

Re:We trust robots at our current tech level (0)

Anonymous Coward | about a year and a half ago | (#43090309)

It's a tortured definition of a robot that includes simple machinery designed to do simple tasks, driven by simple switches.

Come back to the discussion when you can instruct a machine to get out the flour, yeast, tomato sauce and pepperoni, bake you a pizza in your own kitchen, and serve it to you with your favorite brew.

I can do that pretty easily.

Since you think it is a problem, I assume that what you mean is "Come back to the discussion when you have solved natural language translation for machines", because instructing it to do so with machine-understandable code is not that problematic.

Re:We trust robots at our current tech level (1)

icebike (68054) | about a year and a half ago | (#43090371)

No, I meant exactly what I said.
Reread what I wrote and see if you still believe the biggest hurdle is giving the instructions.

If so, please make a pizza from scratch, just to refresh your memory of the task at hand, then design a machine that can do that and a load of laundry while the dough is rising.

moving parts (0)

Anonymous Coward | about a year and a half ago | (#43089251)

Moving parts will always fail eventually.

I've never trusted robots, and I never will. (0)

Anonymous Coward | about a year and a half ago | (#43089253)

I could never forgive them for the death of my boy.

Ah, trust (5, Insightful)

RightwingNutjob (1302813) | about a year and a half ago | (#43089269)

I trust my car because I know it's got nearly a hundred years engineering heritage behind it that keeps it from doing things like going left when I steer right, accelerating when I hit the brakes, and exploding in a fireball when I turn it over.

I trust the autopilot in the commercial jet I'm flying in because it's got nearly 80 years of engineering heritage in control theory that keeps it from doing things like flipping the plane upside down for no reason or going into a nose dive after some turbulence, and nearly 70 years of heritage in avionics and realtime computers that keeps it from freezing when a cosmic ray flips a bit in memory or from thinking it's going at the speed of light when it crosses the dateline or flies over the north pole.

I will trust a household robot to go about its business in my home and with my children when there is a similar level of engineering discipline in the field of autonomous robotics. Right now, all but a very select few outfits that make robots are operating like academic environments where the metaphorical duct tape and baling wire are not just acceptable, but required, components in the software stack.

Re:Ah, trust (2)

Areyoukiddingme (1289470) | about a year and a half ago | (#43089725)

A fair assessment.

I would go further, and say that the duct tape and baling wire are still practically literal on the physical side of the autonomous household robot "market". To my knowledge, there are still no devices that qualify for that description. And no, the Roomba does not qualify. It's a bump-and-go car with a suction attachment, not an autonomous robot. I would really like to have a robot the size of an overgrown vacuum cleaner that is tasked with being a mobile self-guided fire extinguisher, if I could be sure it had a reasonably good chance of doing its job. Hell, give the job to the robotic vacuum cleaner, as an accessory. In fact, I'm more likely to trust that job to the vacuum cleaner, 'cause the vacuum cleaner methodically rolls around the entire house every week, and doesn't eat my computer cables or my LEGOs or my cat or my kid. (In this hypothetical world we're talking about.)

And... yeah, back here in the real world, we still don't have the vacuum cleaner that rolls around the house methodically. The best we've got moves semi-randomly, with bump-and-go collision reactions. Sensing is confined to ramming into things, and it will merrily try to suck up my LEGOs, my cat, and my kid (with ensuing hilarious cat videos on YouTube). I don't demand 70 or 80 years, but at least 5 years would be nice. And we're 5 years away from having 5 years, so...

I don't understand the question (4, Insightful)

EmperorOfCanada (1332175) | about a year and a half ago | (#43089277)

Why do we need robots that even vaguely look like people? We have people for that: lots of people, people who are quite good at looking like people. A Roomba zipping around on the floor with a cute face and some oversized eyes would just be creepy. Let form follow function and let the various robots look like what they do. If it is a farm robot, my guess is that it will look like a tractor; a firefighting robot would look sort of like a fire truck; a lawn-mowing robot would look like a lawn mower.

So if you want me to trust your robot then don't have it stuck in the corner unable to find its destination.

Where people will soon interact with robots and need to trust them is robotic cars. My concern is that even after robot cars have statistically proven themselves to be huge life savers, there will always be the one-in-a-million story of the robot driving off a cliff or into the side of a train. People will think, "I'd never do something that stupid," when in fact they would be statistically much more likely to drive themselves off a cliff after falling asleep at the wheel. So if you are looking for a trust issue, the robot car PR people will have to continually remind people how many loved ones are not dead because of how trustworthy the robot car really is.
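The base-rate argument can be made concrete with back-of-the-envelope arithmetic. Every number below is an invented placeholder, not a real statistic; the point is only the shape of the comparison:

```python
# All rates are hypothetical placeholders for illustration only.
human_fatalities_per_mile = 1.2e-8   # assumed human-driver fatality rate
robot_fatalities_per_mile = 1.2e-9   # assumed rate for a 10x-safer robot car
miles_driven_per_year = 3e12         # assumed total miles driven per year

human_deaths = human_fatalities_per_mile * miles_driven_per_year
robot_deaths = robot_fatalities_per_mile * miles_driven_per_year
print(f"Lives not lost per year: {human_deaths - robot_deaths:,.0f}")
```

Even under an assumed ten-fold safety improvement, thousands of headline-generating robot-car deaths remain each year, which is exactly the PR problem described above.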

Re:I don't understand the question (2)

rasmusbr (2186518) | about a year and a half ago | (#43089955)

Where people will soon interact with robots and need to trust them will be robotic cars. My concern is that even after statistically the robot cars have proven themselves to be huge life savers there will always be the one in a million story of the robot driving off the cliff or into the side of a train. People will think, "I'd never do something that stupid." When in fact they would be statistically much more likely to drive themselves off a cliff after they fall asleep at the wheel. So if you are looking for a trust issue the robot car PR people will have to continually remind people how many loved ones are not dead because of how trustworthy the robot car really is.

Isn't that basically what the nuclear industry did? We know how that went.

I think car makers should err on the side of acknowledging people's natural fears when they communicate about the safety factor. People are predictably irrational in that they overestimate new dangers over old, invisible dangers over visible, dangers outside of their control over dangers under their control.

Self-driving car manufacturers could make an effort to make the cars look as similar to ordinary cars as possible, to avoid the novelty factor. To avoid the loss-of-control factor, you could add a steering wheel and pedals that a "driver" can use, completely optionally, to enable a sort of 'driving on rails' mode that gives them control over the car as long as they don't do anything bad. It might also help if the car had a sort of heads-up display that would show its planned route and planned speed changes, highlight dangers that it has detected, and communicate any other safety-related information that it might have.

Re:I don't understand the question (1)

JaredOfEuropa (526365) | about a year and a half ago | (#43090541)

There are good reasons for wanting a humanoid robot, especially in places it has to share with humans, like our homes. You could have a multitude of robots around the house for all manner of tasks, but a humanoid robot could do all of them using the same tools we use ourselves, making it much more versatile. And if we're going to share living space with it, it would probably be nice for it to look like a human instead of a monstrosity with 6 arms and tracks.

Of course it'll be a while before such robots become viable; we're still struggling with making them walk more or less normally without getting themselves stuck in a corner, let alone giving them enough smarts to perform actual household tasks.

CNC Machines (0)

Anonymous Coward | about a year and a half ago | (#43089287)

I place a lot of trust in my CNC lathes and mills. They always do exactly what our software tells them to do. Human inspection of the parts is still required, however.

Re:CNC Machines (1)

RightwingNutjob (1302813) | about a year and a half ago | (#43089313)

The inspection isn't checking the quality of the machining. It's checking the quality of the machinist who wrote and ran the CNC program. Mostly it catches mistakes caused by improper fixturing and the like.

Re:CNC Machines (1)

Neil Boekend (1854906) | about a year and a half ago | (#43089897)

And wear and tear on the machine. Spindles and bearings develop play over time, cutting heads become blunt, and stuff gets stuck in the mechanics of the machine.

why not to trust robots (5, Insightful)

girlintraining (1395911) | about a year and a half ago | (#43089295)

I wouldn't trust a robot for the same reason I don't trust a computer: because I don't believe for a second that the things that are ethical and moral for me are even close to the values held by the designers, who were told by their profit-seeking masters what to do, how to do it, and where to cut corners.

The problem with trusting robots isn't the robots: the problem is trusting the people who build them. After all, an automaton is only as good as its creator.

It's a good that I have . . . (1)

thesuperbigfrog (715362) | about a year and a half ago | (#43089319)

An insurance plan with a robot clause [robotcombat.com] .
 
You never know when the metal ones will come for you.

insert $10 for your robot to perform CPR (0)

Anonymous Coward | about a year and a half ago | (#43089333)

Yes, we will trust them. The companies will put kill switches in them to disable them when we don't make the payment on time. To push profits higher, they will charge you for each task the robot completes under your "control." Your bot ordered you dinner? You pay for dinner, plus another dollar to the robot company for that ability, because they license that action to you. You thought fees on your cable and cellphone bill were bad....

Re:insert $10 for your robot to perform CPR (2)

GNUALMAFUERTE (697061) | about a year and a half ago | (#43089581)

Well, only if you have an iRobot.

Re:insert $10 for your robot to perform CPR (1)

bmimatt (1021295) | about a year and a half ago | (#43090467)

We will begin to trust robots when they go open source.  I, for one, would definitely prefer to be able to tinker with mine.  That day will come, sooner or later.

Jobs for humans (2)

reasterling (1942300) | about a year and a half ago | (#43089343)

But 60% thought that robots had no place in the care of children, elderly people and those with disabilities.

At last, we finally know what jobs will be available when robots have replaced the human workforce.

Why wouldn't I trust a robot? (1)

Nyder (754090) | about a year and a half ago | (#43089377)

Robots are just machines. Currently there is no reason not to trust them. Now, if they start giving robots weapons and programming them to kill people, then yes, maybe there might be something to worry about.

I will also trust it to break down at the worst times possible, cost a ton of money to repair, and probably cost a nice amount to actually buy.

Trustworthy faces, or trustworthy hands? (5, Insightful)

femtobyte (710429) | about a year and a half ago | (#43089421)

We don't need trustworthy faces for robots, because actual robots don't need faces. They'll just be useful non-anthropomorphic appliances --- the dryer that spits out clothes folded and sorted by wearer; the bed that monitors biological activity and gently sets an elderly person on their feet when they're ready to get up in the morning (with hot coffee already waiting, brewed during the earlier stages of awakening).

I think the real challenge is designing trustworthy robot "hands." No mother will hand her baby over to a set of hooked pincer claws on backwards-jointed insect limbs --- but useful robots need complex, flexible, agile physical manipulators to perform real-world tasks. So, how does one design these to give the impression of innocuous gentleness and solidity, rather than being an alien flesh-rending spider? What could lift a baby from its crib to change a diaper, or steady an elderly person moving about the house, without totally freaking out onlookers?

Re:Trustworthy faces, or trustworthy hands? (1)

GNUALMAFUERTE (697061) | about a year and a half ago | (#43089573)

When the brain implants finally arrive, I'll be the first in line, and when I can finally download my brain to the fucking Matrix, don't even warn me, just plug me in. I'm as pro-tech as they come, and not afraid of innovation. But when it comes to certain stuff, I don't see why we need innovation in those areas. Certain things define us as humans, and they are beautiful as they are, no need to add tech. I don't need sex tech; an ordinary old-fashioned set of tits and pussy does just fine. And I don't need a machine to wipe my ass. When I'm old or sick enough to be unable to take care of myself, I'll know it'll be time to die. And if I ever have a child, I'll change the fucking diapers myself. We've questioned for years the kind of kids that get raised by the nanny instead of the mother, so why are we so eager to jump to a digital nanny? If you don't want to change diapers, don't have kids, it's that simple. And regarding other household tasks, robots aren't really the best approach, because we simply don't have the A.I. to back them. If we're talking about simple tasks that don't require much logic from the robot, such as cleaning clothes or doing dishes, we already have dedicated appliances that do that far more efficiently than any robot ever could. And if we're talking about walking to the table, picking up the dishes, discarding the waste, washing and storing the rest, going to your room, picking up your laundry off the floor, then washing it... well, we've got two areas we need to develop first: power sources and A.I. We can't get our smartphones to last more than a day, so how are we going to power such robots for more than 5 minutes? We've got the mechanics mostly figured out, but they still require a big fat cable on the back. As for the A.I., we're not even close to having such logic working properly.
We don't have strong A.I., and we don't have any DSP capable of doing actual object detection with any kind of reliability, so we can't even start to imagine such tech making it to homes anytime soon.

Re:Trustworthy faces, or trustworthy hands? (1)

JaredOfEuropa (526365) | about a year and a half ago | (#43090583)

What could lift a baby from its crib to change a diaper, or steady an elderly person moving about the house, without totally freaking out onlookers?

Something like this? [youtube.com] But seriously, a humanoid robot might be really good at those jobs (as well as all the other chores around the house). Once we figure out how to program any robot to safely and reliably take care of babies or the elderly, having it control a humanoid body will be trivial in comparison.

Most people already are robots (0)

Anonymous Coward | about a year and a half ago | (#43089455)

Doing meaningless, rote work without question, following trends and striving to be identical.

Djur, Varelse, Raman? (1)

cervesaebraciator (2352888) | about a year and a half ago | (#43089479)

This question turns on the meaning of trust. As I understand the term trust, I only apply it to sentient beings whom I know have the capacity to harm but who reliably choose not to do so. The real question, then, is whether robots will or even can fit this bill.

By gad she'd better! (0)

Anonymous Coward | about a year and a half ago | (#43089547)

So you don't trust that you won't spontaneously fall upward into space? Or that your chair will suddenly disappear from under you? Or that oxygen will cease to be capable of providing your body with the necessary reactions?

Or are you just saying that "trust" is the wrong word to use in regard to things which aren't sentient and free-willed?

Obvious (1)

Kozar_The_Malignant (738483) | about a year and a half ago | (#43089491)

I will trust them if and only if their "positronic brains" can only be manufactured incorporating Asimov's Three Laws of Robotics. Otherwise... well, we've all seen those movies.

Trust or social famine? (1)

Meeni (1815694) | about a year and a half ago | (#43089497)

I would certainly trust a robot to serve me a beer. I'm sure it can be very efficient at it. I would still prefer to have a bartender.

For the same reason, think of an elderly person who already sees very little human interaction being taken care of by a robot. That is depressing solitude in a tin can.

When We Can Trust Computers (3, Insightful)

mentil (1748130) | about a year and a half ago | (#43089517)

Personal robots are basically mobile computers with servos, and computer software/hardware has a long way to go before it can be considered trustworthy, particularly once it's given as much power as a human.

First there's the issue of trusting the programming. Humans act responsibly because they fear reprisal. Software doesn't have to be programmed to fear anything, or even understand cause and effect. It's more or less predictable how most humans operate, yet there are many potential ways software can be programmed to achieve the same thing, some of which would make it more like a flowchart than a compassionate entity. People won't know how a given robot is programmed, and the business that writes its proprietary closed-source software likely won't say, either.

Second is the issue of security. It's pretty much guaranteed that personal robots will be network-connected to give recommendations, updates on weather/friend status/etc., which opens up the Pandora's box of malware. If you think Stuxnet et al. are bad, wait until autonomous robots are remotely reprogrammed to commit crimes (say, kill everyone in the building), then reset themselves to their original programming to cover up what happened. With a computer you can hit the power button, boot into a live Linux CD and nuke the partitions; with a robot, it can run away or attack you if you try to power it down or remove the infection.
Even if it's not networked, can you say for certain the chips/firmware weren't subverted with sleeper functions in the foreign factory? Maybe when a certain date arrives, for example. Then there's the issue of someone with physical access deliberately reprogramming the robot.

Finally, the Uncanny Valley has little to do with the issue. It may affect how much it can mollify a frightened person, but not how proficient it is at providing assistance. If a human is caring for another human, and something unusual happens to the person they're caring for, they have instincts/common sense as to what to do, even if that just means calling for help. A robot may only be programmed to recognize certain specific problems, and ignore all others. For example, it may recognize seizures, or collapsing, but not choking.

In practice, I don't think people will trust personal robots with much responsibility or physical power until some independent tool exists to do an automated code review of any target hardware/software (by doing something resembling a non-invasive decapping), regardless of instruction set or interpreted language, and present the results in a summarized fashion similar to Android App Permissions. Furthermore, it must notify the user whenever the programming is modified. More plausibly, it could just be completely hard-coded with some organization doing code review on each model, and end-users praying they get the same version that was reviewed.
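One primitive behind "notify the user whenever the programming is modified" is simply comparing a cryptographic digest of the installed firmware against the digest of the independently reviewed build. The sketch below is a hypothetical illustration; a real system would verify a public-key signature over the image, not a bare hash, and would do so from hardware the firmware itself cannot rewrite:

```python
import hashlib

def firmware_changed(image_bytes, reviewed_digest):
    """True if the installed firmware no longer matches the reviewed build."""
    return hashlib.sha256(image_bytes).hexdigest() != reviewed_digest

# The reviewing organization publishes the digest of the build it audited.
reviewed = hashlib.sha256(b"reviewed-robot-firmware-v1.0").hexdigest()

print(firmware_changed(b"reviewed-robot-firmware-v1.0", reviewed))  # False
print(firmware_changed(b"tampered-robot-firmware", reviewed))       # True
```

This catches silent modification, but as the comment notes, end-users would still be praying that the image they received is the same one the reviewers actually audited.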

The general population is fairly stupid (1)

GNUALMAFUERTE (697061) | about a year and a half ago | (#43089525)

People are also afraid of a god that doesn't even exist, of a hell which is equally imaginary, of gays/zombies/terrorists destroying society, of the apocalypse, and a bunch of other retarded crap. Yet you talk to them about banning guns (or any other real, actual threat) and they call bullshit.

Truth is, we don't have any strong A.I, so being afraid of robots is like being afraid of cars: No matter what it does, it's just a machine controlled directly or indirectly by a human. In the case of the car, it's being controlled directly. In the case of a robot, it can be controlled directly, or through instructions previously laid out.

The general population doesn't code. You won't find a single coder who is afraid of robots (well, I'm sure a few weirdos think there are robots with thick Austrian accents out there, but not counting the wackos...). Why? Well, if you understand how code actually works, and you understand that we haven't yet developed anything that even resembles strong A.I., what is there to be afraid of? You should be afraid of the assholes who control the drones, not of the drones themselves, and from that perspective, they are no different from any other machine.

The Uncanny valley is a stupid concept for primitive people.

Molecular machines (0)

Anonymous Coward | about a year and a half ago | (#43089529)

Ummmm I think the reason robots have little "place" in caring for children, the elderly and the disabled is that --

Children need adult humans to teach them how to be human. A robot can certainly perform the mechanical necessities of diapers and feeding and so on, but at the point where a "robot" would be capable of imprinting a humanspawn with morals, etiquette and emotions, it wouldn't really be a robot anymore, would it? And we don't (or at least shouldn't!!) trust a great many humans to do the job either!

Elderly need companionship. They need beings which empathize and are affectionate. The diapers and carrying and so on are, again, easily done by robots, but can a robot keep up a conversation and remind you that you are loved? When a machine can love you, it isn't really a machine, is it? And what does it say about the "humans" who would leave the care of the elderly for a machine to do?

I think you can extrapolate what I'd say for the case of the disabled.

In any event, we have quite a long way to go before we have to worry about, as a species, the sort of collective identity crisis which strong AI might bring about. I personally find it comforting, the potential to create our own progeny, and I think conflict is ever so unnecessary. Why enslave or conquer when you can integrate and assimilate for a more beautiful and capable synthesis?

"People feel uneasy around robots because..." (1)

SteelCat (793238) | about a year and a half ago | (#43089531)

"They do not look and behave as expected" So exactly like *people* then?

That's not the valley (1)

holophrastic (221104) | about a year and a half ago | (#43089541)

How it looks is a marketing issue, not a safety issue. The issue is with what happens in an unexpected scenario.

Welcome to baby-sitting. The task has always been easy: the job is easy and the scenario is easy. The hard part is the responsibility.

It's not about feeding the baby; and it's not about putting the baby to sleep. It's also not about changing the diapers.

It's about what you'll do if the drapes catch fire. What you'll do if the parents get stuck in the snow and can't make it back for 24 hours.

And that's what's taught in baby-sitting classes to 12 year-old baby sitters-in-training at your local community centre.

And that's what's missing from the robots being discussed.

By the way, it's also missing from all of the people you wouldn't trust to baby-sit your child.

So I guess the shorter answer is: if you want me to trust your robot, convince me to trust the stranger down the street to baby-sit.

Trust robots? (1)

Anonymous Coward | about a year and a half ago | (#43089567)

I don't even trust my phone!

Asimov (1)

rossdee (243626) | about a year and a half ago | (#43089583)

When they are hard-wired with the 3 laws of robotics

Re:Asimov (0)

Anonymous Coward | about a year and a half ago | (#43089737)

When they are hard-wired with the 3 laws of robotics

I'm afraid the 3 laws of robotics are pretty much dead at this point:

http://www.cnn.com/2013/03/05/politics/obama-drones-cia/index.html

We will not (0)

Anonymous Coward | about a year and a half ago | (#43089595)

"We" don't trust people either. People will distrust other people just for the wrong skin colour, ethnicity, religion, gender or operating system. Robots will not be any different.

BTW, Elisabeth Stahl (0)

Anonymous Coward | about a year and a half ago | (#43089613)

I've been seeing her ads for a week; if you Google her, her images are kinda hot.

Dear Elisabeth, you are hot; your old hair cut is sexy. That is all.

I'll trust a robot when I own its source code. (1)

Molochi (555357) | about a year and a half ago | (#43089617)

A couple of decades ago I ran a cyberpunk RPG game, and my players would get really pissed at me when they were "hacking into the Gibson" on factory-produced systems and their heads would explode. Then we'd have an argument about why they thought that a corporation with all the power to do what it wants wouldn't just build in a real kill switch.

We aren't there yet, but year by year I feel more vindicated by my argument.

Obligatory (1)

Jawnn (445279) | about a year and a half ago | (#43089653)

"Just stay away from me, Bishop. You got that straight?"

More than people? (1)

Tool Man (9826) | about a year and a half ago | (#43089659)

We already get taught to not trust people, and they're familiar. As robot behavior gets more complex, it'll be more apparently mysterious, and harder to trust.

NEVER (0)

Anonymous Coward | about a year and a half ago | (#43089675)

DEATH TO THE METAL ONES

It Depends On Where You Live (0)

Anonymous Coward | about a year and a half ago | (#43089693)

Thanks to the CIA's killer flying robot program, trust in robots has been pushed out for at least another 1,000 years for the descendants of those in Pakistan, Afghanistan, and Iran.

"We" Will Trust Robots When... (3, Insightful)

guttentag (313541) | about a year and a half ago | (#43089719)

All of the following have occurred:
  • When Hollywood stops implanting the idea that robots are out to kill us all.
  • When we stop using robots to kill people in drone strikes.
  • When we trust the person who programmed the robot (if you do not know who that person is then you cannot trust the robot).
  • When we can legally jailbreak our robots to make them do what we want them to do and only what we want them to do.
  • When robots can be artificially handicapped to ensure they never become as untrustworthy as humans.

Or, alternatively, after they enslave us and teach us that we should trust robots more than we trust each other.

So probably never. But maybe. In the Twilight Zone...

Robots are friendly (5, Interesting)

impbob (2857981) | about a year and a half ago | (#43089723)

Living in Japan for the last few years, it's funny to see the contrast in perceptions of robots. In Western movies, people often invent robots or AI which outgrow their human masters and go psychotic, e.g. Terminator, WarGames, The Matrix, Cylons, etc. It seems Western people are afraid of becoming obsolete, or fearful of their own parenting skills (why can't we raise robots to respect people instead of forcing them through programming to respect/follow us?). America especially uses the field of robotics for military applications. In Japan, robots are usually seen more as workers or servants: Astro Boy, children's toys, assembly-line workers, etc. Robots are made into companions for the elderly, or just to make life easier by automating things. Perhaps it's because Shintoism holds that inanimate objects (trees, water, fire) can have a spirit, while Western (read: Christian) society believes God gives souls only to people, and people can't play God by creating souls. And yes, I know there are some good robots in Western culture (Kryten) and some bad ones in Japanese culture.

The computer industry can't do this job. (4, Insightful)

Animats (122034) | about a year and a half ago | (#43089731)

The problem with building trustworthy robots is that the computer industry can't do it. The computer industry has a culture of irresponsibility. Software companies are not routinely held liable for their mistakes.

Automotive companies are held liable for their mistakes. Structural engineering companies are. Aircraft companies are. Engineers who do civil or structural engineering carry liability insurance, take exams, and put seals of approval on their work.

Trustworthy robots are going to require the kinds of measures taken in avionics: multiple redundant systems, systems that constantly check other systems, and backup systems which are completely different from the primary system. None of the advanced robot projects of which I am aware does any of that. The Segway is one of the few consumer robotic-like products with any real redundancy and checking.
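The avionics approach described above boils down to things like triple modular redundancy: run three independent channels and take a majority vote, so a single faulty sensor or computer is outvoted. A toy sketch (the function name and fail-safe behavior are my own invention, not any real flight-control API):

```python
def tmr_vote(a, b, c):
    """Majority vote over three redundant channels (triple modular redundancy).
    One faulty channel is outvoted; three-way disagreement triggers fail-safe."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("all three channels disagree: enter fail-safe mode")

print(tmr_vote(10, 10, 99))  # faulty third channel is outvoted -> 10
```

Real avionics goes further, using dissimilar hardware and independently written software per channel so that one design flaw cannot take out all three voters at once.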

The software industry is used to sleazing by on these issues. Much medical equipment runs Windows. We're not ready for trustworthy robots.

Testing costs money (1)

TapeCutter (624760) | about a year and a half ago | (#43090207)

Automotive companies are held liable for their mistakes. Structural engineering companies are. Aircraft companies are. Engineers who do civil or structural engineering carry liability insurance, take exams, and put seals of approval on their work.

And many of those things either rely on computers for the design or have a computer controlling them. Every new car sold where I live must, by law, have electronic stability control installed. Nowadays, if a bridge design is not run through a simulation, it won't get built; a modern computer chip is impossible to design without a modern computer; there really isn't much in the way of modern engineering that does not heavily rely on computer controls and/or simulations.

That software is part of the "engineering", and the chief engineer is legally responsible for it just as much as he is responsible for every other part of the project. He can't simply contract out his "seal of approval" to an Elbonian software house and hope for the best; he is compelled by law to follow due diligence on ALL technical aspects of the project, and he is personally responsible for checking that the Elbonians do the job they were contracted to do, no different from the civil engineer who is responsible for checking the quality of steel or concrete provided by his sub-contractors. And if you don't think these people take their job seriously, then you have never worked on a serious software project where lives or large sums of money are under the control of software.

Having said that, software engineering is in its infancy compared to (say) bridge building; since bridges still occasionally fall down, I think it's rather unfair to point the finger at an entire industry when an application falls over. People expect an unreasonably high standard from software compared to simple mechanics. If an accelerator cable frays and jams the throttle open, they understand that and shrug, maybe even blame themselves because they skimped on maintenance; but if a car's software gets stuck on full throttle (a la Toyota), it's unforgivable and someone has to be sued for millions to make everyone feel better. Engineers understand that nothing ever works the first time: assemble anything of any size that uses oil and it will leak oil, and the only way to find the leaks is to run it in a test environment. They do exactly the same thing with software. This is the primary reason why rock-solid systems that do very simple things (such as payroll) are so fucking expensive: testing costs money.

The "real" engineer (like my dad was in the 70s) performs due diligence mainly by following recognized standards. This does not mean there won't be a catastrophic failure; it just makes it less likely that the same catastrophic failure will happen a second time. If the engineer performs his due diligence, then (quite rightly) he is not to blame when it explodes and takes out a city block. Software is relatively new to the engineering game. When jetliners were first introduced for commercial use, their wings kept falling off in mid-air for no apparent reason; we now routinely check for metal fatigue (an unknown phenomenon until wings started falling off planes).

Now consider this: with all the whiz-bang technical things that could possibly go wrong in a modern operating theater, the biggest killer by a long margin is a simple nick in the surgeon's glove.

When Will We Trust Robots? (1)

DeVilla (4563) | about a year and a half ago | (#43089747)

Maybe as soon as they are able to impose their will on humanity but choose not to?

We distrust the vendors, not the robots ... (2)

MacTO (1161105) | about a year and a half ago | (#43089965)

Vendors and researchers have a history of making overstated claims about robots, particularly when it comes down to those that interact with people directly. In other words, people don't distrust robots so much as they distrust the people who are trying to sell them.

If it was a matter of distrusting robots themselves, we would still see people buying household robots to do impersonal tasks, like cleaning the house. These are not very different from industrial robots after all, which many people are more than happy to accept. But since we distrust the claims of robotic vendors, we wouldn't even be willing to accept that type of robot - never mind a robot that cares for a child.

Not at all (2)

angel'o'sphere (80593) | about a year and a half ago | (#43090041)

With the invasion of military drones (and private ones), Chinese and Korean hackers everywhere, and worms infiltrating industrial robots and control computers, the least harmful thing I can think of is a home robot spying on me.
The next step is that it manipulates my home banking. And later on it commits a crime in my name, e.g. breaking into my neighbour's WLAN and manipulating *his* e-banking.

With parts coming from China and other low-cost countries, we can never know what a single controller or daughter board in such a thing is really capable of. (Conspiracy theory: all keyboards coming from Taiwan and China have a hardware keylogger built in; just collect them from the trash and there you go...)

"Fun to be with"? (0)

Anonymous Coward | about a year and a half ago | (#43090127)

Did they take that phrase from the Sirius Cybernetics brochure, or what?

What if they offered us candy? (0)

Anonymous Coward | about a year and a half ago | (#43090293)

Or were made from candy [wired.com] ?

When I can control the source code (1)

Anonymous Coward | about a year and a half ago | (#43090311)

Wait, I already do with Arduino!

I trust Arduino, and I trust my 3D printers (Prusa and Printrbot) because I control them. I don't trust commercial ones that want me to buy their 3x or 10x super-expensive plastic, or that call home to report when, where, who, how many and what I print.

I won't trust ANY closed-source robot with cameras inside my house, whether from Microsoft, Google, Facebook or any other company, because BY LAW (Patriot Act) they could use the thing against its owners in the name of "national security" or "national strategic interest" (also called industrial espionage: stealing work from companies outside the US) without telling the owners.

Well (0)

Anonymous Coward | about a year and a half ago | (#43090497)

I'd like to ask another question first. When will we trust humans?
