
Why Self-Driving Cars Are Still a Long Way Down the Road

Soulskill posted 1 year,6 days | from the i-see-what-you-did-there dept.

Transportation 352

moon_unit2 writes "Technology Review has a piece on the reality behind all the hype surrounding self-driving, or driverless, cars. From the article: 'Vehicle automation is being developed at a blistering pace, and it should make driving safer, more fuel-efficient, and less tiring. But despite such progress and the attention surrounding Google's "self-driving" cars, full autonomy remains a distant destination. A truly autonomous car, one capable of dealing with any real-world situation, would require much smarter artificial intelligence than Google or anyone else has developed. The problem is that until the moment our cars can completely take over, we will need automotive technologies to strike a tricky balance: they will have to extend our abilities without doing too much for the driver.'"

352 comments

The more you know... (-1)

Anonymous Coward | 1 year,6 days | (#43466415)

The Cockroach can live up to two weeks without a head because its "brain" (or control center) is spread throughout its body.

Don't have to be perfect, just better (5, Insightful)

JDG1980 (2438906) | 1 year,6 days | (#43466437)

This writer makes a fundamental mistake: believing that if full driverless technology is not perfect or at least near-perfect, it is therefore unacceptable. But this is not true. Driverless technology becomes workable when it is better than the average human driver. That's a pretty low bar to clear. I know all of us think we're above-average drivers, but there are a lot of really bad drivers out there, and even a flawed automatic system could do a better job.

Re:Don't have to be perfect, just better (5, Insightful)

icebike (68054) | 1 year,6 days | (#43466557)

I know all of us think we're above-average drivers, but there are a lot of really bad drivers out there, and even a flawed automatic system could do a better job.

That depends entirely on the failure mode.

"Fail to Death" is actually acceptable to society as a whole as long as the dead person held his fate in his own hands at some point in the process. This is why we get so incensed about drunk drivers that kill others. A person doing everything right is still dead because of the actions of another. But if you drive under that railroad train by yourself, people regard it as "your own damn fault".

When a driverless car crosses the tracks in front of an oncoming train, it will be regarded differently. It doesn't matter that the driver was a poor driver with a lot of fender benders; most of those aren't fatal.

In spite of that, I believe Google is far closer than the author gives them credit for. They have covered a lot of miles without accidents.

Granted, we don't know how many times there were errors in the Google cars, where one part of the system says there is nothing coming so change lanes, and the human or another part of the system notices the road is striped for no-passing and prevents it. Google is still playing a great deal of this pretty close to the vest.
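The cross-checking icebike describes - one subsystem proposing a lane change, another vetoing it because the road is striped for no-passing - amounts to a conservative veto gate over independent channels. A minimal sketch (the subsystem names here are invented for illustration, not anything Google has published):

```python
def lane_change_permitted(checks):
    """Allow a maneuver only if every independent subsystem agrees.

    `checks` maps a (hypothetical) subsystem name to its verdict:
    True = no objection, False = veto. A single veto blocks the
    maneuver, which is the conservative failure mode described above.
    """
    return all(checks.values())

# Radar sees a clear adjacent lane, but the camera reads a
# no-passing stripe, so the lane change is suppressed:
verdicts = {
    "radar_clear": True,
    "camera_lane_marking_ok": False,  # striped for no-passing
    "driver_override": True,
}
```

The point of the design is that any one channel can only make the car *more* cautious, never less.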

Re:Don't have to be perfect, just better (3, Interesting)

VortexCortex (1117377) | 1 year,6 days | (#43467295)

This is why we get so incensed about drunk drivers that kill others. A person doing everything right is still dead because of the actions of another. But if you drive under that railroad train by yourself, people regard it as "your own damn fault".

Ah, but see here: the driverless cars have full 360-degree vision, and they already stop for trolleys and errant pedestrians crossing the road illegally. They do so without honking and swearing at the git, too. So, as you say, if you ignore the copious warnings from the driverless car's self-diagnostics that its optics are fucked, and manage to override the fail-safe and wind up under a train, then yes, people will still regard it as "your own damn fault". Not only that, but YOUR insurance is paying to clean up the mess.

Re:Don't have to be perfect, just better (0)

Anonymous Coward | 1 year,6 days | (#43466567)

I think you may be wrong. Humans are held responsible for their own actions when driving. How many companies are going to want to be held responsible for the actions of a buggy automated driving system?

Re:Don't have to be perfect, just better (2)

icebike (68054) | 1 year,6 days | (#43466667)

Or a buggy automated system under the direction of a buggy human?

The manufacturers will never assume all responsibility, and no one would even suggest that as a viable business model. Some things (like our current cars, guns, stairways, electricity, etc) carry inherent dangers. Product liability law does not mandate that there be zero risk.

There are design standards, that if adhered to, pretty much absolve the manufacturer of responsibility. There is no reason to believe something like this will not be incorporated in automobiles.

Re:Don't have to be perfect, just better (2, Insightful)

Anonymous Coward | 1 year,6 days | (#43466601)

Exactly. Just because a self-driving car can't respond to outlier situations as well as a human can doesn't mean the car shouldn't be driving. By definition, those outlier situations aren't the norm. Most accidents are caused by someone picking up something they dropped, looking the wrong way, changing the radio station, etc., or inebriation. These problems would be solved by a self-driving car, and while I don't have any numbers to back it up, something tells me that eliminating them would far outweigh the disasters that happen every once in a while.

Re:Don't have to be perfect, just better (2)

icebike (68054) | 1 year,6 days | (#43467621)

Most accidents are caused by someone picking up something they dropped, looking the wrong way, changing the radio station, etc., or inebriation.

Really? I'd like to see the source of your assertion.

Granted distracted driving is the darling of the press these days. But that doesn't make it the major contributor to fatalities.

In fact, fatalities by all causes are on a steady year by year decline [census.gov] and have been for 15 years.

Drunk driving [edgarsnyder.com] still accounts for a great deal, 31% of the overall traffic fatalities in 2010. One-half of traffic deaths ages 21 to 25 were drinking drivers.

Distracted driving hovers around 16% of fatal crashes by comparison. [census.gov]

By those figures, drunk driving accounts for roughly twice the share of fatalities that distraction does.
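A quick sanity check on the ratio implied by the shares quoted above:

```python
# Shares quoted in the comment above (2010 figures):
drunk_share = 0.31       # drunk driving's share of traffic fatalities
distracted_share = 0.16  # distraction's share of fatal crashes

ratio = drunk_share / distracted_share
print(f"drunk driving's share is ~{ratio:.1f}x distraction's")
# prints a ratio of about 1.9, i.e. roughly double, not quadruple
```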

Re:Don't have to be perfect, just better (1)

Shavano (2541114) | 1 year,6 days | (#43467921)

I think your calculation is off. By my count, fully 10% of drivers AT ANY GIVEN TIME are talking on a cell phone and should therefore be classified as distracted. Given all other sources of distraction, the level of distraction has to be at least 15% if not much higher. How many people do you think are driving DRUNK at any given time? Seven percent????

Re:Don't have to be perfect, just better (1)

Hentes (2461350) | 1 year,6 days | (#43466661)

And those bad drivers get their licenses revoked, just like bad software should be banned. It's not singling out autonomous cars.

Re:Don't have to be perfect, just better (2, Insightful)

tnk1 (899206) | 1 year,6 days | (#43466749)

Point is, it is *not* better. Yes, it is probably better at normal driving situations, and certainly would probably be very welcome on a nice open highway. I imagine it would excel at preventing fender benders, which would be very nice indeed.

However, how would it react to sudden situations at high speed? According to the article and everything I know so far... not well. And the point of the article is that the automation would still require a driver for that, but the automation would actually cause the driver to be less alert than usual since they are less engaged. In turn that makes the driver be less capable in a sudden situation than they would have been without automation to begin with. In this way, it actually makes the average driver *worse* than they already are, just when they are needed to perform the most.

So, it is not necessarily good enough to improve one part of driving, if that improvement actually degrades another, equally important part of the system.

Re:Don't have to be perfect, just better (1, Funny)

Sponge Bath (413667) | 1 year,6 days | (#43466995)

...probably better at normal driving situations

Will auto-pilot have the blood lust to take out the squirrel crowding your lane? Humans are number one!

Re:Don't have to be perfect, just better (5, Insightful)

Jeremi (14640) | 1 year,6 days | (#43467299)

However, how would it react to sudden situations at high speed? According to the article and everything I know so far... not well.

In principle, at least, an automated system could react better than a human to sudden emergency situations, because a computer can process more input and make decisions faster than a human can, and (just as importantly) a computer never gets bored, sleepy, or distracted.

Dunno if Google's system reaches that potential or not, but if not it's just a matter of improving the technology until it can.

Re:Don't have to be perfect, just better (4, Insightful)

Anonymous Coward | 1 year,6 days | (#43467643)

I'm sorry, but your comment is extremely clueless. You are asking about sudden situations at high speed. How would one end up in such a situation? By ignoring the law and safe driving practices. Just by obeying the speed limit, and reducing speed further when sensor data gives warnings, the car can avoid such problems in the first place.

Almost 90% of human drivers confuse their luck at driving 80+ mph in bad weather (which is like playing Russian roulette with a slightly higher-capacity gun) with driving ability.

Re:Don't have to be perfect, just better (1)

DerekLyons (302214) | 1 year,6 days | (#43466751)

This writer makes a fundamental mistake: believing that if full driverless technology is not perfect or at least near-perfect, it is therefore unacceptable.

Is it a mistake? Or your unsupported bias against human drivers?
 

there are a lot of really bad drivers out there, and even a flawed automatic system could do a better job

That's a claim I keep hearing... but I don't buy it. For all the really bad drivers supposedly out there, in a wide variety of road conditions, lighting, weather, etc... there's doesn't seem to be nearly as many accidents as all the handwaving and bile would lead an outside observer to believe.

Re:Don't have to be perfect, just better (4, Informative)

PraiseBob (1923958) | 1 year,6 days | (#43467415)

There are roughly 200 million drivers in the US. They have roughly 11 million accidents per year.

http://www.census.gov/compendia/statab/cats/transportation/motor_vehicle_accidents_and_fatalities.html [census.gov]

The catch is, nearly all traffic accidents are preventable by one of the parties involved. Most are at low speeds, and most are due to the driver not paying attention to the situation around them. Next time you are at a busy traffic light, count the cars around you. Chances are one of them will be in an accident that year. Now do that every time you stop at a traffic light....
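The arithmetic behind that traffic-light thought experiment works out neatly from the figures quoted above (the count of nearby cars is a hypothetical):

```python
drivers = 200_000_000            # rough count of U.S. drivers (per comment)
accidents_per_year = 11_000_000  # rough annual accident count (per comment)

p = accidents_per_year / drivers     # ~5.5% chance a given driver crashes this year
cars_at_light = 18                   # hypothetical count of cars around you
expected_crashes = cars_at_light * p # ~1 accident expected among them this year

print(f"per-driver annual accident rate: {p:.1%}")
print(f"expected accidents among {cars_at_light} nearby cars: {expected_crashes:.1f}")
```

So with a couple dozen cars in view, the expected number of them that will crash within a year is indeed about one.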

Re:Don't have to be perfect, just better (1)

DerekLyons (302214) | 1 year,6 days | (#43467913)

Which proves what exactly? That you can use Google and throw statistics around with abandon? Because it doesn't prove the OP's assertion to be correct. (In fact, given the ratio of drivers to accidents - it shows rather the opposite.)

Re:Don't have to be perfect, just better (3)

gorehog (534288) | 1 year,6 days | (#43467119)

I don't think this is true. No auto maker wants to deal with the insurance overhead involved in installing a "marginally flawed driverless system." Can you imagine that meeting? The moment when some insurance executive is told by GM, Ford, Honda, Toyota, Mercedes, BMW, whoever, that their next car will include a full-on autodrive system? The insurance company will raise the manufacturer's insurance through the roof! Imagine the lawsuits after a year of accidents in driverless cars. Everyone would blame their accidents on the automated system. "Not my fault, the car was driving, I wasn't paying attention, can't raise my rates, I'll sue the manufacturer instead." A few go-rounds like that and no one will want the system installed anymore.

Re:Don't have to be perfect, just better (1)

Time_Ngler (564671) | 1 year,6 days | (#43467791)

They could get around this by having the insurance company work with the manufacturer, so you lease the automated driving software and one of the terms of the lease is to pay for the manufacturer's insurance cost for the use of the automated car. Or you have the insurance contract written to support both automated and non-automated driving. Automated driving should have cheaper insurance if it works better than the average driver, which makes it a win for everybody.

Re:Don't have to be perfect, just better (4, Insightful)

spire3661 (1038968) | 1 year,6 days | (#43467849)

Actually, some day it will get to the point where it will cost more in insurance to self-drive.

Re:Don't have to be perfect, just better (3, Insightful)

ArsonSmith (13997) | 1 year,6 days | (#43467129)

It'll be adopted just as quickly as nuclear power, which is orders of magnitude safer than coal, has taken off despite a few relatively minor and contained accidents.

Re:Don't have to be perfect, just better (1)

yurtinus (1590157) | 1 year,6 days | (#43467817)

A driverless car will certainly be more attentive overall than a human driver, but it also needs to be able to handle the unexpected things a human driver handles. The mundane tasks, sure - but how do you handle something like a tire blowout in a curved section of road with sand on it? As long as relatively common scenarios crop up that a human can handle some reasonable percentage of the time and the software can't, it's not ready for prime time. How do you fail over when road conditions exceed the car's thresholds? The software can't simply say "deal with it" and have the driver take over. The driver could have their hands full of coffee and iPads, be sleeping, or be otherwise unaware of the situation.

Re:Don't have to be perfect, just better (2)

Shavano (2541114) | 1 year,6 days | (#43467853)

This writer makes a fundamental mistake: believing that if full driverless technology is not perfect or at least near-perfect, it is therefore unacceptable. But this is not true. Driverless technology becomes workable when it is better than the average human driver. That's a pretty low bar to clear. I know all of us think we're above-average drivers, but there are a lot of really bad drivers out there, and even a flawed automatic system could do a better job.

It becomes workable when it achieves ALL of these things:

  • performs as well in most situations as a human being
  • doesn't perform significantly worse than a normal human being in any but extremely rare situations
  • is highly reliable and
  • it is cheaper than the alternatives

The first obvious application is to replace cab drivers.

Taxis first (2, Insightful)

Guano_Jim (157555) | 1 year,6 days | (#43466459)

I think we'll probably see self-driving cars in congested, relatively low-speed environments like inner cities before they're screaming down the highway at 75mph.

The first robot taxi company is going to make a mint when they integrate a smartphone taxi-summoning app with their robo-chauffeur.

Re:Taxis first (2)

Attila Dimedici (1036002) | 1 year,6 days | (#43466545)

No, they aren't (at least not in the U.S.), because the main reason there are not more taxis in most major U.S. cities is that city governments strictly limit the number of taxis allowed in the city.

Re:Taxis first (1)

icebike (68054) | 1 year,6 days | (#43466741)

Most governments limit taxi services to control safety.

It's the taxi companies (and unions) themselves that lobby long and loud for controls on quantity.

Re:Taxis first (4, Insightful)

Attila Dimedici (1036002) | 1 year,6 days | (#43467675)

Most governments justify limiting taxi services by claiming it is to control safety, when in fact it is to control transportation (and to reward those taxi companies that support the correct government officials and initiatives).

Maybe AUTO drive only express lanes that cars will (5, Insightful)

Anonymous Coward | 1 year,6 days | (#43466595)

Maybe we'll get AUTO-drive-only express lanes where cars go at high speed, nearly bumper to bumper.

actually, probably less issues on the highway (2)

Chirs (87576) | 1 year,6 days | (#43466783)

Screaming down the highway at 75MPH is *exactly* where I want a self-driving car. I live in the Canadian prairies, the nearest large city is 5hrs of highway driving, next nearest is 7hrs. I would _love_ to put my car on autopilot for that trip.

Also, on the highway you generally have long straight sections, sight lines are long, cars are further apart, there are no pedestrians, and often you have divided highways so you don't even need to worry about oncoming traffic.

Re:actually, probably less issues on the highway (1)

ebno-10db (1459097) | 1 year,6 days | (#43467343)

Canadian prairies? In that degenerate case (computationally speaking - no offense to our northern neighbors) I think you already have the technology. Just point the car straight, hit the cruise control, and set an alarm clock for a few minutes before you get to your destination. It's worked for me when I've driven on the American prairies.

Re:actually, probably less issues on the highway (1)

yurtinus (1590157) | 1 year,6 days | (#43467899)

But you do need to deal with wildlife. I wonder what thought they've put into the programmatic decision-making when an accident is unavoidable. Collide with that obstacle? Can it tell a deer from an elk or a moose? How will it handle swerving? I suspect we'll see cars cruising at high speed on freeways in caravans well before an arbitrary autopilot.

Re:Taxis first (0)

Anonymous Coward | 1 year,6 days | (#43466877)

Wrong. Zooming down the highway at 75mph requires a lot less skill than navigating a dense inner city.

Also, I see Google's autonomous vehicles on the highway nearly every day on the way to work (CA-85 in San Jose).

Re:Taxis first (1)

janimal (172428) | 1 year,6 days | (#43467591)

Driving straight is not the challenge. The failure mode is much more severe on a highway, and extreme conditions are probably just as difficult to manage for an autopilot as city traffic, if not more. For example, what does the AP do when
- the car hits a big pothole and perhaps blows a tire? There's too little time for a human to take over. Sometimes you need to make a choice, whether to stop abruptly or not.
- lane markings disappear because of prior construction? Construction detour?
- truck blows a tire in front of you? (happened to me twice)
- some animal/idiot wanders into the road? Hint: swerving to avoid is a better maneuver than hitting the brakes and holding them. You don't want to stop dead on a highway.

Give me city traffic with a 50km/h speed limit any day.

Re:Taxis first (4, Informative)

ebno-10db (1459097) | 1 year,6 days | (#43466911)

I think we'll probably see self-driving cars in congested, relatively low-speed environments like inner cities before they're screaming down the highway at 75mph.

On the contrary, "screaming down the highway at 75mph" (never been on the Autobahn, have you?) is a lot easier to automate than driving around a city block. Similarly the easier part of a plane's autopilot is the part that handles cruising at 500mph at 30,000 feet. The numbers are impressive, but the control is comparatively easy.

On a highway there are no traffic lights or stop signs, and there are nicely marked lanes and shoulders. Just stay between the lines at a constant speed and hit the brakes if something appears in front. Compare that to trying to figure out if some guy who's not watching is going to step off the curb and into your way, or if the car pulling out of a parking spot is going to wait for you to pass.
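The "stay between the lines at a constant speed and hit the brakes if something appears" loop described above decomposes into two nearly independent controllers. A minimal sketch, with all gains and thresholds invented for illustration:

```python
def highway_step(lane_offset_m, obstacle_distance_m, speed_mps):
    """One control tick of a naive highway autopilot (illustrative).

    Returns (steering_correction, brake): steering nudges the car back
    toward the lane center in proportion to its offset, and the brake
    engages when time-to-collision drops below a fixed threshold.
    """
    K_STEER = 0.5          # proportional steering gain (made up)
    TTC_THRESHOLD_S = 2.0  # brake if time-to-collision falls below this

    steering = -K_STEER * lane_offset_m
    ttc = obstacle_distance_m / speed_mps if speed_mps > 0 else float("inf")
    brake = ttc < TTC_THRESHOLD_S
    return steering, brake

# Drifting 0.4 m right at 33 m/s (~75 mph), nothing within 200 m:
steer, brake = highway_step(0.4, 200.0, 33.0)
# steering correction is negative (nudge left); brake stays off
```

Nothing here is how any production system actually works; the point is only that the highway case reduces to a couple of simple feedback rules, whereas the city-block case does not.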

Re:Taxis first (1)

phantomfive (622387) | 1 year,6 days | (#43467061)

I think we'll probably see self-driving cars in congested, relatively low-speed environments like inner cities before they're screaming down the highway at 75mph.

This makes sense. I would be willing to drive a self-driving car in a crowded city, where the maximum speed is 25, and the risk of serious injury is minimal if I get in a collision. Driving at 75 down the highway has a lot bigger risk if things go wrong.

Of course, looking at it the other way, I wouldn't want to be a pedestrian with a bunch of self-driving cars going around me. A small sensor error could lead to catastrophe.

Re:Taxis first (1)

ebno-10db (1459097) | 1 year,6 days | (#43467315)

maximum speed is 25, and the risk of serious injury is minimal if I get in a collision

Warn me beforehand - I don't want to be a pedestrian in that city.

Autonomous vehicle milestones (0)

Anonymous Coward | 1 year,6 days | (#43466469)

It's interesting that the article indicates that autonomous vehicles are still worse than the best Chinese drivers, but that Google expects to surpass that milestone within four years.

And when one is involved in an accident ... (2, Insightful)

Anonymous Coward | 1 year,6 days | (#43466473)

... because they will be, who is going to be sued?

If I was a car manufacturer I don't think I'd be mad keen on going down the self-driving route - it's only going to mean more lawsuits.

Re:And when one is involved in an accident ... (0)

Anonymous Coward | 1 year,6 days | (#43466565)

I'd like to see robots duking it out in a deathmatch on our roads.

Re:And when one is involved in an accident ... (0)

Anonymous Coward | 1 year,6 days | (#43466659)

This is a non-issue.
You start by implementing it the same way cruise control is done: the moment the driver touches the steering wheel, control is transferred back to the driver.
Then you say the driver has an obligation to monitor traffic and take control if something unexpected happens.
Now the "driverless" car only has to manage highway driving or some other menial task where the driver wants to relax, and if something goes wrong it was because "the driver didn't pay enough attention."
You can pretty much dump all liability on the driver, because he is lazy and trusts the engineers enough to use the technology even if it is a deathtrap.
After a decade, people will be used to the new technology and the laws will have adapted enough for us to remove the steering wheel.
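The cruise-control-style handoff sketched above is a tiny state machine: autopilot holds control until the driver touches the wheel, and re-engaging is an explicit opt-in. A hypothetical sketch (states and events invented here, not any real system's API):

```python
AUTOPILOT, MANUAL = "autopilot", "manual"

def next_state(state, wheel_touched, driver_engages_ap):
    """Control-handoff transition for the scheme described above.

    Any wheel input instantly returns control (and, under this scheme,
    liability) to the human; autopilot only resumes on explicit request,
    like pressing a cruise-control button.
    """
    if state == AUTOPILOT and wheel_touched:
        return MANUAL
    if state == MANUAL and driver_engages_ap:
        return AUTOPILOT
    return state
```

The asymmetry is the whole legal trick the comment describes: disengagement is involuntary and instant, engagement is a deliberate act the driver can be held to.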

Re:And when one is involved in an accident ... (1)

neminem (561346) | 1 year,6 days | (#43467041)

I'm totally fine with that. I feel like regardless of how smart cars will get, they should always have a fully mechanical failsafe, and should always be driveable in fully-AI-disabled mode. At that point, I'd argue an accident should only be the fault of the car manufacturer if it could be clearly proven that either a. no reasonable human driver would have done what the car did (like, say, it just decided to drive off a bridge suddenly and without warning), or b. the failsafe didn't work when engaged. I'd be fine with an agreement like that. If I wanted to zone out and not have someone who wasn't zoning out in the driver's seat, and my car got into an accident because something unexpected happened, it would totally be my fault (or maybe the fault of someone else on the road, but not the car manufacturer's.)

In any case. We should *never* remove the steering wheel. Ever.

What's wrong with Google cars (1, Interesting)

phantomfive (622387) | 1 year,6 days | (#43466517)

If anyone is wondering the reasons they say Google cars are not good enough, here is the only section of the article that addresses that point:

[Google] says its cars have traveled more than 300,000 miles without a single accident while under computer control. Last year it produced a video in which a blind man takes a trip behind the wheel of one of these cars, stopping at a Taco Bell and a dry cleaner.

Impressive and touching as this demonstration is, it is also deceptive. Google’s cars follow a route that has already been driven at least once by a human, and a driver always sits behind the wheel, or in the passenger seat, in case of mishap. This isn’t purely to reassure pedestrians and other motorists. No system can yet match a human driver’s ability to respond to the unexpected, and sudden failure could be catastrophic at high speed.

Re:What's wrong with Google cars (1)

MrEricSir (398214) | 1 year,6 days | (#43466563)

What a vague statement! This entire article is lacking in any real specifics or citations.

Re:What's wrong with Google cars (1)

phantomfive (622387) | 1 year,6 days | (#43466697)

Not at all. The article is full of specifics. The article content merely has no relation to the Slashdot headline. Imagine that.

Re:What's wrong with Google cars (1)

ebno-10db (1459097) | 1 year,6 days | (#43466639)

What's that, Google's hype is just, well, hype? Say it ain't so.

Numerous car manufacturers have been working on self-driving cars for many years. In 2014 Mercedes will actually have a car with some very limited self-driving capabilities (sort of cruise control on steroids - you can use it on the highway when you stay in your lane). As limited as it is, that's a lot more real world application than you're likely to see out of Google anytime soon. Contrary to the beliefs of some Silicon Valley and Google hype artists, not everyone outside of those domains is an uncreative idiot. Even some car companies have good creative ideas. They're also constrained by being in the business of selling cars instead of hype.

Google has a habit of playing with a lot of cool science project type stuff. They have the cash to do that. Maybe someday something will come out of one of them, if they have the tenacity not to ditch it for the next shiny new toy to hype. However, there's no particular reason to think they'll be the innovative force behind self-driving cars.

Re:What's wrong with Google cars (1)

phantomfive (622387) | 1 year,6 days | (#43466733)

In 2014 Mercedes will actually have a car with some very limited self-driving capabilities (sort of cruise control on steroids - you can use it on the highway when you stay in your lane).

FYI the article describes the author driving a 2013 Ford Fusion with exactly those features. Also mentions a 2010 Lincoln with auto-parking.

Re:What's wrong with Google cars (2)

ebno-10db (1459097) | 1 year,6 days | (#43467019)

I RTFA, and I think that's impressive. What struck me about Mercedes is they're actually going to be selling cars with those features in less than 6 months. I don't think the Ford features are going to be sold soon, though the article wasn't clear. The automated parallel parking has been around for a while, and in all fairness should count as a limited form of self-driving car.

It reinforces my point about Google. While they're hyping completely self-driving demos, various car makers have been doing the hard work of refining things they can really sell. Google reminds me of the old top-down AI guys, who spent decades claiming that with another few million lines of LISP and a few more orders of magnitude of computing power (damn those hardware guys for holding us back), they could create a program that would pass the Turing test. After a while, people who got tired of being laughed at for saying they were in AI, took the bottom up approach. Forget sparkling conversation - let's see if we can create a robot that can run down the hall without running over your thesis adviser, and plug itself into the wall. Who's gotten further?

Re:What's wrong with Google cars (1)

phantomfive (622387) | 1 year,6 days | (#43467075)

I don't think the Ford features are going to be sold soon, though the article wasn't clear

Looks like Ford's selling it now [ford.com].

Re:What's wrong with Google cars (1)

ebno-10db (1459097) | 1 year,6 days | (#43467289)

Yeah, the active parking assist, which was on the Lincoln driven by the article's author. A quick search though suggests that the highway stuff on the Ford is also production. Unfortunately I don't have the time right now to be sure. Ok, score one (three?) for the bottom-up self-driving people.

Re:What's wrong with Google cars (1)

mlts (1038732) | 1 year,6 days | (#43467199)

The 2014 Sprinter [1] has this technology. It not only auto-brakes when someone tries a swoop and squat (easy insurance money in most states), but also compensates for wind, alerts the driver if there is someone in a blind spot, and can automatically follow a lane. IIRC, there is even an adaptive cruise control that doesn't just hold a speed, but automatically keeps a distance from the car in front, so cruise control can be used safely in more places.

[1]: Sprinters are weird beasts. Stick a Freightliner logo on the back of a new one, and the local homeowner's association starts sending notices that "company vehicles" are prohibited in driveways unless active work is being done. Pull two torx screws out of the flap between the double doors and attach a Mercedes logo, and the same HOA types now consider the vehicle an asset to the neighborhood due to the make.
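Distance-keeping cruise control of the kind described above is commonly framed as a time-gap policy: hold the set speed when the road is clear, otherwise track a gap proportional to your own speed. A rough sketch - the gap and gain values are purely illustrative, not Mercedes' actual parameters:

```python
def acc_target_speed(set_speed_mps, lead_distance_m, own_speed_mps,
                     time_gap_s=1.8, k=0.4):
    """Adaptive cruise control as a time-gap follower (illustrative).

    Holds `set_speed_mps` when no lead car is detected; otherwise
    adjusts speed in proportion to how far the lead car sits from the
    desired gap (time_gap_s seconds of travel at current speed).
    """
    if lead_distance_m is None:          # no car ahead detected
        return set_speed_mps
    desired_gap = time_gap_s * own_speed_mps
    error = lead_distance_m - desired_gap
    # Positive error: gap is comfortable, creep toward the set speed.
    # Negative error: too close, shed speed.
    adjusted = own_speed_mps + k * error / max(time_gap_s, 0.1)
    return max(0.0, min(set_speed_mps, adjusted))
```

With a clear road it simply returns the set speed; tucked in behind a slower car it converges to whatever speed maintains the time gap, which is why such systems can be used safely in traffic where plain cruise control can't.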

Re:What's wrong with Google cars (2)

phantomfive (622387) | 1 year,6 days | (#43467377)

Stick a Freightliner logo on the back of a new one, and the local homeowner's association starts sending notices that "company vehicles" are prohibited in driveways unless active work is being done. Pull two torx screws out of the flap between the double doors and attach a Mercedes logo, and the same HOA types now consider the vehicle an asset to the neighborhood due to the make.

Wow, yet another reason to stay away from HOAs.

Re:What's wrong with Google cars (0)

Anonymous Coward | 1 year,6 days | (#43466675)

No system can yet match a human driver's ability to respond to the unexpected, and sudden failure could be catastrophic at high speed.

No system can match a human driver's ability to over react to the unexpected and cause a serious failure.

Re:What's wrong with Google cars (1)

icebike (68054) | 1 year,6 days | (#43466809)

What road do you know of where a human hasn't driven the route at least once?

The street view cars have driven the bulk of roads in most major US cities at least once documenting everything along the way.

Left unsaid by Google is how many human interventions were required (for whatever reason).

Re:What's wrong with Google cars (0)

CanHasDIY (1672858) | 1 year,6 days | (#43467109)

The street view cars have driven the bulk of roads in most major US cities at least once documenting everything along the way.

... which would work perfectly, if not for the fact roads deteriorate, get re-routed, have lanes shut down for construction, become flooded, etc., etc., etc...

Left unsaid by Google is how many human interventions were required (for whatever reason).

My guess is, more than what Google would consider acceptable, hence the tight lips.

Re:What's wrong with Google cars (1)

sl149q (1537343) | 1 year,6 days | (#43467829)

Can't find the article, but there was a mention that in the (small number of) incidents where a human operator took over control, they reviewed the logs afterwards and in all cases the computer would have taken either the same or an equally safe action. These were mostly related to pedestrians and jaywalking, if I recall.

Re:What's wrong with Google cars (2)

Kjella (173770) | 1 year,6 days | (#43466855)

Impressive and touching as this demonstration is, it is also deceptive. Google's cars follow a route that has already been driven at least once by a human, and a driver always sits behind the wheel, or in the passenger seat, in case of mishap. This isn't purely to reassure pedestrians and other motorists. No system can yet match a human driver's ability to respond to the unexpected, and sudden failure could be catastrophic at high speed.

Google has cars driving around almost everywhere for their map feature, I'd have no problems with a first edition limited to what they already know. And they're legally obligated to have a driver ready to take over, even if they wanted to go solo. Miiiiiiiiinor detail.

Re:What's wrong with Google cars (-1, Troll)

CanHasDIY (1672858) | 1 year,6 days | (#43467179)

Impressive and touching as this demonstration is, it is also deceptive. Google's cars follow a route that has already been driven at least once by a human, and a driver always sits behind the wheel, or in the passenger seat, in case of mishap. This isn't purely to reassure pedestrians and other motorists. No system can yet match a human driver's ability to respond to the unexpected, and sudden failure could be catastrophic at high speed.

Google has cars driving around almost everywhere for their map feature, I'd have no problems with a first edition limited to what they already know. And they're legally obligated to have a driver ready to take over, even if they wanted to go solo. Miiiiiiiiinor detail.

Scenario - you get one of the first automated cars, and take it on a test drive down a road that has been mapped by one of Google's Streetview cars.

Unfortunately, it rained a LOT since the Streetview image was taken, and part of your route is 4 feet under water.

What happens next? I know, you, being a responsible person, will take control and stop the vehicle once you notice it's about to drive right into a lake. But what about the less responsible passengers (not really a driver if you aren't controlling the vehicle, regardless of what seat you occupy), who will be too busy Tweeting about how 'OMG Car iz driving itself!' instead of being prepared to take manual control?

If you really think about it, that "miiiiiinor detail" is actually a pretty damn major one.

Re:What's wrong with Google cars (1)

femtobyte (710429) | 1 year,6 days | (#43467305)

I bet that's also why Google has these people called "engineers" who worry about "miiiiiiinor details" like unsafe driving conditions - shallow water flowing across the road, and the hydroplaning that follows when an idiot human driver thinks "what harm can three inches of water do?", causes a lot of accidents - probably to a greater extent than even experienced drivers do. And, once a "detail" has been identified and solved, it's solved 100% of the time for 100% of the drivers (no matter how many people die each year from water across the road, other drivers don't learn; but a firmware update will instantly teach every driverless car to be more careful). At most, there will be one carload of deaths from each "unexpected" detail, rather than the same problem killing hundreds over and over again.

Re:What's wrong with Google cars (5, Insightful)

compro01 (777531) | 1 year,6 days | (#43467323)

What happens next?

The driving computer sees the 4 feet of water ahead using the cameras/radar and stops because it determines the water is too deep to ford?

Re:What's wrong with Google cars (1)

sl149q (1537343) | 1 year,6 days | (#43467881)

Oh, so I guess you have come up with the ONE scenario that Google's engineers haven't thought about, or haven't considered as a sub-case of a generic problem (obstruction of the current route) requiring action. I suggest you send your resume to Google ASAP! They obviously haven't thought this through at all.

Re:What's wrong with Google cars (1)

ColdWetDog (752185) | 1 year,6 days | (#43467099)

So, most trips in cars are repetitive - you drive to work every day. Your Google-O-Matic could 'learn' the route over time, follow your driving habits, confer with other Google-O-matics about their experiences. Maybe the first time you drive a route, you go manual. Not such a big deal.

Boats have had autopilots for years. Most of them are pretty primitive. Planes likewise. The captains of both devices are responsible for the vessel at all times. Same as with an autonomous car. The driver decides when / if to go autopilot.

Now the car just might come back with "I'm sorry Dave, I can't do that", but I just don't see a scenario where the human being is completely out of the loop. Even the 'Johnny Cab' scenario is going to have a dispatcher / remote driver supervising the taxi.

Re:What's wrong with Google cars (2)

sl149q (1537343) | 1 year,6 days | (#43467811)

The Google cars don't REQUIRE a human. They CAN operate fine without one. They MAY NOT operate without one.

They have a human behind the wheel so that they comply with the various licensing regimes. As long as there is a human behind the wheel capable of assuming control the current laws in most places are fine with the computer controlling the car.

Hm (0)

Anonymous Coward | 1 year,6 days | (#43466531)

They don't need full autonomy to be useful. The ability to follow close behind a human-driven lead car in a caravan is enough to start with.

Great AI (2, Funny)

Anonymous Coward | 1 year,6 days | (#43466533)

Imagine if you had a car with a great AI, better than what is out there today. You just tell it to drive somewhere and it does. It never gets lost, knows where all addresses are, knows how to park, etc. Basically everything.

There'd still be people that did things like, "It seemed to be going too fast so I slammed on the brakes and the car spun out of control and into a ditch. If it weren't for your AI, this never would have happened! I want a million dollars." or "I was sitting in the driver's seat drunk off my ass with my hands on the wheel and pretending to steer the car. Your AI drove the car into a school bus full of nuns and now everyone is accusing me of being a drunk driver, I want a million dollars!" or maybe even "Your AI car was trying to kill me, so I had to run it off the road and set it on fire, killing the person who was also inside. Now the police are charging me with murder. Had your AI not been sent back in time to kill the humans, I wouldn't have had to do this!"

Look at the case history of cruise control, for example. It was a big thing to automatically claim that cruise control fucked up and caused you to drive to the bar, get drunk, and then try to drive home.

Its not here yet but. (3, Insightful)

maliqua (1316471) | 1 year,6 days | (#43466547)

I like to think of them more as personal variable-path trains. What's really needed to make it work is a road infrastructure designed around this. When the focus moves away from AI that can replace a regular driver and toward a combination of smart roads and smart cars that work together, then we will have what the hype is suggesting.

Re:Its not here yet but. (1)

CanHasDIY (1672858) | 1 year,6 days | (#43466769)

What's really needed to make it work is a road infrastructure designed around this.

Came here to say that myself.

Also, the issue of liability is another major barrier; until the government figures out who to blame when something goes awry and one of these things causes damage to life/property, you can bet your bonnet that Uncle Sam will not allow driverless cars on "his" streets.

Re:Its not here yet but. (1)

Firethorn (177587) | 1 year,6 days | (#43467071)

Uncle Sam already allows driverless cars; it's just that they're all test platforms 'insured' by the company doing the developing. It's for liability purposes that somebody is currently behind the wheel of the Google cars.

Still, there are a number of possibilities I can see. Much like how the federal government set down specific rules on how nuclear power liability will be addressed, the same can be done with cars. I see several possibilities (but I'm writing this on the fly, so I'll probably miss stuff):
1. Manufacturer limited liability: Insurance is currently only required to be $100k per person, $300k per incident in many states. Maybe limit that to $250k per person, $500k per incident per vehicle (my insurance). Relatively speaking, this screws anybody harmed by an auto-driving car no worse than they currently are with a person driving.
2. No-fault insurance: Some states already require that you get insurance more for yourself, treating other people that hit you as random factors to be insured against. Has the problem that it doesn't fully charge bad drivers for the cost of their bad driving.
3. Make the 'operator'/owner be at fault no matter what - basically the current situation; 'my autodrive did it!' doesn't make you not responsible as the owner of the vehicle. Insurance rates *should* drop for well-performing auto-drive cars.

The way I see it going:
1. The manufacturer of the autodrive system proves that it is more capable than the *average* driver. Its accidents will probably differ from human drivers': fewer 'fast twitch' accidents, where faster reactions can prevent the crash, but more 'dumbass' accidents, where a competent human would have spotted the danger from miles away.
2. Said systems are mandated for the most at-risk drivers - people with DUIs on record (replacing breath-test systems), too many points, or just a plain history of extreme bad driving.
3. Insurance companies figure out that autodrive cars are better, so especially if you have strikes against your record, shifting to an auto-drive vehicle saves you a lot of money on car insurance.
4. It spreads as people who don't like driving but have to travel invest in the systems.

I've calculated the value of an autodrive system as being worth $5-15k, between fewer accidents (I figured on half as many), reclaimed time, and higher fuel efficiency.
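That $5-15k figure can be sanity-checked with a quick back-of-envelope script. Every constant below is an illustrative assumption, not a figure from the parent post:

```python
# Rough estimate of what an autodrive system could be worth to one owner
# over a car's life. All constants are illustrative assumptions.

YEARS = 10                      # assumed ownership period
HOURS_DRIVEN_PER_YEAR = 350     # ~12,000 miles at ~34 mph average
TIME_RECLAIMED_FRACTION = 0.5   # fraction of driving time freed for other things
VALUE_OF_TIME = 5.0             # $/hour an owner might assign to reclaimed time

ACCIDENT_COST_PER_YEAR = 400.0  # expected crash cost baked into premiums
ACCIDENT_REDUCTION = 0.5        # "half as many" accidents, per the parent post

FUEL_COST_PER_YEAR = 1800.0     # ~12,000 miles at ~$0.15/mile
FUEL_SAVINGS = 0.10             # smoother throttle use, platooning, etc.

time_value = YEARS * HOURS_DRIVEN_PER_YEAR * TIME_RECLAIMED_FRACTION * VALUE_OF_TIME
accident_value = YEARS * ACCIDENT_COST_PER_YEAR * ACCIDENT_REDUCTION
fuel_value = YEARS * FUEL_COST_PER_YEAR * FUEL_SAVINGS
total = time_value + accident_value + fuel_value

print(f"time ${time_value:,.0f} + accidents ${accident_value:,.0f} "
      f"+ fuel ${fuel_value:,.0f} = ${total:,.0f}")
```

With these made-up inputs the total lands around $12,550, inside the quoted range; reclaimed time dominates, which is why estimates like this are so sensitive to how you value an hour.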

Manufacturer limited liability (1)

Firethorn (177587) | 1 year,6 days | (#43467083)

Oh yeah, forgot part of #1: As part of making the manufacturer liable for accidents caused by the AD system, even limited, the manufacturer would build the liability into the price of the system, enabling dirt-cheap insurance if you can afford the auto-drive.

Field tests prove them wrong? (1)

mveloso (325617) | 1 year,6 days | (#43466577)

Google has apparently been using this technology for its Street View cars, which would seem to meet all the straw-man requirements of the article.

So...do they have more bogus requirements that need to be met?

Driverless cars don't need to be able to handle any possible situation. Most drivers can't handle those situations either - witness the large number of accidents that happen every day. The driverless cars just have to be better than human drivers, and have superior liability coverage.

Re:Field tests prove them wrong? (1)

Hentes (2461350) | 1 year,6 days | (#43466735)

I haven't seen Google participate in any independent test. They are making big claims, but I wouldn't just accept the word of a company praising its own product when it has failed to provide proof for years.

How about semi-automated, remotely piloted? (1)

cnaumann (466328) | 1 year,6 days | (#43466599)

If the car's AI cannot handle the situation, control of the car could be transferred to a central location where a human could take over. Another option would be to get the car to a safe spot and have a human come out and take over.

Also, the cars don't need to go anywhere at any time under any conditions to be useful; they just need to be able to follow pre-determined courses safely. In the event of an accident, detour, heavy traffic, or even bad weather, the automatic driving cars could be sent home or told to stay home.

The big market I see is getting elderly people to and from simple destinations like the grocery store, the doctor's office, etc. If the driving conditions are not ideal, the trip can be canceled.

remote lag / data coverage / tower roaming (0)

Anonymous Coward | 1 year,6 days | (#43466641)

Remote lag / data coverage / tower roaming / other network issues make that a bad idea. And just think of what a hacker could do with that.

And overseas lag (if they put the center there) or even satellite lag is too high.

Headline (1)

Beorytis (1014777) | 1 year,6 days | (#43466701)

Really, no one read the headline and thought: "They're a long way down the road because they're driving away without us!!!"

Re:Headline (1)

neminem (561346) | 1 year,6 days | (#43467171)

I didn't, but I'd give you +1 funny for the visual (if I had any mod points, and if I hadn't already posted a comment :p)

Not about fully automated cars (3, Informative)

Animats (122034) | 1 year,6 days | (#43466803)

The article is about semi-automated cars, not fully automated ones. Semi-automated cars are iffy. We already have this problem with aircraft, where the control systems and the pilot share control. Problems with being in the wrong mode, or incorrectly dealing with some error, come up regularly in accident reports. Pilots of larger aircraft go through elaborate training to instill the correct procedures for such situations. Drivers don't.

A big problem is that the combination of automatic cruise control (the type that can measure the distance to the car ahead) plus lane departure control is enough to create the illusion of automatic driving. Most of the time, that's good enough. But not all the time. Drivers will tend to let the automation drive, even though it's not really enough to handle emergency situations. This will lead to trouble.

On the other hand, the semi-auto systems handle some common accident situations better than humans. In particular, they react to sudden braking by the car ahead faster than a human can.

The fully automatic driving systems have real situational awareness - they're constantly watching in all directions with lidar, radar, and cameras. The semi-auto systems don't have that much information coming in. The Velodyne rotating multibeam LIDAR still costs far too much for commercial deployment. (That's the wrong approach anyway. The Advanced Scientific Concepts flash lidar is the way to go for production. It's way too expensive because it's hand-built and requires custom sensor ICs. Those problems can be fixed.)
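The reaction-time point is easy to put in rough numbers. A minimal sketch, assuming highway speed, dry-road braking, and typical perception-reaction times (all assumed values, not measurements):

```python
# Stopping distance = reaction distance + braking distance.
# Assumptions: ~70 mph (31.3 m/s), dry-road deceleration ~8 m/s^2,
# ~1.5 s typical human perception-reaction time vs ~0.2 s for a
# radar-triggered automatic braking system.

def stopping_distance_m(speed_ms, reaction_s, decel_ms2=8.0):
    """Reaction distance plus idealized constant-deceleration braking."""
    return speed_ms * reaction_s + speed_ms**2 / (2 * decel_ms2)

speed = 31.3  # m/s, about 70 mph
human = stopping_distance_m(speed, reaction_s=1.5)
robot = stopping_distance_m(speed, reaction_s=0.2)

print(f"human: {human:.0f} m, automatic: {robot:.0f} m, "
      f"difference: {human - robot:.0f} m")
```

With these assumed numbers the braking physics is identical for both; the reaction-time gap alone is worth roughly 40 m of stopping distance at highway speed - several car lengths.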

Re:Not about fully automated cars (1)

ebno-10db (1459097) | 1 year,6 days | (#43467133)

The fully automatic driving systems have real situational awareness - they're constantly watching in all directions with lidar, radar, and cameras.

They have the sensory information needed for situational awareness, which can be a long way from having situational awareness.

Re:Not about fully automated cars (0)

Anonymous Coward | 1 year,6 days | (#43467355)

I've thought it would be interesting to try using a "light-field camera [wikipedia.org]" to retrieve depth information. One advantage is that, thanks to a smaller sensor, you don't disturb the car's aerodynamics as much as a LIDAR sensor does. There is also potential for light-field cameras to be manufactured more cheaply than LIDAR sensors (I've got no idea which will become cheap first).

Re:Not about fully automated cars (1)

Solandri (704621) | 1 year,6 days | (#43467783)

The article is about semi-automated cars, not fully automated ones. Semi-automated cars are iffy. We already have this problem with aircraft, where the control systems and the pilot share control. Problems with being in the wrong mode, or incorrectly dealing with some error, come up regularly in accident reports.

I can't help but think we're going about this the wrong way. Automating transportation in 3D is really hard. Automating it in 2D is a lot easier. Automating it in 1D is dirt simple.

Perhaps what we should be doing is building a rail system to replace the highway system. People can still drive the shorter, more complicated routes. But if you're going on a trip from the suburbs where you live to the city where you work, once you get on the highway the car (and all others) get put on a rail. While on the highway it drives itself and you're free to read the paper, put on your makeup, eat breakfast, whatever.

Don't worry there's a human! (0)

Anonymous Coward | 1 year,6 days | (#43466839)

"Don't worry, there's a human to take over!"

Am I the only one who always finds those claims laughable? No one* is going to be paying attention to react to the unexpected better than the computer-controlled car. That's like asking if you're always ready to take over the car as a passenger.

Of course you can do it, but the computer will have already slammed on the brakes before you noticed the issue. A human suddenly taking over will almost surely result in disaster.

*yes yes, racecar drivers etc for pedants

If a human has to be in the driver's seat (4, Insightful)

Spy Handler (822350) | 1 year,6 days | (#43467043)

ready to take over in case of an emergency, what is the point of the whole thing?

And assuming the human will be tweeting on his Facebook Amazon phone with his hands nowhere near the steering wheel and feet propped up on the dashboard, how is he going to take over control of the car in a split second when an emergency occurs? He can't. So that means he will have to be alert and in a ready driving posture and paying attention to the road like he's really driving. But then what is the point? Might as well have him drive it himself and save money by not buying the Google Car stuff in the first place.

Either make a car that can go 100% without a human driver, or go back to web advertising and forget about the whole thing.

Re:If a human has to be in the driver's seat (0)

Anonymous Coward | 1 year,6 days | (#43467195)

This * 100.

Re:If a human has to be in the driver's seat (0)

Anonymous Coward | 1 year,6 days | (#43467325)

It's safe to say that's just legal bullshit/trying to avoid scaring off technophobes. You can't take it over in a realistic scenario, as you noted.

I suspect it's partially so the legal rulings on purely self-driving cars are pushed back a bit. I don't think we (especially lawyers and politicians, yay!) are ready to deal with the legalities of driverless cars yet, so delaying that is a good idea.

Re:If a human has to be in the driver's seat (1)

joe_frisch (1366229) | 1 year,6 days | (#43467661)

I agree, and this is a very different situation from an automated aircraft. Things generally happen quite slowly in aircraft except for the few minutes around takeoff and landing during which times the pilots can be alert. In cruise flight if something goes wrong with the automation there are usually many seconds for the pilot to become aware of the problem and to take over. In the famous AF 447 flight it was several minutes between the beginning of the problem and the last time at which the aircraft could have been saved - a skilled crew should have been able to recover.

In a car when things go wrong there is often only a second or two to respond before impact. This is too fast for a human who is not already paying attention to gain situational awareness and to react. It is completely unreasonable to expect that a driver will be paying constant attention for the entire drive while the computer is driving the car - I'm sure that most people would much rather drive themselves.

Even if the automation is better on average than a human, there is still a responsibility issue. Who is at fault when a car swerves to avoid a trash bag in the street and hits a child? The driver? The auto manufacturer? The programmer who designed the image recognition system? The cars will not be perfect - thousands of people will die, and there will be constant lawsuits.

Re:If a human has to be in the driver's seat (1)

sl149q (1537343) | 1 year,6 days | (#43467927)

Unlike a single-focus human, fully automated cars have many more sensors and a much wider (360-degree) field of vision through multiple cameras and technologies (LIDAR).

So in your scenario, the more probable outcome is that the car would recognize both the bag of trash and the child for what they are and take out the trash. (No pun intended!)

Re:If a human has to be in the driver's seat (1)

lgw (121541) | 1 year,6 days | (#43467759)

There are plenty of situations where a human driver could reasonably take over, with many seconds of warning: when the car starts having trouble with its sensors (or the lane markers disappear), when the map and reality diverge, and so on. The "sudden surprises" need to be 100% computer-handled on auto-pilot, but that's a whole different problem than the system's failsafes detecting that normal operation has degraded for whatever reason, including overdue maintenance.

It's simple (0)

Anonymous Coward | 1 year,6 days | (#43467143)

There will never be a "self-driving" car because nobody can afford the liability of selling an autonomous car. In any accident, the manufacturer will obviously get sued. You always have to have a human "in control" so you have someone to point the finger of blame at.

Re:It's simple (2)

lgw (121541) | 1 year,6 days | (#43467793)

There will never be a "self-driving" car because nobody can afford the liability of selling an autonomous car. In any accident, the manufacturer will obviously get sued. You always have to have a human "in control" so you have someone to point the finger of blame at.

That's blatantly false. The price of the liability insurance will just be folded into the price of the car. Where manufacturers would worry is getting sued for more than would normally be the case, for any given accident, but I'm sure they can buy protection from that from their favorite senator cheaply enough.

Easy Problem (0)

Anonymous Coward | 1 year,6 days | (#43467241)

Constructing reliable, efficient self-driving cars is a solved problem: they're called trains. Now if only some of the major industrialized powers were more aware of their potential.

Other roadblocks (1)

Kwyj1b0 (2757125) | 1 year,6 days | (#43467485)

There are other fundamental problems with driverless cars: the illusion of control, exchange of information, and unmodeled conditions.

The illusion of control is why some people are scared to death of flying but feel confident getting behind the wheel: even though the stats say people are safer in an aircraft than on the road, people hate to give up control. Even if that control is just an illusion (you can do little if a drunk driver slams into your car).

The exchange of information is crucial to any large-scale automated technology. Right now, you try to infer the intentions of other drivers and account for them. If someone ahead of you turns on their indicator lights, you slow down and give them space (except if you live in a major city, where you speed up to avoid giving up an inch - God, I hate San Diego and LA drivers). If every automated car tries a greedy algorithm, it will be havoc.

The unmodeled conditions are the outliers that might not occur often, but a human is much better equipped to deal with them. What happens in a storm when some underpass might be flooded? A human would take a look out the window and decide to try a different route from the start. This could come under exchange of information as well - basically an automated system cannot make "judgement calls". It needs accurate information to function.

Given information, the technology today has enough bandwidth to control an unstable aircraft into a standing position; I think we can handle a car on a road. A basic problem relates to how information is collected and shared - an offline car is limited by having only local information, and little predictive abilities.

sod cars (1)

Anonymous Coward | 1 year,6 days | (#43467679)

I want the self driving bus. Cheap and efficient buses endlessly cruising the countryside, ideally stopping for a mobile phone request. Who would need a car or a parking space? Life would be wonderful!

Driving is a human activity (1)

MichaelSmith (789609) | 1 year,6 days | (#43467733)

To be an effective driver, an AI would need to understand my hand signals. Until then it is not safe for it to be on the road when I am on my bike.
