
Software Bug Caused Qantas Airbus A330 To Nose-Dive

Unknown Lamer posted more than 2 years ago | from the bugs-on-a-plane dept.


pdcull writes "According to Stuff.co.nz, the Australian Transport Safety Bureau found that a software bug was responsible for a Qantas Airbus A330 nose-diving twice while at cruising altitude, seriously injuring 12 people and causing 39 to be taken to the hospital. The event, which happened three years ago, was found to be caused by an airspeed sensor malfunction, linked to a bug in an algorithm which 'translated the sensors' data into actions, where the flight control computer could put the plane into a nosedive using bad data from just one sensor.' A software update was installed in November 2009, and the ATSB concluded that 'as a result of this redesign, passengers, crew and operators can be confident that the same type of accident will not reoccur.' I can't help wondering just how a piece of code, which presumably didn't test its input data for validity before acting on it, could become part of a modern jet's onboard software suite?"


What about Google driverless car? (1, Insightful)

InsightIn140Bytes (2522112) | more than 2 years ago | (#38430694)

The worst part is that Google wants to build a driverless car [wikipedia.org]. Airline pilots have been trained to react to emergencies in a calm manner, and they have time to do so while in the air. Neither is true for cars. People will panic when something goes wrong, and there won't be any time to react. Your life (and others' lives) will be completely dependent on the AI, and let's face it, there will be bugs. Google isn't exactly known for bug-free products. Hell, even NASA has bugs, and they spend billions so that there wouldn't be any. I just think it's a really bad idea, and Google is being irresponsible and malicious with such a project. Of course they will also hide some "we are not responsible for accidents in any way" disclaimer in some clause. Let me just say that sometime in the future we will be hearing how Google killed some innocent people and children.

Re:What about Google driverless car? (5, Insightful)

Anonymous Coward | more than 2 years ago | (#38430718)

sure, but the number of accidents will likely still be fewer than those caused by human drivers.

Re:What about Google driverless car? (1, Troll)

InsightIn140Bytes (2522112) | more than 2 years ago | (#38430744)

Still, it would most likely be your own fault. But with a Google driverless car it doesn't matter whether you're a good, careful driver or not, because you could get killed anyway. I know it's not always your own fault as things stand, but you can affect that. With a driverless car you cannot.

Re:What about Google driverless car? (5, Insightful)

Pikoro (844299) | more than 2 years ago | (#38430780)

Even on the road today this is an issue. Doesn't matter how good of a driver you are. If one other idiot on the road is driving crazy, you could get killed no matter how you drive. Weakest link and all that...

Re:What about Google driverless car? (4, Insightful)

mug funky (910186) | more than 2 years ago | (#38430888)

done much driving lately?

even if MS wrote the software, it'd easily be well inside the top 2 percent as far as driving skills go.

see how input data validation works in your brain when you're tired, drunk or just distracted?

Re:What about Google driverless car? (-1)

Anonymous Coward | more than 2 years ago | (#38430972)

Really? You're so fucking gay that you had to throw the Microsoft troll in? Please do fuck off.

Re:What about Google driverless car? (-1, Flamebait)

Anonymous Coward | more than 2 years ago | (#38431016)

Really? You're so fucking gay that you had to throw the Microsoft troll in? Please do fuck off.

Really? You're so fucking homophobic that you had to throw the 'gay' troll in? Please do fuck off.

Re:What about Google driverless car? (-1)

Anonymous Coward | more than 2 years ago | (#38431048)

And you're so fucking stupid that you had to feed the troll. You're probably a homophobic gay Microsoft-hating fanboi. Please do fuck off.

Re:What about Google driverless car? (-1)

Anonymous Coward | more than 2 years ago | (#38431134)

I think now would be a good time for all the homos to cum out of the closet...

Re:What about Google driverless car? (5, Insightful)

Anonymous Coward | more than 2 years ago | (#38430790)

Which is actually why Airbus relies on sensor input over the "pilot". Boeing believes the opposite. I'm inclined to believe Airbus, in that the majority of accidents are human error rather than computer error.

The problem with aviation accidents is the relatively small sample size. With cars there will be much better data (i.e. more data points).

If anything, computer-driven cars will be better: because of safety "fears" like the OP's, they will be programmed to be cautious. They have to be better at handling conditions than human operators, otherwise it's instant blame. They have to be better to the degree that they blow the stats out of the water, e.g. when the first computer-driven car hits a person, they need to be able to say "well, based on hours on the road, if a human had been driving it would have hit 30 kids by now".

Re:What about Google driverless car? (1, Troll)

0123456 (636235) | more than 2 years ago | (#38431092)

Which is actually why Airbus relies on sensor input over the "pilot". Boeing believes the opposite. I'm inclined to believe Airbus, in that the majority of accidents are human error rather than computer error.

Yeah, right. The computer is unable to fly the plane, so it suddenly dumps control into the hands of the pilot who has spent the last three hours drinking coffee, playing Angry Birds on his iPad and chatting up the head stewardess, and they crash. And it's 'human error'.

Re:What about Google driverless car? (5, Insightful)

slew (2918) | more than 2 years ago | (#38431166)

Which is actually why Airbus relies on sensor input over the "pilot". Boeing believes the opposite. I'm inclined to believe Airbus, in that the majority of accidents are human error rather than computer error.

Sometimes, in a flight like AF447, the computer doesn't know jack either and gives up the ghost. In the AF447 flight (equipment: Airbus A330), apparently, the pitot sensors gave inconsistent readings and the autopilot disengaged. What ensued was apparently what can happen when you have pilots that are error-prone and a computer that doesn't know what the hell to do to help them. In these situations, I think it's prudent to still have a system that defaults to the pilot, as if they knew what to do, when they know the sensors have crapped out, and apparently even Airbus agrees with this. Unfortunately, it appears that the AF447 pilots were not up to the challenge in this circumstance.

Re:What about Google driverless car? (-1)

Anonymous Coward | more than 2 years ago | (#38430720)

You're an idiot. Please castrate yourself and anyone related to you, you stupid mutt.

Re:What about Google driverless car? (-1, Offtopic)

HeavyDDuty (2506392) | more than 2 years ago | (#38430734)

calm down google fan-boy/employee/puppet.

Re:What about Google driverless car? (0)

Anonymous Coward | more than 2 years ago | (#38430726)

Ugh and humans don't ever screw up anything.

Re:What about Google driverless car? (4, Insightful)

Kenja (541830) | more than 2 years ago | (#38430730)

Can't be worse than the drivers out there.

Re:What about Google driverless car? (5, Interesting)

timeOday (582209) | more than 2 years ago | (#38430738)

But the best part is that once you fix a bug in an automated system, it's fixed forever, whereas a fresh new crop of novices hits the roads/skies every day.

There were people against airbags, too, because they killed some people who otherwise wouldn't have died. You work on fixing those things. But whether the system as a whole is worthwhile is judged on whether it saves more than it kills.

Re:What about Google driverless car? (-1)

Anonymous Coward | more than 2 years ago | (#38430798)

But the best part is that once you fix a bug in Windows it's fixed forever.

ftfy

Re:What about Google driverless car? (4, Interesting)

HeavyDDuty (2506392) | more than 2 years ago | (#38430832)

nothing in software is ever free of bugs. just because something is a bug-fix doesn't rule out the fix itself (or its side effects) introducing new bugs, or being an incomplete fix that just happens to pass whatever inadequate test was thrown at it.

Re:What about Google driverless car? (2)

tsotha (720379) | more than 2 years ago | (#38430834)

But the best part is that once you fix a bug in an automated system, it's fixed forever

Sure, the same way any bug is fixed forever. But software is still loaded with bugs. Even a completely bug-free system will accumulate bugs over time as the code is maintained and/or features are added.

Re:What about Google driverless car? (3, Insightful)

murdocj (543661) | more than 2 years ago | (#38430868)

Right, because bug fixes never introduce bugs. Code just keeps getting better and better and better.

Re:What about Google driverless car? (1)

DerekLyons (302214) | more than 2 years ago | (#38430946)

But the best part is that once you fix a bug in an automated system, it's fixed forever

Only so long as the software, hardware, operating system, real world environment (physical, regulatory, etc..) you operate in, etc... etc... remain fixed and unchanging forever.

we already fixed it. its called 'trains'. (5, Insightful)

decora (1710862) | more than 2 years ago | (#38430976)

the idea that a bunch of automatically piloted vehicles is somehow a better solution to city transport than mass transit boggles my mind.

real people do not have money to maintain their cars properly. things are going to break. there are not going to be 'system administrators' to fix all the glitches that come up when cars start breaking down after a few years.

there will be problems. do i know which problems? no, but i know the main problem.

arrogance amongst revolutionaries. it is historically a pattern of the human species. declaring that nothing could go wrong is usually a precursor to a lot of things going wrong. not because the situation was unpredictable, but because human beings in an arrogant mindset tend to make a lot of mistakes, be reckless, and try to cover their asses when things go wrong.

but successful engineering is the antithesis of arrogance. nobody worth his salt is going to say 'what could go wrong?' they are going to have a list of 500 things that could go wrong, and all the ways they have tried to counteract those things happening.

Re:we already fixed it. its called 'trains'. (0)

0123456 (636235) | more than 2 years ago | (#38431106)

the idea that a bunch of automatically piloted vehicles is somehow a better solution to city transport than mass transit boggles my mind.

While I agree that computer-controlled cars are a joke with current technology levels, the idea that trains are a better solution to city transport than cars boggles my mind. Then again, I spent a couple of years actually commuting to work by train and know just how much they suck ass.

Re:What about Google driverless car? (1)

jklovanc (1603149) | more than 2 years ago | (#38431032)

There is a good chance that the fix was to limit the degree to which the autopilot can dive the plane. Now wait till there is an accident because the readings were accurate and the plane didn't dive hard enough.

Re:What about Google driverless car? (4, Insightful)

Anonymous Coward | more than 2 years ago | (#38430748)

Are you seriously accusing Google of being malicious in developing a driver-less car? Do they have a stake in keeping the population numbers down or something?

While I agree that software will never be bug-free, it will quite possibly save many more lives, as human drivers are terrible. They are prone to panicking under pressure, misjudge distances, can't handle a car as efficiently as possible, take too many risks (swerving in and out of traffic, following too close), drive under the influence of drugs and alcohol, and get distracted by phones and screaming kids, among many other things that well-written and well-tested software could do better.

Do you also want pilots to fly planes manually at all times and remove auto-pilot since software can never be perfect?

Re:What about Google driverless car? (4, Insightful)

Delarth799 (1839672) | more than 2 years ago | (#38430756)

I know, those evil monsters and their desire to improve people's lives by inventing things. Since there might possibly be a bug that may cause issues, they should just stop and throw in the towel, right? I mean, humans are perfect drivers as-is, so why fix something that's not broken?

Re:What about Google driverless car? (1)

tapspace (2368622) | more than 2 years ago | (#38430768)

I think that new technology is always under more scrutiny. We don't need to worry about safe software practices degrading in the driverless car market for many years (by then we'll all be complacent with the technology).

Re:What about Google driverless car? (2)

Firehed (942385) | more than 2 years ago | (#38430800)

I trust Google's engineers not to get me killed more than I trust the vast majority of drivers, especially knowing how little it takes to get a driver's license. So far, the only incident involving one of Google's self-driving cars happened while a human was in control (i.e., it was sheer coincidence that it was one of those cars); statistically speaking, they're the safest vehicles currently in existence. At least software can be fixed; try as we might, we haven't yet fixed stupid. I'd look up how many accidents are mechanical failures versus human error, but this hotel internet connection sucks; I'd be willing to bet the vast majority are people's fault (and of the mechanical failures, most probably would have been preventable with proper maintenance).

That said, I won't be beta testing this one.

thats funny b/c google docs went down (1)

decora (1710862) | more than 2 years ago | (#38430996)

uhm, a few days in a row last year.

if you were being logical, you would say "i trust trains and subways more than the automobile-highway system. we should get rid of car subsidies and start building trains and bicycle paths everywhere"

Re:What about Google driverless car? (4, Insightful)

EdIII (1114411) | more than 2 years ago | (#38431080)

It's bad idea for a specific reason.

There are two "brains" that can operate the car. Google can make a pretty decent brain, but it is not going to come remotely close (in any way) to the human brain in terms of its ability to perceive the environment (sensors), make sense of it (pattern recognition), and put it all into context (experience, extrapolation).

Google will excel in reaction times and advanced planning. Through Google it will be possible to mitigate traffic by solving a very human problem, which is cooperation towards a common goal. Google could react faster, and with less overcompensation, to a car drifting into its lane.

Where Google will fall far short is recognizing the road rage in the driver next to it (beating his hands on the steering wheel and screaming), the lack of concentration (woman putting her lipstick on), etc. Putting those things in context and assigning risk to drivers next to you is not something Google will be able to do from its sensors. However, even the average driver is getting cues in so many ways about what is really going on around them.

The reason why it is a bad idea, is that while Google is operating, the human brain is off. It's not instant-on either. Driving is a constant level of concentration, even when it seems like you are doing it "subconsciously". From start to finish, the average driver is pretty aware of their surroundings and processing an impressive amount of data. A human brain will beat Google every time on those terms.

When Google fails, or "judges" the environment poorly, how quickly can the human brain come back online, evaluate the current environment, take control, and make the required adjustments?

Until the Google brain is able to fully replace a human brain, it is not a good idea to involve the two in a hybrid system. The lag between the two systems taking control from one another is just too great.

Self-parking is fine, and limited operations involving high efficiency traffic lanes where human control is not permitted will be fine. As long as the transition into those operations is in a time frame a human can deal with.

Example being: the human brain pulls the car along the high-efficiency traffic lane and "tags" the Google brain in to insert itself into the traffic. The Google brain then notifies the driver and validates proper control and awareness before exiting the traffic and turning control over to the human driver. Failure means Google pulls the car off into the emergency lane and brings it to a full stop.

Any other kind of operations just seems fundamentally unwise to me because of the hybrid nature and inherent limitations of Google's AI, advanced as it may be for now.

My threshold for letting a computer operate a car no differently than a human, is the computer can meet or exceed the human's ability in every respect. That is not true right now, and will not be true for decades.

You may trust a Google car more than the average driver, but that is only really true if the Google car also has no driver.

Re:What about Google driverless car? (5, Interesting)

Geldon (444090) | more than 2 years ago | (#38430816)

It's so interesting to see people's reaction to the whole driver-less car thing. It's incredible to see the kind of ethical thought-experiment that must necessarily go through everyone's mind when they come to this conclusion: How many lives must be saved before I will tolerate someone being brutally slain by a malfunctioning computer?

Every day, children are run down by drivers who are not paying attention, tired, drunk, or just plain don't have time to react. Since a driver-less car is incapable of being drunk, tired, or distracted, then it's a safe bet that they'll be much better at avoiding those accidents that can be avoided. But the reality is that the latter scenario (no time to react) would still lead to the deaths of many children (and others!).

At what point does it become "worth it"? When the driver-less car causes 1/10th as many fatalities? 1/100th? 1/1,000th? How many human deaths must be prevented by letting computers drive cars before we're willing to accept 1 single death by those same computers?

It's a real-life example of the "Trolley Problem"

http://en.wikipedia.org/wiki/Trolley_problem [wikipedia.org]

Re:What about Google driverless car? (0)

Anonymous Coward | more than 2 years ago | (#38430948)

Are we really reduced to paraphrasing Star Trek: Insurrection now?

it wasnt worth it to build mass transit systems (0)

decora (1710862) | more than 2 years ago | (#38431024)

we already know that big passenger cabins, when situated on rail systems, do not veer off into playgrounds or farmers markets and kill people.

we already made the decision to abandon those systems in favor of the deadly automobile, which kills 30,000 people a year.

now, you want to convince me that Google's "driver less car" is so wonderful because, "think of the children". I did think of the children. big industry, big oil, big auto, and corrupt governments decided to say "fuck the children", abandoned public transit, and went with the mass-car culture we have, on purpose, deliberately, to make money.

so you want me to trust google, another huge, faceless corporation, whose only duty is to its shareholders, to make them a profit. and you expect me to believe that they are doing this 'for the children'? if we cared about the children, we have solutions already, and we simply chose not to spend money on them, because it wasnt profitable enough for hedge funds and investment bankers.

i will believe google cares about 'the children' when it does something about e-waste farms in china, child laborers in the mines in africa, etc etc etc.

not when it makes 'driverless cars' to appease some people who spent too much time watching "beyond 2000" when they were kids.

Re:it wasnt worth it to build mass transit systems (1)

0123456 (636235) | more than 2 years ago | (#38431148)

big industry, big oil, big auto, and corrupt governments decided to say "fuck the children", abandoned public transit, and went with the mass-car culture we have, on purpose, deliberately, to make money

Yes, it's all the fault of BIG OIL, and nothing to do with the fact that public transit sucks ass.

Re:What about Google driverless car? (4, Insightful)

SendBot (29932) | more than 2 years ago | (#38430842)

have you SEEN the way meatware AI operates a car? At least a google driverless car would use its turn signal before suddenly jerking into a turn and trying to kill me on a bike with a right hook.

Speaking of faulty sensors, that's pretty much what goes down when meatware AI has a certain alcohol content. Or uses a cellphone. Or eats fast food. Or puts on makeup. Or deals with newer meatware instances in the back seat. Or looks down to adjust the radio. Or falls asleep. Or is distracted in thought. Or....

Re:What about Google driverless car? (4, Informative)

engun (1234934) | more than 2 years ago | (#38430862)

Your post is full of FUD. A million people die annually because of human drivers. A driverless car killing half that many would still be an improvement.
www.un.org/ar/roadsafety/pdf/roadsafetyreport.pdf

Re:What about Google driverless car? (3, Interesting)

TubeSteak (669689) | more than 2 years ago | (#38431180)

A million people die annually because of human drivers. A driverless car killing half that many would still be an improvement.

When a human driver kills another human being, the courts can punish that person and allow for the victim's family to claim compensation.
When a driverless car kills a human being... ?

Maybe we could copy the system we have for vaccines [wikipedia.org]

Re:What about Google driverless car? (4, Interesting)

thisnamestoolong (1584383) | more than 2 years ago | (#38430956)

This is such a common fallacy -- we would expect an AI driver to be fucking perfect before we would ever call it "safe". Sure, they will have bugs, and people will die. But they will have nowhere near as many bugs as the meat computer that we have in our heads. Amazing as it is, the human brain is simply not meant for the types of tasks that we often apply it to, and as such, tens of thousands of people die on the road each year. Even if the adoption of driverless cars cut that down to 1% of the current death rate, people would still be screaming about the cars killing us. George Carlin was right; some people are really fuckin' stupid.

Re:What about Google driverless car? (0)

Anonymous Coward | more than 2 years ago | (#38430964)

You are saying this technology can never be done safely. If google were to spend 75 years perfecting it, could it still not be done? If it was worked on for 500 years could we still not have driverless cars?

They most certainly are not irresponsible for researching a technology that is inevitable and will be implemented in the not so distant future.

If this technology, even buggy, cut traffic fatalities down to 1% of current rates, would you condemn them for those who died without praising them for all the people who were saved? Your morality is childish and simplistic. Try thinking about the situation in a more sophisticated manner and perhaps you will come up with something worth reading.

Re:What about Google driverless car? (0)

tqk (413719) | more than 2 years ago | (#38431116)

The worst part is that Google wants to build a driverless car.

I wish I had mod points. OFF TOPIC!

I can't help wondering just how a piece of code, which presumably didn't test its input data for validity before acting on it, could become part of a modern jet's onboard software suite?

That's on topic!

I don't even use Google, but it's pretty trollish to bring them up on something like this.

not responsible will not get them out of criminal (1)

Joe_Dragon (2206452) | more than 2 years ago | (#38431150)

Not being responsible will not get them out of criminal liability, and the auto insurance companies have their own legal teams that can fight such clauses as well.

Now, what if someone gets killed or hurt outside of the automated car? I don't think "we are not responsible for accidents in any way" will work for someone in their own car or on the street. And why sue the owner of the car (who may not be the driver) anyway, when you can go after the deep pockets of Google?

If someone does get killed, there may be an accidental-death investigation, and the people who made the software could end up facing criminal negligence charges in court.

maybe... (0)

Anonymous Coward | more than 2 years ago | (#38430710)

same way presumable don't proof read.

Re:maybe... (1)

MichaelKristopeit420 (2018880) | more than 2 years ago | (#38430722)

slashdot = stagnated

Re:maybe... (-1)

Anonymous Coward | more than 2 years ago | (#38430936)

you = gay nigger

how it became part of the suit (1)

Anonymous Coward | more than 2 years ago | (#38430716)

Presumably, in the same way a story with the phrase "software suit" got posted to the front page of Slashdot without being about some sort of matrix-like cyberworld.

Bad software (5, Funny)

Hadlock (143607) | more than 2 years ago | (#38430732)

I can't help wondering just how could a piece of code, which presumable didn't test its' input data for validity before acting on it, become part of a modern jet's onboard software suit?"

This, from the same company that, while building the A380 megajet, decided to upgrade half of its facilities to plant software version 5 while the other half stuck with version 3/4, and did not make the file formats compatible between the two versions, resulting in multi-month production delays.
 
Point being, in huge projects, simple things get overlooked (with catastrophic results). My favorite is when we slammed a $20 million NASA/ESA probe into the surface of Mars at high speed because some engineer forgot to convert mph into kph (or vice versa).

Re:Bad software (5, Informative)

RealGene (1025017) | more than 2 years ago | (#38431044)

My favorite is when we slammed a $20 million NASA/ESA probe into the surface of Mars at high speed because some engineer forgot to convert mph into kph (or vice versa).

No, it was that two different pieces of software were used to calculate thrust. The spacecraft software calculated thrust correctly in newton-seconds. The ground software calculated thrust in pound force-seconds, contrary to the software interface specification, which called out newton-seconds. The result was that the ground-calculated trajectory was more than 20 kilometers too close to the surface. The engineers didn't "forget to convert"; they failed to read and understand the specifications.
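
To put that correction in numbers (a hypothetical Python sketch; the 100 lbf·s firing is made up, only the conversion factor is real): reporting impulse in pound force-seconds where the interface spec said newton-seconds understates every thruster firing by a factor of about 4.45.

    # One pound force-second expressed in newton-seconds.
    LBF_S_TO_N_S = 4.448222

    impulse_lbf_s = 100.0            # hypothetical thruster firing
    misread_as_n_s = impulse_lbf_s   # the bug: number kept, unit label swapped
    actual_n_s = impulse_lbf_s * LBF_S_TO_N_S

    print(actual_n_s / misread_as_n_s)  # ~4.45: each firing understated, the
                                        # errors accumulating into a trajectory
                                        # tens of kilometres too low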

Re:Bad software (0)

Anonymous Coward | more than 2 years ago | (#38431086)

It's probably more a lack of testing than bad software. If the airspeed sensor tells you that you are flying too slow and are going to stall, you need to dive to gain speed. That's simple, logical programming.
Now, the fact that the information from multiple sensors was not aggregated through some form of consensus is a problem that should have been detected by proper testing. Programming in the presence of failures is much harder to test, especially as you can only tolerate a certain number of faults.

It is the Programmer's Fault (1, Funny)

Anonymous Coward | more than 2 years ago | (#38430740)

It is clearly the fault of the programmer. They should be held liable for incidents like this. Management tries their best, but ultimately it always comes down to the coders. The company can be protected by the ToS, but not the lazy programmers.

Re:It is the Programmer's Fault (0)

Anonymous Coward | more than 2 years ago | (#38430750)

Ok! Ok! I must have, I must have put a decimal point in the wrong place or something. Shit. I always do that. I always mess up some mundane detail.

Re:It is the Programmer's Fault (0)

Anonymous Coward | more than 2 years ago | (#38430804)

Well this is not some mundane detail Michael!!!

Outsourcing is bad. (-1, Offtopic)

Zaldarr (2469168) | more than 2 years ago | (#38430746)

Qantas has had a lot of problems lately. A Google search for Qantas will reveal the dozens of mechanical faults that have very nearly killed people. Over here in Australia we get a report of this sort of shit happening every week or so. It's surreal, because Qantas (as Rain Man will attest) has been THE world's safest airline. The problem is that they moved all their labour and expertise out to Malaysia, using substandard parts and engineering to save cash, rather than doing the job properly with Australian parts and expertise. Obviously they've hired some cheap IT guys as well. They need to stop this and bring the fleet's maintenance back home, or this is just going to keep happening.

Re:Outsourcing is bad. (1)

Anonymous Coward | more than 2 years ago | (#38430794)

The cheap IT guys Qantas may have hired are irrelevant. The flight control software is all Airbus.

Re:Outsourcing is bad. (0)

Anonymous Coward | more than 2 years ago | (#38430802)

Qantas doesn't build the aircraft or the avionics. This is an Airbus issue, not a Qantas issue.

Re:Outsourcing is bad. (0)

Anonymous Coward | more than 2 years ago | (#38430892)

If Qantas did all that outsourcing etc., then Qantas was safe purely due to the very limited flights it has, and Australia's foreign policy, where it only bullies immigrants in boats.

Re:Outsourcing is bad. (0)

Anonymous Coward | more than 2 years ago | (#38430910)

Wow, you've got a really fucked up idea about aircraft if you think the operator (QANTAS) is in any way linked to flight control software, LOL.
But I'll throw you a bone so you can continue trolling Aus media style; this airspeed sensor fault has occurred three times across the worldwide A330 fleet, all on QANTAS aircraft.

Re:Outsourcing is bad. (0)

Anonymous Coward | more than 2 years ago | (#38430968)

I really do not think Qantas is responsible for the software on the Airbus 330. Please stop blaming every Qantas incident on outsourcing.

Technically correct I suppose (2)

Chuck Chunder (21021) | more than 2 years ago | (#38430980)

After all, buying planes that someone else made is outsourcing. However, I am not sure they'd fare better building their own.

Re:Outsourcing is bad. (1)

GumphMaster (772693) | more than 2 years ago | (#38431128)

Common sense has had a lot of problems lately. A Google search for common sense will reveal the dozens of mechanical faults that have very nearly killed people. Over here in Australia we get a report of this sort of shit happening every week or so. It's surreal, because common sense has been THE world's safest survival strategy.

Shame so little of it is on display in the Australian media and among the fear-of-the-other crowd.

Even a rudimentary comprehension of the report shows the event has nothing to do with Qantas in particular. The problem lies in the Northrop Grumman ADIRU equipment fitted by Airbus and in the Airbus software's response to unusual outputs from that equipment. This is backed up by the prompt issuing of interim procedures and software fixes by the aircraft manufacturer (two years ago). If anything, the decision by the Qantas pilots to fly the aircraft above all else, and to put it down in a remote location rather than continue to Perth, is what made sure that the injured did not become fatalities.

Things fail on aircraft all the time. Aircraft are hostile environments to electronics and other systems. This is not unique to Qantas or any other operator regardless of the part of the world their maintenance is done in. The unique thing with Qantas is the incessant media hype around every little thing that goes wrong.

don't just wonder, learn (5, Interesting)

fche (36607) | more than 2 years ago | (#38430766)

"I can't help wondering just how could a piece of code, which presumable didn't test its' input data for validity before acting on it, become part of a modern jet's onboard software suit?""

How about reading the darned final report, conveniently linked in your own blurb? There was lots of validity checking. In fact, some of it was relatively recently changed, and that accidentally introduced this failure mode (the 1.2-second data spike holdover). (Also, how about someone spell-checking submissions?)

Re:don't just wonder, learn (5, Interesting)

inasity_rules (1110095) | more than 2 years ago | (#38430846)

Mod parent up. Anyhow, information from a sensor may be valid but inaccurate. I deal with these types of systems regularly (not in aircraft, but control systems in general), and it is sometimes impossible to tell without extra sensors. It's one thing to detect a "broken wire" fault, and a completely different thing to detect a 20% calibration fault, for example, so validity checking can only take you so far. It's actually impressive that the failure mode in this case caused so little damage.
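
The broken-wire versus calibration-fault distinction above is easy to see in a toy example (hypothetical Python; the 4-20 mA range and the 20% gain error are made-up illustration values, not from the report):

    # Validity check on a 4-20 mA current-loop sensor: it catches gross
    # failures, but a plausible-looking calibration error sails through.
    SENSOR_MIN, SENSOR_MAX = 4.0, 20.0

    def plausible(reading_ma):
        return SENSOR_MIN <= reading_ma <= SENSOR_MAX

    print(plausible(0.0))   # False: a broken wire reads 0 mA -- detected
    print(plausible(12.0))  # True: the correct reading -- accepted
    print(plausible(14.4))  # True: the same signal with a +20% gain fault --
                            # also accepted; only a redundant sensor or a
                            # physical cross-check could expose it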

Re:don't just wonder, learn (4, Interesting)

wvmarle (1070040) | more than 2 years ago | (#38431090)

Agreed, valid but inaccurate.

Though such an airliner will have more than one airspeed sensor, no? Relying on just one sensor for such a vital piece of information would be crazy. And that makes it even more surprising to me that a single airspeed sensor malfunction can cause such a disaster. But then it's the same kind of issue that's been blamed for an Air France jet crashing into the ocean: malfunctioning sensors, in that case due to ice buildup or so IIRC, and as all the sensors were of the same design, this caused all of them to fail.

Another thing: I remember that when Airbus introduced their fly-by-wire aircraft, they stressed that one of the safety features to prevent problems caused by computer software/hardware bugs was to have five different flight computer systems, built and designed independently by five different companies using different hardware. That way, if one computer had an issue causing it to malfunction, the other four computers would be able to override it, and a majority of the computers had to agree with one another before any airplane control action was undertaken.

Re:don't just wonder, learn (1)

0123456 (636235) | more than 2 years ago | (#38431176)

Though such an airliner will have more than one airspeed sensor, no? Relying on just one sensor for such a vital piece of information would be crazy.

According to a TV documentary I watched a while back, one crash about ten years ago happened because one of the three pitot tubes was blocked and it was the only one that was connected to the autopilot. The unblocked tubes were telling the crew that the plane was about to stall, whereas the blocked tube was telling the autopilot that the plane was flying too fast. The autopilot pulled the nose up and the crew had contradictory warnings that they couldn't reconcile, with the plane simultaneously telling them that it was going too fast and too slow. When they decided to believe the autopilot and cut power, the plane stalled and crashed.

I thought that only having one pitot tube connected to the autopilot was a dumb idea too.

Re:don't just wonder, learn (3, Insightful)

inasity_rules (1110095) | more than 2 years ago | (#38431178)

I'm sure they must have more than one sensor. Perhaps even more than one sensing principle is involved. The problem with the system of having multiple computers vote is that we tend to solve problems in similar ways, so if there is a logic error in one machine (as opposed to a typo) it is fairly likely to be repeated in at least two of the other machines. Some sets of conditions are very hard to predict and design for, even in the simplest systems. I often see code (when updating a system) that does not account for every possibility, because either everyone considered that combination unlikely, or nobody thought of it in the first place (until it happens, of course...). Being a perfectionist in this business is very costly in development time.

The fact is, a complex system such as an aircraft could easily be beyond human capability to perfect the first time, or to test completely.

it's more complicated than that (3, Interesting)

holophrastic (221104) | more than 2 years ago | (#38430774)

we're going to see a huge change in programming methods coming pretty soon. Today, A.I. is still math and computer based. The problem is that data, input, and all of the algorithms you're going to write can result in a plane nose-diving -- even though no human being has ever chosen to nose-dive under any scenario in a commercial flight.

Why was an algorithm written that could do something that no one has ever wanted to do?

The shift is going to be when psychology takes over A.I. from the math geeks. It'll be the first time that math becomes entirely useless because the scenarios will be 90% exceptions. It'll also be the first time that psychology becomes truly beneficial -- and it'll be the direct result of centuries of black-box science.

That's when the programming changes to "should we take a nose-dive? has anyone ever solved anything with a nose-dive? are we a fighter jet in a dog fight like they were?" Instead of what it is now: "what are the odds that we should be in a nose-dive? well, nothing else seems better."

Re:it's more complicated than that (3, Insightful)

RightwingNutjob (1302813) | more than 2 years ago | (#38430806)

Instead of what it is now: "what are the odds that we should be in a nose-dive? well, nothing else seems better."

Probably more like, "the sensor spec sheet says it's right 99.99999% of the time. may as well assume it's right all the time".

The devil almost surely lives on a set of zero measure.

Re:it's more complicated than that (5, Interesting)

holophrastic (221104) | more than 2 years ago | (#38430872)

yup. all the while forgetting that while the altimeter shows altitude, it rarely actually measures distance to the ground; it measures air pressure, and then assumes an awful lot.

Re:it's more complicated than that (2)

wvmarle (1070040) | more than 2 years ago | (#38431136)

Interesting one indeed. Could be a tough thing to measure.

For starters: what is one's current altitude? What is your reference point? The ground level at that point? That changes quickly when passing over mountainous terrain. Or the height compared to sea level? Which is also tricky, as the Earth's gravitational field is not uniform and sea level is far from a perfect flattened sphere around the Earth's centre.

And how about GPS based altitude measurements? That's easily accurate to within a few meters, less than the size of the aircraft itself. Should be good enough.

Re:it's more complicated than that (0)

Anonymous Coward | more than 2 years ago | (#38430928)

rofl ok.

Re:it's more complicated than that (1)

GodfatherofSoul (174979) | more than 2 years ago | (#38430932)

They should've been using some kind of fuzzy algorithm to prevent drastic inputs. That would've been one of my first thoughts if I were the designer, and I know it's an issue developers in the auto industry have addressed.
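
One common form of such a guard is a slew limit on the commanded output (a minimal sketch in Python with made-up limits; not what Airbus actually does): the command may only move a fixed amount per control cycle, so no single wild sample can yank the controls.

    MAX_DELTA = 0.5  # max commanded-pitch change per control cycle, degrees

    def slew_limit(prev_cmd, new_cmd):
        delta = min(MAX_DELTA, max(-MAX_DELTA, new_cmd - prev_cmd))
        return prev_cmd + delta

    cmd = 0.0
    for target in (0.2, -10.0, -10.0):  # the second sample is a bogus spike
        cmd = slew_limit(cmd, target)
        print(cmd)                      # roughly 0.2, -0.3, -0.8: the "dive"
                                        # builds gradually instead of instantly

The trade-off is responsiveness: a genuine stall recovery may need a fast pitch-down, which is presumably why real flight control laws filter the sensor data rather than simply rate-limiting the output.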

Re:it's more complicated than that (2)

holophrastic (221104) | more than 2 years ago | (#38430986)

certainly better. but anything they do which translates input into output suffers from the same lack of decision-making in the middle. There needs to be a step, the amygdala step, where a decision is questioned -- the official-opposition step. And it's not about checking over the work. The work is fine. It's about self-doubt based purely on the most important observation available: I've been wrong before.

Re:it's more complicated than that (3, Funny)

jamesh (87723) | more than 2 years ago | (#38431022)

A better use of psychology will be to examine the heads of anyone who wants to throw maths out of the window and engage psychologists when designing AI algorithms.

Hey, what happened to voting? (1)

bill_mcgonigle (4333) | more than 2 years ago | (#38431030)

Why was an algorithm written that could do something that no one has ever wanted to do?

Two or three times no less ... at least that's what I've been told repeatedly: that three independent airplane computer systems are written from spec by different teams, and then, given the same input, they all produce output, and the best 2-out-of-3 result wins and causes physical action to happen.

So either at least two teams messed this specific item up in the same crazy way, or that airline computer safety story they've been telling is a crock.
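
For what it's worth, the 2-out-of-3 arrangement described above amounts to a median vote (a hypothetical Python sketch, not Airbus's implementation):

    def vote_2oo3(a, b, c, tolerance=1.0):
        # Median of three redundant channels, plus an agreement flag.
        lo, mid, hi = sorted((a, b, c))
        agreement = (mid - lo) <= tolerance or (hi - mid) <= tolerance
        return mid, agreement

    print(vote_2oo3(5.0, 5.1, 5.05))   # healthy: all channels agree
    print(vote_2oo3(5.0, 5.1, 42.0))   # one bad channel is simply outvoted
    print(vote_2oo3(1.0, 20.0, 42.0))  # no two agree: caller must degrade

As the reply below points out, though, the vote is only as good as its inputs: if all three channels read the same malfunctioning sensor, the result is unanimous and wrong.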

Re:Hey, what happened to voting? (1)

holophrastic (221104) | more than 2 years ago | (#38431138)

they all feed from the same malfunctioning sensor. and yeah, it's not a complete solution, so it's crap.

Re:it's more complicated than that (2)

WaffleMonster (969671) | more than 2 years ago | (#38431034)

we're going to see a huge change in programming methods coming pretty soon. Today, A.I. is still math and computer based. The problem is that data, input, and all of the algorithms you're going to write can result in a plane nose-diving -- even though no human being has ever chosen to nose-dive under any scenario in a commercial flight.

There are humans alive today who have wisely done exactly that, to the point of causing injuries, to recover from stalls both real and imagined.

Re:it's more complicated than that (2)

Chuck Chunder (21021) | more than 2 years ago | (#38431050)

even though no human being has ever chosen to nose-dive under any scenario in a commercial flight. Why was an algorithm written that could do something that no one has ever wanted to do?

Is that something you are saying from knowledge or just making up? I was under the impression that getting the nose pointed down was a fairly 'normal' thing for a pilot to do when faced with a stalling plane. Indeed, keeping the nose up [flightglobal.com] can be precisely the wrong thing to do.

Re:it's more complicated than that (2)

holophrastic (221104) | more than 2 years ago | (#38431142)

lowering the nose, yes, absolutely. nose-dive, no. the kind of thing that injures passengers is not standard anything.

Re:it's more complicated than that (2)

Sarten-X (1102295) | more than 2 years ago | (#38431098)

That's assuming that the computer knows what a "nose-dive" even is, or why it's (usually) a bad thing. It would have to know every problem, every tactic, and every risk, and nothing would actually be safer, though the program would be far more complex.

Instead, the "psychological" program thinks "We're going a lot slower than we should for this altitude. Oh no! We're going to stall, and it's only by sheer luck that we haven't already! Why are we this high, anyway? The pilot told me to go this high, but maybe he entered the flight plan wrong. Maybe there was a fight in the cockpit, and that last change wasn't really supposed to happen. Quick! Let's go down to denser air as fast as we can! It's either that or stall, crash, and kill everyone on board!". It still dives, because the basic problem hasn't changed: The sensor failed, and had no redundancy. There's still a bit of self-doubt, but without enough relevant information about what's going on, the program still takes the option that causes the least damage, in terms of probability: the nose-dive.

AI involves large amounts of both math and psychology, and that's not going to change. A major aspect of psychology is to effectively reverse-engineer human actions to determine their underlying mental algorithms. Current AI programs simulate the algorithms, and we compare the emergent behavior to what's observed in humans (or other, simpler, animals). They are effectively the same problem, working in opposite directions.

Re:it's more complicated than that (1)

RealGene (1025017) | more than 2 years ago | (#38431120)

even though no human being has ever chosen to nose-dive under any scenario in a commercial flight.

This is incorrect. A winged aircraft will stall when the speed of air over the wings is too low.
The correct response of a pilot or computer to a stall is to point the nose down in order to increase airspeed.
The failure here was of the computers calculating that the aircraft was about to stall due to the reading from one airspeed sensor.

Pitot tubes used for sensing airspeed are subject to plugging up due to icing (which is why most are heated), and from spiders who like to climb into them (which is why you will see covers on the pitot tubes of grounded aircraft, with long red streamers attached so that the ground crew doesn't forget to remove them).

Pitot tubes are also implicated in the loss of Air France 447.

The obvious cause (1)

Zibodiz (2160038) | more than 2 years ago | (#38430808)

Wait, you mean cell phones aren't being blamed?

How should a computer behave? (2)

junglebeast (1497399) | more than 2 years ago | (#38430826)

I can't help wondering just how could a piece of code, which presumable didn't test its' input data for validity before acting on it, become part of a modern jet's onboard software suit?"
---

I'm surprised there are people who think we have the technology to program computers to make decisions about how to control things like airplanes better than a human being.

Computers excel at solving mathematical problems with definitive inputs and outputs, but our attempts to translate the problem of controlling an airplane, or an organism, into a simple circuit...will necessarily be limiting.

They can only test that the computer program will behave as expected, but there is no test to prove that the behavior we attempted to implement is actually a "good" way to behave under all circumstances.

Re:How should a computer behave? (1)

RightwingNutjob (1302813) | more than 2 years ago | (#38430854)

Not all circumstances, per se, but for something as limited as an airplane autopilot, we can reasonably expect design and testing to cover all *classes* of circumstances, such as "this sensor is flaky" or more insidiously, "the wind sensor reading is slowly drifting and starting to disagree with the INS and/or the GPS".

We can also assume that it's never safe to assume that real data from real sensors is perfect.

Re:How should a computer behave? (0)

Anonymous Coward | more than 2 years ago | (#38430884)

Well, I'm glad you have proof that humans behave optimally under all circumstances. It really is absurd to use computers when people have had zero mistakes in the field.

Re:How should a computer behave? (1)

sitharus (451656) | more than 2 years ago | (#38431006)

Yes, all those plane crashes caused by software bugs would be completely eliminated if left to human judgement!

Re:How should a computer behave? (1)

rbmyers (587296) | more than 2 years ago | (#38431154)

We *already* have aircraft that cannot be flown nearly as reliably by a human being as it can be flown by a computer. Automatic control that is not easily duplicated manually is going to be the rule rather than the exception. You still have to design aircraft so that they can be flown manually in an emergency, but the flight envelope will be more restricted and the performance less than optimal.

Re:How should a computer behave? (2)

jklovanc (1603149) | more than 2 years ago | (#38431182)

Take a look at this [wikipedia.org] incident. The autopilot did everything right, except that lack of action, poor decision-making, and disorientation by the pilots caused a 747 to roll out of control.
The pilots did the following things wrong:
1. Failed to descend to the correct altitude before attempting an engine restart.
2. Failed to notice the extreme inputs the autopilot was using that did not correct the roll (the pilot should have used some rudder to help the autopilot).
3. Became fixated on the engine issue when he should have left it to the copilot and flight engineer.
4. Failed to trust the instruments when he had no visual reference (they were in the clouds).
5. Failed to take flight control when limitations of the autopilot (it could not control the rudder) reduced its ability to control the aircraft.
This is an example where the computer made the right decision but wrong decisions by a human caused an accident.

A software suit? (0)

Anonymous Coward | more than 2 years ago | (#38430830)

I prefer my suits to be made of gabardine, or maybe some modern synthetic. They're easier to care for than a suit made of an airplane.

What? (5, Informative)

Spikeles (972972) | more than 2 years ago | (#38430870)

"I can't help wondering just how could a piece of code, which presumable didn't test its' input data for validity before acting on it, become part of a modern jet's onboard software suit?"" - pdcull

What are you, some kind of person that doesn't read the actual articles or documents? Oh wait... this is Slashdot. Here, let me copy-paste some text for you:

If any of the three values deviated from the median by more than a predetermined threshold for more than 1 second, then the FCPC rejected the relevant ADR for the remainder of the flight.

The FCPC compared the three ADIRUs’ values of each parameter for consistency. If any of the values differed from the median (middle) value by more than a threshold amount for longer than a set period of time, then the FCPC rejected the relevant part of the associated ADIRU (that is, ADR or IR) for the remainder of the flight.

So there you go, there actually really was validity checking performed. Multiple times per second in fact, by three separate, redundant systems. Unfortunately all 3 systems had the bug. Here is the concise summary for you:

The FCPC’s AOA algorithm could not effectively manage a scenario where there were multiple spikes such that one triggered a memorisation period and another was present 1.2 seconds later. The problem was that, if a 1.2-second memorisation period was triggered, the FCPCs accepted the next values of AOA 1 and AOA 2 after the end of the memorisation period as valid. In other words, the algorithm did not effectively handle the transition from the end of a memorisation period back to the normal operating mode when a second data spike was present.
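
In rough Python, the failure mode that summary describes looks something like this (a hypothetical sketch: the structure and thresholds are invented, only the 1.2-second holdover behaviour is taken from the report):

    class SpikeFilter:
        HOLDOVER = 1.2    # memorisation period, seconds (from the report)
        THRESHOLD = 5.0   # made-up deviation threshold, degrees

        def __init__(self):
            self.good = 0.0
            self.hold_until = None

        def step(self, t, sample):
            if self.hold_until is not None:
                if t < self.hold_until:
                    return self.good   # first spike suppressed
                self.hold_until = None
                self.good = sample     # BUG: first post-holdover sample is
                return sample          # accepted without re-running the check
            if abs(sample - self.good) > self.THRESHOLD:
                self.hold_until = t + self.HOLDOVER
                return self.good
            self.good = sample
            return sample

    f = SpikeFilter()
    for t, aoa in ((0.0, 2.0), (0.1, 50.0), (1.4, 50.0)):
        print(t, f.step(t, aoa))  # the second 50-degree spike, arriving just
                                  # after the memorisation period ends, passes
                                  # straight through to the control laws

The point being: the system did validate its inputs, repeatedly and redundantly; the bug lived in a corner case of the validation logic itself, where an obvious repair is to apply the same deviation check to the first post-holdover sample.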

How did it happen? (3, Funny)

kawabago (551139) | more than 2 years ago | (#38430882)

Airbus poached engineers from Toyota!

Re:How did it happen? (1)

RealGene (1025017) | more than 2 years ago | (#38431144)

Pardon me, but I don't recall that software/firmware was ever implicated in the Toyota unintended accelerations. Some cases were blamed on the floor mats, but, as with the Audi 5000, the most likely failure involved the placement of the driver's foot on the wrong pedal.

of course, there are no other bugs lurking (1)

decora (1710862) | more than 2 years ago | (#38430954)

in the millions of lines of code in these modern flying death traps.

Lemme guess, Linux? (-1)

Anonymous Coward | more than 2 years ago | (#38430984)

No thank you, I'll stick with a proven, safe and tested OS like Windows or even OSX. This Linux nonsense is too scattershot to trust on even a gumball machine, much less with my life.

Bull. It was pilot error... (0)

Anonymous Coward | more than 2 years ago | (#38430990)

just like those runaway Toyotas. We all know that no bug of that severity could make it into the finished product.


Could this be what hit Air France Flight 447? (0)

Anonymous Coward | more than 2 years ago | (#38431066)

This sounds very much like the failure of the pitot tubes (used to measure airspeed) on the A330 that crashed in the Atlantic on 1 June 2009. Does anyone know if that might be the case?

Re:Could this be what hit Air France Flight 447? (2)

0123456 (636235) | more than 2 years ago | (#38431132)

This sounds very much like the failure of the pitot tubes (used to measure airspeed) on the A330 that crashed in the Atlantic on 1 June 2009.

Actually, this would presumably have saved AF447, as the crash was caused by the pilot holding the nose up in a stall. Probably because the stall warning apparently turned off when he pulled the stick back and turned back on when he pushed it forward, so the correct action to get out of the stall seemed to be causing it.
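
A toy model of that warning logic (hypothetical Python with made-up numbers; the real behaviour is described in the BEA's AF447 reports) shows how rejecting "invalid" data can invert the feedback the pilot gets:

    STALL_AOA = 10.0        # degrees, made-up stall threshold
    MIN_VALID_SPEED = 60.0  # knots: below this, AOA data is declared invalid

    def stall_warning(aoa_deg, airspeed_kt):
        if airspeed_kt < MIN_VALID_SPEED:
            return False    # AOA rejected as invalid -> warning goes silent
        return aoa_deg > STALL_AOA

    print(stall_warning(25.0, 120.0))  # True: stalled, warning sounds
    print(stall_warning(35.0, 50.0))   # False: stalled even deeper, but now
                                       # so slow the data is "invalid"

Pushing the nose down raises airspeed past the validity threshold, so the warning comes back exactly when the pilot starts doing the right thing.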

wait... (1)

alienzed (732782) | more than 2 years ago | (#38431076)

is this article about driverless cars or software for an airplane?

other Airbuss had issues with auto pilot over rid (1)

Joe_Dragon (2206452) | more than 2 years ago | (#38431104)

Other Airbuses have had issues with the autopilot overriding the pilots and having no way to force it off. Now a sensor malfunction can make the autopilot do things a real pilot would never do. Does the software have any kind of workaround for broken sensors, or a way to detect that a sensor is out of range and stop reading it?

Calling all cost accountants (1)

Billly Gates (198444) | more than 2 years ago | (#38431126)

So are good software engineers still cost centers rather than assets, where hiring the cheapest programmers in 3rd world countries for something like this makes sense?
