
Murphy's Law Rules NASA

michael posted more than 9 years ago | from the cat-lands-butter-side-down dept.

Space 274

3x37 writes "James Oberg, a former long-time NASA operations employee turned journalist, wrote an MSNBC article about the reality of Murphy's Law at NASA. Interestingly, the incident that sparked Murphy's Law over 50 years ago had a nearly identical cause to the Genesis probe failure. The conclusion: Human error is an inevitable input to any complex endeavor. Either you manage and design around it or fail. NASA management still often chooses the latter."


274 comments


if i dont get first post... (-1, Offtopic)

InfoHighwayRoadkill (454730) | more than 9 years ago | (#10597676)

It's because of human error!

Mark my words (5, Funny)

zerdood (824300) | more than 9 years ago | (#10597680)

Someday all decisions will be made by machines. We'll just sit back while they do all the work. Then, no more human error.

Re:Mark my words (1)

Jeffery (810339) | more than 9 years ago | (#10597691)

Then the machines kill us all, and error will cease to exist :) (as long as the machines don't use Windows)

Re:Mark my words (1, Insightful)

zerdood (824300) | more than 9 years ago | (#10597725)

Why does everyone have this Crichtonesque fear? As long as competent humans program the machines, they will be made unable to harm humans.

Re:Mark my words (0)

Anonymous Coward | more than 9 years ago | (#10597769)

But humans working on complex machines will make mistakes! I think AI is one of the most complicated ventures humans will ever undertake!

Re:Mark my words (1)

sgant (178166) | more than 9 years ago | (#10597791)

Ah, so the Murphy's-Law-prone human programmer will program the machines that will be made to make decisions for us.

Ah, but first, we must program a machine to find a competent human to program the decision-making machine!

Re:Mark my words (0)

Anonymous Coward | more than 9 years ago | (#10597852)

Competent humans, huh? Look up Therac 25, my friend. And if your response to that is that the humans weren't competent, I'd ask you if it takes a few deaths to judge if someone is competent or not?

Re:Mark my words (4, Insightful)

wiggys (621350) | more than 9 years ago | (#10597733)

Except, of course, that we programmed the machines in the first place.

When a computer program crashes it's usually down to the human(s) who programmed it, and on the rare occasions it's a hardware glitch, it was humans who designed the hardware, so we're still to blame either directly or indirectly.

I suppose it's like the argument about whether bullets kill or the human who pulled the gun's trigger.

Re:Mark my words (5, Funny)

j0yb0y (641351) | more than 9 years ago | (#10597874)

Let me restate what he said,

Someday all errors will be made by machines. We'll just sit back while they do all the work. Then, no more human error.

Re:Mark my words (3, Funny)

WormholeFiend (674934) | more than 9 years ago | (#10597876)

I suppose it's like the argument about whether bullets kill or the human who pulled the gun's trigger.

Color me medieval, but I prefer the following analogy:
Crossbows don't kill people, quarrels kill people.

Re:Mark my words (0)

Anonymous Coward | more than 9 years ago | (#10598203)

I read that as squirrels kill people. Didn't make sense, but it was funny nonetheless.

Re:Mark my words (0)

Anonymous Coward | more than 9 years ago | (#10598229)

You can't shoot squirrels out of a crossbow...

Re:Mark my words (1)

pixelpusher220 (529617) | more than 9 years ago | (#10597958)

Obligatory:
<humor>
Are you implying that Microsoft programmers are *human*?

Taco...remove this infidel!
</humor>

Re:Mark my words (1)

iluvgfx (685312) | more than 9 years ago | (#10597967)

I agree. Computers are only as smart as the people who program them.

Re:Mark my words (0)

Anonymous Coward | more than 9 years ago | (#10598015)

There aren't any completeness theorems about bullets.

kneel before the overlords (0)

Anonymous Coward | more than 9 years ago | (#10598023)

...I for one, welcome the computer overlords...

Re:Mark my words (1)

VistaBoy (570995) | more than 9 years ago | (#10597738)

Who will make the machines? Humans. With error.

Re:Mark my words (1)

theparanoidcynic (705438) | more than 9 years ago | (#10597741)

Yes, but machines are programmed by people. Your average hacker is lazy, impatient, tired, horny and may or may not be intoxicated. They fuck up. That's why we have bugs.

Re:Mark my words (0, Redundant)

IchBinDasWalross (720916) | more than 9 years ago | (#10597771)

That's why we beta test.

Re:Mark my words (0)

Anonymous Coward | more than 9 years ago | (#10598038)

Have you been near my cube today? A straw poll of my coworkers votes me as matching that description today.

Add hungover to that list and it's me. Not a pretty sight.

Re:Mark my words (1)

zerdood (824300) | more than 9 years ago | (#10597761)

The machines will be able to diagnose and change their own code if they find bugs in it. Wouldn't that be cool?

Re:Mark my words (0)

Anonymous Coward | more than 9 years ago | (#10597778)

All future errors will then be of the blue-screen variety. Posting anonymously because Slashdot has tried to silence me.

Re:Mark my words (1)

penguinoid (724646) | more than 9 years ago | (#10597779)

Yes, but that does not mean we will agree with them. For example, the Terminators decided to eliminate human error.

Re:Mark my words (1)

_Sprocket_ (42527) | more than 9 years ago | (#10597787)

You would think an article that outlines several failed autopilot systems might indicate a fundamental flaw in that thought process.

Re:Mark my words (1)

EvilTwinSkippy (112490) | more than 9 years ago | (#10597837)

Computers cannot make decisions. They can perform computations. They can evaluate formulas. They can even pen new algorithms. But their decision-making power ultimately comes down to flipping a coin.

It can be a very heavily weighted coin, but it is a coin nonetheless.

Re:Mark my words (3, Insightful)

NonSequor (230139) | more than 9 years ago | (#10597856)

If you're expecting this to result from the development of human-level AI, I wouldn't bet on it. In order to solve problems not predicted by its creators, it will have to make some leaps of intuition the way humans do when they solve problems. The ability to propose original solutions also introduces the possibility of error. An AI will also have to rely on inductive reasoning in some situations, and there is no reason to believe that a computer can avoid making any false inductions. I suspect that human-level AIs will be able to do a lot of things better than us, but they will have at least some of the same flaws we do.

Could it be ? (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#10597681)

Fist pr0st?

interesting but it's not really true (4, Interesting)

spacerodent (790183) | more than 9 years ago | (#10597692)

While it's always possible to make a mistake, having people double-check a project from the ground up will almost always find the problems. NASA's current difficulties arise from scattered teams that each check only their own parts, rather than fully qualified teams that go over the entire vehicle. The fact that the whole thing is usually designed by committee, in several pieces, then assembled at the last minute probably helps facilitate error. The Saturn V rockets and other technology we used to land on the moon had the capability of being far less reliable than today's technology, but we still managed to use them for years without error.

Re:interesting but it's not really true (1)

jmmcd (694117) | more than 9 years ago | (#10597730)

ok, but how are they supposed to test all their parts together in advance of the "real" launch?

Re:interesting but it's not really true (0)

Anonymous Coward | more than 9 years ago | (#10597804)

If they had run diagnostics on the circuits, they would have seen from the resistance differences that the switch was installed backwards; they could also have simply traced the whole circuit looking for errors and would most likely have seen the switch reversed. The mistakes I'd say are unavoidable are more in the realm of totally overlooked things, like the door on the Apollo capsule that got everyone inside burned to death. Simple errors are not just "luck"; they're sloppy engineering.

Re:interesting but it's not really true (4, Informative)

Moby Cock (771358) | more than 9 years ago | (#10597731)

It's an oversimplification to say that older technology was used without errors. In fact, it's just downright incorrect. Apollo 1 and Apollo 13 both suffered catastrophic failures. Furthermore, the next generation of space vehicle, the shuttle, has had two very significant disasters and reams of other failures.

Re:interesting but it's not really true (0)

Anonymous Coward | more than 9 years ago | (#10597762)

The Saturn V never had a failure.

Re:interesting but it's not really true (2, Informative)

rabtech (223758) | more than 9 years ago | (#10597814)

No, it isn't. The Saturn V rocket was the most complicated and largest system ever built by man at the time and launched without a SINGLE failure for its entire operational life. The vehicles and satellites it carried had problems but the rocket itself never failed.

Re:interesting but it's not really true (3, Informative)

eggoeater (704775) | more than 9 years ago | (#10597900)

The fact that the Saturn V rockets never blew up doesn't mean they never had problems! There were plenty of things that went wrong. Even in the movie Apollo 13, one of the Saturn V engines malfunctions during takeoff. We survive failures in rockets and other critical pieces of technology thanks not only to pragmatic design but also to redundancy. (Also, think about the design of airplanes: triple redundancy on hydraulic lines.)
Also, there was some kind of semi-critical problem on EVERY SINGLE Apollo mission except Apollo 17, the very last one.

Re:interesting but it's not really true (4, Insightful)

EvilTwinSkippy (112490) | more than 9 years ago | (#10597760)

I'm still trying to figure out why the Apollo formula of contractors with NASA oversight doesn't seem to work anymore.

Then I remember Apollo 1, which killed 3 astronauts, and Apollo 13, which nearly killed 3 more.

To invoke Heinlein: space is a harsh mistress.

To invoke Sun Tzu: success in defense is not based on the likelihood of your enemy attacking. It is based on your position being completely unassailable.

Re:interesting but it's not really true (0)

HeghmoH (13204) | more than 9 years ago | (#10597767)

I have not RTFA, but it seems to me that having people double-check things qualifies as acknowledging human error and designing around it.

Re:interesting but it's not really true (3, Insightful)

GR1NCH (671035) | more than 9 years ago | (#10597842)

I think this goes along with the saying: 'If you make something idiot-proof, someone will build a better idiot.' Sure, maybe they could have designed the accelerometers so that they couldn't be installed backwards. But then again, what else might have failed? I guess in the end it all comes down to economics. What does the cost-benefit analysis say? Is it better to keep checking and double-checking, or to just send it out as it is? Now, I can understand cost-benefit is a little difficult when you are talking about a space probe. But chances are you could keep redesigning and rechecking a probe for 50 years, and then something you never thought of would come up and all your plans would go to hell.

Maybe we should just make lots of cheap, crappy probes and expect most to fail, instead of one really good, expensive one with the hope that it will succeed.

Re:interesting but it's not really true (5, Insightful)

Wizzy Wig (618399) | more than 9 years ago | (#10597863)

...having people double check a project from the ground up will almost always find the problems...

Then you double-check the checkers, and so on... that's the point of the article... humans will err. As Deming said, "You can't inspect quality into a process."

Re:interesting but it's not really true (1)

Moley (690497) | more than 9 years ago | (#10597866)

Not really true??? Even using formal methods, errors are bound to creep in. I agree with the article that on large projects human error is simply unavoidable! That's why NASA uses fault-tolerant systems, and that's why it's a good idea to use fault tolerance in combination with formal methods for critical systems.

Re:interesting but it's not really true (5, Insightful)

Control Group (105494) | more than 9 years ago | (#10597913)

No, it is true. It's the "almost always" in your statement that's the key. It's simple statistics, really. Assume that a well-trained, expert engineer has a 5% chance of making a material error. This implies that 5% of the things s/he designs have flaws.

Now suppose this output is double-checked by another engineer, who also has a 5% chance of error. 95% of the first engineer's errors will be caught, but that still leaves a .25% chance of an error getting through both engineers.

No matter what the percentages, no matter how many eyes are involved, the only way to guarantee perfection is to have someone with a zero percent chance of error...and the chances of that happening are zero percent. Any other numbers mean that mistakes will occur. Period.

I remember reading a story somewhere about a commercial jet liner that took off with almost no fuel. There are plenty of people whose job it is to check that every plane has fuel...but each of them has a probability of forgetting. Chain enough "I forgots" together, and you have a plane taking off without gas. At the level of complexity we're dealing with in our attempts to throw darts at objects xE7 kilometers away, it is guaranteed that mistakes will propagate all the way through the process.
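The parent's arithmetic can be sketched directly. A minimal sketch, using the parent's illustrative 5% figures (not measured data) and assuming independent, equally fallible checkers:

```python
def residual_flaw_probability(p_make=0.05, p_miss=0.05, reviewers=1):
    """Chance a design both contains a flaw and every independent
    reviewer misses it. Never zero for any nonzero inputs."""
    return p_make * p_miss ** reviewers

print(residual_flaw_probability())            # ~0.0025, the 0.25% above
print(residual_flaw_probability(reviewers=4)) # tiny, but still positive
```

Adding reviewers shrinks the residual geometrically, but the only way to reach zero is a reviewer who never errs.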

Re:interesting but it's not really true (2, Interesting)

mikael (484) | more than 9 years ago | (#10598097)

Just like the case in which the airport crew assigned to clean an aeroplane put masking tape over the air-pressure sensors but forgot to remove it. Or rather, as the airport was badly lit and the masking tape wasn't noticeably different from the skin of the aircraft, nobody noticed this small defect. Until the pilots came in to land the aeroplane, that is; then it became a large problem.

Re:interesting but it's not really true (2, Interesting)

orac2 (88688) | more than 9 years ago | (#10597964)

from scattered teams that all only check their parts rather than having fully qualified teams that go over the entire vehical.

Your sentiment is correct, but your details are a little off. For example the Saturn V rocket was built by "scattered teams" (and committees were heavily involved, despite the mythology around Von Braun)-- the first stage was built by Boeing, the second by North American, the third by Douglas Aircraft, the Instrument Unit (the control system) by IBM, the LEM by Grumman and the CSM by North American, and so on, all the way down a huge chain of sub-contractors. But Apollo had brilliant technical management: it was pricey, but did do an amazing job of system integration.

It's when you try for cheaper missions that having one team take a spacecraft from design through operation is important: this was done on the Mars Pathfinder mission to great success, but wasn't done on other "Faster, Cheaper, Better" missions, to great failure, as demonstrated by, well, take your pick.

Re:interesting but it's not really true (2, Insightful)

gammygator (820041) | more than 9 years ago | (#10598034)

Finding problems is a good thing but I've found that nobody likes to be told their baby is ugly.... and if they're far enough up the corporate food chain... good luck getting 'em to listen.

Re:interesting but it's not really true (2, Funny)

olderchurch (242469) | more than 9 years ago | (#10598201)

Makes me think of the quote from Armageddon [imdb.com] :
Rockhound : You know we're sitting on four million pounds of fuel, one nuclear weapon and a thing that has 270,000 moving parts built by the lowest bidder. Makes you feel good, doesn't it?

And it will have flaws, no matter how often and thorough you check.

Oh no! Murphy's law again! (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#10597700)

Of course... I could not write a first post! Shit!

Cost Effective (4, Interesting)

clinko (232501) | more than 9 years ago | (#10597709)

It's actually more cost-effective to allow for failures. You build the same sat 5 times, and if 4 fail in a cheaper launch situation, you still save money.

From this [scienceblog.com] article:

"Swales engineers worked closely with Space Sciences Laboratory engineers and scientists to define a robust and cost-effective plan to build five satellites in a short period time."

Re:Cost Effective (0)

Anonymous Coward | more than 9 years ago | (#10597799)

Speaking of cost effective, wasn't Jimbo trying to save some money for Wiki? This is going to put him over budget.

Funny... (-1, Troll)

Anonymous Coward | more than 9 years ago | (#10597715)

Funny, that contradicts this report [google.com] .

I think... (-1, Offtopic)

Anonymous Coward | more than 9 years ago | (#10597717)

game developers should stop overpromising.

Murphy's corrolary (1)

draxredd (661953) | more than 9 years ago | (#10597727)

Murphy's law is rocket science

Let's get serious. (0)

Anonymous Coward | more than 9 years ago | (#10597740)

I sincerely doubt that NASA engineers ever set out to intentionally fail. Nice troll though.

Good Point (5, Insightful)

RAMMS+EIN (578166) | more than 9 years ago | (#10597766)

``Human error is an inevitable input to any complex endeavor. Either you manage and design around it or fail.''

This is a very good point, and I wish more people would realize it.

For software development, the application is: just because you can write 200 lines of correct code does not mean you can write 2 * 200 lines of correct code. Always have someone else verify your code (not yourself, because you read over your errors without noticing them).

Re:Good Point (0, Redundant)

IchBinDasWalross (720916) | more than 9 years ago | (#10597943)

Precisely. That's why we beta test, too.

You'd think so. (2, Insightful)

Bill, Shooter of Bul (629286) | more than 9 years ago | (#10598130)

That's a very popular cliché. The fact is, with NASA's shrinking budgets, they don't have the resources to design around potential failures. There's old-school NASA, which designed the Cassini probe with redundant systems, properly designed and tested, and there's new-school NASA, which makes the cheap Mars probes. Just looking at the Mars probes, you'll see why they have moved to this method. If you can make five fault-intolerant probes for the same cost as one fault-tolerant probe, and odds are that only two of the five will work, then it's a better idea to build the five crappy probes, as you'll probably get twice the science benefit. The problem comes once you start throwing in human lives. It's okay if an unmanned probe crashes; it's only things. But if you apply the same logic to manned missions, you get the Columbia accident. It's not that NASA intentionally overlooked the problems because they expect people to die; it's just that the methodology from the unmanned flights has crept into their minds. At least that's my non-expert opinion.

But I guess it sort of applies to your software analogy as well. There have been a few companies who have discovered that it's cheaper to have paying customers find the flaws in their software rather than do any kind of formalized testing before release.
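The five-cheap-probes argument above reduces to a quick expected-value sketch. The success rates here are the comment's own guesses, not NASA figures:

```python
def expected_working(n_probes, p_success):
    # Expected number of probes that survive, assuming
    # each probe fails independently with the same odds.
    return n_probes * p_success

cheap  = expected_working(5, 0.40)  # "only two of the five work"
robust = expected_working(1, 0.95)  # one fault-tolerant probe, same budget
print(cheap, robust)                # the cheap fleet yields about twice the science
```

The comparison flips, of course, as soon as a single failure carries unacceptable cost, which is the crewed-mission caveat made above.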

Re:You'd think so. (2, Insightful)

RAMMS+EIN (578166) | more than 9 years ago | (#10598266)

``There have been a few companies who have discovered that its cheaper to have paying customers find the flaws in their software, rather than do any kind of formalized testing before release.''

Not only that, but it's actually beneficial to produce and ship buggy software. Bugs have to be fixed, and who can fix them better than the people who wrote the code? So, it makes sense for programmers to leave flaws in their programs. Companies that ship flawed products can make customers pay for upgrades that also fix bugs, or get good karma by providing bugfixes for free. In the process they get publicity, and the world can see they're not sitting still and their products have not been abandoned.

That's right (3, Funny)

nels_tomlinson (106413) | more than 9 years ago | (#10597788)

That's right, blame it all on the Irish. After all, it's not like anyone else ever screwed up...

Bullsh^H^H^H (-1, Troll)

lastmachine (723265) | more than 9 years ago | (#10597789)

The Genesis Probe failed due to the use of Protomatter in the Matrix.

Now try to imagine Keanu Shatner..."DU-U-U-UDE! ... DU-U-U-U-U-U-UDE!!!"

That is NOT correct. (4, Insightful)

Puls4r (724907) | more than 9 years ago | (#10597795)

>>Either you manage and design around it or fail. >>NASA management still often chooses the latter.

This is hindsight at its best, and is the classic comment of bureaucrats who have no concept of what cutting-edge design is about. F1 race cars, racing sailboats, nuclear reactors: NO design is failsafe, and NO design is foolproof. Especially a one-off design that isn't mass-produced. Even mass-produced designs have errors, as in the auto industry. It is a simple fact of life that engineers and managers balance cost and safety constantly.

What you SHOULD be comparing this against is other space agencies that launch a similar number of missions and satellites, i.e. other real-world examples.

Expecting perfection is not realistic.

Re:That is NOT correct. (2, Informative)

EvilTwinSkippy (112490) | more than 9 years ago | (#10597918)

If you are going by sheer number of launches, body count, payload capacity, or cost effectiveness, the Russians have us beat hands down.

Sure, we've been to the moon. But we haven't done a damn bit of fundamental research since then. (A lot of improvements to our unmanned rocket technology have been bought/borrowed/stolen from the Russian program.)

Re:That is NOT correct. (2, Insightful)

_Sprocket_ (42527) | more than 9 years ago | (#10597954)



This is hindsight at its best, and is the classic comment of bureaucrats who have no concept of what cutting-edge design is about. F1 race cars, racing sailboats, nuclear reactors: NO design is failsafe, and NO design is foolproof.


But this isn't about design. It's about implementation. In each of the examples, the failure occurred because of incorrect assembly of key components.

Having said that, there IS an issue of design brought up by the article: the design of a system should not allow for a catastrophic configuration. In several examples, failure occurred when sensors (accelerometers) were installed backwards. Those devices should have been designed with some sort of keying system that only allows installation in the intended orientation. Heck, the orientation of one of the accelerometers could only be determined by x-raying the device!

Re:That is NOT correct. (4, Insightful)

orac2 (88688) | more than 9 years ago | (#10598101)

This is hindsight at its best, and is the classic comment of bureaucrats who have no concept of what cutting-edge design is about.

You only get to play the hindsight card the first time this kind of screw-up happens. If you actually read the article, you'll see that Oberg (who isn't a bureaucrat but a 22-year veteran of mission control and one of the world's experts on the Russian space program) is indicting NASA for having a management structure that leads to technical amnesia: the same type of oversight failure keeps happening again and again.

Oberg is not alone in this. The Columbia Accident Report despairingly noted the similarities between Columbia and Challenger: both accidents were caused by poor management, but what was worse with Columbia was that NASA had failed to really internalise the lessons of Challenger, or heed the warning flags about management and technical problems put up by countless internal and external reports.

Sure, space is hard. But it's not helped by an organization that has institutionalised technical amnesia and abandoned many of its internal checks and balances (at least this was the case at the time of the Columbia report; maybe things have changed).

And if you really want to compare against other agencies, NASA's astronaut body count does not compare favorably against the cosmonaut body count...

Sadly, your post is a classic comment by Slashdotters who have no concept of what effective technical management of risky systems looks like. (Hint: not all cutting-edge designs get managed the same way. There's a difference between building racing sailboats and spaceships. This is detailed in the Columbia accident report. Read it and get a clue.)

Re:That is NOT correct. (1)

khallow (566160) | more than 9 years ago | (#10598227)

This is hindsight at its best, and is the classic comment of bureaucrats who have no concept of what cutting-edge design is about. F1 race cars, racing sailboats, nuclear reactors: NO design is failsafe, and NO design is foolproof. Especially a one-off design that isn't mass-produced. Even mass-produced designs have errors, as in the auto industry. It is a simple fact of life that engineers and managers balance cost and safety constantly.

This advice better applies to yourself. Why does NASA use "one-off" designs for all of its work (e.g. the Space Shuttle, space probes, etc.)? And NASA doesn't balance cost and safety. First, they don't practice even the most rudimentary cost controls: NASA still uses "cost plus" as the basis of most of its contracts. Second, NASA has come up with elaborate safety measures and procedures that were routinely bypassed by those who were supposedly performing the inspections.

Incidentally, most commercial nuclear reactors should not be "one-off" designs; i.e. build a few prototypes and then settle on a stable design. That's just my opinion, but I think it explains part of what went wrong with US nuclear power in the '60s and '70s.

What you SHOULD be comparing this against is other space agencies that launch a similar number of missions and satellites, i.e. other real-world examples.

Why? We lower our standards of performance to government bureaucracies when we do that. They are inherently inefficient, since the survival of the organization doesn't depend on success. Scaled Composites, for example, has demonstrated a suborbital craft capable of barely reaching space for a cost of around $25 million. In comparison, NASA developed and flew three X-15 prototypes with similar capabilities for a cost of $300 million in '60s dollars (which, incidentally, was considered a cheap program).

Isn't that... (3, Funny)

computational super (740265) | more than 9 years ago | (#10597805)

NASA management still often chooses the latter.

Why be different than any other management?

Re:Isn't that... (1)

EvilTwinSkippy (112490) | more than 9 years ago | (#10597892)

If a business manager fails, he gets fired. If an engineer fails, a few thousand people could die or be horribly injured.

NASA deserves more chances at failure (2, Insightful)

Anonymous Coward | more than 9 years ago | (#10597811)

Human error is an inevitable input to any complex endeavor. Either you manage and design around it or fail. NASA management still often chooses the latter.

There's a contradiction in that statement above... but I can't think what it is exactly. Along the lines of: humans manage and design the error handling, don't they?

That said, there's nothing wrong with building in redundancy and failsafes.

In space probes, redundancy comes at the cost of the number of unique mission goals, and at financial cost.

Sometimes you just have to eat the failure; that's what insurance is for. We in the public shouldn't always expect NASA to have 100% failure-free (unmanned) missions and then exact harsh punishment on them, which invariably gets passed down to engineers and not management decision-makers.

With the current attitude, the NASA of old would have been shut down in the first couple of years for wasting taxpayer money. Luckily there was competition with the Soviets.

Re:NASA deserves more chances at failure (1)

orac2 (88688) | more than 9 years ago | (#10598230)

No one, least of all Oberg (a 22-year veteran of mission control), is asking NASA to have a 100% success rate. Space is harsh, unknown unknowns lurk, etc.

What he is calling for is a management structure that allows solutions to problems that have occurred before to be implemented properly. Columbia was destroyed for almost the same root causes that were exposed after Challenger. I don't think it's unreasonable to expect people to have eliminated those problems, and kept them eliminated.

The Columbia Accident Board had some harsh things to say about "Faster, Cheaper, Better" and NASA technical oversight in general; you should read their report.

Circular reasoning (2, Interesting)

D3 (31029) | more than 9 years ago | (#10597812)

The fact that human error isn't compensated for is the true human error that needs compensation.
I think I just sprained my brain thinking up that one.

Human error is a factor in ALL work... (1)

ites (600337) | more than 9 years ago | (#10597826)

All moderately-complex projects have to be built around:

1. change
2. error

In Soviet Russia... (-1, Offtopic)

penguinoid (724646) | more than 9 years ago | (#10597830)

In Soviet Russia, Murphy's Law obeys YOU!

The problem with errors (3, Interesting)

RealityProphet (625675) | more than 9 years ago | (#10597838)

The problem with errors is that detecting all errors all the time is absolutely impossible. Think back to your intro theory CS class and to Turing recognizability. Think halting problem. Now, reduce the problem of finding all errors to the halting problem:

if (my_design_contains_any_errors) while(1);
else exit;

Feed this into a program that decides whether its input halts, and see what happens. You can't, because we know it is impossible for such a program to always return an answer. QED: errors are unavoidable. No need to sniff derisively in the direction of NASA's "middle management". Let's see if YOU can do a better job!
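The reduction above can be made concrete with the standard diagonal argument. A minimal Python sketch, where the claimed decider passed in is a stand-in (the whole point being that no real one can exist):

```python
def make_diagonal(halts):
    """Given any claimed halting decider (program -> bool),
    build a program the decider must get wrong."""
    def diagonal():
        if halts(diagonal):   # decider predicts "halts"...
            while True:       # ...so loop forever instead
                pass
        # decider predicts "loops forever" -> halt immediately
    return diagonal

# Whatever a candidate decider answers about its own diagonal, it's wrong:
claims_loops = make_diagonal(lambda prog: False)
claims_loops()  # returns at once, refuting the decider that said it loops
```

Since a perfect all-errors detector would let you build such a decider, it can't exist either; only partial checks are possible.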

Re:The problem with errors (1)

orac2 (88688) | more than 9 years ago | (#10598178)

No need to sniff derisively in the direction of NASA's "middle management".

We're not talking about some unknown unknowns that crop up as the inevitable residue of your halting problem analogy.

We're talking about a class of errors that have happened before. We already know about them; they've already been detected. And yet, because of management failure, they continue to persist. The Columbia Accident Board identified this as NASA's key problem, not the weakness of the reinforced carbon-carbon wing leading edges.

Unknown errors are unavoidable, but ignoring solutions to known errors is unforgivable.

No Excuse! (1)

webgit (805155) | more than 9 years ago | (#10597844)

Since scientists have defined Murphy's Law [slashdot.org] , no one should have any excuse for letting anything that can go wrong, go wrong!

where would we be without mistakes... (5, Insightful)

woodsrunner (746751) | more than 9 years ago | (#10597850)

If you compare the advances in science and knowledge due to mistakes rather than deliberate acts, it might come out that everything is a mistake.

Recently I took a class on AI (insemination, not intelligence), and apparently the two biggest breakthroughs by Dr. Polge in preserving semen were due to mistakes. First, his lab mislabeled glycerol as fructose, and they were able to find a good medium for suspension. Second, he blew off finishing freezing semen to go get a few pints and didn't make it back to the lab until the next day, thus discovering that it was actually better not to freeze the stuff right away.

Mistakes are some of the best parts of science and life in general. It's better to try to make more mistakes (i.e., take risks) than to always try to be right (unless you are obsessive-compulsive).

Re:where would we be without mistakes... (2, Funny)

jmmcd (694117) | more than 9 years ago | (#10597895)

"he blew off finishing freezing semen to go get a few pints"

you can't, possibly, have written that with a straight face. can you?

Re: Straight face (1)

woodsrunner (746751) | more than 9 years ago | (#10598087)

I guess I have passed the threshold where I even think of these jokes. That actually was funny and I didn't even realize it. er, my mistake...

It's not too difficult to become immune to it when you talk about semen all day. We even have Post-it notes and pens with pictures of semen on them.

I guess that is the fate of working in AI. On the upside, it must have an effect on overall fertility. Most of the people in this company seem to have 3 kids minimum!

In Science it is $Serendipity ... (1)

foobsr (693224) | more than 9 years ago | (#10597977)

... not mistakes. Have a look [simonsingh.net]

CC.

Re:where would we be without mistakes... (1)

khallow (566160) | more than 9 years ago | (#10598024)

We're not advocating the elimination of all mistakes but merely the elimination of easily foreseen and prevented mistakes.

Re:where would we be without mistakes... (1)

Dhalka226 (559740) | more than 9 years ago | (#10598045)

I completely agree that a lot of good has come from making mistakes and finding something new out.

Still, I'd prefer my multi-million dollar spacecraft be programmed with the right units and not crash headlong into a planet. I'm picky that way though.

we're living in an imperfect world (1)

igzat (817053) | more than 9 years ago | (#10597869)

NASA's current difficulties arise from scattered teams that each check only their own parts, rather than fully qualified teams that go over the entire vehicle. The fact that the whole thing is usually designed by committee, in several pieces, then assembled at the last minute probably helps facilitate error. The Saturn V rockets and the other technology we used to land on the moon had the capability of being far less reliable than today's technology, but we still managed to use them for years without error.

Re:we're living in an imperfect world (3, Insightful)

Halo- (175936) | more than 9 years ago | (#10598139)

NASA's current difficulties arise from scattered teams that each check only their own parts, rather than fully qualified teams that go over the entire vehicle.

I'm not sure I buy that completely. While it certainly would help to have a single SME go over the entire vehicle, I doubt such a person could exist and complete the checks in a reasonable amount of time. The guy who checks the computer code is probably not going to be an expert in metal fatigue or electrical engineering. Even if you could find some sort of uber-genius with expert knowledge of every system, he or she would have to work serially. If they started at component "1" of 654224166 and went down the line in order, the checks they started with would be out of date by the time they finished.

I'm not so sure it's just Murphy's Law (1)

Timesprout (579035) | more than 9 years ago | (#10597871)

It's important not to lose sight of the harshness of the environment these systems are designed to operate in. As a simple example, striking a match is an easy task to perform, yet people trapped in cold, remote areas have died because they were under too much stress to light a fire even though they had the tools. It's also a question of consequences when something goes wrong, and space is not very forgiving.

Human Factor (4, Insightful)

xnot (824277) | more than 9 years ago | (#10597887)

I think the biggest difficulty surrounding large organizations is the lack of communication tools linking the right engineers together. It seems unfathomable that some of these mistakes were able to propagate through the entire engineering process and nobody caught them.

Unless you consider the fact that in large organizations, the left hand typically has no clue what the right hand is doing. I work at Lockheed Martin, and I'm typically involved in situations where one group makes an improvement that none of the other groups know about; changes and decisions are poorly documented (if at all), so nobody knows where the process is going; people make poor decisions because management hasn't set out proper procedures; teams aren't co-located; and there's poor information about which people have the knowledge needed to solve a particular problem. Any number of things confuse the engineering process, to the detriment of the product. Most of these situations are caused by a lack of communication throughout the organization as a whole.

This is a serious problem, and it needs to be acknowledged by the people in a position to make a difference.

Nasty Remark (3, Insightful)

mathematician (14765) | more than 9 years ago | (#10597940)

"Either you manage and design around it or fail. NASA management still often chooses the latter."

I find this remark very unfair. It has a really nasty, snide attitude to it, like "we are perfect, so why can't you be?"

Come on guys, NASA is trying to do some really difficult and ground breaking stuff here. Cut them some slack.

Re:Nasty Remark (1)

GigsVT (208848) | more than 9 years ago | (#10598143)

Ground breaking stuff... like ...?

Collecting dust?
A pointless low gravity lab in space?
25-year-old shuttles that are more expensive than disposable craft?

armchair rocket science (5, Insightful)

onion_breath (453270) | more than 9 years ago | (#10597960)

I love how journalists and others like to sit back and criticize these engineers' efforts. They are human, and they will do stupid things. Having been trained as a mechanical engineer (although I mostly do software engineering now), I have some idea of how many calculations have to be made to design even one aspect of a project. I can't imagine the complexity of such a system: trying to account for every scenario, making sure algorithms and processes work as planned, for ONE mission. No second chances. That we have individuals willing to dedicate the mental effort to this cause at all is worthy of praise. These people have pride and passion in what they do, and I'm sure they will continue to do their best.

For anyone wanting to yack about poor performance... put your money where your mouth is. I just get sick of all the constant nagging.

Re:armchair rocket science (1)

andrewbaldwin (442273) | more than 9 years ago | (#10598032)

Well said!!

Oh how I wish I had mod points right now!

Re:armchair rocket science (1)

BenjyD (316700) | more than 9 years ago | (#10598037)

Exactly - people should listen to themselves sometimes.

"Can't you even fling a 2 tonne piece of incredibly delicate scientific apparatus a billion miles across space without one thing going wrong? Call yourself a scientist?"

John Galls Systemantics (4, Interesting)

Anonymous Coward | more than 9 years ago | (#10597968)

Systems display antics. John Gall has written a great book which vastly expands on Murphy's Law, called Systemantics - The Underground Text of Systems Lore [generalsystemantics.com] . I cannot recommend this book enough. It contains some truths about the world around us that are blindingly obvious once you see them, but until then you're part of the problem. Systemantics applied to political systems is very enlightening. Too bad that the only people who think like this in politics are the selfish and egomaniacal Libertarians (yeah, yeah.. I know. Libertarianism is the new cool for the self-styled nerd political wannabe).

Here are some of the highlights:
  • 1. If anything can go wrong, it will. (see Murphy's law)
  • 2. Systems in general work poorly or not at all.
  • 3. Complicated systems seldom exceed five percent efficiency.
  • 4. In complex systems, malfunction and even total non-function may not be detectable for long periods (if ever).
  • 5. A system can fail in an infinite number of ways.
  • 6. Systems tend to grow, and as they grow, they encroach.
  • 7. As systems grow in complexity, they tend to oppose their stated function.
  • 8. As systems grow in size, they tend to lose basic functions.
  • 9. The larger the system, the less the variety in the product.
  • 10. The larger the system, the narrower and more specialized the interfaces between individual elements.
  • 11. Control of a system is exercised by the element with the greatest variety of behavioral responses.
  • 12. Loose systems last longer and work better.
  • 13. Complex systems exhibit complex and unexpected behaviors.
  • 14. Colossal systems foster colossal errors.

Comforting quote from the article. (3, Informative)

wiredog (43288) | more than 9 years ago | (#10598009)

"these switches were reportedly developed as a nuclear warhead safety device"

Very comforting to know how easy it is to wire the safeties on nuclear weapons up backwards.

Re:Comforting quote from the article. (0)

Anonymous Coward | more than 9 years ago | (#10598050)

don't worry, it would hopefully be discovered after the first explosion.

If Murphy knew what was going on today (1)

Prince Vegeta SSJ4 (718736) | more than 9 years ago | (#10598033)

he would roll over in his grave and say:

After all, this post has GENESIS and outer space in it.

Wisdom (0)

Anonymous Coward | more than 9 years ago | (#10598035)

What humans do well is use wisdom. It is the process of making decisions when you don't have enough information. Then you use judgement based on experience and training.

The idea that, if we give a technician a detailed set of instructions, we will get the right result is the opposite of wisdom. It is bureaucracy. It will always produce errors in unfamiliar situations.

The management system that I like best is that practiced by Admiral Nelson. He spent much time with his officers discussing strategy and tactics. When the time for battle came, it didn't matter what happened to the command structure, somebody would be able to take the right decision. That's why the British navy was so good at that time.

The opposite was when the British formed a square to fight the Zulus. The Zulu general decided to attack one side of the square only. That side ran out of bullets. They sent a runner for more ammunition and the quartermaster said something like: "Sorry, you've used your allocation. No bullets for you." The Zulus won of course.

The bottom line is that if you want someone to do something for you, make sure they UNDERSTAND what they are doing and why. If someone installs a fitting backwards for you, it's your fault because the installer clearly didn't understand what was required.

Anyone else find this sobering quote? (1)

argStyopa (232550) | more than 9 years ago | (#10598036)

from the article:
"After all, these switches were reportedly developed as a nuclear warhead safety device, so one could just assume that they were properly wired."

Nice to know those safety devices are foolproof.

Time for grass-roots action? (1)

mwood (25379) | more than 9 years ago | (#10598047)

Maybe we should pass the hat and send every NASA manager a copy of _Systemantics_, for their enlightenment. (Likely the scientists and engineers already have their own copies.)

Self-referential Murphy's Law (1)

quoob (791191) | more than 9 years ago | (#10598117)

As the article quotes Murphy's Law: "Every component that can be installed backward, eventually will be."

KISS... (3, Interesting)

Kong99 (618393) | more than 9 years ago | (#10598120)

A great engineer demonstrates his/her skill not by designing something terribly complex, but by designing the object that meets required specifications as SIMPLY as possible with as FEW unique parts as possible. That is GREAT engineering.

However, with that being said I really do not believe Engineers are the problem at NASA. Bureaucracy is the enemy at NASA. NASA needs a complete top to bottom overhaul.

Problems are due to political correctness. (-1, Troll)

Anonymous Coward | more than 9 years ago | (#10598122)

With the sickening trend of changing qualifications to include race/ethnicity/gender, it's no surprise that they can't get the job right.

Back in the Mercury-Apollo days, NASA only hired the ELITE. They were the best candidate for the job, with the only qualification being their intelligence and skill for that task.

Such a team could no longer be formed because of the PC trend. Social pressure wouldn't allow it. They'd be called racist and insensitive for not hiring employees based on Affirmative Action hiring practices.

Where they used to pick the best damn person for X task, they now have to pick a candidate that satisfies other groups' demand for diversity. Diversity wouldn't be bad if you picked the elite, and then (and only then) looked who comprised that group and respected them for who they are. Nowadays, there is demand for candidates who fill a race/ethnicity/gender requirement to satisfy what people see as an image problem. Instead of hiring the best mission control lead that they can find, there might be pressure to find a flight control lead who is of X race, or a mission planner who is of Y gender.

There used to be diversity in NASA but you were assured that they were the best that could be chosen. Now, unfortunately, NASA hires candidates based on demand for a "diverse image".

My suggestion: This is rocket science, not a Microsoft commercial. Choose people based on ability alone and forget about all other factors.

Author of TFA... (0)

Anonymous Coward | more than 9 years ago | (#10598124)

Looking at the author's webpage about himself, one sees that he has a long history with NASA, stretching back to the '70s.

Then I noted how he represents his name. WTF is with the "O"?? Suddenly, I'm reminded of the Demotivational poster for Consulting...

TFOAE

We need Errors! Right? (1)

mu_shadow (813428) | more than 9 years ago | (#10598129)

When you make an error you eliminate a possibility from the equation, thus bringing you closer to a solution. You can't learn without making mistakes. Lots of AI is based on computers making mistakes.

http://www.darwinmag.com/learn/curve/column.html?ArticleID=44 [darwinmag.com] Explains: "A computer using AI to play chess, for example, has the ability to "learn" millions of possible moves, process them and make a choice. Witness Gary Kasparov's stunning defeat by IBM Corp.'s Deep Blue. The computer did not just mimic a memorized set of moves; it altered its strategy based on the results of the mistakes it made in previous games."

Of course you could argue that humans wrote the code that caused the computer to intentionally make a mistake, therefore it is the human's fault. Wait, do you call an intentional mistake an error? Now I'm confused!

Wait, errors can be used to make a funny too! Did I make an error typing above, or was it intentionally funny? Because farts are funny!
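The "altering strategy based on the results of mistakes" idea can be sketched in a few lines. This is a generic epsilon-greedy toy, not how Deep Blue actually worked; the win rates and names here are invented for illustration:

```python
import random

random.seed(42)  # deterministic for the example

# Two candidate moves: move 1 actually wins more often than move 0.
WIN_RATE = [0.3, 0.7]
counts = [0, 0]  # how often each move has been tried
wins = [0, 0]    # how often each move has won

def pick_move(epsilon=0.1):
    """Mostly exploit the best-looking move, sometimes explore (i.e., risk
    a 'mistake'); those mistakes are what teach the agent the true rates."""
    if counts[0] == 0 or counts[1] == 0 or random.random() < epsilon:
        return random.randrange(2)
    rates = [wins[i] / counts[i] for i in range(2)]
    return rates.index(max(rates))

for _ in range(2000):
    move = pick_move()
    counts[move] += 1
    if random.random() < WIN_RATE[move]:
        wins[move] += 1

# After enough trials, experience (including the losing tries of move 0)
# steers the agent toward the stronger move.
print("tried the stronger move more often:", counts[1] > counts[0])
```

The point of the sketch: the agent's "errors" (exploratory tries of the weaker move) are exactly the data that lets it converge on the better strategy.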