The Ancient Computers Powering the Space Race 253

An anonymous reader writes "Think the exploration of space is a high-tech business? Technology dating back to the Apollo moon landings is still used by NASA mission control for comms, and 1980s-era 386 processors still keep the International Space Station aloft."
  • This is news? (Score:3, Insightful)

    by Anonymous Coward on Monday September 27, 2010 @10:13AM (#33710696)

    I thought everyone was aware of this by now. :-/

    • Re:This is news? (Score:5, Informative)

      by commodore64_love ( 1445365 ) on Monday September 27, 2010 @10:41AM (#33711054) Journal

      It's the same in any long-life service, like space and military. For example, the Aegis missile system runs on 286s and 386s, while the buses run at a sedate 200 kilohertz. There have been recent upgrades to "new" PowerPCs or Pentiums, but only for a few select ships.

      There are even some strange home users who still run primitive CPUs from the Seventies, like the 6502, 8088, and 68000!

      • Re:This is news? (Score:5, Insightful)

        by TheRaven64 ( 641858 ) on Monday September 27, 2010 @11:07AM (#33711496) Journal

        It's slightly different in space, because radiation hardening is also an important factor. ESA uses a lot of SPARC32 chips, in the form of the (GPL'd) LEON, which was designed so that anyone can produce rad-hardened versions of it cheaply. Intel periodically produces rad-hardened versions of their chips, but certainly not the latest ones (the transistor density of the hardened process isn't as high as the consumer-grade process), so you have longer upgrade cycles, and you also need rad-hardened versions of all of the support chips, so it's worth skipping a few generations if something works.

        And, really, there's nothing wrong with using a 386, if it's fast enough. Upgrading from a chip that is twice as fast as you need to one that is a hundred times as fast as you need is not an easy decision to make.

        The military was still buying Z80s until a few years ago for a lot of things. They had Z80 code that worked and had been very well tested. Hopefully everyone involved in space learned from Ariane that upgrading something requires (expensive) revalidation and testing of everything that interfaces with it.

        • by Mr 44 ( 180750 ) on Monday September 27, 2010 @12:56PM (#33713406)

          OK, later ones aren't exactly non-deterministic, but the 386 was the last of the straightforward microprocessors that simply executed one instruction after another: no out-of-order execution, no crazy on-chip L2/L3 caches, etc.

          Wonder if that leads to easier "verification" at a very low level, if NASA cares about that...

    • I worked on guidance and control systems for the USAF. When I got the chance to look at the shuttle's inertial nav systems, I wasn't really that shocked to see they were basically the same as the systems I was working on, which were designed in the '60s and modified only slightly through the '70s. The systems work, and with redundancy they provide an incredibly accurate system.
    • Re: (Score:3, Interesting)

      by tixxit ( 1107127 )

      I would have been surprised had I not worked for a nuclear power plant before. I was surprised when I found out many of their computers were decades old. They've even had a couple of museums asking them if they could buy equipment off them, not realizing it was still in use.

      Of course, their motto is, if it ain't broke, don't fix it. When it comes to critical systems, old and known to work is better than new and unproven.

  • Makes sense (Score:5, Insightful)

    by Pojut ( 1027544 ) on Monday September 27, 2010 @10:14AM (#33710702) Homepage

    Given how wonky IT and communication upgrades can be, it makes sense to keep these systems the same for as long as possible. I imagine that after the Shuttle is fully and completely retired, NASA will begin to take a serious look at their aging hardware.

  • by Sonny Yatsen ( 603655 ) * on Monday September 27, 2010 @10:14AM (#33710704) Journal

    It's not that simple to just update NASA's technology. Yes, a lot of NASA's computer systems are antiquated, but they've also been vetted and engineered so that all the bugs and kinks have been worked out. They can update the technology, but they'll have to go through the whole process of figuring out where all the bugs are all over again. Unlike buying a buggy desktop application, though, when NASA has a bug, lives and millions of dollars are at stake.

    • Yeah, a BSOD while working on your term paper due to wonky 64-bit drivers really sucks.

      Now imagine your machine was controlling part of a launch sequence for the shuttle.
    • by Joce640k ( 829181 ) on Monday September 27, 2010 @10:22AM (#33710814) Homepage

      Those "ancient" 386 chips are probably mil-spec radiation hardened chips, too. Good luck getting your 45nm quad cores to work reliably in space...

      • Re: (Score:3, Interesting)

        by poetmatt ( 793785 )

        This is the part I always wondered about: why haven't they at least tried to have new military-spec radiation-hardened chips created (faster processors, etc.)? I can think of plenty of uses for that which would also coincide with the medical field, although ~400MHz can certainly handle plenty of things as needed.

        • Re: (Score:2, Informative)

          by phobos512 ( 766371 )
          "Mil-spec" and "radiation hardened" are not hardly the same thing. A typical military system does not used radiation hardened parts - they're unnecessary. However, chips used in military hardware have to go through extensive proofing to ensure that there aren't sneak circuits, single point failures, etc. That costs money and takes a fair amount of time. You also need to understand that those "mil-spec" and "radiation hardened" pieces of hardware are not designed nor manufactured BY the military or the f
          • I thought "mil-spec" was a function of the number of lunches eaten during the acquisition process - more than ten lunches and a couple of freebies makes something mil-spec.

          • Re: (Score:3, Informative)

            by Voyager529 ( 1363959 )

            It's called the ACQUISITION process for a reason.

            ...because governments are required to go through all 285 rules of acquisition before finally obtaining the parts they need. When dealing with Ferengi, surely that must be a time consuming process.

        • by TheRaven64 ( 641858 ) on Monday September 27, 2010 @11:15AM (#33711638) Journal
          They do. People are constantly making new rad-hardened chips, mostly for commercial satellites. The latest LEON (SPARCv8) chips go up to about 25MHz in the rad-hardened version. It's not just a matter of using a slightly older technology - space is an incredibly IC-hostile environment.
        • by networkBoy ( 774728 ) on Monday September 27, 2010 @11:15AM (#33711654) Journal

          Largely this is a function of geometry. The smaller gates required for higher speed operation are also vastly more sensitive to imparted charge from ionizing radiation. Large slow chips are inherently more robust, so when you do things like Si on sapphire you get a lot of bang for your buck.

          I don't doubt that a fast core could be rad hardened, but the current generation of Core 2 and iX architectures from Intel/AMD/IBM is virtually impossible to make into a rad-hardened build. You really would need to do a redesign with things like ECC registers, and the demand for such chips is so low as to not be a profitable endeavor for any of the main players. Demand is satisfied by the RAD6000/RAD750 families (POWER1 and PowerPC 750 / Apple G3 derivatives), so why invest gobs of money into R&D for a product that has little to no demand?
          -nB

          • hmm.

            Inquiring about the same: would it make any difference if it were an ARM chip?

            • by networkBoy ( 774728 ) on Monday September 27, 2010 @11:31AM (#33711898) Journal

              Based on geometry alone, no.
              However, I think a Cortex-series core would be vastly easier to re-implement with double-bit-error ECC parity.
              If I were a Rocket Chip Designer, my Cortex redesign:
              2 ALUs with parity checks on output, run combinationally; on any parity error, re-run the calculation.
              All register memory ECC-protected, capable of detecting 2-bit errors and correcting single-bit errors.
              Similar over-designing on all other functions in the die.
              Dual instruction caches, again parity checked.
              Built as Si on sapphire.
              Increase gate geometry to > 90nm (likely 130nm).
              Adjustable clock gating so the thing can be clocked as slowly as possible for a given job.

              Realistically though, that will cost a lot of money. You can get a RAD750 running at about 600MHz for $200,000 already.
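
              To make the "ECC registers" idea concrete, here is a toy software model of a SECDED (single-error-correct, double-error-detect) code: textbook Hamming(7,4) plus an overall parity bit. Purely an illustrative sketch in C - real parts do this in hardware, and the bit layout here is just the classic textbook one:

                #include <stdio.h>
                #include <stdint.h>

                /* Encode 4 data bits as Hamming(7,4) plus an overall parity bit
                 * in bit 7: single-error-correct, double-error-detect (SECDED). */
                static uint8_t secded_encode(uint8_t d)
                {
                    uint8_t b[8] = {0};                /* b[1..7], textbook 1-indexed */
                    b[3] = (d >> 3) & 1;  b[5] = (d >> 2) & 1;
                    b[6] = (d >> 1) & 1;  b[7] = d & 1;
                    b[1] = b[3] ^ b[5] ^ b[7];         /* parity over group 1 */
                    b[2] = b[3] ^ b[6] ^ b[7];         /* parity over group 2 */
                    b[4] = b[5] ^ b[6] ^ b[7];         /* parity over group 4 */
                    uint8_t w = 0, overall = 0;
                    for (int i = 1; i <= 7; i++) { w |= (uint8_t)(b[i] << (i - 1)); overall ^= b[i]; }
                    return (uint8_t)(w | (overall << 7));
                }

                /* 0 = clean, 1 = single-bit error corrected, 2 = double error detected. */
                static int secded_decode(uint8_t w, uint8_t *d)
                {
                    uint8_t b[8] = {0};
                    for (int i = 1; i <= 7; i++) b[i] = (w >> (i - 1)) & 1;
                    uint8_t syn = (uint8_t)((b[1] ^ b[3] ^ b[5] ^ b[7])
                                | ((b[2] ^ b[3] ^ b[6] ^ b[7]) << 1)
                                | ((b[4] ^ b[5] ^ b[6] ^ b[7]) << 2));
                    uint8_t overall = (w >> 7) & 1;
                    for (int i = 1; i <= 7; i++) overall ^= b[i];
                    int status = 0;
                    if (syn != 0 && overall == 0) return 2;        /* two flips cancel parity */
                    if (overall == 1) { b[syn] ^= 1; status = 1; } /* flip the bad bit back */
                    *d = (uint8_t)((b[3] << 3) | (b[5] << 2) | (b[6] << 1) | b[7]);
                    return status;
                }

                int main(void)
                {
                    uint8_t word = secded_encode(0xB);
                    uint8_t hit = word ^ (1 << 4);     /* simulate an SEU on one bit */
                    uint8_t d;
                    int st = secded_decode(hit, &d);
                    printf("status=%d data=0x%X\n", st, d);  /* status=1 data=0xB */
                    return 0;
                }

              An SEU that flips one bit is silently repaired; two flips are flagged as uncorrectable, which is exactly the split the design above aims for. The dual-ALU parity trick is the same idea applied to logic instead of storage: compute redundantly and compare.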

          • by SETIGuy ( 33768 )

            the current generation of Core 2 and iX architectures from Intel/AMD/IBM is virtually impossible to make into a rad-hardened build.

            That depends upon where you are going and how much weight you can carry. For a relatively benign environment like low earth orbit, which usually has lots of weight margin, you can always add shielding. Not that I would choose them for a design... The issue there is that they are more complicated than necessary for what needs to be done. Never use a 16-bit processor when an 8-bit processor will do the job. For a spacecraft onboard computer, I would probably choose ARM, MIPS, or PPC.

        • Re: (Score:3, Informative)

          by vlm ( 69642 )

          This is the part I always wondered about: why haven't they at least tried to have new military-spec radiation-hardened chips created (faster processors, etc.)?

          They have...

          http://en.wikipedia.org/wiki/Category:Radiation-hardened_microprocessors [wikipedia.org]

          Specifically

          http://en.wikipedia.org/wiki/Proton200k [wikipedia.org]

          About a gigaflop or a couple gigamips or giga-whatevers.

          The problem is not finding an app to burn some mips, but finding the weight for the power supply and cooling ...

          And the realistic market shipping quantity is probably triple digits at most.

          And running a thousand times quicker seems to mean, on land-based processors, that it'll crash by memory leak or whatever a thousand

        • They have had some newer chips space rated and rad hardened actually. Granted, many of NASA's older programs (like the shuttle) use very old hardware to do a job. Some of the systems that have been designed in the last decade, however, do use newer hardware if it is necessary. Computational requirements are a primary subsystem designed for in any space system. As a mission concept gets vetted out, system engineers weigh the cost vs. benefit of using newer hardware and having more processing power vs. higher
      • Definitely true. Plus, the more complex a system is, the more places it has that can fail. A 386, in comparison, has far fewer points of failure.

        Plus, maybe it's just me, but I think it's just inspiring that NASA was able to accomplish some of the things they've done with minimal computing power and so much finesse. The average desk calculator today has more computing power than the lunar module for the Apollo missions, and yet, Apollo still took men safely to the moon and back.

      • Just so we're clear, "mil-spec" means: runs at half speed, weighs double, costs ten times as much as it should. But on the other hand, some really nice lunches get eaten during the tender process.
        • Re: (Score:3, Informative)

          by jimicus ( 737525 )

          It costs ten times as much because it comes with a sheet of paper.

          Not a spectacularly amazing sheet of paper, it has to be said. But a sheet of paper that confirms that the chip is specced to handle a lot more abuse than anything available in the commodity market, a sheet of paper that says "You want to use this in applications where lives are at stake? Where if it goes wrong, someone is more-or-less guaranteed to die? No problem!".

          You look at the paper for ordinary consumer chips - it normally says the e

      • by c0lo ( 1497653 )

        Those "ancient" 386 chips are probably mil-spec radiation hardened chips, too.

        What? But I thought.... my iPhone3... ummm... never mind, it's not part of the problem.

      • by crgrace ( 220738 ) on Monday September 27, 2010 @10:55AM (#33711240)

        Those "ancient" 386 chips are probably mil-spec radiation hardened chips, too. Good luck getting your 45nm quad cores to work reliably in space...

        They certainly are mil-spec. Intersil is still doing wafer runs of silicon-on-sapphire rad-hard 386s at their fab in Palm Bay, FL. I got to tour the fab during a job interview. Regarding the 45nm cores, they are probably quite radiation tolerant. Smaller-feature-size transistors have much smaller oxide thickness, so it is much, much easier for ions caught in the oxide due to radiation to tunnel away, and total dose ceases to be a problem. Single-Event Upset (SEU) becomes a big problem, though, because embedded RAMs are not as robust (much lower noise margins with reduced power supplies), but that is usually dealt with using redundancy and a design style that doesn't allow dynamic logic or flip-flops.

        High-performance circuits *are* used in space. There is some kick-ass stuff being designed at Northrop Grumman Space Technology, for example. It just isn't used in manned missions due to the incredible liability.
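
        (For anyone curious what "dealt with using redundancy" can look like: one common trick is triple modular redundancy, keeping three copies of the state and majority-voting them. A minimal illustrative sketch in C, not taken from any actual flight design:)

          #include <stdio.h>
          #include <stdint.h>

          /* Bitwise 2-of-3 majority vote across three copies of a register. */
          static uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
          {
              return (a & b) | (a & c) | (b & c);
          }

          int main(void)
          {
              uint32_t copy[3] = {0xDEADBEEF, 0xDEADBEEF, 0xDEADBEEF};
              copy[1] ^= 1u << 13;                  /* simulate an SEU in one copy */
              uint32_t voted = tmr_vote(copy[0], copy[1], copy[2]);
              copy[0] = copy[1] = copy[2] = voted;  /* "scrub" the flipped copy */
              printf("voted: 0x%08X\n", voted);     /* prints 0xDEADBEEF */
              return 0;
          }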

      • by LWATCDR ( 28044 )

        Yes they are, and all of the bugs are well known and documented.
        These are embedded systems.
        If you look at any complex embedded system, odds are you will find lots of Z80s, 68000s, and other very old chips.
        The CPU used in the COSMAC ELF, the RCA 1802, is still in production and being used on satellites. It is made using silicon on sapphire and is very resistant to radiation.
        It is now mainly used for housekeeping, but they keep using it because it works.
        Also, most people don't understand that for control applications a 38

      • by Jeng ( 926980 )

        Good luck getting your 45nm quad cores to work reliably in space

        I was under the impression that there are a lot of laptops on the ISS running experiments.

      • There's also a question of what software it is running. A specialized task might only need a small processor. There are a lot of household, commercial, medical and industrial devices that only need a few megahertz to tend to a certain number of inputs, a few conditionals and computations, and a few outputs. The basic concept is old and could have been achieved decades ago, but the same tasks still need to be done.

      • There are more modern alternatives but as a rule, all of the space grade microprocessors are based on older designs because they are simpler, more reliable, and easier to implement in a rad-hard process. More annoying when designing for space is the limited choice of suitable 16-bit and 8-bit microprocessors. When board space is at a premium it isn't pleasant to be forced into using 32-bit memories just to implement a simple control function.

    • by puto ( 533470 ) on Monday September 27, 2010 @10:28AM (#33710874) Homepage
      I forget which sci-fi author it was, but there is a book where one of the main characters is hired to analyze the code of a failing satellite. He describes it as perhaps the cleanest, most boring software he had ever seen: virtually bug free, and what bugs there were had 3000 pages of documentation.
      • Re: (Score:3, Interesting)

        by s7uar7 ( 746699 )
        I was at the Kennedy Space Center a couple of days ago. As part of the 'preparation' during the Shuttle Launch Experience, there are lines and lines of IF a THEN b ELSE IF c AND d THEN e code scrolling up the screen for about 2 minutes. Each line was unique (as far as I could tell), which suggests it is actual NASA code rather than something just created for the ride. No loops, functions or anything else any programmer would normally use today, but it would be extremely easy to debug.
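
        (For flavor, straight-line code in that style might look something like this. Emphatically not real NASA code - the sensors and limits are invented - but it shows why the style is easy to audit: every condition is spelled out once and can be checked line by line.)

          #include <stdio.h>

          int main(void)
          {
              /* hypothetical sensor readings and invented limits */
              double tank_psi = 248.0, cabin_psi = 14.2;
              int main_bus_ok = 1, backup_bus_ok = 1;

              /* no loops, no helper functions: one explicit check per line */
              if (tank_psi > 260.0)               puts("ALARM: TANK OVERPRESSURE");
              if (tank_psi < 230.0)               puts("CAUTION: TANK UNDERPRESSURE");
              if (cabin_psi < 13.9)               puts("ALARM: CABIN PRESSURE LOW");
              if (!main_bus_ok && backup_bus_ok)  puts("ADVISORY: ON BACKUP BUS");
              if (!main_bus_ok && !backup_bus_ok) puts("ALARM: TOTAL BUS FAILURE");
              return 0;
          }
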
    • by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Monday September 27, 2010 @10:31AM (#33710938) Journal

      Yes, a lot of NASA's computer systems are antiquated ...

      Furthermore, I thought the United States was still a bit stymied at how the Russians managed to compete with us in space while severely lacking in the VLSI chips department [slashdot.org]? There may still be some technologies, improvements and lessons to be learned from The Space Race -- especially from the side that fell apart first.

  • 286's (Score:4, Informative)

    by toygeek ( 473120 ) on Monday September 27, 2010 @10:14AM (#33710708) Journal

    I'm not sure if it is still the case but for a LONG time 286 processors were the only ones available that had been hardened against cosmic radiation and were rated for space. When you're lobbing people into space, it matters most what works and is proven, not what is fastest or the newest technology.

    • This may or may not delight you - nowhere in TFA does it say what kind of CPU is used; the only product mentioned directly is VMS. Combined with the whole "space race" thing, submitter is full of shit.
    • I'm not sure if it is still the case but for a LONG time 286 processors were the only ones available that had been hardened against cosmic radiation and were rated for space. When you're lobbing people into space, it matters most what works and is proven, not what is fastest or the newest technology.

      Yes but the other priority concern for space travel is size. Every square inch of space is critical. Space agencies must balance old-but-proven technology with newer but way smaller technology. My cell phone contains more processing power, memory, and data storage space than the entirety of 1960's era Mission Control.

      • Re:286's (Score:4, Insightful)

        by ThatOtherGuy435 ( 1773144 ) on Monday September 27, 2010 @10:50AM (#33711184)

        I'm not sure if it is still the case but for a LONG time 286 processors were the only ones available that had been hardened against cosmic radiation and were rated for space. When you're lobbing people into space, it matters most what works and is proven, not what is fastest or the newest technology.

        Yes but the other priority concern for space travel is size. Every square inch of space is critical. Space agencies must balance old-but-proven technology with newer but way smaller technology. My cell phone contains more processing power, memory, and data storage space than the entirety of 1960's era Mission Control.

        Don't forget about heat, either. Heat dissipation in space is a pain in the ass, and throwing a few hundred extra watts of heat at every data problem is a lot less viable than it is under your desk.

  • Probably the most solid platform too! There's no way I'd trust Windows 7 to launch a rocket into outer space!
    • by toygeek ( 473120 ) on Monday September 27, 2010 @10:15AM (#33710728) Journal

      I agree 100%! I'd go with something more time-proven like Windows ME. They didn't call it "Millennium Edition" for nothing!

      • That's what it stood for? I always thought it was an in joke, and they knew it brought about CFS [wikipedia.org] in computer hardware all along.
      • They didn't call it "Millennium Edition" for nothing!

        It's designed for space, and as reliable as the Millennium Falcon?

    • by anss123 ( 985305 )

      Probably the most solid platform too! There's no way I'd trust Windows 7 to launch a rocket into outer space!

      Windows 7 is not an RTOS (Real-Time OS), so it's a poor choice for controlling the space shuttle in flight. But it's perfectly fine for hosting the big red launch button.
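
      (The difference is deadlines: an RTOS guarantees a control task runs on schedule, while a desktop OS merely tries. A rough sketch of a hard-periodic loop in POSIX C, with an assumed 25Hz rate picked just for illustration; a real RTOS provides guarantees this code can only ask for:)

        #include <stdio.h>
        #include <time.h>

        #define PERIOD_NS (40L * 1000 * 1000)   /* 40 ms -> 25 Hz, an assumed rate */

        int main(void)
        {
            struct timespec next;
            clock_gettime(CLOCK_MONOTONIC, &next);
            for (int cycle = 0; cycle < 5; cycle++) {
                /* read sensors, run the control law, command actuators here */
                printf("control cycle %d\n", cycle);
                next.tv_nsec += PERIOD_NS;      /* absolute deadline of next cycle */
                if (next.tv_nsec >= 1000000000L) { next.tv_sec++; next.tv_nsec -= 1000000000L; }
                /* a desktop OS may wake us late here; an RTOS bounds that jitter */
                clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
            }
            return 0;
        }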

  • by eldavojohn ( 898314 ) * <eldavojohn@noSpAM.gmail.com> on Monday September 27, 2010 @10:15AM (#33710718) Journal

    The Ancient Computers Powering the Space Race

    From general agreement on the definition of the Space Race [wikipedia.org]:

    The Space Race was a mid-to-late twentieth century competition between the Soviet Union (USSR) and the United States (USA) for supremacy in outer space exploration. The term refers to a specific period in human history, 1957-1975, and does not include subsequent efforts by these or other nations to explore space.

    Emphasis mine. As to the 'ancient tech': it's stable and still working, so what's the problem? People are bitching about rising taxes, not the fact that we are stunting ourselves in exploring space. It's not 1975 anymore; people have moved on to other [slashdot.org] international penis/rocket/missile envy matches.

    In related news, the house fails to agree on a meager NASA funding bill [space.com] while space tourism continues to progress [google.com].

  • by axx ( 1000412 ) on Monday September 27, 2010 @10:18AM (#33710774) Homepage
    If the stuff in space is from the seventies, that means it's not running Free and Open Source Software! Proprietary alert: space stuff doesn't run Linux!
    • Re: (Score:3, Insightful)

      By federal law, any product of the Federal Government cannot be copyrighted (and thus, it's probably even less encumbered in that regard than FOSS). Of course, good luck getting them to disclose it.

      • Re:Wait a minute... (Score:4, Informative)

        by _Sprocket_ ( 42527 ) on Monday September 27, 2010 @11:49AM (#33712246)

        By federal law, any product of the Federal Government cannot be copyrighted (and thus, it's probably even less encumbered in that regard than FOSS). Of course, good luck getting them to disclose it.

        First - you'll find Fed Gov't contributors to various OSS projects if you do a bit of digging. Having said that, it's not that simple.

        While the Government might not be able to copyright works, individuals are free to patent inventions. One of the perks of working at NASA is that they assist their employees with patent applications for whatever they're working on, with the stipulation that the Government gets carte blanche to use the invention. But that's if you're a civil servant. NASA's strategy these days is to limit their Civil Service manpower to mostly oversight/management of programs. Meanwhile, much of the technical work is being shifted to contractors. Contractors hold all rights to whatever works they do under contract and are generally able to sell that work to other entities (law allowing). So not all Federal Government work goes into the community pot, and less and less does so these days.

        I should note that this off-loading strategy isn't absolute. There are still many Civil Servants at NASA doing technical work. NASA is less of a top-down directed organization than a collection of organizations within various groupings and sub-groupings with their own little fiefdoms and budgets that tend to work towards common goals. So while there may be a general trend, there will be plenty of small pockets of resistance that buck that trend if they have firm control over their own budget and the leeway with which to finance it (that and firing a Civil Servant is rather involved).

      • It is often the case that the contractors hold the copyrights for products produced for the Federal government.

    • Re: (Score:2, Interesting)

      by Bobakitoo ( 1814374 )
      http://www.gnu.org/gnu/gnu-history.html [gnu.org]

      In 1971, when Richard Stallman started his career at MIT, he worked in a group which used free software exclusively. Even computer companies often distributed free software. Programmers were free to cooperate with each other, and often did.

      Before Microsoft, software was the source code. But it is far easier to patch source code than compiled binaries, so it is more profitable to leave customers unable to apply patches and have them buy the same thing over and over every year. This "normal" state of closedness didn't happen until the 80s, thanks to Bill Gates. See http://en.wikipedia.org/wiki/Open_Letter_to_Hobbyists [wikipedia.org]

    • Re: (Score:3, Informative)

      by mcgrew ( 92797 ) *

      Admiral Grace Hopper, who wrote the world's first compiler and co-wrote the world's second compiler, advocated FOSS in the 1950s. Admiral Hopper encouraged programmers to collect and share common portions of programs. [yale.edu]

  • by g0bshiTe ( 596213 ) on Monday September 27, 2010 @10:20AM (#33710788)
    Don't fuck with it.
    • by Xiver ( 13712 )
      Truer words have never been spoken.
    • by Rogerborg ( 306625 ) on Monday September 27, 2010 @10:40AM (#33711038) Homepage
      See that glowing thing in front of you? The thing you're reading this on? It's just like little pictures of cats and pyramids scratched onto stone tablets, only we fixed it.
    • by mcgrew ( 92797 ) *

      That's a very anti-nerd statement; are you in management by chance? We nerds are ALWAYS fucking with stuff that isn't broken.

      "If it ain't broke, don't FIX it" is a logical statement. My Acer's now running Kubuntu instead of Windows 7; it wasn't broke, I didn't fix it, but it's better now that I've fucked with it (or at least it serves my purposes better). When I was a teenager I'd buy ten-dollar transistor radios and make guitar fuzzboxes out of them, and earned a few bucks selling them to guitar players. Th

  • by Chrisq ( 894406 ) on Monday September 27, 2010 @10:22AM (#33710810)
    I read a while ago that for space use the older integrated circuits are many times more reliable. On a new high-density IC a cosmic ray can knock out a connection track, whereas on older "8-bit" processors you would need thirty or forty hits in the same place.
    • Nothing that some 10k pullups (on every line, data and address alike) can't fix.

      Or maybe 4.7k. It's space; mustn't take chances. Don't want to make a field service call late at night out there.

    • by networkBoy ( 774728 ) on Monday September 27, 2010 @11:23AM (#33711782) Journal

      It's not that it would knock out a track; a single cosmic ray hit will not ablate the metal layers. It's that the newer parts use much lower voltages to get lower leakage and higher speed. Lower voltage == lower gate charge; in some cases the difference between charge states is < 100 electrons*. A single cosmic ray is capable of changing the charge state on these gates enough to make a bit undefined. That is a BadThing(tm).

      -nB
      * My info is specifically on flash and a couple years old.
      (n - m) == 100.
      0..m electrons on the gate == logic 0
      n+ electrons on the gate == logic 1
      between m and n electrons on the gate == undefined value
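
      (The footnote in code form - the thresholds are made up, but it's the same idea: m and n bound an undefined band, and a strike that bleeds off charge can drop a written 1 into it:)

        #include <stdio.h>

        enum cell_state { LOGIC_0 = 0, LOGIC_1 = 1, UNDEFINED = 2 };

        /* 0..m electrons -> logic 0, n+ -> logic 1, in between -> undefined */
        static enum cell_state read_cell(int electrons, int m, int n)
        {
            if (electrons <= m) return LOGIC_0;
            if (electrons >= n) return LOGIC_1;
            return UNDEFINED;
        }

        int main(void)
        {
            int m = 50, n = 150;       /* invented thresholds with (n - m) == 100 */
            int charge = 160;          /* cell written as logic 1 */
            charge -= 30;              /* cosmic ray strike bleeds charge off */
            printf("state = %d\n", read_cell(charge, m, n));  /* 2 == UNDEFINED */
            return 0;
        }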

  • B-2 Stealth (Score:5, Interesting)

    by tekrat ( 242117 ) on Monday September 27, 2010 @10:23AM (#33710834) Homepage Journal

    And the B-2 Stealth bomber has the equivalent of an Amiga 1000 running it. What is the point of this article? Critical systems require reliable, proven, hardened hardware, not flakey netbooks.

    If they are not the fastest CPUs, who cares? They aren't playing Half-Life on these systems, they are flying space shuttles, and if you can't tell the difference, do not work in the defense or space industries. CPU speed isn't the prevailing factor here; reliability and a known/proven system is.

    • Re:B-2 Stealth (Score:5, Insightful)

      by OzPeter ( 195038 ) on Monday September 27, 2010 @10:39AM (#33711022)

      What is the point of this article?

      I think the point of this article is to show the disconnect between the "oh-look-new-shiny-shiny" crowd who have to download and install their latest favorite application from nightly builds vs the "if-it-fucks-up-someone-gets-hurt" crowd who actually have a clue about reliability.

      • I wonder how many "nines" reliability there is on a shuttle computer.
        • by vlm ( 69642 )

          I wonder how many "nines" reliability there is on a shuttle computer.

          The reliability is high enough that it has little meaning.

          http://www.nasa.gov/mission_pages/shuttle/flyout/flyfeature_shuttlecomputers.html [nasa.gov]

          "Well, it has been 24 years since the last time a software problem required an on-orbit fix during a mission."

          So an MTBF of 24 years?

          "But perhaps the most meaningful statistic is that a software error has never endangered the crew, shuttle or a mission's success."

          100% uptime, essentially? Assuming no computer problems on the last flight, they might actually achieve 100% uptime?

          • I wonder how many "nines" reliability there is on a shuttle computer.

            The reliability is high enough that it has little meaning.

            http://www.nasa.gov/mission_pages/shuttle/flyout/flyfeature_shuttlecomputers.html [nasa.gov]

            "Well, it has been 24 years since the last time a software problem required an on-orbit fix during a mission."

            So an MTBF of 24 years?

            "But perhaps the most meaningful statistic is that a software error has never endangered the crew, shuttle or a mission's success."

            100% uptime, essentially? Assuming no computer problems on the last flight, they might actually achieve 100% uptime?

            To pick some nits, I don't think they should be able to brag about decades of uptime/MTBF if those computers are only ever switched on for a mission of at most 18 days at a time - even Windows ME might manage that... (Though 100+ missions without critical computer errors is still a nice number.)
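
            (Running the nit through the standard formula, availability = MTBF / (MTBF + MTTR) and nines = -log10(1 - availability), with an invented two-hour repair time just to make the arithmetic concrete:)

              #include <stdio.h>
              #include <math.h>   /* link with -lm */

              int main(void)
              {
                  double mtbf = 24.0 * 365.25 * 24.0;  /* hours: "24 years between fixes" */
                  double mttr = 2.0;                   /* assumed on-orbit fix time, hours */
                  double a = mtbf / (mtbf + mttr);
                  /* prints roughly: availability = 0.9999905 (5.0 nines) */
                  printf("availability = %.7f (%.1f nines)\n", a, -log10(1.0 - a));
                  return 0;
              }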

    • And the B-2 Stealth bomber has the equivalent of an Amiga 1000 running it.

      That seems like an odd example given the Amiga hardware's emphasis on graphics and sound.

      Does the B-2 have a MC68k in it? Taito's Chase Bombers does, but I don't know about the B-2.

    • Re: (Score:3, Insightful)

      by LWATCDR ( 28044 )

      True. People don't understand that reliability and capability need more than speed.
      These are the same folks that look at an IBM Z mainframe and compare it to an overclocked i7.

      Many systems need enough CPU and memory to get a single job done. Once you have that amount of power the rest of the effort goes into making sure that the job always gets done.

  • Nothing New Here (Score:5, Interesting)

    by bkmoore ( 1910118 ) on Monday September 27, 2010 @10:30AM (#33710928)
    My first engineering job out of college was as an avionics engineer at McDonnell Douglas in 1996. We were designing avionics using a Highly Reliable Industrial (HRIP) M68000 CPU downclocked to a couple of MHz. The reason for this CPU choice was that it did exactly what was required for building an embedded system. Also, the M68000 had/has a very long production cycle and would be around for many years to come, which is important if you need spare parts in the future. We used the minimum clock setting required to achieve the required performance and to reduce power consumption and thermal cooling requirements. Modern general-purpose desktop CPUs normally aren't good choices for single-task embedded systems because of their power consumption, short product life spans, and general feature overkill. You do not need a particularly fast CPU to perform basic guidance and control tasks or to run avionics computers. The PowerPC has been adapted for embedded MILSPEC systems, for example, and it's about 10 years behind the "state of the art."
  • High-tech? (Score:2, Insightful)

    When did "high-tech" become synonymous with "has a lot of transistors"?
    • Re: (Score:3, Insightful)

      Flamebait? Really?

      You could run the ISS on a flip-flop and a popsicle stick, and it would still constitute "high tech."
  • Not surprised (Score:5, Insightful)

    by JLangbridge ( 1613103 ) on Monday September 27, 2010 @10:40AM (#33711044) Homepage
    I'm not surprised, not at all. The A320 ELAC uses 3 68k chips, and the A320 SEC uses an 80186 and even an 8086 chip. Why? For lots of reasons. Basically, it doesn't require billions of instructions per second, it doesn't need to access gigabytes of memory, and most importantly, they are proven chips that have gone through years of testing, and they are relatively simple. At the time they were complicated, granted, but they were still within reach of severe quality control.

    Remember the problems Intel had with the Pentium and floating-point calculations? Nothing serious, but still... the chip was so complex that problems crept in during the design phase, and at 38,000 feet you do not want problems. To cite a fellow Slashdotter above (thanks, tekrat): critical systems require reliable, proven, hardened hardware, not flakey netbooks. Enough design faults have crept into aeronautical design; I can only imagine the space sector. NASA used to program everything in 68k because the chips were reliable, simple, fast enough, and because they had lots of really, really good engineers who knew every single aspect of them.

    Don't get me wrong, I love today's chips, and i7s look sexy, but with a TDP of 130W for the Extreme Edition chips, they just add problems. Running at 3.2GHz, with over a billion transistors, you are just asking for trouble. At those speeds and heat, problems do happen and the system will crash. OK, not often, but with mission-critical systems, just once is enough. Did anyone seriously expect the shuttle to run quad-cores with terabytes of RAM?
  • by OzPeter ( 195038 ) on Monday September 27, 2010 @10:42AM (#33711076)
    My car uses 100-year-old internal combustion technology.
  • Welcome to the soak (Score:4, Informative)

    by mbone ( 558574 ) on Monday September 27, 2010 @10:44AM (#33711092)

    It has been 4+ decades since the space program dominated electronics development.

    Anyway, by the time any piece of electronics gets radiation hardened and goes through the "soak [eetimes.com]" - i.e., a few simulated years' or decades' worth of cycling through heat, usage, etc., plus fixing any uncovered problems - it is by definition not going to be cutting edge.

    It's good that space computers are more commonplace, anyway. Viking 1 died because JPL couldn't afford to keep the people who understood the landers' archaic assembly language on the ramped-down extended mission team.

  • Mostly contractual (Score:4, Informative)

    by sunking2 ( 521698 ) on Monday September 27, 2010 @10:56AM (#33711250)
    Virtually anything related to space has a huge development cycle. Contract bid to delivery is easily 5+ years. One of the first things you do is source your suppliers so you will never deliver anything state of the art. It'll be at least 5-10 years old. At pretty much the same time you have to also deliver most of your spares for the near or distant future. And there probably is no money in the contract for hardware upgrades. It is what it is until it's replaced.
  • ... much of the flying hardware designs are decades old too - but this is IMO due to so much of it relying on govt funding or govt being a primary customer. It seems that there might be progress on this front, though - with the likes of Musk, Bigelow and perhaps even Branson (suborbital now - but it's a good start). Guidance computers do not need to be terribly powerful - they need to be reliable. Witness what happened to the first Ariane 5 launch. It wasn't very long ago that the venerable COSMAC 1802 g
  • by crgrace ( 220738 ) on Monday September 27, 2010 @11:05AM (#33711444)

    While the article is quite right to highlight the proven, reliable technology in manned space missions, it is a mistake to infer that all space electronics technology used today is from the 70s and 80s. There is a vibrant design community for space electronics, and a lot of quite whiz-bang stuff goes up in comms, scientific and recon sats. Someone mentioned the space industry hasn't dominated the electronics business for 40 years. That's true, but there are still niches that are absolutely dominated by space. For example, there are some incredibly high-performance millimeter-wave circuits, amazingly sensitive photodetectors and bolometers, and extremely fast Indium-Phosphide digital circuits (not full-on processors) going up in missions every year. Modern CMOS technology (deep submicron) is inherently radiation-tolerant, so rad hardening isn't as important commercially as it used to be, because there is an acceptable level of risk. Manned missions have a MUCH lower acceptable level of risk, so mission planners are loath to deploy anything new.

  • Since the older processors and RAM were built with bigger transistors, aren't they safer, i.e. less prone to errors due to cosmic radiation?

  • Do autopilot systems still use 3-4 386s?

  • If It ain't broke (Score:2, Interesting)

    by slashhax0r ( 579213 )

    Don't fix it. Really, except for the aging of some discrete components, why should this even be a concern? So the tech is old? It has been well engineered and proven time and time again.

  • Laptops (Score:3, Insightful)

    by drumcat ( 1659893 ) on Monday September 27, 2010 @11:11AM (#33711554)
    Man, the article makes it sound like NASA is allergic to tech. There's no reason not to bring up kick-ass laptops and other non-essential tech that runs hella fast. But don't fuck with what works. It's kept a lot of NASA problems from becoming NASA disasters. Hyperbole will get you nowhere fast.
  • And I thought it was centrifugal force.
  • Hell (Score:4, Interesting)

    by ledow ( 319597 ) on Monday September 27, 2010 @11:46AM (#33712194) Homepage

    Damn right - I'd rather be using a chip that has a 20-year errata sheet and a proven silicon revision than ANYTHING produced in the last five years. Every single processor ever made has errata, and when you're talking about the sole life support for the astronauts, damn right it should be from the "old, tried, tested, we know all its quirks" bin rather than the local Intel shop.

    People never understand this, and I can't understand why. If you tell me that my car's airbags run on a dual-core processor, I will be extremely worried for several reasons (an unnecessary amount of state-of-the-art technology, unnecessary complications with timing, an unnecessary amount of power to do a simple job, etc.), but tell me that it uses a Pentium with the FDIV bug, or even a Z80 with uncorrected "Z80A" original silicon, and I'll feel as safe as houses.

    Bugs take a while to find. Every extra transistor makes bugs more likely. Every day in ordinary production use makes bugs less likely (because you'll experience them and work around them). And if you NEED 2GHz of processor to do some of these tasks, the astronauts are stuffed if their machine ever breaks. If you keep things simple, so that you CAN go to human/paper backup like some of the moon missions did, then you have much less to worry about. Plus it's cheaper, of course.

    It worries me EVERY time I see some modern, state-of-the-art revamp of a critical system (air-traffic control, road traffic signalling, in-car braking systems, etc.)

  • How much processing power does a spacecraft need anyway? It's not like these CPUs are burdened with the overhead of running a desktop OS. These things are completely dedicated to number crunching on whatever task they've been assigned. And power draw and heat are probably minor issues compared to more current processors. Chances are anything more current will simply be overpowered for the job.

    And given how long it takes to design and build a spacecraft by the time that vehicle is actually being used, the

  • I've often said that we should just put a 5 year stop to all new language and framework development and just spend the time fixing what we have already. A new language is far less useful than a fix for an existing, widely-used language.

    When I hear about yet another stack doing something for which we've had solutions for years, I just get tired. Java is the poster child for this insanity. I sometimes think there are more Java frameworks than there are enterprise applications successfully delivered, but le
