Security

Is Analog the Fix For Cyber Terrorism?

chicksdaddy writes "The Security Ledger has picked up on an opinion piece by noted cyber terrorism and Stuxnet expert Ralph Langner (@langnergroup) who argues in a blog post that critical infrastructure owners should consider implementing what he calls 'analog hard stops' to cyber attacks. Langner cautions against the wholesale embrace of digital systems by stating the obvious: that 'every digital system has a vulnerability,' and that it's nearly impossible to rule out the possibility that potentially harmful vulnerabilities won't be discovered during the design and testing phase of a digital ICS product. ... For example, many nuclear power plants still rely on what is considered 'outdated' analog reactor protection systems. While that is a concern (maintaining those systems and finding engineers to operate them is increasingly difficult), the analog protection systems have one big advantage over their digital successors: they are immune against cyber attacks.

Rather than bowing to the inevitability of the digital revolution, Langner suggests, the U.S. Government (and others) could offer support for (or at least openness to) analog components as a backstop against advanced cyber attacks, creating a financial incentive for aging systems to be maintained and for the engineering talent to run them to be nurtured."
Or maybe you could isolate control systems from the Internet.
  • by mindpivot ( 3571411 ) on Tuesday March 18, 2014 @12:05AM (#46513383)
    the terrorists are like cylons and we need to disconnect all networked computers for humanity!!!
  • sure, no problem (Score:4, Informative)

    by davester666 ( 731373 ) on Tuesday March 18, 2014 @12:10AM (#46513399) Journal

    >Or maybe you could isolate control systems from the Internet

    said the person volunteering to get up at 3 am to go to the office to reset the a/c system.

    • Re: (Score:2, Funny)

      by Anonymous Coward

      Don't worry, Bill Gates says a robot will take that guy's job soon enough.

    • by TWX ( 665546 ) on Tuesday March 18, 2014 @12:16AM (#46513425)

      said the person volunteering to get up at 3 am to go to the office to reset the a/c system.

      Sounds to me like you need a better A/C system.

      Or you need to not consider an HVAC system to be so critical that it can't be on the network. Or, perhaps you need to design the HVAC system to take only the simplest of input from Internet-connected machines through interfaces like RS-422, and to otherwise use its not-connected, internal network for actual major connectivity. And design it to fail-safe, where it doesn't shut off and leave the data center roasting if there's an erroneous input.

      And anything that is monitored three shifts should not be Internet-connected if it's considered critical. After all, if it's monitored three shifts, then it shouldn't have to notify anyone offsite.
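
      A minimal sketch of that "simplest of input" idea (illustrative only; the command set, rate limit, and function names are assumptions, not anything from a real product):

      ```python
      # Internet-facing gateway: accept only fixed, whitelisted tokens and
      # rate-limit the one destructive command before anything crosses the
      # serial link to the isolated controller. All names here are made up.
      import time

      ALLOWED = {"STATUS", "RESET"}
      MIN_RESET_INTERVAL = 600.0   # seconds between accepted resets
      _last_reset = float("-inf")

      def handle(raw: bytes) -> str:
          global _last_reset
          cmd = raw.decode("ascii", errors="replace").strip().upper()
          if cmd not in ALLOWED:
              return "ERR unknown command"   # anything unexpected is dropped
          if cmd == "RESET":
              now = time.monotonic()
              if now - _last_reset < MIN_RESET_INTERVAL:
                  return "ERR rate limited"  # a stuck or hostile sender can't loop us
              _last_reset = now
          return "OK " + cmd

      print(handle(b"reset"))    # OK RESET
      print(handle(b"RESET"))    # ERR rate limited (too soon after the first)
      print(handle(b"rm -rf/"))  # ERR unknown command
      ```

      The rate limit matters: as a story further down the thread shows, even a bare reset command can burn out motors if an attacker can send it in a loop.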

      • Re:sure, no problem (Score:5, Informative)

        by CGordy ( 1472075 ) on Tuesday March 18, 2014 @08:05AM (#46514755)

        There are a lot of misconceptions on Slashdot about how these "critical infrastructure" plants actually run. I've spent a lot of time working in chemical plants, and these plants are heavily instrumented, with all parameters recorded. These are accessible in real time to the plant engineers, who typically don't sit in the control room, and often aren't even in the same state (there's a very limited pool of people available who are "experts" at some of these processes, and when a serious problem occurs companies want the best person looking at the data ASAP).

        The guys who sit in the control room are not engineers. They're plant operators, and their job is to keep the plant running as smoothly as possible and escalate the issue to an engineer if there's a non-standard problem. Most plants these days are so heavily automated that for normal, stable operation only two operators are required on site per, say, $100 million of plant (as a guesstimate; more during the day when scheduled maintenance is occurring).

        The engineers at these sites are actually classed as management. That's because they have ultimate responsibility for the plant when problems happen, although they don't control the day to day operation of the site. Most of an engineer's day on a chemical plant should be spent looking at whether the plant is configured optimally, and trying to troubleshoot longer term problems which require a more theoretical viewpoint. However, they do have to get out of bed at three in the morning if something's gone wrong. They also have to manage the operators, and have a promotion path to "real" management - refinery managers (for example) are usually engineers.

        However, what the article totally missed is that these sites already have two layers of control system - the Distributed Control System (DCS) and the Safety Instrumented System [wikipedia.org] (SIS). The Wikipedia article contains a lot more detail, but essentially these SISs are hard-wired systems that aren't programmable at all, so they are intrinsically resistant to an internet- or software-based attack. However, they're very expensive (every trip needs to be built as a dedicated circuit), so these systems are only used to ensure that the plant fails in a safe manner, not to ensure continued operation. Priority is given to the safety of people in the vicinity over the integrity of the plant equipment - these systems wouldn't typically be used to stop a pump or centrifuge (for example) from running too fast, unless that could cause some consequential (human) damage.

        Finally, an analog system would be a big step backwards from a safety viewpoint because it wouldn't allow the plants to shut down safely and automatically when a problem occurs. Plant shutdowns are typically a multiple-step process, and in a refinery (for example) large quantities of high-temperature, high-pressure flammable gases need to be disposed of, which would simply not be possible to safely "program" in an analog environment. Before digital systems came along, plant trips were "all hands on deck" incidents, with operators frantically adjusting setpoints on dials to bring the plants down. Of course, the risk of operator error was high, so automated shutdowns were a big step forward in plant safety.

        • by nukenerd ( 172703 ) on Tuesday March 18, 2014 @09:44AM (#46515303)
          I am a nuclear power station engineer, in fact I am in line of signing off everything that might affect plant safety. I recognise most of what you say, such as the plant not relying on any one safety system, but on two or even three (depending on potential severity) independent and differently designed control systems (not counting the human watchkeepers) - the jargon being "redundancy and diversity". An earlier poster implied that a digital system would save people being called out of bed at 3 am for a plant event, but on my nuclear plants this would happen anyway. The station manager would certainly be called up for a plant trip (at the very least because he would want to know about it), as would several other personnel, even though safe shut-down would not depend on their presence as it would be done automatically anyway.

          However, the plant operators are engineers (this is the UK) and the senior ones and fast-track juniors have degrees (though a degree does not mean so much these days), even though the Operating Department is separate from the Engineering Department. Personnel do move from one to the other, and it is expected that even senior management will have had at least a few months experience "on the desk" (ie in the Control room).

          There is no way whatsoever, no-how, any-which-way-but-loose (how else can I say it?) that these systems would have any connection to the outside world, or even, within the plant itself, to anything other than the essential control panels.

          There is however a problem with modern "smart" devices such as thermocouple local amplifiers/transmitters with microchips in them. This is that we don't always know how they are programmed. I am not talking about malware, but simply the programmer making errors (or well-meaning assumptions) such as buffer overflow after a certain future date. For this reason we prefer the old-fashioned analog versions of devices at this level.
      • Re:sure, no problem (Score:4, Interesting)

        by AmiMoJo ( 196126 ) * on Tuesday March 18, 2014 @08:19AM (#46514803) Homepage Journal

        Or, perhaps you need to design the HVAC system to take only the simplest of input from Internet-connected machines through interfaces like RS-422, and to otherwise use its not-connected, internal network for actual major connectivity.

        I used to do software for fire alarm systems and heard a story about this. A shopping centre wanted to have a remote monitoring and reset system. All it could do was read the indoor temperature or reset the system. RS-485 link to a dedicated PC, firewalled with just the remote management service exposed to the LAN. Access was by using a VPN connection to the LAN.

        One day they noticed that the system was stuck in some kind of reset loop. Seems someone found a way in and caused the machine it was connected to to keep sending reset commands. It must have happened some time in the night, and by the time they figured out what was going on the next day a couple of the motorized vents and one fan had failed due to the motors overheating. Every time the reset command was sent they did a self test where they exercised their motors.

        The suspicion was that this was a distraction to cover up whatever else they were doing inside the network. Not being close to it, I never found out the full story, but it just shows that even a simple reset command can cause significant damage if abused.

      • Just don't put your HVAC controls on the same network as your credit card payment devices...

    • Re:sure, no problem (Score:5, Interesting)

      by phantomfive ( 622387 ) on Tuesday March 18, 2014 @12:22AM (#46513463) Journal

      said the person volunteering to get up at 3 am to go to the office to reset the a/c system.

      I can't speak for everyone, but I would rather pay extra for someone to be willing to do that (or do it myself, it shouldn't be a common situation) before I connect important systems to the internet.

      Having an air gap isn't a perfect solution, but it makes things a lot harder for attackers.

      • Re:sure, no problem (Score:5, Interesting)

        by mlts ( 1038732 ) on Tuesday March 18, 2014 @12:30AM (#46513489)

        As a compromise, one can always do something similar to this:

        1: Get two machines with a RS232 port. One will be the source, one the destination.

        2: Cut the wire on the serial port cable so the destination machine has no ability to communicate with the source.

        3: Have the source machine push data through the port, destination machine constantly monitor it and log it to a file.

        4: Have a program on the destination machine parse the log and do the paging, etc. if a parameter goes out of bounds.

        This won't work for high data rates, but it will sufficiently isolate the inner subsystem from the Internet while providing a way for data to get out in real time. It's definitely not immune to physical attack, but it will go a long way toward stopping remote attacks, since no connections can be made into the source machine's subnet.
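
        As a rough illustration, the destination side of that one-way link could look something like the following sketch (assumes the pyserial package; the port name, record format, threshold, and alert hook are all placeholders):

        ```python
        # Destination machine: receive-only monitor on a one-way serial link.
        # The transmit wire back to the source has been physically cut, so
        # this host can log and alert but can never talk back into the plant.
        import serial  # pyserial

        HIGH_LIMIT = 90.0  # hypothetical out-of-bounds threshold

        def alert(value):
            print(f"ALERT: parameter out of bounds: {value}")  # hook paging here

        with serial.Serial("/dev/ttyS0", 9600, timeout=5) as port, \
                open("telemetry.log", "a") as log:
            while True:
                line = port.readline().decode("ascii", errors="replace").strip()
                if not line:
                    continue  # read timed out; keep listening
                log.write(line + "\n")
                log.flush()
                try:
                    value = float(line.rsplit(",", 1)[-1])  # assumes "name,value" records
                except ValueError:
                    continue  # malformed record; logged, but don't page anyone
                if value > HIGH_LIMIT:
                    alert(value)
        ```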

        • by phantomfive ( 622387 ) on Tuesday March 18, 2014 @12:37AM (#46513523) Journal
          The main use case that causes problems with air gaps (AFAIK) is transferring files to the computer that's hooked up to the heavy machinery. People get tired of copying updates over USB, for example, and hook it up. Or they want to be able to reboot their air conditioner remotely.

          And that is the use case that caused problems for Iran with Stuxnet. They had an air gap, but the attackers infected other computers in the area, got their payload onto a USB key, and when someone transferred files to the main target, it got infected. That is my understanding of how that situation went down. But once you start thinking along those lines, you start thinking of other attacks that might work.
          • by khasim ( 1285 )

            That's part of a larger issue. People will ALWAYS get sloppy and lazy.

            Part of the security system has to include a way to check the systems and to check the people.

            Security is not an item in itself. It is only a reference point. You can have "better security" than you had at time X or you can have "worse security" than you had at time X (or the same).

            Improving security is about reducing the number of people who can EFFECTIVELY attack you.

            Once you've gotten that down to the minimum number then you increase t

            • Iran needs to learn about superglue on USB ports.

              How do you suggest they copy files to the computers then? Type them in by hand?

          • Re:sure, no problem (Score:5, Interesting)

            by thegarbz ( 1787294 ) on Tuesday March 18, 2014 @05:40AM (#46514403)

            This is why security should be a system and not just an air gap. Taking a computer off the internet, patting yourself on the back for the idea, and calling it a job well done is almost becoming a Slashdot meme.

            Never underestimate what bored shift workers do during night shift. We had one group of people figure out how to watch a divx movie on the screen of an ABB Gas Chromatograph.

            The problem is more social than technological.

        • by richlv ( 778496 )

          i remember watching 'nikita' episode where they hacked a computer through its power connection and going "um, that's a bit stretching it..."

          then, several years later, some proof of concept attack vector like that was demonstrated. assuming that experts in the field can do much more than public knows, it might have been not that much of a stretch after all.

          i would also imagine that attacks on analog systems have been polished quite a lot, given that they have been around longer. not that they could not be mor

          • Re:sure, no problem (Score:4, Informative)

            by mlts ( 1038732 ) on Tuesday March 18, 2014 @10:46AM (#46515681)

            When a local startup went out of business, one of the things the failed startup had at their bankruptcy auction was an electric motor that would spin a crankshaft/flywheel... only for a generator head on the other end to turn the motion back into electricity. I wondered why they had something that inefficient until I found that it was a "power firewall"... i.e. to mitigate attacks via the mains power.

        • by Jeremi ( 14640 )

          That's known as a data diode, and it's a great idea (and can be done at higher speeds than RS232, if necessary; e.g. you can do something similar with an Ethernet cable).

          It does have one big limitation, though -- it won't let you control the system from off-site. If that's okay, then great, but often off-site control is something that you want to have, not just off-site monitoring.

      • by LoRdTAW ( 99712 )

        Stuxnet proved that air gapping isn't enough.

        Air gapping is not a 100% fix. It's part wishful thinking and part buzz phrase that gets thrown around carelessly. If someone guarantees nothing will go wrong because of an air gap or a one-way serial connection, then they are full of shit.

        Think about it, how many computers have you ever come across that could function on a 100% "air gap"? What about updates or software fixes? You could write a control program and debug the hell out of it to ensure nothing will go

    • by darkain ( 749283 )

      >Or maybe you could isolate control systems from the Internet

      Oh, you mean like all those systems Stuxnet infected?

      http://en.wikipedia.org/wiki/S... [wikipedia.org]

    • Re:sure, no problem (Score:5, Informative)

      by Technician ( 215283 ) on Tuesday March 18, 2014 @05:02AM (#46514297)

      A more common control with these kinds of critical limits is an elevator. The digital controls call the cars to the floors, open the doors, etc. Between the digital world and the electrical/mechanical world are control relays. Limit switches come in pairs. One you are used to: the elevator arrives at a floor and there is a pause while the fine alignment is completed to level with the current floor. The hard limit, on the other hand, such as exceeding the safe space below the bottom floor or past the top floor, interrupts power to the control for the power relays. One drops power to the motor and the other drops power to the brake pick solenoid. Brakes fail safe in an elevator: you need power to release the brakes.

      Yeah, it is a pain to reset the elevator at 3 am with someone stuck inside, but that is better than a runaway elevator. And no, there is no software defeat for the hardware limit switches.

    • by LoRdTAW ( 99712 )

      "said the person volunteering to get up at 3 am to go to the office to reset the a/c system."

      That is not a realistic scenario. I know what you are saying but an a/c system isn't turned on at 3AM to begin with (unless you like wasting electricity). Most likely you will have these systems in a plant that runs 24/7 with 3 shifts and someone will know how to handle minor breakdowns and press a reset button if need be. A major breakdown can be solved in one of two ways: remotely or someone has to come on site. G

    • by splutty ( 43475 )

      No. Said the person who should have known that the Stuxnet attack had an attack vector that didn't have anything to do with the internet.

      The machines it was aimed at actually weren't connected to the internet at all.

      So the comment is just dumb.

  • by Anonymous Coward

    ever been compromised :) Physical, human-operated kill switches are not simply analog (one might argue they are digital at the switch level). Analog might be the wrong word, since analog systems have been repeatedly compromised (from Macrovision to phreaking boxes, etc.). Keep it off a communications network, even off local networks if it's uber-critical.

  • Stuxnet (Score:4, Informative)

    by scorp1us ( 235526 ) on Tuesday March 18, 2014 @12:14AM (#46513417) Journal

    "Or maybe you could isolate control systems from the Internet."
    Wasn't Stuxnet partially a sneakernet operation? I can't imagine Iran being so stupid as to connect secret centrifuges to the internet.

    The only way to win is not to play.

    • Re:Stuxnet (Score:4, Informative)

      by NixieBunny ( 859050 ) on Tuesday March 18, 2014 @12:23AM (#46513473) Homepage
      Yes, it was a USB flash drive with a firmware update.

      I work on a telescope whose Siemens PLC is so old that it has a PROM in a 40 pin DIP package for firmware updates. Not that we've touched the firmware in 20 years. After all, it works. And it ought to work for another 20 years, as long as we replace the dried-out aluminum electrolytic capacitors regularly.
  • Digital, analog, trinary, HIKE! You won't save them without MIKE!

    In other words, children, it's all the humans who're messing up your security chain.

    You need better, faster, stronger, smarter people who have a driving need to make your security better from the floor sweep to the ablative meat.

    Without it you're just asking for an ass raping.

  • by gweihir ( 88907 ) on Tuesday March 18, 2014 @12:20AM (#46513453)

    It is called self-secure systems. They have limiters, designed-in limitations and regulators that do not permit the systems to blow themselves up, and there is no bypass for them (except going there in person and starting to get physical). This paradigm is centuries old and taught in every halfway reasonable engineering curriculum. That this even needs to be brought up shows that IT and CS do not qualify as engineering disciplines at this time. My guess would be that people have been exceedingly stupid, e.g. by putting the limiters in software in SCADA systems. When I asked my EE student class (bachelor level) what they thought about that, their immediate response was that this is stupid. Apparently CS types are still ignoring well-established knowledge.

    • by DMUTPeregrine ( 612791 ) on Tuesday March 18, 2014 @12:39AM (#46513531) Journal
      That's because CS is math, not engineering. Computer Engineering is engineering, Computer Science is the study of the mathematics of computer systems. CE is a lot rarer than CS though, so a lot of people with CS degrees try to be engineers, but aren't trained for it.
      • by AK Marc ( 707885 )
        At Texas A&M, the choice was Computer Engineering, or Computer Science Engineering. Both were under the Electrical Engineering department. There was no computer science that wasn't managed by the EE department, and all were proper engineering courses.
      • by dkf ( 304284 )

        That's because CS is math, not engineering.

        There are rather more disciplines than that. Theoretical CS is definitely towards the math side of things, but that's really at one end of the spectrum. The study of how people and computers interact is definitely part of CS, but isn't either engineering or math; it's closer to psychology. On the other hand, Computer Engineering is definitely an engineering discipline (as you'd expect with anything doing construction of physical objects on a mass scale).

        Software Engineering is unusual though, as the costs o

    • Heh, nice try, but you can't blame the programmers for this one. The only thing programmers can do is write software for the device once the engineers have built it. If the engineers build a system that is not self-secure, what do you expect the software guys to do? Pull out the soldering iron?

      All blame is on the engineers if they don't build a self-secure system (or management if it's their fault).
    • by vux984 ( 928602 ) on Tuesday March 18, 2014 @12:47AM (#46513581)

      My guess would be that people have been exceedingly stupid, e.g. by putting the limiters in software in SCADA systems.

      Or they just did what they were told by management. After all, software solutions to problems tend to be a fraction of the price of dedicated hardware solutions, and can be updated and modified later.

      Apparently CS types are still ignoring well-established knowledge.

      You can't build a SCADA system with *just* CS types; so apparently all your 'true engineers' were asleep at the wheel too. What was their excuse?

      Seriously, get over yourself. The CS types can and should put limiters and monitors and regulators in the software; there's no good reason for them not to ALSO be in there, so that when you run up against them there can be friendly error messages, logs, etc. Problems are caught quicker, and solved more easily, when things are generally still working. This is a good thing. Surely you and your EE class can see that.

      Of course, there should ALSO be fail safes in hardware too for when the software fails, but that's not the programmers job, now is it? Who was responsible for the hardware? What were they doing? Why aren't those failsafes in place? You can't possibly put that at the feet of "CS types". That was never their job.

      • Hardware fail-safes protect against so-called "never events". They are an added layer of protection beyond the software level, and should never be depended upon by the SCADA system.
    • Way to shunt blame!

      I design code; your "EEs" design electrical hardware. I have been delivered hardware without such safeties. I could refuse to deliver code for the platform -- the work would simply be offshored.

      Just costs me work.

    • by hjf ( 703092 )

      Your "EEs" actually "code" too, but in disguise. PLCs are programmed, just (usually) not in written code, but rather, in Ladder Diagram or Function Blocks. But you know that, right?

      I'm a programmer, but also a hobby electronics guy. And I've worked with PLCs. And I know for sure that "CS" types are never involved in these projects. The programming required is minimal (as usual with "elegant" engineering solutions), so a CS degree isn't required. It's much more about the hardware than software.

      A CS guy usual

      • Yep, I'll be the first to call you old fashioned. Just like I would also call the article ridiculous. Digital positioners, along with advanced digital electronics in field instrumentation, have been one of the best things to come to the process industry. Your old analogue valve may be unhackable, but it will also be unable to report advanced diagnostic data such as torque and stiction, report stroke test results, or alarm on deviations from normal performance parameters.

        So pat yourself on the back

    • It is amazing how fast we have forgotten the Therac-25....
    • This paradigm is centuries old and taught in every halfway reasonable engineering curriculum. That this even needs to be brought up shows that IT and CS do not qualify as engineering disciplines at this time

      Any halfway reasonable engineering curriculum also teaches that engineering is all about tradeoffs, and that safety and security are variables like any other. Hardware-based safety and security features are expensive, costs that aren't made up for by reductions in risk in many applications.

      Furthermore, s

  • by Osgeld ( 1900440 ) on Tuesday March 18, 2014 @12:22AM (#46513465)

    analog is actually more susceptible to interference generated by rather simple devices, as there is no error checking on what's being fed to the system

    the problem is your reactor is for some fucking reason hooked to the same network as facebook and twitter

    • by Tablizer ( 95088 )

      the problem is your reactor is for some fucking reason hooked to the same network as facebook and twitter

      Rats, I knew I shouldn't have "liked" nuclear meltdown.

  • Good idea (Score:5, Insightful)

    by Animats ( 122034 ) on Tuesday March 18, 2014 @12:32AM (#46513495) Homepage

    There's a lot to be said for this. Formal analysis of analog systems is possible. The F-16 flight control system is an elegant analog system.

    Full authority digital flight control systems made a lot of people nervous. The Airbus has them, and not only do they have redundant computers, they have a second system cross-checking them which is running on a different kind of CPU, with code written in a different language, written by different people working at a different location. You need that kind of paranoia in life-critical systems.

    We're now seeing web-grade programmers writing hardware control systems. That's not good. Hacks have been demonstrated where car "infotainment" systems have been penetrated and used to take over the ABS braking system. Read the papers from the latest Defcon.

    If you have to do this stuff, learn how it's done for avionics, railroad signalling, and traffic lights. In good systems, there are special purpose devices checking what the general purpose ones are doing. For example, most traffic light controllers have a hard-wired hardware conflict checker. [pdhsite.com] If it detects two green signals enabled on conflicting routes, the whole controller is forcibly shut down and a dumb "blinking red" device takes over. The conflict checker is programmed by putting jumpers onto a removable PC board. (See p. 14 of that document.) It cannot be altered remotely.

    That's the kind of logic needed in life-critical systems.
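
    The conflict-check rule itself is simple enough to state in a few lines. Here is a toy software rendering of that logic (purely illustrative; a real conflict monitor is dedicated hardware programmed with jumpers precisely so that no reprogrammable code like this sits in the loop):

    ```python
    # Toy traffic-light conflict check: if two conflicting approaches ever
    # show green together, drop the whole intersection to the dumb
    # flashing-red fallback. The phase names are invented.
    CONFLICTS = [("north_south", "east_west")]  # the "jumper table"

    def check(greens):
        """greens: set of approaches currently commanded green."""
        for a, b in CONFLICTS:
            if a in greens and b in greens:
                return "FLASH_RED"  # hand control to the hard-wired fallback
        return "OK"

    assert check({"north_south"}) == "OK"
    assert check({"north_south", "east_west"}) == "FLASH_RED"
    ```

    The point of the hardware version is that the check sits outside the controller it supervises, so no remote compromise of the controller can alter it.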

    • It's interesting that they have a different system cross-checking. But what happens when they disagree? Who wins? There might not be time for the pilots to figure it out.

      • It's not that the secondary system is 'cross checking' or comparing results. They are really just monitoring circuits with a particular set of rules embedded in separate circuitry that just makes sure the primary system never breaks those rules. It is effectively the master control and will always 'win' if there is a problem. They are designed to be simple, robust and if possible, completely hardware based.

        Some other examples are 'jabber' control hardware lockouts to stop a radio transmitter from crashing a

      • It's interesting that they have a different system cross-checking. But what happens when they disagree? Who wins? There might not be time for the pilots to figure it out.

        Then the minority report is filed in the brain of the female, who is obviously the smarter one. Duh. Didn't you see the movie?

      • by GuB-42 ( 2483988 )

        - There may be a third, possibly simplified, system to make it a 2-vs-1 situation (a toy voter for that case is sketched below).
        - Ridiculous values (out of bounds, ...) can be checked and the faulty system disabled.
        - When it is not clear who the winner is, the pilot is shown an alert and can manually select the correct system. If you look closely in a cockpit, you'll probably find several "1-N-2" switches for this.
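
        A minimal 2-out-of-3 voter for that first case, purely as illustration (real avionics voting also handles timing, staleness, and channel health, none of which is shown; the tolerance is invented):

        ```python
        # Three redundant channels report a value; any two that agree within
        # tolerance outvote the third. No majority means alert the pilot.
        def vote(a, b, c, tolerance=0.5):
            for x, y in ((a, b), (a, c), (b, c)):
                if abs(x - y) <= tolerance:
                    return (x + y) / 2  # two channels agree; the outlier is outvoted
            return None  # total disagreement: raise the alert, let the pilot choose

        print(vote(10.0, 10.2, 55.0))  # 10.1 -> the faulty third channel is ignored
        print(vote(1.0, 30.0, 60.0))   # None -> no two channels agree
        ```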

    • For example, most traffic light controllers have a hard-wired hardware conflict checker. [pdhsite.com] If it detects two green signals enabled on conflicting routes, the whole controller is forcibly shut down and a dumb "blinking red" device takes over.

      That's really cool

    • ...Full authority digital flight control systems made a lot of people nervous. The Airbus has them, and not only do they have redundant computers, they have a second system cross-checking them which is running on a different kind of CPU, with code written in a different language, written by different people working at a different location. You need that kind of paranoia in life-critical systems.

      Code written in a different language is no help here. Unless you believe the avionics is running on an interpreter instead of compiled code. Once compiled, the code is dialect-free. And even if it is not my field, I doubt any sane designer would design avionics to run on an interpreter; we are talking about realtime systems here. A different kind of CPU makes sense if you want to isolate the system from bugs in hardware that may be specific to one kind of CPU.

      • Re:Good idea (Score:4, Insightful)

        by Viol8 ( 599362 ) on Tuesday March 18, 2014 @06:05AM (#46514471) Homepage

        "Code written in a different language is totally helpless here"

        No it isn't. Different languages have different pitfalls; e.g., C code often has hidden out-of-bounds memory access issues, while Ada doesn't, because checking for these is built into the runtime. Also, different languages make people think in slightly different ways about solving a problem, which means the chances of them coming up with exactly the same algorithm - and hence possibly exactly the same error - are somewhat lower.

      • by Alioth ( 221270 )

        Compiled code that's functionally identical will differ depending on the language, though. It'll even differ when the same language is used but you merely change the compiler (or even merely change some options to the same compiler - for example, a latent bug may manifest itself after merely changing the compiler's optimization setting). To see this happen, just compile to assembler a simple "Hello world" program using GCC, then do the same with the LLVM compiler. The outputs will look different even though the

    • There's a lot to be said for this.

      There's a lot to be said against this as well. Digital process control has opened up a whole world of advanced diagnostics which are used to protect against critical process excursions. Most industrial accidents had failed instrumentation as a contributing factor. Most instrumentation these days has so much internal redundancy and checking that you're missing out on a whole world of information in the analogue realm. So you've got a pressure reading on the screen: is that number the actual pressure or is t

  • by globaljustin ( 574257 ) on Tuesday March 18, 2014 @12:33AM (#46513499) Journal

    Or maybe you could isolate control systems from the Internet.

    Unknown Lamer has it.

    tl;dr - using analog in security situations would be obvious if "computer security" wasn't so tangled in abstractions

    Sure someone may point out that the "air gap" was overcome by BadBios http://it.slashdot.org/story/1... [slashdot.org] but that requires multiple computers with speakers and microphones connected to an infected system

    IMHO computer security (and law enforcement/corrections) has been reduced to hitting a "risk assessment" number, which has given us both a false sense of security & a misperception of how our data is vulnerable to attack

    100% of computers connected to the internet are vulnerable...just like 100% of lost laptops with credit card data are vulnerable

    Any system can have a "vulnerability map" illustrating nodes in the system & how they can be compromised. I imagine it like a Physical Network Topology [wikipedia.org] map for IT networking, only with more types of nodes.

    This is where the "risk assessment" model becomes reductive...they use statistics & infer causality...the statistics they use are historical data & they use voodoo data analysis to find **correlations** then produce a "risk assessment" number from any number of variables.

    If I'm right, we can map every possible security incursion in a tree/network topology. For each node of possible incursion, we can identify every possible vulnerability. If we can do this, we can have a lot more certainty than an abstract "risk assessment" value.

    Analog comes into play thusly: if you use my theory, using **analog electronics** jumps out as a very secure option against "cyber" intrusions. Should be obvious!

    "computer security"....

  • by raymorris ( 2726007 ) on Tuesday March 18, 2014 @12:36AM (#46513515) Journal

    Analog vs. digital, fully connected vs less connected - all can fail in similar ways. If it's really critical, like nuclear power plant critical, use simple, basic physics. The simpler the better.

    You need to protect against excessive pressure rupturing a tank. Do you use a digital pressure sensor or an analog one? Use either, but also add a blowout disc made of metal a quarter as thick as the rest of the tank. An analog sensor may fail. A digital sensor may fail. A piece of thin, weak material is guaranteed to rupture when the pressure gets too high.

    Monitoring temperature in a life-safety application? Pick analog or digital sensors, either one, but you'd better have something simple like the vials used in fire sprinklers, or a wax piece that melts - something simple as hell, based on physics. Ethanol WILL boil and wax WILL melt before it gets to 300 F. That's guaranteed, every time.

    New nuclear reactor designs do that. If the core gets too hot, something melts and it falls into a big pool of water. Gravity is going to keep working when all the sophisticated electronics doesn't, because "you're not holding it right".

    • by jrumney ( 197329 )
      In other words, it is nothing to do with analog vs digital, but about having failsafe mechanisms that contain the damage when all your control systems go wrong. Failsafe mechanisms tend to be "analog", as they need to be effective even when the electricity and anything else that can fail has failed.
    • Inherently safe design and mechanical safety systems are the final word; you are absolutely correct. However, in the digital vs analogue debate I would not be so quick to say use either. Digital systems have allowed a world of advanced diagnostics to be reported. Your pressure transmitter can now not only tell you what it thinks the pressure is, but also whether the tapping / impulse line is plugged. Your valve can report when it's near failure or if torque requirements are increasing, or stiction

  • No, it's education (Score:5, Insightful)

    by Casandro ( 751346 ) on Tuesday March 18, 2014 @12:39AM (#46513533)

    Such systems are not insecure because they are digital or involve computers or anything like that. (Seriously, I doubt the guy even understands what digital and analog mean.) Such systems are insecure because they are unnecessarily complex.

    Let's take the Stuxnet example. That system was designed to control and monitor the speed at which centrifuges spin. That's not really a complex task; it's something you should be able to solve in much less than a thousand lines of code. However, the system they built had a lot of unnecessary features. For example, if you inserted a USB stick (why did it even have USB support?) it displayed icons for some of the files. And those icons can be in DLLs, where the stub code gets executed when you load them. So you insert a USB stick and the system will execute code from it... just like it's advertised in the manual. Other features include remote printing to file, so you can print to a file on a remote computer, and storing configuration files in an SQL database, obviously with a hard-coded password.

    Those systems are unfortunately done by people who don't understand what they are doing. They use complex systems, but have no idea how they work. And instead of making their systems simpler, they actually make them more and more complex. Just google for "SCADA in the Cloud" and read all the justifications for it.

  • by sg_oneill ( 159032 ) on Tuesday March 18, 2014 @12:47AM (#46513577)

    Reminds me a bit of one of the tropes from Battlestar Galactica. Adama knew from the previous war that the Cylons were master hackers who could disable battlestars by breaking into networks via wireless and then using them to disable the whole ship, leaving it effectively dead in the water, so he simply ordered that none of his ship's systems ever be networked and that the ship be driven using manual control. Later on they meet the other surviving battlestar, the Pegasus, and it turns out it only survived because its network was offline due to maintenance. It's not actually a novel idea in militaries. I remember in the 90s doing a small contract for a special forces group I can't name, and I asked them about their computer network. He said they used "sneaker-net": any info that needed transferring was put on a floppy and walked to its destination, thus creating an air gap between battlefield systems.

    I guess this isn't quite that, but it certainly seems to be a sort of variant of it.

  • Editor or submitter said

    isolate control systems from the Internet.

    Stuxnet has shown that it is not enough. You can still be infected by a USB key.

  • Analog vs digital has nothing to do with "cyberterrorism". Analog refers to systems with an infinite number of states, digital refers to systems with a finite number of states. If properly designed, both are perfectly safe.

    Cyber security has nothing to do with digital or analog, and everything to do with software and networking. Which have nothing whatsoever to do with the analog vs digital design choices.

    TFA reads like a science essay from a 3rd grader who writes with technical words to look smart, but does

    • The problem is that modern digital systems have too many possibilities. You cannot be certain that a security system with in-field reprogramming abilities is safe.
      It may be expensive (in both space and dollars), but critical systems should have safe limits embedded in the hardware. A power plant should not be able to increase its output voltage without hardware modifications. A nuclear plant must fail safe, even if the software is hacked.

      In essence you are right: It doesn't matter if those securities are in d

  • >Or maybe you could isolate control systems from the Internet.

    Yes, maybe is the keyword there. Set up everything to be nice and air-gapped, and maybe some joker won't bring in his malware-infected laptop the next day and temporarily hook it up to your "secure network" in order to transfer a file over.

    Or then again, maybe he will. Who knows?

  • by gman003 ( 1693318 ) on Tuesday March 18, 2014 @01:30AM (#46513715)

    The core problem is that "data" and "code" are being sent over the same path - the reporting data is being sent out, and the control "data" is being sent in, but it's over a two-way Internet connection. If you had an analog control system that was openly accessible in some way, you'd have the exact same problems. Or you could have a completely separate, non-public digital control connection that would be secure. But nobody wants to lay two sets of cable to one device, and there's a convenience factor in remote control. So since security doesn't sell products*, but low price and convenience features do, we got into our current situation. It's not "digital"'s fault. It's not "analog"'s fault. It probably would have happened even if all our long-range communication networks were built of hydraulics and springs.

    * For those who are about to point out how much money antivirus software makes, that's fear selling, not security. Fear moves product *very* well.

  • by roca ( 43122 ) on Tuesday March 18, 2014 @01:33AM (#46513725) Homepage

    Air-gap alone is not enough. Stuxnet travelled via USB sticks. And if your hardware (or anything connected to it) has a wireless interface on it (Bluetooth, Wifi, etc), you have a problem ... an operator might bring a hacked phone within range, for example.

    Simplifying the hardware down to fixed-function IC or analog reduces the attack surface much more than attempts to isolate the hardware from the Internet.

    • Air-gap alone is not enough. Stuxnet travelled via USB sticks.

      The Stuxnet attack was (for the Iranians) a failure of operational security.
      The attackers knew exactly what hardware/software was being used and how it was set up.
      If the Iranians had one less centrifuge hooked up, or a different SCADA firmware version, the worm would have never triggered.

      There is such a thing as security through obscurity.
      It's never a complete solution, but it should always be your first line of defense.

    • by thegarbz ( 1787294 ) on Tuesday March 18, 2014 @06:24AM (#46514515)

      Simplifying the hardware down to fixed-function IC or analog reduces the attack surface much more than attempts to isolate the hardware from the Internet.

      It also dramatically reduces the functionality. You've saved yourself from hackers only to get undone by a dangerous, undetected failure of instrumentation. Anyone who boils a security argument down to dumbing everything down has missed a world of advancements which have come from the digital world. Thanks but no thanks. I'm much more likely to blow up my plant due to failed equipment than due to some hacker playing around.

  • The key is hard stop rather than analog. For a simple example, imagine 3 machines that draw a great deal of inrush current using typical start/stop controls. Since we're in the digital age, we put them under computer control. The controller can strobe the start or stop lines for the 3 machines.

    Now, they must not all be started at once or they'll blow out everything back to the substation. We know they must be started 10 seconds apart at least. Doing it the "digital way" we program the delay into the control
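
    A sketch of that "digital way" (hypothetical names; the 10-second figure is from the example above):

    ```python
    # Software staggering of start commands, 10 s apart. The point above is
    # that the real guarantee should NOT live here: a timer relay in each
    # start circuit should physically refuse starts closer than 10 s, so a
    # software bug (e.g. deleting the sleep) can't blow out the substation.
    import time

    MIN_SPACING = 10  # seconds between inrush events

    def start_all(machines, strobe_start):
        # strobe_start: hypothetical callback that pulses one start line
        for m in machines:
            strobe_start(m)
            time.sleep(MIN_SPACING)

    start_all(["machine_1", "machine_2", "machine_3"],
              lambda m: print(f"start {m}"))
    ```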

  • Whether it is a series of mechanical cogs or a digital controller, the problem in the abstract seems not so much the selection of technology as the proliferation of "nice to have" yet possibly unnecessary capabilities... widgets which may not offer significant value after closer inspection of all the risks. Is remote management really a must-have, or can you live without it? Perhaps read-only monitoring (cutting rx lines) is a good enough compromise... perhaps not all systems need network connections, active USB ports... etc.

    Th

  • by Karmashock ( 2415832 ) on Tuesday March 18, 2014 @02:01AM (#46513803)

    The hubris of some, thinking that everything can be linked to the internet while maintaining acceptable security, is astounding.

    Some systems need to be air gapped. And some core systems just need to be too simple to hack. I'm not saying analog. Merely so simple that we can actually say with certainty that there is no coding exploit. That means programs short enough that the code can be completely audited and made unhackable.

    Between airgapping and keeping core systems too simple to hack... we'll be safe from complete infiltration.

      The hubris of some, thinking that everything can be linked to the internet while maintaining acceptable security, is astounding.

      Actually I find one side of the debate, thinking everything should be completely open and directly connected to the internet, almost as laughable as the other side thinking air gapping is the answer.

      I'll meet you in the middle. Air-gapping is not a solution in many cases. You simply can't run many modern plants without the ability to get live data out of the system and whisk it across the world. Does that mean your control system has an ADSL modem attached? Hell no. But there are many ways to network

      • As to modern plants requiring remote control, I would look at that very carefully and do my best to limit it.

        Most plants are manned 24/7. There's no reason those plants couldn't take directions from grid operators and manually throttle the plant up or down. Sure, the standby diesel plants might throttle up and down a lot but most of the large coal, hydro, etc plants tend to hold a given output.

        As to insiders hacking the system, there is no solution to that issue so that's a bullshit counter argument. An ins

  • by johnnys ( 592333 ) on Tuesday March 18, 2014 @02:14AM (#46513829)

    "obvious: that 'every digital system has a vulnerability,' "

    So far, this has been demonstrated (NOT proven) only in the current environment, where hardware and software architects, developers and businesses can get away from product liability requirements by crafting toxic EULAs that dump all the responsibility for their crappy designs and code on the end user. If the people who create our digital systems had to face liability as a consequence of their failure to design a secure system, we might find they get off their a**es and do the job properly. Where's Ralph Nader when you need him?

    And as the original poster noted, you CAN isolate the control systems from the Internet! Cut the wire and fire anyone who tries to fix it.

    "analog protection systems have one big advantage over their digital successors: they are immune"

    Nonsense! There were PLENTY of break-ins by thieves into banks, runaway trains, industrial accidents and sabotage BEFORE the digital age. There was no "golden age" of analog before digital: that's just bullsh*t.

  • It is not an analog or digital issue; it is a cost issue. To be secure from remote attack you have to be willing to pay to have a trusted (human) individual, with a sense of what is reasonable (with respect to the process), in the control loop. The problem is of course that trusted humans with a sense of reason are expensive.
    • It doesn't necessarily come down to humans (who can't necessarily save you if very fast responses are required or very subtle deviations need to be detected), though they can certainly help; but cost is much of the problem on the software side as well. More than a few important things run at the 'incompetent and overworked IT staff usually apply patches within a few months of release, assuming it isn't one of the systems that the vendor says you shouldn't touch' level and people are unwilling enough to shel
  • by volvox_voxel ( 2752469 ) on Tuesday March 18, 2014 @02:37AM (#46513895)

    There are billions of embedded systems out there, and most of them are not connected to the internet. I've designed embedded control systems for most of my career, and can attest to the many advantages a digital control system has over an analog one. Analog still has its place (op-amps are pretty fast & cheap), but it's often quite useful to have a computer do it. Most capacitors have a 20% tolerance or so, have a temperature dependence, and have values that drift. Your control system can drift over time, and may even become unstable due to the aging of the components in the compensator (e.g. PI, PID, lead/lag). A microcontroller also wins hands down when it comes to long time constants with any kind of precision (millihertz); it's hard to make very long RC time constants and trust those times. Microcontrollers/FPGAs are good for a wide range of control loops, including those that are very fast or very, very slow. Microcontrollers allow you to do things like adaptive control when your plant can vary over time, like maintaining a precise temperature and ramp time in a blast furnace whose internal volume can change wildly. They also allow you to easily handle things like transport/phase lags, plus a lot of corner conditions and system changes -- all without changing any hardware.

    I am happy to see the same trend with software-defined radio, where we try to digitize as much of the radio as possible, as close to the antenna as possible. Analog parts add noise, offsets, drift and cross-talk, and exhibit leakage, etc. Digitizing early lets us minimize the analog portion as much as possible.
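
    As a rough illustration of the digital-compensator point, here is a minimal discrete PID step (gains, timestep, and values are invented; a real compensator would add anti-windup, filtering, and saturation):

    ```python
    # Discrete PID step: unlike an RC network, these gains are exact numbers
    # that never drift with component aging or temperature.
    def make_pid(kp, ki, kd, dt):
        state = {"integral": 0.0, "prev_err": 0.0}
        def step(setpoint, measured):
            err = setpoint - measured
            state["integral"] += err * dt
            deriv = (err - state["prev_err"]) / dt
            state["prev_err"] = err
            return kp * err + ki * state["integral"] + kd * deriv
        return step

    pid = make_pid(kp=2.0, ki=0.1, kd=0.05, dt=0.01)
    print(pid(setpoint=100.0, measured=95.0))  # control effort for this sample
    ```

    Retuning becomes a constant change rather than a soldering job, which is also what makes adaptive control of a drifting plant practical.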

  • by Yoda222 ( 943886 ) on Tuesday March 18, 2014 @03:37AM (#46514051)
    A "cyber-attack" is a digital attack. So if your system is not digital, you can't be cyber-attacked. Great news.
  • I think I have a call from 1985 on line one, from some guy called 'Therac-25' who seems very excited about the importance of hardware safeguards and not trusting your software overmuch...
    • The Therac-25 problems could have been easily prevented with better software processes and practices; no hardware safeguards were/are needed. If the hardware had been developed like the software was, the hardware would likely have failed too.

  • My sister-in-law was excitedly showing off her new car to me, and I said that I didn't care for the idea of a remote-start function for cars. "But it's security coded," she said. My response was this:

    If a device can be controlled with an electronic signal, that means that the device can be controlled with an electronic signal.

    Sometimes that signal will come from where you want it to, but there can be no guarantee that it will not come from somewhere else.
  • "the analog protection systems have one big advantage over their digital successors: they are immune against cyber attacks."

    Unfortunately they are not immune to idiotic engineers, as we learned the hard way.

  • Slashdot needs an official galacticawasntnetworked tag.

"If it ain't broke, don't fix it." - Bert Lantz

Working...