
Should the FDA Assess Medical Device Defenses Against Hackers?

Soulskill posted about 2 years ago | from the or-maybe-somebody-who-knows-things-about-computers dept.

Government 138

gManZboy writes "The vulnerability of wireless medical devices to hacking has now attracted attention in Washington. Although there has not yet been a high-profile case of such an attack, a proposal has surfaced that the Food and Drug Administration or another federal agency assess the security of medical devices before they're sold. A Department of Veterans Affairs study showed that between January 2009 and spring 2011, there were 173 incidents of medical devices being infected with malware. The VA has taken the threat seriously enough to use virtual local area networks to isolate some 50,000 devices. Recently, researchers from Purdue and Princeton Universities announced that they had built a prototype firewall known as MedMon to protect wireless medical devices from outside interference."
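For readers curious what a MedMon-style monitor actually does, here is a deliberately minimal Python sketch. Everything in it is hypothetical (the opcode names, dose ranges, and field layout are invented, and the real MedMon prototype analyzes RF transmissions, which this glosses over): each incoming command is checked against a whitelist of expected operations and plausible parameter ranges, and anything else gets flagged.

```python
# Hypothetical sketch of an anomaly-flagging monitor for a wireless
# medical device, loosely inspired by the MedMon idea in the summary.
# All opcodes, field names, and numeric ranges are invented.

EXPECTED = {
    "read_telemetry": (0.0, 0.0),   # takes no meaningful dose argument
    "set_basal_rate": (0.0, 2.0),   # units/hour considered plausible
    "bolus":          (0.0, 5.0),   # single-dose ceiling
}

def is_anomalous(command: str, value: float) -> bool:
    """Return True if the command should be blocked or flagged."""
    if command not in EXPECTED:
        return True                  # unknown opcode: always flag
    lo, hi = EXPECTED[command]
    return not (lo <= value <= hi)   # out-of-range parameter: flag
```

A plausible bolus would pass, while a reservoir-dumping dose or an unknown opcode would be flagged; the hard part, unaddressed here, is building a whitelist tight enough to matter without blocking legitimate clinical use.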

138 comments

No (0, Troll)

Anonymous Coward | about 2 years ago | (#39786627)

More money down the shitter. I can't think of anything a hacker would gain from a medical device. What would be the point? Are hackers just evil and nefarious and out to hurt people in the hospital for the lulz? I doubt it.

Re:No (4, Informative)

WrongSizeGlass (838941) | about 2 years ago | (#39786679)

More money down the shitter. I can't think of anything a hacker would gain from a medical device. What would be the point? Are hackers just evil and nefarious and out to hurt people in the hospital for the lulz? I doubt it.

Some just do it to see if it can be done, some of them *are* out to extort money and will hurt people in the process.

Charged with murder. (0)

Anonymous Coward | about 2 years ago | (#39787075)

More money down the shitter. I can't think of anything a hacker would gain from a medical device. What would be the point? Are hackers just evil and nefarious and out to hurt people in the hospital for the lulz? I doubt it.

Some just do it to see if it can be done, some of them *are* out to extort money and will hurt people in the process.

And the hackers, if caught, should be charged with attempted murder. And if someone actually dies, then charged with murder.

Re:Charged with murder. (5, Insightful)

Anonymous Coward | about 2 years ago | (#39787391)

I would rather they try to patch the security holes *before* we start charging people with attempted murder and murder, personally.

Re:Charged with murder. (0)

Anonymous Coward | about 2 years ago | (#39788009)

Of course the government will program a back door into any of these "secure systems" so they will be able to kill whoever they want as soon as the slave ceases to do their bidding.

Re:Charged with murder. (0)

Anonymous Coward | about 2 years ago | (#39788297)

I can see this happening with mandatory medical devices under mandatory health care. When you don't pay your taxes, or pirate a movie or something, the secret code to crack the hidden cyanide capsule gets transmitted.

Re:Charged with murder. (1)

froggymana (1896008) | about 2 years ago | (#39788611)

I would rather they try to patch the security holes *before* we start charging people with attempted murder and murder, personally.

You can never really be certain that every security hole has been patched though, after all programming is the art of adding bugs to software.

Re:No (5, Insightful)

t4ng* (1092951) | about 2 years ago | (#39786851)

Really? How about a hacker selling malware to the highest bidder that could be used to assassinate someone with a medical implant, or while they are recovering in the hospital after surgery? That's just two I can think of off the top of my head, I'm sure there are more.

Re:No (2)

IcyHando'Death (239387) | about 2 years ago | (#39787981)

It's unlikely that a would-be assassin will learn the art of medical implant hacking in assassin school on the off chance that he'll one day have a target who just happens to have such an implant. As with today's black-hats, who focus on Windows over Linux (well, until the recent Mac headlines), their efforts will concentrate where they get the most leverage -- on cars. Even people who don't drive almost surely step into a car fairly regularly. The high-tech hacker-assassin may eschew the "old bomb under the chassis" bit, but why not a drive-by reprogramming of the ABS computer to disable the brakes when the car hits highway speed?

Great TED talk on this topic here [ted.com]

Re:No (1)

DeadCatX2 (950953) | about 2 years ago | (#39788403)

The would-be assassin doesn't learn how to hack medical implants. The assassin goes onto an underground forum and looks for vulns that match a specific target device that the assassin's mark is using.

Re:No (0)

Anonymous Coward | about 2 years ago | (#39788457)

Most electronic medical implants can be disabled, or caused to heat up and emit sparks, if enough EMF radiation is directed toward them.

Re:No (1)

DeadCatX2 (950953) | about 2 years ago | (#39788593)

Define "enough". Of course if you set off an EMP, most electronics will be fried. Is it practical to apply enough EMI to a device to cause a failure? Keep in mind that FDA and FCC tests are pretty stringent and there are a ton of certifications you need in order to sell an implant.

Re:No (1)

rst123 (2440064) | about 2 years ago | (#39788427)

It's unlikely that a would-be assassin will learn the art of medical implant hacking in assassin school on the off chance that he'll one day have a target who just happens to have such an implant.

Many implants are expensive, and I suspect there is a strong correlation, at least in some countries, between "has more money/power than average" and "more likely to have implants". Therefore, you are learning an attack against a group that self-selects to be a more tempting target, for either extortion or assassination.

Re:No (2)

lightknight (213164) | about 2 years ago | (#39788175)

There are much easier, and explainable, ways to kill someone. What assassin leaves a paper trail?

This whole thing stinks of a bunch of people selling a service no one needs. Symantec, McAfee, and friends used to make good money pushing out anti-virus software; then worms were the big problem, so they adapted; then malware was the new problem, so they adapted; MS got bitched at left and right about the security issues with their platform, then they released Microsoft Security Essentials; Windows XP is being phased out, Windows Vista is as well, and Windows 7 is slowly taking over, with many of the old exploits being patched. These companies, if they are going to survive, need a new schtick. Seeing the writing on the wall, they converted themselves to 'security consultants,' and began lobbying Congress for contracts to fight 'zee evil Hackers, unt!'

You've noticed the sudden influx of articles focused on finding some 31337 h@xors. They can't find any, but the money is too good to give up. Sooner or later, they're going to need to invent some, if they want to stay on that gravy train.

Re:No (0)

Anonymous Coward | about 2 years ago | (#39786953)

1. Extortion
2. Assassination
3. Lulz

Re:No (3, Insightful)

TheGreatOrangePeel (618581) | about 2 years ago | (#39786997)

More money down the shitter. I can't think of anything a hacker would gain from a medical device.

Things like record keeping blood bank software is regarded as a medical device by the FDA. Such software can contain sensitive information like you Social Security Number or drivers license number. In Sort, a hacker can gain plenty from breaking into a medical device.

Speaking as someone who has worked in the software side of the medical industry I just want to say that this is long overdue and the FDA has their work cut out for them. The systems I worked on are laughable in their "security" as they typically rely on how secure the local intranet is. Software vendors rarely put in any kind of serious authentication methods.

Re:No (0)

Anonymous Coward | about 2 years ago | (#39788805)

Things is regarded. information like you Number. In Sort.

I'm sorry, I just can't take you seriously.

Sincerely,
The Grammar Fascist

Re:No (4, Insightful)

fuzzyfuzzyfungus (1223518) | about 2 years ago | (#39787023)

I see two major areas of concern with, arguably, quite different requirements:

1. Implants/embedded systems with some measure of field-programmability: On the plus side, these are much more likely to be running something fairly esoteric, possibly not even an OS at all, possibly some RTOS or embedded OS. They are also likely (for the moment) to have only short-range connection capabilities, quite possibly over a somewhat obscure protocol. This makes them low-risk devices in terms of untargeted worm/phishing/etc. attacks, by virtue of limited connection and oddity of software. On the minus side, being directly connected to the patient, these offer a handy target for personally-directed sabotage, possibly from a surprising distance, depending on the whims of the RF gods (surely, the first person to reenact the classic 'sniper on the roof, suit with bodyguards crossing the parking lot toward the armored limo' scene, but with a rifle-stocked Yagi and lethal exploit code for the suit's pacemaker, will be awarded a signed copy of every cyberpunk book of note).

2. Systems that have much more in common with the PLCs and management console computer systems that we are always complaining about in factory scenarios. That box running WinNT SP2 connected to a monstrously expensive diagnostic science machine, etc. etc. These are much more prosaic, just badly patched and outdated WinSomething boxes that really ought to be air-gapped properly, which makes them much more likely to suffer lots, and lots, and lots of expensive downtime when they eventually cave to the demand for electronic transmission of radiology data to another hospital for a consult and hook the sucker to the internet....

'Type 1' stuff seems like it would be best off with a "When in doubt, don't" approach: don't interpret unsigned inputs, and use very short-range (inductive rather than RF, say) interfaces. It won't be perfect; but it'll at least confine the universe of potential hackers to people who could have just shived you anyway.

'Type 2' is where the mess really hits. Like industrial stuff, the economics of ripping out expensive capital investments are Deeply Unexciting; but persuading the vendor to deliver a service contract that doesn't read "Fuck you. Buy a Model N+1" is going to be a challenge. Also the (by no means necessarily false) promises of various 'telemedicine' applications are going to be constantly tugging at the people who run that stuff, urging them to connect it up. That isn't going to go well at all...
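The 'Type 1' advice above ("don't interpret unsigned inputs") can be made concrete. Here is a minimal sketch, assuming a symmetric key provisioned at implant time; key distribution is the genuinely hard part, which this sketch ignores, and the key, payload format, and function names are all invented:

```python
import hashlib
import hmac

SECRET_KEY = b"provisioned-at-implant-time"   # hypothetical pre-shared key

def sign_command(payload: bytes) -> bytes:
    """Programmer side: append an HMAC-SHA256 tag to the command."""
    return payload + hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()

def accept_command(message: bytes):
    """Implant side: return the payload only if the 32-byte tag verifies."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    # compare_digest avoids leaking the tag through timing differences
    return payload if hmac.compare_digest(tag, expected) else None
```

An attacker without the key can't forge a valid tag, though this sketch omits replay protection (a counter or nonce would be needed), and, as noted elsewhere in the thread, the battery cost of doing crypto on an implant is a real constraint.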

Re:No (0)

Anonymous Coward | about 2 years ago | (#39787857)

1. Implants/embedded systems with some measure of field-programmability

The guys with inflatable penis implants are going to be very nervous, very soon... UpDownUpDownUpDownUpDown

Re:No (1)

Caratted (806506) | about 2 years ago | (#39787935)

My area of concern revolves around the VA stating they have "isolated some 50k devices with vlans." This implies two things: 1) They're already networked such that they can be placed on their own vlan (or, at least the controllers, or whatever connects to that RF int) and 2) the VA is under the impression that a vlan is a legitimate security measure worth promoting. I do not want something controlling my insulin pump, which is capable of killing me, hooked up to the network. AT ALL.

"Sorry, your daughter died because our network had a brownout and the switches stopped switching, so it interpreted the input from your TV remote, which you pointed the wrong way, as 'PURGE INSULIN.'" Ugh.

Re:No (0)

Anonymous Coward | about 2 years ago | (#39787305)

"Send whatever you think is appropriate in bitcoins to this location on the Silk Road by next Friday. If I think it's too low, your grandmother's pacemaker will play taps on her heart."

So, yes, someone ought to be screening this stuff, and there ought to be serious liability laws. But before anyone starts thinking it's too easy, consider that these things do or will have wireless access for good medical reasons, and consider the effects of strong crypto on battery life. This is not an easy trade space to work in.

Re:No (1)

Arancaytar (966377) | about 2 years ago | (#39787307)

How likely do you rate it that a random malware author will put special safeguards into his spam botnet worm to ensure it does not interfere with the operation of a medical device should it happen to infect one? Right now, this cross-infection is unlikely due to incompatibility - in the future, the platform running on a specialized medical device could be susceptible to the same viruses as a desktop computer.

Re:No (2)

IorDMUX (870522) | about 2 years ago | (#39787473)

Are hackers just evil and nefarious and out to hurt people in the hospital for the lulz? I doubt it.

Well, two issues, here. First, you seem to be assuming "hacker" roughly equates to "guy who messes with computer-stuff for the heck of it". There most certainly are hackers/crackers (depending on your preferred use of the term) who harm people and systems, sometimes for money, sometimes for fame, sometimes for fun.

Aside from that, a hacked medical device makes for a really easy way to kill someone from a moderate distance and leave very little trace of whodunit. And I'm not even going to begin to consider all the reasons a person may have for wanting to kill, or even simply extort via credible death threats.

It's not limited to hospitals, either. I have Type I Diabetes (the autoimmune strikes-randomly and needs-insulin-to-survive type) and so I always wear an insulin pump jacked into my abdomen. In the pump, there is an insulin cartridge which contains a large reservoir of insulin -- injecting 1/20th of the reservoir could kill me if I'm not treated quite quickly. Injecting the whole thing is a death sentence if I'm not already in a hospital bed and hooked up to an IV. The kicker is that the device has RF access, and is likely hackable. I have turned off the RF from day one (partially due to the battery drain, partially due to my worries of a possible hack or mis-delivery) and sacrificed some of the pump's features, but most pump users will not do this.

It's a glaring vulnerability in a life-or-death system.

Re:No (0)

Anonymous Coward | about 2 years ago | (#39788875)

More money down the shitter. I can't think of anything a hacker would gain from a medical device. What would be the point? Are hackers just evil and nefarious and out to hurt people in the hospital for the lulz? I doubt it. [Citation Needed]

Think beyond just gaining access to a medical device. The medical device itself isn't the end goal. Yeah, you could potentially do malicious things to and using that device, but more importantly it gives you a foothold in the network that you're attacking. It's simply a means to an end, whatever that end may be. Once you own the medical device, you'll use it as a jumping-off point to attack and compromise something else. You're now on the network, and it's so much easier to attack other things on the network once you have a foothold. Lather, rinse, repeat until you accomplish your final objective.

Should They? (4, Interesting)

WrongSizeGlass (838941) | about 2 years ago | (#39786643)

Yes, they should. It should be a separate certification that allows doctors and consumers to choose medical devices with confidence.

They Should But Why Not Use Existing Solutions? (1)

eldavojohn (898314) | about 2 years ago | (#39786919)

Yes, they should. It should be a separate certification that allows doctors and consumers to choose medical devices with confidence.

Personally I don't trust the FDA with something like this, nor do I think it would help to give them funding to expand their expertise in a field like security. I don't even trust the best in the private world with something like this: Microsoft, Apple, Google, IBM; I don't care, they have all failed at security at some point. I have to imagine that our government's security agencies already have a generalized form of protection testing and certification within their own systems, so why not reuse that process and actually get some use and protection for citizens out of said government money vacuums?

Re:They Should But Why Not Use Existing Solutions? (0)

Anonymous Coward | about 2 years ago | (#39786985)

They suck at assessing drugs, and scandals abound with lobbying powers on all fronts. This is just further opportunity for bribery.

Re:They Should But Why Not Use Existing Solutions? (4, Interesting)

mcgrew (92797) | about 2 years ago | (#39787303)

Personally I don't trust the FDA with something like this

Why not? They're the UL of medical devices. They're the ones who approved my eye implant. They're the ones who approve pacemakers. They're the ones we cyborgs rely on for safe implants.

I don't even trust the best in the private world with something like this: Microsoft, Apple, Google, IBM

The difference between the FDA and IBM is that you have no vote whatever over who runs IBM or what they do. The head of the FDA is appointed by the President, who you do have a vote in electing. Our power company is owned and operated by the city, and we've historically had the lowest rates and best uptime in the state. But they had a boondoggle that's going to raise rates, so I don't see the Mayor getting reelected unless the Democrats run someone REALLY bad.

I have to imagine that our government's security agencies already have a generalized form of protection testing and certification within their own systems, why not reuse that process and actually get some use and protection for citizens out of said government money vacuums?

That's exactly right -- the security people would be transferred to the FDA.

Re:They Should But Why Not Use Existing Solutions? (-1)

Anonymous Coward | about 2 years ago | (#39787883)

In this case the government are the hackers. They force prisoners and military personnel to have passive transmitters implanted in their skulls without their knowledge. They can then target enough electromagnetic radiation at them to give them shocks when they say or do something that they don't approve of.

Re:They Should But Why Not Use Existing Solutions? (0)

Anonymous Coward | about 2 years ago | (#39788269)

I can sue IBM if they fuck up; I can't sue the FDA

Re:They Should But Why Not Use Existing Solutions? (2)

techno-vampire (666512) | about 2 years ago | (#39788333)

Why not? They're the UL of medical devices. They're the ones who approved my eye implant. They're the ones who approve pacemakers. They're the ones we cyborgs rely on for safe implants.

Same here. And, of course, they also had to approve my hearing aids, the meter I use every day to monitor my blood sugar and the dialysis equipment a friend of mine needed when his kidneys stopped working. People like to complain about how much it costs to get new drugs, devices and procedures approved by the FDA, but I bet they'd complain even more if the FDA suddenly went away.

Re:They Should But Why Not Use Existing Solutions? (1)

thoth (7907) | about 2 years ago | (#39787317)

I'm not sure security agencies model this problem well: a lot of their certification and/or protection methods come down to high costs (armed guards, lots of physical security, etc.) or long, slow, thorough auditing plus heavy screening of personnel, etc - the stuff the rabid anti-government folks scream about when the spending isn't directed at their favorite projects.

Meanwhile, private corporations merely treat customers as a cost-analysis problem, weighing their life versus lawsuit payout amounts, and take a failure rate deemed OK by bean counters.

The first method will be safer but pricey; the second will be cheaper but risky. People hate that, but the free market fails to deliver "safe and cheap".

Re:They Should But Why Not Use Existing Solutions? (1)

geekoid (135745) | about 2 years ago | (#39787397)

I think you miss the point of what they want to do.

They would test the security to a certain bar of expectation. Basically they will set the floor.
For example, they could hire security experts to break something, or more likely, they will have a set of attacks the item will be tested against.

Yes, some agencies have certification processes for their own systems. You know what? Those aren't medical systems. And if you treat every system as if they're all the same, you will fail. That's a lot of the reason IT is the security nightmare it is.
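The "floor" of canned attacks described above could be pictured as a fixed test suite run against every device before approval. A toy sketch follows; both the attack list and the device stub are invented here purely for illustration:

```python
# Hypothetical certification harness: a fixed list of attack probes, each
# returning True if the device resisted the attack. A device clears the
# "floor" only if every probe returns True. DeviceUnderTest is a toy stub.

class DeviceUnderTest:
    """Toy stand-in for a device's wireless command interface."""
    def handle(self, packet: bytes) -> bool:
        # Accept only short packets with an auth prefix (toy logic).
        return packet.startswith(b"AUTH:") and len(packet) <= 64

ATTACKS = {
    "oversized_packet":    lambda d: not d.handle(b"AUTH:" + b"A" * 1000),
    "unauthenticated_cmd": lambda d: not d.handle(b"bolus 300"),
    "empty_packet":        lambda d: not d.handle(b""),
}

def certify(device) -> dict:
    """Run every canned attack; map attack name -> resisted (bool)."""
    return {name: probe(device) for name, probe in ATTACKS.items()}
```

The point of a fixed suite is exactly the commenter's "bar of expectation": it catches known attack classes cheaply, while anything novel still requires expert review.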

Absolutely not. (0)

Anonymous Coward | about 2 years ago | (#39787209)

This is so far outside any reasonable scope of the FDA's mission as to appear absurd on its face, either in terms of cost or benefit. The 'safety' of medical devices and treatments as regulated by the FDA has been compromised by the machinations of industry and politics to the point that its efficacy is already questionable, and I can see no 'reasonable' excuse for expanding this mission to encompass the question of security at the level inherent to this proposition. If criminal hackers, governments, or corporate industry become so corrupt that this line of attack is used, then it's incumbent upon the security industry to provide coverage for the few individuals who might fall prey to such an attack.

And if it can be shown in court that a company is negligent in providing an insecure device vulnerable to some known and defensible means of attack, then it's up to industry to correct the oversight or suffer the consequences.

Good luck to all you private citizens/corporations and individuals possessing any reasonable fear. (Oh, and you too, Mr. Cheney.)

Re:Should They? (1)

fermion (181285) | about 2 years ago | (#39787251)

It seems to me that this would be of equal or higher benefit to the drug maker. From what I can tell, FDA regulation really provides more of an affirmative defense to the drug makers than real protection to the consumer. If the drug maker jumps through certain hoops and conducts certain tests, then they are basically guaranteed that if their product kills someone, even if the data shows that it kills people, they will have limited liability if the FDA said the drug was safe.

Of course the problem right now is that devices that can be hacked are unregulated, so the device manufacturers can say they were following all the FDA rules, which are none, and therefore cannot be held responsible for anything. Of course any regulation will probably be insufficient and will likely only serve to give the manufacturers cover. I would just like to see the companies be criminally and civilly responsible for any device that is hacked. This would give confidence to the patient. If the device is hacked, even if you are not harmed, you will have grounds to go after the doctor, the firm who sold the device, the manufacturer.

Re:Should They? (0)

Anonymous Coward | about 2 years ago | (#39787309)

With all things there is a balancing of risk. The current state of glucose monitors, continuous glucose monitors, and insulin pumps sucks. It is very difficult for Animas or Minimed to get approval for such devices, and it takes so long that they are all two to three generations behind in tech.

The harder you make it in the name of safety the longer it takes. That is all fine because we want the device to be ‘safe’, but keep in mind that people are currently going without since the devices are too expensive since they take too long to approve.

Before you say companies shouldn’t be allowed to sell medical devices with vulnerabilities, ask why computer companies can sell computers with vulnerabilities.

Reducing the requirements on devices would cut costs and increase availability to people. That would save lives, no doubt. So where is the line? How do we draw it? That is the tough question.

Re:Should They? (0)

Anonymous Coward | about 2 years ago | (#39788695)

No, they should not; it's outside their core competence, and I still have politicians, bankers, CEOs, Hummer drivers, AGW fanatics and others to deal with.

magnets: terrorist devices? (1)

jsepeta (412566) | about 2 years ago | (#39786673)

If magnets can be used to reset or interfere with a pacemaker, should ownership of magnets be considered a terrorist offense?

My refrigerator can take more lives on an airplane than your bottle of shampoo.

Re:magnets: terrorist devices? (0)

Anonymous Coward | about 2 years ago | (#39787129)

If you can manage to get a refrigerator on a plane, we have bigger issues to deal with first.

LOL (1)

vlm (69642) | about 2 years ago | (#39786681)

1) Can't abbreviate VLAN properly
2) A firewall for wireless devices
3) attracted attention in Washington = some politically connected consultant is making bank

Re:LOL (0)

Anonymous Coward | about 2 years ago | (#39786889)

1) Since when are you surprised to see news articles spell out acronyms?
2) What's your definition of a firewall then? This is a device that monitors the incoming and outgoing traffic of network(-able) hosts and can block/deny malicious traffic.

Yes (0)

Anonymous Coward | about 2 years ago | (#39786741)

Yes, but devices as important as medical hardware should be ROM-only in operation, with the ability to be flashed for updates only by vetted, qualified, licensed personnel.

Anyone caught intentionally cracking anything should get, at a minimum, 20 years of hard labor. Intentionally trying to harm or kill someone attached to a medical device should be a hanging sentence. Full stop.

Re:Yes (2)

sexconker (1179573) | about 2 years ago | (#39786917)

Yes, but devices as important as medical hardware should be ROM-only in operation, with the ability to be flashed for updates only by vetted, qualified, licensed personnel.

The problem with that is every time you want to update the device you have to physically get to it.
Taking updates wirelessly makes things much easier and safer.

As far as (EEP)ROM-only, that's good for the code, but many devices log data (and dump it out wirelessly).
You have to protect against attacks that try to make the device do bad things as well as attacks designed to get or overwrite that data.
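One cheap defense against the "overwrite that data" attack mentioned above is a tamper-evident log: chain each entry's MAC to the previous one, so silently altering or deleting an earlier entry breaks verification of everything after it. A hypothetical sketch (the key and entry format are invented; this detects tampering, it doesn't prevent it):

```python
import hashlib
import hmac

LOG_KEY = b"per-device-log-key"   # hypothetical key held by the device

def append_entry(chain: list, entry: bytes) -> None:
    """Append entry with a MAC over the entry plus the previous MAC."""
    prev = chain[-1][1] if chain else b"\x00" * 32
    tag = hmac.new(LOG_KEY, prev + entry, hashlib.sha256).digest()
    chain.append((entry, tag))

def verify_chain(chain: list) -> bool:
    """Recompute every MAC; any edited/removed entry breaks the chain."""
    prev = b"\x00" * 32
    for entry, tag in chain:
        expected = hmac.new(LOG_KEY, prev + entry, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return False
        prev = tag
    return True
```

Whether an implant's power budget can afford even this much is, as others in the thread point out, an open question.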

Re:Yes (0)

Anonymous Coward | about 2 years ago | (#39786993)

Not easier and safer, just easier!

Re:Yes (0)

Anonymous Coward | about 2 years ago | (#39787057)

Safer because wireless updates don't require surgery for each code update.

Re:Yes (2)

a90Tj2P7 (1533853) | about 2 years ago | (#39787065)

Yes, safer, in the sense that you don't have to go in for surgery every time the settings on your implant need to be adjusted.

Re:Yes (5, Insightful)

negRo_slim (636783) | about 2 years ago | (#39787415)

Anyone caught intentionally cracking anything should get, at a minimum, 20 years of hard labor. Intentionally trying to harm or kill someone attached to a medical device should be a hanging sentence. Full stop.

Glad to see you've fallen in love with the DMCA [chillingeffects.org] friend! Anything that could lead to crime should be a crime aye? Never mind how close that comes to dangerously impeding our legitimate rights to freedom of speech including research that includes circumvention of various controls.

Like most of life's problems easily solved (1)

decipher_saint (72686) | about 2 years ago | (#39786799)

Embed the device in concrete and sink it to the bottom of the ocean. Virtually hack-proof.

It's also great for annoying servers that won't patch and people who send meeting invites with no description...

Certify the software works first (2)

l2718 (514756) | about 2 years ago | (#39786803)

Before worrying about security of the software, how about worrying about the correctness and fault-tolerance of the software and hardware?

Most famous is the Therac-25 [vt.edu] incident, but it's not the only one.

Re:Certify the software works first (2)

bsDaemon (87307) | about 2 years ago | (#39786831)

Security flaws are derived from incorrectness and lack of fault tolerance. It's part and parcel of the same thing, and if you don't design security in from the start, it'll just become harder and harder to retrofit into the product later.

Re:Certify the software works first (0)

Anonymous Coward | about 2 years ago | (#39788101)

They do, very much so. The FDA isn't afraid to stop a product from being developed if the manufacturer fails to meet the requirements. I know that GE had to stop development on a whole line of products because they couldn't prove that the software was safe.

The Therac-25 incidents happened some 25 years ago; I'd like to believe we are in a much better state now.

Better idea: (0)

CanHasDIY (1672858) | about 2 years ago | (#39786843)

Stop making the goddamn things wireless!!! WTF are you thinking??!!

If you have a pacemaker, then you're already 'zipper-chested,' so the addition of a firmware update port would be a non-issue.


Or hey, here's an even better idea: make the goddamn things right in the first place, so they don't need software updates! I mean, fuck, we're not talking about a SOHO router here, we're talking about a device people rely on to not fucking die; one would think they would be better engineered.

Re:Better idea: (4, Insightful)

a90Tj2P7 (1533853) | about 2 years ago | (#39786969)

There are a ton of other implanted devices, not just pacemakers. A lot of these devices might need to be adjusted to make a patient "not fucking die" - it isn't about system patches, it's about making medical adjustments to things like the dosage/voltage/rate/etc that the device is pumping out. You can't tear someone open every month when you need to adjust their insulin pump.

Re:Better idea: (3, Informative)

IorDMUX (870522) | about 2 years ago | (#39787275)

You can't tear someone open every month when you need to adjust their insulin pump.

I understand your point, but... As a user of an insulin pump myself, I'd like to clarify that it is an external device, usually carried on the belt or in a pocket, as it needs to be refilled every few days and adjusted quite often. There are implantable insulin pumps in existence, but these are primarily for research purposes, and are not commercial devices to treat diabetes.

Re:Better idea: (0)

Anonymous Coward | about 2 years ago | (#39787393)

The answer to this is to use a wireless transmitter that gives a weak signal. (A few feet!) That way a doctor would have to put the receiver/transmitter physically near to the patient. The other part of the equation is to put STRONG security on the doctors equipment.

Re:Better idea: (0)

Anonymous Coward | about 2 years ago | (#39787411)

There's an obvious solution that makes the thing unhackable, and has existed for years. Magnetic communication. This kind of communication requires very close contact with the device. Essentially skin contact with the communication device. Anyone coming into such close contact is already inherently trusted.

Putting wifi in the things is about the stupidest thing I've ever heard of.

Re:Better idea: (0)

Anonymous Coward | about 2 years ago | (#39787421)

They may need to be adjusted from outside but there's no reason they need to be adjustable from a distance.

Re:Better idea: (1)

CanHasDIY (1672858) | about 2 years ago | (#39787553)

A lot of these devices might need to be adjusted to make a patient "not fucking die" - it isn't about system patches, it's about making medical adjustments to things like the dosage/voltage/rate/etc that the device is pumping out.

OK, so use a physical connection; as I said, if you have a pacemaker then you're already scarred all to hell, so what difference will a 1/8" serial plug make?

Someone below mentioned magnetic communications, which sounds just plain awesome.

Re:Better idea: (0)

Anonymous Coward | about 2 years ago | (#39788235)

Why not install a 1/8" serial plug? Because it would become a focus for all sorts of horrible fungal and bacterial infections.

One possible solution.. (2)

willy_me (212994) | about 2 years ago | (#39786947)

Whichever federal agency takes charge could offer a large reward for security holes/bugs found in applicable systems. The agency would validate claims, pay an applicable reward to those who reported the issue, then bill the offending company for the reward.

The idea is to make the reward large enough that it is more profitable for people to report a flaw than to abuse it. Government involvement would be limited to reviewing claimed flaws, not assessing the security of every device. Private companies would then have a financial incentive to ensure their code is secure.

NO (0)

Anonymous Coward | about 2 years ago | (#39786981)

No, because they will just use it as an excuse to raise prices on everything, and to grab more personal information.

how about the NSA instead of the FDA? (1)

ChipMonk (711367) | about 2 years ago | (#39787059)

If a medical device can be made available to heads-of-state, why not task the NSA with proving that it won't be a vector for carrying out a political assassination?

Re:how about the NSA instead of the FDA? (0)

Anonymous Coward | about 2 years ago | (#39787197)

Except for the backdoor...

Re:how about the NSA instead of the FDA? (1)

thoth (7907) | about 2 years ago | (#39787217)

Their charter is for DoD computer systems, not medical devices. Another agency would be better... and of course they can always be asked to check out a medical device that will be provided to a head-of-state. Surely various regulations already cover other medical devices - what agency accredits those?

Re:how about the NSA instead of the FDA? (0)

Anonymous Coward | about 2 years ago | (#39788145)

The NSA has been doing this for years now. They invented implantable chips as a means of monitoring and controlling anyone they can get their hands on.

Is there a way ... (0)

Anonymous Coward | about 2 years ago | (#39787147)

to have the entire Medical Insurance industry restrained from having anything to do with this?

Let's stop their meddling before this even gets started!

Ridiculous. (2)

roman_mir (125474) | about 2 years ago | (#39787149)

More ridiculous government nonsense.

There are already a million and one laws about unauthorised computer access, and a million and one laws about causing harm to people, and this situation already falls under all of those provisions.

This is just another way to raise costs, expand the government apparatus, increase government spending, and lower economic activity; it will probably end up costing a number of lives, as products are prevented from entering the market at all, or soon enough, or at lower cost.

Re:Ridiculous. (0)

Anonymous Coward | about 2 years ago | (#39787313)

While I agree with the 'no new LAWS' bit, I do think that as regulated medical devices they should face some form of regulation/certification covering security during manufacture. Adding a financial disincentive or legal repercussions for the manufacturer could also be favorable (although I can see rival companies abusing this to sabotage their competitors).

Re:Ridiculous. (0)

Anonymous Coward | about 2 years ago | (#39788389)

Laws don't mean anything to the government neither do individual rights. There's absolutely no reason to trust the corporate run profit driven FDA with regulating medical devices.

how big of an asshat do you gotta be (1)

cod3r_ (2031620) | about 2 years ago | (#39787169)

to hack someone's medical device... I mean seriously. There are certainly some jerkoffs out there, but it's a bit alarmist to spend all sorts of money protecting medical devices against hackers. They've been reading the internet too long and think everyone really is that big of an asshat in real life.

Re:how big of an asshat do you gota be (1)

Fned (43219) | about 2 years ago | (#39788239)

No matter where you set the bar, sooner or later the universe will deliver you a bigger asshat.

Why is this possible? (0)

Anonymous Coward | about 2 years ago | (#39787195)

One-time pad encryption has been proven to be unbreakable. Each device could be assigned a specific key, kept in a safe, that a doctor could use to communicate with the device.
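For what it's worth, a one-time pad is just an XOR against a truly random key at least as long as the message, and the key must never be reused. A minimal Python sketch (the command string and key handling are purely illustrative, not any real device protocol):

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # The pad must be truly random, at least as long as the message,
    # and used exactly once -- reuse destroys the security guarantee.
    assert len(key) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, key))

# XOR is its own inverse, so decryption is the same operation.
otp_decrypt = otp_encrypt

msg = b"set_rate=72"                   # hypothetical programming command
key = secrets.token_bytes(len(msg))    # the key kept in the doctor's safe
ct = otp_encrypt(msg, key)
assert otp_decrypt(ct, key) == msg
```

The catch, of course, is key management: every message consumes key material, so the safe has to hold enough pre-shared pad for the device's lifetime, which is exactly why one-time pads are rarely practical.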

So we all get to pay more for health care (0)

Deliveranc3 (629997) | about 2 years ago | (#39787201)

So some rich assholes can feel safe? Really?

How about just making "hacker proof" hospitals for assholes.

Re:So we all get to pay more for health care (1)

hellkyng (1920978) | about 2 years ago | (#39787565)

"Rich asshole"? Seriously, a pacemaker isn't just for the rich asshole. Failing to assess these devices for security controls would be ridiculous negligence. Malicious software has a tendency to spread where it can; it doesn't need a reason to compromise a pacemaker if it's able to. I guarantee that if proper security controls aren't implemented in medical devices, you will see deaths related to failed or compromised devices. It doesn't even have to be intended malice: if a piece of malware compromises a device and decides a reboot is necessary, guess what happens to the heart behind the pacemaker...

Yes (1)

HideyoshiJP (1392619) | about 2 years ago | (#39787249)

They already have to certify medical devices that are essentially Windows boxes with medical software. Oftentimes, these vendors get quite snippy if you ask about security software on said devices. In all likelihood, these boxes will never be updated. Security definitely needs to be considered during the course of certification.

You have to (1)

onyxruby (118189) | about 2 years ago | (#39787283)

If you don't protect a computer (whatever shape that computer comes in), some hacker somewhere will hack it just because they can. The fact that the computer controls a piece of factory equipment, a city sewer system, a person's pacemaker or anything else is irrelevant. Someone will hack it because they can; that's just the way hackers work.

Companies have had a habit, for decades, of saying something can't be hacked, would be impractical to hack, or that no one would want to hack their /whatever/. Hackers then have a habit of exposing the exploit when said company ignores their work. Why should the form factor make a difference?

Alternative (1)

MobyDisk (75490) | about 2 years ago | (#39787315)

Something definitely needs to be done because I can vouch that very few programmers even consider security, especially embedded software developers. It is worse than average in the medical industry since the idea of putting a medical device on a network is totally new to them. To put it in perspective, many new medical devices being built today use 9600 baud serial ports for communication.

Alternatively, you could change the law so that if someone hacks a medical device the hacker is not liable - the designer is. That way, when someone remotely sets off a defibrillator or stops a heart pump the companies will pay attention to security. The way things are today they will just hire extra lawyers to avoid liability and marketing to cover it up.

I worked in the industry... (0)

Anonymous Coward | about 2 years ago | (#39787355)

I worked in the industry for 20 years, and in all that time I never encountered a single medical device that employed even the most basic notions of security adequately enough to withstand anything resembling an "attack." Many of them were vulnerable to random noise.

Expensive (1)

RicoX9 (558353) | about 2 years ago | (#39787389)

We're already years behind the curve where I work (a hospital) because FDA certification costs so much. Because the vendor won't spend another $50K or so, our brand-new IV pumps are stuck for eternity with 2.4GHz radios (802.11b/g). Also, because the older model that could manage 4 IVs at a time was so buggy, we're replacing them with the wireless ones that only do 1 IV. Wireless because the drug database updates can be pushed, saving a ton of time putting hands on each device. Now we add a bunch of extra access points on low power to avoid cross-channel interference and spread the load around.

Then there's bedside meds administration. There are some devices with 5GHz radios, but our people don't like them. Great. More load on the shitty 2.4GHz spectrum. Seems like every week there's a new project that "has to have wireless to work".

173 infected with malware between 2009-11? (1)

oDDmON oUT (231200) | about 2 years ago | (#39787523)

Dick Cheney had an LVAD, or a Left Ventricular Assist Device, implanted in 2010. Hmmmm.

Yes they should (1)

doston (2372830) | about 2 years ago | (#39787567)

If they don't protect medical devices, including implants, against 'hackers', then the politicians who run the FDA won't get the bribes they need for reelection from McAfee, Symantec and Kaspersky. This is important stuff, people. Now we just need a paid 'security analyst' to go on TV and frighten grandma ("Yes, it's technically possible a person could die") during her mid-morning 'news'. That's right after the story about the baby with 3 heads, but before the inspiring story of a dog who saved its friend... a chicken... from a house fire. AWWW.

Not the FDA... (1)

gweihir (88907) | about 2 years ago | (#39787587)

While a competent security assessment is a very good idea, I highly doubt the FDA is capable of doing it. More likely this would result in another basically worthless "security" certification.

How about... (1)

flameproof (1460175) | about 2 years ago | (#39787601)

...The FDA pulls their head out of Monsanto's ass first before they ask for any more money to goof with technologies they clearly don't understand.

Yes and no. (1)

Karmashock (2415832) | about 2 years ago | (#39787669)

I'm not sure if the FDA should set computer security policies. That seems well outside their wheelhouse. That said, security policy on devices should be too dumb to fail.

I can see the virtue of a wireless programmable pacemaker. But the security system should be something that can't be tampered with... not because the security is good but because it LITERALLY cannot be tampered with... at all.

For example, instead of using Bluetooth (just an example) or some other radio signal, maybe use a different sort of signal that requires body contact, not merely close proximity. I'm sure you could send a very weak electrical signal into someone through a finger or hand that a device could pick up. It would be very hard for a hacker to touch someone, send a signal to their pacer through that contact, and potentially kill them, especially compared to a more remote signal that someone might be able to send from across the room.

So I guess I'd suggest they avoid certain types of technology for transmitting commands. And even then I'd strongly suggest some decent encryption but I wouldn't have the FDA regulating it. I'd sooner put the NSA in charge of setting those standards. They'd at least know what they were talking about.

No need. We have laws already... (0)

Anonymous Coward | about 2 years ago | (#39787775)

We have hacking laws now. The fact that the devices are medical in nature doesn't change the laws.

Sure, it's worse if the products are in people. We have laws for that too -- Attempted Murder, for example.

I have an ICD. (1)

Blinkin1200 (917437) | about 2 years ago | (#39787801)

Magnets are used to disable or suspend operation of the device (therapy). The devices can malfunction, repeatedly delivering an inappropriate shock, and there are also times when they need to be disabled. When a magnet is placed on the device, there is a rather loud alarm. Magnetic fields can also pose a problem, as the lead(s) that transmit the minute electrical impulses from the heart muscle to the ICD can also act as an antenna. They tell you 'don't lean / don't linger' around certain electrical devices and things that generate a strong magnetic field: security posts leaving a store (I am aware of one documented 'event'), a running engine you're working on, and the like. There are times when I do not want to be 'surprised' because of something I'm doing at the moment.

Some devices are capable of transmitting their data to a 'base station' that later transmits the data to a server for examination by a physician. My device is not one of them; it requires an antenna to be placed over the device, and after some handshaking the data is transmitted to the controller / monitor. I did not RTFA yet, but I am curious to know whether the malware infection is in the actual device or in the base station / server network. I have been playing with my device and have been able to communicate with it up to a distance of 10m. With a better antenna design on my rig, I think I can get it up to 30m.

Yes, I am 'zippered': three on the left leg to remove the spare 'plumbing', a large vertical one on the chest where they installed the now-spare plumbing parts to reroute blood flow in three places, three little zips below the rib cage for temporary drainage, and don't forget just below the left collarbone to implant the ICD. Even with all of these zippers, I would not allow a constant open wound for a firmware port. That is an idea waiting for an infection. Also, they don't stitch anymore, so there is no zipper. It is more like the 'ZipLock Club' now, with the use of superglue and packing tape - you know - the stuff with the threads embedded...

BTW - they don't replace the battery on these devices, they replace the device.

In other words (1)

J'raxis (248192) | about 2 years ago | (#39787927)

Although there has not yet been a high-profile case of such an attack

In other words, a literal "solution in search of a problem." And an excuse to give an already corrupt [wikipedia.org] and counterproductive [eprci.net] government agency more power.

Re:In other words (1)

fa2k (881632) | about 2 years ago | (#39788519)

Although there has not yet been a high-profile case of such an attack

In other words, a literal "solution in search of a problem."

Finally someone anticipates a problem before it happens, and they get shot down like this?

Re:In other words (1)

J'raxis (248192) | about 2 years ago | (#39788609)

When it's being used as an excuse to pre-emptively give a government agency more power, yes. Isn't it bad enough that, typically, they wait for a crisis to happen before exploiting it? Now you're all ready to give them more power merely because of theorized or imagined crises?

I work with medical devices (0)

Anonymous Coward | about 2 years ago | (#39788373)

We leave security up to the customers, generally because they already have their own corporate security technology stack, and also because we're built on top of Windows. We're mostly running Java, so it wouldn't be completely out of the realm of possibility to run on a more secure OS.

That being said, working with the VA has been a genuine pain in the butt, mostly because they purchased a product without realizing that the data analysis took place on a web-based cloud platform. Because of them, we re-engineered our product into a new one where they could buy a standalone server to do the processing.

Lots of things are classified as medical devices (1)

ChumpusRex2003 (726306) | about 2 years ago | (#39788731)

Medical devices don't just include things like implantable equipment (implantable defibrillators, pacemakers, pumps, etc.) but also analysis equipment and, more recently, computer software running on regular PCs (electronic patient records, order management systems, digital X-ray / picture archiving and communication systems, etc.).

Implantable devices have been in the public eye recently because they don't use very secure protocols. Typically, the wireless controller transmits a command prefixed by the serial-number of the implanted device. The device then ignores commands which are not prefixed by the appropriate serial number. This is OK for preventing programming the wrong device in a clinic situation, but a hacker could easily perform a replay type attack to cause the device to administer an inappropriate treatment or dose. One reason that manufacturers have given for this is an extremely limited power budget - strong cryptography simply burns too much energy for a device which cannot be recharged.

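The replay weakness described above is easy to see in a sketch. The packet format, serial number, commands, and the HMAC-based fix below are all hypothetical illustrations, not any real device's protocol; note too that even HMAC-SHA256 may exceed the power budget the manufacturers cite:

```python
import hmac, hashlib, secrets

# Hypothetical naive protocol: accept any packet bearing our serial number.
SERIAL = b"PM-123456"

def naive_accept(packet: bytes) -> bool:
    # Anyone who sniffs one valid packet can simply replay it later.
    return packet.startswith(SERIAL)

sniffed = SERIAL + b"|SHOCK"
assert naive_accept(sniffed)          # replayed packet is accepted

# A challenge-response MAC defeats replay: the device issues a fresh
# nonce per exchange, and the programmer proves knowledge of a shared
# secret over that specific nonce and command.
SECRET = secrets.token_bytes(16)      # provisioned at manufacture (assumed)

def sign(nonce: bytes, command: bytes) -> bytes:
    return hmac.new(SECRET, nonce + command, hashlib.sha256).digest()

def device_accept(nonce: bytes, command: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(tag, sign(nonce, command))

nonce = secrets.token_bytes(8)        # fresh challenge from the device
tag = sign(nonce, b"SHOCK")
assert device_accept(nonce, b"SHOCK", tag)

# Replaying the same tag under a different challenge fails.
wrong_nonce = bytes(b ^ 1 for b in nonce)
assert not device_accept(wrong_nonce, b"SHOCK", tag)
```

The replayed tag is useless once the device issues a new nonce, which is the whole point of the challenge-response structure.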
One problem that has concerned me as a user of medical software is just how poor the security is on a surprising number of products. One product that I use at the moment is part of an electronic patient record system. This system doesn't quite store user passwords as cleartext in the database; instead, it encrypts them with a Vigenere cipher (using the username as the key). However, because of excess load on the database server, the software very conscientiously caches the entire "Users" table as a CSV file on the client computer. When I discovered the file, it didn't take long for the Mk I eyeball and my recollection of my password history (which was also documented in great detail, in encrypted form) to determine the cipher and what was being used as the key. This was subsequently confirmed by running the binary through a decompiler, which revealed a number of other wonders, such as potential SQL injection vulns. Of course, none of that really mattered - there was an interesting file called "C:\epr.ini" which contained such lines as:
[ClientDatabaseConnectionString]
Data Source=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=EPRORA)(PORT=1521)))(CONNECT_DATA=(SERVER=DEDICATED)));User Id=SYSTEM;Password=pyramid1;

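To show just how weak that scheme is, here is a toy Vigenere over the printable ASCII range. This is only an illustration of the general technique; the actual product's variant, the sample password, and the username key are all made up:

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    # Classic Vigenere shifted over printable ASCII (codes 32..126).
    # Knowing the key (here, the username) trivially inverts it.
    out = []
    sign = -1 if decrypt else 1
    for i, ch in enumerate(text):
        k = ord(key[i % len(key)]) - 32
        out.append(chr((ord(ch) - 32 + sign * k) % 95 + 32))
    return "".join(out)

stored = vigenere("Hunter2!", "jdoe")            # what lands in the CSV
assert vigenere(stored, "jdoe", decrypt=True) == "Hunter2!"
```

Since the "key" ships in the very same cached table as the ciphertext, anyone with the file can recover every password, which is essentially what the eyeball-plus-decompiler exercise above confirmed.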
However, even leaving aside such extraordinarily bad software from small IT contractors, even the big boys in the healthcare arena seem to have problems with basic testing, and anything even vaguely corner-case will often result in strange behavior - and that's just routine use. I can imagine all sorts of vulnerabilities appearing if these software packages were subjected to serious attack.

In fact, even in healthcare systems that are supposed to be paragons of good design, implementation is often very poor. Professor Ross Anderson, in his book "Security Engineering," mentions a national system used in the UK for securing health records, in which an individual user's smartcard contains an individual certificate and permitted user roles, which interact with the software to release the appropriate records. On the face of it, an excellent system, and one that Anderson cites as an example in his book.

For users, however, the implementation was a disaster area. It was unreliable (depending on a national authentication server; local caching was broken in the first 11 six-monthly releases) and vulnerable to DoS attacks. Authentication with the national server was hopelessly slow (taking up to 5 minutes), so it was useless for doctors in a busy environment such as the ER. Roles were administered at a national level, with no way to override errors in role allocation before the next six-monthly release (e.g. the first few releases did not permit doctors to change the brightness/contrast of an X-ray they were examining; this function was restricted to sysadmins only). The user role administrators acknowledged that this was a serious problem but refused to push out a hotfix; it had to wait for the next role release.

In reality, the nurse in Anderson's example would not simply be restricted to her patients. Instead, what would happen is that the first doctor on shift in the morning would use her smartcard to log in, then leave the smartcard in the terminal for the rest of the day, with every other member of staff who needed patient record access piggy-backing on her login. This procedure was sanctioned by senior hospital management in recognition of the fact that the authentication system was unusable, and the sharing of logins led to further problems, with annotations being attributed to the wrong person.
In the end, the solution was for each person annotating a case record to sign their note with "This note was made by Nurse Doe at 11:41 on 1/2/2010", so that it was clear who had actually written the note.
