
Scientists Tout New Way To Debug Surgical Bots

samzenpus posted about a year ago | from the surgery-safety dept.

Medicine

coondoggie writes "When it comes to having robotic surgeons slicing around inside your brain, heart or other important body organ, surgeons and patients need to know that a software or hardware glitch isn't going to ruin their day. That's why a new technique developed by researchers at Carnegie Mellon University and the Johns Hopkins University Applied Physics Laboratory that promises to reliably detect software bugs and verify the software safety of surgical robots could be a significant development."


68 comments


Free surgery? (2)

techsimian (2555762) | about a year ago | (#43393001)

I guess it's really a public beta...

For real safety (3, Informative)

fustakrakich (1673220) | about a year ago | (#43393047)

Do not connect these things to the internet, please.

Re:For real safety (1)

Synerg1y (2169962) | about a year ago | (#43393209)

but... how else can the doctor perform surgery from the beach? You don't want any sand in your open wound do you?

Re:For real safety (0)

Anonymous Coward | about a year ago | (#43393611)

UW had a doctor doing surgery from Bermuda while the patient was in Seattle, no bot involved. At least, that's what he was billing for.

Re:For real safety (1)

Dahamma (304068) | about a year ago | (#43393607)

The original idea for this robotic surgical equipment was remote operation, so it's pretty certain that's going to happen eventually.

incredible recovery time (0)

Anonymous Coward | about a year ago | (#43393065)

if you override all the safety settings and manually program abdominal surgery for a female patient on a machine only programmed for male patients, don't be surprised if your squidbabby isn't properly aborted.


also, who's liable if the bulb in the service light burns out right after the serviceman checks out the system?

Re:incredible recovery time (0)

Anonymous Coward | about a year ago | (#43393429)

Naturally, there would be an indicator light to alert you that the indicator light is out.

ruin their day ? (3, Interesting)

middlemen (765373) | about a year ago | (#43393071)

This is typical "doctor" talk. They are worried about ruining their day or facing malpractice suits. How about, for a change, thinking about not ruining a patient's life with your negligence!

Re:ruin their day ? (3, Insightful)

Anonymous Coward | about a year ago | (#43393119)

This is what you get with a market-based approach to healthcare.

Re:ruin their day ? (0)

Anonymous Coward | about a year ago | (#43393193)

+5 insightful!

Re:ruin their day ? (2, Insightful)

Anonymous Coward | about a year ago | (#43393703)

No, this is what you get with a market-based approach to healthcare coupled with a government-granted monopoly in the form of ridiculous licensing laws.

If you increased the competition these people face, they wouldn't be so smug anymore.

Re:ruin their day ? (1)

LordLimecat (1103839) | about a year ago | (#43394137)

I am not convinced there is a reliable system short of market forces for rooting out
  * People who would take advantage of the system (whatever it is)
  * People who would charge unreasonable rates (of course, insurance seriously hampers this)
  * Underachievers

Re:ruin their day ? (1)

gatkinso (15975) | about a year ago | (#43394249)

Because somehow a surgical mistake has far more dire consequences in the US than in, say, the UK or Canada.

Re:ruin their day ? (4, Insightful)

ColdWetDog (752185) | about a year ago | (#43393197)

If the doctor doesn't ruin your life, then you don't get to ruin his day. Works for both 'concerns'.

Not that I'm sure the touted system will work as planned (it never does) but trying to minimize bad outcomes is sort of the whole point to improving medical practice.

It's not just BMWs and trophy wives.

Re:ruin their day ? (1)

greenbird (859670) | about a year ago | (#43397781)

If the doctor doesn't ruin your life, then you don't get to ruin his day. Works for both 'concerns'.

How do you get to ruin his day? He has insurance. Know if or how many times your doctor has been accused of or sued for malpractice? Of course not. That information is secret.

trying to minimize bad outcomes is sort of the whole point to improving medical practice.

One of the best ways to do that is to make information about doctors public.

How to debug a surgical bot? (4, Funny)

jamesl (106902) | about a year ago | (#43393129)

Just run it through the autoclave.

Re:How to debug a surgical bot? (1)

ColdWetDog (752185) | about a year ago | (#43393213)

Be sure to leave the lithium batteries inside.

(We never did like that particular autoclave anyway.)

Re:How to debug a surgical bot? (1)

girlintraining (1395911) | about a year ago | (#43395543)

(We never did like that particular autoclave anyway.)

You need to completely disassemble something so that every surface is visible in an autoclave. If you don't do that, you don't kill all the bacteria. Which means you failed at your job -- with potentially lethal results. If you ever put something in that had a battery compartment to begin with, that tells me before you even close the door that you should be ejected out of the fourth-floor window in a fiery blob as a lesson to other technicians not to mess around with safety procedures.

Re:How to debug a surgical bot? (1)

aiht (1017790) | about a year ago | (#43397761)

In the beginning, there was nothing. Then it exploded.

In an autoclave.

Re:How to debug a surgical bot? (1)

LordLimecat (1103839) | about a year ago | (#43394145)

Before or after it is taken out of the patient? You could debug both at the same time...

Re:How to debug a surgical bot? (0)

Anonymous Coward | about a year ago | (#43396221)

I just autoclaved my entire DVD software collection. They became wet and warm, and the bugs stayed or got worse.

Can detect buggy software? (4, Insightful)

Zerth (26112) | about a year ago | (#43393139)

wow, does it solve the halting problem as well?

Re:Can detect buggy software? (4, Insightful)

Jane Q. Public (1010737) | about a year ago | (#43393299)

"wow, does it solve the halting problem as well?"

Pretty close to my thoughts as well.

Reliably finding all "bugs" is not possible without an intimate knowledge of exactly what the software is trying to accomplish. You can find certain kinds of errors algorithmically (what amount to "compile time" errors), and even prevent many kinds of run-time errors.

But there is simply no way to prevent the program from doing something unintentional (like cutting the wrong thing) without prior detailed knowledge of the actual intent.
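
To illustrate with a toy example (hypothetical names, nothing from TFA): both functions below are type-correct and never crash, so a generic bug finder accepts either; only a statement of the surgical intent tells you the second one is wrong.

#include <stdio.h>

/* Hypothetical example. Both functions are type-correct and never crash,
 * so a generic bug finder accepts either; only the stated intent ("never
 * cut deeper than max_depth_mm") tells you the second one is wrong. */
static double clamp_depth_ok(double requested_mm, double max_depth_mm) {
    return requested_mm > max_depth_mm ? max_depth_mm : requested_mm;
}

/* Same types, same shape, no runtime error -- but the comparison is
 * inverted, so requests deeper than the limit pass through untouched. */
static double clamp_depth_bad(double requested_mm, double max_depth_mm) {
    return requested_mm < max_depth_mm ? max_depth_mm : requested_mm;
}

int main(void) {
    printf("%.1f\n", clamp_depth_ok(12.0, 5.0));   /* 5.0  */
    printf("%.1f\n", clamp_depth_bad(12.0, 5.0));  /* 12.0 */
    return 0;
}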

Re:Can detect buggy software? (3, Insightful)

radtea (464814) | about a year ago | (#43393797)

But there is simply no way to prevent the program from doing something unintentional (like cutting the wrong thing) without prior detailed knowledge of the actual intent.

Back in the '80's David Parnas argued that software verification was fundamentally different from hardware verification precisely because software has very nearly infinite bandwidth, which is precisely the opposite of the argument being made in the article.

That is, for hardware, a small change in some parameter will in almost all practical cases result in behaviour that is not wildly different from the behaviour with the original value. This means that we can test it under a finite set of conditions and smoothly extrapolate to the rest.

With software, a single bit flipped can result in behaviour that is arbitrarily different from the original behaviour. As such, nothing short of exhaustive testing over all possible accessible states can prove software correct.

Algorithmic testing of the kind described here will catch some bugs, but provably correct algorithms can still run into practical implementation details that will result in arbitrarily large deviations from ideal behaviour. For example: the inertial navigation module on the Ariane V could have had provably correct code (for all I know it actually did) and the rocket still would have destroyed itself.

Most problems with embedded systems are of the kind, "The hardware did something unexpected and the software responded by doing something inappropriate." Anticipating the things the (incorrectly assembled, failure-prone, unexpectedly capable) hardware might do and figuring out what the appropriate response is constitutes the greater part of building robust embedded code, and this kind of verificationist approach, while useful as a starting point, won't address those issues at all.
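
A contrived sketch of the "one flipped bit" point (made-up constants, not from any real controller): the flipped bit doesn't perturb the behaviour slightly, it selects a completely different behaviour.

#include <stdio.h>

/* Made-up control word for a made-up tool. Flipping one bit in the mode
 * doesn't change the behaviour a little; it selects a different behaviour. */
#define MODE_ADVANCE_TOOL 0x00   /* nudge the tool 1 mm forward */
#define MODE_RETRACT_ALL  0x01   /* fully retract the tool      */

static void act(unsigned mode, double *position_mm) {
    if (mode == MODE_ADVANCE_TOOL)
        *position_mm += 1.0;     /* small, smooth change          */
    else
        *position_mm = 0.0;      /* arbitrarily different outcome */
}

int main(void) {
    double pos = 42.0;
    act(MODE_ADVANCE_TOOL, &pos);
    printf("%.1f\n", pos);                  /* 43.0 */
    act(MODE_ADVANCE_TOOL ^ 0x01, &pos);    /* one flipped bit */
    printf("%.1f\n", pos);                  /* 0.0  */
    return 0;
}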

Re:Can detect buggy software? (1)

Chuckstar (799005) | about a year ago | (#43394351)

I think referring to it as "useful as a starting point" very much undervalues this approach. None of that other stuff you referenced matters, at all, if the algorithm is guaranteed to screw up under certain circumstances. The example they gave in the article was of a provably screwed-up algorithm.

However, I'm really commenting along the lines of "tone" or "emphasis". It is true that once you have a known-good algorithm, there's still other stuff that can go wrong that you need to pay attention to in how you both engineer and operate such machines.

Re:Can detect buggy software? (0)

Anonymous Coward | about a year ago | (#43394923)

Back in the '80's David Parnas argued that software verification was fundamentally different from hardware verification precisely because software has very nearly infinite bandwidth, which is precisely the opposite of the argument being made in the article.

Software cannot have infinite anything, nor 'nearly infinite' anything. An algorithm exists outside of hardware and compiler limitations, so an algorithm can involve aleph-7-scale infinities and still be a valid algorithm (impossible to implement, but still a valid algorithm).
This isn't truly algorithm testing, this is implementation testing. In implementation, the only differences between software and hardware are how many layers of abstraction sit between the implementation and the electrons, and how easy it is to alter the implementation.
I'd like more software, firmware, and hardware to be tested this thoroughly, but it is very resource-intensive, especially with anything that accepts an unbounded stream of commands and/or data.

Re:Can detect buggy software? (1)

blippo (158203) | about a year ago | (#43395787)

Huh?

I'm pretty sure that the number of possible states in a moderately sized software system can be regarded as unfathomably large, which isn't "infinite" but is as close to "nearly infinite" as you can get... What's possibly saving these guys is that they are probably validating that the robotic "scalpel" follows all possible inputs and positions of the input device in the usual three cosy and well-behaved dimensions... OTOH I didn't bother to read the paper, so what do I know...

Besides, what happened to Gödel, and all that? eh?

Oh, and
10 PRINT "K THX BYE"
20 GOTO 10

Stateless vs Stateful (1)

erice (13380) | about a year ago | (#43396001)

Back in the '80's David Parnas argued that software verification was fundamentally different from hardware verification precisely because software has very nearly infinite bandwidth, which is precisely the opposite of the argument being made in the article.

That is, for hardware, a small change in some parameter will in almost all practical cases result in behaviour that is not wildly different from the behaviour with the original value. This means that we can test it under a finite set of conditions and smoothly extrapolate to the rest.

With software, a single bit flipped can result in behaviour that is arbitrarily different from the original behaviour. As such, nothing short of exhaustive testing over all possible accessible states can prove software correct.

More accurately, this is a distinction between stateless and stateful systems. Stateless machines always perform the same operations. That makes them much easier to test than stateful systems, whose behavior depends on memory of prior events.

Software is always stateful, although it can approach stateless behavior. Analog electronics and mechanical systems are generally stateless. Digital systems can be built either way, but registers and memories are pretty fundamental, so almost all modern digital chips are stateful even if they don't embed CPUs and software.
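
A rough sketch of why that matters for testing (toy functions, nothing to do with the robot): the stateless function can be checked point by point, while the stateful one has to be reasoned about over sequences of calls.

#include <assert.h>

/* Stateless: the result depends only on the argument, so single-input
 * tests are enough and they extrapolate. */
static int stateless_scale(int x) {
    return 2 * x;
}

/* Stateful: the result depends on the history of calls, so correctness has
 * to be argued over sequences of calls, not individual inputs. */
static int accumulated = 0;
static int stateful_accumulate(int x) {
    accumulated += x;
    return accumulated;
}

int main(void) {
    assert(stateless_scale(3) == 6);        /* point tests suffice          */
    assert(stateless_scale(-4) == -8);
    assert(stateful_accumulate(3) == 3);    /* same call, different answer, */
    assert(stateful_accumulate(3) == 6);    /* depending on prior history   */
    return 0;
}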

Re:Stateless vs Stateful (1)

Jane Q. Public (1010737) | about a year ago | (#43403053)

"Software is always stateful although it can approach stateless behavior. Analog electronics and mechanical systems are generally stateless. Digital systems can be built either way but registers and memories are pretty fundamental so almost all modern digital chips are stateful even if they don't embed cpu's and software."

This is all true, but the very nature of the kinds of applications being discussed demand that they be stateful.

Re:Can detect buggy software? (0)

Anonymous Coward | about a year ago | (#43396315)

They characterized all possible inputs to a very specific algorithm, according to the article. They didn't even attempt to prove anything about a larger subsystem of the surgical robot entering any eventually occurring "impossible" states. Then again, designed redundancy and enforced limits might be the only way to respond to those.

Re:Can detect buggy software? (2)

jeffasselin (566598) | about a year ago | (#43393409)

It's doable to ensure a program does exactly what its specs say it should do. See http://en.wikipedia.org/wiki/Formal_verification [wikipedia.org] .

It's not often done because it's extraordinarily time-consuming; the time it takes to do goes up exponentially with program length/complexity, I believe.
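
For a loose flavour of what a specification looks like (plain runtime asserts standing in for a real contract language): a deductive verifier would discharge the same pre/postconditions as proof obligations covering all inputs, not just the ones you happen to run.

#include <assert.h>

/* A specification written as pre/postconditions. The asserts only check
 * the contract for inputs we actually run; a deductive verifier would turn
 * the same conditions into proof obligations covering all inputs. */
static int clamp(int x, int lo, int hi) {
    assert(lo <= hi);                       /* precondition  */
    int r = x < lo ? lo : (x > hi ? hi : x);
    assert(lo <= r && r <= hi);             /* postcondition */
    return r;
}

int main(void) {
    return clamp(42, 0, 10) == 10 ? 0 : 1;  /* expect 10 */
}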

Re:Can detect buggy software? (1)

Maximum Prophet (716608) | about a year ago | (#43393453)

It's doable to ensure a program does exactly what its specs say it should do. See http://en.wikipedia.org/wiki/Formal_verification [wikipedia.org] .

It's not often done because it's extraordinarily time-consuming; the time it takes to do goes up exponentially with program length/complexity, I believe.

But it's not doable to automatically ensure the specs are correct. I.e., if you have correct specifications for an adult, those might not be appropriate for a child or infant.

Re:Can detect buggy software? (1)

Maximum Prophet (716608) | about a year ago | (#43393485)

wow, does it solve the halting problem as well?

The insolvability of the "halting problem" is only for arbitrary programs with arbitrary input. Certain defined classes of problems with defined inputs are very solvable.
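
One restricted class where halting is trivially decidable: a program that is just a finite-state machine with no unbounded input. There are finitely many states, so either the run reaches a halting state or it revisits a state and loops forever; a sketch with a made-up machine:

#include <stdio.h>

/* Decide halting for one restricted class: a deterministic finite-state
 * machine with no input (made-up transition table below). Run it; if it
 * reaches the halt state it halts, and if it ever revisits a state it must
 * loop forever. One of the two happens within N steps, so this always
 * terminates with the right answer. */
#define N 5
#define HALT_STATE 4
static const int next_state[N] = { 1, 4, 3, 2, 4 };

static int fsm_halts(int start) {
    int seen[N] = { 0 };
    int s = start;
    while (!seen[s]) {
        if (s == HALT_STATE) return 1;   /* reached the halt state */
        seen[s] = 1;
        s = next_state[s];
    }
    return 0;                            /* revisited a state: runs forever */
}

int main(void) {
    printf("start=0 halts? %d\n", fsm_halts(0));  /* 0 -> 1 -> 4: halts        */
    printf("start=2 halts? %d\n", fsm_halts(2));  /* 2 -> 3 -> 2 -> ...: loops */
    return 0;
}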

Re:Can detect buggy software? (0)

Anonymous Coward | about a year ago | (#43393715)

wow, does it solve the halting problem as well?

The insolvability of the "halting problem" is only for arbitrary programs with arbitrary input.

Then it's a good thing that human anatomy only comes in a small number of predetermined dimensions and arrangements, so that the software in these surgical bots doesn't have to deal with arbitrary input!

Re:Can detect buggy software? (0)

Anonymous Coward | about a year ago | (#43394419)

This biopsy robot is not certified for use in abnormal tissues.

Re:Can detect buggy software? (1)

Anonymous Coward | about a year ago | (#43393511)

Yes.

The halting problem says that you can't determine whether a program halts in the general case. You definitely can check that for a chosen restricted set of programs (usually relatively small ones, but the core control system of a robot or the avionics control system for an airplane is small enough). See model checking [wikipedia.org] for how this is done in general. The article actually talks about a different technique, which is to develop a methodology for formally proving that class of programs correct; this is a more heavyweight technique than model checking (i.e. it requires more effort but can handle more programs).
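
A deliberately tiny sketch of the model-checking idea (a toy two-process lock, nothing to do with the robot or any particular tool): enumerate every reachable state of the model and check a safety property in each one. Because the lock below is acquired with a non-atomic check-then-set, the search finds the interleaving that breaks mutual exclusion.

#include <stdio.h>

/* Toy explicit-state model checker. The model: two processes acquire a
 * shared "lock" with a non-atomic check-then-set. pc values: 0 = idle,
 * 1 = saw the lock free, 2 = in the critical section. A state is
 * (pc0, pc1, lock) packed into one small integer, 3 * 3 * 2 = 18 states. */
#define NSTATES 18

static int pack(int pc0, int pc1, int lock) { return (pc0 * 3 + pc1) * 2 + lock; }

/* Write the successors of state s into out[]; return how many there are. */
static int successors(int s, int out[2]) {
    int lock = s % 2, pcs[2] = { s / 6, (s / 2) % 3 };
    int n = 0;
    for (int i = 0; i < 2; i++) {
        int p[2] = { pcs[0], pcs[1] }, l = lock;
        if (p[i] == 0 && lock == 0)      p[i] = 1;             /* read: lock looks free */
        else if (p[i] == 1)              { p[i] = 2; l = 1; }  /* set lock, enter CS    */
        else if (p[i] == 2)              { p[i] = 0; l = 0; }  /* leave CS, release     */
        else continue;                                         /* step not enabled      */
        out[n++] = pack(p[0], p[1], l);
    }
    return n;
}

int main(void) {
    int visited[NSTATES] = { 0 }, queue[NSTATES], head = 0, tail = 0;
    visited[pack(0, 0, 0)] = 1;
    queue[tail++] = pack(0, 0, 0);

    while (head < tail) {                       /* breadth-first search over states  */
        int s = queue[head++];
        if (s / 6 == 2 && (s / 2) % 3 == 2) {   /* safety property: never both in CS */
            printf("counterexample: both processes in the critical section\n");
            return 1;
        }
        int succ[2], n = successors(s, succ);
        for (int i = 0; i < n; i++)
            if (!visited[succ[i]]) { visited[succ[i]] = 1; queue[tail++] = succ[i]; }
    }
    printf("safety property holds in every reachable state\n");
    return 0;
}

A real checker does exactly this over vastly larger state spaces, which is where the state-explosion problem comes from.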

Re:Yes, can detect buggy software. (2)

Animats (122034) | about a year ago | (#43393639)

wow, does it solve the halting problem as well?

That's not insightful, it's clueless about the subject. The halting problem is very rarely a problem in program verification. It's possible to write a program for which halting is undecidable. That's a bug in the program.

The Microsoft Static Driver Verifier, in practice, runs into problems it can't decide about 5% of the time. The goal is to determine that the driver is memory-safe to be in the kernel and calls all the driver APIs with valid parameters. If it does that, the driver can't crash the rest of the OS, and any problems it has are confined to its own device. The Static Driver Verifier tries to either find a case that will crash or prove that a crash can't happen. The 5% of cases the Verifier can't decide are considered bugs. Really, if it's that hard to decide if a driver can cause a crash, the driver is broken.

I'm not sure how sound this new verification system is. There's been a lot of progress in this area in the last 30 years, since I worked on it. Formal verification is widely used in IC design, where "patch and release" doesn't work. The software world isn't as aware of the technology.

Re:Yes, can detect buggy software. (1)

radtea (464814) | about a year ago | (#43396099)

The software world isn't as aware of the technology.

It's not "less aware of" so much as "this technology is less useful/more difficult to apply in the kind of complex semantic domains where software operates".

Hardware has very limited semantics compared to software: the number of distinct types on a chip is small compared to the number of distinct types in even a moderately complex application.

Each type of thing has its own set of behaviours that have to be encoded in the checking software. This encoding process has so far proven very difficult, and every time you introduce a custom type you have to specify it in the language of the checking program.

None of this is to say that such work isn't making progress (particularly in the functional world) but that the problem in the hardware domain really is easier, so we should expect software to lag, relatively speaking.

The halting problem is practically solved. (2)

ebiederm (614622) | about a year ago | (#43393687)

In the general case it is true that you cannot tell whether a program will halt. However, most programs do halt, and most programmers understand why. Theorem provers present their users with a language that does not admit the general case: halting is guaranteed, as part of a more important guarantee that all programs can be reduced to a normal form.

Which means that the halting problem is solved for a program written in a language like Coq.

The practical challenge seems to be that formal detailed requirements and specifications for algorithms are about as hard to write as the algorithms themselves.
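
A sketch of the same idea in Lean (Coq works analogously): the definition below is accepted only because the recursion is structural, so termination is evident to the checker; unrestricted general recursion would require an explicit termination argument.

-- Accepted because the recursive call is on `xs`, a structurally smaller
-- term, so termination (and hence reduction to a normal form) is guaranteed.
def listSum : List Nat → Nat
  | []      => 0
  | x :: xs => x + listSum xs

#eval listSum [1, 2, 3]   -- 6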

Re:The halting problem is practically solved. (1)

russotto (537200) | about a year ago | (#43397865)

The practical challenge seems to be that formal detailed requirements and specifications for algorithms are about as hard to write as the algorithms themselves.

Yes; it's intuitively obvious that this must be the case, but I think someone managed to prove it a few years back. However, there's some doubt if the proof is valid...

Re:Can detect buggy software? (0)

Anonymous Coward | about a year ago | (#43393841)

The halting problem is only an issue for the set of all programs that can run in a Turing machine. Once you reduce that set, determining if a program halts becomes trivial.

This new technique... (2)

QuietLagoon (813062) | about a year ago | (#43393229)

... will lead to sloppy software being written. Why write high-quality software when you know any bugs will be caught later on?

Re:This new technique... (1)

Graydyn Young (2835695) | about a year ago | (#43393369)

Because it's going to be used on a person's organs?

Re:This new technique... (1)

Anonymous Coward | about a year ago | (#43393425)

I bet you say the same thing about stack traces, right?

Re:This new technique... (1)

QuietLagoon (813062) | about a year ago | (#43394141)

I bet you say the same thing about stack traces, right?

No. I prefer such safety back-stops.

The reason I wrote my prior comment is that I read that the reason the GNU/Linux people do not want to adopt BSD's strlcpy function is that strlcpy encourages sloppy coding. The rationale given for that assertion is that strlcpy checks for the destination buffer being too small and zero-terminates regardless (unlike strncpy, which returns a non-zero-terminated buffer if the destination buffer is too small).

I think such types of rationale for not implementing safety checks are ridiculous.
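
For anyone who hasn't run into it: strlcpy is a BSD function (glibc doesn't ship it, libbsd does), so this sketch uses a stand-in implementation; the point is what each call leaves in a too-small destination buffer.

#include <stdio.h>
#include <string.h>

/* Stand-in for BSD strlcpy: copy at most size-1 bytes, always
 * NUL-terminate, and return strlen(src) so the caller can detect
 * truncation by comparing the return value against size. */
static size_t my_strlcpy(char *dst, const char *src, size_t size) {
    size_t srclen = strlen(src);
    if (size > 0) {
        size_t n = srclen < size - 1 ? srclen : size - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return srclen;
}

int main(void) {
    char a[4], b[4];

    /* strncpy: fills the buffer but does NOT NUL-terminate on truncation. */
    strncpy(a, "abcdef", sizeof a);     /* a = 'a','b','c','d' -- no '\0' */

    /* strlcpy-style: truncates but always leaves a valid C string. */
    size_t need = my_strlcpy(b, "abcdef", sizeof b);
    printf("b = \"%s\", truncated = %s\n", b, need >= sizeof b ? "yes" : "no");
    return 0;
}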

Re:This new technique... (2)

DamnStupidElf (649844) | about a year ago | (#43393667)

The proof verifier won't verify sloppy software without a proof of correctness. It's much harder to write a formal proof of correctness for sloppy code, so anything that can pass the verification stage will most likely be well written, simple, and very human readable since a human has to read and understand it sufficiently to write the formal proof of correctness for it. Writing the formal proofs will also probably be the largest part of the workload in the software development process so it makes no sense to write sloppy code when the cost will be magnified during the verification stage.

Re:This new technique... (1)

smooth wombat (796938) | about a year ago | (#43393927)

will lead to sloppy software being written.

As opposed to what is being written now, right?

Ruin THEIR day? (0)

Anonymous Coward | about a year ago | (#43393273)

what about your day?

Re:Ruin THEIR day? (0)

Anonymous Coward | about a year ago | (#43395343)

End it, most likely

Why not do this for all software? (0)

Anonymous Coward | about a year ago | (#43393277)

For any software that has the possibility of creating harm to humans, something like this should be the norm.

Re:Why not do this for all software? (1)

ahadsell (248479) | about a year ago | (#43394055)

Most software (games being the obvious exception) has the possibility of creating harm to humans. Payroll, accounting, project management, word processing, etc. all have that potential.

TDD (0)

Anonymous Coward | about a year ago | (#43393331)

"I believe it's called Test Driven Development. Thank yew..." - comic book guy

Oh Crap! A Bug in the Debugging Software.. (1)

wanfuse123 (2860713) | about a year ago | (#43393335)

I wanted to read the article to find out information about the method, but their website ain't working. Too much traffic.

Re:Oh Crap! A Bug in the Debugging Software.. (0)

Anonymous Coward | about a year ago | (#43393495)

Should have written that in Scala too.

KeYmaeraD (1)

Anonymous Coward | about a year ago | (#43393381)

I believe that all people fall into one of two categories: either they can write good code or they can come up with clever names for things. Clearly this software was written by the former group.

Misleading summary (3, Insightful)

MasseKid (1294554) | about a year ago | (#43393463)

This debugs the algorithm, not the software, nor the hardware used. There is an enormous difference.

Not just the summary misleading (1)

evanh (627108) | about a year ago | (#43395953)

The whole article is written as if this were a total replacement debugging/design solution for all software.

It is really dealing with mechanical and human motion behaviours like inertial overshoot, path avoidance and realistic feedback to the operator, rather than things like buffer overruns, atomic accesses and race conditions.

Need for stronger languages. (0)

Anonymous Coward | about a year ago | (#43393529)

This raises the idea that we need more solid systems, in both software and hardware, so that no problems occur. Error-free is something we've yet to see; this is only the first of many reasons to develop systems that are secure, real-time and without error, for the industries that demand it.

Litton-Radcliffe LR260 (1)

Anonymous Coward | about a year ago | (#43393599)

Huey, Louie and Dewey, please report to the Medical Bay.

(I must have gotten the drone mfr/model from the novelization. I don't think it's ever mentioned in the movie.)

How to debug a surgical bot? (0)

viviana34 (2870193) | about a year ago | (#43393915)

You're right, jamesl... like you say, just run it through the autoclave.

undo mod (2)

necro81 (917438) | about a year ago | (#43394023)

posting to undo an accidental moderation

Wonder how they do it (2)

TheSkepticalOptimist (898384) | about a year ago | (#43394347)

bool DidBrainBotOperateSuccessfullyOn(Patient patient)
{
    if (patient != null && patient.IsVegetable && !patient.WasVegetable)
        throw new PendingLawsuitException(patient.Lawyer, Doctor.Lawyers, Hospital.Lawyers, University.Lawyers, Manufacturer.Lawyers);
    return true;
}

Those buggers! (0)

Anonymous Coward | about a year ago | (#43395021)

Who's debugging the debuggers?

What they actually do (1)

PhamNguyen (2695929) | about a year ago | (#43395195)

Here's what they actually do according to the paper:

The problem is to construct a control system such that the robot scalpel (controlled by a person through a force-feedback device) will not exit some pre-defined boundaries, e.g. boundaries that prevent the scalpel from cutting a nerve. The previous system was flawed because it didn't take into account delays in the response of the scalpel. By creating a more accurate model, and using methods that are able to prove things about the dynamic system, they prove that with their new control algorithm, no matter what the human operator does, the scalpel will always remain within the boundaries.

It's not clear to me whether the lack of safety in the original control algorithm was a real problem. After all, it's not like the operation will be perfectly safe and a success as long as the scalpel never leaves the boundary. So it's not necessarily worth spending a huge amount of effort to go from "X happens with probability 1e-6" to "X never happens" if there are other more important risks.
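
To make the delay point concrete, here is a rough 1-D sketch of the kind of safety condition involved (illustrative only, with made-up numbers; not the authors' verified controller): a commanded velocity is allowed only if, after the actuation delay plus the braking distance, the tool still cannot cross the boundary; otherwise the command is clamped to a stop.

#include <stdio.h>

/* Illustrative 1-D sketch with made-up numbers -- not the paper's verified
 * controller. A commanded velocity toward the boundary is allowed only if,
 * after coasting for the actuation delay and then braking at the maximum
 * deceleration, the tool still cannot cross the boundary. */
static double safe_velocity(double pos_mm, double v_cmd,
                            double boundary_mm, double delay_s, double decel) {
    if (v_cmd <= 0.0)
        return v_cmd;            /* moving away from the boundary: fine */
    double worst_case = pos_mm + v_cmd * delay_s + (v_cmd * v_cmd) / (2.0 * decel);
    return worst_case <= boundary_mm ? v_cmd : 0.0;   /* otherwise: stop */
}

int main(void) {
    /* tool at 9.5 mm, boundary (say, a nerve) at 10 mm, 50 ms delay */
    printf("%.2f\n", safe_velocity(9.5, 5.0, 10.0, 0.05, 10.0));  /* 0.00: blocked */
    printf("%.2f\n", safe_velocity(9.5, 1.0, 10.0, 0.05, 10.0));  /* 1.00: allowed */
    return 0;
}

Per the description above, the paper's contribution is proving a property like this about the closed-loop dynamics (including the delay), rather than just bolting a runtime check onto the controller.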

My vote would be: (1)

tarpitcod (822436) | about a year ago | (#43397279)

Let's stick these folks who believe in this verification technique underneath the robot as the first patients.

Just do some pointless surgery, open em up, and stitch them back up.

If they survive then trials on other people can start.

 

Re:My vote would be: (0)

Anonymous Coward | about a year ago | (#43397465)

No, let's stick you under the robot.
