
Computer Detection Effective In Spotting Cancer

kdawson posted about 6 years ago | from the mechanical-helper dept.


Anti-Globalism notes a large study out of the UK indicating that computer-aided detection can be as effective at spotting breast cancer as two experts reading the X-rays. Mammograms in Britain are routinely checked by two radiologists or technicians, which is thought to be better than a single review (in the US only a single radiologist reads each mammogram). In a randomized study of 31,000 women, researchers found that a single expert aided by a computer does as well as two pairs of eyes. CAD spotted nearly the same number of cancers, 198 out of 227, compared to 199 for the two readers. In places like the United States, "where single reading is standard practice, computer-aided detection has the potential to improve cancer-detection rates to the level achieved by double reading," the researchers said.
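For reference, the detection rates implied by those counts work out as follows (a quick sketch; the function and variable names are ours, not the study's):

```python
# Detection rates implied by the summary's counts: 227 cancers total,
# 198 caught by a single reader plus CAD, 199 by two human readers.
def detection_rate(found, total):
    """Fraction of the known cancers that a reading method caught."""
    return found / total

cad_rate = detection_rate(198, 227)       # single reader + CAD
double_rate = detection_rate(199, 227)    # double reading

print(f"CAD-assisted: {cad_rate:.1%}")    # 87.2%
print(f"Double read:  {double_rate:.1%}") # 87.7%
```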


more importantly: (-1, Offtopic)

Anonymous Coward | about 6 years ago | (#25260489)

can it spot my fp? say yes to shaved pussy!

Re:more importantly: (2, Funny)

Anonymous Coward | about 6 years ago | (#25260683)

Parent poster is not offtopic, this troll is expressing its experience with this technology. It's saying that it got a false positive (fp) and underwent chemotherapy as a result, which caused the hair to fall off its genitalia (shaved pussy). Quite a tragic tale.

Trolls are really an amazing species once you learn to understand their language.

Re:more importantly: (1)

rice_burners_suck (243660) | about 6 years ago | (#25260723)

This is similar to the way those tabloids appear to be full of bullshit, but are really the news for those chasing after dangerous aliens from outer space. I learned that fact from the movie Men In Black.

Re:more importantly: (-1, Offtopic)

Anonymous Coward | about 6 years ago | (#25260709)

Vote "NO" to hairy pie!


Re:more importantly: (1)

narcberry (1328009) | about 6 years ago | (#25261607)

In other news, mass firings across the nation as radiologists caught e-mailing photos of topless women around, about 31,000 photos in all.

WTF? just WTF? (0)

zappepcs (820751) | about 6 years ago | (#25260517)

Why is this news and NOT standard practice already? I'm sorry, it should not be news that computers can do 'things' as well as humans in many cases. Why this has not been implemented is seriously something to think about, not the fact that it is possible.

Saying that computers can be as good as a human at some things is like saying different brands of cow milk taste the same. Why is this not standard now! Computers are more capable at many tasks, especially things that are repetitive and tedious. sigh

Re:WTF? just WTF? (1, Insightful)

Anonymous Coward | about 6 years ago | (#25260575)

As much as I agree with you, keep in mind that the 199th woman would be really, really glad that it was two radiologists and not a radiologist and a computer... her life could well have depended on it.

Re:WTF? just WTF? (2, Insightful)

lysergic.acid (845423) | about 6 years ago | (#25260997)

that's a statistically insignificant difference in accuracy. i think the conclusion to be drawn from this is that computer-aided detection is much more effective than an unaided human expert. this has significant implications when doing cost-benefit analysis.

the cost of an extra computer is likely a lot less than another technician or radiologist. so this data will help medical institutions make better use of funds while improving quality of patient care. it doesn't mean they have to lay off their radiologists/technicians and replace them with computers, but perhaps they could add a computer to their radiology lab and allocate new personnel for other tasks that demand human judgment.

Re:WTF? just WTF? (4, Interesting)

Metasquares (555685) | about 6 years ago | (#25260595)

Worry not, this is standard practice. Although there is general support that CAD (computer-assisted diagnosis) is effective vs. a second reader, there is still a bit of controversy in the field from time to time, since the results have not been overwhelmingly in favor of CAD yet. There's always at least one talk on the general usefulness of CAD at conferences. Sometimes whole sections get devoted to the topic.

What is a bit more puzzling is why it isn't as prevalent in diagnosis of other types of cancer. Most of the computer-aided detection algorithms draw on general machine learning and image processing techniques rather than specific domain-knowledge of the breast, and thus many of them can be applied, sometimes without any changes, to other organs. There is nothing particularly special about the breast.

My group developed a CAD system for MRI images of the brain, and in the course of performing experiments to put in the paper, I decided to run a few images from a breast CAD project through the classifier. Sure enough, the classifier we had developed for MRIs correctly classified 96% of the mammograms we fed it as well.
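To make that kind of cross-domain check concrete, here is a minimal sketch of training a classifier on one imaging domain and evaluating it unchanged on another. The nearest-centroid classifier and every feature value below are invented for illustration; the poster's actual system is not described here.

```python
def train_centroids(samples):
    """samples: (feature, label) pairs; returns the per-class mean."""
    by_label = {}
    for x, y in samples:
        by_label.setdefault(y, []).append(x)
    return {y: sum(xs) / len(xs) for y, xs in by_label.items()}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda y: abs(x - centroids[y]))

# Invented single-feature data standing in for two imaging domains.
brain = [(0.2, "normal"), (0.3, "normal"), (0.8, "tumor"), (0.9, "tumor")]
breast = [(0.25, "normal"), (0.85, "tumor"), (0.15, "normal")]

model = train_centroids(brain)                    # train on one domain...
accuracy = sum(predict(model, x) == y
               for x, y in breast) / len(breast)  # ...evaluate on the other
```

The point of the sketch is only that nothing in the trained model refers to the organ being imaged, which is why such classifiers can sometimes transfer without changes.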

Re:WTF? just WTF? (1)

slimjim8094 (941042) | about 6 years ago | (#25260703)

nothing particularly special about the breast

Says you.

What, too obvious? Meh. Anywho, I think much of the attention to breast cancer is unwarranted. There are far more common and more dangerous cancers in each sex. I hate to put it this way, but it's fairly easy to isolate breast cancer vs. a brain tumor or liver cancer (mastectomy might not be the favorite choice, but it's pretty easy)

Am I sexist? I don't think so, I just wish that, say, similar attention was being paid to prostate cancer. As far as I know, they're roughly equally prevalent and equally dangerous

Re:WTF? just WTF? (1)

timmarhy (659436) | about 6 years ago | (#25260761)

it boils down to the fact women are better whingers. just as many men get prostate cancer, but more die from it because there aren't the screening services women have.

Not true (3, Informative)

Kibblet (754565) | about 6 years ago | (#25261125)

Where do you get your stats from? I've seen otherwise (ACS site, for starters. School, too, in my community health class. I'm in nursing). Furthermore, of all the inequities in research and healthcare, this is just one that is female-positive. Take, for example, cardiovascular health and women. Women are treated differently when it comes to suspected heart attacks and other issues of cardiovascular health, and it usually winds up killing them.

Re:WTF? just WTF? (0)

Anonymous Coward | about 6 years ago | (#25260797)

I just wish that, say, similar attention was being paid to prostate cancer

this same sentiment came up in a discussion on /. quite a while back, and i'll just attempt to paraphrase what the original respondent said. breast cancer used to be one of the more major causes of death among women because it was largely ignored. a lot of attention is now paid to breast cancer because, basically, women got off their butts and did something to raise awareness. they lobbied, agitated, etc., and it worked. if you want a similar amount of attention paid to prostate cancer, do the same. get off your butt, get your fellow males to do the same, and do something about it. agitate, lobby, etc. it can work for you, too.

Re:WTF? just WTF? (1, Interesting)

Anonymous Coward | about 6 years ago | (#25260811)

As far as I know, they're roughly equally prevalent and equally dangerous.

No, they are (very) roughly equally prevalent, but not nearly equally dangerous. They typically present very differently, for example there is not a significant population of aggressive cancers in younger people with prostate like there is in mammo.

Re:WTF? just WTF? (3, Interesting)

Metasquares (555685) | about 6 years ago | (#25260827)

I fell right into that one, didn't I? :)

I agree. I actually much prefer working with brains; the organs themselves are more interesting and analyzing the images tends to involve more challenges than 2D mammograms. Volumes vs. static images, spatiotemporal analysis, the option of acquiring functional data to map the lesion to cognitive deficits... I find it a very interesting area. Unfortunately, early diagnosis doesn't always make a difference in certain forms of brain cancer. This needs more research in treatment rather than in diagnosis.

Now we're going into the sociological dynamics of research, which turn out to be really messy, but I'm pretty sure the disproportionate amount of interest in breast cancer is in no small part fueled by the ample funding that gets provided to it vs. other types of cancer. However, as I mentioned in the other post, a lot of the CAD methods tend to be general, and breast cancer is really only a specific application, so this is perhaps not as bad as it sounds (if others apply existing methods elsewhere). Given that other forms of cancer strike more often or have greater mortality rates, and that this one tends to strike only half of the population with any frequency (although it is possible for it to develop in men as well), I think something like pancreatic or colon cancer would be more useful to direct some of the study towards, particularly because the current methods for diagnosis are wholly inadequate in the case of pancreatic cancer and rather invasive in the case of colon cancer.

Prostate cancer may also be a useful cancer to study more due to its high prevalence, but it's also gender-specific and the survival rates are rather high already, so I don't think it would be the first cancer to research on my list.

Re:WTF? just WTF? (1)

Metasquares (555685) | about 6 years ago | (#25261117)

*"Volumes vs. 2D images", that is.

Re:WTF? just WTF? (0)

Anonymous Coward | about 6 years ago | (#25264413)

I actually much prefer working with brains; the organs themselves are more interesting and analyzing the images tends to involve more challenges than 2D mammograms

"I used to think that the brain was the most interesting organ in the body. Then I realized what was telling me this..." - Emo Philips

Re:WTF? just WTF? (0)

Anonymous Coward | about 6 years ago | (#25261137)

I'm not sure where you get your statistics, but according to the American Cancer Society, breast cancer is the most common cancer in women and second only to lung cancer (with smoking the predominant risk factor) in terms of mortality. The primary breast cancer is not the biggest problem; metastasis is the real killer, when the cancer spreads elsewhere in the body.

Re:WTF? just WTF? (2, Funny)

TheLink (130905) | about 6 years ago | (#25264179)

I've got a preventative treatment for prostate cancer.

It's called trans-fatty acids.

Take enough of it and your odds of getting prostate cancer go way down.

There's plenty of scientific evidence to back my claims.

Re:WTF? just WTF? (2, Informative)

Anonymous Coward | about 6 years ago | (#25260851)

I worked on the first clinically useful mammo CAD system (also the first to have FDA approval in the US). Unlike some of the smaller scale (often academic) programs I saw at the time, there was a large degree of domain knowledge (i.e. breast specific) in our codes.

This is pretty typical in other pattern recognition domains as well. You can get a certain distance with fairly generic algorithmic approaches, but to really push the performance boundaries, you need local info and approaches as well.

This paper doesn't actually say anything particularly new relative to our results 10 years ago, but it's a broader study than was available then.

Re:WTF? just WTF? (1)

morgan_greywolf (835522) | about 6 years ago | (#25260891)

Yeah, but can they please choose another acronym? Computer Aided Design has been around a very, very long time. How about Computer Enabled (or Enhanced) Detection (CED) or Computer Facilitated Detection (CFD)?

Re:WTF? just WTF? (2, Interesting)

Metasquares (555685) | about 6 years ago | (#25260919)

Some people like to call it Computer Aided Risk Estimation (CARE), though others use that term for a subfield of CAD. Unfortunately, the terminology has become entrenched by this time.

Re:WTF? just WTF? (1)

dj245 (732906) | about 6 years ago | (#25261023)

There is nothing particularly special about the breast.

3 billion men beg to differ.

Re:WTF? just WTF? (1)

TooMuchToDo (882796) | about 6 years ago | (#25261841)

Well, hopefully imaging can be augmented with this:

By Robert Cooke, Globe Correspondent | September 23, 2004

Harvard researchers at Children's Hospital Boston have developed a simple urine test that appears to detect breast cancer early and accurately track tumor growth.

The findings are still preliminary, but if further research supports them, the test could be a major advance in the effort to catch breast cancer before it turns deadly. The Boston scientists are searching for similar markers in urine for other cancers.

Re:WTF? just WTF? (1)

Sopor42 (1134277) | about 6 years ago | (#25263499)

Interesting comment in that article:

"In a control population of 46 women without cancer, there were seven false positive results, or 15 percent. In these seven women, the amounts of the telltale enzyme were very low, the researchers said."

Is it possible that they are able to detect the cancer even sooner than they realize? How can they be 100% sure that all 46 of these women did not have breast cancer? If we can only be 87% accurate...

Re:WTF? just WTF? (1)

mesterha (110796) | about 6 years ago | (#25261963)

You must have tested your algorithm on some pretty easy breast cancer cases and/or tested on a pretty small sample. They are reporting 87.6% accuracy using a human and their CAD system, while you are getting 96% using just a computer designed for a different type of cancer. Something doesn't add up.

Re:WTF? just WTF? (1)

Metasquares (555685) | about 6 years ago | (#25263417)

Yes, we're still investigating why the accuracy we obtained was so high. We've ruled out overfitting. I suspect it may be because our images are galactograms (i.e., mammograms with contrast injected into the ducts prior to imaging), which can more easily visualize certain types of abnormalities than unenhanced mammograms. I believe they also come from a single scanner, which isn't usually the case in clinical studies (although I wouldn't expect this to have as much impact in mammography as it does in modalities such as MRI).

Our dataset is smaller than the one used here, but not so small: we had 54 images, 13 of which had tumors. We're correcting for the balance of the classes both by iterated random sampling and by ROC analysis, however, so I'm pretty sure that's not a factor.
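For readers unfamiliar with the balancing step mentioned above, here is a rough sketch of correcting for class imbalance by iterated random sampling: repeatedly draw a balanced subsample (all positives plus an equal number of negatives) and average the accuracy over draws. The data and threshold classifier are synthetic, not the poster's actual pipeline.

```python
import random

def balanced_accuracy(pos, neg, classify, rounds=100, seed=0):
    """Average accuracy over repeated balanced subsamples:
    keep every positive, draw an equal number of negatives."""
    rng = random.Random(seed)
    accs = []
    for _ in range(rounds):
        sample = ([(x, 1) for x in pos]
                  + [(x, 0) for x in rng.sample(neg, len(pos))])
        correct = sum(1 for x, y in sample if classify(x) == y)
        accs.append(correct / len(sample))
    return sum(accs) / len(accs)

# Synthetic stand-in data: positive cases score high, negatives low.
pos_scores = [0.9, 0.8, 0.95]
neg_scores = [0.1, 0.2, 0.15, 0.05, 0.3, 0.25]
acc = balanced_accuracy(pos_scores, neg_scores,
                        classify=lambda x: int(x > 0.5))
```

With a balanced subsample, a classifier that simply predicted the majority class could no longer score above 50%, which is the distortion the resampling guards against.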

there seems to be a bit of resistance in principle (1)

Trepidity (597) | about 6 years ago | (#25266339)

One frequently recounted tale is that the first computer system to systematically beat most doctors in diagnosis (in an admittedly narrow domain), the blood-infection diagnosis system MYCIN from Stanford in the early 1970s, was never deployed due in significant part to opposition in principle to having computers do diagnosis. (Another major reason, of course, was that computers at the time were expensive, clunky to interface with, and not already routinely installed at hospitals.)

I suppose the situation may have improved over the past few decades? I do AI research myself (though not bioinformatics-oriented AI), and I'd say probably most people in the field still assume that the medical profession is a bunch of anti-AI luddites, possibly driven by self-interested doctors' organizations who don't like the idea of being "replaced by machines".

Re:WTF? just WTF? (3, Informative)

ColdWetDog (752185) | about 6 years ago | (#25260603)

Why is this news and NOT standard practice already?

Because the computer systems are expensive and it hasn't been clear that they work as well as or better than humans. It's a very complex issue and has been studied for quite some time. In particular, the issue is "false positives", which cause anxiety and often prompt additional, invasive, expensive testing. From a rather quick Google Review of Available Information and Literature (GRAIL):

Although there is some evidence to support the view that prompting can significantly improve an individual radiologist's detection performance [6,7], there are still many unanswered questions about the way in which prompting will affect radiologists in the National Health Service Breast Screening Programme. In particular, studies have shown that the number and distribution of false prompts are critical to the success of the process [5,7]. In most commercial systems, in which sensitivity to abnormalities is seen as an important selling point, false prompts are the price one has to pay. Some systems produce, on average, nearly one false prompt per mammogram. Published research studies on commercial prompting systems [8,9] have not to date demonstrated any statistically significant improvement in radiologists' detection performance within a screening programme in which the vast majority of films are normal. However, such studies have shown that systems can detect a large proportion of subtle lesions using a retrospective review of prior screening films of patients with cancer. Because it is not yet known whether prompting has any detrimental effects on radiologists' performance in population screening mammography, these systems cannot yet be used in a population screening programme.

TFA doesn't even mention the false positive rate, just the fact that it found as many cancers as the double Radiologist method. So keep your pantyhose on. It's something that should get better with time and experience, but it's hard to say that the system is ready for universal application.

Re:WTF? just WTF? (4, Informative)

Metasquares (555685) | about 6 years ago | (#25260705)

Most results are presented via ROC curve (for the uninitiated, this is a curve that plots true positive rate against false positive rate based on some threshold for classifying a lesion), so the FPR can theoretically be reduced if you're willing to lose sensitivity as well.

The thing is, the outcomes are not balanced. The risk of missing a cancer is considered far greater than the risk of returning a false positive, so the algorithms are usually created with sensitivity rather than specificity in mind. In my opinion (and since I work on some of these algorithms, my opinion is important :)), this is as it should be, and we should worry about specificity only if we can keep a comparable level of sensitivity.

In any case, the article Yahoo is sourcing from does mention the specificity (which is 1-false positive rate), and it is encouraging: with CAD, the specificity was 96.9%, vs. 97.4% for double reading. Given that sensitivity was also similar (87.2% vs. 87.7%), this article paints CAD in a very favorable light.
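For the uninitiated, the ROC construction described above can be sketched in a few lines: sweep a decision threshold over classifier scores and record the (false positive rate, true positive rate) pair at each cut. The scores and labels below are toy values, not the study's data.

```python
def roc_points(scores, labels):
    """Sweep a decision threshold over classifier scores and record
    (false positive rate, true positive rate) pairs - an ROC curve."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

# Toy scores: lowering the threshold raises sensitivity (TPR)
# but eventually costs specificity (1 - FPR).
pts = roc_points([0.9, 0.8, 0.6, 0.4, 0.3], [1, 1, 0, 1, 0])
```

Picking the operating point on this curve is exactly the sensitivity-vs-specificity tradeoff discussed above: moving the threshold down buys true positives at the price of false positives.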

Re:false positives (1)

dcw (87098) | about 6 years ago | (#25265587)

In Newfoundland, Canada, a lab that screwed up testing thousands of biopsies as false negatives has been calling the women to inform them they should get re-tested, only to be told that they have died... from cancer their lab tests said they didn't have. I'm sure women who were really negative would have had no problem dealing with the stress of retesting if it meant Grandma/Mom/Sister/Daughter/Girl-Friend would be alive today.

Re:WTF? just WTF? (1)

Ostracus (1354233) | about 6 years ago | (#25260623)

"Saying that computers can be as good as a human at some things is like saying different brands of cow milk taste the same. Why is this not standard now! Computers are more capable at many tasks, especially things that are repetitive and tedious."

And what computers can't do. Cheap labor can.

Re:WTF? just WTF? (1)

Yold (473518) | about 6 years ago | (#25260625)

Medical device companies and universities have been working on this problem for years. It just isn't ready for prime-time usage. People go to school for about a decade to become a pathologist, and replicating that kind of domain knowledge isn't an easy task.

If you are a non-programmer, I understand how it seems like a trivial task to identify abnormal cells in tissue. We can naturally recognize similar/dissimilar cells with our vision, but to do this with a computer requires some serious mathematics, namely using a clustering algorithm.

I am actually researching this problem next semester, particularly how well a certain clustering algorithm works when applied to it, so I guess that's where I'm coming from.

Re:WTF? just WTF? (1)

Metasquares (555685) | about 6 years ago | (#25260863)

Clustering algorithms are generally unsupervised. The domain knowledge isn't usually necessary in clustering so much as verifying that the cluster results make sense, since they're much more difficult to quantify than supervised tasks, such as classification, where your data is already labeled.

Of course, you need to do that too before you can present meaningful results and convince people that a system works.

(*As an aside: although you can certainly use a clustering algorithm to segment, the problem you've identified is technically lesion segmentation rather than clustering. There are a lot of non clustering-based approaches to it as well.)

Re:WTF? just WTF? (1)

Yold (473518) | about 6 years ago | (#25261671)

In all seriousness, how else do you assign a weight between two vertices without domain knowledge?

Re:WTF? just WTF? (1)

Metasquares (555685) | about 6 years ago | (#25261837)

Are you referring to the distance metric used in clustering algorithms? Most simply use Euclidean or cosine distance unless they have specific reason to believe a different metric would work better. It isn't really domain dependent.

Or did you mean how "else could you segment a lesion other than by clustering it in the absence of domain knowledge?" There are numerous ways, but edge detection filtering, fuzzy-connectedness segmentation, and watershed segmentation are the first ones that come to mind. They're all general image processing tools.

That's not to say that you couldn't create a clustering algorithm out of those, or that you couldn't combine domain knowledge with these algorithms (the more you know, the better your model can be). But you'd probably get better performance from dedicated clustering algorithms, such as k-means, over using segmentation algorithms in clustering... and perhaps vice versa.
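As a concrete illustration of the clustering route (a toy sketch, not a real segmentation pipeline), a two-cluster 1-D k-means over pixel intensities separates bright candidate pixels from background using no domain knowledge at all:

```python
def kmeans_1d(values, iters=20):
    """Two-cluster 1-D k-means: start from the extremes, then
    alternate assignment and mean-update until stable."""
    centers = [min(values), max(values)]
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        for v in values:
            i = 0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
            clusters[i].append(v)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

# Toy "pixel intensities": background around 10, a bright region near 205.
pixels = [10, 12, 11, 200, 210, 9, 205, 13]
centers, clusters = kmeans_1d(pixels)
```

Here the distance metric is plain absolute difference, which is the sense in which the metric is not domain dependent; domain knowledge would enter through the features fed in, not the clustering itself.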

Re:WTF? just WTF? (1)

Yold (473518) | about 6 years ago | (#25262007)

From my limited understanding, the algorithm is a dedicated clustering algorithm (it actually incorporates k-means), but it also utilizes a graph-cut algorithm for segmentation.

Re:WTF? just WTF? (1)

timeOday (582209) | about 6 years ago | (#25261455)

People go to school for about a decade to become a pathologist, and replicating that kind of domain knowledge isn't an easy task.

That's the presumption, but it's often false. An algorithm can often outperform an expert's intuition. See the chapter on "Evidence Based Medicine" in the book "Super Crunchers," or the chapter on diagnosing heart attack in Malcolm Gladwell's Blink. Often in these classification tasks, an expert using a statistical tool performs worse than the statistical tool alone, because they override it and degrade its performance. The human mind is very poor at properly weighting a large number of factors and considering all their interactions.

Re:WTF? just WTF? (1)

Yold (473518) | about 6 years ago | (#25261679)

you still need a weight function to determine how similar two vertices are. This is where domain knowledge comes into play. A clustering algorithm without a measure of similarity between points is useless.

Re:WTF? just WTF? (1, Interesting)

Anonymous Coward | about 6 years ago | (#25260677)

Well, let me tell my tale of working as an assistant in a hospital in Germany.

Medical care is first and foremost bureaucratic, and I guess it's no different in other countries. Whether it's socialized medicine's fault or not is another topic.

The fact is that the processes are horribly inefficient - the computer systems for cancer therapy were from the 1970s, and I literally had to hack OpenVMS commands into a terminal with a monochrome green-black display. Then I would have to wait 5 minutes or so to receive ~10 MB of data (CT images).

We had another, "modern" system that was eventually supposed to replace the old one, but it was basically the old OpenVMS code with a buggy Win32 GUI glued on. In some aspects, it was even *worse* than the old one. It would crash randomly and didn't provide shortcuts for the most basic tasks. It was literally so bad that I would not use it for planning my garden - and we were using it for treating cancer patients! Cost of the new system? About 300,000 euros (415,000 dollars).

How can this kind of business go on? In my opinion, because it isn't a real business. Money isn't a big deal. The people who made the decisions, mostly doctors, aren't really qualified and are easily impressed with what the sales guy tells them. They are in their 50s and don't really understand this new fancy computer stuff anyway.

I hope this technology, which is based on image recognition (as far as I can tell from the article), will advance, because the need for it is so obvious and was already obvious to me when I worked there. There is much more to be done in terms of software/engineering in the medical field. I am talking about getting rid of the endless paper trails, not storing images physically in a smelly hangar with a leaky roof, not having to hear a doctor say "I can't decipher this... does this say 'left' or 'right'?"

/anonymous rant

Re:WTF? just WTF? (3, Insightful)

mikael (484) | about 6 years ago | (#25260747)

Because you have to *PROVE* with clinical certainty (ie. research studies) that the computer system is as good as an expert under all conditions. A mammogram is a two dimensional monochrome picture of a three-dimensional object. As you are attempting to detect a life-threatening defect using a piece of software, false alarms can be as devastating to the patient as missed detections, and thus have the same lawsuit risks.

Also, this requires the entire hospital to have a digital patient record management system in order to handle digital X-ray images. Many hospitals and dentists are still using photographic plates and paper records. With a digital system, everything from doctor's notes to X-rays, CAT and MRI scans is automatically placed into the patient's record when it is generated. The resulting data is then accessible to any consultant or doctor involved with the patient. The new system has the advantage that there is no need to wait for X-ray plates to be developed.

Re:WTF? just WTF? (1)

timmarhy (659436) | about 6 years ago | (#25260771)

what about the one CAD missed? i bet that person would be pissed off with you. CAD systems still aren't as good as two pairs of eyes, and when you're talking life and death, that just doesn't cut it.

Re:WTF? just WTF? (2, Insightful)

Metasquares (555685) | about 6 years ago | (#25260907)

Be careful: a slight improvement in the classifier (or acceptance of another false positive or two) and you may have to make that argument in the other direction. The difference in accuracy is not statistically significant for a binary classification problem of that size.

What this article demonstrates is that current state-of-the-art CAD is nearly as good as a second reader. The performance of the radiologist is pretty much fixed; the algorithm's performance is not.
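The significance point is easy to check with a rough two-proportion z-test on the reported counts (a back-of-the-envelope sketch using the pooled standard error):

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """z statistic for the difference of two proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 198/227 cancers found with CAD vs 199/227 with double reading
z = two_prop_z(198, 227, 199, 227)
# |z| comes out around 0.14, nowhere near the ~1.96 needed for p < 0.05
```

One cancer out of 227 moves the sensitivity by about half a percentage point, well inside the noise for a sample that size.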

It is news (1)

sjbe (173966) | about 6 years ago | (#25261193)

Why is this news and NOT standard practice already?

Actually, it is reasonably widely used as a diagnostic aid and becoming more so all the time, at least in the US. I've personally done consulting work in radiology clinics where they use computers to assist diagnosis. That said, it is still a developing technology and every scan is read by a radiologist too (which is just common sense), but these systems do occasionally catch something the radiologist missed, and vice versa. It provides another set of eyes which don't get tired, and that is a useful thing.

Why isn't it more widely used? Several reasons. First, while impressive, these systems are still being tested for clinical utility; so far they are not demonstrably better (meaning they don't catch more tumors under double-blind testing) than a radiologist. But achieving statistical parity is an impressive feat and worthy of mention. Second, these systems are not cheap, and unfortunately, yes, that matters. There is not an infinite amount of money available for healthcare, and in many settings the computer system cannot be justified unless it demonstrably and significantly improves diagnostic capability. I have no doubt this will be standard equipment in time, but it will not happen overnight.

Re:WTF? just WTF? (1)

ceoyoyo (59147) | about 6 years ago | (#25261507)

Because it's not nearly that straightforward. Reliable image recognition in a clinical setting is tricky. Mammograms are a good place to start because they're fairly well controlled. There are rarely any serious artefacts or patient motion. Breast tumors are fairly obvious, and the cost of false positives isn't very high (a biopsy, which is a bit painful but is a simple outpatient procedure done under local anesthetic).

Much of the stuff radiologists do is a LOT harder. Even with the easy stuff, computers are JUST starting to get good enough that we might start thinking about trusting people's lives to them.

Amazing (2, Insightful)

rice_burners_suck (243660) | about 6 years ago | (#25260549)

It's amazing what the technology can do these days. The thought that software can help in the detection of this sort of thing is a testament to the fact that those who build these systems are standing on the shoulders of giants due to the immense amounts of knowledge and experience that have gone into making all parts of this system (besides the part that detects cancer) function. This is at least hundreds of years of engineering in the design and production of the electronics over many iterations, plus the centuries of development of mathematics that had to be developed before electronics were discovered. Now let's get to the software that detects cancer. The people who wrote this software had to be experts both in software and in the relevant medical fields. I think all of this is amazing and we need to be thankful that we live in a time when these sorts of things are possible.

Totally OT but worthwhile IMO: [] - [] - []

'Nuf said

Nip it at the bud. (0)

Anonymous Coward | about 6 years ago | (#25260571)

To be honest, the greater the chances of cancer being found, the better. I've lost many family members to it, and I feel that anything that allows earlier detection, and possibly treatment or prevention, is a good thing.

This is good unless... (2, Insightful)

dgatwood (11270) | about 6 years ago | (#25260579)

...you're #199. If the computer provides that much advantage when combined with a single person, it stands to reason that it would also provide a huge advantage when two people read the charts. Unfortunately, knowing our medical system in the U.S., they'll probably just use this as an excuse to pay only one doctor to read the chart....

Having worked on a different computer diagnosis... (4, Interesting)

CustomDesigned (250089) | about 6 years ago | (#25260843)

system, there is a synergy between man and machine. Our system was for a general practitioner (general diagnosis with symptoms, physical findings, history, tests, etc as input). The computer is somewhat "dumb", but it always checks all the possibilities. The doctor would be looking for the usual stuff, and sometimes miss the more exotic diseases that would turn up from time to time. The machine would flag some exotic condition with a high probability, and the doctor would go "Interesting! I hadn't thought of that, let's check it out." Dr. House probably doesn't need one :-)
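The "dumb but exhaustive" flagging the parent describes can be sketched in a few lines. Everything below is a hypothetical illustration: the disease table, the findings, and the threshold are invented for the example, not taken from the actual system.

```python
# Toy sketch of exhaustive differential-diagnosis flagging.
# The disease/finding table is entirely made up for illustration.
diseases = {
    "influenza":       {"fever", "cough", "myalgia"},
    "strep throat":    {"fever", "sore throat"},
    "acute porphyria": {"abdominal pain", "neuropathy", "dark urine"},  # the "exotic" one
}

def flag_candidates(findings, threshold=0.5):
    """Score every disease against the findings -- common or exotic alike --
    and return those whose known findings overlap the input enough."""
    hits = []
    for name, known in diseases.items():
        score = len(findings & known) / len(known)
        if score >= threshold:
            hits.append((name, round(score, 2)))
    return sorted(hits, key=lambda t: -t[1])

# Unlike a busy human, the machine never "forgets" to consider the rare disease:
print(flag_candidates({"abdominal pain", "dark urine", "fever"}))
# -> [('acute porphyria', 0.67), ('strep throat', 0.5)]
```

The doctor still makes the call; the machine just guarantees the exotic option was at least scored.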

Lupus? (3, Funny)

jamesh (87723) | about 6 years ago | (#25261301)

Dr. House probably doesn't need one :-)

Depends... what's its false positive and false negative rate for Lupus?

Re:This is good unless... (0)

Anonymous Coward | about 6 years ago | (#25260909)

If you're one of the other 198, I'm sure you'll appreciate the reduction in your medical bills.

Re:This is good unless... (1)

PatDev (1344467) | about 6 years ago | (#25267373)

Right, but the point is that this is not (or at least should not be) meant to replace two pairs of eyes looking at the results. The opinions of two human radiologists are still the best. The proposal is that adding a computer to the one pair of eyes currently looking at these pictures will improve the accuracy. That 199th woman would get missed anyway, because she wouldn't have two doctors looking at it; she'd have one doctor looking at it.

As many? Or more? (2, Interesting)

MeanMF (631837) | about 6 years ago | (#25260583)

It doesn't say whether the 198 that CAD found were a subset of the 199 that the two readers found. So would two readers + CAD have found more than 199? Or did both groups miss mostly the same cancers?

been done. (1, Interesting)

Anonymous Coward | about 6 years ago | (#25261625)

I built the original software which was deployed by the NHS around 1998. The systems now are several generations ahead. Both groups would find different 28s, with large amounts of overlap, based on the smaller studies we did. Unfortunately mine was torpedoed by liability issues the first time around. Maybe they've figured out a way around the liability caused by the computer missing a tumor, but probably not.
BTW, mine was open source, for your code-viewing, cancer-analyzing pleasure.

Link (2, Interesting)

Anonymous Coward | about 6 years ago | (#25260593) []

That's the original research. If you read the Yahoo article you'll see the researchers got money from the manufacturer of a computer-aided reading system.

Re:Link (1)

lysergic.acid (845423) | about 6 years ago | (#25261129)

as long as we rely on private sector funded commercial research to advance medical science/technology, we will run into the issue of potential researcher bias as there's an inherent conflict of interest. this is also true of the pharmaceutical industry.

unless we as a society decide that public research is an important area of government funding, problems of slanted studies and deceptive/false research findings will continue to plague the medical field and other areas of research which depend on corporate funding. certain things need to be placed above profit, such as public good. our commerce-driven culture has huge social costs, including the commodification of health, science, and human knowledge.

if this technology & research were backed by public-funding the issue of objectivity wouldn't be clouded by commercial interests. but as it is right now, we'll just have to place faith in the scientific integrity of the researchers, and if we're lucky other independent researchers will come along to verify the results. but there's only so much publicly-funded research and grant money to go around. this doesn't seem like a popular enough issue to receive attention from public research.

Ah yes (2, Informative)

XanC (644172) | about 6 years ago | (#25261183)

Because government-funded research is inherently free of any and all bias. It is never politically motivated, and areas to research and not to research are chosen purely on scientific merit by a government bureaucrat, whose #1 goal is not to increase and extend his own power. </sarcasm>

Seriously though, there are a lot of people who believe exactly that, and even if the commercial research may be biased, at least that's known and out in the open.

Worthless without FP count (0)

Anonymous Coward | about 6 years ago | (#25260617)

What's the false positive rate? If it takes more work to filter bad results than it would take for a second person to study the x-rays, then the whole process is moot.

Why isn't this 99.9%? (0)

linzeal (197905) | about 6 years ago | (#25260899)

Are we in the dark ages of computer-aided pattern recognition for oncology? Get with the program, peoples. There should be an open source program with 6 competing overly complicated algorithms (and one that simply makes no sense), wrapped up in a GUI that requires extensive Emacs training, right now on SourceForge that does this, sheesh. I want to see a prototype by next week.

Re:Why isn't this 99.9%? (2, Informative)

Robert1 (513674) | about 6 years ago | (#25260951)

Because mammography is an extremely non-sensitive test. []

This shows how few women the test can actually benefit - 37 out of 10,000 over all lifetimes. Even worse, the number of women falsely diagnosed positive outstrips the number who actually have cancer by orders of magnitude. This creates a harmful burden on the falsely diagnosed women - creating morbidity and even mortality.

You can make a machine that catches 99.9% of all women who do have breast cancer. Unfortunately, out of the 9,740 who never will/don't have breast cancer, 9,700 will be falsely diagnosed as having it.
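The parent's arithmetic is the classic base-rate effect, which is easy to sanity-check with Bayes-style counting. The sensitivity, specificity, and prevalence below are illustrative assumptions (milder than the parent's figures), not numbers from the study:

```python
# Base-rate sketch: even a very sensitive screen yields mostly false
# positives when the disease is rare. All numbers are illustrative.
population  = 10_000
prevalence  = 260 / 10_000     # assumed cases in the cohort
sensitivity = 0.999            # the hypothetical "99.9%" machine
specificity = 0.90             # assumed; real mammography varies

sick    = population * prevalence          # 260 women
healthy = population - sick                # 9,740 women
true_pos  = sick * sensitivity             # ~259.7 caught
false_pos = healthy * (1 - specificity)    # 974 healthy women flagged

ppv = true_pos / (true_pos + false_pos)    # positive predictive value
print(f"Flagged: {true_pos + false_pos:.0f}, of whom {ppv:.0%} actually have cancer")
```

Even with a generous 90% specificity, roughly four out of five flagged women don't have cancer; with the parent's much worse specificity the ratio is far grimmer.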

Re:Why isn't this 99.9%? (1)

linzeal (197905) | about 6 years ago | (#25260977)

Why? Just because something was one way when humans were doing it doesn't mean an intelligent system over time will become attuned to variables we do not even understand let alone know how to properly implement in an algorithm. I think we can do better than you say.

Re:Why isn't this 99.9%? (1)

ceoyoyo (59147) | about 6 years ago | (#25261515)

Aim higher. You can easily make a machine that flags 100% of the women who have breast cancer. Unfortunately it also gives the nod to 100% of the ones who don't.

Re:Why isn't this 99.9%? (1)

Deanalator (806515) | about 6 years ago | (#25261677)

Seems to me that the graph shows that 50 is too late to start getting mammograms. From what I understand, it is recommended that women start getting mammograms at 35.

Also, isn't the point of the mammogram to detect anomalies before they turn into cancer? The numbers, for whatever reason, seem a bit skewed, in an attempt to get the most disproportionate ratio possible.

Re:Why isn't this 99.9%? (0)

Anonymous Coward | about 6 years ago | (#25265329)

The point of mammography screening is to find cancer while it is small. If breast cancer is found when it is less than 1 cm, the long-term survival is very good. A third of breast cancers are visible retrospectively on the previous year's mammogram. If CAD helps radiologists detect these missed cancers a year or more earlier, the patient has a much better prognosis.

Thank God! (1)

slughead (592713) | about 6 years ago | (#25260995)

As a med student, I couldn't be more pleased about this. Hopefully by the time I get out there, they'll have these standard in hospitals. And, more importantly, part of the standard of care, so when they screw up, I won't be sued.

Have you ever tried to see a small diffuse tumor on an X-ray? It takes a Jedi mind trick just to convince yourself it's there.

X-rays are cheap, fast, and awesome for bones/opaque liquids, but my eyeballs can't see loose tissue worth a crap.

Computer detected, thus... (1)

SEWilco (27983) | about 6 years ago | (#25261081)

"Computer Detection Effective In Spotting Cancer"
I detect computers in my room, thus that means there is cancer in my room? Ick.

Nothing New (1)

pinqkandi (189618) | about 6 years ago | (#25261197)

Computer Aided Diagnosis has been around for years - it's just that now it's becoming more popular.

Don't get me wrong - this is great stuff, just not new.

Re:Nothing New (1)

ceoyoyo (59147) | about 6 years ago | (#25261519)

Real proof that it works better than a trained radiologist in a particular application is new.

Re:Nothing New (0)

Anonymous Coward | about 6 years ago | (#25267539)

That's actually not true. We had demonstrated similar things 10 years ago, at least on a reasonable (but not broad enough) data set for a particular algorithm.

Actually, one of the really interesting things to come out of that sort of work was the amount of variability amongst radiologists. Tricky (politically and otherwise) subject to do clinical work on, though.

Breast cancer detection Radiologists (0)

Anonymous Coward | about 6 years ago | (#25261647)

Posting anonymously, since I once worked for a company in the Bay Area that had machines which detected cancer/carcinomas/calcifications from X-ray films.

The detection is REALLY good. It's actually better than most (>80%?) radiologists, and many radiologists find using it intimidating. In the US it's often used to cover the ass of the radiologist, since usually just one radiologist looks at the film.

Why Does SLASHDOT suck? (0)

Anonymous Coward | about 6 years ago | (#25261711)

Why oh why did they screw up the front page so bad?

To quote one of the most famous social critics of our time "This sucks worse than anything has ever sucked before"

8 eyes in India just as cheap as 2 in the US (1)

Politicus (704035) | about 6 years ago | (#25261731)

Yeah, but how well does it do against 4 experts from India? Hardware and software required to email images will hardly cost as much as this image recognition setup.

Complex privacy vs. Liability only (1)

DrYak (748999) | about 6 years ago | (#25267737)

Hardware and software required to email images will hardly cost as much as this image recognition setup.

Guaranteeing privacy will be much more complicated.
The channel between the US radiology machines and the Indian doctor's office has to be protected against unwanted intruders. Email doesn't cut it, as it is about as secure as a postcard.

Sadly, slapping a GPG plugin onto Thunderbird won't do the trick either, because:
- I'm pretty sure the legislation is rather complex and will require solutions that have been explicitly approved and certified (someone needs to get the FDA and similar organisations in other countries to approve the encryption plugin, and that will cost money).
- Depending on the data source, the DICOM files (the medical imaging standard format) can get pretty huge, beyond the tiny limits on e-mail attachment size (mammograms are very high resolution, and that's only 2D; other types of images are 3D and can weigh in at gigabytes).

This will probably require establishing encrypted VPNs:
- probably using a small set of approved VPN boxes,
- and involving an administrative nightmare at both ends to obtain proper clearance (this will probably end up with the US hospital setting up a separate PACS server, isolated from the rest of the network, for security reasons).
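On the attachment-size point, some rough arithmetic shows why plain e-mail is a poor transport. The image dimensions below are assumptions for illustration, not anything mandated by DICOM:

```python
# Back-of-envelope size of one uncompressed screening mammography study.
# Dimensions are illustrative assumptions, not a DICOM requirement.
views = 4                      # two views per breast
width, height = 4000, 5000     # pixels per view (high-resolution detector)
bytes_per_pixel = 2            # 12-16 bit grayscale stored in 16 bits

study_bytes = views * width * height * bytes_per_pixel
print(f"{study_bytes / 2**20:.0f} MiB per study")  # ~153 MiB, far over typical mail limits
```

And that's for 2D mammography; 3D modalities multiply this by the number of slices.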

And then there's the question of the compliance of the equipment used by the Indian doctors (at least the open-source OsiriX for Mac OS X is currently going through FDA approval).

And that's only the technical part. Then there's the whole question of legal liability, complicated by the fact that the doctors who made the diagnosis aren't even living under the same jurisdiction as the rest (hospital, patients and so on).

Whereas, with the CAD software, the situation is much simpler:
- Only the problem of liability remains.
- And as the software is operated by a doctor, you can even bypass the liability by saying that the final diagnostic decision is the doctor's, so he's liable for whatever is decided; the software is only a helping tool (after all, it's called Computer *Assisted* Diagnosis).

Last but not least, there's the problem of different standards in medical education and training. The Indian doctors' training may not be approved by the US.

Indian doctors learn standard scientific medicine.
Not intelligently-designed creationist bible-compliant medicine ~ :-P

As well as, or nearly as well as (1)

dwater (72834) | about 6 years ago | (#25261991)

>In a randomized study of 31,000 women, researchers found that a single expert aided by a computer does as well as two pairs of eyes.

"As well as"?

> CAD spotted nearly

or "nearly as well as"?

> the same number of cancers, 198 out of 227, compared to 199 for the two readers.

Ah, 198 vs 199 - it seems their first statement is not accurate. I wonder why people keep doing this - they use numbers accurately enough, but use language inaccurately.

Re:As well as, or nearly as well as (1)

justinlee37 (993373) | about 6 years ago | (#25262705)

You obviously flunked statistics.

Re:As well as, or nearly as well as (1)

dwater (72834) | about 6 years ago | (#25263547)

What has this to do with statistics, apart from 'lies, damned lies, and statistics'?

I did pass statistics, btw, though it was some time ago. I don't recall learning that 'as well as' means the same as '198 equals 199' (iirc). If it were as good as, it would be 199.

* I can't easily look back at the original numbers, so they might not have been as above.

Re:As well as, or nearly as well as (1)

justinlee37 (993373) | about 6 years ago | (#25267697)

You flunked because you don't seem to realize that sample data is never perfect; you can't just take a random sample and assume it is 100% representative of the entire population. It always deviates slightly from the entire population's true mean.

It's entirely possible that another random sample could result in, say, an average score of 200 for the computer and an average score of 199 for the two-person reviewers.

Would you make the conclusion that, based on that sample, the computer is "better" than the reviewers? Or would you say "hey, those numbers are really fucking close," and chalk up the difference to random sample variance?

Haven't you ever heard of "confidence intervals?"

A difference of 1 is not statistically significant. As far as fucking science is concerned, the study supports the hypothesis that the computer is on par with the human reviewers.

Go back to school or stay out of the laboratory.
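For what it's worth, the parent's claim checks out on the reported counts. Below is a quick two-proportion z-test sketch; it assumes independent samples, which isn't strictly true for two readings of the same films, but it illustrates the point about sampling variance:

```python
import math

# Two-proportion z-test on the reported detection counts:
# 198/227 cancers found (CAD + one reader) vs 199/227 (two readers).
n = 227
p1, p2 = 198 / n, 199 / n
pooled = (198 + 199) / (2 * n)
se = math.sqrt(pooled * (1 - pooled) * (2 / n))
z = (p1 - p2) / se
print(f"z = {z:.2f}")  # well inside +/-1.96, so no significant difference at the 5% level
```

A z-score this close to zero is exactly the "really fucking close" the parent describes: the one-cancer gap is indistinguishable from sampling noise.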

Re:As well as, or nearly as well as (1)

dwater (72834) | about 6 years ago | (#25269905)

You're just saying that it 'could be' 'as good as'.

Just because it could be doesn't make it so, and the numbers don't make it so either, statistically or otherwise.

Perhaps *you* should go back to school and learn some English.

Not good enough. (1)

srothroc (733160) | about 6 years ago | (#25262151)

I personally think that when you're dealing with something like cancer, even if the computer-assisted detection is ALMOST as good as two humans, it's still not good enough to be used on a regular basis.

It's all well and good to say that it's almost as good as two humans together, but I'm sure the couple dozen people who slip through the cracks would have something to say to the contrary.

I mean, imagine if you had two bullet-proof vests -- one with multiple layers that let bullets through 23 out of 10,000 times, and one with a lightweight, high-tech material that let bullets through 89 out of 10,000 times. Would you really want to go with the latter?

Re:Not good enough. (0)

Anonymous Coward | about 6 years ago | (#25264683)

Quite probably you would want the lighter-weight (and more flexible) vest. Though of course this would depend upon the difference in weight/flexibility... and of course the type of bullet being fired at it (here's a hint: most vests won't stop a heavy rifle round).

Also worth remembering that a vest won't save you from a leg shot, which will disable or kill you *very* fast if the femoral artery is breached.

Anyway... the real question should be regarding the intersection between the 198 and 199 sets. If the computer program is good at picking up the things that humans miss, then great: it should certainly find its way into standard medical procedures.

Re:Not good enough. (0)

Anonymous Coward | about 6 years ago | (#25265411)

The current standard of care in the U.S. is a single reader, not the double reading that was used for comparison in this study. Many studies have shown that double reading is better than single reading, so if a single reader with a computer can achieve statistically the same results as double reading, CAD is good enough to be used on a regular basis.

Re:Not good enough. (1)

dcw (87098) | about 6 years ago | (#25266589)

I mean, imagine if you had two bullet-proof vests -- one with multiple layers that let bullets through 23 out of 10,000 times, and one with a lightweight, high-tech material that let bullets through 89 out of 10,000 times. Would you really want to go with the latter?

That would depend on how much the multi-layer one hinders my ability to complete my mission. If gear weight and space are no problem, and I just have to stand around, hell, give me two of the bulky ones! 11.5/10,000 sounds even better. But if I can't get my ass out of harm's way wearing it or the high-tech one, just paint me with woad and give me a loincloth.

Breast Specific Gamma Imaging (1)

Eowaennor (527108) | about 6 years ago | (#25265461)

A mammogram produces too many false negatives/positives. Past some point it won't matter whether it's being looked at by a human or a machine.

The future of breast cancer detection is gamma imaging. See [] for comparisons between BSGI and X-ray.

Re:Breast Specific Gamma Imaging (0)

Anonymous Coward | about 6 years ago | (#25265673)

I think BSGI may be useful for diagnostic workup, but it won't replace mammography for screening. CAD has its greatest potential in aiding in reading screening mammograms of asymptomatic women.

Wasn't this already done somehow? (1)

Raliaga (1027504) | about 6 years ago | (#25266321)

I mean, something like this []?

what about one set of eyes? (1)

dcw (87098) | about 6 years ago | (#25266681)

Anybody see what the detection rate is with only one MD looking at the images? The article seems to be missing that bit of data. I'm willing to bet it is statistically lower than the two-MD system.

This would be proof that four eyes are better than two! =8-]

Seeing 'CAD' made me think... (1)

davidsyes (765062) | about 6 years ago | (#25267053)

'AutoBREAST'... But, i also instantly thought of Star Trek's medical tricorder, too. (captcha: defects)

weeding out trivial cases (1)

azery (865903) | about 6 years ago | (#25270239)

Software is already used for similar activities, but in another way. With medical images, it is often difficult to decide that something is *not* there. It is easier to see, e.g., a tumor than to decide that no tumor, not even a small one, is present. So in some laboratories they use a computer to judge the images first. The computer flags only the cases where it found a tumor, and also indicates the place. The doctor then only has to do a quick verification: he knows where to look and what to look for, because the computer has given him the required information. That leaves the medical expert more time to judge the difficult cases in detail: all the cases where the computer did not find anything. Tests have shown that with this system, fewer experts can process more images with the same level of accuracy.
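A workflow like the one described here can be sketched as a simple routing function. The scores, threshold, and data below are hypothetical; real CAD systems report per-region findings rather than a single study-level score:

```python
# Toy triage: the computer screens first; flagged studies go to the
# expert with the suspected location attached, while unflagged studies
# go to the slower "prove absence" queue. All values are made up.
from dataclasses import dataclass

@dataclass
class Study:
    patient: str
    cad_score: float               # hypothetical suspicion score in [0, 1]
    suspected_region: str = ""

def triage(studies, threshold=0.6):
    quick_verify, detailed_review = [], []
    for s in studies:
        if s.cad_score >= threshold:
            quick_verify.append(s)     # expert knows where to look
        else:
            detailed_review.append(s)  # expert must rule out a missed tumor
    return quick_verify, detailed_review

studies = [
    Study("A", 0.91, "upper left quadrant"),
    Study("B", 0.12),
    Study("C", 0.74, "near chest wall"),
]
flagged, unflagged = triage(studies)
print([s.patient for s in flagged])    # ['A', 'C']
```

The design choice is the same one the comment describes: the cheap, exhaustive check runs on everything, and the scarce expert attention is routed to where it is most informative.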