Speaking as an MD, I think it is clear to all of us who understand even a little about computers and tech that machine learning is the way to go. Medicine is ideally suited to ML, and in time it will absolutely shine in that domain.
Now for people eagerly awaiting the MDs' downfall, I think you are getting ahead of yourselves a bit. We all tend to believe in what we do, and I agree that expert systems will replace doctor judgement in well-defined, selected applications in the decade to come. But thinking that the whole profession will be hit as hard as factory workers, with lower wages and supervision-only roles, is not realistic. What will be lacking is the automation of data collection: people underestimate by far the technical, legal, and ethical difficulties in getting the appropriate feedback to make ML systems effective. I firmly believe in reinforcement learning, but as long as the feedback loop remains insufficient, doctors will prevail, highly-paid jerks or not.
I am an anesthesiologist myself, a profession most people (me included) think of as a perfect use case for these technologies, and they wonder why we haven't been replaced already. The reality is that the job is currently far beyond what an isolated system could do. We already have trouble making cars follow the right lane in non-standard settings. I hope people realize that in each and every medical field, the number and complexity of factors to control is far greater than staying in the right lane.
People who run the medical system have no sense of technology. They cannot even envision the requirements for machines to become effective in medicine. That is why we are seeing quite a lot of effective isolated systems pop up, but we won't be seeing fully integrated, doctor-replacing systems for a long time. That will require a new generation of clinical practitioners who understand how to truly open the field up to machines.
Recently, my dad was sick with a pretty bad cough. Like, so bad that he couldn't speak without coughing. He fainted twice from minute-long coughing fits, one of those times hitting his head on the stove on the way down, leaving a deep cut and blood everywhere.
He went to at least three different doctors. He got a scan of his chest. Everything looked clear, and all of the doctors were stumped. Things were pretty bad.
I mentioned this to a UCSF resident friend, and her immediate response was "Oh, is he on <some blood pressure medication I forget the name of>?" I was like, uh, let me see. Called my mom, she checked, and, lo and behold, he was on it. So his doctors took him off it and within a week he was better.
This coughing wasn't some obscure side effect of the medication she knew through sheer brilliance: it's a side effect that's been widely known since the 1970's. Hell, it was on the drug's Wikipedia page.
So there are a couple of morals you could take from this. One would be: wow, doctors are smart to be able to diagnose an issue based on a single symptom and some reasonable assumptions about a patient's background! The other is that the median doctor is pretty worthless, that spending tens of thousands of dollars gives you no guarantee you'll see someone competent, and that a medical system that relies on you grabbing drinks with a UCSF resident to get good results is fundamentally broken.
Machine learning and expert systems don't have to be as awesome as the best doctors to be valuable. They don't even need to be better than competent doctors. They just need to offer a bare level of competence to provide a huge amount of value.
(In my opinion) Doctors have done a good job of protecting themselves from technological disruption by only allowing it where it is profitable to them (billing) and not where it would threaten them (diagnosis).
Even the concept of electronic medical records has been resisted, and it is currently poorly implemented. The gov't literally had to bribe doctors to convert to electronic systems, and most seemed to go out and get the least poorly built products on the market.
I'm not trying to paint them as evil. Heck, even NASA scientists were suspicious of computers taking their jobs. I see this in other fields all the time.
Seeing the potential alongside the current state of affairs makes me kind of sad. I want to live in the world where a balance has been reached: where my medical record is a smart piece of AI that checks whether I lost those 10 pounds I promised and loops in the doctor when my flu symptoms linger too long. I want it to eliminate the hassle for doctors, hospitals, and insurance companies as well as patients, and I think in the process it can drive costs down and raise quality of life.
> What will be lacking is the automation of data collection: people underestimate by far the technical, legal, and ethical difficulties in getting the appropriate feedback to make ML systems effective. I firmly believe in reinforcement learning, but as long as the feedback loop remains insufficient, doctors will prevail, highly-paid jerks or not.
We're already seeing a significant rise in the role of nurse practitioners at the front line of medicine. Today, they gather the data and hand it to an MD, so handing it off to an ML system would be straightforward.
This is the second major study applying deep learning to medicine, after Google Brain's paper in JAMA in December, and there are several more in the pipeline.
If you've developed expertise in deep learning and want to apply your skills to healthcare in a startup... please email me: [email protected]. My co-founder and I are ex-Google machine learning engineers, and we've published work at a NIPS workshop showing you can detect abnormal heart rhythms, high blood pressure, and even diabetes from wearable data alone. We're working on medical journal publications now based on an N=10,000 study with UCSF Cardiology.
Your skills can really make a difference in people's lives. The time is now.
Indeed, I second that the time is now. There are several imaging modalities and organs that provide diagnostic information about a variety of human functions and diseases. For example, the retina is a unique organ that allows imaging of the central nervous system, the cardiovascular system, and the microvasculature without any incision. This lets us detect, screen for, monitor, and predict risk for diseases such as diabetic retinopathy, macular degeneration, and glaucoma, and even cardiovascular risk, Alzheimer's, and stroke.
If this interests you and you have developed the expertise, there is another startup opportunity to explore -- please email me at [email protected]. We are a bunch of machine learning PhDs, developing, publishing, and commercializing deep learning algorithms for disease/risk identification from retinal photography. We play with millions of retinal images, and it's a lot of fun!
I heard before that a problem with heart monitoring for ML is that most of the available samples are from abnormal hearts and false alarms rather than normal ones. That is, there's not enough published data to establish a baseline for high accuracy across the general population. Most claims like this cite rules protecting medical records/data. I never got to ask a specialist in the field to confirm or reject that claim.
Honestly, I can't wait for deep learning and computational methods to dethrone doctors and upend the medical profession. In the next five years, expect a computer to be able to predict most diseases a lot better than doctors can -- and with none of the attitude, high cost, or inconvenience.
Mind you, I'm not talking about researchers, who will always have a job. I'm talking about practitioners. I've had a medical condition from birth and have had to deal with my share of doctors. Outside of the insurance system, they are easily the most unpleasant part of the whole ordeal. There are some gems, but most you will encounter are pompous, arrogant, and "commanding": when they enter a room they are flanked by "residents" and "assistants" and give off an air of superiority that really just comes from rote experience. The whole thing comes off more as a performance than anything else. Worse, they often get mad when you question them or ask them to explain themselves, or how they arrived at a conclusion.
Good luck finding work when an algorithm can do your job better than you. It's only a matter of time.
I feel the exact opposite: in my treatment for prostate cancer, the human interaction with doctors was a hugely positive experience for me. Interpretation of biopsies and inspection of cancer images were part of that process, and I'm sure machine vision algorithms could help in this area. However, even if the classification of the cancer cells improves, the role of the doctor guiding the patient through the right treatment process remains something I would not want to turn over to an algorithm.
I have also encountered doctors I did not like but fortunately for me I had a choice where to go. Maybe machine learning should focus on weeding out unpopular practitioners instead.
Not so sure we are going to see automation completely replace doctors. Just look at all the problems, both technical and social, in the self-driving-car arena. And a lot more people are qualified to drive than to practice medicine.
Instead, I think we'll see more powerful diagnostic tools at physicians' disposal. Doctors will still play an important role in treating their patients and will be more effective because they'll have powerful tools assisting them.
But to your point, will technology help patients feel more empowered in their medical encounters? Or to get more value out of their interactions with their doctors? https://www.remedymedical.com/ seems to think their platform will do just that for primary care / telemedicine visits.
I've been harbouring thoughts similar to yours for quite some time. I live in Canada, where healthcare is supposedly "free" (it's not; you get taxed like crazy here!). The treatment you receive here is subpar, in my opinion. I'm guessing the declining quality of treatment is most likely attributable to an increasing population and the limited number of quality doctors available to treat these people.
I would assume these challenges aren't unique to Canada, and from an outsider's perspective the medical system in the US seems worse (maybe not if you're rich).
I never thought doctors would be hit so soon by the automation/AI crisis, but this article challenges that thought. However, given the state of robotics at the moment, surgeons, for example, aren't going anywhere for a good decade, I'd estimate, and it's not like people can, in the mole example, self-remove a chunk of it for biopsy with 99% accuracy. Then there's treatment that has to be done at a hospital under the supervision of professionals, etc.
What I find people miss here is that computers will not help you heal your boo-boos. Say you get stabbed in the face with a knife. You need a doctor to help you. Or to give birth.
Honest question: if you have had a bad experience with a healthcare system that doesn't deliver the sort of care that you want, what makes you think that some machine learning based implementation with a human spokesperson is going to be better? How will you question the results of the algorithm?
There are not that many doctors in the pipeline to meet the demands of the future, especially with longer lifespans and demographic booms in places like Africa. These are good complementary tools for medical care, with the doctor "quarterbacking" while all the "blocking and tackling" is automated.
I don't have much experience with specialists, but my experience with GPs has been awful: for every good one you meet, you have to deal with five terrible ones. I will never give up my GP until he retires or an AI replaces him, because he is probably the first doctor since my childhood with an actual air of competence around him.
I feel bad for the rest of the people who visit my clinic and have to deal with any of the other garbage practitioners who usually fall into one of two buckets. Foreign (mainly Indian) hacks with zero medical knowledge and bedside manners, and greedy yuppie strivers with a knack for memorization but terrible analytical ability.
Most doctors don't deserve their inflated salaries or social status, and I hope technology soon brings them back down to Earth; they have been able to skate by for far too long.
You should always ask your doctor for a treatment plan, i.e., a structured approach to curing your condition. Make them plan a few steps ahead. And question that plan.
Systems that outperform doctors in some specific area of diagnostics aren't new. One of the earliest examples is Mycin [1], which was also developed at Stanford, but forty-something years ago. It never went into production because of practical issues that had nothing to do with its accuracy. It's interesting that all of those "practical issues" are no longer relevant, and yet we still don't see widespread use of similar software.
I think now really is different. Part of that is algorithmic advances like deep learning, as shown in this Nature paper.
An even larger part of it is that the financial incentives are flipping due to value-based care. In 1979, a hospital that implemented an expert system for accurate diagnosis might, paradoxically, have seen its revenue fall. Nowadays, with ACOs, risk-based contracting, and bundled payments, the financial incentives create tailwinds rather than headwinds for large-scale adoption of AI in medicine.
Contrary to popular belief, the medical system can absorb new techniques very quickly--when incentives are aligned. And they are now becoming aligned.
You're comparing apples and oranges here. Mycin is an expert system dealing with changing rulesets and A LOT of manual teaching. The current paper deals with a computer discerning visual patterns on its own.
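The contrast can be sketched in a few lines (a toy illustration; the rule, features, and labels below are invented, not taken from Mycin or the paper):

```python
# Expert-system side: knowledge lives in hand-written rules, each of
# which had to be elicited from a human expert and maintained by hand.
def expert_rule(findings):
    if findings.get("asymmetric") and findings.get("irregular_border"):
        return "suspicious"
    return "benign"

# Learned side: the decision rule is induced from labeled examples.
# A trivial 1-nearest-neighbour classifier stands in for the deep net.
def nearest_neighbor(train, query):
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda ex: sq_dist(ex[0], query))[1]

train = [((1.0, 1.0), "suspicious"), ((0.0, 0.0), "benign")]
print(nearest_neighbor(train, (0.9, 0.8)))  # prints: suspicious
```

Adding knowledge to the first system means writing and debugging more rules; adding knowledge to the second means collecting more labeled images.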
I hope that someday soon we'll develop systems that let us "ask" an ML algorithm what factors led to a decision (a diagnosis, in this case).
It would be interesting to compare that with the current state of the art in the field, and see if ML can contribute new scientific/medical theory as well.
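One crude way to approximate "which factors mattered" for a black-box model is to perturb each input feature and watch how often the decision flips; a minimal sketch, assuming nothing about the model beyond a predict function (the toy model and feature values are invented):

```python
import random

def feature_influence(predict, example, n_trials=200, seed=0):
    """For each feature, measure how often randomly resampling it flips
    the model's prediction. A toy stand-in for real explanation methods
    (saliency maps, permutation importance, LIME, and so on)."""
    rng = random.Random(seed)
    base = predict(example)
    influence = []
    for i in range(len(example)):
        flips = 0
        for _ in range(n_trials):
            perturbed = list(example)
            perturbed[i] = rng.random()  # resample feature i in [0, 1)
            if predict(perturbed) != base:
                flips += 1
        influence.append(flips / n_trials)
    return influence

# Toy "diagnosis" model that only ever looks at feature 0.
model = lambda x: x[0] > 0.5
print(feature_influence(model, [0.9, 0.2, 0.4]))  # feature 0 dominates
```

Real deep-learning explanation methods are far more sophisticated, but the question they answer is the same one posed above.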
This reminds me of a talk I saw in the 1990s about wavelet-based algorithms for detecting tumors in mammograms.
The algorithms found most of the tumors that humans had missed, with similar false positive rates. BUT humans refused to work with the software!
The problem was that the software was very, very good at catching tumors in the easy-to-read areas of the breast, and had lots of false positives in the more complicated areas. Humans spent most of their effort on the complicated areas. Every tumor the software found that the human didn't simply felt like the human hadn't paid attention; it was obvious once you looked at it. The mistakes felt like stupid typos do to a programmer. But the software constantly screwed up where you needed skill. The result is that humans quickly learned not to trust the software.
This is very true and directly related to my research. (I work at a company developing software to interpret EEG data.) There's a huge difference between an algorithm with a low error rate that makes mistakes seemingly at random and an algorithm with a somewhat higher error rate whose mistakes are at least comprehensible. A doctor is much more likely to trust the latter than the former. Almost as important as developing a detector with a low false positive rate is developing a detector that can figure out when the problem is too hard so it knows not to even try. (And it seems that this problem is just as hard.)
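That "knows not to even try" behaviour is the classic reject option: only emit a label when the model's confidence clears a threshold, and defer everything else to a human reader. A minimal sketch (the thresholds are illustrative, not from any real product):

```python
def classify_with_reject(prob_abnormal, lower=0.2, upper=0.8):
    """Abstaining classifier: answer only when confident, otherwise
    hand the record to a human. Widening the (lower, upper) band
    trades coverage for trustworthiness."""
    if prob_abnormal >= upper:
        return "abnormal"
    if prob_abnormal <= lower:
        return "normal"
    return "defer to human"

print(classify_with_reject(0.95))  # abnormal
print(classify_with_reject(0.50))  # defer to human
```

The hard research problem is producing a confidence score that is actually calibrated, so the band means what it claims to mean.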
One of the things we do is perform a Turing test of sorts where we test if the performance of our detector is statistically indistinguishable from a human. (In fact, we actually have a contest running right now where we give you 10 EEG records, some marked by humans, some marked by our software, and if you can figure out which were marked by which we'll donate $1000 to the American Epilepsy Society.)
Unfortunately the paper is in Nature, paywalled, rather than on arXiv, and the data/code/model/weights are inaccessible. While publishing in Science/Nature/NEJM/JAMA is definitely the right approach for deep learning to gain validity in the medical community, faster progress could be made with a more open platform, with constant, real-time validation across more data, more medical centers, and more clinics. The reason progress in DL has been so breathtaking is in no small part the culture of openness and sharing.
This is interesting and impressive work; however, I noticed that they compared the algorithm's performance to dermatologists looking at a photo of a skin lesion. This seems like a straw-man comparison, because any dermatologist would normally be looking directly at a patient and would benefit from a 3D view, touch, pain reception, etc. I realize this was the only feasible way to conduct the study, but it still means the study doesn't show that an algorithm looking at a photo can match the performance of a dermatologist examining a patient in person.
The paper ends with "deep learning is agnostic to the type of image data used and could be adapted to other specialties, including ophthalmology, otolaryngology, radiology and pathology."
As someone with two melanomas under my belt (and more than 1,000 moles), what I really want is the ability to do a mass scan of my body, down at the cellular level, not just looking at the moles on the surface.
I am lucky enough to have Memorial Sloan Kettering as my hospital, and none other than Dr. Marghoob, one of the leading experts. I actually have a scan of my body made with 50 or so high-definition cameras (I am literally a 3D model in blue speedos with a white net on my head).
They have a new system that can look at the cellular level without doing a biopsy, and they actually found my melanoma before the biopsy (i.e., they knew it was melanoma before they did the biopsy). But it's really a cumbersome process, and it took 6 experts studying and working to position that laser properly.
So the real challenge today is how we get the data into the system.
Are you sure about that? Playing devil's advocate here: we have plenty of examples of scientists jumping the gun without peer review or a rigorous follow-up testing process, especially in medicine. The Alzheimer's 40 Hz flickering-light example is a pretty good one: some scientists got it working in mice, but we don't know what side effects it could have in humans. Maybe none, and that'd be great! Maybe it causes schizophrenia; who knows? We have no way of knowing yet, just very, very educated guesses.
I say when it comes to medicine, err on the side of caution. Obviously a diagnosis app isn't too dangerous: worst-case scenario, the app gives you a positive diagnosis, so you go to the doctor's, they take a sample, and they find the growth not to be cancerous. No harm, no foul. But other ideas could be more dangerous.
I'd love that. I'm still irritated that the FDA forced 23andme to remove the Alzheimer's/Parkinson's report (though I completely understand why). Now I have to run all my 23andme data through promethease, which is significantly more complicated to understand.
This is not hard to imagine at all. I know that there must be some absolutely excellent doctors out there, but I don't trust the bottom 80% of doctors much at all, and honestly would rather have an algorithm most of the time, especially starting off. The lack of robust consumer level 'medical doctor apps' is one of the biggest mysteries to me.
There's an app used by over a million doctors called "Figure 1" that allows them to share medical images for crowdsourced diagnosis and treatment of rare cases.
I wonder when we will get to a point where machine learning can help there?
I read the headline and wondered how ML could train on the difference between a new dermatologist and a seasoned one. Cancer I get: it looks totally different from non-cancerous skin :)
That said, pulling this off is one of the best ML applications to date. Recognizing cats or scenery doesn't seem nearly as useful.
Great results! Deep learning has been gaining traction in other areas of medicine as well.
One such task is lung cancer nodule detection from CT scans. A paper I recently co-authored applied many different architectures to this detection and achieved very good results. (https://arxiv.org/pdf/1612.08012.pdf)
The best combination of systems detected cancer nodules which were not even found by four experienced thoracic radiologists.
Are you participating in the current Data Science Bowl on CT lung cancer detection [0]? The prize pool of $1,000,000 seems quite attractive, especially if you've recently developed new state-of-the-art ML models for CT lung cancer detection. The only somewhat strange aspect of this competition (at least to me) is that it does not include locality annotations: they only provide labels of cancer/no cancer per patient...
Dermatologist here. Most skin cancer diagnosis is relatively straightforward, and if the lesion is suspicious it will require a biopsy to establish the subtype of the cancer and plan further treatment. There is no reason why this initial visual diagnosis cannot be performed at the same level as a dermatologist by a machine, or indeed by a non-doctor trained intensively for a relatively short period to interpret photographs.
The difficulty is two-fold. Firstly, liability: a dermatologist aims not to miss a single case of melanoma in the tens of thousands of patients seen over their career. If this algorithm is used widely in millions of patients, then either the sensitivity will have to be higher, with more biopsies performed, or there will have to be an acceptable rate of missed melanoma diagnoses.
Secondly, edge cases, such as moles that are slightly atypical. In those scenarios there is no way I would be comfortable making an assessment from a photograph. Of course, a machine could also gather further information via methods such as in vivo confocal microscopy, but in that case the cost savings are likely to be negligible.
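The sensitivity-versus-biopsy-rate trade-off in the first point is just a choice of operating threshold on the model's score. A sketch of picking the threshold that guarantees a target sensitivity on known melanomas (the scores are made-up illustrative numbers, not from the paper):

```python
import math

def threshold_for_sensitivity(malignant_scores, target_sensitivity):
    """Return the highest score threshold that still flags at least
    target_sensitivity of the biopsy-proven malignant cases. Lowering
    the threshold misses fewer melanomas but sends more benign
    lesions to biopsy."""
    ranked = sorted(malignant_scores, reverse=True)
    k = max(1, math.ceil(target_sensitivity * len(ranked)))
    return ranked[k - 1]

scores = [0.95, 0.9, 0.8, 0.6, 0.3]  # model scores on known melanomas
print(threshold_for_sensitivity(scores, 0.99))  # 0.3: flag nearly everything
print(threshold_for_sensitivity(scores, 0.60))  # 0.8: fewer biopsies, more misses
```

The algorithm doesn't resolve the liability question; it just makes the trade-off an explicit, tunable number instead of an implicit clinical habit.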
Can someone clarify for me how the training and testing sets were constructed? One problem is that cancerous and benign skin are unbalanced in a representative population. How was this imbalance handled in testing? How was the testing set constructed? And so on.
For each of the 3 tests, the test-set images were classified by biopsy; images were randomly selected, then blurry ones were filtered out by a separate dermatologist. The benign:malignant ratios were 70:65, 97:33, and 40:71 respectively.
These close-to-even ratios make for a more powerful test of classification. I would assume that the fact that these test samples have biopsy data means that some dermatologist thought they might be malignant (unnecessary medical procedures are unethical). This might bias the test toward samples that are difficult for humans to diagnose.
Separating these into binary classifications of specific tumor types makes the task easier than classifying among every possible tumor type (as a dermatologist does).
Still, the claims this paper makes are very promising. A lot of the training data was classified by dermatologists, not by biopsy. Using more biopsy data could lead to even better classification, as could improvements to the model.
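On the construction question: the standard way to keep a rare malignant class represented in both halves is a stratified split, roughly like this (a generic sketch of common practice, not the paper's actual procedure):

```python
import random

def stratified_split(examples, labels, test_frac=0.3, seed=0):
    """Split the data while preserving each class's proportion in
    both halves, so a rare class can't end up absent from the test
    set by bad luck."""
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(examples, labels):
        by_class.setdefault(y, []).append(x)
    train, test = [], []
    for y, xs in sorted(by_class.items()):
        rng.shuffle(xs)
        n_test = round(test_frac * len(xs))
        test += [(x, y) for x in xs[:n_test]]
        train += [(x, y) for x in xs[n_test:]]
    return train, test

# 10 malignant vs 90 benign: the 30% test set keeps the 1:9 ratio.
labels = ["malignant"] * 10 + ["benign"] * 90
train, test = stratified_split(list(range(100)), labels)
print(sum(1 for _, y in test if y == "malignant"))  # prints: 3
```

Note this preserves the ratio of the collected dataset, not the real-world prevalence, which is exactly the bias discussed above.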
One major, major advantage that medical imaging has for deep learning is the similarity of each data point, especially the 'background data.' For instance, human brains typically look very similar across individuals (up to scanning-parameter differences), except in the abnormalities, which are often precisely what you want to highlight.
As an example, I recently trained a neural network to perform a useful task for our lab using 3 (!) hand-labeled brains.
It's insane that you were able to get reasonable results with such a tiny dataset.
I am learning machine learning right now, and I find working with datasets of fewer than 100 examples quite difficult.
It seems counterintuitive when you first think about it, but having way more data actually makes fitting the model much easier, as there is granularity that can be used to get feedback on adjustments to the structure of the model.
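Part of why three hand-labeled brains can be enough: one annotated volume is not one training example. Sliding a patch window over a single labeled slice already yields hundreds of examples (a generic sketch of the patch-based approach, not that lab's actual pipeline):

```python
def extract_patches(image, patch=8, stride=4):
    """Slide a patch x patch window over a 2-D image (a list of
    rows), producing one training example per window position."""
    rows, cols = len(image), len(image[0])
    patches = []
    for r in range(0, rows - patch + 1, stride):
        for c in range(0, cols - patch + 1, stride):
            patches.append([row[c:c + patch] for row in image[r:r + patch]])
    return patches

slice_ = [[0] * 64 for _ in range(64)]   # one 64x64 labeled slice
print(len(extract_patches(slice_)))      # prints: 225
```

Add flips, rotations, and intensity jitter on top of that and a handful of volumes turns into a usable training set, precisely because the background anatomy is so consistent.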
Diagnosis based on image recognition is something machines are already very good at, even without recent deep learning techniques (although I am sure they will help).
For instance in college I worked with a radiologist to write an image-recognition program to identify osteoporosis from 3D MRI data. We used some super-basic image segmentation algorithms to identify the bounds of the bone layer that we cared about. From there a model was able to determine mechanical properties of the bone and therefore make an assessment with much more granularity than the human eye.
This was a first-year grad student class and I was coming at this totally naive with some Matlab scripts, and we managed to get usable results in weeks.
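The "super-basic" segmentation step can be as simple as an automatic intensity threshold; a pure-Python sketch of Otsu's method (illustrative only, not the original Matlab code):

```python
def otsu_threshold(pixels):
    """Otsu's method: choose the intensity cutoff (0-255) that
    maximizes the between-class variance of the two groups it
    creates, separating e.g. bone from background with no
    hand-tuned threshold."""
    n = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        bg = [p for p in pixels if p < t]
        fg = [p for p in pixels if p >= t]
        if not bg or not fg:
            continue
        mu_bg = sum(bg) / len(bg)
        mu_fg = sum(fg) / len(fg)
        var = (len(bg) / n) * (len(fg) / n) * (mu_bg - mu_fg) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Dark background (10) vs bright bone (200): any cutoff in between
# separates them; this picks the first such cutoff.
pixels = [10] * 50 + [200] * 50
print(otsu_threshold(pixels))  # prints: 11
```

Methods this simple get you a usable bone mask on clean data, which is why a naive first-year effort could produce results in weeks.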
While I am not in the camp of "machines will replace doctors", I think radiology and other similar fields are in for a sea-change in technique and a large reduction in the use of human judgement.
Coming from a family of people in the medical professions, they've all seen reports of how _everything_ is going change in their fields because some new computer program can do X...
To which my father usually mutters something like: "Why the fuck are they wasting their time with that? Can't they fix the fucking medical billing system instead?"
Most of the medical professionals I know echo similar sentiments.
Telemedicine has a lot of regulatory hurdles to get to market, but initiatives like this are extremely exciting, since they can likely be taken to market in a way that explicitly clarifies that it's not a diagnostic, just a low-barrier way to actually get that mole you've got looked at. If you don't have health insurance, you could actually get an idea of how critical it is to get in to see a doctor. That said, the obvious concern would be the extreme cost of a false negative. The evidence suggests the algorithm is no more likely to produce one than a doctor, but the concern over single accidents caused by self-driving cars, even when the overall rates are far lower, makes it pretty clear that the public's bar for success for non-humans is substantially higher than it is for humans.
> That said, the obvious concern would be the extreme cost of a false negative
Probably not. People won't go to the doctor unless they sense something wrong with their body, so it is actually filling a void here.
On the other hand, false positives will cause a bigger problem, because a swarm of people will be triggered by the fear of cancer, and hospitals might not handle the sudden surge of traffic for treatment.
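That surge concern has a precise form: Bayes' rule. When prevalence in the screened population is low, even a decent test yields mostly false positives. A sketch with made-up numbers (the sensitivity, specificity, and prevalence are illustrative, not from the paper):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disease | positive test): true positives divided by all
    positives, via Bayes' rule."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp)

# A 95%-sensitive, 90%-specific melanoma screen at 1% prevalence:
ppv = positive_predictive_value(0.95, 0.90, 0.01)
print(round(ppv, 3))  # prints: 0.088
```

At these numbers roughly ten people get a cancer scare for every true case found, which is exactly the clinic-flooding scenario described above.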
Are people allowed to practice medicine without a license if they "explicitly clarify that it's not a diagnostic, it's simply a low barrier way to actually get that mole you've got looked at"?
I think whatever the law is regarding that would apply equally to a person or company providing this service via deep learning.
[+] [-] rscho|9 years ago|reply
Now for people eagerly awaiting the MDs downfall, I think you are precipitating things a bit. We all tend to believe in what we do, and I concur in saying that expert systems will replace doctor judgement in well-defined, selected applications in the decade to come. But thinking that the whole profession will be impacted as hard as factory workers, with lower wages and supervision-only roles, is not realistic. What will be lacking is the automation of data collection, because you seem to underestimate by far the technical, legal, and ethical difficulties in getting the appropriate feedback to make ML appliances efficient. I firmly believe in reinforcement learning, and as long as the feedback system will be insufficient, doctors will prevail, highly-paid jerks or not.
I myself am an anesthesiologist, a profession most people think of as a perfect use case for those techs (as I do), and wonder why we haven't been replaced already. The reality is that the job is currently far beyond what an isolated system could do. We already have trouble in making cars follow the right lane in non-standard settings. I hope people realize that in each and every medical field, the number and complexity of factors to control is far greater than driving in the right lane.
People who drive the medical system have no sense of technology. They cannot even envision the requirements for machines to become efficient in medicine. That is why we are seeing quite a lot of efficient isolated systems pop up, but we won't be seeing fully integrated, doctor-replacement systems for a long time. This will require a new generation of clinical practitioners, who will understand how to make the field truly available to machine efficiency.
[+] [-] scarmig|9 years ago|reply
Recently, my dad was sick with a pretty bad cough. Like, so bad that he couldn't speak without coughing. He fainted twice from minute long coughing fits, one of those times hitting his head his head on the stove on the way down, leaving a deep cut and blood everywhere.
He went to at least three different doctors. He got a scan of his chest. Everything looked clear, and all of the doctors were stumped. Things were pretty bad.
I mentioned this to a UCSF resident friend, and her immediate response was "Oh, is he on <some blood pressure medication I forget the name of>?" I was like, uh, let me see. Called my mom, she checked, and, lo and behold, he was on it. So his doctors took him off it and within a week he was better.
This coughing wasn't some obscure side effect of the medication she knew through sheer brilliance: it's a side effect that's been widely known since the 1970's. Hell, it was on the drug's Wikipedia page.
So there's a couple morals you could take from this. One would be, wow, doctors are smart to be able to diagnose an issue based on a single symptom and some reasonable assumptions about a patient's background! The other is that the median doctor is pretty worthless; spending tens of thousands of dollars gives you no guarantee you'll see someone competent; and that a medical system that relies on you grabbing drinks with a UCSF resident to get good results is fundamentally broken.
Machine learning and expert systems don't have to be as awesome as the best doctors to be valuable. They don't need to be better than competent doctors, even. They just need to provide a bare level of competence to provide a huge amount of value.
[+] [-] bluetwo|9 years ago|reply
Even the concept of electronic medical records has been resisted, and is currently poorly implemented. The gov't had to literally bribe doctors to convert to electronic systems and most seemed to go out and get the least poorly built products on the market.
I'm not trying to paint them as evil. Heck, even NASA scientists were suspicious of computers taking their jobs. I see this in other fields all the time.
Seeing the potential and seeing the current state of affairs kind of makes me sad. I want to live in the world where a balance has been reached. Where my medical record is a salient form of AI that checks if I lost those 10 pounds I promised and loops in the doctor when my flu symptoms linger too long. I want it to eliminate the hassle for doctors, hospitals, insurance companies as well as patients, and I think in the process it can drive costs down and raise quality of life.
[+] [-] dfabulich|9 years ago|reply
We're already seeing a significant rise in the role of nurse practitioners at the front line of medicine. Today, they gather the data and hand it to an MD, so handing it off to an ML system would be straightforward.
brandonb|9 years ago|reply
If you've developed expertise in deep learning and want to apply your skills to healthcare in a startup... please email me: [email protected]. My co-founder and I are ex-Google machine learning engineers, and we've published work at a NIPS workshop showing you can detect abnormal heart rhythms, high blood pressure, and even diabetes from wearable data alone. We're working on medical journal publications now based on an N=10,000 study with UCSF Cardiology.
Your skills can really make a difference in people's lives. The time is now.
kevinalexbrown|9 years ago|reply
http://suzukilab.uchicago.edu/research.htm
IIRC they were outperforming the average radiologist on some tasks 10 years ago.
ksolanki|9 years ago|reply
If this interests you and you have developed the expertise, there is another startup opportunity to explore -- please email me at [email protected]. We are a bunch of machine learning PhDs, developing, publishing, and commercializing deep learning algorithms for disease/risk identification from retinal photography. We play with millions of retinal images, and it's a lot of fun!
StavrosK|9 years ago|reply
What if we haven't but we do?
nickpsecurity|9 years ago|reply
So, how true or false is it?
rpedela|9 years ago|reply
iamleppert|9 years ago|reply
Mind you, I'm not talking about researchers, who will always have a job. I'm talking about practitioners. I've had a medical condition from birth and I've had to deal with my share of doctors. Outside of the insurance system, they are easily the most unpleasant part of the whole ordeal to deal with. There are some gems, but most you will encounter are pompous, arrogant, and "commanding" -- when they enter a room, they are flanked by "residents" and "assistants" and generally give off this air of superiority, which is really just because of their rote experience. The whole thing comes off more as a performance than anything else. Worse, they often get mad when you question them or ask them to explain themselves, or how they arrived at a conclusion.
Good luck finding work when an algorithm can do your job better than you. It's only a matter of time.
zwieback|9 years ago|reply
I have also encountered doctors I did not like but fortunately for me I had a choice where to go. Maybe machine learning should focus on weeding out unpopular practitioners instead.
doesnotexist|9 years ago|reply
Instead, I think we'll see more powerful diagnostic tools placed at physicians' disposal. Doctors will still play an important role in treating their patients and will be more effective because they'll have powerful tools assisting them.
But to your point, will technology help patients feel more empowered in their medical encounters? Or to get more value out of their interactions with their doctors? https://www.remedymedical.com/ seems to think their platform will do just that for primary care / telemedicine visits.
j8m88|9 years ago|reply
I would assume these challenges aren't unique to Canada, and from an outsider's perspective the medical system in the US seems worse (maybe not if you're rich).
komali2|9 years ago|reply
eatbitseveryday|9 years ago|reply
What I find people miss here is that computers will not help you heal your boo-boos. Say you get stabbed in the face with a knife. You need a doctor to help you. Or to give birth.
Gatsky|9 years ago|reply
mhb|9 years ago|reply
sremani|9 years ago|reply
supersaiyanverx|9 years ago|reply
I feel bad for the rest of the people who visit my clinic and have to deal with any of the other garbage practitioners, who usually fall into one of two buckets: foreign (mainly Indian) hacks with zero medical knowledge and no bedside manner, and greedy yuppie strivers with a knack for memorization but terrible analytical ability.
Most doctors don't deserve their inflated salaries or social status, and I hope they are soon brought back down to Earth by technology; they have been able to skate by for way too long.
amelius|9 years ago|reply
romaniv|9 years ago|reply
[1] - https://en.wikipedia.org/wiki/Mycin
brandonb|9 years ago|reply
I think now really is different. Part of that is algorithmic advances like deep learning, as shown in this Nature paper.
An even larger part of it is that the financial incentives are flipping due to value-based care. In 1979, a hospital that implemented an expert system for accurate diagnosis might, paradoxically, have seen its revenue fall. Nowadays, with ACOs, risk-based contracting, and bundled payments, the financial incentives create tailwinds rather than headwinds for large-scale adoption of AI in medicine.
Contrary to popular belief, the medical system can absorb new techniques very quickly--when incentives are aligned. And they are now becoming aligned.
lucidrains|9 years ago|reply
leereeves|9 years ago|reply
It would be interesting to compare that with the current state of the art in the field, and see if ML can contribute new scientific/medical theory as well.
btilly|9 years ago|reply
The algorithms found most of the tumors that humans had missed, with similar false positive rates. BUT humans refused to work with the software!
The problem was that the software was very, very good at catching tumors in the easy to read areas of the breast, and had lots of false positives in more complicated areas. Humans spent most of their effort on the more complicated areas. Every tumor that the software found that the human didn't simply felt like the human hadn't paid attention - it was obvious once you looked at it. The mistakes felt like stupid typos do to a programmer. But the software constantly screwed up where you needed skill. The result is that humans learned quickly to not trust the software.
antognini|9 years ago|reply
One of the things we do is perform a Turing test of sorts where we test if the performance of our detector is statistically indistinguishable from a human. (In fact, we actually have a contest running right now where we give you 10 EEG records, some marked by humans, some marked by our software, and if you can figure out which were marked by which we'll donate $1000 to the American Epilepsy Society.)
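A toy sketch of that kind of indistinguishability check (hypothetical scores and metric, not the poster's actual protocol): give every record an agreement score against a reference annotator, pool the human-marked and machine-marked scores, and run a permutation test on the difference in means.

```python
import random

def permutation_test(human_scores, machine_scores, n_iter=10000, seed=0):
    """P-value for the difference in mean agreement score between
    human-marked and machine-marked records. A large p-value means the
    two groups are statistically indistinguishable on this metric."""
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    observed = abs(mean(human_scores) - mean(machine_scores))
    pooled = human_scores + machine_scores
    n = len(human_scores)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)  # random relabeling of "human" vs "machine"
        if abs(mean(pooled[:n]) - mean(pooled[n:])) >= observed:
            hits += 1
    return hits / n_iter

# Similar score distributions -> high p-value (can't tell them apart)
p_same = permutation_test([0.90, 0.80, 0.85, 0.95], [0.88, 0.82, 0.90, 0.93])

# Very different distributions -> low p-value (easily told apart)
p_diff = permutation_test([0.9] * 5, [0.1] * 5)
```

The contest described above is the human version of the same idea: if readers can't beat chance at guessing which records were machine-marked, the detector passes.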
transcranial|9 years ago|reply
rikelmens|9 years ago|reply
nbmh|9 years ago|reply
doesnotexist|9 years ago|reply
The paper ends with "deep learning is agnostic to the type of image data used and could be adapted to other specialties, including ophthalmology, otolaryngology, radiology and pathology."
ThomPete|9 years ago|reply
I am lucky enough to have Sloan Memorial as my hospital, and none other than Dr. Marghoob, one of the leading experts. I actually have a scan of my body made with 50 or so high-definition cameras (I am literally a 3D model in blue speedos and with a white net on my head).
They have a new system where they can look at the cell level without doing a biopsy, and they actually found my melanoma before the biopsy (i.e. they knew it was melanoma before they did it). But it's a really cumbersome process: I had 6 experts studying and working to position that laser properly.
So the real challenge today is how do we get the data into the system.
lucidrains|9 years ago|reply
komali2|9 years ago|reply
I say when it comes to medicine, err on the side of caution. Obviously a diagnosis app isn't too dangerous - worst-case scenario, the app gives you a positive diagnosis, so you go into the doctor's, they take a sample, and find the growth not to be cancerous. No harm, no foul. But other ideas could be more dangerous.
wakkaflokka|9 years ago|reply
calebgilbert|9 years ago|reply
rawnlq|9 years ago|reply
I wonder when we will get to a point where machine learning can help there?
[1] https://figure1.com/medical-cases
ChuckMcM|9 years ago|reply
That said, pulling this off would be one of the best ML applications to date. Recognizing cats or scenery doesn't seem nearly as useful.
lscholten|9 years ago|reply
One such task is lung cancer nodule detection from CT scans. A paper I recently co-authored applied many different architectures to this detection and achieved very good results. (https://arxiv.org/pdf/1612.08012.pdf)
The best combination of systems detected cancer nodules which were not even found by four experienced thoracic radiologists.
michaf|9 years ago|reply
[0] https://www.kaggle.com/c/data-science-bowl-2017
sungam|9 years ago|reply
The difficulty is two-fold. Firstly, liability: a dermatologist aims not to miss a single case of melanoma in the tens of thousands of patients seen over their career. If this algorithm is used widely in millions of patients, then either the sensitivity will have to be higher and more biopsies performed, or there will have to be an acceptable rate of missed melanoma diagnoses.
Secondly, edge cases, such as moles that are slightly atypical. In these scenarios there is no way that I would be comfortable making an assessment from a photograph. Now of course, a machine could also gather further information via methods such as in vivo confocal microscopy, but in that case the cost savings are likely to be negligible.
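The first point is a threshold tradeoff, and a toy sketch (hypothetical scores and labels, not real clinical data) shows it directly: pushing sensitivity up means referring more lesions for biopsy.

```python
def sensitivity_and_referral_rate(scores, labels, threshold):
    """For a toy melanoma classifier: refer every lesion scoring at or
    above `threshold` for biopsy. Returns (sensitivity, fraction of all
    lesions referred)."""
    referred = [s >= threshold for s in scores]
    tp = sum(1 for r, y in zip(referred, labels) if r and y == 1)
    sensitivity = tp / sum(labels)
    referral_rate = sum(referred) / len(scores)
    return sensitivity, referral_rate

# Hypothetical classifier scores; label 1 = biopsy-confirmed melanoma.
scores = [0.95, 0.70, 0.60, 0.40, 0.30, 0.20, 0.15, 0.10]
labels = [1,    1,    0,    1,    0,    0,    0,    0]

# A strict threshold misses one melanoma in three; catching it
# doubles the biopsy rate.
sensitivity_and_referral_rate(scores, labels, 0.65)  # -> (0.666..., 0.25)
sensitivity_and_referral_rate(scores, labels, 0.35)  # -> (1.0, 0.5)
```

On real populations where melanoma prevalence is low, the same move multiplies biopsies far more dramatically, which is exactly the liability bind described above.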
hughdbrown|9 years ago|reply
fantispug|9 years ago|reply
These close-to-even ratios make for a more powerful test of classification. I would assume that the fact these test samples have biopsy data means that some dermatologist thought they might be malignant (unnecessary medical operations are unethical). This might lead to some bias towards samples that are difficult for humans to diagnose.
Separating these into binary classifications of specific tumor types also makes the task easier than classifying against every possible tumor type (as a dermatologist must).
Still, the claims this paper makes are very promising. A lot of the training data was classified by dermatologists, not biopsy. Using more biopsy data could lead to even better classification, as well as improvements to the model.
kevinalexbrown|9 years ago|reply
As an example, I recently trained a neural network to perform a useful task for our lab using 3 (!) hand-labeled brains.
jpgvm|9 years ago|reply
I am learning machine learning right now and I find working with datasets with fewer than 100 examples to be quite difficult.
It seems counterintuitive when you first think about it, but having way more data actually makes fitting the model much easier, as there is granularity that can be used to get feedback on adjustments to the structure of the model.
habosa|9 years ago|reply
For instance in college I worked with a radiologist to write an image-recognition program to identify osteoporosis from 3D MRI data. We used some super-basic image segmentation algorithms to identify the bounds of the bone layer that we cared about. From there a model was able to determine mechanical properties of the bone and therefore make an assessment with much more granularity than the human eye.
This was a first-year grad student class and I was coming at this totally naive with some Matlab scripts, and we managed to get usable results in weeks.
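The segmentation step described above can be as simple as intensity thresholding. A minimal pure-Python sketch (hypothetical intensity values, standing in for those Matlab scripts) that masks bone voxels in a slice and computes bone volume fraction (BV/TV), a standard trabecular bone metric:

```python
def segment_bone(slice_2d, threshold):
    """Binary mask: 1 wherever voxel intensity reaches the bone threshold."""
    return [[1 if v >= threshold else 0 for v in row] for row in slice_2d]

def bone_volume_fraction(mask):
    """BV/TV: bone voxels over total voxels -- a crude stand-in for the
    mechanical assessment a fuller model would make from the mask."""
    total = sum(len(row) for row in mask)
    return sum(sum(row) for row in mask) / total

# Tiny hypothetical MRI slice (intensities in arbitrary units).
slice_2d = [[10, 200, 210],
            [ 5, 190, 220],
            [ 0,  15, 205]]
mask = segment_bone(slice_2d, threshold=100)
bone_volume_fraction(mask)  # 5 of 9 voxels classified as bone
```

Real pipelines would pick the threshold automatically (e.g. Otsu's method) and clean the mask with morphological operations, but the core idea is this simple.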
Here's a sample of that professor's research: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2926228/
While I am not in the camp of "machines will replace doctors", I think radiology and other similar fields are in for a sea-change in technique and a large reduction in the use of human judgement.
drfritznunkie|9 years ago|reply
To which my father usually mutters something like: "Why the fuck are they wasting their time with that? Can't they fix the fucking medical billing system instead?"
Most of the medical professionals I know echo similar sentiments.
the_watcher|9 years ago|reply
eva1984|9 years ago|reply
Probably not. People won't go to the doctor unless they sense something wrong with their body, so it is actually filling a void here.
On the other hand, false positives could cause a bigger problem: a swarm of people triggered by the fear of cancer, and hospitals might not be able to handle the sudden surge of traffic for treatment.
leereeves|9 years ago|reply
I think the same, whatever the law regarding that is, would apply to a person or company providing this service via deep learning.