The AI Doctor Will See You Now – Machine Learning Is Transforming Healthcare
Machine learning has the power to fundamentally reinvent healthcare. In a world where industry after industry has been radically transformed by technology, we are finally watching it slowly but steadily disrupt healthcare delivery. Ready to meet your AI doctor?
Machine learning has the power to fundamentally reinvent healthcare. In a world where industries have been radically transformed by technology, healthcare delivery still follows the traditional approach, relying heavily on doctors’ ability not only to absorb humongous amounts of varied data but also to apply it correctly to your complex personal context. As Vinod Khosla puts it, it’s time to move beyond the stethoscope – which, 200 years after its invention, remains the iconic diagnostic tool for most healthcare professionals worldwide.
If you deep-dive into the problem, it really begins at the very onset of healthcare education. As data and research explode, it is challenging for the average doctor to keep up without technology – to the extent that studies show that “Students who graduate in 2020 will experience four doublings in knowledge. What was learned in the first 3 years of medical school will be just 6% of what is known at the end of the decade to 2020.” This may explain why a 2013 study estimated that more than 400,000 deaths a year in the US were attributable to medical errors. Misdiagnoses and conflicting diagnoses are common challenges plaguing the current medical system – and while solvable, they tend to occur for reasons that are intrinsically human, a challenge that technology can play a huge role in helping us overcome. On this note, I find it important to highlight that the goal of applying this megatrend to healthcare is not to eliminate doctors, but to leverage technology to overcome human shortcomings and amplify doctors’ strengths through relevant insights.
And as you read this, a U.K.-based startup, Babylon Health, has been striving to make this ambitious vision a reality for the past two years. Ambitious indeed, given the sky-high cost of engineers who are experts at building AI models – while the net return on that heavy investment could be simply a hit or a miss. The startup’s founder, Ali Parsa, calls it a major step towards his ambitious goal of putting accessible healthcare in the hands of everyone on the planet. “Five billion people globally have no access to surgery. And without adequate primary care, a $10 problem becomes a $100 solution.” Catch an illness early through the intensive data analytics behind Babylon’s product, however, and the founder believes the company can stop it from becoming an expensive problem for a state provider like the National Health Service. Moreover, this technology-driven assistant can help doctors spend their time far more efficiently – with the result that, over time, providers such as the NHS won’t need to hire so many of them.
Babylon interacts directly with customers, giving them access to its network of 250 doctors over video call. To assist doctors in their diagnoses, the company also sells AI software that helps investigate an ailment. It is this latter feature that Babylon has spent the last two years heavily investing in, so that its human doctors are freed from note-taking and diagnosing common illnesses, and can instead focus their effort on analysing complex insights and looking after more complicated problems. “You don’t need to see a doctor for every diagnosis; what you want is a treatment.” Babylon’s core team is working on what goes on behind the scenes to help doctors deliver that treatment with greater reliability and efficiency.
And if this technology sounds far too ambitious or far off in the future – wait till you read this. In June this year, the company put its product through the MRCGP exam, which trainee general practitioners take to test their ability to diagnose. Against the average pass mark of 72% over the past five years, Babylon scored 82% – an uplifting milestone for the entire company. In terms of diagnosis, Babylon’s digital doctor had proved itself on par with the average human doctor, if not better.
And while the company aspires to a grand long-term future, what does the near future look like? Definitely not without its fair share of challenges. The AI doctor’s journey has barely begun, and it will only slowly evolve in sophistication – no different from any other great doctor who invests many years training under the best practitioners, even beyond the decade spent at medical school. Similar to what we saw in the IBM Watson case, expect many laughing-stock attempts as this technology grows out of infancy. Early in its lifecycle it will be the butt of many jokes – but don’t let that cloud your perception of the power the technology eventually holds.
And now the company seeks to expand this product to assist doctors in the U.S. I would love to hear what key challenges you think this product could help solve in the US market.
The AI doctor will see you now.
Khoslaventures.com. (2018). “20 Percent Doctor Included” & Dr. Algorithm: Speculations and Musings of a Technology Optimist | Khosla Ventures. [online] Available at: https://www.khoslaventures.com/20-percent-doctor-included-speculations-and-musings-of-a-technology-optimist [Accessed 14 Nov. 2018].
Olson, P. (2018). This AI Just Beat Human Doctors On A Clinical Exam. [online] Forbes. Available at: https://www.forbes.com/sites/parmyolson/2018/06/28/ai-doctors-exam-babylon-health/#345f346f12c0 [Accessed 14 Nov. 2018].
Heaven, D. (2018). Your next doctor’s appointment might be with an AI. [online] MIT Technology Review. Available at: https://www.technologyreview.com/s/612267/your-next-doctors-appointment-might-be-with-an-ai/ [Accessed 14 Nov. 2018].
TechCrunch. (2018). Babylon Health raises further $60M to continue building out AI doctor app. [online] Available at: https://techcrunch.com/2017/04/25/babylon-health-raises-further-60m-to-continue-building-out-ai-doctor-app/ [Accessed 14 Nov. 2018].
Futurism. (2018). Your future doctor may not be human. This is the rise of AI in medicine. [online] Available at: https://futurism.com/ai-medicine-doctor [Accessed 14 Nov. 2018].
Very interesting. My main concern with AI-driven diagnoses and applications in medicine is how the patient will react to the use of a machine. Patients already decry the evaporation of bedside manner in the medical care experience, with doctors’ noses buried in charts and laptops when the patient actually wants calm, assured attention. Will the expanded use of AI make it even easier for doctors to interact even less with their patients, further degrading the experience? My solution would be that if doctors become more accurate and efficient with AI and machine learning assistance, they should not be given additional patients. Rather, the extra time should be spent getting back to the basics of healthcare: patients.
This was a great read, and I really like how you framed the essay in terms of machine learning not replacing doctors but assisting them so that they can focus on more complex issues. However, a challenge I see is how you convince a traditional patient to trust a machine’s diagnosis over that of a doctor whose instinct, intellect, and experience a machine can never replace. Nonetheless, I do agree that not every illness needs human consultation. For example, there’s no reason why one should wait hours in line in a hospital for a 30-second consultation for illnesses such as a fever or food poisoning, as I imagine machines would be much more efficient at that.
This article touches on a very important issue – human augmentation. Since for now we are quite far away from developing anything that resembles general AI, it is logical to use machine learning technology for enhancing human performance. This can be done by identifying relatively routine tasks that technology can easily outperform humans at and outsourcing those tasks to technology. I believe Babylon Health is trying to do just that. However, I am still skeptical about using machine learning algorithms without human supervision and without a clear understanding of the decision-making process. You mention that the digital doctor outperformed real doctors on average, but averages can be highly misleading. Given my scant knowledge of the issue, I understand that designers of machine learning algorithms themselves often struggle to explain why the AI has reached a certain decision, and those decisions often seem quite bizarre to humans. To conclude, I believe transparency of the technology is of paramount importance for this startup to succeed, especially given that we are dealing with people’s health.
Really nice article. Since my first year of medical school, I’ve always felt that technology should play a role in addressing or overcoming human shortcomings in medicine (e.g. electronic stethoscopes that are able to analyze heart sounds more accurately than the human ear). The AI capabilities that you describe coming down the pipeline are directly in line with current trends in medical education. Medical students are now relying less and less on accumulating a body of knowledge (though a hefty background is necessary) and relying more on “learning how to learn” given the explosion of data entering the medical field. I do worry about the “laughing stock” mistakes a la Watson that you reference, as these could potentially be life threatening in the medical context. However, this technology, if used judiciously, could help with physician shortages in the rural US and globally by providing physicians with initial, first-pass diagnoses.
I agree with a lot of the points that have been made in the comments so far. You specifically mention that machines are not here to replace doctors. From many patients’ perspective, though, I think there will be a huge reluctance to start accepting “robots” as doctors in any context. Healthcare is one of the few industries where people still value the human element of a physician, and I think there is a fear of a slippery slope when it comes to making recommendations based on machine learning algorithms. I agree with Michael’s comment that this is not a field where there is room for error; a wrong diagnosis or treatment recommendation can have severe consequences. This opens up these companies to a whole host of liabilities and, more importantly, can run the risk of harming a patient if there is any mistake in the data or analysis.
Great article! The first question that comes to mind in response to this is, how then do you regulate a computer? An AI program may pass an initial test to prove it’s competent, but what happens when there is a misdiagnosis – who is responsible? What if a programmer makes a mistake in the code or data being read in? I guess that’s why you have the technology assist doctors vs. replace them, but I think given how regulated the healthcare industry is (particularly if the company is working with systems like the NHS), I’m curious to know what KOLs have to say on the topic. Regardless – really enjoyed the article, very thought provoking!
AI in health is a fascinating area – the potential benefits are enormous, and as Babylon are showing in the UK, automated diagnosis is rapidly becoming a reality. A big concern here though is the lack of an ethical framework – where do we draw the line between human and machine decision making? To take it from the obvious to a little further down the road – what about when machine learning algorithms have developed to the point where they are consistently, provably more accurate than even the best human doctors? At that point, should humans have the right to override the machine’s decision? We need some philosophers in here alongside the doctors…
Interesting blog topic! You do a nice job of highlighting where a data-driven platform may provide benefits over human doctors. Reading through, I really like the point you made on how data should really be used at this point in time to augment current practitioners. One worry I have in moving to machine diagnostics is that people care about human interaction, and may be turned off by the lack of human contact if we shift fully to a data-driven tool. Rather, I see the benefit here as creating a mechanism against which human doctors can compare their assessments, both to correct inaccurate ones and, over time, to learn from the mistakes that were prevented. A second question I had was whether, even though the Babylon platform outscored humans, the perceived bar for a robot is higher than for a human. We expect humans to make mistakes, and I suspect that humans may hesitate to implement robotic platforms until they demonstrate an even larger benefit over human performance.
Very interesting article! I am personally a strong believer in artificial intelligence and genuinely believe that machine learning has the potential to outperform humans in certain fields. I totally agree with your point that machine learning wouldn’t essentially replace doctors but would instead help with diagnosis and therefore free up doctors for other, more sophisticated tasks. While I believe that machine learning can deliver more accurate results in some fields (which can be linked to it scoring higher than doctors on the MRCGP), my biggest concern about utilizing machine learning in a critical field such as healthcare is accountability. Even if the machine is projected to be more accurate than doctors, what happens when it gets it wrong? Who is to be held accountable? I believe that as humans we always feel secure when we know who is accountable for something, and even if we know that the machine will be more accurate, the fact that we cannot hold a machine accountable is, in my opinion, one of the biggest challenges of AI in healthcare.
I can’t wait to see the AI doctor! I think the US healthcare system is facing a lot of challenges that hopefully the AI doctor could help solve. For example, perhaps the technology could enable doctors to diagnose patients faster and thus help decrease rising healthcare costs in the country. My concern with this technology is that it is only as good as the data we put in. I would worry that doctors would trust the AI doctor too much, go against their intuition, and misdiagnose patients. I think it is critical that confidence intervals are attached to recommendations to avoid this problem.
Such an interesting area of disruption! Although I agree with many of the comments that there are downsides to weakening the personal relationship with the doctor, there could be some upsides to the video-call approach. Aside from the sheer practicality of seeing a doc from home, some patients are afraid of the clinic/hospital experience itself – the waiting room, the smell, the fear of picking up other bugs, etc. For more sensitive health issues, patients might take some comfort in the physical distance from an AI doc and be more willing to open up about their health problems.
Thank you for an interesting read! While I understand that an AI-driven diagnosis approach may help with some of the cost issues and inaccuracies in healthcare, one of my biggest concerns is that an algorithm is likely going to give me the most common answer to a person’s symptoms. Human doctors, however, are often able to catch cases on the marginal fringes of the diagnostic distribution. Will the number of negative outcomes or deaths increase because the AI recommendation would be based on the most common diagnosis?
I think general AI will get to a place where diagnoses can be properly made by computers. I think it’s fairly easy to predict direction but not as easy to predict timing. From chess to jeopardy to driverless cars to everything else we have utilizing AI/ML today, there was at one point where each was considered impossible. I would presume there are several diagnoses that a computer today can make better than a human doctor. I can only assume that at some point the same will be true for even the complex ones.
Very good read, thank you so much! Having worked in primary care services as a consultant, I truly believe in the value of preventive and primary care, and I am also a very strong supporter of the use of technology. I have personally seen two main challenges with this: healthcare professionals are very proud of their occupation, and most do not believe that technology could add value to their diagnosis and treatment, so I see this as a difficult group to create buy-in with. Another potential challenge is the accountability and ethical aspects. Similar to the discussions around autonomous cars, when there is malpractice, who would we blame?
This is a really interesting read, thank you. I’m curious what happens though when the AI doctor gets a diagnosis wrong potentially leading to a fatal mistake. Where does the responsibility lie and who gets sued, etc?