Can a crowdsourced AI medical diagnostic app outperform your doctor?

Shantanu Nundy recognized the symptoms of rheumatoid arthritis when his 31-year-old patient with crippling hand pain visited Mary’s Center in Washington, D.C. But instead of starting treatment immediately, Nundy decided to double-check his diagnosis using a smartphone application that helps with difficult medical cases by seeking advice from doctors around the world. Within a day, Nundy’s hunch was confirmed: the application had used artificial intelligence (AI) to analyze and filter the advice of several medical specialists into an overall ranking of the most likely diagnoses. Created by the Human Diagnosis Project (Human Dx), an organization Nundy directs, the app is one of the latest examples of growing interest in human-AI collaboration to improve health care.

Human Dx advocates using machine learning, a popular AI technique in which software automatically learns to classify patterns in data, to crowdsource and distill the medical knowledge of thousands of physicians in 70 countries. Doctors at several major medical research centers showed early interest in the application. On Thursday, Human Dx announced a new partnership with leading organizations in the medical profession, including the American Medical Association and the Association of American Medical Colleges, to promote and develop the Human Dx system. The aim is to provide timely and affordable specialist advice to the general practitioners who serve millions of people around the world, especially at the so-called safety-net hospitals and clinics across the United States that provide access to care regardless of a patient’s ability to pay.

“We need to find solutions that increase the ability of existing physicians to serve more patients at the same or lower cost,” says Jay Komarneni, founder and president of Human Dx. About 30 million uninsured Americans depend on safety-net facilities, which typically have little or no access to medical specialists. These patients often face the difficult choice of either paying out of pocket for an expensive in-person consultation or waiting months to be seen by one of the few specialists working in public hospitals, which receive public funding to help pay for patient care, Komarneni says. Meanwhile, studies have shown that between 25 and 30 percent (pdf) of such expensive specialist visits could be handled through online physician-to-physician consultations, sparing patients additional costs or long wait times.

Komarneni plans to “augment or extend the capacity of physicians with AI” in order to bridge this “specialist gap.” Within five years, Human Dx aims to be available at the 1,300 safety-net community health centers and free clinics in the United States.

How it works

When a doctor needs help diagnosing or treating a patient, he or she opens the Human Dx smartphone app or visits the project’s web page and types in a clinical question along with a working diagnosis. The doctor can also upload pictures and test results related to the case and add details such as medications the patient takes regularly. The doctor then asks for help, either from specific colleagues or from the network of doctors who have joined the Human Dx community. Over the next day or so, the Human Dx AI program aggregates all the responses into a single report.

The service is a digital equivalent of the “curbside consultation,” in which a doctor asks a friend or colleague for quick input on a medical case without setting up a formal, expensive consultation, says Ateev Mehrotra, an associate professor of health care policy and medicine at Harvard Medical School and a physician at Beth Israel Deaconess Medical Center. “It’s intuitive that [crowdsourced advice] would be better advice,” he says, “but how much better is an open scientific question.” Still, he adds, “I think it’s also important to recognize that misdiagnosis by physicians is quite common.” One of Mehrotra’s colleagues at Harvard has studied the performance of the AI-powered Human Dx system against that of individual medical specialists but has yet to publish the results.
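Human Dx has not published the details of its aggregation algorithm, so the following is only a rough illustration of the kind of step described above: a minimal Python sketch that combines several physicians’ ranked differential diagnoses into one ranked report using simple weighted voting. The respondent weights and the scoring rule here are assumptions for illustration, not Human Dx’s actual method.

```python
from collections import defaultdict

# Hypothetical sketch (not Human Dx's actual algorithm): combine several
# physicians' ranked differential diagnoses into one ranked report using
# simple position-weighted voting.

def aggregate_differentials(responses):
    """responses: list of (physician_weight, diagnoses ranked best-first)."""
    scores = defaultdict(float)
    for weight, ranked_dx in responses:
        for position, dx in enumerate(ranked_dx):
            # A diagnosis placed higher in a physician's list earns more.
            scores[dx] += weight / (position + 1)
    # Highest total score first: the "most likely" end of the report.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Assumed example input: three responses, weighted by a (hypothetical)
# measure of each respondent's track record.
responses = [
    (1.0, ["rheumatoid arthritis", "psoriatic arthritis", "lupus"]),
    (0.8, ["rheumatoid arthritis", "lupus"]),
    (0.6, ["psoriatic arthritis", "rheumatoid arthritis"]),
]
for dx, score in aggregate_differentials(responses):
    print(f"{dx}: {score:.2f}")
```

Run as written, the sketch ranks rheumatoid arthritis first, mirroring how a consensus report could surface the most probable diagnosis even when individual respondents disagree.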

Mehrotra’s caution stems from research he and Nundy published last year in JAMA Internal Medicine. That study used the Human Dx service as a neutral platform to compare the diagnostic accuracy of human physicians with that of third-party “symptom checker” websites and apps that patients use for self-diagnosis. In that case, the humans handily outperformed the symptom checkers’ computer algorithms. But even the doctors gave incorrect diagnoses about 15 percent of the time, which is comparable to previous estimates of physician misdiagnosis rates.

Human Dx could also help improve medical education and training for human physicians, says Sanjay Desai, a physician and director of the Osler Medical Education Program at Johns Hopkins University. As a first step in vetting the service’s capabilities, he and his colleagues conducted a study whose preliminary results showed the app could tell the difference between the diagnostic skills of resident physicians and those of fully trained physicians. Desai wants to see the service become a system that can track each physician’s clinical performance and provide targeted recommendations for improving specific skills. Such objective assessments could be an improvement over the current method of human physicians qualitatively judging their less experienced colleagues. The open question, Desai says, is whether “algorithms can be created to provide more granular information about an [individual] physician’s strengths and weaknesses in clinical reasoning.”
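The article does not describe how such performance tracking would work; purely as an assumption, one conceivable approach is to score each physician’s submitted differential against the eventually confirmed diagnosis, broken down by clinical domain. The data layout, the top-3 scoring rule, and all names in this minimal Python sketch are hypothetical.

```python
from collections import defaultdict

# Hypothetical sketch of per-physician performance tracking: count how
# often the confirmed diagnosis appears in a physician's top-3 differential,
# grouped by clinical domain to expose strengths and weaknesses.

def track_performance(cases):
    """cases: list of (physician_id, domain, ranked_differential, confirmed_dx)."""
    stats = defaultdict(lambda: {"hits": 0, "total": 0})
    for physician, domain, differential, confirmed in cases:
        record = stats[(physician, domain)]
        record["total"] += 1
        if confirmed in differential[:3]:  # a "hit" if confirmed dx was in top 3
            record["hits"] += 1
    return {key: s["hits"] / s["total"] for key, s in stats.items()}

# Assumed example input.
cases = [
    ("dr_a", "rheumatology", ["RA", "lupus"], "RA"),
    ("dr_a", "cardiology", ["angina"], "pericarditis"),
    ("dr_b", "rheumatology", ["gout", "RA"], "RA"),
]
for (physician, domain), accuracy in track_performance(cases).items():
    print(f"{physician} / {domain}: {accuracy:.0%} top-3 accuracy")
```

A per-domain breakdown like this is one plausible way to generate the “more granular information” Desai describes, though any real system would need confirmed outcomes and far more careful statistics.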

AI-assisted healthcare

Human Dx is one of many AI systems being tested in health care. Perhaps the biggest effort is IBM Watson Health; the company has claimed for several years that its AI helps major medical centers and hospitals with tasks such as genetically sequencing brain tumors and matching cancer patients to clinical trials. Studies have shown that AI can help predict which patients will suffer heart attacks or strokes within 10 years, or even who will die within five. Tech giants such as Google have joined start-ups in developing AI capable of diagnosing cancer from medical images. Yet AI in medicine is still in its infancy, and its true value remains to be seen. Watson seems to have found success at Memorial Sloan Kettering Cancer Center but floundered at the University of Texas MD Anderson Cancer Center, although it is unclear whether those problems stemmed from the technology itself or from its implementation and management.

The Human Dx project also faces hurdles on the path to widespread adoption, according to Mehrotra and Desai. One major challenge is getting enough physicians to volunteer their time and labor, free of charge, to meet a potential rise in demand for remote consultations. Another open issue is how Human Dx’s quality control will handle users who consistently provide wildly incorrect diagnoses. The service will also need a large base of specialist physicians to help crack the tougher cases that can leave general practitioners stumped.

In any case, the Human Dx executives and the physicians helping to validate the platform’s usefulness seem to agree that AI alone will not shoulder the burden of medical care anytime soon. Instead, Human Dx seeks to harness both machine learning and the crowdsourced wisdom of human physicians to make the most of limited medical resources, even as demand for medical care continues to grow. “The complexity of practicing medicine in real life will require both humans and machines to solve problems,” Komarneni says, “as opposed to pure machine learning.”

