MIT AI System Knows When to Make a Medical Diagnosis or When to Call an Expert

AI can now detect cancers of the lung, breast, brain, skin and cervix. But in the world of medical AI, figuring out when to rely on experts versus algorithms is always tricky. It is not simply a question of who is "best" at making a diagnosis or a prediction; factors such as the time available to healthcare professionals and their level of expertise also come into play. To address this problem, researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a machine learning system that can decide whether to make a prediction itself or defer to an expert.

More importantly, the system can adapt when and how often to rely on a human expert, depending on that teammate’s availability, experience and scope of practice. For example, in a busy hospital setting, the system may call for human assistance only when absolutely necessary.
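The core idea, at its simplest, is a decision rule that weighs the model's own confidence against the expected value of interrupting a human. The sketch below is purely illustrative and is not taken from the MIT paper: the function names, the linear cost term, and the numbers are all assumptions made to show the shape of such a defer-or-predict rule.

```python
# Hypothetical sketch of a defer-or-predict rule. Assumes the model produces
# a confidence score and that we have a per-case estimate of the expert's
# accuracy; the interruption cost models how busy the expert is. None of
# these names or values come from the MIT/CSAIL system itself.

def should_defer(model_confidence: float,
                 expert_accuracy: float,
                 interruption_cost: float = 0.1) -> bool:
    """Defer to the human when their expected accuracy, discounted by the
    cost of pulling them away (e.g. in a busy hospital), beats the model."""
    return expert_accuracy - interruption_cost > model_confidence


def diagnose(model_confidence: float,
             expert_accuracy: float,
             interruption_cost: float = 0.1) -> str:
    # Route the case: either the model answers, or the expert is called in.
    if should_defer(model_confidence, expert_accuracy, interruption_cost):
        return "defer to expert"
    return "model predicts"
```

Raising `interruption_cost` makes the system more self-reliant, which matches the article's busy-hospital example: the same model defers less often when expert time is scarce.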

The researchers trained the system on several tasks, including examining chest X-rays to diagnose conditions such as a collapsed lung. When asked to diagnose cardiomegaly (an enlarged heart), the hybrid human-AI model performed eight percent better than either the AI or the medical professionals could on their own.

"There are many barriers that naturally preclude full automation in clinical settings, including issues of trust and accountability," says David Sontag, senior author of a paper that the CSAIL team presented at the International Conference on Machine Learning (ICML). "We hope that our method will inspire machine learning practitioners to become more creative by incorporating real-time human expertise into their algorithms."

Next, the researchers plan to test a version of the system that works with and defers to several experts at once. For example, the AI could collaborate with multiple radiologists, each more experienced with a different patient population.

The team also believes the system could have implications for content moderation, since it can be trained to detect offensive text and images. As social media companies struggle to eradicate misinformation and hate, a tool like this could help ease some of the burden on human content moderators without resorting to full automation.

