Artificial Intelligence Can Diagnose Cancer As Effectively As Human Doctors - Study

While scientists note that many studies suggest AI systems can diagnose cancer on par with medical specialists, there is not enough data to say machines can significantly outperform them. The problem is not a shortage of studies, they say, but study designs that do not inspire confidence in their claims.

Artificial intelligence (AI) can now be as effective as a medical professional in diagnosing cancer, says a UK study led by Prof. Alastair Denniston of University Hospitals Birmingham NHS Foundation Trust, according to a Daily Star report.

"We found deep learning could indeed detect diseases ranging from cancers to eye diseases as accurately as health professionals,” Denniston said. "Our review found the diagnostic performance of deep learning models to be equivalent to that of healthcare professionals."

In the trials the study examined, the machines held a slight edge over humans. The study, which analysed data from 14 trials, found that AI correctly detected the disease in 87 percent of cases, versus 86 percent for doctors. The machines also correctly ruled out patients who had no disease 93 percent of the time, slightly ahead of the human doctors' 91 percent.
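The two figures quoted are, in effect, a sensitivity (share of diseased patients correctly detected) and a specificity (share of healthy patients correctly ruled out). A minimal sketch of how such rates are computed from confusion-matrix counts, using hypothetical numbers rather than the study's actual data:

```python
def sensitivity(tp: int, fn: int) -> float:
    """Proportion of diseased patients the test correctly flags."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of healthy patients the test correctly rules out."""
    return tn / (tn + fp)

# Hypothetical illustration: 87 of 100 diseased patients detected,
# 93 of 100 healthy patients correctly cleared.
print(sensitivity(tp=87, fn=13))  # 0.87
print(specificity(tn=93, fp=7))   # 0.93
```

The counts here are invented purely to reproduce rates of 87 and 93 percent; the review's pooled figures come from aggregating many such trials, not from a single confusion matrix.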

"Some have suggested AI applications will even replace whole medical disciplines or create new roles for doctors to fulfil, such as 'information specialists'," Denniston says. He nonetheless criticised biases in study designs that, he says, lead to claims that machines vastly outperform medical specialists. "These biases can lead to exaggerated claims of good performance for AI tools which do not translate into the real world."

Denniston lamented that, out of 20,500 articles, only 14 were designed in a way that was suitable for an independent analysis and inspired sufficient confidence in their claims.

"Only 25 studies validated the AI models externally - using medical images from a different population — and just 14 studies actually compared the performance of AI and health professionals using the same test sample,” Denniston said.

Dr Livia Faes, Denniston's co-author, argues that examining the diagnosis alone is not enough to measure a machine's performance; it should instead be measured by patient outcomes.

"Evidence on how AI algorithms will change patient outcomes needs to come from comparisons with alternative diagnostic tests in randomised controlled trials,” she said, adding that such studies do not yet exist.

"So far, there are hardly any such trials where diagnostic decisions made by an AI algorithm are acted upon to see what then happens to outcomes which really matter to patients, like timely treatment, time to discharge from hospital, or even survival rates."

The study team concluded that it is too early to talk about machines ousting human doctors from their jobs, but cautiously added that computers could work on par with their human colleagues.
