Artificial intelligence programs can spot breast cancer in screening mammograms more accurately than human experts, according to a new study.
Researchers used an AI program to retrospectively examine thousands of mammograms from U.S. and U.K. breast cancer screening programs and compared the computer’s results to what human experts were able to find.
According to the results, published in the journal Nature, the AI reduced false positives by 5.7 per cent in the U.S. and 1.2 per cent in the U.K. datasets.
It also reduced false negatives by 9.4 per cent in the U.S. and 2.7 per cent in the U.K., meaning it picked up on cancers that humans had missed.
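The article does not publish the underlying counts behind those figures, nor does it say whether the reductions are absolute or relative. As a rough illustration: a false positive rate is the share of cancer-free screens flagged as suspicious, and a false negative rate is the share of actual cancers missed. The sketch below computes both and the difference between an AI reader and a human reader, using made-up numbers rather than data from the study.

```python
# Minimal sketch of comparing false positive / false negative rates
# between an AI reader and human readers on the same screening cases.
# All counts below are hypothetical placeholders, not figures from
# the Nature study.

def error_rates(false_pos, false_neg, negatives, positives):
    """Return (false positive rate, false negative rate) as percentages."""
    return 100 * false_pos / negatives, 100 * false_neg / positives

# Hypothetical retrospective dataset: 10,000 screens, 500 with
# confirmed cancer and 9,500 without.
positives, negatives = 500, 9_500

human_fpr, human_fnr = error_rates(false_pos=950, false_neg=120,
                                   negatives=negatives, positives=positives)
ai_fpr, ai_fnr = error_rates(false_pos=890, false_neg=105,
                             negatives=negatives, positives=positives)

# Report the absolute difference in percentage points.
print(f"False positive rate: human {human_fpr:.1f}%, AI {ai_fpr:.1f}%, "
      f"reduction {human_fpr - ai_fpr:.1f} points")
print(f"False negative rate: human {human_fnr:.1f}%, AI {ai_fnr:.1f}%, "
      f"reduction {human_fnr - ai_fnr:.1f} points")
```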
“This is a huge advance in the potential for early cancer detection,” said Dr. Mozziyar Etemadi, one of the study’s co-authors and a research assistant professor of anesthesiology and biomedical engineering at Northwestern University.
“Breast cancer is one of the highest causes of cancer mortality in women. Finding cancer earlier means it can be smaller and easier to treat. We hope this will ultimately save a lot of lives.”
The difference between the U.K. and U.S. results likely comes down to the datasets the authors examined and differences in how breast cancer screening programs work in the two countries. In the U.K., each image is analyzed by two clinical experts, whereas in the U.S. it is read by only one.
But while this study is an interesting piece of work and an advance in the field, AI likely won't be used to analyze your mammograms anytime soon, said Dr. Alejandro Berlin, a radiation oncologist and medical director of data science, outcomes and smart cancer care at Toronto's Princess Margaret Cancer Centre.
Research like this has been done retrospectively: taking cases whose outcomes are already known and running them through the computer program to see how the AI compares, he said. Clinical trials are still ongoing to see how this kind of software works in the real world.
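A retrospective comparison of that kind can be sketched in a few lines: for each historical case we have the original reader's decision and the eventual ground truth, and we run the model to see where the two disagree. The harness below is purely illustrative; `ai_model`, `Case` and the sample data are hypothetical stand-ins, not part of the study.

```python
# Minimal sketch of a retrospective head-to-head comparison, assuming
# historical screening cases with known outcomes (e.g. from biopsy or
# follow-up) and the original human reader decisions are available.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Case:
    image_id: str
    human_flagged: bool   # did the original reader recall the patient?
    has_cancer: bool      # ground truth established after the fact

def ai_model(image_id: str) -> bool:
    """Hypothetical placeholder: returns True if the AI flags the case."""
    return hash(image_id) % 3 == 0  # dummy logic for the sketch

def compare(cases: list[Case]) -> None:
    # Cancers the AI flagged that the original readers missed.
    ai_caught_missed = sum(
        1 for c in cases
        if c.has_cancer and ai_model(c.image_id) and not c.human_flagged
    )
    # Cancers the readers flagged that the AI missed.
    human_caught_missed = sum(
        1 for c in cases
        if c.has_cancer and c.human_flagged and not ai_model(c.image_id)
    )
    print(f"Cancers the AI flagged but readers missed: {ai_caught_missed}")
    print(f"Cancers readers flagged but the AI missed: {human_caught_missed}")

compare([Case("scan-001", human_flagged=False, has_cancer=True),
         Case("scan-002", human_flagged=True, has_cancer=True),
         Case("scan-003", human_flagged=True, has_cancer=False)])
```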
“What AI does, fundamentally, in my view, is predicting things,” he said. A computer program can quickly and cheaply analyze a massive amount of information — more than a person can process — and “find connections in data that were invisible to the human eye.”
AI programs are also more consistent than people, he said, in that they don’t have a bad day or come into work overtired and make mistakes. They will likely produce steadier results than a group of readers, some of whom are more skilled than others.
But one of the big questions for Berlin is what to do with those results.
“You can have an algorithm that performs beautifully in the lab that predicts the likelihood of you having appendicitis,” he said.
“Are you willing to go into the OR just based on that? Or would you like your clinician to take a look at your belly and say: ‘Yes, based on my experience, I think you have appendicitis and we’re going to go through the risks of the surgery together.’”
AI can help to identify a problem, he said, but deciding what to do about it, such as which treatment options are best suited to a particular patient, is still better done by a human being.
“Machines lack common sense.”
Researchers like him are studying how to integrate AI into clinical practice, but there are a lot of unanswered questions, he said, about how much patients will want to trust a computer program and what kinds of questions AI programs are best suited to answer.
“I think the message is there’s due diligence to be done,” he said. “I think now the burden is on that evaluation piece: what is acceptable for the people affected by cancer and where the maximal value should come if we were to take actions based on these machine learning tools.”