True to form, artificial intelligence continues to equal and even surpass doctors in the prediction and diagnosis of condition after condition. Most of this work, however, has occurred in carefully controlled laboratory experiments, with clean databases and images acquired and reviewed by experts.
Now, companies are making a concerted push to bring AI into real healthcare settings, where things are messier and far less controlled.
Last year, the U.S. Food and Drug Administration (FDA) approved the first machine learning application for healthcare: Arterys Cardio DL, which uses a deep learning algorithm to analyze MRI images of the heart. That tool assists doctors in recognizing a problem and making a diagnosis, but other AI applications seek to flag disease without specialists overseeing the process.
Recently, Iowa City-based IDx announced that the FDA has expedited the review of the company’s autonomous AI system for early detection of diabetic retinopathy, a leading cause of blindness in diabetics. The IDx-DR system, developed by IEEE Senior Member Michael Abramoff over the past 21 years, is designed to work without the help of an eye specialist, which could make a big difference for patients. Currently, individuals often wait weeks or months to see an eye specialist, and may not be diagnosed in time to prevent blindness.
The autonomy of the AI system initially made regulators uncomfortable, says Abramoff. “There is essentially no one looking over the shoulder of the algorithm,” he says. So IDx and the FDA went back and forth for seven years over how to evaluate the system and make sure it was accurate and safe. Making the algorithm explainable to regulators was critical to gaining approval, adds Abramoff. “It was a long journey, but we’re close to the end, I hope.”
The company did make some needed adjustments to move from the lab into the clinic, he adds. Notably, the IDx team added an interactive component, so the AI will tell the nurse or doctor taking a retinal image if it is of sufficient quality for a diagnosis. “It’s not easy to take pictures of the retina,” says Abramoff. “The algorithm tells the operator if they need to re-take the image or it’s good. That made a big difference.”
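IDx hasn’t published the details of how its image-quality check works, but the general idea of automatically gating on image quality before diagnosis can be illustrated with a toy sketch. Below, a hypothetical `is_gradable` function scores sharpness with the variance of the Laplacian (a common focus proxy in image processing) and asks for a retake when the score falls below a threshold; the function names and the threshold are purely illustrative, not IDx’s actual method.

```python
import numpy as np

def sharpness_score(image: np.ndarray) -> float:
    """Variance of the Laplacian: a common proxy for image sharpness.

    A blurry image has weak edges, so its Laplacian response is small
    and has low variance; a sharp image produces a high variance.
    """
    # 4-neighbor discrete Laplacian over the interior pixels
    lap = (-4.0 * image[1:-1, 1:-1]
           + image[:-2, 1:-1] + image[2:, 1:-1]
           + image[1:-1, :-2] + image[1:-1, 2:])
    return float(lap.var())

def is_gradable(image: np.ndarray, threshold: float = 50.0) -> bool:
    """Return True if the image seems sharp enough to grade.

    The threshold here is illustrative; a real system would calibrate
    it against images that specialists judged gradable.
    """
    return sharpness_score(image) >= threshold

# Demo: a high-contrast checkerboard (sharp) vs. a uniform gray image (no detail)
sharp = (np.indices((64, 64)).sum(axis=0) % 2) * 255.0
flat = np.full((64, 64), 128.0)
print(is_gradable(sharp), is_gradable(flat))  # → True False
```

In a clinic workflow, a failing score would prompt the operator to re-take the photo immediately, rather than discovering days later that the image was unusable.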
After early testing on publicly available datasets, IDx completed a 900-person clinical trial last summer, comparing the diagnoses rendered by primary care staff who had four hours of training on the system against those proffered by experts with 10-plus years of training and experience in taking and analyzing retinal images. Abramoff declined to share the results of the trial, which are under review for publication at a leading medical journal, but noted, “we’re very excited.”
AI diagnostics have flourished in ophthalmology, including diagnostics for maladies such as congenital cataracts and glaucoma. Google, for example, is training DeepMind to spot signs of common eye diseases. This early momentum in the eye is no surprise, as the field boasts well-defined standards for diagnosis and treatment, and the eye is easily accessible, making it ideal for the application of new technologies. For example, one of the first gene therapies approved for use by the FDA was for an inherited form of vision loss.
Additionally, AI is better suited to the task of solving well-defined problems than ill-defined ones—a maxim that holds true across healthcare, says Abramoff. So, areas of medicine with hard data, such as pathology images, are riper for AI applications than areas with soft data, such as general diagnosis from electronic medical records. (Check out our AI vs Doctors graphic for more recent examples.)
The further one moves away from hard data and areas of medicine with agreed upon symptoms and treatments, the harder it will be to use AI as a diagnostic tool, says Abramoff. “The more objective it is, the easier it is to take the middle man out of the picture.”