Published online Jun 28, 2022. doi: 10.35711/aimi.v3.i3.55
Peer-review started: January 19, 2022
First decision: March 12, 2022
Revised: April 12, 2022
Accepted: June 16, 2022
Article in press: June 16, 2022
Published online: June 28, 2022
Processing time: 159 Days and 22 Hours
Much of the published literature on radiology-related artificial intelligence (AI) focuses on single tasks, such as identifying the presence, absence, or severity of a specific lesion. Progress comparable to that achieved in general-purpose computer vision has been hampered by the unavailability of large, diverse radiology datasets containing different types of lesions, possibly with multiple kinds of abnormalities in the same image. Moreover, because a diagnosis is rarely reached from an image alone, radiology AI must employ diverse strategies that consider all available evidence, not just imaging information; incorporating key imaging and clinical signs would improve its accuracy and utility tremendously. Employing strategies that consider all available evidence will be a formidable task, and we believe that the combination of human and computer intelligence will be superior to either one alone. Further, unless an AI application is explainable, radiologists will not trust it to be either reliable or bias-free; we discuss approaches aimed at providing better explanations, as well as regulatory concerns regarding explainability ("transparency"). Finally, we examine federated learning, which allows data from multiple sites to be pooled while maintaining data privacy, yielding more generalizable and reliable models, and quantum computing, still prototypical but potentially revolutionary in its impact on computing.
Core Tip: It is necessary to understand how different artificial intelligence (AI) approaches work in order to appreciate their respective strengths and limitations. While advances in deep neural network research in radiology are impressive, their focus must shift from applications that perform only a single recognition task to those that perform the realistic multi-recognition tasks radiologists carry out daily. Humans use multiple problem-solving strategies, applying each as needed; realistic AI solutions must likewise combine multiple approaches. Good radiologists are also good clinicians, and AI must similarly be able to use all available evidence, not imaging information alone, and not just one or a limited set of imaging features. Both humans and computer algorithms (including AI) can be biased. One way to reduce bias, as well as to prevent failure, is better explainability: the ability to clearly describe the workings of a particular application to a subject-matter expert unfamiliar with AI technology. Federated learning allows more generalizable and accurate machine-learning models to be created while preserving data privacy, concerns about which form a major barrier to large-scale collaboration. Finally, although the physical hurdles to implementing quantum computing at a commercial scale are formidable, the technology has the potential to revolutionize all of computing.
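To make the federated-learning idea concrete, the following is a minimal sketch of federated averaging (FedAvg) on synthetic data. The three simulated "hospitals," the linear model, and all function names are illustrative assumptions, not part of any specific framework: each site trains locally on data that never leaves it, and only model weights (weighted by sample count) are shared and averaged.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a linear model locally by gradient descent; raw data never leaves the site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """FedAvg: average site models, weighting each by its local sample count."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Simulate three hospitals holding private samples of the same underlying relationship
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for n in (50, 80, 120):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.05, size=n)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds: broadcast, train locally, aggregate
    updates = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(updates, [len(y) for _, y in sites])

print(np.round(global_w, 2))  # converges toward the shared underlying weights
```

Only the weight vectors cross institutional boundaries here, which is the property that makes federated learning attractive when privacy regulations block centralized pooling of patient images.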