Volume 3, Issue 2 pp. 144-151
REVIEW
Open Access

Application of Artificial Intelligence in Medical Imaging: Current Status and Future Directions

Yixin Yang
School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
Contribution: Writing - original draft (equal), Writing - review & editing (equal)

Lan Ye (Corresponding Author)
School of Basic Medicine, Guizhou Medical University, Guiyang, China
Contribution: Conceptualization (equal), Funding acquisition (equal), Validation (equal)

Zhanhui Feng (Corresponding Author)
Department of Neurology, Guizhou Provincial People's Hospital, Guiyang, China
Contribution: Conceptualization (equal), Funding acquisition (equal), Project administration (equal), Supervision (equal), Writing - review & editing (equal)

Correspondence: Zhanhui Feng ([email protected]) and Lan Ye ([email protected])
First published: 09 April 2025

Funding: This work was supported by the National Natural Science Foundation of China (Grant Nos. 82360266; 81960224; and 81860248); Guizhou Provincial Basic Research Program (Grant Nos. Qiankehe basic-ZK[2023] general 395; Qiankehe basic-ZK[2023] general 324; and Qiankehe basic-MS[2025]548); and Key Lab of Acute Brain Injury and Function Repair at Guizhou Medical University (Grant No. [2024]fy0071).

ABSTRACT

A revolution in medical diagnosis and treatment is being driven by the use of artificial intelligence (AI) in medical imaging. The diagnostic efficacy and accuracy of medical imaging are greatly enhanced by AI technologies, especially deep learning, which perform image recognition, feature extraction, and pattern analysis. Furthermore, AI has demonstrated significant promise in assessing the effects of treatments and forecasting the course of diseases, and it provides doctors with more advanced tools for managing their patients' conditions. AI is poised to play a more significant role in medical imaging, especially in real-time image processing and multimodal fusion. By integrating multiple forms of image data, multimodal fusion technology provides more comprehensive disease information, whereas real-time image analysis can assist surgeons in making more precise decisions. By tailoring treatment regimens to each patient's unique needs, AI enhances both the effectiveness of treatment and the patient experience. Overall, AI in medical imaging promises a bright future, significantly enhancing diagnostic precision and therapeutic efficacy, and ultimately delivering higher-quality medical care to patients.

Abbreviations

  • AI: artificial intelligence
  • CNN: convolutional neural network
  • CT: computed tomography
  • DL: deep learning
  • MATR: multi-scale adaptive transformer
  • MRI: magnetic resonance imaging
  • PET: positron emission tomography

    1 Introduction

    Medical imaging is a science that uses various imaging techniques to visualize the internal structure, function, and pathological changes of the human body. This visual information can often intuitively reflect the nature and condition of human tissues, whether normal, abnormal, or lesioned. Its primary applications are in disease detection, monitoring, and treatment [1]. Images are created by a variety of medical imaging techniques that convert physical signals into visual form. Computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET), nuclear medicine imaging, optical imaging, and X-rays are examples of core technologies [2]. Medical imaging encompasses more than just image identification; it also involves image processing, analysis, and interpretation. These processes are extensively used in diagnosis, treatment planning, early disease detection, efficacy monitoring, and disease prediction. Accurately and promptly diagnosing, classifying, and assessing prognosis for a variety of diseases has become a key topic in contemporary research because of the rapid advancement of medical image processing technology and the growth of clinical image data [3]. Medical imaging professionals have historically relied on the empirical evaluation of medical images to produce definitive reports for disease detection, diagnosis, and follow-up. This experience is the result of condensing and generalizing the characteristics of previously observed, often quite subjective, imaging presentations of disease. Currently, imaging experts with vast experience and superior skills still interpret the information found in medical image data. In contrast, artificial intelligence (AI) has a significant advantage in the extraction of complex image features and the capacity to automatically and quantitatively analyze almost any type of data. As a result, it can produce more reliable support data for medical decision-making and reach more objective, repeatable conclusions [4].

    Within the broader field of computer science, AI is a prominent area of research [5]. Natural language processing, computer vision, machine learning, neural networks, and expert systems are just a few of the fields covered by the core concepts of AI. Problem-oriented logical reasoning and statistical thinking were introduced as early as 1959 [6], but the real technological breakthroughs and practical applications of AI came about as a result of the recent rapid development of computer technology and the explosive growth of big data [7]. AI's ability to process complex data using massive datasets is its most important feature. AI can be used to extensively mine the knowledge within these datasets, increasing the resources available for the development of human society. AI has a wide range of applications, and its main purpose is to mimic and enhance human intelligence. This makes it possible to improve work quality and efficiency while reducing the need for human labor. It can not only eliminate monotonous work but also, in many cases, replace humans in carrying out intricate tasks [8].

    One of the most promising directions for medical technology development is the combination of AI with medical imaging. A significant amount of image data has been produced as a result of the widespread use of medical imaging equipment, and these data are an essential resource for the development of AI [9, 10]. AI technology has shown great promise in helping medical imaging specialists diagnose and treat patients more quickly, analyze medical images more efficiently, and shorten examination times. Its main uses include disease diagnosis and prediction model construction, image enhancement and reconstruction, image segmentation, and recognition and classification [11]. Currently, AI is being used in medical imaging in a variety of ways, from simple image identification to complex big data analysis and mining, as well as more nuanced interpretations of image data and algorithm training for assisted diagnosis. The use of AI in the field of medical imaging will grow in scope and depth in the future as a result of ongoing technological advancements and the strengthening of relevant laws and regulations. This is anticipated to bring unprecedented changes to the medical sector [12].

    This review aims to summarize the current progress in the clinical application of AI for medical imaging and discuss future directions for research and development. Where possible, we cite literature published in the past 5 years.

    2 Application of AI in Medical Imaging

    2.1 Developments in the Image Recognition Field Using AI Technology

    Within the broader field of AI, image recognition is an important domain that deals with the use of computers to process, analyze, and understand images to recognize a wide range of patterns and objects. The core process of image recognition consists of three main steps: preprocessing the image, extracting features, and recognition [13]. The visual recognition system on the computer first gathers the image, after which it sends it to the image processing system for preliminary processing. After the initial detection of potential lesions, the suspected lesions can be marked, and the image can be segmented and then categorized. This approach ensures that the segmentation process accurately isolates regions of interest before they are classified based on their features. Furthermore, it is possible to extract and identify tumor traits that are invisible to the human eye through the use of specific algorithms.
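    To make the three-step pipeline concrete, the sketch below walks through preprocessing, convolutional feature extraction, and classification in PyTorch. The layer sizes, the two-class output, and the random array standing in for a real scan are illustrative assumptions; this is a toy pipeline, not a clinically validated model.

```python
# Minimal sketch of the three-step recognition pipeline described above
# (preprocessing, feature extraction, classification); sizes are illustrative.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def preprocess(scan: np.ndarray, size: int = 224) -> torch.Tensor:
    """Resize to a fixed input shape and normalize intensities to [0, 1]."""
    x = torch.from_numpy(scan).float().unsqueeze(0).unsqueeze(0)  # (1, 1, H, W)
    x = F.interpolate(x, size=(size, size), mode="bilinear", align_corners=False)
    return (x - x.min()) / (x.max() - x.min() + 1e-8)

class SmallLesionClassifier(nn.Module):
    """Toy CNN: convolutional layers extract features, a linear head classifies."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        f = self.features(x).flatten(1)   # feature extraction
        return self.head(f)               # recognition / classification

scan = np.random.rand(512, 512)           # stand-in for a real image
logits = SmallLesionClassifier()(preprocess(scan))
print(torch.softmax(logits, dim=1))
```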

    Automatic liver tumor segmentation is highly important for assisting physicians in the diagnosis and treatment planning of liver cancer. The Couinaud liver segmentation method is widely used by radiologists to document findings related to liver cancer in their reports. To further enhance segmentation accuracy, researchers developed a novel model called CouinaudNet that estimates pseudo-tumor masks from Couinaud liver segment annotations to serve as pixel-level supervision for training a fully supervised tumor segmentation model. This model consists of two components: a repair network with the Couinaud liver segment masks, which effectively removes tumors from pathological images by filling tumor regions with reasonably healthy intensities, and a difference detection network for tumor segmentation, trained using healthy-pathological image pairs generated through an effective tumor synthesis strategy. In practice, CouinaudNet significantly reduces the annotation workload [14].

    Accurate segmentation of the hepatic veins and portal veins also remains a clinical challenge; however, a proposed solution is the dual-stream hepatic portal vein segmentation network that extracts local features and long-range spatial information to delineate the anatomical details of the hepatic veins and portal veins, thereby avoiding misclassification of adjacent peripheral vessels. This method demonstrates excellent performance in resolving the issue of peripheral vessel misclassification in the segmentation tasks of hepatic veins and portal veins [15].

    Over the past 5 years, deep learning (DL) has advanced quickly, producing many findings and making major strides in a variety of domains, including speech recognition, natural language processing, and protein structure decoding [16]. DL emphasizes learning from a sequence of successive representation layers and typically involves dozens or even hundreds of layers, with each layer learning increasingly abstract representations from the training data. Multilayer neural networks are used to train a DL model on a vast amount of medical imaging data to extract abstract properties from the sample data, accumulate quantitative experience for intelligent diagnosis, and ultimately categorize and predict diseases. The sophistication and capability of such network systems are greatly improved through DL-based training on large datasets of medical images, and these systems have the potential for broad application in precision medicine. DL is frequently used in the realm of medical imaging to create models for screening, diagnosis, and treatment planning.

    Within the broader field of DL research, computer vision is an important domain that has seen substantial breakthroughs in recent years [17]. Target detection, image segmentation, and image identification are just a few of the specific applications that use convolutional neural networks (CNNs), mainly to process images [18, 19]. The use of CNNs has significantly increased the precision and effectiveness of medical image analysis, making it possible to handle large volumes of data automatically. In many medical image-related tasks, such as pathology slice analysis, radiation therapy planning, diagnosis, image segmentation, feature extraction, image enhancement, image alignment, and multi-modal learning, CNNs have emerged as essential tools. They also make it easier to provide remote diagnostics and improve access to healthcare services, especially in areas with limited resources [20, 21]. To achieve early identification of diabetic retinopathy, for instance, some researchers have used an automated method that employs convolutional neural networks with synaptic plasticity during the backpropagation phase, improving the learning rate and accuracy. This method is built on the InceptionV3 architecture. On 3662 training images, the model achieved an accuracy of 95.56% and an F1 score of 94.24%, highlighting the advantages of quick convergence, simple training, and good performance [22]. Using chest X-ray images, a CNN method was also applied to the early identification and classification of 14 different lung diseases. With data augmentation techniques, the ResNet-152 model achieved an accuracy of 67%, whereas the InceptionV3 model, which included synaptic plasticity, achieved an accuracy of 95.56%. These results highlight how DL models can improve the accuracy and effectiveness of chest X-ray image processing [23].
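    As a concrete illustration of how such a pretrained network can be adapted to a screening task, the sketch below fine-tunes an ImageNet-pretrained InceptionV3 from torchvision for five-grade retinal image classification. The class count, the frozen backbone, and the dummy batch are assumptions for demonstration and do not reproduce the configuration of the cited studies.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load InceptionV3 pretrained on ImageNet and replace the classifier head with a
# 5-grade retinopathy head (the class count is an assumption for illustration).
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.aux_logits = False          # return only the main logits during training
model.AuxLogits = None            # drop the auxiliary classifier entirely
model.fc = nn.Linear(model.fc.in_features, 5)

# Freeze the convolutional backbone so only the new head is trained at first.
for name, p in model.named_parameters():
    if not name.startswith("fc"):
        p.requires_grad = False

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 299x299 fundus images.
images = torch.randn(4, 3, 299, 299)
labels = torch.randint(0, 5, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```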

    2.2 Developments in the Use of AI in Medical Imaging for Early Disease Diagnosis

    The early detection of medical issues can greatly benefit from AI, especially in the case of cancers, cardiovascular diseases, and neurological disorders [24, 25]. The ability to identify lesions at an early stage is critical for early diagnosis. Physicians can create a more precise treatment plan using imaging data to accurately determine the location and size of lesions. Additionally, AI can identify potential lesions earlier than traditional methods because it can extract minute lesion features from a wide range of complex visual data through the application of DL algorithms. AI is particularly adept at spotting anomalies and early lesions in tumor applications. Some researchers have used AI-assisted screening mammography to detect breast cancer at an early stage, allowing for rapid treatment. The study found that, compared with radiologists, the AI system reduced false positives (by 5.7% and 1.2%) and false negatives (by 9.4% and 2.7%). In an independent comparison with six radiologists, the AI system performed better, and in a simulated double-reading workflow it remained non-inferior while reducing the second reader's workload by 88%. This comprehensive assessment of AI systems paves the way for clinical studies that are expected to increase the accuracy and efficiency of breast cancer screening [26].

    Additionally, there is significant evidence of the effectiveness of using AI in the early diagnosis of lung cancer. In a cohort of 6716 people from the National Lung Cancer Screening Trial, an AI-assisted analysis of patients' current and prior CT images was found to accurately predict the risk of lung cancer with a sensitivity of 94.4% and specificity of 93.8%. The model demonstrated comparable performance in a separate clinical validation set. When compared with six radiologists in the absence of prior CT imaging, the model reduced false positives by 11% and false negatives by 5%. The model's performance was on par with that of the radiologists when prior CT imagery was available for comparison. These results suggest that DL models can improve the global acceptability, accuracy, and consistency of lung cancer screening [27].

    The potential of AI to aid in the identification of cardiovascular disease at an early stage has also been studied. An investigation of 22,641 adult patients who underwent electrocardiography revealed that patients in the AI-assisted group were more likely than those in the control group to receive a new diagnosis of low cardiac ejection fraction within 90 days. Moreover, a higher percentage of patients in the intervention group underwent echocardiography after a positive AI-assisted electrocardiography result. The study's findings suggest that AI algorithms may make it easier to identify low ejection fraction early in routine primary care patients, improving both diagnostic precision and patient care quality [28]. It should be noted that all of the aforementioned studies were based on the most basic examinations. Even so, they all showed a discernible improvement in early disease identification with AI support, indicating that AI is likely to play a major role in early disease screening and diagnosis.

    In recent years, imaging analysis techniques based on DL have made significant progress in disease stratification. For example, AI techniques have been applied to the stratified assessment of Parkinson's disease. Using noncontact posture extraction technology combined with geometric analysis and machine-learning algorithms, one system is able to precisely stratify motor features in patients with Parkinson's disease [29]. The model was evaluated on a large dataset comprising 7671 individuals, sourced from multiple hospitals in the United States and several public datasets, and the severity and progression of Parkinson's disease were estimated based on the Movement Disorder Society Unified Parkinson's Disease Rating Scale. The AI model uses an attention layer that allows its predictions to be interpreted with respect to sleep and electroencephalogram signals. Moreover, the model can assess Parkinson's disease in the home setting in a touchless manner by extracting breathing from radio waves that bounce off a person's body during sleep. The study demonstrated the feasibility of an objective, noninvasive, at-home assessment of Parkinson's disease and provided initial evidence that the proposed AI model could be useful for risk assessment before clinical diagnosis [29].

    2.3 AI Advances in Medical Imaging Disease Progression Prediction Studies

    In medical imaging, AI is becoming increasingly important for predicting the course of disease. AI can build models to forecast the potential course of a disease by examining a patient's imaging data and medical history. This is particularly important in the fields of oncology, neurological disorders, cardiovascular diseases, and chronic disease management.

    Using a DL model, some researchers investigated how CT imaging data can be used to predict the risk of recurrence and the survival of patients with lung cancer [30]. This model has shown great potential in numerous applications, such as lung nodule detection, benign and malignant classification, noninvasive prediction of gene and molecular markers, histological typing, and the prognostic evaluation of pathology images. In neurology, AI has been used to predict how Alzheimer's disease will progress. AI algorithms can identify the period during which moderate cognitive impairment becomes dementia by analyzing brain imaging and clinical data, allowing for early intervention [31]. In addition, an AI model that analyzes nocturnal respiratory signals was developed to identify and monitor the progression of Parkinson's disease. This method was tested on a large dataset that included information from 7671 people from various US hospitals. With areas under the receiver operating characteristic curve of 0.90 and 0.85, the AI model showed strong diagnostic capacity on an independent test set. Additionally, the model could determine the severity of the disease with a correlation of 0.94 against the widely used Movement Disorder Society Unified Parkinson's Disease Rating Scale. By examining radio wave reflections, the model was moreover able to evaluate disease changes in contactless home-based assessments and, using its attention layer, predict the course of Parkinson's disease. This work demonstrated that an AI-based, objective, noninvasive, and home-based evaluation of Parkinson's disease is feasible [29]. AI models are also used in the CT image analysis of chronic obstructive pulmonary disease (COPD) to detect and grade different subtypes of emphysema and to predict the severity and course of the disease. Specifically, the vision transformer model introduced in 2020 captures the spatial structure of images to accurately classify subtypes, which is crucial for COPD treatment [32].
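    The sketch below illustrates, in simplified form, the vision-transformer idea referenced above: an image is split into patches, each patch is embedded, and a transformer encoder relates the patches before a classification head predicts a label. The patch size, embedding dimension, and three-class emphysema head are illustrative assumptions, not the configuration of the cited COPD model.

```python
import torch
import torch.nn as nn

# Tiny vision-transformer sketch: patch embedding, transformer encoder, CLS head.
class TinyViT(nn.Module):
    def __init__(self, img=224, patch=16, dim=128, n_classes=3):
        super().__init__()
        self.to_patches = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        n_patches = (img // patch) ** 2
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):
        p = self.to_patches(x).flatten(2).transpose(1, 2)     # (B, N, dim) patch tokens
        tok = torch.cat([self.cls.expand(x.size(0), -1, -1), p], dim=1) + self.pos
        enc = self.encoder(tok)                               # relate patches globally
        return self.head(enc[:, 0])                           # classify from [CLS] token

logits = TinyViT()(torch.randn(2, 1, 224, 224))               # two dummy CT slices
print(logits.shape)  # torch.Size([2, 3])
```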

    The use of AI technology is becoming increasingly important in predicting the risk of cardiovascular disease. These methods can identify populations at increased risk more precisely than traditional approaches because they can analyze large amounts of data to find previously unknown relationships and patterns. For instance, a study by Gao Pei's group at the Peking University Clinical Research Institute showed that machine learning models can improve the precision of risk prediction for cardiovascular disease [33]. Additionally, risk prediction models have been built using AI approaches to forecast the risk of in-hospital death for patients experiencing cardiac arrest as well as the risk of death or readmission for patients suffering from acute heart failure [34]. Cardiac prediction systems built on the random forest algorithm, for example, can scan patient records to produce cardiopulmonary arrest risk scores [35]. The aforementioned research shows how AI can improve patient prognosis through accurate prediction and help doctors understand the dynamic course of a disease. Furthermore, by combining a patient's genetics, past imaging reports, family history, and personal medical history, DL can forecast a person's likelihood of developing a disease.
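    As a hedged illustration of the kind of record-based risk scoring described above, the sketch below trains a random forest on synthetic tabular features and outputs per-patient risk scores. The features, outcome label, and dataset are entirely synthetic and stand in for real clinical records.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for tabular patient records (e.g., vitals and lab values).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                       # 6 synthetic clinical features
risk = 1 / (1 + np.exp(-(1.5 * X[:, 0] + X[:, 2])))  # hidden ground-truth risk
y = (rng.random(1000) < risk).astype(int)            # binary outcome, e.g., arrest

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

scores = model.predict_proba(X_te)[:, 1]             # per-patient risk score
print("AUC:", round(roc_auc_score(y_te, scores), 3))
```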

    2.4 Advances in the Use of AI-Assisted Imaging to Assess the Results of Disease After Therapy

    Considerable progress has been made in the evaluation of treatment outcomes thanks to the use of AI-assisted medical imaging, especially in the case of tumor treatment, cardiovascular disorders, and neurological diseases. An integrated evaluation of the condition following treatment is made possible by AI's ability to analyze changes in images taken before and after treatment. AI technology has been particularly helpful in the treatment of breast cancer. The development of AI-assisted image-based tumor diagnosis technology is a promising way to improve the effectiveness and precision of image diagnosis. Because of its learning process, which includes the study of pre- and post-treatment image data and the creation of algorithmic models, AI is able to automatically recognize, segment, and diagnose tumor lesions. As a result, it is possible to assess a treatment's impact more effectively [36]. AI has numerous potential uses in the evaluation of treatment outcomes and the characterization of lung cancer. AI-based methods, such as DL models like U-Net and BCDU-Net, can replace manual segmentation, a laborious process with significant inter-observer variability, to objectively measure lung nodules and malignancies and extract radiomics properties to characterize tissues. Additionally, by fusing radiomics traits with clinical data, AI models have proven their capacity to forecast therapeutic responses (such as to immunotherapy and targeted medications). Furthermore, prognostic models powered by AI have been created to pinpoint high-risk individuals and tailor treatment plans [37]. AI technology can also analyze medical imaging data to help doctors estimate how well a cancer treatment is working. Moreover, AI technology can improve therapy approaches and more precisely identify the condition by creating new biomarkers, especially in the area of image-based early treatment response assessment [38]. These applications not only enable a significant improvement in the accuracy and objectivity of treatment response assessment but also help doctors make timely adjustments to treatment plans and improve patients' overall outcomes.
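    To illustrate the encoder-decoder idea behind segmentation models such as U-Net mentioned above, the sketch below defines a minimal U-Net-style network that outputs a per-pixel lesion logit map. The depth, channel counts, and single-channel input are simplified assumptions rather than any published configuration.

```python
import torch
import torch.nn as nn

# Two-conv block used by both the encoder and decoder paths.
def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)                      # 32 = upsampled 16 + skip 16
        self.out = nn.Conv2d(16, 1, 1)                # per-pixel lesion logit

    def forward(self, x):
        e1 = self.enc1(x)                             # high-resolution features
        e2 = self.enc2(self.pool(e1))                 # coarse features
        d = self.dec(torch.cat([self.up(e2), e1], 1)) # skip connection from e1
        return self.out(d)

mask_logits = TinyUNet()(torch.randn(1, 1, 128, 128))
print(mask_logits.shape)  # torch.Size([1, 1, 128, 128])
```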

    3 Discussion of the Potential Future Applications and Prospects of AI in the Field of Medical Imaging

    In the future, it is anticipated that the application of AI in medical imaging will allow for more precise prognosis assessment, treatment planning, and disease diagnosis. DL algorithms, in particular, can analyze and interpret large volumes of medical imaging data to increase the precision and effectiveness of diagnosis. The potential applications of AI in medical imaging are outlined below.

    3.1 Use of Multi-Modal Fusion Techniques in Various Applications

    Multi-modal AI is an intelligent system that integrates multi-source heterogeneous medical data. Using methods such as feature extraction and modality fusion, it achieves a comprehensive understanding and precise diagnosis of diseases. Its core lies in simulating a multi-disciplinary consultation model, fully leveraging the complementarity of various types of medical data, including but not limited to electronic health records, medical imaging, genomic data, and wearable device data [39]. Multi-modal AI enhances the accuracy and efficiency of disease diagnosis. For example, it significantly improves the accuracy and reliability of medical diagnoses by conducting a comprehensive analysis and diagnosis based on medical records, imaging data, and genetic information [40]. Some studies have shown that with the introduction of the transformer model, AI has evolved to be capable of analyzing the diverse and multi-modal data sources currently present in medicine.

    Multi-modal AI has demonstrated strong potential for increasing the accuracy of disease risk assessment and stratification as well as for monitoring the key drivers of cardiovascular and metabolic diseases, including blood pressure, sleep, stress, blood glucose control, weight, nutrition, and physical activity [41]. By integrating different imaging modalities such as CT, MRI, and PET, comprehensive information support can be provided for the detection, diagnosis, staging, and treatment planning of tumors [39]. Moreover, the fusion of histopathological slide data and genomic information can further reveal the molecular characteristics of tumors, thereby enabling precise personalized treatment [40]. In 2024, the development of the first foundation model in the field of ophthalmology paved the way for general-purpose medical AI that is adaptable to new tasks. Meanwhile, the rapid development of large language models, such as GPT-4 and Gemini, has yielded models adapted for medical specialties that have achieved encouraging results in clinical settings [42].

    It is expected that DL will enable further progress in the combination and evaluation of multi-modal data, improving the precision of diagnostic results. A new unsupervised technique called the multi-scale adaptive transformer (MATR) was proposed for fusing multi-modal medical images. Rather than using a standard convolution directly, this method introduces an adaptive convolution that adjusts the convolution kernel based on the global complementary context. To improve global semantic extraction, an adaptive transformer is used to further model long-range dependencies. The multi-scale network architecture is designed to fully capture useful multi-modal information at multiple scales. To further constrain information retention at the structural and feature levels, an objective function that combines structural loss and regional mutual information loss was developed. Comprehensive tests on popular datasets showed that the proposed approach outperforms other representative, state-of-the-art techniques in terms of quantitative evaluation and visual quality. The favorable fusion results demonstrate that MATR generalizes well, and the authors further adapted the proposed strategy to other biomedical image fusion challenges [43].
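    The sketch below shows only the general idea of multi-modal feature-level fusion: each modality is encoded separately and the features are concatenated before a prediction head. It is not the MATR architecture described above; the encoder design, modality names, and two-class head are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Simplified feature-level fusion of two imaging modalities (e.g., MRI and PET).
def encoder():
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())

class SimpleFusionNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc_a, self.enc_b = encoder(), encoder()   # one encoder per modality
        self.head = nn.Linear(32, n_classes)            # 16 + 16 fused features

    def forward(self, mri, pet):
        fused = torch.cat([self.enc_a(mri), self.enc_b(pet)], dim=1)
        return self.head(fused)

out = SimpleFusionNet()(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128))
print(out.shape)  # torch.Size([2, 2])
```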

    Integrating different neuroimaging perspectives is crucial in the field of neuroimaging to extract significant insights and gain a more comprehensive understanding of complex psychiatric diseases. If each modality is analyzed separately, it might provide only a limited understanding or miss important connections between different types of information. Through the synergistic integration of data from various modalities, multi-modal fusion approaches make it easier to identify underlying patterns and linkages that could otherwise go unnoticed. The authors of Ref. [44] presented data-driven multi-modal fusion techniques that can be used both with and without prior knowledge, with an emphasis on independent component analysis and canonical correlation analysis. These fusion methods are quite versatile and can be applied to a wide range of brain diseases by taking into account many factors such as genetic, environmental, cognitive, and treatment outcomes. In summary, the use of MRI fusion for neuropsychiatric conditions has provided important new insights into the brain underpinnings of mental illnesses. These findings could highlight subtle irregularities or provide potential biomarkers that could guide treatment planning and individualized medical interventions [45]. The continuous development of these technologies will make future breakthroughs in smart healthcare possible and lays the groundwork for more individualized and precise treatment.

    3.2 Real-Time Image Analysis

    Real-time image analysis is a method that collects and processes visual data as it is acquired, using contemporary technology. It is a commonly used approach in several fields, including biomedicine, clinical medicine, and aerospace. By continuously collecting and evaluating image data, real-time image analysis helps medical professionals make more accurate diagnoses and execute surgical procedures [46, 47]. Several important domains, such as navigation and localization, demonstrate the use of real-time imaging in diagnosis and therapy. During surgical procedures, real-time images can be obtained using CT, MRI, ultrasound, and other imaging technologies thanks to real-time image navigation technology. This helps to ensure the safety and effectiveness of the operation by allowing surgeons to monitor the position of the catheter and the distribution of embolic materials [44]. Machine vision technology allows clinicians to identify diseased tissues by providing real-time image analysis during surgery. This is especially helpful for navigational endoscopic surgery, as computer vision technologies help increase the precision of surgical procedures [48]. Additionally, noninvasive, real-time, and localized monitoring of cells and tissues outside the body is possible with optical molecular imaging technology, which supports the development of new drugs and the diagnosis of diseases [49]. In the diagnosis and treatment of soft tissue interventions, real-time imaging technology is essential. For example, the combination of AI and imaging technologies with surgical robots can significantly improve the accuracy and effectiveness of diagnosis and treatment [50]. Real-time image analysis is important because it can greatly improve the precision and effectiveness of medical diagnosis and therapy. Medical practitioners can create more rational and scientific treatment strategies by developing a more thorough grasp of the condition through the real-time acquisition and analysis of image data. This raises the bar for medical treatment and service quality overall by improving the effectiveness of surgical procedures while simultaneously reducing patient discomfort and risk [50].
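    A minimal sketch of a real-time analysis loop is shown below using OpenCV: frames are grabbed from a video source, analyzed, and annotated with the per-frame latency. The webcam index stands in for an imaging device feed, and the thresholding step is a placeholder for a real analysis model, not a clinical algorithm.

```python
import time
import cv2
import numpy as np

def analyze(frame: np.ndarray) -> np.ndarray:
    """Placeholder per-frame analysis: a simple brightness threshold."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    return mask

cap = cv2.VideoCapture(0)                      # stand-in for an imaging device feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    start = time.time()
    mask = analyze(frame)
    latency_ms = (time.time() - start) * 1000  # per-frame latency matters in real time
    cv2.putText(frame, f"{latency_ms:.1f} ms", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("feed", frame)
    cv2.imshow("analysis", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to stop
        break
cap.release()
cv2.destroyAllWindows()
```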

    3.3 Role of AI in Personalized Therapy

    Personalized medicine is being substantially aided by the convergence of AI and medical imaging, which is boosting the creation of customized treatment in a number of ways. These include precise diagnosis, customized treatment programs, intelligent services covering the entire care process, and the use of innovative technologies and interdisciplinary teamwork. AI technology can quickly identify and analyze organs, tissues, and vascular structures, improving the efficiency and accuracy of diagnosis. For example, the reading time for pathology examinations has been shortened from 30-40 min to 3 min, while the rate of correct diagnosis has increased to 90% [51]. AI also has the ability to create individualized treatment regimens based on a person's genetic makeup, lifestyle, and medical background, which could improve treatment precision and lower the possibility of misdiagnosis. AI is used throughout the service process in a variety of ways, from assisted detection and diagnosis to precise treatment. To guarantee the continuity of diagnosis and therapy, it manages imaging data in several modalities, including CT, ultrasound, MRI, and PET [52]. Simultaneously, interdisciplinary cooperation and the introduction of innovative instruments, such as wearables, virtual assistants, and predictive analytics models, have further improved patient experience and care quality, demonstrating the enormous potential of AI in medical imaging. AI-assisted strategies are already widely used in real-world clinical practice, especially in evaluating the efficacy of immunotherapy for breast cancer and gastric cancer [53, 54].

    3.4 Role of AI in Automatic Radiology Reports

    Automated radiology reporting can reduce the workload of radiologists. Researchers developed a state-of-the-art chest X-ray report generation system called Flamingo-CXR [55]. A group of board-certified radiologists was invited to conduct expert evaluations, and errors were observed in both human-written and Flamingo-CXR reports. Among the outpatient/inpatient cases, 24.8% contained clinically significant errors in both types of reports, 22.8% contained errors found only in Flamingo-CXR reports, and 14.0% contained errors found only in human-written reports. For the reports containing errors, an assistive setup was developed in which clinicians and AI collaborate on radiology report writing, indicating new possibilities for potential clinical applications. In the future, automated radiology reporting will play a crucial role in optimizing radiology workflows by reducing the workload of radiologists and improving diagnostic accuracy. It will also enhance patient care by enabling faster diagnosis and more standardized reporting.
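    The sketch below is a loose illustration of automated report drafting, using a publicly available image-to-text model from the Hugging Face transformers library as a stand-in. It is not Flamingo-CXR, which has not been publicly released, and the image path and workflow are assumptions; any machine-drafted text would still require radiologist review, as in the clinician-AI collaboration setup described above.

```python
from PIL import Image
from transformers import pipeline

# Generic image-to-text model used only as a stand-in for report drafting.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

image = Image.open("chest_xray.png").convert("RGB")    # hypothetical local image file
draft = captioner(image)[0]["generated_text"]

# The draft would feed a human review step before any report is finalized.
print("Draft finding:", draft)
```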

    4 Conclusion and Prospects

    This paper presented an overview of developments in the use of AI in medical imaging, highlighting the ways in which AI is progressively changing how diseases are diagnosed and treated. With the development of medical imaging technology, there has been a significant improvement in the accuracy and speed of diagnosis as well as a decrease in the possibility of physician error. This is accomplished by using DL and computer vision techniques to automate the processing of medical images acquired through imaging methods such as CT and X-rays. AI also has the ability to create individualized treatment regimens based on a person's genetic makeup, lifestyle, and medical background, which could improve treatment precision and lower the possibility of misdiagnosis. AI is used throughout the service process, from assisted detection and diagnosis to precise treatment, and it manages imaging data in several modalities, including CT, ultrasound, MRI, and PET, to maintain the continuity of diagnosis and therapy. At the same time, interdisciplinary cooperation and the introduction of innovative instruments, such as wearables, virtual assistants, and predictive analytics models, have further improved patient experience and care quality, demonstrating the enormous potential of AI in medical imaging.

    Author Contributions

    Yixin Yang: writing – original draft (equal), writing – review and editing (equal). Lan Ye: conceptualization (equal), funding acquisition (equal), validation (equal). Zhanhui Feng: conceptualization (equal), funding acquisition (equal), project administration (equal), supervision (equal), writing – review and editing (equal).

    Acknowledgments

    The authors have nothing to report.

      Ethics Statement

      The authors have nothing to report.

      Consent

      The authors have nothing to report.

      Conflicts of Interest

      The authors declare no conflicts of interest.

      Data Availability Statement

      Data sharing is not applicable to this article as no datasets were generated or analyzed.
