Page 88 of 2372362 results

Curriculum check, 2025-equipping radiology residents for AI challenges of tomorrow.

Venugopal VK, Kumar A, Tan MO, Szarf G

pubmed logopapers · Jun 9, 2025
The exponential rise in artificial intelligence (AI) tools for medical imaging is profoundly impacting the practice of radiology. With over 1000 FDA-cleared AI algorithms now approved for clinical use, many of them designed for radiologic tasks, the responsibility lies with training institutions to ensure that radiology residents are equipped not only to use AI systems but also to critically evaluate, monitor, and respond to their output in a safe, ethical manner. This review proposes a comprehensive framework for integrating AI into radiology residency curricula, targeting both essential competencies required of all residents and optional advanced skills for those interested in research or AI development. Core educational strategies include structured didactic instruction, hands-on lab exposure to commercial AI tools, case-based discussions, simulation-based clinical pathways, and teaching residents how to interpret model cards and regulatory documentation. Clinical examples such as stroke triage, urinary tract calculi detection, AI-CAD in mammography, and false-positive detection are used to anchor theory in practice. The article also addresses critical domains of AI governance: model transparency, ethical dilemmas, algorithmic bias, and the role of residents in human-in-the-loop oversight systems. It outlines mentorship and faculty development strategies to build institutional readiness and proposes a roadmap to future-proof radiology education, including exposure to foundation models, vision-language systems, multi-agent workflows, and global best practices in post-deployment AI monitoring. This pragmatic framework aims to serve as a guide for residency programs adapting to the next era of radiology practice.

Comparative accuracy of two commercial AI algorithms for musculoskeletal trauma detection in emergency radiographs.

Huhtanen JT, Nyman M, Blanco Sequeiros R, Koskinen SK, Pudas TK, Kajander S, Niemi P, Aronen HJ, Hirvonen J

pubmed logopapers · Jun 9, 2025
Missed fractures are the primary cause of interpretation errors in emergency radiology, and artificial intelligence has recently shown great promise in radiograph interpretation. This study compared the diagnostic performance of two AI algorithms, BoneView and RBfracture, in detecting traumatic abnormalities (fractures and dislocations) in musculoskeletal (MSK) radiographs. The AI algorithms analyzed 998 radiographs (585 normal, 413 abnormal) against the consensus of two MSK specialists. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, and interobserver agreement (Cohen's kappa) were calculated; 95% confidence intervals (CI) assessed robustness, and McNemar's tests compared sensitivity and specificity between the AI algorithms. BoneView demonstrated a sensitivity of 0.893 (95% CI: 0.860-0.920), specificity of 0.885 (95% CI: 0.857-0.909), PPV of 0.846, NPV of 0.922, and accuracy of 0.889. RBfracture demonstrated a sensitivity of 0.872 (95% CI: 0.836-0.901), specificity of 0.892 (95% CI: 0.865-0.915), PPV of 0.851, NPV of 0.908, and accuracy of 0.884. No statistically significant differences were found in sensitivity (p = 0.151) or specificity (p = 0.708). Kappa was 0.81 (95% CI: 0.77-0.84), indicating almost perfect agreement between the two AI algorithms. Performance was similar in adults and children. Both AI algorithms struggled more with subtle abnormalities, which constituted 66% and 70% of false negatives but only 20% and 18% of true positives for the two algorithms, respectively (p < 0.001). BoneView and RBfracture exhibited high diagnostic performance and almost perfect agreement, with consistent results across adults and children, highlighting the potential of AI in emergency radiograph interpretation.
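The metrics reported for both algorithms follow directly from a 2x2 confusion matrix, and Cohen's kappa from the paired agreement table. A minimal sketch in Python; the confusion-matrix counts below are back-calculated from BoneView's reported rates (998 radiographs, 585 normal, 413 abnormal), not taken from the paper, so treat them as a reconstruction:

```python
# Diagnostic metrics from a 2x2 confusion matrix. Counts are
# back-calculated from BoneView's reported rates (reconstruction,
# not the study's raw data).
def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    ppv = tp / (tp + fp)                         # positive predictive value
    npv = tn / (tn + fn)                         # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, ppv, npv, accuracy

# Cohen's kappa for agreement between two binary readers, from the
# paired 2x2 table (a = both positive, d = both negative).
def cohens_kappa(a, b, c, d):
    n = a + b + c + d
    po = (a + d) / n                                       # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

sens, spec, ppv, npv, acc = diagnostic_metrics(tp=369, fp=67, fn=44, tn=518)
print(round(sens, 3), round(spec, 3), round(ppv, 3), round(npv, 3), round(acc, 3))
# → 0.893 0.885 0.846 0.922 0.889 (matches the reported BoneView figures)
```

With full agreement (off-diagonal cells zero), `cohens_kappa` returns 1.0; values above 0.80, as reported here, are conventionally read as almost perfect agreement.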

Hybrid adaptive attention deep supervision-guided U-Net for breast lesion segmentation in ultrasound computed tomography images.

Liu X, Zhou L, Cai M, Zheng H, Zheng S, Wang X, Wang Y, Ding M

pubmed logopapers · Jun 9, 2025
Breast cancer is the second deadliest cancer among women after lung cancer. Although the overall breast cancer death rate has continued to decline over the past 20 years, death rates for stage III and IV disease remain high. An automated breast cancer diagnosis system is therefore of great significance for early screening of breast lesions and improving patient survival. This paper proposes a deep learning network, the hybrid adaptive attention deep supervision-guided U-Net (HAA-DSUNet), for breast lesion segmentation in breast ultrasound computed tomography (BUCT) images. It replaces the standard convolution modules of U-Net with hybrid adaptive attention modules (HAAM), aiming to enlarge the receptive field and probe rich global features while preserving fine details. Moreover, we apply a contrast loss to intermediate outputs as deep supervision to minimize information loss during upsampling. Finally, the segmentation predictions are post-processed by filtering, segmentation, and morphological operations to obtain the final results. In experiments on our two BUCT image datasets, HCH and HCH-PHMC, the highest Dice score was 0.8729 and the IoU was 0.8097, outperforming all other state-of-the-art methods. This demonstrates that our algorithm is effective at segmenting lesions from BUCT images.
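Dice and IoU are the standard overlap measures behind the figures above; a minimal sketch on toy binary masks (the masks are invented for illustration, not the paper's data):

```python
# Dice score and IoU for binary segmentation masks (toy example,
# not the paper's BUCT data). Masks are flat 0/1 sequences.
def dice_iou(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))  # overlapping positives
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - tp
    dice = 2 * tp / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = tp / union if union else 1.0
    return dice, iou

pred  = [1, 1, 1, 0, 0, 0, 1, 0]
truth = [1, 1, 0, 0, 0, 0, 1, 1]
dice, iou = dice_iou(pred, truth)
print(dice, iou)  # → 0.75 0.6 (Dice = 2*3/(4+4), IoU = 3/5)
```

The two measures are monotonically related (Dice = 2·IoU/(1+IoU)), which is why papers often report both: Dice weights the overlap more generously than IoU for the same prediction.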

Evaluation of AI diagnostic systems for breast ultrasound: comparative analysis with radiologists and the effect of AI assistance.

Tsuyuzaki S, Fujioka T, Yamaga E, Katsuta L, Mori M, Yashima Y, Hara M, Sato A, Onishi I, Tsukada J, Aruga T, Kubota K, Tateishi U

pubmed logopapers · Jun 9, 2025
The purpose of this study was to evaluate the diagnostic accuracy of an artificial intelligence (AI)-based computer-aided diagnosis (CADx) system for breast ultrasound, compare its performance with that of radiologists, and assess the effect of AI-assisted diagnosis, focusing on the system's ability to differentiate between benign and malignant breast masses among Japanese patients. This retrospective study included 171 breast mass ultrasound images (92 benign, 79 malignant). The AI system, BU-CAD™, provided Breast Imaging Reporting and Data System (BI-RADS) categorization, which was compared with the performance of three radiologists. Diagnostic accuracy, sensitivity, specificity, and area under the curve (AUC) were analyzed. Radiologists' diagnostic performance with and without AI assistance was also compared, and their reading time was measured with a stopwatch. The AI system demonstrated a sensitivity of 91.1%, specificity of 92.4%, and an AUC of 0.948. It showed diagnostic performance comparable to that of Radiologist 1, who had 10 years of experience in breast imaging (0.948 vs. 0.950; p = 0.893), and superior to Radiologist 2 (7 years of experience; 0.948 vs. 0.881; p = 0.015) and Radiologist 3 (3 years of experience; 0.948 vs. 0.832; p = 0.001). AI assistance significantly improved the AUC for Radiologists 2 and 3 (p = 0.001 and 0.005, respectively) but not for Radiologist 1 (p = 0.139), and it reduced the reading time for all radiologists, including Radiologist 1. The AI system significantly improved diagnostic efficiency and accuracy, particularly for junior radiologists, highlighting its potential clinical utility in breast ultrasound diagnostics.
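The AUC values compared above have a simple nonparametric reading: the probability that a randomly chosen malignant case receives a higher score than a randomly chosen benign one (the Mann-Whitney interpretation). A sketch with invented scores, not the study's data:

```python
# AUC as the Mann-Whitney statistic: the fraction of (positive, negative)
# score pairs in which the positive case scores higher, counting ties
# as one half. Scores below are invented for illustration.
def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

malignant = [0.9, 0.8, 0.75, 0.6]   # hypothetical CADx scores
benign = [0.7, 0.4, 0.3, 0.2]
print(auc(malignant, benign))  # → 0.9375 (15 of 16 pairs correctly ordered)
```

An AUC of 0.948, as reported for the system, means roughly 95% of malignant-benign pairs would be ranked correctly by the score; comparing two correlated AUCs on the same cases, as the study does, typically uses the DeLong test.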

Integration of artificial intelligence into cardiac ultrasonography practice.

Shaulian SY, Gala D, Makaryus AN

pubmed logopapers · Jun 9, 2025
Over the last several decades, echocardiography has made numerous technological advancements, with one of the most significant being the integration of artificial intelligence (AI). AI algorithms assist novice operators to acquire diagnostic-quality images and automate complex analyses. This review explores the integration of AI into various echocardiographic modalities, including transthoracic, transesophageal, intracardiac, and point-of-care ultrasound. It examines how AI enhances image acquisition, streamlines analysis, and improves diagnostic performance across routine, critical care, and complex cardiac imaging. To conduct this review, PubMed was searched using targeted keywords aligned with each section of the paper, focusing primarily on peer-reviewed articles published from 2020 onward. Earlier studies were included when foundational or frequently cited. The findings were organized thematically to highlight clinical relevance and practical applications. Challenges persist in clinical application, including algorithmic bias, ethical concerns, and the need for clinician training and AI oversight. Despite these, AI's potential to revolutionize cardiovascular care through precision and accessibility remains unparalleled, with benefits likely to far outweigh obstacles if appropriately applied and implemented in cardiac ultrasonography.

Addressing Limited Generalizability in Artificial Intelligence-Based Brain Aneurysm Detection for Computed Tomography Angiography: Development of an Externally Validated Artificial Intelligence Screening Platform.

Pettersson SD, Filo J, Liaw P, Skrzypkowska P, Klepinowski T, Szmuda T, Fodor TB, Ramirez-Velandia F, Zieliński P, Chang YM, Taussky P, Ogilvy CS

pubmed logopapers · Jun 9, 2025
Brain aneurysm detection models, both in the literature and in industry, continue to lack generalizability during external validation, limiting clinical adoption. This challenge is largely due to extensive exclusion criteria during training data selection. The authors developed the first model to achieve generalizability using novel methodological approaches. Computed tomography angiography (CTA) scans from 2004 to 2023 at the study institution were used for model training, including untreated unruptured intracranial aneurysms without extensive cerebrovascular disease. External validation used digital subtraction angiography-verified CTAs from an international center, while prospective validation occurred at the internal institution over 9 months. A public web platform was created for further model validation. A total of 2194 CTA scans were used for this study. The training cohort included 1587 patients and 1920 aneurysms with a mean size of 5.3 ± 3.7 mm; the mean patient age was 69.7 ± 14.9 years, and 1203 (75.8%) were female. The model achieved a training Dice score of 0.88 and a validation Dice score of 0.76. Prospective internal validation on 304 scans yielded a lesion-level (LL) sensitivity of 82.5% (95% CI: 75.5-87.9) and specificity of 89.6% (95% CI: 84.5-93.2). External validation on 303 scans demonstrated comparable LL sensitivity and specificity of 83.5% (95% CI: 75.1-89.4) and 92.9% (95% CI: 88.8-95.6), respectively. Radiologist LL sensitivity at the external center was 84.5% (95% CI: 76.2-90.2), and 87.5% of the aneurysms they missed were detected by the model. The authors developed the first publicly testable artificial intelligence model for aneurysm detection on CTA scans, demonstrating generalizability and state-of-the-art performance in external validation. The model addresses key limitations of previous efforts and enables broader validation through a web-based platform.
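Confidence intervals like those attached to the lesion-level sensitivity and specificity above are often Wilson score intervals for a binomial proportion; that choice is an assumption here, since the paper does not state its interval method. A sketch in pure Python with illustrative counts:

```python
import math

# Wilson score 95% CI for a binomial proportion, e.g. lesion-level
# sensitivity = detected / total lesions. Using Wilson is an assumption;
# the paper does not specify its interval method. Counts are illustrative.
def wilson_ci(successes, trials, z=1.96):
    p = successes / trials
    denom = 1 + z ** 2 / trials
    centre = (p + z ** 2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(
        p * (1 - p) / trials + z ** 2 / (4 * trials ** 2)
    )
    return centre - half, centre + half

lo, hi = wilson_ci(successes=99, trials=120)  # hypothetical 82.5% sensitivity
print(round(lo, 3), round(hi, 3))
```

Unlike the naive Wald interval, the Wilson interval stays inside [0, 1] and behaves sensibly for proportions near the boundaries, which matters for the high specificities reported here.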

Developing a Deep Learning Radiomics Model Combining Lumbar CT, Multi-Sequence MRI, and Clinical Data to Predict High-Risk Adjacent Segment Degeneration Following Lumbar Fusion: A Retrospective Multicenter Study.

Zou C, Wang T, Wang B, Fei Q, Song H, Zang L

pubmed logopapers · Jun 9, 2025
Study design: Retrospective cohort study. Objectives: To develop and validate a model combining clinical data, deep learning radiomics (DLR), and radiomic features from lumbar CT and multi-sequence MRI to predict patients at high risk of adjacent segment degeneration (ASDeg) after lumbar fusion. Methods: This study included 305 patients undergoing preoperative CT and MRI for lumbar fusion surgery, divided into training (n = 192), internal validation (n = 83), and external test (n = 30) cohorts. A Vision Transformer 3D-based deep learning model was developed. LASSO regression was used for feature selection to establish a logistic regression model. ASDeg was defined as adjacent segment degeneration at radiological follow-up 6 months post-surgery. Fourteen machine learning algorithms were evaluated using ROC curves, and a combined model integrating clinical variables was developed. Results: After feature selection, 21 radiomic, 12 DLR, and 3 clinical features were retained. The linear support vector machine performed best for the radiomic model, and AdaBoost was optimal for the DLR model. A combined model using these together with the clinical features was developed, with the multi-layer perceptron as the most effective algorithm. The areas under the curve for the training, internal validation, and external test cohorts were 0.993, 0.936, and 0.835, respectively. The combined model outperformed the combined predictions of two surgeons. Conclusions: This study developed and validated a combined model integrating clinical, DLR, and radiomic features, demonstrating high predictive performance for identifying patients at high risk of ASDeg after lumbar fusion based on clinical data, CT, and MRI. The model could potentially reduce ASDeg-related revision surgeries and thereby the burden on public healthcare.

Automated Vessel Occlusion Software in Acute Ischemic Stroke: Pearls and Pitfalls.

Aziz YN, Sriwastwa A, Nael K, Harker P, Mistry EA, Khatri P, Chatterjee AR, Heit JJ, Jadhav A, Yedavalli V, Vagal AS

pubmed logopapers · Jun 9, 2025
Software programs leveraging artificial intelligence to detect vessel occlusions are now widely available to aid in stroke triage. Given their proprietary use, there is a surprising lack of information regarding how the software works, who is using the software, and their performance in an unbiased real-world setting. In this educational review of automated vessel occlusion software, we discuss emerging evidence of their utility, underlying algorithms, real-world diagnostic performance, and limitations. The intended audience includes specialists in stroke care in neurology, emergency medicine, radiology, and neurosurgery. Practical tips for onboarding and utilization of this technology are provided based on the multidisciplinary experience of the authorship team.

A Dynamic Contrast-Enhanced MRI-Based Vision Transformer Model for Distinguishing HER2-Zero, -Low, and -Positive Expression in Breast Cancer and Exploring Model Interpretability.

Zhang X, Shen YY, Su GH, Guo Y, Zheng RC, Du SY, Chen SY, Xiao Y, Shao ZM, Zhang LN, Wang H, Jiang YZ, Gu YJ, You C

pubmed logopapers · Jun 9, 2025
Novel antibody-drug conjugates highlight the benefits for breast cancer patients with low human epidermal growth factor receptor 2 (HER2) expression. This study aims to develop and validate a Vision Transformer (ViT) model based on dynamic contrast-enhanced MRI (DCE-MRI) to classify HER2-zero, -low, and -positive breast cancer patients and to explore its interpretability. The model is trained and validated on early enhancement MRI images from 708 patients in the FUSCC cohort and tested on 80 and 101 patients in the GFPH cohort and FHCMU cohort, respectively. The ViT model achieves AUCs of 0.80, 0.73, and 0.71 in distinguishing HER2-zero from HER2-low/positive tumors across the validation set of the FUSCC cohort and the two external cohorts. Furthermore, the model effectively classifies HER2-low and HER2-positive cases, with AUCs of 0.86, 0.80, and 0.79. Transcriptomics analysis identifies significant biological differences between HER2-low and HER2-positive patients, particularly in immune-related pathways, suggesting potential therapeutic targets. Additionally, Cox regression analysis demonstrates that the prediction score is an independent prognostic factor for overall survival (HR, 2.52; p = 0.007). These findings provide a non-invasive approach for accurately predicting HER2 expression, enabling more precise patient stratification to guide personalized treatment strategies. Further prospective studies are warranted to validate its clinical utility.

Diagnostic and Technological Advances in Magnetic Resonance (Focusing on Imaging Technique and the Gadolinium-Based Contrast Media), Computed Tomography (Focusing on Photon Counting CT), and Ultrasound-State of the Art.

Runge VM, Heverhagen JT

pubmed logopapers · Jun 9, 2025
Magnetic resonance continues to evolve and advance as a critical imaging modality for disease diagnosis and monitoring. Hardware and software advances continue to propel this modality to the forefront of diagnostic imaging. Next-generation MR contrast media, specifically gadolinium chelates with improved relaxivity and stability (relative to the provided contrast effect), have emerged, providing a further boost to the field. Concern regarding gadolinium deposition in the body, primarily with the weaker gadolinium chelates (which have now been removed from the market, at least in Europe), remains at the forefront of clinicians' minds and has driven renewed interest in the possible development of manganese-based contrast media. The development and clinical introduction of photon counting CT have made possible a further major advance in CT image quality, along with the potential for decreasing radiation dose. The possibility of major clinical advances in thoracic, cardiac, and musculoskeletal imaging was recognized first, and its broader impact, across all organ systems, is now also recognized. The utility of routinely acquiring full spectral multi-energy data, without penalty in time or radiation dose, is now recognized as an additional major advance made possible by photon counting CT. Artificial intelligence is now being used in the background across most imaging platforms and modalities, enabling further advances in imaging technique and image quality, although this field is nowhere near realizing its full potential. Last but not least, the field of ultrasound is on the cusp of further major advances in availability (with the development of very low-cost systems) and a possible new generation of microbubble contrast media.
