AI-based diagnosis of acute aortic syndrome from noncontrast CT.

Hu Y, Xiang Y, Zhou YJ, He Y, Lang D, Yang S, Du X, Den C, Xu Y, Wang G, Ding Z, Huang J, Zhao W, Wu X, Li D, Zhu Q, Li Z, Qiu C, Wu Z, He Y, Tian C, Qiu Y, Lin Z, Zhang X, Hu L, He Y, Yuan Z, Zhou X, Fan R, Chen R, Guo W, Xu J, Zhang J, Mok TCW, Li Z, Kalra MK, Lu L, Xiao W, Li X, Bian Y, Shao C, Wang G, Lu W, Huang Z, Xu M, Zhang H

PubMed · Aug 20, 2025
The accurate and timely diagnosis of acute aortic syndrome (AAS) in patients presenting with acute chest pain remains a clinical challenge. Aortic computed tomography (CT) angiography is the imaging protocol of choice in patients with suspected AAS. However, owing to economic and workflow constraints in China, most suspected patients initially undergo noncontrast CT, and CT angiography is reserved for those at higher risk. Although noncontrast CT can reveal specific signs indicative of AAS, its diagnostic efficacy when used alone has not been well characterized. Here we present iAorta, an artificial intelligence-based warning system that identifies AAS on noncontrast CT in China, demonstrates remarkably high accuracy, and provides clinicians with interpretable warnings. iAorta was evaluated through a comprehensive step-wise study. In the multicenter retrospective study (n = 20,750), iAorta achieved a mean area under the receiver operating characteristic curve of 0.958 (95% confidence interval 0.950-0.967). In the large-scale real-world study (n = 137,525), iAorta demonstrated consistently high performance across various noncontrast CT protocols, achieving a sensitivity of 0.913-0.942 and a specificity of 0.991-0.993. In the prospective comparative study (n = 13,846), iAorta significantly shortened the time to the correct diagnostic pathway for patients with an initial false suspicion, from an average of 219.7 (115-325) min to 61.6 (43-89) min. Furthermore, in a prospective pilot deployment, iAorta correctly identified 21 of 22 patients with AAS among 15,584 consecutive patients who presented with acute chest pain and underwent a noncontrast CT protocol in the emergency department. For these 21 AAS-positive patients, the average time to diagnosis was 102.1 (75-133) min.
Finally, iAorta may help prevent delayed or missed diagnoses of AAS in settings where noncontrast CT remains the only feasible initial imaging modality, such as resource-limited regions or patients who cannot receive, or did not receive, intravenous contrast.
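The headline metric above, area under the receiver operating characteristic curve (AUC), can be computed directly from per-case scores via the rank (Mann-Whitney) formulation. A minimal illustrative sketch with made-up labels and scores, not part of the iAorta system:

```python
def roc_auc(y_true, scores):
    """AUC as the probability that a random positive case scores above a
    random negative case (ties count as 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two positive and two negative cases
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

The pairwise form avoids constructing the ROC curve explicitly and is exact for any score ordering.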

[The application effect of Generative Pre-Treatment Tool of Skeletal Pathology in functional lumbar spine radiographic analysis].

Yilihamu Y, Zhao K, Zhong H, Feng SQ

PubMed · Aug 20, 2025
<b>Objective:</b> To investigate the effectiveness of the artificial intelligence (AI)-based Generative Pre-Treatment Tool of Skeletal Pathology (GPTSP) in functional lumbar radiographic measurements. <b>Methods:</b> This retrospective case series reviewed the clinical and imaging data of 34 patients who underwent lumbar dynamic X-ray radiography at the Department of Orthopedics, the Second Hospital of Shandong University, from September 2021 to June 2023. Thirteen patients were male and 21 were female, with an age of (68.0±8.0) years (range: 55 to 88 years). The AI model of the GPTSP system was built on the YOLOv8 model with a multi-dimensional constrained loss function, incorporating Kullback-Leibler divergence to quantify the anatomical distribution deviation of lumbar intervertebral space detection boxes, along with a global dynamic attention mechanism. The model identifies lumbar vertebral body edge points and measures the lumbar intervertebral space. The spondylolisthesis index, lumbar index, and lumbar intervertebral angles were measured using three methods: manual measurement by doctors, predefined annotated measurement, and AI-assisted measurement. Consistency between the doctors and the AI model was analyzed using the intra-class correlation coefficient (ICC) and the Kappa coefficient. <b>Results:</b> AI-assisted physician measurement time was (1.5±0.1) seconds (range: 1.3 to 1.7 seconds), shorter than both the manual measurement time ((2 064.4±108.2) seconds, range: 1 768.3 to 2 217.6 seconds) and the predefined annotation measurement time ((602.0±48.9) seconds, range: 503.9 to 694.4 seconds).
Kappa values between physicians' diagnoses and the AI model's diagnoses (on the GPTSP platform) for the spondylolisthesis index, lumbar index, and intervertebral angles measured by the three methods were 0.95, 0.92, and 0.82 (all <i>P</i><0.01), with ICC values consistently exceeding 0.90, indicating high consistency. Taking the doctors' manual measurement as the reference, AI assistance reduced the doctors' average annotation error from 2.52 mm (range: 0.01 to 6.78 mm) with predefined label measurement to 1.47 mm (range: 0 to 5.03 mm). <b>Conclusions:</b> The GPTSP system enhanced efficiency in functional lumbar analysis. The AI model demonstrated high consistency in annotation and measurement results, showing strong potential as a reliable clinical auxiliary tool.
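The Kappa coefficients reported above measure chance-corrected agreement between paired categorical ratings. A generic sketch with hypothetical rating vectors (this is not the GPTSP evaluation code):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Agreement expected by chance from each rater's marginal label frequencies
    p_exp = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical binary gradings (e.g., finding present/absent) from doctor vs. AI
print(cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0]))  # → 0.5
```

Values near 1 indicate agreement well beyond chance, matching how the 0.95/0.92/0.82 figures above are read.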

Objective Task-Based Evaluation of Quantitative Medical Imaging Methods: Emerging Frameworks and Future Directions.

Liu Y, Xia H, Obuchowski NA, Laforest R, Rahmim A, Siegel BA, Jha AK

PubMed · Aug 19, 2025
Quantitative imaging (QI) holds significant potential across diverse clinical applications. For clinical translation of QI, rigorous evaluation on clinically relevant tasks is essential. This article outlines four emerging evaluation frameworks: virtual imaging trials, evaluation with clinical data in the absence of ground truth, evaluation for joint detection and quantification tasks, and evaluation of QI methods that produce multidimensional outputs. These frameworks are presented in the context of recent advancements in PET, such as long axial field of view PET and the development of artificial intelligence algorithms for PET. We conclude by discussing future research directions for evaluating QI methods.

Machine Learning in Venous Thromboembolism - Why and What Next?

Gurumurthy G, Kisiel F, Reynolds L, Thomas W, Othman M, Arachchillage DJ, Thachil J

PubMed · Aug 19, 2025
Venous thromboembolism (VTE) remains a leading cause of cardiovascular morbidity and mortality, despite advances in imaging and anticoagulation. VTE arises from diverse and overlapping risk factors, such as inherited thrombophilia, immobility, malignancy, surgery or trauma, pregnancy, hormonal therapy, obesity, chronic medical conditions (e.g., heart failure, inflammatory disease), and advancing age. Clinicians therefore face challenges in balancing the benefits of thromboprophylaxis against the bleeding risk. Existing clinical risk scores often exhibit only modest discrimination and calibration across heterogeneous patient populations. Machine learning (ML) has emerged as a promising tool to address these limitations. In imaging, convolutional neural networks and hybrid algorithms can detect VTE on CT pulmonary angiography with areas under the curve (AUCs) of 0.85 to 0.96. In surgical cohorts, gradient-boosting models outperform traditional risk scores, achieving AUCs between 0.70 and 0.80 in predicting postoperative VTE. In cancer-associated venous thrombosis, advanced ML models demonstrate AUCs between 0.68 and 0.82. However, concerns about bias and external validation persist. Bleeding risk prediction remains challenging in extended anticoagulation settings, where ML models often only match conventional models. Neural networks predicting recurrent VTE achieved AUCs of 0.93 to 0.99 in initial studies, but these lack transparency and prospective validation. Most ML models suffer from limited external validation, "black box" algorithms, and integration hurdles within clinical workflows. Future efforts should focus on standardized reporting (e.g., Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis [TRIPOD]-ML), transparent model interpretation, prospective impact assessments, and seamless incorporation into electronic health records to realize the full potential of ML in VTE.

Fracture Detection and Localisation in Wrist and Hand Radiographs using Detection Transformer Variants

Aditya Bagri, Vasanthakumar Venugopal, Anandakumar D, Revathi Ezhumalai, Kalyan Sivasailam, Bargava Subramanian, VarshiniPriya, Meenakumari K S, Abi M, Renita S

arXiv preprint · Aug 19, 2025
Background: Accurate diagnosis of wrist and hand fractures using radiographs is essential in emergency care, but manual interpretation is slow and prone to errors. Transformer-based models show promise in improving medical image analysis, but their application to extremity fractures is limited. This study addresses this gap by applying object detection transformers to wrist and hand X-rays. Methods: We fine-tuned the RT-DETR and Co-DETR models, pre-trained on COCO, using over 26,000 annotated X-rays from a proprietary clinical dataset. Each image was labeled for fracture presence with bounding boxes. A ResNet-50 classifier was trained on cropped regions to refine abnormality classification. Supervised contrastive learning was used to enhance embedding quality. Performance was evaluated using AP@50, precision, and recall metrics, with additional testing on real-world X-rays. Results: RT-DETR achieved moderate performance (AP@50 = 0.39), while Co-DETR outperformed it with an AP@50 of 0.615 and faster convergence. The integrated pipeline achieved 83.1% accuracy, 85.1% precision, and 96.4% recall on real-world X-rays, demonstrating strong generalization across 13 fracture types. Visual inspection confirmed accurate localization. Conclusion: Our Co-DETR-based pipeline demonstrated high accuracy and clinical relevance in wrist and hand fracture detection, offering reliable localization and differentiation of fracture types. It is scalable, efficient, and suitable for real-time deployment in hospital workflows, improving diagnostic speed and reliability in musculoskeletal radiology.
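The AP@50 metric above counts a predicted box as a true positive when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal IoU sketch, with boxes as (x1, y1, x2, y2) corner tuples and illustrative coordinates:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # zero if boxes are disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred, gt = (10, 10, 50, 50), (20, 20, 60, 60)
print(iou(pred, gt) >= 0.5)  # → False (IoU ≈ 0.39, so no match at the AP@50 threshold)
```

AP@50 then averages precision over recall levels after matching predictions to ground truth with this 0.5 cutoff.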

A Systematic Study of Deep Learning Models and xAI Methods for Region-of-Interest Detection in MRI Scans

Justin Yiu, Kushank Arora, Daniel Steinberg, Rohit Ghiya

arXiv preprint · Aug 19, 2025
Magnetic Resonance Imaging (MRI) is an essential diagnostic tool for assessing knee injuries. However, manual interpretation of MRI slices remains time-consuming and prone to inter-observer variability. This study presents a systematic evaluation of various deep learning architectures combined with explainable AI (xAI) techniques for automated region of interest (ROI) detection in knee MRI scans. We investigate both supervised and self-supervised approaches, including ResNet50, InceptionV3, Vision Transformers (ViT), and multiple U-Net variants augmented with multi-layer perceptron (MLP) classifiers. To enhance interpretability and clinical relevance, we integrate xAI methods such as Grad-CAM and Saliency Maps. Model performance is assessed using AUC for classification and PSNR/SSIM for reconstruction quality, along with qualitative ROI visualizations. Our results demonstrate that ResNet50 consistently excels in classification and ROI identification, outperforming transformer-based models under the constraints of the MRNet dataset. While hybrid U-Net + MLP approaches show potential for leveraging spatial features in reconstruction and interpretability, their classification performance remains lower. Grad-CAM consistently provided the most clinically meaningful explanations across architectures. Overall, CNN-based transfer learning emerges as the most effective approach for this dataset, while future work with larger-scale pretraining may better unlock the potential of transformer models.
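Grad-CAM, mentioned above, weights each convolutional feature map by the global-average-pooled gradient of the target score and applies a ReLU to the weighted sum. A framework-free sketch of that weighting step on synthetic activation and gradient arrays (the arrays are made up; a real pipeline would obtain them from backpropagation hooks on the chosen layer):

```python
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: (channels, H, W) arrays for one conv layer."""
    weights = gradients.mean(axis=(1, 2))             # alpha_k: GAP of gradients per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels -> (H, W)
    return np.maximum(cam, 0.0)                       # ReLU keeps positively contributing regions

# Synthetic two-channel example: only channel 0 receives gradient
acts = np.array([[[1.0, -1.0], [0.0, 2.0]],
                 [[0.0,  1.0], [1.0, 0.0]]])
grads = np.array([[[1.0, 1.0], [1.0, 1.0]],
                  [[0.0, 0.0], [0.0, 0.0]]])
print(grad_cam(acts, grads))  # → [[1. 0.] [0. 2.]]
```

The resulting map is typically upsampled to image resolution and overlaid as a heatmap, which is how the "clinically meaningful explanations" above are visualized.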

Deep learning for detection and diagnosis of intrathoracic lymphadenopathy from endobronchial ultrasound multimodal videos: A multi-center study.

Chen J, Li J, Zhang C, Zhi X, Wang L, Zhang Q, Yu P, Tang F, Zha X, Wang L, Dai W, Xiong H, Sun J

PubMed · Aug 19, 2025
Convex probe endobronchial ultrasound (CP-EBUS) ultrasonographic features are important for diagnosing intrathoracic lymphadenopathy. Conventional methods for CP-EBUS imaging analysis rely heavily on physician expertise. To overcome this obstacle, we propose a deep learning-aided diagnostic system (AI-CEMA) to automatically select representative images, identify lymph nodes (LNs), and differentiate benign from malignant LNs based on CP-EBUS multimodal videos. AI-CEMA is first trained on 1,006 LNs from a single center, validated in a retrospective study, and then demonstrated in a prospective multi-center study on 267 LNs. AI-CEMA achieves an area under the curve (AUC) of 0.8490 (95% confidence interval [CI], 0.8000-0.8980), which is comparable to experienced experts (AUC, 0.7847 [95% CI, 0.7320-0.8373]; p = 0.080). Additionally, AI-CEMA is successfully transferred to a pulmonary lesion diagnosis task and obtains a commendable AUC of 0.8192 (95% CI, 0.7676-0.8709). In conclusion, AI-CEMA shows great potential in the clinical diagnosis of intrathoracic lymphadenopathy and pulmonary lesions by providing automated, noninvasive, and expert-level diagnosis.
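Confidence intervals like those quoted above for AI-CEMA's AUC are often obtained by bootstrap resampling of the test set. A generic percentile-bootstrap sketch with toy data (the statistic, resample count, and values are arbitrary; this is not the authors' procedure):

```python
import random

def bootstrap_ci(values, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for statistic `stat` over `values`."""
    rng = random.Random(seed)
    # Resample with replacement, recompute the statistic, take percentiles
    reps = sorted(stat([rng.choice(values) for _ in values])
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy: 95% CI for the mean of per-case scores
scores = [0.7, 0.8, 0.85, 0.9, 0.6, 0.75, 0.8, 0.95]
lo, hi = bootstrap_ci(scores, lambda xs: sum(xs) / len(xs))
print(lo < hi)
```

With a metric like AUC, `stat` would recompute the AUC on each resampled set of (label, score) pairs rather than a mean.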

Toward ICE-XRF fusion: real-time pose estimation of the intracardiac echo probe in 2D X-ray using deep learning.

Severens A, Meijs M, Pai Raikar V, Lopata R

PubMed · Aug 18, 2025
Valvular heart disease affects 2.5% of the general population and 10% of people aged over 75, with many patients untreated due to high surgical risks. Transcatheter valve therapies offer a safer, less invasive alternative but rely on ultrasound and X-ray image guidance. The current ultrasound technique for valve interventions, transesophageal echocardiography (TEE), requires general anesthesia and has poor visibility of the right side of the heart. Intracardiac echocardiography (ICE) provides improved 3D imaging without the need for general anesthesia but faces challenges in adoption due to device handling and operator training. To facilitate the use of ICE in the clinic, the fusion of ultrasound and X-ray is proposed. This study introduces a two-stage detection algorithm using deep learning to support ICE-XRF fusion. Initially, the ICE probe is coarsely detected using an object detection network. This is followed by 5-degree-of-freedom (DoF) pose estimation of the ICE probe using a regression network. Model validation using synthetic data and seven clinical cases showed that the framework provides accurate probe detection and 5-DoF pose estimation. For the object detection, an F1 score of 1.00 was achieved on synthetic data and high precision (0.97) and recall (0.83) for clinical cases. For the 5-DoF pose estimation, median position errors were under 0.5 mm and median rotation errors below 7.2°. This real-time detection method supports image fusion of ICE and XRF during clinical procedures and facilitates the use of ICE in valve therapy.

Balancing Speed and Sensitivity: Echo-Planar Accelerated MRI for ARIA-H Screening in Anti-Aβ Therapeutics.

Hagiwara A

PubMed · Aug 18, 2025
The recent advent of anti-amyloid-β monoclonal antibodies has introduced new demands for MRI-based screening of amyloid-related imaging abnormalities, particularly the hemorrhage subtype (ARIA-H). In this editorial, we discuss the study by Loftus and colleagues, which evaluates the diagnostic performance of echo-planar accelerated gradient-recalled echo (GRE) and susceptibility-weighted imaging (SWI) sequences for ARIA-H screening. Their results demonstrate that significant scan time reductions of up to 86% can be achieved without substantial loss in diagnostic accuracy, particularly for accelerated GRE. These findings align with recently issued MRI guidelines and offer practical solutions for improving workflow efficiency in Alzheimer's care. However, challenges remain in terms of inter-rater variability and image quality, especially with accelerated SWI. We also highlight the emerging role of artificial intelligence-assisted analysis and the importance of reproducibility and data sharing in advancing clinical implementation. Balancing speed and sensitivity remains a central theme in optimizing imaging strategies for anti-amyloid therapeutic protocols.
