
Artificial intelligence in coronary CT angiography: transforming the diagnosis and risk stratification of atherosclerosis.

Irannejad K, Mafi M, Krishnan S, Budoff MJ

PubMed · Jun 27, 2025
Coronary CT Angiography (CCTA) is essential for assessing atherosclerosis and coronary artery disease, aiding early detection, risk prediction, and clinical assessment. However, traditional CCTA interpretation is limited by observer variability, time inefficiency, and inconsistent plaque characterization. AI has emerged as a transformative tool, enhancing diagnostic accuracy, workflow efficiency, and risk prediction for major adverse cardiovascular events (MACE). Studies show that AI improves stenosis detection by 27% and inter-reader agreement by 30%, and reduces reporting times by 40%, thereby addressing key limitations of manual interpretation. Integrating AI with multimodal imaging (e.g., FFR-CT, PET-CT) further enhances ischemia detection by 28% and lesion classification by 35%, providing a more comprehensive cardiovascular evaluation. This review synthesizes recent advances in CCTA-AI automation, risk stratification, and precision diagnostics while critically analyzing challenges in data quality, generalizability, ethics, and regulation. Future directions, including real-time AI-assisted triage, cloud-based diagnostics, and AI-driven personalized medicine, are explored for their potential to revolutionize clinical workflows and optimize patient outcomes.

3D Auto-segmentation of pancreas cancer and surrounding anatomical structures for surgical planning.

Rhu J, Oh N, Choi GS, Kim JM, Choi SY, Lee JE, Lee J, Jeong WK, Min JH

PubMed · Jun 27, 2025
This multicenter study aimed to develop a deep learning-based autosegmentation model for pancreatic cancer and surrounding anatomical structures on computed tomography (CT) to enhance surgical planning. We included patients with pancreatic cancer who underwent pancreatic surgery at three tertiary referral hospitals. A hierarchical Swin Transformer V2 model was implemented to segment the pancreas, pancreatic cancers, and peripancreatic structures from preoperative contrast-enhanced CT scans. Data from one tertiary institution were divided into training and internal validation sets at a 3:1 ratio, with an external validation set prepared separately from the two other institutions. Segmentation performance was quantitatively assessed using the Dice similarity coefficient (DSC) and qualitatively evaluated (complete vs partial vs absent). A total of 275 patients (51.6% male; mean age, 65.8 ± 9.5 years) were included (176 in the training group, 59 in the internal validation group, and 40 in the external validation group). No significant differences in baseline characteristics were observed between the groups. The model achieved an overall mean DSC of 75.4 ± 6.0 and 75.6 ± 4.8 in the internal and external validation groups, respectively. Accuracy was particularly high for the pancreas parenchyma (84.8 ± 5.3 and 86.1 ± 4.1) and lower for pancreatic cancer (57.0 ± 28.7 and 54.5 ± 23.5); the DSC scores for pancreatic cancer tended to increase with larger tumor sizes. Moreover, the qualitative assessments revealed high accuracy for the superior mesenteric artery (complete segmentation, 87.5%-100%), portal and superior mesenteric vein (97.5%-100%), and pancreas parenchyma (83.1%-87.5%), but lower accuracy for cancers (62.7%-65.0%). The deep learning-based autosegmentation model for 3D visualization of pancreatic cancer and peripancreatic structures showed robust performance. Further refinement will enable many promising applications in clinical research.
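For reference, the Dice similarity coefficient used to score these segmentations reduces to a few lines of array arithmetic. The sketch below, written against NumPy binary masks, mirrors the conventional definition of the DSC (reported here in percent); it illustrates the metric, not the study's evaluation code.

```python
import numpy as np

def dice_similarity_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks, in percent."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 100.0  # both masks empty: treat as perfect agreement
    return 100.0 * 2.0 * intersection / denom
```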

Improving radiology reporting accuracy: use of GPT-4 to reduce errors in reports.

Mayes CJ, Reyes C, Truman ME, Dodoo CA, Adler CR, Banerjee I, Khandelwal A, Alexander LF, Sheedy SP, Thompson CP, Varner JA, Zulfiqar M, Tan N

PubMed · Jun 27, 2025
Radiology reports are essential for communicating imaging findings to guide diagnosis and treatment. Although most radiology reports are accurate, errors can occur in finalized reports because of high workloads, dictation software, and human error. Advanced artificial intelligence models, such as GPT-4, show potential as tools to improve report accuracy. This retrospective study evaluated how GPT-4 performed at detecting and correcting errors in finalized abdominopelvic computed tomography (CT) reports in a real-world setting. We evaluated finalized abdominopelvic CT reports from a tertiary health system using GPT-4 with a zero-shot prompting technique. Six radiologists each reviewed 100 of their own finalized reports (randomly selected), evaluating GPT-4's suggested revisions for agreement, acceptance, and clinical impact. The radiologists' responses were compared by years in practice and sex. GPT-4 identified issues and suggested revisions for 91% of the 600 reports; most revisions addressed grammar (74%). The radiologists agreed with 27% of the revisions and accepted 23%. Most revisions were rated as having no (44%) or low (46%) clinical impact. Potential harm was rare (8%), with only 2 cases of potentially severe harm. Radiologists with less experience (≤ 7 years of practice) were more likely than those with more experience to agree with the revisions suggested by GPT-4 (34% vs. 20%, P = .003) and accepted a greater percentage of the revisions (32% vs. 15%, P = .003). Although GPT-4 showed promise in identifying errors and improving the clarity of finalized radiology reports, most errors it flagged were minor, with no or low clinical impact. Collectively, the radiologists accepted 23% of the suggested revisions in their finalized reports. This study highlights the potential of GPT-4 as a prospective tool for radiology reporting, with further refinement needed for consistent use in clinical practice.
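The study's actual prompt and pipeline are not reproduced in the abstract, but a zero-shot review call of this kind can be sketched with the OpenAI Python client. Everything below, including the SYSTEM_PROMPT text and the suggest_revisions helper, is a hypothetical illustration, not the authors' implementation.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Hypothetical instruction text; the study's actual prompt is not published here.
SYSTEM_PROMPT = (
    "You are a radiology report proofreader. Review the following finalized "
    "abdominopelvic CT report for grammatical, transcription, and laterality "
    "errors. Suggest revisions only; do not add or remove findings."
)

def suggest_revisions(report_text: str) -> str:
    """Return GPT-4's suggested revisions for one finalized report."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic output for a reproducible review
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content
```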

Hybrid segmentation model and CAViaR -based Xception Maxout network for brain tumor detection using MRI images.

Swapna S, Garapati Y

PubMed · Jun 27, 2025
A brain tumor (BT) is a rapid, abnormal growth of brain cells; if it is not identified and treated at an early stage, it can be fatal. Despite the many methods developed for segmenting and identifying BT, detection remains complicated by variation in tumor position and size. To address these issues, this paper proposes the Conditional Autoregressive Value-at-Risk_Xception Maxout-Network (Caviar_XM-Net) for BT detection using magnetic resonance imaging (MRI) images. The input MRI image gathered from the dataset is denoised using an adaptive bilateral filter (ABF), and tumor region segmentation is performed with BFC-MRFNet-RVSeg: the image is segmented separately by Bayesian fuzzy clustering (BFC) and a multi-branch residual fusion network (MRF-Net), and the outputs of the two techniques are combined using the RV coefficient. Image augmentation is performed to increase the number of training images. Afterwards, features are extracted, including the local optimal oriented pattern (LOOP), convolutional neural network (CNN) features, the median binary pattern (MBP) with statistical features, and the local Gabor XOR pattern (LGXP). Lastly, BT detection is carried out by Caviar_XM-Net, which combines the Xception model and a deep Maxout network (DMN) with the CAViaR approach. The effectiveness of Caviar_XM-Net is evaluated in terms of sensitivity, accuracy, specificity, precision, and F1-score, attaining 91.59%, 91.36%, 90.83%, 90.99%, and 91.29%, respectively. Hence, Caviar_XM-Net outperforms traditional methods with high efficiency.
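The RV coefficient used to fuse the two segmentation outputs is a standard matrix-correlation measure. As a rough illustration (not the paper's code), it can be computed for two column-centered data matrices X and Y with matching rows as tr(SxSy) / sqrt(tr(Sx^2) tr(Sy^2)), where Sx = XX' and Sy = YY' are the configuration matrices:

```python
import numpy as np

def rv_coefficient(X: np.ndarray, Y: np.ndarray) -> float:
    """RV coefficient between two data matrices with matching rows (n x p, n x q)."""
    X = X - X.mean(axis=0)              # column-center both matrices
    Y = Y - Y.mean(axis=0)
    Sx, Sy = X @ X.T, Y @ Y.T           # n x n configuration matrices
    num = np.trace(Sx @ Sy)
    den = np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))
    return float(num / den)
```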

Causality-Adjusted Data Augmentation for Domain Continual Medical Image Segmentation.

Zhu Z, Dong Q, Luo G, Wang W, Dong S, Wang K, Tian Y, Wang G, Li S

PubMed · Jun 27, 2025
In domain continual medical image segmentation, distillation-based methods mitigate catastrophic forgetting by continuously reviewing old knowledge. However, these approaches often exhibit biases towards both new and old knowledge simultaneously due to confounding factors, which can undermine segmentation performance. To address these biases, we propose the Causality-Adjusted Data Augmentation (CauAug) framework, introducing a novel causal intervention strategy called the Texture-Domain Adjustment Hybrid-Scheme (TDAHS) alongside two causality-targeted data augmentation approaches: the Cross Kernel Network (CKNet) and the Fourier Transformer Generator (FTGen). (1) TDAHS establishes a domain-continual causal model that accounts for two types of knowledge biases by identifying irrelevant local textures (L) and domain-specific features (D) as confounders. It introduces a hybrid causal intervention that combines traditional confounder elimination with a proposed replacement approach to better adapt to domain shifts, thereby promoting causal segmentation. (2) CKNet eliminates confounder L to reduce biases in new knowledge absorption. It decreases reliance on local textures in input images, forcing the model to focus on relevant anatomical structures and thus improving generalization. (3) FTGen causally intervenes on confounder D by selectively replacing it to alleviate biases that impact old knowledge retention. It restores domain-specific features in images, aiding in the comprehensive distillation of old knowledge. Our experiments show that CauAug significantly mitigates catastrophic forgetting and surpasses existing methods in various medical image segmentation tasks. The implementation code is publicly available at: https://github.com/PerceptionComputingLab/CauAug_DCMIS.
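The abstract does not spell out FTGen's internals, but Fourier-domain replacement of domain-specific appearance is often realized by swapping low-frequency amplitude spectra while keeping phase, as in Fourier domain adaptation. The sketch below shows that generic technique under those assumptions; the function name and the beta band-size parameter are illustrative, not taken from CauAug:

```python
import numpy as np

def swap_low_freq_amplitude(src: np.ndarray, ref: np.ndarray, beta: float = 0.1) -> np.ndarray:
    """Replace the low-frequency amplitude spectrum of `src` with that of `ref`.

    Low-frequency amplitude is a common proxy for domain-specific appearance;
    the phase of `src`, which carries anatomical structure, is preserved.
    `beta` sets the half-size of the swapped central band as a fraction of
    the image dimensions.
    """
    fft_src = np.fft.fftshift(np.fft.fft2(src))
    fft_ref = np.fft.fftshift(np.fft.fft2(ref))
    amp_src, phase_src = np.abs(fft_src), np.angle(fft_src)
    amp_ref = np.abs(fft_ref)

    h, w = src.shape
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    amp_src[ch - bh:ch + bh, cw - bw:cw + bw] = amp_ref[ch - bh:ch + bh, cw - bw:cw + bw]

    mixed = amp_src * np.exp(1j * phase_src)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))
```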

Quantifying Sagittal Craniosynostosis Severity: A Machine Learning Approach With CranioRate.

Tao W, Somorin TJ, Kueper J, Dixon A, Kass N, Khan N, Iyer K, Wagoner J, Rogers A, Whitaker R, Elhabian S, Goldstein JA

PubMed · Jun 27, 2025
Objective: To develop and validate machine learning (ML) models for objective and comprehensive quantification of sagittal craniosynostosis (SCS) severity, enhancing clinical assessment, management, and research.
Design: A cross-sectional study combining the analysis of computed tomography (CT) scans and expert ratings.
Setting: The study was conducted at a children's hospital and a major computer imaging institution. A survey collected expert ratings from participating surgeons.
Participants: The study included 195 patients with nonsyndromic SCS, 221 patients with nonsyndromic metopic craniosynostosis (CS), and 178 age-matched controls. Fifty-four craniofacial surgeons participated in rating 20 patients' head CT scans.
Interventions: Computed tomography scans for cranial morphology assessment and a radiographic diagnosis of nonsyndromic SCS.
Main Outcomes: Accuracy of the proposed Sagittal Severity Score (SSS) in predicting expert ratings, compared with the cephalic index (CI). Secondary outcomes compared Likert ratings with SCS status, the predictive power of skull-based versus skin-based landmarks, and an unsupervised ML model, the Cranial Morphology Deviation (CMD), as an alternative that requires no ratings.
Results: The SSS achieved significantly higher accuracy in predicting expert responses than the CI (P < .05). Likert ratings outperformed SCS status in supervising ML models to quantify within-group variations. Skin-based landmarks demonstrated predictive power equivalent to skull-based landmarks (P < .05, threshold 0.02). The CMD demonstrated a strong correlation with the SSS (Pearson coefficient: 0.92; Spearman coefficient: 0.90; P < .01).
Conclusions: The SSS and CMD can provide accurate, consistent, and comprehensive quantification of SCS severity. Implementing these data-driven ML models can significantly advance CS care through standardized assessments, enhanced precision, and informed surgical planning.
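The cephalic index baseline that the SSS is compared against is a simple ratio of cranial dimensions. A minimal sketch of the conventional definition (not the study's measurement pipeline):

```python
def cephalic_index(max_width_mm: float, max_length_mm: float) -> float:
    """Cephalic index: maximum cranial width as a percentage of maximum length.

    Values below roughly 75 are conventionally associated with the elongated
    (scaphocephalic) head shape seen in sagittal craniosynostosis.
    """
    return 100.0 * max_width_mm / max_length_mm
```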

Automation in tibial implant loosening detection using deep-learning segmentation.

Magg C, Ter Wee MA, Buijs GS, Kievit AJ, Schafroth MU, Dobbe JGG, Streekstra GJ, Sánchez CI, Blankevoort L

PubMed · Jun 27, 2025
Patients with recurrent complaints after total knee arthroplasty may suffer from aseptic implant loosening. Current imaging modalities do not quantify the looseness of knee arthroplasty components. A recently developed and validated workflow quantifies tibial component displacement relative to the bone from CT scans acquired under valgus and varus load. The 3D analysis approach includes segmentation and registration of the tibial component and bone. In the current approach, the semi-automatic segmentation requires user interaction, adding complexity to the analysis. The research question is whether the segmentation step can be fully automated while leaving the outcomes unchanged. In this study, different deep-learning (DL) models for fully automatic segmentation are proposed and evaluated. For this, we employ three datasets: two for model development (20 cadaveric CT pairs and 10 cadaveric CT scans) and one for evaluation (72 patient CT pairs). Based on performance on the development datasets, the final model was selected, and its predictions replaced the semi-automatic segmentation in the current approach. Implant displacement was quantified by the rotation about the screw axis, the maximum total point motion, and the mean target registration error. The displacement parameters of the proposed approach showed a statistically significant difference between fixed and loose samples in a cadaver dataset, as well as between asymptomatic and loose samples in a patient dataset, similar to the outcomes of the current approach. The methodological error calculated on a reproducibility dataset did not differ significantly between the two approaches, and the results of both approaches showed excellent reliability for one and for three operators on two datasets. The conclusion is that fully automated knee implant displacement assessment is feasible using a DL-based segmentation model while maintaining the ability to distinguish between fixed and loose implants.
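Given corresponding implant surface points expressed in the bone frame of the valgus and varus scans, the point-motion metrics reduce to per-point Euclidean distances. A minimal sketch under that assumption (illustrative only; the validated workflow's exact computation may differ):

```python
import numpy as np

def displacement_metrics(pts_valgus: np.ndarray, pts_varus: np.ndarray) -> tuple[float, float]:
    """Point-motion metrics between two registered implant poses.

    `pts_valgus` and `pts_varus` are (N, 3) arrays of corresponding tibial
    component surface points, each expressed in its scan's bone frame.
    """
    motion = np.linalg.norm(pts_valgus - pts_varus, axis=1)  # per-point motion (mm)
    mtpm = float(motion.max())       # maximum total point motion
    mean_tre = float(motion.mean())  # mean target registration error
    return mtpm, mean_tre
```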

Catheter detection and segmentation in X-ray images via multi-task learning.

Xi L, Ma Y, Koland E, Howell S, Rinaldi A, Rhode KS

PubMed · Jun 27, 2025
Automated detection and segmentation of surgical devices, such as catheters or wires, in X-ray fluoroscopic images have the potential to enhance image guidance in minimally invasive heart surgeries. In this paper, we present a convolutional neural network model that integrates a ResNet architecture with multiple prediction heads to achieve real-time, accurate localization of electrodes on catheters and catheter segmentation in an end-to-end deep learning framework. We also propose a multi-task learning strategy in which our model is trained to perform accurate electrode detection and catheter segmentation simultaneously. A key challenge with this approach is achieving optimal performance for both tasks. To address this, we introduce a novel multi-level dynamic resource prioritization method, which dynamically adjusts sample and task weights during training to prioritize the more challenging tasks; task difficulty is taken to be inversely proportional to performance and evolves throughout the training process. The proposed method has been validated on both public and private datasets for single-task catheter segmentation and for multi-task catheter segmentation and detection, and its performance is compared with existing state-of-the-art methods. It demonstrates significant improvements: on the validation and test sets of the catheter detection and segmentation dataset, the multi-task model achieves a mean J of 64.37 and 63.97 and an average precision over all IoU thresholds of 84.15 and 83.13, respectively. Our approach achieves a good balance between accuracy and efficiency, making it well suited for real-time surgical guidance applications.
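A weighting rule in the spirit of the described prioritization, where a task's weight grows as its current performance drops, can be sketched as follows. This is a generic inverse-performance scheme for illustration, not the paper's multi-level method:

```python
import numpy as np

def dynamic_task_weights(task_scores: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Weight tasks inversely to their current performance scores in [0, 1].

    Harder (lower-scoring) tasks receive larger weights; weights are rescaled
    to sum to the number of tasks so the overall loss magnitude stays stable.
    """
    raw = 1.0 / (task_scores + eps)
    return raw * len(raw) / raw.sum()

# Detection currently at 0.90, segmentation at 0.60: segmentation is up-weighted.
print(dynamic_task_weights(np.array([0.90, 0.60])))  # -> [0.8, 1.2]
```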

Noise-Inspired Diffusion Model for Generalizable Low-Dose CT Reconstruction

Qi Gao, Zhihao Chen, Dong Zeng, Junping Zhang, Jianhua Ma, Hongming Shan

arXiv preprint · Jun 27, 2025
The generalization of deep learning-based low-dose computed tomography (CT) reconstruction models to doses unseen in the training data is important and remains challenging. Previous efforts rely heavily on paired data to improve generalization performance and robustness, collecting either diverse CT data for re-training or a few test data for fine-tuning. Recently, diffusion models have shown promising and generalizable performance in low-dose CT (LDCT) reconstruction; however, they may produce unrealistic structures because the CT image noise deviates from a Gaussian distribution and because the guidance from noisy LDCT images supplies imprecise prior information. In this paper, we propose a noise-inspired diffusion model for generalizable LDCT reconstruction, termed NEED, which tailors diffusion models to the noise characteristics of each domain. First, we propose a novel shifted Poisson diffusion model to denoise projection data, which aligns the diffusion process with the noise model in pre-log LDCT projections. Second, we devise a doubly guided diffusion model to refine reconstructed images, which leverages LDCT images and initial reconstructions to more accurately locate prior information and enhance reconstruction fidelity. By cascading these two diffusion models for dual-domain reconstruction, our NEED requires only normal-dose data for training and can be effectively extended to various unseen dose levels during testing via a time step matching strategy. Extensive qualitative, quantitative, and segmentation-based evaluations on two datasets demonstrate that NEED consistently outperforms state-of-the-art methods in reconstruction and generalization performance. Source code is made available at https://github.com/qgao21/NEED.
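The shifted Poisson model referenced here is a standard approximation for pre-log CT measurements: quantum noise is Poisson, electronic noise is Gaussian, and adding the electronic variance to the counts matches the first two moments of a single Poisson variable. A simulation sketch of that textbook model (not NEED's training code):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def simulate_prelog_counts(proj: np.ndarray, I0: float = 1e5, sigma_e: float = 10.0) -> np.ndarray:
    """Simulate noisy pre-log CT measurements under a shifted Poisson model.

    `proj` holds line integrals (the attenuation sinogram), `I0` the incident
    photon count, and `sigma_e` the electronic noise standard deviation.
    """
    mean_counts = I0 * np.exp(-proj)                 # Beer-Lambert attenuation
    counts = rng.poisson(mean_counts).astype(float)  # quantum (Poisson) noise
    counts += rng.normal(0.0, sigma_e, proj.shape)   # additive electronic noise
    # Shifted Poisson approximation: adding the electronic variance sigma_e**2
    # makes the first two moments match a single Poisson random variable.
    return np.maximum(counts + sigma_e**2, 0.0)
```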

Towards Scalable and Robust White Matter Lesion Localization via Multimodal Deep Learning

Julia Machnio, Sebastian Nørgaard Llambias, Mads Nielsen, Mostafa Mehdipour Ghazi

arXiv preprint · Jun 27, 2025
White matter hyperintensities (WMH) are radiological markers of small vessel disease and neurodegeneration, whose accurate segmentation and spatial localization are crucial for diagnosis and monitoring. While multimodal MRI offers complementary contrasts for detecting and contextualizing WM lesions, existing approaches often lack flexibility in handling missing modalities and fail to integrate anatomical localization efficiently. We propose a deep learning framework for WM lesion segmentation and localization that operates directly in native space using single- and multi-modal MRI inputs. Our study evaluates four input configurations: FLAIR-only, T1-only, concatenated FLAIR and T1, and a modality-interchangeable setup. It further introduces a multi-task model for jointly predicting lesion and anatomical region masks to estimate region-wise lesion burden. Experiments conducted on the MICCAI WMH Segmentation Challenge dataset demonstrate that multimodal input significantly improves the segmentation performance, outperforming unimodal models. While the modality-interchangeable setting trades accuracy for robustness, it enables inference in cases with missing modalities. Joint lesion-region segmentation using multi-task learning was less effective than separate models, suggesting representational conflict between tasks. Our findings highlight the utility of multimodal fusion for accurate and robust WMH analysis, and the potential of joint modeling for integrated predictions.
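One common way to realize a modality-interchangeable setup is modality dropout: randomly blanking an input channel during training so the network learns to cope with a missing FLAIR or T1 scan. The sketch below illustrates that generic idea; it is an assumption about the setup, not the authors' implementation:

```python
import numpy as np

def modality_dropout(flair: np.ndarray, t1: np.ndarray, p_drop: float = 0.3,
                     rng: np.random.Generator = np.random.default_rng()) -> np.ndarray:
    """Randomly blank one input modality during training.

    With probability `p_drop`, either the FLAIR or the T1 channel is zeroed,
    so a two-channel network learns to produce sensible output when one
    modality is missing at inference time.
    """
    if rng.random() < p_drop:
        if rng.random() < 0.5:
            flair = np.zeros_like(flair)
        else:
            t1 = np.zeros_like(t1)
    return np.stack([flair, t1], axis=0)  # channel-first network input
```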