
Imaging-Based Mortality Prediction in Patients with Systemic Sclerosis

Alec K. Peltekian, Karolina Senkow, Gorkem Durak, Kevin M. Grudzinski, Bradford C. Bemiss, Jane E. Dematte, Carrie Richardson, Nikolay S. Markov, Mary Carns, Kathleen Aren, Alexandra Soriano, Matthew Dapas, Harris Perlman, Aaron Gundersheimer, Kavitha C. Selvan, John Varga, Monique Hinchcliff, Krishnan Warrior, Catherine A. Gao, Richard G. Wunderink, GR Scott Budinger, Alok N. Choudhary, Anthony J. Esposito, Alexander V. Misharin, Ankit Agrawal, Ulas Bagci

arXiv preprint · Sep 27 2025
Interstitial lung disease (ILD) is a leading cause of morbidity and mortality in systemic sclerosis (SSc). Chest computed tomography (CT) is the primary imaging modality for diagnosing and monitoring lung complications in SSc patients. However, its role in disease progression and mortality prediction has not yet been fully clarified. This study introduces a novel, large-scale longitudinal chest CT analysis framework that utilizes radiomics and deep learning to predict mortality associated with lung complications of SSc. We collected and analyzed 2,125 CT scans from SSc patients enrolled in the Northwestern Scleroderma Registry, conducting mortality analyses at one, three, and five years using advanced imaging analysis techniques. Death labels were assigned based on recorded deaths over the one-, three-, and five-year intervals, confirmed by expert physicians. In our dataset, 181, 326, and 428 of the 2,125 CT scans were from patients who died within one, three, and five years, respectively. We used pre-trained ResNet-18, DenseNet-121, and Swin Transformer models, fine-tuned on the 2,125 CT scans of SSc patients. The models achieved AUCs of 0.769, 0.801, and 0.709 for predicting mortality within one, three, and five years, respectively. Our findings highlight the potential of both radiomics and deep learning methods to improve early detection and risk assessment of SSc-related interstitial lung disease, marking a significant advancement in the literature.
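To make the transfer-learning recipe concrete, here is a minimal sketch of fine-tuning one of the named backbones (ResNet-18) for a binary died-within-interval label; the single-logit head, the three-channel slice input, and all hyperparameters are illustrative assumptions rather than details from the paper.

import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained ResNet-18 with its classification head replaced
# by a single logit for the died-within-interval label (assumption).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images, labels):
    # images: (B, 3, H, W) CT slices replicated to 3 channels (assumption);
    # labels: (B,) float tensor of 0/1 mortality flags.
    optimizer.zero_grad()
    loss = criterion(model(images).squeeze(1), labels)
    loss.backward()
    optimizer.step()
    return loss.item()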

Deep Learning Approaches with Explainable AI for Differentiating Alzheimer Disease and Mild Cognitive Impairment

Fahad Mostafa, Kannon Hossain, Hafiz Khan

arXiv preprint · Sep 27 2025
Early and accurate diagnosis of Alzheimer Disease is critical for effective clinical intervention, particularly in distinguishing it from Mild Cognitive Impairment, a prodromal stage marked by subtle structural changes. In this study, we propose a hybrid deep learning ensemble framework for Alzheimer Disease classification using structural magnetic resonance imaging. Gray and white matter slices are used as inputs to three pretrained convolutional neural networks (ResNet50, NASNet, and MobileNet), each fine-tuned through an end-to-end process. To further enhance performance, we incorporate a stacked ensemble learning strategy with a meta-learner and weighted averaging to optimally combine the base models. Evaluated on the Alzheimer Disease Neuroimaging Initiative dataset, the proposed method achieves state-of-the-art accuracy of 99.21% for Alzheimer Disease vs. Mild Cognitive Impairment and 91.0% for Mild Cognitive Impairment vs. Normal Controls, outperforming conventional transfer learning and baseline ensemble methods. To improve interpretability in image-based diagnostics, we integrate Explainable AI techniques via Gradient-weighted Class Activation Mapping (Grad-CAM), which generates heatmaps and attribution maps that highlight critical regions in gray and white matter slices, revealing structural biomarkers that influence model decisions. These results highlight the framework's potential for robust and scalable clinical decision support in neurodegenerative disease diagnostics.
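The stacking strategy can be sketched in a few lines: held-out probabilities from the three base CNNs become features for a meta-learner, with weighted averaging as the complementary combination rule. The logistic-regression meta-learner and accuracy-derived weights are assumptions for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_meta_learner(p_resnet, p_nasnet, p_mobilenet, y):
    # Each p_*: (n_samples, n_classes) held-out probabilities from one base CNN.
    X_meta = np.hstack([p_resnet, p_nasnet, p_mobilenet])
    return LogisticRegression(max_iter=1000).fit(X_meta, y)

def weighted_average(probs, weights):
    # probs: list of (n_samples, n_classes) arrays; weights sum to 1,
    # e.g. derived from each base model's validation accuracy (assumption).
    w = np.asarray(weights)[:, None, None]
    return (w * np.stack(probs)).sum(axis=0)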

Enhanced Fracture Diagnosis Based on Critical Regional and Scale Aware in YOLO

Yuyang Sun, Junchuan Yu, Cuiming Zou

arXiv preprint · Sep 27 2025
Fracture detection plays a critical role in medical imaging analysis. Traditional fracture diagnosis relies on visual assessment by experienced physicians, but the speed and accuracy of this approach are constrained by the availability of such expertise. With the rapid advancements in artificial intelligence, deep learning models based on the YOLO framework have been widely employed for fracture detection, demonstrating significant potential in improving diagnostic efficiency and accuracy. This study proposes an improved YOLO-based model, termed Fracture-YOLO, which integrates novel Critical-Region-Selector Attention (CRSelector) and Scale-Aware (ScA) heads to further enhance detection performance. Specifically, the CRSelector module utilizes global texture information to focus on critical features of fracture regions. Meanwhile, the ScA module dynamically adjusts the weights of features at different scales, enhancing the model's capacity to identify fracture targets at multiple scales. Experimental results demonstrate that, compared to the baseline model, Fracture-YOLO achieves a significant improvement in detection precision, with mAP50 and mAP50-95 increasing by 4 and 3 points, respectively, achieving state-of-the-art (SOTA) performance.
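The abstract does not spell out the CRSelector internals; a squeeze-and-excitation-style channel gate, as sketched below, is one common way an attention module built on global texture information is realized, and is offered purely as an illustration.

import torch.nn as nn

class ChannelAttention(nn.Module):
    # Illustrative gate: pool global context, then reweight channels
    # so features from critical (fracture-like) regions dominate.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                # x: (B, C, H, W)
        ctx = x.mean(dim=(2, 3))         # global average pooling -> (B, C)
        w = self.fc(ctx)[:, :, None, None]
        return x * w                     # channel-wise gating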

Hemorica: A Comprehensive CT Scan Dataset for Automated Brain Hemorrhage Classification, Segmentation, and Detection

Kasra Davoodi, Mohammad Hoseyni, Javad Khoramdel, Reza Barati, Reihaneh Mortazavi, Amirhossein Nikoofard, Mahdi Aliyari-Shoorehdeli, Jaber Hatam Parikhan

arXiv preprint · Sep 26 2025
Timely diagnosis of intracranial hemorrhage (ICH) on Computed Tomography (CT) scans remains a clinical priority, yet the development of robust Artificial Intelligence (AI) solutions is still hindered by fragmented public data. To close this gap, we introduce Hemorica, a publicly available collection of 372 head CT examinations acquired between 2012 and 2024. Each scan has been exhaustively annotated for five ICH subtypes, epidural (EPH), subdural (SDH), subarachnoid (SAH), intraparenchymal (IPH), and intraventricular (IVH), yielding patient-wise and slice-wise classification labels, subtype-specific bounding boxes, two-dimensional pixel masks, and three-dimensional voxel masks. A double-reading workflow, preceded by a pilot consensus phase and supported by neurosurgeon adjudication, maintained low inter-rater variability. Comprehensive statistical analysis confirms the clinical realism of the dataset. To establish reference baselines, standard convolutional and transformer architectures were fine-tuned for binary slice classification and hemorrhage segmentation. With only minimal fine-tuning, lightweight models such as MobileViT-XS achieved an F1 score of 87.8% in binary classification, whereas a U-Net with a DenseNet161 encoder reached a Dice score of 85.5% for binary lesion segmentation, results that validate both the quality of the annotations and the sufficiency of the sample size. Hemorica therefore offers a unified, fine-grained benchmark that supports multi-task and curriculum learning, facilitates transfer to larger but weakly labelled cohorts, and streamlines the design of AI-based assistants for ICH detection and quantification.
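As a hint of how the segmentation baseline might be reproduced, here is a sketch using the segmentation_models_pytorch library; input shapes, the Dice loss choice, and all training details are illustrative, not the authors' exact configuration.

import torch
import segmentation_models_pytorch as smp

# U-Net with a DenseNet161 encoder, mirroring the reported baseline.
model = smp.Unet(
    encoder_name="densenet161",
    encoder_weights="imagenet",
    in_channels=1,                       # single-channel CT slices
    classes=1,                           # binary hemorrhage mask
)
loss_fn = smp.losses.DiceLoss(mode="binary")

x = torch.randn(2, 1, 512, 512)          # dummy batch of CT slices
y = (torch.rand(2, 1, 512, 512) > 0.9).float()
loss = loss_fn(model(x), y)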

Bézier Meets Diffusion: Robust Generation Across Domains for Medical Image Segmentation

Chen Li, Meilong Xu, Xiaoling Hu, Weimin Lyu, Chao Chen

arXiv preprint · Sep 26 2025
Training robust learning algorithms across different medical imaging modalities is challenging due to the large domain gap. Unsupervised domain adaptation (UDA) mitigates this problem by using annotated images from the source domain and unlabeled images from the target domain to train the deep models. Existing approaches often rely on GAN-based style transfer, but these methods struggle to capture cross-domain mappings in regions with high variability. In this paper, we propose a unified framework, Bézier Meets Diffusion, for cross-domain image generation. First, we introduce a Bézier-curve-based style transfer strategy that effectively reduces the domain gap between source and target domains. The transferred source images enable the training of a more robust segmentation model across domains. Thereafter, using pseudo-labels generated by this segmentation model on the target domain, we train a conditional diffusion model (CDM) to synthesize high-quality, labeled target-domain images. To mitigate the impact of noisy pseudo-labels, we further develop an uncertainty-guided score matching method that improves the robustness of CDM training. Extensive experiments on public datasets demonstrate that our approach generates realistic labeled images, significantly augmenting the target domain and improving segmentation performance.
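One plausible realization of the Bézier-curve style transfer is to map normalized voxel intensities through a random monotonic cubic Bézier curve, as sketched below; the fixed (0,0) and (1,1) endpoints and the sampling of control points are assumptions, since the exact parameterization is not given in the abstract.

import numpy as np

def bezier_intensity_transform(img, rng=None):
    # img: array with intensities normalized to [0, 1].
    rng = rng or np.random.default_rng()
    xs = np.sort(rng.uniform(0, 1, 2))    # sorted so x(t) stays monotone
    ys = rng.uniform(0, 1, 2)             # interior control heights
    t = np.linspace(0, 1, 1024)
    b1, b2 = 3 * (1 - t) ** 2 * t, 3 * (1 - t) * t ** 2
    x = b1 * xs[0] + b2 * xs[1] + t ** 3  # endpoints fixed at 0 and 1
    y = b1 * ys[0] + b2 * ys[1] + t ** 3
    return np.interp(img, x, y)           # apply the curve as a lookup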

A Framework for Guiding DDPM-Based Reconstruction of Damaged CT Projections Using Traditional Methods.

Zhang Z, Yang Y, Yang M, Guo H, Yang J, Shen X, Wang J

PubMed · Sep 26 2025
Denoising Diffusion Probabilistic Models (DDPM) have emerged as a promising generative framework for sample synthesis, yet their limitations in detail preservation hinder practical applications in computed tomography (CT) image reconstruction. To address these technical constraints and enhance reconstruction quality from compromised CT projection data, this study proposes the Projection Hybrid Inverse Reconstruction Framework (PHIRF), a novel paradigm integrating conventional reconstruction methodologies with DDPM architecture. The framework implements a dual-phase approach: Initially, conventional CT reconstruction algorithms (e.g., Filtered Back Projection (FBP), Algebraic Reconstruction Technique (ART), Maximum-Likelihood Expectation Maximization (ML-EM)) are employed to generate preliminary reconstructions from incomplete projections, establishing low-dimensional feature representations. These features are subsequently parameterized and embedded as conditional constraints in the reverse diffusion process of DDPM, thereby guiding the generative model to synthesize enhanced tomographic images with improved structural fidelity. Comprehensive evaluations were conducted on three representative ill-posed projection scenarios: limited-angle projections, sparse-view acquisitions, and low-dose measurements. Experimental results demonstrate that PHIRF achieves state-of-the-art performance across all compromised data conditions, particularly in preserving fine anatomical details and suppressing reconstruction artifacts. Quantitative metrics and visual assessments confirm the framework's consistent superiority over existing deep learning-based reconstruction approaches, substantiating its adaptability to diverse projection degradation patterns. This hybrid architecture establishes a new paradigm for combining physical prior knowledge with data-driven generative models in medical image reconstruction tasks.
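The guidance mechanism can be pictured as a standard DDPM reverse step whose noise predictor additionally sees the conventional reconstruction (FBP, ART, or ML-EM output). Channel concatenation and the eps_model interface below are assumptions; the paper embeds the conventional features as conditional constraints, whose exact form may differ.

import torch

@torch.no_grad()
def conditional_reverse_step(eps_model, x_t, t, cond, alphas, alphas_bar):
    # cond: preliminary reconstruction, concatenated as an extra channel.
    eps = eps_model(torch.cat([x_t, cond], dim=1), t)
    a_t, ab_t = alphas[t], alphas_bar[t]
    mean = (x_t - (1 - a_t) / (1 - ab_t).sqrt() * eps) / a_t.sqrt()
    if t == 0:
        return mean
    return mean + (1 - a_t).sqrt() * torch.randn_like(x_t)  # sigma_t^2 = beta_t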

AI-driven MRI biomarker for triple-class HER2 expression classification in breast cancer: a large-scale multicenter study.

Wong C, Yang Q, Liang Y, Wei Z, Dai Y, Xu Z, Chen X, Du S, Han C, Liang C, Zhang L, Liu Z, Wang Y, Shi Z

PubMed · Sep 26 2025
Accurate classification of human epidermal growth factor receptor 2 (HER2) expression is crucial for guiding treatment in breast cancer, especially with emerging therapies like trastuzumab deruxtecan (T-DXd) for HER2-low patients. Current gold-standard methods relying on invasive biopsy and immunohistochemistry suffer from sampling bias and interobserver variability, highlighting the need for reliable non-invasive alternatives. We developed an artificial intelligence framework that integrates a pretrained foundation model with a task-specific classifier to predict HER2 expression categories (HER2-zero, HER2-low, HER2-positive) directly from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The model was trained and validated using multicenter datasets. Model interpretability was assessed through feature visualization using t-SNE and UMAP dimensionality reduction techniques, complemented by SHAP analysis for post-hoc interpretation of critical predictive imaging features. The developed model demonstrated robust performance across datasets, achieving micro-average AUCs of 0.821 (95% CI 0.795–0.846) and 0.835 (95% CI 0.797–0.864), and macro-average AUCs of 0.833 (95% CI 0.818–0.847) and 0.857 (95% CI 0.837–0.872) in external validation. Subgroup analysis demonstrated strong discriminative power in distinguishing HER2 categories, particularly HER2-zero and HER2-low cases. Visualization techniques revealed distinct, biologically plausible clustering patterns corresponding to HER2 expression categories. This study presents a reproducible, non-invasive solution for comprehensive HER2 phenotyping using DCE-MRI, addressing fundamental limitations of biopsy-dependent assessment. Our approach enables accurate identification of HER2-low patients who may benefit from novel therapies like T-DXd. This framework represents a significant advancement in precision oncology, with potential to transform diagnostic workflows and guide targeted therapy selection in breast cancer care. The online version contains supplementary material available at 10.1186/s13058-025-02118-2.
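For reference, the reported micro- and macro-average AUCs can be computed from three-class probabilities as in this sketch; the integer class coding is an assumption.

import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

def her2_aucs(y_true, y_prob):
    # y_true: labels {0: HER2-zero, 1: HER2-low, 2: HER2-positive} (assumed coding);
    # y_prob: (n_samples, 3) predicted class probabilities.
    y_bin = label_binarize(y_true, classes=[0, 1, 2])
    micro = roc_auc_score(y_bin, y_prob, average="micro")
    macro = roc_auc_score(y_bin, y_prob, average="macro")
    return micro, macro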

Learning KAN-based Implicit Neural Representations for Deformable Image Registration

Nikita Drozdov, Marat Zinovev, Dmitry Sorokin

arXiv preprint · Sep 26 2025
Deformable image registration (DIR) is a cornerstone of medical image analysis, enabling spatial alignment for tasks like comparative studies and multi-modal fusion. While learning-based methods (e.g., CNNs, transformers) offer fast inference, they often require large training datasets and struggle to match the precision of classical iterative approaches on some organ types and imaging modalities. Implicit neural representations (INRs) have emerged as a promising alternative, parameterizing deformations as continuous mappings from coordinates to displacement vectors. However, this comes at the cost of requiring instance-specific optimization, making computational efficiency and seed-dependent learning stability critical factors for these methods. In this work, we propose KAN-IDIR and RandKAN-IDIR, the first integration of Kolmogorov-Arnold Networks (KANs) into INR-based DIR. Our proposed randomized basis sampling strategy reduces the required number of basis functions in KAN while maintaining registration quality, thereby significantly lowering computational costs. We evaluated our approach on three diverse datasets (lung CT, brain MRI, cardiac MRI) and compared it with competing instance-specific learning-based approaches, dataset-trained deep learning models, and classical registration approaches. KAN-IDIR and RandKAN-IDIR achieved the highest accuracy among INR-based methods across all evaluated modalities and anatomies, with minimal computational overhead and superior learning stability across multiple random seeds. Additionally, we discovered that our RandKAN-IDIR model with randomized basis sampling slightly outperforms the model with learnable basis function indices, while eliminating its additional training-time complexity.
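The INR formulation is straightforward to picture: a small network maps normalized coordinates to displacement vectors, and the moving image is resampled through the resulting field, with the network optimized per image pair. In this sketch a plain MLP stands in for the paper's KAN layers, which replace fixed activations with learnable basis functions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DisplacementINR(nn.Module):
    # Maps (N, 2) normalized coordinates in [-1, 1] to (N, 2) displacements.
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, coords):
        return self.net(coords)

def warp(moving, inr, H, W):
    # moving: (1, C, H, W). Build an identity grid, add predicted
    # displacements, and resample the moving image through the field.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    warped_grid = (grid + inr(grid)).reshape(1, H, W, 2)
    return F.grid_sample(moving, warped_grid, align_corners=True)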

A novel deep neural architecture for efficient and scalable multidomain image classification.

Nobel SMN, Tasir MAM, Noor H, Monowar MM, Hamid MA, Sayeed MS, Islam MR, Mridha MF, Dey N

PubMed · Sep 26 2025
Deep learning has significantly advanced the field of computer vision; however, developing models that generalize effectively across diverse image domains remains a major research challenge. In this study, we introduce DeepFreqNet, a novel deep neural architecture specifically designed for high-performance multi-domain image classification. The innovative aspect of DeepFreqNet lies in its combination of three powerful components: multi-scale feature extraction for capturing patterns at different resolutions, depthwise separable convolutions for enhanced computational efficiency, and residual connections to maintain gradient flow and accelerate convergence. This hybrid design improves the architecture's ability to learn discriminative features and ensures scalability across domains with varying data complexities. Unlike traditional transfer learning models, DeepFreqNet adapts seamlessly to diverse datasets without requiring extensive reconfiguration. Experimental results from nine benchmark datasets, including MRI tumor classification, blood cell classification, and sign language recognition, demonstrate superior performance, achieving classification accuracies between 98.96% and 99.97%. These results highlight the effectiveness and versatility of DeepFreqNet, showcasing a significant improvement over existing state-of-the-art methods and establishing it as a robust solution for real-world image classification challenges.
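Two of the named components, depthwise separable convolution and a residual connection, compose as in the illustrative PyTorch block below; DeepFreqNet's actual layer sizes and arrangement are not specified in the abstract.

import torch.nn as nn

class DSConvBlock(nn.Module):
    # Depthwise separable convolution with a residual shortcut.
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)  # mixes channels cheaply
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.pointwise(self.depthwise(x))
        return self.act(self.bn(out) + x)  # residual keeps gradients flowing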

Efficacy of PSMA PET/CT radiomics analysis for risk stratification in newly diagnosed prostate cancer: a multicenter study.

Jafari E, Zarei A, Dadgar H, Keshavarz A, Abdollahi H, Samimi R, Manafi-Farid R, Divband G, Nikkholgh B, Fallahi B, Amini H, Ahmadzadehfar H, Rahmim A, Zohrabi F, Assadi M

PubMed · Sep 26 2025
Prostate-specific membrane antigen (PSMA) PET/CT plays an increasing role in prostate cancer management. Radiomics analysis of PSMA PET/CT images may provide additional information for risk stratification. This study aimed to evaluate the performance of PSMA PET/CT radiomics analysis in differentiating between Gleason Grade Groups (GGG 1–3 vs. GGG 4–5) and predicting PSA levels (below vs. at or above 20 ng/ml) in patients with newly diagnosed prostate cancer. In this multicenter study, patients with confirmed primary prostate cancer who underwent [68Ga]Ga-PSMA PET/CT for staging were enrolled. Inclusion criteria required intraprostatic lesions on PET and availability of International Society of Urological Pathology (ISUP) grade information. Three segments were delineated: intraprostatic PSMA-avid lesions on PET, the whole prostate on PET, and the whole prostate on CT. Radiomic features (RFs) were extracted from all segments. Dimensionality reduction was achieved through principal component analysis (PCA) prior to model training on data from two centers (186 cases) with 10-fold cross-validation. Model performance was validated on an external dataset (57 cases) using various machine learning models, including random forest, nearest centroid, support vector machine (SVM), calibrated classifier CV, and logistic regression. In this retrospective study, 243 patients with a median age of 69 (range: 46–89) were enrolled. For distinguishing GGG 1–3 from GGG 4–5, the nearest centroid classifier using RFs from whole-prostate PET achieved the best performance in the internal test set, while the random forest classifier using RFs from PSMA-avid lesions in PET performed best in the external test set. However, when considering both internal and external test sets, a calibrated classifier CV using RFs from PSMA-avid PET data showed slightly improved overall performance. Regarding PSA level classification (<20 ng/ml vs. ≥20 ng/ml), the nearest centroid classifier using RFs from the whole prostate in PET achieved the best performance in the internal test set. In the external test set, the highest performance was observed using RFs derived from the concatenation of PET and CT. Notably, when combining both internal and external test sets, the best performance was again achieved with RFs from the concatenated PET/CT data. Our research suggests that [68Ga]Ga-PSMA PET/CT radiomic features, particularly features derived from intraprostatic PSMA-avid lesions, may provide valuable information for pre-biopsy risk stratification in newly diagnosed prostate cancer.
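The dimensionality-reduction-plus-classifier pipeline described can be sketched with scikit-learn as below; the placeholder feature matrix, the variance threshold, and the random-forest hyperparameters are illustrative, and in practice the radiomic features would come from the delineated PET/CT segments (e.g., via a toolkit such as PyRadiomics).

import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(186, 120))    # placeholder: 186 training cases, 120 RFs
y = rng.integers(0, 2, size=186)   # placeholder GGG 1-3 vs. GGG 4-5 labels

pipe = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),        # keep components explaining 95% of variance
    RandomForestClassifier(n_estimators=300, random_state=0),
)
scores = cross_val_score(pipe, X, y, cv=10, scoring="roc_auc")
print(scores.mean())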