Page 49 of 2352345 results

Incomplete Multi-modal Disentanglement Learning with Application to Alzheimer's Disease Diagnosis.

Han K, Hu D, Zhao F, Liu T, Yang F, Li G

PubMed paper · Aug 29 2025
Multi-modal neuroimaging data, including magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (PET), have greatly advanced the computer-aided diagnosis of Alzheimer's disease (AD) by providing shared and complementary information. However, the problem of incomplete multi-modal data remains inevitable and challenging. Conventional strategies that exclude subjects with missing data or synthesize missing scans either result in substantial sample reduction or introduce unwanted noise. To address this issue, we propose an Incomplete Multi-modal Disentanglement Learning method (IMDL) for AD diagnosis without missing scan synthesis: a novel model that employs a tiny Transformer to adaptively fuse incomplete multi-modal features extracted by modality-wise variational autoencoders. Specifically, we first design a cross-modality contrastive learning module to encourage modality-wise variational autoencoders to disentangle shared and complementary representations of each modality. Then, to alleviate the potential information gap between the representations obtained from complete and incomplete multi-modal neuroimages, we leverage the technique of adversarial learning to harmonize these representations with two discriminators. Furthermore, we develop a local attention rectification module comprising local attention alignment and multi-instance attention rectification to enhance the localization of atrophic areas associated with AD. This module aligns inter-modality and intra-modality attention within the Transformer, thus making attention weights more explainable. Extensive experiments conducted on ADNI and AIBL datasets demonstrated the superior performance of the proposed IMDL in AD diagnosis, and a further validation on the HABS-HD dataset highlighted its effectiveness for dementia diagnosis using different multi-modal neuroimaging data (i.e., T1-weighted MRI and diffusion tensor imaging).
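
The cross-modality contrastive idea above can be illustrated with a minimal numerical sketch. This is not the paper's loss: it assumes a symmetric InfoNCE-style objective over L2-normalized shared embeddings and an arbitrary temperature; the actual IMDL module, encoder architectures, and hyperparameters are not specified here.

```python
import numpy as np

def info_nce(z_mri, z_pet, temperature=0.1):
    """Symmetric InfoNCE loss: matched MRI/PET shared embeddings
    (the diagonal pairs) are pulled together, mismatched pairs pushed apart."""
    # L2-normalize so the dot product is cosine similarity
    z_mri = z_mri / np.linalg.norm(z_mri, axis=1, keepdims=True)
    z_pet = z_pet / np.linalg.norm(z_pet, axis=1, keepdims=True)
    logits = z_mri @ z_pet.T / temperature        # (N, N) similarity matrix
    labels = np.arange(len(z_mri))                # positives sit on the diagonal

    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the MRI->PET and PET->MRI directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
shared = rng.normal(size=(8, 16))
# near-identical shared representations -> low loss
aligned = info_nce(shared, shared + 0.01 * rng.normal(size=(8, 16)))
# unrelated representations -> loss near log(batch size)
random_pairs = info_nce(shared, rng.normal(size=(8, 16)))
```

Minimizing such a loss is what drives the two modality-wise encoders toward a common "shared" subspace.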

Federated Fine-tuning of SAM-Med3D for MRI-based Dementia Classification

Kaouther Mouheb, Marawan Elbatel, Janne Papma, Geert Jan Biessels, Jurgen Claassen, Huub Middelkoop, Barbara van Munster, Wiesje van der Flier, Inez Ramakers, Stefan Klein, Esther E. Bron

arXiv preprint · Aug 29 2025
While foundation models (FMs) offer strong potential for AI-based dementia diagnosis, their integration into federated learning (FL) systems remains underexplored. In this benchmarking study, we systematically evaluate the impact of key design choices (classification head architecture, fine-tuning strategy, and aggregation method) on the performance and efficiency of federated FM tuning using brain MRI data. Using a large multi-cohort dataset, we find that the architecture of the classification head substantially influences performance, that freezing the FM encoder achieves results comparable to full fine-tuning, and that advanced aggregation methods outperform standard federated averaging. Our results offer practical insights for deploying FMs in decentralized clinical settings and highlight trade-offs that should guide future method development.
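
The federated-averaging baseline that the advanced aggregation methods are compared against can be sketched in a few lines. This is a simplified illustration over flat parameter vectors; real FL tuning aggregates full model state and the study's better-performing aggregators differ from plain FedAvg.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Standard federated averaging: combine client parameter vectors
    weighted by each client's local sample count."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()                  # per-client mixing weights
    stacked = np.stack(client_weights)            # (n_clients, n_params)
    return (coeffs[:, None] * stacked).sum(axis=0)

# Two clients: the one holding 3x the data pulls the global model toward itself
w_global = fedavg([np.array([0.0, 0.0]), np.array([4.0, 8.0])], [1, 3])
# → array([3., 6.])
```

In each FL round, clients fine-tune locally (here, only the classification head if the encoder is frozen) and the server applies an aggregation rule like this to their updates.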

Distinct 3-Dimensional Morphologies of Arthritic Knee Anatomy Exist: CT-Based Phenotyping Offers Outlier Detection in Total Knee Arthroplasty.

Woo JJ, Hasan SS, Zhang YB, Nawabi DH, Calendine CL, Wassef AJ, Chen AF, Krebs VE, Ramkumar PN

PubMed paper · Aug 29 2025
There is no foundational classification that 3-dimensionally characterizes arthritic anatomy to preoperatively plan and postoperatively evaluate total knee arthroplasty (TKA). With the advent of computed tomography (CT) as a preoperative planning tool, the purpose of this study was to morphologically classify pre-TKA anatomy across coronal, axial, and sagittal planes to identify outlier phenotypes and establish a foundation for future philosophical, technical, and technological strategies. A cross-sectional analysis was conducted using 1,352 pre-TKA lower-extremity CT scans collected from a database at a single multicenter referral center. A validated deep learning and computer vision program acquired 27 lower-extremity measurements for each CT scan. An unsupervised spectral clustering algorithm morphometrically classified the cohort. The optimal number of clusters was determined through elbow-plot and eigen-gap analyses. Visualization was conducted through t-distributed stochastic neighbor embedding (t-SNE), and each cluster was characterized. To assess the influence of severe deformity, the analysis was repeated after removing the impacted parameters and reassessing cluster separation. Spectral clustering revealed 4 distinct pre-TKA anatomic morphologies (18.5% Type 1, 39.6% Type 2, 7.5% Type 3, 34.5% Type 4). Types 1 and 3 embodied clear outliers. Key parameters distinguishing the 4 morphologies were hip rotation, medial posterior tibial slope, hip-knee-ankle angle, tibiofemoral angle, medial proximal tibial angle, and lateral distal femoral angle. After removing variables impacted by severe deformity, the secondary analysis again demonstrated 4 distinct clusters with the same distinguishing variables. CT-based phenotyping established a 3D classification of arthritic knee anatomy into 4 foundational morphologies, of which Types 1 and 3 represent outliers present in 26% of knees undergoing TKA.
Unlike prior classifications emphasizing native coronal plane anatomy, 3D phenotyping of knees undergoing TKA enables recognition of outlier cases and a foundation for longitudinal evaluation in a morphologically diverse and growing surgical population. Longitudinal studies that control for implant selection, alignment technique, and applied technology are required to evaluate the impact of this classification in enabling rapid recovery and mitigating dissatisfaction after TKA. Prognostic Level II. See Instructions for Authors for a complete description of levels of evidence.
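
The eigen-gap analysis used above to choose the number of clusters can be illustrated on synthetic data. This is a minimal sketch under stated assumptions (an RBF affinity and a symmetric normalized Laplacian); the study's 27 CT-derived measurements and its spectral-clustering implementation are not reproduced here.

```python
import numpy as np

def eigen_gap_k(X, sigma=1.0, max_k=8):
    """Eigen-gap heuristic: build an RBF affinity graph over the samples,
    form the normalized graph Laplacian, and pick the cluster count where
    the gap between consecutive sorted eigenvalues is largest."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-sq / (2 * sigma ** 2))            # pairwise affinities
    np.fill_diagonal(A, 0.0)
    d = A.sum(1)                                  # node degrees
    # symmetric normalized Laplacian: I - D^{-1/2} A D^{-1/2}
    L = np.eye(len(X)) - A / np.sqrt(d[:, None] * d[None, :])
    vals = np.sort(np.linalg.eigvalsh(L))[:max_k]
    gaps = np.diff(vals)
    return int(np.argmax(gaps)) + 1               # k = position of largest gap

rng = np.random.default_rng(0)
# Four well-separated synthetic "phenotypes" in 2-D
X = np.concatenate([rng.normal(c, 0.1, size=(20, 2))
                    for c in [(0, 0), (5, 0), (0, 5), (5, 5)]])
k = eigen_gap_k(X)  # → 4 on this toy data
```

The heuristic works because a graph with k well-separated groups has k near-zero Laplacian eigenvalues followed by a jump.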

Integrating Pathology and CT Imaging for Personalized Recurrence Risk Prediction in Renal Cancer

Daniël Boeke, Cedrik Blommestijn, Rebecca N. Wray, Kalina Chupetlovska, Shangqi Gao, Zeyu Gao, Regina G. H. Beets-Tan, Mireia Crispin-Ortuzar, James O. Jones, Wilson Silva, Ines P. Machado

arXiv preprint · Aug 29 2025
Recurrence risk estimation in clear cell renal cell carcinoma (ccRCC) is essential for guiding postoperative surveillance and treatment. The Leibovich score remains widely used for stratifying distant recurrence risk but offers limited patient-level resolution and excludes imaging information. This study evaluates multimodal recurrence prediction by integrating preoperative computed tomography (CT) and postoperative histopathology whole-slide images (WSIs). A modular deep learning framework with pretrained encoders and Cox-based survival modeling was tested across unimodal, late fusion, and intermediate fusion setups. In a real-world ccRCC cohort, WSI-based models consistently outperformed CT-only models, underscoring the prognostic strength of pathology. Intermediate fusion further improved performance, with the best model (TITAN-CONCH with ResNet-18) approaching the adjusted Leibovich score. Random tie-breaking narrowed the gap between the clinical baseline and learned models, suggesting discretization may overstate individualized performance. Using simple embedding concatenation, radiology added value primarily through fusion. These findings demonstrate the feasibility of foundation model-based multimodal integration for personalized ccRCC risk prediction. Future work should explore more expressive fusion strategies, larger multimodal datasets, and general-purpose CT encoders to better match pathology modeling capacity.
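
The random tie-breaking effect noted above is easy to see in Harrell's concordance index, the usual metric for this kind of survival comparison. The sketch below is an illustrative O(n²) implementation, not the paper's evaluation code: a coarsely discretized score such as a clinical risk group produces many tied pairs, and each tie is credited only 0.5.

```python
import numpy as np

def c_index(risk, time, event):
    """Harrell's concordance index: the fraction of comparable patient
    pairs whose predicted risks order their survival times correctly.
    Tied predicted risks count as 0.5 (the 'random tie-breaking' effect)."""
    n, num, den = len(risk), 0.0, 0
    for i in range(n):
        for j in range(n):
            # a pair is comparable if i has the earlier *observed* event
            if event[i] and time[i] < time[j]:
                den += 1
                if risk[i] > risk[j]:
                    num += 1.0
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

time = np.array([2.0, 4.0, 6.0, 8.0])
event = np.array([1, 1, 1, 1])
perfect = c_index(np.array([4.0, 3.0, 2.0, 1.0]), time, event)  # → 1.0
tied = c_index(np.zeros(4), time, event)                        # → 0.5
```

A model whose continuous risk scores barely beat a discretized clinical score may therefore look better than it truly is once ties are broken at random.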

A hybrid computer vision model to predict lung cancer in diverse populations

Zakkar, A., Perwaiz, N., Harikrishnan, V., Zhong, W., Narra, V., Krule, A., Yousef, F., Kim, D., Burrage-Burton, M., Lawal, A. A., Gadi, V., Korpics, M. C., Kim, S. J., Chen, Z., Khan, A. A., Molina, Y., Dai, Y., Marai, E., Meidani, H., Nguyen, R., Salahudeen, A. A.

medRxiv preprint · Aug 29 2025
PURPOSE Disparities in lung cancer incidence exist in Black populations, and screening criteria underserve Black populations because risk is disparately elevated in the screening-eligible population. Prediction models that integrate clinical and imaging-based features to individualize lung cancer risk are a potential means to mitigate these disparities. PATIENTS AND METHODS This multicenter (NLST) and catchment-population-based (UIH, urban and suburban Cook County) study utilized participants at risk of lung cancer with available lung CT imaging and follow-up between the years 2015 and 2024. 53,452 participants in NLST and 11,654 in UIH were included based on age- and tobacco-use-based risk factors for lung cancer. Cohorts were used for training and testing of deep and machine learning models using clinical features alone or combined with CT image features (hybrid computer vision). RESULTS An optimized 7-clinical-feature model achieved ROC-AUC values ranging from 0.64-0.67 in NLST and 0.60-0.65 in UIH cohorts across multiple years. Incorporation of imaging features to form a hybrid computer vision model significantly improved ROC-AUC values to 0.78-0.91 in NLST, but performance deteriorated in UIH with ROC-AUC values of 0.68-0.80, attributable to Black participants, for whom ROC-AUC values ranged from 0.63-0.72 across multiple years. Retraining the hybrid computer vision model by incorporating Black and other participants from the UIH cohort improved performance, with ROC-AUC values of 0.70-0.87 in a held-out UIH test set. CONCLUSION Hybrid computer vision predicted risk with improved accuracy compared to clinical risk models alone. However, potential biases in image training data reduced model generalizability in Black participants. Performance improved upon retraining with a subset of the UIH cohort, suggesting that inclusive training and validation datasets can minimize racial disparities.
Future studies incorporating vision models trained on representative data sets may demonstrate improved health equity upon clinical use.
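
The ROC-AUC values that anchor the comparisons above can be computed with the rank-sum (Mann-Whitney) identity: the probability that a randomly chosen positive case outranks a randomly chosen negative one. The sketch below assumes distinct scores (no mid-rank tie handling) and is not the study's evaluation pipeline.

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC-AUC via ranks: equivalent to the area under the ROC curve
    when scores are distinct."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # 1-based ranks
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    # subtract the minimum possible positive rank sum, normalize by pair count
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

auc = roc_auc(np.array([0.1, 0.4, 0.35, 0.8]),
              np.array([0, 0, 1, 1]))  # → 0.75
```

Because AUC is rank-based, a cohort shift that reorders scores for a subgroup (as seen for Black participants in UIH) degrades it even if average scores look similar.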

Artificial intelligence as an independent reader of risk-dominant lung nodules: influence of CT reconstruction parameters.

Mao Y, Heuvelmans MA, van Tuinen M, Yu D, Yi J, Oudkerk M, Ye Z, de Bock GH, Dorrius MD

PubMed paper · Aug 29 2025
To assess the impact of reconstruction parameters on AI's performance in detecting and classifying risk-dominant nodules in a baseline low-dose CT (LDCT) screening among a Chinese general population. Baseline LDCT scans from 300 consecutive participants in the Netherlands and China Big-3 (NELCIN-B3) trial were included. AI analyzed each scan reconstructed with four settings: 1 mm/0.7 mm thickness/interval with medium-soft and hard kernels (D45f/1 mm, B80f/1 mm) and 2 mm/1 mm with soft and medium-soft kernels (B30f/2 mm, D45f/2 mm). Reading results from a consensus read by two radiologists served as the reference standard. At scan level, inter-reader agreement between AI and the reference standard, sensitivity, and specificity in determining the presence of a risk-dominant nodule were evaluated. For reference-standard risk-dominant nodules, nodule detection rate and agreement in nodule type classification between AI and the reference standard were assessed. AI-D45f/1 mm demonstrated a significantly higher sensitivity than AI-B80f/1 mm in determining the presence of a risk-dominant nodule per scan (77.5% vs. 31.5%, p < 0.0001). For reference-standard risk-dominant nodules (111/300, 37.0%), kernel variations (AI-D45f/1 mm vs. AI-B80f/1 mm) did not significantly affect AI's nodule detection rate (87.4% vs. 82.0%, p = 0.26) but substantially influenced the agreement in nodule type classification between AI and the reference standard (87.7% [50/57] vs. 17.7% [11/62], p < 0.0001). Change in thickness/interval (AI-D45f/1 mm vs. AI-D45f/2 mm) had no substantial influence on any aspect of AI's performance (p > 0.05). Variations in reconstruction kernels significantly affected AI's performance in risk-dominant nodule type classification, but not nodule detection. Ensuring consistency with radiologist-preferred kernels significantly improved agreement in nodule type classification and may help integrate AI more smoothly into clinical workflows.
Question: Patient management in lung cancer screening depends on the risk-dominant nodule, yet no prior studies have assessed the impact of reconstruction parameters on AI performance for these nodules. Findings: The difference between reconstruction kernels (AI-D45f/1 mm vs. AI-B80f/1 mm, or AI-B30f/2 mm vs. AI-D45f/2 mm) significantly affected AI's performance in risk-dominant nodule type classification, but not nodule detection. Clinical relevance: Using a reconstruction kernel consistent with the radiologist's choice is likely to improve the overall performance of AI-based CAD systems as an independent reader and to support greater clinical acceptance and integration of AI tools into routine practice.
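
Scan-level agreement between an AI reader and a radiologist consensus, as evaluated above, is conventionally chance-corrected. The sketch below is a minimal Cohen's kappa, offered as an illustration of the metric rather than the study's statistical code.

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two readers (e.g. AI vs. the
    radiologists' consensus) corrected for agreement expected by chance."""
    cats = np.unique(np.concatenate([a, b]))
    po = np.mean(a == b)                                       # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

# toy example: presence (1) / absence (0) of a risk-dominant nodule per scan
reader_a = np.array([0, 0, 0, 1, 1, 1])
reader_b = np.array([0, 0, 1, 1, 1, 0])
kappa = cohens_kappa(reader_a, reader_b)
```

Raw percent agreement (here 4/6) overstates concordance when one class dominates; kappa removes that inflation.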

Deep Learning Framework for Early Detection of Pancreatic Cancer Using Multi-Modal Medical Imaging Analysis

Dennis Slobodzian, Karissa Tilbury, Amir Kordijazi

arXiv preprint · Aug 28 2025
Pancreatic ductal adenocarcinoma (PDAC) remains one of the most lethal forms of cancer, with a five-year survival rate below 10%, primarily due to late detection. This research develops and validates a deep learning framework for early PDAC detection through analysis of dual-modality imaging: autofluorescence and second harmonic generation (SHG). We analyzed 40 unique patient samples to create a specialized neural network capable of distinguishing between normal, fibrotic, and cancerous tissue. Our methodology evaluated six distinct deep learning architectures, comparing traditional Convolutional Neural Networks (CNNs) with modern Vision Transformers (ViTs). Through systematic experimentation, we identified and overcame significant challenges in medical image analysis, including limited dataset size and class imbalance. The final optimized framework, based on a modified ResNet architecture with frozen pre-trained layers and class-weighted training, achieved over 90% accuracy in cancer detection. This represents a significant improvement over current manual analysis methods and demonstrates potential for clinical deployment. This work establishes a robust pipeline for automated PDAC detection that can augment pathologists' capabilities while providing a foundation for future expansion to other cancer types. The developed methodology also offers valuable insights for applying deep learning to limited-size medical imaging datasets, a common challenge in clinical applications.
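
The class-weighted training mentioned above can be sketched with inverse-frequency weights and a weighted cross-entropy. This is an illustration of the general technique, assuming a hypothetical 8:3:1 imbalance across normal/fibrotic/cancerous tissue; the paper's actual weighting scheme and class ratios are not given here.

```python
import numpy as np

def class_weights(labels, n_classes):
    """Inverse-frequency class weights: rare classes (e.g. cancerous
    tissue) contribute more per sample to the loss."""
    counts = np.bincount(labels, minlength=n_classes)
    return len(labels) / (n_classes * counts)

def weighted_xent(probs, labels, weights):
    """Cross-entropy where each sample's loss is scaled by its class weight."""
    picked = probs[np.arange(len(labels)), labels]    # prob of the true class
    return float(np.mean(weights[labels] * -np.log(picked)))

# 3 tissue classes with an 8:3:1 imbalance
labels = np.array([0] * 8 + [1] * 3 + [2] * 1)
w = class_weights(labels, 3)                # rarest class gets the largest weight
probs = np.full((12, 3), 1 / 3)             # uniform predictions
loss = weighted_xent(probs, labels, w)      # → log(3): each class contributes equally
```

With these weights, a classifier can no longer lower its loss simply by predicting the majority (normal) class everywhere.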

ResGSNet: Enhanced local attention with Global Scoring Mechanism for the early detection and treatment of Alzheimer's Disease.

Chen T, Li X

PubMed paper · Aug 28 2025
Recently, Transformers have been widely used in medical imaging analysis for their competitive potential when given enough data. However, Transformers conduct attention on a global scale by utilizing self-attention mechanisms across all input patches, thereby requiring substantial computational power and memory, especially when dealing with large 3D images such as MRI images. In this study, we proposed Residual Global Scoring Network (ResGSNet), a novel architecture combining ResNet with a Global Scoring Module (GSM), achieving high computational efficiency while incorporating both local and global features. First, our proposed GSM utilized local attention to conduct information exchange within local brain regions, subsequently assigning global scores to each of these local regions, demonstrating the capability to encapsulate local and global information with reduced computational burden and superior performance compared to existing methods. Second, we utilized Grad-CAM++ and the Attention Map to interpret model predictions, uncovering brain regions related to Alzheimer's Disease (AD) Detection. Third, our extensive experiments on the ADNI dataset show that our proposed ResGSNet achieved satisfactory performance with 95.1% accuracy in predicting AD, a 1.3% increase compared to state-of-the-art methods, and 93.4% accuracy for Mild Cognitive Impairment (MCI). Our model for detecting MCI can potentially serve as a screening tool for identifying individuals at high risk of developing AD and allow for early intervention. Furthermore, the Grad-CAM++ and Attention Map not only identified brain regions commonly associated with AD and MCI but also revealed previously undiscovered regions, including the putamen, cerebellar cortex, and caudate nucleus, holding promise for further research into the etiology of AD.
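
The attend-locally-then-score-globally pattern can be sketched as follows. This is a deliberately toy 1-D analogue with an assumed mean-pooled scoring function; the actual GSM operates on 3D MRI patches and learns its scoring. Local attention costs O(R·s²·d) for R regions of s patches rather than the O(n²·d) of full global attention.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def global_scoring_attention(patches, region_size):
    """Toy local-attention + global-scoring block: self-attention runs only
    *within* each region (cheap), then each region's pooled feature gets a
    softmax score weighting its contribution to the global representation."""
    n, d = patches.shape
    regions = patches.reshape(n // region_size, region_size, d)
    # local self-attention inside each region
    attn = softmax(regions @ regions.transpose(0, 2, 1) / np.sqrt(d), axis=-1)
    local = attn @ regions                       # (R, s, d) locally mixed features
    pooled = local.mean(axis=1)                  # one vector per region
    scores = softmax(pooled.sum(axis=1))         # one global score per region
    fused = (scores[:, None] * pooled).sum(axis=0)
    return fused, scores

rng = np.random.default_rng(0)
patches = rng.normal(size=(8, 4))                # 8 patch embeddings, dim 4
fused, scores = global_scoring_attention(patches, region_size=4)
```

The region scores double as a coarse explanation of which brain regions drove the prediction, which is the interpretability angle the abstract emphasizes.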

Hybrid quantum-classical-quantum convolutional neural networks.

Long C, Huang M, Ye X, Futamura Y, Sakurai T

PubMed paper · Aug 28 2025
Deep learning has achieved significant success in pattern recognition, with convolutional neural networks (CNNs) serving as a foundational architecture for extracting spatial features from images. Quantum computing provides an alternative computational framework; hybrid quantum-classical convolutional neural networks (QCCNNs) leverage high-dimensional Hilbert spaces and entanglement to surpass classical CNNs in image classification accuracy under comparable architectures. Despite performance improvements, QCCNNs typically use fixed quantum layers without incorporating trainable quantum parameters. This limits their ability to capture non-linear quantum representations and separates the model from the potential advantages of expressive quantum learning. In this work, we present a hybrid quantum-classical-quantum convolutional neural network (QCQ-CNN) that incorporates a quantum convolutional filter, a shallow classical CNN, and a trainable variational quantum classifier. This architecture aims to enhance the expressivity of decision boundaries in image classification tasks by introducing tunable quantum parameters into the end-to-end learning process. Through a series of small-sample experiments on MNIST, F-MNIST, and MRI tumor datasets, QCQ-CNN demonstrates competitive accuracy and convergence behavior compared to classical and hybrid baselines. We further analyze the effect of ansatz depth and find that moderate-depth quantum circuits can improve learning stability without introducing excessive complexity. Additionally, simulations incorporating depolarizing noise and finite sampling shots suggest that QCQ-CNN maintains a certain degree of robustness under realistic quantum noise conditions. While our results are currently limited to simulations with small-scale quantum circuits, the proposed approach offers a potentially promising direction for hybrid quantum learning in near-term applications.
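
What a trainable quantum parameter buys can be seen in a one-qubit classical simulation: angle-encode a feature, apply a trainable rotation, and read out the Z expectation. This is a deliberately tiny stand-in for the paper's variational classifier (the real QCQ-CNN uses multi-qubit ansatzes); here the readout reduces to ⟨Z⟩ = cos(x + θ).

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def predict(x, theta):
    """Tiny variational quantum classifier, simulated classically:
    |0> -> RY(x) feature encoding -> RY(theta) trainable layer -> <Z>."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
    return state[0] ** 2 - state[1] ** 2          # <Z> = P(0) - P(1)

def grad(x, theta):
    """Parameter-shift rule: the exact gradient of <Z> with respect to the
    trainable angle, computed from two circuit evaluations."""
    return 0.5 * (predict(x, theta + np.pi / 2) - predict(x, theta - np.pi / 2))
```

The parameter-shift gradient is what makes such quantum layers compatible with end-to-end gradient descent alongside the classical CNN.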

Domain Adaptation Techniques for Natural and Medical Image Classification

Ahmad Chaddad, Yihang Wu, Reem Kateb, Christian Desrosiers

arXiv preprint · Aug 28 2025
Domain adaptation (DA) techniques can alleviate distribution differences between training and test sets in machine learning by leveraging information from source domains. In image classification, most advances in DA have been made using natural images rather than medical data, which are harder to work with. Moreover, even for natural images, the use of mainstream datasets can lead to performance bias. With the aim of better understanding the benefits of DA for both natural and medical images, this study performs 557 simulation studies using seven widely-used DA techniques for image classification in five natural and eight medical datasets that cover various scenarios, such as out-of-distribution, dynamic data streams, and limited training samples. Our experiments yield detailed results and insightful observations highlighting the performance and medical applicability of these techniques. Notably, our results have shown the outstanding performance of the Deep Subdomain Adaptation Network (DSAN) algorithm. This algorithm achieved feasible classification accuracy (91.2%) in the COVID-19 dataset using ResNet50 and showed an important accuracy improvement in the dynamic data stream DA scenario (+6.7%) compared to the baseline. Our results also demonstrate that DSAN exhibits a remarkable level of explainability when evaluated on COVID-19 and skin cancer datasets. These results contribute to the understanding of DA techniques and offer valuable insight into the effective adaptation of models to medical data.
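
The global distribution alignment that methods like DSAN refine can be sketched with a plain (biased) Maximum Mean Discrepancy estimate. This shows only the global backbone of the idea: DSAN's LMMD additionally weights pairs by (pseudo-)label so that alignment happens per subdomain, not just between whole domains.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between two feature sets."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd(source, target, gamma=1.0):
    """Biased estimate of squared MMD: the RKHS distance between the mean
    embeddings of source and target features. Zero iff the distributions
    match (for a characteristic kernel); minimized as a DA training loss."""
    return (rbf_kernel(source, source, gamma).mean()
            + rbf_kernel(target, target, gamma).mean()
            - 2 * rbf_kernel(source, target, gamma).mean())

rng = np.random.default_rng(0)
same = mmd(rng.normal(size=(50, 4)), rng.normal(size=(50, 4)))
shifted = mmd(rng.normal(size=(50, 4)), rng.normal(3.0, 1.0, size=(50, 4)))
```

In a DA pipeline, a term like this is added to the classification loss so the feature extractor maps, say, natural-image and medical-image features into overlapping regions of feature space.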
