Page 4 of 1691684 results

Multi-Contrast Fusion Module: An attention mechanism integrating multi-contrast features for fetal torso plane classification

Shengjun Zhu, Siyu Liu, Runqing Xiong, Liping Zheng, Duo Ma, Rongshang Chen, Jiaxin Cai

arXiv preprint · Aug 13 2025
Purpose: Prenatal ultrasound is a key tool in evaluating fetal structural development and detecting abnormalities, contributing to reduced perinatal complications and improved neonatal survival. Accurate identification of standard fetal torso planes is essential for reliable assessment and personalized prenatal care. However, limitations such as low contrast and unclear texture details in ultrasound imaging pose significant challenges for fine-grained anatomical recognition. Methods: We propose a novel Multi-Contrast Fusion Module (MCFM) to enhance the model's ability to extract detailed information from ultrasound images. MCFM operates exclusively on the lower layers of the neural network, directly processing raw ultrasound data. By assigning attention weights to image representations under different contrast conditions, the module enhances feature modeling while explicitly maintaining minimal parameter overhead. Results: The proposed MCFM was evaluated on a curated dataset of fetal torso plane ultrasound images. Experimental results demonstrate that MCFM substantially improves recognition performance, with a minimal increase in model complexity. The integration of multi-contrast attention enables the model to better capture subtle anatomical structures, contributing to higher classification accuracy and clinical reliability. Conclusions: Our method provides an effective solution for improving fetal torso plane recognition in ultrasound imaging. By enhancing feature representation through multi-contrast fusion, the proposed approach supports clinicians in achieving more accurate and consistent diagnoses, demonstrating strong potential for clinical adoption in prenatal screening. The codes are available at https://github.com/sysll/MCFM.
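The MCFM itself is described only at a high level here; the core idea, attention weights over representations of the same image under different contrast settings, can be sketched as follows (a minimal NumPy illustration with made-up names, gamma adjustment as a stand-in contrast transform, and zero-initialized weights — not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=0):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_contrast_fusion(image, gammas=(0.5, 1.0, 2.0), w=None):
    """Fuse gamma-adjusted views of a [0, 1] image with attention weights.

    image  : 2-D array with values in [0, 1]
    gammas : contrast settings producing the different views
    w      : learnable logits, one per view (zeros -> uniform attention)
    """
    views = np.stack([image ** g for g in gammas])   # (V, H, W)
    if w is None:
        w = np.zeros(len(gammas))
    att = softmax(w)                                 # (V,) attention weights
    return np.tensordot(att, views, axes=1)          # weighted sum -> (H, W)

img = np.linspace(0, 1, 16).reshape(4, 4)
fused = multi_contrast_fusion(img)
```

In a trained network the logits `w` would be produced by a small attention branch and learned end-to-end, which is what keeps the parameter overhead minimal.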

Explainable AI Technique in Lung Cancer Detection Using Convolutional Neural Networks

Nishan Rai, Sujan Khatri, Devendra Risal

arXiv preprint · Aug 13 2025
Early detection of lung cancer is critical to improving survival outcomes. We present a deep learning framework for automated lung cancer screening from chest computed tomography (CT) images with integrated explainability. Using the IQ-OTH/NCCD dataset (1,197 scans across Normal, Benign, and Malignant classes), we evaluate a custom convolutional neural network (CNN) and three fine-tuned transfer learning backbones: DenseNet121, ResNet152, and VGG19. Models are trained with cost-sensitive learning to mitigate class imbalance and evaluated via accuracy, precision, recall, F1-score, and ROC-AUC. While ResNet152 achieved the highest accuracy (97.3%), DenseNet121 provided the best overall balance in precision, recall, and F1 (up to 92%, 90%, 91%, respectively). We further apply Shapley Additive Explanations (SHAP) to visualize evidence contributing to predictions, improving clinical transparency. Results indicate that CNN-based approaches augmented with explainability can provide fast, accurate, and interpretable support for lung cancer screening, particularly in resource-limited settings.
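Cost-sensitive learning as used here usually means re-weighting the loss by inverse class frequency; a common heuristic (the "balanced" scheme, as in scikit-learn, though the paper does not state its exact weighting) can be sketched as:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency weights: n_samples / (n_classes * class_count).

    Rare classes get proportionally larger weights, so their
    misclassifications cost more during training.
    """
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# Illustrative class sizes (not the IQ-OTH/NCCD distribution)
y = ["Normal"] * 60 + ["Benign"] * 30 + ["Malignant"] * 10
weights = balanced_class_weights(y)
```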

Current imaging applications, radiomics, and machine learning modalities of CNS demyelinating disorders and its mimickers.

Alam Z, Maddali A, Patel S, Weber N, Al Rikabi S, Thiemann D, Desai K, Monoky D

PubMed · Aug 12 2025
Distinguishing among neuroinflammatory demyelinating diseases of the central nervous system can present a significant diagnostic challenge due to substantial overlap in clinical presentations and imaging features. Collaboration between specialists, novel antibody testing, and dedicated magnetic resonance imaging protocols have helped to narrow the diagnostic gap, but challenging cases remain. Machine learning algorithms have proven to be able to identify subtle patterns that escape even the most experienced human eye. Indeed, machine learning and the subfield of radiomics have demonstrated exponential growth and improvement in diagnosis capacity within the past decade. The sometimes daunting diagnostic overlap of various demyelinating processes thus provides a unique opportunity: can the elite pattern recognition powers of machine learning close the gap in making the correct diagnosis? This review specifically focuses on neuroinflammatory demyelinating diseases, exploring the role of artificial intelligence in the detection, diagnosis, and differentiation of the most common pathologies: multiple sclerosis (MS), neuromyelitis optica spectrum disorder (NMOSD), acute disseminated encephalomyelitis (ADEM), Sjogren's syndrome, MOG antibody-associated disorder (MOGAD), and neuropsychiatric systemic lupus erythematosus (NPSLE). Understanding how these tools enhance diagnostic precision may lead to earlier intervention, improved outcomes, and optimized management strategies.

[Development of a machine learning-based diagnostic model for T-shaped uterus using transvaginal 3D ultrasound quantitative parameters].

Li SJ, Wang Y, Huang R, Yang LM, Lyu XD, Huang XW, Peng XB, Song DM, Ma N, Xiao Y, Zhou QY, Guo Y, Liang N, Liu S, Gao K, Yan YN, Xia EL

PubMed · Aug 12 2025
<b>Objective:</b> To develop a machine learning diagnostic model for T-shaped uterus based on quantitative parameters from 3D transvaginal ultrasound. <b>Methods:</b> A retrospective cross-sectional study was conducted, recruiting 304 patients who visited the hysteroscopy centre of Fuxing Hospital, Beijing, China, between July 2021 and June 2024 for reasons such as "infertility or recurrent pregnancy loss" and other adverse obstetric histories. Twelve experts, including seven clinicians and five sonographers, from Fuxing Hospital, Beijing Obstetrics and Gynecology Hospital of Capital Medical University, Peking University People's Hospital, and Beijing Hospital, independently and anonymously assessed the diagnosis of T-shaped uterus using a modified Delphi method. Based on the consensus results, 56 cases were classified into the T-shaped uterus group and 248 cases into the non-T-shaped uterus group. A total of 7 clinical features and 14 sonographic features were initially included. Features demonstrating significant diagnostic impact were selected using 10-fold cross-validated LASSO (Least Absolute Shrinkage and Selection Operator) regression. Four machine learning algorithms [logistic regression (LR), decision tree (DT), random forest (RF), and support vector machine (SVM)] were subsequently implemented to develop T-shaped uterus diagnostic models. Using the Python random module, the patient dataset was randomly divided into five subsets, each maintaining the original class distribution (T-shaped uterus∶non-T-shaped uterus ≈ 1∶4) and comparable numbers of samples. Five-fold cross-validation was performed, with four subsets used for training and one for validation in each round, to enhance the reliability of model evaluation. Model performance was rigorously assessed using established metrics: area under the receiver operating characteristic (ROC) curve (AUC), sensitivity, specificity, precision, and F1-score. In the RF model, feature importance was assessed by the mean decrease in Gini impurity attributed to each variable. <b>Results:</b> The 304 patients had a mean age of (35±4) years: (35±5) years in the T-shaped uterus group and (34±4) years in the non-T-shaped uterus group. Eight features with non-zero coefficients were selected by LASSO regression: average lateral wall indentation width, average lateral wall indentation angle, upper cavity depth, endometrial thickness, uterine cavity area, cavity width at the level of lateral wall indentation, angle formed by the bilateral lateral walls, and average cornual angle (coefficients: 0.125, -0.064, -0.037, -0.030, -0.026, -0.025, -0.025, and -0.024, respectively). The RF model showed the best diagnostic performance: in the training set, AUC was 0.986 (95%<i>CI</i>: 0.980-0.992), sensitivity 0.978, specificity 0.946, precision 0.802, and F1-score 0.881; in the testing set, AUC was 0.948 (95%<i>CI</i>: 0.911-0.985), sensitivity 0.873, specificity 0.919, precision 0.716, and F1-score 0.784. Feature importance analysis of the RF model revealed that average lateral wall indentation width, upper cavity depth, and average lateral wall indentation angle were the top three features (over 65% of total importance), playing a decisive role in model prediction. <b>Conclusion:</b> The machine learning models developed in this study, particularly the RF model, are promising for the diagnosis of T-shaped uterus, offering new perspectives and technical support for clinical practice.
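The stratified five-fold split described in the Methods (five subsets, each preserving the ≈1:4 class ratio) can be sketched in plain Python; the class counts below are illustrative, not the study's exact data:

```python
import random

def stratified_kfold(labels, k=5, seed=0):
    """Split sample indices into k folds, each preserving the class ratio."""
    rng = random.Random(seed)
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)                 # randomize within each class
        for j, i in enumerate(idxs):
            folds[j % k].append(i)        # deal indices round-robin
    return folds

# Illustrative counts mirroring T-shaped : non-T-shaped ≈ 1:4
labels = ["T"] * 55 + ["N"] * 245
folds = stratified_kfold(labels)
```

Each round of cross-validation then trains on four folds and validates on the fifth.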

Hierarchical Variable Importance with Statistical Control for Medical Data-Based Prediction

Joseph Paillard, Antoine Collas, Denis A. Engemann, Bertrand Thirion

arXiv preprint · Aug 12 2025
Recent advances in machine learning have greatly expanded the repertoire of predictive methods for medical imaging. However, the interpretability of complex models remains a challenge, which limits their utility in medical applications. Recently, model-agnostic methods have been proposed to measure conditional variable importance and accommodate complex non-linear models. However, they often lack power when dealing with highly correlated data, a common problem in medical imaging. We introduce Hierarchical-CPI, a model-agnostic variable importance measure that frames the inference problem as the discovery of groups of variables that are jointly predictive of the outcome. By exploring subgroups along a hierarchical tree, it remains computationally tractable, yet also enjoys explicit family-wise error rate control. Moreover, we address the issue of vanishing conditional importance under high correlation with a tree-based importance allocation mechanism. We benchmarked Hierarchical-CPI against state-of-the-art variable importance methods. Its effectiveness is demonstrated in two neuroimaging datasets: classifying dementia diagnoses from MRI data (ADNI dataset) and analyzing the Berger effect on EEG data (TDBRAIN dataset), identifying biologically plausible variables.
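The "vanishing conditional importance" problem the abstract targets can be seen in a toy example: when two predictors are near-duplicates, permuting one of them conditionally on the other barely changes it, so its conditional importance collapses, while permuting it marginally (as a group would) destroys the signal. The snippet below is only an illustrative toy with a linear conditional-permutation shortcut, not the Hierarchical-CPI procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# x2 is a near-duplicate of x1; the "model" output is simply x1 itself.
n = 5000
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)
y = x1

def cond_permute(x, z, rng):
    """Permute x while preserving its (linear) dependence on z:
    regress x on z, then permute only the residuals."""
    beta = (x @ z) / (z @ z)
    resid = x - beta * z
    return beta * z + rng.permutation(resid)

# Conditional importance of x1 given x2 is tiny: x2 carries nearly
# the same information, so the conditional permutation barely moves x1.
cond_mse = float(np.mean((y - cond_permute(x1, x2, rng)) ** 2))

# Treating {x1, x2} as one group and permuting marginally destroys
# the signal, exposing the group's joint importance.
group_mse = float(np.mean((y - rng.permutation(x1)) ** 2))
```

This is the motivation for testing groups of correlated variables along a hierarchical tree rather than each variable conditionally on all the others.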

Are [18F]FDG PET/CT imaging and cell blood count-derived biomarkers robust non-invasive surrogates for tumor-infiltrating lymphocytes in early-stage breast cancer?

Seban RD, Rebaud L, Djerroudi L, Vincent-Salomon A, Bidard FC, Champion L, Buvat I

PubMed · Aug 12 2025
Tumor-infiltrating lymphocytes (TILs) are key immune biomarkers associated with prognosis and treatment response in early-stage breast cancer (BC), particularly in the triple-negative subtype. This study aimed to evaluate whether [18F]FDG PET/CT imaging and routine cell blood count (CBC)-derived biomarkers can serve as non-invasive surrogates for TILs, using machine-learning models. We retrospectively analyzed 358 patients with biopsy-proven early-stage invasive BC who underwent pre-treatment [18F]FDG PET/CT imaging. PET-derived biomarkers were extracted from the primary tumor, lymph nodes, and lymphoid organs (spleen and bone marrow). CBC-derived biomarkers included neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR). TILs were assessed histologically and categorized as low (0-10%), intermediate (11-59%), or high (≥ 60%). Correlations were assessed using Spearman's rank coefficient, and classification and regression models were built using several machine-learning algorithms. Tumor SUVmax and tumor SUVmean showed the highest correlation with TIL levels (ρ = 0.29 and 0.30, respectively; p < 0.001 for both), but overall associations between TILs and PET or CBC-derived biomarkers were weak. No CBC-derived biomarker showed significant correlation or discriminative performance. Machine-learning models failed to predict TIL levels with satisfactory accuracy (maximum balanced accuracy = 0.66). Lymphoid organ metrics (SLR, BLR) and CBC-derived parameters did not significantly enhance predictive value. In this study, neither [18F]FDG PET/CT nor routine CBC-derived biomarkers reliably predicted TIL levels in early-stage BC. This observation was made in the presence of potential scanner-related variability and for a restricted set of usual PET metrics. Future models should incorporate more targeted imaging approaches, such as immunoPET, to non-invasively assess immune infiltration with higher specificity and improve personalized treatment strategies.
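Spearman's rank coefficient used above is simply the Pearson correlation of rank-transformed values; a minimal NumPy version (without the tie handling that a full implementation such as scipy.stats.spearmanr provides) is:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.

    Assumes no ties; production code averages ranks within ties.
    """
    rx = np.argsort(np.argsort(x)).astype(float)  # rank of each value
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
rho = spearman_rho(x, x ** 3)   # any monotone transform preserves ranks
```

Because it depends only on ranks, ρ is robust to the skewed distributions typical of SUV metrics.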

Amorphous-Crystalline Synergy in CoSe<sub>2</sub>/CoS<sub>2</sub> Heterostructures: High-Performance SERS Substrates for Esophageal Tumor Cell Discrimination.

Zhang M, Liu A, Meng X, Wang Y, Yu J, Liu H, Sun Y, Xu L, Song X, Zhang J, Sun L, Lin J, Wu A, Wang X, Chai N, Li L

PubMed · Aug 12 2025
Although surface-enhanced Raman scattering (SERS) spectroscopy is widely applied in biomedicine, new substrates that broaden its detection capabilities are still in demand. A crystalline-amorphous CoSe<sub>2</sub>/CoS<sub>2</sub> heterojunction, composed of orthorhombic CoSe<sub>2</sub> (o-CoSe<sub>2</sub>) and amorphous CoS<sub>2</sub> (a-CoS<sub>2</sub>), was synthesized with high SERS performance and stability. By adjusting the feed ratio, the proportion of a-CoS<sub>2</sub> to o-CoSe<sub>2</sub> was regulated; CoSe<sub>2</sub>/CoS<sub>2</sub>-S50, with a 1:1 ratio, demonstrates the best SERS performance owing to the balance of the two components. Experimental and simulation results confirm that o-CoSe<sub>2</sub> and a-CoS<sub>2</sub> make distinct contributions: a-CoS<sub>2</sub> provides rich vacancies and a higher density of active sites, while o-CoSe<sub>2</sub> further enriches vacancies, enhances electron delocalization and charge-transfer (CT) capability, and narrows the bandgap. With CoSe<sub>2</sub>/CoS<sub>2</sub>-S50, SERS detection of two common esophageal tumor cell lines (KYSE and TE) and healthy oral epithelial cells (het-1A) is achieved, and the cell types are discriminated with high sensitivity, specificity, and accuracy via machine learning (ML) analysis.

CRCFound: A Colorectal Cancer CT Image Foundation Model Based on Self-Supervised Learning.

Yang J, Cai D, Liu J, Zhuang Z, Zhao Y, Wang FA, Li C, Hu C, Gai B, Chen Y, Li Y, Wang L, Gao F, Wu X

PubMed · Aug 12 2025
Accurate risk stratification is crucial for determining the optimal treatment plan for patients with colorectal cancer (CRC). However, existing deep learning models perform poorly in the preoperative diagnosis of CRC and exhibit limited generalizability, primarily due to insufficient annotated data. To address these issues, CRCFound, a self-supervised learning-based CT image foundation model for CRC is proposed. After pretraining on 5137 unlabeled CRC CT images, CRCFound can learn universal feature representations and provide efficient and reliable adaptability for various clinical applications. Comprehensive benchmark tests are conducted on six different diagnostic tasks and two prognosis tasks to validate the performance of the pretrained model. Experimental results demonstrate that CRCFound can easily transfer to most CRC tasks and exhibit outstanding performance and generalization ability. Overall, CRCFound can solve the problem of insufficient annotated data and perform well in a wide range of downstream tasks of CRC, making it a promising solution for accurate diagnosis and personalized treatment of CRC patients.

Machine learning models for diagnosing lymph node recurrence in postoperative PTC patients: a radiomic analysis.

Pang F, Wu L, Qiu J, Guo Y, Xie L, Zhuang S, Du M, Liu D, Tan C, Liu T

PubMed · Aug 12 2025
Postoperative papillary thyroid cancer (PTC) patients often have enlarged cervical lymph nodes due to inflammation or hyperplasia, which complicates the assessment of recurrence or metastasis. This study aimed to explore the diagnostic capabilities of computed tomography (CT) imaging and radiomic analysis to distinguish the recurrence of cervical lymph nodes in patients with PTC postoperatively. A retrospective analysis of 194 PTC patients who underwent total thyroidectomy was conducted, with 98 cases of cervical lymph node recurrence and 96 cases without recurrence. Using 3D Slicer software, Regions of Interest (ROI) were delineated on enhanced venous phase CT images, analyzing 302 positive and 391 negative lymph nodes. These nodes were randomly divided into training and validation sets in a 3:2 ratio. Python was used to extract radiomic features from the ROIs and to develop radiomic models. Univariate and multivariate analyses identified statistically significant risk factors for cervical lymph node recurrence from clinical data, which, when combined with radiomic scores, formed a nomogram to predict recurrence risk. The diagnostic efficacy and clinical utility of the models were assessed using ROC curves, calibration curves, and Decision Curve Analysis (DCA). This study analyzed 693 lymph nodes (302 positive and 391 negative) and identified 35 significant radiomic features through dimensionality reduction and selection. The three machine learning models, including the Lasso regression, Support Vector Machine (SVM), and RF radiomics models, showed.

Dynamic Survival Prediction using Longitudinal Images based on Transformer

Bingfan Liu, Haolun Shi, Jiguo Cao

arXiv preprint · Aug 12 2025
Survival analysis utilizing multiple longitudinal medical images plays a pivotal role in the early detection and prognosis of diseases by providing insight beyond single-image evaluations. However, current methodologies often inadequately utilize censored data, overlook correlations among longitudinal images measured over multiple time points, and lack interpretability. We introduce SurLonFormer, a novel Transformer-based neural network that integrates longitudinal medical imaging with structured data for survival prediction. Our architecture comprises three key components: a Vision Encoder for extracting spatial features, a Sequence Encoder for aggregating temporal information, and a Survival Encoder based on the Cox proportional hazards model. This framework effectively incorporates censored data, addresses scalability issues, and enhances interpretability through occlusion sensitivity analysis and dynamic survival prediction. Extensive simulations and a real-world application in Alzheimer's disease analysis demonstrate that SurLonFormer achieves superior predictive performance and successfully identifies disease-related imaging biomarkers.
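The Cox proportional-hazards component mentioned above is conventionally trained by minimizing the negative log partial likelihood, which compares each subject who experiences the event against everyone still at risk at that time. A minimal NumPy sketch (distinct event times assumed, variable names ours) is:

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk, time, event):
    """Negative log partial likelihood of Cox model scores.

    risk  : model output (log hazard ratio) per subject
    time  : observed follow-up time per subject
    event : 1 if the event occurred, 0 if censored
    """
    order = np.argsort(-time)                   # descending: risk sets are prefixes
    r, e = risk[order], event[order]
    log_cumsum = np.log(np.cumsum(np.exp(r)))   # log-sum over each risk set
    return float(-np.sum(e * (r - log_cumsum)))

risk = np.array([0.5, -0.2, 1.0])
time = np.array([2.0, 5.0, 1.0])
event = np.array([1, 0, 1])
loss = cox_neg_log_partial_likelihood(risk, time, event)
```

Censored subjects contribute only through the risk sets of others, which is how the loss "effectively incorporates censored data".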
