
Fully automated 3D multi-modal deep learning model for preoperative T-stage prediction of colorectal cancer using ¹⁸F-FDG PET/CT.

Zhang M, Li Y, Zheng C, Xie F, Zhao Z, Dai F, Wang J, Wu H, Zhu Z, Liu Q, Li Y

PubMed | Jul 28 2025
This study aimed to develop a fully automated 3D multi-modal deep learning model using preoperative ¹⁸F-FDG PET/CT to predict the T-stage of colorectal cancer (CRC) and evaluate its clinical utility. A retrospective cohort of 474 CRC patients was included, with 400 patients in the internal cohort and 74 in the external cohort. Patients were classified into early T-stage (T1-T2) and advanced T-stage (T3-T4) groups. Automatic segmentation of the volume of interest (VOI) was performed with TotalSegmentator. A 3D ResNet18-based deep learning model integrated with a cross multi-head attention mechanism was developed. Five models (CT + PET + Clinic (CPC), CT + PET (CP), PET (P), CT (C), Clinic) and two radiologists' assessments were compared. Performance was evaluated using the area under the curve (AUC). Grad-CAM was employed to provide visual interpretability of decision-critical regions. The automated segmentation achieved Dice scores of 0.884 (CT) and 0.888 (PET). The CPC and CP models achieved superior performance, with AUCs of 0.869 and 0.869 in the internal validation cohort, respectively, outperforming the single-modality models (P: 0.832; C: 0.809; Clinic: 0.728) and the radiologists (AUC: 0.627; P < 0.05 for all models vs. radiologists, except for the Clinic model). External validation exhibited a similar trend, with AUCs of 0.814, 0.812, 0.763, 0.714, 0.663 and 0.704, respectively. Grad-CAM visualization highlighted tumor-centric regions for early T-stage and peri-tumoral tissue infiltration for advanced T-stage. The fully automated multimodal model, fusing PET and CT with cross multi-head attention, improved T-stage prediction in CRC, surpassing the single-modality models and radiologists and offering a time-efficient tool to aid clinical decision-making.
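
As a rough illustration of the fusion idea, the following sketch (an assumption, not the authors' released code) pairs toy 3D encoders with a cross multi-head attention block in which PET feature tokens attend to CT feature tokens before a clinical vector joins the early-versus-advanced T-stage head; the real model uses 3D ResNet18 backbones, and all layer sizes here are illustrative.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """PET tokens attend to CT tokens via multi-head attention (hypothetical block)."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, pet_tokens, ct_tokens):
        fused, _ = self.attn(query=pet_tokens, key=ct_tokens, value=ct_tokens)
        return self.norm(pet_tokens + fused)          # residual connection

class TStageClassifier(nn.Module):
    """Toy stand-in for the 3D ResNet18 backbones plus fusion and clinical head."""
    def __init__(self, dim=256, n_clinical=8):
        super().__init__()
        self.enc_ct = nn.Sequential(nn.Conv3d(1, dim, 3, stride=2, padding=1),
                                    nn.AdaptiveAvgPool3d((4, 4, 4)))
        self.enc_pet = nn.Sequential(nn.Conv3d(1, dim, 3, stride=2, padding=1),
                                     nn.AdaptiveAvgPool3d((4, 4, 4)))
        self.fusion = CrossModalFusion(dim)
        self.head = nn.Linear(dim + n_clinical, 2)    # early (T1-T2) vs advanced (T3-T4)

    def forward(self, ct, pet, clinical):
        ct_tok = self.enc_ct(ct).flatten(2).transpose(1, 2)    # (B, 64, dim) tokens
        pet_tok = self.enc_pet(pet).flatten(2).transpose(1, 2)
        fused = self.fusion(pet_tok, ct_tok).mean(dim=1)       # pooled fused feature
        return self.head(torch.cat([fused, clinical], dim=1))

logits = TStageClassifier()(torch.randn(2, 1, 64, 64, 64),
                            torch.randn(2, 1, 64, 64, 64),
                            torch.randn(2, 8))
```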

Evaluating the impact of view position in X-ray imaging for the classification of lung diseases.

Hage Chehade A, Abdallah N, Marion JM, Oueidat M, Chauvet P

PubMed | Jul 28 2025
Clinical information associated with chest X-ray images, such as view position, patient age and gender, plays a crucial role in image interpretation, as it influences the visibility of anatomical structures and pathologies. However, most classification models using the ChestX-ray14 dataset relied solely on image data, disregarding the impact of these clinical variables. This study aims to investigate which clinical variable affects image characteristics and to assess its impact on classification performance. To explore the relationships between clinical variables and image characteristics, unsupervised clustering was applied to group images based on their similarities. A statistical analysis was then conducted on each cluster to examine its clinical composition by analyzing the distribution of age, gender, and view position. An attention-based CNN model was developed separately for each value of the clinical variable with the greatest influence on image characteristics to assess its impact on lung disease classification. The analysis identified view position as the most influential variable affecting image characteristics. Accounting for this, the proposed approach achieved a weighted area under the curve (AUC) of 0.8176 for pneumonia classification, surpassing the base model (which does not consider view position) by 1.65% and outperforming previous studies by 6.76%. Furthermore, it demonstrated improved performance across all 14 diseases in the ChestX-ray14 dataset. The findings highlight the importance of considering view position when developing classification models for chest X-ray analysis. Accounting for this characteristic allows for more precise disease identification, demonstrating potential for broader clinical application in lung disease evaluation.
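
As a minimal sketch of the view-position-aware evaluation, the helper below (names and weighting are illustrative, not taken from the paper) scores each view-position subset, e.g. PA versus AP, with its own model's predictions and reports a size-weighted AUC.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def view_weighted_auc(y_true, y_prob, view_position):
    """Size-weighted AUC across view positions (e.g. 'PA', 'AP'), assuming y_prob
    holds the prediction made by the model dedicated to each image's view."""
    aucs, weights = [], []
    for view in np.unique(view_position):
        mask = view_position == view
        aucs.append(roc_auc_score(y_true[mask], y_prob[mask]))
        weights.append(mask.sum())
    return float(np.average(aucs, weights=weights))
```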

Rapid vessel segmentation and reconstruction of head and neck angiograms from MR vessel wall images.

Zhang J, Wang W, Dong J, Yang X, Bai S, Tian J, Li B, Li X, Zhang J, Wu H, Zeng X, Ye Y, Ding S, Wan J, Wu K, Mao Y, Li C, Zhang N, Xu J, Dai Y, Shi F, Sun B, Zhou Y, Zhao H

PubMed | Jul 28 2025
Three-dimensional magnetic resonance vessel wall imaging (3D MR-VWI) is critical for characterizing cerebrovascular pathologies, yet its clinical adoption is hindered by labor-intensive postprocessing. We developed VWI Assistant, a multi-sequence integrated deep learning platform trained on multicenter data (study cohorts of 1981 patients and their imaging datasets) to automate artery segmentation and reconstruction. The framework demonstrated robust performance across diverse patient populations, imaging protocols, and scanner manufacturers, achieving a 92.9% qualified rate comparable to expert manual delineation. VWI Assistant reduced processing time by over 90% (10-12 min per case) compared to manual methods (p < 0.001) and improved inter- and intra-reader agreement. Real-world deployment (n = 1099 patients) demonstrated rapid clinical adoption, with utilization rates increasing from 10.8% to 100.0% within 12 months. By streamlining 3D MR-VWI workflows, VWI Assistant addresses scalability challenges in vascular imaging, offering a practical tool for routine use and large-scale research that significantly improves workflow efficiency while reducing labor and time costs.
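
One plausible way to realize the multi-sequence input, assumed here since the abstract does not detail the architecture, is to stack co-registered VWI sequences as channels of a 3D segmentation network that outputs a per-voxel artery mask for reconstruction; the tiny network below is only a placeholder for that idea.

```python
import torch
import torch.nn as nn

class TinyVesselSegNet(nn.Module):
    """Placeholder 3D segmentation net taking stacked MR-VWI sequences as channels."""
    def __init__(self, n_sequences=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(n_sequences, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 1),                      # per-voxel vessel logit
        )

    def forward(self, x):                             # x: (B, n_sequences, D, H, W)
        return torch.sigmoid(self.net(x))

volumes = torch.randn(1, 3, 32, 64, 64)               # three co-registered sequences
artery_mask = TinyVesselSegNet()(volumes) > 0.5       # binary mask for 3D reconstruction
```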

Harnessing infrared thermography and multi-convolutional neural networks for early breast cancer detection.

Attallah O

PubMed | Jul 28 2025
Breast cancer is a relatively common carcinoma among women worldwide and remains a considerable public health concern. Consequently, the prompt identification of cancer is crucial, as research indicates that 96% of cancers are treatable if diagnosed prior to metastasis. Despite being considered the gold standard for breast cancer evaluation, conventional mammography possesses inherent drawbacks, including accessibility issues, especially in rural regions, and discomfort associated with the procedure. Therefore, there has been a surge in interest in non-invasive, radiation-free alternative diagnostic techniques, such as thermal imaging (thermography). Thermography employs infrared thermal sensors to capture and assess temperature maps of human breasts for the identification of potential tumours based on areas of thermal irregularity. This study proposes an advanced computer-aided diagnosis (CAD) system called Thermo-CAD for early breast cancer detection using thermal imaging, aimed at assisting radiologists. The CAD system employs a variety of deep learning techniques, specifically incorporating multiple convolutional neural networks (CNNs), to enhance diagnostic accuracy and reliability. To effectively integrate multiple deep features and diminish the dimensionality of the features derived from each CNN, feature transformation and selection methods, including non-negative matrix factorization and Relief-F, are used, leading to a reduction in classification complexity. The Thermo-CAD system is assessed utilising two datasets: the DMR-IR (Database for Mastology Research Infrared Images), for distinguishing between normal and abnormal breast tissues, and a novel thermography dataset for distinguishing abnormal instances as benign or malignant. Thermo-CAD proved to be an outstanding CAD system for thermographic breast cancer detection, attaining 100% accuracy on the DMR-IR dataset (normal versus abnormal breast tissue) using the CSVM and MGSVM classifiers, and lower accuracy using the LSVM and QSVM classifiers. However, it showed a lower ability to distinguish benign from malignant cases (second dataset), achieving an accuracy of 79.3% using CSVM. It nevertheless remains a promising tool for early-stage cancer detection, especially in resource-constrained environments.
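
A compact sketch of the fusion stage under stated assumptions: deep features from several CNNs are concatenated, reduced with non-negative matrix factorization, and classified with a cubic-kernel SVM (CSVM is read here as a cubic SVM); Relief-F and the exact feature dimensions are omitted, and the component count is illustrative.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def fuse_and_classify(cnn_feature_blocks, labels):
    """Concatenate multi-CNN deep features, reduce with NMF, classify with a cubic SVM."""
    X = np.hstack(cnn_feature_blocks)                 # one block of features per CNN
    X = X - X.min()                                   # NMF requires non-negative input
    clf = make_pipeline(NMF(n_components=32, max_iter=500),
                        SVC(kernel="poly", degree=3, probability=True))
    clf.fit(X, labels)
    return clf
```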

Radiomics with Machine Learning Improves the Prediction of Microscopic Peritumoral Small Cancer Foci and Early Recurrence in Hepatocellular Carcinoma.

Zou W, Gu M, Chen H, He R, Zhao X, Jia N, Wang P, Liu W

PubMed | Jul 28 2025
This study aimed to develop an interpretable machine learning model using magnetic resonance imaging (MRI) radiomics features to predict preoperative microscopic peritumoral small cancer foci (MSF) and explore its relationship with early recurrence in hepatocellular carcinoma (HCC) patients. A total of 1049 patients from three hospitals were divided into a training set (Hospital 1: 614 cases), a test set (Hospital 2: 248 cases), and a validation set (Hospital 3: 187 cases). Independent risk factors from clinical and MRI features were identified using univariate and multivariate logistic regression to build a clinicoradiological model. MRI radiomics features were then selected using methods such as cross-validated least absolute shrinkage and selection operator (LassoCV) and modeled with various machine learning algorithms, with the best-performing model chosen as the radiomics model. The clinical and radiomics features were combined to form a fusion model. Model performance was evaluated by comparing receiver operating characteristic (ROC) curves, area under the curve (AUC) values, calibration curves, and decision curve analysis (DCA) curves. Net reclassification improvement (NRI) and integrated discrimination improvement (IDI) values assessed improvements in predictive efficacy. The model's prognostic value was verified using Kaplan-Meier analysis. SHapley Additive exPlanations (SHAP) was used to interpret how the model makes predictions. Three models were developed: Clinical Radiology, XGBoost, and Clinical XGBoost. XGBoost was selected as the final model for predicting MSF, with AUCs of 0.841, 0.835, and 0.817 in the training, test, and validation sets, respectively. These results were comparable to the Clinical XGBoost model (0.856, 0.826, 0.837) and significantly better than the Clinical Radiology model (0.688, 0.561, 0.613). Additionally, the XGBoost model effectively predicted early recurrence in HCC patients. This study successfully developed an interpretable XGBoost machine learning model based on MRI radiomics features to predict preoperative MSF and early recurrence in HCC patients.
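
A minimal sketch of the radiomics pipeline as described, with assumed hyperparameters: LassoCV-driven feature selection, an XGBoost classifier for MSF, and SHAP values for interpretation.

```python
import shap
import xgboost as xgb
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV

def train_msf_model(X, y):
    """Select radiomics features with LassoCV, fit XGBoost, and explain it with SHAP."""
    selector = SelectFromModel(LassoCV(cv=5)).fit(X, y)
    X_sel = selector.transform(X)
    model = xgb.XGBClassifier(n_estimators=300, max_depth=3,
                              learning_rate=0.05, eval_metric="auc")
    model.fit(X_sel, y)
    shap_values = shap.TreeExplainer(model).shap_values(X_sel)   # per-feature contributions
    return model, selector, shap_values
```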

Continual learning in medical image analysis: A comprehensive review of recent advancements and future prospects.

Kumari P, Chauhan J, Bozorgpour A, Huang B, Azad R, Merhof D

PubMed | Jul 28 2025
Medical image analysis has witnessed remarkable advancements, even surpassing human-level performance in recent years, driven by the rapid development of advanced deep-learning algorithms. However, when the inference dataset differs even slightly from what the model has seen during one-time training, model performance is greatly compromised. This situation requires restarting the training process using both the old and the new data, which is computationally costly, does not align with the human learning process, and imposes storage constraints and privacy concerns. Alternatively, continual learning has emerged as a crucial approach for developing unified and sustainable deep models that deal with new classes, new tasks, and the drifting nature of data in non-stationary environments across various application areas. Continual learning techniques enable models to adapt and accumulate knowledge over time, which is essential for maintaining performance on evolving datasets and novel tasks. Owing to its popularity and promising performance, continual learning is an active and emerging research topic in the medical field and hence demands a survey and taxonomy to clarify the current research landscape in medical image analysis. This systematic review paper provides a comprehensive overview of the state of the art in continual learning techniques applied to medical image analysis. We present an extensive survey of existing research, covering topics including catastrophic forgetting, data drifts, stability, and plasticity requirements. Further, an in-depth discussion of key components of a continual learning framework, such as continual learning scenarios, techniques, evaluation schemes, and metrics, is provided. Continual learning techniques encompass various categories, including rehearsal, regularization, architectural, and hybrid strategies. We assess the popularity and applicability of continual learning categories in various medical sub-fields like radiology and histopathology. Our exploration considers unique challenges in the medical domain, including costly data annotation, temporal drift, and the crucial need for benchmarking datasets to ensure consistent model evaluation. The paper also addresses current challenges and looks ahead to potential future research directions.
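
As a concrete example of one surveyed category, the sketch below implements a simple rehearsal (experience replay) buffer with reservoir sampling, so that past exemplars can be mixed into each new-task batch to limit catastrophic forgetting; the capacity and sampling scheme are illustrative choices, not prescriptions from the review.

```python
import random

class RehearsalBuffer:
    """Fixed-capacity exemplar memory for rehearsal-based continual learning."""
    def __init__(self, capacity=500):
        self.capacity, self.buffer, self.seen = capacity, [], 0

    def add(self, sample):
        # Reservoir sampling keeps a uniform random subset of everything seen so far.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            slot = random.randrange(self.seen)
            if slot < self.capacity:
                self.buffer[slot] = sample

    def replay(self, k):
        # Draw old exemplars to mix into the current task's training batch.
        return random.sample(self.buffer, min(k, len(self.buffer)))
```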

Multi-Attention Stacked Ensemble for Lung Cancer Detection in CT Scans

Uzzal Saha, Surya Prakash

arXiv preprint | Jul 27 2025
In this work, we address the challenge of binary lung nodule classification (benign vs. malignant) using CT images by proposing a multi-level attention stacked ensemble of deep neural networks. Three pretrained backbones (EfficientNet V2 S, MobileViT XXS, and DenseNet201) are each adapted with a custom classification head tailored to 96 x 96 pixel inputs. A two-stage attention mechanism learns both model-wise and class-wise importance scores from concatenated logits, and a lightweight meta-learner refines the final prediction. To mitigate class imbalance and improve generalization, we employ dynamic focal loss with empirically calculated class weights, MixUp augmentation during training, and test-time augmentation at inference. Experiments on the LIDC-IDRI dataset demonstrate exceptional performance, achieving 98.09% accuracy and 0.9961 AUC, representing a 35% reduction in error rate compared to state-of-the-art methods. The model exhibits balanced performance across sensitivity (98.73%) and specificity (98.96%), with particularly strong results on challenging cases where radiologist disagreement was high. Statistical significance testing confirms the robustness of these improvements across multiple experimental runs. Our approach can serve as a robust, automated aid for radiologists in lung cancer screening.
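
A hedged sketch of the two-stage attention fusion over concatenated logits with a lightweight meta-learner; layer sizes are assumptions, and the dynamic focal loss, MixUp, and test-time augmentation are omitted for brevity.

```python
import torch
import torch.nn as nn

class AttentionStackedEnsemble(nn.Module):
    """Learn model-wise and class-wise attention over backbone logits, then meta-classify."""
    def __init__(self, n_models=3, n_classes=2):
        super().__init__()
        in_dim = n_models * n_classes
        self.model_attn = nn.Sequential(nn.Linear(in_dim, n_models), nn.Softmax(dim=1))
        self.class_attn = nn.Sequential(nn.Linear(in_dim, n_classes), nn.Softmax(dim=1))
        self.meta = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, n_classes))

    def forward(self, logits):                        # logits: (B, n_models, n_classes)
        flat = logits.flatten(1)
        w_model = self.model_attn(flat).unsqueeze(-1) # (B, n_models, 1)
        w_class = self.class_attn(flat).unsqueeze(1)  # (B, 1, n_classes)
        weighted = logits * w_model * w_class         # doubly re-weighted logits
        return self.meta(weighted.flatten(1))

final_logits = AttentionStackedEnsemble()(torch.randn(4, 3, 2))   # logits from 3 backbones
```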

Quantification of hepatic steatosis on post-contrast computed tomography scans using artificial intelligence tools.

Derstine BA, Holcombe SA, Chen VL, Pai MP, Sullivan JA, Wang SC, Su GL

PubMed | Jul 26 2025
Early detection of steatotic liver disease (SLD) is critically important. In clinical practice, hepatic steatosis is frequently diagnosed using computed tomography (CT) performed for unrelated clinical indications. An equation for estimating magnetic resonance proton density fat fraction (MR-PDFF) from liver attenuation on non-contrast CT exists, but no equivalent equation exists for post-contrast CT. We sought to (1) determine whether an automated workflow can accurately measure liver attenuation, (2) validate previously identified optimal thresholds for liver or liver-spleen attenuation in post-contrast studies, and (3) develop a method for estimating MR-PDFF (FF) on post-contrast CT. The fully automated TotalSegmentator 'total' machine learning model was used to segment the 3D liver and spleen from non-contrast and post-contrast CT scans. Mean attenuation was extracted from liver (L) and spleen (S) volumes and from manually placed regions of interest (ROIs) in multi-phase CT scans of two cohorts: derivation (n = 1740) and external validation (n = 1044). Non-linear regression was used to determine the optimal coefficients for three phase-specific (arterial, venous, delayed) increasing exponential decay equations relating post-contrast L to non-contrast L. MR-PDFF was estimated from non-contrast CT and used as the reference standard. Mean attenuation from manual ROIs and from automated volumes was nearly perfectly correlated for both liver and spleen (r > .96, p < .001). For moderate-to-severe steatosis (L < 40 HU), liver attenuation (L) alone was a better classifier than either the liver-spleen difference (L-S) or ratio (L/S) on post-contrast CTs. Fat fraction calculated using a corrected post-contrast liver attenuation measure agreed with non-contrast FF > 15% in both the derivation and external validation cohorts, with AUROC between 0.92 and 0.97 on the arterial, venous, and delayed phases. Automated volumetric mean attenuation of the liver and spleen can be used instead of manually placed ROIs for liver fat assessment. Liver attenuation alone in post-contrast phases can be used to assess the presence of moderate-to-severe hepatic steatosis. Correction equations for liver attenuation on post-contrast CT scans enable reasonable quantification of liver steatosis, providing potential opportunities for utilizing clinical scans to develop large-scale screening or studies in SLD.
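
A rough sketch of the phase-specific correction step: fitting an increasing exponential-decay curve that maps post-contrast liver attenuation back to a non-contrast-equivalent value, from which FF can then be estimated with the existing non-contrast equation. The functional form and starting values are assumptions, not the published coefficients.

```python
import numpy as np
from scipy.optimize import curve_fit

def rising_exp(liver_hu_post, a, b, c):
    """Increasing exponential-decay form: approaches 'a' as post-contrast HU grows."""
    return a - b * np.exp(-c * liver_hu_post)

def fit_phase_correction(liver_hu_post, liver_hu_noncontrast):
    """Fit one (a, b, c) triplet per contrast phase (arterial, venous, delayed)."""
    params, _ = curve_fit(rising_exp, liver_hu_post, liver_hu_noncontrast,
                          p0=(80.0, 60.0, 0.01), maxfev=10000)
    return params

# Usage sketch: corrected_hu = rising_exp(venous_hu, *fit_phase_correction(venous_hu, nc_hu))
```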

AI-driven preclinical disease risk assessment using imaging in UK biobank.

Seletkov D, Starck S, Mueller TT, Zhang Y, Steinhelfer L, Rueckert D, Braren R

PubMed | Jul 26 2025
Identifying disease risk and detecting disease before clinical symptoms appear are essential for early intervention and improving patient outcomes. In this context, the integration of medical imaging in a clinical workflow offers a unique advantage by capturing detailed structural and functional information. Unlike non-image data, such as lifestyle, sociodemographic, or prior medical conditions, which often rely on self-reported information susceptible to recall biases and subjective perceptions, imaging offers more objective and reliable insights. Although the use of medical imaging in artificial intelligence (AI)-driven risk assessment is growing, its full potential remains underutilized. In this work, we demonstrate how imaging can be integrated into routine screening workflows, in particular by taking advantage of neck-to-knee whole-body magnetic resonance imaging (MRI) data available in the large prospective study UK Biobank. Our analysis focuses on three-year risk assessment for a broad spectrum of diseases, including cardiovascular, digestive, metabolic, inflammatory, degenerative, and oncologic conditions. We evaluate AI-based pipelines for processing whole-body MRI and demonstrate that using image-derived radiomics features provides the best prediction performance, interpretability, and integration capability with non-image data.
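
A minimal sketch, under stated assumptions, of how image-derived radiomics features could feed a per-disease three-year risk classifier; gradient boosting and five-fold cross-validation are illustrative choices rather than the paper's exact pipelines.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate_disease_risk(radiomics_features, incident_labels):
    """Cross-validated AUC for predicting three-year incidence of one disease
    from whole-body MRI radiomics features (shape, intensity, texture)."""
    model = make_pipeline(StandardScaler(), GradientBoostingClassifier())
    return cross_val_score(model, radiomics_features, incident_labels,
                           scoring="roc_auc", cv=5)
```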

A triple pronged approach for ulcerative colitis severity classification using multimodal, meta, and transformer based learning.

Ahmed MN, Neogi D, Kabir MR, Rahman S, Momen S, Mohammed N

PubMed | Jul 26 2025
Ulcerative colitis (UC) is a chronic inflammatory disorder necessitating precise severity stratification to facilitate optimal therapeutic interventions. This study harnesses a triple-pronged deep learning methodology, comprising multimodal inference pipelines that eliminate domain-specific training, few-shot meta-learning, and Vision Transformer (ViT)-based ensembling, to classify UC severity within the HyperKvasir dataset. We systematically evaluate multiple vision transformer architectures, finding that a Swin-Base model achieves an accuracy of 90%, while a soft-voting ensemble of diverse ViT backbones boosts performance to 93%. In parallel, we leverage multimodal pre-trained frameworks (e.g., CLIP, BLIP, FLAVA) integrated with conventional machine learning algorithms, yielding an accuracy of 83%. To address limited annotated data, we deploy few-shot meta-learning approaches (e.g., Matching Networks), attaining 83% accuracy in a 5-shot setting. Furthermore, interpretability is enhanced via SHapley Additive exPlanations (SHAP), which explain both local and global model behavior, thereby fostering clinical trust in the model's inferences. These findings underscore the potential of contemporary representation learning and ensemble strategies for robust UC severity classification, highlighting the pivotal role of model transparency in medical image analysis.
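
A small sketch of the soft-voting step, assuming each fine-tuned ViT backbone returns class logits for the same batch; model loading and preprocessing are omitted.

```python
import torch

@torch.no_grad()
def soft_vote(models, images):
    """Average softmax probabilities across ViT backbones and take the argmax severity grade."""
    probs = torch.stack([torch.softmax(m(images), dim=1) for m in models])
    return probs.mean(dim=0).argmax(dim=1)
```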