
Machine learning of whole-brain resting-state fMRI signatures for individualized grading of frontal gliomas.

Hu Y, Cao X, Chen H, Geng D, Lv K

PubMed · Aug 4, 2025
Accurate preoperative grading of gliomas is critical for therapeutic planning and prognostic evaluation. We developed a noninvasive machine learning model leveraging whole-brain resting-state functional magnetic resonance imaging (rs-fMRI) biomarkers to discriminate between high-grade gliomas (HGGs) and low-grade gliomas (LGGs) in the frontal lobe. This retrospective study included 138 patients (78 LGGs, 60 HGGs) with left frontal gliomas. A total of 7134 features were extracted from the mean amplitude of low-frequency fluctuation (mALFF), mean fractional ALFF, mean percentage amplitude of fluctuation (mPerAF), and mean regional homogeneity (mReHo) maps and from the resting-state functional connectivity (RSFC) matrix. Twelve predictive features were selected through the Mann-Whitney U test, correlation analysis, and the least absolute shrinkage and selection operator (LASSO) method. Patients were randomly split into training and testing datasets at a 7:3 ratio with stratification. Logistic regression, random forest, support vector machine (SVM), and adaptive boosting algorithms were used to establish models. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. The 12 selected features comprised 7 RSFC features, 4 mPerAF features, and 1 mReHo feature. Among the models built on these features, the SVM performed best: accuracy in the training and testing datasets was 0.957 and 0.727, respectively, and the AUC was 0.972 and 0.799, respectively. Our whole-brain rs-fMRI radiomics approach provides an objective tool for preoperative glioma stratification. The biological interpretability of the selected features reflects distinct neuroplasticity patterns between LGGs and HGGs, advancing understanding of glioma-network interactions.
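
As a rough sketch of the selection-then-classification pipeline described above (a Mann-Whitney U filter followed by LASSO and an SVM), using simulated data; the correlation-analysis step and the paper's exact hyperparameters are omitted, and all names and values here are illustrative:

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(138, 7134))   # 138 patients x 7134 rs-fMRI features
y = rng.integers(0, 2, size=138)   # 0 = LGG, 1 = HGG (random stand-in labels)

# Step 1: univariate Mann-Whitney U filter
pvals = np.array([mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue
                  for j in range(X.shape[1])])
keep = np.flatnonzero(pvals < 0.05)

# Step 2: LASSO shrinks the filtered set toward a compact signature
lasso = LassoCV(cv=5).fit(StandardScaler().fit_transform(X[:, keep]), y)
selected = keep[np.abs(lasso.coef_) > 1e-6]
if selected.size == 0:             # fallback for this all-noise demo
    selected = keep[:12]

# Step 3: stratified 7:3 split and an RBF-SVM on the selected features
X_tr, X_te, y_tr, y_te = train_test_split(
    X[:, selected], y, test_size=0.3, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```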

Enhanced detection of ovarian cancer using AI-optimized 3D CNNs for PET/CT scan analysis.

Sadeghi MH, Sina S, Faghihi R, Alavi M, Giammarile F, Omidi H

PubMed · Aug 4, 2025
This study investigates how deep learning (DL) can enhance ovarian cancer diagnosis and staging using large imaging datasets. Specifically, we compare six conventional convolutional neural network (CNN) architectures (ResNet, DenseNet, GoogLeNet, U-Net, VGG, and AlexNet) with OCDA-Net, an enhanced model designed for [18F]FDG PET image analysis. OCDA-Net, which builds on the ResNet architecture, was compared against these baselines using a random split of training (80%), validation (10%), and test (10%) images. Trained over 100 epochs, OCDA-Net achieved a diagnostic classification accuracy of 92% and a staging accuracy of 94%, supported by robust precision, recall, and F-measure metrics. Grad-CAM++ heatmaps confirmed that the network attends to hypermetabolic lesions, supporting clinical interpretability. Our findings show that OCDA-Net outperforms the existing CNN models and has strong potential to transform ovarian cancer diagnosis and staging. The study suggests that implementing these DL models in clinical practice could ultimately improve patient prognoses. Future research should expand datasets, enhance model interpretability, and validate these models in clinical settings.
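
For readers unfamiliar with the evaluation protocol, a toy PyTorch sketch of a 3D CNN trained under the same 80%/10%/10% split follows; the network is a minimal stand-in, not OCDA-Net, whose exact architecture the abstract does not specify:

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, random_split

class TinyPET3DNet(nn.Module):                    # hypothetical stand-in model
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Synthetic PET volumes: 100 scans of 32^3 voxels with random labels
data = TensorDataset(torch.randn(100, 1, 32, 32, 32),
                     torch.randint(0, 2, (100,)))
train, val, test = random_split(data, [80, 10, 10])   # the 80/10/10 split

model = TinyPET3DNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(3):                            # the paper trains 100 epochs
    for xb, yb in DataLoader(train, batch_size=8, shuffle=True):
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
```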

Multimodal deep learning model for prognostic prediction in cervical cancer receiving definitive radiotherapy: a multi-center study.

Wang W, Yang G, Liu Y, Wei L, Xu X, Zhang C, Pan Z, Liang Y, Yang B, Qiu J, Zhang F, Hou X, Hu K, Liang X

PubMed · Aug 4, 2025
For patients with locally advanced cervical cancer (LACC), precise survival prediction models could guide personalized treatment. We developed and validated CerviPro, a deep learning-based multimodal prognostic model, to predict disease-free survival (DFS) in 1018 patients with LACC receiving definitive radiotherapy. The model integrates pre- and post-treatment CT imaging, handcrafted radiomic features, and clinical variables. CerviPro demonstrated robust predictive performance in the internal validation cohort (C-index 0.81) and in the external validation cohorts (C-indices 0.70 and 0.66), significantly stratifying patients into distinct high- and low-risk DFS groups. Multimodal feature fusion consistently outperformed models based on single feature categories (clinical data, imaging, or radiomics alone), highlighting the synergistic value of integrating diverse data sources. By integrating multimodal data to predict DFS and recurrence risk, CerviPro provides a clinically valuable prognostic tool for LACC, offering the potential to guide personalized treatment strategies.
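
A small sketch of the concordance-index (C-index) evaluation used to report these results, with simulated survival data; `lifelines` is one common implementation choice, and all values below are made up:

```python
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 200
dfs_months = rng.exponential(36.0, n)   # hypothetical DFS times
event = rng.integers(0, 2, n)           # 1 = recurrence/progression observed
risk = rng.normal(size=n)               # model output: higher = riskier

# concordance_index expects higher scores for *longer* survival, so negate
cindex = concordance_index(dfs_months, -risk, event_observed=event)
print(f"C-index: {cindex:.2f}")

# Risk stratification: median split into high-/low-risk DFS groups
groups = np.where(risk > np.median(risk), "high", "low")
```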

Retrospective evaluation of interval breast cancer screening mammograms by radiologists and AI.

Subelack J, Morant R, Blum M, Gräwingholt A, Vogel J, Geissler A, Ehlig D

PubMed · Aug 4, 2025
To determine whether an AI system can identify breast cancer risk in interval breast cancer (IBC) screening mammograms, IBC screening mammograms from a Swiss screening program were retrospectively analyzed by radiologists and an AI system. Radiologists determined whether each IBC mammogram showed visible signs of breast cancer (potentially missed IBCs) or not (IBCs without retrospective abnormalities). The AI system provided a case score and a prognostic risk category per mammogram. 119 IBC cases (mean age 57.3 years, SD 5.4) had complete retrospective evaluations by both radiologists and the AI system. 82 (68.9%) were classified as IBCs without retrospective abnormalities and 37 (31.1%) as potentially missed IBCs. 46.2% of all IBCs received a case score ≥ 25, 25.2% a score ≥ 50, and 13.4% a score ≥ 75. Of the 25.2% of IBCs scoring ≥ 50 (vs. 13.4% in a population without breast cancer), 45.2% had not been discussed at a consensus conference, corresponding to 11.4% of all IBC cases. Potentially missed IBCs received significantly higher case scores and risk classifications than IBCs without retrospective abnormalities (mean case score: 54.1 vs. 23.1; high risk: 48.7% vs. 14.7%; p < 0.05). 13.4% of the IBCs without retrospective abnormalities received a case score ≥ 50, of which 62.5% had not been discussed at a consensus conference. An AI system can thus identify IBC screening mammograms with a higher risk for breast cancer, particularly among potentially missed IBCs but also in some IBCs without retrospective abnormalities where radiologists saw nothing, indicating its potential to improve mammography screening quality.
Question: AI presents a promising opportunity to enhance breast cancer screening in general, but evidence is missing regarding its ability to reduce interval breast cancers.
Findings: The AI system detected a high risk of breast cancer in most interval breast cancer screening mammograms in which radiologists retrospectively detected abnormalities.
Clinical relevance: Using an AI system in mammography screening programs can identify breast cancer risk in many interval breast cancer screening mammograms and thus potentially reduce the number of interval breast cancers.
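
The core statistical comparison reported here (a Mann-Whitney U test on AI case scores between the two IBC groups, plus the threshold shares) can be reproduced in a few lines; the sketch below uses simulated scores matching the published group means and sizes, not the study data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
missed = rng.normal(54.1, 20, 37).clip(0, 100)   # potentially missed IBCs
no_abn = rng.normal(23.1, 20, 82).clip(0, 100)   # no retrospective findings

stat, p = mannwhitneyu(missed, no_abn, alternative="greater")
print(f"Mann-Whitney U p = {p:.4f}")

all_scores = np.concatenate([missed, no_abn])
for t in (25, 50, 75):                           # case-score thresholds
    print(f"share with case score >= {t}: {(all_scores >= t).mean():.1%}")
```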

Combined nomogram for differentiating adrenal pheochromocytoma from large-diameter lipid-poor adenoma using multiphase CT radiomics and clinico-radiological features.

Shan Z, Zhang X, Zhang Y, Wang S, Wang J, Shi X, Li L, Li Z, Yang L, Liu H, Li W, Yang J, Yang L

PubMed · Aug 4, 2025
Adrenal incidentalomas (AIs) are predominantly adrenal adenomas (80%), with a smaller proportion (7%) being pheochromocytomas (PHEO). Adenomas are typically non-functional tumors managed through observation or medication, with some cases requiring surgical removal, which is generally safe. In contrast, PHEO secrete catecholamines, causing severe blood pressure fluctuations, making surgical resection the only treatment option; without adequate preoperative preparation, perioperative mortality risk is significantly elevated. A specialized adrenal CT scanning protocol is recommended to differentiate between these tumor types. However, distinguishing patients with similar washout characteristics remains challenging, and concerns about efficiency, cost, and risk limit its feasibility. Recently, radiomics has demonstrated efficacy in identifying molecular-level differences in tumor cells, including in adrenal tumors. This study develops a combined nomogram model, integrating key clinico-radiological and radiomic features from multiphase CT, to improve accuracy in distinguishing PHEO from large-diameter lipid-poor adrenal adenoma (LP-AA). A retrospective analysis was conducted on 202 patients with pathologically confirmed adrenal PHEO or large-diameter LP-AA from three tertiary care centers. Key clinico-radiological and radiomics features were selected to construct three models: a clinico-radiological model, a radiomics model, and a combined nomogram model for predicting these two tumor types. Model performance and robustness were evaluated using external validation, calibration curve analysis, machine learning techniques, and DeLong's test. Additionally, the Hosmer-Lemeshow test, decision curve analysis, and five-fold cross-validation were employed to assess the clinical translational potential of the combined nomogram model. All models demonstrated high diagnostic performance, with AUC values exceeding 0.8 across all cohorts, confirming their reliability. The combined nomogram model exhibited the highest diagnostic accuracy, with AUC values of 0.994, 0.979, and 0.945 for the training, validation, and external test cohorts, respectively. Notably, the unenhanced combined nomogram model was not significantly inferior to the three-phase combined nomogram model (p > 0.05 in the validation and test cohorts; p = 0.049 in the training cohort). The combined nomogram model reliably distinguishes between PHEO and LP-AA, shows strong clinical translational potential, and may reduce the need for contrast-enhanced CT scans.
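
A combined nomogram of this kind is, at its core, a logistic regression over clinico-radiological variables plus a radiomics score. The sketch below illustrates that structure with simulated inputs; the features, coefficients, and calibration settings are placeholders, not the authors':

```python
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 202
clinical = rng.normal(size=(n, 3))    # e.g. age, size, attenuation (made up)
rad_score = rng.normal(size=(n, 1))   # radiomics signature, e.g. from LASSO
X = np.hstack([clinical, rad_score])
y = rng.integers(0, 2, n)             # 0 = LP-AA, 1 = PHEO (random labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
nomogram = LogisticRegression().fit(X_tr, y_tr)
prob = nomogram.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, prob))

# Calibration-curve points, mirroring the paper's calibration analysis
frac_pos, mean_pred = calibration_curve(y_te, prob, n_bins=5)
```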

An integrated predictive model for Alzheimer's disease progression from cognitively normal subjects using generated MRI and interpretable AI.

Aghaei A, Moghaddam ME

PubMed · Aug 4, 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that begins with subtle cognitive changes and advances to severe impairment. Early diagnosis is crucial for effective intervention and management. In this study, we propose an integrated framework that leverages ensemble transfer learning, generative modeling, and automatic ROI extraction to predict the progression of Alzheimer's disease in cognitively normal (CN) subjects. Using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, we employ a three-stage process: (1) estimating the probability of transitioning from CN to mild cognitive impairment (MCI) using ensemble transfer learning, (2) generating future MRI images with a Vision Transformer-based Generative Adversarial Network (ViT-GAN) to simulate disease progression two years ahead, and (3) predicting AD using a 3D convolutional neural network (CNN), with probabilities calibrated by isotonic regression and critical regions of interest (ROIs) interpreted with Gradient-weighted Class Activation Mapping (Grad-CAM). The method itself is general and could simulate brain changes three or more years ahead given sufficient data; with the data available here, the training phase considered changes after two years. Our approach addresses the challenge of limited longitudinal data by creating high-quality synthetic images and improves model transparency by identifying key brain regions involved in disease progression. The proposed method achieves an accuracy of 0.85 and an F1-score of 0.86 for CN-to-AD prediction up to 10 years ahead, offering a potential tool for early diagnosis and personalized intervention strategies in Alzheimer's disease.
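
Stage (3)'s probability-calibration step can be illustrated compactly: isotonic regression learns a monotone mapping from raw network probabilities to calibrated ones on a held-out split. The sketch below uses synthetic, deliberately miscalibrated outputs rather than real CNN predictions:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(0)
raw_prob = rng.uniform(0, 1, 500)                        # uncalibrated outputs
y = (rng.uniform(0, 1, 500) < raw_prob**2).astype(int)   # miscalibrated truth

cal_idx, test_idx = np.arange(250), np.arange(250, 500)  # calibration split
iso = IsotonicRegression(out_of_bounds="clip").fit(raw_prob[cal_idx], y[cal_idx])
cal_prob = iso.predict(raw_prob[test_idx])

print("Brier before:", brier_score_loss(y[test_idx], raw_prob[test_idx]))
print("Brier after: ", brier_score_loss(y[test_idx], cal_prob))
```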

AI-Driven Integration of Deep Learning with Lung Imaging, Functional Analysis, and Blood Gas Metrics for Perioperative Hypoxemia Prediction: Progress and Perspectives.

Huang K, Wu C, Fang J, Pi R

PubMed · Aug 4, 2025
This Perspective article explores the transformative role of artificial intelligence (AI) in predicting perioperative hypoxemia through the integration of deep learning (DL) with multimodal clinical data, including lung imaging, pulmonary function tests (PFTs), and arterial blood gas (ABG) analysis. Perioperative hypoxemia, defined as arterial oxygen partial pressure (PaO₂) <60 mmHg or oxygen saturation (SpO₂) <90%, poses significant risks of delayed recovery and organ dysfunction. Traditional diagnostic methods, such as radiological imaging and ABG analysis, often lack integrated predictive accuracy. AI frameworks, particularly convolutional neural networks (CNNs) and hybrid models like TD-CNNLSTM-LungNet, demonstrate exceptional performance in detecting pulmonary inflammation and stratifying hypoxemia risk, achieving up to 96.57% accuracy in pneumonia subtype differentiation and an AUC of 0.96 for postoperative hypoxemia prediction. Multimodal AI systems, such as DeepLung-Predict, unify CT scans, PFTs, and ABG parameters to enhance predictive precision, surpassing conventional methods by 22%. However, challenges persist, including dataset heterogeneity, model interpretability, and clinical workflow integration. Future directions emphasize multicenter validation, explainable AI (XAI) frameworks, and pragmatic trials to ensure equitable and reliable deployment. This AI-driven approach not only optimizes resource allocation but also mitigates financial burdens on healthcare systems by enabling early interventions and reducing ICU admission risks.
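
As a generic illustration of the hybrid CNN-LSTM family mentioned above (e.g. TD-CNNLSTM-LungNet), a time-distributed CNN encodes each image or slice and an LSTM aggregates the sequence; the PyTorch sketch below is a schematic stand-in, not the published architecture:

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # CNN applied per time step ("time-distributed") to each image/slice
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                  # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])       # classify from the last time step

logits = CNNLSTMClassifier()(torch.randn(4, 8, 1, 64, 64))  # (4, 2) scores
```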

Natural language processing evaluation of trends in cervical cancer incidence in radiology reports: A ten-year survey.

López-Úbeda P, Martín-Noguerol T, Luna A

PubMed · Aug 4, 2025
Cervical cancer, commonly associated with human papillomavirus (HPV) infection, remains the fourth most common cancer in women globally. This study aims to develop and evaluate a Natural Language Processing (NLP) system to identify and analyze cervical cancer incidence trends from 2013 to 2023 at our institution, focusing on age-specific variations and evaluating the possible impact of HPV vaccination. In this retrospective cohort study, we analyzed unstructured radiology reports collected between 2013 and 2023, comprising 433,207 studies of 250,181 women who underwent CT, MRI, or ultrasound scans of the abdominopelvic region. A rule-based NLP system was developed to extract references to cervical cancer from these reports and validated against a set of 200 manually annotated cases reviewed by an experienced radiologist. The NLP system demonstrated excellent performance, achieving an accuracy of over 99.5%. This high reliability enabled its application in a large-scale population study. Results show that women under 30 maintained a consistently low cervical cancer incidence, likely reflecting early HPV vaccination impact. Incidence in the 30-40 cohort declined until 2020 and then increased slightly, while the 40-60 group exhibited an overall downward trend with fluctuations, suggesting long-term vaccine effects. Incidence in patients over 60 also declined, though with greater variability, possibly due to other risk factors. The developed NLP system effectively identified cervical cancer cases from unstructured radiology reports, facilitating an accurate analysis of the impact of HPV vaccination on cervical cancer prevalence and imaging study requirements. This approach demonstrates the potential of AI and NLP tools in enhancing data accuracy and efficiency in medical epidemiology research. NLP-based approaches can significantly improve the collection and analysis of epidemiological data on cervical cancer, supporting the development of more targeted and personalized prevention strategies, particularly in populations with heterogeneous HPV vaccination coverage.
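
A rule-based extractor of this kind typically combines positive mention patterns with a negation check. The toy sketch below shows the idea; the patterns are ours for illustration, not the authors' validated rule set:

```python
import re

# Positive mention patterns for cervical cancer (illustrative, not exhaustive)
POSITIVE = re.compile(r"\b(cervical (carcinoma|cancer|neoplasm)|"
                      r"carcinoma of the (uterine )?cervix)\b", re.I)
# Crude sentence-level negation cue (a real system would be more careful)
NEGATION = re.compile(r"\b(no|without|negative for|ruled? out)\b", re.I)

def flags_cervical_cancer(report: str) -> bool:
    """Flag a report if any sentence mentions cervical cancer unnegated."""
    for sentence in report.split("."):
        if POSITIVE.search(sentence) and not NEGATION.search(sentence):
            return True
    return False

print(flags_cervical_cancer("Known cervical carcinoma with pelvic nodes."))  # True
print(flags_cervical_cancer("No evidence of cervical cancer recurrence."))   # False
```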

Glioblastoma Overall Survival Prediction With Vision Transformers

Yin Lin, Riccardo Barbieri, Domenico Aquino, Giuseppe Lauria, Marina Grisoli, Elena De Momi, Alberto Redaelli, Simona Ferrante

arXiv preprint · Aug 4, 2025
Glioblastoma is one of the most aggressive and common brain tumors, with a median survival of 10-15 months. Predicting Overall Survival (OS) is critical for personalizing treatment strategies and aligning clinical decisions with patient outcomes. In this study, we propose a novel Artificial Intelligence (AI) approach for OS prediction from Magnetic Resonance Imaging (MRI) images, exploiting Vision Transformers (ViTs) to extract hidden features directly from MRI images and eliminating the need for tumor segmentation. Unlike traditional approaches, our method simplifies the workflow and reduces computational resource requirements. The proposed model was evaluated on the BRATS dataset, reaching an accuracy of 62.5% on the test set, comparable to the top-performing methods. Additionally, it demonstrated balanced performance across precision, recall, and F1 score, outperforming the best model on these metrics. The dataset size limits the generalization of the ViT, which typically requires larger datasets than convolutional neural networks; this limitation in generalization is observed across all the cited studies. This work highlights the applicability of ViTs to downsampled medical imaging tasks and establishes a foundation for OS prediction models that are computationally efficient and do not rely on segmentation.
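
Segmentation-free OS classification with a ViT reduces, in code, to attaching a small classification head to a transformer backbone fed whole (downsampled) MRI slices. The sketch below uses torchvision's ViT-B/16 as a hypothetical stand-in; the paper's backbone, preprocessing, and OS binning are not specified here:

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

# weights=None keeps the demo offline; ImageNet weights could be loaded
# instead via torchvision's ViT_B_16_Weights for transfer learning.
model = vit_b_16(weights=None)
model.heads = nn.Linear(model.hidden_dim, 3)  # e.g. short/medium/long OS bins

# MRI slices resized to 224x224 and replicated across 3 channels (assumed)
x = torch.randn(2, 3, 224, 224)               # dummy batch of slices
logits = model(x)                             # (2, 3) class scores
```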

CT-Based 3D Super-Resolution Radiomics for the Differential Diagnosis of Brucella vs. Tuberculous Spondylitis using Deep Learning.

Wang K, Qi L, Li J, Zhang M, Du H

PubMed · Aug 4, 2025
This study aims to improve the accuracy of distinguishing tuberculous spondylitis (TBS) from Brucella spondylitis (BS) by developing radiomics models using deep learning and CT images enhanced with super-resolution (SR). A total of 94 patients diagnosed with BS or TBS were randomly divided into training (n=65) and validation (n=29) groups at a 7:3 ratio. The training set comprised 40 BS and 25 TBS patients (mean age 58.34 ± 12.53 years); the validation set comprised 17 BS and 12 TBS patients (mean age 58.48 ± 12.29 years). Standard CT images were enhanced using SR, improving spatial resolution and image quality. Lesion regions of interest (ROIs) were manually segmented, and radiomics features were extracted. ResNet18 and ResNet34 were used for deep learning feature extraction and model training. Four multi-layer perceptron (MLP) models were developed: clinical, radiomics (Rad), deep learning (DL), and a combined model. Model performance was assessed using five-fold cross-validation, ROC analysis, and decision curve analysis (DCA). Key clinical and imaging features showed significant differences between TBS and BS (e.g., gender, p=0.0038; parrot-beak appearance, p<0.001; dead bone, p<0.001; deformities of the spinal posterior process, p=0.0044; psoas abscess, p<0.001). The combined model outperformed the others, achieving the highest AUC (0.952), with ResNet34 and SR-enhanced images further boosting performance; sensitivity reached 0.909 and specificity 0.941. DCA confirmed clinical applicability. The integration of SR-enhanced CT imaging and deep learning radiomics appears to improve diagnostic differentiation between BS and TBS. The combined model, especially when using ResNet34 and GAN-based super-resolution, demonstrated better predictive performance. High-resolution imaging may facilitate better lesion delineation and more robust feature extraction. Nevertheless, further validation with larger, multicenter cohorts is needed to confirm generalizability and reduce potential bias from the retrospective design and imaging heterogeneity. This study suggests that integrating deep learning radiomics with super-resolution may improve the differentiation between TBS and BS compared to standard CT imaging; however, prospective multi-center studies are necessary to validate its clinical applicability.
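
The radiomics-extraction step described above is commonly implemented with pyradiomics; a hedged sketch follows, in which the file paths are placeholders and the SR-enhanced CT is assumed to have been produced upstream by the GAN-based super-resolution step:

```python
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()      # shape, first-order, and texture classes

# image: SR-enhanced CT; mask: manually segmented lesion ROI (placeholder paths)
features = extractor.execute("ct_sr.nii.gz", "roi_mask.nii.gz")
radiomic_vector = {k: v for k, v in features.items()
                   if not k.startswith("diagnostics_")}
print(len(radiomic_vector), "radiomics features extracted")
```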