
A dual self-attentive transformer U-Net model for precise pancreatic segmentation and fat fraction estimation.

Shanmugam A, Radhabai PR, Kvn K, Imoize AL

PubMed · Aug 4, 2025
Accurately segmenting the pancreas from abdominal computed tomography (CT) images is crucial for detecting and managing pancreatic diseases, such as diabetes and tumors. Type 2 diabetes and metabolic syndrome are associated with pancreatic fat accumulation, and calculating the fat fraction aids in the investigation of β-cell malfunction and insulin resistance. The most widely used pancreas segmentation techniques are U-shaped networks based on deep convolutional neural networks (DCNNs), which struggle to capture long-range dependencies in an image because they rely on local receptive fields. This research proposes a novel dual self-attentive transformer U-Net (DSTUnet) model for accurate pancreatic segmentation that addresses this problem. The model incorporates dual self-attention Swin transformers on both the encoder and decoder sides to facilitate global context extraction and refine candidate regions. After the pancreas is segmented with the DSTUnet, histogram analysis is used to estimate the fat fraction. The proposed method demonstrated excellent performance on the standard dataset, achieving a Dice similarity coefficient (DSC) of 93.7% and a Hausdorff distance (HD) of 2.7 mm. The average volume of the pancreas was 92.42, and its fat volume fraction (FVF) was 13.37%.
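
The abstract does not spell out the histogram step. As a rough sketch of one common approach, the fat volume fraction can be estimated by counting pancreas voxels whose attenuation falls in a fat-like Hounsfield-unit window; the -190 to -30 HU range and the function below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fat_volume_fraction(ct_hu: np.ndarray, pancreas_mask: np.ndarray,
                        fat_range=(-190.0, -30.0)) -> float:
    """Estimate pancreatic fat volume fraction (%) from a CT volume and a binary mask.

    ct_hu         : 3D array of Hounsfield units
    pancreas_mask : 3D boolean array produced by the segmentation model
    fat_range     : HU interval assumed to represent fat-containing voxels
    """
    pancreas_hu = ct_hu[pancreas_mask.astype(bool)]
    if pancreas_hu.size == 0:
        raise ValueError("Empty pancreas mask")
    is_fat = (pancreas_hu >= fat_range[0]) & (pancreas_hu <= fat_range[1])
    return float(is_fat.mean() * 100.0)  # percentage of pancreas voxels in the fat window
```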

Early prediction of proton therapy dose distributions and DVHs for hepatocellular carcinoma using contour-based CNN models from diagnostic CT and MRI.

Rachi T, Tochinai T

PubMed · Aug 4, 2025
Proton therapy is commonly used for treating hepatocellular carcinoma (HCC); however, its feasibility can be challenging to assess for large tumors or tumors adjacent to critical organs at risk (OARs), and feasibility is typically assessed only after planning computed tomography (CT) acquisition. This study aimed to predict proton dose distributions using diagnostic CT (dCT) and diagnostic MRI (dMRI) with a convolutional neural network (CNN), enabling early treatment feasibility assessments. Dose distributions and dose-volume histograms (DVHs) were calculated for 118 patients with HCC using intensity-modulated proton therapy (IMPT) and passive proton therapy. A CPU-based CNN model was used to predict DVHs and 3D dose distributions from diagnostic images. Prediction accuracy was evaluated using mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and gamma passing rate with a 3 mm/3% criterion. The predicted DVHs and dose distributions showed high agreement with actual values. MAE remained below 3.0%, with passive techniques achieving 1.2-1.8%. MSE was below 0.004 in all cases. PSNR ranged from 24 to 28 dB, and SSIM exceeded 0.94 in most conditions. Gamma passing rates averaged 82-83% for IMPT and 92-93% for passive techniques. The model achieved comparable accuracy whether dMRI or dCT was used. This study demonstrates that early dose distribution prediction from diagnostic imaging is feasible and accurate using a lightweight CNN model. Despite anatomical variability between diagnostic and planning images, this approach provides timely insights into treatment feasibility, potentially supporting insurance pre-authorization, reducing unnecessary imaging, and optimizing clinical workflows for HCC proton therapy.
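
A minimal sketch of how a cumulative dose-volume histogram can be derived from a 3D dose grid and a structure mask, for readers unfamiliar with the DVH targets the CNN predicts; the function and binning scheme are illustrative, not taken from the paper.

```python
import numpy as np

def cumulative_dvh(dose: np.ndarray, mask: np.ndarray, n_bins: int = 200):
    """Cumulative DVH for one structure (e.g., the target or an OAR contour).

    dose : 3D dose grid (Gy, or % of the prescription dose)
    mask : 3D boolean mask of the structure
    Returns (dose_bins, volume_fraction), where volume_fraction[i] is the
    fraction of the structure receiving at least dose_bins[i].
    """
    voxel_doses = dose[mask.astype(bool)]
    if voxel_doses.size == 0:
        raise ValueError("Empty structure mask")
    dose_bins = np.linspace(0.0, voxel_doses.max(), n_bins)
    volume_fraction = np.array([(voxel_doses >= d).mean() for d in dose_bins])
    return dose_bins, volume_fraction
```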

Retrospective evaluation of interval breast cancer screening mammograms by radiologists and AI.

Subelack J, Morant R, Blum M, Gräwingholt A, Vogel J, Geissler A, Ehlig D

PubMed · Aug 4, 2025
To determine whether an AI system can identify breast cancer risk in interval breast cancer (IBC) screening mammograms. IBC screening mammograms from a Swiss screening program were retrospectively analyzed by radiologists and an AI system. Radiologists determined whether the IBC mammogram showed human-visible signs of breast cancer (potentially missed IBCs) or not (IBCs without retrospective abnormalities). The AI system provided a case score and a prognostic risk category per mammogram. 119 IBC cases (mean age 57.3 (5.4)) were available with complete retrospective evaluations by radiologists and the AI system. 82 (68.9%) were classified as IBCs without retrospective abnormalities and 37 (31.1%) as potentially missed IBCs. 46.2% of all IBCs received a case score ≥ 25, 25.2% a score ≥ 50, and 13.4% a score ≥ 75. Of the 25.2% of IBCs with a score ≥ 50 (vs. 13.4% of a no-breast-cancer population), 45.2% had not been discussed during a consensus conference, reflecting 11.4% of all IBC cases. The potentially missed IBCs received significantly higher case scores and risk classifications than IBCs without retrospective abnormalities (case score mean: 54.1 vs. 23.1; high risk: 48.7% vs. 14.7%; p < 0.05). 13.4% of the IBCs without retrospective abnormalities received a case score ≥ 50, of which 62.5% had not been discussed during a consensus conference. An AI system can identify IBC screening mammograms with a higher risk for breast cancer, particularly among potentially missed IBCs but also among some IBCs without retrospective abnormalities where radiologists did not see anything, indicating its ability to improve mammography screening quality. Question: AI presents a promising opportunity to enhance breast cancer screening in general, but evidence is missing regarding its ability to reduce interval breast cancers. Findings: The AI system detected a high risk of breast cancer in most interval breast cancer screening mammograms where radiologists retrospectively detected abnormalities. Clinical relevance: Utilization of an AI system in mammography screening programs can identify breast cancer risk in many interval breast cancer screening mammograms and thus potentially reduce the number of interval breast cancers.
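
A small illustration of how AI case scores can be summarized at the reported thresholds and compared between the two IBC subgroups; the synthetic scores and the Mann-Whitney comparison below are assumptions for demonstration only (the abstract does not name the statistical test used).

```python
import numpy as np
from scipy.stats import mannwhitneyu

def threshold_shares(case_scores, thresholds=(25, 50, 75)):
    """Fraction of mammograms with an AI case score at or above each threshold."""
    scores = np.asarray(case_scores, dtype=float)
    return {t: float((scores >= t).mean()) for t in thresholds}

# Synthetic example: potentially missed IBCs vs. IBCs without retrospective abnormalities.
rng = np.random.default_rng(0)
missed = rng.normal(54, 20, 37).clip(0, 100)          # hypothetical case scores
no_abnormality = rng.normal(23, 15, 82).clip(0, 100)  # hypothetical case scores
print(threshold_shares(np.concatenate([missed, no_abnormality])))
print(mannwhitneyu(missed, no_abnormality, alternative="greater"))
```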

Multimodal deep learning model for prognostic prediction in cervical cancer receiving definitive radiotherapy: a multi-center study.

Wang W, Yang G, Liu Y, Wei L, Xu X, Zhang C, Pan Z, Liang Y, Yang B, Qiu J, Zhang F, Hou X, Hu K, Liang X

PubMed · Aug 4, 2025
For patients with locally advanced cervical cancer (LACC), precise survival prediction models could guide personalized treatment. We developed and validated CerviPro, a deep learning-based multimodal prognostic model, to predict disease-free survival (DFS) in 1018 patients with LACC receiving definitive radiotherapy. The model integrates pre- and post-treatment CT imaging, handcrafted radiomic features, and clinical variables. CerviPro demonstrated robust predictive performance in the internal validation cohort (C-index 0.81) and in the external validation cohorts (C-index 0.70 and 0.66), significantly stratifying patients into distinct high- and low-risk DFS groups. Multimodal feature fusion consistently outperformed models based on single feature categories (clinical data, imaging, or radiomics alone), highlighting the synergistic value of integrating diverse data sources. By integrating multimodal data to predict DFS and recurrence risk, CerviPro provides a clinically valuable prognostic tool for LACC, offering the potential to guide personalized treatment strategies.
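
A brief sketch of how a concordance index (C-index) and a risk-group split of the kind reported for CerviPro can be computed with the lifelines library; the median split and variable names are illustrative, not the authors' code.

```python
import numpy as np
from lifelines.utils import concordance_index

def evaluate_risk_scores(risk_score, dfs_months, event_observed):
    """Harrell's C-index for a predicted risk score against disease-free survival.

    Higher risk should correspond to earlier events, so the score is negated before
    calling concordance_index (which expects higher values = longer survival).
    Also returns a median split into high- and low-risk groups.
    """
    risk = np.asarray(risk_score, dtype=float)
    c_index = concordance_index(dfs_months, -risk, event_observed)
    high_risk = risk >= np.median(risk)
    return c_index, high_risk
```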

Enhanced detection of ovarian cancer using AI-optimized 3D CNNs for PET/CT scan analysis.

Sadeghi MH, Sina S, Faghihi R, Alavi M, Giammarile F, Omidi H

PubMed · Aug 4, 2025
This study investigates how deep learning (DL) can enhance ovarian cancer diagnosis and staging using large imaging datasets. Specifically, we compare six conventional convolutional neural network (CNN) architectures (ResNet, DenseNet, GoogLeNet, U-Net, VGG, and AlexNet) with OCDA-Net, an enhanced model designed for [¹⁸F]FDG PET image analysis. OCDA-Net, an advancement on the ResNet architecture, was compared with these baselines using randomly split datasets of training (80%), validation (10%), and test (10%) images. Trained over 100 epochs, OCDA-Net achieved superior diagnostic classification with an accuracy of 92% and staging results of 94%, supported by robust precision, recall, and F-measure metrics. Grad-CAM++ heat maps confirmed that the network attends to hyper-metabolic lesions, supporting clinical interpretability. Our findings show that OCDA-Net outperforms existing CNN models and has strong potential to transform ovarian cancer diagnosis and staging. The study suggests that implementing these DL models in clinical practice could ultimately improve patient prognoses. Future research should expand datasets, enhance model interpretability, and validate these models in clinical settings.
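
For reference, accuracy together with precision, recall, and F-measure for a multi-class staging task can be computed along the following lines with scikit-learn; the label encoding and macro averaging here are assumptions, not details from the study.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def staging_metrics(y_true, y_pred):
    """Accuracy plus macro-averaged precision, recall, and F-measure."""
    acc = accuracy_score(y_true, y_pred)
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    return {"accuracy": acc, "precision": precision, "recall": recall, "f_measure": f1}

# Example with placeholder stage labels (stages I-IV encoded as 1-4):
print(staging_metrics([1, 2, 3, 4, 2, 3], [1, 2, 3, 3, 2, 3]))
```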

Machine learning of whole-brain resting-state fMRI signatures for individualized grading of frontal gliomas.

Hu Y, Cao X, Chen H, Geng D, Lv K

PubMed · Aug 4, 2025
Accurate preoperative grading of gliomas is critical for therapeutic planning and prognostic evaluation. We developed a noninvasive machine learning model leveraging whole-brain resting-state functional magnetic resonance imaging (rs-fMRI) biomarkers to discriminate high-grade gliomas (HGGs) from low-grade gliomas (LGGs) in the frontal lobe. This retrospective study included 138 patients (78 LGGs, 60 HGGs) with left frontal gliomas. A total of 7134 features were extracted from the mean amplitude of low-frequency fluctuation (mALFF), mean fractional ALFF, mean percentage amplitude of fluctuation (mPerAF), and mean regional homogeneity (mReHo) maps and the resting-state functional connectivity (RSFC) matrix. Twelve predictive features were selected through the Mann-Whitney U test, correlation analysis, and the least absolute shrinkage and selection operator (LASSO) method. The patients were stratified and randomly split into training and testing datasets at a 7:3 ratio. Logistic regression, random forest, support vector machine (SVM), and adaptive boosting algorithms were used to establish models. Model performance was evaluated using the area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity. The selected 12 features included 7 RSFC features, 4 mPerAF features, and 1 mReHo feature. Based on these features, the model established using the SVM showed the best performance. The accuracy in the training and testing datasets was 0.957 and 0.727, respectively, and the area under the receiver operating characteristic curve was 0.972 and 0.799, respectively. Our whole-brain rs-fMRI radiomics approach provides an objective tool for preoperative glioma stratification. The biological interpretability of the selected features reflects distinct neuroplasticity patterns between LGGs and HGGs, advancing understanding of glioma-network interactions.
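
A condensed sketch of the three-stage feature selection described above (Mann-Whitney U filter, correlation pruning, then LASSO) followed by an SVM classifier; the thresholds and implementation details are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def select_features(X, y, p_thresh=0.05, corr_thresh=0.9):
    """Return indices of features surviving a univariate filter, correlation pruning, and LASSO.

    X : (n_patients, n_features) array of rs-fMRI features; y : binary labels (0 = LGG, 1 = HGG).
    """
    keep = [j for j in range(X.shape[1])
            if mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue < p_thresh]
    X_f = X[:, keep]
    corr = np.abs(np.corrcoef(X_f, rowvar=False))
    pruned = []
    for j in range(X_f.shape[1]):
        if all(corr[j, k] < corr_thresh for k in pruned):  # drop highly correlated features
            pruned.append(j)
    X_c = X_f[:, pruned]
    lasso = LassoCV(cv=5).fit(StandardScaler().fit_transform(X_c), y)
    return [keep[pruned[i]] for i in np.flatnonzero(lasso.coef_)]

# The retained features can then be fed to an SVM (probability outputs enable ROC analysis):
svm_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
```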

Can Machine Learning Predict Metastatic Sites in Pancreatic Ductal Adenocarcinoma? A Radiomic Analysis.

Spoto F, De Robertis R, Cardobi N, Garofano A, Messineo L, Lucin E, Milella M, D'Onofrio M

PubMed · Aug 4, 2025
Pancreatic ductal adenocarcinoma (PDAC) exhibits high metastatic potential, with distinct prognoses based on metastatic sites. Radiomics enables quantitative imaging analysis for predictive modeling. This study aimed to evaluate the feasibility of radiomic models in predicting PDAC metastatic patterns, specifically distinguishing between hepatic and pulmonary metastases. This retrospective study included 115 PDAC patients with either liver (n = 94) or lung (n = 21) metastases. Radiomic features were extracted from pancreatic arterial and venous phase CT scans of primary tumors using PyRadiomics. Two radiologists independently segmented tumors for inter-reader reliability assessment. Features with ICC > 0.9 underwent LASSO regularization for feature selection. Class imbalance was addressed using SMOTE and class weighting. Model performance was evaluated using fivefold cross-validation and bootstrap resampling. The multivariate logistic regression model achieved an AUC-ROC of 0.831 (95% CI: 0.752-0.910). At the optimal threshold, sensitivity was 0.762 (95% CI: 0.659-0.865) and specificity was 0.787 (95% CI: 0.695-0.879). The negative predictive value for lung metastases was 0.810 (95% CI: 0.734-0.886). LargeDependenceEmphasis showed a trend toward significance (p = 0.0566) as a discriminative feature. Precision was 0.842, recall 0.762, and the F1 score 0.800. Radiomic analysis of primary pancreatic tumors demonstrates potential for predicting hepatic versus pulmonary metastatic patterns. The high negative predictive value for lung metastases may support clinical decision-making. External validation is essential before clinical implementation. These findings from a single-center study require confirmation in larger, multicenter cohorts.
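
A minimal sketch of combining SMOTE oversampling and class weighting with a logistic regression in a cross-validated pipeline, as described above; the imbalanced-learn pipeline, synthetic data, and parameter choices are assumptions, not the authors' code.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the selected radiomic features (class ratio roughly liver 94 : lung 21).
X, y = make_classification(n_samples=115, n_features=20, weights=[0.82, 0.18], random_state=42)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("smote", SMOTE(random_state=42)),  # oversample the minority (lung) class inside each fold
    ("clf", LogisticRegression(class_weight="balanced", max_iter=1000)),
])
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
print("Mean AUC:", cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc").mean())
```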

Open-radiomics: a collection of standardized datasets and a technical protocol for reproducible radiomics machine learning pipelines.

Namdar K, Wagner MW, Ertl-Wagner BB, Khalvati F

PubMed · Aug 4, 2025
As an important branch of machine learning pipelines in medical imaging, radiomics faces two major challenges, namely reproducibility and accessibility. In this work, we introduce open-radiomics, a set of radiomics datasets along with a comprehensive radiomics pipeline based on our proposed technical protocol, to investigate the effects of radiomics feature extraction settings on the reproducibility of the results. We curated large-scale radiomics datasets based on three open-source datasets: BraTS 2020 for high-grade glioma (HGG) versus low-grade glioma (LGG) classification and survival analysis, BraTS 2023 for O6-methylguanine-DNA methyltransferase (MGMT) classification, and a non-small cell lung cancer (NSCLC) survival analysis dataset from The Cancer Imaging Archive (TCIA). We used the BraTS 2020 open-source magnetic resonance imaging (MRI) dataset to demonstrate how the proposed technical protocol can be utilized in radiomics-based studies. The cohort includes 369 adult patients with brain tumors (76 LGG and 293 HGG). Using the PyRadiomics library for LGG vs. HGG classification, we created 288 radiomics datasets: the combinations of 4 MRI sequences, 3 binWidths, 6 image normalization methods, and 4 tumor subregions. We used Random Forest classifiers, and for each radiomics dataset we repeated the training-validation-test (60%/20%/20%) experiment with different data splits and model random states 100 times (28,800 test results) and calculated the area under the receiver operating characteristic curve (AUROC). Unlike binWidth and image normalization, the tumor subregion and imaging sequence significantly affected the performance of the models. The T1 contrast-enhanced sequence and the union of the necrotic and non-enhancing tumor core subregions resulted in the highest AUROCs (average test AUROC 0.951, 95% confidence interval (0.949, 0.952)). Although several settings and data splits (28 out of 28,800) yielded a test AUROC of 1, they were irreproducible. Our experiments demonstrate that sources of variability in radiomics pipelines (e.g., tumor subregion) can have a significant impact on the results, which may lead to superficially perfect performances that are irreproducible.
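
As an illustration of the settings the protocol varies, a PyRadiomics extractor can be configured per dataset roughly as follows; the file paths and the specific binWidth/normalization values are placeholders, not the exact open-radiomics configuration.

```python
from radiomics import featureextractor  # PyRadiomics

def extract_features(image_path, mask_path, bin_width=25, normalize=True):
    """Extract radiomic features for one image/mask pair under a given configuration.

    bin_width and normalize correspond to two of the settings varied in the protocol
    (3 binWidths, 6 normalization options); the defaults here are placeholders.
    """
    extractor = featureextractor.RadiomicsFeatureExtractor(
        binWidth=bin_width,
        normalize=normalize,
    )
    result = extractor.execute(image_path, mask_path)
    # Keep numeric feature values, dropping the diagnostics_* metadata entries.
    return {k: v for k, v in result.items() if not k.startswith("diagnostics_")}

# Example call (placeholder paths for one BraTS case and one tumor-subregion mask):
# feats = extract_features("BraTS20_case_t1ce.nii.gz", "tumor_core_mask.nii.gz")
```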

CT-Based 3D Super-Resolution Radiomics for the Differential Diagnosis of Brucella vs. Tuberculous Spondylitis using Deep Learning.

Wang K, Qi L, Li J, Zhang M, Du H

PubMed · Aug 4, 2025
This study aims to improve the accuracy of distinguishing tuberculous spondylitis (TBS) from Brucella spondylitis (BS) by developing radiomics models using deep learning and CT images enhanced with super-resolution (SR). A total of 94 patients diagnosed with BS or TBS were randomly divided into training (n=65) and validation (n=29) groups in a 7:3 ratio. In the training set, there were 40 BS and 25 TBS patients, with a mean age of 58.34 ± 12.53 years. In the validation set, there were 17 BS and 12 TBS patients, with a mean age of 58.48 ± 12.29 years. Standard CT images were enhanced using SR, improving spatial resolution and image quality. The lesion regions of interest (ROIs) were manually segmented, and radiomics features were extracted. ResNet18 and ResNet34 were used for deep learning feature extraction and model training. Four multi-layer perceptron (MLP) models were developed: clinical, radiomics (Rad), deep learning (DL), and a combined model. Model performance was assessed using five-fold cross-validation, ROC analysis, and decision curve analysis (DCA). Key clinical and imaging features showed significant differences between TBS and BS (e.g., gender, p=0.0038; parrot beak appearance, p<0.001; dead bone, p<0.001; deformities of the spinal posterior process, p=0.0044; psoas abscess, p<0.001). The combined model outperformed the others, achieving the highest AUC (0.952), with ResNet34 and SR-enhanced images further boosting performance. Sensitivity reached 0.909, and specificity was 0.941. DCA confirmed clinical applicability. The integration of SR-enhanced CT imaging and deep learning radiomics appears to improve diagnostic differentiation between BS and TBS. The combined model, especially when using ResNet34 and GAN-based super-resolution, demonstrated better predictive performance. High-resolution imaging may facilitate better lesion delineation and more robust feature extraction. Nevertheless, further validation with larger, multicenter cohorts is needed to confirm generalizability and reduce potential bias from the retrospective design and imaging heterogeneity. This study suggests that integrating deep learning radiomics with super-resolution may improve the differentiation between TBS and BS compared to standard CT imaging. However, prospective multi-center studies are necessary to validate its clinical applicability.
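
A short sketch of using a pretrained ResNet34 as a deep feature extractor for ROI patches, whose outputs could then be concatenated with clinical and radiomic features for the MLP models described above; the weights, input size, and preprocessing are assumptions, not the authors' setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained ResNet34 truncated before its classification head.
backbone = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])  # drop the final fc layer
feature_extractor.eval()

with torch.no_grad():
    # Placeholder batch of ROI patches; grayscale CT ROIs would need to be
    # replicated to 3 channels and resized/normalized before this step.
    roi_batch = torch.randn(4, 3, 224, 224)
    deep_features = feature_extractor(roi_batch).flatten(1)  # shape (4, 512)
```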

Deep Learning-Enabled Ultrasound for Advancing Anterior Talofibular Ligament Injuries Classification: A Multicenter Model Development and Validation Study.

Shi X, Zhang H, Yuan Y, Xu Z, Meng L, Xi Z, Qiao Y, Liu S, Sun J, Cui J, Du R, Yu Q, Wang D, Shen S, Gao C, Li P, Bai L, Xu H, Wang K

PubMed · Aug 4, 2025
Ultrasound (US) is the preferred modality for assessing anterior talofibular ligament (ATFL) injuries. We aimed to advance ATFL injury classification by developing a US-based deep learning (DL) model and to explore how artificial intelligence (AI) could help radiologists improve diagnostic performance. Consecutive healthy controls and patients with acute ATFL injuries (mild strain, partial tear, complete tear, and avulsion fracture) at 10 hospitals were retrospectively included. A US-based DL model (ATFLNet) was trained (n=2566), internally validated (n=642), and externally validated (n=717 and 493). Surgical or radiological findings based on the majority consensus of three experts served as the reference standard. Prospective validation was conducted at three additional hospitals (n=472). Performance was compared to that of 12 radiologists at different experience levels (external validation sets 1 and 2); an ATFLNet-aided strategy was developed and compared with the radiologists reviewing B-mode images (external validation set 2); the strategy was then tested in a simulated scenario (reviewing images alongside dynamic clips; prospective validation set). Statistical comparisons were performed using McNemar's test, while inter-reader agreement was evaluated with the multireader Fleiss κ statistic. ATFLNet obtained a macro-average area under the curve ≥0.970 across all five classes in each dataset, indicating robust overall performance. It also consistently outperformed senior radiologists in the external validation sets (all p<.05). The ATFLNet-aided strategy improved radiologists' average accuracy for image review (from 0.707 to 0.811, p<.001). In the simulated scenario, it led to enhanced accuracy (from 0.794 to 0.864, p=.003) and a reduction in diagnostic variability, particularly for junior radiologists. Our US-based model outperformed human experts for ATFL injury evaluation. AI-aided strategies hold the potential to enhance diagnostic performance in real-world clinical scenarios.
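
Two of the evaluation steps mentioned above (the macro-averaged one-vs-rest AUC over the five classes and McNemar's test on paired reader correctness) could be computed roughly as follows; the helper functions are illustrative, not part of ATFLNet.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from statsmodels.stats.contingency_tables import mcnemar

def macro_auc(y_true, y_prob):
    """One-vs-rest macro-averaged AUC over the five ATFL classes
    (healthy, mild strain, partial tear, complete tear, avulsion fracture)."""
    return roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")

def compare_readers(correct_unaided, correct_aided):
    """McNemar's test on paired per-case correctness (1 = correct, 0 = incorrect)."""
    a = np.asarray(correct_unaided, dtype=bool)
    b = np.asarray(correct_aided, dtype=bool)
    table = [[np.sum(a & b), np.sum(a & ~b)],
             [np.sum(~a & b), np.sum(~a & ~b)]]
    return mcnemar(table, exact=True)
```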
