
Machine Learning-Driven Radiomic Profiling of Thalamus-Amygdala Nuclei for Prediction of Postoperative Delirium After STN-DBS in Parkinson's Disease Patients: A Pilot Study.

Radziunas A, Davidavicius G, Reinyte K, Pranckeviciene A, Fedaravicius A, Kucinskas V, Laucius O, Tamasauskas A, Deltuva V, Saudargiene A

PubMed · Aug 13, 2025
Postoperative delirium is a common complication following subthalamic nucleus (STN) deep brain stimulation surgery in Parkinson's disease patients. It has been shown to prolong hospital stays, impair cognitive function, and worsen outcomes. Using radiomics as a predictive tool to identify patients at risk of delirium is a novel, personalized approach. This pilot study analyzed preoperative T1-weighted and T2-weighted magnetic resonance images from 34 Parkinson's disease patients, from which the thalamus, amygdala, and hippocampus were segmented, yielding 10,680 extracted radiomic features. Feature selection using the minimum redundancy maximum relevance (mRMR) method identified the 20 most informative features, which were input into eight different machine learning algorithms. High predictive accuracy for postoperative delirium was achieved by applying regularized binary logistic regression and linear discriminant analysis to the 10 most informative radiomic features. Regularized logistic regression achieved 96.97% (±6.20) balanced accuracy, 99.5% (±4.97) sensitivity, 94.43% (±10.70) specificity, and an area under the receiver operating characteristic curve of 0.97 (±0.06). Linear discriminant analysis showed 98.42% (±6.57) balanced accuracy, 98.00% (±9.80) sensitivity, 98.83% (±4.63) specificity, and an area under the receiver operating characteristic curve of 0.98 (±0.07). The feed-forward neural network also demonstrated strong predictive capacity, achieving 96.17% (±10.40) balanced accuracy, 94.5% (±19.87) sensitivity, 97.83% (±7.87) specificity, and an area under the receiver operating characteristic curve of 0.96 (±0.10). When the feature set was extended to 20 features, however, both logistic regression and linear discriminant analysis showed reduced performance, while the feed-forward neural network achieved the highest predictive accuracy of 99.28% (±2.71), with 100.0% (±0.00) sensitivity, 98.57% (±5.42) specificity, and an area under the receiver operating characteristic curve of 0.99 (±0.03). The selected radiomic features may indicate network dysfunction among the thalamic laterodorsal and reuniens medial ventral nuclei, the amygdala basal nucleus, and hippocampal cornu ammonis 4 in these patients. This finding extends previous research suggesting the importance of the thalamic-hippocampal-amygdala network for postoperative delirium due to alterations in neuronal activity.
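
As a hedged illustration of the pipeline this abstract describes, the sketch below pairs a greedy mRMR-style feature selector with a regularized logistic regression under cross-validation. The selector only approximates mRMR (relevance as mutual information minus mean absolute correlation with already-chosen features), and all data are synthetic stand-ins, not the study's radiomic features.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def greedy_mrmr(X, y, k=10):
    """Greedy mRMR-style selection: maximize relevance to y, penalize
    redundancy (mean |correlation|) with already-chosen features."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        scores = []
        for j in remaining:
            redundancy = (np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                   for s in selected]) if selected else 0.0)
            scores.append(relevance[j] - redundancy)
        selected.append(remaining.pop(int(np.argmax(scores))))
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(34, 500))              # stand-in for 10,680 radiomic features
y = rng.integers(0, 2, size=34)             # stand-in delirium labels

idx = greedy_mrmr(X, y, k=10)
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)   # regularized LR
scores = cross_val_score(clf, X[:, idx], y, cv=5, scoring="balanced_accuracy")
print(f"balanced accuracy: {scores.mean():.2f}")
```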

A stacking ensemble framework integrating radiomics and deep learning for prognostic prediction in head and neck cancer.

Wang B, Liu J, Zhang X, Lin J, Li S, Wang Z, Cao Z, Wen D, Liu T, Ramli HRH, Harith HH, Hasan WZW, Dong X

PubMed · Aug 13, 2025
Radiomics models frequently face challenges related to reproducibility and robustness. To address these issues, we propose a multimodal, multi-model fusion framework that uses stacking ensemble learning for prognostic prediction in head and neck cancer (HNC), with the aim of improving the accuracy and reliability of survival predictions. A total of 806 cases from nine centers were collected; 143 cases from two centers were assigned to the external validation cohort, while the remaining 663 were stratified and randomly split into training (n = 530) and internal validation (n = 133) sets. Radiomics features were extracted according to IBSI standards, and deep learning features were obtained using a 3D DenseNet-121 model. After feature selection, the selected features were input into Cox, SVM, RSF, DeepCox, and DeepSurv models, and a stacking fusion strategy was employed to develop the prognostic model. Model performance was evaluated using Kaplan-Meier survival curves and time-dependent ROC curves. On the external validation set, the model using combined PET and CT radiomics features outperformed single-modality models, with the RSF model obtaining the highest concordance index (C-index) of 0.7302. When using deep features extracted by the 3D DenseNet-121, the PET + CT-based models demonstrated significantly improved prognostic accuracy, with DeepSurv and DeepCox achieving C-indices of 0.9217 and 0.9208, respectively. Among stacking models, the PET + CT model using only radiomics features reached a C-index of 0.7324, while the deep feature-based stacking model achieved 0.9319. The best performance was obtained by the multi-feature fusion model, which integrated both radiomics and deep learning features from PET and CT, yielding a C-index of 0.9345. Kaplan-Meier survival analysis further confirmed the fusion model's ability to distinguish between high-risk and low-risk groups. The stacking-based ensemble model outperforms individual machine learning models, markedly improving the robustness of prognostic predictions.
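
A minimal sketch of the stacking idea, using scikit-survival: risk scores from base survival models become meta-features for a level-2 Cox stacker, scored by C-index. The synthetic data, the choice of only RSF and Cox base models, and the in-sample meta-features are simplifying assumptions; the paper also stacks SVM, DeepCox, and DeepSurv and would use out-of-fold predictions.

```python
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                  # stand-in for PET/CT features
time = rng.exponential(scale=60, size=200) + X[:, 0] * 5 + 80
event = rng.random(200) < 0.6                   # ~60% observed events
tr, va = np.arange(150), np.arange(150, 200)
y_tr = Surv.from_arrays(event=event[tr], time=time[tr])

base = [RandomSurvivalForest(n_estimators=100, random_state=0),
        CoxPHSurvivalAnalysis(alpha=0.1)]
for m in base:
    m.fit(X[tr], y_tr)

# Base-model risk scores become meta-features for the level-2 Cox stacker.
# (A real stacker would use out-of-fold predictions to avoid leakage.)
meta_tr = np.column_stack([m.predict(X[tr]) for m in base])
meta_va = np.column_stack([m.predict(X[va]) for m in base])
stacker = CoxPHSurvivalAnalysis().fit(meta_tr, y_tr)

cindex = concordance_index_censored(event[va], time[va],
                                    stacker.predict(meta_va))[0]
print(f"stacked C-index: {cindex:.3f}")
```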

ES-UNet: efficient 3D medical image segmentation with enhanced skip connections in 3D UNet.

Park M, Oh S, Park J, Jeong T, Yu S

PubMed · Aug 13, 2025
Deep learning has significantly advanced medical image analysis, particularly semantic segmentation, which is essential for clinical decision-making. However, existing 3D segmentation models, such as the traditional 3D UNet, struggle to balance computational efficiency and accuracy when processing volumetric medical data. This study aims to develop an improved architecture for 3D medical image segmentation, with enhanced learning strategies to improve accuracy and to address the challenges posed by limited training data. We propose ES-UNet, a 3D segmentation architecture that achieves superior segmentation performance while remaining competitive across multiple computational metrics, including memory usage, inference time, and parameter count. The model builds upon the full-scale skip connection design of UNet3+ by integrating channel attention modules into each encoder-to-decoder path and incorporating full-scale deep supervision to enhance multi-resolution feature learning. We further introduce Region-Specific Scaling (RSS), a data augmentation method that adaptively applies geometric transformations to annotated regions, and a Dynamically Weighted Dice (DWD) loss to improve the balance between precision and recall. The model was evaluated on the MICCAI HECKTOR dataset, with additional validation on selected tasks from the Medical Segmentation Decathlon (MSD). On the HECKTOR dataset, ES-UNet achieved a Dice Similarity Coefficient (DSC) of 76.87%, outperforming baseline models including 3D UNet, 3D UNet3+, nnUNet, and Swin UNETR. Ablation studies showed that RSS and DWD contributed up to 1.22% and 1.06% improvement in DSC, respectively. A sensitivity analysis demonstrated that the chosen scaling range in RSS offered a favorable trade-off between deformation and anatomical plausibility. Cross-dataset evaluation on the MSD Heart and Spleen tasks also indicated strong generalization. Computational analysis revealed that ES-UNet achieves superior segmentation performance with moderate computational demands: the enhanced skip connection design, with lightweight channel attention modules integrated throughout the network, enables a favorable balance between segmentation accuracy and computational efficiency. ES-UNet thus combines architectural and algorithmic improvements to achieve robust 3D medical image segmentation. While the framework incorporates established components, its core contributions lie in the optimized skip connection strategy and supporting techniques such as RSS and DWD. Future work will explore adaptive scaling strategies and broader validation across diverse imaging modalities.
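
The channel attention modules in the skip paths are not specified in detail here; below is one standard formulation, a 3D squeeze-and-excitation block in PyTorch, offered as a plausible sketch rather than the paper's exact module.

```python
import torch
import torch.nn as nn

class ChannelAttention3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)      # squeeze: global context per channel
        self.fc = nn.Sequential(                 # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, *_ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w                             # reweight skip-path features

skip = torch.randn(2, 64, 16, 32, 32)            # (B, C, D, H, W) skip features
attended = ChannelAttention3D(64)(skip)
print(attended.shape)                            # torch.Size([2, 64, 16, 32, 32])
```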

An optimized multi-task contrastive learning framework for HIFU lesion detection and segmentation.

Zavar M, Ghaffari HR, Tabatabaee H

PubMed · Aug 13, 2025
Accurate detection and segmentation of lesions induced by High-Intensity Focused Ultrasound (HIFU) in medical imaging remain significant challenges in automated disease diagnosis. Traditional methods heavily rely on labeled data, which is often scarce, expensive, and time-consuming to obtain. Moreover, existing approaches frequently struggle with variations in medical data and the limited availability of annotated datasets, leading to suboptimal performance. To address these challenges, this paper introduces an innovative framework called the Optimized Multi-Task Contrastive Learning Framework (OMCLF), which leverages self-supervised learning (SSL) and genetic algorithms (GA) to enhance HIFU lesion detection and segmentation. OMCLF integrates classification and segmentation into a unified model, utilizing a shared backbone to extract common features. The framework systematically optimizes feature representations, hyperparameters, and data augmentation strategies tailored for medical imaging, ensuring that critical information, such as lesion details, is preserved. By employing a genetic algorithm, OMCLF explores and optimizes augmentation techniques suitable for medical data, avoiding distortions that could compromise diagnostic accuracy. Experimental results demonstrate that OMCLF outperforms single-task methods in both classification and segmentation tasks while significantly reducing dependency on labeled data. Specifically, OMCLF achieves an accuracy of 93.3% in lesion detection and a Dice score of 92.5% in segmentation, surpassing state-of-the-art methods such as SimCLR and MoCo. The proposed approach achieves superior accuracy in identifying and delineating HIFU-induced lesions, marking a substantial advancement in medical image interpretation and automated diagnosis. OMCLF represents a significant step forward in the evolutionary optimization of self-supervised learning, with potential applications across various medical imaging domains.
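
Since OMCLF is compared against SimCLR and MoCo, a contrastive objective of the NT-Xent family is presumably at its core. The sketch below implements that loss in PyTorch; the GA-tuned augmentations and the multi-task classification/segmentation heads are omitted.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """NT-Xent loss; z1, z2 are (N, d) projections of two views of N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # 2N x d, unit norm
    sim = z @ z.t() / tau                                # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # mask self-similarity
    n = z1.shape[0]
    # Positive for row i is its other view: i+n for the first half, i-n after.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)        # stand-in projections
print(nt_xent(z1, z2))
```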

Explainable AI Technique in Lung Cancer Detection Using Convolutional Neural Networks

Nishan Rai, Sujan Khatri, Devendra Risal

arXiv preprint · Aug 13, 2025
Early detection of lung cancer is critical to improving survival outcomes. We present a deep learning framework for automated lung cancer screening from chest computed tomography (CT) images with integrated explainability. Using the IQ-OTH/NCCD dataset (1,197 scans across Normal, Benign, and Malignant classes), we evaluate a custom convolutional neural network (CNN) and three fine-tuned transfer learning backbones: DenseNet121, ResNet152, and VGG19. Models are trained with cost-sensitive learning to mitigate class imbalance and evaluated via accuracy, precision, recall, F1-score, and ROC-AUC. While ResNet152 achieved the highest accuracy (97.3%), DenseNet121 provided the best overall balance of precision, recall, and F1 (92%, 90%, and 91%, respectively). We further apply Shapley Additive Explanations (SHAP) to visualize the evidence contributing to predictions, improving clinical transparency. Results indicate that CNN-based approaches augmented with explainability can provide fast, accurate, and interpretable support for lung cancer screening, particularly in resource-limited settings.
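
A hedged sketch of the two steps named above, cost-sensitive training via class weights and SHAP attributions, using a deliberately tiny stand-in Keras CNN on random data; the paper's backbones and dataset handling are far more involved.

```python
import numpy as np
import shap
import tensorflow as tf
from sklearn.utils.class_weight import compute_class_weight

# Tiny stand-in CNN on random "CT" patches; not the paper's architecture.
X = np.random.rand(64, 32, 32, 1).astype("float32")
y = np.random.randint(0, 3, size=64)                  # Normal / Benign / Malignant

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 1)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Cost-sensitive learning: inverse-frequency class weights.
w = compute_class_weight("balanced", classes=np.unique(y), y=y)
model.fit(X, y, epochs=1, class_weight=dict(enumerate(w)), verbose=0)

explainer = shap.GradientExplainer(model, X[:32])     # expected-gradients SHAP
shap_values = explainer.shap_values(X[:4])
shap.image_plot(shap_values, X[:4])                   # pixel-level evidence map
```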

Deep Learning Enables Large-Scale Shape and Appearance Modeling in Total-Body DXA Imaging

Arianna Bunnell, Devon Cataldi, Yannik Glaser, Thomas K. Wolfgruber, Steven Heymsfield, Alan B. Zonderman, Thomas L. Kelly, Peter Sadowski, John A. Shepherd

arXiv preprint · Aug 13, 2025
Total-body dual X-ray absorptiometry (TBDXA) imaging is a relatively low-cost whole-body imaging modality widely used for body composition assessment. We develop and validate a deep learning method for automatic fiducial point placement on TBDXA scans, trained on 1,683 manually annotated TBDXA scans. The method achieves a percentage of correct keypoints of 99.5% on an external test dataset. To demonstrate its value for shape and appearance modeling (SAM), the method is used to place keypoints on 35,928 scans across five TBDXA imaging modes; associations with health markers are then tested, using two-sample Kolmogorov-Smirnov tests, in two cohorts that were not used for SAM model generation. SAM feature distributions associated with health biomarkers corroborate existing evidence and generate new hypotheses about the relationship of body composition and shape to various frailty, metabolic, inflammation, and cardiometabolic health markers. Evaluation scripts, model weights, automatic point file generation code, and triangulation files are available at https://github.com/hawaii-ai/dxa-pointplacement.
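
Two of the evaluation pieces named above are easy to make concrete: percentage of correct keypoints (PCK) for point placement, and two-sample Kolmogorov-Smirnov tests for comparing SAM feature distributions between cohorts. The sketch below uses synthetic coordinates and feature values; the 10-pixel threshold is an illustrative assumption, not the paper's.

```python
import numpy as np
from scipy.stats import ks_2samp

def pck(pred: np.ndarray, gt: np.ndarray, thresh_px: float = 10.0) -> float:
    """Fraction of predicted points within thresh_px pixels of ground truth."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=1) <= thresh_px))

rng = np.random.default_rng(0)
gt = rng.uniform(0, 512, size=(50, 2))              # hypothetical fiducial points
pred = gt + rng.normal(scale=3.0, size=gt.shape)    # hypothetical predictions
print(f"PCK: {pck(pred, gt):.3f}")

# Two-sample KS test comparing one SAM feature across two cohorts.
feat_a, feat_b = rng.normal(0, 1, 300), rng.normal(0.3, 1, 300)
stat, p = ks_2samp(feat_a, feat_b)
print(f"KS statistic {stat:.3f}, p = {p:.2e}")
```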

Ultrasonic Texture Analysis for Predicting Acute Myocardial Infarction.

Jamthikar AD, Hathaway QA, Maganti K, Hamirani Y, Bokhari S, Yanamala N, Sengupta PP

PubMed · Aug 13, 2025
Acute myocardial infarction (MI) alters cardiomyocyte geometry and architecture, leading to changes in the acoustic properties of the myocardium. This study examines ultrasomics, a novel cardiac ultrasound-based radiomics technique for extracting high-throughput pixel-level information from images, to identify ultrasonic texture and morphologic changes associated with infarcted myocardium. We included 684 participants from three data sources: a) a retrospective single-center matched case-control dataset; b) a prospective multicenter matched clinical trial dataset; and c) an open-source international, multivendor dataset. Handcrafted and deep transfer learning-based ultrasomics features from 2- and 4-chamber echocardiographic views were used to train machine learning (ML) models, with leave-one-source-out cross-validation for external validation. The ML model showed a higher AUC than the transfer learning-based deep features in identifying MI (AUC: 0.87 [95% CI: 0.84-0.89] vs 0.74 [95% CI: 0.70-0.77]; P < 0.0001). ML probability was an independent predictor of MI even after adjusting for conventional echocardiographic parameters (adjusted OR: 1.03 [95% CI: 1.01-1.05]; P < 0.0001). ML probability retained diagnostic value in differentiating acute MI even in the presence of myocardial dysfunction (averaged longitudinal strain [LS] <16%) (AUC: 0.84 [95% CI: 0.77-0.89]). In addition, combining averaged LS with ML probability significantly improved predictive performance compared with LS alone (AUC: 0.86 [95% CI: 0.80-0.91] vs 0.80 [95% CI: 0.72-0.87]; P = 0.02). Visualization of ultrasomics features with a Manhattan plot discriminated infarcted from noninfarcted segments (P < 0.001) and enabled parametric visualization of the infarcted myocardium. This study demonstrates the potential of cardiac ultrasomics to distinguish healthy from infarcted myocardium and highlights the need for validation in diverse populations to define its role and incremental value in myocardial tissue characterization beyond conventional echocardiography.
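
The leave-one-source-out cross-validation described above maps naturally onto scikit-learn's LeaveOneGroupOut, with each data source treated as a group. The sketch below uses a random forest as a stand-in classifier on synthetic data; the paper does not specify this model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))                          # stand-in ultrasomics features
y = (X[:, 0] + rng.normal(size=300) > 0).astype(int)    # stand-in MI labels
groups = rng.integers(0, 3, size=300)                   # three data sources

aucs = []
for tr, te in LeaveOneGroupOut().split(X, y, groups):   # hold out one source per fold
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[tr], y[tr])
    aucs.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
print(f"leave-one-source-out AUC: {np.mean(aucs):.2f}")
```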

Brown adipose tissue machine learning nnU-Net V2 network using TriDFusion (3DF).

Lafontaine D, Chahwan S, Barraza G, Ucpinar BA, Kayal G, Gómez-Banoy N, Cohen P, Humm JL, Schöder H

PubMed · Aug 13, 2025
Recent advances in machine learning have revolutionized medical imaging. Currently, identifying brown adipose tissue (BAT) relies on manual identification and segmentation on fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) scans. However, this process is time-consuming, especially for studies involving a large number of cases, and is subject to bias due to observer dependency. The introduction of machine learning algorithms, such as the PET/CT algorithm implemented in the TriDFusion (3DF) Image Viewer, represents a significant advancement in BAT detection. In the context of cancer care, artificial intelligence (AI)-driven BAT detection holds immense promise for rapid and automatic differentiation between malignant lesions and non-malignant BAT confounds. By leveraging machine learning to discern intricate patterns in imaging data, this study aims to advance the automation of BAT recognition and provide precise quantitative assessment of radiographic features. We used a semi-automatic, threshold-based 3DF workflow to segment 317 PET/CT scans containing BAT. To minimize manual edits, we defined exclusion zones via machine-learning-based CT organ segmentation and used those organ masks to assign each volume of interest (VOI) to its anatomical site. Three physicians then reviewed and corrected all segmentations using the 3DF contour panel. The final, edited masks were used to train an nnU-Net V2 model, which we subsequently applied to 118 independent PET/CT scans. Across all anatomical sites, physicians judged the network's automated segmentations to be approximately 90% accurate. Although nnU-Net V2 effectively identified BAT on PET/CT scans, training an AI model capable of perfect BAT segmentation remains a challenge due to factors such as PET/CT misregistration and the absence of visible BAT activity across contiguous slices.
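
A hedged sketch of the threshold-based segmentation step: candidate BAT voxels must exceed an SUV threshold on PET, fall within a fat-density HU window on CT, and lie outside organ exclusion masks, after which connected components become VOIs. The thresholds below are commonly cited BAT criteria, assumed rather than taken from the 3DF workflow, and all volumes are synthetic.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
suv = rng.gamma(2.0, 0.8, size=(40, 64, 64))       # stand-in PET SUV volume
hu = rng.normal(-60, 120, size=suv.shape)          # stand-in CT HU volume
organ_mask = np.zeros(suv.shape, dtype=bool)       # ML-derived exclusion zones
organ_mask[10:20, 20:40, 20:40] = True

SUV_MIN, HU_LO, HU_HI = 1.5, -190.0, -10.0         # common BAT criteria (assumed)
candidate = (suv >= SUV_MIN) & (hu >= HU_LO) & (hu <= HU_HI) & ~organ_mask

labels, n = ndimage.label(candidate)               # connected components -> VOIs
sizes = ndimage.sum_labels(candidate, labels, index=np.arange(1, n + 1))
keep = np.isin(labels, np.flatnonzero(sizes >= 20) + 1)   # drop tiny noise blobs
print(f"{n} components found, {int(keep.sum())} voxels retained")
```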

Graph Neural Networks for Realistic Bleeding Prediction in Surgical Simulators.

Kakdas YC, De S, Demirel D

PubMed · Aug 12, 2025
This study presents a novel approach that uses graph neural networks to predict the risk of internal bleeding from vessel maps derived from patient CT and MRI scans, aimed at enhancing the realism of surgical simulators for emergency scenarios such as trauma, where rapid detection of internal bleeding can be lifesaving. First, medical images are segmented and converted into graph representations of the vasculature, where nodes represent vessel branching points with spatial coordinates and edges encode vessel features such as length and radius. Because no existing dataset directly labels bleeding risk, we calculate the bleeding probability for each vessel node using a physics-based heuristic: peripheral vascular resistance via the Hagen-Poiseuille equation. A graph attention network is then trained to regress these probabilities, effectively learning to predict hemorrhage risk from the graph-structured imaging data. The model is trained using tenfold cross-validation on a combined dataset of 1,708 vessel graphs extracted from four public image datasets (MSD, KiTS, AbdomenCT, CT-ORG), optimized with the Adam optimizer and mean squared error loss, with early stopping and L2 regularization. Our model achieves a mean R-squared of 0.86 in predicting bleeding risk, reaching up to 0.9188 in optimal configurations, with low mean training and validation losses of 0.0069 and 0.0074, respectively, and higher performance on well-connected vascular graphs. Finally, we integrate the trained model into an immersive virtual reality environment to simulate intra-abdominal bleeding scenarios for surgical training. The model demonstrates robust predictive performance despite the inherent sparsity of real-life datasets.
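
A compact sketch of the two ingredients named above: Hagen-Poiseuille resistance, R = 8μL/(πr⁴), as the physics-based label, and a small graph attention network regressing a per-node bleeding score with MSE loss. It uses PyTorch Geometric on a toy vessel graph; the viscosity value and the node-feature layout are assumptions.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv

def poiseuille_resistance(length, radius, mu=3.5e-3):
    """R = 8*mu*L / (pi * r^4); mu = blood viscosity in Pa*s (assumed value)."""
    return 8.0 * mu * length / (torch.pi * radius ** 4)

class BleedingGAT(torch.nn.Module):
    def __init__(self, in_dim: int, hidden: int = 32, heads: int = 4):
        super().__init__()
        self.g1 = GATConv(in_dim, hidden, heads=heads)
        self.g2 = GATConv(hidden * heads, 1, heads=1)    # per-node risk score

    def forward(self, x, edge_index):
        x = F.elu(self.g1(x, edge_index))
        return torch.sigmoid(self.g2(x, edge_index)).squeeze(-1)

# Toy vessel graph: 6 nodes (xyz, radius, length as features), chain edges.
x = torch.rand(6, 5)
edge_index = torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 5]])
target = poiseuille_resistance(x[:, 4], x[:, 3] + 0.1)
target = target / target.max()                           # normalize to [0, 1]

model = BleedingGAT(in_dim=5)
loss = F.mse_loss(model(x, edge_index), target)          # MSE regression, as in paper
print(loss)
```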

[Development of a machine learning-based diagnostic model for T-shaped uterus using transvaginal 3D ultrasound quantitative parameters].

Li SJ, Wang Y, Huang R, Yang LM, Lyu XD, Huang XW, Peng XB, Song DM, Ma N, Xiao Y, Zhou QY, Guo Y, Liang N, Liu S, Gao K, Yan YN, Xia EL

PubMed · Aug 12, 2025
Objective: To develop a machine learning diagnostic model for T-shaped uterus based on quantitative parameters from 3D transvaginal ultrasound. Methods: A retrospective cross-sectional study was conducted, recruiting 304 patients who visited the hysteroscopy centre of Fuxing Hospital, Beijing, China, between July 2021 and June 2024 for infertility, recurrent pregnancy loss, or other adverse obstetric histories. Twelve experts, including seven clinicians and five sonographers from Fuxing Hospital, Beijing Obstetrics and Gynecology Hospital of Capital Medical University, Peking University People's Hospital, and Beijing Hospital, independently and anonymously assessed the diagnosis of T-shaped uterus using a modified Delphi method. Based on the consensus results, 56 cases were classified into the T-shaped uterus group and 248 cases into the non-T-shaped uterus group. A total of 7 clinical features and 14 sonographic features were initially included. Features with significant diagnostic impact were selected using 10-fold cross-validated LASSO (least absolute shrinkage and selection operator) regression. Four machine learning algorithms (logistic regression, LR; decision tree, DT; random forest, RF; support vector machine, SVM) were then implemented to develop T-shaped uterus diagnostic models. Using the Python random module, the patient dataset was randomly divided into five subsets, each maintaining the original class distribution (T-shaped uterus : non-T-shaped uterus ≈ 1:4) and a balanced number of samples between the two categories. Five-fold cross-validation was performed, with four subsets used for training and one for validation in each round, to enhance the reliability of model evaluation. Model performance was assessed using established metrics: area under the receiver operating characteristic curve (AUC), sensitivity, specificity, precision, and F1-score. In the RF model, feature importance was assessed by the mean decrease in Gini impurity attributed to each variable. Results: The 304 patients had a mean age of (35±4) years: (35±5) years in the T-shaped uterus group and (34±4) years in the non-T-shaped uterus group. Eight features with non-zero coefficients were selected by LASSO regression: average lateral wall indentation width, average lateral wall indentation angle, upper cavity depth, endometrial thickness, uterine cavity area, cavity width at the level of lateral wall indentation, angle formed by the bilateral lateral walls, and average cornual angle (coefficients: 0.125, -0.064, -0.037, -0.030, -0.026, -0.025, -0.025, and -0.024, respectively). The RF model showed the best diagnostic performance: in the training set, AUC was 0.986 (95% CI: 0.980-0.992), sensitivity 0.978, specificity 0.946, precision 0.802, and F1-score 0.881; in the testing set, AUC was 0.948 (95% CI: 0.911-0.985), sensitivity 0.873, specificity 0.919, precision 0.716, and F1-score 0.784. RF feature importance analysis revealed that average lateral wall indentation width, upper cavity depth, and average lateral wall indentation angle were the top three features (over 65% of total importance), playing a decisive role in model prediction. Conclusion: The machine learning models developed in this study, particularly the RF model, are promising for the diagnosis of T-shaped uterus, offering new perspectives and technical support for clinical practice.
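
A hedged sketch of the selection-plus-classifier pipeline this abstract describes: 10-fold cross-validated LASSO keeps the non-zero-coefficient features, and a random forest then ranks them by mean decrease in Gini impurity. Data here are synthetic; feature indices stand in for the named ultrasound parameters.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(304, 21))                     # 21 clinical + sonographic features
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=304) > 1).astype(int)

Xs = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=10, random_state=0).fit(Xs, y)  # 10-fold cross-validated LASSO
selected = np.flatnonzero(lasso.coef_)             # keep non-zero coefficients

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(Xs[:, selected], y)
for rank in np.argsort(rf.feature_importances_)[::-1][:3]:
    print(f"feature {selected[rank]}: "
          f"Gini importance {rf.feature_importances_[rank]:.3f}")
```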