
Hybrid Ensemble Approaches: Optimal Deep Feature Fusion and Hyperparameter-Tuned Classifier Ensembling for Enhanced Brain Tumor Classification

Zahid Ullah, Dragan Pamucar, Jihie Kim

arXiv preprint · Jul 16, 2025
Magnetic Resonance Imaging (MRI) is widely recognized as the most reliable tool for detecting brain tumors because it produces detailed images that reveal their presence. However, diagnostic accuracy can be compromised when human specialists evaluate these images: fatigue, limited expertise, and insufficient image detail can lead to errors. For example, small tumors might go unnoticed, or overlap with healthy brain regions could result in misidentification. To address these challenges and enhance diagnostic precision, this study proposes a novel double-ensembling framework, consisting of an ensemble of pre-trained deep learning (DL) models for feature extraction and an ensemble of hyperparameter-tuned machine learning (ML) classifiers for efficient brain tumor classification. Specifically, our method includes extensive preprocessing and augmentation, transfer learning with various pre-trained deep convolutional neural networks and vision transformers to extract deep features from brain MRI, and hyperparameter fine-tuning of the ML classifiers. Our experiments used three publicly available Kaggle brain tumor MRI datasets to evaluate the pre-trained DL feature extractors, the ML classifiers, and the effectiveness of ensembling deep features together with ensembling ML classifiers for brain tumor classification. Our results indicate that the proposed feature fusion and classifier fusion improve upon the state of the art, with hyperparameter fine-tuning providing a significant enhancement over the ensemble method alone. Additionally, we present an ablation study that illustrates how each component contributes to accurate brain tumor classification.
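The feature-fusion and classifier-ensembling pipeline described above can be illustrated with a short sketch. The backbone choices, hyperparameter grids, and random placeholder images below are assumptions for illustration only, not the authors' configuration: embeddings from a pre-trained CNN and a vision transformer are concatenated (feature fusion), and hyperparameter-tuned classifiers are then combined by soft voting (classifier fusion).

```python
# Minimal sketch of deep feature fusion + hyperparameter-tuned classifier
# ensembling. Backbones, grids, and the random placeholder data are assumptions.
import numpy as np
import torch
from torch import nn
from torchvision.models import resnet50, ResNet50_Weights, vit_b_16, ViT_B_16_Weights
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Pre-trained backbones used as frozen feature extractors.
cnn = resnet50(weights=ResNet50_Weights.DEFAULT)
cnn.fc = nn.Identity()                      # -> 2048-dim features
vit = vit_b_16(weights=ViT_B_16_Weights.DEFAULT)
vit.heads = nn.Identity()                   # -> 768-dim class-token features
cnn.eval()
vit.eval()

@torch.no_grad()
def fused_features(batch: torch.Tensor) -> np.ndarray:
    """Concatenate CNN and ViT embeddings (feature-level fusion)."""
    return torch.cat([cnn(batch), vit(batch)], dim=1).numpy()

# Placeholder stand-ins for preprocessed/augmented brain-MRI slices and labels.
images = torch.rand(40, 3, 224, 224)
labels = np.random.default_rng(0).integers(0, 2, size=40)   # e.g. tumor vs. no tumor
X = fused_features(images)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

# Hyperparameter-tuned base classifiers ...
svm = GridSearchCV(SVC(probability=True),
                   {"C": [1, 10], "gamma": ["scale", "auto"]}, cv=3).fit(X_tr, y_tr)
rf = GridSearchCV(RandomForestClassifier(random_state=0),
                  {"n_estimators": [100, 300]}, cv=3).fit(X_tr, y_tr)

# ... combined by soft voting (classifier-level fusion).
ensemble = VotingClassifier(
    [("svm", svm.best_estimator_), ("rf", rf.best_estimator_)], voting="soft"
).fit(X_tr, y_tr)
print("holdout accuracy:", ensemble.score(X_te, y_te))
```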

Identifying Signatures of Image Phenotypes to Track Treatment Response in Liver Disease

Matthias Perkonigg, Nina Bastati, Ahmed Ba-Ssalamah, Peter Mesenbrink, Alexander Goehler, Miljen Martic, Xiaofei Zhou, Michael Trauner, Georg Langs

arXiv preprint · Jul 16, 2025
Quantifiable image patterns associated with disease progression and treatment response are critical tools for guiding individual treatment and for developing novel therapies. Here, we show that unsupervised machine learning can identify a pattern vocabulary of liver tissue in magnetic resonance images that quantifies treatment response in diffuse liver disease. Deep clustering networks simultaneously encode and cluster patches of medical images into a low-dimensional latent space to establish a tissue vocabulary. The resulting tissue types capture differential tissue change and its location in the liver associated with treatment response. We demonstrate the utility of the vocabulary on a randomized controlled trial cohort of non-alcoholic steatohepatitis patients. First, we use the vocabulary to compare longitudinal liver change in a placebo and a treatment cohort. Results show that the method identifies specific liver tissue change pathways associated with treatment and enables better separation between treatment groups than established non-imaging measures. Moreover, we show that the vocabulary can predict biopsy-derived features from non-invasive imaging data. Finally, we validate the approach on a separate replication cohort to demonstrate its applicability.
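A heavily simplified illustration of the "tissue vocabulary" idea follows. The paper describes deep clustering networks that encode and cluster jointly; for brevity, this sketch splits the procedure into a patch autoencoder and a k-means step on the latent codes, and the patch size, architecture, and random patches are assumptions rather than the authors' model.

```python
# Encode image patches into a low-dimensional latent space, then cluster them;
# each cluster index acts as one "word" of a tissue vocabulary (simplified,
# two-step stand-in for a joint deep-clustering objective).
import torch
from torch import nn
from sklearn.cluster import KMeans

class PatchAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(                  # 1 x 32 x 32 patch -> latent_dim
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

patches = torch.rand(512, 1, 32, 32)                   # stand-in for liver MRI patches
model = PatchAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):                                     # short demo training loop
    recon, _ = model(patches)
    loss = nn.functional.mse_loss(recon, patches)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    _, latent = model(patches)
# Cluster the latent codes; cluster labels index the learned tissue types.
vocabulary = KMeans(n_clusters=8, n_init=10, random_state=0).fit(latent.numpy())
print("patch-to-tissue-type assignments:", vocabulary.labels_[:10])
```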

Deep learning-assisted comparison of different models for predicting maxillary canine impaction on panoramic radiography.

Zhang C, Zhu H, Long H, Shi Y, Guo J, You M

PubMed paper · Jul 16, 2025
The panoramic radiograph is the most commonly used imaging modality for predicting maxillary canine impaction, and several prediction models have been constructed based on panoramic radiographs. This study aimed to compare the prediction accuracy of existing models in an external validation facilitated by a deep learning-based automatic landmark detection system. Patients aged 7-14 years who underwent panoramic radiographic examinations and received a diagnosis of impacted canines were included in the study. An automatic landmark localization system was employed to assist the measurement of geometric parameters on the panoramic radiographs, followed by calculation of the predicted risk of canine impaction. Three prediction models, constructed by Arnautska, Alqerban et al., and Margot et al., were evaluated. The metrics of accuracy, sensitivity, specificity, precision, and area under the receiver operating characteristic curve (AUC) were used to compare the performance of the different models. A total of 102 panoramic radiographs with 102 impacted canines and 102 non-impacted canines were analyzed. The results indicated that the model by Margot et al. achieved the highest performance, with a sensitivity of 95% and a specificity of 86% (AUC, 0.97), followed by the model by Arnautska, with a sensitivity of 93% and a specificity of 71% (AUC, 0.94). The model by Alqerban et al. showed poor performance, with an AUC of only 0.20. Two of the existing predictive models exhibited good diagnostic accuracy, whereas the third demonstrated suboptimal performance. Nonetheless, even the most effective model is constrained by several limitations, such as logical and computational challenges, which necessitate further refinement.
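For reference, the comparison metrics named above (accuracy, sensitivity, specificity, precision, AUC) can be computed as in the following sketch; the labels and risk scores are synthetic placeholders, not data from the study.

```python
# Classification metrics from a confusion matrix plus AUC from continuous scores.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = np.array([1] * 102 + [0] * 102)        # impacted vs. non-impacted canines
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, size=204), 0, 1)  # model risk score
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                    # a.k.a. recall
specificity = tn / (tn + fp)
precision = tp / (tp + fp)
auc = roc_auc_score(y_true, y_score)
print(f"acc={accuracy:.2f} sens={sensitivity:.2f} spec={specificity:.2f} "
      f"prec={precision:.2f} AUC={auc:.2f}")
```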

Super-resolution deep learning in pediatric CTA for congenital heart disease: enhancing intracardiac visualization under free-breathing conditions.

Zhou X, Xiong D, Liu F, Li J, Tan N, Duan X, Du X, Ouyang Z, Bao S, Ke T, Zhao Y, Tao J, Dong X, Wang Y, Liao C

PubMed paper · Jul 16, 2025
This study assesses the effectiveness of super-resolution deep learning reconstruction (SR-DLR), conventional deep learning reconstruction (C-DLR), and hybrid iterative reconstruction (HIR) in enhancing image quality and diagnostic performance for pediatric congenital heart disease (CHD) on cardiac CT angiography (CCTA). A total of 91 pediatric patients aged 1-10 years, suspected of having CHD, were consecutively enrolled for CCTA under free-breathing conditions. Reconstructions were performed using the SR-DLR, C-DLR, and HIR algorithms. Objective metrics, namely standard deviation (SD), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR), were quantified, and two radiologists provided blinded subjective image-quality evaluations. The full width at half maximum of lesions was significantly larger on SR-DLR (9.50 ± 6.44 mm) than on C-DLR (9.08 ± 6.23 mm; p < 0.001) and HIR (8.98 ± 6.37 mm; p < 0.001). SR-DLR exhibited superior performance with significantly reduced SD and increased SNR and CNR, particularly in the left ventricle, left atrium, and right ventricle regions (p < 0.05). Subjective evaluations favored SR-DLR over C-DLR and HIR (p < 0.05). The accuracy (99.12%), sensitivity (99.07%), and negative predictive value (85.71%) of SR-DLR were the highest, significantly exceeding those of C-DLR (+7.01%, +7.40%, and +45.71%) and HIR (+20.17%, +21.29%, and +65.71%), with statistically significant differences (p < 0.05 and p < 0.001). In the detection of atrial septal defects (ASDs) and ventricular septal defects (VSDs), SR-DLR demonstrated significantly higher sensitivity than C-DLR (+8.96% and +9.09%) and HIR (+20.90% and +36.36%). For multi-perforated ASDs and VSDs, SR-DLR's sensitivity reached 85.71% and 100%, far surpassing C-DLR and HIR. SR-DLR significantly reduces image noise and enhances resolution, improving the diagnostic visualization of CHD structures in pediatric patients. It outperforms existing algorithms in detecting small lesions, achieving diagnostic accuracy close to that of ultrasound.
Question: Pediatric cardiac computed tomography angiography (CCTA) often fails to adequately visualize intracardiac structures, creating diagnostic challenges for CHD, particularly complex multi-perforated atrioventricular defects.
Findings: SR-DLR markedly improves image quality and diagnostic accuracy, enabling detailed visualization and precise detection of small congenital lesions.
Clinical relevance: SR-DLR enhances the diagnostic confidence and accuracy of CCTA in pediatric CHD, reducing missed diagnoses and improving the characterization of complex intracardiac anomalies, thus supporting better clinical decision-making.
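The objective metrics quoted above (SD, SNR, CNR) can be computed from region-of-interest (ROI) pixel values as in the sketch below. The ROI values are placeholders, and the CNR formula shown is one common definition (noise taken from the background ROI), which may differ from the authors' exact convention.

```python
# Objective image-quality metrics from two ROIs on a reconstructed CT image.
import numpy as np

rng = np.random.default_rng(1)
roi_chamber = rng.normal(400, 18, size=500)     # contrast-filled cardiac chamber (HU)
roi_background = rng.normal(80, 20, size=500)   # adjacent soft tissue (HU)

sd = roi_chamber.std(ddof=1)                    # image noise (standard deviation)
snr = roi_chamber.mean() / sd                   # signal-to-noise ratio
cnr = (roi_chamber.mean() - roi_background.mean()) / roi_background.std(ddof=1)
print(f"SD={sd:.1f} HU, SNR={snr:.1f}, CNR={cnr:.1f}")
```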

Multi-scale machine learning model predicts muscle and functional disease progression.

Blemker SS, Riem L, DuCharme O, Pinette M, Costanzo KE, Weatherley E, Statland J, Tapscott SJ, Wang LH, Shaw DWW, Song X, Leung D, Friedman SD

PubMed paper · Jul 16, 2025
Facioscapulohumeral muscular dystrophy (FSHD) is a genetic neuromuscular disorder characterized by progressive muscle degeneration with substantial variability in severity and progression patterns. FSHD is a highly heterogeneous disease; however, current clinical metrics used for tracking disease progression lack sensitivity for personalized assessment, which greatly limits the design and execution of clinical trials. This study introduces a multi-scale machine learning framework leveraging whole-body magnetic resonance imaging (MRI) and clinical data to predict regional, muscle, joint, and functional progression in FSHD. The goal of this work is to create a 'digital twin' of individual FSHD patients that can be leveraged in clinical trials. Using a combined dataset of over 100 patients from seven studies, baseline MRI-derived metrics, including fat fraction, lean muscle volume, and fat spatial heterogeneity, were integrated with clinical and functional measures. A three-stage random forest model was developed to predict annualized changes in muscle composition and a functional outcome (timed up-and-go, TUG). All model stages showed strong predictive performance in separate holdout datasets. After training, the models predicted fat fraction change with a root mean square error (RMSE) of 2.16% and lean volume change with an RMSE of 8.1 ml in a holdout testing dataset. Feature analysis revealed that metrics of fat heterogeneity within muscle predict muscle-level progression. The stage 3 model, which combined functional muscle groups, predicted change in TUG with an RMSE of 0.6 s in the holdout testing dataset. This study demonstrates that machine learning models incorporating individual muscle and performance data can effectively predict MRI-based disease progression and functional performance on complex tasks, addressing the heterogeneity and nonlinearity inherent in FSHD. Further studies incorporating larger longitudinal cohorts, as well as comprehensive clinical and functional measures, will allow this model to be expanded and refined. As many neuromuscular diseases are characterized by variability and heterogeneity similar to FSHD, such approaches have broad applicability.
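A minimal sketch of one random-forest regression stage of this kind, scored by RMSE on a holdout split, is shown below. The feature set, synthetic data, and model settings are illustrative assumptions and do not reproduce the study's three-stage pipeline.

```python
# Predict an annualized change from baseline MRI-derived muscle metrics with a
# random forest and report holdout RMSE and feature importances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.uniform(0, 60, n),     # baseline fat fraction (%)
    rng.uniform(50, 400, n),   # lean muscle volume (ml)
    rng.uniform(0, 1, n),      # fat spatial heterogeneity (arbitrary units)
])
# Synthetic target: annualized fat-fraction change, driven mostly by heterogeneity.
y = 1.5 * X[:, 2] + 0.02 * X[:, 0] + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"holdout RMSE: {rmse:.2f} percentage points")
print("feature importances:", model.feature_importances_.round(2))
```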

Conditional GAN performs better than orthopedic surgeon in virtual reduction of femoral neck fracture.

Zhao K, Mei Y, Wang X, Ma W, Shen W

PubMed paper · Jul 16, 2025
Satisfactory fracture reduction is hard to achieve. The purpose of this study is to develop a virtual fracture-reduction technique using a conditional GAN (generative adversarial network) and to evaluate its performance in simulating and guiding the reduction of femoral neck fractures, which are difficult to reduce. We compared its reduction quality with manual reduction performed by orthopedic surgeons. This is a pilot study for augmented-reality-assisted femoral neck fracture surgery. To establish the gold standard of reduction, we invited an orthopedic surgeon to perform virtual reduction by registration with reference to the healthy proximal femur. The invited orthopedic surgeon also performed manual reduction in Mimics software to represent the capability of a human doctor. We then trained conditional GAN models on our dataset, which consisted of 208 images from 208 different patients. For displaced femoral neck fractures, it is not easy to measure accurate angles of the fracture line, such as the Pauwels angle; however, the fracture line becomes clearer after reduction. We therefore compared the results of manual reduction, the conditional GAN models, and registration by Pauwels angle, Garden index, and satisfactory reduction rate. We tried different numbers of downsampling operations (α) to optimize the performance of the conditional GAN models. A total of 208 pre-surgical CT scans from 208 patients were included in our study (mean age 69.755 ± 13.728 years; 88 men). The Pauwels angle of the conditional GAN model (α = 0) was 38.519°, significantly more stable than manual reduction (44.647°, p < 0.001). The Garden index of the conditional GAN model (α = 0) was 176.726°, also significantly more stable than manual reduction (163.590°, p = 0.002). The satisfactory reduction rate of the conditional GAN model (α = 0) was 88.372%, significantly higher than that of manual reduction (53.488%, p < 0.001). The Pauwels angle, Garden index, and satisfactory reduction rate of the conditional GAN model (α = 0) showed no significant difference from registration. The conditional GAN model (α = 0) can therefore achieve better performance in the virtual reduction of femoral neck fractures than an orthopedic surgeon.
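The conditional-GAN formulation can be sketched in a pix2pix-like form, in which the image of the displaced fracture conditions both the generator and the discriminator. The toy 2D architecture, loss weights, and random tensors below are assumptions for illustration; the study's actual model, data handling, and the downsampling parameter α are not reproduced here.

```python
# Heavily simplified image-conditional GAN for "virtual reduction": the generator
# maps a displaced-fracture image to a predicted reduced configuration, and the
# discriminator judges (condition, candidate) pairs.
import torch
from torch import nn

class Generator(nn.Module):
    """Encoder-decoder mapping a displaced-fracture slice to a 'reduced' slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),    # downsample
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # upsample
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic over concatenated (condition, candidate) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, padding=1),             # per-patch real/fake logits
        )

    def forward(self, condition, candidate):
        return self.net(torch.cat([condition, candidate], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

# Placeholder batch: displaced fracture (input) and registered "gold standard" reduction.
displaced = torch.rand(4, 1, 64, 64)
reduced_gt = torch.rand(4, 1, 64, 64)

for _ in range(3):                                      # toy training steps
    # Discriminator update.
    fake = G(displaced).detach()
    d_real = D(displaced, reduced_gt)
    d_fake = D(displaced, fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator update: fool D while staying close to the registered reduction.
    fake = G(displaced)
    d_fake = D(displaced, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, reduced_gt)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

print("predicted virtual-reduction shape:", G(displaced).shape)
```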

Image quality and radiation dose of reduced-dose abdominopelvic computed tomography (CT) with silver filter and deep learning reconstruction.

Otgonbaatar C, Jeon SH, Cha SJ, Shim H, Kim JW, Ahn JH

PubMed paper · Jul 16, 2025
To compare image quality and radiation dose between reduced-dose abdominopelvic CT with deep learning reconstruction (DLR) and a SilverBeam filter versus standard-dose CT with iterative reconstruction (IR). In total, 182 patients (mean age ± standard deviation, 63 ± 14 years; 100 men) were included. Standard-dose scanning was performed with a tube voltage of 100 kVp, automatic tube current modulation, and IR, whereas reduced-dose scanning was performed with a tube voltage of 120 kVp, a SilverBeam filter, and DLR. Additionally, a contrast-enhanced (CE)-boost image was obtained for the reduced-dose scan. Radiation dose analysis and objective and subjective image-quality analyses were performed for each body mass index (BMI) category. The radiation dose for SilverBeam with DLR was significantly lower than that of standard dose with IR, with an average reduction in effective dose of 59.0% (1.87 vs. 4.57 mSv). Standard dose with IR (10.59 ± 1.75) and SilverBeam with DLR (10.60 ± 1.08) showed no significant difference in image noise (p = 0.99). In the obese group (BMI > 25 kg/m²), there were no significant differences in the SNRs of the liver, pancreas, and spleen between standard dose with IR and SilverBeam with DLR. SilverBeam with DLR + CE-boost demonstrated significantly better SNRs and CNRs compared with standard dose with IR and SilverBeam with DLR. DLR combined with the silver filter is effective for routine abdominopelvic CT, achieving a clearly reduced radiation dose while providing image quality that is non-inferior to standard dose with IR.
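As a quick arithmetic check, the reported 59.0% effective-dose reduction follows directly from the quoted doses; the dose-length-product (DLP) line is an optional illustration of how effective dose is commonly estimated for abdominopelvic CT, with a hypothetical DLP value.

```python
# Effective-dose reduction from the reported values, plus a common DLP-based estimate.
standard_dose_msv = 4.57     # standard dose with IR (reported)
reduced_dose_msv = 1.87      # SilverBeam filter with DLR (reported)

reduction = (standard_dose_msv - reduced_dose_msv) / standard_dose_msv
print(f"effective-dose reduction: {reduction:.1%}")  # ~59%, consistent with the reported 59.0%

# Effective dose is often approximated as DLP * k, with k ~ 0.015 mSv/(mGy*cm)
# for the adult abdomen/pelvis; the DLP below is hypothetical.
dlp_mgy_cm = 125.0
print(f"estimated effective dose: {dlp_mgy_cm * 0.015:.2f} mSv")
```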

Artificial intelligence-based diabetes risk prediction from longitudinal DXA bone measurements.

Khan S, Shah Z

PubMed paper · Jul 16, 2025
Diabetes mellitus (DM) is a serious global health concern that poses a significant threat to human life. Beyond its direct impact, diabetes substantially increases the risk of developing severe complications such as hypertension, cardiovascular disease, and musculoskeletal disorders like arthritis and osteoporosis. The field of diabetes classification has advanced significantly with the use of diverse data modalities and sophisticated tools to identify individuals or groups as diabetic. However, the task of predicting diabetes prior to its onset, particularly through the use of longitudinal multi-modal data, remains relatively underexplored. To better understand the risk factors associated with diabetes development among Qatari adults, this longitudinal study investigates dual-energy X-ray absorptiometry (DXA)-derived whole-body and regional bone composition measures as potential predictors of diabetes onset. We designed a retrospective case-control study with a total of 1,382 participants, comprising 725 male participants (cases: 146, controls: 579) and 657 female participants (cases: 133, controls: 524); participants with incomplete data were excluded. To handle class imbalance, we augmented our data using the Synthetic Minority Over-sampling Technique (SMOTE) and SMOTEENN (SMOTE with Edited Nearest Neighbors), and to further investigate the association between bone features and diabetes status, we employed ANOVA. For diabetes onset prediction, we employed both conventional and deep learning (DL) models to predict risk factors associated with diabetes in Qatari adults, and we used SHAP and probabilistic methods to investigate the association of the identified risk factors with diabetes. In our experimental analysis, we found that bone mineral density (BMD) and bone mineral content (BMC) in the hip, femoral neck, trochanteric area, and lumbar spine showed an upward trend in diabetic patients with [Formula: see text]. Meanwhile, patients with abnormal glucose metabolism had increased Ward's area BMD and BMC with lower Z-scores compared to healthy participants. Consequently, the diabetic group in this cohort shows better apparent bone health than the control group, exhibiting higher BMD, muscle mass, and bone area across most body regions. Moreover, in the age-group analysis, the prediction rate was higher among healthy participants in the younger age group (20-40 years), whereas model predictions became more accurate for diabetic participants as age increased, especially in the older age group (56-69 years). Male participants also demonstrated higher susceptibility to diabetes onset than female participants. Shallow models outperformed the DL models, achieving better accuracy (91.08%), AUROC (96%), and recall (91%). This DXA-based approach shows significant potential for rapid and minimally invasive early detection of diabetes.
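The class-imbalance step can be sketched with imbalanced-learn's SMOTE and SMOTEENN resamplers; the synthetic DXA-like features and the gradient-boosting classifier are assumptions, not the study's exact pipeline.

```python
# Oversample the minority (diabetic) class on the training split only, then train.
import numpy as np
from imblearn.combine import SMOTEENN
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_control, n_case = 579, 146                              # male-cohort proportions
X = np.vstack([
    rng.normal([1.00, 55.0], [0.12, 6.0], size=(n_control, 2)),  # BMD (g/cm^2), BMC (g)
    rng.normal([1.06, 58.0], [0.12, 6.0], size=(n_case, 2)),
])
y = np.array([0] * n_control + [1] * n_case)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2, random_state=0)

# Resample only the training split to avoid leaking synthetic samples into the test set.
X_sm, y_sm = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
X_se, y_se = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)  # SMOTE + ENN cleaning

clf = GradientBoostingClassifier(random_state=0).fit(X_sm, y_sm)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"class balance after SMOTE: {np.bincount(y_sm)}, holdout AUROC: {auc:.2f}")
```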

Automatic segmentation of liver structures in multi-phase MRI using variants of nnU-Net and Swin UNETR.

Raab F, Strotzer Q, Stroszczynski C, Fellner C, Einspieler I, Haimerl M, Lang EW

PubMed paper · Jul 16, 2025
Accurate segmentation of the liver parenchyma, portal veins, hepatic veins, and lesions from MRI is important for hepatic disease monitoring and treatment. Multi-phase contrast-enhanced imaging is superior to single-phase approaches for distinguishing hepatic structures, but automated approaches for detailed segmentation of hepatic structures are lacking. This study evaluates deep learning architectures for segmenting liver structures from multi-phase Gd-EOB-DTPA-enhanced T1-weighted VIBE MRI scans. We utilized 458 T1-weighted VIBE scans of pathological livers, with 78 manually labeled for liver parenchyma, hepatic and portal veins, aorta, lesions, and ascites. An additional dataset of 47 labeled subjects was used for cross-scanner evaluation. Three models were evaluated using nested cross-validation: the conventional nnU-Net, the ResEnc nnU-Net, and the Swin UNETR. The late arterial phase was identified as the optimal fixed phase for co-registration. Both nnU-Net variants outperformed Swin UNETR across most tasks. The conventional nnU-Net achieved the highest segmentation performance for liver parenchyma (DSC: 0.97; 95% CI 0.97, 0.98), portal vein (DSC: 0.83; 95% CI 0.80, 0.87), and hepatic vein (DSC: 0.78; 95% CI 0.77, 0.80). Lesion and ascites segmentation proved challenging for all models, with the conventional nnU-Net performing best. This study demonstrates the effectiveness of deep learning, particularly the nnU-Net variants, for detailed liver structure segmentation from multi-phase MRI. The developed models and preprocessing pipeline offer potential for improved liver disease assessment and surgical planning in clinical practice.
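The Dice similarity coefficient (DSC) used to score these segmentations is computed per structure as in the short sketch below, here on placeholder binary masks.

```python
# Dice similarity coefficient for binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|P & T| / (|P| + |T|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom

rng = np.random.default_rng(0)
truth = rng.random((64, 64, 32)) > 0.7                 # placeholder ground-truth mask
pred = truth.copy()
pred[rng.random(pred.shape) > 0.95] ^= True            # perturb ~5% of voxels
print(f"DSC: {dice(pred, truth):.3f}")
```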

Comparative study of 2D vs. 3D AI-enhanced ultrasound for fetal crown-rump length evaluation in the first trimester.

Zhang Y, Huang Y, Chen C, Hu X, Pan W, Luo H, Huang Y, Wang H, Cao Y, Yi Y, Xiong Y, Ni D

PubMed paper · Jul 16, 2025
Accurate fetal growth evaluation is crucial for monitoring fetal health, with crown-rump length (CRL) being the gold standard for estimating gestational age and assessing growth during the first trimester. To enhance CRL evaluation accuracy and efficiency, we developed an artificial intelligence (AI)-based model (3DCRL-Net) using the 3D U-Net architecture for automatic landmark detection to achieve CRL plane localization and measurement in 3D ultrasound. We then compared its performance to that of experienced radiologists using both 2D and 3D ultrasound for fetal growth assessment. This prospective consecutive study collected fetal data from 1,326 ultrasound screenings conducted at 11-14 weeks of gestation (June 2021 to June 2023). Three experienced radiologists performed fetal screening using 2D video (2D-RAD) and 3D volume (3D-RAD) to obtain the CRL plane and measurement. The 3DCRL-Net model automatically outputs the landmark position, CRL plane localization and measurement. Three specialists audited the planes achieved by radiologists and 3DCRL-Net as standard or non-standard. The performance of CRL landmark detection, plane localization, measurement and time efficiency was evaluated in the internal testing dataset, comparing results with 3D-RAD. In the external dataset, CRL plane localization, measurement accuracy, and time efficiency were compared among the three groups. The internal dataset consisted of 126 cases in the testing set (training: validation: testing = 8:1:1), and the external dataset included 245 cases. On the internal testing set, 3DCRL-Net achieved a mean absolute distance error of 1.81 mm for the nine landmarks, higher accuracy in standard plane localization compared to 3D-RAD (91.27% vs. 80.16%), and strong consistency in CRL measurements (mean absolute error (MAE): 1.26 mm; mean difference: 0.37 mm, P = 0.70). The average time required per fetal case was 2.02 s for 3DCRL-Net versus 2 min for 3D-RAD (P < 0.001). On the external testing dataset, 3DCRL-Net demonstrated high performance in standard plane localization, achieving results comparable to 2D-RAD and 3D-RAD (accuracy: 91.43% vs. 93.06% vs. 86.12%), with strong consistency in CRL measurements, compared to 2D-RAD, which showed an MAE of 1.58 mm and a mean difference of 1.12 mm (P = 0.25). For 2D-RAD vs. 3DCRL-Net, the Pearson correlation and R² were 0.96 and 0.93, respectively, with an MAE of 0.11 ± 0.12 weeks. The average time required per fetal case was 5 s for 3DCRL-Net, compared to 2 min for 3D-RAD and 35 s for 2D-RAD (P < 0.001). The 3DCRL-Net model provides a rapid, accurate, and fully automated solution for CRL measurement in 3D ultrasound, achieving expert-level performance and significantly improving the efficiency and reliability of first-trimester fetal growth assessment.
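The evaluation quantities reported here (mean absolute landmark distance error, CRL MAE, mean difference, and Pearson correlation) can be computed as in the following sketch; the coordinates and measurements are synthetic placeholders, not study data.

```python
# Landmark distance error and CRL-measurement agreement statistics.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Nine predicted vs. reference landmarks (mm) per case for a batch of cases.
truth = rng.uniform(0, 60, size=(126, 9, 3))
pred = truth + rng.normal(0, 1.2, size=truth.shape)
mean_abs_distance = np.linalg.norm(pred - truth, axis=-1).mean()
print(f"mean absolute landmark distance error: {mean_abs_distance:.2f} mm")

# Agreement between automatic and reference CRL measurements (mm).
crl_ref = rng.uniform(45, 84, size=126)
crl_auto = crl_ref + rng.normal(0.4, 1.3, size=126)
mae = np.abs(crl_auto - crl_ref).mean()
mean_diff = (crl_auto - crl_ref).mean()                # bias, as in a Bland-Altman analysis
r, _ = pearsonr(crl_ref, crl_auto)
print(f"MAE: {mae:.2f} mm, mean difference: {mean_diff:.2f} mm, "
      f"Pearson r: {r:.2f}, R^2: {r * r:.2f}")
```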