Page 95 of 141 · 1,408 results

Foundation Model and Radiomics-Based Quantitative Characterization of Perirenal Fat in Renal Cell Carcinoma Surgery.

Mei H, Chen H, Zheng Q, Yang R, Wang N, Jiao P, Wang X, Chen Z, Liu X

PubMed · Jul 1 2025
This study aimed to quantitatively characterize the degree of perirenal fat adhesion in renal cell carcinoma surgery using artificial intelligence. This retrospective study analyzed a total of 596 patients from three cohorts, utilizing corticomedullary phase computed tomography urography (CTU) images. The nnUNet v2 network combined with numerical computation was employed to segment the perirenal fat region. Pyradiomics algorithms and a computed tomography foundation model were used separately to extract features from CTU images, creating single-modality predictive models for identifying perirenal fat adhesion. By concatenating the Pyradiomics and foundation model features, an early-fusion multimodal predictive signature was developed. The prognostic performance of the single-modality and multimodality models was further validated in two independent cohorts. The nnUNet v2 segmentation model accurately segmented both kidneys, and the neural network and thresholding approach effectively delineated the perirenal fat region. Single-modality models based on radiomic and computed tomography foundation features demonstrated a certain degree of accuracy in identifying perirenal fat adhesion, while the early-fusion diagnostic model outperformed the single-modality models. In addition, the perirenal fat adhesion score showed a positive correlation with surgical time and intraoperative blood loss. AI-based radiomics and foundation models can accurately identify the degree of perirenal fat adhesion and have the potential to be used for surgical risk assessment.
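The early-fusion signature described above amounts to concatenating the two per-patient feature vectors before fitting a single classifier. A minimal plain-Python sketch (the toy vectors and dimensions are illustrative only; the study's actual Pyradiomics and foundation-model features are far higher-dimensional):

```python
def early_fusion(radiomic_features, foundation_features):
    """Concatenate per-patient feature vectors from the two extractors
    into one multimodal vector (early fusion)."""
    return [r + f for r, f in zip(radiomic_features, foundation_features)]

# Toy example: 3 patients, 2 radiomic + 3 foundation-model features each
radiomic = [[0.1, 2.3], [0.4, 1.9], [0.2, 2.7]]
foundation = [[5.0, 0.3, 1.1], [4.2, 0.8, 0.9], [5.5, 0.1, 1.4]]
fused = early_fusion(radiomic, foundation)
```

In practice the fused vectors would then be standardized and fed to a downstream classifier (e.g. logistic regression) to produce the adhesion signature.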

Deep learning radiomics and mediastinal adipose tissue-based nomogram for preoperative prediction of postoperative brain metastasis risk in non-small cell lung cancer.

Niu Y, Jia HB, Li XM, Huang WJ, Liu PP, Liu L, Liu ZY, Wang QJ, Li YZ, Miao SD, Wang RT, Duan ZX

PubMed · Jul 1 2025
Brain metastasis (BM) significantly affects the prognosis of non-small cell lung cancer (NSCLC) patients. Increasing evidence suggests that adipose tissue influences cancer progression and metastasis. This study aimed to develop a predictive nomogram integrating mediastinal fat area (MFA) and deep learning (DL)-derived tumor characteristics to stratify postoperative BM risk in NSCLC patients. A retrospective cohort of 585 surgically resected NSCLC patients was analyzed. Preoperative computed tomography (CT) scans were utilized to quantify MFA using ImageJ software (radiologist-validated measurements). Concurrently, a DL algorithm extracted tumor radiomic features, generating a deep learning brain metastasis score (DLBMS). Multivariate logistic regression identified independent BM predictors, which were incorporated into a nomogram. Model performance was assessed via area under the receiver operating characteristic curve (AUC), calibration plots, integrated discrimination improvement (IDI), net reclassification improvement (NRI), and decision curve analysis (DCA). Multivariate analysis identified N stage, EGFR mutation status, MFA, and DLBMS as independent predictors of BM. The nomogram achieved superior discriminative capacity (AUC: 0.947 in the test set), significantly outperforming conventional models. MFA contributed substantially to predictive accuracy, with IDI and NRI values confirming its incremental utility (IDI: 0.123, <i>P</i> < 0.001; NRI: 0.386, <i>P</i> = 0.023). Calibration analysis demonstrated strong concordance between predicted and observed BM probabilities, while DCA confirmed clinical net benefit across risk thresholds. This DL-enhanced nomogram, incorporating MFA and tumor radiomics, represents a robust and clinically useful tool for preoperative prediction of postoperative BM risk in NSCLC. The integration of adipose tissue metrics with advanced imaging analytics advances personalized prognostic assessment in NSCLC patients.
The online version contains supplementary material available at 10.1186/s12885-025-14466-5.
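The IDI and NRI statistics quoted above have simple closed forms. A stdlib-only sketch of the standard definitions (continuous, category-free NRI; the variable names and toy risk values are illustrative, not the study's data):

```python
from statistics import mean

def idi(p_old, p_new, events):
    """Integrated discrimination improvement: the gain in mean predicted
    risk among events minus the change in mean risk among non-events."""
    e_gain = (mean(p for p, y in zip(p_new, events) if y)
              - mean(p for p, y in zip(p_old, events) if y))
    n_gain = (mean(p for p, y in zip(p_new, events) if not y)
              - mean(p for p, y in zip(p_old, events) if not y))
    return e_gain - n_gain

def nri(p_old, p_new, events):
    """Continuous (category-free) net reclassification improvement."""
    def net(pairs, sign):
        up = sum(1 for o, n in pairs if n > o)
        down = sum(1 for o, n in pairs if n < o)
        return sign * (up - down) / len(pairs)
    ev = [(o, n) for o, n, y in zip(p_old, p_new, events) if y]
    ne = [(o, n) for o, n, y in zip(p_old, p_new, events) if not y]
    return net(ev, 1) + net(ne, -1)
```

Both reward a new model that raises predicted risk for patients who develop BM and lowers it for those who do not.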

Atrophy related neuroimaging biomarkers for neurological and cognitive function in Wilson disease.

Hausmann AC, Rubbert C, Querbach SK, Ivan VL, Schnitzler A, Hartmann CJ, Caspers J

PubMed · Jul 1 2025
Although brain atrophy is a prevalent finding in Wilson disease (WD), its role as a contributing factor to clinical symptoms, especially cognitive decline, remains unclear. The objective of this study was to investigate different neuroimaging biomarkers related to grey matter atrophy and their relationship with neurological and cognitive impairment in WD. In this study, 30 WD patients and 30 age- and sex-matched healthy controls were enrolled prospectively and underwent structural magnetic resonance imaging (MRI). Regional atrophy was evaluated using established linear radiological measurements and the automated workflow volumetric estimation of gross atrophy and brain age longitudinally (veganbagel) for age- and sex-specific estimations of regional brain volume changes. Brain Age Gap Estimate (BrainAGE), defined as the discrepancy between machine learning predicted brain age from structural MRI and chronological age, was assessed using an established model. Atrophy markers and clinical scores were compared between 19 WD patients with a neurological phenotype (neuro-WD), 11 WD patients with a hepatic phenotype (hep-WD), and a healthy control group using Welch's ANOVA or Kruskal-Wallis test. Correlations between atrophy markers and neurological and neuropsychological scores were investigated using Spearman's correlation coefficients. Patients with neuro-WD demonstrated increased third ventricle width and bicaudate index, along with significant striatal-thalamic atrophy patterns that correlated with global cognitive function, mental processing speed, and verbal memory. Median BrainAGE was significantly higher in patients with neuro-WD (8.97 years, interquartile range [IQR] = 5.62-15.73) compared to those with hep-WD (4.72 years, IQR = 0.00-5.48) and healthy controls (0.46 years, IQR = −4.11 to 4.24). Striatal-thalamic atrophy and BrainAGE were significantly correlated with neurological symptom severity.
Our findings indicate advanced predicted brain age and substantial striatal-thalamic atrophy patterns in patients with neuro-WD, which serve as promising neuroimaging biomarkers for neurological and cognitive functions in treated, chronic WD.
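BrainAGE as defined above is simply predicted brain age minus chronological age, summarized per group as median and IQR. A minimal sketch (toy ages, not study data):

```python
from statistics import quantiles

def brain_age_gaps(predicted_ages, chronological_ages):
    """BrainAGE per subject: model-predicted brain age minus chronological age."""
    return [p - c for p, c in zip(predicted_ages, chronological_ages)]

def median_iqr(gaps):
    """Group summary as reported in the study: median and interquartile range."""
    q1, q2, q3 = quantiles(gaps, n=4)  # default 'exclusive' quartile method
    return q2, (q1, q3)

# Toy group: predicted vs. chronological ages for five subjects
gaps = brain_age_gaps([32, 35, 40, 44, 50], [30, 31, 34, 36, 40])
```

A positive median gap, as in the neuro-WD group, means the model sees brains as "older" than their chronological age.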

Improving YOLO-based breast mass detection with transfer learning pretraining on the OPTIMAM Mammography Image Database.

Ho PS, Tsai HY, Liu I, Lee YY, Chan SW

PubMed · Jul 1 2025
Early detection of breast cancer through mammography significantly improves survival rates. However, high false-positive and false-negative rates remain a challenge. Deep learning-based computer-aided diagnosis systems can assist in lesion detection, but their performance is often limited by the availability of labeled clinical data. This study systematically evaluated the effectiveness of transfer learning, image preprocessing techniques, and the latest You Only Look Once (YOLO) model (v9) for optimizing breast mass detection models on small proprietary datasets. We examined 133 mammography images containing masses and assessed various preprocessing strategies, including cropping and contrast enhancement. We further investigated the impact of transfer learning using the OPTIMAM Mammography Image Database (OMI-DB) compared with training on proprietary data alone. The performance of YOLOv9 was evaluated against YOLOv7 to determine improvements in detection accuracy. Pretraining on the OMI-DB dataset with cropped images significantly improved model performance, with YOLOv7 achieving a 13.9% higher mean average precision (mAP) and 13.2% higher F1-score compared to training only on proprietary data. Among the tested models and configurations, the best results were obtained using YOLOv9 pretrained on OMI-DB and fine-tuned with cropped proprietary images, yielding an mAP of 73.3% ± 16.7% and an F1-score of 76.0% ± 13.4%; under this condition, YOLOv9 outperformed YOLOv7 by 8.1% in mAP and 9.2% in F1-score. This study provides a systematic evaluation of transfer learning and preprocessing techniques for breast mass detection in small datasets. Our results demonstrate that YOLOv9 with OMI-DB pretraining significantly enhances the performance of breast mass detection models while reducing training time, providing a valuable guideline for optimizing deep learning models in data-limited clinical applications.
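The mAP and F1 figures above rest on matching predicted boxes to ground truth by intersection-over-union (commonly at an IoU threshold of 0.5). A small sketch of the two underlying computations (illustrative only; a real evaluation sweeps confidence thresholds to build the full precision-recall curve):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def f1_score(tp, fp, fn):
    """F1 as the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

A detection counts as a true positive when its IoU with an unmatched ground-truth mass exceeds the chosen threshold.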

Automated Fast Prediction of Bone Mineral Density From Low-dose Computed Tomography.

Zhou K, Xin E, Yang S, Luo X, Zhu Y, Zeng Y, Fu J, Ruan Z, Wang R, Geng D, Yang L

PubMed · Jul 1 2025
Low-dose chest CT (LDCT) is commonly employed for the early screening of lung cancer. However, it has rarely been utilized in the assessment of volumetric bone mineral density (vBMD) and the diagnosis of osteoporosis (OP). This study investigated the feasibility of using deep learning to establish a system for vBMD prediction and OP classification based on LDCT scans. This study included 551 subjects who underwent both LDCT and quantitative computed tomography (QCT) examinations. First, a U-Net was developed to automatically segment lumbar vertebrae from single 2D LDCT slices near the mid-vertebral level. Then, a prediction model was proposed to estimate vBMD, which was subsequently employed for detecting OP and osteopenia (OA). Specifically, two input modalities were constructed for the prediction model. The performance metrics of the models were calculated and evaluated. The segmentation model exhibited strong agreement with manual segmentation, achieving a mean Dice similarity coefficient (DSC) of 0.974, sensitivity of 0.964, positive predictive value (PPV) of 0.985, and Hausdorff distance of 3.261 in the test set. Linear regression and Bland-Altman analysis demonstrated strong agreement between the predicted vBMD from two-channel inputs and QCT-derived vBMD, with a root mean square error of 8.958 mg/mm<sup>3</sup> and an R<sup>2</sup> of 0.944. The areas under the curve for detecting OP and OA were 0.800 and 0.878, respectively, with an overall accuracy of 94.2%. The average processing time for this system was 1.5 s. This prediction system can automatically estimate vBMD and detect OP and OA on LDCT scans, showing great potential for osteoporosis screening.
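The Dice similarity coefficient reported for the segmentation model is computed voxel-wise between predicted and manual masks. A minimal sketch on flat binary masks (toy data, one voxel per list element):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two flat binary masks (0/1):
    twice the overlap divided by the sum of the mask sizes."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    return 2 * inter / (sum(mask_a) + sum(mask_b))
```

Values range from 0 (no overlap) to 1 (identical masks); the study's mean DSC of 0.974 therefore indicates near-perfect vertebral delineation.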

Deep learning-based segmentation of T1 and T2 cardiac MRI maps for automated disease detection

Andreea Bianca Popescu, Andreas Seitz, Heiko Mahrholdt, Jens Wetzl, Athira Jacob, Lucian Mihai Itu, Constantin Suciu, Teodora Chitiboi

arXiv preprint · Jul 1 2025
Objectives: Parametric tissue mapping enables quantitative cardiac tissue characterization but is limited by inter-observer variability during manual delineation. Traditional approaches relying on average relaxation values and single cutoffs may oversimplify myocardial complexity. This study evaluates whether deep learning (DL) can achieve segmentation accuracy comparable to inter-observer variability, explores the utility of statistical features beyond mean T1/T2 values, and assesses whether machine learning (ML) combining multiple features enhances disease detection. Materials & Methods: T1 and T2 maps were manually segmented. The test subset was independently annotated by two observers, and inter-observer variability was assessed. A DL model was trained to segment the left ventricle blood pool and myocardium. The average (A), lower quartile (LQ), median (M), and upper quartile (UQ) were computed for the myocardial pixels and employed in classification, either by applying cutoffs or in ML. The Dice similarity coefficient (DICE) and mean absolute percentage error evaluated segmentation performance. Bland-Altman plots assessed inter-user and model-observer agreement. Receiver operating characteristic analysis determined optimal cutoffs. Pearson correlation compared features from model and manual segmentations. F1-score, precision, and recall evaluated classification performance. The Wilcoxon test assessed differences between classification methods, with p < 0.05 considered statistically significant. Results: 144 subjects were split into training (100), validation (15) and evaluation (29) subsets. The segmentation model achieved a DICE of 85.4%, surpassing inter-observer agreement. A random forest applied to all features increased the F1-score (92.7%, p < 0.001). Conclusion: DL facilitates segmentation of T1/T2 maps, and combining multiple features with ML improves disease detection.
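The A/LQ/M/UQ features above are order statistics over the myocardial pixels of each map, each usable against a ROC-derived cutoff. A stdlib sketch (toy pixel values; real maps have thousands of pixels per slice, and the cutoff here is hypothetical):

```python
from statistics import mean, quantiles

def map_features(pixel_values):
    """Per-map statistics used for classification: average (A),
    lower quartile (LQ), median (M), and upper quartile (UQ)."""
    lq, m, uq = quantiles(pixel_values, n=4)
    return {"A": mean(pixel_values), "LQ": lq, "M": m, "UQ": uq}

def classify_by_cutoff(feature_value, cutoff):
    """Single-feature rule: flag as abnormal when the value exceeds
    the ROC-derived cutoff (e.g. an elevated relaxation time)."""
    return feature_value > cutoff
```

The paper's ML variant would feed all four statistics (from both T1 and T2 maps) into a random forest instead of thresholding one at a time.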

Iterative Misclassification Error Training (IMET): An Optimized Neural Network Training Technique for Image Classification

Ruhaan Singh, Sreelekha Guggilam

arXiv preprint · Jul 1 2025
Deep learning models have proven effective on medical datasets for accurate diagnostic predictions from images. However, medical datasets often contain noisy, mislabeled, or poorly generalizable images, particularly for edge cases and anomalous outcomes. Additionally, high-quality datasets are often small in sample size, which can result in overfitting, where models memorize noise rather than learn generalizable patterns. This, in particular, poses serious risks in medical diagnostics, where misclassification can impact human life. Several data-efficient training strategies have emerged to address these constraints. In particular, coreset selection identifies compact subsets of the most representative samples, enabling training that approximates full-dataset performance while reducing computational overhead. Curriculum learning, on the other hand, relies on gradually increasing training difficulty to accelerate convergence. However, developing a difficulty-ranking mechanism that generalizes across diverse domains, datasets, and models while keeping computational cost low remains challenging. In this paper, we introduce Iterative Misclassification Error Training (IMET), a novel framework inspired by curriculum learning and coreset selection. IMET identifies misclassified samples in order to streamline the training process while prioritizing the model's attention on edge-case scenarios and rare outcomes. The paper evaluates IMET on benchmark medical image classification datasets against state-of-the-art ResNet architectures, and presents results demonstrating IMET's potential for enhancing model robustness and accuracy in medical image analysis.
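IMET's core idea, as described, is to keep refitting while emphasizing currently misclassified samples. The following is a deliberately tiny stand-in, not the paper's algorithm: a 1-D decision stump replaces the neural network, and the thinning rule for easy samples is my own illustrative choice:

```python
def fit_stump(samples):
    """Fit a 1-D decision stump: threshold at the midpoint between the
    class means (assumes both classes are present)."""
    pos = [x for x, y in samples if y]
    neg = [x for x, y in samples if not y]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def misclassified(samples, threshold):
    """Samples the current stump gets wrong."""
    return [(x, y) for x, y in samples if (x > threshold) != bool(y)]

def imet_train(samples, rounds=3):
    """Each round, refit while keeping every currently misclassified
    sample and only a thinned share of the easy ones."""
    threshold = fit_stump(samples)
    for _ in range(rounds):
        hard = misclassified(samples, threshold)
        easy = [s for s in samples if s not in hard]
        subset = hard + easy[::2]  # emphasize hard cases, thin easy ones
        if not any(y for _, y in subset) or all(y for _, y in subset):
            break  # refitting needs both classes in the subset
        threshold = fit_stump(subset)
    return threshold
```

The same loop shape applies with a neural network: train, collect misclassified samples, and bias the next epoch's training set toward them.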

Deep learning image enhancement algorithms in PET/CT imaging: a phantom and sarcoma patient radiomic evaluation.

Bonney LM, Kalisvaart GM, van Velden FHP, Bradley KM, Hassan AB, Grootjans W, McGowan DR

PubMed · Jul 1 2025
PET/CT imaging data contains a wealth of quantitative information that can provide valuable contributions to characterising tumours. A growing body of work focuses on the use of deep-learning (DL) techniques for denoising PET data. These models are clinically evaluated prior to use; however, quantitative image assessment provides potential for further evaluation. This work uses radiomic features to compare two manufacturer DL image enhancement algorithms, one of which has been commercialised, against 'gold-standard' image reconstruction techniques in phantom data and a sarcoma patient data set (N=20). All studies in the retrospective sarcoma clinical [<sup>18</sup>F]FDG dataset were acquired on either a GE Discovery 690 or 710 PET/CT scanner with volumes segmented by an experienced nuclear medicine radiologist. The modular heterogeneous imaging phantom used in this work was filled with [<sup>18</sup>F]FDG, and five repeat acquisitions of the phantom were acquired on a GE Discovery 710 PET/CT scanner. The DL-enhanced images were compared to the 'gold-standard' images the algorithms were trained to emulate and to the input images. The difference between image sets was tested for significance in 93 international biomarker standardisation initiative (IBSI) standardised radiomic features. Comparing DL-enhanced images to the 'gold-standard', 4.0% and 9.7% of radiomic features measured significantly different (p<sub>critical</sub> < 0.0005) in the phantom and patient data respectively (averaged over the two DL algorithms).
Larger differences were observed comparing DL-enhanced images to algorithm input images, with 29.8% and 43.0% of radiomic features measuring significantly different in the phantom and patient data respectively (averaged over the two DL algorithms). DL-enhanced images were found to be similar to images generated using the 'gold-standard' target reconstruction method, with more than 80% of radiomic features not significantly different in all comparisons across unseen phantom and sarcoma patient data. This result offers insight into the performance of the DL algorithms and demonstrates potential applications for DL algorithms in harmonisation for radiomics, and for radiomic features in the quantitative evaluation of DL algorithms.
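The p<sub>critical</sub> < 0.0005 threshold above is consistent with a Bonferroni correction of alpha = 0.05 over the 93 IBSI features (0.05/93 ≈ 0.00054), though the abstract does not state that explicitly; assuming such a correction, the per-feature threshold and the reported percentages follow directly:

```python
def bonferroni_threshold(alpha, n_features):
    """Per-feature critical p-value when testing many features at once."""
    return alpha / n_features

def percent_significant(p_values, p_critical):
    """Share of features (in %) whose test falls below the threshold."""
    return 100 * sum(1 for p in p_values if p < p_critical) / len(p_values)
```

The family-wise correction is what keeps a 93-feature comparison from flagging spurious differences at the nominal 5% rate.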

Deep Learning Estimation of Small Airway Disease from Inspiratory Chest Computed Tomography: Clinical Validation, Repeatability, and Associations with Adverse Clinical Outcomes in Chronic Obstructive Pulmonary Disease.

Chaudhary MFA, Awan HA, Gerard SE, Bodduluri S, Comellas AP, Barjaktarevic IZ, Barr RG, Cooper CB, Galban CJ, Han MK, Curtis JL, Hansel NN, Krishnan JA, Menchaca MG, Martinez FJ, Ohar J, Vargas Buonfiglio LG, Paine R, Bhatt SP, Hoffman EA, Reinhardt JM

PubMed · Jul 1 2025
<b>Rationale:</b> Quantifying functional small airway disease (fSAD) requires additional expiratory computed tomography (CT) scans, limiting clinical applicability. Artificial intelligence (AI) could enable fSAD quantification from chest CT scans at total lung capacity (TLC) alone (fSAD<sup>TLC</sup>). <b>Objectives:</b> To evaluate an AI model for estimating fSAD<sup>TLC</sup>, compare it with dual-volume parametric response mapping fSAD (fSAD<sup>PRM</sup>), and assess its clinical associations and repeatability in chronic obstructive pulmonary disease (COPD). <b>Methods:</b> We analyzed 2,513 participants from SPIROMICS (the Subpopulations and Intermediate Outcome Measures in COPD Study). Using a randomly sampled subset (<i>n</i> = 1,055), we developed a generative model to produce virtual expiratory CT scans for estimating fSAD<sup>TLC</sup> in the remaining 1,458 SPIROMICS participants. We compared fSAD<sup>TLC</sup> with dual-volume fSAD<sup>PRM</sup>. We investigated univariate and multivariable associations of fSAD<sup>TLC</sup> with FEV<sub>1</sub>, FEV<sub>1</sub>/FVC ratio, 6-minute-walk distance, St. George's Respiratory Questionnaire score, and FEV<sub>1</sub> decline. The results were validated in a subset of patients from the COPDGene (Genetic Epidemiology of COPD) study (<i>n</i> = 458). Multivariable models were adjusted for age, race, sex, body mass index, baseline FEV<sub>1</sub>, smoking pack-years, smoking status, and percent emphysema. <b>Measurements and Main Results:</b> Inspiratory fSAD<sup>TLC</sup> showed a strong correlation with fSAD<sup>PRM</sup> in SPIROMICS (Pearson's <i>R</i> = 0.895) and COPDGene (<i>R</i> = 0.897) cohorts. Higher fSAD<sup>TLC</sup> levels were significantly associated with lower lung function, including lower postbronchodilator FEV<sub>1</sub> (in liters) and FEV<sub>1</sub>/FVC ratio, and poorer quality of life reflected by higher total St. George's Respiratory Questionnaire scores, independent of percent CT emphysema. In SPIROMICS, individuals with higher fSAD<sup>TLC</sup> experienced an additional FEV<sub>1</sub> decline of 1.156 ml/yr (relative decrease; 95% confidence interval [CI], 0.613-1.699; <i>P</i> < 0.001) for every 1% increase in fSAD<sup>TLC</sup>. The rate of decline in the COPDGene cohort was slightly lower at 0.866 ml/yr (relative decrease; 95% CI, 0.345-1.386; <i>P</i> < 0.001) per 1% increase in fSAD<sup>TLC</sup>. Inspiratory fSAD<sup>TLC</sup> demonstrated greater consistency between repeated measurements, with a higher intraclass correlation coefficient of 0.99 (95% CI, 0.98-0.99) compared with fSAD<sup>PRM</sup> (0.83; 95% CI, 0.76-0.88). <b>Conclusions:</b> Small airway disease can be reliably assessed from a single inspiratory CT scan using generative AI, eliminating the need for an additional expiratory CT scan. fSAD estimation from inspiratory CT correlates strongly with fSAD<sup>PRM</sup>, demonstrates a significant association with FEV<sub>1</sub> decline, and offers greater repeatability.
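For context, the dual-volume PRM reference standard labels each registered voxel from its inspiratory and expiratory attenuation; the AI model replaces the measured expiratory scan with a generated one, but the labeling itself follows the conventional PRM cutoffs of -950 HU (inspiration) and -856 HU (expiration). A 1-D toy sketch of that voxel classification:

```python
def prm_classify(insp_hu, exp_hu):
    """Voxel-wise parametric response mapping with the conventional
    attenuation cutoffs: -950 HU inspiratory, -856 HU expiratory."""
    if insp_hu < -950 and exp_hu < -856:
        return "emphysema"
    if insp_hu >= -950 and exp_hu < -856:
        return "fSAD"  # gas trapping without emphysematous destruction
    return "normal"

def fsad_percent(insp_scan, exp_scan):
    """fSAD burden as the percentage of voxels labeled fSAD."""
    labels = [prm_classify(i, e) for i, e in zip(insp_scan, exp_scan)]
    return 100 * labels.count("fSAD") / len(labels)
```

Real PRM additionally requires deformable registration between the two breath-hold scans, which is exactly the step the generative approach sidesteps.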

Enhanced diagnostic and prognostic assessment of cardiac amyloidosis using combined <sup>11</sup>C-PiB PET/CT and <sup>99m</sup>Tc-DPD scintigraphy.

Hong Z, Spielvogel CP, Xue S, Calabretta R, Jiang Z, Yu J, Kluge K, Haberl D, Nitsche C, Grünert S, Hacker M, Li X

PubMed · Jul 1 2025
Cardiac amyloidosis (CA) is a severe condition characterized by amyloid fibril deposition in the myocardium, leading to restrictive cardiomyopathy and heart failure. Differentiating between amyloidosis subtypes is crucial due to distinct treatment strategies. The individual conventional diagnostic methods lack the accuracy needed for effective subtype identification. This study aimed to evaluate the efficacy of combining <sup>11</sup>C-PiB PET/CT and <sup>99m</sup>Tc-DPD scintigraphy in detecting CA and distinguishing between its main subtypes, light chain (AL) and transthyretin (ATTR) amyloidosis, while assessing the association of imaging findings with patient prognosis. We retrospectively evaluated the diagnostic efficacy of combining <sup>11</sup>C-PiB PET/CT and <sup>99m</sup>Tc-DPD scintigraphy in a cohort of 50 patients with clinical suspicion of CA. Semi-quantitative imaging markers were extracted from the images. Diagnostic performance was calculated against biopsy results or genetic testing. Both machine learning models and a rationale-based model were developed to detect CA and classify subtypes. Survival prediction over five years was assessed using a random survival forest model. Prognostic value was assessed using Kaplan-Meier estimators and Cox proportional hazards models. The combined imaging approach significantly improved diagnostic accuracy, with <sup>11</sup>C-PiB PET and <sup>99m</sup>Tc-DPD scintigraphy showing complementary strengths in detecting AL and ATTR, respectively. The machine learning model achieved an AUC of 0.94 (95% CI 0.93-0.95) for CA subtype differentiation, while the rationale-based model demonstrated strong diagnostic ability with AUCs of 0.95 (95% CI 0.88-1.00) for ATTR and 0.88 (95% CI 0.77-0.96) for AL. Survival prediction models identified key prognostic markers, with significant stratification of overall mortality based on predicted survival (p = 0.006; adjusted HR 2.43 [95% CI 1.03-5.71]).
The integration of <sup>11</sup>C-PiB PET/CT and <sup>99m</sup>Tc-DPD scintigraphy, supported by both machine learning and rationale-based models, enhances the diagnostic accuracy and prognostic assessment of cardiac amyloidosis, with significant implications for clinical practice.