
Mehri-Kakavand G, Mdletshe S, Amini M, Wang A

PubMed · Sep 18 2025
Postoperative recurrence in non-small cell lung cancer (NSCLC) affects up to 55% of patients, underscoring the limits of TNM staging. We assessed multimodal radiomics—positron emission tomography (PET), computed tomography (CT), and clinicopathological (CP) data—for personalized recurrence prediction. Data from 131 NSCLC patients with PET/CT imaging and CP variables were analyzed. Radiomics features were extracted using PyRadiomics (1,316 PET and 1,409 CT features per tumor); robustness testing and feature selection yielded 20 CT, 20 PET, and 23 CP variables. Prediction models were trained using Logistic Regression (L1, L2, Elastic Net), Random Forest, Gradient Boosting, XGBoost, and CatBoost. Nested cross-validation with SMOTE addressed class imbalance. Fusion strategies included early (feature concatenation), intermediate (stacked ensembles), and late (weighted averaging) fusion. Among single modalities, CT with Elastic Net achieved the highest cross-validated AUC (0.679, 95% CI: 0.57–0.79). Fusion improved performance: PET + CT + Clinical late fusion with Elastic Net achieved the best cross-validated AUC (0.811, 95% CI: 0.69–0.91). Out-of-fold ROC curves confirmed stronger discrimination for the fusion model (AUC = 0.836 vs. 0.741 for CT). Fusion also showed better calibration, higher net clinical benefit (decision-curve analysis), and clearer survival stratification (Kaplan–Meier). Integrating PET, CT, and CP data—particularly via late fusion with Elastic Net—enhances discrimination beyond single-modality models and supports more consistent risk stratification. These findings suggest practical potential for informing postoperative surveillance and adjuvant therapy decisions, encouraging a shift beyond TNM alone toward interpretable multimodal frameworks. External validation in larger, multicenter cohorts is warranted. The online version contains supplementary material available at 10.1007/s00432-025-06311-w.
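A minimal sketch of the late-fusion strategy described in this abstract: one elastic-net logistic regression per modality, with predicted probabilities combined by a weighted average. The feature blocks, fusion weights, and labels below are synthetic assumptions for illustration, not the authors' data or tuned settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 131  # cohort size from the abstract; the features below are synthetic
X_ct, X_pet, X_cp = rng.normal(size=(n, 20)), rng.normal(size=(n, 20)), rng.normal(size=(n, 23))
y = rng.binomial(1, 0.4, size=n)  # recurrence label (synthetic placeholder)

blocks = {"CT": X_ct, "PET": X_pet, "CP": X_cp}
weights = {"CT": 0.4, "PET": 0.35, "CP": 0.25}  # assumed fusion weights, not the paper's values

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, stratify=y, random_state=0)

fused = np.zeros(len(idx_test))
for name, X in blocks.items():
    # Elastic-net logistic regression per modality; probabilities are weight-averaged (late fusion).
    clf = LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5, C=1.0, max_iter=5000)
    clf.fit(X[idx_train], y[idx_train])
    fused += weights[name] * clf.predict_proba(X[idx_test])[:, 1]

print("late-fusion AUC (synthetic data):", roc_auc_score(y[idx_test], fused))
```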

Tomii D, Shiri I, Baj G, Nakase M, Kazaj PM, Samim D, Bartkowiak J, Praz F, Lanz J, Stortecky S, Reineke D, Windecker S, Pilgrim T, Gräni C

PubMed · Sep 18 2025
Technical failure is not uncommon in patients undergoing transcatheter aortic valve replacement (TAVR) and is associated with unfavorable outcomes. However, predicting procedural failure remains challenging due to the complex interplay of clinical, anatomical, and procedural factors. The objective of this study was to develop and validate a data-driven prediction model for TAVR technical failure using multimodal information and machine learning algorithms. In a prospective TAVR registry, 184 parameters derived from clinical examination, laboratory studies, electrocardiography, echocardiography, cardiac catheterization, computed tomography, and procedural measurements were used for machine learning modeling of TAVR technical failure prediction. Twenty-four different model combinations were developed using a standardized machine learning pipeline. All model development steps were performed solely on the training set, whereas the holdout test set was kept separate for final evaluation. Technical success and failure were defined according to the Valve Academic Research Consortium (VARC)-3 definition, which differentiates between vascular and cardiac complications. Among 2,937 consecutive patients undergoing TAVR, the rates of cardiac and vascular technical failure were 2.4% and 7.0%, respectively. For both categories of technical failure, the best-performing model demonstrated moderate-to-high discrimination (cardiac: area under the curve: 0.769; vascular: area under the curve: 0.788), with high negative predictive values (0.995 and 0.976, respectively). Interpretability analysis showed that atherosclerotic comorbidities, computed tomography-based aortic root and iliofemoral anatomy, antithrombotic management, and procedural features were consistently identified as key determinants of VARC-3 technical failure across all models. Machine learning-based models that integrate multimodal data can effectively predict VARC-3 technical failure in TAVR, refining patient selection and optimizing procedural strategies.
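An illustrative sketch of the train/holdout evaluation pattern described above: a standardized preprocessing-plus-classifier pipeline is fitted only on the training split and scored once on the holdout set, reporting AUC and the negative predictive value. The data, the classifier choice, and the 0.5 decision threshold are placeholder assumptions, not the registry model.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(1)
X = rng.normal(size=(2937, 184))      # 184 multimodal parameters (synthetic stand-in)
y = rng.binomial(1, 0.07, size=2937)  # ~7% vascular technical failure rate

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=1)

model = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("clf", GradientBoostingClassifier(random_state=1)),
])
model.fit(X_tr, y_tr)  # all fitting confined to the training split

proba = model.predict_proba(X_te)[:, 1]
pred = (proba >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("holdout AUC:", roc_auc_score(y_te, proba), "NPV:", tn / (tn + fn))
```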

Gago L, González MAF, Engelmann J, Remeseiro B, Igual L

PubMed · Sep 18 2025
Colon wall segmentation in transabdominal ultrasound is challenging due to variations in image quality, speckle noise, and ambiguous boundaries. Existing methods struggle with low-quality images because they cannot adapt to varying noise levels, poor boundary definition, and reduced contrast, resulting in inconsistent segmentation performance. We present a novel quality-aware segmentation framework that simultaneously predicts image quality and adapts the segmentation process accordingly. Our approach uses a U-Net architecture with a ConvNeXt encoder backbone, enhanced with a parallel quality-prediction branch that serves as a regularization mechanism. By explicitly modeling image quality during training, the model learns more robust features. We evaluate our method on the C-TRUS dataset and demonstrate superior performance compared to state-of-the-art approaches, particularly on challenging low-quality images. Our method achieves Dice scores of 0.7780, 0.7025, and 0.5970 for high-, medium-, and low-quality images, respectively. The proposed quality-aware segmentation framework represents a significant step toward clinically viable automated colon wall segmentation systems.
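A minimal PyTorch sketch of the quality-aware idea above: a shared encoder feeds both a segmentation decoder and a parallel image-quality head, and the quality loss acts as a regularizer on the shared features. The tiny encoder/decoder here stands in for the U-Net/ConvNeXt backbone, and the 0.3 loss weighting is an assumed hyperparameter.

```python
import torch
import torch.nn as nn

class QualityAwareSegNet(nn.Module):
    def __init__(self, n_quality_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(              # shared feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(              # segmentation head (binary colon-wall mask)
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )
        self.quality_head = nn.Sequential(         # parallel quality-prediction branch
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_quality_classes),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.decoder(feats), self.quality_head(feats)

model = QualityAwareSegNet()
images = torch.randn(4, 1, 128, 128)                   # synthetic ultrasound batch
masks = torch.randint(0, 2, (4, 1, 128, 128)).float()  # synthetic ground-truth masks
quality = torch.randint(0, 3, (4,))                    # high/medium/low labels

seg_logits, q_logits = model(images)
loss = nn.BCEWithLogitsLoss()(seg_logits, masks) + 0.3 * nn.CrossEntropyLoss()(q_logits, quality)
loss.backward()
```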

Wimmert L, Gauer T, Dickmann J, Hofmann C, Sentker T, Werner R

PubMed · Sep 18 2025
4D CT imaging is essential for radiotherapy planning in thoracic tumors. However, current protocols tend to acquire more projection data than is strictly necessary for reconstructing the 4D CT, potentially leading to unnecessary radiation exposure and misalignment with the ALARA (As Low As Reasonably Achievable) principle. We propose a deep learning (DL)-driven approach that uses the patient's breathing signal to guide data acquisition, aiming to acquire only the necessary projection data. This retrospective study analyzed 1,415 breathing signals from 294 patients, with a 75/25 training/validation split at the patient level. Based on these signals, a DL model was trained to predict optimal beam-on events for projection data acquisition. Model testing was performed on 104 independent clinical 4D CT scans. Performance was assessed by measuring the temporal alignment between predicted and optimal beam-on events. To assess the impact on the reconstructed images, each 4D dataset was reconstructed twice: (1) using all clinically acquired projections (reference) and (2) using only the model-selected projections (dose-reduced). Reference and dose-reduced images were compared using Dice coefficients for organ segmentations, deformable image registration (DIR)-based displacement fields, artifact frequency, and tumor segmentation agreement, the latter evaluated in terms of Hausdorff distance and tumor motion ranges. The proposed approach reduced beam-on time and imaging dose by a median of 29% (IQR: 24-35%), corresponding to an 11.6 mGy dose reduction for a standard 4D CT CTDIvol of 40 mGy. Temporal alignment between predicted and optimal beam-on events showed only marginal differences. Similarly, the reconstructed dose-reduced images showed only minimal differences from the reference images, demonstrated by high lung and liver segmentation Dice values, small-magnitude DIR displacement fields, and unchanged artifact frequency. Minor deviations in tumor segmentation and motion ranges compared to the reference suggest only minimal impact on treatment planning. The proposed DL-driven data acquisition approach can reduce radiation exposure during 4D CT imaging while preserving diagnostic quality, offering a clinically viable, ALARA-adhering solution for 4D CT imaging.
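A hedged sketch of the core idea above: a small 1D network maps a breathing signal to a per-time-step probability that the beam should be on, i.e., that the projection acquired at that moment is needed for reconstruction. The architecture, sampling rate, and beam-on labels are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class BeamOnPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=1),  # one beam-on logit per time step
        )

    def forward(self, signal):  # signal: (batch, 1, time)
        return self.net(signal)

model = BeamOnPredictor()
t = torch.linspace(0, 30, 750).unsqueeze(0).unsqueeze(0)  # 30 s sampled at 25 Hz (assumed)
breathing = torch.sin(2 * torch.pi * 0.25 * t)            # synthetic ~4 s breathing period
beam_on = (breathing > 0).float()                         # placeholder beam-on labels

logits = model(breathing)
loss = nn.BCEWithLogitsLoss()(logits, beam_on)
loss.backward()
```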

Moinak Bhattacharya, Angelica P. Kurtz, Fabio M. Iwamoto, Prateek Prasanna, Gagandeep Singh

arXiv preprint · Sep 18 2025
Neuro-oncology poses unique challenges for machine learning due to heterogeneous data and tumor complexity, limiting the ability of foundation models (FMs) to generalize across cohorts. Existing FMs also perform poorly in predicting uncommon molecular markers, which are essential for treatment response and risk stratification. To address these gaps, we developed a neuro-oncology specific FM with a distributionally robust loss function, enabling accurate estimation of tumor phenotypes while maintaining cross-institution generalization. We pretrained self-supervised backbones (BYOL, DINO, MAE, MoCo) on multi-institutional brain tumor MRI and applied distributionally robust optimization (DRO) to mitigate site and class imbalance. Downstream tasks included molecular classification of common markers (MGMT, IDH1, 1p/19q, EGFR), uncommon alterations (ATRX, TP53, CDKN2A/2B, TERT), continuous markers (Ki-67, TP53), and overall survival prediction in IDH1 wild-type glioblastoma at UCSF, UPenn, and CUIMC. Our method improved molecular prediction and reduced site-specific embedding differences. At CUIMC, mean balanced accuracy rose from 0.744 to 0.785 and AUC from 0.656 to 0.676, with the largest gains for underrepresented endpoints (CDKN2A/2B accuracy 0.86 to 0.92, AUC 0.73 to 0.92; ATRX AUC 0.69 to 0.82; Ki-67 accuracy 0.60 to 0.69). For survival, c-index improved at all sites: CUIMC 0.592 to 0.597, UPenn 0.647 to 0.672, UCSF 0.600 to 0.627. Grad-CAM highlighted tumor and peri-tumoral regions, confirming interpretability. Overall, coupling FMs with DRO yields more site-invariant representations, improves prediction of common and uncommon markers, and enhances survival discrimination, underscoring the need for prospective validation and integration of longitudinal and interventional signals to advance precision neuro-oncology.
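A minimal sketch of a group-style distributionally robust objective of the kind referred to above: per-site losses are reweighted with an exponentiated-gradient update so that poorly fitting sites receive more weight, encouraging site-invariant representations. The step size, grouping, and toy losses are assumptions, not the paper's configuration.

```python
import torch

def group_dro_loss(per_sample_loss, site_ids, group_weights, eta=0.01):
    """per_sample_loss: (N,) tensor, site_ids: (N,) ints, group_weights: (G,) on the simplex."""
    n_groups = group_weights.numel()
    group_losses = torch.stack([
        per_sample_loss[site_ids == g].mean() if (site_ids == g).any()
        else torch.zeros((), device=per_sample_loss.device)
        for g in range(n_groups)
    ])
    # Exponentiated-gradient ascent on the group weights (updated without gradient tracking).
    with torch.no_grad():
        group_weights *= torch.exp(eta * group_losses)
        group_weights /= group_weights.sum()
    return (group_weights * group_losses).sum()

# Toy usage: 3 sites, random per-sample losses.
weights = torch.ones(3) / 3
losses = torch.rand(32, requires_grad=True)
sites = torch.randint(0, 3, (32,))
robust_loss = group_dro_loss(losses, sites, weights)
robust_loss.backward()
```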

Neri F, Yang M, Xue Y

PubMed · Sep 18 2025
In the context of neural system structure modeling and complex visual tasks, the effective integration of multi-scale features and contextual information is critical for enhancing model performance. This paper proposes a biologically inspired hybrid neural network architecture, CompEyeNet, which combines the global modeling capacity of transformers with the efficiency of lightweight convolutional structures. The backbone network, the multi-attention transformer backbone network (MATBN), integrates multiple attention mechanisms to collaboratively model local details and long-range dependencies. The neck network, the compound eye neck network (CENN), introduces high-resolution feature layers and efficient attention fusion modules to significantly enhance multi-scale information representation and reconstruction capability. CompEyeNet is evaluated on three authoritative medical image segmentation datasets: MICCAI-CVC-ClinicDB, ISIC2018, and MICCAI-tooth-segmentation. Experimental results show that, compared to models such as Deeplab, Unet, and the YOLO series, CompEyeNet achieves better performance with fewer parameters. Specifically, compared to the baseline model YOLOv11, CompEyeNet reduces the number of parameters by an average of 38.31%. On key performance metrics, the average Dice coefficient improves by 0.87%, the Jaccard index by 1.53%, precision by 0.58%, and recall by 1.11%. These findings verify the advantages of the proposed architecture in terms of parameter efficiency and accuracy, highlighting the broad application potential of bio-inspired attention-fusion hybrid neural networks in neural system modeling and image analysis.
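A small helper sketch for the overlap metrics reported above (Dice coefficient and Jaccard index) on binary masks; the smoothing term is a common convention, not something specified by the paper.

```python
import numpy as np

def dice_and_jaccard(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    jaccard = (inter + eps) / (union + eps)
    return dice, jaccard

# Toy masks: two offset squares give Dice ~0.69 and Jaccard ~0.53.
pred = np.zeros((64, 64), dtype=np.uint8); pred[10:40, 10:40] = 1
gt = np.zeros((64, 64), dtype=np.uint8); gt[15:45, 15:45] = 1
print(dice_and_jaccard(pred, gt))
```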

Ravali P, Reddy PCS, Praveen P

PubMed · Sep 18 2025
Accurate and non-invasive grading of glioma brain tumors from MRI scans is challenging due to limited labeled data and the complexity of clinical evaluation. This study aims to develop a robust and efficient deep learning framework for improved glioma classification from MRI images. A multi-stage framework is proposed, starting with SimCLR-based self-supervised learning for representation learning without labels, followed by Deep Embedded Clustering to extract and group features effectively. EfficientNet-B7 is used for initial classification due to its parameter efficiency, and a weighted ensemble of EfficientNet-B7, ResNet-50, and DenseNet-121 is employed for the final classification. Hyperparameters are fine-tuned using a Differential Evolution-optimized Genetic Algorithm to enhance accuracy and training efficiency. EfficientNet-B7 alone achieved approximately 88-90% classification accuracy, and the weighted ensemble improved this to approximately 93%. Genetic optimization further enhanced accuracy by 3-5% and reduced training time by 15%. The framework addresses the data scarcity and limited feature extraction issues of traditional CNNs. The combination of self-supervised learning, clustering, ensemble modeling, and evolutionary optimization provides improved performance and robustness, though it requires significant computational resources and further clinical validation. The proposed framework offers an accurate and scalable solution for glioma classification from MRI images, supporting faster, more reliable clinical decision-making and holding promise for real-world diagnostic applications.
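An illustrative sketch of the weighted soft-voting step described above: class probabilities from the three backbones are combined with fixed weights and the argmax taken as the final grade. The probabilities and weights here are synthetic placeholders (the paper tunes the ensemble with an evolutionary search).

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples, n_classes = 8, 3

def fake_probs():
    # Stand-in for a backbone's softmax outputs on a batch of MRI scans.
    p = rng.random((n_samples, n_classes))
    return p / p.sum(axis=1, keepdims=True)

probs = {"efficientnet_b7": fake_probs(), "resnet50": fake_probs(), "densenet121": fake_probs()}
weights = {"efficientnet_b7": 0.5, "resnet50": 0.3, "densenet121": 0.2}  # assumed weights

ensemble = sum(w * probs[name] for name, w in weights.items())
print("ensemble predictions:", ensemble.argmax(axis=1))
```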

Liu J, Zhu M, Li L, Zang L, Luo L, Zhu F, Zhang H, Xu Q

PubMed · Sep 18 2025
This study aimed to construct and compare multiple machine learning models to predict lymph node (LN) metastasis in cervical cancer, utilizing radiomic features extracted from preoperative multi-parametric magnetic resonance imaging (MRI). The study retrospectively enrolled 407 patients with cervical cancer who were randomly divided into a training cohort (n=284) and a validation cohort (n=123). A total of 4,065 radiomic features were extracted from the tumor regions of interest on contrast-enhanced T1-weighted imaging, T2-weighted imaging, and diffusion-weighted imaging for each patient. The Mann-Whitney U test, Spearman correlation analysis, and least absolute shrinkage and selection operator (LASSO) Cox regression analysis were employed for radiomic feature selection. The relationship between MRI radiomic features and LN status was analyzed using five machine learning algorithms. Model performance was evaluated by measuring the area under the receiver-operating characteristic curve (AUC) and accuracy (ACC). Moreover, Kaplan-Meier analysis was used to validate the prognostic value of selected clinical and radiomic characteristics. LN metastasis was pathologically detected in 24.3% (99/407) of patients. Following a three-step feature selection, 18 radiomic features were employed for model construction. The XGBoost model exhibited superior performance compared to the other models, achieving an AUC, accuracy, sensitivity, specificity, and F1 score of 0.9268, 0.8969, 0.7419, 0.9891, and 0.8364, respectively, on the validation set. Additionally, Kaplan-Meier curves indicated a significant correlation between radiomic scores and progression-free survival in cervical cancer patients (p < 0.05). Among the machine learning models, XGBoost demonstrated the best predictive ability for LN metastasis and showed prognostic value through its radiomic score, highlighting its clinical potential. Machine learning-based multi-parametric MRI radiomic analysis demonstrated promising performance in the preoperative prediction of LN metastasis and clinical prognosis in cervical cancer.
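A hedged sketch of a radiomics feature-selection-plus-XGBoost workflow of the kind described above: a univariate Mann-Whitney U filter, pruning of highly inter-correlated features via Spearman correlation, then model fitting and AUC evaluation. The thresholds, data, and feature counts are assumptions; the paper additionally applies a LASSO-based regression step before modeling.

```python
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
X = rng.normal(size=(407, 200))       # synthetic stand-in for the 4,065 radiomic features
y = rng.binomial(1, 0.243, size=407)  # ~24.3% LN-metastasis prevalence

# Step 1: univariate Mann-Whitney U filter (p-value threshold is an assumption).
keep = [j for j in range(X.shape[1]) if mannwhitneyu(X[y == 1, j], X[y == 0, j]).pvalue < 0.10]

# Step 2: drop one of each pair of highly correlated features (|rho| > 0.9, assumed cutoff).
rho = np.abs(spearmanr(X[:, keep])[0])
selected = [keep[i] for i in range(len(keep)) if not np.any(rho[i, :i] > 0.9)]

X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, test_size=0.3, stratify=y, random_state=7)
model = XGBClassifier(n_estimators=200, max_depth=3)
model.fit(X_tr, y_tr)
print("validation AUC (synthetic):", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```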

Maldonado-Garcia, C., Salih, A., Neubauer, S., Petersen, S. E., Raisi-Estabragh, Z.

medRxiv preprint · Sep 18 2025
Obesity is a global public health priority and a major risk factor for cardiovascular disease (CVD). Emerging evidence indicates variation in the pathologic consequences of adipose tissue deposition across different body compartments. Biological heart age may be estimated from imaging measures of cardiac structure and function and captures risk beyond traditional measures. Using cardiac and abdominal magnetic resonance imaging (MRI) from 34,496 UK Biobank participants and linked health record data, we investigated how compartment-specific obesity phenotypes relate to cardiac ageing and incident CVD risk. Biological heart age was estimated using machine learning from 56 cardiac MRI phenotypes. K-means clustering of abdominal visceral (VAT), abdominal subcutaneous (ASAT), and pericardial (PAT) adiposity identified a high-risk cluster (characterised by greater adiposity across all three depots) associated with accelerated cardiac ageing, and a lower-risk cluster linked to decelerated ageing. These clusters provided more precise stratification of cardiovascular ageing trajectories than established body mass index categories. Mediation analysis showed that VAT and PAT explained 13.7% and 11.9% of obesity-associated CVD risk, respectively, whereas ASAT contributed minimally, with effects more pronounced in males. Thus, cardiovascular risk appears to be driven primarily by visceral and pericardial rather than subcutaneous fat. Our findings reveal distinct risk profiles of compartment-specific fat distributions and show the importance of pericardial and visceral fat as drivers of greater cardiovascular ageing. Advanced image-defined adiposity profiling may improve CVD risk prediction beyond anthropometric measures and deepen mechanistic understanding.
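A minimal sketch of the adiposity clustering step described above: standardize the three depot measures (VAT, ASAT, PAT), apply k-means, and compare a heart-age delta across the resulting clusters. All values are synthetic and the number of clusters is an assumption.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "VAT": rng.gamma(4.0, 1.2, 2000),   # synthetic depot volumes
    "ASAT": rng.gamma(6.0, 1.5, 2000),
    "PAT": rng.gamma(2.0, 0.8, 2000),
})
# Synthetic heart-age delta loosely tied to visceral fat, purely for illustration.
df["heart_age_delta"] = 0.8 * (df["VAT"] - df["VAT"].mean()) + rng.normal(0, 2, 2000)

X = StandardScaler().fit_transform(df[["VAT", "ASAT", "PAT"]])
df["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=3).fit_predict(X)
print(df.groupby("cluster")["heart_age_delta"].mean())  # higher-adiposity cluster shows accelerated ageing
```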

Giammarco La Barbera, Enzo Bonnot, Thomas Isla, Juan Pablo de la Plata, Joy-Rose Dunoyer de Segonzac, Jennifer Attali, Cécile Lozach, Alexandre Bellucci, Louis Marcellin, Laure Fournier, Sabine Sarnacki, Pietro Gori, Isabelle Bloch

arXiv preprint · Sep 18 2025
Endometriosis often leads to chronic pelvic pain and possible nerve involvement, yet imaging the peripheral nerves remains a challenge. We introduce Visionerves, a novel hybrid AI framework for peripheral nervous system recognition from multi-gradient DWI and morphological MRI data. Unlike conventional tractography, Visionerves encodes anatomical knowledge through fuzzy spatial relationships, removing the need for manual ROI selection. The pipeline comprises two phases: (A) automatic segmentation of anatomical structures using a deep learning model, and (B) tractography and nerve recognition by symbolic spatial reasoning. Applied to the lumbosacral plexus in 10 women with confirmed or suspected endometriosis, Visionerves demonstrated substantial improvements over standard tractography, with Dice score improvements of up to 25% and spatial errors reduced to less than 5 mm. This automatic and reproducible approach enables detailed nerve analysis and paves the way for non-invasive diagnosis of endometriosis-related neuropathy, as well as other conditions with nerve involvement.
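A toy sketch of an angle-based fuzzy directional relation of the kind used for symbolic spatial reasoning (here, "to the left of" a reference mask on a 2D grid): membership decreases with the angular deviation from the reference direction. This is a generic illustration, not the paper's implementation.

```python
import numpy as np

def fuzzy_direction_map(ref_mask, direction=(0.0, -1.0)):
    """Membership in [0, 1] for 'lies in the given (dy, dx) direction from ref_mask'."""
    ys, xs = np.nonzero(ref_mask)
    centroid = np.array([ys.mean(), xs.mean()])
    grid = np.indices(ref_mask.shape).reshape(2, -1).T - centroid  # offsets (dy, dx) per pixel
    norms = np.linalg.norm(grid, axis=1)
    cosang = grid @ np.array(direction) / np.maximum(norms, 1e-9)
    angle = np.arccos(np.clip(cosang, -1.0, 1.0))
    membership = np.clip(1.0 - 2.0 * angle / np.pi, 0.0, 1.0)  # 1 along the direction, 0 beyond 90 degrees
    return membership.reshape(ref_mask.shape)

ref = np.zeros((64, 64), dtype=bool); ref[28:36, 28:36] = True
left_of_ref = fuzzy_direction_map(ref, direction=(0.0, -1.0))  # decreasing columns = image left
print(left_of_ref[32, 5], left_of_ref[32, 60])                 # ~1.0 near the left edge, ~0.0 on the right
```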