
Deep learning-based contour propagation in magnetic resonance imaging-guided radiotherapy of lung cancer patients.

Wei C, Eze C, Klaar R, Thorwarth D, Warda C, Taugner J, Hörner-Rieber J, Regnery S, Jaekel O, Weykamp F, Palacios MA, Marschner S, Corradini S, Belka C, Kurz C, Landry G, Rabe M

pubmed · Jun 26 2025
Fast and accurate organ-at-risk (OAR) and gross tumor volume (GTV) contour propagation methods are needed to improve the efficiency of magnetic resonance (MR) imaging-guided radiotherapy. We trained deformable image registration networks to accurately propagate contours from planning to fraction MR images.
Approach: Data from 140 stage 1-2 lung cancer patients treated at a 0.35T MR-Linac were split into 102/17/21 for training/validation/testing. Additionally, 18 central lung tumor patients, treated at a 0.35T MR-Linac externally, and 14 stage 3 lung cancer patients from a phase 1 clinical trial, treated at 0.35T or 1.5T MR-Linacs at three institutions, were used for external testing. Planning and fraction images were paired (490 pairs) for training. Two hybrid transformer-convolutional neural network (TransMorph) models were trained to deformably register planning to fraction images: one with mean squared error (MSE), Dice similarity coefficient (DSC), and regularization losses (TM_{MSE+Dice}), and one with MSE and regularization losses only (TM_{MSE}). The TransMorph models predicted diffeomorphic dense displacement fields. Multi-label images including seven thoracic OARs and the GTV were propagated to generate fraction segmentations. Model predictions were compared with contours obtained through B-spline registration, vendor registration, and the auto-segmentation method nnUNet. Evaluation metrics included the DSC and Hausdorff distance percentiles (50th and 95th) against clinical contours.
Main results: TM_{MSE+Dice} and TM_{MSE} achieved mean OARs/GTV DSCs of 0.90/0.82 and 0.90/0.79 for the internal and 0.84/0.77 and 0.85/0.76 for the central lung tumor external test data. On stage 3 data, TM_{MSE+Dice} achieved mean OARs/GTV DSCs of 0.87/0.79 and 0.83/0.78 for the 0.35T MR-Linac datasets, and 0.87/0.75 for the 1.5T MR-Linac dataset. TM_{MSE+Dice} and TM_{MSE} had significantly higher geometric accuracy than other methods on external data. No significant difference between TM_{MSE+Dice} and TM_{MSE} was found.
Significance: TransMorph models achieved time-efficient segmentation of fraction MRIs with high geometric accuracy and accurately segmented images obtained at different field strengths.
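The loss design separating the two models is simple to state in code. Below is a minimal PyTorch sketch of a TM_{MSE+Dice}-style training objective (MSE image similarity, soft Dice on propagated labels, and a displacement-field smoothness penalty); the function names and loss weights are illustrative assumptions, not the authors' implementation, and TM_{MSE} corresponds to setting the Dice weight to zero.

```python
# A minimal sketch of the combined registration loss described above.
# Weights and function names are illustrative, not the authors' code.
import torch
import torch.nn.functional as F

def gradient_regularizer(ddf):
    """Smoothness penalty on a dense displacement field of shape (B, 3, D, H, W)."""
    dz = ddf[:, :, 1:, :, :] - ddf[:, :, :-1, :, :]
    dy = ddf[:, :, :, 1:, :] - ddf[:, :, :, :-1, :]
    dx = ddf[:, :, :, :, 1:] - ddf[:, :, :, :, :-1]
    return dz.pow(2).mean() + dy.pow(2).mean() + dx.pow(2).mean()

def soft_dice(pred, target, eps=1e-6):
    """Soft Dice loss over one-hot label maps of shape (B, C, D, H, W)."""
    inter = (pred * target).sum(dim=(2, 3, 4))
    union = pred.sum(dim=(2, 3, 4)) + target.sum(dim=(2, 3, 4))
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

def tm_mse_dice_loss(warped_img, fraction_img, warped_labels, fraction_labels,
                     ddf, w_mse=1.0, w_dice=1.0, w_reg=0.1):
    # TM_MSE corresponds to dropping the Dice term (w_dice=0).
    return (w_mse * F.mse_loss(warped_img, fraction_img)
            + w_dice * soft_dice(warped_labels, fraction_labels)
            + w_reg * gradient_regularizer(ddf))
```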

Development, deployment, and feature interpretability of a three-class prediction model for pulmonary diseases.

Cao Z, Xu G, Gao Y, Xu J, Tian F, Shi H, Yang D, Xie Z, Wang J

pubmed · Jun 26 2025
To develop a high-performance machine learning model for predicting and interpreting features of pulmonary diseases. This retrospective study analyzed clinical and imaging data from patients with non-small cell lung cancer (NSCLC), granulomatous inflammation, and benign tumors, collected across multiple centers from January 2015 to October 2023. Data from two hospitals in Anhui Province were split into a development set (n = 1696) and a test set (n = 424) in an 8:2 ratio, with an external validation set (n = 909) from Zhejiang Province. Features with p < 0.05 in univariate analyses were selected using the Boruta algorithm for input into Random Forest (RF) and XGBoost models. Model efficacy was assessed using receiver operating characteristic (ROC) analysis. A total of 3030 patients were included: 2269 with NSCLC, 529 with granulomatous inflammation, and 232 with benign tumors. The Obuchowski indices for RF and XGBoost in the test set were 0.7193 (95% CI: 0.6567-0.7812) and 0.8282 (95% CI: 0.7883-0.8650), respectively. In the external validation set, the indices were 0.7932 (95% CI: 0.7572-0.8250) for RF and 0.8074 (95% CI: 0.7740-0.8387) for XGBoost. XGBoost achieved better accuracy in both the test (0.81) and external validation (0.79) sets. Calibration curves and decision curve analysis (DCA) showed that XGBoost offered higher net clinical benefit. The XGBoost model outperforms RF in the three-class classification of lung diseases, accurately distinguishing NSCLC, granulomatous inflammation, and benign tumors on multicenter CT data, and offers superior clinical utility. The classification model has broad clinical applicability, and the XGBoost model can be deployed as a web application for clinicians.
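The selection-then-classification pipeline described here follows a common pattern. The sketch below, assuming the `boruta` and `xgboost` Python packages and hypothetical pre-extracted feature files, shows how Boruta-confirmed features could feed a three-class XGBoost classifier.

```python
# A minimal sketch of the Boruta-plus-XGBoost pipeline described above.
# File names and data layout are hypothetical placeholders.
import numpy as np
from boruta import BorutaPy
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

X_train = np.load("features_train.npy")   # hypothetical pre-extracted features
y_train = np.load("labels_train.npy")     # 0=NSCLC, 1=granuloma, 2=benign

# Boruta wraps a random forest to confirm or reject each candidate feature.
rf = RandomForestClassifier(n_jobs=-1, max_depth=5)
boruta = BorutaPy(rf, n_estimators="auto", random_state=42)
boruta.fit(X_train, y_train)
X_sel = X_train[:, boruta.support_]

# Three-class XGBoost on the confirmed features.
clf = XGBClassifier(objective="multi:softprob", eval_metric="mlogloss")
clf.fit(X_sel, y_train)
proba = clf.predict_proba(X_sel)           # per-class probabilities for ROC analysis
```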

Harnessing Generative AI for Lung Nodule Spiculation Characterization.

Wang Y, Patel C, Tchoua R, Furst J, Raicu D

pubmed · Jun 26 2025
Spiculation, characterized by irregular, spike-like projections from nodule margins, serves as a crucial radiological biomarker for malignancy assessment and early cancer detection. These distinctive stellate patterns strongly correlate with tumor invasiveness and are vital for accurate diagnosis and treatment planning. Traditional computer-aided diagnosis (CAD) systems are limited in their capability to capture and use these patterns given their subtlety, the difficulty of quantifying them, and the small datasets available for learning them. To address these challenges, we propose a novel framework leveraging variational autoencoders (VAE) to discover, extract, and vary disentangled latent representations of lung nodule images. By gradually varying the latent representations of non-spiculated nodule images, we generate augmented datasets containing spiculated nodule variations that, we hypothesize, can improve the diagnostic classification of lung nodules. Using the National Institutes of Health/National Cancer Institute Lung Image Database Consortium (LIDC) dataset, our results show that incorporating these spiculated image variations into the classification pipeline significantly improves spiculation detection performance by up to 7.53%. Notably, this enhancement in spiculation detection is achieved while preserving the classification performance of non-spiculated cases. This approach effectively addresses class imbalance and enhances overall classification outcomes. The gradual attenuation of spiculation characteristics demonstrates our model's ability to both capture and generate clinically relevant semantic features in an algorithmic manner. These findings suggest that integrating semantic-based latent representations into CAD models not only enhances diagnostic accuracy but also provides insights into the underlying morphological progression of spiculated nodules, enabling more informed and clinically meaningful AI-driven support systems.
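The augmentation mechanism, varying a disentangled latent code and decoding each step, can be sketched compactly. Below is a hedged PyTorch illustration assuming a trained VAE exposing `encode`/`decode` methods; the `spiculation_dim` index and step sizes are hypothetical placeholders for whichever latent axis the framework identifies as controlling spiculation.

```python
# A hedged sketch of latent-traversal augmentation: encode a non-spiculated
# nodule, step along a latent axis associated with spiculation, decode each
# step. The VAE interface and axis index are illustrative assumptions.
import torch

@torch.no_grad()
def traverse_spiculation(vae, image, spiculation_dim=7, steps=(0.5, 1.0, 1.5, 2.0)):
    """Generate augmented variants of one nodule image of shape (1, 1, H, W)."""
    mu, _ = vae.encode(image)               # use the posterior mean as the code
    variants = []
    for alpha in steps:
        z = mu.clone()
        z[:, spiculation_dim] += alpha       # push along the disentangled axis
        variants.append(vae.decode(z))
    return torch.cat(variants, dim=0)        # (len(steps), 1, H, W)
```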

MedPrompt: LLM-CNN Fusion with Weight Routing for Medical Image Segmentation and Classification

Shadman Sobhan, Kazi Abrar Mahmud, Abduz Zami

arxiv preprint · Jun 26 2025
Current medical image analysis systems are typically task-specific, requiring separate models for classification and segmentation, and lack the flexibility to support user-defined workflows. To address these challenges, we introduce MedPrompt, a unified framework that combines a few-shot prompted Large Language Model (Llama-4-17B) for high-level task planning with a modular Convolutional Neural Network (DeepFusionLab) for low-level image processing. The LLM interprets user instructions and generates structured output to dynamically route task-specific pretrained weights. This weight-routing approach avoids retraining the entire framework when adding new tasks; only task-specific weights are required, enhancing scalability and deployment. We evaluated MedPrompt across 19 public datasets, covering 12 tasks spanning 5 imaging modalities. The system achieves 97% end-to-end correctness in interpreting and executing prompt-driven instructions, with an average inference latency of 2.5 seconds, making it suitable for near-real-time applications. DeepFusionLab achieves competitive segmentation accuracy (e.g., Dice 0.9856 on lungs) and strong classification performance (F1 0.9744 on tuberculosis). Overall, MedPrompt enables scalable, prompt-driven medical imaging by combining the interpretability of LLMs with the efficiency of modular CNNs.
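The weight-routing step reduces to parsing the LLM's structured plan and loading the matching checkpoint into a shared backbone. The sketch below shows one way this could look; the registry paths and plan schema are hypothetical, not MedPrompt's actual interface.

```python
# A minimal sketch of the weight-routing idea: the LLM returns a structured
# plan, and only the matching task-specific checkpoint is loaded into the
# shared CNN. Registry contents and plan schema are hypothetical.
import json
import torch

WEIGHT_REGISTRY = {                          # hypothetical checkpoint registry
    ("segmentation", "lung"): "weights/seg_lung.pt",
    ("classification", "tuberculosis"): "weights/cls_tb.pt",
}

def route_and_run(llm_output: str, model: torch.nn.Module, image: torch.Tensor):
    plan = json.loads(llm_output)            # e.g. {"task": "segmentation", "target": "lung"}
    ckpt_path = WEIGHT_REGISTRY[(plan["task"], plan["target"])]
    model.load_state_dict(torch.load(ckpt_path, map_location="cpu"))
    model.eval()
    with torch.no_grad():
        return model(image)                  # no retraining: weights are swapped in
```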

Morphology-based radiological-histological correlation on ultra-high-resolution energy-integrating detector CT using cadaveric human lungs: nodule and airway analysis.

Hata A, Yanagawa M, Ninomiya K, Kikuchi N, Kurashige M, Nishigaki D, Doi S, Yamagata K, Yoshida Y, Ogawa R, Tokuda Y, Morii E, Tomiyama N

pubmed · Jun 26 2025
To evaluate the depiction capability of fine lung nodules and airways using high-resolution settings on ultra-high-resolution energy-integrating detector CT (UHR-CT), incorporating large matrix sizes, thin slice thickness, and iterative reconstruction (IR)/deep-learning reconstruction (DLR), using cadaveric human lungs and corresponding histological images. Images of 20 lungs were acquired using conventional CT (CCT), UHR-CT, and photon-counting detector CT (PCD-CT). CCT images were reconstructed with a 512 matrix and IR (CCT-512-IR). UHR-CT images were reconstructed with four settings by varying the matrix size and the reconstruction method: UHR-512-IR, UHR-1024-IR, UHR-2048-IR, and UHR-1024-DLR. Two imaging settings of PCD-CT were used: PCD-512-IR and PCD-1024-IR. CT images were visually evaluated and compared with histology. Overall, 6769 nodules (median: 1321 µm) and 92 airways (median: 851 µm) were evaluated. For nodules, UHR-2048-IR outperformed CCT-512-IR, UHR-512-IR, and UHR-1024-IR (p < 0.001). UHR-1024-DLR showed no significant difference from UHR-2048-IR in the overall nodule score after Bonferroni correction (uncorrected p = 0.043); however, for nodules > 1000 µm, UHR-2048-IR demonstrated significantly better scores than UHR-1024-DLR (p = 0.003). For airways, UHR-1024-IR and UHR-512-IR showed significant differences (p < 0.001), with no notable differences among UHR-1024-IR, UHR-2048-IR, and UHR-1024-DLR. UHR-2048-IR detected nodules and airways with median diameters of 604 µm and 699 µm, respectively. No significant difference was observed between UHR-512-IR and PCD-512-IR (p > 0.1). PCD-1024-IR outperformed UHR-CT for nodules > 1000 µm (p ≤ 0.001), while UHR-1024-DLR outperformed PCD-1024-IR for airways > 1000 µm (p = 0.005). UHR-2048-IR demonstrated the highest scores among the evaluated EID-CT images. UHR-CT showed potential for detecting submillimeter nodules and airways, and with the 512 matrix it demonstrated performance comparable to PCD-CT. Question: Data evaluating the depiction capabilities of ultra-high-resolution energy-integrating detector CT (UHR-CT) for fine structures are scarce, as are comparisons with photon-counting detector CT (PCD-CT). Findings: UHR-CT depicted nodules and airways with median diameters of 604 µm and 699 µm, showing no significant difference from PCD-CT with the 512 matrix. Clinical relevance: High-resolution imaging is crucial for lung diagnosis. UHR-CT has the potential to contribute to pulmonary nodule diagnosis and airway disease evaluation by detecting fine opacities and airways.
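The pairwise comparisons with Bonferroni correction reported here follow a standard pattern for ordinal visual scores. Below is a hedged Python sketch using Wilcoxon signed-rank tests on paired per-lesion scores; the score files and the pairs tested are placeholders, not the study's data or statistical code.

```python
# A hedged sketch of paired comparison of visual scores between
# reconstruction settings with Bonferroni correction. Inputs are placeholders.
import numpy as np
from scipy.stats import wilcoxon

scores = {                                   # per-nodule visual scores (hypothetical files)
    "UHR-2048-IR": np.load("scores_uhr2048ir.npy"),
    "UHR-1024-DLR": np.load("scores_uhr1024dlr.npy"),
    "CCT-512-IR": np.load("scores_cct512ir.npy"),
}
pairs = [("UHR-2048-IR", "UHR-1024-DLR"), ("UHR-2048-IR", "CCT-512-IR")]
alpha = 0.05 / len(pairs)                    # Bonferroni-corrected threshold

for a, b in pairs:
    stat, p = wilcoxon(scores[a], scores[b])
    print(f"{a} vs {b}: p={p:.4g} ({'significant' if p < alpha else 'n.s.'})")
```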

Towards automated multi-regional lung parcellation for 0.55-3T 3D T2w fetal MRI

Uus, A., Avena Zampieri, C., Downes, F., Egloff Collado, A., Hall, M., Davidson, J., Payette, K., Aviles Verdera, J., Grigorescu, I., Hajnal, J. V., Deprez, M., Aertsen, M., Hutter, J., Rutherford, M., Deprest, J., Story, L.

medrxiv preprint · Jun 26 2025
Fetal MRI is increasingly employed in the diagnosis of fetal lung anomalies, and segmentation-derived total fetal lung volumes are used as one of the parameters for predicting neonatal outcomes. However, in clinical practice, segmentation is performed manually in 2D motion-corrupted stacks with thick slices, which is time-consuming and can lead to variations in estimated volumes. Furthermore, there is a known lack of consensus regarding a universal lung parcellation protocol and expected normal total lung volume formulas, and the lungs are typically segmented as one label without parcellation into lobes. In terms of automation, to the best of our knowledge, there have been no reported works on multi-lobe segmentation for fetal lung MRI. This work introduces the first automated deep learning segmentation pipeline for multi-regional lung segmentation of 3D motion-corrected T2w fetal body images, covering both normal anatomy and congenital diaphragmatic hernia cases. The protocol for parcellation into 5 standard lobes was defined in a population-averaged 3D atlas. It was then used to generate a multi-label training dataset including 104 normal anatomy controls and 45 congenital diaphragmatic hernia cases from 0.55T, 1.5T and 3T acquisition protocols. The performance of a 3D Attention UNet was evaluated on 18 cases and showed good results for normal lung anatomy, with expectedly lower Dice values for the ipsilateral lung. In addition, we produced normal lung volumetry growth charts from 290 controls acquired at 0.55T and 3T. This is the first step towards automated multi-regional fetal lung analysis for 3D fetal MRI.
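Evaluation of such a 5-lobe parcellation typically comes down to per-label overlap against manual reference segmentations. Below is a minimal NumPy sketch of per-lobe Dice computation; the label indexing is a hypothetical convention, not the atlas's actual one.

```python
# A minimal sketch of per-lobe Dice evaluation for a 5-lobe parcellation.
# Label indices are an illustrative assumption.
import numpy as np

LOBE_LABELS = {1: "left upper", 2: "left lower", 3: "right upper",
               4: "right middle", 5: "right lower"}   # hypothetical indexing

def per_lobe_dice(pred: np.ndarray, ref: np.ndarray) -> dict:
    """Dice per label for integer 3D label maps of equal shape."""
    out = {}
    for label, name in LOBE_LABELS.items():
        p, r = pred == label, ref == label
        denom = p.sum() + r.sum()
        out[name] = 2.0 * np.logical_and(p, r).sum() / denom if denom else np.nan
    return out
```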

Deep transfer learning radiomics combined with explainable machine learning for preoperative thymoma risk prediction based on CT.

Wu S, Fan L, Wu Y, Xu J, Guo Y, Zhang H, Xu Z

pubmed · Jun 26 2025
To develop and validate a computed tomography (CT)-based deep transfer learning radiomics model combined with explainable machine learning for preoperative risk prediction of thymoma. This retrospective study included 173 pathologically confirmed thymoma patients from our institution in the training group and 93 patients from two external centers in the external validation group. Tumors were classified according to the World Health Organization simplified criteria as low-risk (types A, AB, and B1) or high-risk (types B2 and B3). Radiomics features and deep transfer learning features were extracted from venous-phase contrast-enhanced CT images using a modified Inception V3 network. Principal component analysis and least absolute shrinkage and selection operator (LASSO) regression identified 20 key predictors. Six classifiers (decision tree, gradient boosting machine, k-nearest neighbors, naïve Bayes, random forest (RF), and support vector machine) were trained on five feature sets: the CT imaging model, radiomics feature model, deep transfer learning feature model, combined feature model, and combined model. Interpretability was assessed with SHapley Additive exPlanations (SHAP), and an interactive web application was developed for real-time individualized risk prediction and visualization. In the external validation group, the RF classifier achieved the highest area under the receiver operating characteristic curve (AUC), 0.956. In the training group, the AUC values for the CT imaging model, radiomics feature model, deep transfer learning feature model, combined feature model, and combined model were 0.684, 0.831, 0.815, 0.893, and 0.910, respectively; the corresponding values in the external validation group were 0.604, 0.865, 0.880, 0.934, and 0.956. SHAP visualizations revealed the relative contribution of each feature, while the web application provided real-time individual prediction probabilities with interpretative outputs. We developed a CT-based deep transfer learning radiomics model combined with explainable machine learning and an interactive web application; the model achieved high accuracy and transparency for preoperative thymoma risk stratification, facilitating personalized clinical decision-making.
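The feature pipeline, deep features from a pretrained Inception V3 plus sparsity-based selection, can be illustrated briefly. The sketch below assumes torchvision's ImageNet-pretrained Inception V3 with the classification head removed and a LassoCV selector; the authors' specific network modifications are not reproduced here.

```python
# A hedged sketch of the deep-feature-plus-LASSO pipeline described above.
# Preprocessing and network modifications are illustrative stand-ins.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LassoCV

inception = models.inception_v3(weights="DEFAULT")
inception.fc = nn.Identity()                 # expose the 2048-d pooled features
inception.eval()

@torch.no_grad()
def deep_features(batch):                    # batch: (N, 3, 299, 299) CT ROI crops
    return inception(batch).numpy()

# LASSO shrinks uninformative feature weights to zero; nonzero coefficients
# define the selected predictors (PCA could precede this step, as above).
# X: (N, D) combined radiomics + deep features; y: 0=low-risk, 1=high-risk.
def select_features(X, y):
    lasso = LassoCV(cv=5).fit(X, y)
    return lasso.coef_ != 0                  # boolean mask of kept features
```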

Machine Learning Models for Predicting Mortality in Pneumonia Patients.

Pavlovic V, Haque MS, Grubor N, Pavlovic A, Stanisavljevic D, Milic N

pubmed · Jun 26 2025
Pneumonia remains a significant cause of hospital mortality, prompting the need for precise mortality prediction methods. This study conducted a systematic review identifying predictors of mortality using machine learning (ML) and applied these methods to hospitalized pneumonia patients at the University Clinical Centre Zvezdara. The systematic review identified 16 studies (313,572 patients), revealing common mortality predictors including age, oxygen levels, and albumin. A Random Forest (RF) model was developed using local data (n = 343), achieving an accuracy of 99% and an AUC of 0.99. Key predictors identified were chest X-ray worsening, ventilator use, age, and oxygen support. ML demonstrated high potential for accurately predicting pneumonia mortality, surpassing traditional severity scores and highlighting its practical clinical utility.
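A model of this shape is straightforward to reproduce in scikit-learn. The sketch below uses the predictors named above with a hypothetical local dataset file; it is an illustrative stand-in, not the study's code.

```python
# A minimal sketch of a Random Forest mortality model on the predictors
# reported above. The CSV file and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("pneumonia_cohort.csv")     # hypothetical local dataset
X = df[["age", "oxygen_support", "ventilator_use", "cxr_worsening", "albumin"]]
y = df["in_hospital_death"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)
rf = RandomForestClassifier(n_estimators=500, random_state=42).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
```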

Machine Learning-Based Risk Assessment of Myasthenia Gravis Onset in Thymoma Patients and Analysis of Their Correlations and Causal Relationships.

Liu W, Wang W, Zhang H, Guo M

pubmed · Jun 25 2025
The study aims to utilize interpretable machine learning models to predict the risk of myasthenia gravis onset in thymoma patients and to investigate the intrinsic correlations and causal relationships between them. A comprehensive retrospective analysis was conducted on 172 thymoma patients diagnosed at two medical centers between 2018 and 2024. The cohort was bifurcated into a training set (n = 134) and a test set (n = 38) to develop and validate risk predictive models. Radiomic and deep features were extracted from tumor regions across three CT phases: non-enhanced, arterial, and venous. Through rigorous feature selection employing Spearman's rank correlation coefficient and LASSO (Least Absolute Shrinkage and Selection Operator) regularization, 12 optimal imaging features were identified. These were integrated with 11 clinical parameters and one pathological subtype variable to form a multi-dimensional feature matrix. Six machine learning algorithms were subsequently implemented for model construction and comparative analysis. We utilized SHAP (SHapley Additive exPlanations) to interpret the models and employed a doubly robust learner to perform a potential causal analysis between thymoma and myasthenia gravis (MG). All six models demonstrated satisfactory predictive capabilities, with the support vector machine (SVM) model exhibiting superior performance on the test cohort. It achieved an area under the curve (AUC) of 0.904 (95% confidence interval [CI] 0.798-1.000), outperforming other models such as logistic regression and the multilayer perceptron (MLP). The model's predictive results substantiate the strong correlation between thymoma and MG. Additionally, our analysis revealed a significant causal relationship between them: high-risk tumors elevated the risk of MG by an average treatment effect (ATE) of 9.2%. This implies that thymoma patients with types B2 and B3 face a considerably higher risk of developing MG compared to those with types A, AB, and B1. The model provides a novel and effective tool for evaluating the risk of MG development in patients with thymoma. Furthermore, the correlation and causal analyses have unveiled pathways connecting the tumor to MG risk, with a notably higher incidence of MG observed in high-risk pathological subtypes. These insights contribute to a deeper understanding of MG and support a shift in medical practice from passive treatment to proactive intervention.
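Doubly robust estimation of the ATE can be written in a few lines as an augmented inverse-propensity-weighted (AIPW) estimator. The sketch below is a generic illustration with scikit-learn logistic models; the covariates, treatment coding, and model choices are assumptions, not the study's pipeline.

```python
# A hedged sketch of a doubly robust (AIPW) ATE estimate of high-risk
# thymoma subtype on MG onset. Inputs and model choices are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def aipw_ate(X, t, y):
    """X: covariates, t: binary 'high-risk subtype' flag, y: binary MG onset."""
    # Propensity model: P(high-risk subtype | covariates).
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    # Outcome models fit separately in each treatment arm.
    m1 = LogisticRegression(max_iter=1000).fit(X[t == 1], y[t == 1])
    m0 = LogisticRegression(max_iter=1000).fit(X[t == 0], y[t == 0])
    mu1, mu0 = m1.predict_proba(X)[:, 1], m0.predict_proba(X)[:, 1]
    # AIPW combination: consistent if either the propensity model or the
    # outcome models are correctly specified (the "doubly robust" property).
    return np.mean(mu1 - mu0
                   + t * (y - mu1) / ps
                   - (1 - t) * (y - mu0) / (1 - ps))
```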

BronchoGAN: anatomically consistent and domain-agnostic image-to-image translation for video bronchoscopy.

Soliman A, Keuth R, Himstedt M

pubmed · Jun 25 2025
Purpose: The limited availability of bronchoscopy images makes image synthesis particularly interesting for training deep learning models. Robust image translation across different domains (virtual bronchoscopy, phantom, in vivo, and ex vivo image data) is pivotal for clinical applications. Methods: This paper proposes BronchoGAN, which introduces anatomical constraints for image-to-image translation integrated into a conditional GAN. In particular, we force bronchial orifices to match across input and output images. We further propose to use foundation-model-generated depth images as an intermediate representation, ensuring robustness across a variety of input domains and establishing models with substantially less reliance on individual training datasets. Moreover, this intermediate depth image representation makes it easy to construct paired image data for training. Results: Our experiments showed that input images from different domains (e.g., virtual bronchoscopy, phantoms) can be successfully translated to images mimicking realistic human airway appearance. We demonstrated that anatomical structures (i.e., bronchial orifices) can be robustly preserved with our approach, shown qualitatively and quantitatively by means of improved FID, SSIM, and Dice coefficient scores. The anatomical constraints enabled an improvement in the Dice coefficient of up to 0.43 for synthetic images. Conclusion: Through foundation models for intermediate depth representations and bronchial orifice segmentation integrated as anatomical constraints into conditional GANs, we are able to robustly translate images from different bronchoscopy input domains. BronchoGAN can incorporate public CT scan data (virtual bronchoscopy) to generate large-scale bronchoscopy image datasets with realistic appearance, helping to bridge the gap created by the scarcity of public bronchoscopy images.
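The anatomical constraint can be expressed as an extra term in the generator objective: a Dice penalty between orifice masks segmented from the condition image and from the translated output. Below is a hedged PyTorch sketch; the module names, mask source, and loss weight are illustrative assumptions rather than BronchoGAN's actual implementation.

```python
# A hedged sketch of a conditional-GAN generator loss with an anatomical
# constraint forcing bronchial-orifice masks to match between the depth
# condition and the synthesized frame. All names and weights are illustrative.
import torch
import torch.nn.functional as F

def orifice_dice_loss(mask_in, mask_out, eps=1e-6):
    inter = (mask_in * mask_out).sum()
    return 1.0 - (2 * inter + eps) / (mask_in.sum() + mask_out.sum() + eps)

def generator_loss(disc, seg_model, depth_img, fake_bronch, w_anat=10.0):
    # Conditional adversarial term on (condition, output) pairs.
    pred = disc(torch.cat([depth_img, fake_bronch], dim=1))
    adv = F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))
    # Anatomical constraint: orifices segmented in the depth condition must
    # persist in the synthesized bronchoscopy frame.
    with torch.no_grad():
        m_in = seg_model(depth_img)          # frozen orifice segmentation model
    m_out = seg_model(fake_bronch)
    return adv + w_anat * orifice_dice_loss(m_in, m_out)
```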