Page 226 of 3903899 results

Harnessing Generative AI for Lung Nodule Spiculation Characterization.

Wang Y, Patel C, Tchoua R, Furst J, Raicu D

PubMed · Jun 26, 2025
Spiculation, characterized by irregular, spike-like projections from nodule margins, serves as a crucial radiological biomarker for malignancy assessment and early cancer detection. These distinctive stellate patterns strongly correlate with tumor invasiveness and are vital for accurate diagnosis and treatment planning. Traditional computer-aided diagnosis (CAD) systems are limited in their capability to capture and use these patterns given their subtlety, the difficulty of quantifying them, and the small datasets available for learning them. To address these challenges, we propose a novel framework leveraging variational autoencoders (VAE) to discover, extract, and vary disentangled latent representations of lung nodule images. By gradually varying the latent representations of non-spiculated nodule images, we generate augmented datasets containing spiculated nodule variations that, we hypothesize, can improve the diagnostic classification of lung nodules. Using the National Institutes of Health/National Cancer Institute Lung Image Database Consortium (LIDC) dataset, our results show that incorporating these spiculated image variations into the classification pipeline significantly improves spiculation detection performance by up to 7.53%. Notably, this enhancement in spiculation detection is achieved while preserving the classification performance of non-spiculated cases. This approach effectively addresses class imbalance and enhances overall classification outcomes. The gradual attenuation of spiculation characteristics demonstrates our model's ability to both capture and generate clinically relevant semantic features in an algorithmic manner. These findings suggest that the integration of semantic-based latent representations into CAD models not only enhances diagnostic accuracy but also provides insights into the underlying morphological progression of spiculated nodules, enabling more informed and clinically meaningful AI-driven support systems.
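The core augmentation idea above, gradually varying a latent code from a non-spiculated toward a spiculated representation, can be sketched as plain linear interpolation in latent space. This is a minimal illustrative sketch, not the authors' code; the `interpolate_latents` name and the 4-dimensional latent are assumptions for illustration (the decoder that would turn each code back into an image is omitted).

```python
def interpolate_latents(z_start, z_end, steps):
    """Return `steps` latent vectors gradually morphing z_start -> z_end.

    Each intermediate code is a convex combination (1-t)*z_start + t*z_end,
    mimicking the gradual attenuation/amplification of a semantic feature.
    """
    out = []
    for i in range(steps):
        t = i / (steps - 1)  # t sweeps 0.0 .. 1.0 inclusive
        out.append([(1 - t) * a + t * b for a, b in zip(z_start, z_end)])
    return out

# Example: move a hypothetical 4-D latent code in 5 gradual steps.
path = interpolate_latents([0.0, 0.0, 0.0, 0.0], [1.0, 2.0, -1.0, 0.5], 5)
```

Decoding each intermediate code would yield a nodule image with progressively stronger spiculation, which is what populates the augmented training set.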

Development, deployment, and feature interpretability of a three-class prediction model for pulmonary diseases.

Cao Z, Xu G, Gao Y, Xu J, Tian F, Shi H, Yang D, Xie Z, Wang J

PubMed · Jun 26, 2025
To develop a high-performance machine learning model for predicting and interpreting features of pulmonary diseases. This retrospective study analyzed clinical and imaging data from patients with non-small cell lung cancer (NSCLC), granulomatous inflammation, and benign tumors, collected across multiple centers from January 2015 to October 2023. Data from two hospitals in Anhui Province were split into a development set (n = 1696) and a test set (n = 424) in an 8:2 ratio, with an external validation set (n = 909) from Zhejiang Province. Features with p < 0.05 from univariate analyses were selected using the Boruta algorithm for input into Random Forest (RF) and XGBoost models. Model efficacy was assessed using receiver operating characteristic (ROC) analysis. A total of 3030 patients were included: 2269 with NSCLC, 529 with granulomatous inflammation, and 232 with benign tumors. The Obuchowski indices for RF and XGBoost in the test set were 0.7193 (95% CI: 0.6567-0.7812) and 0.8282 (95% CI: 0.7883-0.8650), respectively. In the external validation set, the indices were 0.7932 (95% CI: 0.7572-0.8250) for RF and 0.8074 (95% CI: 0.7740-0.8387) for XGBoost. XGBoost achieved better accuracy in both the test (0.81) and external validation (0.79) sets. Calibration curves and decision curve analysis (DCA) showed that XGBoost offered a higher net clinical benefit. The XGBoost model outperforms RF in the three-class classification of lung diseases. XGBoost surpasses Random Forest in accurately classifying NSCLC, granulomatous inflammation, and benign tumors, offering superior clinical utility via multicenter data. The lung cancer classification model has broad clinical applicability. XGBoost outperforms Random Forest on CT imaging data. The XGBoost model can be deployed on a website for clinicians.
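The 8:2 development/test split described above can be sketched with a seeded shuffle; the counts work out exactly (2120 patients → 1696 development, 424 test). This is only an illustrative sketch under the assumption of a simple random split, the authors' exact (possibly stratified) procedure is not specified, and the seed is arbitrary.

```python
import random

def split_8_2(ids, seed=42):
    """Shuffle patient IDs reproducibly and cut at the 80% mark."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = ids[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * 0.8)
    return shuffled[:cut], shuffled[cut:]

# 2120 Anhui patients -> development and test sets matching the paper's counts.
dev, test = split_8_2(list(range(2120)))
```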

Dose-aware denoising diffusion model for low-dose CT.

Kim S, Kim BJ, Baek J

PubMed · Jun 26, 2025
Low-dose computed tomography (LDCT) denoising plays an important role in medical imaging for reducing the radiation dose to patients. Recently, various data-driven and diffusion-based deep learning (DL) methods have been developed and shown promising results in LDCT denoising. However, challenges remain in ensuring generalizability to different datasets and mitigating uncertainty from stochastic sampling. In this paper, we introduce a novel dose-aware diffusion model that effectively reduces CT image noise while maintaining structural fidelity and generalizing to different dose levels.
Approach: Our approach employs a physics-based forward process with continuous timesteps, enabling flexible representation of diverse noise levels. We incorporate a computationally efficient noise calibration module in our diffusion framework that resolves misalignment between intermediate results and their corresponding timesteps. Furthermore, we present a simple yet effective method for estimating appropriate timesteps for unseen LDCT images, allowing generalization to unknown, arbitrary dose levels.
Main results: Both qualitative and quantitative evaluation results on Mayo Clinic datasets show that the proposed method outperforms existing denoising methods in preserving the noise texture and restoring anatomical structures. The proposed method also shows consistent results on different dose levels and an unseen dataset.
Significance: We propose a novel dose-aware diffusion model for LDCT denoising, aiming to address the generalization and uncertainty issues of existing diffusion-based DL methods. Our experimental results demonstrate the effectiveness of the proposed method across different dose levels. We expect that our approach can provide a clinically practical solution for LDCT denoising with its high structural fidelity and computational efficiency.
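The timestep-estimation step above, mapping an unseen image's noise level to a continuous diffusion timestep, can be illustrated by inverting a noise schedule. The linear schedule sigma(t) = sigma_max * t below is an assumption for illustration only; the paper's physics-based forward process is not specified here, and `estimate_timestep` is a hypothetical name.

```python
def estimate_timestep(measured_sigma, sigma_max=1.0):
    """Map a measured noise level to a continuous timestep in [0, 1].

    Inverts the assumed linear schedule sigma(t) = sigma_max * t and
    clamps so out-of-range noise estimates still yield a valid timestep.
    """
    t = measured_sigma / sigma_max
    return min(max(t, 0.0), 1.0)
```

With a calibrated timestep in hand, the reverse diffusion process can start from the matching intermediate state instead of a fixed dose assumption.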

A machine learning model integrating clinical-radiomics-deep learning features accurately predicts postoperative recurrence and metastasis of primary gastrointestinal stromal tumors.

Xie W, Zhang Z, Sun Z, Wan X, Li J, Jiang J, Liu Q, Yang G, Fu Y

PubMed · Jun 26, 2025
Post-surgical prediction of recurrence or metastasis for primary gastrointestinal stromal tumors (GISTs) remains challenging. We aim to develop individualized clinical follow-up strategies for primary GIST patients, such as shortening follow-up intervals or extending drug administration, based on a clinical deep learning radiomics model (CDLRM). Clinical information on primary GISTs was collected from two independent centers. Postoperative recurrence or metastasis in GIST patients was defined as the endpoint of the study. A total of nine machine learning models were established based on the selected features. The performance of the models was assessed by calculating the area under the curve (AUC). The CDLRM with the best predictive performance was constructed. Decision curve analysis (DCA) and calibration curves were analyzed separately. Ultimately, our model was applied to the high-malignant-potential group versus the low-malignant-potential group, and the optimal clinical application scenarios of the model were further explored by comparing the DCA performance of the two subgroups. A total of 526 patients, 260 men and 266 women, with a mean age of 62 years, were enrolled in the study. CDLRM performed excellently, with AUC values of 0.999, 0.963, and 0.995 for the training, external validation, and aggregated sets, respectively. The calibration curve indicated that CDLRM showed good agreement between predicted and observed probabilities in the validation cohort. DCA across the subgroups showed that the model was more clinically valuable in the population with high malignant potential. CDLRM could help the development of personalized treatment and improved follow-up of patients with a high probability of recurrence or metastasis in the future.
This model utilizes imaging features extracted from CT scans (including radiomic features and deep features) and clinical data to accurately predict postoperative recurrence and metastasis in patients with primary GISTs, which has a certain auxiliary role in clinical decision-making. We developed and validated a model to predict recurrence or metastasis in patients taking oral imatinib after GIST resection. We demonstrated that CT image features were associated with recurrence or metastasis. The model had good predictive performance and clinical benefit.
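The model-selection step above, building nine candidate models and keeping the one with the best AUC, reduces to a simple argmax over validation scores. A minimal sketch, with hypothetical model names and AUC values chosen only to echo the reported CDLRM figure:

```python
def best_by_auc(results):
    """results: dict mapping model name -> validation AUC.

    Returns the (name, auc) pair with the highest AUC.
    """
    return max(results.items(), key=lambda kv: kv[1])

# Hypothetical validation AUCs for three of the candidate models.
name, auc = best_by_auc({"RF": 0.91, "SVM": 0.88, "CDLRM": 0.963})
```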

Deep learning-based contour propagation in magnetic resonance imaging-guided radiotherapy of lung cancer patients.

Wei C, Eze C, Klaar R, Thorwarth D, Warda C, Taugner J, Hörner-Rieber J, Regnery S, Jaekel O, Weykamp F, Palacios MA, Marschner S, Corradini S, Belka C, Kurz C, Landry G, Rabe M

PubMed · Jun 26, 2025
Fast and accurate organ-at-risk (OAR) and gross tumor volume (GTV) contour propagation methods are needed to improve the efficiency of magnetic resonance (MR) imaging-guided radiotherapy. We trained deformable image registration networks to accurately propagate contours from planning to fraction MR images.
Approach: Data from 140 stage 1-2 lung cancer patients treated at a 0.35T MR-Linac were split into 102/17/21 for training/validation/testing. Additionally, 18 central lung tumor patients, treated at a 0.35T MR-Linac externally, and 14 stage 3 lung cancer patients from a phase 1 clinical trial, treated at 0.35T or 1.5T MR-Linacs at three institutions, were used for external testing. Planning and fraction images were paired (490 pairs) for training. Two hybrid transformer-convolutional neural network TransMorph models with mean squared error (MSE), Dice similarity coefficient (DSC), and regularization losses (TM_{MSE+Dice}) or MSE and regularization losses (TM_{MSE}) were trained to deformably register planning to fraction images. The TransMorph models predicted diffeomorphic dense displacement fields. Multi-label images including seven thoracic OARs and the GTV were propagated to generate fraction segmentations. Model predictions were compared with contours obtained through B-spline registration, vendor registration, and the auto-segmentation method nnUNet. Evaluation metrics included the DSC and Hausdorff distance percentiles (50th and 95th) against clinical contours.
Main results: TM_{MSE+Dice} and TM_{MSE} achieved mean OARs/GTV DSCs of 0.90/0.82 and 0.90/0.79 for the internal and 0.84/0.77 and 0.85/0.76 for the central lung tumor external test data. On stage 3 data, TM_{MSE+Dice} achieved mean OARs/GTV DSCs of 0.87/0.79 and 0.83/0.78 for the 0.35T MR-Linac datasets, and 0.87/0.75 for the 1.5T MR-Linac dataset. TM_{MSE+Dice} and TM_{MSE} had significantly higher geometric accuracy than the other methods on external data. No significant difference between TM_{MSE+Dice} and TM_{MSE} was found.
Significance: TransMorph models achieved time-efficient segmentation of fraction MRIs with high geometric accuracy and accurately segmented images obtained at different field strengths.
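The Dice similarity coefficient (DSC) used as the primary evaluation metric above is straightforward to compute; here is a minimal sketch for binary masks represented as sets of voxel indices (the set representation is an illustrative choice, real pipelines typically operate on arrays):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|).

    Masks are sets of voxel coordinates; two empty masks are defined
    as a perfect match (DSC = 1.0).
    """
    if not mask_a and not mask_b:
        return 1.0
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

a = {(0, 0), (0, 1), (1, 0)}
b = {(0, 1), (1, 0), (1, 1)}
overlap = dice(a, b)  # 2*2 / (3+3) = 2/3
```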

Epicardial adipose tissue, myocardial remodelling and adverse outcomes in asymptomatic aortic stenosis: a post hoc analysis of a randomised controlled trial.

Geers J, Manral N, Razipour A, Park C, Tomasino GF, Xing E, Grodecki K, Kwiecinski J, Pawade T, Doris MK, Bing R, White AC, Droogmans S, Cosyns B, Slomka PJ, Newby DE, Dweck MR, Dey D

PubMed · Jun 26, 2025
Epicardial adipose tissue represents a metabolically active visceral fat depot that is in direct contact with the left ventricular myocardium. While it is associated with coronary artery disease, little is known regarding its role in aortic stenosis. We sought to investigate the association of epicardial adipose tissue with aortic stenosis severity and progression, myocardial remodelling and function, and mortality in asymptomatic patients with aortic stenosis. In a post hoc analysis of 124 patients with asymptomatic mild-to-severe aortic stenosis participating in a prospective clinical trial, baseline epicardial adipose tissue was quantified on CT angiography using fully automated deep learning-enabled software. Aortic stenosis disease severity was assessed at baseline and 1 year. The primary endpoint was all-cause mortality. Neither epicardial adipose tissue volume nor attenuation correlated with aortic stenosis severity or subsequent disease progression as assessed by echocardiography or CT (p>0.05 for all). Epicardial adipose tissue volume correlated with plasma cardiac troponin concentration (r=0.23, p=0.009), left ventricular mass (r=0.46, p<0.001), ejection fraction (r=-0.28, p=0.002), global longitudinal strain (r=0.28, p=0.017), and left atrial volume (r=0.39, p<0.001). During the median follow-up of 48 (IQR 26-73) months, a total of 23 (18%) patients died. In multivariable analysis, both epicardial adipose tissue volume (HR 1.82, 95% CI 1.10 to 3.03; p=0.021) and plasma cardiac troponin concentration (HR 1.47, 95% CI 1.13 to 1.90; p=0.004) were associated with all-cause mortality, after adjustment for age, body mass index and left ventricular ejection fraction. Patients with epicardial adipose tissue volume >90 mm³ had a 3-4 times higher risk of death (adjusted HR 3.74, 95% CI 1.08 to 12.96; p=0.037).
Epicardial adipose tissue volume does not associate with aortic stenosis severity or its progression but does correlate with blood and imaging biomarkers of impaired myocardial health. The latter may explain the association of epicardial adipose tissue volume with an increased risk of all-cause mortality in patients with asymptomatic aortic stenosis. Trial registration: ClinicalTrials.gov (NCT02132026).

Enhancing cancer diagnostics through a novel deep learning-based semantic segmentation algorithm: A low-cost, high-speed, and accurate approach.

Benabbou T, Sahel A, Badri A, Mourabit IE

PubMed · Jun 26, 2025
Deep learning-based semantic segmentation approaches provide an efficient and automated means for cancer diagnosis and monitoring, which is important in clinical applications. However, implementing these approaches outside the experimental environment and using them in real-world applications requires powerful and adequate hardware resources, which are not available in most hospitals, especially in low- and middle-income countries. Consequently, clinical settings will never use most of these algorithms, or at best, their adoption will be relatively limited. To address these issues, some approaches that reduce computational costs were proposed, but they performed poorly and failed to produce satisfactory results. Therefore, finding a method that overcomes these limitations without losing performance is highly challenging. To face this challenge, our study proposes a novel, optimal convolutional neural network-based approach for medical image segmentation that consists of multiple synthesis and analysis paths connected through a series of long skip connections. The design leverages multi-scale convolution, multi-scale feature extraction, downsampling strategies, and feature map fusion methods, all of which have proven effective in enhancing performance. This framework was extensively evaluated against current state-of-the-art architectures on various medical image segmentation tasks, including lung tumors, spleen, and pancreatic tumors. The results of these experiments conclusively demonstrate the efficacy of the proposed approach in outperforming existing state-of-the-art methods across multiple evaluation metrics. This superiority is further enhanced by the framework's ability to minimize the computational complexity and decrease the number of parameters required, resulting in greater segmentation accuracy, faster processing, and better implementation efficiency.
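One of the building blocks named above, the downsampling strategy in the analysis path, can be illustrated with plain 2x2 max pooling on a nested-list "feature map". This is a generic sketch of the standard operation, not the paper's specific architecture, and it assumes even spatial dimensions for brevity:

```python
def max_pool_2x2(fm):
    """Downsample a 2-D feature map by taking the max of each 2x2 block.

    Halves both spatial dimensions while keeping the strongest activation
    in each neighborhood; assumes even height and width.
    """
    h, w = len(fm), len(fm[0])
    return [[max(fm[i][j], fm[i][j + 1], fm[i + 1][j], fm[i + 1][j + 1])
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]

fm = [[1, 2, 3, 4],
      [5, 6, 7, 8],
      [9, 1, 2, 3],
      [4, 5, 6, 7]]
pooled = max_pool_2x2(fm)  # -> [[6, 8], [9, 7]]
```

In an encoder-decoder segmentation network, such pooled maps from the analysis path are what the long skip connections carry over to the synthesis path for feature fusion.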

Constructing high-quality enhanced 4D-MRI with personalized modeling for liver cancer radiotherapy.

Yao Y, Chen B, Wang K, Cao Y, Zuo L, Zhang K, Chen X, Kuo M, Dai J

PubMed · Jun 26, 2025
For magnetic resonance imaging (MRI), a short acquisition time and good image quality are incompatible. Thus, reconstructing time-resolved volumetric MRI (4D-MRI) to delineate and monitor thoracic and upper abdominal tumor movements is a challenge. Existing MRI sequences have limited applicability to 4D-MRI. A method is proposed for reconstructing high-quality personalized enhanced 4D-MR images. Low-quality 4D-MR images are scanned, followed by deep learning-based personalization to generate high-quality 4D-MR images. High-speed multiphase 3D fast spoiled gradient recalled echo (FSPGR) sequences were utilized to generate low-quality enhanced free-breathing 4D-MR images and paired low-/high-quality breath-holding 4D-MR images for 58 liver cancer patients. Then, a personalized model guided by the paired breath-holding 4D-MR images was developed for each patient to cope with patient heterogeneity. The 4D-MR images generated by the personalized model were of much higher quality than the low-quality 4D-MR images obtained by conventional scanning, as demonstrated by significant improvements in the peak signal-to-noise ratio, structural similarity, normalized root mean square error, and cumulative probability of blur detection. The introduction of individualized information helped the personalized model achieve a statistically significant improvement over the general model (p < 0.001). The proposed method can be used to quickly reconstruct high-quality 4D-MR images and is potentially applicable to radiotherapy for liver cancer.
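The first quality metric reported above, peak signal-to-noise ratio (PSNR), has a simple closed form. A minimal sketch for flat lists of pixel intensities, assuming an 8-bit intensity range (the study's actual dynamic range is not stated):

```python
import math

def psnr(ref, test, max_val=255.0):
    """PSNR in dB: 10*log10(max_val^2 / MSE) against a reference image.

    Identical images have zero MSE, which we report as infinity.
    """
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)
```

Higher PSNR between a generated 4D-MR frame and its breath-holding reference indicates less residual noise and reconstruction error.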

Machine Learning Models for Predicting Mortality in Pneumonia Patients.

Pavlovic V, Haque MS, Grubor N, Pavlovic A, Stanisavljevic D, Milic N

PubMed · Jun 26, 2025
Pneumonia remains a significant cause of hospital mortality, prompting the need for precise mortality prediction methods. This study conducted a systematic review identifying predictors of mortality using Machine Learning (ML) and applied these methods to hospitalized pneumonia patients at the University Clinical Centre Zvezdara. The systematic review identified 16 studies (313,572 patients), revealing common mortality predictors including age, oxygen levels, and albumin. A Random Forest (RF) model was developed using local data (n=343), achieving 99% accuracy and an AUC of 0.99. Key predictors identified were chest X-ray worsening, ventilator use, age, and oxygen support. ML demonstrated high potential for accurately predicting pneumonia mortality, surpassing traditional severity scores and highlighting its practical clinical utility.
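The random-forest idea behind the model above, many decision trees each casting a vote, with the majority class as the prediction, can be sketched with stand-in trees. The three rule functions below are hypothetical stumps built on the predictors the study names (ventilator use, age, oxygen support), not the fitted model:

```python
from collections import Counter

def forest_predict(trees, x):
    """Majority vote over a list of tree functions applied to record x."""
    votes = Counter(tree(x) for tree in trees)
    return votes.most_common(1)[0][0]

# Hypothetical one-rule "trees" voting 1 (high mortality risk) or 0 (low).
trees = [
    lambda x: int(x["age"] > 75),
    lambda x: int(x["on_ventilator"]),
    lambda x: int(x["oxygen_support"]),
]
high_risk = forest_predict(trees, {"age": 80, "on_ventilator": 1, "oxygen_support": 0})
```

A real RF additionally trains each tree on a bootstrap sample with random feature subsets, which is what gives the ensemble its robustness.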

Enhancing Diagnostic Precision: Utilising a Large Language Model to Extract U Scores from Thyroid Sonography Reports.

Watts E, Pournik O, Allington R, Ding X, Boelaert K, Sharma N, Ghalichi L, Arvanitis TN

PubMed · Jun 26, 2025
This study evaluates the performance of ChatGPT-4, a Large Language Model (LLM), in automatically extracting U scores from free-text thyroid ultrasound reports collected from University Hospitals Birmingham (UHB), UK, between 2014 and 2024. The LLM was provided with guidelines on the U classification system and extracted U scores independently from 14,248 de-identified reports, without access to human-assigned scores. The LLM-extracted scores were compared to initial clinician-assigned and refined U scores provided by expert reviewers. The LLM achieved 97.7% agreement with refined human U scores, successfully identifying the highest U score in 98.1% of reports with multiple nodules. Most discrepancies (2.5%) were linked to ambiguous descriptions, multi-nodule reports, and cases with human-documented uncertainty. While the results demonstrate the potential for LLMs to improve reporting consistency and reduce manual workload, ethical and governance challenges such as transparency, privacy, and bias must be addressed before routine clinical deployment. Embedding LLMs into reporting workflows, such as Online Analytical Processing (OLAP) tools, could further enhance reporting quality and consistency.
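The "highest U score per multi-nodule report" rule evaluated above can be illustrated with a simple regex baseline. This is not the study's LLM pipeline, only a hypothetical rule-based comparator showing the aggregation step; real reports with ambiguous descriptions are exactly where such a baseline fails and an LLM helps:

```python
import re

def max_u_score(report):
    """Pull explicit U1-U5 mentions from free text and keep the highest.

    Returns None when no U score is mentioned at all.
    """
    scores = [int(m) for m in re.findall(r"\bU([1-5])\b", report)]
    return max(scores) if scores else None

top = max_u_score("Right lobe nodule U2; left lobe nodule U4.")  # -> 4
```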