Page 12 of 2602591 results

Integrating CT radiomics and clinical features using machine learning to predict post-COVID pulmonary fibrosis.

Zhao Q, Li Y, Zhao C, Dong R, Tian J, Zhang Z, Huang L, Huang J, Yan J, Yang Z, Ruan J, Wang P, Yu L, Qu J, Zhou M

PubMed | Jul 2 2025
The lack of reliable biomarkers for the early detection and risk stratification of post-COVID-19 pulmonary fibrosis (PCPF) underscores the urgent need for advanced predictive tools. This study aimed to develop a machine learning-based predictive model integrating quantitative CT (qCT) radiomics and clinical features to assess the risk of lung fibrosis in COVID-19 patients. A total of 204 patients with confirmed COVID-19 pneumonia were included in the study. Of these, 93 patients were assigned to the development cohort (74 for training and 19 for internal validation), while 111 patients from three independent hospitals constituted the external validation cohort. Chest CT images were analyzed using qCT software. Clinical data and laboratory parameters were obtained from electronic health records. Least absolute shrinkage and selection operator (LASSO) regression with 5-fold cross-validation was used to select the most predictive features. Twelve machine learning algorithms were independently trained. Their performance was evaluated by receiver operating characteristic (ROC) curves, area under the curve (AUC) values, sensitivity, and specificity. Seventy-eight features were extracted and reduced to ten for model development. These included two qCT radiomics signatures: (1) whole lung_reticulation (%) interstitial lung disease (ILD) texture analysis, (2) interstitial lung abnormality (ILA)_Num of lung zones ≥ 5%_whole lung_ILA. Among the 12 machine learning algorithms evaluated, the support vector machine (SVM) model demonstrated the best predictive performance, with AUCs of 0.836 (95% CI: 0.830-0.842) in the training cohort, 0.796 (95% CI: 0.777-0.816) in the internal validation cohort, and 0.797 (95% CI: 0.691-0.873) in the external validation cohort.
The integration of CT radiomics, clinical and laboratory variables using machine learning provides a robust tool for predicting pulmonary fibrosis progression in COVID-19 patients, facilitating early risk assessment and intervention.
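Most studies on this page report discrimination as ROC AUC with sensitivity and specificity. As a minimal illustration (not the authors' code), the AUC can be computed directly from predicted scores via the Mann-Whitney formulation; the function name and the toy fibrosis-risk scores below are hypothetical:

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case is scored above a random negative one."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, dtype=float)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # Count positive > negative pairs; ties contribute half a win.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical risk scores for four patients (label 1 = fibrosis).
print(auc_score([1, 1, 0, 0], [0.9, 0.6, 0.4, 0.2]))  # perfect ranking -> 1.0
```

The same statistic underlies the AUC values quoted throughout these abstracts; confidence intervals are typically obtained by bootstrapping over patients.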

Large language model trained on clinical oncology data predicts cancer progression.

Zhu M, Lin H, Jiang J, Jinia AJ, Jee J, Pichotta K, Waters M, Rose D, Schultz N, Chalise S, Valleru L, Morin O, Moran J, Deasy JO, Pilai S, Nichols C, Riely G, Braunstein LZ, Li A

PubMed | Jul 2 2025
Subspecialty knowledge barriers have limited the adoption of large language models (LLMs) in oncology. We introduce Woollie, an open-source, oncology-specific LLM trained on real-world data from Memorial Sloan Kettering Cancer Center (MSK) across lung, breast, prostate, pancreatic, and colorectal cancers, with external validation using University of California, San Francisco (UCSF) data. Woollie surpasses ChatGPT in medical benchmarks and excels in eight non-medical benchmarks. Analyzing 39,319 radiology impression notes from 4002 patients, it achieved an overall area under the receiver operating characteristic curve (AUROC) of 0.97 for cancer progression prediction on MSK data, including a notable 0.98 AUROC for pancreatic cancer. On UCSF data, it achieved an overall AUROC of 0.88, excelling in lung cancer detection with an AUROC of 0.95. As the first oncology-specific LLM validated across institutions, Woollie demonstrates high accuracy and consistency across cancer types, underscoring its potential to enhance cancer progression analysis.

Multimodal AI to forecast arrhythmic death in hypertrophic cardiomyopathy.

Lai C, Yin M, Kholmovski EG, Popescu DM, Lu DY, Scherer E, Binka E, Zimmerman SL, Chrispin J, Hays AG, Phelan DM, Abraham MR, Trayanova NA

PubMed | Jul 2 2025
Sudden cardiac death from ventricular arrhythmias is a leading cause of mortality worldwide. Arrhythmic death prognostication is challenging in patients with hypertrophic cardiomyopathy (HCM), a setting where current clinical guidelines show low performance and inconsistent accuracy. Here, we present a deep learning approach, MAARS (Multimodal Artificial intelligence for ventricular Arrhythmia Risk Stratification), to forecast lethal arrhythmia events in patients with HCM by analyzing multimodal medical data. MAARS' transformer-based neural networks learn from electronic health records, echocardiogram and radiology reports, and contrast-enhanced cardiac magnetic resonance images, the latter being a unique feature of this model. MAARS achieves an area under the curve of 0.89 (95% confidence interval (CI) 0.79-0.94) and 0.81 (95% CI 0.69-0.93) in internal and external cohorts and outperforms current clinical guidelines by 0.27-0.35 (internal) and 0.22-0.30 (external). In contrast to clinical guidelines, it demonstrates fairness across demographic subgroups. We interpret MAARS' predictions on multiple levels to promote artificial intelligence transparency and derive risk factors warranting further investigation.

Multimodal nomogram integrating deep learning radiomics and hemodynamic parameters for early prediction of post-craniotomy intracranial hypertension.

Fu Z, Wang J, Shen W, Wu Y, Zhang J, Liu Y, Wang C, Shen Y, Zhu Y, Zhang W, Lv C, Peng L

PubMed | Jul 2 2025
To evaluate the effectiveness of a deep learning radiomics nomogram in distinguishing early intracranial hypertension (IH) following primary decompressive craniectomy (DC) in patients with severe traumatic brain injury (TBI), and to demonstrate its potential clinical value as a noninvasive tool for guiding timely intervention and improving patient outcomes. This study included 238 patients with severe TBI (training cohort: n = 166; testing cohort: n = 72). Postoperative ultrasound images of the optic nerve sheath (ONS) and spectral Doppler imaging of the middle cerebral artery (MCASDI) were obtained at 6 and 18 h after DC. Patients were grouped according to threshold values of 15 mmHg and 20 mmHg based on invasive intracranial pressure (ICPi) measurements. Clinical-semantic features were collected, radiomics features were extracted from ONS images, and deep transfer learning (DTL) features were generated using ResNet101. Predictive models were developed using the Light Gradient Boosting Machine (LightGBM) algorithm. Clinical-ultrasound variables were incorporated into the model through univariate and multivariate logistic regression. A combined nomogram was developed by integrating DLR (deep learning radiomics) features with clinical-ultrasound variables, and its diagnostic performance at the two thresholds was evaluated using receiver operating characteristic (ROC) curve analysis and decision curve analysis (DCA). The nomogram model demonstrated superior performance over the clinical model at both the 15 mmHg and 20 mmHg thresholds. For 15 mmHg, the AUC was 0.974 (95% confidence interval [CI]: 0.953-0.995) in the training cohort and 0.919 (95% CI: 0.845-0.993) in the testing cohort. For 20 mmHg, the AUC was 0.968 (95% CI: 0.944-0.993) in the training cohort and 0.889 (95% CI: 0.806-0.972) in the testing cohort. DCA curves showed net clinical benefit across all models.
Among DLR models based on ONS, MCASDI, or their pre-fusion, the ONS-based model performed best in the testing cohorts. The nomogram model, incorporating clinical-semantic features, radiomics, and DTL features, exhibited promising performance in predicting early IH in post-DC patients. It shows promise for enhancing non-invasive ICP monitoring and supporting individualized therapeutic strategies.
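The DCA reported above follows the standard decision-curve convention: at a chosen threshold probability, net benefit weighs true positives against false positives scaled by the odds of that threshold. A minimal sketch of that calculation, assuming the conventional Vickers-Elkin formula rather than the authors' own implementation (`net_benefit` and the toy data are hypothetical):

```python
import numpy as np

def net_benefit(y_true, y_prob, pt):
    """Decision-curve net benefit at threshold probability pt:
    NB = TP/N - FP/N * pt / (1 - pt)."""
    y_true = np.asarray(y_true)
    pred = np.asarray(y_prob, dtype=float) >= pt  # treat as "intervene" calls
    n = len(y_true)
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    return tp / n - fp / n * pt / (1.0 - pt)

# Hypothetical IH probabilities for four post-DC patients (label 1 = IH).
print(net_benefit([1, 1, 0, 0], [0.9, 0.8, 0.7, 0.1], pt=0.5))
```

Sweeping `pt` over a clinically plausible range and plotting net benefit against the "treat all" and "treat none" strategies reproduces the DCA curves described in the abstract.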

Ensemble methods and partially-supervised learning for accurate and robust automatic murine organ segmentation.

Daenen LHBA, de Bruijn J, Staut N, Verhaegen F

PubMed | Jul 2 2025
Delineation of multiple organs in murine µCT images is crucial for preclinical studies but requires manual volumetric segmentation, a tedious and time-consuming process prone to inter-observer variability. Automatic deep learning-based segmentation can improve speed and reproducibility. While 2D and 3D deep learning models have been developed for anatomical segmentation, their generalization to external datasets has not been extensively investigated. Furthermore, ensemble learning, combining predictions of multiple 2D models, and partially-supervised learning (PSL), enabling training on partially-labeled datasets, have not been explored for preclinical purposes. This study demonstrates the first use of PSL frameworks and the superiority of 3D models in accuracy and generalizability to external datasets. Ensemble methods performed on par with or better than the best individual 2D network, but only 3D models consistently generalized to external datasets (Dice Similarity Coefficient (DSC) > 0.8). PSL frameworks showed promising results across various datasets and organs, but their generalization to external data can be improved for some organs. This work highlights the superiority of 3D models over 2D and ensemble counterparts in accuracy and generalizability for murine µCT image segmentation. Additionally, a promising PSL framework is presented for leveraging multiple datasets without complete annotations. Our model can increase time-efficiency and improve reproducibility in preclinical radiotherapy workflows by circumventing manual contouring bottlenecks. Moreover, the high segmentation accuracy of 3D models allows monitoring multiple organs over time using repeated µCT imaging, potentially reducing the number of mice sacrificed in studies, adhering to the 3R principle, specifically Reduction and Refinement.
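The DSC > 0.8 criterion above and the idea of combining 2D model predictions can both be sketched in a few lines. This is an illustrative toy (the paper's actual ensembling strategy is not specified here); `dice` is the standard Dice coefficient and `majority_vote` is one simple way to fuse binary masks:

```python
import numpy as np

def dice(pred, gt):
    """Dice Similarity Coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 1.0 if total == 0 else 2.0 * inter / total

def majority_vote(masks):
    """Pixel-wise majority vote over several binary predictions,
    e.g. masks from multiple 2D models on the same slice."""
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    return stack.sum(axis=0) > len(masks) / 2

# Two flattened toy masks: two of three voxels agree -> DSC = 2/3.
print(dice([1, 1, 0], [1, 0, 0]))
```

In practice the masks would be full 3D volumes; the formula is unchanged because numpy reduces over all axes.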

A federated learning-based privacy-preserving image processing framework for brain tumor detection from CT scans.

Al-Saleh A, Tejani GG, Mishra S, Sharma SK, Mousavirad SJ

PubMed | Jul 2 2025
The detection of brain tumors is crucial in medical imaging, because accurate and early diagnosis can have a positive effect on patients. Because traditional deep learning models store all their data together, they raise concerns about privacy, regulatory compliance, and the heterogeneous data held by different institutions. We introduce the anisotropic-residual capsule hybrid Gorilla Badger optimized network (Aniso-ResCapHGBO-Net) framework for detecting brain tumors in a privacy-preserving, decentralized system used by many healthcare institutions. ResNet-50 and capsule networks are incorporated to achieve better feature extraction and preserve the spatial structure of images. To get the best results, the hybrid Gorilla Badger optimization algorithm (HGBOA) is applied to select the key features. Preprocessing techniques include anisotropic diffusion filtering, morphological operations, and mutual information-based image registration. Updates to the model are made secure and tamper-evident on the Ethereum network with its private blockchain and SHA-256 hashing scheme. The project is built using Python, TensorFlow, and PyTorch. The model achieves 99.07% accuracy, 98.54% precision, and 99.82% sensitivity on assessments from benchmark CT imaging of brain tumors. The approach also reduces both false negatives and false positives. The framework protects patients' data without decreasing the accuracy of brain tumor detection.
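The core federated-learning loop the abstract relies on — sites share parameter updates, never images, and each update is hashed for tamper evidence — can be sketched generically. This assumes standard FedAvg-style weighted averaging and a plain SHA-256 digest; the paper's exact aggregation and ledger interface are not specified, so `fedavg` and `update_digest` are hypothetical names:

```python
import hashlib
import numpy as np

def fedavg(updates, weights):
    """Federated averaging: weighted mean of per-site parameter vectors,
    so raw images never leave each institution."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize by each site's sample count
    return sum(wi * np.asarray(u, dtype=float) for wi, u in zip(w, updates))

def update_digest(params):
    """SHA-256 digest of a parameter vector, as one might record on a
    private blockchain to make model updates tamper-evident."""
    return hashlib.sha256(np.asarray(params, dtype=float).tobytes()).hexdigest()

# Two hypothetical sites with equal data volumes.
global_params = fedavg([[1.0, 1.0], [3.0, 3.0]], weights=[1, 1])
print(global_params)            # midpoint of the two site updates
print(update_digest(global_params)[:8])
```

Any mismatch between a site's recorded digest and the bytes it later submits would reveal tampering, which is the property the blockchain layer provides.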

Foundation Model and Radiomics-Based Quantitative Characterization of Perirenal Fat in Renal Cell Carcinoma Surgery.

Mei H, Chen H, Zheng Q, Yang R, Wang N, Jiao P, Wang X, Chen Z, Liu X

PubMed | Jul 1 2025
To quantitatively characterize the degree of perirenal fat adhesion using artificial intelligence in renal cell carcinoma. This retrospective study analyzed a total of 596 patients from three cohorts, utilizing corticomedullary phase computed tomography urography (CTU) images. The nnUNet v2 network combined with numerical computation was employed to segment the perirenal fat region. Pyradiomics algorithms and a computed tomography foundation model were used to extract features from CTU images separately, creating single-modality predictive models for identifying perirenal fat adhesion. By concatenating the Pyradiomics and foundation model features, an early fusion multimodal predictive signature was developed. The prognostic performance of the single-modality and multimodality models was further validated in two independent cohorts. The nnUNet v2 segmentation model accurately segmented both kidneys. The neural network and thresholding approach effectively delineated the perirenal fat region. Single-modality models based on radiomic and computed tomography foundation features demonstrated a certain degree of accuracy in diagnosing and identifying perirenal fat adhesion, while the early feature fusion diagnostic model outperformed the single-modality models. Also, the perirenal fat adhesion score showed a positive correlation with surgical time and intraoperative blood loss. AI-based radiomics and foundation models can accurately identify the degree of perirenal fat adhesion and have the potential to be used for surgical risk assessment.
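The "early fusion" step described above — concatenating Pyradiomics features with foundation-model features before classification — is mechanically simple. A minimal sketch, assuming per-block z-scoring before concatenation (a common choice when the two feature sets have very different scales; the authors' exact normalization is not stated):

```python
import numpy as np

def early_fuse(radiomics_feats, foundation_feats):
    """Early fusion: z-score each feature block across patients, then
    concatenate into one multimodal feature vector per patient."""
    def zscore(x):
        x = np.asarray(x, dtype=float)
        sd = x.std(axis=0)
        return (x - x.mean(axis=0)) / np.where(sd == 0, 1.0, sd)
    return np.concatenate([zscore(radiomics_feats), zscore(foundation_feats)], axis=1)

# Two hypothetical patients: 2 radiomics features + 1 foundation feature each.
fused = early_fuse([[1.0, 2.0], [3.0, 4.0]], [[0.0], [2.0]])
print(fused.shape)  # (2, 3): one fused row per patient
```

The fused matrix then feeds a single downstream classifier, which is what lets the combined signature outperform either single-modality model.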

CBCT radiomics features combine machine learning to diagnose cystic lesions in the jaw.

Sha X, Wang C, Sun J, Qi S, Yuan X, Zhang H, Yang J

PubMed | Jul 1 2025
The aim of this study was to develop a radiomics model based on cone beam CT (CBCT) to differentiate odontogenic cysts (OCs), odontogenic keratocysts (OKCs), and ameloblastomas (ABs). In this retrospective study, CBCT images were collected from 300 patients diagnosed with OC, OKC, and AB who underwent histopathological diagnosis. These patients were randomly divided into training (70%) and test (30%) cohorts. Radiomics features were extracted from the images, and the optimal features were incorporated into a random forest model, a support vector classifier (SVC) model, a logistic regression model, and a soft VotingClassifier built on these three algorithms. The performance of the models was evaluated using a receiver operating characteristic (ROC) curve and the area under the curve (AUC). The best of these models was then used to establish the final radiomics prediction model, whose performance was evaluated using the sensitivity, accuracy, precision, specificity, and F1 score in both the training cohort and the test cohort. The 6 optimal radiomics features were incorporated into a soft VotingClassifier, which delivered the best overall performance. The AUC values of the one-vs-rest (OvR) multi-classification strategy were AB-vs-Rest 0.963, OKC-vs-Rest 0.928, and OC-vs-Rest 0.919 in the training cohort, and AB-vs-Rest 0.814, OKC-vs-Rest 0.781, and OC-vs-Rest 0.849 in the test cohort. The overall accuracy of the model was 0.757 in the training cohort and 0.711 in the test cohort. The VotingClassifier model demonstrated the ability of CBCT radiomics to distinguish multiple types of jaw lesions (OC, OKC, and AB) and may have the potential to support accurate, non-invasive diagnosis.
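Soft voting, as used for the final model above, simply averages the class-probability outputs of the base classifiers and takes the argmax. A minimal sketch of that mechanism (the probability matrices below are invented, not the study's outputs):

```python
import numpy as np

def soft_vote(prob_list):
    """Soft voting: average class-probability matrices from several
    classifiers, then pick the argmax class for each sample."""
    avg = np.mean([np.asarray(p, dtype=float) for p in prob_list], axis=0)
    return avg, avg.argmax(axis=1)

# One sample, three classes (e.g. OC / OKC / AB), three base classifiers.
rf_probs  = [[0.6, 0.3, 0.1]]
svc_probs = [[0.2, 0.5, 0.3]]
lr_probs  = [[0.1, 0.4, 0.5]]
avg, labels = soft_vote([rf_probs, svc_probs, lr_probs])
print(labels)  # class index 1 wins on the averaged probabilities
```

This is the behavior of scikit-learn's `VotingClassifier(voting="soft")`, which is presumably the implementation the abstract refers to.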

A Preoperative CT-based Multiparameter Deep Learning and Radiomic Model with Extracellular Volume Parameter Images Can Predict the Tumor Budding Grade in Rectal Cancer Patients.

Tang X, Zhuang Z, Jiang L, Zhu H, Wang D, Zhang L

PubMed | Jul 1 2025
To investigate a computed tomography (CT)-based multiparameter deep learning-radiomic model (DLRM) for predicting the preoperative tumor budding (TB) grade in patients with rectal cancer. Data from 135 patients with histologically confirmed rectal cancer (85 in the Bd1+2 group and 50 in the Bd3 group) were retrospectively included. Deep learning (DL) features and hand-crafted radiomic (HCR) features were separately extracted and selected from preoperative CT-based extracellular volume (ECV) parameter images and venous-phase images. Six predictive signatures were subsequently constructed from machine learning classification algorithms. Finally, a combined DL and HCR model, the DLRM, was established to predict the TB grade of rectal cancer patients by merging the DL and HCR features from the two image sets. In the training and test cohorts, the AUC values of the DLRM were 0.976 [95% CI: 0.942-0.997] and 0.976 [95% CI: 0.942-1.00], respectively. The DLRM had good output agreement and clinical applicability according to calibration curve analysis and DCA, respectively. The DLRM outperformed the individual DL and HCR signatures in terms of predicting the TB grade of rectal cancer patients (p < 0.05). The DLRM can be used to evaluate the TB grade of rectal cancer patients in a noninvasive manner before surgery, thereby providing support for clinical treatment decision-making for these patients.

Magnetic resonance image generation using enhanced TransUNet in temporomandibular disorder patients.

Ha EG, Jeon KJ, Lee C, Kim DH, Han SS

PubMed | Jul 1 2025
Temporomandibular disorder (TMD) patients experience a variety of clinical symptoms, and MRI is the most effective tool for diagnosing temporomandibular joint (TMJ) disc displacement. This study aimed to develop a transformer-based deep learning model to generate T2-weighted (T2w) images from proton density-weighted (PDw) images, reducing MRI scan time for TMD patients. A dataset of 7226 images from 178 patients who underwent TMJ MRI examinations was used. The proposed model employed a generative adversarial network framework with a TransUNet architecture as the generator for image translation. Additionally, a disc segmentation decoder was integrated to improve image quality in the TMJ disc region. The model performance was evaluated using metrics such as the structural similarity index measure (SSIM), learned perceptual image patch similarity (LPIPS), and Fréchet inception distance (FID). Three experienced oral radiologists also performed a qualitative assessment through the mean opinion score (MOS). The model demonstrated high performance in generating T2w images from PDw images, achieving average SSIM, LPIPS, and FID values of 82.28%, 2.46, and 23.85, respectively, in the disc region. The model also obtained an average MOS score of 4.58, surpassing other models. Additionally, the model showed robust segmentation capabilities for the TMJ disc. The proposed model, integrating a transformer and a disc segmentation task, demonstrated strong performance in MR image generation, both quantitatively and qualitatively. This suggests its potential clinical significance in reducing MRI scan times for TMD patients while maintaining high image quality.
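SSIM, the main image-quality metric reported above, compares luminance, contrast, and structure between the generated and reference images. A single-window simplification of the standard SSIM statistic (the full metric averages this over local windows, and the study's exact settings are not given here):

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM between two images; the usual SSIM score
    averages this statistic over sliding local windows."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    c1 = (0.01 * data_range) ** 2  # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# A toy 4x4 "generated" image identical to its reference scores 1.0.
img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
print(global_ssim(img, img))
```

LPIPS and FID, by contrast, compare deep-network feature activations rather than pixel statistics, which is why they are reported alongside SSIM to capture perceptual quality.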