Page 10 of 2052045 results

A deep learning model for early diagnosis of Alzheimer's disease combined with 3D CNN and video Swin transformer.

Zhou J, Wei Y, Li X, Zhou W, Tao R, Hua Y, Liu H

PubMed · Jul 2 2025
Alzheimer's disease (AD) constitutes a neurodegenerative disorder predominantly observed in the geriatric population. Early diagnosis of AD greatly benefits patients in terms of both prevention and treatment. Therefore, our team proposed a novel deep learning model named 3D-CNN-VSwinFormer. The model consists of two components: the first is a 3D CNN equipped with a 3D Convolutional Block Attention Module (3D CBAM), and the second is a fine-tuned Video Swin Transformer. Our investigation extracts features from subject-level 3D magnetic resonance imaging (MRI) data, retaining only a single 3D MRI image per participant. This method circumvents data leakage and addresses the failure of 2D slices to capture global spatial information. We utilized the ADNI dataset to validate our proposed model. In differentiating between AD patients and cognitively normal (CN) individuals, we achieved accuracy and AUC values of 92.92% and 0.9660, respectively. Compared to other studies on AD and CN recognition, our model yielded superior results, enhancing the efficiency of AD diagnosis.
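As an illustrative sketch only (not the authors' implementation), the channel-attention half of a CBAM-style block for a 3D feature map can be written in plain NumPy: average- and max-pooled channel descriptors pass through a shared two-layer MLP, and a sigmoid gate reweights the channels. The weights below are random stand-ins for learned parameters.

```python
import numpy as np

def channel_attention_3d(feat, reduction=4):
    """CBAM-style channel attention over a 3D feature map.

    feat: array of shape (C, D, H, W). A shared two-layer MLP scores
    both the average- and max-pooled channel descriptors; their sum is
    squashed by a sigmoid and used to reweight the channels.
    """
    c = feat.shape[0]
    rng = np.random.default_rng(0)  # toy weights; learned in a real network
    w1 = rng.standard_normal((c // reduction, c)) / np.sqrt(c)
    w2 = rng.standard_normal((c, c // reduction)) / np.sqrt(c // reduction)

    avg_desc = feat.reshape(c, -1).mean(axis=1)  # (C,)
    max_desc = feat.reshape(c, -1).max(axis=1)   # (C,)

    def mlp(x):
        return w2 @ np.maximum(w1 @ x, 0.0)      # ReLU hidden layer

    gate = 1.0 / (1.0 + np.exp(-(mlp(avg_desc) + mlp(max_desc))))  # sigmoid, (C,)
    return feat * gate[:, None, None, None]

x = np.ones((8, 4, 4, 4))        # hypothetical 8-channel 4x4x4 feature map
y = channel_attention_3d(x)
```

The spatial-attention half of CBAM (pooling over channels instead) follows the same pattern.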

Clinical value of the 70-kVp ultra-low-dose CT pulmonary angiography with deep learning image reconstruction.

Zhang Y, Wang L, Yuan D, Qi K, Zhang M, Zhang W, Gao J, Liu J

PubMed · Jul 2 2025
This study aims to assess the feasibility of "double-low" (low radiation dose and low contrast media dose) CT pulmonary angiography (CTPA) based on deep learning image reconstruction (DLIR) algorithms. One hundred consecutive patients (41 females; average age 60.9 years, range 18-90) were prospectively scanned on multi-detector CT systems. Fifty patients in the conventional-dose (CD) group underwent CTPA with a 100-kVp protocol using a traditional iterative reconstruction algorithm, and 50 patients in the low-dose (LD) group underwent CTPA with a 70-kVp DLIR protocol. Radiation and contrast agent doses were recorded and compared between groups, and objective image parameters were measured and compared. Two radiologists independently rated overall image quality, artifacts, and image contrast on a 5-point scale, and the furthest visible vessel branches were compared between groups. Compared with the CD group, the LD group reduced the dose-length product by 80.3% (p < 0.01) and the contrast media dose by 33.3%. CT values, SD values, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) showed no statistically significant differences between the LD and CD groups (all p > 0.05). Overall image quality scores were comparable between the LD and CD groups (p > 0.05), with good inter-reader agreement (k = 0.75). More peripheral pulmonary vessels could be assessed in the LD group than in the CD group. A 70-kVp protocol combined with DLIR reconstruction for CTPA can further reduce radiation and contrast agent doses while maintaining image quality and improving the visibility of distal pulmonary artery branches. Question Elevated radiation exposure and substantial doses of contrast media during CT pulmonary angiography (CTPA) augment patient risks. Findings The "double-low" CTPA protocol can diminish radiation doses by 80.3% and minimize contrast doses by one-third while maintaining image quality.
Clinical relevance With deep learning algorithms, we confirmed that CTPA images maintained excellent quality despite reduced radiation and contrast dosages, helping to reduce radiation exposure and kidney burden on patients. The "double-low" CTPA protocol, complemented by deep learning image reconstruction, prioritizes patient safety.
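The objective metrics compared above (SNR, CNR) follow standard ROI-based definitions; a minimal sketch with hypothetical HU values (not the study's measurements):

```python
def snr(mean_hu, sd_hu):
    """Signal-to-noise ratio of an ROI: mean attenuation over its SD."""
    return mean_hu / sd_hu

def cnr(mean_vessel, mean_muscle, sd_background):
    """Contrast-to-noise ratio: vessel-muscle contrast over background noise."""
    return (mean_vessel - mean_muscle) / sd_background

# Hypothetical ROI statistics (HU) for a pulmonary artery, muscle, and background
pa_mean, pa_sd = 450.0, 18.0
muscle_mean, bg_sd = 55.0, 15.0

snr_pa = snr(pa_mean, pa_sd)
cnr_pa = cnr(pa_mean, muscle_mean, bg_sd)
```

Exact ROI placements vary between papers, so definitions should be checked against each study's methods section.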

Artificial intelligence-assisted endobronchial ultrasound for differentiating between benign and malignant thoracic lymph nodes: a meta-analysis.

Tang F, Zha XK, Ye W, Wang YM, Wu YF, Wang LN, Lyu LP, Lyu XM

PubMed · Jul 2 2025
Endobronchial ultrasound (EBUS) is a widely used imaging modality for evaluating thoracic lymph nodes (LNs), particularly in the staging of lung cancer. Artificial intelligence (AI)-assisted EBUS has emerged as a promising tool to enhance diagnostic accuracy. However, its effectiveness in differentiating benign from malignant thoracic LNs remains uncertain. This meta-analysis aimed to evaluate the diagnostic performance of AI-assisted EBUS compared to the pathological reference standards. A systematic search was conducted across PubMed, Embase, and Web of Science for studies assessing AI-assisted EBUS in differentiating benign and malignant thoracic LNs. The reference standard included pathological confirmation via EBUS-guided transbronchial needle aspiration, surgical resection, or other histological/cytological validation methods. Sensitivity, specificity, diagnostic likelihood ratios, and diagnostic odds ratio (OR) were pooled using a random-effects model. The area under the receiver operating characteristic curve (AUROC) was summarized to evaluate diagnostic accuracy. Subgroup analyses were conducted by study design, lymph node location, and AI model type. Twelve studies with a total of 6,090 thoracic LNs were included. AI-assisted EBUS showed a pooled sensitivity of 0.75 (95% confidence interval [CI]: 0.60-0.86, I² = 97%) and specificity of 0.88 (95% CI: 0.83-0.92, I² = 96%). The positive and negative likelihood ratios were 6.34 (95% CI: 4.41-9.08) and 0.28 (95% CI: 0.17-0.47), respectively. The pooled diagnostic OR was 22.38 (95% CI: 11.03-45.38), and the AUROC was 0.90 (95% CI: 0.88-0.93). The subgroup analysis showed higher sensitivity but lower specificity in retrospective studies compared to prospective ones (sensitivity: 0.87 vs. 0.42; specificity: 0.80 vs. 0.93; both p < 0.001). No significant differences were found by lymph node location or AI model type. 
AI-assisted EBUS shows promise in differentiating benign from malignant thoracic LNs, particularly given its high specificity. However, substantial heterogeneity and moderate sensitivity highlight the need for cautious interpretation and further validation. PROSPERO registration: CRD42025637964.
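Pooled diagnostic odds ratios of this kind are commonly obtained with a DerSimonian-Laird random-effects model. The sketch below (with a 0.5 continuity correction; not the authors' exact software or data) shows the mechanics:

```python
import math

def pooled_dor(studies):
    """DerSimonian-Laird random-effects pooled diagnostic odds ratio.

    studies: list of (tp, fp, fn, tn) 2x2 counts. Returns (pooled_OR, tau2).
    """
    ys, vs = [], []
    for tp, fp, fn, tn in studies:
        tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))  # continuity correction
        ys.append(math.log(tp * tn / (fp * fn)))              # log diagnostic OR
        vs.append(1 / tp + 1 / fp + 1 / fn + 1 / tn)          # Woolf variance
    w = [1 / v for v in vs]                                   # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, ys))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(ys) - 1)) / c) if c > 0 else 0.0  # between-study var.
    w_re = [1 / (v + tau2) for v in vs]                         # random-effects weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, ys)) / sum(w_re)
    return math.exp(y_re), tau2
```

With identical studies the heterogeneity estimate collapses to zero and the pooled OR equals the single-study OR; real meta-analyses add confidence intervals and bivariate sensitivity/specificity models on top of this.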

Integrating CT radiomics and clinical features using machine learning to predict post-COVID pulmonary fibrosis.

Zhao Q, Li Y, Zhao C, Dong R, Tian J, Zhang Z, Huang L, Huang J, Yan J, Yang Z, Ruan J, Wang P, Yu L, Qu J, Zhou M

PubMed · Jul 2 2025
The lack of reliable biomarkers for the early detection and risk stratification of post-COVID-19 pulmonary fibrosis (PCPF) underscores the urgent need for advanced predictive tools. This study aimed to develop a machine learning-based predictive model integrating quantitative CT (qCT) radiomics and clinical features to assess the risk of lung fibrosis in COVID-19 patients. A total of 204 patients with confirmed COVID-19 pneumonia were included in the study. Of these, 93 patients were assigned to the development cohort (74 for training and 19 for internal validation), while 111 patients from three independent hospitals constituted the external validation cohort. Chest CT images were analyzed using qCT software. Clinical data and laboratory parameters were obtained from electronic health records. Least absolute shrinkage and selection operator (LASSO) regression with 5-fold cross-validation was used to select the most predictive features. Twelve machine learning algorithms were independently trained, and their performance was evaluated using receiver operating characteristic (ROC) curves, area under the curve (AUC) values, sensitivity, and specificity. Seventy-eight features were extracted and reduced to ten for model development, including two qCT radiomics signatures: (1) whole lung_reticulation (%) interstitial lung disease (ILD) texture analysis and (2) interstitial lung abnormality (ILA)_Num of lung zones ≥ 5%_whole lung_ILA. Among the 12 machine learning algorithms evaluated, the support vector machine (SVM) model demonstrated the best predictive performance, with AUCs of 0.836 (95% CI: 0.830-0.842) in the training cohort, 0.796 (95% CI: 0.777-0.816) in the internal validation cohort, and 0.797 (95% CI: 0.691-0.873) in the external validation cohort.
The integration of CT radiomics with clinical and laboratory variables using machine learning provides a robust tool for predicting pulmonary fibrosis progression in COVID-19 patients, facilitating early risk assessment and intervention.
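The LASSO feature-selection step described above can be sketched with cyclic coordinate descent on standardized features. The toy example below uses synthetic data (not the study's 78 features) in which only the first two of ten predictors carry signal:

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=200):
    """Lasso via cyclic coordinate descent.

    Assumes columns of X are standardized (mean 0, variance 1) and y is
    centered, so each coordinate update is a simple soft-threshold.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]               # partial residual
            rho = X[:, j] @ r / n
            beta[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0)  # soft-threshold
    return beta

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 10))
X = (X - X.mean(0)) / X.std(0)
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(200)
y = y - y.mean()

beta = lasso_cd(X, y, alpha=0.1)
selected = np.flatnonzero(np.abs(beta) > 1e-8)  # surviving feature indices
```

In practice the penalty `alpha` is chosen by k-fold cross-validation (5-fold in the study), and the surviving features feed the downstream classifiers.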

Large language model trained on clinical oncology data predicts cancer progression.

Zhu M, Lin H, Jiang J, Jinia AJ, Jee J, Pichotta K, Waters M, Rose D, Schultz N, Chalise S, Valleru L, Morin O, Moran J, Deasy JO, Pilai S, Nichols C, Riely G, Braunstein LZ, Li A

PubMed · Jul 2 2025
Subspecialty knowledge barriers have limited the adoption of large language models (LLMs) in oncology. We introduce Woollie, an open-source, oncology-specific LLM trained on real-world data from Memorial Sloan Kettering Cancer Center (MSK) across lung, breast, prostate, pancreatic, and colorectal cancers, with external validation using University of California, San Francisco (UCSF) data. Woollie surpasses ChatGPT in medical benchmarks and excels in eight non-medical benchmarks. Analyzing 39,319 radiology impression notes from 4002 patients, it achieved an overall area under the receiver operating characteristic curve (AUROC) of 0.97 for cancer progression prediction on MSK data, including a notable 0.98 AUROC for pancreatic cancer. On UCSF data, it achieved an overall AUROC of 0.88, excelling in lung cancer detection with an AUROC of 0.95. As the first oncology-specific LLM validated across institutions, Woollie demonstrates high accuracy and consistency across cancer types, underscoring its potential to enhance cancer progression analysis.
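The AUROC figures quoted above have a direct probabilistic reading: the chance that a randomly chosen progression case is scored higher than a randomly chosen non-progression case. A minimal pairwise sketch (toy scores, not Woollie's outputs):

```python
def auroc(labels, scores):
    """AUROC as P(score_pos > score_neg), with 0.5 credit for ties."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: one of the four positive/negative pairs is misordered
score = auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```

This O(P·N) pairwise form is fine for illustration; production code uses the rank-based (Mann-Whitney) equivalent.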

Multimodal AI to forecast arrhythmic death in hypertrophic cardiomyopathy.

Lai C, Yin M, Kholmovski EG, Popescu DM, Lu DY, Scherer E, Binka E, Zimmerman SL, Chrispin J, Hays AG, Phelan DM, Abraham MR, Trayanova NA

PubMed · Jul 2 2025
Sudden cardiac death from ventricular arrhythmias is a leading cause of mortality worldwide. Arrhythmic death prognostication is challenging in patients with hypertrophic cardiomyopathy (HCM), a setting where current clinical guidelines show low performance and inconsistent accuracy. Here, we present a deep learning approach, MAARS (Multimodal Artificial intelligence for ventricular Arrhythmia Risk Stratification), to forecast lethal arrhythmia events in patients with HCM by analyzing multimodal medical data. MAARS' transformer-based neural networks learn from electronic health records, echocardiogram and radiology reports, and contrast-enhanced cardiac magnetic resonance images, the latter being a unique feature of this model. MAARS achieves an area under the curve of 0.89 (95% confidence interval (CI) 0.79-0.94) and 0.81 (95% CI 0.69-0.93) in internal and external cohorts and outperforms current clinical guidelines by 0.27-0.35 (internal) and 0.22-0.30 (external). In contrast to clinical guidelines, it demonstrates fairness across demographic subgroups. We interpret MAARS' predictions on multiple levels to promote artificial intelligence transparency and derive risk factors warranting further investigation.
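Purely as a sketch of the multimodal-fusion idea (MAARS' actual transformer fusion is more involved), per-modality embeddings can be pooled with attention weights from a learned query vector; all vectors below are random stand-ins:

```python
import numpy as np

def fuse_modalities(embs, query):
    """Attention-pool modality embeddings into one patient vector.

    embs: (M, d), one embedding per modality; query: (d,) learned vector.
    A softmax over scaled dot-product scores gives the fusion weights.
    """
    scores = embs @ query / np.sqrt(embs.shape[1])
    w = np.exp(scores - scores.max())   # stable softmax
    w = w / w.sum()
    return w @ embs, w

rng = np.random.default_rng(0)
embs = rng.standard_normal((3, 16))   # e.g. EHR, echo/radiology reports, CMR
query = rng.standard_normal(16)
fused, w = fuse_modalities(embs, query)
```

Inspecting `w` per patient is one way such models expose which modality drove a prediction, in the spirit of the interpretability analysis the authors describe.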

Multimodal nomogram integrating deep learning radiomics and hemodynamic parameters for early prediction of post-craniotomy intracranial hypertension.

Fu Z, Wang J, Shen W, Wu Y, Zhang J, Liu Y, Wang C, Shen Y, Zhu Y, Zhang W, Lv C, Peng L

PubMed · Jul 2 2025
To evaluate the effectiveness of a deep learning radiomics nomogram in distinguishing early intracranial hypertension (IH) following primary decompressive craniectomy (DC) in patients with severe traumatic brain injury (TBI), and to demonstrate its potential clinical value as a noninvasive tool for guiding timely intervention and improving patient outcomes. This study included 238 patients with severe TBI (training cohort: n = 166; testing cohort: n = 72). Postoperative ultrasound images of the optic nerve sheath (ONS) and spectral Doppler imaging of the middle cerebral artery (MCASDI) were obtained at 6 and 18 h after DC. Patients were grouped according to threshold values of 15 mmHg and 20 mmHg based on invasive intracranial pressure (ICPi) measurements. Clinical-semantic features were collected, radiomics features were extracted from ONS images, and deep transfer learning (DTL) features were generated using ResNet101. Predictive models were developed using the Light Gradient Boosting Machine (LightGBM) algorithm. Clinical-ultrasound variables were incorporated into the model through univariate and multivariate logistic regression. A combined nomogram was developed by integrating deep learning radiomics (DLR) features with clinical-ultrasound variables, and its diagnostic performance at the two thresholds was evaluated using receiver operating characteristic (ROC) curve analysis and decision curve analysis (DCA). The nomogram model demonstrated superior performance over the clinical model at both the 15 mmHg and 20 mmHg thresholds. For 15 mmHg, the AUC was 0.974 (95% confidence interval [CI]: 0.953-0.995) in the training cohort and 0.919 (95% CI: 0.845-0.993) in the testing cohort. For 20 mmHg, the AUC was 0.968 (95% CI: 0.944-0.993) in the training cohort and 0.889 (95% CI: 0.806-0.972) in the testing cohort. DCA curves showed a net clinical benefit across all models.
Among DLR models based on ONS, MCASDI, or their pre-fusion, the ONS-based model performed best in the testing cohorts. The nomogram model, incorporating clinical-semantic features, radiomics, and DTL features, exhibited promising performance in predicting early IH in post-DC patients. It shows promise for enhancing non-invasive ICP monitoring and supporting individualized therapeutic strategies.
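The decision curve analysis used above plots net benefit against the risk threshold at which one would intervene. A minimal sketch of the standard net-benefit formula, with toy labels and probabilities:

```python
def net_benefit(y_true, y_prob, threshold):
    """Net benefit of a model at a risk threshold t: TP/n - FP/n * t/(1-t)."""
    n = len(y_true)
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 0)
    return tp / n - fp / n * threshold / (1 - threshold)

def net_benefit_all(prevalence, threshold):
    """Reference 'treat all' line: every patient classified positive."""
    return prevalence - (1 - prevalence) * threshold / (1 - threshold)

nb_model = net_benefit([1, 1, 0, 0], [0.9, 0.8, 0.7, 0.1], 0.5)
```

A model adds clinical value over a threshold range where its curve sits above both the "treat all" line and the "treat none" line (net benefit 0).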

Ensemble methods and partially-supervised learning for accurate and robust automatic murine organ segmentation.

Daenen LHBA, de Bruijn J, Staut N, Verhaegen F

PubMed · Jul 2 2025
Delineation of multiple organs in murine µCT images is crucial for preclinical studies but requires manual volumetric segmentation, a tedious and time-consuming process prone to inter-observer variability. Automatic deep learning-based segmentation can improve speed and reproducibility. While 2D and 3D deep learning models have been developed for anatomical segmentation, their generalization to external datasets has not been extensively investigated. Furthermore, ensemble learning, combining predictions of multiple 2D models, and partially-supervised learning (PSL), enabling training on partially-labeled datasets, have not been explored for preclinical purposes. This study demonstrates the first use of PSL frameworks and the superiority of 3D models in accuracy and generalizability to external datasets. Ensemble methods performed on par with or better than the best individual 2D network, but only 3D models consistently generalized to external datasets (Dice Similarity Coefficient (DSC) > 0.8). PSL frameworks showed promising results across various datasets and organs, but their generalization to external data can be improved for some organs. This work highlights the superiority of 3D models over 2D and ensemble counterparts in accuracy and generalizability for murine µCT image segmentation. Additionally, a promising PSL framework is presented for leveraging multiple datasets without complete annotations. Our model can increase time-efficiency and improve reproducibility in preclinical radiotherapy workflows by circumventing manual contouring bottlenecks. Moreover, the high segmentation accuracy of 3D models allows monitoring multiple organs over time using repeated µCT imaging, potentially reducing the number of mice sacrificed in studies, adhering to the 3R principle, specifically Reduction and Refinement.
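The DSC metric and a simple per-voxel majority-vote ensemble, one common way of combining multiple segmentation predictions, can be sketched in NumPy (illustrative 2x2 masks, not the paper's exact ensembling scheme):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def majority_vote(masks):
    """Ensemble binary masks by per-voxel majority vote."""
    stack = np.stack([m.astype(int) for m in masks])
    return (stack.sum(axis=0) * 2 > len(masks)).astype(int)

a = np.array([[1, 1], [0, 0]])   # toy prediction 1
b = np.array([[1, 0], [0, 0]])   # toy prediction 2
```

DSC > 0.8, as reported for the 3D models on external data, is a commonly used bar for acceptable organ segmentation in such workflows.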

A federated learning-based privacy-preserving image processing framework for brain tumor detection from CT scans.

Al-Saleh A, Tejani GG, Mishra S, Sharma SK, Mousavirad SJ

PubMed · Jul 2 2025
The detection of brain tumors is crucial in medical imaging, because accurate and early diagnosis can have a positive effect on patients. Because traditional deep learning models centralize all training data, they raise concerns about privacy, regulatory compliance, and the heterogeneous data held by different institutions. We introduce the anisotropic-residual capsule hybrid Gorilla Badger optimized network (Aniso-ResCapHGBO-Net) framework for detecting brain tumors in a privacy-preserving, decentralized system used by many healthcare institutions. ResNet-50 and capsule networks are incorporated to achieve better feature extraction and maintain the structure of images' spatial data. To get the best results, the hybrid Gorilla Badger optimization algorithm (HGBOA) is applied for selecting the key features. Preprocessing techniques include anisotropic diffusion filtering, morphological operations, and mutual information-based image registration. Updates to the model are made secure and tamper-evident on the Ethereum network with its private blockchain and SHA-256 hashing scheme. The project is built using Python, TensorFlow and PyTorch. The model displays 99.07% accuracy, 98.54% precision and 99.82% sensitivity on assessments from benchmark CT imaging of brain tumors. This approach also helps to reduce both false negatives and false positives. The framework ensures that patients' data is protected without decreasing the accuracy of brain tumor detection.
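The aggregation step at the heart of most federated learning frameworks is FedAvg: a sample-size-weighted average of client parameters, so raw images never leave each institution. A minimal sketch (plain Python lists standing in for model weight vectors; not the authors' specific protocol):

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging: sample-size-weighted mean of parameter vectors.

    client_weights: list of equal-length weight vectors, one per client.
    client_sizes: number of local training samples per client.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two hypothetical hospitals; the larger site pulls the average toward its weights
global_weights = fedavg([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```

The server repeats this each round after clients train locally; hashing each update (as the authors do with SHA-256 on a private chain) makes the round history tamper-evident.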

Foundation Model and Radiomics-Based Quantitative Characterization of Perirenal Fat in Renal Cell Carcinoma Surgery.

Mei H, Chen H, Zheng Q, Yang R, Wang N, Jiao P, Wang X, Chen Z, Liu X

PubMed · Jul 1 2025
To quantitatively characterize the degree of perirenal fat adhesion using artificial intelligence in renal cell carcinoma. This retrospective study analyzed a total of 596 patients from three cohorts, utilizing corticomedullary phase computed tomography urography (CTU) images. The nnUNet v2 network combined with numerical computation was employed to segment the perirenal fat region. Pyradiomics algorithms and a computed tomography foundation model were used to extract features from CTU images separately, creating single-modality predictive models for identifying perirenal fat adhesion. By concatenating the Pyradiomics and foundation model features, an early fusion multimodal predictive signature was developed. The prognostic performance of the single-modality and multimodality models was further validated in two independent cohorts. The nnUNet v2 segmentation model accurately segmented both kidneys. The neural network and thresholding approach effectively delineated the perirenal fat region. Single-modality models based on radiomic and computed tomography foundation features demonstrated a certain degree of accuracy in diagnosing and identifying perirenal fat adhesion, while the early feature fusion diagnostic model outperformed the single-modality models. Also, the perirenal fat adhesion score showed a positive correlation with surgical time and intraoperative blood loss. AI-based radiomics and foundation models can accurately identify the degree of perirenal fat adhesion and have the potential to be used for surgical risk assessment.
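The early-fusion step described here amounts to concatenating the two feature blocks, typically after per-feature standardization so neither block dominates by scale. A minimal NumPy sketch with hypothetical dimensions (not the study's actual feature counts):

```python
import numpy as np

def early_fusion(radiomics, foundation):
    """Concatenate two feature blocks after z-scoring each feature column."""
    def z(m):
        return (m - m.mean(axis=0)) / (m.std(axis=0) + 1e-8)
    return np.concatenate([z(radiomics), z(foundation)], axis=1)

rng = np.random.default_rng(1)
rad = rng.standard_normal((5, 10)) * 100   # e.g. Pyradiomics features (large scale)
fm = rng.standard_normal((5, 32))          # e.g. foundation-model embeddings
fused = early_fusion(rad, fm)              # one row of fused features per patient
```

The fused matrix then feeds a single downstream classifier, which is what distinguishes early (feature-level) fusion from late (decision-level) fusion of separate models.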
