Machine Learning Models for Carotid Artery Plaque Detection: A Systematic Review of Ultrasound-Based Diagnostic Performance.

Eini P, Eini P, Serpoush H, Rezayee M, Tremblay J

pubmed · Sep 5 2025
Carotid artery plaques, a hallmark of atherosclerosis, are key risk indicators for ischemic stroke, a major global health burden with 101 million cases and 6.65 million deaths in 2019. Early ultrasound detection is vital but hindered by the limitations of manual analysis. Machine learning (ML) offers a promising solution for automated plaque detection, yet its comparative performance is underexplored. This systematic review and meta-analysis evaluates ML models for carotid plaque detection using ultrasound. We searched PubMed, Scopus, Embase, Web of Science, and ProQuest for studies on ML-based carotid plaque detection with ultrasound, following PRISMA guidelines. Eligible studies reported diagnostic metrics and used a reference standard. Data on study characteristics, ML models, and performance were extracted, with risk of bias assessed via PROBAST+AI. Pooled sensitivity, specificity, and AUROC were calculated using STATA 18 with the MIDAS and METADTA modules. Of ten studies, eight were meta-analyzed (200-19,751 patients). The best models showed a pooled sensitivity of 0.94 (95% CI: 0.88-0.97), specificity of 0.95 (95% CI: 0.86-0.98), AUROC of 0.98 (95% CI: 0.97-0.99), and diagnostic odds ratio (DOR) of 302 (95% CI: 54-1684), with high heterogeneity (I² = 90%) and no evidence of publication bias. ML models show promise in carotid plaque detection, supporting potential clinical integration for stroke prevention, though high heterogeneity and potential bias highlight the need for standardized validation.
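The pooled diagnostic metrics above all derive from 2x2 confusion counts. A minimal sketch of the relationships, using illustrative counts chosen to roughly match the pooled estimates (these are not data from the review):

```python
def sensitivity(tp, fn):
    # True-positive rate: proportion of diseased cases correctly flagged.
    return tp / (tp + fn)

def specificity(tn, fp):
    # True-negative rate: proportion of healthy cases correctly cleared.
    return tn / (tn + fp)

def diagnostic_odds_ratio(tp, fp, fn, tn):
    # DOR = (TP * TN) / (FP * FN): odds of a positive test among the
    # diseased divided by the odds of a positive test among the healthy.
    return (tp * tn) / (fp * fn)

# Illustrative counts: 100 diseased, 100 healthy subjects.
tp, fn, tn, fp = 94, 6, 95, 5
print(sensitivity(tp, fn), specificity(tn, fp),
      diagnostic_odds_ratio(tp, fp, fn, tn))
```

With these counts the sensitivity and specificity are 0.94 and 0.95, and the DOR comes out near 298, in the same range as the pooled estimate of 302.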

Chest X-ray Pneumothorax Segmentation Using EfficientNet-B4 Transfer Learning in a U-Net Architecture

Alvaro Aranibar Roque, Helga Sebastian

arxiv preprint · Sep 4 2025
Pneumothorax, the abnormal accumulation of air in the pleural space, can be life-threatening if undetected. Chest X-rays are the first-line diagnostic tool, but small pneumothoraces may be subtle. We propose an automated deep-learning pipeline using a U-Net with an EfficientNet-B4 encoder to segment pneumothorax regions. Trained on the SIIM-ACR dataset with data augmentation and a combined binary cross-entropy plus Dice loss, the model achieved an IoU of 0.7008 and a Dice score of 0.8241 on the independent PTX-498 dataset. These results demonstrate that the model can accurately localize pneumothoraces and support radiologists.
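The combined binary cross-entropy plus Dice objective can be sketched as follows; the equal 0.5/0.5 weighting is an assumption, since the abstract does not state the weights:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss between predicted probabilities and a binary mask.
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def bce_loss(pred, target, eps=1e-7):
    # Binary cross-entropy, with clipping for numerical stability.
    p = np.clip(pred, eps, 1.0 - eps)
    return float(-(target * np.log(p) + (1.0 - target) * np.log(1.0 - p)).mean())

def combined_loss(pred, target, w_bce=0.5, w_dice=0.5):
    # Weighted sum of the two terms; weights here are illustrative.
    return w_bce * bce_loss(pred, target) + w_dice * dice_loss(pred, target)
```

In a training framework the same formula is applied to tensors of logits or probabilities; the BCE term drives per-pixel calibration while the Dice term counters the extreme foreground/background imbalance typical of small pneumothoraces.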

Interpretable Transformer Models for rs-fMRI Epilepsy Classification and Biomarker Discovery

Jeyabose Sundar, A., Boerwinkle, V. L., Robinson Vimala, B., Leggio, O., Kazemi, M.

medrxiv preprint · Sep 4 2025
Background: Automated interpretation of resting-state fMRI (rs-fMRI) for epilepsy diagnosis remains a challenge. We developed a regularized transformer that models parcel-wise spatial patterns and long-range temporal dynamics to classify epilepsy and generate interpretable, network-level candidate biomarkers. Methods: Inputs were Schaefer-200 parcel time series extracted after standardized preprocessing (fMRIPrep). The regularized transformer is an attention-based sequence model with learned positional encoding and multi-head self-attention, combined with fMRI-specific regularization (dropout, weight decay, gradient clipping) and augmentation to improve robustness on modest clinical cohorts. Training used stratified group 4-fold cross-validation on n=65 (30 epilepsy, 35 controls) with fMRI-specific augmentation (time-warping, adaptive noise, structured masking). We compared the transformer to seven baselines (MLP, 1D-CNN, LSTM, CNN-LSTM, GCN, GAT, Attention-Only). External validation used an independent set (10 UNC epilepsy patients, 10 controls). Biomarker discovery combined gradient-based attributions with parcel-wise statistics and connectivity contrasts. Results: On an illustrative best-performing fold, the transformer attained accuracy 0.77, sensitivity 0.83, specificity 0.88, F1-score 0.77, and AUC 0.76. Averaged cross-validation performance was lower but consistent with these findings. External testing yielded accuracy 0.60, AUC 0.64, specificity 0.80, and sensitivity 0.40. Attribution-guided analysis identified distributed, network-level candidate biomarkers concentrated in the limbic, somatomotor, default-mode, and salience systems. Conclusions: A regularized transformer on parcel-level rs-fMRI can achieve strong within-fold discrimination and produce interpretable candidate biomarkers. Results are encouraging but preliminary; larger multi-site validation, stability testing, and multiple-comparison control are required prior to clinical translation.
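The core of such an attention-based sequence model is scaled dot-product self-attention over the parcel time series. A single-head NumPy sketch (the study uses multi-head attention; the toy sizes, random weights, and random positional matrix below are illustrative, not the paper's parameters):

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    # Scaled dot-product self-attention over a (T, d) sequence:
    # T timepoints, d parcel features per timepoint.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores = scores - scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn = attn / attn.sum(axis=-1, keepdims=True)        # rows sum to 1
    return attn @ v, attn

rng = np.random.default_rng(0)
T, d = 16, 8                      # toy sizes; Schaefer-200 would give d=200
x = rng.standard_normal((T, d))   # stand-in for a parcel time series
pos = 0.1 * rng.standard_normal((T, d))   # learned positional encoding (random here)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out, attn = self_attention(x + pos, Wq, Wk, Wv)
```

Each output timepoint is a weighted mixture of all timepoints, which is what lets the model capture long-range temporal dynamics; the attention matrix is also what gradient-based attribution methods probe for biomarker discovery.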

A Generative Foundation Model for Chest Radiography

Yuanfeng Ji, Dan Lin, Xiyue Wang, Lu Zhang, Wenhui Zhou, Chongjian Ge, Ruihang Chu, Xiaoli Yang, Junhan Zhao, Junsong Chen, Xiangde Luo, Sen Yang, Jin Fang, Ping Luo, Ruijiang Li

arxiv preprint · Sep 4 2025
The scarcity of well-annotated, diverse medical images is a major hurdle for developing reliable AI models in healthcare. Substantial technical advances have been made in generative foundation models for natural images. Here we develop ChexGen, a generative vision-language foundation model that introduces a unified framework for text-, mask-, and bounding-box-guided synthesis of chest radiographs. Built upon the latent diffusion transformer architecture, ChexGen was pretrained on the largest curated chest X-ray dataset to date, consisting of 960,000 radiograph-report pairs. Expert evaluations and quantitative metrics confirm that ChexGen achieves accurate synthesis of radiographs. We demonstrate the utility of ChexGen for training data augmentation and supervised pretraining, which led to performance improvements across disease classification, detection, and segmentation tasks using a small fraction of the training data. Further, our model enables the creation of diverse patient cohorts that enhance model fairness by detecting and mitigating demographic biases. Our study supports the transformative role of generative foundation models in building more accurate, data-efficient, and equitable medical AI systems.

Deep Learning Based Multiomics Model for Risk Stratification of Postoperative Distant Metastasis in Colorectal Cancer.

Yao X, Han X, Huang D, Zheng Y, Deng S, Ning X, Yuan L, Ao W

pubmed · Sep 4 2025
To develop deep learning-based multiomics models for predicting postoperative distant metastasis (DM) and evaluating survival prognosis in colorectal cancer (CRC) patients. This retrospective study included 521 CRC patients who underwent curative surgery at two centers. Preoperative CT and postoperative hematoxylin-eosin (HE) stained slides were collected. A total of 381 patients from Center 1 were split (7:3) into training and internal validation sets; 140 patients from Center 2 formed the independent external validation set. Patients were grouped based on DM status during follow-up. Radiological and pathological models were constructed using independent imaging and pathological predictors. Deep features were extracted with a ResNet-101 backbone to build deep learning radiomics (DLRS) and deep learning pathomics (DLPS) models. Two integrated models were developed: Nomogram 1 (radiological + DLRS) and Nomogram 2 (pathological + DLPS). CT-reported T (cT) stage (OR=2.00, P=0.006) and CT-reported N (cN) stage (OR=1.63, P=0.023) were identified as independent radiologic predictors for the radiological model; pN stage (OR=1.91, P=0.003) and perineural invasion (OR=2.07, P=0.030) were identified as pathological predictors for the pathological model. DLRS and DLPS incorporated 28 and 30 deep features, respectively. In the training set, the areas under the curve (AUCs) for the radiological, pathological, DLRS, DLPS, Nomogram 1, and Nomogram 2 models were 0.657, 0.687, 0.931, 0.914, 0.938, and 0.930, respectively. DeLong's test showed that DLRS, DLPS, and both nomograms significantly outperformed the conventional models (P<.05). Kaplan-Meier analysis confirmed effective 3-year disease-free survival (DFS) stratification by the nomograms. Deep learning-based multiomics models provided high accuracy for postoperative DM prediction, and the nomogram models enabled reliable DFS risk stratification in CRC patients.
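A nomogram of this kind combines predictors on the log-odds scale. In the sketch below, the cT/cN coefficients are the logs of the reported odds ratios (2.00 and 1.63), while the intercept and the weight on the deep-learning radiomics (DLRS) probability are illustrative placeholders, not fitted values from the paper:

```python
import numpy as np

def nomogram_probability(ct_t, ct_n, dlrs, b0=-3.0, b_dl=4.0):
    # Log-odds model combining the two radiologic predictors with the
    # DLRS probability. log(2.00) and log(1.63) come from the reported
    # odds ratios; b0 and b_dl are hypothetical.
    z = b0 + np.log(2.00) * ct_t + np.log(1.63) * ct_n + b_dl * dlrs
    return 1.0 / (1.0 + np.exp(-z))
```

A patient with higher cT/cN stage and a higher DLRS score lands further along the nomogram's total-points axis and therefore receives a higher predicted DM probability.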

A Cascaded Segmentation-Classification Deep Learning Framework for Preoperative Prediction of Occult Peritoneal Metastasis and Early Recurrence in Advanced Gastric Cancer.

Zou T, Chen P, Wang T, Lei T, Chen X, Yang F, Lin X, Li S, Yi X, Zheng L, Lin Y, Zheng B, Song J, Wang L

pubmed · Sep 4 2025
To develop a cascaded deep learning (DL) framework integrating tumor segmentation with metastatic risk stratification for preoperative prediction of occult peritoneal metastasis (OPM) in advanced gastric cancer (GC), and to validate its generalizability for early peritoneal recurrence (PR) prediction. This multicenter study enrolled 765 patients with advanced GC from three institutions. We developed a two-stage framework: (1) V-Net-based tumor segmentation on CT; (2) DL-based metastatic risk classification using the segmented tumor regions. Clinicopathological predictors were integrated with the deep learning probabilities to construct a combined model. Validation cohorts comprised internal validation (Test1 for OPM, n=168; Test2 for early PR, n=212) and external validation (Test3 for early PR, n=57, from two independent centers). Multivariable analysis identified Borrmann type (OR=1.314, 95% CI: 1.239-1.394), CA125 ≥35 U/mL (OR=1.301, 95% CI: 1.127-1.499), and CT-N+ stage (OR=1.259, 95% CI: 1.124-1.415) as independent OPM predictors. The combined model demonstrated robust performance for both OPM and early PR prediction: it achieved AUCs of 0.938 (Train) and 0.916 (Test1) for OPM, with improvements over the clinical (∆AUC +0.039 to +0.107) and DL-only models (∆AUC +0.044 to +0.104), while attaining AUCs of 0.820-0.825 for early PR (Test2 and Test3) with balanced sensitivity (79.7-88.9%) and specificity (72.4-73.3%). Decision curve analysis confirmed net clinical benefit across clinical thresholds. This CT-based cascaded framework enables reliable preoperative risk stratification for OPM and early PR in advanced GC, potentially refining indications for personalized therapeutic pathways.
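The hand-off between the two stages (a binary tumor mask from the segmentation network in, a cropped tumor region for the classifier out) can be sketched as a bounding-box crop; the margin parameter and this exact cropping strategy are assumptions for illustration:

```python
import numpy as np

def crop_to_mask(volume, mask, margin=2):
    # Compute the bounding box of the segmented tumor, pad it by a small
    # margin, and clamp to the volume bounds before cropping.
    idx = np.argwhere(mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + margin + 1, volume.shape)
    return volume[tuple(slice(l, h) for l, h in zip(lo, hi))]
```

Cropping to the predicted mask keeps the second-stage classifier focused on tumor tissue rather than the full CT field of view, which is the point of cascading segmentation before risk classification.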

Deep Learning for Segmenting Ischemic Stroke Infarction in Non-contrast CT Scans by Utilizing Asymmetry.

Sun J, Ju GL, Qu YH, Xie HH, Sun HX, Han SY, Li YF, Jia XQ, Yang Q

pubmed · Sep 4 2025
Non-contrast computed tomography (NCCT) is a first-line imaging technique for determining treatment options for acute ischemic stroke (AIS). However, its poor contrast and signal-to-noise ratio limit diagnostic accuracy for radiologists, and automated AIS lesion segmentation on NCCT also remains a challenge. This study aims to develop a segmentation method for ischemic lesions in NCCT scans, combining symmetry-based principles with the nnUNet segmentation model. Our approach integrates a Generative Module (GM) based on a 2.5D ResUNet and an Upstream Segmentation Module (UM) with additional inputs and constraints under the 3D nnUNet segmentation model, using symmetry-based learning to enhance the identification and segmentation of ischemic regions. We used the publicly accessible AISD dataset for our experiments. This dataset contains 397 NCCT scans of acute ischemic stroke taken within 24 h of symptom onset. Our method was trained and validated on 345 scans, while the remaining 52 scans were used for internal testing. Additionally, we included 60 positive cases (External Set 1) with segmentation labels obtained from our hospital for external validation of the segmentation task. External Set 2 was employed to evaluate the model's sensitivity and specificity in case-level classification, further assessing its clinical performance. We introduced features such as an intensity-based lesion probability (ILP) function and specific input channels for suspected lesion areas to augment the model's sensitivity and specificity. The method demonstrated commendable segmentation efficacy, attaining a Dice Similarity Coefficient (DSC) of 0.6720 and a Hausdorff Distance (HD95) of 35.28 on the internal test dataset. Similarly, on the external test dataset, the method yielded satisfactory segmentation outcomes, with a DSC of 0.4891 and an HD95 of 46.06. These metrics reflect a substantial overlap with expert-drawn boundaries and demonstrate the model's potential for reliable clinical application. In terms of classification performance, the method achieved an Area Under the Curve (AUC) of 0.991 on the external test set, surpassing nnUNet, which recorded an AUC of 0.947. This study introduces a novel segmentation technique for ischemic lesions in NCCT scans, leveraging symmetry-based principles integrated with nnUNet, and shows potential for improving clinical decision-making in stroke care.
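The symmetry principle the method exploits (ischemic tissue appears hypodense relative to the mirrored contralateral hemisphere) can be sketched as follows. This toy version assumes the mid-sagittal plane lies on the image midline and uses a hypothetical HU threshold; the paper's ILP function and learned modules are far more involved:

```python
import numpy as np

def asymmetry_map(slice_hu):
    # Difference between an axial slice (in Hounsfield units) and its
    # left-right mirror. A real pipeline would align the head first.
    return slice_hu - slice_hu[:, ::-1]

def hypodense_candidates(slice_hu, threshold=-10.0):
    # Strongly negative asymmetry means a region is darker than its
    # mirrored counterpart, flagging candidate ischemic voxels.
    return asymmetry_map(slice_hu) < threshold
```

Feeding such asymmetry-derived maps as extra input channels is one way the suspected-lesion channels described above can be realized.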

Machine Learning-Based Prediction of Lymph Node Metastasis and Volume Using Preoperative Ultrasound Features in Papillary Thyroid Carcinoma.

Hu T, Cai Y, Zhou T, Zhang Y, Huang K, Huang X, Qian S, Wang Q, Luo D

pubmed · Sep 4 2025
A predictive model of cervical lymph node metastasis and metastatic volume was constructed based on machine learning algorithms and preoperative ultrasound characteristics. A retrospective analysis was conducted on 573 PTC patients who underwent surgery at our institution from 2017 to 2022. Patient demographic and clinical characteristics were systematically collected. Feature selection was performed using univariate analysis and logistic regression (LR); statistically significant variables were identified using a threshold of p < 0.05. Predictive models for cervical lymph node metastasis and metastatic volume in papillary thyroid carcinoma were constructed using machine learning algorithms: K-Nearest Neighbors (KNN), Gradient Boosting Machine (XGBoost), and Support Vector Machine (SVM). Model performance was assessed on validation cohort data by evaluating the area under the Receiver Operating Characteristic (ROC) curve, sensitivity, specificity, and accuracy. In this retrospective cohort of 573 patients, 320 had lymph node metastasis, 127 had small-volume lymph node metastasis, and 193 had medium-volume lymph node metastasis. In the model predicting neck lymph node metastasis, the Gradient Boosting method exhibited the best performance, with an area under the ROC curve of 0.784, sensitivity of 76.2%, specificity of 70.6%, and accuracy of 73.8%. In the model predicting metastatic volume in neck lymph nodes for PTC, the Gradient Boosting method also demonstrated the best performance, with an area under the ROC curve of 0.779, sensitivity of 71.7%, specificity of 75.9%, and accuracy of 74.4%. Machine learning-based predictive models integrating preoperative ultrasound features demonstrate robust performance in stratifying neck lymph node metastasis risk for PTC patients.
These models optimize surgical planning by guiding lymph node dissection extent and individualizing treatment strategies, potentially reducing unnecessary extensive surgeries. The integration of advanced computational techniques with clinical imaging provides a data-driven paradigm for preoperative risk assessment in thyroid oncology.
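The area under the ROC curve reported for each model has a simple rank-based definition: the probability that a randomly chosen metastatic case scores higher than a randomly chosen non-metastatic one, with ties counting half. A minimal sketch:

```python
import numpy as np

def roc_auc(y_true, scores):
    # Rank-based AUC over all positive/negative pairs.
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 0.784, as reported for the Gradient Boosting metastasis model, therefore means that in about 78% of metastatic/non-metastatic patient pairs the model ranks the metastatic patient higher.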

A Physics-ASIC Architecture-Driven Deep Learning Photon-Counting Detector Model Under Limited Data.

Yu X, Wu Q, Qin W, Zhong T, Su M, Ma J, Zhang Y, Ji X, Wang W, Quan G, Du Y, Chen Y, Lai X

pubmed · Sep 4 2025
Photon-counting computed tomography (PCCT) based on photon-counting detectors (PCDs) represents a cutting-edge CT technology, offering higher spatial resolution, reduced radiation dose, and advanced material decomposition capabilities. Accurately modeling complex and nonlinear PCDs under limited calibration data remains a key challenge hindering the widespread accessibility of PCCT. This paper introduces a physics-ASIC architecture-driven deep learning detector model for PCDs. The model captures the comprehensive response of the PCD, encompassing both the sensor and ASIC responses. We present experimental results demonstrating the model's accuracy and robustness with limited calibration data. Key advances include reduced calibration errors, reasonable physics-ASIC parameter estimation, and high-quality, high-accuracy material decomposition images.

MUSiK: An Open Source Simulation Library for 3D Multi-view Ultrasound.

Chan TJ, Nair-Kanneganti A, Anthony B, Pouch A

pubmed · Sep 4 2025
Diagnostic ultrasound has long filled a crucial niche in medical imaging thanks to its portability, affordability, and favorable safety profile. Now, multi-view hardware and deep-learning-based image reconstruction algorithms promise to extend this niche to increasingly sophisticated applications, such as volume rendering and long-term organ monitoring. However, progress on these fronts is impeded by the complexities of ultrasound electronics and by the scarcity of high-fidelity radiofrequency data. Evidently, there is a critical need for tools that enable rapid ultrasound prototyping and generation of synthetic data. We meet this need with MUSiK, the first open-source ultrasound simulation library expressly designed for multi-view acoustic simulations of realistic anatomy. This library covers the full gamut of image acquisition: building anatomical digital phantoms, defining and positioning diverse transducer types, running simulations, and reconstructing images. In this paper, we demonstrate several use cases for MUSiK. We simulate in vitro multi-view experiments and compare the resolution and contrast of the resulting images. We then perform multiple conventional and experimental in vivo imaging tasks, such as 2D scans of the kidney, 2D and 3D echocardiography, 2.5D tomography of large regions, and 3D tomography for lesion detection in soft tissue. Finally, we introduce MUSiK's Bayesian reconstruction framework for multi-view ultrasound and validate an original SNR-enhancing reconstruction algorithm. We anticipate that these unique features will seed new hypotheses and accelerate the overall pace of ultrasound technological development. The MUSiK library is publicly available at github.com/norway99/MUSiK.