Page 5 of 2982972 results

Deep Learning-Based Instance-Level Segmentation of Kidney and Liver Cysts in CT Images of Patients Affected by Polycystic Kidney Disease.

Gregory AV, Khalifa M, Im J, Ramanathan S, Elbarougy DE, Cruz C, Yang H, Denic A, Rule AD, Chebib FT, Dahl NK, Hogan MC, Harris PC, Torres VE, Erickson BJ, Potretzke TA, Kline TL

PubMed · Aug 14, 2025
Total kidney and liver volumes are key image-based biomarkers to predict the severity of kidney and liver phenotype in autosomal dominant polycystic kidney disease (ADPKD). However, MRI-based advanced biomarkers like total cyst number (TCN) and cyst parenchyma surface area (CPSA) have been shown to more accurately assess cyst burden and improve the prediction of disease progression. The main aim of this study is to extend the calculation of advanced biomarkers to other imaging modalities; thus, we propose a fully automated model to segment kidney and liver cysts in CT images. Abdominal CTs of ADPKD patients were gathered retrospectively between 2001 and 2018. A 3D deep-learning method using the nnU-Net architecture was trained to learn cyst edges and cores as well as the non-cystic kidney/liver parenchyma. Separate segmentation models were trained for kidney cysts in contrast-enhanced CTs and liver cysts in non-contrast CTs using an active learning approach. Two experienced research fellows manually generated the reference standard segmentations, which were reviewed by an expert radiologist for accuracy. Two hundred CT scans from 148 patients (mean age, 51.2 ± 14.1 years; 48% male) were utilized for model training (80%) and testing (20%). In the test set, both models showed good agreement with the reference standard segmentations, similar to the agreement between two independent human readers (model vs reader: TCNkidney/liver r=0.96/0.97 and CPSAkidney r=0.98; inter-reader: TCNkidney/liver r=0.96/0.98 and CPSAkidney r=0.99). Our study demonstrates that automated models can segment kidney and liver cysts accurately in CT scans of patients with ADPKD.
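The agreement statistics above are Pearson correlations between model-derived and reader-derived cyst measures. As a minimal sketch of how such an r-value is computed — the per-scan cyst counts below are illustrative, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-scan total cyst numbers: automated model vs. manual reference
model_tcn  = [12, 45, 130, 8, 77, 210]
reader_tcn = [14, 43, 128, 9, 80, 205]
print(f"TCN agreement r = {pearson_r(model_tcn, reader_tcn):.3f}")
```

The same function applies unchanged to CPSA values or any other paired biomarker.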

Development and validation of deep learning model for detection of obstructive coronary artery disease in patients with acute chest pain: a multi-center study.

Kim JY, Park J, Lee KH, Lee JW, Park J, Kim PK, Han K, Baek SE, Im DJ, Choi BW, Hur J

PubMed · Aug 14, 2025
This study aimed to develop and validate a deep learning (DL) model to detect obstructive coronary artery disease (CAD, ≥ 50% stenosis) on coronary CT angiography (CCTA) among patients presenting to the emergency department (ED) with acute chest pain. The training dataset included 378 patients with acute chest pain who underwent CCTA (10,060 curved multiplanar reconstruction [MPR] images) from a single-center ED between January 2015 and December 2022. The external validation dataset included 298 patients from 3 ED centers between January 2021 and December 2022. A DL model based on You Only Look Once v4, which requires manual preprocessing for curved MPR extraction, was developed using 15 manually preprocessed MPR images per major coronary artery. Model performance was evaluated per artery and per patient. The training dataset included 378 patients (mean age 61.3 ± 12.2 years, 58.2% men); the external dataset included 298 patients (mean age 58.3 ± 13.8 years, 54.6% men). Obstructive CAD prevalence in the external dataset was 27.5% (82/298). The DL model achieved per-artery sensitivity, specificity, positive predictive value, negative predictive value (NPV), and area under the curve (AUC) of 92.7%, 89.9%, 62.6%, 98.5%, and 0.919, respectively, and per-patient values of 93.3%, 80.7%, 67.7%, 96.6%, and 0.871, respectively. The DL model demonstrated high sensitivity and NPV for identifying obstructive CAD in patients with acute chest pain undergoing CCTA, indicating its potential utility in aiding ED physicians in CAD detection.
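The per-patient performance figures reported above reduce to the four confusion-matrix counts. A minimal sketch of the standard definitions — the counts below are illustrative, chosen only to roughly match a 298-patient cohort with 82 positives, not taken from the paper:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard binary diagnostic metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # recall for the diseased class
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Illustrative counts for a cohort of 298 patients with 82 obstructive-CAD cases
m = diagnostic_metrics(tp=76, fp=42, tn=174, fn=6)
for name, value in m.items():
    print(f"{name}: {value:.1%}")
```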

Artificial Intelligence based fractional flow reserve.

Bednarek A, Gąsior P, Jaguszewski M, Buszman PP, Milewski K, Hawranek M, Gil R, Wojakowski W, Kochman J, Tomaniak M

PubMed · Aug 14, 2025
Fractional flow reserve (FFR) - a physiological indicator of coronary stenosis significance - has now become a widely used parameter also in the guidance of percutaneous coronary intervention (PCI). Several studies have shown the superiority of FFR compared to visual assessment, contributing to the reduction in clinical endpoints. However, the current approach to FFR assessment requires coronary instrumentation with a dedicated pressure wire, increasing the invasiveness, cost, and duration of the procedure. Alternative, noninvasive methods of FFR assessment based on computational fluid dynamics are being widely tested; these approaches are generally not fully automated and may sometimes require substantial computational power. Nowadays, one of the most rapidly expanding fields in medicine is the use of artificial intelligence (AI) in therapy optimization, diagnosis, treatment, and risk stratification. AI usage contributes to the development of more sophisticated methods of imaging analysis and allows for the derivation of clinically important parameters in a faster and more accurate way. Over recent years, AI utility in deriving FFR in a noninvasive manner has been increasingly reported. In this review, we critically summarize current knowledge in the field of AI-derived FFR based on data from computed tomography angiography, invasive angiography, optical coherence tomography, and intravascular ultrasound. Available solutions, possible future directions in optimizing cathlab performance, including the use of mixed reality, as well as current limitations hindering the wide adoption of these techniques, are overviewed.

Enhancing cardiac MRI reliability at 3 T using motion-adaptive B<sub>0</sub> shimming.

Huang Y, Malagi AV, Li X, Guan X, Yang CC, Huang LT, Long Z, Zepeda J, Zhang X, Yoosefian G, Bi X, Gao C, Shang Y, Binesh N, Lee HL, Li D, Dharmakumar R, Han H, Yang HR

PubMed · Aug 14, 2025
Magnetic susceptibility differences at the heart-lung interface introduce B<sub>0</sub>-field inhomogeneities that challenge cardiac MRI at high field strengths (≥ 3 T). Although hardware-based shimming has advanced, conventional approaches often neglect dynamic variations in thoracic anatomy caused by cardiac and respiratory motion, leading to residual off-resonance artifacts. This study aims to characterize motion-induced B<sub>0</sub>-field fluctuations in the heart and evaluate a deep learning-enabled motion-adaptive B<sub>0</sub> shimming pipeline to mitigate them. A motion-resolved B<sub>0</sub> mapping sequence was implemented at 3 T to quantify cardiac and respiratory-induced B<sub>0</sub> variations. A motion-adaptive shimming framework was then developed and validated through numerical simulations and human imaging studies. B<sub>0</sub>-field homogeneity and T<sub>2</sub>* mapping accuracy were assessed in multiple breath-hold positions using standard and motion-adaptive shimming. Respiratory motion significantly altered myocardial B<sub>0</sub> fields (p < 0.01), whereas cardiac motion had minimal impact (p = 0.49). Compared with conventional scanner shimming, motion-adaptive B<sub>0</sub> shimming yielded significantly improved field uniformity across both inspiratory (post-shim SD<sub>ratio</sub>: 0.68 ± 0.10 vs. 0.89 ± 0.11; p < 0.05) and expiratory (0.65 ± 0.16 vs. 0.84 ± 0.20; p < 0.05) breath-hold states. Corresponding improvements in myocardial T<sub>2</sub>* map homogeneity were observed, with reduced coefficient of variation (0.44 ± 0.19 vs. 0.39 ± 0.22; 0.59 ± 0.30 vs. 0.46 ± 0.21; both p < 0.01). The proposed motion-adaptive B<sub>0</sub> shimming approach effectively compensates for respiration-induced B<sub>0</sub> fluctuations, enhancing field homogeneity and reducing off-resonance artifacts. This strategy improves the robustness and reproducibility of T<sub>2</sub>* mapping, enabling more reliable high-field cardiac MRI.
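The post-shim SD ratio used above compares the spread of residual off-resonance after shimming to the spread before it (values below 1 indicate improvement). A minimal sketch with made-up field-map samples, not the study's measurements:

```python
import statistics

def sd_ratio(field_pre_hz, field_post_hz):
    """Ratio of off-resonance SD after vs. before shimming (< 1 = improvement)."""
    return statistics.pstdev(field_post_hz) / statistics.pstdev(field_pre_hz)

# Hypothetical myocardial off-resonance samples (Hz) before and after shimming
pre  = [-60, -25, 0, 30, 75, 110, -80, 45]
post = [-35, -15, 5, 18, 40, 60, -45, 25]
print(f"post-shim SD ratio: {sd_ratio(pre, post):.2f}")
```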

Severity Classification of Pediatric Spinal Cord Injuries Using Structural MRI Measures and Deep Learning: A Comprehensive Analysis across All Vertebral Levels.

Sadeghi-Adl Z, Naghizadehkashani S, Middleton D, Krisa L, Alizadeh M, Flanders AE, Faro SH, Wang Z, Mohamed FB

PubMed · Aug 14, 2025
Spinal cord injury (SCI) in the pediatric population presents a unique challenge in diagnosis and prognosis due to the complexity of performing clinical assessments on children. Accurate evaluation of structural changes in the spinal cord is essential for effective treatment planning. This study aims to evaluate structural characteristics in pediatric patients with SCI by comparing cross-sectional area (CSA), anterior-posterior (AP) width, and right-left (RL) width across all vertebral levels of the spinal cord between typically developing (TD) participants and participants with SCI. We employed deep learning techniques to utilize these measures for detecting SCI cases and determining their injury severity. Sixty-one pediatric participants (ages 6-18), including 20 with chronic SCI and 41 TD, were enrolled and scanned by using a 3T MRI scanner. All SCI participants underwent the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) test to assess their neurologic function and determine their American Spinal Injury Association (ASIA) Impairment Scale (AIS) category. T2-weighted MRI scans were utilized to measure CSA, AP width, and RL width along the entire cervical and thoracic cord. These measures were automatically extracted at every vertebral level of the spinal cord by using the spinal cord toolbox. Deep convolutional neural networks (CNNs) were utilized to classify participants into SCI or TD groups and determine their AIS classification based on structural parameters and demographic factors such as age and height. Significant differences (<i>P</i> < .05) were found in CSA, AP width, and RL width between SCI and TD participants, indicating notable structural alterations due to SCI. The CNN-based models demonstrated high performance, achieving 96.59% accuracy in distinguishing SCI from TD participants. Furthermore, the models determined AIS category classification with 94.92% accuracy.
The study demonstrates the effectiveness of integrating cross-sectional structural imaging measures with deep learning methods for classification and severity assessment of pediatric SCI. The deep learning approach outperforms traditional machine learning models in diagnostic accuracy, offering potential improvements in patient care in pediatric SCI management.

Preoperative ternary classification using DCE-MRI radiomics and machine learning for HCC, ICC, and HIPT.

Xie P, Liao ZJ, Xie L, Zhong J, Zhang X, Yuan W, Yin Y, Chen T, Lv H, Wen X, Wang X, Zhang L

PubMed · Aug 14, 2025
This study develops a machine learning model using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) radiomics and clinical data to preoperatively differentiate hepatocellular carcinoma (HCC), intrahepatic cholangiocarcinoma (ICC), and hepatic inflammatory pseudotumor (HIPT), addressing limitations of conventional diagnostics. This retrospective study included 280 patients (HCC = 160, ICC = 80, HIPT = 40) who underwent DCE-MRI from 2008 to 2024 at three hospitals. Radiomics features and clinical data were extracted and analyzed using LASSO regression and machine learning algorithms (Logistic Regression, Random Forest, and Extreme Gradient Boosting), with class weighting (HCC:ICC:HIPT = 1:2:4) to address class imbalance. Models were compared using macro-average Area Under the Curve (AUC), accuracy, recall, and precision. The fusion model, integrating radiomics and clinical features, achieved an AUC of 0.933 (95% CI: 0.91-0.95) and 84.5% accuracy, outperforming radiomics-only (AUC = 0.856, 72.6%) and clinical-only (AUC = 0.795, 66.7%) models (p < 0.05). Rim enhancement is a key model feature for distinguishing HCC from ICC and HIPT, while hepatic lobe atrophy distinguishes ICC and HIPT from HCC. This study developed a novel preoperative imaging-based model to differentiate HCC, ICC, and HIPT. The fusion model performed exceptionally well, demonstrating superior accuracy in ICC identification, significantly outperforming traditional diagnostic methods (e.g., radiology and biomarkers) and single-modality machine learning models (p < 0.05). This noninvasive approach enhances diagnostic precision and supports personalized treatment planning in liver disease management. 
This study develops a novel preoperative imaging-based machine learning model to differentiate hepatocellular carcinoma (HCC), intrahepatic cholangiocarcinoma (ICC), and hepatic inflammatory pseudotumor (HIPT), improving diagnostic accuracy and advancing personalized treatment strategies in clinical radiology. A machine learning model integrates DCE-MRI radiomics and clinical data for liver lesion differentiation. The fusion model outperforms single-modality models with 0.933 AUC and 84.5% accuracy. This model provides a noninvasive, reliable tool for personalized liver disease diagnosis and treatment planning.
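The macro-average AUC used above to compare models is the unweighted mean of one-vs-rest AUCs over the three classes. A minimal sketch using the rank-based (Mann-Whitney) formulation of AUC — the class labels follow the study (HCC/ICC/HIPT), but the scores are toy values for illustration:

```python
def auc_binary(labels, scores):
    """AUC via the Mann-Whitney U statistic (ties counted as 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auc_ovr(labels, score_rows, classes):
    """Macro-average one-vs-rest AUC for a multiclass problem.

    score_rows[i][k] is the predicted score of sample i for class k.
    """
    aucs = []
    for k, cls in enumerate(classes):
        binary = [1 if l == cls else 0 for l in labels]
        aucs.append(auc_binary(binary, [row[k] for row in score_rows]))
    return sum(aucs) / len(aucs)

# Toy 3-class example with hypothetical per-class scores
y = ["HCC", "HCC", "ICC", "ICC", "HIPT", "HIPT"]
s = [[0.8, 0.1, 0.1], [0.6, 0.3, 0.1], [0.2, 0.7, 0.1],
     [0.3, 0.5, 0.2], [0.1, 0.2, 0.7], [0.2, 0.3, 0.5]]
print(f"macro AUC: {macro_auc_ovr(y, s, ['HCC', 'ICC', 'HIPT']):.3f}")
```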

Contrast-enhanced ultrasound radiomics model for predicting axillary lymph node metastasis and prognosis in breast cancer: a multicenter study.

Li SY, Li YM, Fang YQ, Jin ZY, Li JK, Zou XM, Huang SS, Niu RL, Fu NQ, Shao YH, Gong XT, Li MR, Wang W, Wang ZL

PubMed · Aug 14, 2025
To construct a multimodal ultrasound (US) radiomics model for predicting axillary lymph node metastasis (ALNM) in breast cancer and to evaluate its application value in predicting ALNM and patient prognosis. From March 2014 to December 2022, data from 682 breast cancer patients from four hospitals were collected, including preoperative grayscale US, color Doppler flow imaging (CDFI), contrast-enhanced ultrasound (CEUS) imaging data, and clinical information. Data from the First Medical Center of PLA General Hospital were used as the training and internal validation sets, while data from Peking University First Hospital, the Cancer Hospital of the Chinese Academy of Medical Sciences, and the Fourth Medical Center of PLA General Hospital were used as the external validation set. LASSO regression was employed to select radiomic features (RFs), while eight machine learning algorithms were utilized to construct radiomic models based on US, CDFI, and CEUS. The prediction efficiency of ALNM was assessed to identify the optimal model. In addition, the Radscore was computed and integrated with immunoinflammatory markers to forecast disease-free survival (DFS) in breast cancer patients. Follow-up methods included telephone outreach and in-person hospital visits. The analysis employed Cox regression to pinpoint prognostic factors, while clinical-imaging models were developed accordingly. The performance of the model was evaluated using the C-index, Receiver Operating Characteristic (ROC) curves, calibration curves, and Decision Curve Analysis (DCA). In the training cohort (n = 400), 40% of patients had ALNM, with a mean age of 55 ± 10 years. The US + CDFI + CEUS-based radiomics model achieved Area Under the Curves (AUCs) of 0.88, 0.81, and 0.77 for predicting N0 versus N+ (≥ 1) in the training, internal, and external validation sets, respectively, outperforming the US-only model (P < 0.05).
For distinguishing N+ (1-2) from N+ (≥ 3), the model achieved AUCs of 0.89, 0.74, and 0.75. Combining radiomics scores with clinical immunoinflammatory markers (platelet count and neutrophil-to-lymphocyte ratio) yielded a clinical-radiomics model predicting DFS, with C-indices of 0.80, 0.73, and 0.79 across the three cohorts. In the external validation cohort, the clinical-radiomics model achieved higher AUCs for predicting 2-, 3-, and 5-year DFS compared to the clinical model alone (2-year: 0.79 vs. 0.66; 3-year: 0.83 vs. 0.70; 5-year: 0.78 vs. 0.64; all P < 0.05). Calibration and decision curve analyses demonstrated good model agreement and clinical utility. The multimodal ultrasound radiomics model based on US, CDFI, and CEUS could effectively predict ALNM in breast cancer. Furthermore, the combined application of radiomics and immune inflammation markers might predict the DFS of breast cancer patients to some extent.
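The C-indices reported for DFS prediction are Harrell's concordance: among comparable patient pairs (where it is known which one had the event first), the fraction in which the model assigns the higher risk to the earlier event. A minimal sketch with hypothetical follow-up data, not the study's cohort:

```python
def harrell_c_index(times, events, risks):
    """Harrell's C: fraction of comparable pairs ordered correctly by risk.

    times: follow-up time; events: 1 if the DFS event was observed, 0 if
    censored; risks: model risk score (higher = worse predicted prognosis).
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if subject i had an event before time j
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical cohort: months of DFS follow-up, event flags, model risk scores
t = [12, 30, 24, 60, 48]
e = [1, 1, 0, 0, 1]
r = [0.9, 0.4, 0.7, 0.2, 0.6]
print(f"C-index: {harrell_c_index(t, e, r):.2f}")
```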

Multimodal artificial intelligence for subepithelial lesion classification and characterization: a multicenter comparative study (with video).

Li J, Jing X, Zhang Q, Wang X, Wang L, Shan J, Zhou Z, Fan L, Gong X, Sun X, He S

PubMed · Aug 14, 2025
Subepithelial lesions (SELs) present significant diagnostic challenges in gastrointestinal endoscopy, particularly in differentiating malignant types, such as gastrointestinal stromal tumors (GISTs) and neuroendocrine tumors, from benign types like leiomyomas. Misdiagnosis can lead to unnecessary interventions or delayed treatment. To address this challenge, we developed ECMAI-WME, a parallel fusion deep learning model integrating white light endoscopy (WLE) and microprobe endoscopic ultrasonography (EUS), to improve SEL classification and lesion characterization. A total of 523 SELs from four hospitals were used to develop serial and parallel fusion AI models. The Parallel Model, demonstrating superior performance, was designated as ECMAI-WME. The model was tested on an external validation cohort (n = 88) and a multicenter test cohort (n = 274). Diagnostic performance, lesion characterization, and clinical decision-making support were comprehensively evaluated and compared with endoscopists' performance. The ECMAI-WME model significantly outperformed endoscopists in diagnostic accuracy (96.35% vs. 63.87-86.13%, p < 0.001) and treatment decision-making accuracy (96.35% vs. 78.47-86.13%, p < 0.001). It achieved 98.72% accuracy in internal validation, 94.32% in external validation, and 96.35% in multicenter testing. For distinguishing gastric GISTs from leiomyomas, the model reached 91.49% sensitivity, 100% specificity, and 96.38% accuracy. Lesion characteristics were identified with a mean accuracy of 94.81% (range: 90.51-99.27%). The model maintained robust performance despite class imbalance, confirmed by five complementary analyses. Subgroup analyses showed consistent accuracy across lesion size, location, or type (p > 0.05), demonstrating strong generalizability. 
The ECMAI-WME model demonstrates excellent diagnostic performance and robustness in the multiclass SEL classification and characterization, supporting its potential for real-time deployment to enhance diagnostic consistency and guide clinical decision-making.

AI post-intervention operational and functional outcomes prediction in ischemic stroke patients using MRIs.

Wittrup E, Reavey-Cantwell J, Pandey AS, Rivet Ii DJ, Najarian K

PubMed · Aug 14, 2025
Despite the potential clinical utility for acute ischemic stroke patients, predicting short-term operational outcomes like length of stay (LOS) and long-term functional outcomes such as the 90-day Modified Rankin Scale (mRS) remains a challenge, with limited current clinical guidance on expected patient trajectories. Machine learning approaches have increasingly aimed to bridge this gap, often utilizing admission-based clinical features; yet, the integration of imaging biomarkers remains underexplored, especially regarding whole 2.5D image fusion using advanced deep learning techniques. This study introduces a novel method leveraging autoencoders to integrate 2.5D diffusion-weighted imaging (DWI) with clinical features for refined outcome prediction. Results on a comprehensive dataset of AIS patients demonstrate that our autoencoder-based method has comparable performance to traditional convolutional neural network image-fusion methods and to clinical data alone (LOS > 8 days: AUC 0.817, AUPRC 0.573, F1-score 0.552; 90-day mRS > 2: AUC 0.754, AUPRC 0.685, F1-score 0.626). This novel integration of imaging and clinical data for post-intervention stroke prognosis has numerous computational and operational advantages over traditional image fusion methods. While further validation of the presented models is necessary before adoption, this approach aims to enhance personalized patient management and operational decision-making in healthcare settings.

An effective brain stroke diagnosis strategy based on feature extraction and hybrid classifier.

Elsayed MS, Saleh GA, Saleh AI, Khalil AT

PubMed · Aug 14, 2025
Stroke is a leading cause of death and long-term disability worldwide, and early detection remains a significant clinical challenge. This study proposes an Effective Brain Stroke Diagnosis Strategy (EBDS). The hybrid deep learning framework integrates Vision Transformer (ViT) and VGG16 to enable accurate and interpretable stroke detection from CT images. The model was trained and evaluated using a publicly available dataset from Kaggle, achieving impressive results: a test accuracy of 99.6%, a precision of 1.00 for normal cases and 0.98 for stroke cases, a recall of 0.99 for normal cases and 1.00 for stroke cases, and an overall F1-score of 0.99. These results demonstrate the robustness and reliability of the EBDS model, which outperforms several recent state-of-the-art methods. To enhance clinical trust, the model incorporates explainability techniques, such as Grad-CAM and LIME, which provide visual insights into its decision-making process. The EBDS framework is designed for real-time application in emergency settings, offering both high diagnostic performance and interpretability. This work addresses a critical research gap in early brain stroke diagnosis and contributes a scalable, explainable, and clinically relevant solution for medical imaging diagnostics.
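The per-class precision, recall, and F1 figures quoted above derive from each class's confusion counts. A minimal sketch for a binary normal/stroke split — the counts below are illustrative placeholders, not the study's test-set results:

```python
def per_class_prf(tp, fp, fn):
    """Precision, recall, and F1 for one class from its confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative (tp, fp, fn) counts on a held-out test split
for label, counts in {"normal": (495, 2, 5), "stroke": (248, 5, 2)}.items():
    p, r, f1 = per_class_prf(*counts)
    print(f"{label}: precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```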
