
Deep Learning-aided <sup>1</sup>H-MR Spectroscopy for Differentiating between Patients with and without Hepatocellular Carcinoma.

Bae JS, Lee HH, Kim H, Song IC, Lee JY, Han JK

pubmed · Aug 9 2025
Among patients with hepatitis B virus-associated liver cirrhosis (HBV-LC), there may be differences in the hepatic parenchyma between those with and without hepatocellular carcinoma (HCC). Proton MR spectroscopy (<sup>1</sup>H-MRS) is a well-established tool for noninvasive metabolomics, but its application in the liver has been challenging, with only a few metabolites other than lipids detectable. This study explores the potential of <sup>1</sup>H-MRS of the liver, in conjunction with deep learning, to differentiate between HBV-LC patients with and without HCC. Between August 2018 and March 2021, <sup>1</sup>H-MRS data were collected from 37 HBV-LC patients who underwent MRI for HCC surveillance, without HCC (HBV-LC group, n = 20) and with HCC (HBV-LC-HCC group, n = 17). Based on a priori knowledge from the first 10 patients of each group, large spectral datasets were simulated to develop two kinds of convolutional neural networks (CNNs): CNNs quantifying 15 metabolites and 5 lipid resonances (qCNNs), and CNNs classifying patients into HBV-LC and HBV-LC-HCC groups (cCNNs). The performance of the cCNNs was assessed using the remaining patients in the two groups (10 HBV-LC and 7 HBV-LC-HCC patients). On a simulated dataset, quantification errors with the qCNNs were significantly lower than those with a conventional nonlinear least-squares fitting method for all metabolites and lipids (P ≤ 0.004). The cCNNs exhibited sensitivity, specificity, and accuracy of 100% (7/7), 90% (9/10), and 94% (16/17), respectively, for identifying the HBV-LC-HCC group. Deep-learning-aided <sup>1</sup>H-MRS with data augmentation by spectral simulation may have potential for differentiating between HBV-LC patients with and without HCC.
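The reported test-set figures (7/7, 9/10, 16/17) are straightforward confusion-matrix ratios. As a minimal illustration in plain Python (not the authors' code):

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Test-set counts reported above: 7 HBV-LC-HCC patients (all detected),
# 10 HBV-LC patients (9 correctly classified).
sens, spec, acc = classification_metrics(tp=7, fp=1, tn=9, fn=0)
# sens = 1.0 (100%), spec = 0.9 (90%), acc = 16/17 (94%)
```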

Kidney volume after endovascular exclusion of abdominal aortic aneurysms by EVAR and FEVAR.

B S, C V, Turkia J B, Weydevelt E V, R P, F L, A K

pubmed · Aug 9 2025
Decreased kidney volume is a sign of renal aging and/or decreased vascularization. The aim of this study was to determine whether renal volume changes 24 months after exclusion of an abdominal aortic aneurysm (AAA), and to compare fenestrated (FEVAR) and infrarenal (EVAR) stent grafts. Retrospective single-center study from a prospective registry, including patients between 60 and 80 years of age with normal preoperative renal function (eGFR ≥ 60 ml/min/1.73 m<sup>-2</sup>) who underwent fenestrated (FEVAR) or infrarenal (EVAR) stent grafting between 2015 and 2021. Patients had to have had a CT scan at 24 months to be included. Exclusion criteria were stent grafts with renal branches, preoperative renal insufficiency, a single kidney, embolization or coverage of an accessory renal artery, occlusion of a renal artery during follow-up, and AAA rupture. Renal volume was measured using sizing software (EndoSize, Therenva) based on fully automatic deep-learning segmentation of several anatomical structures (arterial lumen, bone, thrombus, heart, etc.), including the kidneys. Renal cysts, when present, were manually excluded from the segmentation. Forty-eight patients were included (24 EVAR vs. 24 FEVAR), and 96 kidneys were segmented. There was no difference between groups in age (78.9 ± 6.7 years vs. 69.4 ± 6.8, p=0.89), eGFR (85.8 ± 12.4 [62-107] ml/min/1.73 m<sup>-2</sup> vs. 81 ± 16.2 [42-107], p=0.36), or renal volume (170.9 ± 29.7 [123-276] mL vs. 165.3 ± 37.4 [115-298], p=0.12). At 24 months in the EVAR group, there was a non-significant reduction in eGFR (84.1 ± 17.2 [61-128] ml/min/1.73 m<sup>-2</sup> vs. 81 ± 16.2 [42-107], p=0.36) and in renal volume (170.9 ± 29.7 [123-276] mL vs. 165.3 ± 37.4 [115-298], p=0.12). In the FEVAR group at 24 months, there was a non-significant fall in eGFR (84.1 ± 17.2 [61-128] ml/min/1.73 m<sup>-2</sup> vs. 73.8 ± 21.4 [40-110], p=0.09), while renal volume decreased significantly (182 ± 37.8 [123-293] mL vs. 158.9 ± 40.2 [45-258], p=0.007). In this study, there was a significant decrease in renal volume without a drop in eGFR 24 months after fenestrated stenting. This decrease may reflect changes in renal perfusion and could potentially be predictive of long-term renal impairment, although this cannot be confirmed within the limits of this small sample. Further studies with long-term follow-up are needed.
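Volume measurement from a deep-learning kidney segmentation reduces to counting mask voxels and scaling by voxel size. A minimal NumPy sketch (function name and toy numbers are illustrative, not taken from EndoSize):

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation mask in millilitres.

    mask: boolean/0-1 array (z, y, x); spacing_mm: voxel size per axis in mm.
    1 mL = 1000 mm^3.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0

# Hypothetical example: 200,000 segmented voxels at 1.0 x 0.8 x 0.8 mm spacing.
kidney = np.zeros((100, 100, 100), dtype=bool)
kidney[:20, :, :] = True  # 20 * 100 * 100 = 200,000 voxels
vol = mask_volume_ml(kidney, spacing_mm=(1.0, 0.8, 0.8))  # 128 mL
```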

MRI-based radiomics for preoperative T-staging of rectal cancer: a retrospective analysis.

Patanè V, Atripaldi U, Sansone M, Marinelli L, Del Tufo S, Arrichiello G, Ciardiello D, Selvaggi F, Martinelli E, Reginelli A

pubmed · Aug 8 2025
Preoperative T-staging in rectal cancer is essential for treatment planning, yet conventional MRI shows limited accuracy (approximately 60-78%). Our study investigates whether radiomic analysis of high-resolution T2-weighted MRI can non-invasively improve staging accuracy through a retrospective evaluation in a real-world surgical cohort. This single-center retrospective study included 200 patients (January 2024-April 2025) with pathologically confirmed rectal cancer, all of whom underwent preoperative high-resolution T2-weighted MRI within one week prior to curative surgery and received no neoadjuvant therapy. Manual segmentation was performed using ITK-SNAP, followed by extraction of 107 radiomic features via PyRadiomics. Feature selection employed mRMR and LASSO logistic regression, culminating in a Rad-score predictive model. Statistical performance was evaluated using ROC curves (AUC), accuracy, sensitivity, specificity, and DeLong's test. Among the 200 patients, 95 were pathologically staged as T2 and 105 as T3-T4 (55 T3, 50 T4). After preprocessing, 26 radiomic features were retained; key features, including ngtdm_contrast and ngtdm_coarseness, showed AUC values > 0.70. The LASSO-based model achieved an AUC of 0.82 (95% CI: 0.75-0.89), with overall accuracy of 81%, sensitivity of 78%, and specificity of 84%. Radiomic analysis of standard preoperative T2-weighted MRI provides a reliable, non-invasive method to predict rectal cancer T-stage. This approach has the potential to enhance staging accuracy and inform personalized surgical planning. Prospective multicenter validation is required for broader clinical implementation.
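The AUC used to evaluate a Rad-score can be computed directly as the Mann-Whitney pairwise-ordering statistic, without any library. A small sketch with hypothetical scores (not the study's data):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve as the probability that a positive case
    outscores a negative one (ties count half) -- the Mann-Whitney estimator."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical Rad-scores: T3-T4 cases tend to score higher than T2 cases.
a = auc(scores_pos=[0.9, 0.8, 0.7, 0.4], scores_neg=[0.6, 0.3, 0.2, 0.1])
# 15 of 16 positive-negative pairs correctly ordered -> 0.9375
```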

LLM-Based Extraction of Imaging Features from Radiology Reports: Automating Disease Activity Scoring in Crohn's Disease.

Dehdab R, Mankertz F, Brendel JM, Maalouf N, Kaya K, Afat S, Kolahdoozan S, Radmard AR

pubmed · Aug 8 2025
Large Language Models (LLMs) offer a promising solution for extracting structured clinical information from free-text radiology reports. The Simplified Magnetic Resonance Index of Activity (sMARIA) is a validated scoring system used to quantify Crohn's disease (CD) activity based on Magnetic Resonance Enterography (MRE) findings. This study aims to evaluate the performance of two advanced LLMs in extracting key imaging features and computing sMARIA scores from free-text MRE reports. This retrospective study included 117 anonymized free-text MRE reports from patients with confirmed CD. ChatGPT (GPT-4o) and DeepSeek (DeepSeek-R1) were prompted using a structured input designed to extract four key radiologic features relevant to sMARIA: bowel wall thickness, mural edema, perienteric fat stranding, and ulceration. LLM outputs were evaluated against radiologist annotations at both the segment and feature levels. Segment-level agreement was assessed using accuracy, mean absolute error (MAE) and Pearson correlation. Feature-level performance was evaluated using sensitivity, specificity, precision, and F1-score. Errors including confabulations were recorded descriptively. ChatGPT achieved a segment-level accuracy of 98.6%, MAE of 0.17, and Pearson correlation of 0.99. DeepSeek achieved 97.3% accuracy, MAE of 0.51, and correlation of 0.96. At the feature level, ChatGPT yielded an F1-score of 98.8% (precision 97.8%, sensitivity 99.9%), while DeepSeek achieved 97.9% (precision 96.0%, sensitivity 99.8%). LLMs demonstrate near-human accuracy in extracting structured information and computing sMARIA scores from free-text MRE reports. This enables automated assessment of CD activity without altering current reporting workflows, supporting longitudinal monitoring and large-scale research. Integration into clinical decision support systems may be feasible in the future, provided appropriate human oversight and validation are ensured.
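Once the four binary findings are extracted per bowel segment, the sMARIA score is a fixed weighted sum. A sketch assuming the published sMaRIA weights (1.5 for wall thickening > 3 mm, 1.0 for edema, 1.1 for fat stranding, 2.0 for ulcers); the function name is illustrative:

```python
def smaria_segment(wall_thickness_mm, edema, fat_stranding, ulcers):
    """Per-segment simplified MaRIA score from the four findings the LLMs
    extract; weights assume the published sMaRIA formula:
    1.5 x (thickening > 3 mm) + 1.0 x edema + 1.1 x fat stranding + 2.0 x ulcers.
    """
    return (1.5 * (wall_thickness_mm > 3)
            + 1.0 * bool(edema)
            + 1.1 * bool(fat_stranding)
            + 2.0 * bool(ulcers))

# A segment with 5 mm wall thickening, edema, and ulceration, no fat stranding:
score = smaria_segment(5.0, edema=True, fat_stranding=False, ulcers=True)
# 1.5 + 1.0 + 0 + 2.0 = 4.5
```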

An evaluation of rectum contours generated by artificial intelligence automatic contouring software using geometry, dosimetry and predicted toxicity.

Mc Laughlin O, Gholami F, Osman S, O'Sullivan JM, McMahon SJ, Jain S, McGarry CK

pubmed · Aug 7 2025
Objective: This study assesses rectum contours generated using a commercial deep learning auto-contouring model and compares them to clinician contours using geometry, changes in dosimetry, and toxicity modelling. Approach: This retrospective study involved 308 prostate cancer patients who were treated using 3D-conformal radiotherapy. Computed tomography images were input into Limbus Contour (v1.8.0b3) to generate auto-contour structures for each patient. Auto-contours were not edited after their generation. Rectum auto-contours were compared to clinician contours geometrically and dosimetrically. Dice similarity coefficient (DSC), mean Hausdorff distance (HD), and volume difference were assessed. Dose-volume histogram (DVH) constraints (V41%-V100%) were compared, and a Wilcoxon signed-rank test was used to evaluate the statistical significance of differences. Toxicity modelling to compare contours was carried out using equivalent uniform dose (EUD) and the clinical factors of abdominal surgery and atrial fibrillation. Trained models were tested (80:20 split) on their prediction of grade 1 late rectal bleeding (n=124) using the area under the receiver operating characteristic curve (AUC). Main results: Median DSC (interquartile range, IQR) was 0.85 (0.09), median HD was 1.38 mm (0.60 mm), and median volume difference was -1.73 cc (14.58 cc). Median DVH differences between contours were small (<1.5%) for all constraints, although systematically larger than clinician contours (p<0.05). However, an IQR of up to 8.0% was seen for individual patients across all dose constraints. Models using EUD alone derived from clinician or auto-contours had AUCs of 0.60 (0.10) and 0.60 (0.09). AUCs for models involving clinical factors and dosimetry were 0.65 (0.09) and 0.66 (0.09) when using clinician contours and auto-contours, respectively. Significance: Although median DVH metrics were similar, variation for individual patients highlights the importance of clinician review. Rectal bleeding prediction accuracy did not depend on the contour method for this cohort. The auto-contouring model used in this study shows promise in a supervised workflow.
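The DSC reported above compares two binary masks as twice their overlap divided by the sum of their volumes. A toy NumPy sketch (the masks are illustrative, not patient data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Hypothetical clinician vs. auto-contour masks on a toy 2D grid:
clin = np.zeros((10, 10), dtype=bool); clin[2:8, 2:8] = True  # 36 pixels
auto = np.zeros((10, 10), dtype=bool); auto[3:9, 3:9] = True  # 36 pixels
d = dice(clin, auto)  # overlap is 5x5 = 25, so DSC = 50/72
```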

MM2CT: MR-to-CT translation for multi-modal image fusion with mamba

Chaohui Gong, Zhiying Wu, Zisheng Huang, Gaofeng Meng, Zhen Lei, Hongbin Liu

arxiv preprint · Aug 7 2025
Magnetic resonance (MR)-to-computed tomography (CT) translation offers significant advantages, including the elimination of the radiation exposure associated with CT scans and the mitigation of imaging artifacts caused by patient motion. Existing approaches are based on single-modality MR-to-CT translation, with limited research exploring multi-modal fusion. To address this limitation, we introduce MM2CT, a Mamba-based multi-modal MR-to-CT translation framework that leverages both T1- and T2-weighted MRI data. Mamba effectively overcomes the limited local receptive field of CNNs and the high computational complexity of Transformers; MM2CT exploits this advantage to retain long-range dependency modeling while integrating multi-modal MR features. Additionally, we incorporate a dynamic local convolution module and a dynamic enhancement module to improve MR-to-CT synthesis. Experiments on a public pelvis dataset demonstrate that MM2CT achieves state-of-the-art performance in terms of Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR). Our code is publicly available at https://github.com/Gots-ch/MM2CT.
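PSNR, one of the two synthesis metrics named above, is a simple function of mean squared error and dynamic range. A minimal NumPy sketch (toy arrays, not the pelvis dataset):

```python
import numpy as np

def psnr(reference, synthesized, data_range):
    """Peak signal-to-noise ratio in dB between a reference CT and a
    synthesized CT, given the dynamic range of the reference image."""
    mse = np.mean((reference.astype(np.float64) - synthesized.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Toy example: a constant error of 1 intensity unit over a 0-255 range.
ref = np.zeros((8, 8))
syn = ref + 1.0
p = psnr(ref, syn, data_range=255.0)  # 10 * log10(255^2) ≈ 48.13 dB
```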

FedGIN: Federated Learning with Dynamic Global Intensity Non-linear Augmentation for Organ Segmentation using Multi-modal Images

Sachin Dudda Nagaraju, Ashkan Moradi, Bendik Skarre Abrahamsen, Mattijs Elschot

arxiv preprint · Aug 7 2025
Medical image segmentation plays a crucial role in AI-assisted diagnostics, surgical planning, and treatment monitoring. Accurate and robust segmentation models are essential for enabling reliable, data-driven clinical decision making across diverse imaging modalities. Given the inherent variability in image characteristics across modalities, a unified model capable of generalizing effectively to multiple modalities would be highly beneficial: it could streamline clinical workflows and reduce the need for modality-specific training. However, real-world deployment faces major challenges, including data scarcity, domain shift between modalities (e.g., CT vs. MRI), and privacy restrictions that prevent data sharing. To address these issues, we propose FedGIN, a Federated Learning (FL) framework that enables multimodal organ segmentation without sharing raw patient data. Our method integrates a lightweight Global Intensity Non-linear (GIN) augmentation module that harmonizes modality-specific intensity distributions during local training. We evaluated FedGIN in two dataset scenarios: a limited dataset and a complete dataset. In the limited-dataset scenario, the model was initially trained using only MRI data, and CT data was then added to assess the resulting performance improvement. In the complete-dataset scenario, both MRI and CT data were fully utilized for training on all clients. In the limited-data scenario, FedGIN achieved a 12 to 18% improvement in 3D Dice scores on MRI test cases compared to FL without GIN and consistently outperformed local baselines. In the complete-dataset scenario, FedGIN demonstrated near-centralized performance, with a 30% Dice score improvement over the MRI-only baseline and a 10% improvement over the CT-only baseline, highlighting its strong cross-modality generalization under privacy constraints.
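The idea of harmonizing modality-specific intensity profiles can be caricatured with a random monotone intensity remap applied during local training. The gamma curve below is a simplified stand-in for the paper's GIN module, not its actual implementation:

```python
import numpy as np

def intensity_augment(image, rng):
    """Simplified stand-in for global intensity non-linear augmentation:
    normalize intensities to [0, 1], then apply a random monotone
    non-linearity (a gamma curve) so that local training sees varied
    intensity distributions across rounds."""
    lo, hi = image.min(), image.max()
    norm = (image - lo) / (hi - lo + 1e-8)
    gamma = rng.uniform(0.5, 2.0)  # random per-sample non-linearity
    return norm ** gamma           # stays in [0, 1], order-preserving

rng = np.random.default_rng(0)
img = np.linspace(0.0, 100.0, 11)  # toy 1D "image"
aug = intensity_augment(img, rng)
```

Because the remap is monotone, anatomical structure (the ordering of intensities) is preserved while the histogram changes, which is the property such augmentation relies on.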

Response Assessment in Hepatocellular Carcinoma: A Primer for Radiologists.

Mroueh N, Cao J, Srinivas Rao S, Ghosh S, Song OK, Kongboonvijit S, Shenoy-Bhangle A, Kambadakone A

pubmed · Aug 7 2025
Hepatocellular carcinoma (HCC) is the third leading cause of cancer-related deaths worldwide, necessitating accurate and early diagnosis to guide therapy, along with assessment of treatment response. Response assessment criteria have evolved from traditional morphologic approaches, such as the WHO criteria and the Response Evaluation Criteria in Solid Tumors (RECIST), to more recent methods focused on evaluating viable tumor burden, including the European Association for the Study of the Liver (EASL) criteria, modified RECIST (mRECIST), and the Liver Imaging Reporting and Data System (LI-RADS) Treatment Response (LI-TR) algorithm. This shift reflects the complex and evolving landscape of HCC treatment in the context of emerging systemic and locoregional therapies. Each of these criteria has its own nuanced strengths and limitations in capturing the detailed characteristics of HCC treatment response. The emergence of functional imaging techniques, including dual-energy CT and perfusion imaging, together with the rising use of radiomics, is enhancing the capabilities of response assessment. Growth in artificial intelligence and machine learning models provides an opportunity to refine the precision of response assessment by facilitating the analysis of complex imaging data patterns. This review article provides a comprehensive overview of existing criteria, discusses functional and emerging imaging techniques, and outlines future directions for advancing HCC tumor response assessment.
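Under mRECIST, target-lesion response categories follow fixed thresholds on the change in summed viable-tumor diameters. A simplified sketch (it compares against baseline only, whereas the full criteria assess progression relative to the nadir; function and parameter names are illustrative):

```python
def mrecist_target_response(baseline_sum_mm, followup_sum_mm, has_new_lesion=False):
    """Target-lesion response from sums of viable-tumor diameters:
    complete disappearance of viable tumor = CR, >= 30% decrease = PR,
    >= 20% increase or a new lesion = PD, otherwise SD."""
    if has_new_lesion:
        return "PD"
    if followup_sum_mm == 0:
        return "CR"
    change = (followup_sum_mm - baseline_sum_mm) / baseline_sum_mm
    if change <= -0.30:
        return "PR"
    if change >= 0.20:
        return "PD"
    return "SD"

r = mrecist_target_response(50.0, 30.0)  # a 40% decrease in viable tumor
```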

Automatic Multi-Stage Classification Model for Fetal Ultrasound Images Based on EfficientNet.

Shih CS, Chiu HW

pubmed · Aug 7 2025
This study aims to enhance the accuracy of fetal ultrasound image classification using convolutional neural networks, specifically EfficientNet. The research focuses on data collection, preprocessing, model training, and evaluation at different pregnancy stages: early, midterm, and newborn. EfficientNet showed the best performance, particularly in the newborn stage, demonstrating deep learning's potential to improve classification performance and support clinical workflows.

Deep Learning-Based Cascade 3D Kidney Segmentation Method.

Hao Z, Chapman BE

pubmed · Aug 7 2025
Renal tumors require early diagnosis and precise localization for effective treatment. This study aims to automate renal tumor analysis in abdominal CT images using a cascade 3D U-Net architecture for semantic kidney segmentation. To address challenges like edge detection and small object segmentation, the framework incorporates residual blocks to enhance convergence and efficiency. Comprehensive training configurations, preprocessing, and postprocessing strategies were employed to ensure accurate results. Tested on KiTS2019 data, the method ranked 23rd on the leaderboard (Nov 2024), demonstrating the enhanced cascade 3D U-Net's effectiveness in improving segmentation precision.
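A typical cascade design runs a coarse model first, then crops the volume to the coarse output before the fine model sees it, raising the effective resolution of the second stage. A NumPy sketch of that cropping step (illustrative of the general cascade pattern, not the authors' pipeline):

```python
import numpy as np

def crop_to_coarse_mask(volume, coarse_mask, margin=8):
    """Second-stage input for a cascade: crop the CT volume to the bounding
    box of the stage-1 (coarse) kidney mask plus a safety margin."""
    zs, ys, xs = np.nonzero(coarse_mask)
    lo = [max(int(a.min()) - margin, 0) for a in (zs, ys, xs)]
    hi = [min(int(a.max()) + margin + 1, s)
          for a, s in zip((zs, ys, xs), volume.shape)]
    sl = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[sl], sl

# Toy volume with a coarse mask occupying a 10x10x10 block:
vol = np.zeros((64, 64, 64))
mask = np.zeros_like(vol, dtype=bool)
mask[20:30, 25:35, 30:40] = True
roi, sl = crop_to_coarse_mask(vol, mask, margin=4)  # 18x18x18 ROI
```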