
Dendrite cross attention for high-dose-rate brachytherapy distribution planning.

Saini S, Liu X

PubMed | Aug 10 2025
Cervical cancer is a significant global health issue, and high-dose-rate brachytherapy (HDR-BT) is crucial for its treatment. However, manually creating HDR-BT plans is time-consuming and relies heavily on the planner's expertise, making standardization difficult. This study introduces two advanced deep learning models to address this need: Bi-branch Cross-Attention UNet (BiCA-UNet) and Dendrite Cross-Attention UNet (DCA-UNet). BiCA-UNet enhances the correlation between the CT scan and the segmentation maps of the clinical target volume (CTV), applicator, bladder, and rectum. It uses two branches: one processes the stacked input of CT scans and segmentations, and the other focuses on the CTV segmentation. A cross-attention mechanism integrates these branches, improving the model's understanding of the CTV region for accurate dose prediction. Building on BiCA-UNet, DCA-UNet introduces a primary branch of stacked inputs and three secondary branches for the CTV, bladder, and rectum segmentations, forming a dendritic structure. Cross-attention with the bladder and rectum segmentations helps the model understand the regions of the organs at risk (OARs), refining dose prediction. Evaluation of these models using multiple metrics indicates that both BiCA-UNet and DCA-UNet significantly improve HDR-BT dose prediction accuracy for various applicator types. The cross-attention mechanisms enhance the feature representation of critical anatomical regions, leading to precise and reliable treatment plans. This research highlights the potential of BiCA-UNet and DCA-UNet in advancing HDR-BT planning, contributing to the standardization of treatment plans, and offering promising directions for future research to improve patient outcomes.
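As a rough illustration of the bi-branch cross-attention fusion described in this abstract (not the authors' implementation), the PyTorch sketch below lets features from a primary branch (stacked CT + segmentations) attend to features from a secondary CTV branch; tensor shapes, channel counts, and head counts are assumptions for demonstration.

```python
# Hedged sketch of bi-branch cross-attention fusion (illustrative only).
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Fuse primary-branch features (query) with secondary-branch features (key/value)."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, primary: torch.Tensor, secondary: torch.Tensor) -> torch.Tensor:
        # primary, secondary: (B, C, H, W) feature maps from the two encoder branches.
        b, c, h, w = primary.shape
        q = primary.flatten(2).transpose(1, 2)     # (B, H*W, C)
        kv = secondary.flatten(2).transpose(1, 2)  # (B, H*W, C)
        fused, _ = self.attn(q, kv, kv)            # query attends to the other branch
        fused = self.norm(fused + q)               # residual connection
        return fused.transpose(1, 2).reshape(b, c, h, w)

if __name__ == "__main__":
    fusion = CrossAttentionFusion(channels=64)
    ct_branch = torch.randn(1, 64, 32, 32)   # features from the stacked CT + segmentation input
    ctv_branch = torch.randn(1, 64, 32, 32)  # features from the CTV-only branch
    print(fusion(ct_branch, ctv_branch).shape)  # torch.Size([1, 64, 32, 32])
```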

Prediction of Benign and Malignant Small Renal Masses Using CT-Derived Extracellular Volume Fraction: An Interpretable Machine Learning Model.

Guo Y, Fang Q, Li Y, Yang D, Chen L, Bai G

PubMed | Aug 9 2025
We developed a machine learning model comprising morphological characteristics, enhancement dynamics, and the extracellular volume (ECV) fraction for distinguishing malignant from benign small renal masses (SRMs), supporting personalised management. This retrospective analysis involved 230 patients who underwent SRM resection with preoperative imaging, including 185 internal and 45 external cases. The internal cohort was split into training (n=136) and validation (n=49) sets. Histopathological evaluation categorised the lesions as renal cell carcinomas (n=183) or benign masses (n=47). Eleven multiphasic contrast-enhanced computed tomography (CT) parameters, including the ECV fraction, were manually measured, along with clinical and laboratory data. Feature selection involved univariate analysis and least absolute shrinkage and selection operator (LASSO) regularisation. The selected features informed various machine learning classifiers, and performance was evaluated using receiver operating characteristic curves and classification tests. The optimal model was interpreted using SHapley Additive exPlanations (SHAP). The analysis included 183 carcinoma and 47 benign SRM cases. Feature selection identified seven discriminative parameters, including the ECV fraction, which informed multiple machine learning models. The Extreme Gradient Boosting model incorporating ECV exhibited optimal performance in distinguishing malignant from benign SRMs, achieving area under the curve values of 0.993 (internal training set), 0.986 (internal validation set), and 0.951 (external test set). SHAP analysis confirmed ECV as the top contributor to SRM characterisation. The integration of the multiphase contrast-enhanced CT-derived ECV fraction with conventional contrast-enhanced CT parameters demonstrated diagnostic efficacy in differentiating malignant from benign SRMs.
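A hedged sketch of the kind of pipeline described here (gradient-boosted classifier on tabular CT parameters including an ECV column, explained with SHAP): the data, feature names, and hyperparameters below are synthetic placeholders, not the study's variables.

```python
# Illustrative sketch (synthetic data): XGBoost classifier with SHAP explanation.
import numpy as np
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 230
X = pd.DataFrame({
    "ecv_fraction": rng.normal(30, 8, n),        # CT-derived extracellular volume fraction (%)
    "lesion_size_mm": rng.normal(28, 9, n),
    "corticomedullary_hu": rng.normal(90, 25, n),
    "nephrographic_hu": rng.normal(110, 30, n),
})
# Synthetic label loosely driven by ECV, purely for demonstration.
y = (X["ecv_fraction"] + rng.normal(0, 6, n) > 30).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42, stratify=y)
model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05, eval_metric="logloss")
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

explainer = shap.TreeExplainer(model)             # SHAP values quantify each feature's contribution
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te, show=False)  # global importance plot (e.g. ECV ranking)
```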

Kidney volume after endovascular exclusion of abdominal aortic aneurysms by EVAR and FEVAR.

B S, C V, Turkia J B, Weydevelt E V, R P, F L, A K

PubMed | Aug 9 2025
Decreased kidney volume is a sign of renal aging and/or decreased vascularization. The aim of this study was to determine whether renal volume changes 24 months after exclusion of an abdominal aortic aneurysm (AAA), and to compare fenestrated (FEVAR) and infrarenal (EVAR) stent grafts. This was a retrospective single-center study from a prospective registry, including patients between 60 and 80 years of age with normal preoperative renal function (eGFR ≥ 60 mL/min/1.73 m²) who underwent fenestrated (FEVAR) or infrarenal (EVAR) stent grafting between 2015 and 2021. Patients had to have had a CT scan at 24 months to be included. Exclusion criteria were stent grafts with renal branches, preoperative renal insufficiency, a single kidney, embolization or coverage of an accessory renal artery, occlusion of a renal artery during follow-up, and mention of AAA rupture. Renal volume was measured using sizing software (EndoSize, Therenva) based on fully automatic deep-learning segmentation of several anatomical structures (arterial lumen, bone structure, thrombus, heart, etc.), including the kidneys. When renal cysts were present, they were manually excluded from the segmentation. Forty-eight patients were included (24 EVAR vs. 24 FEVAR), and 96 kidneys were segmented. There was no difference between groups in age (78.9±6.7 years vs. 69.4±6.8, p=0.89), eGFR (85.8 ± 12.4 [62-107] mL/min/1.73 m² vs. 81 ± 16.2 [42-107], p=0.36), or renal volume (170.9 ± 29.7 [123-276] mL vs. 165.3 ± 37.4 [115-298], p=0.12). At 24 months in the EVAR group, there was a non-significant reduction in eGFR (84.1 ± 17.2 [61-128] mL/min/1.73 m² vs. 81 ± 16.2 [42-107], p=0.36) and in renal volume (170.9 ± 29.7 [123-276] mL vs. 165.3 ± 37.4 [115-298], p=0.12). In the FEVAR group at 24 months, there was a non-significant fall in eGFR (84.1 ± 17.2 [61-128] mL/min/1.73 m² vs. 73.8 ± 21.4 [40-110], p=0.09), while renal volume decreased significantly (182 ± 37.8 [123-293] mL vs. 158.9 ± 40.2 [45-258], p=0.007). In this study, there appears to be a significant decrease in renal volume without a drop in eGFR 24 months after fenestrated stenting. This decrease may reflect changes in renal perfusion and could potentially be predictive of long-term renal impairment, although this cannot be confirmed within the limits of this small sample. Further studies with long-term follow-up are needed.
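A minimal sketch of the kind of paired baseline-versus-24-month comparison reported above, using synthetic volumes; the non-parametric test choice here is an assumption, not necessarily the study's own analysis.

```python
# Minimal sketch of a paired baseline-vs-24-month comparison with synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_patients = 24
baseline_volume = rng.normal(182, 38, n_patients)                    # mL (synthetic)
followup_volume = baseline_volume - rng.normal(20, 15, n_patients)   # simulated 24-month change

# Wilcoxon signed-rank test for paired, possibly non-normal measurements.
stat, p_value = stats.wilcoxon(baseline_volume, followup_volume)
print(f"median change: {np.median(followup_volume - baseline_volume):.1f} mL, p = {p_value:.3f}")
```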

Deep Learning-aided ¹H-MR Spectroscopy for Differentiating between Patients with and without Hepatocellular Carcinoma.

Bae JS, Lee HH, Kim H, Song IC, Lee JY, Han JK

PubMed | Aug 9 2025
Among patients with hepatitis B virus-associated liver cirrhosis (HBV-LC), there may be differences in the hepatic parenchyma between those with and without hepatocellular carcinoma (HCC). Proton MR spectroscopy (¹H-MRS) is a well-established tool for noninvasive metabolomics, but its application in the liver has been challenging, allowing only a few metabolites other than lipids to be detected. This study aims to explore the potential of ¹H-MRS of the liver in conjunction with deep learning to differentiate between HBV-LC patients with and without HCC. Between August 2018 and March 2021, ¹H-MRS data were collected from 37 HBV-LC patients who underwent MRI for HCC surveillance, without HCC (HBV-LC group, n = 20) and with HCC (HBV-LC-HCC group, n = 17). Based on a priori knowledge from the first 10 patients of each group, large spectral datasets were simulated to develop two kinds of convolutional neural networks (CNNs): CNNs quantifying 15 metabolites and 5 lipid resonances (qCNNs), and CNNs classifying patients into the HBV-LC and HBV-LC-HCC groups (cCNNs). The performance of the cCNNs was assessed using the remaining patients in the two groups (10 HBV-LC and 7 HBV-LC-HCC patients). Using a simulated dataset, the quantitative errors with the qCNNs were significantly lower than those with a conventional nonlinear least-squares fitting method for all metabolites and lipids (P ≤ 0.004). The cCNNs exhibited sensitivity, specificity, and accuracy of 100% (7/7), 90% (9/10), and 94% (16/17), respectively, for identifying the HBV-LC-HCC group. Deep-learning-aided ¹H-MRS with data augmentation by spectral simulation may have potential in differentiating between HBV-LC patients with and without HCC.
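A hedged sketch of a small 1-D CNN operating on simulated spectra, in the spirit of the qCNN/cCNN approach described above; the architecture, spectrum length, and output size are assumptions, not the authors' network.

```python
# Illustrative 1-D CNN for MR spectra (not the authors' architecture).
import torch
import torch.nn as nn

class SpectrumCNN(nn.Module):
    def __init__(self, n_points: int = 1024, n_outputs: int = 20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(), nn.AdaptiveAvgPool1d(8),
        )
        self.head = nn.Linear(64 * 8, n_outputs)  # e.g. 15 metabolite + 5 lipid amplitudes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_points) real-valued spectrum
        return self.head(self.features(x).flatten(1))

if __name__ == "__main__":
    # Random tensors stand in for simulated/augmented training spectra.
    spectra = torch.randn(8, 1, 1024)
    model = SpectrumCNN()
    print(model(spectra).shape)  # torch.Size([8, 20])
```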

Ultrasound-Based Machine Learning and SHapley Additive exPlanations Method Evaluating Risk of Gallbladder Cancer: A Bicentric and Validation Study.

Chen B, Zhong H, Lin J, Lyu G, Su S

PubMed | Aug 9 2025
This study aims to construct and evaluate eight machine learning models by integrating ultrasound imaging features, clinical characteristics, and serological features to assess the risk of gallbladder cancer (GBC) in patients. A retrospective analysis was conducted on ultrasound and clinical data of 300 suspected GBC patients who visited the Second Affiliated Hospital of Fujian Medical University from January 2020 to January 2024 and 69 patients who visited the Zhongshan Hospital Affiliated to Xiamen University from January 2024 to January 2025. Key relevant features were selected using Least Absolute Shrinkage and Selection Operator (LASSO) regression. Predictive models were constructed using XGBoost, logistic regression, support vector machine, k-nearest neighbors, random forest, decision tree, naive Bayes, and neural network, with the SHapley Additive exPlanations (SHAP) method employed to explain model interpretability. The LASSO regression demonstrated that gender, age, alkaline phosphatase (ALP), clarity of the interface with the liver, stratification of the gallbladder wall, intracapsular anechoic lesions, and intracapsular punctiform strong lesions were key features for GBC. The XGBoost model demonstrated an area under the receiver operating characteristic curve (AUC) of 0.934, 0.916, and 0.813 in the training, validation, and test sets, respectively. SHAP analysis ranked the factors by importance as clarity of the interface with the liver, stratification of the gallbladder wall, intracapsular anechoic lesions, intracapsular punctiform strong lesions, ALP, gender, and age. Personalized prediction explanations through SHAP values demonstrated the contribution of each feature to the final prediction, enhancing result interpretability. Furthermore, decision plots were generated to display the influence trajectory of each feature on model predictions, aiding in analyzing which features had the greatest impact on mispredictions and thereby facilitating further model optimization or feature adjustment. This study proposed a GBC machine learning model based on ultrasound, clinical, and serological characteristics, indicating the superior performance of the XGBoost model and enhancing model interpretability through the SHAP method.
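The workflow above (LASSO-style feature selection followed by comparison of several classifiers) could be sketched as below; the data are synthetic placeholders and only a subset of the eight model families is shown.

```python
# Sketch of L1-penalised feature selection followed by multi-classifier comparison.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=369, n_features=20, n_informative=7, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

# L1-penalised logistic regression as a LASSO-style feature selector.
lasso = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
selector = SelectFromModel(lasso, importance_getter="named_steps.logisticregression.coef_")
selector.fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(probability=True),
    "knn": KNeighborsClassifier(),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr_sel, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te_sel)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```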

Multi-institutional study for comparison of detectability of hypovascular liver metastases between 70- and 40-keV images: DELMIO study.

Ichikawa S, Funayama S, Hyodo T, Ozaki K, Ito A, Kakuya M, Kobayashi T, Tanahashi Y, Kozaka K, Igarashi S, Suto T, Noda Y, Matsuo M, Narita A, Okada H, Suzuki K, Goshima S

PubMed | Aug 9 2025
To compare the lesion detectability of hypovascular liver metastases between 70-keV and 40-keV images from dual-energy computed tomography (CT) reconstructed with deep-learning image reconstruction (DLIR). This multi-institutional, retrospective study included adult patients both pre- and post-treatment for gastrointestinal adenocarcinoma. All patients underwent contrast-enhanced CT with reconstruction at 40 keV and 70 keV. Liver metastases were confirmed using gadoxetic acid-enhanced magnetic resonance imaging. Four radiologists independently assessed lesion conspicuity (per patient and per lesion) using a 5-point scale. A radiologic technologist measured image noise, tumor-to-liver contrast, and contrast-to-noise ratio (CNR). Quantitative and qualitative results were compared between 70-keV and 40-keV images. The study included 138 patients (mean age, 69 ± 12 years; 80 men) with 208 liver metastases. Seventy-one patients had liver metastases, while 67 did not. Primary cancer sites were the pancreas (n = 68), colorectum (n = 50), stomach (n = 12), and gallbladder/bile duct (n = 8). No significant difference in per-patient lesion detectability was found between 70-keV images (sensitivity, 71.8-90.1%; specificity, 61.2-85.1%; accuracy, 73.9-79.7%) and 40-keV images (sensitivity, 76.1-90.1%; specificity, 53.7-82.1%; accuracy, 71.7-79.0%) (p = 0.18 to >0.99). Similarly, no significant difference in per-lesion detectability was observed between 70-keV (sensitivity, 67.3-82.2%) and 40-keV images (sensitivity, 68.8-81.7%) (p = 0.20 to >0.99). However, image noise was significantly higher at 40 keV, as were tumor-to-liver contrast and the CNRs of both hepatic parenchyma and tumors (p < 0.01). There was no significant difference in the detectability of hypovascular liver metastases between 70-keV and 40-keV images using the DLIR technology.
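A minimal sketch of the quantitative image-quality metrics mentioned above (ROI-based noise, tumor-to-liver contrast, CNR). The abstract does not give the exact formulas, so the generic definitions below are assumptions, and the arrays are synthetic.

```python
# Sketch of common CT image-quality metrics from ROI statistics (generic definitions).
import numpy as np

def roi_stats(image: np.ndarray, mask: np.ndarray) -> tuple[float, float]:
    """Mean and standard deviation of HU values inside a boolean ROI mask."""
    values = image[mask]
    return float(values.mean()), float(values.std(ddof=1))

def tumor_to_liver_contrast(tumor_mean: float, liver_mean: float) -> float:
    return abs(liver_mean - tumor_mean)

def contrast_to_noise_ratio(tumor_mean: float, liver_mean: float, noise_sd: float) -> float:
    return abs(liver_mean - tumor_mean) / noise_sd

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct = rng.normal(100, 12, size=(128, 128))              # synthetic liver background (HU)
    ct[40:60, 40:60] = rng.normal(60, 12, size=(20, 20))   # synthetic hypovascular lesion
    liver_mask = np.zeros_like(ct, dtype=bool); liver_mask[80:110, 80:110] = True
    tumor_mask = np.zeros_like(ct, dtype=bool); tumor_mask[45:55, 45:55] = True

    liver_mean, noise = roi_stats(ct, liver_mask)
    tumor_mean, _ = roi_stats(ct, tumor_mask)
    print(f"noise={noise:.1f} HU, "
          f"contrast={tumor_to_liver_contrast(tumor_mean, liver_mean):.1f} HU, "
          f"CNR={contrast_to_noise_ratio(tumor_mean, liver_mean, noise):.2f}")
```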

MRI-based radiomics for preoperative T-staging of rectal cancer: a retrospective analysis.

Patanè V, Atripaldi U, Sansone M, Marinelli L, Del Tufo S, Arrichiello G, Ciardiello D, Selvaggi F, Martinelli E, Reginelli A

PubMed | Aug 8 2025
Preoperative T-staging in rectal cancer is essential for treatment planning, yet conventional MRI shows limited accuracy (~60-78%). Our study investigates whether radiomic analysis of high-resolution T2-weighted MRI can non-invasively improve staging accuracy through a retrospective evaluation in a real-world surgical cohort. This single-center retrospective study included 200 patients (January 2024-April 2025) with pathologically confirmed rectal cancer, all of whom underwent preoperative high-resolution T2-weighted MRI within one week prior to curative surgery and received no neoadjuvant therapy. Manual segmentation was performed using ITK-SNAP, followed by extraction of 107 radiomic features via PyRadiomics. Feature selection employed mRMR and LASSO logistic regression, culminating in a Rad-score predictive model. Statistical performance was evaluated using ROC curves (AUC), accuracy, sensitivity, specificity, and DeLong's test. Among the 200 patients, 95 were pathologically staged as T2 and 105 as T3-T4 (55 T3, 50 T4). After preprocessing, 26 radiomic features were retained; key features including ngtdm_contrast and ngtdm_coarseness showed AUC values > 0.70. The LASSO-based model achieved an AUC of 0.82 (95% CI: 0.75-0.89), with an overall accuracy of 81%, sensitivity of 78%, and specificity of 84%. Radiomic analysis of standard preoperative T2-weighted MRI provides a reliable, non-invasive method to predict rectal cancer T-stage. This approach has the potential to enhance staging accuracy and inform personalized surgical planning. Prospective multicenter validation is required for broader clinical implementation.
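A hedged sketch of the radiomics workflow described above: PyRadiomics feature extraction from a T2-weighted image and its ROI mask, followed by an L1-penalised logistic model whose linear predictor serves as a Rad-score. File paths are placeholders, and the feature matrix below is filled with random values so the snippet runs without image data.

```python
# Hedged sketch: PyRadiomics extraction + LASSO-style Rad-score (placeholder paths/data).
import numpy as np
from radiomics import featureextractor
from sklearn.linear_model import LogisticRegressionCV

extractor = featureextractor.RadiomicsFeatureExtractor()   # default settings, ~107 features

def extract_features(image_path: str, mask_path: str) -> dict:
    result = extractor.execute(image_path, mask_path)
    # Keep numeric feature values, dropping diagnostic metadata entries.
    return {k: float(v) for k, v in result.items() if not k.startswith("diagnostics")}

# Assume `feature_matrix` (n_patients x n_selected_features) and `t_stage` (0 = T2, 1 = T3-T4)
# were assembled from calls like extract_features("pt001_T2w.nii.gz", "pt001_roi.nii.gz").
feature_matrix = np.random.default_rng(0).normal(size=(200, 26))   # placeholder values
t_stage = np.random.default_rng(1).integers(0, 2, size=200)        # placeholder labels

# L1-penalised logistic regression: its linear predictor is used as the Rad-score.
lasso_cv = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5)
lasso_cv.fit(feature_matrix, t_stage)
rad_score = feature_matrix @ lasso_cv.coef_.ravel() + lasso_cv.intercept_[0]
print(f"Rad-score range: {rad_score.min():.2f} to {rad_score.max():.2f}")
```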

LLM-Based Extraction of Imaging Features from Radiology Reports: Automating Disease Activity Scoring in Crohn's Disease.

Dehdab R, Mankertz F, Brendel JM, Maalouf N, Kaya K, Afat S, Kolahdoozan S, Radmard AR

PubMed | Aug 8 2025
Large Language Models (LLMs) offer a promising solution for extracting structured clinical information from free-text radiology reports. The Simplified Magnetic Resonance Index of Activity (sMARIA) is a validated scoring system used to quantify Crohn's disease (CD) activity based on Magnetic Resonance Enterography (MRE) findings. This study aims to evaluate the performance of two advanced LLMs in extracting key imaging features and computing sMARIA scores from free-text MRE reports. This retrospective study included 117 anonymized free-text MRE reports from patients with confirmed CD. ChatGPT (GPT-4o) and DeepSeek (DeepSeek-R1) were prompted using a structured input designed to extract four key radiologic features relevant to sMARIA: bowel wall thickness, mural edema, perienteric fat stranding, and ulceration. LLM outputs were evaluated against radiologist annotations at both the segment and feature levels. Segment-level agreement was assessed using accuracy, mean absolute error (MAE), and Pearson correlation. Feature-level performance was evaluated using sensitivity, specificity, precision, and F1-score. Errors, including confabulations, were recorded descriptively. ChatGPT achieved a segment-level accuracy of 98.6%, an MAE of 0.17, and a Pearson correlation of 0.99. DeepSeek achieved 97.3% accuracy, an MAE of 0.51, and a correlation of 0.96. At the feature level, ChatGPT yielded an F1-score of 98.8% (precision 97.8%, sensitivity 99.9%), while DeepSeek achieved 97.9% (precision 96.0%, sensitivity 99.8%). LLMs demonstrate near-human accuracy in extracting structured information and computing sMARIA scores from free-text MRE reports. This enables automated assessment of CD activity without altering current reporting workflows, supporting longitudinal monitoring and large-scale research. Integration into clinical decision support systems may be feasible in the future, provided appropriate human oversight and validation are ensured.
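A sketch of how a per-segment sMARIA score could be computed from LLM-extracted features. The weights follow the commonly cited simplified MaRIA definition (1 point each for wall thickness > 3 mm, mural edema, and perienteric fat stranding; 2 points for ulcers); verify them against the original sMARIA publication and this study before any clinical use. The data structure is an assumed output format, not the authors' schema.

```python
# Sketch: per-segment sMARIA score from extracted imaging features (weights assumed
# from the commonly cited simplified MaRIA definition; confirm against the source).
from dataclasses import dataclass

@dataclass
class SegmentFindings:
    wall_thickness_mm: float
    mural_edema: bool
    fat_stranding: bool
    ulcers: bool

def smaria_score(seg: SegmentFindings) -> int:
    score = 0
    score += 1 if seg.wall_thickness_mm > 3.0 else 0
    score += 1 if seg.mural_edema else 0
    score += 1 if seg.fat_stranding else 0
    score += 2 if seg.ulcers else 0
    return score

# Example: features as they might be parsed from an LLM's structured output.
terminal_ileum = SegmentFindings(wall_thickness_mm=6.0, mural_edema=True,
                                 fat_stranding=True, ulcers=True)
print("sMARIA:", smaria_score(terminal_ileum))  # 5 for this example segment
```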

MM2CT: MR-to-CT translation for multi-modal image fusion with mamba

Chaohui Gong, Zhiying Wu, Zisheng Huang, Gaofeng Meng, Zhen Lei, Hongbin Liu

arXiv preprint | Aug 7 2025
Magnetic resonance (MR)-to-computed tomography (CT) translation offers significant advantages, including the elimination of radiation exposure associated with CT scans and the mitigation of imaging artifacts caused by patient motion. Existing approaches are based on single-modality MR-to-CT translation, with limited research exploring multimodal fusion. To address this limitation, we introduce Multi-modal MR-to-CT (MM2CT), a translation method that leverages multimodal T1- and T2-weighted MRI data within an innovative Mamba-based framework for multi-modal medical image synthesis. Mamba effectively overcomes the limited local receptive field of CNNs and the high computational complexity of Transformers. MM2CT leverages this advantage to maintain long-range dependency modeling capabilities while achieving multi-modal MR feature integration. Additionally, we incorporate a dynamic local convolution module and a dynamic enhancement module to improve MRI-to-CT synthesis. Experiments on a public pelvis dataset demonstrate that MM2CT achieves state-of-the-art performance in terms of Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR). Our code is publicly available at https://github.com/Gots-ch/MM2CT.
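A brief sketch of how the SSIM and PSNR metrics cited above are typically computed for a synthesized CT slice against a reference CT slice; the HU clipping range and data-range handling are assumptions, and the arrays are synthetic stand-ins for MM2CT outputs.

```python
# Sketch of SSIM / PSNR evaluation for a synthesized CT slice vs. a reference slice.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_synthesis(pred_ct: np.ndarray, ref_ct: np.ndarray,
                       hu_min: float = -1000.0, hu_max: float = 1000.0) -> tuple[float, float]:
    pred = np.clip(pred_ct, hu_min, hu_max)
    ref = np.clip(ref_ct, hu_min, hu_max)
    data_range = hu_max - hu_min
    ssim = structural_similarity(ref, pred, data_range=data_range)
    psnr = peak_signal_noise_ratio(ref, pred, data_range=data_range)
    return ssim, psnr

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(0, 300, size=(256, 256))
    synthesized = reference + rng.normal(0, 30, size=(256, 256))  # stand-in for a model output
    ssim, psnr = evaluate_synthesis(synthesized, reference)
    print(f"SSIM = {ssim:.3f}, PSNR = {psnr:.1f} dB")
```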

FedGIN: Federated Learning with Dynamic Global Intensity Non-linear Augmentation for Organ Segmentation using Multi-modal Images

Sachin Dudda Nagaraju, Ashkan Moradi, Bendik Skarre Abrahamsen, Mattijs Elschot

arXiv preprint | Aug 7 2025
Medical image segmentation plays a crucial role in AI-assisted diagnostics, surgical planning, and treatment monitoring. Accurate and robust segmentation models are essential for enabling reliable, data-driven clinical decision making across diverse imaging modalities. Given the inherent variability in image characteristics across modalities, developing a unified model capable of generalizing effectively to multiple modalities would be highly beneficial. This model could streamline clinical workflows and reduce the need for modality-specific training. However, real-world deployment faces major challenges, including data scarcity, domain shift between modalities (e.g., CT vs. MRI), and privacy restrictions that prevent data sharing. To address these issues, we propose FedGIN, a Federated Learning (FL) framework that enables multimodal organ segmentation without sharing raw patient data. Our method integrates a lightweight Global Intensity Non-linear (GIN) augmentation module that harmonizes modality-specific intensity distributions during local training. We evaluated FedGIN using two types of datasets: a limited dataset and a complete dataset. In the limited dataset scenario, the model was initially trained using only MRI data, and CT data was added to assess its performance improvements. In the complete dataset scenario, both MRI and CT data were fully utilized for training on all clients. In the limited-data scenario, FedGIN achieved a 12 to 18% improvement in 3D Dice scores on MRI test cases compared to FL without GIN and consistently outperformed local baselines. In the complete dataset scenario, FedGIN demonstrated near-centralized performance, with a 30% Dice score improvement over the MRI-only baseline and a 10% improvement over the CT-only baseline, highlighting its strong cross-modality generalization under privacy constraints.
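A rough sketch of a GIN-style intensity augmentation in the spirit of the module described above: a randomly re-initialized shallow convolutional network remaps image intensities and the result is blended with the input. The layer sizes, blending scheme, and renormalization are assumptions, not the exact FedGIN module.

```python
# Rough sketch of a GIN-style (global intensity non-linear) augmentation (illustrative only).
import torch
import torch.nn as nn

class GINAugment(nn.Module):
    def __init__(self, channels: int = 1, hidden: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1), nn.LeakyReLU(0.2),
            nn.Conv2d(hidden, hidden, kernel_size=1), nn.LeakyReLU(0.2),
            nn.Conv2d(hidden, channels, kernel_size=1),
        )

    @torch.no_grad()
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Re-randomize weights each call so every sample gets a different intensity mapping.
        for m in self.net.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight)
                nn.init.zeros_(m.bias)
        mapped = self.net(x)
        alpha = torch.rand(x.size(0), 1, 1, 1, device=x.device)  # random blending weight
        out = alpha * mapped + (1.0 - alpha) * x
        # Rescale to the input's mean/std so downstream normalization stays consistent.
        out = (out - out.mean()) / (out.std() + 1e-6) * x.std() + x.mean()
        return out

if __name__ == "__main__":
    aug = GINAugment()
    mri_slice = torch.randn(2, 1, 64, 64)  # stand-in for a normalized MRI slice
    print(aug(mri_slice).shape)  # torch.Size([2, 1, 64, 64])
```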