Page 8 of 139 (1390 results)

Integrating Multi-Modal Imaging Features for Early Prediction of Acute Kidney Injury in Pneumonia Sepsis: A Multicenter Retrospective Study.

Gu Y, Li L, Yang K, Zou C, Yin B

pubmed logopapers · Sep 29 2025
Sepsis, a severe complication of infection, often leads to acute kidney injury (AKI), which significantly increases the risk of death. Despite its clinical importance, early prediction of AKI remains challenging. Current tools rely on blood and urine tests, which are costly, variable, and not always available in time for intervention. Pneumonia is the most common cause of sepsis, accounting for over one-third of cases. In such patients, pulmonary inflammation and perilesional tissue alterations may serve as surrogate markers of systemic disease progression. However, these imaging features are rarely used in clinical decision-making. To overcome this limitation, our study aims to extract informative imaging features from pneumonia-associated sepsis cases using deep learning, with the goal of predicting the development of AKI. This dual-center retrospective study included pneumonia-associated sepsis patients (Jan 2020-Jul 2024). Chest CT images, clinical records, and laboratory data at admission were collected. We propose MCANet (Multimodal Cross-Attention Network), a two-stage deep learning framework designed to predict the occurrence of pneumonia-associated sepsis-related acute kidney injury (pSA-AKI). In the first stage, region-specific features were extracted from the lungs, epicardial adipose tissue, and T4-level subcutaneous adipose tissue using ResNet-18, which was chosen for its lightweight architecture and efficiency in processing multi-regional 2D CT slices with low computational cost. In the second stage, the extracted features were fused via a Multiscale Feature Attention Network (MSFAN) employing cross-attention mechanisms to enhance interactions among anatomical regions, followed by classification using ResNet-101, selected for its deeper architecture and strong ability to model global semantic representations and complex patterns. Model performance was evaluated using AUC, accuracy, precision, recall, and F1-score.
Grad-CAM and PyRadiomics were employed for visual interpretation and radiomic analysis, respectively. A total of 399 patients with pneumonia-associated sepsis were included in this study. The modality ablation experiments demonstrated that the model integrating features from the lungs, T4-level subcutaneous adipose tissue, and epicardial adipose tissue achieved the best performance, with an accuracy of 0.981 and an AUC of 0.99 on the external test set from an independent center. For the prediction of AKI onset time, the LightGBM model incorporating imaging and clinical features achieved the highest accuracy of 0.8409 on the external test set. Furthermore, the multimodal model combining deep features, radiomics features, and clinical data further improved predictive performance, reaching an accuracy of 0.9773 and an AUC of 0.961 on the external test set. This study developed MCANet, a multimodal deep learning framework that integrates imaging features from the lungs, epicardial adipose tissue, and T4-level subcutaneous adipose tissue. The framework significantly improved the accuracy of predicting both AKI occurrence and its onset time in pneumonia-associated sepsis patients, highlighting the synergistic role of adipose tissue and lung characteristics. Furthermore, explainability analysis revealed potential decision-making mechanisms underlying the temporal progression of pSA-AKI, offering new insights for clinical management.
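The abstract does not spell out the MSFAN fusion step; as a rough sketch of the generic cross-attention mechanism it names (single-head, with random matrices standing in for learned projections — all dimensions and details here are assumptions, not MCANet's actual design), features from one anatomical region can attend to another's:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feats, context_feats, d_k=64, seed=0):
    """Single-head scaled dot-product cross-attention: tokens from one
    region (query) attend to tokens from another region (context).
    Shapes: query_feats (n_q, d), context_feats (n_c, d)."""
    rng = np.random.default_rng(seed)
    d = query_feats.shape[1]
    # Random projections stand in for learned weight matrices.
    W_q = rng.standard_normal((d, d_k)) / np.sqrt(d)
    W_k = rng.standard_normal((d, d_k)) / np.sqrt(d)
    W_v = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Q, K, V = query_feats @ W_q, context_feats @ W_k, context_feats @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d_k))   # (n_q, n_c) attention weights
    return attn @ V                           # (n_q, d_k) fused features

# Toy features: 4 lung tokens attend to 3 epicardial-fat tokens.
lung = np.ones((4, 128))
fat = np.ones((3, 128))
fused = cross_attention(lung, fat)
print(fused.shape)  # (4, 64)
```

In a real MSFAN-style fusion this would run in both directions per region pair, with learned projections shared across a batch.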

Deep learning NTCP model for late dysphagia after radiotherapy for head and neck cancer patients based on 3D dose, CT and segmentations.

de Vette SPM, Neh H, van der Hoek L, MacRae DC, Chu H, Gawryszuk A, Steenbakkers RJHM, van Ooijen PMA, Fuller CD, Hutcheson KA, Langendijk JA, Sijtsema NM, van Dijk LV

pubmed logopapers · Sep 29 2025
Late radiation-associated dysphagia after head and neck cancer (HNC) significantly impacts patients' health and quality of life. Conventional normal tissue complication probability (NTCP) models use discrete dose parameters to predict toxicity risk but fail to fully capture the complexity of this side effect. Deep learning (DL) offers potential improvements by incorporating 3D dose data for all anatomical structures involved in swallowing. This study aims to enhance dysphagia prediction with 3D DL NTCP models compared to conventional NTCP models. A multi-institutional cohort of 1484 HNC patients was used to train and validate a 3D DL model (Residual Network) incorporating 3D dose distributions, organ-at-risk segmentations, and CT scans, with or without patient- or treatment-related data. Predictions of grade ≥ 2 dysphagia (CTCAEv4) at six months post-treatment were evaluated using area under the curve (AUC) and calibration curves. Results were compared to a conventional NTCP model based on pre-treatment dysphagia, tumour location, and mean dose to swallowing organs. Attention maps highlighting regions of interest for individual patients were assessed. DL models outperformed the conventional NTCP model in both the independent test set (AUC = 0.80-0.84 versus 0.76) and external test set (AUC = 0.73-0.74 versus 0.63) in AUC and calibration. Attention maps showed a focus on the oral cavity and superior pharyngeal constrictor muscle. DL NTCP models performed significantly better than the conventional NTCP model, suggesting the benefit of using 3D-input over the conventional discrete dose parameters. Attention maps highlighted relevant regions linked to dysphagia, supporting the utility of DL for improved predictions.
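Conventional NTCP models of the kind used as the baseline here are typically logistic regressions over a few discrete predictors. A minimal sketch, with purely illustrative coefficients (not the study's fitted model):

```python
import math

def ntcp_logistic(mean_dose_gy, pretx_dysphagia, coeffs=(-4.0, 0.08, 1.2)):
    """Logistic NTCP: P(toxicity) = 1 / (1 + exp(-S)), where S is a
    linear combination of predictors. Coefficients are illustrative
    placeholders, not values from the paper."""
    b0, b_dose, b_base = coeffs
    s = b0 + b_dose * mean_dose_gy + b_base * pretx_dysphagia
    return 1.0 / (1.0 + math.exp(-s))

# Low-dose patient without baseline dysphagia vs high-dose patient with it.
low = ntcp_logistic(mean_dose_gy=20, pretx_dysphagia=0)
high = ntcp_logistic(mean_dose_gy=60, pretx_dysphagia=1)
print(round(low, 3), round(high, 3))
```

The 3D DL models in the study replace the handful of scalar predictors in `s` with features learned directly from the dose, CT, and segmentation volumes.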

Evaluation of a commercial deep-learning-based contouring software for CT-based gynecological brachytherapy.

Yang HJ, Patrick J, Vickress J, D'Souza D, Velker V, Mendez L, Starling MM, Fenster A, Hoover D

pubmed logopapers · Sep 29 2025
To evaluate a commercial deep-learning-based auto-contouring software specifically trained for high-dose-rate gynecological brachytherapy. We collected CT images from 30 patients treated with gynecological brachytherapy (19.5-28 Gy in 3-4 fractions) at our institution from January 2018 to December 2022. Clinical and artificial intelligence (AI) generated contours for bladder, bowel, rectum, and sigmoid were obtained. Five patients were randomly selected from the test set and manually re-contoured by 4 radiation oncologists. Contouring was repeated 2 weeks later using AI contours as the starting point ("AI-assisted" approach). Comparisons amongst clinical, AI, AI-assisted, and manual retrospective contours were made using various metrics, including Dice similarity coefficient (DSC) and unsigned D2cc difference. Between clinical and AI contours, DSC was 0.92, 0.79, 0.62, 0.66, for bladder, rectum, sigmoid, and bowel, respectively. Rectum and sigmoid had the lowest median unsigned D2cc difference of 0.20 and 0.21 Gy/fraction respectively between clinical and AI contours, while bowel had the largest median difference of 0.38 Gy/fraction. Agreement between fully automated AI and clinical contours was generally not different compared to agreement between AI-assisted and clinical contours. AI-assisted interobserver agreement was better than manual interobserver agreement for all organs and metrics. The median time to contour all organs for manual and AI-assisted approaches was 14.8 and 6.9 minutes/patient (p < 0.001), respectively. The agreement between AI or AI-assisted contours against the clinical contours was similar to manual interobserver agreement. Implementation of the AI-assisted contouring approach could enhance clinical workflow by decreasing both contouring time and interobserver variability.
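The Dice similarity coefficient reported above is straightforward to compute from two binary masks; a minimal sketch (1-D toy masks, not clinical data):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|). Masks are flat sequences of 0/1."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

# Toy example: two contours, each covering 4 voxels, overlapping on 3.
clinical = [1, 1, 1, 1, 0, 0]
ai       = [0, 1, 1, 1, 1, 0]
print(dice_coefficient(clinical, ai))  # 0.75
```

For real 3D contours the same formula is applied voxel-wise over the flattened volumes; DSC near 0.92 (bladder) indicates close agreement, while 0.62 (sigmoid) reflects the difficulty of delineating mobile bowel structures.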

DCM-Net: dual-encoder CNN-Mamba network with cross-branch fusion for robust medical image segmentation.

Atabansi CC, Wang S, Li H, Nie J, Xiang L, Zhang C, Liu H, Zhou X, Li D

pubmed logopapers · Sep 29 2025
Medical image segmentation is a critical task for the early detection and diagnosis of various conditions, such as skin cancer, polyps, thyroid nodules, and pancreatic tumors. Recently, deep learning architectures have achieved significant success in this field. However, they face a critical trade-off between local feature extraction and global context modeling. To address this limitation, we present DCM-Net, a dual-encoder architecture that integrates pretrained CNN layers with Visual State Space (VSS) blocks through a Cross-Branch Feature Fusion Module (CBFFM). A Decoder Feature Enhancement Module (DFEM) combines depth-wise separable convolutions with MLP-based semantic rectification to extract enhanced decoded features and improve the segmentation performance. Additionally, we present a new 2D pancreas and pancreatic tumor dataset (CCH-PCT-CT) collected from Chongqing University Cancer Hospital, comprising 3,547 annotated CT slices, which is used to validate the proposed model. The proposed DCM-Net architecture achieves competitive performance across all datasets investigated in this study. We develop a novel DCM-Net architecture that generates robust features for tumor and organ segmentation in medical images. DCM-Net significantly outperforms all baseline models in segmentation tasks, with higher Dice Similarity Coefficient (DSC) and mean Intersection over Union (mIoU) scores. Its robustness confirms strong potential for clinical use.
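DCM-Net's DFEM pairs depth-wise separable convolutions with MLP-based rectification; the main appeal of depth-wise separable convolutions is their parameter economy, which a quick count makes concrete (an illustrative calculation, not DCM-Net's actual layer sizes):

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k×k convolution (bias ignored)."""
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    """Depth-wise separable convolution = k×k depth-wise conv (one
    filter per input channel) + 1×1 point-wise conv mixing channels."""
    return c_in * k * k + c_in * c_out

# Example layer: 256 -> 256 channels with a 3×3 kernel.
std = conv_params(256, 256, 3)
sep = dw_separable_params(256, 256, 3)
print(std, sep, round(std / sep, 1))  # ~8.7x fewer parameters
```

The same factorization also cuts FLOPs by roughly the same ratio, which is why it appears in decoder-enhancement modules where feature maps are large.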

Artificial intelligence in carotid computed tomography angiography plaque detection: Decade of progress and future perspectives.

Wang DY, Yang T, Zhang CT, Zhan PC, Miao ZX, Li BL, Yang H

pubmed logopapers · Sep 28 2025
The application of artificial intelligence (AI) in carotid atherosclerotic plaque detection <i>via</i> computed tomography angiography (CTA) has significantly advanced over the past decade. This mini-review consolidates recent innovations in deep learning architectures, domain adaptation techniques, and automated plaque characterization methodologies. Hybrid models, such as residual U-Net-Pyramid Scene Parsing Network, exhibit a remarkable precision of 80.49% in plaque segmentation, outperforming radiologists in diagnostic efficiency by reducing analysis time from minutes to mere seconds. Domain-adaptive frameworks, such as Lesion Assessment through Tracklet Evaluation, demonstrate robust performance across heterogeneous imaging datasets, achieving an area under the curve (AUC) greater than 0.88. Furthermore, novel approaches integrating U-Net and Efficient-Net architectures, enhanced by Bayesian optimization, have achieved impressive correlation coefficients (0.89) for plaque quantification. AI-powered CTA also enables high-precision three-dimensional vascular segmentation, with a Dice coefficient of 0.9119, and offers superior cardiovascular risk stratification compared to traditional Agatston scoring, yielding AUC values of 0.816 <i>vs</i> 0.729 at a 15-year follow-up. These breakthroughs address key challenges in plaque motion analysis, with systolic retractive motion biomarkers successfully identifying 80% of vulnerable plaques. Looking ahead, future directions focus on enhancing the interpretability of AI models through explainable AI and leveraging federated learning to mitigate data heterogeneity. This mini-review underscores the transformative potential of AI in carotid plaque assessment, offering substantial implications for stroke prevention and personalized cerebrovascular management strategies.

Tunable-Generalization Diffusion Powered by Self-Supervised Contextual Sub-Data for Low-Dose CT Reconstruction

Guoquan Wei, Zekun Zhou, Liu Shi, Wenzhe Shan, Qiegen Liu

arxiv logopreprint · Sep 28 2025
Current deep-learning models for low-dose CT denoising rely heavily on paired data and generalize poorly. Even the much-studied diffusion models must learn the distribution of clean data for reconstruction, a requirement that is difficult to satisfy in clinical applications. At the same time, self-supervised methods suffer a significant loss of generalizability when a model pre-trained at one dose is extended to other doses. To address these issues, this paper proposes SuperDiff, a tunable-generalization diffusion method powered by self-supervised contextual sub-data for low-dose CT reconstruction. First, a contextual sub-data similarity-adaptive sensing strategy is designed for denoising centered on the LDCT projection domain, providing an initial prior for the subsequent stages. This prior is then combined with knowledge distillation and a latent diffusion model to optimize image details. The pre-trained model is used for inference reconstruction, and a pixel-level self-correcting fusion technique is proposed for fine-grained reconstruction in the image domain, guided by the initial prior and the LDCT image, to enhance image fidelity. The technique also generalizes flexibly to higher, lower, or even unseen doses. By cascading this dual-domain strategy for self-supervised LDCT denoising, SuperDiff requires only LDCT projection-domain data for training and testing. Full qualitative and quantitative evaluations on both datasets and real data show that SuperDiff consistently outperforms existing state-of-the-art methods in both reconstruction and generalization performance.
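SuperDiff's exact diffusion schedule is not given in the abstract; for orientation, the standard DDPM forward (noising) process that diffusion-based reconstruction builds on can be sketched as follows (the linear beta schedule and all constants are generic textbook choices, not SuperDiff's):

```python
import math
import random

def forward_noising(x0, t, T=1000, beta_min=1e-4, beta_max=0.02, seed=0):
    """Standard DDPM forward process:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta_i)
    over a linear beta schedule. Not SuperDiff's specific variant."""
    random.seed(seed)
    alpha_bar = 1.0
    for i in range(t):
        beta = beta_min + (beta_max - beta_min) * i / (T - 1)
        alpha_bar *= 1.0 - beta
    return [math.sqrt(alpha_bar) * v
            + math.sqrt(1.0 - alpha_bar) * random.gauss(0.0, 1.0)
            for v in x0]

# Toy "image": three pixel intensities, noised halfway through the chain.
pixels = [0.5, 0.2, 0.8]
noisy = forward_noising(pixels, t=500)
print(len(noisy))
```

A trained denoiser learns to invert this process step by step; methods like SuperDiff start that reversal from an LDCT-derived prior rather than pure noise.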

MAN: Latent Diffusion Enhanced Multistage Anti-Noise Network for Efficient and High-Quality Low-Dose CT Image Denoising

Tangtangfang Fang, Jingxi Hu, Xiangjian He, Jiaqi Yang

arxiv logopreprint · Sep 28 2025
While diffusion models have set a new benchmark for quality in Low-Dose Computed Tomography (LDCT) denoising, their clinical adoption is critically hindered by extreme computational costs, with inference times often exceeding thousands of seconds per scan. To overcome this barrier, we introduce MAN, a Latent Diffusion Enhanced Multistage Anti-Noise Network for efficient, high-quality low-dose CT image denoising. Our method operates in a compressed latent space via a perceptually-optimized autoencoder, enabling an attention-based conditional U-Net to perform the fast, deterministic conditional denoising diffusion process with drastically reduced overhead. On the LDCT and Projection dataset, our model achieves superior perceptual quality, surpassing CNN/GAN-based methods while rivaling the reconstruction fidelity of computationally heavy diffusion models like DDPM and Dn-Dp. Most critically, in the inference stage, our model is over 60x faster than representative pixel-space diffusion denoisers, while remaining competitive on PSNR/SSIM scores. By bridging the gap between high fidelity and clinical viability, our work demonstrates a practical path forward for advanced generative models in medical imaging.

Hepatocellular Carcinoma Risk Stratification for Cirrhosis Patients: Integrating Radiomics and Deep Learning Computed Tomography Signatures of the Liver and Spleen into a Clinical Model.

Fan R, Shi YR, Chen L, Wang CX, Qian YS, Gao YH, Wang CY, Fan XT, Liu XL, Bai HL, Zheng D, Jiang GQ, Yu YL, Liang XE, Chen JJ, Xie WF, Du LT, Yan HD, Gao YJ, Wen H, Liu JF, Liang MF, Kong F, Sun J, Ju SH, Wang HY, Hou JL

pubmed logopapers · Sep 28 2025
Given the high burden of hepatocellular carcinoma (HCC), risk stratification in patients with cirrhosis is critical but remains inadequate. In this study, we aimed to develop and validate an HCC prediction model by integrating radiomics and deep learning features from liver and spleen computed tomography (CT) images into the established age-male-ALBI-platelet (aMAP) clinical model. Patients were enrolled between 2018 and 2023 from a Chinese multicenter, prospective, observational cirrhosis cohort, all of whom underwent 3-phase contrast-enhanced abdominal CT scans at enrollment. The aMAP clinical score was calculated, and radiomic (PyRadiomics) and deep learning (ResNet-18) features were extracted from liver and spleen regions of interest. Feature selection was performed using the least absolute shrinkage and selection operator. Among 2,411 patients (median follow-up: 42.7 months [IQR: 32.9-54.1]), 118 developed HCC (three-year cumulative incidence: 3.59%). Chronic hepatitis B virus infection was the main etiology, accounting for 91.5% of cases. The aMAP-CT model, which incorporates CT signatures, significantly outperformed existing models (area under the receiver-operating characteristic curve: 0.809-0.869 in three cohorts). It stratified patients into high-risk (three-year HCC incidence: 26.3%) and low-risk (1.7%) groups. Stepwise application (aMAP → aMAP-CT) further refined stratification (three-year incidences: 1.8% [93.0% of the cohort] vs. 27.2% [7.0%]). The aMAP-CT model improves HCC risk prediction by integrating CT-based liver and spleen signatures, enabling precise identification of high-risk cirrhosis patients. This approach personalizes surveillance strategies, potentially facilitating earlier detection and improved outcomes.
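LASSO feature selection, used above to prune radiomic and deep-learning features, works by soft-thresholding coefficients toward zero; a minimal sketch of that operator (toy coefficients, not the study's actual features or penalty):

```python
def soft_threshold(x, lam):
    """Proximal operator of the L1 penalty at the core of LASSO:
    shrinks a coefficient toward zero and zeroes it when |x| <= lam.
    This is the mechanism by which LASSO discards weak features."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Toy coefficients for five candidate radiomic features.
raw = [0.9, -0.05, 0.3, -0.6, 0.02]
selected = [soft_threshold(c, lam=0.1) for c in raw]
print(selected)  # the two small coefficients are zeroed out
```

In a full LASSO fit this thresholding is applied repeatedly inside coordinate descent; features whose coefficients end at exactly zero are dropped from the final model.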

Imaging-Based Mortality Prediction in Patients with Systemic Sclerosis

Alec K. Peltekian, Karolina Senkow, Gorkem Durak, Kevin M. Grudzinski, Bradford C. Bemiss, Jane E. Dematte, Carrie Richardson, Nikolay S. Markov, Mary Carns, Kathleen Aren, Alexandra Soriano, Matthew Dapas, Harris Perlman, Aaron Gundersheimer, Kavitha C. Selvan, John Varga, Monique Hinchcliff, Krishnan Warrior, Catherine A. Gao, Richard G. Wunderink, GR Scott Budinger, Alok N. Choudhary, Anthony J. Esposito, Alexander V. Misharin, Ankit Agrawal, Ulas Bagci

arxiv logopreprint · Sep 27 2025
Interstitial lung disease (ILD) is a leading cause of morbidity and mortality in systemic sclerosis (SSc). Chest computed tomography (CT) is the primary imaging modality for diagnosing and monitoring lung complications in SSc patients. However, its role in disease progression and mortality prediction has not yet been fully clarified. This study introduces a novel, large-scale longitudinal chest CT analysis framework that utilizes radiomics and deep learning to predict mortality associated with lung complications of SSc. We collected and analyzed 2,125 CT scans from SSc patients enrolled in the Northwestern Scleroderma Registry, conducting mortality analyses at one, three, and five years using advanced imaging analysis techniques. Death labels were assigned based on recorded deaths over the one-, three-, and five-year intervals, confirmed by expert physicians. In our dataset, 181, 326, and 428 of the 2,125 CT scans were from patients who died within one, three, and five years, respectively. We fine-tuned pre-trained ResNet-18, DenseNet-121, and Swin Transformer models on the 2,125 CT scans from SSc patients. The models achieved AUCs of 0.769, 0.801, and 0.709 for predicting mortality within one, three, and five years, respectively. Our findings highlight the potential of both radiomics and deep learning computational methods to improve early detection and risk assessment of SSc-related interstitial lung disease, marking a significant advancement in the literature.
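The AUC used to evaluate these mortality models is equivalent to the Mann-Whitney probability that a randomly chosen positive case is scored above a randomly chosen negative one; a minimal sketch (toy risk scores, not study data):

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs where the positive case scores higher,
    counting ties as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy risk scores: patients who died within the interval vs survivors.
died = [0.9, 0.8, 0.6]
survived = [0.7, 0.4, 0.3, 0.2]
print(auc(died, survived))
```

This O(n·m) pairwise form is fine for illustration; production metrics libraries compute the same quantity from sorted ranks in O(n log n).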

Development of a clinical-CT-radiomics nomogram for predicting endoscopic red color sign in cirrhotic patients with esophageal varices.

Han J, Dong J, Yan C, Zhang J, Wang Y, Gao M, Zhang M, Chen Y, Cai J, Zhao L

pubmed logopapers · Sep 27 2025
To evaluate the predictive performance of a clinical-CT-radiomics nomogram based on a radiomics signature and independent clinical-CT predictors for predicting endoscopic red color sign (RC) in cirrhotic patients with esophageal varices (EV). We retrospectively evaluated 215 cirrhotic patients. Among them, 108 and 107 cases were positive and negative for endoscopic RC, respectively. Patients were assigned to a training cohort (n = 150) and a validation cohort (n = 65) at a 7:3 ratio. In the training cohort, univariate and multivariate logistic regression analyses were performed on clinical and CT features to develop a clinical-CT model. Radiomic features were extracted from portal venous phase CT images to generate a Radiomic score (Rad-score) and to construct five machine learning models. A combined model was built using clinical-CT predictors and Rad-score through logistic regression. The performance of different models was evaluated using the receiver operating characteristic (ROC) curves and the area under the curve (AUC). The spleen-to-platelet ratio, liver volume, splenic vein diameter, and superior mesenteric vein diameter were independent predictors. Six radiomics features were selected to construct five machine learning models. The adaptive boosting model showed excellent predictive performance, achieving an AUC of 0.964 in the validation cohort, while the combined model achieved the highest predictive accuracy with an AUC of 0.985 in the validation cohort. The clinical-CT-radiomics nomogram demonstrates high predictive accuracy for endoscopic RC in cirrhotic patients with EV, which provides a novel tool for non-invasive prediction of esophageal variceal bleeding.