Page 7 of 139 (1390 results)

A phase-aware Cross-Scale U-MAMba with uncertainty-aware segmentation and Switch Atrous Bifovea EfficientNetB7 classification of kidney lesion subtype.

Rmr SS, Mb S, R D, M T, P V

pubmed logopapers · Sep 30, 2025
Kidney lesion subtype identification is essential for precise diagnosis and personalized treatment planning. However, achieving reliable classification remains challenging due to factors such as inter-patient anatomical variability, incomplete multi-phase CT acquisitions, and ill-defined or overlapping lesion boundaries. In addition, genetic and ethnic morphological variations introduce inconsistent imaging patterns, reducing the generalizability of conventional deep learning models. To address these challenges, we introduce a unified framework called Phase-aware Cross-Scale U-MAMba and Switch Atrous Bifovea EfficientNet B7 (PCU-SABENet), which integrates multi-phase reconstruction, fine-grained lesion segmentation, and robust subtype classification. The PhaseGAN-3D synthesizes missing CT phases using binary mask-guided inter-phase priors, enabling complete four-phase reconstruction even under partial acquisition conditions. The PCU segmentation module combines Contextual Attention Blocks, Cross-Scale Skip Connections, and uncertainty-aware pseudo-labeling to delineate lesion boundaries with high anatomical fidelity. These enhancements help mitigate low contrast and intra-class ambiguity. For classification, SABENet employs Switch Atrous Convolution for multi-scale receptive field adaptation, Hierarchical Tree Pooling for structure-aware abstraction, and Bi-Fovea Self-Attention to emphasize fine lesion cues and global morphology. This configuration is particularly effective in addressing morphological diversity across patient populations. Experimental results show that the proposed model achieves state-of-the-art performance, with 99.3% classification accuracy, 94.8% Dice similarity, 89.3% IoU, 98.8% precision, 99.2% recall, a phase-consistency score of 0.94, and a subtype confidence deviation of 0.08. 
Moreover, the model generalizes well on external datasets (TCIA) with 98.6% accuracy and maintains efficient computational performance, requiring only 0.138 GFLOPs and 8.2 ms inference time. These outcomes confirm the model's robustness in phase-incomplete settings and its adaptability to diverse patient cohorts. The PCU-SABENet framework sets a new standard in kidney lesion subtype analysis, combining segmentation precision with clinically actionable classification, thus offering a powerful tool for enhancing diagnostic accuracy and decision-making in real-world renal cancer management.
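For readers unfamiliar with the overlap metrics quoted above, Dice similarity and IoU are both functions of binary-mask overlap; a minimal illustrative sketch (not the authors' code) is:

```python
def dice_and_iou(pred, truth):
    """Compute Dice similarity and IoU for two binary masks (flat 0/1 lists)."""
    inter = sum(p & t for p, t in zip(pred, truth))  # overlapping foreground voxels
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```

Because Dice counts the intersection twice in the numerator, it is always at least as large as IoU for the same pair of masks, which is why the 94.8% Dice and 89.3% IoU above differ.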

Radiomics analysis using machine learning to predict perineural invasion in pancreatic cancer.

Sun Y, Li Y, Li M, Hu T, Wang J

pubmed logopapers · Sep 30, 2025
Pancreatic cancer is one of the most aggressive and lethal malignancies of the digestive system and is characterized by an extremely low five-year survival rate. Perineural invasion (PNI) status in patients with pancreatic cancer is associated with adverse prognosis, including worse overall survival and recurrence-free survival. Emerging radiomic methods can reveal subtle variations in tumor structure by analyzing preoperative contrast-enhanced computed tomography (CECT) imaging data. Therefore, we propose the development of a preoperative CECT-based radiomic model to predict the risk of PNI in patients with pancreatic cancer. This study enrolled patients with pancreatic malignancies who underwent radical resection. Computerized tools were employed to extract radiomic features from tumor regions of interest (ROIs). The optimal radiomic features associated with PNI were selected to construct a radiomic score (RadScore). The model's reliability was comprehensively evaluated by integrating clinical and follow-up information, with SHapley Additive exPlanations (SHAP)-based visualization to interpret the decision-making processes. A total of 167 patients with pancreatic malignancies were included. From the CECT images, 851 radiomic features were extracted, 22 of which were identified as most strongly correlated with PNI. These 22 features were evaluated using seven machine learning methods. We ultimately selected the Gaussian naive Bayes model, which demonstrated robust predictive performance in both the training and validation cohorts, and achieved area under the ROC curve (AUC) values of 0.899 and 0.813, respectively. Among the clinical features, maximum tumor diameter, CA 19-9 level, blood glucose concentration, and lymph node metastasis were found to be independent risk factors for PNI. The integrated model yielded AUCs of 0.945 (training cohort) and 0.881 (validation cohort).
Decision curve analysis confirmed the clinical utility of the ensemble model to predict perineural invasion. The combined model integrating clinical and radiomic features exhibited excellent performance in predicting the probability of perineural invasion in patients with pancreatic cancer. This approach has significant potential to optimize therapeutic decision-making and prognostic evaluation in patients with PNI.
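The final classifier, Gaussian naive Bayes, fits a per-class Gaussian to each feature and assigns the class with the highest log-posterior; a compact illustrative re-implementation (not the study's code — in practice scikit-learn's GaussianNB would be used) is:

```python
import math
from collections import defaultdict

def fit_gaussian_nb(X, y):
    """Estimate per-class feature means/variances and log class priors."""
    groups = defaultdict(list)
    for xi, yi in zip(X, y):
        groups[yi].append(xi)
    params = {}
    for cls, rows in groups.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        # small epsilon keeps variances strictly positive
        vars_ = [sum((v - m) ** 2 for v in col) / n + 1e-9
                 for col, m in zip(zip(*rows), means)]
        params[cls] = (math.log(n / len(y)), means, vars_)
    return params

def predict_nb(params, x):
    """Return the class with the highest Gaussian log-posterior."""
    def log_post(cls):
        prior, means, vars_ = params[cls]
        return prior + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, means, vars_))
    return max(params, key=log_post)
```

The "naive" assumption of conditional feature independence is what makes the model robust on small radiomic cohorts like the 167 patients here.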

Non-contrast CT-based pulmonary embolism detection using GAN-generated synthetic contrast enhancement: Development and validation of an AI framework.

Kim YT, Bak SH, Han SS, Son Y, Park J

pubmed logopapers · Sep 30, 2025
Acute pulmonary embolism (PE) is a life-threatening condition often diagnosed using CT pulmonary angiography (CTPA). However, CTPA is contraindicated in patients with contrast allergies or at risk for contrast-induced nephropathy. This study explores an AI-driven approach to generate synthetic contrast-enhanced images from non-contrast CT scans for accurate diagnosis of acute PE without contrast agents. This retrospective study used dual-energy and standard CT datasets from two institutions. The internal dataset included 84 patients: 41 PE-negative cases for generative model training and 43 patients (30 PE-positive) for diagnostic evaluation. An external dataset of 62 patients (26 PE-positive) was used for further validation. We developed a generative adversarial network (GAN) based on U-Net, trained on paired non-contrast and contrast-enhanced images. The model was optimized using contrast-enhanced L1-loss with hyperparameter λ to improve anatomical accuracy. A ConvNeXt-based classifier trained on the RSNA dataset (N = 7,122) generated per-slice PE probabilities, which were aggregated for patient-level prediction via a Random Forest model. Diagnostic performance was assessed using five-fold cross-validation on both internal and external datasets. The GAN achieved optimal image similarity at λ = 0.5, with the lowest mean absolute error (0.0089) and highest MS-SSIM (0.9674). PE classification yielded AUCs of 0.861 and 0.836 in the internal dataset, and 0.787 and 0.680 in the external dataset, using real and synthetic images, respectively. No statistically significant differences were observed. Our findings demonstrate that synthetic contrast CT can serve as a viable alternative for PE diagnosis in patients contraindicated for CTPA, supporting safe and accessible imaging strategies.
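The abstract does not spell out the contrast-enhanced L1 loss; one plausible reading — an assumption on our part, not the paper's stated formulation — adds a λ-weighted L1 term restricted to contrast-enhanced voxels identified by a binary mask:

```python
def contrast_weighted_l1(fake, real, mask, lam=0.5):
    """Plain L1 reconstruction loss plus a lam-weighted L1 term computed only
    over contrast-enhanced voxels (mask == 1). Inputs are flat value lists."""
    n = len(fake)
    base = sum(abs(f - r) for f, r in zip(fake, real)) / n
    m_count = sum(mask)
    enhanced = (sum(abs(f - r) * m for f, r, m in zip(fake, real, mask)) / m_count
                if m_count else 0.0)
    return base + lam * enhanced
```

Under this reading, λ = 0.5 (the reported optimum) balances whole-image fidelity against accuracy in the vessels that matter for PE detection.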

Automated contouring of gross tumor volume lymph nodes in lung cancer by deep learning.

Huang Y, Yuan X, Xu L, Jian J, Gong C, Zhang Y, Zheng W

pubmed logopapers · Sep 30, 2025
The precise contouring of gross tumor volume lymph nodes (GTVnd) is an essential step in clinical target volume delineation. This study aims to propose and evaluate a deep learning model for segmenting GTVnd in lung cancer, one of the first investigations into automated GTVnd segmentation for this disease. Ninety computed tomography (CT) scans of patients with stage III-IV small cell lung cancer (SCLC) were collected, of which 75 patients were assembled into a training dataset and 15 were used in a testing dataset. A new segmentation model was constructed to enable the automatic and accurate delineation of the GTVnd in lung cancer. This model integrates a contextual cue enhancement module and an edge-guided feature enhancement decoder. The contextual cue enhancement module was used to enforce the consistency of the contextual cues encoded in the deepest feature, and the edge-guided feature enhancement decoder was used to obtain edge-aware and edge-preserving segmentation predictions. The model was quantitatively evaluated using the three-dimensional Dice Similarity Coefficient (3D DSC) and the 95th-percentile Hausdorff Distance (95HD). Additionally, comparative analysis was conducted between predicted treatment plans derived from auto-contoured GTVnd and established clinical plans. The proposed model, ECENet, achieved a mean 3D DSC of 0.72 ± 0.09 and a 95HD of 6.39 ± 4.59 mm, showing significant improvement compared to UNet, with a DSC of 0.46 ± 0.19 and a 95HD of 12.24 ± 13.36 mm, and nnUNet, with a DSC of 0.52 ± 0.18 and a 95HD of 9.92 ± 6.49 mm. Its performance was intermediate, falling between mid-level physicians, with a DSC of 0.81 ± 0.06, and junior physicians, with a DSC of 0.68 ± 0.10. The clinical and predicted treatment plans were then compared.
The dosimetric analysis demonstrated excellent agreement between predicted and clinical plans, with average relative deviation of < 0.17% for PTV D2/D50/D98, < 3.5% for lung V30/V20/V10/V5/Dmean, and < 6.1% for heart V40/V30/Dmean. Furthermore, the TCP (66.99% ± 0.55 vs. 66.88% ± 0.45) and NTCP (3.13% ± 1.33 vs. 3.25% ± 1.42) analyses revealed strong concordance between predicted and clinical outcomes, confirming the clinical applicability of the proposed method. The proposed model achieved automatic delineation of the GTVnd in the thoracic region in lung cancer and showed clear advantages over baseline models, making it a potential choice for automated GTVnd delineation, particularly as a support tool for junior radiation oncologists.
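The 95HD metric reported above replaces the maximum surface distance of the classical Hausdorff distance with its 95th percentile, damping the influence of outlier points; a minimal sketch over point sets (illustrative only, not the study's implementation, which would operate on mesh surfaces) is:

```python
import math

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (sequences of coordinate tuples), using the nearest-rank percentile."""
    def directed(src, dst):
        # nearest-neighbour distance from each source point, sorted ascending
        dists = sorted(min(math.dist(p, q) for q in dst) for p in src)
        idx = min(len(dists) - 1, math.ceil(0.95 * len(dists)) - 1)
        return dists[idx]
    return max(directed(a, b), directed(b, a))
```

This is why 95HD is preferred over the plain Hausdorff distance for segmentation evaluation: a single stray voxel no longer dominates the score.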

Integrating Multi-Modal Imaging Features for Early Prediction of Acute Kidney Injury in Pneumonia Sepsis: A Multicenter Retrospective Study.

Gu Y, Li L, Yang K, Zou C, Yin B

pubmed logopapers · Sep 29, 2025
Sepsis, a severe complication of infection, often leads to acute kidney injury (AKI), which significantly increases the risk of death. Despite its clinical importance, early prediction of AKI remains challenging. Current tools rely on blood and urine tests, which are costly, variable, and not always available in time for intervention. Pneumonia is the most common cause of sepsis, accounting for over one-third of cases. In such patients, pulmonary inflammation and perilesional tissue alterations may serve as surrogate markers of systemic disease progression. However, these imaging features are rarely used in clinical decision-making. To overcome this limitation, our study aims to extract informative imaging features from pneumonia-associated sepsis cases using deep learning, with the goal of predicting the development of AKI. This dual-center retrospective study included pneumonia-associated sepsis patients (Jan 2020-Jul 2024). Chest CT images, clinical records, and laboratory data at admission were collected. We propose MCANet (Multimodal Cross-Attention Network), a two-stage deep learning framework designed to predict the occurrence of pneumonia-associated sepsis-related acute kidney injury (pSA-AKI). In the first stage, region-specific features were extracted from the lungs, epicardial adipose tissue, and T4-level subcutaneous adipose tissue using ResNet-18, chosen for its lightweight architecture and efficiency in processing multi-regional 2D CT slices. In the second stage, the extracted features were fused via a Multiscale Feature Attention Network (MSFAN) employing cross-attention mechanisms to enhance interactions among anatomical regions, followed by classification using ResNet-101, selected for its deeper architecture and strong ability to model global semantic representations and complex patterns. Model performance was evaluated using AUC, accuracy, precision, recall, and F1-score.
Grad-CAM and PyRadiomics were employed for visual interpretation and radiomic analysis, respectively. A total of 399 patients with pneumonia-associated sepsis were included in this study. The modality ablation experiments demonstrated that the model integrating features from the lungs, T4-level subcutaneous adipose tissue, and epicardial adipose tissue achieved the best performance, with an accuracy of 0.981 and an AUC of 0.99 on the external test set from an independent center. For the prediction of AKI onset time, the LightGBM model incorporating imaging and clinical features achieved the highest accuracy of 0.8409 on the external test set. Furthermore, the multimodal model combining deep features, radiomics features, and clinical data further improved predictive performance, reaching an accuracy of 0.9773 and an AUC of 0.961 on the external test set. This study developed MCANet, a multimodal deep learning framework that integrates imaging features from the lungs, epicardial adipose tissue, and T4-level subcutaneous adipose tissue. The framework significantly improved the accuracy of predicting both AKI occurrence and onset time in pneumonia-associated sepsis patients, highlighting the synergistic role of adipose tissue and lung characteristics. Furthermore, explainability analysis revealed potential decision-making mechanisms underlying the temporal progression of pSA-AKI, offering new insights for clinical management.

Evaluation of a commercial deep-learning-based contouring software for CT-based gynecological brachytherapy.

Yang HJ, Patrick J, Vickress J, D'Souza D, Velker V, Mendez L, Starling MM, Fenster A, Hoover D

pubmed logopapers · Sep 29, 2025
To evaluate a commercial deep-learning-based auto-contouring software package specifically trained for high-dose-rate gynecological brachytherapy. We collected CT images from 30 patients treated with gynecological brachytherapy (19.5-28 Gy in 3-4 fractions) at our institution from January 2018 to December 2022. Clinical and artificial intelligence (AI) generated contours for bladder, bowel, rectum, and sigmoid were obtained. Five patients were randomly selected from the test set and manually re-contoured by 4 radiation oncologists. Contouring was repeated 2 weeks later using AI contours as the starting point ("AI-assisted" approach). Comparisons amongst clinical, AI, AI-assisted, and manual retrospective contours were made using various metrics, including Dice similarity coefficient (DSC) and unsigned D2cc difference. Between clinical and AI contours, DSC was 0.92, 0.79, 0.62, and 0.66 for bladder, rectum, sigmoid, and bowel, respectively. Rectum and sigmoid had the lowest median unsigned D2cc difference of 0.20 and 0.21 Gy/fraction respectively between clinical and AI contours, while bowel had the largest median difference of 0.38 Gy/fraction. Agreement between fully automated AI and clinical contours was generally not different compared to agreement between AI-assisted and clinical contours. AI-assisted interobserver agreement was better than manual interobserver agreement for all organs and metrics. The median time to contour all organs for manual and AI-assisted approaches was 14.8 and 6.9 minutes/patient (p < 0.001), respectively. The agreement between AI or AI-assisted contours against the clinical contours was similar to manual interobserver agreement. Implementation of the AI-assisted contouring approach could enhance clinical workflow by decreasing both contouring time and interobserver variability.

Predicting Acute Cerebrovascular Events in Stroke Alerts Using Large-Language Models and Structured Data

Erekat, A., Downes, M. H., Stein, L. K., Delman, B. N., Karp, A. M., Tripathi, A., Nadkarni, G. N., Kupersmith, M. J., Kummer, B. R.

medrxiv logopreprint · Sep 29, 2025
Background: Acute stroke alerts are often activated for non-cerebrovascular conditions, leading to false positives that strain clinical resources and promote diagnostic uncertainty. We sought to develop machine learning (ML) models integrating large-language models (LLMs), structured electronic health record data, and clinical time-series data to predict the presence of acute cerebrovascular disease (ACD) at stroke alert activation. Methods: We derived a series of ML models using retrospective data from stroke alerts activated at Mount Sinai Health System between 2011 and 2021. We extracted structured data (demographics, medical comorbidities, medications, and engineered time-series features from vital signs and lab results) as well as unstructured clinical notes available prior to the time of stroke alert. We processed clinical notes using three embedding approaches: word embeddings, biomedical embeddings (BioWordVec), and LLMs. Using a radiographic gold standard for acute intracranial vascular events, we used an auto-ML approach to train one model based on unstructured data and five models based on different combinations of structured data. We evaluated models individually using the area under the receiver operating characteristic curve (AUROC), mean positive predictive value (PPV), sensitivity, and F1-score. We then combined the six models' logits into a multimodal ensemble by weighting each model's logits by its F1-score, determining ensemble performance using the same metrics. Results: We identified 16,512 stroke alerts corresponding to 14,233 unique patients over the study period, of which 9,013 (54.6%) were due to ACD. The multimodal model (AUROC 0.72, PPV 0.68, sensitivity 0.76, F1 0.72) outperformed all individual models by AUROC. One structured model based on demographics, comorbidities, and medications demonstrated the highest sensitivity (0.95). Conclusions: We developed a multimodal ML model to predict ACD at stroke alert activation.
This approach has promise to optimize stroke triage and reduce false-positive activations.
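The F1-weighted logit ensemble described in the Methods can be sketched as follows (assuming, as a simplification, that the weights are the normalized F1-scores — the abstract does not give the exact weighting scheme):

```python
def f1_weighted_ensemble(logits_per_model, f1_scores):
    """Combine per-sample logits from several models, weighting each model's
    contribution in proportion to its F1-score on a validation set."""
    total = sum(f1_scores)
    weights = [f / total for f in f1_scores]
    n_samples = len(logits_per_model[0])
    return [sum(w * model[i] for w, model in zip(weights, logits_per_model))
            for i in range(n_samples)]
```

Weighting by a validation metric like F1 lets weaker modalities (e.g., a text-only model) contribute without dominating the fused prediction.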

Deep learning NTCP model for late dysphagia after radiotherapy for head and neck cancer patients based on 3D dose, CT and segmentations.

de Vette SPM, Neh H, van der Hoek L, MacRae DC, Chu H, Gawryszuk A, Steenbakkers RJHM, van Ooijen PMA, Fuller CD, Hutcheson KA, Langendijk JA, Sijtsema NM, van Dijk LV

pubmed logopapers · Sep 29, 2025
Late radiation-associated dysphagia after head and neck cancer (HNC) significantly impacts patients' health and quality of life. Conventional normal tissue complication probability (NTCP) models use discrete dose parameters to predict toxicity risk but fail to fully capture the complexity of this side effect. Deep learning (DL) offers potential improvements by incorporating 3D dose data for all anatomical structures involved in swallowing. This study aims to enhance dysphagia prediction with 3D DL NTCP models compared to conventional NTCP models. A multi-institutional cohort of 1484 HNC patients was used to train and validate a 3D DL model (Residual Network) incorporating 3D dose distributions, organ-at-risk segmentations, and CT scans, with or without patient- or treatment-related data. Predictions of grade ≥ 2 dysphagia (CTCAEv4) at six months post-treatment were evaluated using area under the curve (AUC) and calibration curves. Results were compared to a conventional NTCP model based on pre-treatment dysphagia, tumour location, and mean dose to swallowing organs. Attention maps highlighting regions of interest for individual patients were assessed. DL models outperformed the conventional NTCP model in both the independent test set (AUC = 0.80-0.84 versus 0.76) and external test set (AUC = 0.73-0.74 versus 0.63) in AUC and calibration. Attention maps showed a focus on the oral cavity and superior pharyngeal constrictor muscle. DL NTCP models performed significantly better than the conventional NTCP model, suggesting the benefit of using 3D input over conventional discrete dose parameters. Attention maps highlighted relevant regions linked to dysphagia, supporting the utility of DL for improved predictions.
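Conventional NTCP models of the kind used as the baseline here are typically logistic in a linear combination of predictors (e.g., pre-treatment dysphagia, tumour site, mean dose to the swallowing organs); a generic sketch with placeholder coefficients, not the study's fitted values:

```python
import math

def ntcp_logistic(features, coefs, intercept):
    """Logistic NTCP: probability = sigmoid(intercept + sum(coef_i * feature_i)).
    Coefficients here are illustrative placeholders, not fitted model values."""
    s = intercept + sum(c * f for c, f in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-s))
```

The 3D DL model replaces this hand-picked feature vector with the full dose, segmentation, and CT volumes, which is the source of the AUC gains reported above.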

Dynamic computed tomography assessment of patellofemoral and tibiofemoral kinematics before and after total knee arthroplasty: A pilot study.

Boot MR, van de Groes SAW, Tanck E, Janssen D

pubmed logopapers · Sep 29, 2025
To develop and evaluate the clinical feasibility of a dynamic computed tomography (CT) protocol for assessing patellofemoral (PF) and tibiofemoral (TF) kinematics before and after total knee arthroplasty (TKA), and to quantify postoperative kinematic changes in a pilot study. In this prospective single-centre study, patients with primary osteoarthritis scheduled for cemented TKA underwent dynamic CT scans preoperatively and at 1-year follow-up during active flexion-extension-flexion. Preoperatively, the femur, tibia and patella were segmented using a neural network. Postoperatively, computer-aided design (CAD) implant models were aligned to CT data to determine relative implant-bone orientation. Due to metal artefacts, preoperative patella meshes were manually aligned to postoperative scans by four raters, and averaged for analysis. Anatomical coordinate systems were applied to quantify patellar flexion, tilt, proximal tip rotation, mediolateral translation and femoral condyle anterior-posterior translation. Descriptive statistics were reported, and interoperator agreement for patellar registration was assessed using intraclass correlation coefficients (ICCs). Ten patients (mean age, 65 ± 8 years; 6 men) were analysed across a shared flexion range of 14°-55°. Postoperatively, the patella showed increased flexion (median difference: 0.9°-3.9°), medial proximal tip rotation (median difference: 1.5°-6.0°), lateral tilt (median difference: 2.7°-5.5°), and lateral shift (median difference: -1.5 to -2.8 mm). The medial and lateral femoral condyles translated 2-4 mm anterior-posteriorly during knee flexion. Interoperator agreement for patellar registration ranged from good to excellent across all parameters (ICC = 0.85-1.00). This pilot study demonstrates that dynamic CT enables in vivo assessment of PF and TF kinematics before and after TKA. The protocol quantified postoperative kinematic changes and demonstrated potential as a research tool.
Further automation is needed to investigate relationships between these kinematic patterns and patient outcomes in larger-scale studies. Level III.

DCM-Net: dual-encoder CNN-Mamba network with cross-branch fusion for robust medical image segmentation.

Atabansi CC, Wang S, Li H, Nie J, Xiang L, Zhang C, Liu H, Zhou X, Li D

pubmed logopapers · Sep 29, 2025
Medical image segmentation is a critical task for the early detection and diagnosis of various conditions, such as skin cancer, polyps, thyroid nodules, and pancreatic tumors. Recently, deep learning architectures have achieved significant success in this field. However, they face a critical trade-off between local feature extraction and global context modeling. To address this limitation, we present DCM-Net, a dual-encoder architecture that integrates pretrained CNN layers with Visual State Space (VSS) blocks through a Cross-Branch Feature Fusion Module (CBFFM). A Decoder Feature Enhancement Module (DFEM) combines depth-wise separable convolutions with MLP-based semantic rectification to extract enhanced decoded features and improve the segmentation performance. Additionally, we present a new 2D pancreas and pancreatic tumor dataset (CCH-PCT-CT) collected from Chongqing University Cancer Hospital, comprising 3,547 annotated CT slices, which is used to validate the proposed model. We develop DCM-Net as a novel architecture that generates robust features for tumor and organ segmentation in medical images. It achieves competitive performance across all datasets investigated in this study and significantly outperforms all baseline models in segmentation tasks, with higher Dice Similarity Coefficient (DSC) and mean Intersection over Union (mIoU) scores. Its robustness confirms strong potential for clinical use.