
Artificial intelligence software to detect small hepatic lesions on hepatobiliary-phase images using multiscale sampling.

Maeda S, Nakamura Y, Higaki T, Karasudani A, Yamaguchi T, Ishihara M, Baba T, Kondo S, Fonseca D, Awai K

PubMed · Aug 29, 2025
To investigate the effect of multiscale sampling artificial intelligence (msAI) software adapted to small hepatic lesions on the diagnostic performance of readers interpreting gadoxetic acid-enhanced hepatobiliary-phase (HBP) images. HBP images of 30 patients harboring 186 hepatic lesions were included. Three board-certified radiologists, 9 radiology residents, and 2 general physicians interpreted the HBP image data sets twice, once with and once without the msAI software, at 2-week intervals. Jackknife free-response receiver-operating characteristic analysis was performed to calculate the figure of merit (FOM) for detecting hepatic lesions. The negative consultation ratio (NCR), the percentage of correct diagnoses turned incorrect by the AI software, was calculated. We defined readers whose NCR was lower than 10% as those who correctly evaluated the false findings presented by the software. The msAI software significantly improved the lesion localization fraction (LLF) for all readers (0.74 vs 0.82, p < 0.01), whereas the FOM did not improve significantly (0.76 vs 0.78, p = 0.45). In the lesion-size-based subgroup analysis, the LLF improved significantly with the AI software even for lesions smaller than 6 mm (0.40 vs 0.53, p < 0.01), whereas the FOM showed no significant difference (0.63 vs 0.66, p = 0.51). Among the 10 readers with an NCR lower than 10%, not only the LLF but also the FOM was significantly better with the software (LLF 0.77 vs 0.82, FOM 0.79 vs 0.84, both p < 0.01). The detectability of small hepatic lesions on HBP images was improved with the msAI software, especially when its results were properly evaluated.
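
As a rough illustration of the reader-study metrics above, here is a minimal sketch, assuming hypothetical reader marks and counts; it is not the authors' analysis code (the study used jackknife FROC software for the FOM), and all identifiers are made up.

```python
# Minimal sketch (hypothetical data, not the authors' code): per-reader
# lesion localization fraction (LLF) and negative consultation ratio (NCR).

def lesion_localization_fraction(correctly_localized: int, total_lesions: int) -> float:
    """LLF = correctly localized lesions / all true lesions."""
    return correctly_localized / total_lesions

def negative_consultation_ratio(correct_without_ai: set, correct_with_ai: set) -> float:
    """NCR = fraction of findings read correctly without AI that became
    incorrect once the AI output was shown (simplified definition)."""
    turned_incorrect = correct_without_ai - correct_with_ai
    return len(turned_incorrect) / len(correct_without_ai)

# Hypothetical reader: 186 lesions, 138 localized without AI, 152 with AI.
print(lesion_localization_fraction(138, 186))   # ~0.74 without AI
print(lesion_localization_fraction(152, 186))   # ~0.82 with AI

# Hypothetical finding IDs judged correctly in each session.
without_ai = {f"finding_{i}" for i in range(100)}
with_ai = without_ai - {"finding_3", "finding_57"}        # two reversals
print(negative_consultation_ratio(without_ai, with_ai))   # 0.02 -> NCR < 10%
```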

Multi-regional Multiparametric Deep Learning Radiomics for Diagnosis of Clinically Significant Prostate Cancer.

Liu X, Liu R, He H, Yan Y, Zhang L, Zhang Q

PubMed · Aug 29, 2025
Non-invasive and precise identification of clinically significant prostate cancer (csPCa) is essential for the management of prostatic diseases. Our study introduces a novel and interpretable diagnostic method for csPCa, leveraging multi-regional, multiparametric deep learning radiomics based on magnetic resonance imaging (MRI). The prostate regions, including the peripheral zone (PZ) and transition zone (TZ), are automatically segmented using a deep learning framework that combines convolutional neural networks and transformers to generate region-specific masks. Radiomics features are then extracted and selected from multiparametric MRI at the PZ, TZ, and their combined area to develop a multi-regional multiparametric radiomics diagnostic model. Feature contributions are quantified to enhance the model's interpretability and assess the importance of different imaging parameters across various regions. The multi-regional model substantially outperforms single-region models, achieving an optimal area under the curve (AUC) of 0.903 on the internal test set, and an AUC of 0.881 on the external test set. Comparison with other methods demonstrates that our proposed approach exhibits superior performance. Features from diffusion-weighted imaging and apparent diffusion coefficient play a crucial role in csPCa diagnosis, with contribution degrees of 53.28% and 39.52%, respectively. We introduce an interpretable, multi-regional, multiparametric diagnostic model for csPCa using deep learning radiomics. By integrating features from various zones, our model improves diagnostic accuracy and provides clear insights into the key imaging parameters, offering strong potential for clinical applications in csPCa management.
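
The multi-regional fusion idea can be sketched as follows, assuming radiomics feature tables have already been extracted per zone; the segmentation network, feature-selection steps, and column names below are assumptions, not the authors' pipeline.

```python
# Minimal sketch (synthetic data, not the published model): fuse radiomics
# features from two prostate zones and estimate each region's contribution.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 300
pz = rng.normal(size=(n, 20))   # hypothetical peripheral-zone features
tz = rng.normal(size=(n, 20))   # hypothetical transition-zone features
y = (pz[:, 0] + 0.5 * tz[:, 1] + rng.normal(size=n) > 0).astype(int)  # csPCa label

X = np.hstack([pz, tz])                                    # multi-regional fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_tr)
clf = LogisticRegression(max_iter=1000).fit(scaler.transform(X_tr), y_tr)

auc = roc_auc_score(y_te, clf.predict_proba(scaler.transform(X_te))[:, 1])
coef = np.abs(clf.coef_[0])
pz_share = coef[:20].sum() / coef.sum()                    # crude region contribution
print(f"AUC={auc:.3f}, PZ contribution={pz_share:.1%}, TZ contribution={1 - pz_share:.1%}")
```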

AI-driven body composition monitoring and its prognostic role in mCRPC undergoing lutetium-177 PSMA radioligand therapy: insights from a retrospective single-center analysis.

Ruhwedel T, Rogasch J, Galler M, Schatka I, Wetz C, Furth C, Biernath N, De Santis M, Shnayien S, Kolck J, Geisel D, Amthauer H, Beetz NL

PubMed · Aug 28, 2025
Body composition (BC) analysis is performed to quantify the relative amounts of different body tissues as a measure of physical fitness and tumor cachexia. We hypothesized that relative changes in BC parameters, assessed by an artificial intelligence-based, PACS-integrated software, between baseline imaging before the start of radioligand therapy (RLT) and interim staging after two RLT cycles could predict overall survival (OS) in patients with metastatic castration-resistant prostate cancer (mCRPC). We conducted a single-center, retrospective analysis of 92 patients with mCRPC undergoing [177Lu]Lu-PSMA RLT between September 2015 and December 2023. All patients had [68Ga]Ga-PSMA-11 PET/CT at baseline (≤ 6 weeks before the first RLT cycle) and at interim staging (6-8 weeks after the second RLT cycle), allowing for longitudinal BC assessment. During follow-up, 78 patients (85%) died. Median OS was 16.3 months. Median follow-up time in survivors was 25.6 months. The 1-year mortality rate was 32.6% (95% CI 23.0-42.2%) and the 5-year mortality rate was 92.9% (95% CI 85.8-100.0%). In multivariable regression, the relative change in visceral adipose tissue (VAT) (HR: 0.26; p = 0.006), previous chemotherapy of any type (HR: 2.4; p = 0.003), the presence of liver metastases (HR: 2.4; p = 0.018), and a higher baseline De Ritis ratio (HR: 1.4; p < 0.001) remained independent predictors of OS. Patients with a greater decrease in VAT (< -20%) had a median OS of 10.2 months versus 18.5 months in patients with a smaller VAT decrease or a VAT increase (≥ -20%) (log-rank test: p = 0.008). In a separate Cox model, the change in VAT predicted OS (p = 0.005) independent of the best PSA response after 1-2 RLT cycles (p = 0.09), and there was no interaction between the two (p = 0.09). PACS-integrated, AI-based BC monitoring detects relative changes in VAT, which was an independent predictor of shorter OS in our population of patients undergoing RLT.
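
A minimal sketch of the multivariable survival analysis described above, assuming a lifelines installation and a hypothetical table with relative VAT change and the reported covariates; the data and column names are invented for illustration and do not reproduce the study dataset.

```python
# Minimal sketch (hypothetical cohort): Cox proportional-hazards model with
# relative VAT change and the covariates reported in the abstract, via lifelines.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 92
df = pd.DataFrame({
    "os_months": rng.exponential(16, n),           # overall survival time
    "death": rng.integers(0, 2, n),                # 1 = event observed
    "vat_rel_change": rng.normal(-0.1, 0.3, n),    # e.g. -0.25 means -25% VAT
    "prior_chemo": rng.integers(0, 2, n),
    "liver_mets": rng.integers(0, 2, n),
    "de_ritis": rng.normal(1.2, 0.4, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="death")
cph.print_summary()   # hazard ratio per covariate, as in the multivariable model
```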

Privacy-preserving federated transfer learning for enhanced liver lesion segmentation in PET-CT imaging.

Kumar R, Zeng S, Kumar J, Mao X

PubMed · Aug 28, 2025
Positron emission tomography-computed tomography (PET-CT) evaluation is critical for liver lesion diagnosis. However, data scarcity, privacy concerns, and cross-institutional imaging heterogeneity impede accurate deep learning model deployment. We propose a Federated Transfer Learning (FTL) framework that integrates federated learning's privacy-preserving collaboration with transfer learning's pre-trained model adaptation, enhancing liver lesion segmentation in PET-CT imaging. By leveraging a Feature Co-learning Block (FCB) and privacy-enhancing technologies, namely Differential Privacy (DP) and Homomorphic Encryption (HE), our approach ensures robust segmentation without sharing sensitive patient data. Our contributions are threefold: (1) a privacy-preserving FTL framework combining federated learning and adaptive transfer learning; (2) a multi-modal FCB for improved PET-CT feature integration; and (3) extensive evaluation across diverse institutions with privacy-enhancing technologies such as DP and HE. Experiments on simulated multi-institutional PET-CT datasets demonstrate superior performance compared to baselines, with robust privacy guarantees. The FTL framework reduces data requirements and enhances generalizability, advancing liver lesion diagnostics.
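
The federated-averaging-with-differential-privacy idea at the core of such frameworks can be sketched compactly; the proposed FTL framework, the FCB module, and the homomorphic-encryption layer are not reproduced here, and the clipping and noise parameters are arbitrary assumptions.

```python
# Minimal sketch (not the proposed FTL framework): federated averaging of
# per-institution updates with clipping and Gaussian noise, DP-SGD style.
# Segmentation networks are abstracted into plain weight vectors.
import numpy as np

def dp_federated_average(global_w, client_deltas, clip_norm=1.0, noise_std=0.05):
    """Clip each institution's update, average, and add Gaussian noise."""
    clipped = []
    for d in client_deltas:
        norm = np.linalg.norm(d)
        clipped.append(d * min(1.0, clip_norm / (norm + 1e-12)))
    avg_delta = np.mean(clipped, axis=0)
    noise = np.random.normal(scale=noise_std, size=avg_delta.shape)
    return global_w + avg_delta + noise

rng = np.random.default_rng(0)
global_w = np.zeros(10)                       # stand-in for model weights
for round_ in range(3):                       # simulated federated rounds
    # Each of 4 institutions computes a local update (random stand-ins here).
    deltas = [rng.normal(size=10) * 0.1 for _ in range(4)]
    global_w = dp_federated_average(global_w, deltas)
print(global_w)
```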

Macrotrabecular-massive subtype in hepatocellular carcinoma based on contrast-enhanced CT: deep learning outperforms machine learning.

Jia L, Li Z, Huang G, Jiang H, Xu H, Zhao J, Li J, Lei J

PubMed · Aug 28, 2025
To develop a CT-based deep learning model for predicting the macrotrabecular-massive (MTM) subtype of hepatocellular carcinoma (HCC) and to compare its diagnostic performance with machine learning models. We retrospectively collected contrast-enhanced CT data from patients diagnosed with HCC via histopathological examination between January 2019 and August 2023. These patients were recruited from two medical centers. All analyses were performed using two-dimensional regions of interest. We developed a novel deep learning network based on ResNet-50, named ResNet-ViT Contrastive Learning (RVCL). The RVCL model was compared against baseline deep learning models and machine learning models. Additionally, we developed a multimodal prediction model by integrating the deep learning model with clinical parameters. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). A total of 368 patients (mean age, 56 ± 10 years; 285 [77%] male) from two institutions were retrospectively enrolled. Our RVCL model demonstrated superior diagnostic performance in predicting MTM on the external test set (AUC = 0.93) compared to the five baseline deep learning models (AUCs ranging from 0.46 to 0.72, all p < 0.05) and the three machine learning models (AUCs ranging from 0.49 to 0.60, all p < 0.05). However, integrating the clinical biomarker alpha-fetoprotein (AFP) into the RVCL model did not significantly improve diagnostic performance (internal test set: AUC 0.99 vs 0.95 [p = 0.08]; external test set: AUC 0.98 vs 0.93 [p = 0.05]). The deep learning model based on contrast-enhanced CT can accurately predict the MTM subtype in HCC patients, offering a smart tool for clinical decision-making. The RVCL model introduces a transformative approach to non-invasive diagnosis of the MTM subtype of HCC by harmonizing convolutional neural networks and vision transformers within a unified architecture. The RVCL model can accurately predict the MTM subtype. Deep learning outperforms machine learning for predicting the MTM subtype. RVCL boosts accuracy and guides personalized therapy.
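
As a rough sketch of the general CNN-plus-vision-transformer hybrid pattern that the RVCL name suggests, the toy model below treats CNN feature-map positions as tokens for a small transformer encoder. Layer sizes, depth, and the contrastive objective are assumptions and do not reproduce the authors' architecture.

```python
# Minimal sketch (assumed architecture, not the published RVCL model):
# CNN backbone -> tokens -> transformer encoder -> classification head.
import torch
import torch.nn as nn

class CNNViTHybrid(nn.Module):
    def __init__(self, n_classes: int = 2, d_model: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(                      # stand-in for ResNet stages
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                               batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.cnn(x)                            # (B, C, H, W)
        tokens = feats.flatten(2).transpose(1, 2)      # (B, H*W, C) token sequence
        tokens = self.transformer(tokens)
        return self.head(tokens.mean(dim=1))           # pooled tokens -> MTM logits

model = CNNViTHybrid()
logits = model(torch.randn(2, 3, 224, 224))            # two dummy CT patches
print(logits.shape)                                    # torch.Size([2, 2])
```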

Deep Learning Framework for Early Detection of Pancreatic Cancer Using Multi-Modal Medical Imaging Analysis

Dennis Slobodzian, Karissa Tilbury, Amir Kordijazi

arXiv preprint · Aug 28, 2025
Pancreatic ductal adenocarcinoma (PDAC) remains one of the most lethal forms of cancer, with a five-year survival rate below 10%, primarily due to late detection. This research develops and validates a deep learning framework for early PDAC detection through analysis of dual-modality imaging: autofluorescence and second harmonic generation (SHG). We analyzed 40 unique patient samples to create a specialized neural network capable of distinguishing between normal, fibrotic, and cancerous tissue. Our methodology evaluated six distinct deep learning architectures, comparing traditional Convolutional Neural Networks (CNNs) with modern Vision Transformers (ViTs). Through systematic experimentation, we identified and overcame significant challenges in medical image analysis, including limited dataset size and class imbalance. The final optimized framework, based on a modified ResNet architecture with frozen pre-trained layers and class-weighted training, achieved over 90% accuracy in cancer detection. This represents a significant improvement over current manual analysis methods and demonstrates potential for clinical deployment. This work establishes a robust pipeline for automated PDAC detection that can augment pathologists' capabilities while providing a foundation for future expansion to other cancer types. The developed methodology also offers valuable insights for applying deep learning to limited-size medical imaging datasets, a common challenge in clinical applications.
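
The "frozen pre-trained layers plus class-weighted training" recipe can be sketched in a few lines; the backbone choice (ResNet-18 with ImageNet weights), class counts, and hyperparameters below are assumptions rather than the paper's exact configuration.

```python
# Minimal sketch (assumptions on backbone and weights): freeze a pre-trained
# ResNet and train only a new head with a class-weighted loss to counter
# class imbalance. Requires a recent torchvision ("IMAGENET1K_V1" weight name).
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")
for p in backbone.parameters():
    p.requires_grad = False                          # freeze pre-trained layers

backbone.fc = nn.Linear(backbone.fc.in_features, 3)  # normal / fibrotic / cancer

# Inverse-frequency class weights (hypothetical class counts).
counts = torch.tensor([50.0, 30.0, 20.0])
weights = counts.sum() / (len(counts) * counts)
criterion = nn.CrossEntropyLoss(weight=weights)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)

x = torch.randn(8, 3, 224, 224)                      # dummy image batch
y = torch.randint(0, 3, (8,))
loss = criterion(backbone(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```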

PET/CT radiomics for non-invasive prediction of immunotherapy efficacy in cervical cancer.

Du T, Li C, Grzegozek M, Huang X, Rahaman M, Wang X, Sun H

PubMed · Aug 28, 2025
Purpose: The prediction of immunotherapy efficacy in cervical cancer patients remains a critical clinical challenge. This study aims to develop and validate a deep learning-based automatic tumor segmentation method on PET/CT images, extract texture features from the tumor regions of cervical cancer patients, investigate their correlation with PD-L1 expression, and construct a predictive model for immunotherapy efficacy. Methods: We retrospectively collected data from 283 pathologically confirmed cervical cancer patients who underwent 18F-FDG PET/CT examinations, divided into three subsets. Subset-I (n = 97) was used to develop a deep learning-based segmentation model using Attention-UNet and region-growing methods on co-registered PET/CT images. Subset-II (n = 101) was used to explore correlations between radiomic features and PD-L1 expression. Subset-III (n = 85) was used to construct and validate a radiomic model for predicting immunotherapy response. Results: Using Subset-I, a segmentation model was developed; it achieved optimal performance at the 94th epoch, with an IoU of 0.746 in the validation set, and manual evaluation confirmed accurate tumor localization. Sixteen features demonstrated excellent reproducibility (ICC > 0.75). Using Subset-II, PD-L1-correlated features were extracted and identified: 183 features showed significant correlations with PD-L1 expression (P < 0.05). Using these features in Subset-III, a predictive model for immunotherapy efficacy was constructed and evaluated; the SVM-based radiomic model achieved the best predictive performance, with an AUC of 0.935. Conclusion: We validated, respectively in Subset-I, Subset-II, and Subset-III, that deep learning models incorporating medical prior knowledge can accurately and automatically segment cervical cancer lesions, that texture features extracted from 18F-FDG PET/CT are significantly associated with PD-L1 expression, and that predictive models based on these features can effectively predict the efficacy of PD-L1 immunotherapy. This approach offers a non-invasive, efficient, and cost-effective tool for guiding individualized immunotherapy in cervical cancer patients and may help reduce patient burden and accelerate treatment planning.
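
The additive attention gate used in Attention U-Net-style segmentation, mentioned above, can be illustrated with a short module; channel sizes and the surrounding encoder-decoder are assumptions, and this is not the authors' network.

```python
# Minimal sketch (not the authors' model): an Attention U-Net-style gate that
# re-weights skip-connection features using a coarser gating signal.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, ch_x: int, ch_g: int, ch_mid: int):
        super().__init__()
        self.w_x = nn.Conv2d(ch_x, ch_mid, kernel_size=1)
        self.w_g = nn.Conv2d(ch_g, ch_mid, kernel_size=1)
        self.psi = nn.Conv2d(ch_mid, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # x: skip features; g: gating signal already resized to x's spatial size
        attn = self.sigmoid(self.psi(self.relu(self.w_x(x) + self.w_g(g))))
        return x * attn                                # suppress irrelevant regions

gate = AttentionGate(ch_x=64, ch_g=64, ch_mid=32)
skip = torch.randn(1, 64, 64, 64)                      # e.g. PET/CT encoder features
gating = torch.randn(1, 64, 64, 64)
print(gate(skip, gating).shape)                        # torch.Size([1, 64, 64, 64])
```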

MRI-based machine-learning radiomics of the liver to predict liver-related events in hepatitis B virus-associated fibrosis.

Luo Y, Luo Q, Wu Y, Zhang S, Ren H, Wang X, Liu X, Yang Q, Xu W, Wu Q, Li Y

PubMed · Aug 27, 2025
The onset of liver-related events (LREs) in fibrosis indicates a poor prognosis and worsens patients' quality of life, making the prediction and early detection of LREs crucial. The aim of this study was to develop a radiomics model using liver magnetic resonance imaging (MRI) to predict LRE risk in patients undergoing antiviral treatment for chronic fibrosis caused by hepatitis B virus (HBV). Patients with HBV-associated liver fibrosis and liver stiffness measurements ≥ 10 kPa were included. Feature selection and dimensionality reduction techniques identified discriminative features from three MRI sequences. Radiomics models were built using eight machine learning techniques and evaluated for performance. Shapley additive explanation and permutation importance techniques were applied to interpret the model output. A total of 222 patients aged 49 ± 10 years (mean ± standard deviation), 175 males, were evaluated, with 41 experiencing LREs. The radiomics model, incorporating 58 selected features, outperformed traditional clinical tools in prediction accuracy. Developed using a support vector machine classifier, the model achieved optimal areas under the receiver operating characteristic curves of 0.94 and 0.93 in the training and test sets, respectively, demonstrating good calibration. Machine learning techniques effectively predicted LREs in patients with fibrosis and HBV, offering comparable accuracy across algorithms and supporting personalized care decisions for HBV-related liver disease. Radiomics models based on liver multisequence MRI can improve risk prediction and management of patients with HBV-associated chronic fibrosis. In addition, they offer valuable prognostic insights and aid in making informed clinical decisions. Liver-related events (LREs) are associated with poor prognosis in chronic fibrosis. Radiomics models could predict LREs in patients with hepatitis B-associated chronic fibrosis. Radiomics contributes to personalized care choices for patients with hepatitis B-associated fibrosis.
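
The analysis steps described above (feature selection, a support vector machine classifier, and permutation importance for interpretation) can be sketched with scikit-learn on synthetic data; the feature matrix, labels, and the choice of a univariate selector are assumptions, not the study's exact pipeline.

```python
# Minimal sketch (synthetic data, not the study cohort): feature selection,
# SVM classification of liver-related events, and permutation importance.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(222, 200))                  # hypothetical MRI radiomics features
y = (X[:, :5].sum(axis=1) + rng.normal(size=222) > 0).astype(int)  # LRE labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=58),   # keep 58 features, as reported
                      SVC(kernel="rbf", probability=True))
model.fit(X_tr, y_tr)

print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
imp = permutation_importance(model, X_te, y_te, scoring="roc_auc",
                             n_repeats=10, random_state=0)
print("Top feature indices:", np.argsort(imp.importances_mean)[::-1][:5])
```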

ProMUS-NET: Artificial intelligence detects more prostate cancer than urologists on micro-ultrasonography.

Zhou SR, Zhang L, Choi MH, Vesal S, Kinnaird A, Brisbane WG, Lughezzani G, Maffei D, Fasulo V, Albers P, Fan RE, Shao W, Sonn GA, Rusu M

PubMed · Aug 27, 2025
To improve sensitivity and inter-reader consistency of prostate cancer localisation on micro-ultrasonography (MUS) by developing a deep learning model for automatic cancer segmentation, and to compare model performance with that of expert urologists. We performed an institutional review board-approved prospective collection of MUS images from patients undergoing magnetic resonance imaging (MRI)-ultrasonography fusion guided biopsy at a single institution. Patients underwent 14-core systematic biopsy and additional targeted sampling of suspicious MRI lesions. Biopsy pathology and MRI information were cross-referenced to annotate the locations of International Society of Urological Pathology Grade Group (GG) ≥2 clinically significant cancer on MUS images. We trained a no-new U-Net model - the Prostate Micro-Ultrasound Network (ProMUS-NET) - to localise GG ≥2 cancer on these image stacks in a fivefold cross-validation. Performance was compared vs that of six expert urologists in a matched sub-cohort. The artificial intelligence (AI) model achieved an area under the receiver-operating characteristic curve of 0.92 and detected more cancers than urologists (lesion-level sensitivity 73% vs 58%; patient-level sensitivity 77% vs 66%). AI lesion-level sensitivity for peripheral zone lesions was 86.2%. Our AI model identified prostate cancer lesions on MUS with high sensitivity and specificity. Further work is ongoing to improve margin overlap, to reduce false positives, and to perform external validation. AI-assisted prostate cancer detection on MUS has great potential to improve biopsy diagnosis by urologists.
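
Lesion-level sensitivity, the headline metric above, can be computed by labelling connected components in the ground-truth mask and counting those overlapped by the prediction; the sketch below uses hypothetical masks and a simple any-overlap criterion, which may differ from the study's matching rule.

```python
# Minimal sketch (hypothetical masks, not the ProMUS-NET evaluation code):
# lesion-level sensitivity from connected components of the ground truth.
import numpy as np
from scipy import ndimage

def lesion_level_sensitivity(gt_mask: np.ndarray, pred_mask: np.ndarray) -> float:
    labels, n_lesions = ndimage.label(gt_mask)        # connected GT lesions
    if n_lesions == 0:
        return float("nan")
    detected = sum(
        1 for i in range(1, n_lesions + 1)
        if np.any(pred_mask[labels == i])             # any overlap counts as a hit
    )
    return detected / n_lesions

gt = np.zeros((128, 128), dtype=np.uint8)
gt[10:20, 10:20] = 1                                  # two synthetic lesions
gt[60:70, 60:70] = 1
pred = np.zeros_like(gt)
pred[12:18, 12:18] = 1                                # only the first one detected
print(lesion_level_sensitivity(gt, pred))             # 0.5
```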

Development of Privacy-preserving Deep Learning Model with Homomorphic Encryption: A Technical Feasibility Study in Kidney CT Imaging.

Lee SW, Choi J, Park MJ, Kim H, Eo SH, Lee G, Kim S, Suh J

PubMed · Aug 27, 2025
Purpose: To evaluate the technical feasibility of implementing homomorphic encryption in deep learning models for privacy-preserving CT image analysis of renal masses. Materials and Methods: A privacy-preserving deep learning system was developed through three sequential technical phases: a reference CNN model (Ref-CNN) based on the ResNet architecture; modification for encryption compatibility (Approx-CNN) by replacing ReLU with a polynomial approximation and max-pooling with average pooling; and implementation of fully homomorphic encryption (HE-CNN). The CKKS encryption scheme was used for its capability to perform arithmetic operations on encrypted real numbers. Using 12,446 CT images from a public dataset (3,709 renal cysts, 5,077 normal kidneys, and 2,283 kidney tumors), we evaluated model performance using the area under the receiver operating characteristic curve (AUC) and the area under the precision-recall curve (AUPRC). Results: All models demonstrated high diagnostic accuracy, with AUC ranging from 0.89 to 0.99 and AUPRC from 0.67 to 0.99. The diagnostic performance trade-off from Ref-CNN to Approx-CNN was minimal (AUC: 0.99 to 0.97 for the normal category), with no evidence of differences between models. However, encryption substantially increased storage and computational demands: a 256 × 256-pixel image expanded from 65 KB to 32 MB, requiring 50 minutes for CPU inference but only 90 seconds with GPU acceleration. Conclusion: This technical development demonstrates that privacy-preserving deep learning inference using homomorphic encryption is feasible for renal mass classification on CT images, achieving comparable diagnostic performance while maintaining data privacy through end-to-end encryption.
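
The "Approx-CNN" step above relies on the fact that CKKS-style homomorphic encryption supports only additions and multiplications, so non-polynomial operations must be replaced. The sketch below swaps ReLU for a low-degree polynomial and max-pooling for average pooling in a toy CNN; the polynomial coefficients and layer sizes are assumptions, not the paper's model, and the encryption itself is not shown.

```python
# Minimal sketch (illustrative only): make a CNN "HE-friendly" by using a
# polynomial activation and average pooling, so inference uses only
# additions and multiplications compatible with CKKS.
import torch
import torch.nn as nn

class PolyReLU(nn.Module):
    """Degree-2 polynomial stand-in for ReLU (HE schemes cannot evaluate max)."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return 0.25 * x * x + 0.5 * x + 0.25           # assumed coefficients

approx_cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), PolyReLU(),
    nn.AvgPool2d(2),                                   # average pooling, not max
    nn.Conv2d(16, 32, 3, padding=1), PolyReLU(),
    nn.AvgPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 64 * 64, 3),                        # cyst / normal / tumor
)

x = torch.randn(1, 1, 256, 256)                        # dummy 256 x 256 CT slice
print(approx_cnn(x).shape)                             # torch.Size([1, 3])
```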