
AI-based CT assessment of sarcopenia in borderline resectable pancreatic cancer: A narrative review of clinical and technical perspectives.

Gehin W, Lambert A, Bibault JE

pubmed · Jun 25, 2025
Sarcopenia, defined as the progressive loss of skeletal muscle mass and function, has been associated with poor prognosis in patients with pancreatic cancer, particularly those with borderline resectable pancreatic cancer (BRPC). Although body composition can be extracted from routine CT imaging, sarcopenia assessment remains underused in clinical practice. Recent advances in artificial intelligence (AI) offer the potential to automate and standardize this process, but their clinical translation remains limited. This narrative review aims to critically evaluate (1) the clinical impact of CT-defined sarcopenia in BRPC, and (2) the performance and maturity of AI-based methods for automated muscle and fat segmentation on CT images. A dual-axis literature search was conducted to identify clinical studies assessing the prognostic role of sarcopenia in BRPC, and technical studies developing AI-based segmentation models for body composition analysis. Structured data extraction was applied to 13 clinical and 71 technical studies. A PRISMA-inspired flow diagram was included to ensure methodological transparency. Sarcopenia was consistently associated with worse survival and treatment tolerance in BRPC, yet clinical definitions and cut-offs varied widely. AI models, mostly 2D U-Nets trained on L3-level CT slices, achieved high segmentation accuracy (mean DSC >0.93), but external validation and standardization were often lacking. CT-based AI assessment of sarcopenia holds promise for improving patient stratification in BRPC. However, its clinical adoption will require standardization, integration into decision-support frameworks, and prospective validation across diverse populations.
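The review centers on CT-defined sarcopenia measured at the L3 vertebral level. As a minimal illustration of the downstream computation (not taken from any of the reviewed models), the skeletal muscle index (SMI) is typically derived from the segmented L3 muscle area and patient height; the sarcopenia cut-offs applied to it vary across the studies reviewed:

```python
def muscle_area_cm2(mask, pixel_spacing_mm):
    """Cross-sectional muscle area (cm^2) from a binary L3-slice mask."""
    n_pixels = sum(sum(row) for row in mask)
    return n_pixels * pixel_spacing_mm[0] * pixel_spacing_mm[1] / 100.0  # mm^2 -> cm^2

def skeletal_muscle_index(area_cm2, height_m):
    """L3 skeletal muscle index (SMI, cm^2/m^2), the usual sarcopenia metric."""
    return area_cm2 / height_m ** 2

# Hypothetical patient: 130 cm^2 of segmented muscle at L3, height 1.70 m.
smi = skeletal_muscle_index(130.0, 1.70)
```

An AI segmentation model would supply the binary mask; the heterogeneous SMI cut-offs are one of the sources of variability the review highlights.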

Diagnostic Performance of Radiomics for Differentiating Intrahepatic Cholangiocarcinoma from Hepatocellular Carcinoma: A Systematic Review and Meta-analysis.

Wang D, Sun L

pubmed · Jun 25, 2025
Differentiating intrahepatic cholangiocarcinoma (ICC) from hepatocellular carcinoma (HCC) is essential for selecting the most effective treatment strategies. However, traditional imaging modalities and serum biomarkers often lack sufficient specificity. Radiomics, a sophisticated image analysis approach that derives quantitative data from medical imaging, has emerged as a promising non-invasive tool. To systematically review and meta-analyze the diagnostic accuracy of radiomics in differentiating ICC from HCC. PubMed, EMBASE, and Web of Science databases were systematically searched through January 24, 2025. Studies evaluating radiomics models for distinguishing ICC from HCC were included. The quality of the included studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) and METhodological RadiomICs Score (METRICS) tools. Pooled sensitivity, specificity, and area under the curve (AUC) were calculated using a bivariate random-effects model. Subgroup and publication bias analyses were also performed. Twelve studies with 2541 patients were included, with 14 validation cohorts entered into the meta-analysis. The pooled sensitivity and specificity of radiomics models were 0.82 (95% CI: 0.76-0.86) and 0.90 (95% CI: 0.85-0.93), respectively, with an AUC of 0.88 (95% CI: 0.85-0.91). Subgroup analyses revealed variations based on segmentation method, software used, and sample size, though not all differences were statistically significant. Publication bias was not detected. Radiomics demonstrates high diagnostic accuracy in distinguishing ICC from HCC and offers a non-invasive adjunct to conventional diagnostics. Further prospective, multicenter studies with standardized workflows are needed to enhance clinical applicability and reproducibility.
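The pooling here uses a bivariate random-effects model. As a simplified, hypothetical stand-in, a univariate DerSimonian-Laird pool of logit-transformed sensitivities illustrates the random-effects idea (the bivariate model additionally accounts for the correlation between sensitivity and specificity, which this sketch ignores):

```python
import math

def logit(p): return math.log(p / (1 - p))
def inv_logit(x): return 1 / (1 + math.exp(-x))

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with the DerSimonian-Laird tau^2."""
    w = [1 / v for v in variances]
    sw = sum(w)
    y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_star = [1 / (v + tau2) for v in variances]
    return sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)

# Hypothetical (tp, fn) counts from three invented validation cohorts.
cohorts = [(80, 20), (45, 10), (60, 12)]
eff = [logit(tp / (tp + fn)) for tp, fn in cohorts]
var = [1 / tp + 1 / fn for tp, fn in cohorts]  # variance of a logit proportion
pooled_sens = inv_logit(dersimonian_laird(eff, var))
```

The same machinery applied jointly to logit sensitivity and logit specificity, with a between-study covariance, gives the bivariate model the authors used.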

Assessment of Robustness of MRI Radiomic Features in the Abdomen: Impact of Deep Learning Reconstruction and Accelerated Acquisition.

Zhong J, Xing Y, Hu Y, Liu X, Dai S, Ding D, Lu J, Yang J, Song Y, Lu M, Nickel D, Lu W, Zhang H, Yao W

pubmed · Jun 25, 2025
The objective of this study is to investigate the impact of deep learning reconstruction and accelerated acquisition on the reproducibility and variability of radiomic features in abdominal MRI. Seventeen volunteers were prospectively included to undergo abdominal MRI on a 3-T scanner for axial T2-weighted, axial T2-weighted fat-suppressed, and coronal T2-weighted sequences. Each sequence was scanned four times: clinical reference acquisition with standard reconstruction, clinical reference acquisition with deep learning reconstruction, accelerated acquisition with standard reconstruction, and accelerated acquisition with deep learning reconstruction. Regions of interest were drawn for ten anatomical sites with rigid registration. Ninety-three radiomic features were extracted via PyRadiomics after z-score normalization. Reproducibility was evaluated against the clinical reference acquisition with standard reconstruction using the intraclass correlation coefficient (ICC) and concordance correlation coefficient (CCC). Variability among the four scans was assessed by coefficient of variation (CV) and quartile coefficient of dispersion (QCD). The median (first and third quartile) of overall ICC and CCC values was 0.451 (0.305, 0.583) and 0.450 (0.304, 0.582), respectively. The overall percentage of radiomic features with ICC > 0.90 and CCC > 0.90 was only 8.1% for each, indicating poor reproducibility. The median (first and third quartile) of overall CV and QCD values was 9.4% (4.9%, 17.2%) and 4.9% (2.5%, 9.7%), respectively. The overall percentage of radiomic features with CV < 10% and QCD < 10% was 51.9% and 75.0%, respectively, which was considered acceptable. Irrespective of clinical significance, deep learning reconstruction and accelerated acquisition led to poor reproducibility of radiomic features, but more than half of the radiomic features varied within an acceptable range.
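The agreement and variability metrics used here are straightforward to compute. A minimal stdlib sketch of Lin's CCC, CV, and QCD (the ICC formula depends on the chosen ICC model and is omitted):

```python
import statistics

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurements."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx, vy = statistics.pvariance(x), statistics.pvariance(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return 2 * cov / (vx + vy + (mx - my) ** 2)

def cv_percent(values):
    """Coefficient of variation (%) across repeated scans of one feature."""
    return 100 * statistics.pstdev(values) / statistics.fmean(values)

def qcd_percent(values):
    """Quartile coefficient of dispersion (%): (Q3 - Q1) / (Q3 + Q1)."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    return 100 * (q3 - q1) / (q3 + q1)
```

In a study like this, each function would be applied per radiomic feature across the four acquisition/reconstruction combinations.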

[Thyroid nodule segmentation method integrating receptance weighted key-value architecture and spherical geometric features].

Zhu L, Wei G

pubmed · Jun 25, 2025
The Transformer incurs high computational complexity in the segmentation of ultrasound thyroid nodules, and traditional image sampling techniques lose image details or omit key spatial information when dealing with high-resolution, complex-texture, or unevenly dense two-dimensional ultrasound images. To address these issues, this paper proposes a thyroid nodule segmentation method that integrates the receptance weighted key-value (RWKV) architecture and spherical geometry feature (SGF) sampling technology. The method effectively captures the details of adjacent regions through two-dimensional offset prediction and pixel-level sampling position adjustment, achieving precise segmentation. Additionally, this study introduces a patch attention module (PAM) to optimize the decoder feature map using a regional cross-attention mechanism, enabling it to focus more precisely on the high-resolution features of the encoder. Experiments on the thyroid nodule segmentation dataset (TN3K) and the digital database for thyroid images (DDTI) show that the proposed method achieves Dice similarity coefficients (DSC) of 87.24% and 80.79%, respectively, outperforming existing models while maintaining lower computational complexity. This approach may provide an efficient solution for the precise segmentation of thyroid nodules.
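The DSC values reported for TN3K and DDTI are the standard overlap measure between predicted and reference masks. A minimal sketch on flattened binary masks:

```python
def dice(pred, target):
    """Dice similarity coefficient between two binary masks (flat lists)."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2 * intersection / total if total else 1.0  # empty/empty -> perfect

# Toy 6-pixel masks: 2 overlapping foreground pixels out of 3 each.
pred   = [1, 1, 0, 0, 1, 0]
target = [1, 0, 0, 0, 1, 1]
score = dice(pred, target)  # 2*2 / (3+3) = 2/3
```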

EAGLE: An Efficient Global Attention Lesion Segmentation Model for Hepatic Echinococcosis

Jiayan Chen, Kai Li, Yulu Zhao, Jianqiang Huang, Zhan Wang

arxiv preprint · Jun 25, 2025
Hepatic echinococcosis (HE) is a widespread parasitic disease in underdeveloped pastoral areas with limited medical resources. While CNN-based and Transformer-based models have been widely applied to medical image segmentation, CNNs lack global context modeling due to local receptive fields, and Transformers, though capable of capturing long-range dependencies, are computationally expensive. Recently, state space models (SSMs), such as Mamba, have gained attention for their ability to model long sequences with linear complexity. In this paper, we propose EAGLE, a U-shaped network composed of a Progressive Visual State Space (PVSS) encoder and a Hybrid Visual State Space (HVSS) decoder that work collaboratively to achieve efficient and accurate segmentation of hepatic echinococcosis (HE) lesions. The proposed Convolutional Vision State Space Block (CVSSB) module is designed to fuse local and global features, while the Haar Wavelet Transformation Block (HWTB) module compresses spatial information into the channel dimension to enable lossless downsampling. Due to the lack of publicly available HE datasets, we collected CT slices from 260 patients at a local hospital. Experimental results show that EAGLE achieves state-of-the-art performance with a Dice Similarity Coefficient (DSC) of 89.76%, surpassing MSVM-UNet by 1.61%.
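The Haar Wavelet Transformation Block (HWTB) idea, compressing spatial information into the channel dimension for lossless downsampling, can be illustrated with a plain 2x2 Haar transform: every 2x2 spatial block becomes four channel values, halving height and width while retaining all input values. This is a toy sketch of the transform itself, not the paper's implementation:

```python
def haar_downsample(img):
    """2x2 Haar transform: one (H, W) channel -> 4 channels of (H/2, W/2).
    Spatial detail moves into channels, so no information is discarded."""
    h, w = len(img), len(img[0])
    ll, lh, hl, hh = [], [], [], []
    for i in range(0, h, 2):
        rll, rlh, rhl, rhh = [], [], [], []
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            rll.append((a + b + c + d) / 2)  # low-low (local average)
            rlh.append((a - b + c - d) / 2)  # horizontal detail
            rhl.append((a + b - c - d) / 2)  # vertical detail
            rhh.append((a - b - c + d) / 2)  # diagonal detail
        ll.append(rll); lh.append(rlh); hl.append(rhl); hh.append(rhh)
    return [ll, lh, hl, hh]

out = haar_downsample([[1, 2], [3, 4]])
```

Because the four sub-bands jointly determine every input pixel (the transform is invertible), downsampling this way loses nothing, unlike max or average pooling.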

Few-Shot Learning for Prostate Cancer Detection on MRI: Comparative Analysis with Radiologists' Performance.

Yamagishi Y, Baba Y, Suzuki J, Okada Y, Kanao K, Oyama M

pubmed · Jun 25, 2025
Deep-learning models for prostate cancer detection typically require large datasets, limiting clinical applicability across institutions due to domain shift issues. This study aimed to develop a few-shot deep-learning model for prostate cancer detection on multiparametric MRI that requires minimal training data and to compare its diagnostic performance with experienced radiologists. In this retrospective study, we used 99 cases (80 positive, 19 negative) of biopsy-confirmed prostate cancer (2017-2022), with 20 cases for training, 5 for validation, and 74 for testing. A 2D transformer model was trained on T2-weighted, diffusion-weighted, and apparent diffusion coefficient map images. Model predictions were compared with two radiologists using Matthews correlation coefficient (MCC) and F1 score, with 95% confidence intervals (CIs) calculated via bootstrap method. The model achieved an MCC of 0.297 (95% CI: 0.095-0.474) and F1 score of 0.707 (95% CI: 0.598-0.847). Radiologist 1 had an MCC of 0.276 (95% CI: 0.054-0.484) and F1 score of 0.741; Radiologist 2 had an MCC of 0.504 (95% CI: 0.289-0.703) and F1 score of 0.871, showing that the model performance was comparable to Radiologist 1. External validation on the Prostate158 dataset revealed that ImageNet pretraining substantially improved model performance, increasing study-level ROC-AUC from 0.464 to 0.636 and study-level PR-AUC from 0.637 to 0.773 across all architectures. Our findings demonstrate that few-shot deep-learning models can achieve clinically relevant performance when using pretrained transformer architectures, offering a promising approach to address domain shift challenges across institutions.
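The MCC and percentile-bootstrap confidence intervals used in this comparison can be sketched in a few lines (toy data, not the study's evaluation code):

```python
import math, random

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    tn = sum(not t and not p for t, p in zip(y_true, y_pred))
    fp = sum(not t and p for t, p in zip(y_true, y_pred))
    fn = sum(t and not p for t, p in zip(y_true, y_pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def bootstrap_ci(y_true, y_pred, metric, n=500, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a paired metric."""
    rng = random.Random(seed)
    idx = range(len(y_true))
    stats = sorted(
        metric([y_true[i] for i in s], [y_pred[i] for i in s])
        for s in ([rng.choice(idx) for _ in idx] for _ in range(n))
    )
    return stats[int(n * alpha / 2)], stats[int(n * (1 - alpha / 2)) - 1]

# Hypothetical predictions for eight cases.
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
lo, hi = bootstrap_ci(y_true, y_pred, mcc)
```

With only 74 test cases, as here, the wide bootstrap intervals around MCC are expected.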

Machine learning-based construction and validation of a radiomics model for predicting ISUP grading in prostate cancer: a multicenter radiomics study based on [68Ga]Ga-PSMA PET/CT.

Zhang H, Jiang X, Yang G, Tang Y, Qi L, Chen M, Hu S, Gao X, Zhang M, Chen S, Cai Y

pubmed · Jun 24, 2025
The International Society of Urological Pathology (ISUP) grading of prostate cancer (PCa) is a crucial factor in the management and treatment planning for PCa patients. An accurate and non-invasive assessment of the ISUP grading group could significantly improve biopsy decisions and treatment planning. The use of PSMA-PET/CT radiomics for predicting ISUP has not been widely studied. The aim of this study is to investigate the role of <sup>68</sup>Ga-PSMA PET/CT radiomics in predicting the ISUP grading of primary PCa. This study included 415 PCa patients who underwent <sup>68</sup>Ga-PSMA PET/CT scans before prostate biopsy or radical prostatectomy. Patients were from three centers: Xiangya Hospital, Central South University (252 cases), Qilu Hospital of Shandong University (External Validation 1, 108 cases), and Qingdao University Medical College (External Validation 2, 55 cases). Xiangya Hospital cases were split into training and testing groups (1:1 ratio), with the other centers serving as external validation groups. Feature selection was performed using Minimum Redundancy Maximum Relevance (mRMR) and Least Absolute Shrinkage and Selection Operator (LASSO) algorithms. Eight machine learning classifiers were trained and tested with ten-fold cross-validation. Sensitivity, specificity, and AUC were calculated for each model. Additionally, we combined the radiomic features with maximum Standardized Uptake Value (SUVmax) and prostate-specific antigen (PSA) to create prediction models and tested the corresponding performances. The best-performing model in the Xiangya Hospital training cohort achieved an AUC of 0.868 (sensitivity 72.7%, specificity 96.0%). Similar trends were seen in the testing cohort and external validation centers (AUCs: 0.860, 0.827, and 0.812). After incorporating PSA and SUVmax, a more robust model was developed, achieving an AUC of 0.892 (sensitivity 77.9%, specificity 96.0%) in the training group. 
This study established and validated a radiomics model based on <sup>68</sup>Ga-PSMA PET/CT, offering an accurate, non-invasive method for predicting ISUP grades in prostate cancer. A multicenter design with external validation ensured the model's robustness and broad applicability. This is the largest study to date on PSMA radiomics for predicting ISUP grades. Notably, integrating SUVmax and PSA metrics with radiomic features significantly improved prediction accuracy, providing new insights and tools for personalized diagnosis and treatment.
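The mRMR step of the feature-selection pipeline can be sketched as a greedy search trading off relevance to the label against redundancy with already-selected features (a simplified, correlation-based toy version; the study pairs mRMR with LASSO):

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    dx = math.sqrt(sum((a - mx) ** 2 for a in x))
    dy = math.sqrt(sum((b - my) ** 2 for b in y))
    return num / (dx * dy) if dx and dy else 0.0

def mrmr_select(features, labels, k):
    """Greedy mRMR: high |corr| with the label, low mean |corr| with
    already-selected features. `features`: dict of name -> value list."""
    relevance = {f: abs(pearson(v, labels)) for f, v in features.items()}
    selected = []
    while len(selected) < k:
        def score(f):
            if not selected:
                return relevance[f]
            redundancy = sum(abs(pearson(features[f], features[s]))
                             for s in selected) / len(selected)
            return relevance[f] - redundancy
        best = max((f for f in features if f not in selected), key=score)
        selected.append(best)
    return selected

# Hypothetical toy data: "signal" tracks the label, "noise" does not.
labels = [0, 1, 1, 0, 1, 0]
picked = mrmr_select(
    {"signal": [0, 1, 1, 0, 1, 0], "noise": [1, 1, 0, 0, 0, 1]}, labels, 1)
```

In practice the surviving features would then be passed to a LASSO-regularized model for the final selection, as done in this study.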

[Practical artificial intelligence for urology : Technical principles, current application and future implementation of AI in practice].

Rodler S, Hügelmann K, von Knobloch HC, Weiss ML, Buck L, Kohler J, Fabian A, Jarczyk J, Nuhn P

pubmed · Jun 24, 2025
Artificial intelligence (AI) is a disruptive technology that is currently finding widespread application after having long been confined to the domain of specialists. In urology in particular, new fields of application are continuously emerging, which are being studied both in preclinical basic research and in clinical applications. Potential applications include image recognition in the operating room, interpretation of images from radiology and pathology, automatic measurement of urinary stones, and radiotherapy. Certain medical devices, particularly in the field of AI-based predictive biomarkers, have already been incorporated into international guidelines. In addition, AI is playing an increasingly important role in administrative tasks and is expected to lead to enormous changes, especially in the outpatient sector. For urologists, it is becoming increasingly important to engage with this technology, to pursue appropriate training, and thereby to optimally implement AI in the treatment of patients and in the management of their practices or hospitals.

From Faster Frames to Flawless Focus: Deep Learning HASTE in Postoperative Single Sequence MRI.

Hosse C, Fehrenbach U, Pivetta F, Malinka T, Wagner M, Walter-Rittel T, Gebauer B, Kolck J, Geisel D

pubmed · Jun 24, 2025
This study evaluates the feasibility of a novel deep learning-accelerated half-Fourier single-shot turbo spin-echo sequence (HASTE-DL) compared to the conventional HASTE sequence (HASTE<sub>S</sub>) in postoperative single-sequence MRI for the detection of fluid collections following abdominal surgery. As small fluid collections are difficult to visualize using other techniques, HASTE-DL may offer particular advantages in this clinical context. A retrospective analysis was conducted on 76 patients (mean age 65±11.69 years) who underwent abdominal MRI for suspected septic foci following abdominal surgery. Imaging was performed using 3-T MRI scanners, and both sequences were analyzed in terms of image quality, contrast, sharpness, and artifact presence. Quantitative assessments focused on fluid collection detectability, while qualitative assessments evaluated visualization of critical structures. Inter-reader agreement was measured using Cohen's kappa coefficient, and statistical significance was determined with the Mann-Whitney U test. HASTE-DL achieved a 46% reduction in scan time compared to HASTE<sub>S</sub>, while significantly improving overall image quality (p<0.001), contrast (p<0.001), and sharpness (p<0.001). The inter-reader agreement for HASTE-DL was excellent (κ=0.960), with perfect agreement on overall image quality and fluid collection detection (κ=1.0). Fluid detectability and characterization scores were higher for HASTE-DL, and visualization of critical structures was significantly enhanced (p<0.001). No relevant artifacts were observed in either sequence. HASTE-DL offers superior image quality, improved visualization of critical structures, such as drainages, vessels, bile and pancreatic ducts, and reduced acquisition time, making it an effective alternative to the standard HASTE sequence, and a promising complementary tool in the postoperative imaging workflow.
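Inter-reader agreement in this study is quantified with Cohen's kappa, which can be computed directly from the two readers' ratings (toy example, not study data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same cases."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)  # chance agreement
    return (observed - expected) / (1 - expected) if expected != 1 else 1.0

# Hypothetical binary fluid-collection calls from two readers on six cases.
kappa = cohens_kappa([1, 1, 0, 1, 0, 0], [1, 1, 0, 0, 0, 1])
```

A kappa of 0.960, as reported here, means agreement far beyond what chance alone would produce.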

Multimodal Deep Learning Based on Ultrasound Images and Clinical Data for Better Ovarian Cancer Diagnosis.

Su C, Miao K, Zhang L, Yu X, Guo Z, Li D, Xu M, Zhang Q, Dong X

pubmed · Jun 24, 2025
This study aimed to develop and validate a multimodal deep learning model that leverages 2D grayscale ultrasound (US) images alongside readily available clinical data to improve diagnostic performance for ovarian cancer (OC). A retrospective analysis was conducted involving 1899 patients who underwent preoperative US examinations and subsequent surgeries for adnexal masses between 2019 and 2024. A multimodal deep learning model was constructed for OC diagnosis and extracting US morphological features from the images. The model's performance was evaluated using metrics such as receiver operating characteristic (ROC) curves, accuracy, and F1 score. The multimodal deep learning model exhibited superior performance compared to the image-only model, achieving areas under the curves (AUCs) of 0.9393 (95% CI 0.9139-0.9648) and 0.9317 (95% CI 0.9062-0.9573) in the internal and external test sets, respectively. The model significantly improved the AUCs for OC diagnosis by radiologists and enhanced inter-reader agreement. Regarding US morphological feature extraction, the model demonstrated robust performance, attaining accuracies of 86.34% and 85.62% in the internal and external test sets, respectively. Multimodal deep learning has the potential to enhance the diagnostic accuracy and consistency of radiologists in identifying OC. The model's effective feature extraction from ultrasound images underscores the capability of multimodal deep learning to automate the generation of structured ultrasound reports.
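The multimodal design, combining an image branch with clinical variables, can be caricatured as late fusion: a learned image embedding is concatenated with clinical features and passed through a final logistic layer. A hypothetical sketch (the weights, embedding size, and the age/CA-125 inputs are invented for illustration, not taken from the paper):

```python
import math

def fuse_predict(image_embedding, clinical, w_img, w_clin, bias):
    """Late fusion: one logistic layer over the concatenation of an image
    embedding and clinical variables -> malignancy probability."""
    z = bias
    z += sum(w * x for w, x in zip(w_img, image_embedding))
    z += sum(w * x for w, x in zip(w_clin, clinical))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical inputs: a 3-d image embedding plus scaled age and CA-125.
p = fuse_predict([0.4, -0.1, 0.7], [0.55, 1.2],
                 [0.9, 0.3, 0.5], [0.2, 0.8], -1.0)
```

In the actual model both the embedding and the fusion weights are learned end to end; the point of the sketch is only that clinical data enter as extra inputs to the final classifier.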