Page 68 of 100991 results

Development and validation of an AI-driven radiomics model using non-enhanced CT for automated severity grading in chronic pancreatitis.

Chen C, Zhou J, Mo S, Li J, Fang X, Liu F, Wang T, Wang L, Lu J, Shao C, Bian Y

PubMed paper · Jun 19 2025
To develop and validate the chronic pancreatitis CT severity model (CATS), an artificial intelligence (AI)-based tool leveraging automated 3D segmentation and radiomics analysis of non-enhanced CT scans for objective severity stratification in chronic pancreatitis (CP). This retrospective study encompassed patients with recurrent acute pancreatitis (RAP) and CP from June 2016 to May 2020. A 3D convolutional neural network segmented non-enhanced CT scans, extracting 1843 radiomic features to calculate the radiomics score (Rad-score). CATS was formulated using multivariable logistic regression and validated in a subsequent cohort from June 2020 to April 2023. Overall, 2054 patients with RAP and CP were included in the training (n = 927), validation (n = 616), and external test (n = 511) sets. CP grade I and II patients accounted for 300 (14.61%) and 1754 (85.39%), respectively. The Rad-score significantly correlated with the acinus-to-stroma ratio (p = 0.023; OR, -2.44). The CATS model demonstrated high discriminatory performance in differentiating CP severity grades, achieving areas under the curve (AUCs) of 0.96 (95% CI: 0.94-0.98) and 0.88 (95% CI: 0.81-0.90) in the validation and test cohorts, respectively. CATS-predicted grades correlated with exocrine insufficiency (all p < 0.05) and showed significant prognostic differences (all p < 0.05). CATS outperformed radiologists in detecting calcifications, identifying all minute calcifications missed by radiologists. CATS, developed using non-enhanced CT and AI, accurately predicts CP severity, reflects disease morphology, and forecasts short- to medium-term prognosis, offering a significant advancement in CP management.
Question: Existing CP severity assessments rely on semi-quantitative CT evaluations and multi-modality imaging, leading to inconsistency and inaccuracy in early diagnosis and prognosis prediction.
Findings: The AI-driven CATS model, using non-enhanced CT, achieved high accuracy in grading CP severity and correlated with histopathological fibrosis markers.
Clinical relevance: CATS provides a cost-effective, widely accessible tool for precise CP severity stratification, enabling early intervention, personalized management, and improved outcomes without contrast agents or invasive biopsies.
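The abstract describes collapsing radiomic features into a Rad-score via multivariable logistic regression. A minimal sketch of that construction, with entirely hypothetical feature values, weights, and intercept (the paper's fitted 1843-feature model is far larger and its coefficients are not public):

```python
import math

def rad_score(features, weights, intercept):
    # Linear predictor of a multivariable logistic regression:
    # intercept plus the weighted sum of radiomic features.
    return intercept + sum(w * f for w, f in zip(weights, features))

def severity_probability(score):
    # Logistic link mapping the Rad-score to a probability
    # (e.g. of grade II CP).
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical example with three selected radiomic features;
# weights and intercept are illustrative, not the CATS coefficients.
features = [0.8, -1.2, 0.5]
weights = [1.1, 0.7, -0.3]
score = rad_score(features, weights, intercept=-0.2)
prob = severity_probability(score)
```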

AGE-US: automated gestational age estimation based on fetal ultrasound images

César Díaz-Parga, Marta Nuñez-Garcia, Maria J. Carreira, Gabriel Bernardino, Nicolás Vila-Blanco

arXiv preprint · Jun 19 2025
Being born small carries significant health risks, including increased neonatal mortality and a higher likelihood of future cardiac disease. Accurate estimation of gestational age is critical for monitoring fetal growth, but traditional methods, such as estimation from the last menstrual period, are difficult to obtain in some situations. While ultrasound-based approaches offer greater reliability, they rely on manual measurements that introduce variability. This study presents an interpretable deep learning-based method for automated gestational age estimation, leveraging a novel segmentation architecture and distance maps to overcome dataset limitations and the scarcity of segmentation masks. Our approach achieves performance comparable to state-of-the-art models while reducing complexity, making it particularly suitable for resource-constrained settings with limited annotated data. Furthermore, our results demonstrate that distance maps are particularly well suited to estimating femur endpoints.
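The abstract credits distance maps for landmark estimation (e.g. femur endpoints). One simple way to derive such a map from a binary segmentation mask is a breadth-first search over pixels; this sketch uses 4-connectivity and only illustrates the general idea, not the paper's implementation:

```python
from collections import deque

def distance_map(mask):
    # BFS distance from every pixel to the nearest foreground pixel
    # (4-connectivity). Networks regressing on such smooth maps are
    # often easier to train than on hard binary masks.
    h, w = len(mask), len(mask[0])
    dist = [[None] * w for _ in range(h)]
    queue = deque()
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                dist[y][x] = 0
                queue.append((y, x))
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((ny, nx))
    return dist

# Toy 3x3 mask with a single foreground pixel in the centre.
toy = distance_map([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
```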

Multitask Deep Learning for Automated Segmentation and Prognostic Stratification of Endometrial Cancer via Biparametric MRI.

Yan R, Zhang X, Cao Q, Xu J, Chen Y, Qin S, Zhang S, Zhao W, Xing X, Yang W, Lang N

PubMed paper · Jun 19 2025
Endometrial cancer (EC) is a common gynecologic malignancy; accurate assessment of key prognostic factors is important for treatment planning. To develop a deep learning (DL) framework based on biparametric MRI for automated segmentation and multitask classification of EC key prognostic factors, including grade, stage, histological subtype, lymphovascular space invasion (LVSI), and deep myometrial invasion (DMI). Retrospective. A total of 325 patients with histologically confirmed EC were included: 211 training, 54 validation, and 60 test cases. T2-weighted imaging (T2WI, FSE/TSE) and diffusion-weighted imaging (DWI, SS-EPI) sequences at 1.5 and 3 T. The DL model comprised tumor segmentation and multitask classification. Manual delineation on T2WI and DWI served as the reference standard for segmentation. Separate models were trained using T2WI alone, DWI alone, and combined T2WI + DWI to classify dichotomized key prognostic factors. Performance was assessed in validation and test cohorts. For DMI, the combined model's performance was compared with visual assessment by four radiologists (with 1, 4, 7, and 20 years' experience), each of whom independently reviewed all cases. Segmentation was evaluated using the Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), 95th percentile Hausdorff distance (HD95), and average surface distance (ASD). Classification performance was assessed using the area under the receiver operating characteristic curve (AUC). Model AUCs were compared using DeLong's test; p < 0.05 was considered significant. In the test cohort, DSCs were 0.80 (T2WI) and 0.78 (DWI), and JSCs were 0.69 for both. HD95 and ASD were 7.02/1.71 mm (T2WI) versus 10.58/2.13 mm (DWI). The classification framework achieved AUCs of 0.78-0.94 (validation) and 0.74-0.94 (test). For DMI, the combined model performed comparably to the radiologists (p = 0.07-0.84).
The unified DL framework demonstrates strong EC segmentation and classification performance, with high accuracy across multiple tasks. Evidence Level: 3. Technical Efficacy: Stage 3.
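The segmentation metrics named in this abstract (DSC, JSC) reduce to simple overlap counts on binary masks. A minimal illustration on flattened toy masks (not the study's data):

```python
def dice_coefficient(a, b):
    # DSC = 2|A ∩ B| / (|A| + |B|) on flat binary masks.
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0

def jaccard_coefficient(a, b):
    # JSC = |A ∩ B| / |A ∪ B|; related to DSC by J = D / (2 - D).
    inter = sum(x and y for x, y in zip(a, b))
    union = sum(x or y for x, y in zip(a, b))
    return inter / union if union else 1.0

# Toy prediction vs. reference mask, flattened to 1D.
pred = [1, 1, 1, 0, 0, 1]
ref  = [1, 1, 0, 0, 1, 1]
```

Note the identity J = D / (2 - D): a DSC of 0.80 corresponds to a JSC of about 0.67, consistent with the paired values reported above.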

Artificial intelligence in imaging diagnosis of liver tumors: current status and future prospects.

Hori M, Suzuki Y, Sofue K, Sato J, Nishigaki D, Tomiyama M, Nakamoto A, Murakami T, Tomiyama N

PubMed paper · Jun 19 2025
Liver cancer remains a significant global health concern, ranking as the sixth most common malignancy and the third leading cause of cancer-related deaths worldwide. Medical imaging plays a vital role in managing liver tumors, particularly hepatocellular carcinoma (HCC) and metastatic lesions. However, the large volume and complexity of imaging data can make accurate and efficient interpretation challenging. Artificial intelligence (AI) is recognized as a promising tool to address these challenges. Therefore, this review aims to explore the recent advances in AI applications in liver tumor imaging, focusing on key areas such as image reconstruction, image quality enhancement, lesion detection, tumor characterization, segmentation, and radiomics. Among these, AI-based image reconstruction has already been widely integrated into clinical workflows, helping to enhance image quality while reducing radiation exposure. While the adoption of AI-assisted diagnostic tools in liver imaging has lagged behind other fields, such as chest imaging, recent developments are driving their increasing integration into clinical practice. In the future, AI is expected to play a central role in various aspects of liver cancer care, including comprehensive image analysis, treatment planning, response evaluation, and prognosis prediction. This review offers a comprehensive overview of the status and prospects of AI applications in liver tumor imaging.

A fusion-based deep-learning algorithm predicts PDAC metastasis based on primary tumour CT images: a multinational study.

Xue N, Sabroso-Lasa S, Merino X, Munzo-Beltran M, Schuurmans M, Olano M, Estudillo L, Ledesma-Carbayo MJ, Liu J, Fan R, Hermans JJ, van Eijck C, Malats N

PubMed paper · Jun 19 2025
Diagnosing the presence of metastasis in pancreatic cancer is pivotal for patient management and treatment, with contrast-enhanced CT scans (CECT) as the cornerstone of diagnostic evaluation. However, this diagnostic modality requires a multifaceted approach. To develop a convolutional neural network (CNN)-based model (PMPD, Pancreatic cancer Metastasis Prediction Deep-learning algorithm) to predict the presence of metastases based on CECT images of the primary tumour. CECT images in the portal venous phase of 335 patients with pancreatic ductal adenocarcinoma (PDAC) from the PanGenEU study and The First Affiliated Hospital of Zhengzhou University (ZZU) were randomly divided into training and internal validation sets by applying fivefold cross-validation. Two independent external validation datasets, 143 patients from the Radboud University Medical Center (RUMC) included in the PANCAIM study (RUMC-PANCAIM) and 183 patients from the PREOPANC trial of the Dutch Pancreatic Cancer Group (PREOPANC-DPCG), were used to evaluate the results. The area under the receiver operating characteristic curve (AUROC) for the internally tested model was 0.895 (0.853-0.937) and 0.779 (0.741-0.817) in the PanGenEU and ZZU sets, respectively. In the external validation sets, the mean AUROC was 0.806 (0.787-0.826) for RUMC-PANCAIM and 0.761 (0.717-0.804) for PREOPANC-DPCG. When stratified by metastasis site, the PMPD model achieved average AUROCs of 0.901-0.927 in the PanGenEU, 0.782-0.807 in the ZZU, and 0.761-0.820 in the PREOPANC-DPCG sets. A PMPD-derived Metastasis Risk Score (MRS) (HR: 2.77, 95% CI 1.99 to 3.86, p = 1.59e-09) outperformed the resectability status from the National Comprehensive Cancer Network guideline and the CA19-9 biomarker in predicting overall survival. The MRS also showed potential for predicting metastasis development (AUROC: 0.716 within 3 months; 0.645 within 6 months).
This study represents a pioneering utilisation of a high-performance deep-learning model to predict extrapancreatic organ metastasis in patients with PDAC.
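The training/internal-validation split described in this abstract uses fivefold cross-validation. A plain sketch of how such index partitions can be generated (illustrative only; the study's actual splitting code is not given in the abstract):

```python
def five_fold_splits(n_samples, n_folds=5):
    # Partition sample indices into interleaved folds; each fold serves
    # once as the internal validation set while the rest form the
    # training set.
    indices = list(range(n_samples))
    folds = [indices[i::n_folds] for i in range(n_folds)]
    for k in range(n_folds):
        val = set(folds[k])
        train = [i for i in indices if i not in val]
        yield train, sorted(val)

# 335 patients, as in the PanGenEU + ZZU development cohort.
splits = list(five_fold_splits(335))
```

In practice the split would be stratified by outcome and grouped by centre; this unstratified version only shows the mechanics.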

Concordance between single-slice abdominal computed tomography-based and bioelectrical impedance-based analysis of body composition in a prospective study.

Fehrenbach U, Hosse C, Wienbrandt W, Walter-Rittel T, Kolck J, Auer TA, Blüthner E, Tacke F, Beetz NL, Geisel D

PubMed paper · Jun 19 2025
Body composition analysis (BCA) is a recognized indicator of patient frailty. Apart from the established bioelectrical impedance analysis (BIA), computed tomography (CT)-derived BCA is being increasingly explored. The aim of this prospective study was to directly compare BCA obtained from BIA and CT. A total of 210 consecutive patients scheduled for CT, including a high proportion of cancer patients, were prospectively enrolled. Immediately prior to the CT scan, all patients underwent BIA. CT-based BCA was performed using a single-slice AI tool for automated detection and segmentation at the level of the third lumbar vertebra (L3). The BIA-based parameters body fat mass (BFM_BIA) and skeletal muscle mass (SMM_BIA), and the CT-based parameters subcutaneous and visceral adipose tissue area (SATA_CT and VATA_CT) and total abdominal muscle area (TAMA_CT), were determined. Indices were calculated by normalizing the BIA and CT parameters to the patient's weight (body fat percentage (BFP_BIA) and body fat index (BFI_CT)) or height (skeletal muscle index (SMI_BIA) and lumbar skeletal muscle index (LSMI_CT)). Parameters representing fat, BFM_BIA and SATA_CT + VATA_CT, and parameters representing muscle tissue, SMM_BIA and TAMA_CT, showed strong correlations in female (fat: r = 0.95; muscle: r = 0.72; p < 0.001) and male (fat: r = 0.91; muscle: r = 0.71; p < 0.001) patients. Linear regression analysis was statistically significant (fat: R² = 0.73 (female) and 0.74 (male); muscle: R² = 0.56 (female) and 0.56 (male); p < 0.001), showing that BFI_CT and LSMI_CT allowed prediction of BFP_BIA and SMI_BIA for both sexes. CT-based BCA strongly correlates with BIA results and yields quantitative results for BFP and SMI comparable to the existing gold standard.
Question: CT-based body composition analysis (BCA) is moving more and more into clinical focus, but validation against established methods is lacking.
Findings: Fully automated CT-based BCA correlates very strongly with guideline-accepted bioelectrical impedance analysis (BIA).
Clinical relevance: BCA is currently moving further into clinical focus to improve assessment of patient frailty and individualize therapies accordingly. Comparability with established BIA strengthens the value of CT-based BCA and supports its translation into clinical routine.
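The indices in this abstract normalise a fat or muscle measurement to body weight or height. The conventional formulas, shown with hypothetical patient numbers (the study's exact definitions may differ in detail):

```python
def body_fat_percentage(fat_mass_kg, weight_kg):
    # BFP: fat mass normalised to total body weight, in per cent.
    return 100.0 * fat_mass_kg / weight_kg

def skeletal_muscle_index(muscle_mass_kg, height_m):
    # SMI: muscle mass normalised to height squared (kg/m^2),
    # the usual sarcopenia-style index.
    return muscle_mass_kg / height_m ** 2

# Hypothetical patient: 20 kg fat mass, 30 kg muscle mass,
# 80 kg body weight, 1.75 m height.
bfp = body_fat_percentage(20.0, 80.0)
smi = skeletal_muscle_index(30.0, 1.75)
```

The CT-side indices (BFI, LSMI) follow the same pattern with L3 tissue areas in place of BIA masses.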

Classification of Multi-Parametric Body MRI Series Using Deep Learning

Boah Kim, Tejas Sudharshan Mathai, Kimberly Helm, Peter A. Pinto, Ronald M. Summers

arXiv preprint · Jun 18 2025
Multi-parametric magnetic resonance imaging (mpMRI) exams include various series types acquired with different imaging protocols. The DICOM headers of these series often contain incorrect information due to the sheer diversity of protocols and occasional technologist errors. To address this, we present a deep learning-based classification model that classifies 8 different body mpMRI series types so that radiologists can read exams efficiently. Using mpMRI data from various institutions, multiple deep learning classifiers (ResNet, EfficientNet, and DenseNet) are trained to classify the 8 MRI series types, and their performance is compared. The best-performing classifier is then identified, and its classification capability under different training data quantities is studied. The model is also evaluated on out-of-training-distribution datasets. Moreover, the model is trained using mpMRI exams obtained from different scanners under two training strategies, and its performance is tested. Experimental results show that the DenseNet-121 model achieves the highest F1-score and accuracy, 0.966 and 0.972, over the other classification models (p-value < 0.05). The model achieves greater than 0.95 accuracy when trained with more than 729 studies, and its performance improves as the quantity of training data grows. On the external DLDS and CPTAC-UCEC datasets, the model yields accuracies of 0.872 and 0.810, respectively. These results indicate that on both the internal and external datasets, the DenseNet-121 model attains high accuracy for the task of classifying 8 body MRI series types.
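The classifiers above are compared on F1-score and accuracy. A from-scratch illustration of those two metrics on toy series labels (the series names below are invented, not the paper's 8 classes):

```python
def accuracy(y_true, y_pred):
    # Fraction of series whose predicted type matches the reference label.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred, positive):
    # Per-class F1 from true-positive, false-positive, and
    # false-negative counts: F1 = 2TP / (2TP + FP + FN).
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Toy labels for two hypothetical series types.
y_true = ["t1w", "t2w", "t1w", "t2w"]
y_pred = ["t1w", "t2w", "t2w", "t2w"]
```

Averaging the per-class F1 over all 8 classes gives the macro F1 typically reported for multi-class series classification.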

Generalist medical foundation model improves prostate cancer segmentation from multimodal MRI images.

Zhang Y, Ma X, Li M, Huang K, Zhu J, Wang M, Wang X, Wu M, Heng PA

PubMed paper · Jun 18 2025
Prostate cancer (PCa) is one of the most common types of cancer, seriously affecting adult male health. Accurate and automated PCa segmentation is essential for radiologists to confirm the location of the cancer, evaluate its severity, and design appropriate treatments. This paper presents PCaSAM, a fully automated PCa segmentation model that feeds multi-modal MRI images into a foundation model to significantly improve performance. We collected multi-center datasets to conduct a comprehensive evaluation. The results showed that PCaSAM outperforms the generalist medical foundation model and other representative segmentation models, with average DSCs of 0.721 and 0.706 on the internal and external datasets, respectively. Furthermore, with the assistance of segmentation, PI-RADS scoring of PCa lesions improved significantly, increasing the average AUC by 8.3-8.9% on two external datasets. In addition, PCaSAM achieved superior efficiency, making it highly suitable for real-world deployment scenarios.

Deep Learning-Based Adrenal Gland Volumetry for the Prediction of Diabetes.

Ku EJ, Yoon SH, Park SS, Yoon JW, Kim JH

PubMed paper · Jun 18 2025
The long-term association between adrenal gland volume (AGV) and type 2 diabetes (T2D) remains unclear. We aimed to determine the association between deep learning-based AGV and current glycemic status and incident T2D. In this observational study, adults who underwent abdominopelvic computed tomography (CT) for health checkups (2011-2012) but had no adrenal nodules were included. AGV was measured from CT images using a three-dimensional nnU-Net deep learning algorithm. We assessed the association between AGV and T2D using cross-sectional and longitudinal designs. We used 500 CT scans (median age, 52.3 years; 253 men) for model development and a Multi-Atlas Labeling Beyond the Cranial Vault dataset for external testing. A clinical cohort included a total of 9708 adults (median age, 52.0 years; 5769 men). The deep learning model demonstrated a Dice coefficient of 0.71 ± 0.11 for adrenal segmentation and a mean volume difference of 0.6 ± 0.9 mL in the external dataset. Participants with T2D at baseline had a larger AGV than those without (7.3 cm³ vs. 6.7 cm³ in men and 6.3 cm³ vs. 5.5 cm³ in women, both P < 0.05). The optimal AGV cutoff values for predicting T2D were 7.2 cm³ in men and 5.5 cm³ in women. Over a median 7.0-year follow-up, T2D developed in 938 participants. Cumulative T2D risk was higher with high AGV than with low AGV (adjusted hazard ratio, 1.27; 95% confidence interval, 1.11 to 1.46). AGV, measured using a deep learning algorithm, is associated with current glycemic status and can significantly predict the development of T2D.
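Optimal diagnostic cutoffs such as the AGV thresholds above are often derived by maximising Youden's J over candidate thresholds; the abstract does not state which method was used, so this is only a hedged sketch of one common approach, with invented toy data:

```python
def youden_cutoff(values, labels):
    # Scan candidate thresholds and keep the one maximising
    # Youden's J = sensitivity + specificity - 1.
    positives = sum(labels)
    negatives = len(labels) - positives
    best_j, best_cut = float("-inf"), None
    for cut in sorted(set(values)):
        tp = sum(1 for v, l in zip(values, labels) if v >= cut and l)
        tn = sum(1 for v, l in zip(values, labels) if v < cut and not l)
        j = tp / positives + tn / negatives - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut

# Invented AGV values (cm^3) and diabetes labels, for illustration only.
cutoff = youden_cutoff([5.0, 6.0, 7.2, 8.0, 4.5], [0, 0, 1, 1, 0])
```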

RECISTSurv: Hybrid Multi-task Transformer for Hepatocellular Carcinoma Response and Survival Evaluation.

Jiao R, Liu Q, Zhang Y, Pu B, Xue B, Cheng Y, Yang K, Liu X, Qu J, Jin C, Zhang Y, Wang Y, Zhang YD

PubMed paper · Jun 18 2025
Transarterial chemoembolization (TACE) is a widely applied alternative treatment for patients with hepatocellular carcinoma who are not eligible for liver resection or transplantation. However, clinical outcomes after TACE are highly heterogeneous, and there remains an urgent need for effective and efficient strategies to accurately assess tumor response and predict long-term outcomes using longitudinal, multi-center datasets. To address this challenge, we introduce RECISTSurv, a novel response-driven Transformer model that integrates multi-task learning with a response-driven co-attention mechanism to simultaneously perform liver and tumor segmentation, predict tumor response to TACE, and estimate overall survival from longitudinal computed tomography (CT) imaging. The proposed response-driven co-attention layer models the interactions between pre-TACE and post-TACE features guided by the treatment-response embedding. This design enables the model to capture complex relationships between imaging features, treatment response, and survival outcomes, thereby enhancing both prediction accuracy and interpretability. In a multi-center validation study, RECISTSurv-predicted prognosis demonstrated superior precision to state-of-the-art methods, with C-indexes ranging from 0.595 to 0.780. Furthermore, when integrated with multi-modal data, RECISTSurv emerged as an independent prognostic factor in all three validation cohorts, with hazard ratios (HR) ranging from 1.693 to 20.7 (P = 0.001-0.042). Our results highlight the potential of RECISTSurv as a powerful tool for personalized treatment planning and outcome prediction in hepatocellular carcinoma patients undergoing TACE. The experimental code is publicly available at https://github.com/rushier/RECISTSurv.
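The C-indexes of 0.595-0.780 quoted above refer to the concordance index (Harrell's C). A small from-scratch version over (time, event, risk) triples, counting only pairs where the earlier time is an observed event and crediting risk ties with 0.5 (illustrative, not the paper's evaluation code):

```python
def concordance_index(times, events, risks):
    # Among comparable pairs (the earlier subject had an observed event),
    # count pairs where the higher predicted risk belongs to the subject
    # with the earlier event; risk ties count as 0.5.
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfectly ranked toy cohort: shorter survival, higher predicted risk.
c = concordance_index([1, 2, 3], [1, 1, 1], [3.0, 2.0, 1.0])
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which puts the reported 0.595-0.780 range in context.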
