
Advancing prenatal healthcare by explainable AI enhanced fetal ultrasound image segmentation using U-Net++ with attention mechanisms.

Singh R, Gupta S, Mohamed HG, Bharany S, Rehman AU, Ghadi YY, Hussen S

PubMed · Jun 4, 2025
Accurate automated segmentation of fetal ultrasound images is essential to advancing prenatal healthcare, enabling standardized evaluation of fetal development while reducing time-consuming manual processes prone to inter-observer variability. This research develops a segmentation framework based on U-Net++ with a ResNet backbone, incorporating attention components to enhance feature extraction in low-contrast, noisy ultrasound data. The model leverages the nested skip connections of U-Net++ and the residual learning of ResNet-34 to achieve state-of-the-art segmentation accuracy. Evaluated on a large fetal ultrasound image collection, the model achieved a Dice coefficient of 97.52%, an Intersection over Union (IoU) of 95.15%, and a Hausdorff distance of 3.91 mm. The pipeline integrates Grad-CAM++ to explain the model's decisions, enhancing clinical utility and trust. This explainability component lets medical professionals examine how the model functions, yielding transparent, verifiable segmentation outputs and better overall reliability. By highlighting the regions that drive predictions, the framework bridges the gap between AI automation and clinical interpretability. The research shows that deep learning combined with Explainable AI (XAI) can produce medical imaging solutions that are both highly accurate and interpretable, and the proposed system is presented as a sophisticated prenatal diagnostic instrument ready for integration into clinical workflows.
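The three reported metrics are standard and straightforward to reproduce. Below is a minimal sketch (not the authors' code) that computes Dice, IoU, and Hausdorff distance on binary masks with NumPy and SciPy; the toy masks and the 0.5 mm pixel spacing are illustrative assumptions.

```python
# Minimal sketch of the three reported segmentation metrics on binary masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient = 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union = |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    return inter / np.logical_or(pred, gt).sum()

def hausdorff_mm(pred: np.ndarray, gt: np.ndarray, spacing_mm: float = 1.0) -> float:
    """Symmetric Hausdorff distance over foreground point sets, in mm
    (boundary extraction omitted for brevity)."""
    p = np.argwhere(pred) * spacing_mm
    g = np.argwhere(gt) * spacing_mm
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

# Toy 2D masks standing in for a fetal-structure prediction and its reference.
pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
gt   = np.zeros((64, 64), bool); gt[22:42, 18:38] = True
print(f"Dice={dice(pred, gt):.3f}  IoU={iou(pred, gt):.3f}  "
      f"HD={hausdorff_mm(pred, gt, 0.5):.2f} mm")
```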

Multimodal data integration for biologically-relevant artificial intelligence to guide adjuvant chemotherapy in stage II colorectal cancer.

Xie C, Ning Z, Guo T, Yao L, Chen X, Huang W, Li S, Chen J, Zhao K, Bian X, Li Z, Huang Y, Liang C, Zhang Q, Liu Z

PubMed · Jun 4, 2025
Adjuvant chemotherapy provides a limited survival benefit (<5%) for patients with stage II colorectal cancer (CRC) and is suggested for high-risk patients. Given the heterogeneity of stage II CRC, we aimed to develop a clinically explainable artificial intelligence (AI)-powered analyser to identify radiological phenotypes that would benefit from chemotherapy. Multimodal data from patients with CRC across six cohorts were collected, including 405 patients from the Guangdong Provincial People's Hospital for model development and 153 patients from the Yunnan Provincial Cancer Centre for validation. RNA sequencing data were used to identify the differentially expressed genes in the two radiological clusters. Histopathological patterns were evaluated to bridge the gap between the imaging and genetic information. Finally, we examined the discovered morphological patterns in mouse models to observe the corresponding imaging features. The survival benefit of chemotherapy varied significantly among the AI-powered radiological clusters [interaction hazard ratio (iHR) = 5.35 (95% CI: 1.98, 14.41), adjusted interaction P = 0.012]. Distinct biological pathways related to immune and stromal cell abundance were observed between the clusters. The observation-only (OO)-preferable cluster exhibited more necrosis, haemorrhage, and tortuous vessels, whereas the adjuvant chemotherapy (AC)-preferable cluster exhibited vessels with greater pericyte coverage, allowing richer infiltration of B, CD4+ T, and CD8+ T cells into the core tumoural areas. Further experiments confirmed that changes in vessel morphology led to alterations in the predictive imaging features. The developed explainable AI-powered analyser effectively identified patients with stage II CRC who had improved overall survival after receiving adjuvant chemotherapy, thereby contributing to the advancement of precision oncology. This work was funded by the National Science Fund of China (81925023, 82302299, and U22A2034), the Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (2022B1212010011), and the High-level Hospital Construction Project (DFJHBF202105 and YKY-KF202204).
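For readers unfamiliar with the interaction hazard ratio (iHR) reported above, the sketch below shows one conventional way to estimate it: a Cox model with a treatment-by-cluster interaction term, fitted with the lifelines library. The column names and simulated data are hypothetical, and the paper's actual covariate adjustment is not reproduced here.

```python
# Hedged sketch: a treatment-by-cluster interaction hazard ratio via Cox
# regression. exp(coef) of the interaction term is the iHR; its p-value
# tests whether chemotherapy benefit differs between the two clusters.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "chemo":   rng.integers(0, 2, n),   # 1 = received adjuvant chemotherapy
    "cluster": rng.integers(0, 2, n),   # AI-derived radiological cluster
    "time":    rng.exponential(36, n),  # months of follow-up (simulated)
    "event":   rng.integers(0, 2, n),   # 1 = event observed
})
df["chemo_x_cluster"] = df["chemo"] * df["cluster"]  # interaction term

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
print(cph.summary.loc["chemo_x_cluster", ["exp(coef)", "p"]])
```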

Predicting clinical outcomes using 18F-FDG PET/CT-based radiomic features and machine learning algorithms in patients with esophageal cancer.

Mutevelizade G, Aydin N, Duran Can O, Teke O, Suner AF, Erdugan M, Sayit E

PubMed · Jun 4, 2025
This study evaluated the relationship between 18F-fluorodeoxyglucose PET/computed tomography (18F-FDG PET/CT) radiomic features and clinical parameters, including tumor localization, histopathological subtype, lymph node metastasis, mortality, and treatment response, in esophageal cancer (EC) patients undergoing chemoradiotherapy, as well as the predictive performance of various machine learning (ML) models. In this retrospective study, 39 patients with EC who underwent pretreatment 18F-FDG PET/CT and received concurrent chemoradiotherapy were analyzed. Texture features were extracted using LIFEx software. Logistic regression, Naive Bayes, random forest, extreme gradient boosting (XGB), and support vector machine classifiers were applied to predict clinical outcomes. Cox regression and Kaplan-Meier analyses were used to evaluate overall survival (OS), and the accuracy of the ML algorithms was quantified using the area under the receiver operating characteristic curve. Radiomic features showed significant associations with several clinical parameters. Lymph node metastasis, tumor localization, and treatment response emerged as predictors of OS. Among the ML models, XGB demonstrated the most consistent and highest predictive performance across clinical outcomes. Radiomic features extracted from 18F-FDG PET/CT, when combined with ML approaches, may aid in predicting treatment response and clinical outcomes in EC. Radiomic features demonstrated value in assessing tumor heterogeneity; however, clinical parameters retained stronger prognostic value for OS.
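A hedged sketch of the modelling step described in the abstract: benchmarking the five named classifiers on a radiomics feature matrix with cross-validated ROC AUC. The synthetic feature matrix stands in for the LIFEx-extracted texture features, and the snippet assumes scikit-learn and xgboost are installed.

```python
# Benchmark the five classifiers named in the abstract with 5-fold ROC AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier  # requires the xgboost package

# 39 patients, as in the study; features are synthetic stand-ins.
X, y = make_classification(n_samples=39, n_features=30, n_informative=8,
                           random_state=0)
models = {
    "LogReg":       LogisticRegression(max_iter=1000),
    "NaiveBayes":   GaussianNB(),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "XGB":          XGBClassifier(eval_metric="logloss"),
    "SVM":          SVC(probability=True),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:12s} AUC = {auc.mean():.3f} ± {auc.std():.3f}")
```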

Rad-Path Correlation of Deep Learning Models for Prostate Cancer Detection on MRI

Verde, A. S. C., de Almeida, J. G., Mendes, F., Pereira, M., Lopes, R., Brito, M. J., Urbano, M., Correia, P. S., Gaivao, A. M., Firpo-Betancourt, A., Fonseca, J., Matos, C., Regge, D., Marias, K., Tsiknakis, M., ProCAncer-I Consortium, Conceicao, R. C., Papanikolaou, N.

medRxiv preprint · Jun 4, 2025
While Deep Learning (DL) models trained on Magnetic Resonance Imaging (MRI) have shown promise for prostate cancer detection, their lack of direct biological validation often undermines radiologists' trust and hinders clinical adoption. Radiologic-histopathologic (rad-path) correlation has the potential to validate MRI-based lesion detection using digital histopathology. This study uses automated and manually annotated digital histopathology slides as the standard of reference to evaluate the spatial extent of lesion annotations derived from both radiologist interpretations and DL models previously trained on prostate bi-parametric MRI (bp-MRI). 117 histopathology slides were used as reference. Prospective patients with clinically significant prostate cancer underwent a bp-MRI examination before robotic radical prostatectomy, and each prostate specimen was sliced using a 3D-printed patient-specific mold to enable direct comparison between pre-operative imaging and histopathology slides. The histopathology slides and their corresponding T2-weighted MRI images were co-registered. We trained DL models for cancer detection on large retrospective datasets of T2-weighted MRI only, bp-MRI, and histopathology images, and performed inference in a prospective patient cohort. We evaluated the spatial overlap between detected lesions, and between detected lesions and the histopathological and radiological ground truth, using the Dice similarity coefficient (DSC). The DL models trained on digital histopathology tiles and MRI images demonstrated promising lesion-detection capabilities. A low overlap was observed between the lesion detection masks generated by the histopathology and bp-MRI models (DSC = 0.10); however, the overlap between radiologist annotations and the histopathology ground truth was comparable (DSC = 0.08). A rad-path correlation pipeline was thus established in a prospective cohort of patients with prostate cancer undergoing surgery. The correlation between the rad-path DL models was low but comparable to the overlap between radiologist annotations and ground truth. While DL models show promise in prostate cancer detection, challenges remain in integrating MRI-based predictions with histopathological findings.

Impact of AI-Generated ADC Maps on Computer-Aided Diagnosis of Prostate Cancer: A Feasibility Study.

Ozyoruk KB, Harmon SA, Yilmaz EC, Gelikman DG, Bagci U, Simon BD, Merino MJ, Lis R, Gurram S, Wood BJ, Pinto PA, Choyke PL, Turkbey B

PubMed · Jun 4, 2025
To evaluate the impact of AI-generated apparent diffusion coefficient (ADC) maps on the diagnostic performance of a 3D U-Net AI model for prostate cancer (PCa) detection and segmentation at biparametric MRI (bpMRI). The study population was retrospectively collected and consisted of 178 patients, including 119 cases and 59 controls. Cases had a mean age of 62.1 years (SD = 7.4) and a median prostate-specific antigen (PSA) level of 7.27 ng/mL (IQR = 5.43-10.55), while controls had a mean age of 63.4 years (SD = 7.5) and a median PSA of 6.66 ng/mL (IQR = 4.29-11.30). All participants underwent 3.0 T T2-weighted turbo spin-echo MRI and high b-value echo-planar diffusion-weighted imaging (bpMRI), followed by either prostate biopsy or radical prostatectomy between January 2013 and December 2022. We compared the lesion detection and segmentation performance of a pretrained 3D U-Net AI model using conventional ADC maps versus AI-generated ADC maps. The Wilcoxon signed-rank test was used for statistical comparison, with 95% confidence intervals (CI) estimated via bootstrapping; a p-value <0.05 was considered significant. AI-ADC maps increased the accuracy of the lesion detection model from 0.70 to 0.78 (p<0.01). Specificity increased from 0.22 to 0.47 (p<0.001) while maintaining high sensitivity: 0.94 with conventional ADC maps and 0.93 with AI-ADC maps (p>0.05). The mean Dice similarity coefficient (DSC) was 0.276 with conventional ADC maps and 0.225 with AI-ADC maps (p<0.05). In the subset of patients with ISUP ≥ 2, conventional ADC maps demonstrated a mean DSC of 0.282 compared with 0.230 for AI-ADC maps (p<0.05). AI-generated ADC maps can improve the performance of computer-aided diagnosis of prostate cancer.
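The statistical comparison described (Wilcoxon signed-rank test with bootstrapped 95% CIs) can be sketched in a few lines. The simulated per-lesion DSC arrays below are placeholders seeded with the reported means, not the study data.

```python
# Paired Wilcoxon signed-rank test plus a bootstrap CI for the mean difference.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
dsc_conventional = rng.normal(0.276, 0.10, 119).clip(0, 1)  # simulated DSCs
dsc_ai_adc       = rng.normal(0.225, 0.10, 119).clip(0, 1)

stat, p = wilcoxon(dsc_conventional, dsc_ai_adc)
diffs = dsc_conventional - dsc_ai_adc
boot = [rng.choice(diffs, diffs.size, replace=True).mean() for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Wilcoxon p={p:.4f}; mean DSC difference 95% CI [{lo:.3f}, {hi:.3f}]")
```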

Interpretable Machine Learning based Detection of Coeliac Disease

Jaeckle, F., Bryant, R., Denholm, J., Romero Diaz, J., Schreiber, B., Shenoy, V., Ekundayomi, D., Evans, S., Arends, M., Soilleux, E.

medRxiv preprint · Jun 4, 2025
Background: Coeliac disease, an autoimmune disorder affecting approximately 1% of the global population, is typically diagnosed on a duodenal biopsy. However, inter-pathologist agreement on coeliac disease diagnosis is only around 80%. Existing machine learning solutions designed to improve coeliac disease diagnosis often lack interpretability, which is essential for building trust and enabling widespread clinical adoption.
Objective: To develop an interpretable AI model capable of segmenting key histological structures in duodenal biopsies, generating explainable segmentation masks, estimating intraepithelial lymphocyte (IEL)-to-enterocyte and villus-to-crypt ratios, and diagnosing coeliac disease.
Design: Semantic segmentation models were trained to identify villi, crypts, IELs, and enterocytes using 49 annotated 2048x2048 patches at 40x magnification. IEL-to-enterocyte and villus-to-crypt ratios were calculated from segmentation masks, and a logistic regression model was trained on 172 images to diagnose coeliac disease based on these ratios. Evaluation was performed on an independent test set of 613 duodenal biopsy scans from a separate NHS Trust.
Results: The villus-crypt segmentation model achieved a mean PR AUC of 80.5%, while the IEL-enterocyte model reached a PR AUC of 82%. The diagnostic model classified WSIs with 96% accuracy, 86% positive predictive value, and 98% negative predictive value on the independent test set.
Conclusions: Our interpretable AI models accurately segmented key histological structures and diagnosed coeliac disease in unseen WSIs, demonstrating strong generalization performance. These models provide pathologists with reliable IEL-to-enterocyte and villus-to-crypt ratio estimates, enhancing diagnostic accuracy. Interpretable AI solutions like ours are essential for fostering trust among healthcare professionals and patients, complementing existing black-box methodologies.
What is already known on this topic: Pathologist concordance in diagnosing coeliac disease from duodenal biopsies is consistently reported to be below 80%, highlighting diagnostic variability and the need for improved methods. Several recent studies have leveraged artificial intelligence (AI) to enhance coeliac disease diagnosis. However, most of these models operate as "black boxes," offering limited interpretability and transparency. This lack of explainability prevents widespread adoption by healthcare professionals and reduces patient trust.
What this study adds: This study presents an interpretable semantic segmentation algorithm capable of detecting the four key histological structures essential for diagnosing coeliac disease: crypts, villi, intraepithelial lymphocytes (IELs), and enterocytes. The model accurately estimates the IEL-to-enterocyte ratio and the villus-to-crypt ratio, the latter being an indicator of villous atrophy and crypt hyperplasia, thereby providing objective, reproducible metrics for diagnosis. The segmentation outputs allow for transparent, explainable decision-making, supporting pathologists in coeliac disease diagnosis with improved accuracy and confidence. The model also automates estimation of the IEL-to-enterocyte ratio, a labour-intensive task currently performed manually by pathologists in limited biopsy regions; by minimising diagnostic variability and alleviating time constraints for pathologists, it provides an efficient and practical solution to streamline the diagnostic workflow. Tested on an independent dataset from a previously unseen source, the model demonstrates explainability and generalizability, enhancing trust and encouraging adoption in routine clinical practice. Furthermore, this approach could set a new standard for AI-assisted duodenal biopsy evaluation, paving the way for interpretable AI tools in pathology that address the critical challenges of limited pathologist availability and diagnostic inconsistencies.
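The interpretable diagnostic step lends itself to a compact sketch: two ratios derived from segmentation masks feed a logistic regression classifier. The simulated counts and the 0.25 threshold below are illustrative assumptions; in the paper, the ratios come from the semantic segmentation masks.

```python
# Two hand-crafted ratios -> logistic regression, mirroring the described
# interpretable pipeline. Data are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 172  # training images, as in the abstract
iel_to_enterocyte = np.where(rng.random(n) < 0.5,
                             rng.normal(0.1, 0.03, n),   # normal mucosa
                             rng.normal(0.4, 0.10, n))   # lymphocytosis
villus_to_crypt   = np.where(iel_to_enterocyte < 0.25,
                             rng.normal(3.0, 0.5, n),    # preserved villi
                             rng.normal(0.8, 0.3, n))    # villous atrophy
X = np.column_stack([iel_to_enterocyte, villus_to_crypt])
y = (iel_to_enterocyte > 0.25).astype(int)  # 1 = coeliac disease (toy label)

clf = LogisticRegression().fit(X, y)
# The two coefficients are directly inspectable, which is what makes this
# pipeline interpretable compared with an end-to-end black-box classifier.
print("coefficients:", clf.coef_[0], "intercept:", clf.intercept_[0])
```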

Enhanced risk stratification for stage II colorectal cancer using deep learning-based CT classifier and pathological markers to optimize adjuvant therapy decision.

Huang YQ, Chen XB, Cui YF, Yang F, Huang SX, Li ZH, Ying YJ, Li SY, Li MH, Gao P, Wu ZQ, Wen G, Wang ZS, Wang HX, Hong MP, Diao WJ, Chen XY, Hou KQ, Zhang R, Hou J, Fang Z, Wang ZN, Mao Y, Wee L, Liu ZY

PubMed · Jun 4, 2025
Current risk stratification for stage II colorectal cancer (CRC) has limited accuracy in identifying patients who would benefit from adjuvant chemotherapy, leading to potential over- or under-treatment. We aimed to develop a more precise risk stratification system by integrating artificial intelligence-based imaging analysis with pathological markers. We analyzed 2,992 stage II CRC patients from 12 centers. A deep learning classifier (Swin Transformer Assisted Risk-stratification for CRC, STAR-CRC) was developed using multi-planar CT images from 1,587 patients (training:internal validation = 7:3) and validated in 1,405 patients from 8 independent centers; it stratified patients into low-, uncertain-, and high-risk groups. To further refine the uncertain-risk group, a composite score based on pathological markers (pT4 stage, number of lymph nodes sampled, perineural invasion, and lymphovascular invasion) was applied, forming the intelligent risk integration system for stage II CRC (IRIS-CRC). IRIS-CRC was compared against the guideline-based risk stratification system (GRSS-CRC) for prediction performance in the validation dataset. IRIS-CRC stratified patients into four prognostic groups with distinct 3-year disease-free survival rates (≥95%, 95-75%, 75-55%, ≤55%). Upon external validation, compared with GRSS-CRC, IRIS-CRC downstaged 27.1% of high-risk patients into the Favorable group, while upstaging 6.5% of low-risk patients into the Very Poor prognosis group, who might require more aggressive treatment. In the GRSS-CRC intermediate-risk group of the external validation dataset, IRIS-CRC reclassified 40.1% as Favorable prognosis and 7.0% as Very Poor prognosis. IRIS-CRC's performance remained generalizable in both chemotherapy and non-chemotherapy cohorts. IRIS-CRC offers a more precise and personalized risk assessment than current guideline-based risk factors, potentially sparing low-risk patients from unnecessary adjuvant chemotherapy while identifying high-risk individuals for more aggressive treatment. This novel approach holds promise for improving clinical decision-making and outcomes in stage II CRC.
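The two-stage logic described for IRIS-CRC can be illustrated schematically, as in the sketch below: a deep-learning risk score first assigns low-, uncertain-, or high-risk status, and only the uncertain group is refined with the pathological composite. The cut-offs, group names, weights, and node-count rule are all invented for illustration; the published parameters are not reproduced here.

```python
# Schematic two-stage grouping in the spirit of IRIS-CRC. All thresholds
# and scoring rules are illustrative assumptions, not the published ones.
def iris_crc_group(dl_score: float, pt4: bool, nodes_sampled: int,
                   perineural: bool, lymphovascular: bool) -> str:
    # Stage 1: STAR-CRC-style imaging risk score (0-1, higher = worse).
    if dl_score < 0.3:
        return "Favorable"
    if dl_score > 0.7:
        return "Very Poor"
    # Stage 2: pathological composite refines the uncertain-risk group.
    path_score = sum([pt4, nodes_sampled < 12, perineural, lymphovascular])
    return "Poor" if path_score >= 2 else "Intermediate"

print(iris_crc_group(0.5, pt4=False, nodes_sampled=15,
                     perineural=True, lymphovascular=False))  # -> Intermediate
```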

Deep Learning-Based Opportunistic CT Osteoporosis Screening and Establishment of Normative Values

Westerhoff, M., Gyftopoulos, S., Dane, B., Vega, E., Murdock, D., Lindow, N., Herter, F., Bousabarah, K., Recht, M. P., Bredella, M. A.

medRxiv preprint · Jun 3, 2025
Background: Osteoporosis is underdiagnosed and undertreated, prompting the exploration of opportunistic screening using CT and artificial intelligence (AI).
Purpose: To develop a reproducible deep learning-based convolutional neural network to automatically place a 3D region of interest (ROI) in trabecular bone, develop a correction method to normalize attenuation across different CT protocols and scanner models, and establish thresholds for osteoporosis in a large diverse population.
Methods: A deep learning-based method was developed to automatically quantify trabecular attenuation using a 3D ROI of the thoracic and lumbar spine on chest, abdomen, or spine CTs, adjusted for different tube voltages and scanner models. Normative values and thresholds for osteoporosis of spinal trabecular attenuation were established across a diverse population, stratified by age, sex, race, and ethnicity, using the prevalence of osteoporosis reported by the WHO.
Results: 538,946 CT examinations from 283,499 patients (mean age 65 ± 15 years; 51.2% women and 55.5% White), performed on 50 scanner models using six different tube voltages, were analyzed. Hounsfield units at 80 kVp versus 120 kVp differed by 23%, whereas different scanner models produced differences of less than 10%. Automated ROI placement in 1496 vertebrae was validated by manual radiologist review, demonstrating >99% agreement. Mean trabecular attenuation was higher in young women (<50 years) than young men (p<.001) and decreased with age, with a steeper decline in postmenopausal women. In patients older than 50 years, trabecular attenuation was higher in males than females (p<.001). Trabecular attenuation was highest in Blacks, followed by Asians, and lowest in Whites (p<.001). The threshold at L1 for diagnosing osteoporosis was 80 HU.
Conclusion: Deep learning-based automated opportunistic osteoporosis screening can identify patients with low bone mineral density who undergo CT scans for clinical purposes on different scanners and protocols.
Key Results: (1) In a study of 538,946 CT examinations performed in 283,499 patients using different scanner models and imaging protocols, an automated deep learning-based convolutional neural network accurately placed a three-dimensional region of interest within thoracic and lumbar vertebrae to measure trabecular attenuation. (2) Tube voltage had a larger influence on attenuation values (23%) than scanner model (<10%). (3) A threshold of 80 HU at L1 was identified for diagnosing osteoporosis using an automated three-dimensional region of interest.
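The decision step of such opportunistic screening can be sketched as follows: normalise trabecular attenuation measured at a non-reference tube voltage to the 120 kVp scale, then apply the reported 80 HU threshold at L1. The 23% offset at 80 kVp comes from the abstract; the multiplicative correction form and any other voltage factors are assumptions for illustration.

```python
# Sketch only: correct L1 trabecular attenuation to a 120 kVp reference and
# apply the 80 HU osteoporosis threshold. The 80 kVp factor reflects the
# reported 23% difference; other voltages would need their own calibration.
CORRECTION_TO_120KVP = {80: 1 / 1.23, 120: 1.0}  # assumed multiplicative form

def osteoporosis_flag(l1_mean_hu: float, kvp: int,
                      threshold_hu: float = 80.0) -> bool:
    """True if voltage-corrected L1 trabecular attenuation is below threshold."""
    corrected = l1_mean_hu * CORRECTION_TO_120KVP[kvp]
    return corrected < threshold_hu

print(osteoporosis_flag(95.0, kvp=80))  # 95 HU at 80 kVp -> ~77 HU, flagged
```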

Developing a CT radiomics-based model for assessing split renal function using machine learning.

Zhan Y, Zheng J, Chen X, Chen Y, Fang C, Lai C, Dai M, Wu Z, Wu H, Yu T, Huang J, Yu H

PubMed · Jun 3, 2025
This study aims to investigate whether non-contrast computed tomography radiomics can effectively reflect split renal function and to develop a radiomics model for its assessment. This retrospective study included kidneys from the study center, split into training (70%) and testing (30%) sets. Renal dynamic imaging was used as the reference standard for measuring split renal function. Based on chronic kidney disease staging, kidneys were categorized into three groups according to glomerular filtration rate: > 45 ml/min/1.73 m², 30-45 ml/min/1.73 m², and < 30 ml/min/1.73 m². Features were selected based on importance rankings from a tree model, and a random forest radiomics model was built. A total of 543 kidneys were included, with 381 in the training set and 162 in the testing set. In the training set, the 16 features most important for distinguishing between the groups were included in the final random forest model. The model demonstrated good discriminatory ability in the testing set: the AUCs for the > 45 ml/min/1.73 m², 30-45 ml/min/1.73 m², and < 30 ml/min/1.73 m² categories were 0.859 (95% CI 0.804-0.910), 0.679 (95% CI 0.589-0.760), and 0.901 (95% CI 0.848-0.946), respectively. The calibration curves for each group closely align with the diagonal, with Hosmer-Lemeshow test P-values of 0.124, 0.241, and 0.199, respectively (all P > 0.05). Decision curve analysis confirmed the radiomics model's clinical utility, demonstrating significantly higher net benefit than both treat-all and treat-none strategies at clinically relevant probability thresholds: 1-69% and 71-75% for the > 45 ml/min/1.73 m² group, 15-50% for the 30-45 ml/min/1.73 m² group, and 0-99% for the < 30 ml/min/1.73 m² group. Non-contrast CT radiomics can effectively reflect split renal function, and the model developed on it can accurately assess split renal function, holding great potential for clinical application.
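A hedged sketch of the modelling pipeline described: rank features with a preliminary tree model, keep the 16 most important, and train a random forest scored with one-vs-rest AUC for the three GFR groups. The synthetic feature matrix replaces the actual radiomic features.

```python
# Importance-based selection of 16 features, then a 3-class random forest
# evaluated with one-vs-rest AUC, mirroring the described pipeline.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=543, n_features=100, n_informative=20,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Rank features with a preliminary tree model and keep the top 16.
ranker = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
top16 = np.argsort(ranker.feature_importances_)[::-1][:16]

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_tr[:, top16], y_tr)
proba = model.predict_proba(X_te[:, top16])
for cls in range(3):  # one AUC per GFR group, one-vs-rest
    auc = roc_auc_score((y_te == cls).astype(int), proba[:, cls])
    print(f"class {cls} one-vs-rest AUC = {auc:.3f}")
```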

Comparisons of AI automated segmentation techniques to manual segmentation techniques of the maxilla and maxillary sinus for CT or CBCT scans-A Systematic review.

Park JH, Hamimi M, Choi JJE, Figueredo CMS, Cameron MA

PubMed · Jun 3, 2025
Accurate segmentation of the maxillary sinus from medical images is essential for diagnostic purposes and surgical planning. Manual segmentation of the maxillary sinus, while the gold standard, is time consuming and requires adequate training. To overcome this problem, AI-enabled automatic segmentation software has been developed. The purpose of this review is to systematically analyse the current literature on the accuracy and efficiency of automatic segmentation of the maxillary sinus compared with manual segmentation. A systematic analysis of the existing literature was performed following PRISMA guidelines. Data were obtained from the PubMed, Medline, Embase, and Google Scholar databases, and inclusion and exclusion eligibility criteria were used to shortlist relevant studies. The sample size, anatomical structures segmented, experience of operators, type of manual segmentation software, type of automatic segmentation software, statistical comparative method, and segmentation time were analysed. This systematic review presents 10 studies that compared the accuracy and efficiency of automatic segmentation of the maxillary sinus with manual segmentation. All included studies were found to have a low risk of bias. Sample sizes ranged from 3 to 144, a variety of operators manually segmented the CBCT scans, and manual segmentation was performed primarily in 3D Slicer and Mimics software. Comparisons were made primarily against U-Net-based architectures, with the Dice coefficient as the primary metric. This systematic review showed that automatic segmentation was consistently faster than manual segmentation and over 90% accurate when compared with the gold standard of manual segmentation.