Effects of patient and imaging factors on small bowel motility scores derived from deep learning-based segmentation of cine MRI.

Heo S, Yun J, Kim DW, Park SY, Choi SH, Kim K, Jung KW, Myung SJ, Park SH

PubMed · Jun 17, 2025
Small bowel motility can be quantified using cine MRI, but the influence of patient and imaging factors on motility scores remains unclear. This study evaluated whether patient and imaging factors affect motility scores derived from deep learning-based segmentation of cine MRI. Fifty-four patients (mean age 53.6 ± 16.4 years; 34 women) with chronic constipation or suspected colonic pseudo-obstruction who underwent cine MRI covering the entire small bowel between 2022 and 2023 were included. A deep learning algorithm was developed to segment small bowel regions, and motility was quantified with an optical flow-based algorithm, producing a motility score for each slice. Associations of motility scores with patient factors (age, sex, body mass index, symptoms, and bowel distension) and MRI slice-related factors (anatomical location, bowel area, and anteroposterior position) were analyzed using linear mixed models. Deep learning-based small bowel segmentation achieved a mean volumetric Dice similarity coefficient of 75.4 ± 18.9%, with a manual correction time of 26.5 ± 13.5 s. Median motility scores per patient ranged from 26.4 to 64.4, with an interquartile range of 3.1-26.6. Multivariable analysis revealed that MRI slice-related factors, including anatomical location with mixed ileum and jejunum (β = -4.9; p = 0.01, compared with ileum dominant), bowel area (first-order β = -0.2, p < 0.001; second-order β = 5.7 × 10⁻⁴, p < 0.001), and anteroposterior position (first-order β = -51.5, p < 0.001; second-order β = 28.8, p = 0.004), were significantly associated with motility scores. Patient factors showed no association with motility scores. Small bowel motility scores were significantly associated with MRI slice-related factors; global motility estimates that do not adjust for these factors may therefore be unreliable.
Question: Global small bowel motility can be quantified from cine MRI; however, the confounding factors affecting motility scores remain unclear.
Findings: Motility scores were significantly influenced by MRI slice-related factors, including anatomical location, bowel area, and anteroposterior position.
Clinical relevance: Adjusting for slice-related factors is essential for accurate interpretation of small bowel motility scores on cine MRI.
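The scoring pipeline is compact enough to sketch: dense optical flow is computed between consecutive cine frames, and its magnitude is aggregated within the deep-learning bowel mask to yield one score per slice. The snippet below is a minimal illustration using OpenCV's Farneback flow; the flow parameters and the mean-magnitude aggregation are assumptions, not the authors' implementation.

```python
# Sketch of a per-slice motility score: mean optical-flow magnitude inside a
# segmented small-bowel mask, averaged over consecutive cine frame pairs.
# Not the authors' code; parameters and aggregation are assumed.
import cv2
import numpy as np

def motility_score(cine: np.ndarray, mask: np.ndarray) -> float:
    """cine: (frames, H, W) uint8 slice; mask: (H, W) bowel segmentation."""
    magnitudes = []
    for prev, curr in zip(cine[:-1], cine[1:]):
        flow = cv2.calcOpticalFlowFarneback(
            prev, curr, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        mag = np.linalg.norm(flow, axis=-1)       # per-pixel displacement
        magnitudes.append(mag[mask > 0].mean())   # restrict to segmented bowel
    return float(np.mean(magnitudes))
```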

Deep learning based colorectal cancer detection in medical images: A comprehensive analysis of datasets, methods, and future directions.

Gülmez B

PubMed · Jun 17, 2025
This comprehensive review examines the current state and evolution of artificial intelligence applications in colorectal cancer detection through medical imaging from 2019 to 2025. The study presents a quantitative analysis of 110 high-quality publications and 9 publicly accessible medical image datasets used for training and validation. Convolutional neural network architectures, including ResNet (40 implementations), VGG (18 implementations), and emerging transformer-based models (12 implementations), are systematically categorized and evaluated across classification, object detection, and segmentation tasks. The investigation encompasses hyperparameter optimization techniques utilized to enhance model performance, with particular focus on genetic algorithms and particle swarm optimization approaches. The role of explainable AI methods in medical diagnosis interpretation is analyzed through visualization techniques such as Grad-CAM and SHAP. Technical limitations, including dataset scarcity, computational constraints, and standardization challenges, are identified through trend analysis. Research gaps in current methodologies are highlighted through comparative assessment of performance metrics across different architectural implementations. Potential future research directions, including multimodal learning and federated learning approaches, are proposed based on publication trend analysis. This review serves as a comprehensive reference for researchers in medical image analysis and clinical practitioners implementing AI-based colorectal cancer detection systems.
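Of the explainability methods surveyed, Grad-CAM is the most widely used visualization technique. The sketch below shows the core idea (gradient-weighted pooling of the last convolutional activations) with an untrained stock ResNet as a stand-in model; none of this code comes from the reviewed papers.

```python
# Minimal Grad-CAM sketch: weight the last conv block's activations by the
# spatially pooled gradients of the top class, then upsample to a heat map.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights=None).eval()   # load trained weights in practice
acts, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

def grad_cam(x: torch.Tensor) -> torch.Tensor:
    logits = model(x)
    logits[0, logits.argmax()].backward()          # gradient of the top class
    w = grads["g"].mean(dim=(2, 3), keepdim=True)  # channel weights (pooled grads)
    cam = F.relu((w * acts["a"]).sum(dim=1))       # weighted activation sum
    cam = F.interpolate(cam[None], size=x.shape[2:], mode="bilinear")[0]
    return cam / (cam.max() + 1e-8)                # normalized heat map

heatmap = grad_cam(torch.randn(1, 3, 224, 224))    # stand-in input image
```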

Interpretable deep fuzzy network-aided detection of central lymph node metastasis status in papillary thyroid carcinoma.

Wang W, Ning Z, Zhang J, Zhang Y, Wang W

PubMed · Jun 16, 2025
The non-invasive assessment of central lymph node metastasis (CLNM) in patients with papillary thyroid carcinoma (PTC) plays a crucial role in assisting treatment decisions and prognosis planning. This study aims to use an interpretable deep fuzzy network guided by expert knowledge to predict the CLNM status of patients with PTC from ultrasound images. A total of 1019 PTC patients were enrolled in this study, comprising 465 CLNM patients and 554 non-CLNM patients. Pathological diagnosis served as the gold standard to determine metastasis status. Clinical and morphological features of the thyroid were collected as expert knowledge to guide the deep fuzzy network in predicting CLNM status. The network consisted of a region of interest (ROI) segmentation module, a knowledge-aware feature extraction module, and a fuzzy prediction module. The network was trained on 652 patients, validated on 163 patients, and tested on 204 patients. The model exhibited promising performance in predicting CLNM status, achieving an area under the receiver operating characteristic curve (AUC), accuracy, precision, sensitivity, and specificity of 0.786 (95% CI 0.720-0.846), 0.745 (95% CI 0.681-0.799), 0.727 (95% CI 0.636-0.819), 0.696 (95% CI 0.594-0.789), and 0.786 (95% CI 0.712-0.864), respectively. In addition, the rules of the fuzzy system are easy to understand and explain, giving the model good interpretability. The deep fuzzy network guided by expert knowledge predicted CLNM status of PTC patients with high accuracy and good interpretability, and may be considered an effective tool to guide preoperative clinical decision-making.
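The "fuzzy prediction module" refers to rule-based inference over knowledge-derived features. As a toy illustration only (the inputs, membership breakpoints, and the single rule below are invented, not the study's actual rule base), one such rule might look like:

```python
# Toy fuzzy rule of the kind an interpretable fuzzy network encodes.
import numpy as np

def ramp(x: float, a: float, b: float) -> float:
    """Increasing ramp membership: 0 below a, 1 above b."""
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def clnm_risk(nodule_mm: float, margin_irregularity: float) -> float:
    size_is_large = ramp(nodule_mm, 10.0, 20.0)              # hypothetical feature
    margin_is_irregular = ramp(margin_irregularity, 0.3, 0.7)
    # Rule: IF size is large AND margin is irregular THEN CLNM risk is high.
    return min(size_is_large, margin_is_irregular)           # AND as minimum

print(clnm_risk(18.0, 0.6))   # partial firing of the rule -> 0.75
```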

Imaging-Based AI for Predicting Lymphovascular Space Invasion in Cervical Cancer: Systematic Review and Meta-Analysis.

She L, Li Y, Wang H, Zhang J, Zhao Y, Cui J, Qiu L

PubMed · Jun 16, 2025
The role of artificial intelligence (AI) in enhancing the accuracy of lymphovascular space invasion (LVSI) detection in cervical cancer remains debated. This meta-analysis aimed to evaluate the diagnostic accuracy of imaging-based AI for predicting LVSI in cervical cancer. We conducted a comprehensive literature search across multiple databases, including PubMed, Embase, and Web of Science, identifying studies published up to November 9, 2024. Studies were included if they evaluated the diagnostic performance of imaging-based AI models in detecting LVSI in cervical cancer. We used a bivariate random-effects model to calculate pooled sensitivity and specificity with corresponding 95% confidence intervals. Study heterogeneity was assessed using the I² statistic. Of 403 studies identified, 16 studies (2514 patients) were included. For the internal validation set, the pooled sensitivity, specificity, and area under the curve (AUC) for detecting LVSI were 0.84 (95% CI 0.79-0.87), 0.78 (95% CI 0.75-0.81), and 0.87 (95% CI 0.84-0.90). For the external validation set, the pooled sensitivity, specificity, and AUC for detecting LVSI were 0.79 (95% CI 0.70-0.86), 0.76 (95% CI 0.67-0.83), and 0.84 (95% CI 0.81-0.87). Using the likelihood ratio test for subgroup analysis, deep learning demonstrated significantly higher sensitivity compared to machine learning (P=.01). Moreover, AI models based on positron emission tomography/computed tomography exhibited superior sensitivity relative to those based on magnetic resonance imaging (P=.01). Imaging-based AI, particularly deep learning algorithms, demonstrates promising diagnostic performance in predicting LVSI in cervical cancer. However, the limited external validation datasets and the retrospective nature of the research may introduce potential biases. These findings underscore AI's potential as an auxiliary diagnostic tool, necessitating further large-scale prospective validation.
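For readers unfamiliar with the pooling step: a bivariate random-effects model jointly pools logit sensitivity and specificity across studies. The sketch below shows the simpler univariate analogue (DerSimonian-Laird on logit sensitivity), which conveys the mechanics; the full bivariate model additionally estimates the between-study correlation of sensitivity and specificity. The counts are invented.

```python
# Random-effects pooling of sensitivity on the logit scale (DerSimonian-Laird).
import numpy as np

def pool_sensitivity(tp: np.ndarray, fn: np.ndarray):
    p = (tp + 0.5) / (tp + fn + 1.0)          # continuity-corrected proportion
    y = np.log(p / (1 - p))                   # study-level logit sensitivity
    v = 1 / (tp + 0.5) + 1 / (fn + 0.5)       # approximate logit variance
    w = 1 / v
    y_fixed = (w * y).sum() / w.sum()
    q = (w * (y - y_fixed) ** 2).sum()        # Cochran's Q
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(y) - 1)) / c)   # between-study variance
    w_re = 1 / (v + tau2)                     # random-effects weights
    mu = (w_re * y).sum() / w_re.sum()
    se = np.sqrt(1 / w_re.sum())
    expit = lambda z: 1 / (1 + np.exp(-z))
    return expit(mu), (expit(mu - 1.96 * se), expit(mu + 1.96 * se))

print(pool_sensitivity(np.array([80, 45, 62]), np.array([15, 12, 20])))
```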

Improving Prostate Gland Segmentation Using Transformer-Based Architectures

Shatha Abudalou

arXiv preprint · Jun 16, 2025
Inter-reader variability and cross-site domain shift challenge the automatic segmentation of prostate anatomy on T2-weighted MRI. This study investigates whether transformer models can retain precision amid such heterogeneity. We compare the performance of UNETR and SwinUNETR in prostate gland segmentation against our previous 3D UNet model [1], based on 546 T2-weighted MRI volumes annotated by two independent experts. Three training strategies were analyzed: a single-cohort dataset, a 5-fold cross-validated mixed cohort, and a gland-size-based dataset. Hyperparameters were tuned with Optuna. The test set, from an independent population of readers, served as the evaluation endpoint (Dice Similarity Coefficient). In single-reader training, SwinUNETR achieved an average Dice score of 0.816 for Reader #1 and 0.860 for Reader #2, while UNETR scored 0.800 and 0.833 for Readers #1 and #2, respectively, compared with the baseline UNet's 0.825 for Reader #1 and 0.851 for Reader #2. In cross-validated mixed training, SwinUNETR had an average Dice score of 0.858 for Reader #1 and 0.867 for Reader #2. For the gland-size-based dataset, SwinUNETR achieved an average Dice score of 0.902 for the Reader #1 subset and 0.894 for the Reader #2 subset, using the five-fold mixed training strategy (Reader #1, n=53; Reader #2, n=87) on the larger gland-size subsets, where UNETR performed poorly. Our findings demonstrate that global and shifted-window self-attention effectively reduces sensitivity to label noise and class imbalance, improving the Dice score over CNNs by up to five points while maintaining computational efficiency. This contributes to the high robustness of SwinUNETR for clinical deployment.
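A minimal MONAI sketch of the compared setup, a SwinUNETR segmenter scored with the Dice Similarity Coefficient, is shown below. The patch size, feature width, and random tensors are placeholders, and `img_size` is required by older MONAI releases but removed in newer ones, so adjust for your version.

```python
# SwinUNETR gland segmenter evaluated with the Dice Similarity Coefficient.
import torch
from monai.networks.nets import SwinUNETR
from monai.networks.utils import one_hot
from monai.metrics import DiceMetric

model = SwinUNETR(img_size=(96, 96, 96), in_channels=1, out_channels=2,
                  feature_size=48).eval()
dice = DiceMetric(include_background=False)

x = torch.randn(1, 1, 96, 96, 96)                  # one T2-weighted patch
with torch.no_grad():
    pred = model(x).argmax(dim=1, keepdim=True)    # predicted gland mask
label = torch.randint(0, 2, (1, 1, 96, 96, 96))    # stand-in expert annotation
dice(one_hot(pred, num_classes=2), one_hot(label, num_classes=2))
print(dice.aggregate().item())                     # Dice Similarity Coefficient
```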

First experiences with an adaptive pelvic radiotherapy system: Analysis of treatment times and learning curve.

Benzaquen D, Taussky D, Fave V, Bouveret J, Lamine F, Letenneur G, Halley A, Solmaz Y, Champion A

PubMed · Jun 16, 2025
The Varian Ethos system allows not only on-treatment-table plan adaptation but also automated contouring with the aid of artificial intelligence. This study evaluates the initial clinical implementation of an adaptive pelvic radiotherapy system, focusing on treatment times and the associated learning curve. We analyzed data from 903 consecutive treatments for most urogenital cancers at our center. Treatment time was calculated from the first cone-beam computed tomography scan used for replanning until the end of treatment. To assess whether treatments became shorter over time, we grouped treatments into 3-month quartiles by the date of the first treatment. Differences between groups were calculated using t-tests. The mean time from the first cone-beam computed tomography scan to the end of treatment was 25.9 min (standard deviation: 6.9 min). Treatment time depended on the number of planning target volumes and on treatment of the pelvic lymph nodes: the mean time from cone-beam computed tomography to the end of treatment was 37% longer if the pelvic lymph nodes were treated and 26% longer if there were more than two planning target volumes. There was a learning curve: in linear regression analysis, both the 3-month quartile of treatment (odds ratio [OR]: 1.3, 95% confidence interval [CI]: 0.70-1.8, P<0.001) and the number of planning target volumes (OR: 3.0, 95% CI: 2.6-3.4, P<0.001) were predictive of treatment time. Approximately two-thirds of treatments were delivered within 33 min. Treatment time was strongly dependent on the number of separate planning target volumes. There was a continuous learning curve.
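The learning-curve analysis is, in essence, a regression of treatment time on the calendar quartile of treatment start and plan complexity. A sketch on synthetic data (all columns and effect sizes invented for illustration) might look like:

```python
# Regress treatment time on 3-month quartile and plan complexity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "quartile": rng.integers(1, 5, n),   # 3-month period of treatment start
    "n_ptv": rng.integers(1, 4, n),      # number of planning target volumes
    "nodes": rng.integers(0, 2, n),      # pelvic lymph nodes treated (0/1)
})
df["time_min"] = (20.0 - 1.5 * df["quartile"] + 4.0 * df["n_ptv"]
                  + 6.0 * df["nodes"] + rng.normal(0, 4, n))
fit = smf.ols("time_min ~ quartile + n_ptv + nodes", data=df).fit()
print(fit.params)   # expect a negative quartile coefficient (learning curve)
```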

AI-based automatic measurement of split renal function in [¹⁸F]PSMA-1007 PET/CT.

Valind K, Ulén J, Gålne A, Jögi J, Minarik D, Trägårdh E

PubMed · Jun 16, 2025
Prostate-specific membrane antigen (PSMA) is an important target for positron emission tomography (PET) with computed tomography (CT) in prostate cancer. In addition to overexpression in prostate cancer cells, PSMA is expressed in healthy cells in the proximal tubules of the kidneys. Consequently, PSMA PET is being explored for renal functional imaging. Left and right renal uptake of PSMA-targeted radiopharmaceuticals has shown strong correlations with split renal function (SRF) as determined by other methods. Manual segmentation of kidneys in PET images is, however, time-consuming, making this method of measuring SRF impractical. In this study, we designed, trained, and validated an artificial intelligence (AI) model for automatic renal segmentation and measurement of SRF in [¹⁸F]PSMA-1007 PET images. Kidneys were segmented in 135 [¹⁸F]PSMA-1007 PET/CT studies used to train the AI model. The model was evaluated in 40 test studies. Left renal function percentage (LRF%) measurements ranged from 40 to 67%. Spearman correlation coefficients for LRF% measurements ranged between 0.98 and 0.99 when comparing segmentations made by 3 human readers and the AI model. The largest LRF% difference between any two measurements in a single case was 3 percentage points. The AI model produced measurements similar to those of human readers. Automatic measurement of SRF in PSMA PET is feasible. A potential use could be to provide additional data in the investigation of renal functional impairment in patients treated for prostate cancer.
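The quantity being automated is simple once the kidneys are segmented: the left renal function percentage is the left kidney's share of total renal uptake. A sketch, assuming a PET uptake volume and AI-derived binary kidney masks (summed voxel uptake is one plausible definition; the abstract does not specify the aggregation):

```python
# LRF% = left kidney uptake / (left + right kidney uptake) * 100.
import numpy as np

def lrf_percent(suv: np.ndarray, left_mask: np.ndarray,
                right_mask: np.ndarray) -> float:
    """suv: PET volume; masks: binary kidney segmentations of the same shape."""
    left = suv[left_mask > 0].sum()      # total left-kidney activity
    right = suv[right_mask > 0].sum()    # total right-kidney activity
    return 100.0 * left / (left + right)
```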

Artificial intelligence (AI) and CT in abdominal imaging: image reconstruction and beyond.

Pisuchpen N, Srinivas Rao S, Noda Y, Kongboonvijit S, Rezaei A, Kambadakone A

PubMed · Jun 16, 2025
Computed tomography (CT) is a cornerstone of abdominal imaging, playing a vital role in accurate diagnosis, appropriate treatment planning, and disease monitoring. The evolution of artificial intelligence (AI) in imaging has introduced deep learning-based reconstruction (DLR) techniques that enhance image quality, reduce radiation dose, and improve workflow efficiency. Traditional image reconstruction methods, including filtered back projection (FBP) and iterative reconstruction (IR), have limitations such as high noise levels and artificial image texture. DLR overcomes these challenges by leveraging convolutional neural networks to generate high-fidelity images while preserving anatomical details. Recent advances in vendor-specific and vendor-agnostic DLR algorithms, such as TrueFidelity, AiCE, and Precise Image, have demonstrated significant improvements in contrast-to-noise ratio, lesion detection, and diagnostic confidence across various abdominal organs, including the liver, pancreas, and kidneys. Furthermore, AI extends beyond image reconstruction to applications such as low-contrast lesion detection, quantitative imaging, and workflow optimization, augmenting radiologists' efficiency and diagnostic accuracy. However, challenges remain in clinical validation, standardization, and widespread adoption. This review explores the principles, advancements, and future directions of AI-driven CT image reconstruction and its expanding role in abdominal imaging.

Predicting mucosal healing in Crohn's disease: development of a deep-learning model based on intestinal ultrasound images.

Ma L, Chen Y, Fu X, Qin J, Luo Y, Gao Y, Li W, Xiao M, Cao Z, Shi J, Zhu Q, Guo C, Wu J

PubMed · Jun 16, 2025
Predicting treatment response in Crohn's disease (CD) is essential for selecting an optimal therapeutic regimen, but relevant models are lacking. This study aimed to develop a deep learning model based on baseline intestinal ultrasound (IUS) images and clinical information to predict mucosal healing. Consecutive CD patients who underwent pretreatment IUS were retrospectively recruited at a tertiary hospital. A total of 1548 IUS images of longitudinal diseased bowel segments were collected and divided into a training cohort and a test cohort. A convolutional neural network model was developed to predict mucosal healing after one year of standardized treatment. The model's efficacy was validated using five-fold internal cross-validation and further tested in the test cohort. A total of 190 patients (68.9% men, mean age 32.3 ± 14.1 years) were enrolled, comprising 1038 IUS images with mucosal healing and 510 images without mucosal healing. The mean area under the curve in the test cohort was 0.73 (95% CI: 0.68-0.78), with a mean sensitivity of 68.1% (95% CI: 60.5-77.4%), specificity of 69.5% (95% CI: 60.1-77.2%), positive predictive value of 80.0% (95% CI: 74.5-84.9%), and negative predictive value of 54.8% (95% CI: 48.0-63.7%). Heat maps showing the deep-learning decision-making process revealed that information from the bowel wall, serous surface, and surrounding mesentery was mainly considered by the model. We developed a deep learning model based on IUS images to predict mucosal healing in CD with notable accuracy. Further validation and improvement of this model with more multi-center, real-world data are needed.
Predicting treatment response in CD is essential for selecting an optimal therapeutic regimen. In this study, a deep-learning model using pretreatment ultrasound images and clinical information predicted mucosal healing with an AUC of 0.73.
Response to medication is highly variable among patients with CD, and high-resolution IUS images of the intestinal wall may harbor characteristics significant for treatment response.
A deep-learning model capable of predicting treatment response was generated using pretreatment IUS images.
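The four reported test-set metrics follow directly from the confusion matrix of thresholded predictions; a small sketch (the threshold and input arrays are hypothetical):

```python
# Sensitivity, specificity, PPV, and NPV from thresholded probabilities
# (labels: 1 = mucosal healing).
import numpy as np

def binary_metrics(y_true: np.ndarray, y_prob: np.ndarray, thr: float = 0.5):
    y_pred = (y_prob >= thr).astype(int)
    tp = int(((y_pred == 1) & (y_true == 1)).sum())
    tn = int(((y_pred == 0) & (y_true == 0)).sum())
    fp = int(((y_pred == 1) & (y_true == 0)).sum())
    fn = int(((y_pred == 0) & (y_true == 1)).sum())
    return {"sensitivity": tp / (tp + fn),   # recall of healing cases
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),           # positive predictive value
            "npv": tn / (tn + fn)}           # negative predictive value
```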

Three-dimensional multimodal imaging for predicting early recurrence of hepatocellular carcinoma after surgical resection.

Peng J, Wang J, Zhu H, Jiang P, Xia J, Cui H, Hong C, Zeng L, Li R, Li Y, Liang S, Deng Q, Deng H, Xu H, Dong H, Xiao L, Liu L

PubMed · Jun 16, 2025
High tumor recurrence after surgery remains a significant challenge in managing hepatocellular carcinoma (HCC). We aimed to construct a multimodal model to forecast the early recurrence of HCC after surgical resection and explore the associated biological mechanisms. Overall, 519 patients with HCC were included from three medical centers. Of these, 433 patients from Nanfang Hospital were used as the training cohort, and 86 patients from the other two hospitals comprised the validation cohort. Radiomics and deep learning (DL) models were developed using contrast-enhanced computed tomography images. Radiomics feature visualization and gradient-weighted class activation mapping were applied to improve interpretability. A multimodal model (MM-RDLM) was constructed by integrating the radiomics and DL models. Associations between MM-RDLM and recurrence-free survival (RFS) and overall survival were analyzed. Gene set enrichment analysis (GSEA) and multiplex immunohistochemistry (mIHC) were used to investigate the biological mechanisms. Models based on hepatic arterial phase images exhibited the best predictive performance, with radiomics and DL models achieving areas under the curve (AUCs) of 0.770 (95% confidence interval [CI]: 0.725-0.815) and 0.846 (95% CI: 0.807-0.886), respectively, in the training cohort. MM-RDLM achieved an AUC of 0.955 (95% CI: 0.937-0.972) in the training cohort and 0.930 (95% CI: 0.876-0.984) in the validation cohort. MM-RDLM (high vs. low) was significantly associated with RFS in the training (hazard ratio [HR] = 7.80 [5.74-10.61], P < 0.001) and validation (HR = 10.46 [4.96-22.68], P < 0.001) cohorts. GSEA revealed enrichment of the natural killer cell-mediated cytotoxicity pathway in the MM-RDLM low cohort. mIHC showed significantly higher percentages of CD3-, CD56-, and CD8-positive cells in the MM-RDLM low group. The MM-RDLM model demonstrated strong predictive performance for early postoperative recurrence of HCC. These findings contribute to identifying patients at high risk for early recurrence and provide insights into the potential underlying biological mechanisms.
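The survival analysis behind the reported hazard ratios (high vs. low MM-RDLM score against recurrence-free survival) follows a standard Cox regression pattern, sketched below with lifelines; the column names and toy data are invented for illustration.

```python
# Cox regression of RFS on the dichotomized model score (toy data).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "rfs_months":   [6, 24, 12, 36, 9, 30, 15, 40],  # follow-up time
    "recurred":     [1, 0, 1, 1, 1, 0, 0, 0],        # 1 = early recurrence
    "mm_rdlm_high": [1, 0, 1, 0, 1, 0, 1, 0],        # high vs. low score group
})
cph = CoxPHFitter().fit(df, duration_col="rfs_months", event_col="recurred")
print(cph.hazard_ratios_["mm_rdlm_high"])            # HR for high vs. low
```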