
Preoperative prediction model for benign and malignant gallbladder polyps on the basis of machine-learning algorithms.

Zeng J, Hu W, Wang Y, Jiang Y, Peng J, Li J, Liu X, Zhang X, Tan B, Zhao D, Li K, Zhang S, Cao J, Qu C

PubMed · Jun 10, 2025
This study aimed to differentiate benign from malignant gallbladder polyps preoperatively by developing a prediction model that integrates preoperative transabdominal ultrasound and clinical features using machine-learning algorithms. A retrospective analysis was conducted on clinical and ultrasound data from 1,050 patients at 2 centers who underwent cholecystectomy for gallbladder polyps. Six machine-learning algorithms were used to develop preoperative models for predicting benign and malignant gallbladder polyps, and model performance was evaluated in internal and external test cohorts. The Shapley Additive Explanations (SHAP) algorithm was used to assess feature importance. The main study cohort included 660 patients with benign polyps and 285 patients with malignant polyps, randomly divided 3:1 into stratified training and internal test cohorts. The external test cohort consisted of 73 benign and 32 malignant polyps. In the training cohort, the SHAP algorithm, applied to variables selected by Least Absolute Shrinkage and Selection Operator (LASSO) regression and multivariate logistic regression, identified 6 key predictive factors: polyp size, age, fibrinogen, carbohydrate antigen 19-9, presence of stones, and cholinesterase. Using these factors, 6 predictive models were developed. The random forest model outperformed the others, with areas under the curve of 0.963, 0.940, and 0.958 in the training, internal test, and external test cohorts, respectively. Compared with previous studies, the random forest model demonstrated excellent clinical utility and predictive performance. In addition, the SHAP algorithm was used to visualize feature importance, and an online calculation platform was developed. The random forest model, combining preoperative ultrasound and clinical features, accurately predicts benign and malignant gallbladder polyps, offering valuable guidance for clinical decision-making.
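The pipeline described, LASSO screening followed by a random forest with SHAP-based interpretation, can be sketched as follows. This is a minimal illustration on synthetic stand-in data, not the authors' code; the feature names simply mirror the six predictors listed above, and all numbers are arbitrary.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
features = ["polyp_size_mm", "age", "fibrinogen", "ca19_9", "stones", "cholinesterase"]
X = pd.DataFrame(rng.normal(size=(945, 6)), columns=features)
y = (X["polyp_size_mm"] + 0.5 * X["age"] + rng.normal(size=945) > 0.8).astype(int)

# 3:1 stratified split, as in the study design
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

# LASSO-based feature screening (features with coefficients shrunk to zero are dropped)
lasso = LassoCV(cv=5, random_state=0).fit(X_tr, y_tr)
kept = [f for f, c in zip(features, lasso.coef_) if abs(c) > 1e-6]

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr[kept], y_tr)
print("test AUC:", roc_auc_score(y_te, rf.predict_proba(X_te[kept])[:, 1]))

# SHAP values for feature-importance visualization (requires the shap package)
# import shap
# shap.summary_plot(shap.TreeExplainer(rf).shap_values(X_te[kept]), X_te[kept])
```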

Evaluation of artificial-intelligence-based liver segmentation and its application for longitudinal liver volume measurement.

Kimura R, Hirata K, Tsuneta S, Takenaka J, Watanabe S, Abo D, Kudo K

PubMed · Jun 10, 2025
Accurate liver-volume measurement from CT is essential for treatment planning, particularly in liver resection, to avoid postoperative liver failure. However, manual segmentation is time-consuming and prone to variability. Advances in artificial intelligence (AI), specifically convolutional neural networks, have improved liver segmentation accuracy. We aimed to identify the optimal CT phases for AI-based liver volume estimation and to apply the model to track liver volume changes over time, including temporal changes in participants without liver disease. In this retrospective, single-center study, we assessed the performance of a previously reported open-source AI-based liver segmentation model on non-contrast and dynamic CT phases, comparing its accuracy with that of expert radiologists. The Dice similarity coefficient (DSC) was calculated across CT phases, including arterial, portal venous, and non-contrast, to validate the model. The model was then applied to a longitudinal study of 39 patients without liver disease (527 CT scans) to examine age-related liver volume changes over 5 to 20 years. The model demonstrated high accuracy in all phases compared with manual segmentation; the highest DSC, 0.988 ± 0.010, was obtained in the arterial phase. Intraclass correlation coefficients for liver volume were also high, exceeding 0.9 for contrast-enhanced phases and 0.8 for non-contrast CT. In the longitudinal study, the model indicated an annual liver volume decrease of 0.95%. This model provides accurate liver segmentation across CT phases and offers insight into age-related liver volume reduction. Measuring changes in liver volume may aid early detection of disease and understanding of pathophysiology.
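As a rough illustration of the volumetry step, liver volume can be derived from a binary segmentation mask and the annual percent change estimated with a log-linear fit. The mask spacing and the longitudinal volumes below are invented for the example, not taken from the study.

```python
import numpy as np

def liver_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume in millilitres = voxel count * voxel volume (mm^3 -> mL)."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return mask.sum() * voxel_mm3 / 1000.0

# Hypothetical longitudinal liver volumes (mL) at 5-year intervals
years = np.array([0, 5, 10, 15, 20], dtype=float)
volumes = np.array([1500, 1430, 1360, 1295, 1235], dtype=float)

# A log-linear fit yields a constant relative (percent-per-year) change
slope, _ = np.polyfit(years, np.log(volumes), 1)
print(f"estimated annual change: {100 * (np.exp(slope) - 1):.2f}%")
```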

Multi-task and multi-scale attention network for lymph node metastasis prediction in esophageal cancer.

Yi Y, Wang J, Li Z, Wang L, Ding X, Zhou Q, Huang Y, Li B

PubMed · Jun 9, 2025
Accurate diagnosis of lymph node metastasis in esophageal squamous cell carcinoma is crucial in the treatment workflow, and the process is often time-consuming for clinicians. Recent deep learning models that predict whether lymph nodes are affected by cancer in esophageal cancer cases are hampered by challenging node delineation and therefore achieve poor diagnostic accuracy. This paper proposes an innovative multi-task and multi-scale attention network (M²ANet) to predict lymph node metastasis precisely. The network softly expands the regions of the node mask and then uses the expanded mask to aggregate image features, thereby amplifying node context. It also adopts a two-branch training strategy that compels the model to simultaneously predict metastasis probability and node masks, fostering a more comprehensive learning process. Node metastasis prediction performance was evaluated on a self-collected dataset of 177 patients, where the model achieves a competitive accuracy of 83.7% on a test set of 577 nodes. With its adaptability to intricate patterns and ability to handle data variation, M²ANet emerges as a promising tool for robust and comprehensive lymph node metastasis prediction in medical image analysis.
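One plausible reading of the soft mask expansion and masked feature aggregation, sketched in PyTorch. The exact M²ANet operators are not specified in the abstract, so max-pool dilation followed by averaging stands in for the soft expansion; shapes and kernel sizes are assumptions.

```python
import torch
import torch.nn.functional as F

def soft_expand(mask: torch.Tensor, k: int = 7) -> torch.Tensor:
    """Softly dilate a binary node mask (N,1,H,W) to capture peri-nodal context."""
    dilated = F.max_pool2d(mask, kernel_size=k, stride=1, padding=k // 2)
    return F.avg_pool2d(dilated, kernel_size=k, stride=1, padding=k // 2)  # soften edges

def masked_pool(feat: torch.Tensor, soft_mask: torch.Tensor) -> torch.Tensor:
    """Aggregate backbone features (N,C,H,W) weighted by the expanded mask."""
    w = F.interpolate(soft_mask, size=feat.shape[-2:], mode="bilinear", align_corners=False)
    return (feat * w).sum(dim=(2, 3)) / w.sum(dim=(2, 3)).clamp_min(1e-6)

feat = torch.randn(2, 64, 32, 32)                    # toy backbone feature map
mask = (torch.rand(2, 1, 128, 128) > 0.95).float()   # toy node mask
ctx = masked_pool(feat, soft_expand(mask))           # (2, 64) node-context embedding
```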

Transformer-based robotic ultrasound 3D tracking for capsule robot in GI tract.

Liu X, He C, Wu M, Ping A, Zavodni A, Matsuura N, Diller E

PubMed · Jun 9, 2025
Ultrasound (US) imaging is a promising modality for real-time monitoring of robotic capsule endoscopes navigating the gastrointestinal (GI) tract. It offers high temporal resolution and safety but is limited by a narrow field of view, low visibility in gas-filled regions, and difficulty detecting out-of-plane motion. This work addresses these issues with a novel robotic ultrasound tracking system capable of long-distance 3D tracking and active re-localization when the capsule is lost due to motion or artifacts. We develop a hybrid deep-learning tracking framework that combines convolutional neural networks (CNNs) with a transformer backbone: the CNN component efficiently encodes spatial features, while the transformer captures long-range contextual dependencies in B-mode US images. The model is integrated with a robotic arm that adaptively scans and tracks the capsule. System performance was evaluated on ex vivo colon phantoms under varying imaging conditions, with physical perturbations introduced to simulate realistic clinical scenarios. The proposed system achieved continuous 3D tracking over distances exceeding 90 cm, with a mean centroid localization error of 1.5 mm and over 90% detection accuracy. We also demonstrated 3D tracking in a more complex workspace featuring two curved sections that simulate anatomical challenges, indicating strong resilience to motion-induced artifacts and geometric variability. The system maintained real-time tracking at 9-12 FPS and re-localized the capsule within seconds after tracking loss, even under gas artifacts and acoustic shadowing. In summary, this study presents a hybrid CNN-transformer system for automatic, real-time 3D ultrasound tracking of capsule robots over long distances. The method reliably handles occlusion, view loss, and image artifacts, offers millimeter-level tracking accuracy, and significantly reduces clinical workload through autonomous detection and re-localization. Future work includes improving probe-tissue interaction handling and validating performance in live animal and human trials to assess physiological impacts.
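A minimal PyTorch sketch of the hybrid idea: a per-frame CNN encoder feeding a transformer over a B-mode sequence. Layer sizes, depth, and the centroid regression head are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CNNTransformerTracker(nn.Module):
    def __init__(self, d_model: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(            # per-frame spatial encoder
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, d_model, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d_model, 3)    # (x, y, z) capsule centroid

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 1, H, W) sequence of B-mode images
        b, t = frames.shape[:2]
        f = self.cnn(frames.flatten(0, 1)).flatten(1).view(b, t, -1)
        return self.head(self.transformer(f)[:, -1])  # predict from the last frame

pred = CNNTransformerTracker()(torch.randn(2, 8, 1, 96, 96))  # -> (2, 3)
```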

Bi-regional and bi-phasic automated machine learning radiomics for defining metastasis to lesser curvature lymph node stations in gastric cancer.

Huang H, Wang S, Deng J, Ye Z, Li H, He B, Fang M, Zhang N, Liu J, Dong D, Liang H, Li G, Tian J, Hu Y

PubMed · Jun 8, 2025
Lymph node metastasis (LNM) is the primary metastatic mode in gastric cancer (GC) and occurs frequently in the lesser curvature. This study aims to establish a radiomic model to predict the metastatic status of lesser-curvature lymph nodes in GC. We retrospectively collected data from 939 gastric cancer patients who underwent gastrectomy and D2 lymphadenectomy across two centers. Both the primary lesion and the lesser curvature region were segmented as representative regions of interest (ROIs). The combination of bi-regional and bi-phasic CT imaging features was used to build a hybrid radiomic model to predict LNM in the lesser curvature, and the model was validated internally and externally. The potential generalization ability of the hybrid model was further investigated by predicting metastasis status in the supra-pancreatic area. The hybrid model yielded substantially higher performance than the single-region and single-phase models, with AUCs of 0.847 (95% CI, 0.770-0.924) and 0.833 (95% CI, 0.800-0.867) in the two independent test cohorts. It also achieved AUCs ranging from 0.678 to 0.761 for predicting LNM in the supra-pancreatic area, showing potential generalization performance. The CT imaging features of the primary tumor and adjacent tissues are significantly associated with LNM, and the developed model showed strong diagnostic performance that may support individualized treatment of GC.
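The bi-regional, bi-phasic fusion amounts to concatenating feature vectors from two ROIs in two CT phases before classification. The sketch below stubs out the radiomic extraction (which could be done with a library such as pyradiomics) using random vectors and uses a plain logistic classifier as a stand-in; dimensions and labels are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400
blocks = {                                   # hypothetical 50-dim features per ROI/phase
    ("tumor", "arterial"): rng.normal(size=(n, 50)),
    ("tumor", "venous"): rng.normal(size=(n, 50)),
    ("lesser_curvature", "arterial"): rng.normal(size=(n, 50)),
    ("lesser_curvature", "venous"): rng.normal(size=(n, 50)),
}
X = np.hstack(list(blocks.values()))         # hybrid bi-regional/bi-phasic vector
y = (X[:, 0] + X[:, 50] + rng.normal(size=n) > 0).astype(int)  # toy LNM labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("hybrid-model AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```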

Simulating workload reduction with an AI-based prostate cancer detection pathway using a prediction uncertainty metric.

Fransen SJ, Bosma JS, van Lohuizen Q, Roest C, Simonis FFJ, Kwee TC, Yakar D, Huisman H

PubMed · Jun 7, 2025
This study compared two uncertainty quantification (UQ) metrics for ruling out prostate MRI scans with a high-confidence artificial intelligence (AI) prediction and investigated the resulting potential reduction in radiologists' workload in a clinically significant prostate cancer (csPCa) detection pathway. This retrospective study utilized 1612 MRI scans from three institutes for csPCa (Gleason Grade Group ≥ 2) assessment. We compared the standard diagnostic pathway (radiologist reading) with an AI-based rule-out pathway in terms of efficacy and accuracy in diagnosing csPCa. In the rule-out pathway, 15 AI submodels (trained on 7756 cases) diagnosed each MRI scan, and any prediction deemed uncertain was referred to a radiologist for reading. We compared the mean (meanUQ) and variability (varUQ) of predictions using the DeLong test on the areas under the receiver operating characteristic curves (AUROC). The workload reduction of the best UQ method was determined based on maintained sensitivity at non-inferior specificity, using margins of 0.05 and 0.10. The workload reduction of the proposed pathway was institute-specific: up to 20% at a 0.10 non-inferiority margin (p < 0.05), with no significant workload reduction at a 0.05 margin. VarUQ-based rule-out gave higher but non-significant AUROC scores than meanUQ in certain selected cases (+0.05 AUROC, p > 0.05). MeanUQ and varUQ showed promise in AI-based rule-out csPCa detection; using varUQ in an AI-based csPCa detection pathway could reduce the number of scans radiologists need to read. The varying performance of the UQ rule-out indicates the need for institute-specific UQ thresholds. Question: Can AI autonomously assess prostate MRI scans with high certainty at non-inferior performance compared with radiologists, reducing radiologists' workload? Findings: The optimal ratio of AI-model to radiologist readings is institute-dependent and requires calibration. Clinical relevance: Semi-autonomous AI-based prostate cancer detection with variational UQ scores shows promise in reducing the number of scans radiologists need to read.
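The two rule-out metrics can be illustrated as follows: given 15 submodel probabilities per scan, meanUQ reflects how far the mean prediction sits from 0.5, varUQ the spread across submodels, and only scans above an uncertainty threshold go to the radiologist. The threshold and data here are placeholders, consistent with the finding that thresholds must be calibrated per institute.

```python
import numpy as np

rng = np.random.default_rng(2)
preds = rng.beta(2, 5, size=(1612, 15))        # per-submodel csPCa probabilities (toy)

mean_p = preds.mean(axis=1)
mean_uq = 1 - 2 * np.abs(mean_p - 0.5)         # 0 = confident, 1 = maximally unsure
var_uq = preds.std(axis=1)                     # disagreement across submodels

threshold = 0.15                               # placeholder; calibrated per institute
refer_to_radiologist = var_uq > threshold
print(f"workload: {refer_to_radiologist.mean():.0%} of scans still read by a radiologist")
```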

Physics-informed neural networks for denoising high b-value diffusion-weighted images.

Lin Q, Yang F, Yan Y, Zhang H, Xie Q, Zheng J, Yang W, Qian L, Liu S, Yao W, Qu X

PubMed · Jun 7, 2025
Diffusion-weighted imaging (DWI) is widely applied in tumor diagnosis because it measures the diffusion of water molecules. To increase sensitivity to tumors, faithful high b-value DWI images are desirable, obtained by applying stronger gradient fields in magnetic resonance imaging (MRI). However, high b-value DWI images suffer from a reduced signal-to-noise ratio due to the exponential decay of signal intensity, making denoising important. Here, we propose a Physics-Informed neural Network for high b-value DWI image Denoising (PIND) that leverages a physics-informed loss and prior information from low b-value DWI images with high signal-to-noise ratio. Experiments were conducted on a prostate DWI dataset of 125 subjects. Compared with the original noisy images, PIND improves the peak signal-to-noise ratio from 31.25 dB to 36.28 dB and the structural similarity index measure from 0.77 to 0.92. Our scheme can save 83% of data acquisition time, since fewer averages of high b-value DWI images need to be acquired, while maintaining 98% accuracy of the apparent diffusion coefficient value, suggesting its potential effectiveness in preserving essential diffusion characteristics. A reader study by 4 radiologists (3, 6, 13, and 18 years of experience) indicates PIND's promising performance on overall quality, signal-to-noise ratio, artifact suppression, and lesion conspicuity, showing potential for improving clinical DWI applications.
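One way such a physics-informed term could look, assuming the standard mono-exponential DWI decay S(b) = S(0)·exp(-b·ADC); the actual PIND loss may combine this with other terms, and all tensors below are toy stand-ins.

```python
import torch

def physics_loss(denoised_hb, low_b_img, b_low, b_high, adc):
    """Penalize deviation of the denoised high-b image from the signal
    predicted by exponential decay of the high-SNR low-b image."""
    predicted_hb = low_b_img * torch.exp(-(b_high - b_low) * adc)
    return torch.mean((denoised_hb - predicted_hb) ** 2)

# toy tensors: (batch, 1, H, W) images and a matching ADC map in mm^2/s
low_b = torch.rand(1, 1, 64, 64)
adc = torch.full((1, 1, 64, 64), 1.0e-3)
noisy_hb = low_b * torch.exp(-1400.0 * adc) + 0.01 * torch.randn(1, 1, 64, 64)
loss = physics_loss(noisy_hb, low_b, b_low=100.0, b_high=1500.0, adc=adc)
```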

Estimation of tumor coverage after RF ablation of hepatocellular carcinoma using single 2D image slices.

Varble N, Li M, Saccenti L, Borde T, Arrichiello A, Christou A, Lee K, Hazen L, Xu S, Lencioni R, Wood BJ

PubMed · Jun 7, 2025
To assess the technical success of radiofrequency ablation (RFA) in patients with hepatocellular carcinoma (HCC), an artificial intelligence (AI) model was developed to estimate tumor coverage without segmentation or registration tools. A secondary retrospective analysis of 550 patients in the multicenter, multinational OPTIMA trial (3-7 cm solitary HCC lesions, randomized to RFA or RFA + LTLD) identified 182 patients with a well-defined pre-RFA tumor and a devascularized ablation zone on enhanced CT 1 month post-RFA. The ground truth, percent tumor coverage, was determined from semi-automatic 3D tumor and ablation zone segmentation with elastic registration. The isocenter of the tumor and of the ablation zone was isolated on 2D axial CT images. Feature extraction was performed, and classification and linear regression models were built. Images were augmented, and 728 image pairs were used for training and testing. The estimated percent tumor coverage from the models was compared with ground truth. Validation was performed on eight patient cases from a separate institution, where RFA was performed and pre- and post-ablation images were collected. In the testing cohorts, the best model accuracy was achieved with classification and moderate data augmentation (AUC = 0.86, TPR = 0.59, TNR = 0.89, accuracy = 69%) and with random forest regression (RMSE = 12.6%, MAE = 9.8%). Validation at the separate institution did not achieve accuracy greater than random estimation. Visual review of training cases suggests that apparent poor tumor coverage may result from atypical ablation-zone shrinkage 1 month post-RFA, which may not reflect clinical utilization. An AI model that uses 2D images at the center of the tumor and the 1-month post-ablation zone can accurately estimate ablation tumor coverage, although translation to separate validation cohorts could be challenging.
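The ground-truth percent tumor coverage reduces to a voxel-overlap ratio once the tumor and ablation-zone masks are registered to a common space, as in this sketch with toy 3D masks.

```python
import numpy as np

def percent_tumor_coverage(tumor: np.ndarray, ablation: np.ndarray) -> float:
    """Fraction of tumor voxels inside the devascularized ablation zone."""
    tumor = tumor.astype(bool)
    covered = np.logical_and(tumor, ablation.astype(bool)).sum()
    return 100.0 * covered / max(tumor.sum(), 1)

# toy 3D masks standing in for segmented, registered CT volumes
tumor = np.zeros((64, 64, 64), bool); tumor[20:40, 20:40, 20:40] = True
ablation = np.zeros_like(tumor); ablation[18:38, 18:44, 22:44] = True
print(f"tumor coverage: {percent_tumor_coverage(tumor, ablation):.1f}%")
```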

Automatic MRI segmentation of masticatory muscles using deep learning enables large-scale muscle parameter analysis.

Ten Brink RSA, Merema BJ, den Otter ME, Jensma ML, Witjes MJH, Kraeima J

PubMed · Jun 7, 2025
Mandibular reconstruction to restore mandibular continuity often relies on patient-specific implants and virtual surgical planning, but current implant designs rarely consider individual biomechanical demands, which are critical for preventing complications such as stress shielding, screw loosening, and implant failure. Including patient-specific masticatory muscle parameters such as cross-sectional area, vectors, and volume could improve implant success, but manual segmentation of these parameters is time-consuming, limiting large-scale analyses. In this study, a deep learning model was trained for automatic segmentation of eight masticatory muscles on MRI images. Forty T1-weighted MRI scans were segmented manually or via pseudo-labelling for training. Training employed 5-fold cross-validation over 1000 epochs per fold, and testing was done on 10 manually segmented scans. The model achieved a mean Dice similarity coefficient (DSC) of 0.88, intersection over union (IoU) of 0.79, precision of 0.87, and recall of 0.89, demonstrating high segmentation accuracy. These results indicate the feasibility of large-scale, reproducible analyses of muscle volumes, directions, and estimated forces. By integrating these parameters into implant design and surgical planning, this method offers a step toward personalized surgical strategies that could improve postoperative outcomes in mandibular reconstruction, bringing the field closer to truly individualized patient care.
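The reported metrics can be computed per muscle label from predicted and manual label maps, as in this sketch; the label IDs and toy volumes are hypothetical.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, label: int) -> dict:
    """DSC, IoU, precision, and recall for one label of a multi-class mask."""
    p, g = pred == label, gt == label
    tp = np.logical_and(p, g).sum()
    return {
        "DSC": 2 * tp / max(p.sum() + g.sum(), 1),
        "IoU": tp / max(np.logical_or(p, g).sum(), 1),
        "precision": tp / max(p.sum(), 1),
        "recall": tp / max(g.sum(), 1),
    }

pred = np.random.default_rng(3).integers(0, 9, size=(32, 32, 32))  # 8 muscles + background
gt = pred.copy(); gt[:4] = 0                                        # slightly perturbed "manual" mask
print(segmentation_metrics(pred, gt, label=1))
```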