Page 73 of 100991 results

Biologically Inspired Deep Learning Approaches for Fetal Ultrasound Image Classification

Rinat Prochii, Elizaveta Dakhova, Pavel Birulin, Maxim Sharaev

arxiv logopreprintJun 10 2025
Accurate classification of second-trimester fetal ultrasound images remains challenging due to low image quality, high intra-class variability, and significant class imbalance. In this work, we introduce a simple yet powerful, biologically inspired deep learning ensemble framework that, unlike prior studies focused on only a handful of anatomical targets, simultaneously distinguishes 16 fetal structures. Drawing on the hierarchical, modular organization of biological vision systems, our model stacks two complementary branches (a "shallow" path for coarse, low-resolution cues and a "detailed" path for fine, high-resolution features), concatenating their outputs for final prediction. To our knowledge, no existing method has addressed such a large number of classes with a comparably lightweight architecture. We trained and evaluated on 5,298 routinely acquired clinical images (annotated by three experts and reconciled via Dawid-Skene), reflecting real-world noise and variability rather than a "cleaned" dataset. Despite this complexity, our ensemble (EfficientNet-B0 + EfficientNet-B6 with LDAM-Focal loss) identifies 90% of organs with accuracy > 0.75 and 75% of organs with accuracy > 0.85, performance competitive with more elaborate models applied to far fewer categories. These results demonstrate that biologically inspired modular stacking can yield robust, scalable fetal anatomy recognition in challenging clinical settings.
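The two-branch stacking idea can be sketched as follows. The branch bodies below are hypothetical stand-ins (the paper's actual branches are EfficientNet-B0 and EfficientNet-B6); only the structure, two granularities concatenated into one feature vector, mirrors the abstract.

```python
import numpy as np

def shallow_branch(img):
    # coarse, low-resolution cues: heavy downsampling, then column means
    # (illustrative stand-in for EfficientNet-B0 features)
    return img[::4, ::4].mean(axis=0)

def detailed_branch(img):
    # fine, high-resolution cues: per-row variation of the full image
    # (illustrative stand-in for EfficientNet-B6 features)
    return img.std(axis=1)

def stacked_features(img):
    # concatenate both branches' outputs for the final prediction head
    return np.concatenate([shallow_branch(img), detailed_branch(img)])

img = np.random.rand(64, 64)
feats = stacked_features(img)  # 16 coarse + 64 fine features
```

In the real model a classifier head over the concatenated features would produce the 16-way organ prediction.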

Evaluation of artificial-intelligence-based liver segmentation and its application for longitudinal liver volume measurement.

Kimura R, Hirata K, Tsuneta S, Takenaka J, Watanabe S, Abo D, Kudo K

pubmed logopapersJun 10 2025
Accurate liver-volume measurements from CT scans are essential for treatment planning, particularly in liver resection cases, to avoid postoperative liver failure. However, manual segmentation is time-consuming and prone to variability. Advancements in artificial intelligence (AI), specifically convolutional neural networks, have enhanced liver segmentation accuracy. We aimed to identify optimal CT phases for AI-based liver volume estimation and apply the model to track liver volume changes over time. We also evaluated temporal changes in liver volume in participants without liver disease. In this retrospective, single-center study, we assessed the performance of a previously reported open-source AI-based liver segmentation model using non-contrast and dynamic CT phases. The accuracy of the model was compared with that of expert radiologists. The Dice similarity coefficient (DSC) was calculated across various CT phases, including arterial, portal venous, and non-contrast, to validate the model. The model was then applied to a longitudinal study involving 39 patients without liver disease (527 CT scans) to examine age-related liver volume changes over 5 to 20 years. The model demonstrated high accuracy across all phases compared to manual segmentation. Among the CT phases, the highest DSC of 0.988 ± 0.010 was in the arterial phase. The intraclass correlation coefficients for liver volume were also high, exceeding 0.9 for contrast-enhanced phases and 0.8 for non-contrast CT. In the longitudinal study, the model indicated an annual liver-volume decrease of 0.95%. This model provides high accuracy in liver segmentation across various CT phases and offers insights into age-related liver volume reduction. Measuring changes in liver volume may help with the early detection of diseases and the understanding of pathophysiology.
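The Dice similarity coefficient used to validate the model has a standard definition; a minimal sketch for binary masks:

```python
import numpy as np

def dice_coefficient(pred, truth):
    # DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

a = np.zeros((4, 4)); a[:2] = 1   # 8 voxels
b = np.zeros((4, 4)); b[:3] = 1   # 12 voxels, overlap of 8
score = dice_coefficient(a, b)    # 2*8 / (8+12) = 0.8
```

A DSC of 0.988, as reported for the arterial phase, means the AI and manual masks overlap almost completely.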

Artificial intelligence and endoanal ultrasound: pioneering automated differentiation of benign anal and sphincter lesions.

Mascarenhas M, Almeida MJ, Martins M, Mendes F, Mota J, Cardoso P, Mendes B, Ferreira J, Macedo G, Poças C

pubmed logopapersJun 10 2025
Anal injuries, such as lacerations and fissures, are challenging to diagnose because of their anatomical complexity. Endoanal ultrasound (EAUS) has proven to be a reliable tool for detailed visualization of anal structures but relies on expert interpretation. Artificial intelligence (AI) may offer a solution for more accurate and consistent diagnoses. This study aims to develop and test a convolutional neural network (CNN)-based algorithm for automatic classification of fissures and anal lacerations (internal and external) on EAUS. A single-center retrospective study analyzed 238 EAUS radial probe exams (April 2022-January 2024), categorizing 4528 frames into fissures (516), external lacerations (2174), and internal lacerations (1838), following validation by three experts. Data were split 80% for training and 20% for testing. Performance metrics included sensitivity, specificity, and accuracy. For external lacerations, the CNN achieved 82.5% sensitivity, 93.5% specificity, and 88.2% accuracy. For internal lacerations, it achieved 91.7% sensitivity, 85.9% specificity, and 88.2% accuracy. For anal fissures, it achieved 100% sensitivity, specificity, and accuracy. This first EAUS AI-assisted model for differentiating benign anal injuries demonstrates excellent diagnostic performance. It highlights AI's potential to improve accuracy, reduce reliance on expertise, and support broader clinical adoption. While currently limited by a small dataset and single-center scope, this work represents a significant step towards integrating AI in proctology.

DWI-based Biologically Interpretable Radiomic Nomogram for Predicting 1-year Biochemical Recurrence after Radical Prostatectomy: A Deep Learning, Multicenter Study.

Niu X, Li Y, Wang L, Xu G

pubmed logopapersJun 10 2025
It is not rare to experience a biochemical recurrence (BCR) following radical prostatectomy (RP) for prostate cancer (PCa). It has been reported that early detection and management of BCR following surgery could improve survival in PCa. This study aimed to develop a nomogram integrating deep learning-based radiomic features and clinical parameters to predict 1-year BCR after RP and to examine the associations between radiomic scores and the tumor microenvironment (TME). In this retrospective multicenter study, two independent cohorts of patients (n = 349) who underwent RP after multiparametric magnetic resonance imaging (mpMRI) between January 2015 and January 2022 were included in the analysis. Single-cell RNA sequencing data from four prospectively enrolled participants were used to investigate the radiomic score-related TME. The 3D U-Net was trained and optimized for prostate cancer segmentation using diffusion-weighted imaging, and radiomic features of the target lesion were extracted. Predictive nomograms were developed via multivariate Cox proportional hazards regression analysis. The nomograms were assessed for discrimination, calibration, and clinical usefulness. In the development cohort, the clinical-radiomic nomogram had an AUC of 0.892 (95% confidence interval: 0.783-0.939), which was considerably greater than those of the radiomic signature and clinical model. The Hosmer-Lemeshow test demonstrated that the clinical-radiomic model performed well in both the development (P = 0.461) and validation (P = 0.722) cohorts. Decision curve analysis revealed that the clinical-radiomic nomogram displayed better clinical predictive usefulness than the clinical or radiomic signature alone in both cohorts. Radiomic scores were associated with a significant difference in the TME pattern. Our study demonstrated the feasibility of a DWI-based clinical-radiomic nomogram combined with deep learning for the prediction of 1-year BCR. The findings revealed that the radiomic score was associated with a distinctive tumor microenvironment.

Transformer-based robotic ultrasound 3D tracking for capsule robot in GI tract.

Liu X, He C, Wu M, Ping A, Zavodni A, Matsuura N, Diller E

pubmed logopapersJun 9 2025
Ultrasound (US) imaging is a promising modality for real-time monitoring of robotic capsule endoscopes navigating through the gastrointestinal (GI) tract. It offers high temporal resolution and safety but is limited by a narrow field of view, low visibility in gas-filled regions and challenges in detecting out-of-plane motions. This work addresses these issues by proposing a novel robotic ultrasound tracking system capable of long-distance 3D tracking and active re-localization when the capsule is lost due to motion or artifacts. We develop a hybrid deep learning-based tracking framework combining convolutional neural networks (CNNs) and a transformer backbone. The CNN component efficiently encodes spatial features, while the transformer captures long-range contextual dependencies in B-mode US images. This model is integrated with a robotic arm that adaptively scans and tracks the capsule. The system's performance is evaluated using ex vivo colon phantoms under varying imaging conditions, with physical perturbations introduced to simulate realistic clinical scenarios. The proposed system achieved continuous 3D tracking over distances exceeding 90 cm, with a mean centroid localization error of 1.5 mm and over 90% detection accuracy. We demonstrated 3D tracking in a more complex workspace featuring two curved sections to simulate anatomical challenges. This suggests the strong resilience of the tracking system to motion-induced artifacts and geometric variability. The system maintained real-time tracking at 9-12 FPS and successfully re-localized the capsule within seconds after tracking loss, even under gas artifacts and acoustic shadowing. This study presents a hybrid CNN-transformer system for automatic, real-time 3D ultrasound tracking of capsule robots over long distances. The method reliably handles occlusions, view loss and image artifacts, offering millimeter-level tracking accuracy. 
It significantly reduces clinical workload through autonomous detection and re-localization. Future work includes improving probe-tissue interaction handling and validating performance in live animal and human trials to assess physiological impacts.

Multi-task and multi-scale attention network for lymph node metastasis prediction in esophageal cancer.

Yi Y, Wang J, Li Z, Wang L, Ding X, Zhou Q, Huang Y, Li B

pubmed logopapersJun 9 2025
The accurate diagnosis of lymph node metastasis in esophageal squamous cell carcinoma is crucial in the treatment workflow, and the process is often time-consuming for clinicians. Recent deep learning models predicting whether lymph nodes are affected by cancer in esophageal cancer cases struggle with node delineation and hence achieve poor diagnostic accuracy. This paper proposes an innovative multi-task and multi-scale attention network (M²ANet) to predict lymph node metastasis precisely. The network softly expands the regions of the node mask and subsequently utilizes the expanded mask to aggregate image features, thereby amplifying the node contexts. We additionally propose a two-branch training strategy that compels the model to simultaneously predict metastasis probability and node masks, fostering a more comprehensive learning process. The node metastasis prediction performance was evaluated on a self-collected dataset of 177 patients. Our model achieves a competitive accuracy of 83.7% on the test set comprising 577 nodes. With its adaptability to intricate patterns and ability to handle data variations, M²ANet emerges as a promising tool for robust and comprehensive lymph node metastasis prediction in medical image analysis.
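The soft mask expansion could look something like the sketch below. This is one hypothetical realization under assumed parameters (4-neighbour spreading with a decay weight); the abstract does not specify the exact scheme.

```python
import numpy as np

def soft_expand(mask, iterations=1, decay=0.5):
    # Spread the node mask to 4-neighbours at reduced weight each pass,
    # so pixels just outside the node keep a nonzero, smaller weight.
    # Hypothetical scheme; np.roll wraps at borders, which is harmless
    # for masks away from the image edge.
    out = mask.astype(float)
    for _ in range(iterations):
        neighbours = np.maximum.reduce([
            np.roll(out, 1, 0), np.roll(out, -1, 0),
            np.roll(out, 1, 1), np.roll(out, -1, 1)])
        out = np.maximum(out, decay * neighbours)
    return out

def aggregate(features, mask):
    # weight image features by the expanded mask to amplify node context
    w = soft_expand(mask, iterations=2)
    return (features * w).sum() / w.sum()

mask = np.zeros((5, 5)); mask[2, 2] = 1
expanded = soft_expand(mask)  # the node's 4-neighbours get weight 0.5
```

The weighted aggregation lets context pixels around the node contribute to the metastasis prediction without dominating it.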

Bi-regional and bi-phasic automated machine learning radiomics for defining metastasis to lesser curvature lymph node stations in gastric cancer.

Huang H, Wang S, Deng J, Ye Z, Li H, He B, Fang M, Zhang N, Liu J, Dong D, Liang H, Li G, Tian J, Hu Y

pubmed logopapersJun 8 2025
Lymph node metastasis (LNM) is the primary metastatic mode in gastric cancer (GC), with frequent occurrences in the lesser curvature. This study aims to establish a radiomic model to predict the metastatic status of lymph nodes in the lesser curvature for GC. We retrospectively collected data from 939 gastric cancer patients who underwent gastrectomy and D2 lymphadenectomy across two centers. Both the primary lesion and the lesser curvature region were segmented as representative regions of interest (ROIs). The combination of bi-regional and bi-phasic CT imaging features was used to build a hybrid radiomic model to predict LNM in the lesser curvature, and the model was validated internally and externally. Further, the potential generalization ability of the hybrid model was investigated by predicting metastasis status in the supra-pancreatic area. The hybrid model yielded substantially higher performance, with AUCs of 0.847 (95% CI, 0.770-0.924) and 0.833 (95% CI, 0.800-0.867) in the two independent test cohorts, than the single-region and single-phase models. Additionally, the hybrid model achieved AUCs ranging from 0.678 to 0.761 in predicting LNM in the supra-pancreatic area, demonstrating potential generalizability. The CT imaging features of the primary tumor and adjacent tissues are significantly associated with LNM, and our model showed strong diagnostic performance that may support individualized treatment of GC.
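The AUCs reported here (and in several entries above) have a simple rank-based interpretation; a minimal sketch:

```python
def auc(pos_scores, neg_scores):
    # AUC as the probability that a randomly chosen positive (metastatic)
    # case scores higher than a randomly chosen negative one; ties count 0.5
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

So an AUC of 0.847 means the model ranks a metastatic station above a non-metastatic one about 85% of the time.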

Physics-informed neural networks for denoising high b-value diffusion-weighted images.

Lin Q, Yang F, Yan Y, Zhang H, Xie Q, Zheng J, Yang W, Qian L, Liu S, Yao W, Qu X

pubmed logopapersJun 7 2025
Diffusion-weighted imaging (DWI) is widely applied in tumor diagnosis by measuring the diffusion of water molecules. To increase the sensitivity to tumor identification, faithful high b-value DWI images are expected by setting a stronger strength of gradient field in magnetic resonance imaging (MRI). However, high b-value DWI images are heavily affected by reduced signal-to-noise ratio due to the exponential decay of signal intensity. Thus, removing noise becomes important for high b-value DWI images. Here, we propose a Physics-Informed neural Network for high b-value DWI images Denoising (PIND) by leveraging information from physics-informed loss and prior information from low b-value DWI images with high signal-to-noise ratio. Experiments are conducted on a prostate DWI dataset of 125 subjects. Compared with the original noisy images, PIND improves the peak signal-to-noise ratio from 31.25 dB to 36.28 dB, and the structural similarity index measure from 0.77 to 0.92. Our scheme can save 83% of the data acquisition time, since fewer averages of high b-value DWI images need to be acquired, while maintaining 98% accuracy of the apparent diffusion coefficient value, suggesting its potential effectiveness in preserving essential diffusion characteristics. A reader study by four radiologists (3, 6, 13, and 18 years of experience) indicates PIND's promising performance on overall quality, signal-to-noise ratio, artifact suppression, and lesion conspicuity, showing potential for improving clinical DWI applications.
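The PSNR gain and ADC preservation reported above rest on standard definitions; a minimal sketch (the b-values and data range below are illustrative, not from the paper):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    # peak signal-to-noise ratio in dB between a reference and a test image
    mse = np.mean((np.asarray(reference) - np.asarray(test)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def adc(s_low, s_high, b_low=0.0, b_high=1000.0):
    # apparent diffusion coefficient from the mono-exponential DWI model
    # S(b) = S0 * exp(-b * ADC), estimated from two b-values
    return np.log(s_low / s_high) / (b_high - b_low)
```

The exponential decay in the ADC model is exactly why high b-value images have low SNR: the signal shrinks while the noise floor does not.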

Estimation of tumor coverage after RF ablation of hepatocellular carcinoma using single 2D image slices.

Varble N, Li M, Saccenti L, Borde T, Arrichiello A, Christou A, Lee K, Hazen L, Xu S, Lencioni R, Wood BJ

pubmed logopapersJun 7 2025
To assess the technical success of radiofrequency ablation (RFA) in patients with hepatocellular carcinoma (HCC), an artificial intelligence (AI) model was developed to estimate the tumor coverage without the need for segmentation or registration tools. A secondary retrospective analysis of 550 patients in the multicenter and multinational OPTIMA trial (3-7 cm solitary HCC lesions, randomized to RFA or RFA + LTLD) identified 182 patients with well-defined pre-RFA tumor and 1-month post-RFA devascularized ablation zones on enhanced CT. The ground truth (percent tumor coverage) was determined from semi-automatic 3D tumor and ablation-zone segmentation and elastic registration. The isocenter of the tumor and ablation was isolated on 2D axial CT images. Feature extraction was performed, and classification and linear regression models were built. Images were augmented, and 728 image pairs were used for training and testing. The estimated percent tumor coverage using the models was compared to ground truth. Validation was performed on eight patient cases from a separate institution, where RFA was performed, and pre- and post-ablation images were collected. In testing cohorts, the best model accuracy was with classification and moderate data augmentation (AUC = 0.86, TPR = 0.59, TNR = 0.89, accuracy = 69%) and regression with random forest (RMSE = 12.6%, MAE = 9.8%). Validation in a separate institution did not achieve accuracy greater than random estimation. Visual review of training cases suggests that poor tumor coverage may be a result of atypical ablation zone shrinkage 1 month post-RFA, which may not be reflected in clinical utilization. An AI model that uses 2D images at the center of the tumor and 1 month post-ablation can accurately estimate ablation tumor coverage. In separate validation cohorts, translation could be challenging.
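The ground-truth label, percent tumor coverage, follows directly from the registered 3D masks; a minimal sketch:

```python
import numpy as np

def percent_tumor_coverage(tumor_mask, ablation_mask):
    # fraction of tumor voxels falling inside the devascularized
    # ablation zone, expressed as a percentage
    tumor = tumor_mask.astype(bool)
    covered = np.logical_and(tumor, ablation_mask.astype(bool)).sum()
    return 100.0 * covered / tumor.sum()

tumor = np.zeros((4, 4)); tumor[:2, :2] = 1       # 4 tumor voxels
ablation = np.zeros((4, 4)); ablation[:2, 0] = 1  # covers 2 of them
coverage = percent_tumor_coverage(tumor, ablation)  # 50.0
```

The AI model's task is to estimate this quantity from single 2D slices, skipping the segmentation and registration steps entirely.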

Simulating workload reduction with an AI-based prostate cancer detection pathway using a prediction uncertainty metric.

Fransen SJ, Bosma JS, van Lohuizen Q, Roest C, Simonis FFJ, Kwee TC, Yakar D, Huisman H

pubmed logopapersJun 7 2025
This study compared two uncertainty quantification (UQ) metrics to rule out prostate MRI scans with a high-confidence artificial intelligence (AI) prediction and investigated the resulting potential radiologist's workload reduction in a clinically significant prostate cancer (csPCa) detection pathway. This retrospective study utilized 1612 MRI scans from three institutes for csPCa (Gleason Grade Group ≥ 2) assessment. We compared the standard diagnostic pathway (radiologist reading) to an AI-based rule-out pathway in terms of efficacy and accuracy in diagnosing csPCa. In the rule-out pathway, 15 AI submodels (trained on 7756 cases) diagnosed each MRI scan, and any prediction deemed uncertain was referred to a radiologist for reading. We compared the mean (meanUQ) and variability (varUQ) of predictions using the DeLong test on the area under the receiver operating characteristic curves (AUROC). The level of workload reduction of the best UQ method was determined based on a maintained sensitivity at non-inferior specificity using the margins 0.05 and 0.10. The workload reduction of the proposed pathway was institute-specific: up to 20% at a 0.10 non-inferiority margin (p < 0.05) and non-significant workload reduction at a 0.05 margin. VarUQ-based rule out gave higher but non-significant AUROC scores than meanUQ in certain selected cases (+0.05 AUROC, p > 0.05). MeanUQ and varUQ showed promise in AI-based rule-out csPCa detection. Using varUQ in an AI-based csPCa detection pathway could reduce the number of scans radiologists need to read. The varying performance of the UQ rule-out indicates the need for institute-specific UQ thresholds.
Question: AI can autonomously assess prostate MRI scans with high certainty at a non-inferior performance compared to radiologists, potentially reducing the workload of radiologists.
Findings: The optimal ratio of AI-model and radiologist readings is institute-dependent and requires calibration.
Clinical relevance: Semi-autonomous AI-based prostate cancer detection with variational UQ scores shows promise in reducing the number of scans radiologists need to read.
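The variance-based triage over the submodel ensemble can be sketched as below. The 0.01 threshold is purely illustrative; as the abstract stresses, the threshold must be calibrated per institute.

```python
import numpy as np

def triage(submodel_probs, var_threshold=0.01):
    # meanUQ = mean csPCa probability across the submodels;
    # varUQ = variance across them. Low variance (agreement) means the
    # case is ruled out by AI; otherwise it is referred to a radiologist.
    probs = np.asarray(submodel_probs, dtype=float)
    mean_uq, var_uq = probs.mean(), probs.var()
    route = "ai" if var_uq < var_threshold else "radiologist"
    return route, mean_uq

route, score = triage([0.91, 0.90, 0.92, 0.90])  # submodels agree -> "ai"
```

Workload reduction then equals the fraction of scans routed to "ai", subject to the non-inferiority check on sensitivity and specificity.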