
Uncertainty-Supervised Interpretable and Robust Evidential Segmentation

Yuzhu Li, An Sui, Fuping Wu, Xiahai Zhuang

arXiv preprint · Sep 21, 2025
Uncertainty estimation has been widely studied in medical image segmentation as a tool to provide reliability, particularly in deep learning approaches. However, previous methods generally lack effective supervision in uncertainty estimation, leading to low interpretability and robustness of the predictions. In this work, we propose a self-supervised approach to guide the learning of uncertainty. Specifically, we introduce three principles about the relationships between the uncertainty and the image gradients around boundaries and noise. Based on these principles, two uncertainty supervision losses are designed. These losses enhance the alignment between model predictions and human interpretation. Accordingly, we introduce novel quantitative metrics for evaluating the interpretability and robustness of uncertainty. Experimental results demonstrate that compared to state-of-the-art approaches, the proposed method can achieve competitive segmentation performance and superior results in out-of-distribution (OOD) scenarios while significantly improving the interpretability and robustness of uncertainty estimation. Code is available via https://github.com/suiannaius/SURE.
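The abstract does not give the exact form of the two supervision losses, but the stated principle (uncertainty should track image gradients around boundaries) can be illustrated with a minimal, hypothetical PyTorch sketch; the function names, the Sobel operator, and the L1 penalty are assumptions, not the paper's formulation.

import torch
import torch.nn.functional as F

def image_gradient_magnitude(img):
    # Sobel-style gradient magnitude for a (B, 1, H, W) image batch.
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx.to(img.device), padding=1)
    gy = F.conv2d(img, ky.to(img.device), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def uncertainty_supervision_loss(uncertainty, image, boundary_mask):
    # Encourage predicted uncertainty to follow the normalized gradient magnitude
    # inside a band around the predicted boundary (illustrative only).
    grad = image_gradient_magnitude(image)
    grad = grad / (grad.amax(dim=(2, 3), keepdim=True) + 1e-8)
    return F.l1_loss(uncertainty * boundary_mask, grad * boundary_mask)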

Uncovering genetic architecture of the heart via genetic association studies of unsupervised deep learning derived endophenotypes.

You L, Zhao X, Xie Z, Patel KA, Chen C, Kitkungvan D, Mohammed KK, Narula N, Arbustini E, Cassidy CK, Narula J, Zhi D

PubMed · Sep 20, 2025
Recent genome-wide association studies (GWAS) have effectively linked genetic variants to quantitative traits derived from time-series cardiac magnetic resonance imaging, revealing insights into cardiac morphology and function. Deep learning approaches, however, generally require extensive supervised training on manually annotated data. In this study, we developed a novel framework using a 3D U-architecture autoencoder (cineMAE) to learn deep image phenotypes from cardiac magnetic resonance (CMR) imaging for genetic discovery, focusing on long-axis two-chamber and four-chamber views. We trained a masked autoencoder to develop Unsupervised Derived Image Phenotypes for the heart (Heart-UDIPs). These representations proved informative for various heart-specific phenotypes (e.g., left ventricular hypertrophy) and diseases (e.g., hypertrophic cardiomyopathy). GWAS on Heart-UDIPs identified 323 lead SNPs and 628 SNP-prioritized genes, exceeding previous methods. The genes identified by this method exhibited significant associations with cardiac function and showed substantial enrichment in pathways related to cardiac disorders. These results underscore the utility of the Heart-UDIP approach in enhancing the discovery potential of genetic association studies, without the need for clinically defined phenotypes or manual annotations.
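As a rough illustration of the pretraining idea (masking image patches and reconstructing only the masked ones), here is a minimal sketch; the tiny 2D encoder-decoder, patch size, and mask ratio are placeholders, since cineMAE itself is a 3D U-architecture whose details are in the paper.

import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    # Stand-in encoder-decoder; the actual model is a 3D U-architecture.
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, x):
        return self.dec(self.enc(x))

def masked_reconstruction_loss(model, frames, patch=16, mask_ratio=0.75):
    # Zero out random patches and reconstruct; loss is computed on masked patches only.
    b, c, h, w = frames.shape
    ph, pw = h // patch, w // patch
    keep = (torch.rand(b, 1, ph, pw, device=frames.device) > mask_ratio).float()
    mask = keep.repeat_interleave(patch, 2).repeat_interleave(patch, 3)  # 1 = visible
    recon = model(frames * mask)
    return ((recon - frames) ** 2 * (1 - mask)).sum() / ((1 - mask).sum() + 1e-8)

# Usage: embeddings from the trained encoder serve as the image phenotypes for GWAS.
model = TinyAutoencoder()
loss = masked_reconstruction_loss(model, torch.randn(2, 1, 128, 128))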

Machine learning and deep learning approaches in MRI for quantifying and staging fatty liver disease: A systematic review.

Elhaie M, Koozari A, Koohi H, Alqurain QT

PubMed · Sep 20, 2025
Fatty liver disease, encompassing non-alcoholic fatty liver disease (NAFLD) and alcohol-related liver disease (ALD), affects ∼25% of adults globally. Magnetic resonance imaging (MRI), particularly proton density fat fraction (PDFF), is the non-invasive gold standard for hepatic steatosis quantification, but its clinical use is limited by cost, protocol variability, and analysis time. Machine learning (ML) and deep learning (DL) techniques, including convolutional neural networks (CNNs) and generative adversarial networks (GANs), show promise in enhancing MRI-based quantification and staging. To systematically review the diagnostic accuracy, reproducibility, and clinical utility of ML and DL techniques applied to MRI for quantifying and staging hepatic steatosis in fatty liver disease. This systematic review was registered in PROSPERO (CRD420251121056) and adhered to PRISMA guidelines, searching PubMed, Cochrane Library, Scopus, IEEE Xplore, Web of Science, Google Scholar, and grey literature for studies on ML/DL applications in MRI for fatty liver disease. Eligible studies involved human participants with suspected or confirmed NAFLD, non-alcoholic steatohepatitis (NASH), or ALD, using ML/DL (e.g., CNNs, GANs) on MRI data (e.g., PDFF, Dixon MRI). Outcomes included diagnostic accuracy (sensitivity, specificity, area under the curve (AUC)), reproducibility (intraclass correlation coefficient (ICC), Dice), and clinical utility (e.g., treatment planning). Two reviewers screened studies, extracted data, and assessed risk of bias using QUADAS-2. Narrative synthesis and meta-analysis (where feasible) were conducted. From 2550 records, 15 studies (n = 25-1038) were included, using CNNs, GANs, radiomics, and dictionary learning on PDFF, chemical shift-encoded MRI, or Dixon MRI. Diagnostic accuracy was high (AUC 0.85-0.97; r = 0.94-0.99 vs. biopsy/MRS), and reproducibility metrics were robust (ICC 0.94-0.99, Dice 0.87-0.94). Efficiency improved significantly (e.g., processing <0.16 s/slice, scan time <1 min). Clinical utility included virtual biopsies, surgical planning, and treatment monitoring. Limitations included small sample sizes, single-center designs, and vendor variability. ML and DL enhance MRI-based hepatic steatosis assessment, offering high accuracy, reproducibility, and efficiency. CNNs excel in segmentation and PDFF quantification, while GANs and radiomics aid free-breathing MRI and NASH staging. Multi-center studies and standardization are needed for clinical integration.
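For context, the PDFF that most of the reviewed models estimate has a simple voxel-wise definition from Dixon fat/water images; the sketch below states that definition (the ~5% steatosis cut-off noted in the comment is a commonly cited threshold, not a result of this review).

import numpy as np

def pdff(fat, water, eps=1e-8):
    # PDFF = fat / (fat + water), expressed as a percentage per voxel.
    return 100.0 * np.asarray(fat) / (np.asarray(fat) + np.asarray(water) + eps)

# A PDFF of roughly 5% or more is a commonly cited cut-off for hepatic steatosis.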

Longitudinal Progression of Traumatic Bone Marrow Lesions Following Anterior Cruciate Ligament Injury: Associations With Knee Pain and Concomitant Injuries.

Stirling CE, Pavlovic N, Manske SL, Walker REA, Boyd SK

PubMed · Sep 20, 2025
Traumatic bone marrow lesions (BMLs) occur in ~80% of anterior cruciate ligament (ACL) injuries, typically in the lateral femoral condyle (LFC) and lateral tibial plateau (LTP). Associated with microfractures, vascular proliferation, inflammation, and bone density changes, BMLs may contribute to posttraumatic osteoarthritis. However, their relationship with knee pain is unclear. This study examined the prevalence, characteristics, and progression of BMLs after ACL injury, focusing on associations with pain, meniscal and ligament injuries, and fractures. Participants (N = 100, aged 14-55) with MRI-confirmed ACL tears were scanned within 6 weeks post-injury (mean = 30.0, SD = 9.6 days). BML volumes were quantified using a validated machine learning method, and pain was assessed via the Knee Injury and Osteoarthritis Outcome Score (KOOS). Analyses included t-tests, Mann-Whitney U, chi-square, and Spearman correlations with false discovery rate correction. BMLs were present in 95% of participants, primarily in the LFC and LTP. Males had 33% greater volumes than females (p < 0.05), even after adjusting for BMI. Volumes were higher in cases with depression fractures (p = 0.022) and negatively associated with baseline KOOS Symptoms. At 1 year, 92.68% of lesions (based on lesion counts) resolved in Nonsurgical participants, with a 96.13% volume reduction (p < 0.001). KOOS outcomes were similar between groups, except for slightly better Pain scores in the Nonsurgical group. Baseline Pain and Sport scores predicted follow-up outcomes. BMLs are common post-ACL injury, vary by sex and fracture status, and relate modestly to early symptoms. Most resolve within a year, with limited long-term differences by surgical status.
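A sketch of the kind of statistical step described above, Spearman correlations between BML volume and KOOS subscales with Benjamini-Hochberg false discovery rate correction; variable names and data are hypothetical placeholders.

import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

def correlate_with_fdr(bml_volume, koos_subscales, alpha=0.05):
    # Return rho, raw p, FDR-adjusted p, and rejection flag for each KOOS subscale.
    names, rhos, pvals = [], [], []
    for name, scores in koos_subscales.items():
        rho, p = spearmanr(bml_volume, scores)
        names.append(name); rhos.append(rho); pvals.append(p)
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return {n: (r, p, pa, rej) for n, r, p, pa, rej in zip(names, rhos, pvals, p_adj, reject)}

# Example with random placeholder data:
rng = np.random.default_rng(0)
vols = rng.lognormal(size=100)
koos = {k: rng.uniform(0, 100, 100) for k in ["Pain", "Symptoms", "ADL", "Sport", "QOL"]}
print(correlate_with_fdr(vols, koos))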

AI-driven innovations for dental implant treatment planning: A systematic review.

Zaww K, Abbas H, Vanegas Sáenz JR, Hong G

PubMed · Sep 19, 2025
This systematic review evaluates the effectiveness of artificial intelligence (AI) models in dental implant treatment planning, focusing on: 1) identification, detection, and segmentation of anatomical structures; 2) technical assistance during treatment planning; and 3) additional relevant applications. A literature search of PubMed/MEDLINE, Scopus, and Web of Science was conducted for studies published in English until July 31, 2024. The included studies explored AI applications in implant treatment planning, excluding expert opinions, guidelines, and protocols. Three reviewers independently assessed study quality using the Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Quasi-Experimental Studies, resolving disagreements by consensus. Of the 28 included studies, four were of high, four of medium, and 20 of low quality according to the JBI scale. Eighteen studies on anatomical segmentation demonstrated AI models with accuracy rates ranging from 66.4% to 99.1%. Eight studies examined AI's role in technical assistance for surgical planning, demonstrating its potential in predicting jawbone mineral density, optimizing drilling protocols, and classifying plans for maxillary sinus augmentation. One study indicated a learning curve for AI in implant planning, recommending at least 50 images to exceed 70% predictive accuracy. Another study reported 83% accuracy in localizing stent markers for implant sites, suggesting additional imaging planes to address a 17% miss rate and a 2.8% false-positive rate. AI models exhibit potential for automating dental implant planning, with high accuracy in anatomical segmentation and insightful technical assistance. However, further well-designed studies with standardized evaluation parameters are required for pragmatic integration into clinical settings.

Uncertainty-Gated Deformable Network for Breast Tumor Segmentation in MR Images

Yue Zhang, Jiahua Dong, Chengtao Peng, Qiuli Wang, Dan Song, Guiduo Duan

arXiv preprint · Sep 19, 2025
Accurate segmentation of breast tumors in magnetic resonance images (MRI) is essential for breast cancer diagnosis, yet existing methods face challenges in capturing irregular tumor shapes and effectively integrating local and global features. To address these limitations, we propose an uncertainty-gated deformable network that leverages the complementary information of CNNs and Transformers. Specifically, we incorporate deformable feature modeling into both the convolution and attention modules, enabling adaptive receptive fields for irregular tumor contours. We also design an Uncertainty-Gated Enhancing Module (U-GEM) to selectively exchange complementary features between the CNN and Transformer based on pixel-wise uncertainty, enhancing both local and global representations. Additionally, a Boundary-sensitive Deep Supervision Loss is introduced to further improve tumor boundary delineation. Comprehensive experiments on two clinical breast MRI datasets demonstrate that our method achieves superior segmentation performance compared with state-of-the-art methods, highlighting its clinical potential for accurate breast tumor delineation.
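The abstract does not specify how U-GEM computes pixel-wise uncertainty or performs the exchange; the following sketch uses softmax entropy as the uncertainty signal and a simple convex blend between branches, purely as an assumed illustration.

import torch

def pixelwise_entropy(logits):
    # Normalized entropy of per-pixel class probabilities, shape (B, 1, H, W).
    p = logits.softmax(dim=1)
    ent = -(p * (p + 1e-8).log()).sum(dim=1, keepdim=True)
    return ent / torch.log(torch.tensor(float(logits.shape[1])))

def uncertainty_gated_exchange(cnn_feat, trans_feat, cnn_logits, trans_logits):
    # Where one branch is uncertain, borrow more heavily from the other branch.
    u_cnn = pixelwise_entropy(cnn_logits)
    u_trans = pixelwise_entropy(trans_logits)
    cnn_out = (1 - u_cnn) * cnn_feat + u_cnn * trans_feat
    trans_out = (1 - u_trans) * trans_feat + u_trans * cnn_feat
    return cnn_out, trans_out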

TractoTransformer: Diffusion MRI Streamline Tractography using CNN and Transformer Networks

Itzik Waizman, Yakov Gusakov, Itay Benou, Tammy Riklin Raviv

arXiv preprint · Sep 19, 2025
White matter tractography is an advanced neuroimaging technique that reconstructs the 3D white matter pathways of the brain from diffusion MRI data. It can be framed as a pathfinding problem aiming to infer neural fiber trajectories from noisy and ambiguous measurements, facing challenges such as crossing, merging, and fanning white-matter configurations. In this paper, we propose a novel tractography method that leverages Transformers to model the sequential nature of white matter streamlines, enabling the prediction of fiber directions by integrating both the trajectory context and current diffusion MRI measurements. To incorporate spatial information, we utilize CNNs that extract microstructural features from local neighborhoods around each voxel. By combining these complementary sources of information, our approach improves the precision and completeness of neural pathway mapping compared to traditional tractography models. We evaluate our method with the Tractometer toolkit, achieving competitive performance against state-of-the-art approaches, and present qualitative results on the TractoInferno dataset, demonstrating strong generalization to real-world data.
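A schematic of the streamline-propagation loop implied above, where a model predicts the next fiber direction from the trajectory so far plus local features around the current position; the model interface, the neighborhood sampler, and the termination rule are hypothetical.

import torch

def sample_neighborhood(volume, point, radius=2):
    # Crop a (2r+1)^3 patch of features around the (rounded) current point.
    x, y, z = [int(round(float(c))) for c in point]
    return volume[...,
                  max(x - radius, 0): x + radius + 1,
                  max(y - radius, 0): y + radius + 1,
                  max(z - radius, 0): z + radius + 1]

def track_streamline(model, feature_volume, seed, step_size=0.5, max_steps=200):
    # Iteratively extend a streamline from `seed` (a length-3 tensor of voxel coords).
    points = [seed]
    for _ in range(max_steps):
        context = torch.stack(points)                              # trajectory so far, (T, 3)
        local = sample_neighborhood(feature_volume, points[-1])    # local spatial features
        direction = model(context, local)                          # predicted unit vector, (3,)
        if torch.linalg.norm(direction) < 1e-6:                    # model signals termination
            break
        points.append(points[-1] + step_size * direction)
    return torch.stack(points)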

Prostate Capsule Segmentation from Micro-Ultrasound Images using Adaptive Focal Loss

Kaniz Fatema, Vaibhav Thakur, Emad A. Mohammed

arXiv preprint · Sep 19, 2025
Micro-ultrasound (micro-US) is a promising imaging technique for cancer detection and computer-assisted visualization. This study investigates prostate capsule segmentation from micro-US images using deep learning, addressing the challenges posed by the ambiguous boundaries of the prostate capsule. Existing methods often struggle in such cases, motivating the development of a tailored approach. This study introduces an adaptive focal loss function that dynamically emphasizes both hard and easy regions, taking into account their respective difficulty levels and annotation variability. The proposed methodology follows two primary strategies: integrating a standard focal loss function as a baseline, and building on it an adaptive focal loss function suited to prostate capsule segmentation. The focal loss baseline provides a robust foundation, incorporating class balancing and focusing on examples that are difficult to classify. The adaptive focal loss offers additional flexibility, addressing the fuzzy region of the prostate capsule and annotation variability by dilating the hard regions identified through discrepancies between expert and non-expert annotations. The proposed method dynamically adjusts the segmentation model's weights to better identify the fuzzy regions of the prostate capsule. The proposed adaptive focal loss function demonstrates superior performance, achieving a mean Dice coefficient (DSC) of 0.940 and a mean Hausdorff distance (HD) of 1.949 mm on the test dataset. These results highlight the effectiveness of integrating advanced loss functions and adaptive techniques into deep learning models. This enhances the accuracy of prostate capsule segmentation in micro-US images, offering the potential to improve clinical decision-making in prostate cancer diagnosis and treatment planning.
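The standard focal loss baseline is well defined; the adaptive part below (up-weighting a dilated expert/non-expert disagreement mask) is a plausible sketch of the described idea, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # Standard binary focal loss on per-pixel logits.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                                  # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return alpha_t * (1 - p_t) ** gamma * bce

def adaptive_focal_loss(logits, expert_mask, nonexpert_mask, hard_weight=2.0):
    # Dilate the annotation-disagreement region and weight its focal loss more heavily.
    disagreement = (expert_mask != nonexpert_mask).float().unsqueeze(1)
    hard_region = F.max_pool2d(disagreement, kernel_size=5, stride=1, padding=2)  # dilation
    per_pixel = focal_loss(logits, expert_mask.float().unsqueeze(1))
    weights = 1.0 + (hard_weight - 1.0) * hard_region
    return (weights * per_pixel).mean()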

Multi-modal CT Perfusion-based Deep Learning for Predicting Stroke Lesion Outcomes in Complete and No Recanalization Scenarios.

Yang H, George Y, Mehta D, Lin L, Chen C, Yang D, Sun J, Lau KF, Bain C, Yang Q, Parsons MW, Ge Z

PubMed · Sep 19, 2025
Predicting the final location and volume of lesions in acute ischemic stroke (AIS) is crucial for clinical management. While CT perfusion (CTP) imaging is routinely used for estimating lesion outcomes, conventional threshold-based methods have limitations. We developed specialized outcome-prediction deep learning models that predict the infarct core in successful reperfusion cases and the combined core-penumbra region in unsuccessful reperfusion cases. We developed single-modal and multi-modal deep learning models using CTP parameter maps to predict the final infarct lesion on follow-up diffusion-weighted imaging (DWI). Using a multi-center dataset, deep learning models were developed and evaluated separately for patients with complete recanalization (CR, successful reperfusion, n=350) and no recanalization (NR, unsuccessful reperfusion, n=138) after treatment. The CR model was designed to predict the infarct core region, while the NR model predicted the expanded hypoperfused tissue encompassing both core and penumbra regions. Five-fold cross-validation was performed for robust evaluation. The multi-modal 3D nnU-Net model demonstrated superior performance, achieving mean Dice scores of 35.36% in CR patients and 50.22% in NR patients. This significantly outperformed the conventional single-modality threshold-based method in current clinical use, which yielded Dice scores of 15.73% and 39.71% for the CR and NR groups, respectively. Our approach provides outcome estimates for both successful and unsuccessful reperfusion scenarios, enabling clinicians to better evaluate treatment eligibility for reperfusion therapies and assess potential treatment benefits. This advancement facilitates more personalized treatment recommendations and has the potential to significantly enhance clinical decision-making in AIS management by providing more accurate tissue outcome predictions than conventional single-modality threshold-based approaches. AIS=acute ischemic stroke; CR=complete recanalization; NR=no recanalization; DT=delay time; IQR=interquartile range; GT=ground truth; HD95=95% Hausdorff distance; ASSD=average symmetric surface distance; MLV=mismatch lesion volume.
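The Dice score used for these comparisons has a simple, model-independent definition, shown here for completeness.

import numpy as np

def dice_score(pred, truth, eps=1e-8):
    # Dice = 2|A ∩ B| / (|A| + |B|) for binary lesion masks.
    pred, truth = np.asarray(pred).astype(bool), np.asarray(truth).astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))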

Deep Feedback Models

David Calhas, Arlindo L. Oliveira

arXiv preprint · Sep 19, 2025
Deep Feedback Models (DFMs) are a new class of stateful neural networks that combine bottom-up input with high-level representations over time. This feedback mechanism introduces dynamics into otherwise static architectures, enabling DFMs to iteratively refine their internal state and mimic aspects of biological decision making. We model this process as a differential equation solved with a recurrent neural network, stabilized via exponential decay to ensure convergence. To evaluate their effectiveness, we measure DFMs under two key conditions: robustness to noise and generalization with limited data. In both object recognition and segmentation tasks, DFMs consistently outperform their feedforward counterparts, particularly in low-data or high-noise regimes. In addition, DFMs transfer to medical imaging settings while remaining robust against various types of noise corruption. These findings highlight the importance of feedback in achieving stable, robust, and generalizable learning. Code is available at https://github.com/DCalhas/deep_feedback_models.
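A minimal sketch of the described dynamic, an internal state refined by Euler steps of dh/dt = -decay*h + f(x, h), with the exponential-decay term stabilizing convergence; layer sizes, step count, and time step are illustrative, not the paper's.

import torch
import torch.nn as nn

class DeepFeedbackBlock(nn.Module):
    def __init__(self, in_dim=64, state_dim=64, steps=10, decay=1.0, dt=0.1):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(in_dim + state_dim, state_dim), nn.Tanh())
        self.steps, self.decay, self.dt = steps, decay, dt

    def forward(self, x):
        h = torch.zeros(x.shape[0], self.f[0].out_features, device=x.device)
        for _ in range(self.steps):
            # Euler step of dh/dt = -decay * h + f(x, h); the decay term pulls the
            # state toward a fixed point so the iteration converges.
            h = h + self.dt * (-self.decay * h + self.f(torch.cat([x, h], dim=-1)))
        return h

out = DeepFeedbackBlock()(torch.randn(4, 64))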