
A Gabor-enhanced deep learning approach with dual-attention for 3D MRI brain tumor segmentation.

Chamseddine E, Tlig L, Chaari L, Sayadi M

pubmed logopapersSep 11 2025
Robust 3D segmentation of brain tumors in MRI is essential for diagnosis and treatment planning, but tumor heterogeneity, irregular shapes, and complex textures make the task challenging. Deep learning has transformed medical image analysis by extracting features directly from the data, greatly improving segmentation accuracy, and deep models can be further strengthened with modules such as texture-sensitive custom convolution layers and attention mechanisms, which let the model focus on relevant regions and sharpen boundary delineation. This paper proposes a texture-aware deep learning method that augments the U-Net architecture with a trainable Gabor convolution layer at the input to capture rich textural features. These features are fused in parallel with standard convolutional outputs for a richer tumor representation. The model also employs dual attention modules: Squeeze-and-Excitation blocks in the encoder to recalibrate channel-wise features, and Attention Gates on the skip connections to suppress irrelevant regions and emphasize tumor areas. The contribution of each module is examined with explainable-AI methods to ensure interpretability, and a weighted combined loss function addresses class imbalance. The model achieves Dice coefficients of 91.62%, 89.92%, and 88.86% for the whole tumor, tumor core, and enhancing tumor, respectively, on the BraTS2021 dataset. Large-scale quantitative and qualitative evaluations on BraTS benchmarks confirm the accuracy and robustness of the proposed model, which outperforms the baseline U-Net and other state-of-the-art segmentation methods, offering a robust and interpretable solution for clinical use.
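The core of the texture-aware layer described above is the Gabor filter: a Gaussian envelope modulating a sinusoidal carrier. As a minimal sketch (in 2D; the paper's layer is 3D and makes these parameters trainable, and all names and defaults here are illustrative):

```python
import numpy as np

def gabor_kernel_2d(size=7, sigma=2.0, theta=0.0, lam=4.0, psi=0.0, gamma=0.5):
    """Real part of a 2D Gabor filter: a Gaussian envelope times a cosine
    carrier. In a trainable Gabor layer these parameters would be learned;
    here they are fixed for illustration."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates by orientation theta
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lam + psi)
    return envelope * carrier

k = gabor_kernel_2d()
print(k.shape)  # (7, 7)
```

Convolving the input with a bank of such kernels at several orientations and wavelengths yields the textural feature maps that are then fused with the standard convolutional branch.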

Deep-Learning System for Automatic Measurement of the Femorotibial Rotational Angle on Lower-Extremity Computed Tomography.

Lee SW, Lee GP, Yoon I, Kim YJ, Kim KG

pubmed logopapersSep 10 2025
To develop and validate a deep-learning-based algorithm that automatically identifies anatomical landmarks and calculates femoral and tibial version (FTT) angles on lower-extremity CT scans. In this IRB-approved retrospective study, lower-extremity CT scans from 270 adult patients (median age, 69 years; female-to-male ratio, 235:35) were analyzed. CT data were preprocessed with contrast-limited adaptive histogram equalization and RGB superposition to enhance tissue-boundary distinction. An Attention U-Net model was trained on a gold standard of manual labeling and landmark annotation, enabling it to segment bones, detect landmarks, construct reference lines, and automatically measure femoral version and tibial torsion angles. Performance was validated against manual segmentations by a musculoskeletal radiologist on a test dataset. The segmentation model achieved a sensitivity of 92.16% ± 0.02, a specificity of 99.96% (± <0.01), an HD95 of 2.14 ± 2.39, and a Dice similarity coefficient (DSC) of 93.12% ± 0.01. Automatic measurements of femoral and tibial torsion angles correlated well with radiologists' measurements, with correlation coefficients of 0.64 for femoral and 0.54 for tibial angles (p < 0.05). Automated segmentation significantly reduced measurement time per leg compared with the manual method (57.5 ± 8.3 s vs. 79.6 ± 15.9 s, p < 0.05). We developed a deep-learning method that automates femorotibial rotation measurement on continuous axial CT scans of patients with osteoarthritis (OA); it has the potential to expedite patient-data analysis in busy clinical settings.
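Once the landmarks are detected, each version/torsion angle reduces to the angle between two reference lines in the axial plane. A minimal sketch of that final geometric step (the landmark names in the comment are illustrative, not the paper's exact axes):

```python
import numpy as np

def line_angle_deg(p1, p2, q1, q2):
    """Angle in degrees between the undirected line p1->p2 (e.g. a femoral
    condylar axis) and the line q1->q2 (e.g. a tibial reference axis),
    with each point given as axial-plane (x, y) coordinates."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(q2, float) - np.asarray(q1, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    ang = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return min(ang, 180.0 - ang)  # lines have no direction

print(round(line_angle_deg((0, 0), (1, 0), (0, 0), (1, 1)), 1))  # 45.0
```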

CLAPS: A CLIP-Unified Auto-Prompt Segmentation for Multi-Modal Retinal Imaging

Zhihao Zhao, Yinzheng Zhao, Junjie Yang, Xiangtong Yao, Quanmin Liang, Shahrooz Faghihroohi, Kai Huang, Nassir Navab, M. Ali Nasseri

arxiv logopreprintSep 10 2025
Recent advancements in foundation models, such as the Segment Anything Model (SAM), have significantly impacted medical image segmentation, especially in retinal imaging, where precise segmentation is vital for diagnosis. Despite this progress, current methods face critical challenges: 1) modality ambiguity in textual disease descriptions, 2) a continued reliance on manual prompting for SAM-based workflows, and 3) a lack of a unified framework, with most methods being modality- and task-specific. To overcome these hurdles, we propose CLIP-unified Auto-Prompt Segmentation (CLAPS), a novel method for unified segmentation across diverse tasks and modalities in retinal imaging. Our approach begins by pre-training a CLIP-based image encoder on a large, multi-modal retinal dataset to handle data scarcity and distribution imbalance. We then leverage GroundingDINO to automatically generate spatial bounding box prompts by detecting local lesions. To unify tasks and resolve ambiguity, we use text prompts enhanced with a unique "modality signature" for each imaging modality. Ultimately, these automated textual and spatial prompts guide SAM to execute precise segmentation, creating a fully automated and unified pipeline. Extensive experiments on 12 diverse datasets across 11 critical segmentation categories show that CLAPS achieves performance on par with specialized expert models while surpassing existing benchmarks across most metrics, demonstrating its broad generalizability as a foundation model.
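The "modality signature" idea amounts to disambiguating a shared text prompt by tagging it per modality. A minimal sketch (the signature tokens and modality names below are invented for illustration; the paper does not publish its exact strings):

```python
def build_prompt(modality, target):
    """Prepend a per-modality 'signature' token to the text prompt so the
    same target phrase resolves unambiguously across imaging modalities.
    Signature strings here are hypothetical placeholders."""
    signatures = {"OCT": "[oct]", "fundus": "[cfp]", "FFA": "[ffa]"}
    return f"{signatures[modality]} {target}"

print(build_prompt("OCT", "retinal fluid"))  # [oct] retinal fluid
```

The tagged prompt, together with the GroundingDINO bounding box, then conditions SAM without any manual prompting.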

Implicit Neural Representations of Intramyocardial Motion and Strain

Andrew Bell, Yan Kit Choi, Steffen E Petersen, Andrew King, Muhummad Sohaib Nazir, Alistair A Young

arxiv logopreprintSep 10 2025
Automatic quantification of intramyocardial motion and strain from tagging MRI remains an important but challenging task. We propose a method using implicit neural representations (INRs), conditioned on learned latent codes, to predict continuous left ventricular (LV) displacement -- without requiring inference-time optimisation. Evaluated on 452 UK Biobank test cases, our method achieved the best tracking accuracy (2.14 mm RMSE) and the lowest combined error in global circumferential (2.86%) and radial (6.42%) strain compared to three deep learning baselines. In addition, our method is $\sim$380$\times$ faster than the most accurate baseline. These results highlight the suitability of INR-based models for accurate and scalable analysis of myocardial strain in large CMR datasets.

Artificial Intelligence in Breast Cancer Care: Transforming Preoperative Planning and Patient Education with 3D Reconstruction

Mustafa Khanbhai, Giulia Di Nardo, Jun Ma, Vivienne Freitas, Caterina Masino, Ali Dolatabadi, Zhaoxun "Lorenz" Liu, Wey Leong, Wagner H. Souza, Amin Madani

arxiv logopreprintSep 10 2025
Effective preoperative planning requires accurate algorithms for segmenting anatomical structures across diverse datasets, but traditional models struggle with generalization. This study presents a novel machine learning methodology to improve algorithm generalization for 3D anatomical reconstruction beyond breast cancer applications. We processed 120 retrospective breast MRIs (January 2018-June 2023) through three phases: anonymization and manual segmentation of T1-weighted and dynamic contrast-enhanced sequences; co-registration and segmentation of whole breast, fibroglandular tissue, and tumors; and 3D visualization using ITK-SNAP. A human-in-the-loop approach refined segmentations using U-Mamba, designed to generalize across imaging scenarios. Dice similarity coefficient assessed overlap between automated segmentation and ground truth. Clinical relevance was evaluated through clinician and patient interviews. U-Mamba showed strong performance with DSC values of 0.97 ($\pm$0.013) for whole organs, 0.96 ($\pm$0.024) for fibroglandular tissue, and 0.82 ($\pm$0.12) for tumors on T1-weighted images. The model generated accurate 3D reconstructions enabling visualization of complex anatomical features. Clinician interviews indicated improved planning, intraoperative navigation, and decision support. Integration of 3D visualization enhanced patient education, communication, and understanding. This human-in-the-loop machine learning approach successfully generalizes algorithms for 3D reconstruction and anatomical segmentation across patient datasets, offering enhanced visualization for clinicians, improved preoperative planning, and more effective patient education, facilitating shared decision-making and empowering informed patient choices across medical applications.
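The DSC values reported above come from the standard overlap metric between a predicted mask and the ground truth. A minimal sketch:

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|), with eps guarding the empty-mask case."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(a, b), 3))  # 0.667
```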

Calibration and Uncertainty for multiRater Volume Assessment in multiorgan Segmentation (CURVAS) challenge results.

Riera-Marín M, O K S, Rodríguez-Comas J, May MS, Pan Z, Zhou X, Liang X, Erick FX, Prenner A, Hémon C, Boussot V, Dillenseger JL, Nunes JC, Qayyum A, Mazher M, Niederer SA, Kushibar K, Martín-Isla C, Radeva P, Lekadir K, Barfoot T, Garcia Peraza Herrera LC, Glocker B, Vercauteren T, Gago L, Englemann J, Kleiss JM, Aubanell A, Antolin A, García-López J, González Ballester MA, Galdrán A

pubmed logopapersSep 10 2025
Deep learning (DL) has become the dominant approach for medical image segmentation, yet ensuring the reliability and clinical applicability of these models requires addressing key challenges such as annotation variability, calibration, and uncertainty estimation. This is why we created the Calibration and Uncertainty for multiRater Volume Assessment in multiorgan Segmentation (CURVAS), which highlights the critical role of multiple annotators in establishing a more comprehensive ground truth, emphasizing that segmentation is inherently subjective and that leveraging inter-annotator variability is essential for robust model evaluation. Seven teams participated in the challenge, submitting a variety of DL models evaluated using metrics such as Dice Similarity Coefficient (DSC), Expected Calibration Error (ECE), and Continuous Ranked Probability Score (CRPS). By incorporating consensus and dissensus ground truth, we assess how DL models handle uncertainty and whether their confidence estimates align with true segmentation performance. Our findings reinforce the importance of well-calibrated models, as better calibration is strongly correlated with the quality of the results. Furthermore, we demonstrate that segmentation models trained on diverse datasets and enriched with pre-trained knowledge exhibit greater robustness, particularly in cases deviating from standard anatomical structures. Notably, the best-performing models achieved high DSC and well-calibrated uncertainty estimates. This work underscores the need for multi-annotator ground truth, thorough calibration assessments, and uncertainty-aware evaluations to develop trustworthy and clinically reliable DL-based medical image segmentation models.
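Of the challenge metrics above, Expected Calibration Error is the one that directly tests whether confidence matches accuracy: predictions are binned by confidence and the per-bin gap between accuracy and mean confidence is averaged. A minimal binary-case sketch (the challenge's exact binning may differ):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE for binary prediction: bin samples by confidence of the argmax
    class, then average |accuracy - confidence| weighted by bin size."""
    probs = np.asarray(probs, float)    # P(y=1) per sample
    labels = np.asarray(labels, int)
    conf = np.where(probs >= 0.5, probs, 1.0 - probs)
    pred = (probs >= 0.5).astype(int)
    correct = (pred == labels).astype(float)
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return float(ece)

# A perfectly confident, perfectly correct classifier is perfectly calibrated.
print(expected_calibration_error([1.0, 0.0, 1.0], [1, 0, 1]))  # 0.0
```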

Diffusion MRI of the prenatal fetal brain: a methodological scoping review.

Di Stefano M, Ciceri T, Leemans A, de Zwarte SMC, De Luca A, Peruzzo D

pubmed logopapersSep 10 2025
Fetal diffusion-weighted magnetic resonance imaging (dMRI) represents a promising modality for the assessment of white matter fiber organization, microstructure and development during pregnancy. Over the past two decades, research using this technology has significantly increased, but no consensus has yet been established on how to best implement and standardize the use of fetal dMRI across clinical and research settings. This scoping review aims to synthesize the various methodological approaches for the analysis of fetal dMRI brain data and their applications. We identified a total of 54 relevant articles and analyzed them across five primary domains: (1) datasets, (2) acquisition protocols, (3) image preprocessing/denoising, (4) image processing/modeling, and (5) brain atlas construction. The review of these articles reveals a predominant reliance on Diffusion Tensor Imaging (DTI) (n=37) to study fiber properties, and deterministic tractography approaches to investigate fiber organization (n=23). However, there is an emerging trend towards the adoption of more advanced techniques that address the inherent limitations of fetal dMRI (e.g. maternal and fetal motion, intensity artifacts, fetus's fast and uneven development), particularly through the application of artificial intelligence-based approaches (n=8). In our view, the results suggest that the potential of fetal brain dMRI is hindered by the methodological heterogeneity of the proposed solutions and the lack of publicly available data and tools. Nevertheless, clinical applications demonstrate its utility in studying brain development in both healthy and pathological conditions.
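The DTI analyses that dominate the reviewed literature summarize each tensor fit through scalar maps; the most common is fractional anisotropy, computed from the tensor's three eigenvalues. A minimal sketch:

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """FA from the eigenvalues of a diffusion tensor:
    sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||,
    ranging from 0 (isotropic) to 1 (fully anisotropic)."""
    lam = np.array([l1, l2, l3], float)
    md = lam.mean()  # mean diffusivity
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

print(round(fractional_anisotropy(1.0, 1.0, 1.0), 3))  # 0.0  (isotropic)
print(round(fractional_anisotropy(1.0, 0.0, 0.0), 3))  # 1.0  (single direction)
```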

A 3D multi-task network for the automatic segmentation of CT images featuring hip osteoarthritis.

Wang H, Zhang X, Li S, Zheng X, Zhang Y, Xie Q, Jin Z

pubmed logopapersSep 10 2025
Total hip arthroplasty (THA) is the standard surgical treatment for end-stage hip osteoarthritis, with its success dependent on precise preoperative planning, which, in turn, relies on accurate three-dimensional segmentation and reconstruction of the periarticular bone of the hip joint. However, patients with hip osteoarthritis often exhibit pathological characteristics, such as joint space narrowing, femoroacetabular impingement, osteophyte formation, and joint deformity. These changes present significant challenges for traditional manual or semi-automatic segmentation methods. To address these challenges, this study proposed a novel 3D UNet-based multi-task network to achieve rapid and accurate segmentation and reconstruction of the periarticular bone in hip osteoarthritis patients. The bone segmentation main network incorporated the Transformer module during the encoder to effectively capture spatial anatomical features, while a boundary-optimization branch was designed to address segmentation challenges at the acetabular-femoral interface. These branches were jointly optimized through a multi-task loss function, with an oversampling strategy introduced to enhance the network's feature learning capability for complex structures. The experimental results showed that the proposed method achieved excellent performance on the test set with hip osteoarthritis. The average Dice coefficient was 96.09% (96.98% for femur, 95.20% for hip), with an overall precision of 96.66% and recall of 97.32%. In terms of the boundary matching metrics, the average surface distance (ASD) and the 95% Hausdorff distance (HD95) were 0.40 mm and 1.78 mm, respectively. The metrics showed that the proposed automatic segmentation network achieved high accuracy in segmenting the periarticular bone of the hip joint, generating reliable 2D masks and 3D models, thereby demonstrating significant potential for supporting THA surgical planning.
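The boundary metrics reported above (ASD and HD95) are both derived from nearest-neighbor distances between the two segmentation surfaces. A minimal sketch over surfaces given as point sets (a brute-force distance matrix; real pipelines use KD-trees or distance transforms on the voxel grid):

```python
import numpy as np

def directed_distances(a_pts, b_pts):
    """For each point in a_pts, the Euclidean distance to the nearest
    point in b_pts. Both arrays have shape (N, 3)."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return d.min(axis=1)

def asd_and_hd95(a_pts, b_pts):
    """Symmetric average surface distance and 95th-percentile Hausdorff
    distance between two surfaces, pooling both directed distance sets."""
    all_d = np.concatenate([directed_distances(a_pts, b_pts),
                            directed_distances(b_pts, a_pts)])
    return all_d.mean(), np.percentile(all_d, 95)

a = np.array([[0.0, 0, 0], [1, 0, 0]])
b = np.array([[0.0, 0, 0], [1, 0, 0]])
asd, hd95 = asd_and_hd95(a, b)
print(float(asd), float(hd95))  # 0.0 0.0
```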

Live(r) Die: Predicting Survival in Colorectal Liver Metastasis

Muhammad Alberb, Helen Cheung, Anne Martel

arxiv logopreprintSep 10 2025
Colorectal cancer frequently metastasizes to the liver, significantly reducing long-term survival. While surgical resection is the only potentially curative treatment for colorectal liver metastasis (CRLM), patient outcomes vary widely depending on tumor characteristics along with clinical and genomic factors. Current prognostic models, often based on limited clinical or molecular features, lack sufficient predictive power, especially in multifocal CRLM cases. We present a fully automated framework for surgical outcome prediction from pre- and post-contrast MRI acquired before surgery. Our framework consists of a segmentation pipeline and a radiomics pipeline. The segmentation pipeline learns to segment the liver, tumors, and spleen from partially annotated data by leveraging promptable foundation models to complete missing labels. Also, we propose SAMONAI, a novel zero-shot 3D prompt propagation algorithm that leverages the Segment Anything Model to segment 3D regions of interest from a single point prompt, significantly improving our segmentation pipeline's accuracy and efficiency. The predicted pre- and post-contrast segmentations are then fed into our radiomics pipeline, which extracts features from each tumor and predicts survival using SurvAMINN, a novel autoencoder-based multiple instance neural network for survival analysis. SurvAMINN jointly learns dimensionality reduction and hazard prediction from right-censored survival data, focusing on the most aggressive tumors. Extensive evaluation on an institutional dataset comprising 227 patients demonstrates that our framework surpasses existing clinical and genomic biomarkers, delivering a C-index improvement exceeding 10%. Our results demonstrate the potential of integrating automated segmentation algorithms and radiomics-based survival analysis to deliver accurate, annotation-efficient, and interpretable outcome prediction in CRLM.
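The C-index used to report the framework's gain is Harrell's concordance index for right-censored survival data. A minimal O(n²) sketch:

```python
import numpy as np

def concordance_index(times, events, risks):
    """Harrell's C-index: among comparable pairs (the earlier time is an
    observed event, event=1), the fraction where the earlier-failing
    patient has the higher predicted risk; tied risks count 0.5."""
    times, events, risks = map(np.asarray, (times, events, risks))
    num, den = 0.0, 0.0
    for i in range(len(times)):
        for j in range(len(times)):
            if events[i] and times[i] < times[j]:  # pair (i, j) is comparable
                den += 1
                if risks[i] > risks[j]:
                    num += 1
                elif risks[i] == risks[j]:
                    num += 0.5
    return num / den if den else float("nan")

# Risks perfectly anti-ordered with survival time give a C-index of 1.0.
print(concordance_index([1, 2, 3], [1, 1, 1], [3.0, 2.0, 1.0]))  # 1.0
```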
