Page 93 of 99986 results

An automated cascade framework for glioma prognosis via segmentation, multi-feature fusion and classification techniques.

Hamoud M, Chekima NEI, Hima A, Kholladi NH

PubMed · May 13 2025
Glioma is one of the most lethal types of brain tumors, accounting for approximately 33% of all diagnosed brain tumor cases. Accurate segmentation and classification are crucial for precise glioma characterization, enabling early detection of malignancy, effective treatment planning, and prevention of tumor progression. Magnetic Resonance Imaging (MRI) serves as a non-invasive imaging modality that allows detailed examination of gliomas without exposure to ionizing radiation. However, manual analysis of MRI scans is impractical: it is time-consuming, subjective, and requires specialized expertise from radiologists. To address this, computer-aided diagnosis (CAD) systems have evolved into powerful tools that support neuro-oncologists in the brain cancer screening process. In this work, we present a glioma classification framework based on 3D multi-modal MRI segmentation using the CNN models SegResNet and Swin UNETR, the latter incorporating transformer mechanisms to enhance segmentation performance. MRI images undergo preprocessing with a Gaussian filter and skull stripping to improve tissue localization. Key textural features are then extracted from the segmented tumor regions using the Gabor Transform, the Discrete Wavelet Transform (DWT), and deep features from ResNet50. These features are fused, normalized, and classified using a Support Vector Machine (SVM) to distinguish between Low-Grade Glioma (LGG) and High-Grade Glioma (HGG). Extensive experiments on benchmark datasets, including BRATS2020 and BRATS2023, demonstrate the effectiveness of the proposed approach. Our model achieved Dice scores of 0.815 for Tumor Core, 0.909 for Whole Tumor, and 0.829 for Enhancing Tumor. For classification, the framework attained 97% accuracy, 94% precision, 96% recall, and a 95% F1-score. These results highlight the potential of the proposed framework to provide reliable support for radiologists in the early detection and classification of gliomas.
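The fusion step described in the abstract (Gabor, DWT, and ResNet50 features concatenated and normalized before an SVM) can be sketched roughly as follows. The random arrays are stand-ins for real feature-extractor outputs, and the SVM itself is omitted:

```python
import numpy as np

def zscore(feats, eps=1e-8):
    """Column-wise z-score normalisation of a feature matrix."""
    mu = feats.mean(axis=0)
    sd = feats.std(axis=0)
    return (feats - mu) / (sd + eps)

def fuse_and_normalise(gabor, dwt, deep):
    """Concatenate per-sample feature vectors from the three extractors,
    then normalise each fused dimension to zero mean / unit variance."""
    fused = np.concatenate([gabor, dwt, deep], axis=1)
    return zscore(fused)

rng = np.random.default_rng(0)
n = 6                              # six example cases
gabor = rng.normal(size=(n, 4))    # stand-in for Gabor texture features
dwt = rng.normal(size=(n, 4))      # stand-in for DWT coefficients
deep = rng.normal(size=(n, 8))     # stand-in for ResNet50 embeddings
X = fuse_and_normalise(gabor, dwt, deep)
print(X.shape)  # (6, 16)
```

The normalised matrix `X` would then be fed to the SVM; the feature dimensions here are arbitrary.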

Signal-based AI-driven software solution for automated quantification of metastatic bone disease and treatment response assessment using Whole-Body Diffusion-Weighted MRI (WB-DWI) biomarkers in Advanced Prostate Cancer

Antonio Candito, Matthew D Blackledge, Richard Holbrey, Nuria Porta, Ana Ribeiro, Fabio Zugni, Luca D'Erme, Francesca Castagnoli, Alina Dragan, Ricardo Donners, Christina Messiou, Nina Tunariu, Dow-Mu Koh

arXiv preprint · May 13 2025
We developed an AI-driven software solution to quantify metastatic bone disease from WB-DWI scans. Core technologies include: (i) a weakly-supervised Residual U-Net model generating a skeleton probability map to isolate bone; (ii) a statistical framework for WB-DWI intensity normalisation, obtaining a signal-normalised b=900s/mm^2 (b900) image; and (iii) a shallow convolutional neural network that processes outputs from (i) and (ii) to generate a mask of suspected bone lesions, characterised by higher b900 signal intensity due to restricted water diffusion. This mask is applied to the gADC map to extract TDV and gADC statistics. We tested the tool using expert-defined metastatic bone disease delineations on 66 datasets, assessed repeatability of imaging biomarkers (N=10), and compared software-based response assessment with a construct reference standard based on clinical, laboratory and imaging assessments (N=118). Dice score between manual and automated delineations was 0.6 for lesions within pelvis and spine, with an average surface distance of 2mm. Relative differences for log-transformed TDV (log-TDV) and median gADC were below 9% and 5%, respectively. Repeatability analysis showed coefficients of variation of 4.57% for log-TDV and 3.54% for median gADC, with intraclass correlation coefficients above 0.9. The software achieved 80.5% accuracy, 84.3% sensitivity, and 85.7% specificity in assessing response to treatment compared to the construct reference standard. Computation time generating a mask averaged 90 seconds per scan. Our software enables reproducible TDV and gADC quantification from WB-DWI scans for monitoring metastatic bone disease response, thus providing potentially useful measurements for clinical decision-making in APC patients.
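The repeatability analysis above reports within-subject coefficients of variation (4.57% for log-TDV, 3.54% for median gADC). A minimal sketch of the CoV computation with hypothetical repeat measurements (the paper's N=10 repeatability protocol is not reproduced here):

```python
import numpy as np

def coefficient_of_variation(values):
    """Within-subject CoV (%) for a set of repeat biomarker measurements,
    using the sample standard deviation (ddof=1)."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Hypothetical repeat log-TDV measurements for one patient (arbitrary units)
log_tdv_repeats = [4.1, 4.0, 4.2, 4.1]
cov = coefficient_of_variation(log_tdv_repeats)
print(round(cov, 2))  # 1.99
```

A lower CoV means better repeatability; values below ~5%, as reported in the abstract, are generally considered good for quantitative imaging biomarkers.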

Segmentation of renal vessels on non-enhanced CT images using deep learning models.

Zhong H, Zhao Y, Zhang Y

PubMed · May 13 2025
To evaluate the feasibility of renal vessel reconstruction on non-enhanced CT images using deep learning models, CT scans from 177 patients in the non-enhanced, arterial and venous phases were chosen. These data were randomly divided into a training set (n = 120), a validation set (n = 20) and a test set (n = 37). In the training and validation sets, a radiologist marked the right renal arteries and veins on non-enhanced CT images using the contrast phases as references. The trained deep learning models were then tested and evaluated on the test set. A radiologist also performed renal vessel reconstruction on the test set without the contrast-phase reference, and the results were used for comparison; reconstruction using the arterial and venous phases served as the gold standard. Without the contrast-phase reference, both the radiologist and the model could accurately identify the main trunks of the artery and vein. Accuracy was 91.9% vs. 97.3% (model vs. radiologist) for arteries and 91.9% vs. 100% for veins; these differences were not significant. The model had difficulty identifying accessory arteries, and its accuracy was significantly lower than the radiologist's (44.4% vs. 77.8%, p = 0.044). The model also had lower accuracy for accessory veins, but the difference was not significant (64.3% vs. 85.7%, p = 0.094). Deep learning models could accurately recognize the main trunks of the right renal artery and vein, with accuracy comparable to that of radiologists. Although the current model still has difficulty recognizing small accessory vessels, further training and model optimization may address these limitations.
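The p-values above presumably come from an exact or chi-squared test. As an illustration of comparing two accuracies, here is a two-proportion z-test with made-up counts (18 accessory arteries, chosen only because 8/18 ≈ 44.4% and 14/18 ≈ 77.8% match the reported percentages; the normal approximation gives a slightly different p than an exact test):

```python
import math

def two_proportion_z_test(k1, n1, k2, n2):
    """Two-sided z-test for a difference between two proportions
    (pooled-variance normal approximation)."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value via the standard normal CDF built from erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: radiologist correct on 14/18, model correct on 8/18
z, p = two_proportion_z_test(14, 18, 8, 18)
print(round(z, 2), round(p, 3))
```

With these counts the difference is significant at the 5% level, consistent with the reported p = 0.044.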

DEMAC-Net: A Dual-Encoder Multiattention Collaborative Network for Cervical Nerve Pathway and Adjacent Anatomical Structure Segmentation.

Cui H, Duan J, Lin L, Wu Q, Guo W, Zang Q, Zhou M, Fang W, Hu Y, Zou Z

PubMed · May 13 2025
Currently, cervical anesthesia is performed using three main approaches: superficial cervical plexus block, deep cervical plexus block, and intermediate plexus nerve block. However, each technique carries inherent risks and demands significant clinical expertise. Ultrasound imaging, known for its real-time visualization capabilities and accessibility, is widely used in both diagnostic and interventional procedures. Nevertheless, accurate segmentation of small and irregularly shaped structures such as the cervical and brachial plexuses remains challenging due to image noise, complex anatomical morphology, and limited annotated training data. This study introduces DEMAC-Net, a dual-encoder multiattention collaborative network, to significantly improve the segmentation accuracy of these neural structures. By precisely identifying the cervical nerve pathway (CNP) and adjacent anatomical tissues, DEMAC-Net aims to assist clinicians, especially those less experienced, in effectively guiding anesthesia procedures and accurately identifying optimal needle insertion points. Consequently, this improvement is expected to enhance clinical safety, reduce procedural risks, and streamline decision-making efficiency during ultrasound-guided regional anesthesia. DEMAC-Net combines a dual-encoder architecture with the Spatial Understanding Convolution Kernel (SUCK) and the Spatial-Channel Attention Module (SCAM) to extract multi-scale features effectively. Additionally, a Global Attention Gate (GAG) and inter-layer fusion modules refine relevant features while suppressing noise. A novel dataset, Neck Ultrasound Dataset (NUSD), was introduced, containing 1,500 annotated ultrasound images across seven anatomical regions. Extensive experiments were conducted on both NUSD and the BUSI public dataset, comparing DEMAC-Net to state-of-the-art models using metrics such as Dice Similarity Coefficient (DSC) and Intersection over Union (IoU).
On the NUSD dataset, DEMAC-Net achieved a mean DSC of 93.3%, outperforming existing models. For external validation on the BUSI dataset, it demonstrated superior generalization, achieving a DSC of 87.2% and a mean IoU of 77.4%, surpassing other advanced methods. Notably, DEMAC-Net displayed consistent segmentation stability across all tested structures. The proposed DEMAC-Net significantly improves segmentation accuracy for small nerves and complex anatomical structures in ultrasound images, outperforming existing methods in terms of accuracy and computational efficiency. This framework holds great potential for enhancing ultrasound-guided procedures, such as peripheral nerve blocks, by providing more precise anatomical localization, ultimately improving clinical outcomes.
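DSC and IoU, the two metrics used above, can both be computed from binary masks; note that DSC = 2·IoU/(1+IoU), so the two always move together:

```python
import numpy as np

def dsc(pred, gt):
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection over Union between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Toy 2x3 masks: predicted vs. ground-truth segmentation
pred = np.array([[1, 1, 0], [1, 0, 0]])
gt   = np.array([[1, 1, 0], [0, 1, 0]])
print(dsc(pred, gt), iou(pred, gt))
```

Here the intersection is 2 pixels against mask sums of 3 each, giving DSC = 2/3 and IoU = 0.5, which satisfies the identity above.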

Calibration and Uncertainty for multiRater Volume Assessment in multiorgan Segmentation (CURVAS) challenge results

Meritxell Riera-Marin, Sikha O K, Julia Rodriguez-Comas, Matthias Stefan May, Zhaohong Pan, Xiang Zhou, Xiaokun Liang, Franciskus Xaverius Erick, Andrea Prenner, Cedric Hemon, Valentin Boussot, Jean-Louis Dillenseger, Jean-Claude Nunes, Abdul Qayyum, Moona Mazher, Steven A Niederer, Kaisar Kushibar, Carlos Martin-Isla, Petia Radeva, Karim Lekadir, Theodore Barfoot, Luis C. Garcia Peraza Herrera, Ben Glocker, Tom Vercauteren, Lucas Gago, Justin Englemann, Joy-Marie Kleiss, Anton Aubanell, Andreu Antolin, Javier Garcia-Lopez, Miguel A. Gonzalez Ballester, Adrian Galdran

arXiv preprint · May 13 2025
Deep learning (DL) has become the dominant approach for medical image segmentation, yet ensuring the reliability and clinical applicability of these models requires addressing key challenges such as annotation variability, calibration, and uncertainty estimation. This is why we created the Calibration and Uncertainty for multiRater Volume Assessment in multiorgan Segmentation (CURVAS) challenge, which highlights the critical role of multiple annotators in establishing a more comprehensive ground truth, emphasizing that segmentation is inherently subjective and that leveraging inter-annotator variability is essential for robust model evaluation. Seven teams participated in the challenge, submitting a variety of DL models evaluated using metrics such as Dice Similarity Coefficient (DSC), Expected Calibration Error (ECE), and Continuous Ranked Probability Score (CRPS). By incorporating consensus and dissensus ground truth, we assess how DL models handle uncertainty and whether their confidence estimates align with true segmentation performance. Our findings reinforce the importance of well-calibrated models, as better calibration is strongly correlated with the quality of the results. Furthermore, we demonstrate that segmentation models trained on diverse datasets and enriched with pre-trained knowledge exhibit greater robustness, particularly in cases deviating from standard anatomical structures. Notably, the best-performing models achieved high DSC and well-calibrated uncertainty estimates. This work underscores the need for multi-annotator ground truth, thorough calibration assessments, and uncertainty-aware evaluations to develop trustworthy and clinically reliable DL-based medical image segmentation models.
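Expected Calibration Error, one of the challenge metrics, bins predictions by confidence and averages the gap between accuracy and mean confidence in each bin. A minimal binned-ECE sketch on toy per-case data (binning and weighting choices vary between implementations):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: population-weighted gap between accuracy and mean confidence,
    computed over equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

conf = [0.95, 0.9, 0.85, 0.6, 0.55]   # predicted confidences
hit  = [1,    1,   0,    1,   0]      # whether each prediction was correct
print(round(expected_calibration_error(conf, hit, n_bins=5), 3))  # 0.17
```

A perfectly calibrated model (confidence equals empirical accuracy in every bin) would score an ECE of zero.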

Enhancing Liver Fibrosis Measurement: Deep Learning and Uncertainty Analysis Across Multi-Centre Cohorts

Wojciechowska, M. K., Malacrino, S., Windell, D., Culver, E., Dyson, J., UK-AIH Consortium,, Rittscher, J.

medRxiv preprint · May 13 2025
[Graphical Abstract: Figure 1]

Highlights:
- A retrospective cohort of liver biopsies collected from over 20 healthcare centres has been assembled.
- The cohort is characterized on the basis of the collagen staining used for liver fibrosis assessment.
- A computational pipeline for the quantification of collagen from liver histology slides has been developed and applied to the described cohorts.
- Uncertainty estimation is evaluated as a method to build trust in deep-learning based collagen predictions.

The introduction of digital pathology has revolutionised the way in which histology-based measurements can support large, multi-centre studies. However, pooling data from various centres often reveals significant differences in specimen quality, particularly regarding histological staining protocols. These variations present challenges in reliably quantifying features from stained tissue sections using image analysis. In this study, we investigate the statistical variation of measuring fibrosis across a liver cohort composed of four individual studies from 20 clinical sites across Europe and North America. In a first step, we apply colour consistency measurements to analyse staining variability across this diverse cohort. Subsequently, a learnt segmentation model is used to quantify the collagen proportionate area (CPA), and uncertainty mapping is employed to evaluate the quality of the segmentations. Our analysis highlights a lack of standardisation in PicroSirius Red (PSR) staining practices, revealing significant variability in staining protocols across institutions.
The deconvolution of the staining of the digitised slides identified the different numbers and types of counterstains used, leading to potentially incomparable results. Our analysis highlights the need for standardised staining protocols to ensure reliable collagen quantification in liver biopsies. The tools and methodologies presented here can be applied to perform slide colour quality control in digital pathology studies, thus enhancing the comparability and reproducibility of fibrosis assessment in the liver and other tissues.
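Collagen proportionate area itself is a simple ratio once the segmentation masks exist; a toy sketch (in the actual pipeline these masks come from the learnt segmentation model and stain deconvolution, not hand-built arrays):

```python
import numpy as np

def collagen_proportionate_area(collagen_mask, tissue_mask):
    """CPA (%): collagen-positive pixels as a fraction of tissue pixels."""
    collagen = np.logical_and(collagen_mask, tissue_mask).sum()
    tissue = tissue_mask.sum()
    return 100.0 * collagen / tissue

# Toy 4x4 slide patch: 12 tissue pixels, 3 of them collagen-positive
tissue = np.array([[1, 1, 1, 1],
                   [1, 1, 1, 1],
                   [1, 1, 1, 1],
                   [0, 0, 0, 0]], dtype=bool)
collagen = np.zeros_like(tissue)
collagen[0, :3] = True
print(collagen_proportionate_area(collagen, tissue))  # 25.0
```

Restricting the numerator to pixels inside the tissue mask keeps background staining artefacts from inflating the CPA.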

Trustworthy AI for stage IV non-small cell lung cancer: Automatic segmentation and uncertainty quantification.

Dedeken S, Conze PH, Damerjian Pieters V, Gallinato O, Faure J, Colin T, Visvikis D

PubMed · May 13 2025
Accurate segmentation of lung tumors is essential for advancing personalized medicine in non-small cell lung cancer (NSCLC). However, stage IV NSCLC presents significant challenges due to heterogeneous tumor morphology and the presence of associated conditions including infection, atelectasis and pleural effusion. The complexity of multicentric datasets further complicates robust segmentation across diverse clinical settings. In this study, we evaluate deep-learning-based approaches for automated segmentation of advanced-stage lung tumors using 3D architectures on 387 CT scans from the Deep-Lung-IV study. Through comprehensive experiments, we assess the impact of model design, HU windowing, and dataset size on delineation performance, providing practical guidelines for robust implementation. Additionally, we propose a confidence score using deep ensembles to quantify prediction uncertainty and automate the identification of complex cases that require further review. Our results demonstrate the potential of attention-based architectures and specific preprocessing strategies to improve segmentation quality in such a challenging clinical scenario, while emphasizing the importance of uncertainty estimation to build trustworthy AI systems in medical imaging. Code is available at: https://github.com/Sacha-Dedeken/SegStageIVNSCLC.
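One common way to build a deep-ensemble confidence score is to penalise disagreement between ensemble members; the abstract does not specify its formula, so the scalar summary below is purely illustrative:

```python
import numpy as np

def ensemble_confidence(member_probs):
    """Per-voxel mean foreground probability and disagreement (std) across
    ensemble members, plus one crude scan-level confidence scalar."""
    probs = np.asarray(member_probs)            # shape: (members, voxels)
    mean = probs.mean(axis=0)                   # consensus probability
    std = probs.std(axis=0)                     # member disagreement
    score = float((mean * (1.0 - std)).mean())  # high prob, low disagreement
    return mean, std, score

# Three hypothetical ensemble members' tumour probabilities for 4 voxels
members = [[0.90, 0.80, 0.20, 0.10],
           [0.95, 0.60, 0.30, 0.05],
           [0.85, 0.70, 0.10, 0.15]]
mean, std, score = ensemble_confidence(members)
print(mean.round(2).tolist())
```

Scans whose score falls below a chosen threshold could then be flagged for manual review, mirroring the automated triage of complex cases described above.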

AmygdalaGo-BOLT: an open and reliable AI tool to trace boundaries of human amygdala

Zhou, Q., Dong, B., Gao, P., Jintao, W., Xiao, J., Wang, W., Liang, P., Lin, D., Zuo, X.-N., He, H.

bioRxiv preprint · May 13 2025
Each year, thousands of brain MRI scans are collected to study structural development in children and adolescents. However, the amygdala, a particularly small and complex structure, remains difficult to segment reliably, especially in developing populations where its volume is even smaller. To address this challenge, we developed AmygdalaGo-BOLT, a boundary-aware deep learning model tailored for human amygdala segmentation. It was trained and validated using 854 manually labeled scans from pediatric datasets, with independent samples used to ensure performance generalizability. The model integrates multiscale image features, spatial priors, and self-attention mechanisms within a compact encoder-decoder architecture to enhance boundary detection. Validation across multiple imaging centers and age groups shows that AmygdalaGo-BOLT closely matches expert manual labels, improves processing efficiency, and outperforms existing tools in accuracy. This enables robust and scalable analysis of amygdala morphology in developmental neuroimaging studies where manual tracing is impractical. To support open and reproducible science, we publicly release both the labeled datasets and the full source code.

Deep Learning-Derived Cardiac Chamber Volumes and Mass From PET/CT Attenuation Scans: Associations With Myocardial Flow Reserve and Heart Failure.

Hijazi W, Shanbhag A, Miller RJH, Kavanagh PB, Killekar A, Lemley M, Wopperer S, Knight S, Le VT, Mason S, Acampa W, Rosamond T, Dey D, Berman DS, Chareonthaitawee P, Di Carli MF, Slomka PJ

PubMed · May 13 2025
Computed tomography (CT) attenuation correction scans are an intrinsic part of positron emission tomography (PET) myocardial perfusion imaging using PET/CT, but anatomic information is rarely derived from these ultralow-dose CT scans. We aimed to assess the association between deep learning-derived cardiac chamber volumes (right atrial, right ventricular, left ventricular, and left atrial) and mass (left ventricular) from these scans with myocardial flow reserve and heart failure hospitalization. We included 18 079 patients with consecutive cardiac PET/CT from 6 sites. A deep learning model estimated cardiac chamber volumes and left ventricular mass from computed tomography attenuation correction imaging. Associations between deep learning-derived CT mass and volumes with heart failure hospitalization and reduced myocardial flow reserve were assessed in a multivariable analysis. During a median follow-up of 4.3 years, 1721 (9.5%) patients experienced heart failure hospitalization. Patients with 3 or 4 abnormal chamber volumes were 7× more likely to be hospitalized for heart failure compared with patients with normal volumes. In adjusted analyses, left atrial volume (hazard ratio [HR], 1.25 [95% CI, 1.19-1.30]), right atrial volume (HR, 1.29 [95% CI, 1.23-1.35]), right ventricular volume (HR, 1.25 [95% CI, 1.20-1.31]), left ventricular volume (HR, 1.27 [95% CI, 1.23-1.35]), and left ventricular mass (HR, 1.25 [95% CI, 1.18-1.32]) were independently associated with heart failure hospitalization. In multivariable analyses, left atrial volume (odds ratio, 1.14 [95% CI, 1.0-1.19]) and ventricular mass (odds ratio, 1.12 [95% CI, 1.6-1.17]) were independent predictors of reduced myocardial flow reserve. Deep learning-derived chamber volumes and left ventricular mass from computed tomography attenuation correction were predictive of heart failure hospitalization and reduced myocardial flow reserve in patients undergoing cardiac PET perfusion imaging. 
This anatomic data can be routinely reported along with other PET/CT parameters to improve risk prediction.

Fast cortical thickness estimation using deep learning-based anatomy segmentation and diffeomorphic registration.

Wu J, Zhou S

PubMed · May 13 2025
Accurately and efficiently estimating the cortical thickness from magnetic resonance images (MRIs) is crucial for neuroscientific studies and clinical applications with various large-scale datasets. Diffeomorphic registration-based cortical thickness estimation (DiReCT) is a prominent traditional method of calculating such measures directly from original MRIs by applying diffeomorphic registration on segmented tissues. However, it suffers from prolonged computational time and limited reproducibility, impediments to its application in large-scale studies or real-time environments. This paper proposes a framework for cortical thickness estimation using deep learning-based anatomy segmentation and diffeomorphic registration. The framework begins by applying a convolutional neural network (CNN) segmentation model to the original image, generating a segmentation map that accurately delineates the cortical boundaries. Subsequently, a pair of distance maps generated from the segmentation map is injected into an unsupervised learning-based registration network for fast and diffeomorphic registration. A novel algorithm based on diffeomorphisms of different time points is proposed to calculate the final thickness map. We systematically evaluated and compared our method with surface-based measures from FreeSurfer on two distinct datasets. The experimental results demonstrated a superior performance of the proposed method, surpassing the performance of DiReCT and DL+DiReCT in terms of time efficiency and consistency with FreeSurfer. Our code and pre-trained models are publicly available at: https://github.com/wujiong-hub/DL-CTE.git.
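The "pair of distance maps" idea can be illustrated in 1-D: inside grey matter, the distances to the white-matter and pial boundaries sum to the local cortical thickness. This analytic toy stands in for the registration-based algorithm, which it does not reproduce:

```python
import numpy as np

def thickness_from_distance_maps(d_wm, d_pial, gm_mask):
    """Crude per-voxel thickness: distance to the white-matter boundary plus
    distance to the pial boundary, evaluated inside grey matter only."""
    return np.where(gm_mask, d_wm + d_pial, 0.0)

# 1-D toy column: white matter at x < 2, grey matter for 2 <= x < 7, CSF beyond
x = np.arange(10, dtype=float)
gm = (x >= 2) & (x < 7)
d_wm = np.clip(x - 2 + 0.5, 0, None)    # distance to the WM/GM boundary at x=2
d_pial = np.clip(7 - x - 0.5, 0, None)  # distance to the GM/CSF boundary at x=7
t = thickness_from_distance_maps(d_wm, d_pial, gm)
print(t[gm].tolist())  # [5.0, 5.0, 5.0, 5.0, 5.0]
```

Every grey-matter voxel in this ribbon recovers the same 5-voxel thickness; in the actual framework the distance maps are derived from the CNN segmentation and refined through diffeomorphic registration.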