Page 36 of 42411 results

Uncertainty Co-estimator for Improving Semi-Supervised Medical Image Segmentation.

Zeng X, Xiong S, Xu J, Du G, Rong Y

pubmed logopapersMay 15 2025
Recently, combining the strategy of consistency regularization with uncertainty estimation has shown promising performance on semi-supervised medical image segmentation tasks. However, most existing methods estimate uncertainty solely from the outputs of a single neural network, which results in imprecise uncertainty estimates and ultimately degrades segmentation performance. In this paper, we propose a novel Uncertainty Co-estimator (UnCo) framework to address this problem. Inspired by the co-training technique, UnCo establishes two different mean-teacher modules (i.e., two pairs of teacher and student models) and estimates three types of uncertainty from the multi-source predictions generated by these models. By combining these uncertainties, their differences help filter out noise in each individual estimate, making the final fused uncertainty maps more accurate. These maps are then used to enhance a cross-consistency regularization imposed between the two modules. In addition, UnCo introduces an internal consistency regularization within each module, so that the student models can aggregate diverse feature information from both modules, further promoting semi-supervised segmentation performance. Finally, an adversarial constraint is introduced to maintain model diversity. Experimental results on four medical image datasets indicate that UnCo achieves new state-of-the-art performance on both 2D and 3D semi-supervised segmentation tasks. The source code will be available at https://github.com/z1010x/UnCo.
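The fusion idea can be illustrated with entropy-based uncertainty maps from two models. This is a simplified sketch, not the paper's exact three-uncertainty formulation; `fused_uncertainty` and the equal-weight averaging are illustrative assumptions:

```python
import numpy as np

def predictive_entropy(probs, eps=1e-8):
    """Voxel-wise entropy of a softmax probability map of shape (C, H, W)."""
    return -np.sum(probs * np.log(probs + eps), axis=0)

def fused_uncertainty(probs_a, probs_b):
    """Combine uncertainty estimates from two teacher models (a sketch).

    Beyond each model's own entropy, the entropy of the averaged
    prediction stays high wherever the two models disagree, so the
    fused map flags regions that either single estimate might miss.
    """
    u_a = predictive_entropy(probs_a)
    u_b = predictive_entropy(probs_b)
    u_cross = predictive_entropy((probs_a + probs_b) / 2)  # disagreement-aware
    return (u_a + u_b + u_cross) / 3
```

A fused map like this could then gate a consistency loss, down-weighting voxels with high uncertainty.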

Ordered-subsets Multi-diffusion Model for Sparse-view CT Reconstruction

Pengfei Yu, Bin Huang, Minghui Zhang, Weiwen Wu, Shaoyu Wang, Qiegen Liu

arxiv logopreprintMay 15 2025
Score-based diffusion models have shown significant promise in the field of sparse-view CT reconstruction. However, the projection dataset is large and riddled with redundancy. Consequently, applying a diffusion model to unprocessed data results in lower learning effectiveness and higher learning difficulty, frequently leading to reconstructed images that lack fine details. To address these issues, we propose the ordered-subsets multi-diffusion model (OSMM) for sparse-view CT reconstruction. OSMM innovatively divides the CT projection data into equal subsets and employs a multi-subsets diffusion model (MSDM) to learn from each subset independently. This targeted learning approach reduces complexity and enhances the reconstruction of fine details. Furthermore, the integration of a one-whole diffusion model (OWDM) with complete sinogram data acts as a global information constraint, reducing the possibility of generating erroneous or inconsistent sinogram information. Moreover, OSMM's unsupervised learning framework provides strong robustness and generalizability, adapting seamlessly to varying sparsity levels of CT sinograms. This ensures consistent and reliable performance across different clinical scenarios. Experimental results demonstrate that OSMM outperforms traditional diffusion models in terms of image quality and noise resilience, offering a powerful and versatile solution for advanced CT imaging in sparse-view scenarios.
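The ordered-subsets split itself is simple to state. A minimal sketch follows, assuming interleaved angular subsets (a common choice in ordered-subsets reconstruction; the abstract does not specify the subset layout):

```python
import numpy as np

def ordered_subsets(sinogram, n_subsets):
    """Split a sinogram of shape (n_angles, n_detectors) into interleaved,
    equally sized angular subsets.

    Subset k takes angles k, k + n_subsets, k + 2*n_subsets, ..., so every
    subset spans the full angular range at a coarser angular sampling.
    """
    n_angles = sinogram.shape[0]
    assert n_angles % n_subsets == 0, "subsets must divide the angle count"
    return [sinogram[k::n_subsets] for k in range(n_subsets)]
```

Each subset would then be handed to its own diffusion model, with the full sinogram reserved for the global constraint.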

Dual-Domain deep prior guided sparse-view CT reconstruction with multi-scale fusion attention.

Wu J, Lin J, Jiang X, Zheng W, Zhong L, Pang Y, Meng H, Li Z

pubmed logopapersMay 15 2025
Sparse-view CT reconstruction is a challenging ill-posed inverse problem, where insufficient projection data leads to degraded image quality with increased noise and artifacts. Recent deep learning approaches have shown promising results in CT reconstruction. However, existing methods often neglect projection data constraints and rely heavily on convolutional neural networks, resulting in limited feature extraction capabilities and inadequate adaptability. To address these limitations, we propose a Dual-domain deep Prior-guided Multi-scale fusion Attention (DPMA) model for sparse-view CT reconstruction, aiming to enhance reconstruction accuracy while ensuring data consistency and stability. First, we establish a residual regularization strategy that applies constraints on the difference between the prior image and target image, effectively integrating deep learning-based priors with model-based optimization. Second, we develop a multi-scale fusion attention mechanism that employs parallel pathways to simultaneously model global context, regional dependencies, and local details in a unified framework. Third, we incorporate a physics-informed consistency module based on range-null space decomposition to ensure adherence to projection data constraints. Experimental results demonstrate that DPMA achieves improved reconstruction quality compared to existing approaches, particularly in noise suppression, artifact reduction, and fine detail preservation.
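The range-null space decomposition behind the consistency module has a compact closed form, x = A⁺y + (I − A⁺A)x̂: the measured data fixes the range component while the network output x̂ supplies the null-space component. A small numerical sketch (a dense pseudo-inverse is used here for clarity; a real CT operator would require an iterative solver):

```python
import numpy as np

def rns_consistency(A, y, x_net):
    """Project a network output onto the set of data-consistent solutions
    via range-null space decomposition:

        x = A^+ y + (I - A^+ A) x_net

    The result keeps the network's null-space content but exactly matches
    the measurements y in the range of A.
    """
    A_pinv = np.linalg.pinv(A)
    range_part = A_pinv @ y
    null_part = x_net - A_pinv @ (A @ x_net)  # (I - A^+ A) x_net
    return range_part + null_part
```

Because A A⁺ A = A, re-projecting the corrected x reproduces y exactly whenever y lies in the range of A.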

Deep learning MRI-based radiomic models for predicting recurrence in locally advanced nasopharyngeal carcinoma after neoadjuvant chemoradiotherapy: a multi-center study.

Hu C, Xu C, Chen J, Huang Y, Meng Q, Lin Z, Huang X, Chen L

pubmed logopapersMay 15 2025
Local recurrence and distant metastasis are common manifestations of locoregionally advanced nasopharyngeal carcinoma (LA-NPC) after neoadjuvant chemoradiotherapy (NACT). This study aimed to validate the clinical value of deep learning-based MRI radiomic models for predicting recurrence in LA-NPC patients. A total of 328 NPC patients from four hospitals were retrospectively included and randomly divided into training (n = 229) and validation (n = 99) cohorts. We extracted 975 traditional radiomic features and 1000 deep radiomic features from contrast-enhanced T1-weighted (T1WI + C) and T2-weighted (T2WI) sequences, respectively. The least absolute shrinkage and selection operator (LASSO) was applied for feature selection. Five machine learning classifiers were used to develop three models for recurrence prediction in the training cohort: Model I, traditional radiomic features; Model II, deep radiomic features combined with Model I; and Model III, Model II combined with clinical features. The predictive performance of these models was evaluated by receiver operating characteristic (ROC) curve analysis, area under the curve (AUC), accuracy, sensitivity, and specificity in both cohorts. The clinical characteristics of the two cohorts showed no significant differences. Fifteen radiomic features and 6 deep radiomic features were selected from T1WI + C, and 9 radiomic features and 6 deep radiomic features from T2WI. For T2WI, the random forest (RF)-based Model II (AUC = 0.87) performed best among all models in the validation cohort. The traditional radiomic model combined with deep radiomic features shows excellent predictive performance and could assist clinicians in predicting treatment response for LA-NPC patients after NACT.
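The LASSO-selection-plus-classifier pipeline described here can be sketched with scikit-learn on synthetic data. The sample sizes, feature count, alpha, and forest settings below are illustrative assumptions, not the study's actual configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a radiomic feature matrix (the study used
# 975 traditional + 1000 deep features; 150 keeps this demo fast).
X, y = make_classification(n_samples=300, n_features=150,
                           n_informative=10, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

# LASSO-based feature selection, then a random forest classifier,
# mirroring the feature-selection + classifier structure in the abstract.
selector = SelectFromModel(Lasso(alpha=0.01, max_iter=10000)).fit(X_tr, y_tr)
X_tr_sel, X_va_sel = selector.transform(X_tr), selector.transform(X_va)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_tr_sel, y_tr)
auc = roc_auc_score(y_va, rf.predict_proba(X_va_sel)[:, 1])
```

In a real radiomics workflow the features would come from segmented MRI regions rather than `make_classification`, and the alpha would typically be tuned by cross-validation.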

Predicting response to anti-VEGF therapy in neovascular age-related macular degeneration using random forest and SHAP algorithms.

Zhang P, Duan J, Wang C, Li X, Su J, Shang Q

pubmed logopapersMay 14 2025
This study aimed to establish and validate a prediction model based on machine learning methods and the SHAP algorithm to predict response to anti-vascular endothelial growth factor (VEGF) therapy in neovascular age-related macular degeneration (AMD). In this retrospective study, we extracted data including demographic characteristics, laboratory test results, and imaging features from optical coherence tomography (OCT) and optical coherence tomography angiography (OCTA). Eight machine learning methods, including Logistic Regression, Gradient Boosting Decision Tree, Random Forest, CatBoost, Support Vector Machine, XGBoost, LightGBM, and K-Nearest Neighbors, were employed to develop the predictive model. The method with the best performance was selected for further interpretation. Finally, the SHAP algorithm was applied to explain the model's predictions. The study included 145 patients with neovascular AMD. Among the eight models developed, the Random Forest model demonstrated the best overall performance, achieving a high accuracy of 75.86% and the highest area under the receiver operating characteristic curve (AUC) value of 0.91. In this model, important features identified as significant contributors to the response to anti-VEGF therapy in neovascular AMD patients included fractal dimension, total number of end points, total number of junctions, total vessel length, vessel area, average lacunarity, choroidal neovascularization (CNV) type, age, duration, and logMAR BCVA. SHAP analysis and visualization provided interpretation at both the factor level and the individual level. The Random Forest model for predicting response to anti-VEGF therapy in neovascular AMD using the SHAP algorithm proved feasible and effective. OCTA imaging features, such as fractal dimension and total number of end points, were the most effective predictive factors.
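The factor-level ranking step can be sketched on synthetic data. The study used SHAP; the sketch below substitutes scikit-learn's permutation importance as a dependency-free proxy for the same idea of ranking predictors by their contribution, and the ten features are hypothetical stand-ins for the OCTA-derived predictors:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Ten hypothetical predictors (fractal dimension, end-point count, vessel
# length, ...); with shuffle=False only the first three carry signal.
X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Factor-level attribution: shuffle each feature and measure the score drop.
result = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]  # most important first
```

SHAP additionally gives signed, per-patient attributions, which is what enables the individual-level interpretation the abstract describes.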

Multi-Task Deep Learning for Predicting Metabolic Syndrome from Retinal Fundus Images in a Japanese Health Checkup Dataset

Itoh, T., Nishitsuka, K., Fukuma, Y., Wada, S.

medrxiv logopreprintMay 14 2025
Background: Retinal fundus images provide a noninvasive window into systemic health, offering opportunities for early detection of metabolic disorders such as metabolic syndrome (METS). Objective: This study aimed to develop a deep learning model to predict METS from fundus images obtained during routine health checkups, leveraging a multi-task learning approach. Methods: We retrospectively analyzed 5,000 fundus images from Japanese health checkup participants. Convolutional neural network (CNN) models were trained to classify METS status, incorporating fundus-specific data augmentation strategies and auxiliary regression tasks targeting clinical parameters such as abdominal circumference (AC). Model performance was evaluated using validation accuracy, test accuracy, and the area under the receiver operating characteristic curve (AUC). Results: Models employing fundus-specific augmentation demonstrated more stable convergence and superior validation accuracy compared to general-purpose augmentation. Incorporating AC as an auxiliary task further enhanced performance across architectures. The final ensemble model with test-time augmentation achieved a test accuracy of 0.696 and an AUC of 0.73178. Conclusion: Combining multi-task learning, fundus-specific data augmentation, and ensemble prediction substantially improves deep learning-based METS classification from fundus images. This approach may offer a practical, noninvasive screening tool for metabolic syndrome in general health checkup settings.
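A multi-task objective of this kind is typically a weighted sum of the classification loss and the auxiliary regression loss. A minimal numeric sketch, where `aux_weight` is an assumed hyperparameter not taken from the paper:

```python
import numpy as np

def multitask_loss(logit, label, ac_pred, ac_true, aux_weight=0.3):
    """Joint objective: binary cross-entropy for METS status plus a
    weighted squared error on abdominal circumference (AC), the
    auxiliary regression task. `aux_weight` is an illustrative value.
    """
    p = 1.0 / (1.0 + np.exp(-logit))                   # sigmoid
    bce = -(label * np.log(p) + (1 - label) * np.log(1 - p))
    mse = (ac_pred - ac_true) ** 2
    return bce + aux_weight * mse
```

The auxiliary term gives the shared CNN backbone a metabolically relevant regression signal even when the METS label alone is weakly informative.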

An Annotated Multi-Site and Multi-Contrast Magnetic Resonance Imaging Dataset for the study of the Human Tongue Musculature.

Ribeiro FL, Zhu X, Ye X, Tu S, Ngo ST, Henderson RD, Steyn FJ, Kiernan MC, Barth M, Bollmann S, Shaw TB

pubmed logopapersMay 14 2025
This dataset provides the first annotated, openly available MRI-based imaging dataset for investigations of tongue musculature, including multi-contrast and multi-site MRI data from non-disease participants. The present dataset includes 47 participants collated from three studies: BeLong (four participants; T2-weighted images), EATT4MND (19 participants; T2-weighted images), and BMC (24 participants; T1-weighted images). We provide manually corrected segmentations of five key tongue muscles: the superior longitudinal, combined transverse/vertical, genioglossus, and inferior longitudinal muscles. Other phenotypic measures, including age, sex, weight, height, and tongue muscle volume, are also available for use. This dataset will benefit researchers across domains interested in the structure and function of the tongue in health and disease. For instance, researchers can use this data to train new machine learning models for tongue segmentation, which can be leveraged for segmentation and tracking of different tongue muscles engaged in speech formation in health and disease. Altogether, this dataset provides the means to the scientific community for investigation of the intricate tongue musculature and its role in physiological processes and speech production.

DEMAC-Net: A Dual-Encoder Multiattention Collaborative Network for Cervical Nerve Pathway and Adjacent Anatomical Structure Segmentation.

Cui H, Duan J, Lin L, Wu Q, Guo W, Zang Q, Zhou M, Fang W, Hu Y, Zou Z

pubmed logopapersMay 13 2025
Currently, cervical anesthesia is performed using three main approaches: superficial cervical plexus block, deep cervical plexus block, and intermediate plexus nerve block. However, each technique carries inherent risks and demands significant clinical expertise. Ultrasound imaging, known for its real-time visualization capabilities and accessibility, is widely used in both diagnostic and interventional procedures. Nevertheless, accurate segmentation of small and irregularly shaped structures such as the cervical and brachial plexuses remains challenging due to image noise, complex anatomical morphology, and limited annotated training data. This study introduces DEMAC-Net, a dual-encoder multiattention collaborative network, to significantly improve the segmentation accuracy of these neural structures. By precisely identifying the cervical nerve pathway (CNP) and adjacent anatomical tissues, DEMAC-Net aims to assist clinicians, especially those less experienced, in effectively guiding anesthesia procedures and accurately identifying optimal needle insertion points. Consequently, this improvement is expected to enhance clinical safety, reduce procedural risks, and streamline decision-making during ultrasound-guided regional anesthesia. DEMAC-Net combines a dual-encoder architecture with the Spatial Understanding Convolution Kernel (SUCK) and the Spatial-Channel Attention Module (SCAM) to extract multi-scale features effectively. Additionally, a Global Attention Gate (GAG) and inter-layer fusion modules refine relevant features while suppressing noise. A novel dataset, the Neck Ultrasound Dataset (NUSD), was introduced, containing 1,500 annotated ultrasound images across seven anatomical regions. Extensive experiments were conducted on both NUSD and the public BUSI dataset, comparing DEMAC-Net to state-of-the-art models using metrics such as the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU).
On the NUSD dataset, DEMAC-Net achieved a mean DSC of 93.3%, outperforming existing models. For external validation on the BUSI dataset, it demonstrated superior generalization, achieving a DSC of 87.2% and a mean IoU of 77.4%, surpassing other advanced methods. Notably, DEMAC-Net displayed consistent segmentation stability across all tested structures. The proposed DEMAC-Net significantly improves segmentation accuracy for small nerves and complex anatomical structures in ultrasound images, outperforming existing methods in terms of accuracy and computational efficiency. This framework holds great potential for enhancing ultrasound-guided procedures, such as peripheral nerve blocks, by providing more precise anatomical localization, ultimately improving clinical outcomes.
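The DSC and IoU metrics reported throughout these abstracts are standard overlap measures for binary masks; a minimal sketch:

```python
import numpy as np

def dice_and_iou(pred, target):
    """Dice similarity coefficient and intersection-over-union for binary
    segmentation masks of matching shape (values in {0, 1}).

    Dice = 2|A ∩ B| / (|A| + |B|),  IoU = |A ∩ B| / |A ∪ B|.
    Assumes at least one mask is non-empty.
    """
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum())
    iou = inter / union
    return dice, iou
```

Dice is always at least as large as IoU for the same pair of masks (Dice = 2·IoU / (1 + IoU)), which is why reported DSC values, such as the 93.3% above, run higher than the corresponding IoU.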

Unsupervised Out-of-Distribution Detection in Medical Imaging Using Multi-Exit Class Activation Maps and Feature Masking

Yu-Jen Chen, Xueyang Li, Yiyu Shi, Tsung-Yi Ho

arxiv logopreprintMay 13 2025
Out-of-distribution (OOD) detection is essential for ensuring the reliability of deep learning models in medical imaging applications. This work is motivated by the observation that class activation maps (CAMs) for in-distribution (ID) data typically emphasize regions that are highly relevant to the model's predictions, whereas OOD data often lacks such focused activations. By masking input images with inverted CAMs, the feature representations of ID data undergo more substantial changes compared to those of OOD data, offering a robust criterion for differentiation. In this paper, we introduce a novel unsupervised OOD detection framework, Multi-Exit Class Activation Map (MECAM), which leverages multi-exit CAMs and feature masking. By utilizing multi-exit networks that combine CAMs from varying resolutions and depths, our method captures both global and local feature representations, thereby enhancing the robustness of OOD detection. We evaluate MECAM on multiple ID datasets, including ISIC19 and PathMNIST, and test its performance against three medical OOD datasets, RSNA Pneumonia, COVID-19, and HeadCT, and one natural image OOD dataset, iSUN. Comprehensive comparisons with state-of-the-art OOD detection methods validate the effectiveness of our approach. Our findings emphasize the potential of multi-exit networks and feature masking for advancing unsupervised OOD detection in medical imaging, paving the way for more reliable and interpretable models in clinical practice.
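The CAM-masking criterion can be sketched for a single exit; this is a simplified illustration of the idea, not the paper's multi-exit formulation, and `features_fn` stands in for any feature extractor:

```python
import numpy as np

def cam_masking_score(features_fn, image, cam):
    """Single-exit sketch of CAM-based feature masking for OOD scoring.

    The input is masked with the inverted, [0, 1]-normalized CAM. For
    in-distribution inputs this suppresses exactly the regions the model
    relies on, so the feature representation shifts strongly; for OOD
    inputs the CAM is diffuse and the shift is smaller.
    """
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    masked = image * (1.0 - cam)                 # suppress salient regions
    f_orig = features_fn(image)
    f_mask = features_fn(masked)
    return np.linalg.norm(f_orig - f_mask)       # larger => more likely ID
```

The multi-exit extension would compute such a score at several network depths and combine them, so both coarse and fine CAM resolutions contribute to the decision.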

Automatic deep learning segmentation of mandibular periodontal bone topography on cone-beam computed tomography images.

Palkovics D, Molnar B, Pinter C, García-Mato D, Diaz-Pinto A, Windisch P, Ramseier CA

pubmed logopapersMay 13 2025
This study evaluated the performance of a multi-stage Segmentation Residual Network (SegResNet)-based deep learning (DL) model for the automatic segmentation of cone-beam computed tomography (CBCT) images of patients with stage III and IV periodontitis. Seventy pre-processed CBCT scans from patients undergoing periodontal rehabilitation were used for training and validation. The model was tested on 10 CBCT scans independent of the training dataset by comparing results with semi-automatic (SA) segmentations. Segmentation accuracy was assessed using the Dice similarity coefficient (DSC), Intersection over Union (IoU), and the 95th percentile Hausdorff distance (HD95). Linear periodontal measurements were performed on four tooth surfaces to assess the validity of the DL segmentation in the periodontal region. The DL model achieved a mean DSC of 0.9650 ± 0.0097, an IoU of 0.9340 ± 0.0180, and an HD95 of 0.4820 ± 0.1269 mm, showing strong agreement with SA segmentation. Linear measurements revealed high correlations for the mesial, distal, and lingual surfaces, with intraclass correlation coefficients (ICC) of 0.9442 (p<0.0001), 0.9232 (p<0.0001), and 0.9598 (p<0.0001), respectively, while buccal measurements showed lower consistency, with an ICC of 0.7481 (p<0.0001). The DL method reduced segmentation time 47-fold compared to the SA method. The acquired 3D models may enable precise treatment planning in cases where conventional diagnostic modalities are insufficient. However, the robustness of the model must be increased to improve its reliability and consistency at the buccal aspect of the periodontal region. This study presents a DL model for the CBCT-based segmentation of periodontal defects, demonstrating high accuracy and a 47-fold time reduction compared to SA methods, thus improving the feasibility of 3D diagnostics for advanced periodontitis.
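Unlike Dice and IoU, HD95 is a boundary-distance metric in physical units. A minimal sketch for binary masks (voxel units; a production implementation would scale by voxel spacing and handle empty masks):

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def hd95(mask_a, mask_b):
    """95th-percentile symmetric Hausdorff distance between the
    boundaries of two binary masks, in voxel units.

    The boundary is the mask minus its erosion; the metric takes the
    95th percentile of nearest-boundary distances in both directions,
    which makes it robust to a few outlier voxels (unlike the max-based
    classical Hausdorff distance).
    """
    def surface(mask):
        return np.argwhere(mask & ~binary_erosion(mask))
    pa, pb = surface(mask_a.astype(bool)), surface(mask_b.astype(bool))
    d = cdist(pa, pb)  # pairwise distances between boundary voxels
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))
```

An HD95 of 0.4820 mm, as reported above, therefore means that 95% of boundary points lie within roughly half a millimeter of the reference surface.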