
Medical Image De-Identification Benchmark Challenge

Linmin Pei, Granger Sutton, Michael Rutherford, Ulrike Wagner, Tracy Nolan, Kirk Smith, Phillip Farmer, Peter Gu, Ambar Rana, Kailing Chen, Thomas Ferleman, Brian Park, Ye Wu, Jordan Kojouharov, Gargi Singh, Jon Lemon, Tyler Willis, Milos Vukadinovic, Grant Duffy, Bryan He, David Ouyang, Marco Pereanez, Daniel Samber, Derek A. Smith, Christopher Cannistraci, Zahi Fayad, David S. Mendelson, Michele Bufano, Elmar Kotter, Hamideh Haghiri, Rajesh Baidya, Stefan Dvoretskii, Klaus H. Maier-Hein, Marco Nolden, Christopher Ablett, Silvia Siggillino, Sandeep Kaushik, Hongzhu Jiang, Sihan Xie, Zhiyu Wan, Alex Michie, Simon J Doran, Angeline Aurelia Waly, Felix A. Nathaniel Liang, Humam Arshad Mustagfirin, Michelle Grace Felicia, Kuo Po Chih, Rahul Krish, Ghulam Rasool, Nidhal Bouaynaya, Nikolas Koutsoubis, Kyle Naddeo, Kartik Pandit, Tony O'Sullivan, Raj Krish, Qinyan Pan, Scott Gustafson, Benjamin Kopchick, Laura Opsahl-Ong, Andrea Olvera-Morales, Jonathan Pinney, Kathryn Johnson, Theresa Do, Juergen Klenk, Maria Diaz, Arti Singh, Rong Chai, David A. Clunie, Fred Prior, Keyvan Farahani

arXiv preprint, Jul 31 2025
The de-identification (deID) of protected health information (PHI) and personally identifiable information (PII) is a fundamental requirement for sharing medical images, particularly through public repositories, to ensure compliance with patient privacy laws. In addition, preservation of non-PHI metadata to inform and enable downstream development of imaging artificial intelligence (AI) is an important consideration in biomedical research. The goal of the Medical Image De-Identification Benchmark (MIDI-B) challenge was to provide a standardized platform for benchmarking DICOM image deID tools against a set of rules conformant to the HIPAA Safe Harbor regulation, the DICOM Attribute Confidentiality Profiles, and best practices in the preservation of research-critical metadata, as defined by The Cancer Imaging Archive (TCIA). The challenge employed a large, diverse, multi-center, and multi-modality set of real de-identified radiology images with synthetic PHI/PII inserted. The MIDI-B Challenge consisted of three phases: training, validation, and test. Eighty individuals registered for the challenge. In the training phase, we encouraged participants to tune their algorithms using their in-house or public data. The validation and test phases used DICOM images containing synthetic identifiers (216 and 322 subjects, respectively). Ten teams successfully completed the test phase of the challenge. To measure the success of a rule-based approach to image deID, scores were computed as the percentage of correct actions out of the total number of required actions. The scores ranged from 97.91% to 99.93%. Participants employed a variety of open-source and proprietary tools with customized configurations, large language models, and optical character recognition (OCR). In this paper we provide a comprehensive report on the MIDI-B Challenge's design, implementation, results, and lessons learned.
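
MIDI-B's score is a plain ratio: correct actions divided by total required actions, expressed as a percentage. A minimal sketch of that computation is below; the action vocabulary and record format are illustrative assumptions, not the official challenge scoring code.

```python
# Hedged sketch: score a de-identification run as the percentage of
# required actions performed correctly, as described for MIDI-B.
# The action taxonomy below is illustrative; the real challenge defines
# its own per-attribute actions.

def deid_score(required_actions, performed_actions):
    """Both arguments map (sop_instance_uid, attribute) to an action
    string such as 'REMOVE', 'REPLACE', or 'KEEP'."""
    correct = sum(
        1 for key, expected in required_actions.items()
        if performed_actions.get(key) == expected
    )
    return 100.0 * correct / len(required_actions)

required = {
    ("1.2.840.99.1", "PatientName"): "REPLACE",
    ("1.2.840.99.1", "PatientID"): "REPLACE",
    ("1.2.840.99.1", "StudyDescription"): "KEEP",  # research-critical metadata
}
performed = {
    ("1.2.840.99.1", "PatientName"): "REPLACE",
    ("1.2.840.99.1", "PatientID"): "REMOVE",       # wrong action
    ("1.2.840.99.1", "StudyDescription"): "KEEP",
}
print(f"score = {deid_score(required, performed):.2f}%")  # score = 66.67%
```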

Out-of-Distribution Detection in Medical Imaging via Diffusion Trajectories

Lemar Abdi, Francisco Caetano, Amaan Valiuddin, Christiaan Viviers, Hamdi Joudeh, Fons van der Sommen

arXiv preprint, Jul 31 2025
In medical imaging, unsupervised out-of-distribution (OOD) detection offers an attractive approach for identifying pathological cases with extremely low incidence rates. In contrast to supervised methods, OOD-based approaches function without labels and are inherently robust to data imbalances. Current generative approaches often rely on likelihood estimation or reconstruction error, but these methods can be computationally expensive, unreliable, and require retraining if the inlier data changes. These limitations hinder their ability to distinguish nominal from anomalous inputs efficiently, consistently, and robustly. We propose a reconstruction-free OOD detection method that leverages the forward diffusion trajectories of a Stein score-based denoising diffusion model (SBDDM). By capturing trajectory curvature via the estimated Stein score, our approach enables accurate anomaly scoring with only five diffusion steps. A single SBDDM pre-trained on a large, semantically aligned medical dataset generalizes effectively across multiple Near-OOD and Far-OOD benchmarks, achieving state-of-the-art performance while drastically reducing computational cost during inference. Compared to existing methods, SBDDM achieves relative improvements of up to 10.43% and 18.10% for Near-OOD and Far-OOD detection, respectively, making it a practical building block for real-time, reliable computer-aided diagnosis.
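
The abstract does not spell out the exact curvature statistic, so the sketch below only illustrates the general recipe under stated assumptions: perturb the input along a few forward diffusion steps, evaluate a pretrained score network at each step, and score anomalies by how sharply the score trajectory bends.

```python
# Hedged sketch of reconstruction-free OOD scoring from forward diffusion
# trajectories. The score network, noise schedule, and curvature statistic
# are illustrative assumptions; the paper's exact formulation may differ.
import torch

def anomaly_score(x, score_net, sigmas=(0.05, 0.1, 0.2, 0.4, 0.8)):
    """x: images (B, C, H, W); score_net(x_t, sigma) estimates the Stein
    score of the noise-perturbed data distribution."""
    scores = []
    for sigma in sigmas:  # five forward diffusion (noising) steps
        x_t = x + sigma * torch.randn_like(x)
        s = score_net(x_t, torch.tensor(sigma))
        scores.append(s.flatten(1))          # (B, D)
    traj = torch.stack(scores, dim=1)        # (B, T, D)
    # Curvature proxy: norm of the discrete second difference along the
    # trajectory; stronger bending -> more anomalous.
    second_diff = traj[:, 2:] - 2 * traj[:, 1:-1] + traj[:, :-2]
    return second_diff.norm(dim=-1).mean(dim=-1)  # one score per image

# Usage with a stand-in score network (for shape-checking only):
score_net = lambda x_t, sigma: -x_t / (sigma ** 2 + 1.0)
x = torch.randn(4, 1, 64, 64)
print(anomaly_score(x, score_net).shape)  # torch.Size([4])
```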

Quantifying the Trajectory of Percutaneous Endoscopic Lumbar Discectomy in 3D Lumbar Models Based on Automated MR Image Segmentation-A Cross-Sectional Study.

Su Z, Wang Y, Huang C, He Q, Lu J, Liu Z, Zhang Y, Zhao Q, Zhang Y, Cai J, Pang S, Yuan Z, Chen Z, Chen T, Lu H

PubMed, Jul 31 2025
Creating a 3D lumbar model and planning a personalized puncture trajectory has advantages in establishing the working channel for percutaneous endoscopic lumbar discectomy (PELD). However, existing 3D lumbar models seldom include lumbar nerve and dural sac reconstructions and primarily depend on CT images for preoperative trajectory planning. Our study therefore aims to further investigate the relationship between different virtual working channels and a 3D lumbar model built from automated MR image segmentation of the lumbar bone, nerves, and dural sac at the L4/L5 level. Preoperative lumbar MR images of 50 patients with L4/L5 lumbar disc herniation were collected from a teaching hospital between March 2020 and July 2020. Automated MR image segmentation was first used to create a 3D model of the lumbar spine, including the L4 vertebra, L5 vertebra, intervertebral disc, L4 nerves, dural sac, and skin. Thirty cases were then randomly chosen from the segmentation results to clarify the relationship between various virtual working channels and the lumbar 3D model. A bivariate Spearman's rank correlation analysis was used. Preoperative MR images of 50 patients (34 males; mean age 45.6 ± 6 years) were used to train and validate the automated segmentation model, which had mean Dice scores of 0.906, 0.891, 0.896, 0.695, 0.892, and 0.892 for the L4 vertebra, L5 vertebra, intervertebral disc, L4 nerves, dural sac, and skin, respectively. With an increase in the coronal plane angle (CPA), the intersection volume involving the L4 nerves and atypical structures decreased, whereas the intersection volume encompassing the dural sac, L4 inferior articular process, and L5 superior articular process increased; the total intersection volume initially decreased, then increased, and then decreased once more. As the cross-section angle (CSA) increased, the intersection volumes of both the L4 nerves and the dural sac rose; the intersection volume involving the L4 inferior articular process grew while that of the L5 superior articular process diminished; the overall intersection volume and the intersection volume of atypical structures initially decreased and then increased. Overall, the optimal angles for L4/L5 PELD were a CSA of 15° and a CPA of 15°-20°, which minimized harm to the vertebral bones, facet joints, spinal nerves, and dural sac. Additionally, our 3D preoperative planning method could refine puncture trajectories for individual patients, potentially advancing surgical navigation, robotics, and artificial intelligence in PELD procedures.
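
For the correlation analysis, a bivariate Spearman's rank test pairs each trajectory angle with the corresponding intersection volume. A minimal SciPy sketch with synthetic values (the study's actual measurements are not reproduced here):

```python
# Hedged sketch: bivariate Spearman's rank correlation between a trajectory
# angle and an intersection volume, as used in the study. Data are synthetic.
from scipy.stats import spearmanr

cpa_deg = [0, 5, 10, 15, 20, 25, 30]                     # coronal plane angle
nerve_volume_mm3 = [410, 360, 300, 250, 190, 150, 120]   # illustrative only

rho, p = spearmanr(cpa_deg, nerve_volume_mm3)
print(f"rho = {rho:.3f}, p = {p:.4g}")  # a monotone decrease gives rho = -1
```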

Identification and validation of an explainable machine learning model for vascular depression diagnosis in the older adults: a multicenter cohort study.

Zhang R, Li T, Fan F, He H, Lan L, Sun D, Xu Z, Peng S, Cao J, Xu J, Peng X, Lei M, Song H, Zhang J

PubMed, Jul 31 2025
Vascular depression (VaDep) is a prevalent affective disorder in older adults that significantly impacts functional status and quality of life. Early identification and intervention are crucial but largely lacking in clinical practice, owing to often inconspicuous depressive symptoms, heterogeneous imaging manifestations, and the lack of definitive peripheral biomarkers. This study aimed to develop and validate an interpretable machine learning (ML) model for VaDep to serve as a clinical support tool. The study included 602 participants from Wuhan, China (236 VaDep patients and 366 controls), recruited from July 2020 to October 2023 for training and internal validation. An independent dataset of 171 participants from surrounding areas was used for external validation. We collected clinical data, neuropsychological assessments, blood test results, and MRI scans to develop and refine ML models through cross-validation. Feature reduction was implemented to simplify the models without compromising their performance, with validation on both the internal and external datasets. The SHapley Additive exPlanations (SHAP) method was used to enhance model interpretability. The Light Gradient Boosting Machine (LGBM) model performed best among the six ML algorithms evaluated. An optimized, interpretable LGBM model with 8 key features, namely white matter hyperintensities score, age, vascular endothelial growth factor, interleukin-6, brain-derived neurotrophic factor, tumor necrosis factor-alpha level, lacune count, and serotonin level, demonstrated high diagnostic accuracy in both internal (AUROC = 0.937) and external (AUROC = 0.896) validation. The final model also matched, and marginally exceeded, clinician-level diagnostic performance. Our research establishes a consistent and explainable ML framework for identifying VaDep in older adults using comprehensive clinical data. The 8 features in the final LGBM model provide new leads for exploring VaDep mechanisms and underscore the need for early identification and intervention in this vulnerable group. More attention should be paid to the affective health of older adults.
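
The LGBM-plus-SHAP workflow described here follows a standard pattern. A hedged sketch with synthetic data is shown below; the feature names mirror the paper's 8 key features, but everything else (data, hyperparameters) is illustrative.

```python
# Hedged sketch: train a LightGBM classifier and explain it with SHAP,
# mirroring the workflow in the abstract. Data are synthetic stand-ins.
import numpy as np
import lightgbm as lgb
import shap

features = [
    "WMH_score", "age", "VEGF", "IL6",
    "BDNF", "TNF_alpha", "lacune_count", "serotonin",
]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X, y, feature_name=features)

# TreeExplainer yields per-feature contributions for each prediction.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
if isinstance(sv, list):  # some shap versions return one array per class
    sv = sv[1]
# Mean absolute SHAP value per feature = global importance ranking.
print(dict(zip(features, np.abs(sv).mean(axis=0).round(3))))
```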

A brain tumor segmentation enhancement in MRI images using U-Net and transfer learning.

Pourmahboubi A, Arsalani Saeed N, Tabrizchi H

PubMed, Jul 31 2025
This paper presents a novel transfer learning approach for segmenting brain tumors in Magnetic Resonance Imaging (MRI) images. Using Fluid-Attenuated Inversion Recovery (FLAIR) abnormality segmentation masks and MRI scans from The Cancer Genome Atlas (TCGA) lower-grade glioma collection, the proposed approach uses a VGG19-based U-Net architecture with fixed pretrained encoder weights. The experimental findings (an Area Under the Curve (AUC) of 0.9957, F1-score of 0.9679, Dice coefficient of 0.9679, precision of 0.9541, recall of 0.9821, and Intersection-over-Union (IoU) of 0.9378) demonstrate the effectiveness of the proposed framework. On these metrics, the VGG19-powered U-Net outperforms not only the conventional U-Net model but also the compared variants that use other pre-trained backbones in the U-Net encoder. Clinical trial registration: Not applicable, as this study used an existing, publicly available dataset and did not involve a clinical trial.
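
For reference, Dice and IoU are simple overlap ratios, and for binary masks the Dice coefficient is identical to the F1-score of the foreground class, which is why the abstract reports the same value (0.9679) for both. A minimal NumPy sketch:

```python
# Hedged sketch: Dice coefficient and IoU for binary segmentation masks.
# For binary masks, Dice equals the foreground F1-score.
import numpy as np

def dice_iou(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou

pred = np.zeros((4, 4), dtype=np.uint8); pred[1:3, 1:3] = 1    # predicted mask
target = np.zeros((4, 4), dtype=np.uint8); target[1:4, 1:4] = 1  # ground truth
d, i = dice_iou(pred, target)
print(f"Dice = {d:.3f}, IoU = {i:.3f}")  # Dice = 0.615, IoU = 0.444
```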

An interpretable CT-based machine learning model for predicting recurrence risk in stage II colorectal cancer.

Wu Z, Gong L, Luo J, Chen X, Yang F, Wen J, Hao Y, Wang Z, Gu R, Zhang Y, Liao H, Wen G

PubMed, Jul 31 2025
This study aimed to develop an interpretable 3-year disease-free survival (DFS) risk prediction tool to stratify patients with stage II colorectal cancer (CRC) by integrating CT images and clinicopathological factors. A total of 769 patients with pathologically confirmed stage II CRC and DFS follow-up information were recruited from three medical centers and divided into training (n = 442), test (n = 190), and validation (n = 137) cohorts. CT-based tumor radiomics features were extracted, selected, and used to calculate a Radscore. A combined model was developed using an artificial neural network (ANN) algorithm, integrating the Radscore with significant clinicoradiological factors to classify patients into high- and low-risk groups. Model performance was assessed using the area under the curve (AUC), and feature contributions were quantified using the SHapley Additive exPlanations (SHAP) algorithm. Kaplan-Meier survival analysis revealed the prognostic stratification value of the risk groups. Fourteen radiomics features and five clinicoradiological factors were selected to construct the radiomics and clinicoradiological models, respectively. The combined model demonstrated the best performance, with AUCs of 0.811 and 0.846 in the test and validation cohorts, respectively. Kaplan-Meier curves confirmed effective patient stratification (p < 0.001) in both cohorts. A high Radscore, a rough intestinal outer edge, and advanced age were identified as key prognostic risk factors by SHAP analysis. The combined model effectively stratified patients with stage II CRC into different prognostic risk groups, aiding clinical decision-making. Integrating CT images with clinicopathological information can facilitate the identification of patients with stage II CRC who are most likely to benefit from adjuvant chemotherapy. The effectiveness of adjuvant chemotherapy for stage II colorectal cancer remains debated. A combined model successfully identified high-risk stage II colorectal cancer patients. Shapley additive explanations enhance the interpretability of the model's predictions.
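
The survival analysis step, Kaplan-Meier curves plus a log-rank test between model-assigned risk groups, can be sketched with the lifelines library; the data below are synthetic stand-ins, not the study's cohort.

```python
# Hedged sketch: Kaplan-Meier estimation and a log-rank test for
# model-derived risk groups, as in the abstract. Data are synthetic.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
# Simulated DFS times (months) and event indicators for two risk groups.
t_low = rng.exponential(60, size=100);  e_low = rng.random(100) < 0.3
t_high = rng.exponential(25, size=100); e_high = rng.random(100) < 0.6

kmf = KaplanMeierFitter()
kmf.fit(t_low, event_observed=e_low, label="low risk")
print("median DFS (low risk):", kmf.median_survival_time_)

res = logrank_test(t_low, t_high, event_observed_A=e_low, event_observed_B=e_high)
print(f"log-rank p = {res.p_value:.4g}")  # p < 0.001 indicates effective stratification
```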

Enhanced stroke risk prediction in hypertensive patients through deep learning integration of imaging and clinical data.

Li H, Zhang T, Han G, Huang Z, Xiao H, Ni Y, Liu B, Lin W, Lin Y

PubMed, Jul 31 2025
Stroke is one of the leading causes of death and disability worldwide, with a significantly elevated incidence among individuals with hypertension. Conventional risk assessment methods rely primarily on a limited set of clinical parameters and often exclude imaging-derived structural features, resulting in suboptimal predictive accuracy. This study aimed to develop a deep learning-based multimodal stroke risk prediction model that integrates carotid ultrasound imaging with multidimensional clinical data to enable precise identification of high-risk individuals among hypertensive patients. A total of 2,176 carotid artery ultrasound images from 1,088 hypertensive patients were collected. ResNet50 was employed to automatically segment the carotid intima-media and extract key structural features. These imaging features, along with clinical variables such as age, blood pressure, and smoking history, were fused using a Vision Transformer (ViT) and fed into a Radial Basis Probabilistic Neural Network (RBPNN) for risk stratification. Model performance was systematically evaluated using metrics including AUC, Dice coefficient, IoU, and precision-recall curves. The proposed multimodal fusion model achieved outstanding performance on the test set, with an AUC of 0.97, a Dice coefficient of 0.90, and an IoU of 0.80. Ablation studies demonstrated that the inclusion of the ViT and RBPNN modules significantly enhanced predictive accuracy. Subgroup analysis further confirmed the model's robust performance in high-risk populations, such as patients with diabetes or a smoking history. The deep learning-based multimodal fusion model effectively integrates carotid ultrasound imaging and clinical features, significantly improving the accuracy of stroke risk prediction in hypertensive patients. The model demonstrates strong generalizability and clinical application potential, offering a valuable tool for early screening and personalized intervention planning in stroke prevention. Clinical trial registration: Not applicable.
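
The abstract names the fusion components but not their wiring, so the sketch below shows only one plausible arrangement under stated assumptions: project pooled ResNet50 features and clinical variables into a shared token space, fuse them with a small transformer encoder standing in for the ViT stage, and classify. The RBPNN head is replaced with a plain linear layer here.

```python
# Hedged sketch: fusing CNN-derived imaging features with clinical
# variables via a small transformer encoder. Layer sizes and the
# classifier head are illustrative, not the published architecture.
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    def __init__(self, img_dim=2048, clin_dim=8, d_model=128):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, d_model)    # e.g. pooled ResNet50 features
        self.clin_proj = nn.Linear(clin_dim, d_model)  # age, blood pressure, smoking, ...
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 2)              # stroke risk: low / high

    def forward(self, img_feat, clin_feat):
        # One token per modality, fused by self-attention.
        tokens = torch.stack(
            [self.img_proj(img_feat), self.clin_proj(clin_feat)], dim=1
        )                                   # (B, 2, d_model)
        fused = self.encoder(tokens).mean(dim=1)
        return self.head(fused)

model = MultimodalFusion()
logits = model(torch.randn(4, 2048), torch.randn(4, 8))
print(logits.shape)  # torch.Size([4, 2])
```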

TA-SSM net: tri-directional attention and structured state-space model for enhanced MRI-Based diagnosis of Alzheimer's disease and mild cognitive impairment.

Bao S, Zheng F, Jiang L, Wang Q, Lyu Y

PubMed, Jul 31 2025
Early diagnosis of Alzheimer's disease (AD) and its precursor, mild cognitive impairment (MCI), is critical for effective prevention and treatment. Computer-aided diagnosis using magnetic resonance imaging (MRI) provides a cost-effective and objective approach. However, existing methods often split 3D MRI volumes into 2D slices, leading to spatial information loss and reduced diagnostic accuracy. To overcome this limitation, we propose TA-SSM Net, a deep learning model that leverages tri-directional attention and a structured state-space model (SSM) for improved MRI-based diagnosis of AD and MCI. The tri-directional attention mechanism captures spatial and contextual information from the forward, backward, and vertical directions in 3D MRI volumes, enabling effective feature fusion. Additionally, gradient checkpointing is applied within the SSM to improve processing efficiency, allowing the model to handle whole-brain scans while preserving spatial correlations. To evaluate our method, we construct a dataset from the Alzheimer's Disease Neuroimaging Initiative (ADNI) consisting of 300 AD patients, 400 MCI patients, and 400 normal controls. TA-SSM Net achieved an accuracy of 90.24% for MCI detection and 95.83% for AD detection. The results demonstrate that our approach not only improves classification accuracy but also enhances processing efficiency and maintains spatial correlations, offering a promising solution for the diagnosis of Alzheimer's disease.
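
Gradient checkpointing trades compute for memory: activations are recomputed during the backward pass instead of being stored, which is what makes whole-brain 3D inputs feasible for deep stacks. A minimal PyTorch sketch with a stand-in block (the published TA-SSM layers are not reproduced):

```python
# Hedged sketch: gradient checkpointing to cut activation memory for a
# deep stack processing long whole-brain token sequences. The block is a
# stand-in for the paper's SSM layers.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Block(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, x):
        return x + self.net(x)  # residual stand-in block

blocks = nn.ModuleList(Block(256) for _ in range(12))
x = torch.randn(2, 4096, 256, requires_grad=True)  # (batch, voxel tokens, dim)

h = x
for blk in blocks:
    # Recompute this block's activations in the backward pass
    # instead of storing them:
    h = checkpoint(blk, h, use_reentrant=False)
h.mean().backward()
print(x.grad.shape)  # gradients flow; peak activation memory is much lower
```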

Advanced multi-label brain hemorrhage segmentation using an attention-based residual U-Net model.

Lin X, Zou E, Chen W, Chen X, Lin L

PubMed, Jul 31 2025
This study aimed to develop and assess an advanced attention-based residual U-Net (ResUNet) model for accurately segmenting different types of brain hemorrhage on CT images, overcoming the limitations of manual segmentation and of current automated methods in precision and generalizability. A dataset of 1,347 patient CT scans was collected retrospectively, covering six hemorrhage types: subarachnoid hemorrhage (SAH, 231 cases), subdural hematoma (SDH, 198 cases), epidural hematoma (EDH, 236 cases), cerebral contusion (CC, 230 cases), intraventricular hemorrhage (IVH, 188 cases), and intracerebral hemorrhage (ICH, 264 cases). The dataset was divided into 80% for training with 10-fold cross-validation and 20% for testing. All CT scans were standardized to a common anatomical space, and intensity normalization was applied for uniformity. The ResUNet model includes attention mechanisms to sharpen the focus on important features and residual connections to support stable learning and efficient gradient flow. Model performance was assessed using the Dice Similarity Coefficient (DSC), Intersection over Union (IoU), and directed Hausdorff distance (dHD). The ResUNet model showed excellent performance during both training and testing. On training data, the model achieved DSC scores of 95 ± 1.2 for SAH, 94 ± 1.4 for SDH, 93 ± 1.5 for EDH, 91 ± 1.4 for CC, 89 ± 1.6 for IVH, and 93 ± 2.4 for ICH. IoU values ranged from 88 to 93, with dHD between 2.1 and 2.7 mm. Testing results confirmed strong generalization, with DSC scores of 93 for SAH, 93 for SDH, 92 for EDH, 90 for CC, 88 for IVH, and 92 for ICH. IoU values were also high, indicating precise segmentation with minimal boundary errors. The ResUNet model outperformed standard U-Net variants, achieving higher multi-label segmentation accuracy, which makes it a valuable tool for clinical applications that require fast and reliable brain hemorrhage analysis. Future research could investigate semi-supervised techniques and 3D segmentation to further enhance clinical use. Clinical trial registration: Not applicable.
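
As a rough illustration of the two ingredients named in the title, the sketch below pairs a residual convolution block with an additive attention gate of the kind used in attention U-Nets; channel counts and wiring are assumptions, since the abstract does not specify the exact configuration.

```python
# Hedged sketch: a residual convolution block plus an additive attention
# gate (Attention U-Net style). Illustrative only; not the paper's exact
# ResUNet configuration.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out),
        )
        self.skip = nn.Conv2d(c_in, c_out, 1) if c_in != c_out else nn.Identity()

    def forward(self, x):
        return torch.relu(self.conv(x) + self.skip(x))  # residual connection

class AttentionGate(nn.Module):
    """Suppresses irrelevant skip-connection features using the decoder's
    gating signal (additive attention)."""
    def __init__(self, c_skip, c_gate, c_mid):
        super().__init__()
        self.w_x = nn.Conv2d(c_skip, c_mid, 1)
        self.w_g = nn.Conv2d(c_gate, c_mid, 1)
        self.psi = nn.Sequential(nn.Conv2d(c_mid, 1, 1), nn.Sigmoid())

    def forward(self, x_skip, g):
        a = self.psi(torch.relu(self.w_x(x_skip) + self.w_g(g)))  # (B, 1, H, W)
        return x_skip * a  # attention-weighted skip features

skip = torch.randn(1, 64, 32, 32)   # encoder features
gate = torch.randn(1, 128, 32, 32)  # upsampled decoder features
print(AttentionGate(64, 128, 32)(skip, gate).shape)  # torch.Size([1, 64, 32, 32])
```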

A successive framework for brain tumor interpretation using Yolo variants.

Priyadharshini S, Bhoopalan R, Manikandan D, Ramaswamy K

PubMed, Jul 31 2025
Accurate identification and segmentation of brain tumors in Magnetic Resonance Imaging (MRI) images are critical for timely diagnosis and treatment. MRI is frequently used to diagnose these disorders, but manual evaluation of MRI images is challenging for medical professionals owing to time constraints and inter-observer variability. Computerized methods such as R-CNNs, attention models, and earlier YOLO variants face limitations due to high computational demands and suboptimal segmentation performance. To overcome these limitations, this study proposes a successive framework that evaluates YOLOv9, YOLOv10, and YOLOv11 for tumor detection and segmentation using the Figshare Brain Tumor dataset (2,100 images) and the BraTS2020 dataset (3,170 MRI slices). Preprocessing involves log transformation for intensity normalization, histogram equalization for contrast enhancement, and edge-based ROI extraction. The models were trained on 80% of the combined dataset and evaluated on the remaining 20%. YOLOv11 demonstrated superior performance, achieving 96.22% classification accuracy on BraTS2020 and 96.41% on Figshare, with an F1-score of 0.990, recall of 0.984, mAP@0.5 of 0.993, and mAP@[0.5:0.95] of 0.801 during testing. With a fast inference time of 5.3 ms per image and a balanced precision-recall profile, YOLOv11 proves to be a robust, real-time solution for brain tumor detection in clinical applications.
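
For readers who want to reproduce this kind of pipeline, the Ultralytics API exposes YOLOv11 detection/segmentation and the mAP metrics quoted above. The checkpoint and dataset file names below are hypothetical stand-ins, not artifacts released with this paper.

```python
# Hedged sketch: running a YOLOv11 segmentation model with the Ultralytics
# API, similar in spirit to the pipeline the abstract evaluates.
from ultralytics import YOLO

model = YOLO("yolo11n-seg.pt")  # generic pretrained segmentation weights
# model = YOLO("brain_tumor_yolo11-seg.pt")  # a fine-tuned checkpoint would go here

results = model("mri_slice.png", conf=0.25)  # per-image detection + masks
for r in results:
    print(r.boxes.xyxy, r.boxes.conf)  # tumor bounding boxes and confidences
    if r.masks is not None:
        print(r.masks.data.shape)      # instance segmentation masks

# Validation reports the metrics quoted in the abstract; the dataset
# config "brain_tumor.yaml" is a hypothetical file you would write.
metrics = model.val(data="brain_tumor.yaml")
print(metrics.box.map50, metrics.box.map)  # mAP@0.5 and mAP@[0.5:0.95]
```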