Page 317 of 6626611 results

Halid Ziya Yerebakan, Gerardo Hermosillo Valadez

arXiv preprint · Jul 31 2025
This study demonstrates that incorporating a consistency heuristic into the point-matching algorithm of Yerebakan et al. (2023) improves robustness in matching anatomical locations across pairs of medical images. We validated our approach on diverse longitudinal internal and public datasets spanning CT and MRI modalities. Notably, it surpasses state-of-the-art results on the Deep Lesion Tracking dataset. Additionally, we show that the method effectively addresses landmark localization. The algorithm operates efficiently on standard CPU hardware and allows configurable trade-offs between speed and robustness. The method enables high-precision navigation between medical images without requiring a machine learning model or training data.
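The abstract does not spell out the consistency heuristic; a common form in point matching is a forward-backward (cycle) check, sketched below in plain NumPy with exhaustive SSD matching. The functions `best_match` and `consistent_match`, and the patch size and tolerance parameters, are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def best_match(src, dst, pt, patch=1):
    """Exhaustive SSD template match: find the location in dst whose
    neighborhood best matches the patch around pt in src."""
    y, x = pt
    tpl = src[y - patch:y + patch + 1, x - patch:x + patch + 1]
    best, best_pt = np.inf, pt
    for j in range(patch, dst.shape[0] - patch):
        for i in range(patch, dst.shape[1] - patch):
            cand = dst[j - patch:j + patch + 1, i - patch:i + patch + 1]
            ssd = float(((tpl - cand) ** 2).sum())
            if ssd < best:
                best, best_pt = ssd, (j, i)
    return best_pt

def consistent_match(img_a, img_b, pt, tol=1):
    """Accept a forward match only if matching back from it lands
    within `tol` pixels of the starting point."""
    fwd = best_match(img_a, img_b, pt)
    back = best_match(img_b, img_a, fwd)
    ok = abs(back[0] - pt[0]) <= tol and abs(back[1] - pt[1]) <= tol
    return fwd, ok
```

On a pair of images where one is a shifted copy of the other, the forward match recovers the shift and the backward match confirms it, so the match is accepted; inconsistent matches would be rejected as unreliable.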

Shereiff Garrett, Abhinav Adhikari, Sarina Gautam, DaShawn Marquis Morris, Chandra Mani Adhikari

arXiv preprint · Jul 31 2025
Machine learning and artificial intelligence are fast-growing fields of research in which data is used to train algorithms, learn patterns, and make predictions. This approach helps solve seemingly intricate problems with high accuracy, without explicit programming, by recognizing complex relationships in data. Using a dataset of 5,824 chest X-ray images, we implement two machine learning algorithms, namely a baseline convolutional neural network (CNN) and a DenseNet-121, and analyze their performance in predicting which patients have ailments. Both the baseline CNN and DenseNet-121 perform very well on the binary classification problem presented in this work. Gradient-weighted class activation mapping shows that DenseNet-121 focuses on the essential parts of the input chest X-ray images in its decision-making more consistently than the baseline CNN.
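The core of gradient-weighted class activation mapping (Grad-CAM) is small: pool the class-score gradients over each feature channel, use the pooled values as channel weights, and keep only the positive part of the weighted sum. The NumPy sketch below assumes the feature maps and gradients have already been extracted from a trained CNN; in real use they come from a backward pass through the last convolutional layer.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from conv feature maps (C, H, W) and the gradients
    of the class score w.r.t. those maps (C, H, W)."""
    weights = gradients.mean(axis=(1, 2))              # GAP of gradients -> (C,)
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                         # ReLU: keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()                               # normalize to [0, 1]
    return cam
```

The resulting heatmap highlights the image regions that pushed the network toward its predicted class, which is how the comparison between DenseNet-121 and the baseline CNN is made.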

Solha Kang, Eugene Kim, Joris Vankerschaver, Utku Ozbulak

arXiv preprint · Jul 31 2025
Breast MRI provides high-resolution volumetric imaging critical for tumor assessment and treatment planning, yet manual interpretation of 3D scans remains labor-intensive and subjective. While AI-powered tools hold promise for accelerating medical image analysis, adoption of commercial medical AI products remains limited in low- and middle-income countries due to high license costs, proprietary software, and infrastructure demands. In this work, we investigate whether the Segment Anything Model 2 (SAM2) can be adapted for low-cost, minimal-input 3D tumor segmentation in breast MRI. Using a single bounding box annotation on one slice, we propagate segmentation predictions across the 3D volume using three different slice-wise tracking strategies: top-to-bottom, bottom-to-top, and center-outward. We evaluate these strategies across a large cohort of patients and find that center-outward propagation yields the most consistent and accurate segmentations. Despite being a zero-shot model not trained for volumetric medical data, SAM2 achieves strong segmentation performance under minimal supervision. We further analyze how segmentation performance relates to tumor size, location, and shape, identifying key failure modes. Our results suggest that general-purpose foundation models such as SAM2 can support 3D medical image analysis with minimal supervision, offering an accessible and affordable alternative for resource-constrained settings.
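The center-outward strategy can be reduced to a visiting order over slices: start at the annotated slice and alternate one slice up, one slice down, so each prediction is propagated to its nearest unvisited neighbor. This is a minimal sketch of that ordering only; the full pipeline would call SAM2 on each slice using the previous slice's mask as the prompt, which is not shown here.

```python
def center_outward_order(num_slices, center):
    """Visit the annotated center slice first, then alternate outward
    one slice at a time until the whole volume is covered."""
    order = [center]
    step = 1
    while len(order) < num_slices:
        if center + step < num_slices:
            order.append(center + step)
        if center - step >= 0:
            order.append(center - step)
        step += 1
    return order
```

Top-to-bottom and bottom-to-top propagation are the degenerate cases where the start slice sits at either end of the volume; starting at the center keeps every propagation step close to the original annotation, which is consistent with the paper's finding that this yields the most stable masks.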

Alfie Roddan, Tobias Czempiel, Chi Xu, Daniel S. Elson, Stamatia Giannarou

arXiv preprint · Jul 31 2025
Hyperspectral imaging (HSI) provides rich spectral information for medical imaging, yet faces significant challenges due to data limitations and hardware variations. We introduce SAMSA, a novel interactive segmentation framework that combines an RGB foundation model with spectral analysis. SAMSA efficiently uses user clicks to guide both RGB segmentation and spectral similarity computations. The method addresses key limitations in HSI segmentation through a unique spectral feature fusion strategy that operates independently of spectral band count and resolution. Evaluation on publicly available datasets showed 81.0% 1-click and 93.4% 5-click Dice on a neurosurgical hyperspectral dataset, and 81.1% 1-click and 89.2% 5-click Dice on an intraoperative porcine hyperspectral dataset. Experimental results demonstrate SAMSA's effectiveness in few-shot and zero-shot scenarios with minimal training examples. Our approach enables seamless integration of datasets with different spectral characteristics, providing a flexible framework for hyperspectral medical image analysis.

Kunpeng Qiu, Zhiying Zhou, Yongxin Guo

arXiv preprint · Jul 31 2025
Medical image annotation is constrained by privacy concerns and labor-intensive labeling, significantly limiting the performance and generalization of segmentation models. While mask-controllable diffusion models excel in synthesis, they struggle with precise lesion-mask alignment. We propose Adaptively Distilled ControlNet, a task-agnostic framework that accelerates training and optimization through dual-model distillation. Specifically, during training, a teacher model conditioned on mask-image pairs regularizes a mask-only student model via predicted-noise alignment in parameter space, further enhanced by adaptive regularization based on lesion-background ratios. During sampling, only the student model is used, enabling privacy-preserving medical image generation. Comprehensive evaluations on two distinct medical datasets demonstrate state-of-the-art performance: TransUNet improves mDice/mIoU by 2.4%/4.2% on KiTS19, while SANet achieves 2.6%/3.5% gains on Polyps, highlighting the method's effectiveness. Code is available on GitHub.
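The abstract describes noise alignment in parameter space with lesion-ratio-adaptive regularization; the sketch below is a simplified output-space stand-in that conveys the two ingredients (teacher-student noise matching, and a penalty that grows as lesions get smaller so they are not washed out). The specific weighting rule `1 / lesion_ratio` is a hypothetical choice for illustration, not the paper's.

```python
import numpy as np

def adaptive_distill_loss(student_noise, teacher_noise, mask):
    """Match the student's predicted noise to the teacher's, scaling the
    MSE by the inverse lesion-to-image ratio of the conditioning mask."""
    lesion_ratio = mask.mean()                   # fraction of lesion pixels
    weight = 1.0 / (lesion_ratio + 1e-6)         # hypothetical adaptive weight
    return weight * float(((student_noise - teacher_noise) ** 2).mean())
```

With a perfectly aligned student the loss is zero regardless of the mask; as the lesion shrinks, any residual mismatch is penalized more heavily.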

Linmin Pei, Granger Sutton, Michael Rutherford, Ulrike Wagner, Tracy Nolan, Kirk Smith, Phillip Farmer, Peter Gu, Ambar Rana, Kailing Chen, Thomas Ferleman, Brian Park, Ye Wu, Jordan Kojouharov, Gargi Singh, Jon Lemon, Tyler Willis, Milos Vukadinovic, Grant Duffy, Bryan He, David Ouyang, Marco Pereanez, Daniel Samber, Derek A. Smith, Christopher Cannistraci, Zahi Fayad, David S. Mendelson, Michele Bufano, Elmar Kotter, Hamideh Haghiri, Rajesh Baidya, Stefan Dvoretskii, Klaus H. Maier-Hein, Marco Nolden, Christopher Ablett, Silvia Siggillino, Sandeep Kaushik, Hongzhu Jiang, Sihan Xie, Zhiyu Wan, Alex Michie, Simon J Doran, Angeline Aurelia Waly, Felix A. Nathaniel Liang, Humam Arshad Mustagfirin, Michelle Grace Felicia, Kuo Po Chih, Rahul Krish, Ghulam Rasool, Nidhal Bouaynaya, Nikolas Koutsoubis, Kyle Naddeo, Kartik Pandit, Tony O'Sullivan, Raj Krish, Qinyan Pan, Scott Gustafson, Benjamin Kopchick, Laura Opsahl-Ong, Andrea Olvera-Morales, Jonathan Pinney, Kathryn Johnson, Theresa Do, Juergen Klenk, Maria Diaz, Arti Singh, Rong Chai, David A. Clunie, Fred Prior, Keyvan Farahani

arXiv preprint · Jul 31 2025
The de-identification (deID) of protected health information (PHI) and personally identifiable information (PII) is a fundamental requirement for sharing medical images, particularly through public repositories, to ensure compliance with patient privacy laws. In addition, preservation of non-PHI metadata to inform and enable downstream development of imaging artificial intelligence (AI) is an important consideration in biomedical research. The goal of MIDI-B was to provide a standardized platform for benchmarking of DICOM image deID tools based on a set of rules conformant to the HIPAA Safe Harbor regulation, the DICOM Attribute Confidentiality Profiles, and best practices in preservation of research-critical metadata, as defined by The Cancer Imaging Archive (TCIA). The challenge employed a large, diverse, multi-center, and multi-modality set of real de-identified radiology images with synthetic PHI/PII inserted. The MIDI-B Challenge consisted of three phases: training, validation, and test. Eighty individuals registered for the challenge. In the training phase, we encouraged participants to tune their algorithms using their in-house or public data. The validation and test phases utilized the DICOM images containing synthetic identifiers (of 216 and 322 subjects, respectively). Ten teams successfully completed the test phase of the challenge. To measure success of a rule-based approach to image deID, scores were computed as the percentage of correct actions from the total number of required actions. The scores ranged from 97.91% to 99.93%. Participants employed a variety of open-source and proprietary tools with customized configurations, large language models, and optical character recognition (OCR). In this paper we provide a comprehensive report on the MIDI-B Challenge's design, implementation, results, and lessons learned.
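The scoring rule described, the percentage of required actions performed correctly, is straightforward to state in code. The `(tag, action)` pair representation below is an assumption for illustration; the challenge's actual answer format is defined by the MIDI-B organizers.

```python
def deid_score(required_actions, performed_actions):
    """Score a de-identification run as the percentage of required
    (tag, action) pairs that the tool actually performed."""
    correct = sum(1 for item in required_actions if item in performed_actions)
    return 100.0 * correct / len(required_actions)
```

A tool that removes two identifiers and keeps one research-critical field as required, but misses a fourth required action, would score 75.0 under this rule; the reported team scores of 97.91% to 99.93% correspond to near-complete coverage of the required actions.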

Lemar Abdi, Francisco Caetano, Amaan Valiuddin, Christiaan Viviers, Hamdi Joudeh, Fons van der Sommen

arXiv preprint · Jul 31 2025
In medical imaging, unsupervised out-of-distribution (OOD) detection offers an attractive approach for identifying pathological cases with extremely low incidence rates. In contrast to supervised methods, OOD-based approaches function without labels and are inherently robust to data imbalances. Current generative approaches often rely on likelihood estimation or reconstruction error, but these methods can be computationally expensive, unreliable, and require retraining if the inlier data changes. These limitations hinder their ability to distinguish nominal from anomalous inputs efficiently, consistently, and robustly. We propose a reconstruction-free OOD detection method that leverages the forward diffusion trajectories of a Stein score-based denoising diffusion model (SBDDM). By capturing trajectory curvature via the estimated Stein score, our approach enables accurate anomaly scoring with only five diffusion steps. A single SBDDM pre-trained on a large, semantically aligned medical dataset generalizes effectively across multiple Near-OOD and Far-OOD benchmarks, achieving state-of-the-art performance while drastically reducing computational cost during inference. Compared to existing methods, SBDDM achieves a relative improvement of up to 10.43% and 18.10% for Near-OOD and Far-OOD detection, making it a practical building block for real-time, reliable computer-aided diagnosis.
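A reconstruction-free score along the forward trajectory can be sketched as follows: noise the input at a few increasing levels, evaluate the score model at each level, and summarize how sharply the score estimates bend across levels with discrete second differences. This is a toy NumPy illustration of the idea, with a fixed noise seed and a "higher curvature means more anomalous" convention assumed; the paper's actual SBDDM curvature estimator is not reproduced here.

```python
import numpy as np

def trajectory_curvature_score(x0, score_fn, sigmas):
    """Anomaly score from the curvature of score estimates along a short
    forward (noising) trajectory, e.g. five noise levels."""
    rng = np.random.default_rng(0)
    scores = []
    for sigma in sigmas:
        x_t = x0 + sigma * rng.standard_normal(x0.shape)  # forward diffusion step
        scores.append(score_fn(x_t, sigma))               # estimated Stein score
    scores = np.stack(scores)                             # (T, ...)
    curvature = np.diff(scores, n=2, axis=0)              # discrete 2nd derivative
    return float(np.abs(curvature).mean())                # assumed: higher = OOD
```

With a score function fit to an inlier distribution, inputs far from that distribution trace more strongly curved score trajectories and therefore receive larger anomaly scores, using only a handful of forward steps and no reconstruction.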

Su Z, Wang Y, Huang C, He Q, Lu J, Liu Z, Zhang Y, Zhao Q, Zhang Y, Cai J, Pang S, Yuan Z, Chen Z, Chen T, Lu H

PubMed paper · Jul 31 2025
Creating a 3D lumbar model and planning a personalized puncture trajectory offers an advantage in establishing the working channel for percutaneous endoscopic lumbar discectomy (PELD). However, existing 3D lumbar models seldom include lumbar nerve and dural sac reconstructions and primarily depend on CT images for preoperative trajectory planning. Our study therefore investigates the relationship between different virtual working channels and a 3D lumbar model built from automated MR image segmentation of lumbar bone, nerves, and dural sac at the L4/L5 level. Preoperative lumbar MR images of 50 patients with L4/L5 lumbar disc herniation were collected from a teaching hospital between March 2020 and July 2020. Automated MR image segmentation was first used to create a 3D model of the lumbar spine, including the L4 vertebra, L5 vertebra, intervertebral disc, L4 nerves, dural sac, and skin. Thirty cases were then randomly chosen from the segmentation results to clarify the relationship between various virtual working channels and the 3D lumbar model. A bivariate Spearman's rank correlation analysis was used. Preoperative MR images of the 50 patients (34 males, mean age 45.6 ± 6 years) were used to train and validate the automated segmentation model, which achieved mean Dice scores of 0.906, 0.891, 0.896, 0.695, 0.892, and 0.892 for the L4 vertebra, L5 vertebra, intervertebral disc, L4 nerves, dural sac, and skin, respectively. As the coronal plane angle (CPA) increased, the intersection volume involving the L4 nerves and atypical structures decreased, while the intersection volume encompassing the dural sac, L4 inferior articular process, and L5 superior articular process increased; the total intersection volume first decreased, then increased, and then decreased again.
As the cross-section angle (CSA) increased, the intersection volumes of both the L4 nerves and the dural sac rose; the intersection volume involving the L4 inferior articular process grew while that of the L5 superior articular process diminished; the overall intersection volume and the intersection volume of atypical structures first decreased and then increased. Overall, the optimal angles for L4/L5 PELD are a CSA of 15° and a CPA of 15°-20°, minimizing harm to the vertebral bones, facet joints, spinal nerves, and dural sac. Additionally, our 3D preoperative planning method could tailor puncture trajectories to individual patients, potentially advancing surgical navigation, robotics, and artificial intelligence in PELD procedures.
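The angle-versus-volume relationships above were assessed with bivariate Spearman's rank correlation, which captures exactly these monotone increases and decreases without assuming linearity. A self-contained sketch of the statistic (Pearson correlation of average ranks, handling ties) is:

```python
def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks,
    with tied values given their average rank."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1                      # extend the tied block
            avg = (i + j) / 2 + 1           # average rank for the block
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A rho near +1 corresponds to findings such as "intersection volume of the L4 nerves rises with CSA," and a rho near -1 to "intersection volume of atypical structures falls with CPA"; the non-monotone total-volume patterns are the cases where a single rank correlation is least informative.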

Zhang R, Li T, Fan F, He H, Lan L, Sun D, Xu Z, Peng S, Cao J, Xu J, Peng X, Lei M, Song H, Zhang J

PubMed paper · Jul 31 2025
Vascular depression (VaDep) is a prevalent affective disorder in older adults that significantly impacts functional status and quality of life. Early identification and intervention are crucial but largely insufficient in clinical practice due to mostly inconspicuous depressive symptoms, heterogeneous imaging manifestations, and the lack of definitive peripheral biomarkers. This study aimed to develop and validate an interpretable machine learning (ML) model for VaDep to serve as a clinical support tool. The study included 602 participants from Wuhan, China (236 VaDep patients and 366 controls) for training and internal validation from July 2020 to October 2023; an independent dataset of 171 participants from surrounding areas was used for external validation. We collected clinical data, neuropsychological assessments, blood test results, and MRI scans to develop and refine ML models through cross-validation. Feature reduction was implemented to simplify the models without compromising their performance, with validation on both internal and external datasets. The SHapley Additive exPlanations (SHAP) method was used to enhance model interpretability. The Light Gradient Boosting Machine (LGBM) model outperformed the other five of the six ML algorithms evaluated. An optimized, interpretable LGBM model with 8 key features (white matter hyperintensities score, age, vascular endothelial growth factor, interleukin-6, brain-derived neurotrophic factor, tumor necrosis factor-alpha level, lacune count, and serotonin level) demonstrated high diagnostic accuracy in both internal (AUROC = 0.937) and external (AUROC = 0.896) validation. The final model also matched, and marginally exceeded, clinician-level diagnostic performance. Our research establishes a consistent and explainable ML framework for identifying VaDep in older adults using comprehensive clinical data.
The 8 features identified in the final LGBM model provide new insights for further exploration of VaDep mechanisms and underscore the need for early identification and intervention in this vulnerable group. More attention should be paid to the affective health of older adults.
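The headline metric here, AUROC, has a simple interpretation worth keeping in mind when reading values like 0.937: it is the probability that a randomly chosen VaDep patient receives a higher model score than a randomly chosen control. A minimal sketch via the rank-sum (Mann-Whitney) identity, independent of any particular ML library:

```python
def auroc(labels, scores):
    """AUROC as the probability that a random positive outscores a random
    negative, counting score ties as one half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

An AUROC of 0.5 is chance-level ranking and 1.0 is perfect separation, so the internal 0.937 and external 0.896 figures indicate the 8-feature model ranks patients above controls in roughly nine of ten random pairings.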

Pourmahboubi A, Arsalani Saeed N, Tabrizchi H

PubMed paper · Jul 31 2025
This paper presents a novel transfer learning approach for segmenting brain tumors in Magnetic Resonance Imaging (MRI) images. Using Fluid-Attenuated Inversion Recovery (FLAIR) abnormality segmentation masks and MRI scans from The Cancer Genome Atlas (TCGA) lower-grade glioma collection, the proposed approach uses a VGG19-based U-Net architecture with fixed pretrained weights. The experimental findings, with an Area Under the Curve (AUC) of 0.9957, F1-score of 0.9679, Dice coefficient of 0.9679, precision of 0.9541, recall of 0.9821, and Intersection-over-Union (IoU) of 0.9378, show how effective the proposed framework is. On these metrics, the VGG19-powered U-Net outperforms not only the conventional U-Net model but also the compared variants that used different pre-trained backbones in the U-Net encoder. Clinical trial registration: Not applicable, as this study used an existing publicly available dataset and did not involve a clinical trial.
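The reported F1-score and Dice coefficient coincide (both 0.9679) because for binary masks they are the same quantity, and Dice relates to IoU by Dice = 2*IoU/(1 + IoU). A minimal NumPy sketch of both metrics:

```python
import numpy as np

def dice_and_iou(pred, target):
    """Dice coefficient and IoU for binary masks.
    Note Dice = 2*IoU / (1 + IoU), and Dice equals F1 for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum() + 1e-12)
    iou = inter / (union + 1e-12)
    return float(dice), float(iou)
```

As a sanity check on the paper's numbers: an IoU of 0.9378 implies a Dice of 2*0.9378/1.9378 ≈ 0.9679, matching the reported Dice and F1 exactly.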
