Consistent Point Matching

Halid Ziya Yerebakan, Gerardo Hermosillo Valadez

arXiv preprint · Jul 31, 2025
This study demonstrates that incorporating a consistency heuristic into the point-matching algorithm (Yerebakan et al., 2023) improves robustness in matching anatomical locations across pairs of medical images. We validated our approach on diverse longitudinal internal and public datasets spanning CT and MRI modalities. Notably, it surpasses state-of-the-art results on the Deep Lesion Tracking dataset. Additionally, we show that the method effectively addresses landmark localization. The algorithm operates efficiently on standard CPU hardware and allows configurable trade-offs between speed and robustness. The method enables high-precision navigation between medical images without requiring a machine learning model or training data.
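
The abstract does not spell out the heuristic, but a common form of such a check is forward-backward (cycle) consistency: a match is kept only if matching back from the target image returns close to the starting point. A minimal Python sketch, where match_fn is a hypothetical stand-in for whatever point matcher is used, not the paper's algorithm:

    import numpy as np

    def consistent_match(point, img_a, img_b, match_fn, tol_mm=2.0):
        # match_fn(point, src, dst) is a hypothetical stand-in for the
        # point matcher: it returns the location in dst corresponding
        # to `point` in src.
        p_b = match_fn(point, img_a, img_b)      # forward match: A -> B
        p_back = match_fn(p_b, img_b, img_a)     # backward match: B -> A
        # Accept the match only if the round trip lands near the start.
        if np.linalg.norm(np.asarray(p_back) - np.asarray(point)) <= tol_mm:
            return p_b
        return None  # inconsistent: reject, or retry with a wider search

Tightening tol_mm rejects more matches in exchange for higher confidence in the ones kept, at the cost of the extra backward search, in the spirit of the configurable trade-offs the abstract mentions.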

Thin-slice 2D MR Imaging of the Shoulder Joint Using Denoising Deep Learning Reconstruction Provides Higher Image Quality Than 3D MR Imaging.

Kakigi T, Sakamoto R, Arai R, Yamamoto A, Kuriyama S, Sano Y, Imai R, Numamoto H, Miyake KK, Saga T, Matsuda S, Nakamoto Y

PubMed paper · Jul 31, 2025
This study was conducted to evaluate whether thin-slice 2D fat-saturated proton density-weighted images of the shoulder joint in three imaging planes, combined with parallel imaging, the partial Fourier technique, and a denoising approach with deep learning-based reconstruction (dDLR), are more useful than 3D fat-saturated proton density multi-planar voxel images. Eighteen patients who underwent MRI of the shoulder joint at 3T were enrolled. The denoising effect of dDLR in 2D was evaluated using the coefficient of variation (CV). Qualitative evaluation of anatomical structures, noise, and artifacts in 2D after dDLR and in 3D was performed by two radiologists using a five-point Likert scale. All results were analyzed statistically, and Gwet's agreement coefficients were calculated. The CV of 2D after dDLR was significantly lower than that before dDLR (P < 0.05). Both radiologists rated 2D higher than 3D for all anatomical structures and noise (P < 0.05), except for artifacts. Gwet's agreement coefficients for anatomical structures, noise, and artifacts in both 2D and 3D indicated nearly perfect agreement between the two radiologists, and the evaluation of 2D tended to be more reproducible than that of 3D. 2D with parallel imaging, the partial Fourier technique, and dDLR proved superior to 3D for depicting shoulder joint structures with lower noise.
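
The coefficient of variation used for the quantitative evaluation is a standard relative-noise metric (standard deviation over mean intensity within a region of interest); a minimal sketch, assuming the ROI is given as a NumPy array:

    import numpy as np

    def coefficient_of_variation(roi: np.ndarray) -> float:
        # CV = standard deviation / mean of pixel intensities in an ROI;
        # a lower CV after denoising indicates reduced relative noise.
        return float(np.std(roi) / np.mean(roi))

    # Expected direction of the paper's result:
    # coefficient_of_variation(roi_after_dDLR) < coefficient_of_variation(roi_before)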

DICOM De-Identification via Hybrid AI and Rule-Based Framework for Scalable, Uncertainty-Aware Redaction

Kyle Naddeo, Nikolas Koutsoubis, Rahul Krish, Ghulam Rasool, Nidhal Bouaynaya, Tony O'Sullivan, Raj Krish

arXiv preprint · Jul 31, 2025
Access to medical imaging and associated text data has the potential to drive major advances in healthcare research and patient outcomes. However, the presence of Protected Health Information (PHI) and Personally Identifiable Information (PII) in Digital Imaging and Communications in Medicine (DICOM) files presents a significant barrier to the ethical and secure sharing of imaging datasets. This paper presents a hybrid de-identification framework developed by Impact Business Information Solutions (IBIS) that combines rule-based and AI-driven techniques with rigorous uncertainty quantification for comprehensive PHI/PII removal from both metadata and pixel data. Our approach begins with a two-tiered rule-based system targeting explicit and inferred metadata elements, further augmented by a large language model (LLM) fine-tuned for Named Entity Recognition (NER) and trained on a suite of synthetic datasets simulating realistic clinical PHI/PII. For pixel data, we employ an uncertainty-aware Faster R-CNN model to localize embedded text, extract candidate PHI via Optical Character Recognition (OCR), and apply the NER pipeline for final redaction. Crucially, uncertainty quantification provides confidence measures for AI-based detections to enhance automation reliability and enable informed human-in-the-loop verification to manage residual risks. This uncertainty-aware de-identification framework achieves robust performance across benchmark datasets and regulatory standards, including DICOM, HIPAA, and TCIA compliance metrics. By combining scalable automation, uncertainty quantification, and rigorous quality assurance, our solution addresses critical challenges in medical data de-identification and supports the secure, ethical, and trustworthy release of imaging data for research.
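
The IBIS framework itself is not public, but the human-in-the-loop triage it describes (auto-redact high-confidence PHI detections, escalate uncertain ones for manual review) can be sketched generically. The detection structure and threshold below are illustrative assumptions, not the paper's implementation:

    from dataclasses import dataclass

    @dataclass
    class TextDetection:
        box: tuple           # (x, y, w, h) region from the text detector
        text: str            # OCR output for that region
        confidence: float    # detection confidence in [0, 1]
        is_phi: bool         # NER verdict: does the text contain PHI/PII?

    def triage(detections, auto_threshold=0.95):
        # Split PHI detections into auto-redact and human-review queues.
        auto_redact, needs_review = [], []
        for d in detections:
            if not d.is_phi:
                continue                  # non-PHI text is left in place
            if d.confidence >= auto_threshold:
                auto_redact.append(d)     # confident: redact automatically
            else:
                needs_review.append(d)    # uncertain: escalate to a human
        return auto_redact, needs_review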

Topology Optimization in Medical Image Segmentation with Fast Euler Characteristic

Liu Li, Qiang Ma, Cheng Ouyang, Johannes C. Paetzold, Daniel Rueckert, Bernhard Kainz

arXiv preprint · Jul 31, 2025
Deep learning-based medical image segmentation techniques have shown promising results when evaluated based on conventional metrics such as the Dice score or Intersection-over-Union. However, these fully automatic methods often fail to meet clinically acceptable accuracy, especially when topological constraints should be observed, e.g., continuous boundaries or closed surfaces. In medical image segmentation, the correctness of a segmentation in terms of the required topological genus is sometimes even more important than pixel-wise accuracy. Existing topology-aware approaches commonly estimate and constrain the topological structure via the concept of persistent homology (PH). However, these methods are difficult to apply to high-dimensional data due to their polynomial computational complexity. To overcome this problem, we propose a novel and fast approach for topology-aware segmentation based on the Euler Characteristic ($\chi$). First, we propose a fast formulation for $\chi$ computation in both 2D and 3D. The scalar $\chi$ error between the prediction and ground truth serves as the topological evaluation metric. Then we estimate the spatial topology correctness of any segmentation network via a so-called topological violation map, i.e., a detailed map that highlights regions with $\chi$ errors. Finally, the segmentation results from an arbitrary network are refined based on the topological violation maps by a topology-aware correction network. Our experiments are conducted on both 2D and 3D datasets and show that our method can significantly improve topological correctness while preserving pixel-wise segmentation accuracy.
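
For a 2D binary mask, $\chi$ equals the number of connected components minus the number of holes, and scikit-image exposes it directly. A minimal sketch of the scalar $\chi$-error metric the abstract describes (the paper's fast formulation is not reproduced here):

    import numpy as np
    from skimage.measure import euler_number

    def chi_error(pred: np.ndarray, gt: np.ndarray) -> int:
        # In 2D, chi = #connected components - #holes. The scalar error
        # between prediction and ground truth is the topological metric.
        chi_pred = euler_number(pred.astype(bool), connectivity=2)
        chi_gt = euler_number(gt.astype(bool), connectivity=2)
        return abs(chi_pred - chi_gt)

    # Example: a ring (one component, one hole) has chi = 0, a filled
    # disk has chi = 1, so filling in the hole incurs a chi error of 1
    # even if the pixel-wise Dice score barely changes.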

Utility of Thin-slice Fat-suppressed Single-shot T2-weighted MR Imaging with Deep Learning Image Reconstruction as a Protocol for Evaluating the Pancreas.

Shimada R, Sofue K, Ueno Y, Wakayama T, Yamaguchi T, Ueshima E, Kusaka A, Hori M, Murakami T

PubMed paper · Jul 31, 2025
To compare the utility of thin-slice fat-suppressed single-shot T2-weighted imaging (T2WI) with deep learning image reconstruction (DLIR) against conventional fast spin-echo T2WI with DLIR in a pancreatic MR protocol. This retrospective study included 42 patients (mean age, 70.2 years) with pancreatic cancer who underwent gadoxetic acid-enhanced MRI. Three fat-suppressed T2WI sequences were acquired for each patient: conventional fast spin-echo with 6 mm slice thickness (FSE 6 mm) and single-shot fast spin-echo with 6 mm and 3 mm thickness (SSFSE 6 mm and SSFSE 3 mm). For quantitative analysis, the SNRs of the upper abdominal organs were compared between images with and without DLIR, and the pancreas-to-lesion contrast on DLIR images was calculated. For qualitative analysis, two abdominal radiologists independently scored image quality on a 5-point scale for FSE 6 mm, SSFSE 6 mm, and SSFSE 3 mm with DLIR. The SNRs improved significantly with DLIR for all three T2-weighted sequences in all patients (P < 0.001). The pancreas-to-lesion contrast of SSFSE 3 mm was higher than that of FSE 6 mm (P < 0.001) and tended to be higher than that of SSFSE 6 mm (P = 0.07). SSFSE 3 mm received the highest scores for pancreas edge sharpness, pancreatic duct clarity, and overall image quality, followed by SSFSE 6 mm and FSE 6 mm (P < 0.0001). SSFSE 3 mm with DLIR improved the SNR of the pancreas, pancreas-to-lesion contrast, and image quality more effectively than SSFSE 6 mm and FSE 6 mm. Thin-slice fat-suppressed single-shot T2WI with DLIR can be easily implemented in a pancreatic MR protocol.
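
The abstract does not give the exact SNR or contrast definitions; the sketch below uses common ROI-based forms (mean signal over noise SD, and a normalized signal difference), which may differ from the authors' formulas:

    import numpy as np

    def snr(signal_roi: np.ndarray, noise_roi: np.ndarray) -> float:
        # One common ROI-based SNR: mean organ signal over the standard
        # deviation of a background/noise ROI (assumed definition).
        return float(np.mean(signal_roi) / np.std(noise_roi))

    def pancreas_to_lesion_contrast(pancreas_roi, lesion_roi) -> float:
        # Illustrative normalized signal-difference contrast (assumed form).
        s_p, s_l = np.mean(pancreas_roi), np.mean(lesion_roi)
        return float(abs(s_p - s_l) / (s_p + s_l))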

Deep Learning-based Hierarchical Brain Segmentation with Preliminary Analysis of the Repeatability and Reproducibility.

Goto M, Kamagata K, Andica C, Takabayashi K, Uchida W, Goto T, Yuzawa T, Kitamura Y, Hatano T, Hattori N, Aoki S, Sakamoto H, Sakano Y, Kyogoku S, Daida H

PubMed paper · Jul 31, 2025
We developed a new deep learning-based hierarchical brain segmentation (DLHBS) method that can segment T1-weighted MR images (T1WI) into 107 brain subregions and calculate the volume of each subregion. This study aimed to evaluate the repeatability and reproducibility of volume estimation using DLHBS and compare them with those of representative brain segmentation tools such as statistical parametric mapping (SPM) and FreeSurfer (FS). Hierarchical segmentation using multiple deep learning models was employed to segment brain subregions within a clinically feasible processing time. T1WI and brain mask pairs from 486 subjects were used to train the deep learning segmentation models. Training data were generated using a multi-atlas registration-based method, and their high quality was confirmed through visual evaluation and manual correction by neuroradiologists. Scan-rescan 3D-T1WI data from 11 healthy subjects were obtained using three MRI scanners to evaluate repeatability and reproducibility. The volumes of eight ROIs (gray matter, white matter, cerebrospinal fluid, hippocampus, orbital gyrus, cerebellum posterior lobe, putamen, and thalamus) were obtained using DLHBS, SPM 12 with default settings, and FS with the "recon-all" pipeline, and were used to evaluate repeatability and reproducibility. In the volume measurements, the bilateral thalamus showed higher repeatability with DLHBS than with SPM, and DLHBS demonstrated higher repeatability than FS across all eight ROIs. Additionally, higher reproducibility was observed with DLHBS in both hemispheres of six ROIs compared with SPM and of five ROIs compared with FS. DLHBS showed lower repeatability or reproducibility in no comparison. Overall, DLHBS achieved the best repeatability and reproducibility among the three tools.
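
The abstract does not name the repeatability metric; one standard choice for scan-rescan volumetry is the within-subject coefficient of variation, sketched below under that assumption:

    import numpy as np

    def within_subject_cv(volumes: np.ndarray) -> float:
        # volumes: (n_subjects, n_scans) array of ROI volumes for one
        # structure; returns the mean within-subject CV in percent.
        # Lower values indicate better scan-rescan repeatability.
        means = volumes.mean(axis=1)
        sds = volumes.std(axis=1, ddof=1)
        return float(np.mean(sds / means) * 100.0)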

Hybrid optimization enabled Eff-FDMNet for Parkinson's disease detection and classification in federated learning.

Subramaniam S, Balakrishnan U

PubMed paper · Jul 31, 2025
Parkinson's Disease (PD) is a progressive neurodegenerative disorder, and early diagnosis is crucial for managing symptoms and slowing disease progression. This paper proposes a framework named Federated Learning Enabled Waterwheel Shuffled Shepherd Optimization-based Efficient-Fuzzy Deep Maxout Network (FedL_WSSO based Eff-FDMNet) for PD detection and classification. In the local training model, each input image from the Image and Data Archive (IDA) database is preprocessed using a Gaussian filter, followed by image augmentation and feature extraction. The resulting outputs are used for PD detection with a Shepard Convolutional Neural Network Fuzzy Zeiler and Fergus Net (ShCNN-Fuzzy-ZFNet). PD classification is then performed using Eff-FDMNet, which is trained with WSSO. Finally, local updates and aggregation on the server are modified based on CAViaR. The developed method achieved an accuracy of 0.927, a mean average precision of 0.905, a false positive rate (FPR) of 0.082, a loss of 0.073, a Mean Squared Error (MSE) of 0.213, and a Root Mean Squared Error (RMSE) of 0.461. The high accuracy and low error rates indicate that the framework can enhance patient outcomes by enabling more reliable and personalized diagnosis.
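
The CAViaR-based server update is specific to this paper, but the aggregation step it modifies is typically plain federated averaging; a generic FedAvg sketch, shown for context rather than as the authors' variant:

    import numpy as np

    def federated_average(client_weights, client_sizes):
        # Generic FedAvg: the server averages client model weights,
        # weighting each client by its local dataset size. The paper
        # replaces this plain average with a CAViaR-based update
        # (not reproduced here).
        total = float(sum(client_sizes))
        global_weights = None
        for weights, n in zip(client_weights, client_sizes):
            scaled = [(n / total) * np.asarray(w) for w in weights]
            if global_weights is None:
                global_weights = scaled
            else:
                global_weights = [g + s for g, s in zip(global_weights, scaled)]
        return global_weights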

Effect of spatial resolution on the diagnostic performance of machine-learning radiomics model in lung adenocarcinoma: comparisons between normal- and high-spatial-resolution imaging for predicting invasiveness.

Yanagawa M, Nagatani Y, Hata A, Sumikawa H, Moriya H, Iwano S, Tsuchiya N, Iwasawa T, Ohno Y, Tomiyama N

PubMed paper · Jul 31, 2025
To construct two machine learning radiomics (MLR) models for invasive adenocarcinoma (IVA) prediction using normal-spatial-resolution (NSR) and high-spatial-resolution (HSR) training cohorts, and to validate the models (model-NSR and model-HSR) in a separate test cohort while comparing two independent radiologists' (R1, R2) performance with and without model-HSR. In this retrospective multicenter study, all CT images were reconstructed using NSR data (512 matrix, 0.5-mm thickness) and HSR data (2048 matrix, 0.25-mm thickness). Nodules were divided into training (n = 61 non-IVA, n = 165 IVA) and test sets (n = 36 non-IVA, n = 203 IVA). Two MLR models were developed with random forest from 172 radiomics features, using 18 significant factors for the NSR model and 19 for the HSR model. The area under the receiver operating characteristic curve (AUC) was analyzed using DeLong's test in the test set. Accuracy (acc), sensitivity (sen), and specificity (spc) of R1 and R2 with and without model-HSR were compared using the McNemar test. 437 patients (70 ± 9 years, 203 men) had 465 nodules (n = 368, IVA). Model-HSR AUCs were significantly higher than model-NSR AUCs in the training (0.839 vs. 0.723) and test sets (0.863 vs. 0.718) (p < 0.05). R1's acc (87.2%) and sen (93.1%) with model-HSR were significantly higher than without it (77.0% and 79.3%) (p < 0.0001). R2's acc (83.7%) and sen (86.7%) with model-HSR were equal to or higher than without it (83.7% and 85.7%, respectively), though not significantly (p > 0.50). Spc of R1 (52.8%) and R2 (66.7%) with model-HSR was lower than without it (63.9% and 72.2%, respectively), though not significantly (p > 0.21). The HSR-based MLR model significantly increased IVA diagnostic performance compared to the NSR-based model, supporting radiologists without compromising accuracy and sensitivity. However, this benefit came at the cost of reduced specificity, potentially increasing false positives, which may lead to unnecessary examinations or overtreatment in clinical settings.
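
The modeling step described (a random forest over selected radiomics features, evaluated by AUC) is standard; a minimal scikit-learn sketch with placeholder feature matrices sized to match the cohorts (19 HSR features, 226 training and 239 test nodules):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score

    # Placeholder data standing in for the extracted radiomics features:
    # 226 training nodules (61 non-IVA + 165 IVA), 239 test nodules.
    rng = np.random.default_rng(0)
    X_train, y_train = rng.normal(size=(226, 19)), rng.integers(0, 2, 226)
    X_test, y_test = rng.normal(size=(239, 19)), rng.integers(0, 2, 239)

    model = RandomForestClassifier(n_estimators=500, random_state=0)
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"Test AUC: {auc:.3f}")
    # Note: the paper compares AUCs with DeLong's test, which
    # scikit-learn does not provide.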

A Trust-Guided Approach to MR Image Reconstruction with Side Information.

Atalik A, Chopra S, Sodickson DK

PubMed paper · Jul 31, 2025
Reducing MRI scan times can improve patient care and lower healthcare costs. Many acceleration methods are designed to reconstruct diagnostic-quality images from sparse k-space data, via an ill-posed or ill-conditioned linear inverse problem (LIP). To address the resulting ambiguities, it is crucial to incorporate prior knowledge into the optimization problem, e.g., in the form of regularization. Another form of prior knowledge less commonly used in medical imaging is the readily available auxiliary data (a.k.a. side information) obtained from sources other than the current acquisition. In this paper, we present the Trust-Guided Variational Network (TGVN), an end-to-end deep learning framework that effectively and reliably integrates side information into LIPs. We demonstrate its effectiveness in multi-coil, multi-contrast MRI reconstruction, where incomplete or low-SNR measurements from one contrast are used as side information to reconstruct high-quality images of another contrast from heavily under-sampled data. TGVN is robust across different contrasts, anatomies, and field strengths. Compared to baselines utilizing side information, TGVN achieves superior image quality while preserving subtle pathological features even at challenging acceleration levels, drastically speeding up acquisition while minimizing hallucinations. Source code and dataset splits are available on github.com/sodicksonlab/TGVN.
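
As context for the ill-posed linear inverse problem the abstract refers to, a toy regularized least-squares reconstruction is sketched below; Tikhonov regularization stands in for the learned priors and side information that TGVN actually uses:

    import numpy as np

    # Toy under-determined problem y = A x + noise: more unknowns than
    # measurements, as in accelerated MRI with sparse k-space data.
    rng = np.random.default_rng(0)
    n, m = 64, 32                       # unknowns vs. measurements
    A = rng.normal(size=(m, n))         # stand-in forward operator
    x_true = rng.normal(size=n)
    y = A @ x_true + 0.01 * rng.normal(size=m)

    # Regularization resolves the ambiguity of the under-determined system.
    lam = 0.1                           # prior strength
    x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)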

SAM-Med3D: A Vision Foundation Model for General-Purpose Segmentation on Volumetric Medical Images.

Wang H, Guo S, Ye J, Deng Z, Cheng J, Li T, Chen J, Su Y, Huang Z, Shen Y, Fu B, Zhang S, He J

PubMed paper · Jul 31, 2025
Existing volumetric medical image segmentation models are typically task-specific, excelling at specific targets but struggling to generalize across anatomical structures or modalities. This limitation restricts their broader clinical use. In this article, we introduce segment anything model (SAM)-Med3D, a vision foundation model (VFM) for general-purpose segmentation on volumetric medical images. Given only a few 3-D prompt points, SAM-Med3D can accurately segment diverse anatomical structures and lesions across various modalities. To achieve this, we gather and preprocess a large-scale 3-D medical image segmentation dataset, SA-Med3D-140K, from 70 public datasets and 8K licensed private cases from hospitals. This dataset includes 22K 3-D images and 143K corresponding masks. SAM-Med3D, a promptable segmentation model characterized by its fully learnable 3-D structure, is trained on this dataset using a two-stage procedure and exhibits impressive performance on both seen and unseen segmentation targets. We comprehensively evaluate SAM-Med3D on 16 datasets covering diverse medical scenarios, including different anatomical structures, modalities, targets, and zero-shot transferability to new/unseen tasks. The evaluation demonstrates the efficiency and efficacy of SAM-Med3D, as well as its promising application to diverse downstream tasks as a pretrained model. Our approach illustrates that substantial medical resources can be harnessed to develop a general-purpose medical AI for various potential applications. Our dataset, code, and models are available at: https://github.com/uni-medical/SAM-Med3D.