Page 73 of 3593587 results

Hybrid optimization enabled Eff-FDMNet for Parkinson's disease detection and classification in federated learning.

Subramaniam S, Balakrishnan U

pubmed logopapers · Jul 31 2025
Parkinson's Disease (PD) is a progressive neurodegenerative disorder, and early diagnosis is crucial for managing symptoms and slowing disease progression. This paper proposes a framework named Federated Learning Enabled Waterwheel Shuffled Shepherd Optimization-based Efficient-Fuzzy Deep Maxout Network (FedL_WSSO based Eff-FDMNet) for PD detection and classification. In the local training model, each input image from the Image and Data Archive (IDA) database is preprocessed with a Gaussian filter, after which image augmentation and feature extraction are performed. The resulting outputs are used for PD detection with a Shepard Convolutional Neural Network Fuzzy Zeiler and Fergus Net (ShCNN-Fuzzy-ZFNet). PD classification is then accomplished using Eff-FDMNet, trained with WSSO. Finally, the local update and aggregation steps on the server are modified based on CAViaR. The developed method obtained the highest accuracy of 0.927, a mean average precision of 0.905, and the lowest false positive rate (FPR) of 0.082, loss of 0.073, Mean Squared Error (MSE) of 0.213, and Root Mean Squared Error (RMSE) of 0.461. The high accuracy and low error rates indicate that the framework can enhance patient outcomes by enabling more reliable and personalized diagnosis.
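The local preprocessing step described above (Gaussian smoothing followed by simple augmentation) can be sketched in plain Python; the 3-sigma kernel radius and the horizontal-flip augmentation are common-practice assumptions, not details stated in the paper:

```python
import math

def gaussian_kernel1d(sigma, radius):
    """Discrete 1-D Gaussian kernel, normalized to sum to 1."""
    k = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve1d(seq, kernel):
    """Valid convolution of a sequence after 'reflect' padding."""
    r = len(kernel) // 2
    padded = seq[r:0:-1] + seq + seq[-2:-r - 2:-1]
    return [sum(p * w for p, w in zip(padded[i:i + len(kernel)], kernel))
            for i in range(len(seq))]

def gaussian_smooth(image, sigma=1.0):
    """Separable Gaussian smoothing: filter rows, then columns."""
    kernel = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    rows = [convolve1d(row, kernel) for row in image]
    cols = [convolve1d(list(col), kernel) for col in zip(*rows)]
    return [list(row) for row in zip(*cols)]

def flip_lr(image):
    """One of the simplest augmentations: horizontal flip."""
    return [row[::-1] for row in image]
```

Because the kernel is normalized, smoothing preserves the mean intensity of flat regions, which is why it suppresses noise without shifting overall brightness.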

Deep Learning-based Hierarchical Brain Segmentation with Preliminary Analysis of the Repeatability and Reproducibility.

Goto M, Kamagata K, Andica C, Takabayashi K, Uchida W, Goto T, Yuzawa T, Kitamura Y, Hatano T, Hattori N, Aoki S, Sakamoto H, Sakano Y, Kyogoku S, Daida H

pubmed logopapers · Jul 31 2025
We developed a new deep learning-based hierarchical brain segmentation (DLHBS) method that segments T1-weighted MR images (T1WI) into 107 brain subregions and calculates the volume of each subregion. This study aimed to evaluate the repeatability and reproducibility of volume estimation using DLHBS and compare them with those of representative brain segmentation tools such as statistical parametric mapping (SPM) and FreeSurfer (FS). Hierarchical segmentation using multiple deep learning models was employed to segment brain subregions within a clinically feasible processing time. T1WI and brain-mask pairs from 486 subjects were used as training data for the deep learning segmentation models. The training data were generated using a multi-atlas registration-based method, and their high quality was confirmed through visual evaluation and manual correction by neuroradiologists. Brain 3D-T1WI scan-rescan data from 11 healthy subjects were obtained using three MRI scanners to evaluate repeatability and reproducibility. The volumes of eight ROIs (gray matter, white matter, cerebrospinal fluid, hippocampus, orbital gyrus, cerebellum posterior lobe, putamen, and thalamus) were obtained using DLHBS, SPM 12 with default settings, and FS with the "recon-all" pipeline, and were then used to evaluate repeatability and reproducibility. In the volume measurements, the bilateral thalamus showed higher repeatability with DLHBS than with SPM, and DLHBS demonstrated higher repeatability than FS across all eight ROIs. Higher reproducibility was also observed with DLHBS in both hemispheres of six ROIs compared with SPM and of five ROIs compared with FS; DLHBS showed lower repeatability or reproducibility in no comparison. Our results show that DLHBS achieved the best repeatability and reproducibility among the three tools.
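Once a label map is produced, each subregion's volume is simply its voxel count scaled by the voxel volume. A minimal sketch (the 2D toy label map and label values are hypothetical; DLHBS operates on 3D T1WI with 107 labels):

```python
def subregion_volumes(label_map, voxel_volume_mm3):
    """Map each non-background label to voxel_count * voxel_volume."""
    counts = {}
    for row in label_map:
        for label in row:
            counts[label] = counts.get(label, 0) + 1
    # Label 0 is assumed to be background and is excluded.
    return {label: n * voxel_volume_mm3
            for label, n in counts.items() if label != 0}
```

Repeatability of such volume estimates is then assessed by comparing the per-subregion values across scan-rescan sessions.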

Utility of Thin-slice Fat-suppressed Single-shot T2-weighted MR Imaging with Deep Learning Image Reconstruction as a Protocol for Evaluating the Pancreas.

Shimada R, Sofue K, Ueno Y, Wakayama T, Yamaguchi T, Ueshima E, Kusaka A, Hori M, Murakami T

pubmed logopapers · Jul 31 2025
To compare the utility of thin-slice fat-suppressed single-shot T2-weighted imaging (T2WI) with deep learning image reconstruction (DLIR) against conventional fast spin-echo T2WI with DLIR in a pancreatic MRI protocol. This retrospective study included 42 patients (mean age, 70.2 years) with pancreatic cancer who underwent gadoxetic acid-enhanced MRI. Three fat-suppressed T2WI sequences were acquired for each patient: conventional fast spin-echo with 6 mm slice thickness (FSE 6 mm) and single-shot fast spin-echo with 6 mm and 3 mm thickness (SSFSE 6 mm and SSFSE 3 mm). For quantitative analysis, the SNRs of the upper abdominal organs were compared between images with and without DLIR, and the pancreas-to-lesion contrast on DLIR images was calculated. For qualitative analysis, two abdominal radiologists independently scored image quality on a 5-point scale for FSE 6 mm, SSFSE 6 mm, and SSFSE 3 mm with DLIR. SNRs improved significantly with DLIR for all three T2-weighted sequences in all patients (P < 0.001). The pancreas-to-lesion contrast of SSFSE 3 mm was higher than that of FSE 6 mm (P < 0.001) and tended to be higher than that of SSFSE 6 mm (P = 0.07). SSFSE 3 mm received the highest image-quality scores for pancreas edge sharpness, pancreatic duct clarity, and overall image quality, followed by SSFSE 6 mm and FSE 6 mm (P < 0.0001). SSFSE 3 mm with DLIR improved pancreatic SNR, pancreas-to-lesion contrast, and image quality more effectively than SSFSE 6 mm and FSE 6 mm, and thin-slice fat-suppressed single-shot T2WI with DLIR can be easily implemented in a pancreatic MR protocol.
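The quantitative comparison rests on two simple ROI measurements. A sketch of one common way to compute them, assuming a mean-signal over noise-SD definition of SNR and a normalized contrast ratio (the abstract does not state its exact formulas):

```python
from statistics import mean, stdev

def roi_snr(signal_roi, noise_roi):
    """SNR = mean signal intensity / SD of a noise region."""
    return mean(signal_roi) / stdev(noise_roi)

def pancreas_to_lesion_contrast(pancreas_roi, lesion_roi):
    """Normalized contrast: |S_pancreas - S_lesion| / (S_pancreas + S_lesion)."""
    sp, sl = mean(pancreas_roi), mean(lesion_roi)
    return abs(sp - sl) / (sp + sl)
```

Comparing `roi_snr` on matched ROIs with and without DLIR isolates the reconstruction's denoising effect, since the signal region is held fixed.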

Topology Optimization in Medical Image Segmentation with Fast Euler Characteristic

Liu Li, Qiang Ma, Cheng Ouyang, Johannes C. Paetzold, Daniel Rueckert, Bernhard Kainz

arxiv logopreprint · Jul 31 2025
Deep learning-based medical image segmentation techniques have shown promising results when evaluated with conventional metrics such as the Dice score or Intersection-over-Union. However, these fully automatic methods often fail to meet clinically acceptable accuracy, especially when topological constraints should be observed, e.g., continuous boundaries or closed surfaces. In medical image segmentation, the correctness of a segmentation in terms of the required topological genus is sometimes even more important than pixel-wise accuracy. Existing topology-aware approaches commonly estimate and constrain the topological structure via the concept of persistent homology (PH). However, these methods are difficult to apply to high-dimensional data due to their polynomial computational complexity. To overcome this problem, we propose a novel and fast approach for topology-aware segmentation based on the Euler Characteristic ($\chi$). First, we propose a fast formulation for $\chi$ computation in both 2D and 3D. The scalar $\chi$ error between the prediction and ground truth serves as the topological evaluation metric. Then we estimate the spatial topology correctness of any segmentation network via a so-called topological violation map, i.e., a detailed map that highlights regions with $\chi$ errors. Finally, the segmentation results from an arbitrary network are refined based on the topological violation maps by a topology-aware correction network. Our experiments are conducted on both 2D and 3D datasets and show that our method can significantly improve topological correctness while preserving pixel-wise segmentation accuracy.
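In 2D, $\chi$ can be computed exactly from the cubical complex of foreground pixels as vertices minus edges plus faces. The sketch below (plain Python, assuming a binary list-of-lists image) illustrates the quantity the paper's metric is built on; it is not the paper's optimized formulation:

```python
def euler_characteristic(img):
    """chi = V - E + F of the cubical complex of foreground pixels.
    For a binary 2D image this equals (#components) - (#holes)."""
    verts, edges, faces = set(), set(), 0
    for y, row in enumerate(img):
        for x, fg in enumerate(row):
            if fg:
                faces += 1
                # The four corner vertices of this pixel.
                verts.update({(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)})
                # The four boundary edges, consistently oriented so that
                # edges shared between neighbouring pixels deduplicate.
                edges.update({((x, y), (x + 1, y)),
                              ((x, y), (x, y + 1)),
                              ((x + 1, y), (x + 1, y + 1)),
                              ((x, y + 1), (x + 1, y + 1))})
    return len(verts) - len(edges) + faces

def chi_error(pred, gt):
    """Scalar topological error between prediction and ground truth."""
    return abs(euler_characteristic(pred) - euler_characteristic(gt))
```

A filled square gives chi = 1, while a ring (one hole) gives chi = 0, so `chi_error` flags a prediction that has closed up or opened a hole even when its Dice score is high.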

DICOM De-Identification via Hybrid AI and Rule-Based Framework for Scalable, Uncertainty-Aware Redaction

Kyle Naddeo, Nikolas Koutsoubis, Rahul Krish, Ghulam Rasool, Nidhal Bouaynaya, Tony OSullivan, Raj Krish

arxiv logopreprint · Jul 31 2025
Access to medical imaging and associated text data has the potential to drive major advances in healthcare research and patient outcomes. However, the presence of Protected Health Information (PHI) and Personally Identifiable Information (PII) in Digital Imaging and Communications in Medicine (DICOM) files presents a significant barrier to the ethical and secure sharing of imaging datasets. This paper presents a hybrid de-identification framework developed by Impact Business Information Solutions (IBIS) that combines rule-based and AI-driven techniques with rigorous uncertainty quantification for comprehensive PHI/PII removal from both metadata and pixel data. Our approach begins with a two-tiered rule-based system targeting explicit and inferred metadata elements, further augmented by a large language model (LLM) fine-tuned for Named Entity Recognition (NER) and trained on a suite of synthetic datasets simulating realistic clinical PHI/PII. For pixel data, we employ an uncertainty-aware Faster R-CNN model to localize embedded text, extract candidate PHI via Optical Character Recognition (OCR), and apply the NER pipeline for final redaction. Crucially, uncertainty quantification provides confidence measures for AI-based detections to enhance automation reliability and enable informed human-in-the-loop verification to manage residual risks. This uncertainty-aware de-identification framework achieves robust performance across benchmark datasets and regulatory standards, including DICOM, HIPAA, and TCIA compliance metrics. By combining scalable automation, uncertainty quantification, and rigorous quality assurance, our solution addresses critical challenges in medical data de-identification and supports the secure, ethical, and trustworthy release of imaging data for research.
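The two-tiered rule-based stage can be illustrated on a plain dictionary standing in for DICOM metadata. The tag names and patterns below are illustrative assumptions, not the IBIS rule set or the DICOM confidentiality profile:

```python
import re

# Tier 1: tags whose values are explicit PHI (illustrative subset).
EXPLICIT_PHI_TAGS = {"PatientName", "PatientID", "PatientBirthDate"}

# Tier 2: a pattern for inferred PHI hiding in free-text fields.
DATE_PATTERN = re.compile(r"\b\d{4}[-/]?\d{2}[-/]?\d{2}\b")

def redact_metadata(elements):
    """Drop explicit PHI tags, then mask date-like strings in free text."""
    cleaned = {}
    for tag, value in elements.items():
        if tag in EXPLICIT_PHI_TAGS:
            cleaned[tag] = "REMOVED"
        elif isinstance(value, str):
            cleaned[tag] = DATE_PATTERN.sub("[DATE]", value)
        else:
            cleaned[tag] = value
    return cleaned
```

In the full framework, anything this deterministic pass cannot classify would be handed to the fine-tuned NER model, with its uncertainty score deciding whether a human reviewer is looped in.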

Thin-slice 2D MR Imaging of the Shoulder Joint Using Denoising Deep Learning Reconstruction Provides Higher Image Quality Than 3D MR Imaging.

Kakigi T, Sakamoto R, Arai R, Yamamoto A, Kuriyama S, Sano Y, Imai R, Numamoto H, Miyake KK, Saga T, Matsuda S, Nakamoto Y

pubmed logopapers · Jul 31 2025
This study was conducted to evaluate whether thin-slice 2D fat-saturated proton density-weighted images of the shoulder joint in three imaging planes, combined with parallel imaging, a partial Fourier technique, and a denoising approach with deep learning-based reconstruction (dDLR), are more useful than 3D fat-saturated proton density multi-planar voxel images. Eighteen patients who underwent MRI of the shoulder joint at 3T were enrolled. The denoising effect of dDLR in 2D was evaluated using the coefficient of variation (CV). Qualitative evaluation of anatomical structures, noise, and artifacts in 2D after dDLR and in 3D was performed by two radiologists using a five-point Likert scale. All results were analyzed statistically, and Gwet's agreement coefficients were calculated. The CV of 2D after dDLR was significantly lower than that before dDLR (P < 0.05). Both radiologists rated 2D higher than 3D for all anatomical structures and for noise (P < 0.05), except for artifacts. Gwet's agreement coefficients for anatomical structures, noise, and artifacts in both 2D and 3D showed nearly perfect agreement between the two radiologists, and the evaluation of 2D tended to be more reproducible than that of 3D. 2D imaging with parallel imaging, the partial Fourier technique, and dDLR proved superior to 3D for depicting shoulder joint structures with lower noise.
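The CV used to quantify the denoising effect is a standard ratio; a minimal sketch, assuming it is computed over ROI intensity values:

```python
from statistics import mean, stdev

def coefficient_of_variation(values):
    """CV = SD / mean; a lower CV after dDLR indicates stronger denoising."""
    return stdev(values) / mean(values)
```

Because CV is dimensionless, it allows the noise level of images before and after dDLR to be compared even if their overall signal scales differ.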

Consistent Point Matching

Halid Ziya Yerebakan, Gerardo Hermosillo Valadez

arxiv logopreprint · Jul 31 2025
This study demonstrates that incorporating a consistency heuristic into the point-matching algorithm \cite{yerebakan2023hierarchical} improves robustness in matching anatomical locations across pairs of medical images. We validated our approach on diverse longitudinal internal and public datasets spanning CT and MRI modalities. Notably, it surpasses state-of-the-art results on the Deep Lesion Tracking dataset. Additionally, we show that the method effectively addresses landmark localization. The algorithm operates efficiently on standard CPU hardware and allows configurable trade-offs between speed and robustness. The method enables high-precision navigation between medical images without requiring a machine learning model or training data.
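The consistency heuristic can be illustrated as a forward-backward (round-trip) check on a toy nearest-neighbour matcher. Both `nearest` and the exact acceptance rule below are illustrative assumptions, not the paper's hierarchical algorithm:

```python
def nearest(p, candidates):
    """Toy matcher: nearest neighbour by squared Euclidean distance."""
    return min(candidates,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(p, c)))

def match_points(src_points, dst_points, match_fn):
    """Keep a correspondence only if matching forward (src -> dst) and
    then backward (dst -> src) returns to the starting point."""
    consistent = []
    for p in src_points:
        q = match_fn(p, dst_points)       # forward match
        p_back = match_fn(q, src_points)  # backward match
        if p_back == p:                   # round-trip consistency
            consistent.append((p, q))
    return consistent
```

Round-trip checks of this kind discard ambiguous correspondences cheaply, which is one way a consistency heuristic can improve robustness without any learned model.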

Machine learning and machine learned prediction in chest X-ray images

Shereiff Garrett, Abhinav Adhikari, Sarina Gautam, DaShawn Marquis Morris, Chandra Mani Adhikari

arxiv logopreprint · Jul 31 2025
Machine learning and artificial intelligence are fast-growing fields of research in which data is used to train algorithms, learn patterns, and make predictions. This approach helps solve seemingly intricate problems with significant accuracy, without explicit programming, by recognizing complex relationships in data. Using 5824 chest X-ray images, we implement two machine learning algorithms, a baseline convolutional neural network (CNN) and a DenseNet-121, and analyze their predictions for identifying patients with ailments. Both the baseline CNN and DenseNet-121 perform very well on the binary classification problem presented in this work. Gradient-weighted class activation mapping shows that DenseNet-121 focuses on essential parts of the input chest X-ray images in its decision-making more reliably than the baseline CNN.

Towards Affordable Tumor Segmentation and Visualization for 3D Breast MRI Using SAM2

Solha Kang, Eugene Kim, Joris Vankerschaver, Utku Ozbulak

arxiv logopreprint · Jul 31 2025
Breast MRI provides high-resolution volumetric imaging critical for tumor assessment and treatment planning, yet manual interpretation of 3D scans remains labor-intensive and subjective. While AI-powered tools hold promise for accelerating medical image analysis, adoption of commercial medical AI products remains limited in low- and middle-income countries due to high license costs, proprietary software, and infrastructure demands. In this work, we investigate whether the Segment Anything Model 2 (SAM2) can be adapted for low-cost, minimal-input 3D tumor segmentation in breast MRI. Using a single bounding box annotation on one slice, we propagate segmentation predictions across the 3D volume using three different slice-wise tracking strategies: top-to-bottom, bottom-to-top, and center-outward. We evaluate these strategies across a large cohort of patients and find that center-outward propagation yields the most consistent and accurate segmentations. Despite being a zero-shot model not trained for volumetric medical data, SAM2 achieves strong segmentation performance under minimal supervision. We further analyze how segmentation performance relates to tumor size, location, and shape, identifying key failure modes. Our results suggest that general-purpose foundation models such as SAM2 can support 3D medical image analysis with minimal supervision, offering an accessible and affordable alternative for resource-constrained settings.
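The center-outward strategy visits slices in order of distance from the annotated slice, alternating sides so the mask is always propagated to an adjacent slice. A sketch of that ordering (defaulting the start to the middle slice is an assumption; in the paper the start is the annotated slice):

```python
def center_outward_order(num_slices, start=None):
    """Slice visit order for propagating a segmentation outward
    from a central annotated slice, alternating below/above."""
    if start is None:
        start = num_slices // 2
    order = [start]
    for offset in range(1, num_slices):
        for idx in (start - offset, start + offset):
            if 0 <= idx < num_slices:
                order.append(idx)
    return order
```

Compared with top-to-bottom or bottom-to-top sweeps, this ordering keeps every propagation step between neighbouring slices near the annotation, which is consistent with the paper's finding that it yields the most stable segmentations.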

SAMSA: Segment Anything Model Enhanced with Spectral Angles for Hyperspectral Interactive Medical Image Segmentation

Alfie Roddan, Tobias Czempiel, Chi Xu, Daniel S. Elson, Stamatia Giannarou

arxiv logopreprint · Jul 31 2025
Hyperspectral imaging (HSI) provides rich spectral information for medical imaging, yet faces significant challenges due to data limitations and hardware variations. We introduce SAMSA, a novel interactive segmentation framework that combines an RGB foundation model with spectral analysis. SAMSA efficiently uses user clicks to guide both RGB segmentation and spectral similarity computations, and it addresses key limitations in HSI segmentation through a unique spectral feature fusion strategy that operates independently of spectral band count and resolution. Evaluation on publicly available datasets showed 81.0% 1-click and 93.4% 5-click DICE on a neurosurgical dataset, and 81.1% 1-click and 89.2% 5-click DICE on an intraoperative porcine hyperspectral dataset. Experimental results demonstrate SAMSA's effectiveness in few-shot and zero-shot learning scenarios with minimal training examples. Our approach enables seamless integration of datasets with different spectral characteristics, providing a flexible framework for hyperspectral medical image analysis.
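The spectral similarity suggested by the name is the spectral angle between pixel spectra, which is invariant to illumination scaling. A minimal sketch of that standard computation (the definition is textbook spectral angle mapping, not SAMSA's full fusion strategy):

```python
import math

def spectral_angle(a, b):
    """Angle (radians) between two pixel spectra; scaling either
    spectrum by a positive constant leaves the angle unchanged."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))
```

Because the angle depends only on the direction of the spectra, two pixels of the same tissue under different illumination still score as similar, which is what makes angle-based similarity robust across HSI hardware.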
