
BrainTract: segmentation of white matter fiber tractography and analysis of structural connectivity using hybrid convolutional neural network.

Kumar PR, Shilpa B, Jha RK

PubMed · Jun 19, 2025
Tractography uses diffusion Magnetic Resonance Imaging (dMRI) to noninvasively reconstruct brain white matter (WM) tracts, and Convolutional Neural Networks (CNNs) such as U-Net have significantly advanced the accuracy of medical image segmentation. This work proposes a metaheuristic-optimized CNN architecture that combines the Inception-ResNet-V2 module and the densely connected convolutional module (DI) with the Spatial Attention U-Net (SAU-Net) architecture for segmenting WM fiber tracts and analyzing the brain's structural connectivity. The proposed network (DISAU-Net) consists of the following parts: first, the Inception-ResNet-V2 block replaces the standard convolutional layers and expands the network's width; second, the Dense-Inception block extracts features and deepens the network without requiring additional parameters; third, a down-sampling block speeds up training by reducing the size of the feature maps, and an up-sampling block restores the maps' resolution. In addition, the classifier parameters are selected with the Gray Wolf Optimization (GWO) technique to boost the performance of the CNN architecture. We validated the method by segmenting WM tracts on dMRI scans of 280 subjects from the Human Connectome Project (HCP) database. The proposed method is substantially more efficient than current methods and provides quantitative evaluation with high tract segmentation consistency, achieving an accuracy of 97.10%, a Dice score of 96.88%, a recall of 95.74%, and an F1-score of 94.79% for fiber tracts. These results indicate that the proposed method is a promising approach for segmenting WM fiber tracts and analyzing the brain's structural connectivity.
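
A minimal sketch of how Gray Wolf Optimization might be used to tune classifier hyperparameters, as the abstract describes. The objective function, search bounds, and parameter names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gwo(objective, bounds, n_wolves=10, n_iter=30, seed=0):
    """Minimal Gray Wolf Optimization sketch: minimizes `objective`
    over box-constrained hyperparameters (illustrative only)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    wolves = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
    scores = np.array([objective(w) for w in wolves])

    for t in range(n_iter):
        order = np.argsort(scores)
        alpha, beta, delta = wolves[order[:3]]     # three best wolves lead the pack
        a = 2 - 2 * t / n_iter                     # linearly decreasing coefficient
        for i in range(n_wolves):
            new_pos = np.zeros_like(wolves[i])
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += (leader - A * D) / 3.0  # average of the three leader-driven updates
            wolves[i] = np.clip(new_pos, lo, hi)
            scores[i] = objective(wolves[i])
    return wolves[np.argmin(scores)], scores.min()

# Hypothetical usage: tune learning rate and dropout to minimize validation loss.
# bounds = np.array([[1e-4, 1e-2], [0.1, 0.5]])
# best_params, best_loss = gwo(lambda p: validate_model(lr=p[0], dropout=p[1]), bounds)
```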

Multi-domain information fusion diffusion model (MDIF-DM) for limited-angle computed tomography.

Ma G, Xia D, Zhao S

PubMed · Jun 19, 2025
Background: Limited-angle computed tomography (CT) imaging suffers from severe artifacts in the reconstructed image due to incomplete projection data. Deep learning methods have been developed to address the robustness and low-contrast challenges of limited-angle CT reconstruction in a relatively effective way. Objective: To improve the contrast of current limited-angle CT reconstruction images and to enhance the robustness of the reconstruction method. Method: We propose a limited-angle CT reconstruction method that combines Fourier-domain reweighting with wavelet-domain enhancement, fusing information from different domains to obtain high-resolution reconstructed images. Results: We verified the feasibility and effectiveness of the proposed solution through experiments; the reconstruction results are improved compared with state-of-the-art methods. Conclusions: The proposed method enhances features of the original image-domain data using information from different domains, which benefits the reasonable diffusion and restoration of fine detail and texture features.
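
A hedged sketch of the general idea of fusing Fourier-domain reweighting with wavelet-domain enhancement. The specific weighting map and detail gain are assumptions for illustration, not the MDIF-DM pipeline itself.

```python
import numpy as np
import pywt

def fuse_domains(image, freq_weight, detail_gain=1.5):
    """Illustrative multi-domain fusion: reweight the 2D Fourier spectrum,
    then boost wavelet detail sub-bands (not the published MDIF-DM)."""
    # Fourier-domain reweighting with a user-supplied weight map
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    reweighted = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * freq_weight)))

    # Wavelet-domain enhancement of the high-frequency detail coefficients
    cA, (cH, cV, cD) = pywt.dwt2(reweighted, "db2")
    enhanced = pywt.idwt2(
        (cA, (cH * detail_gain, cV * detail_gain, cD * detail_gain)), "db2")
    return enhanced
```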

VesselSDF: Distance Field Priors for Vascular Network Reconstruction

Salvatore Esposito, Daniel Rebain, Arno Onken, Changjian Li, Oisin Mac Aodha

arXiv preprint · Jun 19, 2025
Accurate segmentation of vascular networks from sparse CT scan slices remains a significant challenge in medical imaging, particularly due to the thin, branching nature of vessels and the inherent sparsity between imaging planes. Existing deep learning approaches, based on binary voxel classification, often struggle with structural continuity and geometric fidelity. To address this challenge, we present VesselSDF, a novel framework that leverages signed distance fields (SDFs) for robust vessel reconstruction. Our method reformulates vessel segmentation as a continuous SDF regression problem, where each point in the volume is represented by its signed distance to the nearest vessel surface. This continuous representation inherently captures the smooth, tubular geometry of blood vessels and their branching patterns. We obtain accurate vessel reconstructions while eliminating common SDF artifacts such as floating segments, thanks to our adaptive Gaussian regularizer which ensures smoothness in regions far from vessel surfaces while producing precise geometry near the surface boundaries. Our experimental results demonstrate that VesselSDF significantly outperforms existing methods and preserves vessel geometry and connectivity, enabling more reliable vascular analysis in clinical settings.
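
A minimal PyTorch-style sketch of SDF regression with a distance-adaptive smoothness penalty in the spirit described above; the loss weights and Gaussian form are assumptions, not the authors' released objective.

```python
import torch

def sdf_loss(pred_sdf, gt_sdf, sigma=5.0, reg_weight=0.1):
    """Illustrative SDF regression loss: accurate fit near the vessel surface,
    plus a Gaussian-weighted smoothness penalty far from it (an assumed form,
    not the published VesselSDF regularizer)."""
    fit = torch.nn.functional.l1_loss(pred_sdf, gt_sdf)

    # Weight grows with distance from the surface, discouraging floating segments
    far_weight = 1.0 - torch.exp(-(gt_sdf ** 2) / (2 * sigma ** 2))
    grads = torch.gradient(pred_sdf, dim=(-3, -2, -1))
    smooth = sum((far_weight * g.abs()).mean() for g in grads)

    return fit + reg_weight * smooth
```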

Optimized YOLOv8 for enhanced breast tumor segmentation in ultrasound imaging.

Mostafa AM, Alaerjan AS, Aldughayfiq B, Allahem H, Mahmoud AA, Said W, Shabana H, Ezz M

PubMed · Jun 19, 2025
Breast cancer significantly affects people's health globally, making early and accurate diagnosis vital. While ultrasound imaging is safe and non-invasive, its manual interpretation is subjective. This study explores machine learning (ML) techniques to improve breast ultrasound image segmentation, comparing models trained on combined versus separate classes of benign and malignant tumors. The YOLOv8 object detection algorithm is applied to the image segmentation task, aiming to capitalize on its robust feature detection capabilities. We utilized a dataset of 780 ultrasound images categorized into benign and malignant classes to train several deep learning (DL) models: UNet, UNet with DenseNet-121, VGG16, VGG19, and an adapted YOLOv8. These models were evaluated in two experimental setups-training on a combined dataset and training on separate datasets for benign and malignant classes. Performance metrics such as Dice Coefficient, Intersection over Union (IoU), and mean Average Precision (mAP) were used to assess model effectiveness. The study demonstrated substantial improvements in model performance when trained on separate classes, with the UNet model's F1-score increasing from 77.80 to 84.09% and Dice Coefficient from 75.58 to 81.17%, and the adapted YOLOv8 model achieving an F1-score improvement from 93.44 to 95.29% and Dice Coefficient from 82.10 to 84.40%. These results highlight the advantage of specialized model training and the potential of using advanced object detection algorithms for segmentation tasks. This research underscores the significant potential of using specialized training strategies and innovative model adaptations in medical imaging segmentation, ultimately contributing to better patient outcomes.
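
For reference, the Dice coefficient and IoU reported above can be computed from binary masks as in the following standard snippet; this is generic metric code, not the authors' evaluation pipeline.

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice coefficient and Intersection over Union for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou
```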

Optimization of Photon-Counting CT Myelography for the Detection of CSF-Venous Fistulas Using Convolutional Neural Network Denoising: A Comparative Analysis of Reconstruction Techniques.

Madhavan AA, Zhou Z, Farnsworth PJ, Thorne J, Amrhein TJ, Kranz PG, Brinjikji W, Cutsforth-Gregory JK, Kodet ML, Weber NM, Thompson G, Diehn FE, Yu L

PubMed · Jun 19, 2025
Photon-counting detector CT myelography (PCD-CTM) is a recently described technique used for detecting spinal CSF leaks, including CSF-venous fistulas. Various image reconstruction techniques, including smoother-versus-sharper kernels and virtual monoenergetic images, are available with photon-counting CT. Moreover, denoising algorithms have shown promise in improving sharp kernel images. No prior studies have compared image quality of these different reconstructions on photon-counting CT myelography. Here, we sought to compare several image reconstructions using various parameters important for the detection of CSF-venous fistulas. We performed a retrospective review of all consecutive decubitus PCD-CTM between February 1, 2022, and August 1, 2024, at 1 institution. We included patients whose studies had the following reconstructions: Br48-40 keV virtual monoenergetic reconstruction, Br56 low-energy threshold (T3D), Qr89-T3D denoised with quantum iterative reconstruction, and Qr89-T3D denoised with a convolutional neural network algorithm. We excluded patients who had extradural CSF on preprocedural imaging or a technically unsatisfactory myelogram. All 4 reconstructions were independently reviewed by 2 neuroradiologists. Each reviewer rated spatial resolution, noise, the presence of artifacts, image quality, and diagnostic confidence (whether positive or negative) on a 1-5 scale. These metrics were compared using the Friedman test. Additionally, noise and contrast were quantitatively assessed by a third reviewer and compared. The Qr89 reconstructions demonstrated higher spatial resolution than their Br56 or Br48-40 keV counterparts. Qr89 with convolutional neural network denoising had less noise, better image quality, and improved diagnostic confidence compared with Qr89 with quantum iterative reconstruction denoising. The Br48-40 keV reconstruction had the highest contrast-to-noise ratio quantitatively. In our study, the sharpest quantitative kernel (Qr89-T3D) with convolutional neural network denoising demonstrated the best performance regarding spatial resolution, noise level, image quality, and diagnostic confidence for detecting or excluding the presence of a CSF-venous fistula.
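
A small sketch of how paired reader ratings and image contrast might be compared, assuming a scipy-based analysis; the rating array layout and ROI definitions are hypothetical, not the study's scripts.

```python
import numpy as np
from scipy.stats import friedmanchisquare

def compare_reconstructions(ratings):
    """Friedman test across paired reader ratings for four reconstructions.
    Assumes `ratings` is an (n_cases, 4) array with one column per
    reconstruction (e.g., Br48-40 keV, Br56-T3D, Qr89-QIR, Qr89-CNN)."""
    stat, p = friedmanchisquare(ratings[:, 0], ratings[:, 1],
                                ratings[:, 2], ratings[:, 3])
    return stat, p

def contrast_to_noise(roi_signal, roi_background):
    """Simple contrast-to-noise ratio estimate from two regions of interest."""
    return (roi_signal.mean() - roi_background.mean()) / roi_background.std()
```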

AGE-US: automated gestational age estimation based on fetal ultrasound images

César Díaz-Parga, Marta Nuñez-Garcia, Maria J. Carreira, Gabriel Bernardino, Nicolás Vila-Blanco

arXiv preprint · Jun 19, 2025
Being born small carries significant health risks, including increased neonatal mortality and a higher likelihood of future cardiac diseases. Accurate estimation of gestational age is critical for monitoring fetal growth, but traditional methods, such as estimation based on the last menstrual period, are difficult to apply in some situations. While ultrasound-based approaches offer greater reliability, they rely on manual measurements that introduce variability. This study presents an interpretable deep learning-based method for automated gestational age calculation, leveraging a novel segmentation architecture and distance maps to overcome dataset limitations and the scarcity of segmentation masks. Our approach achieves performance comparable to state-of-the-art models while reducing complexity, making it particularly suitable for resource-constrained settings with limited annotated data. Furthermore, our results demonstrate that the use of distance maps is particularly suitable for estimating femur endpoints.
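
A hedged illustration of building a distance map from a binary femur segmentation mask and reading off crude endpoints along its principal axis. The endpoint heuristic is an assumption for illustration only, not the AGE-US architecture.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def femur_distance_map(mask):
    """Signed-style distance map of a binary femur mask (illustrative)."""
    mask = mask.astype(bool)
    inside = distance_transform_edt(mask)     # distance to the nearest background pixel
    outside = distance_transform_edt(~mask)   # distance to the nearest foreground pixel
    return inside - outside

def endpoint_estimate(mask):
    """Crude endpoint estimate: the two mask pixels farthest apart along
    the mask's principal axis (a hypothetical post-processing step)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    centered = pts - pts.mean(axis=0)
    axis = np.linalg.svd(centered, full_matrices=False)[2][0]
    proj = centered @ axis
    return pts[np.argmin(proj)], pts[np.argmax(proj)]
```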

USING ARTIFICIAL INTELLIGENCE TO PREDICT TREATMENT OUTCOMES IN PATIENTS WITH NEUROGENIC OVERACTIVE BLADDER AND MULTIPLE SCLEROSIS

Chang, O., Lee, J., Lane, F., Demetriou, M., Chang, P.

medRxiv preprint · Jun 18, 2025
Introduction and Objectives: Many women with multiple sclerosis (MS) experience neurogenic overactive bladder (NOAB), characterized by urinary frequency, urinary urgency, and urgency incontinence. The objective of the study was to create machine learning (ML) models utilizing clinical and imaging data to predict NOAB treatment success, stratified by treatment type. Methods: This was a retrospective cohort study of female patients with a diagnosis of NOAB and MS seen at a tertiary academic center from 2017-2022. Clinical and imaging data were extracted. Three types of NOAB treatment options were evaluated: behavioral therapy, medication therapy, and minimally invasive therapies. The primary outcome, treatment success, was defined as a >50% reduction in urinary frequency or urinary urgency, or a subjective perception of treatment success. For the construction of the logistic regression ML models, bivariate analyses were performed with backward selection of variables with p-values <0.10, and clinically relevant variables were applied. For ML, the cohort was split into a training dataset (70%) and a test dataset (30%). Area under the curve (AUC) scores were calculated to evaluate model performance. Results: The 110 patients included had a mean age of 59 years (SD 14 years) and were predominantly White (91.8%) and post-menopausal (68.2%). Patients were stratified by the NOAB treatment therapy received: 70 patients (63.6%) received behavioral therapy, 58 (52.7%) medication therapy, and 44 (40%) minimally invasive therapies. On brain MRI, 63.6% of patients had >20 lesions, though the majority were not active lesions. The lesions were mostly located in the supratentorial brain (94.5%) and infratentorial brain (68.2%), as well as in the deep white matter (53.4%). On spine MRI, most of the lesions were in the cervical spine (71.8%), followed by the thoracic spine (43.7%) and lumbar spine (6.4%). After feature selection, the top 10 highest-ranking features were used to train complementary LASSO-regularized logistic regression (LR) and extreme gradient-boosted tree (XGB) models. The top-performing LR models for predicting response to behavioral, medication, and minimally invasive therapies yielded AUC values of 0.74, 0.76, and 0.83, respectively. Conclusions: Using these top-ranked features, LR models achieved AUC values of 0.74-0.83 for prediction of treatment success based on individual factors. Further prospective evaluation is needed to better characterize and validate these identified associations.
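
A minimal scikit-learn sketch of the modeling setup described above (70/30 split, LASSO-regularized logistic regression, AUC evaluation). Feature names, the feature-selection step, and the XGBoost counterpart are omitted, and nothing here reproduces the study's actual data.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_lasso_lr(X, y, seed=42):
    """LASSO-regularized logistic regression with a stratified 70/30 split
    and test-set AUC, mirroring the described setup (illustrative only)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.30, stratify=y, random_state=seed)
    model = make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="l1", solver="liblinear", C=1.0))
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    return model, auc
```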

CLAIM: Clinically-Guided LGE Augmentation for Realistic and Diverse Myocardial Scar Synthesis and Segmentation

Farheen Ramzan, Yusuf Kiberu, Nikesh Jathanna, Shahnaz Jamil-Copley, Richard H. Clayton, Chen Chen

arXiv preprint · Jun 18, 2025
Deep learning-based myocardial scar segmentation from late gadolinium enhancement (LGE) cardiac MRI has shown great potential for accurate and timely diagnosis and treatment planning for structural cardiac diseases. However, the limited availability and variability of LGE images with high-quality scar labels restrict the development of robust segmentation models. To address this, we introduce CLAIM (Clinically-Guided LGE Augmentation for Realistic and Diverse Myocardial Scar Synthesis and Segmentation), a framework for anatomically grounded scar generation and segmentation. At its core is the SMILE module (Scar Mask generation guided by cLinical knowledgE), which conditions a diffusion-based generator on the clinically adopted AHA 17-segment model to synthesize images with anatomically consistent and spatially diverse scar patterns. In addition, CLAIM employs a joint training strategy in which the scar segmentation network is optimized alongside the generator, aiming to enhance both the realism of synthesized scars and the accuracy of scar segmentation. Experimental results show that CLAIM produces anatomically coherent scar patterns and achieves higher Dice similarity with real scar distributions compared to baseline models. Our approach enables controllable and realistic myocardial scar synthesis and has demonstrated utility for downstream medical imaging tasks.
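
A very high-level sketch of the joint training idea (a segmentation network optimized alongside a conditional generator). The module interfaces and loss composition are assumptions, not the CLAIM implementation.

```python
import torch
import torch.nn.functional as F

def joint_step(generator, segmenter, real_img, real_scar, aha_mask,
               g_opt, s_opt, alpha=0.5):
    """One illustrative joint update (assumed interfaces, not the CLAIM code):
    `generator(aha_mask)` is assumed to return a synthetic LGE image and its
    scar mask; the segmenter trains on both real and synthetic pairs."""
    synth_img, synth_scar = generator(aha_mask)

    # Segmenter update on real and (detached) synthetic images
    seg_loss = (F.binary_cross_entropy_with_logits(segmenter(real_img), real_scar)
                + F.binary_cross_entropy_with_logits(segmenter(synth_img.detach()), synth_scar))
    s_opt.zero_grad(); seg_loss.backward(); s_opt.step()

    # Generator update with segmentation feedback on its own output
    gen_loss = alpha * F.binary_cross_entropy_with_logits(segmenter(synth_img), synth_scar)
    g_opt.zero_grad(); gen_loss.backward(); g_opt.step()
    return seg_loss.item(), gen_loss.item()
```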

Mono-Modalizing Extremely Heterogeneous Multi-Modal Medical Image Registration

Kyobin Choo, Hyunkyung Han, Jinyeong Kim, Chanyong Yoon, Seong Jae Hwang

arXiv preprint · Jun 18, 2025
In clinical practice, imaging modalities with functional characteristics, such as positron emission tomography (PET) and fractional anisotropy (FA), are often aligned with a structural reference (e.g., MRI, CT) for accurate interpretation or group analysis, necessitating multi-modal deformable image registration (DIR). However, due to the extreme heterogeneity of these modalities compared to standard structural scans, conventional unsupervised DIR methods struggle to learn reliable spatial mappings and often distort images. We find that the similarity metrics guiding these models fail to capture alignment between highly disparate modalities. To address this, we propose M2M-Reg (Multi-to-Mono Registration), a novel framework that trains multi-modal DIR models using only mono-modal similarity while preserving the established architectural paradigm for seamless integration into existing models. We also introduce GradCyCon, a regularizer that leverages M2M-Reg's cyclic training scheme to promote diffeomorphism. Furthermore, our framework naturally extends to a semi-supervised setting, integrating pre-aligned and unaligned pairs only, without requiring ground-truth transformations or segmentation masks. Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate that M2M-Reg achieves up to 2x higher DSC than prior methods for PET-MRI and FA-MRI registration, highlighting its effectiveness in handling highly heterogeneous multi-modal DIR. Our code is available at https://github.com/MICV-yonsei/M2M-Reg.
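
A hedged sketch of the general idea of scoring a multi-modal registration with a mono-modal similarity via a cyclic composition: warp image A into B's space and back, then compare the round trip against the original with a same-modality metric. The composition and NCC details are assumptions about the idea, not the released M2M-Reg or GradCyCon code (linked above).

```python
import torch
import torch.nn.functional as F

def normalized_cross_correlation(a, b, eps=1e-6):
    """Global NCC between two same-modality images."""
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return (a * b).mean()

def cyclic_mono_modal_loss(img_a, grid_a_to_b, grid_b_to_a):
    """Illustrative cycle loss: resample A into B's space and back using the
    two predicted sampling grids, then score the round trip with a
    mono-modal similarity (an assumed formulation)."""
    a_in_b = F.grid_sample(img_a, grid_a_to_b, align_corners=True)
    a_cycled = F.grid_sample(a_in_b, grid_b_to_a, align_corners=True)
    return 1.0 - normalized_cross_correlation(img_a, a_cycled)
```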

D2Diff: A Dual Domain Diffusion Model for Accurate Multi-Contrast MRI Synthesis

Sanuwani Dayarathna, Himashi Peiris, Kh Tohidul Islam, Tien-Tsin Wong, Zhaolin Chen

arXiv preprint · Jun 18, 2025
Multi-contrast MRI synthesis is inherently challenging due to the complex and nonlinear relationships among different contrasts. Each MRI contrast highlights unique tissue properties, but their complementary information is difficult to exploit due to variations in intensity distributions and contrast-specific textures. Existing methods for multi-contrast MRI synthesis primarily utilize spatial-domain features, which capture localized anatomical structures but struggle to model global intensity variations and distributed patterns. Conversely, frequency-domain features provide structured inter-contrast correlations but lack spatial precision, limiting their ability to retain finer details. To address this, we propose a dual-domain learning framework that integrates spatial- and frequency-domain information across multiple MRI contrasts for enhanced synthesis. Our method employs two mutually trained denoising networks, one conditioned on spatial-domain and the other on frequency-domain contrast features, through a shared critic network. Additionally, an uncertainty-driven mask loss directs the model's focus toward more critical regions, further improving synthesis accuracy. Extensive experiments show that our method outperforms SOTA baselines, and the downstream segmentation performance highlights the diagnostic value of the synthetic results.
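
A small sketch of extracting frequency-domain contrast features and applying an uncertainty-weighted reconstruction loss, in the spirit of the description above; the log-magnitude feature and the weighting form are assumptions, not the D2Diff objective.

```python
import torch

def frequency_features(img):
    """Log-magnitude spectrum of a 2D image batch as a frequency-domain feature."""
    spec = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    return torch.log1p(spec.abs())

def uncertainty_weighted_l1(pred, target, log_var):
    """Illustrative uncertainty-driven loss: regions with low predicted
    uncertainty (small log variance) are weighted more heavily."""
    weight = torch.exp(-log_var)
    return (weight * (pred - target).abs() + log_var).mean()
```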