Page 140 of 162 (1612 results)

Distance Transform Guided Mixup for Alzheimer's Detection

Zobia Batool, Huseyin Ozkan, Erchan Aptoula

arxiv logopreprintMay 28 2025
Alzheimer's detection efforts aim to develop accurate models for early disease diagnosis. Significant advances have been achieved with convolutional neural networks and vision transformer based approaches. However, medical datasets suffer heavily from class imbalance, variations in imaging protocols, and limited dataset diversity, which hinder model generalization. To overcome these challenges, this study focuses on single-domain generalization by extending the well-known mixup method. The key idea is to compute the distance transform of MRI scans, separate them spatially into multiple layers and then combine layers stemming from distinct samples to produce augmented images. The proposed approach generates diverse data while preserving the brain's structure. Experimental results show generalization performance improvement across both ADNI and AIBL datasets.
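The layer-and-recombine idea can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the layer count, equal-width distance binning, and alternating layer assignment are all assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def layered_mixup(img_a, img_b, mask, n_layers=3):
    """Hypothetical sketch of distance-transform-guided mixup:
    split the brain mask into distance-based shells, then take
    alternating shells from two different scans."""
    # Distance of each voxel to the nearest background voxel.
    dist = distance_transform_edt(mask)
    # Bin distances into n_layers equal-width shells.
    edges = np.linspace(0.0, dist.max() + 1e-6, n_layers + 1)
    layer_idx = np.digitize(dist, edges[1:-1])  # 0..n_layers-1 inside mask
    out = np.where(mask, 0.0, img_a)            # keep background from img_a
    for k in range(n_layers):
        src = img_a if k % 2 == 0 else img_b
        sel = mask & (layer_idx == k)
        out = np.where(sel, src, out)
    return out
```

Because every augmented voxel comes from one of the two real scans at the same location, gross brain structure is preserved while intensity statistics are mixed.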

High-Quality CEST Mapping With Lorentzian-Model Informed Neural Representation.

Chen C, Liu Y, Park SW, Li J, Chan KWY, Huang J, Morel JM, Chan RH

pubmed logopapersMay 28 2025
Chemical Exchange Saturation Transfer (CEST) MRI has demonstrated its remarkable ability to enhance the detection of macromolecules and metabolites with low concentrations. While CEST mapping is essential for quantifying molecular information, conventional methods face critical limitations: model-based approaches are constrained by limited sensitivity and robustness depending heavily on parameter setups, while data-driven deep learning methods lack generalizability across heterogeneous datasets and acquisition protocols. To overcome these challenges, we propose a Lorentzian-model Informed Neural Representation (LINR) framework for high-quality CEST mapping. LINR employs a self-supervised neural architecture embedding the Lorentzian equation - the fundamental biophysical model of CEST signal evolution - to directly reconstruct high-sensitivity parameter maps from raw z-spectra, eliminating dependency on labeled training data. Convergence of the self-supervised training strategy is guaranteed theoretically, ensuring LINR's mathematical validity. The superior performance of LINR in capturing CEST contrasts is revealed through comprehensive evaluations based on synthetic phantoms and in-vivo experiments (including tumor and Alzheimer's disease models). The intuitive parameter-free design enables adaptive integration into diverse CEST imaging workflows, positioning LINR as a versatile tool for non-invasive molecular diagnostics and pathophysiological discovery.
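For reference, the Lorentzian line shape that such z-spectrum models embed can be written down directly. The pool amplitudes, centers, and widths below are illustrative values, not parameters from the paper.

```python
import numpy as np

def lorentzian(offsets, amplitude, center, fwhm):
    """Single Lorentzian line, the basic building block of a CEST z-spectrum."""
    return amplitude * (fwhm / 2) ** 2 / ((fwhm / 2) ** 2 + (offsets - center) ** 2)

def z_spectrum(offsets, pools):
    """Z-spectrum modeled as 1 minus a sum of Lorentzian pools
    (water, amide, NOE, ...), each given as (amplitude, center_ppm, fwhm_ppm)."""
    z = np.ones_like(offsets, dtype=float)
    for a, c, w in pools:
        z -= lorentzian(offsets, a, c, w)
    return z
```

Fitting per-pool amplitudes and widths to the measured z-spectrum at each voxel is what conventional Lorentzian CEST mapping does; LINR instead embeds this equation inside a self-supervised network.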

Deep Separable Spatiotemporal Learning for Fast Dynamic Cardiac MRI.

Wang Z, Xiao M, Zhou Y, Wang C, Wu N, Li Y, Gong Y, Chang S, Chen Y, Zhu L, Zhou J, Cai C, Wang H, Jiang X, Guo D, Yang G, Qu X

pubmed logopapersMay 28 2025
Dynamic magnetic resonance imaging (MRI) plays an indispensable role in cardiac diagnosis. To enable fast imaging, the k-space data can be undersampled, but the image reconstruction poses a great challenge of high-dimensional processing. This challenge necessitates extensive training data in deep learning reconstruction methods. In this work, we propose a novel and efficient approach, leveraging a dimension-reduced separable learning scheme that can perform exceptionally well even with highly limited training data. We design this new approach by incorporating spatiotemporal priors into the development of a Deep Separable Spatiotemporal Learning network (DeepSSL), which unrolls an iteration process of a 2D spatiotemporal reconstruction model with both temporal low-rankness and spatial sparsity. Intermediate outputs can also be visualized to provide insights into the network behavior and enhance interpretability. Extensive results on cardiac cine datasets demonstrate that the proposed DeepSSL surpasses state-of-the-art methods both visually and quantitatively, while reducing the demand for training cases by up to 75%. Additionally, its preliminary adaptability to unseen cardiac patients has been verified through a blind reader study conducted by experienced radiologists and cardiologists. Furthermore, DeepSSL enhances the accuracy of the downstream task of cardiac segmentation and exhibits robustness in prospectively undersampled real-time cardiac MRI. DeepSSL is efficient under highly limited training data and adaptive to patients and prospective undersampling. This approach holds promise in addressing the escalating demand for high-dimensional data reconstruction in MRI applications.
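The two priors that such an unrolled reconstruction alternates between have classical proximal operators. The sketch below shows those generic steps in numpy; it is not the authors' network, just the textbook operations a low-rank-plus-sparse unrolling is built from.

```python
import numpy as np

def prox_nuclear(X, tau):
    """Singular-value soft-thresholding: the proximal step for a
    temporal low-rankness penalty on the space-time matrix X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_l1(X, tau):
    """Elementwise soft-thresholding: the proximal step for a
    spatial sparsity (L1) penalty."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)
```

In an unrolled network, each iteration applies steps like these with learned thresholds, interleaved with data-consistency updates against the undersampled k-space.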

Single Domain Generalization for Alzheimer's Detection from 3D MRIs with Pseudo-Morphological Augmentations and Contrastive Learning

Zobia Batool, Huseyin Ozkan, Erchan Aptoula

arxiv logopreprintMay 28 2025
Although Alzheimer's disease detection via MRIs has advanced significantly thanks to contemporary deep learning models, challenges such as class imbalance, protocol variations, and limited dataset diversity often hinder their generalization capacity. To address this issue, this article focuses on the single domain generalization setting, where, given the data of one domain, a model is designed and trained to achieve maximal performance on an unseen domain with a distinct distribution. Since brain morphology is known to play a crucial role in Alzheimer's diagnosis, we propose the use of learnable pseudo-morphological modules aimed at producing shape-aware, anatomically meaningful, class-specific augmentations, in combination with a supervised contrastive learning module to extract robust class-specific representations. Experiments conducted across three datasets show improved performance and generalization capacity, especially under class imbalance and imaging protocol variations. The source code will be made available upon acceptance at https://github.com/zobia111/SDG-Alzheimer.
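A supervised contrastive objective of the kind mentioned can be sketched compactly. This follows the generic SupCon formulation (same-class samples as positives), not necessarily the authors' exact loss; the temperature value is an assumption.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Illustrative supervised contrastive loss: for each anchor,
    positives are the other samples sharing its class label."""
    labels = np.asarray(labels)
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)  # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # mean log-probability over each anchor's positives, negated
    per_anchor = np.where(pos, log_prob, 0.0).sum(1) / np.maximum(pos.sum(1), 1)
    return -per_anchor.mean()
```

The loss is small when same-class embeddings cluster together and grows when positives are no closer than negatives, which is what encourages robust class-specific representations.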

Toward diffusion MRI in the diagnosis and treatment of pancreatic cancer.

Lee J, Lin T, He Y, Wu Y, Qin J

pubmed logopapersMay 28 2025
Pancreatic cancer is a highly aggressive malignancy with rising incidence and mortality rates, often diagnosed at advanced stages. Conventional imaging methods, such as computed tomography (CT) and magnetic resonance imaging (MRI), struggle to assess tumor characteristics and vascular involvement, which are crucial for treatment planning. This paper explores the potential of diffusion magnetic resonance imaging (dMRI) in enhancing pancreatic cancer diagnosis and treatment. Diffusion-based techniques, such as diffusion-weighted imaging (DWI), diffusion tensor imaging (DTI), intravoxel incoherent motion (IVIM), and diffusion kurtosis imaging (DKI), combined with emerging AI‑powered analysis, provide insights into tissue microstructure, allowing for earlier detection and improved evaluation of tumor cellularity. These methods may help assess prognosis and monitor therapy response by tracking diffusion and perfusion metrics. However, challenges remain, such as standardized protocols and robust data analysis pipelines. Ongoing research, including deep learning applications, aims to improve reliability, and dMRI shows promise in providing functional insights and improving patient outcomes. Further clinical validation is necessary to maximize its benefits.

DeepMultiConnectome: Deep Multi-Task Prediction of Structural Connectomes Directly from Diffusion MRI Tractography

Marcus J. Vroemen, Yuqian Chen, Yui Lo, Tengfei Xu, Weidong Cai, Fan Zhang, Josien P. W. Pluim, Lauren J. O'Donnell

arxiv logopreprintMay 27 2025
Diffusion MRI (dMRI) tractography enables in vivo mapping of brain structural connections, but traditional connectome generation is time-consuming and requires gray matter parcellation, posing challenges for large-scale studies. We introduce DeepMultiConnectome, a deep-learning model that predicts structural connectomes directly from tractography, bypassing the need for gray matter parcellation while supporting multiple parcellation schemes. Using a point-cloud-based neural network with multi-task learning, the model classifies streamlines according to their connected regions across two parcellation schemes, sharing a learned representation. We train and validate DeepMultiConnectome on tractography from the Human Connectome Project Young Adult dataset ($n = 1000$), labeled with 84- and 164-region gray matter parcellation schemes. DeepMultiConnectome predicts multiple structural connectomes from a whole-brain tractogram containing 3 million streamlines in approximately 40 seconds. DeepMultiConnectome is evaluated by comparing its predicted connectomes with traditional connectomes generated by the conventional approach of labeling streamlines with a gray matter parcellation. The predicted connectomes are highly correlated with traditionally generated connectomes ($r = 0.992$ for the 84-region scheme; $r = 0.986$ for the 164-region scheme) and largely preserve network properties. A test-retest analysis of DeepMultiConnectome demonstrates reproducibility comparable to traditionally generated connectomes. The predicted connectomes perform similarly to traditionally generated connectomes in predicting age and cognitive function. Overall, DeepMultiConnectome provides a scalable, fast model for generating subject-specific connectomes across multiple parcellation schemes.
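Turning per-streamline endpoint labels into the connectome matrix being compared above is a simple accumulation. This is an illustrative sketch, with label 0 assumed to mean "unassigned"; the paper's exact bookkeeping may differ.

```python
import numpy as np

def build_connectome(end_labels, n_regions):
    """Build a symmetric streamline-count connectome from each
    streamline's two endpoint region labels (1-based; 0 = unassigned)."""
    C = np.zeros((n_regions, n_regions))
    for a, b in end_labels:
        if a > 0 and b > 0:          # skip streamlines with an unassigned end
            C[a - 1, b - 1] += 1
            if a != b:               # mirror off-diagonal entries
                C[b - 1, a - 1] += 1
    return C
```

With this convention, the correlation between predicted and traditional connectomes reduces to correlating the (upper-triangular) entries of two such matrices.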

PlaNet-S: an Automatic Semantic Segmentation Model for Placenta Using U-Net and SegNeXt.

Saito I, Yamamoto S, Takaya E, Harigai A, Sato T, Kobayashi T, Takase K, Ueda T

pubmed logopapersMay 27 2025
This study aimed to develop a fully automated semantic placenta segmentation model that integrates the U-Net and SegNeXt architectures through ensemble learning. A total of 218 pregnant women with suspected placental abnormalities who underwent magnetic resonance imaging (MRI) were enrolled, yielding 1090 annotated images for developing a deep learning model for placental segmentation. The images were standardized and divided into training and test sets. The performance of the Placental Segmentation Network (PlaNet-S), which integrates U-Net and SegNeXt within an ensemble framework, was assessed using Intersection over Union (IoU) and counting connected components (CCC) against U-Net, U-Net++, and DS-transUNet. PlaNet-S had significantly higher IoU (0.78, SD = 0.10) than U-Net (0.73, SD = 0.13) (p < 0.005) and DS-transUNet (0.64, SD = 0.16) (p < 0.005), while the difference with U-Net++ (0.77, SD = 0.12) was not statistically significant. The CCC for PlaNet-S was significantly higher than that for U-Net (p < 0.005), U-Net++ (p < 0.005), and DS-transUNet (p < 0.005), matching the ground truth in 86.0%, 56.7%, 67.9%, and 20.9% of the cases, respectively. PlaNet-S achieved higher IoU than U-Net and DS-transUNet, and comparable IoU to U-Net++. Moreover, PlaNet-S significantly outperformed all three models in CCC, indicating better agreement with the ground truth. This model addresses the challenges of time-consuming physician-assisted manual segmentation and offers the potential for diverse applications in placental imaging analyses.
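The two reported metrics, IoU and connected-component agreement, are straightforward to compute. This is a generic sketch, not the study's evaluation code; the component-count comparison is one plausible reading of the CCC criterion.

```python
import numpy as np
from scipy.ndimage import label

def iou(pred, gt):
    """Intersection over Union between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def component_count_match(pred, gt):
    """True when prediction and ground truth contain the same number
    of connected components (an agreement criterion in the CCC spirit)."""
    return label(pred)[1] == label(gt)[1]
```

Component counting catches a failure mode IoU misses: a prediction that fragments one placenta into several blobs can still score a decent IoU while clearly disagreeing with the ground truth topology.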

Deep Learning Auto-segmentation of Diffuse Midline Glioma on Multimodal Magnetic Resonance Images.

Fernández-Patón M, Montoya-Filardi A, Galiana-Bordera A, Martínez-Gironés PM, Veiga-Canuto D, Martínez de Las Heras B, Cerdá-Alberich L, Martí-Bonmatí L

pubmed logopapersMay 27 2025
Diffuse midline glioma (DMG) H3 K27M-altered is a rare pediatric brainstem cancer with poor prognosis. To advance the development of predictive models and gain a deeper understanding of DMG, there is a crucial need for seamlessly integrated, highly accurate automatic tumor segmentation techniques. Only one prior method has attempted this task for this cancer; this study therefore develops a modified CNN-based 3D U-Net tool to segment DMG automatically and accurately in magnetic resonance (MR) images. The dataset consisted of 52 DMG patients and 70 images, each with T1W and T2W or FLAIR images. Three different datasets were created: T1W images, T2W or FLAIR images, and a combined set of T1W and T2W/FLAIR images. Denoising, bias field correction, spatial resampling, and normalization were applied as preprocessing steps to the MR images. Patching techniques were also used to enlarge the dataset size. For tumor segmentation, a 3D U-Net architecture with residual blocks was used. The best results were obtained for the dataset composed of all T1W and T2W/FLAIR images, reaching an average Dice Similarity Coefficient (DSC) of 0.883 on the test dataset. These results are comparable to other brain tumor segmentation models and to state-of-the-art results in DMG segmentation using fewer sequences. Our results demonstrate the effectiveness of the proposed 3D U-Net architecture for DMG tumor segmentation. This advancement holds potential for enhancing the precision of diagnostic and predictive models in the context of this challenging pediatric cancer.
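The Dice Similarity Coefficient reported here has a standard definition; a minimal sketch for binary masks (not the study's code) is:

```python
import numpy as np

def dice(pred, gt):
    """Dice Similarity Coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0
```

Unlike IoU, Dice weights the intersection twice, so for the same pair of masks DSC is always at least as high as IoU; a DSC of 0.883 corresponds to an IoU of roughly 0.79.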

Automated Body Composition Analysis Using DAFS Express on 2D MRI Slices at L3 Vertebral Level.

Akella V, Bagherinasab R, Lee H, Li JM, Nguyen L, Salehin M, Chow VTY, Popuri K, Beg MF

pubmed logopapersMay 27 2025
Body composition analysis is vital in assessing health conditions such as obesity, sarcopenia, and metabolic syndromes. MRI provides detailed images of skeletal muscle (SM), visceral adipose tissue (VAT), and subcutaneous adipose tissue (SAT), but their manual segmentation is labor-intensive and limits clinical applicability. This study validates an automated tool for MRI-based 2D body composition analysis (Data Analysis Facilitation Suite (DAFS) Express), comparing its automated measurements with expert manual segmentations using UK Biobank data. A cohort of 399 participants from the UK Biobank dataset was selected, yielding 423 single L3 slices for analysis. DAFS Express performed automated segmentations of SM, VAT, and SAT, which were then manually corrected by expert raters for validation. Evaluation metrics included Jaccard coefficients, Dice scores, intraclass correlation coefficients (ICCs), and Bland-Altman plots to assess segmentation agreement and reliability. High agreement was observed between automated and manual segmentations, with mean Jaccard scores of SM 99.03%, VAT 95.25%, and SAT 99.57%, and mean Dice scores of SM 99.51%, VAT 97.41%, and SAT 99.78%. Cross-sectional area comparisons showed consistent measurements, with automated methods closely matching manual measurements for SM and SAT, and slightly higher values for VAT (SM: auto 132.51 cm<sup>2</sup>, manual 132.36 cm<sup>2</sup>; VAT: auto 137.07 cm<sup>2</sup>, manual 134.46 cm<sup>2</sup>; SAT: auto 203.39 cm<sup>2</sup>, manual 202.85 cm<sup>2</sup>). ICCs confirmed strong reliability (SM 0.998, VAT 0.994, SAT 0.994). Bland-Altman plots revealed minimal biases, and boxplots illustrated distribution similarities across SM, VAT, and SAT areas. On average, DAFS Express took 18 s per DICOM (126.9 min in total for all 423 images) to output segmentations and a measurement PDF for each DICOM.
Automated segmentation of SM, VAT, and SAT from 2D MRI images using DAFS Express showed comparable accuracy to manual segmentation. This underscores its potential to streamline image analysis processes in research and clinical settings, enhancing diagnostic accuracy and efficiency. Future work should focus on further validation across diverse clinical applications and imaging conditions.
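The Bland-Altman bias and limits of agreement used in validations like this one can be computed directly from paired measurements; a generic sketch (not the study's analysis code) is:

```python
import numpy as np

def bland_altman(auto, manual):
    """Bland-Altman analysis for paired measurements: returns the mean
    difference (bias) and the 95% limits of agreement (bias ± 1.96 SD)."""
    diff = np.asarray(auto, dtype=float) - np.asarray(manual, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A near-zero bias with narrow limits, as reported here for SM, VAT, and SAT areas, indicates the automated tool can substitute for manual measurement across the observed range.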

Deep learning network enhances imaging quality of low-b-value diffusion-weighted imaging and improves lesion detection in prostate cancer.

Liu Z, Gu WJ, Wan FN, Chen ZZ, Kong YY, Liu XH, Ye DW, Dai B

pubmed logopapersMay 27 2025
Diffusion-weighted imaging (DWI) with higher b-values improves the detection rate for prostate cancer lesions. However, obtaining high b-value DWI requires more advanced hardware and software configurations. Here we use a novel deep learning network, NAFNet, to generate deep-learning-reconstructed (DLR<sub>1500</sub>) images from 800 b-value DWI that mimic 1500 b-value images, and evaluate its performance and lesion detection improvements based on whole-slide images (WSI). We enrolled 303 prostate cancer patients with both 800 and 1500 b-value DWI from Fudan University Shanghai Cancer Centre between 2017 and 2020. We assigned these patients to the training and validation sets in a 2:1 ratio. The testing set included 36 prostate cancer patients from an independent institute who had only preoperative DWI at 800 b-value. Two senior and two junior radiology doctors read and delineated cancer lesions on DLR<sub>1500</sub>, original 800 b-value, and 1500 b-value DWI images. WSI were used as the ground truth to assess the lesion detection improvement of DLR<sub>1500</sub> images in the testing set. After training and generation, among junior radiology doctors, the diagnostic AUC based on DLR<sub>1500</sub> images was not inferior to that based on 1500 b-value images (0.832 (0.788-0.876) vs. 0.821 (0.747-0.899), P = 0.824). The same was observed among senior radiology doctors. Furthermore, in the testing set, DLR<sub>1500</sub> images significantly enhanced junior radiology doctors' diagnostic performance relative to 800 b-value images (0.848 (0.758-0.938) vs. 0.752 (0.661-0.843), P = 0.043). DLR<sub>1500</sub> DWIs were comparable in quality to original 1500 b-value images for both junior and senior radiology doctors. NAFNet-based DWI enhancement can significantly improve the image quality of 800 b-value DWI and therefore improve the accuracy of prostate cancer lesion detection for junior radiology doctors.
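The diagnostic AUC values compared above have a simple rank-based interpretation; a minimal sketch (generic Mann-Whitney formulation, not the study's statistics code) is:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen positive case
    scores higher than a negative one (Mann-Whitney U), ties counted half."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    return ((pos > neg) + 0.5 * (pos == neg)).mean()
```

An AUC of 0.832 thus means that, given a random lesion-positive and lesion-negative pair, the reader's score ranks the positive case higher about 83% of the time.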