Page 38 of 53522 results

MATI: A GPU-accelerated toolbox for microstructural diffusion MRI simulation and data fitting with a graphical user interface.

Xu J, Devan SP, Shi D, Pamulaparthi A, Yan N, Zu Z, Smith DS, Harkins KD, Gore JC, Jiang X

pubmed · May 24, 2025
To introduce MATI (Microstructural Analysis Toolbox for Imaging), a versatile MATLAB-based toolbox that combines simulation and data-fitting capabilities for microstructural dMRI research. MATI provides a user-friendly graphical user interface that enables researchers, including those without extensive programming experience, to perform advanced simulations and data analyses for microstructural MRI research. For simulation, MATI supports arbitrary microstructural tissues and pulse sequences. For data fitting, MATI supports a range of methods, including traditional non-linear least squares, Bayesian approaches, machine learning, and dictionary matching, allowing users to tailor analyses to specific research needs. Optimized with vectorized matrix operations and high-performance numerical libraries, MATI achieves high computational efficiency, enabling rapid simulations and data fitting on CPU and GPU hardware. Although designed for microstructural dMRI, MATI's generalized framework can be extended to other imaging methods, making it a flexible and scalable tool for quantitative MRI research. MATI offers a significant step toward translating advanced microstructural MRI techniques into clinical applications.
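The non-linear least-squares fitting that MATI supports can be illustrated with a minimal Python sketch (MATI itself is MATLAB-based; the mono-exponential diffusion model, b-values, and starting guesses below are illustrative assumptions, not MATI's API):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical mono-exponential diffusion signal model: S(b) = S0 * exp(-b * ADC)
def signal_model(b, s0, adc):
    return s0 * np.exp(-b * adc)

# Synthetic noiseless "measurements" at typical b-values (s/mm^2)
b_values = np.array([0.0, 250.0, 500.0, 750.0, 1000.0])
true_s0, true_adc = 100.0, 1.0e-3  # ADC in mm^2/s
signal = signal_model(b_values, true_s0, true_adc)

# Non-linear least-squares fit, one of the fitting strategies the toolbox offers
popt, _ = curve_fit(signal_model, b_values, signal, p0=[90.0, 0.5e-3])
fitted_s0, fitted_adc = popt
print(fitted_s0, fitted_adc)
```

On noiseless data the fit recovers the ground-truth parameters; in practice Bayesian or dictionary-matching approaches trade speed for robustness to noise.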

Cross-Fusion Adaptive Feature Enhancement Transformer: Efficient high-frequency integration and sparse attention enhancement for brain MRI super-resolution.

Yang Z, Xiao H, Wang X, Zhou F, Deng T, Liu S

pubmed · May 24, 2025
High-resolution magnetic resonance imaging (MRI) is essential for diagnosing and treating brain diseases. Transformer-based approaches demonstrate strong potential in MRI super-resolution by capturing long-range dependencies effectively. However, existing Transformer-based super-resolution methods face several challenges: (1) they primarily focus on low-frequency information, neglecting the utilization of high-frequency information; (2) they lack effective mechanisms to integrate both low-frequency and high-frequency information; (3) they struggle to effectively eliminate redundant information during the reconstruction process. To address these issues, we propose the Cross-fusion Adaptive Feature Enhancement Transformer (CAFET). Our model maximizes the potential of both CNNs and Transformers. It consists of four key blocks: a high-frequency enhancement block for extracting high-frequency information; a hybrid attention block for capturing global information and local fitting, which includes channel attention and shifted rectangular window attention; a large-window fusion attention block for integrating local high-frequency features and global low-frequency features; and an adaptive sparse overlapping attention block for dynamically retaining key information and enhancing the aggregation of cross-window features. Extensive experiments validate the effectiveness of the proposed method. On the BraTS and IXI datasets, with an upsampling factor of ×2, the proposed method achieves a maximum PSNR improvement of 2.4 dB and 1.3 dB compared to state-of-the-art methods, along with an SSIM improvement of up to 0.16% and 1.42%. Similarly, at an upsampling factor of ×4, the proposed method achieves a maximum PSNR improvement of 1.04 dB and 0.3 dB over the current leading methods, along with an SSIM improvement of up to 0.25% and 1.66%. Our method is capable of reconstructing high-quality super-resolution brain MRI images, demonstrating significant clinical potential.
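The PSNR gains reported above can be made concrete with a minimal sketch of the metric (the image size, noise level, and 8-bit data range here are illustrative assumptions):

```python
import numpy as np

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = ref + rng.normal(0.0, 5.0, size=ref.shape)  # degraded reconstruction
print(round(psnr(ref, noisy), 1))
```

A 1 dB PSNR gain corresponds to roughly a 21% reduction in mean squared error, which is why the 2.4 dB improvement reported above is substantial.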

Relational Bi-level aggregation graph convolutional network with dynamic graph learning and puzzle optimization for Alzheimer's classification.

Raajasree K, Jaichandran R

pubmed · May 24, 2025
Alzheimer's disease (AD) is a neurodegenerative disorder characterized by progressive cognitive decline, necessitating early diagnosis for effective treatment. This study presents the Relational Bi-level Aggregation Graph Convolutional Network with Dynamic Graph Learning and Puzzle Optimization for Alzheimer's Classification (RBAGCN-DGL-PO-AC), using denoised T1-weighted Magnetic Resonance Images (MRIs) collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) repository. To address the impact of noise in medical imaging, the method employs advanced denoising techniques, including the Modified Spline-Kernelled Chirplet Transform (MSKCT), the Jump Gain Integral Recurrent Neural Network (JGIRNN), and the Newton Time Extracting Wavelet Transform (NTEWT), to enhance image quality. Key brain regions crucial for classification, such as the hippocampus, lateral ventricles, and posterior cingulate cortex, are segmented using Attention Guided Generalized Intuitionistic Fuzzy C-Means Clustering (AG-GIFCMC). Feature extraction and classification on the segmented outputs are performed with RBAGCN-DGL and puzzle optimization, categorizing input images into Healthy Controls (HC), Early Mild Cognitive Impairment (EMCI), Late Mild Cognitive Impairment (LMCI), and Alzheimer's Disease (AD). To assess the effectiveness of the proposed method, we systematically examined structural modifications to the RBAGCN-DGL-PO-AC model through extensive ablation studies. Experimental findings show that RBAGCN-DGL-PO-AC achieves state-of-the-art performance, with 99.25% accuracy, outperforming existing methods including MSFFGCN_ADC, CNN_CAD_DBMRI, and FCNN_ADC, while reducing training time by 28.5% and increasing inference speed by 32.7%. Hence, the RBAGCN-DGL-PO-AC method enhances AD classification by integrating denoising, segmentation, and dynamic graph-based feature extraction, achieving superior accuracy and making it a valuable tool for clinical applications, ultimately improving patient outcomes and disease management.

Classifying athletes and non-athletes by differences in spontaneous brain activity: a machine learning and fMRI study.

Peng L, Xu L, Zhang Z, Wang Z, Zhong X, Wang L, Peng Z, Xu R, Shao Y

pubmed · May 24, 2025
Different types of sports training can induce distinct changes in brain activity and function; however, it remains unclear whether there are commonalities across sports disciplines. Moreover, the relationship between these brain activity alterations and the duration of sports training requires further investigation. This study employed resting-state functional magnetic resonance imaging (rs-fMRI) to analyze spontaneous brain activity using the amplitude of low-frequency fluctuations (ALFF) and the fractional amplitude of low-frequency fluctuations (fALFF) in 86 highly trained athletes compared to 74 age- and gender-matched non-athletes. Our findings revealed significantly higher ALFF values in the Insula_R (Right Insula), OFCpost_R (Right Posterior orbital gyrus), and OFClat_R (Right Lateral orbital gyrus) in athletes compared to controls, whereas fALFF in the Postcentral_R (Right Postcentral) was notably higher in controls. Additionally, we identified a significant negative correlation between fALFF values in the Postcentral_R of athletes and their years of professional training. Utilizing machine learning algorithms, we classified brain activity patterns distinguishing athletes from non-athletes with over 96.97% accuracy. These results suggest that the functional reorganization observed in athletes' brains may signify an adaptation to prolonged training, potentially reflecting enhanced processing efficiency. This study emphasizes the importance of examining the impact of long-term sports training on brain function, which could influence cognitive and sensory systems crucial for optimal athletic performance. Furthermore, machine learning methods could be used in the future to select athletes based on differences in brain activity.
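The ALFF and fALFF measures used above can be sketched in a few lines of Python (the band limits 0.01-0.08 Hz are the conventional choice; the TR, signal frequencies, and amplitudes below are illustrative assumptions):

```python
import numpy as np

def alff(timeseries, tr, low=0.01, high=0.08):
    """Amplitude of low-frequency fluctuations: mean spectral amplitude in [low, high] Hz."""
    n = len(timeseries)
    freqs = np.fft.rfftfreq(n, d=tr)
    amp = np.abs(np.fft.rfft(timeseries - np.mean(timeseries))) / n
    band = (freqs >= low) & (freqs <= high)
    return amp[band].mean()

def falff(timeseries, tr, low=0.01, high=0.08):
    """Fractional ALFF: low-frequency amplitude as a fraction of the whole spectrum."""
    n = len(timeseries)
    freqs = np.fft.rfftfreq(n, d=tr)
    amp = np.abs(np.fft.rfft(timeseries - np.mean(timeseries))) / n
    band = (freqs >= low) & (freqs <= high)
    return amp[band].sum() / amp[1:].sum()  # skip the DC bin

# Synthetic BOLD-like voxel: a 0.05 Hz slow oscillation plus a faster 0.1 Hz
# component, sampled with TR = 2 s over 400 s
tr = 2.0
t = np.arange(0.0, 400.0, tr)
slow = np.sin(2 * np.pi * 0.05 * t)
fast = 0.3 * np.sin(2 * np.pi * 0.1 * t)
sig = slow + fast
print(falff(sig, tr))
```

A voxel dominated by slow fluctuations yields a fALFF near 1, so group differences in these values index where spontaneous low-frequency activity is concentrated.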

Symbolic and hybrid AI for brain tissue segmentation using spatial model checking.

Belmonte G, Ciancia V, Massink M

pubmed · May 24, 2025
Segmentation of 3D medical images, and brain segmentation in particular, is an important topic in neuroimaging and in radiotherapy. Overcoming the current, time-consuming practice of manual delineation of brain tumours and providing an accurate, explainable, and replicable method of segmenting the tumour area and related tissues is therefore an open research challenge. In this paper, we first propose a novel symbolic approach to brain segmentation and delineation of brain lesions based on spatial model checking. This method has its foundations in the theory of closure spaces, a generalisation of topological spaces, and spatial logics. At its core is a high-level declarative logic language for image analysis, ImgQL, and an efficient spatial model checker, VoxLogicA, exploiting state-of-the-art image analysis libraries in its model checking algorithm. We then illustrate how this technique can be combined with Machine Learning techniques, leading to a hybrid AI approach that provides accurate and explainable segmentation results. We show the results of applying the symbolic approach on several public datasets with 3D magnetic resonance (MR) images. Three datasets are provided by the 2017, 2019 and 2020 international MICCAI BraTS Challenges with 210, 259 and 293 MR images, respectively, and the fourth is the BrainWeb dataset with 20 (synthetic) 3D patient images of the normal brain. We then apply the hybrid AI method to the BraTS 2020 training set. Our segmentation results are shown to be in line with the state-of-the-art with respect to other recent approaches, both in accuracy and in computational efficiency, with the added advantage of being explainable.
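As a toy analogue of the spatial reasoning described above, a "near" modality over a binary region on a regular voxel grid behaves like morphological dilation (this is an illustration only; ImgQL/VoxLogicA define such operators over closure spaces, not plain binary morphology, and the example below is not ImgQL syntax):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def near(phi):
    """Voxels satisfying near(phi): those adjacent to (or inside) the phi region.

    On a regular grid this coincides with binary dilation with the default
    cross-shaped structuring element.
    """
    return binary_dilation(phi)

# A single "lesion" voxel in a 5x5 slice
phi = np.zeros((5, 5), dtype=bool)
phi[2, 2] = True
region = near(phi)
print(int(region.sum()))  # the voxel plus its 4-connected neighbours
```

Declarative combinations of such operators (near, surrounded-by, intensity thresholds) are what make the resulting segmentations explainable and replicable.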

Deep learning and iterative image reconstruction for head CT: Impact on image quality and radiation dose reduction-Comparative study.

Pula M, Kucharczyk E, Zdanowicz-Ratajczyk A, Dorochowicz M, Guzinski M

pubmed · May 23, 2025
<b>Background and purpose:</b> This study objectively evaluates the ability of a novel reconstruction algorithm, Deep Learning Image Reconstruction (DLIR), to improve image quality and reduce radiation dose compared with the established standard, Adaptive Statistical Iterative Reconstruction-V (ASIR-V), in unenhanced head computed tomography (CT). <b>Materials and methods:</b> A retrospective analysis of 163 consecutive unenhanced head CTs was conducted. Image quality was assessed using the objective parameters of Signal-to-Noise Ratio (SNR) and Contrast-to-Noise Ratio (CNR), derived from five regions of interest (ROIs). The evaluation of DLIR's dose-reduction ability was based on the PACS-derived parameters of dose-length product and computed tomography dose index volume (CTDIvol). <b>Results:</b> Following the application of rigorous criteria, the study comprised 35 patients. Significant image quality improvement was achieved with the implementation of DLIR, as evidenced by up to a 145% and 160% increase in SNR in the supra- and infratentorial regions, respectively. CNR measurements further confirmed the superiority of DLIR over ASIR-V, with an increase of 171.5% in the supratentorial region and 59.3% in the infratentorial region. Despite the signal improvement and noise reduction, DLIR facilitated a radiation dose reduction of up to 44% in CTDIvol. <b>Conclusion:</b> Implementation of DLIR in head CT scans enables significant image quality improvement and dose reduction compared to standard ASIR-V. However, the dose reduction feature was proven insufficient to counteract the lack of gantry angulation in wide-detector scanners.
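The SNR and CNR parameters used in the evaluation above follow standard ROI-based definitions, sketched below (the tissue means, noise level, and ROI sizes are illustrative assumptions, not values from the study):

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio: mean signal over its standard deviation within an ROI."""
    return np.mean(roi) / np.std(roi)

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissue ROIs, normalised by background noise."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi)

rng = np.random.default_rng(1)
gray = rng.normal(40.0, 4.0, size=1000)   # hypothetical grey-matter HU samples
white = rng.normal(30.0, 4.0, size=1000)  # hypothetical white-matter HU samples
air = rng.normal(0.0, 4.0, size=1000)     # background noise ROI
print(round(snr(gray), 1), round(cnr(gray, white, air), 1))
```

A denoising reconstruction raises both metrics mainly by shrinking the standard deviation terms, which is how DLIR's percentage gains arise.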

Dual Attention Residual U-Net for Accurate Brain Ultrasound Segmentation in IVH Detection

Dan Yuan, Yi Feng, Ziyun Tang

arxiv preprint · May 23, 2025
Intraventricular hemorrhage (IVH) is a severe neurological complication among premature infants, necessitating early and accurate detection from brain ultrasound (US) images to improve clinical outcomes. While recent deep learning methods offer promise for computer-aided diagnosis, challenges remain in capturing both the local spatial details and the global contextual dependencies critical for segmenting brain anatomies. In this work, we propose an enhanced Residual U-Net architecture incorporating two complementary attention mechanisms: the Convolutional Block Attention Module (CBAM) and a Sparse Attention Layer (SAL). The CBAM improves the model's ability to refine spatial and channel-wise features, while the SAL introduces a dual-branch design: sparse attention filters out low-confidence query-key pairs to suppress noise, and dense attention ensures comprehensive information propagation. Extensive experiments on the Brain US dataset demonstrate that our method achieves state-of-the-art segmentation performance, with a Dice score of 89.04% and an IoU of 81.84% for ventricle region segmentation. These results highlight the effectiveness of integrating spatial refinement and attention sparsity for robust brain anatomy detection. Code is available at: https://github.com/DanYuan001/BrainImgSegment.
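The Dice and IoU scores reported above are standard overlap metrics for binary masks; a minimal sketch (the mask shapes here are toy examples):

```python
import numpy as np

def dice(pred, target):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())

def iou(pred, target):
    """Intersection over union between two binary masks: |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union

pred = np.zeros((8, 8), dtype=bool)
target = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:6] = True    # 16 predicted voxels
target[3:7, 3:7] = True  # 16 reference voxels, partially overlapping
print(dice(pred, target), iou(pred, target))
```

Dice is always at least as large as IoU for the same masks (Dice = 2·IoU/(1+IoU)), which is consistent with the 89.04% vs 81.84% figures above.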

How We Won the ISLES'24 Challenge by Preprocessing

Tianyi Ren, Juampablo E. Heras Rivera, Hitender Oswal, Yutong Pan, William Henry, Jacob Ruzevick, Mehmet Kurt

arxiv preprint · May 23, 2025
Stroke is among the top three causes of death worldwide, and accurate identification of stroke lesion boundaries is critical for diagnosis and treatment. Supervised deep learning methods have emerged as the leading solution for stroke lesion segmentation but require large, diverse, and annotated datasets. The ISLES'24 challenge addresses this need by providing longitudinal stroke imaging data, including CT scans taken on arrival to the hospital and follow-up MRI taken 2-9 days from initial arrival, with annotations derived from follow-up MRI. Importantly, models submitted to the ISLES'24 challenge are evaluated using only CT inputs, requiring prediction of lesion progression that may not be visible in CT scans for segmentation. Our winning solution shows that a carefully designed preprocessing pipeline including deep-learning-based skull stripping and custom intensity windowing is beneficial for accurate segmentation. Combined with a standard large residual nnU-Net architecture for segmentation, this approach achieves a mean test Dice of 28.5 with a standard deviation of 21.27.
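The "custom intensity windowing" step above can be sketched as a clip-and-rescale on Hounsfield units (the window center/width values and HU samples below are illustrative assumptions, not the winning pipeline's actual settings):

```python
import numpy as np

def window_ct(hu, center=40.0, width=80.0):
    """Clip CT values to a window (default: a brain-like window) and rescale to [0, 1]."""
    low, high = center - width / 2.0, center + width / 2.0
    return (np.clip(hu, low, high) - low) / (high - low)

# Hypothetical HU samples: air, CSF-like, grey matter, bone
hu = np.array([-1000.0, 5.0, 40.0, 700.0])
print(window_ct(hu))
```

Windowing compresses the irrelevant extremes (air, bone) to the edges of the range so the network's dynamic range is spent on the soft-tissue contrast where lesions appear.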

Improvement of deep learning-based dose conversion accuracy to a Monte Carlo algorithm in proton beam therapy for head and neck cancers.

Kato R, Kadoya N, Kato T, Tozuka R, Ogawa S, Murakami M, Jingu K

pubmed · May 23, 2025
This study aimed to clarify the effectiveness of the image-rotation technique and zooming augmentation for improving the accuracy of deep learning (DL)-based dose conversion from pencil beam (PB) to Monte Carlo (MC) algorithms in proton beam therapy (PBT). We used data from 85 patients with head and neck cancer. The dataset was randomly divided into 101 plans (334 beams) for training/validation and 11 plans (34 beams) for testing. Further, we trained a DL model that takes a computed tomography (CT) image and the PB dose in a single-proton field as input and outputs the MC dose, applying the image-rotation technique and zooming augmentation. We evaluated the DL-based dose conversion accuracy in a single-proton field. The average γ-passing rates (criterion of 3%/3 mm) were 80.6 ± 6.6% for the PB dose, 87.6 ± 6.0% for the baseline model, 92.1 ± 4.7% for the image-rotation model, and 93.0 ± 5.2% for the data-augmentation model, respectively. Moreover, the average range differences for R90 were -1.5 ± 3.6% for the PB dose, 0.2 ± 2.3% for the baseline model, -0.5 ± 1.2% for the image-rotation model, and -0.5 ± 1.1% for the data-augmentation model, respectively. Both the doses and the ranges were improved by the image-rotation technique and zooming augmentation, which greatly improved the DL-based dose conversion accuracy from PB to MC. These techniques can be powerful tools for improving DL-based dose calculation accuracy in PBT.
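The 3%/3 mm γ-passing rate used above combines a dose-difference criterion with a distance-to-agreement criterion; a simplified 1-D global version can be sketched as follows (the dose profiles and grid spacing are illustrative assumptions, and clinical gamma analysis is performed in 3-D with interpolation):

```python
import numpy as np

def gamma_passing_rate(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol_mm=3.0):
    """Simplified 1-D global gamma analysis (3%/3 mm by default).

    For each reference point, gamma is the minimum over all evaluated points of
    sqrt((dose diff / dose criterion)^2 + (distance / distance criterion)^2);
    a point passes when gamma <= 1.
    """
    dose_crit = dose_tol * ref_dose.max()  # global dose criterion
    passed = 0
    for x_ref, d_ref in zip(positions, ref_dose):
        dd = (eval_dose - d_ref) / dose_crit
        dx = (positions - x_ref) / dist_tol_mm
        if np.sqrt(dd ** 2 + dx ** 2).min() <= 1.0:
            passed += 1
    return 100.0 * passed / len(ref_dose)

# Synthetic dose profiles on a 1 mm grid: evaluated dose offset by 2% of max,
# which is within the 3% dose criterion everywhere
positions = np.arange(0.0, 50.0, 1.0)
ref = np.exp(-((positions - 25.0) ** 2) / 100.0)
eval_d = ref + 0.02 * ref.max()
print(gamma_passing_rate(ref, eval_d, positions))
```

Because the evaluated profile never deviates by more than the dose criterion, every point passes; larger dose or positional errors push individual gamma values above 1 and lower the passing rate, which is the quantity the models above improve.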