
Improvement of deep learning-based dose conversion accuracy to a Monte Carlo algorithm in proton beam therapy for head and neck cancers.

Kato R, Kadoya N, Kato T, Tozuka R, Ogawa S, Murakami M, Jingu K

May 23, 2025
This study aimed to clarify the effectiveness of an image-rotation technique and zooming augmentation for improving the accuracy of deep learning (DL)-based dose conversion from pencil beam (PB) to Monte Carlo (MC) algorithms in proton beam therapy (PBT). We included 85 patients with head and neck cancer. The patient dataset was randomly divided into 101 plans (334 beams) for training/validation and 11 plans (34 beams) for testing. We trained a DL model that takes a computed tomography (CT) image and the PB dose in a single proton field as input and outputs the MC dose, applying the image-rotation technique and zooming augmentation, and then evaluated the DL-based dose conversion accuracy in a single proton field. The average γ-passing rates (3%/3 mm criterion) were 80.6 ± 6.6% for the PB dose, 87.6 ± 6.0% for the baseline model, 92.1 ± 4.7% for the image-rotation model, and 93.0 ± 5.2% for the data-augmentation model. The average range differences for R90 were -1.5 ± 3.6% for the PB dose, 0.2 ± 2.3% for the baseline model, -0.5 ± 1.2% for the image-rotation model, and -0.5 ± 1.1% for the data-augmentation model. Both dose and range accuracy were improved by the image-rotation technique and zooming augmentation, which greatly improved the DL-based dose conversion from PB to MC. These techniques can be powerful tools for improving DL-based dose calculation accuracy in PBT.
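The two augmentations named above can be sketched with standard array tools. The study's exact rotation angles, zoom factors, and interpolation settings are not stated, so the values below are illustrative assumptions (a minimal sketch, assuming SciPy is available):

```python
import numpy as np
from scipy.ndimage import rotate, zoom  # assumes SciPy is installed

def augment_rotation(volume, angle_deg, axes=(0, 1)):
    """In-plane rotation of a CT/dose volume, keeping the original shape."""
    return rotate(volume, angle_deg, axes=axes, reshape=False,
                  order=1, mode="nearest")

def augment_zoom(volume, factor):
    """Zoom a volume, then centre-crop or zero-pad back to its original shape."""
    zoomed = zoom(volume, factor, order=1)
    out = np.zeros_like(volume)
    src, dst = [], []
    for orig, new in zip(volume.shape, zoomed.shape):
        if new >= orig:                      # zoomed in: centre-crop
            start = (new - orig) // 2
            src.append(slice(start, start + orig))
            dst.append(slice(0, orig))
        else:                                # zoomed out: centre-pad
            start = (orig - new) // 2
            src.append(slice(0, new))
            dst.append(slice(start, start + new))
    out[tuple(dst)] = zoomed[tuple(src)]
    return out
```

In training, a random angle and zoom factor would typically be drawn per sample and applied identically to the CT, PB dose, and MC dose so that the input-target correspondence is preserved.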

Deep learning and iterative image reconstruction for head CT: Impact on image quality and radiation dose reduction-Comparative study.

Pula M, Kucharczyk E, Zdanowicz-Ratajczyk A, Dorochowicz M, Guzinski M

May 23, 2025
<b>Background and purpose:</b> This study objectively evaluates the ability of a novel reconstruction algorithm, Deep Learning Image Reconstruction (DLIR), to improve image quality and reduce radiation dose compared with the established standard, Adaptive Statistical Iterative Reconstruction-V (ASIR-V), in unenhanced head computed tomography (CT). <b>Materials and methods:</b> A retrospective analysis of 163 consecutive unenhanced head CTs was conducted. Image quality was assessed using the objective parameters of signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), derived from five regions of interest (ROIs). The evaluation of DLIR's dose-reduction ability was based on the PACS-derived parameters of dose-length product and CT dose index volume (CTDIvol). <b>Results:</b> After applying rigorous criteria, the study comprised 35 patients. Significant image quality improvement was achieved with DLIR, as evidenced by up to a 145% and 160% increase in SNR in the supra- and infratentorial regions, respectively. CNR measurements further confirmed the superiority of DLIR over ASIR-V, with increases of 171.5% in the supratentorial region and 59.3% in the infratentorial region. Despite the signal improvement and noise reduction, DLIR facilitated a radiation dose reduction of up to 44% in CTDIvol. <b>Conclusion:</b> Implementation of DLIR in head CT enables significant image quality improvement and dose reduction compared with standard ASIR-V. However, the dose-reduction feature proved insufficient to counteract the lack of gantry angulation in wide-detector scanners.
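The abstract does not give its exact SNR/CNR formulas; a common ROI-based definition, shown here as a minimal sketch, is mean signal over noise standard deviation:

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of one ROI: mean signal over its own standard deviation."""
    roi = np.asarray(roi, dtype=float)
    return roi.mean() / roi.std()

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissue ROIs, normalised by noise std."""
    a = np.asarray(roi_a, dtype=float)
    b = np.asarray(roi_b, dtype=float)
    return abs(a.mean() - b.mean()) / np.asarray(noise_roi, dtype=float).std()
```

The percentage improvements reported in the abstract would then be computed as the relative change in these values between DLIR and ASIR-V reconstructions of the same scan.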

Brain age prediction from MRI scans in neurodegenerative diseases.

Papouli A, Cole JH

May 22, 2025
This review explores the use of brain age estimation from MRI scans as a biomarker of brain health. With disorders like Alzheimer's and Parkinson's increasing globally, there is an urgent need for early detection tools that can identify at-risk individuals before cognitive symptoms emerge. Brain age offers a noninvasive, quantitative measure of neurobiological ageing, with applications in early diagnosis, disease monitoring, and personalized medicine. Studies show that individuals with Alzheimer's, mild cognitive impairment (MCI), and Parkinson's have older brain ages than their chronological age. Longitudinal research indicates that the brain-predicted age difference (brain-PAD) rises with disease progression and often precedes cognitive decline. Advances in deep learning and multimodal imaging have improved the accuracy and interpretability of brain age predictions. Moreover, socioeconomic disparities and environmental factors significantly affect brain ageing, highlighting the need for inclusive models. Brain age estimation is a promising biomarker for identifying future risk of neurodegenerative disease, monitoring progression, and informing prognosis. Challenges remain, including standardization, demographic biases, and interpretability. Future research should integrate brain age with other biomarkers and multimodal imaging to enhance early diagnosis and intervention strategies.
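The brain-PAD statistic discussed above is simply the model's predicted age minus chronological age; a minimal sketch:

```python
import numpy as np

def brain_pad(predicted_age, chronological_age):
    """Brain-predicted age difference (brain-PAD); positive = 'older'-looking brain."""
    return np.asarray(predicted_age, dtype=float) - np.asarray(chronological_age, dtype=float)

def mean_brain_pad(predicted, chronological):
    """Group-level summary: mean brain-PAD across a cohort."""
    return float(brain_pad(predicted, chronological).mean())
```

Group comparisons in the literature typically contrast the mean brain-PAD of a patient cohort against that of healthy controls.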

An Interpretable Deep Learning Approach for Autism Spectrum Disorder Detection in Children Using NASNet-Mobile.

K VRP, Hima Bindu C, Devi KRM

May 22, 2025
Autism spectrum disorder (ASD) is a multifaceted neurodevelopmental disorder characterized by impaired social interaction and communication together with restrictive or repetitive behaviours. Though incurable, early detection and intervention can reduce the severity of symptoms. Structural magnetic resonance imaging (sMRI) can improve diagnostic accuracy, facilitating early diagnosis and more tailored care. With the emergence of deep learning (DL), neuroimaging-based approaches for ASD diagnosis have received increasing attention. However, many existing models lack interpretability of their diagnostic decisions. The prime objective of this work is to classify ASD precisely and to interpret the classification process so as to discern the major features relevant to predicting the disorder. The proposed model employs the Neural Architecture Search Network-Mobile (NASNet-Mobile) model for ASD detection, integrated with an explainable artificial intelligence (XAI) technique called local interpretable model-agnostic explanations (LIME) for increased transparency of ASD classification. The model is trained on sMRI images of two age groups taken from the Autism Brain Imaging Data Exchange-I (ABIDE-I) dataset. The proposed model yielded an accuracy of 0.9607, F1-score of 0.9614, specificity of 0.9774, sensitivity of 0.9451, negative predictive value (NPV) of 0.9429, positive predictive value (PPV) of 0.9783, and diagnostic odds ratio of 745.59 for the 2-11 years age group compared to the 12-18 years group. These results are superior to those of other state-of-the-art models, Inception v3 and SqueezeNet.

Generative adversarial DacFormer network for MRI brain tumor segmentation.

Zhang M, Sun Q, Han Y, Zhang M, Wang W, Zhang J

May 22, 2025
Current brain tumor segmentation methods often utilize a U-Net architecture based on efficient convolutional neural networks. While effective, these architectures primarily model local dependencies and lack the ability to capture global interactions like a pure Transformer. However, using a pure Transformer directly causes the network to lose local feature information. To address this limitation, we propose the Generative Adversarial Dilated Attention Convolutional Transformer (GDacFormer). GDacFormer enhances interactions between tumor regions while balancing global and local information through the integration of adversarial learning with an improved transformer module. Specifically, GDacFormer leverages a generative adversarial segmentation network to learn richer and more detailed features. It integrates a novel Transformer module, DacFormer, featuring multi-scale dilated attention and a next convolution block. This module, embedded within the generator, aggregates multi-scale semantic information, efficiently reduces redundancy in the self-attention mechanism, and enhances local feature representations, thus refining the brain tumor segmentation results. GDacFormer achieves Dice values for whole tumor, core tumor, and enhancing tumor segmentation of 90.9%/90.8%/93.7%, 84.6%/85.7%/93.5%, and 77.9%/79.3%/86.3% on the BraTS2019-2021 datasets. Extensive evaluations demonstrate the effectiveness and competitiveness of GDacFormer. The code for GDacFormer will be made publicly available at https://github.com/MuqinZ/GDacFormer.
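The Dice values quoted above follow the standard overlap definition for binary segmentation masks; a minimal sketch:

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```

In BraTS evaluations this is computed separately for the whole-tumor, tumor-core, and enhancing-tumor label combinations.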

On factors that influence deep learning-based dose prediction of head and neck tumors.

Gao R, Mody P, Rao C, Dankers F, Staring M

May 22, 2025
<i>Objective.</i> This study investigates key factors influencing deep learning-based dose prediction models for head and neck cancer radiation therapy. The goal is to evaluate model accuracy, robustness, and computational efficiency, and to identify key components necessary for optimal performance. <i>Approach.</i> We systematically analyze the impact of input and dose grid resolution, input type, loss function, model architecture, and noise on model performance. Two datasets are used: a public dataset (OpenKBP) and an in-house clinical dataset. Model performance is primarily evaluated using two metrics: dose score and dose-volume histogram (DVH) score. <i>Main results.</i> High-resolution inputs improve prediction accuracy (dose score and DVH score) by 8.6%-13.5% compared to low resolution. Using a combination of CT, planning target volumes, and organs-at-risk as input significantly enhances accuracy, with improvements of 57.4%-86.8% over using CT alone. Integrating mean absolute error (MAE) loss with value-based and criteria-based DVH loss functions further boosts DVH score by 7.2%-7.5% compared to MAE loss alone. In the robustness analysis, most models show minimal degradation under Poisson noise (0-0.3 Gy) but are more susceptible to adversarial noise (0.2-7.8 Gy). Notably, certain models, such as SwinUNETR, demonstrate superior robustness against adversarial perturbations. <i>Significance.</i> These findings highlight the importance of optimizing deep learning models and provide valuable guidance for achieving more accurate and reliable radiotherapy dose prediction.
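Combining MAE loss with DVH-based terms can be sketched as below. The study's exact value-based and criteria-based DVH loss formulations are not given, so the simple cumulative-DVH difference and the weight `w_dvh` here are illustrative assumptions:

```python
import numpy as np

def dvh(dose, mask, bins):
    """Cumulative DVH: fraction of the structure receiving >= each dose level."""
    d = dose[mask]
    return np.array([(d >= b).mean() for b in bins])

def combined_loss(pred, true, masks, bins, w_dvh=0.1):
    """MAE on the dose grid plus a weighted mean absolute DVH difference,
    averaged over all structures (PTVs and OARs)."""
    mae = np.abs(pred - true).mean()
    dvh_term = np.mean([np.abs(dvh(pred, m, bins) - dvh(true, m, bins)).mean()
                        for m in masks])
    return mae + w_dvh * dvh_term
```

A differentiable training loss would replace the hard threshold `d >= b` with a smooth approximation (e.g. a sigmoid), but the numpy form above conveys the structure of the objective.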

Deep Learning for Automated Prediction of Sphenoid Sinus Pneumatization in Computed Tomography.

Alamer A, Salim O, Alharbi F, Alsaleem F, Almuqbil A, Alhassoon K, Alsunaydih F

May 22, 2025
The sphenoid sinus is an important access point for transsphenoidal surgeries, but variations in its pneumatization may complicate surgical safety. Deep learning can be used to identify these anatomical variations. We developed a convolutional neural network (CNN) model for the automated prediction of sphenoid sinus pneumatization patterns in computed tomography (CT) scans, tested on mid-sagittal CT images. Two radiologists labeled all CT images into four pneumatization patterns: conchal (type I), presellar (type II), sellar (type III), and postsellar (type IV). We then augmented the training set to address its limited size and class imbalance. The initial dataset included 249 CT images, divided into training (n = 174) and test (n = 75) sets; the training set was augmented to 378 images. Following augmentation, the overall diagnostic accuracy of the model improved from 76.71% to 84%, with an area under the curve (AUC) of 0.84, indicating very good diagnostic performance. Subgroup analysis showed excellent results for type IV, with the highest AUC of 0.93, perfect sensitivity (100%), and an F1-score of 0.94. The model also performed robustly for type I, achieving an accuracy of 97.33% and high specificity (99%). These metrics highlight the model's potential for reliable clinical application. The proposed CNN model demonstrates very good diagnostic accuracy in identifying sphenoid sinus pneumatization patterns, particularly excelling in type IV, which is crucial for endoscopic sinus surgery due to its higher risk of surgical complications. By assisting radiologists and surgeons, this model enhances the safety of transsphenoidal surgery, highlighting its value, novelty, and applicability in clinical settings.

DP-MDM: detail-preserving MR reconstruction via multiple diffusion models.

Geng M, Zhu J, Hong R, Liu Q, Liang D, Liu Q

May 22, 2025
<i>Objective.</i> Magnetic resonance imaging (MRI) is critical in medical diagnosis and treatment by capturing detailed features, such as subtle tissue changes, which help clinicians make precise diagnoses. However, the widely used single diffusion model has limitations in accurately capturing more complex details. This study aims to address these limitations by proposing an efficient method to enhance the reconstruction of detailed features in MRI. <i>Approach.</i> We present a detail-preserving reconstruction method that leverages multiple diffusion models (DP-MDM) to extract structural and detailed features in the k-space domain, which complements the image domain. Since high-frequency information in k-space is more systematically distributed around the periphery compared to the irregular distribution of detailed features in the image domain, this systematic distribution allows for more efficient extraction of detailed features. To further reduce redundancy and enhance model performance, we introduce virtual binary masks with adjustable circular center windows that selectively focus on high-frequency regions. These masks align with the frequency distribution of k-space data, enabling the model to focus more efficiently on high-frequency information. The proposed method employs a cascaded architecture, where the first diffusion model recovers low-frequency structural components, with subsequent models enhancing high-frequency details during the iterative reconstruction stage. <i>Main results.</i> Experimental results demonstrate that DP-MDM achieves superior performance across multiple datasets. On the <i>T1-GE brain</i> dataset with 2D random sampling at <i>R</i> = 15, DP-MDM achieved 35.14 dB peak signal-to-noise ratio (PSNR) and 0.8891 structural similarity (SSIM), outperforming other methods. The proposed method also showed robust performance on the <i>Fast-MRI</i> and <i>Cardiac MR</i> datasets, achieving the highest PSNR and SSIM values. <i>Significance.</i> DP-MDM significantly advances MRI reconstruction by balancing structural integrity and detail preservation. It not only enhances diagnostic accuracy through improved image quality but also offers a versatile framework that can potentially be extended to other imaging modalities, thereby broadening its clinical applicability.
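The "virtual binary masks with adjustable circular center windows" described above can be sketched as a circular high-pass mask in fftshifted k-space; the radius parameter here is an illustrative assumption, not the study's setting:

```python
import numpy as np

def highpass_kspace_mask(shape, radius):
    """Binary mask zeroing a circular low-frequency window at the centre of
    fftshifted k-space, keeping the high-frequency periphery."""
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    dist2 = (yy - h // 2) ** 2 + (xx - w // 2) ** 2
    return (dist2 > radius ** 2).astype(np.float32)

def highfreq_component(image, radius):
    """Image-domain detail obtained by masking out low k-space frequencies."""
    k = np.fft.fftshift(np.fft.fft2(image))
    k = k * highpass_kspace_mask(image.shape, radius)
    return np.fft.ifft2(np.fft.ifftshift(k)).real
```

Varying the window radius gives masks that select progressively higher-frequency bands, matching the cascaded design in which later models refine finer details.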

Multimodal MRI radiomics enhances epilepsy prediction in pediatric low-grade glioma patients.

Tang T, Wu Y, Dong X, Zhai X

May 22, 2025
Determining whether pediatric patients with low-grade gliomas (pLGGs) have tumor-related epilepsy (GAE) is a crucial aspect of preoperative evaluation. We therefore propose an innovative machine learning- and deep learning-based framework for the rapid, non-invasive preoperative assessment of GAE in pediatric patients using magnetic resonance imaging (MRI). In this study, we propose a novel radiomics-based approach that integrates tumor and peritumoral features extracted from preoperative multiparametric MRI scans to accurately and non-invasively predict the occurrence of tumor-related epilepsy in pediatric patients. Our multimodal MRI radiomics model predicted epilepsy in pLGG patients with an AUC of 0.969. The integration of multi-sequence MRI data significantly improved predictive performance, with a Stochastic Gradient Descent (SGD) classifier showing robust results (sensitivity: 0.882, specificity: 0.956). Our model can accurately predict whether pLGG patients have tumor-related epilepsy, which could guide surgical decision-making. Future studies should focus on similarly standardized preoperative evaluations in pediatric epilepsy centers to increase training data and enhance the generalizability of the model.

A Novel Dynamic Neural Network for Heterogeneity-Aware Structural Brain Network Exploration and Alzheimer's Disease Diagnosis.

Cui W, Leng Y, Peng Y, Bai C, Li L, Jiang X, Yuan G, Zheng J

May 22, 2025
Heterogeneity is a fundamental characteristic of brain diseases, distinguished by variability not only in brain atrophy but also in the complexity of neural connectivity and brain networks. However, existing data-driven methods fail to provide a comprehensive analysis of brain heterogeneity. Recently, dynamic neural networks (DNNs) have shown significant advantages in capturing sample-wise heterogeneity. Therefore, in this article, we propose a novel dynamic heterogeneity-aware network (DHANet) to identify critical heterogeneous brain regions, explore heterogeneous connectivity between them, and construct a heterogeneity-aware structural brain network (HGA-SBN) using structural magnetic resonance imaging (sMRI). Specifically, we first develop a 3-D dynamic ConvMixer to extract rich heterogeneous features from sMRI. Subsequently, critical brain atrophy regions are identified by dynamic prototype learning, which embeds the hierarchical semantic structure of the brain. Finally, we employ a joint dynamic edge-correlation (JDE) modeling approach to construct the heterogeneous connectivity between these regions and analyze the HGA-SBN. To evaluate the effectiveness of DHANet, we conduct elaborate experiments on three public datasets; the method achieves state-of-the-art (SOTA) performance on two classification tasks.
