Page 24 of 58579 results

A Privacy-Preserving Federated Learning Framework for Generalizable CBCT to Synthetic CT Translation in Head and Neck

Ciro Benito Raggio, Paolo Zaffino, Maria Francesca Spadea

arXiv preprint · Jun 10 2025
Shortened abstract: Cone-beam computed tomography (CBCT) has become a widely adopted modality for image-guided radiotherapy (IGRT). However, CBCT suffers from increased noise, limited soft-tissue contrast, and artifacts, resulting in unreliable Hounsfield unit values and hindering direct dose calculation. Synthetic CT (sCT) generation from CBCT addresses these issues, especially using deep learning (DL) methods. Existing approaches are limited by institutional heterogeneity, scanner-dependent variations, and data privacy regulations that prevent multi-center data sharing. To overcome these challenges, we propose a cross-silo horizontal federated learning (FL) approach for CBCT-to-sCT synthesis in the head and neck region, extending our FedSynthCT framework. A conditional generative adversarial network was collaboratively trained on data from three European medical centers in the public SynthRAD2025 challenge dataset. The federated model demonstrated effective generalization across centers, with mean absolute error (MAE) ranging from $64.38\pm13.63$ to $85.90\pm7.10$ HU, structural similarity index (SSIM) from $0.882\pm0.022$ to $0.922\pm0.039$, and peak signal-to-noise ratio (PSNR) from $32.86\pm0.94$ to $34.91\pm1.04$ dB. Notably, on an external validation dataset of 60 patients, comparable performance was achieved (MAE: $75.22\pm11.81$ HU, SSIM: $0.904\pm0.034$, PSNR: $33.52\pm2.06$ dB) without additional training, confirming robust generalization despite protocol and scanner differences and registration errors. These findings demonstrate the technical feasibility of FL for CBCT-to-sCT synthesis while preserving data privacy and offer a collaborative solution for developing generalizable models across institutions without centralized data sharing or site-specific fine-tuning.
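For readers less familiar with the reported image-similarity metrics, a minimal NumPy sketch of MAE, PSNR, and a simplified SSIM might look as follows. The function names are ours, and the SSIM here is a single-window (global) variant rather than the sliding-window form used in published evaluations, so this is illustrative only:

```python
import numpy as np

def mae_hu(sct, ct):
    """Mean absolute error in HU between synthetic and reference CT."""
    return float(np.mean(np.abs(sct - ct)))

def psnr_db(sct, ct, data_range):
    """Peak signal-to-noise ratio in dB for a given intensity range."""
    mse = np.mean((sct - ct) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(x, y, data_range):
    """Simplified single-window SSIM (no sliding window, default k1/k2)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float((2 * mx * my + c1) * (2 * cov + c2) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

A uniform +10 HU bias, for instance, yields an MAE of exactly 10 HU, which helps calibrate intuition for the 64-86 HU range reported above.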

Multivariate brain morphological patterns across mood disorders: key roles of frontotemporal and cerebellar areas.

Kandilarova S, Maggioni E, Squarcina L, Najar D, Homadi M, Tassi E, Stoyanov D, Brambilla P

PubMed · Jun 10 2025
Differentiating major depressive disorder (MDD) from bipolar disorder (BD) remains a significant clinical challenge, as both disorders exhibit overlapping symptoms but require distinct treatment approaches. Advances in voxel-based morphometry and surface-based morphometry have facilitated the identification of structural brain abnormalities that may serve as diagnostic biomarkers. This study aimed to explore the relationships between brain morphological features, such as grey matter volume (GMV) and cortical thickness (CT), and demographic and clinical variables in patients with MDD and BD and healthy controls (HC) using multivariate analysis methods. A total of 263 participants, including 120 HC, 95 patients with MDD and 48 patients with BD, underwent T1-weighted MRI. GMV and CT were computed for standardised brain regions, followed by multivariate partial least squares (PLS) regression to assess associations with demographic and diagnostic variables. Reductions in frontotemporal CT were observed in MDD and BD compared with HC, but distinct trends between BD and MDD were also detected for the CT of selective temporal, frontal and parietal regions. Differential patterns in cerebellar GMV were also identified, with lobule CI larger in MDD and lobule CII larger in BD. Additionally, BD showed the same trend as ageing concerning reductions in CT and posterior cerebellar and striatal GMV. Depression severity showed a transdiagnostic link with reduced frontotemporal CT. This study highlights shared and distinct structural brain alterations in MDD and BD, emphasising the potential of neuroimaging biomarkers to enhance diagnostic accuracy. Accelerated cortical thinning and differential cerebellar changes in BD may serve as targets for future research and clinical interventions. Our findings underscore the value of objective neuroimaging markers in increasing the precision of mood disorder diagnoses, improving treatment outcomes.

U<sub>2</sub>-Attention-Net: a deep learning automatic delineation model for parotid glands in head and neck cancer organs at risk on radiotherapy localization computed tomography images.

Wen X, Wang Y, Zhang D, Xiu Y, Sun L, Zhao B, Liu T, Zhang X, Fan J, Xu J, An T, Li W, Yang Y, Xing D

PubMed · Jun 10 2025
This study aimed to develop a novel deep learning model, U<sub>2</sub>-Attention-Net (U<sub>2</sub>A-Net), for precise segmentation of parotid glands on radiotherapy localization CT images. CT images from 79 patients with head and neck cancer were selected, on which the label maps were delineated by relevant practitioners to construct a dataset. The dataset was divided into the training set (n = 60), validation set (n = 6), and test set (n = 13), with the training set augmented. U<sub>2</sub>A-Net, divided into U<sub>2</sub>A-Net V<sub>1</sub> (sSE) and U<sub>2</sub>A-Net V<sub>2</sub> (cSE) based on different attention mechanisms, was evaluated for parotid gland segmentation based on the DL loss function with U-Net, Attention U-Net, DeepLabV3+, and TransUNet as comparison models. Segmentation was also performed using GDL and GD-BCEL loss functions. Model performance was evaluated using DSC, JSC, PPV, SE, HD, RVD, and VOE metrics. The quantitative results revealed that U<sub>2</sub>A-Net based on DL outperformed the comparative models. While U<sub>2</sub>A-Net V<sub>1</sub> had the highest PPV, U<sub>2</sub>A-Net V<sub>2</sub> demonstrated the best quantitative results in other metrics. Qualitative results showed that U<sub>2</sub>A-Net's segmentation closely matched expert delineations, reducing oversegmentation and undersegmentation, with U<sub>2</sub>A-Net V<sub>2</sub> being more effective. In comparing loss functions, U<sub>2</sub>A-Net V<sub>1</sub> using GD-BCEL and U<sub>2</sub>A-Net V<sub>2</sub> using DL performed best. The U<sub>2</sub>A-Net model significantly improved parotid gland segmentation on radiotherapy localization CT images. The cSE attention mechanism showed advantages with DL, while sSE performed better with GD-BCEL.
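The DSC and JSC metrics reported above have standard definitions on binary masks; a minimal sketch (helper names are ours, not the paper's code):

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-8):
    """Dice similarity coefficient (DSC): 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float((2.0 * inter + eps) / (pred.sum() + gt.sum() + eps))

def jaccard_index(pred, gt, eps=1e-8):
    """Jaccard similarity coefficient (JSC): intersection over union."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float((inter + eps) / (union + eps))
```

The two are monotonically related (JSC = DSC / (2 - DSC)), which is why papers often report both alongside boundary-sensitive metrics such as HD.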

Automated Diffusion Analysis for Non-Invasive Prediction of IDH Genotype in WHO Grade 2-3 Gliomas.

Wu J, Thust SC, Wastling SJ, Abdalla G, Benenati M, Maynard JA, Brandner S, Carrasco FP, Barkhof F

PubMed · Jun 10 2025
Glioma molecular characterization is essential for risk stratification and treatment planning. Noninvasive imaging biomarkers such as apparent diffusion coefficient (ADC) values have shown potential for predicting glioma genotypes. However, manual segmentation of gliomas is time-consuming and operator-dependent. To address this limitation, we aimed to establish a single-sequence-derived automatic ADC extraction pipeline using T2-weighted imaging to support glioma isocitrate dehydrogenase (IDH) genotyping. Glioma volumes from a hospital data set (University College London Hospitals; n=247) were manually segmented on T2-weighted MRI scans using ITK-SNAP and co-registered to ADC maps using the FMRIB Linear Image Registration Tool in FSL, followed by ADC histogram extraction (Python). Separately, an nnU-Net deep learning algorithm was trained to segment glioma volumes using T2w only from BraTS 2021 data (n=500; 80% training, 5% validation, and 15% test split). nnU-Net was then applied to the University College London Hospitals (UCLH) data for segmentation and ADC read-outs. Univariable logistic regression was used to test the performance of manual and nnU-Net-derived ADC metrics for IDH status prediction. Statistical equivalence was tested (paired two-sided t-test). nnU-Net segmentation achieved a median Dice of 0.85 on BraTS data, and 0.83 on UCLH data. For the best performing metric (rADCmean) the area under the receiver operating characteristic curve (AUC) for differentiating IDH-mutant from IDH-wildtype gliomas was 0.82 (95% CI: 0.78-0.88), compared to the manual segmentation AUC of 0.84 (95% CI: 0.77-0.89). For all ADC metrics, manually and nnU-Net extracted ADC were statistically equivalent (p<0.01). nnU-Net identified one area of glioma infiltration missed by human observers. In 0.8% of gliomas, nnU-Net missed glioma components. In 6% of cases, over-segmentation of brain remote from the tumor occurred (e.g. temporal poles).
The T2w-trained nnU-Net algorithm achieved ADC readouts for IDH genotyping with a performance statistically equivalent to human observers. This approach could support rapid ADC-based identification of glioblastoma at an early disease stage, even with limited input data. AUC = Area under the receiver operating characteristic curve, BraTS = The brain tumor segmentation challenge held by MICCAI, Dice = Dice Similarity Coefficient, IDH = Isocitrate dehydrogenase, mGBM = Molecular glioblastoma, ADCmin = Fifth ADC histogram percentile, ADCmean = Mean ADC value, ADCNAWM = ADC in the contralateral centrum semiovale normal white matter, rADCmin = Normalized ADCmin, VOI rADCmean = Normalized ADCmean.
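Per the abbreviation list above, ADCmin is the 5th histogram percentile and the rADC metrics are ratios to contralateral normal-appearing white matter (NAWM). A sketch of that extraction step; the function and the toy inputs are hypothetical, not the authors' pipeline:

```python
import numpy as np

def adc_histogram_metrics(tumor_adc, nawm_adc):
    """ADC histogram metrics from voxel values inside a tumor VOI,
    normalized by the mean ADC of contralateral NAWM.
    ADCmin = 5th percentile, ADCmean = mean (as defined in the abstract)."""
    adc_min = np.percentile(tumor_adc, 5)
    adc_mean = tumor_adc.mean()
    adc_nawm = nawm_adc.mean()
    return {
        "ADCmin": float(adc_min),
        "ADCmean": float(adc_mean),
        "rADCmin": float(adc_min / adc_nawm),
        "rADCmean": float(adc_mean / adc_nawm),
    }
```

Normalizing by NAWM removes scanner- and protocol-dependent scaling, which is why the normalized rADCmean was the best-performing metric here.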

An Explainable Deep Learning Framework for Brain Stroke and Tumor Progression via MRI Interpretation

Rajan Das Gupta, Md Imrul Hasan Showmick, Mushfiqur Rahman Abir, Shanjida Akter, Md. Yeasin Rahat, Md. Jakir Hossen

arXiv preprint · Jun 10 2025
Early and accurate detection of brain abnormalities, such as tumors and strokes, is essential for timely intervention and improved patient outcomes. In this study, we present a deep learning-based system capable of identifying both brain tumors and strokes from MRI images, along with their respective stages. We employed two convolutional neural networks, MobileNet V2 and ResNet-50, optimized through transfer learning to classify MRI scans into five diagnostic categories. Our dataset, aggregated and augmented from various publicly available MRI sources, was carefully curated to ensure class balance and image diversity. To enhance model generalization and prevent overfitting, we applied dropout layers and extensive data augmentation. The models achieved strong performance, with training accuracy reaching 93% and validation accuracy up to 88%. While ResNet-50 demonstrated slightly better results, MobileNet V2 remains a promising option for real-time diagnosis in low-resource settings due to its lightweight architecture. This research offers a practical AI-driven solution for early brain abnormality detection, with potential for clinical deployment and future enhancement through larger datasets and multimodal inputs.

Challenges and Advances in Classifying Brain Tumors: An Overview of Machine, Deep Learning, and Hybrid Approaches with Future Perspectives in Medical Imaging.

Alshomrani F

PubMed · Jun 10 2025
Accurate brain tumor classification is essential in neuro-oncology, as it directly informs treatment strategies and influences patient outcomes. This review comprehensively explores machine learning (ML) and deep learning (DL) models that enhance the accuracy and efficiency of brain tumor classification using medical imaging data, particularly Magnetic Resonance Imaging (MRI). As a noninvasive imaging technique, MRI plays a central role in detecting, segmenting, and characterizing brain tumors by providing detailed anatomical views that help distinguish various tumor types, including gliomas, meningiomas, and metastatic brain lesions. The review presents a detailed analysis of diverse ML approaches, from classical algorithms such as Support Vector Machines (SVM) and Decision Trees to advanced DL models, including Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and hybrid architectures that combine multiple techniques for improved performance. Through comparative analysis of recent studies across various datasets, the review evaluates these methods using metrics such as accuracy, sensitivity, specificity, and AUC-ROC, offering insights into their effectiveness and limitations. Significant challenges in the field are examined, including the scarcity of annotated datasets, computational complexity requirements, model interpretability issues, and barriers to clinical integration. The review proposes future directions to address these challenges, highlighting the potential of multi-modal imaging that combines MRI with other imaging modalities, explainable AI frameworks for enhanced model transparency, and privacy-preserving techniques for securing sensitive patient data. 
This comprehensive analysis demonstrates the transformative potential of ML and DL in advancing brain tumor diagnosis while emphasizing the necessity for continued research and innovation to overcome current limitations and ensure successful clinical implementation for improved patient care.

Brain tau PET-based identification and characterization of subpopulations in patients with Alzheimer's disease using deep learning-derived saliency maps.

Li Y, Wang X, Ge Q, Graeber MB, Yan S, Li J, Li S, Gu W, Hu S, Benzinger TLS, Lu J, Zhou Y

PubMed · Jun 9 2025
Alzheimer's disease (AD) is a heterogeneous neurodegenerative disorder in which tau neurofibrillary tangles are a pathological hallmark closely associated with cognitive dysfunction and neurodegeneration. In this study, we used brain tau data to investigate AD heterogeneity by identifying and characterizing the subpopulations among patients. We included 615 cognitively normal and 159 AD brain <sup>18</sup>F-flortaucipir PET scans, along with T1-weighted MRI from the Alzheimer Disease Neuroimaging Initiative database. A three-dimensional convolutional neural network model was employed for AD detection using standardized uptake value ratio (SUVR) images. The model-derived saliency maps were generated and employed as informative image features for clustering AD participants. Among the identified subpopulations, demographics, neuropsychological measures, and SUVRs were statistically compared. Correlations between neuropsychological measures and regional SUVRs were assessed. A generalized linear model was utilized to investigate the sex and APOE ε4 interaction effect on regional SUVRs. Two distinct subpopulations of AD patients were revealed, denoted as S<sub>Hi</sub> and S<sub>Lo</sub>. Compared to the S<sub>Lo</sub> group, the S<sub>Hi</sub> group exhibited a significantly higher global tau burden in the brain, but both groups showed similar cognition distribution levels. In the S<sub>Hi</sub> group, the associations between the neuropsychological measurements and regional tau deposition were weakened. Moreover, a significant interaction effect of sex and APOE ε4 on tau deposition was observed in the S<sub>Lo</sub> group, but no such effect was found in the S<sub>Hi</sub> group. Our results suggest that tau tangles, as shown by SUVR, continue to accumulate even when cognitive function plateaus in AD patients, highlighting the advantages of PET in later disease stages.
The differing relationships between cognition and tau deposition, and between sex, APOE ε4, and tau deposition, provide potential for subtype-specific treatments. Targeting sex-specific and genetic factors influencing tau deposition, as well as interventions aimed at tau's impact on cognition, may be effective.

Addressing Limited Generalizability in Artificial Intelligence-Based Brain Aneurysm Detection for Computed Tomography Angiography: Development of an Externally Validated Artificial Intelligence Screening Platform.

Pettersson SD, Filo J, Liaw P, Skrzypkowska P, Klepinowski T, Szmuda T, Fodor TB, Ramirez-Velandia F, Zieliński P, Chang YM, Taussky P, Ogilvy CS

PubMed · Jun 9 2025
Brain aneurysm detection models, both in the literature and in industry, continue to lack generalizability during external validation, limiting clinical adoption. This challenge is largely due to extensive exclusion criteria during training data selection. The authors developed the first model to achieve generalizability using novel methodological approaches. Computed tomography angiography (CTA) scans from 2004 to 2023 at the study institution were used for model training, including untreated unruptured intracranial aneurysms without extensive cerebrovascular disease. External validation used digital subtraction angiography-verified CTAs from an international center, while prospective validation occurred at the internal institution over 9 months. A public web platform was created for further model validation. A total of 2194 CTA scans were used for this study. One thousand five hundred eighty-seven patients and 1920 aneurysms with a mean size of 5.3 ± 3.7 mm were included in the training cohort. The mean age of the patients was 69.7 ± 14.9 years, and 1203 (75.8%) were female. The model achieved a training Dice score of 0.88 and a validation Dice score of 0.76. Prospective internal validation on 304 scans yielded a lesion-level (LL) sensitivity of 82.5% (95% CI: 75.5-87.9) and specificity of 89.6% (95% CI: 84.5-93.2). External validation on 303 scans demonstrated comparable LL sensitivity and specificity of 83.5% (95% CI: 75.1-89.4) and 92.9% (95% CI: 88.8-95.6), respectively. Radiologist LL sensitivity from the external center was 84.5% (95% CI: 76.2-90.2), and 87.5% of the missed aneurysms were detected by the model. The authors developed the first publicly testable artificial intelligence model for aneurysm detection on CTA scans, demonstrating generalizability and state-of-the-art performance in external validation. The model addresses key limitations of previous efforts and enables broader validation through a web-based platform.
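The lesion-level sensitivity/specificity figures and their 95% CIs follow standard definitions; a sketch using the Wilson score interval (one common choice for proportion CIs; the abstract does not state which interval the authors used):

```python
import math

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half
```

With 66 of 80 lesions detected, for example, sensitivity is 82.5% and the Wilson interval spans roughly 73-89%, close in width to the CIs reported above.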

Automated Vessel Occlusion Software in Acute Ischemic Stroke: Pearls and Pitfalls.

Aziz YN, Sriwastwa A, Nael K, Harker P, Mistry EA, Khatri P, Chatterjee AR, Heit JJ, Jadhav A, Yedavalli V, Vagal AS

PubMed · Jun 9 2025
Software programs leveraging artificial intelligence to detect vessel occlusions are now widely available to aid in stroke triage. Given their proprietary nature, surprisingly little information is available about how these programs work, who uses them, and how they perform in an unbiased real-world setting. In this educational review of automated vessel occlusion software, we discuss emerging evidence of its utility, the underlying algorithms, real-world diagnostic performance, and limitations. The intended audience includes specialists in stroke care in neurology, emergency medicine, radiology, and neurosurgery. Practical tips for onboarding and utilization of this technology are provided based on the multidisciplinary experience of the authorship team.

Transfer learning for accurate brain tumor classification in MRI: a step forward in medical diagnostics.

Khan MA, Hussain MZ, Mehmood S, Khan MF, Ahmad M, Mazhar T, Shahzad T, Saeed MM

PubMed · Jun 9 2025
Brain tumor classification is critical for therapeutic applications that benefit from computer-aided diagnostics. Misdiagnosing a brain tumor can significantly reduce a patient's chances of survival, as it may lead to ineffective treatments. This study proposes a novel approach for classifying brain tumors in MRI images using Transfer Learning (TL) with state-of-the-art deep learning models: AlexNet, MobileNetV2, and GoogleNet. Unlike previous studies that often focus on a single model, our work comprehensively compares these architectures, fine-tuned specifically for brain tumor classification. We utilize a publicly available dataset of 4,517 MRI scans, consisting of three prevalent types of brain tumors: glioma (1,129 images), meningioma (1,134 images), and pituitary tumors (1,138 images), as well as 1,116 images of normal brains (no tumor). Our approach addresses key research gaps, including class imbalance, through data augmentation and model efficiency, leveraging lightweight architectures like MobileNetV2. The GoogleNet model achieves the highest classification accuracy of 99.2%, outperforming previous studies using the same dataset. This demonstrates the potential of our approach to assist physicians in making rapid and precise decisions, thereby improving patient outcomes. The results highlight the effectiveness of TL in medical diagnostics and its potential for real-world clinical deployment. This study advances the field of brain tumor classification and provides a robust framework for future research in medical image analysis.
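The transfer-learning recipe described here (a pretrained backbone, fine-tuned with a new classification head) can be illustrated in miniature: below, precomputed feature vectors stand in for frozen backbone activations and only a softmax head is trained. Function names and toy data are ours, not the study's code:

```python
import numpy as np

def train_linear_head(features, labels, n_classes, lr=0.1, epochs=200):
    """Fit a softmax classification head on fixed feature vectors via
    gradient descent; the (hypothetical) frozen backbone is not updated."""
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.01, (features.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = features @ w + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / len(labels)         # softmax cross-entropy gradient
        w -= lr * features.T @ grad
        b -= lr * grad.sum(axis=0)
    return w, b

def predict(features, w, b):
    """Class prediction: argmax over head logits."""
    return np.argmax(features @ w + b, axis=1)
```

In practice the backbone's later layers are often unfrozen for fine-tuning as well; this sketch shows only the feature-extraction variant of TL.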