
Modeling Brain Aging with Explainable Triamese ViT: Towards Deeper Insights into Autism Disorder.

Zhang Z, Aggarwal V, Angelov P, Jiang R

PubMed · May 27, 2025
Machine learning, particularly when applied to advanced imaging such as three-dimensional Magnetic Resonance Imaging (MRI), has significantly improved medical diagnostics. This is especially critical for diagnosing complex conditions like Alzheimer's disease. Our study introduces Triamese-ViT, an innovative tri-structure of Vision Transformers (ViTs) with built-in interpretability: its structure-aware explainability allows the key features or regions contributing to a prediction to be identified and visualized, and it integrates information from three perspectives to enhance brain age estimation. This method not only increases accuracy but also improves interpretability compared with existing techniques. When evaluated, Triamese-ViT demonstrated superior performance and produced insightful attention maps. We applied these attention maps to the analysis of natural aging and the diagnosis of Autism Spectrum Disorder (ASD). The results aligned with those from occlusion analysis, identifying the Cingulum, Rolandic Operculum, Thalamus, and Vermis as important regions in normal aging, and highlighting the Thalamus and Caudate Nucleus as key regions for ASD diagnosis.
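A minimal sketch of the idea of a three-view ("tri-structure") transformer ensemble for brain-age regression. The branch architecture, patch size, slice selection, and fusion MLP below are illustrative assumptions, not the authors' Triamese-ViT configuration, and the interpretability machinery is omitted.

```python
import torch
import torch.nn as nn

class ViewEncoder(nn.Module):
    """Tiny ViT-style encoder for a 2D slice taken from one viewing plane."""
    def __init__(self, img_size=96, patch=16, dim=128, depth=2, heads=4):
        super().__init__()
        self.patchify = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        n_tokens = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, x):                          # x: (B, 1, H, W)
        tokens = self.patchify(x).flatten(2).transpose(1, 2) + self.pos
        return self.encoder(tokens).mean(dim=1)    # (B, dim) pooled view embedding

class TriViewAgeRegressor(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.axial, self.coronal, self.sagittal = (ViewEncoder(dim=dim) for _ in range(3))
        self.head = nn.Sequential(nn.Linear(3 * dim, dim), nn.GELU(), nn.Linear(dim, 1))

    def forward(self, vol):                        # vol: (B, D, H, W) 3D MRI volume
        d, h, w = vol.shape[1:]
        ax = vol[:, d // 2].unsqueeze(1)           # central axial slice
        co = vol[:, :, h // 2].unsqueeze(1)        # central coronal slice
        sa = vol[:, :, :, w // 2].unsqueeze(1)     # central sagittal slice
        feats = torch.cat([self.axial(ax), self.coronal(co), self.sagittal(sa)], dim=1)
        return self.head(feats).squeeze(-1)        # predicted brain age per subject

pred_age = TriViewAgeRegressor()(torch.randn(2, 96, 96, 96))  # two synthetic volumes
```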

Auto-segmentation of cerebral cavernous malformations using a convolutional neural network.

Chou CJ, Yang HC, Lee CC, Jiang ZH, Chen CJ, Wu HM, Lin CF, Lai IC, Peng SJ

PubMed · May 26, 2025
This paper presents a deep learning model for the automated segmentation of cerebral cavernous malformations (CCMs). The model was trained using treatment planning data from 199 Gamma Knife (GK) exams, comprising 171 cases with a single CCM and 28 cases with multiple CCMs. The training data included initial MRI images with target CCM regions manually annotated by neurosurgeons. Brain parenchyma was first extracted using a mask region-based convolutional neural network (Mask R-CNN), and the resulting data were then processed with a 3D convolutional neural network, DeepMedic. The efficacy of the brain parenchyma extraction model was demonstrated via five-fold cross-validation, resulting in an average Dice similarity coefficient of 0.956 ± 0.002. The CCM segmentation models achieved an average Dice similarity coefficient of 0.741 ± 0.028 based solely on T2W images. The Dice similarity coefficients by Zabramski classification were: type I (0.743), type II (0.742), and type III (0.740). We also developed a user-friendly graphical user interface to facilitate the use of these models in clinical analysis. In summary, this deep learning model for automated CCM segmentation demonstrates sufficient performance across Zabramski classifications. Clinical trial number: not applicable.
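A minimal sketch of how the Dice similarity coefficient (DSC) reported above is typically computed between a predicted binary mask and a manual annotation; the array shapes and smoothing term are illustrative assumptions.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary 3D masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Example: two overlapping toy masks
pred = np.zeros((64, 64, 64), dtype=bool);  pred[20:40, 20:40, 20:40] = True
truth = np.zeros((64, 64, 64), dtype=bool); truth[25:45, 25:45, 25:45] = True
print(f"DSC = {dice_coefficient(pred, truth):.3f}")
```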

ScanAhead: Simplifying standard plane acquisition of fetal head ultrasound.

Men Q, Zhao H, Drukker L, Papageorghiou AT, Noble JA

PubMed · May 26, 2025
The fetal standard plane acquisition task aims to detect an ultrasound (US) image characterized by specified anatomical landmarks and appearance for assessing fetal growth. In practice, however, due to variability in operator skill and possible fetal motion, it can be challenging for a human operator to acquire a satisfactory standard plane. To support this task, this paper first describes an approach to automatically predict the fetal head standard plane from a video segment approaching the standard plane. A transformer-based image predictor is proposed to produce a high-quality standard plane by understanding diverse scales of head anatomy within the US video frames. Because of the visual gap between the video frames and the standard plane image, the predictor is equipped with an offset adaptor that performs domain adaptation to translate off-plane structures into the anatomies that would usually appear in a standard plane view. To enhance the anatomical details of the predicted US image, the approach is extended with a second modality, US probe movement, which provides 3D location information. Quantitative and qualitative studies conducted on two different head biometry planes demonstrate that the proposed US image predictor produces clinically plausible standard planes with superior performance to comparative published methods. The results of the dual-modality solution show improved visualization with enhanced anatomical details of the predicted US image. Clinical evaluations also demonstrate consistency between the predicted echo textures and the echo patterns expected in a typical real standard plane, indicating clinical feasibility for improving the standard plane acquisition process.
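A minimal sketch of a generic transformer-based image predictor that maps a short US video clip to a single predicted standard-plane frame. The encoder/decoder sizes are arbitrary, and the offset adaptor and probe-movement branch are omitted; this is not the authors' ScanAhead architecture.

```python
import torch
import torch.nn as nn

class ClipToPlanePredictor(nn.Module):
    def __init__(self, img=64, patch=8, dim=128, heads=4, depth=2):
        super().__init__()
        self.embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)   # per-frame patch tokens
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, depth)               # mixes patches across frames
        self.decode = nn.ConvTranspose2d(dim, 1, kernel_size=patch, stride=patch)
        self.tokens_per_frame = (img // patch) ** 2

    def forward(self, clip):                       # clip: (B, T, 1, H, W) frames approaching the plane
        b, t = clip.shape[:2]
        x = self.embed(clip.flatten(0, 1))         # (B*T, dim, H/p, W/p)
        g = x.shape[-1]
        tok = x.flatten(2).transpose(1, 2).reshape(b, t * self.tokens_per_frame, -1)
        tok = self.temporal(tok)
        # Average corresponding patches over time, then decode back to one image.
        tok = tok.reshape(b, t, self.tokens_per_frame, -1).mean(dim=1)
        feat = tok.transpose(1, 2).reshape(b, -1, g, g)
        return self.decode(feat)                   # predicted standard-plane frame (B, 1, H, W)

pred = ClipToPlanePredictor()(torch.randn(2, 8, 1, 64, 64))
print(pred.shape)                                  # torch.Size([2, 1, 64, 64])
```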

Diffusion based multi-domain neuroimaging harmonization method with preservation of anatomical details.

Lan H, Varghese BA, Sheikh-Bahaei N, Sepehrband F, Toga AW, Choupan J

PubMed · May 26, 2025
In multi-center neuroimaging studies, technical variability caused by batch differences can hinder the ability to aggregate data across sites and negatively impact the reliability of study-level results. Recent efforts in neuroimaging harmonization have aimed to minimize these technical gaps and reduce technical variability across batches. While Generative Adversarial Networks (GANs) have been a prominent method for addressing harmonization tasks, GAN-harmonized images suffer from artifacts or anatomical distortions. Given the advances in denoising diffusion probabilistic models, which produce high-fidelity images, we assessed the efficacy of the diffusion model for neuroimaging harmonization. Whereas GAN-based methods intrinsically transform imaging styles between two domains per model, we demonstrate the diffusion model's superior capability in harmonizing images across multiple domains with a single model. Our experiments highlight that the learned domain-invariant anatomical condition reinforces the model to accurately preserve anatomical details while differentiating batch differences at each diffusion step. The proposed method was tested on T1-weighted MRI images from two public neuroimaging datasets, ADNI1 and ABIDE II, yielding harmonization results with consistent anatomy preservation and a superior FID score compared to GAN-based methods. We conducted multiple analyses, including extensive quantitative and qualitative evaluations against baseline models, an ablation study showcasing the benefits of the learned domain-invariant conditions, and improvements in the consistency of perivascular spaces segmentation and volumetric analyses after harmonization.
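A minimal sketch of the Fréchet Inception Distance (FID) used above as an image-quality metric, computed from two sets of feature vectors (in practice, InceptionV3 activations of harmonized and reference images); the features here are synthetic stand-ins.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to the two feature sets."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):               # discard tiny imaginary parts from numerics
        covmean = covmean.real
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(cov_a + cov_b - 2.0 * covmean))

rng = np.random.default_rng(1)
harmonized = rng.normal(0.0, 1.0, size=(500, 64))   # toy feature vectors
reference = rng.normal(0.1, 1.0, size=(500, 64))
print(f"FID ≈ {fid(harmonized, reference):.2f}")
```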

Improving brain tumor diagnosis: A self-calibrated 1D residual network with random forest integration.

Sumithra A, Prathap PMJ, Karthikeyan A, Dhanasekaran S

PubMed · May 26, 2025
Medical specialists need to perform precise MRI analysis for accurate diagnosis of brain tumors. Prior research has developed multiple artificial intelligence (AI) techniques to automate brain tumor identification; however, existing approaches often depend on a single dataset, limiting their generalization across diverse clinical scenarios. This work introduces SCR-1DResNet, a new diagnostic tool for brain tumor detection that combines a self-calibrated Random Forest with a one-dimensional residual network. The pipeline starts with MRI image acquisition from multiple Kaggle datasets and proceeds through stepwise preprocessing that removes noise, enhances images, performs resizing and normalization, and carries out skull stripping. The WaveSegNet model then extracts important tumor attributes at multiple scales. A Random Forest classifier and a one-dimensional residual network are combined into the SCR-1DResNet model via self-calibration optimization to improve prediction reliability. Tests show the proposed system achieves a classification precision of 98.50%, an accuracy of 98.80%, and a recall of 97.80%. The SCR-1DResNet model demonstrates superior diagnostic capability and faster performance, showing strong prospects for clinical decision support systems and improved neurological and oncological patient care.
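A minimal sketch of the generic hybrid pattern described above, in which features from a small 1D residual network are classified by a Random Forest. The layer sizes, the stand-in feature extractor, and the omission of WaveSegNet and the self-calibration step are illustrative simplifications, not the SCR-1DResNet implementation.

```python
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class ResidualBlock1D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.conv2(self.act(self.conv1(x))))   # skip connection

class FeatureExtractor1D(nn.Module):
    def __init__(self, in_channels=1, channels=16, n_blocks=2):
        super().__init__()
        self.stem = nn.Conv1d(in_channels, channels, kernel_size=7, padding=3)
        self.blocks = nn.Sequential(*[ResidualBlock1D(channels) for _ in range(n_blocks)])
        self.pool = nn.AdaptiveAvgPool1d(1)

    def forward(self, x):                          # x: (B, 1, L) 1D signal from an image
        return self.pool(self.blocks(self.stem(x))).squeeze(-1)    # (B, channels) features

extractor = FeatureExtractor1D().eval()
with torch.no_grad():
    X_train = extractor(torch.randn(64, 1, 256)).numpy()           # toy training signals
    X_test = extractor(torch.randn(16, 1, 256)).numpy()
y_train = torch.randint(0, 2, (64,)).numpy()                       # toy tumor / no-tumor labels
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(clf.predict(X_test)[:5])
```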

Beyond Accuracy: Evaluating certainty of AI models for brain tumour detection.

Nisa ZU, Bhatti SM, Jaffar A, Mazhar T, Shahzad T, Ghadi YY, Almogren A, Hamam H

PubMed · May 26, 2025
Brain tumors pose a severe health risk, often leading to fatal outcomes if not detected early. While most studies focus on improving classification accuracy, this research emphasizes prediction certainty, quantified through loss values. Traditional metrics like accuracy and precision do not capture confidence in predictions, which is critical for medical applications. This study establishes a correlation between lower loss values and higher prediction certainty, ensuring more reliable tumor classification. We evaluate a CNN, ResNet50, XceptionNet, and a proposed model (VGG19 with customized classification layers) using accuracy, precision, recall, and loss. Results show that while accuracy remains comparable across models, the proposed model achieves the best performance (96.95% accuracy, 0.087 loss), outperforming the others in both precision and recall. These findings demonstrate that certainty-aware AI models are essential for reliable clinical decision-making. This study highlights the potential of AI to bridge the shortage of medical professionals by integrating reliable diagnostic tools into healthcare. AI-powered systems can enhance early detection and improve patient outcomes, reinforcing the need for certainty-driven AI adoption in medical imaging.
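A minimal sketch of reporting the mean cross-entropy loss as a certainty signal alongside accuracy, in the spirit of the loss-based evaluation described above; the model outputs and labels are placeholders, not the authors' VGG19-based system.

```python
import torch
import torch.nn.functional as F

def accuracy_and_mean_loss(logits: torch.Tensor, labels: torch.Tensor):
    """Lower mean cross-entropy indicates higher-confidence (more certain) predictions."""
    per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
    accuracy = (logits.argmax(dim=1) == labels).float().mean()
    return accuracy.item(), per_sample_loss.mean().item()

logits = torch.tensor([[4.0, 0.1, 0.1], [2.0, 1.8, 0.2]])   # confident vs. borderline prediction
labels = torch.tensor([0, 0])
acc, mean_loss = accuracy_and_mean_loss(logits, labels)
print(f"accuracy={acc:.2f}, mean loss={mean_loss:.3f}")      # same accuracy, differing certainty
```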

Brain Fractal Dimension and Machine Learning can predict first-episode psychosis and risk for transition to psychosis.

Hu Y, Frisman M, Andreou C, Avram M, Riecher-Rössler A, Borgwardt S, Barth E, Korda A

PubMed · May 26, 2025
Although there are notable structural brain abnormalities associated with psychotic disorders, it is still unclear how these abnormalities relate to clinical presentation. The fractal dimension (FD), which captures the complexity and irregularity of brain microstructure, may be a promising feature, as demonstrated in neuropsychiatric disorders such as Parkinson's and Alzheimer's disease; paired with machine learning, it may offer a possible biomarker for the detection and prognosis of psychosis. The purpose of this study is to investigate FD as a structural magnetic resonance imaging (sMRI) feature in individuals at clinical high risk of psychosis who did not transition to psychosis (CHR_NT), individuals at clinical high risk who did transition (CHR_T), patients with first-episode psychosis (FEP), and healthy controls (HC). Using a machine learning approach that classifies sMRI images, the goals are (a) to evaluate FD as a potential biomarker and (b) to investigate its ability to predict a subsequent transition to psychosis from the clinical high-risk state. We obtained sMRI images from 194 subjects, including 44 HCs, 77 FEPs, 16 CHR_Ts, and 57 CHR_NTs. We extracted FD features and analyzed them using machine learning under six classification schemes: (a) FEP vs. HC, (b) FEP vs. CHR_NT, (c) FEP vs. CHR_T, (d) CHR_NT vs. CHR_T, (e) CHR_NT vs. HC, and (f) CHR_T vs. HC. In addition, the CHR_T group was used as external validation in comparisons (a), (b), and (e) to examine whether the progression of the disorder followed the FEP or CHR_NT pattern. The proposed algorithm achieved a balanced accuracy greater than 0.77. This study shows that FD can serve as a predictive neuroimaging marker, providing fresh information on the microstructural alterations that occur throughout the course of psychosis. The effectiveness of FD in detecting psychosis and transition to psychosis should be established by further research using larger datasets.
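A minimal sketch of estimating a fractal dimension from a binary brain mask with box counting, one common way an sMRI-derived FD feature can be computed; the exact FD estimator used by the authors is not specified here, and the mask below is a toy stand-in.

```python
import numpy as np

def box_counting_fd(mask: np.ndarray, sizes=(2, 4, 8, 16)) -> float:
    """Fit log(box count) vs. log(1/size); the slope estimates the fractal dimension."""
    counts = []
    for s in sizes:
        # Trim so the volume tiles exactly into s^3 boxes, then count occupied boxes.
        trimmed = mask[: mask.shape[0] // s * s,
                       : mask.shape[1] // s * s,
                       : mask.shape[2] // s * s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s,
                                trimmed.shape[1] // s, s,
                                trimmed.shape[2] // s, s)
        counts.append(boxes.any(axis=(1, 3, 5)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

mask = np.random.rand(64, 64, 64) > 0.7            # toy binary "structure"
print(f"estimated FD ≈ {box_counting_fd(mask):.2f}")
```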

Impact of contrast-enhanced agent on segmentation using a deep learning-based software "Ai-Seg" for head and neck cancer.

Kihara S, Ueda Y, Harada S, Masaoka A, Kanayama N, Ikawa T, Inui S, Akagi T, Nishio T, Konishi K

PubMed · May 26, 2025
In radiotherapy, deep learning-based auto-segmentation tools assist in contouring organs-at-risk (OARs). We developed a segmentation model for head and neck (HN) OARs dedicated to contrast-enhanced (CE) computed tomography (CT) using the segmentation software Ai-Seg, and compared its performance between CE and non-CE (nCE) CT. This retrospective study included 321 patients with HN cancers, whose data were used to train a segmentation model on CE CT (the CE model). The CE model was installed in Ai-Seg and applied to an additional 25 patients with both CE and nCE CT. The Dice similarity coefficient (DSC) and average Hausdorff distance (AHD) were calculated between the ground truth and Ai-Seg contours for the brain, brainstem, chiasm, optic nerves, cochleae, oral cavity, parotid glands, pharyngeal constrictor muscle, and submandibular glands (SMGs). We compared the CE model with the existing model trained on nCE CT available in Ai-Seg for six OARs. The CE model obtained significantly higher DSCs on CE CT for the parotid glands and SMGs than the existing model. The CE model yielded significantly lower DSC values and higher AHD values on nCE CT for the SMGs than on CE CT, but comparable values for the other OARs. The CE model achieved significantly better performance than the existing model and can be used on nCE CT images without a significant performance difference, except for the SMGs. Our results may facilitate the adoption of segmentation tools in clinical practice. In summary, we developed a segmentation model for HN OARs dedicated to CE CT using Ai-Seg and evaluated its usability on nCE CT.
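A minimal sketch of the average Hausdorff distance (AHD) reported above, computed between the boundary-point coordinates of two contours; voxel spacing and boundary extraction are simplified assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def average_hausdorff(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Mean of the two directed average surface distances between point sets A and B."""
    d = cdist(points_a, points_b)                        # pairwise Euclidean distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Example: boundary coordinates (N, 3) from two toy contours
rng = np.random.default_rng(0)
a = rng.normal(size=(200, 3))
b = a + rng.normal(scale=0.1, size=(200, 3))             # slightly perturbed copy
print(f"AHD = {average_hausdorff(a, b):.3f}")
```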

Detecting microcephaly and macrocephaly from ultrasound images using artificial intelligence.

Mengistu AK, Assaye BT, Flatie AB, Mossie Z

PubMed · May 26, 2025
Microcephaly and macrocephaly are abnormal congenital markers associated with developmental and neurological deficits, so early ultrasound imaging is medically imperative. However, resource-limited countries such as Ethiopia face shortages of trained personnel and diagnostic machines that prevent accurate and continuous diagnosis. This study aims to develop a fetal head abnormality detection model from ultrasound images via deep learning. Data were collected from three Ethiopian healthcare facilities to increase model generalizability; the recruitment period ran from November 9, 2024, to November 30, 2024. Several preprocessing techniques were applied, including augmentation, noise reduction, and normalization. SegNet, UNet, FCN, MobileNetV2, and EfficientNet-B0 were applied to segment and measure fetal head structures in ultrasound images. The measurements were classified as microcephaly, macrocephaly, or normal using WHO guidelines for gestational age, and model performance was compared with that of industry experts. Evaluation metrics included accuracy, precision, recall, the F1 score, and the Dice coefficient. The study demonstrated the feasibility of using SegNet for automatic segmentation, measurement of fetal head abnormalities, and classification of macrocephaly and microcephaly, with an accuracy of 98% and a Dice coefficient of 0.97. Compared with industry experts, the model achieved accuracies of 92.5% and 91.2% for the BPD and HC measurements, respectively. Deep learning models can enhance prenatal diagnosis workflows, especially in resource-constrained settings. Future work should optimize model performance, explore more complex models, and expand the datasets to improve generalizability. If adopted, these technologies can be used in prenatal care delivery. Clinical trial number: not applicable.
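A minimal sketch of the classification step: labeling a head-circumference measurement as microcephaly, macrocephaly, or normal from a z-score against a gestational-age reference. The reference mean/SD must come from WHO or similar tables supplied by the caller; the ±2 SD cut-off, the function name, and the example values are illustrative assumptions, not real reference data.

```python
def classify_head_circumference(hc_mm: float, ref_mean_mm: float, ref_sd_mm: float,
                                z_cutoff: float = 2.0) -> str:
    """Classify an HC measurement relative to a gestational-age-specific reference."""
    z = (hc_mm - ref_mean_mm) / ref_sd_mm
    if z < -z_cutoff:
        return "microcephaly"
    if z > z_cutoff:
        return "macrocephaly"
    return "normal"

# Example with placeholder reference values (not real WHO numbers):
print(classify_head_circumference(hc_mm=250.0, ref_mean_mm=280.0, ref_sd_mm=12.0))
```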

Predicting treatment response in individuals with major depressive disorder using structural MRI-based similarity features.

Song S, Wang S, Gao J, Zhu L, Zhang W, Wang Y, Wang D, Zhang D, Wang K

PubMed · May 26, 2025
Major Depressive Disorder (MDD) is a prevalent mental health condition with significant societal impact. Structural magnetic resonance imaging (sMRI) and machine learning have shown promise in psychiatry, offering insights into brain abnormalities in MDD. However, predicting treatment response remains challenging. This study leverages inter-brain similarity from sMRI as a novel feature to enhance prediction accuracy and explore disease mechanisms. The method's generalizability across adult and adolescent cohorts is also evaluated. The study included 172 participants. Based on remission status, 39 participants from the Hangzhou Dataset and 34 from the Jinan Dataset were selected for further analysis. Three methods were used to extract brain similarity features, followed by a statistical test for feature selection. Six machine learning classifiers were employed to predict treatment response, and their generalizability was tested using the Jinan Dataset. Group analyses between remission and non-remission groups were conducted to identify brain regions associated with treatment response. Brain similarity features outperformed traditional metrics in predicting treatment outcomes, with the highest accuracy achieved by the model using these features. Between-group analyses revealed that the remission group had lower gray matter volume and density in the right precentral gyrus, but higher white matter volume (WMV). In the Jinan Dataset, significant differences were observed in the right cerebellum and fusiform gyrus, with higher WMV and density in the remission group. This study demonstrates that brain similarity features combined with machine learning can predict treatment response in MDD with moderate success across age groups. These findings emphasize the importance of considering age-related differences in treatment planning to personalize care. Clinical trial number: not applicable.
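A minimal sketch of one way to build "inter-brain similarity" features: for each subject, the correlation of its regional morphometric vector with every other subject's vector, with each row of the resulting matrix usable as that subject's feature vector. The authors' actual similarity definition and feature-selection procedure may differ; the data below are synthetic.

```python
import numpy as np

def similarity_features(region_features: np.ndarray) -> np.ndarray:
    """region_features: (n_subjects, n_regions) -> (n_subjects, n_subjects) Pearson similarity matrix."""
    return np.corrcoef(region_features)          # correlation between subjects' regional profiles

rng = np.random.default_rng(42)
X = rng.normal(size=(172, 90))                   # e.g. 172 subjects x 90 atlas regions
S = similarity_features(X)
print(S.shape)                                    # (172, 172); exclude the diagonal downstream
```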