Extrapolation Convolution for Data Prediction on a 2-D Grid: Bridging Spatial and Frequency Domains With Applications in Image Outpainting and Compressed Sensing.

Ibrahim V, Alaya Cheikh F, Asari VK, Paul JS

PubMed · Aug 22, 2025
Extrapolation plays a critical role in machine/deep learning (ML/DL), enabling models to predict data points beyond their training constraints, particularly useful in scenarios deviating significantly from training conditions. This article addresses the limitations of current convolutional neural networks (CNNs) in extrapolation tasks within image restoration and compressed sensing (CS). While CNNs show potential in tasks such as image outpainting and CS, traditional convolutions are limited by their reliance on interpolation, failing to fully capture the dependencies needed for predicting values outside the known data. This work proposes an extrapolation convolution (EC) framework that models missing data prediction as an extrapolation problem using linear prediction within DL architectures. The approach is applied in two domains: first, image outpainting, where EC in encoder-decoder (EnDec) networks replaces conventional interpolation methods to reduce artifacts and enhance fine detail representation; second, Fourier-based CS-magnetic resonance imaging (CS-MRI), where it predicts high-frequency signal values from undersampled measurements in the frequency domain, improving reconstruction quality and preserving subtle structural details at high acceleration factors. Comparative experiments demonstrate that the proposed EC-DecNet and FDRN outperform traditional CNN-based models, achieving high-quality image reconstruction with finer details, as shown by improved peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and kernel inception distance (KID)/Fréchet inception distance (FID) scores. Ablation studies and analysis highlight the effectiveness of larger kernel sizes and multilevel semi-supervised learning in FDRN for enhancing extrapolation accuracy in the frequency domain.
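
As a toy illustration of the linear-prediction idea behind EC (not the authors' layer; the signal, model order, and function names below are invented), one can fit autoregressive coefficients to the known samples and roll them forward past the boundary:

```python
import numpy as np

def fit_lp_coeffs(signal: np.ndarray, order: int) -> np.ndarray:
    """Least-squares fit of coefficients a with signal[n] ~ sum_k a[k]*signal[n-1-k]."""
    rows = [signal[n - order:n][::-1] for n in range(order, len(signal))]
    A = np.stack(rows)              # (N - order, order) design matrix
    coeffs, *_ = np.linalg.lstsq(A, signal[order:], rcond=None)
    return coeffs

def extrapolate(signal: np.ndarray, coeffs: np.ndarray, steps: int) -> np.ndarray:
    """Predict `steps` values beyond the known samples, one step at a time."""
    out = list(signal)
    order = len(coeffs)
    for _ in range(steps):
        out.append(float(np.dot(coeffs, out[-order:][::-1])))
    return np.array(out)

# Known samples of a decaying oscillation; predict 20 points past the edge.
t = np.linspace(0, 4, 80)
known = np.exp(-0.3 * t) * np.cos(4 * t)
extended = extrapolate(known, fit_lp_coeffs(known, order=8), steps=20)
```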

Motion-robust $T_2^*$ quantification from low-resolution gradient echo brain MRI with physics-informed deep learning.

Eichhorn H, Spieker V, Hammernik K, Saks E, Felsner L, Weiss K, Preibisch C, Schnabel JA

PubMed · Aug 22, 2025
$T_2^*$ quantification from gradient echo magnetic resonance imaging is particularly affected by subject motion due to its high sensitivity to magnetic field inhomogeneities, which are influenced by motion and might cause signal loss. Thus, motion correction is crucial to obtain high-quality $T_2^*$ maps. We extend PHIMO, our previously introduced learning-based physics-informed motion correction method for low-resolution $T_2^*$ mapping. Our extended version, PHIMO+, utilizes acquisition knowledge to enhance the reconstruction performance for challenging motion patterns and increase PHIMO's robustness to varying strengths of magnetic field inhomogeneities across the brain. We perform comprehensive evaluations regarding motion detection accuracy and image quality for data with simulated and real motion. PHIMO+ outperforms the learning-based baseline methods both qualitatively and quantitatively with respect to line detection and image quality. Moreover, PHIMO+ performs on par with a conventional state-of-the-art motion correction method for $T_2^*$ quantification from gradient echo MRI, which relies on redundant data acquisition. PHIMO+'s competitive motion correction performance, combined with a reduction in acquisition time by over 40% compared to the state-of-the-art method, makes it a promising solution for motion-robust $T_2^*$ quantification in research settings and clinical routine.
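
For background, $T_2^*$ quantification from multi-echo gradient echo data conventionally fits a mono-exponential decay $S(\mathrm{TE}) = S_0\, e^{-\mathrm{TE}/T_2^*}$ per voxel. A minimal log-linear fitting sketch follows; it illustrates only this quantification step, not PHIMO+'s motion correction, and the echo times are invented:

```python
import numpy as np

def fit_t2star(signals: np.ndarray, echo_times: np.ndarray) -> np.ndarray:
    """signals: (n_echoes, n_voxels) magnitudes; returns T2* in echo-time units.
    Model: S(TE) = S0 * exp(-TE / T2*), i.e. log S = log S0 - TE / T2*."""
    log_s = np.log(np.clip(signals, 1e-6, None))
    design = np.stack([echo_times, np.ones_like(echo_times)], axis=1)
    # Least-squares slope/intercept of log-signal vs. TE for all voxels at once.
    params, *_ = np.linalg.lstsq(design, log_s, rcond=None)
    slope = np.clip(params[0], None, -1e-6)   # guard against non-decaying fits
    return -1.0 / slope                        # T2* = -1 / slope

te = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])   # ms, illustrative
toy = np.exp(-te / 40.0)[:, None].repeat(3, axis=1)  # 3 voxels, true T2* = 40 ms
print(fit_t2star(toy, te))                            # ~[40. 40. 40.]
```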

Diagnostic performance of T1-Weighted MRI gray matter biomarkers in Parkinson's disease: A systematic review and meta-analysis.

Torres-Parga A, Gershanik O, Cardona S, Guerrero J, Gonzalez-Ojeda LM, Cardona JF

PubMed · Aug 22, 2025
T1-weighted structural MRI has advanced our understanding of Parkinson's disease (PD), yet its diagnostic utility in clinical settings remains unclear. To assess the diagnostic performance of T1-weighted MRI gray matter (GM) metrics in distinguishing PD patients from healthy controls and to identify limitations affecting clinical applicability. A systematic review and meta-analysis were conducted on studies reporting sensitivity, specificity, or AUC for PD classification using T1-weighted MRI. Of 2906 screened records, 26 met inclusion criteria, and 10 provided sufficient data for quantitative synthesis. The risk of bias and heterogeneity were evaluated, and sensitivity analyses were performed by excluding influential studies. Pooled estimates showed a sensitivity of 0.71 (95% CI: 0.70-0.72), specificity of 0.889 (95% CI: 0.86-0.92), and overall accuracy of 0.909 (95% CI: 0.89-0.93). These metrics improved after excluding outliers, reducing heterogeneity (I² dropped from 95.7% to 0%). Frequently reported regions showing structural alterations included the substantia nigra, striatum, thalamus, medial temporal cortex, and middle frontal gyrus. However, region-specific diagnostic metrics could not be consistently synthesized due to methodological variability. Machine learning approaches, particularly support vector machines and neural networks, showed enhanced performance with appropriate validation. T1-weighted MRI gray matter metrics demonstrate moderate accuracy in differentiating PD from controls but are not yet suitable as standalone diagnostic tools. Greater methodological standardization, external validation, and integration with clinical and biological data are needed to support precision neurology and clinical translation.
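
For readers unfamiliar with the pooling step, here is a minimal sketch of inverse-variance pooling of study sensitivities on the logit scale; the counts are invented, and the review's actual model may differ (e.g., a bivariate random-effects model):

```python
import numpy as np

def pool_proportions(events: np.ndarray, totals: np.ndarray):
    """Fixed-effect pooled proportion with a 95% CI via the logit transform."""
    p = events / totals
    logit = np.log(p / (1.0 - p))
    var = 1.0 / events + 1.0 / (totals - events)  # delta-method logit variance
    w = 1.0 / var                                  # inverse-variance weights
    pooled = np.sum(w * logit) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))     # back-transform to [0, 1]
    return expit(pooled), (expit(pooled - 1.96 * se), expit(pooled + 1.96 * se))

sens, ci = pool_proportions(np.array([45, 60, 70]), np.array([60, 85, 100]))
print(f"pooled sensitivity {sens:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```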

Intra-axial primary brain tumor differentiation: comparing large language models on structured MRI reports vs. radiologists on images.

Nakaura T, Uetani H, Yoshida N, Kobayashi N, Nagayama Y, Kidoh M, Kuroda JI, Mukasa A, Hirai T

PubMed · Aug 22, 2025
This study aimed to evaluate the potential of large language models (LLMs) in differentiating intra-axial primary brain tumors using structured magnetic resonance imaging (MRI) reports and to compare their performance with radiologists. Structured reports of preoperative MRI findings from 137 surgically confirmed intra-axial primary brain tumors, including glioblastoma (n = 77), central nervous system (CNS) lymphoma (n = 22), astrocytoma (n = 9), oligodendroglioma (n = 9), and others (n = 20), were analyzed by multiple LLMs: GPT-4, Claude-3-Opus, Claude-3-Sonnet, GPT-3.5, Llama-2-70B, Qwen1.5-72B, and Gemini-Pro-1.0. The models provided the top 5 differential diagnoses based on the preoperative MRI findings, and their top 1, 3, and 5 accuracies were compared with board-certified neuroradiologists' interpretations of the actual preoperative MRI images. Radiologists achieved top 1, 3, and 5 accuracies of 85.4%, 94.9%, and 94.9%, respectively. Among the LLMs, GPT-4 performed best, with top 1, 3, and 5 accuracies of 65.7%, 84.7%, and 90.5%, respectively. Notably, GPT-4's top 3 accuracy of 84.7% approached the radiologists' top 1 accuracy of 85.4%. Other LLMs showed varying performance levels, with average accuracies ranging from 62.3% to 75.9%. LLMs demonstrated high accuracy for glioblastoma but struggled with CNS lymphoma and other less common tumors, particularly in top 1 accuracy. LLMs show promise as assistive tools for differentiating intra-axial primary brain tumors using structured MRI reports. However, a significant gap remains between their performance and that of board-certified neuroradiologists interpreting actual images. The choice of LLM and the tumor type significantly influence the results. Question: How do large language models (LLMs) perform when differentiating complex intra-axial primary brain tumors from structured MRI reports compared to radiologists interpreting images? Findings: Radiologists outperformed all tested LLMs in diagnostic accuracy. The best model, GPT-4, showed promise but lagged considerably behind radiologists, particularly for less common tumors. Clinical relevance: LLMs show potential as assistive tools for generating differential diagnoses from structured MRI reports, particularly for non-specialists, but they cannot currently replace the nuanced diagnostic expertise of a board-certified radiologist interpreting the primary image data.
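
The top-1/3/5 metric used for the comparison is simple to compute; a minimal sketch with invented diagnosis lists:

```python
def top_k_accuracy(predictions, truths, k: int) -> float:
    """predictions[i] is a ranked differential-diagnosis list for case i."""
    hits = sum(truth in preds[:k] for preds, truth in zip(predictions, truths))
    return hits / len(truths)

preds = [["glioblastoma", "CNS lymphoma", "metastasis", "astrocytoma", "abscess"],
         ["CNS lymphoma", "glioblastoma", "oligodendroglioma", "metastasis", "ependymoma"]]
truth = ["glioblastoma", "oligodendroglioma"]
for k in (1, 3, 5):
    print(f"top-{k} accuracy: {top_k_accuracy(preds, truth, k):.2f}")
```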

Learning Explainable Imaging-Genetics Associations Related to a Neurological Disorder

Jueqi Wang, Zachary Jacokes, John Darrell Van Horn, Michael C. Schatz, Kevin A. Pelphrey, Archana Venkataraman

arXiv preprint · Aug 22, 2025
While imaging-genetics holds great promise for unraveling the complex interplay between brain structure and genetic variation in neurological disorders, traditional methods are limited to simplistic linear models or to black-box techniques that lack interpretability. In this paper, we present NeuroPathX, an explainable deep learning framework that uses an early fusion strategy powered by cross-attention mechanisms to capture meaningful interactions between structural variations in the brain derived from MRI and established biological pathways derived from genetics data. To enhance interpretability and robustness, we introduce two loss functions over the attention matrix: a sparsity loss that focuses on the most salient interactions and a pathway similarity loss that enforces consistent representations across the cohort. We validate NeuroPathX on both autism spectrum disorder and Alzheimer's disease. Our results demonstrate that NeuroPathX outperforms competing baseline approaches and reveals biologically plausible associations linked to the disorder. These findings underscore the potential of NeuroPathX to advance our understanding of complex brain disorders. Code is available at https://github.com/jueqiw/NeuroPathX.
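
A hedged PyTorch sketch of the two attention-matrix regularizers described above; the exact formulations in NeuroPathX may differ (the repository linked above has the authors' code):

```python
import torch

def sparsity_loss(attn: torch.Tensor) -> torch.Tensor:
    """L1 penalty encouraging few salient region-pathway interactions.
    attn: (batch, n_brain_regions, n_pathways) cross-attention weights."""
    return attn.abs().mean()

def pathway_similarity_loss(attn: torch.Tensor) -> torch.Tensor:
    """Penalize each subject's deviation from the batch-mean attention map,
    pushing toward consistent representations across the cohort."""
    return ((attn - attn.mean(dim=0, keepdim=True)) ** 2).mean()

attn = torch.softmax(torch.randn(8, 64, 50), dim=-1)   # toy attention maps
total = sparsity_loss(attn) + 0.1 * pathway_similarity_loss(attn)
```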

Development and Validation of an Interpretable Machine Learning Model for Predicting Adverse Clinical Outcomes in Placenta Accreta Spectrum: A Multicenter Study.

Li H, Zhang Y, Mei H, Yuan Y, Wang L, Liu W, Zeng H, Huang J, Chai X, Wu K, Liu H

PubMed · Aug 22, 2025
Placenta accreta spectrum (PAS) is a serious perinatal complication. Accurate preoperative identification of patients at high risk for adverse clinical outcomes is essential for developing personalized treatment strategies. This study aimed to develop and validate a high-performance, interpretable machine learning model that integrates MRI morphological indicators and clinical features to predict adverse outcomes in PAS, and to build an online prediction tool to enhance its clinical applicability. This retrospective study included 125 clinically confirmed PAS patients from two centers, categorized into high-risk (intraoperative blood loss over 1500 mL or requiring hysterectomy) and low-risk groups. Data from Center 1 were used for model development, and data from Center 2 served as the external validation set. Five MRI morphological indicators and six clinical features were extracted as model inputs. Three machine learning classifiers (AdaBoost, TabPFN, and CatBoost) were trained and evaluated on both the internal testing and external validation cohorts. SHAP analysis was used to interpret model decision-making, and the optimal model was deployed via a Streamlit-based web platform. The CatBoost model achieved the best performance, with AUROCs of 0.90 (95% CI: 0.73-0.99) and 0.84 (95% CI: 0.70-0.97) in the internal testing and external validation sets, respectively. Calibration curves indicated strong agreement between predicted and actual risks. SHAP analysis revealed that "Cervical canal length" and "Gestational age" contributed negatively to high-risk predictions, while "Prior C-sections number", "Placental abnormal vasculature area", and "Parturition" were positively associated. The final online tool allows real-time risk prediction and visualization of individualized force plots and is freely accessible to clinicians and patients. This study successfully developed an interpretable and practical machine learning model for predicting adverse clinical outcomes in PAS. The accompanying online tool may support clinical decision-making and improve individualized management for PAS patients.
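
A sketch of the modeling-plus-interpretation step under stated assumptions: a CatBoost classifier on tabular features with SHAP attribution. The feature names are invented stand-ins for the paper's predictors, and the data is random:

```python
import numpy as np
from catboost import CatBoostClassifier
import shap

rng = np.random.default_rng(0)
feature_names = ["cervical_canal_length", "gestational_age",
                 "prior_c_sections", "abnormal_vasculature_area"]
X = rng.normal(size=(125, 4))
y = rng.integers(0, 2, size=125)            # 1 = high-risk PAS outcome (toy)

model = CatBoostClassifier(iterations=200, depth=4, verbose=False)
model.fit(X, y)

explainer = shap.TreeExplainer(model)        # per-case, per-feature attributions
shap_values = explainer.shap_values(X)
print(dict(zip(feature_names, np.abs(shap_values).mean(axis=0))))
```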

AlzhiNet: Traversing from 2D-CNN to 3D-CNN, Towards Early Detection and Diagnosis of Alzheimer's Disease.

Akindele RG, Adebayo S, Yu M, Kanda PS

PubMed · Aug 22, 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder with increasing prevalence among the ageing population, necessitating early and accurate diagnosis for effective disease management. In this study, we present a novel hybrid deep learning framework, AlzhiNet, that integrates both 2D convolutional neural networks (2D-CNNs) and 3D convolutional neural networks (3D-CNNs), along with a custom loss function and volumetric data augmentation, to enhance feature extraction and improve classification performance in AD diagnosis. Extensive experiments show that AlzhiNet outperforms standalone 2D and 3D models, highlighting the importance of combining these complementary representations of the data. The depth and quality of the 3D volumes derived from the augmented 2D slices also significantly influence the model's performance. The results indicate that carefully selecting the weighting factors in hybrid predictions is imperative for achieving optimal results. Our framework has been validated on magnetic resonance imaging (MRI) data from the Kaggle and MIRIAD datasets, obtaining accuracies of 98.9% and 99.99%, respectively, with an AUC of 100%. Furthermore, AlzhiNet was studied under a variety of perturbation scenarios on the Alzheimer's Kaggle dataset, including Gaussian noise, brightness, contrast, salt-and-pepper noise, color jitter, and occlusion. The results show that AlzhiNet is more robust to perturbations than ResNet-18, making it an excellent choice for real-world applications. This approach represents a promising advancement in early diagnosis and treatment planning for AD.
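
A minimal sketch of the weighted hybrid prediction idea: blending 2D-CNN and 3D-CNN class probabilities with a tunable weighting factor. The fusion rule shown is an assumption, not necessarily AlzhiNet's exact scheme:

```python
import torch

def hybrid_predict(logits_2d: torch.Tensor, logits_3d: torch.Tensor,
                   alpha: float = 0.6) -> torch.Tensor:
    """Blend per-class probabilities; alpha weights the 2D branch."""
    return alpha * torch.softmax(logits_2d, dim=-1) \
        + (1.0 - alpha) * torch.softmax(logits_3d, dim=-1)

logits_2d = torch.randn(4, 4)    # toy batch of 4 scans, four AD classes
logits_3d = torch.randn(4, 4)
print(hybrid_predict(logits_2d, logits_3d).argmax(dim=-1))
```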

Deep learning radiomics based on MRI for differentiating tongue cancer T - staging.

Lu Z, Zhu B, Ling H, Chen X

PubMed · Aug 22, 2025
To develop a deep learning-based MRI model for predicting tongue cancer T-stage. This retrospective study analyzed clinical and MRI data from 579 tongue cancer patients (Xiangya Cancer Hospital and Jiangsu Province Hospital). T2-weighted (T2WI) and contrast-enhanced T1-weighted (CET1) sequences were preprocessed (anonymization, resampling, and calibration). Regions of interest (ROIs) were segmented by two radiologists (intraclass correlation coefficient, ICC > 0.75), and 2375 radiomics features were extracted using PyRadiomics. ResNet18 and ResNet50 algorithms were employed to build deep learning radiomics models (DLRresnet18 and DLRresnet50), which were compared with a radiomics model (Rad) based on 17 optimized features. Performance was evaluated via AUC, decision curve analysis (DCA), integrated discrimination improvement (IDI), and net reclassification improvement (NRI) across datasets. In the training set, the deep learning models outperformed Rad (AUC: DLRresnet18 = 0.837, DLRresnet50 = 0.847 vs. Rad = 0.828). Results in the test and external validation sets were consistent (DLRresnet18, AUC = 0.805/0.857; DLRresnet50, AUC = 0.810/0.860). DCA demonstrated that both deep learning models performed better than the Rad model in the training, test, and external validation sets. Furthermore, both the NRI and IDI of the two deep learning models relative to the Rad model were greater than 0. The DLRresnet18 and DLRresnet50 models significantly improve T-stage prediction accuracy over traditional radiomics, reducing subjective interpretation errors and supporting personalized treatment planning. This achievement provides new ideas and tools for image-assisted diagnosis of tongue cancer T-stage. Evidence level: III.
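
A minimal sketch of the PyRadiomics extraction step named above, assuming NIfTI image and ROI mask files (the file names are placeholders):

```python
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()                 # shape, first-order, and texture classes
features = extractor.execute("t2wi_case001.nii.gz", "roi_case001.nii.gz")
numeric = {k: v for k, v in features.items() if k.startswith("original")}
print(len(numeric), "radiomics features extracted")
```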

Vision Transformer Autoencoders for Unsupervised Representation Learning: Revealing Novel Genetic Associations through Learned Sparse Attention Patterns

Islam, S. R., He, W., Xie, Z., Zhi, D.

medRxiv preprint · Aug 21, 2025
The discovery of genetic loci associated with brain architecture can provide deeper insights into neuroscience and potentially lead to improved personalized medicine outcomes. Previously, we designed the Unsupervised Deep learning-derived Imaging Phenotypes (UDIPs) approach to extract phenotypes from brain imaging using a convolutional neural network (CNN) autoencoder, and conducted brain imaging GWAS on the UK Biobank (UKBB). In this work, we design a vision transformer (ViT)-based autoencoder, leveraging its distinct inductive bias and its ability to capture unique patterns through its pairwise attention mechanism. The encoder generates contextual embeddings for input patches, from which we derive a 128-dimensional latent representation, interpreted as phenotypes, by applying average pooling. The GWAS on these 128 phenotypes discovered 10 loci not previously reported by the CNN-based UDIP model, 3 of which had no previous associations with brain structure in the GWAS Catalog. Our interpretation results suggest that these novel associations stem from the ViT's capability to learn sparse attention patterns, enabling it to capture non-local patterns such as left-right hemisphere symmetry within brain MRI data. Our results highlight the advantages of transformer-based architectures in feature extraction and representation learning for genetic discovery.
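
A hedged sketch of deriving the latent phenotypes from a ViT encoder: average-pool the patch embeddings into a single vector per scan. The embedding width and the linear map to 128 dimensions are assumptions here, not the paper's configuration:

```python
import torch
import torch.nn as nn

class ViTLatent(nn.Module):
    def __init__(self, embed_dim: int = 768, latent_dim: int = 128):
        super().__init__()
        self.proj = nn.Linear(embed_dim, latent_dim)

    def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
        """patch_embeddings: (batch, n_patches, embed_dim) from the encoder."""
        pooled = patch_embeddings.mean(dim=1)   # average pooling over patches
        return self.proj(pooled)                # 128-dim phenotype vector

tokens = torch.randn(2, 196, 768)               # toy encoder output
print(ViTLatent()(tokens).shape)                # torch.Size([2, 128])
```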

From Detection to Diagnosis: An Advanced Transfer Learning Pipeline Using YOLO11 with Morphological Post-Processing for Brain Tumor Analysis for MRI Images.

Chourib I

PubMed · Aug 21, 2025
Accurate and timely detection of brain tumors from magnetic resonance imaging (MRI) scans is critical for improving patient outcomes and informing therapeutic decision-making. However, the complex heterogeneity of tumor morphology, the scarcity of annotated medical data, and the computational demands of deep learning models present substantial challenges for developing reliable automated diagnostic systems. In this study, we propose a robust and scalable deep learning framework for brain tumor detection and classification, built upon an enhanced YOLO-v11 architecture combined with a two-stage transfer learning strategy. The first stage involves training a base model on a large, diverse MRI dataset. Upon achieving a mean Average Precision (mAP) exceeding 90%, this model is designated the Brain Tumor Detection Model (BTDM). In the second stage, the BTDM is fine-tuned on a structurally similar but smaller dataset to form the Brain Tumor Detection and Segmentation (BTDS) model, effectively leveraging domain transfer to maintain performance despite limited data. The model is further optimized through domain-specific data augmentation, including geometric transformations, to improve generalization and robustness. Experimental evaluations on publicly available datasets show that the framework achieves high mAP@0.5 scores (up to 93.5% for the BTDM and 91% for the BTDS) and consistently outperforms existing state-of-the-art methods across multiple tumor types, including glioma, meningioma, and pituitary tumors. In addition, a post-processing module enhances interpretability by generating segmentation masks and extracting clinically relevant metrics such as tumor size and severity level. These results underscore the potential of our approach as a high-performance, interpretable, and deployable clinical decision-support tool, contributing to the advancement of intelligent real-time neuro-oncological diagnostics.
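
A sketch of the two-stage transfer-learning recipe using the Ultralytics YOLO API; the dataset YAML paths, model size, and hyperparameters are illustrative assumptions, not the paper's configuration:

```python
from ultralytics import YOLO

# Stage 1: train the base detector (BTDM) on the large, diverse MRI dataset.
base = YOLO("yolo11n.pt")                       # pretrained YOLO11 weights
base.train(data="brain_tumor_large.yaml", epochs=100, imgsz=640)

# Stage 2: fine-tune the BTDM weights on the smaller, similar dataset (BTDS).
btds = YOLO("runs/detect/train/weights/best.pt")  # default output path, may vary
btds.train(data="brain_tumor_small.yaml", epochs=50, imgsz=640, lr0=0.001)
metrics = btds.val()                             # reports mAP@0.5 among others
```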