Resting-State Functional MRI: Current State, Controversies, Limitations, and Future Directions - AJR Expert Panel Narrative Review.

Vachha BA, Kumar VA, Pillai JJ, Shimony JS, Tanabe J, Sair HI

Sep 3 2025
Resting-state functional MRI (rs-fMRI), a promising method for interrogating different brain functional networks from a single MRI acquisition, is increasingly used in clinical presurgical and other pretherapeutic brain mapping. However, challenges in standardizing acquisition, preprocessing, and analysis methods across centers, along with variability in the interpretation of results, complicate its clinical use. Additionally, inherent problems regarding the reliability of language lateralization, interpatient variability of cognitive network representation, dynamic aspects of intranetwork and internetwork connectivity, and the effects of neurovascular uncoupling on network detection still must be overcome. Although deep learning solutions and further methodologic standardization will help address these issues, rs-fMRI is still generally considered an adjunct to task-based fMRI (tb-fMRI) for clinical presurgical mapping. Nonetheless, in many clinical instances, rs-fMRI may offer valuable additional information that supplements tb-fMRI, especially when tb-fMRI is inadequate due to patient performance or other limitations. Future growth in clinical applications of rs-fMRI is anticipated as these challenges are increasingly addressed. This AJR Expert Panel Narrative Review summarizes the current state and emerging clinical utility of rs-fMRI, focusing on its role in presurgical mapping. Ongoing controversies and limitations in clinical applicability are presented, and future directions are discussed, including the developing role of rs-fMRI in neuromodulation treatment of various neurologic disorders.

MRI-based deep learning radiomics in predicting histological differentiation of oropharyngeal cancer: a multicenter cohort study.

Pan Z, Lu W, Yu C, Fu S, Ling H, Liu Y, Zhang X, Gong L

Sep 3 2025
The primary aim of this research was to create and rigorously assess a deep learning radiomics (DLR) framework utilizing magnetic resonance imaging (MRI) to forecast the histological differentiation grades of oropharyngeal cancer. This retrospective analysis encompassed 122 patients diagnosed with oropharyngeal cancer across three medical institutions in China. The participants were randomly divided into two groups: a training cohort of 85 individuals and a test cohort of 37. Radiomics features derived from MRI scans, along with deep learning (DL) features, were meticulously extracted and carefully refined. These two sets of features were then integrated to build the DLR model, designed to assess the histological differentiation of oropharyngeal cancer. The model's predictive efficacy was gauged through the area under the receiver operating characteristic curve (AUC) and decision curve analysis (DCA). The DLR model demonstrated impressive performance, achieving strong AUC scores of 0.871 on the training cohort and 0.803 on the test cohort, outperforming both the standalone radiomics and DL models. Additionally, the DCA curve highlighted the significance of the DLR model in forecasting the histological differentiation of oropharyngeal cancer. The MRI-based DLR model demonstrated high predictive ability for histological differentiation of oropharyngeal cancer, which might be important for accurate preoperative diagnosis and clinical decision-making.
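
The abstract does not specify how the radiomics and DL feature sets were fused or classified; the sketch below illustrates one common reading, feature-level concatenation with a logistic-regression head. The feature dimensions, classifier, and synthetic arrays are assumptions for illustration, not the authors' method.

```python
# Minimal sketch of radiomics + deep feature fusion, assuming both feature
# sets have already been extracted; all arrays here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_train, n_test = 85, 37                          # cohort sizes from the abstract
radiomics_tr = rng.normal(size=(n_train, 107))    # e.g., PyRadiomics features
deep_tr = rng.normal(size=(n_train, 512))         # e.g., a CNN embedding (assumed size)
y_tr = rng.integers(0, 2, size=n_train)           # differentiation grade label

fused_tr = np.concatenate([radiomics_tr, deep_tr], axis=1)  # feature-level fusion
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(fused_tr, y_tr)

fused_te = np.concatenate(
    [rng.normal(size=(n_test, 107)), rng.normal(size=(n_test, 512))], axis=1)
y_te = rng.integers(0, 2, size=n_test)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(fused_te)[:, 1]))
```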

MetaPredictomics: A Comprehensive Approach to Predict Postsurgical Non-Small Cell Lung Cancer Recurrence Using Clinicopathologic, Radiomics, and Organomics Data.

Amini M, Hajianfar G, Salimi Y, Mansouri Z, Zaidi H

Sep 3 2025
Non-small cell lung cancer (NSCLC) is a complex disease characterized by diverse clinical, genetic, and histopathologic traits, necessitating personalized treatment approaches. While numerous biomarkers have been introduced for NSCLC prognostication, no single source of information can provide a comprehensive understanding of the disease. Integrating biomarkers from multiple sources, however, may offer a holistic view of the disease, enabling more accurate predictions. In this study, we present MetaPredictomics, a framework that integrates clinicopathologic data with PET/CT radiomics from the primary tumor and presumed healthy organs (referred to as "organomics") to predict postsurgical recurrence. A fully automated deep learning-based segmentation model was employed to delineate 19 affected (whole lung and the affected lobe) and presumed healthy organs from CT images of the presurgical PET/CT scans of 145 NSCLC patients sourced from a publicly available data set. Using PyRadiomics, 214 features (107 from CT, 107 from PET) were extracted from the gross tumor volume (GTV) and each segmented organ. In addition, a clinicopathologic feature set was constructed, incorporating clinical characteristics, histopathologic data, gene mutation status, conventional PET imaging biomarkers, and patients' treatment history. GTV radiomics, each of the organomics feature sets, and the clinicopathologic feature set were each fed to a time-to-event prediction model based on glmboost to establish first-level models. The risk scores obtained from the first-level models were then used as inputs for meta models developed using a stacked ensemble approach. To optimize performance, we assessed meta models built upon all combinations of first-level models with a concordance index (C-index) ≥0.6. The performance of all models was evaluated using the average C-index across a unique 3-fold cross-validation scheme for fair comparison. The clinicopathologic model outperformed the other first-level models with a C-index of 0.67, followed closely by the GTV radiomics model with a C-index of 0.65. Among the organomics models, the whole-lung and aorta models achieved top performance with a C-index of 0.65, while 12 organomics models achieved C-indices ≥0.6. Meta models significantly outperformed the first-level models, with the top 100 achieving C-indices between 0.703 and 0.731. The clinicopathologic, whole-lung, esophagus, pancreas, and GTV models appeared most frequently in the top 100 meta models, with frequencies of 98, 71, 69, 62, and 61, respectively. In this study, we highlighted the value of maximizing the use of medical imaging for NSCLC recurrence prognostication by incorporating data from various organs, rather than focusing solely on the tumor and its immediate surroundings. This multisource integration proved particularly beneficial in the meta models, where combining clinicopathologic data with tumor radiomics and organomics models significantly enhanced recurrence prediction.
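
A minimal sketch of the stacking idea described above: first-level survival models, one per feature set, emit risk scores that a meta model combines. scikit-survival's gradient boosting is used here as a Python stand-in for R's glmboost, and the synthetic data, feature-set names, and Cox meta learner are illustrative assumptions rather than the study's configuration.

```python
# Hedged sketch of a stacked ensemble for time-to-event prediction.
import numpy as np
from sksurv.ensemble import GradientBoostingSurvivalAnalysis
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import concordance_index_censored

rng = np.random.default_rng(1)
n = 145                                            # patients in the study
y = np.array([(bool(e), t) for e, t in zip(rng.integers(0, 2, n),
                                           rng.uniform(1, 60, n))],
             dtype=[("event", bool), ("time", float)])

feature_sets = {                                   # illustrative subset of sources
    "gtv": rng.normal(size=(n, 214)),
    "whole_lung": rng.normal(size=(n, 214)),
    "clinicopathologic": rng.normal(size=(n, 20)),
}

# First level: one booster per feature set, each emitting a risk score.
risk_scores = []
for name, X in feature_sets.items():
    model = GradientBoostingSurvivalAnalysis(n_estimators=100).fit(X, y)
    risk_scores.append(model.predict(X))

# Meta model: a Cox model stacked on the first-level risk scores.
Z = np.column_stack(risk_scores)
meta = CoxPHSurvivalAnalysis().fit(Z, y)
cindex = concordance_index_censored(y["event"], y["time"], meta.predict(Z))[0]
print(f"apparent C-index: {cindex:.3f}")           # in-sample, for illustration only
```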

Mammographic density assessed using deep learning in women at high risk of developing breast cancer: the effect of weight change on density.

Squires S, Harvie M, Howell A, Evans DG, Astley SM

Sep 3 2025
High mammographic density (MD) and excess weight are both associated with increased risk of breast cancer. Classically defined percentage density measures tend to increase with reduced weight due to disproportionate loss of breast fat; however, the effect of weight loss on artificial intelligence-based density scores is unknown. We investigated an artificial intelligence-based density method, reporting density changes in 46 women enrolled in a weight-loss study in a family history breast cancer clinic, using a volumetric density method as a comparison. Methods: We analysed data from women who had weight recorded and mammograms taken at the start and end of the 12-month weight intervention study. MD was assessed at both time points using pVAS, a deep learning model trained on expert estimates of percent density, and the volumetric density software Volpara. Results: Mean (standard deviation) weight of participants at the start and end of the study was 86.0 (12.2) kg and 82.5 (13.8) kg respectively; mean (standard deviation) pVAS scores were 35.8 (13.0) and 36.3 (12.4), and Volpara volumetric percent density scores were 7.05 (4.4) and 7.6 (4.4). The Spearman rank correlation between reduction in weight and change in density was 0.17 (-0.13 to 0.43, p=0.27) for pVAS and 0.59 (0.36 to 0.75, p<0.001) for Volpara volumetric percent density. Conclusion: pVAS percentage density measurements were not significantly affected by change in weight. Percent density measured with Volpara increased as weight decreased, driven by changes in fat volume.
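
The correlation analysis reported above reduces to a Spearman rank test on paired per-woman changes; a minimal sketch with synthetic data follows (the effect sizes baked into the arrays are invented for illustration, not the study's values).

```python
# Sketch of the reported analysis: rank correlation between weight change
# and density change, computed per density method. Data are synthetic.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 46                                        # participants in the study
weight_change = rng.normal(3.5, 2.0, n)       # kg lost over the intervention
pvas_change = rng.normal(-0.5, 3.0, n)        # change in pVAS percent density
volpara_change = 0.15 * weight_change + rng.normal(0, 0.8, n)

for name, density_change in [("pVAS", pvas_change), ("Volpara", volpara_change)]:
    rho, p = spearmanr(weight_change, density_change)
    print(f"{name}: Spearman rho = {rho:.2f}, p = {p:.3f}")
```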

Voxel-level Radiomics and Deep Learning Based on MRI for Predicting Microsatellite Instability in Endometrial Carcinoma: A Two-center Study.

Tian CH, Sun P, Xiao KY, Niu XF, Li XS, Xu N

Sep 3 2025
To develop and validate a non-invasive deep learning model that integrates voxel-level radiomics with multi-sequence MRI to predict microsatellite instability (MSI) status in patients with endometrial carcinoma (EC). This two-center retrospective study included 375 patients with pathologically confirmed EC. Patients underwent preoperative multiparametric MRI (T2WI, DWI, CE-T1WI), and MSI status was determined by immunohistochemistry. Tumor regions were manually segmented, and voxel-level radiomics features were extracted following IBSI guidelines. A dual-channel 3D deep neural network based on the Vision-Mamba architecture was constructed to jointly process voxel-wise radiomics feature maps and MR images. The model was trained and internally validated on cohorts from Center I and tested on an external cohort from Center II. Performance was compared with Vision Transformer, 3D-ResNet, and traditional radiomics models; interpretability was assessed with feature importance ranking and SHAP value visualization. The Vision-Mamba model achieved strong predictive performance across all datasets. In the external test cohort, it yielded an AUC of 0.866, accuracy of 0.875, sensitivity of 0.833, and specificity of 0.900, outperforming the other models. Integrating voxel-level radiomics features with MRI enabled the model to capture both local and global tumor heterogeneity better than traditional approaches. Interpretability analysis identified glszm_SizeZoneNonUniformityNormalized, ngtdm_Busyness, and glcm_Correlation as the top features, with SHAP analysis revealing that tumor parenchyma, regions of enhancement, and diffusion restriction were pivotal for MSI prediction. The proposed voxel-level radiomics and deep learning model provides a robust, non-invasive tool for predicting MSI status in endometrial carcinoma, potentially supporting personalized treatment decision-making.
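
As a structural sketch of the dual-channel idea, the toy model below feeds MR volumes through one branch and voxel-wise radiomics feature maps through another, fusing them for a binary MSI head. A plain 3D CNN stands in for the Vision-Mamba backbone, whose implementation the abstract does not detail; branch widths, depths, and channel counts are assumptions.

```python
# Hedged sketch of a dual-channel 3D network for MSI prediction.
import torch
import torch.nn as nn

def conv_branch(in_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
        nn.MaxPool3d(2),
        nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    )

class DualChannelMSINet(nn.Module):
    def __init__(self, n_sequences=3, n_radiomics_maps=3):
        super().__init__()
        self.image_branch = conv_branch(n_sequences)         # T2WI, DWI, CE-T1WI
        self.radiomics_branch = conv_branch(n_radiomics_maps)  # voxel-level feature maps
        self.head = nn.Linear(64, 2)                         # MSI vs. MSS (assumed binary)

    def forward(self, images, radiomics_maps):
        fused = torch.cat([self.image_branch(images),
                           self.radiomics_branch(radiomics_maps)], dim=1)
        return self.head(fused)

model = DualChannelMSINet()
logits = model(torch.randn(2, 3, 32, 64, 64), torch.randn(2, 3, 32, 64, 64))
print(logits.shape)  # torch.Size([2, 2])
```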

Edge-centric Brain Connectome Representations Reveal Increased Brain Functional Diversity of Reward Circuit in Patients with Major Depressive Disorder.

Qin K, Ai C, Zhu P, Xiang J, Chen X, Zhang L, Wang C, Zou L, Chen F, Pan X, Wang Y, Gu J, Pan N, Chen W

Sep 3 2025
Major depressive disorder (MDD) has been increasingly understood as a disorder of network-level functional dysconnectivity. However, previous brain connectome studies have primarily relied on node-centric approaches, neglecting critical edge-edge interactions that may capture essential features of network dysfunction. This study included resting-state functional MRI data from 838 MDD patients and 881 healthy controls (HC) across 23 sites. We applied a novel edge-centric connectome model to estimate edge functional connectivity and identify overlapping network communities. Regional functional diversity was quantified via normalized entropy based on community overlap patterns. Neurobiological decoding was performed to map brain-wide relationships between functional diversity alterations and patterns of gene expression and neurotransmitter distribution. Comparative machine learning analyses further evaluated the diagnostic utility of edge-centric versus node-centric connectome representations. Compared with HC, MDD patients exhibited significantly increased functional diversity within the prefrontal-striatal-thalamic reward circuit. Neurobiological decoding analysis revealed that functional diversity alterations in MDD were spatially associated with transcriptional patterns enriched for inflammatory processes, as well as distribution of 5-HT1B receptors. Machine learning analyses demonstrated superior classification performance of edge-centric models over traditional node-centric approaches in distinguishing MDD patients from HC at the individual level. Our findings highlighted that abnormal functional diversity within the reward processing system might underlie multi-level neurobiological mechanisms of MDD. The edge-centric connectome approach offers a valuable tool for identifying disease biomarkers, characterizing individual variation and advancing current understanding of complex network configuration in psychiatric disorders.
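
The edge-centric construction the authors build on has a compact standard form: z-score each regional time series, take element-wise products to obtain one time series per node pair, then correlate edges with edges. A sketch on synthetic data follows; the community-affiliation weights used for the normalized-entropy step are hypothetical stand-ins for the overlapping communities detected in the study.

```python
# Sketch of edge time series, edge functional connectivity, and normalized
# entropy over community overlap. All data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_nodes, n_timepoints = 10, 200
ts = rng.normal(size=(n_nodes, n_timepoints))      # parcellated rs-fMRI signals

z = (ts - ts.mean(axis=1, keepdims=True)) / ts.std(axis=1, keepdims=True)
i, j = np.triu_indices(n_nodes, k=1)
edge_ts = z[i] * z[j]                              # one time series per node pair
edge_fc = np.corrcoef(edge_ts)                     # edge-by-edge connectivity
print(edge_ts.shape, edge_fc.shape)                # (45, 200) (45, 45)

# Normalized entropy over (hypothetical) community-affiliation weights:
# higher values mean a region participates more evenly across communities.
p = rng.dirichlet(np.ones(5), size=n_nodes)        # node-by-community weights
functional_diversity = -(p * np.log(p)).sum(axis=1) / np.log(p.shape[1])
print(functional_diversity.round(2))
```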

AlzFormer: Video-based space-time attention model for early diagnosis of Alzheimer's disease.

Akan T, Akan S, Alp S, Ledbetter CR, Nobel Bhuiyan MA

Sep 3 2025
Early and accurate Alzheimer's disease (AD) diagnosis is critical for effective intervention, but it remains challenging due to the slow and complex progression of neurodegeneration. Recent studies in brain imaging analysis have highlighted the crucial role of deep learning techniques in computer-assisted interventions for diagnosing brain diseases. In this study, we propose AlzFormer, a novel deep learning framework based on a space-time attention mechanism, for multiclass classification of AD, MCI, and CN individuals using structural MRI scans. Unlike conventional deep learning models, we used spatiotemporal self-attention to model inter-slice continuity by treating T1-weighted MRI volumes as sequential inputs, where slices correspond to video frames. Our model was fine-tuned and evaluated using 1.5 T MRI scans from the ADNI dataset. To ensure anatomical consistency, all MRI volumes were pre-processed with skull stripping and spatial normalization to MNI space. AlzFormer achieved an overall accuracy of 94% on the test set, with balanced class-wise F1-scores (AD: 0.94, MCI: 0.99, CN: 0.98) and a macro-average AUC of 0.98. We also utilized attention map analysis to identify clinically significant patterns, particularly emphasizing subcortical structures and medial temporal regions implicated in AD. These findings demonstrate the potential of transformer-based architectures for robust and interpretable classification of brain disorders using structural MRI.
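
The slices-as-frames idea can be sketched with divided space-time attention in the TimeSformer style: attend across patches within each slice, then across slices at each patch position. The dimensions below are illustrative, not the paper's configuration.

```python
# Hedged sketch of one divided space-time attention block over slice tokens.
import torch
import torch.nn as nn

class DividedSpaceTimeBlock(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                     # x: (batch, slices, patches, dim)
        b, s, p, d = x.shape
        xs = x.reshape(b * s, p, d)           # spatial attention within each slice
        xs, _ = self.spatial(xs, xs, xs)
        x = xs.reshape(b, s, p, d)
        xt = x.permute(0, 2, 1, 3).reshape(b * p, s, d)  # attention across slices
        xt, _ = self.temporal(xt, xt, xt)
        return xt.reshape(b, p, s, d).permute(0, 2, 1, 3)

tokens = torch.randn(2, 24, 49, 64)           # 24 slices, 7x7 patches, dim 64
print(DividedSpaceTimeBlock()(tokens).shape)  # torch.Size([2, 24, 49, 64])
```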

An Artificial Intelligence System for Staging the Spheno-Occipital Synchondrosis.

Milani OH, Mills L, Nikho A, Tliba M, Ayyildiz H, Allareddy V, Ansari R, Cetin AE, Elnagar MH

Sep 2 2025
The aim of this study was to develop, test and validate automated, interpretable deep learning algorithms for the assessment and classification of spheno-occipital synchondrosis (SOS) fusion stages from cone beam computed tomography (CBCT). The sample consisted of 723 CBCT scans of orthodontic patients from private practices in the midwestern United States. The SOS fusion stages were classified by two orthodontists and an oral and maxillofacial radiologist. The deep learning models employed were ResNet, EfficientNet and ConvNeXt. Additionally, a new attention-based model, ConvNeXt + Conv Attention, was developed to enhance classification accuracy by integrating attention mechanisms for capturing subtle medical imaging features. Lastly, YOLOv11 was integrated for fully automated region detection and segmentation. ConvNeXt + Conv Attention outperformed the other models, achieving 88.94% accuracy with manual cropping and 82.49% accuracy in a fully automated workflow. This study introduces a novel artificial intelligence-based pipeline that reliably automates the classification of SOS fusion stages using advanced deep learning models, with the highest accuracy achieved by ConvNeXt + Conv Attention. These models enhance the efficiency, scalability and consistency of SOS staging while minimising manual intervention from the clinician, underscoring the potential for AI-driven solutions in orthodontics and clinical workflows.
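
The paper's exact "Conv Attention" design is not described in the abstract; the sketch below shows one plausible reading, a torchvision ConvNeXt backbone with a simple spatial attention gate before the staging head. The gate design and the number of fusion stages are assumptions.

```python
# Hedged sketch of a ConvNeXt backbone with a convolutional attention gate.
import torch
import torch.nn as nn
from torchvision.models import convnext_tiny

class SOSStager(nn.Module):
    def __init__(self, n_stages=6):            # staging scheme assumed, not given
        super().__init__()
        self.backbone = convnext_tiny(weights=None).features  # (B, 768, H', W')
        self.attention = nn.Sequential(                        # spatial gate
            nn.Conv2d(768, 1, kernel_size=1), nn.Sigmoid())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(768, n_stages)

    def forward(self, x):
        feats = self.backbone(x)
        feats = feats * self.attention(feats)   # re-weight salient regions
        return self.head(self.pool(feats).flatten(1))

logits = SOSStager()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 6])
```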

Integrating GANs, Contrastive Learning, and Transformers for Robust Medical Image Analysis.

Heng Y, Khan FG, Yinghua M, Khan A, Ali F, Khan N, Kwak D

Sep 2 2025
Despite the widespread success of convolutional neural networks (CNNs) in general computer vision tasks, their application to complex medical image analysis faces persistent challenges. These include limited labeled data availability, which restricts model generalization; class imbalance, where minority classes are underrepresented and lead to biased predictions; and inadequate feature representation, since conventional CNNs often struggle to capture subtle patterns and intricate dependencies characteristic of medical imaging. To address these limitations, we propose CTNGAN, a unified framework that integrates generative modeling with Generative Adversarial Networks (GANs), contrastive learning, and Transformer architectures to enhance the robustness and accuracy of medical image analysis. Each component is designed to tackle a specific challenge: the GAN model mitigates data scarcity and imbalance, contrastive learning strengthens feature robustness against domain shifts, and the Transformer captures long-range spatial patterns. This tripartite integration not only overcomes the limitations of conventional CNNs but also achieves superior generalizability, as demonstrated by classification experiments on benchmark medical imaging datasets, with up to 98.5% accuracy and an F1-score of 0.968, outperforming existing methods. The framework's ability to jointly optimize data generation, feature discrimination, and contextual modeling establishes a new paradigm for accurate and reliable medical image diagnosis.
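
Of CTNGAN's three components, the contrastive piece is the most self-contained to illustrate: below is an NT-Xent (SimCLR-style) loss over two augmented views, a common choice for hardening features against domain shift. Whether CTNGAN uses this exact formulation is not stated in the abstract.

```python
# Hedged sketch of an NT-Xent contrastive loss over paired embeddings.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # 2N unit-norm embeddings
    sim = z @ z.T / temperature                   # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim.masked_fill_(mask, float("-inf"))         # ignore self-similarity
    targets = torch.arange(2 * n).roll(n)         # each positive sits N rows away
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)  # two views of 8 images
print(nt_xent(z1, z2).item())
```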

Synthetic data generation with Worley-Perlin diffusion for robust subarachnoid hemorrhage detection in imbalanced CT datasets.

Lu Z, Hu T, Oda M, Fuse Y, Saito R, Jinzaki M, Mori K

Sep 2 2025
In this paper, we propose a novel generative model to produce high-quality subarachnoid hemorrhage (SAH) samples, enhancing SAH CT detection performance in imbalanced datasets. Previous methods, such as cost-sensitive learning and earlier diffusion models, suffer from overfitting or noise-induced distortion, limiting their effectiveness. Accurate SAH sample generation is crucial for better detection. We propose the Worley-Perlin Diffusion Model (WPDM), leveraging Worley-Perlin noise to synthesize diverse, high-quality SAH images. WPDM addresses the limitations of Gaussian noise (homogeneity) and Simplex noise (distortion), enhancing robustness for generating SAH images. Additionally, WPDM-Fast optimizes generation speed without compromising quality. WPDM effectively improved classification accuracy in datasets with varying imbalance ratios. Notably, a classifier trained with WPDM-generated samples achieved an F1-score of 0.857 on a 1:36 imbalance ratio, surpassing the state of the art by 2.3 percentage points. WPDM overcomes the limitations of Gaussian and Simplex noise-based models, generating high-quality, realistic SAH images. It significantly enhances classification performance in imbalanced settings, providing a robust solution for SAH CT detection.
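
Worley noise, the cellular half of the Worley-Perlin blend, is simply the distance from each pixel to its nearest random feature point. A small NumPy sketch follows; the grid size and point count are arbitrary, and the paper's actual noise schedule is not described in the abstract.

```python
# Hedged sketch of 2D Worley (cellular) noise generation.
import numpy as np

def worley_noise(size=128, n_points=24, seed=4):
    rng = np.random.default_rng(seed)
    points = rng.uniform(0, size, size=(n_points, 2))   # random feature points
    ys, xs = np.mgrid[0:size, 0:size]
    grid = np.stack([ys, xs], axis=-1).astype(float)    # (size, size, 2)
    # Distance from every pixel to every feature point; keep the nearest.
    d = np.linalg.norm(grid[:, :, None, :] - points[None, None, :, :], axis=-1)
    noise = d.min(axis=-1)
    return noise / noise.max()                          # normalize to [0, 1]

print(worley_noise().shape)  # (128, 128)
```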