Page 121 of 4003995 results

A hybrid filtering and deep learning approach for early Alzheimer's disease identification.

Ahamed MKU, Hossen R, Paul BK, Hasan M, Al-Arashi WH, Kazi M, Talukder MA

pubmed logopapers · Jul 29 2025
Alzheimer's disease is a progressive neurological disorder that profoundly affects cognitive functions and daily activities. Rapid and precise identification is essential for effective intervention and improved patient outcomes. This research introduces an innovative hybrid filtering approach combined with a deep transfer learning model for detecting Alzheimer's disease from brain imaging data. The hybrid filtering method integrates the Adaptive Non-Local Means filter with a sharpening filter for image preprocessing. The deep learning model is built on the EfficientNetV2B3 architecture, augmented with additional layers and fine-tuned to classify four categories: mild, moderate, very mild, and non-demented. The work employs Grad-CAM++ to enhance interpretability by localizing disease-relevant features in brain images. Experimental assessment on a publicly accessible dataset shows the model achieves an accuracy of 99.45%. These findings underscore the capability of sophisticated deep learning methodologies to aid clinicians in accurately identifying Alzheimer's disease.
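The preprocessing pipeline pairs Adaptive Non-Local Means denoising with a sharpening filter. As a minimal sketch of the sharpening stage only (the ANLM step is omitted, and this particular 3x3 Laplacian-style kernel is an assumption, not taken from the paper), in pure Python:

```python
def sharpen(img):
    """Apply a 3x3 Laplacian-style sharpening kernel to a 2D grayscale
    image given as a list of lists; border pixels are left unchanged."""
    k = [[0, -1, 0],
         [-1, 5, -1],
         [0, -1, 0]]  # weights sum to 1, so flat regions pass through unchanged
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(k[a][b] * img[i - 1 + a][j - 1 + b]
                            for a in range(3) for b in range(3))
    return out
```

Because the kernel weights sum to one, homogeneous tissue is passed through untouched while intensity edges are amplified, which is the intended effect before feeding slices to the classifier.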

A novel deep learning-based brain age prediction framework for routine clinical MRI scans.

Kim H, Park S, Seo SW, Na DL, Jang H, Kim JP, Kim HJ, Kang SH, Kwak K

pubmed logopapers · Jul 29 2025
Physiological brain aging is associated with cognitive impairment and neuroanatomical changes. Brain age prediction from routine clinical 2D brain MRI scans has been understudied and often unsuccessful. We developed a novel brain age prediction framework for clinical 2D T1-weighted MRI scans using a deep learning-based model trained on research-grade 3D MRI scans, mostly from publicly available datasets (N = 8681; age = 51.76 ± 21.74). Our model showed accurate and fast brain age prediction on clinical 2D MRI scans from cognitively unimpaired (CU) subjects (N = 175), with an MAE of 2.73 years after age bias correction (Pearson's r = 0.918). The brain age gap of Alzheimer's disease (AD) subjects was significantly greater than that of CU subjects (p < 0.001), and an increased brain age gap was associated with disease progression in both AD (p < 0.05) and Parkinson's disease (p < 0.01). Our framework can be extended to other MRI modalities and potentially applied to routine clinical examinations, enabling early detection of structural anomalies and improving patient outcomes.
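Age bias correction is a standard post-hoc step for brain age models: predictions regress toward the training-set mean, so a linear fit of predicted versus chronological age in controls is estimated and inverted. A minimal sketch assuming this common linear correction (the authors' exact procedure may differ):

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def bias_corrected(pred, slope, intercept):
    """Invert the fitted bias: corrected = (predicted - intercept) / slope."""
    return [(p - intercept) / slope for p in pred]
```

The fit is estimated on cognitively unimpaired controls; the brain age gap is then the corrected prediction minus chronological age, which is the quantity compared across AD, PD, and CU groups above.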

MFFBi-Unet: Merging Dynamic Sparse Attention and Multi-scale Feature Fusion for Medical Image Segmentation.

Sun B, Liu C, Wang Q, Bi K, Zhang W

pubmed logopapers · Jul 29 2025
The advancement of deep learning has driven extensive research validating the effectiveness of U-Net-style symmetric encoder-decoder architectures based on Transformers for medical image segmentation. However, attention mechanisms that compute token affinities across all spatial locations incur prohibitive computational complexity and substantial memory demands. Recent efforts have attempted to address these limitations through sparse attention, yet existing approaches that employ handcrafted, content-agnostic sparsity patterns show limited capability in modeling long-range dependencies. We propose MFFBi-Unet, a novel architecture incorporating dynamic sparse attention through bi-level routing, enabling context-aware computation allocation with enhanced adaptability. The encoder-decoder module integrates BiFormer to optimize semantic feature extraction and facilitate high-fidelity feature map reconstruction. A novel Multi-scale Feature Fusion (MFF) module in the skip connections synergistically combines multi-level contextual information with processed multi-scale features. Extensive evaluations on multiple public medical benchmarks show consistent, statistically significant improvements over state-of-the-art approaches such as MISSFormer, by 2.02% and 1.28% in Dice score on the respective benchmarks.
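Content-aware sparse attention restricts each query to the keys it scores highest against, rather than a fixed pattern. A toy token-level sketch of this idea (BiFormer's bi-level routing instead selects top-k *regions* before fine-grained attention; this simplified single-head version is only an analogue):

```python
import math

def topk_attention(q, k, v, topk):
    """Single-head scaled dot-product attention in which each query
    attends only to its top-k keys by affinity score. Content-aware
    (the kept set depends on the input), unlike fixed sparsity masks."""
    out = []
    for qi in q:
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(len(qi))
                  for kj in k]
        # keep only the k highest-scoring keys for this query
        keep = sorted(range(len(k)), key=lambda j: scores[j], reverse=True)[:topk]
        m = max(scores[j] for j in keep)
        w = [math.exp(scores[j] - m) for j in keep]  # softmax over kept keys
        s = sum(w)
        out.append([sum(w[n] * v[j][d] for n, j in enumerate(keep)) / s
                    for d in range(len(v[0]))])
    return out
```

With `topk` equal to the sequence length this reduces to dense attention; shrinking `topk` trades a little accuracy for computation proportional to the kept set.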

AI generated annotations for Breast, Brain, Liver, Lungs, and Prostate cancer collections in the National Cancer Institute Imaging Data Commons.

Murugesan GK, McCrumb D, Soni R, Kumar J, Nuernberg L, Pei L, Wagner U, Granger S, Fedorov AY, Moore S, Van Oss J

pubmed logopapers · Jul 29 2025
The Artificial Intelligence in Medical Imaging (AIMI) initiative aims to enhance the National Cancer Institute's (NCI) Image Data Commons (IDC) by releasing fully reproducible nnU-Net models, along with AI-assisted segmentation for cancer radiology images. In this extension of our earlier work, we created high-quality, AI-annotated imaging datasets for 11 IDC collections, spanning computed tomography (CT) and magnetic resonance imaging (MRI) of the lungs, breast, brain, kidneys, prostate, and liver. Each nnU-Net model was trained on open-source datasets, and a portion of the AI-generated annotations was reviewed and corrected by board-certified radiologists. Both the AI and radiologist annotations were encoded in compliance with the Digital Imaging and Communications in Medicine (DICOM) standard, ensuring seamless integration into the IDC collections. By making these models, images, and annotations publicly accessible, we aim to facilitate further research and development in cancer imaging.

A data assimilation framework for predicting the spatiotemporal response of high-grade gliomas to chemoradiation.

Miniere HJM, Hormuth DA, Lima EABF, Farhat M, Panthi B, Langshaw H, Shanker MD, Talpur W, Thrower S, Goldman J, Ty S, Chung C, Yankeelov TE

pubmed logopapers · Jul 29 2025
High-grade gliomas are highly invasive and respond variably to chemoradiation. Accurate, patient-specific predictions of tumor response could enhance treatment planning. We present a novel computational platform that assimilates MRI data to continually predict spatiotemporal tumor changes during chemoradiotherapy. Tumor growth and response to chemoradiation were described using a two-species reaction-diffusion model of the enhancing and non-enhancing regions of the tumor. Two evaluation scenarios were used to test the predictive accuracy of this model. In scenario 1, the model was calibrated on a patient-specific basis (n = 21) to weekly MRI data acquired during the course of chemoradiotherapy. A data assimilation framework updated the model parameters with each new imaging visit, and the updated parameters were then used to refresh the model predictions. In scenario 2, we evaluated the predictive accuracy of the model when fewer data points are available by calibrating the same model using only the first two imaging visits and then predicting tumor response over the remaining five weeks of treatment. We investigated three approaches to assigning model parameters for scenario 2: (1) predictions using only parameters estimated by fitting the data from an individual patient's first two imaging visits, (2) predictions made by averaging the patient-specific parameters with cohort-derived parameters, and (3) predictions using only cohort-derived parameters. Scenario 1 achieved a median [range] concordance correlation coefficient (CCC) between the predicted and measured total tumor cell counts of 0.91 [0.84, 0.95] and a median [range] percent error in tumor volume of -2.6% [-19.7%, 8.0%], demonstrating strong agreement throughout the course of treatment. For scenario 2, the three approaches yielded CCCs of (1) 0.65 [0.51, 0.88], (2) 0.74 [0.70, 0.91], and (3) 0.76 [0.73, 0.92], with significant differences between approach (1), which does not use the cohort parameters, and approaches (2) and (3), which do. The proposed data assimilation framework enhances the accuracy of tumor growth forecasts by integrating patient-specific and cohort-based data. These findings demonstrate a practical method for identifying more personalized treatment strategies for high-grade glioma patients.
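The forward model at the heart of such a framework is a two-species reaction-diffusion PDE. A minimal 1D explicit-Euler sketch with generic logistic growth toward a shared carrying capacity (the authors' calibrated parameters and chemoradiation response terms are not reproduced; this is a schematic stand-in only):

```python
def rd_step(u, v, D, k, theta, dt, dx):
    """Advance two coupled species (e.g. enhancing u and non-enhancing v
    tumor cell fractions) one explicit finite-difference time step:
    diffusion plus logistic growth limited by a shared carrying capacity
    theta, with zero-flux boundaries."""
    def lap(w, i):
        left = w[i - 1] if i > 0 else w[i]            # zero-flux boundary
        right = w[i + 1] if i < len(w) - 1 else w[i]
        return (left - 2 * w[i] + right) / dx ** 2
    total = [a + b for a, b in zip(u, v)]
    new_u = [u[i] + dt * (D[0] * lap(u, i) + k[0] * u[i] * (1 - total[i] / theta))
             for i in range(len(u))]
    new_v = [v[i] + dt * (D[1] * lap(v, i) + k[1] * v[i] * (1 - total[i] / theta))
             for i in range(len(v))]
    return new_u, new_v
```

Data assimilation then amounts to re-estimating `D`, `k`, and `theta` each time a new imaging visit arrives and re-running the forward model from the latest observed state.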

Evaluation and analysis of risk factors for fractured vertebral recompression post-percutaneous kyphoplasty: a retrospective cohort study based on logistic regression analysis.

Zhao Y, Li B, Qian L, Chen X, Wang Y, Cui L, Xin Y, Liu L

pubmed logopapers · Jul 29 2025
Vertebral recompression after percutaneous kyphoplasty (PKP) for osteoporotic vertebral compression fractures (OVCFs) may lead to recurrent pain, deformity, and neurological impairment, compromising prognosis and quality of life. To identify independent risk factors for postoperative recompression and develop predictive models for risk assessment. We retrospectively analyzed 284 OVCF patients treated with PKP, grouped by recompression status. Predictors were screened using univariate and correlation analyses. Multicollinearity was assessed using variance inflation factor (VIF). A multivariable logistic regression model was constructed and validated via 10-fold cross-validation and temporal validation. Five independent predictors were identified: incomplete anterior cortex (odds ratio [OR] = 9.38), high paravertebral muscle fat infiltration (OR = 218.68), low vertebral CT value (OR = 0.87), large Cobb change (OR = 1.45), and high vertebral height recovery rate (OR = 22.64). The logistic regression model achieved strong performance: accuracy 97.67%, precision 97.06%, recall 97.06%, F1 score 97.06%, specificity 98.08%, area under the receiver operating characteristic curve (AUC) 0.998. Machine learning models (e.g., random forest) were also evaluated but did not outperform logistic regression in accuracy or interpretability. Five imaging-based predictors of vertebral recompression were identified. The logistic regression model showed excellent predictive accuracy and generalizability, supporting its clinical utility for early risk stratification and personalized decision-making in OVCF patients undergoing PKP.
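The multicollinearity screening step has a simple closed form: each predictor's variance inflation factor is 1 / (1 - R²) from regressing it on the remaining predictors. A minimal sketch for the two-predictor special case, where R² reduces to the squared pairwise correlation (the study's actual screening ran over all five candidate predictors, which requires a full multiple regression per predictor):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def vif_pair(x1, x2):
    """Variance inflation factor for either predictor in a two-predictor
    model: VIF = 1 / (1 - R^2), with R^2 = r^2 in the pairwise case."""
    r = pearson_r(x1, x2)
    return 1.0 / (1.0 - r * r)
```

A common rule of thumb flags VIF above 5 or 10 as problematic collinearity, which would argue for dropping or combining one of the offending predictors before fitting the logistic model.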

BioAug-Net: a bioimage sensor-driven attention-augmented segmentation framework with physiological coupling for early prostate cancer detection in T2-weighted MRI.

Arshad M, Wang C, Us Sima MW, Ali Shaikh J, Karamti H, Alharthi R, Selecky J

pubmed logopapers · Jul 29 2025
Accurate segmentation of the prostate peripheral zone (PZ) in T2-weighted MRI is critical for the early detection of prostate cancer. Existing segmentation methods are hindered by significant inter-observer variability (37.4 ± 5.6%), poor boundary localization, and the presence of motion artifacts, and face challenges in clinical integration. In this study, we propose BioAug-Net, a novel framework that integrates real-time physiological signal feedback with MRI data, leveraging transformer-based attention mechanisms and a probabilistic clinical decision support system (PCDSS). BioAug-Net features a dual-branch asymmetric attention mechanism: one branch processes spatial MRI features, while the other incorporates temporal sensor signals through a BiGRU-driven adaptive masking module. Additionally, a Markov Decision Process-based PCDSS maps segmentation outputs to clinical PI-RADS scores, with uncertainty quantification. We validated BioAug-Net on a multi-institutional dataset (n = 1,542) and demonstrated state-of-the-art performance, achieving a Dice Similarity Coefficient of 89.7% (p < 0.001), sensitivity of 91.2% (p < 0.001), specificity of 88.4% (p < 0.001), and HD95 of 2.14 mm (p < 0.001), outperforming U-Net, Attention U-Net, and TransUNet. Sensor integration improved segmentation accuracy by 12.6% (p < 0.001) and reduced inter-observer variation by 48.3% (p < 0.001). Radiologist evaluations (n = 3) confirmed a 15.0% reduction in diagnosis time (p = 0.003) and an increase in inter-reader agreement from κ = 0.68 to κ = 0.82 (p = 0.001). Our results show that BioAug-Net offers a clinically viable solution for early prostate cancer detection through enhanced physiological coupling and explainable AI diagnostics.
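The headline metrics here (Dice, sensitivity, specificity) all reduce to overlap counts between binary masks. A minimal sketch on flattened masks (HD95 additionally requires boundary distance computations and is omitted):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two flat 0/1 masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    return 2.0 * inter / (sum(pred) + sum(truth))

def sens_spec(pred, truth):
    """Sensitivity (true-positive rate) and specificity (true-negative
    rate) of a predicted binary mask against a reference mask."""
    tp = sum(p * t for p, t in zip(pred, truth))
    fn = sum((1 - p) * t for p, t in zip(pred, truth))
    tn = sum((1 - p) * (1 - t) for p, t in zip(pred, truth))
    fp = sum(p * (1 - t) for p, t in zip(pred, truth))
    return tp / (tp + fn), tn / (tn + fp)
```

In practice a 2D or 3D segmentation is flattened before scoring; Dice weighs overlap against the sizes of both masks, which is why it is preferred over plain accuracy for small structures like the peripheral zone.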

Use of imaging biomarkers and ambulatory functional endpoints in Duchenne muscular dystrophy clinical trials: Systematic review and machine learning-driven trend analysis.

Todd M, Kang S, Wu S, Adhin D, Yoon DY, Willcocks R, Kim S

pubmed logopapers · Jul 29 2025
Duchenne muscular dystrophy (DMD) is a rare X-linked genetic muscle disorder affecting primarily pediatric males and leading to limited life expectancy. This systematic review of 85 DMD trials and non-interventional studies (2010-2022) evaluated how magnetic resonance imaging biomarkers, particularly fat fraction and T2 relaxation time, are currently used to quantitatively track disease progression, and how their use compares to traditional mobility-based functional endpoints. Imaging biomarker studies lasted on average 4.50 years, approximately 11 months longer than studies using only ambulatory functional endpoints. While 93% of biologic intervention trials (n = 28) included ambulatory functional endpoints, only 13.3% (n = 4) incorporated imaging biomarkers. Small molecule trials and natural history studies were the predominant contributors to imaging biomarker use, each comprising 30.4% of such studies. Small molecule trials used imaging biomarkers more frequently than biologic trials, likely because biologics often target dystrophin, an established surrogate biomarker, while small molecules lack regulatory-approved biomarkers. Notably, following the finalization of the 2018 FDA guidance, we observed a significant decrease in new trials using imaging biomarkers despite earlier regulatory encouragement. This analysis demonstrates that while imaging biomarkers are increasingly used in natural history studies, their integration into interventional trials remains limited. In an XGBoost machine learning analysis, trial duration and start year were the strongest predictors of biomarker usage, with a decline observed following the 2018 FDA guidance. Despite their potential to objectively track disease progression, imaging biomarkers have not yet been widely adopted as primary endpoints in therapeutic trials, likely due to regulatory and logistical challenges. Future research should examine whether standardizing imaging protocols or integrating hybrid endpoint models could bridge the regulatory gap currently limiting biomarker adoption in therapeutic trials.

Distribution-Based Masked Medical Vision-Language Model Using Structured Reports

Shreyank N Gowda, Ruichi Zhang, Xiao Gu, Ying Weng, Lu Yang

arxiv logopreprint · Jul 29 2025
Medical image-language pre-training aims to align medical images with clinically relevant text to improve model performance on various downstream tasks. However, existing models often struggle with the variability and ambiguity inherent in medical data, limiting their ability to capture nuanced clinical information and uncertainty. This work introduces an uncertainty-aware medical image-text pre-training model that enhances generalization capabilities in medical image analysis. Building on previous methods and focusing on chest X-rays, our approach utilizes structured text reports generated by a large language model (LLM) to augment image data with clinically relevant context. These reports begin with a definition of the disease, followed by an 'appearance' section highlighting critical regions of interest, and finally 'observations' and 'verdicts' that ground model predictions in clinical semantics. By modeling both inter- and intra-modal uncertainty, our framework captures the inherent ambiguity in medical images and text, yielding improved representations and performance on downstream tasks. Our model demonstrates significant advances in medical image-text pre-training, obtaining state-of-the-art performance on multiple downstream tasks.

GDAIP: A Graph-Based Domain Adaptive Framework for Individual Brain Parcellation

Jianfei Zhu, Haiqi Zhu, Shaohui Liu, Feng Jiang, Baichun Wei, Chunzhi Yi

arxiv logopreprint · Jul 29 2025
Recent deep learning approaches have shown promise in learning individual brain parcellations from functional magnetic resonance imaging (fMRI). However, most existing methods assume consistent data distributions across domains and struggle with the domain shifts inherent to real-world cross-dataset scenarios. To address this challenge, we propose Graph Domain Adaptation for Individual Parcellation (GDAIP), a novel framework that integrates Graph Attention Networks (GAT) with Minimax Entropy (MME)-based domain adaptation. We construct cross-dataset brain graphs at both the group and individual levels. By leveraging semi-supervised training and adversarial optimization of the prediction entropy on unlabeled vertices of the target brain graph, the reference atlas is adapted from the group-level brain graph to the individual brain graph, enabling individual parcellation in cross-dataset settings. We evaluated our method using parcellation visualization, the Dice coefficient, and functional homogeneity. Experimental results demonstrate that GDAIP produces individual parcellations with topologically plausible boundaries, strong cross-session consistency, and the ability to reflect functional organization.
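The quantity that MME-style adaptation optimizes adversarially on unlabeled target vertices is the entropy of the classifier's softmax output (maximized with respect to the classifier, minimized with respect to the features). A minimal sketch of that entropy term alone (graph construction and the GAT itself are omitted):

```python
import math

def softmax_entropy(logits):
    """Shannon entropy of softmax(logits), computed with the usual
    max-subtraction trick for numerical stability. High entropy means
    an uncertain prediction; near-zero entropy means a confident one."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return -sum((e / s) * math.log(e / s) for e in exps)
```

In a minimax entropy setup this value, averaged over unlabeled target vertices, is the adversarial objective that pushes target features toward class prototypes.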