Page 74 of 6346332 results

Talukder MA, Islam MM, Uddin MA, Layek MA, Acharjee UK, Bhuiyan T, Moni MA

PubMed · Oct 9, 2025
Brain tumors are a critical medical challenge, requiring accurate and timely diagnosis to improve patient outcomes. Misclassification can significantly reduce life expectancy, emphasizing the need for precise diagnostic methods. Manual analysis of extensive magnetic resonance imaging (MRI) datasets is both labor-intensive and time-consuming, underscoring the importance of an efficient deep learning (DL) model to enhance diagnostic accuracy. This study presents an innovative deep ensemble approach based on transfer learning (TL) for effective brain tumor classification. The proposed methodology incorporates comprehensive preprocessing, data balancing through synthetic data generation (SDG), reconstruction and fine-tuning of TL architectures, and ensemble modeling in which Genetic Algorithm-based Weight Optimization (GAWO) and Grid Search-based Weight Optimization (GSWO) optimize the ensemble weights for enhanced performance. Experiments were performed on the Figshare Contrast-Enhanced MRI (CE-MRI) brain tumor dataset, consisting of 3064 images. The proposed approach demonstrated exceptional performance, achieving classification accuracies of 99.57% with Xception, 99.48% with ResNet50V2, 99.33% with ResNet152V2, 99.39% with InceptionResNetV2, 99.78% with GAWO, and 99.84% with GSWO. GSWO achieved the highest average accuracy, 99.84%, across five-fold cross-validation among all evaluated models. The comparative analysis highlights the superiority of the proposed model over state-of-the-art (SOTA) works, showcasing its potential to assist neurologists and clinicians in making precise and timely diagnostic decisions. The study concludes that the optimized deep ensemble model is a robust and reliable tool for brain tumor classification.
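The grid-search weight optimization step can be sketched as an exhaustive search over convex combinations of per-model class probabilities. The function below is a minimal illustration, not the authors' code; the function name, the grid step, and the use of accuracy as the search objective are assumptions:

```python
import itertools
import numpy as np

def grid_search_ensemble_weights(probs, labels, step=0.25):
    """Exhaustive grid search over convex ensemble weights.

    probs: list of (n_samples, n_classes) probability arrays, one per model.
    labels: (n_samples,) integer class labels.
    Returns (best_weights, best_accuracy).
    """
    grid = np.arange(0.0, 1.0 + 1e-9, step)
    best_w, best_acc = None, -1.0
    for w in itertools.product(grid, repeat=len(probs)):
        if sum(w) == 0:
            continue  # skip the all-zero corner of the grid
        w = np.array(w) / sum(w)  # normalize to a convex combination
        blended = sum(wi * p for wi, p in zip(w, probs))
        acc = float(np.mean(blended.argmax(axis=1) == labels))
        if acc > best_acc:
            best_acc, best_w = acc, w
    return best_w, best_acc
```

A genetic-algorithm variant (GAWO) would replace the exhaustive loop with mutation and crossover over candidate weight vectors while keeping the same fitness function.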

Lin M, Xu F, Deng Y, Wei Y, Shi F, Xie Y, Xie C, Chen C, Song J, Shen Y, Lin Y, Ding H, Zhou Y, Lu S, Chen Y, Lan L, Zhao W, Zhu J, Kuang Z, Pang W, Que S, Fang X, Ji R, Dong C, Zhang J, Liu Q, Zhang Z, Gao C, Chen L, Song Y, Zhan L, Huang L, Wu X, Wang R, Song Z

PubMed · Oct 9, 2025
Host responses during ARDS are highly heterogeneous, contributing to inconsistent therapeutic outcomes. Proteome-based phenotyping may identify biologically and clinically distinct phenotypes to guide precision therapy. In this multicenter cohort study, we used latent class analysis (LCA) of targeted serum proteomics to identify ARDS phenotypes. Serum samples were collected within 72 h of diagnosis to capture early-phase profiles. Validation was conducted in external cohorts. Pathway enrichment assessed molecular heterogeneity. Lung CT scans were analyzed using machine learning-based radiomics to explore phenotypic distinctions. Heterogeneous treatment effects (HTEs) for glucocorticoids and ventilation strategies were evaluated using inverse probability of treatment weighting (IPTW)-adjusted Cox regression. A multinomial XGBoost model was developed to classify phenotypes. Among 1048 patients, three inflammatory phenotypes (C1, C2, C3) were identified and validated in two independent cohorts. Phenotype C1, with a larger proportion of poorly/non-inflated lung compartments, had the highest 90-day mortality, the highest shock incidence, and the fewest ventilator-free days, followed by C3, while C2 patients had the best outcomes (p < 0.001). Phenotype C1 was characterized by intense innate immune activation, cytokine amplification, and metabolic reprogramming. Phenotype C2 demonstrated immune suppression, enhanced tissue repair, and restoration of anti-inflammatory metabolism. Phenotype C3, comprising the oldest patients, reflected an intermediate state with moderate immune activation and partial immune resolution. Glucocorticoid therapy and higher positive end-expiratory pressure (PEEP) ventilation improved 90-day outcomes in C1 but increased mortality in C2 patients (P_interaction < 0.05). Finally, a 12-biomarker classifier could accurately distinguish the phenotypes.
We identified and validated three proteome-based ARDS phenotypes with distinct clinical, radiographic, and molecular profiles. Their differential treatment responses highlight the potential of biomarker-driven strategies for ARDS precision medicine.
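Latent class analysis assigns patients to unobserved phenotype classes by fitting a mixture model to their measured profiles. As a deliberately simplified stand-in (the study's LCA operates on multivariate targeted-proteomics data, not a single marker), the sketch below fits a two-component 1-D Gaussian mixture by expectation-maximization:

```python
import numpy as np

def two_class_em(x, n_iter=50):
    """Fit a two-component 1-D Gaussian mixture by EM and return
    (hard class assignments, component means)."""
    mu = np.array([x.min(), x.max()], dtype=float)   # spread the initial means
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation
        d = np.stack([pi[k] / sigma[k]
                      * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                      for k in range(2)])
        r = d / d.sum(axis=0)
        # M-step: update mixing weights, means, and standard deviations
        nk = r.sum(axis=1)
        pi = nk / nk.sum()
        mu = (r * x).sum(axis=1) / nk
        sigma = np.sqrt((r * (x - mu[:, None]) ** 2).sum(axis=1) / nk) + 1e-6
    return r.argmax(axis=0), mu
```

Real LCA for phenotyping generalizes this to many markers at once and selects the number of classes by fit criteria such as BIC.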

Hoshika M, Kayano S, Akagi N, Inoue T, Funama Y

PubMed · Oct 9, 2025
In ultra-high-resolution CT (U-HRCT), longer gantry rotation times are sometimes used to maintain image quality when using a small focal spot. This study aimed to evaluate the impact of gantry rotation time on image quality for deep learning reconstruction (DLR), model-based iterative reconstruction (MBIR), and filtered back projection (FBP). A phantom was scanned on a U-HRCT scanner at four dose levels and four gantry rotation times, with images reconstructed using DLR, MBIR, and FBP algorithms. Image quality was evaluated for noise characteristics and high-contrast resolution. Noise was characterized using the noise power spectrum (NPS) to compute the noise magnitude ratio and central frequency ratio for MBIR and DLR relative to FBP, while high-contrast resolution was determined from the profile curve. MBIR and FBP demonstrated consistent image quality across all rotation times, with no statistically significant differences observed. In contrast, DLR showed significantly lower high-contrast resolution at a 1.0 s rotation time compared to 0.5-0.75 s (p < 0.05). At 1.0 s, DLR also exhibited an unfavorable shift of the NPS toward lower frequencies, indicating degraded noise texture. While DLR delivers superior image quality at gantry rotation times of 0.5-0.75 s, it exhibits a loss of resolution and altered noise texture at 1.0 s. This degradation is likely attributable to the algorithm's limitations when processing data distributions that were underrepresented in its training set. Therefore, to optimize diagnostic performance, scan parameters must be carefully tailored to the specific reconstruction algorithm.
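The noise magnitude and central frequency in such studies derive from the noise power spectrum. A minimal sketch of a radially averaged 2-D NPS and its mean (central) frequency, assuming a square zero-mean noise ROI and unit pixel size, might look like:

```python
import numpy as np

def noise_power_spectrum(noise_roi, pixel_size=1.0):
    """Radially averaged 2-D noise power spectrum (NPS) of a square noise
    ROI, plus the NPS central (mean) frequency describing noise texture."""
    n = noise_roi.shape[0]
    roi = noise_roi - noise_roi.mean()                 # remove the DC term
    nps2d = np.abs(np.fft.fftshift(np.fft.fft2(roi))) ** 2 * pixel_size ** 2 / (n * n)
    fx = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_size))
    gx, gy = np.meshgrid(fx, fx)
    fr = np.hypot(gx, gy).ravel()                      # radial frequency
    bins = np.linspace(0.0, fr.max(), n // 2 + 1)
    idx = np.digitize(fr, bins[1:-1])                  # annulus index 0..n//2-1
    counts = np.bincount(idx, minlength=n // 2)
    sums = np.bincount(idx, weights=nps2d.ravel(), minlength=n // 2)
    nps1d = sums / np.maximum(counts, 1)
    f1d = 0.5 * (bins[:-1] + bins[1:])
    # a shift of f_central toward zero means blotchier, lower-frequency noise
    f_central = (f1d * nps1d).sum() / nps1d.sum()
    return f1d, nps1d, f_central
```

The "unfavorable shift of the NPS toward lower frequencies" reported for DLR at 1.0 s corresponds to a drop in this central frequency relative to FBP.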

Lyu J, Ning L, Consagra W, Liu Q, Rushmore RJ, Bilgic B, Rathi Y

PubMed · Oct 9, 2025
High-resolution whole-brain in vivo MR imaging at mesoscale resolutions remains challenging due to long scan durations, motion artifacts, and limited signal-to-noise ratio (SNR). While acquiring multiple anisotropic scans from rotated slice orientations offers a practical compromise, reconstructing accurate isotropic volumes from such inputs remains non-trivial due to the lack of high-resolution ground truth and the presence of inter-scan motion. To address these challenges, we propose Rotating-view super-resolution (ROVER)-MRI, an unsupervised framework based on multi-scale implicit neural representations (INRs), enabling accurate recovery of fine anatomical details from multi-view thick-slice acquisitions. ROVER-MRI employs coordinate-based neural networks to implicitly and continuously encode image structures at multiple spatial scales, simultaneously modeling anatomical continuity and correcting inter-view motion through an integrated registration mechanism. Validation on ex-vivo monkey brain data and multiple in-vivo human datasets demonstrates substantially improved reconstruction performance compared to bi-cubic interpolation and state-of-the-art regularized least-squares super-resolution reconstruction (LS-SRR), with a 2-fold reduction in scan time. Notably, ROVER-MRI enables whole-brain in-vivo T2-weighted imaging at 180 μm isotropic resolution in just 17 min on a 7T scanner, achieving a 22.4% reduction in relative error compared to LS-SRR. We also demonstrate improved SNR using ROVER-MRI compared to a time-matched 3D GRE acquisition. Quantitative results on several datasets demonstrate better sharpness of the reconstructed images with ROVER-MRI across super-resolution factors from 5 to 11. These findings highlight ROVER-MRI's potential as a rapid, accurate, and motion-resilient mesoscale imaging solution, promising substantial advantages for neuroimaging studies.
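Coordinate-based implicit representations encode an image as a continuous function of spatial position. As a toy illustration of that idea (far simpler than ROVER-MRI's multi-scale INR, and only 1-D), the sketch below reconstructs a continuous signal from scattered samples using octave-spaced Fourier features and linear least squares; all names and the target signal are illustrative:

```python
import numpy as np

def fourier_features(coords, n_freq=8):
    """Multi-scale sinusoidal encoding of continuous 1-D coordinates."""
    freqs = 2.0 ** np.arange(n_freq)               # octave-spaced frequencies
    angles = coords[:, None] * freqs[None, :] * np.pi
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

# Fit a continuous signal from sparse, irregular samples (the INR analogy:
# the representation can then be queried at any coordinate, i.e. any
# resolution, not just the sampled grid).
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 200)
y_train = np.sin(4 * np.pi * x_train) + 0.5 * np.cos(2 * np.pi * x_train)
w, *_ = np.linalg.lstsq(fourier_features(x_train), y_train, rcond=None)

x_test = np.linspace(0.0, 1.0, 50)                 # arbitrary query grid
y_pred = fourier_features(x_test) @ w
```

A real INR replaces the linear solve with a trained coordinate MLP and extends the encoding to 3-D positions, which is what lets multi-view thick-slice data be fused into one continuous volume.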

Dai C, Huang B, Yu Z, Xu J, Li J, Yang J

PubMed · Oct 9, 2025
The need for prediction of overall survival (OS) in patients with lung adenocarcinoma (LUAD) has been increasingly recognized. We aimed to generate a computed tomography-derived radiomic signature for predicting prognosis in LUAD patients, and then explored the relationship between radiomic features and tumor heterogeneity and microenvironment. Data from 306 eligible LUAD patients at three institutions were obtained between January 2019 and January 2024. The mainstream Residual Network 50 (ResNet50) was used to develop an image-based deep learning radiomic signature (DLRS). We developed a clinical model and calculated the conventional radiomics score using the pyradiomics package. An external cohort from a public database, The Cancer Imaging Archive, was obtained for further validation. We used time-dependent receiver operating characteristic (ROC) curves to assess the performance of the models. We divided the whole dataset into high- and low-score groups using the DLRS. Differences in tumor heterogeneity and microenvironment between the score groups were investigated using sequencing data from the corresponding LUAD cohort of The Cancer Genome Atlas. In the test cohort, the DLRS outperformed the conventional radiomics score and clinical model, with areas under the curve (95% CI) for 1-, 3-, and 5-year OS of 0.912 (0.881-0.952), 0.851 (0.824-0.901), and 0.841 (0.807-0.878), respectively. Significant differences in survival time were observed between the groups stratified by this signature. The signature showed good discrimination, calibration, and clinical utility (all p < 0.05). Distinct gene expression patterns were identified, and tumor heterogeneity and microenvironment varied significantly between the score groups. The DLRS could effectively predict the prognosis of LUAD patients by reflecting tumor heterogeneity and the microenvironment.
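Comparing survival between signature-defined score groups is conventionally done with Kaplan-Meier curves. A minimal Kaplan-Meier estimator (an illustrative sketch, not the study's analysis code) can be written as:

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve for right-censored data.

    times:  event or censoring time per patient.
    events: 1 if the event (death) was observed, 0 if censored.
    Returns a list of (event_time, survival_probability) steps.
    """
    order = np.argsort(times)
    t = np.asarray(times, dtype=float)[order]
    e = np.asarray(events)[order]
    curve, s = [], 1.0
    for ti in np.unique(t[e == 1]):
        at_risk = np.sum(t >= ti)                # patients still under observation
        d = np.sum((t == ti) & (e == 1))         # deaths at this time
        s *= 1.0 - d / at_risk
        curve.append((float(ti), s))
    return curve
```

In the stratified analysis above, one such curve per score group would be compared, typically with a log-rank test, to confirm the "significant differences in survival time".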

Rosell AC, Janssen N, Maselli A, Pereda E, Huertas-Company M, Kitaura FS

PubMed · Oct 9, 2025
Inferring chronological age from magnetic resonance imaging (MRI) brain data has become a valuable tool for the early detection of neurodegenerative diseases. We present a method inspired by cosmological techniques for analyzing galaxy surveys, utilizing higher-order summary statistics with multivariate two- and three-point analyses in 3D Fourier space. This method offers physiological interpretability during inference, allowing the detection of scales where brain anatomy differs across age groups and providing insights into brain aging processes. Like cosmic structures, brain structure evolves naturally but displays contrasting behaviors at different scales. On larger scales, structure loss occurs with age, possibly due to ventricular expansion, while smaller scales show increased structure, likely related to decreased cortical thickness and gray/white matter volume. Using MRI data from the OASIS-3 database comprising 869 sessions, our method predicts chronological age with a Mean Absolute Error (MAE) of 3.1 years while providing information as a function of scale. Posterior density estimation shows that the 1-σ uncertainty for each individual varies between ∼2 and 8 years, suggesting that, beyond sample variance, complex genetic or lifestyle-related factors may influence brain aging. We perform a twofold validation of the method. First, we apply it to the Cam-CAN dataset, yielding a MAE of ∼5.9 years for the age range 18 to 88 years. Second, we apply it to thousands of simulated MRI images generated with a state-of-the-art latent diffusion model. This work demonstrates the utility of interdisciplinary research, bridging cosmological methods and neuroscience.
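The two-point statistic in 3D Fourier space is the spherically averaged power spectrum, the same summary used for galaxy surveys. A minimal sketch, assuming a cubic volume and isotropic binning (the paper's multivariate two- and three-point pipeline is considerably richer), is:

```python
import numpy as np

def power_spectrum_3d(volume, n_bins=16):
    """Spherically averaged power spectrum (a two-point summary statistic)
    of a cubic 3-D volume, binned by radial wavenumber."""
    f = np.fft.fftn(volume - volume.mean())
    p3d = (np.abs(f) ** 2 / volume.size).ravel()     # per-mode power
    k = np.fft.fftfreq(volume.shape[0])
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kr = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2).ravel()
    bins = np.linspace(0.0, kr.max(), n_bins + 1)
    idx = np.digitize(kr, bins[1:-1])                # spherical-shell index
    counts = np.bincount(idx, minlength=n_bins)
    sums = np.bincount(idx, weights=p3d, minlength=n_bins)
    k_mid = 0.5 * (bins[:-1] + bins[1:])
    return k_mid, sums / np.maximum(counts, 1)
```

Scale-dependent age effects of the kind described above would appear as systematic shifts of this spectrum at low k (large scales) versus high k (small scales) across age groups.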

Thai-Hoang Pham, Jiayuan Chen, Seungyeon Lee, Yuanlong Wang, Sayoko Moroi, Xueru Zhang, Ping Zhang

arXiv preprint · Oct 9, 2025
As machine learning (ML) algorithms are increasingly used in medical image analysis, concerns have emerged about their potential biases against certain social groups. Although many approaches have been proposed to ensure the fairness of ML models, most existing works focus only on medical image diagnosis tasks, such as image classification and segmentation, and overlook prognosis scenarios, which involve predicting the likely outcome or progression of a medical condition over time. To address this gap, we introduce FairTTE, the first comprehensive framework for assessing fairness in time-to-event (TTE) prediction in medical imaging. FairTTE encompasses a diverse range of imaging modalities and TTE outcomes, integrating cutting-edge TTE prediction and fairness algorithms to enable systematic and fine-grained analysis of fairness in medical image prognosis. Leveraging causal analysis techniques, FairTTE uncovers and quantifies distinct sources of bias embedded within medical imaging datasets. Our large-scale evaluation reveals that bias is pervasive across different imaging modalities and that current fairness methods offer limited mitigation. We further demonstrate a strong association between underlying bias sources and model disparities, emphasizing the need for holistic approaches that target all forms of bias. Notably, we find that fairness becomes increasingly difficult to maintain under distribution shifts, underscoring the limitations of existing solutions and the pressing need for more robust, equitable prognostic models.
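One simple way to quantify group disparity in time-to-event prediction, in the spirit of FairTTE's evaluation though not necessarily its actual metric set, is the gap in Harrell's concordance index between patient groups:

```python
import numpy as np

def c_index(times, events, risk):
    """Harrell's concordance index for right-censored data (naive O(n^2))."""
    conc = total = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # comparable pair: subject i had an observed event before time j
            if events[i] == 1 and times[i] < times[j]:
                total += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    conc += 0.5
    return conc / total

def tte_fairness_gap(times, events, risk, groups):
    """Absolute C-index gap between two patient groups: one simple probe
    of performance disparity in time-to-event prediction."""
    g0, g1 = np.unique(groups)
    c0 = c_index(times[groups == g0], events[groups == g0], risk[groups == g0])
    c1 = c_index(times[groups == g1], events[groups == g1], risk[groups == g1])
    return abs(c0 - c1)
```

A gap near zero means the model ranks risk equally well in both groups; a large gap is the kind of disparity a fairness audit would flag.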

Nicolas Ewen, Jairo Diaz-Rodriguez, Kelly Ramsay

arXiv preprint · Oct 9, 2025
Traditional transfer learning typically reuses large pre-trained networks by freezing some of their weights and adding task-specific layers. While this approach is computationally efficient, it limits the model's ability to adapt to domain-specific features and can still lead to overfitting with very limited data. To address these limitations, we propose Structured Output Regularization (SOR), a simple yet effective framework that freezes the internal network structures (e.g., convolutional filters) while applying a combination of group lasso and $L_1$ penalties. This framework tailors the model to the target data with minimal additional parameters and is easily applicable to various network components, such as convolutional filters or entire blocks, enabling broad applicability across transfer learning tasks. We evaluate SOR on three few-shot medical imaging classification tasks and achieve competitive results using DenseNet121 and EfficientNetB4 bases compared to established benchmarks.
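Group lasso and elementwise $L_1$ penalties of the kind SOR combines are typically optimized with proximal (soft-thresholding) steps. The sketch below shows both operators (an illustration, not the authors' implementation); the group operator is what can zero out an entire frozen structure's output contribution at once, while the $L_1$ operator sparsifies individual coefficients:

```python
import numpy as np

def group_soft_threshold(w, lam):
    """Proximal step for the group-lasso penalty lam * ||w||_2: shrinks the
    whole group toward zero, and zeros it out entirely when ||w||_2 <= lam."""
    norm = np.linalg.norm(w)
    if norm <= lam:
        return np.zeros_like(w)
    return (1.0 - lam / norm) * w

def soft_threshold(w, lam):
    """Proximal step for the elementwise L1 penalty lam * ||w||_1."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)
```

In a proximal-gradient loop, each gradient step on the data loss would be followed by applying these operators to the small set of trainable scaling parameters, so whole filters drop out when they do not help the target task.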

Hadji, M., Moradi, E., Tohka, J.

medRxiv preprint · Oct 9, 2025
Neuron loss is a key feature of neurodegenerative diseases, often leading to brain atrophy detectable through magnetic resonance imaging (MRI). Various brain atrophy measures are essential in research on Alzheimer's disease (AD) and related dementias. This study aims to forecast future annual percentage changes in hippocampal, ventricular, and total gray matter (TGM) volumes in individuals with varying cognitive statuses, from healthy to dementia. We developed a machine learning model using elastic net linear regression and tested two approaches: (1) a baseline model using predictors from a single time point and (2) a longitudinal model using predictors derived from longitudinal MRI. Both approaches were evaluated with MRI-only models and with models that combined MRI with additional risk factors (age, sex, APOE4, and baseline diagnosis). Cross-validated Pearson correlations between predicted and actual annual percentage changes were 0.62 for the hippocampus, 0.51 for the ventricles, and 0.41 for TGM using the longitudinal MRI + risk factor model. Longitudinal models consistently outperformed baseline models, and models including risk factors outperformed MRI-only models. Validation on an external dataset confirmed these findings, highlighting the value of predictors derived from longitudinal data. We further studied the value of the predicted atrophy/enlargement rates for predicting clinical status progression across three different datasets. Predicted atrophy was a consistently better indicator of progression to mild cognitive impairment and dementia than present-day regional volumes, with the longitudinal atrophy prediction model typically outperforming the baseline model in clinical status prediction. Future atrophy prediction has significant potential for assessing the risk of cognitive decline, even in cognitively unimpaired individuals, and can aid in selecting participants for clinical trials of disease-modifying drugs for AD.
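Elastic net linear regression, the model class used above, combines an $L_1$ (sparsity) and an $L_2$ (ridge) penalty and is usually fit by coordinate descent. A minimal numpy sketch of that solver (the study presumably used a standard library implementation; the parameterization below follows the common (1/2n) least-squares convention) is:

```python
import numpy as np

def elastic_net_cd(X, y, alpha=0.1, l1_ratio=0.5, n_iter=200):
    """Elastic-net linear regression via cyclic coordinate descent.

    Minimizes (1/2n)||y - Xw||^2
              + alpha * (l1_ratio * ||w||_1 + (1 - l1_ratio)/2 * ||w||^2).
    """
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with feature j's contribution removed
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n
            # soft-threshold for the L1 part, ridge shrinkage for the L2 part
            w[j] = (np.sign(rho) * max(abs(rho) - alpha * l1_ratio, 0.0)
                    / (z + alpha * (1.0 - l1_ratio)))
    return w
```

With many correlated MRI-derived predictors, the $L_1$ part selects a sparse subset while the $L_2$ part keeps groups of correlated features from being arbitrarily dropped, which is why elastic net is a common choice for this kind of imaging regression.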

Hoelscher, D. L., Schmitz, N. E., Niggemeier, L., Pilva, P., Strauch, M., Tesar, V., Barratt, J., Roberts, I. S., Coppo, R., the VALIGA investigators, Barisoni, L., the CureGN investigators, Yanagita, M., Alabalik, U., Rule, A. D., Jagtap, J. M., Abreu, E. S., Taal, M. W., Kalra, P. A., the NURTuRE academic steering group, Floege, J., Kramann, R., Boor, P., the AI4IgAN study, Buelow, R. D.

medRxiv preprint · Oct 9, 2025
IgA nephropathy (IgAN) is a leading cause of kidney failure with diverse clinical presentations and treatment responses, particularly to corticosteroids, for which the evidence is conflicting. Given potentially severe side effects and heterogeneous responses, more individualized approaches are needed. We developed a causal machine learning framework for predicting individualized treatment effects of corticosteroids in IgAN by integrating clinical variables, histopathological scores, and deep learning-based biomarkers from digitized kidney biopsies (pathomics) of 1,022 patients from eight retrospective international cohorts. At the cohort level, corticosteroids showed no significant effect on five-year kidney survival. However, the framework identified subpopulations with and without significant treatment benefit, improving progression-free kidney survival while reducing overtreatment in low-benefit patients. Pathomics highlighted tubulointerstitial inflammation and glomerular tuft deformation as predictors of corticosteroid response. Our framework offers a blueprint for precision therapy in IgAN, supporting clinical decision-making in the era of emerging targeted treatments.
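Individualized treatment-effect estimation is often illustrated with a T-learner, which fits a separate outcome model per treatment arm and differences their predictions per patient. The sketch below uses plain linear regression as a stand-in for the study's causal ML framework; the function name and linear outcome models are assumptions for illustration only:

```python
import numpy as np

def t_learner_ite(x, y, treated):
    """T-learner sketch: fit one outcome model per treatment arm and
    return the per-patient difference of predictions (the estimated ITE)."""
    def fit(a, b):
        # ordinary least squares with an intercept
        design = np.column_stack([np.ones(len(a)), a])
        coef, *_ = np.linalg.lstsq(design, b, rcond=None)
        return lambda z: np.column_stack([np.ones(len(z)), z]) @ coef
    f_treated = fit(x[treated == 1], y[treated == 1])
    f_control = fit(x[treated == 0], y[treated == 0])
    return f_treated(x) - f_control(x)
```

Ranking patients by the estimated ITE is what allows a framework like this to separate subpopulations that benefit from corticosteroids from those facing only side-effect risk.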