
Predicting Risk of Pulmonary Fibrosis Formation in PASC Patients

Wanying Dou, Gorkem Durak, Koushik Biswas, Ziliang Hong, Andrea Mia Bejar, Elif Keles, Kaan Akin, Sukru Mehmet Erturk, Alpay Medetalibeyoglu, Marc Sala, Alexander Misharin, Hatice Savas, Mary Salvatore, Sachin Jambawalikar, Drew Torigian, Jayaram K. Udupa, Ulas Bagci

arXiv preprint · May 15, 2025
While the acute phase of the COVID-19 pandemic has subsided, its long-term effects persist through Post-Acute Sequelae of COVID-19 (PASC), commonly known as Long COVID. There remains substantial uncertainty regarding both its duration and optimal management strategies. PASC manifests as a diverse array of persistent or newly emerging symptoms--ranging from fatigue, dyspnea, and neurologic impairments (e.g., brain fog), to cardiovascular, pulmonary, and musculoskeletal abnormalities--that extend beyond the acute infection phase. This heterogeneous presentation poses substantial challenges for clinical assessment, diagnosis, and treatment planning. In this paper, we focus on imaging findings that may suggest fibrotic damage in the lungs, a critical manifestation characterized by scarring of lung tissue, which can potentially affect long-term respiratory function in patients with PASC. This study introduces a novel multi-center chest CT analysis framework that combines deep learning and radiomics for fibrosis prediction. Our approach leverages convolutional neural networks (CNNs) and interpretable feature extraction, achieving 82.2% accuracy and 85.5% AUC in classification tasks. We demonstrate the effectiveness of Grad-CAM visualization and radiomics-based feature analysis in providing clinically relevant insights for PASC-related lung fibrosis prediction. Our findings highlight the potential of deep learning-driven computational methods for early detection and risk assessment of PASC-related lung fibrosis--presented for the first time in the literature.
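Grad-CAM, used above for visualization, weights each convolutional feature map by the spatial mean of its gradient and passes the weighted sum through a ReLU. A minimal NumPy sketch on synthetic activations and gradients (illustrative only; the authors' network and tensor shapes are not specified here):

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM heatmap from last-layer feature maps.

    activations: (C, H, W) feature maps; gradients: (C, H, W) gradients of
    the target class score with respect to those maps.
    """
    weights = gradients.mean(axis=(1, 2))                  # one weight per channel
    cam = (weights[:, None, None] * activations).sum(axis=0)
    cam = np.maximum(cam, 0.0)                             # ReLU keeps positive evidence
    return cam / cam.max() if cam.max() > 0 else cam       # normalize to [0, 1]

rng = np.random.default_rng(0)
acts = rng.random((8, 16, 16))            # synthetic activations
grads = rng.standard_normal((8, 16, 16))  # synthetic gradients
heatmap = grad_cam(acts, grads)           # (16, 16), values in [0, 1]
```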

Advancing Multiple Instance Learning with Continual Learning for Whole Slide Imaging

Xianrui Li, Yufei Cui, Jun Li, Antoni B. Chan

arXiv preprint · May 15, 2025
Advances in medical imaging and deep learning have propelled progress in whole slide image (WSI) analysis, with multiple instance learning (MIL) showing promise for efficient and accurate diagnostics. However, conventional MIL models often lack adaptability to evolving datasets, as they rely on static training that cannot incorporate new information without extensive retraining. Applying continual learning (CL) to MIL models is a possible solution, but often yields limited improvements. In this paper, we analyze CL in the context of attention-based MIL models and find that forgetting is mainly concentrated in the attention layers of the MIL model. Building on this analysis, we propose two components for improving CL on MIL: Attention Knowledge Distillation (AKD) and the Pseudo-Bag Memory Pool (PMP). AKD mitigates catastrophic forgetting by retaining attention-layer knowledge between learning sessions, while PMP reduces the memory footprint by selectively storing only the most informative patches, or "pseudo-bags," from WSIs. Experimental evaluations demonstrate that our method significantly improves both accuracy and memory efficiency on diverse WSI datasets, outperforming current state-of-the-art CL methods. This work provides a foundation for CL in large-scale, weakly annotated clinical datasets, paving the way for more adaptable and resilient diagnostic models.
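The abstract does not spell out the AKD loss; one common choice, shown here purely as an assumption, is a mean-squared error between the attention distributions a bag's instances receive from the previous-session (teacher) and current (student) models:

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_distillation_loss(student_logits, teacher_logits) -> float:
    """MSE between the attention distributions over a bag's instances
    produced by the current (student) and previous-session (teacher) models."""
    s = softmax(np.asarray(student_logits))
    t = softmax(np.asarray(teacher_logits))
    return float(((s - t) ** 2).mean())

rng = np.random.default_rng(0)
teacher = rng.standard_normal(32)                  # attention logits, 32-instance bag
student = teacher + 0.1 * rng.standard_normal(32)  # mildly drifted student
loss = attention_distillation_loss(student, teacher)
```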

Machine learning prediction prior to onset of mild cognitive impairment using T1-weighted magnetic resonance imaging radiomic of the hippocampus.

Zhan S, Wang J, Dong J, Ji X, Huang L, Zhang Q, Xu D, Peng L, Wang X, Zhang Y, Liang S, Chen L

PubMed · May 15, 2025
Early identification of individuals who progress from normal cognition (NC) to mild cognitive impairment (MCI) may help prevent cognitive decline. We aimed to build predictive models using radiomic features of the bilateral hippocampus in combination with scores from neuropsychological assessments. We utilized the Alzheimer's Disease Neuroimaging Initiative (ADNI) database to study 175 NC individuals, identifying 50 who progressed to MCI within seven years. Employing the Least Absolute Shrinkage and Selection Operator (LASSO) on T1-weighted images, we extracted and refined hippocampal features. Classification models, including logistic regression (LR), support vector machine (SVM), random forest (RF), and Light Gradient Boosting Machine (LightGBM), were built based on significant neuropsychological scores. Model validation was conducted using 5-fold cross-validation, and hyperparameters were optimized with Scikit-learn, using an 80:20 data split for training and testing. The LightGBM model achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 0.89 and an accuracy of 0.79 on the training set, and an AUC of 0.80 and an accuracy of 0.74 on the test set. These findings suggest that T1-weighted MRI radiomics of the hippocampus can predict progression to MCI at the cognitively normal stage, offering new insight for clinical research.
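The validation protocol above (an 80:20 train/test split with 5-fold cross-validation on the training portion) can be sketched with index bookkeeping alone; the function names are illustrative, not from the study:

```python
import numpy as np

def train_test_split_indices(n: int, test_frac: float = 0.2, seed: int = 0):
    """Shuffle 0..n-1 and hold out the last fraction as the test set."""
    idx = np.random.default_rng(seed).permutation(n)
    cut = int(n * (1 - test_frac))
    return idx[:cut], idx[cut:]

def kfold_indices(indices: np.ndarray, k: int = 5):
    """Yield (train, validation) index pairs for k-fold cross-validation."""
    folds = np.array_split(indices, k)
    for i in range(k):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, folds[i]

# 175 NC individuals, as in the study: 140 for training/CV, 35 held out
train_idx, test_idx = train_test_split_indices(175)
folds = list(kfold_indices(train_idx, k=5))
```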

External Validation of a CT-Based Radiogenomics Model for the Detection of EGFR Mutation in NSCLC and the Impact of Prevalence in Model Building by Using Synthetic Minority Over Sampling (SMOTE): Lessons Learned.

Kohan AA, Mirshahvalad SA, Hinzpeter R, Kulanthaivelu R, Avery L, Ortega C, Metser U, Hope A, Veit-Haibach P

PubMed · May 15, 2025
Radiogenomics holds promise for identifying molecular alterations in non-small cell lung cancer (NSCLC) using imaging features. Previously, we developed a radiogenomics model to predict epidermal growth factor receptor (EGFR) mutations based on contrast-enhanced computed tomography (CECT) in NSCLC patients. The current study aimed to externally validate this model using a publicly available National Institutes of Health (NIH)-based NSCLC dataset and to assess the effect of EGFR mutation prevalence on model performance through the synthetic minority oversampling technique (SMOTE). The original radiogenomics model was validated on an independent NIH cohort (n=140). To assess the influence of disease prevalence, six SMOTE-augmented datasets were created, simulating EGFR mutation prevalence from 25% to 50%. Seven models were developed (one from the original data, six SMOTE-augmented), each undergoing rigorous cross-validation, feature selection, and logistic regression modeling. Models were tested against the NIH cohort. Performance was compared using the area under the receiver operating characteristic curve (AUC), and differences between radiomic-only, clinical-only, and combined models were statistically assessed. External validation revealed poor diagnostic performance for both our model and a previously published EGFR radiomics model (AUC ∼0.5). The clinical model alone achieved higher diagnostic accuracy (AUC 0.74). SMOTE-augmented models showed increased sensitivity but did not improve overall AUC compared to the clinical-only model. Changing EGFR mutation prevalence had minimal impact on AUC, challenging previous assumptions about the influence of sample imbalance on model performance. External validation failed to reproduce prior radiogenomics model performance, while clinical variables alone retained strong predictive value. SMOTE-based oversampling did not improve diagnostic accuracy, suggesting that, for EGFR prediction, radiomics may offer limited value beyond clinical data. Emphasis on robust external validation and data sharing is essential for future clinical implementation of radiogenomic models.
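SMOTE, as used in the study above, synthesizes minority-class samples by interpolating between a minority sample and one of its k nearest minority neighbors. A compact NumPy sketch (the feature values and class sizes here are synthetic, not the study's data):

```python
import numpy as np

def smote(minority: np.ndarray, n_new: int, k: int = 5, seed: int = 0) -> np.ndarray:
    """Create n_new synthetic samples by linear interpolation between a
    random minority sample and one of its k nearest minority neighbors."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(minority[:, None] - minority[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                      # a point is not its own neighbor
    neighbors = np.argsort(d, axis=1)[:, :k]         # k nearest per sample
    base = rng.integers(0, len(minority), n_new)     # random anchors
    nb = neighbors[base, rng.integers(0, k, n_new)]  # random neighbor per anchor
    lam = rng.random((n_new, 1))                     # interpolation weight in [0, 1)
    return minority[base] + lam * (minority[nb] - minority[base])

rng = np.random.default_rng(1)
egfr_pos = rng.standard_normal((20, 4))  # e.g. 20 mutant cases, 4 radiomic features
synthetic = smote(egfr_pos, n_new=30)    # extra samples to raise prevalence
```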

Leveraging Vision Transformers in Multimodal Models for Retinal OCT Analysis.

Feretzakis G, Karakosta C, Gkoulalas-Divanis A, Bisoukis A, Boufeas IZ, Bazakidou E, Sakagianni A, Kalles D, Verykios VS

PubMed · May 15, 2025
Optical Coherence Tomography (OCT) has become an indispensable imaging modality in ophthalmology, providing high-resolution cross-sectional images of the retina. Accurate classification of OCT images is crucial for diagnosing retinal diseases such as Age-related Macular Degeneration (AMD) and Diabetic Macular Edema (DME). This study explores the efficacy of various deep learning models, including convolutional neural networks (CNNs) and Vision Transformers (ViTs), in classifying OCT images. We also investigate the impact of integrating metadata (patient age, sex, eye laterality, and year) into the classification process, even when a significant portion of metadata is missing. Our results demonstrate that multimodal models leveraging both image and metadata inputs, such as the Multimodal ResNet18, can achieve competitive performance compared to image-only models, such as DenseNet121. Notably, DenseNet121 and Multimodal ResNet18 achieved the highest accuracy of 95.16%, with DenseNet121 showing a slightly higher F1-score of 0.9313. The multimodal ViT-based model also demonstrated promising results, achieving an accuracy of 93.22%, indicating the potential of Vision Transformers (ViTs) in medical image analysis, especially for handling complex multimodal data.

A Deep-Learning Framework for Ovarian Cancer Subtype Classification Using Whole Slide Images.

Wang C, Yi Q, Aflakian A, Ye J, Arvanitis T, Dearn KD, Hajiyavand A

PubMed · May 15, 2025
Ovarian cancer, a leading cause of cancer-related deaths among women, comprises distinct subtypes, each requiring a different treatment approach. This paper presents a deep-learning framework for classifying ovarian cancer subtypes using Whole Slide Imaging (WSI). Our method comprises three stages: image tiling, feature extraction, and multiple-instance learning. Our approach is trained and validated on a public dataset of 80 distinct patients, achieving up to 89.8% accuracy with a notable improvement in computational efficiency. The results demonstrate the potential of our framework to augment diagnostic precision in clinical settings, offering a scalable solution for the accurate classification of ovarian cancer subtypes.
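The first of the three stages, image tiling, cuts each slide into fixed-size patches that together form the "bag" for multi-instance learning; a minimal NumPy sketch (tile size and slide dimensions are illustrative):

```python
import numpy as np

def tile_wsi(image: np.ndarray, tile: int = 256) -> np.ndarray:
    """Cut an (H, W, C) slide into non-overlapping (tile, tile, C) patches,
    discarding edge remainders; the patches form one MIL 'bag' per slide."""
    H, W, C = image.shape
    h, w = H // tile, W // tile
    return (image[: h * tile, : w * tile]
            .reshape(h, tile, w, tile, C)
            .swapaxes(1, 2)
            .reshape(h * w, tile, tile, C))

slide = np.arange(1000 * 1300 * 3).reshape(1000, 1300, 3)
bag = tile_wsi(slide)   # 3 x 5 = 15 patches of 256 x 256 x 3
```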

Does Whole Brain Radiomics on Multimodal Neuroimaging Make Sense in Neuro-Oncology? A Proof of Concept Study.

Danilov G, Kalaeva D, Vikhrova N, Shugay S, Telysheva E, Goraynov S, Kosyrkova A, Pavlova G, Pronin I, Usachev D

PubMed · May 15, 2025
Employing a whole-brain (WB) mask as a region of interest for extracting radiomic features is a feasible, albeit less common, approach in neuro-oncology research. This study aims to evaluate the relationship between WB radiomic features, derived from various neuroimaging modalities in patients with gliomas, and some key baseline characteristics of patients and tumors such as sex, histological tumor type, WHO Grade (2021), IDH1 mutation status, necrosis lesions, contrast enhancement, T/N peak value and metabolic tumor volume. Forty-one patients (average age 50 ± 15 years, 21 females and 20 males) with supratentorial glial tumors were enrolled in this study. A total of 38,720 radiomic features were extracted. Cluster analysis revealed that whole-brain images of biologically different tumors could be distinguished to a certain extent based on their imaging biomarkers. Machine learning capabilities to detect image properties like contrast-enhanced or necrotic zones validated radiomic features in objectifying image semantics. Furthermore, the predictive capability of imaging biomarkers in determining tumor histology, grade and mutation type underscores their diagnostic potential. Whole-brain radiomics using multimodal neuroimaging data appeared to be informative in neuro-oncology, making research in this area well justified.

Energy-Efficient AI for Medical Diagnostics: Performance and Sustainability Analysis of ResNet and MobileNet.

Rehman ZU, Hassan U, Islam SU, Gallos P, Boudjadar J

PubMed · May 15, 2025
Artificial intelligence (AI) has transformed medical diagnostics by enhancing the accuracy of disease detection, particularly through deep learning models that analyze medical imaging data. However, the energy demands of training these models, such as ResNet and MobileNet, are substantial yet often overlooked, as researchers mainly focus on improving model accuracy. This study compares the energy use of these two models for classifying thoracic diseases using the well-known CheXpert dataset. We calculate power and energy consumption during training using the EnergyEfficientAI library. Results demonstrate that MobileNet outperforms ResNet by consuming less power and completing training faster, resulting in lower overall energy costs. This study highlights the importance of prioritizing energy efficiency in AI model development, promoting sustainable, eco-friendly approaches to advance medical diagnosis.
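The underlying accounting is simply energy = average power × training time; a sketch with hypothetical wattages and durations (not the paper's measurements):

```python
def training_energy_kwh(avg_power_watts: float, duration_s: float) -> float:
    """Energy = average power x time, converted from joules to kWh."""
    return avg_power_watts * duration_s / 3.6e6   # 1 kWh = 3.6 MJ

# Hypothetical figures, not the paper's measurements:
resnet_kwh = training_energy_kwh(250.0, 2 * 3600)    # 250 W for 2 h -> 0.5 kWh
mobilenet_kwh = training_energy_kwh(180.0, 45 * 60)  # 180 W for 45 min -> 0.135 kWh
```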

Characterizing ASD Subtypes Using Morphological Features from sMRI with Unsupervised Learning.

Raj A, Ratnaik R, Sengar SS, Fredo ARJ

PubMed · May 15, 2025
In this study, we attempted to identify subtypes of autism spectrum disorder (ASD) using anatomical alterations found in structural magnetic resonance imaging (sMRI) data of the ASD brain together with machine learning tools. Initially, the sMRI data were preprocessed using the FreeSurfer toolbox. The brain was then segmented into 148 regions of interest using the Destrieux atlas, and features such as volume, thickness, surface area, and mean curvature were extracted for each region. We performed principal component analysis independently on the volume, thickness, surface area, and mean curvature features and identified the top 10 features. We then applied k-means clustering to these top 10 features and validated the number of clusters using the Elbow and Silhouette methods. Our study identified two clusters in the dataset, suggesting the existence of two subtypes of ASD. Features such as the volume of scaled lh_G_front middle, the thickness of scaled rh_S_temporal transverse, the area of scaled lh_S_temporal sup, and the mean curvature of scaled lh_G_precentral discriminated the two clusters with statistically significant p-values (p<0.05). Thus, our proposed method is effective for identifying ASD subtypes and may also be useful for screening other similar neurological disorders.
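The clustering step can be illustrated with a plain k-means plus an elbow curve over candidate cluster counts; the data below are synthetic two-cluster points, not the sMRI features:

```python
import numpy as np

def kmeans(X: np.ndarray, k: int, iters: int = 100):
    """Plain k-means with farthest-first initialization.
    Returns cluster labels and inertia (within-cluster sum of squares)."""
    centers = X[[0]]
    for _ in range(k - 1):  # next center: the point farthest from current centers
        d = ((X[:, None] - centers[None]) ** 2).sum(-1).min(1)
        centers = np.vstack([centers, X[np.argmax(d)]])
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.vstack([X[labels == j].mean(0) if (labels == j).any() else centers[j]
                         for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, float(((X - centers[labels]) ** 2).sum())

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (40, 10)),   # synthetic "subtype 1"
               rng.normal(3, 0.3, (40, 10))])  # synthetic "subtype 2"
elbow = {k: kmeans(X, k)[1] for k in (1, 2, 3, 4)}  # inertia drops sharply at k = 2
```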

Machine learning-based prognostic subgrouping of glioblastoma: A multicenter study.

Akbari H, Bakas S, Sako C, Fathi Kazerooni A, Villanueva-Meyer J, Garcia JA, Mamourian E, Liu F, Cao Q, Shinohara RT, Baid U, Getka A, Pati S, Singh A, Calabrese E, Chang S, Rudie J, Sotiras A, LaMontagne P, Marcus DS, Milchenko M, Nazeri A, Balana C, Capellades J, Puig J, Badve C, Barnholtz-Sloan JS, Sloan AE, Vadmal V, Waite K, Ak M, Colen RR, Park YW, Ahn SS, Chang JH, Choi YS, Lee SK, Alexander GS, Ali AS, Dicker AP, Flanders AE, Liem S, Lombardo J, Shi W, Shukla G, Griffith B, Poisson LM, Rogers LR, Kotrotsou A, Booth TC, Jain R, Lee M, Mahajan A, Chakravarti A, Palmer JD, DiCostanzo D, Fathallah-Shaykh H, Cepeda S, Santonocito OS, Di Stefano AL, Wiestler B, Melhem ER, Woodworth GF, Tiwari P, Valdes P, Matsumoto Y, Otani Y, Imoto R, Aboian M, Koizumi S, Kurozumi K, Kawakatsu T, Alexander K, Satgunaseelan L, Rulseh AM, Bagley SJ, Bilello M, Binder ZA, Brem S, Desai AS, Lustig RA, Maloney E, Prior T, Amankulor N, Nasrallah MP, O'Rourke DM, Mohan S, Davatzikos C

PubMed · May 15, 2025
Glioblastoma (GBM) is the most aggressive adult primary brain cancer, characterized by significant heterogeneity, posing challenges for patient management, treatment planning, and clinical trial stratification. We developed a highly reproducible, personalized prognostication, and clinical subgrouping system using machine learning (ML) on routine clinical data, magnetic resonance imaging (MRI), and molecular measures from 2838 demographically diverse patients across 22 institutions and 3 continents. Patients were stratified into favorable, intermediate, and poor prognostic subgroups (I, II, and III) using Kaplan-Meier analysis (Cox proportional model and hazard ratios [HR]). The ML model stratified patients into distinct prognostic subgroups with HRs between subgroups I-II and I-III of 1.62 (95% CI: 1.43-1.84, P < .001) and 3.48 (95% CI: 2.94-4.11, P < .001), respectively. Analysis of imaging features revealed several tumor properties contributing unique prognostic value, supporting the feasibility of a generalizable prognostic classification system in a diverse cohort. Our ML model demonstrates extensive reproducibility and online accessibility, utilizing routine imaging data rather than complex imaging protocols. This platform offers a unique approach to personalized patient management and clinical trial stratification in GBM.
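The Kaplan-Meier estimator behind the subgroup comparison multiplies, at each observed event time, the fraction of at-risk patients who survive; a minimal NumPy sketch on hypothetical follow-up data (ties handled naively):

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival curve: S(t) after each observed event.
    time: follow-up per patient; event: 1 = event observed, 0 = censored."""
    order = np.argsort(time, kind="stable")
    time, event = np.asarray(time)[order], np.asarray(event)[order]
    times, surv, s = [], [], 1.0
    n = len(time)
    for i, (t, e) in enumerate(zip(time, event)):
        if e:                        # event at time t with (n - i) patients at risk
            s *= 1 - 1 / (n - i)
            times.append(t)
            surv.append(s)
    return np.array(times), np.array(surv)

# Hypothetical follow-up in months for one prognostic subgroup
t, surv = kaplan_meier([5, 8, 12, 12, 20, 30], [1, 1, 1, 0, 1, 0])
```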