Page 171 of 3593587 results

Artificial intelligence-based diabetes risk prediction from longitudinal DXA bone measurements.

Khan S, Shah Z

PubMed | Jul 16, 2025
Diabetes mellitus (DM) is a serious global health concern that poses a significant threat to human life. Beyond its direct impact, diabetes substantially increases the risk of developing severe complications such as hypertension, cardiovascular disease, and musculoskeletal disorders like arthritis and osteoporosis. The field of diabetes classification has advanced significantly with the use of diverse data modalities and sophisticated tools to identify individuals or groups as diabetic. However, the task of predicting diabetes before its onset, particularly from longitudinal multi-modal data, remains relatively underexplored. To better understand the risk factors associated with diabetes development among Qatari adults, this longitudinal study investigates dual-energy X-ray absorptiometry (DXA)-derived whole-body and regional bone composition measures as potential predictors of diabetes onset. We conducted a retrospective case-control study with a total of 1,382 participants, comprising 725 male participants (cases: 146, controls: 579) and 657 female participants (cases: 133, controls: 524). Participants with incomplete data points were excluded. To handle class imbalance, we augmented our data using the Synthetic Minority Over-sampling Technique (SMOTE) and SMOTEENN (SMOTE with Edited Nearest Neighbors), and to further investigate the association between bone data features and diabetes status, we applied ANOVA. For diabetes onset prediction, we employed both conventional and deep learning (DL) models to identify risk factors associated with diabetes in Qatari adults. We used SHAP and probabilistic methods to investigate the association of the identified risk factors with diabetes. In our experimental analysis, we found that bone mineral density (BMD) and bone mineral content (BMC) in the hip, femoral neck, trochanteric area, and lumbar spine showed an upward trend in diabetic patients with [Formula: see text].
Meanwhile, we found that patients with abnormal glucose metabolism had increased Ward's area BMD and BMC with lower Z-scores compared with healthy participants. This suggests that the diabetic group in this cohort has apparently better bone health than the control group, exhibiting higher BMD, muscle mass, and bone area across most body regions. Moreover, in the age-group distribution analysis, we found that the prediction rate was higher among healthy participants in the younger age group (20-40 years). However, as the age range increased, the model's predictions became more accurate for diabetic participants, especially in the older age group (56-69 years). Male participants also demonstrated higher susceptibility to diabetes onset than female participants. Shallow models outperformed the DL models, with improved accuracy (91.08%), AUROC (96%), and recall (91%). This approach, utilizing DXA scans, shows significant potential for rapid and minimally invasive early detection of diabetes.
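The SMOTE augmentation described above synthesizes new minority-class samples by interpolating between an existing minority point and one of its nearest neighbors. A minimal sketch of that core interpolation step on toy data (the study presumably used a library implementation such as imbalanced-learn; all names and values below are illustrative):

```python
import random

def smote_sample(minority, k=2, n_new=4, seed=0):
    """Minimal sketch of SMOTE's core idea: synthesize minority-class
    points by interpolating between a sample and one of its k nearest
    neighbors. Real SMOTE implementations add configurable neighbor
    search and per-class handling."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # k nearest neighbors by squared Euclidean distance (excluding a itself)
        neighbors = sorted(
            (p for p in minority if p is not a),
            key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)),
        )[:k]
        b = rng.choice(neighbors)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(x + gap * (y - x) for x, y in zip(a, b)))
    return synthetic

cases = [(1.0, 2.0), (1.2, 1.9), (0.9, 2.2)]  # toy minority-class feature vectors
new_points = smote_sample(cases)
```

Because each synthetic point lies on a segment between two real minority points, it stays inside the minority class's convex hull, which is what makes SMOTE safer than naive duplication.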

Image quality and radiation dose of reduced-dose abdominopelvic computed tomography (CT) with silver filter and deep learning reconstruction.

Otgonbaatar C, Jeon SH, Cha SJ, Shim H, Kim JW, Ahn JH

PubMed | Jul 16, 2025
To compare image quality and radiation dose between reduced-dose abdominopelvic CT with deep learning reconstruction (DLR) using a SilverBeam filter and standard-dose CT with iterative reconstruction (IR). In total, 182 patients (mean age ± standard deviation, 63 ± 14 years; 100 men) were included. Standard-dose scanning was performed with a tube voltage of 100 kVp, automatic tube current modulation, and IR, whereas reduced-dose scanning was performed with a tube voltage of 120 kVp, a SilverBeam filter, and DLR. Additionally, a contrast-enhanced (CE)-boost image was obtained for the reduced-dose scan. Radiation dose and objective and subjective image analyses were performed for each body mass index (BMI) category. The radiation dose for SilverBeam with DLR was significantly lower than that for standard dose with IR, with an average effective-dose reduction of 59.0% (1.87 vs. 4.57 mSv). Standard dose with IR (10.59 ± 1.75) and SilverBeam with DLR (10.60 ± 1.08) showed no significant difference in image noise (p = 0.99). In the obese group (BMI > 25 kg/m<sup>2</sup>), there were no significant differences in SNRs of the liver, pancreas, and spleen between standard dose with IR and SilverBeam with DLR. SilverBeam with DLR + CE-boost demonstrated significantly better SNRs and CNRs compared with both standard dose with IR and SilverBeam with DLR alone. DLR combined with a silver filter is effective for routine abdominopelvic CT, achieving a clearly reduced radiation dose while providing image quality non-inferior to standard dose with IR.
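As a quick arithmetic check, the reported 59.0% figure follows directly from the two effective doses; the contrast-to-noise ratio (CNR) used in the objective analysis is shown alongside. Only the 1.87 and 4.57 mSv doses come from the abstract; the HU and noise values are hypothetical:

```python
def dose_reduction_pct(reduced_mSv, standard_mSv):
    """Relative effective-dose reduction, as a percentage."""
    return 100.0 * (standard_mSv - reduced_mSv) / standard_mSv

def cnr(roi_hu, background_hu, noise_sd):
    """Contrast-to-noise ratio: attenuation difference over image noise."""
    return (roi_hu - background_hu) / noise_sd

reduction = dose_reduction_pct(1.87, 4.57)  # effective doses reported in the study
liver_cnr = cnr(110.0, 40.0, 10.6)          # hypothetical HU and noise values
```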

Multi-scale machine learning model predicts muscle and functional disease progression.

Blemker SS, Riem L, DuCharme O, Pinette M, Costanzo KE, Weatherley E, Statland J, Tapscott SJ, Wang LH, Shaw DWW, Song X, Leung D, Friedman SD

PubMed | Jul 16, 2025
Facioscapulohumeral muscular dystrophy (FSHD) is a genetic neuromuscular disorder characterized by progressive muscle degeneration with substantial variability in severity and progression patterns. FSHD is a highly heterogeneous disease; however, the clinical metrics currently used for tracking disease progression lack sensitivity for personalized assessment, which greatly limits the design and execution of clinical trials. This study introduces a multi-scale machine learning framework leveraging whole-body magnetic resonance imaging (MRI) and clinical data to predict regional, muscle, joint, and functional progression in FSHD. The goal of this work is to create a 'digital twin' of individual FSHD patients that can be leveraged in clinical trials. Using a combined dataset of over 100 patients from seven studies, baseline MRI-derived metrics-including fat fraction, lean muscle volume, and fat spatial heterogeneity-were integrated with clinical and functional measures. A three-stage random forest model was developed to predict annualized changes in muscle composition and a functional outcome (timed up-and-go, TUG). All model stages showed strong predictive performance in separate holdout datasets. After training, the models predicted fat fraction change with a root mean square error (RMSE) of 2.16% and lean volume change with an RMSE of 8.1 ml in a holdout testing dataset. Feature analysis revealed that metrics of fat heterogeneity within muscle predict muscle-level progression. The stage 3 model, which combined functional muscle groups, predicted change in TUG with an RMSE of 0.6 s in the holdout testing dataset. This study demonstrates that machine learning models incorporating individual muscle and performance data can effectively predict MRI-measured disease progression and functional performance on complex tasks, addressing the heterogeneity and nonlinearity inherent in FSHD.
Further studies incorporating larger longitudinal cohorts, as well as comprehensive clinical and functional measures, will allow for expanding and refining this model. As many neuromuscular diseases are characterized by variability and heterogeneity similar to FSHD, such approaches have broad applicability.
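RMSE, the error metric quoted throughout the abstract, is straightforward to compute; a self-contained sketch with hypothetical predicted and observed annualized fat-fraction changes (toy values, not study data):

```python
def rmse(pred, true):
    """Root mean square error, the accuracy metric reported by the study."""
    return (sum((p - t) ** 2 for p, t in zip(pred, true)) / len(pred)) ** 0.5

# Hypothetical predicted vs. observed annualized fat-fraction changes (%)
predicted = [2.0, 3.5, 1.0, 4.0]
observed = [2.5, 3.0, 1.5, 5.0]
error = rmse(predicted, observed)
```

An RMSE of 2.16% for fat fraction change means the typical per-muscle prediction error is about two percentage points of fat fraction per year.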

Identifying Signatures of Image Phenotypes to Track Treatment Response in Liver Disease

Matthias Perkonigg, Nina Bastati, Ahmed Ba-Ssalamah, Peter Mesenbrink, Alexander Goehler, Miljen Martic, Xiaofei Zhou, Michael Trauner, Georg Langs

arXiv preprint | Jul 16, 2025
Quantifiable image patterns associated with disease progression and treatment response are critical tools for guiding individual treatment, and for developing novel therapies. Here, we show that unsupervised machine learning can identify a pattern vocabulary of liver tissue in magnetic resonance images that quantifies treatment response in diffuse liver disease. Deep clustering networks simultaneously encode and cluster patches of medical images into a low-dimensional latent space to establish a tissue vocabulary. The resulting tissue types capture differential tissue change and its location in the liver associated with treatment response. We demonstrate the utility of the vocabulary on a randomized controlled trial cohort of non-alcoholic steatohepatitis patients. First, we use the vocabulary to compare longitudinal liver change in a placebo and a treatment cohort. Results show that the method identifies specific liver tissue change pathways associated with treatment, and enables a better separation between treatment groups than established non-imaging measures. Moreover, we show that the vocabulary can predict biopsy derived features from non-invasive imaging data. We validate the method on a separate replication cohort to demonstrate the applicability of the proposed method.
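The "tissue vocabulary" idea reduces each image to frequencies of learned tissue types: patches are assigned to their nearest cluster centroid, and the per-image histogram becomes the signature compared across timepoints. A toy sketch of those two steps with hand-picked centroids (the paper learns embeddings and centroids jointly with deep clustering networks; everything below is illustrative):

```python
def assign_to_vocabulary(patch_embeddings, centroids):
    """Assign each patch embedding to its nearest centroid (tissue type)
    by squared Euclidean distance."""
    labels = []
    for e in patch_embeddings:
        dists = [sum((x - c) ** 2 for x, c in zip(e, cen)) for cen in centroids]
        labels.append(dists.index(min(dists)))
    return labels

def tissue_histogram(labels, n_types):
    """Per-image tissue-type frequencies: the signature compared across
    timepoints to quantify treatment response."""
    counts = [0] * n_types
    for label in labels:
        counts[label] += 1
    return [c / len(labels) for c in counts]

centroids = [(0.0, 0.0), (1.0, 1.0)]  # toy 2-type tissue vocabulary
patches = [(0.1, 0.2), (0.9, 1.1), (0.2, 0.1), (1.2, 0.8)]
labels = assign_to_vocabulary(patches, centroids)
hist = tissue_histogram(labels, len(centroids))
```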

Generate to Ground: Multimodal Text Conditioning Boosts Phrase Grounding in Medical Vision-Language Models

Felix Nützel, Mischa Dombrowski, Bernhard Kainz

arXiv preprint | Jul 16, 2025
Phrase grounding, i.e., mapping natural language phrases to specific image regions, holds significant potential for disease localization in medical imaging through clinical reports. While current state-of-the-art methods rely on discriminative, self-supervised contrastive models, we demonstrate that generative text-to-image diffusion models, leveraging cross-attention maps, can achieve superior zero-shot phrase grounding performance. Contrary to prior assumptions, we show that fine-tuning diffusion models with a frozen, domain-specific language model, such as CXR-BERT, substantially outperforms domain-agnostic counterparts. This setup achieves remarkable improvements, with mIoU scores doubling those of current discriminative methods. These findings highlight the underexplored potential of generative models for phrase grounding tasks. To further enhance performance, we introduce Bimodal Bias Merging (BBM), a novel post-processing technique that aligns text and image biases to identify regions of high certainty. BBM refines cross-attention maps, achieving even greater localization accuracy. Our results establish generative approaches as a more effective paradigm for phrase grounding in the medical imaging domain, paving the way for more robust and interpretable applications in clinical practice. The source code and model weights are available at https://github.com/Felix-012/generate_to_ground.
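The mIoU metric the abstract reports doubling measures overlap between predicted and annotated phrase regions. A minimal sketch on toy binary masks (mask shapes and values are illustrative, not from the paper):

```python
def iou(mask_a, mask_b):
    """Intersection-over-union between two binary masks (flattened lists)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 0.0

def mean_iou(pred_masks, true_masks):
    """mIoU: average IoU across grounded phrases."""
    scores = [iou(p, t) for p, t in zip(pred_masks, true_masks)]
    return sum(scores) / len(scores)

# Toy flattened 1x4 "images": predicted vs. annotated phrase regions
preds = [[1, 1, 0, 0], [0, 1, 1, 0]]
trues = [[1, 0, 0, 0], [0, 1, 1, 1]]
score = mean_iou(preds, trues)
```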

Site-Level Fine-Tuning with Progressive Layer Freezing: Towards Robust Prediction of Bronchopulmonary Dysplasia from Day-1 Chest Radiographs in Extremely Preterm Infants

Sybelle Goedicke-Fritz, Michelle Bous, Annika Engel, Matthias Flotho, Pascal Hirsch, Hannah Wittig, Dino Milanovic, Dominik Mohr, Mathias Kaspar, Sogand Nemat, Dorothea Kerner, Arno Bücker, Andreas Keller, Sascha Meyer, Michael Zemlin, Philipp Flotho

arXiv preprint | Jul 16, 2025
Bronchopulmonary dysplasia (BPD) is a chronic lung disease affecting 35% of extremely low birth weight infants. Defined by oxygen dependence at 36 weeks postmenstrual age, it causes lifelong respiratory complications. However, preventive interventions carry severe risks, including neurodevelopmental impairment, ventilator-induced lung injury, and systemic complications. Early BPD prognosis and prediction of BPD outcome are therefore crucial to avoid unnecessary toxicity in low-risk infants. Admission radiographs of extremely preterm infants are routinely acquired within 24 h of life and could serve as a non-invasive prognostic tool. In this work, we developed and investigated a deep learning approach using chest X-rays from 163 extremely low-birth-weight infants ($\leq$32 weeks gestation, 401-999 g) obtained within 24 hours of birth. We fine-tuned a ResNet-50 pretrained specifically on adult chest radiographs, employing progressive layer freezing with discriminative learning rates to prevent overfitting, and evaluated CutMix augmentation and linear probing. For moderate/severe BPD outcome prediction, our best-performing model, with progressive freezing, linear probing, and CutMix, achieved an AUROC of 0.78 $\pm$ 0.10, a balanced accuracy of 0.69 $\pm$ 0.10, and an F1-score of 0.67 $\pm$ 0.11. In-domain pretraining significantly outperformed ImageNet initialization (p = 0.031), confirming that domain-specific pretraining is important for BPD outcome prediction. Routine IRDS grades showed limited prognostic value (AUROC 0.57 $\pm$ 0.11), confirming the need for learned markers. Our approach demonstrates that domain-specific pretraining enables accurate BPD prediction from routine day-1 radiographs. Through progressive freezing and linear probing, the method remains computationally feasible for site-level implementation and future federated learning deployments.
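Progressive layer freezing and discriminative learning rates can be sketched schematically: at each stage, more of the early (general-purpose) layers stop receiving gradient updates, and earlier layers get smaller learning rates than later ones. The group names follow ResNet-50's convention, but the exact schedule and rates here are assumptions, not the paper's settings:

```python
def freezing_schedule(layers, stage):
    """Progressive layer freezing: at stage s, the first s layer groups
    are frozen (no gradient updates) while the rest keep training."""
    return {name: (i < stage) for i, name in enumerate(layers)}

def discriminative_lrs(layers, base_lr=1e-3, decay=0.5):
    """Discriminative learning rates: earlier (more general) layer groups
    get geometrically smaller rates than later (more task-specific) ones."""
    n = len(layers)
    return {name: base_lr * decay ** (n - 1 - i) for i, name in enumerate(layers)}

groups = ["conv1", "layer1", "layer2", "layer3", "layer4", "fc"]
frozen = freezing_schedule(groups, stage=2)  # conv1 and layer1 frozen
lrs = discriminative_lrs(groups)
```

In a real training loop these maps would drive `requires_grad` flags and per-parameter-group optimizer settings; the sketch only captures the bookkeeping.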

Benchmarking and Explaining Deep Learning Cortical Lesion MRI Segmentation in Multiple Sclerosis

Nataliia Molchanova, Alessandro Cagol, Mario Ocampo-Pineda, Po-Jui Lu, Matthias Weigel, Xinjie Chen, Erin Beck, Charidimos Tsagkas, Daniel Reich, Colin Vanden Bulcke, Anna Stolting, Serena Borrelli, Pietro Maggi, Adrien Depeursinge, Cristina Granziera, Henning Mueller, Pedro M. Gordaliza, Meritxell Bach Cuadra

arXiv preprint | Jul 16, 2025
Cortical lesions (CLs) have emerged as valuable biomarkers in multiple sclerosis (MS), offering high diagnostic specificity and prognostic relevance. However, their routine clinical integration remains limited due to their subtle magnetic resonance imaging (MRI) appearance, challenges in expert annotation, and a lack of standardized automated methods. We propose a comprehensive multi-centric benchmark of CL detection and segmentation in MRI. A total of 656 MRI scans, including clinical trial and research data from four institutions, were acquired at 3T and 7T using MP2RAGE and MPRAGE sequences with expert-consensus annotations. We rely on the self-configuring nnU-Net framework, designed for medical imaging segmentation, and propose adaptations tailored to improving CL detection. We evaluated model generalization through out-of-distribution testing, demonstrating strong lesion detection capabilities with F1-scores of 0.64 in-domain and 0.5 out-of-domain. We also analyze internal model features and model errors for a better understanding of AI decision-making. Our study examines how data variability, lesion ambiguity, and protocol differences impact model performance, and offers recommendations to address these barriers to clinical adoption. To support reproducibility, the implementation and models will be publicly accessible and ready to use at https://github.com/Medical-Image-Analysis-Laboratory/ and https://doi.org/10.5281/zenodo.15911797.
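The lesion-detection F1-score reported above is the harmonic mean of precision and recall over detected lesions. A minimal sketch with hypothetical detection counts (the counts are illustrative, not the benchmark's):

```python
def f1_score(tp, fp, fn):
    """Detection F1: harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# Hypothetical counts: 8 lesions detected correctly, 2 false alarms, 4 missed
score = f1_score(tp=8, fp=2, fn=4)
```

Note that F1 ignores true negatives, which suits lesion detection, where "correctly found nothing" is ill-defined over a continuous image.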

Clinical Implementation of Sixfold-Accelerated Deep Learning Superresolution Knee MRI in Under 5 Minutes: Arthroscopy-Validated Diagnostic Performance.

Vosshenrich J, Breit HC, Donners R, Obmann MM, Walter SS, Serfaty A, Rodrigues TC, Recht M, Stern SE, Fritz J

PubMed | Jul 16, 2025
<b>BACKGROUND</b>. Deep learning (DL) superresolution image reconstruction enables higher acceleration factors for combined parallel imaging-simultaneous multislice-accelerated knee MRI but requires performance validation against external reference standards. <b>OBJECTIVE</b>. The purpose of this study was to validate the clinical efficacy of sixfold-accelerated sub-5-minute 3-T knee MRI using combined threefold parallel imaging (PI) and twofold simultaneous multislice (SMS) acceleration with DL superresolution image reconstruction against arthroscopic surgery. <b>METHODS</b>. Consecutive adult patients with painful knee conditions who underwent sixfold PI-SMS-accelerated DL superresolution 3-T knee MRI and arthroscopic surgery between October 2022 and July 2023 were retrospectively included. Seven fellowship-trained musculoskeletal radiologists independently assessed the MRI studies for image-quality parameters; presence of artifacts; structural visibility (Likert scale: 1 [very bad/severe] to 5 [very good/absent]); and the presence of cruciate ligament tears, collateral ligament tears, meniscal tears, cartilage defects, and fractures. Statistical analyses included kappa-based interreader agreement and diagnostic performance testing. <b>RESULTS</b>. The final sample included 124 adult patients (mean age ± SD, 46 ± 17 years; 79 men, 45 women) who underwent knee MRI and arthroscopic surgery within a median of 28 days (range, 4-56 days). Overall image quality was good to very good (median, 4 [IQR, 4-5]) with very good interreader agreement (κ = 0.86). Motion artifacts were absent (median, 5 [IQR, 5-5]), and image noise was minimal (median, 4 [IQR, 4-5]). Visibility of anatomic structures was very good (median, 5 [IQR, 5-5]). Diagnostic performance for diagnosing arthroscopy-validated structural abnormalities was good to excellent (AUC ≥ 0.81) with at least good interreader agreement (κ ≥ 0.72).
The sensitivity, specificity, accuracy, and AUC values were 100%, 99%, 99%, and 0.99 for anterior cruciate ligament tears; 100%, 100%, 100%, and 1.00 for posterior cruciate ligament tears; 90%, 95%, 94%, and 0.93 for medial meniscus tears; 76%, 97%, 90%, and 0.86 for lateral meniscus tears; and 85%, 88%, 88%, and 0.81 for articular cartilage defects, respectively. <b>CONCLUSION</b>. Sixfold PI-SMS-accelerated sub-5-minute DL superresolution 3-T knee MRI has excellent diagnostic performance for detecting internal derangement. <b>CLINICAL IMPACT</b>. Sixfold PI-SMS-accelerated DL superresolution 3-T knee MRI provides high efficiency through short scan times and high diagnostic performance.
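The sensitivity/specificity/accuracy triplets quoted above all derive from a 2x2 confusion matrix. A minimal sketch with toy counts loosely shaped like the medial meniscus row (the counts themselves are hypothetical, only the formulas are standard):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from a 2x2 confusion matrix."""
    sens = tp / (tp + fn)              # true-positive rate
    spec = tn / (tn + fp)              # true-negative rate
    acc = (tp + tn) / (tp + fp + tn + fn)
    return sens, spec, acc

# Toy counts: 45 tears found, 5 missed, 76 intact correctly called, 4 overcalled
sens, spec, acc = diagnostic_metrics(tp=45, fp=4, tn=76, fn=5)
```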

Interpreting Radiologist's Intention from Eye Movements in Chest X-ray Diagnosis

Trong-Thang Pham, Anh Nguyen, Zhigang Deng, Carol C. Wu, Hien Van Nguyen, Ngan Le

arXiv preprint | Jul 16, 2025
Radiologists rely on eye movements to navigate and interpret medical images. A trained radiologist possesses knowledge about the potential diseases that may be present in the images and, when searching, follows a mental checklist to locate them using their gaze. This is a key observation, yet existing models fail to capture the underlying intent behind each fixation. In this paper, we introduce a deep learning-based approach, RadGazeIntent, designed to model this behavior: having an intention to find something and actively searching for it. Our transformer-based architecture processes both the temporal and spatial dimensions of gaze data, transforming fine-grained fixation features into coarse, meaningful representations of diagnostic intent to interpret radiologists' goals. To capture the nuances of radiologists' varied intention-driven behaviors, we process existing medical eye-tracking datasets to create three intention-labeled subsets: RadSeq (Systematic Sequential Search), RadExplore (Uncertainty-driven Exploration), and RadHybrid (Hybrid Pattern). Experimental results demonstrate RadGazeIntent's ability to predict which findings radiologists are examining at specific moments, outperforming baseline methods across all intention-labeled datasets.
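One common preprocessing step for gaze data of this kind is coarsening raw fixations into a spatial dwell-time map. A toy sketch of that binning step (the paper's model learns far richer temporal representations; this only illustrates the spatial coarsening, and all values are made up):

```python
def fixation_heatmap(fixations, grid=4):
    """Bin fixation points (x, y in [0, 1), duration in seconds) into a
    grid-by-grid map of total dwell time per cell."""
    heat = [[0.0] * grid for _ in range(grid)]
    for x, y, dur in fixations:
        heat[int(y * grid)][int(x * grid)] += dur
    return heat

# Toy fixations: two dwells in the upper-left region, one in the lower-right
fixes = [(0.1, 0.1, 0.3), (0.2, 0.15, 0.2), (0.8, 0.9, 0.5)]
heat = fixation_heatmap(fixes)
```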

CytoSAE: Interpretable Cell Embeddings for Hematology

Muhammed Furkan Dasdelen, Hyesu Lim, Michele Buck, Katharina S. Götze, Carsten Marr, Steffen Schneider

arXiv preprint | Jul 16, 2025
Sparse autoencoders (SAEs) emerged as a promising tool for mechanistic interpretability of transformer-based foundation models. Very recently, SAEs were also adopted for the visual domain, enabling the discovery of visual concepts and their patch-wise attribution to tokens in the transformer model. While a growing number of foundation models emerged for medical imaging, tools for explaining their inferences are still lacking. In this work, we show the applicability of SAEs for hematology. We propose CytoSAE, a sparse autoencoder which is trained on over 40,000 peripheral blood single-cell images. CytoSAE generalizes to diverse and out-of-domain datasets, including bone marrow cytology, where it identifies morphologically relevant concepts which we validated with medical experts. Furthermore, we demonstrate scenarios in which CytoSAE can generate patient-specific and disease-specific concepts, enabling the detection of pathognomonic cells and localized cellular abnormalities at the patch level. We quantified the effect of concepts on a patient-level AML subtype classification task and show that CytoSAE concepts reach performance comparable to the state-of-the-art, while offering explainability on the sub-cellular level. Source code and model weights are available at https://github.com/dynamical-inference/cytosae.
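A sparse autoencoder of the kind described encodes an activation vector into a wide, mostly-zero code (one unit per "concept") and decodes it linearly. A minimal forward-pass sketch with toy weights (training would add an L1 penalty on the code; nothing here is CytoSAE's actual architecture or weights):

```python
def sae_forward(x, W_enc, b_enc, W_dec):
    """One SAE forward pass: ReLU encoding to a sparse code, linear
    decoding, plus the L1 term that training would penalize."""
    code = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
            for row, b in zip(W_enc, b_enc)]
    recon = [sum(W_dec[i][j] * code[i] for i in range(len(code)))
             for j in range(len(x))]
    l1 = sum(code)  # sparsity penalty term (all code entries are >= 0)
    return code, recon, l1

W_enc = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]  # 3 concept units, 2-d input
b_enc = [0.0, 0.0, 0.0]
W_dec = [[1.0, 0.0], [0.0, 1.0], [-0.5, -0.5]]
code, recon, l1 = sae_forward([0.5, 0.2], W_enc, b_enc, W_dec)
```

The interpretability payoff is that each nonzero code unit can be inspected as a candidate concept, which is how patch-level attributions like those in the paper become possible.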
