
4D Virtual Imaging Platform for Dynamic Joint Assessment via Uni-Plane X-ray and 2D-3D Registration

Hao Tang, Rongxi Yi, Lei Li, Kaiyi Cao, Jiapeng Zhao, Yihan Xiao, Minghai Shi, Peng Yuan, Yan Xi, Hui Tang, Wei Li, Zhan Wu, Yixin Zhou

arXiv preprint · Aug 22, 2025
Conventional computed tomography (CT) lacks the ability to capture dynamic, weight-bearing joint motion. Functional evaluation, particularly after surgical intervention, requires four-dimensional (4D) imaging, but current methods are limited by excessive radiation exposure or incomplete spatial information from 2D techniques. We propose an integrated 4D joint analysis platform that combines: (1) a dual robotic arm cone-beam CT (CBCT) system with a programmable, gantry-free trajectory optimized for upright scanning; (2) a hybrid imaging pipeline that fuses static 3D CBCT with dynamic 2D X-rays using deep learning-based preprocessing, 3D-2D projection, and iterative optimization; and (3) a clinically validated framework for quantitative kinematic assessment. In simulation studies, the method achieved sub-voxel accuracy (0.235 mm) with a 99.18 percent success rate, outperforming conventional and state-of-the-art registration approaches. Clinical evaluation further demonstrated accurate quantification of tibial plateau motion and medial-lateral variance in post-total knee arthroplasty (TKA) patients. This 4D CBCT platform enables fast, accurate, and low-dose dynamic joint imaging, offering new opportunities for biomechanical research, precision diagnostics, and personalized orthopedic care.
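
The registration stage of this pipeline lends itself to a compact illustration. Below is a minimal sketch, not the authors' implementation, of iterative 2D-3D registration: a rigid pose is optimized so that a simulated projection of the static 3D volume matches a 2D X-ray. The parallel-ray projector, the NCC similarity metric, and the Powell optimizer are all simplifying assumptions.

```python
# Minimal 2D-3D registration sketch: optimize three rotation angles so a
# simulated projection of the 3D volume matches a target 2D image.
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

def project(volume: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Rotate the volume by three Euler angles (rad) about its centre and sum
    along one axis as a crude stand-in for a cone-beam projector."""
    rx, ry, rz = pose
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    R = (np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
         @ np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
         @ np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]))
    center = (np.array(volume.shape) - 1) / 2
    rotated = affine_transform(volume, R, offset=center - R @ center, order=1)
    return rotated.sum(axis=0)  # parallel projection along the first axis

def neg_ncc(pose, volume, xray):
    """Negative normalized cross-correlation between projection and X-ray."""
    p = project(volume, pose).ravel()
    x = xray.ravel()
    p = (p - p.mean()) / (p.std() + 1e-8)
    x = (x - x.mean()) / (x.std() + 1e-8)
    return -np.mean(p * x)

# Toy run: recover the pose that produced a reference projection.
rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32))
target = project(vol, np.array([0.05, -0.03, 0.02]))
result = minimize(neg_ncc, x0=np.zeros(3), args=(vol, target),
                  method="Powell", options={"xtol": 1e-4})
print("recovered pose (rad):", result.x)
```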

Real-world federated learning for the brain imaging scientist

Denissen, S., Laton, J., Grothe, M., Vaneckova, M., Uher, T., Kudrna, M., Horakova, D., Baijot, J., Penner, I.-K., Kirsch, M., Motyl, J., De Vos, M., Chen, O. Y., Van Schependom, J., Sima, D. M., Nagels, G.

medRxiv preprint · Aug 22, 2025
Background: Federated learning (FL) could boost deep learning in neuroimaging but is rarely deployed in a real-world scenario, where its true potential lies. Here, we propose FLightcase, a new FL toolbox tailored for brain research. We tested FLightcase on a real-world FL network to predict the cognitive status of patients with multiple sclerosis (MS) from brain magnetic resonance imaging (MRI). Methods: We first trained a DenseNet neural network to predict age from T1-weighted brain MRI on three open-source datasets: IXI (586 images), SALD (491 images) and CamCAN (653 images). These were distributed across the three centres in our FL network: Brussels (BE), Greifswald (DE) and Prague (CZ). We benchmarked this federated model against a centralised version. The best-performing brain age model was then fine-tuned to predict performance on the Symbol Digit Modalities Test (SDMT) of patients with MS (Brussels: 96 images, Greifswald: 756 images, Prague: 2424 images). Shallow transfer learning (TL) was compared with deep TL, updating weights in the last layer or the entire network, respectively. Results: Centralised training outperformed federated training, predicting age with a mean absolute error (MAE) of 6.00 versus 9.02. Federated training yielded a Pearson correlation (all p < .001) between true and predicted age of .78 (IXI, Brussels), .78 (SALD, Greifswald) and .86 (CamCAN, Prague). Fine-tuning of the centralised model to SDMT was most successful with a deep TL paradigm (MAE = 9.12) compared to shallow TL (MAE = 14.08), and on Brussels, Greifswald and Prague respectively predicted SDMT with an MAE of 11.50, 9.64 and 8.86, and a Pearson correlation between true and predicted SDMT of .10 (p = .668), .42 (p < .001) and .51 (p < .001). Conclusion: Real-world federated learning using FLightcase is feasible for neuroimaging research in MS, enabling access to a large MS imaging database without sharing the data. The federated SDMT-decoding model is promising and could be improved in the future by adopting FL algorithms that address the non-IID data issue and by considering other imaging modalities. We hope our detailed real-world experiments and open-source distribution of FLightcase will prompt researchers to move beyond simulated FL environments.
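
FLightcase's internals are not described in the abstract, so the following is a hedged sketch of the weight-averaging round a federated toolbox of this kind typically runs (FedAvg): each centre trains a copy of the global model locally, and the server averages the resulting weights in proportion to local dataset size. The model, loaders, and training loop are toy placeholders; only the centre names and dataset sizes mirror the study.

```python
# FedAvg sketch: local training per centre, then size-weighted averaging.
import copy
import torch
import torch.nn as nn

def local_update(model: nn.Module, data, epochs: int = 1) -> dict:
    """Train a copy of the global model on one centre's data; return weights."""
    local = copy.deepcopy(model)
    opt = torch.optim.Adam(local.parameters(), lr=1e-3)
    loss_fn = nn.L1Loss()  # MAE, matching the brain-age evaluation metric
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss_fn(local(x), y).backward()
            opt.step()
    return local.state_dict()

def fedavg(global_model: nn.Module, centre_weights, centre_sizes):
    """Weighted average of centre state_dicts, proportional to dataset size."""
    total = sum(centre_sizes)
    avg = copy.deepcopy(centre_weights[0])
    for key in avg:
        avg[key] = sum(w[key].float() * (n / total)
                       for w, n in zip(centre_weights, centre_sizes))
    global_model.load_state_dict(avg)
    return global_model

# Toy round with the three centres and brain-age dataset sizes from the paper.
model = nn.Sequential(nn.Flatten(), nn.Linear(64, 1))
sizes = {"Brussels": 586, "Greifswald": 491, "Prague": 653}
batches = [[(torch.randn(8, 64), torch.randn(8, 1))] for _ in sizes]
weights = [local_update(model, b) for b in batches]
model = fedavg(model, weights, list(sizes.values()))
```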

Application of contrast-enhanced CT-driven multimodal machine learning models for pulmonary metastasis prediction in head and neck adenoid cystic carcinoma.

Gong W, Cui Q, Fu S, Wu Y

PubMed · Aug 22, 2025
This study explores radiomics and deep learning for predicting pulmonary metastasis in head and neck adenoid cystic carcinoma (ACC) and assesses the performance of machine learning (ML) models. The study retrospectively analyzed contrast-enhanced CT imaging data and clinical records from 130 patients with pathologically confirmed ACC of the head and neck. The dataset was randomly split into training and test sets at a 7:3 ratio. Radiomic features and deep learning-derived features were extracted and integrated through multi-feature fusion. Z-score normalization was applied to the training and test sets. Hypothesis testing was used to select significant features, and LASSO regression with 5-fold cross-validation identified 7 predictive features. Nine ML algorithms were employed to build predictive models for ACC pulmonary metastasis: ada, KNN, rf, NB, GLM, LDA, rpart, SVM-RBF, and GBM. Models were trained on the training set and evaluated on the test set using metrics such as recall, sensitivity, PPV, F1-score, precision, prevalence, NPV, specificity, accuracy, detection rate, detection prevalence, and balanced accuracy. ML models based on multi-feature fusion of enhanced CT, using KNN, SVM, rpart, GBM, NB, GLM, and LDA, achieved AUC values in the test set of 0.687, 0.863, 0.737, 0.793, 0.763, 0.867, and 0.844, respectively; rf and ada showed significant overfitting. Among these, GBM and GLM showed the highest stability in predicting pulmonary metastasis of head and neck ACC. Radiomics and deep learning methods based on enhanced CT imaging can provide effective auxiliary tools for predicting pulmonary metastasis in head and neck ACC patients, showing promising potential for clinical application.
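
As a concrete illustration of the feature-selection chain described above (z-scoring, hypothesis testing, then LASSO with 5-fold CV), here is a minimal scikit-learn sketch on synthetic data. The ANOVA F-test stands in for whatever hypothesis test the authors used, and all shapes and feature counts are illustrative.

```python
# Radiomics-style feature selection: z-score -> univariate filter -> LassoCV.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(130, 200))          # 130 patients, 200 fused features
y = rng.integers(0, 2, size=130)         # pulmonary metastasis label

# 7:3 train/test split, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Z-score normalization: fit on the training split, apply to both.
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Univariate hypothesis testing (ANOVA F-test as a stand-in) to pre-filter.
selector = SelectKBest(f_classif, k=50).fit(X_tr, y_tr)
X_tr, X_te = selector.transform(X_tr), selector.transform(X_te)

# LASSO with 5-fold CV; features with non-zero coefficients are retained.
lasso = LassoCV(cv=5, random_state=0).fit(X_tr, y_tr)
kept = np.flatnonzero(lasso.coef_)
print(f"{kept.size} features retained")  # the paper reports 7
```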

Improving Performance, Robustness, and Fairness of Radiographic AI Models with Finely-Controllable Synthetic Data

Stefania L. Moroianu, Christian Bluethgen, Pierre Chambon, Mehdi Cherti, Jean-Benoit Delbrouck, Magdalini Paschali, Brandon Price, Judy Gichoya, Jenia Jitsev, Curtis P. Langlotz, Akshay S. Chaudhari

arXiv preprint · Aug 22, 2025
Achieving robust performance and fairness across diverse patient populations remains a challenge in developing clinically deployable deep learning models for diagnostic imaging. Synthetic data generation has emerged as a promising strategy to address limitations in dataset scale and diversity. We introduce RoentGen-v2, a text-to-image diffusion model for chest radiographs that enables fine-grained control over both radiographic findings and patient demographic attributes, including sex, age, and race/ethnicity. RoentGen-v2 is the first model to generate clinically plausible images with demographic conditioning, facilitating the creation of a large, demographically balanced synthetic dataset comprising over 565,000 images. We use this large synthetic dataset to evaluate optimal training pipelines for downstream disease classification models. In contrast to prior work that combines real and synthetic data naively, we propose an improved training strategy that leverages synthetic data for supervised pretraining, followed by fine-tuning on real data. Through extensive evaluation on over 137,000 chest radiographs from five institutions, we demonstrate that synthetic pretraining consistently improves model performance, generalization to out-of-distribution settings, and fairness across demographic subgroups. Across datasets, synthetic pretraining led to a 6.5% accuracy increase in the performance of downstream classification models, compared to a modest 2.7% increase when naively combining real and synthetic data. We observe this performance improvement simultaneously with the reduction of the underdiagnosis fairness gap by 19.3%. These results highlight the potential of synthetic imaging to advance equitable and generalizable medical deep learning under real-world data constraints. We open source our code, trained models, and synthetic dataset at https://github.com/StanfordMIMI/RoentGen-v2 .
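
The proposed training strategy, supervised pretraining on synthetic images followed by fine-tuning on real ones, reduces to a simple two-stage loop. The sketch below is a schematic stand-in for the released pipeline: the backbone, loaders, label format, and learning rates are placeholder assumptions, not the RoentGen-v2 code.

```python
# Two-stage training: synthetic pretraining, then fine-tuning on real data.
import torch
import torch.nn as nn

def run_epochs(model, loader, lr, epochs):
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()  # multi-label chest findings
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# Placeholder classifier: 14 findings from a 3x64x64 "radiograph".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 14))

synthetic_loader = [(torch.randn(4, 3, 64, 64), torch.rand(4, 14).round())]
real_loader = [(torch.randn(4, 3, 64, 64), torch.rand(4, 14).round())]

# Stage 1: supervised pretraining on the demographically balanced synthetic set.
run_epochs(model, synthetic_loader, lr=1e-3, epochs=1)
# Stage 2: fine-tuning on real radiographs, typically at a lower learning rate.
run_epochs(model, real_loader, lr=1e-4, epochs=1)
```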

Towards Diagnostic Quality Flat-Panel Detector CT Imaging Using Diffusion Models

Hélène Corbaz, Anh Nguyen, Victor Schulze-Zachau, Paul Friedrich, Alicia Durrer, Florentin Bieder, Philippe C. Cattin, Marios N Psychogios

arXiv preprint · Aug 22, 2025
Patients undergoing a mechanical thrombectomy procedure usually have a multi-detector CT (MDCT) scan before and after the intervention. The image quality of the flat-panel detector CT (FDCT) available in the intervention room is generally much lower than that of an MDCT due to significant artifacts. However, using only FDCT images could improve patient management, as the patient would not need to be moved to the MDCT room. Several studies have evaluated the potential use of FDCT imaging alone and the time that could be saved by acquiring the images before and/or after the intervention only with the FDCT. This study proposes using a denoising diffusion probabilistic model (DDPM) to improve the image quality of FDCT scans, making them comparable to MDCT scans. Clinicians evaluated FDCT, MDCT, and our model's predictions for diagnostic purposes using a questionnaire. The DDPM eliminated most artifacts and improved anatomical visibility without reducing bleeding detection, provided that the input FDCT image quality was not too low. Our code can be found on GitHub.
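
For readers unfamiliar with DDPMs, the core of the method is the learned reverse (denoising) step. The sketch below shows the standard DDPM ancestral-sampling update with a linear beta schedule; the noise-prediction network is a dummy placeholder where the trained FDCT-enhancement model would go.

```python
# One DDPM reverse step: x_{t-1} from x_t using predicted noise.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # standard linear schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def reverse_step(eps_model, x_t: torch.Tensor, t: int) -> torch.Tensor:
    """Sample x_{t-1} ~ p(x_{t-1} | x_t) given the noise estimate eps_model."""
    eps = eps_model(x_t, t)
    coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps) / torch.sqrt(alphas[t])
    if t == 0:
        return mean
    noise = torch.randn_like(x_t)
    return mean + torch.sqrt(betas[t]) * noise  # sigma_t^2 = beta_t variant

# Toy run with a dummy "network" that predicts zero noise.
x = torch.randn(1, 1, 64, 64)                   # noisy FDCT-like input
for t in reversed(range(T)):
    x = reverse_step(lambda x_t, t: torch.zeros_like(x_t), x, t)
```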

Learning Explainable Imaging-Genetics Associations Related to a Neurological Disorder

Jueqi Wang, Zachary Jacokes, John Darrell Van Horn, Michael C. Schatz, Kevin A. Pelphrey, Archana Venkataraman

arXiv preprint · Aug 22, 2025
While imaging-genetics holds great promise for unraveling the complex interplay between brain structure and genetic variation in neurological disorders, traditional methods are limited to simplistic linear models or to black-box techniques that lack interpretability. In this paper, we present NeuroPathX, an explainable deep learning framework that uses an early fusion strategy powered by cross-attention mechanisms to capture meaningful interactions between structural variations in the brain derived from MRI and established biological pathways derived from genetics data. To enhance interpretability and robustness, we introduce two loss functions over the attention matrix: a sparsity loss that focuses on the most salient interactions, and a pathway similarity loss that enforces consistent representations across the cohort. We validate NeuroPathX on both autism spectrum disorder and Alzheimer's disease. Our results demonstrate that NeuroPathX outperforms competing baseline approaches and reveals biologically plausible associations linked to the disorder. These findings underscore the potential of NeuroPathX to advance our understanding of complex brain disorders. Code is available at https://github.com/jueqiw/NeuroPathX .
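
The two attention losses are only named in the abstract, so the following is a hedged guess at their form: an L1 penalty that pushes cross-attention weights toward sparsity, and a consistency penalty that pulls each subject's attention map toward the cohort (batch) mean. The exact NeuroPathX definitions may differ.

```python
# Plausible forms of the two attention-matrix regularizers.
import torch

def sparsity_loss(attn: torch.Tensor) -> torch.Tensor:
    """attn: (batch, brain_regions, pathways) cross-attention weights.
    L1 penalty concentrates attention on the most salient interactions."""
    return attn.abs().mean()

def pathway_similarity_loss(attn: torch.Tensor) -> torch.Tensor:
    """Penalize each subject's deviation from the batch-mean attention,
    encouraging consistent pathway representations across the cohort."""
    cohort_mean = attn.mean(dim=0, keepdim=True)
    return ((attn - cohort_mean) ** 2).mean()

attn = torch.softmax(torch.randn(16, 90, 50), dim=-1)  # toy attention maps
loss = sparsity_loss(attn) + 0.1 * pathway_similarity_loss(attn)
```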

Diagnostic performance of T1-Weighted MRI gray matter biomarkers in Parkinson's disease: A systematic review and meta-analysis.

Torres-Parga A, Gershanik O, Cardona S, Guerrero J, Gonzalez-Ojeda LM, Cardona JF

PubMed · Aug 22, 2025
T1-weighted structural MRI has advanced our understanding of Parkinson's disease (PD), yet its diagnostic utility in clinical settings remains unclear. To assess the diagnostic performance of T1-weighted MRI gray matter (GM) metrics in distinguishing PD patients from healthy controls and to identify limitations affecting clinical applicability, a systematic review and meta-analysis were conducted on studies reporting sensitivity, specificity, or AUC for PD classification using T1-weighted MRI. Of 2906 screened records, 26 met inclusion criteria, and 10 provided sufficient data for quantitative synthesis. Risk of bias and heterogeneity were evaluated, and sensitivity analyses were performed by excluding influential studies. Pooled estimates showed a sensitivity of 0.71 (95% CI: 0.70-0.72), a specificity of 0.889 (95% CI: 0.86-0.92), and an overall accuracy of 0.909 (95% CI: 0.89-0.93). These metrics improved after excluding outliers, reducing heterogeneity (I² from 95.7% to 0%). Frequently reported regions showing structural alterations included the substantia nigra, striatum, thalamus, medial temporal cortex, and middle frontal gyrus. However, region-specific diagnostic metrics could not be consistently synthesized due to methodological variability. Machine learning approaches, particularly support vector machines and neural networks, showed enhanced performance with appropriate validation. T1-weighted MRI gray matter metrics demonstrate moderate accuracy in differentiating PD from controls but are not yet suitable as standalone diagnostic tools. Greater methodological standardization, external validation, and integration with clinical and biological data are needed to support precision neurology and clinical translation.
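
The heterogeneity statistic quoted above (I²) follows directly from an inverse-variance pool. Below is a small worked sketch of Cochran's Q and I² with illustrative study-level numbers, not the review's data.

```python
# Fixed-effect inverse-variance pooling with Cochran's Q and I^2.
import numpy as np

est = np.array([0.68, 0.74, 0.70, 0.72, 0.69])   # per-study sensitivity
var = np.array([0.002, 0.003, 0.001, 0.004, 0.002])

w = 1.0 / var                                     # inverse-variance weights
pooled = np.sum(w * est) / np.sum(w)
Q = np.sum(w * (est - pooled) ** 2)               # Cochran's Q
df = est.size - 1
I2 = max(0.0, (Q - df) / Q) * 100                 # % variance from heterogeneity
print(f"pooled = {pooled:.3f}, Q = {Q:.2f}, I^2 = {I2:.1f}%")
```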

Linking morphometric variations in human cranial bone to mechanical behavior using machine learning.

Guo W, Bhagavathula KB, Adanty K, Rabey KN, Ouellet S, Romanyk DL, Westover L, Hogan JD

PubMed · Aug 22, 2025
With the development of increasingly detailed imaging techniques, there is a need to update the methodology and evaluation criteria for bone analysis in order to understand the influence of bone microarchitecture on mechanical response. The present study aims to develop a machine learning-based approach to investigate the link between the morphology of the human calvarium and its mechanical response under quasi-static uniaxial compression. Micro-computed tomography is used to capture the microstructure of male (n=5) and female (n=5) formalin-fixed calvarium specimens from the frontal and parietal regions at a resolution of 18 μm. Image processing-based machine learning methods using convolutional neural networks are developed to isolate and calculate specific morphometric properties, such as porosity, trabecular thickness, and trabecular spacing. An ensemble method using a gradient-boosted decision tree (XGBoost) is then used to predict mechanical strength from the morphological results; this analysis found that mean and minimum porosity of the diploë are the most relevant factors for the mechanical strength of cranial bones under the studied conditions. Overall, this study provides new tools that can predict the mechanical response of the human calvarium a priori. In addition, the quantitative morphology of the human calvarium can be used as input data in finite element models, contributing to efforts to develop cranial simulant materials.
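
The final modelling step, gradient-boosted trees over morphometric features, can be sketched in a few lines. The snippet below (assuming the xgboost package is installed) uses gain-based feature importances as a stand-in for the paper's relevance analysis; the feature set and data are synthetic.

```python
# XGBoost regression: morphometric properties -> compressive strength.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(1)
features = ["mean_porosity", "min_porosity", "trabecular_thickness",
            "trabecular_spacing"]
X = rng.uniform(size=(40, len(features)))        # 40 synthetic specimens
# Toy ground truth: strength drops as porosity rises.
y = 120 - 60 * X[:, 0] - 30 * X[:, 1] + rng.normal(0, 2, size=40)

model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X, y)
for name, imp in zip(features, model.feature_importances_):
    print(f"{name}: {imp:.3f}")  # porosity terms should dominate
```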

Spatial imaging features derived from SUVmax location in resectable NSCLC are associated with tumor aggressiveness.

Jiang Z, Spielvogel C, Haberl D, Yu J, Krisch M, Szakall S, Molnar P, Fillinger J, Horvath L, Renyi-Vamos F, Aigner C, Dome B, Lang C, Megyesfalvi Z, Kenner L, Hacker M

PubMed · Aug 21, 2025
Accurate non-invasive prediction of histopathologic invasiveness and recurrence risk remains a clinical challenge in resectable non-small cell lung cancer (NSCLC). We developed and validated the Edge Proximity Score (EPS), a novel [¹⁸F]FDG PET/CT-based spatial imaging feature that quantifies the displacement of SUVmax relative to the tumor centroid and perimeter, to assess tumor aggressiveness and predict progression-free survival (PFS). This retrospective study included 244 NSCLC patients with preoperative [¹⁸F]FDG PET/CT. EPS was computed from normalized SUVmax-to-centroid and SUVmax-to-perimeter distances. A total of 115 PET radiomics features were extracted and standardized. Eight machine learning models (80:20 train/test split) were trained to predict lymphovascular invasion (LVI), visceral pleural invasion (VPI), and spread through air spaces (STAS), with feature importance assessed using SHAP. Prognostic analysis was conducted using multivariable Cox regression. A survival prediction model incorporating EPS was externally validated in the TCIA cohort. RNA sequencing data from 76 TCIA patients were used for transcriptomic and immune profiling. EPS was significantly elevated in tumors with LVI, VPI, and STAS (P < 0.001), consistently ranked among the top SHAP features, and was an independent predictor of PFS (HR = 2.667, P = 0.015). The EPS-based nomogram achieved AUCs of 0.67, 0.70, and 0.68 for predicting 1-, 3-, and 5-year PFS in the TCIA validation cohort. High EPS was associated with proliferative and metabolic gene signatures, whereas low EPS was linked to immune activation and neutrophil infiltration. EPS is a biologically relevant, non-invasive imaging biomarker that may improve risk stratification in NSCLC.
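
The abstract does not give the exact EPS formula, so the following is a hedged reconstruction: locate the SUVmax voxel inside the tumor mask, measure its distances to the tumor centroid and to the perimeter, and combine the normalized distances so that higher scores indicate a more peripheral SUVmax. The function name and the normalization are assumptions.

```python
# Hedged reconstruction of an Edge Proximity Score from a PET volume + mask.
import numpy as np
from scipy import ndimage

def edge_proximity_score(suv: np.ndarray, mask: np.ndarray) -> float:
    """suv: PET SUV volume; mask: boolean tumor segmentation, same shape."""
    masked = np.where(mask, suv, -np.inf)          # restrict search to tumor
    peak = np.array(np.unravel_index(np.argmax(masked), suv.shape), dtype=float)
    centroid = np.array(ndimage.center_of_mass(mask))
    d_centroid = np.linalg.norm(peak - centroid)   # SUVmax-to-centroid
    dist_to_edge = ndimage.distance_transform_edt(mask)
    d_perimeter = dist_to_edge[tuple(peak.astype(int))]  # SUVmax-to-perimeter
    # Normalize so EPS lies in [0, 1]; values near 1 put SUVmax at the edge.
    return float(d_centroid / (d_centroid + d_perimeter + 1e-8))

# Toy example: a spherical "tumor" with an off-centre hot spot.
zz, yy, xx = np.mgrid[:32, :32, :32]
mask = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 <= 10 ** 2
suv = np.exp(-((zz - 22) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2) / 20.0)
print(f"EPS = {edge_proximity_score(suv, mask):.2f}")
```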

Combined use of two artificial intelligence-based algorithms for mammography triaging: a retrospective simulation study.

Kim HJ, Kim HH, Eom HJ, Choi WJ, Chae EY, Shin HJ, Cha JH

PubMed · Aug 21, 2025
To evaluate triaging scenarios involving two commercial AI algorithms to enhance mammography interpretation and reduce workload, a total of 3012 screening or diagnostic mammograms, including 213 cancer cases, were analyzed using two AI algorithms (AI-1, AI-2) and categorized as "high-risk" (top 10%), "minimal-risk" (bottom 20%), or "indeterminate" based on malignancy likelihood. Five triaging scenarios of combined AI use (Sensitive, Specific, Conservative, and Sequential Modes A and B) determined whether cases would be autonomously recalled, classified as negative, or referred for radiologist interpretation. Sensitivity, specificity, the number of mammograms requiring review, and the abnormal interpretation rate (AIR) were compared against single AIs and manual reading using McNemar's test. Sensitive Mode achieved 84% sensitivity, outperforming single AI (p = 0.03 [AI-1], 0.01 [AI-2]) and manual reading (p = 0.03), with an 18.3% reduction in mammograms requiring review (AIR, 23.3%). Specific Mode achieved 87.7% specificity, exceeding single AI (p < 0.001 [AI-1, AI-2]) and comparable to manual reading (p = 0.37), with a 41.7% reduction in mammograms requiring review (AIR, 17%). Conservative and Sequential Modes A and B achieved sensitivities of 82.2%, 80.8%, and 80.3%, respectively, comparable to single AI or manual reading (p > 0.05 for all), with reductions of 9.8%, 49.8%, and 49.8% in mammograms requiring review (AIRs, 18.6%, 21.6%, 21.7%). Combining two AI algorithms improved sensitivity or specificity in mammography interpretation while reducing the number of mammograms requiring radiologist review in this cancer-enriched dataset from a tertiary center. Scenario selection should consider clinical needs and requires validation in a screening population. Question: AI algorithms have the potential to improve workflow efficiency by triaging mammograms; combining algorithms trained under different conditions may offer synergistic benefits. Findings: The combined use of two commercial AI algorithms for triaging mammograms improved sensitivity or specificity, depending on the scenario, while also reducing the number of mammograms requiring radiologist review. Clinical relevance: Integrating two commercial AI algorithms could enhance mammography interpretation compared with using a single AI for triaging or with manual reading.
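
The triage logic itself is simple to express. Below is a hedged sketch of one scenario in the spirit of Sensitive Mode: recall when either AI flags high risk, autonomously clear when both agree on minimal risk, and route everything else to the radiologist. The thresholds and combination rule are illustrative, not the vendors' logic.

```python
# Illustrative two-AI triage rule over per-exam malignancy scores.
from typing import Literal

Decision = Literal["autonomous_recall", "autonomous_negative", "radiologist"]

def categorize(score: float, hi: float, lo: float) -> str:
    """Map a score to high-risk (top 10%), minimal-risk (bottom 20%),
    or indeterminate, given calibrated percentile cutoffs hi and lo."""
    if score >= hi:
        return "high-risk"
    if score <= lo:
        return "minimal-risk"
    return "indeterminate"

def sensitive_mode(score1: float, score2: float,
                   hi1: float, lo1: float, hi2: float, lo2: float) -> Decision:
    c1, c2 = categorize(score1, hi1, lo1), categorize(score2, hi2, lo2)
    if "high-risk" in (c1, c2):
        return "autonomous_recall"       # either AI alarms -> recall
    if c1 == c2 == "minimal-risk":
        return "autonomous_negative"     # both clear -> negative
    return "radiologist"                 # everything else -> human review

# hi/lo cutoffs would be calibrated to each AI's top-10% / bottom-20% scores.
print(sensitive_mode(0.93, 0.40, hi1=0.9, lo1=0.1, hi2=0.9, lo2=0.1))
```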