Page 239 of 6576562 results

Hao Tang, Rongxi Yi, Lei Li, Kaiyi Cao, Jiapeng Zhao, Yihan Xiao, Minghai Shi, Peng Yuan, Yan Xi, Hui Tang, Wei Li, Zhan Wu, Yixin Zhou

arXiv preprint · Aug 22, 2025
Conventional computed tomography (CT) lacks the ability to capture dynamic, weight-bearing joint motion. Functional evaluation, particularly after surgical intervention, requires four-dimensional (4D) imaging, but current methods are limited by excessive radiation exposure or incomplete spatial information from 2D techniques. We propose an integrated 4D joint analysis platform that combines: (1) a dual robotic arm cone-beam CT (CBCT) system with a programmable, gantry-free trajectory optimized for upright scanning; (2) a hybrid imaging pipeline that fuses static 3D CBCT with dynamic 2D X-rays using deep learning-based preprocessing, 3D-2D projection, and iterative optimization; and (3) a clinically validated framework for quantitative kinematic assessment. In simulation studies, the method achieved sub-voxel accuracy (0.235 mm) with a 99.18 percent success rate, outperforming conventional and state-of-the-art registration approaches. Clinical evaluation further demonstrated accurate quantification of tibial plateau motion and medial-lateral variance in post-total knee arthroplasty (TKA) patients. This 4D CBCT platform enables fast, accurate, and low-dose dynamic joint imaging, offering new opportunities for biomechanical research, precision diagnostics, and personalized orthopedic care.
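The 3D-2D fusion step can be illustrated with a toy example. The sketch below is an assumption rather than the authors' pipeline: it uses a simple parallel projection and a brute-force integer-shift search in place of the paper's learned preprocessing and iterative optimizer, but it shows the core idea of recovering a pose parameter by matching projections of a static 3D volume to a dynamic 2D image.

```python
import numpy as np

def project(volume, shift):
    """Toy parallel projection: shift the volume along x, then sum along depth."""
    rolled = np.roll(volume, shift, axis=2)
    return rolled.sum(axis=0)

def register(volume, target_2d, search=range(-5, 6)):
    """Find the integer x-shift whose projection best matches the 2D target."""
    return min(search, key=lambda s: np.mean((project(volume, s) - target_2d) ** 2))

rng = np.random.default_rng(0)
vol = rng.random((8, 16, 16))          # stand-in for the static 3D CBCT volume
target = project(vol, 3)               # stand-in for a dynamic 2D X-ray frame
shift = register(vol, target)          # recovers the applied shift of 3
```

In the real system the search space is a full rigid pose per bone and the optimizer is iterative rather than exhaustive, but the objective (projection similarity) is analogous.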

Denissen, S., Laton, J., Grothe, M., Vaneckova, M., Uher, T., Kudrna, M., Horakova, D., Baijot, J., Penner, I.-K., Kirsch, M., Motyl, J., De Vos, M., Chen, O. Y., Van Schependom, J., Sima, D. M., Nagels, G.

medRxiv preprint · Aug 22, 2025
Background: Federated learning (FL) could boost deep learning in neuroimaging but is rarely deployed in a real-world scenario, where its true potential lies. Here, we propose FLightcase, a new FL toolbox tailored for brain research. We tested FLightcase on a real-world FL network to predict the cognitive status of patients with multiple sclerosis (MS) from brain magnetic resonance imaging (MRI). Methods: We first trained a DenseNet neural network to predict age from T1-weighted brain MRI on three open-source datasets: IXI (586 images), SALD (491 images) and CamCAN (653 images). These were distributed across the three centres in our FL network: Brussels (BE), Greifswald (DE) and Prague (CZ). We benchmarked this federated model against a centralised version. The best-performing brain age model was then fine-tuned to predict performance on the Symbol Digit Modalities Test (SDMT) of patients with MS (Brussels: 96 images, Greifswald: 756 images, Prague: 2424 images). Shallow transfer learning (TL) was compared with deep transfer learning, updating the weights of the last layer or of the entire network, respectively. Results: Centralised training outperformed federated training, predicting age with a mean absolute error (MAE) of 6.00 versus 9.02. Federated training yielded Pearson correlations (all p < .001) between true and predicted age of .78 (IXI, Brussels), .78 (SALD, Greifswald) and .86 (CamCAN, Prague). Fine-tuning of the centralised model to SDMT was most successful with a deep TL paradigm (MAE = 9.12) compared to shallow TL (MAE = 14.08), and on Brussels, Greifswald and Prague respectively predicted SDMT with an MAE of 11.50, 9.64 and 8.86, and a Pearson correlation between true and predicted SDMT of .10 (p = .668), .42 (p < .001) and .51 (p < .001). Conclusion: Real-world federated learning using FLightcase is feasible for neuroimaging research in MS, enabling access to a large MS imaging database without sharing the data. The federated SDMT-decoding model is promising and could be improved in future by adopting FL algorithms that address the non-IID data issue and by considering other imaging modalities. We hope our detailed real-world experiments and open-source distribution of FLightcase will prompt researchers to move beyond simulated FL environments.
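The aggregation step at the heart of such a federated network can be sketched with FedAvg-style weighted averaging of per-site parameters. This is an assumption for illustration (FLightcase's actual aggregation rule is not stated in the abstract); the site sizes reuse the dataset counts given above as example weights.

```python
import numpy as np

def fedavg(site_weights, site_sizes):
    """FedAvg: average per-site parameter vectors, weighted by dataset size."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# Hypothetical per-site parameter vectors after one round of local training
sites = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [586, 491, 653]                 # IXI, SALD, CamCAN image counts
global_w = fedavg(sites, sizes)         # broadcast back to all sites next round
```

Each round, sites train locally on private data and only these parameter vectors cross institutional boundaries, which is what allows training on the MS database without sharing images.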

Mahmood T, Saba T, Rehman A, Alamri FS

PubMed paper · Aug 22, 2025
Medical imaging is crucial for clinical practice, providing insight into organ structure and function. Advances in imaging technologies enable automated image segmentation, which is essential for accurate diagnosis and treatment planning. However, challenges such as class imbalance, tissue boundary delineation, and the complexity of tissue interactions persist. The study introduces ConvTNet, a hybrid model that combines Transformer and CNN features to improve renal CT image segmentation, using attention mechanisms and feature fusion techniques to enhance precision. ConvTNet uses the KC module to focus on critical image regions, enabling precise delineation even where tissue boundaries are noisy and ambiguous. The Mix-KFCA module enhances feature fusion by combining multi-scale features and distinguishing healthy kidney tissue from surrounding structures. The study proposes innovative preprocessing strategies, including noise reduction, data augmentation, and image normalization, that significantly optimize image quality and ensure reliable inputs for accurate segmentation. ConvTNet further employs transfer learning, fine-tuning five pre-trained models to bolster performance and leverage a broad range of feature extraction techniques. Empirical evaluations demonstrate that ConvTNet performs exceptionally well in multi-label classification and lesion segmentation, with an AUC of 0.9970, sensitivity of 0.9942, DSC of 0.9533, and accuracy of 0.9921, proving its efficacy for precise renal cancer diagnosis.
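The attention-gated fusion of CNN and Transformer feature maps can be sketched minimally. The gate below is a hypothetical stand-in for the KC/Mix-KFCA modules (whose published forms are not given in the abstract): it weights the two branches per pixel via a softmax over their stacked responses.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(cnn_feat, trans_feat):
    """Per-pixel attention gate over two feature maps: the branch with the
    stronger response at a location receives the larger fusion weight."""
    gate = softmax(np.stack([cnn_feat, trans_feat]), axis=0)
    return gate[0] * cnn_feat + gate[1] * trans_feat

a = np.array([[1.0, 0.0]])      # toy CNN feature map
b = np.array([[0.0, 1.0]])      # toy Transformer feature map
fused = attention_fuse(a, b)
```

The fused map always lies between the two branch responses, so the gate acts as a soft, spatially varying selector rather than a fixed average.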

Gong W, Cui Q, Fu S, Wu Y

PubMed paper · Aug 22, 2025
This study explores radiomics and deep learning for predicting pulmonary metastasis in head and neck adenoid cystic carcinoma (ACC), assessing the performance of machine learning (ML) algorithms. The study retrospectively analyzed contrast-enhanced CT imaging data and clinical records from 130 patients with pathologically confirmed ACC in the head and neck region. The dataset was randomly split into training and test sets at a 7:3 ratio. Radiomic features and deep learning-derived features were extracted and subsequently integrated through multi-feature fusion. Z-score normalization was applied to the training and test sets. Hypothesis testing selected significant features, and LASSO regression (5-fold cross-validation) then identified 7 predictive features. Nine machine learning algorithms were employed to build predictive models for ACC pulmonary metastasis: ada, KNN, rf, NB, GLM, LDA, rpart, SVM-RBF, and GBM. Models were trained on the training set and evaluated on the test set using metrics such as recall, sensitivity, PPV, F1-score, precision, prevalence, NPV, specificity, accuracy, detection rate, detection prevalence, and balanced accuracy. Models based on multi-feature fusion of enhanced CT, using KNN, SVM, rpart, GBM, NB, GLM, and LDA, achieved AUC values in the test set of 0.687, 0.863, 0.737, 0.793, 0.763, 0.867, and 0.844, respectively. The rf and ada models showed significant overfitting. Among these, GBM and GLM showed the highest stability in predicting pulmonary metastasis of head and neck ACC. Radiomics and deep learning methods based on enhanced CT imaging can provide effective auxiliary tools for predicting pulmonary metastasis in head and neck ACC patients, showing promising potential for clinical application.
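The front half of such a pipeline (normalization statistics fitted on the training split, then hypothesis-test-based feature filtering) can be sketched as follows. The two-sample t-like statistic is a generic stand-in for the study's exact tests, and the data are synthetic.

```python
import numpy as np

def zscore_fit(train):
    """Normalisation statistics must come from the training split only."""
    return train.mean(axis=0), train.std(axis=0)

def zscore_apply(x, mu, sd):
    return (x - mu) / sd

def t_filter(x, y, k):
    """Keep the k features with the largest two-sample t-like statistic."""
    a, b = x[y == 0], x[y == 1]
    t = np.abs(a.mean(0) - b.mean(0)) / np.sqrt(
        a.var(0) / len(a) + b.var(0) / len(b) + 1e-12)
    return np.argsort(t)[::-1][:k]

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))            # 40 patients, 5 hypothetical features
y = np.array([0] * 20 + [1] * 20)       # metastasis label
X[y == 1, 1] += 5.0                     # feature 1 separates the classes

mu, sd = zscore_fit(X)
selected = t_filter(zscore_apply(X, mu, sd), y, 2)
```

The surviving features would then feed LASSO and the nine classifiers; applying train-set statistics to the test set avoids leaking test information into the models.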

Stefania L. Moroianu, Christian Bluethgen, Pierre Chambon, Mehdi Cherti, Jean-Benoit Delbrouck, Magdalini Paschali, Brandon Price, Judy Gichoya, Jenia Jitsev, Curtis P. Langlotz, Akshay S. Chaudhari

arXiv preprint · Aug 22, 2025
Achieving robust performance and fairness across diverse patient populations remains a challenge in developing clinically deployable deep learning models for diagnostic imaging. Synthetic data generation has emerged as a promising strategy to address limitations in dataset scale and diversity. We introduce RoentGen-v2, a text-to-image diffusion model for chest radiographs that enables fine-grained control over both radiographic findings and patient demographic attributes, including sex, age, and race/ethnicity. RoentGen-v2 is the first model to generate clinically plausible images with demographic conditioning, facilitating the creation of a large, demographically balanced synthetic dataset comprising over 565,000 images. We use this large synthetic dataset to evaluate optimal training pipelines for downstream disease classification models. In contrast to prior work that combines real and synthetic data naively, we propose an improved training strategy that leverages synthetic data for supervised pretraining, followed by fine-tuning on real data. Through extensive evaluation on over 137,000 chest radiographs from five institutions, we demonstrate that synthetic pretraining consistently improves model performance, generalization to out-of-distribution settings, and fairness across demographic subgroups. Across datasets, synthetic pretraining led to a 6.5% accuracy increase in the performance of downstream classification models, compared to a modest 2.7% increase when naively combining real and synthetic data. We observe this performance improvement simultaneously with the reduction of the underdiagnosis fairness gap by 19.3%. These results highlight the potential of synthetic imaging to advance equitable and generalizable medical deep learning under real-world data constraints. We open source our code, trained models, and synthetic dataset at https://github.com/StanfordMIMI/RoentGen-v2 .
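The proposed two-stage strategy (supervised pretraining on a large synthetic set, then fine-tuning on a small real set) can be illustrated with a toy logistic-regression example. This is a sketch under assumed synthetic/real splits, not the paper's classifiers; the point is only the order of the two training phases.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gd(w, X, y, lr=0.1, steps=200):
    """Plain gradient descent on the logistic loss, starting from w."""
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])          # hypothetical ground-truth direction

def make(n, noise):
    X = rng.normal(size=(n, 2))
    y = (X @ w_true + rng.normal(scale=noise, size=n) > 0).astype(float)
    return X, y

X_syn, y_syn = make(2000, 0.5)          # large, noisier "synthetic" cohort
X_real, y_real = make(200, 0.1)         # small "real" cohort

w_pre = gd(np.zeros(2), X_syn, y_syn)   # stage 1: supervised pretraining on synthetic data
w_ft = gd(w_pre, X_real, y_real)        # stage 2: fine-tuning on real data
```

Fine-tuning starts from the pretrained weights rather than mixing the two datasets in one pool, which is the distinction the paper draws against naive real+synthetic training.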

Hélène Corbaz, Anh Nguyen, Victor Schulze-Zachau, Paul Friedrich, Alicia Durrer, Florentin Bieder, Philippe C. Cattin, Marios N Psychogios

arXiv preprint · Aug 22, 2025
Patients undergoing a mechanical thrombectomy procedure usually have a multi-detector CT (MDCT) scan before and after the intervention. The image quality of the flat panel detector CT (FDCT) present in the intervention room is generally much lower than that of an MDCT due to significant artifacts. However, using only FDCT images could improve patient management, as the patient would not need to be moved to the MDCT room. Several studies have evaluated the potential use of FDCT imaging alone and the time that could be saved by acquiring the images before and/or after the intervention only with the FDCT. This study proposes using a denoising diffusion probabilistic model (DDPM) to improve the image quality of FDCT scans, making them comparable to MDCT scans. Clinicians evaluated FDCT, MDCT, and our model's predictions for diagnostic purposes using a questionnaire. The DDPM eliminated most artifacts and improved anatomical visibility without reducing bleeding detection, provided that the input FDCT image quality is not too low. Our code can be found on GitHub.
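The DDPM machinery can be sketched in numpy, assuming the standard noise-prediction parameterisation (the paper's network and noise schedule are not specified here): a forward step that noises a clean image according to q(x_t | x_0), and one reverse sampling step that uses a predicted noise term.

```python
import numpy as np

def forward_noise(x0, t, alphas_cum):
    """q(x_t | x_0): scale the clean image and add Gaussian noise."""
    a = alphas_cum[t]
    eps = np.random.default_rng(t).normal(size=x0.shape)
    return np.sqrt(a) * x0 + np.sqrt(1 - a) * eps, eps

def ddpm_step(xt, eps_hat, t, betas, alphas_cum):
    """One reverse DDPM step; eps_hat is the network's noise prediction."""
    alpha_t = 1 - betas[t]
    mean = (xt - betas[t] / np.sqrt(1 - alphas_cum[t]) * eps_hat) / np.sqrt(alpha_t)
    if t == 0:
        return mean
    z = np.random.default_rng(1000 + t).normal(size=xt.shape)
    return mean + np.sqrt(betas[t]) * z

betas = np.linspace(1e-4, 0.02, 10)     # toy linear schedule
alphas_cum = np.cumprod(1 - betas)
x0 = np.ones((4, 4))                    # stand-in for a clean MDCT-quality patch
xt, eps = forward_noise(x0, 0, alphas_cum)
x0_hat = ddpm_step(xt, eps, 0, betas, alphas_cum)   # perfect prediction recovers x0
```

In the artifact-reduction setting, the learned network replaces the perfect `eps_hat`, and conditioning on the FDCT input steers sampling toward an MDCT-like image.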

Jueqi Wang, Zachary Jacokes, John Darrell Van Horn, Michael C. Schatz, Kevin A. Pelphrey, Archana Venkataraman

arXiv preprint · Aug 22, 2025
While imaging-genetics holds great promise for unraveling the complex interplay between brain structure and genetic variation in neurological disorders, traditional methods are limited to simplistic linear models or to black-box techniques that lack interpretability. In this paper, we present NeuroPathX, an explainable deep learning framework that uses an early fusion strategy powered by cross-attention mechanisms to capture meaningful interactions between structural variations in the brain derived from MRI and established biological pathways derived from genetic data. To enhance interpretability and robustness, we introduce two loss functions over the attention matrix: a sparsity loss that focuses on the most salient interactions, and a pathway similarity loss that enforces consistent representations across the cohort. We validate NeuroPathX on both autism spectrum disorder and Alzheimer's disease. Our results demonstrate that NeuroPathX outperforms competing baseline approaches and reveals biologically plausible associations linked to the disorder. These findings underscore the potential of NeuroPathX to advance our understanding of complex brain disorders. Code is available at https://github.com/jueqiw/NeuroPathX .
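The two attention regularisers can be sketched directly from their descriptions; the exact published forms may differ, so treat these as assumptions. An L1 penalty encourages a few salient brain-pathway interactions, and a similarity penalty pulls each subject's attention matrix toward the cohort mean.

```python
import numpy as np

def sparsity_loss(attn):
    """L1-style penalty: keep only the most salient interactions."""
    return np.abs(attn).mean()

def similarity_loss(attn_batch):
    """Penalise per-subject deviation from the cohort-mean attention map,
    enforcing consistent representations across the cohort."""
    mean_attn = attn_batch.mean(axis=0, keepdims=True)
    return np.mean((attn_batch - mean_attn) ** 2)
```

Both terms would be added, weighted, to the task loss; the sparsity term shrinks uninformative entries toward zero while the similarity term stabilises the attention pattern across subjects.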

Torres-Parga A, Gershanik O, Cardona S, Guerrero J, Gonzalez-Ojeda LM, Cardona JF

PubMed paper · Aug 22, 2025
T1-weighted structural MRI has advanced our understanding of Parkinson's disease (PD), yet its diagnostic utility in clinical settings remains unclear. This work assesses the diagnostic performance of T1-weighted MRI gray matter (GM) metrics in distinguishing PD patients from healthy controls and identifies limitations affecting clinical applicability. A systematic review and meta-analysis were conducted on studies reporting sensitivity, specificity, or AUC for PD classification using T1-weighted MRI. Of 2906 screened records, 26 met the inclusion criteria, and 10 provided sufficient data for quantitative synthesis. Risk of bias and heterogeneity were evaluated, and sensitivity analyses were performed by excluding influential studies. Pooled estimates showed a sensitivity of 0.71 (95% CI: 0.70-0.72), specificity of 0.889 (95% CI: 0.86-0.92), and overall accuracy of 0.909 (95% CI: 0.89-0.93). These metrics improved after excluding outliers, which reduced heterogeneity (I² from 95.7% to 0%). Frequently reported regions showing structural alterations included the substantia nigra, striatum, thalamus, medial temporal cortex, and middle frontal gyrus. However, region-specific diagnostic metrics could not be consistently synthesized due to methodological variability. Machine learning approaches, particularly support vector machines and neural networks, showed enhanced performance with appropriate validation. T1-weighted MRI gray matter metrics demonstrate moderate accuracy in differentiating PD from controls but are not yet suitable as standalone diagnostic tools. Greater methodological standardization, external validation, and integration with clinical and biological data are needed to support precision neurology and clinical translation.
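The pooling of per-study estimates can be illustrated with a fixed-effect inverse-variance sketch (the review's actual synthesis model is not stated in the abstract, so this is an assumption): each study's estimate is weighted by the inverse of its squared standard error, and a 95% CI follows from the pooled standard error.

```python
import numpy as np

def pooled_estimate(values, ses):
    """Fixed-effect inverse-variance pooling of per-study estimates."""
    values, ses = np.asarray(values), np.asarray(ses)
    w = 1.0 / ses ** 2                       # precision weights
    est = np.sum(w * values) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, (est - 1.96 * se, est + 1.96 * se)

# Hypothetical per-study sensitivities and standard errors
est, ci = pooled_estimate([0.70, 0.72], [0.01, 0.01])
```

With equal standard errors the pooled value is the plain mean; more precise studies pull the estimate toward themselves, which is why excluding an influential outlier can shift both the pooled value and the heterogeneity statistic.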

Guo W, Bhagavathula KB, Adanty K, Rabey KN, Ouellet S, Romanyk DL, Westover L, Hogan JD

PubMed paper · Aug 22, 2025
With the development of increasingly detailed imaging techniques, there is a need to update the methodology and evaluation criteria for bone analysis to understand the influence of bone microarchitecture on mechanical response. The present study aims to develop a machine learning-based approach to investigate the link between the morphology of the human calvarium and its mechanical response under quasi-static uniaxial compression. Micro-computed tomography at a resolution of 18 μm is used to capture the microstructure of male (n=5) and female (n=5) formalin-fixed calvarium specimens from the frontal and parietal regions. Image processing-based machine learning methods using convolutional neural networks are developed to isolate and calculate specific morphometric properties, such as porosity, trabecular thickness, and trabecular spacing. An ensemble method using gradient boosted decision trees (XGBoost) is then used to predict mechanical strength from the morphological results, finding that the mean and minimum porosity of the diploë are the most relevant factors for the mechanical strength of cranial bones under the studied conditions. Overall, this study provides new tools that can predict the mechanical response of the human calvarium a priori. In addition, the quantitative morphology of the human calvarium can be used as input data in finite element models and can contribute to efforts to develop cranial simulant materials.
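The gradient-boosting step can be illustrated with a minimal boosted-stumps regressor in plain numpy (a stand-in for XGBoost, with a hypothetical monotone porosity-strength relation as data): each round fits a one-split tree to the current residuals and adds a shrunken copy of it to the ensemble.

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-feature threshold split minimising squared error."""
    best = None
    for j in range(x.shape[1]):
        for thr in np.unique(x[:, j]):
            left = x[:, j] <= thr
            if left.all() or (~left).all():
                continue
            pred = np.where(left, residual[left].mean(), residual[~left].mean())
            err = ((residual - pred) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, thr, residual[left].mean(), residual[~left].mean())
    return best[1:]

def boost(x, y, rounds=20, lr=0.3):
    """Gradient boosting with stumps: repeatedly fit the residuals."""
    pred, stumps = np.full(len(y), y.mean()), []
    for _ in range(rounds):
        j, thr, lv, rv = fit_stump(x, y - pred)
        pred += lr * np.where(x[:, j] <= thr, lv, rv)
        stumps.append((j, thr, lv, rv))
    return y.mean(), stumps, lr

def predict(model, x):
    base, stumps, lr = model
    pred = np.full(len(x), base)
    for j, thr, lv, rv in stumps:
        pred += lr * np.where(x[:, j] <= thr, lv, rv)
    return pred

rng = np.random.default_rng(2)
porosity = rng.uniform(0.1, 0.6, size=(40, 1))   # hypothetical diploë porosity
strength = 10.0 - 12.0 * porosity[:, 0]          # assumed monotone relation
model = boost(porosity, strength)
mse = np.mean((predict(model, porosity) - strength) ** 2)
```

A side benefit of tree ensembles mirrored in the study's conclusion is feature attribution: the features chosen most often by the stumps (here, porosity by construction) are the ones driving the prediction.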

Goto T, Igarashi R, Cho I, Numata K, Ishino Y, Kitamura Y, Noguchi M, Hirai T, Waki K

PubMed paper · Aug 21, 2025
Fusion imaging requires initial registration of ultrasound (US) images using computed tomography (CT) or magnetic resonance (MR) imaging. The sweep position of the US probe depends on the procedure; for instance, the liver may be observed from intercostal, subcostal, or epigastric positions. However, no well-established method for automatic initial registration accommodates all positions. We propose a global rigid 3D-3D registration technique for automatic initial registration that is independent of the US sweep position. The proposed technique uses the liver surface and vessels, such as the portal and hepatic veins, as landmarks. The algorithm segments the liver region and vessels from both US and CT/MR images using deep learning models. From these outputs, point clouds of the liver surface and vessel centerlines are extracted, and the rigid transformation parameters are estimated through point cloud registration using a RANSAC-based algorithm. To enhance speed and robustness, the RANSAC procedure incorporates constraints on the possible range of each registration parameter based on the relative position and orientation of the probe and the body surface. Registration accuracy was quantitatively evaluated on clinical data from 80 patients, including US images taken from the intercostal, subcostal, and epigastric regions. The registration errors were 7.3 ± 3.2, 9.3 ± 3.7, and 8.4 ± 3.9 mm for the intercostal, subcostal, and epigastric regions, respectively. The proposed global rigid registration technique fully automates the complex manual registration required for liver fusion imaging and enhances the workflow efficiency of physicians and sonographers.
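The RANSAC-based rigid estimation with constrained parameter ranges can be sketched in 2D; the angle bound below is a simplified stand-in for the paper's probe-pose constraints, and the minimal solver is the standard Kabsch fit on a sampled correspondence triple.

```python
import numpy as np

def rigid_from_pairs(p, q):
    """Kabsch: best-fit rotation R and translation t with q ~ R @ p + t."""
    pc, qc = p - p.mean(0), q - q.mean(0)
    u, _, vt = np.linalg.svd(pc.T @ qc)
    r = (u @ vt).T
    if np.linalg.det(r) < 0:            # guard against reflections
        u[:, -1] *= -1
        r = (u @ vt).T
    return r, q.mean(0) - r @ p.mean(0)

def ransac_rigid(p, q, max_angle_deg=30, iters=200, tol=0.05, seed=0):
    """RANSAC over minimal samples; hypotheses whose rotation falls outside
    the plausible range (the parameter constraint) are rejected early."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(len(p), size=3, replace=False)
        r, t = rigid_from_pairs(p[idx], q[idx])
        angle = np.degrees(np.arctan2(r[1, 0], r[0, 0]))
        if abs(angle) > max_angle_deg:
            continue                    # constraint: skip implausible poses
        inliers = np.sum(np.linalg.norm((p @ r.T + t) - q, axis=1) < tol)
        if inliers > best_inliers:
            best, best_inliers = (r, t), inliers
    return best

rng = np.random.default_rng(3)
p = rng.normal(size=(20, 2))                     # toy US landmark points
theta = np.radians(10)
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
q = p @ rot.T + np.array([0.3, -0.2])            # toy CT/MR landmark points
q[0] += 5.0                                      # one gross outlier
r, t = ransac_rigid(p, q)
angle = np.degrees(np.arctan2(r[1, 0], r[0, 0]))
```

Rejecting out-of-range rotations before scoring is what makes the constrained search faster and more robust: implausible hypotheses never reach the expensive inlier count.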
