
A Tutorial on MRI Reconstruction: From Modern Methods to Clinical Implications

Tolga Çukur, Salman U. H. Dar, Valiyeh Ansarian Nezhad, Yohan Jun, Tae Hyung Kim, Shohei Fujita, Berkin Bilgic

arXiv preprint · Jul 22 2025
MRI is an indispensable clinical tool, offering a rich variety of tissue contrasts to support broad diagnostic and research applications. Clinical exams routinely acquire multiple structural sequences that provide complementary information for differential diagnosis, while research protocols often incorporate advanced functional, diffusion, spectroscopic, and relaxometry sequences to capture multidimensional insights into tissue structure and composition. However, these capabilities come at the cost of prolonged scan times, which reduce patient throughput, increase susceptibility to motion artifacts, and may require trade-offs in image quality or diagnostic scope. Over the last two decades, advances in image reconstruction algorithms--alongside improvements in hardware and pulse sequence design--have made it possible to accelerate acquisitions while preserving diagnostic quality. Central to this progress is the ability to incorporate prior information to regularize the solutions to the reconstruction problem. In this tutorial, we overview the basics of MRI reconstruction and highlight state-of-the-art approaches, beginning with classical methods that rely on explicit hand-crafted priors, and then turning to deep learning methods that leverage a combination of learned and crafted priors to further push the performance envelope. We also explore the translational aspects and eventual clinical implications of these methods. We conclude by discussing future directions to address remaining challenges in MRI reconstruction. The tutorial is accompanied by a Python toolbox (https://github.com/tutorial-MRI-recon/tutorial) to demonstrate select methods discussed in the article.
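Below is a minimal, self-contained sketch of the core idea of regularized reconstruction discussed in this tutorial: recovering an image from undersampled k-space by combining a data-consistency term with a hand-crafted prior. The gradient-descent solver, random mask, and Tikhonov prior are illustrative choices and are not taken from the accompanying toolbox.

```python
# Minimal sketch of regularized MRI reconstruction (illustrative, not the authors' toolbox):
# recover an image x from undersampled k-space y by solving
#   argmin_x ||M F x - y||_2^2 + lam * ||x||_2^2
# with plain gradient descent. M is a sampling mask, F the 2D FFT, lam a
# hand-crafted (Tikhonov) prior weight.
import numpy as np

def recon_tikhonov(y, mask, lam=0.01, n_iter=200, step=0.5):
    """y: undersampled k-space (2D complex), mask: binary sampling mask."""
    x = np.fft.ifft2(y)                       # zero-filled initial guess
    for _ in range(n_iter):
        residual = mask * np.fft.fft2(x) - y  # data-consistency error in k-space
        grad = np.fft.ifft2(mask * residual) + lam * x
        x = x - step * grad
    return x

# Toy usage: retrospectively undersample a random "image".
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
mask = rng.random((64, 64)) < 0.4             # keep ~40% of k-space
y = mask * np.fft.fft2(img)
x_hat = recon_tikhonov(y, mask)
print(np.linalg.norm(x_hat.real - img) / np.linalg.norm(img))  # relative error
```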

AI-based body composition analysis of CT data has the potential to predict disease course in patients with multiple myeloma.

Wegner F, Sieren MM, Grasshoff H, Berkel L, Rowold C, Röttgerding MP, Khalil S, Mogadas S, Nensa F, Hosch R, Riemekasten G, Hamm AF, von Bubnoff N, Barkhausen J, Kloeckner R, Khandanpour C, Leitner T

PubMed · Jul 21 2025
The aim of this study was to evaluate the benefit of a volumetric AI-based body composition analysis (BCA) algorithm in multiple myeloma (MM). To this end, a retrospective monocentric cohort of 91 MM patients was analyzed. The BCA algorithm, powered by a convolutional neural network, quantified tissue compartments and bone density based on routine CT scans. Correlations between BCA data and demographic/clinical parameters were investigated. BCA endotypes were identified, and survival rates were compared between BCA-derived patient clusters. Patients with high-risk cytogenetics exhibited elevated cardiac marker index values. Across Revised International Staging System (R-ISS) categories, BCA parameters did not show significant differences. However, both subcutaneous and total adipose tissue volumes were significantly lower in patients with progressive disease or death during follow-up compared to patients without progression. Cluster analysis revealed two distinct BCA endotypes, with one group displaying significantly better survival. Furthermore, a combined model composed of clinical parameters and BCA data demonstrated higher predictive capability for disease progression than models based solely on high-risk cytogenetics or R-ISS. These findings underscore the potential of BCA to improve patient stratification and refine prognostic models in MM.
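As a hedged illustration of the clustering-plus-survival workflow sketched in this abstract, the following snippet clusters synthetic BCA-style features into two endotypes and compares survival between them with a log-rank test. The feature names, cohort size, and data are made up for illustration; this is not the authors' code.

```python
# Illustrative clustering of body-composition features into endotypes, followed by
# a survival comparison between the resulting clusters (synthetic data).
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "subcutaneous_fat_ml": rng.normal(8000, 2000, 91),
    "total_fat_ml":        rng.normal(20000, 5000, 91),
    "muscle_ml":           rng.normal(15000, 3000, 91),
    "bone_density_hu":     rng.normal(150, 30, 91),
    "followup_months":     rng.uniform(1, 60, 91),
    "event":               rng.integers(0, 2, 91),   # progression/death indicator
})

bca_cols = ["subcutaneous_fat_ml", "total_fat_ml", "muscle_ml", "bone_density_hu"]
df["endotype"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(df[bca_cols]))

g0, g1 = df[df.endotype == 0], df[df.endotype == 1]
res = logrank_test(g0.followup_months, g1.followup_months, g0.event, g1.event)
print(f"log-rank p-value between BCA endotypes: {res.p_value:.3f}")
```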

Software architecture and manual for novel versatile CT image analysis toolbox -- AnatomyArchive

Lei Xu, Torkel B Brismar

arXiv preprint · Jul 18 2025
We have developed a novel CT image analysis package named AnatomyArchive, built on top of the recent full-body segmentation model TotalSegmentator. It provides automatic target volume selection and deselection capabilities according to user-configured anatomies serving as volumetric upper and lower bounds. It has a knowledge-graph-based, time-efficient tool for anatomy segmentation mask management and medical image database maintenance. AnatomyArchive enables automatic body volume cropping, as well as automatic arm detection and exclusion, for more precise body composition analysis in both 2D and 3D formats. It provides robust voxel-based radiomic feature extraction, feature visualization, and an integrated toolchain for statistical tests and analysis. A Python-based, GPU-accelerated, nearly photo-realistic, segmentation-integrated composite cinematic rendering module is also included. We present here its software architecture design, illustrate its workflow and the working principles of its algorithms, and provide a few examples of how the software can be used to assist the development of modern machine learning models. Open-source code will be released at https://github.com/lxu-medai/AnatomyArchive for research and educational purposes only.
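The snippet below sketches, in generic numpy, what cropping to user-configured volumetric upper and lower bounds could look like in practice: restricting a CT volume to the axial range spanned by two segmentation masks. The function and anatomy names are hypothetical and do not reflect the AnatomyArchive API.

```python
# Illustrative sketch of the "volumetric upper-/lower-bound" cropping idea
# (generic numpy code; not the AnatomyArchive API).
import numpy as np

def crop_between_anatomies(ct, masks, upper_label, lower_label):
    """Crop a CT volume (z, y, x) to the axial range spanned by two anatomy masks.

    masks: dict mapping anatomy name -> boolean volume of the same shape as ct.
    upper_label / lower_label: user-configured anatomies defining the bounds.
    """
    z_upper = np.where(masks[upper_label].any(axis=(1, 2)))[0]
    z_lower = np.where(masks[lower_label].any(axis=(1, 2)))[0]
    z_min = min(z_upper.min(), z_lower.min())
    z_max = max(z_upper.max(), z_lower.max())
    return ct[z_min:z_max + 1]

# Toy usage with hypothetical "liver" and "pelvis" masks.
ct = np.zeros((100, 64, 64), dtype=np.float32)
masks = {"liver": np.zeros_like(ct, dtype=bool), "pelvis": np.zeros_like(ct, dtype=bool)}
masks["liver"][60:75], masks["pelvis"][20:35] = True, True
print(crop_between_anatomies(ct, masks, "liver", "pelvis").shape)  # (55, 64, 64)
```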

Imaging biomarkers of ageing: a review of artificial intelligence-based approaches for age estimation.

Haugg F, Lee G, He J, Johnson J, Zapaishchykova A, Bitterman DS, Kann BH, Aerts HJWL, Mak RH

PubMed · Jul 18 2025
Chronological age, although commonly used in clinical practice, fails to capture individual variations in rates of ageing and physiological decline. Recent advances in artificial intelligence (AI) have transformed the estimation of biological age using various imaging techniques. This Review consolidates AI developments in age prediction across brain, chest, abdominal, bone, and facial imaging using diverse methods, including MRI, CT, x-ray, and photographs. The difference between predicted and chronological age, often referred to as age deviation, is a promising biomarker for assessing health status and predicting disease risk. In this Review, we highlight consistent associations between age deviation and various health outcomes, including mortality risk, cognitive decline, and cardiovascular prognosis. We also discuss the technical challenges in developing unbiased models and ethical considerations for clinical application. This Review highlights the potential of AI-based age estimation in personalised medicine as it offers a non-invasive, interpretable biomarker that could transform health risk assessment and guide preventive interventions.
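As a small worked illustration of the age-deviation biomarker described in this Review, the snippet below computes the predicted-minus-chronological age gap on synthetic data and tests its association with a stand-in outcome score; all numbers, including the "model output", are simulated.

```python
# Tiny illustrative computation of the "age deviation" biomarker (synthetic data;
# any imaging-based model producing predicted_age could stand in here).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
chronological_age = rng.uniform(40, 80, 200)
predicted_age = chronological_age + rng.normal(0, 4, 200)      # stand-in model output
frailty_score = 0.1 * (predicted_age - chronological_age) + rng.normal(0, 1, 200)

age_deviation = predicted_age - chronological_age               # a.k.a. age gap
r, p = pearsonr(age_deviation, frailty_score)
print(f"association between age deviation and outcome: r={r:.2f}, p={p:.3g}")
```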

Artificial Intelligence for Tumor [¹⁸F]FDG PET Imaging: Advancements and Future Trends - Part II.

Safarian A, Mirshahvalad SA, Farbod A, Jung T, Nasrollahi H, Schweighofer-Zwink G, Rendl G, Pirich C, Vali R, Beheshti M

PubMed · Jul 18 2025
The integration of artificial intelligence (AI) into [¹⁸F]FDG PET/CT imaging continues to expand, offering new opportunities for more precise, consistent, and personalized oncologic evaluations. Building on the foundation established in Part I, this second part explores AI-driven innovations across a broader range of malignancies, including hematological, genitourinary, melanoma, and central nervous system tumors, as well as applications of AI in pediatric oncology. Radiomics and machine learning algorithms are being explored for their ability to enhance diagnostic accuracy, reduce interobserver variability, and inform complex clinical decision-making, such as identifying patients with refractory lymphoma, assessing pseudoprogression in melanoma, or predicting brain metastases in extracranial malignancies. Additionally, AI-assisted lesion segmentation, quantitative feature extraction, and heterogeneity analysis are contributing to improved prediction of treatment response and long-term survival outcomes. Despite encouraging results, variability in imaging protocols, segmentation methods, and validation strategies across studies continues to challenge reproducibility and remains a barrier to clinical translation. This review evaluates recent advancements of AI and its current clinical applications, and emphasizes the need for robust standardization and prospective validation to ensure the reproducibility and generalizability of AI tools in PET imaging and clinical practice.

Multi-scale machine learning model predicts muscle and functional disease progression.

Blemker SS, Riem L, DuCharme O, Pinette M, Costanzo KE, Weatherley E, Statland J, Tapscott SJ, Wang LH, Shaw DWW, Song X, Leung D, Friedman SD

PubMed · Jul 16 2025
Facioscapulohumeral muscular dystrophy (FSHD) is a genetic neuromuscular disorder characterized by progressive muscle degeneration with substantial variability in severity and progression patterns. FSHD is a highly heterogeneous disease; however, current clinical metrics used for tracking disease progression lack sensitivity for personalized assessment, which greatly limits the design and execution of clinical trials. This study introduces a multi-scale machine learning framework leveraging whole-body magnetic resonance imaging (MRI) and clinical data to predict regional, muscle, joint, and functional progression in FSHD. The goal of this work is to create a 'digital twin' of individual FSHD patients that can be leveraged in clinical trials. Using a combined dataset of over 100 patients from seven studies, baseline MRI-derived metrics, including fat fraction, lean muscle volume, and fat spatial heterogeneity, were integrated with clinical and functional measures. A three-stage random forest model was developed to predict annualized changes in muscle composition and a functional outcome, the timed up-and-go (TUG) test. All model stages showed strong predictive performance in separate holdout datasets. After training, the models predicted fat fraction change with a root mean square error (RMSE) of 2.16% and lean volume change with an RMSE of 8.1 ml in a holdout testing dataset. Feature analysis revealed that metrics of fat heterogeneity within muscle predict muscle-level progression. The stage 3 model, which combined functional muscle groups, predicted change in TUG with an RMSE of 0.6 s in the holdout testing dataset. This study demonstrates that machine learning models incorporating individual muscle and performance data can effectively predict MRI-based disease progression and functional performance on complex tasks, addressing the heterogeneity and nonlinearity inherent in FSHD. Further studies incorporating larger longitudinal cohorts, as well as comprehensive clinical and functional measures, will allow for expanding and refining this model. As many neuromuscular diseases are characterized by variability and heterogeneity similar to FSHD, such approaches have broad applicability.
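A hedged sketch of one stage of such a framework: a random forest regressor mapping baseline MRI metrics to annualized fat-fraction change, evaluated by RMSE on a holdout split. The features, data, and hyperparameters are synthetic placeholders, not the study's model.

```python
# Illustrative single-stage regression: baseline MRI metrics -> annualized change,
# scored by holdout RMSE (synthetic data and simplified feature names).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
n = 300  # muscle-level samples pooled across patients
X = np.column_stack([
    rng.uniform(0, 60, n),      # baseline fat fraction (%)
    rng.uniform(50, 400, n),    # baseline lean volume (ml)
    rng.uniform(0, 1, n),       # fat spatial heterogeneity (arbitrary units)
])
y = 0.05 * X[:, 0] + 3.0 * X[:, 2] + rng.normal(0, 1, n)  # annual fat-fraction change (%)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
print(f"holdout RMSE for predicted fat-fraction change: {rmse:.2f} %")
```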

CT-ScanGaze: A Dataset and Baselines for 3D Volumetric Scanpath Modeling

Trong-Thang Pham, Akash Awasthi, Saba Khan, Esteban Duran Marti, Tien-Phat Nguyen, Khoa Vo, Minh Tran, Ngoc Son Nguyen, Cuong Tran Van, Yuki Ikebe, Anh Totti Nguyen, Anh Nguyen, Zhigang Deng, Carol C. Wu, Hien Van Nguyen, Ngan Le

arXiv preprint · Jul 16 2025
Understanding radiologists' eye movement during Computed Tomography (CT) reading is crucial for developing effective interpretable computer-aided diagnosis systems. However, CT research in this area has been limited by the lack of publicly available eye-tracking datasets and the three-dimensional complexity of CT volumes. To address these challenges, we present the first publicly available eye gaze dataset on CT, called CT-ScanGaze. Then, we introduce CT-Searcher, a novel 3D scanpath predictor designed specifically to process CT volumes and generate radiologist-like 3D fixation sequences, overcoming the limitations of current scanpath predictors that only handle 2D inputs. Since deep learning models benefit from a pretraining step, we develop a pipeline that converts existing 2D gaze datasets into 3D gaze data to pretrain CT-Searcher. Through both qualitative and quantitative evaluations on CT-ScanGaze, we demonstrate the effectiveness of our approach and provide a comprehensive assessment framework for 3D scanpath prediction in medical imaging.

Artificial intelligence-based diabetes risk prediction from longitudinal DXA bone measurements.

Khan S, Shah Z

PubMed · Jul 16 2025
Diabetes mellitus (DM) is a serious global health concern that poses a significant threat to human life. Beyond its direct impact, diabetes substantially increases the risk of developing severe complications such as hypertension, cardiovascular disease, and musculoskeletal disorders like arthritis and osteoporosis. The field of diabetes classification has advanced significantly with the use of diverse data modalities and sophisticated tools to identify individuals or groups as diabetic. However, the task of predicting diabetes prior to its onset, particularly through the use of longitudinal multi-modal data, remains relatively underexplored. To better understand the risk factors associated with diabetes development among Qatari adults, this longitudinal research aims to investigate dual-energy X-ray absorptiometry (DXA)-derived whole-body and regional bone composition measures as potential predictors of diabetes onset. We conducted a retrospective case-control study with a total of 1,382 participants, comprising 725 males (cases: 146, controls: 579) and 657 females (cases: 133, controls: 524). We excluded participants with incomplete data points. To handle class imbalance, we augmented our data using the Synthetic Minority Over-sampling Technique (SMOTE) and SMOTEENN (SMOTE with Edited Nearest Neighbors), and to further investigate the association between bone data features and diabetes status, we employed ANOVA. For diabetes onset prediction, we employed both conventional and deep learning (DL) models to predict risk factors associated with diabetes in Qatari adults. We used SHAP and probabilistic methods to investigate the association of identified risk factors with diabetes. During experimental analysis, we found that bone mineral density (BMD) and bone mineral content (BMC) in the hip, femoral neck, trochanteric area, and lumbar spine showed an upward trend in diabetic patients with [Formula: see text]. Meanwhile, we found that patients with abnormal glucose metabolism had increased Ward's area BMD and BMC with lower Z-scores compared to healthy participants. This suggests that the diabetic group in the cohort had superior bone health compared with the control group, as they exhibited higher BMD, muscle mass, and bone area across most body regions. Moreover, in the age-group analysis, we found that the diabetes prediction rate was higher among healthy participants in the younger age group (20-40 years), whereas model predictions became more accurate for diabetic participants as age increased, especially in the older age group (56-69 years). We also observed that male participants demonstrated a higher susceptibility to diabetes onset than female participants. Shallow models outperformed the DL models, achieving higher accuracy (91.08%), AUROC (96%), and recall (91%). This approach utilizing DXA scans highlights significant potential for the rapid and minimally invasive early detection of diabetes.
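The snippet below sketches the class-imbalance handling step described in this abstract: SMOTEENN resampling of an imbalanced "diabetes onset" label before fitting a shallow model and scoring AUROC on a holdout set. Data and feature names are synthetic stand-ins for DXA-derived measures, not the study's pipeline.

```python
# Illustrative SMOTEENN resampling plus a shallow classifier on synthetic DXA-like features.
import numpy as np
from imblearn.combine import SMOTEENN
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 1000
X = np.column_stack([
    rng.normal(1.0, 0.15, n),   # hip BMD (g/cm^2)
    rng.normal(30, 8, n),       # hip BMC (g)
    rng.normal(0.9, 0.12, n),   # lumbar-spine BMD (g/cm^2)
])
logit = 8 * (X[:, 0] - 1.0) - 2.0                       # imbalanced minority class
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.25, random_state=0)
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)  # oversample + clean

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_res, y_res)
print(f"holdout AUROC: {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.3f}")
```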

An interpretable machine learning model for predicting bone marrow invasion in patients with lymphoma via ¹⁸F-FDG PET/CT: a multicenter study.

Zhu X, Lu D, Wu Y, Lu Y, He L, Deng Y, Mu X, Fu W

PubMed · Jul 15 2025
Accurate identification of bone marrow invasion (BMI) is critical for determining the prognosis of and treatment strategies for lymphoma. Although bone marrow biopsy (BMB) is the current gold standard, its invasive nature and sampling errors highlight the necessity for noninvasive alternatives. We aimed to develop and validate an interpretable machine learning model that integrates clinical data, ¹⁸F-fluorodeoxyglucose positron emission tomography/computed tomography (¹⁸F-FDG PET/CT) parameters, radiomic features, and deep learning features to predict BMI in lymphoma patients. We included 159 newly diagnosed lymphoma patients (118 from Center I and 41 from Center II), excluding those with prior treatments, incomplete data, or under 18 years of age. Data from Center I were randomly allocated to training (n = 94) and internal test (n = 24) sets; Center II served as an external validation set (n = 41). Clinical parameters, PET/CT features, radiomic characteristics, and deep learning features were comprehensively analyzed and integrated into machine learning models. Model interpretability was elucidated via Shapley Additive exPlanations (SHAP). Additionally, a comparative diagnostic study evaluated reader performance with and without model assistance. BMI was confirmed in 70 (44%) patients. The key clinical predictors included B symptoms and platelet count. Among the tested models, the ExtraTrees classifier achieved the best performance. For external validation, the combined model (clinical + PET/CT + radiomics + deep learning) achieved an area under the receiver operating characteristic curve (AUC) of 0.886, outperforming models that use only clinical (AUC 0.798), radiomic (AUC 0.708), or deep learning features (AUC 0.662). SHAP analysis revealed that PET radiomic features (especially PET_lbp_3D_m1_glcm_DependenceEntropy), platelet count, and B symptoms were significant predictors of BMI. Model assistance significantly enhanced junior reader performance (AUC improved from 0.663 to 0.818, p = 0.03) and improved senior reader accuracy, although not significantly (AUC 0.768 to 0.867, p = 0.10). Our interpretable machine learning model, which integrates clinical, imaging, radiomic, and deep learning features, demonstrated robust BMI prediction performance and notably enhanced physician diagnostic accuracy. These findings underscore the clinical potential of interpretable AI to complement medical expertise and potentially reduce the reliance on invasive BMB for lymphoma staging.
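As a hedged illustration of the interpretable-model idea, the following snippet fits an ExtraTrees classifier on a small mixed clinical/imaging feature table and ranks features by mean absolute SHAP value. The data are synthetic and the feature names are simplified stand-ins for those reported above.

```python
# Illustrative ExtraTrees classifier with SHAP-based global feature ranking (synthetic data).
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(5)
n = 159
X = pd.DataFrame({
    "B_symptoms":                rng.integers(0, 2, n),
    "platelet_count_1e9_L":      rng.normal(230, 70, n),
    "PET_glcm_DependenceEntropy": rng.normal(6.0, 1.0, n),
    "SUVmax":                    rng.normal(12, 4, n),
})
y = (0.8 * X["B_symptoms"] - 0.01 * (X["platelet_count_1e9_L"] - 230)
     + 0.5 * (X["PET_glcm_DependenceEntropy"] - 6) + rng.normal(0, 1, n) > 0.5).astype(int)

clf = ExtraTreesClassifier(n_estimators=300, random_state=0).fit(X, y)
sv = shap.TreeExplainer(clf).shap_values(X)   # per-sample, per-feature contributions
if isinstance(sv, list):                      # older shap: one array per class
    sv = sv[1]
elif sv.ndim == 3:                            # newer shap: (samples, features, classes)
    sv = sv[..., 1]
importance = np.abs(sv).mean(axis=0)          # mean |SHAP| as a global ranking
print(dict(zip(X.columns, np.round(importance, 3))))
```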

3D Wavelet Latent Diffusion Model for Whole-Body MR-to-CT Modality Translation

Jiaxu Zheng, Meiman He, Xuhui Tang, Xiong Wang, Tuoyu Cao, Tianyi Zeng, Lichi Zhang, Chenyu You

arXiv preprint · Jul 14 2025
Magnetic Resonance (MR) imaging plays an essential role in contemporary clinical diagnostics. It is increasingly integrated into advanced therapeutic workflows, such as hybrid Positron Emission Tomography/Magnetic Resonance (PET/MR) imaging and MR-only radiation therapy. These integrated approaches are critically dependent on accurate estimation of radiation attenuation, which is typically facilitated by synthesizing Computed Tomography (CT) images from MR scans to generate attenuation maps. However, existing MR-to-CT synthesis methods for whole-body imaging often suffer from poor spatial alignment between the generated CT and input MR images, and insufficient image quality for reliable use in downstream clinical tasks. In this paper, we present a novel 3D Wavelet Latent Diffusion Model (3D-WLDM) that addresses these limitations by performing modality translation in a learned latent space. By incorporating a Wavelet Residual Module into the encoder-decoder architecture, we enhance the capture and reconstruction of fine-scale features across image and latent spaces. To preserve anatomical integrity during the diffusion process, we disentangle structural and modality-specific characteristics and anchor the structural component to prevent warping. We also introduce a Dual Skip Connection Attention mechanism within the diffusion model, enabling the generation of high-resolution CT images with improved representation of bony structures and soft-tissue contrast.
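For readers unfamiliar with the wavelet side of this approach, the snippet below shows a plain one-level 3D discrete wavelet decomposition and reconstruction of a toy volume with PyWavelets; it illustrates the subband structure such models build on, not the authors' 3D-WLDM.

```python
# One-level 3D wavelet decomposition/reconstruction of a toy volume with PyWavelets.
import numpy as np
import pywt

volume = np.random.default_rng(6).standard_normal((32, 32, 32)).astype(np.float32)

# 'aaa' is the low-frequency approximation; the other seven subbands carry
# fine-scale detail along different axis combinations.
coeffs = pywt.dwtn(volume, wavelet="haar")
print(sorted(coeffs.keys()))            # ['aaa', 'aad', ..., 'ddd'], each (16, 16, 16)

# Perfect reconstruction from the subbands (up to float tolerance).
recon = pywt.idwtn(coeffs, wavelet="haar")
print(np.allclose(recon, volume, atol=1e-5))
```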