Page 7 of 14132 results

A Workflow-Efficient Approach to Pre- and Post-Operative Assessment of Weight-Bearing Three-Dimensional Knee Kinematics.

Banks SA, Yildirim G, Jachode G, Cox J, Anderson O, Jensen A, Cole JD, Kessler O

PubMed · Jul 1 2025
Knee kinematics during daily activities reflect disease severity preoperatively and are associated with clinical outcomes after total knee arthroplasty (TKA). It is widely believed that measured kinematics would be useful for preoperative planning and postoperative assessment. Despite decades-long interest in measuring three-dimensional (3D) knee kinematics, no methods are available for routine, practical clinical examinations. We report a clinically practical method utilizing machine-learning-enhanced software and upgraded C-arm fluoroscopy for the accurate and time-efficient measurement of pre-TKA and post-TKA 3D dynamic knee kinematics. Using a common C-arm with an upgraded detector and software, we performed an 8-s horizontal sweeping pulsed fluoroscopic scan of the weight-bearing knee joint. The patient's knee was then imaged using pulsed C-arm fluoroscopy while performing standing, kneeling, squatting, stair, chair, and gait motion activities. We used limited-arc cone-beam reconstruction methods to create 3D models of the femur and tibia/fibula bones with implants, which can then be used to perform model-image registration to quantify the 3D knee kinematics. The proposed protocol can be accomplished by an individual radiology technician in ten minutes and does not require additional equipment beyond a step and stool. The image analysis can be performed by a computer onboard the upgraded C-arm or in the cloud, before loading the examination results into the Picture Archiving and Communication System and Electronic Medical Record systems. Weight-bearing kinematics affects knee function pre- and post-TKA, yet making such measurements has long been exclusively the domain of researchers. We present an approach that leverages common, but digitally upgraded, imaging hardware and software to implement an efficient examination protocol for accurately assessing 3D knee kinematics.
With these capabilities, it will be possible to include dynamic 3D knee kinematics as a component of the routine clinical workup for patients who have diseased or replaced knees.
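Once model-image registration recovers the femur and tibia poses, joint kinematics reduce to relative rigid-body transforms between the two bones. A minimal sketch of extracting a knee angle from two orientation matrices (all names and values hypothetical, not the authors' software):

```python
import math

def rotz(deg):
    """3x3 rotation matrix about the z-axis."""
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def transpose(m):
    return [list(col) for col in zip(*m)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def flexion_angle(r_femur, r_tibia):
    """Relative rotation angle (degrees) between two bone orientations.

    Uses the trace identity trace(R_rel) = 1 + 2*cos(theta).
    """
    r_rel = matmul(transpose(r_femur), r_tibia)
    tr = r_rel[0][0] + r_rel[1][1] + r_rel[2][2]
    # Clamp for numerical safety before acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0))))

# Femur at 10 deg and tibia at 100 deg about the same axis -> 90 deg of flexion.
angle = flexion_angle(rotz(10.0), rotz(100.0))
```

In practice the registration yields full 6-DOF poses per fluoroscopy frame, and the same relative-transform decomposition produces flexion, rotation, and translation curves over the motion activity.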

Machine-Learning-Based Computed Tomography Radiomics Regression Model for Predicting Pulmonary Function.

Wang W, Sun Y, Wu R, Jin L, Shi Z, Tuersun B, Yang S, Li M

PubMed · Jul 1 2025
Chest computed tomography (CT) radiomics can be utilized for categorical predictions; however, models predicting pulmonary function indices directly are lacking. This study aimed to develop machine-learning-based regression models to predict pulmonary function using chest CT radiomics. This retrospective study enrolled patients who underwent chest CT and pulmonary function tests between January 2018 and April 2024. Machine-learning regression models were constructed and validated to predict pulmonary function indices, including forced vital capacity (FVC) and forced expiratory volume in 1 s (FEV₁). The models incorporated radiomics of the whole lung and clinical features. Model performance was evaluated using mean absolute error, mean squared error, root mean squared error, concordance correlation coefficient (CCC), and R-squared (R²) value and compared to spirometry results. Individual explanations of the models' decisions were analyzed using an explainable approach based on SHapley Additive exPlanations. In total, 1585 cases were included in the analysis, 102 of them external. Across the training, validation, test, and external test sets, the combined model consistently achieved the best performance in the regression task for predicting FVC (e.g., external test set: CCC, 0.745 [95% confidence interval 0.642-0.818]; R², 0.601 [0.453-0.707]) and FEV₁ (e.g., external test set: CCC, 0.744 [0.633-0.824]; R², 0.527 [0.298-0.675]). Age, sex, and emphysema were important factors for both FVC and FEV₁, while distinct radiomics features contributed to each. Whole-lung-based radiomics features can be used to construct regression models to improve pulmonary function prediction.
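The concordance correlation coefficient reported above rewards both correlation and calibration: unlike Pearson's r, a systematic offset lowers it. A stdlib-only sketch of Lin's CCC (the FVC values are hypothetical, not study data):

```python
def concordance_ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient between two sequences."""
    n = len(y_true)
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    vt = sum((y - mt) ** 2 for y in y_true) / n
    vp = sum((y - mp) ** 2 for y in y_pred) / n
    cov = sum((a - mt) * (b - mp) for a, b in zip(y_true, y_pred)) / n
    return 2.0 * cov / (vt + vp + (mt - mp) ** 2)

fvc_measured = [3.1, 4.0, 2.5, 3.6, 4.4]     # hypothetical FVC in liters
ccc_perfect = concordance_ccc(fvc_measured, fvc_measured)      # exactly 1
ccc_shifted = concordance_ccc(fvc_measured,                    # offset < 1
                              [v + 0.5 for v in fvc_measured])
```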

Personalized prediction model generated with machine learning for kidney function one year after living kidney donation.

Oki R, Hirai T, Iwadoh K, Kijima Y, Hashimoto H, Nishimura Y, Banno T, Unagami K, Omoto K, Shimizu T, Hoshino J, Takagi T, Ishida H, Hirai T

PubMed · Jul 1 2025
Living kidney donors typically experience approximately a 30% reduction in kidney function after donation, although the degree of reduction varies among individuals. This study aimed to develop a machine learning (ML) model to predict serum creatinine (Cre) levels at one year post-donation using preoperative clinical data, including kidney-, fat-, and muscle-volumetry values from computed tomography. A total of 204 living kidney donors were included. Symbolic regression via genetic programming was employed to create an ML-based Cre prediction model using preoperative clinical variables. Validation was conducted using a 7:3 training-to-test data split. The ML model demonstrated a median absolute error of 0.079 mg/dL for predicting Cre. In the validation cohort, it outperformed the conventional method (which assumes post-donation eGFR to be 70% of the preoperative value) with a higher R² (0.58 vs. 0.27), lower root mean squared error (5.27 vs. 6.89), and lower mean absolute error (3.92 vs. 5.8). Key predictive variables included preoperative Cre and remnant kidney volume. The model was deployed as a web application for clinical use. The ML model offers accurate predictions of post-donation kidney function and may assist in monitoring donor outcomes, enhancing personalized care after kidney donation.
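The conventional baseline the model is compared against is simple to state: post-donation function is assumed to be 70% of the preoperative value. A sketch of scoring that heuristic with mean absolute error (all numbers hypothetical, not the study cohort):

```python
def mae(y_true, y_pred):
    """Mean absolute error between paired sequences."""
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def conventional_egfr(pre_egfr):
    """Conventional rule: post-donation eGFR = 70% of the preoperative value."""
    return [0.7 * v for v in pre_egfr]

pre = [95.0, 80.0, 102.0, 88.0]      # hypothetical preoperative eGFR
observed = [68.0, 55.0, 74.0, 60.0]  # hypothetical one-year eGFR
baseline_error = mae(observed, conventional_egfr(pre))
```

A learned model improves on this rule precisely where individual variation matters, e.g. by weighting remnant kidney volume alongside baseline creatinine.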

Regression modeling with convolutional neural network for predicting extent of resection from preoperative MRI in giant pituitary adenomas: a pilot study.

Patel BK, Tariciotti L, DiRocco L, Mandile A, Lohana S, Rodas A, Zohdy YM, Maldonado J, Vergara SM, De Andrade EJ, Revuelta Barbero JM, Reyes C, Solares CA, Garzon-Muvdi T, Pradilla G

PubMed · Jul 1 2025
Giant pituitary adenomas (GPAs) are challenging skull base tumors due to their size and proximity to critical neurovascular structures. Achieving gross-total resection (GTR) can be difficult, and residual tumor burden is commonly reported. This study evaluated the ability of convolutional neural networks (CNNs) to predict the extent of resection (EOR) from preoperative MRI with the goals of enhancing surgical planning, improving preoperative patient counseling, and enhancing multidisciplinary postoperative coordination of care. A retrospective study of 100 consecutive patients with GPAs was conducted. Patients underwent surgery via the endoscopic endonasal transsphenoidal approach. CNN models were trained on DICOM images from preoperative MR images to predict EOR, using a split of 80 patients for training and 20 for validation. The models included different architectural modules to refine image selection and predict EOR based on tumor-contained images in various anatomical planes. The model design, training, and validation were conducted in a local environment in Python using the TensorFlow machine learning system. The median preoperative tumor volume was 19.4 cm³. The median EOR was 94.5%, with GTR achieved in 49% of cases. The CNN model showed high predictive accuracy, especially when analyzing images from the coronal plane, with a root mean square error of 2.9916 and a mean absolute error of 2.6225. The coefficient of determination (R²) was 0.9823, indicating excellent model performance. CNN-based models may effectively predict the EOR for GPAs from preoperative MRI scans, offering a promising tool for presurgical assessment and patient counseling. Confirmatory studies with large patient samples are needed to definitively validate these findings.
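The reported RMSE, MAE, and R² are all computable from paired labels and predictions. A sketch with hypothetical extent-of-resection percentages (not the study's data):

```python
import math

def regression_scores(y_true, y_pred):
    """Return (RMSE, MAE, R^2) for paired regression outputs."""
    n = len(y_true)
    mse = sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / n
    mae = sum(abs(a - b) for a, b in zip(y_true, y_pred)) / n
    mean_t = sum(y_true) / n
    ss_tot = sum((a - mean_t) ** 2 for a in y_true)
    ss_res = sum((a - b) ** 2 for a, b in zip(y_true, y_pred))
    r2 = 1.0 - ss_res / ss_tot
    return math.sqrt(mse), mae, r2

# Hypothetical EOR (%) labels vs. CNN predictions.
eor_true = [94.5, 100.0, 88.0, 97.0, 76.0]
eor_pred = [92.0, 99.0, 90.5, 95.0, 79.0]
rmse, mae, r2 = regression_scores(eor_true, eor_pred)
```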

Bridging Classical and Learning-based Iterative Registration through Deep Equilibrium Models

Yi Zhang, Yidong Zhao, Qian Tao

arXiv preprint · Jul 1 2025
Deformable medical image registration is traditionally formulated as an optimization problem. While classical methods solve this problem iteratively, recent learning-based approaches use recurrent neural networks (RNNs) to mimic this process by unrolling the prediction of deformation fields in a fixed number of steps. However, classical methods typically converge after sufficient iterations, whereas learning-based unrolling methods lack a theoretical convergence guarantee and show instability empirically. In addition, unrolling methods have a practical bottleneck at training time: GPU memory usage grows linearly with the unrolling steps due to backpropagation through time (BPTT). To address both theoretical and practical challenges, we propose DEQReg, a novel registration framework based on Deep Equilibrium Models (DEQ), which formulates registration as an equilibrium-seeking problem, establishing a natural connection between classical optimization and learning-based unrolling methods. DEQReg maintains constant memory usage, enabling theoretically unlimited iteration steps. Through extensive evaluation on public brain MRI and lung CT datasets, we show that DEQReg can achieve competitive registration performance while substantially reducing memory consumption compared to state-of-the-art unrolling methods. We also reveal an intriguing phenomenon: the performance of existing unrolling methods first increases slightly and then degrades irreversibly when the inference steps go beyond the training configuration. In contrast, DEQReg achieves stable convergence with its inbuilt equilibrium-seeking mechanism, bridging the gap between classical optimization-based and modern learning-based registration methods.
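The equilibrium-seeking idea can be illustrated with a scalar toy: iterate a contractive update until it stops changing, storing only the current iterate (constant memory), rather than unrolling a fixed number of steps. A sketch, not the DEQReg implementation:

```python
def fixed_point(f, x0, tol=1e-10, max_iter=1000):
    """Iterate x <- f(x) to an equilibrium, as in a DEQ forward pass.

    Only the current iterate is stored, so memory does not grow with
    the number of steps (unlike BPTT through an unrolled network).
    """
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# A contraction converges to its unique fixed point regardless of how
# many steps we allow: here f(x) = 0.5*x + 1 has fixed point x* = 2.
f = lambda x: 0.5 * x + 1.0
x_star = fixed_point(f, 0.0)
```

In a DEQ the update f is a learned network over deformation fields, and gradients are obtained via the implicit function theorem at the equilibrium rather than by backpropagating through every iteration.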

Accurate and Efficient Fetal Birth Weight Estimation from 3D Ultrasound

Jian Wang, Qiongying Ni, Hongkui Yu, Ruixuan Yao, Jinqiao Ying, Bin Zhang, Xingyi Yang, Jin Peng, Jiongquan Chen, Junxuan Yu, Wenlong Shi, Chaoyu Chen, Zhongnuo Yan, Mingyuan Luo, Gaocheng Cai, Dong Ni, Jing Lu, Xin Yang

arXiv preprint · Jul 1 2025
Accurate fetal birth weight (FBW) estimation is essential for optimizing delivery decisions and reducing perinatal mortality. However, clinical methods for FBW estimation are inefficient, operator-dependent, and challenging to apply in cases of complex fetal anatomy. Existing deep learning methods are based on 2D standard ultrasound (US) images or videos that lack spatial information, limiting their prediction accuracy. In this study, we propose the first method for directly estimating FBW from 3D fetal US volumes. Our approach integrates a multi-scale feature fusion network (MFFN) and a synthetic sample-based learning framework (SSLF). The MFFN effectively extracts and fuses multi-scale features under sparse supervision by incorporating channel attention, spatial attention, and a ranking-based loss function. SSLF generates synthetic samples by simply combining fetal head and abdomen data from different fetuses, utilizing semi-supervised learning to improve prediction performance. Experimental results demonstrate that our method achieves superior performance, with a mean absolute error of 166.4 ± 155.9 g and a mean absolute percentage error of 5.1 ± 4.6%, outperforming existing methods and approaching the accuracy of a senior doctor. Code is available at: https://github.com/Qioy-i/EFW.
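The mean absolute percentage error reported above is straightforward to reproduce. A sketch with hypothetical birth weights (not study data):

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(a - b) / abs(a)
                       for a, b in zip(y_true, y_pred)) / len(y_true)

# Hypothetical birth weights (g) vs. model estimates.
bw_true = [3200.0, 2800.0, 3500.0]
bw_pred = [3040.0, 2940.0, 3500.0]
error_pct = mape(bw_true, bw_pred)
```

MAPE is scale-free, which is why it complements the absolute error in grams: a 160 g miss matters more for a 2000 g fetus than for a 4000 g one.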

Comparison of Deep Learning Models for fast and accurate dose map prediction in Microbeam Radiation Therapy.

Arsini L, Humphreys J, White C, Mentzel F, Paino J, Bolst D, Caccia B, Cameron M, Ciardiello A, Corde S, Engels E, Giagu S, Rosenfeld A, Tehei M, Tsoi AC, Vogel S, Lerch M, Hagenbuchner M, Guatelli S, Terracciano CM

PubMed · Jul 1 2025
Microbeam Radiation Therapy (MRT) is an innovative radiotherapy modality which uses highly focused synchrotron-generated X-ray microbeams. Current pre-clinical research in MRT relies mostly on Monte Carlo (MC) simulations for dose estimation, which are highly accurate but computationally intensive. Recently, Deep Learning (DL) dose engines have proven effective at generating fast and reliable dose distributions in different RT modalities. However, relatively few studies compare different models on the same task. This work compares a Graph-Convolutional-Network-based DL model, developed in the context of Very High Energy Electron RT, to the convolutional 3D U-Net that we recently implemented for MRT dose predictions. The two DL solutions are trained with 3D dose maps, generated with the Monte Carlo toolkit Geant4, in rat models used in MRT pre-clinical research. The models are evaluated against Geant4 simulations, used as ground truth, and are assessed in terms of Mean Absolute Error, Mean Relative Error, and a voxel-wise version of the γ-index. Also presented are specific comparisons of predictions in relevant tumor regions, tissue boundaries, and air pockets. Finally, the two models are compared in terms of execution time and model size. This study finds that the two models achieve comparable overall performance. The main differences lie in their dosimetric accuracy within specific regions, such as air pockets, and in their inference times. Consequently, the choice between models should be guided primarily by data structure and time constraints, favoring the graph-based method for its flexibility or the 3D U-Net for its faster execution.
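The full γ-index combines a dose-difference criterion with a distance-to-agreement search; a dose-difference-only simplification still conveys the voxel-wise pass/fail idea (hypothetical doses, not the paper's evaluation code):

```python
def dose_difference_pass_rate(reference, evaluated, criterion_pct=3.0):
    """Fraction of voxels whose dose agrees within a percentage criterion.

    A simplified, dose-difference-only stand-in for the voxel-wise
    gamma-index: no distance-to-agreement search is performed, and the
    criterion is taken relative to the reference maximum (global mode).
    """
    d_max = max(reference)
    passed = sum(
        1 for r, e in zip(reference, evaluated)
        if abs(r - e) <= criterion_pct / 100.0 * d_max
    )
    return passed / len(reference)

ref_dose = [10.0, 8.0, 6.0, 4.0, 2.0]   # hypothetical MC voxel doses (Gy)
dl_dose  = [10.2, 7.9, 6.5, 4.1, 2.0]   # hypothetical DL predictions
rate = dose_difference_pass_rate(ref_dose, dl_dose)
```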

Dynamic frame-by-frame motion correction for 18F-flurpiridaz PET-MPI using convolution neural network

Urs, M., Killekar, A., Builoff, V., Lemley, M., Wei, C.-C., Ramirez, G., Kavanagh, P., Buckley, C., Slomka, P. J.

medRxiv preprint · Jul 1 2025
Purpose: Precise quantification of myocardial blood flow (MBF) and myocardial flow reserve (MFR) in 18F-flurpiridaz PET relies significantly on motion correction (MC). However, manual frame-by-frame correction is time-consuming, subject to significant inter-observer variability, and requires substantial experience. We propose a deep learning (DL) framework for automatic MC of 18F-flurpiridaz PET. Methods: The method employs a 3D ResNet-based architecture that takes 3D PET volumes as input and outputs motion vectors. It was validated using 5-fold cross-validation on data from 32 sites of a Phase III clinical trial (NCT01347710). Manual corrections from two experienced operators served as ground truth, and data augmentation using simulated vectors enhanced training robustness. The study compared the DL approach to both manual and standard non-AI automatic MC methods, assessing agreement and diagnostic accuracy using minimal segmental MBF and MFR. Results: The areas under the receiver operating characteristic curves (AUC) for significant CAD were comparable between DL-MC MBF and manual-MC MBF from the two operators (AUC = 0.897, 0.892, and 0.889, respectively; p > 0.05) and standard non-AI automatic MC (AUC = 0.877; p > 0.05), and significantly higher than no MC (AUC = 0.835; p < 0.05). Similar findings were observed with MFR. The 95% limits of agreement with the operator were ±0.49 (mean difference = 0.00) for MFR and ±0.24 ml/g/min (mean difference = 0.00) for MBF. Conclusion: DL-MC is significantly faster than, but diagnostically comparable to, manual MC. The quantitative MBF and MFR results obtained with DL-MC are in excellent agreement with those manually corrected by experienced operators, compared with standard non-AI automatic MC, in patients undergoing 18F-flurpiridaz PET-MPI.
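Applying a predicted motion vector to a dynamic frame is, at its simplest, a translation of the voxel grid. A 2D integer-shift sketch (illustrative only; the actual framework regresses 3D vectors from PET volumes):

```python
def shift_frame(frame, dx, dy, fill=0.0):
    """Translate a 2D frame by an integer motion vector (dx, dy).

    Voxels shifted in from outside the field of view are zero-filled.
    """
    h, w = len(frame), len(frame[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = frame[sy][sx]
    return out

# A single hot voxel moves by the predicted vector (1, 1).
frame = [[0.0, 0.0, 0.0],
         [0.0, 5.0, 0.0],
         [0.0, 0.0, 0.0]]
corrected = shift_frame(frame, 1, 1)
```

Real MC pipelines use subvoxel, interpolated shifts per frame; the point here is only that each frame is realigned by its own vector before the kinetic fit that yields MBF and MFR.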

Scout-Dose-TCM: Direct and Prospective Scout-Based Estimation of Personalized Organ Doses from Tube Current Modulated CT Exams

Maria Jose Medrano, Sen Wang, Liyan Sun, Abdullah-Al-Zubaer Imran, Jennie Cao, Grant Stevens, Justin Ruey Tse, Adam S. Wang

arXiv preprint · Jun 30 2025
This study proposes Scout-Dose-TCM for direct, prospective estimation of organ-level doses under tube current modulation (TCM) and compares its performance to two established methods. We analyzed contrast-enhanced chest-abdomen-pelvis CT scans from 130 adults (120 kVp, TCM). Reference doses for six organs (lungs, kidneys, liver, pancreas, bladder, spleen) were calculated using MC-GPU and TotalSegmentator. Based on these, we trained Scout-Dose-TCM, a deep learning model that predicts organ doses corresponding to discrete cosine transform (DCT) basis functions, enabling real-time estimates for any TCM profile. The model combines a feature learning module that extracts contextual information from lateral and frontal scouts and the scan range with a dose learning module that outputs DCT-based dose estimates. A customized loss function incorporated the DCT formulation during training. For comparison, we implemented size-specific dose estimation per AAPM TG 204 (Global CTDIvol) and its organ-level TCM-adapted version (Organ CTDIvol). A 5-fold cross-validation assessed generalizability by comparing mean absolute percentage dose errors and R² correlations with benchmark doses. Average absolute percentage errors were 13% (Global CTDIvol), 9% (Organ CTDIvol), and 7% (Scout-Dose-TCM), with the bladder showing the largest discrepancies (15%, 13%, and 9%). Statistical tests confirmed that Scout-Dose-TCM significantly reduced errors vs. Global CTDIvol across most organs and improved over Organ CTDIvol for the liver, bladder, and pancreas. It also achieved higher R² values, indicating stronger agreement with Monte Carlo benchmarks. Scout-Dose-TCM outperformed Global CTDIvol and was comparable to or better than Organ CTDIvol, without requiring organ segmentations at inference, demonstrating its promise as a tool for prospective organ-level dose estimation in CT.
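Predicting coefficients of discrete cosine transform basis functions lets a fixed-size network output represent the dose response to any TCM profile. A sketch of DCT-II basis synthesis (illustrative only, not the Scout-Dose-TCM formulation):

```python
import math

def dct_basis(k, n):
    """k-th DCT-II basis function sampled at n points along the scan axis."""
    return [math.cos(math.pi * k * (i + 0.5) / n) for i in range(n)]

def reconstruct(coeffs, n):
    """Profile synthesized from predicted per-basis coefficients."""
    profile = [0.0] * n
    for k, c in enumerate(coeffs):
        basis = dct_basis(k, n)
        for i in range(n):
            profile[i] += c * basis[i]
    return profile

# With only the DC coefficient the reconstruction is flat; adding
# higher-order terms lets the model represent smooth TCM-dependent
# variation along the scan range.
flat = reconstruct([2.0], 8)
```

Because the basis is fixed, only the coefficients need to be learned per organ, and any new tube-current profile can be projected onto the same basis at inference time.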

Deep learning-based contour propagation in magnetic resonance imaging-guided radiotherapy of lung cancer patients.

Wei C, Eze C, Klaar R, Thorwarth D, Warda C, Taugner J, Hörner-Rieber J, Regnery S, Jaekel O, Weykamp F, Palacios MA, Marschner S, Corradini S, Belka C, Kurz C, Landry G, Rabe M

PubMed · Jun 26 2025
Fast and accurate organ-at-risk (OAR) and gross tumor volume (GTV) contour propagation methods are needed to improve the efficiency of magnetic resonance (MR) imaging-guided radiotherapy. We trained deformable image registration networks to accurately propagate contours from planning to fraction MR images. Approach: Data from 140 stage 1-2 lung cancer patients treated at a 0.35 T MR-Linac were split into 102/17/21 for training/validation/testing. Additionally, 18 central lung tumor patients, treated at a 0.35 T MR-Linac externally, and 14 stage 3 lung cancer patients from a phase 1 clinical trial, treated at 0.35 T or 1.5 T MR-Linacs at three institutions, were used for external testing. Planning and fraction images were paired (490 pairs) for training. Two hybrid transformer-convolutional neural network TransMorph models, trained with mean squared error (MSE), Dice similarity coefficient (DSC), and regularization losses (TM_MSE+Dice) or with MSE and regularization losses (TM_MSE), were used to deformably register planning to fraction images. The TransMorph models predicted diffeomorphic dense displacement fields. Multi-label images including seven thoracic OARs and the GTV were propagated to generate fraction segmentations. Model predictions were compared with contours obtained through B-spline registration, the vendor's registration, and the auto-segmentation method nnU-Net. Evaluation metrics included the DSC and Hausdorff distance percentiles (50th and 95th) against clinical contours. Main results: TM_MSE+Dice and TM_MSE achieved mean OAR/GTV DSCs of 0.90/0.82 and 0.90/0.79 for the internal test data, and 0.84/0.77 and 0.85/0.76 for the central lung tumor external test data. On stage 3 data, TM_MSE+Dice achieved mean OAR/GTV DSCs of 0.87/0.79 and 0.83/0.78 for the 0.35 T MR-Linac datasets, and 0.87/0.75 for the 1.5 T MR-Linac dataset. TM_MSE+Dice and TM_MSE had significantly higher geometric accuracy than the other methods on external data; no significant difference was found between TM_MSE+Dice and TM_MSE. Significance: TransMorph models achieved time-efficient segmentation of fraction MRIs with high geometric accuracy and accurately segmented images obtained at different field strengths.
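The Dice similarity coefficient used throughout these evaluations is twice the overlap divided by the summed mask sizes. A sketch on toy binary masks (hypothetical values, not study contours):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat lists)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size_a = sum(1 for a in mask_a if a)
    size_b = sum(1 for b in mask_b if b)
    return 2.0 * inter / (size_a + size_b)

clinical   = [1, 1, 1, 1, 0, 0, 0, 0]   # hypothetical clinical contour
propagated = [1, 1, 1, 0, 1, 0, 0, 0]   # hypothetical propagated contour
score = dice(clinical, propagated)
```

DSC is insensitive to where a mismatch occurs, which is why the abstract pairs it with Hausdorff distance percentiles to capture boundary errors.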
