
CNN-based prediction using early post-radiotherapy MRI as a proxy for toxicity in the murine head and neck.

Huynh BN, Kakar M, Zlygosteva O, Juvkam IS, Edin N, Tomic O, Futsaether CM, Malinen E

PubMed · Sep 25 2025
Radiotherapy (RT) of head and neck cancer can cause severe toxicities. Early identification of individuals at risk could enable personalized treatment. This study evaluated whether convolutional neural networks (CNNs) applied to magnetic resonance (MR) images acquired early after irradiation can predict radiation-induced tissue changes associated with toxicity in mice. Twenty-nine C57BL/6JRj mice were included (irradiated: n = 14; control: n = 15). Irradiated mice received 65 Gy of fractionated RT to the oral cavity, swallowing muscles and salivary glands. T2-weighted MR images were acquired 3-5 days post-irradiation. CNN models (VGG, MobileNet, ResNet, EfficientNet) were trained to classify sagittal slices as irradiated or control (n = 586 slices). Predicted class probabilities were correlated with five toxicity endpoints assessed 8-105 days post-irradiation. Model explainability was assessed with VarGrad heatmaps to verify that predictions relied on clinically relevant image regions. The best-performing model (EfficientNet B3) achieved 83% slice-level accuracy and correctly classified 28 of 29 mice. Higher predicted probabilities for the irradiated class were strongly associated with oral mucositis, dermatitis, reduced saliva production, late submandibular gland fibrosis and atrophy of salivary gland acinar cells. Explainability heatmaps confirmed that the CNNs focused on irradiated regions. The high classification accuracy, the regions highlighted by the explainability analysis and the strong correlations between model predictions and toxicity suggest that CNNs, combined with early post-irradiation MR imaging, may identify individuals at risk of developing toxicity.
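A minimal sketch of the classification step described above, assuming PyTorch/torchvision and placeholder data handling (the authors' training code is not shown in the abstract): fine-tune an EfficientNet-B3 on slice labels, then use the mean per-animal predicted probability as the toxicity proxy.

```python
# Hypothetical sketch, not the authors' code: EfficientNet-B3 fine-tuned to
# classify sagittal MR slices as irradiated vs. control.
import torch
import torch.nn as nn
from torchvision import models
from scipy.stats import spearmanr

model = models.efficientnet_b3(weights="IMAGENET1K_V1")
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, 2)

def irradiated_probability(model, slices):
    """P(irradiated) for a batch of slices shaped (N, 3, H, W);
    grayscale MR slices are assumed replicated to 3 channels."""
    model.eval()
    with torch.no_grad():
        return torch.softmax(model(slices), dim=1)[:, 1]

# Mouse-level proxy: average probability over an animal's slices, then
# rank-correlate with a toxicity endpoint (placeholder variable names):
# rho, p = spearmanr(mean_prob_per_mouse, mucositis_grade)
```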

Evaluation of Operator Variability and Validation of an AI-Assisted α-Angle Measurement System for DDH Using a Phantom Model.

Ohashi Y, Shimizu T, Koyano H, Nakamura Y, Takahashi D, Yamada K, Iwasaki N

PubMed · Sep 22 2025
Ultrasound examination using the Graf method is widely applied for early detection of developmental dysplasia of the hip (DDH), but intra- and inter-operator variability remains a limitation. This study aimed to quantify operator variability in hip ultrasound assessments and to validate an AI-assisted system for automated α-angle measurement to improve reproducibility. Thirty participants of different experience levels, including trained clinicians, residents, and medical students, each performed six ultrasound scans on a standardized infant hip phantom. Examination time, iliac margin inclination, and α-angle measurements were analyzed to assess intra- and inter-operator variability. In parallel, an AI-based system was developed to automatically detect anatomical landmarks and calculate α-angles from static images and dynamic video sequences. Validation was conducted using the phantom model with a known α-angle of 70°. Clinicians achieved shorter examination times and higher reproducibility than residents and students, with manual measurements systematically underestimating the reference α-angle. Static AI produced closer estimates with greater variability, whereas dynamic AI achieved the highest accuracy (mean 69.2°) and consistency with narrower limits of agreement than manual measurements. These findings confirm substantial operator variability and demonstrate that AI-assisted dynamic ultrasound analysis can improve reproducibility and reliability in routine DDH screening.
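The geometric core of automated α-angle measurement is compact: the α-angle is the angle between Graf's iliac baseline and the bony roof line. A sketch assuming a detector has already localized the landmark points defining those two lines (the landmark handling here is illustrative, not the authors' pipeline):

```python
# Illustrative sketch of the alpha-angle geometry only.
import numpy as np

def line_direction(p1, p2):
    v = np.asarray(p2, float) - np.asarray(p1, float)
    return v / np.linalg.norm(v)

def alpha_angle(iliac_a, iliac_b, roof_a, roof_b):
    """Angle in degrees between the iliac baseline and the bony roof line."""
    u = line_direction(iliac_a, iliac_b)
    w = line_direction(roof_a, roof_b)
    cos_t = abs(float(np.dot(u, w)))          # orientation-invariant
    return np.degrees(np.arccos(np.clip(cos_t, 0.0, 1.0)))

# For dynamic video analysis, a robust per-frame aggregate could be used:
# angles = [alpha_angle(*landmarks(frame)) for frame in video]
# alpha_dynamic = np.median(angles)   # e.g., expect ~70 deg on this phantom
```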

Insertion of hepatic lesions into clinical photon-counting-detector CT projection data.

Gong H, Kharat S, Wellinghoff J, El Sadaney AO, Fletcher JG, Chang S, Yu L, Leng S, McCollough CH

PubMed · Sep 19 2025
To facilitate task-driven image quality assessment of lesion detectability in clinical photon-counting-detector CT (PCD-CT), patient image data with known pathology and precise annotations are desirable. Standard patient case collection and reference-standard establishment are time- and resource-intensive. To mitigate this challenge, we aimed to develop a projection-domain lesion insertion framework that efficiently creates realistic patient cases by digitally inserting real radiopathologic features into patient PCD-CT images.
Approach. This framework used artificial intelligence (AI)-assisted semi-automatic annotation to generate digital lesion models from real lesion images. The x-ray energy used for commercial beam-hardening correction in the PCD-CT system was estimated and used to calculate multi-energy forward projections of these lesion models at different energy thresholds. Lesion projections were then added to patient projections from PCD-CT exams. The modified projections were reconstructed into realistic lesion-present patient images using the CT manufacturer's offline reconstruction software. Image quality was qualitatively and quantitatively validated in phantom scans and patient cases with liver lesions, using visual inspection, CT number accuracy, the structural similarity index (SSIM), and radiomic feature analysis. Statistical tests were performed using the Wilcoxon signed-rank test.
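In simplified form, projection-domain insertion exploits the linearity of line integrals: the lesion's forward projection is added to the measured (log-converted) patient projections before reconstruction. A toy single-energy sketch, with geometry and units as placeholder assumptions (the real multi-threshold PCD-CT model is far more involved):

```python
# Illustrative sketch of projection-domain lesion insertion.
import numpy as np

def insert_lesion(patient_proj, lesion_mu, forward_project):
    """patient_proj: measured line integrals (already log-converted).
    lesion_mu: lesion attenuation map [1/mm] at the effective energy.
    forward_project: system-geometry projector, volume -> line integrals."""
    lesion_proj = forward_project(lesion_mu)   # line integrals of the lesion
    return patient_proj + lesion_proj          # additive in the log domain

# A parallel-beam toy projector, for demonstration only:
def toy_projector(volume, pixel_mm=1.0):
    return volume.sum(axis=0) * pixel_mm       # integrate along one axis
```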
Main results. CT numbers did not differ significantly (p > 0.05) between original and re-inserted tissue- and contrast-media-mimicking rods or hepatic lesions (mean ± standard deviation: rods 0.4 ± 2.3 HU, lesions -1.8 ± 6.4 HU). Original and inserted lesions showed similar morphological features at the original and re-inserted locations (SSIM 0.95 ± 0.02, mean ± standard deviation). Additionally, the corresponding radiomic features formed highly similar feature clusters, with no statistically significant differences (p > 0.05).
Significance. The proposed framework can generate patient PCD-CT exams with realistic liver lesions using archived patient data and lesion images. It will facilitate systematic evaluation of PCD-CT systems and advanced reconstruction and post-processing algorithms with target pathological features.

Geometric-Driven Cross-Modal Registration Framework for Optical Scanning and CBCT Models in AR-Based Maxillofacial Surgical Navigation.

Liu Y, Wang E, Gong M, Tao B, Wu Y, Qi X, Chen X

PubMed · Sep 4 2025
Accurate preoperative planning for dental implants, especially in edentulous or partially edentulous patients, relies on precise localization of radiographic templates that guide implant positioning. By wearing a patient-specific radiographic template, clinicians can better assess anatomical constraints and plan optimal implant paths. However, due to the low radiopacity of such templates, their spatial position is difficult to determine directly from cone-beam computed tomography (CBCT) scans. To overcome this limitation, high-resolution optical scans of the templates are acquired, providing detailed geometric information for accurate spatial registration. This paper proposes a geometric-driven cross-modal registration framework that aligns the optical scan model of the radiographic template with patient CBCT data, enhancing registration accuracy through geometric feature extraction such as curvature and occlusal contours. A hybrid deep learning workflow further improves robustness, achieving a root mean square error (RMSE) of 1.68 mm and a mean absolute error (MAE) of 1.25 mm. The system also incorporates augmented reality (AR) for real-time surgical navigation. Clinical and phantom experiments validate its effectiveness in supporting precise implant path planning and execution. Our proposed system enhances the efficiency and safety of dental implant surgery by integrating geometric feature extraction, deep learning-based registration, and AR-assisted navigation.
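For reference, the two reported error metrics are straightforward to compute once point correspondences between the registered optical-scan model and the CBCT surface are available; a sketch with assumed inputs:

```python
# Sketch of the evaluation step only; correspondences are assumed given.
import numpy as np

def registration_errors(moved_pts, target_pts):
    """moved_pts, target_pts: (N, 3) corresponding 3D points in mm."""
    d = np.linalg.norm(moved_pts - target_pts, axis=1)  # per-point error
    rmse = float(np.sqrt(np.mean(d ** 2)))              # root mean square
    mae = float(np.mean(d))                             # mean absolute
    return rmse, mae
```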

Automated quantification of lung pathology on micro-CT in diverse disease models using deep learning.

Belmans F, Seldeslachts L, Vanhoffelen E, Tielemans B, Vos W, Maes F, Vande Velde G

PubMed · Aug 30 2025
Micro-CT significantly enhances the efficiency, predictive power and translatability of animal studies to human clinical trials for respiratory diseases. However, the analysis of large micro-CT datasets remains a bottleneck. We developed a generic deep learning (DL)-based lung segmentation model using longitudinal micro-CT images from studies of Down syndrome, viral and fungal infections, and exacerbation, covering variable lung pathology and degrees of disease burden. 2D models were trained with cross-validation on axial, coronal and sagittal slices. Predictions from these single-orientation models were combined into a 2.5D model using majority voting or probability averaging. The generalisability of these models to other studies (COVID-19, lung inflammation and fibrosis), scanner configurations and rodent species (rats, hamsters, degus) was tested, including on a publicly available database. On the internal validation data, the highest mean Dice similarity coefficient (DSC) was obtained by the 2.5D probability-averaging model (0.953 ± 0.023), which further improved on the 2D models by removing erroneous voxels outside the lung region. The models demonstrated good generalisability, with average DSC values ranging from 0.89 to 0.94 across different lung pathologies and scanner configurations. Biomarkers extracted from manual and automated segmentations were in good agreement, showing that the proposed solution effectively monitors longitudinal lung pathology development and response to treatment in real-world preclinical studies. Our DL-based pipeline for lung pathology quantification offers efficient analysis of large micro-CT datasets, is widely applicable across rodent disease models and acquisition protocols, and enables real-time insights into therapy efficacy. This research was supported by the Service Public de Wallonie (AEROVID grant to FB, WV) and the Flemish Research Foundation (FWO, doctoral mandate 1SF2224N to EV and 1186121N/1186123N to LS, infrastructure grant I006524N to GVV).
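The two fusion rules for the 2.5D model can be sketched directly from the description above, assuming the three single-orientation probability volumes have been resampled into a common (z, y, x) grid:

```python
# Sketch of 2.5D fusion from three 2D-model probability volumes.
import numpy as np

def fuse_probability(p_axial, p_coronal, p_sagittal, thresh=0.5):
    p_mean = (p_axial + p_coronal + p_sagittal) / 3.0
    return p_mean > thresh                      # final lung mask

def fuse_majority(p_axial, p_coronal, p_sagittal, thresh=0.5):
    votes = sum((p > thresh).astype(np.uint8)
                for p in (p_axial, p_coronal, p_sagittal))
    return votes >= 2                           # at least two of three agree
```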

Proteogenomic Biomarker Profiling for Predicting Radiolabeled Immunotherapy Response in Resistant Prostate Cancer.

Yan B, Gao Y, Zou Y, Zhao L, Li Z

PubMed · Aug 29 2025
Treatment resistance prevents many patients receiving preoperative chemoradiotherapy or targeted radiolabeled immunotherapy from achieving a good outcome, and it remains a major challenge in prostate cancer (PCa). A novel integrative framework combining a machine learning workflow with proteogenomic profiling was used to identify predictive ultrasound biomarkers and classify response to radiolabeled immunotherapy in treatment-resistant, high-risk PCa patients. A deep stacked autoencoder (DSAE) combined with Extreme Gradient Boosting was designed for feature refinement and classification. Multiomics data were collected from The Cancer Genome Atlas and an independent radiotherapy-treated cohort, comprising genetic mutations (whole-exome sequencing), proteomic (mass spectrometry) and transcriptomic (RNA sequencing) data. The DSAE architecture reduces the dimensionality of the data while preserving biological diversity across omics layers. Resistance phenotypes showed a notable relationship with proteogenomic profiles, including DNA repair pathways (Breast Cancer gene 2 [BRCA2], ataxia-telangiectasia mutated [ATM]), androgen receptor (AR) signaling regulators, and metabolic enzymes (ATP citrate lyase [ACLY], isocitrate dehydrogenase 1 [IDH1]). A specific panel of ultrasound biomarkers was validated preclinically using patient-derived xenografts. To support clinical translation, real-time phenotypic features from ultrasound imaging (e.g., perfusion, stiffness) were also considered, providing complementary insights into the tumor microenvironment and treatment responsiveness. This approach provides an integrated platform that offers a clinically actionable foundation for the development of radiolabeled immunotherapy before surgery.
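A hedged sketch of the modeling pipeline as described: a stacked autoencoder compresses the concatenated multiomics features, and Extreme Gradient Boosting classifies response from the latent codes. Layer sizes and variable names are invented for illustration.

```python
# Illustrative DSAE + XGBoost pipeline; dimensions are placeholders.
import torch
import torch.nn as nn
from xgboost import XGBClassifier

class StackedAE(nn.Module):
    def __init__(self, d_in=5000, d_latent=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(d_in, 512), nn.ReLU(),
            nn.Linear(512, d_latent))
        self.decoder = nn.Sequential(
            nn.Linear(d_latent, 512), nn.ReLU(),
            nn.Linear(512, d_in))

    def forward(self, x):
        z = self.encoder(x)          # compressed multiomics representation
        return self.decoder(z), z    # reconstruction + latent code

# After training the autoencoder with a reconstruction loss (not shown),
# the latent codes feed a gradient-boosted classifier:
# _, z = model(torch.tensor(X, dtype=torch.float32))
# clf = XGBClassifier(n_estimators=300, max_depth=4)
# clf.fit(z.detach().numpy(), y)
```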

PWLS-SOM: alternative PWLS reconstruction for limited-view CT by strategic optimization of a deep learning model.

Chen C, Zhang L, Xing Y, Chen Z

PubMed · Aug 27 2025
While deep learning (DL) methods have exhibited promising results in mitigating streaking artifacts caused by limited-view computed tomography (CT), their generalization to practical applications remains challenging. To address this challenge, we aim to develop a novel approach that integrates DL priors with targeted-case data consistency for improved artifact suppression and robust reconstruction.
Approach: We propose an alternative Penalized Weighted Least Squares reconstruction framework by Strategic Optimization of a DL Model (PWLS-SOM). This framework combines data-driven DL priors with data consistency constraints in a three-stage process: (1) Group-level embedding: DL network parameters are optimized on a large-scale paired dataset to learn general artifact elimination. (2) Significance evaluation: A novel significance score quantifies the contribution of DL model parameters, guiding the subsequent strategic adaptation. (3) Individual-level consistency adaptation: PWLS-driven strategic optimization further adapts DL parameters for target-specific projection data.
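One plausible reading of stages (2) and (3), sketched below: score each network parameter by a saliency-style |weight × gradient| measure on the group-level loss, then adapt only the most significant parameters when minimizing the PWLS data-fidelity term on the target case. The scoring rule shown is an assumption, not necessarily the paper's exact definition.

```python
# Hypothetical significance scoring and selective adaptation.
import torch

def significance_scores(model, loss):
    loss.backward()
    return {n: (p.detach() * p.grad.detach()).abs().mean().item()
            for n, p in model.named_parameters() if p.grad is not None}

def freeze_insignificant(model, scores, keep_frac=0.2):
    """Keep the top keep_frac of parameters trainable for the
    target-case PWLS adaptation; freeze the rest."""
    cutoff = sorted(scores.values(), reverse=True)[
        max(0, int(keep_frac * len(scores)) - 1)]
    for n, p in model.named_parameters():
        p.requires_grad = scores.get(n, 0.0) >= cutoff
```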
Main Results: Experiments were conducted on sparse-view (90 views) circular-trajectory CT data and a multi-segment linear-trajectory CT scan with a mixed missing-data problem. PWLS-SOM reconstruction demonstrated superior generalization across variations in patients, anatomical structures, and data distributions. It outperformed supervised DL methods in recovering contextual structures and adapting to practical CT scenarios. The method was further validated with real experiments on a dead rat, showcasing its applicability to real-world CT scans.
Significance: PWLS-SOM reconstruction advances the field of limited-view CT reconstruction by uniting DL priors with PWLS adaptation. This approach facilitates robust and personalized imaging. The introduction of the significance score provides an efficient metric to evaluate generalization and guide the strategic optimization of DL parameters, enhancing adaptability across diverse data and practical imaging conditions.

Spectral computed tomography thermometry for thermal ablation: applicability and needle artifact reduction.

Koetzier LR, Hendriks P, Heemskerk JWT, van der Werf NR, Selles M, van der Molen AJ, Smits MLJ, Goorden MC, Burgmans MC

PubMed · Aug 23 2025
Effective thermal ablation of liver tumors requires precise monitoring of the ablation zone. Computed tomography (CT) thermometry can non-invasively monitor lethal temperatures but suffers from metal artifacts caused by ablation equipment. This study assesses the applicability of spectral CT thermometry during microwave ablation, comparing the reproducibility, precision, and accuracy of attenuation-based versus physical density-based thermometry. Furthermore, it identifies optimal metal artifact reduction (MAR) methods: O-MAR, deep learning-MAR, spectral CT, and combinations thereof. Four gel phantoms embedded with temperature sensors underwent a 10-minute, 60 W microwave ablation imaged with a dual-layer spectral CT scanner in 23 scans over time. For each scan, attenuation-based and physical density-based temperature maps were reconstructed. Attenuation-based and physical density-based thermometry models were tested for reproducibility over three repetitions; a fourth repetition focused on accuracy. MAR techniques were applied to one repetition to evaluate temperature precision in artifact-corrupted slices. The correlation between CT value and temperature was highly linear, with an R-squared value exceeding 96%. Model parameters for attenuation-based and physical density-based thermometry were -0.38 HU/°C and 0.00039 °C⁻¹, with coefficients of variation of 2.3% and 6.7%, respectively. Physical density maps improved temperature precision in the presence of needle artifacts by 73% compared to attenuation images. O-MAR improved temperature precision by 49% compared to no MAR. Attenuation-based thermometry yielded narrower Bland-Altman limits of agreement (-7.7 °C to 5.3 °C) than physical density-based thermometry. Spectral physical density-based CT thermometry at 150 keV, used alongside O-MAR, enhances temperature precision in the presence of metal artifacts and achieves reproducible temperature measurements with high accuracy.
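Given the reported sensitivity of -0.38 HU/°C, attenuation-based thermometry reduces to a per-voxel linear conversion of the HU shift relative to a baseline scan; a worked sketch (baseline handling and the linear relative-density model are assumptions):

```python
# Worked sketch of attenuation-based CT thermometry.
import numpy as np

HU_PER_DEG_C = -0.38   # reported attenuation-temperature slope

def temperature_map(hu_current, hu_baseline, t_baseline=20.0):
    """Per-voxel temperature estimate in deg C from two CT volumes."""
    delta_hu = hu_current.astype(float) - hu_baseline.astype(float)
    return t_baseline + delta_hu / HU_PER_DEG_C

# Physical-density variant, assuming a linear relative-density model with
# the reported slope k = 0.00039 per deg C:
# delta_T = (rho_baseline - rho_current) / (0.00039 * rho_baseline)
```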

Dedicated prostate DOI-TOF-PET based on the ProVision detection concept.

Vo HP, Williams T, Doroud K, Williams C, Rafecas M

PubMed · Aug 22 2025
The ProVision scanner is a dedicated prostate PET system with limited angular coverage; it employs a new detector technology that provides high spatial resolution as well as information about depth-of-interaction (DOI) and time-of-flight (TOF). The goal of this work is to develop a flexible image reconstruction framework and study the imaging performance of the current ProVision scanner.

Approach: Experimental datasets, including point-like sources, an image quality phantom, and a pelvic phantom, were acquired with the ProVision scanner to investigate the impact of oblique lines of response introduced via a multi-offset scanning protocol. This approach aims to mitigate data truncation artifacts and further characterise the current imaging performance of the system. For image reconstruction, we applied the list-mode Maximum Likelihood Expectation Maximisation (MLEM) algorithm incorporating TOF information. The system matrix and sensitivity models account for both detector attenuation and position uncertainty.

Main Results: The scanner provides good spatial resolution in the coronal plane; however, elongations caused by the limited angular coverage distort the reconstructed images. The availability of TOF and DOI information, as well as the addition of a multi-offset scanning protocol, could not fully compensate for these distortions.

Significance: The ProVision scanner concept, with its innovative detector technology, shows promising outcomes for fast and inexpensive PET without CT. Despite current limitations due to limited angular coverage, which lead to image distortions, ongoing advancements such as improved timing resolution, regularisation techniques, and artificial intelligence are expected to significantly reduce these artifacts and enhance image quality.
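The reconstruction backbone named above is standard; a toy list-mode MLEM skeleton, with the TOF kernel folded into each event's sparse system row and the full system model (DOI, attenuation, position uncertainty) abstracted away:

```python
# Toy list-mode MLEM update; the algorithmic skeleton only.
import numpy as np

def lm_mlem(events, sensitivity, x0, n_iter=10, eps=1e-12):
    """events: list of sparse system rows, each (voxel_idx, weights),
    one per detected coincidence (TOF weighting baked into weights).
    sensitivity: per-voxel sensitivity image. x0: initial image."""
    x = x0.copy()
    for _ in range(n_iter):
        back = np.zeros_like(x)
        for idx, w in events:                 # one LOR with its TOF kernel
            fwd = float(np.dot(w, x[idx]))    # expected counts on this LOR
            back[idx] += w / (fwd + eps)      # backproject the ratio
        x *= back / (sensitivity + eps)       # multiplicative EM update
    return x
```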

Vision-Guided Surgical Navigation Using Computer Vision for Dynamic Intraoperative Imaging Updates.

Ruthberg J, Gunderson N, Chen P, Harris G, Case H, Bly R, Seibel EJ, Abuzeid WM

PubMed · Aug 22 2025
Residual disease after endoscopic sinus surgery (ESS) contributes to poor outcomes and revision surgery. Image-guided surgery systems cannot dynamically reflect intraoperative changes. We propose a sensorless, video-based method for intraoperative CT updating using neural radiance fields (NeRF), a deep learning algorithm used to create 3D reconstructions of the surgical field. Bilateral ESS was performed on three 3D-printed models (n = 6 sides). Postoperative endoscopic videos were processed through a custom NeRF pipeline to generate 3D reconstructions, which were co-registered to preoperative CT scans. Digitally updated CT models were created through algorithmic subtraction of resected regions, then volumetrically segmented and compared to ground-truth postoperative CT. Accuracy was assessed using the Hausdorff distance (surface alignment), the Dice similarity coefficient (DSC) (volumetric overlap), and Bland-Altman analysis (statistical agreement). Comparison of the updated CT and the ground-truth postoperative CT indicated an average Hausdorff distance of 0.27 ± 0.076 mm and a 95th percentile Hausdorff distance of 0.82 ± 0.165 mm, indicating sub-millimeter surface alignment. The DSC was 0.93 ± 0.012, with values >0.9 indicating excellent spatial overlap. Bland-Altman analysis indicated modest underestimation of volume on the updated CT versus the ground-truth CT, with a mean difference in volumes of 0.40 cm³ and 95% limits of agreement of 0.04-0.76 cm³, indicating that all samples fell within acceptable bounds of variability. Computer vision can enable dynamic intraoperative imaging by generating highly accurate CT updates from monocular endoscopic video without external tracking. By directly visualizing resection progress, this software-driven tool has the potential to enhance surgical completeness in ESS for next-generation navigation platforms.
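Both agreement metrics reported above are standard; a sketch of mask-based implementations, assuming boolean numpy volumes with isotropic voxel spacing in mm:

```python
# Sketch of the two agreement metrics between an updated-CT mask and the
# ground-truth postoperative mask (mask-based approximation of the
# surface distance).
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a, b, spacing_mm=1.0, percentile=95):
    """Symmetric (percentile) Hausdorff distance between two masks."""
    db = distance_transform_edt(~b) * spacing_mm   # distance-to-b map
    da = distance_transform_edt(~a) * spacing_mm   # distance-to-a map
    return max(np.percentile(db[a], percentile),   # a-voxels to b
               np.percentile(da[b], percentile))   # b-voxels to a
```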