Generation of multimodal realistic computational phantoms as a test-bed for validating deep learning-based cross-modality synthesis techniques.

Camagni F, Nakas A, Parrella G, Vai A, Molinelli S, Vitolo V, Barcellini A, Chalaszczyk A, Imparato S, Pella A, Orlandi E, Baroni G, Riboldi M, Paganelli C

PubMed · Sep 27, 2025
The validation of multimodal deep learning models for medical image translation is limited by the lack of high-quality, paired datasets. We propose a novel framework that leverages computational phantoms to generate realistic CT and MRI images, enabling reliable ground-truth datasets for robust validation of artificial intelligence (AI) methods that generate synthetic CT (sCT) from MRI, specifically for radiotherapy applications. Two CycleGANs (cycle-consistent generative adversarial networks) were trained to transfer the imaging style of real patients onto CT and MRI phantoms, producing synthetic data with realistic textures and continuous intensity distributions. These data were evaluated through paired assessments with the original phantoms, unpaired comparisons with patient scans, and dosimetric analysis using patient-specific radiotherapy treatment plans. Additional external validation was performed on public CT datasets to assess generalizability to unseen data. The resulting paired CT/MRI phantoms were used to validate a GAN-based model for sCT generation from abdominal MRI in particle therapy, previously published in the literature. Results showed strong anatomical consistency with the original phantoms, high histogram correlation with patient images (HistCC = 0.998 ± 0.001 for MRI, HistCC = 0.97 ± 0.04 for CT), and dosimetric accuracy comparable to real data. The novelty of this work lies in using generated phantoms as validation data for deep learning-based cross-modality synthesis techniques.
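
The histogram correlation coefficient (HistCC) quoted above compares intensity distributions rather than voxel-wise values, so it can be computed even between unpaired phantom and patient images. A minimal sketch of such a metric is given below; the bin count, density normalisation, and shared intensity range are assumptions, since the abstract does not state the authors' exact implementation.

```python
import numpy as np

def histogram_correlation(img_a, img_b, bins=256):
    """Pearson correlation between the intensity histograms of two images.

    A simple stand-in for the HistCC metric mentioned in the abstract; the
    binning scheme here is assumed, not taken from the paper.
    """
    lo = min(img_a.min(), img_b.min())
    hi = max(img_a.max(), img_b.max())
    hist_a, _ = np.histogram(img_a.ravel(), bins=bins, range=(lo, hi), density=True)
    hist_b, _ = np.histogram(img_b.ravel(), bins=bins, range=(lo, hi), density=True)
    return np.corrcoef(hist_a, hist_b)[0, 1]
```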

Ultra-fast whole-brain T2-weighted imaging in 7 seconds using dual-type deep learning reconstruction with single-shot acquisition: clinical feasibility and comparison with conventional methods.

Ikebe Y, Fujima N, Kameda H, Harada T, Shimizu Y, Kwon J, Yoneyama M, Kudo K

PubMed · Sep 26, 2025
To evaluate the image quality and clinical utility of ultra-fast T2-weighted imaging (UF-T2WI), which acquires all slice data in 7 s using a single-shot turbo spin-echo (SSTSE) technique combined with dual-type deep learning (DL) reconstruction incorporating DL-based image denoising and super-resolution processing, by comparing UF-T2WI with conventional T2WI. We analyzed data from 38 patients who underwent both conventional T2WI and UF-T2WI with the dual-type DL-based image reconstruction. Two board-certified radiologists independently performed blinded qualitative assessments of the images obtained with UF-T2WI with DL and with conventional T2WI, evaluating overall image quality, anatomical structure visibility, and levels of noise and artifacts. In cases with central nervous system disease, lesion delineation was also assessed. The quantitative analysis included measurements of signal-to-noise ratios (SNRs) in white and gray matter and the contrast-to-noise ratio (CNR) between gray and white matter. Compared to conventional T2WI, UF-T2WI with DL received significantly higher ratings for overall image quality and lower noise and artifact levels (p < 0.001 for both readers). Anatomical visibility was significantly better in UF-T2WI for one reader, with no significant difference for the other reader. Lesion visibility in UF-T2WI was comparable to that in conventional T2WI. Quantitatively, the SNRs and CNR were all significantly higher in UF-T2WI than in conventional T2WI (p < 0.001). The combination of SSTSE with dual-type DL reconstruction allows the acquisition of clinically acceptable T2WI images in just 7 s. This technique shows strong potential to reduce MRI scan times and improve clinical workflow efficiency.

Clinical Uncertainty Impacts Machine Learning Evaluations

Simone Lionetti, Fabian Gröger, Philippe Gottfrois, Alvaro Gonzalez-Jimenez, Ludovic Amruthalingam, Alexander A. Navarini, Marc Pouly

arXiv preprint · Sep 26, 2025
Clinical dataset labels are rarely certain as annotators disagree and confidence is not uniform across cases. Typical aggregation procedures, such as majority voting, obscure this variability. In simple experiments on medical imaging benchmarks, accounting for the confidence in binary labels significantly impacts model rankings. We therefore argue that machine-learning evaluations should explicitly account for annotation uncertainty using probabilistic metrics that directly operate on distributions. These metrics can be applied independently of the annotations' generating process, whether modeled by simple counting, subjective confidence ratings, or probabilistic response models. They are also computationally lightweight, as closed-form expressions have linear-time implementations once examples are sorted by model score. We thus urge the community to release raw annotations for datasets and to adopt uncertainty-aware evaluation so that performance estimates may better reflect clinical data.
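
As one concrete illustration of scoring against a label distribution rather than a majority vote, the sketch below computes an expected accuracy from per-case probabilities that the true label is positive (for example, the fraction of annotators voting positive). This is only a simple instance of the kind of probabilistic metric the authors advocate, not their exact formulation.

```python
import numpy as np

def expected_accuracy(label_prob, y_pred):
    """Expected accuracy under probabilistic ground-truth labels.

    label_prob: per-case probability that the true label is positive
                (e.g. fraction of annotators who voted positive).
    y_pred:     hard binary predictions from the model.
    """
    label_prob = np.asarray(label_prob, dtype=float)
    y_pred = np.asarray(y_pred, dtype=int)
    # Probability of being correct: label_prob if we predict positive, else 1 - label_prob.
    return float(np.mean(np.where(y_pred == 1, label_prob, 1.0 - label_prob)))
```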

Interpreting Convolutional Neural Network Activation Maps with Hand-crafted Radiomics Features on Progression of Pediatric Craniopharyngioma after Irradiation Therapy

Wenjun Yang, Chuang Wang, Tina Davis, Jinsoo Uh, Chia-Ho Hua, Thomas E. Merchant

arXiv preprint · Sep 25, 2025
Purpose: Convolutional neural networks (CNNs) are promising for predicting treatment outcome in pediatric craniopharyngioma, but their decision mechanisms are difficult to interpret. We compared the activation maps of a CNN with hand-crafted radiomics features from a densely connected artificial neural network (ANN) to correlate with clinical decisions. Methods: A cohort of 100 pediatric craniopharyngioma patients was included. Binary tumor progression was classified by an ANN and a CNN with T1w, T2w, and FLAIR MRI as input. Hand-crafted radiomic features were calculated from the MRI using the LifeX software, and key features were selected by group lasso regularization and compared to the activation maps of the CNN. We evaluated the radiomics models by accuracy, area under the receiver operating characteristic curve (AUC), and confusion matrices. Results: The average accuracy for T1w, T2w, and FLAIR MRI was 0.85, 0.92, and 0.86 (ANOVA, F = 1.96, P = 0.18) with the ANN and 0.83, 0.81, and 0.70 (ANOVA, F = 10.11, P = 0.003) with the CNN. The average AUC of the ANN was 0.91, 0.97, and 0.90, and that of the CNN was 0.86, 0.88, and 0.75 for the three MRI sequences, respectively. The activation maps were correlated with tumor shape, minimum and maximum intensity, and texture features. Conclusions: Prediction of tumor progression in pediatric patients with craniopharyngioma achieved promising accuracy with both the ANN and CNN models. The activation maps extracted from different levels of the CNN were interpreted using the hand-crafted key features of the ANN.
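
The feature-selection step above uses group lasso regularization before relating the selected radiomics features to CNN activation maps. The sketch below is a hedged stand-in built on plain L1-penalised logistic regression from scikit-learn: it keeps the sparse-selection idea but ignores the group structure that group lasso enforces, and the regularisation strength is an arbitrary placeholder.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def select_radiomics_features(X, y, feature_names, C=0.1):
    """Keep radiomics features with nonzero coefficients under an L1 penalty.

    Stand-in for group lasso: plain L1 regularisation selects individual
    features rather than whole feature groups.
    """
    model = make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="l1", solver="liblinear", C=C),
    )
    model.fit(X, y)
    coefs = model.named_steps["logisticregression"].coef_.ravel()
    return [name for name, c in zip(feature_names, coefs) if abs(c) > 1e-8]
```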

LiLAW: Lightweight Learnable Adaptive Weighting to Meta-Learn Sample Difficulty and Improve Noisy Training

Abhishek Moturu, Anna Goldenberg, Babak Taati

arXiv preprint · Sep 25, 2025
Training deep neural networks in the presence of noisy labels and data heterogeneity is a major challenge. We introduce Lightweight Learnable Adaptive Weighting (LiLAW), a novel method that dynamically adjusts the loss weight of each training sample based on its evolving difficulty level, categorized as easy, moderate, or hard. Using only three learnable parameters, LiLAW adaptively prioritizes informative samples throughout training by updating these weights using a single mini-batch gradient descent step on the validation set after each training mini-batch, without requiring excessive hyperparameter tuning or a clean validation set. Extensive experiments across multiple general and medical imaging datasets, noise levels and types, loss functions, and architectures with and without pretraining demonstrate that LiLAW consistently enhances performance, even in high-noise environments. It is effective without heavy reliance on data augmentation or advanced regularization, highlighting its practicality. It offers a computationally efficient solution to boost model generalization and robustness in any neural network training setup.
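
A minimal PyTorch sketch of the three-parameter weighting idea follows: per-sample losses are binned into easy, moderate, and hard groups and re-weighted by softmax-normalised learnable logits. The tercile binning rule and the softmax normalisation are assumptions; the abstract only states that three learnable parameters are updated with a single validation mini-batch gradient step after each training mini-batch.

```python
import torch

class LiLAWWeights(torch.nn.Module):
    """Three learnable difficulty weights (easy, moderate, hard)."""

    def __init__(self):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(3))

    def forward(self, per_sample_loss):
        # Assumption: bin samples by loss terciles within the current mini-batch.
        detached = per_sample_loss.detach()
        q1 = torch.quantile(detached, 1.0 / 3.0)
        q2 = torch.quantile(detached, 2.0 / 3.0)
        bins = (detached > q1).long() + (detached > q2).long()  # 0=easy, 1=moderate, 2=hard
        weights = torch.softmax(self.logits, dim=0)[bins]
        return (weights * per_sample_loss).mean()

# After each training step on the model, the logits would receive one gradient
# step computed on a validation mini-batch loss, mirroring the schedule the
# abstract describes; that outer loop is omitted here.
```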

3D gadolinium-enhanced high-resolution near-isotropic pancreatic imaging at 3.0-T MR using deep-learning reconstruction.

Guan S, Poujol J, Gouhier E, Touloupas C, Delpla A, Boulay-Coletta I, Zins M

PubMed · Sep 24, 2025
To compare overall image quality, lesion conspicuity and detectability on 3D-T1w-GRE arterial phase high-resolution MR images reconstructed with deep learning (3D-DLR) against standard-of-care reconstruction (SOC-Recon) in patients with suspected pancreatic disease. Patients who underwent a pancreatic MR exam with a high-resolution 3D-T1w-GRE arterial phase acquisition on a 3.0-T MR system between December 2021 and June 2022 in our center were retrospectively included. A new deep learning-based reconstruction algorithm (3D-DLR) was used to additionally reconstruct the arterial phase images. Two radiologists blinded to the reconstruction type assessed images for image quality, artifacts and lesion conspicuity using a Likert scale and counted the lesions. Signal-to-noise ratio (SNR) and lesion contrast-to-noise ratio (CNR) were calculated for each reconstruction. Quantitative data were evaluated using paired t-tests. Ordinal data such as image quality, artifacts and lesion conspicuity were analyzed using paired Wilcoxon tests. Interobserver agreement for image quality and artifact assessment was evaluated using Cohen's kappa. Thirty-two patients (mean age, 62 ± 12 years; 16 female) were included. 3D-DLR significantly improved SNR for each pancreatic segment and lesion CNR compared to SOC-Recon (p < 0.01), and demonstrated a significantly higher average image quality score (3.34 vs 2.68, p < 0.01). 3D-DLR also significantly reduced artifacts compared to SOC-Recon (p < 0.01) for one radiologist. 3D-DLR exhibited significantly higher average lesion conspicuity (2.30 vs 1.85, p < 0.01). Sensitivity was higher with 3D-DLR than with SOC-Recon for both readers (1.00 vs 0.88 for reader 1 and 0.88 vs 0.83 for reader 2), although the differences were not statistically significant (p = 0.62 for both). 3D-DLR images demonstrated higher overall image quality, leading to better lesion conspicuity. 3D deep learning reconstruction can be applied to gadolinium-enhanced pancreatic 3D-T1w arterial phase high-resolution images without additional acquisition time to further improve image quality and lesion conspicuity. 3D-DLR had not previously been applied to high-resolution pancreatic MRI sequences. This method improves SNR, CNR, and overall 3D-T1w arterial pancreatic image quality. Enhanced lesion conspicuity may improve pancreatic lesion detectability.
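
For context, the SNR and CNR values reported above are typically derived from region-of-interest statistics. A minimal sketch under one common convention (tissue mean divided by background standard deviation) is shown below; the ROI placement and noise definition actually used in the study are not given in the abstract, so this convention is an assumption.

```python
import numpy as np

def snr(roi_tissue, roi_background):
    """SNR = mean signal in a tissue ROI / standard deviation in a background ROI."""
    return float(np.mean(roi_tissue) / np.std(roi_background))

def cnr(roi_lesion, roi_parenchyma, roi_background):
    """CNR between lesion and surrounding parenchyma, relative to background noise."""
    return float(abs(np.mean(roi_lesion) - np.mean(roi_parenchyma)) / np.std(roi_background))
```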

Radiomics-based artificial intelligence (AI) models in colorectal cancer (CRC) diagnosis, metastasis detection, prognosis, and treatment response prediction.

Elahi R, Karami P, Amjadzadeh M, Nazari M

PubMed · Sep 24, 2025
Colorectal cancer (CRC) is the third most common cause of cancer-related morbidity and mortality in the world. Radiomics and radiogenomics are utilized for the high-throughput quantification of features from medical images, providing non-invasive means to characterize cancer heterogeneity and gain insight into the underlying biology. Such radiomics-based artificial intelligence (AI) methods have demonstrated great potential to improve the accuracy of CRC diagnosis and staging, to distinguish between benign and malignant lesions, to aid in the detection of lymph node and hepatic metastasis, and to predict treatment response and prognosis for patients. This review presents the latest evidence on the clinical applications of radiomics models based on different imaging modalities in CRC. We also discuss the challenges facing clinical translation, including differences in image acquisition, issues related to reproducibility, a lack of standardization, and limited external validation. Given the progress of machine learning (ML) and deep learning (DL) algorithms, radiomics is expected to have an important effect on the personalized treatment of CRC and to contribute to more accurate and individualized clinical decision-making in the future.

CPT-4DMR: Continuous sPatial-Temporal Representation for 4D-MRI Reconstruction

Xinyang Wu, Muheng Li, Xia Li, Orso Pusterla, Sairos Safai, Philippe C. Cattin, Antony J. Lomax, Ye Zhang

arXiv preprint · Sep 22, 2025
Four-dimensional MRI (4D-MRI) is a promising technique for capturing respiratory-induced motion in radiation therapy planning and delivery. Conventional 4D reconstruction methods, which typically rely on phase binning or separate template scans, struggle to capture temporal variability, complicate workflows, and impose heavy computational loads. We introduce a neural representation framework that treats respiratory motion as a smooth, continuous deformation steered by a 1D surrogate signal, completely replacing the conventional discrete sorting approach. The method fuses motion modeling with image reconstruction through two synergistic networks: a Spatial Anatomy Network (SAN) encodes a continuous 3D anatomical representation, while a Temporal Motion Network (TMN), guided by Transformer-derived respiratory signals, produces temporally consistent deformation fields. Evaluation on a free-breathing dataset of 19 volunteers demonstrates that our template- and phase-free method accurately captures both regular and irregular respiratory patterns, while preserving vessel and bronchial continuity with high anatomical fidelity. The proposed method significantly improves efficiency, reducing the total processing time from approximately five hours required by conventional discrete sorting methods to just 15 minutes of training. Furthermore, it enables inference of each 3D volume in under one second. The framework accurately reconstructs 3D images at any respiratory state, achieves superior performance compared to conventional methods, and demonstrates strong potential for application in 4D radiation therapy planning and real-time adaptive treatment.
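
Conceptually, the SAN/TMN split amounts to one network that stores a static anatomy as a function of position and a second network that maps position plus the respiratory surrogate to a displacement used to warp query points before sampling that anatomy. The PyTorch sketch below illustrates only this structure; the layer sizes, activations, and the omission of positional encoding and of the Transformer-derived surrogate extraction are all assumptions.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256, depth=4):
    """Plain fully connected network used for both sub-models in this sketch."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class ContinuousSpatioTemporal4D(nn.Module):
    def __init__(self):
        super().__init__()
        self.san = mlp(3, 1)  # (x, y, z) -> intensity of the reference anatomy
        self.tmn = mlp(4, 3)  # (x, y, z, surrogate) -> displacement (dx, dy, dz)

    def forward(self, xyz, surrogate):
        # xyz: (N, 3) query coordinates; surrogate: (N, 1) respiratory signal value.
        disp = self.tmn(torch.cat([xyz, surrogate], dim=-1))
        return self.san(xyz + disp)  # intensity sampled at the deformed location
```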

Guidance for reporting artificial intelligence technology evaluations for ultrasound scanning in regional anaesthesia (GRAITE-USRA): an international multidisciplinary consensus reporting framework.

Zhang X, Ferry J, Hewson DW, Collins GS, Wiles MD, Zhao Y, Martindale APL, Tomaschek M, Bowness JS

PubMed · Sep 18, 2025
The application of artificial intelligence to enhance the clinical practice of ultrasound-guided regional anaesthesia is of increasing interest to clinicians, researchers and industry. The lack of standardised reporting for studies in this field hinders the comparability, reproducibility and integration of findings. We aimed to develop a consensus-based reporting guideline for research evaluating artificial intelligence applications for ultrasound scanning in regional anaesthesia. We followed methodology recommended by the EQUATOR Network for the development of reporting guidelines. Review of published literature and expert consultation generated a preliminary list of candidate reporting items. An international, multidisciplinary, modified Delphi process was then undertaken, involving experts from clinical practice, academia and industry. Two rounds of expert consultation were conducted, in which participants evaluated each item for inclusion in a final reporting guideline, followed by an online discussion. A total of 67 experts participated in the first Delphi round, 63 in the second round and 25 in the roundtable consensus meeting. The GRAITE-USRA reporting guideline comprises 40 items addressing key aspects of reporting in artificial intelligence research for ultrasound scanning in regional anaesthesia. Specific items include ultrasound acquisition protocols and operator expertise, which are not covered in existing artificial intelligence reporting guidelines. The GRAITE-USRA reporting guideline provides a minimum set of recommendations for artificial intelligence-related research for ultrasound scanning in regional anaesthesia. Its adoption will promote consistent reporting standards, enhance transparency, improve study reproducibility and ultimately support the effective integration of evidence into clinical practice.

Dose reduction in 4D CT imaging: Breathing signal-guided deep learning-driven data acquisition.

Wimmert L, Gauer T, Dickmann J, Hofmann C, Sentker T, Werner R

PubMed · Sep 18, 2025
4D CT imaging is essential for radiotherapy planning in thoracic tumors. However, current protocols tend to acquire more projection data than is strictly necessary for reconstructing the 4D CT, potentially leading to unnecessary radiation exposure that conflicts with the ALARA (As Low As Reasonably Achievable) principle. We propose a deep learning (DL)-driven approach that uses the patient's breathing signal to guide data acquisition, aiming to acquire only the necessary projection data. This retrospective study analyzed 1,415 breathing signals from 294 patients, with a 75/25 training/validation split at the patient level. Based on the signals, a DL model was trained to predict optimal beam-on events for projection data acquisition. Model testing was performed on 104 independent clinical 4D CT scans. The performance of the model was assessed by measuring the temporal alignment between predicted and optimal beam-on events. To assess the impact on the reconstructed images, each 4D dataset was reconstructed twice: (1) using all clinically acquired projections (reference) and (2) using only the model-selected projections (dose-reduced). Reference and dose-reduced images were compared using Dice coefficients for organ segmentations, deformable image registration (DIR)-based displacement fields, artifact frequency, and tumor segmentation agreement, the latter evaluated in terms of Hausdorff distance and tumor motion ranges. The proposed approach reduced beam-on time and imaging dose by a median of 29% (IQR: 24-35%), corresponding to an 11.6 mGy dose reduction for a standard 4D CT CTDIvol of 40 mGy. Temporal alignment between predicted and optimal beam-on events showed only marginal differences. Similarly, the reconstructed dose-reduced images showed only minimal differences from the reference images, demonstrated by high lung and liver segmentation Dice values, small-magnitude DIR displacement fields, and unchanged artifact frequency. Minor deviations in tumor segmentation and motion ranges compared to the reference suggest only minimal impact of the proposed approach on treatment planning. The proposed DL-driven data acquisition approach can reduce radiation exposure during 4D CT imaging while preserving diagnostic quality, offering a clinically viable, ALARA-adherent solution for 4D CT imaging.
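
The quoted dose figure follows directly from the beam-on duty cycle: reducing a 40 mGy CTDIvol protocol by 29% saves 0.29 × 40 = 11.6 mGy. The sketch below makes that bookkeeping explicit for a predicted beam-on mask; the assumption that dose scales linearly with beam-on time is mine, not a statement from the paper.

```python
import numpy as np

def dose_saving(beam_on_mask, reference_ctdi_vol_mgy=40.0):
    """Beam-on duty cycle and implied dose saving for a predicted beam-on mask.

    beam_on_mask: boolean array over acquisition time samples, True where the
    model keeps the beam on. Assumes dose scales linearly with beam-on time.
    """
    duty_cycle = float(np.mean(beam_on_mask))           # fraction of time the beam is on
    delivered = duty_cycle * reference_ctdi_vol_mgy     # dose actually delivered
    return duty_cycle, reference_ctdi_vol_mgy - delivered  # (duty cycle, mGy saved)

# Example: a mask that keeps the beam on 71% of the time saves about 11.6 mGy
# of a 40 mGy protocol, matching the median reduction reported in the abstract.
```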