
Generation of multimodal realistic computational phantoms as a test-bed for validating deep learning-based cross-modality synthesis techniques.

Camagni F, Nakas A, Parrella G, Vai A, Molinelli S, Vitolo V, Barcellini A, Chalaszczyk A, Imparato S, Pella A, Orlandi E, Baroni G, Riboldi M, Paganelli C

Sep 27 2025
The validation of multimodal deep learning models for medical image translation is limited by the lack of high-quality, paired datasets. We propose a novel framework that leverages computational phantoms to generate realistic CT and MRI images, enabling reliable ground-truth datasets for robust validation of artificial intelligence (AI) methods that generate synthetic CT (sCT) from MRI, specifically for radiotherapy applications. Two CycleGANs (cycle-consistent generative adversarial networks) were trained to transfer the imaging style of real patients onto CT and MRI phantoms, producing synthetic data with realistic textures and continuous intensity distributions. These data were evaluated through paired assessments with the original phantoms, unpaired comparisons with patient scans, and dosimetric analysis using patient-specific radiotherapy treatment plans. Additional external validation was performed on public CT datasets to assess generalizability to unseen data. The resulting paired CT/MRI phantoms were used to validate a GAN-based model from the literature for sCT generation from abdominal MRI in particle therapy. Results showed strong anatomical consistency with the original phantoms, high histogram correlation with patient images (HistCC = 0.998 ± 0.001 for MRI, HistCC = 0.97 ± 0.04 for CT), and dosimetric accuracy comparable to real data. The novelty of this work lies in using generated phantoms as validation data for deep learning-based cross-modality synthesis techniques.
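
As a rough illustration of the histogram-based realism metric reported above, the sketch below computes a histogram correlation coefficient (HistCC) between two images as the Pearson correlation of their intensity histograms. The binning and the exact definition used by the authors are assumptions, and the arrays are random placeholders.

```python
import numpy as np

def histogram_correlation(img_a, img_b, bins=256, value_range=None):
    """Pearson correlation between the intensity histograms of two images.

    A HistCC of 1.0 means the two intensity distributions match exactly,
    in the spirit of the paired realism metric reported in the abstract.
    """
    if value_range is None:
        value_range = (min(img_a.min(), img_b.min()),
                       max(img_a.max(), img_b.max()))
    hist_a, _ = np.histogram(img_a, bins=bins, range=value_range, density=True)
    hist_b, _ = np.histogram(img_b, bins=bins, range=value_range, density=True)
    return np.corrcoef(hist_a, hist_b)[0, 1]

# Example: compare a style-transferred phantom slice against a patient scan
rng = np.random.default_rng(0)
synthetic = rng.normal(100, 20, (256, 256))   # placeholder for a CycleGAN output
patient = rng.normal(102, 21, (256, 256))     # placeholder for a real scan
print(f"HistCC = {histogram_correlation(synthetic, patient):.3f}")
```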

Ultra-fast whole-brain T2-weighted imaging in 7 seconds using dual-type deep learning reconstruction with single-shot acquisition: clinical feasibility and comparison with conventional methods.

Ikebe Y, Fujima N, Kameda H, Harada T, Shimizu Y, Kwon J, Yoneyama M, Kudo K

Sep 26 2025
To evaluate the image quality and clinical utility of ultra-fast T2-weighted imaging (UF-T2WI), which acquires all slice data in 7 s using a single-shot turbo spin-echo (SSTSE) technique combined with dual-type deep learning (DL) reconstruction incorporating DL-based image denoising and super-resolution processing, by comparing UF-T2WI with conventional T2WI. We analyzed data from 38 patients who underwent both conventional T2WI and UF-T2WI with dual-type DL-based image reconstruction. Two board-certified radiologists independently performed blinded qualitative assessments of the images obtained with UF-T2WI with DL and conventional T2WI, evaluating overall image quality, anatomical structure visibility, and levels of noise and artifacts. In cases involving central nervous system disease, lesion delineation was also assessed. The quantitative analysis included measurements of signal-to-noise ratios (SNRs) in white and gray matter and the contrast-to-noise ratio (CNR) between gray and white matter. Compared to conventional T2WI, UF-T2WI with DL received significantly higher ratings for overall image quality and lower noise and artifact levels (p < 0.001 for both readers). Anatomical visibility was significantly better in UF-T2WI for one reader, with no significant difference for the other. Lesion visibility in UF-T2WI was comparable to that in conventional T2WI. Quantitatively, the SNRs and CNRs were all significantly higher in UF-T2WI than in conventional T2WI (p < 0.001). The combination of SSTSE with dual-type DL reconstruction allows the acquisition of clinically acceptable T2WI images in just 7 s. This technique shows strong potential to reduce MRI scan times and improve clinical workflow efficiency.
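
The SNR and CNR measurements described above are ROI-based; a minimal sketch of the conventional definitions (mean signal over noise standard deviation, and absolute tissue contrast over noise standard deviation) follows. The exact ROI placement and noise estimation used in the study are not specified, so the values below are placeholders.

```python
import numpy as np

def snr(signal_roi, noise_roi):
    # SNR = mean signal intensity / standard deviation of background noise
    return signal_roi.mean() / noise_roi.std()

def cnr(roi_a, roi_b, noise_roi):
    # CNR = absolute mean difference between two tissues / noise SD
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

rng = np.random.default_rng(1)
white_matter = rng.normal(300, 15, 500)   # placeholder ROI intensities
gray_matter = rng.normal(380, 15, 500)
background = rng.normal(0, 10, 500)

print(f"SNR(WM) = {snr(white_matter, background):.1f}")
print(f"SNR(GM) = {snr(gray_matter, background):.1f}")
print(f"CNR(GM/WM) = {cnr(gray_matter, white_matter, background):.1f}")
```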

3D gadolinium-enhanced high-resolution near-isotropic pancreatic imaging at 3.0-T MR using deep-learning reconstruction.

Guan S, Poujol J, Gouhier E, Touloupas C, Delpla A, Boulay-Coletta I, Zins M

Sep 24 2025
To compare overall image quality, lesion conspicuity and detectability on 3D-T1w-GRE arterial phase high-resolution MR images with deep learning reconstruction (3D-DLR) against standard-of-care reconstruction (SOC-Recon) in patients with suspected pancreatic disease. Patients who underwent a pancreatic MR exam with a high-resolution 3D-T1w-GRE arterial phase acquisition on a 3.0-T MR system between December 2021 and June 2022 in our center were retrospectively included. A new deep learning-based reconstruction algorithm (3D-DLR) was used to additionally reconstruct arterial phase images. Two radiologists blinded to the reconstruction type assessed images for image quality, artifacts and lesion conspicuity using a Likert scale and counted the lesions. Signal-to-noise ratio (SNR) and lesion contrast-to-noise ratio (CNR) were calculated for each reconstruction. Quantitative data were evaluated using paired t-tests. Ordinal data such as image quality, artifacts and lesion conspicuity were analyzed using paired Wilcoxon tests. Interobserver agreement for image quality and artifact assessment was evaluated using Cohen's kappa. Thirty-two patients (mean age 62 years ± 12, 16 female) were included. 3D-DLR significantly improved SNR for each pancreatic segment and lesion CNR compared to SOC-Recon (p < 0.01), and demonstrated a significantly higher average image quality score (3.34 vs 2.68, p < 0.01). 3D-DLR also significantly reduced artifacts compared to SOC-Recon (p < 0.01) for one radiologist. 3D-DLR exhibited significantly higher average lesion conspicuity (2.30 vs 1.85, p < 0.01). Sensitivity was numerically higher with 3D-DLR than with SOC-Recon for both readers (1.00 vs 0.88 and 0.88 vs 0.83), although the differences were not significant (p = 0.62 for both). 3D-DLR images demonstrated higher overall image quality, leading to better lesion conspicuity. 3D deep learning reconstruction can be applied to gadolinium-enhanced pancreatic 3D-T1w arterial phase high-resolution images without additional acquisition time to further improve image quality and lesion conspicuity. 3D-DLR had not previously been applied to pancreatic MRI high-resolution sequences; this method improves SNR, CNR, and overall 3D-T1w arterial pancreatic image quality, and enhanced lesion conspicuity may improve pancreatic lesion detectability.
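
For the statistical comparisons described above, a minimal SciPy sketch follows: a paired t-test for continuous metrics such as SNR, and a Wilcoxon signed-rank test for ordinal Likert scores. The data are simulated placeholders, not study values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Placeholder per-patient SNR values for the two reconstructions
snr_soc = rng.normal(20, 3, 32)
snr_dlr = snr_soc + rng.normal(4, 1, 32)   # 3D-DLR assumed higher on average

# Paired t-test for continuous metrics (SNR, CNR)
t_stat, p_ttest = stats.ttest_rel(snr_dlr, snr_soc)

# Wilcoxon signed-rank test for ordinal Likert scores (image quality)
likert_soc = rng.integers(2, 4, 32)
likert_dlr = np.clip(likert_soc + rng.integers(0, 2, 32), 1, 4)
w_stat, p_wilcoxon = stats.wilcoxon(likert_dlr, likert_soc)

print(f"paired t-test: t = {t_stat:.2f}, p = {p_ttest:.4f}")
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {p_wilcoxon:.4f}")
```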

Radiomics-based artificial intelligence (AI) models in colorectal cancer (CRC) diagnosis, metastasis detection, prognosis, and treatment response prediction.

Elahi R, Karami P, Amjadzadeh M, Nazari M

Sep 24 2025
Colorectal cancer (CRC) is the third most common cause of cancer-related morbidity and mortality in the world. Radiomics and radiogenomics enable high-throughput quantification of features from medical images, providing non-invasive means to characterize cancer heterogeneity and gain insight into the underlying biology. Such radiomics-based artificial intelligence (AI) methods have demonstrated great potential to improve the accuracy of CRC diagnosis and staging, distinguish between benign and malignant lesions, aid in the detection of lymph node and hepatic metastases, and predict treatment response and patient prognosis. This review presents the latest evidence on the clinical applications of radiomics models based on different imaging modalities in CRC. We also discuss the challenges facing clinical translation, including differences in image acquisition, issues related to reproducibility, a lack of standardization, and limited external validation. Given the progress of machine learning (ML) and deep learning (DL) algorithms, radiomics is expected to have an important effect on the personalized treatment of CRC and to contribute to more accurate and individualized clinical decision-making in the future.
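
To make the "high-throughput quantification of features" concrete, here is a minimal, illustrative computation of a few first-order radiomic features from a segmented lesion. Real pipelines use standardized, IBSI-compliant tooling (e.g., PyRadiomics) with careful preprocessing; the volume and mask below are random placeholders.

```python
import numpy as np

def first_order_features(image, mask, bins=64):
    """A few first-order radiomic features from a segmented lesion.

    Minimal illustration only; not a substitute for a standardized
    radiomics toolkit.
    """
    voxels = image[mask].astype(float)
    hist, _ = np.histogram(voxels, bins=bins)
    p = hist / voxels.size           # bin probabilities
    p = p[p > 0]
    return {
        "mean": voxels.mean(),
        "variance": voxels.var(),
        "skewness": ((voxels - voxels.mean()) ** 3).mean() / voxels.std() ** 3,
        "entropy": -(p * np.log2(p)).sum(),
    }

rng = np.random.default_rng(3)
ct = rng.normal(40, 12, (64, 64, 64))   # placeholder CT volume (HU)
lesion = np.zeros(ct.shape, dtype=bool)
lesion[20:40, 20:40, 20:40] = True      # placeholder tumour mask
print(first_order_features(ct, lesion))
```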

Guidance for reporting artificial intelligence technology evaluations for ultrasound scanning in regional anaesthesia (GRAITE-USRA): an international multidisciplinary consensus reporting framework.

Zhang X, Ferry J, Hewson DW, Collins GS, Wiles MD, Zhao Y, Martindale APL, Tomaschek M, Bowness JS

Sep 18 2025
The application of artificial intelligence to enhance the clinical practice of ultrasound-guided regional anaesthesia is of increasing interest to clinicians, researchers and industry. The lack of standardised reporting for studies in this field hinders the comparability, reproducibility and integration of findings. We aimed to develop a consensus-based reporting guideline for research evaluating artificial intelligence applications for ultrasound scanning in regional anaesthesia. We followed methodology recommended by the EQUATOR Network for the development of reporting guidelines. Review of published literature and expert consultation generated a preliminary list of candidate reporting items. An international, multidisciplinary, modified Delphi process was then undertaken, involving experts from clinical practice, academia and industry. Two rounds of expert consultation were conducted, in which participants evaluated each item for inclusion in a final reporting guideline, followed by an online discussion. A total of 67 experts participated in the first Delphi round, 63 in the second round and 25 in the roundtable consensus meeting. The GRAITE-USRA reporting guideline comprises 40 items addressing key aspects of reporting in artificial intelligence research for ultrasound scanning in regional anaesthesia. Specific items include ultrasound acquisition protocols and operator expertise, which are not covered in existing artificial intelligence reporting guidelines. The GRAITE-USRA reporting guideline provides a minimum set of recommendations for artificial intelligence-related research for ultrasound scanning in regional anaesthesia. Its adoption will promote consistent reporting standards, enhance transparency, improve study reproducibility and ultimately support the effective integration of evidence into clinical practice.

Dose reduction in 4D CT imaging: Breathing signal-guided deep learning-driven data acquisition.

Wimmert L, Gauer T, Dickmann J, Hofmann C, Sentker T, Werner R

Sep 18 2025
4D CT imaging is essential for radiotherapy planning in thoracic tumors. However, current protocols tend to acquire more projection data than is strictly necessary for reconstructing the 4D CT, potentially leading to unnecessary radiation exposure and misalignment with the ALARA (As Low As Reasonably Achievable) principle. We propose a deep learning (DL)-driven approach that uses the patient's breathing signal to guide data acquisition, aiming to acquire only the necessary projection data. This retrospective study analyzed 1,415 breathing signals from 294 patients, with a 75/25 training/validation split at the patient level. Based on these signals, a DL model was trained to predict optimal beam-on events for projection data acquisition. Model testing was performed on 104 independent clinical 4D CT scans. Performance was assessed by measuring the temporal alignment between predicted and optimal beam-on events. To assess the impact on the reconstructed images, each 4D dataset was reconstructed twice: (1) using all clinically acquired projections (reference) and (2) using only the model-selected projections (dose-reduced). Reference and dose-reduced images were compared using Dice coefficients for organ segmentations, deformable image registration (DIR)-based displacement fields, artifact frequency, and tumor segmentation agreement, the latter evaluated in terms of Hausdorff distance and tumor motion ranges. The proposed approach reduced beam-on time and imaging dose by a median of 29% (IQR: 24-35%), corresponding to an 11.6 mGy dose reduction for a standard 4D CT CTDIvol of 40 mGy. Temporal alignment between predicted and optimal beam-on events showed only marginal differences. Similarly, the reconstructed dose-reduced images showed only minimal differences from the reference images, demonstrated by high lung and liver segmentation Dice values, small-magnitude DIR displacement fields, and unchanged artifact frequency. Minor deviations in tumor segmentation and motion ranges compared to the reference suggest only minimal impact of the proposed approach on treatment planning. The proposed DL-driven data acquisition approach can reduce radiation exposure during 4D CT imaging while preserving diagnostic quality, offering a clinically viable, ALARA-adherent solution for 4D CT imaging.
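
The Dice coefficient used above to compare reference and dose-reduced segmentations has a standard definition; a minimal sketch follows, with random placeholder masks and a simulated 1% voxel disagreement.

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice coefficient between two binary segmentations (1.0 = identical)."""
    seg_a, seg_b = seg_a.astype(bool), seg_b.astype(bool)
    intersection = np.logical_and(seg_a, seg_b).sum()
    denom = seg_a.sum() + seg_b.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Placeholder organ masks from the reference and dose-reduced reconstructions
rng = np.random.default_rng(4)
reference = rng.random((128, 128, 64)) > 0.5
dose_reduced = reference.copy()
flip = rng.random(reference.shape) < 0.01   # simulate 1% voxel disagreement
dose_reduced[flip] = ~dose_reduced[flip]
print(f"Dice = {dice(reference, dose_reduced):.3f}")
```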

Magnetization transfer MRI (MT-MRI) detects white matter damage beyond the primary site of compression in degenerative cervical myelopathy using a novel semi-automated analysis.

Muhammad F, Weber Ii KA, Haynes G, Villeneuve L, Smith L, Baha A, Hameed S, Khan AF, Dhaher Y, Parrish T, Rohan M, Smith ZA

Sep 14 2025
Degenerative cervical myelopathy (DCM) is the leading cause of spinal cord disorder in adults, yet conventional MRI cannot detect microstructural damage beyond the compression site. Current applications of the magnetization transfer ratio (MTR), while promising, suffer from limited standardization, operator dependence, and unclear added value over traditional metrics such as cross-sectional area (CSA). To address these limitations, we utilized our semi-automated analysis pipeline built on the Spinal Cord Toolbox (SCT) platform to automate MTR extraction. Our method integrates deep learning-based convolutional neural networks (CNNs) for spinal cord segmentation, vertebral labeling via the global curve optimization algorithm, and PAM50 template registration to enable automated MTR extraction. Using the Generic Spine Protocol, we acquired 3-T T2w and MT-MRI images from 30 patients with DCM and 15 age-matched healthy controls (HC). We computed MTR and CSA at the maximal compression level (C5-C6) and at a distant, uncompressed region (C2-C3). We extracted regional and tract-specific MTR using probabilistic maps in template space. Diagnostic accuracy was assessed with ROC analysis, and k-means clustering revealed patient subgroups based on neurological impairment. Correlation analysis assessed associations between MTR measures and DCM deficits. Patients with DCM showed significant MTR reductions in both compressed and uncompressed regions (p < 0.05). At C2-C3, MTR outperformed CSA (AUC 0.74 vs 0.69) in detecting spinal cord pathology. Tract-specific MTR values correlated with dexterity, grip strength, and balance deficits. Our reproducible, computationally robust pipeline links microstructural injury to clinical outcomes in DCM and provides a scalable framework for multi-site quantitative MRI analysis of the spinal cord.
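
The MTR underlying this pipeline is computed voxel-wise from acquisitions with and without the MT saturation pulse; a minimal sketch of the standard formula, MTR = 100 × (S_off − S_on) / S_off, follows. The signal values and cord mask are placeholders, not outputs of the SCT pipeline.

```python
import numpy as np

def mtr_map(mt_off, mt_on, eps=1e-6):
    """Voxel-wise magnetization transfer ratio in percent units.

    MTR = 100 * (S_off - S_on) / S_off, where S_off/S_on are the signals
    without and with the MT saturation pulse.
    """
    mtr = 100.0 * (mt_off - mt_on) / np.maximum(mt_off, eps)
    return np.clip(mtr, 0, 100)

rng = np.random.default_rng(5)
s_off = rng.normal(1000, 50, (64, 64))            # placeholder MT-off slice
s_on = s_off * rng.normal(0.65, 0.03, (64, 64))   # ~35% saturation assumed
cord_mask = np.zeros((64, 64), dtype=bool)
cord_mask[28:36, 28:36] = True                    # placeholder cord segmentation
print(f"mean cord MTR = {mtr_map(s_off, s_on)[cord_mask].mean():.1f}%")
```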

Deep learning-based volume of interest imaging in helical CT for image quality improvement and radiation dose reduction.

Zhou Z, Inoue A, Cox CW, McCollough CH, Yu L

Sep 13 2025
To develop a volume of interest (VOI) imaging technique in multi-detector-row helical CT to reduce radiation dose or improve image quality within the VOI. A deep learning method based on a residual U-Net architecture, named VOI-Net, was developed to correct truncation artifacts in VOI helical CT. Three patient cases (one chest CT of interstitial lung disease and two abdominopelvic CTs of liver tumours) were used for evaluation through simulation. VOI-Net effectively corrected truncation artifacts (root mean square error [RMSE] of 5.97 ± 2.98 Hounsfield units [HU] for the chest case and 3.12 ± 1.93 HU and 3.71 ± 1.87 HU for the two liver cases). Radiation dose was reduced by 71% without sacrificing image quality within a 10-cm-diameter VOI, compared to a full scan field of view (FOV) of 50 cm. With the same total energy deposited as in a full-FOV scan, image quality within the VOI matched that of a scan at 350% higher radiation dose. A radiologist confirmed improved lesion conspicuity and visibility of small linear reticulations associated with ground-glass opacity and liver tumour. By focusing radiation on the VOI and using VOI-Net in a helical scan, total radiation dose can be reduced, or image quality equivalent to that of a higher-dose standard full-FOV scan can be achieved within the VOI. A targeted helical VOI imaging technique enabled by a deep learning-based artifact correction method improves image quality within the VOI without increasing radiation dose.
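
A residual network like VOI-Net learns a correction that is added back to the input, so it only has to model the truncation artifact rather than the whole image. The toy PyTorch block below illustrates that residual-learning idea only; the actual VOI-Net architecture (U-Net depth, skip connections, 2D vs 3D) is not specified here, and this sketch makes no attempt to reproduce it.

```python
import torch
import torch.nn as nn

class ResidualCorrector(nn.Module):
    """Toy residual CNN in the spirit of VOI-Net: the network predicts a
    correction added to the truncated input (architecture assumed)."""

    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # residual connection: learn the artifact

model = ResidualCorrector()
truncated = torch.randn(1, 1, 128, 128)   # placeholder truncated VOI slice
corrected = model(truncated)
print(corrected.shape)                     # torch.Size([1, 1, 128, 128])
```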

AI-assisted detection of cerebral aneurysms on 3D time-of-flight MR angiography: user variability and clinical implications.

Liao L, Puel U, Sabardu O, Harsan O, Medeiros LL, Loukoul WA, Anxionnat R, Kerrien E

Sep 10 2025
The generalizability and reproducibility of AI-assisted detection of cerebral aneurysms on 3D time-of-flight MR angiography remain unclear. We aimed to evaluate physician performance using AI assistance, focusing on inter- and intra-user variability, and to identify factors influencing performance and their clinical implications. In this retrospective study, four state-of-the-art AI models were hyperparameter-optimized on an in-house dataset (2019-2021) and evaluated via 5-fold cross-validation on a public external dataset. The two best-performing models were selected for evaluation on an expert-revised external dataset restricted to saccular aneurysms without prior treatment. Five physicians, grouped by expertise, each performed two AI-assisted evaluations, one with each model. Lesion-wise sensitivity and false positives per case (FPs/case) were calculated for each physician-AI pair and for the AI models alone. Agreement was assessed using the kappa statistic. Aneurysm size comparisons used the Mann-Whitney U test. The in-house dataset included 132 patients with 206 aneurysms (mean size: 4.0 mm); the revised external dataset, 270 patients with 174 aneurysms (mean size: 3.7 mm). Standalone AI achieved 86.8% sensitivity and 0.58 FPs/case. With AI assistance, non-experts achieved 72.1% sensitivity and 0.037 FPs/case; experts, 88.6% and 0.076 FPs/case; the intermediate-level physician, 78.5% and 0.037 FPs/case. Intra-group agreement was 80% for non-experts (kappa: 0.57, 95% CI: 0.54-0.59) and 77.7% for experts (kappa: 0.53, 95% CI: 0.51-0.55). Among experts, false positives were smaller than true positives (2.7 vs. 3.8 mm, p < 0.001); there was no difference among non-experts (p = 0.09). Missed aneurysm locations were mainly model-dependent, while true- and false-positive locations reflected physician expertise. Non-experts more often rejected AI suggestions and added fewer annotations; experts were more conservative and added more. Evaluating AI models in isolation provides an incomplete view of their clinical applicability. Detection performance and patterns differ between standalone AI and AI-assisted use, and are modulated by physician expertise. Rigorous external validation is essential before clinical deployment.
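
Lesion-wise sensitivity and FPs/case aggregate detections across cases rather than per image; a minimal sketch follows, assuming each predicted aneurysm has already been matched to ground truth and labelled TP or FP upstream. The counts are invented for illustration.

```python
# Lesion-wise sensitivity and false positives per case from per-case
# detection counts (matching to ground truth assumed done upstream).
cases = [
    {"true_lesions": 2, "tp": 2, "fp": 0},
    {"true_lesions": 1, "tp": 0, "fp": 1},   # one missed aneurysm
    {"true_lesions": 0, "tp": 0, "fp": 0},   # negative case
]

total_tp = sum(c["tp"] for c in cases)
total_lesions = sum(c["true_lesions"] for c in cases)
total_fp = sum(c["fp"] for c in cases)

sensitivity = total_tp / total_lesions
fps_per_case = total_fp / len(cases)
print(f"lesion-wise sensitivity = {sensitivity:.1%}, FPs/case = {fps_per_case:.3f}")
```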

Integration of nested cross-validation, automated hyperparameter optimization, high-performance computing to reduce and quantify the variance of test performance estimation of deep learning models.

Calle P, Bates A, Reynolds JC, Liu Y, Cui H, Ly S, Wang C, Zhang Q, de Armendi AJ, Shettar SS, Fung KM, Tang Q, Pan C

Sep 10 2025
The variability and biases in the real-world performance benchmarking of deep learning models for medical imaging compromise their trustworthiness for real-world deployment. The common approach of holding out a single fixed test set fails to quantify the variance in the estimation of test performance metrics. This study introduces NACHOS (Nested and Automated Cross-validation and Hyperparameter Optimization using Supercomputing) to reduce and quantify the variance of test performance metrics of deep learning models. NACHOS integrates Nested Cross-Validation (NCV) and Automated Hyperparameter Optimization (AHPO) within a parallelized high-performance computing (HPC) framework. NACHOS was demonstrated on a chest X-ray repository and an Optical Coherence Tomography (OCT) dataset under multiple data partitioning schemes. Beyond performance estimation, DACHOS (Deployment with Automated Cross-validation and Hyperparameter Optimization using Supercomputing) is introduced to leverage AHPO and cross-validation to build the final model on the full dataset, improving expected deployment performance. The findings underscore the importance of NCV in quantifying and reducing estimation variance, AHPO in optimizing hyperparameters consistently across test folds, and HPC in ensuring computational feasibility. By integrating these methodologies, NACHOS and DACHOS provide a scalable, reproducible, and trustworthy framework for DL model evaluation and deployment in medical imaging. To maximize public availability, the full open-source codebase is provided at https://github.com/thepanlab/NACHOS.
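
A minimal nested cross-validation sketch in scikit-learn follows, standing in for the NACHOS idea at toy scale: the inner search plays the role of AHPO, and the spread of outer-fold scores quantifies the estimation variance that a single fixed test set would hide. The model, parameter grid, and synthetic data are illustrative stand-ins, not the paper's setup.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Synthetic stand-in for a medical imaging feature dataset
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

inner_cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)  # AHPO loop
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # test folds

# Inner loop: hyperparameter optimization on each outer training split
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1, 10]},
    cv=inner_cv,
)

# Outer loop: distribution of test scores, not a single point estimate
scores = cross_val_score(search, X, y, cv=outer_cv)
print(f"outer-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```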