Assessment of biventricular cardiac function using free-breathing artificial intelligence cine with motion correction: Comparison with standard multiple breath-holding cine.

Ran L, Yan X, Zhao Y, Yang Z, Chen Z, Jia F, Song X, Huang L, Xia L

PubMed · Jul 1, 2025
To assess image quality and biventricular function using a free-breathing artificial intelligence cine method with motion correction (FB AI MOCO). A total of 72 participants (mean age 38.3 ± 15.4 years, 40 males) prospectively enrolled in this single-center, cross-sectional study underwent cine scans using standard breath-holding (BH) cine and FB AI MOCO cine at 3.0 Tesla. Image quality was evaluated on a 5-point ordinal Likert scale for blood-pool to myocardium contrast, endocardial edge definition, and artifacts, with the overall quality score calculated as the equally weighted average of the three criteria; apparent signal-to-noise ratio (aSNR) and estimated contrast-to-noise ratio (eCNR) were also assessed. Biventricular functional parameters, including left ventricular (LV) and right ventricular (RV) end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), and ejection fraction (EF), as well as LV end-diastolic mass (LVEDM), were assessed. Comparisons between the two sequences used the paired t-test and Wilcoxon signed-rank test, and correlation was assessed with the Pearson correlation coefficient. Agreement of quantitative parameters was assessed using the intraclass correlation coefficient (ICC) and Bland-Altman analysis. P < 0.05 was considered statistically significant. The total acquisition time of the entire stack for FB AI MOCO cine (14.7 s ± 1.9 s) was notably shorter than that for standard BH cine (82.6 s ± 11.9 s, P < 0.001). aSNR did not differ significantly between FB AI MOCO cine and standard BH cine (76.7 ± 20.7 vs. 79.8 ± 20.7, P = 0.193). The eCNR of FB AI MOCO cine was higher than that of standard BH cine (191.6 ± 54.0 vs. 155.8 ± 68.4, P < 0.001), as was the score for blood-pool to myocardium contrast (4.6 ± 0.5 vs. 4.4 ± 0.6, P = 0.003). Qualitative scores for endocardial edge definition (4.2 ± 0.5 vs. 4.3 ± 0.7, P = 0.123), artifact presence (4.3 ± 0.6 vs. 4.1 ± 0.8, P = 0.085), and overall image quality (4.4 ± 0.4 vs. 4.3 ± 0.6, P = 0.448) showed no significant differences between the two methods. Representative RV and LV functional parameters - including RVEDV (102.2 (86.4, 120.4) ml vs. 104.0 (88.5, 120.3) ml, P = 0.294), RVEF (31.0 ± 11.1 % vs. 31.2 ± 11.0 %, P = 0.570), and LVEDV (106.2 (86.7, 131.3) ml vs. 105.8 (84.4, 130.3) ml, P = 0.450) - also did not differ significantly between the two methods. Strong correlations (r > 0.900) and excellent agreement (ICC > 0.900) were found for all biventricular functional parameters between the two sequences. In subgroups with reduced LVEF (<50 %, n = 24) or elevated heart rate (≥80 bpm, n = 17), no significant differences were observed in any biventricular functional metric (P > 0.05 for all) between the two sequences. Compared with multiple-BH cine, FB AI MOCO cine achieved comparable image quality and biventricular functional parameters with a markedly shorter scan time, suggesting promising potential for clinical application.
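For readers who want to reproduce this style of agreement analysis, the sketch below illustrates the paired t-test, Pearson correlation, and Bland-Altman bias and limits of agreement named in the abstract; the arrays are illustrative values, not the study data.

```python
# Minimal sketch of the paired agreement analysis described above
# (paired t-test, Pearson correlation, Bland-Altman bias and limits of
# agreement). Variable names and the toy data are illustrative only.
import numpy as np
from scipy import stats

# Hypothetical paired LVEDV measurements (ml) from the two sequences.
bh_cine = np.array([106.0, 98.5, 120.3, 88.7, 131.1, 104.2])
fb_ai_moco = np.array([107.1, 97.9, 119.6, 90.0, 130.2, 105.0])

# Paired comparison and correlation between sequences.
t_stat, p_val = stats.ttest_rel(bh_cine, fb_ai_moco)
r, r_p = stats.pearsonr(bh_cine, fb_ai_moco)

# Bland-Altman: bias and 95% limits of agreement.
diff = fb_ai_moco - bh_cine
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)

print(f"paired t-test p={p_val:.3f}, Pearson r={r:.3f}")
print(f"Bland-Altman bias={bias:.2f} ml, LoA=({bias - loa:.2f}, {bias + loa:.2f}) ml")
```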

Development and validation of an MRI spatiotemporal interaction model for early noninvasive prediction of neoadjuvant chemotherapy response in breast cancer: a multicentre study.

Tang W, Jin C, Kong Q, Liu C, Chen S, Ding S, Liu B, Feng Z, Li Y, Dai Y, Zhang L, Chen Y, Han X, Liu S, Chen D, Weng Z, Liu W, Wei X, Jiang X, Zhou Q, Mao N, Guo Y

PubMed · Jul 1, 2025
The accurate and early evaluation of response to neoadjuvant chemotherapy (NAC) in breast cancer is crucial for optimizing treatment strategies and minimizing unnecessary interventions. While deep learning (DL)-based approaches have shown promise in medical imaging analysis, existing models often fail to comprehensively integrate spatial and temporal tumor dynamics. This study aims to develop and validate a spatiotemporal interaction (STI) model based on longitudinal MRI data to predict pathological complete response (pCR) to NAC in breast cancer patients. This study included retrospective and prospective datasets from five medical centers in China, collected from June 2018 to December 2024. These datasets were assigned to the primary cohort (including training and internal validation sets), external validation cohorts, and a prospective validation cohort. DCE-MRI scans from both pre-NAC (T0) and early-NAC (T1) stages were collected for each patient, along with surgical pathology results. A Siamese network-based STI model was developed, integrating spatial features from tumor segmentation with temporal dependencies using a transformer-based multi-head attention mechanism. This model was designed to simultaneously capture spatial heterogeneity and temporal dynamics, enabling accurate prediction of NAC response. The STI model's performance was evaluated using the area under the ROC curve (AUC) and Precision-Recall curve (AP), accuracy, sensitivity, and specificity. Additionally, the I-SPY1 and I-SPY2 datasets were used for Kaplan-Meier survival analysis and to explore the biological basis of the STI model, respectively. The prospective cohort was registered with Chinese Clinical Trial Registration Centre (ChiCTR2500102170). A total of 1044 patients were included in this study, with the pCR rate ranging from 23.8% to 35.9%. The STI model demonstrated good performance in early prediction of NAC response in breast cancer. In the external validation cohorts, the AUC values were 0.923 (95% CI: 0.859-0.987), 0.892 (95% CI: 0.821-0.963), and 0.913 (95% CI: 0.835-0.991), all outperforming the single-timepoint T0 or T1 models, as well as models with spatial information added (all p < 0.05, Delong test). Additionally, the STI model significantly outperformed the clinical model (p < 0.05, Delong test) and radiologists' predictions. In the prospective validation cohort, the STI model identified 90.2% (37/41) of non-pCR and 82.6% (19/23) of pCR patients, reducing misclassification rates by 58.7% and 63.3% compared to radiologists. This indicates that these patients might benefit from treatment adjustment or continued therapy in the early NAC stage. Survival analysis showed a significant correlation between the STI model and both recurrence-free survival (RFS) and overall survival (OS) in breast cancer patients. Further investigation revealed that favorable NAC responses predicted by the STI model were closely linked to upregulated immune-related genes and enhanced immune cell infiltration. Our study established a novel noninvasive STI model that integrates the spatiotemporal evolution of MRI before and during NAC to achieve early and accurate pCR prediction, offering potential guidance for personalized treatment. 
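To make the Siamese spatiotemporal idea concrete, the sketch below shows a weight-shared encoder applied to the T0 and T1 volumes with multi-head attention fusing the two timepoint embeddings before a pCR classifier. The encoder, dimensions, and hyperparameters are placeholders for illustration, not the authors' architecture.

```python
# Illustrative PyTorch sketch of a Siamese spatiotemporal fusion head:
# a shared encoder embeds the pre-NAC (T0) and early-NAC (T1) volumes,
# and multi-head attention lets the two timepoint tokens interact before
# a binary pCR classifier. Dimensions and the encoder are placeholders.
import torch
import torch.nn as nn

class SiameseSTI(nn.Module):
    def __init__(self, feat_dim=256, n_heads=8):
        super().__init__()
        # Shared (weight-tied) encoder applied to both timepoints; a real
        # model would use a deeper 3D CNN over the segmented tumour region.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, x_t0, x_t1):
        # Stack the two timepoint embeddings as a length-2 token sequence.
        z = torch.stack([self.encoder(x_t0), self.encoder(x_t1)], dim=1)  # (B, 2, D)
        fused, _ = self.attn(z, z, z)               # timepoints attend to each other
        return self.classifier(fused.mean(dim=1))   # pCR logit

logit = SiameseSTI()(torch.randn(2, 1, 32, 64, 64), torch.randn(2, 1, 32, 64, 64))
```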
This study was supported by the National Natural Science Foundation of China (82302314, 62271448, 82171920, 81901711), Basic and Applied Basic Research Foundation of Guangdong Province (2022A1515110792, 2023A1515220097, 2024A1515010653), Medical Scientific Research Foundation of Guangdong Province (A2023073, A2024116), Science and Technology Projects in Guangzhou (2023A04J1275, 2024A03J1030, 2025A03J4163, 2025A03J4162); Guangzhou First People's Hospital Frontier Medical Technology Project (QY-C04).

Dual-type deep learning-based image reconstruction for advanced denoising and super-resolution processing in head and neck T2-weighted imaging.

Fujima N, Shimizu Y, Ikebe Y, Kameda H, Harada T, Tsushima N, Kano S, Homma A, Kwon J, Yoneyama M, Kudo K

PubMed · Jul 1, 2025
To assess the utility of dual-type deep learning (DL)-based image reconstruction, combining DL-based image denoising and super-resolution processing, by comparison with conventionally reconstructed images in head and neck fat-suppressed (Fs) T2-weighted imaging (T2WI). We retrospectively analyzed 43 patients who underwent head/neck Fs-T2WI for the assessment of head and neck lesions. All patients underwent two sets of Fs-T2WI scans, one with conventional and one with DL-based reconstruction. The Fs-T2WI with DL-based reconstruction was acquired with a 30% reduction in spatial resolution along both the x- and y-axes and a correspondingly shortened scan time. Qualitative and quantitative assessments were performed for both the conventional and DL-based reconstructions. For the qualitative assessment, we visually evaluated overall image quality, visibility of anatomical structures, degree of artifact(s), lesion conspicuity, and lesion edge sharpness using five-point grading. For the quantitative assessment, we measured the signal-to-noise ratio (SNR) of the lesion and the contrast-to-noise ratio (CNR) between the lesion and the adjacent or nearest muscle. In the qualitative analysis, significant differences between the conventional and DL-based reconstructions were observed for all evaluation items except the degree of artifact(s) (p < 0.001). In the quantitative analysis, a significant difference in SNR was observed between the conventional (21.4 ± 14.7) and DL-based reconstructions (26.2 ± 13.5) (p < 0.001). In the CNR assessment, the CNR between the lesion and the adjacent or nearest muscle in the DL-based Fs-T2WI (16.8 ± 11.6) was significantly higher than that in the conventional Fs-T2WI (14.2 ± 12.9) (p < 0.001). Dual-type DL-based image reconstruction with effective denoising and super-resolution processing provided high image quality in head and neck Fs-T2WI with a shortened scan time compared to the conventional imaging method.
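The abstract does not state its exact ROI or noise definitions for SNR and CNR; the sketch below uses a common convention (noise taken as the standard deviation within a background ROI) purely as an assumption to show how such metrics are typically computed.

```python
# Hedged sketch of lesion SNR and lesion-to-muscle CNR; the study's exact
# ROI and noise definitions are not given in the abstract, so a common
# convention (noise = SD of a background ROI) is assumed here.
import numpy as np

def snr(lesion_roi, noise_roi):
    return lesion_roi.mean() / noise_roi.std(ddof=1)

def cnr(lesion_roi, muscle_roi, noise_roi):
    return (lesion_roi.mean() - muscle_roi.mean()) / noise_roi.std(ddof=1)

rng = np.random.default_rng(0)
lesion, muscle, background = (rng.normal(300, 20, 500),
                              rng.normal(180, 20, 500),
                              rng.normal(5, 4, 500))
print(f"SNR={snr(lesion, background):.1f}, CNR={cnr(lesion, muscle, background):.1f}")
```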

Accelerating brain T2-weighted imaging using artificial intelligence-assisted compressed sensing combined with deep learning-based reconstruction: a feasibility study at 5.0T MRI.

Wen Y, Ma H, Xiang S, Feng Z, Guan C, Li X

PubMed · Jul 1, 2025
T2-weighted imaging (T2WI), renowned for its sensitivity to edema and lesions, faces clinical limitations due to prolonged scanning times, which increase patient discomfort and motion artifacts. Individually, artificial intelligence-assisted compressed sensing (ACS) and deep learning-based reconstruction (DLR) have demonstrated effectiveness in accelerated scanning. However, the synergistic potential of ACS combined with DLR at 5.0T remains unexplored. This study systematically evaluates the diagnostic efficacy of the integrated ACS-DLR technique for T2WI at 5.0T, comparing it to conventional parallel imaging (PI) protocols. The prospective analysis was performed on 98 participants who underwent brain T2WI scans using the ACS, DLR, and PI techniques. Two observers evaluated overall image quality, truncation artifacts, motion artifacts, cerebrospinal fluid flow artifacts, vascular pulsation artifacts, and the significance of lesions. Subjective rating differences among the three sequences were compared. Objective assessment involved the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) in gray matter, white matter, and cerebrospinal fluid for each sequence. The SNR, CNR, and acquisition time of each sequence were compared. The acquisition time for ACS and DLR was reduced by 78% relative to PI. The overall image quality of DLR was higher than that of ACS (P < 0.001) and equivalent to PI (P > 0.05). The SNR of the DLR sequence was the highest, and the CNR of DLR was higher than that of the ACS sequence (P < 0.001) and equivalent to PI (P > 0.05). The integration of ACS and DLR enables ultrafast acquisition of brain T2WI while maintaining superior SNR and comparable CNR relative to PI sequences. Trial registration: not applicable.
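The abstract does not name the statistical tests used for the three-way subjective comparison; a common approach for three paired, ordinal rating sets is a Friedman test followed by pairwise Wilcoxon signed-rank tests, sketched below with toy 5-point scores as an assumption rather than the authors' analysis.

```python
# Assumed analysis pattern for comparing subjective ratings across three
# paired sequences (ACS, DLR, PI): Friedman omnibus test plus pairwise
# Wilcoxon signed-rank follow-ups. Scores below are toy values.
import numpy as np
from scipy import stats

acs = np.array([4, 3, 4, 4, 3, 4, 4, 3])
dlr = np.array([5, 4, 4, 5, 4, 5, 4, 4])
pi  = np.array([5, 4, 5, 4, 4, 4, 5, 4])

print("Friedman:", stats.friedmanchisquare(acs, dlr, pi))
print("DLR vs ACS:", stats.wilcoxon(dlr, acs))
print("DLR vs PI:", stats.wilcoxon(dlr, pi))
```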

Artificial Intelligence Iterative Reconstruction for Dose Reduction in Pediatric Chest CT: A Clinical Assessment via Below 3 Years Patients With Congenital Heart Disease.

Zhang F, Peng L, Zhang G, Xie R, Sun M, Su T, Ge Y

PubMed · Jul 1, 2025
To assess the performance of a newly introduced deep learning-based reconstruction algorithm, the artificial intelligence iterative reconstruction (AIIR), in reducing the dose of pediatric chest CT, using image data from patients below 3 years of age with congenital heart disease (CHD). Lung images available from routine-dose cardiac CT angiography (CTA) in patients below 3 years of age with CHD were employed as the reference for evaluating the paired low-dose chest CT. A total of 191 subjects were prospectively enrolled; the dose for chest CT was reduced to ~0.1 mSv while the cardiac CTA protocol was kept unchanged. The low-dose chest CT images, obtained with the AIIR and the hybrid iterative reconstruction (HIR), were compared in image quality, i.e., overall image quality and lung structure depiction, and in diagnostic performance, i.e., severity assessment of pneumonia and airway stenosis. Compared with the reference, lung image quality was not significantly different on low-dose AIIR images (all P > 0.05) but was clearly inferior with HIR (all P < 0.05). Compared with HIR, low-dose AIIR images also achieved a pneumonia severity index closer to the reference (AIIR 4.32 ± 3.82 vs. Ref 4.37 ± 3.84, P > 0.05; HIR 5.12 ± 4.06 vs. Ref 4.37 ± 3.84, P < 0.05) and more consistent airway stenosis grading (consistently graded: AIIR 88.5% vs. HIR 56.5%). AIIR has the potential to enable a large dose reduction in chest CT of patients below 3 years of age while preserving image quality and achieving diagnostic results nearly equivalent to routine-dose scans.
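The two diagnostic endpoints reported above reduce to a paired comparison of pneumonia severity index values against the reference and a concordance fraction for airway stenosis grades; the toy sketch below illustrates both computations with made-up values.

```python
# Toy sketch of the two diagnostic comparisons reported above: paired
# pneumonia-severity-index differences versus the reference, and the
# fraction of airway-stenosis cases graded identically to the reference.
# Values are illustrative, not the study data.
import numpy as np
from scipy import stats

psi_ref  = np.array([4.1, 3.0, 6.2, 2.5, 5.0, 7.3])
psi_aiir = np.array([4.0, 3.1, 6.0, 2.6, 5.1, 7.2])
print("AIIR vs reference PSI:", stats.ttest_rel(psi_aiir, psi_ref))

grade_ref  = np.array([1, 2, 0, 3, 1, 2])
grade_aiir = np.array([1, 2, 0, 2, 1, 2])
print("consistently graded:", np.mean(grade_ref == grade_aiir))
```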

Assessment of AI-accelerated T2-weighted brain MRI, based on clinical ratings and image quality evaluation.

Nonninger JN, Kienast P, Pogledic I, Mallouhi A, Barkhof F, Trattnig S, Robinson SD, Kasprian G, Haider L

PubMed · Jul 1, 2025
To compare clinical ratings and signal-to-noise ratio (SNR) measures of a commercially available deep learning-based MRI reconstruction method (T2(DR)) against conventional T2-weighted turbo spin echo brain MRI (T2(CN)). 100 consecutive patients with various neurological conditions underwent both T2(DR) and T2(CN) on a Siemens Vida 3 T scanner with a 64-channel head coil in the same examination. Acquisition times were 3.33 min for T2(CN) and 1.04 min for T2(DR). Four neuroradiologists evaluated overall image quality (OIQ), diagnostic safety (DS), and image artifacts (IA), blinded to the acquisition mode. SNR and SNReff (adjusted for acquisition time) were calculated for air, grey and white matter, and cerebrospinal fluid. The mean patient age was 43.6 years (SD 20.3), with 54 females. The distribution of non-diagnostic ratings did not differ significantly between T2(CN) and T2(DR) (IA: p = 0.108; OIQ: p = 0.700; DS: p = 0.652). However, when considering the full spectrum of ratings, significant differences favouring T2(CN) emerged in OIQ (p = 0.003) and IA (p < 0.001). T2(CN) had higher SNR (157.9, SD 123.4) than T2(DR) (112.8, SD 82.7), p < 0.001, but T2(DR) demonstrated superior SNReff (14.1, SD 10.3) compared to T2(CN) (10.8, SD 8.5), p < 0.001. Our results suggest that while T2(DR) may be clinically applicable in a diagnostic setting, it does not fully match the quality of high-standard conventional T2(CN) MRI acquisitions.
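The abstract reports SNReff as "adjusted for acquisition time" without giving the formula; the common convention SNReff = SNR / √(acquisition time in seconds) approximately reproduces the reported group means, so it is sketched below as an assumption (differences remain because a mean of per-patient ratios need not equal the ratio of means).

```python
# Assumed definition of time-adjusted SNR efficiency: SNR / sqrt(T_acq),
# with acquisition time in seconds. The study's exact formula is not
# stated in the abstract; this convention roughly matches its numbers.
from math import sqrt

def snr_eff(snr, t_acq_min):
    return snr / sqrt(t_acq_min * 60.0)

print(round(snr_eff(157.9, 3.33), 1))  # ~11.2 vs reported 10.8 for T2(CN)
print(round(snr_eff(112.8, 1.04), 1))  # ~14.3 vs reported 14.1 for T2(DR)
```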

Accelerated Multi-b-Value DWI Using Deep Learning Reconstruction: Image Quality Improvement and Microvascular Invasion Prediction in BCLC Stage A Hepatocellular Carcinoma.

Zhu Y, Wang P, Wang B, Feng B, Cai W, Wang S, Meng X, Wang S, Zhao X, Ma X

PubMed · Jul 1, 2025
To investigate the effect of accelerated deep-learning (DL) multi-b-value DWI (Mb-DWI) on acquisition time, image quality, and prediction of microvascular invasion (MVI) in BCLC stage A hepatocellular carcinoma (HCC), compared to standard Mb-DWI. Patients who underwent liver MRI were prospectively enrolled. Subjective image quality, signal-to-noise ratio (SNR), lesion contrast-to-noise ratio (CNR), and Mb-DWI-derived parameters from various models (mono-exponential model, intravoxel incoherent motion, diffusion kurtosis imaging, and stretched exponential model) were calculated and compared between the two sequences. Mb-DWI parameters from each sequence were then compared between the MVI-positive and MVI-negative groups. ROC and logistic regression analyses were performed to evaluate and identify predictive performance. The study included 118 patients, and 48/118 (40.67%) lesions were identified as MVI positive. DL Mb-DWI significantly reduced acquisition time by 52.86%. DL Mb-DWI produced significantly higher overall image quality, SNR, and CNR than standard Mb-DWI. All diffusion-related parameters except the pseudo-diffusion coefficient showed significant differences between the two sequences. In both DL and standard Mb-DWI, the apparent diffusion coefficient, true diffusion coefficient (D), perfusion fraction (f), mean diffusivity (MD), mean kurtosis (MK), and distributed diffusion coefficient (DDC) values differed significantly between the MVI-positive and MVI-negative groups. The combination of D, f, and MK yielded the highest AUCs of 0.912 and 0.928 in the standard and DL sequences, respectively, with no significant difference in predictive efficiency. DL Mb-DWI significantly reduces acquisition time and improves image quality, with predictive performance comparable to standard Mb-DWI in discriminating MVI status in BCLC stage A HCC.
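As a rough illustration of how a multi-parameter predictor such as the reported D + f + MK combination can be evaluated with logistic regression and ROC AUC, the sketch below uses synthetic per-lesion parameters and labels; it is not the study's data or model.

```python
# Sketch of combining diffusion parameters (D, f, MK) in a logistic
# regression and scoring the combination with ROC AUC. Synthetic data
# stand in for the per-lesion parameters and MVI labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 118
mvi = rng.integers(0, 2, n)                      # MVI-positive = 1
X = np.column_stack([
    rng.normal(1.0 - 0.15 * mvi, 0.2, n),        # D (x1e-3 mm^2/s), toy effect
    rng.normal(0.25 + 0.05 * mvi, 0.08, n),      # f
    rng.normal(0.85 + 0.10 * mvi, 0.15, n),      # MK
])
probs = LogisticRegression().fit(X, mvi).predict_proba(X)[:, 1]
print("apparent AUC:", round(roc_auc_score(mvi, probs), 3))
```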

Radiation and contrast dose reduction in coronary CT angiography for slender patients with 70 kV tube voltage and deep learning image reconstruction.

Ren Z, Shen L, Zhang X, He T, Yu N, Zhang M

PubMed · Jul 1, 2025
To evaluate the radiation and contrast dose reduction potential of combining 70 kV tube voltage with deep learning image reconstruction (DLIR) in coronary computed tomography angiography (CCTA) for slender patients with body mass index (BMI) ≤ 25 kg/m². Sixty patients undergoing CCTA were randomly divided into two groups: group A with 120 kV and a contrast agent dose of 0.8 mL/kg, and group B with 70 kV and a contrast agent dose of 0.5 mL/kg. Group A used adaptive statistical iterative reconstruction-V (ASIR-V) at a 50% strength level (50%ASIR-V), while group B used 50%ASIR-V, DLIR at low level (DLIR-L), DLIR at medium level (DLIR-M), and DLIR at high level (DLIR-H) for image reconstruction. The CT values and SD values of the coronary arteries and pericardial fat were measured, and the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. Image quality was subjectively evaluated by 2 radiologists using a five-point scoring system. The effective radiation dose (ED) and contrast dose were calculated and compared. Compared with group A, group B reduced the radiation dose by 75.6% and the contrast dose by 32.9%. Group B exhibited higher coronary artery CT values than group A, and DLIR-L, DLIR-M, and DLIR-H in group B provided higher SNR values, CNR values, and subjective scores, among which DLIR-H had the lowest noise and the highest subjective scores. Using 70 kV combined with DLIR significantly reduces radiation and contrast dose while improving image quality in CCTA for slender patients, with DLIR-H providing the greatest image quality improvement. The combination of 70 kV and DLIR-H may therefore be used in CCTA for slender patients to substantially reduce radiation and contrast dose while improving image quality.
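The abstract reports effective dose without its formula; CT effective dose is commonly estimated as ED = DLP × k with a region-specific conversion coefficient. The sketch below uses that convention with an illustrative k and hypothetical DLP values chosen only to reproduce the reported 75.6% reduction; none of these numbers come from the study.

```python
# Assumed effective-dose estimate ED = DLP * k; k below is a commonly
# cited adult chest coefficient, and the DLP values are hypothetical,
# chosen only to illustrate the reported 75.6% dose reduction.
def effective_dose_msv(dlp_mgy_cm, k=0.014):
    return dlp_mgy_cm * k

dose_120kv = effective_dose_msv(250.0)   # hypothetical group A DLP (mGy*cm)
dose_70kv = effective_dose_msv(61.0)     # hypothetical group B DLP (mGy*cm)
print(f"dose reduction: {100 * (1 - dose_70kv / dose_120kv):.1f}%")
```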

A Machine Learning Model for Predicting the HER2 Positive Expression of Breast Cancer Based on Clinicopathological and Imaging Features.

Qin X, Yang W, Zhou X, Yang Y, Zhang N

PubMed · Jul 1, 2025
To develop a machine learning (ML) model based on clinicopathological and imaging features to predict human epidermal growth factor receptor 2 (HER2)-positive expression (HER2-p) in breast cancer (BC), and to compare its performance with that of a logistic regression (LR) model. A total of 2541 consecutive female patients with pathologically confirmed primary breast lesions were enrolled in this study. Based on chronological order, 2034 patients treated between January 2018 and December 2022 were designated as the retrospective development cohort, while 507 patients treated between January 2023 and May 2024 were designated as the prospective validation cohort. Within the development cohort, patients were randomly divided into a training cohort (n=1628) and a test cohort (n=406) in an 8:2 ratio. Pretreatment mammography (MG) and breast MRI data, along with clinicopathological features, were recorded. Extreme Gradient Boosting (XGBoost) combined with an artificial neural network (ANN), and multivariate LR analysis, were employed to extract features associated with HER2 positivity in BC and to develop an ANN model (using XGBoost-selected features) and an LR model, respectively. Predictive value was assessed using receiver operating characteristic (ROC) curves. Following the application of Recursive Feature Elimination with Cross-Validation (RFE-CV) for feature dimensionality reduction, the XGBoost algorithm identified tumor size, suspicious calcifications, Ki-67 index, spiculation, and minimum apparent diffusion coefficient (minimum ADC) as the key feature subset indicative of HER2-p in BC. The constructed ANN model consistently outperformed the LR model, achieving an area under the curve (AUC) of 0.853 (95% CI: 0.837-0.872) in the training cohort, 0.821 (95% CI: 0.798-0.853) in the test cohort, and 0.809 (95% CI: 0.776-0.841) in the validation cohort. The ANN model, built on the significant feature subset identified by the XGBoost algorithm with RFE-CV, demonstrates potential for predicting HER2-p in BC.
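A rough sketch of the described pipeline follows: XGBoost-driven recursive feature elimination with cross-validation (RFE-CV) to select a feature subset, then a small neural network trained on the selected features. The dataset, feature names, and hyperparameters are placeholders, not the study's configuration.

```python
# Sketch of an XGBoost + RFE-CV feature-selection step followed by an
# ANN classifier, evaluated by ROC AUC. Synthetic data and default-ish
# hyperparameters are used purely for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=600, n_features=20, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# RFE-CV ranks features by XGBoost importance and keeps the best subset.
selector = RFECV(XGBClassifier(n_estimators=200, eval_metric="logloss"),
                 step=1, cv=5, scoring="roc_auc").fit(X_tr, y_tr)

# Small ANN trained on the selected features only.
ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                    random_state=0).fit(selector.transform(X_tr), y_tr)
auc = roc_auc_score(y_te, ann.predict_proba(selector.transform(X_te))[:, 1])
print(f"selected {selector.n_features_} features, test AUC={auc:.3f}")
```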

An AI-based tool for prosthetic crown segmentation serving automated intraoral scan-to-CBCT registration in challenging high artifact scenarios.

Elgarba BM, Ali S, Fontenele RC, Meeus J, Jacobs R

PubMed · Jul 1, 2025
Accurately registering intraoral and cone beam computed tomography (CBCT) scans in patients with metal artifacts poses a significant challenge, and whether a cloud-based platform trained for artificial intelligence (AI)-driven segmentation can improve registration is unclear. The purpose of this clinical study was to validate a cloud-based platform trained for AI-driven segmentation of prosthetic crowns on CBCT scans and subsequent multimodal intraoral scan-to-CBCT registration in the presence of high metal artifact expression. A dataset of 30 time-matched maxillary and mandibular CBCT and intraoral scans, each containing at least 4 prosthetic crowns, was collected. CBCT acquisition involved placing cotton rolls between the cheeks and teeth to facilitate soft tissue delineation. Segmentation and registration were compared using either a semi-automated (SA) method or an AI-automated (AA) method. SA served as the clinical reference, in which prosthetic crowns and their radicular parts (natural roots or implants) were segmented by thresholding with point/surface-based registration. The AA method comprised fully automated segmentation and registration based on AI algorithms. The quantitative assessment compared AA's median surface deviation (MSD) and root mean square (RMS) in crown segmentation and subsequent intraoral scan-to-CBCT registration with those of SA. Additionally, segmented crown STL files were analyzed voxel-wise for comparison between AA and SA. A qualitative assessment of AA-based crown segmentation evaluated the need for refinement, while the AA-based registration assessment scrutinized the alignment of the registered intraoral scan with the CBCT teeth and soft tissue contours. Finally, the study compared the time efficiency and consistency of both methods. Quantitative outcomes were analyzed with the Kruskal-Wallis, Mann-Whitney, and Student t tests, and qualitative outcomes with the Wilcoxon test (all α=.05). Consistency was evaluated using the intraclass correlation coefficient (ICC). Quantitatively, the AA method excelled, with a Dice Similarity Coefficient of 0.91 for crown segmentation and an MSD of 0.03 ±0.05 mm for intraoral scan-to-CBCT registration. Additionally, AA achieved 91% clinically acceptable matching of teeth and gingiva on CBCT scans, surpassing the SA method's 80%. Furthermore, AA was significantly faster than SA (P<.05), being 200 times faster in segmentation and 4.5 times faster in registration. Both AA and SA exhibited excellent consistency in segmentation and registration, with ICC values of 0.99 and 1 for AA and 0.99 and 0.96 for SA, respectively. The novel cloud-based platform demonstrated accurate, consistent, and time-efficient prosthetic crown segmentation, as well as intraoral scan-to-CBCT registration, in scenarios with high artifact expression.
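For reference, the Dice Similarity Coefficient used above to compare automated and reference crown segmentations is computed on binary voxel masks as shown in the minimal sketch below; the toy masks are illustrative only.

```python
# Minimal sketch of the Dice Similarity Coefficient on binary voxel masks,
# the segmentation-overlap metric reported in the abstract.
import numpy as np

def dice(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

rng = np.random.default_rng(1)
ref = rng.random((64, 64, 64)) > 0.6   # toy reference mask
ai = ref.copy()
ai[:2] = ~ai[:2]                       # perturb a slab to mimic disagreement
print(f"DSC = {dice(ref, ai):.3f}")
```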