Page 68 of 165 (1650 results)

Evaluation of deep learning reconstruction in accelerated knee MRI: comparison of visual and diagnostic performance metrics.

Wen S, Xu Y, Yang G, Huang F, Zeng Z

pubmed logopapers · Jun 23 2025
To investigate the clinical value of deep learning reconstruction (DLR) in accelerated magnetic resonance imaging (MRI) of the knee and compare its visual quality and diagnostic performance metrics with conventional fast spin-echo T2-weighted imaging with fat suppression (FSE-T2WI-FS). This prospective study included 116 patients with knee injuries. All patients underwent both conventional FSE-T2WI-FS and DLR-accelerated FSE-T2WI-FS scans on a 1.5-T MRI scanner. Two radiologists independently evaluated overall image quality, artifacts, and image sharpness using a 5-point Likert scale. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of lesion regions were measured. Subjective scores were compared using the Wilcoxon signed-rank test, SNR/CNR differences were analyzed via paired t tests, and inter-reader agreement was assessed using Cohen's kappa. The accelerated sequences with DLR achieved a 36% reduction in total scan time compared to conventional sequences (p < 0.05), shortening acquisition from 9 min 50 s to 6 min 15 s. Moreover, DLR demonstrated superior artifact suppression and enhanced quantitative image quality, with significantly higher SNR and CNR (p < 0.001). Despite these improvements, diagnostic equivalence was maintained: no significant differences were observed in overall image quality, sharpness (p > 0.05), or lesion detection rates. Inter-reader agreement was good (κ > 0.75), further validating the clinical reliability of the DLR technique. Using DLR-accelerated FSE-T2WI-FS reduces scan time, suppresses artifacts, and improves quantitative image quality while maintaining diagnostic accuracy comparable to conventional sequences. This technology holds promise for optimizing clinical workflows in MRI of the knee.
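The agreement and image-quality metrics used in this study have compact standard definitions. A minimal pure-Python sketch (function names are illustrative; a real analysis would use `scipy.stats` and `sklearn.metrics`):

```python
import statistics
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa between two raters over the same items."""
    n = len(ratings_a)
    # Observed agreement: fraction of items on which the raters match.
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement if the raters scored independently.
    fa, fb = Counter(ratings_a), Counter(ratings_b)
    pe = sum((fa[c] / n) * (fb[c] / n) for c in fa.keys() | fb.keys())
    return 1.0 if pe == 1.0 else (po - pe) / (1.0 - pe)

def snr(signal_pixels, noise_pixels):
    """SNR = mean signal intensity / SD of background noise."""
    return statistics.mean(signal_pixels) / statistics.stdev(noise_pixels)

def cnr(lesion_pixels, background_pixels, noise_pixels):
    """CNR = (mean lesion - mean background) / SD of background noise."""
    return (statistics.mean(lesion_pixels)
            - statistics.mean(background_pixels)) / statistics.stdev(noise_pixels)
```

A kappa above 0.75, as reported here, is conventionally read as good-to-excellent agreement.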

Enabling Early Identification of Malignant Vertebral Compression Fractures via 2.5D Convolutional Neural Network Model with CT Image Analysis.

Huang C, Li E, Hu J, Huang Y, Wu Y, Wu B, Tang J, Yang L

pubmed logopapers · Jun 23 2025
This study employed a retrospective data analysis approach combined with model development and validation. The present study introduces a 2.5D convolutional neural network (CNN) model leveraging CT imaging to facilitate the early detection of malignant vertebral compression fractures (MVCFs), potentially reducing reliance on invasive biopsies. Vertebral histopathological biopsy is recognized as the gold standard for differentiating between osteoporotic and malignant vertebral compression fractures (VCFs). Nevertheless, its application is restricted due to its invasive nature and high cost, highlighting the necessity for alternative methods to identify MVCFs. The clinical, imaging, and pathological data of patients who underwent vertebral augmentation and biopsy at Institution 1 and Institution 2 were collected and analyzed. Based on the vertebral CT images of these patients, 2D, 2.5D, and 3D CNN models were developed to identify patients with osteoporotic vertebral compression fractures (OVCF) and MVCF. To verify the clinical application value of the CNN model, two rounds of reader studies were performed. The 2.5D CNN model performed well, and its performance in identifying MVCF patients was significantly superior to that of the 2D and 3D CNN models. In the training dataset, the area under the receiver operating characteristic curve (AUC) of the 2.5D CNN model was 0.996, with an F1 score of 0.915. In the external test cohort, the AUC was 0.815, with an F1 score of 0.714. The 2.5D CNN model also enhanced clinicians' ability to identify MVCF patients. With the assistance of the 2.5D CNN model, the AUC of senior clinicians was 0.882, and the F1 score was 0.774. For junior clinicians, the 2.5D CNN model-assisted AUC was 0.784 and the F1 score was 0.667. The development of our 2.5D CNN model marks a significant step towards non-invasive identification of MVCF patients.
The 2.5D CNN model may be a potential model to assist clinicians in better identifying MVCF patients.
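A 2.5D CNN typically feeds each axial slice together with its neighbours as extra input channels, giving the network local 3D context at roughly 2D cost. A toy sketch of that input construction (the stacking depth and the clamping at volume edges are assumptions, not details from the paper):

```python
def make_2p5d_inputs(volume, context=1):
    """Build 2.5D samples from a stack of 2D slices: each slice plus
    `context` neighbours on either side, stacked as channels.
    Edge slices are clamped (repeated) so every sample has the same depth."""
    n = len(volume)
    samples = []
    for i in range(n):
        channels = [volume[min(max(i + k, 0), n - 1)]
                    for k in range(-context, context + 1)]
        samples.append(channels)
    return samples
```

Each sample can then be passed to an ordinary 2D convolutional stem with `2 * context + 1` input channels.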

Physiological Response of Tissue-Engineered Vascular Grafts to Vasoactive Agents in an Ovine Model.

Guo M, Villarreal D, Watanabe T, Wiet M, Ulziibayar A, Morrison A, Nelson K, Yuhara S, Hussaini SF, Shinoka T, Breuer C

pubmed logopapers · Jun 23 2025
Tissue-engineered vascular grafts (TEVGs) are emerging as promising alternatives to synthetic grafts, particularly in pediatric cardiovascular surgery. While TEVGs have demonstrated growth potential, compliance, and resistance to calcification, their functional integration into the circulation, especially their ability to respond to physiological stimuli, remains underexplored. Vasoreactivity, the dynamic contraction or dilation of blood vessels in response to vasoactive agents, is a key property of native vessels that affects systemic hemodynamics and long-term vascular function. This study aimed to develop and validate an <i>in vivo</i> protocol to assess the vasoreactive capacity of TEVGs implanted as inferior vena cava (IVC) interposition grafts in a large animal model. Bone marrow-seeded TEVGs were implanted in the thoracic IVC of Dorset sheep. A combination of intravascular ultrasound (IVUS) imaging and invasive hemodynamic monitoring was used to evaluate vessel response to norepinephrine (NE) and sodium nitroprusside (SNP). Cross-sectional luminal area changes were measured using a custom Python-based software package (VIVUS) that leverages deep learning for IVUS image segmentation. Physiological parameters including blood pressure, heart rate, and cardiac output were continuously recorded. NE injections induced significant, dose-dependent vasoconstriction of TEVGs, with peak reductions in luminal area averaging ∼15% and corresponding increases in heart rate and mean arterial pressure. Conversely, SNP did not elicit measurable vasodilation in TEVGs, likely due to structural differences in venous tissue, the low-pressure environment of the thoracic IVC, and systemic confounders. Overall, the TEVGs demonstrated active, rapid, and reversible vasoconstrictive behavior in response to pharmacologic stimuli. This study presents a novel <i>in vivo</i> method for assessing TEVG vasoreactivity using real-time imaging and hemodynamic data. 
TEVGs possess functional vasoactivity, suggesting they may play an active role in modulating venous return and systemic hemodynamics. These findings are particularly relevant for Fontan patients and other scenarios where dynamic venous regulation is critical. Future work will compare TEVG vasoreactivity with native veins and synthetic grafts to further characterize their physiological integration and potential clinical benefits.
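The luminal-area readout behind the reported ~15% constriction reduces to counting segmented pixels and scaling by the pixel size. A hedged sketch (the VIVUS package's actual interface is not shown here; these names are illustrative):

```python
def luminal_area_mm2(mask, pixel_spacing_mm):
    """Cross-sectional luminal area from a binary IVUS segmentation mask
    (rows of 0/1 values), assuming square pixels."""
    n_pixels = sum(sum(row) for row in mask)
    return n_pixels * pixel_spacing_mm ** 2

def percent_area_change(area_baseline, area_peak):
    """Relative luminal area change after a vasoactive agent.
    Negative values indicate vasoconstriction."""
    return (area_peak - area_baseline) / area_baseline * 100.0
```

With this convention, a peak NE response of 85 mm² against a 100 mm² baseline is a -15% change, matching the magnitude reported above.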

GPT-4o and Specialized AI in Breast Ultrasound Imaging: A comparative Study on Accuracy, Agreement, Limitations, and Diagnostic Potential.

Sanli DET, Sanli AN, Buyukdereli Atadag Y, Kurt A, Esmerer E

pubmed logopapers · Jun 23 2025
This study aimed to evaluate the ability of ChatGPT and Breast Ultrasound Helper, a special ChatGPT-based subprogram trained on ultrasound image analysis, to analyze and differentiate benign and malignant breast lesions on ultrasound images. Ultrasound images of histopathologically confirmed breast cancer and fibroadenoma patients were read by GPT-4o (the latest ChatGPT version) and Breast Ultrasound Helper (BUH), a tool from the "Explore" section of ChatGPT. Both were prompted in English using ACR BI-RADS Breast Ultrasound Lexicon criteria: lesion shape, orientation, margin, internal echo pattern, echogenicity, posterior acoustic features, microcalcifications or hyperechoic foci, perilesional hyperechoic rim, edema or architectural distortion, lesion size, and BI-RADS category. Two experienced radiologists evaluated the images and the responses of the programs in consensus. The outputs, BI-RADS category agreement, and benign/malignant discrimination were statistically compared. A total of 232 ultrasound images were analyzed, of which 133 (57.3%) were malignant and 99 (42.7%) benign. In comparative analysis, BUH showed superior performance overall, with higher kappa values and statistically significant results across multiple features (P < .001). However, the overall level of agreement with the radiologists' consensus for all features was similar for BUH (κ: 0.387-0.755) and GPT-4o (κ: 0.317-0.803). On the other hand, BI-RADS category agreement was slightly higher for GPT-4o than for BUH (69.4% versus 65.9%), but BUH was slightly more successful in distinguishing benign from malignant lesions (65.9% versus 67.7%). Although both AI tools show moderate-to-good performance in ultrasound image analysis, their limited compatibility with radiologists' evaluations and BI-RADS categorization suggests that their clinical application in breast ultrasound interpretation remains premature and unreliable.
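The two headline comparisons, BI-RADS category agreement and benign/malignant discrimination, can be sketched as simple proportions. Note that the BI-RADS ≥ 4 cutoff for calling a lesion malignant is a common convention assumed here, not something stated in the abstract:

```python
def category_agreement(ai_birads, reader_birads):
    """Fraction of cases where the AI's BI-RADS category matches
    the radiologists' consensus category."""
    return sum(a == r for a, r in zip(ai_birads, reader_birads)) / len(ai_birads)

def benign_malignant_accuracy(ai_birads, truth_malignant, cutoff=4):
    """Collapse BI-RADS to binary: categories >= cutoff are called malignant
    (cutoff 4 is an illustrative convention, not from the paper)."""
    preds = [c >= cutoff for c in ai_birads]
    return sum(p == t for p, t in zip(preds, truth_malignant)) / len(preds)
```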

Deep learning-quantified body composition from positron emission tomography/computed tomography and cardiovascular outcomes: a multicentre study.

Miller RJH, Yi J, Shanbhag A, Marcinkiewicz A, Patel KK, Lemley M, Ramirez G, Geers J, Chareonthaitawee P, Wopperer S, Berman DS, Di Carli M, Dey D, Slomka PJ

pubmed logopapers · Jun 23 2025
Positron emission tomography (PET)/computed tomography (CT) myocardial perfusion imaging (MPI) is a vital diagnostic tool, especially in patients with cardiometabolic syndrome. Low-dose CT scans are routinely performed with PET for attenuation correction and potentially contain valuable data about body tissue composition. Deep learning and image processing were combined to automatically quantify skeletal muscle (SM), bone and adipose tissue from these scans and then evaluate their associations with death or myocardial infarction (MI). In PET MPI from three sites, deep learning quantified SM, bone, epicardial adipose tissue (EAT), subcutaneous adipose tissue (SAT), visceral adipose tissue (VAT), and intermuscular adipose tissue (IMAT). Sex-specific thresholds for abnormal values were established. Associations with death or MI were evaluated using unadjusted and multivariable models adjusted for clinical and imaging factors. This study included 10 085 patients, with median age 68 (interquartile range 59-76) and 5767 (57%) male. Body tissue segmentations were completed in 102 ± 4 s. Higher VAT density was associated with an increased risk of death or MI in both unadjusted [hazard ratio (HR) 1.40, 95% confidence interval (CI) 1.37-1.43] and adjusted (HR 1.24, 95% CI 1.19-1.28) analyses, with similar findings for IMAT, SAT, and EAT. Patients with elevated VAT density and reduced myocardial flow reserve had a significantly increased risk of death or MI (adjusted HR 2.49, 95% CI 2.23-2.77). Volumetric body tissue composition can be obtained rapidly and automatically from standard cardiac PET/CT. This new information provides a detailed, quantitative assessment of sarcopenia and cardiometabolic health for physicians.
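The study establishes sex-specific thresholds for abnormal tissue values; one simple way to derive such cutoffs (the nearest-rank percentile convention here is an assumption, not the paper's definition) is:

```python
def percentile(values, q):
    """Nearest-rank percentile (a simple convention for this sketch)."""
    s = sorted(values)
    idx = max(0, min(len(s) - 1, round(q / 100 * len(s)) - 1))
    return s[idx]

def sex_specific_thresholds(measurements, sexes, q=80):
    """Derive an 'abnormal' cutoff per sex as the q-th percentile of that
    sex's distribution. The actual threshold definition in the study
    may differ; this illustrates the stratification only."""
    out = {}
    for sex in set(sexes):
        vals = [m for m, s in zip(measurements, sexes) if s == sex]
        out[sex] = percentile(vals, q)
    return out
```

Patients above their sex's cutoff would then be flagged for the survival analysis.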

Cost-effectiveness of a novel AI technology to quantify coronary inflammation and cardiovascular risk in patients undergoing routine coronary computed tomography angiography.

Tsiachristas A, Chan K, Wahome E, Kearns B, Patel P, Lyasheva M, Syed N, Fry S, Halborg T, West H, Nicol E, Adlam D, Modi B, Kardos A, Greenwood JP, Sabharwal N, De Maria GL, Munir S, McAlindon E, Sohan Y, Tomlins P, Siddique M, Shirodaria C, Blankstein R, Desai M, Neubauer S, Channon KM, Deanfield J, Akehurst R, Antoniades C

pubmed logopapers · Jun 23 2025
Coronary computed tomography angiography (CCTA) is a first-line investigation for chest pain in patients with suspected obstructive coronary artery disease (CAD). However, many acute cardiac events occur in the absence of obstructive CAD. We assessed the lifetime cost-effectiveness of integrating a novel artificial intelligence-enhanced image analysis algorithm (AI-Risk) that stratifies the risk of cardiac events by quantifying coronary inflammation, combined with the extent of coronary artery plaque and clinical risk factors, by analysing images from routine CCTA. A hybrid decision-tree with population cohort Markov model was developed from 3393 consecutive patients who underwent routine CCTA for suspected obstructive CAD and were followed up for major adverse cardiac events over a median (interquartile range) of 7.7 (6.4-9.1) years. In a prospective real-world evaluation survey of 744 consecutive patients undergoing CCTA for chest pain investigation, the availability of AI-Risk assessment led to treatment initiation or intensification in 45% of patients. In a further prospective study of 1214 consecutive patients with extensive guideline-recommended cardiovascular risk profiling, AI-Risk stratification led to treatment initiation or intensification in 39% of patients beyond the current clinical guideline recommendations. Treatment guided by AI-Risk modelled over a lifetime horizon could lead to fewer cardiac events (relative reductions of 11%, 4%, 4%, and 12% for myocardial infarction, ischaemic stroke, heart failure, and cardiac death, respectively). Implementing AI-Risk classification in routine interpretation of CCTA is highly likely to be cost-effective (incremental cost-effectiveness ratio £1371-3244), both under current guideline compliance and when applied only to patients without obstructive CAD. Compared with standard care, the addition of AI-Risk assessment in routine CCTA interpretation is cost-effective, by refining risk-guided medical management.
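The core of a cohort Markov model like the one described is repeated multiplication of a state distribution by a transition matrix, one cycle per model year. A toy three-state sketch with made-up probabilities (not the published model):

```python
def run_markov_cohort(start, transition, n_cycles):
    """Propagate a cohort distribution over health states through
    n_cycles yearly transitions."""
    dist = dict(start)
    for _ in range(n_cycles):
        new = {s: 0.0 for s in dist}
        for s, p in dist.items():
            for t, q in transition[s].items():
                new[t] += p * q  # mass flowing from state s to state t
        dist = new
    return dist

# Illustrative three-state model: event-free -> post-MI -> dead (absorbing).
P = {
    "well":    {"well": 0.97, "post_mi": 0.02, "dead": 0.01},
    "post_mi": {"well": 0.0,  "post_mi": 0.95, "dead": 0.05},
    "dead":    {"well": 0.0,  "post_mi": 0.0,  "dead": 1.0},
}
```

In a cost-effectiveness analysis, each state-year also accrues a cost and a utility weight; the AI-Risk arm would use a transition matrix reflecting the relative event reductions reported above.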

Machine Learning Models Based on CT Enterography for Differentiating Between Ulcerative Colitis and Colonic Crohn's Disease Using Intestinal Wall, Mesenteric Fat, and Visceral Fat Features.

Wang X, Wang X, Lei J, Rong C, Zheng X, Li S, Gao Y, Wu X

pubmed logopapers · Jun 23 2025
This study aimed to develop radiomic-based machine learning models using computed tomography enterography (CTE) features derived from the intestinal wall, mesenteric fat, and visceral fat to differentiate between ulcerative colitis (UC) and colonic Crohn's disease (CD). Clinical and imaging data from 116 patients with inflammatory bowel disease (IBD) (68 with UC and 48 with colonic CD) were retrospectively collected. Radiomic features were extracted from venous-phase CTE images. Feature selection was performed via the intraclass correlation coefficient (ICC), correlation analysis, SelectKBest, and least absolute shrinkage and selection operator (LASSO) regression. Support vector machine models were constructed using features from individual and combined regions, with model performance evaluated using the area under the ROC curve (AUC). The combined radiomic model, integrating features from all three regions, exhibited superior classification performance (AUC = 0.857, 95% CI, 0.732-0.982), with a sensitivity of 0.762 (95% CI, 0.547-0.903) and specificity of 0.857 (95% CI, 0.601-0.960) in the testing cohort. The models based on features from the intestinal wall, mesenteric fat, and visceral fat achieved AUCs of 0.847 (95% CI, 0.710-0.984), 0.707 (95% CI, 0.526-0.889), and 0.731 (95% CI, 0.553-0.910), respectively, in the testing cohort. The intestinal wall model demonstrated the best calibration. This study demonstrated the feasibility of constructing machine learning models based on radiomic features of the intestinal wall, mesenteric fat, and visceral fat to distinguish between UC and colonic CD.

Ensemble-based Convolutional Neural Networks for brain tumor classification in MRI: Enhancing accuracy and interpretability using explainable AI.

Sánchez-Moreno L, Perez-Peña A, Duran-Lopez L, Dominguez-Morales JP

pubmed logopapers · Jun 23 2025
Accurate and efficient classification of brain tumors, including gliomas, meningiomas, and pituitary adenomas, is critical for early diagnosis and treatment planning. Magnetic resonance imaging (MRI) is a key diagnostic tool, and deep learning models have shown promise in automating tumor classification. However, challenges remain in achieving high accuracy while maintaining interpretability for clinical use. This study explores the use of transfer learning with pre-trained architectures, including VGG16, DenseNet121, and Inception-ResNet-v2, to classify brain tumors from MRI images. An ensemble-based classifier was developed using a majority voting strategy to improve robustness. To enhance clinical applicability, explainability techniques such as Grad-CAM++ and Integrated Gradients were employed, allowing visualization of model decision-making. The ensemble model outperformed individual Convolutional Neural Network (CNN) architectures, achieving an accuracy of 86.17% in distinguishing gliomas, meningiomas, pituitary adenomas, and benign cases. Interpretability techniques provided heatmaps that identified key regions influencing model predictions, aligning with radiological features and enhancing trust in the results. The proposed ensemble-based deep learning framework improves the accuracy and interpretability of brain tumor classification from MRI images. By combining multiple CNN architectures and integrating explainability methods, this approach offers a more reliable and transparent diagnostic tool to support medical professionals in clinical decision-making.
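Majority voting over per-model predictions, the ensembling strategy described above, takes only a few lines; the tie-break rule used here (the first-listed model wins) is an assumption, since the abstract does not specify one:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions for one image by majority;
    ties go to the earliest-listed model's prediction."""
    counts = Counter(predictions)
    top = counts.most_common(1)[0][1]
    for p in predictions:  # first prediction whose class reached the top count
        if counts[p] == top:
            return p

def ensemble_predict(model_outputs):
    """model_outputs: one prediction list per model, aligned by image."""
    return [majority_vote(votes) for votes in zip(*model_outputs)]
```

With three heterogeneous backbones (e.g. VGG16, DenseNet121, Inception-ResNet-v2), a true three-way tie is the only case where the tie-break matters.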

MRI Radiomics and Automated Habitat Analysis Enhance Machine Learning Prediction of Bone Metastasis and High-Grade Gleason Scores in Prostate Cancer.

Yang Y, Zheng B, Zou B, Liu R, Yang R, Chen Q, Guo Y, Yu S, Chen B

pubmed logopapers · Jun 23 2025
To explore the value of machine learning models based on MRI radiomics and automated habitat analysis in predicting bone metastasis and high-grade pathological Gleason scores in prostate cancer. This retrospective study enrolled 214 patients with pathologically diagnosed prostate cancer from May 2013 to January 2025, including 93 cases with bone metastasis and 159 cases with high-grade Gleason scores. Clinical, pathological, and MRI data were collected. An nnUNet model automatically segmented the prostate in MRI scans. K-means clustering identified habitat subregions within the entire prostate in T2-FS images. Senior radiologists manually segmented regions of interest (ROIs) in prostate lesions. Radiomics features were extracted from these habitat subregions and lesion ROIs. These features, combined with clinical features, were used to build multiple machine learning classifiers to predict bone metastasis and high-grade Gleason scores. Finally, the models underwent interpretable analysis based on feature importance. The nnUNet model achieved a mean Dice coefficient of 0.970 for segmentation. Habitat analysis using 2 clusters yielded the highest average silhouette coefficient (0.57). Machine learning models based on a combination of lesion radiomics, habitat radiomics, and clinical features achieved the best performance in both prediction tasks. The Extra Trees Classifier achieved the highest AUC (0.900) for predicting bone metastasis, while the CatBoost Classifier performed best (AUC 0.895) for predicting high-grade Gleason scores. The interpretability analysis of the optimal models showed that the PSA clinical feature was crucial for predictions, while both habitat radiomics and lesion radiomics also played important roles. The study proposed an automated prostate habitat analysis for prostate cancer, enabling a comprehensive analysis of tumor heterogeneity.
The machine learning models developed achieved excellent performance in predicting the risk of bone metastasis and high-grade Gleason scores in prostate cancer. This approach overcomes the limitations of manual feature extraction, and the inadequate analysis of heterogeneity often encountered in traditional radiomics, thereby improving model performance.
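Habitat analysis assigns voxels to subregions by clustering; a simplified 1-D k-means on intensities illustrates the mechanism. The study clusters voxels inside the nnUNet prostate mask and picks the cluster count by silhouette coefficient, whereas this sketch fixes k and uses a deterministic initialization:

```python
def kmeans_1d(values, k=2, iters=50):
    """Plain 1-D k-means on voxel intensities: a simplified stand-in for
    habitat clustering. Initial centers are spread over the sorted data."""
    step = max(1, len(values) // k)
    centers = sorted(values)[::step][:k]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            j = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[j].append(v)
        new = [sum(c) / len(c) if c else centers[i]
               for i, c in enumerate(clusters)]
        if new == centers:  # converged
            break
        centers = new
    labels = [min(range(len(centers)), key=lambda i: abs(v - centers[i]))
              for v in values]
    return centers, labels
```

Each resulting label map defines a habitat subregion from which radiomics features are then extracted.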

Towards a comprehensive characterization of arteries and veins in retinal imaging.

Andreini P, Bonechi S

pubmed logopapers · Jun 23 2025
Retinal fundus imaging is crucial for diagnosing and monitoring eye diseases, which are often linked to systemic health conditions such as diabetes and hypertension. Current deep learning techniques often narrowly focus on segmenting retinal blood vessels, lacking a more comprehensive analysis and characterization of the retinal vascular system. This study fills this gap by proposing a novel, integrated approach that leverages multiple stages to accurately determine vessel paths and extract informative features from them. The segmentation of veins and arteries, achieved through a deep semantic segmentation network, is used by a newly designed algorithm to reconstruct individual vessel paths. The reconstruction process begins at the optic disc, identified by a localization network, and uses a recurrent neural network to predict the vessel paths at various junctions. The different stages of the proposed approach are validated both qualitatively and quantitatively, demonstrating robust performance. The proposed approach enables the extraction of critical features at the individual vessel level, such as vessel tortuosity and diameter. This work lays the foundation for a comprehensive retinal image evaluation, going beyond isolated tasks like vessel segmentation, with significant potential for clinical diagnosis.
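Once individual vessel paths are reconstructed, features such as tortuosity fall out directly as the arc-to-chord ratio of the path. A minimal sketch over a polyline of centerline points:

```python
import math

def path_length(points):
    """Total arc length along a polyline of (x, y) centerline points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def tortuosity(points):
    """Arc-to-chord ratio of a reconstructed vessel path:
    1.0 for a perfectly straight vessel, larger for more tortuous ones."""
    chord = math.dist(points[0], points[-1])
    return path_length(points) / chord
```

Diameter can be estimated analogously by sampling the segmentation mask's width perpendicular to the path at each centerline point.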
