Page 195 of 205 · 2045 results

Hierarchical diagnosis of breast phyllodes tumors enabled by deep learning of ultrasound images: a retrospective multi-center study.

Yan Y, Liu Y, Wang Y, Jiang T, Xie J, Zhou Y, Liu X, Yan M, Zheng Q, Xu H, Chen J, Sui L, Chen C, Ru R, Wang K, Zhao A, Li S, Zhu Y, Zhang Y, Wang VY, Xu D

pubmed · logopapers · May 8, 2025
Phyllodes tumors (PTs) are rare breast tumors with high recurrence rates; current methods, which rely on post-resection pathology, often delay detection and require further surgery. We propose a deep-learning-based Phyllodes Tumors Hierarchical Diagnosis Model (PTs-HDM) for preoperative identification and grading. Ultrasound images from five hospitals were retrospectively collected, with all patients having undergone surgical pathological confirmation of either PTs or fibroadenomas (FAs). PTs-HDM follows a two-stage classification: first distinguishing PTs from FAs, then grading PTs as benign or borderline/malignant. Model performance metrics, including AUC and accuracy, were quantitatively evaluated. A comparative analysis was conducted between the algorithm's diagnostic capabilities and those of radiologists with varying clinical experience within an external validation cohort. By providing PTs-HDM's automated classification outputs and associated thermal activation mapping guidance, we systematically assessed the improvement in radiologists' diagnostic concordance and classification accuracy. A total of 712 patients were included. On the external test set, PTs-HDM achieved an AUC of 0.883 and an accuracy of 87.3% for PT vs. FA classification. Subgroup analysis showed high accuracy for tumors < 2 cm (90.9%). In hierarchical classification, the model obtained an AUC of 0.856 and an accuracy of 80.9%. Radiologists' performance improved with PTs-HDM assistance: binary classification accuracy increased from 82.7%, 67.7%, and 64.2% to 87.6%, 76.6%, and 82.1% for senior, attending, and resident radiologists, respectively, and their hierarchical classification AUCs improved from between 0.566 and 0.827 to between 0.725 and 0.837. PTs-HDM also enhanced inter-radiologist consistency, increasing Kappa values from between -0.05 and 0.41 to between 0.12 and 0.65, and the intraclass correlation coefficient from 0.19 to 0.45.
PTs-HDM shows strong diagnostic performance, especially for small lesions, and improves radiologists' accuracy across all experience levels, bridging diagnostic gaps and providing reliable support for PTs' hierarchical diagnosis.
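The inter-radiologist Kappa values reported above can be reproduced with the standard Cohen's kappa formula. A minimal sketch, using illustrative ratings rather than the study's data:

```python
# Illustrative sketch of Cohen's kappa, the agreement statistic the abstract
# reports for inter-radiologist consistency. Ratings below are made up.

def cohens_kappa(a, b):
    """Cohen's kappa between two raters' categorical labels."""
    labels = sorted(set(a) | set(b))
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n                     # observed agreement
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

rater1 = ["benign", "benign", "malignant", "benign", "malignant", "benign"]
rater2 = ["benign", "malignant", "malignant", "benign", "malignant", "benign"]
print(round(cohens_kappa(rater1, rater2), 3))
```

Kappa corrects raw agreement for the agreement expected by chance, which is why values near 0 (as in the unassisted baseline above) indicate little genuine consistency even when raters often coincide.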

Chest X-Ray Visual Saliency Modeling: Eye-Tracking Dataset and Saliency Prediction Model.

Lou J, Wang H, Wu X, Ng JCH, White R, Thakoor KA, Corcoran P, Chen Y, Liu H

pubmed · logopapers · May 8, 2025
Radiologists' eye movements during medical image interpretation reflect the perceptual-cognitive processes behind their diagnostic decisions. Eye movement data can be modeled to represent clinically relevant regions in a medical image and potentially integrated into an artificial intelligence (AI) system for automatic diagnosis in medical imaging. In this article, we first conduct a large-scale eye-tracking study involving 13 radiologists interpreting 191 chest X-ray (CXR) images, establishing a best-of-its-kind CXR visual saliency benchmark. We then quantify the reliability and clinical relevance of saliency maps (SMs) generated for CXR images. We develop CXRSalNet, a novel CXR saliency prediction model that leverages radiologists' gaze information to optimize the use of unlabeled CXR images, enhancing training and mitigating data scarcity. Finally, we demonstrate the application of our CXR saliency model in enhancing the performance of AI-powered diagnostic imaging systems.
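Ground-truth saliency maps for gaze benchmarks of this kind are typically built by accumulating a 2-D Gaussian at each recorded fixation. A minimal sketch of that standard preprocessing step (this is not the paper's CXRSalNet model; shapes and sigma are assumptions):

```python
import numpy as np

# Hedged sketch: turn eye-tracking fixation points into a dense saliency map
# by summing a Gaussian bump at each fixation, then normalizing to [0, 1].

def fixations_to_saliency(fixations, shape, sigma=15.0):
    """fixations: list of (row, col); returns an (h, w) saliency map in [0, 1]."""
    h, w = shape
    rows = np.arange(h)[:, None]   # (h, 1) row coordinates
    cols = np.arange(w)[None, :]   # (1, w) column coordinates
    sal = np.zeros(shape, dtype=float)
    for r, c in fixations:
        sal += np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
    return sal / sal.max()

smap = fixations_to_saliency([(40, 60), (42, 62), (100, 30)], (128, 128))
print(smap.shape, round(float(smap.max()), 2))
```

The sigma parameter approximates the foveal extent in pixels; a prediction model is then trained to regress maps like `smap` from the image alone.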

Application of Artificial Intelligence to Deliver Healthcare From the Eye.

Weinreb RN, Lee AY, Baxter SL, Lee RWJ, Leng T, McConnell MV, El-Nimri NW, Rhew DC

pubmed · logopapers · May 8, 2025
Oculomics is the science of analyzing ocular data to identify, diagnose, and manage systemic disease. This article focuses on prescreening: the use of retinal images analyzed by artificial intelligence (AI) to identify ocular or systemic disease, or potential disease, in asymptomatic individuals. The implementation of prescreening in a coordinated care system, defined as Healthcare From the Eye prescreening, has the potential to improve the access, affordability, equity, quality, and safety of health care on a global level. Stakeholders include physicians, payers, policymakers, regulators, and representatives from the industry, government, and data privacy sectors. The combination of AI analysis of ocular data with automated technologies that capture images during routine eye examinations enables prescreening of large populations for chronic disease. Retinal images can be acquired either during a routine eye examination or in settings outside of eye care with readily accessible, safe, quick, and noninvasive retinal imaging devices. The outcome of such an examination can then be digitally communicated across relevant stakeholders in a coordinated fashion to direct a patient to screening and monitoring services. Such an approach offers the opportunity to transform health care delivery: improving early disease detection, improving access to care, enhancing equity especially in rural and underserved communities, and reducing costs. With effective implementation and collaboration among key stakeholders, this approach has the potential to contribute to an equitable and effective health care system.

Automated Emergent Large Vessel Occlusion Detection Using Viz.ai Software and Its Impact on Stroke Workflow Metrics and Patient Outcomes in Stroke Centers: A Systematic Review and Meta-analysis.

Sarhan K, Azzam AY, Moawad MHED, Serag I, Abbas A, Sarhan AE

pubmed · logopapers · May 8, 2025
The implementation of artificial intelligence (AI), particularly Viz.ai software, in stroke care has emerged as a promising tool to enhance the detection of large vessel occlusion (LVO) and to improve stroke workflow metrics and patient outcomes. The aim of this systematic review and meta-analysis is to evaluate the impact of Viz.ai on stroke workflow efficiency in hospitals and on patient outcomes. Following the PRISMA guidelines, we conducted a comprehensive search of electronic databases, including PubMed, Web of Science, and Scopus, for relevant studies published up to 25 October 2024. Our primary outcomes were door-to-groin puncture (DTG) time, CT scan-to-start of endovascular treatment (EVT) time, CT scan-to-recanalization time, and door-in-door-out time. Secondary outcomes included symptomatic intracranial hemorrhage (ICH), any ICH, mortality, mRS score < 2 at 90 days, and length of hospital stay. A total of 12 studies involving 15,595 patients were included in our analysis. The pooled analysis demonstrated that the implementation of the Viz.ai algorithm was associated with shorter CT scan-to-EVT time (SMD -0.71, 95% CI [-0.98, -0.44], p < 0.001) and DTG time (SMD -0.50, 95% CI [-0.66, -0.35], p < 0.001), as well as CT-to-recanalization time (SMD -0.55, 95% CI [-0.76, -0.33], p < 0.001). Additionally, patients in the post-AI group had significantly shorter door-in-door-out time than the pre-AI group (SMD -0.49, 95% CI [-0.71, -0.28], p < 0.001). Despite the workflow metric improvements, our analysis did not reveal statistically significant differences in patient clinical outcomes (p > 0.05). Our results suggest that the integration of the Viz.ai platform in stroke care holds significant potential for reducing EVT delays in patients with LVO and optimizing stroke workflow metrics in comprehensive stroke centers. Further studies are required to validate its efficacy in improving clinical outcomes in patients with LVO.
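The SMD values pooled above are standardized mean differences: the raw difference in a workflow time divided by the pooled standard deviation of the two groups. A minimal sketch with illustrative (not study) numbers:

```python
import math

# Hedged sketch of the standardized mean difference (Cohen's d with pooled SD),
# the effect size this meta-analysis pools. All numbers below are made up.

def smd(mean1, sd1, n1, mean2, sd2, n2):
    """SMD between group 1 and group 2; negative means group 1 is smaller."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# e.g. post-AI door-to-groin time 85 +/- 20 min (n=100)
# vs.  pre-AI door-to-groin time 95 +/- 20 min (n=100)
print(round(smd(85, 20, 100, 95, 20, 100), 2))  # negative = faster post-AI
```

Standardizing by the pooled SD lets studies that report times on different scales (or with different variability) be combined into one estimate.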

MRI-based machine learning reveals proteasome subunit PSMB8-mediated malignant glioma phenotypes through activating TGFBR1/2-SMAD2/3 axis.

Pei D, Ma Z, Qiu Y, Wang M, Wang Z, Liu X, Zhang L, Zhang Z, Li R, Yan D

pubmed · logopapers · May 8, 2025
Gliomas are the most prevalent and aggressive neoplasms of the central nervous system, representing a major challenge for effective treatment and patient prognosis. This study identifies the proteasome subunit beta type-8 (PSMB8/LMP7) as a promising prognostic biomarker for glioma. Using a multiparametric radiomic model derived from preoperative magnetic resonance imaging (MRI), we accurately predicted PSMB8 expression levels. Notably, radiomic prediction of poor prognosis was highly consistent with elevated PSMB8 expression. Our findings demonstrate that PSMB8 depletion not only suppressed glioma cell proliferation and migration but also induced apoptosis via activation of the transforming growth factor beta (TGF-β) signaling pathway. This was supported by downregulation of key receptors (TGFBR1 and TGFBR2). Furthermore, interference with PSMB8 expression impaired phosphorylation and nuclear translocation of SMAD2/3, critical mediators of TGF-β signaling. Consequently, these molecular alterations resulted in reduced tumor progression and enhanced sensitivity to temozolomide (TMZ), a standard chemotherapeutic agent. Overall, our findings highlight PSMB8's pivotal role in glioma pathophysiology and its potential as a prognostic marker. This study also demonstrates the clinical utility of MRI radiomics for preoperative risk stratification and pre-diagnosis. Targeted inhibition of PSMB8 may represent a therapeutic strategy to overcome TMZ resistance and improve glioma patient outcomes.

Are Diffusion Models Effective Good Feature Extractors for MRI Discriminative Tasks?

Li B, Sun Z, Li C, Kamagata K, Andica C, Uchida W, Takabayashi K, Guo S, Zou R, Aoki S, Tanaka T, Zhao Q

pubmed · logopapers · May 8, 2025
Diffusion models (DMs) excel in pixel-level and spatial tasks and are proven feature extractors for 2D image discriminative tasks when pretrained. However, their capabilities in 3D MRI discriminative tasks remain largely untapped. This study assesses the effectiveness of DMs in this underexplored area. We use 59,830 T1-weighted MR images (T1WIs) from the extensive, yet unlabeled, UK Biobank dataset. Additionally, we apply 369 T1WIs from the BraTS2020 dataset for brain tumor classification, and 421 T1WIs from the ADNI1 dataset for the diagnosis of Alzheimer's disease. First, a high-performing denoising diffusion probabilistic model (DDPM) with a U-Net backbone is pretrained on the UK Biobank, then fine-tuned on the BraTS2020 and ADNI1 datasets. Afterward, we assess its feature representation capabilities for discriminative tasks using linear probes. Finally, we introduce a novel fusion module, named CATS, that enhances the U-Net representations, thereby improving performance on discriminative tasks. Our DDPM produces synthetic images of high quality that match the distribution of the raw datasets. Subsequent analysis reveals that DDPM features extracted from middle blocks and smaller timesteps are of high quality. Leveraging these features, the CATS module, with just 1.7M additional parameters, achieved average classification scores of 0.7704 and 0.9217 on the BraTS2020 and ADNI1 datasets, performance competitive with both the representations extracted from the transferred DDPM model and a 33.23M-parameter ResNet18 trained from scratch. We have found that pretraining a DM on a large-scale dataset and then fine-tuning it on limited data from discriminative datasets is a viable approach for MRI data. With these well-performing DMs, we show that they excel not just in generation tasks but also as feature extractors when combined with our proposed CATS module.
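The "linear probe" evaluation mentioned above freezes the extracted features and trains only a linear classifier on top, so the score measures feature quality rather than classifier capacity. A self-contained sketch with random stand-in features (not actual DDPM U-Net activations):

```python
import numpy as np

# Hedged sketch of a linear probe: logistic regression by plain gradient
# descent on frozen features. X here is synthetic, standing in for features
# pulled from a pretrained network's middle blocks.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))            # frozen features (n_samples, dim)
w_true = rng.normal(size=64)
y = (X @ w_true > 0).astype(float)        # synthetic, linearly separable labels

w = np.zeros(64)                          # the only trainable parameters
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))        # sigmoid predictions
    w -= 0.1 * X.T @ (p - y) / len(y)     # logistic-loss gradient step

acc = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == y).mean()
print(acc > 0.9)
```

If the probe reaches high accuracy, the frozen features are (near-)linearly separable for the task, which is the sense in which mid-block, small-timestep DDPM features are called "high quality."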

Deep learning approach based on a patch residual for pediatric supracondylar subtle fracture detection.

Ye Q, Wang Z, Lou Y, Yang Y, Hou J, Liu Z, Liu W, Li J

pubmed · logopapers · May 8, 2025
Supracondylar humerus fractures are among the most common elbow fractures in children. However, their diagnosis can be particularly challenging due to the anatomical characteristics and imaging features of the pediatric skeleton. In recent years, convolutional neural networks (CNNs) have achieved notable success in medical image analysis, though their performance typically relies on large-scale, high-quality labeled datasets. Unfortunately, labeled samples for pediatric supracondylar fractures are scarce and difficult to obtain. To address this issue, this paper introduces a deep learning-based multi-scale patch residual network (MPR) for the automatic detection and localization of subtle pediatric supracondylar fractures. The MPR framework combines a CNN for automatic feature extraction with a multi-scale generative adversarial network to model skeletal integrity using healthy samples. By leveraging healthy images to learn the normal skeletal distribution, the approach reduces the dependency on labeled fracture data and effectively addresses the challenges posed by limited pediatric datasets. Datasets from two different hospitals were used, with data augmentation techniques applied during both training and validation. On an independent test set, the proposed model achieves an accuracy of 90.5%, with 89% sensitivity, 92% specificity, and an F1 score of 0.906, outperforming the diagnostic accuracy of emergency medicine physicians and approaching that of pediatric radiologists. Furthermore, the model demonstrates a fast inference speed of 1.1 s per image, underscoring its substantial potential for clinical application.
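The core idea of patch-residual anomaly detection, modeling healthy anatomy and flagging regions the model cannot reconstruct, can be sketched in a few lines. The "reconstruction" below is a trivial stand-in, not the paper's GAN:

```python
import numpy as np

# Hedged sketch: a model trained only on healthy bone reconstructs the input;
# patches with large reconstruction residuals are candidate fracture sites.

def residual_map(image, reconstruction, patch=8):
    """Mean absolute residual per non-overlapping patch of size patch x patch."""
    h, w = image.shape
    res = np.abs(image - reconstruction)
    ph, pw = h // patch, w // patch
    return (res[:ph * patch, :pw * patch]
            .reshape(ph, patch, pw, patch)
            .mean(axis=(1, 3)))

img = np.zeros((32, 32))
img[12:16, 12:16] = 1.0                   # simulated subtle anomaly
recon = np.zeros((32, 32))                # stand-in "healthy" reconstruction
rmap = residual_map(img, recon)
peak = tuple(map(int, np.unravel_index(rmap.argmax(), rmap.shape)))
print(peak)                                # patch index containing the anomaly
```

Because the anomaly model is trained purely on healthy radiographs, no labeled fracture data is needed to produce the residual localization signal.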

A myocardial reorientation method based on feature point detection for quantitative analysis of PET myocardial perfusion imaging.

Shang F, Huo L, Gong T, Wang P, Shi X, Tang X, Liu S

pubmed · logopapers · May 8, 2025
Reorienting cardiac positron emission tomography (PET) images to the transaxial plane is essential for cardiac PET image analysis. This study aims to design a convolutional neural network (CNN) for automatic reorientation and evaluate its generalizability. An artificial intelligence (AI) method integrating U-Net and the differentiable spatial to numeric transform module (DSNT-U) was proposed to automatically position three feature points (P<sub>apex</sub>, P<sub>base</sub>, and P<sub>RV</sub>); the same three points manually located by an experienced radiologist served as the reference standard (RS). A second radiologist performed manual location for reproducibility evaluation. The DSNT-U, initially trained and tested on a [<sup>11</sup>C]acetate dataset (training/testing: 40/17), was further compared with a CNN-spatial transformer network (CNN-STN). After fine-tuning on 4 subjects, the network was tested on a [<sup>13</sup>N]ammonia dataset (n = 30). The performance of the DSNT-U was evaluated in terms of coordinates, volume, and quantitative indexes (pharmacokinetic parameters and total perfusion deficit). The proposed DSNT-U successfully achieved automatic myocardial reorientation for both [<sup>11</sup>C]acetate and [<sup>13</sup>N]ammonia datasets. For the former dataset, the intraclass correlation coefficients (ICCs) between the coordinates predicted by the DSNT-U and the RS exceeded 0.876. The average normalized mean squared error (NMSE) between the short-axis (SA) images obtained through DSNT-U-based reorientation and the reference SA images was 0.051 ± 0.043. For pharmacokinetic parameters, the R² between the DSNT-U and the RS was larger than 0.968. Compared with the CNN-STN, the DSNT-U demonstrated a higher ICC between the estimated rigid transformation parameters and the RS. After fine-tuning on the [<sup>13</sup>N]ammonia dataset, the average NMSE between the SA images reoriented by the DSNT-U and the reference SA images was 0.056 ± 0.046.
The ICC between the total perfusion deficit (TPD) values computed from DSNT-U-derived images and the reference values was 0.981. Furthermore, no significant differences were observed in the performance of the DSNT-U prediction among subjects with different genders or varying myocardial perfusion defect (MPD) statuses. The proposed DSNT-U can accurately position P<sub>apex</sub>, P<sub>base</sub>, and P<sub>RV</sub> on the [<sup>11</sup>C]acetate dataset. After fine-tuning, the positioning model can be applied to the [<sup>13</sup>N]ammonia perfusion dataset, demonstrating good generalization performance. This method can adapt to data of different genders (with or without MPD) and different tracers, displaying the potential to replace manual operations.
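The differentiable spatial-to-numeric transform at the heart of DSNT-U converts a heatmap into continuous coordinates via a softmax-weighted average (a "soft argmax"), keeping the landmark coordinates differentiable for end-to-end training. A toy sketch of that operation (not the paper's network):

```python
import numpy as np

# Hedged sketch of the DSNT idea: expected (row, col) under the softmax of a
# 2-D heatmap. Unlike a hard argmax, this is smooth in the heatmap values.

def dsnt(heatmap):
    """Return the softmax-weighted (row, col) coordinate of a 2-D heatmap."""
    p = np.exp(heatmap - heatmap.max())   # numerically stable softmax
    p /= p.sum()
    rows = np.arange(heatmap.shape[0])
    cols = np.arange(heatmap.shape[1])
    return float(p.sum(axis=1) @ rows), float(p.sum(axis=0) @ cols)

hm = np.full((16, 16), -10.0)
hm[5, 9] = 10.0                 # sharp peak at (row=5, col=9)
print(dsnt(hm))                 # coordinates concentrate near the peak
```

In the full model, a U-Net would produce one such heatmap per landmark (apex, base, right ventricle), and the three soft-argmax coordinates define the rigid reorientation.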

Impact of spectrum bias on deep learning-based stroke MRI analysis.

Krag CH, Müller FC, Gandrup KL, Plesner LL, Sagar MV, Andersen MB, Nielsen M, Kruuse C, Boesen M

pubmed · logopapers · May 8, 2025
To evaluate spectrum bias in stroke MRI analysis by excluding cases with uncertain acute ischemic lesions (AIL) and examining the patient, imaging, and lesion factors associated with these cases. This single-center retrospective observational study included adults with brain MRIs for suspected stroke between January 2020 and April 2022. Diagnostically uncertain AIL were identified through reader disagreement or low certainty grading by a radiology resident, a neuroradiologist, and the original radiology report (authored by various neuroradiologists). A commercially available deep learning tool analyzing brain MRIs for AIL was evaluated to assess the impact of excluding uncertain cases on diagnostic odds ratios. Patient-related, MRI acquisition-related, and lesion-related factors were analyzed using the Wilcoxon rank sum test, χ² test, and multiple logistic regression. The study was approved by the National Committee on Health Research Ethics. In 989 patients (median age 73 (IQR: 59-80), 53% female), certain AIL were found in 374 (38%), uncertain AIL in 63 (6%), and no AIL in 552 (56%). Excluding uncertain cases led to a four-fold increase in the diagnostic odds ratio (from 68 to 278), while a simulated case-control design resulted in a six-fold increase compared to the full disease spectrum (from 68 to 431). Independent factors associated with uncertain AIL were MRI artifacts, smaller lesion size, older lesion age, and infratentorial location. Excluding uncertain cases leads to a four-fold overestimation of the diagnostic odds ratio. MRI artifacts, smaller lesion size, infratentorial location, and older lesion age are associated with uncertain AIL and should be accounted for in validation studies.
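The diagnostic odds ratio (DOR) behind the four-fold figure is the odds of a positive test in the diseased divided by the odds of a positive test in the non-diseased: DOR = (TP/FN) / (FP/TN). A hedged arithmetic sketch with illustrative counts (not the study's confusion matrices) showing how dropping uncertain cases inflates it:

```python
# Hedged sketch of the diagnostic odds ratio (DOR) and spectrum bias.
# Counts are illustrative only; dropping hard cases removes errors (FN, FP),
# which inflates the DOR.

def dor(tp, fn, fp, tn):
    """Diagnostic odds ratio: odds of a positive test given disease vs. no disease."""
    return (tp / fn) / (fp / tn)

# full spectrum: diagnostically uncertain cases contribute extra FNs and FPs
full = dor(tp=340, fn=34, fp=70, tn=482)
# spectrum-biased: uncertain cases excluded, fewer errors remain
biased = dor(tp=340, fn=15, fp=40, tn=512)
print(round(full, 1), round(biased, 1), round(biased / full, 1))
```

Because errors concentrate in the uncertain cases, even a modest exclusion shifts both odds terms in the same direction, which is why validation studies that drop ambiguous cases report inflated performance.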

Early budget impact analysis of AI to support the review of radiographic examinations for suspected fractures in NHS emergency departments (ED).

Gregory L, Boodhna T, Storey M, Shelmerdine S, Novak A, Lowe D, Harvey H

pubmed · logopapers · May 7, 2025
To develop an early budget impact analysis of, and inform future research on, the national adoption of a commercially available AI application that supports clinicians reviewing radiographs for suspected fractures across NHS emergency departments in England. A decision tree framework was coded to assess the change in outcomes for suspected fractures in adults when AI fracture detection was integrated into the clinical workflow over a 1-year time horizon. Standard of care was the comparator scenario, and the ground-truth reference cases were characterised by radiology report findings. The effect of AI in assisting ED clinicians to detect fractures was sourced from US literature. Data on resource use conditional on the correct identification of a fracture in the ED were extracted from a London NHS trust. Sensitivity analysis was conducted to account for the influence of parameter uncertainty on the results. In one year, an estimated 658,564 radiographs were performed in emergency departments across England for suspected wrist, ankle, or hip fractures. The number of patients returning to the ED with a missed fracture was reduced by 21,674 cases, alongside a reduction of 20,916 unnecessary referrals to fracture clinics. The cost of current practice was estimated at £66,646,542, versus £63,012,150 with the integration of AI, generating a return on investment of £3,634,392 to the NHS. The adoption of AI in EDs across England has the potential to generate cost savings. However, additional evidence on radiograph review accuracy and subsequent resource use is required to further demonstrate this.
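The headline saving follows directly from the two cost estimates reported in the abstract; a sketch of the arithmetic:

```python
# Budget arithmetic from the abstract: savings = standard-of-care cost minus
# cost with AI integrated into the ED workflow.

standard_care = 66_646_542   # GBP, current practice (from the abstract)
with_ai = 63_012_150         # GBP, AI-assisted workflow (from the abstract)
savings = standard_care - with_ai
print(f"£{savings:,}")
```

In the underlying decision tree, each branch (missed fracture and ED return, unnecessary clinic referral, correct identification) carries a probability and a unit cost; the totals above are the expected costs summed over branches for the two scenarios.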