
Artificial Intelligence-Enabled Point-of-Care Echocardiography: Bringing Precision Imaging to the Bedside.

East SA, Wang Y, Yanamala N, Maganti K, Sengupta PP

PubMed · Jul 7, 2025
The integration of artificial intelligence (AI) with point-of-care ultrasound (POCUS) is transforming cardiovascular diagnostics by enhancing image acquisition, interpretation, and workflow efficiency. These advancements hold promise in expanding access to cardiovascular imaging in resource-limited settings and enabling early disease detection through screening applications. This review explores the opportunities and challenges of AI-enabled POCUS as it reshapes the landscape of cardiovascular imaging. AI-enabled systems can reduce operator dependency, improve image quality, and support clinicians, both novice and experienced, in capturing diagnostically valuable images, ultimately promoting consistency across diverse clinical environments. However, widespread adoption faces significant challenges, including concerns around algorithm generalizability, bias, explainability, clinician trust, and data privacy. Addressing these issues through standardized development, ethical oversight, and clinician-AI collaboration will be critical to safe and effective implementation. Looking ahead, emerging innovations (such as autonomous scanning, real-time predictive analytics, tele-ultrasound, and patient-performed imaging) underscore the transformative potential of AI-enabled POCUS in reshaping cardiovascular care and advancing equitable healthcare delivery worldwide.

Impact of a computed tomography-based artificial intelligence software on radiologists' workflow for detecting acute intracranial hemorrhage.

Kim J, Jang J, Oh SW, Lee HY, Min EJ, Choi JW, Ahn KJ

PubMed · Jul 7, 2025
To assess the impact of a commercially available computed tomography (CT)-based artificial intelligence (AI) software for detecting acute intracranial hemorrhage (AIH) on radiologists' diagnostic performance and workflow in a real-world clinical setting. This retrospective study included a total of 956 non-contrast brain CT scans obtained over a 70-day period, interpreted independently by 2 board-certified general radiologists. Of these, 541 scans were interpreted during the initial 35 days before the implementation of AI software, and the remaining 415 scans were interpreted during the subsequent 35 days, with reference to AIH probability scores generated by the software. To assess the software's impact on radiologists' performance in detecting AIH, performance before and after implementation was compared. Additionally, to evaluate the software's effect on radiologists' workflow, Kendall's Tau was used to assess the correlation between the daily chronological order of CT scans and the radiologists' reading order before and after implementation. The early diagnosis rate for AIH (defined as the proportion of AIH cases read within the first quartile by radiologists) and the median reading order of AIH cases were also compared before and after implementation. A total of 956 initial CT scans from 956 patients [mean age: 63.14 ± 18.41 years; male patients: 447 (47%)] were included. There were no significant differences in accuracy [from 0.99 (95% confidence interval: 0.99-1.00) to 0.99 (0.98-1.00), P = 0.343], sensitivity [from 1.00 (0.99-1.00) to 1.00 (0.99-1.00), P = 0.859], or specificity [from 1.00 (0.99-1.00) to 0.99 (0.97-1.00), P = 0.252] following the implementation of the AI software. However, the daily correlation between the chronological order of CT scans and the radiologists' reading order significantly decreased [Kendall's Tau, from 0.61 (0.48-0.73) to 0.01 (0.00-0.26), P < 0.001]. Additionally, the early diagnosis rate significantly increased [from 0.49 (0.34-0.63) to 0.76 (0.60-0.93), P = 0.013], and the daily median reading order of AIH cases significantly decreased [from 7.25 (Q1-Q3: 3-10.75) to 1.5 (1-3), P < 0.001] after the implementation. After the implementation of CT-based AI software for detecting AIH, the radiologists' daily reading order was considerably reprioritized to allow more rapid interpretation of AIH cases without compromising diagnostic performance in a real-world clinical setting. With the increasing number of CT scans and the growing burden on radiologists, optimizing the workflow for diagnosing AIH through CT-based AI software integration may enhance the prompt and efficient treatment of patients with AIH.
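As an editorial aside, the Kendall's Tau workflow metric described above can be illustrated with a minimal sketch: it correlates the chronological (acquisition) order of a day's CT scans with the order in which they were actually read. The worklist data, variable names, and SciPy usage below are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the Kendall's Tau workflow analysis (hypothetical one-day worklist).
from scipy.stats import kendalltau

acquisition_order = [1, 2, 3, 4, 5, 6, 7, 8]   # order in which scans were acquired
reading_order     = [3, 1, 6, 2, 8, 4, 7, 5]   # order in which scans were read (AI-reprioritized)

tau, p_value = kendalltau(acquisition_order, reading_order)
print(f"Kendall's Tau = {tau:.2f}, p = {p_value:.3f}")
# Tau near 1: first-come-first-read; Tau near 0: heavily reprioritized, as reported above.
```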

Performance of a deep-learning-based lung nodule detection system using 0.25-mm thick ultra-high-resolution CT images.

Higashibori H, Fukumoto W, Kusuda S, Yokomachi K, Mitani H, Nakamura Y, Awai K

PubMed · Jul 7, 2025
Artificial intelligence (AI) algorithms for lung nodule detection assist radiologists. As their performance using ultra-high-resolution CT (U-HRCT) images has not been evaluated, we investigated the usefulness of 0.25-mm slices at U-HRCT using a commercially available deep-learning-based lung nodule detection (DL-LND) system. We enrolled 63 patients who underwent U-HRCT for known or suspected lung cancer. Two board-certified radiologists identified nodules more than 4 mm in diameter on 1-mm HRCT slices and set the reference standard consensually. They recorded all lesions detected on 5-, 1-, and 0.25-mm slices by the DL-LND system. Unidentified nodules were included in the reference standard. To examine the performance of the DL-LND system, the sensitivity, positive predictive value (PPV), and number of false-positive (FP) nodules were recorded. The mean number of lesions detected on 5-, 1-, and 0.25-mm slices was 5.1, 7.8, and 7.2 per CT scan. On 5-mm slices the sensitivity and PPV were 79.8% and 46.4%; on 1-mm slices, 91.5% and 34.8%; and on 0.25-mm slices, 86.7% and 36.1%. The sensitivity was significantly higher on 1- than 5-mm slices (p < 0.01), while the PPV was significantly lower on 1- than 5-mm slices (p < 0.01). A slice thickness of 0.25 mm did not further improve performance. The mean number of FP nodules on 5-, 1-, and 0.25-mm slices was 2.8, 5.2, and 4.7 per CT scan. We found that 1 mm was the best slice thickness for U-HRCT images using the commercially available DL-LND system.
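As a rough illustration (not the study's code, and with made-up counts), the per-slice-thickness metrics above can be derived from lesion-level tallies as follows:

```python
# Hypothetical detection tallies for one slice thickness; numbers are illustrative only.
def detection_metrics(true_pos: int, false_pos: int, ref_nodules: int, n_scans: int):
    sensitivity = true_pos / ref_nodules          # fraction of reference nodules detected
    ppv = true_pos / (true_pos + false_pos)       # positive predictive value
    fp_per_scan = false_pos / n_scans             # false positives per CT scan
    return sensitivity, ppv, fp_per_scan

sens, ppv, fp = detection_metrics(true_pos=150, false_pos=280, ref_nodules=170, n_scans=63)
print(f"sensitivity={sens:.1%}, PPV={ppv:.1%}, FP/scan={fp:.1f}")
```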

Artificial Intelligence-Assisted Standard Plane Detection in Hip Ultrasound for Developmental Dysplasia of the Hip: A Novel Real-Time Deep Learning Approach.

Darilmaz MF, Demirel M, Altun HO, Adiyaman MC, Bilgili F, Durmaz H, Sağlam Y

PubMed · Jul 6, 2025
Developmental dysplasia of the hip (DDH) includes a range of conditions caused by inadequate hip joint development. Early diagnosis is essential to prevent long-term complications. Ultrasound, particularly the Graf method, is commonly used for DDH screening, but its interpretation is highly operator-dependent and lacks standardization, especially in identifying the correct standard plane. This variability often leads to misdiagnosis, particularly among less experienced users. This study presents AI-SPS, an AI-based standard plane detection software tool for real-time hip ultrasound analysis. Using 2,737 annotated frames, comprising 1,737 standard and 1,000 non-standard examples extracted from 45 clinical ultrasound videos, we trained and evaluated two object detection models: SSD-MobileNet V2 and YOLOv11n. The software was further validated on an independent set of 934 additional frames (347 standard and 587 non-standard) from the same video sources. YOLOv11n achieved an accuracy of 86.3%, precision of 0.78, recall of 0.88, and F1-score of 0.83, outperforming SSD-MobileNet V2, which reached an accuracy of 75.2%. These results indicate that AI-SPS can detect the standard plane with expert-level performance and improve consistency in DDH screening. By reducing operator variability, the software supports more reliable ultrasound assessments. Integration with live systems and Graf typing may enable a fully automated DDH diagnostic workflow. Level of Evidence: Level III, diagnostic study.
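For readers who want to reproduce the frame-level metrics reported above, a minimal sketch using scikit-learn is shown below; the standard/non-standard labels are hypothetical, not the study's validation frames.

```python
# Frame-level classification metrics; 1 = standard plane, 0 = non-standard (hypothetical labels).
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]   # reference labels from expert annotation
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]   # detector output per frame

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))
```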

Artificial Intelligence in Prenatal Ultrasound: A Systematic Review of Diagnostic Tools for Detecting Congenital Anomalies

Dunne, J., Kumarasamy, C., Belay, D. G., Betran, A. P., Gebremedhin, A. T., Mengistu, S., Nyadanu, S. D., Roy, A., Tessema, G., Tigest, T., Pereira, G.

medRxiv preprint · Jul 5, 2025
Background: Artificial intelligence (AI) has shown promise in interpreting ultrasound imaging through flexible pattern recognition and algorithmic learning, but implementation in clinical practice remains limited. This study aimed to investigate the current application of AI in prenatal ultrasounds to identify congenital anomalies, and to synthesise challenges and opportunities for the advancement of AI-assisted ultrasound diagnosis. This comprehensive analysis addresses the clinical translation gap between AI performance metrics and practical implementation in prenatal care. Methods: Systematic searches were conducted in eight electronic databases (CINAHL Plus, Ovid/EMBASE, Ovid/MEDLINE, ProQuest, PubMed, Scopus, Web of Science and Cochrane Library) and Google Scholar from inception to May 2025. Studies were included if they applied an AI-assisted ultrasound diagnostic tool to identify a congenital anomaly during pregnancy. This review adhered to PRISMA guidelines for systematic reviews. We evaluated study quality using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) guidelines. Findings: Of 9,918 records, 224 were identified for full-text review and 20 met the inclusion criteria. The majority of studies (11/20, 55%) were conducted in China, with most published after 2020 (16/20, 80%). All AI models were developed as assistive tools for anomaly detection or classification. Most models (85%) focused on single-organ systems: heart (35%), brain/cranial (30%), or facial features (20%), while three studies (15%) attempted multi-organ anomaly detection. Fifty percent of the included studies reported exceptionally high model performance, with both sensitivity and specificity exceeding 0.95 and AUC-ROC values ranging from 0.91 to 0.97. Most studies (75%) lacked external validation, with internal validation often limited to small training and testing datasets. Interpretation: While AI applications in prenatal ultrasound showed potential, current evidence indicates significant limitations in their practical implementation. Much work is required to optimise their application, including external validation of diagnostic models to demonstrate clinical utility and real-world impact. Future research should prioritise larger-scale multi-centre studies, the development of multi-organ anomaly detection capabilities rather than the current single-organ focus, and robust evaluation of AI tools in real-world clinical settings.

AI-enabled obstetric point-of-care ultrasound as an emerging technology in low- and middle-income countries: provider and health system perspectives.

Della Ripa S, Santos N, Walker D

PubMed · Jul 4, 2025
In many low- and middle-income countries (LMICs), widespread access to obstetric ultrasound is challenged by lack of trained providers, workload, and inadequate resources required for sustainability. Artificial intelligence (AI) is a powerful tool for automating image acquisition and interpretation and may help overcome these barriers. This study explored stakeholders' opinions about how AI-enabled point-of-care ultrasound (POCUS) might change current antenatal care (ANC) services in LMICs and identified key considerations for introduction. We purposively sampled midwives, doctors, researchers, and implementers for this mixed methods study, with a focus on those who live or work in African LMICs. Individuals completed an anonymous web-based survey, then participated in an interview or focus group. Among the 41 participants, we captured demographics, experience with and perceptions of standard POCUS, and reactions to an AI-enabled POCUS prototype description. Qualitative data were analyzed by thematic content analysis, and quantitative Likert and rank-order data were aggregated as frequencies; the latter were presented alongside illustrative quotes to highlight overall versus nuanced perceptions. The following themes emerged: (1) priority AI capabilities; (2) potential impact on ANC quality, services, and clinical outcomes; (3) health system integration considerations; and (4) research priorities. First, AI-enabled POCUS elicited concerns around algorithmic accuracy and compromised clinical acumen due to over-reliance on AI, but also interest in automating gestational age estimation. Second, there was overall agreement that both standard and AI-enabled POCUS could improve ANC attendance (75%, 65%, respectively), provider-client trust (82%, 60%), and providers' confidence in clinical decision-making (85%, 70%). AI consistently elicited more uncertainty among respondents. Third, health system considerations emerged, including task sharing with midwives, ultrasound training delivery and curricular content, and policy-related issues such as data security and liability risks. For both standard and AI-enabled POCUS, clinical decision support and referral strengthening were deemed necessary to improve outcomes. Lastly, ranked priority research areas included algorithm accuracy across diverse populations and impact on ANC performance indicators; mortality indicators were less prioritized. Optimism that AI-enabled POCUS can increase access in settings with limited personnel and resources is coupled with expressions of caution and potential risks that warrant careful consideration and exploration.

Cross-validation of an artificial intelligence tool for fracture classification and localization on conventional radiography in Dutch population.

Ruitenbeek HC, Sahil S, Kumar A, Kushawaha RK, Tanamala S, Sathyamurthy S, Agrawal R, Chattoraj S, Paramasamy J, Bos D, Fahimi R, Oei EHG, Visser JJ

PubMed · Jul 3, 2025
The aim of this study was to validate the effectiveness of an AI tool trained on Indian data in a Dutch medical center and to assess its ability to classify and localize fractures. Conventional radiographs acquired between January 2019 and November 2022 were analyzed using a multitask deep neural network. The tool, trained on Indian data, identified and localized fractures in 17 body parts. The reference standard was based on radiology reports resulting from routine clinical workflow and confirmed by an experienced musculoskeletal radiologist. The analysis included both patient-wise and fracture-wise evaluations, employing binary and Intersection over Union (IoU) metrics to assess fracture detection and localization accuracy. In total, 14,311 radiographs [median age, 48 years (range 18-98); 7,265 male] were analyzed and categorized by body part: clavicle, shoulder, humerus, elbow, forearm, wrist, hand and finger, pelvis, hip, femur, knee, lower leg, ankle, foot and toe. 4,156/14,311 (29%) had fractures. The AI tool demonstrated overall patient-wise sensitivity, specificity, and AUC of 87.1% (95% CI: 86.1-88.1%), 87.1% (95% CI: 86.4-87.7%), and 0.92 (95% CI: 0.91-0.93), respectively. The fracture-wise detection rate was 60% overall, ranging from 7% for rib fractures to 90% for clavicle fractures. This study validates a fracture detection AI tool, originally trained on Indian data, on a Western-European dataset. While classification performance is robust on real clinical data, fracture-wise analysis reveals variability in localization accuracy, underscoring the need for refinement in fracture localization. AI may help enable optimal use of limited resources and personnel. This study evaluates an AI tool designed to aid in detecting fractures, possibly reducing reading time or optimizing radiology workflow by prioritizing fracture-positive cases. Cross-validation on a consecutive Dutch cohort confirms this AI tool's clinical robustness. The tool detected fractures with 87% sensitivity, 87% specificity, and 0.92 AUC. The AI localized 60% of fractures, with the highest rate for the clavicle (90%) and the lowest for ribs (7%).
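The Intersection over Union (IoU) metric used for the fracture-wise localization analysis can be sketched as below; the bounding boxes and the 0.5 acceptance threshold are illustrative assumptions rather than details reported by the study.

```python
# Minimal IoU computation for axis-aligned bounding boxes (x1, y1, x2, y2) in pixels.
def iou(box_a, box_b):
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)          # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

predicted = (120, 80, 220, 190)   # hypothetical AI-predicted fracture box
reference = (130, 90, 230, 200)   # hypothetical radiologist-annotated box
print(f"IoU = {iou(predicted, reference):.2f}")  # e.g. count as localized if IoU >= 0.5
```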

A comparative three-dimensional analysis of skeletal and dental changes induced by Herbst and PowerScope appliances in Class II malocclusion treatment: a retrospective cohort study.

Caleme E, Moro A, Mattos C, Miguel J, Batista K, Claret J, Leroux G, Cevidanes L

PubMed · Jul 3, 2025
Skeletal Class II malocclusion is commonly treated using mandibular advancement appliances during growth. Evaluating the comparative effectiveness of different appliances can help optimize treatment outcomes. This study aimed to compare dental and skeletal outcomes of Class II malocclusion treatment using Herbst and PowerScope appliances in conjunction with fixed orthodontic therapy. This retrospective comparative study included 46 consecutively treated patients from two university clinics: 26 treated with PowerScope and 20 with Herbst MiniScope. CBCT scans were obtained before and after treatment. Skeletal and dental changes were analyzed using maxillary and mandibular voxel-based regional superimpositions and cranial base registrations, aided by AI-based landmark detection. Measurement bias was minimized through the use of a calibrated, blinded examiner. No patients were excluded from the analysis. Due to the study's retrospective nature, no prospective registration was performed; the institutional review board granted ethical approval. The Herbst group showed greater anterior displacement at B-point and Pogonion than the PowerScope group (2.4 mm and 2.6 mm, respectively). Both groups exhibited improved maxillomandibular relationships, with a reduced SNA angle in the PowerScope group and an increased SNB angle in the Herbst group. Vertical skeletal changes were observed at points A, B, and Pog in both groups. The Herbst appliance also resulted in less lower incisor proclination and more pronounced distal movement of the upper incisors. Both appliances effectively corrected Class II malocclusion. Herbst promoted more pronounced skeletal advancement, while PowerScope induced greater dental compensation. These findings may be generalizable to similarly aged Class II patients in CVM stages 3-4.
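As a hedged illustration of how such voxel-based superimpositions translate into the reported millimetre values, the sketch below computes landmark displacement in a registered coordinate frame; the coordinates, axis convention, and landmark names used here are assumptions for illustration, not data from the study.

```python
# Illustrative only: once pre- and post-treatment CBCT volumes are registered on the
# cranial base, skeletal change can be read off as displacement of landmarks.
import numpy as np

# Hypothetical landmark positions (x, y, z) in mm in the registered frame; +y is anterior.
b_point_pre,  b_point_post  = np.array([0.0, 62.1, -38.4]), np.array([0.2, 64.5, -38.9])
pogonion_pre, pogonion_post = np.array([0.1, 64.0, -52.0]), np.array([0.4, 66.6, -52.6])

def displacement(pre: np.ndarray, post: np.ndarray):
    delta = post - pre
    return delta[1], np.linalg.norm(delta)  # anterior component, total 3D movement

for name, pre, post in [("B-point", b_point_pre, b_point_post),
                        ("Pogonion", pogonion_pre, pogonion_post)]:
    anterior, total = displacement(pre, post)
    print(f"{name}: anterior {anterior:.1f} mm, total {total:.1f} mm")
```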

A deep active learning framework for mitotic figure detection with minimal manual annotation and labelling.

Liu E, Lin A, Kakodkar P, Zhao Y, Wang B, Ling C, Zhang Q

PubMed · Jul 3, 2025
Accurately and efficiently identifying mitotic figures (MFs) is crucial for diagnosing and grading various cancers, including glioblastoma (GBM), a highly aggressive brain tumour requiring precise and timely intervention. Traditional manual counting of MFs in whole slide images (WSIs) is labour-intensive and prone to interobserver variability. Our study introduces a deep active learning framework that addresses these challenges with minimal human intervention. We utilized a dataset of GBM WSIs from The Cancer Genome Atlas (TCGA). Our framework integrates convolutional neural networks (CNNs) with an active learning strategy. Initially, a CNN is trained on a small, annotated dataset. The framework then identifies uncertain samples from the unlabelled data pool, which are subsequently reviewed by experts. These ambiguous cases are verified and used for model retraining. This iterative process continues until the model achieves satisfactory performance. Our approach achieved 81.75% precision and 82.48% recall for MF detection. For MF subclass classification, it attained an accuracy of 84.1%. Furthermore, this approach significantly reduced annotation time - approximately 900 min across 66 WSIs - cutting the effort nearly in half compared to traditional methods. Our deep active learning framework demonstrates a substantial improvement in both efficiency and accuracy for MF detection and classification in GBM WSIs. By reducing reliance on large annotated datasets, it minimizes manual effort while maintaining high performance. This methodology can be generalized to other medical imaging tasks, supporting broader applications in the healthcare domain.
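The uncertainty-driven selection step at the heart of such a framework can be sketched as below; the random feature vectors, the scikit-learn classifier, and the review budget are stand-ins for the authors' CNN pipeline, not their actual implementation.

```python
# Conceptual sketch of uncertainty-based active learning: train on a small labelled pool,
# score the unlabelled pool, and flag the most ambiguous patches for expert review.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical feature vectors standing in for candidate mitotic-figure patches.
x_labelled, y_labelled = rng.normal(size=(100, 16)), rng.integers(0, 2, 100)
x_unlabelled = rng.normal(size=(1000, 16))

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Higher entropy = the model is less certain about the patch."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

model = LogisticRegression(max_iter=1000).fit(x_labelled, y_labelled)
probs = model.predict_proba(x_unlabelled)

budget = 50  # how many ambiguous patches the expert reviews per round
query_idx = np.argsort(-predictive_entropy(probs))[:budget]
# These indices go to the pathologist; verified labels join the labelled pool and the
# model is retrained, repeating until performance is satisfactory.
print(query_idx[:10])
```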

Diagnostic performance of artificial intelligence based on contrast-enhanced computed tomography in pancreatic ductal adenocarcinoma: a systematic review and meta-analysis.

Yan G, Chen X, Wang Y

PubMed · Jul 2, 2025
This meta-analysis systematically evaluated the diagnostic performance of artificial intelligence (AI) based on contrast-enhanced computed tomography (CECT) in detecting pancreatic ductal adenocarcinoma (PDAC). Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Diagnostic Test Accuracy (PRISMA-DTA) guidelines, a comprehensive literature search was conducted across PubMed, Embase, and Web of Science from inception to March 2025. Bivariate random-effects models pooled sensitivity, specificity, and area under the curve (AUC). Heterogeneity was quantified via I² statistics, with subgroup analyses examining sources of variability, including AI methodologies, model architectures, sample sizes, geographic distributions, control groups and tumor stages. Nineteen studies involving 5,986 patients in internal validation cohorts and 2,069 patients in external validation cohorts were included. AI models demonstrated robust diagnostic accuracy in internal validation, with pooled sensitivity of 0.94 (95% CI 0.89-0.96), specificity of 0.93 (95% CI 0.90-0.96), and AUC of 0.98 (95% CI 0.96-0.99). External validation revealed moderately reduced sensitivity (0.84; 95% CI 0.78-0.89) and AUC (0.94; 95% CI 0.92-0.96), while specificity remained comparable (0.93; 95% CI 0.87-0.96). Substantial heterogeneity (I² > 85%) was observed, predominantly attributed to methodological variations in AI architectures and disparities in cohort sizes. AI demonstrates excellent diagnostic performance for PDAC on CECT, achieving high sensitivity and specificity across validation scenarios. However, its efficacy varies significantly with clinical context and tumor stage. Therefore, prospective multicenter trials that utilize standardized protocols and diverse cohorts, including early-stage tumors and complex benign conditions, are essential to validate the clinical utility of AI.
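For orientation, the I² heterogeneity statistic cited above can be computed as in the short sketch below; the study-level estimates are hypothetical, and the simple inverse-variance (fixed-effect) pooling shown is only an illustration, not the bivariate random-effects model used in the meta-analysis.

```python
# Cochran's Q and I^2 from hypothetical study-level logit-sensitivity estimates.
import numpy as np

def i_squared(effects: np.ndarray, variances: np.ndarray) -> float:
    weights = 1.0 / variances                       # inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights)
    q = np.sum(weights * (effects - pooled) ** 2)   # Cochran's Q
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100             # I^2 as a percentage

logit_sens = np.array([2.8, 2.2, 3.0, 1.9, 2.6, 2.4])       # made-up study estimates
variances  = np.array([0.04, 0.06, 0.05, 0.03, 0.07, 0.05])  # made-up variances
print(f"I^2 = {i_squared(logit_sens, variances):.0f}%")       # >85% would indicate substantial heterogeneity
```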