
Diagnostic value of artificial intelligence-based software for the detection of pediatric upper extremity fractures.

Mollica F, Metz C, Anders MS, Wismayer KK, Schmid A, Niehues SM, Veldhoen S

PubMed · Aug 23, 2025
Fractures in children are common in emergency care, and accurate diagnosis is crucial to avoid complications affecting skeletal development. Limited access to pediatric radiology specialists emphasizes the potential of artificial intelligence (AI)-based diagnostic tools. This study evaluates the performance of the AI software BoneView® for detecting fractures of the upper extremity in children aged 2-18 years. A retrospective analysis was conducted using radiographic data from 826 pediatric patients presenting to the university's pediatric emergency department. Independent assessments by two experienced pediatric radiologists served as the reference standard. The diagnostic accuracy of the AI tool was evaluated against this reference, and performance parameters such as sensitivity, specificity, and positive and negative predictive values were calculated. The AI tool achieved an overall sensitivity of 89% and specificity of 91% for detecting fractures of the upper extremities. Significantly poorer performance compared to the reference standard was observed for the shoulder, elbow, hand, and fingers, while no significant difference was found for the wrist, clavicle, upper arm, and forearm. The software performed best for wrist fractures (sensitivity: 96%; specificity: 94%) and worst for elbow fractures (sensitivity: 87%; specificity: 65%). The assessed software provides diagnostic support in pediatric emergency radiology. While its overall performance is robust, limitations in specific anatomical regions underscore the need for further training of the underlying algorithms. The results suggest that AI can complement clinical expertise but should not replace radiological assessment. Question: There is no comprehensive analysis of an AI-based tool for the diagnosis of pediatric fractures focusing on the upper extremities. Findings: The AI-based software demonstrated solid overall diagnostic accuracy in the detection of upper limb fractures in children, with performance differing by anatomical region. Clinical relevance: AI-based fracture detection can support pediatric emergency radiology, especially where expert interpretation is limited. However, further algorithm training is needed for certain anatomical regions and for detecting associated findings such as joint effusions to maximize clinical benefit.
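As a quick reference for how the reported performance parameters relate to a 2x2 confusion matrix, here is a minimal Python sketch; the counts are illustrative placeholders, not the study's data.

```python
# Minimal sketch: sensitivity, specificity, PPV and NPV from confusion-matrix counts.
# The example counts are hypothetical, not taken from the BoneView study.

def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # fracture present and flagged by the software
        "specificity": tn / (tn + fp),   # fracture absent and not flagged
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

if __name__ == "__main__":
    # hypothetical counts for a single anatomical subgroup
    print(diagnostic_metrics(tp=240, fp=18, tn=280, fn=10))
```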

Pushing the limits of cardiac MRI: deep-learning based real-time cine imaging in free breathing vs breath hold.

Klemenz AC, Watzke LM, Deyerberg KK, Böttcher B, Gorodezky M, Manzke M, Dalmer A, Lorbeer R, Weber MA, Meinel FG

PubMed · Aug 23, 2025
To evaluate deep-learning (DL) based real-time cardiac cine sequences acquired in free breathing (FB) vs breath hold (BH). In this prospective single-centre cohort study, 56 healthy adult volunteers were investigated on a 1.5-T MRI scanner. A set of real-time cine sequences, including a short-axis stack and 2-, 3-, and 4-chamber views, was acquired in FB and with BH. A validated DL-based cine sequence acquired over three cardiac cycles served as the reference standard for volumetric results. Subjective image quality (sIQ) was rated by two blinded readers. Volumetric analysis of both ventricles was performed. sIQ was rated as good to excellent for FB real-time cine images, slightly inferior to BH real-time cine images (p < 0.0001). Overall acquisition time for one set of cine sequences was 50% shorter with FB (median 90 vs 180 s, p < 0.0001). There were significant differences between the real-time sequences and the reference in left ventricular (LV) end-diastolic volume, LV end-systolic volume, LV stroke volume, and LV mass. Nevertheless, BH cine imaging showed excellent correlation with the reference standard, with an intra-class correlation coefficient (ICC) > 0.90 for all parameters except right ventricular ejection fraction (RV EF, ICC = 0.887). With FB cine imaging, correlation with the reference standard was good for LV ejection fraction (LV EF, ICC = 0.825) and RV EF (ICC = 0.824) and excellent (ICC > 0.90) for all other parameters. DL-based real-time cine imaging is feasible even in FB, with good to excellent image quality and acceptable volumetric results in healthy volunteers. Question: Conventional cardiac MR (CMR) cine imaging is challenged by arrhythmias and by patients unable to hold their breath, since data are acquired over several heartbeats. Findings: DL-based real-time cine imaging is feasible in FB with acceptable volumetric results and a 50% reduction in acquisition time compared with real-time breath-hold sequences. Clinical relevance: This study supports the wider goal of increasing the availability of CMR by reducing the complexity and duration of the examination, improving patient comfort, and making CMR available even to patients who are unable to hold their breath.
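For readers unfamiliar with how the reported intra-class correlation coefficients are derived, below is a minimal numpy sketch of a two-way random-effects, absolute-agreement, single-measurement ICC(2,1). The paired volumes are synthetic placeholders, and the specific ICC form is an assumption, as the abstract does not state which variant was used.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    ratings: array of shape (n_subjects, k_methods).
    """
    n, k = ratings.shape
    grand_mean = ratings.mean()
    subject_means = ratings.mean(axis=1)
    method_means = ratings.mean(axis=0)

    ss_rows = k * ((subject_means - grand_mean) ** 2).sum()
    ss_cols = n * ((method_means - grand_mean) ** 2).sum()
    ss_error = ((ratings - grand_mean) ** 2).sum() - ss_rows - ss_cols

    msr = ss_rows / (n - 1)                 # between-subjects mean square
    msc = ss_cols / (k - 1)                 # between-methods mean square
    mse = ss_error / ((n - 1) * (k - 1))    # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# synthetic paired LV end-diastolic volumes (mL): reference vs free-breathing real-time
rng = np.random.default_rng(1)
reference = rng.normal(150, 25, size=56)
real_time = reference + rng.normal(-5, 8, size=56)   # small bias plus noise
print(f"ICC(2,1) = {icc_2_1(np.column_stack([reference, real_time])):.3f}")
```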

ESR Essentials: lung cancer screening with low-dose CT-practice recommendations by the European Society of Thoracic Imaging.

Revel MP, Biederer J, Nair A, Silva M, Jacobs C, Snoeckx A, Prokop M, Prosch H, Parkar AP, Frauenfelder T, Larici AR

PubMed · Aug 23, 2025
Low-dose CT screening for lung cancer reduces the risk of death from lung cancer by at least 21% in high-risk participants and should be offered to people aged between 50 and 75 with at least 20 pack-years of smoking. Iterative reconstruction or deep learning algorithms should be used to keep the effective dose below 1 mSv. Deep learning algorithms are required to facilitate the detection of nodules and the measurement of their volumetric growth. Only solid nodules larger than 500 mm³, those with spiculations, bubble-like lucencies, or pleural indentation, and complex cysts should be investigated further. Short-term follow-up at 3 or 6 months is required for solid nodules of 100 to 500 mm³. A watchful waiting approach is recommended for most subsolid nodules to limit the risk of overtreatment. Finally, the description of additional findings must be limited if lung cancer screening (LCS) is to be cost-effective. KEY POINTS: Low-dose CT screening reduces the risk of death from lung cancer by at least 21% in high-risk individuals, with a greater benefit in women. Quality assurance of screening is essential to control radiation dose and the number of false positives. Screening with low-dose CT detects incidental findings of variable clinical relevance; only those of importance should be reported.
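The volume-based management rules summarized above can be written as a simple triage function. The sketch below illustrates the volume cut-offs only; it deliberately omits the morphological criteria (spiculation, bubble-like lucencies, pleural indentation, complex cysts), subsolid nodules, and any local protocol details.

```python
def triage_solid_nodule(volume_mm3: float) -> str:
    """Toy triage of a solid screening-detected nodule by volume (cut-offs as above)."""
    if volume_mm3 > 500:
        return "further work-up"                         # per local protocol
    if volume_mm3 >= 100:
        return "short-term follow-up CT at 3 or 6 months"
    return "continue routine screening interval"

for v in (80, 250, 800):
    print(f"{v} mm^3 -> {triage_solid_nodule(v)}")
```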

Towards expert-level autonomous carotid ultrasonography with large-scale learning-based robotic system.

Jiang H, Zhao A, Yang Q, Yan X, Wang T, Wang Y, Jia N, Wang J, Wu G, Yue Y, Luo S, Wang H, Ren L, Chen S, Liu P, Yao G, Yang W, Song S, Li X, He K, Huang G

PubMed · Aug 23, 2025
Carotid ultrasound requires skilled operators due to small vessel dimensions and high anatomical variability, exacerbating sonographer shortages and diagnostic inconsistencies. Prior automation attempts, including rule-based approaches with manual heuristics and reinforcement learning trained in simulated environments, demonstrate limited generalizability and fail to complete real-world clinical workflows. Here, we present UltraBot, a fully learning-based autonomous carotid ultrasound robot, achieving human-expert-level performance through four innovations: (1) a unified imitation learning framework for acquiring anatomical knowledge and scanning operational skills; (2) a large-scale expert demonstration dataset (247,000 samples, 100× scale-up), enabling embodied foundation models with strong generalization; (3) a comprehensive scanning protocol ensuring full anatomical coverage for biometric measurement and plaque screening; (4) clinically oriented validation showing over 90% success rates, expert-level accuracy, and up to 5.5× higher reproducibility across diverse unseen populations. Overall, we show that large-scale deep learning offers a promising pathway toward autonomous, high-precision ultrasonography in clinical practice.
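The "unified imitation learning framework" is described only at a high level. As a generic illustration of the underlying idea, behavior cloning from expert demonstrations, here is a minimal PyTorch sketch; the architecture, input/output dimensions, and 6-DoF action space are assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

# Generic behavior-cloning sketch: map an ultrasound frame + current probe pose
# to the expert's next probe motion. All sizes are illustrative placeholders.
class ScanPolicy(nn.Module):
    def __init__(self, action_dim: int = 6):          # assumed 6-DoF probe velocity
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(32 + 6, 64), nn.ReLU(), nn.Linear(64, action_dim))

    def forward(self, image, pose):
        return self.head(torch.cat([self.encoder(image), pose], dim=1))

policy = ScanPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
# one toy batch standing in for expert demonstrations
images, poses, expert_actions = torch.randn(8, 1, 128, 128), torch.randn(8, 6), torch.randn(8, 6)
loss = nn.functional.mse_loss(policy(images, poses), expert_actions)  # imitate the expert action
optimizer.zero_grad(); loss.backward(); optimizer.step()
print(f"behavior-cloning loss: {loss.item():.4f}")
```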

The impact of a neuroradiologist on the report of a real-world CT perfusion imaging map derived by AI/ML-driven software.

De Rubeis G, Stasolla A, Piccoli C, Federici M, Cozzolino V, Lovullo G, Leone E, Pesapane F, Fabiano S, Bertaccini L, Pingi A, Galluzzo M, Saba L, Pampana E

PubMed · Aug 22, 2025
According to guidelines, computed tomography perfusion (CTP) should be read and analyzed using computer-aided software. This study evaluates the efficacy of AI/ML (machine learning)-driven software in CTP imaging and the effect of neuroradiologists' interpretation of these automated results. We conducted a retrospective, single-center cohort study from June to December 2023 at a comprehensive stroke center. A total of 132 patients with suspected acute ischemic stroke underwent CTP. The AI software RAPID.AI was used for the initial analysis, with subsequent validation and adjustments made by experienced neuroradiologists. The rate of CTP maps marked as "non-reportable", "reportable", or "reportable with correction" by the neuroradiologist was recorded. The degree of confidence in the reports of the basal and angiographic CT scans was assessed before and after CTP visualization. Statistical analysis included logistic regression and F1 score assessments to evaluate the predictive accuracy of AI-generated CTP maps. RESULTS: CTP maps derived from the AI software were reportable without artifacts in 65.2% of cases, improving to 87.9% when reviewed by neuroradiologists. Key predictive factors for artifact-free CTP maps included motion parameters and the timing of contrast peak distances. There was a significant shift toward higher confidence scores for the angiographic phase of the CT after the CTP result was available. CONCLUSIONS: Neuroradiologists play an indispensable role in enhancing the reliability of CTP imaging by interpreting and correcting AI-processed maps. CTP = computed tomography perfusion; AI/ML = artificial intelligence/machine learning; LVO = large vessel occlusion.
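The abstract mentions logistic regression and F1 scores for predicting whether a CTP map will be artifact-free. The sketch below shows that general workflow with synthetic predictors; the "motion" and "peak timing" variables are stand-ins, not the study's actual covariates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for the predictors mentioned above (motion parameters,
# contrast peak timing); not the study's data.
rng = np.random.default_rng(0)
n = 132
motion = rng.normal(0, 1, n)
peak_timing = rng.normal(0, 1, n)
X = np.column_stack([motion, peak_timing])
# artifact-free maps are assumed more likely when motion is low
y = (rng.normal(0, 1, n) - motion > -0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("F1 score:", round(f1_score(y_test, model.predict(X_test)), 3))
```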

Predictive model integrating deep learning and clinical features based on ultrasound imaging data for surgical intervention in intussusception in children younger than 8 months.

Qian YF, Zhou JJ, Shi SL, Guo WL

PubMed · Aug 22, 2025
The objective of this study was to identify risk factors for enema reduction failure and to establish a combined model that integrates deep learning (DL) features and clinical features for predicting surgical intervention in intussusception in children younger than 8 months of age. This was a retrospective study of intussusception with a prospective validation cohort. The retrospective data were collected from two hospitals in southeast China between January 2017 and December 2022; the prospective data were collected between January 2023 and July 2024. A total of 415 intussusception cases in patients younger than 8 months were included in the study. The 280 cases collected from Centre 1 were randomly divided into two groups at a 7:3 ratio: the training cohort (n=196) and the internal validation cohort (n=84). The 85 cases collected from Centre 2 were designated as the external validation cohort. Pretrained DL networks were used to extract deep transfer learning features, with least absolute shrinkage and selection operator (LASSO) regression selecting the features with non-zero coefficients. The clinical features were screened by univariate and multivariate logistic regression analyses. We constructed a combined model that integrated the two selected types of features, along with individual clinical and DL models for comparison. Additionally, the combined model was validated in a prospective cohort (n=50) collected from Centre 1. In the internal and external validation cohorts, the combined model (area under the curve (AUC): 0.911 and 0.871, respectively) demonstrated better performance for predicting surgical intervention in intussusception in children younger than 8 months of age than the clinical model (AUC: 0.776 and 0.740, respectively) and the DL model (AUC: 0.828 and 0.793, respectively). In the prospective validation cohort, the combined model also demonstrated impressive performance, with an AUC of 0.890. The combined model, integrating DL and clinical features, demonstrated stable predictive accuracy, suggesting its potential for improving clinical therapeutic strategies for intussusception.
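To make the feature-selection step concrete, here is a minimal scikit-learn sketch of LASSO selection over deep-learning features followed by a combined logistic model with clinical variables; the feature matrices, sizes, and outcome are random placeholders, not the study's data or exact pipeline.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n, n_dl, n_clin = 280, 512, 4                      # placeholder sizes
dl_features = rng.normal(size=(n, n_dl))           # deep transfer-learning features
clinical = rng.normal(size=(n, n_clin))            # screened clinical features
y = rng.integers(0, 2, size=n)                     # surgical intervention (yes/no)

# 1) LASSO keeps only features with non-zero coefficients
lasso = LassoCV(cv=5, random_state=0).fit(dl_features, y)
selected = dl_features[:, lasso.coef_ != 0]

# 2) combined model = selected DL features + clinical features
X = np.hstack([selected, clinical])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```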

Performance of chest X-ray with computer-aided detection powered by deep learning-based artificial intelligence for tuberculosis presumptive identification during case finding in the Philippines.

Marquez N, Carpio EJ, Santiago MR, Calderon J, Orillaza-Chi R, Salanap SS, Stevens L

PubMed · Aug 22, 2025
The Philippines' high tuberculosis (TB) burden calls for effective point-of-care screening. Systematic TB case finding using chest X-ray (CXR) with computer-aided detection powered by deep learning-based artificial intelligence (AI-CAD) provided this opportunity. We aimed to comprehensively review AI-CAD's real-life performance in the local context to support refining its integration into the country's programmatic TB elimination efforts. Retrospective cross-sectional data analysis was done on case-finding activities conducted in four regions of the Philippines between May 2021 and March 2024. Individuals 15 years and older with complete CXR and molecular World Health Organization-recommended rapid diagnostic (mWRD) test results were included. Presumptive TB was identified either by CXR or by TB signs and symptoms and/or official radiologist readings. The overall diagnostic accuracy of CXR with AI-CAD, stratified by different factors, was assessed using a fixed abnormality threshold and mWRD as the reference standard. Given the imbalanced dataset, we evaluated both precision-recall (PRC) and receiver operating characteristic (ROC) plots. Due to limited verification of CAD-negative individuals, we used "pseudo-sensitivity" and "pseudo-specificity" to reflect estimates based on partial testing. We identified potential factors that may affect performance metrics. Using a 0.5 abnormality threshold in analyzing 5740 individuals, the AI-CAD model showed high pseudo-sensitivity at 95.6% (95% CI, 95.1-96.1) but low pseudo-specificity at 28.1% (26.9-29.2) and positive predictive value (PPV) at 18.4% (16.4-20.4). The area under the receiver operating characteristic curve was 0.820, whereas the area under the precision-recall curve was 0.489. Pseudo-sensitivity was higher among males, younger individuals, and newly diagnosed TB. Threshold analysis revealed trade-offs: increasing the threshold score to 0.68 saved more mWRD tests (42%) but led to an increase in missed cases (10%). Threshold adjustments affected PPV, tests saved, and case detection differently across settings. Scaling up AI-CAD use in TB screening could benefit TB elimination efforts. There is a need to calibrate threshold scores based on resource availability, prevalence, and program goals. ROC and PRC plots, which specify PPV, could serve as valuable metrics for capturing the best estimate of model performance and cost-benefit ratios within the context-specific implementation of resource-limited settings.
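To illustrate why both ROC and precision-recall plots are reported for an imbalanced screening population, and how shifting the abnormality threshold trades tests saved against missed cases, here is a small sketch with simulated scores; the prevalence and score distributions are assumptions, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(7)
n, prevalence = 5740, 0.08                        # assumed TB prevalence among screened
y = (rng.random(n) < prevalence).astype(int)
# simulated CAD abnormality scores: positives tend to score higher
scores = np.where(y == 1, rng.beta(6, 2, n), rng.beta(2, 3, n))

print("AUROC:", round(roc_auc_score(y, scores), 3))
print("AUPRC:", round(average_precision_score(y, scores), 3))   # sensitive to class imbalance

for thr in (0.5, 0.68):                           # thresholds discussed in the abstract
    flagged = scores >= thr
    sens = flagged[y == 1].mean()                 # sensitivity among simulated positives
    tests_saved = 1 - flagged.mean()              # share not referred for mWRD testing
    print(f"thr={thr}: sensitivity={sens:.2f}, mWRD tests saved={tests_saved:.1%}")
```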

4D Virtual Imaging Platform for Dynamic Joint Assessment via Uni-Plane X-ray and 2D-3D Registration

Hao Tang, Rongxi Yi, Lei Li, Kaiyi Cao, Jiapeng Zhao, Yihan Xiao, Minghai Shi, Peng Yuan, Yan Xi, Hui Tang, Wei Li, Zhan Wu, Yixin Zhou

arXiv preprint · Aug 22, 2025
Conventional computed tomography (CT) lacks the ability to capture dynamic, weight-bearing joint motion. Functional evaluation, particularly after surgical intervention, requires four-dimensional (4D) imaging, but current methods are limited by excessive radiation exposure or incomplete spatial information from 2D techniques. We propose an integrated 4D joint analysis platform that combines: (1) a dual robotic arm cone-beam CT (CBCT) system with a programmable, gantry-free trajectory optimized for upright scanning; (2) a hybrid imaging pipeline that fuses static 3D CBCT with dynamic 2D X-rays using deep learning-based preprocessing, 3D-2D projection, and iterative optimization; and (3) a clinically validated framework for quantitative kinematic assessment. In simulation studies, the method achieved sub-voxel accuracy (0.235 mm) with a 99.18 percent success rate, outperforming conventional and state-of-the-art registration approaches. Clinical evaluation further demonstrated accurate quantification of tibial plateau motion and medial-lateral variance in post-total knee arthroplasty (TKA) patients. This 4D CBCT platform enables fast, accurate, and low-dose dynamic joint imaging, offering new opportunities for biomechanical research, precision diagnostics, and personalized orthopedic care.
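As a toy illustration of the 3D-2D projection plus iterative optimization step described above, the following sketch recovers an in-plane rigid transform by matching a parallel projection of a volume to a target image. The projector, similarity metric, and optimizer are deliberate simplifications and assumptions, not the authors' registration pipeline.

```python
import numpy as np
from scipy.ndimage import rotate, shift
from scipy.optimize import minimize

def project(volume, angle_deg, tx, ty):
    """Toy parallel projector: in-plane rotation, sum along depth, then 2D shift."""
    rotated = rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
    drr = rotated.sum(axis=0)
    return shift(drr, (tx, ty), order=1)

def cost(params, volume, target):
    angle, tx, ty = params
    drr = project(volume, angle, tx, ty)
    a, b = drr - drr.mean(), target - target.mean()
    # negative normalized cross-correlation as the similarity metric
    return -(a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

rng = np.random.default_rng(0)
vol = rng.random((32, 64, 64))                          # stand-in for a CBCT volume
target = project(vol, angle_deg=5.0, tx=2.0, ty=-3.0)   # simulated "X-ray" frame
res = minimize(cost, x0=[0.0, 0.0, 0.0], args=(vol, target), method="Powell")
# On real data a better-behaved metric, multi-resolution search, and a true
# cone-beam projector would be needed; this only demonstrates the loop structure.
print("recovered (angle, tx, ty):", np.round(res.x, 2))
```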

AI-based diagnosis of acute aortic syndrome from noncontrast CT.

Hu Y, Xiang Y, Zhou YJ, He Y, Lang D, Yang S, Du X, Den C, Xu Y, Wang G, Ding Z, Huang J, Zhao W, Wu X, Li D, Zhu Q, Li Z, Qiu C, Wu Z, He Y, Tian C, Qiu Y, Lin Z, Zhang X, Hu L, He Y, Yuan Z, Zhou X, Fan R, Chen R, Guo W, Xu J, Zhang J, Mok TCW, Li Z, Kalra MK, Lu L, Xiao W, Li X, Bian Y, Shao C, Wang G, Lu W, Huang Z, Xu M, Zhang H

PubMed · Aug 20, 2025
The accurate and timely diagnosis of acute aortic syndrome (AAS) in patients presenting with acute chest pain remains a clinical challenge. Aortic computed tomography (CT) angiography is the imaging protocol of choice in patients with suspected AAS. However, due to economic and workflow constraints in China, the majority of suspected patients undergo noncontrast CT as the initial imaging test, and CT angiography is reserved for those at higher risk. Although noncontrast CT can reveal specific signs indicative of AAS, its diagnostic efficacy when used alone has not been well characterized. Here we present an artificial intelligence-based warning system, iAorta, using noncontrast CT for AAS identification in China, which demonstrates remarkably high accuracy and provides clinicians with interpretable warnings. iAorta was evaluated through a comprehensive step-wise study. In the multicenter retrospective study (n = 20,750), iAorta achieved a mean area under the receiver operating characteristic curve of 0.958 (95% confidence interval 0.950-0.967). In the large-scale real-world study (n = 137,525), iAorta demonstrated consistently high performance across various noncontrast CT protocols, achieving a sensitivity of 0.913-0.942 and a specificity of 0.991-0.993. In the prospective comparative study (n = 13,846), iAorta significantly shortened the time to the correct diagnostic pathway for patients with initial false suspicion, from an average of 219.7 (115-325) min to 61.6 (43-89) min. Furthermore, in the prospective pilot deployment, iAorta correctly identified 21 out of 22 patients with AAS among 15,584 consecutive patients presenting with acute chest pain under a noncontrast CT protocol in the emergency department. For these 21 AAS-positive patients, the average time to diagnosis was 102.1 (75-133) min. Finally, iAorta may help prevent delayed or missed diagnoses of AAS in settings where noncontrast CT remains the only feasible initial imaging modality, such as in resource-limited regions or in patients who cannot receive, or did not receive, intravenous contrast.
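The headline result is an AUROC with a 95% confidence interval; a common way to obtain such an interval is bootstrapping the test set, sketched below on simulated scores. The resampling scheme, prevalence, and score distributions are assumptions, as the abstract does not state how the interval was computed.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 2000                                           # simulated cohort, not the study's data
y = (rng.random(n) < 0.05).astype(int)             # assumed AAS prevalence
scores = np.where(y == 1, rng.beta(8, 2, n), rng.beta(2, 6, n))

aucs = []
for _ in range(1000):                              # bootstrap resamples of the test set
    idx = rng.integers(0, n, n)
    if y[idx].min() == y[idx].max():               # skip degenerate resamples (one class only)
        continue
    aucs.append(roc_auc_score(y[idx], scores[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUROC = {roc_auc_score(y, scores):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```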

AI-assisted 3D versus conventional 2D preoperative planning in total hip arthroplasty for Crowe type II-IV high hip dislocation: a two-year retrospective study.

Lu Z, Yuan C, Xu Q, Feng Y, Xia Q, Wang X, Zhu J, Wu J, Wang T, Chen J, Wang X, Wang Q

PubMed · Aug 20, 2025
With the growing complexity of total hip arthroplasty (THA) for high hip dislocation (HHD), artificial intelligence (AI)-assisted three-dimensional (3D) preoperative planning has emerged as a promising tool to enhance surgical accuracy. This study compared clinical outcomes of AI-assisted 3D versus conventional two-dimensional (2D) X-ray preoperative planning in such cases. A retrospective cohort of 92 patients with Crowe type II-IV HHD who underwent THA between May 2020 and January 2023 was analyzed. Patients received either AI-assisted 3D preoperative planning (n = 49) or 2D X-ray preoperative planning (n = 43). The primary outcome was the accuracy of implant size prediction. Secondary outcomes included operative time, blood loss, leg length discrepancy (LLD), implant positioning, functional scores (Harris Hip Score [HHS], WOMAC, VAS), complications, and implant survival at 24 months. At 24 months, both groups demonstrated significant improvements in functional outcomes. Compared to the 2D X-ray group, the AI-3D group showed higher accuracy in implant size prediction (acetabular cup: 59.18% vs. 30.23%; femoral stem: 65.31% vs. 41.86%; both p < 0.05), a greater proportion of cups placed within the Lewinnek and Callanan safe zones (p < 0.05), shorter operative time, reduced intraoperative blood loss, and more effective correction of LLD (all p < 0.05). No significant differences were observed in HHS, WOMAC, or VAS scores between groups at 24 months (all p > 0.05). Implant survivorship was also comparable (100% vs. 97.7%; p = 0.283), with one revision noted in the 2D X-ray group. AI-assisted 3D preoperative planning improves prosthesis selection accuracy, implant positioning, and perioperative outcomes in THA for Crowe type II-IV HHD, although 2-year functional and survival outcomes were comparable to those achieved with 2D X-ray preoperative planning. Considering the higher cost, radiation exposure, and workflow complexity, its broader application warrants further investigation, particularly in identifying patients who may benefit most.
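The comparison of implant-size prediction accuracy between groups is a two-proportion problem. The sketch below runs a chi-square test on counts reconstructed from the reported percentages for the acetabular cup (59.18% of 49 → 29; 30.23% of 43 → 13); it is shown for illustration only, as the abstract does not state which statistical test the authors used.

```python
from scipy.stats import chi2_contingency

# Acetabular cup size predicted exactly vs not, counts reconstructed from the
# reported percentages. Illustrative re-analysis, not the authors' computation.
table = [[29, 49 - 29],    # AI-assisted 3D group: exact / not exact
         [13, 43 - 13]]    # 2D X-ray group
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```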