
Artificial Intelligence to Detect Developmental Dysplasia of Hip: A Systematic Review.

Bhavsar S, Gowda BB, Bhavsar M, Patole S, Rao S, Rath C

Sep 28 2025
Deep learning (DL), a branch of artificial intelligence (AI), has been applied to diagnose developmental dysplasia of the hip (DDH) on pelvic radiographs and ultrasound (US) images. This technology could assist in early screening, enable timely intervention, and improve cost-effectiveness. We conducted a systematic review to evaluate the diagnostic accuracy of DL algorithms in detecting DDH. PubMed, Medline, EMBASE, EMCARE, clinicaltrials.gov (clinical trial registry), IEEE Xplore, and the Cochrane Library were searched in October 2024. Prospective and retrospective cohort studies that applied AI to hip US or X-ray images of children (< 16 years) at risk of or suspected to have DDH were included. The review followed the guidelines of the Cochrane Collaboration Diagnostic Test Accuracy Working Group, and risk of bias was assessed using the QUADAS-2 tool. Twenty-three studies met the inclusion criteria, with 15 (n = 8315) evaluating DDH on US images and eight (n = 7091) on pelvic radiographs. The area under the curve ranged from 0.80 to 0.99 for pelvic radiographs and from 0.90 to 0.99 for US images. Sensitivity and specificity for detecting DDH on radiographs ranged from 92.86% to 100% and from 95.65% to 99.82%, respectively; for US images, sensitivity ranged from 86.54% to 100% and specificity from 62.5% to 100%. AI demonstrated effectiveness comparable to that of physicians in detecting DDH. However, limited evaluation on external datasets restricts generalisability. Further research incorporating diverse datasets and real-world applications is needed to assess the broader clinical impact on DDH diagnosis.
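For readers unfamiliar with how figures like these are produced, the sketch below shows how per-study sensitivity, specificity, and AUC are conventionally computed from a model's outputs with scikit-learn. The arrays and the 0.5 operating threshold are hypothetical placeholders, not data from any included study.

```python
# Illustrative only: how per-study sensitivity, specificity, and AUC are
# typically derived from model predictions. Hypothetical values throughout.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # 1 = DDH on the reference standard
y_score = np.array([0.92, 0.10, 0.85, 0.60, 0.30, 0.05, 0.75, 0.55])  # model probabilities

y_pred = (y_score >= 0.5).astype(int)        # assumed operating threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)                 # recall for the DDH-positive class
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, y_score)         # threshold-independent summary

print(f"sensitivity={sensitivity:.2%}, specificity={specificity:.2%}, AUC={auc:.3f}")
```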

Quantifying 3D foot and ankle alignment using an AI-driven framework: a pilot study.

Huysentruyt R, Audenaert E, Van den Borre I, Pižurica A, Duquesne K

Sep 27 2025
Accurate assessment of foot and ankle alignment through clinical measurements is essential for diagnosing deformities, planning treatment, and monitoring outcomes. Traditional 2D radiographs fail to fully represent the 3D complexity of the foot and ankle, whereas weight-bearing CT (WBCT) provides a 3D view of bone alignment under physiological loading. Nevertheless, manual landmark identification on WBCT remains time-intensive and prone to variability. This study presents a novel AI framework that automates foot and ankle alignment assessment via deep learning landmark detection. By training 3D U-Net models to predict heatmaps for 22 anatomical landmarks directly from WBCT images, the approach eliminates the need for segmentation and iterative mesh registration. A small dataset of 74 orthopedic patients, including foot deformity cases such as pes cavus and planovalgus, was used to develop and evaluate the model in a clinically relevant population. The mean absolute error was assessed for each landmark and each angle using fivefold cross-validation. Mean absolute distance errors ranged from 1.00 mm for the proximal head center of the first phalanx to a maximum of 1.88 mm for the lowest point of the calcaneus. Automated clinical measurements derived from these landmarks achieved mean absolute errors between 0.91° for the hindfoot angle and a maximum of 2.90° for the Böhler angle. The heatmap-based AI approach enables automated foot and ankle alignment assessment from WBCT imaging, achieving accuracy comparable to the manual inter-rater variability reported in previous studies. This AI-driven method represents a potentially valuable approach for evaluating foot and ankle morphology, but this exploratory study requires further evaluation on larger datasets to assess real clinical applicability.
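The core technique here is heatmap-based landmark regression: the network predicts a Gaussian "blob" per landmark and coordinates are read off the peak. Below is a minimal sketch of that encode/decode step in PyTorch; the volume size, sigma, and single-landmark toy example are assumptions for illustration, not the authors' 3D U-Net pipeline.

```python
# Sketch of heatmap target encoding and peak decoding for landmark detection.
# Not the authors' pipeline; volume size, sigma, and landmark are illustrative.
import torch

def gaussian_heatmap(shape, center, sigma=2.0):
    """Gaussian heatmap target centered on a landmark given in voxel coordinates."""
    zz, yy, xx = torch.meshgrid(
        torch.arange(shape[0]), torch.arange(shape[1]), torch.arange(shape[2]),
        indexing="ij",
    )
    d2 = (zz - center[0]) ** 2 + (yy - center[1]) ** 2 + (xx - center[2]) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def heatmap_to_landmark(heatmap):
    """Decode a predicted heatmap to voxel coordinates via its peak."""
    flat = torch.argmax(heatmap)
    d, h, w = heatmap.shape
    z = flat // (h * w)
    y = (flat % (h * w)) // w
    x = flat % w
    return torch.stack([z, y, x]).float()

# Toy example: one 32^3 volume with a single landmark at (10, 20, 12).
target = gaussian_heatmap((32, 32, 32), torch.tensor([10.0, 20.0, 12.0]))
pred = heatmap_to_landmark(target)  # decoding a perfect prediction recovers the landmark
# In practice the per-landmark error is |pred - truth| scaled by voxel spacing (mm).
print(pred)
```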

COVID-19 Pneumonia Diagnosis Using Medical Images: Deep Learning-Based Transfer Learning Approach.

Dharmik A

Sep 26 2025
SARS-CoV-2, the causative agent of COVID-19, remains a global health concern due to its high transmissibility and evolving variants. Although vaccination efforts and therapeutic advancements have mitigated disease severity, emerging mutations continue to challenge diagnostics and containment strategies. As of mid-February 2025, global test positivity has risen to 11%, marking the highest level in over 6 months, despite widespread immunization efforts. Newer variants demonstrate enhanced host cell binding, increasing both infectivity and diagnostic complexity. This study aimed to evaluate the effectiveness of deep transfer learning in delivering a rapid, accurate, and mutation-resilient COVID-19 diagnosis from medical imaging, with a focus on scalability and accessibility. An automated detection system was developed using state-of-the-art convolutional neural networks, including VGG16 (Visual Geometry Group network-16 layers), ResNet50 (residual network-50 layers), ConvNeXtTiny (convolutional next-tiny), MobileNet (mobile network), NASNetMobile (neural architecture search network-mobile version), and DenseNet121 (densely connected convolutional network-121 layers), to detect COVID-19 from chest X-ray and computed tomography (CT) images. Among all the models evaluated, DenseNet121 emerged as the best-performing architecture for COVID-19 diagnosis using X-ray and CT images. It achieved an impressive accuracy of 98%, with a precision of 96.9%, a recall of 98.9%, an F1-score of 97.9%, and an area under the curve score of 99.8%, indicating a high degree of consistency and reliability in detecting both positive and negative cases. The confusion matrix showed minimal false positives and false negatives, underscoring the model's robustness in real-world diagnostic scenarios. Given its performance, DenseNet121 is a strong candidate for deployment in clinical settings and serves as a benchmark for future improvements in artificial intelligence-assisted diagnostic tools. The study results underscore the potential of artificial intelligence-powered diagnostics in supporting early detection and global pandemic response. With careful optimization, deep learning models can address critical gaps in testing, particularly in settings constrained by limited resources or emerging variants.
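As a rough illustration of the transfer-learning setup described, the sketch below loads an ImageNet-pretrained DenseNet121 from torchvision and swaps in a two-class head. The freezing strategy, optimizer settings, and binary label scheme are assumptions, not the paper's exact configuration.

```python
# Minimal transfer-learning sketch with a DenseNet121 backbone (the paper's
# best-performing model), assuming a binary COVID-19 / non-COVID-19 label.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)

for param in model.parameters():        # freeze the ImageNet feature extractor
    param.requires_grad = False

model.classifier = nn.Linear(model.classifier.in_features, 2)  # new trainable head

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 chest images.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 1, 0])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```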

Performance of artificial intelligence in automated measurement of patellofemoral joint parameters: a systematic review.

Zhan H, Zhao Z, Liang Q, Zheng J, Zhang L

Sep 26 2025
The evaluation of patellofemoral joint parameters is essential for diagnosing patellar dislocation, yet manual measurements show poor reproducibility and vary considerably with clinician expertise. This systematic review aimed to evaluate the performance of artificial intelligence (AI) models in automatically measuring patellofemoral joint parameters. A comprehensive literature search of the PubMed, Web of Science, Cochrane Library, and Embase databases was conducted from database inception through June 15, 2025. Two investigators independently performed study screening and data extraction, with methodological quality assessed using the modified MINORS checklist. This systematic review is registered with PROSPERO. A narrative synthesis was conducted to summarize the findings of the included studies. A total of 19 studies comprising 10,490 patients met the eligibility criteria, with a mean age of 51.3 years and 56.8% female participants. Among these, six studies developed AI models based on radiographic series, nine on CT imaging, and four on MRI. The results demonstrated excellent reliability, with intraclass correlation coefficients (ICCs) of 0.900-0.940 for femoral anteversion angle, 0.910-0.920 for trochlear groove depth, and 0.930-0.950 for tibial tuberosity-trochlear groove distance. Good reliability was observed for patellar height (ICCs 0.880-0.985), sulcus angle (ICCs 0.878-0.980), and patellar tilt angle (ICCs 0.790-0.990). Notably, the AI system detected trochlear dysplasia with 88% accuracy, 79% sensitivity, 96% specificity, and an AUC of 0.88. AI-based measurement of patellofemoral joint parameters demonstrates methodological robustness and operational efficiency, showing strong agreement with expert manual measurements. To further establish clinical utility, multicenter prospective studies incorporating rigorous external validation are needed; such validation would strengthen generalizability and facilitate integration into clinical decision support systems. This systematic review was registered in PROSPERO (CRD420251075068).
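Agreement in these studies is summarized with intraclass correlation coefficients. A minimal sketch of computing an ICC between AI and manual measurements with the pingouin package follows; the data frame values are invented, and the appropriate ICC form (e.g., two-way random effects, absolute agreement) would need to match each study's design.

```python
# Illustrative ICC computation for agreement between AI and a human reader on
# one patellofemoral parameter (e.g., patellar tilt angle). Values are made up.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4],
    "rater":   ["AI", "manual"] * 4,
    "angle":   [12.1, 12.6, 8.4, 8.0, 15.3, 15.9, 10.2, 10.5],  # degrees
})

icc = pg.intraclass_corr(data=df, targets="subject", raters="rater", ratings="angle")
# ICC2 corresponds to a two-way random-effects, absolute-agreement model.
print(icc[["Type", "ICC", "CI95%"]])
```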

Artificial intelligence applications in thyroid cancer care.

Pozdeyev N, White SL, Bell CC, Haugen BR, Thomas J

Sep 25 2025
Artificial intelligence (AI) has created tremendous opportunities to improve thyroid cancer care. We searched the PubMed database through May 31, 2025, using the query "artificial intelligence thyroid cancer". We highlight a set of high-impact publications selected based on technical innovation, large generalizable training datasets, and independent and/or prospective validation of AI. We review the key applications of AI for diagnosing and managing thyroid cancer. Our primary focus is on using computer vision to evaluate thyroid nodules on thyroid ultrasound, the area of thyroid AI that has attracted the most attention from researchers and will likely have a significant clinical impact. We also highlight AI for detecting and predicting thyroid cancer neck lymph node metastases, digital cyto- and histopathology, large language models for unstructured data analysis, patient education, and other clinical applications. We discuss how thyroid AI technology has evolved and cite the most impactful research studies. Finally, we balance our excitement about the potential of AI to improve clinical care for thyroid cancer with current limitations, such as the lack of high-quality, independent prospective validation of AI in clinical trials, the uncertain added value of AI software, unknown performance on non-papillary thyroid cancer types, and the complexity of clinical implementation. AI promises to improve thyroid cancer diagnosis, reduce healthcare costs and enable personalized management. High-quality, independent prospective validation of AI in clinical trials is lacking and is necessary for the clinical community's broad adoption of this technology.

Acute myeloid leukemia classification using ReLViT and detection with YOLO enhanced by adversarial networks on bone marrow images.

Hameed M, Raja MAZ, Zameer A, Dar HS, Alluhaidan AS, Aziz R

Sep 25 2025
Acute myeloid leukemia (AML) is a highly aggressive cancer of the bone marrow and blood and the most lethal type of leukemia. Detecting AML in medical imaging is challenging because of the complex structural and textural variations inherent in bone marrow images, and these challenges are intensified by the overlapping intensity ranges of leukemic and non-leukemic regions, which reduce the effectiveness of traditional predictive models. This study presents a novel artificial intelligence framework that combines residual blocks, vision transformers, convolutions, and advanced object detection techniques to address the complexities of bone marrow images and improve the accuracy of AML detection. The framework integrates residual learning-based vision transformer (ReLViT) blocks within a bottleneck architecture, harnessing the combined strengths of residual learning and transformer mechanisms to improve feature representation and computational efficiency. Tailored data pre-processing strategies handle the textural and structural complexities associated with low-quality images and tumor shapes, and a strategic weight-sharing technique minimizes computational overhead. Additionally, a generative adversarial network (GAN) enhances image quality across all AML imaging modalities and, combined with a You Only Look Once (YOLO) object detector, accurately localizes tumor formations in bone marrow images. Extensive comparative evaluations demonstrate the superiority of the proposed framework over existing deep convolutional neural networks (CNNs) and object detection methods: the model achieves an F1-score of 99.15%, precision of 99.02%, and recall of 99.16%, marking a significant advancement in medical imaging.
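The authors' ReLViT block is specific to their paper; as a generic illustration of the kind of hybrid it describes, the sketch below fuses a convolutional residual branch with a transformer encoder over flattened spatial tokens. All dimensions and the fusion-by-addition choice are assumptions, not the published architecture.

```python
# Generic residual + transformer hybrid block, illustrating the kind of fusion
# the abstract describes. This is NOT the authors' ReLViT block; sizes assumed.
import torch
import torch.nn as nn

class ResidualTransformerBlock(nn.Module):
    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.conv = nn.Sequential(                       # convolutional residual branch
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.attn = nn.TransformerEncoderLayer(          # global-context branch
            d_model=channels, nhead=heads, batch_first=True
        )

    def forward(self, x):                                # x: (B, C, H, W)
        b, c, h, w = x.shape
        conv_out = self.conv(x)
        tokens = x.flatten(2).transpose(1, 2)            # (B, H*W, C)
        attn_out = self.attn(tokens).transpose(1, 2).reshape(b, c, h, w)
        return torch.relu(x + conv_out + attn_out)       # residual fusion by addition

block = ResidualTransformerBlock()
out = block(torch.randn(2, 64, 16, 16))                  # toy bone-marrow feature map
print(out.shape)
```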

Artificial Intelligence for Ischemic Stroke Detection in Non-contrast CT: A Systematic Review and Meta-analysis.

Shen W, Peng J, Lu J

Sep 25 2025
We aimed to conduct a systematic review and meta-analysis to objectively assess the diagnostic accuracy of artificial intelligence (AI) models for detecting ischemic stroke (IS) on non-contrast CT (NCCT) and to compare diagnostic performance between AI and clinicians. Through February 2025, systematic searches were conducted in PubMed, Web of Science, Cochrane, IEEE Xplore, and Embase for studies using AI on NCCT images from human subjects for IS detection or classification. Risk of bias was evaluated using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). For meta-analysis, pooled sensitivities, specificities, and hierarchical summary receiver operating characteristic (HSROC) curves were used. A total of 38 studies were included, with 74 trials extracted from 32 of them. For AI, the pooled sensitivity and specificity were 91.2% (95% CI: 87.6%-93.8%) and 96.0% (95% CI: 93.6%-97.6%) for internal validation, and 59.8% (95% CI: 39.9%-76.9%) and 97.3% (95% CI: 93.2%-98.9%) for external validation. For clinicians, the pooled sensitivity and specificity were 44.1% (95% CI: 33.8%-55.0%) and 85.5% (95% CI: 68.4%-94.1%) for internal validation, and 46.1% (95% CI: 31.5%-61.3%) and 83.6% (95% CI: 62.8%-93.9%) for external validation. With AI assistance, clinicians' pooled sensitivity and specificity increased to 83.7% (95% CI: 53.0%-95.9%) and 86.7% (95% CI: 77.1%-92.6%). Subgroup analysis indicated that higher model sensitivity was associated with data augmentation (93.9%, 95% CI: 90.2%-96.2%) and transfer learning (94.7%, 95% CI: 92.0%-96.6%). Twenty-two of the 38 studies (58%) were judged to have a high risk of bias. Sensitivity and subgroup analyses identified multiple sources of heterogeneity, including risk of bias and AI model type. Our study shows that AI has acceptable performance for detecting IS on NCCT in internal validation, although significant heterogeneity was observed in the meta-analysis. However, the generalizability and practical applicability of AI in real-world clinical settings remain limited by insufficient external validation.
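The pooled estimates come from meta-analysis; the review itself used a bivariate/HSROC model, but the simpler inverse-variance pooling of logit-transformed sensitivities sketched below conveys the basic idea. The per-study counts are invented placeholders, not data from the paper.

```python
# Simplified illustration of pooling per-study sensitivities on the logit scale
# with inverse-variance weights; not the bivariate/HSROC model used in the review.
import numpy as np

tp = np.array([45, 120, 60])      # true positives per study (hypothetical)
fn = np.array([5, 10, 12])        # false negatives per study (hypothetical)

sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / fn             # approximate variance of a logit proportion
w = 1 / var

pooled_logit = np.sum(w * logit) / np.sum(w)
se = np.sqrt(1 / np.sum(w))
lo, hi = pooled_logit - 1.96 * se, pooled_logit + 1.96 * se

expit = lambda z: 1 / (1 + np.exp(-z))   # back-transform to the probability scale
print(f"pooled sensitivity = {expit(pooled_logit):.1%} "
      f"(95% CI {expit(lo):.1%}-{expit(hi):.1%})")
```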

Deep learning in abdominopelvic digital subtraction angiography: a systematic review of interventional radiology applications.

Raskin D, Klang E, Barash Y, Korfiatis P, Partovi S, McCarthy CJ, Nadkarni G, Collins JD, Sorin V

Sep 25 2025
Deep learning (DL) is increasingly explored for interventional radiology (IR) applications. This systematic review evaluates current DL-based applications for digital subtraction angiography (DSA) in abdominopelvic interventions, summarizes their performance, and identifies gaps in the literature. Following PRISMA guidelines, we searched MEDLINE, Scopus, and Google Scholar for studies published up to February 1, 2025. English-language original articles assessing DL methods for automatic DSA image analysis were included, and study quality was evaluated with QUADAS-2. Nine studies were included. Two focused on hemorrhage detection, with area under the curve (AUC) values ranging from 0.80 to 0.85. Four examined image enhancement, one performed vessel segmentation, and one classified anatomic location. Only a single study evaluated treatment response prediction, with an accuracy of 0.75. Most models were tested on small, single-center datasets, limiting generalizability. Preliminary studies show that DL can improve hemorrhage detection, image quality, and vessel segmentation in DSA, but larger, prospectively validated datasets are warranted. Currently, no FDA-approved DL tools exist for abdominal or pelvic DSA. Future efforts should explore advanced generative AI and multimodal approaches.

Incidental Cardiovascular Findings in Lung Cancer Screening and Noncontrast Chest Computed Tomography.

Cham MD, Shemesh J

Sep 24 2025
While the primary goal of lung cancer screening CT is to detect early-stage lung cancer in high-risk populations, it often reveals asymptomatic cardiovascular abnormalities that can be clinically significant. These findings include coronary artery calcifications (CACs), myocardial pathologies, cardiac chamber enlargement, valvular lesions, and vascular disease. CAC, a marker of subclinical atherosclerosis, is particularly emphasized due to its strong predictive value for cardiovascular events and mortality. Guidelines recommend qualitative or quantitative CAC scoring on all noncontrast chest CTs. Other actionable findings include aortic aneurysms, pericardial disease, and myocardial pathology, some of which may indicate past or impending cardiac events. This article explores the wide range of incidental cardiovascular findings detectable on low-dose CT (LDCT) scans for lung cancer screening, as well as on noncontrast chest CT scans. Distinguishing which findings warrant further evaluation is essential to avoid overdiagnosis, unnecessary anxiety, and resource misuse. The article advocates for a structured approach to follow-up based on the clinical significance of each finding and the patient's overall risk profile. It also notes the rising role of artificial intelligence in automatically detecting and quantifying these abnormalities, facilitating early behavioral modification or medical and surgical intervention. Ultimately, this piece highlights the opportunity to reframe LDCT as a comprehensive cardiothoracic screening tool.
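Quantitative CAC scoring on noncontrast CT is most often Agatston-style: each calcified lesion's area is weighted by its peak attenuation. The per-slice sketch below uses the standard Agatston weight bands, while the image array, pixel spacing, and omission of the minimum-lesion-area rule are simplifications for illustration.

```python
# Simplified per-slice Agatston-style CAC scoring: label connected regions of
# >=130 HU and weight each lesion's area by its peak attenuation. The HU image
# and pixel spacing are illustrative; real scoring also enforces a minimum
# lesion area and sums the per-slice scores across the scan.
import numpy as np
from scipy import ndimage

def agatston_slice_score(hu_slice, pixel_area_mm2):
    mask = hu_slice >= 130                    # calcium threshold in HU
    labels, n = ndimage.label(mask)
    score = 0.0
    for i in range(1, n + 1):
        lesion = labels == i
        peak = hu_slice[lesion].max()
        weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
        score += lesion.sum() * pixel_area_mm2 * weight
    return score

hu = np.zeros((64, 64))
hu[10:13, 10:13] = 450                        # one dense hypothetical lesion
print(agatston_slice_score(hu, pixel_area_mm2=0.25))
```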

Artificial intelligence in cerebral cavernous malformations: a scoping review.

Santos AN, Venkatesh V, Chidambaram S, Piedade Santos G, Dawoud B, Rauschenbach L, Choucha A, Bingöl S, Wipplinger T, Wipplinger C, Siegel AM, Dammann P, Abou-Hamden A

Sep 24 2025
Artificial Intelligence (AI) and Machine Learning (ML) are increasingly being applied in medical research, including studies on cerebral cavernous malformations (CCM). This scoping review aims to analyze the scope and impact of AI in CCM, focusing on diagnostic tools, risk assessment, biomarker identification, outcome prediction, and treatment planning. We conducted a comprehensive literature search across different databases, reviewing articles that explore AI applications in CCM. Articles were selected based on predefined eligibility criteria and categorized according to their primary focus: drug discovery, diagnostic imaging, genetic analysis, biomarker identification, outcome prediction, and treatment planning. Sixteen studies met the inclusion criteria, showcasing diverse AI applications in CCM. Nearly half (47%) were cohort or prospective studies, primarily focused on biomarker discovery and risk prediction. Technical notes and diagnostic studies accounted for 27%, concentrating on computer-aided diagnosis (CAD) systems and drug screening. Other studies included a conceptual review on AI for surgical planning and a systematic review confirming ML's superiority in predicting clinical outcomes within neurosurgery. AI applications in CCM show significant promise, particularly in enhancing diagnostic accuracy, risk assessment, and surgical planning. These advancements suggest that AI could transform CCM management, offering pathways to improved patient outcomes and personalized care strategies.
