Artificial Intelligence in Cardiovascular Health: Insights into Post-COVID Public Health Challenges.

Naushad Z, Malik J, Mishra AK, Singh S, Shrivastav D, Sharma CK, Verma VV, Pal RK, Roy B, Sharma VK

PubMed | Sep 16, 2025
Cardiovascular diseases (CVDs) remain the leading cause of morbidity and mortality worldwide, and risk factors such as diabetes, hypertension, obesity, and smoking continue to worsen the burden. The COVID-19 pandemic has underscored the close connection between viral infections and cardiovascular health. Current literature highlights that SARS-CoV-2 contributes to myocardial injury, endothelial dysfunction, thrombosis, and systemic inflammation, worsening CVD outcomes. Long COVID has also been associated with persistent cardiovascular complications, including myocarditis, arrhythmias, thromboembolic events, and accelerated atherosclerosis. Addressing these challenges requires continued research and public health strategies to mitigate long-term risks. Artificial intelligence (AI) is transforming cardiovascular medicine and public health through machine learning (ML) and deep learning (DL) applications. AI enhances risk prediction, facilitates biomarker discovery, and improves imaging techniques such as echocardiography, CT, and MRI for the timely detection of coronary artery disease and myocardial injury. AI-powered remote monitoring and wearable devices enable real-time cardiovascular assessment and personalized treatment. In public health, AI optimizes disease surveillance, epidemiological modeling, and healthcare resource allocation. AI-driven clinical decision support systems improve diagnostic accuracy and health equity by enabling targeted interventions. The integration of AI into cardiovascular medicine and public health offers data-driven, efficient, and patient-centered solutions to mitigate post-COVID cardiovascular complications.

Challenges and Limitations of Multimodal Large Language Models in Interpreting Pediatric Panoramic Radiographs.

Mine Y, Iwamoto Y, Okazaki S, Nishimura T, Tabata E, Takeda S, Peng TY, Nomura R, Kakimoto N, Murayama T

PubMed | Sep 16, 2025
Multimodal large language models (LLMs) have potential for medical image analysis, yet their reliability for pediatric panoramic radiographs remains uncertain. This study evaluated two multimodal LLMs (OpenAI o1, Claude 3.5 Sonnet) for detecting and counting teeth (including tooth germs) on pediatric panoramic radiographs. Eighty-seven pediatric panoramic radiographs from an open-source data set were analyzed. Two pediatric dentists annotated the presence or absence of each potential tooth position. Each image was processed five times by the LLMs using identical prompts, and the results were compared with the expert annotations. Standard performance metrics and Fleiss' kappa were calculated. Detailed examination revealed that subtle developmental stages and minor tooth loss were consistently misidentified. Claude 3.5 Sonnet had higher sensitivity but significantly lower specificity (29.8% ± 21.5%), resulting in many false positives. OpenAI o1 demonstrated superior specificity compared to Claude 3.5 Sonnet, but still failed to correctly detect subtle defects in certain mixed dentition cases. Both models showed large variability in repeated runs. Both LLMs failed to achieve clinically acceptable performance and cannot reliably identify nuanced discrepancies critical for pediatric dentistry. Further refinements and consistency improvements are essential before routine clinical use.
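
The consistency analysis above hinges on Fleiss' kappa computed over repeated model runs. A minimal sketch of that calculation is shown below, assuming each of the five runs yields one binary presence/absence call per tooth position; the data values are hypothetical, not the study's annotations.

```python
# Minimal sketch (not the authors' code): estimating run-to-run agreement of an
# LLM's tooth presence/absence calls with Fleiss' kappa, assuming each of the
# 5 repeated runs yields one binary label per tooth position.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical data: rows = tooth positions (subjects), columns = repeated runs,
# values = 0 (absent) or 1 (present) as parsed from the model's output.
runs = np.array([
    [1, 1, 1, 0, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
])

# aggregate_raters converts subject-by-rater labels into subject-by-category counts.
table, _categories = aggregate_raters(runs)
kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss' kappa across repeated runs: {kappa:.3f}")
```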

Intelligent Healthcare Imaging Platform: A VLM-Based Framework for Automated Medical Image Analysis and Clinical Report Generation

Samer Al-Hamadani

arXiv preprint | Sep 16, 2025
The rapid advancement of artificial intelligence (AI) in healthcare imaging has revolutionized diagnostic medicine and clinical decision-making processes. This work presents an intelligent multimodal framework for medical image analysis that leverages Vision-Language Models (VLMs) in healthcare diagnostics. The framework integrates Google Gemini 2.5 Flash for automated tumor detection and clinical report generation across multiple imaging modalities, including CT, MRI, X-ray, and ultrasound. The system combines visual feature extraction with natural language processing to enable contextual image interpretation, incorporating coordinate verification mechanisms and probabilistic Gaussian modeling of anomaly distributions. Multi-layered visualization techniques generate detailed medical illustrations, overlay comparisons, and statistical representations to enhance clinical confidence, with location measurement achieving an average deviation of 80 pixels. Result processing utilizes precise prompt engineering and textual analysis to extract structured clinical information while maintaining interpretability. Experimental evaluations demonstrated high performance in anomaly detection across multiple modalities. The system features a user-friendly Gradio interface for clinical workflow integration and demonstrates zero-shot learning capabilities that reduce dependence on large datasets. This framework represents a significant advancement in automated diagnostic support and radiological workflow efficiency, though clinical validation and multi-center evaluation are necessary prior to widespread adoption.
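
As a rough illustration of the coordinate-verification and Gaussian-modeling steps mentioned above, the sketch below fits a 2D Gaussian to predicted anomaly centers and reports the average pixel deviation from reference points; the coordinates are invented placeholders, not outputs of the described system.

```python
# Minimal sketch (assumptions, not the paper's implementation): fit a 2D Gaussian
# to predicted anomaly centers and report average pixel deviation from reference
# annotations, as a stand-in for the coordinate-verification step described above.
import numpy as np
from scipy.stats import multivariate_normal

pred = np.array([[212.0, 148.0], [220.0, 160.0], [205.0, 151.0]])  # predicted (x, y)
ref = np.array([[200.0, 150.0], [210.0, 158.0], [198.0, 149.0]])   # reference (x, y)

# Average Euclidean deviation in pixels between predictions and references.
avg_dev = np.linalg.norm(pred - ref, axis=1).mean()

# Fit a Gaussian to the predicted centers to model the spatial anomaly distribution.
mu = pred.mean(axis=0)
cov = np.cov(pred, rowvar=False)
density = multivariate_normal(mean=mu, cov=cov)

print(f"average deviation: {avg_dev:.1f} px")
print(f"density at the mean location: {density.pdf(mu):.6f}")
```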

Artificial intelligence aided ultrasound imaging of foetal congenital heart disease: A scoping review.

Norris L, Lockwood P

PubMed | Sep 16, 2025
Congenital heart diseases (CHD) are a significant cause of neonatal mortality and morbidity. Detecting these abnormalities during pregnancy increases survival rates, enhances prognosis, and improves pregnancy management and quality of life for the affected families. Foetal echocardiography can be considered an accurate method for detecting CHDs. However, the detection of CHDs can be limited by factors such as the sonographer's skill, expertise, and patient-specific variables. Using artificial intelligence (AI) has the potential to address these challenges, increasing antenatal CHD detection during prenatal care. A scoping review was conducted using Google Scholar, PubMed, and ScienceDirect databases, employing keywords, Boolean operators, and inclusion and exclusion criteria to identify peer-reviewed studies. Thematic mapping and synthesis of the identified literature were conducted to review key concepts, research methods, and findings. A total of n = 233 articles were identified; after applying the exclusion criteria, n = 7 studies met the inclusion criteria. Themes identified in the literature included the potential of AI to assist clinicians and trainees, alongside emerging ethical limitations in ultrasound imaging. AI-based tools in ultrasound imaging offer great potential in assisting sonographers and doctors with decision-making in CHD diagnosis. However, due to the paucity of data and small sample sizes, further research and technological advancements are needed to improve reliability and integrate AI into routine clinical practice. This scoping review identified the reported accuracy and limitations of AI-based tools within foetal cardiac ultrasound imaging. AI has the potential to aid in reducing missed diagnoses, enhance training, and improve pregnancy management. There is a need to understand and address the ethical and legal considerations involved with this new paradigm in imaging.

Adapting and Evaluating Multimodal Large Language Models for Adolescent Idiopathic Scoliosis Self-Management: A Divide and Conquer Framework

Zhaolong Wu, Pu Luo, Jason Pui Yin Cheung, Teng Zhang

arXiv preprint | Sep 15, 2025
This study presents the first comprehensive evaluation of Multimodal Large Language Models (MLLMs) for Adolescent Idiopathic Scoliosis (AIS) self-management. We constructed a database of approximately 3,000 anteroposterior X-rays with diagnostic texts and evaluated five MLLMs through a `Divide and Conquer' framework consisting of a visual question-answering task, a domain knowledge assessment task, and a patient education counseling assessment task. Our investigation revealed limitations in the MLLMs' ability to interpret complex spinal radiographs and comprehend AIS care knowledge. To address these limitations, we enhanced the MLLMs with spinal keypoint prompting and compiled an AIS knowledge base for retrieval-augmented generation (RAG), respectively. Results showed varying effectiveness of visual prompting across different architectures, while RAG substantially improved model performance on the knowledge assessment task. Our findings indicate that current MLLMs are far from capable of serving as personalized assistants in AIS care. The greatest challenge lies in their ability to accurately detect spinal deformity locations (best accuracy: 0.55) and directions (best accuracy: 0.13).
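
The retrieval-augmented generation step can be pictured with the minimal sketch below, which ranks knowledge-base passages by cosine similarity to a question and prepends the top hits to the prompt; the embedding function and passages are hypothetical stand-ins, not the authors' AIS knowledge base.

```python
# Minimal RAG retrieval sketch (hypothetical embedder and knowledge snippets, not
# the authors' pipeline): rank AIS knowledge-base passages by cosine similarity to
# the question, then prepend the top passages to the MLLM prompt.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hashed bag-of-words, standing in for a real encoder."""
    vec = np.zeros(256)
    for token in text.lower().split():
        vec[hash(token) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

knowledge_base = [
    "Bracing is typically considered for curves between 25 and 40 degrees in skeletally immature patients.",
    "The Cobb angle is measured between the most tilted vertebrae above and below the apex of the curve.",
    "Scoliosis-specific exercises may complement bracing in mild adolescent idiopathic scoliosis.",
]
kb_vectors = np.stack([embed(p) for p in knowledge_base])

question = "When is bracing recommended for adolescent idiopathic scoliosis?"
scores = kb_vectors @ embed(question)
top_passages = [knowledge_base[i] for i in np.argsort(scores)[::-1][:2]]

prompt = "Context:\n" + "\n".join(top_passages) + f"\n\nQuestion: {question}"
print(prompt)  # this augmented prompt would then be sent to the MLLM
```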

Enhancing Radiographic Disease Detection with MetaCheX, a Context-Aware Multimodal Model

Nathan He, Cody Chen

arXiv preprint | Sep 15, 2025
Existing deep learning models for chest radiology often neglect patient metadata, limiting diagnostic accuracy and fairness. To bridge this gap, we introduce MetaCheX, a novel multimodal framework that integrates chest X-ray images with structured patient metadata to replicate clinical decision-making. Our approach combines a convolutional neural network (CNN) backbone with metadata processed by a multilayer perceptron through a shared classifier. Evaluated on the CheXpert Plus dataset, MetaCheX consistently outperformed radiograph-only baseline models across multiple CNN architectures. Integrating metadata significantly improved overall diagnostic accuracy, as measured by an increase in AUROC. The results of this study demonstrate that metadata reduces algorithmic bias and enhances model generalizability across diverse patient populations. MetaCheX advances clinical artificial intelligence toward robust, context-aware radiographic disease detection.
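
The fusion described above, a CNN image backbone and a metadata MLP feeding a shared classifier, can be sketched as follows; the backbone choice, layer sizes, and metadata dimensionality are assumptions for illustration, not the published MetaCheX configuration.

```python
# Sketch of the kind of fusion the abstract describes (layer sizes and metadata
# fields are assumptions, not the published MetaCheX configuration): a CNN backbone
# for the radiograph, an MLP for structured metadata, and a shared classifier head.
import torch
import torch.nn as nn
from torchvision import models

class MetadataFusionNet(nn.Module):
    def __init__(self, num_metadata_features: int = 8, num_classes: int = 14):
        super().__init__()
        backbone = models.resnet18(weights=None)       # CNN image encoder
        feat_dim = backbone.fc.in_features             # 512 for resnet18
        backbone.fc = nn.Identity()                    # keep pooled features only
        self.backbone = backbone
        self.metadata_mlp = nn.Sequential(             # encodes age, sex, view, etc.
            nn.Linear(num_metadata_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.classifier = nn.Linear(feat_dim + 64, num_classes)  # shared head

    def forward(self, image: torch.Tensor, metadata: torch.Tensor) -> torch.Tensor:
        img_feat = self.backbone(image)                # (B, 512)
        meta_feat = self.metadata_mlp(metadata)        # (B, 64)
        return self.classifier(torch.cat([img_feat, meta_feat], dim=1))

# Toy forward pass: a batch of 2 chest X-rays with 8 metadata features each.
logits = MetadataFusionNet()(torch.randn(2, 3, 224, 224), torch.randn(2, 8))
print(logits.shape)  # torch.Size([2, 14])
```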

Accuracy of AI-Based Algorithms in Pulmonary Embolism Detection on Computed Tomographic Pulmonary Angiography: An Updated Systematic Review and Meta-analysis.

Nabipoorashrafi SA, Seyedi A, Bahri RA, Yadegar A, Shomal-Zadeh M, Mohammadi F, Afshari SA, Firoozeh N, Noroozzadeh N, Khosravi F, Asadian S, Chalian H

PubMed | Sep 15, 2025
Several artificial intelligence (AI) algorithms have been designed for the detection of pulmonary embolism (PE) using computed tomographic pulmonary angiography (CTPA). Due to the rapid development of this field and the lack of an updated meta-analysis, we aimed to systematically review the available literature on the accuracy of AI-based algorithms to diagnose PE via CTPA. We searched EMBASE, PubMed, Web of Science, and Cochrane for studies assessing the accuracy of AI-based algorithms. Studies that reported sensitivity and specificity were included. The R software was used for univariate meta-analysis and for drawing summary receiver operating characteristic (sROC) curves based on bivariate analysis. To explore the source of heterogeneity, subgroup analysis was performed (PROSPERO: CRD42024543107). A total of 1722 articles were found, and after removing duplicate records, 1185 were screened. Twenty studies with 26 AI models/populations met the inclusion criteria, encompassing 11,950 participants. Univariate meta-analysis showed a pooled sensitivity of 91.5% (95% CI 85.5-95.2) and specificity of 84.3% (95% CI 74.9-90.6) for PE detection. Additionally, in the bivariate sROC analysis, the pooled area under the curve (AUC) was 0.923, indicating very high accuracy of AI algorithms in the detection of PE. Subgroup meta-analysis also identified geographical area as a potential source of heterogeneity, where the I² for sensitivity and specificity in the Asian article subgroup was 60% and 6.9%, respectively. These findings highlight the promising role of AI in accurately diagnosing PE while also emphasizing the need for further research to address regional variations and improve generalizability.
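
For readers unfamiliar with how pooled estimates arise, the sketch below shows a simplified inverse-variance pooling of logit-transformed sensitivities; it uses made-up study counts and a fixed-effect approximation, not the bivariate random-effects model the review fit in R.

```python
# Simplified illustration (not the review's R analysis): pool per-study sensitivities
# on the logit scale with inverse-variance weights. Counts below are made up; the
# actual review fit a bivariate model and sROC curve in R.
import numpy as np

# (true positives, false negatives) for a few hypothetical studies
tp = np.array([90, 45, 120])
fn = np.array([10, 5, 15])

sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / fn                 # approximate variance of the logit
w = 1 / var                           # inverse-variance weights

pooled_logit = np.sum(w * logit) / np.sum(w)
pooled_sens = 1 / (1 + np.exp(-pooled_logit))
print(f"pooled sensitivity: {pooled_sens:.3f}")
```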

Challenging the Status Quo Regarding the Benefit of Chest Radiographic Screening.

Yankelevitz DF, Yip R, Henschke CI

PubMed | Sep 15, 2025
Chest radiographic (CXR) screening is currently not recommended in the United States by any major guideline organization. Multiple randomized controlled trials conducted in the United States and in Europe, the largest being the Prostate, Lung, Colorectal and Ovarian (PLCO) trial, all failed to show a benefit and are used as evidence to support the current recommendation. Nevertheless, there is renewed interest in CXR screening, especially in low- and middle-resourced countries around the world. Reasons for this are multifactorial, including the continued concern that those trials may still have missed a benefit, but perhaps more importantly, it is now conclusively established that finding smaller cancers is better than finding larger ones. This was the key finding in the large randomized controlled trials of CT screening. So, while CT finds smaller cancers than CXR, both clearly perform better than waiting for cancers to grow larger and be detected by symptom prompting. Without the understanding that treating cancers found by CXR in the asymptomatic state is beneficial, there would also be no basis for treating them when found incidentally. In addition, advances in artificial intelligence are allowing nodules to be found earlier and more reliably with CXR than in those prior studies, and in many countries around the world, TB screening is already taking place on a large scale. This presents a major opportunity for integration with lung screening programs.

Annotation-efficient deep learning detection and measurement of mediastinal lymph nodes in CT.

Olesinski A, Lederman R, Azraq Y, Sosna J, Joskowicz L

PubMed | Sep 13, 2025
Manual detection and measurement of structures in volumetric scans is routine in clinical practice but is time-consuming and subject to observer variability. Automatic deep learning-based solutions are effective but require a large dataset of manual annotations by experts. We present a novel annotation-efficient semi-supervised deep learning method for automatic detection, segmentation, and measurement of the short axis length (SAL) of mediastinal lymph nodes (LNs) in contrast-enhanced CT (ceCT) scans. Our semi-supervised method combines the precision of expert annotations with the quantity advantages of pseudolabeled data. It uses an ensemble of 3D nnU-Net models trained on a few expert-annotated scans to generate pseudolabels on a large dataset of unannotated scans. The pseudolabels are then filtered to remove false positive LNs by excluding LNs outside the mediastinum and LNs overlapping with other anatomical structures. Finally, a single 3D nnU-Net model is trained using the filtered pseudolabels. Our method optimizes the ratio of annotated to non-annotated dataset sizes to achieve the desired performance, thus reducing manual annotation effort. Experimental studies on three chest ceCT datasets with a total of 268 annotated scans (1817 LNs), of which 134 scans were used for testing and the remainder for ensemble training in batches of 17, 34, 67, and 134 scans, as well as 710 unannotated scans, show that the semi-supervised models' recall improvements were 11-24% (0.72-0.87) while maintaining comparable precision levels. The best model achieved mean SAL differences of 1.65 ± 0.92 mm for normal LNs and 4.25 ± 4.98 mm for enlarged LNs, both within observer variability. Our semi-supervised method requires one-fourth to one-eighth of the annotations to achieve performance comparable to supervised models trained on the same dataset for the automatic measurement of mediastinal LNs in chest ceCT. Using pseudolabels with anatomical filtering may be an effective way to overcome the challenges of developing AI-based solutions in radiology.
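
The anatomical filtering of pseudolabels described above might look roughly like the sketch below, which keeps only connected lymph-node components that lie mostly inside a mediastinum mask and barely overlap other segmented organs; the masks, thresholds, and toy volumes are assumptions, not the published pipeline.

```python
# Sketch of the anatomical filtering idea (thresholds and masks are assumptions,
# not the published pipeline): drop pseudolabeled lymph-node components that fall
# outside the mediastinum or overlap other segmented structures.
import numpy as np
from scipy import ndimage

def filter_pseudolabels(ln_pseudo: np.ndarray,
                        mediastinum: np.ndarray,
                        other_organs: np.ndarray,
                        min_inside_frac: float = 0.5,
                        max_overlap_frac: float = 0.2) -> np.ndarray:
    """Keep only LN components mostly inside the mediastinum and mostly clear of other organs."""
    labeled, n = ndimage.label(ln_pseudo > 0)          # 3D connected components
    keep = np.zeros_like(ln_pseudo, dtype=bool)
    for comp in range(1, n + 1):
        comp_mask = labeled == comp
        size = comp_mask.sum()
        inside = (comp_mask & (mediastinum > 0)).sum() / size
        overlap = (comp_mask & (other_organs > 0)).sum() / size
        if inside >= min_inside_frac and overlap <= max_overlap_frac:
            keep |= comp_mask
    return keep

# Toy volumes: two 1-voxel "nodes", one inside and one outside a cubic mediastinum mask.
vol = np.zeros((32, 32, 32), dtype=np.uint8)
vol[10, 10, 10] = 1       # inside the mediastinum
vol[2, 2, 2] = 1          # outside -> should be removed
med = np.zeros_like(vol); med[5:25, 5:25, 5:25] = 1
organs = np.zeros_like(vol)
print(filter_pseudolabels(vol, med, organs).sum())  # 1 voxel kept
```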

Artificial Intelligence and Carpal Tunnel Syndrome: A Systematic Review and Contemporary Update on Imaging Techniques.

Misch M, Medani K, Rhisheekesan A, Manjila S

PubMed | Sep 12, 2025
Advances in artificial intelligence (AI) have enhanced diagnostic imaging, including ultrasound (US), magnetic resonance imaging, and infrared thermography. This systematic review summarizes current efforts to integrate AI into the diagnosis of carpal tunnel syndrome (CTS) and its potential to improve clinical decision-making. A comprehensive literature search was conducted in PubMed, Embase, and the Cochrane database in accordance with PRISMA guidelines. Articles were included if they evaluated the application of AI in the diagnosis or detection of CTS. Search terms included "carpal tunnel syndrome" and "artificial intelligence", along with relevant MeSH terms. A total of 22 studies met the inclusion criteria and were analyzed qualitatively. AI models, especially deep learning algorithms, demonstrated strong diagnostic performance, particularly with US imaging. Frequently used inputs included echointensity, pixelation patterns, and the cross-sectional area of the median nerve. AI-assisted image analysis enabled superior detection and segmentation of the median nerve, often outperforming radiologists in sensitivity and specificity. Additionally, AI complemented electromyography by offering insight into the physiological integrity of the nerve. AI holds significant promise as an adjunctive tool in the diagnosis and management of CTS. Its ability to extract and quantify radiomic features may support accurate, reproducible diagnoses and allow for longitudinal digital documentation. When integrated with existing modalities, AI may enhance clinical assessments, inform surgical decision-making, and extend diagnostic capabilities into telehealth and point-of-care settings. Continued development and prospective validation of these technologies are essential for streamlining widespread integration into clinical practice.
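
One of the imaging features the reviewed models rely on, the cross-sectional area of the median nerve, reduces to a simple measurement once a segmentation mask and the ultrasound pixel spacing are known; the sketch below uses a toy elliptical mask and an assumed spacing, not data from any of the included studies.

```python
# Minimal sketch (assumed spacing and a toy mask): cross-sectional area of the
# median nerve from a binary segmentation on a transverse ultrasound frame. The
# resulting value in mm^2 could then be compared against published CTS cutoffs.
import numpy as np

mask = np.zeros((400, 400), dtype=bool)
rr, cc = np.ogrid[:400, :400]
mask[(rr - 200) ** 2 / 40 ** 2 + (cc - 200) ** 2 / 25 ** 2 <= 1] = True  # toy ellipse

pixel_spacing_mm = (0.05, 0.05)          # assumed row/column spacing in mm
pixel_area_mm2 = pixel_spacing_mm[0] * pixel_spacing_mm[1]
csa_mm2 = mask.sum() * pixel_area_mm2
print(f"median nerve CSA: {csa_mm2:.1f} mm^2")
```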