The present and future of lung cancer screening: latest evidence.

Gutiérrez Alliende J, Kazerooni EA, Crosbie PAJ, Xie X, Sharma A, Reis J

PubMed · May 9, 2025
Lung cancer is the leading cause of cancer-related mortality worldwide, and early detection reduces lung cancer-related mortality and improves survival. This report summarizes presentations and panel discussions from a webinar, "The Present and Future of Lung Cancer Screening: Latest Evidence and AI Perspectives." The webinar presented the perspectives of experts from the United States, United Kingdom, and China on evidence-based recommendations and management in lung cancer screening (LCS), barriers to screening, and the role of artificial intelligence (AI). With several countries now incorporating AI into their screening programs, AI offers potential solutions to some of the challenges associated with LCS.

Adherence to SVS Abdominal Aortic Aneurysm Guidelines Among Patients Detected by AI-Based Algorithm.

Wilson EM, Yao K, Kostiuk V, Bader J, Loh S, Mojibian H, Fischer U, Ochoa Chaar CI, Aboian E

PubMed · May 9, 2025
This study evaluates adherence to the latest Society for Vascular Surgery (SVS) guidelines on imaging surveillance, physician evaluation, and surgical intervention for abdominal aortic aneurysm (AAA). AI-based natural language processing, applied retrospectively, identified AAA patients from imaging scans at a tertiary care center during January-March of 2019 and of 2021 (the pandemic period was excluded). Retrospective chart review assessed demographics, comorbidities, imaging, and follow-up adherence. Statistical significance was set at p<0.05. Among 479 identified patients, 279 remained in the final cohort after exclusion of deceased patients. Imaging surveillance adherence was 67.7% (189/279), with males comprising 72.5% (137/189) (Figure 1). The mean age of adherent patients was 73.9 years (SD ±9.5) vs. 75.2 years (SD ±10.8) for non-adherent patients (Table 1). Adherent females were significantly younger than non-adherent females (76.7 vs. 81.1 years; p=0.003), with no significant age difference in males. Adherent patients were more likely to be evaluated by a vascular provider within six months (p<0.001), but aneurysm size did not affect imaging adherence: 3.0-4.0 cm (p=0.24), 4.0-5.0 cm (p=0.88), >5.0 cm (p=0.29). Based on SVS surgical criteria, 18 males (AAA >5.5 cm) and 17 females (AAA >5.0 cm) qualified for intervention, and repair rates increased in 2021. Thirty-four males (20 in 2019 vs. 14 in 2021) and 7 females (2021 only) received surgical intervention below the threshold for repair. Despite consistent SVS guidelines, adherence remains moderate. AI-based detection and follow-up algorithms may enhance adherence and long-term AAA patient outcomes; however, further research is needed to assess the specific impacts of AI.
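The age comparison above can be reproduced from the abstract's summary statistics alone. A minimal SciPy sketch follows; Welch's t-test is an assumption, since the paper does not state which test it used.

```python
# Recompute the adherent vs. non-adherent age comparison from the summary
# statistics in the abstract: 189 adherent patients (mean age 73.9, SD 9.5)
# vs. 90 non-adherent patients (mean age 75.2, SD 10.8; 279 - 189 = 90).
from scipy import stats

adherent_n, adherent_mean, adherent_sd = 189, 73.9, 9.5
nonadherent_n = 279 - 189  # final cohort minus adherent patients
nonadherent_mean, nonadherent_sd = 75.2, 10.8

t, p = stats.ttest_ind_from_stats(
    adherent_mean, adherent_sd, adherent_n,
    nonadherent_mean, nonadherent_sd, nonadherent_n,
    equal_var=False,  # Welch's t-test: an assumption; the paper's test is unstated
)
print(f"adherence rate: {adherent_n / 279:.1%}; age difference p = {p:.2f}")
```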

Application of a pulmonary nodule detection program using AI technology to ultra-low-dose CT: differences in detection ability among various image reconstruction methods.

Tsuchiya N, Kobayashi S, Nakachi R, Tomori Y, Yogi A, Iida G, Ito J, Nishie A

PubMed · May 9, 2025
This study aimed to investigate the performance of an artificial intelligence (AI)-based lung nodule detection program in ultra-low-dose CT (ULDCT) imaging, with a focus on the influence of various image reconstruction methods on detection accuracy. A chest phantom embedded with artificial lung nodules (solid and ground-glass nodules [GGNs]; diameters: 12 mm, 8 mm, 5 mm, and 3 mm) was scanned using six combinations of tube currents (160 mA, 80 mA, and 10 mA) and voltages (120 kV and 80 kV) on a Canon Aquilion One CT scanner. Images were reconstructed using filtered back projection (FBP), hybrid iterative reconstruction (HIR), model-based iterative reconstruction (MBIR), and deep learning reconstruction (DLR). Nodule detection was performed using an AI-based lung nodule detection program, and performance metrics were analyzed across different reconstruction methods and radiation dose protocols. At the lowest dose protocol (80 kV, 10 mA), FBP showed a 0% detection rate for all nodule sizes. HIR and DLR consistently achieved 100% detection rates for solid nodules ≥ 5 mm and GGNs ≥ 8 mm. No method detected 3 mm GGNs under any protocol. DLR demonstrated the highest detection rates, even under ultra-low-dose settings, while maintaining high image quality. AI-based lung nodule detection in ULDCT is strongly dependent on the choice of image reconstruction method.
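For readers replicating this kind of phantom study, the bookkeeping reduces to one detection outcome per (protocol, reconstruction, nodule) combination, aggregated into detection rates. A minimal pandas sketch, with hypothetical rows standing in for the study's six protocols and four reconstructions:

```python
# Hypothetical example rows; the real study scanned solid nodules and GGNs of
# 12, 8, 5, and 3 mm under six dose protocols and four reconstruction methods.
import pandas as pd

records = [
    dict(kv=80, ma=10, recon="FBP", nodule="solid", diameter_mm=12, detected=False),
    dict(kv=80, ma=10, recon="DLR", nodule="solid", diameter_mm=12, detected=True),
    dict(kv=120, ma=160, recon="HIR", nodule="GGN", diameter_mm=8, detected=True),
]
df = pd.DataFrame(records)

# Detection rate per reconstruction method and dose protocol.
rates = df.groupby(["recon", "kv", "ma"])["detected"].mean()
print(rates)
```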

Artificial Intelligence in Vascular Neurology: Applications, Challenges, and a Review of AI Tools for Stroke Imaging, Clinical Decision Making, and Outcome Prediction Models.

Alqadi MM, Vidal SGM

PubMed · May 9, 2025
Artificial intelligence (AI) promises to compress stroke treatment timelines, yet its clinical return on investment remains uncertain. We interrogate state-of-the-art AI platforms across imaging, workflow orchestration, and outcome prediction to clarify value drivers and execution risks. Convolutional, recurrent, and transformer architectures now trigger large-vessel-occlusion alerts, delineate ischemic core in seconds, and forecast 90-day function. Commercial deployments (RapidAI, Viz.ai, Aidoc) report double-digit reductions in door-to-needle metrics and expanded thrombectomy eligibility. However, dataset bias, opaque reasoning, and limited external validation constrain scalability. Hybrid image-plus-clinical models elevate predictive accuracy but intensify data-governance demands. AI can operationalize precision stroke care, but enterprise-grade adoption requires federated data pipelines, explainable-AI dashboards, and fit-for-purpose regulation. Prospective multicenter trials and continuous lifecycle surveillance are mandatory to convert algorithmic promise into reproducible, equitable patient benefit.

Towards Better Cephalometric Landmark Detection with Diffusion Data Generation

Dongqian Guo, Wencheng Han, Pang Lyu, Yuxi Zhou, Jianbing Shen

arXiv preprint · May 9, 2025
Cephalometric landmark detection is essential for orthodontic diagnostics and treatment planning. Nevertheless, the scarcity of samples in data collection and the extensive effort required for manual annotation have significantly impeded the availability of diverse datasets. This limitation has restricted the effectiveness of deep learning-based detection methods, particularly those based on large-scale vision models. To address these challenges, we have developed an innovative data generation method capable of producing diverse cephalometric X-ray images along with corresponding annotations without human intervention. Our approach first constructs new cephalometric landmark annotations using anatomical priors. Then, we employ a diffusion-based generator to create realistic X-ray images that correspond closely with these annotations. To achieve precise control in producing samples with different attributes, we introduce a novel prompt dataset for cephalometric X-ray images, comprising real cephalometric X-rays paired with detailed medical text prompts describing them. By leveraging these detailed prompts, our method can control the styles and attributes of the generated images. Facilitated by the large, diverse generated data, we introduce large-scale vision detection models into the cephalometric landmark detection task to improve accuracy. Experimental results demonstrate that training with the generated data substantially enhances performance: compared to methods that do not use the generated data, our approach improves the Success Detection Rate (SDR) by 6.5%, attaining a notable 82.2%. All code and data are available at: https://um-lab.github.io/cepha-generation
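The generation pipeline can be pictured as: sample landmark annotations from anatomical priors, rasterize them into a conditioning image, and feed that image (plus a text prompt) to the diffusion generator, so each synthetic X-ray arrives with free labels. A minimal sketch of the conditioning step, with a random landmark template standing in for the paper's anatomical priors:

```python
# Sketch of annotation construction and rasterization. The Gaussian jitter and
# the random 19-landmark template are stand-ins for the paper's anatomical
# priors; the resulting image would condition a diffusion generator
# (e.g., a ControlNet-style pipeline), paired with a medical text prompt.
import numpy as np
from PIL import Image, ImageDraw

def sample_landmarks(template: np.ndarray, jitter: float = 5.0) -> np.ndarray:
    """Perturb a mean landmark configuration to get a new, 'free' annotation."""
    return template + np.random.randn(*template.shape) * jitter

def render_landmarks(landmarks: np.ndarray, size=(512, 512)) -> Image.Image:
    """Rasterize landmarks into a conditioning image for the generator."""
    img = Image.new("L", size, 0)
    draw = ImageDraw.Draw(img)
    for x, y in landmarks:
        draw.ellipse([x - 3, y - 3, x + 3, y + 3], fill=255)
    return img

template = np.random.rand(19, 2) * 512  # 19 landmarks is a common ceph convention
annotation = sample_landmarks(template)
condition = render_landmarks(annotation)  # pairs with `annotation` as its label
condition.save("condition.png")
```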

Automated Emergent Large Vessel Occlusion Detection Using Viz.ai Software and Its Impact on Stroke Workflow Metrics and Patient Outcomes in Stroke Centers: A Systematic Review and Meta-analysis.

Sarhan K, Azzam AY, Moawad MHED, Serag I, Abbas A, Sarhan AE

PubMed · May 8, 2025
The implementation of artificial intelligence (AI), particularly Viz.ai software, in stroke care has emerged as a promising tool to enhance the detection of large vessel occlusion (LVO) and to improve stroke workflow metrics and patient outcomes. The aim of this systematic review and meta-analysis is to evaluate the impact of Viz.ai on stroke workflow efficiency in hospitals and on patient outcomes. Following the PRISMA guidelines, we conducted a comprehensive search of electronic databases, including PubMed, Web of Science, and Scopus, for relevant studies published up to 25 October 2024. Our primary outcomes were door-to-groin puncture (DTG) time, CT scan-to-start of endovascular treatment (EVT) time, CT scan-to-recanalization time, and door-in-door-out time. Secondary outcomes included symptomatic intracranial hemorrhage (ICH), any ICH, mortality, mRS score < 2 at 90 days, and length of hospital stay. A total of 12 studies involving 15,595 patients were included in our analysis. The pooled analysis demonstrated that implementation of the Viz.ai algorithm was associated with shorter CT scan-to-EVT time (SMD -0.71, 95% CI [-0.98, -0.44], p < 0.001), DTG time (SMD -0.50, 95% CI [-0.66, -0.35], p < 0.001), and CT scan-to-recanalization time (SMD -0.55, 95% CI [-0.76, -0.33], p < 0.001). Additionally, patients in the post-AI group had significantly shorter door-in-door-out time than the pre-AI group (SMD -0.49, 95% CI [-0.71, -0.28], p < 0.001). Despite the improvement in workflow metrics, our analysis did not reveal statistically significant differences in patient clinical outcomes (p > 0.05). Our results suggest that integration of the Viz.ai platform in stroke care holds significant potential for reducing EVT delays in patients with LVO and optimizing stroke workflow metrics in comprehensive stroke centers. Further studies are required to validate its efficacy in improving clinical outcomes in patients with LVO.
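The pooled SMDs above are standard random-effects syntheses. As a sketch of the arithmetic, the snippet below pools per-study standardized mean differences with the DerSimonian-Laird estimator; the two studies and their variances are hypothetical (the review pooled 12 studies), but the formulas are the conventional ones.

```python
# DerSimonian-Laird random-effects pooling of standardized mean differences.
import numpy as np

smd = np.array([-0.60, -0.45])  # hypothetical per-study SMDs (e.g., DTG time)
var = np.array([0.02, 0.03])    # hypothetical per-study variances

w = 1.0 / var                                  # fixed-effect weights
fe_mean = np.sum(w * smd) / np.sum(w)
q = np.sum(w * (smd - fe_mean) ** 2)           # Cochran's Q
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(smd) - 1)) / c)      # between-study variance

w_re = 1.0 / (var + tau2)                      # random-effects weights
pooled = np.sum(w_re * smd) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled SMD = {pooled:.2f}, "
      f"95% CI [{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}]")
```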

Deep learning approach based on a patch residual for pediatric supracondylar subtle fracture detection.

Ye Q, Wang Z, Lou Y, Yang Y, Hou J, Liu Z, Liu W, Li J

PubMed · May 8, 2025
Supracondylar humerus fractures are among the most common pediatric elbow fractures. However, their diagnosis can be particularly challenging due to the anatomical characteristics and imaging features of the pediatric skeleton. In recent years, convolutional neural networks (CNNs) have achieved notable success in medical image analysis, though their performance typically relies on large-scale, high-quality labeled datasets. Unfortunately, labeled samples for pediatric supracondylar fractures are scarce and difficult to obtain. To address this issue, this paper introduces a deep learning-based multi-scale patch residual network (MPR) for the automatic detection and localization of subtle pediatric supracondylar fractures. The MPR framework combines a CNN for automatic feature extraction with a multi-scale generative adversarial network that models skeletal integrity using healthy samples. By leveraging healthy images to learn the normal skeletal distribution, the approach reduces the dependency on labeled fracture data and effectively addresses the challenges posed by limited pediatric datasets. Datasets from two different hospitals were used, with data augmentation techniques applied during both training and validation. On an independent test set, the proposed model achieves an accuracy of 90.5%, with 89% sensitivity, 92% specificity, and an F1 score of 0.906, outperforming the diagnostic accuracy of emergency medicine physicians and approaching that of pediatric radiologists. Furthermore, the model demonstrates a fast inference speed of 1.1 s per image, underscoring its substantial potential for clinical application.
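The residual idea is that a generator trained only on healthy patches reconstructs normal bone well, so a large reconstruction error flags candidate fracture sites. A minimal PyTorch sketch of that scoring step, with an untrained toy generator standing in for the paper's multi-scale GAN and an assumed patch size and threshold:

```python
# Residual-based patch scoring. The generator here is an untrained stand-in;
# in the paper it is a multi-scale GAN fitted to healthy radiograph patches.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

def residual_score(patch: torch.Tensor) -> float:
    """Mean absolute reconstruction residual for one grayscale patch."""
    with torch.no_grad():
        recon = generator(patch)
    return (patch - recon).abs().mean().item()

patch = torch.rand(1, 1, 64, 64)  # hypothetical 64x64 patch from a radiograph
suspicious = residual_score(patch) > 0.1  # threshold would be tuned on validation data
```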

Automated detection of bottom-of-sulcus dysplasia on MRI-PET in patients with drug-resistant focal epilepsy

Macdonald-Laurs, E., Warren, A. E. L., Mito, R., Genc, S., Alexander, B., Barton, S., Yang, J. Y., Francis, P., Pardoe, H. R., Jackson, G., Harvey, A. S.

medRxiv preprint · May 8, 2025
Background and Objectives: Bottom-of-sulcus dysplasia (BOSD) is a diagnostically challenging subtype of focal cortical dysplasia, 60% being missed on patients' first MRI. Automated MRI-based detection methods have been developed for focal cortical dysplasia, but not for BOSD specifically. Use of FDG-PET alongside MRI is not established in automated methods. We report the development and performance of an automated BOSD detector using combined MRI+PET data. Methods: The training set comprised 54 mostly operated patients with BOSD. The test sets comprised 17 subsequently diagnosed patients with BOSD from the same center and 12 published patients from a different center. Across the training and test sets, 81% of patients had reportedly normal first MRIs, and most BOSDs were <1.5 cm³. In the training set, 12 features from T1-MRI, FLAIR-MRI, and FDG-PET were evaluated using a novel "pseudo-control" normalization approach to determine which features best distinguished dysplastic from normal-appearing cortex. Using the Multi-centre Epilepsy Lesion Detection group's machine-learning detection method with the addition of FDG-PET, neural network classifiers were then trained and tested on MRI+PET features, MRI-only features, and PET-only features. The proportion of patients whose BOSD was overlapped by the top output cluster, and by the top five output clusters, was assessed. Results: Cortical and subcortical hypometabolism on FDG-PET were superior to MRI features in discriminating dysplastic from normal-appearing cortex. When the BOSD detector was trained on MRI+PET features, 87% of BOSDs were overlapped by one of the top five clusters (69% by the top cluster) in the training set, 76% in the prospective test set (71% top cluster), and 75% in the published test set (42% top cluster). Cluster overlap was similar when the detector was trained and tested on PET-only features but lower when trained and tested on MRI-only features. Conclusion: Detection of BOSD is possible using established MRI-based automated detection methods supplemented with FDG-PET features and trained on a BOSD-specific cohort. In clinical practice, an MRI+PET BOSD detector could improve assessment and outcomes in seemingly MRI-negative patients being considered for epilepsy surgery.
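The "pseudo-control" normalization the abstract mentions can be illustrated as a per-vertex z-score of a patient feature map against a control distribution, so hypometabolic or dysplastic cortex stands out. The array shapes, feature, and threshold below are assumptions, not the paper's pipeline:

```python
# Z-score a patient surface feature map against a control cohort, vertex-wise.
import numpy as np

n_vertices = 163_842                       # e.g., one FreeSurfer fsaverage hemisphere
controls = np.random.rand(30, n_vertices)  # hypothetical: FDG-PET uptake in 30 controls
patient = np.random.rand(n_vertices)       # hypothetical patient uptake map

mu, sd = controls.mean(axis=0), controls.std(axis=0)
z = (patient - mu) / (sd + 1e-8)           # strongly negative z ~ hypometabolism

flagged = np.where(z < -2.0)[0]            # threshold is an assumption
print(f"{flagged.size} vertices flagged as candidate dysplastic cortex")
```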

An automated hip fracture detection, classification system on pelvic radiographs and comparison with 35 clinicians.

Yilmaz A, Gem K, Kalebasi M, Varol R, Gencoglan ZO, Samoylenko Y, Tosyali HK, Okcu G, Uvet H

PubMed · May 8, 2025
Accurate diagnosis of orthopedic injuries, especially pelvic and hip fractures, is vital in trauma management. While pelvic radiographs (PXRs) are widely used, misdiagnosis is common. This study proposes an automated system that uses convolutional neural networks (CNNs) to detect potential fracture areas and predict fracture conditions, aiming to outperform traditional object detection-based systems. We developed two deep learning models for hip fracture detection and prediction, trained on PXRs from three hospitals. The first model performed automated hip area detection and cropping; the resulting patches were then classified. Images were preprocessed using the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm. The YOLOv5 architecture was employed for the object detection model, while three different pre-trained deep neural network (DNN) architectures were used for classification via transfer learning. Their performance was evaluated on a test dataset and compared with that of 35 clinicians. YOLOv5 achieved 92.66% accuracy on regular images and 88.89% on CLAHE-enhanced images. The classifier models, MobileNetV2, Xception, and InceptionResNetV2, achieved accuracies between 94.66% and 97.67%. In contrast, the clinicians demonstrated a mean accuracy of 84.53% and longer prediction durations. The DNN models showed significantly better accuracy and speed than the human evaluators (p < 0.0005, p < 0.01). These DNN models show promising utility in trauma diagnosis due to their high accuracy and speed. Integrating such systems into clinical practice may enhance the diagnostic efficiency of PXRs.
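The two-stage pipeline (CLAHE preprocessing, hip localization, then classification of the crop) can be sketched as below. The synthetic image, the fixed bounding box standing in for a YOLOv5 detection, and the ImageNet-weighted classifier are placeholders; the paper fine-tunes MobileNetV2, Xception, and InceptionResNetV2 on fracture labels.

```python
# CLAHE -> crop hip region -> classify the patch.
import cv2
import numpy as np
import tensorflow as tf

img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # stand-in for a PXR
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # parameters are assumptions
img_eq = clahe.apply(img)

x, y, w, h = 100, 150, 224, 224  # placeholder for a YOLOv5 hip bounding box
patch = cv2.cvtColor(img_eq[y:y + h, x:x + w], cv2.COLOR_GRAY2RGB)

# ImageNet-weighted stand-in; the paper replaces the head and fine-tunes on
# fracture vs. no-fracture labels via transfer learning.
model = tf.keras.applications.MobileNetV2(weights="imagenet")
inp = tf.keras.applications.mobilenet_v2.preprocess_input(
    patch[np.newaxis].astype("float32"))
probs = model.predict(inp)
```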