Page 9 of 45441 results

Performance of chest X-ray with computer-aided detection powered by deep learning-based artificial intelligence for tuberculosis presumptive identification during case finding in the Philippines.

Marquez N, Carpio EJ, Santiago MR, Calderon J, Orillaza-Chi R, Salanap SS, Stevens L

pubmed · papers · Aug 22 2025
The Philippines' high tuberculosis (TB) burden calls for effective point-of-care screening. Systematic TB case finding using chest X-ray (CXR) with computer-aided detection powered by deep learning-based artificial intelligence (AI-CAD) provided this opportunity. We aimed to comprehensively review AI-CAD's real-life performance in the local context to support refining its integration into the country's programmatic TB elimination efforts. Retrospective cross-sectional data analysis was done on case-finding activities conducted in four regions of the Philippines between May 2021 and March 2024. Individuals 15 years and older with complete CXR and molecular World Health Organization-recommended rapid diagnostic (mWRD) test results were included. Presumptive TB was identified by CXR, by TB signs and symptoms, and/or by official radiologist readings. The overall diagnostic accuracy of CXR with AI-CAD, stratified by different factors, was assessed using a fixed abnormality threshold and mWRD as the standard reference. Given the imbalanced dataset, we evaluated both precision-recall (PRC) and receiver operating characteristic (ROC) plots. Because verification of CAD-negative individuals was limited, we used "pseudo-sensitivity" and "pseudo-specificity" to reflect estimates based on partial testing. We identified potential factors that may affect performance metrics. Using a 0.5 abnormality threshold in analyzing 5740 individuals, the AI-CAD model showed high pseudo-sensitivity at 95.6% (95% CI, 95.1-96.1) but low pseudo-specificity at 28.1% (26.9-29.2) and positive predictive value (PPV) at 18.4% (16.4-20.4). The area under the receiver operating characteristic curve was 0.820, whereas the area under the precision-recall curve was 0.489. Pseudo-sensitivity was higher among males, younger individuals, and newly diagnosed TB. Threshold analysis revealed trade-offs: increasing the threshold score to 0.68 saved more mWRD tests (42%) but led to an increase in missed cases (10%).
Threshold adjustments affected PPV, tests saved, and case detection differently across settings. Scaling up AI-CAD use in TB screening to improve TB elimination efforts could be beneficial. There is a need to calibrate threshold scores based on resource availability, prevalence, and program goals. ROC and PRC plots, which specify PPV, could serve as valuable metrics for capturing the best estimate of model performance and cost-benefit ratios within the context-specific implementation of resource-limited settings.
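The threshold trade-off described above, where raising the abnormality cutoff saves confirmatory mWRD tests at the cost of missed cases, can be sketched with a minimal numpy example; the scores, labels, and prevalence below are synthetic illustrations, not the study's data:

```python
import numpy as np

def threshold_tradeoff(scores, labels, threshold):
    """For a screening threshold, return (sensitivity among reference-positives,
    fraction of confirmatory tests saved). Individuals with AI scores below the
    threshold are not referred for confirmatory (e.g. mWRD) testing."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    referred = scores >= threshold
    tp = np.sum(referred & (labels == 1))
    sensitivity = tp / max(labels.sum(), 1)
    tests_saved = 1.0 - referred.mean()
    return sensitivity, tests_saved

# Synthetic cohort: raising the threshold saves tests but misses more cases
rng = np.random.default_rng(0)
labels = (rng.random(1000) < 0.1).astype(int)            # ~10% prevalence
scores = np.clip(0.3 * labels + rng.random(1000) * 0.7, 0, 1)

sens_lo, saved_lo = threshold_tradeoff(scores, labels, 0.50)
sens_hi, saved_hi = threshold_tradeoff(scores, labels, 0.68)
print(f"t=0.50: sensitivity={sens_lo:.2f}, tests saved={saved_lo:.0%}")
print(f"t=0.68: sensitivity={sens_hi:.2f}, tests saved={saved_hi:.0%}")
```

Sweeping the threshold over a grid of values yields the kind of tests-saved versus missed-cases curve the authors used to motivate context-specific calibration.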

The impact of a neuroradiologist on the report of a real-world CT perfusion imaging map derived by AI/ML-driven software.

De Rubeis G, Stasolla A, Piccoli C, Federici M, Cozzolino V, Lovullo G, Leone E, Pesapane F, Fabiano S, Bertaccini L, Pingi A, Galluzzo M, Saba L, Pampana E

pubmed · papers · Aug 22 2025
According to guidelines, computed tomography perfusion (CTP) should be read and analyzed using computer-aided software. This study evaluates the efficacy of AI/ML (machine learning)-driven software in CTP imaging and the effect of neuroradiologists' interpretation of these automated results. We conducted a retrospective, single-center cohort study from June to December 2023 at a comprehensive stroke center. A total of 132 patients suspected of acute ischemic stroke underwent CTP. The AI software RAPID.AI was utilized for initial analysis, with subsequent validation and adjustments made by experienced neuroradiologists. The rate of CTP maps marked as "non-reportable", "reportable", and "reportable with correction" by the neuroradiologist was recorded. The degree of confidence in the report of the basal and angio-CT scan was assessed before and after CTP visualization. Statistical analysis included logistic regression and F1 score assessments to evaluate the predictive accuracy of AI-generated CTP maps. RESULTS: CTP maps derived from AI software were reportable in 65.2% of cases without artifacts, which improved to 87.9% when reviewed by neuroradiologists. Key predictive factors for artifact-free CTP maps included motion parameters and the timing of contrast peak distances. There was a significant shift to higher confidence scores for the angiographic phase of the CT after the CTP results became available. CONCLUSIONS: Neuroradiologists play an indispensable role in enhancing the reliability of CTP imaging by interpreting and correcting AI-processed maps. CTP = computed tomography perfusion; AI/ML = Artificial Intelligence/Machine Learning; LVO = large vessel occlusion.

Automated biometry for assessing cephalopelvic disproportion in 3D 0.55T fetal MRI at term

Uus, A., Bansal, S., Gerek, Y., Waheed, H., Neves Silva, S., Aviles Verdera, J., Kyriakopoulou, V., Betti, L., Jaufuraully, S., Hajnal, J. V., Siasakos, D., David, A., Chandiramani, M., Hutter, J., Story, L., Rutherford, M.

medrxiv · preprint · Aug 21 2025
Fetal MRI offers detailed three-dimensional visualisation of both fetal and maternal pelvic anatomy, allowing for assessment of the risk of cephalopelvic disproportion and obstructed labour. However, conventional measurements of fetal and pelvic proportions and their relative positioning are typically performed manually in 2D, making them time-consuming, subject to inter-observer variability, and rarely integrated into routine clinical workflows. In this work, we present the first fully automated pipeline for pelvic and fetal head biometry in T2-weighted fetal MRI at late gestation. The method employs deep learning-based localisation of anatomical landmarks in 3D reconstructed MRI images, followed by computation of 12 standard linear and circumference measurements commonly used in the assessment of cephalopelvic disproportion. Landmark detection is based on 3D UNet models within the MONAI framework, trained on 57 semi-manually annotated datasets. The full pipeline is quantitatively validated on 10 test cases. Furthermore, we demonstrate its clinical feasibility and relevance by applying it to 206 fetal MRI scans (36-40 weeks gestation) from the MiBirth study, which investigates prediction of mode of delivery using low-field MRI.
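The measurement step, deriving linear and circumference biometrics from detected landmarks, can be illustrated with a small numpy sketch; the landmark coordinates and the use of Ramanujan's ellipse approximation for circumference are illustrative assumptions, not the paper's exact formulas:

```python
import numpy as np

def linear_measure(p1, p2):
    """Euclidean distance (mm) between two detected landmarks,
    e.g. an anteroposterior pelvic diameter."""
    return float(np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float)))

def ellipse_circumference(d_ap, d_tr):
    """Approximate a circumference (e.g. fetal head) from two perpendicular
    diameters using Ramanujan's ellipse formula."""
    a, b = d_ap / 2.0, d_tr / 2.0
    h = ((a - b) ** 2) / ((a + b) ** 2)
    return float(np.pi * (a + b) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h))))

# Hypothetical landmark coordinates in reconstructed image space (mm)
promontory = (0.0, 0.0, 0.0)
symphysis = (110.0, 20.0, 5.0)
print(f"pelvic diameter: {linear_measure(promontory, symphysis):.1f} mm")
print(f"head circumference: {ellipse_circumference(95.0, 80.0):.1f} mm")
```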

From Detection to Diagnosis: An Advanced Transfer Learning Pipeline Using YOLO11 with Morphological Post-Processing for Brain Tumor Analysis for MRI Images.

Chourib I

pubmed · papers · Aug 21 2025
Accurate and timely detection of brain tumors from magnetic resonance imaging (MRI) scans is critical for improving patient outcomes and informing therapeutic decision-making. However, the complex heterogeneity of tumor morphology, scarcity of annotated medical data, and computational demands of deep learning models present substantial challenges for developing reliable automated diagnostic systems. In this study, we propose a robust and scalable deep learning framework for brain tumor detection and classification, built upon an enhanced YOLO-v11 architecture combined with a two-stage transfer learning strategy. The first stage involves training a base model on a large, diverse MRI dataset. Upon achieving a mean Average Precision (mAP) exceeding 90%, this model is designated as the Brain Tumor Detection Model (BTDM). In the second stage, the BTDM is fine-tuned on a structurally similar but smaller dataset to form the Brain Tumor Detection and Segmentation (BTDS) model, effectively leveraging domain transfer to maintain performance despite limited data. The model is further optimized through domain-specific data augmentation, including geometric transformations, to improve generalization and robustness. Experimental evaluations on publicly available datasets show that the framework achieves high mAP@0.5 scores (up to 93.5% for the BTDM and 91% for the BTDS) and consistently outperforms existing state-of-the-art methods across multiple tumor types, including glioma, meningioma, and pituitary tumors. In addition, a post-processing module enhances interpretability by generating segmentation masks and extracting clinically relevant metrics such as tumor size and severity level. These results underscore the potential of our approach as a high-performance, interpretable, and deployable clinical decision-support tool, contributing to the advancement of intelligent real-time neuro-oncological diagnostics.
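A morphological post-processing stage like the one described, cleaning a predicted tumor mask before extracting size metrics, might look like the following scipy sketch; the structuring element, minimum-area cutoff, and toy mask are assumptions, not the paper's module:

```python
import numpy as np
from scipy import ndimage

def postprocess_mask(mask, min_area=20):
    """Clean a binary tumour mask: a morphological opening removes speckle,
    then connected components smaller than min_area pixels are discarded.
    Returns the cleaned mask and the surviving component areas."""
    opened = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    labeled, n = ndimage.label(opened)
    areas = ndimage.sum(opened, labeled, index=range(1, n + 1))
    keep = [i + 1 for i, a in enumerate(areas) if a >= min_area]
    cleaned = np.isin(labeled, keep)
    return cleaned, [a for a in areas if a >= min_area]

# Toy mask: one 10x10 "tumour" blob plus two isolated noise pixels
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 20:30] = True
mask[5, 5] = mask[50, 60] = True
cleaned, areas = postprocess_mask(mask)
print(len(areas), int(cleaned.sum()))  # the blob survives; noise is removed
```

The surviving component areas, scaled by pixel spacing, give a crude tumor-size metric of the kind the module reports.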

Automated Midline Shift Detection in Head CT Using Localization and Symmetry Techniques Based on User-Selected Slice.

Banayan NE, Shalu H, Hatzoglou V, Swinburne N, Holodny A, Zhang Z, Stember J

pubmed · papers · Aug 20 2025
Midline shift (MLS) is an intracranial pathology characterized by the displacement of brain parenchyma across the skull's midsagittal axis, typically caused by mass effect from space-occupying lesions or traumatic brain injuries. Prompt detection of MLS is crucial, because delays in identification and intervention can negatively impact patient outcomes. The gap we have addressed in this work is the development of a deep learning algorithm that encompasses the full severity range of MLS, from mild to severe cases. Notably, in more severe cases, the mass effect often effaces the septum pellucidum, rendering it unusable as a fiducial point of reference. We sought to enable rapid and accurate detection of MLS by leveraging advances in artificial intelligence (AI). Using a cohort of 981 patient CT scans with a breadth of cerebral pathologies from our institution, we manually chose an individual slice from each CT scan, primarily based on the presence of the lateral ventricles, and annotated 400 of these scans for the lateral ventricles and skull-axis midline by using Roboflow. Finally, we trained an AI model based on the You Only Look Once object detection system to identify MLS in the individual slices of the remaining 581 CT scans. When comparing normal and mild cases to moderate and severe cases of MLS detection, our model yielded an area under the curve of 0.79, with a sensitivity of 0.73 and specificity of 0.72, indicating that our model is sensitive enough to capture moderate and severe MLS and specific enough to differentiate them from mild and normal cases. We developed an AI model that reliably identifies the lateral ventricles and the cerebral midline across various pathologies in patient CT scans. Most importantly, our model accurately identifies and stratifies clinically significant and emergent MLS from nonemergent cases.
This could serve as a foundational element for a future clinically integrated approach that flags urgent studies for expedited review, potentially facilitating more timely treatment when necessary.
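When the septum pellucidum is effaced, a shift can still be quantified as the perpendicular offset of a detected brain landmark from the skull's midsagittal axis. A minimal 2-D geometry sketch; the coordinates are hypothetical and this is not the authors' exact computation:

```python
import numpy as np

def midline_shift(anterior, posterior, landmark):
    """Perpendicular distance from a displaced landmark (e.g. the lateral
    ventricles' midpoint) to the ideal midline defined by two skull landmarks
    (anterior and posterior attachment points) on one axial slice."""
    a, p, q = (np.asarray(v, dtype=float) for v in (anterior, posterior, landmark))
    axis, d = p - a, q - a
    cross = axis[0] * d[1] - axis[1] * d[0]   # 2-D cross product
    return abs(cross) / np.linalg.norm(axis)

# Toy axial-slice coordinates (mm): a landmark pushed 6 mm off the skull axis
print(midline_shift((0.0, 0.0), (0.0, 160.0), (6.0, 80.0)))  # -> 6.0
```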

Detection of neonatal pneumoperitoneum on radiographs using deep multi-task learning.

Park C, Choi J, Hwang J, Jeong H, Kim PH, Cho YA, Lee BS, Jung E, Kwon SH, Kim M, Jun H, Nam Y, Kim N, Yoon HM

pubmed · papers · Aug 20 2025
Neonatal pneumoperitoneum is a life-threatening condition requiring prompt diagnosis, yet its subtle radiographic signs pose diagnostic challenges, especially in emergency settings. To develop and validate a deep multi-task learning model for diagnosing neonatal pneumoperitoneum on radiographs and to assess its clinical utility across clinicians of varying experience levels. Retrospective diagnostic study using internal and external datasets. Internal data were collected between January 1995 and August 2018, while external data were sourced from 11 neonatal intensive care units. Tertiary hospital and multicenter validation settings. Internal dataset: 204 neonates (546 radiographs); external dataset: 378 radiographs (125 pneumoperitoneum cases, 214 non-pneumoperitoneum cases). Radiographs were reviewed by two pediatric radiologists. A reader study involved 4 physicians with varying experience levels. A deep multi-task learning model combining classification and segmentation tasks for pneumoperitoneum detection. The primary outcomes included diagnostic accuracy, area under the receiver operating characteristic curve (AUC), and inter-reader agreement. AI-assisted and unassisted reader performance metrics were compared. The AI model achieved an AUC of 0.98 (95% CI, 0.94-1.00) and accuracy of 94% (95% CI, 85.1-99.6) in internal validation, and an AUC of 0.89 (95% CI, 0.85-0.92) with accuracy of 84.1% (95% CI, 80.4-87.8) in external validation. AI assistance improved reader accuracy from 82.5% to 86.6% (p < .001) and inter-reader agreement (kappa increased from 0.33-0.71 to 0.54-0.86). The multi-task learning model demonstrated excellent diagnostic performance and improved clinicians' diagnostic accuracy and agreement, suggesting its potential to enhance care in neonatal intensive care settings. All code is available at https://github.com/brody9512/NEC_MTL.
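The inter-reader agreement statistic reported above, Cohen's kappa, corrects raw agreement for agreement expected by chance; a small self-contained sketch with hypothetical ratings, not the study's data:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two readers' categorical ratings: observed agreement
    (po) corrected for chance agreement (pe) from the marginal frequencies."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (po - pe) / (1 - pe)

# Two hypothetical readers rating 10 radiographs (1 = pneumoperitoneum present)
reader_a = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
reader_b = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
print(round(cohens_kappa(reader_a, reader_b), 3))  # -> 0.8
```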

A comprehensive deep learning approach to improve enchondroma detection on X-ray images.

Aydin A, Ozcan C, Simsek SA, Say F

pubmed · papers · Aug 20 2025
An enchondroma is a benign neoplasm of mature hyaline cartilage that proliferates from the medullary cavity toward the cortical bone, resulting in the formation of a significant endogenous mass within the medullary cavity. Although enchondromas are predominantly asymptomatic, they may exhibit various clinical manifestations contingent on the size of the lesion, its localization, and the characteristics observed on radiological imaging. This study aimed to identify and present cases of bone tissue enchondromas to field specialists as preliminary data. In this study, authentic X-ray radiographs of patients were obtained following ethical approval and subjected to preprocessing. The images were then annotated by orthopedic oncology specialists and analyzed using advanced, state-of-the-art object detection algorithms trained with diverse architectural frameworks. All processes, from preprocessing to identifying pathological regions using object detection systems, underwent rigorous cross-validation and oversight by the research team. After performing various operations and procedural steps, including modifying deep learning architectures and optimizing hyperparameters, enchondroma formation in bone tissue was successfully identified, achieving an average precision of 0.97 and an accuracy rate of 0.98, corroborated by medical professionals. This comprehensive study, incorporating 1055 authentic patient records from multiple healthcare centers, is a pioneering investigation that introduces innovative approaches for delivering preliminary insights to specialists concerning bone radiography.
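Average precision, one of the two metrics reported, can be computed from detections ranked by descending confidence; a minimal numpy sketch (the ranking below is illustrative, not the study's results):

```python
import numpy as np

def average_precision(labels_sorted):
    """Average precision for detections sorted by descending confidence:
    the mean of the precision values attained at each true-positive rank
    (the non-interpolated AP used in object-detection evaluation)."""
    labels = np.asarray(labels_sorted, dtype=float)
    cum_tp = np.cumsum(labels)
    precision = cum_tp / np.arange(1, len(labels) + 1)
    return float(np.sum(precision * labels) / max(labels.sum(), 1))

# Five ranked detections; 1 = matched an annotated enchondroma region
print(average_precision([1, 1, 0, 1, 0]))  # mean of precisions 1, 1, 0.75
```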

[The application effect of Generative Pre-Treatment Tool of Skeletal Pathology in functional lumbar spine radiographic analysis].

Yilihamu Y, Zhao K, Zhong H, Feng SQ

pubmed · papers · Aug 20 2025
<b>Objective:</b> To investigate the application effectiveness of the artificial intelligence (AI)-based Generative Pre-treatment Tool of Skeletal Pathology (GPTSP) in functional lumbar radiographic examinations. <b>Methods:</b> This is a retrospective case series study reviewing the clinical and imaging data of 34 patients who underwent lumbar dynamic X-ray radiography at the Department of Orthopedics, the Second Hospital of Shandong University, from September 2021 to June 2023. Among the patients, 13 were male and 21 were female, with an age of (68.0±8.0) years (range: 55 to 88 years). The AI model of the GPTSP system was built upon the YOLOv8 model with a multi-dimensional constrained loss function, incorporating Kullback-Leibler divergence to quantify the anatomical distribution deviation of lumbar intervertebral space detection boxes, along with a global dynamic attention mechanism. It can identify lumbar vertebral body edge points and measure the lumbar intervertebral space. Furthermore, the spondylolisthesis index, lumbar index, and lumbar intervertebral angles were measured using three methods: manual measurement by doctors, predefined annotated measurement, and AI-assisted measurement. The consistency between the doctors and the AI model was analyzed using the intra-class correlation coefficient (ICC) and the Kappa coefficient. <b>Results:</b> AI-assisted measurement time was (1.5±0.1) seconds (range: 1.3 to 1.7 seconds), shorter than manual measurement ((2064.4±108.2) seconds, range: 1768.3 to 2217.6 seconds) and predefined annotated measurement ((602.0±48.9) seconds, range: 503.9 to 694.4 seconds). Kappa values between physicians' diagnoses and the AI model's diagnoses (based on the GPTSP platform) for the spondylolisthesis index, lumbar index, and intervertebral angles measured by the three methods were 0.95, 0.92, and 0.82 (all <i>P</i><0.01), with ICC values consistently exceeding 0.90, indicating high consistency. Taking the doctors' manual measurement as the reference, AI assistance reduced the average annotation error from 2.52 mm (range: 0.01 to 6.78 mm) with predefined annotated measurement to 1.47 mm (range: 0 to 5.03 mm). <b>Conclusions:</b> The GPTSP system enhanced efficiency in functional lumbar analysis. The AI model demonstrated high consistency in annotation and measurement results, showing strong potential to serve as a reliable clinical auxiliary tool.
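The ICC used to assess doctor-AI consistency can be computed directly; below is a numpy sketch of ICC(2,1) (two-way random effects, absolute agreement, single measurement), one common choice since the abstract does not state which ICC form was used. The measurements are hypothetical:

```python
import numpy as np

def icc2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    x is an (n subjects, k raters) matrix of measurements."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ms_r = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
    ms_c = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # raters
    ss_e = np.sum((x - x.mean(axis=1, keepdims=True)
                     - x.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_e = ss_e / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical manual vs AI-assisted intervertebral-angle measurements (degrees)
manual = [10.2, 8.5, 12.1, 9.9, 11.4, 7.8]
ai     = [10.0, 8.7, 12.3, 9.8, 11.1, 8.0]
print(round(icc2_1(np.column_stack([manual, ai])), 3))
```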

A fully automated AI-based method for tumour detection and quantification on [<sup>18</sup>F]PSMA-1007 PET-CT images in prostate cancer.

Trägårdh E, Ulén J, Enqvist O, Larsson M, Valind K, Minarik D, Edenbrandt L

pubmed · papers · Aug 20 2025
In this study, we further developed an artificial intelligence (AI)-based method for the detection and quantification of tumours in the prostate, lymph nodes and bone in prostate-specific membrane antigen (PSMA)-targeting positron emission tomography with computed tomography (PET-CT) images. A total of 1064 [<sup>18</sup>F]PSMA-1007 PET-CT scans were used (approximately twice as many as for our previous AI model), of which 120 were used as the test set. Suspected lesions were manually annotated and used as ground truth. A convolutional neural network was developed and trained. The sensitivity and positive predictive value (PPV) were calculated using two sets of manual segmentations as reference. Results were also compared to our previously developed AI method. The correlations between manual and AI-based calculations of total lesion volume (TLV) and total lesion uptake (TLU) were calculated. The sensitivities of the AI method were 85% for prostate tumour/recurrence, 91% for lymph node metastases and 61% for bone metastases (82%, 86% and 70% for manual readings, and 66%, 88% and 71% for the old AI method). The PPVs of the AI method were 85%, 83% and 58%, respectively (63%, 86% and 39% for manual readings, and 69%, 70% and 39% for the old AI method). The correlations between manual and AI-based calculations of TLV and TLU ranged from r = 0.62 to r = 0.96. The performance of the newly developed and fully automated AI-based method for detecting and quantifying prostate tumour and suspected lymph node and bone metastases increased significantly, especially the PPV. The AI method is freely available to other researchers ( www.recomia.org ).
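Lesion-level sensitivity and PPV as reported here require matching predicted lesions to annotated ones; a minimal greedy-overlap sketch (the matching rule and toy voxel indices are assumptions, not the authors' criteria):

```python
def match_lesions(true_lesions, pred_lesions, min_overlap=1):
    """Greedy lesion matching by voxel overlap: a predicted lesion counts as a
    true positive if it shares at least min_overlap voxels with a not-yet-matched
    ground-truth lesion. Returns (lesion-level sensitivity, PPV)."""
    unmatched = [set(t) for t in true_lesions]
    tp = 0
    for pred in map(set, pred_lesions):
        for i, truth_set in enumerate(unmatched):
            if len(pred & truth_set) >= min_overlap:
                tp += 1
                unmatched.pop(i)
                break
    sens = tp / len(true_lesions) if true_lesions else 0.0
    ppv = tp / len(pred_lesions) if pred_lesions else 0.0
    return sens, ppv

# Toy flattened voxel indices: two true lesions, three predictions (one spurious)
truth = [[1, 2, 3], [10, 11]]
preds = [[2, 3, 4], [11, 12], [30, 31]]
print(match_lesions(truth, preds))  # both lesions found; one false positive
```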

Clinical and Economic Evaluation of a Real-Time Chest X-Ray Computer-Aided Detection System for Misplaced Endotracheal and Nasogastric Tubes and Pneumothorax in Emergency and Critical Care Settings: Protocol for a Cluster Randomized Controlled Trial.

Tsai CL, Chu TC, Wang CH, Chang WT, Tsai MS, Ku SC, Lin YH, Tai HC, Kuo SW, Wang KC, Chao A, Tang SC, Liu WL, Tsai MH, Wang TA, Chuang SL, Lee YC, Kuo LC, Chen CJ, Kao JH, Wang W, Huang CH

pubmed · papers · Aug 20 2025
Advancements in artificial intelligence (AI) have driven substantial breakthroughs in computer-aided detection (CAD) for chest x-ray (CXR) imaging. The National Taiwan University Hospital research team previously developed an AI-based emergency CXR system (Capstone project), which led to the creation of a CXR module. This CXR module has an established model supported by extensive research and is ready for application in clinical trials without requiring additional model training. This study will use 3 submodules of the system: detection of misplaced endotracheal tubes, detection of misplaced nasogastric tubes, and identification of pneumothorax. This study aims to apply a real-time CXR CAD system in emergency and critical care settings to evaluate its clinical and economic benefits without requiring additional CXR examinations or altering standard care and procedures. The study will evaluate the impact of the CAD system on mortality reduction, postintubation complications, hospital stay duration, workload, and interpretation time, as well as conduct a cost-effectiveness comparison with standard care. This study adopts a pilot trial and cluster randomized controlled trial design, with random assignment conducted at the ward level. In the intervention group, units are granted access to AI diagnostic results, while the control group continues standard care practices. Consent will be obtained from attending physicians, residents, and advanced practice nurses in each participating ward. Once consent is secured, these health care providers in the intervention group will be authorized to use the CAD system. Intervention units will have access to AI-generated interpretations, whereas control units will maintain routine medical procedures without access to the AI diagnostic outputs. The study was funded in September 2024. Data collection is expected to last from January 2026 to December 2027.
This study anticipates that the real-time CXR CAD system will automate the identification and detection of misplaced endotracheal and nasogastric tubes on CXRs, as well as assist clinicians in diagnosing pneumothorax. By reducing the workload of physicians, the system is expected to shorten the time required to detect tube misplacement and pneumothorax, decrease patient mortality and hospital stays, and ultimately lower health care costs. PRR1-10.2196/72928.
