Page 11 of 45441 results

Weighted loss for imbalanced glaucoma detection: Insights from visual explanations.

Nugraha DJ, Yudistira N, Widodo AW

PubMed | Aug 17 2025
Glaucoma is a leading cause of irreversible vision loss in ophthalmology, primarily resulting from damage to the optic nerve. Early detection is crucial but remains challenging due to the inherent class imbalance in glaucoma fundus image datasets. This study addresses this limitation by applying a weighted loss function to Convolutional Neural Networks (CNNs), evaluated on the standardized SMDG-19 dataset, which integrates data from 19 publicly available sources. Key performance metrics, including recall, F1-score, precision, accuracy, and AUC, were analyzed, and interpretability was assessed using Grad-CAM. The results demonstrate that recall increased from 60.3% to 87.3%, representing a relative improvement of 44.75%, while F1-score improved from 66.5% to 71.4% (+7.25%). Minor trade-offs were observed in precision, which declined from 74.5% to 69.6% (-6.53%), and in accuracy, which dropped from 84.2% to 80.7% (-4.10%). In contrast, AUC rose from 84.2% to 87.4%, reflecting a relative gain of 3.21%. Grad-CAM visualizations showed consistent focus on clinically relevant regions of the optic nerve head, underscoring the effectiveness of the weighted loss strategy in improving both the performance and interpretability of CNN-based glaucoma detection systems.
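The core idea, weighting the loss inversely to class frequency so the minority (glaucoma) class contributes more to the gradient, can be sketched as follows. This is a minimal NumPy illustration of class-weighted cross-entropy, not the authors' implementation; the inverse-frequency weighting heuristic is one common choice among several.

```python
import numpy as np

def inverse_frequency_weights(labels, n_classes):
    """Per-class weights inversely proportional to class frequency."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * counts)  # "balanced" heuristic

def weighted_cross_entropy(probs, labels, weights):
    """Mean class-weighted negative log-likelihood over a batch."""
    eps = 1e-12
    per_sample = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return float(np.mean(weights[labels] * per_sample))

# Imbalanced toy batch: 4 normal (class 0), 1 glaucoma (class 1)
labels = np.array([0, 0, 0, 0, 1])
w = inverse_frequency_weights(labels, 2)   # minority class weighted up
probs = np.array([[0.9, 0.1]] * 4 + [[0.4, 0.6]])
loss = weighted_cross_entropy(probs, labels, w)
```

With counts of 4 and 1, the weights come out to 0.625 and 2.5, so an error on the rare glaucoma class costs four times as much as one on the majority class.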

A Computer Vision and Machine Learning Approach to Classify Views in Distal Radius Radiographs.

Vemu R, Birhiray D, Darwish B, Hollis R, Unnam S, Chilukuri S, Deveza L

PubMed | Aug 17 2025
Advances in computer vision and machine learning have augmented the ability to analyze orthopedic radiographs. A critical but underexplored component of this process is the accurate classification of radiographic views and localization of relevant anatomical regions, both of which can impact the performance of downstream diagnostic models. This study presents a deep learning object detection model and mobile application designed to classify distal radius radiographs into standard views (anterior-posterior (AP), lateral (LAT), and oblique (OB)) while localizing the anatomical region most relevant to distal radius fractures. A total of 1593 deidentified radiographs were collected from a single institution between 2021 and 2023 (544 AP, 538 LAT, and 521 OB). Each image was annotated using Labellerr software to draw bounding boxes encompassing the region spanning from the second-digit metacarpophalangeal (MCP) joint to the distal third of the radius, with annotations verified by an experienced orthopedic surgeon. A YOLOv5 object detection model was fine-tuned and trained using a 70/15/15 train/validation/test split. The model achieved an overall accuracy of 97.3%, with class-specific accuracies of 99% for AP, 100% for LAT, and 93% for OB. Overall precision and recall were 96.8% and 97.5%, respectively. Model performance exceeded the expected accuracy from random guessing (p < 0.001, binomial test). A Streamlit-based mobile application was developed to support clinical deployment. This automated view classification step reduces the feature space by isolating only the relevant anatomy. Focusing subsequent models on the targeted region can minimize distraction from irrelevant areas and improve the accuracy of downstream fracture classification models.
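The reported binomial test compares observed accuracy against chance (1/3 for a balanced three-class problem). A one-sided tail probability can be computed directly from the binomial distribution; the test-set size and correct count below are illustrative assumptions (roughly a 15% split of 1593 images at 97.3% accuracy), not figures from the paper.

```python
from math import comb

def binomial_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): one-sided test that the
    observed number of correct classifications exceeds chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Assumed illustrative numbers: ~239 test images, 97.3% accuracy
n_test, n_correct, chance = 239, 233, 1 / 3
p_value = binomial_sf(n_correct, n_test, chance)  # vanishingly small
```

Any accuracy far above the 1/3 chance level drives the p-value well below the 0.001 threshold the authors report.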

Deep learning-based identification of necrosis and microvascular proliferation in adult diffuse gliomas from whole-slide images

Guo, Y., Huang, H., Liu, X., Zou, W., Qiu, F., Liu, Y., Chai, R., Jiang, T., Wang, J.

medRxiv preprint | Aug 16 2025
For adult diffuse gliomas (ADGs), most grading can be achieved through molecular subtyping, retaining only two key histopathological features for high-grade glioma (HGG): necrosis (NEC) and microvascular proliferation (MVP). We developed a deep learning (DL) framework to automatically identify and characterize these features. We trained patch-level models to detect and quantify NEC and MVP using a dataset built with active learning, incorporating patches from 621 whole-slide images (WSIs) from the Chinese Glioma Genome Atlas (CGGA). Using the trained patch-level models, we integrated the predicted outcomes and positions of individual patches within WSIs from The Cancer Genome Atlas (TCGA) cohort to form datasets. We then introduced a patient-level model, named PLNet (Probability Localization Network), which was trained on these datasets to facilitate patient diagnosis. We also explored subtypes of NEC and MVP based on features extracted from the patch-level models, with a clustering process applied to all positive patches. The patient-level models demonstrated exceptional performance, achieving AUCs of 0.9968 and 0.9995 and AUPRCs of 0.9788 and 0.9860 for NEC and MVP, respectively. Compared to pathological reports, our patient-level models achieved accuracies of 88.05% for NEC and 90.20% for MVP, along with sensitivities of 73.68% and 77%. When sensitivity was set at 80%, accuracy reached 79.28% for NEC and 77.55% for MVP. DL models enable more efficient and accurate histopathological image analysis, which will aid traditional glioma diagnosis. Clustering-based analyses using features extracted from the patch-level models could further investigate the subtypes of NEC and MVP.
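The key data-engineering step, arranging patch-level probabilities at their spatial positions within the WSI so a patient-level model can consume them, can be sketched as below. The grid construction mirrors the described pipeline; the top-k scoring function is a simplified stand-in for the trained PLNet, included only to make the example end-to-end.

```python
import numpy as np

def probability_map(patch_preds, grid_shape):
    """Place patch-level probabilities at their (row, col) grid
    positions, forming the spatial map a patient-level model consumes."""
    heat = np.zeros(grid_shape)
    for (r, c), p in patch_preds:
        heat[r, c] = p
    return heat

def naive_patient_score(heat, k=3):
    """Mean of the top-k patch probabilities as a crude slide-level
    score (a stand-in for the trained patient-level model)."""
    flat = np.sort(heat.ravel())[::-1]
    return float(flat[:k].mean())

# Four patches of one toy slide: (grid position, predicted NEC probability)
preds = [((0, 0), 0.1), ((0, 1), 0.95), ((1, 0), 0.9), ((1, 1), 0.85)]
heat = probability_map(preds, (2, 2))
score = naive_patient_score(heat)  # 0.9 for this toy slide
```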

Point-of-Care Ultrasound Imaging for Automated Detection of Abdominal Haemorrhage: A Systematic Review.

Zgool T, Antico M, Edwards C, Fontanarosa D

PubMed | Aug 16 2025
Abdominal haemorrhage is a life-threatening condition requiring prompt detection to enable timely intervention. Conventional ultrasound (US) is widely used but is highly operator-dependent, limiting its reliability outside clinical settings. In certain anatomical regions, particularly Morison's pouch, US offers higher detection reliability due to the preferential accumulation of free fluid in dependent areas. Recent advances in artificial intelligence (AI)-integrated point-of-care US (POCUS) systems show promise for use in emergency, pre-hospital, military, and resource-limited environments. This systematic review evaluates the performance of AI-driven POCUS systems for detecting and estimating abdominal haemorrhage. A systematic search of Scopus, PubMed, EMBASE, and Web of Science (2014-2024) identified seven studies with sample sizes ranging from 94 to 6608 images and patient numbers ranging between 78 and 864 trauma patients. AI models, including YOLOv3, U-Net, and ResNet50, demonstrated high diagnostic accuracy, with sensitivity ranging from 88% to 98% and specificity from 68% to 99%. Most studies utilized 2D US imaging and conducted internal validation, typically employing systems such as the Philips Lumify and Mindray TE7. Model performance was predominantly assessed on internal datasets, wherein training and evaluation were performed on the same dataset. Of particular note, only one study validated its model on an independent dataset obtained from a different clinical setting. This limited use of external validation restricts the ability to evaluate the applicability of AI models across diverse populations and varying imaging conditions. Moreover, the Focused Assessment with Sonography in Trauma (FAST) is a protocol-driven US method for detecting free fluid in the abdominal cavity, primarily in trauma cases.
However, while FAST is commonly used to assess the right upper quadrant, particularly Morison's pouch, which is gravity-dependent and sensitive for early haemorrhage, its application to other abdominal regions, such as the left upper quadrant and pelvis, remains underexplored. This is clinically significant, as fluid may preferentially accumulate in these areas depending on the mechanism of injury, patient positioning, or time since trauma, underscoring the need for broader anatomical coverage in AI applications. Researchers aiming to address the current reliance on 2D imaging and the limited use of external validation should focus future studies on integrating 3D imaging and utilising diverse, multicentre datasets to improve the reliability and generalizability of AI-driven POCUS systems for haemorrhage detection in trauma care.

TN5000: An Ultrasound Image Dataset for Thyroid Nodule Detection and Classification.

Zhang H, Liu Q, Han X, Niu L, Sun W

PubMed | Aug 16 2025
Accurate diagnosis of thyroid nodules using ultrasonography is a highly valuable but challenging task. With the emergence of artificial intelligence, deep learning-based methods can assist radiologists, but their performance heavily depends on the quantity and quality of training data, and current ultrasound image datasets for thyroid nodules either directly use TI-RADS assessments as labels or are not publicly available. To address these issues, we propose an open-access ultrasound image dataset for thyroid nodule detection and classification, TN5000, which comprises 5,000 B-mode ultrasound images of thyroid nodules with complete annotations and biopsy confirmations by expert radiologists. Additionally, the statistical characteristics of the proposed dataset are analyzed, and baseline methods for the detection and classification of thyroid nodules are recommended as benchmarks, along with their evaluation results. To the best of our knowledge, TN5000 is the largest open-access ultrasound image dataset of thyroid nodules with professional labeling, and the first ultrasound image dataset designed for both thyroid nodule detection and classification. Such annotated images can help analyze the intrinsic properties of thyroid nodules and determine the necessity of fine-needle aspiration (FNA) biopsy, both of which are crucial in ultrasound diagnosis.

A comparative analysis of imaging-based algorithms for detecting focal cortical dysplasia type II in children.

Šanda J, Holubová Z, Kala D, Jiránková K, Kudr M, Masák T, Bělohlávková A, Kršek P, Otáhal J, Kynčl M

PubMed | Aug 15 2025
Focal cortical dysplasia (FCD) is the leading cause of drug-resistant epilepsy (DRE) in pediatric patients. Accurate detection of FCDs is crucial for successful surgical outcomes, yet remains challenging due to frequently subtle MRI findings, especially in children, whose brain morphology undergoes significant developmental changes. Automated detection algorithms have the potential to improve diagnostic precision, particularly in cases where standard visual assessment fails. This study aimed to evaluate the performance of automated algorithms in detecting FCD type II in pediatric patients and to examine the impact of adult versus pediatric templates on detection accuracy. MRI data from 23 surgical pediatric patients with histologically confirmed FCD type II were retrospectively analyzed. Three imaging-based detection algorithms were applied to T1-weighted images, each targeting a key structural feature: cortical thickness, gray matter intensity (extension), and gray-white matter junction blurring. Their performance was assessed using adult and pediatric healthy-control templates, with validation against both predictive radiological ROIs (PRR) and post-resection cavities (PRC). The junction algorithm achieved the highest median Dice score (0.028, IQR 0.038, p < 0.01 when compared with the other algorithms) and detected relevant clusters even in MRI-negative cases. The adult template (median Dice score 0.013, IQR 0.027) significantly outperformed the pediatric template (0.0032, IQR 0.023) (p < 0.001), highlighting the importance of template consistency. Despite the superior performance of the adult template, its use in pediatric populations may introduce bias, as it does not account for age-specific morphological features such as cortical maturation and incomplete myelination. Automated algorithms, especially those targeting junction blurring, enhance FCD detection in pediatric populations.
These algorithms may serve as valuable decision-support tools, particularly in settings where neuroradiological expertise is limited.
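The Dice score used to validate detections against radiological ROIs and resection cavities is the standard overlap measure between two binary masks. A minimal sketch (the masks below are toy 2D examples, not the study's 3D volumes):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float(2 * inter / (pred.sum() + target.sum() + eps))

pred = np.zeros((4, 4)); pred[1:3, 1:3] = 1   # 4-voxel detection cluster
roi  = np.zeros((4, 4)); roi[2:4, 2:4] = 1    # 4-voxel reference ROI
d = dice(pred, roi)  # single overlapping voxel → 2*1/(4+4) = 0.25
```

The very low median scores reported (0.028) reflect how small and subtle FCD lesions are relative to the detected clusters, which is why the authors emphasize cluster relevance rather than tight overlap.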

High sensitivity in spontaneous intracranial hemorrhage detection from emergency head CT scans using ensemble-learning approach.

Takala J, Peura H, Pirinen R, Väätäinen K, Terjajev S, Lin Z, Raj R, Korja M

PubMed | Aug 15 2025
Spontaneous intracranial hemorrhages carry a high disease burden. With medical imaging volumes increasing, new technological solutions for assisting in image interpretation are warranted. We developed a deep learning (DL) solution for spontaneous intracranial hemorrhage detection from head CT scans. The DL solution included four base convolutional neural networks (CNNs), which were trained using 300 head CT scans. A metamodel was trained on top of the four base CNNs, and simple post-processing steps were applied to improve the solution's accuracy. Performance was evaluated using a retrospective dataset of consecutive emergency head CTs imaged in ten different emergency rooms. 7797 head CT scans were included in the validation dataset, 118 of which presented with spontaneous intracranial hemorrhage. The trained metamodel, together with a simple rule-based post-processing step, showed 89.8% sensitivity and 89.5% specificity for hemorrhage detection at the case level. The solution detected all 78 spontaneous hemorrhage cases presumed or confirmed to have been imaged within 12 h of symptom onset, and identified five hemorrhages missed in the initial on-call reports. Although the success of DL algorithms depends on multiple factors, including training-data versatility and annotation quality, the proposed ensemble-learning approach with rule-based post-processing may help clinicians develop highly accurate DL solutions for clinical imaging diagnostics.
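Stacking a metamodel on top of base-model outputs can be sketched as below: each base CNN emits a probability, and a simple learner is fit on those probabilities. The hand-rolled logistic regression and the simulated base-model outputs are illustrative assumptions; the paper does not specify the metamodel architecture here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_metamodel(base_probs, y, lr=0.5, steps=2000):
    """Logistic-regression metamodel over base-model probabilities
    (a minimal stand-in for the paper's stacked ensemble)."""
    X = np.hstack([base_probs, np.ones((len(y), 1))])  # bias column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)  # mean log-loss gradient
        w -= lr * grad
    return w

def predict(base_probs, w, threshold=0.5):
    X = np.hstack([base_probs, np.ones((len(base_probs), 1))])
    return (sigmoid(X @ w) >= threshold).astype(int)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                      # 1 = hemorrhage present
# Four simulated base CNNs: noisy probability views of the true label
base = np.clip(y[:, None] * 0.6 + rng.normal(0.2, 0.15, (200, 4)), 0, 1)
w = fit_metamodel(base, y)
acc = (predict(base, w) == y).mean()
```

The rule-based post-processing the authors mention would then operate on these case-level predictions, e.g. suppressing isolated single-slice detections.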

Enhancing Diagnostic Accuracy of Fresh Vertebral Compression Fractures With Deep Learning Models.

Li KY, Ye HB, Zhang YL, Huang JW, Li HL, Tian NF

PubMed | Aug 15 2025
Retrospective study. The study aimed to develop and validate a deep learning model based on X-ray images to accurately diagnose fresh thoracolumbar vertebral compression fractures. In clinical practice, diagnosing fresh vertebral compression fractures often requires MRI. However, due to the scarcity of MRI resources and the high time and economic costs involved, some patients may not receive timely diagnosis and treatment. A deep learning model combined with X-rays for diagnostic assistance could potentially serve as an alternative to MRI. This study collected X-ray images of suspected thoracolumbar vertebral compression fractures from a municipal shared database between December 2012 and February 2024. Deep learning models were constructed using the EfficientNet, MobileNet, and MnasNet frameworks. We conducted a preliminary evaluation of the deep learning models using the validation set. Diagnostic performance was evaluated using metrics such as AUC value, accuracy, sensitivity, specificity, F1 score, precision, and the ROC curve. Finally, the deep learning models were compared with evaluations from two spine surgeons of different experience levels on the control set. This study included a total of 3025 lateral X-ray images from 2224 patients. The dataset was divided into a training set of 2388 cases, a validation set of 482 cases, and a control set of 155 cases. In the validation set, the three DL models had accuracies of 83.0%, 82.4%, and 82.2%, respectively, with AUC values of 0.861, 0.852, and 0.865. In the control set, the accuracies of the three DL models were 78.1%, 78.1%, and 80.7%, respectively, all higher than the spine surgeons and significantly higher than the junior spine surgeon. This study developed deep learning models for detecting fresh vertebral compression fractures, demonstrating high accuracy.
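The headline metrics above (accuracy, sensitivity, specificity) all derive from one confusion matrix. A minimal sketch of their computation on toy labels (1 = fresh fracture on the MRI reference standard):

```python
def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity from binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # fresh fractures caught
        "specificity": tn / (tn + fp),   # old/no fractures cleared
    }

m = diagnostic_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```

Comparing such per-model dictionaries against the surgeons' readings on the same control set is what underpins the reported comparison.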

Multimodal quantitative analysis guides precise preoperative localization of epilepsy.

Shen Y, Shen Z, Huang Y, Wu Z, Ma Y, Hu F, Shu K

PubMed | Aug 15 2025
Epilepsy surgery efficacy is critically contingent upon the precise localization of the epileptogenic zone (EZ). However, conventional qualitative methods face challenges in achieving accurate localization, integrating multimodal data, and accounting for variations in clinical expertise among practitioners. With the rapid advancement of artificial intelligence and computing power, multimodal quantitative analysis has emerged as a pivotal approach for EZ localization. Nonetheless, no research team has thus far provided a systematic elaboration of this concept. This narrative review synthesizes recent advancements across four key dimensions: (1) seizure semiology quantification using deep learning and computer vision to analyze behavioral patterns; (2) structural neuroimaging leveraging high-field MRI, radiomics, and AI; (3) functional imaging integrating EEG-fMRI dynamics and PET biomarkers; and (4) electrophysiological quantification encompassing source localization, intracranial EEG, and network modeling. The convergence of these complementary approaches enables comprehensive characterization of epileptogenic networks across behavioral, structural, functional, and electrophysiological domains. Despite these advancements, clinical heterogeneity, limitations in algorithmic generalizability, and barriers to data sharing hinder translation into clinical practice. Future directions emphasize personalized modeling, federated learning, and cross-modal standardization to advance data-driven localization. This integrated paradigm holds promise for overcoming qualitative limitations, reducing medical costs, and improving seizure-free outcomes.

Development and validation of deep learning model for detection of obstructive coronary artery disease in patients with acute chest pain: a multi-center study.

Kim JY, Park J, Lee KH, Lee JW, Park J, Kim PK, Han K, Baek SE, Im DJ, Choi BW, Hur J

PubMed | Aug 14 2025
This study aimed to develop and validate a deep learning (DL) model to detect obstructive coronary artery disease (CAD, ≥ 50% stenosis) in coronary CT angiography (CCTA) among patients presenting to the emergency department (ED) with acute chest pain. The training dataset included 378 patients with acute chest pain who underwent CCTA (10,060 curved multiplanar reconstruction [MPR] images) from a single-center ED between January 2015 and December 2022. The external validation dataset included 298 patients from 3 ED centers between January 2021 and December 2022. A DL model based on You Only Look Once v4, requires manual preprocessing for curved MPR extraction and was developed using 15 manually preprocessed MPR images per major coronary artery. Model performance was evaluated per artery and per patient. The training dataset included 378 patients (mean age 61.3 ± 12.2 years, 58.2% men); the external dataset included 298 patients (mean age 58.3 ± 13.8 years, 54.6% men). Obstructive CAD prevalence in the external dataset was 27.5% (82/298). The DL model achieved per-artery sensitivity, specificity, positive predictive value, negative predictive value (NPV), and area under the curve (AUC) of 92.7%, 89.9%, 62.6%, 98.5%, and 0.919, respectively; and per-patient values of 93.3%, 80.7%, 67.7%, 96.6%, and 0.871, respectively. The DL model demonstrated high sensitivity and NPV for identifying obstructive CAD in patients with acute chest pain undergoing CCTA, indicating its potential utility in aiding ED physicians in CAD detection.
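Rolling per-artery predictions up to a per-patient call is the natural aggregation here; a common rule (assumed for illustration, since the paper reports the two levels separately without stating the rule) is to flag the patient if any major artery is predicted obstructive:

```python
def patient_level(artery_preds):
    """Flag a patient positive if any major coronary artery is
    predicted to have ≥50% stenosis (an assumed aggregation rule)."""
    return {pid: int(any(preds)) for pid, preds in artery_preds.items()}

# Toy per-artery predictions (e.g. LAD, LCx, RCA) for three patients
preds = {"p1": [0, 0, 1], "p2": [0, 0, 0], "p3": [1, 1, 0]}
flags = patient_level(preds)
```

An "any positive" rule favors sensitivity at the patient level, consistent with the model's intended rule-out role (high NPV) in the ED.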
