
Automatic bone age assessment: a Turkish population study.

Öztürk S, Yüce M, Pamuk GG, Varlık C, Cimilli AT, Atay M

PubMed | Sep 8, 2025
Established methods for bone age assessment (BAA), such as the Greulich and Pyle atlas, suffer from variability due to population differences and observer discrepancies. Although automated BAA offers speed and consistency, limited research exists on its performance across different populations using deep learning. This study examines deep learning algorithms on the Turkish population to enhance bone age models by understanding demographic influences. We analyzed reports from Bağcılar Hospital's Health Information Management System between April 2012 and September 2023 using "bone age" as a keyword. Patient images were re-evaluated by an experienced radiologist and anonymized. A total of 2,730 hand radiographs from Bağcılar Hospital (Turkish population), 12,572 from the Radiological Society of North America (RSNA), and 6,185 from the Radiological Hand Pose Estimation (RHPE) public datasets were collected, along with corresponding bone ages and gender information. A set of 546 radiographs (273 from Bağcılar, 273 from the public datasets) was randomly held out as an internal test set, stratified by bone age; the remaining data were used for training and validation. BAAs were generated using a modified InceptionV3 model on 500 × 500-pixel images, selecting the model with the lowest mean absolute error (MAE) on the validation set. Three models were trained and tested according to dataset origin: Bağcılar (Turkish), public (RSNA-RHPE), and Combined. On the internal test set, the Combined model estimated bone age within 6, 12, 18, and 24 months in 44%, 73%, 87%, and 94% of cases, respectively. The MAE was 9.2 months on the overall internal test set, 7 months on the public test set, and 11.5 months on the Bağcılar internal test data. The Bağcılar-only model had an MAE of 12.7 months on the Bağcılar internal test data. Despite less training data, there was no significant difference between the Combined and Bağcılar models on the Bağcılar dataset (P > 0.05). The public model showed an MAE of 16.5 months on the Bağcılar dataset, significantly worse than the other models (P < 0.05). We developed an automatic BAA model that includes the Turkish population, one of the few such studies using deep learning. Despite challenges from population differences and data heterogeneity, these models can be used effectively in various clinical settings. Model accuracy can improve over time with cumulative data, and publicly available datasets may refine it further. Our approach enables more accurate and efficient BAAs, supporting healthcare professionals where traditional methods are time-consuming and variable. The developed automated BAA model for the Turkish population offers a reliable and efficient alternative to traditional methods. By utilizing deep learning with diverse datasets from Bağcılar Hospital and publicly available sources, the model minimizes assessment time and reduces variability. This advancement enhances clinical decision-making, supports standardized BAA practices, and improves patient care in various healthcare settings.
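
The modeling recipe here (an InceptionV3 backbone adapted for regression on 500 × 500 inputs, checkpoint selection by validation MAE) is compact enough to sketch. Below is a minimal Keras version, assuming an ImageNet-pretrained backbone and a gender input; the head sizes are illustrative, not taken from the paper.

```python
# Minimal sketch of a modified InceptionV3 bone-age regressor (Keras).
# Assumptions: ImageNet-pretrained backbone, gender as an auxiliary input,
# MAE loss; layer sizes are illustrative, not from the paper.
import tensorflow as tf
from tensorflow.keras import layers, Model

image_in = layers.Input(shape=(500, 500, 3), name="radiograph")
gender_in = layers.Input(shape=(1,), name="gender")  # e.g., 0 = female, 1 = male

backbone = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_tensor=image_in
)
x = layers.GlobalAveragePooling2D()(backbone.output)
g = layers.Dense(32, activation="relu")(gender_in)
x = layers.Concatenate()([x, g])
x = layers.Dense(256, activation="relu")(x)
bone_age = layers.Dense(1, name="bone_age_months")(x)  # regression target in months

model = Model([image_in, gender_in], bone_age)
model.compile(optimizer="adam", loss="mae")  # keep the checkpoint with lowest validation MAE
```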

Predicting Breath Hold Task Compliance From Head Motion.

Weng TB, Porwal G, Srinivasan D, Inglis B, Rodriguez S, Jacobs DR, Schreiner PJ, Sorond FA, Sidney S, Lewis C, Launer L, Erus G, Nasrallah IM, Bryan RN, Dula AN

PubMed | Sep 8, 2025
Cerebrovascular reactivity (CVR) reflects changes in cerebral blood flow in response to an acute stimulus and is reflective of the brain's ability to match blood flow to demand. Functional MRI with a breath-hold task can be used to elicit this vasoactive response, but data validity hinges on subject compliance. Determining breath-hold compliance often requires external monitoring equipment. The aim of this study was to develop a non-invasive, data-driven quality filter for breath-hold compliance using only measurements of head motion during imaging. Study type: prospective cohort. Population: healthy middle-aged subjects enrolled in the Coronary Artery Risk Development in Young Adults Brain MRI Study (longitudinal data; N = 1141, 47.1% female). Field strength/sequence: 3.0 Tesla gradient-echo MRI. Manual labeling of respiratory-belt data was used to determine breath-hold compliance during the MRI scan. A model to estimate the probability of non-compliance with the breath-hold task was developed using measures of head motion. The model's ability to identify scans in which the participant was not performing the breath hold was summarized using performance metrics including sensitivity (recall), specificity, precision, and F1 score. The model was applied to additional unmarked data to assess effects on population measures of CVR. Sensitivity analysis revealed that excluding non-compliant scans using the developed model did not affect median CVR (median [q1, q3] = 1.32 [0.96, 1.71]) compared with manual review of respiratory-belt data (1.33 [1.02, 1.74]), while reducing the interquartile range. The final model, a multi-layer perceptron machine learning classifier, estimated non-compliance with an accuracy of 76.9% and an F1 score of 69.5%, indicating a moderate balance between precision and recall for identifying scans in which the participant was not compliant. The developed model provides the probability of non-compliance with a breath-hold task, which could later be used as a quality filter or included in statistical analyses. TECHNICAL EFFICACY: Stage 3.
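
As a rough illustration of this kind of compliance filter, the sketch below trains a small multi-layer perceptron on per-scan head-motion summaries with scikit-learn. The feature set and dimensions are assumptions for demonstration, not the study's actual inputs.

```python
# Illustrative breath-hold compliance classifier from head-motion summaries.
# The 12 features and labels below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1141, 12))    # per-scan motion summaries (e.g., mean/max displacement per task block)
y = rng.integers(0, 2, size=1141)  # 1 = non-compliant, from respiratory-belt labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
clf.fit(X_tr, y_tr)

p_noncompliant = clf.predict_proba(X_te)[:, 1]  # probability usable as a quality filter
print(accuracy_score(y_te, clf.predict(X_te)), f1_score(y_te, clf.predict(X_te)))
```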

Evaluating artificial intelligence for a focal nodular hyperplasia diagnosis using magnetic resonance imaging: preliminary findings.

Kantarcı M, Kızılgöz V, Terzi R, Kılıç AE, Kabalcı H, Durmaz Ö, Tokgöz N, Harman M, Sağır Kahraman A, Avanaz A, Aydın S, Elpek GÖ, Yazol M, Aydınlı B

PubMed | Sep 8, 2025
This study aimed to evaluate the effectiveness of artificial intelligence (AI) in diagnosing focal nodular hyperplasia (FNH) of the liver using magnetic resonance imaging (MRI) and compare its performance with that of radiologists. In the first phase of the study, the MRIs of 60 patients (30 patients with FNH and 30 patients with no lesions or lesions other than FNH) were processed using a segmentation program and introduced to an AI model. After the learning process, the MRIs of 42 different patients that the AI model had no experience with were introduced to the system. In addition, a radiology resident and a radiology specialist evaluated the same patients using the same MR sequences. Sensitivity and specificity values were obtained from all three reviews. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the AI model were found to be 0.769, 0.966, 0.909, and 0.903, respectively. The sensitivity and specificity values were higher than those of the radiology resident and lower than those of the radiology specialist. The results of the specialist versus the AI model revealed a good agreement level, with a kappa (κ) value of 0.777. For the diagnosis of FNH, the sensitivity, specificity, PPV, and NPV of the AI model were higher than those of the radiology resident and lower than those of the radiology specialist. With additional studies focused on other specific liver lesions, AI models are expected to be able to diagnose each liver lesion with high accuracy in the future. AI is being studied to provide assisted or automated interpretation of radiological images with accurate and reproducible imaging diagnoses.
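
For readers who want to see where these operating characteristics come from, the snippet below computes them from a 2 × 2 confusion matrix. The counts are hypothetical, chosen only to be consistent with the reported values on the 42-patient test set.

```python
# Deriving sensitivity, specificity, PPV, and NPV from a 2x2 confusion matrix.
# Counts are hypothetical placeholders consistent with the abstract's metrics.
tp, fn, fp, tn = 10, 3, 1, 28  # hypothetical breakdown of the 42 test patients

sensitivity = tp / (tp + fn)  # 10/13 ~ 0.769
specificity = tn / (tn + fp)  # 28/29 ~ 0.966
ppv = tp / (tp + fp)          # 10/11 ~ 0.909
npv = tn / (tn + fn)          # 28/31 ~ 0.903
print(sensitivity, specificity, ppv, npv)
```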

Deep learning for named entity recognition in Turkish radiology reports.

Abdullahi AA, Ganiz MC, Koç U, Gökhan MB, Aydın C, Özdemir AB

PubMed | Sep 8, 2025
The primary objective of this research is to enhance the accuracy and efficiency of information extraction from radiology reports. To this end, the study develops and evaluates a deep learning framework for named entity recognition (NER). We used a synthetic dataset of 1,056 Turkish radiology reports created and labeled by the radiologists in our research team. Due to privacy concerns, actual patient data could not be used; however, the synthetic reports closely mimic genuine reports in structure and content. We employed the four-stage DYGIE++ model for the experiments. First, we performed token encoding using four bidirectional encoder representations from transformers (BERT) models: BERTurk, BioBERTurk, PubMedBERT, and XLM-RoBERTa. Second, we introduced adaptive span enumeration that accounts for the word count of Turkish sentences. Third, we adopted span graph propagation to generate a multidirectional graph crucial for coreference resolution. Finally, we used a two-layered feed-forward neural network to classify the named entities. The experiments conducted on the labeled dataset showcase the approach's effectiveness. The study achieved an F1 score of 80.1 for the NER task, with the BioBERTurk model, which is pre-trained on Turkish Wikipedia, radiology reports, and biomedical texts, proving to be the most effective of the four BERT models used in the experiment. We show how different dataset labels affect the model's performance. The results demonstrate the model's ability to handle the intricacies of Turkish radiology reports, providing a detailed analysis of precision, recall, and F1 scores for each label. Additionally, this study compares its findings with related research in other languages. Our approach provides clinicians with more precise and comprehensive insights to improve patient care by extracting relevant information from radiology reports. This innovation in information extraction streamlines the diagnostic process and helps expedite patient treatment decisions.
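
The span-enumeration stage is the least familiar part of the DYGIE++ pipeline, so a toy version may help: every contiguous token span up to a width limit becomes a candidate entity for the classifier. The Turkish sentence and width limit below are synthetic examples, not taken from the dataset.

```python
# Toy illustration of span enumeration as used in DYGIE++-style NER:
# every contiguous token span up to max_width becomes a candidate entity.
def enumerate_spans(tokens, max_width=4):
    """Yield (start, end_exclusive, text) for all candidate spans."""
    for start in range(len(tokens)):
        for end in range(start + 1, min(start + max_width, len(tokens)) + 1):
            yield start, end, " ".join(tokens[start:end])

# Synthetic Turkish report fragment ("nodular lesion seen in the right lung lower lobe")
tokens = "sag akciger alt lobda noduler lezyon izlendi".split()
for start, end, text in enumerate_spans(tokens, max_width=3):
    print(start, end, text)
```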

Two-step approach for detecting and segmenting the second mesiobuccal canal of maxillary first molars on cone beam computed tomography (CBCT) images via artificial intelligence.

Mansour S, Anter E, Mohamed AK, Dahaba MM, Mousa A

PubMed | Sep 8, 2025
The purpose of this study was to assess the accuracy of a customized deep learning model based on a CNN and U-Net for detecting and segmenting the second mesiobuccal canal (MB2) of maxillary first molar teeth on cone beam computed tomography (CBCT) scans. CBCT scans of 37 patients were imported into 3D Slicer software to crop and segment the canals of the mesiobuccal (MB) root of the maxillary first molar. The annotated data were divided into two groups: 80% for training and validation and 20% for testing. The data were used to train the AI model in two separate steps: a classification model based on a customized CNN and a segmentation model based on U-Net. A confusion matrix and receiver-operating characteristic (ROC) analysis were used in the statistical evaluation of the classification model's results, whereas the Dice coefficient (DCE) was used to express segmentation accuracy. The F1 score, testing accuracy, recall, and precision were 0.93, 0.87, 1.0, and 0.87, respectively, for the cropped images of the MB root of maxillary first molar teeth in the testing group. The testing loss was 0.4, and the area under the curve (AUC) value was 0.57. The segmentation accuracy results were satisfactory, with a training DCE of 0.85 and a testing DCE of 0.79. MB2 in the maxillary first molar can be precisely detected and segmented in CBCT images via the developed AI algorithm. Current Controlled Trial Number NCT05340140. April 22, 2022.
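
The Dice coefficient used to score the segmentation step is straightforward to compute; a minimal NumPy version is shown below (illustrative, not the study's implementation).

```python
# Minimal Dice coefficient for binary segmentation masks (NumPy); illustrative.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between a predicted mask and an expert-annotated mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# e.g., dice(model_mask, expert_mask) -> 0.79 corresponds to the reported testing DCE
```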

Intraoperative 2D/3D Registration via Spherical Similarity Learning and Inference-Time Differentiable Levenberg-Marquardt Optimization

Minheng Chen, Youyong Kong

arXiv preprint | Sep 8, 2025
Intraoperative 2D/3D registration aligns preoperative 3D volumes with real-time 2D radiographs, enabling accurate localization of instruments and implants. A recent fully differentiable similarity learning framework approximates geodesic distances on SE(3), expanding the capture range of registration and mitigating the effects of substantial disturbances, but existing Euclidean approximations distort manifold structure and slow convergence. To address these limitations, we explore similarity learning in non-Euclidean spherical feature spaces to better capture and fit complex manifold structure. We extract feature embeddings using a CNN-Transformer encoder, project them into spherical space, and approximate their geodesic distances with Riemannian distances in the bi-invariant SO(4) space. This enables a more expressive and geometrically consistent deep similarity metric, enhancing the ability to distinguish subtle pose differences. During inference, we replace gradient descent with fully differentiable Levenberg-Marquardt optimization to accelerate convergence. Experiments on real and synthetic datasets show superior accuracy in both patient-specific and patient-agnostic scenarios.
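
A simplified version of the spherical similarity idea can be written in a few lines: project the two embeddings onto the unit hypersphere and take the arc length between them as the distance. The PyTorch sketch below is a stand-in for intuition only; the paper's metric operates on Riemannian distances in the bi-invariant SO(4) space, and the embedding dimensions here are invented.

```python
# Sketch of spherical similarity: normalize embeddings onto the unit sphere,
# then use the arc length (geodesic) between them as the distance.
import torch
import torch.nn.functional as F

def spherical_geodesic(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Geodesic distance between embeddings after projection to the unit sphere."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    cos = (a * b).sum(-1).clamp(-1 + 1e-7, 1 - 1e-7)  # clamp for numerical safety
    return torch.arccos(cos)  # arc length on the sphere

emb_fixed = torch.randn(8, 256)   # e.g., CNN-Transformer features of the 2D radiograph
emb_moving = torch.randn(8, 256)  # features rendered from the candidate 3D pose
print(spherical_geodesic(emb_fixed, emb_moving))
```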

Radiologist-AI Collaboration for Ischemia Diagnosis in Small Bowel Obstruction: Multicentric Development and External Validation of a Multimodal Deep Learning Model

Vanderbecq, Q., Xia, W. F., Chouzenoux, E., Pesquet, J.-C., Zins, M., Wagner, M.

medRxiv preprint | Sep 8, 2025
Purpose: To develop and externally validate a multimodal AI model for detecting ischemia complicating small-bowel obstruction (SBO). Methods: We combined 3D CT data with routine laboratory markers (C-reactive protein, neutrophil count) and, optionally, radiology report text. From two centers, 1,350 CT examinations were curated; 771 confirmed SBO scans were used for model development with patient-level splits. Ischemia labels were defined by surgical confirmation within 24 hours of imaging. Models (MViT, ResNet-101, DaViT) were trained as unimodal and multimodal variants. External testing used 66 independent cases from a third center. Two radiologists (attending, resident) read the test set with and without AI assistance. Performance was assessed using AUC, sensitivity, specificity, and 95% bootstrap confidence intervals; predictions included a confidence score. Results: The image-plus-laboratory model performed best on external testing (AUC 0.69 [0.59-0.79], sensitivity 0.89 [0.76-1.00], specificity 0.44 [0.35-0.54]). Adding report text improved internal validation but did not generalize externally; the image+text and full multimodal variants did not exceed image+laboratory performance. Without AI, the attending outperformed the resident (AUC 0.745 [0.617-0.845] vs. 0.706 [0.581-0.818]); with AI, both improved, to 0.752 [0.637-0.853] for the attending and 0.752 [0.629-0.867] for the resident, rising to 0.750 [0.631-0.839] and 0.773 [0.657-0.867] with confidence display; differences were not statistically significant. Conclusion: A multimodal AI that combines CT images with routine laboratory markers outperforms single-modality approaches and boosts radiologist readers' performance, notably for the junior reader, supporting earlier, more consistent decisions within the first 24 hours. Key Points: A multimodal artificial intelligence (AI) model that combines CT images with laboratory markers detected ischemia in small-bowel obstruction with AUC 0.69 (95% CI 0.59-0.79) and sensitivity 0.89 (0.76-1.00) on external testing, outperforming single-modality models. Adding report text did not generalize across sites: the image+text model fell from AUC 0.82 (internal) to 0.53 (external), and adding text to image+biology left external AUC unchanged (0.69) with similar specificity (0.43-0.44). With AI assistance, both junior and senior readers improved; the junior's AUC rose from 0.71 to 0.77, reaching senior-level performance. Summary Statement: A multicentric AI model combining CT and routine laboratory data (CRP and neutrophilia) improved radiologists' detection of ischemia in small-bowel obstruction. This tool supports earlier decision-making within the first 24 hours.
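
The winning image-plus-laboratory configuration amounts to late fusion: an imaging embedding concatenated with the two lab markers ahead of a classification head. The PyTorch sketch below shows that pattern; the dimensions, backbone output, and head are assumptions for illustration, not the authors' architecture.

```python
# Hedged sketch of image + laboratory late fusion: a 3D CNN/transformer
# embedding is concatenated with routine lab markers before the ischemia head.
import torch
import torch.nn as nn

class ImageLabFusion(nn.Module):
    def __init__(self, img_dim: int = 512, n_labs: int = 2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(img_dim + n_labs, 64), nn.ReLU(),
            nn.Linear(64, 1),  # logit for ischemia
        )

    def forward(self, img_feat, labs):
        return self.head(torch.cat([img_feat, labs], dim=-1))

model = ImageLabFusion()
logit = model(torch.randn(4, 512), torch.tensor([[12.0, 9.1]] * 4))  # CRP mg/L, neutrophils 10^9/L
prob = torch.sigmoid(logit)  # the confidence score shown alongside predictions
```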

Predicting Rejection Risk in Heart Transplantation: An Integrated Clinical-Histopathologic Framework for Personalized Post-Transplant Care

Kim, D. D., Madabhushi, A., Margulies, K. B., Peyster, E. G.

medRxiv preprint | Sep 8, 2025
Background: Cardiac allograft rejection (CAR) remains the leading cause of early graft failure after heart transplantation (HT). Current diagnostics, including histologic grading of endomyocardial biopsy (EMB) and blood-based assays, lack accurate predictive power for future CAR risk. We developed a predictive model integrating routine clinical data with quantitative morphologic features extracted from routine EMBs to demonstrate the precision-medicine potential of mining existing data sources in post-HT care. Methods: In a retrospective cohort of 484 HT recipients with 1,188 EMB encounters within 6 months post-transplant, we extracted 370 quantitative pathology features describing lymphocyte infiltration and stromal architecture from digitized H&E-stained slides. Longitudinal clinical data comprising 268 variables, including lab values, immunosuppression records, and prior rejection history, were aggregated per patient. Using the XGBoost algorithm with rigorous cross-validation, we compared models based on four different data sources: clinical-only, morphology-only, cross-sectional-only, and fully integrated longitudinal data. The top predictors informed the derivation of a simplified Integrated Rejection Risk Index (IRRI), which relies on just 4 clinical and 4 morphology risk factors. Model performance was evaluated by AUROC, AUPRC, and time-to-event hazard ratios. Results: The fully integrated longitudinal model achieved superior predictive accuracy (AUROC 0.86, AUPRC 0.74). IRRI stratified patients into risk categories with distinct future CAR hazards: high-risk patients showed a markedly increased CAR risk (HR = 6.15, 95% CI: 4.17-9.09), while low-risk patients had significantly reduced risk (HR = 0.52, 95% CI: 0.33-0.84). This performance exceeded models based on only cross-sectional or single-domain data, demonstrating the value of multi-modal, temporal data integration. Conclusions: By integrating longitudinal clinical and biopsy morphologic features, IRRI provides a scalable, interpretable tool for proactive CAR risk assessment. This precision-based approach could support risk-adaptive surveillance and immunosuppression management strategies, offering a promising pathway toward safer, more personalized post-HT care with the potential to reduce unnecessary procedures and improve outcomes.

Clinical Perspective. What is new?
- Current tools for cardiac allograft monitoring detect rejection only after it occurs and are not designed to forecast future risk. This leads to missed opportunities for early intervention, avoidable patient injury, unnecessary testing, and inefficiencies in care.
- We developed a machine learning-based risk index that integrates clinical features, quantitative biopsy morphology, and longitudinal temporal trends to create a robust predictive framework.
- The Integrated Rejection Risk Index (IRRI) provides highly accurate prediction of future allograft rejection, identifying both high- and low-risk patients up to 90 days in advance, a capability entirely absent from current transplant management.

What are the clinical implications?
- Integrating quantitative histopathology with clinical data provides a more precise, individualized estimate of rejection risk in heart transplant recipients.
- This framework has the potential to guide post-transplant surveillance intensity, immunosuppressive management, and patient counseling.
- Automated biopsy analysis could be incorporated into digital pathology workflows, enabling scalable, multicenter application in real-world transplant care.
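
As a schematic of the model-development setup, the snippet below fits an XGBoost classifier on per-patient vectors that concatenate the clinical and morphology blocks and scores it by cross-validated AUROC. The feature counts mirror the abstract; the data are synthetic placeholders, and the real study uses patient-level splits and time-to-event analysis on top of this.

```python
# Illustrative XGBoost setup for rejection-risk prediction on concatenated
# clinical (268) + biopsy-morphology (370) features; data are synthetic.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(484, 268 + 370))  # one aggregated feature vector per patient
y = rng.integers(0, 2, size=484)       # future rejection within the prediction horizon

clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                    eval_metric="logloss")
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```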

Explainable Machine Learning for Estimating the Contrast Material Arrival Time in Computed Tomography Pulmonary Angiography.

Meng XP, Yu H, Pan C, Chen FM, Li X, Wang J, Hu C, Fang X

PubMed | Sep 8, 2025
To establish an explainable machine learning (ML) approach using patient-related and noncontrast chest CT-derived features to predict the contrast material arrival time (TARR) in CT pulmonary angiography (CTPA). This retrospective study included consecutive patients referred for CTPA between September 2023 and October 2024. Sixteen clinical and 17 chest CT-derived parameters were used as inputs for the ML approach, which employed recursive feature elimination for feature selection and XGBoost with SHapley Additive exPlanations (SHAP) for explainable modeling. The prediction target was abnormal TARR of the pulmonary artery (i.e., TARR <7 or >10 seconds), determined by the time to peak enhancement in the test bolus, with two models distinguishing these cases. External validation was conducted. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). A total of 666 patients (age: 70 [IQR, 59.3-78.0] years; 46.8% female) were split into training (n = 353), testing (n = 151), and external validation (n = 162) sets. Eighty-six cases (12.9%) had TARR <7 seconds, and 138 cases (20.7%) had TARR >10 seconds. The ML models exhibited good performance on their respective testing and external validation sets (AUC: 0.911 and 0.878 for TARR <7 s; 0.834 and 0.897 for TARR >10 s). SHAP analysis identified the measurements of the vena cava and pulmonary artery as key features for distinguishing abnormal TARR. The explainable ML algorithm accurately identified normal and abnormal TARR of the pulmonary artery, facilitating personalized CTPA scans.
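
The described pipeline (recursive feature elimination, XGBoost, SHAP) maps directly onto standard libraries; a hedged sketch follows, with synthetic data standing in for the 16 clinical and 17 CT-derived inputs and an arbitrary number of selected features.

```python
# Sketch of the described pipeline: RFE for feature selection, XGBoost for
# classifying abnormal TARR, SHAP for explanation. Data are synthetic.
import numpy as np
import shap
from sklearn.feature_selection import RFE
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(666, 33))    # 16 clinical + 17 noncontrast-CT features
y = rng.integers(0, 2, size=666)  # 1 = abnormal TARR (<7 s or >10 s)

selector = RFE(XGBClassifier(n_estimators=200, eval_metric="logloss"),
               n_features_to_select=10)  # 10 is an arbitrary illustrative choice
selector.fit(X, y)

model = XGBClassifier(n_estimators=200, eval_metric="logloss")
model.fit(X[:, selector.support_], y)

explainer = shap.TreeExplainer(model)                  # per-feature attributions,
shap_values = explainer.shap_values(X[:, selector.support_])  # as in the SHAP analysis
```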

AI-Driven Fetal Liver Echotexture Analysis: A New Frontier in Predicting Neonatal Insulin Imbalance.

Da Correggio KS, Santos LO, Muylaert Barroso FS, Galluzzo RN, Chaves TZL, Wangenheim AV, Onofre ASC

PubMed | Sep 8, 2025
To evaluate the performance of artificial intelligence (AI)-based models in predicting elevated neonatal insulin levels through fetal hepatic echotexture analysis. This diagnostic accuracy study analyzed ultrasound images of fetal livers from pregnancies between 37 and 42 weeks, including cases with and without gestational diabetes mellitus (GDM). Images were stored in Digital Imaging and Communications in Medicine (DICOM) format, annotated by experts, and converted to segmented masks after quality checks. A balanced dataset was created by randomly excluding overrepresented categories. AI classification models developed using the FastAI library (ResNet-18, ResNet-34, ResNet-50, EfficientNet-B0, and EfficientNet-B7) were trained to detect elevated C-peptide levels (>75th percentile) in umbilical cord blood at birth, based on fetal hepatic ultrasonographic images. Of 2,339 ultrasound images, 606 were excluded due to poor quality, leaving 1,733 images for analysis. Elevated C-peptide levels were observed in 34.3% of neonates. Among the five CNN models evaluated, EfficientNet-B0 demonstrated the highest overall performance, achieving a sensitivity of 86.5%, specificity of 82.1%, positive predictive value (PPV) of 83.0%, negative predictive value (NPV) of 85.7%, accuracy of 84.3%, and an area under the ROC curve (AUC) of 0.83. AI-based analysis of fetal liver echotexture via ultrasound effectively predicted elevated neonatal C-peptide levels, offering a promising non-invasive method for detecting insulin imbalance in newborns.
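
Training one of these classifiers in FastAI takes only a few lines; a minimal sketch follows, assuming a timm-provided "efficientnet_b0" backbone and a two-folder dataset layout ("elevated"/"normal") that is invented for illustration.

```python
# Minimal FastAI sketch for the fetal-liver C-peptide classifier.
# Assumptions: timm provides 'efficientnet_b0'; images sorted into
# fetal_liver_masks/elevated and fetal_liver_masks/normal (hypothetical layout).
from fastai.vision.all import (ImageDataLoaders, vision_learner,
                               accuracy, RocAucBinary)

dls = ImageDataLoaders.from_folder("fetal_liver_masks", valid_pct=0.2, seed=0)
learn = vision_learner(dls, "efficientnet_b0", metrics=[accuracy, RocAucBinary()])
learn.fine_tune(5)  # transfer learning from ImageNet weights
```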