Page 121 of 345 (3445 results)

Feasibility study of fully automatic measurement of adenoid size on lateral neck and head radiographs using deep learning.

Hao D, Tang L, Li D, Miao S, Dong C, Cui J, Gao C, Li J

PubMed · Jul 14 2025
The objective and reliable quantification of adenoid size is pivotal for precise clinical diagnosis and the formulation of effective treatment strategies. Conventional manual measurement techniques, however, are often labor-intensive and time-consuming. We aimed to develop and validate a fully automated system for measuring adenoid size on lateral head and neck radiographs using deep learning (DL). In this retrospective study, we analyzed 711 lateral head and neck radiographs collected from two centers between February and July 2023. A DL-based adenoid size measurement system was developed based on Fujioka's method, employing the RTMDet and RTMPose networks for accurate landmark detection and applying mathematical formulas to determine adenoid size. To evaluate the consistency and reliability of the system, we used the intra-class correlation coefficient (ICC), mean absolute difference (MAD), and Bland-Altman plots as key assessment metrics. The system exhibited high reliability in predicting adenoid, nasopharynx, and adenoid-nasopharyngeal ratio measurements, showing strong agreement with the reference standard. The ICC for adenoid measurements was 0.902 [95% CI, 0.872-0.925], with a MAD of 1.189 and a root mean square (RMS) difference of 1.974. For nasopharynx measurements, the ICC was 0.868 [95% CI, 0.828-0.899], with a MAD of 1.671 and an RMS of 1.916. The adenoid-nasopharyngeal ratio measurements yielded an ICC of 0.911 [95% CI, 0.883-0.932], a MAD of 0.054, and an RMS of 0.076. The developed system effectively automates measurement of the adenoid, nasopharynx, and adenoid-nasopharyngeal ratio on lateral head and neck radiographs with high reliability.
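The agreement statistics this abstract reports (MAD, RMS, Bland-Altman) can be computed directly from paired automatic and manual measurements. A minimal sketch in Python, assuming simple paired arrays; the function name and the 1.96-sigma limits of agreement are illustrative, and the ICC is omitted since it requires a variance-components model:

```python
import numpy as np

def agreement_metrics(auto, manual):
    """MAD, RMS difference, bias, and Bland-Altman 95% limits of
    agreement between automatic and manual measurements."""
    auto = np.asarray(auto, float)
    manual = np.asarray(manual, float)
    diff = auto - manual
    mad = float(np.mean(np.abs(diff)))        # mean absolute difference
    rms = float(np.sqrt(np.mean(diff ** 2)))  # root mean square difference
    bias = float(diff.mean())
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    return mad, rms, bias, loa
```

A Bland-Altman plot is then just the pairwise means on the x-axis against `diff` on the y-axis, with horizontal lines at `bias` and the two `loa` values.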

Deep Learning-Based Prediction for Bone Cement Leakage During Percutaneous Kyphoplasty Using Preoperative Computed Tomography: Model Development and Validation.

Chen R, Wang T, Liu X, Xi Y, Liu D, Xie T, Wang A, Fan N, Yuan S, Du P, Jiao S, Zhang Y, Zang L

PubMed · Jul 14 2025
Retrospective study. To develop a deep learning (DL) model to predict bone cement leakage (BCL) subtypes during percutaneous kyphoplasty (PKP) using preoperative computed tomography (CT), and to use multicenter data to evaluate the effectiveness and generalizability of the model. DL excels at automatically extracting features from medical images; however, models that can predict BCL subtypes from preoperative images are lacking. This study included an internal dataset for DL model training, validation, and testing, as well as an external dataset for additional model testing. Our model integrated a segment localization module, based on vertebral segmentation via a three-dimensional (3D) U-Net, with a classification module based on 3D ResNet-50. Vertebral level mismatch rates were calculated, and confusion matrices were used to compare the performance of the DL model with that of spine surgeons in predicting BCL subtypes. Furthermore, the simple Cohen's kappa coefficient was used to assess the reliability of the spine surgeons and the DL model against the reference standard. A total of 901 patients with 997 eligible segments were included in the internal dataset. The model demonstrated a vertebral segment identification accuracy of 96.9%. It also showed high area under the curve (AUC) values of 0.734-0.831 and sensitivities of 0.649-0.900 for BCL prediction in the internal dataset. Similar favorable AUC values of 0.709-0.818 and sensitivities of 0.706-0.857 were observed in the external dataset, indicating the stability and generalizability of the model. Moreover, the model outperformed nonexpert spine surgeons in predicting BCL subtypes, except for type II. The model achieved satisfactory accuracy, reliability, generalizability, and interpretability in predicting BCL subtypes, outperforming nonexpert spine surgeons.
This study offers valuable insights for assessing osteoporotic vertebral compression fractures, thereby aiding preoperative surgical decision-making. Level of Evidence: 3.
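The abstract above assesses rater reliability with the simple (unweighted) Cohen's kappa. A minimal sketch of that coefficient, assuming two parallel vectors of categorical ratings; the function name is hypothetical and the paper's exact computation may differ:

```python
import numpy as np

def cohens_kappa(rater_a, rater_b, labels):
    """Simple (unweighted) Cohen's kappa: chance-corrected agreement
    between two sets of categorical ratings."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    po = float(np.mean(a == b))  # observed agreement
    pe = sum(float(np.mean(a == l)) * float(np.mean(b == l))
             for l in labels)    # agreement expected by chance
    return (po - pe) / (1 - pe)
```

Kappa is 1 for perfect agreement and 0 when observed agreement equals what chance alone would produce.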

Digitalization of Prison Records Supports Artificial Intelligence Application.

Whitford WG

PubMed · Jul 14 2025
Artificial intelligence (AI)-empowered data processing tools improve our ability to assess, measure, and enhance medical interventions. AI-based tools automate the extraction of data from histories, test results, imaging, prescriptions, and treatment outcomes, and transform them into unified, accessible records. They are powerful in converting unstructured data such as clinical notes, magnetic resonance images, and electroencephalograms into structured, actionable formats, for example by extracting and classifying diseases, symptoms, medications, treatments, and dates from even incomplete and fragmented clinical notes, pathology reports, images, and histological markers. Especially because the demographics within correctional facilities diverge greatly from the general population, the adoption of electronic health records and AI-enabled data processing will play a crucial role in improving disease detection, treatment management, and the overall efficiency of health care within prison systems.

Pathological omics prediction of early and advanced colon cancer based on artificial intelligence model.

Wang Z, Wu Y, Li Y, Wang Q, Yi H, Shi H, Sun X, Liu C, Wang K

PubMed · Jul 14 2025
Artificial intelligence (AI) models based on pathological slides have great potential to assist pathologists in disease diagnosis and have become an important research direction in the field of medical image analysis. The aim of this study was to develop an AI model based on whole-slide images to predict the stage of colon cancer. In this study, a total of 100 pathological slides from colon cancer patients were collected as the training set, and 421 pathological slides of colon cancer were downloaded from The Cancer Genome Atlas (TCGA) database as the external validation set. CellProfiler and CLAM tools were used to extract pathological features, and machine learning and deep learning algorithms were used to construct prediction models. The area under the curve (AUC) of the best machine learning model was 0.78 in the internal test set and 0.68 in the external test set. The AUC of the deep learning model was 0.889 in the internal test set, with an accuracy of 0.854, and 0.700 in the external test set. The prediction model shows potential to generalize as part of a combined pathological-omics diagnostic workflow. Compared with machine learning, deep learning achieved better image recognition and higher accuracy, resulting in better overall model performance.
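The AUC values quoted throughout this abstract can be computed without plotting an ROC curve, via the rank-sum (Mann-Whitney) identity: AUC is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A hedged sketch, assuming binary labels and continuous scores (`auc_score` is an illustrative name, not code from the paper):

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC as the fraction of (positive, negative) pairs the model
    ranks correctly, counting tied scores as half-correct."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, float)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```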

Is a score enough? Pitfalls and solutions for AI severity scores.

Bernstein MH, van Assen M, Bruno MA, Krupinski EA, De Cecco C, Baird GL

PubMed · Jul 14 2025
Severity scores, which often refer to the likelihood or probability of a pathology, are commonly provided by artificial intelligence (AI) tools in radiology. However, little attention has been given to how these AI scores are used, and there is a lack of transparency into how they are generated. In this comment, we draw on key principles from psychological science and statistics to elucidate six human-factors limitations of AI scores that undermine their utility: (1) variability across AI systems; (2) variability within AI systems; (3) variability between radiologists; (4) variability within radiologists; (5) unknown distribution of AI scores; and (6) perceptual challenges. We hypothesize that these limitations can be mitigated by providing the false discovery rate and false omission rate for each score used as a threshold. We discuss how this hypothesis could be empirically tested. KEY POINTS: The radiologist-AI interaction has not been given sufficient attention. The utility of AI scores is limited by six key human-factors limitations. We propose a hypothesis for how to mitigate these limitations by using the false discovery rate and false omission rate.
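The mitigation proposed above rests on two confusion-matrix quantities evaluated at each candidate score threshold. A minimal sketch of those definitions (the function name and argument order are illustrative):

```python
def threshold_rates(tp, fp, tn, fn):
    """False discovery rate and false omission rate for the confusion
    counts obtained by thresholding an AI severity score."""
    fdr = fp / (fp + tp)   # among flagged cases, fraction actually negative
    fomr = fn / (fn + tn)  # among non-flagged cases, fraction actually positive
    return fdr, fomr
```

Unlike sensitivity and specificity, both rates condition on the AI's output rather than the ground truth, which is the quantity a radiologist reading a flagged (or unflagged) study actually needs.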

Human-centered explainability evaluation in clinical decision-making: a critical review of the literature.

Bauer JM, Michalowski M

PubMed · Jul 14 2025
This review paper comprehensively summarizes healthcare provider (HCP) evaluation of explanations produced by explainable artificial intelligence methods to support point-of-care, patient-specific, clinical decision-making (CDM) within medical settings. It highlights the critical need to incorporate human-centered (HCP) evaluation approaches based on providers' CDM needs, processes, and goals. The review was conducted in the Ovid Medline and Scopus databases, following the Institute of Medicine's methodological standards and PRISMA guidelines. An individual study appraisal was conducted using design-specific appraisal tools. MaxQDA software was used for data extraction and evidence table procedures. Of the 2673 unique records retrieved, 25 records were included in the final sample. Studies were excluded if they did not meet this review's definitions of HCP evaluation (1156), healthcare use (995), explainable AI (211), and primary research (285), and if they were not available in English (1). The sample focused primarily on physicians and diagnostic imaging use cases and revealed wide-ranging evaluation measures. The synthesis of sampled studies suggests a potential common measure of clinical explainability with three indicators: interpretability, fidelity, and clinical value. There is an opportunity to extend the current model-centered evaluation approaches to incorporate human-centered metrics, supporting the transition into practice. Future research should aim to clarify and expand key concepts in HCP evaluation, propose a comprehensive evaluation model positioned in current theoretical knowledge, and develop a valid instrument to support comparisons.

The MSA Atrophy Index (MSA-AI): An Imaging Marker for Diagnosis and Clinical Progression in Multiple System Atrophy.

Trujillo P, Hett K, Cooper A, Brown AE, Iregui J, Donahue MJ, Landman ME, Biaggioni I, Bradbury M, Wong C, Stamler D, Claassen DO

PubMed · Jul 14 2025
Reliable biomarkers are essential for tracking disease progression and advancing treatments for multiple system atrophy (MSA). In this study, we propose the MSA Atrophy Index (MSA-AI), a novel composite volumetric measure to distinguish MSA from related disorders and monitor disease progression. Seventeen participants with an initial diagnosis of probable MSA were enrolled in the longitudinal bioMUSE study and underwent 3T MRI, biofluid analysis, and clinical assessments at baseline, 6, and 12 months. Final diagnoses were determined after 12 months using clinical progression, imaging, and fluid biomarkers. Ten participants retained an MSA diagnosis, while five were reclassified as either Parkinson disease (PD, n = 4) or dementia with Lewy bodies (DLB, n = 1). Cross-sectional comparisons included additional MSA cases (n = 26), healthy controls (n = 23), pure autonomic failure (n = 23), PD (n = 56), and DLB (n = 8). Lentiform nucleus, cerebellum, and brainstem volumes were extracted using deep learning-based segmentation. Z-scores were computed using a normative dataset (n = 469) and integrated into the MSA-AI. Group differences were tested with linear regression; longitudinal changes and clinical correlations were assessed using mixed-effects models and Spearman correlations. MSA patients exhibited significantly lower MSA-AI scores compared to all other diagnostic groups (p < 0.001). The MSA-AI effectively distinguished MSA from related synucleinopathies, correlated with baseline clinical severity (ρ = -0.57, p < 0.001), and predicted disease progression (ρ = -0.55, p = 0.03). Longitudinal reductions in MSA-AI were associated with worsening clinical scores over 12 months (ρ = -0.61, p = 0.01). The MSA-AI is a promising imaging biomarker for diagnosis and monitoring disease progression in MSA. These findings require validation in larger, independent cohorts.
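The MSA-AI is described as a composite of regional brain volumes z-scored against a normative dataset. A hedged sketch of one plausible construction, assuming equal weighting across regions; the published weighting and sign conventions may differ:

```python
import numpy as np

def composite_atrophy_index(volumes, norm_mean, norm_std, weights=None):
    """Weighted average of per-region volume z-scores against a
    normative cohort; lower values indicate more atrophy."""
    z = (np.asarray(volumes, float) - np.asarray(norm_mean, float)) \
        / np.asarray(norm_std, float)
    w = np.ones_like(z) if weights is None else np.asarray(weights, float)
    return float(np.sum(w * z) / np.sum(w))
```

For example, with lentiform, cerebellar, and brainstem volumes as the three regions, a patient whose volumes sit well below the normative means yields a strongly negative index, consistent with the negative correlations with clinical severity reported above.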

A radiomics-clinical predictive model for difficult laparoscopic cholecystectomy based on preoperative CT imaging: a retrospective single center study.

Sun RT, Li CL, Jiang YM, Hao AY, Liu K, Li K, Tan B, Yang XN, Cui JF, Bai WY, Hu WY, Cao JY, Qu C

PubMed · Jul 14 2025
Accurately identifying difficult laparoscopic cholecystectomy (DLC) preoperatively remains a clinical challenge. Previous studies utilizing clinical variables or morphological imaging markers have demonstrated suboptimal predictive performance. This study aims to develop an optimal radiomics-clinical model by integrating preoperative CT-based radiomics features with clinical characteristics. A retrospective analysis was conducted on 2,055 patients who underwent laparoscopic cholecystectomy (LC) for cholecystitis at our center. Preoperative CT images were processed with super-resolution reconstruction to improve consistency, and high-throughput radiomic features were extracted from the gallbladder wall region. A combination of radiomic and clinical features was selected using the Boruta-LASSO algorithm. Predictive models were constructed using six machine learning algorithms and validated, with model performance evaluated based on the area under the curve (AUC), accuracy, Brier score, and decision curve analysis (DCA) to identify the optimal model. Model interpretability was further enhanced using the Shapley additive explanations (SHAP) method. The Boruta-LASSO algorithm identified 10 key radiomic and clinical features for model construction, including the Rad-Score, gallbladder wall thickness, fibrinogen, C-reactive protein, and low-density lipoprotein cholesterol. Among the six machine learning models developed, the radiomics-clinical model based on the random forest algorithm demonstrated the best predictive performance, with an AUC of 0.938 in the training cohort and 0.874 in the validation cohort. The Brier score, calibration curve, and DCA confirmed the superior predictive capability of this model, significantly outperforming previously published models. The SHAP analysis further visualized feature importance, enhancing model interpretability. This study developed the first radiomics-clinical random forest model for the preoperative prediction of DLC using machine learning algorithms.
This predictive model supports safer and individualized surgical planning and treatment strategies.
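Of the evaluation metrics listed above (AUC, accuracy, Brier score, DCA), the Brier score has the simplest closed form: the mean squared gap between predicted probability and observed outcome. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def brier_score(y_true, y_prob):
    """Mean squared difference between predicted probability and binary
    outcome. 0 is perfect; a constant 0.5 prediction scores 0.25."""
    y_true = np.asarray(y_true, float)
    y_prob = np.asarray(y_prob, float)
    return float(np.mean((y_prob - y_true) ** 2))
```

Because it penalizes miscalibrated probabilities and not just misrankings, the Brier score complements AUC when judging a risk model intended for surgical planning.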

Classification of Renal Lesions by Leveraging Hybrid Features from CT Images Using Machine Learning Techniques.

Kaur R, Khattar S, Singla S

PubMed · Jul 14 2025
Renal cancer is among the leading contributors to rising mortality rates globally, and its impact can be reduced by early detection and diagnosis. The classification of lesions is based mostly on their characteristics, which include varied shape and texture properties. Computed tomography (CT) imaging is a regularly used modality for the study of the renal soft tissues. Furthermore, a radiologist's capacity to assess a large corpus of CT images is limited, which can lead to misdiagnosis of kidney lesions and, in turn, to cancer progression or unnecessary chemotherapy. To address these challenges, this study presents a machine learning technique based on a novel feature vector for the automated classification of renal lesions using multi-model texture-based feature extraction. The proposed feature vector could serve as an integral component in improving the accuracy of a computer-aided diagnosis (CAD) system for identifying the texture of renal lesions and can assist physicians in providing more precise lesion interpretation. In this work, the authors employed different texture models for the analysis of CT scans to classify benign and malignant kidney lesions. Texture analysis is performed using features such as first-order statistics (FoS), spatial gray level co-occurrence matrix (SGLCM), Fourier power spectrum (FPS), statistical feature matrix (SFM), Laws' texture energy measures (TEM), gray level difference statistics (GLDS), fractal features, and the neighborhood gray tone difference matrix (NGTDM). Multiple texture models were used to quantify the renal texture patterns through image texture analysis on a selected region of interest (ROI) from the renal lesions. In addition, dimensionality reduction is employed to discover the most discriminative features for categorizing benign and malignant lesions, and a unique feature vector based on correlation-based feature selection, information gain, and gain ratio is proposed.
Different machine learning-based classifiers were employed to test the performance of the proposed feature set, with the random forest (RF) model outperforming all other techniques in distinguishing benign from malignant tumors across the performance evaluation metrics. The proposed system is validated on a dataset of 50 subjects, achieving a classification accuracy of 95.8% and outperforming other conventional models.
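Of the texture models listed in this abstract, the SGLCM family is the most commonly reimplemented. A hedged sketch of one SGLCM feature (contrast) for a horizontal pixel offset, assuming an integer-quantized ROI; the function is illustrative, not the authors' code:

```python
import numpy as np

def glcm_contrast(img, levels):
    """Contrast of the horizontal (offset (0, 1)) gray-level
    co-occurrence matrix of an integer-quantized image."""
    glcm = np.zeros((levels, levels), float)
    # count horizontally adjacent gray-level pairs
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()  # normalize to a joint probability matrix
    idx = np.arange(levels)
    return float(np.sum(glcm * (idx[:, None] - idx[None, :]) ** 2))
```

Other SGLCM features (energy, homogeneity, correlation) are different weightings of the same normalized co-occurrence matrix, so the counting step above is shared across the family.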

A hybrid learning approach for MRI-based detection of Alzheimer's disease stages using dual CNNs and ensemble classifier.

Zolfaghari S, Joudaki A, Sarbaz Y

PubMed · Jul 14 2025
Alzheimer's Disease (AD) and related dementias are significant global health issues characterized by progressive cognitive decline and memory loss. Computer-aided systems can help physicians detect AD early and accurately, enabling timely intervention and effective management. This study presents a combination of two parallel Convolutional Neural Networks (CNNs) and an ensemble learning method for classifying AD stages using Magnetic Resonance Imaging (MRI) data. Initially, the images were resized and augmented before being input into Network 1 and Network 2, which have different structures and layers for extracting important features. These features were then fused and fed into an ensemble learning classifier comprising Support Vector Machine, Random Forest, and K-Nearest Neighbors models, with hyperparameters optimized by the Grid Search Cross-Validation technique. Using the features of Network 1 and Network 2 separately, the four classes were identified with accuracies of 95.16% and 97.97%, respectively, whereas using the fused features from both networks yielded a classification accuracy of 99.06%. These findings imply the potential of the proposed hybrid approach for classifying AD stages. As the evaluation was conducted at the slice level using a Kaggle dataset, additional subject-level validation and clinical testing are required to determine its real-world applicability.
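The pipeline described above, two parallel CNN feature extractors whose outputs are fused and passed to an ensemble classifier, can be sketched at the fusion-and-voting step. A minimal illustration, assuming the predictors are any callables that return integer class labels (the paper uses fitted SVM, RF, and KNN models; the voting rule shown is a plain majority, which may differ from the authors' ensembling):

```python
import numpy as np

def fuse_and_vote(feat_a, feat_b, predictors):
    """Concatenate the feature matrices of two CNN branches, then take a
    per-sample majority vote over the ensemble's predicted labels."""
    fused = np.concatenate([feat_a, feat_b], axis=1)
    votes = np.stack([p(fused) for p in predictors])  # (n_models, n_samples)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```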
