Page 137 of 6476462 results

Jiang H, Gao H, Wang D, Zeng Q, Hao X, Cheng Z

PubMed · Sep 24 2025
Pulmonary hypertension (PH) is a progressive disorder characterized by elevated pulmonary artery pressure and increased pulmonary vascular resistance, ultimately leading to right heart failure. Early detection is critical for improving patient outcomes. The diagnosis of PH primarily relies on right heart catheterization, but its invasive nature significantly limits its clinical use. Echocardiography, as the most common noninvasive screening and diagnostic tool for PH, provides valuable patient information. This study aims to identify key PH predictors from echocardiographic parameters, laboratory tests, and demographic data using machine learning, ultimately constructing a predictive model to support early noninvasive diagnosis of PH. This study compiled comprehensive datasets comprising echocardiography measurements, clinical laboratory data, and fundamental demographic information from patients with PH and matched controls. The final analytical cohort consisted of 895 participants with 85 evaluated variables. Recursive feature elimination was used to select the most relevant echocardiographic variables, which were subsequently integrated into a composite ultrasound index using a machine learning technique, XGBoost (extreme gradient boosting). LASSO (least absolute shrinkage and selection operator) regression was applied to select potential predictive variables from the laboratory tests. The ultrasound index and the selected laboratory tests were then combined to construct a logistic regression model for the predictive diagnosis of PH. The model's performance was rigorously evaluated using receiver operating characteristic curves, calibration plots, and decision curve analysis to ensure its clinical relevance and accuracy. Both internal and external validation were used to assess the performance of the constructed model.
A total of 16 echocardiographic parameters (right atrium diameter, pulmonary artery diameter, left atrium diameter, tricuspid valve reflux degree, right ventricular diameter, E/E' [ratio of mitral valve early diastolic inflow velocity (E) to mitral annulus early diastolic velocity (E')], interventricular septal thickness, left ventricular diameter, ascending aortic diameter, left ventricular ejection fraction, left ventricular outflow tract velocity, mitral valve reflux degree, pulmonary valve outflow velocity, mitral valve inflow velocity, aortic valve reflux degree, and left ventricular posterior wall thickness) combined with 2 laboratory biomarkers (prothrombin time activity and cystatin C) were identified as optimal predictors, forming a high-performance PH prediction model. The diagnostic model demonstrated high predictive accuracy, with an area under the receiver operating characteristic curve of 0.997 in the internal validation and 0.974 in the external validation. Both calibration plots and decision curve analysis validated the model's predictive accuracy and clinical applicability, with optimal performance observed at higher risk stratification cutoffs. This model enhances early PH diagnosis through a noninvasive approach and demonstrates strong predictive accuracy. It facilitates early intervention and personalized treatment, with potential applications in broader cardiovascular disease management.
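The selection pipeline described above (recursive feature elimination, then a composite index feeding a logistic regression) can be sketched minimally. The correlation-based ranking below is a simplified stand-in for the XGBoost-based importance the study used, and the synthetic data and variable names are illustrative only:

```python
import numpy as np

def rfe_select(X, y, n_keep):
    """Minimal recursive feature elimination: rank the remaining features
    (here by absolute correlation with the label, standing in for a
    model-based importance) and drop the weakest until n_keep remain."""
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_keep:
        scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in remaining]
        remaining.pop(int(np.argmin(scores)))
    return remaining

# Synthetic stand-in for echocardiographic measurements: only the first
# two columns carry signal about the (hypothetical) PH label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 1] + 0.3 * rng.normal(size=200) > 0).astype(int)

kept = rfe_select(X, y, n_keep=2)
print(sorted(kept))
```

In the study the retained echocardiographic variables are further combined into a single ultrasound index before entering the logistic model; the sketch stops at the selection step.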

Anoch B, Parthiban L

PubMed · Sep 24 2025
Chronic kidney disease (CKD) is a progressive disease that significantly impacts global healthcare; early detection and prompt treatment are required to prevent its advancement to end-stage renal disease. Conventional diagnostic methods tend to be invasive, lengthy, and costly, creating a demand for automated, precise, and efficient solutions. This study proposes a novel technique for identifying and classifying CKD from medical images using a Convolutional Neural Network based Crow Search (CNN-based CS) algorithm. The method employs sophisticated pre-processing techniques, including Z-score standardization, min-max normalization, and robust scaling, to improve the quality of the input data. Feature selection is carried out using the chi-square test, and the Crow Search Algorithm (CSA) further optimizes the feature set to improve classification accuracy and effectiveness. The CNN architecture captures complex patterns using deep learning methods to accurately classify CKD in medical images. The model was optimized and evaluated on an open-access kidney CT scan dataset. It achieved 99.05% accuracy, 99.03% area under the receiver operating characteristic curve (AUC-ROC), and 99.01% area under the precision-recall curve (PR-AUC), along with high precision (99.04%), recall (99.02%), and F1-score (99.00%). The results show that the CNN-based CS method delivers higher accuracy and improved diagnostic precision compared with conventional machine learning techniques. By incorporating CSA for feature optimization, the approach minimizes redundancy and improves model interpretability. This makes it a promising tool for automated CKD diagnosis, contributing to the development of AI-driven medical diagnostics and providing a scalable solution for early detection and management of CKD.
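The chi-square feature-selection step can be illustrated directly: for each nonnegative feature, compare its per-class sums with the sums expected under the class priors. The toy matrix below is hypothetical, not the study's data:

```python
import numpy as np

def chi2_scores(X, y):
    """Chi-square statistic of each nonnegative feature against the class
    label: observed per-class feature sums vs. the sums expected if the
    feature were independent of the class."""
    classes = np.unique(y)
    observed = np.array([X[y == c].sum(axis=0) for c in classes])
    priors = np.array([(y == c).mean() for c in classes])
    expected = np.outer(priors, X.sum(axis=0))
    return ((observed - expected) ** 2 / expected).sum(axis=0)

# Toy data: feature 0 separates the two classes, feature 1 is constant.
X = np.array([[1.0, 5.0], [2.0, 5.0], [8.0, 5.0], [9.0, 5.0]])
y = np.array([0, 0, 1, 1])
scores = chi2_scores(X, y)
print(scores)
```

Features with the highest scores would then be passed to the CSA stage for further optimization.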

Russe MF, Reisert M, Fink A, Hohenhaus M, Nakagawa JM, Wilpert C, Simon CP, Kotter E, Urbach H, Rau A

PubMed · Sep 24 2025
To assess the performance of state-of-the-art large language models in classifying vertebral metastasis stability using the Spinal Instability Neoplastic Score (SINS) compared to human experts, and to evaluate the impact of task-specific refinement including in-context learning on their performance. This retrospective study analyzed 100 synthetic CT and MRI reports encompassing a broad range of SINS scores. Four human experts (two radiologists and two neurosurgeons) and four large language models (Mistral, Claude, GPT-4 turbo, and GPT-4o) evaluated the reports. Large language models were tested in both generic form and with task-specific refinement. Performance was assessed based on correct SINS category assignment and attributed SINS points. Human experts demonstrated high median performance in SINS classification (98.5% correct) and points calculation (92% correct), with a median point offset of 0 [0-0]. Generic large language models performed poorly with 26-63% correct category and 4-15% correct SINS points allocation. In-context learning significantly improved chatbot performance to near-human levels (96-98/100 correct for classification, 86-95/100 for scoring, no significant difference to human experts). Refined large language models performed 71-85% better in SINS points allocation. In-context learning enables state-of-the-art large language models to perform at near-human expert levels in SINS classification, offering potential for automating vertebral metastasis stability assessment. The poor performance of generic large language models highlights the importance of task-specific refinement in medical applications of artificial intelligence.
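In-context learning here means prepending worked examples to the model's prompt. A minimal sketch of such a prompt builder follows; the SINS category bands (stable 0-6, potentially unstable 7-12, unstable 13-18) are from the published score, but the example reports and wording are entirely hypothetical and not the study's actual refinement:

```python
# Hypothetical few-shot exemplars: (report excerpt, SINS points, category).
EXAMPLES = [
    ("Lytic lesion of L3 with vertebral body collapse and mechanical pain.",
     13, "unstable"),
    ("Blastic lesion of T7, no collapse, no pain, posterior elements intact.",
     4, "stable"),
]

def build_sins_prompt(report, examples=EXAMPLES):
    """Assemble an in-context-learning prompt: scoring instructions,
    worked examples, then the report to be scored."""
    parts = [
        "Score the following radiology report with the Spinal Instability "
        "Neoplastic Score (SINS) and assign a category: stable (0-6), "
        "potentially unstable (7-12), or unstable (13-18)."
    ]
    for text, points, category in examples:
        parts.append(f"Report: {text}\nSINS: {points} ({category})")
    parts.append(f"Report: {report}\nSINS:")
    return "\n\n".join(parts)

prompt = build_sins_prompt("Mixed lesion of C5 with intermittent neck pain.")
print(prompt)
```

The study's refinement presumably also encoded the six SINS components and their point values; the sketch only shows the few-shot scaffolding.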

Lee JH, Min JH, Gu K, Han S, Hwang JA, Choi SY, Song KD, Lee JE, Lee J, Moon JE, Adetyan H, Yang JD

PubMed · Sep 24 2025
To evaluate the effectiveness of open-weight large language models (LLMs) in extracting key radiological features and determining National Comprehensive Cancer Network (NCCN) resectability status from free-text radiology reports for pancreatic ductal adenocarcinoma (PDAC). Methods: Prompts were developed using 30 fictitious reports, internally validated on 100 additional fictitious reports, and tested on 200 real reports from two institutions (January 2022 to December 2023). Two radiologists established ground truth for 18 key features and resectability status. Gemma-2-27b-it and Llama-3-70b-instruct models were evaluated using recall, precision, F1-score, extraction accuracy, and overall resectability accuracy. Statistical analyses included McNemar's test and mixed-effects logistic regression. Results: In internal validation, Llama had significantly higher recall than Gemma (99% vs. 95%, p < 0.01) and slightly higher extraction accuracy (98% vs. 97%). Llama also demonstrated higher overall resectability accuracy (93% vs. 91%). In the internal test set, both models achieved 96% recall and 96% extraction accuracy. Overall resectability accuracy was 95% for Llama and 93% for Gemma. In the external test set, both models had 93% recall. Extraction accuracy was 93% for Llama and 95% for Gemma. Gemma achieved higher overall resectability accuracy (89% vs. 83%), but the difference was not statistically significant (p > 0.05). Conclusion: Open-weight models accurately extracted key radiological features and determined NCCN resectability status from free-text PDAC reports. While internal dataset performance was robust, performance on external data decreased, highlighting the need for institution-specific optimization.
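The extraction metrics can be computed over sets of (feature, value) pairs, counting a prediction as correct only on exact match. The feature names and values below are invented for illustration, not taken from the study:

```python
def extraction_metrics(gold, predicted):
    """Micro recall / precision / F1 over extracted (feature, value)
    pairs: a pair counts as a true positive only if it exactly matches
    the ground truth."""
    tp = len(gold & predicted)
    recall = tp / len(gold)
    precision = tp / len(predicted)
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return recall, precision, f1

# Hypothetical ground-truth vs. model-extracted features for one report.
gold = {("SMA contact", ">180"), ("CA contact", "none"),
        ("liver metastasis", "absent"), ("ascites", "absent")}
pred = {("SMA contact", ">180"), ("CA contact", "none"),
        ("liver metastasis", "present"), ("ascites", "absent")}
r, p, f1 = extraction_metrics(gold, pred)
print(r, p, f1)
```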

Guan S, Poujol J, Gouhier E, Touloupas C, Delpla A, Boulay-Coletta I, Zins M

PubMed · Sep 24 2025
To compare overall image quality, lesion conspicuity, and lesion detectability on 3D-T1w-GRE arterial phase high-resolution MR images reconstructed with deep learning (3D-DLR) versus standard-of-care reconstruction (SOC-Recon) in patients with suspected pancreatic disease. Patients who underwent a pancreatic MR exam with a high-resolution 3D-T1w-GRE arterial phase acquisition on a 3.0-T MR system between December 2021 and June 2022 in our center were retrospectively included. A new deep learning-based reconstruction algorithm (3D-DLR) was used to additionally reconstruct the arterial phase images. Two radiologists blinded to the reconstruction type assessed images for image quality, artifacts, and lesion conspicuity using a Likert scale and counted the lesions. Signal-to-noise ratio (SNR) and lesion contrast-to-noise ratio (CNR) were calculated for each reconstruction. Quantitative data were evaluated using paired t-tests. Ordinal data such as image quality, artifacts, and lesion conspicuity were analyzed using paired Wilcoxon tests. Interobserver agreement for image quality and artifact assessment was evaluated using Cohen's kappa. Thirty-two patients (mean age 62 years ± 12, 16 female) were included. 3D-DLR significantly improved SNR for each pancreatic segment and lesion CNR compared to SOC-Recon (p < 0.01), and demonstrated a significantly higher average image quality score (3.34 vs 2.68, p < 0.01). 3D-DLR also significantly reduced artifacts compared to SOC-Recon (p < 0.01) for one radiologist. 3D-DLR exhibited significantly higher average lesion conspicuity (2.30 vs 1.85, p < 0.01). Sensitivity was higher with 3D-DLR than with SOC-Recon for both readers (1.00 vs 0.88 and 0.88 vs 0.83), although the differences were not statistically significant (p = 0.62 for both). 3D-DLR images demonstrated higher overall image quality, leading to better lesion conspicuity.
3D deep learning reconstruction can be applied to gadolinium-enhanced pancreatic 3D-T1w arterial phase high-resolution images without additional acquisition time to further improve image quality and lesion conspicuity. 3D-DLR had not previously been applied to high-resolution pancreatic MRI sequences. This method improves SNR, CNR, and overall 3D-T1w arterial pancreatic image quality. Enhanced lesion conspicuity may improve pancreatic lesion detectability.
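The image-quality metrics above follow the usual ROI-based definitions (mean signal over background noise); the exact ROI placement convention in the study may differ, and the pixel values below are synthetic:

```python
import numpy as np

def snr(signal_roi, background_roi):
    """Signal-to-noise ratio: mean ROI signal over background noise SD."""
    return signal_roi.mean() / background_roi.std(ddof=1)

def cnr(lesion_roi, tissue_roi, background_roi):
    """Contrast-to-noise ratio: absolute lesion-tissue contrast over
    background noise SD."""
    return abs(lesion_roi.mean() - tissue_roi.mean()) / background_roi.std(ddof=1)

# Synthetic pixel samples; background chosen so its SD (ddof=1) is 1.
background = np.array([1.0, 2.0, 3.0])
pancreas = np.array([4.0, 4.0])
lesion = np.array([10.0, 10.0])
print(snr(pancreas, background), cnr(lesion, pancreas, background))
```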

Elahi R, Karami P, Amjadzadeh M, Nazari M

PubMed · Sep 24 2025
Colorectal cancer (CRC) is the third most common cause of cancer-related morbidity and mortality in the world. Radiomics and radiogenomics are utilized for the high-throughput quantification of features from medical images, providing non-invasive means to characterize cancer heterogeneity and gain insight into the underlying biology. Such radiomics-based artificial intelligence (AI) methods have demonstrated great potential to improve the accuracy of CRC diagnosis and staging, to distinguish between benign and malignant lesions, to aid in the detection of lymph node and hepatic metastases, and to predict treatment response and prognosis. This review presents the latest evidence on the clinical applications of radiomics models based on different imaging modalities in CRC. We also discuss the challenges facing clinical translation, including differences in image acquisition, issues related to reproducibility, a lack of standardization, and limited external validation. Given the progress of machine learning (ML) and deep learning (DL) algorithms, radiomics is expected to have an important effect on the personalized treatment of CRC and contribute to more accurate and individualized clinical decision-making in the future.

Tsai A, Samal S, Lamonica P, Morris N, McNeil J, Pienaar R

PubMed · Sep 24 2025
To deploy an AI model to measure limb-length discrepancy (LLD) and prospectively validate its performance. We encoded the inference of an LLD AI model into a docker container, incorporated it into a computational platform for clinical deployment, and conducted two prospective validation studies: a shadow trial (07/2024-09/2024) and a clinical trial (11/2024-01/2025). During each trial period, we queried for LLD EOS scanograms to serve as inputs to our model. For the shadow trial, we hid the AI-annotated outputs from the radiologists; for the clinical trial, we displayed the AI-annotated output to the radiologists at the time of study interpretation. Afterward, we collected the bilateral femoral and tibial lengths from the radiology reports and compared them against those generated by the AI model. We used median absolute difference (MAD) and interquartile range (IQR) as summary statistics to assess the performance of our model. Our shadow trial consisted of 84 EOS scanograms from 84 children, with 168 femoral and tibial lengths. The MAD (IQR) of the femoral and tibial lengths were 0.2 cm (0.3 cm) and 0.2 cm (0.3 cm), respectively. Our clinical trial consisted of 114 EOS scanograms from 114 children, with 228 femoral and tibial lengths. The MAD (IQR) of the femoral and tibial lengths were 0.3 cm (0.4 cm) and 0.2 cm (0.3 cm), respectively. We successfully employed a computational platform for seamless integration and deployment of an LLD AI model into our clinical workflow, and prospectively validated its performance.
Question: No AI models have been clinically deployed for limb-length discrepancy (LLD) assessment in children, and the prospective performance of such models is unknown.
Findings: We deployed an LLD AI model using a homegrown platform, with prospective trials showing a median absolute difference of 0.2-0.3 cm in estimating bone lengths.
Clinical relevance: An LLD AI model with performance comparable to that of radiologists can serve as a secondary reader, increasing the confidence and accuracy of LLD measurements.
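The MAD (IQR) summary statistic used above is straightforward to reproduce: take the median of the absolute AI-vs-report differences, and the interquartile range of those same differences. The bone lengths below are invented:

```python
import numpy as np

def mad_iqr(ai_lengths, report_lengths):
    """Median absolute difference (MAD) between AI and report
    measurements, plus the interquartile range (IQR) of the absolute
    differences."""
    diffs = np.abs(np.asarray(ai_lengths) - np.asarray(report_lengths))
    q1, med, q3 = np.percentile(diffs, [25, 50, 75])
    return med, q3 - q1

# Hypothetical femoral lengths in cm (AI model vs. radiology report).
ai = [44.1, 38.7, 45.0, 39.2, 43.6]
report = [44.0, 38.5, 44.7, 38.8, 43.1]
mad, iqr = mad_iqr(ai, report)
print(mad, iqr)
```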

Huang S, Chen X, Tian L, Chen X, Yang Y, Sun Y, Zhou Y, Qu W, Wang R, Wang X

PubMed · Sep 24 2025
Endoscopic ultrasound (EUS) is crucial for diagnosing and managing mediastinal diseases but lacks effective quality control. This study developed and evaluated an artificial intelligence (AI) system to assist in anatomical landmark identification and scanning guidance, aiming to improve quality control of mediastinal EUS examinations in clinical practice. The AI system for mediastinal EUS was trained on 11,230 annotated images from 120 patients and validated internally (1,972 images) and externally (824 images from three institutions). A single-center randomized controlled trial was designed to evaluate the effect of quality control: patients requiring mediastinal EUS were enrolled and randomized 1:1 to the AI-assisted or control group. Four endoscopists performed EUS, with the AI group receiving real-time AI feedback. The primary outcome was standard station completeness; secondary outcomes included structure completeness, procedure time, and adverse events. Blinded analysis ensured objectivity. Between 16 September 2023 and 28 February 2025, a total of 148 patients were randomly assigned and analyzed, with 72 patients in the AI-assisted group and 76 in the control group. The overall station completeness was significantly higher in the AI-assisted group than in the control group (1.00 [IQR, 1.00-1.00] vs. 0.80 [IQR, 0.60-0.80]; p < 0.001), with the AI-assisted group also demonstrating significantly higher anatomical structure completeness (1.00 [IQR, 1.00-1.00] vs. 0.85 [IQR, 0.62-0.92]; p < 0.001). However, no significant differences were found for station 2 (subcarinal area) or average procedural time, and no adverse events were reported. The AI system significantly improved scan completeness and shows promise in enhancing EUS quality control.

Teng X, Luo QN, Chen YD, Peng T

PubMed · Sep 24 2025
Hepatocellular carcinoma (HCC) poses a substantial global health burden with high morbidity and mortality rates. Radiomics, which extracts quantitative features from medical images to develop predictive models, has emerged as a promising non-invasive approach for HCC diagnosis and management. However, comprehensive analysis of research trends in this field remains limited. We conducted a systematic bibliometric analysis of radiomics applications in HCC using literature from the Web of Science Core Collection (January 2006-April 2025). Publications were analyzed using CiteSpace, VOSviewer, R, and Python scripts to evaluate publication patterns, citation metrics, institutional contributions, keyword evolution, and collaboration networks. Among 906 included publications, we observed exponential growth, particularly accelerating after 2019. A global landscape analysis revealed China as the leader in publication volume, while the USA acted as the primary international collaboration hub. Countries like South Korea and the UK demonstrated higher average citation impact. Sun Yat-sen University was the most productive institution. Research themes evolved from fundamental texture analysis and CT/MRI applications toward predicting microvascular invasion, assessing treatment response (especially TACE), and prognostic modeling, driven recently by the deep integration of artificial intelligence (AI) and deep learning. Co-citation analysis revealed core knowledge clusters spanning radiomics methodology, clinical management, and landmark applications, demonstrating the field's interdisciplinary nature. Radiomics in HCC represents a rapidly expanding, AI-driven field characterized by extensive multidisciplinary collaboration. Future priorities should emphasize standardization, large-scale multicenter validation, enhanced international cooperation, and clinical translation to maximize radiomics' potential in precision HCC oncology.

R S, Maganti S, Akundi SH

PubMed · Sep 24 2025
Alzheimer's disease (AD) is a progressive illness that can cause behavioural abnormalities, personality changes, and memory loss. Early detection helps with future planning for both the affected person and caregivers. Thus, an innovative hybrid deep learning (DL) method is introduced for the segmentation and classification of AD, with classification performed by a Fuzzy Res-LeNet model. First, an input magnetic resonance imaging (MRI) image is obtained from the database. Image preprocessing is then performed with a bilateral filter (BF) to enhance image quality through denoising. Segmentation is carried out by the proposed O-SegUNet, which integrates the O-SegNet and U-Net models using Pearson correlation coefficient-based fusion. After segmentation, augmentation is performed using the Synthetic Minority Oversampling Technique (SMOTE) to address class imbalance, and features are then extracted. Finally, AD classification is performed by the Fuzzy Res-LeNet, which is devised by integrating fuzzy logic, ResNeXt, and LeNet. The stages are classified as Mild Cognitive Impairment (MCI), AD, Cognitive Normal (CN), Early Mild Cognitive Impairment (EMCI), and Late Mild Cognitive Impairment (LMCI). The proposed Fuzzy Res-LeNet obtained the best performance, with an accuracy of 93.887%, sensitivity of 94.587%, and specificity of 94.008%.
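The SMOTE step synthesizes minority-class samples by interpolating between a minority sample and one of its nearest minority-class neighbours. A minimal numpy sketch of that standard definition (not the imbalanced-learn implementation, and with a toy feature matrix rather than MRI-derived features):

```python
import numpy as np

def smote_sample(X_min, n_new, k=3, seed=0):
    """Minimal SMOTE: for each synthetic sample, pick a minority point,
    choose one of its k nearest minority neighbours, and interpolate a
    random fraction of the way between the two."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        dists = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.random()
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Toy minority class (e.g. an under-represented cognitive stage).
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new = smote_sample(X_min, n_new=5)
print(new.shape)
```

Because each synthetic point lies on a segment between two real minority samples, the new samples stay inside the minority class's feature range.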
