
Deep learning-based automated detection and diagnosis of gouty arthritis in ultrasound images of the first metatarsophalangeal joint.

Xiao L, Zhao Y, Li Y, Yan M, Liu M, Ning C

PubMed · Sep 17 2025
This study aimed to develop a deep learning (DL) model for automatic detection and diagnosis of gouty arthritis (GA) in the first metatarsophalangeal joint (MTPJ) using ultrasound (US) images. A retrospective study included individuals who underwent first MTPJ ultrasonography between February and July 2023. A five-fold cross-validation scheme (training:validation split of 4:1) was employed. A deep residual convolutional neural network (CNN) was trained, and Gradient-weighted Class Activation Mapping (Grad-CAM) was used for visualization. ResNet18 variants with different numbers of residual blocks (2, 3, 4, or 6) were compared to select the optimal model for image classification. Diagnostic decisions were based on a threshold proportion of abnormal images, determined from the training set. A total of 2401 US images from 260 patients (149 gout, 111 control) were analyzed. The model with 3 residual blocks performed best, achieving an AUC of 0.904 (95% CI: 0.887-0.927). Visualization results aligned with radiologist opinions in 2000 of the images. The diagnostic model attained an accuracy of 91.1% (95% CI: 90.4%-91.8%) on the testing set, with a diagnostic threshold of 0.328. The DL model demonstrated excellent performance in automatically detecting and diagnosing GA in the first MTPJ.
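
The abstract describes a patient-level decision rule: each US image is classified by the CNN, and the examination is called positive when the proportion of abnormal images exceeds a threshold learned from the training set (0.328). The authors' code is not provided; the following is a minimal Python sketch of that aggregation step, where the per-image probabilities and the 0.5 image-level cutoff are illustrative assumptions.

```python
import numpy as np

def patient_level_diagnosis(image_probs, image_threshold=0.5, patient_threshold=0.328):
    """Aggregate per-image gout probabilities into a patient-level diagnosis.

    image_probs: per-image abnormality probabilities for one patient's US exam.
    image_threshold: probability above which a single image is called abnormal
        (assumed here; not stated in the abstract).
    patient_threshold: proportion of abnormal images above which the patient
        is classified as gouty arthritis (0.328 in the study).
    """
    image_probs = np.asarray(image_probs, dtype=float)
    abnormal_fraction = float((image_probs > image_threshold).mean())
    return abnormal_fraction, abnormal_fraction >= patient_threshold

# Hypothetical example: seven US images from one first-MTPJ examination
fraction, is_gout = patient_level_diagnosis([0.91, 0.12, 0.77, 0.65, 0.30, 0.08, 0.83])
print(f"abnormal fraction = {fraction:.3f}, gout suspected = {is_gout}")
```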

Automating classification of treatment responses to combined targeted therapy and immunotherapy in HCC.

Quan B, Dai M, Zhang P, Chen S, Cai J, Shao Y, Xu P, Li P, Yu L

PubMed · Sep 17 2025
Tyrosine kinase inhibitors (TKIs) combined with immunotherapy regimens are now widely used for treating advanced hepatocellular carcinoma (HCC), but their clinical efficacy is limited to a subset of patients. Because the vast majority of advanced HCC patients are not candidates for liver resection and thus cannot provide tumor tissue samples, we leveraged clinical and imaging data to construct a multimodal convolutional neural network (CNN)-Transformer model for predicting and analyzing tumor response to TKI-immunotherapy. An automatic liver tumor segmentation system, based on a two-stage 3D U-Net framework, delineates lesions by first segmenting the liver parenchyma and then precisely localizing the tumor; this approach accommodates the variability of clinical data and reduces bias introduced by manual intervention. We developed a clinical model using only pre-treatment clinical information, a CNN model using only pre-treatment magnetic resonance imaging data, and a multimodal CNN-Transformer model that fused imaging and clinical parameters, trained on a cohort of 181 patients and validated on an independent cohort of 30 patients. In the validation cohort, the area under the curve (95% confidence interval) values were 0.720 (0.710-0.731), 0.695 (0.683-0.707), and 0.785 (0.760-0.810), respectively, indicating that the multimodal model significantly outperformed the single-modality baseline models. Finally, single-cell sequencing of surgical tumor specimens revealed tumor-ecosystem diversity associated with treatment response, providing preliminary biological validation for the prediction model. In summary, this multimodal model effectively integrates imaging and clinical features of HCC patients, shows superior performance in predicting tumor response to TKI-immunotherapy, and provides a reliable tool for optimizing personalized treatment strategies.
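
The abstract describes a CNN-Transformer head that fuses pre-treatment imaging features with clinical parameters. The exact architecture is not given in the abstract; below is a hedged PyTorch sketch of one way such a fusion head could look, with all dimensions, layer counts, and the two-token design chosen purely for illustration.

```python
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    """Toy fusion head: project imaging and clinical features into a shared
    token space, let a Transformer encoder mix them, and predict response.
    Feature dimensions are illustrative, not the paper's."""

    def __init__(self, img_dim=512, clin_dim=19, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, d_model)
        self.clin_proj = nn.Linear(clin_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=256,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)  # probability of treatment response

    def forward(self, img_feat, clin_feat):
        # One token per modality, mixed by self-attention, then averaged
        tokens = torch.stack([self.img_proj(img_feat), self.clin_proj(clin_feat)], dim=1)
        fused = self.encoder(tokens).mean(dim=1)
        return torch.sigmoid(self.head(fused)).squeeze(-1)

model = MultimodalFusion()
img = torch.randn(4, 512)   # e.g. pooled CNN features from the segmented tumor
clin = torch.randn(4, 19)   # standardized clinical parameters
print(model(img, clin).shape)  # torch.Size([4])
```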

A Novel Ultrasound-based Nomogram Using Contrast-enhanced and Conventional Ultrasound Features to Improve Preoperative Diagnosis of Parathyroid Adenomas versus Cervical Lymph Nodes.

Xu Y, Zuo Z, Peng Q, Zhang R, Tang K, Niu C

PubMed · Sep 17 2025
Precise preoperative localization of parathyroid gland lesions is essential for guiding surgery in primary hyperparathyroidism (PHPT). The aim of our study was to investigate the contrast-enhanced ultrasound (CEUS) characteristics of parathyroid gland adenoma (PGA) and to evaluate whether PGA can be differentiated from central cervical lymph nodes (CCLN). Fifty-four consecutive patients with PHPT were retrospectively enrolled; all underwent preoperative imaging with high-resolution ultrasound (US) and CEUS followed by parathyroidectomy. One hundred and seventy-four lymph nodes from papillary thyroid carcinoma (PTC) patients, who were examined with high-resolution US and CEUS and underwent unilateral, subtotal, or total thyroidectomy with central neck dissection, were also enrolled. By incorporating US and CEUS characteristics, a predictive model presented as a nomogram was developed, and its performance and utility were evaluated with receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA). Three US characteristics and two CEUS characteristics were independently associated with PGA in its differentiation from CCLN and were used for machine learning model construction. The area under the ROC curve (AUC) of the US+CEUS model was 0.915, higher than that of the US-only model (0.874) and the CEUS-only model (0.791). It is recommended that CEUS be used to enhance the diagnostic utility of US in cases of suspected parathyroid lesions. This is the first study to combine US and CEUS in a nomogram to distinguish PGA from CCLN, filling a gap in the existing literature.
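
The nomogram in this study is, in essence, a multivariable model built from three US and two CEUS characteristics and evaluated with ROC analysis. As a rough illustration of that workflow (not the authors' model or data), here is a small scikit-learn sketch using synthetic binary predictors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the five selected predictors (three US + two CEUS
# characteristics, binary-coded); labels: 1 = PGA, 0 = CCLN.
X = rng.integers(0, 2, size=(228, 5)).astype(float)
y = (X @ np.array([1.2, 0.8, 0.6, 1.5, 1.0]) + rng.normal(0, 1, 228) > 2.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
nomogram = LogisticRegression().fit(X_tr, y_tr)

print("coefficients (nomogram point weights):", nomogram.coef_.round(2))
print("AUC:", round(roc_auc_score(y_te, nomogram.predict_proba(X_te)[:, 1]), 3))
```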

Accuracy of Foundation AI Models for Hepatic Macrovesicular Steatosis Quantification in Frozen Sections

Koga, S., Guda, A., Wang, Y., Sahni, A., Wu, J., Rosen, A., Nield, J., Nandish, N., Patel, K., Goldman, H., Rajapakse, C., Walle, S., Kristen, S., Tondon, R., Alipour, Z.

medRxiv preprint · Sep 17 2025
Introduction: Accurate intraoperative assessment of macrovesicular steatosis in donor liver biopsies is critical for transplantation decisions but is often limited by inter-observer variability and freezing artifacts that can obscure histological details. Artificial intelligence (AI) offers a potential solution for standardized and reproducible evaluation. We aimed to evaluate the diagnostic performance of two self-supervised learning (SSL)-based foundation models, Prov-GigaPath and UNI, for classifying macrovesicular steatosis in frozen liver biopsy sections, compared with assessments by surgical pathologists. Methods: We retrospectively analyzed 131 frozen liver biopsy specimens from 68 donors collected between November 2022 and September 2024. Slides were digitized into whole-slide images, tiled into patches, and used to extract embeddings with Prov-GigaPath and UNI; slide-level classifiers were then trained and tested. Intraoperative diagnoses by on-call surgical pathologists were compared with ground truth determined from independent reviews of permanent sections by two liver pathologists. Accuracy was evaluated for both five-category classification and a clinically significant binary threshold (<30% vs. ≥30%). Results: For binary classification, Prov-GigaPath achieved 96.4% accuracy, UNI 85.7%, and surgical pathologists 84.0% (P = .22). In five-category classification, accuracies were lower: Prov-GigaPath 57.1%, UNI 50.0%, and pathologists 58.7% (P = .70). Misclassification primarily occurred in intermediate categories (5% to <30% steatosis). Conclusions: SSL-based foundation models performed comparably to surgical pathologists in classifying macrovesicular steatosis at the clinically relevant <30% vs. ≥30% threshold. These findings support a potential role for AI in standardizing intraoperative evaluation of donor liver biopsies; however, the small sample size limits generalizability and requires validation in larger, balanced cohorts.
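
The described pipeline extracts patch embeddings with a pathology foundation model and then trains a slide-level classifier. The abstract does not specify how patch embeddings are aggregated; the sketch below assumes simple mean pooling and a logistic-regression slide classifier on synthetic embeddings, purely to illustrate the structure of such a pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def slide_embedding(patch_embeddings):
    """Mean-pool patch-level foundation-model embeddings into one slide vector.
    (Mean pooling is an assumed baseline; the study's aggregation is not stated.)"""
    return np.asarray(patch_embeddings).mean(axis=0)

rng = np.random.default_rng(1)
# Hypothetical data: 40 slides, each with a variable number of 1536-d patch embeddings
slides = [rng.normal(size=(int(rng.integers(50, 200)), 1536)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)   # 1 = >=30% macrovesicular steatosis (synthetic)

X = np.stack([slide_embedding(s) for s in slides])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```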

MedFormer: hierarchical medical vision transformer with content-aware dual sparse selection attention.

Xia Z, Li H, Lan L

PubMed · Sep 16 2025
Medical image recognition is a key aid to clinical diagnosis, enabling more accurate and timely identification of diseases and abnormalities. Vision transformer-based approaches have proven effective in handling various medical recognition tasks. However, these methods encounter two primary challenges. First, they are often task-specific and architecture-tailored, limiting their general applicability. Second, they usually either adopt full attention to model long-range dependencies, resulting in high computational costs, or rely on handcrafted sparse attention, potentially leading to suboptimal performance. To tackle these issues, we present MedFormer, an efficient medical vision transformer with two key ideas. First, it employs a pyramid scaling structure as a versatile backbone for various medical image recognition tasks, including image classification and dense prediction tasks such as semantic segmentation and lesion detection. This structure facilitates hierarchical feature representation while reducing the computational load of feature maps, which is highly beneficial for performance. Second, it introduces a novel Dual Sparse Selection Attention (DSSA) with content awareness to improve computational efficiency and robustness against noise while maintaining high performance. As the core building technique of MedFormer, DSSA is designed to explicitly attend to the most relevant content. Theoretical analysis demonstrates that MedFormer outperforms existing medical vision transformers in terms of generality and efficiency. Extensive experiments across various imaging modality datasets show that MedFormer consistently enhances performance in all three medical image recognition tasks mentioned above. MedFormer provides an efficient and versatile solution for medical image recognition, with strong potential for clinical application. The code is available on GitHub.
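
DSSA attends only to the most relevant content rather than computing full attention. The exact dual selection mechanism is defined in the paper; as a generic illustration of content-aware sparse attention, the sketch below keeps only each query's top-k keys (a simplification, not the paper's DSSA).

```python
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, top_k=8):
    """Content-aware sparse attention sketch: each query attends only to its
    top-k highest-scoring keys. A generic top-k approximation for illustration."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5        # (B, Nq, Nk)
    top_vals, top_idx = scores.topk(top_k, dim=-1)               # keep the k best keys
    sparse = torch.full_like(scores, float("-inf")).scatter(-1, top_idx, top_vals)
    return F.softmax(sparse, dim=-1) @ v                         # masked softmax, then mix values

q = torch.randn(2, 64, 32)    # (batch, queries, dim)
k = torch.randn(2, 256, 32)
v = torch.randn(2, 256, 32)
print(topk_sparse_attention(q, k, v).shape)  # torch.Size([2, 64, 32])
```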

Mammographic features in screening mammograms with high AI scores but a true-negative screening result.

Koch HW, Bergan MB, Gjesvik J, Larsen M, Bartsch H, Haldorsen IHS, Hofvind S

PubMed · Sep 16 2025
Background: The use of artificial intelligence (AI) in screen-reading of mammograms has shown promising results for cancer detection. However, less attention has been paid to the false positives generated by AI. Purpose: To investigate mammographic features in screening mammograms with high AI scores but a true-negative screening result. Material and Methods: In this retrospective study, 54,662 screening examinations from BreastScreen Norway 2010-2022 were analyzed with a commercially available AI system (Transpara v. 2.0.0). An AI score of 1-10 indicated the suspiciousness of malignancy. We selected examinations with an AI score of 10 and a true-negative screening result, followed by two consecutive true-negative screening examinations. Of the 2,124 examinations matching these criteria, 382 random examinations underwent blinded consensus review by three experienced breast radiologists. The examinations were classified according to mammographic features, radiologist interpretation score (1-5), and mammographic breast density (BI-RADS 5th ed. a-d). Results: The reviews classified 91.1% (348/382) of the examinations as negative (interpretation score 1). All examinations (26/26) categorized as BI-RADS d were given an interpretation score of 1. Classification of mammographic features: asymmetry = 30.6% (117/382); calcifications = 30.1% (115/382); asymmetry with calcifications = 29.3% (112/382); mass = 8.9% (34/382); distortion = 0.8% (3/382); spiculated mass = 0.3% (1/382). For examinations with calcifications, 79.1% (91/115) were classified with benign morphology. Conclusion: The majority of false-positive screening examinations generated by AI were classified as non-suspicious in a retrospective blinded consensus review and would likely not have been recalled for further assessment in a real screening setting using AI as decision support.
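
The study cohort is defined by a simple selection rule: examinations with an AI score of 10 and a true-negative result, followed by two consecutive true-negative screens. A small pandas sketch of that inclusion logic on a hypothetical screening table is shown below (column names and data are invented for illustration).

```python
import pandas as pd

# Hypothetical screening table: one row per examination, ordered by round,
# with a Transpara-style AI score (1-10) and the screening outcome.
exams = pd.DataFrame({
    "woman_id":      [1, 1, 1, 2, 2, 2],
    "exam_no":       [1, 2, 3, 1, 2, 3],
    "ai_score":      [10, 3, 2, 10, 5, 4],
    "true_negative": [True, True, True, True, False, True],
})

def eligible_index_exams(df):
    """Index exams with AI score 10 and a true-negative result, followed by
    two consecutive true-negative screens (the study's inclusion rule)."""
    keep = []
    for _, g in df.sort_values("exam_no").groupby("woman_id"):
        tn = g["true_negative"].tolist()
        for i, row in enumerate(g.itertuples()):
            if (row.ai_score == 10 and tn[i]
                    and i + 2 < len(tn) and tn[i + 1] and tn[i + 2]):
                keep.append(row.Index)
    return df.loc[keep]

print(eligible_index_exams(exams))   # keeps only woman 1's first examination
```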

Multi-Atlas Brain Network Classification through Consistency Distillation and Complementary Information Fusion.

Xu J, Lan M, Dong X, He K, Zhang W, Bian Q, Ke Y

PubMed · Sep 16 2025
Brain network analysis plays a crucial role in identifying distinctive patterns associated with neurological disorders. Functional magnetic resonance imaging (fMRI) enables the construction of brain networks by analyzing correlations in blood-oxygen-level-dependent (BOLD) signals across different brain regions, known as regions of interest (ROIs). These networks are typically constructed using atlases that parcellate the brain based on various hypotheses of functional and anatomical divisions. However, there is no standard atlas for brain network classification, leading to limitations in detecting abnormalities in disorders. Recent methods leveraging multiple atlases fail to ensure consistency across atlases and lack effective ROI-level information exchange, limiting their efficacy. To address these challenges, we propose the Atlas-Integrated Distillation and Fusion network (AIDFusion), a novel framework designed to enhance brain network classification using fMRI data. AIDFusion introduces a disentangle Transformer to filter out inconsistent atlas-specific information and distill meaningful cross-atlas connections. Additionally, it enforces subject- and population-level consistency constraints to improve cross-atlas coherence. To further enhance feature integration, AIDFusion incorporates an inter-atlas message-passing mechanism that facilitates the fusion of complementary information across brain regions. We evaluate AIDFusion on four resting-state fMRI datasets encompassing different neurological disorders. Experimental results demonstrate its superior classification performance and computational efficiency compared to state-of-the-art methods. Furthermore, a case study highlights AIDFusion's ability to extract interpretable patterns that align with established neuroscience findings, reinforcing its potential as a robust tool for multi-atlas brain network analysis. The code is publicly available at https://github.com/AngusMonroe/AIDFusion.
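
The abstract highlights an inter-atlas message-passing mechanism that exchanges complementary information between ROI features from different atlases. The authors' implementation is at the GitHub link above; the sketch below is only a generic cross-attention illustration of the idea, with dimensions, pooling, and classifier chosen arbitrarily.

```python
import torch
import torch.nn as nn

class InterAtlasMessagePassing(nn.Module):
    """Toy cross-atlas exchange: ROI tokens from atlas A attend to ROI tokens
    from atlas B and vice versa, then both sets are pooled and fused.
    A rough sketch of the concept, not the AIDFusion architecture itself."""

    def __init__(self, dim=64, heads=4, n_classes=2):
        super().__init__()
        self.a_to_b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.b_to_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, n_classes)

    def forward(self, rois_a, rois_b):
        a_msg, _ = self.a_to_b(rois_a, rois_b, rois_b)   # atlas A queries atlas B
        b_msg, _ = self.b_to_a(rois_b, rois_a, rois_a)   # atlas B queries atlas A
        fused = torch.cat([(rois_a + a_msg).mean(1), (rois_b + b_msg).mean(1)], dim=-1)
        return self.classifier(fused)

model = InterAtlasMessagePassing()
rois_a = torch.randn(8, 100, 64)   # e.g. 100 ROI features from one atlas
rois_b = torch.randn(8, 200, 64)   # 200 ROI features from another atlas
print(model(rois_a, rois_b).shape)  # torch.Size([8, 2])
```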

Concurrent AI assistance with LI-RADS classification for contrast enhanced MRI of focal hepatic nodules: a multi-reader, multi-case study.

Qin X, Huang L, Wei Y, Li H, Wu Y, Zhong J, Jian M, Zhang J, Zheng Z, Xu Y, Yan C

PubMed · Sep 16 2025
The Liver Imaging Reporting and Data System (LI-RADS) assessment is subject to inter-reader variability. The present study aimed to evaluate the impact of an artificial intelligence (AI) system on the accuracy and inter-reader agreement of LI-RADS classification based on contrast-enhanced magnetic resonance imaging among radiologists with varying experience levels. This single-center, multi-reader, multi-case retrospective study included 120 patients with 200 focal liver lesions who underwent abdominal contrast-enhanced magnetic resonance imaging between June 2023 and May 2024. Five radiologists with different experience levels independently assessed LI-RADS classification and imaging features with and without AI assistance. The reference standard was established by consensus between two expert radiologists. Accuracy was used to measure the performance of the AI system and the radiologists, and the kappa statistic or intraclass correlation coefficient was used to estimate inter-reader agreement. The LI-RADS category distribution was: LR-3, 33.5% (67/200); LR-4, 29.0% (58/200); LR-5, 33.5% (67/200); and LR-M, 4.0% (8/200). The AI system significantly improved the overall accuracy of LI-RADS classification from 69.9% to 80.1% (p < 0.001), with the most notable improvement among junior radiologists, from 65.7% to 79.7% (p < 0.001). Inter-reader agreement for LI-RADS classification was significantly higher with AI assistance than without (weighted Cohen's kappa: 0.812 vs. 0.655; p < 0.001). The AI system also enhanced the accuracy and inter-reader agreement for imaging features, including non-rim arterial phase hyperenhancement, non-peripheral washout, and restricted diffusion. Inter-reader agreement for lesion size measurements also improved, with the intraclass correlation coefficient increasing from 0.857 to 0.951 (p < 0.001). The AI system significantly increases the accuracy and inter-reader agreement of LI-RADS 3/4/5/M classification, particularly benefiting junior radiologists.
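
Inter-reader agreement here is quantified with a weighted Cohen's kappa and an intraclass correlation coefficient. As a small illustration of how the kappa comparison could be computed, the sketch below uses made-up ratings and assumes linear weighting, since the abstract does not state the weighting scheme.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical LI-RADS categories (3, 4, 5; M coded as 6) assigned by two
# readers to the same ten lesions, without and then with AI assistance.
reader1       = [3, 4, 5, 5, 3, 4, 6, 5, 3, 4]
reader2_no_ai = [4, 4, 5, 4, 3, 3, 5, 5, 4, 4]
reader2_ai    = [3, 4, 5, 5, 3, 4, 6, 5, 3, 5]

# Linear weights are one common choice for ordinal LI-RADS categories (assumed here).
print("kappa without AI:", round(cohen_kappa_score(reader1, reader2_no_ai, weights="linear"), 3))
print("kappa with AI:   ", round(cohen_kappa_score(reader1, reader2_ai, weights="linear"), 3))
```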

CT-Based deep learning platform combined with clinical parameters for predicting different discharge outcome in spontaneous intracerebral hemorrhage.

Wu TC, Chan MH, Lin KH, Liu CF, Chen JH, Chang RF

PubMed · Sep 16 2025
This study aims to enhance the prognostic prediction of spontaneous intracerebral hemorrhage (sICH) by comparing the accuracy of three models: a CT-based deep learning model, a clinical variable-based machine learning model, and a hybrid model that integrates both approaches. The goal is to evaluate their performance across different outcome thresholds: poor outcome (mRS 3-6), loss of independence (mRS 4-6), and severe disability or death (mRS 5-6). A retrospective analysis was conducted on 1,853 sICH patients from a stroke center database (2008-2021). Patients were divided into two datasets: Dataset A (958 patients) for training/testing the clinical and hybrid models, and Dataset B (895 patients) for training the deep learning model. The imaging model used a 3D ResNet-50 architecture with attention modules, while the clinical model incorporated 19 clinical variables. The hybrid model combined clinical data with the prediction probability from the imaging model. Performance metrics were compared using the DeLong test. The hybrid model consistently outperformed the other models across all outcome thresholds. For predicting severe disability or death, loss of independence, and poor outcome, the hybrid model achieved accuracies of 82.6%, 79.5%, and 80.6%, with AUC values of 0.897, 0.871, and 0.873, respectively. GCS scores and the imaging model prediction probability were the most significant predictors. The hybrid model, combining CT-based deep learning with clinical variables, offers superior prognostic prediction for sICH outcomes. This integrated approach shows promise for improving clinical decision-making, though further validation in prospective studies is needed. Trial registration: not applicable, as this is a retrospective study rather than a clinical trial.
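
The hybrid model appends the imaging model's prediction probability to the 19 clinical variables before training the final classifier. The abstract does not name the clinical/hybrid learning algorithm; the sketch below uses gradient boosting on synthetic data purely to show that feature-fusion pattern.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500

clinical = rng.normal(size=(n, 19))                 # 19 clinical variables (synthetic)
imaging_prob = rng.uniform(0, 1, size=(n, 1))       # CT deep-learning output probability
y = (0.6 * imaging_prob[:, 0] + 0.3 * clinical[:, 0]
     + rng.normal(0, 0.3, n) > 0.6).astype(int)     # synthetic mRS 5-6 label

# Hybrid feature vector: clinical variables plus the imaging prediction probability
X = np.hstack([clinical, imaging_prob])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

hybrid = GradientBoostingClassifier().fit(X_tr, y_tr)   # classifier choice is an assumption
print("hybrid AUC:", round(roc_auc_score(y_te, hybrid.predict_proba(X_te)[:, 1]), 3))
```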

More performant and scalable: Rethinking contrastive vision-language pre-training of radiology in the LLM era

Yingtai Li, Haoran Lai, Xiaoqian Zhou, Shuai Ming, Wenxin Ma, Wei Wei, Shaohua Kevin Zhou

arXiv preprint · Sep 16 2025
The emergence of Large Language Models (LLMs) presents unprecedented opportunities to revolutionize medical contrastive vision-language pre-training. In this paper, we show how LLMs can facilitate large-scale supervised pre-training, thereby advancing vision-language alignment. We begin by demonstrating that modern LLMs can automatically extract diagnostic labels from radiology reports with remarkable precision (>96% AUC in our experiments) without complex prompt engineering, enabling the creation of large-scale "silver-standard" datasets at minimal cost (~$3 for 50k CT image-report pairs). Further, we find that a vision encoder trained on this "silver-standard" dataset achieves performance comparable to one trained on labels extracted by specialized BERT-based models, thereby democratizing access to large-scale supervised pre-training. Building on this foundation, we reveal that supervised pre-training fundamentally improves contrastive vision-language alignment. Our approach achieves state-of-the-art performance using only a 3D ResNet-18 with vanilla CLIP training, including 83.8% AUC for zero-shot diagnosis on CT-RATE, 77.3% AUC on RAD-ChestCT, and substantial improvements in cross-modal retrieval (MAP@50 = 53.7% for image-image and Recall@100 = 52.2% for report-image). These results demonstrate the potential of LLMs to facilitate more performant and scalable medical AI systems. Our code is available at https://github.com/SadVoxel/More-performant-and-scalable.
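
"Vanilla CLIP training" refers to the standard symmetric contrastive objective between paired image and report embeddings. A minimal sketch of that loss is shown below; it is a generic implementation for illustration, not the authors' code, which is available at the linked repository.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Vanilla CLIP-style symmetric InfoNCE loss between paired image and
    report embeddings (a generic sketch, not the paper's exact training code)."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature       # (B, B) similarity matrix
    targets = torch.arange(logits.size(0))             # matching pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Hypothetical batch: 16 CT-volume embeddings paired with 16 report embeddings
img = torch.randn(16, 512)
txt = torch.randn(16, 512)
print(clip_contrastive_loss(img, txt))
```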
