
Li Y, Shen MJ, Yi JW, Zhao QQ, Zhao QP, Hao LY, Qi JJ, Li WH, Wu XD, Zhao L, Wang Y

PubMed | Sep 17 2025
This study aimed to develop and validate machine learning models integrating clinicoradiological and radiomic features from 2-[<sup>18</sup>F]fluoro-2-deoxy-D-glucose (<sup>18</sup>F-FDG) positron emission tomography/computed tomography (PET/CT) to predict pathological high invasiveness in cT1-sized (tumor size ≤ 3 cm) non-small cell lung cancer (NSCLC). We retrospectively reviewed 1459 patients with NSCLC (633 with pathological high invasiveness and 826 with pathological non-high invasiveness) from two medical centers. Patients with cT1-sized NSCLC were included. A total of 1145 radiomic features were extracted per modality (PET and CT) for each patient. Optimal predictors were selected to construct a radiomics score (Rad-score) for the PET/CT radiomics model. A combined model incorporating significant clinicoradiological features and the Rad-score was developed. Logistic regression (LR), random forest (RF), support vector machine (SVM), and extreme gradient boosting (XGBoost) algorithms were used to train the combined model. Model performance was assessed using the area under the receiver operating characteristic (ROC) curve (AUC), calibration curves, and decision curve analysis (DCA). Shapley Additive Explanations (SHAP) was applied to visualize the prediction process. The radiomics model was built using 11 radiomic features, achieving AUCs of 0.851 (training), 0.859 (internal validation), and 0.829 (external validation). Among all models, the XGBoost combined model demonstrated the best predictive performance, with AUCs of 0.958, 0.919, and 0.903, respectively, along with good calibration and high net benefit. The XGBoost combined model showed strong performance in predicting pathological high invasiveness in cT1-sized NSCLC.
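The Rad-score plus clinical-feature workflow described above can be illustrated with a short, hypothetical sketch: radiomic features are collapsed into a logistic-regression Rad-score, which is then combined with clinical variables in an XGBoost classifier and evaluated by AUC. Column names, the file path, and hyperparameters are placeholders, not the study's actual pipeline.

```python
# Hypothetical sketch: collapse radiomic features into a Rad-score, combine it with
# clinical features in XGBoost, and report AUC. All names are placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("features.csv")                          # placeholder table
radiomic_cols = [c for c in df.columns if c.startswith("rad_")]
clinical_cols = ["age", "sex", "lesion_diameter"]          # assumed clinicoradiological features
y = df["high_invasiveness"]

X_tr, X_te, y_tr, y_te = train_test_split(df, y, test_size=0.3, stratify=y, random_state=0)

# Rad-score: weighted sum of selected radiomic features from a logistic model
rad_model = LogisticRegression(max_iter=1000).fit(X_tr[radiomic_cols], y_tr)
X_tr = X_tr.assign(rad_score=rad_model.decision_function(X_tr[radiomic_cols]))
X_te = X_te.assign(rad_score=rad_model.decision_function(X_te[radiomic_cols]))

# Combined model: significant clinical features + Rad-score
cols = clinical_cols + ["rad_score"]
clf = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05, eval_metric="logloss")
clf.fit(X_tr[cols], y_tr)
print("Validation AUC:", roc_auc_score(y_te, clf.predict_proba(X_te[cols])[:, 1]))
```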

Zhao Z, Alzubaidi L, Zhang J, Duan Y, Naseem U, Gu Y

PubMed | Sep 17 2025
Deep learning has significantly advanced automatic medical diagnostics, easing clinical workload, yet persistent data scarcity in this area hampers further improvement and application. To address this gap, we introduce a novel ensemble framework called 'Efficient Transfer and Self-supervised Learning based Ensemble Framework' (ETSEF). ETSEF leverages features from multiple pre-trained deep learning models to efficiently learn powerful representations from a limited number of data samples. To the best of our knowledge, ETSEF is the first strategy that combines two pre-training methodologies (Transfer Learning and Self-supervised Learning) with ensemble learning approaches. Various data enhancement techniques, including data augmentation, feature fusion, feature selection, and decision fusion, have also been deployed to maximise the efficiency and robustness of the ETSEF model. Five independent medical imaging tasks, including endoscopy, breast cancer detection, monkeypox detection, brain tumour detection, and glaucoma detection, were tested to demonstrate ETSEF's effectiveness and robustness. Facing limited sample numbers and challenging medical tasks, ETSEF demonstrated its effectiveness by improving diagnostic accuracy by up to 13.3% compared to strong ensemble baseline models and up to 14.4% compared with recent state-of-the-art methods. Moreover, we emphasise the robustness and trustworthiness of the ETSEF method through various visual explainable artificial intelligence techniques, including Grad-CAM, SHAP, and t-SNE. Compared to large-scale deep learning models, ETSEF can be flexibly deployed and maintains superior performance on challenging medical imaging tasks, demonstrating potential for application in areas lacking training data. The code is available at the ETSEF GitHub repository.
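As a rough illustration of the fusion steps named above (feature fusion, feature selection, and decision fusion), the sketch below concatenates feature matrices from two hypothetical pre-trained backbones, selects a subset of features, and soft-votes over several base classifiers; it is a generic ensemble scaffold under assumed inputs, not the ETSEF implementation.

```python
# Illustrative sketch (not the authors' code): fuse features from several pre-trained
# backbones, select a subset, and combine base classifiers by soft-vote decision fusion.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Assume each backbone has already produced a feature matrix for the same samples.
feats_transfer = np.load("resnet_features.npy")   # placeholder: transfer-learned features
feats_ssl = np.load("simclr_features.npy")        # placeholder: self-supervised features
y = np.load("labels.npy")

X = np.concatenate([feats_transfer, feats_ssl], axis=1)   # feature fusion

ensemble = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=256),                 # feature selection
    VotingClassifier(                              # decision fusion (soft voting)
        estimators=[
            ("lr", LogisticRegression(max_iter=2000)),
            ("svm", SVC(probability=True)),
            ("rf", RandomForestClassifier(n_estimators=300)),
        ],
        voting="soft",
    ),
)
ensemble.fit(X, y)
```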

Qin Y, Zhang Z, Qu X, Liu W, Yan Y, Huang Y

PubMed | Sep 17 2025
This study aims to explore the potential of machine learning as a non-invasive automated tool for skin tumor differentiation. Data from 156 lesions, collected retrospectively from September 2021 to February 2024, were included. Univariate and multivariate analyses of traditional clinical features were performed to establish a logistic regression model. Ultrasound-based radiomics features were extracted from grayscale images after delineating regions of interest (ROIs). Independent-samples t-tests, Mann-Whitney U tests, and Least Absolute Shrinkage and Selection Operator (LASSO) regression were employed to select ultrasound-based radiomics features. Subsequently, five machine learning methods were used to construct radiomics models based on the selected features. Model performance was evaluated using receiver operating characteristic (ROC) curves and the DeLong test. Age, poorly defined margins, and irregular shape were identified as independent risk factors for malignant skin tumors. The multilayer perceptron (MLP) model achieved the best performance, with area under the curve (AUC) values of 0.963 and 0.912. The DeLong test revealed a statistically significant difference in performance between the MLP and clinical models (Z=2.611, p=0.009). Machine learning-based skin tumor models may serve as a potential non-invasive method to improve diagnostic efficiency.
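A minimal sketch of the LASSO-selection-plus-MLP pipeline, assuming pre-extracted radiomics features stored in placeholder files; the hyperparameters are illustrative rather than those used in the study.

```python
# Rough sketch under assumed data: LASSO-based feature selection followed by an MLP
# classifier, evaluated with ROC AUC. File names and parameters are illustrative.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.load("radiomics_features.npy")    # placeholder: features from grayscale US ROIs
y = np.load("malignant.npy")             # 1 = malignant, 0 = benign

model = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5, max_iter=10000)),   # LASSO keeps non-zero-weight features
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
model.fit(X, y)
print("Training AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```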

Xiao L, Zhao Y, Li Y, Yan M, Liu M, Ning C

PubMed | Sep 17 2025
This study aimed to develop a deep learning (DL) model for automatic detection and diagnosis of gouty arthritis (GA) in the first metatarsophalangeal joint (MTPJ) using ultrasound (US) images. A retrospective study included individuals who underwent first MTPJ ultrasonography between February and July 2023. A five-fold cross-validation method (training:validation = 4:1) was employed. A deep residual convolutional neural network (CNN) was trained, and Gradient-weighted Class Activation Mapping (Grad-CAM) was used for visualization. ResNet18 variants with different numbers of residual blocks (2, 3, 4, or 6) were compared to select the optimal model for image classification. Diagnostic decisions were based on a threshold proportion of abnormal images, determined from the training set. A total of 2401 US images from 260 patients (149 gout, 111 control) were analyzed. The model with 3 residual blocks performed best, achieving an AUC of 0.904 (95% CI: 0.887–0.927). Visualization results aligned with radiologist opinions in 2000 images. The diagnostic model attained an accuracy of 91.1% (95% CI: 90.4%–91.8%) on the testing set, with a diagnostic threshold of 0.328. The DL model demonstrated excellent performance in automatically detecting and diagnosing GA in the first MTPJ.
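The patient-level decision rule, a threshold on the proportion of images the CNN flags as abnormal, is simple enough to show directly; the function below is a hypothetical sketch that reuses the 0.328 threshold reported in the abstract, with the per-image probability source assumed.

```python
# Hypothetical decision rule: flag a patient when the fraction of abnormal images
# exceeds the reported threshold of 0.328. Per-image probabilities are assumed to
# come from the trained ResNet18 classifier.
from typing import List

def patient_diagnosis(image_probs: List[float],
                      image_threshold: float = 0.5,
                      abnormal_fraction_threshold: float = 0.328) -> bool:
    abnormal = [p >= image_threshold for p in image_probs]
    return sum(abnormal) / len(abnormal) >= abnormal_fraction_threshold

# 4 of 9 images flagged abnormal -> fraction 0.44 >= 0.328 -> gout suspected
print(patient_diagnosis([0.9, 0.8, 0.7, 0.6, 0.2, 0.1, 0.3, 0.4, 0.2]))
```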

Quan B, Dai M, Zhang P, Chen S, Cai J, Shao Y, Xu P, Li P, Yu L

PubMed | Sep 17 2025
Tyrosine kinase inhibitors (TKIs) combined with immunotherapy regimens are now widely used for treating advanced hepatocellular carcinoma (HCC), but their clinical efficacy is limited to a subset of patients. Because the vast majority of patients with advanced HCC lose the opportunity for liver resection and thus cannot provide tumor tissue samples, we leveraged clinical and imaging data to construct a multimodal convolutional neural network (CNN)-Transformer model for predicting and analyzing tumor response to TKI-immunotherapy. An automatic liver tumor segmentation system, based on a two-stage 3D U-Net framework, delineates lesions by first segmenting the liver parenchyma and then precisely localizing the tumor. This approach effectively addresses the variability in clinical data and significantly reduces bias introduced by manual intervention. We then developed a clinical model using only pre-treatment clinical information, a CNN model using only pre-treatment magnetic resonance imaging data, and an advanced multimodal CNN-Transformer model that fused imaging and clinical parameters, using a training cohort (n = 181), and validated them using an independent cohort (n = 30). In the validation cohort, the area under the curve (95% confidence interval) values were 0.720 (0.710-0.731), 0.695 (0.683-0.707), and 0.785 (0.760-0.810), respectively, indicating that the multimodal model significantly outperformed the single-modality baseline models across validations. Finally, single-cell sequencing of surgical tumor specimens revealed tumor ecosystem diversity associated with treatment response, providing preliminary biological validation for the prediction model. In summary, this multimodal model effectively integrates imaging and clinical features of HCC patients, achieves superior performance in predicting tumor response to TKI-immunotherapy, and provides a reliable tool for optimizing personalized treatment strategies.
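The imaging-plus-clinical fusion idea can be sketched generically: CNN-derived image embeddings and clinical variables are projected to a common token space and passed through a small Transformer encoder. This is an illustration of a CNN-Transformer fusion head, not the authors' architecture, and all dimensions are assumptions.

```python
# Conceptual sketch (assumed dimensions, not the paper's design): fuse CNN image
# embeddings with clinical variables as tokens for a Transformer encoder that
# outputs a treatment-response logit.
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    def __init__(self, img_dim=512, n_clinical=12, d_model=128):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, d_model)    # CNN feature vector -> one token
        self.clin_proj = nn.Linear(1, d_model)         # each clinical variable -> one token
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.cls = nn.Linear(d_model, 1)               # response logit

    def forward(self, img_feat, clinical):
        # img_feat: (B, img_dim) from a CNN backbone; clinical: (B, n_clinical)
        img_tok = self.img_proj(img_feat).unsqueeze(1)        # (B, 1, d_model)
        clin_tok = self.clin_proj(clinical.unsqueeze(-1))     # (B, n_clinical, d_model)
        tokens = torch.cat([img_tok, clin_tok], dim=1)
        fused = self.encoder(tokens).mean(dim=1)              # pool over tokens
        return self.cls(fused).squeeze(-1)

model = MultimodalFusion()
logits = model(torch.randn(4, 512), torch.randn(4, 12))
print(logits.shape)   # torch.Size([4])
```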

Yu Y, Huang G, Tan Z, Shi J, Li M, Pun CM, Zheng F, Ma S, Wang S, He L

PubMed | Sep 17 2025
The task of medical report generation involves automatically creating descriptive text reports from medical images, with the aim of alleviating the workload of physicians and enhancing diagnostic efficiency. However, although many existing medical report generation models based on the Transformer framework consider structural information in medical images, they ignore the interference of confounding factors on these structures, which limits their ability to effectively capture rich and critical lesion information. Furthermore, these models often struggle to address the significant imbalance between normal and abnormal content in actual reports, leading to challenges in accurately describing abnormalities. To address these limitations, we propose the Multi-modal Prompt Collaboration Mechanism for Radiology Report Generation model (MPCM-RRG). This model consists of three key components: the Visual Causal Prompting Module (VCP), the Textual Prompt-Guided Feature Enhancement Module (TPGF), and the Visual-Textual Semantic Consistency Module (VTSC). The VCP module uses chest X-ray masks as visual prompts and incorporates causal inference principles to help the model minimize the influence of irrelevant regions. Through causal intervention, the model can learn the causal relationships between the pathological regions in the image and the corresponding findings described in the report. The TPGF module tackles the imbalance between abnormal and normal text by integrating detailed textual prompts, which also guide the model to focus on lesion areas using a multi-head attention mechanism. The VTSC module promotes alignment between the visual and textual representations through a contrastive consistency loss, fostering greater interaction and collaboration between the visual and textual prompts. Experimental results demonstrate that MPCM-RRG outperforms other methods on the IU X-ray and MIMIC-CXR datasets, highlighting its effectiveness in generating high-quality medical reports.
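The contrastive consistency objective used by the VTSC module is, in general form, an InfoNCE-style alignment loss between paired visual and textual embeddings; the sketch below shows that generic formulation, not the paper's exact loss.

```python
# Hedged sketch of a contrastive visual-textual consistency loss (InfoNCE-style),
# illustrating the kind of alignment objective described for the VTSC module.
import torch
import torch.nn.functional as F

def contrastive_consistency_loss(visual_emb, text_emb, temperature=0.07):
    """visual_emb, text_emb: (B, D) paired embeddings for the same study."""
    v = F.normalize(visual_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature                  # similarity of every visual-text pair
    targets = torch.arange(v.size(0), device=v.device)
    # symmetric cross-entropy: match image -> report and report -> image
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))

loss = contrastive_consistency_loss(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```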

Xu Y, Zuo Z, Peng Q, Zhang R, Tang K, Niu C

PubMed | Sep 17 2025
Precise preoperative localization of parathyroid gland lesions is essential for guiding surgery in primary hyperparathyroidism (PHPT). The aim of our study was to investigate the contrast-enhanced ultrasound (CEUS) characteristics of parathyroid gland adenoma (PGA) and to evaluate whether PGA can be differentiated from central cervical lymph nodes (CCLN). Fifty-four consecutive patients with PHPT were retrospectively enrolled, underwent preoperative imaging with high-resolution ultrasound (US) and CEUS, and then underwent parathyroidectomy. One hundred seventy-four lymph nodes from patients with papillary thyroid carcinoma (PTC) who underwent unilateral, subtotal, or total thyroidectomy with central neck dissection were examined by high-resolution US and CEUS. By incorporating US and CEUS characteristics, a predictive model presented as a nomogram was developed, and its performance and utility were evaluated by plotting receiver operating characteristic (ROC) curves and calibration curves and by decision curve analysis (DCA). Three US characteristics and two CEUS characteristics were independently associated with PGA versus CCLN and were used for machine learning model construction. The area under the ROC curve (AUC) of the US+CEUS model was 0.915, higher than that of the US-only model (0.874) and the CEUS-only model (0.791). It is recommended that CEUS techniques be used to enhance the diagnostic utility of US in cases of suspected parathyroid lesions. This is the first study to use a combination of US and CEUS to build a nomogram distinguishing PGA from CCLN, filling a gap in the existing literature.
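A hypothetical sketch of how a nomogram-style logistic model on binary US and CEUS characteristics could be fit and examined with ROC AUC and a decision-curve net-benefit calculation; the predictor names and file are placeholders, not the study's actual variables.

```python
# Illustrative sketch: logistic model on US/CEUS characteristics (as in a nomogram),
# AUC, and a simple decision-curve net-benefit calculation. Names are placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_csv("us_ceus_features.csv")          # placeholder
predictors = ["us_feat1", "us_feat2", "us_feat3", "ceus_feat1", "ceus_feat2"]
y = df["is_pga"].values                            # 1 = parathyroid adenoma, 0 = lymph node

model = LogisticRegression().fit(df[predictors], y)
prob = model.predict_proba(df[predictors])[:, 1]
print("AUC:", roc_auc_score(y, prob))

def net_benefit(y_true, prob, threshold):
    """Decision-curve net benefit at a given probability threshold."""
    pred = prob >= threshold
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    n = len(y_true)
    return tp / n - fp / n * threshold / (1 - threshold)

for thr in (0.1, 0.3, 0.5):
    print(thr, net_benefit(y, prob, thr))
```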

Meneses JP, Tejos C, Makalic E, Uribe S

PubMed | Sep 17 2025
Liver proton density fat fraction (PDFF), the ratio between fat-only and overall proton densities, is an extensively validated biomarker associated with several diseases. In recent years, numerous deep learning (DL)-based methods for estimating PDFF have been proposed to optimize acquisition and post-processing times without sacrificing accuracy compared to conventional methods. However, the lack of interpretability and the often poor generalizability of these DL-based models undermine the adoption of such techniques in clinical practice. In this work, we propose an Artificial Intelligence-based Decomposition of water and fat with Echo Asymmetry and Least-squares (AI-DEAL) method, designed to estimate both PDFF and the associated uncertainty maps. Once trained, AI-DEAL performs one-shot MRI water-fat separation by first calculating the nonlinear confounder variables, R<sub>2</sub><sup>∗</sup> and the off-resonance field. It then employs a weighted least-squares approach to compute water-only and fat-only signals, along with their corresponding covariance matrix, which are subsequently used to derive the PDFF and its associated uncertainty. We validated our method using in vivo liver CSE-MRI, a fat-water phantom, and a numerical phantom. AI-DEAL demonstrated PDFF biases of 0.25% and -0.12% at two liver ROIs, outperforming state-of-the-art deep learning-based techniques. Although trained using in vivo data, our method exhibited PDFF biases of -3.43% in the fat-water phantom and -0.22% in the numerical phantom with no added noise. The latter bias remained approximately constant when noise was introduced. Furthermore, the estimated uncertainties showed good agreement with the observed errors and the variations within each ROI, highlighting their potential value for assessing the reliability of the resulting PDFF maps.
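The linear part of this pipeline can be illustrated compactly: once the off-resonance field and R2* are fixed, the water and fat amplitudes follow from a least-squares fit, and first-order propagation of the parameter covariance gives a PDFF uncertainty. The sketch below uses a single fat peak, a constant noise level, and synthetic echoes, so it is a simplified stand-in for AI-DEAL's estimator rather than the published method.

```python
# Simplified sketch (single fat peak, synthetic echoes): with the off-resonance field
# and R2* fixed, water/fat amplitudes follow from a linear least-squares fit, and
# first-order error propagation of the parameter covariance yields a PDFF uncertainty.
import numpy as np

def ls_water_fat(signal, te, psi_hz, r2star, fat_freq_hz=-434.0, sigma=0.01):
    """signal: complex echoes at echo times te (s); returns (pdff, pdff_std)."""
    phasor = np.exp(1j * 2 * np.pi * psi_hz * te) * np.exp(-r2star * te)
    A = np.stack([phasor, phasor * np.exp(1j * 2 * np.pi * fat_freq_hz * te)], axis=1)
    x, *_ = np.linalg.lstsq(A, signal, rcond=None)        # complex water (W) and fat (F)
    W, F = x
    cov = sigma**2 * np.linalg.inv(A.conj().T @ A).real   # approximate parameter covariance
    w, f = abs(W), abs(F)
    pdff = f / (w + f)
    dw, df = -f / (w + f) ** 2, w / (w + f) ** 2           # d(pdff)/dw, d(pdff)/df
    var = dw**2 * cov[0, 0] + df**2 * cov[1, 1] + 2 * dw * df * cov[0, 1]
    return pdff, float(np.sqrt(max(var, 0.0)))

# Synthetic 6-echo acquisition: 20% fat fraction, 30 Hz off-resonance, R2* = 40 1/s
te = 1.0e-3 + np.arange(6) * 1.2e-3
s = (0.8 + 0.2 * np.exp(1j * 2 * np.pi * -434.0 * te)) \
    * np.exp(1j * 2 * np.pi * 30 * te) * np.exp(-40 * te)
print(ls_water_fat(s, te, psi_hz=30, r2star=40))           # ~ (0.20, small uncertainty)
```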

Jia S, Piché N, McKee MD, Reznikov N

PubMed | Sep 17 2025
Avian eggs exhibit a variety of shapes and sizes, reflecting different reproductive strategies. The eggshell not only protects the egg contents, but also regulates gas and water vapor exchange vital for embryonic development. While many studies have explored eggshell ultrastructure, the distribution of pores across the entire shell is less well understood because of a trade-off between resolution and field-of-view in imaging. To overcome this, a neural network was developed for resolution enhancement of low-resolution 3D tomographic data, while performing voxel-wise labeling. Trained on X-ray microcomputed tomography images of ostrich, guillemot and crow eggshells from a natural history museum collection, the model used stepwise magnification to create low- and high-resolution training sets. Registration performance was validated with a novel metric based on local grayscale gradients. An edge-attentive loss function prevented bias towards the dominant background class (95% of all voxels), ensuring accurate labeling of eggshell (5%) and pore (0.1%) voxels. The results indicate that besides edge-attention and class balancing, 3D context preservation and 3D convolution are of paramount importance for extrapolating subvoxel features.
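One way to picture the edge-attentive, class-balanced loss described above is a weighted voxel-wise cross-entropy in which rare classes get larger class weights and voxels adjacent to label boundaries get an extra multiplicative weight; the weights and the boundary heuristic below are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch of an edge-attentive, class-balanced voxel-wise loss for 3D
# segmentation. Class weights and the edge gain are illustrative.
import torch
import torch.nn.functional as F

def edge_attentive_loss(logits, labels, class_weights=(0.05, 1.0, 10.0), edge_gain=4.0):
    """logits: (B, 3, D, H, W); labels: (B, D, H, W) with 0=background, 1=shell, 2=pore."""
    w = torch.tensor(class_weights, device=logits.device)
    per_voxel = F.cross_entropy(logits, labels, weight=w, reduction="none")  # (B, D, H, W)

    # crude edge map: a voxel counts as an edge if any neighbour has a different label
    lab = labels.float().unsqueeze(1)
    neighbour_max = F.max_pool3d(lab, kernel_size=3, stride=1, padding=1)
    neighbour_min = -F.max_pool3d(-lab, kernel_size=3, stride=1, padding=1)
    edges = (neighbour_max != neighbour_min).squeeze(1).float()

    weight_map = 1.0 + edge_gain * edges
    return (per_voxel * weight_map).mean()

logits = torch.randn(1, 3, 16, 16, 16)
labels = torch.randint(0, 3, (1, 16, 16, 16))
print(edge_attentive_loss(logits, labels).item())
```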

Prucker P, Lemke T, Mertens CJ, Ziegelmayer S, Graf MM, Weller D, Kim SH, Gassert FT, Kader A, Dorfner FJ, Meddeb A, Makowski MR, Lammert J, Huber T, Lohöfer F, Bressem KK, Adams LC, Luiken I, Busch F

PubMed | Sep 17 2025
To prospectively assess the diagnostic performance, workflow efficiency, and clinical impact of three commercial deep-learning tools (BoneView, Rayvolve, RBfracture) for routine musculoskeletal radiograph interpretation. From January to March 2025, two radiologists (4 and 5 years' experience) independently interpreted 1,037 adult musculoskeletal studies (2,926 radiographs) first unaided and, after 14-day washouts, with each AI tool in a randomized crossover design. Ground truth was established by confirmatory CT when available. Outcomes included sensitivity, specificity, accuracy, area under the receiver operating characteristic curve (AUC), interpretation time, diagnostic confidence (5-point Likert), and rates of additional CT recommendations and senior consultations. DeLong tests compared AUCs; Mann-Whitney U and χ2 tests assessed secondary endpoints. AI assistance did not significantly change performance for fractures, dislocations, or effusions. For fractures, AUCs were comparable to baseline (Reader 1: 96.50 % vs. 96.30-96.50 %; Reader 2: 95.35 % vs. 95.97 %; all p > 0.11). For dislocations, baseline AUCs (Reader 1: 92.66 %; Reader 2: 90.68 %) were unchanged with AI (92.76-93.95 % and 92.00 %; p ≥ 0.280). For effusions, baseline AUCs (Reader 1: 92.52 %; Reader 2: 96.75 %) were similar with AI (93.12 % and 96.99 %; p ≥ 0.157). Median interpretation times decreased with AI (Reader 1: 34 s to 21-25 s; Reader 2: 30 s to 21-26 s; all p < 0.001). Confidence improved across tools: BoneView increased combined "very good/excellent" ratings versus unaided reads (Reader 1: 509 vs. 449, p < 0.001; Reader 2: 483 vs. 439, p < 0.001); Rayvolve (Reader 1: 456 vs. 449, p = 0.029; Reader 2: 449 vs. 439, p < 0.001) and RBfracture (Reader 1: 457 vs. 449, p = 0.017; Reader 2: 448 vs. 439, p = 0.001) yielded smaller but significant gains. Reader 1 recommended fewer CT scans with AI assistance (33 vs. 22-23, p = 0.007). In a real-world clinical setting, AI-assisted interpretation of musculoskeletal radiographs reduced reading time and increased diagnostic confidence without materially affecting diagnostic performance. These findings support AI assistance as a lever for workflow efficiency and potential cost-effectiveness at scale.
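For the secondary endpoints, the statistical comparisons named above are standard; the sketch below shows, with made-up reading times and the CT-recommendation counts quoted in the abstract, how a Mann-Whitney U test and a chi-squared test could be run in Python.

```python
# Minimal sketch with illustrative numbers (not the study's raw data): Mann-Whitney U
# for interpretation times and a chi-squared test for CT recommendation rates.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

rng = np.random.default_rng(0)
times_unaided = rng.normal(34, 8, 500)   # seconds, illustrative
times_ai = rng.normal(23, 7, 500)

u_stat, p_time = mannwhitneyu(times_unaided, times_ai, alternative="two-sided")
print("Mann-Whitney U p-value:", p_time)

# CT recommendations: [recommended, not recommended] for unaided vs AI-assisted reads,
# using the 33 vs 22 counts reported above over 1,037 studies.
table = np.array([[33, 1004], [22, 1015]])
chi2, p_ct, dof, _ = chi2_contingency(table)
print("Chi-squared p-value:", p_ct)
```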