Page 5 of 3793788 results

Artificial intelligence applications in thyroid cancer care.

Pozdeyev N, White SL, Bell CC, Haugen BR, Thomas J

PubMed | Sep 25 2025
Artificial intelligence (AI) has created tremendous opportunities to improve thyroid cancer care. We searched the PubMed database with the query "artificial intelligence thyroid cancer" through May 31, 2025, and highlight a set of high-impact publications selected for technical innovation, large generalizable training datasets, and independent and/or prospective validation of AI. We review the key applications of AI for diagnosing and managing thyroid cancer. Our primary focus is the use of computer vision to evaluate thyroid nodules on ultrasound, the area of thyroid AI that has gained the most attention from researchers and will likely have a significant clinical impact. We also highlight AI for detecting and predicting thyroid cancer neck lymph node metastases, digital cyto- and histopathology, large language models for unstructured data analysis, patient education, and other clinical applications. We discuss how thyroid AI technology has evolved and cite the most impactful research studies. Finally, we balance our excitement about the potential of AI to improve clinical care for thyroid cancer against current limitations, such as the lack of high-quality, independent prospective validation of AI in clinical trials, the uncertain added value of AI software, unknown performance on non-papillary thyroid cancer types, and the complexity of clinical implementation. AI promises to improve thyroid cancer diagnosis, reduce healthcare costs, and enable personalized management, but high-quality, independent prospective validation in clinical trials is still lacking and will be necessary for the clinical community's broad adoption of this technology.

Deep learning powered breast ultrasound to improve characterization of breast masses: a prospective study.

Singla V, Garg D, Negi S, Mehta N, Pallavi T, Choudhary S, Dhiman A

PubMed | Sep 25 2025
Background: The diagnostic performance of ultrasound (US) is heavily reliant on the operator's expertise. Advances in artificial intelligence (AI) have introduced deep learning (DL) tools that detect morphology beyond human perception, providing automated interpretations.
Purpose: To evaluate Smart-Detect (S-Detect), a DL tool, for its potential to enhance diagnostic precision and standardize US assessments among radiologists with varying levels of experience.
Material and Methods: This prospective observational study was conducted between May and November 2024. US and S-Detect analyses were performed by a breast imaging fellow. Images were independently analyzed by five radiologists with breast imaging experience ranging from under 1 year to 15 years. Each radiologist assessed the images twice: without and with S-Detect. ROC analyses compared diagnostic performance. True downgrades and upgrades were calculated to determine the biopsy reduction achievable with AI assistance. Kappa statistics assessed inter-radiologist agreement before and after incorporating S-Detect.
Results: This study analyzed 230 breast masses from 216 patients. S-Detect demonstrated high specificity (92.7%), PPV (92.9%), NPV (87.9%), and accuracy (90.4%). It enhanced less experienced radiologists' performance, increasing sensitivity (85% to 93.33%), specificity (54.5% to 73.64%), and accuracy (70.43% to 83.91%; P < 0.001). AUC increased significantly for the less experienced radiologists (0.698 to 0.835; P < 0.001), with no significant gains for the expert radiologist. S-Detect also reduced variability in assessment between radiologists, increasing kappa agreement from 0.459 to 0.696, and enabled true downgrades that reduced unnecessary biopsies.
Conclusion: The DL tool improves diagnostic accuracy, bridges the expertise gap, reduces reliance on invasive procedures, and enhances consistency in clinical decisions among radiologists.
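The two statistics this study leans on, Cohen's kappa for inter-reader agreement and ROC AUC for discrimination, can be computed with a short dependency-free sketch. All reader calls and scores below are made up for illustration; they are not the study's data.

```python
# Illustrative sketch: Cohen's kappa (agreement) and ROC AUC (discrimination)
# for a binary reader study. Data are toy values, not the study's.

def cohen_kappa(a, b):
    """Cohen's kappa for two binary raters."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    pa, pb = sum(a) / n, sum(b) / n
    expected = pa * pb + (1 - pa) * (1 - pb)  # chance agreement
    return (observed - expected) / (1 - expected)

def roc_auc(truth, scores):
    """AUC as the probability a random positive outscores a random negative."""
    pos = [s for t, s in zip(truth, scores) if t == 1]
    neg = [s for t, s in zip(truth, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

truth    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # hypothetical biopsy results
reader_a = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # one reader's calls without AI
reader_b = [1, 1, 1, 0, 0, 1, 1, 0, 0, 0]   # a second reader's calls
scores   = [0.9, 0.2, 0.8, 0.6, 0.1, 0.4, 0.95, 0.3, 0.7, 0.15]

print(f"kappa={cohen_kappa(reader_a, reader_b):.2f}")  # 0.60
print(f"AUC={roc_auc(truth, scores):.2f}")             # 1.00
```

A kappa rise like the reported 0.459 to 0.696 means the readers' calls agree substantially more often than chance would predict after AI assistance.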

Clinically Explainable Disease Diagnosis based on Biomarker Activation Map.

Zang P, Wang C, Hormel TT, Bailey ST, Hwang TS, Jia Y

PubMed | Sep 25 2025
Artificial intelligence (AI)-based disease classifiers have achieved specialist-level performance in several diagnostic tasks. However, real-world adoption of these classifiers remains challenging due to the black box issue. Here, we report a novel biomarker activation map (BAM) generation framework that can provide clinically meaningful explainability to current AI-based disease classifiers. We designed the framework around the concept of residual counterfactual explanation, generating counterfactual outputs that reverse the decision-making of the disease classifier. The BAM is generated as the difference map between the counterfactual output and the original input, with postprocessing. We evaluated the BAM on four disease classifiers: an age-related macular degeneration classifier based on fundus photography, a diabetic retinopathy classifier based on optical coherence tomography angiography, a brain tumor classifier based on magnetic resonance imaging (MRI), and a breast cancer classifier based on computed tomography (CT) scans. The highlighted regions in the BAM correlated highly with manually demarcated biomarkers of each disease. The BAM can improve the clinical applicability of an AI-based disease classifier by providing intuitive output that clinicians can use to understand and verify diagnostic decisions.
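As described, the BAM is the post-processed difference between a counterfactual output and the original input. A minimal sketch of that difference-map step on a toy image follows; the normalization and thresholding choices are our assumptions, and any counterfactual generator could supply the second image (the paper's generator and postprocessing may differ).

```python
# Hedged sketch of the BAM difference-map step on a toy image.
import numpy as np

def biomarker_activation_map(original, counterfactual, threshold=0.1):
    """Difference map between counterfactual and original, with simple
    postprocessing (magnitude, normalization, small-residual suppression)."""
    diff = np.abs(counterfactual.astype(float) - original.astype(float))
    diff /= max(diff.max(), 1e-8)      # normalize to [0, 1]
    diff[diff < threshold] = 0.0       # suppress low-magnitude residue
    return diff

# Toy 4x4 "image": the counterfactual only alters the top-left patch,
# so the BAM highlights exactly that region.
original = np.zeros((4, 4))
counterfactual = original.copy()
counterfactual[:2, :2] = 1.0

bam = biomarker_activation_map(original, counterfactual)
print(bam)
```

On a real classifier, the altered region is wherever the generator had to change tissue appearance to flip the diagnosis, which is what makes the map clinically interpretable.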

Active-Supervised Model for Intestinal Ulcers Segmentation Using Fuzzy Labeling.

Chen J, Lin Y, Saeed F, Ding Z, Diyan M, Li J, Wang Z

PubMed | Sep 25 2025
Inflammatory bowel disease (IBD) is a chronic inflammatory condition of the intestines with a rising global incidence. Colonoscopy remains the gold standard for IBD diagnosis, but traditional image-scoring methods are subjective and complex, impacting diagnostic accuracy and efficiency. To address these limitations, this paper investigates machine learning techniques for intestinal ulcer segmentation, focusing on multi-category ulcer segmentation to enhance IBD diagnosis. We identified two primary challenges in intestinal ulcer segmentation: 1) labeling noise, where inaccuracies in medical image annotation introduce ambiguity, hindering model training, and 2) performance variability across datasets, where models struggle to maintain high accuracy due to medical image diversity. To address these challenges, we propose an active ulcer segmentation algorithm based on fuzzy labeling. A collaborative training segmentation model is designed to utilize pixel-wise confidence extracted from fuzzy labels, distinguishing high- and low-confidence regions, and enhancing robustness to noisy labels through network cooperation. To mitigate performance disparities, we introduce a data adaptation strategy leveraging active learning. By selecting high-information samples based on uncertainty and diversity, the strategy enables incremental model training, improving adaptability. Extensive experiments on public and hospital datasets validate the proposed methods. Our collaborative training model and active learning strategy show significant advantages in handling noisy labels and enhancing model performance across datasets, paving the way for more precise and efficient IBD diagnosis.
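One ingredient of the fuzzy-label idea above can be sketched as a confidence-weighted per-pixel loss, in which ambiguous labels contribute less to training. The weighting scheme and all names below are our illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: a per-pixel binary cross-entropy weighted by label
# confidence derived from fuzzy labels (values near 0.5 are ambiguous).
import numpy as np

def confidence_weighted_bce(pred, fuzzy_label, conf_threshold=0.7):
    """BCE where each pixel is weighted by how confident its label is."""
    confidence = 2.0 * np.abs(fuzzy_label - 0.5)   # 0 = ambiguous, 1 = sure
    hard_label = (fuzzy_label > 0.5).astype(float)
    eps = 1e-7
    bce = -(hard_label * np.log(pred + eps)
            + (1 - hard_label) * np.log(1 - pred + eps))
    # Fully trust confident pixels; down-weight the rest proportionally.
    weight = np.where(confidence >= conf_threshold, 1.0, confidence)
    return (weight * bce).mean()

pred = np.array([[0.9, 0.8], [0.2, 0.5]])       # model probabilities
fuzzy = np.array([[1.0, 0.95], [0.05, 0.5]])    # last pixel fully ambiguous
loss = confidence_weighted_bce(pred, fuzzy)
print(f"loss={loss:.4f}")
```

Because the fully ambiguous pixel gets zero weight, the model's prediction there does not affect the loss at all, which is the noise-robustness property the abstract describes.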

Conditional Virtual Imaging for Few-Shot Vascular Image Segmentation.

He Y, Ge R, Tang H, Liu Y, Su M, Coatrieux JL, Shu H, Chen Y, He Y

PubMed | Sep 25 2025
In the field of medical image processing, vascular image segmentation plays a crucial role in clinical diagnosis, treatment planning, prognosis, and medical decision-making. Accurate and automated segmentation of vascular images can assist clinicians in understanding the vascular network structure, leading to more informed medical decisions. However, manual annotation of vascular images is time-consuming and challenging due to the fine and low-contrast vascular branches, especially in the medical imaging domain where annotation requires specialized knowledge and clinical expertise. Data-driven deep learning models struggle to achieve good performance when only a small number of annotated vascular images are available. To address this issue, this paper proposes a novel Conditional Virtual Imaging (CVI) framework for few-shot vascular image segmentation learning. The framework combines limited annotated data with extensive unlabeled data to generate high-quality images, effectively improving the accuracy and robustness of segmentation learning. Our approach primarily includes two innovations: First, aligned image-mask pair generation, which leverages the powerful image generation capabilities of large pre-trained models to produce high-quality vascular images with complex structures using only a few training images; Second, the Dual-Consistency Learning (DCL) strategy, which simultaneously trains the generator and segmentation model, allowing them to learn from each other and maximize the utilization of limited data. Experimental results demonstrate that our CVI framework can generate high-quality medical images and effectively enhance the performance of segmentation models in few-shot scenarios. Our code will be made publicly available online.
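The dual-consistency idea can be caricatured as follows: an image generated from a conditioning mask should be segmented back to that same mask, and the disagreement is a training signal for both the generator and the segmenter. This toy sketch uses a standard Dice loss as a stand-in; the paper's actual loss terms may differ.

```python
# Hedged sketch of a mask-consistency signal for generator/segmenter
# co-training, using Dice loss as a generic overlap measure.
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """1 - Dice overlap between a soft prediction and a binary target."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Conditioning mask used to generate a virtual vascular image (toy 4x4).
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
# Segmenter's prediction on the generated image (hypothetical, slightly off).
seg_pred = mask.copy()
seg_pred[1, 1] = 0.8

consistency = dice_loss(seg_pred, mask)
print(f"consistency loss={consistency:.4f}")
```

In the co-training view, this loss would be backpropagated to both models: the generator learns to produce images whose vessels match the mask, and the segmenter learns to recover them.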

The identification and severity staging of chronic obstructive pulmonary disease using quantitative CT parameters, radiomics features, and deep learning features.

Feng S, Zhang W, Zhang R, Yang Y, Wang F, Miao C, Chen Z, Yang K, Yao Q, Liang Q, Zhao H, Chen Y, Liang C, Liang X, Chen R, Liang Z

PubMed | Sep 25 2025
To evaluate the value of quantitative CT (QCT) parameters, radiomics features, and deep learning (DL) features based on inspiratory and expiratory CT for the identification and severity staging of chronic obstructive pulmonary disease (COPD). This retrospective analysis included 223 COPD patients and 59 healthy controls from the Guangzhou cohort. We stratified the participants into training and testing cohorts (7:3) and extracted DL features with the VGG-16 network, radiomics features with the pyradiomics package, and QCT parameters with the NeuLungCARE software. Logistic regression was employed to construct models for the identification and severity staging of COPD, and the Shenzhen cohort was used as an external validation cohort to assess the models' generalizability. Among the COPD identification models, Model 5-B1 (QCT combined with DL features on biphasic CT) showed the best predictive performance, with AUCs of 0.920 and 0.897 in the testing and external validation cohorts, respectively. Among the COPD severity staging models, Model 4-B2 (QCT combined with radiomics features on biphasic CT) and Model 5-B2 (QCT combined with DL features on biphasic CT) outperformed the other models. This biphasic CT-based multi-modal approach integrating QCT with radiomics or DL features offers a clinically valuable tool for COPD identification and severity staging.
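The modeling step named above (logistic regression over concatenated feature sets, evaluated by AUC on a held-out split) can be sketched with synthetic stand-in features; the study's real features come from NeuLungCARE, pyradiomics, and VGG-16, and a plain gradient-descent fit is used here so the sketch stays dependency-free.

```python
# Hedged sketch: logistic regression on concatenated "QCT" + "DL" features
# (all synthetic), evaluated by AUC on a held-out split.
import numpy as np

rng = np.random.default_rng(0)
n = 200
qct = rng.normal(size=(n, 3))          # stand-ins for QCT parameters
dl = rng.normal(size=(n, 5))           # stand-ins for DL features
X = np.hstack([qct, dl])
# Synthetic labels driven by two of the features, plus noise.
logits_true = 1.5 * X[:, 0] - 2.0 * X[:, 4]
y = (logits_true + rng.normal(scale=0.5, size=n) > 0).astype(float)

def fit_logreg(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression (bias folded in)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

w = fit_logreg(X[:140], y[:140])       # 7:3-style split, as in the study
Xb_test = np.hstack([X[140:], np.ones((60, 1))])
scores = 1.0 / (1.0 + np.exp(-Xb_test @ w))

# AUC via the rank statistic over positive/negative score pairs.
pos, neg = scores[y[140:] == 1], scores[y[140:] == 0]
auc = (pos[:, None] > neg[None, :]).mean()
print(f"test AUC={auc:.3f}")
```

Combining feature families, as in Model 5-B1, amounts to widening `X` with extra columns before the same fit, which is why logistic regression is a convenient fusion baseline.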

SAFNet: a spatial adaptive fusion network for dual-domain undersampled MRI reconstruction.

Huo Y, Zhang H, Ge D, Ren Z

PubMed | Sep 25 2025
Undersampled magnetic resonance imaging (MRI) reconstruction reduces scanning time while preserving image quality, improving patient comfort and clinical efficiency. Current parallel reconstruction strategies leverage k-space and image-domain information to improve feature extraction and accuracy. However, most existing dual-domain reconstruction methods rely on simplistic fusion strategies that ignore spatial feature variations, suffer from constrained receptive fields that limit modeling of complex anatomical structures, and employ static frameworks that cannot adapt to the heterogeneous artifact profiles induced by diverse undersampling patterns. This paper introduces a Spatial Adaptive Fusion Network (SAFNet) for dual-domain undersampled MRI reconstruction. SAFNet comprises two parallel reconstruction branches. A Dynamic Perception Initialization Module (DPIM) in each encoder enriches receptive fields for multi-scale information capture. Spatial Adaptive Fusion Modules (SAFM) within each branch's decoder achieve pixel-wise adaptive fusion of dual-domain features and incorporate original magnitude information, ensuring faithful preservation of intensity details. The Weighted Shortcut Module (WSM) enables dynamic strategy adaptation by scaling shortcut connections to adaptively balance residual learning and direct reconstruction. Experiments demonstrate SAFNet's superior accuracy and adaptability over state-of-the-art methods, offering valuable insights for image reconstruction and multimodal information fusion.
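For context, the problem setting (not the SAFNet architecture) can be sketched as retrospective Cartesian undersampling of k-space followed by the zero-filled baseline reconstruction that learned dual-domain methods try to improve. The sampling pattern below is a common illustrative choice, not necessarily the one used in the paper.

```python
# Hedged sketch: retrospective k-space undersampling and the zero-filled
# baseline reconstruction. The "image" is random stand-in data.
import numpy as np

rng = np.random.default_rng(1)
image = rng.normal(size=(64, 64))                 # stand-in for an MR slice

kspace = np.fft.fftshift(np.fft.fft2(image))      # fully sampled k-space

# Random column (phase-encode) undersampling, always keeping the
# low-frequency center lines -- a common Cartesian scheme.
mask = rng.random(64) < 0.25
mask[28:36] = True                                # keep the k-space center
undersampled = kspace * mask[None, :]

zero_filled = np.fft.ifft2(np.fft.ifftshift(undersampled)).real

err = np.abs(zero_filled - image).mean()
print(f"sampling ratio={mask.mean():.2f}, zero-filled MAE={err:.3f}")
```

The aliasing error left by zero-filling is exactly the artifact profile that depends on the sampling mask, which motivates SAFNet's adaptive rather than static fusion of the two domains.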

Segmentation-model-based framework to detect aortic dissection on non-contrast CT images: a retrospective study.

Wang Q, Huang S, Pan W, Feng Z, Lv L, Guan D, Yang Z, Huang Y, Liu W, Shui W, Ying M, Xiao W

PubMed | Sep 25 2025
To develop an automated deep learning framework for detecting aortic dissection (AD) and visualizing its morphology and extent on non-contrast CT (NCCT) images. This retrospective study included patients who underwent aortic CTA from January 2021 to January 2023 at two tertiary hospitals. Demographic data, medical history, and CT scans were collected. A segmentation-based deep learning model was trained to identify true and false lumens on NCCT images, with performance evaluated on internal and external test sets. Segmentation accuracy was measured using the Dice coefficient, while the intraclass correlation coefficient (ICC) assessed consistency between predicted and ground-truth false lumen volumes. Receiver operating characteristic (ROC) analysis evaluated the model's predictive performance. Among 701 patients (median age, 53 years; IQR: 41-64; 486 males), data from Center 1 were split 8:2 into training (439 cases: 318 non-AD, 121 AD) and internal test sets (106 cases: 77 non-AD, 29 AD), while Center 2 served as the external test set (156 cases: 80 non-AD, 76 AD). The ICC for false lumen volume was 0.823 (95% CI: 0.750-0.880) internally and 0.823 (95% CI: 0.760-0.870) externally. The model achieved an AUC of 0.935 (95% CI: 0.894-0.968) in the external test set, with an optimal cutoff of 7649 mm³ yielding 88.2% sensitivity, 91.3% specificity, and 89.0% negative predictive value. The proposed framework, which uses false lumen volume as the indicator of AD, accurately detects AD on NCCT and effectively visualizes its morphological features, demonstrating strong clinical potential. It can reduce missed diagnoses of AD in time-critical emergencies, and presenting the true and false lumens on NCCT benefits patients with contraindications to contrast media and supports treatment decisions.
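The two evaluation tools named in this abstract, the Dice coefficient for segmentation overlap and a false-lumen volume cutoff for detection, can be sketched on toy arrays. The masks and voxel size below are illustrative; only the 7649 mm³ cutoff comes from the study.

```python
# Hedged sketch: Dice overlap and a volume-threshold detection rule,
# on toy binary masks with an assumed 1 mm isotropic voxel size.
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

pred = np.zeros((8, 8), bool)
pred[2:6, 2:6] = True      # predicted false lumen: 16 voxels
truth = np.zeros((8, 8), bool)
truth[3:7, 2:6] = True     # ground truth: 16 voxels, shifted one row

print(f"Dice={dice(pred, truth):.2f}")   # 12 overlapping voxels -> 0.75

voxel_volume_mm3 = 1.0                   # assumed isotropic voxel size
false_lumen_volume = pred.sum() * voxel_volume_mm3
is_ad = false_lumen_volume > 7649        # the study's reported cutoff
print(f"volume={false_lumen_volume} mm^3, AD={bool(is_ad)}")
```

In the study, this thresholded volume is what converts a voxel-wise segmentation into the per-patient AD/non-AD call evaluated by ROC analysis.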

Proof-of-concept comparison of an artificial intelligence-based bone age assessment tool with Greulich-Pyle and Tanner-Whitehouse version 2 methods in a pediatric cohort.

Marinelli L, Lo Mastro A, Grassi F, Berritto D, Russo A, Patanè V, Festa A, Grassi E, Grandone A, Nasto LA, Pola E, Reginelli A

PubMed | Sep 25 2025
Bone age assessment is essential in evaluating pediatric growth disorders, and artificial intelligence (AI) systems offer potential improvements in accuracy and reproducibility compared to traditional methods. We compared the performance of a commercially available AI-based software (BoneView BoneAge, Gleamer, Paris, France) against two human-assessed methods, the Greulich-Pyle (GP) atlas and Tanner-Whitehouse version 2 (TW2), in a pediatric population. This proof-of-concept study included 203 pediatric patients (mean age, 9.0 years; range, 2.0-17.0 years) who underwent hand and wrist radiographs for suspected endocrine or growth-related conditions. After excluding technically inadequate images, 157 cases were analyzed using the AI and GP-assessed methods; a subset of 35 patients was also evaluated with the TW2 method by a pediatric endocrinologist. Performance was measured using mean absolute error (MAE), root mean square error (RMSE), bias, and Pearson's correlation coefficient, with chronological age as the reference. The AI model achieved an MAE of 1.38 years, comparable to the radiologist's GP-based estimate (MAE, 1.30 years) and superior to TW2 (MAE, 2.86 years). RMSE values were 1.75, 1.80, and 3.88 years, respectively. AI showed minimal bias (-0.05 years), while TW2-based assessments systematically underestimated bone age (bias, -2.63 years). Strong correlations with chronological age were observed for AI (r = 0.857) and GP (r = 0.894), but not for TW2 (r = 0.490). BoneView demonstrated accuracy comparable to the radiologist-assessed GP method and outperformed TW2 assessments in this cohort. AI-based systems may enhance consistency in pediatric bone age estimation but require careful validation, especially in ethnically diverse populations.
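The four agreement metrics reported above (MAE, RMSE, bias, and Pearson's r against chronological age) can be computed with a short stdlib-only sketch; the ages below are made up for illustration.

```python
# Hedged sketch: MAE, RMSE, signed bias, and Pearson's r between
# hypothetical AI bone-age estimates and chronological age.
import math

chrono = [5.0, 7.5, 9.0, 11.0, 14.0]       # chronological ages (years)
ai_est = [5.5, 7.0, 9.8, 10.2, 14.6]       # hypothetical AI estimates

errors = [a - c for a, c in zip(ai_est, chrono)]
mae = sum(abs(e) for e in errors) / len(errors)
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
bias = sum(errors) / len(errors)           # signed mean error

mc = sum(chrono) / len(chrono)
ma = sum(ai_est) / len(ai_est)
num = sum((c - mc) * (a - ma) for c, a in zip(chrono, ai_est))
den = math.sqrt(sum((c - mc) ** 2 for c in chrono)
                * sum((a - ma) ** 2 for a in ai_est))
r = num / den

print(f"MAE={mae:.2f}  RMSE={rmse:.2f}  bias={bias:+.2f}  r={r:.3f}")
```

Note that bias keeps the sign of the errors, which is how the study detects TW2's systematic underestimation even when its MAE alone would only show magnitude.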

Machine Learning-Based Classification of White Matter Functional Changes in Stroke Patients Using Resting-State fMRI.

Liu LH, Wang CX, Huang X, Chen RB

PubMed | Sep 25 2025
Neuroimaging studies of brain function are important research methods widely applied to stroke patients. To date, most studies have focused on functional imaging of the gray matter cortex, and relevant research indicates that certain gray matter regions in stroke patients exhibit abnormal activity during the resting state. White-matter-based studies of brain function, however, remain insufficient: the changes in functional connectivity that stroke causes in white matter, and the repair or compensation mechanisms of white matter function after stroke, are still unclear. The aim of this study is to investigate changes in functional connectivity within the white matter of stroke patients, reveal the reorganization characteristics of white matter functional networks after stroke, and thereby provide potential biomarkers and new clinical insights for rehabilitation and treatment. We recruited 36 stroke patients and 36 healthy controls for resting-state functional magnetic resonance imaging (rs-fMRI). Regional homogeneity (ReHo) and degree centrality (DC), which are sensitive to white matter functional abnormalities, were selected as feature vectors: ReHo reflects local neuronal synchrony by measuring the activity similarity between adjacent regions, while DC quantifies global network hub properties by counting each region's connections, thereby identifying regions whose neural activation changes significantly impact the brain network. The combination of the two effectively characterizes functional changes in white matter.
ReHo and DC metrics were then used as feature vectors for machine learning classification. The results indicated significant differences in white matter DC and ReHo values between stroke patients and healthy controls. In the two-sample t-test analysis of white matter DC, stroke patients showed a significant reduction in DC values in the genu of the corpus callosum (GCC), the body of the corpus callosum (BCC), and the left anterior corona radiata (ACR_L) (GCC: 0.143 vs. 1.024; BCC: 0.238 vs. 1.143; ACR_L: 0.143 vs. 0.821; p < 0.001), but an increase in the left superior longitudinal fasciculus (SLF_L) (1.190 vs. 0.190, p < 0.001). In the two-sample t-test analysis of white matter ReHo, stroke patients exhibited lower ReHo values than healthy controls in the GCC and BCC (GCC: 0.859 vs. 1.375; BCC: 1.156 vs. 1.687; p < 0.001). Using leave-one-out cross-validation (LOOCV) to evaluate the white matter DC and ReHo features with SVM classification models distinguishing stroke patients from healthy controls, the DC classifier reached an AUC of 0.89 and the ReHo classifier an AUC of 0.98, indicating that the features are valid and discriminative. These findings suggest altered functional connectivity in specific white matter regions following stroke: weakened connectivity in the GCC, BCC, and ACR_L, with compensatory enhancement in the SLF_L. They reveal the reorganization characteristics of white matter functional networks after stroke, which may provide potential biomarkers for rehabilitation treatment and offer new clinical insights for the rehabilitation and treatment of stroke patients.
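The validation scheme described above, leave-one-out cross-validation over subject-level ReHo/DC feature vectors, can be sketched as follows. The study used an SVM; a nearest-centroid classifier stands in here to keep the sketch dependency-free, and all feature values are synthetic.

```python
# Hedged sketch: LOOCV over toy subject-level "DC"/"ReHo" features, with a
# nearest-centroid classifier standing in for the study's SVM.
import numpy as np

rng = np.random.default_rng(2)
# 20 subjects x 2 features; patients (label 1) shifted downward, loosely
# mimicking the reduced GCC/BCC values reported above.
controls = rng.normal(loc=[1.0, 1.4], scale=0.2, size=(10, 2))
patients = rng.normal(loc=[0.3, 0.9], scale=0.2, size=(10, 2))
X = np.vstack([controls, patients])
y = np.array([0] * 10 + [1] * 10)

correct = 0
for i in range(len(X)):                      # leave one subject out
    train = np.delete(np.arange(len(X)), i)
    c0 = X[train][y[train] == 0].mean(axis=0)
    c1 = X[train][y[train] == 1].mean(axis=0)
    pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
    correct += pred == y[i]

accuracy = correct / len(X)
print(f"LOOCV accuracy={accuracy:.2f}")
```

LOOCV is attractive at this sample size (36 + 36 subjects) because every subject serves as a test case exactly once, though its per-fold estimates are high-variance compared with k-fold schemes.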
