Mapping the Evolution of Thyroid Ultrasound Research: A 30-Year Bibliometric Analysis.

Jiang T, Yang C, Wu L, Li X, Zhang J

PubMed · Aug 21, 2025
Thyroid ultrasound has emerged as a critical diagnostic modality, attracting substantial research attention. This bibliometric analysis systematically maps the 30-year evolution of thyroid ultrasound research to identify developmental trends, research hotspots, and emerging frontiers. English-language articles and reviews (1994-2023) were extracted from the Web of Science Core Collection. Bibliometric analysis was performed using VOSviewer and CiteSpace to examine collaborative networks among countries/institutions/authors, reference timeline visualization, and keyword burst detection. A total of 8,489 documents were included for analysis. An overall upward trend in research publications was found. China, the United States, and Italy were the most productive countries, while the United States, Italy, and South Korea had the greatest influence. The journal Thyroid had the highest impact factor. The keywords with the greatest burst strength were "disorders", "thyroid volume", and "association guidelines". The reference timeline view showed that deep learning, ultrasound-based risk stratification systems, and radiofrequency ablation were the most recent reference clusters. Three dominant themes emerged: the ultrasound characteristics of thyroid disorders, the application of new techniques, and the assessment of the risk of malignancy of thyroid nodules. Applications of deep learning and the development and refinement of related guidelines such as TI-RADS are the current focus of research. The specific application efficacy and improvement of TI-RADS, together with the optimization of deep learning algorithms and their clinical applicability, will be the focus of subsequent research.

Hierarchical Multi-Label Classification Model for CBCT-Based Extraction Socket Healing Assessment and Stratified Diagnostic Decision-Making to Assist Implant Treatment Planning.

Li Q, Han R, Huang J, Liu CB, Zhao S, Ge L, Zheng H, Huang Z

PubMed · Aug 21, 2025
Dental implant treatment planning requires assessing extraction socket healing, yet current methods face challenges distinguishing soft tissue from woven bone on cone beam computed tomography (CBCT) imaging and lack standardized classification systems. In this study, we propose a hierarchical multilabel classification model for CBCT-based extraction socket healing assessment. We established a novel classification system that divides extraction socket healing status into two levels: Level 1 distinguishes physiological healing (Type I) from pathological healing (Type II); Level 2 further subdivides these into five subtypes. The HierTransFuse-Net architecture integrates ResNet50 with a two-dimensional transformer module for hierarchical multilabel classification. Additionally, a stratified diagnostic principle coupled with random forest algorithms supported personalized implant treatment planning. The HierTransFuse-Net model performed excellently in classifying extraction socket healing, achieving a mean accuracy (mAccuracy) of 0.9705, with mPrecision, mRecall, and mF1 scores of 0.9156, 0.9376, and 0.9253, respectively. It also demonstrated superior diagnostic reliability (κω = 0.9234), significantly exceeding that of clinical practitioners (mean κω = 0.7148, range: 0.6449-0.7843). The random forest model based on stratified diagnostic decision indicators achieved an accuracy of 81.48% and an mF1 score of 82.55% in predicting 12 clinical treatment pathways. This study successfully developed HierTransFuse-Net, which demonstrated excellent performance in distinguishing extraction socket healing statuses and subtypes. Random forest algorithms based on stratified diagnostic indicators show potential for clinical pathway prediction. The hierarchical multilabel classification system simulates clinical diagnostic reasoning, enabling precise disease stratification and providing a scientific basis for personalized treatment decisions.
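The core pattern described here, a CNN backbone whose spatial features feed a transformer module before branching into one head per hierarchy level, can be sketched compactly. Below is a minimal illustrative PyTorch version; the layer sizes, number of transformer layers, and head wiring are assumptions for exposition, not the published HierTransFuse-Net configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class HierTransFuseSketch(nn.Module):
    """Illustrative hierarchical multilabel classifier: ResNet50 features
    are refined by a transformer encoder, then split into two heads
    (Level 1: physiological vs. pathological; Level 2: five subtypes)."""
    def __init__(self, n_level2=5, d_model=2048, n_heads=8):
        super().__init__()
        backbone = resnet50(weights=None)
        # Drop avgpool and fc: output is B x 2048 x 7 x 7 for 224x224 input.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head_l1 = nn.Linear(d_model, 2)         # Type I vs. Type II
        self.head_l2 = nn.Linear(d_model, n_level2)  # five subtypes

    def forward(self, x):
        f = self.features(x)                      # B, 2048, H, W
        tokens = f.flatten(2).transpose(1, 2)     # B, H*W, 2048 spatial tokens
        z = self.transformer(tokens).mean(dim=1)  # pooled token representation
        return self.head_l1(z), self.head_l2(z)

model = HierTransFuseSketch()
logits_l1, logits_l2 = model(torch.randn(2, 3, 224, 224))
```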

Integrating Imaging-Derived Clinical Endotypes with Plasma Proteomics and External Polygenic Risk Scores Enhances Coronary Microvascular Disease Risk Prediction

Venkatesh, R., Cherlin, T., Penn Medicine BioBank, Ritchie, M. D., Guerraty, M., Verma, S. S.

medRxiv preprint · Aug 21, 2025
Coronary microvascular disease (CMVD) is an underdiagnosed but significant contributor to the burden of ischemic heart disease, characterized by angina and myocardial infarction. The development of risk prediction models such as polygenic risk scores (PRS) for CMVD has been limited by a lack of large-scale genome-wide association studies (GWAS). However, there is significant overlap between CMVD and the enrollment criteria for coronary artery disease (CAD) GWAS. In this study, we developed CMVD PRS models by selecting variants identified in a CMVD GWAS and applying weights from an external CAD GWAS, using CMVD-associated loci as proxies for genetic risk. We integrated plasma proteomics, clinical measures from perfusion PET imaging, and PRS to evaluate their contributions to CMVD risk prediction in comprehensive machine and deep learning models. We then developed a novel unsupervised endotyping framework for CMVD from perfusion PET-derived myocardial blood flow data, revealing distinct patient subgroups beyond traditional case-control definitions. This imaging-based stratification, alongside plasma proteomics and PRS, substantially improved classification performance, achieving AUROCs between 0.65 and 0.73 per class and significantly outperforming binary classifiers and existing clinical models. These results highlight the potential of the stratification approach to enable more precise and personalized diagnosis by capturing the underlying heterogeneity of CMVD. This work represents the first application of imaging-based endotyping and the integration of genetic and proteomic data for CMVD risk prediction, establishing a framework for multimodal modeling in complex diseases.
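The endotyping step amounts to unsupervised clustering of perfusion PET-derived myocardial blood flow (MBF) features into patient subgroups that then serve as multiclass targets. A minimal sketch of that idea follows, using k-means as a stand-in clustering method and synthetic features, since the abstract does not specify the exact algorithm, feature set, or number of endotypes.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical MBF feature matrix: one row per patient, columns such as
# rest MBF, stress MBF, and flow reserve per coronary territory.
rng = np.random.default_rng(0)
mbf_features = rng.normal(size=(500, 6))  # placeholder for real PET data

X = StandardScaler().fit_transform(mbf_features)
endotypes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
# The resulting cluster labels replace binary case/control status as the
# multiclass target for the downstream proteomics + PRS classifier.
```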

Vision Transformer Autoencoders for Unsupervised Representation Learning: Revealing Novel Genetic Associations through Learned Sparse Attention Patterns

Islam, S. R., He, W., Xie, Z., Zhi, D.

medRxiv preprint · Aug 21, 2025
The discovery of genetic loci associated with brain architecture can provide deeper insights into neuroscience and potentially lead to improved personalized medicine outcomes. Previously, we designed the Unsupervised Deep learning-derived Imaging Phenotypes (UDIPs) approach to extract phenotypes from brain imaging using a convolutional neural network (CNN) autoencoder, and conducted brain imaging GWAS on the UK Biobank (UKBB). In this work, we design a vision transformer (ViT)-based autoencoder, leveraging its distinct inductive bias and its ability to capture unique patterns through its pairwise attention mechanism. The encoder generates contextual embeddings for input patches, from which we derive a 128-dimensional latent representation, interpreted as phenotypes, by applying average pooling. The GWAS on these 128 phenotypes discovered 10 loci previously unreported by the CNN-based UDIP model, 3 of which had no previous associations with brain structure in the GWAS Catalog. Our interpretation results suggest that these novel associations stem from the ViT's capability to learn sparse attention patterns, enabling it to capture non-local patterns such as left-right hemisphere symmetry within brain MRI data. Our results highlight the advantages of transformer-based architectures in feature extraction and representation learning for genetic discovery.
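The phenotype-extraction step, a ViT encoder whose contextual patch embeddings are average-pooled into a 128-dimensional latent, can be sketched as follows. Patch size, model width, depth, and the single-slice 2D input are illustrative assumptions; the actual model presumably operates on full brain MRI volumes.

```python
import torch
import torch.nn as nn

class ViTEncoderSketch(nn.Module):
    """Illustrative ViT encoder: patch embeddings pass through a
    transformer, and average pooling yields a 128-d latent phenotype."""
    def __init__(self, patch=16, d_model=256, latent=128):
        super().__init__()
        # Strided convolution is the standard trick for patch embedding.
        self.patchify = nn.Conv2d(1, d_model, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.to_latent = nn.Linear(d_model, latent)

    def forward(self, x):                       # x: B, 1, H, W (one MRI slice)
        tokens = self.patchify(x).flatten(2).transpose(1, 2)
        ctx = self.encoder(tokens)              # contextual patch embeddings
        return self.to_latent(ctx.mean(dim=1))  # average pooling -> phenotype

z = ViTEncoderSketch()(torch.randn(2, 1, 128, 128))  # B x 128 phenotypes
```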

Dynamic-Attentive Pooling Networks: A Hybrid Lightweight Deep Model for Lung Cancer Classification.

Ayivi W, Zhang X, Ativi WX, Sam F, Kouassi FAP

PubMed · Aug 21, 2025
Lung cancer is one of the leading causes of cancer-related mortality worldwide. Diagnosis remains a challenge due to the subtle and ambiguous nature of early-stage symptoms and imaging findings. Deep learning approaches, specifically convolutional neural networks (CNNs), have significantly advanced medical image analysis. However, conventional architectures such as ResNet50 that rely on first-order pooling often fail to capture higher-order feature statistics. This study aims to overcome the limitations of CNNs in lung cancer classification by proposing a novel, dynamic model named LungSE-SOP. The model is based on Second-Order Pooling (SOP) and Squeeze-and-Excitation Networks (SENet) within a ResNet50 backbone to improve feature representation and class separation. A novel Dynamic Feature Enhancement (DFE) module is also introduced, which dynamically adjusts the flow of information through SOP and SENet blocks based on learned importance scores. The model was trained on the publicly available IQ-OTH/NCCD lung cancer dataset. Performance was assessed using various metrics, including accuracy, precision, recall, F1-score, ROC curves, and confidence intervals. For multiclass tumor classification, our model achieved 98.6% accuracy for benign, 98.7% for malignant, and 99.9% for normal cases. Corresponding F1-scores were 99.2%, 99.8%, and 99.9%, respectively, reflecting the model's high precision and recall across all tumor types and its strong potential for clinical deployment.
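The two building blocks named here, second-order pooling and squeeze-and-excitation, are standard components that are easy to sketch in isolation. Below is a minimal PyTorch rendering of each; the channel count and reduction ratio are assumptions, and the published LungSE-SOP routes these blocks through its DFE module rather than in this bare sequence.

```python
import torch
import torch.nn as nn

def second_order_pool(f):
    """Covariance (second-order) pooling over spatial positions:
    B x C x H x W -> B x C x C channel-covariance descriptor."""
    b, c, h, w = f.shape
    x = f.flatten(2)                      # B, C, HW
    x = x - x.mean(dim=2, keepdim=True)   # center over spatial positions
    return x @ x.transpose(1, 2) / (h * w - 1)

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global pooling plus a gating MLP that
    recalibrates each channel."""
    def __init__(self, c, r=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(c, c // r), nn.ReLU(inplace=True),
            nn.Linear(c // r, c), nn.Sigmoid())

    def forward(self, f):
        s = f.mean(dim=(2, 3))                   # squeeze: B x C
        return f * self.fc(s)[:, :, None, None]  # excite: per-channel gates

feat = torch.randn(2, 256, 14, 14)
gated = SEBlock(256)(feat)
cov = second_order_pool(gated)  # B x 256 x 256, fed to a classifier head
```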

TPA: Temporal Prompt Alignment for Fetal Congenital Heart Defect Classification

Darya Taratynova, Alya Almsouti, Beknur Kalmakhanbet, Numan Saeed, Mohammad Yaqub

arXiv preprint · Aug 21, 2025
Congenital heart defect (CHD) detection in ultrasound videos is hindered by image noise and probe positioning variability. While automated methods can reduce operator dependence, current machine learning approaches often neglect temporal information, limit themselves to binary classification, and do not account for prediction calibration. We propose Temporal Prompt Alignment (TPA), a method leveraging a foundation image-text model and prompt-aware contrastive learning to classify fetal CHD in cardiac ultrasound videos. TPA extracts features from each frame of video subclips using an image encoder, aggregates them with a trainable temporal extractor to capture heart motion, and aligns the video representation with class-specific text prompts via a margin-hinge contrastive loss. To enhance calibration for clinical reliability, we introduce a Conditional Variational Autoencoder Style Modulation (CVAESM) module, which learns a latent style vector to modulate embeddings and quantifies classification uncertainty. Evaluated on a private dataset for CHD detection and on a large public dataset, EchoNet-Dynamic, for systolic dysfunction, TPA achieves a state-of-the-art macro F1 score of 85.40% for CHD diagnosis, while also reducing expected calibration error by 5.38% and adaptive ECE by 6.8%. On EchoNet-Dynamic's three-class task, it boosts macro F1 by 4.73% (from 53.89% to 58.62%). In sum, TPA integrates temporal modeling, prompt-aware contrastive learning, and uncertainty quantification for fetal CHD classification in ultrasound videos.
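The alignment objective, pushing a clip's similarity to its correct class prompt above its similarity to every other prompt by a margin, can be sketched as a hinge loss over cosine similarities. A minimal version follows; the margin value, embedding dimensions, and pooling are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def margin_hinge_contrastive(video_emb, text_embs, labels, margin=0.2):
    """Hinge loss pushing the similarity to the true-class prompt above
    every wrong-class similarity by at least `margin`.
    video_emb: B x D pooled clip features; text_embs: K x D prompt features."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_embs, dim=-1)
    sims = v @ t.T                                 # B x K cosine similarities
    pos = sims.gather(1, labels[:, None])          # similarity to true class
    hinge = F.relu(margin + sims - pos)            # penalty per negative class
    mask = F.one_hot(labels, sims.size(1)).bool()  # zero out the positive term
    return hinge.masked_fill(mask, 0.0).mean()

loss = margin_hinge_contrastive(torch.randn(4, 512), torch.randn(3, 512),
                                torch.tensor([0, 2, 1, 0]))
```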

COVID19 Prediction Based On CT Scans Of Lungs Using DenseNet Architecture

Deborup Sanyal

arXiv preprint · Aug 21, 2025
COVID-19 took the world by storm beginning in December 2019. A highly infectious communicable disease, COVID-19 is caused by the SARS-CoV-2 virus. By March 2020, the World Health Organization (WHO) had declared COVID-19 a global pandemic. The world was unprepared for a pandemic in the 21st century, the first in almost 100 years, and around 1.6 million people died worldwide. The most common symptoms of COVID-19 involved the respiratory system and resembled a cold, flu, or pneumonia. After extensive research, doctors and scientists concluded that the main reason lives were lost to COVID-19 was failure of the respiratory system; patients were dying gasping for breath. The world's top healthcare systems failed badly amid acute shortages of hospital beds, oxygen cylinders, and ventilators, and many died without receiving any treatment at all. The aim of this project is to help doctors assess the severity of COVID-19 by reading the patient's computed tomography (CT) scans of the lungs. Computer models are less prone to human error, and machine learning and neural network models tend to improve in accuracy as training progresses. We have decided to use a convolutional neural network model. Given that a patient tests positive, our model analyzes the severity of COVID-19 infection within one month of the positive test result. The predicted severity may be promising or unfavorable (the latter if the infection leads to intubation or death), based entirely on the CT scans in the dataset.
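A natural starting point for the DenseNet approach named in the title is fine-tuning a pretrained DenseNet-121 from torchvision with the classifier head swapped for the project's outcome classes. The sketch below is illustrative only; the two-class head, transforms, and optimizer settings are assumptions, not the project's reported configuration.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# DenseNet-121 adapted for CT-based severity prediction; the two output
# classes (promising vs. unfavorable outcome) mirror the abstract's framing.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 2)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# Standard training loop from here: forward CT slices, compute the loss
# against outcome labels, and backpropagate.
```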

Deep Learning Model for Breast Shear Wave Elastography to Improve Breast Cancer Diagnosis (INSPiRED 006): An International, Multicenter Analysis.

Cai L, Pfob A, Barr RG, Duda V, Alwafai Z, Balleyguier C, Clevert DA, Fastner S, Gomez C, Goncalo M, Gruber I, Hahn M, Kapetas P, Nees J, Ohlinger R, Riedel F, Rutten M, Stieber A, Togawa R, Sidey-Gibbons C, Tozaki M, Wojcinski S, Heil J, Golatta M

PubMed · Aug 20, 2025
Shear wave elastography (SWE) has been investigated as a complement to B-mode ultrasound for breast cancer diagnosis. Although multicenter trials suggest benefits for patients with Breast Imaging Reporting and Data System (BI-RADS) 4(a) breast masses, widespread adoption remains limited because of the absence of validated velocity thresholds. This study aims to develop and validate a deep learning (DL) model using SWE images (artificial intelligence [AI]-SWE) for BI-RADS 3 and 4 breast masses and compare its performance with that of human experts using B-mode ultrasound. We used data from an international, multicenter trial (ClinicalTrials.gov identifier: NCT02638935) evaluating SWE in women with BI-RADS 3 or 4 breast masses across 12 institutions in seven countries. Images from 11 sites were used to develop an EfficientNetB1-based DL model. External validation was conducted using data from the 12th site, and a further validation was performed on a separate institutional cohort using the latest SWE software. Performance metrics included sensitivity, specificity, false-positive reduction, and area under the receiver operating characteristic curve (AUROC). The development set included 924 patients (4,026 images); the external validation sets included 194 patients (562 images) and 176 patients (188 images, latest SWE software). AI-SWE achieved an AUROC of 0.94 (95% CI, 0.91 to 0.96) and 0.93 (95% CI, 0.88 to 0.98) in the two external validation sets. Compared with B-mode ultrasound, AI-SWE significantly reduced false-positive rates by 62.1% (20.4% [30/147] vs 53.8% [431/801]; P < .001) and 38.1% (33.3% [14/42] vs 53.8% [431/801]; P < .001), with comparable sensitivity (97.9% [46/47] and 97.8% [131/134] vs 98.1% [311/317]; P = .912 and P = .810). AI-SWE demonstrated accuracy comparable with human experts in malignancy detection while significantly reducing false-positive imaging findings (ie, unnecessary biopsies). Future studies should explore its integration into multimodal breast cancer diagnostics.
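An EfficientNetB1 backbone adapted to produce a single malignancy logit is straightforward to set up in torchvision. A minimal sketch follows; the head replacement, input resolution, and pretrained weights are illustrative assumptions, not the trained AI-SWE model.

```python
import torch
import torch.nn as nn
from torchvision import models

# EfficientNet-B1 with a binary head for SWE malignancy prediction.
net = models.efficientnet_b1(
    weights=models.EfficientNet_B1_Weights.IMAGENET1K_V1)
in_feats = net.classifier[1].in_features    # 1280 for EfficientNet-B1
net.classifier[1] = nn.Linear(in_feats, 1)  # single malignancy logit

with torch.no_grad():
    # B1's native input resolution is 240x240.
    prob = torch.sigmoid(net(torch.randn(1, 3, 240, 240)))
```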

FedVGM: Enhancing Federated Learning Performance on Multi-Dataset Medical Images with XAI.

Tahosin MS, Sheakh MA, Alam MJ, Hassan MM, Bairagi AK, Abdulla S, Alshathri S, El-Shafai W

PubMed · Aug 20, 2025
Advances in deep learning have transformed medical imaging, yet progress is hindered by data privacy regulations and fragmented datasets across institutions. To address these challenges, we propose FedVGM, a privacy-preserving federated learning framework for multi-modal medical image analysis. FedVGM integrates four imaging modalities, including brain MRI, breast ultrasound, chest X-ray, and lung CT, across 14 diagnostic classes without centralizing patient data. Using transfer learning and an ensemble of VGG16 and MobileNetV2, FedVGM achieves 97.7% ± 0.01 accuracy on the combined dataset and 91.9-99.1% across individual modalities. We evaluated three aggregation strategies and found median aggregation to be the most effective. To ensure clinical interpretability, we apply explainable AI techniques and validate results through performance metrics, statistical analysis, and k-fold cross-validation. FedVGM offers a robust, scalable solution for collaborative medical diagnostics, supporting clinical deployment while preserving data privacy.
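Median aggregation, the strategy this evaluation favored, replaces FedAvg's weighted mean with a coordinate-wise median over client weights, which is more robust to outlier updates. A minimal sketch under that reading; the round structure and state-dict layout are assumptions.

```python
import torch

def median_aggregate(client_states):
    """Coordinate-wise median over client model weights. Each element of
    `client_states` is a state_dict returned by one institution's locally
    trained model after a federated round."""
    global_state = {}
    for key in client_states[0]:
        # Cast to float so integer buffers (e.g., batch-norm counters)
        # also aggregate; a production version would handle dtypes per key.
        stacked = torch.stack([s[key].float() for s in client_states])
        global_state[key] = stacked.median(dim=0).values
    return global_state

# Hypothetical round: three clients return locally trained weights.
clients = [{"w": torch.randn(4, 4)} for _ in range(3)]
new_global = median_aggregate(clients)
```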

Evolution and integration of artificial intelligence across the cancer continuum in women: advances in risk assessment, prevention, and early detection.

Desai M, Desai B

PubMed · Aug 20, 2025
Artificial Intelligence (AI) is revolutionizing the prevention and control of breast cancer by improving risk assessment, prevention, and early diagnosis. With an emphasis on AI applications across the breast cancer continuum in women, this review summarizes developments, current applications, and future prospects. We conducted an in-depth review of the literature on AI applications in breast cancer risk prediction, prevention, and early detection from 2000 to 2025, with particular emphasis on explainable AI (XAI), deep learning (DL), and machine learning (ML). We examined algorithmic fairness, model transparency, dataset representation, and clinical performance indicators. Compared with traditional methods, AI-based models consistently improved risk categorization, screening sensitivity, and early detection (AUCs ranging from 0.65 to 0.975). However, challenges remain in algorithmic bias, underrepresentation of minority populations, and limited external validation. Notably, 58% of public datasets focused on mammography, leaving gaps in modalities such as tomosynthesis and histopathology. AI technologies offer numerous opportunities to enhance the diagnosis and treatment of breast cancer, but subsequent studies should prioritize transparent models, inclusive datasets, and standardized frameworks for explainability and external validation to ensure equitable and effective implementation.