
Jiang T, Yang C, Wu L, Li X, Zhang J

pubmed · Aug 21 2025
Thyroid ultrasound has emerged as a critical diagnostic modality, attracting substantial research attention. This bibliometric analysis systematically maps the 30-year evolution of thyroid ultrasound research to identify developmental trends, research hotspots, and emerging frontiers. English-language articles and reviews (1994-2023) were extracted from the Web of Science Core Collection. Bibliometric analysis was performed using VOSviewer and CiteSpace to examine collaborative networks among countries/institutions/authors, reference timeline visualization, and keyword burst detection. A total of 8,489 documents were included for analysis. An overall upward trend in research publications was found. China, the United States, and Italy were the most productive countries, while the United States, Italy, and South Korea had the greatest influence. The journal Thyroid had the highest impact factor (IF). The keywords with the greatest burst strength were "disorders", "thyroid volume", and "association guidelines". The reference timeline view showed that deep learning, ultrasound-based risk stratification systems, and radiofrequency ablation were the latest reference clusters. Three dominant themes emerged: the ultrasound characteristics of thyroid disorders, the application of new techniques, and the assessment of the risk of malignancy of thyroid nodules. Applications of deep learning and the development and refinement of related guidelines such as TI-RADS are the current research focus. The application efficacy and improvement of TI-RADS, along with the optimization of deep learning algorithms and their clinical applicability, will be the focus of subsequent research.

Li Q, Han R, Huang J, Liu CB, Zhao S, Ge L, Zheng H, Huang Z

pubmed · Aug 21 2025
Dental implant treatment planning requires assessing extraction socket healing, yet current methods face challenges distinguishing soft tissue from woven bone on cone beam computed tomography (CBCT) imaging and lack standardized classification systems. In this study, we propose a hierarchical multilabel classification model for CBCT-based extraction socket healing assessment. We established a novel classification system dividing extraction socket healing status into two levels: Level 1 distinguishes physiological healing (Type I) from pathological healing (Type II); Level 2 further subdivides these into five subtypes. The HierTransFuse-Net architecture integrates ResNet50 with a two-dimensional transformer module for hierarchical multilabel classification. Additionally, a stratified diagnostic principle coupled with random forest algorithms supported personalized implant treatment planning. HierTransFuse-Net performed excellently in classifying extraction socket healing, achieving an mAccuracy of 0.9705, with mPrecision, mRecall, and mF1 scores of 0.9156, 0.9376, and 0.9253, respectively. Its diagnostic reliability (κω = 0.9234) significantly exceeded that of clinical practitioners (mean κω = 0.7148, range: 0.6449-0.7843). The random forest model based on stratified diagnostic decision indicators achieved an accuracy of 81.48% and an mF1 score of 82.55% in predicting 12 clinical treatment pathways. This study successfully developed HierTransFuse-Net, which demonstrated excellent performance in distinguishing extraction socket healing statuses and subtypes. Random forest algorithms based on stratified diagnostic indicators showed potential for clinical pathway prediction. The hierarchical multilabel classification system simulates clinical diagnostic reasoning, enabling precise disease stratification and providing a scientific basis for personalized treatment decisions.
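
As a rough illustration of the kind of hybrid architecture described above, the sketch below combines a ResNet50 feature extractor with a small transformer encoder and two classification heads (Level 1 and Level 2). The layer sizes, depth, and pooling choices are assumptions for illustration, not the authors' published configuration.

```python
# Hypothetical sketch of a ResNet50 + 2D transformer hybrid with a
# hierarchical two-head output, loosely following the HierTransFuse-Net
# description; all dimensions are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class HierarchicalClassifier(nn.Module):
    def __init__(self, n_level1=2, n_level2=5, d_model=256):
        super().__init__()
        backbone = resnet50(weights=None)
        # Drop avgpool and fc to keep the spatial feature map (B x 2048 x 7 x 7)
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.proj = nn.Conv2d(2048, d_model, kernel_size=1)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head1 = nn.Linear(d_model, n_level1)  # physiological vs pathological
        self.head2 = nn.Linear(d_model, n_level2)  # five healing subtypes

    def forward(self, x):
        f = self.proj(self.features(x))           # B x d_model x H x W
        tokens = f.flatten(2).transpose(1, 2)     # B x (H*W) x d_model
        z = self.transformer(tokens).mean(dim=1)  # pooled representation
        return self.head1(z), self.head2(z)

model = HierarchicalClassifier()
logits1, logits2 = model(torch.randn(2, 3, 224, 224))
print(logits1.shape, logits2.shape)  # torch.Size([2, 2]) torch.Size([2, 5])
```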

Juampablo E. Heras Rivera, Hitender Oswal, Tianyi Ren, Yutong Pan, William Henry, Caitlin M. Neher, Mehmet Kurt

arxiv preprint · Aug 21 2025
Stroke is among the top three causes of death worldwide, and accurate identification of ischemic stroke lesion boundaries from imaging is critical for diagnosis and treatment. The main imaging modalities used include magnetic resonance imaging (MRI), particularly diffusion-weighted imaging (DWI), and computed tomography (CT)-based techniques such as non-contrast CT (NCCT), contrast-enhanced CT angiography (CTA), and CT perfusion (CTP). DWI is the gold standard for identifying lesions but has limited applicability in low-resource settings due to prohibitive costs. CT-based imaging is currently the most practical imaging method in low-resource settings due to low costs and simplified logistics, but it lacks the high specificity of MRI-based methods in monitoring ischemic insults. Supervised deep learning methods are the leading solution for automated ischemic stroke lesion segmentation and provide an opportunity to improve diagnostic quality in low-resource settings by incorporating insights from DWI when segmenting from CT. Here, we develop a series of models that use CT images taken upon arrival as inputs to predict follow-up lesion volumes annotated from DWI taken 2-9 days later. Furthermore, we implement clinically motivated preprocessing steps and show that the proposed pipeline yields a 38% improvement in Dice score over 10 folds compared to an nnU-Net model trained with the baseline preprocessing. Finally, we demonstrate that additional preprocessing of CTA maps to extract vessel segmentations further improves our best model by 21% over 5 folds.
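
Since the 38% and 21% gains above are reported as Dice score improvements, a minimal reference implementation of the metric may be useful; the toy masks below are hypothetical.

```python
# Minimal Dice score implementation, as commonly used to evaluate
# stroke lesion segmentation masks (binary numpy arrays assumed).
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2*|A ∩ B| / (|A| + |B|), in [0, 1]."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Example: two partially overlapping 3D lesion masks
a = np.zeros((16, 16, 16), dtype=bool); a[4:10, 4:10, 4:10] = True
b = np.zeros((16, 16, 16), dtype=bool); b[6:12, 6:12, 6:12] = True
print(f"Dice: {dice_score(a, b):.3f}")
```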

Kevin Arias, Edwin Vargas, Kumar Vijay Mishra, Antonio Ortega, Henry Arguello

arxiv preprint · Aug 21 2025
Supervised learning techniques have proven their efficacy in many applications with abundant data. However, applying these methods to medical imaging is challenging due to the scarcity of data, given the high acquisition costs and intricate characteristics of those images, thereby limiting the full potential of deep neural networks. To address the lack of data, augmentation techniques leverage geometry, color, and the synthesis ability of generative models (GMs). Despite previous efforts, gaps in the generation process limit the impact of data augmentation on the understanding of medical images; e.g., the highly structured nature of some domains, such as X-ray images, is ignored. Current GMs rely solely on the network's capacity to blindly synthesize augmentations that preserve the semantic relationships of chest X-ray images, such as anatomical restrictions, representative structures, or structural similarities consistent across datasets. In this paper, we introduce a novel GM that leverages the structural resemblance of medical images by learning a latent graph representation (LGR). We design an end-to-end model to learn (i) an LGR that captures the intrinsic structure of X-ray images and (ii) a graph convolutional network (GCN) that reconstructs the X-ray image from the LGR. We employ adversarial training to guide the generator and discriminator models in learning the distribution of the learned LGR. Using the learned GCN, our approach generates structure-preserving synthetic images by mapping generated LGRs to X-ray images. Additionally, we evaluate the learned graph representation on other tasks, such as X-ray image classification and segmentation. Numerical experiments demonstrate the efficacy of our approach, increasing performance by up to 3% and 2% for classification and segmentation, respectively.
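
The decoder described above is a GCN that maps the latent graph representation back to image space. Below is a minimal single graph-convolution layer in PyTorch using the standard symmetric normalization of Kipf and Welling; node count, feature dimensions, and adjacency are illustrative assumptions, not the authors' design.

```python
# Minimal GCN layer sketch: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
# A decoder like the one described would stack layers of this kind.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) adjacency without self-loops
        a_hat = adj + torch.eye(adj.size(0))      # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt  # symmetric normalization
        return torch.relu(a_norm @ self.linear(x))

nodes = torch.randn(32, 64)                  # 32 hypothetical latent graph nodes
adj = (torch.rand(32, 32) > 0.8).float()
adj = torch.max(adj, adj.T)                  # symmetrize the random adjacency
out = GCNLayer(64, 128)(nodes, adj)
print(out.shape)                             # torch.Size([32, 128])
```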

Venkatesh, R., Cherlin, T., Penn Medicine BioBank, Ritchie, M. D., Guerraty, M., Verma, S. S.

medrxiv preprint · Aug 21 2025
Coronary microvascular disease (CMVD) is an underdiagnosed but significant contributor to the burden of ischemic heart disease, characterized by angina and myocardial infarction. The development of risk prediction models such as polygenic risk scores (PRS) for CMVD has been limited by a lack of large-scale genome-wide association studies (GWAS). However, there is significant overlap between CMVD and enrollment criteria for coronary artery disease (CAD) GWAS. In this study, we developed CMVD PRS models by selecting variants identified in a CMVD GWAS and applying weights from an external CAD GWAS, using CMVD-associated loci as proxies for genetic risk. We integrated plasma proteomics, clinical measures from perfusion PET imaging, and PRS to evaluate their contributions to CMVD risk prediction in comprehensive machine and deep learning models. We then developed a novel unsupervised endotyping framework for CMVD from perfusion PET-derived myocardial blood flow data, revealing distinct patient subgroups beyond traditional case-control definitions. This imaging-based stratification substantially improved classification performance alongside plasma proteomics and PRS, achieving AUROCs between 0.65 and 0.73 per class and significantly outperforming binary classifiers and existing clinical models. These results highlight the potential of this stratification approach to enable more precise and personalized diagnosis by capturing the underlying heterogeneity of CMVD. This work represents the first application of imaging-based endotyping and the integration of genetic and proteomic data for CMVD risk prediction, establishing a framework for multimodal modeling in complex diseases.
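
The proxy-PRS construction described above reduces, at its core, to a weighted sum of allele dosages, with variants chosen from the CMVD GWAS and betas taken from the external CAD GWAS. A minimal sketch, with hypothetical variant IDs and weights:

```python
# Sketch of a polygenic risk score: sum over variants of dosage * beta.
# Variant IDs, dosages, and betas below are hypothetical toy values.
import pandas as pd

def polygenic_risk_score(dosages: pd.DataFrame, weights: pd.Series) -> pd.Series:
    """dosages: samples x variants (0/1/2 allele counts); weights: per-variant betas."""
    shared = dosages.columns.intersection(weights.index)  # variants present in both
    return dosages[shared].mul(weights[shared], axis=1).sum(axis=1)

dosages = pd.DataFrame(
    {"rs1": [0, 1, 2], "rs2": [2, 1, 0]}, index=["s1", "s2", "s3"]
)
cad_betas = pd.Series({"rs1": 0.12, "rs2": -0.05})  # weights from external CAD GWAS
print(polygenic_risk_score(dosages, cad_betas))
```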

Islam, S. R., He, W., Xie, Z., Zhi, D.

medrxiv preprint · Aug 21 2025
The discovery of genetic loci associated with brain architecture can provide deeper insights into neuroscience and potentially lead to improved personalized medicine outcomes. Previously, we designed the Unsupervised Deep learning-derived Imaging Phenotypes (UDIPs) approach to extract phenotypes from brain imaging using a convolutional neural network (CNN) autoencoder, and conducted brain imaging GWAS on UK Biobank (UKBB). In this work, we design a vision transformer (ViT)-based autoencoder, leveraging its distinct inductive bias and its ability to capture unique patterns through its pairwise attention mechanism. The encoder generates contextual embeddings for input patches, from which we derive a 128-dimensional latent representation, interpreted as phenotypes, by applying average pooling. The GWAS on these 128 phenotypes discovered 10 loci previously unreported by the CNN-based UDIP model, 3 of which had no previous associations with brain structure in the GWAS Catalog. Our interpretation results suggest that these novel associations stem from the ViT's capability to learn sparse attention patterns, enabling it to capture non-local patterns such as left-right hemisphere symmetry within brain MRI data. Our results highlight the advantages of transformer-based architectures in feature extraction and representation learning for genetic discovery.
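
The phenotype-extraction step described above (contextual patch embeddings, average pooling, 128-dimensional latent) can be sketched as follows; the stand-in encoder and its dimensions are assumptions, not the authors' trained autoencoder.

```python
# Illustrative sketch: derive a 128-d latent phenotype vector by
# average-pooling a ViT encoder's contextual patch embeddings.
import torch
import torch.nn as nn

class ViTPhenotypeEncoder(nn.Module):
    def __init__(self, patch_dim=768, latent_dim=128):
        super().__init__()
        layer = nn.TransformerEncoderLayer(patch_dim, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.to_latent = nn.Linear(patch_dim, latent_dim)

    def forward(self, patch_embeddings):      # B x n_patches x patch_dim
        ctx = self.encoder(patch_embeddings)  # contextual patch embeddings
        pooled = ctx.mean(dim=1)              # average pooling over patches
        return self.to_latent(pooled)         # 128-d phenotype vector

z = ViTPhenotypeEncoder()(torch.randn(2, 196, 768))
print(z.shape)  # torch.Size([2, 128])
```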

Barajas Ordonez, F., Xie, K., Ferreira, A., Siepmann, R., Chargi, N., Nebelung, S., Truhn, D., Berge, S., Bruners, P., Egger, J., Hölzle, F., Wirth, M., Kuhl, C., Puladi, B.

medrxiv preprint · Aug 21 2025
Background: Head and neck cancer (HNC) patients face an increased risk of malnutrition due to lifestyle, tumor localization, and treatment effects. While skeletal muscle area (SMA) and radiation attenuation (SM-RA) at the third lumbar vertebra (L3) are established prognostic markers, L3 is not routinely available in head and neck imaging. The prognostic value of SM-RA at the third cervical vertebra (C3) remains unclear. This study assesses whether SMA and SM-RA at C3 predict locoregional control (LRC) and overall survival (OS) in HNC. Methods: We analyzed 904 HNC cases with head and neck CT scans. A deep learning pipeline identified C3, and SMA/SM-RA were quantified via automated segmentation with manual verification. Cox proportional hazards models assessed associations with LRC and OS, adjusting for clinical factors. Results: Median SMA and SM-RA were 36.64 cm² (IQR: 30.12-42.44) and 50.77 HU (IQR: 43.04-57.39). In multivariate analysis, lower SMA (HR 1.62, 95% CI: 1.02-2.58, p = 0.04), lower SM-RA (HR 1.89, 95% CI: 1.30-2.79, p < 0.001), and advanced T stage (HR 1.50, 95% CI: 1.06-2.12, p = 0.02) were prognostic for LRC. OS predictors included advanced T stage (HR 2.17, 95% CI: 1.64-2.87, p < 0.001), age ≥70 years (HR 1.40, 95% CI: 1.00-1.96, p = 0.05), male sex (HR 1.64, 95% CI: 1.02-2.63, p = 0.04), and lower SM-RA (HR 2.15, 95% CI: 1.56-2.96, p < 0.001). Conclusion: Deep learning-assisted SM-RA assessment at C3 outperforms SMA for LRC and OS in HNC, supporting its use as a routine biomarker and L3 alternative.
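
For readers unfamiliar with the modeling step, a minimal Cox proportional hazards fit of the kind reported above can be run with the lifelines library; the toy data and variable names below are hypothetical.

```python
# Minimal Cox proportional hazards fit with lifelines; the covariates
# stand in for binarized SM-RA and T stage, and the six toy records
# are hypothetical, not study data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time_months": [12, 34, 7, 28, 15, 40],
    "event":       [1, 0, 1, 1, 0, 0],   # 1 = event observed, 0 = censored
    "sm_ra_low":   [1, 0, 1, 0, 1, 0],   # SM-RA below median at C3
    "t_stage_adv": [1, 1, 0, 1, 0, 0],   # advanced T stage
})
cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
cph.print_summary()  # hazard ratios with 95% CIs per covariate
```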

Shirzadeh Barough, S., Bilgel, M., Ventura, C., Moghekar, A., Albert, M., Miller, M. I., Moghekar, A.

medrxiv preprint · Aug 21 2025
Background and Purpose: Normal pressure hydrocephalus (NPH) is a potentially treatable neurodegenerative disorder that remains underdiagnosed due to its clinical overlap with other conditions and the labor-intensive nature of manual imaging analyses. Imaging biomarkers, such as the callosal angle (CA), Evans Index (EI), and Disproportionately Enlarged Subarachnoid Space Hydrocephalus (DESH), play a crucial role in NPH diagnosis but are often limited by subjective interpretation. To address these challenges, we developed a fully automated and robust deep learning framework for measuring the CA directly from raw T1 MPRAGE and non-MPRAGE MRI scans. Materials and Methods: Our method integrates two complementary modules. First, a BrainSignsNET model is employed to accurately detect key anatomical landmarks, notably the anterior commissure (AC) and posterior commissure (PC). Preprocessed 3D MRI scans, reoriented to the Right Anterior Superior (RAS) system and resized to standardized cubes while preserving aspect ratios, serve as input for landmark localization. After these landmarks are detected, a coronal slice perpendicular to the AC-PC line at the level of the PC is extracted for subsequent analysis. Second, a UNet-based segmentation network, featuring a pretrained EfficientNetB0 encoder, generates multiclass masks of the lateral ventricles from the coronal slices, which are then used to calculate the callosal angle. Results: Training and internal validation were performed using datasets from the Baltimore Longitudinal Study of Aging (BLSA) and BIOCARD, while external validation used 216 clinical MRI scans from Johns Hopkins Bayview Hospital. Our framework achieved high concordance with manual measurements, demonstrating a strong correlation (r = 0.98, p < 0.001) and a mean absolute error (MAE) of 2.95 (SD 1.58) degrees. Moreover, error analysis confirmed that CA measurement performance was independent of patient age, gender, and EI, underscoring the broad applicability of the method. Conclusions: These results indicate that our fully automated CA measurement framework is a reliable and reproducible alternative to manual methods, outperforms reported interobserver variability in assessing the callosal angle, and offers significant potential to enhance early detection and diagnosis of NPH in both research and clinical settings.
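
The final geometric step, measuring the callosal angle on the extracted coronal slice, amounts to computing the angle between two vectors. A minimal sketch with hypothetical 2D landmark coordinates:

```python
# Angle between two vectors meeting at an apex, in degrees. For the
# callosal angle, the apex and wall points would come from the ventricle
# segmentation; the coordinates below are hypothetical.
import numpy as np

def callosal_angle(apex, left_wall, right_wall):
    v1 = np.asarray(left_wall, float) - np.asarray(apex, float)
    v2 = np.asarray(right_wall, float) - np.asarray(apex, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Apex near the inferior margin of the corpus callosum; walls run inferolaterally.
print(f"CA: {callosal_angle((0, 0), (-20, -30), (20, -30)):.1f} deg")  # ~67 deg
```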

Uus, A., Bansal, S., Gerek, Y., Waheed, H., Neves Silva, S., Aviles Verdera, J., Kyriakopoulou, V., Betti, L., Jaufuraully, S., Hajnal, J. V., Siasakos, D., David, A., Chandiramani, M., Hutter, J., Story, L., Rutherford, M.

medrxiv preprint · Aug 21 2025
Fetal MRI offers detailed three-dimensional visualisation of both fetal and maternal pelvic anatomy, allowing for assessment of the risk of cephalopelvic disproportion and obstructed labour. However, conventional measurements of fetal and pelvic proportions and their relative positioning are typically performed manually in 2D, making them time-consuming, subject to inter-observer variability, and rarely integrated into routine clinical workflows. In this work, we present the first fully automated pipeline for pelvic and fetal head biometry in T2-weighted fetal MRI at late gestation. The method employs deep learning-based localisation of anatomical landmarks in 3D reconstructed MRI images, followed by computation of 12 standard linear and circumference measurements commonly used in the assessment of cephalopelvic disproportion. Landmark detection is based on 3D UNet models within the MONAI framework, trained on 57 semi-manually annotated datasets. The full pipeline is quantitatively validated on 10 test cases. Furthermore, we demonstrate its clinical feasibility and relevance by applying it to 206 fetal MRI scans (36-40 weeks gestation) from the MiBirth study, which investigates prediction of mode of delivery using low-field MRI.
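
The measurement stage after landmark detection reduces to simple geometry: Euclidean distances between landmark pairs for linear measures, and a perimeter approximation for circumference measures. A minimal sketch, with hypothetical landmarks and Ramanujan's ellipse formula standing in for whatever circumference method the pipeline actually uses:

```python
# Landmark-based biometry sketch; coordinates in millimetres are hypothetical.
import numpy as np

def linear_measure(p1, p2):
    """Euclidean distance between two 3D landmarks."""
    return float(np.linalg.norm(np.asarray(p1, float) - np.asarray(p2, float)))

def ellipse_circumference(a, b):
    """Ramanujan approximation for an ellipse with semi-axes a, b."""
    h = ((a - b) ** 2) / ((a + b) ** 2)
    return np.pi * (a + b) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h)))

# e.g. a pelvic inlet diameter between two hypothetical bony landmarks
print(f"{linear_measure((0, 0, 0), (0, 110, 20)):.1f} mm")      # ~111.8 mm
# e.g. head circumference from semi-axes derived from BPD/2 and OFD/2
print(f"{ellipse_circumference(95 / 2, 120 / 2):.0f} mm")       # ~339 mm
```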

Chourib I

pubmed · Aug 21 2025
Accurate and timely detection of brain tumors from magnetic resonance imaging (MRI) scans is critical for improving patient outcomes and informing therapeutic decision-making. However, the complex heterogeneity of tumor morphology, the scarcity of annotated medical data, and the computational demands of deep learning models present substantial challenges for developing reliable automated diagnostic systems. In this study, we propose a robust and scalable deep learning framework for brain tumor detection and classification, built upon an enhanced YOLO-v11 architecture combined with a two-stage transfer learning strategy. The first stage involves training a base model on a large, diverse MRI dataset. Upon achieving a mean Average Precision (mAP) exceeding 90%, this model is designated the Brain Tumor Detection Model (BTDM). In the second stage, the BTDM is fine-tuned on a structurally similar but smaller dataset to form the Brain Tumor Detection and Segmentation (BTDS) model, effectively leveraging domain transfer to maintain performance despite limited data. The model is further optimized through domain-specific data augmentation, including geometric transformations, to improve generalization and robustness. Experimental evaluations on publicly available datasets show that the framework achieves high mAP@0.5 scores (up to 93.5% for the BTDM and 91% for the BTDS) and consistently outperforms existing state-of-the-art methods across multiple tumor types, including glioma, meningioma, and pituitary tumors. In addition, a post-processing module enhances interpretability by generating segmentation masks and extracting clinically relevant metrics such as tumor size and severity level. These results underscore the potential of our approach as a high-performance, interpretable, and deployable clinical decision-support tool, contributing to the advancement of intelligent real-time neuro-oncological diagnostics.
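
A two-stage transfer learning workflow of the kind described can be sketched with the Ultralytics YOLO API; the dataset YAMLs, hyperparameters, and weight paths below are hypothetical, and the authors' architectural enhancements to YOLO-v11 are not reproduced.

```python
# Hedged sketch of two-stage transfer learning with Ultralytics YOLO11.
# "brain_mri_large.yaml" and "brain_mri_small.yaml" are hypothetical
# dataset configs; the run directory path depends on local settings.
from ultralytics import YOLO

# Stage 1: train a base detector (BTDM analogue) on the large MRI dataset
btdm = YOLO("yolo11n.pt")
btdm.train(data="brain_mri_large.yaml", epochs=100, imgsz=640)

# Stage 2: fine-tune the stage-1 weights on the smaller, similar dataset
btds = YOLO("runs/detect/train/weights/best.pt")
btds.train(data="brain_mri_small.yaml", epochs=50, imgsz=640, lr0=0.001)

metrics = btds.val()  # validation metrics, including mAP@0.5
```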