Page 231 of 6576562 results

Kurokawa R, Hagiwara A, Ueda D, Ito R, Saida T, Honda M, Nishioka K, Sakata A, Yanagawa M, Takumi K, Oda S, Ide S, Sofue K, Sugawara S, Watabe T, Hirata K, Kawamura M, Iima M, Naganawa S

pubmed logopapers · Aug 25 2025
Recent advances in molecular genetics have revolutionized the classification of pediatric-type high-grade gliomas in the 2021 World Health Organization central nervous system tumor classification. This narrative review synthesizes current evidence on the following four tumor types: diffuse midline glioma, H3 K27-altered; diffuse hemispheric glioma, H3 G34-mutant; diffuse pediatric-type high-grade glioma, H3-wildtype and IDH-wildtype; and infant-type hemispheric glioma. We conducted a comprehensive literature search for articles published through January 2025. For each tumor type, we analyze characteristic clinical presentations, molecular alterations, conventional and advanced magnetic resonance imaging features, radiological-molecular correlations, and current therapeutic approaches. Emerging radiogenomic approaches utilizing artificial intelligence, including radiomics and deep learning, show promise in identifying imaging biomarkers that correlate with molecular features. This review highlights the importance of integrating radiological and molecular data for accurate diagnosis and treatment planning, while acknowledging limitations in current methodologies and the need for prospective validation in larger cohorts. Understanding these correlations is crucial for advancing personalized treatment strategies for these challenging tumors.

Kumar K K, P R, M N, G D

pubmed logopapers · Aug 25 2025
Thyroid hormones are central to regulating metabolism, and the two common thyroid disorders, hypothyroidism and hyperthyroidism, directly affect the metabolic rate of the human body. Predicting and diagnosing thyroid disease remain significant challenges in medical research due to the complexity of thyroid hormone regulation and its impact on metabolism. Therefore, this paper proposes a Complex-valued Spatio-Temporal Graph Convolution Neural Network optimized with the Giraffe Kicking Optimization Algorithm for Thyroid Nodule Classification in Ultrasound Images (CSGCNN-GKOA-TNC-UI). The ultrasound images are collected from the DDTI (Digital Database of Thyroid ultrasound Imageries) dataset. The gathered data are pre-processed using the Bilinear Double-Order Filter (BDOF) approach to remove noise and improve input image quality. The pre-processed images are passed to Deep Adaptive Fuzzy Clustering (DAFC) for Region of Interest (RoI) segmentation. The segmented image is fed to the Multi-Objective Matched Synchro Squeezing Chirplet Transform (MMSSCT) for extracting features, such as geometric and morphological features. The extracted features are fed into the CSGCNN, which classifies thyroid nodules as benign or malignant. Finally, the Giraffe Kicking Optimization Algorithm (GKOA) is used to optimize the CSGCNN classifier. The CSGCNN-GKOA-TNC-UI algorithm is implemented in MATLAB. The CSGCNN-GKOA-TNC-UI approach attains 34.9%, 21.5%, and 26.8% higher F-scores and 18.6%, 29.3%, and 19.2% higher accuracy when compared with existing models: thyroid diagnosis with classification utilizing a DNN based on a hybrid meta-heuristic with LSTM (LSTM-TNC-UI), an innovative full-scale connected network for segmenting thyroid nodules in ultrasound images (FCG Net-TNC-UI), and an adversarial-architecture-based multi-scale fusion method for segmenting thyroid nodules (AMSeg-TNC-UI), respectively.
The proposed model enhances thyroid nodule classification accuracy, aiding radiologists and endocrinologists. By reducing misclassification, it minimizes unnecessary biopsies and enables early malignancy detection.

Rozak, M. W., Mester, J. R., Attarpour, A., Dorr, A., Patel, S., Koletar, M., Hill, M. E., McLaurin, J., Goubran, M., Stefanovic, B.

biorxiv logopreprint · Aug 25 2025
Functional hyperaemia is a well-established hallmark of healthy brain function, whereby local brain blood flow adjusts in response to a change in the activity of the surrounding neurons. Although functional hyperaemia has been extensively studied at the level of both tissue and individual vessels, vascular network-level coordination remains largely unknown. To bridge this gap, we developed a deep learning-based computational pipeline that uses two-photon fluorescence microscopy images of cerebral microcirculation to enable automated reconstruction and quantification of the geometric changes across the microvascular network, comprising hundreds of interconnected blood vessels, pre- and post-activation of the neighbouring neurons. The pipeline's utility was demonstrated in the Thy1-ChR2 optogenetic mouse model, where we observed network-wide vessel radius changes to depend on the photostimulation intensity, with both dilations and constrictions occurring across the cortical depth, at an average of 16.1 ± 14.3 µm (mean ± stddev) away from the most proximal neuron for dilations, and at 21.9 ± 14.6 µm away for constrictions. We observed a significant heterogeneity of the vascular radius changes within vessels, with radius adjustment varying by an average of 24 ± 28% of the resting diameter, likely reflecting the heterogeneity of the distribution of contractile cells on the vessel walls. A graph theory-based network analysis revealed that the assortativity of adjacent blood vessel responses rose by 152 ± 65% at 4.3 mW/mm² of blue photostimulation vs. the control, with a 4% median increase in the efficiency of the capillary networks during this level of blue photostimulation in relation to the baseline. Interrogating individual vessels is thus not sufficient to predict how the blood flow is modulated in the network.
Our computational pipeline, to be made openly available, enables tracking of the microvascular network geometry over time, relating caliber adjustments to vessel wall-associated cells' state, and mapping network-level flow distribution impairments in experimental models of disease.
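The assortativity of adjacent vessel responses mentioned above asks whether connected vessels respond similarly. A minimal, library-free sketch of that idea (not the authors' pipeline) is an edge-wise Pearson correlation: for every edge in the vascular graph, pair the responses of its two endpoint vessels and correlate the lists. The edge list, vessel IDs, and radius-change values below are all hypothetical toy data.

```python
import numpy as np

def response_assortativity(edges, response):
    """Pearson correlation of a per-vessel response across connected
    vessel pairs: positive values mean adjacent vessels tend to
    respond similarly (assortative network behaviour)."""
    x = np.array([response[i] for i, _ in edges], dtype=float)
    y = np.array([response[j] for _, j in edges], dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical network: 4 vessels in a ring, radius changes in
# percent of resting diameter (dilations positive, constrictions negative)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
dilation = {0: 12.0, 1: 10.5, 2: -4.0, 3: -5.5}
print(response_assortativity(edges, dilation))
```

Graph libraries such as NetworkX offer richer assortativity measures over weighted graphs; the point here is only that the statistic is a property of edges, not of individual vessels, which is why single-vessel interrogation cannot predict it.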

S J, Perumalsamy M

pubmed logopapers · Aug 25 2025
The process of detecting and measuring the fat layer surrounding the heart from medical images is referred to as epicardial fat segmentation. Accurate segmentation is essential for assessing heart health and associated risk factors. It plays a critical role in evaluating cardiovascular disease, requiring advanced techniques to enhance precision and effectiveness. However, there is currently a shortage of resources for fat mass measurement. The Visual Lab's cardiac fat database addresses this limitation by providing a comprehensive set of high-resolution images crucial for reliable fat analysis. This study proposes a novel multi-stage framework for epicardial fat segmentation. In the preprocessing phase, window-aware guided bilateral filtering (WGBR) is applied to reduce noise while preserving structural features. For region-of-interest (ROI) selection, the White Shark Optimizer (WSO) is employed to improve exploration and exploitation accuracy. The segmentation task is handled using a bidirectional guided semi-3D network (BGSNet), which enhances robustness by extracting features in both forward and backward directions. Following segmentation, the epicardial fat volume is quantified using reflection-equivariant quantum neural networks (REQNN), which are well suited to modelling complex visual patterns. The Parrot optimizer is further utilized to fine-tune hyperparameters, ensuring optimal performance. The experimental results confirm the effectiveness of the proposed BGSNet-REQNN approach, achieving a Dice score of 99.50%, an accuracy of 99.50%, and an execution time of 1.022 s per slice. Furthermore, the Spearman correlation coefficient for fat quantification yielded an R<sup>2</sup> value of 0.9867, indicating strong agreement with the reference measurements.
This integrated approach offers a reliable solution for epicardial fat segmentation and quantification, thereby supporting improved cardiovascular risk assessment and monitoring.
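The Dice score reported above is the standard overlap metric for segmentation masks. As a reference point, a minimal sketch of its definition (generic, not the BGSNet implementation) with hypothetical toy masks:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Hypothetical 4x4 masks: prediction has 4 positive pixels, ground
# truth has 3, and they overlap in 3, so Dice = 2*3 / (4+3)
pred  = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(dice_score(pred, truth))
```

Because Dice weights overlap against the combined mask sizes, it penalizes both over- and under-segmentation, which is why it is preferred over plain pixel accuracy for small structures such as fat layers.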

Nguyen TC, Phung KA, Dao TTP, Nguyen-Mau TH, Nguyen-Quang T, Pham CN, Le TN, Shen J, Nguyen TV, Tran MT

pubmed logopapers · Aug 25 2025
Polyp shape is critical for diagnosing colorectal polyps and assessing cancer risk, yet there is limited data on segmenting pedunculated and sessile polyps. This paper introduces PolypDB_INS, a dataset of 4403 images containing 4918 annotated polyps, specifically for sessile and pedunculated polyps. In addition, we propose DYNAFormer, a novel transformer-based model utilizing an anchor mask-guided mechanism that incorporates cross-attention, dynamic query updates, and query denoising for improved object segmentation. Treating each positional query as an anchor mask dynamically updated through decoder layers enhances perceptual information regarding the object's position, allowing for more precise segmentation of complex structures like polyps. Extensive experiments on the PolypDB_INS dataset using standard evaluation metrics for both instance and semantic segmentation show that DYNAFormer significantly outperforms state-of-the-art methods. Ablation studies confirm the effectiveness of the proposed techniques, highlighting the model's robustness for diagnosing colorectal cancer. The source code and dataset are available at https://github.com/ntcongvn/DYNAFormer.

Bai X, Feng M, Ma W, Liao Y

pubmed logopapers · Aug 25 2025
Artificial intelligence (AI) chatbots have emerged as promising tools for enhancing medical communication, yet their efficacy in interpreting complex radiological reports remains underexplored. This study evaluates the performance of AI chatbots in translating magnetic resonance imaging (MRI) reports into patient-friendly language and providing clinical recommendations. A cross-sectional analysis was conducted on 6174 MRI reports from tumor patients across three hospitals. Two AI chatbots, GPT o1-preview (Chatbot 1) and Deepseek-R1 (Chatbot 2), were tasked with interpreting reports, classifying tumor characteristics, assessing surgical necessity, and suggesting treatments. Readability was measured using Flesch-Kincaid and Gunning Fog metrics, while accuracy was evaluated by medical reviewers. Statistical analyses included Friedman and Wilcoxon signed-rank tests. Both chatbots significantly improved readability, with Chatbot 2 achieving higher Flesch-Kincaid Reading Ease scores (median: 58.70 vs. 46.00, p < 0.001) and lower text complexity. Chatbot 2 outperformed Chatbot 1 in diagnostic accuracy (92.05% vs. 89.03% for tumor classification; 95.12% vs. 84.73% for surgical necessity, p < 0.001). Treatment recommendations from Chatbot 2 were more clinically relevant (98.10% acceptable vs. 75.41%), though both demonstrated high empathy (92.82-96.11%). Errors included misinterpretations of medical terminology and occasional hallucinations. AI chatbots, particularly Deepseek-R1, effectively enhance the readability and accuracy of MRI report interpretations for patients. However, physician oversight remains critical to mitigate errors. These tools hold potential to reduce healthcare burdens but require further refinement for clinical integration.
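The Flesch-Kincaid Reading Ease scores compared above come from a fixed published formula over sentence length and syllable counts. A self-contained sketch of that formula follows; the vowel-group syllable counter is a deliberately naive stand-in (production tools such as textstat use dictionary-backed counts), so scores will differ slightly from the study's.

```python
import re

def naive_syllables(word):
    """Very rough syllable estimate: count groups of consecutive
    vowels (illustrative only; real tools use pronunciation data)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text):
    """Flesch Reading Ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores indicate easier reading (plain English ~60-70)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))
    syllables = sum(naive_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

# Hypothetical report fragments: jargon scores far lower than plain language
print(flesch_reading_ease("The scan shows a small growth. It has not spread."))
print(flesch_reading_ease("Heterogeneously enhancing infiltrative neoplasm demonstrating perilesional oedema."))
```

The long-word penalty (the 84.6 syllables-per-word term) is what drives radiology jargon far below patient-friendly scores, which is exactly the gap the chatbots were closing.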

Xiao Y, Lin W, Xie F, Liu L, Zheng G, Xiao C

pubmed logopapers · Aug 25 2025
This study investigates the impact of cone beam computed tomography (CBCT) image quality on radiomic analysis and evaluates the potential of deep learning-based enhancement to improve radiomic feature accuracy in nasopharyngeal cancer (NPC). The CBAMRegGAN model was trained on 114 paired CT and CBCT datasets from 114 nasopharyngeal cancer patients to enhance CBCT images, with CT images as ground truth. The dataset was split into 82 patients for training, 12 for validation, and 20 for testing. Radiomic features in six categories, including first-order, gray-level co-occurrence matrix (GLCM), gray-level run-length matrix (GLRLM), gray-level size-zone matrix (GLSZM), neighbouring gray tone difference matrix (NGTDM), and gray-level dependence matrix (GLDM), were extracted from the gross tumor volume (GTV) of original CBCT, enhanced CBCT, and CT. Comparing feature errors between original and enhanced CBCT showed that deep learning-based enhancement improves radiomic feature accuracy. The CBAMRegGAN model achieved improved image quality with a peak signal-to-noise ratio (PSNR) of 29.52 ± 2.28 dB, normalized mean absolute error (NMAE) of 0.0129 ± 0.004, and structural similarity index (SSIM) of 0.910 ± 0.025 for enhanced CBCT images. This led to reduced errors in most radiomic features, with average reductions across 20 patients of 19.0%, 24.0%, 3.0%, 19.0%, 15.0%, and 5.0% for first-order, GLCM, GLRLM, GLSZM, NGTDM, and GLDM features. This study demonstrates that CBCT image quality significantly influences radiomic analysis, and deep learning-based enhancement techniques can effectively improve both image quality and the accuracy of radiomic features in NPC.
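PSNR and NMAE, the image-quality metrics quoted above, both reduce to simple pixel-wise statistics against the ground-truth CT. A minimal NumPy sketch of their standard definitions (generic formulas, not the study's implementation; note that NMAE normalization conventions vary between papers, so the intensity-range version here is one common choice):

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE).
    Higher is better; identical images give infinite PSNR."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    if data_range is None:  # default peak: intensity range of the reference
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def nmae(reference, test):
    """Mean absolute error normalised by the reference intensity range
    (one common convention); lower is better."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    rng = reference.max() - reference.min()
    return float(np.mean(np.abs(reference - test)) / rng)

# Hypothetical 8-bit-style slices: a clean reference and a noisy copy
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 255.0, size=(64, 64))
noisy = reference + rng.normal(0.0, 5.0, size=(64, 64))
print(psnr(reference, noisy), nmae(reference, noisy))
```

SSIM is more involved (local means, variances, and covariances over sliding windows); `skimage.metrics.structural_similarity` is the usual off-the-shelf implementation.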

Handayani VW, Margareth Amiatun Ruth MS, Rulaningtyas R, Caesarardhi MR, Yudhantorro BA, Yudianto A

pubmed logopapers · Aug 25 2025
Accurately determining sex using features like facial bone profiles and teeth is crucial for identifying unknown victims. Lateral cephalometric radiographs effectively depict the lateral cranial structure, aiding the development of computational identification models. This study develops and evaluates a sex prediction model using cephalometric radiographs with several convolutional neural network (CNN) architectures. The primary goal is to evaluate the model's performance on standardized radiographic data and real-world cranial photographs to simulate forensic applications. Six CNN architectures (VGG16, VGG19, MobileNetV2, ResNet50V2, InceptionV3, and InceptionResNetV2) were employed to train and validate 340 cephalometric images of Indonesian individuals aged 18 to 40 years. The data were divided into training (70%), validation (15%), and testing (15%) subsets. Data augmentation was implemented to mitigate class imbalance. Additionally, a set of 40 cranial images from anatomical specimens was employed to evaluate the model's generalizability. Model performance metrics included accuracy, precision, recall, and F1-score. CNN models were trained and evaluated on 340 cephalometric images (255 females and 85 males). VGG19 and ResNet50V2 achieved high F1-scores of 95% (females) and 83% (males), respectively, on cephalometric data, highlighting their strong class-specific performance. Although the overall accuracy exceeded 90%, the F1-score better reflected model performance on this imbalanced dataset. In contrast, performance notably decreased on cranial photographs, particularly when classifying female samples: while InceptionResNetV2 achieved the highest F1-score for cranial photographs (62%), misclassification of females remained significant. Confusion matrices and per-class metrics further revealed persistent issues related to data imbalance and generalization across imaging modalities.
Basic CNN models perform well on standardized cephalometric images but less effectively on photographic cranial images, indicating a domain shift between image types that limits generalizability. Improving real-world forensic performance will require further optimization and more diverse training data.
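The study's point that per-class F1 is more honest than accuracy on a 255-vs-85 imbalanced dataset is easy to see from the metric definitions. A generic sketch (the toy labels below are hypothetical, not the study's data):

```python
def per_class_metrics(y_true, y_pred, labels):
    """Precision, recall, and F1 per class from raw label lists.
    F1 (harmonic mean of precision and recall) exposes minority-class
    failures that overall accuracy hides on imbalanced data."""
    out = {}
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        out[c] = {"precision": prec, "recall": rec, "f1": f1}
    return out

# Hypothetical imbalanced sample: 6 female (F), 2 male (M) cases
y_true = ["F"] * 6 + ["M"] * 2
y_pred = ["F", "F", "F", "F", "F", "M", "M", "F"]
metrics = per_class_metrics(y_true, y_pred, ["F", "M"])
print(metrics)
```

Here overall accuracy is 75%, yet the minority class M reaches only F1 = 0.5, mirroring the paper's observation that accuracy above 90% can coexist with much weaker class-specific performance.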

Issac BM, Kumar SN, Zafar S, Shakil KA, Wani MA

pubmed logopapers · Aug 25 2025
With the exponential growth of big data in domains such as telemedicine and digital forensics, the secure transmission of sensitive medical information has become a critical concern. Conventional steganographic methods often fail to maintain diagnostic integrity or exhibit robustness against noise and transformations. In this study, we propose a novel deep learning-based steganographic framework that combines Squeeze-and-Excitation (SE) blocks, Inception modules, and residual connections to address these challenges. The encoder integrates dilated convolutions and SE attention to embed secret medical images within natural cover images, while the decoder employs residual and multi-scale Inception-based feature extraction for accurate reconstruction. Designed for deployment on NVIDIA Jetson TX2, the model ensures real-time, low-power operation suitable for edge healthcare applications. Experimental evaluation on MRI and OCT datasets demonstrates the model's efficacy, achieving Peak Signal-to-Noise Ratio (PSNR) values of 39.02 and 38.75, and Structural Similarity Index (SSIM) values of 0.9757, confirming minimal visual distortion. This research contributes to advancing secure, high-capacity steganographic systems for practical use in privacy-sensitive environments.

Dassanayake M, Lopez A, Reader A, Cook GJR, Mingels C, Rahmim A, Seifert R, Alberts I, Yousefirizi F

pubmed logopapers · Aug 25 2025
This article reviews recent advancements in PET/computed tomography imaging, emphasizing the transformative impact of total-body and long-axial field-of-view scanners, which offer increased sensitivity, larger coverage, and faster, lower-dose imaging. It highlights the growing role of artificial intelligence (AI) in enhancing image reconstruction, resolution, and multi-tracer applications, enabling rapid processing and improved quantification. AI-driven techniques, such as super-resolution, positron range correction, and motion compensation, are improving lesion detectability and image quality. The review underscores the potential of these innovations to revolutionize clinical and research PET imaging, while also noting the challenges in validation and implementation for routine practice.