
[Radiosurgery of benign intracranial lesions. Indications, results, and perspectives].

Danthez N, De Cournuaud C, Pistocchi S, Aureli V, Giammattei L, Hottinger AF, Schiappacasse L

May 14, 2025
Stereotactic radiosurgery (SRS) is a non-invasive technique that is transforming the management of benign intracranial lesions through its precision and preservation of healthy tissue. It is effective for meningiomas, trigeminal neuralgia (TN), pituitary adenomas, vestibular schwannomas, and arteriovenous malformations. SRS ensures high tumor control rates, particularly for Grade I meningiomas and vestibular schwannomas. For refractory TN, it provides initial pain relief in more than 80% of cases. The advent of technologies such as PET-MRI, hypofractionation, and artificial intelligence is further improving treatment precision, but challenges remain, including the management of late side effects and the standardization of practice.

Optimizing breast lesions diagnosis and decision-making with a deep learning fusion model integrating ultrasound and mammography: a dual-center retrospective study.

Xu Z, Zhong S, Gao Y, Huo J, Xu W, Huang W, Huang X, Zhang C, Zhou J, Dan Q, Li L, Jiang Z, Lang T, Xu S, Lu J, Wen G, Zhang Y, Li Y

May 14, 2025
This study aimed to develop a BI-RADS network (DL-UM) by integrating ultrasound (US) and mammography (MG) images and to explore its performance in improving breast lesion diagnosis and management when collaborating with radiologists, particularly in cases with discordant US and MG Breast Imaging Reporting and Data System (BI-RADS) classifications. We retrospectively collected image data from 1283 women with breast lesions who underwent both US and MG within one month at two medical centres and categorised them into concordant and discordant BI-RADS classification subgroups. We developed the DL-UM network by integrating US and MG images, along with DL networks using US (DL-U) or MG (DL-M) alone. The performance of the DL-UM network for breast lesion diagnosis was evaluated using ROC curves and compared to the DL-U and DL-M networks in the external testing dataset. The diagnostic performance of radiologists with different levels of experience assisted by the DL-UM network was also evaluated. In the external testing dataset, DL-UM outperformed DL-M in sensitivity (0.962 vs. 0.833, P = 0.016) and DL-U in specificity (0.667 vs. 0.526, P = 0.030). In the discordant BI-RADS classification subgroup, DL-UM achieved an AUC of 0.910. The diagnostic performance of four radiologists improved when collaborating with the DL-UM network: AUCs increased from 0.674-0.772 to 0.889-0.910 and specificities from 52.1-75.0% to 81.3-87.5%, while unnecessary biopsies were reduced by 16.1-24.6%, particularly for junior radiologists. Meanwhile, DL-UM outputs and heatmaps enhanced radiologists' trust and improved interobserver agreement between US and MG, with the weighted kappa increasing from 0.048 to 0.713 (P < 0.05). The DL-UM network, integrating complementary US and MG features, assisted radiologists in improving breast lesion diagnosis and management, potentially reducing unnecessary biopsies.
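
The paper's network internals are not given here; below is a minimal late-fusion sketch in PyTorch of the general idea, separate encoders per modality with concatenated embeddings. The encoder choice (ResNet18), head sizes, and class count are illustrative assumptions, not the study's design.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class FusionBIRADSNet(nn.Module):
    """Two-branch fusion sketch: one CNN encoder per modality,
    embeddings concatenated before a shared classification head."""
    def __init__(self, num_classes=2):
        super().__init__()
        # Independent encoders for ultrasound (US) and mammography (MG)
        self.us_encoder = models.resnet18(weights=None)
        self.mg_encoder = models.resnet18(weights=None)
        feat_dim = self.us_encoder.fc.in_features
        self.us_encoder.fc = nn.Identity()
        self.mg_encoder.fc = nn.Identity()
        # Fused head over the concatenated embeddings
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, us_img, mg_img):
        z = torch.cat([self.us_encoder(us_img), self.mg_encoder(mg_img)], dim=1)
        return self.head(z)
```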

Application of artificial intelligence medical imaging aided diagnosis system in the diagnosis of pulmonary nodules.

Yang Y, Wang P, Yu C, Zhu J, Sheng J

May 14, 2025
Artificial intelligence (AI) is transforming many industries and has driven rapid progress in medicine, where its applications continue to grow. This paper integrates AI into a medical imaging-aided diagnosis system to address the gaps and errors of traditional manual diagnosis of pulmonary nodules. Drawing on the principles of image segmentation methods, the diagnosis system is constructed and optimized to improve precision in the diagnosis of pulmonary nodules. Manual reading and the imaging-aided diagnosis system were compared on 200 cases containing 231 nodules confirmed by pathology or stable on follow-up for more than two years. The AI software detected a total of 881 true nodules, for a sensitivity of 99.10% (881/889), whereas the radiologists detected 385 true nodules, for a sensitivity of 43.31% (385/889). For non-calcified nodules, the sensitivity of the AI software was significantly higher than that of the radiologists (99.01% vs. 43.30%, P < 0.001).
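
Only the marginal detection counts (881 and 385 of 889) are reported above; comparing two readers on the same nodules would typically use McNemar's test. A sketch with an assumed, illustrative split of the cells that is consistent with those margins:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical per-nodule outcomes on the same 889 true nodules:
# rows = AI detected (yes/no), cols = radiologist detected (yes/no).
# Margins match the abstract (881 and 385); the cell split is assumed.
table = np.array([[380, 501],   # AI yes / reader yes, AI yes / reader no
                  [  5,   3]])  # AI no / reader yes,  AI no / reader no
result = mcnemar(table, exact=False, correction=True)
print(f"McNemar chi2={result.statistic:.1f}, p={result.pvalue:.2e}")
```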

A multi-layered defense against adversarial attacks in brain tumor classification using ensemble adversarial training and feature squeezing.

Yinusa A, Faezipour M

May 14, 2025
Deep learning, particularly convolutional neural networks (CNNs), has proven valuable for brain tumor classification, aiding diagnostic and therapeutic decisions in medical imaging. Despite their accuracy, these models are vulnerable to adversarial attacks, compromising their reliability in clinical settings. In this research, we utilized a VGG16-based CNN model to classify brain tumors, achieving 96% accuracy on clean magnetic resonance imaging (MRI) data. To assess robustness, we exposed the model to Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks, which reduced accuracy to 32% and 13%, respectively. We then applied a multi-layered defense strategy, including adversarial training with FGSM and PGD examples and feature squeezing techniques such as bit-depth reduction and Gaussian blurring. This approach improved model resilience, achieving 54% accuracy on FGSM and 47% on PGD adversarial examples. Our results highlight the importance of proactive defense strategies for maintaining the reliability of AI in medical imaging under adversarial conditions.
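
A minimal sketch of the two ingredients named above, one-step FGSM and bit-depth feature squeezing, in PyTorch; the epsilon and bit-depth values are placeholders, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step FGSM: nudge each pixel along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def bit_depth_squeeze(x, bits=4):
    """Feature squeezing: quantize pixels to `bits` bits, discarding the
    fine-grained noise that adversarial perturbations live in."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

# Adversarial training then mixes clean and attacked batches, e.g.
# x_adv = fgsm_attack(model, x, y) and a loss over torch.cat([x, x_adv]).
```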

CT-based AI framework leveraging multi-scale features for predicting pathological grade and Ki67 index in clear cell renal cell carcinoma: a multicenter study.

Yang H, Zhang Y, Li F, Liu W, Zeng H, Yuan H, Ye Z, Huang Z, Yuan Y, Xiang Y, Wu K, Liu H

May 14, 2025
To explore whether a CT-based AI framework leveraging multi-scale features can offer a non-invasive approach to accurately predicting pathological grade and Ki67 index in clear cell renal cell carcinoma (ccRCC). In this multicenter retrospective study, 1073 pathologically confirmed ccRCC patients from seven cohorts were split into internal cohorts (training and validation sets) and an external test set. The AI framework comprised an image processor, a 3D kidney and tumor segmentation model based on 3D-UNet, a multi-scale feature extractor built upon unsupervised learning, and a multi-task classifier utilizing XGBoost. A quantitative model interpretation technique, SHapley Additive exPlanations (SHAP), was employed to explore the contribution of the multi-scale features. The 3D-UNet model showed excellent performance in segmenting both the kidney and tumor regions, with Dice coefficients exceeding 0.92. The proposed multi-scale feature model exhibited strong predictive capability for pathological grading and Ki67 index, with AUROC values of 0.84 and 0.87, respectively, in the internal validation set, and 0.82 and 0.82 in the external test set. The SHAP results demonstrated that features from radiomics, the 3D auto-encoder, and dimensionality reduction all made significant contributions to both prediction tasks. The proposed AI framework, integrating segmentation, classification, and model interpretation into a fully automated analysis, thus offers a promising avenue for non-invasive preoperative prediction of the pathological grade and Ki67 index of ccRCC, enabling detection of high-risk tumors and supporting treatment decisions.
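
As a rough illustration of the classifier-plus-interpretation stage (XGBoost with SHAP), here is a self-contained sketch on synthetic data; the features, labels, and hyperparameters are stand-ins, not the study's.

```python
import numpy as np
import xgboost as xgb
import shap

# Synthetic stand-in for the extracted multi-scale feature matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # e.g., high vs. low grade

clf = xgb.XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
clf.fit(X, y)

# TreeExplainer yields per-feature SHAP contributions for each prediction
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X)
print("mean |SHAP| of first 5 features:", np.abs(shap_values).mean(axis=0)[:5])
```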

Whole-body CT-to-PET synthesis using a customized transformer-enhanced GAN.

Xu B, Nie Z, He J, Li A, Wu T

May 14, 2025
Positron emission tomography with 2-deoxy-2-[fluorine-18]fluoro-D-glucose integrated with computed tomography (18F-FDG PET-CT) is a multi-modality medical imaging technique widely used for screening and diagnosing lesions and tumors: CT provides detailed anatomical structure, while PET shows metabolic activity. Nevertheless, it has disadvantages such as long scanning times, high cost, and relatively high radiation doses.
Purpose: We propose a deep learning model for the whole-body CT-to-PET synthesis task that generates high-quality synthetic PET images comparable to real ones in both clinical relevance and diagnostic value.
Material: We collected 102 pairs of 3D CT and PET scans, sliced into 27,240 pairs of 2D CT and PET images (training: 21,855 pairs; validation: 2,810 pairs; testing: 2,575 pairs).
Methods: We propose a customized Transformer-enhanced Generative Adversarial Network (CPGAN) for the whole-body CT-to-PET synthesis task. The CPGAN model uses residual blocks and Fully Connected Transformer Residual (FCTR) blocks to capture both local features and global contextual information. A customized loss function incorporating structural consistency is designed to improve the quality of the synthesized PET images.
Results: Both quantitative and qualitative evaluations demonstrate the effectiveness of the CPGAN model. The mean and standard deviation of the NRMSE, PSNR, and SSIM values on the test set are (16.90 ± 12.27) × 10⁻⁴, 28.71 ± 2.67, and 0.926 ± 0.033, respectively, outperforming seven other state-of-the-art models. Three radiologists independently and blindly scored 100 randomly chosen PET images (50 real and 50 synthetic); by the Wilcoxon signed-rank test, there were no statistical differences between the synthetic PET images and the real ones.
Conclusions: Although CT images cannot directly reflect the metabolic activity of tissues, the CPGAN model effectively synthesizes convincing PET images from CT scans, with potential to reduce reliance on actual PET-CT scans.
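
The three reported image-quality metrics can be computed with scikit-image; a sketch on synthetic slices (the data and scaling below are illustrative only):

```python
import numpy as np
from skimage.metrics import (normalized_root_mse,
                             peak_signal_noise_ratio,
                             structural_similarity)

# real_pet and fake_pet: 2D slices scaled to [0, 1]; synthetic stand-ins here
rng = np.random.default_rng(0)
real_pet = rng.random((256, 256))
fake_pet = np.clip(real_pet + rng.normal(0, 0.02, real_pet.shape), 0, 1)

nrmse = normalized_root_mse(real_pet, fake_pet)
psnr = peak_signal_noise_ratio(real_pet, fake_pet, data_range=1.0)
ssim = structural_similarity(real_pet, fake_pet, data_range=1.0)
print(f"NRMSE={nrmse:.4f}  PSNR={psnr:.2f} dB  SSIM={ssim:.3f}")
```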

Early detection of Alzheimer's disease progression stages using hybrid of CNN and transformer encoder models.

Almalki H, Khadidos AO, Alhebaishi N, Senan EM

May 14, 2025
Alzheimer's disease (AD) is a neurodegenerative disorder that affects memory and cognitive function. Manual diagnosis is prone to human error, often leading to misdiagnosis or delayed detection. MRI reveals the fine tissue structure of the brain, indicating the stage of disease progression, and artificial intelligence techniques can analyze MRI with high accuracy, extracting subtle features that are difficult to assess manually. In this study, a methodology was designed that combines CNN models (ResNet101 and GoogLeNet), which extract local deep features, with Vision Transformer (ViT) models, which extract global features and relationships between image patches. First, the MRI images of the Open Access Series of Imaging Studies (OASIS) dataset were enhanced by two filters: the adaptive median filter (AMF) and the Laplacian filter. The ResNet101 and GoogLeNet models were modified to suit the feature-extraction task and reduce computational cost, and the ViT architecture was likewise modified to reduce computational cost while increasing the number of attention heads, to better capture global features and inter-patch relationships. The enhanced images were fed into the proposed ViT-CNN methodology: the modified ResNet101 and GoogLeNet models extracted deep feature maps, which were then passed to the modified ViT model, partitioned into 32 feature maps (ResNet101) or 16 feature maps (GoogLeNet), each of size 64. The feature maps were positionally encoded to preserve the spatial arrangement of, and relationships between, patches, helping the self-attention layers distinguish patches by position. They were then fed to the transformer encoder, which consisted of six blocks with multiple attention heads that focus on different patterns or regions simultaneously. Finally, MLP classification layers assign each image to one of the four dataset classes. The ResNet101-ViT hybrid outperformed the GoogLeNet-ViT hybrid, achieving 98.7% accuracy, 95.05% AUC, 96.45% precision, 99.68% sensitivity, and 97.78% specificity.
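
A sketch of the core idea, treating CNN feature maps as a token sequence for a six-block transformer encoder, using the ResNet101 branch's stated dimensions (32 maps of 64 features); everything else (head count, pooling, classification layer) is an assumption.

```python
import torch
import torch.nn as nn

# CNN feature maps reshaped as a token sequence: (batch, patches, features)
tokens = torch.randn(8, 32, 64)             # 32 maps of 64 features, per the abstract
pos = nn.Parameter(torch.zeros(1, 32, 64))  # learned positional encoding

# Six encoder blocks, as described; nhead=8 is an assumed value
layer = nn.TransformerEncoderLayer(d_model=64, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=6)

out = encoder(tokens + pos)                 # (8, 32, 64)
logits = nn.Linear(64, 4)(out.mean(dim=1))  # MLP head over the 4 classes
```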

An Annotated Multi-Site and Multi-Contrast Magnetic Resonance Imaging Dataset for the study of the Human Tongue Musculature.

Ribeiro FL, Zhu X, Ye X, Tu S, Ngo ST, Henderson RD, Steyn FJ, Kiernan MC, Barth M, Bollmann S, Shaw TB

May 14, 2025
This dataset provides the first annotated, openly available MRI-based imaging dataset for investigating the tongue musculature, including multi-contrast and multi-site MRI data from participants without disease. The dataset includes 47 participants collated from three studies: BeLong (four participants; T2-weighted images), EATT4MND (19 participants; T2-weighted images), and BMC (24 participants; T1-weighted images). We provide manually corrected segmentations of five key tongue muscles: the superior longitudinal, the transverse and vertical (segmented as one combined label), the genioglossus, and the inferior longitudinal muscles. Other phenotypic measures, including age, sex, weight, height, and tongue muscle volume, are also available. This dataset will benefit researchers across domains interested in the structure and function of the tongue in health and disease. For instance, it can be used to train new machine learning models for tongue segmentation, which can in turn be leveraged to segment and track the different tongue muscles engaged in speech in health and disease. Altogether, this dataset gives the scientific community the means to investigate the intricate tongue musculature and its role in physiological processes and speech production.
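
Assuming the release follows standard NIfTI conventions, a sketch of loading an image/segmentation pair and computing per-muscle volumes with nibabel; the file names below are hypothetical, not the dataset's actual naming.

```python
import nibabel as nib
import numpy as np

# Hypothetical paths; the released dataset's naming may differ
img = nib.load("sub-01_T2w.nii.gz")
seg = nib.load("sub-01_T2w_tongue-seg.nii.gz")
volume = img.get_fdata()
labels = seg.get_fdata().astype(int)

# Per-label muscle volume in mm^3 from the voxel dimensions
voxel_mm3 = np.prod(img.header.get_zooms()[:3])
for label in np.unique(labels)[1:]:          # skip background (0)
    print(label, (labels == label).sum() * voxel_mm3, "mm^3")
```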

Recognizing artery segments on carotid ultrasonography using embedding concatenation of deep image and vision-language models.

Lo CM, Sung SF

May 14, 2025
Evaluating large-artery atherosclerosis is critical for predicting and preventing ischemic strokes. Ultrasonographic assessment of the carotid arteries is the preferred first-line examination due to its ease of use, noninvasiveness, and absence of radiation exposure. This study proposed an automated classification model for the common carotid artery (CCA), carotid bulb, internal carotid artery (ICA), and external carotid artery (ECA) to enhance the quantification of carotid artery examinations.
Approach: A total of 2,943 B-mode ultrasound images (CCA: 1,563; bulb: 611; ICA: 476; ECA: 293) from 288 patients were collected. Three distinct sets of embedding features were extracted from artificial intelligence networks, including pre-trained DenseNet201, Vision Transformer (ViT), and echo contrastive language-image pre-training (EchoCLIP) models, using deep learning architectures for pattern recognition. These features were then combined in a support vector machine (SVM) classifier to interpret the anatomical structures in B-mode images.
Main results: After ten-fold cross-validation, the model achieved an accuracy of 82.3%, significantly better than using any individual feature set (p < 0.001).
Significance: The proposed model could make carotid artery examinations more accurate and consistent given the achieved classification accuracy. The source code is available at https://github.com/buddykeywordw/Artery-Segments-Recognition
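
A sketch of the embedding-concatenation-plus-SVM stage with scikit-learn; the data are synthetic, and the embedding dimensions (1920 for DenseNet201, 768 for a base ViT, 512 assumed for EchoCLIP) are illustrative rather than confirmed from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Stand-ins for the three embedding sets extracted per image
n = 500
rng = np.random.default_rng(0)
dense_feat, vit_feat, clip_feat = (rng.normal(size=(n, d)) for d in (1920, 768, 512))
y = rng.integers(0, 4, size=n)               # CCA / bulb / ICA / ECA labels

# Concatenate embeddings, then classify with an SVM
X = np.concatenate([dense_feat, vit_feat, clip_feat], axis=1)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=10).mean())  # ten-fold CV, as in the paper
```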

Development and Validation of Ultrasound Hemodynamic-based Prediction Models for Acute Kidney Injury After Renal Transplantation.

Ni ZH, Xing TY, Hou WH, Zhao XY, Tao YL, Zhou FB, Xing YQ

May 14, 2025
Acute kidney injury (AKI) after renal transplantation often carries a poor prognosis. This study aimed to identify patients at elevated risk of AKI after kidney transplantation. A retrospective analysis was conducted on 422 patients who underwent kidney transplants from January 2020 to April 2023. Participants from 2020 to 2022 were randomly assigned to a training group (n=261) and validation group 1 (n=113); those from 2023 formed validation group 2 (n=48). Risk factors were identified by logistic regression analysis combined with the least absolute shrinkage and selection operator (LASSO), using ultrasound hemodynamic, clinical, and laboratory information. Prediction models were developed using logistic regression analysis and six machine-learning techniques. The logistic regression model was evaluated for discrimination, calibration, and clinical applicability, and a nomogram was created to illustrate it. SHapley Additive exPlanations were used to explain and visualize the best of the six machine-learning models. LASSO combined with logistic regression identified and incorporated five risk factors into the predictive model. The logistic regression model (AUC=0.927 in validation group 1; AUC=0.968 in validation group 2) and the random forest model (AUC=0.946 in validation group 1; AUC=0.996 in validation group 2) both performed well after validation, with no significant difference in predictive accuracy. These findings can assist clinicians in the early identification of patients at high risk for AKI, allowing timely interventions and potentially improving prognosis after kidney transplantation.
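
A sketch of LASSO-based feature selection followed by a plain logistic refit with scikit-learn, on synthetic stand-in data (the study's actual five predictors are not listed above):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV

# Synthetic stand-in for the ultrasound hemodynamic / clinical feature table
rng = np.random.default_rng(0)
X = rng.normal(size=(261, 20))               # training-group size from the abstract
y = (X[:, 0] - 0.8 * X[:, 3] + rng.normal(size=261) > 0).astype(int)  # AKI yes/no

# The L1 (LASSO) penalty zeroes out uninformative coefficients,
# i.e. it performs feature selection; C is chosen by cross-validation
lasso = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5)
lasso.fit(X, y)
selected = np.flatnonzero(lasso.coef_[0])
print("selected feature indices:", selected)

# Refit a plain logistic model on the selected features (basis for a nomogram)
final = LogisticRegression().fit(X[:, selected], y)
```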