Page 119 of 1381373 results

Application of improved graph convolutional network for cortical surface parcellation.

Tan J, Ren X, Chen Y, Yuan X, Chang F, Yang R, Ma C, Chen X, Tian M, Chen W, Wang Z

PubMed · May 12, 2025
Accurate cortical surface parcellation is essential for elucidating brain organizational principles, functional mechanisms, and the neural substrates underlying higher cognitive and emotional processes. However, the cortical surface is a highly folded, complex geometry, and large regional variations make the analysis of surface data challenging. Current methods rely on geometric simplification, such as spherical expansion, in which spherical mapping and registration take hours; this popular but costly process does not take full advantage of the surface's inherent structural information. In this study, we propose an Attention-guided Deep Graph Convolutional network (ADGCN) for end-to-end parcellation on primitive cortical surface manifolds. ADGCN consists of deep graph convolutional layers arranged in a symmetrical U-shaped structure, which allows it to propagate detailed information from the original brain map, learn the complex graph structure, and strengthen the network's feature extraction. Moreover, we introduce a Squeeze-and-Excitation (SE) module, which enables the network to better capture key features, suppress unimportant ones, and significantly improve parcellation performance at little computational cost. We evaluated the model on a public dataset of 100 manually labeled brain surfaces. Compared with other methods, the proposed network achieves a Dice coefficient of 88.53% and an accuracy of 90.27%. The network segments the cortex directly in the original domain and offers high efficiency, simple operation, and strong interpretability. This approach facilitates the investigation of cortical changes during development, aging, and disease progression, with the potential to enhance the accuracy of neurological disease diagnosis and the objectivity of treatment efficacy evaluation.
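The SE recalibration described above — squeeze (global pooling), excitation (a bottleneck MLP with a sigmoid gate), then channel-wise rescaling — can be sketched in NumPy for graph node features. The weights and reduction ratio here are illustrative, not the paper's:

```python
import numpy as np

def squeeze_excite(features, w1, w2):
    """Channel-wise squeeze-and-excitation gating over graph node features.

    features: (num_nodes, channels) node feature matrix.
    w1: (channels, channels // r) reduction weights.
    w2: (channels // r, channels) expansion weights.
    """
    # Squeeze: global average over nodes -> one descriptor per channel.
    z = features.mean(axis=0)                    # (channels,)
    # Excitation: bottleneck MLP, ReLU then sigmoid gate in (0, 1).
    s = np.maximum(z @ w1, 0.0)                  # (channels // r,)
    gate = 1.0 / (1.0 + np.exp(-(s @ w2)))       # (channels,)
    # Recalibrate: scale each channel by its learned importance.
    return features * gate

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 8))    # 6 graph nodes, 8 channels (toy sizes)
w1 = rng.normal(size=(8, 2))   # reduction ratio r = 4, illustrative
w2 = rng.normal(size=(2, 8))
out = squeeze_excite(x, w1, w2)
```

Because the gate lies strictly in (0, 1), the block can only attenuate channels, which is what lets it suppress unimportant features cheaply.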

MRI-Based Diagnostic Model for Alzheimer's Disease Using 3D-ResNet.

Chen D, Yang H, Li H, He X, Mu H

PubMed · May 12, 2025
Alzheimer's disease (AD), a progressive neurodegenerative disorder, is the leading cause of dementia worldwide and remains incurable once it begins. Early and accurate diagnosis is therefore essential for effective intervention. Leveraging recent advances in deep learning, this study proposes a novel diagnostic model based on the 3D-ResNet architecture to classify three cognitive states from MRI data: AD, mild cognitive impairment (MCI), and cognitively normal (CN). The model integrates the strengths of ResNet and 3D convolutional neural networks (3D-CNN) and incorporates a special attention mechanism (SAM) within the residual structure to enhance feature representation. The study used the ADNI dataset, comprising 800 brain MRI scans, split 7:3 between training and testing; the network was trained using data augmentation and cross-validation strategies. The proposed model achieved 92.33% accuracy on the three-class classification task, and 97.61%, 95.83%, and 93.42% accuracy on the binary classifications of AD vs. CN, AD vs. MCI, and CN vs. MCI, respectively, outperforming existing state-of-the-art methods. Furthermore, Grad-CAM heatmaps and 3D MRI reconstructions revealed that the cerebral cortex and hippocampus are critical regions for AD classification. These findings demonstrate a robust and interpretable AI-based diagnostic framework for AD, providing valuable technical support for its timely detection and clinical intervention.
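The 7:3 split and the accuracy metric used above can be sketched in plain Python; the scan indices and labels below are placeholders, not ADNI data:

```python
import random

def split_dataset(items, train_frac=0.7, seed=42):
    """Shuffle and split a list of scans at the given fraction."""
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_frac)
    return items[:cut], items[cut:]

def accuracy(preds, labels):
    """Fraction of scans assigned the correct cognitive state."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

scans = list(range(800))               # stand-ins for 800 MRI scans
train, test = split_dataset(scans)     # 560 train / 240 test, i.e. 7:3
acc = accuracy(["AD", "MCI", "CN"], ["AD", "MCI", "MCI"])  # 2 of 3 correct
```

In practice the split would be stratified by class so AD, MCI, and CN are represented proportionally in both partitions.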

Use of Artificial Intelligence in Recognition of Fetal Open Neural Tube Defect on Prenatal Ultrasound.

Kumar M, Arora U, Sengupta D, Nain S, Meena D, Yadav R, Perez M

PubMed · May 12, 2025
To compare the axial cranial ultrasound images of normal and open neural tube defect (NTD) fetuses using a deep learning (DL) model and to assess its predictive accuracy in identifying open NTD. This was a prospective case-control study. Axial trans-thalamic fetal ultrasound images of participants with open fetal NTD and normal controls between 14 and 28 weeks of gestation were taken after consent. The images were randomly divided into training, testing, and validation datasets in a 70:15:15 ratio, then processed and classified using DL convolutional neural network (CNN) transfer learning (TL) models trained for 50 epochs. The data were analyzed in terms of Cohen kappa score, accuracy, area under the receiver operating characteristic curve (AUROC), F1 score, validity, sensitivity, and specificity. A total of 59 cases and 116 controls were fully followed. EfficientNet B0, Visual Geometry Group (VGG), and Inception V3 TL models were used. Both EfficientNet B0 and VGG16 gave similarly high training and validation accuracy (100% and 95.83%, respectively). Using Inception V3, the training and validation accuracy was 98.28% and 95.83%, respectively. EfficientNet B0 performed best, with sensitivity and specificity of 100% and 89%, respectively. The analysis of changes in axial images of the fetal cranium using the DL model EfficientNet B0 proved effective for clinical identification of open NTD. · Open spina bifida is often missed due to nonrecognition of the lemon sign on ultrasound. · Image classification using DL identified open spina bifida with excellent accuracy. · The research is clinically relevant in low- and middle-income countries.
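The reported sensitivity and specificity follow directly from a binary confusion matrix. The counts below are hypothetical, chosen only to reproduce figures of the same magnitude as those reported:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts for a binary open-NTD classifier:
sens, spec = sensitivity_specificity(tp=9, fn=0, tn=16, fp=2)
# sens = 1.0 (100%), spec = 16/18 ≈ 0.89 (89%)
```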

Automated field-in-field planning for tangential breast radiation therapy based on digitally reconstructed radiograph.

Srikornkan P, Khamfongkhruea C, Intanin P, Thongsawad S

PubMed · May 12, 2025
The tangential field-in-field (FIF) technique is a widely used method in breast radiation therapy, known for its efficiency and the reduced number of fields required in treatment planning. However, it is labor-intensive, requiring manual shaping of the multileaf collimator (MLC) to minimize hot spots. This study aims to develop a novel automated FIF planning approach for tangential breast radiation therapy using digitally reconstructed radiograph (DRR) images. A total of 78 patients were selected to train and test a fluence-map prediction model based on the U-Net architecture, with DRR images as input. The predicted fluence maps for each treatment plan were converted into MLC positions and exported as Digital Imaging and Communications in Medicine (DICOM) files, which were used to recalculate the dose distribution and assess dosimetric parameters for both the planning target volume (PTV) and organs at risk (OARs). The mean absolute error (MAE) between the predicted and original fluence maps was 0.007 ± 0.002. Gamma analysis indicated strong agreement between the predicted and original fluence maps, with passing rates of 95.47 ± 4.27% for the 3%/3 mm criterion, 94.65 ± 4.32% for the 3%/2 mm criterion, and 83.4 ± 12.14% for the 2%/2 mm criterion. Plan quality, in terms of tumor coverage and doses to OARs, showed no significant differences between the automated FIF and original plans. The automated plans yielded promising results, with plan quality comparable to the original.
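The fluence-map MAE reported above is a direct element-wise comparison. A minimal sketch with toy 2×2 maps (values illustrative):

```python
import numpy as np

def fluence_mae(predicted, original):
    """Mean absolute error between two fluence maps of equal shape."""
    return np.mean(np.abs(predicted - original))

pred = np.array([[0.10, 0.20], [0.30, 0.40]])
orig = np.array([[0.11, 0.19], [0.30, 0.42]])
mae = fluence_mae(pred, orig)   # (0.01 + 0.01 + 0.00 + 0.02) / 4 = 0.01
```

Gamma analysis, by contrast, jointly searches over a dose-difference and distance-to-agreement tolerance per point, which is why it needs dedicated tooling rather than a one-liner.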

Accelerating prostate rs-EPI DWI with deep learning: Halving scan time, enhancing image quality, and validating in vivo.

Zhang P, Feng Z, Chen S, Zhu J, Fan C, Xia L, Min X

PubMed · May 12, 2025
This study evaluates the feasibility and effectiveness of deep learning-based super-resolution techniques for reducing scan time while preserving image quality in high-resolution prostate diffusion-weighted imaging (DWI) with readout-segmented echo-planar imaging (rs-EPI). We retrospectively and prospectively analyzed prostate rs-EPI DWI data, employing deep learning super-resolution models, particularly the Multi-Scale Self-Similarity Network (MSSNet), to reconstruct low-resolution images into high-resolution images. Performance metrics such as the structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and normalized root mean squared error (NRMSE) were used to compare reconstructed images against the high-resolution ground truth (HR<sub>GT</sub>). Additionally, we evaluated apparent diffusion coefficient (ADC) values and signal-to-noise ratio (SNR) across the models. MSSNet demonstrated superior image reconstruction, achieving maximum SSIM values of 0.9798 and significant improvements in PSNR and NRMSE over the other models. The deep learning approach reduced rs-EPI DWI scan time by 54.4% while maintaining image quality comparable to HR<sub>GT</sub>. Pearson correlation analysis revealed a strong correlation between ADC values from deep-learning-reconstructed images and the ground truth, with differences remaining within 5%. Furthermore, all models showed significant SNR enhancement, with MSSNet performing best in most cases. Deep learning-based super-resolution techniques, particularly MSSNet, effectively reduce scan time and enhance image quality in prostate rs-EPI DWI, making them promising tools for clinical applications.
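PSNR and NRMSE as used above can be computed directly; a minimal NumPy sketch with a synthetic reference image (SSIM requires windowed local statistics and is omitted here):

```python
import numpy as np

def psnr(recon, ref, data_range=1.0):
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((recon - ref) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def nrmse(recon, ref):
    """Root mean squared error normalized by the reference's range."""
    rmse = np.sqrt(np.mean((recon - ref) ** 2))
    return rmse / (ref.max() - ref.min())

ref = np.linspace(0.0, 1.0, 16).reshape(4, 4)   # synthetic "ground truth"
recon = ref + 0.01                               # uniform 0.01 offset
# mse = 1e-4, so psnr = 40 dB; nrmse = 0.01
```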

AI-based volumetric six-tissue body composition quantification from CT cardiac attenuation scans for mortality prediction: a multicentre study.

Yi J, Marcinkiewicz AM, Shanbhag A, Miller RJH, Geers J, Zhang W, Killekar A, Manral N, Lemley M, Buchwald M, Kwiecinski J, Zhou J, Kavanagh PB, Liang JX, Builoff V, Ruddy TD, Einstein AJ, Feher A, Miller EJ, Sinusas AJ, Berman DS, Dey D, Slomka PJ

PubMed · May 12, 2025
CT attenuation correction (CTAC) scans are routinely obtained during cardiac perfusion imaging but are currently used only for attenuation correction and visual calcium estimation. We aimed to develop a novel artificial intelligence (AI)-based approach to obtain volumetric measurements of chest body composition from CTAC scans and to evaluate these measures for all-cause mortality risk stratification. We applied AI-based segmentation and image-processing techniques to CTAC scans from a large international image-based registry at four sites (Yale University, University of Calgary, Columbia University, and University of Ottawa) to define the chest rib cage and multiple tissues. Volumetric measures of bone, skeletal muscle, subcutaneous adipose tissue, intramuscular adipose tissue (IMAT), visceral adipose tissue (VAT), and epicardial adipose tissue (EAT) were quantified between the automatically identified T5 and T11 vertebrae. The independent prognostic value of volumetric attenuation and indexed volumes was evaluated for predicting all-cause mortality, adjusting for established risk factors and 18 other body composition measures via Cox regression models and Kaplan-Meier curves. End-to-end processing took less than 2 min per scan with no user interaction. Between 2009 and 2021, we included 11 305 participants from four sites in the REFINE SPECT registry who underwent single-photon emission computed tomography cardiac scans. After excluding patients with incomplete T5-T11 scan coverage or missing clinical data, or whose scans had been used for EAT model training, the final study group comprised 9918 patients: 5451 (55%) male and 4467 (45%) female. Median follow-up was 2·48 years (IQR 1·46-3·65), during which 610 (6%) patients died.
High VAT, EAT, and IMAT attenuation were each associated with increased all-cause mortality risk (adjusted hazard ratios 2·39 [95% CI 1·92-2·96; p<0·0001], 1·55 [1·26-1·90; p<0·0001], and 1·30 [1·06-1·60; p=0·012], respectively). Patients with high bone attenuation were at reduced risk of death (0·77 [0·62-0·95; p=0·016]). Likewise, high skeletal muscle volume index was associated with a reduced risk of death (0·56 [0·44-0·71; p<0·0001]). CTAC scans obtained routinely during cardiac perfusion imaging contain important volumetric body composition biomarkers that can be measured automatically and offer important additional prognostic value. Funding: the National Heart, Lung, and Blood Institute, National Institutes of Health.

A comparison of performance of DeepSeek-R1 model-generated responses to musculoskeletal radiology queries against ChatGPT-4 and ChatGPT-4o - A feasibility study.

Uldin H, Saran S, Gandikota G, Iyengar KP, Vaishya R, Parmar Y, Rasul F, Botchu R

PubMed · May 12, 2025
Artificial intelligence (AI) has transformed society, and chatbots built on large language models (LLMs) are playing an increasing role in scientific research. This study assesses and compares the efficacy of the newer DeepSeek-R1 model against ChatGPT-4 and ChatGPT-4o in answering scientific questions about recent research. We compared output generated by ChatGPT-4, ChatGPT-4o, and DeepSeek-R1 in response to ten standardized questions in musculoskeletal (MSK) radiology. Responses were independently analyzed by one MSK radiologist and one final-year MSK radiology trainee and graded on a Likert scale from 1 (inaccurate) to 5 (accurate). Five DeepSeek answers were significantly inaccurate, and the model produced references only on prompting, all of which were fictitious. All ChatGPT-4 and ChatGPT-4o answers were well written with good content, the latter including useful and comprehensive references. ChatGPT-4o generated structured research answers to questions on recent MSK radiology research with useful references in all our cases, enabling reliable usage. DeepSeek-R1, on the other hand, generates articles that may appear authentic to the unsuspecting eye but, in the current version, contain a higher amount of falsified and inaccurate information. Further iterations may improve accuracy.

LiteMIL: A Computationally Efficient Transformer-Based MIL for Cancer Subtyping on Whole Slide Images.

Kussaibi, H.

medRxiv preprint · May 12, 2025
Purpose: Accurate cancer subtyping is crucial for effective treatment; however, it presents challenges due to overlapping morphology and variability among pathologists. Although deep learning (DL) methods have shown potential, their application to gigapixel whole slide images (WSIs) is often hindered by high computational demands and the need for efficient, context-aware feature aggregation. This study introduces LiteMIL, a computationally efficient transformer-based multiple instance learning (MIL) network combined with Phikon, a pathology-tuned self-supervised feature extractor, for robust and scalable cancer subtyping on WSIs. Methods: Initially, patches were extracted from the TCGA-THYM dataset (242 WSIs, six subtypes) and fed in real time to Phikon for feature extraction. To train the MIL models, features were arranged into uniform bags using a chunking strategy that maintains tissue context while increasing training data. LiteMIL uses a learnable query vector within an optimized multi-head attention module for effective feature aggregation. The model's performance was evaluated against established MIL methods on the thymic dataset and three additional TCGA datasets (breast, lung, and kidney cancer). Results: LiteMIL achieved a 0.89 ± 0.01 F1 score and 0.99 AUC on the thymic dataset, outperforming the other MIL methods, and demonstrated strong generalizability across the external datasets, scoring best on the breast and kidney cancer datasets. Compared to TransMIL, LiteMIL significantly reduces training time and GPU memory usage. Ablation studies confirmed the critical role of the learnable query and layer normalization in enhancing performance and stability. Conclusion: LiteMIL offers a resource-efficient, robust solution. Its streamlined architecture, combined with the compact Phikon features, makes it suitable for integration into routine histopathological workflows, particularly in resource-limited settings.
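The learnable-query aggregation described above can be sketched as a single-head simplification in NumPy. LiteMIL's actual module is multi-head with layer normalization; the dimensions and random weights here are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def query_attention_pool(patch_features, query, w_k, w_v):
    """Aggregate a bag of patch features with one learnable query.

    patch_features: (n_patches, d) patch embeddings (Phikon-style).
    query: (d_k,) learnable query vector.
    w_k: (d, d_k) key projection; w_v: (d, d_v) value projection.
    """
    keys = patch_features @ w_k                       # (n, d_k)
    values = patch_features @ w_v                     # (n, d_v)
    scores = keys @ query / np.sqrt(query.shape[0])   # scaled dot product
    weights = softmax(scores)                         # attention over patches
    return weights @ values, weights                  # slide-level embedding

rng = np.random.default_rng(1)
feats = rng.normal(size=(5, 16))      # 5 patches, 16-dim features (toy)
q = rng.normal(size=(8,))
slide_vec, attn = query_attention_pool(
    feats, q, rng.normal(size=(16, 8)), rng.normal(size=(16, 4)))
```

Because the query is a fixed learnable parameter rather than derived from every patch, the attention cost grows linearly with bag size, which is the source of the efficiency gain over full self-attention MIL variants.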

Automated scout-image-based estimation of contrast agent dosing: a deep learning approach

Schirrmeister, R., Taleb, L., Friemel, P., Reisert, M., Bamberg, F., Weiss, J., Rau, A.

medRxiv preprint · May 12, 2025
We developed and tested a deep-learning-based algorithm for approximating contrast agent dosage from computed tomography (CT) scout images. We prospectively enrolled 817 patients undergoing clinically indicated CT imaging, predominantly of the thorax and/or abdomen. Patient weight was collected by study staff prior to the examination (1) with a weight scale and (2) as self-reported. Based on the scout images, we developed an EfficientNet convolutional neural network pipeline to estimate the optimal contrast agent dose from patient weight, and we provide a browser-based user interface as a versatile open-source tool that accounts for different contrast agent compounds. We additionally analyzed the body-weight-informative CT features by synthesizing representative examples for different weights using in-context learning and dataset distillation. The cohort consisted of 533 thoracic, 70 abdominal, and 229 thoracic-abdominal CT scout scans. Self-reported patient weight was statistically significantly lower than manual measurements (75.13 kg vs. 77.06 kg; p < 10^-5, Wilcoxon signed-rank test). Our pipeline predicted patient weight with a mean absolute error of 3.90 ± 0.20 kg (corresponding to a difference of roughly 4.48-11.70 ml of contrast agent, depending on the agent) in 5-fold cross-validation and is publicly available at https://tinyurl.com/ct-scout-weight. Interpretability analysis revealed that both larger anatomical shape and higher overall attenuation were predictive of body weight. Our open-source deep learning pipeline allows automatic estimation of accurate contrast agent dosing from scout images in routine CT imaging studies. This approach has the potential to streamline contrast agent dosing workflows, improve efficiency, and enhance patient safety by providing quick and accurate weight estimates without additional measurements or reliance on potentially outdated records.
The model's performance may vary depending on patient positioning and scout image quality, and the approach requires validation on larger patient cohorts and at other clinical centers. Author Summary: Automation of medical workflows using AI has the potential to increase reproducibility while saving costs and time. Here, we investigated automating the estimation of the required contrast agent dosage for CT examinations. We trained a deep neural network to predict body weight from the initial 2D CT scout images acquired before the actual CT examination. The predicted weight is then converted to a contrast agent dosage using contrast-agent-specific conversion factors. To facilitate application in clinical routine, we developed a user-friendly browser-based interface that lets clinicians select a contrast agent or input a custom conversion factor to receive dosage suggestions, with data processed locally in the browser. We also investigated which image characteristics predict body weight and found plausible relationships, such as higher attenuation and larger anatomical shapes correlating with higher body weights. Our work goes beyond prior work by implementing a single model for a variety of anatomical regions, providing an accessible user interface, and investigating the predictive characteristics of the images.
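The weight-to-dose conversion the interface performs reduces to multiplying the predicted weight by an agent-specific factor. The factor below is illustrative only, not a value from the study or any clinical protocol:

```python
def contrast_dose_ml(weight_kg, ml_per_kg):
    """Contrast volume from predicted weight and an agent-specific factor."""
    return weight_kg * ml_per_kg

# Illustrative factor (ml per kg); real values depend on the agent's
# iodine concentration and the local protocol:
predicted_weight = 77.0            # kg, as if output by the network
dose = contrast_dose_ml(predicted_weight, ml_per_kg=1.2)  # 92.4 ml
```

Under such per-kilogram dosing, the pipeline's 3.90 kg mean absolute error translates linearly into a dose error, which is how the quoted 4.48-11.70 ml range arises across agents with different factors.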

Automatic Quantification of Ki-67 Labeling Index in Pediatric Brain Tumors Using QuPath

Spyretos, C., Pardo Ladino, J. M., Blomstrand, H., Nyman, P., Snodahl, O., Shamikh, A., Elander, N. O., Haj-Hosseini, N.

medRxiv preprint · May 12, 2025
The quantification of the Ki-67 labeling index (LI) is critical for assessing tumor proliferation and prognosis, yet manual scoring remains common practice. This study presents an automated workflow for Ki-67 scoring in whole slide images (WSIs) using an Apache Groovy script for QuPath, complemented by a Python-based post-processing script that provides cell density maps and summary tables. Tissue and cell segmentation are performed using StarDist, a deep learning model, with adaptive thresholding to classify Ki-67-positive and -negative nuclei. The pipeline was applied to a cohort of 632 pediatric brain tumor cases with 734 Ki-67-stained WSIs from the Children's Brain Tumor Network. Medulloblastoma showed the highest Ki-67 LI (median: 19.84), followed by atypical teratoid rhabdoid tumor (median: 19.36). Moderate values were observed in brainstem glioma-diffuse intrinsic pontine glioma (median: 11.50), high-grade glioma (grades 3 and 4) (median: 9.50), and ependymoma (median: 5.88). Lower indices were found in meningioma (median: 1.84), while the lowest were seen in low-grade glioma (grades 1 and 2) (median: 0.85), dysembryoplastic neuroepithelial tumor (median: 0.63), and ganglioglioma (median: 0.50). The results aligned with oncological consensus, showing a significant correlation in Ki-67 LI across most tumor families/types: highly malignant tumors showed the highest proliferation indices, and lower-malignancy tumors exhibited lower Ki-67 LI. The automated approach facilitates the assessment of large numbers of Ki-67 WSIs in research settings.
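Once nuclei are classified, the Ki-67 LI itself is a simple ratio over counted nuclei. A minimal sketch with hypothetical counts:

```python
def ki67_labeling_index(positive, negative):
    """Ki-67 LI (%) = positive nuclei / all tumor nuclei * 100."""
    total = positive + negative
    return 100.0 * positive / total if total else 0.0

# Hypothetical counts from one slide's classified nuclei:
li = ki67_labeling_index(positive=198, negative=802)   # 19.8
```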