
Automated Detection of Black Hole Sign for Intracerebral Hemorrhage Patients Using Self-Supervised Learning.

Wang H, Schwirtlich T, Houskamp EJ, Hutch MR, Murphy JX, do Nascimento JS, Zini A, Brancaleoni L, Giacomozzi S, Luo Y, Naidech AM

PubMed · May 7 2025
Intracerebral Hemorrhage (ICH) is a devastating form of stroke. Hematoma expansion (HE), growth of the hematoma on interval scans, predicts death and disability. Accurate prediction of HE is crucial for targeted interventions to improve patient outcomes. The black hole sign (BHS) on non-contrast computed tomography (CT) scans is a predictive marker for HE. An automated method to recognize the BHS and predict HE could speed precise patient selection for treatment. In this paper, we present a novel framework leveraging self-supervised learning (SSL) techniques for BHS identification on head CT images. A ResNet-50 encoder model was pre-trained on over 1.7 million unlabeled head CT images. Layers for binary classification were added on top of the pre-trained model. The resulting model was fine-tuned using the training data and evaluated on the held-out test set to collect AUC and F1 scores. The evaluations were performed at the scan and slice levels. We ran different panels, one using two multi-center datasets for external validation and one including parts of them in the pre-training. Our model demonstrated strong performance in identifying BHS when compared with the baseline model. Specifically, the model achieved scan-level AUC scores between 0.75 and 0.89 and F1 scores between 0.60 and 0.70. Furthermore, it exhibited robustness and generalizability on an external dataset, achieving a scan-level AUC score of up to 0.85 and an F1 score of up to 0.60, while it performed less well on another dataset with more heterogeneous samples. The negative effects could be mitigated after including parts of the external datasets in the fine-tuning process. This study introduced a novel framework integrating SSL into medical image classification, particularly for BHS identification from head CT scans. The resulting pre-trained head CT encoder model showed potential to minimize manual annotation, which would significantly reduce labor, time, and costs. After fine-tuning, the framework demonstrated promising performance for a specific downstream task, identifying the BHS to predict HE, upon comprehensive evaluation on diverse datasets. This approach holds promise for enhancing medical image analysis, particularly in scenarios with limited data availability. ICH = Intracerebral Hemorrhage; HE = Hematoma Expansion; BHS = Black Hole Sign; CT = Computed Tomography; SSL = Self-supervised Learning; AUC = Area Under the Receiver Operating Characteristic Curve; CNN = Convolutional Neural Network; SimCLR = Simple framework for Contrastive Learning of visual Representations; HU = Hounsfield Unit; CLAIM = Checklist for Artificial Intelligence in Medical Imaging; VNA = Vendor Neutral Archive; DICOM = Digital Imaging and Communications in Medicine; NIfTI = Neuroimaging Informatics Technology Initiative; INR = International Normalized Ratio; GPU = Graphics Processing Unit; NIH = National Institutes of Health.
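
For readers who want a concrete picture of the fine-tuning stage described above, the following PyTorch sketch shows a ResNet-50 encoder (assumed to have been pre-trained with a SimCLR-style contrastive objective on unlabeled head CTs) extended with a small binary classification head. The checkpoint path, layer sizes, and training details are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a contrastively pre-trained
# ResNet-50 encoder reused for binary black-hole-sign classification.
import torch
import torch.nn as nn
from torchvision.models import resnet50

# 1) Encoder assumed to be pre-trained with SimCLR-style SSL on unlabeled head CTs.
encoder = resnet50(weights=None)
encoder.fc = nn.Identity()  # drop the original classification head, keep 2048-d features
# encoder.load_state_dict(torch.load("ssl_headct_encoder.pt"))  # hypothetical checkpoint

# 2) Add a small binary classification head for slice-level BHS prediction.
classifier = nn.Sequential(
    encoder,
    nn.Linear(2048, 256),
    nn.ReLU(),
    nn.Dropout(0.3),
    nn.Linear(256, 1),  # single logit: BHS present / absent
)

# 3) Fine-tune with a standard binary cross-entropy objective.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.AdamW(classifier.parameters(), lr=1e-4)

x = torch.randn(4, 3, 224, 224)        # batch of CT slices replicated to 3 channels
y = torch.tensor([0.0, 1.0, 0.0, 1.0])  # slice-level BHS labels
loss = criterion(classifier(x).squeeze(1), y)
loss.backward()
optimizer.step()
```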

Multistage Diffusion Model With Phase Error Correction for Fast PET Imaging.

Gao Y, Huang Z, Xie X, Zhao W, Yang Q, Yang X, Yang Y, Zheng H, Liang D, Liu J, Chen R, Hu Z

PubMed · May 7 2025
Fast PET imaging is clinically important for reducing motion artifacts and improving patient comfort. While recent diffusion-based deep learning methods have shown promise, they often fail to capture the true PET degradation process, suffer from accumulated inference errors, introduce artifacts, and require extensive reconstruction iterations. To address these challenges, we propose a novel multistage diffusion framework tailored for fast PET imaging. At the coarse level, we design a multistage structure to approximate the temporal non-linear PET degradation process in a data-driven manner, using paired PET images collected under different acquisition durations. A Phase Error Correction Network (PECNet) ensures consistency across stages by correcting accumulated deviations. At the fine level, we introduce a deterministic cold diffusion mechanism, which simulates intra-stage degradation through interpolation between known acquisition durations, significantly reducing reconstruction iterations to as few as 10. Evaluations on [68Ga]FAPI and [18F]FDG PET datasets demonstrate the superiority of our approach, achieving peak PSNRs of 36.2 dB and 39.0 dB, respectively, with average SSIMs over 0.97. Our framework offers high-fidelity PET imaging with fewer iterations, making it practical for accelerated clinical imaging.
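
As a rough illustration of the deterministic cold-diffusion mechanism mentioned above, the sketch below models intra-stage degradation as linear interpolation between a long-duration (clean) image and a short-duration (fast) acquisition and applies the standard cold-diffusion restoration update. The restoration network, interpolation schedule, and 10-step setting are stand-ins, not the paper's model.

```python
# Minimal sketch of a deterministic cold-diffusion loop under the stated assumptions.
import torch

def degrade(x_clean, x_fast, t, T):
    """Intra-stage degradation D(x, t): interpolate toward the fast (short-duration) scan."""
    alpha = t / T
    return (1 - alpha) * x_clean + alpha * x_fast

@torch.no_grad()
def cold_diffusion_restore(x_fast, restore, T=10):
    """Iterative restoration with the cold-diffusion update
    x_{t-1} = x_t - D(x0_hat, t) + D(x0_hat, t-1)."""
    x_t = x_fast.clone()
    for t in range(T, 0, -1):
        x0_hat = restore(x_t, t)  # network's estimate of the clean (long-duration) image
        x_t = x_t - degrade(x0_hat, x_fast, t, T) + degrade(x0_hat, x_fast, t - 1, T)
    return x_t

# Usage with a trivial stand-in "network" (identity) and a random volume.
restore = lambda x, t: x
x_fast = torch.randn(1, 1, 64, 64, 64)
x_rec = cold_diffusion_restore(x_fast, restore, T=10)
```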

Accelerated inference for thyroid nodule recognition in ultrasound imaging using FPGA.

Ma W, Wu X, Zhang Q, Li X, Wu X, Wang J

PubMed · May 7 2025
Thyroid cancer is the most prevalent malignant tumour in the endocrine system, with its incidence steadily rising in recent years. Current central processing units (CPUs) and graphics processing units (GPUs) face significant challenges in terms of processing speed, energy consumption, cost, and scalability in the identification of thyroid nodules, making them inadequate for the demands of future green, efficient, and accessible healthcare. To overcome these limitations, this study proposes an efficient quantized inference method using a field-programmable gate array (FPGA). We employ the YOLOv4-tiny neural network model, enhancing software performance with the K-means++ optimization algorithm and improving hardware performance through techniques such as 8-bit weight quantization, batch normalization, and convolutional layer fusion. The study is based on the ZYNQ7020 FPGA platform. Experimental results demonstrate an average accuracy of 81.44% on the TN3K dataset and 81.20% on the internal test set from a Chinese tertiary hospital. The power consumption of the FPGA platform, CPU (Intel Core i5-10200H), and GPU (NVIDIA RTX 4090) was 3.119 watts, 45 watts, and 68 watts, respectively, with energy efficiency ratios of 5.45, 0.31, and 5.56. This indicates that the FPGA's energy efficiency is 17.6 times that of the CPU and 0.98 times that of the GPU. These results show that the FPGA not only significantly outperforms the CPU in speed but also consumes far less power than the GPU. Moreover, using mid-to-low-end FPGAs yields performance comparable to that of commercial-grade GPUs. This technology presents a novel solution for medical imaging diagnostics, with the potential to significantly enhance the speed, accuracy, and environmental sustainability of ultrasound image analysis, thereby supporting the future development of medical care.
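
Two of the model-compression steps named above, batch-normalization folding and 8-bit weight quantization, can be illustrated with a generic PyTorch recipe. The sketch below is not the authors' FPGA tool flow, and the layer sizes are arbitrary.

```python
# Generic sketch: fold a BatchNorm layer into the preceding conv, then quantize
# the fused weights symmetrically to int8 (per-tensor).
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold BatchNorm statistics into the preceding conv (inference only)."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding, bias=True)
    scale = bn.weight.data / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias.data
    return fused

def quantize_weights_int8(w: torch.Tensor):
    """Symmetric per-tensor 8-bit quantization: int8 weights plus a float scale."""
    scale = w.abs().max() / 127.0
    q = torch.clamp(torch.round(w / scale), -128, 127).to(torch.int8)
    return q, scale

conv = nn.Conv2d(16, 32, 3, padding=1)
bn = nn.BatchNorm2d(32)
conv.eval(); bn.eval()
fused = fuse_conv_bn(conv, bn)
w_int8, w_scale = quantize_weights_int8(fused.weight.data)
```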

Alterations in static and dynamic functional network connectivity in chronic low back pain: a resting-state network functional connectivity and machine learning study.

Liu H, Wan X

PubMed · May 7 2025
Low back pain (LBP) is a prevalent pain condition whose persistence can lead to changes in the brain regions responsible for sensory, cognitive, attentional, and emotional processing. Previous neuroimaging studies have identified various structural and functional abnormalities in patients with LBP; however, how the static and dynamic large-scale functional network connectivity (FNC) of the brain is affected in these patients remains unclear. Forty-one patients with chronic low back pain (cLBP) and 42 healthy controls underwent resting-state functional MRI scanning. The independent component analysis method was employed to extract the resting-state networks. Subsequently, we calculated static intra- and inter-network functional connectivity and compared it between groups. In addition, we investigated the differences in dynamic functional network connectivity and dynamic temporal metrics between cLBP patients and healthy controls. Finally, we attempted to distinguish cLBP patients from healthy controls using a support vector machine. The results showed significant reductions in intra-network functional connectivity within the DMN, DAN, and ECN in cLBP patients. Significant between-group differences were also found in static FNC and in each state of dynamic FNC. In addition, in terms of dynamic temporal metrics, fraction time and mean dwell time were significantly altered in cLBP patients. In conclusion, our study suggests the existence of static and dynamic large-scale brain network alterations in patients with cLBP. The findings provide insights into the neural mechanisms underlying various brain function abnormalities and altered pain experiences in patients with cLBP.
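
The final classification step, separating cLBP patients from controls with a support vector machine, might look roughly like the scikit-learn sketch below. The FNC feature matrix is a synthetic placeholder, and the kernel, cross-validation scheme, and feature count are assumptions.

```python
# Hedged sketch: SVM classification of subjects from (placeholder) FNC features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(83, 120))        # 83 subjects x 120 FNC features (illustrative only)
y = np.array([1] * 41 + [0] * 42)     # 41 cLBP patients, 42 healthy controls

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```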

False Promises in Medical Imaging AI? Assessing Validity of Outperformance Claims

Evangelia Christodoulou, Annika Reinke, Pascaline Andrè, Patrick Godau, Piotr Kalinowski, Rola Houhou, Selen Erkan, Carole H. Sudre, Ninon Burgos, Sofiène Boutaj, Sophie Loizillon, Maëlys Solal, Veronika Cheplygina, Charles Heitz, Michal Kozubek, Michela Antonelli, Nicola Rieke, Antoine Gilson, Leon D. Mayer, Minu D. Tizabi, M. Jorge Cardoso, Amber Simpson, Annette Kopp-Schneider, Gaël Varoquaux, Olivier Colliot, Lena Maier-Hein

arXiv preprint · May 7 2025
Performance comparisons are fundamental in medical imaging Artificial Intelligence (AI) research, often driving claims of superiority based on relative improvements in common performance metrics. However, such claims frequently rely solely on empirical mean performance. In this paper, we investigate whether newly proposed methods genuinely outperform the state of the art by analyzing a representative cohort of medical imaging papers. We quantify the probability of false claims based on a Bayesian approach that leverages reported results alongside empirically estimated model congruence to estimate whether the relative ranking of methods is likely to have occurred by chance. According to our results, the majority (>80%) of papers claim outperformance when introducing a new method. Our analysis further revealed a high probability (>5%) of false outperformance claims in 86% of classification papers and 53% of segmentation papers. These findings highlight a critical flaw in current benchmarking practices: claims of outperformance in medical imaging AI are frequently unsubstantiated, posing a risk of misdirecting future research efforts.
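
The paper's Bayesian procedure is more involved than this, but the toy Monte Carlo sketch below conveys the underlying point: when the per-case scores of two methods are highly correlated (congruent), a small gap in mean performance can easily arise by chance. All numbers here are invented for illustration.

```python
# Toy illustration (not the authors' Bayesian procedure): how often could a
# reported mean-performance gap occur with no true difference between methods?
import numpy as np

rng = np.random.default_rng(0)
n_cases, rho = 200, 0.8            # test-set size and assumed model congruence
observed_gap = 0.01                # hypothetical reported improvement in mean score

# Paired per-case scores of two equally good methods with correlated errors.
cov = np.array([[1.0, rho], [rho, 1.0]]) * 0.05**2
hits = 0
for _ in range(10_000):
    scores = rng.multivariate_normal([0.85, 0.85], cov, size=n_cases)
    if scores[:, 0].mean() - scores[:, 1].mean() >= observed_gap:
        hits += 1
print(f"P(gap >= {observed_gap} | no true difference) ~= {hits / 10_000:.3f}")
```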

Convergent Complex Quasi-Newton Proximal Methods for Gradient-Driven Denoisers in Compressed Sensing MRI Reconstruction

Tao Hong, Zhaoyi Xu, Se Young Chun, Luis Hernandez-Garcia, Jeffrey A. Fessler

arXiv preprint · May 7 2025
In compressed sensing (CS) MRI, model-based methods are pivotal to achieving accurate reconstruction. One of the main challenges in model-based methods is finding an effective prior to describe the statistical distribution of the target image. Plug-and-Play (PnP) and REgularization by Denoising (RED) are two general frameworks that use denoisers as the prior. While PnP/RED methods with convolutional neural networks (CNNs) based denoisers outperform classical hand-crafted priors in CS MRI, their convergence theory relies on assumptions that do not hold for practical CNNs. The recently developed gradient-driven denoisers offer a framework that bridges the gap between practical performance and theoretical guarantees. However, the numerical solvers for the associated minimization problem remain slow for CS MRI reconstruction. This paper proposes a complex quasi-Newton proximal method that achieves faster convergence than existing approaches. To address the complex domain in CS MRI, we propose a modified Hessian estimation method that guarantees Hermitian positive definiteness. Furthermore, we provide a rigorous convergence analysis of the proposed method for nonconvex settings. Numerical experiments on both Cartesian and non-Cartesian sampling trajectories demonstrate the effectiveness and efficiency of our approach.
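
For orientation, a generic quasi-Newton proximal iteration for a composite objective f + g can be written as below; the paper's complex-domain variant and its Hermitian positive-definite Hessian estimation differ in the details.

```latex
% Generic quasi-Newton proximal step for min_x f(x) + g(x), with g the regularizer
% induced by the gradient-driven denoiser; notation only, not the paper's exact update.
\[
  x_{k+1} = \operatorname*{arg\,min}_{x}\; g(x)
    + \tfrac{1}{2}\,\bigl\| x - \bigl(x_k - B_k^{-1}\nabla f(x_k)\bigr) \bigr\|_{B_k}^{2},
  \qquad \| v \|_{B_k}^{2} := v^{\mathsf H} B_k v,
\]
% where B_k is a Hermitian positive-definite quasi-Newton approximation of the
% Hessian of the data-fidelity term f.
```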

Advancing 3D Medical Image Segmentation: Unleashing the Potential of Planarian Neural Networks in Artificial Intelligence

Ziyuan Huang, Kevin Huggins, Srikar Bellur

arXiv preprint · May 7 2025
Our study presents PNN-UNet as a method for constructing deep neural networks that replicate the planarian neural network (PNN) structure in the context of 3D medical image data. Planarians typically have a cerebral structure comprising two neural cords, where the cerebrum acts as a coordinator, and the neural cords serve slightly different purposes within the organism's neurological system. Accordingly, PNN-UNet comprises a Deep-UNet and a Wide-UNet as the nerve cords, with a densely connected autoencoder performing the role of the brain. This distinct architecture offers advantages over both monolithic (UNet) and modular networks (Ensemble-UNet). Our outcomes on a 3D MRI hippocampus dataset, with and without data augmentation, demonstrate that PNN-UNet outperforms the baseline UNet and several other UNet variants in image segmentation.

3D Brain MRI Classification for Alzheimer Diagnosis Using CNN with Data Augmentation

Thien Nhan Vo, Bac Nam Ho, Thanh Xuan Truong

arXiv preprint · May 7 2025
A three-dimensional convolutional neural network was developed to classify T1-weighted brain MRI scans as healthy or Alzheimer. The network comprises 3D convolution, pooling, batch normalization, dense ReLU layers, and a sigmoid output. Using stochastic noise injection and five-fold cross-validation, the model achieved test set accuracy of 0.912 and area under the ROC curve of 0.961, an improvement of approximately 0.027 over resizing alone. Sensitivity and specificity both exceeded 0.90. These results align with prior work reporting up to 0.10 gain via synthetic augmentation. The findings demonstrate the effectiveness of simple augmentation for 3D MRI classification and motivate future exploration of advanced augmentation methods and architectures such as 3D U-Net and vision transformers.
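
A compact PyTorch sketch of this kind of architecture, 3D convolutions with batch normalization and pooling, dense ReLU layers, a sigmoid output, and Gaussian noise injection during training, is given below. The layer sizes and noise level are illustrative, not the paper's.

```python
# Illustrative 3D CNN for binary MRI classification with noise-injection augmentation.
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.BatchNorm3d(32), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),  # probability of the Alzheimer class
        )

    def forward(self, x):
        if self.training:                     # stochastic Gaussian noise injection
            x = x + 0.05 * torch.randn_like(x)
        return self.classifier(self.features(x))

model = Simple3DCNN()
x = torch.randn(2, 1, 64, 64, 64)             # two resized T1-weighted volumes
prob = model(x)                               # shape (2, 1), values in (0, 1)
```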

Cross-organ all-in-one parallel compressed sensing magnetic resonance imaging

Baoshun Shi, Zheng Liu, Xin Meng, Yan Yang

arXiv preprint · May 7 2025
Recent advances in deep learning-based parallel compressed sensing magnetic resonance imaging (p-CSMRI) have significantly improved reconstruction quality. However, current p-CSMRI methods often require training a separate deep neural network (DNN) for each organ due to anatomical variations, creating a barrier to developing generalized medical image reconstruction systems. To address this, we propose CAPNet (cross-organ all-in-one deep unfolding p-CSMRI network), a unified framework that implements a p-CSMRI iterative algorithm via three specialized modules: an auxiliary variable module, a prior module, and a data consistency module. Recognizing that p-CSMRI systems often employ varying sampling ratios for different organs, resulting in organ-specific artifact patterns, we introduce an artifact generation submodule, which extracts and integrates artifact features into the data consistency module to enhance the discriminative capability of the overall network. For the prior module, we design an organ structure-prompt generation submodule that leverages structural features extracted from the Segment Anything Model (SAM) to create cross-organ prompts. These prompts are strategically incorporated into the prior module through an organ structure-aware Mamba submodule. Comprehensive evaluations on a cross-organ dataset confirm that CAPNet achieves state-of-the-art reconstruction performance across multiple anatomical structures using a single unified model. Our code will be published at https://github.com/shibaoshun/CAPNet.

A Deep Learning Approach for Mandibular Condyle Segmentation on Ultrasonography.

Keser G, Yülek H, Öner Talmaç AG, Bayrakdar İŞ, Namdar Pekiner F, Çelik Ö

PubMed · May 6 2025
Deep learning techniques have demonstrated potential in various fields, including segmentation, and have recently been applied to medical image processing. This study aims to develop and evaluate computer-based diagnostic software for segmentation of the mandibular condyle in ultrasound images. A total of 668 retrospective ultrasound images of anonymous adult mandibular condyles were analyzed. The CranioCatch labeling program (CranioCatch, Eskişehir, Turkey) was utilized to annotate the mandibular condyle using a polygonal labeling method. These annotations were subsequently reviewed and validated by experts in oral and maxillofacial radiology. All test images were detected and segmented using the YOLOv8 deep learning artificial intelligence (AI) model. In the evaluation on the test images, the model achieved an F1 score of 0.93, a sensitivity of 0.90, and a precision of 0.96. The automatic segmentation of the mandibular condyle from ultrasound images presents a promising application of artificial intelligence. This approach can help surgeons, radiologists, and other specialists save time in the diagnostic process.
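
Since the study uses the YOLOv8 family, a segmentation run with the Ultralytics API would look roughly like the sketch below; the dataset YAML, pretrained weights, and hyperparameters are placeholders rather than the study's configuration.

```python
# Hedged usage sketch of YOLOv8 segmentation with the Ultralytics API.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")                    # pretrained segmentation weights
model.train(data="condyle_ultrasound.yaml",       # hypothetical dataset config
            epochs=100, imgsz=640)
metrics = model.val()                             # precision/recall/mAP on the val split
results = model("example_condyle_ultrasound.png") # predict masks on a new image
```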