Accelerated inference for thyroid nodule recognition in ultrasound imaging using FPGA.

Ma W, Wu X, Zhang Q, Li X, Wu X, Wang J

PubMed · May 7, 2025
Thyroid cancer is the most prevalent malignant tumour of the endocrine system, and its incidence has risen steadily in recent years. Current central processing units (CPUs) and graphics processing units (GPUs) face significant challenges in processing speed, energy consumption, cost, and scalability when identifying thyroid nodules, making them inadequate for the demands of future green, efficient, and accessible healthcare. To overcome these limitations, this study proposes an efficient quantized inference method using a field-programmable gate array (FPGA). We employ the YOLOv4-tiny neural network model, enhancing software performance with the K-means++ optimization algorithm and improving hardware performance through 8-bit weight quantization and the fusion of batch normalization into convolutional layers. The study is based on the ZYNQ7020 FPGA platform. Experimental results demonstrate an average accuracy of 81.44% on the TN3K dataset and 81.20% on an internal test set from a Chinese tertiary hospital. The power consumption of the FPGA platform, CPU (Intel Core i5-10200H), and GPU (NVIDIA RTX 4090) was 3.119 watts, 45 watts, and 68 watts, respectively, with energy efficiency ratios of 5.45, 0.31, and 5.56. The FPGA's energy efficiency is thus 17.6 times that of the CPU and 0.98 times that of the GPU. These results show that the FPGA not only significantly outperforms the CPU in speed but also consumes far less power than the GPU; moreover, a mid-to-low-end FPGA yields performance comparable to that of a commercial-grade GPU. This technology presents a novel solution for medical imaging diagnostics, with the potential to significantly enhance the speed, accuracy, and environmental sustainability of ultrasound image analysis, thereby supporting the future development of medical care.
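As a rough illustration of the two hardware-oriented optimizations named above (not the authors' FPGA implementation), the NumPy sketch below folds batch-normalization parameters into a preceding convolution and applies symmetric 8-bit weight quantization; all shapes and values are invented for the example.

```python
import numpy as np

def fuse_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold batch-norm parameters into the preceding convolution's weights
    and bias so that inference needs only a single fused layer."""
    scale = gamma / np.sqrt(var + eps)           # per-output-channel scale
    w_fused = w * scale[:, None, None, None]     # w shape: (out_c, in_c, kh, kw)
    b_fused = (b - mean) * scale + beta
    return w_fused, b_fused

def quantize_int8(w):
    """Symmetric 8-bit quantization: map floats to int8 with one per-tensor scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale                              # dequantize as q * scale

# Toy usage on a random 3x3 convolution with 16 output / 8 input channels.
w = np.random.randn(16, 8, 3, 3).astype(np.float32)
b = np.zeros(16, dtype=np.float32)
gamma, beta = np.ones(16), np.zeros(16)
mean, var = np.zeros(16), np.ones(16)
w_f, b_f = fuse_conv_bn(w, b, gamma, beta, mean, var)
w_q, s = quantize_int8(w_f)
print(np.abs(w_f - w_q.astype(np.float32) * s).max())  # worst-case quantization error
```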

Alterations in static and dynamic functional network connectivity in chronic low back pain: a resting-state network functional connectivity and machine learning study.

Liu H, Wan X

PubMed · May 7, 2025
Low back pain (LBP) is a prevalent pain condition whose persistence can lead to changes in the brain regions responsible for sensory, cognitive, attentional, and emotional processing. Previous neuroimaging studies have identified various structural and functional abnormalities in patients with LBP; however, how the static and dynamic large-scale functional network connectivity (FNC) of the brain is affected in these patients remains unclear. Forty-one patients with chronic low back pain (cLBP) and 42 healthy controls underwent resting-state functional MRI scanning. Independent component analysis was used to extract the resting-state networks. We then calculated static intra- and inter-network functional connectivity and compared it between groups. In addition, we investigated differences in dynamic functional network connectivity and dynamic temporal metrics between cLBP patients and healthy controls. Finally, we attempted to distinguish cLBP patients from healthy controls using a support vector machine. The results showed significant reductions in intra-network functional connectivity within the DMN, DAN, and ECN in cLBP patients. Significant between-group differences were also found in static FNC and in each state of dynamic FNC. In terms of dynamic temporal metrics, fraction of time and mean dwell time were significantly altered in cLBP patients. In conclusion, our study suggests the existence of static and dynamic large-scale brain network alterations in patients with cLBP. The findings provide insights into the neural mechanisms underlying the various brain function abnormalities and altered pain experiences in patients with cLBP.
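For readers unfamiliar with the final classification step, the snippet below is a minimal sketch of separating patients from controls with a support vector machine trained on vectorized FNC features. The feature matrix is synthetic, and the kernel and cross-validation choices are assumptions for illustration, not those of the study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical inputs: one row per participant, columns are vectorized
# functional network connectivity features (e.g., lower-triangle FNC values).
rng = np.random.default_rng(0)
X = rng.normal(size=(83, 45))        # 41 patients + 42 controls, 45 FNC edges
y = np.array([1] * 41 + [0] * 42)    # 1 = cLBP, 0 = healthy control

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(scores.mean())                 # cross-validated discrimination (chance level on random data)
```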

False Promises in Medical Imaging AI? Assessing Validity of Outperformance Claims

Evangelia Christodoulou, Annika Reinke, Pascaline Andrè, Patrick Godau, Piotr Kalinowski, Rola Houhou, Selen Erkan, Carole H. Sudre, Ninon Burgos, Sofiène Boutaj, Sophie Loizillon, Maëlys Solal, Veronika Cheplygina, Charles Heitz, Michal Kozubek, Michela Antonelli, Nicola Rieke, Antoine Gilson, Leon D. Mayer, Minu D. Tizabi, M. Jorge Cardoso, Amber Simpson, Annette Kopp-Schneider, Gaël Varoquaux, Olivier Colliot, Lena Maier-Hein

arXiv preprint · May 7, 2025
Performance comparisons are fundamental in medical imaging Artificial Intelligence (AI) research, often driving claims of superiority based on relative improvements in common performance metrics. However, such claims frequently rely solely on empirical mean performance. In this paper, we investigate whether newly proposed methods genuinely outperform the state of the art by analyzing a representative cohort of medical imaging papers. We quantify the probability of false claims using a Bayesian approach that leverages reported results alongside empirically estimated model congruence to assess whether the relative ranking of methods is likely to have occurred by chance. According to our results, the majority (>80%) of papers claim outperformance when introducing a new method. Our analysis further revealed a high probability (>5%) of false outperformance claims in 86% of classification papers and 53% of segmentation papers. These findings highlight a critical flaw in current benchmarking practices: claims of outperformance in medical imaging AI are frequently unsubstantiated, posing a risk of misdirecting future research efforts.
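The snippet below is only a simplified illustration of the underlying idea, not the authors' model: it places Beta posteriors over two accuracies reported on a shared test set and estimates how often the baseline would actually rank at least as high. It deliberately ignores the model-congruence term the paper uses to account for correlated predictions.

```python
import numpy as np

def prob_ranking_by_chance(acc_new, acc_base, n_test, n_samples=100_000, seed=0):
    """Given two reported accuracies on the same test set of size n_test,
    sample Beta posteriors over the true accuracies (uniform prior) and
    estimate the probability that the baseline is at least as good."""
    rng = np.random.default_rng(seed)
    k_new, k_base = round(acc_new * n_test), round(acc_base * n_test)
    p_new = rng.beta(k_new + 1, n_test - k_new + 1, n_samples)
    p_base = rng.beta(k_base + 1, n_test - k_base + 1, n_samples)
    return float(np.mean(p_base >= p_new))

# e.g., a 1.5-point "improvement" measured on only 200 test cases
print(prob_ranking_by_chance(0.875, 0.860, n_test=200))
```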

Convergent Complex Quasi-Newton Proximal Methods for Gradient-Driven Denoisers in Compressed Sensing MRI Reconstruction

Tao Hong, Zhaoyi Xu, Se Young Chun, Luis Hernandez-Garcia, Jeffrey A. Fessler

arXiv preprint · May 7, 2025
In compressed sensing (CS) MRI, model-based methods are pivotal to achieving accurate reconstruction. One of the main challenges in model-based methods is finding an effective prior to describe the statistical distribution of the target image. Plug-and-Play (PnP) and REgularization by Denoising (RED) are two general frameworks that use denoisers as the prior. While PnP/RED methods with convolutional neural network (CNN)-based denoisers outperform classical hand-crafted priors in CS MRI, their convergence theory relies on assumptions that do not hold for practical CNNs. The recently developed gradient-driven denoisers offer a framework that bridges the gap between practical performance and theoretical guarantees. However, the numerical solvers for the associated minimization problem remain slow for CS MRI reconstruction. This paper proposes a complex quasi-Newton proximal method that achieves faster convergence than existing approaches. To address the complex domain in CS MRI, we propose a modified Hessian estimation method that guarantees Hermitian positive definiteness. Furthermore, we provide a rigorous convergence analysis of the proposed method for nonconvex settings. Numerical experiments on both Cartesian and non-Cartesian sampling trajectories demonstrate the effectiveness and efficiency of our approach.
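For orientation, a generic proximal quasi-Newton update for a composite objective f + g has the form below; the paper's contribution lies in the complex-domain Hessian estimate B_k and the accompanying convergence analysis, which this schematic does not capture.

\[
x^{k+1} \;=\; \arg\min_{x}\; g(x) \;+\; \tfrac{1}{2}\,\bigl\lVert\, x - \bigl( x^{k} - B_k^{-1}\,\nabla f(x^{k}) \bigr) \bigr\rVert_{B_k}^{2},
\qquad
\lVert v \rVert_{B_k}^{2} \;:=\; v^{H} B_k\, v,
\]

where f is the data-fidelity term, g is the regularizer induced by the gradient-driven denoiser, and B_k is a Hermitian positive definite Hessian approximation, so that the weighted norm is well defined over complex-valued images.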

An imageless magnetic resonance framework for fast and cost-effective decision-making

Alba González-Cebrián, Pablo García-Cristóbal, Fernando Galve, Efe Ilıcak, Viktor Van Der Valk, Marius Staring, Andrew Webb, Joseba Alonso

arXiv preprint · May 7, 2025
Magnetic Resonance Imaging (MRI) is the gold standard in countless diagnostic procedures, yet hardware complexity, long scans, and cost preclude rapid screening and point-of-care use. We introduce Imageless Magnetic Resonance Diagnosis (IMRD), a framework that bypasses k-space sampling and image reconstruction by analyzing raw one-dimensional MR signals. We identify potentially impactful embodiments in which IMRD requires only optimized pulse sequences for time-domain contrast, minimal low-field hardware, and pattern recognition algorithms to answer closed clinical queries and quantify lesion burden. As a proof of concept, we simulate multiple sclerosis lesions in silico within brain phantoms and deploy two extremely fast protocols (approximately 3 s each), with and without spatial information. A 1D convolutional neural network achieves an AUC close to 0.95 for lesion detection and an R² close to 0.99 for volume estimation. We also perform robustness tests under reduced signal-to-noise ratio, partial signal omission, and relaxation-time variability. By reframing MR signals as direct diagnostic metrics, IMRD paves the way for fast, low-cost MR screening and monitoring in resource-limited environments.
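A minimal PyTorch sketch of the kind of 1D convolutional classifier described above is shown below; the layer sizes, signal length, and pooling choices are illustrative assumptions, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class Signal1DCNN(nn.Module):
    """Toy 1D CNN mapping a raw MR signal (one channel, fixed length)
    to a lesion-presence probability."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x):              # x: (batch, 1, signal_length)
        return self.head(self.features(x))

model = Signal1DCNN()
x = torch.randn(8, 1, 2048)            # batch of eight simulated raw signals
print(model(x).shape)                  # (8, 1) lesion probabilities
```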

Advancing 3D Medical Image Segmentation: Unleashing the Potential of Planarian Neural Networks in Artificial Intelligence

Ziyuan Huang, Kevin Huggins, Srikar Bellur

arXiv preprint · May 7, 2025
Our study presents PNN-UNet, a method for constructing deep neural networks that replicate the planarian neural network (PNN) structure in the context of 3D medical image data. Planarians typically have a nervous system comprising a cerebral ganglion and two nerve cords: the cerebrum acts as a coordinator, while the nerve cords serve slightly different purposes within the organism's neurological system. Accordingly, PNN-UNet comprises a Deep-UNet and a Wide-UNet as the nerve cords, with a densely connected autoencoder performing the role of the brain. This distinct architecture offers advantages over both monolithic networks (UNet) and modular networks (Ensemble-UNet). Our results on a 3D hippocampus MRI dataset, with and without data augmentation, demonstrate that PNN-UNet outperforms the baseline UNet and several other UNet variants in image segmentation.
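The abstract does not spell out the implementation, so the following PyTorch sketch only illustrates the general "two cords plus a coordinating brain" composition with placeholder sub-networks; it is not the published PNN-UNet design.

```python
import torch
import torch.nn as nn

class TwoCordSegmenter(nn.Module):
    """Schematic only: two parallel sub-networks produce candidate
    segmentations and a small coordinator fuses them. The sub-networks and
    the coordinator here are placeholders, not the paper's architecture."""
    def __init__(self, cord_a: nn.Module, cord_b: nn.Module, n_classes=2):
        super().__init__()
        self.cord_a = cord_a                       # e.g., a Deep-UNet
        self.cord_b = cord_b                       # e.g., a Wide-UNet
        self.coordinator = nn.Sequential(          # "brain" that fuses the outputs
            nn.Conv3d(2 * n_classes, n_classes, kernel_size=1),
            nn.Softmax(dim=1),
        )

    def forward(self, x):                          # x: (batch, 1, D, H, W)
        fused = torch.cat([self.cord_a(x), self.cord_b(x)], dim=1)
        return self.coordinator(fused)

# Smoke test with trivial stand-ins for the two U-Nets.
dummy_a = nn.Conv3d(1, 2, kernel_size=3, padding=1)
dummy_b = nn.Conv3d(1, 2, kernel_size=3, padding=1)
net = TwoCordSegmenter(dummy_a, dummy_b, n_classes=2)
print(net(torch.randn(1, 1, 32, 32, 32)).shape)    # (1, 2, 32, 32, 32)
```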

3D Brain MRI Classification for Alzheimer Diagnosis Using CNN with Data Augmentation

Thien Nhan Vo, Bac Nam Ho, Thanh Xuan Truong

arXiv preprint · May 7, 2025
A three-dimensional convolutional neural network was developed to classify T1-weighted brain MRI scans as healthy or Alzheimer's disease. The network comprises 3D convolution, pooling, batch normalization, dense ReLU layers, and a sigmoid output. Using stochastic noise injection and five-fold cross-validation, the model achieved a test set accuracy of 0.912 and an area under the ROC curve of 0.961, an improvement of approximately 0.027 over resizing alone. Sensitivity and specificity both exceeded 0.90. These results align with prior work reporting gains of up to 0.10 via synthetic augmentation. The findings demonstrate the effectiveness of simple augmentation for 3D MRI classification and motivate future exploration of advanced augmentation methods and architectures such as 3D U-Net and vision transformers.
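A hedged sketch of such a network, together with noise-injection augmentation, is given below in PyTorch; channel counts, input size, and noise level are illustrative assumptions rather than the reported configuration.

```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv3d(cin, cout, kernel_size=3, padding=1),
                         nn.BatchNorm3d(cout), nn.ReLU(), nn.MaxPool3d(2))

class Brain3DCNN(nn.Module):
    """Small 3D CNN: conv/batch-norm/pool feature extractor, a dense ReLU
    layer, and a sigmoid output for the healthy-vs-Alzheimer's decision."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(block(1, 8), block(8, 16), block(16, 32),
                                      nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.classifier = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                                        nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x):                      # x: (batch, 1, D, H, W)
        return self.classifier(self.features(x))

def augment_with_noise(volume, sigma=0.05):
    """Stochastic noise injection: add Gaussian noise to a normalized volume."""
    return volume + sigma * torch.randn_like(volume)

model = Brain3DCNN()
vol = torch.randn(2, 1, 64, 64, 64)            # two toy T1-weighted volumes
print(model(augment_with_noise(vol)).shape)    # (2, 1) class probabilities
```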

Cross-organ all-in-one parallel compressed sensing magnetic resonance imaging

Baoshun Shi, Zheng Liu, Xin Meng, Yan Yang

arXiv preprint · May 7, 2025
Recent advances in deep learning-based parallel compressed sensing magnetic resonance imaging (p-CSMRI) have significantly improved reconstruction quality. However, current p-CSMRI methods often require training a separate deep neural network (DNN) for each organ due to anatomical variations, creating a barrier to developing generalized medical image reconstruction systems. To address this, we propose CAPNet (cross-organ all-in-one deep unfolding p-CSMRI network), a unified framework that implements a p-CSMRI iterative algorithm via three specialized modules: an auxiliary variable module, a prior module, and a data consistency module. Recognizing that p-CSMRI systems often employ varying sampling ratios for different organs, resulting in organ-specific artifact patterns, we introduce an artifact generation submodule, which extracts and integrates artifact features into the data consistency module to enhance the discriminative capability of the overall network. For the prior module, we design an organ structure-prompt generation submodule that leverages structural features extracted from the Segment Anything Model (SAM) to create cross-organ prompts. These prompts are strategically incorporated into the prior module through an organ structure-aware Mamba submodule. Comprehensive evaluations on a cross-organ dataset confirm that CAPNet achieves state-of-the-art reconstruction performance across multiple anatomical structures using a single unified model. Our code will be published at https://github.com/shibaoshun/CAPNet.
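As a rough sketch of what a deep-unfolding reconstruction loop looks like, the PyTorch code below alternates a learned prior step with a k-space data-consistency step for a simplified single-coil case; CAPNet's auxiliary-variable, artifact-generation, and SAM-prompt submodules, and its multi-coil handling, are not reproduced here.

```python
import torch
import torch.nn as nn

class UnrolledCSMRI(nn.Module):
    """Generic unrolled reconstruction: K iterations, each applying a small
    learned prior network followed by hard data consistency in k-space."""
    def __init__(self, iters=5):
        super().__init__()
        self.priors = nn.ModuleList([
            nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 2, 3, padding=1))
            for _ in range(iters)])
        self.step = nn.Parameter(torch.full((iters,), 0.5))   # learnable step sizes

    def data_consistency(self, x, y, mask):
        # Replace sampled k-space locations with the measured values.
        k = torch.fft.fft2(torch.view_as_complex(x.permute(0, 2, 3, 1).contiguous()))
        k = torch.where(mask.bool(), y, k)
        return torch.view_as_real(torch.fft.ifft2(k)).permute(0, 3, 1, 2)

    def forward(self, x0, y, mask):            # x0: (B, 2, H, W) real/imag channels
        x = x0
        for i, prior in enumerate(self.priors):
            x = x - self.step[i] * prior(x)    # learned prior (denoising-like) step
            x = self.data_consistency(x, y, mask)
        return x

net = UnrolledCSMRI(iters=3)
x0 = torch.randn(1, 2, 64, 64)                          # zero-filled initial estimate
y = torch.randn(1, 64, 64, dtype=torch.complex64)       # measured k-space (toy)
mask = torch.rand(1, 64, 64) < 0.3                      # 30% sampling mask
print(net(x0, y, mask).shape)                           # (1, 2, 64, 64)
```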

Opinions and preferences regarding artificial intelligence use in healthcare delivery: results from a national multi-site survey of breast imaging patients.

Dontchos BN, Dodelzon K, Bhole S, Edmonds CE, Mullen LA, Parikh JR, Daly CP, Epling JA, Christensen S, Grimm LJ

PubMed · May 6, 2025
Artificial intelligence (AI) utilization is growing, but patient perceptions of AI are unclear. Our objective was to understand patient perceptions of AI through a multi-site survey of breast imaging patients. A 36-question survey was distributed to eight US practices (six academic, two non-academic) from October 2023 through October 2024. This manuscript analyzes a subset of survey questions addressing digital health literacy and attitudes towards AI in medicine and in breast imaging specifically. Multivariable analysis compared responses by respondent demographics. A total of 3,532 surveys were collected (response rate 69.9%, 3,532/5,053). Median respondent age was 55 years (IQR 20). Most respondents were White (73.0%, 2,579/3,532) and had completed college (77.3%, 2,732/3,532). Overall, respondents were undecided (range: 43.2%-50.8%) on questions about general perceptions of AI in healthcare. Respondents with higher electronic health literacy, more education, and younger age were significantly more likely to consider it useful to utilize AI for aiding medical tasks (all p<0.001). In contrast, respondents with lower electronic health literacy and less education were significantly more likely to indicate it was a bad idea for AI to perform medical tasks (p<0.001). Non-White patients were more likely to express concerns that AI will not work as well for some groups compared to others (p<0.05). Overall, favorable opinions of AI use for medical tasks were associated with younger age, more education, and higher electronic health literacy. As AI is increasingly implemented into clinical workflows, it is important to educate patients and provide transparency to build patient understanding and trust.

A Deep Learning Approach for Mandibular Condyle Segmentation on Ultrasonography.

Keser G, Yülek H, Öner Talmaç AG, Bayrakdar İŞ, Namdar Pekiner F, Çelik Ö

PubMed · May 6, 2025
Deep learning techniques have demonstrated potential in various fields, including segmentation, and have recently been applied to medical image processing. This study aimed to develop and evaluate computer-based diagnostic software for segmenting the mandibular condyle in ultrasound images. A total of 668 retrospective ultrasound images of anonymized adult mandibular condyles were analyzed. The CranioCatch labeling program (CranioCatch, Eskişehir, Turkey) was used to annotate the mandibular condyle with a polygonal labeling method. These annotations were subsequently reviewed and validated by experts in oral and maxillofacial radiology. The mandibular condyle in all test images was detected and segmented using the YOLOv8 deep learning artificial intelligence (AI) model. In the segmentation evaluation, the model achieved an F1 score of 0.93, a sensitivity of 0.90, and a precision of 0.96. The automatic segmentation of the mandibular condyle from ultrasound images presents a promising application of artificial intelligence. This approach can help surgeons, radiologists, and other specialists save time in the diagnostic process.
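As a quick consistency check on the reported metrics, the F1 score implied by the stated precision and sensitivity (recall) matches the reported value:

```python
precision, recall = 0.96, 0.90
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))   # 0.929, consistent with the reported F1 of 0.93
```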