Response Assessment in Hepatocellular Carcinoma: A Primer for Radiologists.

Mroueh N, Cao J, Srinivas Rao S, Ghosh S, Song OK, Kongboonvijit S, Shenoy-Bhangle A, Kambadakone A

pubmed · Aug 7, 2025
Hepatocellular carcinoma (HCC) is the third leading cause of cancer-related deaths worldwide, necessitating accurate and early diagnosis to guide therapy, along with assessment of treatment response. Response assessment criteria have evolved from traditional morphologic approaches, such as the WHO criteria and the Response Evaluation Criteria in Solid Tumors (RECIST), to more recent methods focused on evaluating viable tumor burden, including the European Association for the Study of the Liver (EASL) criteria, modified RECIST (mRECIST), and the Liver Imaging Reporting and Data System (LI-RADS) Treatment Response (LI-TR) algorithm. This shift reflects the complex and evolving landscape of HCC treatment in the context of emerging systemic and locoregional therapies. Each of these criteria has its own nuanced strengths and limitations in capturing the detailed characteristics of HCC treatment and response assessment. The emergence of functional imaging techniques, including dual-energy CT and perfusion imaging, along with the rising use of radiomics, is enhancing the capabilities of response assessment. Growth in artificial intelligence and machine learning models provides an opportunity to refine the precision of response assessment by facilitating analysis of complex imaging data patterns. This review article provides a comprehensive overview of existing criteria, discusses functional and emerging imaging techniques, and outlines future directions for advancing HCC tumor response assessment.
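As a concrete illustration of how the criteria above translate into practice, the sketch below categorizes target-lesion response from sums of viable (enhancing) tumor diameters using mRECIST-style thresholds (CR: no viable tumor; PR: ≥30% decrease from baseline; PD: ≥20% increase over the nadir). The function and its simplifications are ours, not the article's; real response assessment also involves non-target lesions and new-lesion rules.

```python
def mrecist_target_response(baseline_sum_mm: float, current_sum_mm: float,
                            nadir_sum_mm: float) -> str:
    """Categorize target-lesion response from sums of viable (enhancing)
    tumor diameters, per mRECIST-style thresholds.

    baseline_sum_mm: sum of viable diameters at baseline
    current_sum_mm:  sum at the current timepoint (0 => no viable tumor)
    nadir_sum_mm:    smallest sum recorded so far
    """
    if current_sum_mm == 0:
        return "CR"   # complete response: no viable (enhancing) tumor
    if current_sum_mm >= 1.2 * nadir_sum_mm:
        return "PD"   # progression: >=20% increase over the nadir
    if current_sum_mm <= 0.7 * baseline_sum_mm:
        return "PR"   # partial response: >=30% decrease from baseline
    return "SD"       # stable disease: neither PR nor PD

print(mrecist_target_response(50.0, 30.0, 30.0))  # "PR"
```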

Best Machine Learning Model for Predicting Axial Symptoms After Unilateral Laminoplasty: Based on C2 Spinous Process Muscle Radiomics Features and Sagittal Parameters.

Zheng B, Zhu Z, Liang Y, Liu H

pubmed · Aug 7, 2025
Study Design: Retrospective study.
Objective: To develop a machine learning model for predicting axial symptoms (AS) after unilateral laminoplasty by integrating C2 spinous process muscle radiomics features and cervical sagittal parameters.
Methods: In this retrospective study of 96 cervical myelopathy patients (30 with AS, 66 without) who underwent unilateral laminoplasty between 2018 and 2022, we extracted radiomics features from preoperative MRI of the C2 spinous muscles using PyRadiomics. Clinical data, including the C2-C7 Cobb angle, cervical sagittal vertical axis (cSVA), T1 slope (T1S), and C2 muscle fat infiltration, were collected for clinical model construction. After LASSO regression feature selection, we constructed six machine learning models (SVM, KNN, Random Forest, ExtraTrees, XGBoost, and LightGBM) and evaluated their performance using ROC curves and AUC.
Results: The AS group demonstrated significantly lower preoperative C2-C7 Cobb angles (12.80° ± 7.49° vs 18.02° ± 8.59°, P = .006), higher cSVA (3.01 ± 0.87 cm vs 2.46 ± 1.19 cm, P = .026), higher T1S (26.68° ± 5.12° vs 23.66° ± 7.58°, P = .025), and higher C2 muscle fat infiltration (23.73 ± 7.78 vs 20.62 ± 6.93, P = .026). Key radiomics features included local binary pattern texture features and wavelet transform characteristics. The combined model integrating radiomics and clinical parameters achieved the best performance, with a test AUC of 0.881, sensitivity of 0.833, and specificity of 0.786.
Conclusion: The machine learning model based on C2 spinous process muscle radiomics features and clinical parameters (C2-C7 Cobb angle, cSVA, T1S, and C2 muscle fat infiltration) effectively predicts AS occurrence after unilateral laminoplasty, providing clinicians with a valuable tool for preoperative risk assessment and personalized treatment planning.
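For readers who want to see the shape of such a pipeline, here is a minimal sketch of LASSO-style feature selection followed by training and AUC comparison of several of the classifiers named above, on synthetic data standing in for the radiomics and clinical features. XGBoost and LightGBM are omitted to keep the sketch dependency-free; all hyperparameters are assumptions, not the study's.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for 96 patients with high-dimensional radiomics +
# clinical features and a ~31% positive (AS) rate.
X, y = make_classification(n_samples=96, n_features=120, n_informative=10,
                           weights=[0.69, 0.31], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "SVM": SVC(probability=True),
    "KNN": KNeighborsClassifier(),
    "RandomForest": RandomForestClassifier(random_state=0),
    "ExtraTrees": ExtraTreesClassifier(random_state=0),
}
for name, clf in models.items():
    # L1-penalized logistic regression stands in for LASSO feature selection.
    lasso = SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
    pipe = make_pipeline(StandardScaler(), lasso, clf)
    pipe.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, pipe.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```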

Enhancing image retrieval through optimal barcode representation.

Khosrowshahli R, Kheiri F, Asilian Bidgoli A, Tizhoosh HR, Makrehchi M, Rahnamayan S

pubmed · Aug 7, 2025
Data binary encoding has proven to be a versatile tool for optimizing data processing and memory efficiency in various machine learning applications. This includes deep barcoding: generating barcodes from deep learning feature extraction for retrieval of similar cases among millions of indexed images. Despite recent advances in barcode generation methods, converting high-dimensional feature vectors (e.g., deep features) into compact and discriminative binary barcodes remains an unresolved problem. Difference-based binarization of features is one of the most efficient binarization methods, transforming continuous feature vectors into binary sequences and capturing trend information. However, the performance of this method is highly dependent on the ordering of the input features, leading to a significant combinatorial challenge. This research addresses this problem by optimizing feature sequences based on retrieval performance metrics. Our approach identifies optimal feature orderings, leading to substantial improvements in retrieval effectiveness compared to arbitrary or default orderings. We assess the performance of the proposed approach on various medical and non-medical image retrieval tasks. This evaluation includes medical images from The Cancer Genome Atlas (TCGA), a comprehensive publicly available dataset, as well as the COVID-19 Chest X-ray dataset. In addition, we evaluate the proposed approach on non-medical benchmark image datasets such as CIFAR-10, CIFAR-100, and Fashion-MNIST. Our findings demonstrate the importance of optimizing binary barcode representations to significantly enhance accuracy for fast image retrieval across a wide range of applications, highlighting the applicability and potential of barcodes in various domains.
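The difference-based binarization the abstract refers to is simple to state: bit i is 1 when the (i+1)-th feature in a chosen ordering exceeds the i-th. The sketch below (ours, illustrative only) shows how the resulting barcode depends entirely on the feature ordering, which is exactly the combinatorial quantity the paper optimizes.

```python
import numpy as np

def diff_binarize(features: np.ndarray, order: np.ndarray) -> np.ndarray:
    """Difference-based binarization along a permutation `order`."""
    f = features[order]
    return (np.diff(f) > 0).astype(np.uint8)  # bit i = 1 iff f[i+1] > f[i]

rng = np.random.default_rng(0)
feats = rng.normal(size=8)
default = np.arange(8)
shuffled = rng.permutation(8)
a = diff_binarize(feats, default)    # barcode under the default ordering
b = diff_binarize(feats, shuffled)   # a different barcode, same features
print(a, b)
# Retrieval then compares barcodes by Hamming distance:
print(int(np.count_nonzero(a != b)))
```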

MoMA: A Mixture-of-Multimodal-Agents Architecture for Enhancing Clinical Prediction Modelling

Jifan Gao, Mahmudur Rahman, John Caskey, Madeline Oguss, Ann O'Rourke, Randy Brown, Anne Stey, Anoop Mayampurath, Matthew M. Churpek, Guanhua Chen, Majid Afshar

arxiv preprint · Aug 7, 2025
Multimodal electronic health record (EHR) data provide richer, complementary insights into patient health compared to single-modality data. However, effectively integrating diverse data modalities for clinical prediction modeling remains challenging due to the substantial data requirements. We introduce a novel architecture, Mixture-of-Multimodal-Agents (MoMA), designed to leverage multiple large language model (LLM) agents for clinical prediction tasks using multimodal EHR data. MoMA employs specialized LLM agents ("specialist agents") to convert non-textual modalities, such as medical images and laboratory results, into structured textual summaries. These summaries, together with clinical notes, are combined by another LLM ("aggregator agent") to generate a unified multimodal summary, which is then used by a third LLM ("predictor agent") to produce clinical predictions. Evaluated on three prediction tasks using real-world datasets with different modality combinations and prediction settings, MoMA outperforms current state-of-the-art methods, highlighting its enhanced accuracy and flexibility across various tasks.
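A minimal sketch of the specialist → aggregator → predictor control flow described above follows. The call_llm callable and the prompts are hypothetical placeholders for whatever LLM backend and prompt design the authors use; only the three-stage routing mirrors the abstract.

```python
from typing import Callable, Dict

def moma_predict(ehr: Dict[str, str], call_llm: Callable[[str, str], str]) -> str:
    # 1) Specialist agents: one per non-textual modality, each converting its
    #    input (e.g., an imaging finding, a lab panel) into a text summary.
    specialist_summaries = {
        modality: call_llm(f"Summarize this {modality} for clinical use.", payload)
        for modality, payload in ehr.items() if modality != "notes"
    }
    # 2) Aggregator agent: fuses the specialist summaries with the clinical
    #    notes into a single unified multimodal summary.
    combined = "\n".join([ehr.get("notes", "")] + list(specialist_summaries.values()))
    unified = call_llm("Combine these sources into one patient summary.", combined)
    # 3) Predictor agent: produces the clinical prediction from the summary.
    return call_llm("Predict the clinical outcome (yes/no) with rationale.", unified)
```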

Beyond Pixels: Medical Image Quality Assessment with Implicit Neural Representations

Caner Özer, Patryk Rygiel, Bram de Wilde, İlkay Öksüz, Jelmer M. Wolterink

arxiv preprint · Aug 7, 2025
Artifacts pose a significant challenge in medical imaging, impacting diagnostic accuracy and downstream analysis. While image-based approaches for detecting artifacts can be effective, they often rely on preprocessing methods that can cause information loss, and the high memory demands of medical images limit the scalability of classification models. In this work, we propose the use of implicit neural representations (INRs) for image quality assessment. INRs provide a compact and continuous representation of medical images, naturally handling variations in resolution and image size while reducing memory overhead. We develop deep weight-space networks, graph neural networks, and relational attention transformers that operate on INRs to achieve image quality assessment. Our method is evaluated on the ACDC dataset with synthetically generated artifact patterns, demonstrating its effectiveness in assessing image quality while achieving comparable performance with fewer parameters.
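For intuition, the sketch below fits a tiny INR, an MLP mapping continuous pixel coordinates to intensities, to a toy image; downstream quality assessment would then operate on the fitted weights rather than on pixels. The architecture and training details are our assumptions, not the paper's.

```python
import torch
import torch.nn as nn

class INR(nn.Module):
    """An MLP that maps a 2D coordinate to an image intensity."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        return self.net(xy)

# Fit the INR to a toy 32x32 image by regressing intensity on coordinates.
img = torch.rand(32, 32)
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 32),
                        torch.linspace(-1, 1, 32), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
target = img.reshape(-1, 1)

model = INR()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(coords), target)
    loss.backward()
    opt.step()
# The image is now encoded (approximately) in model.parameters(): a compact,
# resolution-independent representation of the original pixels.
```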

MedMambaLite: Hardware-Aware Mamba for Medical Image Classification

Romina Aalishah, Mozhgan Navardi, Tinoosh Mohsenin

arxiv preprint · Aug 7, 2025
AI-powered medical devices have driven the need for real-time, on-device inference for tasks such as biomedical image classification. Deep learning models deployed at the edge now support applications such as anomaly detection and classification in medical images. However, achieving this level of performance on edge devices remains challenging due to limitations in model size and computational capacity. To address this, we present MedMambaLite, a hardware-aware Mamba-based model optimized through knowledge distillation for medical image classification. We start with a powerful MedMamba model, integrating a Mamba structure for efficient feature extraction in medical imaging. We make the model lighter and faster in training and inference by modifying the architecture and reducing its redundancies. We then distill its knowledge into a smaller student model by reducing the embedding dimensions. The optimized model achieves 94.5% overall accuracy on 10 MedMNIST datasets and reduces the parameter count by a factor of 22.8 compared to MedMamba. Deployed on an NVIDIA Jetson Orin Nano, it achieves an energy efficiency of 35.6 GOPS/J, a 63% improvement in energy per inference over MedMamba.
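The knowledge-distillation step described above typically combines a softened teacher distribution with the hard labels; a minimal sketch is below. The temperature, weighting, and dimensions are illustrative assumptions, not MedMambaLite's actual configuration.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.7):
    # Soft-target term: KL divergence between temperature-softened
    # distributions, scaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)  # hard-label term
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 10, requires_grad=True)  # student logits (10 toy classes)
t = torch.randn(8, 10)                      # frozen teacher logits
y = torch.randint(0, 10, (8,))              # ground-truth labels
distillation_loss(s, t, y).backward()       # gradients flow to the student
```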

RegionMed-CLIP: A Region-Aware Multimodal Contrastive Learning Pre-trained Model for Medical Image Understanding

Tianchen Fang, Guiru Liu

arxiv preprint · Aug 7, 2025
Medical image understanding plays a crucial role in enabling automated diagnosis and data-driven clinical decision support. However, its progress is impeded by two primary challenges: the limited availability of high-quality annotated medical data and an overreliance on global image features, which often miss subtle but clinically significant pathological regions. To address these issues, we introduce RegionMed-CLIP, a region-aware multimodal contrastive learning framework that explicitly incorporates localized pathological signals along with holistic semantic representations. The core of our method is an innovative region-of-interest (ROI) processor that adaptively integrates fine-grained regional features with the global context, supported by a progressive training strategy that enhances hierarchical multimodal alignment. To enable large-scale region-level representation learning, we construct MedRegion-500k, a comprehensive medical image-text corpus that features extensive regional annotations and multilevel clinical descriptions. Extensive experiments on image-text retrieval, zero-shot classification, and visual question answering tasks demonstrate that RegionMed-CLIP consistently outperforms state-of-the-art vision-language models by a wide margin. Our results highlight the critical importance of region-aware contrastive pre-training and position RegionMed-CLIP as a robust foundation for advancing multimodal medical image understanding.
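The contrastive objective behind CLIP-style pretraining is a symmetric InfoNCE loss over matched image-text pairs; a minimal sketch follows. The simple weighted fusion of global and ROI features stands in for the paper's ROI processor and is our assumption, not the published design.

```python
import torch
import torch.nn.functional as F

def clip_loss(img_emb, txt_emb, temperature: float = 0.07):
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature      # pairwise similarities
    targets = torch.arange(img.size(0))       # matched pairs on the diagonal
    # Symmetric loss: image-to-text and text-to-image directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

B, D = 16, 256
global_feat, roi_feat = torch.randn(B, D), torch.randn(B, D)
image_emb = 0.5 * global_feat + 0.5 * roi_feat  # stand-in for ROI fusion
text_emb = torch.randn(B, D)
print(clip_loss(image_emb, text_emb).item())
```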

Clinical Decision Support for Alzheimer's: Challenges in Generalizable Data-Driven Approach.

Gao T, Madanian S, Templeton J, Merkin A

pubmed · Aug 7, 2025
This paper reviews current research on Alzheimer's disease and the use of deep learning, particularly 3D convolutional neural networks (3D-CNNs), in analyzing brain images. It presents a predictive model based on MRI and clinical data from the ADNI dataset, showing that deep learning can improve diagnostic accuracy and sensitivity. We also discuss potential applications in biomarker discovery, disease progression prediction, and personalised treatment planning, highlighting the ability to identify sensitive features for early diagnosis.
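For readers unfamiliar with 3D-CNNs, the sketch below shows a minimal volumetric classifier of the kind applied to MRI in this line of work; the layer sizes and two-class head are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    """A tiny 3D-CNN: two conv blocks, global pooling, a linear head."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                  # halve each spatial dimension
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # global average pool to 1x1x1
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):                     # x: (batch, 1, D, H, W)
        return self.classifier(self.features(x).flatten(1))

vol = torch.randn(2, 1, 64, 64, 64)           # two toy MRI volumes
print(Simple3DCNN()(vol).shape)               # torch.Size([2, 2])
```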

Automatic Multi-Stage Classification Model for Fetal Ultrasound Images Based on EfficientNet.

Shih CS, Chiu HW

pubmed · Aug 7, 2025
This study aims to enhance the accuracy of fetal ultrasound image classification using convolutional neural networks, specifically EfficientNet. The research focuses on data collection, preprocessing, model training, and evaluation at different pregnancy stages: early, midterm, and newborn. EfficientNet showed the best performance, particularly in the newborn stage, demonstrating deep learning's potential to improve classification performance and support clinical workflows.
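A common way to apply EfficientNet to a new ultrasound task is transfer learning: freeze the pretrained backbone and replace the classification head. The sketch below illustrates this with torchvision; the three-way head mirrors the early/midterm/newborn stages, but all training details are assumptions, not the study's.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained EfficientNet-B0 backbone.
model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
for p in model.parameters():            # freeze the pretrained backbone
    p.requires_grad = False
# Replace the classification head with a 3-way output (one per stage);
# the new Linear layer's parameters remain trainable.
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 3)

x = torch.randn(4, 3, 224, 224)         # a toy batch of ultrasound frames
print(model(x).shape)                   # torch.Size([4, 3])
```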

Lower Extremity Bypass Surveillance and Peak Systolic Velocities Value Prediction Using Recurrent Neural Networks.

Luo X, Tahabi FM, Rollins DM, Sawchuk AP

pubmed · Aug 7, 2025
Routine duplex ultrasound surveillance is recommended after femoral-popliteal and femoral-tibial-pedal vein bypass grafts at various post-operative intervals. Currently, there is no systematic method for bypass graft surveillance using a set of peak systolic velocities (PSVs) collected during these exams. This research explores the use of recurrent neural networks to predict the next set of PSVs, which can then indicate occlusion status. Recurrent neural network models were developed to predict occlusion and stenosis based on one to three prior sets of PSVs, with a sequence-to-sequence model utilized to forecast future PSVs within the stent graft and nearby arteries. The study employed 5-fold cross-validation for model performance comparison, revealing that the BiGRU model outperformed BiLSTM when two or more sets of PSVs were included and demonstrating that additional duplex ultrasound exams improve prediction accuracy and reduce error rates. This work establishes a basis for integrating comprehensive clinical data, including demographics, comorbidities, symptoms, and other risk factors, with PSVs to enhance lower extremity bypass graft surveillance predictions.
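As an illustration of the kind of sequence model the study compares, the sketch below maps a short history of PSV sets to a predicted next set with a bidirectional GRU; the number of PSV measurements per exam (here 6) and the layer sizes are our assumptions.

```python
import torch
import torch.nn as nn

class PSVForecaster(nn.Module):
    """BiGRU over a sequence of exams; predicts the next set of PSVs."""
    def __init__(self, n_psv: int = 6, hidden: int = 32):
        super().__init__()
        self.gru = nn.GRU(n_psv, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_psv)  # 2x for both directions

    def forward(self, history):                   # (batch, n_exams, n_psv)
        out, _ = self.gru(history)
        return self.head(out[:, -1])              # predict the next PSV set

history = torch.randn(4, 3, 6)   # 4 patients, 3 prior exams, 6 PSVs each
print(PSVForecaster()(history).shape)             # torch.Size([4, 6])
```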