Page 566 of 7597590 results

Siyu Mu, Wei Xuan Chan, Choon Hwai Yap

arXiv preprint, Jun 25 2025
Elucidating the biomechanical behavior of the myocardium is crucial for understanding cardiac physiology, but cannot be directly inferred from clinical imaging and typically requires finite element (FE) simulations. However, conventional FE methods are computationally expensive and often fail to reproduce observed cardiac motions. We propose IMC-PINN-FE, a physics-informed neural network (PINN) framework that integrates imaged motion consistency (IMC) with FE modeling for patient-specific left ventricular (LV) biomechanics. Cardiac motion is first estimated from MRI or echocardiography using either a pre-trained attention-based network or an unsupervised cyclic-regularized network, followed by extraction of motion modes. IMC-PINN-FE then rapidly estimates myocardial stiffness and active tension by fitting clinical pressure measurements, accelerating computation from hours to seconds compared to traditional inverse FE. Based on these parameters, it performs FE modeling across the cardiac cycle at a 75x speedup. Through motion constraints, it matches imaged displacements more accurately, improving average Dice from 0.849 to 0.927, while preserving realistic pressure-volume behavior. IMC-PINN-FE advances previous PINN-FE models by introducing back-computation of material properties and better motion fidelity. Using motion from a single subject to reconstruct shape modes also avoids the need for large datasets and improves patient specificity. IMC-PINN-FE offers a robust and efficient approach for rapid, personalized, and image-consistent cardiac biomechanical modeling.
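The Dice values cited (0.849 rising to 0.927) are the standard overlap metric between a predicted and a reference segmentation, 2|A∩B| / (|A| + |B|). A minimal sketch with toy 1-D masks (illustrative only, not the authors' pipeline):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# toy 1-D "masks": predicted vs. imaged myocardial region
pred = np.array([0, 1, 1, 1, 0, 0])
truth = np.array([0, 0, 1, 1, 1, 0])
score = dice(pred, truth)  # 2*2 / (3+3) = 0.666...
```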

Zitong Yu, Md Ashequr Rahman, Zekun Li, Chunwei Ying, Hongyu An, Tammie L. S. Benzinger, Richard Laforest, Jingqin Luo, Scott A. Norris, Abhinav K. Jha

arXiv preprint, Jun 25 2025
Quantitative measures of dopamine transporter (DaT) uptake in caudate, putamen, and globus pallidus derived from DaT-single-photon emission computed tomography (SPECT) images are being investigated as biomarkers to diagnose, assess disease status, and track the progression of Parkinsonism. Reliable quantification from DaT-SPECT images requires performing attenuation compensation (AC), typically with a separate X-ray CT scan. Such CT-based AC (CTAC) has multiple challenges, a key one being the non-availability of an X-ray CT component on many clinical SPECT systems. Even when a CT is available, the additional CT scan leads to increased radiation dose, costs, and complexity, potential quantification errors due to SPECT-CT misalignment, and higher training and regulatory requirements. To overcome the challenges posed by the requirement of a CT scan for AC in DaT SPECT, we propose a deep learning (DL)-based transmission-less AC method for DaT-SPECT (DaT-CTLESS). An in silico imaging trial, titled ISIT-DaT, was designed to evaluate the performance of DaT-CTLESS on the regional uptake quantification task. We observed that DaT-CTLESS yielded a significantly higher correlation with CTAC than that between UAC and CTAC on the regional DaT uptake quantification task. Further, DaT-CTLESS had an excellent agreement with CTAC on this task, significantly outperformed UAC in distinguishing patients with normal versus reduced putamen SBR, yielded good generalizability across two scanners, was generally insensitive to intra-regional uptake heterogeneity, demonstrated good repeatability, exhibited robust performance even as the size of the training data was reduced, and generally outperformed the other considered DL methods on the task of quantifying regional uptake across different training dataset sizes. These results provide a strong motivation for further clinical evaluation of DaT-CTLESS.
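The putamen SBR referenced above is conventionally the specific binding ratio, (target uptake − reference uptake) / reference uptake. A small sketch with hypothetical ROI counts (the region values are invented for illustration):

```python
import numpy as np

def specific_binding_ratio(target_mean, reference_mean):
    """Standard SBR: (target - reference) / reference mean uptake."""
    return (target_mean - reference_mean) / reference_mean

# hypothetical mean counts from putamen and a non-specific reference ROI
putamen_counts = np.array([120.0, 118.0, 122.0])
reference_counts = np.array([40.0, 41.0, 39.0])
sbr = specific_binding_ratio(putamen_counts.mean(), reference_counts.mean())
# (120 - 40) / 40 = 2.0
```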

Racheal Mukisa, Arvind K. Bansal

arXiv preprint, Jun 25 2025
Artificial intelligence, including deep learning models, will play a transformative role in automated medical image analysis for the diagnosis of cardiac disorders and their management. Automated accurate delineation of cardiac images is the first necessary step for the quantification and automated diagnosis of cardiac disorders. In this paper, we propose a deep learning-based enhanced UNet model, U-R-Veda, which integrates convolution transformations, vision transformer, residual links, channel attention, and spatial attention, together with edge-detection based skip-connections for an accurate fully-automated semantic segmentation of cardiac magnetic resonance (CMR) images. The model extracts local features and their interrelationships using a stack of combination convolution blocks, with embedded channel and spatial attention in the convolution block, and vision transformers. Deep embedding of channel and spatial attention in the convolution block identifies important features and their spatial localization. The combined edge information with channel and spatial attention as skip connection reduces information loss during convolution transformations. The overall model significantly improves the semantic segmentation of CMR images necessary for improved medical image analysis. An algorithm for the dual attention module (channel and spatial attention) is presented. Performance results show that U-R-Veda achieves an average accuracy of 95.2%, based on DSC metrics. The model outperforms the accuracy attained by other models, based on DSC and HD metrics, especially for the delineation of right-ventricle and left-ventricle-myocardium.
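A dual attention module of the kind described (channel attention followed by spatial attention) can be sketched without learned weights; real modules add trainable MLP/convolution layers, so the gates below, taken straight from pooled statistics, are only a structural illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_attention(x):
    """Sequential channel-then-spatial attention on a (C, H, W) feature map.

    Illustrative only: real modules learn the gating functions; here the
    gates come directly from pooled statistics of the input."""
    # channel attention: one gate per channel from its global average
    ch_gate = sigmoid(x.mean(axis=(1, 2)))[:, None, None]   # (C, 1, 1)
    x = x * ch_gate
    # spatial attention: one gate per pixel from the channel-wise mean
    sp_gate = sigmoid(x.mean(axis=0))[None, :, :]           # (1, H, W)
    return x * sp_gate

feat = np.ones((4, 3, 3))      # toy feature map
out = dual_attention(feat)     # shape preserved; gating only rescales
```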

Gehin W, Lambert A, Bibault JE

PubMed paper, Jun 25 2025
Sarcopenia, defined as the progressive loss of skeletal muscle mass and function, has been associated with poor prognosis in patients with pancreatic cancer, particularly those with borderline resectable pancreatic cancer (BRPC). Although body composition can be extracted from routine CT imaging, sarcopenia assessment remains underused in clinical practice. Recent advances in artificial intelligence (AI) offer the potential to automate and standardize this process, but their clinical translation remains limited. This narrative review aims to critically evaluate (1) the clinical impact of CT-defined sarcopenia in BRPC, and (2) the performance and maturity of AI-based methods for automated muscle and fat segmentation on CT images. A dual-axis literature search was conducted to identify clinical studies assessing the prognostic role of sarcopenia in BRPC, and technical studies developing AI-based segmentation models for body composition analysis. Structured data extraction was applied to 13 clinical and 71 technical studies. A PRISMA-inspired flow diagram was included to ensure methodological transparency. Sarcopenia was consistently associated with worse survival and treatment tolerance in BRPC, yet clinical definitions and cut-offs varied widely. AI models, mostly 2D U-Nets trained on L3-level CT slices, achieved high segmentation accuracy (mean DSC >0.93), but external validation and standardization were often lacking. CT-based AI assessment of sarcopenia holds promise for improving patient stratification in BRPC. However, its clinical adoption will require standardization, integration into decision-support frameworks, and prospective validation across diverse populations.
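CT-defined sarcopenia in such studies is typically quantified via the skeletal muscle index (SMI): the L3-level muscle cross-sectional area divided by height squared. A minimal sketch with a toy segmentation mask and an assumed pixel spacing (all values invented for illustration):

```python
import numpy as np

def muscle_area_cm2(mask, pixel_spacing_mm):
    """Cross-sectional muscle area (cm^2) from a binary L3 segmentation mask."""
    px_area_mm2 = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    return mask.sum() * px_area_mm2 / 100.0   # mm^2 -> cm^2

def skeletal_muscle_index(area_cm2, height_m):
    """SMI = L3 muscle area normalised by height squared (cm^2 / m^2)."""
    return area_cm2 / (height_m ** 2)

mask = np.zeros((512, 512), dtype=np.uint8)
mask[100:200, 100:300] = 1                    # 100 x 200 px toy region
area = muscle_area_cm2(mask, (0.8, 0.8))      # 20000 px * 0.64 mm^2 = 128 cm^2
smi = skeletal_muscle_index(area, 1.70)
```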

Sabido-Sauri R, Eder L, Emery P, Aydin SZ

PubMed paper, Jun 25 2025
Musculoskeletal ultrasound is a key tool in rheumatology for diagnosing and managing inflammatory arthritis. Traditional ultrasound systems, while effective, can be cumbersome and costly, limiting their use in many clinical settings. Handheld ultrasound (HHUS) devices, which are portable, affordable, and user-friendly, have emerged as a promising alternative. This review explores the role of HHUS in rheumatology, specifically evaluating its impact on diagnostic accuracy, ease of use, and utility in screening for inflammatory arthritis. The review also addresses key challenges, such as image quality, storage and data security, and the potential for integrating artificial intelligence to improve device performance. We compare HHUS devices to cart-based ultrasound machines, discuss their advantages and limitations, and examine the potential for widespread adoption. Our findings suggest that HHUS devices can effectively support musculoskeletal assessments and offer significant benefits in resource-limited settings. However, proper training, standardized protocols, and continued technological advancements are essential for optimizing their use in clinical practice.

Wang D, Sun L

PubMed paper, Jun 25 2025
Differentiating intrahepatic cholangiocarcinoma (ICC) from hepatocellular carcinoma (HCC) is essential for selecting the most effective treatment strategies. However, traditional imaging modalities and serum biomarkers often lack sufficient specificity. Radiomics, a sophisticated image analysis approach that derives quantitative data from medical imaging, has emerged as a promising non-invasive tool. To systematically review and meta-analyze the radiomics diagnostic accuracy in differentiating ICC from HCC. PubMed, EMBASE, and Web of Science databases were systematically searched through January 24, 2025. Studies evaluating radiomics models for distinguishing ICC from HCC were included. The quality of included studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) and METhodological RadiomICs Score tools. Pooled sensitivity, specificity, and area under the curve (AUC) were calculated using a bivariate random-effects model. Subgroup and publication bias analyses were also performed. Twelve studies with 2541 patients were included, with 14 validation cohorts entered into the meta-analysis. The pooled sensitivity and specificity of radiomics models were 0.82 (95% CI: 0.76-0.86) and 0.90 (95% CI: 0.85-0.93), respectively, with an AUC of 0.88 (95% CI: 0.85-0.91). Subgroup analyses revealed variations based on segmentation method, software used, and sample size, though not all differences were statistically significant. Publication bias was not detected. Radiomics demonstrates high diagnostic accuracy in distinguishing ICC from HCC and offers a non-invasive adjunct to conventional diagnostics. Further prospective, multicenter studies with standardized workflows are needed to enhance clinical applicability and reproducibility.
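Pooling proportions such as sensitivity is usually done on the logit scale. The sketch below uses a simplified univariate fixed-effect inverse-variance pool (the review itself used a bivariate random-effects model, which jointly models sensitivity and specificity); the per-study values are hypothetical:

```python
import math

def pool_logit(props, ns):
    """Inverse-variance fixed-effect pooling of proportions on the logit scale.

    Simplified univariate stand-in for the bivariate random-effects
    model typically used in diagnostic accuracy meta-analyses."""
    weights, logits = [], []
    for p, n in zip(props, ns):
        logit = math.log(p / (1.0 - p))
        var = 1.0 / (n * p * (1.0 - p))   # approximate variance of the logit
        logits.append(logit)
        weights.append(1.0 / var)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))   # back-transform to a proportion

# hypothetical per-study sensitivities and sample sizes
sens = pool_logit([0.80, 0.85, 0.78], [100, 150, 120])
```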

Khokan MIP, Tonni TJ, Rony MAH, Fatema K, Hasan MZ

PubMed paper, Jun 25 2025
Respiratory disorders cause approximately 4 million deaths annually worldwide, making them the third leading cause of mortality. Early detection is critical to improving survival rates and recovery outcomes. However, chest X-rays require expertise, and computational intelligence provides valuable support to improve diagnostic accuracy and support medical professionals in decision-making. This study presents an automated system to classify respiratory diseases using three diverse datasets comprising 18,000 chest X-ray images and masks, categorized into six classes. Image preprocessing techniques, such as resizing for input standardization and CLAHE for contrast enhancement, were applied to ensure uniformity and improve the visual quality of the images. Albumentations-based augmentation methods addressed class imbalances, while bitwise segmentation focused on extracting the region of interest (ROI). Furthermore, clinically handcrafted feature extraction enabled the accurate identification of 20 critical clinical features essential for disease classification. The K-nearest neighbors (KNN) graph construction technique was utilized to transform tabular data into graph structures for effective node classification. We employed feature analysis to identify critical attributes that contribute to class predictions within the graph structure. Additionally, the GNNExplainer was utilized to validate these findings by highlighting significant nodes, edges, and features that influence the model's decision-making process. The proposed model, Chest X-ray Graph Neural Network (CHXGNN), a robust Graph Neural Network (GNN) architecture, incorporates advanced layers, batch normalization, dropout regularization, and optimization strategies. Extensive testing and ablation studies demonstrated the model's exceptional performance, achieving an accuracy of 99.56%. Our CHXGNN model shows significant potential in detecting and classifying respiratory diseases, promising to enhance diagnostic efficiency and improve patient outcomes in respiratory healthcare.
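The KNN graph construction step (turning tabular features into a graph for node classification) can be sketched as follows; the toy feature table is invented for illustration:

```python
import numpy as np

def knn_graph(features, k):
    """Build a directed k-nearest-neighbour edge list from tabular rows.

    Each row becomes a node; edges point to its k closest rows by
    Euclidean distance, with self-loops excluded."""
    n = len(features)
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude self as a neighbour
    nbrs = np.argsort(d, axis=1)[:, :k]
    return [(i, int(j)) for i in range(n) for j in nbrs[i]]

# toy "clinical feature" table: 4 samples x 2 features, two clear clusters
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
edges = knn_graph(X, k=1)
# each node links to its nearest neighbour: [(0, 1), (1, 0), (2, 3), (3, 2)]
```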

Yang Y, Yuan Y, Ren B, Wu Y, Feng Y, Zhang X

PubMed paper, Jun 25 2025
Diffusion MRI tractography enables non-invasive visualization of the white matter pathways in the brain. It plays a crucial role in neuroscience and clinical fields by facilitating the study of brain connectivity and neurological disorders. However, the accuracy of reconstructed tractograms has been a longstanding challenge. Recently, deep learning methods have been applied to improve tractograms for better white matter coverage, but this often comes at the expense of generating excessive false-positive connections. This is largely due to their reliance on local information to predict long-range streamlines. To improve the accuracy of streamline propagation predictions, we introduce a novel deep learning framework that integrates image-domain spatial information and anatomical information along tracts, with the former extracted through convolutional layers and the latter modeled via a Transformer-decoder. Additionally, we employ a weighted loss function to address fiber class imbalance encountered during training. We evaluate the proposed method on the simulated ISMRM 2015 Tractography Challenge dataset, achieving a valid streamline rate of 66.2%, white matter coverage of 63.8%, and successfully reconstructing 24 out of 25 bundles. Furthermore, on the multi-site Tractoinferno dataset, the proposed method demonstrates its ability to handle various diffusion MRI acquisition schemes, achieving a 5.7% increase in white matter coverage and a 4.1% decrease in overreach compared to RNN-based methods.
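A weighted loss for class imbalance of the kind mentioned is commonly a per-class weighted cross-entropy; a minimal sketch (the class weights and probabilities are invented, not the paper's values):

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Per-class weighted cross-entropy to counter class imbalance.

    probs: (N, C) predicted class probabilities; labels: (N,) class ids.
    Rarer classes get larger weights so they are not drowned out."""
    eps = 1e-12
    w = class_weights[labels]
    nll = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return float((w * nll).sum() / w.sum())

probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
labels = np.array([0, 1, 1])
# up-weight class 1, assumed here to be the rarer fiber class
loss = weighted_cross_entropy(probs, labels, np.array([1.0, 3.0]))
```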

Zhu Q, Hou B, Mathai TS, Mukherjee P, Jin Q, Chen X, Wang Z, Cheng R, Summers RM, Lu Z

PubMed paper, Jun 25 2025
This study introduces a novel evaluation framework, GPTRadScore, to systematically assess the performance of multimodal large language models (MLLMs) in generating clinically accurate findings from CT imaging. Specifically, GPTRadScore leverages LLMs as an evaluation metric, aiming to provide a more accurate and clinically informed assessment than traditional language-specific methods. Using this framework, we evaluate the capability of several MLLMs, including GPT-4 with Vision (GPT-4V), Gemini Pro Vision, LLaVA-Med, and RadFM, to interpret findings in CT scans. This retrospective study leverages a subset of the public DeepLesion dataset to evaluate the performance of several multimodal LLMs in describing findings in CT slices. GPTRadScore was developed to assess the generated descriptions (location, body part, and type) using GPT-4, alongside traditional metrics. RadFM was fine-tuned using a subset of the DeepLesion dataset with additional labeled examples targeting complex findings. Post fine-tuning, performance was reassessed using GPTRadScore to measure accuracy improvements. Evaluations demonstrated a high correlation of GPTRadScore with clinician assessments, with Pearson's correlation coefficients of 0.87, 0.91, 0.75, 0.90, and 0.89. These results highlight its superiority over traditional metrics, such as BLEU, METEOR, and ROUGE, and indicate that GPTRadScore can serve as a reliable evaluation metric. Using GPTRadScore, it was observed that while GPT-4V and Gemini Pro Vision outperformed other models, significant areas for improvement remain, primarily due to limitations in the datasets used for training. Fine-tuning RadFM resulted in substantial accuracy gains: location accuracy increased from 3.41% to 12.8%, body part accuracy improved from 29.12% to 53%, and type accuracy rose from 9.24% to 30%. These findings reinforce the hypothesis that fine-tuning RadFM can significantly enhance its performance. GPT-4 effectively correlates with expert assessments, validating its use as a reliable metric for evaluating multimodal LLMs in radiological diagnostics. Additionally, the results underscore the efficacy of fine-tuning approaches in improving the descriptive accuracy of LLM-generated medical imaging findings.
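The agreement with clinician assessments reported above is measured with Pearson's correlation; a minimal sketch on hypothetical paired scores (not the study's data):

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation between two paired score vectors."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

# hypothetical paired ratings: framework-assigned vs. clinician-assigned
model_scores = [0.9, 0.7, 0.4, 0.8, 0.6]
clinician_scores = [1.0, 0.8, 0.3, 0.9, 0.5]
r = pearson_r(model_scores, clinician_scores)
```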

Fangyijie Wang, Yuan Liang, Sourav Bhattacharjee, Abey Campbell, Kathleen M. Curran, Guénolé Silvestre

arXiv preprint, Jun 25 2025
Accurate gestational age (GA) estimation, ideally through fetal ultrasound measurement, is a crucial aspect of providing excellent antenatal care. However, deriving GA from manual fetal biometric measurements depends on the operator and is time-consuming. Hence, automatic computer-assisted methods are needed in clinical practice. In this paper, we present a novel feature fusion framework to estimate GA using fetal ultrasound images without any measurement information. We adopt a deep learning model to extract deep representations from ultrasound images. We extract radiomic features to reveal patterns and characteristics of fetal brain growth. To harness the interpretability of radiomics in medical imaging analysis, we estimate GA by fusing radiomic features and deep representations. Our framework estimates GA with a mean absolute error of 8.0 days across three trimesters, outperforming current machine learning-based methods at these gestational ages. Experimental results demonstrate the robustness of our framework across different populations in diverse geographical regions. Our code is publicly available at https://github.com/13204942/RadiomicsImageFusion_FetalUS.
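Fusing radiomic features with deep representations can be sketched, in its simplest form, as concatenation followed by a linear readout evaluated with mean absolute error; all arrays below are synthetic stand-ins, not the authors' data or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-ins: 8-d deep representation and 4-d radiomic vector per image
deep = rng.normal(size=(50, 8))
radiomic = rng.normal(size=(50, 4))
ga_days = rng.uniform(90, 280, size=50)      # synthetic gestational ages

# simplest fusion: concatenate the two feature sets per sample
fused = np.concatenate([deep, radiomic], axis=1)

# linear readout fitted by least squares (bias column appended)
X = np.hstack([fused, np.ones((50, 1))])
coef, *_ = np.linalg.lstsq(X, ga_days, rcond=None)
pred = X @ coef
mae = float(np.mean(np.abs(pred - ga_days)))  # in-sample MAE, in days
```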