U-R-VEDA: Integrating UNET, Residual Links, Edge and Dual Attention, and Vision Transformer for Accurate Semantic Segmentation of CMRs

Racheal Mukisa, Arvind K. Bansal

arXiv preprint · Jun 25, 2025
Artificial intelligence, including deep learning models, will play a transformative role in automated medical image analysis for the diagnosis and management of cardiac disorders. Accurate, automated delineation of cardiac images is the necessary first step toward quantification and automated diagnosis of cardiac disorders. In this paper, we propose U-R-Veda, an enhanced deep learning UNet model that integrates convolution transformations, a vision transformer, residual links, channel attention, and spatial attention, together with edge-detection-based skip connections, for accurate, fully automated semantic segmentation of cardiac magnetic resonance (CMR) images. The model extracts local features and their interrelationships using a stack of combined convolution blocks, with channel and spatial attention embedded in each block, followed by vision transformers. Deep embedding of channel and spatial attention in the convolution block identifies important features and their spatial localization. Combining edge information with channel and spatial attention in the skip connections reduces information loss during the convolution transformations. The overall model significantly improves the semantic segmentation of CMR images needed for downstream medical image analysis. An algorithm for the dual attention module (channel and spatial attention) is presented. Performance results show that U-R-Veda achieves an average accuracy of 95.2% based on the DSC metric, and it outperforms other models on DSC and HD metrics, especially for the delineation of the right ventricle and left-ventricle myocardium.
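
For readers who want to see the dual attention idea in code, here is a minimal PyTorch sketch of a convolution block with embedded channel and spatial attention. It is not the authors' released implementation; the layer sizes, reduction ratio, and kernel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: reweight feature maps by their global importance."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = x.mean(dim=(2, 3))   # global average pooling -> (B, C)
        mx = x.amax(dim=(2, 3))    # global max pooling     -> (B, C)
        w = torch.sigmoid(self.mlp(avg) + self.mlp(mx)).view(b, c, 1, 1)
        return x * w

class SpatialAttention(nn.Module):
    """Spatial attention: reweight each pixel using pooled channel statistics."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)      # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)     # (B, 1, H, W)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w

class DualAttentionConvBlock(nn.Module):
    """Convolution block with channel attention followed by spatial attention."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )
        self.ca = ChannelAttention(out_ch)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(self.conv(x)))

# Example: one block applied to a batch of single-channel CMR slices.
block = DualAttentionConvBlock(1, 64)
out = block(torch.randn(2, 1, 128, 128))   # -> (2, 64, 128, 128)
```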

AI-based CT assessment of sarcopenia in borderline resectable pancreatic cancer: A narrative review of clinical and technical perspectives.

Gehin W, Lambert A, Bibault JE

PubMed · Jun 25, 2025
Sarcopenia, defined as the progressive loss of skeletal muscle mass and function, has been associated with poor prognosis in patients with pancreatic cancer, particularly those with borderline resectable pancreatic cancer (BRPC). Although body composition can be extracted from routine CT imaging, sarcopenia assessment remains underused in clinical practice. Recent advances in artificial intelligence (AI) offer the potential to automate and standardize this process, but their clinical translation remains limited. This narrative review aims to critically evaluate (1) the clinical impact of CT-defined sarcopenia in BRPC, and (2) the performance and maturity of AI-based methods for automated muscle and fat segmentation on CT images. A dual-axis literature search was conducted to identify clinical studies assessing the prognostic role of sarcopenia in BRPC, and technical studies developing AI-based segmentation models for body composition analysis. Structured data extraction was applied to 13 clinical and 71 technical studies. A PRISMA-inspired flow diagram was included to ensure methodological transparency. Sarcopenia was consistently associated with worse survival and treatment tolerance in BRPC, yet clinical definitions and cut-offs varied widely. AI models, mostly 2D U-Nets trained on L3-level CT slices, achieved high segmentation accuracy (mean DSC > 0.93), but external validation and standardization were often lacking. CT-based AI assessment of sarcopenia holds promise for improving patient stratification in BRPC. However, its clinical adoption will require standardization, integration into decision-support frameworks, and prospective validation across diverse populations.
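
For context on the DSC figure quoted above, this is a minimal sketch of how a Dice similarity coefficient is typically computed for a binary muscle mask predicted on an L3-level CT slice; the arrays and sizes below are illustrative, not taken from any of the reviewed models.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Illustrative use: compare a model's skeletal-muscle mask on an L3 slice
# against the reference annotation (both 512 x 512 binary arrays here).
pred_mask = np.random.rand(512, 512) > 0.5
ref_mask = np.random.rand(512, 512) > 0.5
print(f"DSC: {dice_coefficient(pred_mask, ref_mask):.3f}")
```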

Integrating handheld ultrasound in rheumatology: A review of benefits and drawbacks.

Sabido-Sauri R, Eder L, Emery P, Aydin SZ

PubMed · Jun 25, 2025
Musculoskeletal ultrasound is a key tool in rheumatology for diagnosing and managing inflammatory arthritis. Traditional ultrasound systems, while effective, can be cumbersome and costly, limiting their use in many clinical settings. Handheld ultrasound (HHUS) devices, which are portable, affordable, and user-friendly, have emerged as a promising alternative. This review explores the role of HHUS in rheumatology, specifically evaluating its impact on diagnostic accuracy, ease of use, and utility in screening for inflammatory arthritis. The review also addresses key challenges, such as image quality, storage and data security, and the potential for integrating artificial intelligence to improve device performance. We compare HHUS devices to cart-based ultrasound machines, discuss their advantages and limitations, and examine the potential for widespread adoption. Our findings suggest that HHUS devices can effectively support musculoskeletal assessments and offer significant benefits in resource-limited settings. However, proper training, standardized protocols, and continued technological advancements are essential for optimizing their use in clinical practice.

Diagnostic Performance of Radiomics for Differentiating Intrahepatic Cholangiocarcinoma from Hepatocellular Carcinoma: A Systematic Review and Meta-analysis.

Wang D, Sun L

PubMed · Jun 25, 2025
Differentiating intrahepatic cholangiocarcinoma (ICC) from hepatocellular carcinoma (HCC) is essential for selecting the most effective treatment strategies. However, traditional imaging modalities and serum biomarkers often lack sufficient specificity. Radiomics, a sophisticated image analysis approach that derives quantitative data from medical imaging, has emerged as a promising non-invasive tool. To systematically review and meta-analyze the diagnostic accuracy of radiomics in differentiating ICC from HCC. PubMed, EMBASE, and Web of Science databases were systematically searched through January 24, 2025. Studies evaluating radiomics models for distinguishing ICC from HCC were included. The quality of included studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) and METhodological RadiomICs Score tools. Pooled sensitivity, specificity, and area under the curve (AUC) were calculated using a bivariate random-effects model. Subgroup and publication bias analyses were also performed. Twelve studies with 2541 patients were included, and 14 validation cohorts were entered into the meta-analysis. The pooled sensitivity and specificity of radiomics models were 0.82 (95% CI: 0.76-0.86) and 0.90 (95% CI: 0.85-0.93), respectively, with an AUC of 0.88 (95% CI: 0.85-0.91). Subgroup analyses revealed variations based on segmentation method, software used, and sample size, though not all differences were statistically significant. Publication bias was not detected. Radiomics demonstrates high diagnostic accuracy in distinguishing ICC from HCC and offers a non-invasive adjunct to conventional diagnostics. Further prospective, multicenter studies with standardized workflows are needed to enhance clinical applicability and reproducibility.
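
The pooled estimates above come from a bivariate random-effects model. As a simplified illustration of the pooling step only, the sketch below applies univariate DerSimonian-Laird pooling on logit-transformed sensitivity; the per-study counts are hypothetical, and this is not the exact bivariate model used in the review.

```python
import numpy as np

def pooled_logit_sensitivity(events, totals):
    """DerSimonian-Laird random-effects pooling on the logit scale.
    events: true positives per study; totals: diseased patients per study."""
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    sens = events / totals
    y = np.log(sens / (1 - sens))                 # logit-transformed sensitivity
    var = 1 / events + 1 / (totals - events)      # approximate within-study variance
    w = 1 / var                                   # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)               # Cochran's Q statistic
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1 / (var + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    return 1 / (1 + np.exp(-pooled))              # back-transform to a proportion

# Hypothetical per-study counts (TP and number of ICC cases), for illustration only.
print(pooled_logit_sensitivity(events=[40, 55, 33, 70], totals=[50, 65, 40, 90]))
```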

Framework for enhanced respiratory disease identification with clinical handcrafted features.

Khokan MIP, Tonni TJ, Rony MAH, Fatema K, Hasan MZ

PubMed · Jun 25, 2025
Respiratory disorders cause approximately 4 million deaths annually worldwide, making them the third leading cause of mortality. Early detection is critical to improving survival rates and recovery outcomes. However, interpreting chest X-rays requires expertise, and computational intelligence can provide valuable support to improve diagnostic accuracy and assist medical professionals in decision-making. This study presents an automated system to classify respiratory diseases using three diverse datasets comprising 18,000 chest X-ray images and masks, categorized into six classes. Image preprocessing techniques, such as resizing for input standardization and CLAHE for contrast enhancement, were applied to ensure uniformity and improve the visual quality of the images. Albumentations-based augmentation methods addressed class imbalances, while bitwise segmentation focused on extracting the region of interest (ROI). Furthermore, clinically handcrafted feature extraction enabled the accurate identification of 20 critical clinical features essential for disease classification. The K-nearest neighbors (KNN) graph construction technique was utilized to transform tabular data into graph structures for effective node classification. We employed feature analysis to identify critical attributes that contribute to class predictions within the graph structure. Additionally, the GNNExplainer was utilized to validate these findings by highlighting significant nodes, edges, and features that influence the model's decision-making process. The proposed model, Chest X-ray Graph Neural Network (CHXGNN), a robust Graph Neural Network (GNN) architecture, incorporates advanced layers, batch normalization, dropout regularization, and optimization strategies. Extensive testing and ablation studies demonstrated the model's exceptional performance, achieving an accuracy of 99.56%. Our CHXGNN model shows significant potential in detecting and classifying respiratory diseases, promising to enhance diagnostic efficiency and improve patient outcomes in respiratory healthcare.
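
As a loose sketch of the graph-construction step described above (not the CHXGNN implementation), a handcrafted-feature table can be turned into a k-nearest-neighbors graph and classified with a small graph convolutional network. The feature dimensions, value of k, and layer sizes below are assumptions, and PyTorch Geometric stands in for whatever stack the authors used.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.neighbors import kneighbors_graph
from torch_geometric.nn import GCNConv

# Hypothetical handcrafted-feature table: 18,000 images x 20 clinical features, 6 classes.
num_nodes, num_features, num_classes, k = 18000, 20, 6, 8
x = torch.randn(num_nodes, num_features)
y = torch.randint(0, num_classes, (num_nodes,))

# Build a KNN graph over the tabular features and convert it to edge_index form.
adj = kneighbors_graph(x.numpy(), n_neighbors=k, mode="connectivity", include_self=False)
edge_index = torch.tensor(np.vstack(adj.nonzero()), dtype=torch.long)

class SimpleGNN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(num_features, 64)
        self.conv2 = GCNConv(64, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        h = F.dropout(h, p=0.5, training=self.training)
        return self.conv2(h, edge_index)

model = SimpleGNN()
logits = model(x, edge_index)
loss = F.cross_entropy(logits, y)   # node classification over the six disease classes
```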

Deep learning-based diffusion MRI tractography: Integrating spatial and anatomical information.

Yang Y, Yuan Y, Ren B, Wu Y, Feng Y, Zhang X

PubMed · Jun 25, 2025
Diffusion MRI tractography enables non-invasive visualization of the white matter pathways in the brain. It plays a crucial role in neuroscience and clinical fields by facilitating the study of brain connectivity and neurological disorders. However, the accuracy of reconstructed tractograms has been a longstanding challenge. Recently, deep learning methods have been applied to improve tractograms for better white matter coverage, but this often comes at the expense of generating excessive false-positive connections, largely because these methods rely on local information to predict long-range streamlines. To improve the accuracy of streamline propagation predictions, we introduce a novel deep learning framework that integrates image-domain spatial information and anatomical information along tracts, with the former extracted through convolutional layers and the latter modeled via a Transformer decoder. Additionally, we employ a weighted loss function to address fiber class imbalance encountered during training. We evaluate the proposed method on the simulated ISMRM 2015 Tractography Challenge dataset, achieving a valid streamline rate of 66.2%, white matter coverage of 63.8%, and successful reconstruction of 24 out of 25 bundles. Furthermore, on the multi-site Tractoinferno dataset, the proposed method demonstrates its ability to handle various diffusion MRI acquisition schemes, achieving a 5.7% increase in white matter coverage and a 4.1% decrease in overreach compared to RNN-based methods.
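
To make the hybrid design concrete, here is a hedged PyTorch sketch of one possible arrangement: a small 3D CNN encodes the local diffusion patch around the current streamline point, a Transformer decoder attends over the recent step history, and a head predicts the next propagation direction. Patch sizes, channel counts, and the fusion scheme are all assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class StreamlinePropagator(nn.Module):
    def __init__(self, in_ch=45, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        # Image-domain spatial information from a small diffusion patch (e.g. 5x5x5).
        self.cnn = nn.Sequential(
            nn.Conv3d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(64, d_model, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),            # -> (B, d_model)
        )
        # Along-tract information: embed the previous step directions as a sequence.
        self.step_embed = nn.Linear(3, d_model)
        dec_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, n_layers)
        self.head = nn.Linear(d_model, 3)                     # next unit direction

    def forward(self, patch, history):
        # patch:   (B, in_ch, 5, 5, 5) local diffusion signal around the current point
        # history: (B, T, 3) previous streamline step directions
        memory = self.cnn(patch).unsqueeze(1)                 # (B, 1, d_model)
        tgt = self.step_embed(history)                        # (B, T, d_model)
        h = self.decoder(tgt, memory)                         # attend history over local context
        direction = self.head(h[:, -1])                       # prediction from the last step
        return nn.functional.normalize(direction, dim=-1)

model = StreamlinePropagator()
next_dir = model(torch.randn(2, 45, 5, 5, 5), torch.randn(2, 10, 3))  # -> (2, 3)
```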

How well do multimodal LLMs interpret CT scans? An auto-evaluation framework for analyses.

Zhu Q, Hou B, Mathai TS, Mukherjee P, Jin Q, Chen X, Wang Z, Cheng R, Summers RM, Lu Z

PubMed · Jun 25, 2025
This study introduces a novel evaluation framework, GPTRadScore, to systematically assess the performance of multimodal large language models (MLLMs) in generating clinically accurate findings from CT imaging. Specifically, GPTRadScore leverages LLMs as an evaluation metric, aiming to provide a more accurate and clinically informed assessment than traditional language-specific methods. Using this framework, we evaluate the capability of several MLLMs, including GPT-4 with Vision (GPT-4V), Gemini Pro Vision, LLaVA-Med, and RadFM, to interpret findings in CT scans. This retrospective study leverages a subset of the public DeepLesion dataset to evaluate the performance of several multimodal LLMs in describing findings in CT slices. GPTRadScore was developed to assess the generated descriptions (location, body part, and type) using GPT-4, alongside traditional metrics. RadFM was fine-tuned using a subset of the DeepLesion dataset with additional labeled examples targeting complex findings. After fine-tuning, performance was reassessed using GPTRadScore to measure accuracy improvements. Evaluations demonstrated a high correlation of GPTRadScore with clinician assessments, with Pearson's correlation coefficients of 0.87, 0.91, 0.75, 0.90, and 0.89. These results highlight its superiority over traditional metrics, such as BLEU, METEOR, and ROUGE, and indicate that GPTRadScore can serve as a reliable evaluation metric. Using GPTRadScore, it was observed that while GPT-4V and Gemini Pro Vision outperformed other models, significant areas for improvement remain, primarily due to limitations in the datasets used for training. Fine-tuning RadFM resulted in substantial accuracy gains: location accuracy increased from 3.41% to 12.8%, body part accuracy improved from 29.12% to 53%, and type accuracy rose from 9.24% to 30%. These findings reinforce the hypothesis that fine-tuning RadFM can significantly enhance its performance. GPT-4 effectively correlates with expert assessments, validating its use as a reliable metric for evaluating multimodal LLMs in radiological diagnostics. Additionally, the results underscore the efficacy of fine-tuning approaches in improving the descriptive accuracy of LLM-generated medical imaging findings.
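
As a rough sketch of the LLM-as-judge idea behind GPTRadScore (not the released implementation), the snippet below asks GPT-4 to grade a generated finding against a reference description on location, body part, and type. The prompt wording, scoring scheme, and example findings are assumptions; the OpenAI Python client is used only for illustration and expects the model to return bare JSON.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = """You are a radiologist grading an AI-generated CT finding.
Reference finding: {reference}
Generated finding: {candidate}
For each attribute (location, body part, type), answer "correct" or "incorrect",
and return only JSON of the form {{"location": ..., "body_part": ..., "type": ...}}."""

def gptradscore_like(reference: str, candidate: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": PROMPT.format(reference=reference, candidate=candidate)}],
        temperature=0,
    )
    # Assumes the response body is valid JSON, as requested in the prompt.
    return json.loads(resp.choices[0].message.content)

score = gptradscore_like(
    reference="Hypodense lesion in the right hepatic lobe, consistent with a cyst.",
    candidate="Low-attenuation lesion in the liver.",
)
print(score)
```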

Fusing Radiomic Features with Deep Representations for Gestational Age Estimation in Fetal Ultrasound Images

Fangyijie Wang, Yuan Liang, Sourav Bhattacharjee, Abey Campbell, Kathleen M. Curran, Guénolé Silvestre

arXiv preprint · Jun 25, 2025
Accurate gestational age (GA) estimation, ideally through fetal ultrasound measurement, is a crucial aspect of providing excellent antenatal care. However, deriving GA from manual fetal biometric measurements is operator-dependent and time-consuming. Hence, automatic computer-assisted methods are needed in clinical practice. In this paper, we present a novel feature fusion framework to estimate GA using fetal ultrasound images without any measurement information. We adopt a deep learning model to extract deep representations from ultrasound images. We extract radiomic features to reveal patterns and characteristics of fetal brain growth. To harness the interpretability of radiomics in medical imaging analysis, we estimate GA by fusing radiomic features and deep representations. Our framework estimates GA with a mean absolute error of 8.0 days across three trimesters, outperforming current machine learning-based methods at these gestational ages. Experimental results demonstrate the robustness of our framework across different populations in diverse geographical regions. Our code is publicly available at https://github.com/13204942/RadiomicsImageFusion_FetalUS.
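
As a hedged sketch of the fusion step described above (not the authors' released code, which is linked in the abstract), deep features from a CNN backbone can be concatenated with radiomic features and passed to a small regression head that outputs gestational age. The backbone choice, feature sizes, and head design below are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class GAFusionRegressor(nn.Module):
    def __init__(self, n_radiomic=100):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()            # keep the 512-d deep representation
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(512 + n_radiomic, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 1),                 # gestational age in days
        )

    def forward(self, image, radiomic_feats):
        deep = self.backbone(image)                        # (B, 512) deep representation
        fused = torch.cat([deep, radiomic_feats], dim=1)    # feature-level fusion
        return self.head(fused).squeeze(1)

model = GAFusionRegressor(n_radiomic=100)
ga_days = model(torch.randn(4, 3, 224, 224), torch.randn(4, 100))  # -> (4,)
```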

Patch2Loc: Learning to Localize Patches for Unsupervised Brain Lesion Detection

Hassan Baker, Austin J. Brockmeier

arXiv preprint · Jun 25, 2025
Detecting brain lesions as abnormalities observed in magnetic resonance imaging (MRI) is essential for diagnosis and treatment. In the search for abnormalities, such as tumors and malformations, radiologists may benefit from computer-aided diagnostics that use computer vision systems trained with machine learning to segment normal tissue from abnormal brain tissue. While supervised learning methods require annotated lesions, we propose a new unsupervised approach (Patch2Loc) that learns from normal patches taken from structural MRI. We train a neural network model to map a patch back to its spatial location within a slice of the brain volume. During inference, abnormal patches are detected by the relatively higher error and/or variance of the location prediction. This generates a heatmap that can be integrated into pixel-wise methods to achieve finer-grained segmentation. We demonstrate the ability of our model to segment abnormal brain tissues by applying our approach to the detection of tumor tissues in MRI on T2-weighted images from the BraTS2021 and MSLUB datasets and T1-weighted images from the ATLAS and WMH datasets. We show that it outperforms the state of the art in unsupervised segmentation. The codebase for this work can be found on our GitHub page: https://github.com/bakerhassan/Patch2Loc.
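
A minimal sketch of the patch-to-location idea follows (an assumed architecture, not the authors' code, which is linked above): a small CNN regresses the normalized (x, y) position of a patch within its slice, and at inference the prediction error serves as an abnormality score.

```python
import torch
import torch.nn as nn

class PatchToLocation(nn.Module):
    """Regress the normalized (x, y) slice coordinates of an MRI patch."""
    def __init__(self, patch_size=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * (patch_size // 4) ** 2, 128), nn.ReLU(inplace=True),
            nn.Linear(128, 2), nn.Sigmoid(),     # (x, y) in [0, 1]
        )

    def forward(self, patch):
        return self.net(patch)

model = PatchToLocation()
patches = torch.randn(8, 1, 32, 32)              # patches from normal tissue for training
true_xy = torch.rand(8, 2)                       # their known slice coordinates
loss = nn.functional.mse_loss(model(patches), true_xy)

# At inference, a large location-prediction error flags a patch as a candidate abnormality.
with torch.no_grad():
    error = (model(patches) - true_xy).norm(dim=1)   # per-patch abnormality score
```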

Ultrasound Displacement Tracking Techniques for Post-Stroke Myofascial Shear Strain Quantification.

Ashikuzzaman M, Huang J, Bonwit S, Etemadimanesh A, Ghasemi A, Debs P, Nickl R, Enslein J, Fayad LM, Raghavan P, Bell MAL

PubMed · Jun 24, 2025
Ultrasound shear strain is a potential biomarker of myofascial dysfunction. However, the quality of estimated shear strains can be impacted by differences in ultrasound displacement tracking techniques, potentially altering clinical conclusions surrounding myofascial pain. This work assesses the reliability of four displacement estimation algorithms under a novel clinical hypothesis that the shear strain between muscles on a stroke-affected (paretic) shoulder with myofascial pain is lower than that on the non-paretic side of the same patient. After initial validation with simulations, four approaches were evaluated with in vivo data acquired from ten research participants with myofascial post-stroke shoulder pain: (1) Search is a common window-based method that determines displacements by searching for maximum normalized cross-correlations within windowed data, whereas (2) OVERWIND-Search, (3) SOUL-Search, and (4) L1-SOUL-Search fine-tune the Search initial estimates by optimizing cost functions comprising data and regularization terms, utilizing L1-norm-based first-order regularization, L2-norm-based first- and second-order regularization, and L1-norm-based first- and second-order regularization, respectively. SOUL-Search and L1-SOUL-Search most accurately and reliably estimate shear strain relative to our clinical hypothesis, when validated with visual inspection of ultrasound cine loops and quantitative T1ρ magnetic resonance imaging. In addition, L1-SOUL-Search produced the most reliable displacement tracking performance by generating lateral displacement images with smooth displacement gradients (measured as the mean and variance of displacement derivatives) and sharp edges (which enables distinction of shoulder muscle layers). Among the four investigated methods, L1-SOUL-Search emerged as the most suitable option to investigate myofascial pain and dysfunction, despite the drawback of slow runtimes, which can potentially be resolved with a deep learning solution. This work advances musculoskeletal health, ultrasound shear strain imaging, and related applications by establishing the foundation required to develop reliable image-based biomarkers for accurate diagnoses and treatments.
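
To illustrate the baseline Search step that the other three methods refine (a simplified sketch, not the authors' implementation), the snippet below estimates the axial and lateral displacement of one windowed region by maximizing normalized cross-correlation between pre- and post-deformation ultrasound frames; the window and search sizes are assumptions.

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def window_search(pre, post, top, left, win=(32, 16), search=8):
    """Find the (axial, lateral) shift of one window by exhaustive NCC search."""
    ref = pre[top:top + win[0], left:left + win[1]]
    best, best_shift = -np.inf, (0, 0)
    for dz in range(-search, search + 1):          # axial shift candidates
        for dx in range(-search, search + 1):      # lateral shift candidates
            r, c = top + dz, left + dx
            if r < 0 or c < 0 or r + win[0] > post.shape[0] or c + win[1] > post.shape[1]:
                continue
            score = ncc(ref, post[r:r + win[0], c:c + win[1]])
            if score > best:
                best, best_shift = score, (dz, dx)
    return best_shift

pre_frame = np.random.randn(256, 128)              # illustrative pre-deformation frame
post_frame = np.roll(pre_frame, shift=3, axis=0)   # known 3-sample axial shift
print(window_search(pre_frame, post_frame, top=100, left=50))   # -> (3, 0)
```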