
Li J, Liu R, Xing Y, Yin Q, Su Q

pubmed · Aug 23 2025
Accurately distinguishing pseudoprogression (PsP) from tumor progression (TuP) in patients with glioblastoma (GBM) is crucial for treatment and prognosis. This study develops a deep learning (DL) prognostic model using pre- and post-operative contrast-enhanced T1-weighted (CET1) magnetic resonance imaging (MRI) to forecast the likelihood of PsP or TuP following standard GBM treatment. Brain MRI data and clinical characteristics from 110 GBM patients were divided into a training set (n = 68) and a validation set (n = 42). Pre-operative and post-operative CET1 images were used individually and in combination. A Vision Transformer (ViT) model was built using expert-segmented tumor images to extract DL features. Several mainstream convolutional neural network (CNN) models (DenseNet121, Inception_v3, MobileNet_v2, ResNet18, ResNet50, and VGG16) were built for comparative evaluation. Principal Component Analysis (PCA) and Least Absolute Shrinkage and Selection Operator (LASSO) regression were used to select the most informative features, which were then classified with a Multi-Layer Perceptron (MLP). Model performance was evaluated with Receiver Operating Characteristic (ROC) curves. A multimodal model additionally incorporated DL features and clinical characteristics. The optimal input for predicting TuP versus PsP was the combination of pre- and post-operative CET1 tumor regions. The CET1-ViT model achieved an area under the curve (AUC) of 95.5% and accuracy of 90.7% on the training set, and an AUC of 95.2% and accuracy of 96.7% on the validation set, outperforming the mainstream CNN models. The multimodal model performed best, with AUCs of 98.6% and 99.3% on the training and validation sets, respectively. We developed a DL model based on pre- and post-operative CET1 imaging that can effectively forecast PsP versus TuP in GBM patients, offering potential for evaluating treatment responses and providing early indications of tumor progression.
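The feature-selection-and-classification stage described above (PCA, then LASSO-based selection, then an MLP scored with ROC AUC) can be sketched with scikit-learn. This is a minimal illustration on synthetic data; the feature dimension, LASSO alpha, network size, and train/validation split logic are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical stand-in for ViT-derived deep features: 110 patients x 768 dims.
X = rng.normal(size=(110, 768))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=110) > 0).astype(int)

# Step 1: PCA reduces the deep-feature dimensionality.
X_pca = PCA(n_components=20, random_state=0).fit_transform(X)

# Step 2: LASSO keeps only components with non-zero coefficients.
lasso = Lasso(alpha=0.001).fit(X_pca, y)
selected = np.flatnonzero(lasso.coef_)
X_sel = X_pca[:, selected]

# Step 3: an MLP classifies PsP vs TuP; performance is summarized by ROC AUC.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X_sel[:68], y[:68])                 # 68-patient training split
proba = clf.predict_proba(X_sel[68:])[:, 1]
auc = roc_auc_score(y[68:], proba)          # 42-patient validation AUC
```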

Pouya Shiri, Xin Yi, Neel P. Mistry, Samaneh Javadinia, Mohammad Chegini, Seok-Bum Ko, Amirali Baniasadi, Scott J. Adams

arxiv preprint · Aug 23 2025
Contrast-enhanced computed tomography (CT) imaging is essential for diagnosing and monitoring thoracic diseases, including aortic pathologies. However, contrast agents pose risks such as nephrotoxicity and allergic-like reactions. The ability to generate high-fidelity synthetic contrast-enhanced CT angiography (CTA) images without contrast administration would be transformative, enhancing patient safety and accessibility while reducing healthcare costs. In this study, we propose the first bridge diffusion-based solution for synthesizing contrast-enhanced CTA images from non-contrast CT scans. Our approach builds on the Slice-Consistent Brownian Bridge Diffusion Model (SC-BBDM), leveraging its ability to model complex mappings while maintaining consistency across slices. Unlike conventional slice-wise synthesis methods, our framework preserves full 3D anatomical integrity while operating in a high-resolution 2D fashion, allowing seamless volumetric interpretation under a low memory budget. To ensure robust spatial alignment, we implement a comprehensive preprocessing pipeline that includes resampling, registration using the Symmetric Normalization method, and a sophisticated dilated segmentation mask to extract the aorta and surrounding structures. We create two datasets from the Coltea-Lung dataset: one containing only the aorta and another including both the aorta and heart, enabling a detailed analysis of anatomical context. We compare our approach against baseline methods on both datasets, demonstrating its effectiveness in preserving vascular structures while enhancing contrast fidelity.
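The dilated segmentation mask in the preprocessing pipeline above can be sketched with scipy's morphology tools: dilating the aorta mask so the region of interest also covers surrounding structures before the bridge model is trained. The mask geometry and iteration count here are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.ndimage import binary_dilation

# Toy binary aorta mask on one axial slice (True = aorta).
mask = np.zeros((64, 64), dtype=bool)
mask[28:36, 28:36] = True

# Dilate the mask so the ROI also includes periaortic context.
roi = binary_dilation(mask, iterations=5)

# Restrict the CT slice to the ROI; background is floored to the slice minimum.
ct = np.random.default_rng(0).normal(size=(64, 64))
ct_masked = np.where(roi, ct, ct.min())
```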

Mirza Mumtaz Zahoor, Saddam Hussain Khan

arxiv preprint · Aug 23 2025
Brain tumors remain among the most lethal human diseases, and early detection and accurate classification are critical for effective diagnosis and treatment planning. Although deep learning-based computer-aided diagnostic (CADx) systems have shown remarkable progress, conventional convolutional neural networks (CNNs) and Transformers face persistent challenges, including high computational cost, sensitivity to minor contrast variations, structural heterogeneity, and texture inconsistencies in MRI data. Therefore, a novel hybrid framework, CE-RS-SBCIT, is introduced, integrating residual and spatial learning-based CNNs with transformer-driven modules. The proposed framework exploits local fine-grained and global contextual cues through four core innovations: (i) a smoothing and boundary-based CNN-integrated Transformer (SBCIT), (ii) tailored residual and spatial learning CNNs, (iii) a channel enhancement (CE) strategy, and (iv) a novel spatial attention mechanism. The SBCIT employs stem convolution and contextual interaction transformer blocks with systematic smoothing and boundary operations, enabling efficient global feature modeling. Moreover, residual and spatial CNNs, enhanced by auxiliary transfer-learned feature maps, enrich the representation space, while the CE module amplifies discriminative channels and mitigates redundancy. Furthermore, the spatial attention mechanism selectively emphasizes subtle contrast and textural variations across tumor classes. Extensive evaluation on challenging MRI datasets from Kaggle and Figshare, encompassing glioma, meningioma, pituitary tumors, and healthy controls, demonstrates superior performance, achieving 98.30% accuracy, 98.08% sensitivity, 98.25% F1-score, and 98.43% precision.
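The spatial attention idea above can be illustrated with a generic CBAM-style sketch in numpy: pool the feature map across channels, turn the pooled score into a sigmoid gate, and reweight every channel by it. This is a simplified stand-in, not CE-RS-SBCIT's actual mechanism.

```python
import numpy as np

def spatial_attention(feat):
    """Generic spatial attention sketch over a (C, H, W) feature map."""
    avg = feat.mean(axis=0)              # (H, W) channel-average pooling
    mx = feat.max(axis=0)                # (H, W) channel-max pooling
    score = 0.5 * (avg + mx)             # fuse the two pooled maps
    attn = 1.0 / (1.0 + np.exp(-score))  # sigmoid gate in (0, 1)
    return feat * attn                   # broadcast: reweight every channel

feat = np.random.default_rng(1).normal(size=(8, 16, 16))
out = spatial_attention(feat)
```

Because the gate lies strictly in (0, 1), the reweighted map never exceeds the input in magnitude; locations with strong pooled responses are suppressed least.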

Riad Hassan, M. Rubaiyat Hossain Mondal, Sheikh Iqbal Ahamed, Fahad Mostafa, Md Mostafijur Rahman

arxiv preprint · Aug 23 2025
Proper segmentation of organs-at-risk is important for radiation therapy, surgical planning, and diagnostic decision-making in medical image analysis. While deep learning-based segmentation architectures have made significant progress, they often fail to balance segmentation accuracy with computational efficiency. Most current state-of-the-art methods either prioritize performance at the cost of high computational complexity or compromise accuracy for efficiency. This paper addresses this gap by introducing an efficient dual-line decoder segmentation network (EDLDNet). The proposed method features a noisy decoder, which learns to incorporate structured perturbation at training time for better model robustness, yet at inference time only the noise-free decoder is executed, leading to lower computational cost. Multi-Scale convolutional Attention Modules (MSCAMs), Attention Gates (AGs), and Up-Convolution Blocks (UCBs) are further utilized to optimize feature representation and boost segmentation performance. By leveraging multi-scale segmentation masks from both decoders, we also utilize a mutation-based loss function to enhance the model's generalization. Our approach outperforms SOTA segmentation architectures on four publicly available medical imaging datasets. EDLDNet achieves SOTA performance with an 84.00% Dice score on the Synapse dataset, surpassing baseline models such as UNet by 13.89% in Dice score while significantly reducing Multiply-Accumulate Operations (MACs) by 89.7%. Compared to recent approaches like EMCAD, our EDLDNet not only achieves a higher Dice score but also maintains comparable computational efficiency. The outstanding performance across diverse datasets establishes EDLDNet's strong generalization, computational efficiency, and robustness. The source code, pre-processed data, and pre-trained weights will be available at https://github.com/riadhassan/EDLDNet .
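The two headline numbers above, Dice score and relative MACs reduction, are both simple to compute. A minimal sketch (the masks and MAC counts below are illustrative, not EDLDNet's values):

```python
import numpy as np

def dice_score(pred, gt, eps=1e-7):
    """Dice coefficient between two binary segmentation masks."""
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

pred = np.zeros((32, 32), dtype=bool); pred[8:24, 8:24] = True
gt = np.zeros((32, 32), dtype=bool);   gt[10:26, 10:26] = True
d = dice_score(pred, gt)   # 2*196 / (256 + 256) = 0.765625

# Relative MACs reduction from a baseline to a lighter model (toy numbers).
macs_baseline, macs_model = 100.0, 10.3
reduction = 100.0 * (1.0 - macs_model / macs_baseline)  # 89.7 (percent)
```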

Nakaura T, Uetani H, Yoshida N, Kobayashi N, Nagayama Y, Kidoh M, Kuroda JI, Mukasa A, Hirai T

pubmed · Aug 22 2025
This study aimed to evaluate the potential of large language models (LLMs) in differentiating intra-axial primary brain tumors using structured magnetic resonance imaging (MRI) reports and to compare their performance with radiologists. Structured reports of preoperative MRI findings from 137 surgically confirmed intra-axial primary brain tumors, including glioblastoma (n = 77), central nervous system (CNS) lymphoma (n = 22), astrocytoma (n = 9), oligodendroglioma (n = 9), and others (n = 20), were analyzed by multiple LLMs, including GPT-4, Claude-3-Opus, Claude-3-Sonnet, GPT-3.5, Llama-2-70B, Qwen1.5-72B, and Gemini-Pro-1.0. The models provided the top 5 differential diagnoses based on the preoperative MRI findings, and their top 1, 3, and 5 accuracies were compared with board-certified neuroradiologists' interpretations of the actual preoperative MRI images. Radiologists achieved top 1, 3, and 5 accuracies of 85.4%, 94.9%, and 94.9%, respectively. Among the LLMs, GPT-4 performed best, with top 1, 3, and 5 accuracies of 65.7%, 84.7%, and 90.5%, respectively. Notably, GPT-4's top 3 accuracy of 84.7% approached the radiologists' top 1 accuracy of 85.4%. Other LLMs showed varying performance levels, with average accuracies ranging from 62.3% to 75.9%. LLMs demonstrated high accuracy for glioblastoma but struggled with CNS lymphoma and other less common tumors, particularly in top 1 accuracy. LLMs show promise as assistive tools for differentiating intra-axial primary brain tumors using structured MRI reports. However, a significant gap remains between their performance and that of board-certified neuroradiologists interpreting actual images. The choice of LLM and tumor type significantly influences the results. Question: How do large language models (LLMs) perform when differentiating complex intra-axial primary brain tumors from structured MRI reports compared to radiologists interpreting images? Findings: Radiologists outperformed all tested LLMs in diagnostic accuracy. The best model, GPT-4, showed promise but lagged considerably behind radiologists, particularly for less common tumors. Clinical relevance: LLMs show potential as assistive tools for generating differential diagnoses from structured MRI reports, particularly for non-specialists, but they cannot currently replace the nuanced diagnostic expertise of a board-certified radiologist interpreting the primary image data.
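The top-k accuracy metric used throughout this comparison is straightforward: a case counts as a hit if the ground-truth diagnosis appears anywhere in the model's top k ranked differentials. A minimal sketch with toy data (the cases below are invented for illustration):

```python
def top_k_accuracy(ranked_lists, truths, k):
    """Fraction of cases whose truth appears in the top-k differentials."""
    hits = sum(truth in ranked[:k] for ranked, truth in zip(ranked_lists, truths))
    return hits / len(truths)

# Toy example: three cases, each with a model-ranked differential list.
ranked = [
    ["glioblastoma", "CNS lymphoma", "astrocytoma", "metastasis", "abscess"],
    ["CNS lymphoma", "glioblastoma", "metastasis", "astrocytoma", "abscess"],
    ["astrocytoma", "oligodendroglioma", "glioblastoma", "abscess", "DNET"],
]
truth = ["glioblastoma", "glioblastoma", "glioblastoma"]
top1 = top_k_accuracy(ranked, truth, 1)  # only case 1 hits -> 1/3
top3 = top_k_accuracy(ranked, truth, 3)  # all three hit -> 1.0
```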

Wang H, Qi Y, Liu W, Guo K, Lv W, Liang Z

pubmed · Aug 22 2025
Addressing the critical challenge of precise boundary delineation in medical image segmentation, we introduce DPGNet, an adaptive deep learning model engineered to emulate expert perception of intricate anatomical edges. Our key innovations drive its superior performance and clinical utility, encompassing: 1) a three-stage progressive refinement strategy that establishes global context, performs hierarchical feature enhancement, and precisely delineates local boundaries; 2) a novel Edge Difference Attention (EDA) module that implicitly learns and quantifies boundary uncertainties without requiring explicit ground truth supervision; and 3) a lightweight, transformer-based architecture ensuring an exceptional balance between performance and computational efficiency. Extensive experiments across diverse and challenging medical image datasets demonstrate DPGNet's consistent superiority over state-of-the-art methods, notably achieving this with significantly lower computational overhead (25.51 M parameters). Its exceptional boundary refinement is rigorously validated through comprehensive metrics (Boundary-IoU, HD95) and confirmed by rigorous clinical expert evaluations. Crucially, DPGNet generates an explicit uncertainty boundary map, providing clinicians with actionable insights to identify ambiguous regions, thereby enhancing diagnostic precision and facilitating more accurate clinical segmentation outcomes. Our code is available at: https://github.fangnengwuyou/DPGNet.
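Boundary quality above is validated with Boundary-IoU and HD95. A minimal HD95 sketch for binary masks (surface pixels extracted by erosion, distances via a Euclidean distance transform; this is a generic implementation, not DPGNet's evaluation code):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask):
    """Boundary pixels of a binary mask: mask minus its erosion."""
    return mask & ~binary_erosion(mask)

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance between binary masks."""
    sa, sb = surface(a), surface(b)
    d_ab = distance_transform_edt(~sb)[sa]  # a-surface pixels -> nearest b surface
    d_ba = distance_transform_edt(~sa)[sb]  # b-surface pixels -> nearest a surface
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

a = np.zeros((32, 32), dtype=bool); a[8:24, 8:24] = True
b = np.zeros((32, 32), dtype=bool); b[10:26, 10:26] = True  # shifted by (2, 2)
```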

Miao CL, He Y, Shi B, Bian Z, Yu W, Chen Y, Zhou GQ

pubmed · Aug 22 2025
The thickness of the diaphragm serves as a crucial biometric indicator, particularly in assessing rehabilitation and respiratory dysfunction. However, measuring diaphragm thickness from ultrasound images mainly depends on manual delineation of the fascia, which is subjective, time-consuming, and sensitive to the inherent speckle noise. In this study, we introduce an edge-aware diffusion segmentation model (ESADiff), which incorporates prior structural knowledge of the fascia to improve the accuracy and reliability of diaphragm thickness measurements in ultrasound imaging. We first apply a diffusion model, guided by annotations, to learn the image features while preserving edge details through an iterative denoising process. Specifically, we design an anisotropic edge-sensitive annotation refinement module that corrects inaccurate labels by integrating Hessian geometric priors with a backtracking shortest-path connection algorithm, further enhancing model accuracy. Moreover, a curvature-aware deformable convolution and edge-prior ranking loss function are proposed to leverage the shape prior knowledge of the fascia, allowing the model to selectively focus on relevant linear structures while mitigating the influence of noise on feature extraction. We evaluated the proposed model on an in-house diaphragm ultrasound dataset, a public calf muscle dataset, and an internal tongue muscle dataset to demonstrate robust generalization. Extensive experimental results demonstrate that our method achieves finer fascia segmentation and significantly improves the accuracy of thickness measurements compared to other state-of-the-art techniques, highlighting its potential for clinical applications.
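Once the fascia are segmented, the downstream thickness measurement reduces to the distance between the superior and inferior fascia per image column. A toy sketch (the mask geometry and pixel spacing are illustrative assumptions, not values from this study):

```python
import numpy as np

# Toy segmentation: two horizontal fascia lines in a 40 x 60 image grid.
seg = np.zeros((40, 60), dtype=bool)
seg[12, :] = True    # superior fascia
seg[19, :] = True    # inferior fascia

pixel_mm = 0.1       # assumed pixel spacing in mm (illustrative)
cols = np.where(seg.any(axis=0))[0]                  # columns containing fascia
top = seg.argmax(axis=0)[cols]                       # first fascia row per column
bot = seg.shape[0] - 1 - seg[::-1].argmax(axis=0)[cols]  # last fascia row
thickness_mm = (bot - top) * pixel_mm
mean_thickness = thickness_mm.mean()                 # (19 - 12) * 0.1 = 0.7 mm
```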

Ibrahim V, Alaya Cheikh F, Asari VK, Paul JS

pubmed · Aug 22 2025
Extrapolation plays a critical role in machine/deep learning (ML/DL), enabling models to predict data points beyond their training constraints, particularly useful in scenarios deviating significantly from training conditions. This article addresses the limitations of current convolutional neural networks (CNNs) in extrapolation tasks within image restoration and compressed sensing (CS). While CNNs show potential in tasks such as image outpainting and CS, traditional convolutions are limited by their reliance on interpolation, failing to fully capture the dependencies needed for predicting values outside the known data. This work proposes an extrapolation convolution (EC) framework that models missing data prediction as an extrapolation problem using linear prediction within DL architectures. The approach is applied in two domains: first, image outpainting, where EC in encoder-decoder (EnDec) networks replaces conventional interpolation methods to reduce artifacts and enhance fine detail representation; second, Fourier-based CS-magnetic resonance imaging (CS-MRI), where it predicts high-frequency signal values from undersampled measurements in the frequency domain, improving reconstruction quality and preserving subtle structural details at high acceleration factors. Comparative experiments demonstrate that the proposed EC-DecNet and FDRN outperform traditional CNN-based models, achieving high-quality image reconstruction with finer details, as shown by improved peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and kernel inception distance (KID)/Frechet inception distance (FID) scores. Ablation studies and analysis highlight the effectiveness of larger kernel sizes and multilevel semi-supervised learning in FDRN for enhancing extrapolation accuracy in the frequency domain.
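The core idea above, modeling missing-data prediction as linear prediction, can be shown in a few lines: fit forward linear-prediction coefficients by least squares on the known samples, then recursively extrapolate past the end of the signal. This is a generic linear-prediction sketch, not the paper's EC layer.

```python
import numpy as np

def lp_extrapolate(x, order, n_pred):
    """Extrapolate a 1-D signal with forward linear prediction.

    Fits coefficients a such that x[n] ~= sum_k a[k] * x[n-1-k]
    by least squares, then recursively predicts n_pred new samples.
    """
    # Each row holds the `order` samples preceding x[n], most recent first.
    A = np.array([x[n - order:n][::-1] for n in range(order, len(x))])
    b = x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    out = list(x)
    for _ in range(n_pred):
        out.append(float(np.dot(a, out[-1:-order - 1:-1])))
    return np.array(out[len(x):])

# A linear ramp obeys x[n] = 2*x[n-1] - x[n-2], so order-2 LP extends it exactly.
pred = lp_extrapolate(np.arange(10, dtype=float), order=2, n_pred=3)
```

Interpolation-style convolutions can only blend known neighbors; a recursion like this one is what lets a model commit to values strictly outside the observed range.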

Chen H, Han A

pubmed · Aug 22 2025
Accurately imaging the spatial distribution of longitudinal speed of sound (SoS) has a profound impact on image quality and the diagnostic value of ultrasound. Knowledge of the SoS distribution allows effective aberration correction to improve image quality. SoS imaging also provides a new contrast mechanism to facilitate disease diagnosis. However, SoS imaging is challenging in the pulse-echo mode. Deep learning (DL) is a promising approach for pulse-echo SoS imaging, which may yield more accurate results than purely physics-based approaches. Herein, we developed a robust DL approach for SoS imaging that learns the nonlinear mapping between measured time shifts and the underlying SoS without being subject to the constraints of a specific forward model. Various strategies were adopted to enhance model performance. Time-shift maps were computed by adopting a common mid-angle configuration from the non-DL literature, normalizing complex beamformed ultrasound data, and accounting for depth-dependent frequency when converting phase shifts to time shifts. The structural similarity index measure (SSIM) was incorporated into the loss function to learn the global structure for SoS imaging. A two-stage training strategy was employed, leveraging computationally efficient ray-tracing synthesis for extensive pretraining, and more realistic but computationally expensive full-wave simulations for fine-tuning. Using these combined strategies, our model was shown to be robust and generalizable across different conditions. The simulation-trained model successfully reconstructed the SoS maps of phantoms using experimental data. Compared with the physics-based inversion approach, our method improved reconstruction accuracy and contrast-to-noise ratio in phantom experiments. These results demonstrated the accuracy and robustness of our approach.
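An SSIM term in the loss, as described above, pushes the network toward matching global image structure rather than only pixelwise error. A hypothetical sketch of such a combined objective, using a single-window (global) SSIM and an assumed weighting, not the authors' exact formulation:

```python
import numpy as np

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    """Global (single-window) SSIM between two images; 1.0 for identical inputs."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def combined_loss(pred, target, w=0.5):
    """Hypothetical loss: pixelwise MSE plus a structural (1 - SSIM) penalty."""
    return np.mean((pred - target) ** 2) + w * (1.0 - global_ssim(pred, target))

x = np.random.default_rng(2).normal(size=(16, 16))
```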

Losev V, Lu C, Tahasildar S, Senevirathne DS, Inglese P, Bai W, King AP, Shah M, de Marvao A, O'Regan DP

pubmed · Aug 22 2025
Cardiovascular ageing is a progressive loss of physiological reserve, modified by environmental and genetic risk factors, that contributes to multi-morbidity due to accumulated damage across diverse cell types, tissues, and organs. Obesity is implicated in premature ageing, but the effect of body fat distribution in humans is unknown. This study determined the influence of sex-dependent fat phenotypes on human cardiovascular ageing. Data from 21 241 participants in the UK Biobank were analysed. Machine learning was used to predict cardiovascular age from 126 image-derived traits of vascular function, cardiac motion, and myocardial fibrosis. An age-delta was calculated as the difference between predicted age and chronological age. The volume and distribution of body fat was assessed from whole-body imaging. The association between fat phenotypes and cardiovascular age-delta was assessed using multivariable linear regression with age and sex as covariates, reporting β coefficients with 95% confidence intervals (CI). Two-sample Mendelian randomization was used to assess causal associations. Visceral adipose tissue volume [β = 0.656, (95% CI, .537-.775), P < .0001], muscle adipose tissue infiltration [β = 0.183, (95% CI, .122-.244), P = .0003], and liver fat fraction [β = 1.066, (95% CI .835-1.298), P < .0001] were the strongest predictors of increased cardiovascular age-delta for both sexes. Abdominal subcutaneous adipose tissue volume [β = 0.432, (95% CI, .269-.596), P < .0001] and android fat mass [β = 0.983, (95% CI, .64-1.326), P < .0001] were each associated with increased age-delta only in males. Genetically predicted gynoid fat showed an association with decreased age-delta. Shared and sex-specific patterns of body fat are associated with both protective and harmful changes in cardiovascular ageing, highlighting adipose tissue distribution and function as a key target for interventions to extend healthy lifespan.
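The age-delta analysis above follows a standard recipe: compute delta = predicted age minus chronological age, then regress it on a fat phenotype with age and sex as covariates to obtain a β with a 95% CI. A minimal sketch on synthetic data (sample size, effect size, and noise level are invented; this is not the study's pipeline):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(40, 70, n)
sex = rng.integers(0, 2, n).astype(float)
vat = rng.normal(size=n)  # standardised visceral adipose tissue volume (toy)

# Synthetic ML-predicted cardiovascular age with a true VAT effect of 0.6.
pred_age = age + 0.6 * vat + rng.normal(scale=2.0, size=n)
delta = pred_age - age                        # cardiovascular age-delta

# Multivariable OLS: delta ~ intercept + VAT + age + sex.
X = np.column_stack([np.ones(n), vat, age, sex])
beta, *_ = np.linalg.lstsq(X, delta, rcond=None)
resid = delta - X @ beta
dof = n - X.shape[1]
se = np.sqrt((resid @ resid / dof) * np.linalg.inv(X.T @ X).diagonal())
t = stats.t.ppf(0.975, dof)
ci = (beta[1] - t * se[1], beta[1] + t * se[1])  # 95% CI for the VAT beta
```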
