
Free-running isotropic three-dimensional cine magnetic resonance imaging with deep learning image reconstruction.

Erdem S, Erdem O, Stebbings S, Greil G, Hussain T, Zou Q

PubMed · May 29, 2025
Cardiovascular magnetic resonance (CMR) cine imaging is the gold standard for assessing ventricular volumes and function. It typically requires two-dimensional (2D) bSSFP sequences and multiple breath-holds, which can be challenging for patients with limited breath-holding capacity. Three-dimensional (3D) cardiovascular magnetic resonance angiography (MRA) usually suffers from lengthy acquisition. Free-running 3D cine imaging with deep learning (DL) reconstruction offers a potential solution by acquiring both cine and angiography simultaneously. The aim of this study was to evaluate the efficiency and accuracy of a ferumoxytol-enhanced 3D cine MR sequence combined with DL reconstruction and Heart-NAV technology in patients with congenital heart disease. This Institutional Review Board-approved prospective study compared (i) functional and volumetric measurements between 3D and 2D cine images; (ii) contrast-to-noise ratio (CNR) between DL- and compressed sensing (CS)-reconstructed 3D cine images; and (iii) cross-sectional area (CSA) measurements between DL-reconstructed 3D cine images and clinical 3D MRA images acquired using the bSSFP sequence. Paired t-tests were used to compare group measurements, and Bland-Altman analysis assessed agreement in CSA and volumetric data. Sixteen patients (seven males; median age 6 years) were recruited. 3D cine imaging showed slightly larger right ventricular (RV) volumes and lower RV ejection fraction (EF) compared to 2D cine, with a significant difference only in RV end-systolic volume (P = 0.02). Left ventricular (LV) volumes and EF were slightly higher, and LV mass was lower, without significant differences (P ≥ 0.05). DL-reconstructed 3D cine images showed significantly higher CNR in all pulmonary veins than CS-reconstructed 3D cine images (all P < 0.05). Highly accelerated free-running 3D cine imaging with DL reconstruction shortens acquisition times, provides volumetric measurements comparable to 2D cine, and yields CSA comparable to clinical 3D MRA.
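
The two quantitative tools behind these comparisons, contrast-to-noise ratio and Bland-Altman analysis, are standard and straightforward to reproduce. A minimal sketch follows; the CNR definition (ROI mean difference over background standard deviation) and the sample CSA values are assumptions for illustration, since the abstract does not specify the measurement protocol.

```python
import numpy as np

def contrast_to_noise_ratio(roi_signal, roi_background):
    """CNR between a structure ROI and a background ROI.

    One common definition: |mean(signal) - mean(background)| / std(background).
    The paper's exact definition is not stated in the abstract.
    """
    return abs(roi_signal.mean() - roi_background.mean()) / roi_background.std()

def bland_altman(measure_a, measure_b):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    diff = np.asarray(measure_a, float) - np.asarray(measure_b, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical paired CSA measurements (mm^2): DL-reconstructed 3D cine vs. 3D MRA
cine_csa = np.array([112.0, 98.5, 130.2, 87.4])
mra_csa = np.array([110.3, 101.0, 128.8, 90.1])
print(bland_altman(cine_csa, mra_csa))
```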

Diagnosis of trigeminal neuralgia based on plain skull radiography using convolutional neural network.

Han JH, Ji SY, Kim M, Kwon JE, Park JB, Kang H, Hwang K, Kim CY, Kim T, Jeong HG, Ahn YH, Chung HT

PubMed · May 29, 2025
This study aimed to determine whether trigeminal neuralgia (TN) can be diagnosed using convolutional neural networks (CNNs) based on plain X-ray skull images. A labeled dataset of 166 skull images from patients aged over 16 years with TN was compiled, alongside a control dataset of 498 images from patients with unruptured intracranial aneurysms. The images were randomly partitioned into training, validation, and test datasets in a 6:2:2 ratio. Classifier performance was assessed using accuracy and the area under the receiver operating characteristic curve (AUROC). Gradient-weighted class activation mapping was applied to identify regions of interest. External validation was conducted using a dataset obtained from another institution. The CNN achieved an overall accuracy of 87.2%, with sensitivity and specificity of 0.72 and 0.91, respectively, and an AUROC of 0.90 on the test dataset. In most cases, the sphenoid body and clivus were identified as key areas for predicting TN. Validation on the external dataset yielded an accuracy of 71.0%, highlighting the potential of deep learning-based models in distinguishing X-ray skull images of patients with TN from those of control individuals. Our preliminary results suggest that plain X-ray can potentially be used as an adjunct to conventional MRI, ideally with CISS sequences, to aid in the clinical diagnosis of TN. Further refinement could establish this approach as a valuable screening tool.
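
For orientation, the reported metrics (accuracy, sensitivity, specificity, AUROC) can be computed from a classifier's outputs as sketched below; the labels, probabilities, and 0.5 decision threshold are illustrative assumptions, not data from the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical ground-truth labels (1 = trigeminal neuralgia) and model probabilities
y_true = np.array([1, 0, 0, 1, 0, 1, 0, 0])
y_prob = np.array([0.81, 0.22, 0.35, 0.64, 0.48, 0.91, 0.12, 0.55])
y_pred = (y_prob >= 0.5).astype(int)  # illustrative decision threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)  # true-positive rate for the TN class
specificity = tn / (tn + fp)  # true-negative rate for the control class
auroc = roc_auc_score(y_true, y_prob)  # threshold-free ranking metric
print(accuracy, sensitivity, specificity, auroc)
```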

CT-denoimer: efficient contextual transformer network for low-dose CT denoising.

Zhang Y, Xu F, Zhang R, Guo Y, Wang H, Wei B, Ma F, Meng J, Liu J, Lu H, Chen Y

PubMed · May 29, 2025
Low-dose computed tomography (LDCT) effectively reduces radiation exposure to patients but introduces severe noise artifacts that affect diagnostic accuracy. Recently, Transformer-based network architectures have been widely applied to LDCT image denoising, generally achieving superior results compared to traditional convolutional methods. However, these methods are often hindered by high computational costs and struggle to capture complex local contextual features, both of which negatively impact denoising performance. In this work, we propose CT-Denoimer, an efficient CT Denoising Transformer network that captures both global correlations and intricate, spatially varying local contextual details in CT images, enabling the generation of high-quality images. The core of our framework is a Transformer module that consists of two key components: the Multi-Dconv head Transposed Attention (MDTA) and the Mixed Contextual Feed-forward Network (MCFN). The MDTA block captures global correlations in the image with linear computational complexity, while the MCFN block manages multi-scale local contextual information, both static and dynamic, through a series of Enhanced Contextual Transformer (eCoT) modules. In addition, we incorporate Operation-Wise Attention Layers (OWALs) to enable collaborative refinement in the proposed CT-Denoimer, enhancing its ability to handle complex and varying noise patterns in LDCT images more effectively. Extensive experimental validation on both the AAPM-Mayo public dataset and a real-world clinical dataset demonstrated the state-of-the-art performance of the proposed CT-Denoimer: it achieved a peak signal-to-noise ratio (PSNR) of 33.681 dB, a structural similarity index measure (SSIM) of 0.921, an information fidelity criterion (IFC) of 2.857, and a visual information fidelity (VIF) of 0.349. Subjective assessment by radiologists gave an average score of 4.39, confirming its clinical applicability and clear advantages over existing methods. This study presents an innovative CT denoising Transformer network that sets a new benchmark in LDCT image denoising, excelling in both noise reduction and fine-structure preservation.
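
The two image-fidelity metrics quoted above, PSNR and SSIM, can be reproduced with scikit-image as in the minimal sketch below; the synthetic image pair stands in for a denoised/full-dose CT slice pair.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((256, 256)).astype(np.float32)  # stand-in for a full-dose slice
# Stand-in for a denoised LDCT slice: the reference plus mild residual noise
denoised = np.clip(reference + 0.02 * rng.standard_normal((256, 256)).astype(np.float32), 0, 1)

psnr = peak_signal_noise_ratio(reference, denoised, data_range=1.0)
ssim = structural_similarity(reference, denoised, data_range=1.0)
print(f"PSNR: {psnr:.3f} dB, SSIM: {ssim:.3f}")
```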

ADC-MambaNet: A Lightweight U-Shaped Architecture with Mamba and Multi-Dimensional Priority Attention for Medical Image Segmentation.

Nguyen TN, Ho QH, Nguyen VQ, Pham VT, Tran TT

PubMed · May 29, 2025
Medical image segmentation is becoming an increasingly crucial step in assisting with disease detection and diagnosis. However, medical images often exhibit complex structures and textures, calling for highly complex methods; in particular, deep learning methods often require large-scale pretraining, leading to significant memory demands and increased computational costs. The well-known Convolutional Neural Networks (CNNs) have become the backbone of medical image segmentation tasks thanks to their effective feature extraction abilities, but they often struggle to capture global context due to the limited sizes of their kernels. To address this, various Transformer-based models have been introduced to learn long-range dependencies through self-attention mechanisms; these architectures, however, typically incur relatively high computational complexity.
Methods: To address the aforementioned challenges, we propose a lightweight and computationally efficient model named ADC-MambaNet, which combines conventional depthwise convolutional layers with the Mamba algorithm to avoid the computational complexity of Transformers. The proposed model introduces a new feature extractor, the Harmonious Mamba-Convolution (HMC) block, together with the Multi-Dimensional Priority Attention (MDPA) block. These blocks enhance the feature extraction process and thereby the overall performance of the model; in particular, they enable the model to effectively capture local and global patterns from the feature maps while keeping computational costs low. A novel loss function, the Balanced Normalized Cross Entropy, is also introduced and delivers promising performance compared to other losses. Evaluations on five public medical image datasets (ISIC 2018 Lesion Segmentation, PH2, Data Science Bowl 2018, GlaS, and Lung X-ray) demonstrate that ADC-MambaNet achieves higher evaluation scores while maintaining compact parameter counts and low computational complexity.
Conclusion: ADC-MambaNet offers a promising solution for accurate and efficient medical image segmentation, especially in resource-limited or edge-computing environments. Implementation code will be publicly accessible at: https://github.com/nqnguyen812/mambaseg-model.
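
The abstract names depthwise convolutions as one ingredient of the HMC block but gives no architectural details. Below is a minimal, hypothetical sketch of the standard depthwise-separable convolution it refers to, the usual trick for cutting parameters relative to a dense convolution; nothing here is taken from the ADC-MambaNet code.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (one filter per channel) followed by a 1x1 pointwise conv.

    Cuts parameters and FLOPs versus a dense KxK convolution; a common
    ingredient of lightweight segmentation backbones. Details here are
    illustrative, not taken from the ADC-MambaNet paper.
    """
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 64, 64)                   # (batch, channels, H, W)
print(DepthwiseSeparableConv(32, 64)(x).shape)   # torch.Size([1, 64, 64, 64])
```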

RNN-AHF Framework: Enhancing Multi-focal Nature of Hypoxic Ischemic Encephalopathy Lesion Region in MRI Image Using Optimized Rough Neural Network Weight and Anti-Homomorphic Filter.

Thangeswari M, Muthucumaraswamy R, Anitha K, Shanker NR

PubMed · May 29, 2025
Image enhancement of the Hypoxic-Ischemic Encephalopathy (HIE) lesion region in neonatal brain MR images is a challenging task due to the diffuse (i.e., multi-focal) nature, small size, and low contrast of the lesions. Classifying the stages of HIE is also difficult because of the unclear boundaries and edges of the lesions, which are dispersed throughout the brain. These unclear boundaries and edges arise from chemical shifts, partial volume artifacts, and motion artifacts; moreover, voxels may reflect signals from adjacent tissues. Existing algorithms perform poorly in HIE lesion enhancement because of these artifacts, partial-volume voxel effects, and the diffuse nature of the lesions. In this paper, we propose a Rough Neural Network and Anti-Homomorphic Filter (RNN-AHF) framework for enhancement of the HIE lesion region. The RNN-AHF framework reduces the pixel dimensionality of the feature space, eliminates unnecessary pixels, and preserves essential pixels for lesion enhancement. The RNN efficiently learns and identifies pixel patterns and facilitates adaptive enhancement based on different weights in the neural network. The proposed framework operates with optimized neural weights and an optimized training function; the hybridization of the two enhances the lesion region with high contrast while preserving boundaries and edges. The proposed RNN-AHF framework achieves a lesion image enhancement and classification accuracy of approximately 93.5%, outperforming traditional algorithms.
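
The abstract does not define the anti-homomorphic filter itself. For background, here is a minimal sketch of the classic homomorphic filter that the name suggests it inverts or extends: filtering in the log domain to compress slowly varying illumination and boost local contrast. All parameters are illustrative.

```python
import numpy as np

def homomorphic_filter(image, cutoff=0.1, gamma_low=0.5, gamma_high=1.5):
    """Classic homomorphic filtering: log -> FFT -> high-frequency emphasis
    -> inverse FFT -> exp. Boosts local contrast while compressing the
    slowly varying illumination component."""
    log_img = np.log1p(image.astype(np.float64))
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))

    rows, cols = image.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2          # squared distance from center
    d2_cut = (cutoff * min(rows, cols)) ** 2
    # Gaussian high-frequency emphasis transfer function
    h = (gamma_high - gamma_low) * (1 - np.exp(-d2 / (2 * d2_cut))) + gamma_low

    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * h)).real
    return np.expm1(filtered)

img = np.random.rand(128, 128)  # stand-in for a neonatal MR slice
enhanced = homomorphic_filter(img)
```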

Mild to moderate COPD, vitamin D deficiency, and longitudinal bone loss: The MESA study.

Ghotbi E, Hathaway QA, Hadidchi R, Momtazmanesh S, Bancks MP, Bluemke DA, Barr RG, Post WS, Budoff M, Smith BM, Lima JAC, Demehri S

PubMed · May 29, 2025
Despite the established association between chronic obstructive pulmonary disease (COPD) severity and risk of osteoporosis, even after accounting for known shared confounding variables (e.g., age, smoking, history of exacerbations, steroid use), there is a paucity of data on bone loss in mild to moderate COPD, which is more prevalent in the general population. We conducted a longitudinal analysis using data from the Multi-Ethnic Study of Atherosclerosis. Participants with chest CT at Exam 5 (2010-2012) and Exam 6 (2016-2018) were included. Mild to moderate COPD was defined as a forced expiratory volume in 1 s (FEV1) to forced vital capacity ratio of <0.70 and FEV1 of 50% or higher. Vitamin D deficiency was defined as serum vitamin D < 20 ng/mL. We utilized a validated deep learning algorithm to perform automated multilevel segmentation of vertebral bodies (T1-T10) from chest CT and derive 3D volumetric thoracic vertebral BMD measurements at Exams 5 and 6. Of the 1226 participants, 173 had known mild to moderate COPD at baseline, while 1053 had no known COPD. After adjusting for age, race/ethnicity, sex, body mass index, bisphosphonate use, alcohol consumption, smoking, diabetes, physical activity, C-reactive protein, and vitamin D deficiency, mild to moderate COPD was associated with a faster decline in BMD (estimated difference, β = -0.38 g/cm³/year; 95% CI: -0.74, -0.02). A significant interaction between COPD and vitamin D deficiency (p = 0.001) prompted stratified analyses. Among participants with vitamin D deficiency (47% of participants), COPD was associated with a faster decline in BMD (-0.64 g/cm³/year; 95% CI: -1.17 to -0.12), whereas no significant association was observed among those with normal vitamin D in either crude or adjusted models. Mild to moderate COPD is associated with longitudinal declines in vertebral BMD exclusively in participants with vitamin D deficiency over 6-year follow-up. Vitamin D deficiency may play a crucial role in bone loss among patients with mild to moderate COPD.
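
The stratified results hinge on a significant COPD × vitamin D deficiency interaction in a covariate-adjusted regression. A minimal, hypothetical sketch of that model specification with statsmodels is below; the column names, synthetic data, and reduced covariate set are assumptions for illustration (the actual analysis adjusted for many more covariates).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis table: one row per participant, outcome is the
# annualized change in thoracic vertebral BMD (units as in the abstract).
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "copd": rng.integers(0, 2, n),            # 1 = mild to moderate COPD
    "vitd_deficient": rng.integers(0, 2, n),  # 1 = serum vitamin D < 20 ng/mL
    "age": rng.normal(68, 9, n),
    "bmi": rng.normal(28, 5, n),
})
df["bmd_change_per_year"] = (
    -0.2 - 0.4 * df["copd"] * df["vitd_deficient"] + rng.normal(0, 0.5, n)
)

# '*' expands to both main effects plus the copd:vitd_deficient interaction,
# whose significance motivated the stratified analyses in the abstract.
model = smf.ols(
    "bmd_change_per_year ~ copd * vitd_deficient + age + bmi", data=df
).fit()
print(model.summary())
```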

RadCLIP: Enhancing Radiologic Image Analysis Through Contrastive Language-Image Pretraining.

Lu Z, Li H, Parikh NA, Dillman JR, He L

PubMed · May 28, 2025
The integration of artificial intelligence (AI) with radiology signifies a transformative era in medicine. Vision foundation models have been adopted to enhance radiologic imaging analysis. However, the inherent complexities of 2D and 3D radiologic data present unique challenges that existing models, which are typically pretrained on general nonmedical images, do not adequately address. To bridge this gap and harness the diagnostic precision required in radiologic imaging, we introduce radiologic contrastive language-image pretraining (RadCLIP): a cross-modal vision-language foundational model that utilizes a vision-language pretraining (VLP) framework to improve radiologic image analysis. Building on the contrastive language-image pretraining (CLIP) approach, RadCLIP incorporates a slice pooling mechanism designed for volumetric image analysis and is pretrained using a large, diverse dataset of radiologic image-text pairs. This pretraining effectively aligns radiologic images with their corresponding text annotations, resulting in a robust vision backbone for radiologic imaging. Extensive experiments demonstrate RadCLIP's superior performance in both unimodal radiologic image classification and cross-modal image-text matching, underscoring its significant promise for enhancing diagnostic accuracy and efficiency in clinical settings. Our key contributions include curating a large dataset featuring diverse radiologic 2D/3D image-text pairs, pretraining RadCLIP as a vision-language foundation model on this dataset, developing a slice pooling adapter with an attention mechanism for integrating 2D images, and conducting comprehensive evaluations of RadCLIP on various radiologic downstream tasks.
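
RadCLIP builds on CLIP's symmetric contrastive objective for aligning image and text embeddings. The sketch below shows that generic loss, not RadCLIP's exact implementation; the batch size, embedding dimension, and temperature are illustrative.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    Matched pairs sit on the diagonal of the similarity matrix; the loss
    pulls them together and pushes mismatched pairs apart (generic CLIP
    objective, not RadCLIP's exact implementation).
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Hypothetical batch of 8 radiologic image/report embedding pairs (dim 512)
loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```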

Deep Separable Spatiotemporal Learning for Fast Dynamic Cardiac MRI.

Wang Z, Xiao M, Zhou Y, Wang C, Wu N, Li Y, Gong Y, Chang S, Chen Y, Zhu L, Zhou J, Cai C, Wang H, Jiang X, Guo D, Yang G, Qu X

PubMed · May 28, 2025
Dynamic magnetic resonance imaging (MRI) plays an indispensable role in cardiac diagnosis. To enable fast imaging, the k-space data can be undersampled, but the image reconstruction then poses a great challenge of high-dimensional processing. This challenge necessitates extensive training data in deep learning reconstruction methods. In this work, we propose a novel and efficient approach, leveraging a dimension-reduced separable learning scheme that can perform exceptionally well even with highly limited training data. We design this new approach by incorporating spatiotemporal priors into the development of a Deep Separable Spatiotemporal Learning network (DeepSSL), which unrolls an iteration process of a 2D spatiotemporal reconstruction model with both temporal low-rankness and spatial sparsity. Intermediate outputs can also be visualized to provide insights into the network behavior and enhance interpretability. Extensive results on cardiac cine datasets demonstrate that the proposed DeepSSL surpasses state-of-the-art methods both visually and quantitatively, while reducing the demand for training cases by up to 75%. Additionally, its preliminary adaptability to unseen cardiac patients has been verified through a blind reader study conducted by experienced radiologists and cardiologists. Furthermore, DeepSSL enhances the accuracy of the downstream task of cardiac segmentation and exhibits robustness in prospectively undersampled real-time cardiac MRI. DeepSSL is efficient under highly limited training data and adaptive to patients and prospective undersampling. This approach holds promise in addressing the escalating demand for high-dimensional data reconstruction in MRI applications.
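
The abstract does not write out the 2D spatiotemporal reconstruction model that DeepSSL unrolls. A generic objective from this model family, combining data consistency with temporal low-rankness and spatial sparsity, would read as follows (an assumption about the model class, not the paper's exact formulation):

```latex
\min_{\mathbf{X}} \; \tfrac{1}{2} \left\| \mathcal{A}(\mathbf{X}) - \mathbf{y} \right\|_2^2
  + \lambda_1 \left\| \mathbf{X} \right\|_{*}
  + \lambda_2 \left\| \Psi \mathbf{X} \right\|_{1}
```

Here X is the spatiotemporal image arranged as a space-by-time matrix, A is the undersampled acquisition operator, y is the measured k-space data, the nuclear norm ||X||_* enforces temporal low-rankness, and Ψ is a spatial sparsifying transform; unrolling an iterative solver of such an objective yields one network stage per iteration.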

High-Quality CEST Mapping With Lorentzian-Model Informed Neural Representation.

Chen C, Liu Y, Park SW, Li J, Chan KWY, Huang J, Morel JM, Chan RH

PubMed · May 28, 2025
Chemical Exchange Saturation Transfer (CEST) MRI has demonstrated a remarkable ability to enhance the detection of macromolecules and metabolites at low concentrations. While CEST mapping is essential for quantifying molecular information, conventional methods face critical limitations: model-based approaches are constrained by limited sensitivity and a robustness that depends heavily on parameter setups, while data-driven deep learning methods lack generalizability across heterogeneous datasets and acquisition protocols. To overcome these challenges, we propose a Lorentzian-model Informed Neural Representation (LINR) framework for high-quality CEST mapping. LINR employs a self-supervised neural architecture embedding the Lorentzian equation - the fundamental biophysical model of CEST signal evolution - to directly reconstruct high-sensitivity parameter maps from raw z-spectra, eliminating dependency on labeled training data. Convergence of the self-supervised training strategy is guaranteed theoretically, ensuring LINR's mathematical validity. The superior performance of LINR in capturing CEST contrasts is revealed through comprehensive evaluations based on synthetic phantoms and in vivo experiments (including tumor and Alzheimer's disease models). The intuitive parameter-free design enables adaptive integration into diverse CEST imaging workflows, positioning LINR as a versatile tool for non-invasive molecular diagnostics and pathophysiological discovery.
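
The Lorentzian equation referenced here, in its conventional multi-pool CEST form (the standard formulation, not necessarily the paper's exact parameterization), models the normalized z-spectrum as a sum of lineshapes over the exchanging pools:

```latex
Z(\Delta\omega) = 1 - \sum_{i} A_i \,
  \frac{(\Gamma_i/2)^2}{(\Gamma_i/2)^2 + (\Delta\omega - \delta_i)^2}
```

where Z is the normalized z-spectrum signal, Δω the saturation frequency offset, and A_i, Γ_i, and δ_i the amplitude, linewidth, and chemical shift of pool i. Fitting these parameters per voxel is what "CEST mapping" refers to above.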

Toward diffusion MRI in the diagnosis and treatment of pancreatic cancer.

Lee J, Lin T, He Y, Wu Y, Qin J

PubMed · May 28, 2025
Pancreatic cancer is a highly aggressive malignancy with rising incidence and mortality rates, often diagnosed at advanced stages. Conventional imaging methods, such as computed tomography (CT) and magnetic resonance imaging (MRI), struggle to assess tumor characteristics and vascular involvement, which are crucial for treatment planning. This paper explores the potential of diffusion magnetic resonance imaging (dMRI) in enhancing pancreatic cancer diagnosis and treatment. Diffusion-based techniques, such as diffusion-weighted imaging (DWI), diffusion tensor imaging (DTI), intravoxel incoherent motion (IVIM), and diffusion kurtosis imaging (DKI), combined with emerging AI-powered analysis, provide insights into tissue microstructure, allowing for earlier detection and improved evaluation of tumor cellularity. These methods may help assess prognosis and monitor therapy response by tracking diffusion and perfusion metrics. However, challenges remain, such as the lack of standardized protocols and robust data analysis pipelines. Ongoing research, including deep learning applications, aims to improve reliability; dMRI shows promise in providing functional insights and improving patient outcomes, but further clinical validation is necessary to maximize its benefits.
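
Of the techniques listed, IVIM has the most compact signal model; its textbook bi-exponential form (general background, not specific to this review) is:

```latex
\frac{S(b)}{S_0} = f \, e^{-b D^{*}} + (1 - f) \, e^{-b D}
```

where S(b) is the signal at diffusion weighting b, f the perfusion fraction, D* the pseudo-diffusion coefficient reflecting capillary flow, and D the tissue diffusion coefficient. DWI's apparent diffusion coefficient corresponds to the single-exponential special case, and DKI adds a kurtosis term to capture non-Gaussian diffusion.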