
Groupwise image registration with edge-based loss for low-SNR cardiac MRI.

Lei X, Schniter P, Chen C, Ahmad R

PubMed · May 12, 2025
The purpose of this study is to perform image registration and averaging of multiple free-breathing single-shot cardiac images, where the individual images may have a low signal-to-noise ratio (SNR). To address low SNR encountered in single-shot imaging, especially at low field strengths, we propose a fast deep learning (DL)-based image registration method, called Averaging Morph with Edge Detection (AiM-ED). AiM-ED jointly registers multiple noisy source images to a noisy target image and utilizes a noise-robust pre-trained edge detector to define the training loss. We validate AiM-ED using synthetic late gadolinium enhanced (LGE) images from the MR extended cardiac-torso (MRXCAT) phantom and free-breathing single-shot LGE images from healthy subjects (24 slices) and patients (5 slices) under various levels of added noise. Additionally, we demonstrate the clinical feasibility of AiM-ED by applying it to data from patients (6 slices) scanned on a 0.55T scanner. Compared with a traditional energy-minimization-based image registration method and DL-based VoxelMorph, images registered using AiM-ED exhibit higher values of recovery SNR and three perceptual image quality metrics. An ablation study shows the benefit of both jointly processing multiple source images and using an edge map in AiM-ED. For single-shot LGE imaging, AiM-ED outperforms existing image registration methods in terms of image quality. With fast inference, minimal training data requirements, and robust performance at various noise levels, AiM-ED has the potential to benefit single-shot CMR applications.
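To make the loss construction concrete, below is a minimal sketch (PyTorch) of an edge-based registration loss in this spirit: each noisy source is warped toward the target by a dense displacement field, and the mismatch is measured between edge maps rather than raw intensities. The warp parameterization is an illustrative assumption, and edge_net is a hypothetical stand-in for the pre-trained noise-robust edge detector, not the authors' implementation.

import torch
import torch.nn.functional as F

def warp(source, flow):
    """Warp a (B,1,H,W) image with a dense (B,2,H,W) pixel displacement field."""
    B, _, H, W = source.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).to(source.device)
    # Normalize pixel displacements to grid_sample's [-1, 1] convention.
    disp = torch.stack((flow[:, 0] / (W / 2), flow[:, 1] / (H / 2)), dim=-1)
    return F.grid_sample(source, base + disp, align_corners=True)

def edge_loss(sources, target, flows, edge_net):
    """Mean edge-map mismatch between each warped source and the shared target."""
    tgt_edges = edge_net(target)
    losses = [F.mse_loss(edge_net(warp(s, f)), tgt_edges)
              for s, f in zip(sources, flows)]
    return torch.stack(losses).mean()

# Example with a crude gradient-magnitude "edge detector" as a stand-in:
sobel = lambda x: (x[..., 1:, :-1] - x[..., :-1, :-1]).abs() + \
                  (x[..., :-1, 1:] - x[..., :-1, :-1]).abs()
srcs = [torch.randn(2, 1, 64, 64) for _ in range(3)]
flows = [torch.zeros(2, 2, 64, 64) for _ in range(3)]
loss = edge_loss(srcs, torch.randn(2, 1, 64, 64), flows, sobel)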

BodyGPS: Anatomical Positioning System

Halid Ziya Yerebakan, Kritika Iyer, Xueqi Guo, Yoshihisa Shinagawa, Gerardo Hermosillo Valadez

arXiv preprint · May 12, 2025
We introduce a new type of foundational model for parsing human anatomy in medical images that works for different modalities. It supports supervised or unsupervised training and can perform matching, registration, classification, or segmentation with or without user interaction. We achieve this by training a neural network estimator that maps query locations to atlas coordinates via regression. Efficiency is improved by sparsely sampling the input, enabling response times of less than 1 ms without additional accelerator hardware. We demonstrate the utility of the algorithm in both CT and MRI modalities.
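A toy sketch of the core regression idea (PyTorch) follows: a small network maps a sparse sample of intensities around a query location to a coordinate in atlas space. The featurization, dimensions, and architecture here are assumptions for illustration only, not the paper's design.

import torch
import torch.nn as nn

class AtlasRegressor(nn.Module):
    def __init__(self, n_samples=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_samples, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))  # regressed (x, y, z) in atlas space

    def forward(self, sparse_samples):      # (B, n_samples) sampled intensities
        return self.mlp(sparse_samples)     # (B, 3) atlas coordinates

model = AtlasRegressor()
coords = model(torch.randn(8, 128))  # 8 query locations -> 8 atlas coordinates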

Cardiac imaging for the detection of ischemia: current status and future perspectives.

Rodriguez C, Pappas L, Le Hong Q, Baquero L, Nagel E

PubMed · May 12, 2025
Coronary artery disease is the main cause of mortality worldwide, mandating early detection, appropriate treatment, and follow-up. Noninvasive cardiac imaging techniques allow detection of obstructive coronary heart disease by direct visualization of the arteries or of reduced myocardial blood flow. These techniques have made remarkable progress since their introduction, achieving high diagnostic precision. This review evaluates these noninvasive cardiac imaging techniques and provides a thorough overview of diagnostic decision-making for the detection of ischemia. We discuss the latest advances in the field, such as computed tomography angiography, single-photon emission computed tomography, positron emission tomography, and cardiac magnetic resonance; their main advantages and disadvantages; their most appropriate uses; and their prospects. For the review, we analyzed the literature from 2009 to 2024 on noninvasive cardiac imaging in the diagnosis of coronary artery disease. The review included the 78 publications considered most relevant, including landmark trials, review articles, and guidelines. The progress in cardiac imaging is anticipated to overcome various limitations such as high costs, radiation exposure, artifacts, and interobserver differences in interpretation. It is expected to lead to more automated scanning processes and, with the assistance of artificial intelligence-driven post-processing software, higher accuracy and reproducibility.

ABS-Mamba: SAM2-Driven Bidirectional Spiral Mamba Network for Medical Image Translation

Feng Yuan, Yifan Gao, Wenbin Wu, Keqing Wu, Xiaotong Guo, Jie Jiang, Xin Gao

arXiv preprint · May 12, 2025
Accurate multi-modal medical image translation requires harmonizing global anatomical semantics and local structural fidelity, a challenge complicated by inter-modality information loss and structural distortion. We propose ABS-Mamba, a novel architecture integrating the Segment Anything Model 2 (SAM2) for organ-aware semantic representation, specialized convolutional neural networks (CNNs) for preserving modality-specific edge and texture details, and Mamba's selective state-space modeling for efficient long- and short-range feature dependencies. Structurally, our dual-resolution framework leverages SAM2's image encoder to capture organ-scale semantics from high-resolution inputs, while a parallel CNN branch extracts fine-grained local features. The Robust Feature Fusion Network (RFFN) integrates these representations, and the Bidirectional Mamba Residual Network (BMRN) models spatial dependencies using spiral scanning and bidirectional state-space dynamics. A three-stage skip-fusion decoder enhances edge and texture fidelity. We employ Efficient Low-Rank Adaptation (LoRA+) fine-tuning to enable precise domain specialization while maintaining the foundational capabilities of the pre-trained components. Extensive experimental validation on the SynthRAD2023 and BraTS2019 datasets demonstrates that ABS-Mamba outperforms state-of-the-art methods, delivering high-fidelity cross-modal synthesis that preserves anatomical semantics and structural details to enhance diagnostic accuracy in clinical applications. The code is available at https://github.com/gatina-yone/ABS-Mamba
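As an illustration of the spiral-scanning ingredient, the helper below (plain Python) produces an outside-in spiral ordering of an H x W feature grid, the kind of 1D sequence a bidirectional state-space block could consume in forward and reversed order. This is an assumed scan pattern for illustration, not the authors' exact code.

def spiral_order(h, w):
    """Return (row, col) indices spiraling inward from the top-left corner."""
    top, bottom, left, right = 0, h - 1, 0, w - 1
    order = []
    while top <= bottom and left <= right:
        order += [(top, c) for c in range(left, right + 1)]
        order += [(r, right) for r in range(top + 1, bottom + 1)]
        if top < bottom and left < right:
            order += [(bottom, c) for c in range(right - 1, left - 1, -1)]
            order += [(r, left) for r in range(bottom - 1, top, -1)]
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return order

# A bidirectional scan processes this sequence and its reverse:
fwd = spiral_order(4, 4)
bwd = fwd[::-1]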

Multi-Plane Vision Transformer for Hemorrhage Classification Using Axial and Sagittal MRI Data

Badhan Kumar Das, Gengyan Zhao, Boris Mailhe, Thomas J. Re, Dorin Comaniciu, Eli Gibson, Andreas Maier

arXiv preprint · May 12, 2025
Identifying brain hemorrhages from magnetic resonance imaging (MRI) is a critical task for healthcare professionals. The diverse nature of MRI acquisitions, with varying contrasts and orientations, introduces complexity in identifying hemorrhage using neural networks. For acquisitions with varying orientations, traditional methods often involve resampling images to a fixed plane, which can lead to information loss. To address this, we propose a 3D multi-plane vision transformer (MP-ViT) for hemorrhage classification on varying-orientation data. It employs two separate transformer encoders for axial and sagittal contrasts, using cross-attention to integrate information across orientations. MP-ViT also includes a modality indication vector to provide missing contrast information to the model. The effectiveness of the proposed model is demonstrated with extensive experiments on a real-world clinical dataset consisting of 10,084 training, 1,289 validation, and 1,496 test subjects. MP-ViT achieved substantial improvement in area under the curve (AUC), outperforming the vision transformer (ViT) by 5.5% and CNN-based architectures by 1.8%. These results highlight the potential of MP-ViT to improve hemorrhage detection when different orientation contrasts are needed.
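A minimal sketch of the cross-plane fusion step (PyTorch) follows: axial tokens query the sagittal token stream via cross-attention, with the modality-indication vector embedded as an extra token. The dimensions, the residual fusion, and the token-appending choice are assumptions for illustration, not the paper's exact architecture.

import torch
import torch.nn as nn

class CrossPlaneFusion(nn.Module):
    def __init__(self, dim=256, heads=8, n_modalities=4):
        super().__init__()
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mod_embed = nn.Linear(n_modalities, dim)

    def forward(self, axial_tokens, sagittal_tokens, modality_vec):
        # modality_vec: (B, n_modalities) one-hot flags for available contrasts
        mod_token = self.mod_embed(modality_vec).unsqueeze(1)   # (B, 1, dim)
        context = torch.cat([sagittal_tokens, mod_token], dim=1)
        # Axial tokens attend to the sagittal stream plus the modality token.
        fused, _ = self.cross(axial_tokens, context, context)
        return fused + axial_tokens  # residual connection

fusion = CrossPlaneFusion()
out = fusion(torch.randn(2, 64, 256), torch.randn(2, 64, 256),
             torch.eye(4)[:2])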

Inference-specific learning for improved medical image segmentation.

Chen Y, Liu S, Li M, Han B, Xing L

PubMed · May 12, 2025
Deep learning networks map input data to output predictions by fitting network parameters using training data. However, applying a trained network to new, unseen inference data resembles an interpolation process, which may lead to inaccurate predictions if the training and inference data distributions differ significantly. This study aims to improve the prediction accuracy of deep learning networks on individual inference cases by bridging the gap between training and inference data. We propose an inference-specific learning strategy that enhances the network learning process without modifying the network structure. By aligning training data to closely match the specific inference data, we generate an inference-specific training dataset, enhancing the network optimization around the inference data point for more accurate predictions. Taking medical image auto-segmentation as an example, we develop an inference-specific auto-segmentation framework consisting of initial segmentation learning, inference-specific training data deformation, and inference-specific segmentation refinement. The framework is evaluated on public abdominal, head-neck, and pancreas CT datasets comprising 30, 42, and 210 cases, respectively. Experimental results show that our method improves the organ-averaged mean Dice by 6.2% (p-value = 0.001), 1.5% (p-value = 0.003), and 3.7% (p-value < 0.001) on the three datasets, respectively, with a more notable increase for difficult-to-segment organs (such as a 21.7% increase for the gallbladder [p-value = 0.004]). By incorporating organ mask-based weak supervision into the training data alignment learning, the inference-specific auto-segmentation accuracy is generally improved compared with image intensity-based alignment. In addition, a moving-averaged calculation of the inference organ mask during the learning process strengthens both the robustness and accuracy of the final inference segmentation. By leveraging inference data during training, the proposed inference-specific learning strategy consistently improves auto-segmentation accuracy and holds the potential to be broadly applied for enhanced deep learning decision-making.
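The moving-averaged inference mask mentioned above reduces, in essence, to an exponential moving average of the network's per-round soft predictions on the inference case. A minimal sketch (NumPy) is shown below; the random arrays are placeholders for the network's outputs, and the number of rounds and smoothing factor are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
alpha, ema_mask = 0.9, None
for round_idx in range(5):
    # Stand-in for soft organ probabilities predicted on the inference case
    pred_mask = rng.random((64, 64, 64))
    ema_mask = pred_mask if ema_mask is None else \
        alpha * ema_mask + (1 - alpha) * pred_mask   # moving average
final_seg = (ema_mask > 0.5).astype(np.uint8)        # thresholded segmentation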

Generation of synthetic CT from MRI for MRI-based attenuation correction of brain PET images using radiomics and machine learning.

Hoseinipourasl A, Hossein-Zadeh GA, Sheikhzadeh P, Arabalibeik H, Alavijeh SK, Zaidi H, Ay MR

PubMed · May 12, 2025
Accurate quantitative PET imaging in neurological studies requires proper attenuation correction. MRI-guided attenuation correction in PET/MRI remains challenging owing to the lack of a direct relationship between MRI intensities and linear attenuation coefficients. This study aims to generate accurate patient-specific synthetic CT volumes, attenuation maps, and attenuation correction factor (ACF) sinograms with continuous values using a combination of machine learning algorithms, image processing techniques, and voxel-based radiomics feature extraction. Brain MR images of ten healthy volunteers were acquired using IR-pointwise encoding time reduction with radial acquisition (IR-PETRA) and VIBE-Dixon techniques. Synthetic CT (SCT) images, attenuation maps, and ACFs were generated using LightGBM, a fast and accurate machine learning algorithm, from radiomics-based and image processing-based feature maps of the MR images. Additionally, ultra-low-dose CT images of the same volunteers were acquired and served as the standard of reference for evaluation. The SCT images, attenuation maps, and ACF sinograms were assessed using qualitative and quantitative evaluation metrics and compared against their corresponding reference images, attenuation maps, and ACF sinograms. The voxel-wise and volume-wise comparison between synthetic and reference CT images yielded an average mean absolute error of 60.75 ± 8.8 HU, an average structural similarity index of 0.88 ± 0.02, and an average peak signal-to-noise ratio of 32.83 ± 2.74 dB. Additionally, comparing MRI-based attenuation maps and ACF sinograms with their CT-based counterparts revealed average normalized mean absolute errors of 1.48% and 1.33%, respectively. Quantitative assessments indicated high correlation and similarity between LightGBM-synthesized and reference CT images. Moreover, the cross-validation results showed that accurate SCT images, MRI-based attenuation maps, and ACF sinograms can be produced. These findings might spur the implementation of MRI-based attenuation correction on PET/MRI and dedicated brain PET scanners with lower computational time on CPU-based processors.
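A minimal sketch of the voxel-wise regression step with LightGBM follows: per-voxel MRI-derived feature maps are flattened into a sample-by-feature matrix and regressed against reference CT Hounsfield units. The array shapes, hyperparameters, and synthetic data are illustrative assumptions so the snippet runs end to end.

import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
n_features, shape = 8, (16, 32, 32)               # small synthetic volume
feature_maps = rng.random((n_features, *shape))   # per-voxel MRI-derived features
ct_volume = rng.normal(0.0, 300.0, shape)         # stand-in reference CT in HU

X = feature_maps.reshape(n_features, -1).T        # (n_voxels, n_features)
y = ct_volume.ravel()                             # (n_voxels,) target HU values

model = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05)
model.fit(X, y)

# Synthesizing CT for a new subject works the same way on its feature maps;
# here X is reused only so the snippet is self-contained.
sct = model.predict(X).reshape(shape)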

Fully volumetric body composition analysis for prognostic overall survival stratification in melanoma patients.

Borys K, Lodde G, Livingstone E, Weishaupt C, Römer C, Künnemann MD, Helfen A, Zimmer L, Galetzka W, Haubold J, Friedrich CM, Umutlu L, Heindel W, Schadendorf D, Hosch R, Nensa F

PubMed · May 12, 2025
Accurate assessment of expected survival in melanoma patients is crucial for treatment decisions. This study explores deep learning-based body composition analysis to predict overall survival (OS) using baseline Computed Tomography (CT) scans and identify fully volumetric, prognostic body composition features. A deep learning network segmented baseline abdomen and thorax CTs from a cohort of 495 patients. The Sarcopenia Index (SI), Myosteatosis Fat Index (MFI), and Visceral Fat Index (VFI) were derived and statistically assessed for prognosticating OS. External validation was performed with 428 patients. SI was significantly associated with OS on both CT regions: abdomen (P ≤ 0.0001, HR: 0.36) and thorax (P ≤ 0.0001, HR: 0.27), with lower SI associated with prolonged survival. MFI was also associated with OS on abdomen (P ≤ 0.0001, HR: 1.16) and thorax CTs (P ≤ 0.0001, HR: 1.08), where higher MFI was linked to worse outcomes. Lastly, VFI was associated with OS on abdomen CTs (P ≤ 0.001, HR: 1.90), with higher VFI linked to poor outcomes. External validation replicated these results. SI, MFI, and VFI showed substantial potential as prognostic factors for OS in malignant melanoma patients. This approach leveraged existing CT scans without additional procedural or financial burdens, highlighting the seamless integration of DL-based body composition analysis into standard oncologic staging routines.
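The statistical step pairing each index with OS and a hazard ratio is a standard Cox proportional hazards analysis; a minimal sketch with lifelines is shown below. The column names and synthetic data are illustrative assumptions, not the study's dataset.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "si":  rng.normal(50, 10, n),          # Sarcopenia Index
    "mfi": rng.normal(10, 3, n),           # Myosteatosis Fat Index
    "vfi": rng.normal(5, 2, n),            # Visceral Fat Index
    "os_months": rng.exponential(24, n),   # follow-up time
    "event": rng.integers(0, 2, n),        # 1 = death observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="event")
cph.print_summary()  # hazard ratios (exp(coef)) per index, with p-values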

Identification of HER2-over-expression, HER2-low-expression, and HER2-zero-expression statuses in breast cancer based on ¹⁸F-FDG PET/CT radiomics.

Hou X, Chen K, Luo H, Xu W, Li X

PubMed · May 12, 2025
According to the updated classification system, human epidermal growth factor receptor 2 (HER2) expression statuses are divided into three groups: HER2-over-expression, HER2-low-expression, and HER2-zero-expression; HER2-negative expression was reclassified into HER2-low-expression and HER2-zero-expression. This study aimed to identify the three HER2 expression statuses in breast cancer (BC) patients using PET/CT radiomics and clinicopathological characteristics. A total of 315 BC patients who met the inclusion and exclusion criteria at two institutions were retrospectively included. Patients from institution 1 were split into a training set and an independent validation set in a 7:3 ratio, and institution 2 served as the external validation set. According to the pathological results, all BC patients were assigned to the HER2-over-expression, HER2-low-expression, or HER2-zero-expression group. First, PET/CT radiomic features and clinicopathological features were extracted for each patient. Second, multiple methods were used for feature screening and selection. Then, four machine learning classifiers, including logistic regression (LR), k-nearest neighbor (KNN), support vector machine (SVM), and random forest (RF), were constructed to identify HER2-over-expression vs. others, HER2-low-expression vs. others, and HER2-zero-expression vs. others. Receiver operating characteristic (ROC) curves were plotted to measure the models' predictive power. After feature screening, 8, 10, and 2 radiomics features and 2 clinicopathological features were selected to construct the three prediction models (HER2-over-expression vs. others, HER2-low-expression vs. others, and HER2-zero-expression vs. others). For HER2-over-expression vs. others, the RF model outperformed the other models, with AUC values of 0.843 (95% CI: 0.774-0.897), 0.785 (95% CI: 0.665-0.877), and 0.788 (95% CI: 0.708-0.868) in the training, independent validation, and external validation sets, respectively. For HER2-low-expression vs. others, the LR model outperformed the other models, with AUC values of 0.783 (95% CI: 0.708-0.846), 0.756 (95% CI: 0.634-0.854), and 0.779 (95% CI: 0.698-0.860), respectively. The KNN model was the optimal model for distinguishing HER2-zero-expression from others, with AUC values of 0.929 (95% CI: 0.890-0.958), 0.847 (95% CI: 0.764-0.910), and 0.835 (95% CI: 0.762-0.908), respectively. Combined PET/CT radiomic models integrating clinicopathological characteristics can non-invasively predict the different HER2 statuses of BC patients.
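The one-vs.-others classifier comparison reduces to fitting each model on binary labels and comparing AUCs; a minimal sketch (scikit-learn) is shown below. The synthetic feature matrix stands in for the selected radiomics and clinicopathological features, and the hyperparameters are illustrative assumptions.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the selected radiomics + clinicopathological features
X, y = make_classification(n_samples=315, n_features=10, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(probability=True),
    "RF": RandomForestClassifier(n_estimators=200),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)   # e.g., HER2-over-expression vs. others labels
    auc = roc_auc_score(y_va, clf.predict_proba(X_va)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")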

Insights into radiomics: a comprehensive review for beginners.

Mariotti F, Agostini A, Borgheresi A, Marchegiani M, Zannotti A, Giacomelli G, Pierpaoli L, Tola E, Galiffa E, Giovagnoni A

PubMed · May 12, 2025
Radiomics and artificial intelligence (AI) are rapidly evolving, significantly transforming the field of medical imaging. Despite their growing adoption, these technologies remain challenging to approach due to their technical complexity. This review serves as a practical guide for early-career radiologists and researchers seeking to integrate radiomics into their studies. It provides practical insights for clinical and research applications, addressing common challenges, limitations, and future directions in the field. This work offers a structured overview of the essential steps in the radiomics workflow, focusing on concrete aspects of each step, including indicative and practical examples. It covers the main steps such as dataset definition, image acquisition and preprocessing, segmentation, feature extraction and selection, and AI model training and validation. Different methods to be considered are discussed, accompanied by summary diagrams. This review equips readers with the knowledge necessary to approach radiomics and AI in medical imaging from a hands-on research perspective.
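For readers new to the workflow described above, the feature-extraction step typically looks like the following pyradiomics sketch. The tiny synthetic image and mask are placeholders so the snippet runs end to end; in practice these would be a patient scan and its segmentation (e.g., NIfTI files).

import numpy as np
import SimpleITK as sitk
from radiomics import featureextractor

# Tiny synthetic image and block-shaped "lesion" mask for illustration
arr = np.random.default_rng(0).integers(0, 256, (16, 32, 32)).astype(np.float32)
mask = np.zeros_like(arr, dtype=np.uint8)
mask[4:12, 8:24, 8:24] = 1
image = sitk.GetImageFromArray(arr)
label = sitk.GetImageFromArray(mask)

extractor = featureextractor.RadiomicsFeatureExtractor()
features = extractor.execute(image, label)
for name, value in features.items():
    if not name.startswith("diagnostics"):   # skip metadata entries
        print(name, value)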