Page 467 of 7497488 results

Xing Z, Chen J, Pan L, Huang D, Qiu Y, Sheng C, Zhang Y, Wang Q, Cheng R, Xing W, Ding J

PubMed · Jul 11 2025
To assess artificial intelligence (AI)'s added value in detecting prostate cancer lesions on MRI by comparing radiologists' performance with and without AI assistance. A fully-crossed multi-reader multi-case clinical trial was conducted across three institutions with 10 non-expert radiologists. Biparametric MRI cases comprising T2WI, diffusion-weighted images, and apparent diffusion coefficient maps were retrospectively collected. Three reading modes were evaluated: AI alone, radiologists alone (unaided), and radiologists with AI (aided). Aided and unaided readings were compared using the Dorfman-Berbaum-Metz method. Reference standards were established by senior radiologists based on pathological reports. Performance was quantified via sensitivity, specificity, and area under the alternative free-response receiver operating characteristic curve (AFROC-AUC). Among 407 eligible male patients (69.5 ± 9.3 years), aided reading significantly improved lesion-level sensitivity from 67.3% (95% confidence interval [CI]: 58.8%, 75.8%) to 85.5% (95% CI: 81.3%, 89.7%), a substantial difference of 18.2% (95% CI: 10.7%, 25.7%, p<0.001). Case-level specificity increased from 75.9% (95% CI: 68.7%, 83.1%) to 79.5% (95% CI: 74.1%, 84.8%), demonstrating non-inferiority (p<0.001). AFROC-AUC was also higher for aided than unaided reading (86.9% vs 76.1%, p<0.001). AI alone achieved robust performance (AFROC-AUC=83.1%, 95% CI: 79.7%, 86.6%), with lesion-level sensitivity of 88.4% (95% CI: 84.0%, 92.0%) and case-level specificity of 77.8% (95% CI: 71.5%, 83.3%). Subgroup analysis revealed improved detection for lesions with smaller size and lower Prostate Imaging Reporting and Data System (PI-RADS) scores. AI-aided reading significantly enhances lesion detection compared to unaided reading, while AI alone also demonstrates high diagnostic accuracy.
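The sensitivity and specificity figures above come with 95% confidence intervals; a minimal sketch of how such point estimates and intervals are derived from a confusion matrix is below. The counts are hypothetical (not taken from the study), and a simple Wald normal approximation is assumed, whereas the paper may use a different interval method.

```python
import math

def sens_spec_ci(tp, fn, tn, fp, z=1.96):
    """Sensitivity and specificity with Wald-style 95% CIs
    (normal approximation; the study's CI method may differ)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    def ci(p, n):
        half = z * math.sqrt(p * (1.0 - p) / n)
        return (max(0.0, p - half), min(1.0, p + half))
    return sens, ci(sens, tp + fn), spec, ci(spec, tn + fp)

# Hypothetical counts for illustration (NOT from the study):
sens, s_ci, spec, p_ci = sens_spec_ci(tp=94, fn=16, tn=158, fp=41)
```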

Mikayama R, Kojima T, Shirasaka T, Yamane S, Funatsu R, Kato T, Yabuuchi H

PubMed · Jul 11 2025
Deep learning-based fast kV-switching CT (DL-FKSCT) generates complete sinograms for fast kV-switching dual-energy CT (DECT) scans by using a trained neural network to restore missing views. Such restoration significantly enhances the image quality of coronary CT angiography (CCTA), and the allowable heart rate (HR) may vary between DECT and single-energy CT (SECT). This study aimed to examine the effect of HR on CCTA using DL-FKSCT. We scanned stenotic coronary artery phantoms attached to a pulsating cardiac phantom with DECT and SECT modes on a DL-FKSCT scanner. The phantom unit was operated at a simulated HR of 0 (static) and at HRs of 50-70 beats per minute (bpm). The sharpness and stenosis ratio of the coronary model were quantitatively compared between DECT and SECT, stratified by simulated HR settings using the paired t-test (significance was set at p < 0.01 with a Bonferroni adjustment for multiple comparisons). Regarding image sharpness, DECT showed significant superiority over SECT. In terms of the stenosis ratio compared to a static image reference, the 70 keV virtual monochromatic images in DECT exhibited errors exceeding 10% at HRs surpassing 65 bpm (p < 0.01), whereas 120 kVp SECT registered errors below 10% across all HR settings, with no significant differences observed. In DL-FKSCT, DECT exhibited a lower upper limit of HR than SECT. Therefore, HR control is important for DECT scans in DL-FKSCT.
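The Bonferroni adjustment mentioned above amounts to tightening the per-comparison threshold by the number of tests. A minimal sketch with hypothetical p-values (not from the study):

```python
def bonferroni_significant(p_values, alpha=0.01):
    """Flag which paired comparisons remain significant after a
    Bonferroni adjustment: each p-value is tested against alpha / m."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# Hypothetical p-values for five HR settings (illustrative only):
flags = bonferroni_significant([0.0005, 0.003, 0.02, 0.15, 0.001])
```

With five comparisons and alpha = 0.01, the effective threshold is 0.002, so only the first and last p-values survive.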

Zhang J, Horn M, Tanaka K, Bala F, Singh N, Benali F, Ganesh A, Demchuk AM, Menon BK, Qiu W

PubMed · Jul 11 2025
The presence of the spot sign on CT angiography (CTA) is associated with hematoma growth in patients with intracerebral hemorrhage, and measuring spot sign volume over time may help predict hematoma expansion. Detection is difficult because spot signs appear tiny on CTA images and their imaging characteristics resemble those of veins and calcifications; our aim was therefore to develop an automated method that detects spot signs accurately. We propose a novel collaborative network architecture based on a student-teacher model that efficiently exploits additional negative samples through contrastive learning. In particular, a set of dynamically updated memory banks is proposed to learn more distinctive features from the extremely imbalanced positive and negative samples. In addition, a two-stream network with an additional contextual decoder is designed to learn contextual information at different scales in a collaborative way. To further suppress false positive detections, a region restriction loss function is designed to confine the spot sign segmentation within the hemorrhage. Quantitative evaluations using Dice, volume correlation, sensitivity, specificity, and area under the curve show that the proposed method segments and detects spot signs accurately. Our contrastive learning framework obtained the best segmentation performance, with a mean Dice of 0.638 ± 0.211, a mean VC of 0.871, and a mean VDP of 0.348 ± 0.237, and the best detection performance, with a sensitivity of 0.956 (CI: 0.895, 1.000), specificity of 0.833 (CI: 0.766, 0.900), and AUC of 0.892 (CI: 0.888, 0.896), outperforming nnUNet, cascade-nnUNet, nnUNet++, SegRegNet, UNETR, and SwinUNETR. This paper proposed a novel segmentation approach that leverages contrastive learning to exploit additional negative samples for the automatic segmentation of spot signs on mCTA images. The experimental results demonstrate the effectiveness of our method and highlight its potential applicability in clinical settings for measuring spot sign volumes.
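The core contrastive idea of pulling an anchor toward its positive while pushing it from many memory-bank negatives can be sketched with an InfoNCE-style loss. This is a simplified illustration: the paper's dynamic update rule for the memory banks and its exact loss are not reproduced here.

```python
import math
import random

def info_nce(anchor, positive, negatives, tau=0.1):
    """Minimal InfoNCE-style contrastive loss: the anchor should score
    higher against its positive than against memory-bank negatives."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    def unit(a):
        n = math.sqrt(dot(a, a))
        return [x / n for x in a]
    a = unit(anchor)
    logits = [dot(a, unit(positive)) / tau]
    logits += [dot(a, unit(n)) / tau for n in negatives]
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]  # negative log-softmax of the positive

random.seed(0)
bank = [[random.gauss(0, 1) for _ in range(8)] for _ in range(32)]  # toy memory bank
anchor = [1, 0, 0, 0, 0, 0, 0, 0]
loss_aligned = info_nce(anchor, [0.9, 0.1, 0, 0, 0, 0, 0, 0], bank)
loss_mismatched = info_nce(anchor, [0, 1, 0, 0, 0, 0, 0, 0], bank)
```

A well-aligned positive yields a lower loss than a mismatched one, which is exactly the gradient signal that makes spot-sign features more distinctive than vein or calcification features.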

Anna Rosenberg, John Kennedy, Zohar Keidar, Yehoshua Y. Zeevi, Guy Gilboa

arXiv preprint · Jul 11 2025
When solving computer vision problems through machine learning, one often encounters a lack of sufficient training data. To mitigate this, we propose the use of ensembles of weak learners based on spectral total-variation (STV) features (Gilboa 2014). The features are related to nonlinear eigenfunctions of the total-variation subgradient and can characterize textures well at various scales. It was shown (Burger et al. 2016) that, in the one-dimensional case, orthogonal features are generated, whereas in two dimensions the features are empirically lowly correlated. Ensemble learning theory advocates the use of lowly correlated weak learners. We thus propose here to design ensembles using learners based on STV features. To show the effectiveness of this paradigm we examine a hard real-world medical imaging problem: the predictive value of computed tomography (CT) data for high uptake in positron emission tomography (PET) for patients suspected of skeletal metastases. The database consists of 457 scans with 1524 unique pairs of registered CT and PET slices. Our approach is compared to deep-learning methods and to Radiomics features, showing STV learners perform best (AUC=0.87), compared to neural nets (AUC=0.75) and Radiomics (AUC=0.79). We observe that fine STV scales in CT images are especially indicative of the presence of high uptake in PET.

Mengyuan Liu, Jeongkyu Lee

arXiv preprint · Jul 11 2025
Magnetic resonance imaging (MRI) enables non-invasive, high-resolution analysis of muscle structures. However, automated segmentation remains limited by high computational costs, reliance on large training datasets, and reduced accuracy in segmenting smaller muscles. Convolutional neural network (CNN)-based methods, while powerful, often suffer from substantial computational overhead, limited generalizability, and poor interpretability across diverse populations. This study proposes a training-free segmentation approach based on keypoint tracking, which integrates keypoint selection with Lucas-Kanade optical flow. The proposed method achieves a mean Dice similarity coefficient (DSC) ranging from 0.6 to 0.7, depending on the keypoint selection strategy, performing comparably to state-of-the-art CNN-based models while substantially reducing computational demands and enhancing interpretability. This scalable framework presents a robust and explainable alternative for muscle segmentation in clinical and research applications.
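The Lucas-Kanade step at a tracked keypoint solves a small least-squares system over a local window. Below is a toy single-level sketch on a synthetic image; the paper combines this with a keypoint selection strategy (and practical trackers are typically pyramidal), neither of which is reproduced here.

```python
import math

def lk_flow(img1, img2, x, y, win=5):
    """One Lucas-Kanade step at pixel (x, y): least-squares solve of
    [Ix Iy]·[u v]^T = -It over a (2*win+1)^2 window, with central
    differences for the spatial gradients."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for i in range(y - win, y + win + 1):
        for j in range(x - win, x + win + 1):
            ix = (img1[i][j + 1] - img1[i][j - 1]) / 2.0
            iy = (img1[i + 1][j] - img1[i - 1][j]) / 2.0
            it = img2[i][j] - img1[i][j]
            a11 += ix * ix; a12 += ix * iy; a22 += iy * iy
            b1 -= ix * it; b2 -= iy * it
    det = a11 * a22 - a12 * a12  # structure tensor determinant
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

def blob(cx, cy, size=17, s=8.0):
    """Synthetic Gaussian blob standing in for a muscle landmark."""
    return [[math.exp(-((j - cx) ** 2 + (i - cy) ** 2) / s)
             for j in range(size)] for i in range(size)]

# A blob shifted 0.2 px to the right; LK should recover roughly (0.2, 0).
u, v = lk_flow(blob(8, 8), blob(8.2, 8), x=8, y=8)
```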

Hengyu Gao, Shaodong Ding, Ziyang Liu, Jiefu Zhang, Bolun Li, Zhiwu An, Li Wang, Jing Jing, Tao Liu, Yubo Fan, Zhongtao Hu

arXiv preprint · Jul 11 2025
Accurate targeting is critical for the effectiveness of transcranial focused ultrasound (tFUS) neuromodulation. While CT provides accurate skull acoustic properties, its ionizing radiation and poor soft tissue contrast limit clinical applicability. In contrast, MRI offers superior neuroanatomical visualization without radiation exposure but lacks skull property mapping. This study proposes a novel, fully CT-free simulation framework that integrates MRI-derived synthetic CT (sCT) with efficient modeling techniques for rapid and precise tFUS targeting. We trained a deep-learning model to generate sCT from T1-weighted MRI and integrated it with both full-wave (k-Wave) and accelerated simulation methods: hybrid angular spectrum (kW-ASM) and Rayleigh-Sommerfeld ASM (RS-ASM). Across five skull models, both full-wave and hybrid pipelines using sCT demonstrated sub-millimeter targeting deviation, focal shape consistency (FWHM ~3.3-3.8 mm), and <0.2 normalized pressure error compared to the CT-based gold standard. Notably, the kW-ASM and RS-ASM pipelines reduced simulation time from ~3320 s to 187 s and 34 s, respectively, achieving ~94% and ~90% time savings. These results confirm that MRI-derived sCT combined with innovative rapid simulation techniques enables fast, accurate, and radiation-free tFUS planning, supporting its feasibility for scalable clinical applications.
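As a quick sanity check on the reported speed-up, the percent time savings relative to the full-wave baseline can be computed directly from the runtimes:

```python
def time_savings_pct(baseline_s, accelerated_s):
    """Percent runtime reduction relative to a baseline simulation."""
    return 100.0 * (1.0 - accelerated_s / baseline_s)

# Runtimes reported in the abstract: ~3320 s full-wave vs 187 s and 34 s.
kw_asm = time_savings_pct(3320, 187)
rs_asm = time_savings_pct(3320, 34)
```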

Inye Na, Nejung Rue, Jiwon Chung, Hyunjin Park

arXiv preprint · Jul 11 2025
Medical image retrieval is a valuable field for supporting clinical decision-making, yet current methods primarily support 2D images and require fully annotated queries, limiting clinical flexibility. To address this, we propose RadiomicsRetrieval, a 3D content-based retrieval framework bridging handcrafted radiomics descriptors with deep learning-based embeddings at the tumor level. Unlike existing 2D approaches, RadiomicsRetrieval fully exploits volumetric data to leverage richer spatial context in medical images. We employ a promptable segmentation model (e.g., SAM) to derive tumor-specific image embeddings, which are aligned with radiomics features extracted from the same tumor via contrastive learning. These representations are further enriched by anatomical positional embedding (APE). As a result, RadiomicsRetrieval enables flexible querying based on shape, location, or partial feature sets. Extensive experiments on both lung CT and brain MRI public datasets demonstrate that radiomics features significantly enhance retrieval specificity, while APE provides global anatomical context essential for location-based searches. Notably, our framework requires only minimal user prompts (e.g., a single point), minimizing segmentation overhead and supporting diverse clinical scenarios. The capability to query using either image embeddings or selected radiomics attributes highlights its adaptability, potentially benefiting diagnosis, treatment planning, and research on large-scale medical imaging repositories. Our code is available at https://github.com/nainye/RadiomicsRetrieval.
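A common way to encode a 3D location so that nearby anatomy maps to nearby embeddings is a sinusoidal coordinate embedding; a minimal sketch is below. This is illustrative only: the paper's anatomical positional embedding (APE) may use a different, possibly learned, formulation.

```python
import math

def positional_embedding(coord, freqs=4):
    """Sinusoidal embedding of a normalized (x, y, z) coordinate:
    sin/cos at geometrically spaced frequencies, concatenated per axis."""
    emb = []
    for c in coord:  # each axis normalized to [0, 1]
        for k in range(freqs):
            w = (2.0 ** k) * math.pi
            emb.append(math.sin(w * c))
            emb.append(math.cos(w * c))
    return emb

ape = positional_embedding((0.25, 0.5, 0.75))  # 3 axes * 4 freqs * 2 = 24 dims
```

Such an embedding gives retrieval a global anatomical context: two tumors at similar normalized positions produce similar vectors regardless of their appearance features.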

Zach Eidex, Mojtaba Safari, Tonghe Wang, Vanessa Wildman, David S. Yu, Hui Mao, Erik Middlebrooks, Aparna Kesewala, Xiaofeng Yang

arXiv preprint · Jul 11 2025
Purpose: Ultra-high-field 7T MRI offers improved resolution and contrast over standard clinical field strengths (1.5T, 3T). However, 7T scanners are costly, scarce, and introduce additional challenges such as susceptibility artifacts. We propose an efficient transformer-based model (7T-Restormer) to synthesize 7T-quality T1 maps from routine 1.5T or 3T T1-weighted (T1W) images. Methods: Our model was validated on 35 1.5T and 108 3T T1W MRI scans paired with corresponding 7T T1 maps of patients with confirmed MS. A total of 141 patient cases (32,128 slices) were randomly divided into 105 (25; 80) training cases (19,204 slices), 19 (5; 14) validation cases (3,476 slices), and 17 (5; 14) test cases (3,145 slices), where (X; Y) denotes the number of patients with 1.5T and 3T T1W scans, respectively. The synthetic 7T T1 maps were compared against the ResViT and ResShift models. Results: The 7T-Restormer model achieved a PSNR of 26.0 +/- 4.6 dB, SSIM of 0.861 +/- 0.072, and NMSE of 0.019 +/- 0.011 for 1.5T inputs, and a PSNR of 25.9 +/- 4.9 dB and SSIM of 0.866 +/- 0.077 for 3T inputs. With 10.5M parameters, our model reduced NMSE by 64% relative to the 56.7M-parameter ResShift (0.019 vs 0.052, p < .001) and by 41% relative to the 70.4M-parameter ResViT (0.019 vs 0.032, p < .001) at 1.5T, with similar advantages at 3T (0.021 vs 0.060 and 0.033; p < .001). Training with a mixed 1.5T + 3T corpus was superior to single-field strategies: restricting training to 1.5T increased the 1.5T NMSE from 0.019 to 0.021 (p = 1.1e-3), while training solely on 3T resulted in lower performance on 1.5T T1W inputs. Conclusion: We propose a novel method for predicting quantitative 7T MP2RAGE maps from 1.5T and 3T T1W scans with higher quality than existing state-of-the-art methods. Our approach makes the benefits of 7T MRI more accessible to standard clinical workflows.
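The PSNR and NMSE figures above follow standard definitions; a minimal sketch on toy 1D signals (standing in for flattened T1 maps) is below. One common normalization convention is assumed for NMSE; the paper's exact convention may differ.

```python
import math

def psnr_nmse(ref, pred, data_range=None):
    """PSNR (dB) and normalized MSE between a reference and a prediction.
    NMSE here divides the squared error by the reference signal energy."""
    mse = sum((r - p) ** 2 for r, p in zip(ref, pred)) / len(ref)
    nmse = sum((r - p) ** 2 for r, p in zip(ref, pred)) / sum(r * r for r in ref)
    rng = data_range if data_range is not None else max(ref) - min(ref)
    psnr = float("inf") if mse == 0 else 10.0 * math.log10(rng * rng / mse)
    return psnr, nmse

# Toy signals: a constant +0.1 error against a [0, 3] reference.
p, n = psnr_nmse([0.0, 1.0, 2.0, 3.0], [0.1, 1.1, 2.1, 3.1])
```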

Nikita Malik, Pratinav Seth, Neeraj Kumar Singh, Chintan Chitroda, Vinay Kumar Sankarapu

arXiv preprint · Jul 11 2025
Deep learning has driven significant advances in medical image analysis, yet its adoption in clinical practice remains constrained by the large size and lack of transparency in modern models. Advances in interpretability techniques such as DL-Backtrace, Layer-wise Relevance Propagation, and Integrated Gradients make it possible to assess the contribution of individual components within neural networks trained on medical imaging tasks. In this work, we introduce an interpretability-guided pruning framework that reduces model complexity while preserving both predictive performance and transparency. By selectively retaining only the most relevant parts of each layer, our method enables targeted compression that maintains clinically meaningful representations. Experiments across multiple medical image classification benchmarks demonstrate that this approach achieves high compression rates with minimal loss in accuracy, paving the way for lightweight, interpretable models suited for real-world deployment in healthcare settings.
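The idea of selectively retaining only the most relevant parts of each layer can be sketched as ranking units by a relevance score (e.g., from relevance propagation) and zeroing the rest. This toy per-layer version is an assumption for illustration; real pipelines prune structured blocks (channels, heads) and fine-tune afterwards.

```python
def prune_by_relevance(weights, relevance, keep_ratio=0.5):
    """Zero out the least-relevant units of one layer, keeping the
    top keep_ratio fraction ranked by relevance score."""
    k = max(1, int(len(weights) * keep_ratio))
    keep = set(sorted(range(len(weights)),
                      key=lambda i: relevance[i], reverse=True)[:k])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]

# Four units with hypothetical relevance scores; half are retained.
pruned = prune_by_relevance([1.0, 2.0, 3.0, 4.0], [0.1, 0.9, 0.2, 0.8])
```

Because the ranking comes from an interpretability method rather than weight magnitude, the retained units are the ones the explanation attributes the prediction to, which is what keeps the compressed model clinically meaningful.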

Ulzee An, Moonseong Jeong, Simon A. Lee, Aditya Gorla, Yuzhe Yang, Sriram Sankararaman

arXiv preprint · Jul 11 2025
Current challenges in developing foundational models for volumetric imaging data, such as magnetic resonance imaging (MRI), stem from the computational complexity of training state-of-the-art architectures in high dimensions and curating sufficiently large datasets of volumes. To address these challenges, we introduce Raptor (Random Planar Tensor Reduction), a train-free method for generating semantically rich embeddings for volumetric data. Raptor leverages a frozen 2D foundation model, pretrained on natural images, to extract visual tokens from individual cross-sections of medical volumes. These tokens are then spatially compressed using random projections, significantly reducing computational complexity while retaining semantic information. Extensive experiments on ten diverse medical volume tasks verify the superior performance of Raptor over state-of-the-art methods, including those pretrained exclusively on medical volumes (+3% SuPreM, +6% MISFM, +10% Merlin, +13% VoCo, and +14% SLIViT), while entirely bypassing the need for costly training. Our results highlight the effectiveness and versatility of Raptor as a foundation for advancing deep learning-based methods for medical volumes.
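The token-compression step can be sketched as one shared random Gaussian projection applied to every token vector, in the spirit of Johnson-Lindenstrauss dimensionality reduction. This is a simplified stand-in for Raptor's planar tensor reduction, with made-up dimensions; the actual method operates on foundation-model tokens from volume cross-sections.

```python
import random

def project_tokens(tokens, out_dim, seed=0):
    """Compress per-slice token vectors with a single shared random
    Gaussian projection matrix (d -> out_dim), preserving pairwise
    geometry approximately while cutting dimensionality."""
    rng = random.Random(seed)
    d = len(tokens[0])
    proj = [[rng.gauss(0.0, 1.0 / out_dim ** 0.5) for _ in range(out_dim)]
            for _ in range(d)]
    return [[sum(t[i] * proj[i][j] for i in range(d)) for j in range(out_dim)]
            for t in tokens]

tokens = [[float(i + j) for j in range(16)] for i in range(5)]  # toy tokens
compressed = project_tokens(tokens, out_dim=4)
```

Because the projection is random and fixed, no training is needed: the same matrix (or seed) is reused for every volume, which is what makes the pipeline train-free.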
