Xue H, Hooper SM, Pierce I, Davies RH, Stairs J, Naegele J, Campbell-Washburn AE, Manisty C, Moon JC, Treibel TA, Hansen MS, Kellman P

PubMed · Oct 22, 2025
Purpose: To develop and evaluate a novel deep learning-based MRI denoising method that uses quantitative noise distribution information obtained during image reconstruction to improve model performance and generalization. Materials and Methods: This retrospective study included a training set of 2885236 images from 96605 cardiac cine series acquired on 3T MRI scanners from January 2018 to December 2020; 95% of these data were used for training and 5% for validation. The hold-out test set included 3000 cine series acquired in the same period. Fourteen model architectures were evaluated by instantiating each of two backbone types with seven transformer and convolution block types. The proposed SNRAware training scheme leveraged MRI reconstruction knowledge to enhance denoising by simulating diverse synthetic datasets and providing quantitative noise distribution information. Internal testing measured performance using peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), whereas external tests conducted on 1.5T real-time cardiac cine, first-pass cardiac perfusion, brain, and spine MRIs assessed generalization across sequences, contrasts, anatomies, and field strengths. Results: SNRAware improved performance on internal tests conducted on the hold-out dataset of 3000 cine series; models trained without reconstruction knowledge achieved the worst performance metrics. The improvement was architecture-agnostic for both convolution and transformer models, although transformer models outperformed their convolutional counterparts, and 3D input tensors outperformed 2D images. The best-performing model from internal testing generalized well to external samples, delivering 6.5× and 2.9× contrast-to-noise ratio improvements for real-time cine and perfusion imaging, respectively. The model, trained using only cardiac cine data, also generalized well to T1 MPRAGE (magnetization-prepared rapid gradient-echo) 3D brain and T2 TSE (turbo spin-echo) spine MRIs. Conclusion: The SNRAware training scheme leveraged data obtained during the image reconstruction process for deep learning-based MRI denoising, resulting in improved performance and good generalization. ©RSNA, 2025.
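For readers who want the mechanics, here is a minimal sketch of what "SNR-aware" conditioning could look like: the per-pixel noise standard deviation map (which the abstract says comes from the reconstruction pipeline) enters the denoiser as an extra input channel. The network and names (SNRAwareDenoiser, noise_map) are illustrative stand-ins, not the paper's implementation.

```python
# Sketch: denoiser conditioned on a quantitative noise map, assuming the
# reconstruction yields per-pixel noise std alongside each image.
import torch
import torch.nn as nn

class SNRAwareDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        # Image and noise map enter as two channels, so the network can
        # modulate its denoising strength by the local noise level.
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, noisy, noise_map):
        x = torch.cat([noisy, noise_map], dim=1)
        return noisy - self.net(x)  # residual prediction of the noise

# One synthetic training step: noise is added with a known std map, mirroring
# how quantitative noise levels from reconstruction could supervise training.
model = SNRAwareDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
clean = torch.rand(4, 1, 64, 64)
noise_map = torch.full_like(clean, 0.1)          # known per-pixel noise std
noisy = clean + noise_map * torch.randn_like(clean)
opt.zero_grad()
loss = nn.functional.mse_loss(model(noisy, noise_map), clean)
loss.backward()
opt.step()
```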

Trägårdh E, Larsson M, Enqvist O, Gillberg T, Hildebrandt MG, Edenbrandt L

PubMed · Oct 22, 2025
The PET Response Criteria in Solid Tumours (PERCIST) 1.0 provides a standardized framework for evaluating treatment response using [18F]fluorodeoxyglucose ([18F]FDG) positron emission tomography-computed tomography (PET-CT), but its clinical use is hindered by the complexity of the manual workflow. This study presents coPERCIST, an artificial intelligence (AI)-assisted module integrated into the RECOMIA platform that semi-automates and streamlines PERCIST analysis. coPERCIST performs organ segmentation and automates key steps of the PERCIST workflow, including background activity quantification, lesion detection, SULpeak calculation, and longitudinal lesion comparison. A novel image alignment method using organ-specific transformations and uncertainty estimation enables accurate lesion tracking over time. The system was evaluated in 58 oncological patients, each with two PET-CT scans, and up to three measurable lesions per patient were analysed. The AI-suggested liver and aorta volumes of interest for threshold calculation were correct in all baseline and follow-up studies. Follow-up studies were classified as progressive metabolic disease (PMD) in 38 cases, stable metabolic disease (SMD) in 16, and partial metabolic response (PMR) in 4. Of 130 lesions evaluated, anatomical alignment was accurate in all cases, and pairwise SULpeak quantification was accurate in 95%; it failed in seven lesion pairs due to proximity to other lesions or misclassified physiological uptake. Review time was less than one minute for most cases. This study demonstrates the feasibility of AI-assisted PERCIST evaluation for [18F]FDG PET-CT, showing promising accuracy. coPERCIST offers potential for reproducible response assessment and supports future multicentre validation. It is freely available to researchers via the RECOMIA platform.
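For context on the response categories mentioned above, here is a simplified sketch of the PERCIST 1.0 decision rule on SULpeak values. It omits parts of the full criteria (measurability thresholds, new lesions, complete metabolic response) and is not coPERCIST's code.

```python
# Simplified PERCIST 1.0 rule: both a >=30% relative change and a >=0.8
# SUL-unit absolute change are required to call response or progression.
def percist_category(sul_baseline: float, sul_followup: float) -> str:
    change_pct = (sul_followup - sul_baseline) / sul_baseline * 100.0
    if change_pct <= -30.0 and (sul_baseline - sul_followup) >= 0.8:
        return "PMR"   # partial metabolic response
    if change_pct >= 30.0 and (sul_followup - sul_baseline) >= 0.8:
        return "PMD"   # progressive metabolic disease
    return "SMD"       # stable metabolic disease

print(percist_category(8.0, 4.5))   # PMR: 43.75% decrease
print(percist_category(5.0, 7.5))   # PMD: 50% increase
```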

Juhyung Park, Rokgi Hong, Roh-Eul Yoo, Jaehyeon Koo, Se Young Chun, Seung Hong Choi, Jongho Lee

arXiv preprint · Oct 22, 2025
Recent advancements in artificial intelligence have created transformative capabilities in image synthesis and generation, enabling diverse research fields to innovate at revolutionary speed and spectrum. In this study, we leverage this generative power to introduce a new paradigm for accelerating Magnetic Resonance Imaging (MRI), shifting from image reconstruction to proactive predictive imaging. Despite being a cornerstone of modern patient care, MRI's lengthy acquisition times limit clinical throughput. Our framework addresses this challenge by first predicting a target contrast image, which then serves as a data-driven prior for reconstructing highly under-sampled data. This informative prior is predicted by a generative model conditioned on diverse data sources, such as other contrast images, previously scanned images, acquisition parameters, and patient information. We demonstrate this approach with two key applications: (1) reconstructing FLAIR images using predictions from T1w and/or T2w scans, and (2) reconstructing T1w images using predictions from previously acquired T1w scans. The framework was evaluated on internal and multiple public datasets (14,921 scans in total; 1,051,904 slices), including multi-channel k-space data, for a range of high acceleration factors (×4, ×8, and ×12). The results demonstrate that our prediction-prior reconstruction method significantly outperforms other approaches, including those with alternative or no prior information. Through this framework we introduce a fundamental shift from image reconstruction towards a new paradigm of predictive imaging.
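To make the prior's role concrete, here is a minimal sketch of how a predicted image can regularize undersampled reconstruction, assuming single-coil Cartesian sampling (the paper's multi-channel setting is more involved). It solves x = argmin ||M F x - y||² + λ||x - prior||², which has a closed form per k-space frequency.

```python
# Sketch: predicted image as a reconstruction prior for undersampled k-space.
import numpy as np

def prior_recon(y, mask, prior, lam=0.05):
    """y: undersampled k-space; mask: 1 where sampled; prior: predicted image."""
    prior_k = np.fft.fft2(prior)
    # Per-frequency least squares: sampled entries blend data and prior;
    # unsampled entries fall back entirely to the prior.
    x_k = (mask * y + lam * prior_k) / (mask + lam)
    return np.fft.ifft2(x_k).real

rng = np.random.default_rng(0)
truth = rng.random((64, 64))
mask = (rng.random((64, 64)) < 0.25).astype(float)      # ~4x undersampling
y = mask * np.fft.fft2(truth)
prior = truth + 0.05 * rng.standard_normal((64, 64))    # stand-in prediction
recon = prior_recon(y, mask, prior)
print(f"recon error: {np.abs(recon - truth).mean():.4f}")
```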

Safa Ben Atitallah, Maha Driss, Wadii Boulila, Anis Koubaa

arXiv preprint · Oct 22, 2025
Alzheimer disease is a severe brain disorder that damages multiple brain regions and leads to memory impairment. The limited availability of labeled medical data poses a significant challenge for accurate Alzheimer disease detection. Given the scarcity of labeled data, the complexity of the disease, and the constraints of data privacy, effective methods to improve detection accuracy are critically needed. To address this challenge, our study leverages large-scale pre-trained Convolutional Neural Networks (CNNs) within the framework of Few-Shot Learning (FSL) and ensemble learning. We propose an ensemble approach based on the Prototypical Network (ProtoNet), a powerful FSL method, integrating various pre-trained CNNs as encoders. This integration enriches the features extracted from medical images. Our approach also combines a class-aware loss and an entropy loss to ensure more precise classification of Alzheimer disease progression levels. The effectiveness of our method was evaluated on two datasets, the Kaggle Alzheimer dataset and the ADNI dataset, achieving accuracies of 99.72% and 99.86%, respectively. Comparison of our results with relevant state-of-the-art studies demonstrated that our approach achieves superior accuracy, highlighting its validity and potential for real-world application in early Alzheimer disease detection.
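The Prototypical Network step at the core of this ensemble is standard and worth showing: class prototypes are the mean support embeddings, and queries are classified by distance to the prototypes. The toy encoder-free episode below is a sketch; the paper's encoders are pre-trained CNNs.

```python
# Sketch: prototypical classification over embedded support/query examples.
import torch

def proto_logits(support, support_labels, query, n_classes):
    # support: (n_support, d) embeddings; query: (n_query, d) embeddings
    protos = torch.stack([support[support_labels == c].mean(0)
                          for c in range(n_classes)])         # (n_classes, d)
    # Negative squared Euclidean distance serves as the classification logit.
    return -torch.cdist(query, protos).pow(2)

# Toy 3-way, 5-shot episode with 64-dim embeddings.
d, n_classes, k = 64, 3, 5
support = torch.randn(n_classes * k, d)
labels = torch.arange(n_classes).repeat_interleave(k)
query = support[::k] + 0.1 * torch.randn(n_classes, d)
pred = proto_logits(support, labels, query, n_classes).argmax(1)
print(pred)  # ideally tensor([0, 1, 2])
```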

Ahsan Raza Siyal, Markus Haltmeier, Ruth Steiger, Malik Galijasevic, Elke Ruth Gizewski, Astrid Ellen Grams

arXiv preprint · Oct 22, 2025
Deformable medical image registration is a fundamental task in medical image analysis. While deep learning-based methods have demonstrated superior accuracy and computational efficiency compared to traditional techniques, they often overlook the critical role of regularization in ensuring robustness and anatomical plausibility. We propose DARE (Deformable Adaptive Regularization Estimator), a novel registration framework that dynamically adjusts elastic regularization based on the gradient norm of the deformation field. Our approach integrates strain and shear energy terms, which are adaptively modulated to balance stability and flexibility. To ensure physically realistic transformations, DARE includes a folding-prevention mechanism that penalizes regions where the Jacobian determinant of the deformation is negative. This strategy mitigates non-physical artifacts such as folding, avoids over-smoothing, and improves both registration accuracy and anatomical plausibility.
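The folding-prevention mechanism is a well-known construction, sketched below in 2D: compute the Jacobian determinant of the deformation (identity plus displacement) with finite differences and penalize only negative values. DARE couples this with adaptive strain/shear regularization not shown here.

```python
# Sketch: differentiable folding penalty on a 2D displacement field.
import torch

def folding_penalty(disp):
    # disp: (B, 2, H, W); deformation phi(x) = x + disp(x).
    # Forward differences, cropped so all derivative maps share a shape.
    du_dx = disp[:, 0, :-1, 1:] - disp[:, 0, :-1, :-1]
    du_dy = disp[:, 0, 1:, :-1] - disp[:, 0, :-1, :-1]
    dv_dx = disp[:, 1, :-1, 1:] - disp[:, 1, :-1, :-1]
    dv_dy = disp[:, 1, 1:, :-1] - disp[:, 1, :-1, :-1]
    jac_det = (1 + du_dx) * (1 + dv_dy) - du_dy * dv_dx
    # Only folded regions (negative determinant) contribute to the loss.
    return torch.relu(-jac_det).mean()

disp = torch.randn(1, 2, 32, 32, requires_grad=True)
loss = folding_penalty(disp)
loss.backward()  # gradients flow back to the registration network
print(loss.item())
```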

Chen Ma, Jing Jiao, Shuyu Liang, Junhu Fu, Qin Wang, Zeju Li, Yuanyuan Wang, Yi Guo

arXiv preprint · Oct 22, 2025
Foundation models for medical imaging demonstrate superior generalization capabilities across diverse anatomical structures and clinical applications. Their outstanding performance relies on substantial computational resources, limiting deployment in resource-constrained clinical environments. This paper presents TinyUSFM, the first lightweight ultrasound foundation model, which maintains the organ versatility and task adaptability of our large-scale Ultrasound Foundation Model (USFM) through knowledge distillation with strategically curated small datasets, delivering significant computational efficiency without sacrificing performance. Considering the limited capacity and representation ability of lightweight models, we propose a feature-gradient driven coreset selection strategy to curate high-quality compact training data, avoiding training degradation from low-quality redundant images. To preserve essential spatial- and frequency-domain characteristics during knowledge transfer, we develop domain-separated masked image modeling assisted consistency-driven dynamic distillation. This framework adaptively transfers knowledge from large foundation models by leveraging teacher model consistency across different domain masks, specifically tailored for ultrasound interpretation. For evaluation, we establish UniUS-Bench, the largest publicly available ultrasound benchmark, comprising 8 classification and 10 segmentation datasets across 15 organs. Using only 200K images for distillation, TinyUSFM matches USFM's performance with just 6.36% of the parameters and 6.40% of the GFLOPs. TinyUSFM significantly outperforms the vanilla model by 9.45% in classification and 7.72% in segmentation, surpassing all state-of-the-art lightweight models and achieving 84.91% average classification accuracy and 85.78% average segmentation Dice score across diverse medical devices and centers.
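One plausible reading of "consistency-driven dynamic distillation" is sketched below: the teacher is run on two differently masked views, and its cross-view agreement weights how strongly each sample pulls the student toward the teacher. This weighting scheme is our illustrative assumption, not the paper's loss.

```python
# Sketch: consistency-weighted feature distillation (illustrative assumption).
import torch
import torch.nn.functional as F

def distill_loss(student_feat, teacher_feat_a, teacher_feat_b):
    # Teacher consistency across two domain masks, per sample.
    consistency = F.cosine_similarity(teacher_feat_a, teacher_feat_b, dim=1)
    weight = consistency.clamp(min=0).detach()      # trust consistent targets
    target = 0.5 * (teacher_feat_a + teacher_feat_b).detach()
    per_sample = F.mse_loss(student_feat, target, reduction="none").mean(dim=1)
    return (weight * per_sample).mean()

student_feat = torch.randn(8, 256, requires_grad=True)
t_a, t_b = torch.randn(8, 256), torch.randn(8, 256)
loss = distill_loss(student_feat, t_a, t_b)
loss.backward()
print(loss.item())
```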

Zhang H, Miao L, Ma L, Sun X, Ouyang LN, Jing Y, Wang Y, Wang X, Wang P, Zhu L

PubMed · Oct 22, 2025
Accurate prediction of the invasiveness of early-stage pulmonary adenocarcinoma presenting as ground-glass nodules (GGNs) remains highly challenging. This study aims to integrate radiomics features from non-contrast CT (NECT) and contrast-enhanced CT (CECT), deep learning features, and intratumoral habitat features to improve prediction accuracy and provide robust support for personalized surgical decision-making. This dual-center retrospective study included 516 patients with pathologically confirmed GGNs (≤30 mm) from December 2018 to September 2023. Patients from center 1 were randomly divided into training (276 patients) and internal-validation (120 patients) sets, while all 120 patients from center 2 formed the external-validation set. Intratumoral habitat analysis (ITH) was performed on NECT and CECT images using the K-means clustering algorithm. Radiomic features were extracted from the lesion regions and the clustered subregions, and deep learning features were obtained via a fine-tuned ResNet50 model. After feature selection, eight predictive models were established. Additionally, a dynamic nomogram (the comprehensive model) was developed and subjected to explainable analysis using SHAP (SHapley Additive exPlanations). Model performance was assessed using area under the curve (AUC), decision curve analysis (DCA), and calibration curves. Among the eight predictive models, the comprehensive model, which used multi-modal data as input, demonstrated the highest accuracy in distinguishing invasive adenocarcinoma (IAC) from pre-invasive lesions (AAH/AIS/MIA). In the training set, the AUC was 0.92 (95% CI: 0.89-0.95), with 84% accuracy, 85% sensitivity, and 84% specificity. In the internal-validation set, the AUC was 0.90 (95% CI: 0.86-0.95), with 82% accuracy, 88% sensitivity, and 74% specificity. In the external-validation set, the AUC was 0.85 (95% CI: 0.80-0.91), with 80% accuracy, 80% sensitivity, and 80% specificity. DCA showed that the nomogram provided the highest net benefit when the threshold probability was ≥0.4, and the Hosmer-Lemeshow test confirmed good calibration (P > 0.05). SHAP analysis of the selected optimal features revealed that wavelet-based texture features, deep learning features, and ITH features contributed substantially to the model's performance. The comprehensive model (radiomics, deep learning, ITH, and clinical variables) enables reliable prediction of the invasiveness of adenocarcinomas presenting as GGNs. It bridges imaging and pathology, potentially advancing personalized surgical decision-making in early-stage lung adenocarcinoma.
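The habitat step is simple to illustrate: K-means partitions tumor voxels into subregions from which radiomic features are then extracted. The sketch below assumes co-registered NECT/CECT attenuation as the per-voxel feature vector; the actual feature set and cluster count are the authors' choices, not shown here.

```python
# Sketch: intratumoral habitat clustering of lesion voxels with K-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_voxels = 500
# Per-voxel features: [NECT HU, CECT HU]; stand-in values for a GGN lesion.
voxels = np.column_stack([
    rng.normal(-600, 80, n_voxels),   # non-contrast attenuation
    rng.normal(-520, 90, n_voxels),   # contrast-enhanced attenuation
])
habitats = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voxels)
for h in range(3):
    sub = voxels[habitats == h]
    print(f"habitat {h}: {len(sub)} voxels, mean NECT {sub[:, 0].mean():.0f} HU")
```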

Guo L, Zhang H, Ma C

PubMed · Oct 22, 2025
Ultrasound imaging, as an economical, efficient, and non-invasive diagnostic tool, is widely used for breast lesion screening and diagnosis. However, segmentation of lesion regions remains a significant challenge due to factors such as noise interference and variability in image quality. To address this issue, we propose a novel deep learning model, enhanced segment anything model 2 (SAM2) for breast lesion segmentation (ESAM2-BLS), an optimized version of the SAM2 architecture. ESAM2-BLS customizes and fine-tunes the pre-trained SAM2 model by introducing an adapter module specifically designed to accommodate the unique characteristics of breast ultrasound images. The adapter module directly addresses ultrasound-specific challenges, including speckle noise, low-contrast boundaries, shadowing artifacts, and anisotropic resolution, through targeted architectural elements such as channel attention mechanisms, specialized convolution kernels, and optimized skip connections. This optimization significantly improves segmentation accuracy, particularly for low-contrast and small lesion regions. Compared to traditional methods, ESAM2-BLS fully leverages the generalization capabilities of large models while incorporating multi-scale feature fusion and axial dilated depthwise convolution to effectively capture multi-level information from complex lesions. During decoding, the model enhances the identification of fine boundaries and small lesions through depthwise separable convolutions and skip connections, while maintaining a low computational cost. Visualization of the segmentation results and interpretability analysis demonstrate that ESAM2-BLS achieves average Dice scores of 0.9077 and 0.8633 in five-fold cross-validation across two datasets with over 1600 patients. These results significantly improve segmentation accuracy and robustness. The model provides an efficient, reliable, and specialized automated solution for early breast cancer screening and diagnosis.
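A minimal sketch of the kind of adapter described here: a lightweight bottleneck with a depthwise convolution and squeeze-and-excitation-style channel attention, applied residually to features from a frozen backbone. Layer sizes, names, and placement are illustrative assumptions, not the paper's configuration.

```python
# Sketch: residual adapter block with channel attention for a frozen encoder.
import torch
import torch.nn as nn

class UltrasoundAdapter(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        mid = channels // reduction
        self.down = nn.Conv2d(channels, mid, 1)
        self.dw = nn.Conv2d(mid, mid, 3, padding=1, groups=mid)  # depthwise
        self.up = nn.Conv2d(mid, channels, 1)
        # Squeeze-and-excitation style channel attention gate.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, mid, 1), nn.ReLU(),
            nn.Conv2d(mid, channels, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.up(torch.relu(self.dw(torch.relu(self.down(x)))))
        return x + y * self.attn(x)   # residual update, gated per channel

feat = torch.randn(1, 64, 32, 32)         # feature map from a frozen encoder
print(UltrasoundAdapter(64)(feat).shape)  # torch.Size([1, 64, 32, 32])
```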

Zou J, Cao Y

PubMed · Oct 22, 2025
Time-dependent diffusion MRI enables quantification of tumor microstructural parameters useful for diagnosis and prognosis. Nevertheless, current model-fitting approaches exhibit suboptimal bias-variance trade-offs: nonlinear least squares fitting (NLLS) demonstrates low bias but high variance, whereas supervised deep learning methods trained with a mean squared error loss (MSE-Net) yield low variance but elevated bias. This study investigates these bias-variance characteristics and proposes a method to control fitting bias and variance. The random walk with barriers model was used as a representative biophysical model. NLLS and MSE-Net were reformulated within the Bayesian framework to elucidate their bias-variance behaviors. We introduce B2V-Net, a supervised learning approach using a loss function with adjustable bias-variance weighting to control the bias-variance trade-off. B2V-Net was evaluated and compared against NLLS and MSE-Net numerically across a wide range of parameters and noise levels, as well as in vivo in patients with head and neck cancer. Flat posterior distributions that were not centered at the ground-truth parameters explained the bias-variance behaviors of NLLS and MSE-Net. B2V-Net controlled the bias-variance trade-off, achieving a 56% reduction in standard deviation relative to NLLS and an 18% reduction in bias compared to MSE-Net. In vivo parameter maps from B2V-Net demonstrated a balance between smoothness and accuracy. We demonstrated and explained the low bias-high variance of NLLS and the low variance-high bias of MSE-Net, and showed that the proposed B2V-Net can balance bias and variance. Our work provides insights and methods to guide the design of customized loss functions tailored to specific clinical imaging needs.
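To see what "adjustable bias-variance weighting" could mean in a loss, recall that MSE decomposes into squared bias plus variance of the residuals. The sketch below weights those two terms separately; it assumes each batch holds many noise realizations of the same ground-truth parameter, so the mean residual estimates bias and the residual spread estimates variance. The details differ from the paper's B2V-Net.

```python
# Sketch: loss with separately weighted bias and variance terms.
import torch

def b2v_loss(pred, target, w_bias=1.0, w_var=0.5):
    residual = pred - target
    bias_term = residual.mean().pow(2)   # squared mean error (bias^2)
    var_term = residual.var()            # spread of the estimates
    return w_bias * bias_term + w_var * var_term

# Toy example: 128 noisy estimates of a true parameter value of 2.0.
target = torch.full((128,), 2.0)
pred = (2.3 + 0.4 * torch.randn(128)).requires_grad_(True)
loss = b2v_loss(pred, target)
loss.backward()
print(loss.item())
```

Raising w_var relative to w_bias pushes the estimator toward MSE-Net-like smoothness; raising w_bias pushes it toward NLLS-like unbiasedness.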

Shengyu Chen, Shihang Feng, Yi Luo, Xiaowei Jia, Youzuo Lin

arXiv preprint · Oct 22, 2025
Ultrasound brain imaging remains challenging due to the large difference in sound speed between the skull and brain tissues and the difficulty of coupling large probes to the skull. This work aims to achieve quantitative transcranial ultrasound by reconstructing an accurate speed-of-sound (SoS) map of the brain. Traditional physics-based full-waveform inversion (FWI) is limited by weak signals caused by skull-induced attenuation, mode conversion, and phase aberration, as well as incomplete spatial coverage since full-aperture arrays are clinically impractical. In contrast, purely data-driven methods that learn directly from raw ultrasound data often fail to model the complex nonlinear and nonlocal wave propagation through bone, leading to anatomically plausible but quantitatively biased SoS maps under low signal-to-noise and sparse-aperture conditions. To address these issues, we propose BrainPuzzle, a hybrid two-stage framework that combines physical modeling with machine learning. In the first stage, reverse time migration (time-reversal acoustics) is applied to multi-angle acquisitions to produce migration fragments that preserve structural details even under low SNR. In the second stage, a transformer-based super-resolution encoder-decoder with a graph-based attention unit (GAU) fuses these fragments into a coherent and quantitatively accurate SoS image. A partial-array acquisition strategy using a movable low-count transducer set improves feasibility and coupling, while the hybrid algorithm compensates for the missing aperture. Experiments on two synthetic datasets show that BrainPuzzle achieves superior SoS reconstruction accuracy and image completeness, demonstrating its potential for advancing quantitative ultrasound brain imaging.
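The second-stage fusion idea can be sketched crudely: migration fragments from different acquisition angles are stacked as channels and mapped to a single speed-of-sound image by a learned network. BrainPuzzle uses a transformer super-resolution encoder-decoder with a graph-based attention unit (GAU); the toy convolutional fuser below is only a stand-in for that architecture.

```python
# Sketch: fusing multi-angle migration fragments into one SoS map.
import torch
import torch.nn as nn

n_angles = 6
fuser = nn.Sequential(
    nn.Conv2d(n_angles, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
fragments = torch.randn(1, n_angles, 128, 128)  # stand-in migration images
sos_map = fuser(fragments)                      # fused speed-of-sound estimate
print(sos_map.shape)  # torch.Size([1, 1, 128, 128])
```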