Page 25 of 45448 results

MRI super-resolution reconstruction using efficient diffusion probabilistic model with residual shifting.

Safari M, Wang S, Eidex Z, Li Q, Qiu RLJ, Middlebrooks EH, Yu DS, Yang X

pubmed logopapers · Jun 3, 2025
Magnetic resonance imaging (MRI) is essential in clinical and research contexts, providing exceptional soft-tissue contrast. However, prolonged acquisition times often lead to patient discomfort and motion artifacts. Diffusion-based deep learning super-resolution (SR) techniques reconstruct high-resolution (HR) images from low-resolution (LR) pairs, but they involve extensive sampling steps, limiting real-time application. To overcome these issues, this study introduces a residual error-shifting mechanism that markedly reduces sampling steps while maintaining vital anatomical details, thereby accelerating MRI reconstruction. We developed Res-SRDiff, a novel diffusion-based SR framework incorporating residual error shifting into the forward diffusion process. This integration aligns the degraded HR and LR distributions, enabling efficient HR image reconstruction. We evaluated Res-SRDiff using ultra-high-field brain T1 MP2RAGE maps and T2-weighted prostate images, benchmarking it against Bicubic, Pix2pix, CycleGAN, SPSR, I2SB, and TM-DDPM methods. Quantitative assessments employed peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), gradient magnitude similarity deviation (GMSD), and learned perceptual image patch similarity (LPIPS). Additionally, we qualitatively and quantitatively assessed the proposed framework's individual components through an ablation study and conducted a Likert-based image quality evaluation. Res-SRDiff significantly surpassed most comparison methods in PSNR, SSIM, and GMSD on both datasets, with statistically significant improvements (p-values ≪ 0.05). The model achieved high-fidelity image reconstruction using only four sampling steps, drastically reducing computation time to under one second per slice. In contrast, traditional methods such as TM-DDPM and I2SB required approximately 20 and 38 seconds per slice, respectively.
Qualitative analysis showed that Res-SRDiff effectively preserved fine anatomical details and lesion morphologies. In the Likert study, our method received the highest scores: 4.14 ± 0.77 (brain) and 4.80 ± 0.40 (prostate). Res-SRDiff demonstrates efficiency and accuracy, markedly improving computational speed and image quality. Incorporating residual error shifting into diffusion-based SR facilitates rapid, robust HR image reconstruction, enhancing clinical MRI workflow and advancing medical imaging research. Code is available at https://github.com/mosaf/Res-SRDiff.
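The PSNR metric reported above has a standard closed form; as a minimal sketch (not the authors' code, toy data only):

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB between a reference and test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((data_range ** 2) / mse)

# Toy example: a reference slice and a slightly noisy reconstruction.
rng = np.random.default_rng(0)
hr = rng.uniform(0, 1, (64, 64))
sr = np.clip(hr + rng.normal(0, 0.01, hr.shape), 0, 1)
print(round(psnr(hr, sr, data_range=1.0), 1))
```

Higher PSNR means the reconstruction deviates less from the reference; identical images give infinite PSNR.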

Enhancing Lesion Detection in Inflammatory Myelopathies: A Deep Learning-Reconstructed Double Inversion Recovery MRI Approach.

Fang Q, Yang Q, Wang B, Wen B, Xu G, He J

pubmed logopapers · Jun 3, 2025
The imaging of inflammatory myelopathies has advanced significantly over time, with MRI techniques playing a pivotal role in enhancing lesion detection. However, the impact of deep learning (DL)-based reconstruction on 3D double inversion recovery (DIR) imaging for inflammatory myelopathies remains unassessed. This study aimed to compare the acquisition time, image quality, diagnostic confidence, and lesion detection rates among sagittal T2WI, standard DIR, and DL-reconstructed DIR in patients with inflammatory myelopathies. In this observational study, patients diagnosed with inflammatory myelopathies were recruited between June 2023 and March 2024. Each patient underwent sagittal conventional TSE sequences and standard 3D DIR (T2WI and standard 3D DIR served as references for comparison), followed by an undersampled accelerated double inversion recovery deep learning (DIR<sub>DL</sub>) examination. Three neuroradiologists evaluated the images using a 4-point Likert scale (from 1 to 4) for overall image quality, perceived SNR, sharpness, artifacts, and diagnostic confidence. The acquisition times and lesion detection rates were also compared among the acquisition protocols. A total of 149 participants were evaluated (mean age, 40.6 [SD, 16.8] years; 71 women). The median acquisition time for DIR<sub>DL</sub> was significantly lower than for standard DIR (151 seconds [interquartile range, 148-155 seconds] versus 298 seconds [interquartile range, 288-301 seconds]; <i>P</i> < .001), a 49% time reduction. DIR<sub>DL</sub> images scored higher in overall quality, perceived SNR, and artifact noise reduction (all <i>P</i> < .001). There were no significant differences in sharpness (<i>P</i> = .07) or diagnostic confidence (<i>P</i> = .06) between the standard DIR and DIR<sub>DL</sub> protocols. Additionally, DIR<sub>DL</sub> detected 37% more lesions than T2WI (300 versus 219; <i>P</i> < .001).
DIR<sub>DL</sub> significantly reduces acquisition time and improves image quality compared with standard DIR, without compromising diagnostic confidence. Additionally, DIR<sub>DL</sub> enhances lesion detection in patients with inflammatory myelopathies, making it a valuable tool in clinical practice. These findings underscore the potential for incorporating DIR<sub>DL</sub> into future imaging guidelines.
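The reported percentages follow directly from the timings and lesion counts in the abstract; a quick illustrative check:

```python
# Illustrative check of the reported reductions (values taken from the abstract).
std_dir_s, dirdl_s = 298.0, 151.0
time_reduction = (std_dir_s - dirdl_s) / std_dir_s
print(f"{time_reduction:.0%}")  # 49%

t2wi_lesions, dirdl_lesions = 219, 300
gain = (dirdl_lesions - t2wi_lesions) / t2wi_lesions
print(f"{gain:.0%}")  # 37%
```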

Ultra-High-Resolution Photon-Counting-Detector CT with a Dedicated Denoising Convolutional Neural Network for Enhanced Temporal Bone Imaging.

Chang S, Benson JC, Lane JI, Bruesewitz MR, Swicklik JR, Thorne JE, Koons EK, Carlson ML, McCollough CH, Leng S

pubmed logopapers · Jun 3, 2025
Ultra-high-resolution (UHR) photon-counting-detector (PCD) CT improves image resolution but increases noise, necessitating smoother reconstruction kernels that reduce resolution below the 0.125-mm maximum spatial resolution. To address this issue, a denoising convolutional neural network (CNN) was developed to reduce noise in images reconstructed with the sharpest available kernel while preserving resolution, for enhanced temporal bone visualization. With institutional review board approval, the CNN was trained on 6 patient cases of clinical temporal bone imaging (1885 images) and tested on 20 independent cases using a dual-source PCD-CT (NAEOTOM Alpha). Images were reconstructed using quantum iterative reconstruction at strength 3 (QIR3) with both a clinical routine kernel (Hr84) and the sharpest available head kernel (Hr96). The CNN was applied to images reconstructed with the Hr96 kernel at QIR1. For each case, three series of images (Hr84-QIR3, Hr96-QIR3, and Hr96-CNN) were randomized for review by 2 neuroradiologists, who assessed overall quality and the delineation of the modiolus, stapes footplate, and incudomallear joint. The CNN reduced noise by 80% compared with Hr96-QIR3 and by 50% relative to Hr84-QIR3, while maintaining high resolution. Compared with the conventional method at the same kernel (Hr96-QIR3), Hr96-CNN significantly decreased image noise (from 204.63 to 47.35 HU) and improved the structural similarity index (from 0.72 to 0.99). Hr96-CNN images ranked higher than Hr84-QIR3 and Hr96-QIR3 in overall quality (<i>P</i> < .001). Readers preferred Hr96-CNN for all 3 structures. The proposed CNN significantly reduced image noise in UHR PCD-CT, enabling the use of the sharpest kernel. This combination greatly enhanced diagnostic image quality and anatomic visualization.
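CT image noise is conventionally measured as the standard deviation of HU values in a uniform region; a minimal sketch with synthetic noise at the reported levels (not the study's measurement):

```python
import numpy as np

def image_noise_hu(roi: np.ndarray) -> float:
    """Image noise, measured as the standard deviation of HU in a uniform ROI."""
    return float(np.std(roi.astype(np.float64)))

# Synthetic flat ROIs drawn at the noise levels reported in the abstract.
rng = np.random.default_rng(1)
sharp_kernel = rng.normal(0.0, 204.63, (32, 32))  # Hr96-QIR3-like noise
denoised = rng.normal(0.0, 47.35, (32, 32))       # Hr96-CNN-like noise
reduction = 1 - image_noise_hu(denoised) / image_noise_hu(sharp_kernel)
print(f"noise reduced by ~{reduction:.0%}")
```

With these sample values the measured reduction lands near the ~80% reported for Hr96-CNN versus Hr96-QIR3.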

Open-PMC-18M: A High-Fidelity Large Scale Medical Dataset for Multimodal Representation Learning

Negin Baghbanzadeh, Sajad Ashkezari, Elham Dolatabadi, Arash Afkanpour

arxiv logopreprint · Jun 3, 2025
Compound figures, which are multi-panel composites containing diverse subfigures, are ubiquitous in biomedical literature, yet large-scale subfigure extraction remains largely unaddressed. Prior work on subfigure extraction has been limited in both dataset size and generalizability, leaving a critical open question: How does high-fidelity image-text alignment via large-scale subfigure extraction impact representation learning in vision-language models? We address this gap by introducing a scalable subfigure extraction pipeline based on transformer-based object detection, trained on a synthetic corpus of 500,000 compound figures, and achieving state-of-the-art performance on both ImageCLEF 2016 and synthetic benchmarks. Using this pipeline, we release OPEN-PMC-18M, a large-scale, high-quality biomedical vision-language dataset comprising 18 million clinically relevant subfigure-caption pairs spanning radiology, microscopy, and visible light photography. We train and evaluate vision-language models on our curated datasets and show improved performance across retrieval, zero-shot classification, and robustness benchmarks, outperforming existing baselines. We release our dataset, models, and code to support reproducible benchmarks and further study of biomedical vision-language modeling and representation learning.
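Downstream of detection, subfigure extraction reduces to cropping each detected box out of the compound figure; a minimal sketch with hypothetical detector output (not the released pipeline):

```python
import numpy as np

def crop_subfigures(figure: np.ndarray, boxes: list[tuple[int, int, int, int]]):
    """Crop detected subfigure regions (x0, y0, x1, y1) out of a compound figure."""
    return [figure[y0:y1, x0:x1] for (x0, y0, x1, y1) in boxes]

# A 100x200 compound figure with two side-by-side panels; the boxes stand in
# for a transformer detector's predictions.
compound = np.zeros((100, 200), dtype=np.uint8)
panels = crop_subfigures(compound, [(0, 0, 100, 100), (100, 0, 200, 100)])
print([p.shape for p in panels])  # [(100, 100), (100, 100)]
```

Each crop would then be paired with its portion of the caption to form a subfigure-caption training pair.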

Evaluating the performance and potential bias of predictive models for the detection of transthyretin cardiac amyloidosis

Hourmozdi, J., Easton, N., Benigeri, S., Thomas, J. D., Narang, A., Ouyang, D., Duffy, G., Upton, R., Hawkes, W., Akerman, A., Okwuosa, I., Kline, A., Kho, A. N., Luo, Y., Shah, S. J., Ahmad, F. S.

medrxiv logopreprint · Jun 2, 2025
Background: Delays in the diagnosis of transthyretin amyloid cardiomyopathy (ATTR-CM) contribute to the significant morbidity of the condition, especially in the era of disease-modifying therapies. Screening for ATTR-CM with AI and other algorithms may improve timely diagnosis, but these algorithms have not been directly compared. Objectives: The aim of this study was to compare the performance of four algorithms for ATTR-CM detection in a heart failure population and assess the risk for harms due to model bias. Methods: We identified patients in an integrated health system from 2010-2022 with ATTR-CM and age- and sex-matched them to controls with heart failure to target 5% prevalence. We compared the performance of a claims-based random forest model (Huda et al. model), a regression-based score (Mayo ATTR-CM), and two deep learning echo models (EchoNet-LVH and EchoGo® Amyloidosis). We evaluated for bias using standard fairness metrics. Results: The analytical cohort included 176 confirmed cases of ATTR-CM and 3192 control patients, with 79.2% self-identified as White and 9.0% as Black. The Huda et al. model performed poorly (AUC 0.49). Both deep learning echo models had a higher AUC when compared to the Mayo ATTR-CM Score (EchoNet-LVH 0.88; EchoGo Amyloidosis 0.92; Mayo ATTR-CM Score 0.79; DeLong P<0.001 for both). Bias auditing met fairness criteria for equal opportunity among patients who identified as Black. Conclusions: Deep learning, echo-based models to detect ATTR-CM demonstrated the best overall discrimination when compared to two other models in external validation, with low risk of harms due to racial bias.
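AUC, the discrimination metric compared above, equals the probability that a random case outranks a random control (the Mann-Whitney statistic); an illustrative sketch on invented scores, not the study's data:

```python
import numpy as np

def auc(labels: np.ndarray, scores: np.ndarray) -> float:
    """AUC via the Mann-Whitney U statistic: the probability that a random
    positive sample scores above a random negative one (ties count half)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins / (len(pos) * len(neg)))

# Hypothetical model scores for a few cases (1) and matched controls (0).
y = np.array([1, 1, 1, 0, 0, 0, 0])
s_model = np.array([0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1])
print(round(auc(y, s_model), 2))  # 0.92
```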

Impact of Optic Nerve Tortuosity, Globe Proptosis, and Size on Retinal Ganglion Cell Thickness Across General, Glaucoma, and Myopic Populations.

Chiang CYN, Wang X, Gardiner SK, Buist M, Girard MJA

pubmed logopapers · Jun 2, 2025
The purpose of this study was to investigate the impact of optic nerve tortuosity (ONT) and the interaction of globe proptosis and size on retinal ganglion cell (RGC) thickness, using retinal nerve fiber layer (RNFL) thickness, across general, glaucoma, and myopic populations. This study analyzed 17,940 eyes from the UK Biobank cohort (ID 76442), including 72 glaucomatous and 2475 myopic eyes. Artificial intelligence models were developed to derive RNFL thickness corrected for ocular magnification from 3D optical coherence tomography scans and orbit features from 3D magnetic resonance images, including ONT, globe proptosis, axial length, and a novel feature: the interzygomatic line-to-posterior pole (ILPP) distance, a composite marker of globe proptosis and size. Generalized estimating equation (GEE) models evaluated associations between orbital and retinal features. RNFL thickness was positively correlated with ONT and ILPP distance (r = 0.065, P < 0.001 and r = 0.206, P < 0.001, respectively) in the general population. The correlations trended in the same direction in glaucoma (r = 0.040, P = 0.74 and r = 0.224, P = 0.059) and in myopia (r = 0.069, P < 0.001 and r = 0.100, P < 0.001). GEE models revealed that straighter optic nerves and shorter ILPP distance were predictive of thinner RNFL in all populations. Straighter optic nerves and decreased ILPP distance could cause RNFL thinning, possibly due to greater traction forces. ILPP distance emerged as a potential biomarker of axonal health. These findings underscore the importance of orbit structures in RGC axonal health and warrant further research into orbit biomechanics.
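The reported r values are Pearson correlation coefficients; a minimal sketch on toy vectors (the ILPP and RNFL values below are invented for illustration):

```python
import numpy as np

def pearson_r(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation coefficient between two feature vectors."""
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Toy vectors standing in for ILPP distance (mm) and RNFL thickness (um).
ilpp = np.array([14.0, 15.2, 16.1, 17.5, 18.0])
rnfl = np.array([48.0, 50.0, 51.5, 53.0, 55.0])
print(round(pearson_r(ilpp, rnfl), 3))
```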

Robust Uncertainty-Informed Glaucoma Classification Under Data Shift.

Rashidisabet H, Chan RVP, Leiderman YI, Vajaranant TS, Yi D

pubmed logopapers · Jun 2, 2025
Standard deep learning (DL) models often suffer significant performance degradation on out-of-distribution (OOD) data, where test data differs from training data, a common challenge in medical imaging due to real-world variations. We propose a unified self-censorship framework as an alternative to the standard DL models for glaucoma classification using deep evidential uncertainty quantification. Our approach detects OOD samples at both the dataset and image levels. Dataset-level self-censorship enables users to accept or reject predictions for an entire new dataset based on model uncertainty, whereas image-level self-censorship refrains from making predictions on individual OOD images rather than risking incorrect classifications. We validated our approach across diverse datasets. Our dataset-level self-censorship method outperforms the standard DL model in OOD detection, achieving an average 11.93% higher area under the curve (AUC) across 14 OOD datasets. Similarly, our image-level self-censorship model improves glaucoma classification accuracy by an average of 17.22% across 4 external glaucoma datasets against baselines while censoring 28.25% more data. Our approach addresses the challenge of generalization in standard DL models for glaucoma classification across diverse datasets by selectively withholding predictions when the model is uncertain. This method reduces misclassification errors compared to state-of-the-art baselines, particularly for OOD cases. This study introduces a tunable framework that explores the trade-off between prediction accuracy and data retention in glaucoma prediction. By managing uncertainty in model outputs, the approach lays a foundation for future decision support tools aimed at improving the reliability of automated glaucoma diagnosis.
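Image-level self-censorship amounts to abstaining when uncertainty exceeds a threshold; a minimal sketch using predictive entropy as a stand-in for the paper's evidential uncertainty (toy probabilities, not the authors' model):

```python
import numpy as np

def self_censor(probs: np.ndarray, threshold: float):
    """Abstain on samples whose predictive entropy exceeds `threshold`;
    return (keep_mask, predicted classes for the kept samples)."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    keep = entropy <= threshold
    return keep, probs[keep].argmax(axis=1)

# Three confident samples and one near-uniform (highly uncertain) one.
p = np.array([[0.95, 0.05], [0.10, 0.90], [0.85, 0.15], [0.51, 0.49]])
keep, preds = self_censor(p, threshold=0.5)
print(keep.tolist(), preds.tolist())  # [True, True, True, False] [0, 1, 0]
```

Raising the threshold retains more data at the cost of more risky predictions, which is the accuracy-retention trade-off the framework tunes.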

Slim UNETR++: A lightweight 3D medical image segmentation network for medical image analysis.

Jin J, Yang S, Tong J, Zhang K, Wang Z

pubmed logopapers · Jun 2, 2025
Convolutional neural network (CNN) models, such as U-Net, V-Net, and DeepLab, have achieved remarkable results across various medical imaging modalities, including ultrasound. Additionally, hybrid Transformer-based segmentation methods have shown great potential in medical image analysis. Despite the breakthroughs in feature extraction through self-attention mechanisms, these methods are computationally intensive, especially for three-dimensional medical imaging, posing significant challenges to graphics processing unit (GPU) hardware. Consequently, the demand for lightweight models is increasing. To address this issue, we designed a high-accuracy yet lightweight model that combines the strengths of CNNs and Transformers. We introduce Slim UNEt TRansformers++ (Slim UNETR++), which builds upon Slim UNETR by incorporating Medical ConvNeXt (MedNeXt), Spatial-Channel Attention (SCA), and Efficient Paired-Attention (EPA) modules. This integration leverages the advantages of both CNN and Transformer architectures to enhance model accuracy. The core component of Slim UNETR++ is the Slim UNETR++ block, which facilitates efficient information exchange through a sparse self-attention mechanism and low-cost representation aggregation. We also introduce throughput as a performance metric to quantify data processing speed. Experimental results demonstrate that Slim UNETR++ outperforms other models in terms of accuracy and model size. On the BraTS2021 dataset, Slim UNETR++ achieved a Dice accuracy of 93.12% and a 95% Hausdorff distance (HD95) of 4.23 mm, significantly surpassing mainstream methods such as Swin UNETR.
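Throughput as used here is simply samples processed per unit time; a minimal sketch with a stand-in workload (not the Slim UNETR++ model):

```python
import time
import numpy as np

def throughput(n_volumes: int, seconds: float) -> float:
    """Throughput as volumes processed per second."""
    return n_volumes / seconds

# Time a stand-in "model" (a trivial numpy op) on 8 fake 3D volumes.
volumes = np.zeros((8, 32, 32, 32), dtype=np.float32)
t0 = time.perf_counter()
_ = volumes * 2.0
elapsed = max(time.perf_counter() - t0, 1e-9)  # guard against timer resolution
print(f"{throughput(len(volumes), elapsed):.1f} volumes/s")
```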

Multicycle Dosimetric Behavior and Dose-Effect Relationships in [<sup>177</sup>Lu]Lu-DOTATATE Peptide Receptor Radionuclide Therapy.

Kayal G, Roseland ME, Wang C, Fitzpatrick K, Mirando D, Suresh K, Wong KK, Dewaraja YK

pubmed logopapers · Jun 2, 2025
We investigated pharmacokinetics, dosimetric patterns, and absorbed dose (AD)-effect correlations in [<sup>177</sup>Lu]Lu-DOTATATE peptide receptor radionuclide therapy (PRRT) for metastatic neuroendocrine tumors (NETs) to develop strategies for future personalized dosimetry-guided treatments. <b>Methods:</b> Patients treated with standard [<sup>177</sup>Lu]Lu-DOTATATE PRRT were recruited for serial SPECT/CT imaging. Kidneys were segmented on CT using a deep learning algorithm, and tumors were segmented at each cycle using a SPECT gradient-based tool, guided by radiologist-defined contours on baseline CT/MRI. Dosimetry was performed using an automated workflow that included contour intensity-based SPECT-SPECT registration, generation of Monte Carlo dose-rate maps, and dose-rate fitting. Lesion-level response at first follow-up was evaluated using both radiologic (RECIST and modified RECIST) and [<sup>68</sup>Ga]Ga-DOTATATE PET-based criteria. Kidney toxicity was evaluated based on the estimated glomerular filtration rate (eGFR) at 9 mo after PRRT. <b>Results:</b> Dosimetry was performed after cycle 1 in 30 patients and after all cycles in 22 of 30 patients who completed SPECT/CT imaging after each cycle. Median cumulative tumor (<i>n</i> = 78) AD was 2.2 Gy/GBq (range, 0.1-20.8 Gy/GBq), whereas median kidney AD was 0.44 Gy/GBq (range, 0.25-0.96 Gy/GBq). The tumor-to-kidney AD ratio decreased with each cycle (median, 6.4, 5.7, 4.7, and 3.9 for cycles 1-4) because of a decrease in tumor AD, while kidney AD remained relatively constant. Higher-grade (grade 2) and pancreatic NETs showed a significantly larger drop in AD with each cycle, as well as significantly lower AD and effective half-life (T<sub>eff</sub>), than did low-grade (grade 1) and small intestinal NETs, respectively. T<sub>eff</sub> remained relatively constant with each cycle for both tumors and kidneys. 
Kidney T<sub>eff</sub> and AD were significantly higher in patients with low eGFR than in those with high eGFR. Tumor AD was not significantly associated with response measures. There was no nephrotoxicity higher than grade 2; however, a significant negative association was found in univariate analyses between eGFR at 9 mo and AD to the kidney, which improved in a multivariable model that also adjusted for baseline eGFR (cycle 1 AD, <i>P</i> = 0.020, adjusted <i>R</i> <sup>2</sup> = 0.57; cumulative AD, <i>P</i> = 0.049, adjusted <i>R</i> <sup>2</sup> = 0.65). The association between percentage change in eGFR and AD to the kidney was also significant in univariate analysis and after adjusting for baseline eGFR (cycle 1 AD, <i>P</i> = 0.006, adjusted <i>R</i> <sup>2</sup> = 0.21; cumulative AD, <i>P</i> = 0.019, adjusted <i>R</i> <sup>2</sup> = 0.21). <b>Conclusion:</b> The dosimetric behavior we report over different cycles and for different NET subgroups can be considered when optimizing PRRT to individual patients. The models we present for the relationship between eGFR and AD have potential for clinical use in predicting renal function early in the treatment course. Furthermore, reported pharmacokinetics for patient subgroups allow more appropriate selection of population parameters to be used in protocols with fewer imaging time points that facilitate more widespread adoption of dosimetry.
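The effective half-life T<sub>eff</sub> comes from fitting a mono-exponential to serial activity measurements; a minimal log-linear sketch on synthetic data (the paper's workflow uses Monte Carlo dose-rate maps and dose-rate fitting, not this toy fit):

```python
import numpy as np

def fit_teff(times_h: np.ndarray, activity: np.ndarray):
    """Mono-exponential fit A(t) = A0 * exp(-lambda * t) by log-linear least
    squares; returns (A0, effective half-life in hours)."""
    slope, intercept = np.polyfit(times_h, np.log(activity), 1)
    lam = -slope
    return float(np.exp(intercept)), float(np.log(2) / lam)

# Synthetic time-activity samples with a true Teff of 60 h.
t = np.array([4.0, 24.0, 96.0, 168.0])
a0_true, teff_true = 100.0, 60.0
a = a0_true * np.exp(-np.log(2) / teff_true * t)
a0, teff = fit_teff(t, a)
print(round(a0, 1), round(teff, 1))  # 100.0 60.0
```

The fitted A0 and decay constant then give the time-integrated activity (A0/lambda) that feeds into absorbed-dose estimates.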

Multi-Organ metabolic profiling with [<sup>18</sup>F]F-FDG PET/CT predicts pathological response to neoadjuvant immunochemotherapy in resectable NSCLC.

Ma Q, Yang J, Guo X, Mu W, Tang Y, Li J, Hu S

pubmed logopapers · Jun 2, 2025
To develop and validate a novel nomogram combining multi-organ PET metabolic metrics for major pathological response (MPR) prediction in resectable non-small cell lung cancer (rNSCLC) patients receiving neoadjuvant immunochemotherapy. This retrospective cohort included rNSCLC patients who underwent baseline [<sup>18</sup>F]F-FDG PET/CT prior to neoadjuvant immunochemotherapy at Xiangya Hospital from April 2020 to April 2024. Patients were randomly stratified into training (70%) and validation (30%) cohorts. Using deep learning-based automated segmentation, we quantified metabolic parameters (SUV<sub>mean</sub>, SUV<sub>max</sub>, SUV<sub>peak</sub>, MTV, TLG) and their ratios to the corresponding liver metabolic parameters for primary tumors and nine key organs. Feature selection employed a tripartite approach: univariate analysis, LASSO regression, and random forest optimization. The final multivariable model was translated into a clinically interpretable nomogram, with validation assessing discrimination, calibration, and clinical utility. Among 115 patients (MPR rate: 63.5%, n = 73), five metabolic parameters emerged as predictive biomarkers for MPR: Spleen_SUV<sub>mean</sub>, Colon_SUV<sub>peak</sub>, Spine_TLG, Lesion_TLG, and Spleen-to-Liver SUV<sub>max</sub> ratio. The nomogram demonstrated consistent performance across cohorts (training AUC = 0.78 [95%CI 0.67-0.88]; validation AUC = 0.78 [95%CI 0.62-0.94]), with robust calibration and enhanced clinical net benefit on decision curve analysis. Compared to tumor-only parameters, the multi-organ model showed higher specificity (100% vs. 92%) and positive predictive value (100% vs. 90%) in the validation set, maintaining 76% overall accuracy. This first-reported multi-organ metabolic nomogram noninvasively predicts MPR in rNSCLC patients receiving neoadjuvant immunochemotherapy, outperforming conventional tumor-centric approaches.
By quantifying systemic host-tumor metabolic crosstalk, this tool could help guide personalized therapeutic decisions while mitigating treatment-related risks, representing a paradigm shift towards precision immuno-oncology management.
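Features such as the spleen-to-liver SUV<sub>max</sub> ratio are straightforward to compute from segmented voxel values; a minimal sketch with invented SUVs (not the study's data):

```python
import numpy as np

def suv_max_ratio(organ_suv: np.ndarray, liver_suv: np.ndarray) -> float:
    """Organ-to-liver SUVmax ratio, one of the candidate features above."""
    return float(organ_suv.max() / liver_suv.max())

# Hypothetical voxel SUVs inside spleen and liver segmentations.
spleen = np.array([1.8, 2.1, 2.4])
liver = np.array([2.5, 3.0, 2.8])
print(round(suv_max_ratio(spleen, liver), 2))  # 0.8
```

Normalizing to liver uptake is a common way to reduce inter-patient variability before feeding such features into a nomogram.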