
Wang S, Liu C, Guo Y, Sang H, Li X, Lin L, Li X, Wu Y, Zhang L, Tian J, Li J, Wang Y

PubMed · Sep 2, 2025
Light-chain cardiac amyloidosis (AL-CA) is a progressive heart disease with a high mortality rate and variable prognosis. The Mayo staging method currently in use stratifies patients into only four stages, highlighting the need for a more individualized prognostic method. We aim to develop a novel deep learning (DL) model for whole-heart analysis of cardiovascular magnetic resonance-derived late gadolinium enhancement (LGE) images to predict individualized prognosis in AL-CA. This study included 394 patients with AL-CA who underwent standardized chemotherapy and had at least one year of follow-up. The approach involved automated segmentation of the heart in LGE images and feature extraction using a Transformer-based DL model. To enhance feature differentiation and mitigate overfitting, a contrastive pretraining strategy was employed to accentuate distinctions between patients with different prognoses while clustering similar cases. Finally, an ensemble learning strategy was used to integrate predictions from 15 models at 15 survival time points into a comprehensive prognostic model. In the testing set of 79 patients, the DL model achieved a C-index of 0.91 and an AUC of 0.95 in predicting 2.6-year survival (HR: 2.67), outperforming the Mayo model (C-index = 0.65, AUC = 0.71). The DL model effectively distinguished patients with the same Mayo stage but different prognoses. Visualization techniques revealed that the model captures complex, high-dimensional prognostic features across multiple cardiac regions, extending beyond the amyloid-affected areas. This fully automated DL model can predict individualized prognosis in AL-CA from LGE images, complementing the currently used Mayo staging method.
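The abstract above reports a C-index of 0.91 for survival prediction. As a reference point (an illustrative sketch on toy data, not the authors' model), Harrell's concordance index can be computed directly from follow-up times, event indicators, and predicted risk scores:

```python
from itertools import combinations

def c_index(times, events, risk_scores):
    """Harrell's concordance index: among comparable patient pairs, the
    fraction where the patient with higher predicted risk has the earlier event."""
    concordant, comparable = 0.0, 0
    for i, j in combinations(range(len(times)), 2):
        if times[i] > times[j]:        # order the pair by follow-up time
            i, j = j, i
        if not events[i]:
            continue                   # earlier time is censored: pair not comparable
        comparable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1.0
        elif risk_scores[i] == risk_scores[j]:
            concordant += 0.5          # tied risk counts as half-concordant
    return concordant / comparable

# Perfectly concordant toy cohort: higher risk -> earlier event
print(c_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1]))  # -> 1.0
```

A value of 0.5 corresponds to random ranking, 1.0 to perfect risk ordering; the sketch omits handling of tied event times.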

Alsubaie M, Perera SM, Gu L, Subasi SB, Andronesi OC, Li X

PubMed · Sep 2, 2025
High-resolution magnetic resonance spectroscopic imaging (MRSI) plays a crucial role in characterizing tumor metabolism and guiding clinical decisions for glioma patients. However, due to inherently low metabolite concentrations and signal-to-noise ratio (SNR) limitations, MRSI data are often acquired at low spatial resolution, hindering accurate visualization of tumor heterogeneity and margins. In this study, we propose a novel deep learning framework based on conditional denoising diffusion probabilistic models for super-resolution reconstruction of MRSI, with a particular focus on mutant isocitrate dehydrogenase (IDH) gliomas. The model progressively transforms noise into high-fidelity metabolite maps through a learned reverse diffusion process, conditioned on low-resolution inputs. Leveraging a Self-Attention UNet backbone, the proposed approach integrates global contextual features and achieves superior detail preservation. On simulated patient data, the proposed method achieved Structural Similarity Index Measure (SSIM) values of 0.956, 0.939, and 0.893; Peak Signal-to-Noise Ratio (PSNR) values of 29.73, 27.84, and 26.39 dB; and Learned Perceptual Image Patch Similarity (LPIPS) values of 0.025, 0.036, and 0.045 for upsampling factors of 2, 4, and 8, respectively, with LPIPS improvements statistically significant compared to all baselines (p < 0.01). We validated the framework on in vivo MRSI from healthy volunteers and glioma patients, where it accurately reconstructed small lesions, preserved critical textural and structural information, and enhanced tumor boundary delineation in metabolic ratio maps, revealing heterogeneity not visible in other approaches.
These results highlight the promise of diffusion-based deep learning models as clinically relevant tools for noninvasive, high-resolution metabolic imaging in glioma and potentially other neurological disorders.
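The conditional reverse-diffusion process described above can be sketched in a few lines. This is a generic DDPM ancestral-sampling step (not the authors' Self-Attention UNet model); the noise predictor is a stand-in lambda, and the low-resolution conditioning image is simply passed to it as an extra argument:

```python
import numpy as np

def ddpm_reverse_step(x_t, t, eps_model, cond, betas, rng):
    """One reverse-diffusion step x_t -> x_{t-1}, conditioned on the
    low-resolution input `cond` via the noise-prediction network."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    eps = eps_model(x_t, t, cond)  # predicted noise at step t
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
    if t == 0:
        return mean                # final step: no noise added
    z = rng.standard_normal(x_t.shape)
    return mean + np.sqrt(betas[t]) * z

# Toy run with a dummy predictor that always outputs zero noise.
rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 10)
x = rng.standard_normal((8, 8))    # sample being denoised toward high-res
lowres = np.zeros((8, 8))          # conditioning low-resolution image
for t in reversed(range(10)):
    x = ddpm_reverse_step(x, t, lambda x_t, t, c: np.zeros_like(x_t), lowres, betas, rng)
print(x.shape)
```

In the paper's setting the dummy predictor would be replaced by the trained Self-Attention UNet, with the low-resolution map typically concatenated to the network input.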

Wang Y, Wang T, Zheng F, Hao W, Hao Q, Zhang W, Yin P, Hong N

PubMed · Sep 2, 2025
Soft tissue sarcomas (STS) are heterogeneous malignancies with high postoperative recurrence rates (33-39%), necessitating improved prognostic tools. This study proposes a fusion model integrating deep transfer learning and MRI radiomics to predict postoperative STS recurrence. Axial T2-weighted fat-suppressed images (T<sub>2</sub>WI) of 803 STS patients from two institutions were retrospectively collected and divided into training (n = 527), internal validation (n = 132), and external validation (n = 144) cohorts. Tumor segmentation was performed using the SegResNet model within the Auto3DSeg framework, and radiomic and deep learning features were extracted. Feature selection employed LASSO regression, and the deep learning radiomic (DLR) model combined the radiomic and deep learning signatures. From these features, nine models were constructed based on three classifiers. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, negative predictive value, and positive predictive value were calculated for performance evaluation. The SegResNet model achieved a Dice coefficient of 0.728 after refinement. Recurrence rates were 22.8% (120/527) in the training, 25.0% (33/132) in the internal validation, and 32.6% (47/144) in the external validation cohorts. The DLR model (ExtraTrees) demonstrated superior performance, achieving an AUC of 0.818 in internal validation and 0.809 in external validation, outperforming the radiomic model (0.710, 0.612) and the deep learning model (0.751, 0.667). Sensitivity and specificity ranged from 0.702 to 0.976 and 0.732 to 0.830, respectively. Decision curve analysis confirmed its superior clinical utility. The DLR model provides a robust, non-invasive tool for preoperative STS recurrence prediction, enabling personalized treatment decisions and postoperative management.
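The evaluation metrics listed in this abstract (sensitivity, specificity, positive and negative predictive value) all derive from the four cells of a binary confusion matrix. A minimal pure-Python sketch, on toy data rather than the study's cohorts:

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV, and NPV from paired binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate (recall)
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value (precision)
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Toy example: 2 recurrences, 2 non-recurrences; one recurrence missed.
print(binary_metrics([1, 1, 0, 0], [1, 0, 0, 0])["sensitivity"])  # -> 0.5
```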

Shao L, Liang C, Yan Y, Zhu H, Jiang X, Bao M, Zang P, Huang X, Zhou H, Nie P, Wang L, Li J, Zhang S, Ren S

PubMed · Sep 2, 2025
Prostate cancer is a leading health concern for men, yet current clinical assessments of tumor aggressiveness rely on invasive procedures that often lead to inconsistencies. There remains a critical need for accurate, noninvasive diagnosis and grading methods. Here we developed a foundation model trained on multiparametric magnetic resonance imaging (MRI) and paired pathology data for noninvasive diagnosis and grading of prostate cancer. Our model, MRI-based Predicted Transformer for Prostate Cancer (MRI-PTPCa), was trained under contrastive learning on nearly 1.3 million image-pathology pairs from over 5,500 patients in discovery, modeling, external and prospective cohorts. During real-world testing, prediction of MRI-PTPCa demonstrated consistency with pathology and superior performance (area under the curve above 0.978; grading accuracy 89.1%) compared with clinical measures and other prediction models. This work introduces a scalable, noninvasive approach to prostate cancer diagnosis and grading, offering a robust tool to support clinical decision-making while reducing reliance on biopsies.
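MRI-PTPCa is trained with contrastive learning on image-pathology pairs. A common objective for such paired-modality training is the InfoNCE loss, where each matched pair is a positive and the other pairs in the batch serve as negatives; the sketch below is a generic illustration, not the authors' implementation:

```python
import numpy as np

def info_nce(img_emb, path_emb, temperature=0.1):
    """InfoNCE over a batch of matched image/pathology embeddings:
    row i of each matrix is a positive pair; all other rows are negatives."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    path = path_emb / np.linalg.norm(path_emb, axis=1, keepdims=True)
    logits = img @ path.T / temperature                   # pairwise similarities
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                   # matched pairs on diagonal

# Perfectly aligned toy embeddings give a near-zero loss.
loss = info_nce(np.eye(4), np.eye(4))
print(loss < 0.01)  # -> True
```

Minimizing this loss pulls each MRI embedding toward its paired pathology embedding while pushing it away from the rest of the batch.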

Khalil MA, Bajger M, Skeats A, Delnooz C, Dwyer A, Lee G

PubMed · Sep 2, 2025
The early and precise diagnosis of stroke plays an important role in treatment planning. Computed tomography (CT) is used as a first-line diagnostic tool for quick diagnosis and to rule out haemorrhage. Diffusion magnetic resonance imaging (MRI) provides superior sensitivity to CT for detecting early acute ischaemia and small lesions; however, its long scan time and limited availability make it infeasible in emergency settings. To address this problem, this study presents a brain mask-guided and fidelity-constrained cycle-consistent generative adversarial network for translating CT images into diffusion MRI images for stroke diagnosis. A brain mask is concatenated with the input CT image and given to the generator to encourage more focus on the critical foreground areas. A fidelity-constrained loss is used to preserve details for better translation results. A publicly available dataset, A Paired CT-MRI Dataset for Ischemic Stroke Segmentation (APIS), is used to train and test the models. The proposed method yields an MSE of 197.45 [95% CI: 180.80, 214.10], PSNR of 25.50 [95% CI: 25.10, 25.92], and SSIM of 88.50 [95% CI: 87.50, 89.50] on the testing set, significantly outperforming approaches based on UNet, cycle-consistent generative adversarial networks (CycleGAN), and attention GANs. Furthermore, an ablation study demonstrates the effectiveness of incorporating the fidelity-constrained loss and brain mask information as a soft guide in translating CT images into diffusion MRI images. The experimental results demonstrate that the proposed approach has the potential to support faster and more precise diagnosis of stroke.
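The PSNR figure reported above is a direct function of the MSE and the image dynamic range. A minimal sketch of the standard definition (the paper's exact dynamic range is not stated, so the value below is illustrative):

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

# Unit-range images differing by exactly 1 everywhere: MSE = 1, PSNR = 0 dB.
print(psnr(np.zeros(4), np.ones(4), data_range=1.0))  # -> 0.0
```

With an 8-bit range, an MSE of 197.45 maps to roughly 25.2 dB, consistent in magnitude with the reported 25.50 dB.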

Xu X, Ba L, Lin L, Song Y, Zhao C, Yao S, Cao H, Chen X, Mu J, Yang L, Feng Y, Wang Y, Wang B, Zheng Z

PubMed · Sep 2, 2025
Colorectal cancer (CRC) ranks as the second deadliest cancer globally, impacting patients' quality of life. Colonoscopy is the primary screening method for detecting adenomas and polyps, crucial for reducing long-term CRC risk, but it misses about 30% of cases. Efforts to improve detection rates include using AI to enhance colonoscopy. This study assesses the effectiveness and accuracy of a real-time AI-assisted polyp detection system during colonoscopy. The study included 390 patients aged 40 to 75 undergoing colonoscopies for either colorectal cancer screening (risk score ≥ 4) or clinical diagnosis. Participants were randomly assigned to an experimental group using software-assisted diagnosis or a control group with physician diagnosis. The software, a medical image processing tool with B/S and MVC architecture, operates on Windows 10 (64-bit) and supports real-time image handling and lesion identification via HDMI, SDI, AV, and DVI outputs from endoscopy devices. Expert evaluations of retrospective video lesions served as the gold standard. Efficacy was assessed by polyp per colonoscopy (PPC), adenoma per colonoscopy (APC), adenoma detection rate (ADR), and polyp detection rate (PDR), while accuracy was measured using sensitivity and specificity against the gold standard. In this multicenter, randomized controlled trial, computer-aided detection (CADe) significantly improved polyp detection rates (PDR), achieving 67.18% in the CADe group versus 56.92% in the control group. The CADe group identified more polyps, especially those 5 mm or smaller (61.03% vs. 56.92%). In addition, the CADe group demonstrated higher specificity (98.44%) and sensitivity (95.19%) in the FAS dataset, and improved sensitivity (95.82% vs. 77.53%) in the PPS dataset, with both groups maintaining 100% specificity. These results suggest that the AI-assisted system enhances PDR accuracy. 
This real-time computer-aided polyp detection system enhances efficacy by boosting adenoma and polyp detection rates, while also achieving high accuracy with excellent sensitivity and specificity.
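The efficacy endpoints in this trial (PPC, APC, PDR, ADR) are simple per-procedure statistics: mean counts per colonoscopy and the fraction of procedures with at least one finding. An illustrative sketch on invented per-procedure counts, not the trial data:

```python
def detection_rates(polyps_per_proc, adenomas_per_proc):
    """PPC/APC: mean findings per colonoscopy; PDR/ADR: fraction of
    procedures with at least one polyp/adenoma detected."""
    n = len(polyps_per_proc)
    return {
        "PPC": sum(polyps_per_proc) / n,
        "APC": sum(adenomas_per_proc) / n,
        "PDR": sum(p > 0 for p in polyps_per_proc) / n,
        "ADR": sum(a > 0 for a in adenomas_per_proc) / n,
    }

# Four hypothetical procedures: polyps found in two, an adenoma in one.
print(detection_rates([2, 0, 1, 0], [1, 0, 0, 0]))
```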

Ensle F, Zecca F, Kerber B, Lohezic M, Wen Y, Kroschke J, Pawlus K, Guggenberger R

PubMed · Sep 2, 2025
3D MR neurography is a useful diagnostic tool in head and neck disorders, but neurographic imaging remains challenging in this region. Optimal sequences for nerve visualization have not yet been established and may also differ between nerves. While deep learning (DL) reconstruction can enhance nerve depiction, particularly at 1.5T, studies in the head and neck are lacking. The purpose of this study was to compare double echo steady-state (DESS) and postcontrast STIR sequences in DL-reconstructed 3D MR neurography of the extraforaminal cranial and spinal nerves at 1.5T. Eighteen consecutive examinations of 18 patients undergoing head-and-neck MRI at 1.5T were retrospectively included (mean age: 51 ± 14 years, 11 women). 3D DESS and postcontrast 3D STIR sequences were obtained as part of the standard protocol, and reconstructed with a prototype DL algorithm. Two blinded readers qualitatively evaluated visualization of the inferior alveolar, lingual, facial, hypoglossal, greater occipital, lesser occipital, and greater auricular nerves, as well as overall image quality, vascular suppression, and artifacts. Additionally, apparent SNR and contrast-to-noise ratios of the inferior alveolar and greater occipital nerve were measured. Visual ratings and quantitative measurements, respectively, were compared between sequences by using Wilcoxon signed-rank test. DESS demonstrated significantly improved visualization of the lesser occipital nerve, greater auricular nerve, and proximal greater occipital nerve (<i>P</i> < .015). Postcontrast STIR showed significantly enhanced visualization of the lingual nerve, hypoglossal nerve, and distal inferior alveolar nerve (<i>P</i> < .001). The facial nerve, proximal inferior alveolar nerve, and distal greater occipital nerve did not demonstrate significant differences in visualization between sequences (<i>P</i> > .08). There was also no significant difference for overall image quality and artifacts. 
Postcontrast STIR achieved superior vascular suppression, reaching statistical significance for 1 reader (<i>P</i> = .039). Quantitatively, there was no significant difference between sequences (<i>P</i> > .05). Our findings suggest that 3D DESS generally provides improved visualization of spinal nerves, while postcontrast 3D STIR facilitates enhanced delineation of extraforaminal cranial nerves.
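The paired sequence comparisons above use the Wilcoxon signed-rank test. Its test statistic W can be sketched in a few lines; this simplified version drops zero differences and does not rank-average ties (a full implementation such as SciPy's handles both):

```python
def wilcoxon_w(x, y):
    """Wilcoxon signed-rank W statistic for paired samples.
    Zero differences are discarded; ties in |difference| are not
    rank-averaged in this sketch."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    ranked = sorted(diffs, key=abs)            # rank by absolute difference
    w_pos = sum(r for r, d in enumerate(ranked, start=1) if d > 0)
    w_neg = sum(r for r, d in enumerate(ranked, start=1) if d < 0)
    return min(w_pos, w_neg)

# All nonzero paired differences positive -> the smaller rank sum is 0.
print(wilcoxon_w([3, 1, 4], [1, 1, 1]))  # -> 0
```

A small W (relative to its null distribution) yields the small P values reported, indicating a systematic difference between the paired ratings.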

Puel U, Boukhzer S, Doyen M, Hossu G, Boubaker F, Frédérique G, Blum A, Teixeira PAG, Eliezer M, Parietti-Winkler C, Gillet R

PubMed · Sep 2, 2025
Conventional CT imaging techniques are ineffective in adequately depicting the stapes. The purpose of this study was to evaluate the ability of high-resolution (HR), ultra-high-resolution (UHR) with and without deep learning reconstruction (DLR), and conebeam (CB)-CT scanners to image the stapes by using micro-CT as a reference. Eleven temporal bone specimens were imaged by using all imaging modalities. Subjective image analysis was performed by grading image quality on a Likert scale, and objective image analysis was performed by taking various measurements of the stapes superstructure and footplate. Image noise and radiation dose were also recorded. The global image quality scores were all worse than micro-CT (<i>P</i> ≤ .01). UHR-CT with and without DLR had the second-best global image quality scores (<i>P</i> > .99), which were both better than CB-CT (<i>P</i> = .01 for both). CB-CT had a better global image quality score than HR-CT (<i>P</i> = .01). Most of the measurements differed between HR-CT and micro-CT (<i>P</i> ≤ .02), but not between UHR-CT with and without DLR, CB-CT, and micro-CT (<i>P</i> > .06). The air noise value of UHR-CT with DLR was not different from CB-CT (<i>P</i> = .49), but HR-CT and UHR-CT without DLR exhibited higher values than UHR-CT with DLR (<i>P</i> ≤ .001). HR-CT and UHR-CT with and without DLR yielded the same effective radiation dose values of 1.23 ± 0.11 (1.13-1.35) mSv, which was 4 times higher than that of CB-CT (0.35 ± 0 mSv, <i>P</i> ≤ .01). UHR-CT with and without DLR offers comparable objective image analysis to CB-CT while providing superior subjective image quality. However, this is achieved at the cost of a higher radiation dose. Both CB-CT and UHR-CT with and without DLR are more effective than HR-CT in objective and subjective image analysis.
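The effective dose values reported above are conventionally estimated from the scanner's dose-length product (DLP) via a region-specific conversion coefficient. The sketch below uses the commonly tabulated adult-head value of about 0.0021 mSv/(mGy·cm); the study's actual DLP and coefficient are not given, so the numbers are purely illustrative:

```python
def effective_dose_msv(dlp_mgy_cm, k=0.0021):
    """Effective dose estimate (mSv) from CT dose-length product (mGy*cm),
    using a region-specific conversion coefficient k (adult head ~0.0021)."""
    return dlp_mgy_cm * k

# Illustrative only: a DLP around 586 mGy*cm would map to ~1.23 mSv.
print(effective_dose_msv(586))
```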

Gabriel A. B. do Nascimento, Vincent Dong, Guilherme J. Cavalcante, Alex Nguyen, Thaís G. do Rêgo, Yuri Malheiros, Telmo M. Silva Filho, Carla R. Zeballos Torrez, James C. Gee, Anne Marie McCarthy, Andrew D. A. Maidment, Bruno Barufaldi

arXiv preprint · Sep 2, 2025
Accurate breast MRI lesion detection is critical for early cancer diagnosis, especially in high-risk populations. We present a classification pipeline that adapts a pretrained foundation model, the Medical Slice Transformer (MST), for breast lesion classification using dynamic contrast-enhanced MRI (DCE-MRI). Leveraging DINOv2-based self-supervised pretraining, MST generates robust per-slice feature embeddings, which are then used to train a Kolmogorov-Arnold Network (KAN) classifier. The KAN provides a flexible and interpretable alternative to conventional convolutional networks by enabling localized nonlinear transformations via adaptive B-spline activations. This enhances the model's ability to differentiate benign from malignant lesions in imbalanced and heterogeneous clinical datasets. Experimental results demonstrate that the MST+KAN pipeline outperforms the baseline MST classifier, achieving AUC = 0.80 ± 0.02 while preserving interpretability through attention-based heatmaps. Our findings highlight the effectiveness of combining foundation model embeddings with advanced classification strategies for building robust and generalizable breast MRI analysis tools.
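The KAN's key idea is that each edge carries a learnable univariate function rather than a fixed activation. A minimal sketch of one such edge, with Gaussian bumps standing in for the B-spline basis the paper uses (the coefficients and centers here are invented, not learned):

```python
import math

def kan_edge(x, coeffs, centers, width=1.0):
    """KAN-style learnable univariate activation: a weighted sum of local
    basis functions (Gaussian bumps substitute for B-splines in this sketch).
    Training would adjust `coeffs` (and possibly `centers`) per edge."""
    return sum(c * math.exp(-((x - m) / width) ** 2)
               for c, m in zip(coeffs, centers))

# One bump centered at 0 with unit weight: the edge output at x=0 is 1.
print(kan_edge(0.0, [1.0], [0.0]))  # -> 1.0
```

Because each edge function is a sum of localized basis terms, its learned shape can be plotted directly, which is the source of the interpretability the abstract highlights.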

Tao Wang, Zhenxuan Zhang, Yuanbo Zhou, Xinlin Zhang, Yuanbin Chen, Tao Tan, Guang Yang, Tong Tong

arXiv preprint · Sep 2, 2025
The effectiveness of convolutional neural networks in medical image segmentation relies on large-scale, high-quality annotations, which are costly and time-consuming to obtain. Even expert-labeled datasets inevitably contain noise arising from subjectivity and coarse delineations, which disrupts feature learning and adversely impacts model performance. To address these challenges, this study proposes a Geometric-Structural Dual-Guided Network (GSD-Net), which integrates geometric and structural cues to improve robustness against noisy annotations. It incorporates a Geometric Distance-Aware module that dynamically adjusts pixel-level weights using geometric features, thereby strengthening supervision in reliable regions while suppressing noise. A Structure-Guided Label Refinement module further refines labels with structural priors, and a Knowledge Transfer module enriches supervision and improves sensitivity to local details. To comprehensively assess its effectiveness, we evaluated GSD-Net on six publicly available datasets: four containing three types of simulated label noise, and two with multi-expert annotations that reflect real-world subjectivity and labeling inconsistencies. Experimental results demonstrate that GSD-Net achieves state-of-the-art performance under noisy annotations, with improvements of 2.52% on Kvasir, 22.76% on Shenzhen, 8.87% on BU-SUC, and 4.59% on BraTS2020 under SR simulated noise. The code is available at https://github.com/ortonwang/GSD-Net.
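The pixel-level reweighting idea behind the Geometric Distance-Aware module can be illustrated with a weighted cross-entropy: pixels assigned low weight (for example, near unreliable label boundaries) contribute less to the loss. This is a generic sketch of weighted supervision, not the paper's exact loss:

```python
import math

def weighted_bce(probs, labels, weights):
    """Pixel-wise weighted binary cross-entropy, normalized by total weight.
    Down-weighting suspect pixels softens the impact of noisy annotations."""
    eps = 1e-7  # clamp to avoid log(0)
    loss = sum(
        w * -(y * math.log(p + eps) + (1 - y) * math.log(1.0 - p + eps))
        for p, y, w in zip(probs, labels, weights)
    )
    return loss / sum(weights)

# Two pixels: the confident interior pixel gets full weight, the
# boundary pixel half weight.
print(round(weighted_bce([0.5, 0.9], [1, 1], [1.0, 0.5]), 3))  # -> 0.497
```

In GSD-Net the weights would come from geometric features of the annotation rather than being fixed constants as in this toy example.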