Luo J, Liu Y, Wang T, Liu T, Li J, Zhang P, Gui Z

Oct 15 2025
Most CNN-based low-dose computed tomography (LDCT) denoising methods achieve some denoising effect, but their interpretability is low due to the black-box nature of neural networks. To address this issue, we propose a novel fully sparse-regularized convolutional sparse coding model (CSC-ST) that integrates interpretable convolutional sparse coding with a CNN-based denoising framework, and design a convolutional neural network (CSCST-Net) to solve the CSC-ST model. Specifically, we develop a generalized sparse transform that enhances conventional transform sparsity, enabling the network to effectively learn and preserve the local sparsity characteristics of the original images. During optimization, we integrate the Alternating Direction Method of Multipliers (ADMM) with gradient descent and introduce adaptive convolutional dictionaries, which allow images to be represented with fewer sparse feature maps and reduce the number of model parameters. Experimental results on the Mayo Clinic dataset show that CSCST-Net outperforms state-of-the-art methods in noise removal, artifact suppression, and texture detail preservation, and offers clear advantages in practical application.
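The optimization scheme named here, ADMM combined with gradient descent for a sparsity-regularized convolutional model, can be illustrated with a minimal single-filter sketch. Everything below is a generic convolutional sparse coding denoiser, not the CSC-ST model itself: the filter `d`, step sizes, and penalty weights are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def soft_threshold(v, tau):
    # Proximal operator of the l1 norm: the sparsifying step in ADMM.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def csc_admm_denoise(x, d, lam=0.1, rho=1.0, n_iter=50, gd_steps=5, lr=0.05):
    """Denoise x by solving min_z 0.5||x - d*z||^2 + lam||z||_1 with ADMM,
    where the smooth z-update is handled by a few gradient-descent steps
    (mirroring the ADMM + gradient descent combination described above)."""
    z = np.zeros_like(x)        # sparse feature map
    v = np.zeros_like(x)        # auxiliary split variable carrying the l1 term
    u = np.zeros_like(x)        # scaled dual variable
    d_flip = d[::-1, ::-1]      # flipped kernel acts as the convolution adjoint
    for _ in range(n_iter):
        for _ in range(gd_steps):           # z-update by gradient descent
            resid = convolve2d(z, d, mode="same") - x
            grad = convolve2d(resid, d_flip, mode="same") + rho * (z - v + u)
            z -= lr * grad
        v = soft_threshold(z + u, lam / rho)  # v-update: closed-form prox
        u += z - v                            # dual ascent
    return convolve2d(v, d, mode="same")      # reconstructed (denoised) image

# Toy usage on a synthetic piecewise-constant image with Gaussian noise.
rng = np.random.default_rng(0)
clean = np.kron(rng.integers(0, 2, (8, 8)).astype(float), np.ones((8, 8)))
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
d = np.outer([1, 2, 1], [1, 2, 1]) / 16.0     # illustrative 3x3 filter
denoised = csc_admm_denoise(noisy, d)
```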

Du S, Shen M, Liu Y, Wei F, Lei Y

Oct 15 2025

Contrast-enhanced computed tomography (CECT) is a critical medical imaging modality, yet acquiring and annotating such datasets remains time-consuming. Generative models show potential for augmenting datasets, but existing methods mainly focus on single-organ CECT with small deformations and struggle to generate diverse data with large deformations. We propose a novel biomechanics-guided model for synthesizing deformed CECT volumes and evaluate the effectiveness of deformation-augmented CECT datasets for downstream tasks.
Approach: 
First, we develop a biomechanics-guided deformable CECT volume synthesis framework that uses deformation fields as input to a conditional generative adversarial network (cGAN), and applies sequential deformations to generate temporally consistent deformed CECT volumes. Second, we propose a module for transition-region generation and contrast adjustment in CECT. Third, we train the deformable synthesis model on liver and kidney CECT datasets and use it for dataset augmentation. The fidelity of the synthesized CECT volumes was verified through qualitative and quantitative tests, and the effectiveness of the augmented datasets was evaluated on downstream tasks, including segmentation and multi-organ deformable image registration.
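A minimal sketch of the central conditioning idea, feeding a deformation field alongside a reference volume into a cGAN generator, might look like the following. The architecture, channel counts, and shapes are illustrative assumptions, not the paper's network:

```python
import torch
import torch.nn as nn

class DeformationConditionedGenerator(nn.Module):
    """Toy 3D generator for a cGAN: concatenates a reference CECT volume
    with a 3-channel displacement field and predicts the deformed volume."""
    def __init__(self, base=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1 + 3, base, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(base, base, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(base, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, ref_volume, deformation):
        # deformation: (B, 3, D, H, W) displacement vectors used as the condition
        return self.net(torch.cat([ref_volume, deformation], dim=1))

gen = DeformationConditionedGenerator()
ref = torch.randn(1, 1, 16, 32, 32)   # reference CECT volume
dvf = torch.randn(1, 3, 16, 32, 32)   # displacement vector field (the condition)
fake = gen(ref, dvf)                  # one synthesized deformed volume
```

Applying a sequence of such deformation fields, as the abstract describes, would then yield a temporally consistent series of deformed volumes.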
Main Results: 
For image fidelity, the mean DSC and SSIM measuring the continuity of the synthesized CECT volumes are 0.838 and 0.988, higher than those of real CT volumes. Our method outperforms existing approaches in comparative experiments. In a radiologist Turing test, specificity and sensitivity were 47.5% and 48.0%, respectively. Comparison between deformed ex vivo porcine liver CT and synthesized CECT shows that the model generates realistic deformed CT. In segmentation, a model trained on the augmented datasets achieves a mean mAP@50 of 0.641, versus 0.399 without augmentation. In deformable image registration, DSC improves by 7% as the number of augmented training frames increases.
Significance: 
The proposed model can synthesize deformable CECT volumes, augmenting dataset diversity and size. The synthesized CECT volumes show good volume continuity and perceptual similarity to real CECT, and the augmented datasets improve performance on downstream tasks.

Zeng W, Yin F, Lei Y, Wu G, Yu J

Oct 15 2025

Functional magnetic resonance imaging (fMRI) is crucial for identifying neurological disorder biomarkers, but current deep learning methods face two key limitations. Template-dependent methods lack inter-subject specificity and generalizability due to fixed anatomical priors, while emerging template-free models often separate spatial and temporal processing, discarding temporal continuity. To address these limitations, we propose a novel axial slice-centric model that jointly models spatiotemporal representations through end-to-end processing of native 4D fMRI data, eliminating template dependency while preserving intrinsic brain activity patterns.
Approach:
Our framework redefines 4D fMRI analysis by decomposing it into 3D spatiotemporal manifolds along the axial axis, enabling joint learning of spatial and temporal features while preserving individualized structural organization. A hierarchical encoder extracts local spatiotemporal interactions within each slice, progressively aggregating information to capture multi-granularity neural patterns. To maintain temporal continuity and computational efficiency, a differentiable TopK operation adaptively selects informative slices and time points, balancing computational demands with long-range temporal dependencies.
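The differentiable TopK operation is the most implementation-specific ingredient here. The paper's exact operator is not given in the abstract, but a common softmax-based relaxation conveys the idea: hard selection becomes soft inclusion weights, so gradients still reach the slice-scoring network. The peeling heuristic and temperature below are assumptions:

```python
import torch

def soft_topk_weights(scores, k, temperature=0.1):
    """Differentiable top-k relaxation: returns soft inclusion weights in
    [0, 1] summing to roughly k, so the scorer can be trained end to end."""
    weights = torch.zeros_like(scores)
    masked = scores.clone()
    for _ in range(k):
        p = torch.softmax(masked / temperature, dim=-1)  # soft pick of one item
        weights = weights + p
        masked = masked - 1e3 * p.detach()  # suppress re-picking the same item
    return weights.clamp(max=1.0)

# Usage: softly keep the 5 most informative of 20 axial slices.
slice_scores = torch.randn(20, requires_grad=True)
w = soft_topk_weights(slice_scores, k=5)
slice_features = torch.randn(20, 64)             # per-slice feature vectors
pooled = (w.unsqueeze(-1) * slice_features).sum(dim=0)
pooled.sum().backward()                          # gradients flow to slice_scores
```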
Main results:
Experimental results on the ADNI dataset (324 subjects) and a private disorder-of-consciousness dataset (164 subjects) demonstrate the effectiveness of our 4D fMRI framework in classifying neurological disorders. Specifically, on the ADNI dataset, our proposed model achieves 97% classification accuracy with over 25% fewer FLOPs than baseline methods. On the private dataset, our model outperforms state-of-the-art approaches by 5% in accuracy. Visualization of slice-level attention maps identifies biomarkers consistent with previous research, demonstrating that our template-free framework can discover biomarkers comparable to those identified by template-dependent methods.
Significance:
Our joint spatiotemporal modeling framework, enabled by axial slice-centric decomposition of 4D fMRI data while preserving temporal continuity, achieves excellent complexity-accuracy trade-offs for brain disorder analysis. Biomarker visualization confirms its template-free capability to identify clinically relevant neural patterns, offering an efficient and interpretable solution for 4D fMRI-based diagnosis.

Dayao MT, Mayer AT, Trevino AE, Bar-Joseph Z

Oct 15 2025
Hematoxylin and eosin (H&E) staining has been a standard in clinical histopathology for many decades but lacks molecular detail. Advances in multiplexed spatial proteomics imaging allow cell types and tissues to be annotated by their expression patterns as well as their morphological features. However, these technologies are at present unavailable in most clinical settings. In this work, we present a machine learning framework that leverages histopathology foundation models and paired H&E and spatial proteomic imaging data to enable enhanced cell type annotation on H&E-only datasets. We trained and evaluated our method on kidney datasets with paired H&E and spatial proteomic imaging data and found that models trained using our methods outperform models trained directly on the imaging data. We also show how our framework can be used to study biological differences between two major kidney diseases.
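The training recipe implied here, foundation-model embeddings of H&E patches supervised by cell types derived from the paired proteomics, reduces to a straightforward supervised classifier. A minimal sketch with synthetic stand-ins follows; the real pipeline would supply embeddings from a histopathology foundation model and labels from the spatial proteomics channels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for illustration only.
rng = np.random.default_rng(1)
n_cells, dim, n_types = 2000, 384, 5
embeddings = rng.standard_normal((n_cells, dim))  # H&E patch embeddings
labels = rng.integers(0, n_types, n_cells)        # proteomics-derived cell types

X_tr, X_te, y_tr, y_te = train_test_split(embeddings, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")  # ~chance on random data
```

At inference time, only the H&E image is needed: embed each cell's patch and apply the trained classifier.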

Kadhim M, Persson E, Haraldsson A, Gustafsson CJ, Nilsson M, Kügele M, Bäck S, Ceberg S

Oct 15 2025
Precise patient positioning and daily anatomical verification are crucial in external beam radiotherapy to ensure accurate dose delivery and minimize harm to healthy tissues. However, current image-guided radiotherapy techniques struggle to balance high-quality volumetric anatomical visualization with rapid low-dose imaging. Reconstructing volumetric images from ultra-sparse X-ray projections therefore holds promise for significantly reducing patient radiation exposure and potentially enabling real-time anatomy verification. Here, we present a novel deep learning-based framework that generates synthetic volumetric cone-beam CT in real time from two orthogonal projection views and a reference planning CT for prostate cancer patients. Our model learns the mapping between 2D and 3D domains and generalizes across patients without retraining. We demonstrate that our framework produces high-fidelity volumetric reconstructions in real time, potentially supporting clinical workflows without hardware modifications. This approach could reduce imaging dose and treatment time while preserving comprehensive anatomical information, offering a pathway to safer, more efficient prostate radiotherapy workflows. The online version contains supplementary material available at 10.1038/s41598-025-23781-7.
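The core mapping, two orthogonal projections plus a planning CT in and a volume out, can be sketched as a small encoder that lifts 2D features into a coarse volume and refines it jointly with the planning CT. All shapes and layers below are illustrative assumptions, not the published architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoViewToVolume(nn.Module):
    """Toy 2D-to-3D sketch: encode concatenated AP/lateral projections,
    reinterpret feature channels as a coarse depth axis, then refine in 3D
    together with the planning CT."""
    def __init__(self, depth_channels=16):
        super().__init__()
        self.enc2d = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, depth_channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.refine3d = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, proj_ap, proj_lat, planning_ct):
        feats = self.enc2d(torch.cat([proj_ap, proj_lat], dim=1))
        coarse = feats.unsqueeze(1)                       # channels -> depth axis
        coarse = F.interpolate(coarse, size=planning_ct.shape[2:])
        return self.refine3d(torch.cat([coarse, planning_ct], dim=1))

model = TwoViewToVolume()
ap, lat = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
plan_ct = torch.randn(1, 1, 32, 64, 64)
synthetic_cbct = model(ap, lat, plan_ct)                  # (1, 1, 32, 64, 64)
```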

Yap A, Bae KT

Oct 15 2025
When reporting on radiology follow-up examinations, radiologists should ensure that follow-up images and reference images are from the same patient to prevent misidentification errors. This is a nontrivial task when accounting for changes in the patient's condition and differences in image acquisition. We therefore developed a system for automatic patient identification from radiographs using convolutional neural networks (CNNs). Using deep metric learning, we trained multiple models to match radiographs of the chest, knees, pelvis, and hands, as well as a model to match chest radiographs across viewpoints (frontal and lateral). All models achieved a true positive rate (TPR) above 0.98 at a false positive rate (FPR) of 0.001, and a rank-1 TPR above 0.96 on internal test datasets. The multi-view chest radiograph CNN maintained a TPR above 0.98 and a rank-1 TPR above 0.97 when matching frontal radiographs with lateral radiographs. Our work demonstrates the potential of radiographs as a biometric modality for subject identification, with quality assurance applications in healthcare institutions.
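The headline metric, TPR at a fixed FPR over same-patient versus different-patient similarity scores, is easy to compute once the embeddings exist. A small sketch with synthetic cosine similarities standing in for the embedded radiograph pairs:

```python
import numpy as np

def tpr_at_fpr(genuine_scores, impostor_scores, target_fpr=0.001):
    """Pick the threshold at which different-patient pairs pass at target_fpr,
    then report the fraction of same-patient pairs above it."""
    thresh = np.quantile(impostor_scores, 1.0 - target_fpr)
    return float(np.mean(genuine_scores >= thresh))

rng = np.random.default_rng(2)
genuine = rng.normal(0.85, 0.05, 5000)    # same-patient pair similarities
impostor = rng.normal(0.10, 0.15, 5000)   # different-patient pair similarities
print(f"TPR @ FPR=0.001: {tpr_at_fpr(genuine, impostor):.3f}")
```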

Ohashi H, Ando H, Fujimoto M, Suzuki W, Sakai K, Amano T

Oct 15 2025
Severe calcified lesions are difficult to assess using conventional coronary computed tomography angiography (CCTA). Artificial intelligence (AI)-driven reconstruction may help address this limitation. A 93-year-old man with angina underwent AI-enhanced high-resolution CCTA using the Aquilion ONE/INSIGHT Edition (Canon Medical Systems). The AI-enhanced imaging clearly revealed severe eccentric calcification from the distal left main trunk to the proximal left anterior descending artery. Intravascular ultrasound confirmed this finding. Percutaneous coronary intervention (PCI) was performed with orbital atherectomy, cutting balloon predilation, and drug-eluting stent implantation, achieving good stent expansion. AI-enhanced CCTA enabled the accurate visualization and measurement of calcified plaques, including their arc and thickness, facilitating the selection of appropriate devices and lesion preparation for PCI. AI-enhanced CCTA provides precise evaluation of severely calcified coronary plaques. It guides PCI strategy by accurately assessing calcium burden and plaque morphology.

Gu Z, Dogra S, Siriruchatanon M, Kneifati-Hayek J, Kang SK

Oct 15 2025
Artificial intelligence (AI) applications for radiology workflow have the potential to improve patient- and health-system-level outcomes through more efficient and accurate diagnosis and clinical decision making. For a variety of time-intensive steps, numerous types of applications are now available, with variably reported measures and degrees of success. The tools we highlight aim to accelerate image acquisition, reduce cognitive and manual burden on radiologists and others involved in the care pathway, improve diagnostic accuracy, and shorten the time to clinical action based on imaging results. Most existing studies have focused on intermediate outcomes, such as task duration or time to the next step in care. In this article, we examine AI applications across the medical imaging exam workflow, review examples of real-world evidence on these tools, and summarize the relevant performance metrics by application type. Beyond these more immediately measurable outcomes, demonstrating benefit to patient health and economic outcomes will require a more integrated, iterative assessment. To evolve beyond early workflow gains, interoperable tools must be tied to measurable downstream impacts, such as reduced disease severity, lower mortality, and shorter hospital stays, although current empirical evaluations remain limited.

Xu L, Yin Y, Wang X, Xu T, Zhang X, Feng T

Oct 15 2025
Childhood trauma has enduring effects on emotional and cognitive functioning, yet its impact on procrastination, particularly from a neurodevelopmental perspective, remains poorly understood. To address this gap, we employed resting-state functional MRI in conjunction with standardized behavioral assessments of childhood trauma, trait anxiety, self-control, and procrastination across two datasets (discovery dataset: n = 760; validation dataset: n = 429). Leveraging advanced predictive analytics, including connectome-based predictive modeling (CPM) and least absolute shrinkage and selection operator (LASSO) regression, we aimed to elucidate the neural basis linking childhood trauma to procrastination. Our behavioral results revealed that childhood trauma was a significant predictor of elevated procrastination tendencies, with this association mediated by increased trait anxiety and reduced self-control. At the neural level, predictive modeling using CPM and LASSO regression demonstrated that functional connectivity within and between the frontoparietal network (FPN), salience network (SAN), visual network (VN), and cerebellum significantly predicted childhood trauma. These patterns likely reflect trauma-related disruptions in higher-order cognitive control (e.g., self-control) and increased affective reactivity (e.g., trait anxiety). Mediation analyses further confirmed that trait anxiety and self-control jointly mediate the relationship between trauma-related neural network connectivity and procrastination. These findings present novel evidence that childhood trauma is associated with procrastination via functional alterations in large-scale neural networks implicated in self-control and emotion regulation, providing critical insights into the long-term behavioral consequences of early-life adversity and informing the development of targeted interventions to reduce procrastination in trauma-exposed individuals.
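Of the two predictive models named, the LASSO branch is the simpler to illustrate: vectorized connectivity edges predict the trauma score while the l1 penalty selects a sparse edge set. The data below are synthetic placeholders, not the study's fMRI features:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_subjects, n_edges = 760, 4950        # e.g. upper triangle of a 100-node matrix
X = rng.standard_normal((n_subjects, n_edges))    # vectorized connectivity
true_w = np.zeros(n_edges)
true_w[:20] = rng.standard_normal(20)             # a few truly predictive edges
y = X @ true_w + rng.standard_normal(n_subjects)  # trauma questionnaire score

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LassoCV(cv=5).fit(X_tr, y_tr)             # sparse edge selection
print(f"selected edges: {np.sum(model.coef_ != 0)}, "
      f"held-out R^2: {model.score(X_te, y_te):.2f}")
```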

Peng CM, Chen CW, Hsieh CH, Cheng YY, Liao CH, Hsieh MF, Lin SC, Liu MC, Liu YJ

Oct 15 2025
Esophageal cancer is a highly aggressive malignancy often diagnosed at an advanced stage, with poor prognosis and high recurrence rates despite curative treatment. Accurate prognostic tools are urgently needed to guide personalized management strategies. Recent research has demonstrated significant potential in integrating quantitative imaging biomarkers, specifically radiomics and sarcopenia, with machine learning (ML) techniques to enhance outcome prediction. This review systematically summarizes six recent studies (2022-2024) exploring integrated ML models that combine sarcopenia and radiomics biomarkers with clinical parameters to predict survival in patients with esophageal and gastroesophageal cancers. Sample sizes ranged from 83 to 243 patients, with studies utilizing various imaging modalities (positron emission tomography/computed tomography and computed tomography) and modeling approaches, including Cox regression, random forest, and light gradient boosting machine (LightGBM). These models incorporated features such as skeletal muscle indices, tumor texture, and shape descriptors. Models that combined clinical data, radiomics, and sarcopenia outperformed those using single modalities. These findings support the utility of multimodal imaging biomarkers in developing robust, individualized prognostic models. However, the retrospective nature of most studies highlights the need for standardization and external validation. This review underscores the potential of multimodal ML-based models for enhancing personalized risk stratification and treatment planning in esophageal cancer.
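To make the multimodal-model idea concrete, here is a minimal Cox proportional hazards sketch combining one feature from each family the review discusses (clinical, sarcopenia, radiomics). It assumes the third-party lifelines package, and the data and column names are illustrative placeholders, not the reviewed studies' variables:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 200
df = pd.DataFrame({
    "age": rng.normal(65, 8, n),                    # clinical
    "skeletal_muscle_index": rng.normal(45, 7, n),  # sarcopenia marker
    "tumor_glcm_entropy": rng.normal(0, 1, n),      # radiomics texture feature
    "time_months": rng.exponential(24, n),          # follow-up time
    "event": rng.integers(0, 2, n),                 # death/recurrence indicator
})
cph = CoxPHFitter().fit(df, duration_col="time_months", event_col="event")
cph.print_summary()   # one hazard ratio per feature, one feature per modality
```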