
FIND-Net -- Fourier-Integrated Network with Dictionary Kernels for Metal Artifact Reduction

Farid Tasharofi, Fuxin Fan, Melika Qahqaie, Mareike Thies, Andreas Maier

arXiv preprint · Aug 14, 2025
Metal artifacts, caused by high-density metallic implants in computed tomography (CT) imaging, severely degrade image quality, complicating diagnosis and treatment planning. While existing deep learning algorithms have achieved notable success in Metal Artifact Reduction (MAR), they often struggle to suppress artifacts while preserving structural details. To address this challenge, we propose FIND-Net (Fourier-Integrated Network with Dictionary Kernels), a novel MAR framework that integrates frequency and spatial domain processing to achieve superior artifact suppression and structural preservation. FIND-Net incorporates Fast Fourier Convolution (FFC) layers and trainable Gaussian filtering, treating MAR as a hybrid task operating in both spatial and frequency domains. This approach enhances global contextual understanding and frequency selectivity, effectively reducing artifacts while maintaining anatomical structures. Experiments on synthetic datasets show that FIND-Net achieves statistically significant improvements over state-of-the-art MAR methods, with a 3.07% MAE reduction, 0.18% SSIM increase, and 0.90% PSNR improvement, confirming robustness across varying artifact complexities. Furthermore, evaluations on real-world clinical CT scans confirm FIND-Net's ability to minimize modifications to clean anatomical regions while effectively suppressing metal-induced distortions. These findings highlight FIND-Net's potential for advancing MAR performance, offering superior structural preservation and improved clinical applicability. Code is available at https://github.com/Farid-Tasharofi/FIND-Net
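
As a rough illustration of the hybrid spatial/frequency idea, here is a minimal PyTorch sketch of a Fourier-domain block with a pointwise spectral convolution and a trainable Gaussian filter. Every name and size below is an assumption for illustration, not the authors' FIND-Net implementation (see their repository for the real code).

```python
# Minimal sketch of a Fourier-domain block in the spirit of FIND-Net's
# FFC-plus-trainable-Gaussian idea. Layer names and sizes are illustrative
# assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class SpectralBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # 1x1 conv applied to stacked real/imaginary parts of the spectrum
        self.spectral_conv = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1)
        # Trainable isotropic Gaussian bandwidth (one sigma per block; an assumption)
        self.log_sigma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        spec = torch.fft.rfft2(x, norm="ortho")            # complex spectrum
        feat = torch.cat([spec.real, spec.imag], dim=1)    # to a real tensor
        feat = self.spectral_conv(feat)
        real, imag = feat.chunk(2, dim=1)
        spec = torch.complex(real, imag)
        # Trainable Gaussian low-pass over spatial frequencies
        fy = torch.fft.fftfreq(h, device=x.device).view(-1, 1)
        fx = torch.fft.rfftfreq(w, device=x.device).view(1, -1)
        sigma = self.log_sigma.exp()
        spec = spec * torch.exp(-(fx**2 + fy**2) / (2 * sigma**2))
        return torch.fft.irfft2(spec, s=(h, w), norm="ortho") + x  # residual

x = torch.randn(1, 8, 64, 64)
print(SpectralBlock(8)(x).shape)  # torch.Size([1, 8, 64, 64])
```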

CT-based radiomics and deep learning for the preoperative prediction of peritoneal metastasis in ovarian cancers.

Liu Y, Yin H, Li J, Wang Z, Wang W, Cui S

PubMed paper · Aug 13, 2025
To develop a CT-based deep learning radiomics nomogram (DLRN) for the preoperative prediction of peritoneal metastasis (PM) in patients with ovarian cancer (OC). A total of 296 patients with OC were randomly divided into a training dataset (n = 207) and a test dataset (n = 89). Radiomics features and DL features were extracted from the CT images of each patient: radiomics features from the 3D tumor regions, and DL features from the 2D slice with the largest tumor region of interest (ROI). The least absolute shrinkage and selection operator (LASSO) algorithm was used to select radiomics and DL features, from which the radiomics score (Radscore) and DL score (Deepscore) were calculated. Multivariate logistic regression was employed to construct the clinical model. The important clinical factors and the radiomics and DL features were integrated to build the DLRN. Predictive performance was evaluated using the area under the receiver operating characteristic curve (AUC) and DeLong's test. Nine radiomics features and 10 DL features were selected. Carbohydrate antigen 125 (CA-125) was the independent clinical predictor. In the training dataset, the AUC values of the clinical, radiomics, and DL models were 0.618, 0.842, and 0.860, respectively; in the test dataset, they were 0.591, 0.819, and 0.917. The DLRN outperformed the other models in both the training and test datasets, with AUCs of 0.943 and 0.951, respectively. Decision curve analysis and calibration curves showed that the DLRN provided relatively high clinical benefit in both datasets. The DLRN demonstrated superior performance in predicting preoperative PM in patients with OC. This model offers an accurate, noninvasive tool for preoperative prediction, with substantial clinical potential to inform individualized treatment planning and enable more precise management of OC patients.
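
A hedged scikit-learn sketch of the selection-and-fusion recipe the abstract describes: LASSO picks features, their weighted sums become Radscore and Deepscore, and logistic regression fuses them with CA-125. The data below is synthetic and all variable names are assumptions.

```python
# Sketch of the DLRN pipeline on synthetic stand-in data (not the study's data).
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 296
radiomics = rng.normal(size=(n, 100))   # stand-ins for 3D radiomics features
deep = rng.normal(size=(n, 256))        # stand-ins for 2D-slice DL features
ca125 = rng.normal(size=n)              # CA-125, the independent clinical predictor
y = rng.integers(0, 2, size=n)          # peritoneal-metastasis label

def lasso_score(X, y):
    """LASSO-select features; the fitted linear combination is the score."""
    lasso = LassoCV(cv=5, random_state=0).fit(X, y)
    return X @ lasso.coef_ + lasso.intercept_

radscore = lasso_score(radiomics, y)    # analogous to Radscore
deepscore = lasso_score(deep, y)        # analogous to Deepscore

# Nomogram backbone: multivariate logistic regression on the fused predictors
Z = np.column_stack([radscore, deepscore, ca125])
dlrn = LogisticRegression().fit(Z, y)
print("AUC:", roc_auc_score(y, dlrn.predict_proba(Z)[:, 1]))
```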

SKOOTS: Skeleton oriented object segmentation for mitochondria

Buswinka, C. J., Osgood, R. T., Nitta, H., Indzhykulian, A. A.

bioRxiv preprint · Aug 13, 2025
Segmenting individual instances of mitochondria from imaging datasets can provide rich quantitative information, but is prohibitively time-consuming when done manually, prompting interest in automated algorithms based on deep neural networks. Existing solutions are optimized either for high-resolution three-dimensional imaging, relying on well-defined object boundaries (e.g., whole-neuron segmentation in volumetric electron microscopy datasets), or for low-resolution two-dimensional imaging, boundary-invariant but poorly suited to large 3D objects (e.g., whole-cell segmentation of light microscopy images). Mitochondria in whole-cell 3D electron microscopy datasets often fall in the middle ground: large, yet with ambiguous borders, challenging current segmentation tools. To address this, we developed skeleton-oriented object segmentation (SKOOTS), a novel approach that efficiently segments large, densely packed mitochondria. SKOOTS accurately and efficiently segments mitochondria in previously difficult contexts and can also be applied to other objects in 3D light microscopy datasets. This approach bridges a critical gap between existing segmentation approaches, improving the utility of automated analysis of three-dimensional biomedical imaging data. We demonstrate the utility of SKOOTS by applying it to segment over 15,000 cochlear hair cell mitochondria across experimental conditions in under 2 hours on a consumer-grade PC, enabling downstream morphological analysis that revealed subtle structural changes following aminoglycoside exposure, differences not detectable with analysis approaches currently used in the field.
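
To make the skeleton-oriented idea concrete, here is a toy NumPy sketch of one plausible assignment step: given an instance-labeled skeleton volume and per-voxel offset vectors that point toward "their" skeleton, each foreground voxel inherits the label its offset lands on. All inputs are stand-ins, not SKOOTS's actual network outputs.

```python
# Toy sketch of skeleton-oriented instance assignment (assumed mechanics).
import numpy as np

def assign_to_skeletons(skeleton_labels, offsets, foreground):
    """skeleton_labels: (D,H,W) int, 0 = background, >0 = skeleton instance id.
    offsets: (3,D,H,W) float voxel displacements toward the owning skeleton.
    foreground: (D,H,W) bool semantic mask."""
    d, h, w = skeleton_labels.shape
    zz, yy, xx = np.meshgrid(np.arange(d), np.arange(h), np.arange(w), indexing="ij")
    # Follow each voxel's offset, clamped to the volume bounds
    tz = np.clip(np.rint(zz + offsets[0]), 0, d - 1).astype(int)
    ty = np.clip(np.rint(yy + offsets[1]), 0, h - 1).astype(int)
    tx = np.clip(np.rint(xx + offsets[2]), 0, w - 1).astype(int)
    instances = skeleton_labels[tz, ty, tx]       # label at the landing voxel
    return np.where(foreground, instances, 0)

# Two one-voxel "skeletons"; zero offsets, so only skeleton voxels keep labels
skel = np.zeros((4, 4, 4), int); skel[1, 1, 1] = 1; skel[2, 2, 2] = 2
off = np.zeros((3, 4, 4, 4)); fg = np.ones((4, 4, 4), bool)
print(np.unique(assign_to_skeletons(skel, off, fg)))  # [0 1 2]
```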

Explainable AI Technique in Lung Cancer Detection Using Convolutional Neural Networks

Nishan Rai, Sujan Khatri, Devendra Risal

arXiv preprint · Aug 13, 2025
Early detection of lung cancer is critical to improving survival outcomes. We present a deep learning framework for automated lung cancer screening from chest computed tomography (CT) images with integrated explainability. Using the IQ-OTH/NCCD dataset (1,197 scans across Normal, Benign, and Malignant classes), we evaluate a custom convolutional neural network (CNN) and three fine-tuned transfer learning backbones: DenseNet121, ResNet152, and VGG19. Models are trained with cost-sensitive learning to mitigate class imbalance and evaluated via accuracy, precision, recall, F1-score, and ROC-AUC. While ResNet152 achieved the highest accuracy (97.3%), DenseNet121 provided the best overall balance of precision, recall, and F1 (up to 92%, 90%, and 91%, respectively). We further apply SHapley Additive exPlanations (SHAP) to visualize the evidence contributing to predictions, improving clinical transparency. Results indicate that CNN-based approaches augmented with explainability can provide fast, accurate, and interpretable support for lung cancer screening, particularly in resource-limited settings.
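
The cost-sensitive detail is easy to show in miniature: inverse-frequency class weights passed to the loss so minority classes are not drowned out. The tiny CNN and the class counts below are illustrative assumptions, not the paper's architecture or the dataset's true split.

```python
# Minimal PyTorch sketch of cost-sensitive training via class-weighted loss.
import torch
import torch.nn as nn

class_counts = torch.tensor([500., 150., 547.])  # illustrative Normal/Benign/Malignant counts
weights = class_counts.sum() / (len(class_counts) * class_counts)  # inverse frequency
criterion = nn.CrossEntropyLoss(weight=weights)

model = nn.Sequential(                            # stand-in for the custom CNN
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 3),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 1, 64, 64)                     # fake CT slices
y = torch.randint(0, 3, (8,))
loss = criterion(model(x), y)                     # minority errors weigh more
loss.backward(); opt.step()
print(f"weighted CE loss: {loss.item():.3f}")
# A SHAP pass would follow, e.g. shap.GradientExplainer(model, background_batch).
```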

MInDI-3D: Iterative Deep Learning in 3D for Sparse-view Cone Beam Computed Tomography

Daniel Barco, Marc Stadelmann, Martin Oswald, Ivo Herzig, Lukas Lichtensteiger, Pascal Paysan, Igor Peterlik, Michal Walczak, Bjoern Menze, Frank-Peter Schilling

arXiv preprint · Aug 13, 2025
We present MInDI-3D (Medical Inversion by Direct Iteration in 3D), the first 3D conditional diffusion-based model for real-world sparse-view Cone Beam Computed Tomography (CBCT) artefact removal, aiming to reduce imaging radiation exposure. A key contribution is extending the "InDI" concept from 2D to a full 3D volumetric approach for medical images, implementing an iterative denoising process that refines the CBCT volume directly from sparse-view input. A further contribution is the generation of a large pseudo-CBCT dataset (16,182 volumes) from chest CT volumes of the public CT-RATE dataset to robustly train MInDI-3D. We performed a comprehensive evaluation, including quantitative metrics, scalability analysis, generalisation tests, and a clinical assessment by 11 clinicians. Our results show MInDI-3D's effectiveness, achieving PSNR gains of 12.96 (6.10) dB over uncorrected scans with only 50 projections on the CT-RATE pseudo-CBCT (independent real-world) test sets and enabling an 8x reduction in imaging radiation exposure. We demonstrate its scalability by showing that performance improves with more training data. Importantly, MInDI-3D matches the performance of a 3D U-Net on real-world scans from 16 cancer patients across distortion and task-based metrics. It also generalises to new CBCT scanner geometries. Clinicians rated our model as sufficient for patient positioning across all anatomical sites and found that it preserved lung tumour boundaries well.
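
For readers unfamiliar with InDI-style direct iteration, the sketch below shows the sampling loop in miniature: start from the degraded volume at t = 1 and repeatedly blend in the network's clean-volume prediction via x_{t-d} = (1 - d/t) x_t + (d/t) f(x_t, t). The `restorer` stand-in, step count, and shapes are assumptions, not MInDI-3D itself.

```python
# Hedged sketch of an "InDI"-style direct-iteration sampler in 3D.
import torch

@torch.no_grad()
def indi_sample(restorer, y, steps=20):
    """y: degraded CBCT volume (b, 1, D, H, W). Returns the refined volume."""
    x = y.clone()                                  # t = 1: pure degraded input
    ts = torch.linspace(1.0, 0.0, steps + 1)
    for t, t_next in zip(ts[:-1], ts[1:]):
        delta = t - t_next
        x_hat = restorer(x, t.expand(x.shape[0]))  # predicted clean volume at t
        x = (1 - delta / t) * x + (delta / t) * x_hat
    return x

# Toy stand-in "model" that just shrinks its input, to show the call pattern
restorer = lambda x, t: 0.9 * x
volume = torch.randn(1, 1, 16, 32, 32)            # fake sparse-view CBCT volume
print(indi_sample(restorer, volume, steps=10).shape)
```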

BSA-Net: Boundary-prioritized spatial adaptive network for efficient left atrial segmentation.

Xu F, Tu W, Feng F, Yang J, Gunawardhana M, Gu Y, Huang J, Zhao J

PubMed paper · Aug 13, 2025
Atrial fibrillation, a common cardiac arrhythmia with rapid and irregular atrial electrical activity, requires accurate left atrial segmentation for effective treatment planning. Deep learning methods have recently achieved encouraging success in left atrial segmentation. However, current methodologies depend critically on the assumption that the input always contains a complete, centered left atrium, neglecting the structural incompleteness and boundary discontinuities that arise from random-crop operations during inference. In this paper, we propose BSA-Net, which exploits an adaptive adjustment strategy in both feature position and loss optimization to establish long-range feature relationships and strengthen robust intermediate feature representations in boundary regions. Specifically, we propose a Spatial-adaptive Convolution (SConv) that employs a shuffle operation combined with lightweight convolution to directly establish cross-positional relationships within regions of potential relevance. Moreover, we develop a dual Boundary Prioritized loss, which enhances boundary precision by differentially weighting foreground and background boundaries, thus optimizing complex boundary regions. With these techniques, the proposed method achieves a better speed-accuracy trade-off than current methods. BSA-Net attains Dice scores of 92.55%, 91.42%, and 84.67% on the LA, Utah, and Waikato datasets, respectively, with a mere 2.16 M parameters, approximately 80% fewer than other contemporary state-of-the-art models. Extensive experiments on three benchmark datasets demonstrate that BSA-Net consistently and significantly outperforms existing state-of-the-art methods.
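
A speculative 2D PyTorch sketch of a shuffle-plus-lightweight-conv block in the spirit of SConv: rearrange strided positions into the channel dimension, mix them with a cheap grouped convolution, then restore the layout. The exact shuffle in the paper may differ; everything here is an assumption.

```python
# Hedged sketch of a "spatial-adaptive" convolution (shuffle + lightweight conv).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SConv2d(nn.Module):
    def __init__(self, channels, r=2):
        super().__init__()
        self.r = r
        c = channels * r * r
        # Lightweight: depthwise 3x3 plus pointwise 1x1 on the shuffled tensor
        self.dw = nn.Conv2d(c, c, 3, padding=1, groups=c)
        self.pw = nn.Conv2d(c, c, 1)

    def forward(self, x):
        s = F.pixel_unshuffle(x, self.r)   # strided positions now share channel slots
        s = self.pw(F.relu(self.dw(s)))    # mix across formerly distant positions
        return F.pixel_shuffle(s, self.r) + x  # restore layout, residual connection

x = torch.randn(2, 16, 64, 64)
print(SConv2d(16)(x).shape)  # torch.Size([2, 16, 64, 64])
```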

AST-n: A Fast Sampling Approach for Low-Dose CT Reconstruction using Diffusion Models

Tomás de la Sotta, José M. Saavedra, Héctor Henríquez, Violeta Chang, Aline Xavier

arXiv preprint · Aug 13, 2025
Low-dose CT (LDCT) protocols reduce radiation exposure but increase image noise, compromising diagnostic confidence. Diffusion-based generative models have shown promise for LDCT denoising by learning image priors and performing iterative refinement. In this work, we introduce AST-n, an accelerated inference framework that initiates reverse diffusion from intermediate noise levels, and we integrate high-order ODE solvers within conditioned models to further reduce sampling steps. We evaluate two acceleration paradigms, AST-n sampling and standard scheduling with high-order solvers, on the Low Dose CT Grand Challenge dataset, covering head, abdominal, and chest scans at 10-25% of the standard dose. Conditioned models using only 25 steps (AST-25) achieve a peak signal-to-noise ratio (PSNR) above 38 dB and a structural similarity index (SSIM) above 0.95, closely matching standard baselines while cutting inference time from roughly 16 s to under 1 s per slice. Unconditional sampling suffers substantial quality loss, underscoring the necessity of conditioning. We also assess DDIM inversion, which yields marginal PSNR gains at the cost of doubling inference time, limiting its clinical practicality. Our results demonstrate that AST-n with high-order samplers enables rapid LDCT reconstruction without significant loss of image fidelity, advancing the feasibility of diffusion-based methods in clinical workflows.
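
A hedged sketch of the core acceleration idea: instead of sampling from pure noise at step T, lightly noise the low-dose image to an intermediate step n and run only the tail of the reverse trajectory (deterministic DDIM steps here; the paper additionally uses high-order solvers). The `eps_model` stand-in and the schedule are illustrative, not the trained network.

```python
# Sketch of AST-n-style sampling from an intermediate noise level.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

@torch.no_grad()
def ast_n_sample(eps_model, ldct, n=25):
    """Start at step n (not T) from a noised version of the LDCT image."""
    ab = alphas_bar[n - 1]
    x = ab.sqrt() * ldct + (1 - ab).sqrt() * torch.randn_like(ldct)
    for t in range(n - 1, 0, -1):                 # deterministic DDIM tail
        ab_t, ab_prev = alphas_bar[t], alphas_bar[t - 1]
        eps = eps_model(x, torch.tensor([t]))
        x0 = (x - (1 - ab_t).sqrt() * eps) / ab_t.sqrt()   # predicted clean image
        x = ab_prev.sqrt() * x0 + (1 - ab_prev).sqrt() * eps
    return x

eps_model = lambda x, t: torch.zeros_like(x)      # toy stand-in denoiser
ldct = torch.randn(1, 1, 64, 64)                  # fake low-dose slice
print(ast_n_sample(eps_model, ldct).shape)
```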

Multimodal ensemble machine learning predicts neurological outcome within three hours after out-of-hospital cardiac arrest.

Kawai Y, Yamamoto K, Tsuruta K, Miyazaki K, Asai H, Fukushima H

PubMed paper · Aug 13, 2025
This study aimed to determine whether an ensemble (stacking) model that integrates three independently developed base models can reliably predict patients' neurological outcomes following out-of-hospital cardiac arrest (OHCA) within 3 h of arrival and outperform each individual model. This retrospective study included patients with OHCA (≥ 18 years) admitted directly to Nara Medical University between April 2015 and March 2024 who remained comatose for ≥ 3 h after arrival and had suitable head computed tomography (CT) images. The area under the receiver operating characteristic curve (AUC) and Brier scores were used to evaluate the performance of four models (resuscitation-related background OHCA score factors, bilateral pupil diameter, single-slice head CT within 3 h of arrival, and an ensemble stacked model combining these three) in predicting favourable neurological outcomes at hospital discharge or 1 month, defined as a Cerebral Performance Category scale of 1-2. Among 533 patients, 82 (15%) had favourable outcomes. The OHCA, pupil, and head CT models yielded AUCs of 0.76, 0.65, and 0.68 with Brier scores of 0.11, 0.13, and 0.12, respectively. The ensemble model outperformed the individual models (AUC, 0.82; Brier score, 0.10), supporting its application for early clinical decision-making and optimising resource allocation.
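
A minimal scikit-learn sketch of the stacking design: three base models feed a logistic meta-learner, evaluated by AUC and Brier score. The features are synthetic stand-ins sharing one matrix; in the study each base model sees its own modality (OHCA score factors, pupil diameter, head CT).

```python
# Hedged sketch of a three-base-model stacking ensemble on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split

# ~15% positives, mirroring the 82/533 favourable-outcome rate
X, y = make_classification(n_samples=533, n_features=20, weights=[0.85], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("ohca_score", LogisticRegression(max_iter=1000)),
        ("pupil", LogisticRegression(max_iter=1000)),
        ("head_ct", RandomForestClassifier(random_state=0)),  # stand-in for the CT model
    ],
    final_estimator=LogisticRegression(),
    stack_method="predict_proba",
)
stack.fit(Xtr, ytr)
p = stack.predict_proba(Xte)[:, 1]
print(f"AUC={roc_auc_score(yte, p):.2f}  Brier={brier_score_loss(yte, p):.3f}")
```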

Development and validation of machine learning models to predict vertebral artery injury by C2 pedicle screws.

Ye B, Sun Y, Chen G, Wang B, Meng H, Shan L

PubMed paper · Aug 12, 2025
C2 pedicle screw (C2PS) fixation is widely used in posterior cervical surgery but carries a risk of vertebral artery injury (VAI), a rare yet severe complication. This study aimed to identify risk factors for VAI during C2PS placement and to develop a machine learning (ML)-based predictive model to enhance preoperative risk assessment. Clinical and radiological data from 280 patients undergoing head and neck CT angiography were retrospectively analyzed. Three-dimensional reconstructed images were used to simulate C2PS placement, classifying patients into injury (n = 98) and non-injury (n = 182) groups. Fifteen variables, including patient characteristics and anatomical measurements, were evaluated. Eight ML algorithms were trained (70% training cohort) and validated (30% validation cohort). Model performance was assessed using AUC, sensitivity, specificity, and SHAP (SHapley Additive exPlanations) for interpretability. Six key risk factors were identified: pedicle diameter, high-riding vertebral artery (HRVA), intra-axial vertebral artery (IAVA), vertebral artery diameter (VAD), distance between the transverse foramen and the posterior end of the vertebral body (TFPEVB), and distance between the vertebral artery and the vertebral body (VAVB). The neural network model (NNet) demonstrated the best predictive performance, achieving AUCs of 0.929 (training) and 0.936 (validation). SHAP analysis confirmed these six variables as the primary contributors to VAI risk. This study established an ML-driven predictive model for VAI during C2PS placement, highlighting six critical anatomical and radiological risk factors. Integrating this model into clinical workflows may optimize preoperative planning, reduce complications, and improve surgical outcomes. External validation in multicenter cohorts is warranted to enhance generalizability.
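
A hedged sketch of the winning pipeline's shape: a small neural network on the six anatomical predictors with a 70/30 split, evaluated by AUC. The data and the toy labeling rule are synthetic; the real study compared eight algorithms and explained the NNet with SHAP.

```python
# Sketch of an NNet-style VAI risk model on synthetic stand-in data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
cols = ["pedicle_diameter", "HRVA", "IAVA", "VAD", "TFPEVB", "VAVB"]
X = rng.normal(size=(280, len(cols)))            # 280 patients, 6 risk factors
y = (X[:, 0] < -0.3).astype(int)                 # toy rule: narrow pedicle -> VAI

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
nnet = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
nnet.fit(Xtr, ytr)
print("validation AUC:", round(roc_auc_score(yte, nnet.predict_proba(Xte)[:, 1]), 3))
# SHAP interpretation step (not run here): shap.KernelExplainer(nnet.predict_proba, Xtr)
```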

CRCFound: A Colorectal Cancer CT Image Foundation Model Based on Self-Supervised Learning.

Yang J, Cai D, Liu J, Zhuang Z, Zhao Y, Wang FA, Li C, Hu C, Gai B, Chen Y, Li Y, Wang L, Gao F, Wu X

PubMed paper · Aug 12, 2025
Accurate risk stratification is crucial for determining the optimal treatment plan for patients with colorectal cancer (CRC). However, existing deep learning models perform poorly in the preoperative diagnosis of CRC and exhibit limited generalizability, primarily due to insufficient annotated data. To address these issues, we propose CRCFound, a self-supervised learning-based CT image foundation model for CRC. After pretraining on 5137 unlabeled CRC CT images, CRCFound learns universal feature representations and provides efficient and reliable adaptability for a variety of clinical applications. Comprehensive benchmark tests on six diagnostic tasks and two prognosis tasks validate the performance of the pretrained model. Experimental results demonstrate that CRCFound transfers easily to most CRC tasks and exhibits outstanding performance and generalization ability. Overall, CRCFound addresses the problem of insufficient annotated data and performs well across a wide range of downstream CRC tasks, making it a promising solution for the accurate diagnosis and personalized treatment of CRC patients.
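
The abstract does not spell out the pretraining objective, so the sketch below uses one generic self-supervised recipe (a SimCLR-style contrastive loss between two augmented views) of the kind commonly used to pretrain on unlabeled CT; treat every detail as an assumption rather than CRCFound's actual method.

```python
# Generic contrastive self-supervised pretraining sketch (assumed, not CRCFound's recipe).
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.1):
    """Contrastive loss between two augmented views' embeddings (b, d)."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau                          # cosine similarities
    b = z1.shape[0]
    sim.fill_diagonal_(float("-inf"))              # exclude self-pairs
    # Each view's positive is its counterpart in the other half of the batch
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)

encoder = torch.nn.Sequential(                     # stand-in for the CT encoder
    torch.nn.Flatten(), torch.nn.Linear(32 * 32, 128),
)
x = torch.randn(16, 1, 32, 32)                     # a batch of unlabeled CT slices
view1 = x + 0.1 * torch.randn_like(x)              # two noisy "augmented" views
view2 = x + 0.1 * torch.randn_like(x)
loss = nt_xent(encoder(view1), encoder(view2))
loss.backward()
print(f"contrastive loss: {loss.item():.3f}")
```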
