Page 41 of 141 (1,410 results)

An effective brain stroke diagnosis strategy based on feature extraction and hybrid classifier.

Elsayed MS, Saleh GA, Saleh AI, Khalil AT

PubMed · Aug 14, 2025
Stroke is a leading cause of death and long-term disability worldwide, and early detection remains a significant clinical challenge. This study proposes an Effective Brain Stroke Diagnosis Strategy (EBDS). The hybrid deep learning framework integrates Vision Transformer (ViT) and VGG16 to enable accurate and interpretable stroke detection from CT images. The model was trained and evaluated using a publicly available dataset from Kaggle, achieving impressive results: a test accuracy of 99.6%, a precision of 1.00 for normal cases and 0.98 for stroke cases, a recall of 0.99 for normal cases and 1.00 for stroke cases, and an overall F1-score of 0.99. These results demonstrate the robustness and reliability of the EBDS model, which outperforms several recent state-of-the-art methods. To enhance clinical trust, the model incorporates explainability techniques, such as Grad-CAM and LIME, which provide visual insights into its decision-making process. The EBDS framework is designed for real-time application in emergency settings, offering both high diagnostic performance and interpretability. This work addresses a critical research gap in early brain stroke diagnosis and contributes a scalable, explainable, and clinically relevant solution for medical imaging diagnostics.
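Grad-CAM and LIME both attribute a classifier's decision to image regions but require the trained network internals. The underlying idea can be illustrated with a minimal occlusion-sensitivity map, a simpler relative of those attribution methods; the `score_fn` classifier below is a hypothetical stand-in, not the paper's ViT/VGG16 model:

```python
import numpy as np

def occlusion_map(image, score_fn, patch=8, fill=0.0):
    """Slide an occluding patch over the image and record how much the
    classifier's score drops; large drops mark influential regions."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat
```

Regions whose occlusion erases the stroke score are the ones the model relies on, which is the same question Grad-CAM answers via gradients.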

Radiomics-based machine-learning method to predict extrahepatic metastasis in hepatocellular carcinoma after hepatectomy: a multicenter study.

He Y, Dong B, Hu B, Hao X, Xia N, Yang C, Dong Q, Zhu C

PubMed · Aug 14, 2025
This study investigates the use of CT-based radiomics for predicting extrahepatic metastasis in hepatocellular carcinoma (HCC) following hepatectomy. We analyzed data from 374 patients from two centers (277 in the training cohort and 97 in an external validation cohort). Radiomic features were extracted from contrast-enhanced CT scans. Key features were identified using the least absolute shrinkage and selection operator (LASSO) to compute radiomics scores (radscore) for model development. A clinical model based on risk factors was also created. We developed a combined model integrating both radscore and clinical variables, constructing nomograms for personalized risk assessment. Model performance was compared via the DeLong test, with calibration curves assessing prediction consistency. Decision curve analysis (DCA) was employed to assess the clinical utility and net benefit of the predictive models across different threshold probabilities, thereby evaluating their potential value in guiding clinical decision-making for extrahepatic metastasis. Radscore based on CT was an independent predictor of extrahepatic disease (p < 0.05). The combined model showed high predictive performance with an AUC of 87.2% (95% CI: 81.8%-92.6%) in the training group and 86.0% (95% CI: 69.4%-100%) in the validation group. Predictive performance of the combined model significantly outperformed both the radiomics and clinical models (p < 0.05). DCA showed that the combined model yields a higher net benefit in predicting extrahepatic metastases of HCC than either the clinical or the radiomics model alone. The combined prediction model, utilizing CT radscore alongside clinical risk factors, effectively forecasts extrahepatic metastasis in HCC patients.
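A radscore of this kind is typically the linear combination of the LASSO-selected features weighted by their fitted coefficients. A minimal sketch; the feature names, coefficients, and intercept below are illustrative, not values from the paper:

```python
# Radscore = intercept + sum of (LASSO coefficient x standardized feature value).
# Names and numbers are made up for illustration.
LASSO_COEFS = {
    "glcm_contrast": 0.42,
    "firstorder_entropy": -0.17,
    "shape_sphericity": 0.08,
}
INTERCEPT = -0.25

def radscore(features):
    """Compute a radiomics score from a dict of standardized feature values."""
    return INTERCEPT + sum(c * features[name] for name, c in LASSO_COEFS.items())
```

Features whose coefficients LASSO shrinks to zero simply drop out of the sum, which is what makes the score both sparse and interpretable.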

Automatic segmentation of cone beam CT images using treatment planning CT images in patients with prostate cancer.

Takayama Y, Kadoya N, Yamamoto T, Miyasaka Y, Kusano Y, Kajikawa T, Tomori S, Katsuta Y, Tanaka S, Arai K, Takeda K, Jingu K

PubMed · Aug 14, 2025
Cone-beam computed tomography-based online adaptive radiotherapy (CBCT-based online ART) is currently used in clinical practice; however, deep learning-based segmentation of CBCT images remains challenging. Previous studies generated CBCT datasets for segmentation by adding contours outside clinical practice or synthesizing tissue contrast-enhanced diagnostic images paired with CBCT images. This study aimed to improve CBCT segmentation by matching the treatment planning CT (tpCT) image quality to CBCT images without altering the tpCT image or its contours. A deep-learning-based CBCT segmentation model was trained for the male pelvis using only the tpCT dataset. To bridge the quality gap between tpCT and routine CBCT images, an artificial pseudo-CBCT dataset was generated using Gaussian noise and Fourier domain adaptation (FDA) for 80 tpCT datasets (the hybrid FDA method). A five-fold cross-validation approach was used for model training. For comparison, atlas-based segmentation was performed with a registered tpCT dataset. The Dice similarity coefficient (DSC) assessed contour quality between the model-predicted and reference manual contours. The average DSC values for the clinical target volume, bladder, and rectum using the hybrid FDA method were 0.71 ± 0.08, 0.84 ± 0.08, and 0.78 ± 0.06, respectively. By contrast, the values for the model using plain tpCT were 0.40 ± 0.12, 0.17 ± 0.21, and 0.18 ± 0.14, and for the atlas-based model were 0.66 ± 0.13, 0.59 ± 0.16, and 0.66 ± 0.11, respectively. The segmentation model using the hybrid FDA method demonstrated significantly higher accuracy than models trained on plain tpCT datasets and those using atlas-based segmentation.
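Fourier domain adaptation of the kind used to build the pseudo-CBCT dataset swaps the low-frequency amplitude spectrum of a tpCT slice with that of a CBCT-like slice while keeping the tpCT phase, transferring appearance without moving anatomy. A minimal 2D sketch; the window fraction `beta` is a free parameter here, not the paper's setting:

```python
import numpy as np

def fourier_domain_adaptation(source, target, beta=0.1):
    """Swap the low-frequency amplitude of `source` with that of `target`,
    keeping the source phase (appearance transfer in frequency space)."""
    fs = np.fft.fftshift(np.fft.fft2(source))
    ft = np.fft.fftshift(np.fft.fft2(target))
    amp_s, pha_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)
    h, w = source.shape
    b = int(min(h, w) * beta)  # half-width of the low-frequency window
    cy, cx = h // 2, w // 2
    amp_s[cy - b:cy + b, cx - b:cx + b] = amp_t[cy - b:cy + b, cx - b:cx + b]
    adapted = np.fft.ifft2(np.fft.ifftshift(amp_s * np.exp(1j * pha_s)))
    return np.real(adapted)
```

Because phase carries the structure, the contours drawn on the tpCT remain valid for the adapted image, which is the point of the hybrid FDA method.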

Cross-view Generalized Diffusion Model for Sparse-view CT Reconstruction

Jixiang Chen, Yiqun Lin, Yi Qin, Hualiang Wang, Xiaomeng Li

arXiv preprint · Aug 14, 2025
Sparse-view computed tomography (CT) reduces radiation exposure by subsampling projection views, but conventional reconstruction methods produce severe streak artifacts with undersampled data. While deep-learning-based methods enable single-step artifact suppression, they often produce over-smoothed results under significant sparsity. Though diffusion models improve reconstruction via iterative refinement and generative priors, they require hundreds of sampling steps and struggle with stability in highly sparse regimes. To tackle these concerns, we present the Cross-view Generalized Diffusion Model (CvG-Diff), which reformulates sparse-view CT reconstruction as a generalized diffusion process. Unlike existing diffusion approaches that rely on stochastic Gaussian degradation, CvG-Diff explicitly models image-domain artifacts caused by angular subsampling as a deterministic degradation operator, leveraging correlations across sparse-view CT at different sample rates. To address the inherent artifact propagation and inefficiency of sequential sampling in the generalized diffusion model, we introduce two innovations: Error-Propagating Composite Training (EPCT), which facilitates identifying error-prone regions and suppresses propagated artifacts, and Semantic-Prioritized Dual-Phase Sampling (SPDPS), an adaptive strategy that prioritizes semantic correctness before detail refinement. Together, these innovations enable CvG-Diff to achieve high-quality reconstructions with minimal iterations, achieving 38.34 dB PSNR and 0.9518 SSIM for 18-view CT using only 10 steps on the AAPM-LDCT dataset. Extensive experiments demonstrate the superiority of CvG-Diff over state-of-the-art sparse-view CT reconstruction methods. The code is available at https://github.com/xmed-lab/CvG-Diff.
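A generalized (deterministic-degradation) diffusion model replaces Gaussian noising with a fixed corruption operator and samples by alternating a clean estimate with re-degradation to the next, milder level. A minimal sketch of that sampling loop; the linear blend schedule and `restore_fn` network are illustrative stand-ins, not CvG-Diff's actual streak-artifact operator or its SPDPS schedule:

```python
import numpy as np

def degrade(x0, artifact, t):
    """Deterministic forward process: blend the clean image toward its
    artifact-corrupted version as t goes from 0 to 1 (illustrative schedule)."""
    return (1.0 - t) * x0 + t * artifact

def sample(artifact, restore_fn, steps=10):
    """Reverse sampling: estimate the clean image, then re-degrade to the
    next (less corrupted) level, iterating down to t = 0."""
    x = artifact
    for i in range(steps, 0, -1):
        x0_hat = restore_fn(x, i / steps)              # network's clean estimate
        x = degrade(x0_hat, artifact, (i - 1) / steps)  # step to a milder level
    return x
```

With a perfect restorer this loop recovers the clean image exactly; with a learned one, each re-degradation step corrects part of the previous estimate's error, which is why few steps can suffice.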

FIND-Net -- Fourier-Integrated Network with Dictionary Kernels for Metal Artifact Reduction

Farid Tasharofi, Fuxin Fan, Melika Qahqaie, Mareike Thies, Andreas Maier

arXiv preprint · Aug 14, 2025
Metal artifacts, caused by high-density metallic implants in computed tomography (CT) imaging, severely degrade image quality, complicating diagnosis and treatment planning. While existing deep learning algorithms have achieved notable success in Metal Artifact Reduction (MAR), they often struggle to suppress artifacts while preserving structural details. To address this challenge, we propose FIND-Net (Fourier-Integrated Network with Dictionary Kernels), a novel MAR framework that integrates frequency and spatial domain processing to achieve superior artifact suppression and structural preservation. FIND-Net incorporates Fast Fourier Convolution (FFC) layers and trainable Gaussian filtering, treating MAR as a hybrid task operating in both spatial and frequency domains. This approach enhances global contextual understanding and frequency selectivity, effectively reducing artifacts while maintaining anatomical structures. Experiments on synthetic datasets show that FIND-Net achieves statistically significant improvements over state-of-the-art MAR methods, with a 3.07% MAE reduction, 0.18% SSIM increase, and 0.90% PSNR improvement, confirming robustness across varying artifact complexities. Furthermore, evaluations on real-world clinical CT scans confirm FIND-Net's ability to minimize modifications to clean anatomical regions while effectively suppressing metal-induced distortions. These findings highlight FIND-Net's potential for advancing MAR performance, offering superior structural preservation and improved clinical applicability. Code is available at https://github.com/Farid-Tasharofi/FIND-Net
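The frequency-domain half of such a design can be pictured as weighting the FFT of an image or feature map with a Gaussian. A minimal numpy sketch with a fixed filter; in FIND-Net the Gaussian parameters are trainable and the filtering sits inside FFC layers, which this does not reproduce:

```python
import numpy as np

def gaussian_lowpass(image, sigma):
    """Attenuate high spatial frequencies with a Gaussian weight applied
    in the FFT domain, then transform back."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]   # vertical frequencies in cycles/pixel
    fx = np.fft.fftfreq(w)[None, :]   # horizontal frequencies
    weight = np.exp(-(fx**2 + fy**2) / (2.0 * sigma**2))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * weight))
```

Operating in the frequency domain gives every output pixel a global receptive field in one step, which is the motivation for mixing spectral and spatial processing.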

SimAQ: Mitigating Experimental Artifacts in Soft X-Ray Tomography using Simulated Acquisitions

Jacob Egebjerg, Daniel Wüstner

arXiv preprint · Aug 14, 2025
Soft X-ray tomography (SXT) provides detailed structural insight into whole cells but is hindered by experimental artifacts such as the missing wedge and by limited availability of annotated datasets. We present SimAQ, a simulation pipeline that generates realistic cellular phantoms and applies synthetic artifacts to produce paired noisy volumes, sinograms, and reconstructions. We validate our approach by training a neural network primarily on synthetic data and demonstrate effective few-shot and zero-shot transfer learning on real SXT tomograms. Our model delivers accurate segmentations, enabling quantitative analysis of noisy tomograms without relying on large labeled datasets or complex reconstruction methods.
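The missing-wedge artifact the pipeline simulates comes from tilt angles the instrument cannot reach; it can be mimicked by zeroing the corresponding projections of a sinogram. A minimal sketch, with an illustrative angle convention and tilt limit that are not taken from the paper:

```python
import numpy as np

def apply_missing_wedge(sinogram, angles_deg, max_tilt=60.0):
    """Zero out projections beyond the achievable tilt range (e.g. +/- 60 deg),
    mimicking the missing wedge of soft X-ray tomography acquisitions."""
    keep = np.abs(angles_deg) <= max_tilt
    out = sinogram.copy()
    out[~keep, :] = 0.0   # rows = projection angles, columns = detector bins
    return out
```

Training on volumes reconstructed from such truncated sinograms is what lets a network learn to segment despite the characteristic elongation artifacts.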

Development and validation of deep learning model for detection of obstructive coronary artery disease in patients with acute chest pain: a multi-center study.

Kim JY, Park J, Lee KH, Lee JW, Park J, Kim PK, Han K, Baek SE, Im DJ, Choi BW, Hur J

PubMed · Aug 14, 2025
This study aimed to develop and validate a deep learning (DL) model to detect obstructive coronary artery disease (CAD, ≥ 50% stenosis) in coronary CT angiography (CCTA) among patients presenting to the emergency department (ED) with acute chest pain. The training dataset included 378 patients with acute chest pain who underwent CCTA (10,060 curved multiplanar reconstruction [MPR] images) from a single-center ED between January 2015 and December 2022. The external validation dataset included 298 patients from 3 ED centers between January 2021 and December 2022. A DL model based on You Only Look Once v4, which requires manual preprocessing to extract curved MPR images, was developed using 15 manually preprocessed MPR images per major coronary artery. Model performance was evaluated per artery and per patient. The training dataset included 378 patients (mean age 61.3 ± 12.2 years, 58.2% men); the external dataset included 298 patients (mean age 58.3 ± 13.8 years, 54.6% men). Obstructive CAD prevalence in the external dataset was 27.5% (82/298). The DL model achieved per-artery sensitivity, specificity, positive predictive value, negative predictive value (NPV), and area under the curve (AUC) of 92.7%, 89.9%, 62.6%, 98.5%, and 0.919, respectively; and per-patient values of 93.3%, 80.7%, 67.7%, 96.6%, and 0.871, respectively. The DL model demonstrated high sensitivity and NPV for identifying obstructive CAD in patients with acute chest pain undergoing CCTA, indicating its potential utility in aiding ED physicians in CAD detection.
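The per-artery and per-patient metrics reported above all derive from confusion-matrix counts; a minimal sketch of the arithmetic:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # recall on diseased cases
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence, which is why the 27.5% prevalence in the external cohort matters when interpreting the 98.5% NPV.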

Deep Learning-Based Instance-Level Segmentation of Kidney and Liver Cysts in CT Images of Patients Affected by Polycystic Kidney Disease.

Gregory AV, Khalifa M, Im J, Ramanathan S, Elbarougy DE, Cruz C, Yang H, Denic A, Rule AD, Chebib FT, Dahl NK, Hogan MC, Harris PC, Torres VE, Erickson BJ, Potretzke TA, Kline TL

PubMed · Aug 14, 2025
Total kidney and liver volumes are key image-based biomarkers to predict the severity of kidney and liver phenotype in autosomal dominant polycystic kidney disease (ADPKD). However, MRI-based advanced biomarkers like total cyst number (TCN) and cyst parenchyma surface area (CPSA) have been shown to more accurately assess cyst burden and improve the prediction of disease progression. The main aim of this study is to extend the calculation of advanced biomarkers to other imaging modalities; thus, we propose a fully automated model to segment kidney and liver cysts in CT images. Abdominal CTs of ADPKD patients were gathered retrospectively between 2001 and 2018. A 3D deep-learning method using the nnU-Net architecture was trained to learn cyst edges-cores and the non-cystic kidney/liver parenchyma. Separate segmentation models were trained for kidney cysts in contrast-enhanced CTs and liver cysts in non-contrast CTs using an active learning approach. Two experienced research fellows manually generated the reference standard segmentations, which were reviewed by an expert radiologist for accuracy. Two hundred CT scans from 148 patients (mean age, 51.2 ± 14.1 years; 48% male) were utilized for model training (80%) and testing (20%). In the test set, both models showed good agreement with the reference standard segmentations, similar to the agreement between two independent human readers (model vs. reader: TCN kidney/liver r = 0.96/0.97 and CPSA kidney r = 0.98; inter-reader: TCN kidney/liver r = 0.96/0.98 and CPSA kidney r = 0.99). Our study demonstrates that automated models can segment kidney and liver cysts accurately in CT scans of patients with ADPKD.
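Agreement between predicted and reference masks in segmentation studies like this one is commonly summarized by the Dice similarity coefficient; a minimal sketch:

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```

For instance-level cyst evaluation, a per-cyst overlap criterion is additionally needed to count matched instances (and hence TCN), which a single volume-wise Dice does not capture.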

Lung-DDPM: Semantic Layout-guided Diffusion Models for Thoracic CT Image Synthesis.

Jiang Y, Lemarechal Y, Bafaro J, Abi-Rjeile J, Joubert P, Despres P, Manem V

PubMed · Aug 14, 2025
With the rapid development of artificial intelligence (AI), AI-assisted medical imaging analysis demonstrates remarkable performance in early lung cancer screening. However, the costly annotation process and privacy concerns limit the construction of large-scale medical datasets, hampering the further application of AI in healthcare. To address the data scarcity in lung cancer screening, we propose Lung-DDPM, a thoracic CT image synthesis approach that effectively generates high-fidelity 3D synthetic CT images, which prove helpful in downstream lung nodule segmentation tasks. Our method is based on semantic layout-guided denoising diffusion probabilistic models (DDPM), enabling anatomically reasonable, seamless, and consistent sample generation even from incomplete semantic layouts. Our results suggest that the proposed method outperforms other state-of-the-art (SOTA) generative models in image quality evaluation and downstream lung nodule segmentation tasks. Specifically, Lung-DDPM achieved superior performance on our large validation cohort, with a Fréchet inception distance (FID) of 0.0047, maximum mean discrepancy (MMD) of 0.0070, and mean squared error (MSE) of 0.0024. These results were 7.4×, 3.1×, and 29.5× better than the second-best competitors, respectively. Furthermore, the lung nodule segmentation model, trained on a dataset combining real and Lung-DDPM-generated synthetic samples, attained a Dice Coefficient (Dice) of 0.3914 and sensitivity of 0.4393. This represents 8.8% and 18.6% improvements in Dice and sensitivity compared to the model trained solely on real samples. The experimental results highlight Lung-DDPM's potential for a broader range of medical imaging applications, such as general tumor segmentation, cancer survival estimation, and risk prediction. The code and pretrained models are available at https://github.com/Manem-Lab/Lung-DDPM/.
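Maximum mean discrepancy, one of the fidelity metrics above, compares two samples through a kernel: it is near zero when the samples come from the same distribution and grows as they diverge. A minimal Gaussian-kernel sketch; the bandwidth and the biased estimator form are illustrative choices, not the paper's exact protocol:

```python
import numpy as np

def mmd_gaussian(x, y, sigma=1.0):
    """Biased MMD^2 estimate between samples x and y (rows = observations)
    under a Gaussian RBF kernel."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma**2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()
```

In practice the samples would be feature embeddings of real and synthetic CT volumes rather than raw voxels, as is also the case for FID.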

Data-Driven Abdominal Phenotypes of Type 2 Diabetes in Lean, Overweight, and Obese Cohorts

Lucas W. Remedios, Chloe Choe, Trent M. Schwartz, Dingjie Su, Gaurav Rudravaram, Chenyu Gao, Aravind R. Krishnan, Adam M. Saunders, Michael E. Kim, Shunxing Bao, Alvin C. Powers, Bennett A. Landman, John Virostko

arXiv preprint · Aug 14, 2025
Purpose: Although elevated BMI is a well-known risk factor for type 2 diabetes, the disease's presence in some lean adults and absence in others with obesity suggests that detailed body composition may uncover abdominal phenotypes of type 2 diabetes. With AI, we can now extract detailed measurements of size, shape, and fat content from abdominal structures in 3D clinical imaging at scale. This creates an opportunity to empirically define body composition signatures linked to type 2 diabetes risk and protection using large-scale clinical data. Approach: To uncover BMI-specific diabetic abdominal patterns from clinical CT, we applied our design four times: once on the full cohort (n = 1,728) and once each on the lean (n = 497), overweight (n = 611), and obese (n = 620) subgroups. Briefly, our experimental design transforms abdominal scans into collections of explainable measurements through segmentation, classifies type 2 diabetes through a cross-validated random forest, measures how features contribute to model-estimated risk or protection through SHAP analysis, groups scans by shared model decision patterns (clustering from SHAP), and links back to anatomical differences (classification). Results: The random forests achieved mean AUCs of 0.72-0.74. There were shared type 2 diabetes signatures in each group: fatty skeletal muscle, older age, greater visceral and subcutaneous fat, and a smaller or fat-laden pancreas. Univariate logistic regression confirmed the direction of 14-18 of the top 20 predictors within each subgroup (p < 0.05). Conclusions: Our findings suggest that abdominal drivers of type 2 diabetes may be consistent across weight classes.
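SHAP ranks predictors by how much each measurement moves the model's estimated risk. The simpler, related idea of permutation importance can be sketched without any ML library; the toy rows and model below are illustrative, not the paper's random forest or features:

```python
import random

def permutation_importance(model, rows, labels, n_features, seed=0):
    """Drop in accuracy after shuffling one feature column at a time:
    a crude relative of SHAP for ranking predictors."""
    rng = random.Random(seed)
    acc = lambda rs: sum(model(r) == y for r, y in zip(rs, labels)) / len(labels)
    base = acc(rows)
    importances = []
    for j in range(n_features):
        col = [r[j] for r in rows]
        rng.shuffle(col)  # break the link between feature j and the label
        permuted = [r[:j] + (v,) + r[j + 1:] for r, v in zip(rows, col)]
        importances.append(base - acc(permuted))
    return importances
```

Unlike permutation importance, SHAP additionally yields a signed per-scan attribution, which is what makes the clustering-by-decision-pattern step in this design possible.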
