Page 25 of 1411403 results

End-to-end deep learning model with multi-channel and attention mechanisms for multi-class diagnosis in CT-T staging of advanced gastric cancer.

Liu B, Jiang P, Wang Z, Wang X, Wang Z, Peng C, Liu Z, Lu C, Pan D, Shan X

pubmed · Sep 3 2025
Homogeneous AI assessment is required for CT-T staging of gastric cancer. This study aimed to construct an end-to-end CT-based deep learning (DL) model for tumor T-staging in advanced gastric cancer. A retrospective study was conducted on 460 presurgical CT patients with advanced gastric cancer between 2011 and 2024. A three-dimensional (3D)-Convolution (Conv)-UNet-based automatic segmentation model was employed to segment tumors, and a SmallFocusNet-based ternary classification model was built for CT-T staging. Finally, these models were integrated to create an end-to-end DL model. The segmentation model's performance was assessed using the Dice similarity coefficient (DSC), Intersection over Union (IoU), and 95% Hausdorff Distance (HD_95), while the classification model's performance was measured with the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and F1-score. Eventually, the end-to-end DL model was compared with a radiologist using the McNemar test. The data were divided into Dataset 1 (423 cases for the training and test sets; mean age, 65.0 years ± 9.46 [SD]) and Dataset 2 (37 cases for the independent validation set; mean age, 68.8 years ± 9.28 [SD]). For the segmentation task, the model achieved a DSC of 0.860 ± 0.065 and an IoU of 0.760 ± 0.096 in the test set of Dataset 1, and a DSC of 0.870 ± 0.164 and an IoU of 0.793 ± 0.168 in Dataset 2. For the classification task, the model demonstrated a macro-average AUC of 0.882 (95% CI 0.812-0.926) and an average sensitivity of 76.9% (95% CI 67.6%-85.3%) in the test set of Dataset 1, and a macro-average AUC of 0.862 (95% CI 0.723-0.942) and an average sensitivity of 76.3% (95% CI 59.8%-90.0%) in Dataset 2. Meanwhile, the DL model's performance was better than that of the radiologist (accuracy, 91.9% vs. 82.1%; P = 0.007). The end-to-end DL model for CT-T staging is highly accurate and consistent in pre-treatment staging of advanced gastric cancer.
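The segmentation metrics reported above have compact definitions. As a minimal illustration (not the authors' code), Dice and IoU can be computed on binary masks represented as sets of voxel coordinates:

```python
def dice_and_iou(pred, truth):
    """Dice similarity coefficient and Intersection over Union for two
    binary masks, each given as a set of voxel coordinates."""
    inter = len(pred & truth)
    dice = 2 * inter / (len(pred) + len(truth))
    iou = inter / len(pred | truth)
    return dice, iou

# Toy 1-D "masks": predicted voxels 2..7, ground-truth voxels 4..9
pred = set(range(2, 8))
truth = set(range(4, 10))
dice, iou = dice_and_iou(pred, truth)  # intersection = 4, union = 8
```

Here Dice = 2·4/(6+6) ≈ 0.667 and IoU = 4/8 = 0.5, illustrating why Dice is always at least as large as IoU on the same pair of masks.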

Disentangled deep learning method for interior tomographic reconstruction of low-dose X-ray CT.

Chen C, Zhang L, Gao H, Wang Z, Xing Y, Chen Z

pubmed · Sep 3 2025
Objective
Low-dose interior tomography integrates low-dose CT (LDCT) with region-of-interest (ROI) imaging, and finds wide application in radiation dose reduction and high-resolution imaging. However, the combined effects of noise and data truncation pose great challenges for accurate tomographic reconstruction. This study aims to develop a novel reconstruction framework that achieves high-quality ROI reconstruction and efficiently extends the recoverable region, providing an innovative solution to these coupled ill-posed problems.
Approach
We conducted a comprehensive analysis of projection data composition and angular sampling patterns in low-dose interior tomography. Based on this analysis, we proposed two novel deep learning-based reconstruction pipelines: (1) Deep Projection Extraction-based Reconstruction (DPER) that focuses on ROI reconstruction by disentangling and extracting noise and background projection contributions using a dual-domain deep neural network; and (2) DPER with Progressive extension (DPER-Pro) that enhances DPER by a progressive "coarse-to-fine" strategy for missing data compensation, enabling simultaneous ROI reconstruction and extension of recoverable regions. The proposed methods were rigorously evaluated through extensive experiments on simulated torso datasets and real CT scans of a torso phantom.
Main Results
The experimental results demonstrated that DPER effectively handles the coupled ill-posed problem and achieves high-quality ROI reconstructions by accurately extracting noise and background projections. DPER-Pro extends the recoverable region while preserving ROI image quality by leveraging disentangled projection components and angular sampling patterns. Both methods outperform competing approaches in reconstructing reliable structures, enhancing generalization, and mitigating noise and truncation artifacts.
Significance
This work presents a novel decoupled deep learning framework for low-dose interior tomography that provides a robust and effective solution to the challenges posed by noise and truncated projections. The proposed methods significantly improve ROI reconstruction quality while efficiently recovering structural information in exterior regions, offering a promising pathway for advancing low-dose ROI imaging across a wide range of applications.
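The coupled ill-posedness described above can be mimicked with a toy forward model (an illustrative sketch with made-up values, not the paper's simulation): interior truncation discards detector bins whose rays miss the ROI, and dose reduction inflates per-bin noise.

```python
import random

def simulate_interior_projection(full_proj, roi_start, roi_stop,
                                 dose_scale=0.1, seed=0):
    """Toy low-dose interior-scan forward model: keep only the detector
    bins that intersect the ROI (data truncation), then perturb each
    remaining bin with noise that grows as the dose falls."""
    rng = random.Random(seed)
    truncated = full_proj[roi_start:roi_stop]   # interior truncation
    return [p + rng.gauss(0.0, dose_scale * abs(p)) for p in truncated]

# A smooth 9-bin projection; only bins 2..6 intersect the ROI
full_proj = [0.0, 1.0, 3.0, 6.0, 7.0, 6.0, 3.0, 1.0, 0.0]
noisy_roi = simulate_interior_projection(full_proj, 2, 7)
```

A reconstruction method then has to undo both corruptions at once, which is the coupled problem DPER addresses by disentangling noise and background contributions in the projection domain.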

Convolutional neural network application for automated lung cancer detection on chest CT using Google AI Studio.

Aljneibi Z, Almenhali S, Lanca L

pubmed · Sep 3 2025
This study aimed to evaluate the diagnostic performance of an artificial intelligence (AI)-enhanced model for detecting lung cancer on computed tomography (CT) images of the chest, assessing diagnostic accuracy, sensitivity, specificity, and interpretative consistency across normal, benign, and malignant cases. An exploratory analysis was performed using the publicly available IQ-OTH/NCCD dataset, comprising 110 CT cases (55 normal, 15 benign, 40 malignant). A pre-trained convolutional neural network in Google AI Studio was fine-tuned using 25 training images and tested on a separate image from each case. Quantitative evaluation of diagnostic accuracy and qualitative content analysis of AI-generated reports were conducted to assess diagnostic patterns and interpretative behavior. The AI model achieved an overall accuracy of 75.5%, with a sensitivity of 74.5% and specificity of 76.4%. The area under the ROC curve (AUC) for all cases was 0.824 (95% CI: 0.745-0.897), indicating strong discriminative power. Malignant cases had the highest classification performance (AUC = 0.902), while benign cases were more challenging to classify (AUC = 0.615). Qualitative analysis showed that the AI used consistent radiological terminology but demonstrated oversensitivity to ground-glass opacities, contributing to false positives in non-malignant cases. The AI model showed promising diagnostic potential, particularly in identifying malignancies. However, specificity limitations and interpretative errors in benign and normal cases underscore the need for human oversight and continued model refinement. AI-enhanced CT interpretation can improve efficiency in high-volume settings but should serve as a decision-support tool rather than a replacement for expert image review.
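Sensitivity, specificity, and accuracy all follow from a 2×2 confusion matrix. As a sketch (the counts below are a hypothetical back-calculation assuming benign and malignant cases count as "positive" and normal as "negative"; the study does not publish its confusion matrix):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Basic diagnostic performance measures from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)              # true-positive rate
    specificity = tn / (tn + fp)              # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts on 110 cases (55 abnormal, 55 normal) that
# reproduce the reported ~74.5% / ~76.4% / ~75.5% figures
sens, spec, acc = diagnostic_metrics(tp=41, fp=13, tn=42, fn=14)
```

With these assumed counts, sens = 41/55 ≈ 0.745, spec = 42/55 ≈ 0.764, and acc = 83/110 ≈ 0.755, matching the abstract's reported values.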

Fully-Guided Placement of Dental Implants Utilizing Nasopalatine Canal Fixation in a Novel Rotational Path Surgical Template Design: A Retrospective Case Series.

Ganz SD

pubmed · Sep 3 2025
Precise implant placement in the anterior and posterior maxilla often presents challenges due to variable bone and soft tissue anatomy. Many clinicians elect a freehand surgical approach because conventional surgical guides may not always be easy to design, fabricate, or utilize. Guided surgery has been proven to have advantages over freehand surgical protocols; therefore, the present study proposed utilizing the nasopalatine canal (NPC) as an anatomical reference and point of fixation for a novel rotational path surgical template during computer-aided implant surgery (CAIS). The present digital workflow combined artificial intelligence (AI)-facilitated cone beam computed tomography (CBCT) software bone segmentation of the maxillary arch to assess the NPC and surrounding hard tissues and to design and fabricate static surgical guides for precise implant placement. After rotational engagement of the maxillary buccal undercuts, each novel surgical guide incorporated the NPC for fixation with a single pin to achieve initial stability. Twenty-two consecutive patients (7 fully and 15 partially edentulous) requiring maxillary reconstruction received 123 implants placed with a fully guided surgical protocol to complete 4 overdenture and 18 full-arch fixed restorations. Twelve patients required extensive maxillary bone augmentation before implant placement. Thirteen patients required delayed loading based on bone density, and 9 patients were restoratively loaded within 24 to 96 hours post-surgery, accomplished with the use of photogrammetry for the fabrication of 3D-printed restorations. The initial implant success rate was 98.37%, and the initial prosthetic success rate was 100%. The use of the NPC for fixation of surgical guides did not result in any neurovascular post-operative complications. The novel template concept can improve surgical outcomes using a bone-borne template design for implant-supported rehabilitation of the partially and fully edentulous maxillary arch.
This preliminary case series confirmed controlled placement accuracy with limited risk of neurovascular complications for full-arch overdenture and fixed restorations. The NPC is a vital maxillary anatomic landmark for implant planning, with an expanded role in the stabilization of novel surgical guide designs due to advancements in AI bone segmentation.

Evaluation of Stapes Image Quality with Ultra-High-Resolution CT in Comparison with Conebeam CT and High-Resolution CT in Cadaveric Heads.

Puel U, Boukhzer S, Doyen M, Hossu G, Boubaker F, Frédérique G, Blum A, Teixeira PAG, Eliezer M, Parietti-Winkler C, Gillet R

pubmed · Sep 2 2025
Conventional CT imaging techniques are ineffective in adequately depicting the stapes. The purpose of this study was to evaluate the ability of high-resolution (HR), ultra-high-resolution (UHR) with and without deep learning reconstruction (DLR), and conebeam (CB)-CT scanners to image the stapes by using micro-CT as a reference. Eleven temporal bone specimens were imaged by using all imaging modalities. Subjective image analysis was performed by grading image quality on a Likert scale, and objective image analysis was performed by taking various measurements of the stapes superstructure and footplate. Image noise and radiation dose were also recorded. The global image quality scores were all worse than micro-CT (P ≤ .01). UHR-CT with and without DLR had the second-best global image quality scores (P > .99), which were both better than CB-CT (P = .01 for both). CB-CT had a better global image quality score than HR-CT (P = .01). Most of the measurements differed between HR-CT and micro-CT (P ≤ .02), but not between UHR-CT with and without DLR, CB-CT, and micro-CT (P > .06). The air noise value of UHR-CT with DLR was not different from CB-CT (P = .49), but HR-CT and UHR-CT without DLR exhibited higher values than UHR-CT with DLR (P ≤ .001). HR-CT and UHR-CT with and without DLR yielded the same effective radiation dose values of 1.23 ± 0.11 (1.13-1.35) mSv, which was 4 times higher than that of CB-CT (0.35 ± 0 mSv, P ≤ .01). UHR-CT with and without DLR offers comparable objective image analysis to CB-CT while providing superior subjective image quality. However, this is achieved at the cost of a higher radiation dose. Both CB-CT and UHR-CT with and without DLR are more effective than HR-CT in objective and subjective image analysis.

Synthetic data generation with Worley-Perlin diffusion for robust subarachnoid hemorrhage detection in imbalanced CT Datasets.

Lu Z, Hu T, Oda M, Fuse Y, Saito R, Jinzaki M, Mori K

pubmed · Sep 2 2025
In this paper, we propose a novel generative model to produce high-quality subarachnoid hemorrhage (SAH) samples, enhancing SAH CT detection performance on imbalanced datasets. Previous methods, such as cost-sensitive learning and earlier diffusion models, suffer from overfitting or noise-induced distortion, limiting their effectiveness. Accurate SAH sample generation is crucial for better detection. We propose the Worley-Perlin Diffusion Model (WPDM), leveraging Worley-Perlin noise to synthesize diverse, high-quality SAH images. WPDM addresses the limitations of Gaussian noise (homogeneity) and Simplex noise (distortion), enhancing robustness for generating SAH images. Additionally, WPDM_Fast optimizes generation speed without compromising quality. WPDM effectively improved classification accuracy on datasets with varying imbalance ratios. Notably, a classifier trained with WPDM-generated samples achieved an F1-score of 0.857 at a 1:36 imbalance ratio, surpassing the state of the art by 2.3 percentage points. WPDM overcomes the limitations of Gaussian and Simplex noise-based models, generating high-quality, realistic SAH images. It significantly enhances classification performance in imbalanced settings, providing a robust solution for SAH CT detection.
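The structured noise at the heart of WPDM combines Worley (cellular) and Perlin components. The Worley half is simple to sketch: each pixel's value is the distance to the nearest randomly scattered feature point. A toy implementation (not the authors' code, and without the Perlin component or the diffusion schedule):

```python
import math
import random

def worley_noise(width, height, n_points=8, seed=0):
    """Minimal 2-D Worley (cellular) noise: each pixel's value is the
    distance to the nearest randomly scattered feature point, normalized
    to [0, 1] so it can be mixed with other noise fields."""
    rng = random.Random(seed)
    pts = [(rng.uniform(0, width), rng.uniform(0, height))
           for _ in range(n_points)]
    grid = [[min(math.hypot(x - px, y - py) for px, py in pts)
             for x in range(width)] for y in range(height)]
    peak = max(max(row) for row in grid) or 1.0
    return [[v / peak for v in row] for row in grid]

field = worley_noise(32, 32)
```

Unlike i.i.d. Gaussian noise, the resulting field has spatially correlated cell-like structure, which is the property the paper exploits to avoid the homogeneity of Gaussian corruption.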

Predictive modeling of hematoma expansion from non-contrast computed tomography in spontaneous intracerebral hemorrhage patients

Ironside, N., El Naamani, K., Rizvi, T., Shifat-E-Rabbi, M., Kundu, S., Becceril-Gaitan, A., Pas, K., Snyder, H., Chen, C.-J., Langefeld, C., Woo, D., Mayer, S. A., Connolly, E. S., Rohde, G. K., VISTA-ICH and ERICH investigators

medrxiv preprint · Sep 2 2025
Hematoma expansion is a consistent predictor of poor neurological outcome and mortality after spontaneous intracerebral hemorrhage (ICH). An incomplete understanding of its biophysiology has limited early preventative intervention. Transport-based morphometry (TBM) is a mathematical modeling technique that uses a physically meaningful metric to quantify and visualize discriminating image features that are not readily perceptible to the human eye. We hypothesized that TBM could discover relationships between hematoma morphology on initial non-contrast computed tomography (NCCT) and hematoma expansion. 170 spontaneous ICH patients enrolled in the multi-center international Virtual International Trials of Stroke Archive (VISTA-ICH) with time-series NCCT data were used for model derivation. Its performance was assessed on a test dataset of 170 patients from the Ethnic/Racial Variations of Intracerebral Hemorrhage (ERICH) study. A unique transport-based representation was produced from each presentation NCCT hematoma image to identify morphological features of expansion. The principal hematoma features identified by TBM were larger size, density heterogeneity, shape irregularity, and peripheral density distribution. These were consistent with clinician-identified features of hematoma expansion, corroborating the hypothesis that morphological characteristics of the hematoma promote future growth. Incorporating these traits into a multivariable model comprising morphological, spatial, and clinical information achieved an AUROC of 0.71 for quantifying 24-hour hematoma expansion risk in the test dataset. This outperformed existing clinician protocols and alternate machine learning methods, suggesting that TBM detected features with greater precision than visual inspection alone. This pre-clinical study presents a quantitative and interpretable method for the discovery and visualization of NCCT biomarkers of hematoma expansion in ICH patients.
Because TBM has a direct physical meaning, its modeling of NCCT hematoma features can inform hypotheses for hematoma expansion mechanisms. It has potential future application as a clinical risk stratification tool.
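Transport-based representations rest on optimal transport maps. In one dimension the optimal (monotone) map reduces to CDF matching, T(x) = F_ref⁻¹(F_src(x)), which can be sketched on toy mass profiles (hypothetical values, not the study's data or its full 2-D pipeline):

```python
def cdf(hist):
    """Cumulative distribution of a 1-D mass profile, normalized to 1."""
    total = float(sum(hist))
    acc, out = 0.0, []
    for h in hist:
        acc += h / total
        out.append(acc)
    return out

def transport_map_1d(src, ref):
    """Monotone 1-D optimal transport map from `src` to `ref` via CDF
    matching, sampled at each source bin: T(x) = F_ref^{-1}(F_src(x))."""
    Fs, Fr = cdf(src), cdf(ref)
    def inv_ref(q):  # smallest bin index whose ref-CDF reaches q
        for j, v in enumerate(Fr):
            if v >= q - 1e-12:
                return j
        return len(Fr) - 1
    return [inv_ref(q) for q in Fs]

# A toy "hematoma profile" shifted right relative to a reference template
ref = [0, 4, 8, 4, 0, 0, 0]
src = [0, 0, 0, 4, 8, 4, 0]
T = transport_map_1d(src, ref)
```

The map sends the source mass at bins 3-5 to reference bins 1-3, so the displacement field directly encodes the translation between the two profiles. That physical interpretability of the transport map is what lets TBM visualize discriminating morphology rather than only score it.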

An Artificial Intelligence System for Staging the Spheno-Occipital Synchondrosis.

Milani OH, Mills L, Nikho A, Tliba M, Ayyildiz H, Allareddy V, Ansari R, Cetin AE, Elnagar MH

pubmed · Sep 2 2025
The aim of this study was to develop, test and validate automated, interpretable deep learning algorithms for the assessment and classification of spheno-occipital synchondrosis (SOS) fusion stages from cone beam computed tomography (CBCT). The sample consisted of 723 CBCT scans of orthodontic patients from private practices in the midwestern United States. The SOS fusion stages were classified by two orthodontists and an oral and maxillofacial radiologist. The advanced deep learning models employed consisted of ResNet, EfficientNet and ConvNeXt. Additionally, a new attention-based model, ConvNeXt + Conv Attention, was developed to enhance classification accuracy by integrating attention mechanisms for capturing subtle medical imaging features. Lastly, YOLOv11 was integrated for fully automated region detection and segmentation. ConvNeXt + Conv Attention outperformed the other models and achieved an 88.94% accuracy with manual cropping and 82.49% accuracy in a fully automated workflow. This study introduces a novel artificial intelligence-based pipeline that reliably automates the classification of SOS fusion stages using advanced deep learning models, with the highest accuracy achieved by ConvNeXt + Conv Attention. These models enhance the efficiency, scalability and consistency of SOS staging while minimising manual intervention from the clinician, underscoring the potential for AI-driven solutions in orthodontics and clinical workflows.

RegGAN-based contrast-free CT enhances esophageal cancer assessment: multicenter validation of automated tumor segmentation and T-staging.

Huang X, Li W, Wang Y, Wu Q, Li P, Xu K, Huang Y

pubmed · Sep 2 2025
This study aimed to develop a deep learning (DL) framework using registration-guided generative adversarial networks (RegGAN) to synthesize contrast-enhanced CT (Syn-CECT) from non-contrast CT (NCCT), enabling iodine-free esophageal cancer (EC) T-staging. A retrospective multicenter analysis included 1,092 EC patients (2013-2024) divided into training (N = 313), internal (N = 117), and external test cohorts (N = 116 and N = 546). RegGAN synthesized Syn-CECT by integrating registration and adversarial training to address NCCT-CECT misalignment. Tumor segmentation used CSSNet with hierarchical feature fusion, while T-staging employed a dual-path DL model combining radiomic features (from NCCT/Syn-CECT) and Vision Transformer-derived deep features. Performance was validated via quantitative metrics (NMAE, PSNR, SSIM), Dice scores, AUC, and reader studies comparing six clinicians with/without model assistance. RegGAN achieved Syn-CECT quality comparable to real CECT (NMAE = 0.1903, SSIM = 0.7723; visual scores: p ≥ 0.12). CSSNet produced accurate tumor segmentation (Dice = 0.89, 95% HD = 2.27 in external tests). The DL staging model outperformed machine learning (AUC = 0.7893-0.8360 vs. ≤ 0.8323), surpassing early-career clinicians (AUC = 0.641-0.757) and matching experts (AUC = 0.840). Syn-CECT-assisted clinicians improved diagnostic accuracy (AUC increase: ~ 0.1, p < 0.01), with decision curve analysis confirming clinical utility at > 35% risk threshold. The RegGAN-based framework eliminates contrast agents while maintaining diagnostic accuracy for EC segmentation (Dice > 0.88) and T-staging (AUC > 0.78). It offers a safe, cost-effective alternative for patients with iodine allergies or renal impairment and enhances diagnostic consistency across clinician experience levels. This approach addresses limitations of invasive staging and repeated contrast exposure, demonstrating transformative potential for resource-limited settings.
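The image-similarity metrics used to validate Syn-CECT (NMAE, PSNR, SSIM) have standard definitions. A rough sketch of the first two on flattened images, with made-up pixel values (not the study's evaluation code; normalization conventions for NMAE vary, so the intensity-range version below is one common choice):

```python
import math

def nmae(pred, ref):
    """Mean absolute error normalized by the reference intensity range."""
    span = (max(ref) - min(ref)) or 1.0
    return sum(abs(p - r) for p, r in zip(pred, ref)) / (len(ref) * span)

def psnr(pred, ref, peak=255.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

# Hypothetical 4-pixel images: synthetic output vs. real CECT reference
ref = [10.0, 50.0, 90.0, 130.0]
pred = [12.0, 48.0, 92.0, 128.0]
err = nmae(pred, ref)  # 8 / (4 * 120) ≈ 0.0167
```

A perfect synthesis gives NMAE = 0 and PSNR = ∞; the study's reported NMAE of 0.1903 sits on the same 0-to-1 scale as this toy value.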

Automated coronary analysis in ultrahigh-spatial resolution photon-counting detector CT angiography: Clinical validation and intra-individual comparison with energy-integrating detector CT.

Kravchenko D, Hagar MT, Varga-Szemes A, Schoepf UJ, Schoebinger M, O'Doherty J, Gülsün MA, Laghi A, Laux GS, Vecsey-Nagy M, Emrich T, Tremamunno G

pubmed · Sep 1 2025
To evaluate a deep-learning algorithm for automated coronary artery analysis on ultrahigh-resolution photon-counting detector coronary computed tomography (CT) angiography and to compare its performance to expert readers, using invasive coronary angiography as reference. Thirty-two patients (mean age 68.6 years; 81% male) underwent both energy-integrating detector and ultrahigh-resolution photon-counting detector CT within 30 days. Expert readers scored each image using the Coronary Artery Disease-Reporting and Data System classification, and the results were compared to invasive angiography. After a three-month wash-out, one reader reanalyzed the photon-counting detector CT images assisted by the algorithm. Sensitivity, specificity, accuracy, inter-reader agreement, and reading times were recorded for each method. On 401 arterial segments, inter-reader agreement improved from substantial (κ = 0.75) on energy-integrating detector CT to near-perfect (κ = 0.86) on photon-counting detector CT. The algorithm alone achieved 85% sensitivity, 91% specificity, and 90% accuracy on energy-integrating detector CT, and 85%, 96%, and 95% on photon-counting detector CT. Compared to invasive angiography on photon-counting detector CT, manual and automated reads had similar sensitivity (67%), but manual assessment slightly outperformed in specificity (85% vs. 79%) and accuracy (84% vs. 78%). When the reader was assisted by the algorithm, specificity rose to 97% (p < 0.001), accuracy to 95%, and reading time decreased by 54% (p < 0.001). This deep-learning algorithm demonstrates high agreement with experts and improved diagnostic performance on photon-counting detector CT. Expert review augmented by the algorithm further increases specificity and dramatically reduces interpretation time.
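The κ values quoted for inter-reader agreement are Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal illustration on hypothetical binary per-segment stenosis scores (invented data, not the study's readings):

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for inter-reader agreement on categorical labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(labels_a)
    cats = sorted(set(labels_a) | set(labels_b))
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    pe = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
             for c in cats)
    return (po - pe) / (1 - pe)

# Two hypothetical readers grading 10 segments: 0 = no stenosis, 1 = stenosis
r1 = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
r2 = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
kappa = cohens_kappa(r1, r2)  # 9/10 raw agreement -> kappa = 0.8
```

Here 90% raw agreement yields κ = 0.8 once chance agreement (0.5) is removed, which is why κ = 0.86 in the abstract is read as near-perfect agreement.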
