
PAINT: Prior-aided Alternate Iterative NeTwork for Ultra-low-dose CT Imaging Using Diffusion Model-restored Sinogram.

Chen K, Zhang W, Deng Z, Zhou Y, Zhao J

PubMed | Aug 18, 2025
Obtaining multiple CT scans from the same patient is required in many clinical scenarios, such as lung nodule screening and image-guided radiation therapy. Repeated scans expose patients to higher radiation dose and increase the risk of cancer. In this study, we aim to achieve ultra-low-dose imaging for subsequent scans by collecting an extremely undersampled sinogram via regional few-view scanning, while preserving image quality by using the preceding fully sampled scan as a prior. To fully exploit the prior information, we propose a two-stage framework consisting of diffusion model-based sinogram restoration and deep learning-based unrolled iterative reconstruction. Specifically, the undersampled sinogram is first restored by a conditional diffusion model with sinogram-domain prior guidance. Then, we formulate the undersampled data reconstruction problem as an optimization problem combining fidelity terms for both the undersampled and restored data, along with a regularization term based on the image-domain prior. Next, we propose the Prior-aided Alternate Iterative NeTwork (PAINT) to solve the optimization problem. PAINT alternately updates the undersampled or restored data fidelity term, and unrolls the iterations to integrate neural network-based prior regularization. With a 112 mm field of view in simulated data experiments, our proposed framework achieved superior performance in terms of CT value accuracy and image detail preservation. Clinical data experiments also demonstrated that our framework outperformed the comparison methods in artifact reduction and structure recovery.
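As a rough illustration of the alternating scheme described above, the sketch below interleaves gradient steps on the undersampled and restored data fidelity terms with a learned image-domain prior. It assumes dense projection matrices and a generic `prior_net` callable; all names are illustrative placeholders rather than the authors' implementation.

```python
# Minimal sketch of an alternating, unrolled update (assumptions: dense
# projection matrices A_u/A_r, a learned prior network; not the authors' code).
import torch

def paint_like_unrolled(x0, A_u, y_u, A_r, y_r, prior_net, n_iters=10, step=0.1):
    """Alternate gradient steps on the undersampled and restored fidelity terms,
    applying a learned image-domain prior after each step."""
    x = x0.clone()
    for k in range(n_iters):
        if k % 2 == 0:   # fidelity step on the measured (undersampled) sinogram
            grad = A_u.T @ (A_u @ x - y_u)
        else:            # fidelity step on the diffusion-restored sinogram
            grad = A_r.T @ (A_r @ x - y_r)
        x = x - step * grad
        x = prior_net(x)  # neural-network-based prior regularization
    return x
```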

A prognostic model integrating radiomics and deep learning based on CT for survival prediction in laryngeal squamous cell carcinoma.

Jiang H, Xie K, Chen X, Ning Y, Yu Q, Lv F, Liu R, Zhou Y, Xia S, Peng J

PubMed | Aug 16, 2025
Accurate prognostic prediction is crucial for patients with laryngeal squamous cell carcinoma (LSCC) to guide personalized treatment strategies. This study aimed to develop a comprehensive prognostic model leveraging clinical factors alongside radiomics and deep learning (DL) based on CT imaging to predict recurrence-free survival (RFS) in LSCC patients. We retrospectively enrolled 349 patients with LSCC from Center 1 (training set: n = 189; internal testing set: n = 82) and Center 2 (external testing set: n = 78). A combined model was developed using Cox regression analysis to predict RFS in LSCC patients by integrating independent clinical risk factors, radiomics score (RS), and deep learning score (DLS). Meanwhile, separate clinical, radiomics, and DL models were also constructed for comparison. Furthermore, the combined model was represented visually through a nomogram to provide personalized estimation of RFS, with its risk stratification capability evaluated using Kaplan-Meier analysis. The combined model achieved a higher C-index than did the clinical model, radiomics model, and DL model in the internal testing (0.810 vs. 0.634, 0.679, and 0.727, respectively) and external testing sets (0.742 vs. 0.602, 0.617, and 0.729, respectively). Additionally, following risk stratification via nomogram, patients in the low-risk group showed significantly higher survival probabilities compared to those in the high-risk group in the internal testing set [hazard ratio (HR) = 0.157, 95% confidence interval (CI): 0.063-0.392, p < 0.001] and external testing set (HR = 0.312, 95% CI: 0.137-0.711, p = 0.003). The proposed combined model demonstrated a reliable and accurate ability to predict RFS in patients with LSCC, potentially assisting in risk stratification.
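A minimal sketch of how clinical, radiomics, and deep-learning scores could be fused in a Cox model and evaluated with a C-index (using lifelines); the file and column names are assumptions for illustration, not the study's data or code.

```python
# Hedged sketch: fuse clinical risk, radiomics score, and DL score in a Cox model.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

train = pd.read_csv("lscc_train.csv")   # hypothetical columns: rfs_months, event,
test = pd.read_csv("lscc_test.csv")     # clinical_risk, radiomics_score, dl_score

cox = CoxPHFitter()
cox.fit(train[["rfs_months", "event", "clinical_risk", "radiomics_score", "dl_score"]],
        duration_col="rfs_months", event_col="event")

# Higher partial hazard means worse prognosis, so negate it for the C-index.
c_index = concordance_index(test["rfs_months"],
                            -cox.predict_partial_hazard(test),
                            test["event"])
print(f"Testing-set C-index: {c_index:.3f}")
```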

An interpretable CT-based deep learning model for predicting overall survival in patients with bladder cancer: a multicenter study.

Zhang M, Zhao Y, Hao D, Song Y, Lin X, Hou F, Huang Y, Yang S, Niu H, Lu C, Wang H

PubMed | Aug 16, 2025
Predicting the prognosis of bladder cancer remains challenging despite standard treatments. We developed an interpretable bladder cancer deep learning (BCDL) model using preoperative CT scans to predict overall survival. The model was trained on one cohort (n = 765) and validated in three independent cohorts (n = 438; n = 181; n = 72). The BCDL model outperformed other models in survival risk prediction, with the SHapley Additive exPlanations (SHAP) method identifying the pixel-level features contributing to predictions. Patients were stratified into high- and low-risk groups using a deep learning score cutoff. Adjuvant therapy significantly improved overall survival in high-risk patients (p = 0.028) and in women in the low-risk group (p = 0.046). RNA sequencing analysis revealed differential gene expression and pathway enrichment between risk groups, with high-risk patients exhibiting an immunosuppressive microenvironment and altered microbial composition. Our BCDL model accurately predicts survival risk and supports personalized treatment strategies for improved clinical decision-making.
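For readers unfamiliar with the attribution step mentioned above, the toy sketch below shows how SHAP's GradientExplainer can produce pixel-level contributions for an image model; the tiny CNN and random tensors are placeholders, not the BCDL model.

```python
# Toy SHAP attribution sketch (placeholder CNN and simulated CT patches).
import torch
import torch.nn as nn
import shap

model = nn.Sequential(                      # stand-in for the survival network
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
model.eval()

background = torch.randn(16, 1, 64, 64)     # reference patches (simulated)
to_explain = torch.randn(4, 1, 64, 64)      # cases to explain (simulated)

explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(to_explain)   # per-pixel contributions
print(shap_values[0].shape if isinstance(shap_values, list) else shap_values.shape)
```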

High sensitivity in spontaneous intracranial hemorrhage detection from emergency head CT scans using ensemble-learning approach.

Takala J, Peura H, Pirinen R, Väätäinen K, Terjajev S, Lin Z, Raj R, Korja M

PubMed | Aug 15, 2025
Spontaneous intracranial hemorrhages carry a high disease burden. With the increasing volume of medical imaging, new technological solutions for assisting in image interpretation are warranted. We developed a deep learning (DL) solution for spontaneous intracranial hemorrhage detection from head CT scans. The DL solution included four base convolutional neural networks (CNNs), which were trained using 300 head CT scans. A metamodel was trained on top of the four base CNNs, and simple post-processing steps were applied to improve the solution's accuracy. The solution's performance was evaluated using a retrospective dataset of consecutive emergency head CTs imaged in ten different emergency rooms. The validation dataset included 7797 head CT scans, of which 118 presented with spontaneous intracranial hemorrhage. The trained metamodel, together with a simple rule-based post-processing step, showed 89.8% sensitivity and 89.5% specificity for hemorrhage detection at the case level. The solution detected all 78 spontaneous hemorrhage cases presumed or confirmed to have been imaged within 12 h of symptom onset, and identified five hemorrhages missed in the initial on-call reports. Although the success of DL algorithms depends on multiple factors, including training data versatility and annotation quality, the proposed ensemble-learning approach with rule-based post-processing may help clinicians develop highly accurate DL solutions for clinical imaging diagnostics.
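The stacking idea described above can be sketched in a few lines: a metamodel is fit on the per-case probabilities of the base networks, and a simple rule is applied afterwards. Everything below (simulated probabilities, a logistic-regression metamodel, the specific rule) is an illustrative assumption rather than the published pipeline.

```python
# Hedged stacking sketch: metamodel over four base-CNN probabilities + a rule.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
p_base = rng.uniform(size=(1000, 4))          # per-case probabilities from 4 base CNNs
y = (p_base.mean(axis=1) + 0.1 * rng.normal(size=1000) > 0.5).astype(int)

meta = LogisticRegression().fit(p_base, y)    # metamodel trained on base outputs
p_meta = meta.predict_proba(p_base)[:, 1]

# Illustrative rule-based post-processing: flag a case if the metamodel or any
# single base network is highly confident.
flag = (p_meta > 0.5) | (p_base.max(axis=1) > 0.95)
print("case-level sensitivity:", recall_score(y, flag.astype(int)))
```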

Spatio-temporal deep learning with temporal attention for indeterminate lung nodule classification.

Farina B, Carbajo Benito R, Montalvo-García D, Bermejo-Peláez D, Maceiras LS, Ledesma-Carbayo MJ

PubMed | Aug 15, 2025
Lung cancer is the leading cause of cancer-related death worldwide. Deep learning-based computer-aided diagnosis (CAD) systems in screening programs enhance malignancy prediction, assist radiologists in decision-making, and reduce inter-reader variability. However, limited research has explored the analysis of repeated annual exams of indeterminate lung nodules to improve accuracy. We introduced a novel spatio-temporal deep learning framework, the global attention convolutional recurrent neural network (globAttCRNN), to predict indeterminate lung nodule malignancy using serial screening computed tomography (CT) images from the National Lung Screening Trial (NLST) dataset. The model comprises a lightweight 2D convolutional neural network for spatial feature extraction and a recurrent neural network with a global attention module to capture the temporal evolution of lung nodules. Additionally, we proposed new strategies to handle missing data in the temporal dimension to mitigate potential biases arising from missing time steps, including temporal augmentation and temporal dropout. Our model achieved an area under the receiver operating characteristic curve (AUC-ROC) of 0.954 in an independent test set of 175 lung nodules, each detected in multiple CT scans over patient follow-up, outperforming baseline single-time and multiple-time architectures. The temporal global attention module prioritizes informative time points, enabling the model to capture key spatial and temporal features while ignoring irrelevant or redundant information. Our evaluation emphasizes its potential as a valuable tool for the diagnosis and stratification of patients at risk of lung cancer.
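A global attention module over time steps, as described above, can be sketched as a learned softmax weighting of recurrent features; the module below is a generic illustration (dimensions, masking, and names are assumptions, not the globAttCRNN code).

```python
# Generic temporal global-attention pooling over GRU features (PyTorch sketch).
import torch
import torch.nn as nn

class GlobalTemporalAttention(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.score = nn.Linear(hidden, 1)             # one score per time step

    def forward(self, h, mask=None):                  # h: (batch, time, hidden)
        e = self.score(h).squeeze(-1)                 # (batch, time)
        if mask is not None:                          # ignore missing annual exams
            e = e.masked_fill(~mask, float("-inf"))
        a = torch.softmax(e, dim=1).unsqueeze(-1)     # attention weights
        return (a * h).sum(dim=1)                     # weighted temporal summary

# Usage: per-exam CNN features pooled over follow-up time points.
gru = nn.GRU(input_size=256, hidden_size=128, batch_first=True)
att = GlobalTemporalAttention(128)
x = torch.randn(2, 3, 256)                            # 2 nodules, 3 annual scans
mask = torch.tensor([[True, True, True],
                     [True, True, False]])            # second nodule misses one exam
out, _ = gru(x)
summary = att(out, mask)                              # (2, 128) -> malignancy head
```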

Fine-Tuned Large Language Model for Extracting Pretreatment Pancreatic Cancer According to Computed Tomography Radiology Reports.

Hirakawa H, Yasaka K, Nomura T, Tsujimoto R, Sonoda Y, Kiryu S, Abe O

PubMed | Aug 15, 2025
This study aimed to examine the performance of a fine-tuned large language model (LLM) in identifying pretreatment pancreatic cancer from computed tomography (CT) radiology reports and to compare it with that of human readers. This retrospective study included 2690, 886, and 378 CT reports for the training, validation, and test datasets, respectively. The clinical indication, imaging findings, and imaging diagnosis sections of each radiology report (used as input data) were reviewed and categorized into groups 0 (no pancreatic cancer), 1 (after treatment for pancreatic cancer), and 2 (pretreatment pancreatic cancer present) (used as reference data). A pre-trained Japanese Bidirectional Encoder Representations from Transformers (BERT) model was fine-tuned with the training and validation datasets. Because of group imbalance, group 1 data were undersampled and group 2 data were oversampled in the training dataset. The best-performing model on the validation set was subsequently assessed on the test dataset. Additionally, three readers (readers 1, 2, and 3) classified the reports in the test dataset. The fine-tuned LLM and readers 1, 2, and 3 demonstrated an overall accuracy of 0.942, 0.984, 0.979, and 0.947; sensitivity for differentiating groups 0/1/2 of 0.944/0.960/0.921, 0.976/1.000/0.976, 0.984/0.984/0.968, and 1.000/1.000/0.841; and total classification time of 49 s, 2689 s, 3496 s, and 4887 s, respectively. The fine-tuned LLM effectively identified patients with pretreatment pancreatic cancer from CT radiology reports, and its performance was comparable to that of the readers in a shorter time.
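As a rough sketch of the rebalancing-plus-fine-tuning recipe described above, the snippet below undersamples group 1, oversamples group 2, and fine-tunes a Japanese BERT for three-class report classification with the Hugging Face API. The checkpoint name, file names, column names, and per-class target are assumptions for illustration, not the study's configuration.

```python
# Hedged sketch: rebalance report classes, then fine-tune a Japanese BERT.
import pandas as pd
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

df = pd.read_csv("reports_train.csv")                 # hypothetical: text, label (0/1/2)
g0, g1, g2 = [df[df.label == k] for k in range(3)]
n_target = 500                                        # hypothetical per-class target
df_bal = pd.concat([
    g0,
    g1.sample(min(len(g1), n_target), random_state=0),    # undersample group 1
    g2.sample(n_target, replace=True, random_state=0),    # oversample group 2
])

checkpoint = "cl-tohoku/bert-base-japanese-v3"        # illustrative Japanese BERT
tok = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

ds = Dataset.from_pandas(df_bal).map(
    lambda b: tok(b["text"], truncation=True, padding="max_length", max_length=512),
    batched=True)

Trainer(model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=3,
                               per_device_train_batch_size=8),
        train_dataset=ds).train()
```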

Efficient Image-to-Image Schrödinger Bridge for CT Field of View Extension

Zhenhao Li, Long Yang, Xiaojie Yin, Haijun Yu, Jiazhou Wang, Hongbin Han, Weigang Hu, Yixing Huang

arXiv preprint | Aug 15, 2025
Computed tomography (CT) is a cornerstone imaging modality for non-invasive, high-resolution visualization of internal anatomical structures. However, when the scanned object exceeds the scanner's field of view (FOV), projection data are truncated, resulting in incomplete reconstructions and pronounced artifacts near FOV boundaries. Conventional reconstruction algorithms struggle to recover accurate anatomy from such data, limiting clinical reliability. Deep learning approaches have been explored for FOV extension, with diffusion generative models representing the latest advances in image synthesis. Yet, conventional diffusion models are computationally demanding and slow at inference due to their iterative sampling process. To address these limitations, we propose an efficient CT FOV extension framework based on the image-to-image Schrödinger Bridge (I²SB) diffusion model. Unlike traditional diffusion models that synthesize images from pure Gaussian noise, I²SB learns a direct stochastic mapping between paired limited-FOV and extended-FOV images. This direct correspondence yields a more interpretable and traceable generative process, enhancing anatomical consistency and structural fidelity in reconstructions. I²SB achieves superior quantitative performance, with root-mean-square error (RMSE) values of 49.8 HU on simulated noisy data and 152.0 HU on real data, outperforming state-of-the-art diffusion models such as conditional denoising diffusion probabilistic models (cDDPM) and patch-based diffusion methods. Moreover, its one-step inference enables reconstruction in just 0.19 s per 2D slice, representing over a 700-fold speedup compared to cDDPM (135 s) and surpassing diffusionGAN (0.58 s), the second fastest. This combination of accuracy and efficiency makes I²SB highly suitable for real-time or clinical deployment.

Prospective validation of an artificial intelligence assessment in a cohort of applicants seeking financial compensation for asbestosis (PROSBEST).

Smesseim I, Lipman KBWG, Trebeschi S, Stuiver MM, Tissier R, Burgers JA, de Gooijer CJ

PubMed | Aug 15, 2025
Asbestosis, a rare pneumoconiosis marked by diffuse pulmonary fibrosis, arises from prolonged asbestos exposure. Its diagnosis, guided by the Helsinki criteria, relies on exposure history, clinical findings, radiology, and lung function. However, interobserver variability complicates diagnoses and financial compensation. This study prospectively validated the sensitivity of an AI-driven assessment for asbestosis compensation in the Netherlands. Secondary objectives included evaluating specificity, accuracy, predictive values, area under the curve of the receiver operating characteristic (ROC-AUC), area under the precision-recall curve (PR-AUC), and interobserver variability. Between September 2020 and July 2022, 92 adult compensation applicants were assessed using both AI models and pulmonologists' reviews based on Dutch Health Council criteria. The AI model assigned an asbestosis probability score: negative (< 35), uncertain (35-66), or positive (≥ 66). Uncertain cases underwent additional reviews for a final determination. The AI assessment demonstrated sensitivity of 0.86 (95% confidence interval: 0.77-0.95), specificity of 0.85 (0.76-0.97), accuracy of 0.87 (0.79-0.93), ROC-AUC of 0.92 (0.84-0.97), and PR-AUC of 0.95 (0.89-0.99). Despite strong metrics, the sensitivity target of 98% was unmet. Pulmonologist reviews showed moderate to substantial interobserver variability. The AI-driven approach demonstrated robust accuracy but insufficient sensitivity for validation. Addressing interobserver variability and incorporating objective fibrosis measurements could enhance future reliability in clinical and compensation settings. The AI-driven assessment for financial compensation of asbestosis showed adequate accuracy but did not meet the required sensitivity for validation. We prospectively assessed the sensitivity of an AI-driven assessment procedure for financial compensation of asbestosis. The AI-driven asbestosis probability score underperformed across all metrics compared to internal testing. The AI-driven assessment procedure achieved a sensitivity of 0.86 (95% confidence interval: 0.77-0.95). It did not meet the predefined sensitivity target.
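The three score bands reported above map directly onto a triage rule in which only the uncertain band is routed to additional expert review; the short sketch below simply encodes those published cut-offs.

```python
# Triage sketch using the published probability-score bands.
def triage(prob_score: float) -> str:
    if prob_score < 35:
        return "negative"
    if prob_score >= 66:
        return "positive"
    return "uncertain - refer for additional pulmonologist review"

for s in (12.0, 50.0, 80.5):
    print(s, "->", triage(s))
```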

Aortic atherosclerosis evaluation using deep learning based on non-contrast CT: A retrospective multi-center study.

Yang M, Lyu J, Xiong Y, Mei A, Hu J, Zhang Y, Wang X, Bian X, Huang J, Li R, Xing X, Su S, Gao J, Lou X

PubMed | Aug 15, 2025
Non-contrast CT (NCCT) is widely used in clinical practice and holds potential for large-scale atherosclerosis screening, yet its application in detecting and grading aortic atherosclerosis remains limited. To address this, we propose Aortic-AAE, an automated segmentation system based on a cascaded attention mechanism within the nnU-Net framework. The cascaded attention module enhances feature learning across complex anatomical structures, outperforming existing attention modules. Integrated preprocessing and post-processing ensure anatomical consistency and robustness across multi-center data. Trained on 435 labeled NCCT scans from three centers and validated on 388 independent cases, Aortic-AAE achieved 81.12% accuracy in aortic stenosis classification and 92.37% in Agatston scoring of calcified plaques, surpassing five state-of-the-art models. This study demonstrates the feasibility of using deep learning for accurate detection and grading of aortic atherosclerosis from NCCT, supporting improved diagnostic decisions and enhanced clinical workflows.
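Since the abstract reports Agatston scoring of calcified plaque, the sketch below shows the standard per-slice Agatston computation (130 HU threshold, lesion area times a peak-density weight); it reflects the conventional definition, not the paper's implementation, and the slice thickness and minimum-area threshold are assumptions.

```python
# Standard Agatston scoring sketch from one axial HU slice and its pixel area.
import numpy as np
from scipy import ndimage

def density_factor(peak_hu: float) -> int:
    if peak_hu >= 400:
        return 4
    if peak_hu >= 300:
        return 3
    if peak_hu >= 200:
        return 2
    return 1                                  # lesions are thresholded at 130 HU

def agatston_slice(hu_slice: np.ndarray, pixel_area_mm2: float,
                   min_area_mm2: float = 1.0) -> float:
    mask = hu_slice >= 130                    # calcification threshold
    labels, n = ndimage.label(mask)
    score = 0.0
    for i in range(1, n + 1):
        lesion = labels == i
        area = lesion.sum() * pixel_area_mm2
        if area < min_area_mm2:
            continue                          # skip sub-millimetre specks
        score += area * density_factor(hu_slice[lesion].max())
    return score

# Total Agatston score = sum of agatston_slice(...) over the (typically 3 mm) slices.
```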

Restorative artificial intelligence-driven implant dentistry for immediate implant placement with an interim crown: A clinical report.

Marques VR, Soh D, Cerqueira G, Orgev A

PubMed | Aug 14, 2025
Immediate implant placement into the extraction socket based on a restoratively driven approach poses challenges that might compromise the delivery of an immediate interim restoration on the day of surgery. The digitally designed interim restoration may require modification before delivery and may not maintain the planned form needed to support the gingival architecture and the future prosthetic emergence profile. This report demonstrates how artificial intelligence (AI)-assisted segmentation of bone and tooth can enhance restoratively driven planning for immediate implant placement with an immediate interim restoration. A fractured maxillary central incisor was extracted after cone beam computed tomography (CBCT) analysis. AI-assisted segmentation of the digital imaging and communications in medicine (DICOM) file was used to separate the tooth and alveolar bone for digital implant planning, and the AI-assisted design of the interim restoration was copied from the natural tooth contour to optimize the emergence profile. Immediate implant placement was completed after minimally traumatic extraction, and the AI-assisted interim restoration was delivered immediately. The AI-assisted workflow enabled predictable, restoratively driven implant positioning, reduced surgical time, and optimized delivery of the interim restoration for improved clinical outcomes. The emergence profile of the anatomic crown, copied from the AI workflow into the interim restoration, guided soft tissue healing effectively.