
Deep Learning for Standardized Head CT Reformatting: A Quantitative Analysis of Image Quality and Operator Variability.

Chang PD, Chu E, Floriolli D, Soun J, Fussell D

PubMed paper · Sep 23 2025
To validate a deep learning foundation model for automated head computed tomography (CT) reformatting and to quantify the quality, speed, and variability of conventional manual reformats in a real-world dataset. A foundation artificial intelligence (AI) model was used to create automated reformats for 1,763 consecutive non-contrast head CT examinations. Model accuracy was first validated on a 100-exam subset by assessing landmark detection as well as rotational, centering, and zoom error against expert manual annotations. The validated model was subsequently used as a reference standard to evaluate the quality and speed of the original technician-generated reformats from the full dataset. The AI model demonstrated high concordance with expert annotations, with a mean landmark localization error of 0.6-0.9 mm. Compared to expert-defined planes, AI-generated reformats exhibited a mean rotational error of 0.7 degrees, a mean centering error of 0.3%, and a mean zoom error of 0.4%. By contrast, technician-generated reformats demonstrated a mean rotational error of 11.2 degrees, a mean centering error of 6.4%, and a mean zoom error of 6.2%. Significant variability in manual reformat quality was observed across different factors including patient age, scanner location, report findings, and individual technician operators. Manual head CT reformatting is subject to substantial variability in both quality and speed. A single-shot deep learning foundation model can generate reformats with high accuracy and consistency. The implementation of such an automated method offers the potential to improve standardization, increase workflow efficiency, and reduce operational costs in clinical practice.
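
The three error metrics reported above (rotational, centering, and zoom) can each be computed from a pair of reformat plane definitions. A minimal sketch, assuming planes are described by a normal vector, a center point, and a field of view; this illustrates the metrics, not the authors' pipeline:

```python
import numpy as np

def rotation_error_deg(normal_a, normal_b):
    """Angle in degrees between two reformat plane normals."""
    a = np.asarray(normal_a) / np.linalg.norm(normal_a)
    b = np.asarray(normal_b) / np.linalg.norm(normal_b)
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

def centering_error_pct(center_a, center_b, fov_mm):
    """Center offset as a percentage of the field of view."""
    offset = np.linalg.norm(np.asarray(center_a) - np.asarray(center_b))
    return 100.0 * offset / fov_mm

def zoom_error_pct(fov_a_mm, fov_b_mm):
    """Relative difference in reconstruction field of view."""
    return 100.0 * abs(fov_a_mm - fov_b_mm) / fov_b_mm

# Example: hypothetical AI plane vs. expert reference plane
print(rotation_error_deg([0.01, 0.02, 1.0], [0.0, 0.0, 1.0]))            # ~1.3 deg
print(centering_error_pct([120.5, 118.0, 60.0], [121.0, 117.0, 60.0], 240.0))
print(zoom_error_pct(238.0, 240.0))
```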

3D CoAt U SegNet-enhanced deep learning framework for accurate segmentation of acute ischemic stroke lesions from non-contrast CT scans.

Nag MK, Sadhu AK, Das S, Kumar C, Choudhary S

PubMed paper · Sep 23 2025
Segmenting ischemic stroke lesions from non-contrast CT (NCCT) scans is a complex task due to the hypo-intense nature of these lesions compared to surrounding healthy brain tissue and their iso-intensity with the lateral ventricles in many cases. Identifying early acute ischemic stroke lesions in NCCT remains particularly challenging. Computer-assisted detection and segmentation can serve as valuable tools to support clinicians in stroke diagnosis. This paper introduces CoAt U SegNet, a novel deep learning model designed to detect and segment acute ischemic stroke lesions from NCCT scans. Unlike conventional 3D segmentation models, this study presents an advanced 3D deep learning approach to enhance delineation accuracy. Traditional machine learning models have struggled to achieve satisfactory segmentation performance, highlighting the need for more sophisticated techniques. For model training, 50 NCCT scans were used, with 10 scans for validation and 500 scans for testing. The encoder convolution blocks incorporated dilation rates of 1, 3, and 5 to capture multi-scale features effectively. Performance evaluation on the 500 unseen NCCT scans yielded a Dice similarity score of 75% and a Jaccard index of 70%, demonstrating a notable improvement in segmentation accuracy. An enhanced similarity index was employed to refine lesion segmentation, which can further aid in distinguishing the penumbra from the core infarct area, contributing to improved clinical decision-making.
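
The encoder design described above, with dilation rates of 1, 3, and 5, is commonly realized as parallel dilated convolutions fused within each block. A hedged PyTorch sketch of one such 3D block; the branch-and-concatenate fusion and layer sizes are assumptions, not the published architecture:

```python
import torch
import torch.nn as nn

class MultiDilation3DBlock(nn.Module):
    """Parallel 3D convolutions with dilation rates 1, 3, 5, fused by a 1x1x1 conv."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm3d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in (1, 3, 5)
        ])
        self.fuse = nn.Conv3d(3 * out_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each branch preserves spatial size (padding == dilation for kernel 3).
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

block = MultiDilation3DBlock(1, 16)
print(block(torch.randn(1, 1, 32, 64, 64)).shape)  # torch.Size([1, 16, 32, 64, 64])
```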

Deep-learning-based prediction of significant portal hypertension with single cross-sectional non-enhanced CT.

Yamamoto A, Sato S, Ueda D, Walston SL, Kageyama K, Jogo A, Nakano M, Kotani K, Uchida-Kobayashi S, Kawada N, Miki Y

PubMed paper · Sep 22 2025
The purpose of this study was to establish a predictive deep learning (DL) model for clinically significant portal hypertension (CSPH) based on a single cross-sectional non-contrast CT image and to compare four representative positional images to determine the most suitable for the detection of CSPH. The study included 421 patients with chronic liver disease who underwent hepatic venous pressure gradient measurement at our institution between May 2007 and January 2024. Patients were randomly classified into training, validation, and test datasets at a ratio of 8:1:1. Non-contrast cross-sectional CT images from four target areas of interest were used to create four deep-learning-based models for predicting CSPH. The areas of interest were the umbilical portion of the portal vein (PV), the first right branch of the PV, the confluence of the splenic vein and PV, and the maximum cross-section of the spleen. The models were implemented using convolutional neural networks with a multilayer perceptron as the classifier. The model with the best predictive ability for CSPH was then compared to 13 conventional evaluation methods. Among the four areas, the umbilical portion of the PV had the highest predictive ability for CSPH (area under the curve [AUC]: 0.80). At the threshold maximizing the Youden index, sensitivity and specificity were 0.867 and 0.615, respectively. This DL model outperformed the ANTICIPATE model. We developed an algorithm that can predict CSPH immediately from a single slice of non-contrast CT, using the most suitable image of the umbilical portion of the PV.
Question: CSPH predicts complications but requires invasive hepatic venous pressure gradient measurement for diagnosis.
Findings: At the threshold maximizing the Youden index, sensitivity and specificity were 0.867 and 0.615, respectively. This DL model outperformed the ANTICIPATE model.
Clinical relevance: This study shows that a DL model can accurately predict CSPH from a single non-contrast CT image, providing a non-invasive alternative to invasive methods and aiding early detection and risk stratification in chronic liver disease without image manipulation.
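
The operating point reported above maximizes the Youden index (sensitivity + specificity - 1) along the ROC curve. A minimal illustration of that threshold selection with scikit-learn, using synthetic labels and scores rather than the study's data:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Synthetic CSPH labels and model probabilities, for illustration only.
y_true = rng.integers(0, 2, size=200)
y_score = np.clip(y_true * 0.35 + rng.normal(0.4, 0.2, size=200), 0, 1)

fpr, tpr, thresholds = roc_curve(y_true, y_score)
youden = tpr - fpr                    # Youden's J at each candidate threshold
best = np.argmax(youden)
print(f"threshold={thresholds[best]:.3f} "
      f"sensitivity={tpr[best]:.3f} specificity={1 - fpr[best]:.3f}")
```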

Multitask radioclinical decision stratification in non-metastatic colon cancer: integrating MMR status, pT staging, and high-risk pathological factors.

Yang R, Liu J, Li L, Fan Y, Shu Y, Wu W, Shu J

PubMed paper · Sep 22 2025
To construct a multi-task global decision support system based on preoperative contrast-enhanced CT features that predicts the mismatch repair (MMR) status, pT stage, and pathological risk factors (e.g., histological differentiation, lymphovascular invasion) for patients with non-metastatic colon cancer. A total of 372 eligible participants with non-metastatic colon cancer (NMCC) (training cohort: n = 260; testing cohort: n = 112) were enrolled from two institutions. The 34 features (27 imaging features; 7 clinical features) were subjected to feature selection using LASSO, Boruta, ReliefF, mRMR, and XGBoost-RFE. In each of the three categories (MMR, pT staging, and pathological risk factors), four features were selected to construct the total feature set. Subsequently, the multitask model was built with 14 machine learning algorithms. The predictive performance of the models was evaluated using the area under the receiver operating characteristic curve (AUC). The final feature set for constructing the model was based on the mRMR feature screening method. For the final MMR classification, pT staging, and pathological risk factor tasks, the SVC, Bernoulli naive Bayes, and decision tree algorithms were selected, respectively, with AUC scores of 0.80 [95% CI 0.71-0.89], 0.82 [95% CI 0.71-0.94], and 0.85 [95% CI 0.77-0.93] on the test set. Furthermore, a direct multiclass model constructed using the total feature set resulted in an average AUC of 0.77 across four management plans in the test set. The multi-task machine learning model proposed in this study enables non-invasive and precise preoperative stratification of patients with NMCC based on MMR status, pT stage, and pathological risk factors. This predictive tool demonstrates significant potential in facilitating preoperative risk stratification and guiding individualized therapeutic strategies.
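
The per-task pattern described above, a feature selector feeding a task-specific classifier, can be sketched with scikit-learn. Since mRMR is not part of scikit-learn, a generic mutual-information selector stands in for it here, and the features and labels are synthetic stand-ins for the radioclinical data:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(372, 34))            # 34 radioclinical features (synthetic)
tasks = {                                  # task name -> (labels, classifier)
    "MMR":  (rng.integers(0, 2, 372), SVC(probability=True)),
    "pT":   (rng.integers(0, 2, 372), BernoulliNB()),
    "risk": (rng.integers(0, 2, 372), DecisionTreeClassifier(max_depth=3)),
}

for name, (y, clf) in tasks.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    # Four features per task, mirroring the paper's selection count.
    pipe = make_pipeline(StandardScaler(),
                         SelectKBest(mutual_info_classif, k=4), clf)
    pipe.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, pipe.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC={auc:.2f}")
```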

Artificial Intelligence-Assisted Treatment Planning in an Interdisciplinary Rehabilitation in the Esthetic Zone.

Fonseca FJPO, Matias BBR, Pacheco P, Muraoka CSAS, Silva EVF, Sesma N

PubMed paper · Sep 22 2025
This case report elucidates the application of an integrated digital workflow in which diagnosis, planning, and execution were enhanced by artificial intelligence (AI), enabling an assertive interdisciplinary esthetic-functional rehabilitation. With AI-powered software, the sequence from orthodontic treatment to final rehabilitation achieved high predictability, addressing the patient's chief complaints. A patient presented with a missing maxillary left central incisor (tooth 11) and dissatisfaction with a removable partial denture. Clinical examination revealed a gummy smile, a deviated midline, and a disproportionate mesiodistal space relative to the midline. Initial documentation included photographs, intraoral scanning, and cone-beam computed tomography of the maxilla. These data were integrated into digital planning software to create an interdisciplinary plan. The workflow included prosthetically guided orthodontic treatment with aligners, a motivational mockup, guided implant surgery, peri-implant soft tissue management, and final prosthetic rehabilitation using a CAD/CAM approach. This digital workflow enhanced communication among the multidisciplinary team and with the patient, ensuring highly predictable esthetic and functional outcomes. Comprehensive digital workflows improve diagnostic accuracy, streamline planning with AI, and facilitate patient understanding. This approach increases patient satisfaction, supports interdisciplinary collaboration, and promotes treatment adherence.

Development of a patient-specific cone-beam computed tomography dose optimization model using machine learning in image-guided radiation therapy.

Miura S

PubMed paper · Sep 22 2025
Cone-beam computed tomography (CBCT) is commonly utilized in radiation therapy to visualize soft tissues and bone structures. This study aims to develop a machine learning model that predicts optimal, patient-specific CBCT doses that minimize radiation exposure while maintaining soft tissue image quality in prostate radiation therapy. Phantom studies evaluated the relationship between dose and two image quality metrics: image standard deviation (SD) and contrast-to-noise ratio (CNR). In a prostate-simulating phantom, CNR did not significantly decrease at doses above 40% compared to the 100% dose. Based on low-contrast resolution, this value was selected as the minimum clinical dose level. In clinical image analysis, both SD and CNR degraded with decreasing dose, consistent with the phantom findings. The structural similarity index between CBCT and planning computed tomography (CT) significantly decreased at doses below 60%, with a mean value of 0.69 at 40%. Previous studies suggest that this level may correspond to acceptable registration accuracy within the typical planning target volume margins applied in image-guided radiotherapy. A machine learning model was developed to predict CBCT doses using patient-specific metrics from planning CT scans and CBCT image quality parameters. Among the tested models, support vector regression achieved the highest accuracy, with an R² value of 0.833 and a root mean squared error of 0.0876, and was therefore adopted for dose prediction. These results support the feasibility of patient-specific CBCT imaging protocols that reduce radiation dose while maintaining clinically acceptable image quality for soft tissue registration.
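
The two quality metrics and the dose-prediction model described above are straightforward to prototype. A hedged sketch: the CNR definition is the usual region-based one, and the support vector regression is fit on synthetic stand-ins for the study's patient-specific planning-CT metrics:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def cnr(roi_signal: np.ndarray, roi_background: np.ndarray) -> float:
    """Contrast-to-noise ratio between a signal region and a background region."""
    return abs(roi_signal.mean() - roi_background.mean()) / roi_background.std()

rng = np.random.default_rng(1)
# Hypothetical patient metrics (e.g., effective diameter, planning-CT noise)
# mapped to a normalized CBCT dose; these stand in for the study's features.
X = rng.normal(size=(120, 4))
y = 0.4 + 0.1 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(0, 0.05, 120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.01))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2={r2_score(y_te, pred):.3f} "
      f"RMSE={np.sqrt(mean_squared_error(y_te, pred)):.4f}")
```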

Conditional Diffusion Models for CT Image Synthesis from CBCT: A Systematic Review

Alzahra Altalib, Chunhui Li, Alessandro Perelli

arXiv preprint · Sep 22 2025
Objective: Cone-beam computed tomography (CBCT) provides a low-dose imaging alternative to conventional CT but suffers from noise, scatter, and artifacts that degrade image quality. Synthetic CT (sCT) aims to translate CBCT to high-quality CT-like images for improved anatomical accuracy and dosimetric precision. Although deep learning approaches have shown promise, they often face limitations in generalizability and detail preservation. Conditional diffusion models (CDMs), with their iterative refinement process, offer a novel solution. This review systematically examines the use of CDMs for CBCT-to-sCT synthesis.
Methods: A systematic search was conducted in Web of Science, Scopus, and Google Scholar for studies published between 2013 and 2024. Inclusion criteria targeted works employing conditional diffusion models specifically for sCT generation. Eleven relevant studies were identified and analyzed to address three questions: (1) What conditional diffusion methods are used? (2) How do they compare to conventional deep learning in accuracy? (3) What are their clinical implications?
Results: CDMs incorporating anatomical priors and spatial-frequency features demonstrated improved structural preservation and noise robustness. Energy-guided and hybrid latent models enabled enhanced dosimetric accuracy and personalized image synthesis. Across studies, CDMs consistently outperformed traditional deep learning models in noise suppression and artifact reduction, especially in challenging cases such as lung imaging and dual-energy CT.
Conclusion: Conditional diffusion models show strong potential for generalized, accurate sCT generation from CBCT. However, clinical adoption remains limited. Future work should focus on scalability, real-time inference, and integration with multi-modal imaging to enhance clinical relevance.
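
In many of the reviewed CBCT-to-sCT models, conditioning is applied by feeding the CBCT image to the denoiser alongside the noisy CT estimate at each refinement step. A schematic of the core of one such step, the clean-image estimate from predicted noise, with a toy denoiser standing in for any reviewed architecture:

```python
import torch
import torch.nn as nn

# Toy denoiser: predicts noise from the noisy sCT estimate concatenated
# with the conditioning CBCT slice (2 input channels -> 1 output channel).
denoiser = nn.Sequential(
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

def estimate_x0(x_t, cbct, t, alphas_cumprod):
    """DDPM-style clean-image estimate, conditioned on the CBCT image."""
    a_t = alphas_cumprod[t]
    eps = denoiser(torch.cat([x_t, cbct], dim=1))         # predicted noise
    return (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()    # x0 estimate

x_t = torch.randn(1, 1, 64, 64)        # current noisy sCT estimate
cbct = torch.randn(1, 1, 64, 64)       # conditioning CBCT slice
alphas_cumprod = torch.linspace(0.999, 0.01, 1000)
print(estimate_x0(x_t, cbct, t=500, alphas_cumprod=alphas_cumprod).shape)
```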

Machine learning predicts severe adverse events and salvage success of CT-guided lung biopsy after nondiagnostic transbronchial lung biopsy.

Yang S, Hua Z, Chen Y, Liu L, Wang Z, Cheng Y, Wang J, Xu Z, Chen C

PubMed paper · Sep 22 2025
To address the unmet clinical need for validated risk stratification tools in salvage CT-guided percutaneous lung biopsy (PNLB) following nondiagnostic transbronchial lung biopsy (TBLB), we aimed to develop machine learning models predicting severe adverse events (SAEs) in PNLB (Model 1) and the diagnostic success of salvage PNLB after TBLB failure (Model 2). This multicenter predictive modeling study enrolled 2910 cases undergoing PNLB across two centers (Center 1: n = 2653, 2016-2020; Center 2: n = 257, 2017-2022) with complete imaging and clinical documentation meeting predefined inclusion and exclusion criteria. Key variables were selected via LASSO regression, followed by development and validation of Model 1 (incorporating sex, smoking, pleural contact, lesion size, and puncture depth) and Model 2 (including age, lesion size, lesion characteristics, and post-bronchoscopic pathological categories (PBPCs)) using ten machine learning algorithms. Model performance was rigorously evaluated through discrimination metrics, calibration curves, and decision curve analysis to assess clinical applicability. A total of 2653 and 257 PNLB cases were included from the two centers. Model 1 achieved an external-validation ROC-AUC of 0.717 (95% CI: 0.609-0.825) and PR-AUC of 0.258 (95% CI: 0.0365-0.708), while Model 2 exhibited a ROC-AUC of 0.884 (95% CI: 0.784-0.984) and PR-AUC of 0.852 (95% CI: 0.784-0.896), with XGBoost outperforming the other algorithms. The dual XGBoost system stratifies salvage PNLB candidates by quantifying SAE risk (AUC = 0.717) versus diagnostic yield (AUC = 0.884), addressing the unmet need for personalized biopsy pathway optimization.
Question: Current tools cannot quantify severe adverse event (SAE) risks versus salvage diagnostic success for CT-guided lung biopsy (PNLB) after failed transbronchial biopsy (TBLB).
Findings: Dual XGBoost models successfully predicted the risks of PNLB SAEs (AUC = 0.717) and diagnostic success post-TBLB failure (AUC = 0.884), with validated clinical stratification benefits.
Clinical relevance: The dual XGBoost system guides clinical decision-making by integrating individual SAE risk with predictors of diagnostic success, enabling personalized salvage biopsy strategies that balance safety and diagnostic yield.
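
A hedged sketch of one of the two classifiers, here Model 1 on its five published predictors, assuming the xgboost Python package and synthetic stand-in data; the class-imbalance weighting reflects the rarity of SAEs:

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(7)
n = 2653
# Model 1 predictors per the abstract: sex, smoking, pleural contact,
# lesion size, puncture depth (all synthetic stand-ins here).
X1 = np.column_stack([rng.integers(0, 2, n), rng.integers(0, 2, n),
                      rng.integers(0, 2, n), rng.normal(25, 10, n),
                      rng.normal(40, 15, n)])
y_sae = (rng.random(n) < 0.05 + 0.002 * X1[:, 4]).astype(int)  # rare SAEs

X_tr, X_te, y_tr, y_te = train_test_split(X1, y_sae, test_size=0.2,
                                          stratify=y_sae, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05,
                    scale_pos_weight=(y_tr == 0).sum() / max((y_tr == 1).sum(), 1),
                    eval_metric="logloss")
clf.fit(X_tr, y_tr)
p = clf.predict_proba(X_te)[:, 1]
print(f"ROC-AUC={roc_auc_score(y_te, p):.3f} "
      f"PR-AUC={average_precision_score(y_te, p):.3f}")
```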

An Implicit Registration Framework Integrating Kolmogorov-Arnold Networks with Velocity Regularization for Image-Guided Radiation Therapy.

Sun P, Zhang C, Yang Z, Yin FF, Liu M

PubMed paper · Sep 22 2025
In image-guided radiation therapy (IGRT), deformable image registration between computed tomography (CT) and cone-beam computed tomography (CBCT) images remains challenging due to the computational cost of iterative algorithms and the data dependence of supervised deep learning methods. Implicit neural representation (INR) provides a promising alternative, but conventional multilayer perceptrons (MLPs) may struggle to efficiently represent complex, nonlinear deformations. This study introduces a novel INR-based registration framework that models the deformation as a continuous, time-varying velocity field, parameterized by a Kolmogorov-Arnold Network (KAN) constructed using Jacobi polynomials. To our knowledge, this is the first integration of KANs into medical image registration, establishing a new paradigm beyond standard MLP-based INR. For improved efficiency, the KAN estimates low-dimensional principal components of the velocity field, which are reconstructed via inverse principal component analysis and temporally integrated to derive the final deformation. This approach achieves a ~70% improvement in computational efficiency relative to direct velocity field modeling while ensuring smooth and topology-preserving transformations through velocity regularization. Evaluation on a publicly available pelvic CT-CBCT dataset demonstrates up to a 6% improvement in registration accuracy over traditional iterative methods and ~3% over MLP-based INR baselines, indicating the potential of the proposed method as an efficient and generalizable alternative for deformable registration.
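
The deformation described above is obtained by integrating a time-varying velocity field over pseudo-time. A minimal forward-Euler sketch in 2D, with a plain callable standing in for the KAN-predicted, PCA-reconstructed field (an assumption for illustration, not the paper's implementation):

```python
import numpy as np

def integrate_velocity(velocity_fn, points, n_steps=16):
    """Forward-Euler integration of a time-varying velocity field.

    velocity_fn(points, t) -> per-point velocity; here it stands in for the
    KAN-predicted field (which the paper reconstructs from PCA components).
    """
    x = points.copy()
    dt = 1.0 / n_steps
    for k in range(n_steps):
        t = k * dt
        x = x + dt * velocity_fn(x, t)   # advance points along the flow
    return x

# Toy field: a rotation whose strength decays over pseudo-time t in [0, 1].
def toy_velocity(p, t):
    w = 0.5 * (1.0 - t)
    return np.stack([-w * p[:, 1], w * p[:, 0]], axis=1)

pts = np.array([[1.0, 0.0], [0.0, 1.0]])
print(integrate_velocity(toy_velocity, pts))
```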

Learning Contrastive Multimodal Fusion with Improved Modality Dropout for Disease Detection and Prediction

Yi Gu, Kuniaki Saito, Jiaxin Ma

arXiv preprint · Sep 22 2025
As medical diagnoses increasingly leverage multimodal data, machine learning models are expected to effectively fuse heterogeneous information while remaining robust to missing modalities. In this work, we propose a novel multimodal learning framework that integrates enhanced modality dropout and contrastive learning to address real-world limitations such as modality imbalance and missingness. Our approach introduces learnable modality tokens to improve missingness-aware fusion of modalities and augments conventional unimodal contrastive objectives with fused multimodal representations. We validate our framework on large-scale clinical datasets for disease detection and prediction tasks, encompassing both visual and tabular modalities. Experimental results demonstrate that our method achieves state-of-the-art performance, particularly in challenging and practical scenarios where only a single modality is available. Furthermore, we show its adaptability through successful integration with a recent CT foundation model. Our findings highlight the effectiveness, efficiency, and generalizability of our approach for multimodal learning, offering a scalable, low-cost solution with significant potential for real-world clinical applications. The code is available at https://github.com/omron-sinicx/medical-modality-dropout.
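
The learnable-modality-token idea described above can be sketched in PyTorch: when a modality is missing, or randomly dropped during training, its embedding is replaced by a learned token before fusion. Dimensions and the linear fusion are assumptions; the authors' implementation is in the linked repository:

```python
import torch
import torch.nn as nn

class ModalityDropoutFusion(nn.Module):
    """Replace a dropped or missing modality embedding with a learnable token."""
    def __init__(self, n_modalities: int, dim: int, p_drop: float = 0.3):
        super().__init__()
        self.tokens = nn.Parameter(torch.zeros(n_modalities, dim))
        self.p_drop = p_drop
        self.fuse = nn.Linear(n_modalities * dim, dim)

    def forward(self, embeddings: list[torch.Tensor],
                present: torch.Tensor) -> torch.Tensor:
        # embeddings: per-modality (batch, dim); present: (batch, n_modalities) bool
        batch = embeddings[0].shape[0]
        fused = []
        for m, e in enumerate(embeddings):
            keep = present[:, m].clone()
            if self.training:  # randomly drop available modalities in training
                keep &= torch.rand(batch) >= self.p_drop
            token = self.tokens[m].expand(batch, -1)
            fused.append(torch.where(keep.unsqueeze(1), e, token))
        return self.fuse(torch.cat(fused, dim=1))

model = ModalityDropoutFusion(n_modalities=2, dim=32)
img, tab = torch.randn(4, 32), torch.randn(4, 32)      # visual + tabular embeddings
present = torch.tensor([[1, 1], [1, 0], [0, 1], [1, 1]], dtype=torch.bool)
print(model([img, tab], present).shape)  # torch.Size([4, 32])
```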