
Optimization-based image reconstruction regularized with inter-spectral structural similarity for limited-angle dual-energy cone-beam CT.

Peng J, Wang T, Xie H, Qiu RLJ, Chang CW, Roper J, Yu DS, Tang X, Yang X

PubMed · Jun 25, 2025

Limited-angle dual-energy (DE) cone-beam CT (CBCT) is considered a potential solution for fast, low-dose DE imaging on current CBCT scanners without hardware modification. However, its clinical implementation is hindered by the challenge of reconstructing images from limited-angle projections. While optimization-based and deep learning-based reconstruction methods have been proposed, their use is limited by the need for X-ray spectrum measurements or paired datasets for model training. This work aims to facilitate the clinical application of fast, low-dose DE-CBCT by developing a practical solution for image reconstruction in limited-angle DE-CBCT.
Methods:
An inter-spectral structural similarity-based regularization was integrated into iterative image reconstruction for limited-angle DE-CBCT. By enforcing similarity between the DE images, the regularization effectively reduced limited-angle artifacts in the reconstructed DE-CBCT images. The proposed method was evaluated using two physical phantoms and three digital phantoms, demonstrating its efficacy in quantitative DE-CBCT imaging.
Results:
In all studies, the proposed method achieved accurate image reconstruction from limited-angle DE-CBCT projection data without visible residual artifacts. In the digital phantom studies, it reduced the mean absolute error (MAE) from 309/290 HU to 14/20 HU, increased the peak signal-to-noise ratio (PSNR) from 40/39 dB to 70/67 dB, and improved the structural similarity index measure (SSIM) from 0.74/0.72 to 1.00/1.00.
Conclusions:
The proposed method achieves accurate optimization-based image reconstruction in limited-angle DE-CBCT, showing great practical value for clinical implementation.
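At its core, the method optimizes a per-energy data-fidelity term plus a term rewarding inter-spectral similarity between the two reconstructed images. Below is a minimal sketch of that objective, assuming a toy matrix stand-in for the limited-angle projector and a crude global (unwindowed) SSIM surrogate in place of the paper's regularizer; the weight lam is likewise assumed:

```python
import torch

def global_ssim(a, b, c1=1e-4, c2=9e-4):
    # SSIM formula evaluated over the whole image -- a crude differentiable
    # stand-in for the windowed inter-spectral SSIM regularizer in the paper.
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    num = (2 * mu_a * mu_b + c1) * (2 * cov + c2)
    den = (mu_a**2 + mu_b**2 + c1) * (a.var() + b.var() + c2)
    return num / den

# Toy limited-angle problem: a random matrix stands in for the projector,
# and fewer rays than pixels mimics the missing angular range.
torch.manual_seed(0)
n_pix, n_rays = 64, 40
A = torch.randn(n_rays, n_pix)
x_true_h, x_true_l = torch.rand(n_pix), torch.rand(n_pix)
y_h, y_l = A @ x_true_h, A @ x_true_l        # high/low-energy projections

x_h = torch.zeros(n_pix, requires_grad=True)
x_l = torch.zeros(n_pix, requires_grad=True)
opt = torch.optim.Adam([x_h, x_l], lr=1e-2)
lam = 0.1                                    # regularization weight (assumed)

for _ in range(500):
    opt.zero_grad()
    fidelity = ((A @ x_h - y_h) ** 2).sum() + ((A @ x_l - y_l) ** 2).sum()
    loss = fidelity + lam * (1.0 - global_ssim(x_h, x_l))  # reward similarity
    loss.backward()
    opt.step()
```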

Efficacy of an Automated Pulmonary Embolism (PE) Detection Algorithm on Routine Contrast-Enhanced Chest CT Imaging for Non-PE Studies.

Troutt HR, Huynh KN, Joshi A, Ling J, Refugio S, Cramer S, Lopez J, Wei K, Imanzadeh A, Chow DS

PubMed · Jun 25, 2025
The urgency to accelerate PE management and minimize patient risk has driven the development of artificial intelligence (AI) algorithms designed to provide a swift, accurate diagnosis on dedicated chest imaging (computed tomography pulmonary angiography; CTPA) for suspected PE. However, the accuracy of AI algorithms in detecting incidental PE on non-dedicated CT imaging studies remains unclear and untested. This study explores the potential for a commercial AI algorithm to identify incidental PE on non-dedicated contrast-enhanced chest CT studies. The Viz PE algorithm was deployed on 130 dedicated and 63 non-dedicated contrast-enhanced chest CT exams. On the non-dedicated studies, predictions were 90.48% accurate, with a sensitivity of 14% and a specificity of 100%. Overall, the Viz PE algorithm demonstrated an accuracy of 90.16%, with a specificity of 96% and a sensitivity of 41%. Although the high specificity is promising for ruling in PE, the low sensitivity is a limitation: the algorithm may miss a substantial number of true-positive incidental PEs. This study demonstrates that commercial AI detection tools hold promise as integral support for detecting PE, particularly when there is a strong clinical indication for their use; however, current limitations in sensitivity, especially for incidental cases, underscore the need for ongoing radiologist oversight.
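The non-dedicated-exam figures are internally consistent with, for example, 7 incidental-PE-positive exams of which 1 was flagged. The abstract reports rates rather than raw counts, so the confusion matrix below is an assumed breakdown used purely as a sanity check:

```python
# Sanity check of the non-dedicated-exam metrics under one assumed
# confusion matrix (the abstract gives rates, not raw counts):
# 63 exams, 7 with incidental PE, 1 of which was flagged.
tp, fn = 1, 6     # assumed true/missed positives
tn, fp = 56, 0    # specificity of 100% implies zero false positives

sensitivity = tp / (tp + fn)                  # 1/7  ~ 0.14
specificity = tn / (tn + fp)                  # 1.00
accuracy = (tp + tn) / (tp + tn + fp + fn)    # 57/63 = 0.9048
print(f"sens={sensitivity:.2f}  spec={specificity:.2f}  acc={accuracy:.4f}")
```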

Machine Learning-Based Risk Assessment of Myasthenia Gravis Onset in Thymoma Patients and Analysis of Their Correlations and Causal Relationships.

Liu W, Wang W, Zhang H, Guo M

PubMed · Jun 25, 2025
The study aims to use interpretable machine learning models to predict the risk of myasthenia gravis (MG) onset in thymoma patients and to investigate the intrinsic correlations and causal relationships between the two conditions. A comprehensive retrospective analysis was conducted on 172 thymoma patients diagnosed at two medical centers between 2018 and 2024. The cohort was split into a training set (n = 134) and a test set (n = 38) to develop and validate risk prediction models. Radiomic and deep features were extracted from tumor regions across three CT phases: non-enhanced, arterial, and venous. Through rigorous feature selection employing Spearman's rank correlation coefficient and LASSO (least absolute shrinkage and selection operator) regularization, 12 optimal imaging features were identified. These were integrated with 11 clinical parameters and one pathological-subtype variable to form a multi-dimensional feature matrix. Six machine learning algorithms were then implemented for model construction and comparative analysis. SHAP (SHapley Additive exPlanations) was used to interpret the models, and a doubly robust learner was employed to perform a potential causal analysis between thymoma and MG. All six models demonstrated satisfactory predictive capability, with the support vector machine (SVM) model performing best on the test cohort: it achieved an area under the curve (AUC) of 0.904 (95% confidence interval [CI] 0.798-1.000), outperforming logistic regression, the multilayer perceptron (MLP), and the other models. The model's predictive performance substantiates the strong correlation between thymoma and MG. The analysis also revealed a significant causal relationship between them: high-risk tumors elevated the risk of MG, with an average treatment effect (ATE) of 9.2%. This implies that thymoma patients with types B2 and B3 face a considerably higher risk of developing MG than those with types A, AB, and B1. The model provides a novel and effective tool for evaluating the risk of MG development in patients with thymoma. Furthermore, the correlation and causal analyses have unveiled pathways connecting the tumor to MG risk, with a notably higher incidence of MG in high-risk pathological subtypes. These insights contribute to a deeper understanding of MG and support a shift in practice from passive treatment to proactive intervention.
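The abstract names a doubly robust learner for the ATE but not a specific implementation. One common doubly robust construction is the AIPW (augmented inverse-propensity-weighting) estimator, sketched below with scikit-learn under the assumption of a binary high-risk-subtype indicator as treatment and MG onset as outcome; all data here is synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def aipw_ate(X, t, y):
    """Doubly robust (AIPW) estimate of the average treatment effect.
    X: covariates; t: binary high-risk-subtype indicator; y: MG onset."""
    # Propensity model e(x) = P(T=1 | X)
    e = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    e = np.clip(e, 0.05, 0.95)  # guard against extreme weights
    # Outcome models mu_t(x) = P(Y=1 | X, T=t), fit separately per arm
    mu1 = LogisticRegression(max_iter=1000).fit(X[t == 1], y[t == 1]).predict_proba(X)[:, 1]
    mu0 = LogisticRegression(max_iter=1000).fit(X[t == 0], y[t == 0]).predict_proba(X)[:, 1]
    # AIPW pseudo-outcome: consistent if either model is correctly specified
    psi = mu1 - mu0 + t * (y - mu1) / e - (1 - t) * (y - mu0) / (1 - e)
    return psi.mean()

# Synthetic stand-in for the 172-patient cohort; real covariates would be
# the selected radiomic, clinical, and pathological features.
rng = np.random.default_rng(0)
X = rng.normal(size=(172, 5))
t = rng.integers(0, 2, size=172)
y = (rng.random(172) < 0.20 + 0.09 * t).astype(int)
print(f"ATE ~ {aipw_ate(X, t, y):.3f}")
```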

[Advances in low-dose cone-beam computed tomography image reconstruction methods based on deep learning].

Shi J, Song Y, Li G, Bai S

PubMed · Jun 25, 2025
Cone-beam computed tomography (CBCT) is widely used in dentistry, surgery, radiotherapy and other medical fields. However, repeated CBCT scans expose patients to additional radiation doses, increasing the risk of secondary malignant tumors. Low-dose CBCT image reconstruction technology, which employs advanced algorithms to reduce radiation dose while enhancing image quality, has emerged as a focal point of recent research. This review systematically examined deep learning-based methods for low-dose CBCT reconstruction. It compared different network architectures in terms of noise reduction, artifact removal, detail preservation, and computational efficiency, covering three approaches: image-domain, projection-domain, and dual-domain techniques. The review also explored how emerging technologies like multimodal fusion and self-supervised learning could enhance these methods. By summarizing the strengths and weaknesses of current approaches, this work provides insights to optimize low-dose CBCT algorithms and support their clinical adoption.
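Of the three families the review covers, image-domain methods are the most direct: a network maps a noisy low-dose reconstruction to a routine-dose-quality image. A minimal residual-CNN sketch of that idea follows; the architecture is a generic illustration, not drawn from any specific paper in the review:

```python
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Minimal image-domain low-dose CBCT denoiser: the network predicts the
    noise/artifact residual, which is subtracted from the input slice."""
    def __init__(self, ch=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(ch, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)  # residual learning

# Training pairs would be (low-dose recon, routine-dose recon) slices.
model = ResidualDenoiser()
restored = model(torch.randn(1, 1, 128, 128))
```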

Recurrent Visual Feature Extraction and Stereo Attentions for CT Report Generation

Yuanhe Tian, Lei Mao, Yan Song

arXiv preprint · Jun 24, 2025
Generating reports for computed tomography (CT) images is a challenging task: although it resembles medical image report generation for other modalities, it has unique characteristics, such as the spatial encoding of multiple images and the alignment between the image volume and the text. Existing solutions typically use general 2D or 3D image processing techniques to extract features from a CT volume, first compressing the volume and then dividing the compressed CT slices into patches for visual encoding. These approaches do not explicitly account for the transformations among CT slices, nor do they effectively integrate multi-level image features, particularly those containing specific organ lesions, to guide CT report generation (CTRG). Considering the strong correlation among consecutive slices in CT scans, in this paper we propose a large language model (LLM) based CTRG method with recurrent visual feature extraction and stereo attentions for hierarchical feature modeling. Specifically, we use a vision Transformer to recurrently process each slice in a CT volume, and employ a set of attentions over the encoded slices, from different perspectives, to selectively obtain important visual information and align it with textual features, so as to better instruct an LLM for CTRG. Experimental results and further analysis on the benchmark M3D-Cap dataset show that our method outperforms strong baseline models and achieves state-of-the-art results, demonstrating its validity and effectiveness.
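A structural sketch of the described pipeline, recurrent per-slice encoding followed by attention over the slice sequence queried by text features, is given below. The module choices (a linear stand-in for the vision Transformer, a GRU for the recurrence, standard multi-head attention) are assumptions for illustration, not the authors' implementation:

```python
import torch
import torch.nn as nn

class RecurrentSliceEncoder(nn.Module):
    """Structural sketch: encode slices one by one, add inter-slice context
    with a recurrent layer, then attend over slices from the text side."""
    def __init__(self, feat_dim=256):
        super().__init__()
        # Stand-in for the vision Transformer slice encoder
        self.slice_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim))
        self.recurrent = nn.GRU(feat_dim, feat_dim, batch_first=True)
        # Attention over encoded slices, queried by textual features, to pick
        # visual evidence relevant to the report being generated
        self.cross_attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)

    def forward(self, volume, text_feats):
        # volume: (B, S, H, W) CT slices; text_feats: (B, T, feat_dim)
        B, S = volume.shape[:2]
        per_slice = torch.stack(
            [self.slice_encoder(volume[:, s]) for s in range(S)], dim=1)  # (B, S, D)
        slice_seq, _ = self.recurrent(per_slice)        # recurrent slice context
        fused, _ = self.cross_attn(text_feats, slice_seq, slice_seq)
        return fused  # would condition the LLM decoder for report generation

enc = RecurrentSliceEncoder()
out = enc(torch.randn(1, 16, 64, 64), torch.randn(1, 10, 256))  # (1, 10, 256)
```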

Predicting enamel depth distribution of maxillary teeth based on intraoral scanning: A machine learning study.

Chen D, He X, Li Q, Wang Z, Shen J, Shen J

PubMed · Jun 24, 2025
Measuring enamel depth distribution (EDD) is of great importance for preoperative design of tooth preparations, restorative aesthetic preview, and monitoring of enamel wear. However, there are currently no non-invasive methods to efficiently obtain EDD. This study aimed to develop a machine learning (ML) framework for noninvasive, radiation-free EDD prediction from intraoral scanning (IOS) images. Cone-beam computed tomography (CBCT) and IOS images of right maxillary central incisors, canines, and first premolars from 200 volunteers were included and preprocessed with surface parameterization. During the training stage, the EDD ground truths were obtained from CBCT. Five-dimensional feature vectors (incisal-gingival position, mesial-distal position, local surface curvature, incisal-gingival stretch, mesial-distal stretch) were extracted on the labial enamel surfaces and served as inputs to the ML models. An eXtreme gradient boosting (XGB) model was trained to map these features to enamel depth values. R² and mean absolute error (MAE) were used to evaluate the training accuracy of the XGB model. In the prediction stage, the predicted EDDs were compared with the ground truths, and the discrepancies were analyzed using a paired t-test and the Frobenius norm. The XGB model achieved superior training performance, with average R² and MAE values of 0.926 and 0.080, respectively. Independent validation confirmed its robust EDD prediction ability, showing no significant deviation from ground truth in the paired t-test and low prediction errors (Frobenius norm: 12.566-18.312), despite minor noise in the IOS-based predictions. This study provides preliminary validation of an IOS-based ML model for high-quality EDD prediction.
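The regression setup is straightforward to reproduce in outline: each labial-surface point contributes a five-dimensional feature vector and a CBCT-derived depth target. A sketch with the xgboost package follows, using synthetic stand-in data and assumed hyperparameters:

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error

# Synthetic stand-in: each row is one labial-surface point with the five
# described features; the target is the CBCT-derived enamel depth.
rng = np.random.default_rng(0)
X = rng.random((5000, 5))  # [IG position, MD position, curvature, IG stretch, MD stretch]
y = 0.5 + 0.8 * X[:, 0] - 0.3 * X[:, 2] + 0.05 * rng.normal(size=5000)  # toy depth

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=5, learning_rate=0.05)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R2={r2_score(y_te, pred):.3f}  MAE={mean_absolute_error(y_te, pred):.3f}")
```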

Differentiating adenocarcinoma and squamous cell carcinoma in lung cancer using semi automated segmentation and radiomics.

Vijitha R, Wickramasinghe WMIS, Perera PAS, Jayatissa RMGCSB, Hettiarachchi RT, Alwis HARV

PubMed · Jun 24, 2025
Adenocarcinoma (AD) and squamous cell carcinoma (SCC) are frequently observed forms of non-small cell lung cancer (NSCLC) and play a significant role in global cancer mortality. This research categorizes NSCLC subtypes using computer-assisted semi-automatic segmentation and radiomic features for model development. The study included 80 patients (50 AD, 30 SCC), analyzed with 3D Slicer software, from which 107 quantitative radiomic features were extracted per patient. After eliminating correlated attributes, a LASSO binary logistic regression model with 10-fold cross-validation was used for feature selection. The Shapiro-Wilk test assessed radiomic score normality, and the Mann-Whitney U test compared score distributions. Random forest (RF) and support vector machine (SVM) models were implemented for subtype classification. Receiver operating characteristic (ROC) curves showed moderate predictive ability for the radiomics score, with a training-set area under the curve (AUC) of 0.679 (95% CI, 0.541-0.871) and a validation-set AUC of 0.560 (95% CI, 0.342-0.778). Rad-score distributions were normal for AD but not for SCC. The RF and SVM models built on the selected features achieved accuracies of 0.73 and 0.87, with respective AUC values of 0.54 and 0.87. These findings support the view that the two NSCLC subtypes can be differentiated. The study demonstrates that radiomic analysis improves diagnostic accuracy and offers a non-invasive alternative; however, the AUCs and ROC curves of the machine learning models must be critically evaluated to ensure clinical acceptability. If robust, such models could reduce the need for biopsies and enhance personalized treatment planning. Further research is needed to validate these findings and integrate radiomics into NSCLC clinical practice.
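The described workflow, scaling, LASSO-style sparse feature selection, then an SVM (or RF) classifier scored by cross-validated AUC, maps naturally onto a scikit-learn Pipeline. The sketch below uses synthetic features and assumed regularization settings:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 80 patients x 107 radiomic features, AD=0 / SCC=1
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 107))
y = (X[:, 0] - X[:, 1] + rng.normal(size=80) > 0).astype(int)  # toy label

pipe = Pipeline([
    ("scale", StandardScaler()),
    # L1-penalized logistic regression as the LASSO-style sparse selector
    ("select", SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=1.0))),
    ("clf", SVC(kernel="rbf", probability=True)),
])
auc = cross_val_score(pipe, X, y, cv=10, scoring="roc_auc")
print(f"10-fold AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```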

MedErr-CT: A Visual Question Answering Benchmark for Identifying and Correcting Errors in CT Reports

Sunggu Kyung, Hyungbin Park, Jinyoung Seo, Jimin Sung, Jihyun Kim, Dongyeong Kim, Wooyoung Jo, Yoojin Nam, Sangah Park, Taehee Kwon, Sang Min Lee, Namkug Kim

arXiv preprint · Jun 24, 2025
Computed Tomography (CT) plays a crucial role in clinical diagnosis, but the growing demand for CT examinations has raised concerns about diagnostic errors. While Multimodal Large Language Models (MLLMs) demonstrate promising comprehension of medical knowledge, their tendency to produce inaccurate information highlights the need for rigorous validation. However, existing medical visual question answering (VQA) benchmarks primarily focus on simple visual recognition tasks, lacking clinical relevance and failing to assess expert-level knowledge. We introduce MedErr-CT, a novel benchmark for evaluating medical MLLMs' ability to identify and correct errors in CT reports through a VQA framework. The benchmark includes six error categories - four vision-centric errors (Omission, Insertion, Direction, Size) and two lexical error types (Unit, Typo) - and is organized into three task levels: classification, detection, and correction. Using this benchmark, we quantitatively assess the performance of state-of-the-art 3D medical MLLMs, revealing substantial variation in their capabilities across different error types. Our benchmark contributes to the development of more reliable and clinically applicable MLLMs, ultimately helping reduce diagnostic errors and improve accuracy in clinical practice. The code and datasets are available at https://github.com/babbu3682/MedErr-CT.
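To make the six-category, three-level structure concrete, here is a hypothetical illustration of what a single benchmark item might contain; the actual schema and fields are defined in the authors' repository linked above:

```python
# Hypothetical illustration only; the real schema lives in the repository.
sample_item = {
    "task_level": "correction",      # classification | detection | correction
    "error_category": "Direction",   # one of the six categories above
    "ct_volume": "case_0042.nii.gz",
    "report_with_error": "A 12 mm nodule is seen in the left upper lobe ...",
    "question": "Correct the erroneous statement in this report.",
    "answer": "The nodule is in the right upper lobe, not the left.",
}
```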

ReMAR-DS: Recalibrated Feature Learning for Metal Artifact Reduction and CT Domain Transformation

Mubashara Rehman, Niki Martinel, Michele Avanzo, Riccardo Spizzo, Christian Micheloni

arXiv preprint · Jun 24, 2025
Artifacts in kilovoltage CT (kVCT) imaging degrade image quality and impact clinical decisions. We propose a deep learning framework for metal artifact reduction (MAR) and domain transformation from kVCT to megavoltage CT (MVCT). The proposed framework, ReMAR-DS, uses an encoder-decoder architecture with enhanced feature recalibration, effectively reducing artifacts while preserving anatomical structures, so that only relevant information is used in the reconstruction process. By infusing recalibrated features from the encoder block, the model focuses on relevant spatial regions (e.g., areas with artifacts) and highlights key features across channels (e.g., anatomical structures), leading to improved reconstruction of artifact-corrupted regions. Unlike traditional MAR methods, our approach bridges the gap between high-resolution kVCT and artifact-resistant MVCT, enhancing radiotherapy planning. It produces high-quality MVCT-like reconstructions, validated through qualitative and quantitative evaluations. Clinically, this enables oncologists to rely on kVCT alone, reducing repeated high-dose MVCT scans and lowering radiation exposure for cancer patients.
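The abstract's "feature recalibration" is not specified further; one common reading is squeeze-and-excitation-style channel reweighting, sketched below as an assumption about the mechanism rather than the authors' code:

```python
import torch
import torch.nn as nn

class ChannelRecalibration(nn.Module):
    """Squeeze-and-excitation-style channel recalibration -- one plausible
    reading of the recalibration mechanism, not the authors' code."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                # x: (B, C, H, W) encoder features
        w = self.fc(x.mean(dim=(2, 3)))  # squeeze: global average pooling
        return x * w[:, :, None, None]   # excite: reweight channels

feats = torch.randn(2, 64, 32, 32)
out = ChannelRecalibration(64)(feats)    # recalibrated features feed the decoder
```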

VoxelOpt: Voxel-Adaptive Message Passing for Discrete Optimization in Deformable Abdominal CT Registration

Hang Zhang, Yuxi Zhang, Jiazheng Wang, Xiang Chen, Renjiu Hu, Xin Tian, Gaolei Li, Min Liu

arXiv preprint · Jun 24, 2025
Recent developments in neural networks have improved deformable image registration (DIR) by amortizing iterative optimization, enabling fast and accurate DIR results. However, learning-based methods often face challenges with limited training data and large deformations, and tend to underperform iterative approaches when label supervision is unavailable. While iterative methods can achieve higher accuracy in such scenarios, they are considerably slower than learning-based methods. To address these limitations, we propose VoxelOpt, a discrete optimization-based DIR framework that combines the strengths of learning-based and iterative methods to achieve a better balance between registration accuracy and runtime. VoxelOpt uses displacement entropy from local cost volumes to measure displacement signal strength at each voxel, which differs from earlier approaches in three key aspects. First, it introduces voxel-wise adaptive message passing, where voxels with lower entropy receive less influence from their neighbors. Second, it employs a multi-level image pyramid with 27-neighbor cost volumes at each level, avoiding exponential complexity growth. Third, it replaces hand-crafted features or contrastive learning with a pretrained foundational segmentation model for feature extraction. In abdominal CT registration, these changes allow VoxelOpt to outperform leading iterative methods in both efficiency and accuracy, while matching state-of-the-art learning-based methods trained with label supervision. The source code will be available at https://github.com/tinymilky/VoxelOpt
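The displacement-entropy signal that drives the adaptive message passing is simple to compute from a local cost volume: softmax the (negated) costs over the 27 candidate displacements and take the entropy of the resulting distribution. A sketch, with the tensor shapes and temperature assumed:

```python
import torch

def displacement_entropy(cost_volume, temperature=1.0):
    """cost_volume: (B, 27, D, H, W) matching costs over the 27 candidate
    displacements at each voxel. Low entropy means a peaked (confident)
    distribution, so that voxel should take less influence from neighbors."""
    probs = torch.softmax(-cost_volume / temperature, dim=1)  # low cost -> high prob
    ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)  # (B, D, H, W)
    return ent / torch.log(torch.tensor(27.0))                # normalize to [0, 1]

cv = torch.randn(1, 27, 8, 8, 8)
print(displacement_entropy(cv).shape)  # torch.Size([1, 8, 8, 8])
```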
