
Diagnostic value of fully automated CT pulmonary angiography in patients with chronic thromboembolic pulmonary hypertension and chronic thromboembolic disease.

Lin Y, Li M, Xie S

PubMed · May 20, 2025
To evaluate the value of employing artificial intelligence (AI)-assisted CT pulmonary angiography (CTPA) for patients with chronic thromboembolic pulmonary hypertension (CTEPH) and chronic thromboembolic disease (CTED). A single-center, retrospective analysis of 350 sequential patients with right heart catheterization (RHC)-confirmed CTEPH, CTED, and normal controls was conducted. Parameters such as the main pulmonary artery diameter (MPAd), the ratio of MPA to ascending aorta diameter (MPAd/AAd), the ratio of right to left ventricle diameter (RVd/LVd), and the ratio of RV to LV volume (RVv/LVv) were evaluated using automated AI software and compared with manual analysis. Reliability was assessed through an intraclass correlation coefficient (ICC) analysis, and diagnostic accuracy was determined using receiver-operating characteristic (ROC) curves. Compared to the CTED and control groups, CTEPH patients were significantly more likely to have elevated automated CTPA metrics (all p < 0.001). Automated MPAd, MPAd/AAd, and RVv/LVv correlated strongly with mPAP (r = 0.952, 0.904, and 0.815, respectively; all p < 0.001). The automated and manual CTPA analyses showed strong concordance. For the CTEPH and CTED categories, the optimal area under the curve (AU-ROC) reached 0.939 (CI: 0.908-0.969); for the CTEPH and control groups, the best AU-ROC was 0.970 (CI: 0.953-0.988); and for the CTED and control groups, the best AU-ROC was 0.782 (CI: 0.724-0.840). Automated AI-driven CTPA analysis provides a dependable approach for evaluating patients with CTEPH, CTED, and normal controls, demonstrating excellent consistency and efficiency.
Question Guidelines do not advocate for applying CTEPH treatment protocols to patients with CTED; early detection of the condition is crucial.
Findings Automated CTPA analysis was feasible in 100% of patients, with good agreement, and would have added information for early detection and identification.
Clinical relevance Automated AI-driven CTPA analysis provides a reliable approach demonstrating excellent consistency and efficiency. Additionally, these noninvasive imaging findings may aid in treatment stratification and determining optimal intervention directed by RHC.
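The ROC analysis this abstract reports can be illustrated with a short sketch: an AUC plus a percentile-bootstrap 95% CI on synthetic data. This is not the study's code; the variable `mpad`, the toy distributions, and the bootstrap settings are illustrative assumptions.

```python
# Sketch (not the authors' code): ROC-AUC with a bootstrap confidence
# interval, mirroring the abstract's diagnostic-accuracy analysis.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy data: 1 = CTEPH, 0 = control; "mpad" stands in for an automated
# CTPA metric such as main pulmonary artery diameter (values assumed).
labels = np.array([1] * 30 + [0] * 30)
mpad = np.concatenate([rng.normal(34, 3, 30), rng.normal(27, 3, 30)])

auc = roc_auc_score(labels, mpad)

# Percentile bootstrap for a 95% CI on the AUC.
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(labels), len(labels))
    if len(set(labels[idx])) < 2:  # need both classes in the resample
        continue
    boot.append(roc_auc_score(labels[idx], mpad[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

The bootstrap avoids distributional assumptions; parametric CIs (e.g. DeLong) are the more common choice in clinical reporting.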

Non-Invasive Tumor Budding Evaluation and Correlation with Treatment Response in Bladder Cancer: A Multi-Center Cohort Study.

Li X, Zou C, Wang C, Chang C, Lin Y, Liang S, Zheng H, Liu L, Deng K, Zhang L, Liu B, Gao M, Cai P, Lao J, Xu L, Wu D, Zhao X, Wu X, Li X, Luo Y, Zhong W, Lin T

PubMed · May 20, 2025
The clinical benefits of neoadjuvant chemoimmunotherapy (NACI) have been demonstrated in patients with bladder cancer (BCa); however, more than half fail to achieve a pathological complete response (pCR). This study utilizes multi-center cohorts of 2322 patients with pathologically diagnosed BCa, collected between January 1, 2014, and December 31, 2023, to explore the correlation of tumor budding (TB) status with NACI response and disease prognosis. A deep learning model is developed to noninvasively evaluate TB status based on CT images. The deep learning model accurately predicts TB status, with area under the curve values of 0.932 (95% confidence interval: 0.898-0.965) in the training cohort, 0.944 (0.897-0.991) in the internal validation cohort, 0.882 (0.832-0.933) in external validation cohort 1, 0.944 (0.908-0.981) in external validation cohort 2, and 0.854 (0.739-0.970) in the NACI validation cohort. Patients predicted to have a high TB status exhibit a worse prognosis (p < 0.05) and a lower pCR rate of 25.9% (7/20) than those predicted to have a low TB status (pCR rate: 73.9% [17/23]; p < 0.001). Hence, this model may be a reliable, noninvasive tool for predicting TB status, aiding clinicians in prognosis assessment and NACI strategy formulation.
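The difference in pCR rates between predicted high- and low-TB groups is a standard two-proportion comparison. A minimal sketch using Fisher's exact test on the counts as printed above (7/20 vs 17/23), treated purely as an illustration since the abstract reports its own p-value:

```python
# Sketch: Fisher's exact test on the abstract's printed pCR counts.
# This is an illustration, not a reproduction of the study's analysis.
from scipy.stats import fisher_exact

#            pCR   no pCR
high_tb = [   7,     13]   # predicted high tumor-budding status
low_tb  = [  17,      6]   # predicted low tumor-budding status

odds_ratio, p_value = fisher_exact([high_tb, low_tb])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```

An odds ratio below 1 here corresponds to lower pCR odds in the high-TB group.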

CT-guided CBCT Multi-Organ Segmentation Using a Multi-Channel Conditional Consistency Diffusion Model for Lung Cancer Radiotherapy.

Chen X, Qiu RLJ, Pan S, Shelton J, Yang X, Kesarwala AH

PubMed · May 20, 2025
In cone beam computed tomography (CBCT)-guided adaptive radiotherapy, rapid and precise segmentation of organs-at-risk (OARs) is essential for accurate dose verification and online replanning. The quality of CBCT images obtained with current onboard CBCT imagers and clinical imaging protocols, however, is often compromised by artifacts such as scatter and motion, particularly for thoracic CBCTs. These artifacts not only degrade image contrast but also obscure anatomical boundaries, making accurate segmentation on CBCT images significantly more challenging than on planning CT images. To address these persistent challenges, we propose a novel multi-channel conditional consistency diffusion model (MCCDM) for segmentation of OARs in thoracic CBCT images (CBCT-MCCDM), which harnesses its domain transfer capabilities to improve segmentation accuracy across imaging modalities. By jointly training the MCCDM with CT images and their corresponding masks, our framework enables an end-to-end mapping learning process that generates accurate segmentation of OARs. The CBCT-MCCDM was used to delineate the esophagus, heart, left and right lungs, and spinal cord on CBCT images from each patient with lung cancer. We quantitatively evaluated our approach by comparing model-generated contours with ground-truth contours from 33 patients with lung cancer treated with 5-fraction stereotactic body radiation therapy (SBRT), demonstrating its potential to enhance segmentation accuracy despite challenging CBCT artifacts. The proposed method was evaluated using average Dice similarity coefficient (DSC), sensitivity, specificity, 95th percentile Hausdorff distance (HD95), and mean surface distance (MSD) for each of the five OARs. The method achieved average DSC values of 0.82, 0.88, 0.95, 0.96, and 0.96 for the esophagus, heart, left lung, right lung, and spinal cord, respectively.
Sensitivity values were 0.813, 0.922, 0.956, 0.958, and 0.929, respectively, while specificity values were 0.991, 0.994, 0.996, 0.996, and 0.995, respectively. We compared the proposed method with two state-of-the-art methods, a CBCT-only method and U-Net, and demonstrated the superior performance of the proposed CBCT-MCCDM.
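The overlap metrics reported above (Dice, sensitivity, specificity) are straightforward to compute from binary masks. A minimal sketch with NumPy, on an assumed toy example rather than the paper's data:

```python
# Sketch (assumed implementation, not the paper's code): segmentation
# overlap metrics from binary masks.
import numpy as np

def overlap_metrics(pred: np.ndarray, truth: np.ndarray):
    """Return (dice, sensitivity, specificity) for boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity

# Tiny worked example: a 4-voxel ground truth and a 3-voxel prediction
# that lies entirely inside it (tp=3, fp=0, fn=1, tn=12).
truth = np.zeros((4, 4), dtype=bool); truth[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool);  pred[1:3, 1:2] = True
pred[1, 2] = True
dice, sens, spec = overlap_metrics(pred, truth)
print(dice, sens, spec)  # dice = 6/7, sensitivity = 0.75, specificity = 1.0
```

The same counts extend directly to 3D volumes; surface metrics such as HD95 and MSD need a distance transform and are usually taken from a library.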

Thymoma habitat segmentation and risk prediction model using CT imaging and K-means clustering.

Liang Z, Li J, He S, Li S, Cai R, Chen C, Zhang Y, Deng B, Wu Y

PubMed · May 19, 2025
Thymomas, though rare, present a wide range of clinical behaviors, from indolent to aggressive forms, making accurate risk stratification crucial for treatment planning. Traditional methods such as histopathology and radiological assessment often fail to capture tumor heterogeneity, which can impact prognosis. Radiomics, combined with machine learning, provides a way to extract and analyze quantitative imaging features, offering the potential to improve tumor classification and risk prediction. By segmenting tumors into distinct habitat zones, intratumoral heterogeneity can be assessed more effectively. This study employs radiomics and machine learning techniques to enhance thymoma risk prediction, aiming to improve diagnostic consistency and reduce variability in radiologists' assessments. Specifically, it aims to identify distinct habitat zones within thymomas through CT imaging feature analysis, to establish a predictive model differentiating high- and low-risk thymomas, and to explore how this model can assist radiologists. We obtained CT imaging data from 133 patients with thymoma treated at the Affiliated Hospital of Guangdong Medical University from 2015 to 2023. Images from the plain scan phase, venous phase, arterial phase, and their differential (subtracted) images were used. Tumor regions were segmented into three habitat zones using K-means clustering. Imaging features from each habitat zone were extracted using the PyRadiomics (van Griethuysen, 2017) library. The 28 most distinguishing features were selected through Mann-Whitney U tests (Mann, 1947) and Spearman's correlation analysis (Spearman, 1904).
Five predictive models were built using the same machine learning algorithm (Support Vector Machine [SVM]): Habitat1, Habitat2, Habitat3 (trained on features from individual tumor habitat regions), Habitat All (trained on combined features from all regions), and Intra (trained on intratumoral features); their performances were evaluated for comparison. The models' diagnostic outcomes were compared with the diagnoses of four radiologists (two junior and two experienced physicians). The area under the curve (AUC) was 0.818 for habitat zone 1, 0.732 for habitat zone 2, and 0.763 for habitat zone 3. The comprehensive model, which combined data from all habitat zones, achieved an AUC of 0.960, outperforming the model based on traditional radiomic features (AUC of 0.720). The model significantly improved the diagnostic accuracy of all four radiologists: the AUCs for junior radiologists 1 and 2 increased from 0.747 and 0.775 to 0.932 and 0.972, respectively, while for experienced radiologists 1 and 2, the AUCs increased from 0.932 and 0.859 to 0.977 and 0.972, respectively. This study successfully identified distinct habitat zones within thymomas through CT imaging feature analysis and developed an efficient predictive model that significantly improved diagnostic accuracy. This model offers a novel tool for risk assessment of thymomas and can aid in guiding clinical decision-making.
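The habitat-zone step described above — K-means clustering of per-voxel imaging features into three subregions — can be sketched as follows. The synthetic per-voxel features, cluster centers, and the three-phase interpretation are illustrative assumptions, not the study's data or pipeline.

```python
# Sketch under stated assumptions: K-means partitioning of tumor voxels
# into three "habitat" zones from per-voxel intensity features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Per-voxel feature vectors (e.g. attenuation in plain/arterial/venous
# phases) for 300 tumor voxels drawn from three synthetic subregions.
voxels = np.vstack([
    rng.normal([30, 60, 50], 4, (100, 3)),   # necrotic-like core
    rng.normal([45, 90, 80], 4, (100, 3)),   # enhancing rim
    rng.normal([38, 75, 65], 4, (100, 3)),   # intermediate zone
])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(voxels)
habitat_labels = km.labels_              # one habitat id per voxel
print(np.bincount(habitat_labels))       # voxels per habitat zone
```

In the study's workflow, radiomics features would then be extracted per habitat and filtered (Mann-Whitney U, Spearman correlation) before SVM training.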

Non-orthogonal kV imaging guided patient position verification in non-coplanar radiation therapy with dataset-free implicit neural representation.

Ye S, Chen Y, Wang S, Xing L, Gao Y

PubMed · May 19, 2025
Cone-beam CT (CBCT) is crucial for patient alignment and target verification in radiation therapy (RT). However, for non-coplanar beams, potential collisions between the treatment couch and the on-board imaging system limit the range over which the gantry can be rotated. Limited-angle measurements are often insufficient to generate high-quality volumetric images for image-domain registration, therefore limiting the use of CBCT for position verification. An alternative to image-domain registration is to use a few 2D projections acquired by the onboard kV imager to register with the 3D planning CT for patient position verification, referred to as 2D-3D registration. The 2D-3D registration involves converting the 3D volume into a set of digitally reconstructed radiographs (DRRs) expected to be comparable to the acquired 2D projections. A domain gap between the generated DRRs and the acquired projections can arise from inaccurate geometry modeling in DRR generation and from artifacts in the actual acquisitions. We aim to improve the efficiency and accuracy of the challenging 2D-3D registration problem in non-coplanar RT with limited-angle CBCT scans. We designed an accelerated, dataset-free, and patient-specific 2D-3D registration framework based on an implicit neural representation (INR) network and a composite similarity measure. The INR network consists of a lightweight three-layer multilayer perceptron followed by average pooling to calculate rigid motion parameters, which are used to transform the original 3D volume to the moving position. The Radon transform and imaging specifications at the moving position are used to generate DRRs with higher accuracy. We designed a composite similarity measure consisting of pixel-wise intensity differences and gradient differences between the generated DRRs and acquired projections to further reduce the impact of their domain gap on registration accuracy.
We evaluated the proposed method on both simulation data and real phantom data acquired from a Varian TrueBeam machine. Comparisons with a conventional non-deep-learning registration approach and ablation studies on the composite similarity measure were conducted to demonstrate the efficacy of the proposed method. In the simulation experiments, two X-ray projections of a head-and-neck image with a 45° angular discrepancy were used for the registration. Accuracy was evaluated in experiments set up at four different moving positions with ground-truth moving parameters. The proposed method achieved sub-millimeter accuracy in translations and sub-degree accuracy in rotations. In the phantom experiments, a head-and-neck phantom was scanned at three different positions involving couch translations and rotations. We achieved translation errors of < 2 mm and sub-degree accuracy for pitch and roll. Experiments on registration using different numbers of projections with varying angle discrepancies demonstrated the improved accuracy and robustness of the proposed method compared to both the conventional registration approach and the proposed approach without certain components of the composite similarity measure. We proposed a dataset-free, lightweight INR-based registration with a composite similarity measure for the challenging 2D-3D registration problem with limited-angle CBCT scans. Comprehensive evaluations on both simulation data and experimental phantom data demonstrated the efficiency, accuracy, and robustness of the proposed method.
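The composite similarity measure described above combines pixel-wise intensity differences with gradient differences between a DRR and an acquired projection. A minimal sketch of that idea; the exact functional form and the weighting `alpha` are assumptions, not the authors' loss:

```python
# Sketch (assumed form, not the paper's exact loss): a composite
# similarity combining intensity MSE with a gradient-difference term.
import numpy as np

def composite_similarity(drr: np.ndarray, proj: np.ndarray, alpha=0.5):
    """Lower is better: intensity MSE plus weighted gradient MSE."""
    intensity_term = np.mean((drr - proj) ** 2)
    gx_d, gy_d = np.gradient(drr)
    gx_p, gy_p = np.gradient(proj)
    gradient_term = np.mean((gx_d - gx_p) ** 2 + (gy_d - gy_p) ** 2)
    return intensity_term + alpha * gradient_term

# A projection compared with itself scores zero; a shifted copy scores
# worse, which is what a registration optimizer exploits.
proj = np.outer(np.hanning(64), np.hanning(64))
shifted = np.roll(proj, 3, axis=0)
print(composite_similarity(proj, proj), composite_similarity(shifted, proj))
```

The gradient term emphasizes edges, which helps when a global intensity offset separates DRRs from real acquisitions.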

Effectiveness of Artificial Intelligence in detecting sinonasal pathology using clinical imaging modalities: a systematic review.

Petsiou DP, Spinos D, Martinos A, Muzaffar J, Garas G, Georgalas C

PubMed · May 19, 2025
Sinonasal pathology can be complex and requires a systematic and meticulous approach. Artificial Intelligence (AI) has the potential to improve diagnostic accuracy and efficiency in sinonasal imaging, but its clinical applicability remains an area of ongoing research. This systematic review evaluates the methodologies and clinical relevance of AI in detecting sinonasal pathology through radiological imaging. Key search terms included "artificial intelligence," "deep learning," "machine learning," "neural network," and "paranasal sinuses." Abstract and full-text screening was conducted using predefined inclusion and exclusion criteria. Data were extracted on study design, AI architectures used (e.g., convolutional neural networks (CNNs), machine learning classifiers), and clinical characteristics such as imaging modality (e.g., computed tomography (CT), magnetic resonance imaging (MRI)). A total of 53 studies were analyzed, with 85% retrospective, 68% single-center, and 92.5% using internal databases. CT was the most common imaging modality (60.4%), and chronic rhinosinusitis without nasal polyposis (CRSsNP) was the most studied condition (34.0%). Forty-one studies employed neural networks, with classification the most frequent AI task (35.8%). Key performance metrics included area under the curve (AUC), accuracy, sensitivity, specificity, precision, and F1-score. Quality assessment based on CONSORT-AI yielded a mean score of 16.0 ± 2. AI shows promise in improving sinonasal imaging interpretation. However, as existing research is predominantly retrospective and single-center, further studies are needed to evaluate AI's generalizability and applicability. More research is also required to explore AI's role in treatment planning and post-treatment prediction for clinical integration.

Artificial intelligence based pulmonary vessel segmentation: an opportunity for automated three-dimensional planning of lung segmentectomy.

Mank QJ, Thabit A, Maat APWM, Siregar S, Van Walsum T, Kluin J, Sadeghi AH

PubMed · May 19, 2025
This study aimed to develop an automated method for pulmonary artery and vein segmentation in both left and right lungs from computed tomography (CT) images using artificial intelligence (AI). The segmentations were evaluated using PulmoSR software, which provides 3D visualizations of patient-specific anatomy, potentially enhancing a surgeon's understanding of the lung structure. A dataset of 125 CT scans from lung segmentectomy patients at Erasmus MC was used. Manual annotations for pulmonary arteries and veins were created with 3D Slicer. nnU-Net models were trained for both lungs and assessed using Dice score, sensitivity, and specificity. Intraoperative recordings demonstrated clinical applicability. A paired t-test evaluated the statistical significance of differences between automatic and manual segmentations. The nnU-Net model, trained at full 3D resolution, achieved a mean Dice score between 0.91 and 0.92. The mean sensitivity and specificity were: left artery, 0.86 and 0.99; right artery, 0.84 and 0.99; left vein, 0.85 and 0.99; right vein, 0.85 and 0.99. The automatic method reduced segmentation time from approximately 1.5 hours to under 5 minutes. Five cases were evaluated to demonstrate how the segmentations support lung segmentectomy procedures. P-values for Dice scores were all below 0.01, indicating statistical significance. The nnU-Net models successfully performed automatic segmentation of pulmonary arteries and veins in both lungs. When integrated with visualization tools, these automatic segmentations can enhance preoperative and intraoperative planning by providing detailed 3D views of a patient's anatomy.
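The paired t-test mentioned above compares two methods on the same cases. A minimal sketch on illustrative per-patient Dice scores (synthetic, not the study's measurements):

```python
# Sketch (illustrative data): a paired t-test on per-case Dice scores
# from two segmentation methods, as the abstract describes for the
# automatic-vs-manual comparison.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(7)

# Paired per-patient Dice scores for the same 20 cases (assumed values).
manual_dice = np.clip(rng.normal(0.88, 0.02, 20), 0, 1)
auto_dice = np.clip(manual_dice + rng.normal(0.03, 0.01, 20), 0, 1)

t_stat, p_value = ttest_rel(auto_dice, manual_dice)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Pairing removes between-patient variability, which is why it is preferred over an unpaired test when both methods segment the same scans.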

Effect of low-dose colchicine on pericoronary inflammation and coronary plaque composition in chronic coronary disease: a subanalysis of the LoDoCo2 trial.

Fiolet ATL, Lin A, Kwiecinski J, Tutein Nolthenius J, McElhinney P, Grodecki K, Kietselaer B, Opstal TS, Cornel JH, Knol RJ, Schaap J, Aarts RAHM, Tutein Nolthenius AMFA, Nidorf SM, Velthuis BK, Dey D, Mosterd A

PubMed · May 19, 2025
Low-dose colchicine (0.5 mg once daily) reduces the risk of major cardiovascular events in coronary disease, but its mechanism of action is not yet fully understood. We investigated whether low-dose colchicine is associated with changes in pericoronary inflammation and plaque composition in patients with chronic coronary disease. We performed a cross-sectional, nationwide subanalysis of the Low-Dose Colchicine 2 Trial (LoDoCo2, n=5522). Coronary CT angiography studies were performed in 151 participants randomised to colchicine or placebo after a median treatment duration of 28.2 months. Pericoronary adipose tissue (PCAT) attenuation measurements around proximal coronary artery segments and quantitative plaque analysis for the entire coronary tree were performed using artificial intelligence-enabled plaque analysis software. Median PCAT attenuation was not significantly different between the two groups (-79.5 Hounsfield units (HU) for colchicine versus -78.7 HU for placebo, p=0.236). Participants assigned to colchicine had a higher volume (169.6 mm<sup>3</sup> vs 113.1 mm<sup>3</sup>, p=0.041) and burden (9.6% vs 7.0%, p=0.035) of calcified plaque, and a higher volume of dense calcified plaque (192.8 mm<sup>3</sup> vs 144.3 mm<sup>3</sup>, p=0.048) compared with placebo, independent of statin therapy. Colchicine treatment was associated with a lower burden of low-attenuation plaque in participants on a low-intensity statin, but not in those on a high-intensity statin (p<sub>interaction</sub>=0.037). Pericoronary inflammation did not differ between participants who received low-dose colchicine and those who received placebo. Low-dose colchicine was associated with a higher volume of calcified plaque, particularly dense calcified plaque, which is considered a feature of plaque stability.

Non-invasive CT based multiregional radiomics for predicting pathologic complete response to preoperative neoadjuvant chemoimmunotherapy in non-small cell lung cancer.

Fan S, Xie J, Zheng S, Wang J, Zhang B, Zhang Z, Wang S, Cui Y, Liu J, Zheng X, Ye Z, Cui X, Yue D

PubMed · May 19, 2025
This study aims to develop and validate a multiregional radiomics model to predict pathological complete response (pCR) to neoadjuvant chemoimmunotherapy in non-small cell lung cancer (NSCLC), and to evaluate the model's performance in specific subgroups (N2 stage and anti-PD-1/PD-L1). 216 patients with NSCLC who underwent neoadjuvant chemoimmunotherapy followed by surgical intervention were included and randomly assigned to training and validation sets. From pre-treatment baseline CT, one intratumoral (T) and two peritumoral regions (P<sub>3</sub>: 0-3 mm; P<sub>6</sub>: 0-6 mm) were extracted. Five radiomics models were developed using machine learning algorithms to predict pCR, utilizing selected features from the intratumoral (T), peritumoral (P<sub>3</sub>, P<sub>6</sub>), and combined intra- and peritumoral regions (T + P<sub>3</sub>, T + P<sub>6</sub>). Additionally, the predictive efficacy of the optimal model was assessed for patients in the N2 stage and anti-PD-1/PD-L1 subgroups. A total of 51.4% (111/216) of patients achieved pCR following neoadjuvant chemoimmunotherapy. Multivariable analysis identified the T + P<sub>3</sub> radiomics signature as the only independent predictor of pCR (P < 0.001). The multiregional radiomics model (T + P<sub>3</sub>) exhibited superior predictive performance for pCR, achieving an area under the curve (AUC) of 0.75 in the validation cohort. Furthermore, this multiregional model maintained robust predictive accuracy in both the N2 stage and anti-PD-1/PD-L1 subgroups, with AUCs of 0.829 and 0.833, respectively. The proposed multiregional radiomics model showed potential for predicting pCR in NSCLC after neoadjuvant chemoimmunotherapy and demonstrated good predictive performance in specific subgroups. This capability may assist clinicians in identifying suitable candidates for neoadjuvant chemoimmunotherapy and advance precision therapy.
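The peritumoral regions described above (e.g. a 0-3 mm shell around the tumor) are commonly derived by morphologically dilating the tumor mask and subtracting the original. A sketch under assumptions — 1 mm isotropic voxels and a 2D toy mask — not the authors' pipeline:

```python
# Sketch under stated assumptions: a peritumoral ring built by binary
# dilation of a tumor mask (scipy's default cross-shaped structuring
# element; 1 iteration per voxel of margin).
import numpy as np
from scipy.ndimage import binary_dilation

def peritumoral_ring(tumor: np.ndarray, margin_vox: int) -> np.ndarray:
    """Boolean shell extending `margin_vox` voxels beyond the tumor."""
    dilated = binary_dilation(tumor, iterations=margin_vox)
    return dilated & ~tumor

# 2D toy example: a 5x5 square "tumor" on a 20x20 grid; with assumed
# 1 mm voxels, 3 iterations approximate a 0-3 mm peritumoral region.
tumor = np.zeros((20, 20), dtype=bool)
tumor[8:13, 8:13] = True
ring_3mm = peritumoral_ring(tumor, 3)
print(tumor.sum(), ring_3mm.sum())
```

For anisotropic CT spacing, the number of iterations (or a physical-distance transform) must account for voxel size per axis.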

Learning Wavelet-Sparse FDK for 3D Cone-Beam CT Reconstruction

Yipeng Sun, Linda-Sophie Schneider, Chengze Ye, Mingxuan Gu, Siyuan Mei, Siming Bayer, Andreas Maier

arXiv preprint · May 19, 2025
Cone-Beam Computed Tomography (CBCT) is essential in medical imaging, and the Feldkamp-Davis-Kress (FDK) algorithm is a popular choice for reconstruction due to its efficiency. However, FDK is susceptible to noise and artifacts. While recent deep learning methods offer improved image quality, they often increase computational complexity and lack the interpretability of traditional methods. In this paper, we introduce an enhanced FDK-based neural network that maintains the classical algorithm's interpretability by selectively integrating trainable elements into the cosine weighting and filtering stages. Recognizing the challenge of the large parameter space inherent in 3D CBCT data, we leverage wavelet transformations to create sparse representations of the cosine weights and filters. This strategic sparsification reduces the parameter count by 93.75% without compromising performance, accelerates convergence, and, importantly, keeps the inference computational cost equivalent to that of the classical FDK algorithm. Our method not only ensures volumetric consistency and boosts robustness to noise, but is also designed for straightforward integration into existing CT reconstruction pipelines. This presents a pragmatic enhancement that can benefit clinical applications, particularly in environments with computational limitations.
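The core idea — representing a reconstruction filter sparsely in a wavelet basis by keeping only the largest coefficients — can be illustrated with a one-dimensional Haar transform in NumPy. This is not the paper's implementation: the Haar basis, the ramp-like stand-in filter, and keeping 16 of 256 coefficients (matching the quoted 93.75% reduction) are all illustrative assumptions.

```python
# Sketch: wavelet-sparse representation of a 1D filter via a full Haar
# decomposition and hard thresholding to the 16 largest coefficients.
import numpy as np

def haar_wavedec(x):
    """Full orthonormal Haar decomposition: [approx, d_coarse..d_fine]."""
    details = []
    a = x.astype(float)
    while a.size > 1:
        d = (a[0::2] - a[1::2]) / np.sqrt(2)
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
        details.append(d)
    return [a] + details[::-1]

def haar_waverec(coeffs):
    a = coeffs[0]
    for d in coeffs[1:]:
        x = np.empty(2 * a.size)
        x[0::2] = (a + d) / np.sqrt(2)
        x[1::2] = (a - d) / np.sqrt(2)
        a = x
    return a

n = 256
filt = np.abs(np.fft.fftfreq(n))  # ramp-like filter stand-in

coeffs = haar_wavedec(filt)
sizes = [c.size for c in coeffs]
flat = np.concatenate(coeffs)

# Keep exactly the 16 largest-magnitude coefficients (93.75% removed).
keep_idx = np.argsort(np.abs(flat))[-16:]
flat_sparse = np.zeros_like(flat)
flat_sparse[keep_idx] = flat[keep_idx]

recon = haar_waverec(np.split(flat_sparse, np.cumsum(sizes)[:-1]))
rel_err = np.linalg.norm(recon - filt) / np.linalg.norm(filt)
print(f"kept 16/{flat.size} coefficients, relative error {rel_err:.3f}")
```

Because the Haar transform is orthonormal, the reconstruction error equals the energy of the dropped coefficients (Parseval), which is why smooth, ramp-like filters compress well in a wavelet basis.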