Accuracy of AI-Based Algorithms in Pulmonary Embolism Detection on Computed Tomographic Pulmonary Angiography: An Updated Systematic Review and Meta-analysis.

Nabipoorashrafi SA, Seyedi A, Bahri RA, Yadegar A, Shomal-Zadeh M, Mohammadi F, Afshari SA, Firoozeh N, Noroozzadeh N, Khosravi F, Asadian S, Chalian H

PubMed | Sep 15 2025
Several artificial intelligence (AI) algorithms have been designed for detection of pulmonary embolism (PE) using computed tomographic pulmonary angiography (CTPA). Due to the rapid development of this field and the lack of an updated meta-analysis, we aimed to systematically review the available literature on the accuracy of AI-based algorithms for diagnosing PE via CTPA. We searched EMBASE, PubMed, Web of Science, and Cochrane for studies assessing the accuracy of AI-based algorithms. Studies that reported sensitivity and specificity were included. The R software was used for univariate meta-analysis and for drawing summary receiver operating characteristic (sROC) curves based on bivariate analysis. To explore the source of heterogeneity, subgroup analysis was performed (PROSPERO: CRD42024543107). A total of 1722 articles were found, and after removing duplicate records, 1185 were screened. Twenty studies with 26 AI models/populations met inclusion criteria, encompassing 11,950 participants. Univariate meta-analysis showed a pooled sensitivity of 91.5% (95% CI 85.5-95.2) and specificity of 84.3% (95% CI 74.9-90.6) for PE detection. Additionally, in the bivariate sROC analysis, the pooled area under the curve (AUC) was 0.923, indicating very high accuracy of AI algorithms in the detection of PE. Subgroup meta-analysis identified geographical area as a potential source of heterogeneity: the I² values for sensitivity and specificity in the subgroup of Asian studies were 60% and 6.9%, respectively. These findings highlight the promising role of AI in accurately diagnosing PE while also emphasizing the need for further research to address regional variations and improve generalizability.
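As a rough illustration of the univariate pooling step described above (the review reports using R), the sketch below applies DerSimonian-Laird random-effects pooling to logit-transformed sensitivities; the per-study counts are hypothetical placeholders, not data from the review.

```python
import numpy as np

# Hypothetical per-study true positives and false negatives (NOT data from the review)
tp = np.array([45, 88, 30, 120])
fn = np.array([5, 9, 6, 11])

sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / fn                      # variance of the logit of a proportion

# DerSimonian-Laird random-effects pooling on the logit scale
w = 1 / var
fixed = np.sum(w * logit) / np.sum(w)
q = np.sum(w * (logit - fixed) ** 2)
df = len(logit) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)              # between-study variance
w_re = 1 / (var + tau2)
pooled_logit = np.sum(w_re * logit) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))

pooled = 1 / (1 + np.exp(-pooled_logit))
ci = 1 / (1 + np.exp(-(pooled_logit + np.array([-1.96, 1.96]) * se)))
print(f"Pooled sensitivity: {pooled:.3f} (95% CI {ci[0]:.3f}-{ci[1]:.3f})")
```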

Enhanced value of chest computed tomography radiomics features in breast density classification.

Zhou W, Yang Q, Zhang H

PubMed | Sep 15 2025
This study investigates the correlation between chest computed tomography (CT) radiomics features and breast density classification, with the aim of developing an automated radiomics model for breast density assessment using chest CT images. The diagnostic performance was evaluated to establish a CT-based alternative for breast density classification in clinical practice. A retrospective analysis was conducted on patients who underwent both mammography and chest CT scans. The breast density classification results based on mammography images were used to guide the development of CT-based breast density classification models. Radiomic features were extracted from breast regions of interest (ROIs) segmented on chest CT images. Following dimensionality reduction and selection of dominant radiomic features, four four-class classification models were established: ① Extreme Gradient Boosting (XGBoost), ② One-vs-Rest Logistic Regression, ③ Gradient Boosting Classifier, and ④ Random Forest Classifier. The performance of these models in classifying breast density from CT images was then evaluated. A total of 330 patients, aged 23-79 years, were included in the analysis. The breast ROIs were automatically segmented using a U-net neural network model and subsequently refined and calibrated manually. A total of 1427 radiomic features were extracted; after dimensionality reduction and feature selection, 28 dominant features closely associated with breast density classification were retained to construct the four classification models. Among the tested models, XGBoost achieved the best performance, with a classification accuracy of 86.6%. Analysis of the receiver operating characteristic curves showed area under the curve (AUC) values of 1.00, 0.93, 0.93, and 0.99 for the four breast density categories, along with a micro-averaged AUC of 0.97 and a macro-averaged AUC of 0.96. Chest CT scans, combined with radiomics models, can accurately classify breast density, providing information relevant to breast cancer risk stratification. The proposed classification model offers a promising tool for automated breast density assessment, which could enhance personalized breast cancer screening and clinical decision-making.
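A minimal sketch of the final modeling step, assuming the 28 selected radiomic features are already available as a feature matrix; the data below are synthetic and this is not the authors' code.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.preprocessing import label_binarize

# Synthetic stand-in for the 28 selected radiomic features and 4 density classes
rng = np.random.default_rng(0)
X = rng.normal(size=(330, 28))
y = rng.integers(0, 4, size=330)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

clf = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="mlogloss")
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
# Macro- and micro-averaged one-vs-rest AUCs, analogous to the values reported above
print("macro AUC:", roc_auc_score(y_te, proba, multi_class="ovr", average="macro"))
y_bin = label_binarize(y_te, classes=[0, 1, 2, 3])
print("micro AUC:", roc_auc_score(y_bin, proba, average="micro"))
```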

Image analysis of cardiac hepatopathy secondary to heart failure: Machine learning vs gastroenterologists and radiologists.

Miida S, Kamimura H, Fujiki S, Kobayashi T, Endo S, Maruyama H, Yoshida T, Watanabe Y, Kimura N, Abe H, Sakamaki A, Yokoo T, Tsukada M, Numano F, Kashimura T, Inomata T, Fuzawa Y, Hirata T, Horii Y, Ishikawa H, Nonaka H, Kamimura K, Terai S

PubMed | Sep 14 2025
Congestive hepatopathy, also known as nutmeg liver, is liver damage secondary to chronic heart failure (HF). Its morphological characteristics on medical imaging remain poorly defined. We aimed to leverage machine learning to capture imaging features of congestive hepatopathy using incidentally acquired computed tomography (CT) scans. We retrospectively analyzed 179 chronic HF patients who underwent echocardiography and CT within one year. Right HF severity was classified into three grades. Liver CT images at the paraumbilical vein level were used to develop a ResNet-based machine learning model to predict tricuspid regurgitation (TR) severity. Model accuracy was compared with that of six gastroenterology and four radiology experts. Of the included patients, 120 were male (mean age: 73.1 ± 14.4 years). The machine learning model's accuracy in predicting TR severity from a single CT image was significantly higher than the experts' average accuracy. The model was particularly reliable for predicting severe TR. Deep learning models, particularly those using ResNet architectures, can help identify morphological changes associated with TR severity, aiding early detection of liver dysfunction in patients with HF and thereby improving outcomes.
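A minimal sketch of a ResNet-based grading model for a single CT slice, assuming three TR severity grades; the architecture details below are generic torchvision defaults, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TRSeverityNet(nn.Module):
    """Single-slice classifier: liver-level CT image -> one of three TR severity grades."""
    def __init__(self, num_grades: int = 3):
        super().__init__()
        self.backbone = resnet18(weights=None)
        # CT slices are single-channel; replace the RGB stem
        self.backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_grades)

    def forward(self, x):
        return self.backbone(x)

model = TRSeverityNet()
logits = model(torch.randn(2, 1, 224, 224))   # batch of 2 CT slices at the paraumbilical vein level
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 2]))
print(logits.shape, loss.item())
```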

Disentanglement of Biological and Technical Factors via Latent Space Rotation in Clinical Imaging Improves Disease Pattern Discovery

Jeanny Pan, Philipp Seeböck, Christoph Fürböck, Svitlana Pochepnia, Jennifer Straub, Lucian Beer, Helmut Prosch, Georg Langs

arXiv preprint | Sep 14 2025
Identifying new disease-related patterns in medical imaging data with the help of machine learning enlarges the vocabulary of recognizable findings and supports diagnostic and prognostic assessment. However, image appearance varies not only due to biological differences but also due to imaging technology linked to vendors and scanning or reconstruction parameters. The resulting domain shifts impede data representation learning strategies and the discovery of biologically meaningful cluster appearances. To address these challenges, we introduce an approach that actively learns the domain shift via post-hoc rotation of the data latent space, enabling disentanglement of biological and technical factors. Results on real-world heterogeneous clinical data show that the learned disentangled representation leads to stable clusters representing tissue types across different acquisition settings. Cluster consistency is improved by +19.01% (ARI), +16.85% (NMI), and +12.39% (Dice) compared to the entangled representation, outperforming four state-of-the-art harmonization methods. When the clusters are used to quantify tissue composition in idiopathic pulmonary fibrosis patients, the learned profiles enhance Cox survival prediction. This indicates that the proposed label-free framework facilitates biomarker discovery in multi-center routine imaging data. Code is available on GitHub: https://github.com/cirmuw/latent-space-rotation-disentanglement.
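A schematic sketch of the post-hoc latent-space rotation idea as we read it from the abstract: an orthogonal transform is fit on frozen latent codes so that a designated block of rotated dimensions absorbs acquisition-domain (technical) factors, leaving the remaining dimensions for biology. This is an illustrative simplification, not the authors' released code (linked above), and the full method would additionally discourage domain information in the biological block.

```python
import torch
import torch.nn as nn

latent_dim, k_technical = 64, 8   # k_technical: dims reserved for scanner/protocol factors (assumption)

# Orthogonal parametrization keeps the learned map a pure rotation/reflection
rotation = nn.utils.parametrizations.orthogonal(nn.Linear(latent_dim, latent_dim, bias=False))
domain_head = nn.Linear(k_technical, 3)       # e.g. 3 acquisition settings (assumption)

z = torch.randn(256, latent_dim)              # frozen latent codes from a pretrained encoder
domains = torch.randint(0, 3, (256,))         # scanner/protocol labels

opt = torch.optim.Adam(list(rotation.parameters()) + list(domain_head.parameters()), lr=1e-3)
for _ in range(100):
    z_rot = rotation(z)
    # Push domain information into the first k_technical rotated dimensions
    loss = nn.functional.cross_entropy(domain_head(z_rot[:, :k_technical]), domains)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Downstream clustering would use the remaining, domain-depleted dimensions
z_bio = rotation(z)[:, k_technical:]
print(z_bio.shape)
```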

Multi-encoder self-adaptive hard attention network with maximum intensity projections for lung nodule segmentation.

Usman M, Rehman A, Ur Rehman A, Shahid A, Khan TM, Razzak I, Chung M, Shin YG

PubMed | Sep 14 2025
Accurate lung nodule segmentation is crucial for early-stage lung cancer diagnosis, as it can substantially enhance patient survival rates. Computed tomography (CT) images are widely employed for early diagnosis in lung nodule analysis. However, the heterogeneity of lung nodules, size diversity, and the complexity of the surrounding environment pose challenges for developing robust nodule segmentation methods. In this study, we propose an efficient end-to-end framework, the Multi-Encoder Self-Adaptive Hard Attention Network (MESAHA-Net), which consists of three encoding paths, an attention block, and a decoder block that assimilates CT slice patches with both forward and backward maximum intensity projection (MIP) images. This synergy affords a profound contextual understanding of lung nodules and also results in a deluge of features. To manage the profusion of features generated, we incorporate a self-adaptive hard attention mechanism guided by region of interest (ROI) masks centered on nodular regions, which MESAHA-Net autonomously produces. The network sequentially undertakes slice-by-slice segmentation, emphasizing nodule regions to produce precise three-dimensional (3D) segmentation. The proposed framework has been comprehensively evaluated on the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset, the largest publicly available dataset for lung nodule segmentation. The results demonstrate that our approach is highly robust across various lung nodule types, outperforming previous state-of-the-art techniques in terms of segmentation performance and computational complexity, making it suitable for real-time clinical implementation of artificial intelligence (AI)-driven diagnostic tools.
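A toy sketch of the three-encoder fusion with ROI-mask-gated ("hard") attention described above, reduced far below the actual MESAHA-Net architecture; the layer sizes and gating scheme here are illustrative assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class ToyMESAHA(nn.Module):
    """Toy 3-encoder fusion: CT patch + forward MIP + backward MIP, gated by an ROI mask."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc_ct  = conv_block(1, ch)
        self.enc_fwd = conv_block(1, ch)
        self.enc_bwd = conv_block(1, ch)
        self.fuse = conv_block(3 * ch, ch)
        self.head = nn.Conv2d(ch, 1, 1)       # per-pixel nodule probability

    def forward(self, ct, mip_fwd, mip_bwd, roi_mask):
        feats = torch.cat([self.enc_ct(ct), self.enc_fwd(mip_fwd), self.enc_bwd(mip_bwd)], dim=1)
        feats = self.fuse(feats) * roi_mask   # "hard" attention: zero out features outside the ROI
        return torch.sigmoid(self.head(feats))

net = ToyMESAHA()
x = torch.randn(1, 1, 96, 96)
mask = (torch.rand(1, 1, 96, 96) > 0.5).float()
print(net(x, x, x, mask).shape)               # torch.Size([1, 1, 96, 96])
```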

Deep learning-based volume of interest imaging in helical CT for image quality improvement and radiation dose reduction.

Zhou Z, Inoue A, Cox CW, McCollough CH, Yu L

PubMed | Sep 13 2025
To develop a volume of interest (VOI) imaging technique in multi-detector-row helical CT that reduces radiation dose or improves image quality within the VOI, a deep-learning method based on a residual U-Net architecture, named VOI-Net, was developed to correct truncation artifacts in VOI helical CT. Three patient cases (one chest CT of interstitial lung disease and two abdominopelvic CTs of liver tumours) were used for evaluation through simulation. VOI-Net effectively corrected truncation artifacts (root mean square error [RMSE] of 5.97 ± 2.98 Hounsfield units [HU] for the chest case and 3.12 ± 1.93 HU and 3.71 ± 1.87 HU for the two liver cases). Radiation dose was reduced by 71% without sacrificing image quality within a 10-cm-diameter VOI, compared to a full scan field of view (FOV) of 50 cm. With the same total energy deposited as in a full-FOV scan, image quality within the VOI matched that of a scan at 350% higher radiation dose. A radiologist confirmed improved lesion conspicuity and visibility of small linear reticulations associated with ground-glass opacity and liver tumour. By focusing radiation on the VOI and using VOI-Net in a helical scan, total radiation dose can be reduced, or image quality within the VOI equivalent to that of a higher-dose standard full-FOV scan can be achieved. This targeted helical VOI imaging technique, enabled by a deep-learning-based artifact correction method, improves image quality within the VOI without increasing radiation dose.
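A minimal sketch of the residual formulation implied by a residual U-Net artifact corrector, together with a VOI-restricted RMSE in HU; the backbone below is a trivial placeholder, not VOI-Net.

```python
import torch
import torch.nn as nn

class ResidualCorrector(nn.Module):
    """Wraps any image-to-image backbone so it learns only the artifact residual."""
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone

    def forward(self, voi_image):
        # corrected = input + predicted residual (estimate of the truncation artifact)
        return voi_image + self.backbone(voi_image)

def rmse_hu(corrected, reference, voi_mask):
    """Root-mean-square error in Hounsfield units, restricted to the VOI."""
    diff = (corrected - reference)[voi_mask.bool()]
    return torch.sqrt((diff ** 2).mean())

# Tiny placeholder backbone standing in for the residual U-Net
backbone = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))
model = ResidualCorrector(backbone)

img = torch.randn(1, 1, 128, 128) * 100       # synthetic image in HU-like units
mask = torch.ones(1, 1, 128, 128)
print(rmse_hu(model(img), img, mask).item())
```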

Annotation-efficient deep learning detection and measurement of mediastinal lymph nodes in CT.

Olesinski A, Lederman R, Azraq Y, Sosna J, Joskowicz L

PubMed | Sep 13 2025
Manual detection and measurement of structures in volumetric scans is routine in clinical practice but is time-consuming and subject to observer variability. Automatic deep learning-based solutions are effective but require large datasets of manual annotations by experts. We present a novel annotation-efficient semi-supervised deep learning method for automatic detection, segmentation, and measurement of the short axis length (SAL) of mediastinal lymph nodes (LNs) in contrast-enhanced CT (ceCT) scans. Our semi-supervised method combines the precision of expert annotations with the quantity advantages of pseudolabeled data. It uses an ensemble of 3D nnU-Net models trained on a few expert-annotated scans to generate pseudolabels on a large dataset of unannotated scans. The pseudolabels are then filtered to remove false-positive LNs by excluding LNs outside the mediastinum and LNs overlapping with other anatomical structures. Finally, a single 3D nnU-Net model is trained using the filtered pseudolabels. Our method optimizes the ratio of annotated to non-annotated dataset sizes to achieve the desired performance, thus reducing manual annotation effort. Experimental studies on three chest ceCT datasets with a total of 268 annotated scans (1817 LNs), of which 134 scans were used for testing and the remainder for ensemble training in batches of 17, 34, 67, and 134 scans, as well as 710 unannotated scans, show that the semi-supervised models improved recall by 11-24% (0.72-0.87) while maintaining comparable precision. The best model achieved mean SAL differences of 1.65 ± 0.92 mm for normal LNs and 4.25 ± 4.98 mm for enlarged LNs, both within observer variability. Our semi-supervised method requires one-fourth to one-eighth as many annotations to achieve performance comparable to supervised models trained on the same dataset for the automatic measurement of mediastinal LNs in chest ceCT. Using pseudolabels with anatomical filtering may be an effective way to overcome the challenges of developing AI-based solutions in radiology.
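A minimal sketch of the anatomical pseudolabel-filtering step, assuming binary masks of the mediastinum and of other organs are available from some other source; the thresholds and example volumes are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def filter_pseudolabels(ln_pseudolabel, mediastinum_mask, other_organs_mask,
                        min_inside_frac=0.5, max_overlap_frac=0.1):
    """Keep only pseudolabeled lymph-node components that lie within the mediastinum
    and do not substantially overlap other anatomical structures."""
    labeled, n = ndimage.label(ln_pseudolabel > 0)
    kept = np.zeros_like(ln_pseudolabel)
    for i in range(1, n + 1):
        comp = labeled == i
        size = comp.sum()
        inside = (comp & (mediastinum_mask > 0)).sum() / size
        overlap = (comp & (other_organs_mask > 0)).sum() / size
        if inside >= min_inside_frac and overlap <= max_overlap_frac:
            kept[comp] = 1
    return kept

# Synthetic example volumes
vol = np.zeros((32, 64, 64), dtype=np.uint8)
vol[10:14, 20:26, 20:26] = 1                   # candidate lymph node
mediastinum = np.zeros_like(vol); mediastinum[:, 10:50, 10:50] = 1
others = np.zeros_like(vol)
print(filter_pseudolabels(vol, mediastinum, others).sum())
```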

Simulating Sinogram-Domain Motion and Correcting Image-Domain Artifacts Using Deep Learning in HR-pQCT Bone Imaging

Farhan Sadik, Christopher L. Newman, Stuart J. Warden, Rachel K. Surowiec

arXiv preprint | Sep 13 2025
Rigid-motion artifacts, such as cortical bone streaking and trabecular smearing, hinder in vivo assessment of bone microstructures in high-resolution peripheral quantitative computed tomography (HR-pQCT). Despite various motion grading techniques, no motion correction methods exist due to the lack of standardized degradation models. We optimize a conventional sinogram-based method to simulate motion artifacts in HR-pQCT images, creating paired datasets of motion-corrupted images and their corresponding ground truth, which enables seamless integration into supervised learning frameworks for motion correction. As such, we propose an Edge-enhanced Self-attention Wasserstein Generative Adversarial Network with Gradient Penalty (ESWGAN-GP) to address motion artifacts in both simulated (source) and real-world (target) datasets. The model incorporates edge-enhancing skip connections to preserve trabecular edges and self-attention mechanisms to capture long-range dependencies, facilitating motion correction. A visual geometry group (VGG)-based perceptual loss is used to reconstruct fine micro-structural features. The ESWGAN-GP achieves a mean signal-to-noise ratio (SNR) of 26.78, structural similarity index measure (SSIM) of 0.81, and visual information fidelity (VIF) of 0.76 for the source dataset, while showing improved performance on the target dataset with an SNR of 29.31, SSIM of 0.87, and VIF of 0.81. The proposed methods address a simplified representation of real-world motion that may not fully capture the complexity of in vivo motion artifacts. Nevertheless, because motion artifacts present one of the foremost challenges to more widespread adoption of this modality, these methods represent an important initial step toward implementing deep learning-based motion correction in HR-pQCT.
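For reference, the gradient-penalty term implied by the "WGAN-GP" part of the model name can be sketched as below; this is the standard penalty from the WGAN-GP literature, not the authors' full edge-enhanced, self-attention model.

```python
import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    """Standard WGAN-GP penalty: ((||grad D(x_hat)||_2 - 1)^2) on random interpolates."""
    alpha = torch.rand(real.size(0), 1, 1, 1, device=device)
    x_hat = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    d_hat = critic(x_hat)
    grads = torch.autograd.grad(outputs=d_hat, inputs=x_hat,
                                grad_outputs=torch.ones_like(d_hat),
                                create_graph=True, retain_graph=True)[0]
    grads = grads.view(grads.size(0), -1)
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

# Tiny critic and random images to exercise the function
critic = torch.nn.Sequential(torch.nn.Conv2d(1, 8, 3, padding=1),
                             torch.nn.Flatten(), torch.nn.LazyLinear(1))
real = torch.randn(4, 1, 32, 32)
fake = torch.randn(4, 1, 32, 32)
print(gradient_penalty(critic, real, fake).item())
```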

Association of artificial intelligence-screened interstitial lung disease with radiation pneumonitis in locally advanced non-small cell lung cancer.

Bacon H, McNeil N, Patel T, Welch M, Ye XY, Bezjak A, Lok BH, Raman S, Giuliani M, Cho BCJ, Sun A, Lindsay P, Liu G, Kandel S, McIntosh C, Tadic T, Hope A

PubMed | Sep 13 2025
Interstitial lung disease (ILD) has been correlated with an increased risk of radiation pneumonitis (RP) following lung SBRT, but the degree to which locally advanced NSCLC (LA-NSCLC) patients are affected has yet to be quantified. An algorithm to identify patients at high risk for RP may help clinicians mitigate that risk. All LA-NSCLC patients treated with definitive radiotherapy at our institution from 2006 to 2021 were retrospectively assessed. A convolutional neural network had previously been developed to identify patients with radiographic ILD on planning computed tomography (CT) images. All screen-positive (AI-ILD+) patients were reviewed by a thoracic radiologist to identify true radiographic ILD (r-ILD). The association between the algorithm output, clinical and dosimetric variables, and the outcomes of grade ≥ 3 RP and mortality was assessed using univariate (UVA) and multivariable (MVA) logistic regression and Kaplan-Meier survival analysis. 698 patients were included in the analysis. Grade (G) 0-5 RP was reported in 51%, 27%, 17%, 4.4%, 0.14%, and 0.57% of patients, respectively. Overall, 23% of patients were classified as AI-ILD+. On MVA, only AI-ILD status (OR 2.15, p = 0.03) and AI-ILD score (OR 35.27, p < 0.01) were significant predictors of G3+ RP. Median OS was 3.6 years in AI-ILD- patients and 2.3 years in AI-ILD+ patients (not significant). Patients with r-ILD had significantly higher rates of severe toxicities, with G3+ RP in 25% and G5 RP in 7%. R-ILD was associated with an increased risk of G3+ RP on MVA (OR 5.42, p < 0.01). Our AI-ILD algorithm detects patients at significantly increased risk for G3+ RP.
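A minimal sketch of the multivariable logistic regression used to report odds ratios for grade ≥ 3 RP; the cohort below is synthetic, and the covariates other than the AI-ILD outputs are assumptions for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic cohort (NOT the study data): AI-ILD flag, AI-ILD score, a dosimetric covariate, G3+ RP outcome
rng = np.random.default_rng(1)
n = 698
df = pd.DataFrame({
    "ai_ild": rng.integers(0, 2, n),
    "ai_ild_score": rng.uniform(0, 1, n),
    "mean_lung_dose": rng.normal(15, 4, n),
})
logit_p = -3 + 0.8 * df["ai_ild"] + 2.0 * df["ai_ild_score"] + 0.05 * df["mean_lung_dose"]
df["g3_rp"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["ai_ild", "ai_ild_score", "mean_lung_dose"]])
fit = sm.Logit(df["g3_rp"], X).fit(disp=0)

# Odds ratios with 95% CIs and p-values, the quantities quoted in the MVA above
summary = pd.DataFrame({"OR": np.exp(fit.params),
                        "CI_low": np.exp(fit.conf_int()[0]),
                        "CI_high": np.exp(fit.conf_int()[1]),
                        "p": fit.pvalues})
print(summary.round(3))
```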

The best diagnostic approach for classifying ischemic stroke onset time: A systematic review and meta-analysis.

Zakariaee SS, Kadir DH, Molazadeh M, Abdi S

PubMed | Sep 12 2025
The success of intravenous thrombolysis with tPA (IV-tPA), the fastest and easiest treatment for stroke patients, is closely related to time since stroke onset (TSS). Administering IV-tPA after the recommended time window (< 4.5 h) increases the risk of cerebral hemorrhage. Although advances in diagnostic approaches have been made, determining TSS remains a clinical challenge. In this study, the performance of different diagnostic approaches for classifying TSS was investigated. A systematic literature search was conducted in the Web of Science, PubMed, Scopus, Embase, and Cochrane databases up to July 2025. The overall AUC, sensitivity, and specificity with their 95% CIs were determined for each diagnostic approach to evaluate its classification performance. The review included a total of 9030 stroke patients. The results showed that human reading of the DWI-FLAIR mismatch, the current gold-standard method, with AUC = 0.71 (95% CI: 0.66-0.76), sensitivity = 0.62 (95% CI: 0.54-0.71), and specificity = 0.78 (95% CI: 0.72-0.84), has moderate performance for identifying TSS. A machine learning (ML) model fed with radiomic features of CT data, with AUC = 0.89 (95% CI: 0.80-0.98), sensitivity = 0.85 (95% CI: 0.75-0.96), and specificity = 0.86 (95% CI: 0.73-1.00), had the best performance in classifying TSS among the models reviewed. ML models fed with radiomic features classify TSS better than human reading of the DWI-FLAIR mismatch. An efficient AI model fed with CT radiomic data could yield the best classification performance for determining patients' eligibility for IV-tPA treatment and improving treatment outcomes.
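A minimal sketch of how per-approach sensitivity and specificity with 95% CIs (the inputs to such a meta-analysis) can be derived from a 2x2 table, using the Wilson score interval; the counts are hypothetical, not study data.

```python
import numpy as np
from scipy import stats

def wilson_ci(successes, total, alpha=0.05):
    """Wilson score interval for a proportion (used here for sensitivity/specificity CIs)."""
    z = stats.norm.ppf(1 - alpha / 2)
    p = successes / total
    denom = 1 + z ** 2 / total
    centre = (p + z ** 2 / (2 * total)) / denom
    half = z * np.sqrt(p * (1 - p) / total + z ** 2 / (4 * total ** 2)) / denom
    return centre - half, centre + half

# Hypothetical 2x2 table for a TSS classifier (positive = onset < 4.5 h); NOT study data
tp, fn, tn, fp = 85, 15, 86, 14
sens, spec = tp / (tp + fn), tn / (tn + fp)
print(f"sensitivity {sens:.2f} (95% CI {wilson_ci(tp, tp + fn)[0]:.2f}-{wilson_ci(tp, tp + fn)[1]:.2f})")
print(f"specificity {spec:.2f} (95% CI {wilson_ci(tn, tn + fp)[0]:.2f}-{wilson_ci(tn, tn + fp)[1]:.2f})")
```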