
Effectiveness of Artificial Intelligence in detecting sinonasal pathology using clinical imaging modalities: a systematic review.

Petsiou DP, Spinos D, Martinos A, Muzaffar J, Garas G, Georgalas C

PubMed · May 19, 2025
Sinonasal pathology can be complex and requires a systematic and meticulous approach. Artificial Intelligence (AI) has the potential to improve diagnostic accuracy and efficiency in sinonasal imaging, but its clinical applicability remains an area of ongoing research. This systematic review evaluates the methodologies and clinical relevance of AI in detecting sinonasal pathology through radiological imaging. Key search terms included "artificial intelligence," "deep learning," "machine learning," "neural network," and "paranasal sinuses". Abstract and full-text screening was conducted using predefined inclusion and exclusion criteria. Data were extracted on study design, AI architectures used (e.g., Convolutional Neural Networks (CNN), Machine Learning classifiers), and clinical characteristics, such as imaging modality (e.g., Computed Tomography (CT), Magnetic Resonance Imaging (MRI)). A total of 53 studies were analyzed, with 85% retrospective, 68% single-center, and 92.5% using internal databases. CT was the most common imaging modality (60.4%), and chronic rhinosinusitis without nasal polyposis (CRSsNP) was the most studied condition (34.0%). Forty-one studies employed neural networks, with classification as the most frequent AI task (35.8%). Key performance metrics included Area Under the Curve (AUC), accuracy, sensitivity, specificity, precision, and F1-score. Quality assessment based on CONSORT-AI yielded a mean score of 16.0 ± 2. AI shows promise in improving sinonasal imaging interpretation. However, as existing research is predominantly retrospective and single-center, further studies are needed to evaluate AI's generalizability and applicability. More research is also required to explore AI's role in treatment planning and post-treatment prediction for clinical integration.
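The performance metrics named above (AUC, accuracy, sensitivity, specificity, precision, F1-score) all derive from a confusion matrix of predictions against ground truth; a minimal sketch with made-up labels and scores (not data from any reviewed study):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical labels (1 = pathology present) and model scores.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.8, 0.6, 0.4, 0.1, 0.7, 0.3, 0.95, 0.5])
y_pred = (y_score >= 0.5).astype(int)  # threshold at 0.5

tp = np.sum((y_pred == 1) & (y_true == 1))
tn = np.sum((y_pred == 0) & (y_true == 0))
fp = np.sum((y_pred == 1) & (y_true == 0))
fn = np.sum((y_pred == 0) & (y_true == 1))

sensitivity = tp / (tp + fn)           # a.k.a. recall
specificity = tn / (tn + fp)
precision = tp / (tp + fp)
accuracy = (tp + tn) / len(y_true)
f1 = 2 * precision * sensitivity / (precision + sensitivity)
auc = roc_auc_score(y_true, y_score)   # threshold-free Area Under the Curve
print(f"AUC={auc:.2f} acc={accuracy:.2f} sens={sensitivity:.2f} "
      f"spec={specificity:.2f} prec={precision:.2f} F1={f1:.2f}")
```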

Semiautomated segmentation of breast tumor on automatic breast ultrasound image using a large-scale model with customized modules.

Zhou Y, Ye M, Ye H, Zeng S, Shu X, Pan Y, Wu A, Liu P, Zhang G, Cai S, Chen S

PubMed · May 19, 2025
To verify the capability of the Segment Anything Model for medical images in 3D (SAM-Med3D), tailored with low-rank adaptation (LoRA) strategies, in segmenting breast tumors in Automated Breast Ultrasound (ABUS) images. This retrospective study collected data from 329 patients diagnosed with breast cancer (average age 54 years). The dataset was randomly divided into training (n = 204), validation (n = 29), and test (n = 59) sets. Two experienced radiologists manually annotated the regions of interest of each sample in the dataset, which served as ground truth for training and evaluating the SAM-Med3D model with additional customized modules. For semi-automatic tumor segmentation, points were randomly sampled within the lesion areas to simulate the radiologists' clicks in real-world scenarios. Segmentation performance was evaluated using the Dice coefficient. A total of 492 cases (200 from the Tumor Detection, Segmentation, and Classification Challenge on Automated 3D Breast Ultrasound (TDSC-ABUS) 2023 challenge) were subjected to semi-automatic segmentation inference. The average Dice Similarity Coefficient (DSC) scores for the training, validation, and test sets of the Lishui dataset were 0.75, 0.78, and 0.75, respectively. The Breast Imaging Reporting and Data System (BI-RADS) categories of all samples ranged from BI-RADS 3 to 6, yielding an average DSC between 0.73 and 0.77. Categorizing the samples (lesion volumes ranging from 1.64 to 100.03 cm³) by lesion size gave an average DSC between 0.72 and 0.77, and the overall average DSC for the TDSC-ABUS 2023 challenge dataset was 0.79, with the test set achieving a state-of-the-art score of 0.79. The SAM-Med3D model with additional customized modules demonstrates good performance in semi-automatic 3D ABUS breast cancer tumor segmentation, indicating its feasibility for application in computer-aided diagnosis systems.
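The click simulation and the DSC metric described here are straightforward to reproduce; a minimal sketch on a toy 3D mask, assuming simple uniform sampling of voxels inside the lesion (the exact sampling scheme is not specified in the abstract):

```python
import numpy as np

def sample_click_points(lesion_mask, n_points=3, rng=None):
    """Randomly sample voxel coordinates inside a binary 3D lesion mask,
    mimicking radiologist clicks used to prompt a segmentation model."""
    rng = rng or np.random.default_rng()
    coords = np.argwhere(lesion_mask > 0)          # (N, 3) voxel indices
    idx = rng.choice(len(coords), size=n_points, replace=False)
    return coords[idx]

def dice(pred, gt, eps=1e-8):
    """Dice Similarity Coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# Toy 3D volume with a cubic "lesion" (illustrative only).
gt = np.zeros((64, 64, 64), dtype=bool)
gt[20:40, 20:40, 20:40] = True
clicks = sample_click_points(gt, n_points=3)
pred = np.zeros_like(gt)
pred[22:40, 20:38, 20:40] = True                   # imperfect prediction
print("clicks:", clicks.tolist(), "DSC:", round(dice(pred, gt), 3))
```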

Development and validation of ultrasound-based radiomics deep learning model to identify bone erosion in rheumatoid arthritis.

Yan L, Xu J, Ye X, Lin M, Gong Y, Fang Y, Chen S

PubMed · May 19, 2025
To develop and validate a deep learning radiomics fusion model (DLR) based on ultrasound (US) images to identify bone erosion in rheumatoid arthritis (RA) patients. A total of 432 patients with RA at two institutions were collected. Three hundred twelve patients from center 1 were randomly divided into a training set (N = 218) and an internal test set (N = 94) in a 7:3 ratio, while 124 patients from center 2 served as an external test set. Radiomics (Rad) and deep learning (DL) features were extracted using hand-crafted radiomics and deep transfer learning networks. Least absolute shrinkage and selection operator regression was employed to establish the DLR fusion feature from the Rad and DL features. Subsequently, 10 machine learning algorithms were used to construct models, and the final optimal model was selected. The performance of the models was evaluated using receiver operating characteristic (ROC) curves and decision curve analysis (DCA). The diagnostic efficacy of sonographers was compared with and without the assistance of the optimal model. LR was chosen as the optimal algorithm for model construction on account of its superior performance (Rad/DL/DLR: area under the curve [AUC] = 0.906/0.974/0.979) in the training set. In the internal test set, DLR_LR as the final model had the highest AUC (AUC = 0.966), which was also validated in the external test set (AUC = 0.932). With the aid of the DLR_LR model, the overall performance of both junior and senior sonographers improved significantly (P < 0.05), and there was no significant difference between the junior sonographer with DLR_LR model assistance and the senior sonographer without assistance (P > 0.05). The DLR model based on US images is the best performer and is expected to become an important tool for identifying bone erosion in RA patients.
Key Points:
• The DLR model based on US images is the best performer in identifying BE in RA patients.
• The DLR model may assist sonographers in improving the accuracy of BE evaluations.
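As a rough illustration of the fusion pipeline (LASSO selection over concatenated Rad and DL features, followed by a classifier), here is a sketch on synthetic data; the feature counts and the use of LassoCV/LogisticRegression are assumptions, not the authors' exact configuration:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical stand-ins: 218 training cases, 100 radiomics + 512 DL features.
X = rng.normal(size=(218, 612))
w = np.zeros(612)
w[:10] = 1.0                               # only 10 features are informative
y = (X @ w + rng.normal(scale=0.5, size=218) > 0).astype(int)  # 1 = erosion

X_std = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5).fit(X_std, y)        # LASSO builds the fusion feature set
selected = np.flatnonzero(lasso.coef_)     # features with non-zero coefficients
print(f"{selected.size} fused DLR features selected")

# Logistic regression (an "LR" classifier) on the fused feature set.
clf = LogisticRegression(max_iter=1000).fit(X_std[:, selected], y)
auc = roc_auc_score(y, clf.predict_proba(X_std[:, selected])[:, 1])
print(f"training AUC = {auc:.3f}  (synthetic data, illustrative only)")
```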

Detection of carotid artery calcifications using artificial intelligence in dental radiographs: a systematic review and meta-analysis.

Arzani S, Soltani P, Karimi A, Yazdi M, Ayoub A, Khurshid Z, Galderisi D, Devlin H

PubMed · May 19, 2025
Carotid artery calcifications are important markers of cardiovascular health, often associated with atherosclerosis and a higher risk of stroke. Recent research shows that dental radiographs can help identify these calcifications, allowing for earlier detection of vascular disease. Advances in artificial intelligence (AI) have improved the ability to detect carotid calcifications in dental images, making it a useful screening tool. This systematic review and meta-analysis aimed to evaluate how accurately AI methods can identify carotid calcifications in dental radiographs. A systematic search was conducted in PubMed, Scopus, Embase, and Web of Science for studies on AI algorithms used to detect carotid calcifications in dental radiographs. Two independent reviewers collected data on study aims, imaging techniques, and statistical measures such as sensitivity and specificity. A random-effects meta-analysis was performed, and the risk of bias was evaluated with the QUADAS-2 tool. Nine studies were suitable for qualitative analysis, while five provided data for quantitative analysis. These studies assessed AI algorithms using cone beam computed tomography (n = 3) and panoramic radiographs (n = 6). The sensitivity of the included studies ranged from 0.67 to 0.98, and specificity varied between 0.85 and 0.99. Pooling one AI method per study yielded an overall sensitivity of 0.92 [95% CI 0.81 to 0.97] and a specificity of 0.96 [95% CI 0.92 to 0.97]. The high sensitivity and specificity indicate that AI methods could be effective screening tools, enhancing the early detection of stroke and related cardiovascular risks. Not applicable.
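The pooled estimates quoted above come from a random-effects meta-analysis; a sketch of DerSimonian-Laird pooling of per-study sensitivities on the logit scale, using invented study values (not the studies in this review):

```python
import numpy as np

def pool_random_effects(p, n):
    """DerSimonian-Laird random-effects pooling of proportions
    (e.g., per-study sensitivities) on the logit scale."""
    p = np.asarray(p, float)
    n = np.asarray(n, float)
    y = np.log(p / (1 - p))                      # logit transform
    v = 1.0 / (n * p * (1 - p))                  # approx. variance of logit(p)
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    y_re = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    inv = lambda x: 1.0 / (1.0 + np.exp(-x))     # back-transform to proportion
    return inv(y_re), (inv(y_re - 1.96 * se), inv(y_re + 1.96 * se))

# Hypothetical per-study sensitivities and sample sizes.
sens, (lo, hi) = pool_random_effects([0.67, 0.90, 0.95, 0.88, 0.98],
                                     [120, 80, 200, 150, 60])
print(f"pooled sensitivity = {sens:.2f} [95% CI {lo:.2f}-{hi:.2f}]")
```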

Artificial intelligence based pulmonary vessel segmentation: an opportunity for automated three-dimensional planning of lung segmentectomy.

Mank QJ, Thabit A, Maat APWM, Siregar S, Van Walsum T, Kluin J, Sadeghi AH

PubMed · May 19, 2025
This study aimed to develop an automated method for pulmonary artery and vein segmentation in both left and right lungs from computed tomography (CT) images using artificial intelligence (AI). The segmentations were evaluated using PulmoSR software, which provides 3D visualizations of patient-specific anatomy, potentially enhancing a surgeon's understanding of the lung structure. A dataset of 125 CT scans from lung segmentectomy patients at Erasmus MC was used. Manual annotations for pulmonary arteries and veins were created with 3D Slicer. nnU-Net models were trained for both lungs and assessed using Dice score, sensitivity, and specificity. Intraoperative recordings demonstrated clinical applicability. A paired t-test evaluated the statistical significance of the differences between automatic and manual segmentations. The nnU-Net model, trained at full 3D resolution, achieved a mean Dice score between 0.91 and 0.92. The mean sensitivity and specificity were: left artery, 0.86 and 0.99; right artery, 0.84 and 0.99; left vein, 0.85 and 0.99; right vein, 0.85 and 0.99. The automatic method reduced segmentation time from ∼1.5 hours to under 5 min. Five cases were evaluated to demonstrate how the segmentations support lung segmentectomy procedures. P-values for Dice scores were all below 0.01, indicating statistical significance. The nnU-Net models successfully performed automatic segmentation of pulmonary arteries and veins in both lungs. When integrated with visualization tools, these automatic segmentations can enhance preoperative and intraoperative planning by providing detailed 3D views of patients' anatomy.
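The reported evaluation reduces to per-case overlap metrics plus a paired test; a sketch, assuming binary masks and scipy's paired t-test (toy data, not the Erasmus MC results):

```python
import numpy as np
from scipy import stats

def seg_metrics(pred, gt):
    """Dice, sensitivity, and specificity for binary (boolean) 3D masks."""
    tp = np.logical_and(pred, gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    dice = 2 * tp / (2 * tp + fp + fn)
    return dice, tp / (tp + fn), tn / (tn + fp)

# Toy masks standing in for one vessel segmentation case.
gt = np.zeros((32, 32, 32), dtype=bool)
gt[8:24, 8:24, 8:24] = True
pred = np.zeros_like(gt)
pred[9:24, 8:23, 8:24] = True
print("Dice/sens/spec:", [round(m, 3) for m in seg_metrics(pred, gt)])

# Hypothetical per-case Dice scores for automatic vs. manual contours.
rng = np.random.default_rng(1)
dice_auto = np.clip(rng.normal(0.91, 0.03, size=25), 0, 1)
dice_manual = np.clip(dice_auto + rng.normal(0.01, 0.01, size=25), 0, 1)
t, p = stats.ttest_rel(dice_auto, dice_manual)   # paired t-test
print(f"paired t-test: t={t:.2f}, p={p:.4f}")
```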

Thymoma habitat segmentation and risk prediction model using CT imaging and K-means clustering.

Liang Z, Li J, He S, Li S, Cai R, Chen C, Zhang Y, Deng B, Wu Y

PubMed · May 19, 2025
Thymomas, though rare, present a wide range of clinical behaviors, from indolent to aggressive forms, making accurate risk stratification crucial for treatment planning. Traditional methods such as histopathology and radiological assessment often fail to capture tumor heterogeneity, which can impact prognosis. Radiomics, combined with machine learning, provides a method to extract and analyze quantitative imaging features, offering the potential to improve tumor classification and risk prediction. By segmenting tumors into distinct habitat zones, intratumoral heterogeneity can be assessed more effectively. This study employs radiomics and machine learning techniques to enhance thymoma risk prediction, aiming to improve diagnostic consistency and reduce variability in radiologists' assessments: it seeks to identify distinct habitat zones within thymomas through CT imaging feature analysis, to establish a predictive model differentiating high- from low-risk thymomas, and to explore how this model can assist radiologists. We obtained CT imaging data from 133 patients with thymoma who were treated at the Affiliated Hospital of Guangdong Medical University from 2015 to 2023. Images from the plain scan phase, venous phase, arterial phase, and their differential images (subtracted images) were used. Tumor regions were segmented into three habitat zones using K-means clustering. Imaging features from each habitat zone were extracted using the PyRadiomics (van Griethuysen, 2017) library. The 28 most distinguishing features were selected through Mann-Whitney U tests (Mann, 1947) and Spearman's correlation analysis (Spearman, 1904). Five predictive models were built using the same machine learning algorithm (Support Vector Machine [SVM]): Habitat1, Habitat2, and Habitat3 (trained on features from individual tumor habitat regions), Habitat All (trained on combined features from all regions), and Intra (trained on intratumoral features), and their performances were compared. The models' diagnostic outcomes were also compared with the diagnoses of four radiologists (two junior and two experienced physicians). The AUC (area under the curve) was 0.818 for habitat zone 1, 0.732 for habitat zone 2, and 0.763 for habitat zone 3. The comprehensive model, which combined data from all habitat zones, achieved an AUC of 0.960, outperforming the model based on traditional radiomic features (AUC of 0.720). The model significantly improved the diagnostic accuracy of all four radiologists: the AUCs for junior radiologists 1 and 2 increased from 0.747 and 0.775 to 0.932 and 0.972, respectively, while for experienced radiologists 1 and 2, the AUCs increased from 0.932 and 0.859 to 0.977 and 0.972, respectively. This study successfully identified distinct habitat zones within thymomas through CT imaging feature analysis and developed an efficient predictive model that significantly improved diagnostic accuracy. This model offers a novel tool for risk assessment of thymomas and can aid in guiding clinical decision-making.
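Habitat segmentation of this kind typically clusters per-voxel multi-phase intensity features; a sketch with scikit-learn's K-means on synthetic voxel features (the feature design here is an assumption, not the authors' exact inputs):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Hypothetical stand-in: per-voxel intensities inside one tumor ROI across
# the plain, arterial, and venous phases, plus their subtraction images.
n_voxels = 5000
phases = np.column_stack([
    rng.normal(40, 10, n_voxels),    # plain-scan HU
    rng.normal(80, 20, n_voxels),    # arterial-phase HU
    rng.normal(70, 15, n_voxels),    # venous-phase HU
])
features = np.column_stack([
    phases,
    phases[:, 1] - phases[:, 0],     # arterial - plain (subtraction image)
    phases[:, 2] - phases[:, 0],     # venous - plain (subtraction image)
])

# Cluster voxels into three habitat zones, mirroring the study design.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
for k in range(3):
    print(f"habitat {k + 1}: {np.mean(km.labels_ == k):.1%} of tumor voxels")
```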

Non-orthogonal kV imaging guided patient position verification in non-coplanar radiation therapy with dataset-free implicit neural representation.

Ye S, Chen Y, Wang S, Xing L, Gao Y

PubMed · May 19, 2025
Cone-beam CT (CBCT) is crucial for patient alignment and target verification in radiation therapy (RT). However, for non-coplanar beams, potential collisions between the treatment couch and the on-board imaging system limit the range over which the gantry can be rotated. Limited-angle measurements are often insufficient to generate high-quality volumetric images for image-domain registration, limiting the use of CBCT for position verification. An alternative to image-domain registration is to use a few 2D projections acquired by the onboard kV imager to register with the 3D planning CT for patient position verification, which is referred to as 2D-3D registration. The 2D-3D registration involves converting the 3D volume into a set of digitally reconstructed radiographs (DRRs) expected to be comparable to the acquired 2D projections. A domain gap between the generated DRRs and the acquired projections can arise from inaccurate geometry modeling in DRR generation and from artifacts in the actual acquisitions. We aim to improve the efficiency and accuracy of the challenging 2D-3D registration problem in non-coplanar RT with limited-angle CBCT scans. We designed an accelerated, dataset-free, and patient-specific 2D-3D registration framework based on an implicit neural representation (INR) network and a composite similarity measure. The INR network consists of a lightweight three-layer multilayer perceptron followed by average pooling to calculate rigid motion parameters, which are used to transform the original 3D volume to the moving position. The Radon transform and imaging specifications at the moving position are used to generate DRRs with higher accuracy. We designed a composite similarity measure consisting of the pixel-wise intensity difference and gradient differences between the generated DRRs and acquired projections to further reduce the impact of their domain gap on registration accuracy. We evaluated the proposed method on both simulation data and real phantom data acquired from a Varian TrueBeam machine. Comparisons with a conventional non-deep-learning registration approach and ablation studies on the composite similarity measure were conducted to demonstrate the efficacy of the proposed method. In the simulation experiments, two X-ray projections of a head-and-neck image with 45° discrepancy were used for the registration. The accuracy of the registration results was evaluated at four different moving positions with ground-truth moving parameters. The proposed method achieved sub-millimeter accuracy in translations and sub-degree accuracy in rotations. In the phantom experiments, a head-and-neck phantom was scanned at three different positions involving couch translations and rotations. We achieved translation errors of < 2 mm and sub-degree accuracy for pitch and roll. Experiments on registration using different numbers of projections with varying angle discrepancies demonstrate the improved accuracy and robustness of the proposed method compared to both the conventional registration approach and the proposed approach without certain components of the composite similarity measure.
We proposed a dataset-free lightweight INR-based registration with a composite similarity measure for the challenging 2D-3D registration problem with limited-angle CBCT scans. Comprehensive evaluations of both simulation data and experimental phantom data demonstrated the efficiency, accuracy, and robustness of the proposed method.
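A composite similarity of this form (intensity difference plus gradient difference between DRRs and acquired projections) can be written compactly; a PyTorch sketch, where the L1 distances, finite-difference gradients, and weighting are assumptions rather than the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

def composite_similarity(drr, proj, w_grad=1.0):
    """Composite dissimilarity between generated DRRs and acquired
    projections: pixel-wise intensity difference plus gradient difference."""
    intensity = F.l1_loss(drr, proj)
    # Finite-difference image gradients along both in-plane axes.
    dx = lambda im: im[..., :, 1:] - im[..., :, :-1]
    dy = lambda im: im[..., 1:, :] - im[..., :-1, :]
    grad = F.l1_loss(dx(drr), dx(proj)) + F.l1_loss(dy(drr), dy(proj))
    return intensity + w_grad * grad

# Toy usage: two 2D projections with shape (n_views, H, W).
drr = torch.rand(2, 128, 128, requires_grad=True)
proj = torch.rand(2, 128, 128)
loss = composite_similarity(drr, proj)
loss.backward()   # in practice, gradients flow back to the motion parameters
print(float(loss))
```

The gradient term is what dampens the DRR-vs-acquisition domain gap: low-frequency intensity mismatches contribute little to the gradient difference, while edges that should align contribute strongly.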

Development and Validation of an Integrated Deep Learning Model to Assist Eosinophilic Chronic Rhinosinusitis Diagnosis: A Multicenter Study.

Li J, Mao N, Aodeng S, Zhang H, Zhu Z, Wang L, Liu Y, Qi H, Qiao H, Lin Y, Qiu Z, Yang T, Zha Y, Wang X, Wang W, Song X, Lv W

PubMed · May 19, 2025
The assessment of eosinophilic chronic rhinosinusitis (eCRS) lacks accurate non-invasive preoperative prediction methods and relies primarily on invasive histopathological sections. This study aims to use computed tomography (CT) images and clinical parameters to develop an integrated deep learning model for the preoperative identification of eCRS and to explore the biological basis of its predictions. A total of 1098 patients with sinus CT images were included from two hospitals and divided into training, internal, and external test sets. The region of interest of the sinus lesions was manually outlined by an experienced radiologist. We utilized three deep learning models (3D-ResNet, 3D-Xception, and HR-Net) to extract features from CT images and calculate deep learning scores. The clinical signature and deep learning score were input into a support vector machine for classification. The receiver operating characteristic curve, sensitivity, specificity, and accuracy were used to evaluate the integrated deep learning model. Additionally, proteomic analysis was performed on 34 patients to explore the biological basis of the model's predictions. The area under the curve of the integrated deep learning model to predict eCRS was 0.851 (95% confidence interval [CI]: 0.77-0.93) and 0.821 (95% CI: 0.78-0.86) in the internal and external test sets, respectively. Proteomic analysis revealed that in patients predicted to be eCRS, 594 genes were dysregulated, some of which were associated with pathways and biological processes such as the chemokine signaling pathway. The proposed integrated deep learning model could effectively predict eCRS patients. This study provides a non-invasive way of identifying eCRS to facilitate personalized therapy, paving the way toward precision medicine for CRS.
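The final classification step (deep learning scores from the three networks plus a clinical signature fed to a support vector machine) can be sketched as follows on synthetic data; the feature layout and the scikit-learn pipeline are assumptions:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 300
# Hypothetical per-patient inputs: three CNN-derived scores plus two
# clinical parameters; the parameter choice here is purely illustrative.
dl_scores = rng.uniform(0, 1, size=(n, 3))   # 3D-ResNet / 3D-Xception / HR-Net
clinical = rng.normal(size=(n, 2))
X = np.hstack([dl_scores, clinical])
y = (dl_scores.mean(axis=1) + 0.2 * clinical[:, 0]
     + rng.normal(scale=0.2, size=n) > 0.6).astype(int)  # synthetic labels

svm = make_pipeline(StandardScaler(), SVC(probability=True))
svm.fit(X[:200], y[:200])                    # train split
auc = roc_auc_score(y[200:], svm.predict_proba(X[200:])[:, 1])
print(f"held-out AUC = {auc:.3f} (synthetic data, illustrative only)")
```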

Portable Ultrasound Bladder Volume Measurement Over Entire Volume Range Using a Deep Learning Artificial Intelligence Model in a Selected Cohort: A Proof of Principle Study.

Jeong HJ, Seol A, Lee S, Lim H, Lee M, Oh SJ

PubMed · May 19, 2025
We aimed to prospectively investigate whether bladder volume measured using deep learning artificial intelligence (AI) algorithms (AI-BV) is more accurate than that measured using conventional methods (C-BV) with a portable ultrasound bladder scanner (PUBS). Patients who underwent filling cystometry because of lower urinary tract symptoms between January 2021 and July 2022 were enrolled. As the bladder was serially filled with normal saline from 0 mL to maximum cystometric capacity in 50 mL increments, C-BV was measured with PUBS at each step. Ultrasound images obtained during this process were manually annotated to define the bladder contour, which was used to build a deep learning AI model. The true bladder volume (T-BV) for each bladder volume range was compared with C-BV and AI-BV. We enrolled 250 patients (213 men and 37 women), and a deep learning AI model was established using 1912 bladder images. There was a significant difference between C-BV (205.5 ± 170.8 mL) and T-BV (190.5 ± 165.7 mL) (p = 0.001), but no significant difference between AI-BV (197.0 ± 161.1 mL) and T-BV (190.5 ± 165.7 mL) (p = 0.081). In bladder volume ranges of 101-150, 151-200, and 201-300 mL, there were significant differences in the percentage volume differences between [C-BV and T-BV] and [AI-BV and T-BV] (p < 0.05), but no significant differences in the absolute volume differences (p > 0.05). C-BV (R² = 0.91, p < 0.001) and AI-BV (R² = 0.90, p < 0.001) were highly correlated with T-BV. The mean difference between AI-BV and T-BV (6.5 ± 50.4 mL) was significantly smaller than that between C-BV and T-BV (15.0 ± 50.9 mL) (p = 0.001). Following image pre-processing, deep learning AI-BV estimated true bladder volume more accurately than conventional methods in this selected cohort on internal validation. Determination of the clinical relevance of these findings and performance in external cohorts requires further study. The clinical trial was conducted using an approved product for its approved indication, so approval from the Ministry of Food and Drug Safety (MFDS) was not required; therefore, there is no clinical trial registration number.
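Given a segmented bladder contour, volume estimation and the agreement statistics used here are simple to reproduce; a sketch with invented paired measurements (not the study's data):

```python
import numpy as np
from scipy import stats

def volume_ml(mask, spacing_mm=(0.5, 0.5, 1.0)):
    """Volume of a binary segmentation mask in millilitres:
    voxel count x voxel volume (mm^3 -> mL)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0

# Hypothetical paired measurements (mL) across filling steps.
rng = np.random.default_rng(4)
t_bv = np.arange(50, 501, 50, dtype=float)           # true volumes
c_bv = t_bv + rng.normal(15, 20, size=t_bv.size)     # conventional estimate
ai_bv = t_bv + rng.normal(6, 15, size=t_bv.size)     # AI-based estimate

r2 = stats.pearsonr(ai_bv, t_bv)[0] ** 2             # R^2 vs. true volume
t, p = stats.ttest_rel(np.abs(ai_bv - t_bv), np.abs(c_bv - t_bv))
print(f"R^2 = {r2:.2f}; paired t-test on absolute errors: p = {p:.3f}")
```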

Federated Learning for Renal Tumor Segmentation and Classification on Multi-Center MRI Dataset.

Nguyen DT, Imami M, Zhao LM, Wu J, Borhani A, Mohseni A, Khunte M, Zhong Z, Shi V, Yao S, Wang Y, Loizou N, Silva AC, Zhang PJ, Zhang Z, Jiao Z, Kamel I, Liao WH, Bai H

PubMed · May 19, 2025
Deep learning (DL) models for accurate renal tumor characterization may benefit from multi-center datasets for improved generalizability; however, data-sharing constraints necessitate privacy-preserving solutions like federated learning (FL). This study assessed the performance and reliability of FL for renal tumor segmentation and classification in multi-institutional MRI datasets. In this retrospective multi-center study, a total of 987 patients (403 female) from six hospitals were included for analysis; 73% (723/987) had malignant renal tumors, primarily clear cell carcinoma (n = 509). Patients were split into training (n = 785), validation (n = 104), and test (n = 99) sets, stratified across three simulated institutions. MRI was performed at 1.5 T and 3 T using T2-weighted imaging (T2WI) and contrast-enhanced T1-weighted imaging (CE-T1WI) sequences. Both FL and non-FL approaches used nnU-Net for tumor segmentation and ResNet for classification. The FL approach trained models across three simulated institutional clients with central weight aggregation, while the non-FL approach used centralized training on the full dataset. Segmentation was evaluated using Dice coefficients, and classification between malignant and benign lesions was assessed using accuracy, sensitivity, specificity, and areas under the curve (AUCs). FL and non-FL performance was compared using the Wilcoxon test for segmentation Dice and DeLong's test for AUC (p < 0.05). No significant difference was observed between FL and non-FL models in segmentation (Dice: 0.43 vs. 0.45, p = 0.202) or classification (AUC: 0.69 vs. 0.64, p = 0.959) on the test set. For classification, no significant difference was observed between the models in accuracy (p = 0.912), sensitivity (p = 0.862), or specificity (p = 0.847). FL demonstrated comparable performance to non-FL approaches in renal tumor segmentation and classification, supporting its potential as a privacy-preserving alternative for multi-institutional DL models. Level of Evidence: 4. Technical Efficacy: Stage 2.
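The "central weight aggregation" step in FL setups like this is commonly FedAvg, i.e., a size-weighted average of client model weights; a PyTorch sketch on toy models (FedAvg itself is an assumption about the aggregation rule used here):

```python
import copy
import torch.nn as nn

def fed_avg(client_states, client_sizes):
    """Federated averaging: aggregate client state dicts into a global
    state dict, weighted by each client's number of training cases."""
    total = float(sum(client_sizes))
    global_state = copy.deepcopy(client_states[0])
    for key in global_state:
        global_state[key] = sum(
            state[key] * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return global_state

# Toy example: three "institutions" with independently trained tiny models.
clients = [nn.Linear(4, 2) for _ in range(3)]
states = [c.state_dict() for c in clients]
global_model = nn.Linear(4, 2)
global_model.load_state_dict(fed_avg(states, client_sizes=[500, 300, 185]))
print(global_model.weight)
```

In a real round, the aggregated weights would be broadcast back to the clients for the next round of local training; no imaging data ever leaves an institution.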
