Artificial intelligence based pulmonary vessel segmentation: an opportunity for automated three-dimensional planning of lung segmentectomy.

Mank QJ, Thabit A, Maat APWM, Siregar S, Van Walsum T, Kluin J, Sadeghi AH

PubMed · May 19, 2025
This study aimed to develop an automated method for pulmonary artery and vein segmentation in both the left and right lungs from computed tomography (CT) images using artificial intelligence (AI). The segmentations were evaluated using PulmoSR software, which provides 3D visualizations of patient-specific anatomy, potentially enhancing a surgeon's understanding of the lung structure. A dataset of 125 CT scans from lung segmentectomy patients at Erasmus MC was used. Manual annotations for pulmonary arteries and veins were created with 3D Slicer. nnU-Net models were trained for both lungs and assessed using the Dice score, sensitivity, and specificity. Intraoperative recordings demonstrated clinical applicability. A paired t-test evaluated the statistical significance of the differences between automatic and manual segmentations. The nnU-Net model, trained at full 3D resolution, achieved a mean Dice score between 0.91 and 0.92. The mean sensitivity and specificity were: left artery, 0.86 and 0.99; right artery, 0.84 and 0.99; left vein, 0.85 and 0.99; right vein, 0.85 and 0.99. The automatic method reduced segmentation time from ∼1.5 hours to under 5 minutes. Five cases were evaluated to demonstrate how the segmentations support lung segmentectomy procedures. P-values for Dice scores were all below 0.01, indicating statistical significance. The nnU-Net models successfully performed automatic segmentation of pulmonary arteries and veins in both lungs. When integrated with visualization tools, these automatic segmentations can enhance preoperative and intraoperative planning by providing detailed 3D views of the patient's anatomy.
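
The overlap metrics reported above (Dice score, sensitivity, specificity) can be reproduced from binary masks with a few lines of NumPy. The sketch below is illustrative only: it assumes boolean prediction and ground-truth volumes of identical shape, and the array names are placeholders, not taken from the paper.

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Dice score, sensitivity, and specificity for binary 3D masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Random masks stand in for an automatic and a manual vessel segmentation.
rng = np.random.default_rng(0)
auto = rng.random((64, 64, 64)) > 0.5
manual = rng.random((64, 64, 64)) > 0.5
print(overlap_metrics(auto, manual))
```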

Advances in pancreatic cancer diagnosis: from DNA methylation to AI-Assisted imaging.

Sharma R, Komal K, Kumar S, Ghosh R, Pandey P, Gupta GD, Kumar M

PubMed · May 19, 2025
Pancreatic Cancer (PC) is a highly aggressive tumor that is mainly diagnosed at later stages. Various imaging technologies, such as CT, MRI, and EUS, possess limitations in early PC diagnosis. Therefore, this review article explores various innovative biomarkers for PC detection, such as DNA methylation, non-coding RNAs, and proteomic biomarkers, and the role of AI in PC detection at early stages. Innovative biomarkers, such as DNA methylation genes, show higher specificity and sensitivity in PC diagnosis. Additionally, various non-coding RNAs, such as long non-coding RNAs (lncRNAs) and microRNAs, show high diagnostic accuracy and serve as diagnostic and prognostic biomarkers. Proteomic biomarkers likewise retain high diagnostic accuracy across different body fluids. Beyond biomarkers, studies utilizing AI have shown that it can surpass radiologists' diagnostic performance in PC detection. The combination of AI and advanced biomarkers could revolutionize early PC detection; however, large-scale, prospective studies are needed to validate its clinical utility. Furthermore, standardization of biomarker panels and AI algorithms is a vital step toward their reliable application in early PC detection, ultimately improving patient outcomes.

Thymoma habitat segmentation and risk prediction model using CT imaging and K-means clustering.

Liang Z, Li J, He S, Li S, Cai R, Chen C, Zhang Y, Deng B, Wu Y

PubMed · May 19, 2025
Thymomas, though rare, present a wide range of clinical behaviors, from indolent to aggressive forms, making accurate risk stratification crucial for treatment planning. Traditional methods such as histopathology and radiological assessments often lack the ability to capture tumor heterogeneity, which can impact prognosis. Radiomics, combined with machine learning, provides a method to extract and analyze quantitative imaging features, offering the potential to improve tumor classification and risk prediction. By segmenting tumors into distinct habitat zones, it becomes possible to assess intratumoral heterogeneity more effectively. This study employs radiomics and machine learning techniques to enhance thymoma risk prediction, aiming to improve diagnostic consistency and reduce variability in radiologists' assessments. It aims to identify distinct habitat zones within thymomas through CT imaging feature analysis, to establish a predictive model differentiating high- and low-risk thymomas, and to explore how this model can assist radiologists. We obtained CT imaging data from 133 patients with thymoma who were treated at the Affiliated Hospital of Guangdong Medical University from 2015 to 2023. Images from the plain scan phase, venous phase, arterial phase, and their differential (subtraction) images were used. Tumor regions were segmented into three habitat zones using K-Means clustering. Imaging features from each habitat zone were extracted using the PyRadiomics (van Griethuysen, 2017) library. The 28 most distinguishing features were selected through Mann-Whitney U tests (Mann, 1947) and Spearman's correlation analysis (Spearman, 1904). Five predictive models were built using the same machine learning algorithm (Support Vector Machine [SVM]): Habitat1, Habitat2, and Habitat3 (trained on features from individual tumor habitat regions), Habitat All (trained on combined features from all regions), and Intra (trained on intratumoral features), and their performances were evaluated for comparison. The models' diagnostic outcomes were compared with the diagnoses of four radiologists (two junior and two experienced physicians). The AUC (area under the curve) for habitat zone 1 was 0.818, for habitat zone 2 was 0.732, and for habitat zone 3 was 0.763. The comprehensive model, which combined data from all habitat zones, achieved an AUC of 0.960, outperforming the model based on traditional radiomic features (AUC of 0.720). The model significantly improved the diagnostic accuracy of all four radiologists: the AUCs for junior radiologists 1 and 2 increased from 0.747 and 0.775 to 0.932 and 0.972, respectively, while for experienced radiologists 1 and 2, the AUCs increased from 0.932 and 0.859 to 0.977 and 0.972, respectively. This study successfully identified distinct habitat zones within thymomas through CT imaging feature analysis and developed an efficient predictive model that significantly improved diagnostic accuracy. This model offers a novel tool for risk assessment of thymomas and can aid in guiding clinical decision-making.
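
A minimal sketch of the habitat-clustering step described above, assuming tumor-voxel intensities from the different phases are stacked as a feature matrix and partitioned into three zones with K-Means (K = 3, as in the study). The synthetic arrays and variable names are placeholders, not the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-ins for co-registered plain, arterial, venous, and subtraction
# intensities of voxels inside a tumor mask; real inputs would come from the CT volumes.
rng = np.random.default_rng(42)
n_voxels = 5000
phase_intensities = rng.normal(size=(n_voxels, 4))  # one column per phase/subtraction

# Partition tumor voxels into three habitat zones by intensity pattern (K = 3).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
habitat_labels = kmeans.fit_predict(phase_intensities)

# Each habitat zone would then be exported as its own mask and fed to PyRadiomics
# for per-zone feature extraction before feature screening and SVM modeling.
for zone in range(3):
    print(f"habitat {zone + 1}: {np.sum(habitat_labels == zone)} voxels")
```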

Non-orthogonal kV imaging guided patient position verification in non-coplanar radiation therapy with dataset-free implicit neural representation.

Ye S, Chen Y, Wang S, Xing L, Gao Y

PubMed · May 19, 2025
Cone-beam CT (CBCT) is crucial for patient alignment and target verification in radiation therapy (RT). However, for non-coplanar beams, potential collisions between the treatment couch and the on-board imaging system limit the range through which the gantry can be rotated. Limited-angle measurements are often insufficient to generate high-quality volumetric images for image-domain registration, thereby limiting the use of CBCT for position verification. An alternative to image-domain registration is to use a few 2D projections acquired by the onboard kV imager to register with the 3D planning CT for patient position verification, which is referred to as 2D-3D registration. The 2D-3D registration involves converting the 3D volume into a set of digitally reconstructed radiographs (DRRs) expected to be comparable to the acquired 2D projections. A domain gap between the generated DRRs and the acquired projections can arise from inaccurate geometry modeling in DRR generation and artifacts in the actual acquisitions. We aim to improve the efficiency and accuracy of the challenging 2D-3D registration problem in non-coplanar RT with limited-angle CBCT scans. We designed an accelerated, dataset-free, and patient-specific 2D-3D registration framework based on an implicit neural representation (INR) network and a composite similarity measure. The INR network consists of a lightweight three-layer multilayer perceptron followed by average pooling to calculate rigid motion parameters, which are used to transform the original 3D volume to the moving position. The Radon transform and imaging specifications at the moving position are used to generate DRRs with higher accuracy. We designed a composite similarity measure consisting of pixel-wise intensity differences and gradient differences between the generated DRRs and acquired projections to further reduce the impact of their domain gap on registration accuracy. We evaluated the proposed method on both simulation data and real phantom data acquired from a Varian TrueBeam machine. Comparisons with a conventional non-deep-learning registration approach and ablation studies on the composite similarity measure were conducted to demonstrate the efficacy of the proposed method. In the simulation experiments, two X-ray projections of a head-and-neck image with a 45° angular discrepancy were used for the registration. The accuracy of the registration results was evaluated at four different moving positions with ground-truth moving parameters. The proposed method achieved sub-millimeter accuracy in translations and sub-degree accuracy in rotations. In the phantom experiments, a head-and-neck phantom was scanned at three different positions involving couch translations and rotations. We achieved translation errors of < 2 mm and sub-degree accuracy for pitch and roll. Experiments on registration using different numbers of projections with varying angular discrepancies demonstrate the improved accuracy and robustness of the proposed method compared with both the conventional registration approach and the proposed approach without certain components of the composite similarity measure.
We proposed a dataset-free, lightweight, INR-based registration framework with a composite similarity measure for the challenging 2D-3D registration problem with limited-angle CBCT scans. Comprehensive evaluations on both simulation data and experimental phantom data demonstrated the efficiency, accuracy, and robustness of the proposed method.
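
The composite similarity measure lends itself to a compact sketch: a pixel-wise intensity term plus a gradient-difference term between a generated DRR and an acquired projection. The equal weighting and squared-error norms below are assumptions for illustration, not the exact formulation used by the authors.

```python
import numpy as np

def composite_similarity(drr: np.ndarray, proj: np.ndarray, w_grad: float = 1.0) -> float:
    """Loss combining pixel-wise intensity and gradient differences (lower is better)."""
    intensity_term = np.mean((drr - proj) ** 2)
    # Image gradients along both axes; np.gradient returns one array per axis.
    gx_d, gy_d = np.gradient(drr)
    gx_p, gy_p = np.gradient(proj)
    gradient_term = np.mean((gx_d - gx_p) ** 2 + (gy_d - gy_p) ** 2)
    return intensity_term + w_grad * gradient_term

# Toy check: a shifted copy scores worse than the image against itself.
img = np.random.default_rng(1).random((128, 128))
print(composite_similarity(img, img))                 # ~0
print(composite_similarity(img, np.roll(img, 3, 0)))  # larger
```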

Development and Validation an Integrated Deep Learning Model to Assist Eosinophilic Chronic Rhinosinusitis Diagnosis: A Multicenter Study.

Li J, Mao N, Aodeng S, Zhang H, Zhu Z, Wang L, Liu Y, Qi H, Qiao H, Lin Y, Qiu Z, Yang T, Zha Y, Wang X, Wang W, Song X, Lv W

PubMed · May 19, 2025
The assessment of eosinophilic chronic rhinosinusitis (eCRS) lacks accurate non-invasive preoperative prediction methods, relying primarily on invasive histopathological sections. This study aims to use computed tomography (CT) images and clinical parameters to develop an integrated deep learning model for the preoperative identification of eCRS and further explore the biological basis of its predictions. A total of 1098 patients with sinus CT images were included from two hospitals and were divided into training, internal, and external test sets. The region of interest of sinus lesions was manually outlined by an experienced radiologist. We utilized three deep learning models (3D-ResNet, 3D-Xception, and HR-Net) to extract features from CT images and calculate deep learning scores. The clinical signature and deep learning score were then input into a support vector machine for classification. The receiver operating characteristic curve, sensitivity, specificity, and accuracy were used to evaluate the integrated deep learning model. Additionally, proteomic analysis was performed on 34 patients to explore the biological basis of the model's predictions. The area under the curve of the integrated deep learning model for predicting eCRS was 0.851 (95% confidence interval [CI]: 0.77-0.93) in the internal test set and 0.821 (95% CI: 0.78-0.86) in the external test set. Proteomic analysis revealed that in patients predicted to have eCRS, 594 genes were dysregulated, some of which were associated with pathways and biological processes such as the chemokine signaling pathway. The proposed integrated deep learning model could effectively predict eCRS patients. This study provides a non-invasive way of identifying eCRS to facilitate personalized therapy, which will pave the way toward precision medicine for CRS.
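
A hedged sketch of the final classification stage, in which a per-patient deep learning score is concatenated with clinical parameters and fed to a support vector machine. The synthetic data, feature dimensions, and variable names are illustrative assumptions rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins: one deep learning score per patient (e.g., from a 3D CNN)
# plus a few clinical parameters; names and dimensions are illustrative only.
rng = np.random.default_rng(7)
n = 300
dl_score = rng.normal(size=(n, 1))
clinical = rng.normal(size=(n, 3))
X = np.hstack([dl_score, clinical])
y = (dl_score[:, 0] + 0.5 * clinical[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Combine the deep learning score with the clinical signature in an SVM classifier.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```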

Portable Ultrasound Bladder Volume Measurement Over Entire Volume Range Using a Deep Learning Artificial Intelligence Model in a Selected Cohort: A Proof of Principle Study.

Jeong HJ, Seol A, Lee S, Lim H, Lee M, Oh SJ

PubMed · May 19, 2025
We aimed to prospectively investigate whether bladder volume measured using deep learning artificial intelligence (AI) algorithms (AI-BV) is more accurate than that measured using conventional methods (C-BV) with a portable ultrasound bladder scanner (PUBS). Patients who underwent filling cystometry because of lower urinary tract symptoms between January 2021 and July 2022 were enrolled. As the bladder was serially filled with normal saline from 0 mL to maximum cystometric capacity in 50 mL increments, C-BV was measured with PUBS at each step. Ultrasound images obtained during this process were manually annotated to define the bladder contour, which was used to build a deep learning AI model. The true bladder volume (T-BV) for each bladder volume range was compared with C-BV and AI-BV for analysis. We enrolled 250 patients (213 men and 37 women), and a deep learning AI model was established using 1912 bladder images. There was a significant difference between C-BV (205.5 ± 170.8 mL) and T-BV (190.5 ± 165.7 mL) (p = 0.001), but no significant difference between AI-BV (197.0 ± 161.1 mL) and T-BV (190.5 ± 165.7 mL) (p = 0.081). In the bladder volume ranges of 101-150, 151-200, and 201-300 mL, there were significant differences in the percentage volume differences between [C-BV and T-BV] and [AI-BV and T-BV] (p < 0.05), but no significant difference when converted to absolute values (p > 0.05). C-BV (R² = 0.91, p < 0.001) and AI-BV (R² = 0.90, p < 0.001) were highly correlated with T-BV. The mean difference between AI-BV and T-BV (6.5 ± 50.4 mL) was significantly smaller than that between C-BV and T-BV (15.0 ± 50.9 mL) (p = 0.001). Following image pre-processing, deep learning AI-BV estimated true bladder volume more accurately than conventional methods in this selected cohort on internal validation. Determination of the clinical relevance of these findings and of performance in external cohorts requires further study. The clinical trial was conducted using an approved product for its approved indication, so approval from the Ministry of Food and Drug Safety (MFDS) was not required; therefore, there is no clinical trial registration number.
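
The statistical comparison described above (paired differences and correlation of each volume estimate against the true volume) can be sketched as follows. The synthetic volumes are stand-ins chosen only to mimic the reported orders of magnitude, not the study data.

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins for true (T-BV), conventional (C-BV), and AI (AI-BV) volumes in mL.
rng = np.random.default_rng(3)
t_bv = rng.uniform(0, 500, size=200)
c_bv = t_bv + rng.normal(15, 50, size=200)   # conventional scanner estimate
ai_bv = t_bv + rng.normal(6, 50, size=200)   # deep-learning estimate

# Paired t-test and correlation of each estimate against the true volume.
for name, est in [("C-BV", c_bv), ("AI-BV", ai_bv)]:
    diff = est - t_bv
    r, _ = stats.pearsonr(est, t_bv)
    _, p = stats.ttest_rel(est, t_bv)
    print(f"{name}: mean diff {diff.mean():.1f} mL, R^2 {r**2:.2f}, paired t-test p={p:.3f}")
```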

Federated Learning for Renal Tumor Segmentation and Classification on Multi-Center MRI Dataset.

Nguyen DT, Imami M, Zhao LM, Wu J, Borhani A, Mohseni A, Khunte M, Zhong Z, Shi V, Yao S, Wang Y, Loizou N, Silva AC, Zhang PJ, Zhang Z, Jiao Z, Kamel I, Liao WH, Bai H

PubMed · May 19, 2025
Deep learning (DL) models for accurate renal tumor characterization may benefit from multi-center datasets for improved generalizability; however, data-sharing constraints necessitate privacy-preserving solutions like federated learning (FL). To assess the performance and reliability of FL for renal tumor segmentation and classification in multi-institutional MRI datasets. Retrospective multi-center study. A total of 987 patients (403 female) from six hospitals were included for analysis; 73% (723/987) had malignant renal tumors, primarily clear cell carcinoma (n = 509). Patients were split into training (n = 785), validation (n = 104), and test (n = 99) sets, stratified across three simulated institutions. MRI was performed at 1.5 T and 3 T using T2-weighted imaging (T2WI) and contrast-enhanced T1-weighted imaging (CE-T1WI) sequences. Both the FL and non-FL approaches used nnU-Net for tumor segmentation and ResNet for its classification. The FL approach trained models across three simulated institutional clients with central weight aggregation, while the non-FL approach used centralized training on the full dataset. Segmentation was evaluated using Dice coefficients, and classification between malignant and benign lesions was assessed using accuracy, sensitivity, specificity, and the area under the curve (AUC). FL and non-FL performance was compared using the Wilcoxon test for segmentation Dice and DeLong's test for AUC (p < 0.05). No significant difference was observed between the FL and non-FL models in segmentation (Dice: 0.43 vs. 0.45, p = 0.202) or classification (AUC: 0.69 vs. 0.64, p = 0.959) on the test set. For classification, no significant difference was observed between the models in accuracy (p = 0.912), sensitivity (p = 0.862), or specificity (p = 0.847) on the test set. FL demonstrated comparable performance to non-FL approaches in renal tumor segmentation and classification, supporting its potential as a privacy-preserving alternative for multi-institutional DL models. Evidence Level: 4. Technical Efficacy: Stage 2.
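
The paper describes central weight aggregation across three simulated clients; a FedAvg-style weighted average is one standard way to implement that step. The sketch below assumes each client exposes its parameter arrays and sample count; the aggregation rule is an illustrative choice, not confirmed by the paper.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate per-client parameter arrays into a global model (FedAvg-style)."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_weights = []
    for p in range(n_params):
        # Weight each client's parameter tensor by its share of the total samples.
        stacked = np.stack([w[p] * (n / total) for w, n in zip(client_weights, client_sizes)])
        global_weights.append(stacked.sum(axis=0))
    return global_weights

# Three simulated institutional clients, each holding the same two-tensor "model".
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
sizes = [300, 250, 235]  # illustrative per-client sample counts
global_model = federated_average(clients, sizes)
print(global_model[0].shape, global_model[1].shape)
```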

Transformer model based on Sonazoid contrast-enhanced ultrasound for microvascular invasion prediction in hepatocellular carcinoma.

Qin Q, Pang J, Li J, Gao R, Wen R, Wu Y, Liang L, Que Q, Liu C, Peng J, Lv Y, He Y, Lin P, Yang H

PubMed · May 19, 2025
Microvascular invasion (MVI) is strongly associated with the prognosis of patients with hepatocellular carcinoma (HCC). This study aimed to evaluate the value of Transformer models with Sonazoid contrast-enhanced ultrasound (CEUS) in the preoperative prediction of MVI. This retrospective study included 164 HCC patients. Deep learning features and radiomic features were extracted from arterial- and Kupffer-phase images, alongside the collection of clinicopathological parameters. Normality was assessed using the Shapiro-Wilk test. The Mann-Whitney U-test and the least absolute shrinkage and selection operator (LASSO) algorithm were applied to screen features. Transformer, radiomic, and clinical prediction models for MVI were constructed with logistic regression. Repeated random splits followed a 7:3 ratio, with model performance evaluated over 50 iterations. The area under the receiver operating characteristic curve (AUC, 95% confidence interval [CI]), sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), decision curve, and calibration curve were used to evaluate the performance of the models. The DeLong test was applied to compare performance between models. The Bonferroni method was used to control type I error rates arising from multiple comparisons. A two-sided p-value of < 0.05 was considered statistically significant. In the training set, the diagnostic performance of the arterial-phase Transformer (AT) and Kupffer-phase Transformer (KT) models was better than that of the radiomic and clinical (Clin) models (p < 0.0001). In the validation set, both the AT and KT models outperformed the radiomic and Clin models in terms of diagnostic performance (p < 0.05). The AUC (95% CI) for the AT model was 0.821 (0.72-0.925) with an accuracy of 80.0%, and that for the KT model was 0.859 (0.766-0.977) with an accuracy of 70.0%. Logistic regression analysis indicated that tumor size (p = 0.016) and alpha-fetoprotein (AFP) (p = 0.046) were independent predictors of MVI. Transformer models using Sonazoid CEUS have potential for effectively identifying MVI-positive patients preoperatively.
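
The evaluation protocol (repeated 7:3 random splits with a logistic regression model scored by AUC over 50 iterations) can be sketched with scikit-learn as below. The synthetic feature matrix and labels are placeholders for the selected Transformer, radiomic, and clinical features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins for selected features and MVI labels (164 patients, 10 features).
rng = np.random.default_rng(11)
X = rng.normal(size=(164, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=164) > 0).astype(int)

aucs = []
for seed in range(50):  # 50 repeated random 7:3 splits, as in the study design
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed, stratify=y
    )
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

print(f"mean AUC over 50 splits: {np.mean(aucs):.3f} ± {np.std(aucs):.3f}")
```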

Functional MRI Analysis of Cortical Regions to Distinguish Lewy Body Dementia From Alzheimer's Disease.

Kashyap B, Hanson LR, Gustafson SK, Sherman SJ, Sughrue ME, Rosenbloom MH

PubMed · May 19, 2025
Cortical regions such as parietal area H (PH) and the fundus of the superior temporal sulcus (FST) are involved in higher visual function and may play a role in dementia with Lewy bodies (DLB), which is frequently associated with hallucinations. The authors evaluated functional connectivity between these two regions for distinguishing participants with DLB from those with Alzheimer's disease (AD) or mild cognitive impairment (MCI) and from cognitively normal (CN) individuals, with the goal of identifying a functional connectivity MRI signature for DLB. Eighteen DLB participants completed cognitive testing and functional MRI scans and were matched to AD or MCI and CN individuals whose data were obtained from the Alzheimer's Disease Neuroimaging Initiative database (https://adni.loni.usc.edu). Images were analyzed alongside data from Human Connectome Project (HCP) comparison individuals using a machine learning-based, subject-specific HCP atlas derived from diffusion tractography. Bihemispheric functional connectivity of the PH to left FST regions was reduced in the DLB group compared with the AD and CN groups (mean ± SD connectivity score = 0.307 ± 0.009 vs. 0.456 ± 0.006 and 0.433 ± 0.006, respectively). No significant differences were detected among the groups in connectivity within basal ganglia structures, and no significant correlations were observed between neuropsychological testing results and functional connectivity between the PH and FST regions. Performance on clock-drawing and number-cancellation tests was significantly and negatively correlated with connectivity between the right caudate nucleus and right substantia nigra for DLB participants but not for AD or CN participants. The functional connectivity between PH and FST regions is uniquely affected by DLB and may help distinguish this condition from AD.
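
As a rough illustration of a connectivity score between two cortical regions, the sketch below computes the Pearson correlation of two mean ROI time series. The actual pipeline uses a machine learning-based, subject-specific HCP atlas, so the synthetic signals and region names here are assumptions for demonstration only.

```python
import numpy as np

def region_connectivity(ts_a: np.ndarray, ts_b: np.ndarray) -> float:
    """Functional connectivity as the Pearson correlation of two mean ROI time series."""
    return float(np.corrcoef(ts_a, ts_b)[0, 1])

# Synthetic BOLD time series (200 volumes) averaged over voxels in two regions,
# standing in for area PH and the left FST; real inputs come from the atlas parcels.
rng = np.random.default_rng(5)
shared = rng.normal(size=200)
ph = shared + rng.normal(scale=1.0, size=200)
fst_left = shared + rng.normal(scale=1.0, size=200)
print("PH-FST connectivity:", round(region_connectivity(ph, fst_left), 3))
```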

Current trends and emerging themes in utilizing artificial intelligence to enhance anatomical diagnostic accuracy and efficiency in radiotherapy.

Pezzino S, Luca T, Castorina M, Puleo S, Castorina S

PubMed · May 19, 2025
Artificial intelligence (AI) incorporation into healthcare has proven revolutionary, especially in radiotherapy, where accuracy is critical. The purpose of this study is to present trends and emerging topics in the application of AI to improve the precision of anatomical diagnosis, organ delineation, and therapeutic effectiveness in radiotherapy and radiological imaging. We performed a bibliometric analysis of scholarly articles in these fields published since 2014, examining research output from key contributing nations and institutions, notable research subjects, and trends in scientific terminology pertaining to AI in radiology and radiotherapy. Furthermore, we examined AI-based software solutions in these domains, with a specific emphasis on the extraction of anatomical features and the recognition of organs for treatment planning. Our investigation found a significant surge in papers pertaining to AI in these fields since 2014. Institutions such as Emory University and Memorial Sloan-Kettering Cancer Center contributed substantially to establishing the United States and China as the leading research-producing nations. Key study areas encompassed adaptive radiotherapy informed by anatomical alterations, MR-Linac for enhanced visualization of soft tissues, and multi-organ segmentation for accurate radiotherapy planning. An evident increase was noted in the frequency of terms such as 'radiomics,' 'radiotherapy segmentation,' and 'dosiomics.' The evaluation of AI-based software revealed a wide range of uses across subdisciplines of radiotherapy and radiology, particularly in improving the identification of anatomical features for treatment planning and identifying organs at risk. The incorporation of AI into anatomical diagnosis in radiological imaging and radiotherapy is progressing rapidly, with substantial capacity to transform the precision of diagnoses and the effectiveness of treatment planning.
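
A bibliometric keyword-trend count of the kind described above can be sketched with pandas. The toy records, years, and keywords below are invented for illustration and do not reflect the study's dataset.

```python
import pandas as pd

# Illustrative records standing in for article metadata pulled from a bibliographic
# database; a real bibliometric analysis would query thousands of entries since 2014.
records = pd.DataFrame({
    "year": [2015, 2018, 2020, 2021, 2022, 2022, 2023, 2023],
    "keywords": [
        "segmentation", "radiomics", "radiomics; dosiomics",
        "radiotherapy segmentation", "radiomics", "dosiomics",
        "radiomics; radiotherapy segmentation", "dosiomics",
    ],
})

# Count how often each term appears per year to surface emerging themes.
exploded = records.assign(keyword=records["keywords"].str.split("; ")).explode("keyword")
trend = exploded.groupby(["year", "keyword"]).size().unstack(fill_value=0)
print(trend)
```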