
The Role of Computed Tomography and Artificial Intelligence in Evaluating the Comorbidities of Chronic Obstructive Pulmonary Disease: A One-Stop CT Scanning for Lung Cancer Screening.

Lin X, Zhang Z, Zhou T, Li J, Jin Q, Li Y, Guan Y, Xia Y, Zhou X, Fan L

PubMed · Jan 1, 2025
Chronic obstructive pulmonary disease (COPD) is a major cause of morbidity and mortality worldwide. Comorbidities in patients with COPD significantly increase morbidity, mortality, and healthcare costs, posing a significant burden on COPD management. Given the complex clinical manifestations and varying severity of COPD comorbidities, accurate diagnosis and evaluation are particularly important in selecting appropriate treatment options. With the development of medical imaging technology, AI-based chest CT, as a noninvasive modality, provides a detailed assessment of COPD comorbidities. Recent studies have shown that certain radiographic features on chest CT can serve as surrogate markers of comorbidities in COPD patients. CT-based radiomics features have shown incremental predictive value over clinical risk factors alone, reaching an AUC of 0.73 for predicting COPD combined with cardiovascular disease (CVD). However, AI has inherent limitations, such as a lack of interpretability, that further research must address. This review surveys the progress of AI technology combined with chest CT imaging across COPD comorbidities, including lung cancer, cardiovascular disease, osteoporosis, sarcopenia, excess adipose depots, and pulmonary hypertension, with the aim of improving the understanding of imaging findings and the management of COPD comorbidities for disease screening, efficacy assessment, and prognostic evaluation.
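As a rough illustration of the incremental-value comparison described above, the following minimal Python sketch contrasts a clinical-only model with a clinical-plus-radiomics model by cross-validated AUC. The synthetic data and feature names are hypothetical, not from the review or the underlying study.

```python
# Hedged sketch: clinical-only vs. clinical + radiomics AUC comparison.
# All data below is synthetic; feature groups are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
clinical = rng.normal(size=(n, 4))    # e.g., age, BMI, smoking, FEV1 (hypothetical)
radiomics = rng.normal(size=(n, 20))  # CT-derived texture/shape features (hypothetical)
y = (clinical[:, 0] + 0.5 * radiomics[:, 0] + rng.normal(size=n) > 0).astype(int)

clf = LogisticRegression(max_iter=1000)
auc_clinical = cross_val_score(clf, clinical, y, cv=5, scoring="roc_auc").mean()
auc_combined = cross_val_score(clf, np.hstack([clinical, radiomics]), y,
                               cv=5, scoring="roc_auc").mean()
print(f"clinical-only AUC: {auc_clinical:.2f}, clinical+radiomics AUC: {auc_combined:.2f}")
```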

Clinical-radiomics models with machine-learning algorithms to distinguish uncomplicated from complicated acute appendicitis in adults: a multiphase multicenter cohort study.

Li L, Sun Y, Sun Y, Gao Y, Zhang B, Qi R, Sheng F, Yang X, Liu X, Liu L, Lu C, Chen L, Zhang K

PubMed · Jan 1, 2025
Increasing evidence suggests that non-operative management (NOM) with antibiotics could serve as a safe alternative to surgery for the treatment of uncomplicated acute appendicitis (AA). However, accurately differentiating between uncomplicated and complicated AA remains challenging. Our aim was to develop and validate machine-learning-based diagnostic models to differentiate uncomplicated from complicated AA. This multicenter cohort study was conducted between January 2021 and December 2022 across five tertiary hospitals. Three distinct diagnostic models were created: the clinical-parameter-based model, the CT-radiomics-based model, and the clinical-radiomics-fused model. These models were developed using a comprehensive set of eight machine-learning algorithms: logistic regression (LR), support vector machine (SVM), random forest (RF), decision tree (DT), gradient boosting (GB), K-nearest neighbors (KNN), Gaussian Naïve Bayes (GNB), and multi-layer perceptron (MLP). The performance and accuracy of these models were compared. All models exhibited excellent diagnostic performance in the training cohort, achieving a maximal AUC of 1.00. For the clinical-parameter model, the GB classifier yielded the optimal AUC of 0.77 (95% confidence interval [CI]: 0.64-0.90) in the testing cohort, while the LR classifier yielded the optimal AUC of 0.76 (95% CI: 0.66-0.86) in the validation cohort. For the CT-radiomics-based model, the GB classifier achieved the best AUC of 0.74 (95% CI: 0.60-0.88) in the testing cohort, and the SVM classifier yielded the optimal AUC of 0.63 (95% CI: 0.51-0.75) in the validation cohort. For the clinical-radiomics-fused model, the RF classifier yielded an optimal AUC of 0.84 (95% CI: 0.74-0.95) in the testing cohort and 0.76 (95% CI: 0.67-0.86) in the validation cohort. An open-access, user-friendly online tool was developed for clinical application. This multicenter study suggests that the clinical-radiomics-fused model, constructed using the RF algorithm, effectively differentiated between complicated and uncomplicated AA.
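A minimal sketch of the model-comparison step the study describes: fitting the eight named classifiers on the same features and comparing held-out AUCs. The data here are synthetic, and the study's actual features, tuning, and validation splits are not reproduced.

```python
# Hedged sketch: compare the eight named algorithms on one feature set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=30, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),
    "RF": RandomForestClassifier(),
    "DT": DecisionTreeClassifier(),
    "GB": GradientBoostingClassifier(),
    "KNN": KNeighborsClassifier(),
    "GNB": GaussianNB(),
    "MLP": MLPClassifier(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.2f}")
```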

AISIM: evaluating impacts of user interface elements of an AI assisting tool.

Wiratchawa K, Wanna Y, Junsawang P, Titapun A, Techasen A, Boonrod A, Laopaiboon V, Chamadol N, Bulathwela S, Intharah T

PubMed · Jan 1, 2025
While Artificial Intelligence (AI) has demonstrated human-level capabilities in many prediction tasks, collaboration between humans and machines is crucial in mission-critical applications, especially in the healthcare sector. An important factor that enables successful human-AI collaboration is the user interface (UI). This paper evaluated the UI of BiTNet, an intelligent assisting tool for human biliary tract diagnosis via ultrasound images. We evaluated the UI of the assisting tool with 11 healthcare professionals through two main research questions: 1) did the assisting tool improve the diagnostic performance of the healthcare professionals who used it? and 2) how did different UI elements of the assisting tool influence the users' decisions? To analyze the impacts of different UI elements without multiple rounds of experiments, we propose the novel AISIM strategy and demonstrate that it can be used to analyze the influence of different UI elements in one go. Our main findings show that the assisting tool improved the diagnostic performance of healthcare professionals across levels of experience (OR = 3.326, p < 10⁻¹⁵). In addition, high AI prediction confidence and a correct AI attention area more than doubled the odds that users would follow the AI suggestion. Finally, the interview results agreed with the experimental results: BiTNet boosted the users' confidence when they were assigned to diagnose abnormalities in the biliary tract from ultrasound images.
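The odds ratios reported above are the kind of quantity a logistic regression over UI conditions yields (OR = exp(coefficient)). A hedged sketch follows, with hypothetical variable names and synthetic responses, not the AISIM study data.

```python
# Hedged sketch: odds of following the AI suggestion as a function of
# UI elements, via logistic regression. All variables are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400
high_confidence = rng.integers(0, 2, n)    # AI confidence shown as high (1) or low (0)
correct_attention = rng.integers(0, 2, n)  # AI attention area judged correct (1/0)
logit = -0.5 + 0.8 * high_confidence + 0.7 * correct_attention
followed = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([high_confidence, correct_attention]))
fit = sm.Logit(followed, X).fit(disp=0)
odds_ratios = np.exp(fit.params)           # OR per UI element
print(dict(zip(["intercept", "high_confidence", "correct_attention"],
               odds_ratios.round(2))))
```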

Intelligent and precise auxiliary diagnosis of breast tumors using deep learning and radiomics.

Wang T, Zang B, Kong C, Li Y, Yang X, Yu Y

PubMed · Jan 1, 2025
Breast cancer is the most common malignant tumor among women worldwide, and early diagnosis is crucial for reducing mortality rates. Traditional diagnostic methods have significant limitations in accuracy and consistency. Imaging is a common technique for diagnosing and predicting breast cancer, but human error remains a concern. Increasingly, artificial intelligence (AI) is being employed to assist physicians in reducing diagnostic errors. We developed an intelligent diagnostic model combining deep learning and radiomics to enhance breast tumor diagnosis. The model integrates MobileNet with ResNeXt-inspired depthwise separable and grouped convolutions, improving feature processing and efficiency while reducing parameters. Using the Al-Dhabyani and TCIA breast ultrasound datasets, we validated the model internally and externally and compared it with VGG16, ResNet, AlexNet, and MobileNet. The internal validation set achieved an accuracy of 83.84% with an AUC of 0.92, outperforming the comparison models. The external validation set showed an accuracy of 69.44% with an AUC of 0.75, demonstrating high robustness and generalizability.
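A small PyTorch sketch of the kind of building block the abstract describes, combining a MobileNet-style depthwise separable convolution with a ResNeXt-style grouped pointwise convolution. Channel sizes and group count are assumptions, not the authors' architecture.

```python
# Hedged sketch: depthwise separable + grouped convolution block.
import torch
import torch.nn as nn

class DWGroupedBlock(nn.Module):
    def __init__(self, in_ch=64, out_ch=64, groups=8):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups == in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        # Pointwise 1x1 split into groups, ResNeXt-style, to cut parameters.
        self.grouped_pw = nn.Conv2d(in_ch, out_ch, 1, groups=groups, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.grouped_pw(self.depthwise(x))))

x = torch.randn(1, 64, 56, 56)    # stand-in for an ultrasound feature map
print(DWGroupedBlock()(x).shape)  # torch.Size([1, 64, 56, 56])
```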

Application research of artificial intelligence software in the analysis of thyroid nodule ultrasound image characteristics.

Xu C, Wang Z, Zhou J, Hu F, Wang Y, Xu Z, Cai Y

PubMed · Jan 1, 2025
Thyroid nodules, a common clinical endocrine condition, have become increasingly prevalent worldwide. Ultrasound, as the premier method of thyroid imaging, plays an important role in accurately diagnosing and managing thyroid nodules. However, there is a high degree of inter- and intra-observer variability in image interpretation, owing to differences in the knowledge and experience of sonographers, who face heavy daily examination workloads. Artificial intelligence based on computer-aided diagnosis technology may improve the accuracy and time efficiency of thyroid nodule diagnosis. This study introduced an artificial intelligence software package called SW-TH01/II to evaluate ultrasound image characteristics of thyroid nodules, including echogenicity, shape, border, margin, and calcification. We included 225 ultrasound images from each of two hospitals in Shanghai. The sonographers and the software performed characteristics analysis on the same group of images. We analyzed the consistency of the two sets of results and used the sonographers' results as the gold standard to evaluate the accuracy of SW-TH01/II. A total of 449 images were included in the statistical analysis. For the seven indicators, the proportions of agreement between SW-TH01/II and the sonographers' analyses were all greater than 0.8. For echogenicity (with very hypoechoic), aspect ratio, and margin, the kappa coefficients between the two methods were above 0.75 (P < 0.001). The kappa coefficients for echogenicity (echotexture and echogenicity level), border, and calcification were above 0.6 (P < 0.001). The median times for the software and the sonographers to interpret an image were 3 (2, 3) seconds and 26.5 (21.17, 34.33) seconds, respectively; the difference was statistically significant (z = -18.36, P < 0.001). SW-TH01/II offers a high degree of accuracy and substantial time savings in judging the characteristics of thyroid nodules. It can provide more objective results, improve the efficiency of ultrasound examination, and assist sonographers in characterizing thyroid nodule ultrasound images.
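A brief sketch of the two analyses reported above: Cohen's kappa for software-sonographer agreement and a rank-sum test on interpretation times, using synthetic placeholder labels and timings (the paper reports a z statistic; a Mann-Whitney U test serves as a stand-in here).

```python
# Hedged sketch: agreement (kappa) and timing comparison on synthetic data.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
sonographer = rng.integers(0, 2, 449)  # gold-standard labels (synthetic)
# Software agrees ~90% of the time in this toy example.
software = np.where(rng.random(449) < 0.9, sonographer, 1 - sonographer)
print(f"kappa = {cohen_kappa_score(sonographer, software):.2f}")

software_time = rng.normal(3, 0.5, 449)      # seconds per image (synthetic)
sonographer_time = rng.normal(26.5, 5, 449)
stat, p = mannwhitneyu(software_time, sonographer_time)
print(f"U = {stat:.0f}, p = {p:.1e}")
```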

Auxiliary Diagnosis of Pulmonary Nodules' Benignancy and Malignancy Based on Machine Learning: A Retrospective Study.

Wang W, Yang B, Wu H, Che H, Tong Y, Zhang B, Liu H, Chen Y

PubMed · Jan 1, 2025
Lung cancer, one of the most lethal malignancies globally, often presents insidiously as pulmonary nodules. Its nonspecific clinical presentation and heterogeneous imaging characteristics hinder accurate differentiation between benign and malignant lesions, while the invasiveness and procedural constraints of biopsy underscore the critical need for non-invasive early diagnostic approaches. In this retrospective study, we analyzed outpatient and inpatient records from the First Medical Center of Chinese PLA General Hospital between 2011 and 2021, focusing on pulmonary nodules measuring 5–30 mm on CT scans without overt signs of malignancy. Pathological examination served as the reference standard. Comparative experiments evaluated SVM, RF, XGBoost, FNN, and Atten_FNN using five-fold cross-validation to assess AUC, sensitivity, and specificity. The dataset was split 70%/30%, and stratified five-fold cross-validation was applied to the training set. The optimal model was interpreted with SHAP to identify the most influential predictive features. The study enrolled 3355 patients, including 1156 with benign and 2199 with malignant pulmonary nodules. The Atten_FNN model demonstrated superior performance in five-fold cross-validation, achieving an AUC of 0.82, accuracy of 0.75, sensitivity of 0.77, and F1 score of 0.80. SHAP analysis revealed key predictive factors: demographic variables (age, sex, BMI), CT-derived features (maximum nodule diameter, morphology, density, calcification, ground-glass opacity), and laboratory biomarkers (neuroendocrine markers, carcinoembryonic antigen). This study integrates electronic medical records and pathology data to predict pulmonary nodule malignancy using machine- and deep-learning models, with SHAP-based interpretability analysis uncovering key clinical determinants. Acknowledging limitations in cross-center generalizability, we propose developing a multimodal diagnostic system that combines CT imaging and radiomics, to be validated in multi-center prospective cohorts to facilitate clinical translation. This framework establishes a novel paradigm for early precision diagnosis of lung cancer.
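A minimal sketch of the SHAP feature-importance step, using a gradient-boosting stand-in since the Atten_FNN architecture is not public; the feature names are illustrative.

```python
# Hedged sketch: global feature importance via mean |SHAP| values.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
features = ["age", "sex", "BMI", "max_diameter", "density", "CEA"]  # illustrative
X = rng.normal(size=(300, len(features)))
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(size=300) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Mean |SHAP| per feature approximates global importance.
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```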

A novel spectral transformation technique based on special functions for improved chest X-ray image classification.

Aljohani A

PubMed · Jan 1, 2025
Chest X-ray image classification plays an important role in medical diagnostics, and machine learning algorithms have enhanced its performance through advanced techniques. These algorithms often require converting medical data to another space in which the original data are reduced to a set of important values or moments. We developed a mechanism that converts a given medical image to a spectral space whose basis is composed of special functions. In this study, we propose a chest X-ray image classification method based on spectral coefficients derived from an orthogonal system of smooth Legendre-type polynomials. We developed the mathematical theory to calculate spectral moments in the Legendre polynomial space and used these moments to train traditional classifiers, such as SVM and random forest, for the classification task. The procedure was applied to a recent dataset of X-ray images comprising three classes of patients: normal, COVID-19-infected, and pneumonia. The moments designed in this study, when used with SVM or random forest, improve the classifiers' ability to label a given X-ray image at high accuracy. A parametric study of the proposed approach is presented, and its efficiency and accuracy are detailed. All simulations were performed in MATLAB and Python: image preprocessing and spectral moment generation in MATLAB, and the classifier implementations in Python. The proposed approach works well and provides satisfactory results (0.975 accuracy), although further studies are required to establish a more accurate and faster version of the approach.
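A hedged sketch of the spectral transformation: projecting an image onto a 2D orthogonal Legendre basis and using the resulting moments as classifier features. The moment order and normalization below are assumptions, not the paper's exact formulation.

```python
# Hedged sketch: 2D Legendre moments as image features for SVM/RF.
import numpy as np
from numpy.polynomial import legendre

def legendre_moments(img, order=8):
    """Compute 2D Legendre moments up to `order` on the domain [-1, 1]^2."""
    h, w = img.shape
    x = np.linspace(-1, 1, w)
    y = np.linspace(-1, 1, h)
    # P_k evaluated on the grid, for k = 0..order.
    Px = np.stack([legendre.legval(x, np.eye(order + 1)[k]) for k in range(order + 1)])
    Py = np.stack([legendre.legval(y, np.eye(order + 1)[k]) for k in range(order + 1)])
    # Discrete approximation of lambda_{pq} = c_p c_q ∫∫ P_p(y) P_q(x) f(x, y) dx dy,
    # with c_k = (2k + 1) / 2 the orthonormalization constant (assumed here).
    M = Py @ img @ Px.T
    c = np.array([(2 * k + 1) / 2 for k in range(order + 1)])
    return (np.outer(c, c) * M * 4 / (h * w)).ravel()  # flattened feature vector

img = np.random.rand(128, 128)  # stand-in for a preprocessed chest X-ray
feats = legendre_moments(img)
print(feats.shape)              # (81,) for order 8, ready for SVM/RF training
```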

MRISeqClassifier: A Deep Learning Toolkit for Precise MRI Sequence Classification.

Pan J, Chen Q, Sun C, Liang R, Bian J, Xu J

PubMed · Jan 1, 2025
Magnetic Resonance Imaging (MRI) is a crucial diagnostic tool in medicine, widely used to detect and assess various health conditions. Different MRI sequences, such as T1-weighted, T2-weighted, and FLAIR, serve distinct roles by highlighting different tissue characteristics and contrasts. However, distinguishing them based solely on the description file is currently impossible due to confusing or incorrect annotations. Additionally, there is a notable lack of effective tools to differentiate these sequences. In response, we developed a deep learning-based toolkit tailored for small, unrefined MRI datasets. This toolkit enables precise sequence classification and delivers performance comparable to systems trained on large, meticulously curated datasets. Utilizing lightweight model architectures and incorporating a voting ensemble method, the toolkit enhances accuracy and stability. It achieves a 99% accuracy rate using only 10% of the data typically required in other research. The code is available at https://github.com/JinqianPan/MRISeqClassifier.
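A minimal sketch of the soft-voting idea the toolkit uses, with generic scikit-learn models standing in for the lightweight architectures in the linked repository: member probabilities are averaged and the argmax is taken.

```python
# Hedged sketch: soft-voting ensemble over stand-in classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=16, n_classes=3,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier()),
                ("knn", KNeighborsClassifier())],
    voting="soft",  # average predicted probabilities across members
)
ensemble.fit(X_tr, y_tr)
print(f"ensemble accuracy: {ensemble.score(X_te, y_te):.2f}")
```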

Improving lung cancer diagnosis and survival prediction with deep learning and CT imaging.

Wang X, Sharpnack J, Lee TCM

PubMed · Jan 1, 2025
Lung cancer is a major cause of cancer-related deaths, and early diagnosis and treatment are crucial for improving patients' survival outcomes. In this paper, we propose to employ convolutional neural networks to model the non-linear relationship between the risk of lung cancer and the lungs' morphology revealed in CT images. We apply a mini-batched loss that extends the Cox proportional hazards model to handle the non-convexity induced by neural networks, which also enables training on large data sets. Additionally, we propose to combine the mini-batched loss with binary cross-entropy to predict both lung cancer occurrence and the risk of mortality. Simulation results demonstrate the effectiveness of the mini-batched loss both with and without the censoring mechanism, as well as its combination with binary cross-entropy. We evaluate our approach on the National Lung Screening Trial data set with several 3D convolutional neural network architectures, achieving high AUC and C-index scores for lung cancer classification and survival prediction. These results, obtained from simulations and real data experiments, highlight the potential of our approach to improve the diagnosis and treatment of lung cancer.
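A hedged sketch of the combined objective: a Cox partial likelihood computed within a mini-batch plus binary cross-entropy for occurrence. This is one standard formulation; the paper's exact loss weighting and network are not reproduced.

```python
# Hedged sketch: mini-batch Cox partial likelihood + BCE, in PyTorch.
import torch
import torch.nn.functional as F

def cox_ph_loss(risk, time, event):
    """Negative Cox partial log-likelihood computed within a mini-batch."""
    order = torch.argsort(time, descending=True)  # risk sets by descending time
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)  # log sum of exp(risk) over risk set
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)

# Toy mini-batch: risk scores from a CNN survival head, a separate logit
# head for cancer occurrence, plus survival times, event flags, and labels.
risk = torch.randn(32, requires_grad=True)
occurrence_logit = torch.randn(32, requires_grad=True)
time = torch.rand(32)
event = torch.randint(0, 2, (32,)).float()
label = torch.randint(0, 2, (32,)).float()

loss = cox_ph_loss(risk, time, event) \
     + F.binary_cross_entropy_with_logits(occurrence_logit, label)
loss.backward()
print(float(loss))
```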

Radiomics and Deep Learning as Important Techniques of Artificial Intelligence - Diagnosing Perspectives in Cytokeratin 19 Positive Hepatocellular Carcinoma.

Wang F, Yan C, Huang X, He J, Yang M, Xian D

PubMed · Jan 1, 2025
Currently, there are inconsistencies among studies on the preoperative prediction of cytokeratin 19 (CK19) expression in hepatocellular carcinoma (HCC) using traditional imaging, radiomics, and deep learning. We aimed to systematically analyze and compare the performance of non-invasive methods for predicting CK19-positive HCC, thereby providing insights for the stratified management of HCC patients. A comprehensive literature search was conducted in PubMed, EMBASE, Web of Science, and the Cochrane Library from inception to February 2025. Two investigators independently screened and extracted data based on inclusion and exclusion criteria. Eligible studies were included, and key findings were summarized in tables to provide a clear overview. Ultimately, 22 studies involving 3395 HCC patients were included: 72.7% (16/22) focused on traditional imaging, 36.4% (8/22) on radiomics, 9.1% (2/22) on deep learning, and 54.5% (12/22) on combined models. Magnetic resonance imaging was the most commonly used modality (19/22), and over half of the studies (12/22) were published between 2022 and 2025. Moreover, 27.3% (6/22) were multicenter studies, 36.4% (8/22) included a validation set, and only 13.6% (3/22) were prospective. The AUC of clinical and traditional imaging models ranged from 0.560 to 0.917, that of radiomics models from 0.648 to 0.951, and that of deep learning models from 0.718 to 0.820. Notably, combined models integrating clinical, imaging, radiomics, and deep learning features achieved AUCs of 0.614 to 0.995. Nevertheless, multicenter external data were limited, with only 13.6% (3/22) of studies incorporating external validation. The combined model integrating traditional imaging, radiomics, and deep learning shows excellent potential for predicting CK19 in HCC. Given current limitations, future research should focus on building an easy-to-use dynamic online tool that combines multicenter multimodal imaging with advanced deep learning approaches to enhance the accuracy and robustness of model predictions.
