Page 134 of 134 (1340 results)

AISIM: evaluating impacts of user interface elements of an AI assisting tool.

Wiratchawa K, Wanna Y, Junsawang P, Titapun A, Techasen A, Boonrod A, Laopaiboon V, Chamadol N, Bulathwela S, Intharah T

PubMed · Jan 1, 2025
While Artificial Intelligence (AI) has demonstrated human-level capabilities in many prediction tasks, collaboration between humans and machines is crucial in mission-critical applications, especially in the healthcare sector. An important factor that enables successful human-AI collaboration is the user interface (UI). This paper evaluated the UI of BiTNet, an intelligent assisting tool for human biliary tract diagnosis via ultrasound images. We evaluated the UI of the assisting tool with 11 healthcare professionals through two main research questions: 1) did the assisting tool help improve the diagnostic performance of the healthcare professionals who used it? and 2) how did different UI elements of the assisting tool influence the users' decisions? To analyze the impacts of different UI elements without multiple rounds of experiments, we propose the novel AISIM strategy, which can analyze the influence of different UI elements in one go. Our main findings show that the assisting tool improved the diagnostic performance of healthcare professionals across levels of experience (OR = 3.326, p < 10⁻¹⁵). In addition, high AI prediction confidence and a correct AI attention area more than doubled the odds that users would follow the AI suggestion. Finally, the interview results agreed with the experimental results: BiTNet boosted the users' confidence when they were assigned to diagnose abnormalities in the biliary tract from ultrasound images.
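The reported OR = 3.326 comes from the study's statistical model, to which we have no access. As a minimal sketch of the quantity itself, here is an odds ratio computed from a hypothetical 2x2 outcome table (the counts are invented for illustration, not the study's data):

```python
def odds_ratio(correct_with, wrong_with, correct_without, wrong_without):
    """Odds ratio from a 2x2 outcome table: odds of a correct diagnosis
    with the tool, divided by the odds of a correct diagnosis without it."""
    odds_with = correct_with / wrong_with
    odds_without = correct_without / wrong_without
    return odds_with / odds_without

# Hypothetical counts (not the study's data):
print(round(odds_ratio(80, 20, 55, 45), 3))
```

An OR above 1 means readers using the tool had better odds of a correct call; the study's regression additionally adjusts for reader experience.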

Radiomic Model Associated with Tumor Microenvironment Predicts Immunotherapy Response and Prognosis in Patients with Locoregionally Advanced Nasopharyngeal Carcinoma.

Sun J, Wu X, Zhang X, Huang W, Zhong X, Li X, Xue K, Liu S, Chen X, Li W, Liu X, Shen H, You J, He W, Jin Z, Yu L, Li Y, Zhang S, Zhang B

PubMed · Jan 1, 2025
<b>Background:</b> No robust biomarkers have been identified to predict the efficacy of programmed cell death protein 1 (PD-1) inhibitors in patients with locoregionally advanced nasopharyngeal carcinoma (LANPC). We aimed to develop radiomic models using pre-immunotherapy MRI to predict the response to PD-1 inhibitors and patient prognosis. <b>Methods:</b> This study included 246 LANPC patients (training cohort, <i>n</i> = 117; external test cohort, <i>n</i> = 129) from 10 centers. The best-performing machine learning classifier was employed to create the radiomic models. A combined model was constructed by integrating clinical and radiomic data. A radiomic interpretability study was performed with whole slide images (WSIs) stained with hematoxylin and eosin (H&E) and immunohistochemistry (IHC). A total of 150 patient-level nuclear morphological features (NMFs) and 12 cell spatial distribution features (CSDFs) were extracted from WSIs. The correlation between the radiomic and pathological features was assessed using Spearman correlation analysis. <b>Results:</b> The radiomic model outperformed the clinical and combined models in predicting treatment response (area under the curve: 0.760 vs. 0.559 vs. 0.652). For overall survival estimation, the combined model performed comparably to the radiomic model but outperformed the clinical model (concordance index: 0.858 vs. 0.812 vs. 0.664). Six treatment response-related radiomic features correlated with 50 H&E-derived NMFs (146 pairs, |<i>r</i>| = 0.31 to 0.46) and 2 to 26 IHC-derived NMFs, particularly for CD45RO (69 pairs, |<i>r</i>| = 0.31 to 0.48), CD8 (84 pairs, |<i>r</i>| = 0.30 to 0.59), PD-L1 (73 pairs, |<i>r</i>| = 0.32 to 0.48), and CD163 (53 pairs, |<i>r</i>| = 0.32 to 0.59). Eight prognostic radiomic features correlated with 11 H&E-derived NMFs (16 pairs, |<i>r</i>| = 0.48 to 0.61) and 2 to 31 IHC-derived NMFs, particularly for PD-L1 (80 pairs, |<i>r</i>| = 0.44 to 0.64), CD45RO (65 pairs, |<i>r</i>| = 0.42 to 0.67), CD19 (35 pairs, |<i>r</i>| = 0.44 to 0.58), CD66b (61 pairs, |<i>r</i>| = 0.42 to 0.67), and FOXP3 (21 pairs, |<i>r</i>| = 0.41 to 0.71). In contrast, fewer CSDFs exhibited correlations with specific radiomic features. <b>Conclusion:</b> The radiomic and combined models are feasible for predicting immunotherapy response and outcomes in LANPC patients. The radiology-pathology correlation suggests a potential biological basis for the predictive models.
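The |r| values above are Spearman rank correlations between radiomic and pathological feature vectors. As a minimal stdlib sketch of the statistic (Pearson correlation of the ranks; ties are not handled here, unlike library implementations):

```python
def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks of x and y.
    Assumes no tied values (a simplification for illustration)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank + 1.0
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A rho of 1.0 means the two feature vectors rank patients identically; the study's |r| values of 0.3 to 0.7 indicate moderate-to-strong monotone association.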

The Role of Computed Tomography and Artificial Intelligence in Evaluating the Comorbidities of Chronic Obstructive Pulmonary Disease: A One-Stop CT Scanning for Lung Cancer Screening.

Lin X, Zhang Z, Zhou T, Li J, Jin Q, Li Y, Guan Y, Xia Y, Zhou X, Fan L

PubMed · Jan 1, 2025
Chronic obstructive pulmonary disease (COPD) is a major cause of morbidity and mortality worldwide. Comorbidities in patients with COPD significantly increase morbidity, mortality, and healthcare costs, posing a significant burden on COPD management. Given the complex clinical manifestations and varying severity of COPD comorbidities, accurate diagnosis and evaluation are particularly important for selecting appropriate treatment options. With the development of medical imaging technology, AI-based chest CT, as a noninvasive imaging modality, provides a detailed assessment of COPD comorbidities. Recent studies have shown that certain radiographic features on chest CT can serve as alternative markers of comorbidities in COPD patients. CT-based radiomics features provided incremental predictive value over clinical risk factors alone, achieving an AUC of 0.73 for predicting cardiovascular disease (CVD) in COPD patients. However, AI has inherent limitations, such as a lack of interpretability, and further research is needed to address them. This review evaluates the progress of AI technology combined with chest CT imaging in COPD comorbidities, including lung cancer, cardiovascular disease, osteoporosis, sarcopenia, excess adipose depots, and pulmonary hypertension, with the aim of improving the understanding of imaging and the management of COPD comorbidities for disease screening, efficacy assessment, and prognostic evaluation.
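The AUC of 0.73 cited above can be read as a probability: the chance that a randomly chosen positive case receives a higher model score than a randomly chosen negative case. A minimal sketch of that Mann-Whitney formulation, on made-up scores:

```python
def auc(pos_scores, neg_scores):
    """AUC as the Mann-Whitney probability that a randomly chosen positive
    case scores above a randomly chosen negative case (ties count half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

An AUC of 0.5 is chance level, so 0.73 means the radiomics model ranks a comorbid case above a non-comorbid one about 73% of the time.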

XLLC-Net: A lightweight and explainable CNN for accurate lung cancer classification using histopathological images.

Jim JR, Rayed ME, Mridha MF, Nur K

PubMed · Jan 1, 2025
Lung cancer imaging plays a crucial role in early diagnosis and treatment, where machine learning and deep learning have significantly advanced the accuracy and efficiency of disease classification. This study introduces the Explainable and Lightweight Lung Cancer Net (XLLC-Net), a streamlined convolutional neural network designed for classifying lung cancer from histopathological images. Using the LC25000 dataset, which includes three lung cancer classes and two colon cancer classes, we focused solely on the three lung cancer classes for this study. XLLC-Net effectively discerns complex disease patterns within these classes. The model consists of four convolutional layers and contains merely 3 million parameters, considerably reducing its computational footprint compared to existing deep learning models. This compact architecture facilitates efficient training, completing each epoch in just 60 seconds. Remarkably, XLLC-Net achieves a classification accuracy of 99.62% ± 0.16%, with precision, recall, and F1 score of 99.33% ± 0.30%, 99.67% ± 0.30%, and 99.70% ± 0.30%, respectively. Furthermore, the integration of Explainable AI techniques, such as saliency maps and Grad-CAM, enhances the interpretability of the model, offering clear visual insights into its decision-making process. Our results underscore the potential of lightweight DL models in medical imaging, providing high accuracy and rapid training while ensuring model transparency and reliability.
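The precision, recall, and F1 figures quoted above are standard per-class metrics. A minimal stdlib sketch of how they are derived from predictions (example labels are invented):

```python
def prf1(y_true, y_pred, positive=1):
    """Precision, recall, and F1 for the given positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

For a multi-class problem like LC25000's three lung classes, these would typically be computed per class and then macro-averaged.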

MRISeqClassifier: A Deep Learning Toolkit for Precise MRI Sequence Classification.

Pan J, Chen Q, Sun C, Liang R, Bian J, Xu J

PubMed · Jan 1, 2025
Magnetic Resonance Imaging (MRI) is a crucial diagnostic tool in medicine, widely used to detect and assess various health conditions. Different MRI sequences, such as T1-weighted, T2-weighted, and FLAIR, serve distinct roles by highlighting different tissue characteristics and contrasts. However, distinguishing them based solely on the description file is currently impossible due to confusing or incorrect annotations. Additionally, there is a notable lack of effective tools to differentiate these sequences. In response, we developed a deep learning-based toolkit tailored for small, unrefined MRI datasets. This toolkit enables precise sequence classification and delivers performance comparable to systems trained on large, meticulously curated datasets. Utilizing lightweight model architectures and incorporating a voting ensemble method, the toolkit enhances accuracy and stability. It achieves a 99% accuracy rate using only 10% of the data typically required in other research. The code is available at https://github.com/JinqianPan/MRISeqClassifier.
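The toolkit's voting ensemble combines per-model predictions by majority. The repository's actual implementation is not reproduced here; this is a generic majority-vote sketch with hypothetical model outputs:

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: one list of labels per model, aligned by sample index.
    Returns the most common label per sample."""
    n_samples = len(predictions[0])
    winners = []
    for i in range(n_samples):
        votes = Counter(model[i] for model in predictions)
        winners.append(votes.most_common(1)[0][0])
    return winners

# Three hypothetical sequence classifiers voting on four scans:
models = [["T1", "T2", "FLAIR", "T1"],
          ["T1", "T2", "T2",    "T1"],
          ["T2", "T2", "FLAIR", "T1"]]
print(majority_vote(models))  # → ['T1', 'T2', 'FLAIR', 'T1']
```

Majority voting tends to stabilize predictions when individual lightweight models disagree on borderline sequences.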

OA-HybridCNN (OHC): An advanced deep learning fusion model for enhanced diagnostic accuracy in knee osteoarthritis imaging.

Liao Y, Yang G, Pan W, Lu Y

PubMed · Jan 1, 2025
Knee osteoarthritis (KOA) is a leading cause of disability globally. Early and accurate diagnosis is paramount in preventing its progression and improving patients' quality of life. However, the inconsistency in radiologists' expertise and the onset of visual fatigue during prolonged image analysis often compromise diagnostic accuracy, highlighting the need for automated diagnostic solutions. In this study, we present an advanced deep learning model, OA-HybridCNN (OHC), which integrates ResNet and DenseNet architectures. This integration effectively addresses the gradient vanishing issue in DenseNet and augments prediction accuracy. To evaluate its performance, we conducted a thorough comparison with other deep learning models using five-fold cross-validation and external tests. The OHC model outperformed its counterparts across all performance metrics. In external testing, OHC exhibited an accuracy of 91.77%, precision of 92.34%, and recall of 91.36%. During the five-fold cross-validation, its average AUC and ACC were 86.34% and 87.42%, respectively. Deep learning, particularly exemplified by the OHC model, has greatly improved the efficiency and accuracy of KOA imaging diagnosis. The adoption of such technologies not only alleviates the burden on radiologists but also significantly enhances diagnostic precision.
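The five-fold cross-validation used above partitions the data so every sample is tested exactly once. A minimal stdlib sketch of the index-splitting step (the seed and interleaving scheme are illustrative choices, not the study's):

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Shuffle indices 0..n-1, then yield (train, test) index lists
    for each of k folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test
```

Averaging a metric such as AUC over the k test folds, as the study does, gives an estimate that is less sensitive to any single train/test split.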

Investigating methods to enhance interpretability and performance in cardiac MRI for myocardial scarring diagnosis using convolutional neural network classification and One Match.

Udin MH, Armstrong S, Kai A, Doyle ST, Pokharel S, Ionita CN, Sharma UC

PubMed · Jan 1, 2025
Machine learning (ML) classification of myocardial scarring in cardiac MRI is often hindered by limited explainability, particularly with convolutional neural networks (CNNs). To address this, we developed One Match (OM), an algorithm that builds on template matching to improve both the explainability and performance of ML myocardial scarring classification. By incorporating OM, we aim to foster trust in AI models for medical diagnostics and demonstrate that improved interpretability does not have to compromise classification accuracy. Using a cardiac MRI dataset from 279 patients, this study evaluates One Match, which classifies myocardial scarring in images by matching each image to a set of labeled template images. It uses the highest correlation score from these matches for classification and is compared to a traditional sequential CNN. Enhancements such as autodidactic enhancement (AE) and patient-level classifications (PLCs) were applied to improve the predictive accuracy of both methods. Results are reported as accuracy, sensitivity, specificity, precision, and F1-score. The highest classification performance was observed with the OM algorithm when enhanced by both AE and PLCs: 95.3% accuracy, 92.3% sensitivity, 96.7% specificity, 92.3% precision, and 92.3% F1-score, marking a significant improvement over the base configurations. AE alone had a positive impact on OM, increasing accuracy from 89.0% to 93.2%, but decreased the accuracy of the CNN from 85.3% to 82.9%. In contrast, PLCs improved accuracy for both the CNN and OM, raising the CNN's accuracy by 4.2% and OM's by 7.4%. This study demonstrates the effectiveness of OM in classifying myocardial scars, particularly when enhanced with AE and PLCs. The interpretability of OM also enabled the examination of misclassifications, providing insights that could accelerate development and foster greater trust among clinical stakeholders.
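The core idea of template matching by highest correlation can be sketched in a few lines. This is a generic illustration on flattened hypothetical image vectors, not the authors' One Match implementation:

```python
def correlate(a, b):
    """Normalized correlation (Pearson) of two equal-length vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def one_match_label(image, templates):
    """templates: list of (label, vector) pairs.
    Return the label of the best-correlated template."""
    return max(templates, key=lambda t: correlate(image, t[1]))[0]
```

Because the decision is simply "which labeled template correlates best," a misclassification can be inspected directly by looking at the winning template, which is the source of the method's interpretability.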

Comparative analysis of diagnostic performance in mammography: A reader study on the impact of AI assistance.

Ramli Hamid MT, Ab Mumin N, Abdul Hamid S, Mohd Ariffin N, Mat Nor K, Saib E, Mohamed NA

PubMed · Jan 1, 2025
This study evaluates the impact of artificial intelligence (AI) assistance on the diagnostic performance of radiologists with varying levels of experience in interpreting mammograms in a Malaysian tertiary referral center, particularly in women with dense breasts. In this retrospective study, 434 digital mammograms were interpreted by two general radiologists (12 and 6 years of experience) and two trainees (2 years of experience). Diagnostic performance was assessed with and without AI assistance (Lunit INSIGHT MMG), using sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC). Inter-reader agreement was measured using kappa statistics. AI assistance significantly improved the diagnostic performance of all reader groups across all metrics (p < 0.05). The senior radiologist consistently achieved the highest sensitivity (86.5% without AI, 88.0% with AI) and specificity (60.5% without AI, 59.2% with AI). The junior radiologist demonstrated the highest PPV (56.9% without AI, 74.6% with AI) and NPV (90.3% without AI, 92.2% with AI). The trainees showed the lowest performance, but AI significantly enhanced their accuracy. AI assistance was particularly beneficial in interpreting mammograms of women with dense breasts. AI assistance significantly enhances the diagnostic accuracy and consistency of radiologists in mammogram interpretation, with notable benefits for less experienced readers. These findings support the integration of AI into clinical practice, particularly in resource-limited settings where access to specialized breast radiologists is constrained.
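Inter-reader agreement via kappa, as used above, corrects raw agreement for the agreement expected by chance. A minimal stdlib sketch of Cohen's kappa for two readers (example ratings are invented):

```python
def cohens_kappa(reader1, reader2):
    """Cohen's kappa: observed agreement between two readers,
    corrected for chance agreement implied by their label frequencies."""
    n = len(reader1)
    p_observed = sum(1 for a, b in zip(reader1, reader2) if a == b) / n
    labels = set(reader1) | set(reader2)
    p_chance = sum((reader1.count(l) / n) * (reader2.count(l) / n)
                   for l in labels)
    return (p_observed - p_chance) / (1 - p_chance)
```

Kappa is 1.0 for perfect agreement, 0 for chance-level agreement, and negative when readers agree less often than chance.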

Enhancement of Fairness in AI for Chest X-ray Classification.

Jackson NJ, Yan C, Malin BA

PubMed · Jan 1, 2024
The use of artificial intelligence (AI) in medicine has shown promise to improve the quality of healthcare decisions. However, AI can be biased in a manner that produces unfair predictions for certain demographic subgroups. In MIMIC-CXR, a publicly available dataset of over 300,000 chest X-ray images, diagnostic AI has been shown to have a higher false negative rate for racial minorities. We evaluated the capacity of synthetic data augmentation, oversampling, and demographic-based corrections to enhance the fairness of AI predictions. We show that adjusting unfair predictions for demographic attributes, such as race, is ineffective at improving fairness or predictive performance. However, using oversampling and synthetic data augmentation to modify disease prevalence reduced such disparities by 74.7% and 10.6%, respectively. Moreover, such fairness gains were accomplished without reduction in performance (95% CI AUC: [0.816, 0.820] versus [0.810, 0.819] versus [0.817, 0.821] for baseline, oversampling, and augmentation, respectively).
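The disparity described above, a higher false negative rate (FNR) for some subgroups, can be quantified as the largest between-group FNR gap. A minimal sketch on invented labels (not MIMIC-CXR data):

```python
def fnr(y_true, y_pred):
    """False negative rate: fraction of true positives predicted negative."""
    false_negatives = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    positives = sum(y_true)
    return false_negatives / positives

def fnr_gap(y_true, y_pred, groups):
    """Largest between-subgroup difference in FNR, plus per-group rates."""
    rates = {}
    for g in set(groups):
        yt = [t for t, gg in zip(y_true, groups) if gg == g]
        yp = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = fnr(yt, yp)
    return max(rates.values()) - min(rates.values()), rates
```

Interventions like the oversampling and synthetic augmentation evaluated in the study aim to shrink this gap without degrading overall AUC.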

Ensuring Fairness in Detecting Mild Cognitive Impairment with MRI.

Tong B, Edwards T, Yang S, Hou B, Tarzanagh DA, Urbanowicz RJ, Moore JH, Ritchie MD, Davatzikos C, Shen L

PubMed · Jan 1, 2024
Machine learning (ML) algorithms play a crucial role in the early and accurate diagnosis of Alzheimer's Disease (AD), which is essential for effective treatment planning. However, existing methods are not well-suited for identifying Mild Cognitive Impairment (MCI), a critical transitional stage between normal aging and AD. This inadequacy is primarily due to label imbalance and bias from different sensitive attributes in MCI classification. To overcome these challenges, we have designed an end-to-end fairness-aware approach for label-imbalanced classification, tailored specifically for neuroimaging data. This method, built on the recently developed FACIMS framework, integrates into STREAMLINE, an automated ML environment. We evaluated our approach against nine other ML algorithms and found that it achieves comparable balanced accuracy to other methods while prioritizing fairness in classifications with five different sensitive attributes. This analysis contributes to the development of equitable and reliable ML diagnostics for MCI detection.
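Balanced accuracy, the headline metric above, averages sensitivity and specificity so that a majority-class classifier cannot score well on imbalanced labels. A minimal stdlib sketch (example labels are invented):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity; robust to label imbalance."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    positives = sum(1 for t in y_true if t == 1)
    negatives = len(y_true) - positives
    return 0.5 * (tp / positives + tn / negatives)
```

On a dataset that is 90% negative, always predicting "negative" yields 90% plain accuracy but only 50% balanced accuracy, which is why label-imbalanced MCI classification is reported this way.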