Page 231 of 234 · 2333 results

Clinical-radiomics models with machine-learning algorithms to distinguish uncomplicated from complicated acute appendicitis in adults: a multiphase multicenter cohort study.

Li L, Sun Y, Sun Y, Gao Y, Zhang B, Qi R, Sheng F, Yang X, Liu X, Liu L, Lu C, Chen L, Zhang K

PubMed · Jan 1 2025
Increasing evidence suggests that non-operative management (NOM) with antibiotics could serve as a safe alternative to surgery for the treatment of uncomplicated acute appendicitis (AA). However, accurately differentiating between uncomplicated and complicated AA remains challenging. Our aim was to develop and validate machine-learning-based diagnostic models to differentiate uncomplicated from complicated AA. This was a multicenter cohort study conducted between January 2021 and December 2022 across five tertiary hospitals. Three distinct diagnostic models were created, namely, the clinical-parameter-based model, the CT-radiomics-based model, and the clinical-radiomics-fused model. These models were developed using a comprehensive set of eight machine-learning algorithms: logistic regression (LR), support vector machine (SVM), random forest (RF), decision tree (DT), gradient boosting (GB), K-nearest neighbors (KNN), Gaussian Naïve Bayes (GNB), and multi-layer perceptron (MLP). The performance and accuracy of these diverse models were compared. All models exhibited excellent diagnostic performance in the training cohort, achieving a maximal AUC of 1.00. For the clinical-parameter model, the GB classifier yielded the optimal AUC of 0.77 (95% confidence interval [CI]: 0.64-0.90) in the testing cohort, while the LR classifier yielded the optimal AUC of 0.76 (95% CI: 0.66-0.86) in the validation cohort. For the CT-radiomics-based model, the GB classifier achieved the best AUC of 0.74 (95% CI: 0.60-0.88) in the testing cohort, and the SVM classifier yielded an optimal AUC of 0.63 (95% CI: 0.51-0.75) in the validation cohort. For the clinical-radiomics-fused model, the RF classifier yielded an optimal AUC of 0.84 (95% CI: 0.74-0.95) in the testing cohort and 0.76 (95% CI: 0.67-0.86) in the validation cohort. An open-access, user-friendly online tool was developed for clinical application.
This multicenter study suggests that the clinical-radiomics-fused model, constructed using the RF algorithm, effectively differentiated between complicated and uncomplicated AA.
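The study's model-comparison workflow — fitting the same eight algorithm families and ranking them by held-out AUC — can be sketched as follows. This is not the authors' code; the synthetic data and train/test split are placeholders standing in for the clinical-radiomics feature sets.

```python
# Hedged sketch: compare the eight classifier families named in the study
# by test-set AUC on synthetic data (a stand-in for radiomics features).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True, random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "DT": DecisionTreeClassifier(random_state=0),
    "GB": GradientBoostingClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "GNB": GaussianNB(),
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
}

aucs = {}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)                                   # train on the training split
    aucs[name] = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
best = max(aucs, key=aucs.get)                            # best family by test AUC
```

In the study this comparison was repeated per feature set (clinical, radiomics, fused) and per cohort, which is why different classifiers won in different settings.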

Ground-truth-free deep learning approach for accelerated quantitative parameter mapping with memory efficient learning.

Fujita N, Yokosawa S, Shirai T, Terada Y

PubMed · Jan 1 2025
Quantitative MRI (qMRI) requires the acquisition of multiple images with parameter changes, resulting in longer measurement times than conventional imaging. Deep learning (DL) for image reconstruction has shown a significant reduction in acquisition time and improved image quality. In qMRI, where the image contrast varies between sequences, preparing large, fully-sampled (FS) datasets is challenging. Recently, methods that do not require FS data, such as self-supervised learning (SSL) and zero-shot self-supervised learning (ZSSSL), have been proposed. Another challenge is the large GPU memory requirement for DL-based qMRI image reconstruction, owing to the simultaneous processing of multiple contrast images. In this context, Kellman et al. proposed memory-efficient learning (MEL) to save GPU memory. This study evaluated the SSL and ZSSSL frameworks with MEL to accelerate qMRI. Three experiments were conducted using the following sequences: 2D T2 mapping/MSME (Experiment 1), 3D T1 mapping/VFA-SPGR (Experiment 2), and 3D T2 mapping/DESS (Experiment 3). Each experiment used undersampled k-space data at acceleration factors (AFs) of 4, 8, and 12. The reconstructed maps were evaluated using quantitative metrics. In this study, we performed three qMRI reconstruction measurements and compared the performance of supervised learning (SL) against the ground-truth (GT)-free learning methods, SSL and ZSSSL. Overall, the performances of SSL and ZSSSL were only slightly inferior to those of SL, even under high-AF conditions. The quantitative errors in diagnostically important tissues (WM, GM, and meniscus) were small, demonstrating that SSL and ZSSSL performed comparably to SL. Additionally, by incorporating a GPU memory-saving implementation, we demonstrated that the network can operate on a GPU with a small memory (<8 GB) with minimal speed reduction. This study demonstrates the effectiveness of memory-efficient GT-free learning methods using MEL to accelerate qMRI.
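The self-supervised ingredient can be illustrated with the standard SSDU-style data split, in which the acquired k-space locations are partitioned into a set fed to the network and a disjoint set held out for the loss, so no fully-sampled reference is ever needed. This is a schematic in NumPy, not the authors' implementation; the mask size, acceleration, and 60/40 split ratio are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undersampled k-space: a boolean sampling mask at roughly 4x acceleration.
mask = rng.random((64, 64)) < 0.25
sampled = np.argwhere(mask)            # coordinates of acquired k-space points

# SSDU-style split: ~60% of the acquired points (set Theta) are given to the
# network as input; the remaining ~40% (set Lambda) are held out and used only
# to compute the training loss, replacing a fully-sampled ground truth.
perm = rng.permutation(len(sampled))
n_theta = int(0.6 * len(sampled))
theta_pts = sampled[perm[:n_theta]]
lambda_pts = sampled[perm[n_theta:]]

theta_mask = np.zeros_like(mask)
theta_mask[theta_pts[:, 0], theta_pts[:, 1]] = True
lambda_mask = np.zeros_like(mask)
lambda_mask[lambda_pts[:, 0], lambda_pts[:, 1]] = True
```

The training loss then compares the network's predicted k-space only at the Lambda locations, which is what makes the scheme ground-truth-free.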

Intelligent and precise auxiliary diagnosis of breast tumors using deep learning and radiomics.

Wang T, Zang B, Kong C, Li Y, Yang X, Yu Y

PubMed · Jan 1 2025
Breast cancer is the most common malignant tumor among women worldwide, and early diagnosis is crucial for reducing mortality rates. Traditional diagnostic methods have significant limitations in terms of accuracy and consistency. Imaging is a common technique for diagnosing and predicting breast cancer, but human error remains a concern. Increasingly, artificial intelligence (AI) is being employed to assist physicians in reducing diagnostic errors. We developed an intelligent diagnostic model combining deep learning and radiomics to enhance breast tumor diagnosis. The model integrates MobileNet with ResNeXt-inspired depthwise separable and grouped convolutions, improving feature processing and efficiency while reducing parameters. Using the Al-Dhabyani and TCIA breast ultrasound datasets, we validated the model internally and externally, comparing it to VGG16, ResNet, AlexNet, and MobileNet. Results: The internal validation set achieved an accuracy of 83.84% with an AUC of 0.92, outperforming other models. The external validation set showed an accuracy of 69.44% with an AUC of 0.75, demonstrating high robustness and generalizability. Conclusions: We developed an intelligent diagnostic model using deep learning and radiomics to improve breast tumor diagnosis. The model combines MobileNet with ResNeXt-inspired depthwise separable and grouped convolutions, enhancing feature processing and efficiency while reducing parameters. It was validated internally and externally using the Al-Dhabyani and TCIA breast ultrasound datasets and compared with VGG16, ResNet, AlexNet, and MobileNet.
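The parameter savings from depthwise separable and grouped convolutions — the reason this design "reduces parameters" — are easy to verify with arithmetic. The channel counts and group number below are illustrative, not taken from the paper:

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a 2D convolution with the given grouping (bias ignored).
    Each output channel only sees c_in // groups input channels."""
    return (c_in // groups) * c_out * k * k

c_in, c_out, k = 128, 128, 3

standard = conv_params(c_in, c_out, k)               # dense 3x3 convolution
depthwise = conv_params(c_in, c_in, k, groups=c_in)  # 3x3 depthwise (one filter per channel)
pointwise = conv_params(c_in, c_out, 1)              # 1x1 pointwise mixing
separable = depthwise + pointwise                    # MobileNet-style block
grouped = conv_params(c_in, c_out, k, groups=32)     # ResNeXt-style grouped 3x3
```

For these sizes the standard convolution needs 147,456 weights, the depthwise separable pair only 17,536, and the 32-group convolution exactly 1/32 of the standard count — the kind of reduction that makes such hybrids lightweight.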

Investigating methods to enhance interpretability and performance in cardiac MRI for myocardial scarring diagnosis using convolutional neural network classification and One Match.

Udin MH, Armstrong S, Kai A, Doyle ST, Pokharel S, Ionita CN, Sharma UC

PubMed · Jan 1 2025
Machine learning (ML) classification of myocardial scarring in cardiac MRI is often hindered by limited explainability, particularly with convolutional neural networks (CNNs). To address this, we developed One Match (OM), an algorithm that builds on template matching to improve both the explainability and performance of ML myocardial scarring classification. By incorporating OM, we aim to foster trust in AI models for medical diagnostics and demonstrate that improved interpretability does not have to compromise classification accuracy. Using a cardiac MRI dataset from 279 patients, this study evaluates One Match, which classifies myocardial scarring in images by matching each image to a set of labeled template images. It uses the highest correlation score from these matches for classification and is compared to a traditional sequential CNN. Enhancements such as autodidactic enhancement (AE) and patient-level classifications (PLCs) were applied to improve the predictive accuracy of both methods. Results are reported as accuracy, sensitivity, specificity, precision, and F1-score. The highest classification performance was observed with the OM algorithm when enhanced by both AE and PLCs: 95.3% accuracy, 92.3% sensitivity, 96.7% specificity, 92.3% precision, and 92.3% F1-score, marking a significant improvement over the base configurations. AE alone had a positive impact on OM, increasing accuracy from 89.0% to 93.2%, but decreased the accuracy of the CNN from 85.3% to 82.9%. In contrast, PLCs improved accuracy for both the CNN and OM, raising the CNN's accuracy by 4.2% and OM's by 7.4%. This study demonstrates the effectiveness of OM in classifying myocardial scars, particularly when enhanced with AE and PLCs. The interpretability of OM also enabled the examination of misclassifications, providing insights that could accelerate development and foster greater trust among clinical stakeholders.
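The core classification rule — match a query image against labeled templates and take the label of the highest correlation score — can be sketched as below. This is a minimal illustration of template matching by normalized cross-correlation, not the published One Match implementation; the labels and toy images are hypothetical.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def one_match(image, templates):
    """Classify by the single best template: return the label and score of
    the template with the highest correlation to the query image.
    `templates` is a list of (label, template_image) pairs."""
    best_score, best_label = max((ncc(image, t), label) for label, t in templates)
    return best_label, best_score

rng = np.random.default_rng(1)
t_scar = rng.random((32, 32))
t_normal = rng.random((32, 32))
templates = [("scar", t_scar), ("no_scar", t_normal)]

# A query that is a lightly corrupted copy of the scar template should
# correlate far more strongly with it than with the unrelated template.
query = t_scar + 0.1 * rng.random((32, 32))
label, score = one_match(query, templates)
```

Because the decision is a single, inspectable correlation against a named template, a misclassification can be traced directly to the template that won — the interpretability property the abstract emphasizes.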

Neurovision: A deep learning driven web application for brain tumour detection using weight-aware decision approach.

Santhosh TRS, Mohanty SN, Pradhan NR, Khan T, Derbali M

PubMed · Jan 1 2025
Accurate diagnosis of brain tumours is a crucial task in modern medical systems, yet identifying a potential brain tumour is challenging owing to the complex behaviour and structure of the human brain. To address this issue, a deep learning-driven framework consisting of four pre-trained models, viz. DenseNet169, VGG-19, Xception, and EfficientNetV2B2, is developed to classify potential brain tumours from magnetic resonance images. First, the deep learning models are trained and fine-tuned on the training dataset, and the validation scores of the trained models are used as model-wise weights. The trained models are subsequently evaluated on the test dataset to generate model-specific predictions. In the weight-aware decision module, the class bucket of a probable output class is updated with the weights of the deep models whose predictions match that class. Finally, the bucket with the highest aggregated value is selected as the final output class for the input image. This novel weight-aware decision mechanism is a key feature of the framework, as it effectively handles tie situations in multi-class classification compared to conventional majority-based techniques. The developed framework obtained promising accuracies of 98.7%, 97.52%, and 94.94% on three different datasets. The entire framework is seamlessly integrated into an end-to-end web application for user convenience. The source code, dataset and other particulars are publicly released at https://github.com/SaiSanthosh1508/Brain-Tumour-Image-classification-app [Rishik Sai Santhosh, "Brain Tumour Image Classification Application," https://github.com/SaiSanthosh1508/Brain-Tumour-Image-classification-app] for academic, research and other non-commercial usage.
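The weight-aware decision module described above can be sketched in a few lines. The validation scores and tumour classes below are hypothetical placeholders, not values from the paper; the point is how a 2-2 tie that defeats simple majority voting is resolved by the weights.

```python
def weight_aware_decision(predictions, weights):
    """Aggregate per-model predictions into class buckets weighted by each
    model's validation score; the bucket with the highest total wins.
    `predictions`: model name -> predicted class.
    `weights`:     model name -> validation score (model-wise weight)."""
    buckets = {}
    for model, cls in predictions.items():
        buckets[cls] = buckets.get(cls, 0.0) + weights[model]
    return max(buckets, key=buckets.get), buckets

# Hypothetical validation scores for the four backbones, and a 2-2 tie
# that plain majority voting could not break:
weights = {"DenseNet169": 0.97, "VGG-19": 0.93,
           "Xception": 0.96, "EfficientNetV2B2": 0.95}
preds = {"DenseNet169": "glioma", "VGG-19": "meningioma",
         "Xception": "meningioma", "EfficientNetV2B2": "glioma"}

label, buckets = weight_aware_decision(preds, weights)
```

Here "glioma" wins its bucket (0.97 + 0.95) over "meningioma" (0.93 + 0.96), so the better-validated pair of models decides the tie.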

XLLC-Net: A lightweight and explainable CNN for accurate lung cancer classification using histopathological images.

Jim JR, Rayed ME, Mridha MF, Nur K

PubMed · Jan 1 2025
Lung cancer imaging plays a crucial role in early diagnosis and treatment, where machine learning and deep learning have significantly advanced the accuracy and efficiency of disease classification. This study introduces the Explainable and Lightweight Lung Cancer Net (XLLC-Net), a streamlined convolutional neural network designed for classifying lung cancer from histopathological images. Using the LC25000 dataset, which includes three lung cancer classes and two colon cancer classes, we focused solely on the three lung cancer classes for this study. XLLC-Net effectively discerns complex disease patterns within these classes. The model consists of four convolutional layers and contains merely 3 million parameters, considerably reducing its computational footprint compared to existing deep learning models. This compact architecture facilitates efficient training, completing each epoch in just 60 seconds. Remarkably, XLLC-Net achieves a classification accuracy of 99.62% ± 0.16%, with precision, recall, and F1 score of 99.33% ± 0.30%, 99.67% ± 0.30%, and 99.70% ± 0.30%, respectively. Furthermore, the integration of Explainable AI techniques, such as saliency maps and Grad-CAM, enhances the interpretability of the model, offering clear visual insights into its decision-making process. Our results underscore the potential of lightweight DL models in medical imaging, providing high accuracy and rapid training while ensuring model transparency and reliability.

Volumetric atlas of the rat inner ear from microCT and iDISCO+ cleared temporal bones.

Cossellu D, Vivado E, Batti L, Gantar I, Pizzala R, Perin P

PubMed · Jan 1 2025
Volumetric atlases are an invaluable tool in neuroscience and otolaryngology, greatly aiding experiment planning and surgical interventions, as well as the interpretation of experimental and clinical data. The rat is a major animal model for hearing and balance studies, and a detailed volumetric atlas for the rat central auditory system (Waxholm) is available. However, the Waxholm rat atlas only contains a low-resolution inner ear featuring five structures. In the present work, we segmented and annotated 34 structures in the rat inner ear, yielding a detailed volumetric inner ear atlas which can be integrated with the Waxholm rat brain atlas. We performed iodine-enhanced microCT and iDISCO+-based clearing and fluorescence lightsheet microscopy imaging on a sample of rat temporal bones. Image stacks were segmented in a semiautomated way, and 34 inner ear volumes were reconstructed from five samples. Using geometrical morphometry, high-resolution segmentations obtained from lightsheet and microCT stacks were registered into the coordinate system of the Waxholm rat atlas. Cleared sample autofluorescence was used for the reconstruction of most inner ear structures, including fluid-filled compartments, nerves and sensory epithelia, blood vessels, and connective tissue structures. Image resolution allowed reconstruction of thin ducts (reuniting, saccular, and endolymphatic) and the utriculoendolymphatic valve. The vestibulocochlear artery coursing through bone was found to be associated with the reuniting duct and visible in both cleared and microCT samples, thus allowing duct location to be inferred from microCT scans. Cleared labyrinths showed minimal shape distortions, as shown by alignment with microCT and Waxholm labyrinths.
However, membranous labyrinths could display variable collapse of the superior division, especially the roof of the canal ampullae, whereas the inferior division (saccule and cochlea) was well preserved, with the exception of Reissner's membrane, which could display ruptures in the second cochlear turn. As an example of atlas use, the volumes reconstructed from segmentations were used to separate macrophage populations from the spiral ganglion, auditory neuron dendrites, and organ of Corti. We have reconstructed 34 structures from the rat temporal bone, which are available as both image stacks and printable 3D objects in a shared repository for download. These can be used for teaching, localizing cells or other features within the ear, modeling auditory and vestibular sensory physiology, and training automated segmentation machine learning tools.

The Role of Computed Tomography and Artificial Intelligence in Evaluating the Comorbidities of Chronic Obstructive Pulmonary Disease: A One-Stop CT Scanning for Lung Cancer Screening.

Lin X, Zhang Z, Zhou T, Li J, Jin Q, Li Y, Guan Y, Xia Y, Zhou X, Fan L

PubMed · Jan 1 2025
Chronic obstructive pulmonary disease (COPD) is a major cause of morbidity and mortality worldwide. Comorbidities in patients with COPD significantly increase morbidity, mortality, and healthcare costs, posing a significant burden on the management of COPD. Given the complex clinical manifestations and varying severity of COPD comorbidities, accurate diagnosis and evaluation are particularly important in selecting appropriate treatment options. With the development of medical imaging technology, AI-based chest CT, as a noninvasive imaging modality, provides a detailed assessment of COPD comorbidities. Recent studies have shown that certain radiographic features on chest CT can be used as alternative markers of comorbidities in COPD patients. CT-based radiomics features provided incremental predictive value over clinical risk factors alone, achieving an AUC of 0.73 for COPD combined with cardiovascular disease (CVD). However, AI has inherent limitations, such as a lack of interpretability, and further research is needed to address them. This review evaluates the progress of AI technology combined with chest CT imaging in COPD comorbidities, including lung cancer, cardiovascular disease, osteoporosis, sarcopenia, excess adipose depots, and pulmonary hypertension, with the aim of improving the understanding of imaging and the management of COPD comorbidities for the purpose of improving disease screening, efficacy assessment, and prognostic evaluation.

Application of artificial intelligence in X-ray imaging analysis for knee arthroplasty: A systematic review.

Zhang Z, Hui X, Tao H, Fu Z, Cai Z, Zhou S, Yang K

PubMed · Jan 1 2025
Artificial intelligence (AI) is a promising and powerful technology with increasing use in orthopedics. The global volume of knee arthroplasty procedures is expanding. This study investigated the use of AI algorithms to review radiographs of knee arthroplasty. The Ovid-Embase, Web of Science, Cochrane Library, PubMed, China National Knowledge Infrastructure (CNKI), WeiPu (VIP), WanFang, and China Biology Medicine (CBM) databases were systematically screened from inception to March 2024 (PROSPERO study protocol registration: CRD42024507549). The quality assessment of diagnostic accuracy studies tool assessed the risk of bias. A total of 21 studies were included in the analysis. Of these, 10 studies identified and classified implant brands, 6 measured implant size and component alignment, 3 detected implant loosening, and 2 diagnosed prosthetic joint infections (PJI). For classifying and identifying implant brands, 5 studies demonstrated near-perfect prediction with an area under the curve (AUC) ranging from 0.98 to 1.0, and 10 achieved accuracy (ACC) between 96% and 100%. Regarding implant measurement, one study showed an AUC of 0.62, and two others exhibited over 80% ACC in determining component sizes. Moreover, artificial intelligence showed good to excellent reliability across all angles in three separate studies (intraclass correlation coefficient > 0.78). In predicting PJI, one study achieved an AUC of 0.91 with a corresponding ACC of 90.5%, while another reported a positive predictive value ranging from 75% to 85%. For detecting implant loosening, the AUC was found to be at least as high as 0.976, with ACC ranging from 85.8% to 97.5%. These studies show that AI is promising in recognizing implants in knee arthroplasty.
Future research should follow a rigorous approach to AI development, with comprehensive and transparent reporting of methods and the creation of open-source software programs and commercial tools that can provide clinicians with objective clinical decisions.

OA-HybridCNN (OHC): An advanced deep learning fusion model for enhanced diagnostic accuracy in knee osteoarthritis imaging.

Liao Y, Yang G, Pan W, Lu Y

PubMed · Jan 1 2025
Knee osteoarthritis (KOA) is a leading cause of disability globally. Early and accurate diagnosis is paramount in preventing its progression and improving patients' quality of life. However, the inconsistency in radiologists' expertise and the onset of visual fatigue during prolonged image analysis often compromise diagnostic accuracy, highlighting the need for automated diagnostic solutions. In this study, we present an advanced deep learning model, OA-HybridCNN (OHC), which integrates ResNet and DenseNet architectures. This integration effectively addresses the gradient vanishing issue in DenseNet and augments prediction accuracy. To evaluate its performance, we conducted a thorough comparison with other deep learning models using five-fold cross-validation and external tests. The OHC model outperformed its counterparts across all performance metrics. In external testing, OHC exhibited an accuracy of 91.77%, precision of 92.34%, and recall of 91.36%. During the five-fold cross-validation, its average AUC and ACC were 86.34% and 87.42%, respectively. Deep learning, particularly exemplified by the OHC model, has greatly improved the efficiency and accuracy of KOA imaging diagnosis. The adoption of such technologies not only alleviates the burden on radiologists but also significantly enhances diagnostic precision.
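The two connection patterns being fused in OHC can be illustrated schematically: a residual (ResNet-style) block adds its input to a transformed version of itself, while a dense (DenseNet-style) block concatenates all preceding feature maps. This NumPy sketch shows only the wiring, not the authors' network; all shapes and weights are arbitrary placeholders.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w):
    """ResNet-style connection: output = input + transform(input)."""
    return x + relu(x @ w)

def dense_block(x, ws):
    """DenseNet-style connection: each layer receives the concatenation of
    all previous feature maps along the feature axis."""
    feats = [x]
    for w in ws:
        h = relu(np.concatenate(feats, axis=-1) @ w)
        feats.append(h)
    return np.concatenate(feats, axis=-1)

rng = np.random.default_rng(0)
x = rng.random((4, 8))                                  # batch of 4, 8 features

res_out = residual_block(x, rng.random((8, 8)))          # shape preserved: (4, 8)
dense_out = dense_block(x, [rng.random((8, 4)),          # layer 1: 8 -> 4 new features
                            rng.random((12, 4))])        # layer 2: 8+4 -> 4 new features
```

The residual path keeps the feature width fixed and eases gradient flow through the additive shortcut, while the dense path grows the width by reusing every earlier map; a fusion model draws on both behaviors.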
