Page 6 of 102 · 1014 results

Enhanced hyper tuning using bioinspired-based deep learning model for accurate lung cancer detection and classification.

Kumari J, Sinha S, Singh L

PubMed · Aug 9, 2025
Lung cancer (LC) is one of the leading causes of cancer-related deaths worldwide, and early recognition is critical for improving patient outcomes. However, existing LC detection techniques face challenges such as high computational demands, complex data integration, scalability limitations, and difficulties in achieving rigorous clinical validation. This research proposes an Enhanced Hyper Tuning Deep Learning (EHTDL) model utilizing bioinspired algorithms to overcome these limitations and improve the accuracy and efficiency of LC detection and classification. The methodology begins with the Smooth Edge Enhancement (SEE) technique for preprocessing CT images, followed by feature extraction using GLCM-based texture analysis. To refine the features and reduce dimensionality, a hybrid feature selection approach combining Grey Wolf Optimization (GWO) and Differential Evolution (DE) is employed. Precise lung segmentation is performed using Mask R-CNN to ensure accurate delineation of lung regions. A Deep Fractal Edge Classifier (DFEC) is introduced, consisting of five fractal blocks with convolutional and pooling layers that progressively learn LC characteristics. The proposed EHTDL model achieves remarkable performance, including 99% accuracy, 100% precision, 98% recall, and a 99% <i>F</i>1-score, demonstrating its robustness and effectiveness. The model's scalability and efficiency make it suitable for real-time clinical application, offering a promising solution for early LC detection and significantly enhancing patient care.
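The GLCM texture step named in this abstract can be illustrated with a minimal sketch. This is a generic gray-level co-occurrence matrix with one horizontal pixel offset and two common Haralick-style features (contrast, homogeneity); the paper's actual offsets, gray levels, and feature set are not specified, so the values here are purely illustrative:

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset (dx, dy)."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    s = m.sum()
    return m / s if s else m

def glcm_features(p):
    """Contrast and homogeneity computed from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, homogeneity

# Tiny quantized image with four gray levels (0-3).
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]], dtype=int)
contrast, homogeneity = glcm_features(glcm(img))
```

In a radiomics pipeline these per-offset features would be pooled over several directions and distances before feature selection.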

Artificial intelligence with feature fusion empowered enhanced brain stroke detection and classification for disabled persons using biomedical images.

Alsieni M, Alyoubi KH

PubMed · Aug 9, 2025
Brain stroke is an illness that affects almost every age group, particularly people over 65. There are two major kinds of stroke: ischemic and hemorrhagic. Blockage of brain vessels causes an ischemic stroke, while rupture of blood vessels in or around the brain causes a hemorrhagic stroke. Prompt diagnosis of brain stroke can considerably ease patients' lives, and recognizing strokes in medical imaging is crucial for early diagnosis and treatment planning. However, access to advanced imaging methods is limited, particularly in developing countries, making it challenging to assess brain stroke cases in disabled people appropriately. Hence, more accurate, faster, and more reliable diagnostic models for the timely recognition and efficient treatment of ischemic stroke are greatly needed. Artificial intelligence technologies, primarily deep learning (DL), have been widely employed in medical imaging through automated detection methods. This paper presents an Enhanced Brain Stroke Detection and Classification using Artificial Intelligence with Feature Fusion Technologies (EBSDC-AIFFT) model, which aims to improve diagnostic accuracy for individuals with disabilities using biomedical images. Initially, the image pre-processing stage involves resizing, normalization, data augmentation, and data splitting to enhance image quality. The EBSDC-AIFFT model then combines the Inception-ResNet-v2 model, the convolutional block attention module (CBAM)-ResNet18 method, and the multi-axis vision transformer technique for feature extraction. Finally, a variational autoencoder (VAE) model performs the classification. In a comparison study on a brain stroke CT image dataset, the EBSDC-AIFFT technique demonstrated a superior accuracy of 99.09% over existing models.
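The feature-fusion idea described here can be sketched as late fusion by concatenation. The embedding widths below are the usual global-pool output sizes of these backbones, not values stated in the paper, and the vectors are random stand-ins for real extracted features:

```python
import numpy as np

# Hypothetical pooled feature vectors for one image; the dimensions are
# the backbones' conventional output widths, assumed for illustration.
rng = np.random.default_rng(0)
f_inception = rng.random(1536)  # Inception-ResNet-v2 global-pool features
f_cbam_r18 = rng.random(512)    # CBAM-ResNet18 global-pool features
f_maxvit = rng.random(768)      # multi-axis vision transformer features

# Late fusion by concatenation: the combined vector is what a downstream
# classification head (VAE-based in this paper) would consume.
fused = np.concatenate([f_inception, f_cbam_r18, f_maxvit])
```

The design choice is that each backbone contributes complementary features (local convolutional, attention-reweighted, and global transformer context) and the classifier learns to weight them jointly.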

Deep learning in rib fracture imaging: study quality assessment using the Must AI Criteria-10 (MAIC-10) checklist for artificial intelligence in medical imaging.

Getzmann JM, Nulle K, Mennini C, Viglino U, Serpi F, Albano D, Messina C, Fusco S, Gitto S, Sconfienza LM

PubMed · Aug 9, 2025
To analyze the methodological quality of studies on deep learning (DL) in rib fracture imaging with the Must AI Criteria-10 (MAIC-10) checklist, and to report insights and experiences regarding the applicability of the checklist. An electronic literature search was conducted in the PubMed database. After article selection, three radiologists independently rated the articles according to MAIC-10. Inter-rater differences in the MAIC-10 score for each checklist item were assessed using Fleiss' kappa. A total of 25 original articles discussing DL applications in rib fracture imaging were identified. Most studies focused on fracture detection (n = 21, 84%). Internal cross-validation of the dataset was performed in most papers (n = 16, 64%), while only six studies (24%) conducted external validation. The mean MAIC-10 score of the 25 studies was 5.63 (SD, 1.84; range 1-8), with the item "clinical need" reported most consistently (100%) and the item "study design" most frequently reported incompletely (94.8%). The average inter-rater agreement for the MAIC-10 score was 0.771. The MAIC-10 checklist is a valid tool for assessing the quality of AI research in medical imaging, with good inter-rater agreement. In rib fracture imaging, items such as "study design", "explainability", and "transparency" were often not comprehensively addressed. AI in medical imaging has become increasingly common; quality control systems for published literature, such as the MAIC-10 checklist, are therefore needed to ensure high-quality research output. Quality control systems are needed for research on AI in medical imaging. The MAIC-10 checklist is a valid tool for assessing the quality of AI research in medical imaging. Checklist items such as "study design", "explainability", and "transparency" are frequently addressed incompletely.
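The inter-rater statistic used here, Fleiss' kappa, can be computed from a subjects-by-categories count matrix in a few lines. This is the standard formula, not code from the study:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa from a (subjects x categories) matrix of rating
    counts, where every row sums to the same number of raters."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                      # raters per subject
    p_j = counts.sum(axis=0) / counts.sum()        # category proportions
    P_i = (np.sum(counts**2, axis=1) - n) / (n * (n - 1))
    P_bar = P_i.mean()                             # observed agreement
    P_e = np.sum(p_j**2)                           # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Three raters, two categories: perfect agreement vs. partial agreement.
k_perfect = fleiss_kappa([[3, 0], [0, 3], [3, 0]])
k_partial = fleiss_kappa([[2, 1], [1, 2]])
```

Perfect agreement yields kappa = 1; systematic disagreement pushes it toward or below zero.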

Kidney volume after endovascular exclusion of abdominal aortic aneurysms by EVAR and FEVAR.

B S, C V, Turkia J B, Weydevelt E V, R P, F L, A K

PubMed · Aug 9, 2025
Decreased kidney volume is a sign of renal aging and/or decreased vascularization. The aim of this study was to determine whether renal volume changes 24 months after exclusion of an abdominal aortic aneurysm (AAA), and to compare fenestrated (FEVAR) and infrarenal (EVAR) stent grafts. Retrospective single-center study from a prospective registry, including patients between 60 and 80 years with normal preoperative renal function (eGFR ≥ 60 ml/min/1.73 m<sup>-2</sup>) who underwent fenestrated (FEVAR) or infrarenal (EVAR) stent grafting between 2015 and 2021. Patients had to have undergone a CT scan at 24 months to be included. Exclusion criteria were renal branches, preoperative renal insufficiency, a single kidney, embolization or coverage of an accessory renal artery, occlusion of a renal artery during follow-up, and mention of AAA rupture. Renal volume was measured using sizing software (EndoSize, Therenva) based on fully automatic deep-learning segmentation of several anatomical structures (arterial lumen, bone structure, thrombus, heart, etc.), including the kidneys. Renal cysts, when present, were manually excluded from the segmentation. Forty-eight patients were included (24 EVAR vs. 24 FEVAR), and 96 kidneys were segmented. There was no difference between groups in age (78.9 ± 6.7 years vs. 69.4 ± 6.8, p=0.89), eGFR (85.8 ± 12.4 [62-107] ml/min/1.73 m<sup>-2</sup> vs. 81 ± 16.2 [42-107], p=0.36), or renal volume (170.9 ± 29.7 [123-276] mL vs. 165.3 ± 37.4 [115-298], p=0.12). At 24 months in the EVAR group, there was a non-significant reduction in eGFR (84.1 ± 17.2 [61-128] ml/min/1.73 m<sup>-2</sup> vs. 81 ± 16.2 [42-107], p=0.36) and renal volume (170.9 ± 29.7 [123-276] mL vs. 165.3 ± 37.4 [115-298], p=0.12). In the FEVAR group at 24 months, there was a non-significant fall in eGFR (84.1 ± 17.2 [61-128] ml/min/1.73 m<sup>-2</sup> vs. 73.8 ± 21.4 [40-110], p=0.09), while renal volume decreased significantly (182 ± 37.8 [123-293] mL vs. 158.9 ± 40.2 [45-258], p=0.007). In this study, there appears to be a significant decrease in renal volume without a drop in eGFR 24 months after fenestrated stenting. This decrease may reflect changes in renal perfusion and could potentially be predictive of long-term renal impairment, although this cannot be confirmed within the limits of this small sample. Further studies with long-term follow-up are needed.
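The baseline-versus-24-month comparisons above are paired (same kidneys at two time points). A minimal sketch of the paired t statistic such comparisons rest on, using illustrative volumes rather than patient data from the study:

```python
import math
from statistics import mean, stdev

def paired_t(baseline, followup):
    """Paired t statistic and degrees of freedom for before/after
    measurements, e.g. renal volume at baseline vs. 24 months."""
    d = [b - a for b, a in zip(baseline, followup)]
    n = len(d)
    t = mean(d) / (stdev(d) / math.sqrt(n))
    return t, n - 1

# Illustrative renal volumes in mL (three kidneys, two time points).
t_stat, dof = paired_t([182, 175, 190], [159, 160, 170])
```

The p-value would then come from the t distribution with `dof` degrees of freedom; for non-normal differences a Wilcoxon signed-rank test is the usual alternative.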

Quantitative radiomic analysis of computed tomography scans using machine and deep learning techniques accurately predicts histological subtypes of non-small cell lung cancer: A retrospective analysis.

Panchawagh S, Halder A, Haldule S, Sanker V, Lalwani D, Sequeria R, Naik H, Desai A

PubMed · Aug 9, 2025
Non-small cell lung cancer (NSCLC) histological subtypes impact treatment decisions. While pre-surgical histopathological examination is ideal, it is not always feasible. CT radiomic analysis shows promise in predicting NSCLC histological subtypes. This study aimed to predict NSCLC histological subtypes with machine learning and deep learning models using radiomic features. A total of 422 lung CT scans from The Cancer Imaging Archive (TCIA) were analyzed. Primary neoplasms were segmented by expert radiologists. Using PyRadiomics, 2,446 radiomic features were extracted; after feature selection, 179 features remained. Machine learning models including logistic regression (LR), support vector machine (SVM), random forest (RF), XGBoost, LightGBM, and CatBoost were employed, alongside a deep neural network (DNN). RF demonstrated the highest accuracy at 78% (95% CI: 70%-84%) and an AUC-ROC of 94% (95% CI: 90%-96%). LightGBM, XGBoost, and CatBoost had AUC-ROC values of 95%, 93%, and 93%, respectively. The DNN's AUC was 94.4% (95% CI: 94.1%-94.6%). Logistic regression had the least efficacy. For histological subtype prediction, random forest, boosting models, and the DNN were superior. Quantitative radiomic analysis with machine learning can accurately determine NSCLC histological subtypes. Random forest, ensemble models, and DNNs show significant promise for pre-operative NSCLC classification, which can streamline therapy decisions.
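AUC-ROC, the headline metric in this and several of the studies above, equals the probability that a randomly chosen positive case outscores a randomly chosen negative one. A minimal sketch of that pairwise-rank (Mann-Whitney) formulation, independent of any particular model:

```python
import numpy as np

def auc_roc(y_true, scores):
    """AUC-ROC via the Mann-Whitney pairwise-rank formulation: the
    probability that a random positive outscores a random negative,
    with ties counted as half."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

auc = auc_roc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```

Three of the four positive/negative score pairs are correctly ordered here, giving an AUC of 0.75.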

GPT-4 vs. Radiologists: who advances mediastinal tumor classification better across report quality levels? A cohort study.

Wen R, Li X, Chen K, Sun M, Zhu C, Xu P, Chen F, Ji C, Mi P, Li X, Deng X, Yang Q, Song W, Shang Y, Huang S, Zhou M, Wang J, Zhou C, Chen W, Liu C

PubMed · Aug 8, 2025
Accurate mediastinal tumor classification is crucial for treatment planning, but diagnostic performance varies with radiologists' experience and report quality. This study evaluated GPT-4's diagnostic accuracy in classifying mediastinal tumors from radiological reports compared to radiologists of different experience levels, using reports of varying quality. We conducted a retrospective study of 1,494 patients from five tertiary hospitals with mediastinal tumors diagnosed via chest CT and pathology. Radiological reports were categorized as low, medium, or high quality based on predefined criteria assessed by experienced radiologists. Six radiologists (two residents, two attending radiologists, and two associate senior radiologists) and GPT-4 evaluated the chest CT reports. Diagnostic performance was analyzed overall, by report quality, and by tumor type using Wald χ2 tests, with 95% CIs calculated via the Wilson method. GPT-4 achieved an overall diagnostic accuracy of 73.3% (95% CI: 71.0-75.5), comparable to associate senior radiologists (74.3%, 95% CI: 72.0-76.5; p > 0.05). For low-quality reports, GPT-4 outperformed associate senior radiologists (60.8% vs. 51.1%, p < 0.001). In high-quality reports, GPT-4 was comparable to attending radiologists (80.6% vs. 79.4%, p > 0.05). Diagnostic performance varied by tumor type: GPT-4 was comparable to radiology residents for neurogenic tumors (44.9% vs. 50.3%, p > 0.05), similar to associate senior radiologists for teratomas (68.1% vs. 65.9%, p > 0.05), and superior in diagnosing lymphoma (75.4% vs. 60.4%, p < 0.001). GPT-4 demonstrated interpretation accuracy comparable to associate senior radiologists, excelling in low-quality reports and outperforming them in diagnosing lymphoma. These findings underscore GPT-4's potential to enhance diagnostic performance in challenging diagnostic scenarios.
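The Wilson method named in this abstract is a closed-form interval for a binomial proportion. A minimal sketch; the count 1095/1494 is an assumed integer that reproduces the rounded 73.3% accuracy reported above, not a figure taken from the paper:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Assumed count: 1095 correct of 1494 reports (73.3% when rounded).
lo, hi = wilson_ci(1095, 1494)
```

Unlike the simpler Wald interval, the Wilson interval stays inside [0, 1] and behaves well for proportions near the extremes.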

An Anisotropic Cross-View Texture Transfer with Multi-Reference Non-Local Attention for CT Slice Interpolation.

Uhm KH, Cho H, Hong SH, Jung SW

PubMed · Aug 8, 2025
Computed tomography (CT) is one of the most widely used non-invasive imaging modalities for medical diagnosis. In clinical practice, CT images are usually acquired with large slice thicknesses due to the high cost of memory storage and operation time, resulting in an anisotropic CT volume with much lower inter-slice resolution than in-plane resolution. Since such inconsistent resolution may lead to difficulties in disease diagnosis, deep learning-based volumetric super-resolution methods have been developed to improve inter-slice resolution. Most existing methods conduct single-image super-resolution on the through-plane or synthesize intermediate slices from adjacent slices; however, the anisotropic characteristic of 3D CT volume has not been well explored. In this paper, we propose a novel cross-view texture transfer approach for CT slice interpolation by fully utilizing the anisotropic nature of 3D CT volume. Specifically, we design a unique framework that takes high-resolution in-plane texture details as a reference and transfers them to low-resolution through-plane images. To this end, we introduce a multi-reference non-local attention module that extracts meaningful features for reconstructing through-plane high-frequency details from multiple in-plane images. Through extensive experiments, we demonstrate that our method performs significantly better in CT slice interpolation than existing competing methods on public CT datasets including a real-paired benchmark, verifying the effectiveness of the proposed framework. The source code of this work is available at https://github.com/khuhm/ACVTT.
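The multi-reference attention idea in this abstract can be sketched as dot-product attention from low-resolution through-plane features onto features pooled from several high-resolution in-plane reference slices. The shapes and scoring here are illustrative assumptions, not the paper's actual module:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_ref_attention(query, refs):
    """Scaled dot-product attention: query is (Nq, d) through-plane
    features; refs is a list of (Nr, d) in-plane reference features
    that are pooled into one key/value bank."""
    kv = np.concatenate(refs, axis=0)                        # (sum Nr, d)
    attn = softmax(query @ kv.T / np.sqrt(query.shape[1]), axis=1)
    return attn @ kv                                         # (Nq, d)

rng = np.random.default_rng(0)
q = rng.random((5, 8))                             # 5 query positions, d = 8
v = np.arange(8.0)
refs = [np.tile(v, (7, 1)), np.tile(v, (3, 1))]    # two reference slices
out = multi_ref_attention(q, refs)
```

With every reference row identical, the attention weights (which sum to 1 per query) must return that shared vector, which makes the example easy to verify; real references would of course differ per position.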

A Cohort Study of Pediatric Severe Community-Acquired Pneumonia Involving AI-Based CT Image Parameters and Electronic Health Record Data.

He M, Yuan J, Liu A, Pu R, Yu W, Wang Y, Wang L, Nie X, Yi J, Xue H, Xie J

PubMed · Aug 8, 2025
Community-acquired pneumonia (CAP) is a significant concern for children worldwide and is associated with high morbidity and mortality. To improve patient outcomes, early intervention and accurate diagnosis are essential. Artificial intelligence (AI) can mine and label imaging data and may thus contribute to precision research and personalized clinical management. The baseline characteristics of 230 children with severe CAP hospitalized from January 2023 to October 2024 were retrospectively analyzed. Patients were divided into two groups according to the presence of respiratory failure. The ability of AI-derived chest computed tomography (CT) indices alone to predict respiratory failure was assessed via logistic regression analysis, and receiver operating characteristic (ROC) curves were plotted for the regression models. After adjusting for age, white blood cell count, neutrophils, lymphocytes, creatinine, wheezing, and fever > 5 days, a greater number of involved lung lobes [odds ratio 1.347, 95% confidence interval (95% CI) 1.036-1.750, P = 0.026] and bilateral lung involvement (odds ratio 2.734, 95% CI 1.084-6.893, P = 0.033) were significantly associated with respiratory failure. The discriminatory power (area under the curve) of Models 2 and 3, which combined electronic health record data with CT imaging features, was better than that of Models 0 and 1, which contained only the chest CT parameters. The sensitivity and specificity of Model 2 at the optimal cutoff (0.441) were 84.3% and 59.8%, respectively; those of Model 3 at the optimal cutoff (0.446) were 68.6% and 76.0%. AI-derived chest CT indices may achieve high diagnostic accuracy and guide precise interventions for patients with severe CAP; however, clinical, laboratory, and AI-derived chest CT indices should all be included to accurately predict and treat severe CAP.
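The odds ratios reported above map directly to logistic regression coefficients via OR = exp(beta). A small sketch that works the published ORs backwards to illustrative betas and compounds the per-lobe effect:

```python
import math

# The abstract's adjusted odds ratios, worked backwards to the logistic
# regression coefficients that would produce them (OR = exp(beta)).
beta_lobes = math.log(1.347)      # per additional involved lung lobe
beta_bilateral = math.log(2.734)  # bilateral involvement (yes/no)

# Recovering the odds ratio, and compounding the per-lobe effect over
# three additional involved lobes (odds multiply on the log-odds scale).
or_lobes = math.exp(beta_lobes)
odds_multiplier_3_lobes = or_lobes ** 3
```

Three additional involved lobes multiply the odds of respiratory failure by about 1.347³ ≈ 2.44 under this model, holding the adjustment covariates fixed.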

A Deep Learning Model to Detect Acute MCA Occlusion on High Resolution Non-Contrast Head CT.

Fussell DA, Lopez JL, Chang PD

PubMed · Aug 8, 2025
To assess the feasibility and accuracy of a deep learning (DL) model for identifying acute middle cerebral artery (MCA) occlusion on high-resolution non-contrast CT (NCCT) imaging. In this study, a total of 4,648 consecutive exams (July 2021 to December 2023) were retrospectively used for model training and validation, while an additional 1,011 consecutive exams (January 2024 to August 2024) were used for independent testing. Using high-resolution NCCT acquired at 1.0 mm slice thickness or less, MCA thrombus was labeled using same-day CTA as ground truth. A 3D DL model was trained for per-voxel thrombus segmentation, with the sum of positive voxels used to estimate the likelihood of acute MCA occlusion. For detection of MCA M1 segment acute occlusion, the model yielded an AUROC of 0.952 [0.904-1.00], accuracy of 93.6% [88.1-98.2], sensitivity of 90.9% [83.1-100], and specificity of 93.6% [88.0-98.3]. Including M2 segment occlusions reduced performance only slightly, yielding an AUROC of 0.884 [0.825-0.942], accuracy of 93.2% [85.1-97.2], sensitivity of 77.4% [69.3-92.2], and specificity of 93.6% [85.1-97.8]. A DL model can detect acute MCA occlusion from high-resolution NCCT with accuracy approaching that of CTA. Using this tool, a majority of candidate thrombectomy patients may be identified with NCCT alone, which could aid stroke triage in settings that lack CTA or are otherwise resource-constrained. DL = deep learning.
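The "sum of positive voxels" rule described above reduces a per-voxel segmentation map to a single study-level score. A minimal sketch on a toy probability volume; the 0.5 threshold is an assumed value, not one stated in the paper:

```python
import numpy as np

def occlusion_score(prob_map, thresh=0.5):
    """Study-level score: count of suprathreshold voxels in a per-voxel
    thrombus probability map, mirroring the abstract's 'sum of positive
    voxels' rule (the threshold here is an assumption)."""
    return int((prob_map >= thresh).sum())

rng = np.random.default_rng(0)
vol = rng.random((4, 8, 8)) * 0.4   # background probabilities all < 0.5
vol[2, 3:5, 3:5] = 0.9              # small simulated M1 thrombus (4 voxels)
score = occlusion_score(vol)
```

Sweeping a decision cutoff over this scalar score per exam is what produces the AUROC figures reported above.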

Clinical insights to improve medical deep learning design: A comprehensive review of methods and benefits.

Thornblad TAE, Ewals LJS, Nederend J, Luyer MDP, De With PHN, van der Sommen F

PubMed · Aug 8, 2025
The success of deep learning and computer vision on natural images has led to increased interest in medical image deep learning applications. However, black-box deep learning models leave little room for domain-specific knowledge when making the final diagnosis. For medical computer vision applications, not only accuracy but also robustness, interpretability, and explainability are essential to ensure clinicians' trust. Medical deep learning applications can therefore benefit from insights into the application at hand, gained by involving clinical staff and considering the clinical diagnostic process. In this review, different clinically inspired methods are surveyed, including clinical insights used at different stages of deep learning design for three-dimensional (3D) computed tomography (CT) image data. The review investigates 400 research articles covering deep learning-based approaches to diagnosing different diseases, assessing whether clinical insights are included in the published work. On this basis, a further detailed review is conducted of the 47 articles that use clinical inspiration. The clinically inspired methods were found to concern preparation for training, 3D medical image data processing, integration of clinical data, and model architecture selection and development. This highlights different ways in which domain-specific knowledge can be used in the design of deep learning systems.