CCTA-derived coronary plaque burden offers enhanced prognostic value over CAC scoring in suspected CAD patients.

Dahdal J, Jukema RA, Maaniitty T, Nurmohamed NS, Raijmakers PG, Hoek R, Driessen RS, Twisk JWR, Bär S, Planken RN, van Royen N, Nijveldt R, Bax JJ, Saraste A, van Rosendael AR, Knaapen P, Knuuti J, Danad I

PubMed · May 30, 2025
To assess the prognostic utility of coronary artery calcium (CAC) scoring and coronary computed tomography angiography (CCTA)-derived quantitative plaque metrics for predicting adverse cardiovascular outcomes. The study enrolled 2404 patients with suspected coronary artery disease (CAD) but without a prior history of CAD. All participants underwent CAC scoring and CCTA, with plaque metrics quantified using an artificial intelligence (AI)-based tool (Cleerly, Inc). Percent atheroma volume (PAV) and non-calcified plaque volume percentage (NCPV%), reflecting total plaque burden and the proportion of non-calcified plaque volume normalized to vessel volume, were evaluated. The primary endpoint was a composite of all-cause mortality and non-fatal myocardial infarction (MI). Cox proportional hazards models, adjusted for clinical risk factors and early revascularization, were employed for analysis. During a median follow-up of 7.0 years, 208 patients (8.7%) experienced the primary endpoint, including 73 cases of MI (3%). The model incorporating PAV demonstrated superior discriminatory power for the composite endpoint (AUC = 0.729) compared to CAC scoring (AUC = 0.706, P = 0.016). In MI prediction, PAV (AUC = 0.791) significantly outperformed CAC (AUC = 0.699, P < 0.001), with NCPV% showing the highest prognostic accuracy (AUC = 0.814, P < 0.001). AI-driven assessment of coronary plaque burden enhances prognostic accuracy for future adverse cardiovascular events, highlighting the critical role of comprehensive plaque characterization in refining risk stratification strategies.
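
As a rough illustration of the analysis this abstract describes, the sketch below fits a covariate-adjusted Cox proportional hazards model and compares the discrimination of PAV against CAC with ROC AUCs. All column names and data are synthetic placeholders, not the study's; lifelines and scikit-learn stand in for whatever software the authors used.

```python
# Hypothetical sketch: Cox model for the composite endpoint and AUC
# comparison of PAV vs. CAC. Columns and values are illustrative only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2404
df = pd.DataFrame({
    "pav": rng.gamma(2.0, 4.0, n),       # percent atheroma volume
    "cac": rng.gamma(1.5, 80.0, n),      # CAC score
    "age": rng.normal(62, 10, n),
    "early_revasc": rng.integers(0, 2, n),
    "time_years": rng.exponential(7.0, n),
    "event": rng.integers(0, 2, n),      # all-cause death or non-fatal MI
})

# Cox proportional hazards model adjusted for clinical covariates
cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="event")
print(cph.summary[["coef", "exp(coef)", "p"]])

# Discrimination of each marker for the composite endpoint
for marker in ("pav", "cac"):
    print(marker, roc_auc_score(df["event"], df[marker]))
```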

Multimodal AI framework for lung cancer diagnosis: Integrating CNN and ANN models for imaging and clinical data analysis.

Oncu E, Ciftci F

PubMed · May 30, 2025
Lung cancer remains a leading cause of cancer-related mortality worldwide, emphasizing the critical need for accurate and early diagnostic solutions. This study introduces a novel multimodal artificial intelligence (AI) framework that integrates Convolutional Neural Networks (CNNs) and Artificial Neural Networks (ANNs) to improve lung cancer classification and severity assessment. The CNN model, trained on 1019 preprocessed CT images, classified lung tissue into four histological categories (adenocarcinoma, large cell carcinoma, squamous cell carcinoma, and normal) with a weighted accuracy of 92%. Interpretability was enhanced using Gradient-weighted Class Activation Mapping (Grad-CAM), which highlights the salient image regions influencing the model's predictions. In parallel, an ANN trained on clinical data from 999 patients, spanning 24 key features such as demographic, symptomatic, and genetic factors, achieved 99% accuracy in predicting cancer severity (low, medium, high). SHapley Additive exPlanations (SHAP) were employed to provide both global and local interpretability of the ANN model, enabling transparent decision-making. Both models were rigorously validated using k-fold cross-validation to ensure robustness and reduce overfitting. This hybrid approach effectively combines spatial imaging data and structured clinical information, demonstrating strong predictive performance and offering an interpretable, comprehensive AI-based solution for lung cancer diagnosis and management.
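
A minimal PyTorch sketch of the two-branch design the abstract describes: a small CNN for CT slices and an MLP ("ANN") for the 24 tabular clinical features. The class counts and feature count follow the abstract; the layer sizes and everything else are assumptions, not the authors' architecture.

```python
# Illustrative two-branch multimodal setup: CNN for imaging, MLP for
# clinical data. Sizes are placeholders, not the published model.
import torch
import torch.nn as nn

class ImageBranch(nn.Module):
    def __init__(self, n_classes=4):  # adeno, large cell, squamous, normal
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class ClinicalBranch(nn.Module):
    def __init__(self, n_features=24, n_severity=3):  # low / medium / high
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_severity),
        )

    def forward(self, x):
        return self.net(x)

ct = torch.randn(8, 1, 224, 224)   # batch of CT slices
clinical = torch.randn(8, 24)      # batch of clinical feature vectors
print(ImageBranch()(ct).shape, ClinicalBranch()(clinical).shape)
```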

Multiclass ensemble framework for enhanced prostate gland segmentation: Integrating Self-ONN decoders with EfficientNet.

Islam Sumon MS, Chowdhury MEH, Bhuiyan EH, Rahman MS, Khan MM, Al-Hashimi I, Mushtak A, Zoghoul SB

PubMed · May 30, 2025
Digital pathology relies on the morphological architecture of prostate glands to recognize cancerous tissue. Prostate cancer (PCa) originates in the walnut-shaped prostate gland of the male reproductive system. Deep learning (DL) pipelines can assist in identifying these regions with advanced segmentation techniques that are effective in diagnosing and treating prostate diseases, facilitating early detection, targeted biopsy, and accurate treatment planning while ensuring consistent, reproducible results and minimizing human error. Automated segmentation techniques trained on MRI datasets can also aid in monitoring disease progression, supporting clinical care through patient-specific models for personalized medicine. In this study, we present multiclass segmentation models designed to localize the prostate gland and its zonal regions, specifically the peripheral zone (PZ), the transition zone (TZ), and the whole gland, by combining EfficientNetB4 encoders with Self-organized Operational Neural Network (Self-ONN)-based decoders. Traditional convolutional neural networks (CNNs) rely on linear neuron models, which limit their ability to capture the complex dynamics of biological neural systems. In contrast, Operational Neural Networks (ONNs), particularly Self-ONNs, address this limitation by incorporating nonlinear and adaptive operations at the neuron level. We evaluated various encoder-decoder configurations and found that the combination of an EfficientNet-based encoder with a Self-ONN-based decoder yielded the best performance. To further enhance segmentation accuracy, we employed the STAPLE method to ensemble the top three performing models. Our approach was tested on the large-scale, recently updated PI-CAI Challenge dataset using 5-fold cross-validation, achieving Dice scores of 95.33% for the whole gland and 92.32% for the combined PZ and TZ regions. These advanced segmentation techniques significantly improve the quality of PCa diagnosis and treatment, contributing to better patient care and outcomes.
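
The STAPLE ensembling step can be sketched with SimpleITK, assuming the top three models each produce a binary prostate mask: thresholding STAPLE's per-voxel probability map gives the consensus segmentation, scored here with a plain Dice coefficient. Mask shapes and data are illustrative, not the PI-CAI setup.

```python
# Minimal STAPLE ensemble sketch over three hypothetical binary masks.
import numpy as np
import SimpleITK as sitk

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

rng = np.random.default_rng(0)
masks = [(rng.random((32, 64, 64)) > 0.5).astype(np.uint8) for _ in range(3)]

# STAPLE estimates a per-voxel probability of foreground agreement;
# the second argument is the confidence weight (default 1.0).
prob = sitk.STAPLE([sitk.GetImageFromArray(m) for m in masks], 1.0)
consensus = sitk.GetArrayFromImage(prob) > 0.5

reference = masks[0].astype(bool)  # stand-in for a ground-truth mask
print(f"Dice vs. reference: {dice(consensus, reference):.3f}")
```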

Real-time brain tumor detection in intraoperative ultrasound: From model training to deployment in the operating room.

Cepeda S, Esteban-Sinovas O, Romero R, Singh V, Shett P, Moiyadi A, Zemmoura I, Giammalva GR, Del Bene M, Barbotti A, DiMeco F, West TR, Nahed BV, Arrese I, Hornero R, Sarabia R

PubMed · May 30, 2025
Intraoperative ultrasound (ioUS) is a valuable tool in brain tumor surgery due to its versatility, affordability, and seamless integration into the surgical workflow. However, its adoption remains limited, primarily because of the challenges associated with image interpretation and the steep learning curve required for effective use. This study aimed to enhance the interpretability of ioUS images by developing a real-time brain tumor detection system deployable in the operating room. We collected 2D ioUS images from the BraTioUS and ReMIND datasets, annotated with expert-refined tumor labels. Using the YOLO11 architecture and its variants, we trained object detection models to identify brain tumors. The dataset included 1732 images from 192 patients, divided into training, validation, and test sets. Data augmentation expanded the training set to 11,570 images. In the test dataset, YOLO11s achieved the best balance of precision and computational efficiency, with a mAP@50 of 0.95, mAP@50-95 of 0.65, and a processing speed of 34.16 frames per second. The proposed solution was prospectively validated in a cohort of 20 consecutively operated patients diagnosed with brain tumors. Neurosurgeons confirmed its seamless integration into the surgical workflow, with real-time predictions accurately delineating tumor regions. These findings highlight the potential of real-time object detection algorithms to enhance ioUS-guided brain tumor surgery, addressing key challenges in interpretation and providing a foundation for future development of computer vision-based tools for neuro-oncological surgery.
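
For reference, training and streaming inference with the Ultralytics YOLO11 API the abstract names look roughly like the sketch below; the dataset YAML, video source, and hyperparameters are placeholders, not the authors' configuration.

```python
# Sketch of the YOLO11 workflow: train on annotated ioUS frames, validate,
# then stream frame-by-frame predictions as in the operating room.
from ultralytics import YOLO

model = YOLO("yolo11s.pt")  # the variant reported to balance accuracy/speed

# Train on ioUS frames annotated with tumor bounding boxes (placeholder YAML)
model.train(data="ious_tumors.yaml", epochs=100, imgsz=640)

# Validation reports mAP@50 and mAP@50-95 among other metrics
metrics = model.val()
print(metrics.box.map50, metrics.box.map)

# Stream predictions per frame, e.g. from a recorded ultrasound clip
for result in model.predict(source="ious_case.mp4", stream=True, conf=0.25):
    print(result.boxes.xyxy)  # tumor bounding boxes for this frame
```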

Using Deep Learning to Predict Cardiovascular Magnetic Resonance Findings from Echocardiography Videos.

Sahashi Y, Vukadinovic M, Duffy G, Li D, Cheng S, Berman DS, Ouyang D, Kwan AC

PubMed · May 30, 2025
Echocardiography is the most common modality for assessing cardiac structure and function. While cardiac magnetic resonance (CMR) imaging is less accessible, CMR can provide unique tissue characterization, including late gadolinium enhancement (LGE), T1 and T2 mapping, and extracellular volume (ECV), which are associated with tissue fibrosis, infiltration, and inflammation. Deep learning has been shown to uncover findings not recognized by clinicians; however, it is unknown whether CMR-based tissue characteristics can be derived from echocardiography videos using deep learning. To assess the performance of a deep learning model applied to echocardiography to detect CMR-specific parameters, including LGE presence and abnormal T1, T2, or ECV. In a retrospective single-center study, adult patients with CMR and echocardiography studies within 30 days were included. A video-based convolutional neural network was trained on echocardiography videos to predict CMR-derived labels, including LGE presence and abnormal T1, T2, or ECV, across echocardiography views. The model was also trained to predict the presence or absence of wall motion abnormality (WMA) as a positive control for model function. Model performance was evaluated on a held-out test dataset not used for training. The study population included 1,453 adult patients (mean age 56±18 years, 42% female) with 2,556 paired echocardiography studies occurring at a median of 2 days after CMR (interquartile range 2 days prior to 6 days after). The model had high predictive capability for the presence of WMA (AUC 0.873 [95% CI 0.816-0.922]), which served as the positive control. However, the model was unable to reliably detect the presence of LGE (AUC 0.699 [0.613-0.780]), abnormal native T1 (AUC 0.614 [0.500-0.715]), T2 (AUC 0.553 [0.420-0.692]), or ECV (AUC 0.564 [0.455-0.691]). Deep learning applied to echocardiography accurately identified CMR-based WMA but was unable to predict tissue characteristics, suggesting that the signal for these tissue characteristics may not be present within ultrasound videos and that the use of CMR for tissue characterization remains essential within cardiology.
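
A video-based CNN of the kind described can be sketched with torchvision's R(2+1)D backbone as a stand-in for the authors' architecture: one logit per binary CMR label (e.g., LGE presence), trained with a binary cross-entropy loss. Shapes and labels are illustrative.

```python
# Hedged sketch: video classification backbone repurposed for a binary
# CMR-derived label from echo clips. Not the published model.
import torch
import torch.nn as nn
from torchvision.models.video import r2plus1d_18

model = r2plus1d_18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)  # one logit per binary label

clips = torch.randn(4, 3, 16, 112, 112)   # (batch, channels, frames, H, W)
labels = torch.tensor([[1.], [0.], [1.], [0.]])  # e.g. LGE present/absent

logits = model(clips)
loss = nn.BCEWithLogitsLoss()(logits, labels)
loss.backward()
print(f"loss: {loss.item():.3f}")
```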

The Impact of Model-based Deep-learning Reconstruction Compared with that of Compressed Sensing-Sensitivity Encoding on the Image Quality and Precision of Cine Cardiac MR in Evaluating Left-ventricular Volume and Strain: A Study on Healthy Volunteers.

Tsuneta S, Aono S, Kimura R, Kwon J, Fujima N, Ishizaka K, Nishioka N, Yoneyama M, Kato F, Minowa K, Kudo K

PubMed · May 30, 2025
To evaluate the effect of model-based deep-learning reconstruction (DLR) compared with that of compressed sensing-sensitivity encoding (CS) on cine cardiac magnetic resonance (CMR). Cine CMR images of 10 healthy volunteers were obtained with reduction factors of 2, 4, 6, and 8 and reconstructed using CS and DLR. Visual image quality was scored for sharpness, image noise, and artifacts. Left-ventricular (LV) end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), and ejection fraction (EF) were manually measured. LV global circumferential strain (GCS) was measured automatically using dedicated software. The precision of EDV, ESV, SV, EF, and GCS measurements was compared between CS and DLR using Bland-Altman analysis, with fully sampled data as the gold standard. Compared with CS, DLR significantly improved image quality at reduction factors of 6 and 8. The precision of EDV and ESV measurements at a reduction factor of 8, and of GCS measurements at reduction factors of 6 and 8, improved with DLR compared with CS, whereas the precision of SV and EF measurements did not differ between DLR and CS. The effect of DLR on cine CMR image quality and on the precision of quantitative volume and strain evaluation was equal or superior to that of CS. DLR may replace CS for cine CMR.
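
The Bland-Altman comparison against fully sampled data reduces to a bias and 95% limits of agreement per measurement. A minimal sketch with synthetic EDV values follows; the study's actual numbers are not reproduced here.

```python
# Bland-Altman bias and limits of agreement for an accelerated measurement
# (e.g. EDV from DLR at R=8) versus the fully sampled gold standard.
import numpy as np

def bland_altman(measure, gold):
    diff = measure - gold
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)   # half-width of 95% limits of agreement
    return bias, bias - loa, bias + loa

rng = np.random.default_rng(0)
gold = rng.normal(140, 20, 10)            # fully sampled EDV (mL), n=10
dlr = gold + rng.normal(0.5, 3.0, 10)     # synthetic DLR at R=8
cs = gold + rng.normal(1.0, 6.0, 10)      # synthetic CS at R=8

for name, m in (("DLR", dlr), ("CS", cs)):
    bias, lo, hi = bland_altman(m, gold)
    print(f"{name}: bias {bias:+.1f} mL, LoA [{lo:+.1f}, {hi:+.1f}] mL")
```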

Combining structural equation modeling analysis with machine learning for early malignancy detection in Bethesda Category III thyroid nodules.

Kasap ZA, Kurt B, Güner A, Özsağır E, Ercin ME

PubMed · May 30, 2025
Atypia of Undetermined Significance (AUS), classified as Category III in the Bethesda Thyroid Cytopathology Reporting System, presents significant diagnostic challenges for clinicians. This study aims to develop a clinical decision support system that integrates structural equation modeling (SEM) and machine learning to predict malignancy in AUS thyroid nodules. The model integrates preoperative clinical data, ultrasonography (USG) findings, and cytopathological and morphometric variables. This retrospective cohort study was conducted between 2011 and 2019 at Karadeniz Technical University (KTU) Farabi Hospital. The dataset included 56 variables derived from 204 thyroid nodules diagnosed via ultrasound-guided fine-needle aspiration biopsy (FNAB) in 183 patients aged over 18 years. Logistic regression (LR) and SEM were used to identify risk factors for early thyroid cancer detection. Subsequently, machine learning algorithms, including Support Vector Machines (SVM), Naive Bayes (NB), and Decision Trees (DT), were used to construct decision support models. After feature selection with SEM, the SVM model achieved the highest performance, with an accuracy of 82%, a specificity of 97%, and an AUC of 84%. Additional models were developed for different scenarios, and their performance metrics were compared. Accurate preoperative prediction of malignancy in thyroid nodules is crucial for avoiding unnecessary surgeries. The proposed model supports more informed clinical decision-making by effectively identifying benign cases, thereby reducing surgical risk and improving patient care.
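
The final modeling step, an SVM on SEM-selected features scored for accuracy, specificity, and AUC, might look like the scikit-learn sketch below; the feature count and data are placeholders for the study's 56 variables.

```python
# Illustrative SVM decision-support model on SEM-selected features.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(204, 12))   # 12 SEM-selected features (assumed count)
y = rng.integers(0, 2, 204)      # 1 = malignant, 0 = benign

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)

pred = clf.predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print("accuracy:", accuracy_score(y_te, pred))
print("specificity:", tn / (tn + fp))
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```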

Evaluation of uncertainty estimation methods in medical image segmentation: Exploring the usage of uncertainty in clinical deployment.

Li S, Yuan M, Dai X, Zhang C

PubMed · May 30, 2025
Uncertainty estimation methods are essential for the application of artificial intelligence (AI) models in medical image segmentation, particularly in addressing reliability and feasibility challenges in clinical deployment. Despite their significance, the adoption of uncertainty estimation methods in clinical practice remains limited due to the lack of a comprehensive evaluation framework tailored to their clinical usage. To address this gap, a simulation of uncertainty-assisted clinical workflows is conducted, highlighting the roles of uncertainty in model selection, sample screening, and risk visualization. Furthermore, uncertainty evaluation is extended to pixel, sample, and model levels to enable a more thorough assessment. At the pixel level, the Uncertainty Confusion Metric (UCM) is proposed, utilizing density curves to improve robustness against variability in uncertainty distributions and to assess the ability of pixel uncertainty to identify potential errors. At the sample level, the Expected Segmentation Calibration Error (ESCE) is introduced to provide more accurate calibration aligned with Dice, enabling more effective identification of low-quality samples. At the model level, the Harmonic Dice (HDice) metric is developed to integrate uncertainty and accuracy, mitigating the influence of dataset biases and offering a more robust evaluation of model performance on unseen data. Using this systematic evaluation framework, five mainstream uncertainty estimation methods are compared on organ and tumor datasets, providing new insights into their clinical applicability. Extensive experimental analyses validated the practicality and effectiveness of the proposed metrics. This study offers clear guidance for selecting appropriate uncertainty estimation methods in clinical settings, facilitating their integration into clinical workflows and ultimately improving diagnostic efficiency and patient outcomes.
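
As a concrete example of one mainstream uncertainty estimation method such a framework evaluates, the sketch below implements Monte Carlo dropout: dropout stays active at inference, several stochastic forward passes are averaged, and per-pixel predictive entropy serves as the uncertainty map. The tiny network is purely illustrative; the paper's own metrics (UCM, ESCE, HDice) are not reproduced here.

```python
# Monte Carlo dropout for pixel-level segmentation uncertainty.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(0.5),              # kept active at test time (MC dropout)
    nn.Conv2d(8, 2, 3, padding=1),  # 2 classes: background / foreground
)

def mc_dropout_predict(model, x, passes=20):
    model.train()  # keep dropout stochastic during inference
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=1) for _ in range(passes)]
        ).mean(0)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(1)  # per pixel
    return probs, entropy

image = torch.randn(1, 1, 64, 64)
probs, uncertainty = mc_dropout_predict(net, image)
print(probs.shape, uncertainty.shape)  # (1, 2, 64, 64), (1, 64, 64)
```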

Classification of biomedical lung cancer images using optimized binary bat technique by constructing oblique decision trees.

Aswal S, Ahuja NJ, Mehra R

PubMed · May 29, 2025
Imbalanced data and the high-dimensional features of lung cancer CT images create significant challenges in clinical research. Improper classification of these images adds complexity to the classification process, compromising the extraction of biomedical traits and leaving lung cancer classification incomplete. Conventional approaches are only partially successful in dealing with the complex nature of high-dimensional and imbalanced biomedical data for lung cancer classification, so there is a crucial need for a robust classification technique that can address these major concerns. In this paper, we propose a novel structural formation of the oblique decision tree (ODT) using a swarm intelligence technique, namely the Binary Bat Swarm Algorithm (BBSA). BBSA enables a competitive recognition rate by making structural reforms while building the ODT. This integration improves the ability of the machine learning swarm classifier (MLSC) to handle high-dimensional features and imbalanced biomedical datasets. Adaptive feature selection using BBSA allows the exploration and selection of the relevant features the ODT requires for classification. The ODT classifier introduces flexibility in decision boundaries, enabling it to capture complex linkages in biomedical data. The proposed MLSC model effectively handles high-dimensional, imbalanced lung cancer datasets using the TCGA_LUSC_2016 and TCGA_LUAD_2016 modalities, achieving superior precision, recall, F-measure, and execution efficiency. Experiments conducted in Python show performance metrics that consistently demonstrate enhanced classification accuracy and reduced misclassification rates compared to existing methods. The MLSC is assessed with both qualitative and quantitative measurements, demonstrating its capability to classify instances more effectively than conventional state-of-the-art methods.
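
A simplified sketch of binary bat feature selection in the spirit of BBSA: bats move in a continuous velocity space, a sigmoid transfer function binarizes positions into feature masks, and fitness is the cross-validated accuracy of a decision tree (standing in for the ODT). Loudness and pulse-rate updates from the full bat algorithm are omitted; data and parameters are illustrative, not the authors' formulation.

```python
# Simplified binary bat feature selection with a decision-tree fitness.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=30, n_informative=6,
                           random_state=0)

def fitness(mask):
    """Cross-validated accuracy using only the selected feature columns."""
    if mask.sum() == 0:
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

n_bats, n_feats, iters = 10, X.shape[1], 30
pos = rng.integers(0, 2, (n_bats, n_feats)).astype(float)
vel = np.zeros((n_bats, n_feats))
fits = np.array([fitness(p) for p in pos])
best, best_fit = pos[fits.argmax()].copy(), fits.max()

for _ in range(iters):
    freq = rng.uniform(0, 2, (n_bats, 1))    # pulse frequency per bat
    vel += (pos - best) * freq               # move toward the global best
    prob = 1.0 / (1.0 + np.exp(-vel))        # sigmoid transfer function
    pos = (rng.random((n_bats, n_feats)) < prob).astype(float)
    fits = np.array([fitness(p) for p in pos])
    if fits.max() > best_fit:
        best, best_fit = pos[fits.argmax()].copy(), fits.max()

print(f"selected {int(best.sum())} features, CV accuracy {best_fit:.3f}")
```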