Page 73 of 1351347 results

Automated Computer Vision Methods for Image Segmentation, Stereotactic Localization, and Functional Outcome Prediction of Basal Ganglia Hemorrhages.

Kashkoush A, Davison MA, Achey R, Gomes J, Rasmussen P, Kshettry VR, Moore N, Bain M

PubMed | May 30, 2025
Basal ganglia intracranial hemorrhage (bgICH) morphology is associated with postoperative functional outcomes. We hypothesized that bgICH spatial representation modeling could be automated for functional outcome prediction after minimally invasive surgical (MIS) evacuation. A training set of 678 computed tomography head and computed tomography angiography images from 63 patients was used to train key-point detection and instance segmentation convolutional neural network-based models for anatomic landmark identification and bgICH segmentation. Anatomic landmarks included the bilateral orbital rims at the globe's maximum diameter and the posterior-most aspect of the tentorial incisura, which were used to define a universal stereotactic reference frame across patients. The convolutional neural network models were tested using volumetric computed tomography head/computed tomography angiography scans from 45 patients who underwent MIS bgICH evacuation with modified Rankin Scale scores recorded within one year after surgery. bgICH volumes were highly correlated (R2 = 0.95, P < .001) between manual (median 39 mL) and automated (median 38 mL) segmentation methods, with an absolute median difference of 2 mL (IQR: 1-6 mL). Median localization accuracy (distance between automated and manually designated coordinate frames) was 4 mm (IQR: 3-6 mm). Landmark coordinates were highly correlated in the x- (medial-lateral), y- (anterior-posterior), and z- (rostral-caudal) axes for all 3 landmarks (R2 range = 0.95-0.99, P < .001 for all). Functional outcome (modified Rankin Scale 4-6) was predicted with similar performance by the automated (area under the receiver operating characteristic curve = 0.81, 95% CI: 0.67-0.94) and manually constructed (area under the receiver operating characteristic curve = 0.84, 95% CI: 0.72-0.96) spatial representation models (P = .173).
Computer vision models can accurately replicate manual bgICH segmentation and stereotactic localization, and can prognosticate functional outcomes after MIS bgICH evacuation.
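
The agreement statistics above (R2 and median absolute volume difference) reduce to a short computation; the volume values below are hypothetical stand-ins for the study's manual and automated measurements, shown only to illustrate how the two statistics are formed.

```python
# Sketch: agreement between manual and automated bgICH volume estimates,
# via R^2 and the median absolute difference, as reported in the abstract.
# All volume values are illustrative, not the study's data.

def r_squared(y_true, y_pred):
    """Coefficient of determination between two paired measurement series."""
    mean_y = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean_y) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1.0 - ss_res / ss_tot

def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2.0

manual =    [39.0, 22.5, 51.0, 30.2, 45.8]   # mL, hypothetical
automated = [38.0, 24.0, 49.5, 31.0, 44.2]   # mL, hypothetical

abs_diffs = [abs(m - a) for m, a in zip(manual, automated)]
print(round(r_squared(manual, automated), 3))
print(median(abs_diffs))
```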

The value of artificial intelligence in PSMA PET: a pathway to improved efficiency and results.

Dadgar H, Hong X, Karimzadeh R, Ibragimov B, Majidpour J, Arabi H, Al-Ibraheem A, Khalaf AN, Anwar FM, Marafi F, Haidar M, Jafari E, Zarei A, Assadi M

PubMed | May 30, 2025
This systematic review investigates the potential of artificial intelligence (AI) in improving the accuracy and efficiency of prostate-specific membrane antigen positron emission tomography (PSMA PET) scans for detecting metastatic prostate cancer. A comprehensive literature search was conducted across Medline, Embase, and Web of Science, adhering to PRISMA guidelines. Key search terms included "artificial intelligence," "machine learning," "deep learning," "prostate cancer," and "PSMA PET." The PICO framework guided the selection of studies focusing on AI's application in evaluating PSMA PET scans for staging lymph node and distant metastasis in prostate cancer patients. Inclusion criteria prioritized original English-language articles published up to October 2024, excluding studies using non-PSMA radiotracers, those analyzing only the CT component of PSMA PET-CT, studies focusing solely on intra-prostatic lesions, and non-original research articles. The review included 22 studies, with a mix of prospective and retrospective designs. AI algorithms employed included machine learning (ML), deep learning (DL), and convolutional neural networks (CNNs). The studies explored various applications of AI, including improving diagnostic accuracy, sensitivity, differentiation from benign lesions, standardization of reporting, and predicting treatment response. Results showed high sensitivity (62% to 97%) and accuracy (AUC up to 98%) in detecting metastatic disease, but also significant variability in positive predictive value (39.2% to 66.8%). AI demonstrates significant promise in enhancing PSMA PET scan analysis for metastatic prostate cancer, offering improved efficiency and potentially better diagnostic accuracy. 
However, the variability in performance and the "black box" nature of some algorithms highlight the need for larger prospective studies, improved model interpretability, and the continued involvement of experienced nuclear medicine physicians in interpreting AI-assisted results. AI should be considered a valuable adjunct, not a replacement, for expert clinical judgment.
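
The review's headline numbers (sensitivity 62% to 97%, PPV 39.2% to 66.8%) are standard confusion-matrix ratios. A minimal sketch with purely illustrative lesion counts:

```python
# Sketch: sensitivity vs. positive predictive value (PPV) from
# confusion-matrix counts. Counts are illustrative only, chosen to show
# how an AI reader can pair high sensitivity with modest PPV.

def sensitivity(tp, fn):
    """True-positive rate: share of metastatic lesions the model detects."""
    return tp / (tp + fn)

def ppv(tp, fp):
    """Share of flagged lesions that are truly metastatic."""
    return tp / (tp + fp)

tp, fp, fn = 90, 60, 10   # hypothetical lesion-level counts
print(sensitivity(tp, fn))
print(ppv(tp, fp))
```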

Comparative analysis of natural language processing methodologies for classifying computed tomography enterography reports in Crohn's disease patients.

Dai J, Kim MY, Sutton RT, Mitchell JR, Goebel R, Baumgart DC

PubMed | May 30, 2025
Imaging is crucial to assess disease extent, activity, and outcomes in inflammatory bowel disease (IBD). Artificial intelligence (AI) image interpretation requires automated exploitation of studies at scale as an initial step. Here we evaluate natural language processing methods for classifying Crohn's disease (CD) on CTE. From our population-representative IBD registry, CTE reports from a sample of CD patients (male: 44.6%, median age: 50, IQR: 37-60) and controls (n = 981 each) were extracted and split into training (n = 1568), development (n = 196), and testing (n = 198) datasets, each report averaging around 200 words, with balanced label counts. Predictive classification was evaluated with CNN, Bi-LSTM, BERT-110M, LLaMA-3.3-70B-Instruct, and DeepSeek-R1-Distill-LLaMA-70B. While our custom IBDBERT, fine-tuned on expert IBD knowledge (i.e., ACG, AGA, and ECCO guidelines), outperformed rule- and rationale-extraction-based classifiers in predictive performance (accuracy 88.6% with pre-tuning learning rate 0.00001, AUC 0.945), LLaMA, but not DeepSeek, achieved overall superior results (accuracy 91.2% vs. 88.9%, F1 0.907 vs. 0.874).
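
The accuracy and F1 comparisons above (e.g. LLaMA 91.2%/0.907 vs. DeepSeek 88.9%/0.874) use the standard binary-classification definitions, sketched here on illustrative labels rather than the registry's reports:

```python
# Sketch: accuracy and F1 for a binary report classifier (1 = CD report,
# 0 = control). Labels are illustrative, not registry data.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_binary(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 1, 1, 0, 0, 0]
print(accuracy(y_true, y_pred))
print(round(f1_binary(y_true, y_pred), 3))
```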

Three-dimensional automated segmentation of adolescent idiopathic scoliosis on computed tomography driven by deep learning: A retrospective study.

Ji Y, Mei X, Tan R, Zhang W, Ma Y, Peng Y, Zhang Y

PubMed | May 30, 2025
Accurate vertebrae segmentation is crucial for modern surgical technologies, and deep learning networks provide valuable tools for this task. This study explores the application of advanced deep learning-based methods for segmenting vertebrae in computed tomography (CT) images of adolescent idiopathic scoliosis (AIS) patients. In this study, we collected a dataset of 31 samples from AIS patients, covering a wide range of spinal regions from cervical to lumbar vertebrae. High-resolution CT images were obtained for each sample, forming the basis of our segmentation analysis. We utilized 2 popular neural networks, U-Net and Attention U-Net, to segment the vertebrae in these CT images. Segmentation performance was rigorously evaluated using 2 key metrics: the Dice Coefficient Score to measure overlap between segmented and ground truth regions, and the Hausdorff distance (HD) to assess boundary dissimilarity. Both networks performed well, with U-Net achieving an average Dice coefficient of 92.2 ± 2.4% and an HD of 9.80 ± 1.34 mm. Attention U-Net showed similar results, with a Dice coefficient of 92.3 ± 2.9% and an HD of 8.67 ± 3.38 mm. When applied to the challenging anatomy of AIS, our findings align with literature results from advanced 3D U-Nets on healthy spines. Although no significant overall difference was observed between the 2 networks (P > .05), Attention U-Net exhibited an improved Dice coefficient (91.5 ± 0.0% vs 88.8 ± 0.1%, P = .151) and a significantly better HD (9.04 ± 4.51 vs. 13.60 ± 2.26 mm, P = .027) in critical scoliosis sites (mid-thoracic region), suggesting enhanced suitability for complex anatomy. Our study indicates that U-Net neural networks are feasible and effective for automated vertebrae segmentation in AIS patients using clinical 3D CT images. Attention U-Net demonstrated improved performance in thoracic levels, which are primary sites of scoliosis and may be more suitable for challenging anatomical regions.
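
The study's two evaluation metrics, the Dice coefficient and the Hausdorff distance, can be sketched for small voxel sets; the toy masks below are illustrative shapes, not CT segmentations:

```python
# Sketch: Dice overlap and symmetric Hausdorff distance on 2D voxel sets.
# Toy coordinates only; real use operates on full 3D segmentation masks.

def dice(a, b):
    """Dice overlap between two sets of voxel coordinates."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (Euclidean)."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    def directed(xs, ys):
        return max(min(d(x, y) for y in ys) for x in xs)
    return max(directed(a, b), directed(b, a))

pred  = [(0, 0), (0, 1), (1, 0), (1, 1)]
truth = [(0, 0), (0, 1), (1, 0), (2, 0)]
print(dice(pred, truth))      # 3 shared voxels out of 4 + 4
print(hausdorff(pred, truth))
```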

Mammogram mastery: Breast cancer image classification using an ensemble of deep learning with explainable artificial intelligence.

Kumar Mondal P, Jahan MK, Byeon H

PubMed | May 30, 2025
Breast cancer is a serious public health problem and one of the leading causes of cancer-related deaths in women worldwide. Early detection of the disease can significantly increase the chances of survival. However, manual analysis of mammogram images is complex and time-consuming and can lead to disagreements among experts. For this reason, automated diagnostic systems can play a significant role in increasing the accuracy and efficiency of diagnosis. In this study, we present an effective deep learning (DL) method that classifies mammogram images into cancer and noncancer categories using a collected dataset. Our model is based on a pretrained Inception V3 architecture. First, we ran 5-fold cross-validation tests on the fully trained and fine-tuned Inception V3 model. Next, we applied a combined method based on likelihood and mean, in which the fine-tuned Inception V3 model demonstrated superior classification performance. Our DL model achieved 99% accuracy and a 99% F1 score. In addition, explainable AI techniques were used to enhance the transparency of the classification process. The fine-tuned Inception V3 model demonstrated the highest classification performance, confirming its effectiveness for automatic breast cancer detection. The experimental results clearly indicate that our proposed DL-based method for breast cancer image classification is highly effective, especially for image-based diagnostics, and they highlight the substantial potential of AI-based solutions to increase the accuracy and reliability of breast cancer diagnosis.
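
A minimal sketch of the "combined method based on likelihood and mean," assuming it averages per-fold class probabilities before thresholding; the fold outputs below are illustrative, not model predictions:

```python
# Sketch: mean-probability ensembling across cross-validation folds.
# Assumes the combined method averages P(cancer) over fold models and
# thresholds the mean -- an interpretation for illustration only.

def mean_prob_ensemble(fold_probs, threshold=0.5):
    """Average P(cancer) across folds, then threshold for the final label."""
    avg = sum(fold_probs) / len(fold_probs)
    return ("cancer" if avg >= threshold else "noncancer"), avg

# Hypothetical P(cancer) from five fold models for one mammogram.
per_fold = [0.91, 0.87, 0.95, 0.89, 0.93]
label, avg = mean_prob_ensemble(per_fold)
print(label, round(avg, 2))
```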

CCTA-derived coronary plaque burden offers enhanced prognostic value over CAC scoring in suspected CAD patients.

Dahdal J, Jukema RA, Maaniitty T, Nurmohamed NS, Raijmakers PG, Hoek R, Driessen RS, Twisk JWR, Bär S, Planken RN, van Royen N, Nijveldt R, Bax JJ, Saraste A, van Rosendael AR, Knaapen P, Knuuti J, Danad I

PubMed | May 30, 2025
To assess the prognostic utility of coronary artery calcium (CAC) scoring and coronary computed tomography angiography (CCTA)-derived quantitative plaque metrics for predicting adverse cardiovascular outcomes. The study enrolled 2404 patients with suspected coronary artery disease (CAD) but without a prior history of CAD. All participants underwent CAC scoring and CCTA, with plaque metrics quantified using an artificial intelligence (AI)-based tool (Cleerly, Inc). Percent atheroma volume (PAV) and non-calcified plaque volume percentage (NCPV%), reflecting total plaque burden and the proportion of non-calcified plaque volume normalized to vessel volume, respectively, were evaluated. The primary endpoint was a composite of all-cause mortality and non-fatal myocardial infarction (MI). Cox proportional hazards models, adjusted for clinical risk factors and early revascularization, were employed for analysis. During a median follow-up of 7.0 years, 208 patients (8.7%) experienced the primary endpoint, including 73 cases of MI (3%). The model incorporating PAV demonstrated superior discriminatory power for the composite endpoint (AUC = 0.729) compared to CAC scoring (AUC = 0.706, P = 0.016). For MI prediction, PAV (AUC = 0.791) significantly outperformed CAC (AUC = 0.699, P < 0.001), with NCPV% showing the highest prognostic accuracy (AUC = 0.814, P < 0.001). AI-driven assessment of coronary plaque burden enhances prognostic accuracy for future adverse cardiovascular events, highlighting the critical role of comprehensive plaque characterization in refining risk stratification strategies.
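
The AUC values reported here have a rank interpretation: the probability that a randomly chosen patient who reaches the endpoint scores higher than one who does not. A minimal Mann-Whitney sketch with illustrative plaque-burden scores:

```python
# Sketch: rank-based (Mann-Whitney) estimate of the ROC AUC.
# Scores are illustrative, not PAV or CAC values from the study.

def auc(scores_pos, scores_neg):
    """P(random event case outscores random non-event case); ties count half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

events     = [0.8, 0.6, 0.9]        # hypothetical scores, endpoint reached
non_events = [0.3, 0.5, 0.7, 0.2]   # hypothetical scores, no endpoint
print(auc(events, non_events))
```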

Multimodal AI framework for lung cancer diagnosis: Integrating CNN and ANN models for imaging and clinical data analysis.

Oncu E, Ciftci F

PubMed | May 30, 2025
Lung cancer remains a leading cause of cancer-related mortality worldwide, emphasizing the critical need for accurate and early diagnostic solutions. This study introduces a novel multimodal artificial intelligence (AI) framework that integrates Convolutional Neural Networks (CNNs) and Artificial Neural Networks (ANNs) to improve lung cancer classification and severity assessment. The CNN model, trained on 1019 preprocessed CT images, classified lung tissue into four histological categories (adenocarcinoma, large cell carcinoma, squamous cell carcinoma, and normal) with a weighted accuracy of 92%. Interpretability is enhanced using Gradient-weighted Class Activation Mapping (Grad-CAM), which highlights the salient image regions influencing the model's predictions. In parallel, an ANN trained on clinical data from 999 patients, spanning 24 key features such as demographic, symptomatic, and genetic factors, achieves 99% accuracy in predicting cancer severity (low, medium, high). SHapley Additive exPlanations (SHAP) are employed to provide both global and local interpretability of the ANN model, enabling transparent decision-making. Both models were rigorously validated using k-fold cross-validation to ensure robustness and reduce overfitting. This hybrid approach effectively combines spatial imaging data and structured clinical information, demonstrating strong predictive performance and offering an interpretable, comprehensive AI-based solution for lung cancer diagnosis and management.
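
The paper trains the CNN and ANN separately; one common way such modality-specific outputs are combined downstream is weighted late fusion of their probabilities. This fusion scheme, and the probabilities and weight below, are assumptions for illustration, not the paper's method:

```python
# Sketch: late fusion of a CNN image probability and an ANN clinical
# probability via a weighted average. The weight and inputs are
# hypothetical; the paper itself reports the two models separately.

def late_fusion(p_image, p_clinical, w_image=0.6):
    """Weighted average of per-modality malignancy probabilities."""
    return w_image * p_image + (1 - w_image) * p_clinical

p = late_fusion(p_image=0.82, p_clinical=0.90)
print(round(p, 3))
```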

Multiclass ensemble framework for enhanced prostate gland segmentation: Integrating Self-ONN decoders with EfficientNet.

Islam Sumon MS, Chowdhury MEH, Bhuiyan EH, Rahman MS, Khan MM, Al-Hashimi I, Mushtak A, Zoghoul SB

PubMed | May 30, 2025
Digital pathology relies on the morphological architecture of prostate glands to recognize cancerous tissue. Prostate cancer (PCa) originates in the walnut-shaped prostate gland of the male reproductive system. Deep learning (DL) pipelines can assist in identifying these regions with advanced segmentation techniques, which are effective in diagnosing and treating prostate diseases. This facilitates early detection, targeted biopsy, and accurate treatment planning, ensuring consistent, reproducible results while minimizing human error. Automated segmentation techniques trained on MRI datasets can aid in monitoring disease progression and support clinical care by enabling patient-specific models for personalized medicine. In this study, we present multiclass segmentation models designed to localize the prostate gland and its zonal regions, specifically the peripheral zone (PZ), the transition zone (TZ), and the whole gland, by combining EfficientNetB4 encoders with Self-organized Operational Neural Network (Self-ONN)-based decoders. Traditional convolutional neural networks (CNNs) rely on linear neuron models, which limit their ability to capture the complex dynamics of biological neural systems. In contrast, Operational Neural Networks (ONNs), particularly Self-ONNs, address this limitation by incorporating nonlinear and adaptive operations at the neuron level. We evaluated various encoder-decoder configurations and found that the combination of an EfficientNet-based encoder with a Self-ONN-based decoder yielded the best performance. To further enhance segmentation accuracy, we employed the STAPLE method to ensemble the top three performing models. Our approach was tested on the large-scale, recently updated PI-CAI Challenge dataset using 5-fold cross-validation, achieving Dice scores of 95.33% for the whole gland and 92.32% for the combined PZ and TZ regions.
These advanced segmentation techniques significantly improve the quality of PCa diagnosis and treatment, contributing to better patient care and outcomes.
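
Full STAPLE estimates per-rater performance weights with an EM algorithm; its simplest special case, an equal-weight majority vote over the ensembled masks, can be sketched on toy binary masks (illustrative, not PI-CAI data):

```python
# Sketch: per-voxel majority vote across three segmentation models --
# the equal-weight limiting case of STAPLE, which additionally learns
# sensitivity/specificity weights per model via EM.

def majority_vote(masks):
    """Per-voxel majority across raters; masks are equal-length 0/1 lists."""
    n = len(masks)
    return [1 if sum(col) * 2 > n else 0 for col in zip(*masks)]

m1 = [1, 1, 0, 1, 0]   # toy flattened masks from three models
m2 = [1, 0, 0, 1, 1]
m3 = [1, 1, 1, 0, 0]
print(majority_vote([m1, m2, m3]))
```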

Real-time brain tumor detection in intraoperative ultrasound: From model training to deployment in the operating room.

Cepeda S, Esteban-Sinovas O, Romero R, Singh V, Shett P, Moiyadi A, Zemmoura I, Giammalva GR, Del Bene M, Barbotti A, DiMeco F, West TR, Nahed BV, Arrese I, Hornero R, Sarabia R

PubMed | May 30, 2025
Intraoperative ultrasound (ioUS) is a valuable tool in brain tumor surgery due to its versatility, affordability, and seamless integration into the surgical workflow. However, its adoption remains limited, primarily because of the challenges associated with image interpretation and the steep learning curve required for effective use. This study aimed to enhance the interpretability of ioUS images by developing a real-time brain tumor detection system deployable in the operating room. We collected 2D ioUS images from the BraTioUS and ReMIND datasets, annotated with expert-refined tumor labels. Using the YOLO11 architecture and its variants, we trained object detection models to identify brain tumors. The dataset included 1732 images from 192 patients, divided into training, validation, and test sets. Data augmentation expanded the training set to 11,570 images. In the test dataset, YOLO11s achieved the best balance of precision and computational efficiency, with a mAP@50 of 0.95, mAP@50-95 of 0.65, and a processing speed of 34.16 frames per second. The proposed solution was prospectively validated in a cohort of 20 consecutively operated patients diagnosed with brain tumors. Neurosurgeons confirmed its seamless integration into the surgical workflow, with real-time predictions accurately delineating tumor regions. These findings highlight the potential of real-time object detection algorithms to enhance ioUS-guided brain tumor surgery, addressing key challenges in interpretation and providing a foundation for future development of computer vision-based tools for neuro-oncological surgery.
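
The mAP@50 figure above counts a detection as correct when its predicted box overlaps the ground truth at IoU of at least 0.5; box-level IoU itself is straightforward to compute (the boxes below are illustrative, not ioUS annotations):

```python
# Sketch: intersection-over-union for axis-aligned boxes (x1, y1, x2, y2),
# the overlap criterion underlying the mAP@50 detection metric.

def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred_box  = (10, 10, 50, 50)   # hypothetical predicted tumor box
truth_box = (20, 20, 60, 60)   # hypothetical expert annotation
print(round(iou(pred_box, truth_box), 3))
```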

Using Deep learning to Predict Cardiovascular Magnetic Resonance Findings from Echocardiography Videos.

Sahashi Y, Vukadinovic M, Duffy G, Li D, Cheng S, Berman DS, Ouyang D, Kwan AC

PubMed | May 30, 2025
Echocardiography is the most common modality for assessing cardiac structure and function. While cardiac magnetic resonance (CMR) imaging is less accessible, it can provide unique tissue characterization, including late gadolinium enhancement (LGE), T1 and T2 mapping, and extracellular volume (ECV), which are associated with tissue fibrosis, infiltration, and inflammation. Deep learning has been shown to uncover findings not recognized by clinicians; however, it is unknown whether CMR-based tissue characteristics can be derived from echocardiography videos using deep learning. We aimed to assess the performance of a deep learning model applied to echocardiography for detecting CMR-specific parameters, including LGE presence and abnormal T1, T2, or ECV. In a retrospective single-center study, adult patients with CMR and echocardiography studies within 30 days were included. A video-based convolutional neural network was trained on echocardiography videos to predict CMR-derived labels, including LGE presence and abnormal T1, T2, or ECV, across echocardiography views. The model was also trained to predict the presence or absence of wall motion abnormality (WMA) as a positive control for model function. Model performance was evaluated on a held-out test dataset not used for training. The study population included 1,453 adult patients (mean age 56±18 years, 42% female) with 2,556 paired echocardiography studies occurring at a median of 2 days after CMR (interquartile range 2 days prior to 6 days after). The model had high predictive capability for presence of WMA (AUC 0.873 [95% CI 0.816-0.922]), which served as the positive control. However, the model was unable to reliably detect the presence of LGE (AUC 0.699 [0.613-0.780]), abnormal native T1 (AUC 0.614 [0.500-0.715]), abnormal T2 (AUC 0.553 [0.420-0.692]), or abnormal ECV (AUC 0.564 [0.455-0.691]).
Deep learning applied to echocardiography accurately identified CMR-based WMA, but was unable to predict tissue characteristics, suggesting that signal for these tissue characteristics may not be present within ultrasound videos, and that the use of CMR for tissue characterization remains essential within cardiology.