Cerebral ischemia detection using deep learning techniques.

Pastor-Vargas R, Antón-Munárriz C, Haut JM, Robles-Gómez A, Paoletti ME, Benítez-Andrades JA

PubMed · Dec 1, 2025
Cerebrovascular accident (CVA), commonly known as stroke, stands as a significant contributor to contemporary mortality and morbidity rates, often leading to lasting disabilities. Early identification is crucial in mitigating its impact and reducing mortality. Non-contrast computed tomography (NCCT) remains the primary diagnostic tool in stroke emergencies due to its speed, accessibility, and cost-effectiveness. NCCT enables the exclusion of hemorrhage and directs attention to ischemic causes resulting from arterial flow obstruction. Quantification of NCCT findings employs the Alberta Stroke Program Early Computed Tomography Score (ASPECTS), which evaluates affected brain structures. This study seeks to identify early alterations in NCCT density in patients with stroke symptoms using a binary classifier that distinguishes NCCT scans with and without stroke. To achieve this, 3D adaptations (VGG3D, ResNet3D, and DenseNet3D) of well-known deep learning architectures validated in the ImageNet challenges are applied to 3D images covering the entire brain volume. The training results of these networks are presented, with diverse parameters examined for optimal performance. The DenseNet3D network emerges as the most effective model, attaining a training set accuracy of 98% and a test set accuracy of 95%. The aim is to alert medical professionals to potential stroke cases in their early stages based on NCCT findings displaying altered density patterns.
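
As a rough illustration of the kind of volumetric classification setup described above (not the authors' code; the channel sizes, input resolution, and single-logit head are assumptions), a minimal PyTorch sketch:

```python
# Minimal sketch, assuming single-channel NCCT volumes resampled to
# 64x128x128 and a binary stroke/no-stroke label. Not the authors' model.
import torch
import torch.nn as nn

class Tiny3DClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),        # global average pool over D, H, W
        )
        self.classifier = nn.Linear(64, 1)  # single logit: stroke vs. no stroke

    def forward(self, x):                   # x: (batch, 1, D, H, W)
        z = self.features(x).flatten(1)
        return self.classifier(z)

model = Tiny3DClassifier()
logits = model(torch.randn(2, 1, 64, 128, 128))
probs = torch.sigmoid(logits)               # probability of stroke per scan
```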

Convolutional autoencoder-based deep learning for intracerebral hemorrhage classification using brain CT images.

Nageswara Rao B, Acharya UR, Tan RS, Dash P, Mohapatra M, Sabut S

PubMed · Dec 1, 2025
Intracerebral haemorrhage (ICH) is a common form of stroke that affects millions of people worldwide. The incidence is associated with a high rate of mortality and morbidity. Accurate diagnosis using brain non-contrast computed tomography (NCCT) is crucial for decision-making on potentially life-saving surgery. Limited access to expert readers and inter-observer variability impose barriers to timely and accurate ICH diagnosis. We proposed a hybrid deep learning model for automated ICH diagnosis using NCCT images, which comprises a convolutional autoencoder (CAE) to extract features with reduced data dimensionality and a dense neural network (DNN) for classification. To ensure that the model generalizes to new data, we trained it using tenfold cross-validation and holdout methods. Principal component analysis (PCA)-based dimensionality reduction and classification was systematically implemented for comparison. The study dataset comprises 1645 "ICH"-class and 1648 "Normal"-class labelled images (the latter from patients with non-hemorrhagic stroke), obtained from 108 patients who had undergone CT examination on a 64-slice computed tomography scanner at Kalinga Institute of Medical Sciences between 2020 and 2023. Our developed CAE-DNN hybrid model attained 99.84% accuracy, 99.69% sensitivity, 100% specificity, 100% precision, and a 99.84% F1-score, outperforming the comparator PCA-DNN model as well as published results in the literature. In addition, using saliency maps, our CAE-DNN model can highlight areas on the images that are closely correlated with regions of ICH manually contoured by expert readers. The CAE-DNN model demonstrates a proof-of-concept for accurate ICH detection and localization, which can potentially be implemented to prioritize treatment using NCCT images in clinical settings.
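
A minimal sketch of the CAE-then-classifier pattern (layer sizes and the two-phase training split are assumptions, not the paper's architecture): train an autoencoder for reconstruction, then reuse its encoder as a frozen feature extractor feeding a small dense classifier.

```python
# Sketch under assumed sizes: 256x256 single-channel CT slices, an 8-channel
# bottleneck, and a binary ICH/normal label. Not the paper's implementation.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 256 -> 128
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 128 -> 64
    nn.Conv2d(32, 8, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32 bottleneck
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(8, 32, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),
)

# Phase 1: reconstruction objective (MSE between the slice and its decoding).
x = torch.rand(4, 1, 256, 256)
recon_loss = nn.functional.mse_loss(decoder(encoder(x)), x)

# Phase 2: freeze the encoder, classify from the compressed features.
for p in encoder.parameters():
    p.requires_grad = False
classifier = nn.Sequential(nn.Flatten(), nn.Linear(8 * 32 * 32, 64),
                           nn.ReLU(), nn.Linear(64, 1))
logit = classifier(encoder(x))   # one logit per slice: ICH vs. normal
```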

SurgPointTransformer: transformer-based vertebra shape completion using RGB-D imaging.

Massalimova A, Liebmann F, Jecklin S, Carrillo F, Farshad M, Fürnstahl P

PubMed · Dec 1, 2025
State-of-the-art computer- and robot-assisted surgery systems rely on intraoperative imaging technologies such as computed tomography and fluoroscopy to provide detailed 3D visualizations of patient anatomy. However, these methods expose both patients and clinicians to ionizing radiation. This study introduces a radiation-free approach for 3D spine reconstruction using RGB-D data. Inspired by the "mental map" surgeons form during procedures, we present SurgPointTransformer, a shape completion method that reconstructs unexposed spinal regions from sparse surface observations. The method begins with a vertebra segmentation step that extracts vertebra-level point clouds for subsequent shape completion. SurgPointTransformer then uses an attention mechanism to learn the relationship between visible surface features and the complete spine structure. The approach is evaluated on an ex vivo dataset comprising nine samples, with CT-derived data used as ground truth. SurgPointTransformer significantly outperforms state-of-the-art baselines, achieving a Chamfer distance of 5.39 mm, an F-score of 0.85, an Earth mover's distance of 11.00 and a signal-to-noise ratio of 22.90 dB. These results demonstrate the potential of our method to reconstruct 3D vertebral shapes without exposing patients to ionizing radiation. This work contributes to the advancement of computer-aided and robot-assisted surgery by enhancing system perception and intelligence.
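
The Chamfer distance reported above has a simple closed form; one common convention (mean nearest-neighbour distance in both directions, brute force, with placeholder point clouds) looks like this:

```python
# Brute-force Chamfer distance between two point clouds (N x 3 and M x 3).
# Averages each point's distance to its nearest neighbour in the other cloud;
# O(N*M) memory, fine for small ex vivo samples, not for large scans.
# Note: conventions vary (sum vs. mean, squared vs. unsquared distances).
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    diff = a[:, None, :] - b[None, :, :]      # (N, M, 3) pairwise offsets
    dist = np.linalg.norm(diff, axis=-1)      # (N, M) Euclidean distances
    return dist.min(axis=1).mean() + dist.min(axis=0).mean()

a = np.random.rand(500, 3) * 50.0   # predicted vertebra surface (mm), placeholder
b = np.random.rand(600, 3) * 50.0   # CT-derived ground-truth surface (mm), placeholder
print(chamfer_distance(a, b))
```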

Breast tumor diagnosis via multimodal deep learning using ultrasound B-mode and Nakagami images.

Muhtadi S, Gallippi CM

PubMed · Nov 1, 2025
We propose and evaluate multimodal deep learning (DL) approaches that combine ultrasound (US) B-mode and Nakagami parametric images for breast tumor classification. It is hypothesized that integrating tissue brightness information from B-mode images with scattering properties from Nakagami images will enhance diagnostic performance compared with single-input approaches. An EfficientNetV2B0 network was used to develop multimodal DL frameworks that took as input (i) numerical two-dimensional (2D) maps or (ii) rendered red-green-blue (RGB) representations of both B-mode and Nakagami data. The diagnostic performance of these frameworks was compared with single-input counterparts using 831 US acquisitions from 264 patients. In addition, gradient-weighted class activation mapping was applied to evaluate diagnostically relevant information utilized by the different networks. The multimodal architectures demonstrated significantly higher area under the receiver operating characteristic curve (AUC) values (p < 0.05) than their monomodal counterparts, achieving an average improvement of 10.75%. In addition, the multimodal networks incorporated, on average, 15.70% more diagnostically relevant tissue information. Among the multimodal models, those using RGB representations as input outperformed those that utilized 2D numerical data maps (p < 0.05). The top-performing multimodal architecture achieved a mean AUC of 0.896 [95% confidence interval (CI): 0.813 to 0.959] when performance was assessed at the image level and 0.848 (95% CI: 0.755 to 0.903) when assessed at the lesion level. Incorporating B-mode and Nakagami information together in a multimodal DL framework improved classification outcomes and increased the amount of diagnostically relevant information accessed by networks, highlighting the potential for automating and standardizing US breast cancer diagnostics to enhance clinical outcomes.
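
A sketch of the late-fusion idea (small stand-in CNN branches rather than the paper's EfficientNetV2B0; input sizes are assumptions): one branch per modality, with features concatenated before the classification head.

```python
# Late-fusion multimodal classifier sketch. The tiny branches below are
# placeholders for a pretrained backbone such as EfficientNetV2B0.
import torch
import torch.nn as nn

def branch():
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

class LateFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.bmode_branch = branch()       # RGB-rendered B-mode input
        self.nakagami_branch = branch()    # RGB-rendered Nakagami input
        self.head = nn.Linear(32 + 32, 1)  # benign vs. malignant logit

    def forward(self, bmode, nakagami):
        f = torch.cat([self.bmode_branch(bmode),
                       self.nakagami_branch(nakagami)], dim=1)
        return self.head(f)

model = LateFusionNet()
logit = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
```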

Robust evaluation of tissue-specific radiomic features for classifying breast tissue density grades.

Dong V, Mankowski W, Silva Filho TM, McCarthy AM, Kontos D, Maidment ADA, Barufaldi B

PubMed · Nov 1, 2025
Breast cancer risk depends on an accurate assessment of breast density due to lesion masking. Although governed by standardized guidelines, radiologist assessment of breast density is still highly variable. Automated breast density assessment tools leverage deep learning but are limited by model robustness and interpretability. We assessed the robustness of a feature selection methodology (RFE-SHAP) for classifying breast density grades using tissue-specific radiomic features extracted from raw central projections of digital breast tomosynthesis screenings (n_I = 651, n_II = 100). RFE-SHAP leverages traditional and explainable AI methods to identify highly predictive and influential features. A simple logistic regression (LR) classifier was used to assess classification performance, and unsupervised clustering was employed to investigate the intrinsic separability of density grade classes. LR classifiers yielded cross-validated areas under the receiver operating characteristic curve (AUCs) per density grade of [A: 0.909 ± 0.032, B: 0.858 ± 0.027, C: 0.927 ± 0.013, D: 0.890 ± 0.089] and an AUC of 0.936 ± 0.016 for classifying patients as nondense or dense. In external validation, we observed per density grade AUCs of [A: 0.880, B: 0.779, C: 0.878, D: 0.673] and a nondense/dense AUC of 0.823. Unsupervised clustering highlighted the ability of these features to characterize different density grades. Our RFE-SHAP feature selection methodology for classifying breast tissue density generalized well to validation datasets after accounting for natural class imbalance, and the identified radiomic features properly captured the progression of density grades. Our results motivate future research into correlating selected radiomic features with clinical descriptors of breast tissue density.
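
The RFE + LR half of the pipeline is straightforward to sketch in scikit-learn (the feature matrix and labels below are synthetic placeholders; the paper's RFE-SHAP additionally ranks features by SHAP influence, which the shap package can compute for a fitted linear model):

```python
# Recursive feature elimination + logistic regression sketch on synthetic
# stand-in data; feature counts and the selection size are assumptions.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(651, 120))      # 120 candidate radiomic features (placeholder)
y = rng.integers(0, 2, size=651)     # e.g. nondense (0) vs. dense (1)

# RFE repeatedly drops the least useful features as judged by the
# magnitudes of the logistic-regression coefficients.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=15)
selector.fit(X, y)
X_sel = selector.transform(X)

clf = LogisticRegression(max_iter=1000)
aucs = cross_val_score(clf, X_sel, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {aucs.mean():.3f} ± {aucs.std():.3f}")
```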

TFKT V2: task-focused knowledge transfer from natural images for computed tomography perceptual image quality assessment.

Rifa KR, Ahamed MA, Zhang J, Imran A

PubMed · Sep 1, 2025
The accurate assessment of computed tomography (CT) image quality is crucial for ensuring diagnostic reliability while minimizing radiation dose. Radiologists' evaluations are time-consuming and labor-intensive. Existing automated approaches often require large CT datasets with predefined image quality assessment (IQA) scores, which frequently do not align well with clinical evaluations. We aim to develop a reference-free, automated method for CT IQA that closely reflects radiologists' evaluations, reducing the dependency on large annotated datasets. We propose Task-Focused Knowledge Transfer (TFKT), a deep learning-based IQA method leveraging knowledge transfer from task-similar natural image datasets. TFKT incorporates a hybrid convolutional neural network-transformer model, enabling accurate quality predictions by learning from natural image distortions with human-annotated mean opinion scores. The model is pre-trained on natural image datasets and fine-tuned on low-dose computed tomography perceptual image quality assessment data to ensure task-specific adaptability. Extensive evaluations demonstrate that the proposed TFKT method effectively predicts IQA scores aligned with radiologists' assessments on in-domain datasets and generalizes well to out-of-domain clinical pediatric CT exams. The model achieves robust performance without requiring high-dose reference images and can assess the quality of ~30 CT image slices per second. The proposed TFKT approach provides a scalable, accurate, and reference-free solution for CT IQA. The model bridges the gap between traditional and deep learning-based IQA, offering clinically relevant and computationally efficient assessments applicable to real-world clinical settings.
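
The pretrain-then-fine-tune pattern at the heart of TFKT can be sketched generically (the backbone, weight file, and learning rates below are assumptions, not the paper's model):

```python
# Transfer-learning sketch: regress a quality score, reuse weights pretrained
# on natural-image IQA, then fine-tune on CT with a smaller learning rate.
import torch
import torch.nn as nn

backbone = nn.Sequential(                 # stand-in feature extractor
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(64, 1)                   # predicted mean opinion score
model = nn.Sequential(backbone, head)

# Phase 1 would fit `model` on natural-image IQA data (MOS regression).
# Phase 2: reload those weights and fine-tune on CT quality labels.
# model.load_state_dict(torch.load("natural_iqa_pretrained.pt"))  # hypothetical file
optimizer = torch.optim.Adam([
    {"params": backbone.parameters(), "lr": 1e-5},  # gentle updates to features
    {"params": head.parameters(), "lr": 1e-4},      # faster updates to the head
])
ct_slices = torch.randn(8, 1, 256, 256)
target_scores = torch.rand(8, 1)
loss = nn.functional.mse_loss(model(ct_slices), target_scores)
loss.backward()
optimizer.step()
```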

Automated Whole-Brain Focal Cortical Dysplasia Detection Using MR Fingerprinting With Deep Learning.

Ding Z, Morris S, Hu S, Su TY, Choi JY, Blümcke I, Wang X, Sakaie K, Murakami H, Alexopoulos AV, Jones SE, Najm IM, Ma D, Wang ZI

PubMed · Jun 10, 2025
Focal cortical dysplasia (FCD) is a common pathology for pharmacoresistant focal epilepsy, yet detection of FCD on clinical MRI is challenging. Magnetic resonance fingerprinting (MRF) is a novel quantitative imaging technique providing fast and reliable tissue property measurements. The aim of this study was to develop an MRF-based deep-learning (DL) framework for whole-brain FCD detection. We included patients with pharmacoresistant focal epilepsy and pathologically/radiologically diagnosed FCD, as well as age-matched and sex-matched healthy controls (HCs). All participants underwent 3D whole-brain MRF and clinical MRI scans. T1, T2, gray matter (GM), and white matter (WM) tissue fraction maps were reconstructed from a dictionary-matching algorithm based on the MRF acquisition. A 3D ROI was manually created for each lesion. All MRF maps and lesion labels were registered to the Montreal Neurological Institute space. Mean and SD T1 and T2 maps were calculated voxel-wise across the HC data. T1 and T2 z-score maps for each patient were generated by subtracting the mean HC map and dividing by the SD HC map. MRF-based morphometric maps were produced in the same manner as in the morphometric analysis program (MAP), based on MRF GM and WM maps. A no-new U-Net (nnU-Net) model was trained using various input combinations, with performance evaluated through leave-one-patient-out cross-validation. We compared model performance using various input combinations from clinical MRI and MRF to assess the impact of different input types on model effectiveness. We included 40 patients with FCD (mean age 28.1 years, 47.5% female; 11 with FCD IIa, 14 with IIb, 12 with mMCD, 3 with MOGHE) and 67 HCs. The DL model with optimal performance used all MRF-based inputs, including MRF-synthesized T1w, T1z, and T2z maps; tissue fraction maps; and morphometric maps. The patient-level sensitivity was 80% with an average of 1.7 false positives (FPs) per patient. Sensitivity was consistent across subtypes, lobar locations, and lesional/nonlesional clinical MRI. Models using clinical images showed lower sensitivity and higher FPs. The MRF-DL model also outperformed the established MAP18 pipeline in sensitivity, FPs, and lesion label overlap. The MRF-DL framework demonstrated efficacy for whole-brain FCD detection. Multiparametric MRF features from a single scan offer promising inputs for developing a deep-learning tool capable of detecting subtle epileptic lesions.
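
The z-score maps described above have a direct closed form; a minimal NumPy sketch (array names and the MNI grid size are placeholder assumptions, with all maps presumed already registered):

```python
# Voxel-wise z-score map: (patient - HC mean) / HC standard deviation.
import numpy as np

hc_t1 = np.random.rand(20, 182, 218, 182)   # stacked healthy-control T1 maps (placeholder)
patient_t1 = np.random.rand(182, 218, 182)  # one patient's T1 map (placeholder)

mu = hc_t1.mean(axis=0)                     # voxel-wise HC mean
sigma = hc_t1.std(axis=0)                   # voxel-wise HC standard deviation
z_t1 = (patient_t1 - mu) / np.maximum(sigma, 1e-6)  # guard near-zero-SD voxels
```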

A ViTUNeT-based model using YOLOv8 for efficient LVNC diagnosis and automatic cleaning of dataset.

de Haro S, Bernabé G, García JM, González-Férez P

PubMed · Jun 4, 2025
Left ventricular non-compaction is a cardiac condition marked by excessive trabeculae in the left ventricle's inner wall. Although various methods exist to measure these structures, the medical community still lacks consensus on the best approach. Previously, we developed DL-LVTQ, a tool based on a UNet neural network, to quantify trabeculae in this region. In this study, we expand the dataset to include new patients with titin cardiomyopathy and healthy individuals with fewer trabeculae, requiring retraining of our models to enhance predictions. We also propose ViTUNeT, a neural network architecture combining U-Net and Vision Transformers to segment the left ventricle more accurately. Additionally, we train a YOLOv8 model to detect the ventricle and integrate it with the ViTUNeT model to focus on the region of interest. Results from ViTUNeT and YOLOv8 are similar to those of DL-LVTQ, suggesting that dataset quality limits further accuracy improvements. To test this, we analyze the MRI images and develop a method using two YOLOv8 models to identify and remove problematic images, leading to better results. Combining YOLOv8 with deep learning networks offers a promising approach for improving cardiac image analysis and segmentation.
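
A sketch of the detect-then-segment cascade (the weights file, image path, and the vitunet call are hypothetical placeholders): a YOLOv8 detector localizes the ventricle, the slice is cropped to the detected box, and the crop is handed to the segmentation network.

```python
# Detect-then-segment sketch. Weights, file names, and the `vitunet` call
# are hypothetical; only the ultralytics detection API is real.
import cv2
from ultralytics import YOLO

detector = YOLO("lv_detector.pt")      # hypothetical trained weights
results = detector("slice_042.png")    # run detection on one MRI slice
x1, y1, x2, y2 = results[0].boxes.xyxy[0].int().tolist()  # first box, pixels

img = cv2.imread("slice_042.png")
roi = img[y1:y2, x1:x2]                # crop to the detected ventricle
# mask = vitunet(preprocess(roi))      # hypothetical segmentation step
```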

Best Practices and Checklist for Reviewing Artificial Intelligence-Based Medical Imaging Papers: Classification.

Kline TL, Kitamura F, Warren D, Pan I, Korchi AM, Tenenholtz N, Moy L, Gichoya JW, Santos I, Moradi K, Avval AH, Alkhulaifat D, Blumer SL, Hwang MY, Git KA, Shroff A, Stember J, Walach E, Shih G, Langer SG

PubMed · Jun 4, 2025
Recent advances in Artificial Intelligence (AI) methodologies and their application to medical imaging have led to an explosion of related research programs utilizing AI to produce state-of-the-art classification performance. Ideally, research culminates in dissemination of the findings in peer-reviewed journals. To date, acceptance or rejection criteria are often subjective; however, reproducible science requires reproducible review. The Machine Learning Education Sub-Committee of the Society for Imaging Informatics in Medicine (SIIM) has identified a knowledge gap and a need to establish guidelines for reviewing these studies. The present work, written from the machine learning practitioner's standpoint, follows an approach similar to our previous paper on segmentation. In this series, the committee will address best practices to follow in AI-based studies and present the required sections with examples and discussion of the requirements needed to make the studies cohesive, reproducible, accurate, and self-contained. This entry in the series focuses on image classification. Elements like dataset curation, data pre-processing steps, reference standard identification, data partitioning, model architecture, and training are discussed. Sections are presented as in a typical manuscript. The content describes the information necessary to ensure the study is of sufficient quality for publication consideration and, compared with other checklists, provides a focused approach with application to image classification tasks. The goal of this series is to provide resources that not only help improve the review process for AI-based medical imaging papers but also facilitate a standard for the information that should be presented within all components of the research study.

A review on learning-based algorithms for tractography and human brain white matter tracts recognition.

Barati Shoorche A, Farnia P, Makkiabadi B, Leemans A

PubMed · Jun 4, 2025
Human brain fiber tractography using diffusion magnetic resonance imaging is a crucial stage in mapping brain white matter structures, pre-surgical planning, and extracting connectivity patterns. Accurate and reliable tractography, by providing detailed geometric information about the position of neural pathways, minimizes the risk of damage during neurosurgical procedures. Both tractography itself and its post-processing steps, such as bundle segmentation, are commonly used in these contexts. Many approaches have been put forward in the past decades, and recently multiple data-driven tractography algorithms and automatic segmentation pipelines have been proposed to address the limitations of traditional methods. Several of these recent methods are based on learning algorithms that have demonstrated promising results. In this study, in addition to introducing diffusion MRI datasets, we review learning-based algorithms such as conventional machine learning, deep learning, reinforcement learning, and dictionary learning methods that have been used for white matter tract, nerve, and pathway recognition as well as for creating whole-brain streamlines or whole-brain tractograms. The contributions are to discuss both tractography and tract recognition methods, to extend previous related reviews with the most recent methods (covering architectures as well as network details), to assess the efficiency of learning-based methods through a comprehensive comparison in this field, and to demonstrate the important role of learning-based methods in tractography.