Page 1 of 72715 results

Cerebral ischemia detection using deep learning techniques.

Pastor-Vargas R, Antón-Munárriz C, Haut JM, Robles-Gómez A, Paoletti ME, Benítez-Andrades JA

PubMed · Dec 1 2025
Cerebrovascular accident (CVA), commonly known as stroke, is a significant contributor to contemporary mortality and morbidity rates, often leading to lasting disabilities. Early identification is crucial in mitigating its impact and reducing mortality. Non-contrast computed tomography (NCCT) remains the primary diagnostic tool in stroke emergencies due to its speed, accessibility, and cost-effectiveness. NCCT enables the exclusion of hemorrhage and directs attention to ischemic causes resulting from arterial flow obstruction. Quantification of NCCT findings employs the Alberta Stroke Program Early Computed Tomography Score (ASPECTS), which evaluates affected brain structures. This study seeks to identify early alterations in NCCT density in patients with stroke symptoms using a binary classifier distinguishing NCCT scans with and without stroke. To achieve this, several well-known deep learning architectures, namely VGG3D, ResNet3D, and DenseNet3D, validated in the ImageNet challenges, are implemented with 3D images covering the entire brain volume. Training results for these networks are presented, with diverse parameters examined for optimal performance. The DenseNet3D network emerges as the most effective model, attaining a training set accuracy of 98% and a test set accuracy of 95%. The aim is to alert medical professionals to potential stroke cases in their early stages based on NCCT findings displaying altered density patterns.
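The 3D architectures named above (VGG3D, ResNet3D, DenseNet3D) all build on volumetric convolution over the brain volume. As an illustrative sketch only, not the authors' code, a naive single-channel 3D convolution in NumPy looks like:

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive single-channel 3D convolution with 'valid' padding,
    the core operation shared by 3D CNN layers."""
    d, h, w = kernel.shape
    D, H, W = volume.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                # Weighted sum of the local 3D neighbourhood.
                out[z, y, x] = np.sum(volume[z:z + d, y:y + h, x:x + w] * kernel)
    return out

# Toy "volume": a constant 4x4x4 block filtered by a 3x3x3 mean kernel.
vol = np.ones((4, 4, 4))
feat = conv3d_valid(vol, np.ones((3, 3, 3)) / 27.0)
```

Real implementations use many channels, GPU kernels, and learned weights; this loop version only shows the sliding-window arithmetic.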

Convolutional autoencoder-based deep learning for intracerebral hemorrhage classification using brain CT images.

Nageswara Rao B, Acharya UR, Tan RS, Dash P, Mohapatra M, Sabut S

PubMed · Dec 1 2025
Intracerebral haemorrhage (ICH) is a common form of stroke that affects millions of people worldwide and is associated with high rates of mortality and morbidity. Accurate diagnosis using brain non-contrast computed tomography (NCCT) is crucial for decision-making on potentially life-saving surgery. Limited access to expert readers and inter-observer variability impose barriers to timely and accurate ICH diagnosis. We proposed a hybrid deep learning model for automated ICH diagnosis using NCCT images, which comprises a convolutional autoencoder (CAE) to extract features with reduced data dimensionality and a dense neural network (DNN) for classification. To ensure that the model generalizes to new data, we trained it using tenfold cross-validation and holdout methods. Principal component analysis (PCA)-based dimensionality reduction and classification was systematically implemented for comparison. The study dataset comprises 1645 labelled images in the "ICH" class and 1648 in the "Normal" class (the latter from patients with non-hemorrhagic stroke), obtained from 108 patients who had undergone CT examination on a 64-slice computed tomography scanner at Kalinga Institute of Medical Sciences between 2020 and 2023. Our CAE-DNN hybrid model attained 99.84% accuracy, 99.69% sensitivity, 100% specificity, 100% precision, and a 99.84% F1-score, outperforming both the comparator PCA-DNN model and published results in the literature. In addition, using saliency maps, our CAE-DNN model can highlight areas on the images that correlate closely with regions of ICH manually contoured by expert readers. The CAE-DNN model demonstrates proof-of-concept for accurate ICH detection and localization, which can potentially be implemented to prioritize treatment using NCCT images in clinical settings.
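The comparator pipeline above reduces feature dimensionality with PCA before the DNN classifier. A minimal NumPy sketch of that reduction step (illustrative dimensions; the paper's features and data are not reproduced here):

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components
    (SVD of the mean-centered data matrix)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # (n_samples, k) reduced features

# Toy stand-in for flattened image features: 100 samples, 50 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
Z = pca_reduce(X, 10)
```

A CAE replaces this fixed linear projection with a learned nonlinear encoder, which is the distinction the paper's comparison targets.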

SurgPointTransformer: transformer-based vertebra shape completion using RGB-D imaging.

Massalimova A, Liebmann F, Jecklin S, Carrillo F, Farshad M, Fürnstahl P

PubMed · Dec 1 2025
State-of-the-art computer- and robot-assisted surgery systems rely on intraoperative imaging technologies such as computed tomography and fluoroscopy to provide detailed 3D visualizations of patient anatomy. However, these methods expose both patients and clinicians to ionizing radiation. This study introduces a radiation-free approach for 3D spine reconstruction using RGB-D data. Inspired by the "mental map" surgeons form during procedures, we present SurgPointTransformer, a shape completion method that reconstructs unexposed spinal regions from sparse surface observations. The method begins with a vertebra segmentation step that extracts vertebra-level point clouds for subsequent shape completion. SurgPointTransformer then uses an attention mechanism to learn the relationship between visible surface features and the complete spine structure. The approach is evaluated on an <i>ex vivo</i> dataset comprising nine samples, with CT-derived data used as ground truth. SurgPointTransformer significantly outperforms state-of-the-art baselines, achieving a Chamfer distance of 5.39 mm, an F-score of 0.85, an Earth mover's distance of 11.00 and a signal-to-noise ratio of 22.90 dB. These results demonstrate the potential of our method to reconstruct 3D vertebral shapes without exposing patients to ionizing radiation. This work contributes to the advancement of computer-aided and robot-assisted surgery by enhancing system perception and intelligence.
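Chamfer distance, the headline metric reported above, measures how well a completed point cloud matches the CT-derived ground truth. A small NumPy sketch (brute-force pairwise distances; practical pipelines use KD-trees for large clouds):

```python
import numpy as np

def chamfer_distance(A, B):
    """Symmetric Chamfer distance between point sets A (n, 3) and B (m, 3):
    mean nearest-neighbour distance in both directions."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (n, m) pairwise
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

# Toy clouds: identical sets give a distance of exactly zero.
A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
B = A.copy()
```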

Application of Artificial Intelligence in rheumatic disease classification: an example of ankylosing spondylitis severity inspection model.

Chen CW, Tsai HH, Yeh CY, Yang CK, Tsou HK, Leong PY, Wei JC

PubMed · Dec 1 2025
The development of an Artificial Intelligence (AI)-based severity inspection model for ankylosing spondylitis (AS) could support health professionals in rapidly assessing the severity of the disease, enhance proficiency, and reduce the demands on human resources. This paper aims to develop an AI-based severity inspection model for AS using patients' X-ray images and the modified Stoke Ankylosing Spondylitis Spinal Score (mSASSS). The model is developed through data preprocessing followed by model building, testing, and validation. The training data were preprocessed by inviting three experts to check the X-ray images of 222 patients against the gold standard. The model is then developed in two stages: keypoint detection and mSASSS evaluation. The resulting two-stage AI-based severity inspection model automatically detects spine points and evaluates mSASSS scores. Finally, the outputs of the developed model were compared with experts' assessments to analyse the accuracy of the model. The study was conducted in accordance with the ethical principles outlined in the Declaration of Helsinki. The first-stage spine point detection achieved a mean error distance of 1.57 micrometres from the ground truth, and the second-stage classification network reached a mean accuracy of 0.81. The model correctly identifies 97.4% of patches belonging to mSASSS score 3, while patches belonging to score 0 may still be misclassified as score 1 or 2. The automatic severity inspection model for AS developed in this paper is accurate and can support health professionals in rapidly assessing the severity of AS, enhancing assessment proficiency, and reducing the demands on human resources.
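The first-stage accuracy above is reported as a mean error distance between detected and ground-truth spine points. A hedged sketch of that metric (coordinates and scale are purely illustrative):

```python
import numpy as np

def mean_error_distance(pred, gt):
    """Mean Euclidean distance between predicted and ground-truth keypoints,
    both given as (num_points, 2) coordinate arrays."""
    return float(np.linalg.norm(pred - gt, axis=1).mean())

# Toy example: one perfect detection and one off by a 3-4-5 triangle.
pred = np.array([[0.0, 0.0], [3.0, 4.0]])
gt = np.array([[0.0, 0.0], [0.0, 0.0]])
err = mean_error_distance(pred, gt)
```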

TFKT V2: task-focused knowledge transfer from natural images for computed tomography perceptual image quality assessment.

Rifa KR, Ahamed MA, Zhang J, Imran A

PubMed · Sep 1 2025
The accurate assessment of computed tomography (CT) image quality is crucial for ensuring diagnostic reliability while minimizing radiation dose. Radiologists' evaluations are time-consuming and labor-intensive. Existing automated approaches often require large CT datasets with predefined image quality assessment (IQA) scores, which often do not align well with clinical evaluations. We aim to develop a reference-free, automated method for CT IQA that closely reflects radiologists' evaluations, reducing the dependency on large annotated datasets. We propose Task-Focused Knowledge Transfer (TFKT), a deep learning-based IQA method leveraging knowledge transfer from task-similar natural image datasets. TFKT incorporates a hybrid convolutional neural network-transformer model, enabling accurate quality predictions by learning from natural image distortions with human-annotated mean opinion scores. The model is pre-trained on natural image datasets and fine-tuned on low-dose CT perceptual image quality assessment data to ensure task-specific adaptability. Extensive evaluations demonstrate that the proposed TFKT method effectively predicts IQA scores aligned with radiologists' assessments on in-domain datasets and generalizes well to out-of-domain clinical pediatric CT exams. The model achieves robust performance without requiring high-dose reference images, and can assess the quality of ~30 CT image slices per second. The proposed TFKT approach provides a scalable, accurate, and reference-free solution for CT IQA, bridging the gap between traditional and deep learning-based IQA and offering clinically relevant, computationally efficient assessments applicable to real-world clinical settings.
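Alignment between predicted IQA scores and radiologists' assessments is conventionally quantified with rank correlation. A minimal Spearman correlation sketch (assumes no tied scores; the paper's exact evaluation protocol may differ):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation between two score lists (no-ties case):
    Pearson correlation of the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Toy scores: model predictions vs. hypothetical radiologist mean opinion scores.
model_scores = [0.2, 0.5, 0.7, 0.9]
radiologist_mos = [1.0, 2.0, 3.0, 4.0]
```

Libraries such as SciPy provide a tie-aware version (`scipy.stats.spearmanr`) for real evaluations.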

Ultrasound Displacement Tracking Techniques for Post-Stroke Myofascial Shear Strain Quantification.

Ashikuzzaman M, Huang J, Bonwit S, Etemadimanesh A, Ghasemi A, Debs P, Nickl R, Enslein J, Fayad LM, Raghavan P, Bell MAL

PubMed · Jun 24 2025
Ultrasound shear strain is a potential biomarker of myofascial dysfunction. However, the quality of estimated shear strains can be impacted by differences in ultrasound displacement tracking techniques, potentially altering clinical conclusions surrounding myofascial pain. This work assesses the reliability of four displacement estimation algorithms under a novel clinical hypothesis that the shear strain between muscles on a stroke-affected (paretic) shoulder with myofascial pain is lower than that on the non-paretic side of the same patient. After initial validation with simulations, four approaches were evaluated with in vivo data acquired from ten research participants with myofascial post-stroke shoulder pain: (1) Search is a common window-based method that determines displacements by searching for maximum normalized cross-correlations within windowed data, whereas (2) OVERWIND-Search, (3) SOUL-Search, and (4) L1-SOUL-Search fine-tune the Search initial estimates by optimizing cost functions comprising data and regularization terms, utilizing L1-norm-based first-order regularization, L2-norm-based first- and second-order regularization, and L1-norm-based first- and second-order regularization, respectively. SOUL-Search and L1-SOUL-Search most accurately and reliably estimate shear strain relative to our clinical hypothesis, when validated with visual inspection of ultrasound cine loops and quantitative T1ρ magnetic resonance imaging. In addition, L1-SOUL-Search produced the most reliable displacement tracking performance by generating lateral displacement images with smooth displacement gradients (measured as the mean and variance of displacement derivatives) and sharp edges (which enable distinction of shoulder muscle layers). Among the four investigated methods, L1-SOUL-Search emerged as the most suitable option for investigating myofascial pain and dysfunction, despite the drawback of slow runtimes, which can potentially be resolved with a deep learning solution. This work advances musculoskeletal health, ultrasound shear strain imaging, and related applications by establishing the foundation required to develop reliable image-based biomarkers for accurate diagnoses and treatments.
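The baseline "Search" method above matches windows by maximizing normalized cross-correlation. A 1-D NumPy analogue of that window search (integer shifts only; the actual estimators operate on 2-D RF data with subsample refinement):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two 1-D windows."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def search_displacement(pre, post, start, wlen, max_shift):
    """Integer shift maximizing NCC between a window of `pre` and
    candidate windows of `post` -- a 1-D analogue of window-based Search."""
    win = pre[start:start + wlen]
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        score = ncc(win, post[start + s:start + s + wlen])
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# Toy signal pair: the "post" frame is the "pre" frame shifted by 3 samples.
rng = np.random.default_rng(1)
sig = rng.normal(size=200)
shifted = np.roll(sig, 3)
```

The regularized variants (OVERWIND, SOUL, L1-SOUL) refine such initial estimates by penalizing non-smooth displacement fields.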

Refining cardiac segmentation from MRI volumes with CT labels for fine anatomy of the ascending aorta.

Oda H, Wakamori M, Akita T

PubMed · Jun 24 2025
Magnetic resonance imaging (MRI) is time-consuming, making it challenging to capture clear images of moving organs such as cardiac structures, including complex anatomy like the Valsalva sinus. This study evaluates a computed tomography (CT)-guided refinement approach for cardiac segmentation from MRI volumes, focused on preserving the detailed shape of the Valsalva sinus. Owing to the low spatial contrast around the Valsalva sinus in MRI, labels from separate CT volumes are used to refine the segmentation. Deep learning techniques are employed to obtain an initial segmentation from MRI volumes, followed by detection of the ascending aorta's proximal point. The detected proximal point is then used to select the most similar label from CT volumes of other patients, and non-rigid registration is applied to refine the segmentation. Experiments conducted on 20 MRI volumes with labels from 20 CT volumes showed a slight decrease in quantitative segmentation accuracy: the CT-guided method achieved precision of 0.908, recall of 0.746, and a Dice score of 0.804 for the ascending aorta, compared with 0.903, 0.770, and 0.816, respectively, for nnU-Net alone. Although some outputs showed bulge-like structures near the Valsalva sinus, an improvement in quantitative segmentation accuracy could not be validated.
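The precision, recall, and Dice scores reported above compare binary segmentation masks voxel-wise. A compact sketch of those three metrics:

```python
import numpy as np

def seg_metrics(pred, gt):
    """Precision, recall, and Dice score for boolean segmentation masks."""
    tp = np.logical_and(pred, gt).sum()   # true positives
    fp = np.logical_and(pred, ~gt).sum()  # false positives
    fn = np.logical_and(~pred, gt).sum()  # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    return float(precision), float(recall), float(dice)

# Toy 1-D masks: one true positive, one false positive, one false negative.
pred = np.array([1, 1, 0, 0], dtype=bool)
gt = np.array([1, 0, 1, 0], dtype=bool)
p, r, d = seg_metrics(pred, gt)
```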

Bedside Ultrasound Vector Doppler Imaging System with GPU Processing and Deep Learning.

Nahas H, Yiu BYS, Chee AJY, Ishii T, Yu ACH

PubMed · Jun 24 2025
Recent innovations in vector flow imaging promise to bring the modality closer to clinical application and allow for more comprehensive high-frame-rate vascular assessments. One such innovation is plane-wave multi-angle vector Doppler, where pulsed Doppler principles from multiple steering angles are used to realize vector flow imaging at frame rates upward of 1,000 frames per second (fps). Currently, vector Doppler is limited by the presence of aliasing artifacts that have prevented its reliable realization at the bedside. In this work, we present a new aliasing-resistant vector Doppler imaging system that can be deployed at the bedside using a programmable ultrasound core, graphics processing unit (GPU) processing, and deep learning principles. The framework supports two operational modes: 1) live imaging at 17 fps where vector flow imaging serves to guide image view navigation in blood vessels with complex dynamics; 2) on-demand replay mode where flow data acquired at high frame rates of over 1,000 fps is depicted as a slow-motion playback at 60 fps using an aliasing-resistant vector projectile visualization. Using our new system, aliasing-free vector flow cineloops were successfully obtained in a stenosis phantom experiment and in human bifurcation imaging scans. This system represents a major engineering advance towards the clinical adoption of vector flow imaging.
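Multi-angle vector Doppler, as described above, recovers a 2-D velocity vector from beam-axis projections at several steering angles. A simplified least-squares sketch of that reconstruction (idealized geometry; real systems must also resolve the aliasing artifacts this paper targets):

```python
import numpy as np

def vector_velocity(angles_deg, v_meas):
    """Least-squares recovery of the lateral/axial velocity vector (vx, vz)
    from Doppler velocities measured along beams at several steering angles."""
    th = np.deg2rad(np.asarray(angles_deg))
    # Each row projects (vx, vz) onto the beam direction at one angle.
    A = np.stack([np.sin(th), np.cos(th)], axis=1)
    sol, *_ = np.linalg.lstsq(A, np.asarray(v_meas), rcond=None)
    return sol  # (vx, vz)

# Toy flow: 0.3 m/s lateral, 0.1 m/s axial, observed at three steering angles.
true_vx, true_vz = 0.3, 0.1
angles = [-10.0, 0.0, 10.0]
meas = [np.sin(np.deg2rad(a)) * true_vx + np.cos(np.deg2rad(a)) * true_vz
        for a in angles]
```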

Prompt learning with bounding box constraints for medical image segmentation.

Gaillochet M, Noori M, Dastani S, Desrosiers C, Lombaert H

PubMed · Jun 24 2025
Pixel-wise annotations are notoriously laborious and costly to obtain in the medical domain. To mitigate this burden, weakly supervised approaches based on bounding box annotations, which are much easier to acquire, offer a practical alternative. Vision foundation models have recently shown noteworthy segmentation performance when provided with prompts such as points or bounding boxes. Prompt learning exploits these models by adapting them to downstream tasks and automating segmentation, thereby reducing user intervention. However, existing prompt learning approaches depend on fully annotated segmentation masks. This paper proposes a novel framework that combines the representational power of foundation models with the annotation efficiency of weakly supervised segmentation. More specifically, our approach automates prompt generation for foundation models using only bounding box annotations. Our proposed optimization scheme integrates multiple constraints derived from box annotations with pseudo-labels generated by the prompted foundation model. Extensive experiments across multi-modal datasets reveal that our weakly supervised method achieves an average Dice score of 84.90% in a limited data setting, outperforming existing fully supervised and weakly supervised approaches. The code will be available upon acceptance.
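One constraint derivable from a box annotation is that no foreground may fall outside the box. A toy sketch of applying that constraint to a pseudo-label mask (a hypothetical helper for illustration; the paper integrates such constraints into an optimization objective rather than as a hard post-hoc mask):

```python
import numpy as np

def constrain_to_box(pseudo_mask, box):
    """Zero out pseudo-label foreground outside the bounding box
    (y0, x0, y1, x1) -- the simplest box-derived constraint."""
    y0, x0, y1, x1 = box
    out = np.zeros_like(pseudo_mask)
    out[y0:y1, x0:x1] = pseudo_mask[y0:y1, x0:x1]
    return out

# Toy pseudo-label covering the whole 4x4 image, constrained to a 2x2 box.
mask = np.ones((4, 4), dtype=int)
constrained = constrain_to_box(mask, (1, 1, 3, 3))
```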

Recurrent Visual Feature Extraction and Stereo Attentions for CT Report Generation

Yuanhe Tian, Lei Mao, Yan Song

arXiv preprint · Jun 24 2025
Generating reports for computed tomography (CT) images is a challenging task that, while similar to other medical image report generation tasks, has unique characteristics such as the spatial encoding of multiple images and the alignment between the image volume and texts. Existing solutions typically use general 2D or 3D image processing techniques to extract features from a CT volume: they first compress the volume and then divide the compressed CT slices into patches for visual encoding. These approaches do not explicitly account for the transformations among CT slices, nor do they effectively integrate multi-level image features, particularly those containing specific organ lesions, to instruct CT report generation (CTRG). Considering the strong correlation among consecutive slices in CT scans, in this paper we propose a large language model (LLM)-based CTRG method with recurrent visual feature extraction and stereo attentions for hierarchical feature modeling. Specifically, we use a vision Transformer to recurrently process each slice in a CT volume and employ a set of attentions over the encoded slices, from different perspectives, to selectively obtain important visual information and align it with textual features, so as to better instruct an LLM for CTRG. Experimental results and further analysis on the benchmark M3D-Cap dataset show that our method outperforms strong baseline models and achieves state-of-the-art results, demonstrating its validity and effectiveness.
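The idea of attending over encoded slices can be illustrated with a toy single-head dot-product attention that pools per-slice features under a query vector (NumPy sketch only; the paper uses a vision Transformer and an LLM, and its attention design differs):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(query, slice_feats):
    """Pool per-slice CT features with attention weights derived from a query."""
    weights = softmax(slice_feats @ query)  # (num_slices,) attention weights
    return weights @ slice_feats            # (feat_dim,) pooled representation

# Toy setup: 3 slice embeddings (identity rows) and a query favouring slice 0.
slice_feats = np.eye(3)
query = np.array([1.0, 0.0, 0.0])
pooled = attention_pool(query, slice_feats)
```

The pooled vector is dominated by the slice most aligned with the query, which is the selectivity the abstract describes.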
