
Auto-Segmentation via deep-learning approaches for the assessment of flap volume after reconstructive surgery or radiotherapy in head and neck cancer.

Thariat J, Mesbah Z, Chahir Y, Beddok A, Blache A, Bourhis J, Fatallah A, Hatt M, Modzelewski R

PubMed · Jul 1 2025
Reconstructive flap surgery aims to restore the substance and function losses associated with tumor resection. Automatic flap segmentation could allow quantification of flap volume and correlation with functional outcomes after surgery or post-operative radiotherapy (poRT). Because flaps are ectopic tissues composed of various components (fat, skin, fascia, muscle, bone) with varied volume, shape, and texture, and because the postoperative bed presents anatomical modifications, inflammation, and edema, the segmentation task is challenging. We built an artificial-intelligence-enabled automatic soft-tissue flap segmentation method from CT scans of Head and Neck Cancer (HNC) patients. Ground-truth flap segmentation masks were delineated by two experts on postoperative CT scans of 148 HNC patients undergoing poRT. All CTs and flaps (free or pedicled, soft tissue only or bone) were kept, including those with artefacts, to ensure generalizability. A deep-learning nnUNetv2 framework was built using Hounsfield Unit (HU) windowing to mimic radiological assessment. A transformer-based 2D "Segment Anything Model" (MedSAM) was also built and fine-tuned on medical CTs. Models were compared with the Dice Similarity Coefficient (DSC) and 95th-percentile Hausdorff Distance (HD95) metrics. Flaps were in the oral cavity (N = 102), oropharynx (N = 26), or larynx/hypopharynx (N = 20). There were free flaps (N = 137) and pedicled flaps (N = 11), comprising soft tissue only (N = 92), reconstructed bone (N = 42), or bone resected without reconstruction (N = 40). The windowed nnUNet model outperformed the plain nnUNetv2 and MedSAM models, achieving a mean DSC of 0.69 and an HD95 of 25.6 mm under 5-fold cross-validation. Segmentation performed better in the absence of artifacts and worse in rare situations such as pedicled flaps, laryngeal primaries, and bone resected without reconstruction (p < 0.01). Automatic flap segmentation demonstrates clinical performance sufficient to quantify spontaneous and radiation-induced volume shrinkage of flaps.
Free flaps achieved excellent performance; rare situations will be addressed by fine-tuning the network.
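The HU-windowing idea, clipping CT intensities to a radiological viewing window before training, can be sketched in a few lines. This is an illustrative NumPy sketch: the level/width values are generic soft-tissue defaults, not the study's settings.

```python
import numpy as np

def window_hu(volume, level=40.0, width=400.0):
    """Clip a CT volume to an intensity window (in Hounsfield Units)
    and rescale to [0, 1], mimicking a radiologist's soft-tissue view.
    level/width are illustrative defaults, not the paper's values."""
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(volume.astype(np.float32), lo, hi)
    return (clipped - lo) / (hi - lo)

# Toy 2x2x2 "CT volume" in HU: air, fat, soft tissue, bone
vol = np.array([[[-1000.0, -100.0], [50.0, 300.0]],
                [[0.0, 40.0], [240.0, 1000.0]]])
win = window_hu(vol)  # air maps to 0.0, dense bone saturates at 1.0
```

In a full pipeline this windowed volume, rather than the raw HU range, would be fed to the segmentation network.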

Attention-driven hybrid deep learning and SVM model for early Alzheimer's diagnosis using neuroimaging fusion.

Paduvilan AK, Livingston GAL, Kuppuchamy SK, Dhanaraj RK, Subramanian M, Al-Rasheed A, Getahun M, Soufiene BO

PubMed · Jul 1 2025
Alzheimer's Disease (AD) poses a significant global health challenge, necessitating early and accurate diagnosis to enable timely interventions. AD is a progressive neurodegenerative disorder that affects millions worldwide and is one of the leading causes of cognitive impairment in older adults. Early diagnosis is critical for enabling effective treatment strategies, slowing disease progression, and improving patients' quality of life. Existing diagnostic methods often struggle with limited sensitivity, overfitting, and reduced reliability due to inadequate feature extraction, imbalanced datasets, and suboptimal model architectures. This study addresses these gaps by introducing a methodology that combines Support Vector Machines (SVMs) with Deep Learning (DL) to improve AD classification performance. Deep learning models extract high-level imaging features, which are then concatenated and passed to SVM kernels in a late-fusion ensemble. This hybrid design leverages deep representations for pattern recognition and the SVM's robustness on small sample sets. By precisely classifying the disease from neuroimaging data, the study provides a tool for early-stage identification of possible cases, enhancing management and treatment options. The approach integrates advanced data pre-processing, dynamic feature optimization, and attention-driven learning mechanisms to enhance interpretability and robustness. The research leverages a dataset of MRI and PET imaging, integrating novel fusion techniques to extract key biomarkers indicative of cognitive decline. Unlike prior approaches, this method effectively mitigates the challenges of data sparsity and dimensionality reduction while improving generalization across diverse datasets. Comparative analysis highlights a 15% improvement in accuracy, a 12% reduction in false positives, and a 10% increase in F1-score against state-of-the-art models such as HNC and MFNNC.
The proposed method significantly outperforms existing techniques across metrics like accuracy, sensitivity, specificity, and computational efficiency, achieving an overall accuracy of 98.5%.
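The late-fusion design described above (deep features from each modality concatenated, then classified by an SVM) can be sketched with scikit-learn. Everything below is a synthetic stand-in, not the study's pipeline: the "features" are random arrays in place of CNN outputs, and the class separation is injected artificially so the toy problem is learnable.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-ins for high-level features a deep network would extract
# from MRI and PET scans of the same 120 subjects
mri_feats = rng.normal(size=(120, 64))
pet_feats = rng.normal(size=(120, 64))
y = rng.integers(0, 2, size=120)  # 0 = control, 1 = AD (synthetic labels)
mri_feats[y == 1] += 0.8          # inject a separable group effect

# Late fusion: concatenate modality features, then classify with an SVM
fused = np.concatenate([mri_feats, pet_feats], axis=1)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(fused, y)
acc = clf.score(fused, y)  # training accuracy on the toy data
```

A real evaluation would of course report held-out (cross-validated) accuracy, not training accuracy.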

Cephalometric landmark detection using vision transformers with direct coordinate prediction.

Laitenberger F, Scheuer HT, Scheuer HA, Lilienthal E, You S, Friedrich RE

PubMed · Jul 1 2025
Cephalometric Landmark Detection (CLD), i.e. annotating interest points in lateral X-ray images, is the crucial first step of every orthodontic therapy. While CLD has immense potential for automation using Deep Learning methods, carefully crafted contemporary approaches using convolutional neural networks and heatmap prediction do not qualify for large-scale clinical application due to insufficient performance. We propose a novel approach using Vision Transformers (ViTs) with direct coordinate prediction, avoiding the memory-intensive heatmap prediction common in previous work. Through extensive ablation studies comparing our method against contemporary CNN architectures (ConvNext V2) and heatmap-based approaches (Segformer), we demonstrate that ViTs with coordinate prediction achieve superior performance with more than 2 mm improvement in mean radial error compared to state-of-the-art CLD methods. Our results show that while non-adapted CNN architectures perform poorly on the given task, contemporary approaches may be too tailored to specific datasets, failing to generalize to different and especially sparse datasets. We conclude that using general-purpose Vision Transformers with direct coordinate prediction shows great promise for future research on CLD and medical computer vision.
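The mean radial error (MRE) behind the reported "more than 2 mm improvement" is simply the mean Euclidean distance between predicted and ground-truth landmark coordinates. A minimal implementation, assuming 2D landmarks and a known isotropic pixel spacing (both assumptions, since the paper does not specify them here):

```python
import numpy as np

def mean_radial_error(pred, true, pixel_spacing_mm=1.0):
    """Mean Euclidean distance (in mm) between predicted and
    ground-truth landmark coordinates, shape (N, 2). This is the
    standard cephalometric landmark detection metric."""
    dists = np.linalg.norm((pred - true) * pixel_spacing_mm, axis=1)
    return dists.mean()

# Two landmarks, predictions off by 3 and 4 pixels respectively
pred = np.array([[10.0, 10.0], [20.0, 24.0]])
true = np.array([[10.0, 13.0], [20.0, 20.0]])
mre = mean_radial_error(pred, true, pixel_spacing_mm=0.5)  # 1.75 mm
```

Direct coordinate prediction means the network regresses these (x, y) values directly, rather than producing one full-resolution heatmap per landmark and taking its argmax, which is where the memory savings come from.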

A superpixel based self-attention network for uterine fibroid segmentation in high intensity focused ultrasound guidance images.

Wen S, Zhang D, Lei Y, Yang Y

PubMed · Jul 1 2025
Ultrasound guidance images are widely used for high intensity focused ultrasound (HIFU) therapy; however, speckle, acoustic shadows, and signal attenuation hinder radiologists' reading of these images and make their segmentation more difficult. To address these issues, we proposed the superpixel based self-attention network, a network integrating superpixels and self-attention mechanisms that can automatically segment tumor regions in ultrasound guidance images. The method is implemented within a region splitting-and-merging framework. The ultrasound guidance image is first over-segmented into superpixels; features within the superpixels are then extracted and encoded into superpixel feature matrices of uniform size. The network takes the superpixel feature matrices and their positional information as input and classifies superpixels using self-attention modules and convolutional layers. Finally, the superpixels are merged based on the classification results to obtain the tumor region, achieving automatic tumor segmentation. The method was applied to a local dataset of 140 ultrasound guidance images from uterine fibroid HIFU therapy, and its performance was quantitatively evaluated against pixel-wise segmentation networks. The proposed method achieved a mean intersection over union (IoU) of 75.95% and a mean normalized Hausdorff distance (NormHD) of 7.34%. Compared with the segmentation transformer (SETR), this represents improvements of 5.52% in IoU and 1.49% in NormHD. Paired t-tests were conducted to evaluate the significance of the differences in IoU and NormHD between the proposed method and the comparison methods; all p-values were below 0.05.
The analysis of evaluation metrics and segmentation results indicates that the proposed method performs better than existing pixel-wise segmentation networks in segmenting the tumor region on ultrasound guidance images.
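The split-and-merge pipeline (over-segment, classify each superpixel, merge the positives back into a mask) can be sketched with a toy example. Both pieces below are deliberate stand-ins: the "superpixels" are square grid blocks rather than SLIC-style regions, and the "classifier" is a mean-intensity threshold rather than the paper's self-attention network.

```python
import numpy as np

def grid_superpixels(image, block=2):
    """Toy over-segmentation into square blocks (real pipelines use
    SLIC or similar); returns an integer label map of image shape."""
    h, w = image.shape
    rows = np.arange(h) // block
    cols = np.arange(w) // block
    return rows[:, None] * ((w + block - 1) // block) + cols[None, :]

def segment_by_superpixel(image, block=2, thresh=0.5):
    """Classify each superpixel, then merge positives into a mask."""
    labels = grid_superpixels(image, block)
    mask = np.zeros_like(image, dtype=bool)
    for lab in np.unique(labels):
        region = labels == lab
        # Stand-in classifier: mean intensity over the superpixel.
        # The paper instead encodes per-superpixel features and
        # classifies them with self-attention modules.
        if image[region].mean() > thresh:
            mask |= region  # merge classified superpixels
    return mask

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0                     # bright "tumor" square
mask = segment_by_superpixel(img)       # recovers the 4x4 region
```

The design choice this illustrates: by classifying superpixels instead of individual pixels, the number of decisions drops by orders of magnitude and each decision sees an intensity-homogeneous region, which helps in speckle-heavy ultrasound.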

Neural networks with personalized training for improved MOLLI T<sub>1</sub> mapping.

Gkatsoni O, Xanthis CG, Johansson S, Heiberg E, Arheden H, Aletras AH

PubMed · Jul 1 2025
The aim of this study was to develop a method for personalized training of deep neural networks by means of an MRI simulator to improve MOLLI native T<sub>1</sub> estimates relative to conventional fitting methods. The proposed Personalized Training Neural Network (PTNN) for T<sub>1</sub> mapping was based on a neural network trained with simulated MOLLI signals generated for each individual scan, taking into account both the pulse sequence parameters and the heart rate triggers of the specific healthy volunteer. Experimental data from eleven phantoms and ten healthy volunteers were included in the study. In phantom studies, agreement between T<sub>1</sub> reference values and those obtained with the PTNN showed a statistically significantly smaller bias than conventional fitting estimates (-26.69 ± 29.5 ms vs. -65.0 ± 33.25 ms, p < 0.001). For in vivo studies, T<sub>1</sub> estimates derived from the PTNN were higher (1152.4 ± 25.8 ms myocardium, 1640.7 ± 30.6 ms blood) than those from conventional fitting (1050.8 ± 24.7 ms myocardium, 1597.2 ± 39.9 ms blood). For PTNN, shortening the acquisition by eliminating the pause between inversion pulses yielded slightly lower myocardial T<sub>1</sub> values (1162.2 ± 19.7 ms with pause vs. 1127.1 ± 19.7 ms without, p = 0.01) and no significant difference in blood (1624.7 ± 33.9 ms with pause vs. 1645.4 ± 18.7 ms without, p = 0.16). For conventional fitting, statistically significant differences were found. Compared to T<sub>1</sub> maps derived by conventional fitting, PTNN is a post-processing method that yielded T<sub>1</sub> maps with higher values and better accuracy in phantoms over a physiological range of T<sub>1</sub> and T<sub>2</sub> values. In normal volunteers, PTNN yielded higher T<sub>1</sub> values even with a shorter acquisition scheme of eight heartbeats of scan time, without deploying new pulse sequences.
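For context, the conventional MOLLI fitting that PTNN is compared against is typically a three-parameter inversion-recovery fit with a Look-Locker correction, T<sub>1</sub> = T<sub>1</sub>* (B/A − 1). The sketch below is a generic version of that baseline (the study's exact fitting procedure may differ); the inversion times and T<sub>1</sub> value are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def molli_model(ti, A, B, T1_star):
    # Three-parameter inversion-recovery model used in conventional
    # MOLLI fitting: S(TI) = A - B * exp(-TI / T1*)
    return A - B * np.exp(-ti / T1_star)

def fit_t1(ti_ms, signal):
    """Conventional fit with Look-Locker correction T1 = T1*(B/A - 1).
    Illustrative baseline, not the paper's PTNN method."""
    (A, B, T1_star), _ = curve_fit(
        molli_model, ti_ms, signal,
        p0=(signal.max(), 2 * signal.max(), 1000.0))
    return T1_star * (B / A - 1.0)

# Synthetic signed (phase-corrected) signal with a known T1 of 1200 ms
true_T1, A, B = 1200.0, 1.0, 2.0
T1_star = true_T1 / (B / A - 1.0)
ti = np.array([100.0, 200.0, 400.0, 800.0, 1600.0, 3200.0, 4800.0])
sig = molli_model(ti, A, B, T1_star)
t1_est = fit_t1(ti, sig)  # recovers ~1200 ms on this noiseless data
```

PTNN replaces this per-pixel curve fit with a network whose training set is simulated specifically for the scan's sequence parameters and heart-rate triggers.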

Stratifying trigeminal neuralgia and characterizing an abnormal property of brain functional organization: a resting-state fMRI and machine learning study.

Wu M, Qiu J, Chen Y, Jiang X

PubMed · Jul 1 2025
Increasing evidence suggests that primary trigeminal neuralgia (TN), including classical TN (CTN) and idiopathic TN (ITN), shares biological, neuropsychological, and clinical features, despite differing diagnostic criteria. Neuroimaging studies have shown neurovascular compression (NVC) differences in these disorders. However, changes in brain dynamics across these two TN subtypes remain unknown. The authors aimed to examine the functional connectivity differences among CTN, ITN, and pain-free controls. A total of 93 subjects, 50 TN patients and 43 pain-free controls, underwent resting-state functional magnetic resonance imaging (rs-fMRI). All TN patients underwent surgery, and the NVC type was verified. Functional connectivity and spontaneous brain activity were analyzed, and the significant alterations in rs-fMRI indices were selected to train classification models. The patients with TN showed increased connectivity between several brain regions, such as the medial prefrontal cortex (mPFC) and left planum temporale, and decreased connectivity between the mPFC and left superior frontal gyrus. CTN patients exhibited a further reduction in connectivity between the left insular lobe and left occipital pole. Compared to controls, TN patients had heightened neural activity in the frontal regions. CTN patients showed reduced activity in the right temporal pole compared with ITN patients. These patterns effectively distinguished TN patients from controls, with an accuracy of 74.19% and an area under the receiver operating characteristic curve of 0.80. This study revealed alterations in rs-fMRI metrics in TN patients compared to controls and is the first to show differences between CTN and ITN. The support vector machine model of rs-fMRI indices exhibited moderate performance in discriminating TN patients from controls.
These findings have unveiled potential biomarkers for TN and its subtypes, which can be used for additional investigation of the pathophysiology of the disease.
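The classification step (an SVM trained on the selected rs-fMRI indices, evaluated by accuracy and ROC AUC) can be sketched with scikit-learn. The features and group effect below are synthetic, chosen only to mirror the 50-patient / 43-control design; the real study uses the connectivity and activity alterations described above.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Stand-ins for selected rs-fMRI indices (e.g., connectivity strengths)
n_tn, n_ctrl, n_feat = 50, 43, 6
X = np.vstack([rng.normal(0.4, 1.0, (n_tn, n_feat)),    # TN patients
               rng.normal(0.0, 1.0, (n_ctrl, n_feat))])  # controls
y = np.r_[np.ones(n_tn), np.zeros(n_ctrl)]

# Standardize features, then a linear SVM; report cross-validated AUC
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
```

Cross-validation matters here: with 93 subjects and features pre-selected for significance, an in-sample accuracy would be optimistically biased.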

Synergizing advanced algorithm of explainable artificial intelligence with hybrid model for enhanced brain tumor detection in healthcare.

Lamba K, Rani S, Shabaz M

PubMed · Jul 1 2025
Brain tumors cause life-threatening consequences, so their timely detection and accurate classification are critical for determining appropriate treatment plans and improving patient outcomes. However, conventional approaches to brain tumor diagnosis based on Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans are often labor-intensive, prone to human error, and entirely reliant on the expertise of radiologists. The integration of advanced techniques such as Machine Learning (ML) and Deep Learning (DL) has revolutionized the healthcare sector in recent years, demonstrating great potential for accurate and improved outcomes in medical image analysis, but their black-box nature remains a drawback: understanding the reasoning behind their predictions is still a major challenge for healthcare professionals and raises concerns about trustworthiness, interpretability, and transparency in clinical settings. To overcome this, an explainable hybrid framework has been proposed that integrates the DenseNet201 network for deep feature extraction from input MRI data with a Support Vector Machine (SVM) classifier for robust binary classification of brain scans. A region-adaptive preprocessing pipeline is used to enhance tumor visibility and feature clarity. To address the need for interpretability, multiple explainable artificial intelligence (XAI) techniques (Grad-CAM, Integrated Gradients (IG), and Layer-wise Relevance Propagation (LRP)) have been incorporated.
Our comparative evaluation shows that LRP achieves the highest performance across all explainability metrics, with 98.64% accuracy, 0.74 F1-score, and 0.78 IoU. The proposed model provides transparent and highly accurate diagnostic predictions, offering a reliable clinical decision support tool. It achieves 0.9801 accuracy, 0.9223 sensitivity, 0.9909 specificity, 0.9154 precision, and 0.9360 F1-score, demonstrating strong potential for real-world brain tumor diagnosis and personalized treatment strategies.
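Of the XAI techniques named, Integrated Gradients is compact enough to sketch end-to-end: attribute feature i as (x_i − b_i) times the average gradient of the model output along the straight path from a baseline b to the input x. The toy "model" below is a linear-sigmoid classifier, not DenseNet201+SVM, but the completeness property (attributions summing to f(x) − f(baseline)) carries over.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def integrated_gradients(x, w, baseline=None, steps=100):
    """Integrated Gradients for a toy classifier f(x) = sigmoid(w.x):
    attr_i = (x_i - b_i) * mean over the path of df/dx_i.
    Illustrative only; the paper applies IG to a deep network."""
    b = np.zeros_like(x) if baseline is None else baseline
    alphas = (np.arange(steps) + 0.5) / steps  # midpoint rule
    grads = np.zeros_like(x)
    for a in alphas:
        p = sigmoid(w @ (b + a * (x - b)))
        grads += p * (1.0 - p) * w             # df/dx at this path point
    return (x - b) * grads / steps

w = np.array([1.0, -2.0, 0.0])   # third feature is irrelevant
x = np.array([0.5, 0.5, 3.0])
attr = integrated_gradients(x, w)  # attr[2] is exactly 0
```

The completeness check (attributions summing to the change in model output) is what makes IG attributions quantitatively interpretable, unlike raw saliency maps.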

Hybrid transfer learning and self-attention framework for robust MRI-based brain tumor classification.

Panigrahi S, Adhikary DRD, Pattanayak BK

PubMed · Jul 1 2025
Brain tumors are a significant contributor to cancer-related deaths worldwide. Accurate and prompt detection is crucial to reduce mortality rates and improve patient survival prospects. Magnetic Resonance Imaging (MRI) is crucial for diagnosis, but manual analysis is resource-intensive and error-prone, highlighting the need for robust Computer-Aided Diagnosis (CAD) systems. This paper proposes a novel hybrid model combining Transfer Learning (TL) and attention mechanisms to enhance brain tumor classification accuracy. Leveraging features from the pre-trained DenseNet201 Convolutional Neural Network (CNN) and integrating a Transformer-based architecture, our approach overcomes challenges like computational intensity, detail detection, and noise sensitivity. We also evaluated five additional pre-trained models (VGG19, InceptionV3, Xception, MobileNetV2, and ResNet50V2) and incorporated Multi-Head Self-Attention (MHSA) and Squeeze-and-Excitation Attention (SEA) blocks individually to improve feature representation. Using the Br35H dataset of 3,000 MRI images, our proposed DenseTransformer model achieved a consistent accuracy of 99.41%, demonstrating its reliability as a diagnostic tool. Statistical analysis using a Z-test on Cohen's Kappa score, DeLong's test on AUC, and McNemar's test on F1-score confirms the model's reliability. Additionally, Explainable AI (XAI) techniques such as Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-agnostic Explanations (LIME) enhanced model transparency and interpretability. This study underscores the potential of hybrid Deep Learning (DL) models in advancing brain tumor diagnosis and improving patient outcomes.
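A Multi-Head Self-Attention (MHSA) block of the kind grafted onto CNN features can be sketched in plain NumPy: project tokens to queries, keys, and values, run scaled dot-product attention per head, then concatenate and project. The weight matrices here are random illustrative stand-ins, not trained parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, Wo, n_heads):
    """Minimal MHSA over token features X of shape (T, d). Each head
    attends over a d/n_heads slice; outputs are concatenated and
    mixed by Wo. Weights here are assumptions for illustration."""
    T, d = X.shape
    dh = d // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(dh)  # scaled dot-product
        heads.append(softmax(scores) @ V[:, s])     # attention-weighted sum
    return np.concatenate(heads, axis=1) @ Wo       # back to (T, d)

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(5, d))  # 5 tokens (e.g., flattened CNN patches)
Wq, Wk, Wv, Wo = (rng.normal(size=(d, d)) * 0.1 for _ in range(4))
out = multi_head_self_attention(X, Wq, Wk, Wv, Wo, n_heads=2)
```

In the hybrid setting described above, the "tokens" would be spatial positions of the CNN feature map, letting the block model long-range dependencies the convolutions miss.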

Brain structural features with functional priori to classify Parkinson's disease and multiple system atrophy using diagnostic MRI.

Zhou K, Li J, Huang R, Yu J, Li R, Liao W, Lu F, Hu X, Chen H, Gao Q

PubMed · Jul 1 2025
Clinical two-dimensional (2D) MRI data has seen limited application in the early diagnosis of Parkinson's disease (PD) and multiple system atrophy (MSA) due to quality limitations, yet its diagnostic and therapeutic potential remains underexplored. This study presents a novel machine learning framework using reconstructed clinical images to accurately distinguish PD from MSA and identify disease-specific neuroimaging biomarkers. The structure constrained super-resolution network (SCSRN) algorithm was employed to reconstruct clinical 2D MRI data for 56 PD and 58 MSA patients. Features were derived from a functional template, and hierarchical SHAP-based feature selection improved model accuracy and interpretability. In the test set, the Extra Trees and logistic regression models based on the functional template demonstrated an improved accuracy rate of 95.65% and an AUC of 99%. The positive and negative impacts of various features predicting PD and MSA were clarified, with larger fourth ventricular and smaller brainstem volumes being most significant. The proposed framework provides new insights into the comprehensive utilization of clinical 2D MRI images to explore underlying neuroimaging biomarkers that can distinguish between PD and MSA, highlighting disease-specific alterations in brain morphology observed in these conditions.
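The importance-ranked feature selection step can be sketched as follows. Note two simplifications: the paper ranks features hierarchically with SHAP values, whereas this sketch uses Extra Trees impurity importances as a simplified proxy, and the "regional volume" data are synthetic, with only the first two columns (loosely standing in for fourth-ventricle and brainstem volumes) made informative.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(1)
# Synthetic regional features for 114 subjects (mirroring 56 PD + 58 MSA);
# only columns 0 and 1 actually differ between the two groups
X = rng.normal(size=(114, 20))
y = rng.integers(0, 2, size=114)
X[y == 1, 0] += 1.5   # e.g., larger fourth ventricle in one group
X[y == 1, 1] -= 1.5   # e.g., smaller brainstem in that group

et = ExtraTreesClassifier(n_estimators=300, random_state=0).fit(X, y)
# Keep the top-k most important features; the paper instead ranks
# features with SHAP values for interpretability
top2 = np.argsort(et.feature_importances_)[::-1][:2]
```

The appeal of SHAP over raw impurity importances is signed, per-subject attributions, which is what lets the study state the direction of each feature's effect (larger fourth ventricle, smaller brainstem) rather than just its rank.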