
Integrating Background Knowledge in Medical Semantic Segmentation with Logic Tensor Networks

Luca Bergamin, Giovanna Maria Dimitri, Fabio Aiolli

arXiv preprint · Sep 26, 2025
Semantic segmentation is a fundamental task in medical image analysis, aiding medical decision-making by helping radiologists distinguish objects in an image. Research in this field has been driven by deep learning applications, which have the potential to scale these systems even in the presence of noise and artifacts. However, these systems are not yet perfect. We argue that performance can be improved by incorporating common medical knowledge into the segmentation model's loss function. To this end, we introduce Logic Tensor Networks (LTNs) to encode medical background knowledge using first-order logic (FOL) rules. The encoded rules range from constraints on the shape of the produced segmentation to relationships between different segmented areas. We apply LTNs in an end-to-end framework with a SwinUNETR for semantic segmentation. We evaluate our method on the task of segmenting the hippocampus in brain MRI scans. Our experiments show that LTNs improve the baseline segmentation performance, especially when training data is scarce. Although this work is in its preliminary stages, we argue that neurosymbolic methods are general enough to be adapted and applied to other medical semantic segmentation tasks.
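The rule-to-loss translation at the heart of this approach can be sketched in a few lines of PyTorch (a minimal illustration, not the authors' code): a FOL rule such as "every hippocampus voxel lies inside the brain" is relaxed into a differentiable fuzzy implication whose violation is added to the Dice loss. The Lukasiewicz relaxation, the rule itself, and the weighting lam are assumptions for illustration.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss over a batch of probability maps (B, C, H, W).
    inter = (pred * target).sum(dim=(1, 2, 3))
    denom = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2 * inter + eps) / (denom + eps)).mean()

def implication_penalty(p_hippo, p_brain):
    # Fuzzy relaxation of the FOL rule: forall x, Hippo(x) -> Brain(x).
    # Lukasiewicz implication truth: min(1, 1 - a + b); the penalty is
    # one minus the mean truth value over all voxels.
    truth = torch.clamp(1.0 - p_hippo + p_brain, max=1.0)
    return 1.0 - truth.mean()

def total_loss(p_hippo, p_brain, target, lam=0.1):
    # Task loss plus the logic term, in the spirit of an LTN objective.
    return dice_loss(p_hippo, target) + lam * implication_penalty(p_hippo, p_brain)
```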

Leveraging multi-modal foundation model image encoders to enhance brain MRI-based headache classification.

Rafsani F, Sheth D, Che Y, Shah J, Siddiquee MMR, Chong CD, Nikolova S, Ross K, Dumkrieger G, Li B, Wu T, Schwedt TJ

PubMed paper · Sep 26, 2025
Headaches are a nearly universal human experience, traditionally diagnosed based solely on symptoms. Recent advances in imaging techniques and artificial intelligence (AI) have enabled the development of automated headache detection systems, which can enhance clinical diagnosis, especially when symptom-based evaluations are insufficient. Current AI models often require extensive data, limiting their clinical applicability where data availability is low. However, deep learning models, particularly pre-trained models fine-tuned with smaller, targeted datasets, can potentially overcome this limitation. Leveraging BioMedCLIP, a pre-trained foundation model combining a vision transformer (ViT) image encoder with a PubMedBERT text encoder, we fine-tuned the ViT image encoder for the specific purpose of classifying headaches and detecting biomarkers from brain MRI data. The dataset consisted of 721 individuals: 424 healthy controls (HC) from the IXI dataset and 297 local participants, including migraine sufferers (n = 96), individuals with acute post-traumatic headache (APTH, n = 48), persistent post-traumatic headache (PPTH, n = 49), and additional HC (n = 104). The model achieved high accuracy across multiple balanced test sets, including 89.96% accuracy for migraine versus HC, 88.13% for APTH versus HC, and 83.13% for PPTH versus HC, all validated through five-fold cross-validation for robustness. Brain regions identified by Gradient-weighted Class Activation Mapping (Grad-CAM) analysis as responsible for migraine classification included the postcentral cortex, supramarginal gyrus, superior temporal cortex, and precuneus cortex; for APTH, the rostral middle frontal and precentral cortices; and, for PPTH, the cerebellar and precentral cortices. To our knowledge, this is the first study to leverage a multimodal biomedical foundation model for headache classification and biomarker detection using structural MRI, offering complementary insights into the causes and brain changes associated with headache disorders.
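A minimal fine-tuning sketch, assuming the BioMedCLIP weights published on the Hugging Face hub and the open_clip loader (the classifier head, embedding size, and learning rate are illustrative assumptions, not details from the paper):

```python
import torch
import torch.nn as nn
import open_clip

# Load BioMedCLIP and keep only the ViT image tower.
model, preprocess = open_clip.create_model_from_pretrained(
    "hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224"
)
encoder = model.visual  # ViT-B/16 image encoder

class HeadacheClassifier(nn.Module):
    def __init__(self, encoder, embed_dim=512, n_classes=2):
        super().__init__()
        self.encoder = encoder                       # fine-tuned end to end
        self.head = nn.Linear(embed_dim, n_classes)  # e.g., migraine vs. HC

    def forward(self, x):                            # x: preprocessed MRI slices
        return self.head(self.encoder(x))

clf = HeadacheClassifier(encoder)
optimizer = torch.optim.AdamW(clf.parameters(), lr=1e-5)  # small lr for fine-tuning
criterion = nn.CrossEntropyLoss()
```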

A novel deep neural architecture for efficient and scalable multidomain image classification.

Nobel SMN, Tasir MAM, Noor H, Monowar MM, Hamid MA, Sayeed MS, Islam MR, Mridha MF, Dey N

PubMed paper · Sep 26, 2025
Deep learning has significantly advanced the field of computer vision; however, developing models that generalize effectively across diverse image domains remains a major research challenge. In this study, we introduce DeepFreqNet, a novel deep neural architecture specifically designed for high-performance multi-domain image classification. The innovative aspect of DeepFreqNet lies in its combination of three powerful components: multi-scale feature extraction for capturing patterns at different resolutions, depthwise separable convolutions for enhanced computational efficiency, and residual connections to maintain gradient flow and accelerate convergence. This hybrid design improves the architecture's ability to learn discriminative features and ensures scalability across domains with varying data complexities. Unlike traditional transfer learning models, DeepFreqNet adapts seamlessly to diverse datasets without requiring extensive reconfiguration. Experimental results from nine benchmark datasets, including MRI tumor classification, blood cell classification, and sign language recognition, demonstrate superior performance, achieving classification accuracies between 98.96% and 99.97%. These results highlight the effectiveness and versatility of DeepFreqNet, showcasing a significant improvement over existing state-of-the-art methods and establishing it as a robust solution for real-world image classification challenges.
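The abstract does not give the exact wiring, but the three named ingredients compose naturally into a block like the following hypothetical PyTorch sketch (channel counts and kernel sizes are assumptions):

```python
import torch
import torch.nn as nn

class MultiScaleDSBlock(nn.Module):
    # Hypothetical block combining the three components named above:
    # multi-scale feature extraction, depthwise separable convolutions,
    # and a residual connection.
    def __init__(self, ch):
        super().__init__()
        # Multi-scale branches: parallel depthwise 3x3, 5x5, and 7x7 convs.
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch, k, padding=k // 2, groups=ch) for k in (3, 5, 7)]
        )
        # Pointwise conv fuses the branches (completing the separable conv).
        self.pointwise = nn.Conv2d(3 * ch, ch, 1)
        self.bn = nn.BatchNorm2d(ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([branch(x) for branch in self.branches], dim=1)
        y = self.bn(self.pointwise(y))
        return self.act(x + y)  # residual connection preserves gradient flow

print(MultiScaleDSBlock(64)(torch.randn(1, 64, 56, 56)).shape)  # (1, 64, 56, 56)
```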

Model-driven individualized transcranial direct current stimulation for the treatment of insomnia disorder: protocol for a randomized, sham-controlled, double-blind study.

Wang Y, Jia W, Zhang Z, Bai T, Xu Q, Jiang J, Wang Z

PubMed paper · Sep 26, 2025
Insomnia disorder is a prevalent condition associated with significant negative impacts on health and daily functioning. Transcranial direct current stimulation (tDCS) has emerged as a potential technique for improving sleep. However, questions remain regarding its clinical efficacy, and there is a lack of standardized individualized stimulation protocols. This study aims to evaluate the efficacy of model-driven, individualized tDCS for treating insomnia disorder through a randomized, double-blind, sham-controlled trial. A total of 40 patients diagnosed with insomnia disorder will be recruited and randomly assigned to either an active tDCS group or a sham stimulation group. Individualized stimulation parameters will be determined through machine learning-based electric field modeling incorporating structural MRI and EEG data. Participants will undergo 10 sessions of tDCS (5 days/week for 2 consecutive weeks), with follow-up assessments conducted at 2 and 4 weeks after treatment. The primary outcome is the reduction in the Insomnia Severity Index (ISI) score at 2 weeks post-treatment. Secondary outcomes include changes in sleep parameters, anxiety, and depression scores. This study is expected to provide evidence for the effectiveness of individualized tDCS in improving sleep quality and reducing insomnia symptoms. This integrative approach, combining advanced neuroimaging and electrophysiological biomarkers, has the potential to establish an evidence-based framework for individualized brain stimulation, optimizing therapeutic outcomes. This study was registered at ClinicalTrials.gov (Identifier: NCT06671457) on 4 November 2024. The online version contains supplementary material available at 10.1186/s12888-025-07347-5.

Brain Tumor Classification from MRI Scans via Transfer Learning and Enhanced Feature Representation

Ahta-Shamul Hoque Emran, Hafija Akter, Abdullah Al Shiam, Abu Saleh Musa Miah, Anichur Rahman, Fahmid Al Farid, Hezerul Abdul Karim

arXiv preprint · Sep 26, 2025
Brain tumors are abnormal cell growths in the central nervous system (CNS), and their timely detection is critical for improving patient outcomes. This paper proposes an automatic and efficient deep-learning framework for brain tumor detection from magnetic resonance imaging (MRI) scans. The framework employs a pre-trained ResNet50 model for feature extraction, followed by Global Average Pooling (GAP) and linear projection to obtain compact, high-level image representations. These features are then processed by a novel Dense-Dropout sequence, a core contribution of this work, which enhances non-linear feature learning, reduces overfitting, and improves robustness through diverse feature transformations. Another major contribution is the creation of the Mymensingh Medical College Brain Tumor (MMCBT) dataset, designed to address the lack of reliable brain tumor MRI resources. The dataset comprises MRI scans from 209 subjects (ages 9 to 65), including 3,671 tumor and 13,273 non-tumor images, all clinically verified under expert supervision. To overcome class imbalance, the tumor class was augmented, resulting in a balanced dataset well-suited for deep learning research.
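The described pipeline (pre-trained ResNet50, GAP, linear projection, Dense-Dropout sequence) can be sketched as follows; layer widths and dropout rates are illustrative assumptions, not values from the paper:

```python
import torch.nn as nn
from torchvision import models

class DenseDropoutNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # conv stages only
        self.gap = nn.AdaptiveAvgPool2d(1)      # Global Average Pooling
        self.project = nn.Linear(2048, 512)     # linear projection to a compact vector
        self.dense_dropout = nn.Sequential(     # Dense-Dropout sequence
            nn.Linear(512, 256), nn.ReLU(), nn.Dropout(0.4),
            nn.Linear(256, 128), nn.ReLU(), nn.Dropout(0.3),
        )
        self.out = nn.Linear(128, n_classes)    # tumor vs. non-tumor

    def forward(self, x):
        z = self.gap(self.features(x)).flatten(1)
        return self.out(self.dense_dropout(self.project(z)))
```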

Generating Synthetic MR Spectroscopic Imaging Data with Generative Adversarial Networks to Train Machine Learning Models.

Maruyama S, Takeshima H

PubMed paper · Sep 26, 2025
To develop a new method to generate synthetic MR spectroscopic imaging (MRSI) data for training machine learning models. This study targeted routine MRI examination protocols with single-voxel spectroscopy (SVS). A novel model derived from pix2pix generative adversarial networks was proposed to generate synthetic MRSI data using MRI and SVS data as inputs. T1- and T2-weighted, SVS, and reference MRSI data were acquired from healthy brains with clinically available sequences. The proposed model was trained to generate synthetic MRSI data. Quantitative evaluation involved calculating the mean squared error (MSE) against the reference and the metabolite ratio values. The effect of the location and number of SVS data on the quality of the synthetic MRSI data was investigated using the MSE. The synthetic MRSI data generated by the proposed model were visually close to the reference. The 95% confidence interval (CI) of the metabolite ratio values of the synthetic MRSI data overlapped with the reference for seven of eight metabolite ratios. The MSEs tended to be lower in the same location than in different locations. The MSEs among groups with different numbers of SVS data were not significantly different. A new method was developed to generate MRSI data by integrating MRI and SVS data. Our method can potentially increase the volume of MRSI training data for other machine learning models by adding SVS acquisition to routine MRI examinations.
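A pix2pix-derived model is trained with a conditional adversarial loss plus a reconstruction term; a hedged sketch of that objective follows (the discriminator disc conditioned on the MRI/SVS inputs is hypothetical, and lam = 100 is the weighting pix2pix popularized, not a value from this paper):

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def generator_loss(disc, cond, fake_mrsi, real_mrsi, lam=100.0):
    # Fool the conditional discriminator, plus an L1 term that keeps the
    # synthetic MRSI close to the reference (the pix2pix recipe).
    pred_fake = disc(cond, fake_mrsi)
    adv = bce(pred_fake, torch.ones_like(pred_fake))
    return adv + lam * l1(fake_mrsi, real_mrsi)

def discriminator_loss(disc, cond, fake_mrsi, real_mrsi):
    # Real conditioned pairs should score 1, generated pairs 0.
    pred_real = disc(cond, real_mrsi)
    pred_fake = disc(cond, fake_mrsi.detach())
    return 0.5 * (bce(pred_real, torch.ones_like(pred_real))
                  + bce(pred_fake, torch.zeros_like(pred_fake)))
```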

A novel open-source ultrasound dataset with deep learning benchmarks for spinal cord injury localization and anatomical segmentation.

Kumar A, Kotkar K, Jiang K, Bhimreddy M, Davidar D, Weber-Levine C, Krishnan S, Kerensky MJ, Liang R, Leadingham KK, Routkevitch D, Hersh AM, Ashayeri K, Tyler B, Suk I, Son J, Theodore N, Thakor N, Manbachi A

PubMed paper · Sep 26, 2025
While deep learning has catalyzed breakthroughs across numerous domains, its broader adoption in clinical settings is inhibited by the costly and time-intensive nature of data acquisition and annotation. To further facilitate medical machine learning, we present an ultrasound dataset of 10,223 brightness-mode (B-mode) images consisting of sagittal slices of porcine spinal cords (N = 25) before and after a contusion injury. We additionally benchmark several state-of-the-art object detection algorithms for localizing the site of injury and semantic segmentation models for labeling the anatomy, enabling comparison and the creation of task-specific architectures. Finally, we evaluate the zero-shot generalization capabilities of the segmentation models on human ultrasound spinal cord images to determine whether training on our porcine dataset is sufficient for accurately interpreting human data. Our results show that the YOLOv8 detection model outperforms all evaluated models for injury localization, achieving a mean Average Precision (mAP50-95) score of 0.606. Segmentation metrics indicate that the DeepLabv3 model achieves the highest accuracy on unseen porcine anatomy, with a mean Dice score of 0.587, while SAMed achieves the highest mean Dice score when generalizing to human anatomy (0.445). To the best of our knowledge, this is the largest annotated dataset of spinal cord ultrasound images made publicly available to researchers and medical professionals, as well as the first public report of object detection and segmentation architectures for assessing anatomical markers in the spinal cord for methodology development and clinical applications.
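For readers reproducing the segmentation benchmarks, the reported metric is the Dice coefficient averaged over mask pairs; a minimal NumPy implementation (an illustration, not the authors' evaluation code):

```python
import numpy as np

def dice_score(pred, gt, eps=1e-7):
    # Dice coefficient between two binary masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def mean_dice(preds, gts):
    # Average over (prediction, ground-truth) pairs, as in the reported scores.
    return float(np.mean([dice_score(p, g) for p, g in zip(preds, gts)]))
```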

Hemorica: A Comprehensive CT Scan Dataset for Automated Brain Hemorrhage Classification, Segmentation, and Detection

Kasra Davoodi, Mohammad Hoseyni, Javad Khoramdel, Reza Barati, Reihaneh Mortazavi, Amirhossein Nikoofard, Mahdi Aliyari-Shoorehdeli, Jaber Hatam Parikhan

arXiv preprint · Sep 26, 2025
Timely diagnosis of intracranial hemorrhage (ICH) on Computed Tomography (CT) scans remains a clinical priority, yet the development of robust Artificial Intelligence (AI) solutions is still hindered by fragmented public data. To close this gap, we introduce Hemorica, a publicly available collection of 372 head CT examinations acquired between 2012 and 2024. Each scan has been exhaustively annotated for five ICH subtypes: epidural (EPH), subdural (SDH), subarachnoid (SAH), intraparenchymal (IPH), and intraventricular (IVH), yielding patient-wise and slice-wise classification labels, subtype-specific bounding boxes, two-dimensional pixel masks, and three-dimensional voxel masks. A double-reading workflow, preceded by a pilot consensus phase and supported by neurosurgeon adjudication, maintained low inter-rater variability. Comprehensive statistical analysis confirms the clinical realism of the dataset. To establish reference baselines, standard convolutional and transformer architectures were fine-tuned for binary slice classification and hemorrhage segmentation. With only minimal fine-tuning, lightweight models such as MobileViT-XS achieved an F1 score of 87.8% in binary classification, and a U-Net with a DenseNet161 encoder reached a Dice score of 85.5% for binary lesion segmentation, results that validate both the quality of the annotations and the sufficiency of the sample size. Hemorica therefore offers a unified, fine-grained benchmark that supports multi-task and curriculum learning, facilitates transfer to larger but weakly labelled cohorts, and streamlines the design of AI-based assistants for ICH detection and quantification.
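A baseline like the U-Net with a DenseNet161 encoder is readily assembled with the segmentation_models_pytorch library; a minimal sketch, assuming ImageNet initialization and single-channel CT slices (hyperparameters are not from the paper):

```python
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="densenet161",   # the encoder named in the abstract
    encoder_weights="imagenet",   # fine-tune from ImageNet weights
    in_channels=1,                # single-channel CT slice
    classes=1,                    # binary hemorrhage mask
)
loss_fn = smp.losses.DiceLoss(mode="binary")

x = torch.randn(2, 1, 256, 256)                  # dummy batch of CT slices
y = (torch.rand(2, 1, 256, 256) > 0.5).float()   # dummy binary masks
loss = loss_fn(model(x), y)                      # Dice training objective
```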

Interpreting Convolutional Neural Network Activation Maps with Hand-crafted Radiomics Features on Progression of Pediatric Craniopharyngioma after Irradiation Therapy

Wenjun Yang, Chuang Wang, Tina Davis, Jinsoo Uh, Chia-Ho Hua, Thomas E. Merchant

arXiv preprint · Sep 25, 2025
Purpose: Convolutional neural networks (CNNs) are promising for predicting treatment outcomes in pediatric craniopharyngioma, but their decision mechanisms are difficult to interpret. We compared the activation maps of a CNN with the hand-crafted radiomics features of a densely connected artificial neural network (ANN) to correlate with clinical decisions. Methods: A cohort of 100 pediatric craniopharyngioma patients was included. Binary tumor progression was classified by an ANN and a CNN using T1w, T2w, and FLAIR MRI as input. Hand-crafted radiomic features were calculated from the MRI using the LifeX software, and key features were selected by Group lasso regularization for comparison with the CNN activation maps. We evaluated the radiomics models by accuracy, area under the receiver operating characteristic curve (AUC), and confusion matrices. Results: The average accuracy for T1w, T2w, and FLAIR MRI was 0.85, 0.92, and 0.86 (ANOVA, F = 1.96, P = 0.18) with the ANN and 0.83, 0.81, and 0.70 (ANOVA, F = 10.11, P = 0.003) with the CNN. The average AUC was 0.91, 0.97, and 0.90 for the ANN and 0.86, 0.88, and 0.75 for the CNN on the three MRI sequences, respectively. The activation maps were correlated with tumor shape, minimum and maximum intensity, and texture features. Conclusions: Tumor progression prediction for pediatric patients with craniopharyngioma achieved promising accuracy with both the ANN and CNN models. The activation maps extracted from different levels were interpreted with the key hand-crafted features of the ANN.
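Activation maps of this kind are commonly obtained with a Grad-CAM-style hook on a chosen convolutional layer; a minimal sketch (the layer choice and normalization are assumptions, not the authors' method):

```python
import torch

def grad_cam(model, layer, x, class_idx):
    # Weight the chosen layer's activations by the spatially averaged
    # gradients of the class score (Grad-CAM).
    acts, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    score = model(x)[0, class_idx]   # x: a single preprocessed MRI slice batch
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    w = grads["g"].mean(dim=(2, 3), keepdim=True)   # per-channel importance
    cam = torch.relu((w * acts["a"]).sum(dim=1))    # weighted activation map
    return cam / (cam.max() + 1e-8)                 # normalized to [0, 1]
```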

Deep-learning-based Radiomics on Mitigating Post-treatment Obesity for Pediatric Craniopharyngioma Patients after Surgery and Proton Therapy

Wenjun Yang, Chia-Ho Hua, Tina Davis, Jinsoo Uh, Thomas E. Merchant

arXiv preprint · Sep 25, 2025
Purpose: We developed an artificial neural network (ANN) combining radiomics with clinical and dosimetric features to predict the extent of body mass index (BMI) increase after surgery and proton therapy, with the advantages of improved accuracy and integrated key-feature selection. Methods and Materials: A uniform treatment protocol comprising limited surgery and proton radiotherapy was given to 84 pediatric craniopharyngioma patients (aged 1-20 years). Post-treatment obesity was classified into 3 groups (<10%, 10-20%, and >20%) based on the normalized BMI increase during a 5-year follow-up. We developed a densely connected 4-layer ANN with radiomics calculated from pre-surgery MRI (T1w, T2w, and FLAIR), combined with clinical and dosimetric features, as input. Accuracy, area under the receiver operating characteristic curve (AUC), and confusion matrices were compared with random forest (RF) models in a 5-fold cross-validation. Group lasso regularization enforced a sparse connection to the input neurons to identify key features from the high-dimensional input. Results: Classification accuracy of the ANN reached above 0.9 for T1w, T2w, and FLAIR MRI. Confusion matrices showed true-positive rates above 0.9, while false-positive rates were below 0.2. Approximately 10 key features were selected for each of the T1w, T2w, and FLAIR MRI models. The ANN improved classification accuracy by 10% and 5% compared to RF models without and with radiomic features, respectively. Conclusion: The ANN model improved classification accuracy for post-treatment obesity compared to conventional statistical models. The clinical features selected by Group lasso regularization confirmed our practical observations, while the additional radiomic and dosimetric features could serve as imaging markers and inform mitigation strategies for post-treatment obesity in pediatric craniopharyngioma patients.
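The Group lasso term that drives the key-feature selection penalizes the L2 norm of each input feature's outgoing weights, so whole features are switched off together; a minimal sketch (model.fc1 is a hypothetical first layer, and lam is an illustrative weight):

```python
import torch

def group_lasso_penalty(first_layer_weight, lam=1e-3):
    # One group per input feature: the column of outgoing weights.
    # weight shape: (hidden_units, n_input_features)
    group_norms = first_layer_weight.norm(p=2, dim=0)  # one L2 norm per feature
    return lam * group_norms.sum()

# Inside a training step:
#   loss = task_loss + group_lasso_penalty(model.fc1.weight)
```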