
Redefining diagnostic lesional status in temporal lobe epilepsy with artificial intelligence.

Gleichgerrcht E, Kaestner E, Hassanzadeh R, Roth RW, Parashos A, Davis KA, Bagić A, Keller SS, Rüber T, Stoub T, Pardoe HR, Dugan P, Drane DL, Abrol A, Calhoun V, Kuzniecky RI, McDonald CR, Bonilha L

PubMed | Jun 3, 2025
Despite decades of advancements in diagnostic MRI, 30%-50% of temporal lobe epilepsy (TLE) patients remain categorized as 'non-lesional' (i.e. MRI negative) based on visual assessment by human experts. MRI-negative patients face diagnostic uncertainty and significant delays in treatment planning. Quantitative MRI studies have demonstrated that MRI-negative patients often exhibit a TLE-specific pattern of temporal and limbic atrophy that might be too subtle for the human eye to detect. This signature pattern could be translated successfully into clinical use via advances in artificial intelligence in computer-aided MRI interpretation, thereby improving the detection of brain 'lesional' patterns associated with TLE. Here, we tested this hypothesis by using a three-dimensional convolutional neural network applied to a dataset of 1178 scans from 12 different centres, which was able to differentiate TLE from healthy controls with high accuracy (85.9% ± 2.8%), significantly outperforming support vector machines based on hippocampal (74.4% ± 2.6%) and whole-brain (78.3% ± 3.3%) volumes. Our analysis subsequently focused on a subset of patients who achieved sustained seizure freedom post-surgery as a gold standard for confirming TLE. Importantly, MRI-negative patients from this cohort were accurately identified as TLE 82.7% ± 0.9% of the time, an encouraging finding given that all of these patients were clinically considered MRI negative (i.e. not radiographically different from controls). The saliency maps from the convolutional neural network revealed that limbic structures, particularly medial temporal, cingulate and orbitofrontal areas, were most influential in classification, confirming the importance of the well-established TLE signature atrophy pattern for diagnosis. Indeed, the saliency maps were similar in MRI-positive and MRI-negative TLE groups, suggesting that even when humans cannot distinguish the more subtle levels of atrophy, these MRI-negative patients lie on the same continuum common to all TLE patients. As such, artificial intelligence can identify TLE lesional patterns, and artificial intelligence-aided diagnosis has the potential to greatly enhance the neuroimaging diagnosis of TLE and to redefine the concept of 'lesional' TLE.
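The abstract does not specify the network's internals; as a rough orientation, a minimal PyTorch sketch of a 3D convolutional classifier of the kind described might look as follows (all layer sizes and the input grid are illustrative assumptions, not the authors' architecture):

```python
# Minimal sketch of a 3D CNN for TLE-vs-control classification;
# layer sizes and input resolution are illustrative assumptions,
# not the architecture used in the paper.
import torch
import torch.nn as nn

class TLE3DCNN(nn.Module):
    def __init__(self, in_channels: int = 1, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm3d(64), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling -> (B, 64, 1, 1, 1)
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

# A T1-weighted volume resampled to a fixed grid, e.g. 128^3 voxels.
model = TLE3DCNN()
logits = model(torch.randn(1, 1, 128, 128, 128))
```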

Open-PMC-18M: A High-Fidelity Large Scale Medical Dataset for Multimodal Representation Learning

Negin Baghbanzadeh, Sajad Ashkezari, Elham Dolatabadi, Arash Afkanpour

arXiv preprint | Jun 3, 2025
Compound figures, which are multi-panel composites containing diverse subfigures, are ubiquitous in biomedical literature, yet large-scale subfigure extraction remains largely unaddressed. Prior work on subfigure extraction has been limited in both dataset size and generalizability, leaving a critical open question: how does high-fidelity image-text alignment via large-scale subfigure extraction impact representation learning in vision-language models? We address this gap by introducing a scalable subfigure extraction pipeline based on transformer-based object detection, trained on a synthetic corpus of 500,000 compound figures, and achieving state-of-the-art performance on both ImageCLEF 2016 and synthetic benchmarks. Using this pipeline, we release OPEN-PMC-18M, a large-scale, high-quality biomedical vision-language dataset comprising 18 million clinically relevant subfigure-caption pairs spanning radiology, microscopy, and visible light photography. We train and evaluate vision-language models on our curated datasets and show improved performance across retrieval, zero-shot classification, and robustness benchmarks, outperforming existing baselines. We release our dataset, models, and code to support reproducible benchmarks and further study of biomedical vision-language modeling and representation learning.
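For readers unfamiliar with the detection step, here is a hedged sketch of transformer-based subfigure detection using an off-the-shelf DETR checkpoint as a stand-in; the authors' detector is trained on 500,000 synthetic compound figures, which this sketch does not reproduce (the checkpoint name, threshold, and file path are placeholders):

```python
# Sketch of transformer-based subfigure detection with an
# off-the-shelf DETR checkpoint as a stand-in for the authors'
# trained detector; file path and threshold are placeholders.
import torch
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

image = Image.open("compound_figure.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes to (score, label, box) at a chosen threshold.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.7
)[0]

# Crop each detected panel into its own subfigure image.
subfigures = [
    image.crop(tuple(round(v) for v in box.tolist()))
    for box in results["boxes"]
]
```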

Co-Evidential Fusion with Information Volume for Medical Image Segmentation

Yuanpeng He, Lijian Li, Tianxiang Zhan, Chi-Man Pun, Wenpin Jiao, Zhi Jin

arXiv preprint | Jun 3, 2025
Although existing semi-supervised image segmentation methods have achieved good performance, they cannot effectively utilize multiple sources of voxel-level uncertainty for targeted learning. Therefore, we propose two main improvements. First, we introduce a novel pignistic co-evidential fusion strategy using generalized evidential deep learning, which extends traditional Dempster-Shafer (D-S) evidence theory, to obtain a more precise uncertainty measure for each voxel in medical samples. This assists the model in learning from mixed labeled information and establishing semantic associations between labeled and unlabeled data. Second, we introduce the concept of the information volume of the mass function (IVUM) to evaluate the constructed evidence, implementing two evidential learning schemes. One optimizes evidential deep learning by combining the information volume of the mass function with the original uncertainty measures. The other integrates the learning pattern based on the co-evidential fusion strategy, using IVUM to design a new optimization objective. Experiments on four datasets demonstrate the competitive performance of our method.
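The fusion strategy builds on Dempster-Shafer theory; the classical primitive it generalizes, Dempster's rule of combination for two mass functions, can be sketched in a few lines (the focal sets and masses below are toy values; the paper's pignistic and IVUM extensions are not reproduced):

```python
# Dempster's rule of combination for two mass functions over the
# same frame of discernment -- the classical primitive that the
# paper's co-evidential fusion generalizes. Focal sets are
# frozensets of labels; masses here are toy values.
from itertools import product

def combine(m1: dict, m2: dict) -> dict:
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: w / (1.0 - conflict) for s, w in fused.items()}

# Two voxel-level opinions over {foreground, background}.
FG, BG = frozenset({"fg"}), frozenset({"bg"})
BOTH = FG | BG  # ignorance mass
m1 = {FG: 0.6, BG: 0.1, BOTH: 0.3}
m2 = {FG: 0.5, BG: 0.3, BOTH: 0.2}
print(combine(m1, m2))
```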

Multi-modal brain MRI synthesis based on SwinUNETR

Haowen Pang, Weiyan Guo, Chuyang Ye

arXiv preprint | Jun 3, 2025
Multi-modal brain magnetic resonance imaging (MRI) plays a crucial role in clinical diagnostics by providing complementary information across different imaging modalities. However, a common challenge in clinical practice is missing MRI modalities. In this paper, we apply SwinUNETR to the synthesis of missing modalities in brain MRI. SwinUNETR is a novel neural network architecture designed for medical image analysis, integrating the strengths of the Swin Transformer and convolutional neural networks (CNNs). The Swin Transformer, a variant of the Vision Transformer (ViT), incorporates hierarchical feature extraction and window-based self-attention mechanisms, enabling it to capture both local and global contextual information effectively. By combining the Swin Transformer with CNNs, SwinUNETR merges global context awareness with detailed spatial resolution. This hybrid approach addresses the challenges posed by varying modality characteristics and complex brain structures, facilitating the generation of accurate and realistic synthetic images. We evaluate the performance of SwinUNETR on brain MRI datasets and demonstrate its superior capability in generating clinically valuable images. Our results show significant improvements in image quality, anatomical consistency, and diagnostic value.
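As an illustration of the setup, here is a hedged sketch using MONAI's SwinUNETR implementation as an image-to-image regressor; constructor arguments vary across MONAI versions, and the training configuration below (channels, loss, optimizer) is an assumption rather than the authors' recipe:

```python
# Sketch of SwinUNETR as an image-to-image regressor for modality
# synthesis, assuming MONAI's implementation; arguments and the
# training setup are illustrative, not the paper's configuration.
import torch
from monai.networks.nets import SwinUNETR

# e.g. synthesize T2 from T1 and FLAIR (2 input channels -> 1 output)
model = SwinUNETR(
    img_size=(96, 96, 96),
    in_channels=2,
    out_channels=1,
    feature_size=48,
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = torch.nn.L1Loss()  # a common choice for synthesis tasks

inputs = torch.randn(1, 2, 96, 96, 96)   # available modalities
target = torch.randn(1, 1, 96, 96, 96)   # missing modality (training)

pred = model(inputs)
loss = loss_fn(pred, target)
loss.backward()
optimizer.step()
```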

Computer-Aided Decision Support Systems of Alzheimer's Disease Diagnosis - A Systematic Review.

Günaydın T, Varlı S

PubMed | Jun 3, 2025
The incidence of Alzheimer's disease is rising with the increasing elderly population worldwide. While no cure exists, early diagnosis can significantly slow disease progression. Computer-aided diagnostic systems are becoming critical tools for assisting in the early detection of Alzheimer's disease. In this systematic review, we aim to evaluate recent advancements in computer-aided decision support systems for Alzheimer's disease diagnosis, focusing on data modalities, machine learning methods, and performance metrics. We conducted a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Studies published between 2021 and 2024 were retrieved from PubMed, IEEE Xplore, and Web of Science, using search terms related to Alzheimer's disease classification, neuroimaging, machine learning, and diagnostic performance. A total of 39 studies met the inclusion criteria, focusing on the use of magnetic resonance imaging, positron emission tomography, and biomarkers for Alzheimer's disease classification using machine learning models. Multimodal approaches, combining magnetic resonance imaging with positron emission tomography and cognitive assessments, outperformed single-modality studies in diagnostic accuracy and reliability. Convolutional neural networks were the most commonly used machine learning models, followed by hybrid models and random forests. The highest accuracy reported for binary classification was 100%, while multi-class classification achieved up to 99.98%. Techniques like the Synthetic Minority Over-sampling Technique (SMOTE) and data augmentation were frequently employed to address data imbalance, improving model generalizability. Our review highlights the advantages of using multimodal data in computer-aided decision support systems for more accurate Alzheimer's disease diagnosis. However, we also identified several limitations, including data imbalance, small sample sizes, and the lack of external validation in most studies. Future research should utilize larger, more diverse datasets, incorporate longitudinal data, and validate models in real-world clinical trials. Additionally, there is a growing need for explainability in machine learning models to ensure they are interpretable and trusted in clinical settings. While computer-aided decision support systems show great promise in improving the early diagnosis of Alzheimer's disease, further work is needed to enhance their robustness, generalizability, and clinical applicability. By addressing these challenges, computer-aided decision support systems could play a pivotal role in the early detection and management of Alzheimer's disease, potentially improving patient outcomes and reducing healthcare costs.
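As an aside on the imbalance-handling techniques the review highlights, a minimal SMOTE sketch with imbalanced-learn might look like this (the data are synthetic stand-ins; resampling is applied to the training split only, to avoid leaking synthetic samples into evaluation):

```python
# Minimal sketch of SMOTE for class imbalance, as commonly used in
# the reviewed studies; features are synthetic stand-ins for, e.g.,
# regional volumes or biomarkers.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(
    n_samples=300, n_features=20, weights=[0.85, 0.15], random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# Oversample the minority class in the training split only.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
print(f"before: {sum(y_train)}/{len(y_train)} minority, "
      f"after: {sum(y_res)}/{len(y_res)}")
```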

A first-of-its-kind two-body statistical shape model of the arthropathic shoulder: enhancing biomechanics and surgical planning.

Blackman J, Giles JW

PubMed | Jun 3, 2025
Statistical shape models are machine learning tools in computational orthopedics that enable the study of anatomical variability and the creation of synthetic models for pathogenetic analysis and surgical planning. Current models of the glenohumeral joint either describe individual bones or are limited to non-pathologic datasets, failing to capture coupled shape variation in arthropathic anatomy. We aimed to develop a novel combined scapula-proximal-humerus model applicable to clinical populations. Preoperative computed tomography scans from 45 reverse total shoulder arthroplasty patients were used to generate three-dimensional models of the scapula and proximal humerus. Correspondence point clouds were combined into a two-body shape model using principal component analysis. Individual scapula-only and proximal-humerus-only shape models were also created for comparison. The models were validated using compactness, specificity, generalization ability, and leave-one-out cross-validation. The modes of variation for each model were also compared. The combined model was described using eigenvector decomposition into single-body models. The models were further compared in their ability to predict the shape of one body when given the shape of its counterpart, and in the generation of diverse realistic synthetic pairs de novo. The scapula and proximal-humerus models performed comparably to previous studies, with median average leave-one-out cross-validation errors of 1.08 mm (IQR: 0.359 mm) and 0.521 mm (IQR: 0.111 mm); the combined model was similar, with a median error of 1.13 mm (IQR: 0.239 mm). The combined model described coupled variations between the shapes equalling 43.2% of their individual variabilities, including the relationship between glenoid and humeral head erosions. The combined model outperformed the individual models generatively, with reduced missing-shape prediction bias (>10%) and uniformly diverse shape plausibility (uniformity p-value < .001 vs. .59). This study developed the first two-body scapulohumeral shape model that captures coupled variations in arthropathic shoulder anatomy and the first proximal-humeral statistical model constructed using a clinical dataset. While single-body models are effective for descriptive tasks, combined models excel in generating joint-level anatomy. This model can be used to augment computational analyses of synthetic populations investigating shoulder biomechanics and surgical planning.
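The core construction, concatenating correspondence points of both bones and applying principal component analysis so the modes encode coupled variation, can be sketched briefly (array sizes and the sampling range are illustrative assumptions):

```python
# Sketch of a two-body statistical shape model: correspondence
# points of both bones are concatenated per subject, PCA captures
# coupled variation, and new scapula-humerus pairs are sampled by
# perturbing modes. Point counts are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

n_subjects, n_pts_scapula, n_pts_humerus = 45, 2000, 1500
scapulae = np.random.randn(n_subjects, n_pts_scapula * 3)
humeri = np.random.randn(n_subjects, n_pts_humerus * 3)

# One row per subject: both bones share a single shape vector,
# so the PCA modes encode coupled scapula-humerus variation.
X = np.hstack([scapulae, humeri])
pca = PCA(n_components=0.95)  # keep 95% of total variance
scores = pca.fit_transform(X)

# Generate a synthetic pair: sample mode weights within +/- 2 SD.
sd = scores.std(axis=0)
b = np.random.uniform(-2, 2, size=sd.shape) * sd
synthetic = pca.mean_ + pca.components_.T @ b
syn_scapula = synthetic[: n_pts_scapula * 3].reshape(-1, 3)
syn_humerus = synthetic[n_pts_scapula * 3 :].reshape(-1, 3)
```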

Machine learning for classification of pediatric bipolar disorder with and without psychotic symptoms based on thalamic subregional structural volume.

Gao W, Zhang K, Jiao Q, Su L, Cui D, Lu S, Yang R

PubMed | Jun 3, 2025
The thalamus plays a crucial role in sensory processing, emotional regulation, and cognitive functions, and its dysregulation may be implicated in psychosis. The aim of the present study was to examine the differences in thalamic subregional volumes between pediatric bipolar disorder patients with (P-PBD) and without psychotic symptoms (NP-PBD). Participants, including 28 P-PBD patients, 26 NP-PBD patients, and 18 healthy controls (HCs), underwent structural magnetic resonance imaging (sMRI) scanning using a 3.0T MRI scanner. All T1-weighted imaging data were processed with FreeSurfer 7.4.0 software. The volumetric differences of thalamic subregions among the three groups were compared using analyses of covariance (ANCOVA) and post-hoc analyses. Additionally, we applied a standard support vector classification (SVC) model for pairwise comparisons among the three groups to identify brain regions with significant volumetric differences. The ANCOVA revealed significant volumetric differences in the left pulvinar anterior (L_PuA) and left reuniens medial ventral (L_MV-re) thalamus among the three groups. Post-hoc analysis revealed that patients with P-PBD exhibited decreased volumes in the L_PuA and L_MV-re when compared to the NP-PBD group and HCs, respectively. Furthermore, the SVC model revealed that the L_MV-re volume exhibited the best capacity to discriminate P-PBD from NP-PBD and HCs. The present findings demonstrate that reduced thalamic subregional volumes in the L_PuA and L_MV-re might be associated with psychotic symptoms in PBD.
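A hedged sketch of the pairwise SVC analysis is given below; the feature set and group sizes mirror the abstract, but the cross-validation scheme and covariate handling (age, sex, intracranial volume) are assumptions, not the authors' exact pipeline:

```python
# Sketch of pairwise SVC classification from thalamic subregional
# volumes with leave-one-out cross-validation; the CV scheme and
# the omission of covariate adjustment are assumptions.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Rows: subjects (28 P-PBD vs. 26 NP-PBD); columns: subregional
# volumes such as L_PuA and L_MV-re (random stand-in values here).
X = np.random.randn(54, 2)
y = np.array([1] * 28 + [0] * 26)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {acc:.2f}")
```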

Machine learning model for preoperative classification of stromal subtypes in salivary gland pleomorphic adenoma based on ultrasound histogram analysis.

Su HZ, Yang DH, Hong LC, Wu YH, Yu K, Zhang ZB, Zhang XD

PubMed | Jun 3, 2025
Accurate preoperative discrimination of salivary gland pleomorphic adenoma (SPA) stromal subtypes is essential for therapeutic planning. We aimed to establish and test machine learning (ML) models for classification of stromal subtypes in SPA based on ultrasound histogram analysis. A total of 256 SPA patients were enrolled in the study and categorized into two groups: stroma-low and stroma-high. The dataset was split into a training cohort of 177 patients and a validation cohort of 79 patients. Least absolute shrinkage and selection operator (LASSO) regression identified optimal features, which were then utilized to build predictive models using logistic regression (LR) and eight ML algorithms. The effectiveness of the models was evaluated using a range of performance metrics, with a particular focus on the area under the receiver operating characteristic curve (AUC). After LASSO regression, six key features (lesion size, shape, cystic areas, vascularity, mean, and skewness) were selected to develop the predictive models. The AUCs ranged from 0.575 to 0.827 across the nine models. The support vector machine (SVM) algorithm achieved the highest performance, with an AUC of 0.827, accompanied by an accuracy of 0.798, precision of 0.792, recall of 0.862, and an F1 score of 0.826. The LR algorithm also exhibited robust performance, achieving an AUC of 0.818, slightly trailing the SVM algorithm. Decision curve analysis indicated that the SVM-based model provided superior clinical utility compared to the other models. The ML model based on ultrasound histogram analysis offers a precise and non-invasive approach for preoperative categorization of stromal subtypes in SPA.
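The two-stage pipeline, LASSO for feature selection followed by an SVM evaluated by AUC, can be sketched as follows (synthetic stand-in data; hyperparameters and the L1-logistic selector are assumptions):

```python
# Sketch of the abstract's two-stage pipeline: an L1-penalized
# (LASSO-style) logistic regression selects features, then an SVM
# is scored by ROC AUC. Data and hyperparameters are illustrative.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 256 patients split 177/79, mirroring the study's cohorts.
X, y = make_classification(n_samples=256, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=79, random_state=0)

lasso = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
)
model = make_pipeline(StandardScaler(), lasso, SVC(probability=True))
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"validation AUC: {auc:.3f}")
```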

Patient-specific prostate segmentation in kilovoltage images for radiation therapy intrafraction monitoring via deep learning.

Mylonas A, Li Z, Mueller M, Booth JT, Brown R, Gardner M, Kneebone A, Eade T, Keall PJ, Nguyen DT

PubMed | Jun 3, 2025
During radiation therapy, the natural movement of organs can lead to underdosing the cancer and overdosing the healthy tissue, compromising treatment efficacy. Real-time image-guided adaptive radiation therapy can track the tumour and account for the motion. Typically, fiducial markers are implanted as a surrogate for the tumour position due to the low radiographic contrast of soft tissues in kilovoltage (kV) images. A segmentation approach that does not require markers would eliminate the costs, delays, and risks associated with marker implantation. We trained patient-specific conditional Generative Adversarial Networks for prostate segmentation in kV images. The networks were trained using synthetic kV images generated from each patient's own imaging and planning data, which are available prior to the commencement of treatment. We validated the networks on two treatment fractions from 30 patients using multi-centre data from two clinical trials. Here, we present a large-scale proof-of-principle study of X-ray-based markerless prostate segmentation for globally available cancer therapy systems. Our results demonstrate the feasibility of a deep learning approach using kV images to track prostate motion across the entire treatment arc for 30 patients with prostate cancer. The mean absolute deviation is 1.4 mm and 1.6 mm in the anterior-posterior/lateral and superior-inferior directions, respectively. Markerless segmentation via deep learning may enable real-time image guidance on conventional cancer therapy systems without requiring implanted markers or additional hardware, thereby expanding access to real-time adaptive radiation therapy.
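As a rough illustration of conditional GAN training for segmentation, below is a compact pix2pix-style training step; the toy networks, loss weights, and random data are assumptions, not the authors' patient-specific models:

```python
# Compact pix2pix-style training step as a stand-in for the paper's
# patient-specific conditional GANs: a generator maps a kV image to
# a prostate mask, a discriminator judges (image, mask) pairs.
import torch
import torch.nn as nn

G = nn.Sequential(  # toy generator: kV image -> segmentation mask
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)
D = nn.Sequential(  # toy discriminator on (image, mask) pairs
    nn.Conv2d(2, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, 4, stride=2, padding=1),
)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

kv, mask = torch.rand(4, 1, 128, 128), torch.rand(4, 1, 128, 128)

# Discriminator step: real pairs vs. generated pairs.
fake = G(kv)
d_real = D(torch.cat([kv, mask], dim=1))
d_fake = D(torch.cat([kv, fake.detach()], dim=1))
loss_d = (bce(d_real, torch.ones_like(d_real))
          + bce(d_fake, torch.zeros_like(d_fake)))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: fool D while staying close to the reference mask.
d_fake = D(torch.cat([kv, fake], dim=1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, mask)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```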

A Novel Deep Learning Framework for Nipple Segmentation in Digital Mammography.

Rogozinski M, Hurtado J, Sierra-Franco CA, R Hall Barbosa C, Raposo A

PubMed | Jun 3, 2025
This study introduces a novel methodology to enhance nipple segmentation in digital mammography, a critical component for accurate medical analysis and computer-aided detection systems. The nipple is a key anatomical landmark for multi-view and multi-modality breast image registration, where accurate localization is vital for ensuring image quality and enabling precise registration of anomalies across different mammographic views. The proposed approach significantly outperforms baseline methods, particularly in challenging cases where previous techniques failed. It achieved successful detection across all cases and reached a mean Intersection over Union (mIoU) of 0.63 in instances where the baseline failed entirely. Additionally, it yielded nearly a tenfold improvement in Hausdorff distance and consistent gains in overlap-based metrics, with the mIoU increasing from 0.7408 to 0.8011 in the craniocaudal (CC) view and from 0.7488 to 0.7767 in the mediolateral oblique (MLO) view. Furthermore, its generalizability suggests the potential for application to other breast imaging modalities and related domains facing challenges such as class imbalance and high variability in object characteristics.
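The reported metric families, overlap (IoU) and boundary distance (Hausdorff), can be computed for binary masks as in this short sketch (toy masks; SciPy's directed Hausdorff distance is symmetrized by taking both directions):

```python
# Sketch of the two reported metric families for binary masks:
# Intersection over Union and the symmetric Hausdorff distance
# between mask pixel sets, via SciPy's directed variant.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def hausdorff(pred: np.ndarray, gt: np.ndarray) -> float:
    p = np.argwhere(pred)  # (row, col) coordinates of mask pixels
    g = np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

pred = np.zeros((64, 64), bool); pred[20:30, 20:30] = True
gt = np.zeros((64, 64), bool);   gt[22:32, 21:31] = True
print(f"IoU: {iou(pred, gt):.3f}, Hausdorff: {hausdorff(pred, gt):.1f} px")
```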