
Patient-specific prediction of glioblastoma growth via reduced order modeling and neural networks.

Cerrone D, Riccobelli D, Gazzoni S, Vitullo P, Ballarin F, Falco J, Acerbi F, Manzoni A, Zunino P, Ciarletta P

PubMed · Jun 3, 2025
Glioblastoma (GBL) is among the most aggressive brain tumors in adults, characterized by patient-specific invasion patterns driven by the underlying brain microstructure. In this work, we present a proof-of-concept mathematical model of GBL growth, enabling real-time prediction and patient-specific parameter identification from longitudinal neuroimaging data. The framework exploits a diffuse-interface mathematical model to describe the tumor evolution and a reduced-order modeling strategy, relying on proper orthogonal decomposition, trained on synthetic data derived from patient-specific brain anatomies reconstructed from magnetic resonance imaging and diffusion tensor imaging. A neural network surrogate learns the inverse mapping from tumor evolution to model parameters, achieving significant computational speed-up while preserving high accuracy. To ensure robustness and interpretability, we perform both global and local sensitivity analyses, identifying the key biophysical parameters governing tumor dynamics and assessing the stability of the inverse problem solution. These results establish a methodological foundation for the future clinical deployment of patient-specific digital twins in neuro-oncology.
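As a rough illustration of the two ingredients the abstract pairs, the sketch below builds a POD basis from a snapshot matrix via truncated SVD and trains a small neural surrogate to recover model parameters from the reduced trajectories. All dimensions, the snapshot data, and the two-parameter setup are synthetic stand-ins, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic snapshot matrix: each column is one tumor state (flattened field).
n_dofs, n_snapshots = 5000, 200
S = rng.standard_normal((n_dofs, n_snapshots))

# POD: a truncated SVD of the snapshot matrix yields the reduced basis.
U, sigma, _ = np.linalg.svd(S, full_matrices=False)
r = 20                       # number of retained modes
V = U[:, :r]                 # POD basis, shape (n_dofs, r)

# Reduced coordinates of each snapshot: a = V^T s.
A = V.T @ S                  # shape (r, n_snapshots)

# Inverse problem: learn (hypothetical) biophysical parameters, e.g. a
# diffusivity and a proliferation rate, from the reduced representation.
params = rng.uniform(size=(n_snapshots, 2))        # synthetic ground truth
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(A.T, params)

theta_hat = surrogate.predict(A.T[:1])             # estimate for one case
```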

A first-of-its-kind two-body statistical shape model of the arthropathic shoulder: enhancing biomechanics and surgical planning.

Blackman J, Giles JW

PubMed · Jun 3, 2025
Statistical Shape Models are machine learning tools in computational orthopedics that enable the study of anatomical variability and the creation of synthetic models for pathogenetic analysis and surgical planning. Current models of the glenohumeral joint either describe individual bones or are limited to non-pathologic datasets, failing to capture coupled shape variation in arthropathic anatomy. We aimed to develop a novel combined scapula-proximal-humerus model applicable to clinical populations. Preoperative computed tomography scans from 45 Reverse Total Shoulder Arthroplasty patients were used to generate three-dimensional models of the scapula and proximal humerus. Correspondence point clouds were combined into a two-body shape model using Principal Component Analysis. Individual scapula-only and proximal-humerus-only shape models were also created for comparison. The models were validated using compactness, specificity, generalization ability, and leave-one-out cross-validation. The modes of variation for each model were also compared. The combined model was described using eigenvector decomposition into single-body models. The models were further compared in their ability to predict the shape of one body when given the shape of its counterpart, and in the generation of diverse, realistic synthetic pairs de novo. The scapula and proximal-humerus models performed comparably to previous studies, with median average leave-one-out cross-validation errors of 1.08 mm (IQR: 0.359 mm) and 0.521 mm (IQR: 0.111 mm); the combined model was similar, with a median error of 1.13 mm (IQR: 0.239 mm). The combined model described coupled variations between the shapes equaling 43.2% of their individual variabilities, including the relationship between glenoid and humeral head erosions. The combined model outperformed the individual models generatively, with reduced missing shape prediction bias (> 10%) and uniformly diverse shape plausibility (uniformity p-value < .001 vs. .59). This study developed the first two-body scapulohumeral shape model that captures coupled variations in arthropathic shoulder anatomy and the first proximal-humeral statistical model constructed using a clinical dataset. While single-body models are effective for descriptive tasks, combined models excel in generating joint-level anatomy. This model can be used to augment computational analyses of synthetic populations investigating shoulder biomechanics and surgical planning.
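The core construction, a single PCA over the concatenated correspondence points of both bones, can be sketched as follows. Point counts and the random clouds are placeholders, not the study's data; sampling mode weights from the fitted distribution is what generates coupled synthetic pairs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical correspondence clouds: 45 subjects, scapula (1000 points) and
# proximal humerus (600 points), each point in 3D, flattened per subject.
n_subj = 45
scap = rng.standard_normal((n_subj, 1000 * 3))
hum = rng.standard_normal((n_subj, 600 * 3))

# Two-body model: concatenate both bones into one shape vector, then PCA
# via SVD of the mean-centered data matrix.
X = np.hstack([scap, hum])
mean = X.mean(axis=0)
U, sigma, Vt = np.linalg.svd(X - mean, full_matrices=False)
modes = Vt                                 # coupled modes of variation
stdevs = sigma / np.sqrt(n_subj - 1)       # per-mode standard deviations

# Generate one plausible synthetic scapula-humerus pair by sampling the
# first five mode weights from the fitted Gaussian.
b = rng.standard_normal(5) * stdevs[:5]
synthetic = mean + b @ modes[:5]
synthetic_scapula = synthetic[:1000 * 3].reshape(-1, 3)
synthetic_humerus = synthetic[1000 * 3:].reshape(-1, 3)
```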

Machine learning for classification of pediatric bipolar disorder with and without psychotic symptoms based on thalamic subregional structural volume.

Gao W, Zhang K, Jiao Q, Su L, Cui D, Lu S, Yang R

PubMed · Jun 3, 2025
The thalamus plays a crucial role in sensory processing, emotional regulation, and cognitive function, and its dysregulation may be implicated in psychosis. The aim of the present study was to examine differences in thalamic subregional volumes between pediatric bipolar disorder patients with (P-PBD) and without psychotic symptoms (NP-PBD). Participants, comprising 28 P-PBD patients, 26 NP-PBD patients, and 18 healthy controls (HCs), underwent structural magnetic resonance imaging (sMRI) on a 3.0T scanner. All T1-weighted imaging data were processed with FreeSurfer 7.4.0. Volumetric differences in thalamic subregions among the three groups were compared using analyses of covariance (ANCOVA) and post-hoc analyses. Additionally, we applied a standard support vector classification (SVC) model for pairwise comparisons among the three groups to identify brain regions with significant volumetric differences. The ANCOVA revealed significant volumetric differences in the left pulvinar anterior (L_PuA) and left reuniens medial ventral (L_MV-re) thalamus among the three groups. Post-hoc analysis revealed that patients with P-PBD exhibited decreased volumes in the L_PuA and L_MV-re compared to the NP-PBD group and HCs, respectively. Furthermore, the SVC model revealed that L_MV-re volume had the best capacity to discriminate P-PBD from NP-PBD and HCs. The present findings demonstrate that reduced thalamic subregional volumes in the L_PuA and L_MV-re may be associated with psychotic symptoms in PBD.
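A minimal sketch of the classification step: a linear-kernel SVC on standardized subregional volumes with leave-one-out evaluation. The feature matrix here is random stand-in data for the two volumes the study highlights, and the kernel choice is an assumption, since the abstract specifies only a "standard" SVC.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(2)

# Hypothetical feature matrix: one row per subject, columns are thalamic
# subregion volumes (e.g. L_PuA, L_MV-re); real values would come from
# FreeSurfer 7.4.0 output. Labels: 1 = P-PBD, 0 = NP-PBD.
X = rng.standard_normal((54, 2))          # 28 P-PBD + 26 NP-PBD subjects
y = np.array([1] * 28 + [0] * 26)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")
```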

Deep learning model for differentiating thyroid eye disease and orbital myositis on computed tomography (CT) imaging.

Ha SK, Lin LY, Shi M, Wang M, Han JY, Lee NG

PubMed · Jun 3, 2025
To develop a deep learning model using orbital computed tomography (CT) imaging to accurately distinguish thyroid eye disease (TED) and orbital myositis, two conditions with overlapping clinical presentations. Retrospective, single-center cohort study spanning 12 years, including normal controls and TED and orbital myositis patients with orbital imaging and examination by an oculoplastic surgeon. A deep learning model employing a Visual Geometry Group-16 (VGG-16) network was trained on various binary combinations of TED, orbital myositis, and controls using single slices of coronal orbital CT images. A total of 1628 images from 192 patients (110 TED, 51 orbital myositis, 31 controls) were included. The primary model comparing orbital myositis and TED had an accuracy of 98.4% and an area under the receiver operating characteristic curve (AUC) of 0.999. In detecting orbital myositis, it had a sensitivity, specificity, and F1 score of 0.964, 0.994, and 0.984, respectively. Deep learning models can differentiate TED and orbital myositis based on a single coronal orbital CT image with high accuracy. Their ability to distinguish these conditions based not only on extraocular muscle enlargement but also on other salient features suggests potential applications in diagnostics and treatment beyond these conditions.
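A hedged sketch of the described setup: an ImageNet-pretrained VGG-16 with its classification head replaced for the two-class problem. Weight initialization, layer freezing, and the optimizer are assumptions not stated in the abstract; grayscale CT slices are assumed replicated to three channels to match the backbone input.

```python
import torch
import torch.nn as nn
from torchvision import models

# VGG-16 backbone; pretrained weights and frozen features are assumptions.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                 # fine-tune the head only

model.classifier[6] = nn.Linear(4096, 2)    # TED vs. orbital myositis

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of coronal CT slices
# (grayscale replicated to 3 channels, resized to 224x224 -- assumptions).
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```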

petBrain: A New Pipeline for Amyloid, Tau Tangles and Neurodegeneration Quantification Using PET and MRI

Pierrick Coupé, Boris Mansencal, Floréal Morandat, Sergio Morell-Ortega, Nicolas Villain, Jose V. Manjón, Vincent Planche

arXiv preprint · Jun 3, 2025
INTRODUCTION: Quantification of amyloid plaques (A), neurofibrillary tangles (T2), and neurodegeneration (N) using PET and MRI is critical for Alzheimer's disease (AD) diagnosis and prognosis. Existing pipelines face limitations regarding processing time, variability in tracer types, and challenges in multimodal integration. METHODS: We developed petBrain, a novel end-to-end processing pipeline for amyloid-PET, tau-PET, and structural MRI. It leverages deep learning-based segmentation, standardized biomarker quantification (Centiloid, CenTauR, HAVAs), and simultaneous estimation of A, T2, and N biomarkers. The pipeline is implemented as a web-based platform, requiring no local computational infrastructure or specialized software knowledge. RESULTS: petBrain provides reliable and rapid biomarker quantification, with results comparable to existing pipelines for A and T2. It shows strong concordance with data processed in ADNI databases. The staging and quantification of A/T2/N by petBrain demonstrated good agreement with CSF/plasma biomarkers, clinical status, and cognitive performance. DISCUSSION: petBrain represents a powerful and openly accessible platform for standardized AD biomarker analysis, facilitating applications in clinical research.
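Centiloid scaling, one of the standardized quantifications petBrain reports, is a fixed linear transform of SUVR between two anchor groups (Klunk et al., 2015). The function below shows the transform itself; the anchor values in the example are illustrative, not petBrain's calibration constants.

```python
def centiloid(suvr: float, suvr_yc: float, suvr_ad: float) -> float:
    """Linear Centiloid transform (Klunk et al., 2015).

    suvr    -- subject's amyloid-PET SUVR
    suvr_yc -- mean SUVR of the young-control anchor group (defines 0 CL)
    suvr_ad -- mean SUVR of the typical-AD anchor group (defines 100 CL)
    Anchor values are tracer- and pipeline-specific calibration constants.
    """
    return 100.0 * (suvr - suvr_yc) / (suvr_ad - suvr_yc)

# Illustrative anchors only; real values come from a calibration study.
print(centiloid(suvr=1.42, suvr_yc=1.0, suvr_ad=2.0))   # -> 42.0
```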

Open-PMC-18M: A High-Fidelity Large Scale Medical Dataset for Multimodal Representation Learning

Negin Baghbanzadeh, Sajad Ashkezari, Elham Dolatabadi, Arash Afkanpour

arXiv preprint · Jun 3, 2025
Compound figures, which are multi-panel composites containing diverse subfigures, are ubiquitous in biomedical literature, yet large-scale subfigure extraction remains largely unaddressed. Prior work on subfigure extraction has been limited in both dataset size and generalizability, leaving a critical open question: How does high-fidelity image-text alignment via large-scale subfigure extraction impact representation learning in vision-language models? We address this gap by introducing a scalable subfigure extraction pipeline based on transformer-based object detection, trained on a synthetic corpus of 500,000 compound figures, and achieving state-of-the-art performance on both ImageCLEF 2016 and synthetic benchmarks. Using this pipeline, we release OPEN-PMC-18M, a large-scale, high-quality biomedical vision-language dataset comprising 18 million clinically relevant subfigure-caption pairs spanning radiology, microscopy, and visible light photography. We train and evaluate vision-language models on our curated datasets and show improved performance across retrieval, zero-shot classification, and robustness benchmarks, outperforming existing baselines. We release our dataset, models, and code to support reproducible benchmarks and further study of biomedical vision-language modeling and representation learning.
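The extraction step follows a standard detect-and-crop pattern. The sketch below uses a generic off-the-shelf DETR checkpoint as a stand-in for the paper's detector, which was trained on the 500,000 synthetic compound figures; only the control flow is meant to match.

```python
import torch
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

# Generic DETR checkpoint as a stand-in for the paper's trained detector.
processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

image = Image.new("RGB", (800, 600))        # placeholder compound figure
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Keep confident detections, then crop each box into a subfigure that
# would be paired with its caption text downstream.
results = processor.post_process_object_detection(
    outputs, threshold=0.7, target_sizes=[image.size[::-1]])[0]
subfigures = [image.crop(box.round().tolist()) for box in results["boxes"]]
```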

MobileTurkerNeXt: investigating the detection of Bankart and SLAP lesions using magnetic resonance images.

Gurger M, Esmez O, Key S, Hafeez-Baig A, Dogan S, Tuncer T

PubMed · Jun 2, 2025
The landscape of computer vision is predominantly shaped by two groundbreaking methodologies: transformers and convolutional neural networks (CNNs). In this study, we aim to introduce an innovative mobile CNN architecture designed for orthopedic imaging that efficiently identifies both Bankart and SLAP lesions. Our approach involved the collection of two distinct magnetic resonance (MR) image datasets, with the primary goal of automating the detection of Bankart and SLAP lesions. A novel mobile CNN, dubbed MobileTurkerNeXt, forms the cornerstone of this research. This newly developed model, comprising roughly 1 million trainable parameters, unfolds across four principal stages: the stem, main, downsampling, and output phases. The stem phase incorporates three convolutional layers to initiate feature extraction. In the main phase, we introduce an innovative block, drawing inspiration from the ConvNeXt, EfficientNet, and ResNet architectures. The downsampling phase utilizes patchify average pooling and pixel-wise convolution to effectively reduce spatial dimensions, while the output phase is engineered to yield classification outcomes. Our experimentation with MobileTurkerNeXt spanned three comparative scenarios: Bankart versus normal, SLAP versus normal, and a tripartite comparison of Bankart, SLAP, and normal cases. The model demonstrated exemplary performance, achieving test classification accuracies exceeding 96% across these scenarios. The empirical results underscore MobileTurkerNeXt's superior classification performance in differentiating among Bankart, SLAP, and normal conditions in orthopedic imaging. This highlights the potential of our proposed mobile CNN in advancing diagnostic capabilities and contributing significantly to the field of medical image analysis.
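A speculative PyTorch rendering of the four-stage layout the abstract describes: a three-convolution stem, a residual main block in the ConvNeXt/ResNet spirit, patchify average pooling with pixel-wise convolution for downsampling, and a pooled classification head. Channel widths, kernel sizes, and block counts are all assumptions, since the abstract gives only the stage structure.

```python
import torch
import torch.nn as nn

class MainBlock(nn.Module):
    """Residual block loosely following the stated inspirations
    (ConvNeXt/EfficientNet/ResNet); the paper's exact block is not given."""
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c, c, 7, padding=3, groups=c),   # depthwise conv
            nn.Conv2d(c, 4 * c, 1), nn.GELU(),         # pointwise expand
            nn.Conv2d(4 * c, c, 1))                    # pointwise project

    def forward(self, x):
        return x + self.body(x)                        # residual connection

class Downsample(nn.Module):
    """Patchify average pooling + pixel-wise (1x1) convolution, per the
    abstract's downsampling phase; patch size 2 is an assumption."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.pool = nn.AvgPool2d(2)
        self.proj = nn.Conv2d(c_in, c_out, 1)

    def forward(self, x):
        return self.proj(self.pool(x))

model = nn.Sequential(
    # Stem: three convolutional layers, as described.
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    MainBlock(64), Downsample(64, 128), MainBlock(128),
    # Output phase: pooled features to a 3-way head (Bankart/SLAP/normal).
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 3))

logits = model(torch.randn(2, 3, 224, 224))    # shape (2, 3)
```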

Current AI technologies in cancer diagnostics and treatment.

Tiwari A, Mishra S, Kuo TR

PubMed · Jun 2, 2025
Cancer continues to be a significant international health issue, which demands the invention of new methods for early detection, precise diagnoses, and personalized treatments. Artificial intelligence (AI) has rapidly become a groundbreaking component in the modern era of oncology, offering sophisticated tools across the range of cancer care. In this review, we performed a systematic survey of the current status of AI technologies used for cancer diagnoses and therapeutic approaches. We discuss AI-facilitated imaging diagnostics using a range of modalities such as computed tomography, magnetic resonance imaging, positron emission tomography, ultrasound, and digital pathology, highlighting the growing role of deep learning in detecting early-stage cancers. We also explore applications of AI in genomics and biomarker discovery, liquid biopsies, and non-invasive diagnoses. In therapeutic interventions, AI-based clinical decision support systems, individualized treatment planning, and AI-facilitated drug discovery are transforming precision cancer therapies. The review also evaluates the effects of AI on radiation therapy, robotic surgery, and patient management, including survival predictions, remote monitoring, and AI-facilitated clinical trials. Finally, we discuss important challenges such as data privacy, interpretability, and regulatory issues, and recommend future directions that involve the use of federated learning, synthetic biology, and quantum-boosted AI. This review highlights the groundbreaking potential of AI to revolutionize cancer care by making diagnostics, treatments, and patient management more precise, efficient, and personalized.

Attention-enhanced residual U-Net: lymph node segmentation method with bimodal MRI images.

Qiu J, Chen C, Li M, Hong J, Dong B, Xu S, Lin Y

PubMed · Jun 2, 2025
In medical images, lymph nodes (LNs) have fuzzy boundaries, diverse shapes and sizes, and structures similar to surrounding tissues. To automatically segment uterine LNs from sagittal magnetic resonance imaging (MRI) scans, we combined T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI) images and tested the final results in our proposed model. This study used a dataset of 158 MRI images from patients with FIGO-staged LNs confirmed by pathology. To improve the robustness of the model, data augmentation was applied to expand the dataset. The training data were manually annotated by two experienced radiologists. The DWI and T2 images were fused and fed into a U-Net, to which an efficient channel attention (ECA) module was added; a residual network was also added to the encoding-decoding stage. The resulting model, named Efficient Residual U-Net (ERU-Net), produced the final segmentation results, evaluated by the mean intersection-over-union (mIoU). The experimental results demonstrated that the ERU-Net network showed strong segmentation performance, significantly better than other segmentation networks. The mIoU reached 0.83, and the average pixel accuracy was 0.91. In addition, the precision was 0.90, and the corresponding recall was 0.91. In this study, ERU-Net successfully achieved segmentation of LNs in uterine MRI images. Compared with other segmentation networks, our network has the best segmentation effect on uterine LNs. This provides a valuable reference for doctors to develop more effective and efficient treatment plans.
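The ECA module the authors add has a well-known form (Wang et al., 2020): global average pooling followed by a 1-D convolution across channels that produces per-channel weights. A sketch, together with an assumed residual encoder block taking the fused two-channel T2WI+DWI input:

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention (Wang et al., 2020): squeeze by global
    average pooling, then a 1-D conv across channels; k_size sets the
    cross-channel interaction window."""
    def __init__(self, k_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)

    def forward(self, x):                          # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                     # squeeze: (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)   # channel interaction
        w = torch.sigmoid(y)[:, :, None, None]     # per-channel weights
        return x * w                               # recalibrated features

class ResBlockECA(nn.Module):
    """Residual encoder block with ECA; the block layout is an assumption,
    as the abstract names the components but not their arrangement."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out), nn.ReLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out))
        self.skip = nn.Conv2d(c_in, c_out, 1)
        self.eca = ECA()

    def forward(self, x):
        return torch.relu(self.eca(self.conv(x)) + self.skip(x))

# Fused bimodal input: 2 channels (T2WI + DWI), an assumption.
out = ResBlockECA(2, 64)(torch.randn(1, 2, 128, 128))
```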

Medical World Model: Generative Simulation of Tumor Evolution for Treatment Planning

Yijun Yang, Zhao-Yang Wang, Qiuping Liu, Shuwen Sun, Kang Wang, Rama Chellappa, Zongwei Zhou, Alan Yuille, Lei Zhu, Yu-Dong Zhang, Jieneng Chen

arXiv preprint · Jun 2, 2025
Providing effective treatment and making informed clinical decisions are essential goals of modern medicine and clinical care. We are interested in simulating disease dynamics for clinical decision-making, leveraging recent advances in large generative models. To this end, we introduce the Medical World Model (MeWM), the first world model in medicine that visually predicts future disease states based on clinical decisions. MeWM comprises (i) vision-language models to serve as policy models, and (ii) tumor generative models as dynamics models. The policy model generates action plans, such as clinical treatments, while the dynamics model simulates tumor progression or regression under given treatment conditions. Building on this, we propose an inverse dynamics model that applies survival analysis to the simulated post-treatment tumor, enabling the evaluation of treatment efficacy and the selection of the optimal clinical action plan. As a result, the proposed MeWM simulates disease dynamics by synthesizing post-treatment tumors, with state-of-the-art specificity in Turing tests evaluated by radiologists. Simultaneously, its inverse dynamics model outperforms medical-specialized GPTs in optimizing individualized treatment protocols across all metrics. Notably, MeWM improves clinical decision-making for interventional physicians, boosting the F1-score in selecting the optimal TACE protocol by 13%, paving the way for future integration of medical world models as second readers.
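Schematically, the decision loop reduces to: propose actions with the policy model, roll each one forward with the dynamics model, score the simulated post-treatment state with the inverse dynamics model, and keep the best. The stub below shows only that control flow; every component is a placeholder for the paper's vision-language policy, tumor generator, and survival-analysis scorer.

```python
from dataclasses import dataclass
import random

@dataclass
class Candidate:
    action: str        # e.g. one TACE protocol variant
    post_state: str    # simulated post-treatment tumor (placeholder)
    score: float       # estimated efficacy from survival analysis

def policy_model(state):              # stands in for the VLM policy
    return [f"protocol-{i}" for i in range(3)]

def dynamics_model(state, action):    # stands in for the tumor generator
    return f"{state} | after {action}"

def inverse_dynamics_score(post):     # stands in for survival analysis
    return random.random()

state = "pre-treatment tumor scan"
candidates = []
for action in policy_model(state):
    post = dynamics_model(state, action)
    candidates.append(Candidate(action, post, inverse_dynamics_score(post)))

best = max(candidates, key=lambda c: c.score)   # optimal clinical plan
print("selected plan:", best.action)
```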