
Enhancing Oral Health Diagnostics With Hyperspectral Imaging and Computer Vision: Clinical Dataset Study.

Römer P, Ponciano JJ, Kloster K, Siegberg F, Plaß B, Vinayahalingam S, Al-Nawas B, Kämmerer PW, Klauer T, Thiem D

PubMed · Sep 11 2025
Diseases of the oral cavity, including oral squamous cell carcinoma, pose major challenges to health care worldwide due to their late diagnosis and complicated differentiation of oral tissues. The combination of endoscopic hyperspectral imaging (HSI) and deep learning (DL) models offers a promising way to meet the demand for modern, noninvasive tissue diagnostics. This study presents a large-scale in vivo dataset designed to support DL-based segmentation and classification of healthy oral tissues. This study aimed to develop a comprehensive, annotated endoscopic HSI dataset of the oral cavity and to demonstrate automated, reliable differentiation of intraoral tissue structures by integrating endoscopic HSI with advanced machine learning methods. A total of 226 participants (166 women [73.5%], 60 men [26.5%], aged 24-87 years) were examined using an endoscopic HSI system, capturing spectral data in the range of 500 to 1000 nm. Oral structures in red-green-blue (RGB) and HSI scans were annotated using RectLabel Pro (by Ryo Kawamura). DeepLabv3 (Google Research) with a ResNet-50 backbone was adapted for endoscopic HSI segmentation. The model was trained for 50 epochs on 70% of the dataset, with 30% reserved for evaluation. Performance metrics (precision, recall, and F1-score) confirmed its efficacy in distinguishing oral tissue types. DeepLabv3 (ResNet-101) and U-Net (EfficientNet-B0/ResNet-50) achieved the highest overall F1-scores of 0.857 and 0.84, respectively, particularly excelling in segmenting the mucosa (0.915), retractor (0.94), tooth (0.90), and palate (0.90). Variability analysis confirmed high spectral diversity across tissue classes, supporting the dataset's complexity and authenticity under realistic clinical conditions. The presented dataset addresses a key gap in oral health imaging by developing and validating robust DL algorithms for endoscopic HSI data. It enables accurate classification of oral tissue and paves the way for future applications in individualized noninvasive pathological tissue analysis, early cancer detection, and intraoperative diagnostics of oral diseases.
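
Below is a minimal sketch of the kind of segmentation setup the abstract describes: a torchvision DeepLabv3/ResNet-50 head adapted to a set of oral tissue classes, plus a per-class F1 computation. The class count, input shape, and toy data are illustrative assumptions, not the authors' exact configuration; real HSI cubes would also need the first convolution adapted to more than three spectral bands.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 7  # hypothetical: background, mucosa, retractor, tooth, palate, lip, tongue

model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)

def per_class_f1(pred: torch.Tensor, target: torch.Tensor, num_classes: int):
    """F1 per class from integer label maps of shape (N, H, W)."""
    scores = []
    for c in range(num_classes):
        tp = ((pred == c) & (target == c)).sum().item()
        fp = ((pred == c) & (target != c)).sum().item()
        fn = ((pred != c) & (target == c)).sum().item()
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom > 0 else float("nan"))
    return scores

# Toy forward pass on RGB-like input; HSI cubes with >3 bands would need a
# modified first convolution or a band-selection step.
x = torch.randn(2, 3, 256, 256)
logits = model(x)["out"]                      # (N, NUM_CLASSES, H, W)
pred = logits.argmax(dim=1)
target = torch.randint(0, NUM_CLASSES, (2, 256, 256))
print(per_class_f1(pred, target, NUM_CLASSES))
```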

Training With Local Data Remains Important for Deep Learning MRI Prostate Cancer Detection.

Carere SG, Jewell J, Nasute Fauerbach PV, Emerson DB, Finelli A, Ghai S, Haider MA

PubMed · Sep 11 2025
Domain shift has been shown to have a major detrimental effect on AI model performance; however, prior studies on domain shift for MRI prostate cancer segmentation have been limited to small or heterogeneous cohorts. Our objective was to assess whether prostate cancer segmentation models trained on local MRI data continue to outperform those trained on external data when cohorts exceed 1000 exams. We simulated a multi-institutional consortium using the public PICAI dataset (PICAI-TRAIN: 1241 exams, PICAI-TEST: 259) and a local dataset (LOCAL-TRAIN: 1400 exams, LOCAL-TEST: 308). IRB approval was obtained and consent waived. We compared nnUNet-v2 models trained on the combined data (CENTRAL-TRAIN) and separately on PICAI-TRAIN and LOCAL-TRAIN. Accuracy was evaluated using the open-source PICAI Score on LOCAL-TEST. Significance was tested using bootstrapping. Just 22% (309/1400) of LOCAL-TRAIN exams would be sufficient to match the performance of a model trained on PICAI-TRAIN. The CENTRAL-TRAIN performance was similar to LOCAL-TRAIN performance, with PICAI Scores [95% CI] of 65 [58-71] and 66 [60-72], respectively. Both of these models exceeded the model trained on PICAI-TRAIN alone, which had a score of 58 [51-64] (P < .002). Reducing training set size did not alter these relative trends. Domain shift limits MRI prostate cancer segmentation performance even when training with over 1000 exams from 3 external institutions. Use of local data is paramount at these scales.
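
A minimal sketch of the paired-bootstrap comparison described above follows. The study scores models with the open-source PICAI Score; here a generic exam-level AUROC stands in, and the labels and predictions are simulated rather than taken from the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 308                                   # size of a LOCAL-TEST-like set (assumed)
y = rng.integers(0, 2, n)                 # case-level ground truth (simulated)
p_local = np.clip(y * 0.6 + rng.normal(0.3, 0.2, n), 0, 1)       # locally trained model
p_external = np.clip(y * 0.4 + rng.normal(0.35, 0.25, n), 0, 1)   # externally trained model

def bootstrap_diff(y, p_a, p_b, n_boot=2000):
    """Paired bootstrap over exams: CI for AUROC(a) - AUROC(b) and a rough p-value."""
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))      # resample exams with replacement
        if len(np.unique(y[idx])) < 2:
            continue                               # skip degenerate resamples
        diffs.append(roc_auc_score(y[idx], p_a[idx]) - roc_auc_score(y[idx], p_b[idx]))
    diffs = np.array(diffs)
    return np.percentile(diffs, [2.5, 97.5]), float((diffs <= 0).mean())

ci, p_one_sided = bootstrap_diff(y, p_local, p_external)
print("95% CI for AUROC(local) - AUROC(external):", ci, "approx p:", p_one_sided)
```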

Mechanistic Learning with Guided Diffusion Models to Predict Spatio-Temporal Brain Tumor Growth

Daria Laslo, Efthymios Georgiou, Marius George Linguraru, Andreas Rauschecker, Sabine Muller, Catherine R. Jutzeler, Sarah Bruningk

arXiv preprint · Sep 11 2025
Predicting the spatio-temporal progression of brain tumors is essential for guiding clinical decisions in neuro-oncology. We propose a hybrid mechanistic learning framework that combines a mathematical tumor growth model with a guided denoising diffusion implicit model (DDIM) to synthesize anatomically feasible future MRIs from preceding scans. The mechanistic model, formulated as a system of ordinary differential equations, captures temporal tumor dynamics including radiotherapy effects and estimates future tumor burden. These estimates condition a gradient-guided DDIM, enabling image synthesis that aligns with both predicted growth and patient anatomy. We train our model on the BraTS adult and pediatric glioma datasets and evaluate on 60 axial slices of in-house longitudinal pediatric diffuse midline glioma (DMG) cases. Our framework generates realistic follow-up scans, as judged by spatial similarity metrics. It also introduces tumor growth probability maps, which capture both the clinically relevant extent and the directionality of tumor growth, as shown by the 95th percentile Hausdorff distance. The method enables biologically informed image generation in data-limited scenarios, offering generative space-time predictions that account for mechanistic priors.
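
The mechanistic part of the framework is a system of ordinary differential equations for tumor burden that includes radiotherapy effects. Below is a hedged sketch of one such model, logistic growth with a dose-dependent kill term, solved with SciPy; the parameters and fractionation schedule are illustrative assumptions, not the authors' formulation.

```python
import numpy as np
from scipy.integrate import solve_ivp

rho, K = 0.02, 150.0           # growth rate [1/day] and carrying capacity [cm^3] (assumed)
alpha = 0.06                   # radiotherapy kill coefficient per Gy (assumed)
# 2 Gy weekday fractions between day 30 and day 59 (illustrative schedule)
rt_schedule = {t: 2.0 for t in range(30, 60) if t % 7 < 5}

def dvdt(t, v):
    dose = rt_schedule.get(int(t), 0.0)
    growth = rho * v[0] * (1.0 - v[0] / K)     # logistic proliferation
    kill = alpha * dose * v[0]                 # simplified instantaneous RT effect
    return [growth - kill]

sol = solve_ivp(dvdt, t_span=(0, 120), y0=[10.0],
                t_eval=np.arange(0, 121, 10), max_step=0.5)
print(np.round(sol.y[0], 2))                   # predicted tumor burden over time
```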

The Combined Use of Cervical Ultrasound and Deep Learning Improves the Detection of Patients at Risk for Spontaneous Preterm Delivery.

Sejer EPF, Pegios P, Lin M, Bashir Z, Wulff CB, Christensen AN, Nielsen M, Feragen A, Tolsgaard MG

PubMed · Sep 11 2025
Preterm birth is the leading cause of neonatal mortality and morbidity. While ultrasound-based cervical length measurement is the current standard for predicting preterm birth, its performance is limited. Artificial intelligence (AI) has shown potential in ultrasound analysis, yet only a few small-scale studies have evaluated its use for predicting preterm birth. We aimed to develop and validate an AI model for spontaneous preterm birth prediction from cervical ultrasound images and to compare its performance to cervical length. In this multicenter study, we developed a deep learning-based AI model using data from women who underwent cervical ultrasound scans as part of antenatal care between 2008 and 2018 in Denmark. Indications for ultrasound were not systematically recorded, and scans were likely performed due to risk factors or symptoms of preterm labor. We compared the performance of the AI model with cervical length measurement for spontaneous preterm birth prediction by assessing the area under the curve (AUC), sensitivity, specificity, and likelihood ratios. Subgroup analyses evaluated model performance across baseline characteristics, and saliency heat maps identified the anatomical features that influenced AI model predictions the most. The final dataset included 4,224 pregnancies and 7,862 cervical ultrasound images, with 50% resulting in spontaneous preterm birth. The AI model surpassed cervical length for predicting spontaneous preterm birth before 37 weeks, with a sensitivity of 0.51 (95% CI 0.50-0.53) versus 0.41 (0.39-0.42) at a fixed specificity of 0.85, p<0.001, and a higher AUC of 0.75 (0.74-0.76) versus 0.67 (0.66-0.68), p<0.001. For identifying late preterm births at 34-37 weeks, the AI model had 36.6% higher sensitivity than cervical length (0.47 versus 0.34, p<0.001). The AI model achieved higher AUCs across all subgroups, especially at earlier gestational ages. Saliency heat maps indicated that in 54% of preterm birth cases, the AI model focused on the posterior inner lining of the lower uterine segment, suggesting it incorporates more data than cervical length alone. To our knowledge, this is the first large-scale, multicenter study demonstrating that AI is more sensitive than cervical length measurement in identifying spontaneous preterm births across multiple characteristics, 19 hospital sites, and different ultrasound machines. The AI model performs particularly well at earlier gestational ages, enabling more timely prophylactic interventions.
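
A small sketch of the headline evaluation, sensitivity at a fixed specificity of 0.85 and AUC, is shown below with simulated labels and scores; significance testing of AUC differences is omitted, and the cervical-length behaviour is only a toy stand-in.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 4224)                          # 1 = spontaneous preterm birth (simulated)
ai_score = np.clip(y * 0.5 + rng.normal(0.3, 0.25, y.size), 0, 1)
cx_length = 28.0 - y * 6 + rng.normal(0, 6, y.size)   # shorter cervix -> higher risk (toy)
cx_risk = -cx_length                                  # turn length into a "higher is riskier" score

def sens_at_spec(y_true, score, spec_target=0.85):
    """Sensitivity at the best operating point whose specificity meets the target."""
    fpr, tpr, _ = roc_curve(y_true, score)
    ok = (1 - fpr) >= spec_target
    return float(tpr[ok].max())

for name, s in [("AI model", ai_score), ("cervical length", cx_risk)]:
    print(name,
          "AUC:", round(roc_auc_score(y, s), 3),
          "sensitivity @ spec 0.85:", round(sens_at_spec(y, s), 3))
```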

Novel BDefRCNLSTM: an efficient ensemble deep learning approach for enhanced brain tumor detection and categorization with segmentation.

Janapati M, Akthar S

PubMed · Sep 11 2025
Brain tumour detection and classification are critical for improving patient prognosis and treatment planning. However, manual identification from magnetic resonance imaging (MRI) scans is time-consuming, error-prone, and reliant on expert interpretation. The increasing complexity of tumour characteristics necessitates automated solutions to enhance accuracy and efficiency. This study introduces a novel ensemble deep learning model, boosted deformable and residual convolutional network with bi-directional convolutional long short-term memory (BDefRCNLSTM), for the classification and segmentation of brain tumours. The proposed framework integrates entropy-based local binary pattern (ELBP) for extracting spatial semantic features and employs the enhanced sooty tern optimisation (ESTO) algorithm for optimal feature selection. Additionally, an improved X-Net model is utilised for precise segmentation of tumour regions. The model is trained and evaluated on Figshare, Brain MRI, and Kaggle datasets using multiple performance metrics. Experimental results demonstrate that the proposed BDefRCNLSTM model achieves over 99% accuracy in both classification and segmentation, outperforming existing state-of-the-art approaches. The findings establish the proposed approach as a clinically viable solution for automated brain tumour diagnosis. The integration of optimised feature selection and advanced segmentation techniques improves diagnostic accuracy, potentially assisting radiologists in making faster and more reliable decisions.
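
As one possible reading of the entropy-based local binary pattern (ELBP) feature step mentioned above, the sketch below computes uniform LBP codes per patch and summarizes each patch by the Shannon entropy of its code histogram. The patch size, P/R settings, and entropy definition are assumptions for illustration, not the paper's specification.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def elbp_features(image: np.ndarray, patch: int = 32, P: int = 8, R: float = 1.0):
    """One Shannon-entropy value of the uniform-LBP histogram per non-overlapping patch."""
    lbp = local_binary_pattern(image, P, R, method="uniform")
    n_bins = P + 2                                   # number of uniform LBP codes
    feats = []
    for i in range(0, image.shape[0] - patch + 1, patch):
        for j in range(0, image.shape[1] - patch + 1, patch):
            hist, _ = np.histogram(lbp[i:i + patch, j:j + patch],
                                   bins=n_bins, range=(0, n_bins), density=True)
            hist = hist[hist > 0]
            feats.append(float(-np.sum(hist * np.log2(hist))))
    return np.array(feats)

mri_slice = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in grayscale MRI slice
print(elbp_features(mri_slice).shape)                           # (16,) for 32x32 patches
```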

Neurodevelopmental deviations in schizophrenia: Evidence from multimodal connectome-based brain ages.

Fan YS, Yang P, Zhu Y, Jing W, Xu Y, Xu Y, Guo J, Lu F, Yang M, Huang W, Chen H

PubMed · Sep 11 2025
Pathologic processes in schizophrenia originate early in brain development, leading to brain alterations detectable via structural and functional magnetic resonance imaging (MRI). Recent MRI studies have sought to characterize disease effects from a brain age perspective, but developmental deviations from the typical brain age trajectory in youths with schizophrenia have yet to be established. This study investigated brain development deviations in early-onset schizophrenia (EOS) patients by applying machine learning algorithms to structural and functional MRI data. Multimodal MRI data, including T1-weighted MRI (T1w-MRI), diffusion MRI, and resting-state functional MRI (rs-fMRI) data, were collected from 80 antipsychotic-naive first-episode EOS patients and 91 typically developing (TD) controls. The morphometric similarity connectome (MSC), structural connectome (SC), and functional connectome (FC) were separately constructed from these three modalities. Using these connectivity features, eight brain age estimation models were first trained on the TD group, the best of which was then used to predict brain ages in patients. Individual brain age gaps were assessed as brain age minus chronological age. Both the SC and MSC features performed well in brain age estimation, whereas the FC features did not. Compared with the TD controls, the EOS patients showed increased absolute brain age gaps when using the SC or MSC features, with opposite trends between childhood and adolescence. These increased brain age gaps in EOS patients were positively correlated with the severity of their clinical symptoms. These findings from a multimodal brain age perspective suggest that advanced brain age gaps exist early in youths with schizophrenia.
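
The brain-age-gap logic is straightforward to sketch: fit a regressor on connectome features of typically developing controls only, apply it to patients, and subtract chronological age. The toy example below uses an SVR on simulated features; the feature dimensionality and the choice of regressor are assumptions (the study compared eight estimation models), and the data are not from the study.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_td, n_eos, n_feat = 91, 80, 300                   # sample sizes from the abstract; feature dim assumed
age_td = rng.uniform(8, 18, n_td)
X_td = rng.normal(size=(n_td, n_feat)) + age_td[:, None] * 0.05        # simulated age-related signal
age_eos = rng.uniform(8, 18, n_eos)
X_eos = rng.normal(size=(n_eos, n_feat)) + age_eos[:, None] * 0.05 + 0.3  # simulated patient shift

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X_td, age_td)                             # trained on typically developing controls only

brain_age_gap = model.predict(X_eos) - age_eos      # brain age minus chronological age
print("mean absolute brain age gap in patients:", round(float(np.abs(brain_age_gap).mean()), 2))
```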

MetaLLMiX: An XAI-Aided LLM-Meta-Learning Based Approach for Hyperparameter Optimization

Mohammed Tiouti, Mohamed Bal-Ghaoui

arXiv preprint · Sep 11 2025
Effective model and hyperparameter selection remains a major challenge in deep learning, often requiring extensive expertise and computation. While AutoML and large language models (LLMs) promise automation, current LLM-based approaches rely on trial and error and expensive APIs, which provide limited interpretability and generalizability. We propose MetaLLMiX, a zero-shot hyperparameter optimization framework combining meta-learning, explainable AI, and efficient LLM reasoning. By leveraging historical experiment outcomes with SHAP explanations, MetaLLMiX recommends optimal hyperparameters and pretrained models without additional trials. We further employ an LLM-as-judge evaluation to control output format, accuracy, and completeness. Experiments on eight medical imaging datasets using nine open-source lightweight LLMs show that MetaLLMiX achieves competitive or superior performance to traditional HPO methods while drastically reducing computational cost. Our local deployment outperforms prior API-based approaches, achieving optimal results on 5 of 8 tasks, response time reductions of 99.6-99.9%, and the fastest training times on 6 datasets (2.4-15.7x faster), maintaining accuracy within 1-5% of best-performing baselines.
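
One ingredient of the described pipeline, a surrogate model fit on historical experiment outcomes and explained with SHAP, can be sketched as below; the resulting attributions are the kind of evidence that could then be placed in an LLM prompt. The hyperparameter columns and outcome table are illustrative assumptions, and the LLM reasoning and LLM-as-judge steps are omitted.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical log of 200 past experiments (hyperparameters -> validation accuracy).
history = pd.DataFrame({
    "learning_rate": 10 ** rng.uniform(-5, -2, 200),
    "batch_size": rng.choice([8, 16, 32, 64], 200),
    "num_layers": rng.integers(2, 8, 200),
})
history["val_accuracy"] = (
    0.9 - 0.05 * np.abs(np.log10(history["learning_rate"]) + 3.5)
    + 0.002 * history["num_layers"] + rng.normal(0, 0.01, 200)
)

X, y = history.drop(columns="val_accuracy"), history["val_accuracy"]
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(surrogate)
shap_values = explainer.shap_values(X)
mean_impact = np.abs(shap_values).mean(axis=0)       # average attribution per hyperparameter
print(dict(zip(X.columns, np.round(mean_impact, 4))))
```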

Virtual staining for 3D X-ray histology of bone implants

Sarah C. Irvine, Christian Lucas, Diana Krüger, Bianca Guedert, Julian Moosmann, Berit Zeller-Plumhoff

arXiv preprint · Sep 11 2025
Three-dimensional X-ray histology techniques offer a non-invasive alternative to conventional 2D histology, enabling volumetric imaging of biological tissues without the need for physical sectioning or chemical staining. However, the inherent greyscale image contrast of X-ray tomography limits its biochemical specificity compared to traditional histological stains. Within digital pathology, deep learning-based virtual staining has demonstrated utility in simulating stained appearances from label-free optical images. In this study, we extend virtual staining to the X-ray domain by applying cross-modality image translation to generate artificially stained slices from synchrotron-radiation-based micro-CT scans. Using over 50 co-registered image pairs of micro-CT and toluidine blue-stained histology from bone-implant samples, we trained a modified CycleGAN network tailored for limited paired data. Whole-slide histology images were downsampled to match the voxel size of the CT data, with on-the-fly data augmentation for patch-based training. The model incorporates pixelwise supervision and greyscale consistency terms, producing histologically realistic colour outputs while preserving high-resolution structural detail. Our method outperformed Pix2Pix and standard CycleGAN baselines across SSIM, PSNR, and LPIPS metrics. Once trained, the model can be applied to full CT volumes to generate virtually stained 3D datasets, enhancing interpretability without additional sample preparation. While features such as new bone formation could be reproduced, some variability in the depiction of implant degradation layers highlights the need for further training data and refinement. This work introduces virtual staining to 3D X-ray imaging and offers a scalable route for chemically informative, label-free tissue characterisation in biomedical research.
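
A hedged sketch of the loss composition described above follows: standard CycleGAN-style adversarial and cycle terms extended with a pixelwise L1 term (possible because the patches are co-registered) and a greyscale-consistency term. The generator/discriminator outputs are placeholders and the loss weights are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def greyscale(rgb: torch.Tensor) -> torch.Tensor:
    """Luminance of an RGB tensor of shape (N, 3, H, W)."""
    w = torch.tensor([0.299, 0.587, 0.114], device=rgb.device).view(1, 3, 1, 1)
    return (rgb * w).sum(dim=1, keepdim=True)

def generator_loss(ct, stained_real, fake_stained, cycled_ct, disc_logits,
                   w_adv=1.0, w_cyc=10.0, w_pix=5.0, w_grey=1.0):
    adv = F.mse_loss(disc_logits, torch.ones_like(disc_logits))   # LSGAN-style adversarial term
    cyc = F.l1_loss(cycled_ct, ct)                                 # cycle consistency
    pix = F.l1_loss(fake_stained, stained_real)                    # pixelwise supervision (paired patches)
    grey = F.l1_loss(greyscale(fake_stained), ct)                  # greyscale consistency with the CT input
    return w_adv * adv + w_cyc * cyc + w_pix * pix + w_grey * grey

# Toy shapes: single-channel CT patches, 3-channel virtual stains, patch-level discriminator map.
ct = torch.rand(2, 1, 128, 128)
stained_real = torch.rand(2, 3, 128, 128)
fake_stained = torch.rand(2, 3, 128, 128, requires_grad=True)
cycled_ct = torch.rand(2, 1, 128, 128, requires_grad=True)
disc_logits = torch.rand(2, 1, 14, 14)
print(generator_loss(ct, stained_real, fake_stained, cycled_ct, disc_logits))
```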

Enhancing 3D Medical Image Understanding with Pretraining Aided by 2D Multimodal Large Language Models

Qiuhui Chen, Xuancheng Yao, Huping Ye, Yi Hong

arXiv preprint · Sep 11 2025
Understanding 3D medical image volumes is critical in the medical field, yet existing 3D medical convolution- and transformer-based self-supervised learning (SSL) methods often lack deep semantic comprehension. Recent advancements in multimodal large language models (MLLMs) provide a promising approach to enhance image understanding through text descriptions. To leverage these 2D MLLMs for improved 3D medical image understanding, we propose Med3DInsight, a novel pretraining framework that integrates 3D image encoders with 2D MLLMs via a specially designed plane-slice-aware transformer module. Additionally, our model employs a partial optimal transport-based alignment, demonstrating greater tolerance to the noise potentially introduced by LLM-generated content. Med3DInsight introduces a new paradigm for scalable multimodal 3D medical representation learning without requiring human annotations. Extensive experiments demonstrate our state-of-the-art performance on two downstream tasks, i.e., segmentation and classification, across various public datasets with CT and MRI modalities, outperforming current SSL methods. Med3DInsight can be seamlessly integrated into existing 3D medical image understanding networks, potentially enhancing their performance. Our source code, generated datasets, and pre-trained models will be available at https://github.com/Qybc/Med3DInsight.
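
The partial optimal transport alignment can be illustrated with the POT library: only a fraction of the feature mass must be matched, so noisy LLM-derived features can remain unaligned. The feature sets, cosine cost, and matched-mass fraction below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
feat_3d = rng.normal(size=(32, 256))     # 3D-encoder patch features (assumed shapes)
feat_2d = rng.normal(size=(48, 256))     # 2D MLLM slice/text features (assumed shapes)

def cosine_cost(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return 1.0 - a @ b.T

M = cosine_cost(feat_3d, feat_2d)
p = np.full(feat_3d.shape[0], 1.0 / feat_3d.shape[0])
q = np.full(feat_2d.shape[0], 1.0 / feat_2d.shape[0])

# Transport only 80% of the mass; the remainder (e.g. noisy captions) stays unmatched.
plan = ot.partial.partial_wasserstein(p, q, M, m=0.8)
alignment_cost = float((plan * M).sum())
print("partial-OT alignment cost:", round(alignment_cost, 4))
```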

Medverse: A Universal Model for Full-Resolution 3D Medical Image Segmentation, Transformation and Enhancement

Jiesi Hu, Jianfeng Cao, Yanwu Yang, Chenfei Ye, Yixuan Zhang, Hanyang Peng, Ting Ma

arXiv preprint · Sep 11 2025
In-context learning (ICL) offers a promising paradigm for universal medical image analysis, enabling models to perform diverse image processing tasks without retraining. However, current ICL models for medical imaging remain limited in two critical aspects: they cannot simultaneously achieve high-fidelity predictions and global anatomical understanding, and there is no unified model trained across diverse medical imaging tasks (e.g., segmentation and enhancement) and anatomical regions. As a result, the full potential of ICL in medical imaging remains underexplored. Thus, we present Medverse, a universal ICL model for 3D medical imaging, trained on 22 datasets covering diverse tasks in universal image segmentation, transformation, and enhancement across multiple organs, imaging modalities, and clinical centers. Medverse employs a next-scale autoregressive in-context learning framework that progressively refines predictions from coarse to fine, generating consistent, full-resolution volumetric outputs and enabling multi-scale anatomical awareness. We further propose a blockwise cross-attention module that facilitates long-range interactions between context and target inputs while preserving computational efficiency through spatial sparsity. Medverse is extensively evaluated on a broad collection of held-out datasets covering previously unseen clinical centers, organs, species, and imaging modalities. Results demonstrate that Medverse substantially outperforms existing ICL baselines and establishes a novel paradigm for in-context learning. Code and model weights are publicly available at https://github.com/jiesihu/Medverse.
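
A minimal sketch of a blockwise cross-attention layer in the spirit of the abstract is given below: target tokens attend only to context tokens within the same block of the flattened token sequence, keeping the attention cost sparse. Block size, embedding width, and token layout are illustrative assumptions, not the Medverse architecture.

```python
import torch
import torch.nn as nn

class BlockwiseCrossAttention(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4, block: int = 64):
        super().__init__()
        self.block = block
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, target: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # target, context: (N, T, dim) with the same token ordering (e.g. flattened voxel patches)
        n, t, d = target.shape
        chunks = []
        for start in range(0, t, self.block):          # attend block by block
            sl = slice(start, min(start + self.block, t))
            attended, _ = self.attn(target[:, sl], context[:, sl], context[:, sl])
            chunks.append(attended)
        return torch.cat(chunks, dim=1)

layer = BlockwiseCrossAttention()
tgt = torch.randn(2, 256, 128)        # target-volume tokens
ctx = torch.randn(2, 256, 128)        # in-context example tokens, same layout
print(layer(tgt, ctx).shape)          # torch.Size([2, 256, 128])
```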
