Page 178 of 6526512 results

Römer P, Ponciano JJ, Kloster K, Siegberg F, Plaß B, Vinayahalingam S, Al-Nawas B, Kämmerer PW, Klauer T, Thiem D

PubMed · Sep 11, 2025
Diseases of the oral cavity, including oral squamous cell carcinoma, pose major challenges to health care worldwide because they are often diagnosed late and oral tissues are difficult to differentiate. The combination of endoscopic hyperspectral imaging (HSI) and deep learning (DL) models offers a promising route to modern, noninvasive tissue diagnostics. This study presents a large-scale in vivo dataset designed to support DL-based segmentation and classification of healthy oral tissues. This study aimed to develop a comprehensive, annotated endoscopic HSI dataset of the oral cavity and to demonstrate automated, reliable differentiation of intraoral tissue structures by integrating endoscopic HSI with advanced machine learning methods. A total of 226 participants (166 women [73.5%], 60 men [26.5%], aged 24-87 years) were examined using an endoscopic HSI system, capturing spectral data in the range of 500 to 1000 nm. Oral structures in RGB and HSI scans were annotated using RectLabel Pro (by Ryo Kawamura). DeepLabv3 (Google Research) with a ResNet-50 backbone was adapted for endoscopic HSI segmentation. The model was trained for 50 epochs on 70% of the dataset and evaluated on the remaining 30%. Performance metrics (precision, recall, and F1-score) confirmed its efficacy in distinguishing oral tissue types. DeepLabv3 (ResNet-101) and U-Net (EfficientNet-B0/ResNet-50) achieved the highest overall F1-scores of 0.857 and 0.84, respectively, particularly excelling in segmenting the mucosa (0.915), retractor (0.94), tooth (0.90), and palate (0.90). Variability analysis confirmed high spectral diversity across tissue classes, supporting the dataset's complexity and authenticity under realistic clinical conditions. The presented dataset addresses a key gap in oral health imaging by enabling the development and validation of robust DL algorithms for endoscopic HSI data.
It enables accurate classification of oral tissue and paves the way for future applications in individualized noninvasive pathological tissue analysis, early cancer detection, and intraoperative diagnostics of oral diseases.
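The per-class precision, recall, and F1 figures reported above can be computed from flattened label maps. A minimal sketch; the class names and four-pixel "image" below are illustrative, not drawn from the study's dataset:

```python
def per_class_f1(y_true, y_pred, classes):
    """Return {class: (precision, recall, f1)} from flat label lists."""
    scores = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[c] = (prec, rec, f1)
    return scores

# Toy example: two tissue classes over a 4-pixel "image"
truth = ["mucosa", "mucosa", "tooth", "tooth"]
pred = ["mucosa", "tooth", "tooth", "tooth"]
print(per_class_f1(truth, pred, ["mucosa", "tooth"]))
```

In practice these sums run over every pixel of every evaluation image, per tissue class.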

Yang X, Yu L, Tian Y, Yin G, Lv Q, Guo J

PubMed · Sep 11, 2025
This study aims to investigate the feasibility of ultrasound technology for assessing muscle atrophy progression in a head-down bed rest model, providing a reference for monitoring muscle functional status in a microgravity environment. A 40-day head-down bed rest model using rhesus monkeys was established to simulate the microgravity environment in space. A dual-encoder parallel deep learning model was developed to extract features from B-mode ultrasound images and radiofrequency signals separately. Additionally, an up-sampling module incorporating the Coordinate Attention mechanism and the Pixel-attention-guided fusion module was designed to enhance direction and position awareness, as well as improve the recognition of target boundaries and detailed features. The evaluation efficacy of single ultrasound signals and fused signals was compared. The assessment accuracy reached approximately 87% through inter-individual cross-validation in 6 rhesus monkeys. The fusion of ultrasound signals significantly enhanced classification performance compared to using single modalities, such as B-mode images or radiofrequency signals. This study demonstrates that ultrasound technology combined with deep learning algorithms can effectively assess disuse muscle atrophy. The proposed approach offers a promising reference for diagnosing muscle atrophy under long-term immobilization, with significant application value and potential for widespread adoption.
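The inter-individual cross-validation used here can be sketched as a leave-one-subject-out split, so no animal contributes samples to both training and test sets. The six subjects, placeholder features, and labels below are invented for illustration:

```python
def leave_one_subject_out(samples):
    """samples: list of (subject_id, features, label) tuples.
    Yields (held_out_subject, train, test) splits."""
    subjects = sorted({s[0] for s in samples})
    for held_out in subjects:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test

# 6 hypothetical subjects, 3 samples each; features/labels are placeholders
data = [(m, None, m % 2) for m in range(6) for _ in range(3)]
splits = list(leave_one_subject_out(data))
print(len(splits))  # one fold per subject
```

Averaging fold accuracies over such splits is what yields a subject-independent estimate like the ~87% reported above.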

Chamseddine E, Tlig L, Chaari L, Sayadi M

PubMed · Sep 11, 2025
Robust 3D brain tumor MRI segmentation is critical for diagnosis and treatment planning. However, tumor heterogeneity, irregular shapes, and complex textures make it challenging. Deep learning has transformed medical image analysis by extracting features directly from the data, greatly enhancing segmentation accuracy. Deep models can be further strengthened with modules such as texture-sensitive customized convolution layers and attention mechanisms, which let the model focus on pertinent regions and sharpen boundary delineation. In this paper, a texture-aware deep learning method is proposed that improves the U-Net architecture by adding a trainable Gabor convolution layer at the input to capture rich textural features. These features are fused in parallel with standard convolutional outputs to better represent tumors. The model also utilizes dual attention modules: Squeeze-and-Excitation blocks in the encoder to dynamically recalibrate channel-wise features, and Attention Gates on the skip connections to suppress trivial areas and weight tumor regions. The behavior of each module is examined through explainable artificial intelligence methods to ensure interpretability. To address class imbalance, a weighted combined loss function is applied. The model achieves Dice coefficients of 91.62%, 89.92%, and 88.86% for whole tumor, tumor core, and enhancing tumor, respectively, on the BraTS2021 dataset. Large-scale quantitative and qualitative evaluations on BraTS benchmarks demonstrate the accuracy and robustness of the proposed model. The proposed approach outperforms the benchmark U-Net and other state-of-the-art segmentation methods, offering a robust and interpretable solution for clinical use.
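A weighted combined loss of the kind described (Dice term plus class-weighted cross-entropy) can be sketched as follows. The weighting scheme and mixing coefficients here are generic placeholders, not the paper's exact formulation:

```python
import math

def dice(pred, target, eps=1e-6):
    """Soft Dice over flat probability/binary lists."""
    inter = sum(p * t for p, t in zip(pred, target))
    return (2 * inter + eps) / (sum(pred) + sum(target) + eps)

def weighted_combined_loss(pred, target, w_dice=0.5, w_ce=0.5, tumor_weight=2.0):
    """Mix (1 - Dice) with cross-entropy that up-weights the tumor class."""
    ce = -sum((tumor_weight if t == 1 else 1.0) *
              (t * math.log(max(p, 1e-9)) + (1 - t) * math.log(max(1 - p, 1e-9)))
              for p, t in zip(pred, target)) / len(pred)
    return w_dice * (1 - dice(pred, target)) + w_ce * ce

print(dice([1, 1, 0, 0], [1, 0, 0, 0]))  # 2*1/(2+1), about 0.667
```

Up-weighting the rare tumor class in the cross-entropy term is one standard way to counter the foreground/background imbalance of brain tumor volumes.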

Fan YS, Yang P, Zhu Y, Jing W, Xu Y, Xu Y, Guo J, Lu F, Yang M, Huang W, Chen H

PubMed · Sep 11, 2025
Pathologic schizophrenia processes originate early in brain development, leading to detectable brain alterations via structural and functional magnetic resonance imaging (MRI). Recent MRI studies have sought to characterize disease effects from a brain age perspective, but developmental deviations from the typical brain age trajectory in youths with schizophrenia remain unestablished. This study investigated brain development deviations in early-onset schizophrenia (EOS) patients by applying machine learning algorithms to structural and functional MRI data. Multimodal MRI data, including T1-weighted MRI (T1w-MRI), diffusion MRI, and resting-state functional MRI (rs-fMRI) data, were collected from 80 antipsychotic-naive first-episode EOS patients and 91 typically developing (TD) controls. The morphometric similarity connectome (MSC), structural connectome (SC), and functional connectome (FC) were separately constructed by using these three modalities. According to these connectivity features, eight brain age estimation models were first trained with the TD group, the best of which was then used to predict brain ages in patients. Individual brain age gaps were assessed as brain ages minus chronological ages. Both the SC and MSC features performed well in brain age estimation, whereas the FC features did not. Compared with the TD controls, the EOS patients showed increased absolute brain age gaps when using the SC or MSC features, with opposite trends between childhood and adolescence. These increased brain age gaps for EOS patients were positively correlated with the severity of their clinical symptoms. These findings from a multimodal brain age perspective suggest that advanced brain age gaps exist early in youths with schizophrenia.
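The brain age gap (predicted minus chronological age) and its correlation with symptom severity can be sketched directly. All ages and symptom scores below are invented for illustration:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

predicted = [14.2, 16.8, 13.1, 17.5]      # model-estimated brain ages (years)
chronological = [13.0, 15.0, 12.5, 15.5]  # actual ages
gaps = [p - c for p, c in zip(predicted, chronological)]
severity = [20, 35, 12, 40]               # hypothetical symptom scores
print(gaps, pearson(gaps, severity))
```

A positive correlation of this kind is what the study reports between enlarged brain age gaps and clinical symptom severity.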

Sejer EPF, Pegios P, Lin M, Bashir Z, Wulff CB, Christensen AN, Nielsen M, Feragen A, Tolsgaard MG

PubMed · Sep 11, 2025
Preterm birth is the leading cause of neonatal mortality and morbidity. While ultrasound-based cervical length measurement is the current standard for predicting preterm birth, its performance is limited. Artificial intelligence (AI) has shown potential in ultrasound analysis, yet only a few small-scale studies have evaluated its use for predicting preterm birth. We aimed to develop and validate an AI model for spontaneous preterm birth prediction from cervical ultrasound images and to compare its performance with cervical length. In this multicenter study, we developed a deep learning-based AI model using data from women who underwent cervical ultrasound scans as part of antenatal care between 2008 and 2018 in Denmark. Indications for ultrasound were not systematically recorded, and scans were likely performed due to risk factors or symptoms of preterm labor. We compared the performance of the AI model with cervical length measurement for spontaneous preterm birth prediction by assessing the area under the curve (AUC), sensitivity, specificity, and likelihood ratios. Subgroup analyses evaluated model performance across baseline characteristics, and saliency heat maps identified the anatomical features that most influenced AI model predictions. The final dataset included 4,224 pregnancies and 7,862 cervical ultrasound images, with 50% resulting in spontaneous preterm birth. The AI model surpassed cervical length for predicting spontaneous preterm birth before 37 weeks, with a sensitivity of 0.51 (95% CI 0.50-0.53) versus 0.41 (0.39-0.42) at a fixed specificity of 0.85, p<0.001, and a higher AUC of 0.75 (0.74-0.76) versus 0.67 (0.66-0.68), p<0.001. For identifying late preterm births at 34-37 weeks, the AI model had 36.6% higher sensitivity than cervical length (0.47 versus 0.34, p<0.001). The AI model achieved higher AUCs across all subgroups, especially at earlier gestational ages.
Saliency heat maps indicated that in 54% of preterm birth cases, the AI model focused on the posterior inner lining of the lower uterine segment, suggesting it incorporates more information than cervical length alone. To our knowledge, this is the first large-scale, multicenter study demonstrating that AI is more sensitive than cervical length measurement in identifying spontaneous preterm births across multiple characteristics, 19 hospital sites, and different ultrasound machines. The AI model performs particularly well at earlier gestational ages, enabling more timely prophylactic interventions.
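Comparing sensitivity at a fixed specificity, as done above, amounts to choosing the score threshold at which the required fraction of negatives is correctly rejected and then measuring recall on the positives. A minimal sketch with synthetic scores and labels:

```python
import math

def sensitivity_at_specificity(scores, labels, target_spec=0.85):
    """Sensitivity at the lowest threshold achieving the target specificity."""
    neg = sorted(s for s, l in zip(scores, labels) if l == 0)
    pos = [s for s, l in zip(scores, labels) if l == 1]
    k = math.ceil(target_spec * len(neg))  # negatives that must score at or below the threshold
    thr = neg[k - 1]
    return sum(1 for s in pos if s > thr) / len(pos)

# Synthetic example: 20 negatives with low scores, 4 positives
neg_scores = [i / 100 for i in range(20)]  # 0.00 .. 0.19
pos_scores = [0.10, 0.18, 0.30, 0.40]
scores = neg_scores + pos_scores
labels = [0] * 20 + [1] * 4
print(sensitivity_at_specificity(scores, labels))
```

Fixing specificity makes the two methods directly comparable at the same false-positive rate, which is why the study reports 0.51 versus 0.41 at specificity 0.85.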

Patwardhan V, Balchander D, Fussell D, Joseph J, Joshi A, Troutt H, Ling J, Wei K, Weinberg B, Chow D

PubMed · Sep 11, 2025
Patients increasingly have direct access to their medical records. Radiology reports are complex and difficult for patients to understand and contextualize. One solution is to use large language models (LLMs) to translate reports into patient-accessible language. This review summarizes the existing literature on using LLMs for the simplification of patient radiology reports. We also propose guidelines for best practices in future studies. A systematic review was performed following PRISMA guidelines. Studies published and indexed in PubMed, Scopus, and Google Scholar up to February 2025 were included. Inclusion criteria comprised studies that used large language models to simplify diagnostic or interventional radiology reports for patients and evaluated readability. Exclusion criteria included non-English manuscripts, abstracts, conference presentations, review articles, retracted articles, and studies that did not focus on report simplification. The Mixed Methods Appraisal Tool (MMAT) 2018 was used for bias assessment. Given the diversity of results, studies were categorized by reporting method, and qualitative and quantitative findings were summarized to present key insights. A total of 2126 citations were identified, and 17 were included in the qualitative analysis. Of these, 71% utilized a single LLM and 29% utilized multiple LLMs. The most prevalent LLMs included ChatGPT, Google Bard/Gemini, Bing Chat, Claude, and Microsoft Copilot. All studies that assessed quantitative readability metrics (n=12) reported improvements. Assessment of simplified reports via qualitative methods demonstrated varied results between physician and non-physician raters. LLMs demonstrate the potential to enhance the accessibility of radiology reports for patients, but the literature is limited by heterogeneity of inputs, models, and evaluation metrics across existing studies.
We propose a set of best practice guidelines to standardize future LLM research.
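The quantitative readability metrics these studies report are typically formulas such as the Flesch-Kincaid grade level. A minimal sketch using a deliberately crude vowel-group syllable counter (illustrative only, not a validated implementation; the example reports are invented):

```python
import re

def syllables(word):
    """Crude syllable estimate: count vowel groups (illustrative only)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text):
    """Flesch-Kincaid grade level from sentence/word/syllable counts."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syl = sum(syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syl / len(words)) - 15.59

original = "There is no evidence of intracranial hemorrhage or acute territorial infarction."
simplified = "We found no bleeding or new stroke in your brain."
print(fk_grade(original), fk_grade(simplified))
```

A lower grade after simplification is the improvement the n=12 studies above report; standardizing on one such metric is one obvious best-practice candidate.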

Yihao Liu, Junyu Chen, Lianrui Zuo, Shuwen Wei, Brian D. Boyd, Carmen Andreescu, Olusola Ajilore, Warren D. Taylor, Aaron Carass, Bennett A. Landman

arXiv preprint · Sep 11, 2025
Objective: Deep learning-based deformable image registration has achieved strong accuracy, but remains sensitive to variations in input image characteristics such as artifacts, field-of-view mismatch, or modality difference. We aim to develop a general training paradigm that improves the robustness and generalizability of registration networks. Methods: We introduce surrogate supervision, which decouples the input domain from the supervision domain by applying estimated spatial transformations to surrogate images. This allows training on heterogeneous inputs while ensuring supervision is computed in domains where similarity is well defined. We evaluate the framework through three representative applications: artifact-robust brain MR registration, mask-agnostic lung CT registration, and multi-modal MR registration. Results: Across tasks, surrogate supervision demonstrated strong resilience to input variations including inhomogeneity fields, inconsistent fields-of-view, and modality differences, while maintaining high performance on well-curated data. Conclusions: Surrogate supervision provides a principled framework for training robust and generalizable deep learning-based registration models without increasing complexity. Significance: Surrogate supervision offers a practical pathway to more robust and generalizable medical image registration, enabling broader applicability in diverse biomedical imaging scenarios.
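The core idea — score a transform estimated from heterogeneous inputs by applying it to surrogate images and measuring similarity there — can be illustrated on a tiny 1-D "image" with nearest-neighbor warping. Real registration uses dense 2-D/3-D fields and differentiable resampling; everything below is a self-contained toy, not the paper's implementation:

```python
def warp_nn(image, displacement):
    """Warp a 1-D list by a per-position displacement field (nearest neighbor)."""
    n = len(image)
    out = []
    for i, d in enumerate(displacement):
        src = min(n - 1, max(0, round(i + d)))  # clamp sampling index to bounds
        out.append(image[src])
    return out

def mse(a, b):
    """Mean squared error, usable as a similarity loss in the surrogate domain."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

surrogate_moving = [0, 0, 1, 1, 0, 0]
surrogate_fixed = [0, 1, 1, 0, 0, 0]
field = [1, 1, 1, 1, 1, 1]  # estimated transform: sample from position i+1
warped = warp_nn(surrogate_moving, field)
print(warped, mse(warped, surrogate_fixed))
```

The supervision signal (here the MSE) is computed entirely between surrogates, so the network's actual inputs are free to contain artifacts or modality differences.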

Kavitha N, Anand B

PubMed · Sep 11, 2025
Pneumonia is a dangerous respiratory illness that leads to severe complications and increased mortality when it is not properly diagnosed, particularly among at-risk populations. Appropriate treatment requires swift, accurate diagnosis and correct identification of the pneumonia type. This paper presents SqueezeViX-Net, a deep learning framework specifically designed for pneumonia classification. The model benefits from a Self-Optimized Adaptive Enhancement (SOAE) method, which automatically adjusts the dropout rate during training; this adaptive adjustment improves model fit and training stability. SqueezeViX-Net was evaluated on extensive X-ray and CT image collections derived from publicly accessible Kaggle repositories. It outperformed various established deep learning architectures, including DenseNet-121, ResNet-152V2, and EfficientNet-B7, achieving higher accuracy, precision, recall, and F1-score. The model was validated on a range of pneumonia datasets comprising both CT and X-ray images, demonstrating its ability to handle modality variations. By integrating SOAE, SqueezeViX-Net provides a framework suited to the clinical identification of pneumonia. Its dynamic learning capability and high precision give it strong diagnostic potential for medical staff, contributing to improved patient treatment outcomes.
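The abstract does not give SOAE's exact update rule; one common shape for such an adaptive dropout schedule is to raise the rate when validation loss worsens (an overfitting signal) and lower it while the model is still improving. A hedged sketch with arbitrary step sizes and bounds:

```python
def adapt_dropout(rate, prev_val_loss, val_loss, step=0.05, lo=0.1, hi=0.6):
    """Nudge the dropout rate based on the validation-loss trend."""
    if val_loss > prev_val_loss:   # validation got worse -> regularize more
        rate = min(hi, rate + step)
    else:                          # still improving -> relax regularization
        rate = max(lo, rate - step)
    return rate

rate = 0.3
losses = [1.0, 0.9, 0.95, 0.85, 0.9]  # hypothetical per-epoch validation losses
for prev, cur in zip(losses, losses[1:]):
    rate = adapt_dropout(rate, prev, cur)
print(rate)
```

In a training loop, the returned rate would be written back into the network's dropout layers before the next epoch.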

Aung TMM, Khan AA

PubMed · Sep 11, 2025
Accurate segmentation of small and irregular pulmonary nodules remains a significant challenge in lung cancer diagnosis, particularly in complex imaging backgrounds. Traditional U-Net models often struggle to capture long-range dependencies and integrate multi-scale features, limiting their effectiveness in addressing these challenges. To overcome these limitations, this study proposes an enhanced U-Net hybrid model that integrates multiple attention mechanisms to enhance feature representation and improve the precision of segmentation outcomes. The assessment of the proposed model was conducted using the LUNA16 dataset, which contains annotated CT scans of pulmonary nodules. Multiple attention mechanisms, including Spatial Attention (SA), Dilated Efficient Channel Attention (Dilated ECA), Convolutional Block Attention Module (CBAM), and Squeeze-and-Excitation (SE) Block, were integrated into a U-Net backbone. These modules were strategically combined to enhance both local and global feature representations. The model's architecture and training procedures were designed to address the challenges of segmenting small and irregular pulmonary nodules. The proposed model achieved a Dice similarity coefficient of 84.30%, significantly outperforming the baseline U-Net model. This result demonstrates improved accuracy in segmenting small and irregular pulmonary nodules. The integration of multiple attention mechanisms significantly enhances the model's ability to capture both local and global features, addressing key limitations of traditional U-Net architectures. SA preserves spatial features for small nodules, while Dilated ECA captures long-range dependencies. CBAM and SE further refine feature representations. Together, these modules improve segmentation performance in complex imaging backgrounds. 
A potential limitation is that performance may still be constrained in cases with extreme anatomical variability or low-contrast lesions, suggesting directions for future research. The enhanced U-Net hybrid model outperforms the traditional U-Net, effectively addressing challenges in segmenting small and irregular pulmonary nodules within complex imaging backgrounds.
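The Squeeze-and-Excitation mechanism used in this hybrid can be illustrated in miniature: global-average-pool each channel ("squeeze"), pass the result through a gate ("excitation"), and rescale the channel. The sigmoid-of-pooled-value gate below stands in for the learned two-layer MLP of a real SE block:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def se_block(feature_maps):
    """feature_maps: list of channels, each a 2-D list. Returns rescaled maps."""
    out = []
    for ch in feature_maps:
        pooled = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))  # squeeze
        gate = sigmoid(pooled)                                         # excitation (placeholder gate)
        out.append([[v * gate for v in row] for row in ch])            # channel-wise rescale
    return out

x = [[[1.0, 1.0], [1.0, 1.0]],   # strongly activated channel
     [[0.0, 0.0], [0.0, 0.0]]]   # flat channel
y = se_block(x)
print(y[0][0][0], y[1][0][0])
```

The effect is per-channel reweighting: informative channels pass through with higher gain, which is how SE complements the spatial modules (SA, CBAM) in the architecture above.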

Mohammed Tiouti, Mohamed Bal-Ghaoui

arXiv preprint · Sep 11, 2025
Effective model and hyperparameter selection remains a major challenge in deep learning, often requiring extensive expertise and computation. While AutoML and large language models (LLMs) promise automation, current LLM-based approaches rely on trial and error and expensive APIs, which provide limited interpretability and generalizability. We propose MetaLLMiX, a zero-shot hyperparameter optimization framework combining meta-learning, explainable AI, and efficient LLM reasoning. By leveraging historical experiment outcomes with SHAP explanations, MetaLLMiX recommends optimal hyperparameters and pretrained models without additional trials. We further employ an LLM-as-judge evaluation to control output format, accuracy, and completeness. Experiments on eight medical imaging datasets using nine open-source lightweight LLMs show that MetaLLMiX achieves competitive or superior performance to traditional HPO methods while drastically reducing computational cost. Our local deployment outperforms prior API-based approaches, achieving optimal results on 5 of 8 tasks, response time reductions of 99.6-99.9%, and the fastest training times on 6 datasets (2.4-15.7x faster), maintaining accuracy within 1-5% of best-performing baselines.
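The meta-learning step — recommending a configuration from historical experiments without new trials — can be sketched as a nearest-neighbor lookup over dataset meta-features. The meta-features, distance choice, and stored configs below are invented for illustration; MetaLLMiX additionally feeds SHAP explanations of the history to an LLM rather than using a plain lookup:

```python
import math

# Hypothetical past experiments: (n_samples, n_classes) meta-features -> best config
history = [
    {"meta": (5000, 3), "config": {"lr": 1e-3, "model": "resnet18"}},
    {"meta": (200, 10), "config": {"lr": 1e-4, "model": "efficientnet_b0"}},
]

def recommend(meta, history):
    """Return the config of the nearest past dataset (Euclidean on log-scaled meta)."""
    def dist(a, b):
        return math.dist([math.log1p(v) for v in a], [math.log1p(v) for v in b])
    return min(history, key=lambda h: dist(h["meta"], meta))["config"]

print(recommend((300, 8), history))
```

Zero additional training runs are needed at recommendation time, which is where the reported 99.6-99.9% response-time reductions over trial-and-error HPO come from.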
