
SqueezeViX-Net with SOAE: A Prevailing Deep Learning Framework for Accurate Pneumonia Classification using X-Ray and CT Imaging Modalities.

Kavitha N, Anand B

PubMed paper · Sep 11, 2025
Pneumonia is a dangerous respiratory illness that, without proper diagnosis, leads to severe health problems and increased mortality, particularly among at-risk populations. Appropriate treatment requires correct identification of the pneumonia type together with swift, accurate diagnosis. This paper presents SqueezeViX-Net, a deep learning framework designed for pneumonia classification. The model incorporates a Self-Optimized Adaptive Enhancement (SOAE) method that programmatically adjusts the dropout rate during training; this adaptive dropout mechanism improves model fit and stability. SqueezeViX-Net was evaluated on extensive X-ray and CT image collections drawn from publicly accessible Kaggle repositories. It outperformed established deep learning architectures, including DenseNet-121, ResNet-152V2, and EfficientNet-B7, achieving higher accuracy, precision, recall, and F1-score. Validation across multiple pneumonia datasets comprising both CT and X-ray images demonstrated its ability to handle modality variations. By integrating SOAE, SqueezeViX-Net offers a framework suited to clinical pneumonia identification; its dynamic learning capability and high precision give it strong diagnostic potential and may contribute to improved patient outcomes.
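
The abstract describes SOAE only at a high level. As a rough illustration of the general idea of adjusting dropout during training, the following PyTorch sketch uses a hypothetical rule (raise dropout when validation loss worsens, lower it otherwise); it is not the authors' published procedure.

    import torch.nn as nn

    def set_dropout(model: nn.Module, p: float) -> None:
        """Set the dropout probability of every nn.Dropout layer in a model."""
        for module in model.modules():
            if isinstance(module, nn.Dropout):
                module.p = p

    def adjust_dropout(p, val_losses, step=0.05, p_min=0.1, p_max=0.6):
        """Hypothetical adaptive rule: raise dropout when validation loss worsens
        (possible overfitting), lower it while the loss is still improving."""
        if len(val_losses) < 2:
            return p
        if val_losses[-1] > val_losses[-2]:
            return min(p_max, p + step)
        return max(p_min, p - step)

    # Sketch of use inside a training loop (train/validate helpers assumed):
    # p = 0.3
    # for epoch in range(num_epochs):
    #     train_one_epoch(model, train_loader)
    #     val_losses.append(validate(model, val_loader))
    #     p = adjust_dropout(p, val_losses)
    #     set_dropout(model, p)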

Enhancing Oral Health Diagnostics With Hyperspectral Imaging and Computer Vision: Clinical Dataset Study.

Römer P, Ponciano JJ, Kloster K, Siegberg F, Plaß B, Vinayahalingam S, Al-Nawas B, Kämmerer PW, Klauer T, Thiem D

PubMed paper · Sep 11, 2025
Diseases of the oral cavity, including oral squamous cell carcinoma, pose major challenges to health care worldwide due to late diagnosis and the difficulty of differentiating oral tissues. The combination of endoscopic hyperspectral imaging (HSI) and deep learning (DL) models offers a promising approach to meeting the demand for modern, noninvasive tissue diagnostics. This study presents a large-scale in vivo dataset designed to support DL-based segmentation and classification of healthy oral tissues. This study aimed to develop a comprehensive, annotated endoscopic HSI dataset of the oral cavity and to demonstrate automated, reliable differentiation of intraoral tissue structures by integrating endoscopic HSI with advanced machine learning methods. A total of 226 participants (166 women [73.5%], 60 men [26.5%], aged 24-87 years) were examined using an endoscopic HSI system, capturing spectral data in the range of 500 to 1000 nm. Oral structures in RGB and HSI scans were annotated using RectLabel Pro (Ryo Kawamura). DeepLabv3 (Google Research) with a ResNet-50 backbone was adapted for endoscopic HSI segmentation. The model was trained for 50 epochs on 70% of the dataset, with 30% held out for evaluation. Performance metrics (precision, recall, and F1-score) confirmed its efficacy in distinguishing oral tissue types. DeepLabv3 (ResNet-101) and U-Net (EfficientNet-B0/ResNet-50) achieved the highest overall F1-scores of 0.857 and 0.84, respectively, particularly excelling in segmenting the mucosa (0.915), retractor (0.94), tooth (0.90), and palate (0.90). Variability analysis confirmed high spectral diversity across tissue classes, supporting the dataset's complexity and authenticity under realistic clinical conditions. The presented dataset addresses a key gap in oral health imaging and supports the development and validation of robust DL algorithms for endoscopic HSI data. It enables accurate classification of oral tissue and paves the way for future applications in individualized noninvasive pathological tissue analysis, early cancer detection, and intraoperative diagnostics of oral diseases.
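
For readers wanting a concrete starting point, a minimal sketch of adapting a torchvision DeepLabv3/ResNet-50 model to multi-channel hyperspectral input is shown below; the band count and class count are placeholders, and this is not the authors' exact training configuration.

    import torch
    import torch.nn as nn
    from torchvision.models.segmentation import deeplabv3_resnet50

    NUM_BANDS = 32    # placeholder: spectral bands retained after preprocessing
    NUM_CLASSES = 5   # placeholder: e.g. mucosa, tooth, palate, retractor, background

    model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)

    # Swap the stem convolution so the backbone accepts NUM_BANDS channels
    # instead of 3-channel RGB.
    model.backbone.conv1 = nn.Conv2d(NUM_BANDS, 64, kernel_size=7, stride=2,
                                     padding=3, bias=False)

    x = torch.randn(2, NUM_BANDS, 256, 256)   # batch of hyperspectral patches
    out = model(x)["out"]                      # (2, NUM_CLASSES, 256, 256) logits
    print(out.shape)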

Enhanced U-Net with Attention Mechanisms for Improved Feature Representation in Lung Nodule Segmentation.

Aung TMM, Khan AA

PubMed paper · Sep 11, 2025
Accurate segmentation of small and irregular pulmonary nodules remains a significant challenge in lung cancer diagnosis, particularly in complex imaging backgrounds. Traditional U-Net models often struggle to capture long-range dependencies and integrate multi-scale features, limiting their effectiveness in addressing these challenges. To overcome these limitations, this study proposes an enhanced U-Net hybrid model that integrates multiple attention mechanisms to strengthen feature representation and improve segmentation precision. The proposed model was assessed on the LUNA16 dataset, which contains annotated CT scans of pulmonary nodules. Multiple attention mechanisms, including Spatial Attention (SA), Dilated Efficient Channel Attention (Dilated ECA), Convolutional Block Attention Module (CBAM), and Squeeze-and-Excitation (SE) Block, were integrated into a U-Net backbone. These modules were strategically combined to enhance both local and global feature representations. The model's architecture and training procedures were designed to address the challenges of segmenting small and irregular pulmonary nodules. The proposed model achieved a Dice similarity coefficient of 84.30%, significantly outperforming the baseline U-Net model. This result demonstrates improved accuracy in segmenting small and irregular pulmonary nodules. The integration of multiple attention mechanisms significantly enhances the model's ability to capture both local and global features, addressing key limitations of traditional U-Net architectures. SA preserves spatial features for small nodules, while Dilated ECA captures long-range dependencies. CBAM and SE further refine feature representations. Together, these modules improve segmentation performance in complex imaging backgrounds. A potential limitation is that performance may still be constrained in cases with extreme anatomical variability or low-contrast lesions, suggesting directions for future research. The Enhanced U-Net hybrid model outperforms the traditional U-Net, effectively addressing challenges in segmenting small and irregular pulmonary nodules within complex imaging backgrounds.
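
Of the attention modules listed, the Squeeze-and-Excitation (SE) block is the simplest to show in isolation; the PyTorch sketch below is a standard SE implementation with illustrative channel and reduction settings, not the paper's full hybrid of SA, Dilated ECA, CBAM, and SE.

    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        """Squeeze-and-Excitation: reweight channels by globally pooled statistics."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average pool
            self.fc = nn.Sequential(                     # excitation: bottleneck MLP
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, _, _ = x.shape
            w = self.pool(x).view(b, c)        # (B, C) channel descriptors
            w = self.fc(w).view(b, c, 1, 1)    # per-channel weights in [0, 1]
            return x * w                       # rescale feature maps channel-wise

    feat = torch.randn(2, 64, 32, 32)          # e.g. a U-Net encoder feature map
    print(SEBlock(64)(feat).shape)             # torch.Size([2, 64, 32, 32])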

Automatic approach for B-lines detection in lung ultrasound images using You Only Look Once algorithm.

Bottino A, Botrugno C, Casciaro E, Conversano F, Lay-Ekuakille A, Lombardi FA, Morello R, Pisani P, Vetrugno L, Casciaro S

PubMed paper · Sep 11, 2025
B-lines are among the key artifact signs observed in Lung Ultrasound (LUS), playing a critical role in differentiating pulmonary diseases and assessing overall lung condition. However, their accurate detection and quantification can be time-consuming and technically challenging, especially for less experienced operators. This study aims to evaluate the performance of a YOLO (You Only Look Once)-based algorithm for the automated detection of B-lines, offering a novel tool to support clinical decision-making. The proposed approach is designed to improve the efficiency and consistency of LUS interpretation, particularly for non-expert practitioners, and to enhance its utility in guiding respiratory management. In this observational agreement study, 644 images from an anonymized internal database and a clinical online database were evaluated. After a quality selection step, 386 images from 46 patients remained available for analysis. Ground truth was established by a blinded expert sonographer who identified B-lines within a rectangular region of interest (ROI) on each frame. Algorithm performance was assessed using precision, recall, and F1-score, while weighted kappa (kw) statistics were used to quantify agreement between the YOLO-based algorithm and the expert operator. The algorithm achieved a precision of 0.92 (95% CI 0.89-0.94), recall of 0.81 (95% CI 0.77-0.85), and F1-score of 0.86 (95% CI 0.83-0.88). The weighted kappa was 0.68 (95% CI 0.64-0.72), indicating substantial agreement between the algorithm and the expert annotations. The proposed algorithm has demonstrated its potential to significantly enhance diagnostic support by accurately detecting B-lines in LUS images.
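
For reference, the sketch below shows how precision, recall, F1-score, and a weighted kappa can be computed with scikit-learn from detection counts and per-frame B-line gradings; the numbers are made up, and the abstract does not state whether linear or quadratic kappa weighting was used.

    from sklearn.metrics import cohen_kappa_score

    # Illustrative detection counts (TP/FP/FN) pooled over a test set.
    tp, fp, fn = 180, 16, 42

    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")

    # Agreement on per-frame B-line grades (0, 1, 2, >=3) between expert and algorithm.
    expert = [0, 1, 2, 3, 1, 0, 2, 2, 3, 1]
    algo   = [0, 1, 1, 3, 1, 0, 2, 1, 3, 2]
    kw = cohen_kappa_score(expert, algo, weights="linear")  # weighting scheme is an assumption
    print(f"weighted kappa = {kw:.2f}")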

U-ConvNext: A Robust Approach to Glioma Segmentation in Intraoperative Ultrasound.

Vahdani AM, Rahmani M, Pour-Rashidi A, Ahmadian A, Farnia P

PubMed paper · Sep 11, 2025
Intraoperative tumor imaging is critical to achieving maximal safe resection during neurosurgery, especially for low-grade glioma resection. Given the convenience of ultrasound as an intraoperative imaging modality, but also its limitations and the time-consuming process of manual tumor segmentation, we propose a learning-based model for the accurate segmentation of low-grade gliomas in ultrasound images. We developed a novel U-Net-based architecture, titled U-ConvNext, that adopts the block design of the ConvNext V2 model and incorporates further architectural improvements, including global response normalization, fine-tuned kernel sizes, and inception layers. We also adopted the CutMix data augmentation technique for semantic segmentation, aiming for enhanced texture detection. Conformal segmentation, a novel approach to conformal prediction for binary semantic segmentation, was also developed for uncertainty quantification, providing calibrated measures of model uncertainty in a visual format. The proposed models were trained and evaluated on three subsets of images in the RESECT dataset and achieved hold-out test Dice scores of 84.63%, 74.52%, and 90.82% on the "before," "during," and "after" subsets, respectively, increases of ~13-31% over the state of the art. Furthermore, external evaluation on the ReMIND dataset indicated robust performance (Dice score of 79.17% [95% CI: 77.82-81.62]) with only a moderate decline of <3% in expected calibration error. Our approach integrates various innovations in model design, model training, and uncertainty quantification, achieving improved results on the segmentation of low-grade glioma in ultrasound images during neurosurgery.
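
The abstract does not detail the conformal segmentation procedure; the NumPy sketch below illustrates one generic split-conformal construction for binary segmentation, in which a calibration set fixes a probability threshold targeting a chosen coverage of true tumor pixels. It conveys the flavor of the approach rather than the authors' method.

    import numpy as np

    rng = np.random.default_rng(0)

    # Calibration set: predicted foreground probabilities and ground-truth masks.
    cal_probs = rng.random((20, 64, 64))              # placeholder model outputs
    cal_masks = rng.random((20, 64, 64)) > 0.7        # placeholder ground truth

    alpha = 0.1  # target miscoverage for true tumor pixels

    # Nonconformity score for a true foreground pixel: 1 - predicted probability.
    scores = 1.0 - cal_probs[cal_masks]

    # Split-conformal quantile (finite-sample correction omitted for brevity).
    qhat = np.quantile(scores, 1.0 - alpha)

    # Prediction set on a new image: every pixel with predicted probability
    # above 1 - qhat is included in the uncertainty-aware segmentation mask.
    test_probs = rng.random((64, 64))
    prediction_set = test_probs >= 1.0 - qhat
    print("threshold:", 1.0 - qhat, "pixels in set:", int(prediction_set.sum()))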

Training With Local Data Remains Important for Deep Learning MRI Prostate Cancer Detection.

Carere SG, Jewell J, Nasute Fauerbach PV, Emerson DB, Finelli A, Ghai S, Haider MA

PubMed paper · Sep 11, 2025
Domain shift has been shown to have a major detrimental effect on AI model performance; however, prior studies on domain shift for MRI prostate cancer segmentation have been limited to small or heterogeneous cohorts. Our objective was to assess whether prostate cancer segmentation models trained on local MRI data continue to outperform those trained on external data with cohorts exceeding 1000. We simulated a multi-institutional consortium using the public PICAI dataset (PICAI-TRAIN: 1241 exams, PICAI-TEST: 259) and a local dataset (LOCAL-TRAIN: 1400 exams, LOCAL-TEST: 308). IRB approval was obtained and consent waived. We compared nnUNet-v2 models trained on the combined data (CENTRAL-TRAIN) and separately on PICAI-TRAIN and LOCAL-TRAIN. Accuracy was evaluated using the open-source PICAI Score on LOCAL-TEST. Significance was tested using bootstrapping. Just 22% (309/1400) of LOCAL-TRAIN exams would be sufficient to match the performance of a model trained on PICAI-TRAIN. The CENTRAL-TRAIN performance was similar to LOCAL-TRAIN performance, with PICAI Scores [95% CI] of 65 [58-71] and 66 [60-72], respectively. Both of these models exceeded the model trained on PICAI-TRAIN alone, which had a score of 58 [51-64] (P < .002). Reducing training set size did not alter these relative trends. Domain shift limits MRI prostate cancer segmentation performance even when training with over 1000 exams from 3 external institutions. Use of local data is paramount at these scales.
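
Significance was tested with bootstrapping; the sketch below shows a generic paired bootstrap over per-exam scores (synthetic values, hypothetical score function), illustrating the style of comparison rather than the PICAI Score implementation.

    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic per-exam scores on the same test set for two models.
    scores_local = rng.normal(0.66, 0.2, size=308)
    scores_picai = rng.normal(0.58, 0.2, size=308)

    n_boot = 2000
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(scores_local), len(scores_local))  # resample exams
        diffs[b] = scores_local[idx].mean() - scores_picai[idx].mean()

    # Two-sided bootstrap p-value for the null of no difference, plus a 95% CI.
    p = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
    ci = np.percentile(diffs, [2.5, 97.5])
    print(f"mean diff = {diffs.mean():.3f}, 95% CI = {ci}, p = {p:.4f}")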

Neurodevelopmental deviations in schizophrenia: Evidences from multimodal connectome-based brain ages.

Fan YS, Yang P, Zhu Y, Jing W, Xu Y, Xu Y, Guo J, Lu F, Yang M, Huang W, Chen H

PubMed paper · Sep 11, 2025
Pathologic schizophrenia processes originate early in brain development, leading to brain alterations detectable with structural and functional magnetic resonance imaging (MRI). Recent MRI studies have sought to characterize disease effects from a brain age perspective, but developmental deviations from the typical brain age trajectory in youths with schizophrenia remain unestablished. This study investigated brain development deviations in early-onset schizophrenia (EOS) patients by applying machine learning algorithms to structural and functional MRI data. Multimodal MRI data, including T1-weighted MRI (T1w-MRI), diffusion MRI, and resting-state functional MRI (rs-fMRI) data, were collected from 80 antipsychotic-naive first-episode EOS patients and 91 typically developing (TD) controls. The morphometric similarity connectome (MSC), structural connectome (SC), and functional connectome (FC) were constructed separately from these three modalities. Using these connectivity features, eight brain age estimation models were first trained on the TD group, and the best-performing model was then used to predict brain ages in patients. Individual brain age gaps were computed as predicted brain age minus chronological age. Both the SC and MSC features performed well in brain age estimation, whereas the FC features did not. Compared with the TD controls, the EOS patients showed increased absolute brain age gaps when using the SC or MSC features, with opposite trends between childhood and adolescence. These increased brain age gaps in EOS patients were positively correlated with the severity of their clinical symptoms. These findings from a multimodal brain age perspective suggest that advanced brain age gaps exist early in youths with schizophrenia.
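
A minimal sketch of the brain-age workflow the abstract describes: fit a regressor on connectome features from typically developing controls, predict brain age in patients, and take predicted minus chronological age as the gap. Synthetic features and a ridge regressor stand in for the eight estimation models compared in the study.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)

    # Synthetic connectome features (e.g. vectorized SC/MSC edges) and ages.
    X_td, age_td = rng.normal(size=(91, 500)), rng.uniform(8, 18, 91)      # TD controls
    X_eos, age_eos = rng.normal(size=(80, 500)), rng.uniform(8, 18, 80)    # EOS patients

    # Train the brain-age model on typically developing controls only.
    model = Ridge(alpha=1.0).fit(X_td, age_td)

    # Brain age gap = predicted brain age - chronological age.
    gap_eos = model.predict(X_eos) - age_eos
    print("mean |brain age gap| in patients:", np.abs(gap_eos).mean())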

Ultrasound Assessment of Muscle Atrophy During Short- and Medium-Term Head-Down Bed Rest.

Yang X, Yu L, Tian Y, Yin G, Lv Q, Guo J

PubMed paper · Sep 11, 2025
This study aims to investigate the feasibility of ultrasound technology for assessing muscle atrophy progression in a head-down bed rest model, providing a reference for monitoring muscle functional status in a microgravity environment. A 40-day head-down bed rest model using rhesus monkeys was established to simulate the microgravity environment in space. A dual-encoder parallel deep learning model was developed to extract features from B-mode ultrasound images and radiofrequency signals separately. Additionally, an up-sampling module incorporating the Coordinate Attention mechanism and the Pixel-attention-guided fusion module was designed to enhance direction and position awareness, as well as improve the recognition of target boundaries and detailed features. The evaluation efficacy of single ultrasound signals and fused signals was compared. The assessment accuracy reached approximately 87% through inter-individual cross-validation in 6 rhesus monkeys. The fusion of ultrasound signals significantly enhanced classification performance compared to using single modalities, such as B-mode images or radiofrequency signals. This study demonstrates that ultrasound technology combined with deep learning algorithms can effectively assess disuse muscle atrophy. The proposed approach offers a promising reference for diagnosing muscle atrophy under long-term immobilization, with significant application value and potential for widespread adoption.
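
The dual-encoder design processes B-mode images and radiofrequency (RF) signals in parallel before fusing them; the PyTorch sketch below shows that general pattern with tiny placeholder encoders and simple concatenation fusion, omitting the paper's Coordinate Attention and pixel-attention-guided fusion modules.

    import torch
    import torch.nn as nn

    class DualEncoderClassifier(nn.Module):
        """Toy two-branch model: one encoder per modality, fused by concatenation."""
        def __init__(self, num_classes: int = 2):
            super().__init__()
            self.img_enc = nn.Sequential(              # B-mode image branch (2D)
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.rf_enc = nn.Sequential(               # radiofrequency signal branch (1D)
                nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            )
            self.head = nn.Linear(16 + 16, num_classes)

        def forward(self, img: torch.Tensor, rf: torch.Tensor) -> torch.Tensor:
            fused = torch.cat([self.img_enc(img), self.rf_enc(rf)], dim=1)
            return self.head(fused)

    model = DualEncoderClassifier()
    logits = model(torch.randn(4, 1, 128, 128), torch.randn(4, 1, 2048))
    print(logits.shape)  # torch.Size([4, 2])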

Surrogate Supervision for Robust and Generalizable Deformable Image Registration

Yihao Liu, Junyu Chen, Lianrui Zuo, Shuwen Wei, Brian D. Boyd, Carmen Andreescu, Olusola Ajilore, Warren D. Taylor, Aaron Carass, Bennett A. Landman

arXiv preprint · Sep 11, 2025
Objective: Deep learning-based deformable image registration has achieved strong accuracy, but remains sensitive to variations in input image characteristics such as artifacts, field-of-view mismatch, or modality difference. We aim to develop a general training paradigm that improves the robustness and generalizability of registration networks. Methods: We introduce surrogate supervision, which decouples the input domain from the supervision domain by applying estimated spatial transformations to surrogate images. This allows training on heterogeneous inputs while ensuring supervision is computed in domains where similarity is well defined. We evaluate the framework through three representative applications: artifact-robust brain MR registration, mask-agnostic lung CT registration, and multi-modal MR registration. Results: Across tasks, surrogate supervision demonstrated strong resilience to input variations including inhomogeneity field, inconsistent field-of-view, and modality differences, while maintaining high performance on well-curated data. Conclusions: Surrogate supervision provides a principled framework for training robust and generalizable deep learning-based registration models without increasing complexity. Significance: Surrogate supervision offers a practical pathway to more robust and generalizable medical image registration, enabling broader applicability in diverse biomedical imaging scenarios.
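
A compact sketch of the surrogate-supervision idea as stated in the abstract: the spatial transformation is estimated from the raw input pair, but the similarity loss is computed after applying that transformation to surrogate images in a domain where similarity is well defined. The warping function and loss here are generic placeholders, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def warp(image: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        """Warp a (B,1,H,W) image with a dense displacement field (B,2,H,W)
        expressed in normalized [-1, 1] grid coordinates."""
        b, _, h, w = image.shape
        theta = torch.eye(2, 3).unsqueeze(0).repeat(b, 1, 1)
        grid = F.affine_grid(theta, (b, 1, h, w), align_corners=False)   # identity grid
        grid = grid + flow.permute(0, 2, 3, 1)                           # add displacement
        return F.grid_sample(image, grid, align_corners=False)

    # Placeholder for the registration network's output: a flow predicted from
    # the raw (possibly artifact-corrupted or mismatched) input pair.
    moving_raw = torch.randn(2, 1, 64, 64)
    fixed_raw = torch.randn(2, 1, 64, 64)
    flow = torch.zeros(2, 2, 64, 64, requires_grad=True)  # stands in for net(moving_raw, fixed_raw)

    # Surrogate supervision: apply the estimated transform to surrogate images
    # (e.g. artifact-free or modality-matched versions) and score similarity there.
    moving_surrogate = torch.randn(2, 1, 64, 64)
    fixed_surrogate = torch.randn(2, 1, 64, 64)
    loss = F.mse_loss(warp(moving_surrogate, flow), fixed_surrogate)
    loss.backward()
    print(float(loss))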

The Combined Use of Cervical Ultrasound and Deep Learning Improves the Detection of Patients at Risk for Spontaneous Preterm Delivery.

Sejer EPF, Pegios P, Lin M, Bashir Z, Wulff CB, Christensen AN, Nielsen M, Feragen A, Tolsgaard MG

PubMed paper · Sep 11, 2025
Preterm birth is the leading cause of neonatal mortality and morbidity. While ultrasound-based cervical length measurement is the current standard for predicting preterm birth, its performance is limited. Artificial intelligence (AI) has shown potential in ultrasound analysis, yet only a few small-scale studies have evaluated its use for predicting preterm birth. We aimed to develop and validate an AI model for spontaneous preterm birth prediction from cervical ultrasound images and to compare its performance with cervical length. In this multicenter study, we developed a deep learning-based AI model using data from women who underwent cervical ultrasound scans as part of antenatal care between 2008 and 2018 in Denmark. Indications for ultrasound were not systematically recorded, and scans were likely performed due to risk factors or symptoms of preterm labor. We compared the performance of the AI model with cervical length measurement for spontaneous preterm birth prediction by assessing the area under the curve (AUC), sensitivity, specificity, and likelihood ratios. Subgroup analyses evaluated model performance across baseline characteristics, and saliency heat maps identified the anatomical features that most influenced the AI model's predictions. The final dataset included 4,224 pregnancies and 7,862 cervical ultrasound images, with 50% resulting in spontaneous preterm birth. The AI model surpassed cervical length for predicting spontaneous preterm birth before 37 weeks, with a sensitivity of 0.51 (95% CI 0.50-0.53) versus 0.41 (0.39-0.42) at a fixed specificity of 0.85, p<0.001, and a higher AUC of 0.75 (0.74-0.76) versus 0.67 (0.66-0.68), p<0.001. For identifying late preterm births at 34-37 weeks, the AI model had 36.6% higher sensitivity than cervical length (0.47 versus 0.34, p<0.001). The AI model achieved higher AUCs across all subgroups, especially at earlier gestational ages. Saliency heat maps indicated that in 54% of preterm birth cases, the AI model focused on the posterior inner lining of the lower uterine segment, suggesting it incorporates more information than cervical length alone. To our knowledge, this is the first large-scale, multicenter study demonstrating that AI is more sensitive than cervical length measurement in identifying spontaneous preterm births across multiple characteristics, 19 hospital sites, and different ultrasound machines. The AI model performs particularly well at earlier gestational ages, enabling more timely prophylactic interventions.
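
To make the headline comparison concrete, the sketch below computes AUC and sensitivity at a fixed specificity of 0.85 from predicted risks using scikit-learn's ROC utilities; labels and risk scores are synthetic.

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(7)

    # Synthetic outcome (1 = spontaneous preterm birth) and model risk scores.
    y = rng.integers(0, 2, size=2000)
    risk = np.clip(0.5 * y + rng.normal(0.3, 0.25, size=2000), 0, 1)

    auc = roc_auc_score(y, risk)

    # Sensitivity at fixed specificity 0.85: best TPR among ROC points with FPR <= 0.15.
    fpr, tpr, thresholds = roc_curve(y, risk)
    mask = fpr <= 0.15
    sens_at_spec85 = tpr[mask].max() if mask.any() else 0.0

    print(f"AUC = {auc:.3f}, sensitivity at specificity 0.85 = {sens_at_spec85:.3f}")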