Page 13 of 6036030 results

Sotaro K, Uemura K, Soufi M, Nishimura R, Miyamoto T, Higuchi R, Mae H, Takashima K, Otake Y, Tanaka Y, Takao M, Sugano N, Okada S, Hamada H

pubmed logopapersOct 20 2025
This study developed a method for assessing lower-limb lean mass measured by dual-energy X-ray absorptiometry (DXA-LM<sub>leg</sub>), using lower-limb muscle mass derived from computed tomography images (CT-MM). Further, the diagnostic performance of the model in detecting whole-body muscle mass (MM) loss, a key component in the assessment of sarcopenia, was evaluated using CT-MM to facilitate the timely initiation of treatment as needed. This retrospective study enrolled 227 patients who underwent hip surgery at two institutions. A deep neural network (DNN)-based method was employed in segmenting lower-limb CT images taken for surgical planning, and the CT-MM was calculated using two different density conversion methods: CT-MM1 (CT-MM calculated using the conventional method) and CT-MM2 (CT-MM calculated using the method by Aubrey et al.). Both CT-MMs were correlated with DXA-LM<sub>leg</sub>, and receiver operating characteristic (ROC) curve analysis was performed to evaluate the diagnostic accuracy of CT-MMs in detecting whole-body MM loss. In the 222 cases that were successfully analyzed automatically, strong correlations were observed between CT-MM1 and DXA-LM<sub>leg</sub> (r<sub>s</sub> = 0.92-0.96) and between CT-MM2 and DXA-LM<sub>leg</sub> (r<sub>s</sub> = 0.86-0.92). ROC curve analysis revealed high diagnostic accuracy for whole-body MM loss (CT-MM1, area under the curve (AUC) = 0.96-0.97; CT-MM2, AUC = 0.91-0.93), with CT-MM1 demonstrating significantly better performance. CT-MMs were strongly correlated with DXA-LM<sub>leg</sub> and had a high diagnostic performance (AUC > 0.9) in detecting whole-body MM loss, supporting sarcopenia screening and preoperative clinical decision-making using routine CT scans.
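The two statistics this abstract reports, the Spearman rank correlation (r<sub>s</sub>) and the ROC AUC, can both be computed from ranks alone. A minimal sketch in plain Python, with purely illustrative data (not values from the study):

```python
# Hypothetical sketch: Spearman correlation between a CT-derived measure
# and a DXA reference, plus ROC AUC for a binary "muscle-mass loss" label.
# All numbers used below are invented for illustration.

def rankdata(xs):
    """Average 1-based ranks; ties share the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def roc_auc(labels, scores):
    """AUC as the Mann-Whitney probability a positive outranks a negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

In practice `scipy.stats.spearmanr` and `sklearn.metrics.roc_auc_score` compute the same quantities; the rank-based forms above are just the definitions made explicit.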

Yovin Yahathugoda, Davide Prezzi, Piyalitt Ittichaiwong, Vicky Goh, Sebastien Ourselin, Michela Antonelli

arxiv logopreprintOct 20 2025
Active Surveillance (AS) is a treatment option for managing low and intermediate-risk prostate cancer (PCa), aiming to avoid overtreatment while monitoring disease progression through serial MRI and clinical follow-up. Accurate prostate segmentation is an important preliminary step for automating this process, enabling automated detection and diagnosis of PCa. However, existing deep-learning segmentation models are often trained on single-time-point and expertly annotated datasets, making them unsuitable for longitudinal AS analysis, where multiple time points and a scarcity of expert labels hinder their effective fine-tuning. To address these challenges, we propose MambaX-Net, a novel semi-supervised, dual-scan 3D segmentation architecture that computes the segmentation for time point t by leveraging the MRI and the corresponding segmentation mask from the previous time point. We introduce two new components: (i) a Mamba-enhanced Cross-Attention Module, which integrates the Mamba block into cross-attention to efficiently capture temporal evolution and long-range spatial dependencies, and (ii) a Shape Extractor Module that encodes the previous segmentation mask into a latent anatomical representation for refined zone delineation. Moreover, we introduce a semi-supervised self-training strategy that leverages pseudo-labels generated from a pre-trained nnU-Net, enabling effective learning without expert annotations. MambaX-Net was evaluated on a longitudinal AS dataset, and results showed that it significantly outperforms state-of-the-art U-Net and Transformer-based models, achieving superior prostate zone segmentation even when trained on limited and noisy data.
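The dual-scan input contract can be illustrated in a toy form: the network for time point t receives both the current MRI volume and the previous time point's mask. The sketch below simply stacks them as channels; MambaX-Net's actual fusion (Mamba-enhanced cross-attention, shape extractor) is far richer, and the function name is invented:

```python
import numpy as np

# Toy illustration (not the paper's architecture): prepare a dual-scan
# network input by stacking the time-t MRI with the previous time point's
# segmentation mask as a second channel.

def build_dual_scan_input(mri_t, mask_prev):
    """mri_t, mask_prev: (D, H, W) arrays -> (2, D, H, W) network input."""
    assert mri_t.shape == mask_prev.shape, "volumes must be co-registered"
    return np.stack([mri_t, mask_prev.astype(mri_t.dtype)], axis=0)
```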

Xinwei Zhang, Hu Chen, Zhe Yuan, Sukun Tian, Peng Feng

arxiv logopreprintOct 20 2025
Foundation models for medical image segmentation have achieved remarkable performance. Adaptive fine-tuning of natural image segmentation foundation models is crucial for medical image segmentation tasks. However, existing fine-tuning methods have two limitations: 1) insufficient representation of high-level features and 2) the fine-tuning process disrupts the structural integrity of pretrained weights. To address these problems, we propose an intelligent-communication mixture-of-experts-boosted medical image segmentation foundation model, named IC-MoE, built on two ideas: 1) We construct basic experts, semantic experts, and adaptive experts. Moreover, we implement a pixel probability adaptive voting strategy, which enables expert selection and fusion through label consistency and load balancing. This approach preliminarily enhances the representation capability of high-level features while preserving the structural integrity of pretrained weights. 2) We propose a semantic-guided contrastive learning method to address the issue of weak supervision in contrastive learning. This method further enhances the representation capability of high-level features while preserving the structural integrity of pretrained weights. Extensive experiments across three public medical image segmentation datasets demonstrate that the IC-MoE outperforms other SOTA models. Consequently, the proposed IC-MoE effectively supplements foundational medical image segmentation models with high-level features and pretrained structural integrity. We also validate the superior generalizability of the IC-MoE across diverse medical image segmentation scenarios.
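The core of any mixture-of-experts voting scheme is a convex combination of expert outputs under learned gate weights. A minimal NumPy sketch of that fusion step, with invented names and shapes (the paper's pixel probability adaptive voting, label-consistency and load-balancing terms are not reproduced here):

```python
import numpy as np

# Illustrative mixture-of-experts fusion: each expert emits a per-pixel
# foreground probability map; a softmax over gating scores weights the
# experts; the fused map is their convex combination.

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_experts(prob_maps, gate_logits):
    """prob_maps: (E, H, W) per-expert probabilities; gate_logits: (E,)."""
    w = softmax(gate_logits)                     # expert weights, sum to 1
    fused = np.tensordot(w, prob_maps, axes=1)   # (H, W) convex combination
    return fused, w
```

Because the weights sum to 1 and each expert map lies in [0, 1], the fused map is itself a valid probability map.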

Athanasios Angelakis, Amne Mousa, Micah L. A. Heldeweg, Laurens A. Biesheuvel, Mark A. Haaksma, Jasper M. Smit, Pieter R. Tuinman, Paul W. G. Elbers

arxiv logopreprintOct 20 2025
Differentiating cardiogenic pulmonary oedema (CPE) from non-cardiogenic and structurally normal lungs in lung ultrasound (LUS) videos remains challenging due to the high visual variability of non-cardiogenic inflammatory patterns (NCIP/ARDS-like), interstitial lung disease, and healthy lungs. This heterogeneity complicates automated classification as overlapping B-lines and pleural artefacts are common. We introduce ZACH-ViT (Zero-token Adaptive Compact Hierarchical Vision Transformer), a 0.25 M-parameter Vision Transformer variant that removes both positional embeddings and the [CLS] token, making it fully permutation-invariant and suitable for unordered medical image data. To enhance generalization, we propose ShuffleStrides Data Augmentation (SSDA), which permutes probe-view sequences and frame orders while preserving anatomical validity. ZACH-ViT was evaluated on 380 LUS videos from 95 critically ill patients against nine state-of-the-art baselines. Despite the heterogeneity of the non-cardiogenic group, ZACH-ViT achieved the highest validation and test ROC-AUC (0.80 and 0.79) with balanced sensitivity (0.60) and specificity (0.91), while all competing models collapsed to trivial classification. It trains 1.35x faster than Minimal ViT (0.62M parameters) with 2.5x fewer parameters, supporting real-time clinical deployment. These results show that aligning architectural design with data structure can outperform scale in small-data medical imaging.
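The property ZACH-ViT exploits — that dropping positional embeddings and the [CLS] token leaves self-attention plus mean pooling fully permutation-invariant — can be verified numerically on a toy single-head attention layer. The weights below are random stand-ins, not the model's:

```python
import numpy as np

# Illustrative check: with no positional embedding and no [CLS] token,
# self-attention followed by mean pooling gives the same output for any
# ordering of the input tokens (permutation invariance).

rng = np.random.default_rng(0)
d = 8
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pooled_attention(tokens):
    """tokens: (N, d) -> (d,) order-independent summary."""
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(d), axis=-1)
    return (attn @ v).mean(axis=0)   # mean pool instead of [CLS]

tokens = rng.normal(size=(5, d))
out1 = pooled_attention(tokens)
out2 = pooled_attention(tokens[rng.permutation(5)])
# out1 and out2 agree: shuffling token order does not change the output
```

Self-attention is permutation-equivariant (permuting inputs permutes outputs the same way), and order-agnostic mean pooling then removes the remaining order dependence — which is what makes the architecture suitable for unordered probe-view sequences.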

Wang Z, Tian G, Wang Y, An G, Liu X, Gu X, Cao Y, Zhang W, Hao D, Liu Y

pubmed logopapersOct 20 2025
Aortic valve calcification is a common cause of stenosis. Although echocardiography is the preferred and most widely used diagnostic technique for aortic valve disease, it lacks effective methods for accurately grading the degree of aortic valve calcification. In this study, a dual-channel cross-feature fusion neural network was developed to classify the degree of aortic valve calcification. The dual-channel input is designed to accept both end-systolic and end-diastolic ultrasound images of a patient, thereby maximizing the retention of vital information from both phases of the cardiac cycle. Feature extraction scale was also dynamically adjusted using the squeeze-and-excitation module. To better integrate multilevel and multiscale information, a dual-branch feature fusion module with cross-feature fusion and multiscale feature extraction was designed, thereby enabling the network to merge global and local feature information. Moreover, to address the specific noise characteristics of ultrasound images and low valve occupancy in the aortic short-axis view, a unified preprocessing algorithm was developed. A total of 420 volunteers were internally selected and classified based on computed tomography scan calcification scores (140 cases per category: healthy, nonsevere, and severe). Each patient contributed 2-4 cardiac cycles, resulting in a final effective dataset of 1092 samples. The classification model achieved an accuracy, precision, F1 score, and recall of 96.79%, 98.59%, 97.97%, and 97.22%, respectively. The artificial intelligence-assisted diagnosis system proposed in this study exhibits high precision in evaluating the degree of aortic valve calcification, positioning echocardiographic examination as a promising alternative in routine aortic valve calcification analysis and screening.
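The squeeze-and-excitation mechanism the abstract mentions is a small, self-contained operation: global-average-pool each channel, pass the channel statistics through a bottleneck MLP, and rescale the channels by the resulting sigmoid weights. A minimal NumPy sketch with random placeholder weights (the paper's exact module configuration is not given here):

```python
import numpy as np

# Minimal squeeze-and-excitation (SE) block, in the spirit of Hu et al.,
# which the paper plugs into its dual-channel network to adjust the
# feature-extraction scale. Weights are random placeholders.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feat, W1, W2):
    """feat: (C, H, W). Squeeze -> bottleneck MLP -> channel-wise rescale."""
    z = feat.mean(axis=(1, 2))                 # squeeze: global avg pool, (C,)
    s = sigmoid(W2 @ np.maximum(W1 @ z, 0.0))  # excite: weights in (0, 1)
    return feat * s[:, None, None]             # rescale each channel

# Toy usage: 4 channels, reduction ratio 2, random placeholder weights.
rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 4))   # (C/r, C)
W2 = rng.normal(size=(4, 2))   # (C, C/r)
out = se_block(np.ones((4, 3, 3)), W1, W2)
```

The bottleneck (reduction ratio r) keeps the added parameter count small while letting the network learn which channels to emphasize.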

Bhushan A, Misra P

pubmed logopapersOct 20 2025
Artificial Intelligence (AI) is revolutionizing biotechnology by accelerating advancements in drug discovery, genomics, medical imaging, and personalized medicine, thereby enhancing efficiency and reducing healthcare costs. This review emphasizes the transformative potential of multimodal AI-systems that integrate diverse data types such as genomic, clinical, and imaging data-to deliver more accurate and holistic biomedical insights. We explore AI's economic impact, role in driving innovation, and implications for both researchers and policymakers. Additionally, the review addresses key challenges, including data quality, algorithmic transparency, and ethical concerns, highlighting the urgent need for explainable AI models, robust regulatory frameworks, and equitable implementation to ensure responsible and impactful adoption across global healthcare systems.

Zhang L, Huang C, Xu Q, Cheng L

pubmed logopapersOct 20 2025
Thyroid nodules are highly prevalent in clinical practice, and their incidence has been steadily increasing in recent years, posing significant threats to human health. Traditional imaging examinations for thyroid nodules rely heavily on physicians' clinical experience and professional expertise, and are further limited by factors such as image resolution and inter-patient variability. These limitations hinder the accuracy and efficiency of clinical diagnosis. Leveraging its powerful image processing capabilities, deep learning has been widely applied in the extraction of nodule features and the preliminary classification of benign and malignant cases, bringing transformative advances to medical image analysis. In this review, we systematically summarize recent developments in the diagnosis of thyroid nodules using deep learning from three key perspectives: model architectures, training methods, and core tasks in thyroid nodule medical image analysis. We compare various architectures, including CNNs, RNNs, GANs, Transformers, and hybrid models. We then summarize key challenges in thyroid nodule imaging, outline potential solutions, and consider how deep learning can be integrated into clinical workflows. Looking ahead, we discuss future directions for enhancing the applicability of deep learning in terms of model robustness, cross-domain adaptation, and clinical interpretability. Our work aims to provide valuable reference insights and directions for future research.

Ghazal Danaee, Marc Niethammer, Jarrett Rushmore, Sylvain Bouix

arxiv logopreprintOct 20 2025
Deep-learning-based segmentation algorithms have substantially advanced the field of medical image analysis, particularly in structural delineations in MRIs. However, an important consideration is the intrinsic bias in the data. Concerns about unfairness, such as performance disparities based on sensitive attributes like race and sex, are increasingly urgent. In this work, we evaluate the results of three different segmentation models (UNesT, nnU-Net, and CoTr) and a traditional atlas-based method (ANTs), applied to segment the left and right nucleus accumbens (NAc) in MRI images. We utilize a dataset including four demographic subgroups: black female, black male, white female, and white male. We employ manually labeled gold-standard segmentations to train and test segmentation models. This study consists of two parts: the first assesses the segmentation performance of models, while the second measures the volumes they produce to evaluate the effects of race, sex, and their interaction. Fairness is quantitatively measured using a metric designed to quantify fairness in segmentation performance. Additionally, linear mixed models analyze the impact of demographic variables on segmentation accuracy and derived volumes. Training on the same race as the test subjects leads to significantly better segmentation accuracy for some models. ANTs and UNesT show notable improvements in segmentation accuracy when trained and tested on race-matched data, unlike nnU-Net, which demonstrates robust performance independent of demographic matching. Finally, we examine sex and race effects on the volume of the NAc using segmentations from the manual rater and from our biased models. Results reveal that the sex effects observed with manual segmentation can also be observed with biased models, whereas the race effects disappear in all but one model.
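The paper's fairness analysis rests on comparing per-subgroup segmentation quality. A toy sketch of the two ingredients, the Dice coefficient and a simple worst-versus-best subgroup gap; the gap definition here is an illustrative stand-in, not necessarily the paper's exact fairness metric, and the data are invented:

```python
# Illustrative fairness accounting for segmentation: Dice per case, mean
# Dice per demographic subgroup, and the max-minus-min gap across groups.

def dice(pred, truth):
    """pred, truth: binary masks as flat 0/1 lists."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

def subgroup_gap(scores_by_group):
    """scores_by_group: dict mapping group label -> list of Dice scores."""
    means = {g: sum(s) / len(s) for g, s in scores_by_group.items()}
    return max(means.values()) - min(means.values()), means
```

A gap near zero indicates similar segmentation accuracy across subgroups; a large gap flags the kind of race- or sex-linked disparity the study investigates.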

Tarki FE, Sharbatdar M, Zarrabi M, Vafaee F, Khanbabaei G

pubmed logopapersOct 20 2025
The significant global health threat of antimicrobial resistance is particularly noted among patients with cystic fibrosis, who experience increased morbidity and mortality due to persistent bacterial infections. The present research explores the feasibility of utilizing intranasal administration to improve the therapeutic effectiveness of antibiotics in individuals with cystic fibrosis (CF). This approach leverages the unique anatomical and physiological features of the nasal cavity for targeted drug delivery. Computational fluid dynamics (CFD) was employed to model drug deposition patterns in three demographic groups: a healthy adult aged 37, a healthy child aged 5, and a 6-year-old pediatric CF patient exhibiting nasal cavity structural anomalies. CT scan imaging was utilized to reconstruct nasal geometries, and airflow, particle trajectories, and deposition rates were analyzed across different breathing patterns, spray angles, and particle sizes. Machine learning models were developed as surrogate predictors of regional doses based on CFD simulations. The results emphasize the critical role of anatomical differences in optimizing intranasal drug delivery strategies, underscoring the necessity for tailored approaches in pediatric populations, especially for those with cystic fibrosis, to combat the escalating issue of antimicrobial resistance effectively. Insights into enhancing antibiotic delivery are provided to improve treatment outcomes and mitigate resistance development in vulnerable patient groups.

Dou Z, Lu C, Shen X, Gu C, Shen Y, Xu W, Qin S, Zhu J, Xu C, Li J

pubmed logopapersOct 20 2025
With pancreatic cancer’s dismal prognosis, developing accurate predictive tools is crucial for personalized treatment. This study aims to develop and evaluate a radiomics-3D deep learning fusion model to enhance survival prediction accuracy and explore its potential for clinical risk stratification in pancreatic cancer patients. This study retrospectively analyzed data from pancreatic cancer patients treated at two hospitals between 2013 and 2023. Patients were split into training and test cohorts (7:3). Baseline clinical data and portal venous phase contrast-enhanced CT images were collected. Two physicians independently delineated tumor regions of interest (ROIs), and 1,037 radiomic features were extracted. After dimensionality reduction via principal component analysis (PCA) and feature selection with LASSO regression, a radiomics model was developed using the random survival forest (RSF) algorithm to predict overall survival, accounting for censored data. A separate 3D-DenseNet model was trained using ROI-based image inputs to extract deep features. For fusion models, we adopted a binary classification approach to predict survival status at 1-, 2-, and 3-year time points. Radiomics features, 3D-DenseNet outputs, and clinical variables were integrated using logistic regression, random forest, support vector machine, and decision tree classifiers. Model performance was evaluated using receiver operating characteristic (ROC) curves, area under the curve (AUC), and accuracy. The best-performing fusion model was selected for clinical risk stratification. Kaplan-Meier curves and Log-rank tests were used to assess survival differences between risk groups. A total of 880 eligible patients were included in this study. In the test cohort, the performance of each model in predicting 1-year, 2-year, and 3-year survival was evaluated. The radiomics model achieved AUC values of 0.78, 0.85, and 0.91, with corresponding accuracies of 0.75, 0.77, and 0.77.
The 3D-DenseNet model demonstrated AUC values of 0.81, 0.79, and 0.75, with accuracies of 0.72, 0.76, and 0.77. The fusion model, developed using logistic regression, exhibited superior predictive performance with AUC values of 0.87, 0.92, and 0.94, and accuracies of 0.84, 0.86, and 0.89, outperforming the individual unimodal models. Risk stratification based on the fusion model categorized patients into high-risk and low-risk groups, revealing a statistically significant difference in OS between the two groups (<i>P</i> < 0.001). Feature contribution analysis indicated that the 3D-DenseNet model had the greatest influence on the predictions of the fusion model, followed by the radiomics model. This study developed a fusion model incorporating radiomics features, deep learning-derived features, and clinical data, which outperformed unimodal models in predicting survival outcomes in pancreatic cancer and demonstrated potential utility in patient risk stratification. The online version contains supplementary material available at 10.1186/s12885-025-14889-0.
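The fusion step the abstract describes, logistic regression over radiomics, deep-feature, and clinical inputs, can be sketched compactly. The data, feature layout, and training settings below are invented for illustration; in practice scikit-learn's `LogisticRegression` would fill the same role:

```python
import numpy as np

# Hypothetical fusion sketch: plain gradient-descent logistic regression
# over per-patient feature rows (e.g. [radiomics score, 3D-DenseNet score,
# clinical variable]) predicting survival status at a fixed time point.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Gradient descent on the logistic loss; bias folded into X."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict_risk(w, X):
    """Predicted probability of the event (here, death by the time point)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return sigmoid(Xb @ w)
```

Thresholding `predict_risk` (for example at the cohort median) yields the kind of high-risk/low-risk stratification whose survival curves the study compares with Kaplan-Meier analysis and a Log-rank test.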
