
Chi Ding, Qingchao Zhang, Ge Wang, Xiaojing Ye, Yunmei Chen

arXiv preprint · Jul 30, 2025
We propose a learnable variational model that learns features and leverages complementary information from both the image and measurement domains for image reconstruction. In particular, we build on the learned alternating minimization algorithm (LAMA) from our prior work, which tackles two-block nonconvex, nonsmooth optimization problems by incorporating a residual learning architecture into a proximal alternating framework. Here, our goal is to provide a complete and rigorous convergence proof for LAMA and to show that all accumulation points of a specified subsequence of LAMA iterates are Clarke stationary points of the problem. LAMA directly yields a highly interpretable neural network architecture, LAMA-Net. Notably, beyond the results in our prior work, we demonstrate that LAMA's convergence property confers outstanding stability and robustness on LAMA-Net. We also show that LAMA-Net's performance can be further improved by integrating a properly designed network that generates suitable initializations, which we call iLAMA-Net. To evaluate LAMA-Net and iLAMA-Net, we conduct experiments on popular benchmark datasets for sparse-view computed tomography and compare against several state-of-the-art methods.
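The LAMA architecture itself is not reproduced here, but the proximal alternating scheme it learns from can be illustrated on a toy two-block problem. The sketch below, under assumed problem data (a random Gaussian operator `A`, an L1-regularized second block, and hypothetical parameters `lam`, `mu`), alternates an exact quadratic minimization in one block with a proximal (soft-thresholding) step in the other:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pam(A, b, lam=0.1, mu=1.0, iters=200):
    """Proximal alternating minimization for the toy two-block problem
    min_{x,z} 0.5*||Ax - b||^2 + 0.5*mu*||x - z||^2 + lam*||z||_1."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    Atb = A.T @ b
    H = A.T @ A + mu * np.eye(n)
    for _ in range(iters):
        # x-block: exact minimization (the subproblem is quadratic in x)
        x = np.linalg.solve(H, Atb + mu * z)
        # z-block: proximal step (soft-thresholding)
        z = soft_threshold(x, lam / mu)
    return x, z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true
x, z = pam(A, b, lam=0.05)
```

LAMA replaces hand-crafted steps like these with learned residual networks inside the same alternating structure; the sketch only conveys the two-block skeleton.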

Yueh-Po Peng, Vincent K. M. Cheung, Li Su

arXiv preprint · Jul 30, 2025
A fundamental challenge in neuroscience is to decode mental states from brain activity. While functional magnetic resonance imaging (fMRI) offers a non-invasive approach to capture brain-wide neural dynamics with high spatial precision, decoding from fMRI data -- particularly from task-evoked activity -- remains challenging due to its high dimensionality, low signal-to-noise ratio, and limited within-subject data. Here, we leverage recent advances in computer vision and propose STDA-SwiFT, a transformer-based model that learns transferable representations from large-scale fMRI datasets via spatial-temporal divided attention and self-supervised contrastive learning. Using pretrained voxel-wise representations from 995 subjects in the Human Connectome Project (HCP), we show that our model substantially improves downstream decoding performance of task-evoked activity across multiple sensory and cognitive domains, even with minimal data preprocessing. We demonstrate performance gains from the larger receptive fields afforded by our memory-efficient attention mechanism, as well as the impact of functional relevance in pretraining data when fine-tuning on small samples. Our work showcases transfer learning as a viable approach to harness large-scale datasets to overcome challenges in decoding brain activity from fMRI data.
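The memory saving behind divided spatial-temporal attention can be seen in a minimal sketch. Rather than one attention pass over all T*S tokens (cost quadratic in T*S), the factorized form attends over space within each frame, then over time at each spatial position. This is a generic illustration of the factorization, not the STDA-SwiFT implementation (learned projections and heads are omitted):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention over the token axis.
    d = q.shape[-1]
    w = softmax(q @ np.swapaxes(k, -1, -2) / np.sqrt(d))
    return w @ v

def divided_st_attention(x):
    """x: (T, S, D) tokens. Two cheap passes (S x S then T x T score
    matrices) instead of one full (T*S) x (T*S) attention."""
    # Spatial attention, batched over time frames.
    x = attention(x, x, x)             # (T, S, D)
    # Temporal attention, batched over spatial positions.
    xt = np.swapaxes(x, 0, 1)          # (S, T, D)
    xt = attention(xt, xt, xt)
    return np.swapaxes(xt, 0, 1)       # back to (T, S, D)

rng = np.random.default_rng(0)
tokens = rng.standard_normal((8, 64, 32))   # 8 timepoints, 64 patches, dim 32
out = divided_st_attention(tokens)
```

For fMRI volumes, S is the number of spatial patches per timepoint, so the quadratic term drops from (T*S)^2 to T*S^2 + S*T^2.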

Yuzhen Gao, Qianqian Wang, Yongheng Sun, Cui Wang, Yongquan Liang, Mingxia Liu

arXiv preprint · Jul 30, 2025
Accurate identification of late-life depression (LLD) using structural brain MRI is essential for monitoring disease progression and facilitating timely intervention. However, existing learning-based approaches for LLD detection are often constrained by limited sample sizes (e.g., tens of subjects), which poses significant challenges for reliable model training and generalization. Although incorporating auxiliary datasets can expand the training set, substantial domain heterogeneity, such as differences in imaging protocols, scanner hardware, and population demographics, often undermines cross-domain transferability. To address this issue, we propose a Collaborative Domain Adaptation (CDA) framework for LLD detection using T1-weighted MRIs. The CDA leverages a Vision Transformer (ViT) to capture global anatomical context and a Convolutional Neural Network (CNN) to extract local structural features, with each branch comprising an encoder and a classifier. The CDA framework consists of three stages: (a) supervised training on labeled source data, (b) self-supervised target feature adaptation, and (c) collaborative training on unlabeled target data. We first train the ViT and CNN on source data, then perform self-supervised target feature adaptation by minimizing the discrepancy between the classifier outputs of the two branches to sharpen the decision boundary between categories. The collaborative training stage employs pseudo-labeled and augmented target-domain MRIs, enforcing prediction consistency under strong and weak augmentation to enhance domain robustness and generalization. Extensive experiments conducted on multi-site T1-weighted MRI data demonstrate that the CDA consistently outperforms state-of-the-art unsupervised domain adaptation methods.
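The prediction-consistency idea in stage (c) can be sketched with a FixMatch-style loss: the weakly augmented view supplies a pseudo-label, and only confident samples supervise the strongly augmented view. The confidence threshold and exact loss form here are assumptions for illustration, not the paper's specification:

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(logits_weak, logits_strong, threshold=0.9):
    """Unlabeled-data loss: pseudo-label from the weak view; confident
    samples (max prob >= threshold) supervise the strong view."""
    p_weak = softmax(logits_weak)
    conf = p_weak.max(axis=1)
    pseudo = p_weak.argmax(axis=1)
    mask = conf >= threshold                 # keep confident samples only
    p_strong = softmax(logits_strong)
    ce = -np.log(p_strong[np.arange(len(pseudo)), pseudo] + 1e-12)
    return (ce * mask).sum() / max(mask.sum(), 1)

# Toy batch: 3 unlabeled scans, 2 classes (LLD vs. control).
lw = np.array([[4.0, 0.0], [0.2, 0.1], [0.0, 5.0]])   # weak-view logits
ls = np.array([[3.0, 0.5], [0.0, 0.0], [1.0, 2.0]])   # strong-view logits
loss = consistency_loss(lw, ls)
```

The middle sample's weak-view confidence (~0.52) falls below the threshold, so it contributes nothing; only the two confident samples drive the consistency term.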

Gu Y, Bai H, Chen M, Yang L, Zhang B, Wang J, Lu X, Li J, Liu X, Yu D, Zhao Y, Tang S, He Q

PubMed · Jul 30, 2025
Accurate differential diagnosis of pneumonia remains a challenging task, as different types of pneumonia require distinct treatment strategies. Early and precise diagnosis is crucial for minimizing the risk of misdiagnosis and for effectively guiding clinical decision-making and monitoring treatment response. This study proposes the WSDC-ViT network to enhance computer-aided pneumonia detection and alleviate the diagnostic workload for radiologists. Unlike existing models such as Swin Transformer or CoAtNet, which primarily improve attention mechanisms through hierarchical designs or convolutional embedding, WSDC-ViT introduces a novel architecture that simultaneously enhances global and local feature extraction through a scalable self-attention mechanism and convolutional refinement. Specifically, the network integrates a scalable self-attention mechanism that decouples the query, key, and value dimensions to reduce computational overhead and improve contextual learning, while an interactive window-based attention module further strengthens long-range dependency modeling. Additionally, a convolution-based module equipped with a dynamic ReLU activation function is embedded within the transformer encoder to capture fine-grained local details and adaptively enhance feature expression. Experimental results demonstrate that the proposed method achieves an average classification accuracy of 95.13% and an F1-score of 95.63% on a chest X-ray dataset, along with 99.36% accuracy and a 99.34% F1-score on a CT dataset. These results highlight the model's superior performance compared to existing automated pneumonia classification approaches, underscoring its potential clinical applicability.
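The cost saving from decoupling the query/key dimension from the value dimension can be shown generically. In the sketch below (an illustration of the general idea, not the WSDC-ViT implementation; the projection matrices and the dimensions `d_qk`, `d_v` are assumptions), similarity scores are computed in a small d_qk-dimensional space while values retain a larger d_v:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scalable_attention(x, d_qk=8, d_v=32, seed=0):
    """Self-attention with decoupled dimensions: queries/keys live in a
    small d_qk space (cheap n x n scores), values in a larger d_v space."""
    n, d = x.shape
    rng = np.random.default_rng(seed)
    Wq = rng.standard_normal((d, d_qk)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_qk)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d_v)) / np.sqrt(d)
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    w = softmax(q @ k.T / np.sqrt(d_qk))   # n x n scores from d_qk-dim projections
    return w @ v                            # n x d_v output

x = np.random.default_rng(1).standard_normal((16, 64))  # 16 patch tokens, dim 64
y = scalable_attention(x)
```

Shrinking d_qk reduces the score-computation cost from O(n^2 * d) toward O(n^2 * d_qk) without narrowing the content carried by the values.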

Ma CY, Fu Y, Liu L, Chen J, Li SY, Zhang L, Zhou JY

PubMed · Jul 30, 2025
This study aimed to develop and validate a multi-temporal magnetic resonance imaging (MRI)-based delta-radiomics model to accurately predict the risk of severe acute radiation enteritis in patients undergoing total neoadjuvant therapy (TNT) for locally advanced rectal cancer (LARC). A retrospective analysis was conducted on data from 92 patients with LARC who received TNT. All patients underwent pelvic MRI at baseline (pre-treatment) and after neoadjuvant radiotherapy (post-RT). Radiomic features of the primary tumor region were extracted from T2-weighted images at both timepoints. Four delta-feature strategies were defined: absolute difference, percent change, ratio, and feature fusion (concatenation of the pre- and post-RT features). Severe acute radiation enteritis (SARE) was defined as a composite CTCAE-based symptom score of ≥ 3 within the first 2 weeks of radiotherapy. Features were selected via statistical evaluation and least absolute shrinkage and selection operator (LASSO) regression. Support vector machine (SVM) classifiers were trained using baseline, post-RT, delta, and combined radiomic and clinical features. Model performance was evaluated in an independent test set based on the area under the curve (AUC) and other metrics. Only the delta-fusion strategy retained stable radiomic features after selection, outperforming the difference, percent-change, and ratio definitions in feature stability and model performance. The SVM model based on combined delta-fusion radiomics and clinical variables demonstrated the best predictive performance and generalizability. In the independent test cohort, this combined model achieved an AUC of 0.711, a sensitivity of 88.9%, and an F1-score of 0.696, surpassing models built with baseline-only or delta-difference features. Integrating multi-temporal radiomic features via delta-fusion with clinical factors markedly improved early prediction of SARE in LARC.
The delta-fusion approach outperformed conventional delta calculations and demonstrated superior predictive performance, highlighting its potential to guide individualized TNT sequencing and proactive toxicity management.
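The four delta-feature definitions compared above are simple transformations of paired feature vectors. A minimal sketch, with hypothetical pre/post feature values (the study's actual radiomic features are not reproduced here):

```python
import numpy as np

def delta_features(pre, post, eps=1e-8):
    """Four delta-radiomics definitions from pre- and post-RT feature
    vectors: absolute difference, percent change, ratio, and fusion
    (concatenation of both timepoints)."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return {
        "difference": post - pre,
        "percent": (post - pre) / (np.abs(pre) + eps) * 100.0,
        "ratio": post / (pre + eps),
        "fusion": np.concatenate([pre, post]),
    }

pre = np.array([10.0, 2.0, 0.5])    # hypothetical pre-treatment features
post = np.array([8.0, 2.5, 0.5])    # hypothetical post-RT features
d = delta_features(pre, post)
```

Note that fusion doubles the feature dimension rather than collapsing the two timepoints into one number per feature, which is why it can preserve information the other three definitions discard.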

Lin L, Ren Y, Jian W, Yang G, Zhang B, Zhu L, Zhao W, Meng H, Wang X, He Q

PubMed · Jul 30, 2025
Radiation-induced xerostomia is a common sequela in patients who undergo head and neck radiation therapy. This study aims to develop a three-dimensional deep learning model to predict xerostomia by fusing data from the gross tumor volume primary (GTVp) channel and the parotid glands (PGs) channel. Retrospective data were collected from 180 head and neck cancer patients. Xerostomia was defined as xerostomia of grade ≥ 2 occurring in the 6th month after radiation therapy. The dataset was split into 137 cases (58.4% xerostomia, 41.6% non-xerostomia) for training and 43 (55.8% xerostomia, 44.2% non-xerostomia) for testing. XeroNet was composed of GNet, PNet, and a Naive Bayes decision fusion layer. GNet processed data from the GTVp channel (CT images, the corresponding dose distributions, and the GTVp contours). PNet processed data from the PGs channel (CT images, dose distributions, and the PGs contours). The Naive Bayes decision fusion layer integrated the results from GNet and PNet. Model performance was evaluated using accuracy, F-score, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC). The proposed model achieved promising prediction results: accuracy, AUC, F-score, sensitivity, and specificity were 0.779, 0.858, 0.797, 0.777, and 0.782, respectively. Features extracted from the CT and dose distributions in the GTVp and PGs regions were also used to construct machine learning baselines, but their performance was inferior to our method. Compared with recent studies on xerostomia prediction, our method also showed better performance. The proposed model could effectively extract features from the GTVp and PGs channels, achieving good performance in xerostomia prediction.
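A Naive Bayes decision fusion layer of the kind described combines per-class posteriors from two branches under a conditional-independence assumption: P(y | x1, x2) ∝ P(y | x1) P(y | x2) / P(y). The sketch below shows this rule on hypothetical branch outputs (the paper's learned layer may differ in detail):

```python
import numpy as np

def naive_bayes_fusion(p1, p2, prior=None):
    """Fuse class posteriors from two branches assuming the branches are
    conditionally independent given the class:
    P(y | x1, x2) ∝ P(y | x1) * P(y | x2) / P(y)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    if prior is None:
        prior = np.full_like(p1, 1.0 / p1.shape[-1])  # uniform class prior
    fused = p1 * p2 / prior
    return fused / fused.sum(axis=-1, keepdims=True)  # renormalize

# Hypothetical binary xerostomia call from the two branches.
p_gnet = np.array([0.7, 0.3])
p_pnet = np.array([0.6, 0.4])
p = naive_bayes_fusion(p_gnet, p_pnet)
```

When both branches lean the same way, the fused posterior (~0.78 here) is more confident than either branch alone, which is the point of multiplicative evidence combination.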

Bonfanti-Gris M, Herrera A, Salido Rodríguez-Manzaneque MP, Martínez-Rus F, Pradíes G

PubMed · Jul 30, 2025
This systematic review and meta-analysis aimed to summarize and evaluate the available evidence on the performance of deep learning methods for tooth detection and segmentation in orthopantomographies. Electronic databases (Medline, Embase, and Cochrane) were searched up to September 2023 for relevant observational studies and randomized controlled clinical trials. Two reviewers independently conducted study selection, data extraction, and quality assessment. The GRADE (Grading of Recommendations, Assessment, Development, and Evaluation) approach was adopted for collective grading of the overall body of evidence. From the 2,207 records identified, 20 studies were included in the analysis. Meta-analysis was conducted for mesiodens detection and segmentation (n = 6), using sensitivity and specificity as the two main diagnostic parameters. Quantitative analysis of the included studies showed a pooled sensitivity, specificity, positive likelihood ratio (LR), negative LR, and diagnostic odds ratio of 0.92 (95% confidence interval [CI], 0.84-0.96), 0.94 (95% CI, 0.89-0.97), 15.7 (95% CI, 7.6-32.2), 0.08 (95% CI, 0.04-0.18), and 186 (95% CI, 44-793), respectively. A graphical summary of the meta-analysis was plotted based on sensitivity and specificity, illustrating a Hierarchical Summary Receiver Operating Characteristic (HSROC) curve with its prediction region, summary point, and confidence region; the HSROC analysis showed a positive correlation between logit-transformed sensitivity and specificity (r = 0.886). Based on the results of the meta-analysis and GRADE assessment, a moderate recommendation is advised to dental operators relying on AI-based tools for tooth detection and segmentation in panoramic radiographs.
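The likelihood ratios and diagnostic odds ratio follow directly from sensitivity and specificity. A naive plug-in of the pooled point estimates above roughly reproduces the reported values (it will not match them exactly, because the review pools via a bivariate model rather than transforming point estimates):

```python
def diagnostic_summary(sens, spec):
    """Derive LR+, LR-, and the diagnostic odds ratio from sensitivity
    and specificity. DOR = LR+ / LR-."""
    lr_pos = sens / (1.0 - spec)    # how much a positive test raises the odds
    lr_neg = (1.0 - sens) / spec    # how much a negative test lowers the odds
    return lr_pos, lr_neg, lr_pos / lr_neg

# Pooled point estimates reported above.
lr_pos, lr_neg, dor = diagnostic_summary(0.92, 0.94)
```

The plug-in gives LR+ ≈ 15.3, LR- ≈ 0.085, and DOR ≈ 180, consistent in magnitude with the pooled 15.7, 0.08, and 186.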

Dorosti S, Landry T, Brewer K, Forbes A, Davis C, Brown J

PubMed · Jul 30, 2025
Glioblastoma multiforme (GBM) is the most aggressive type of brain cancer, making effective treatments essential to improve patient survival. To advance the understanding of GBM and develop more effective therapies, preclinical studies commonly use mouse models due to their genetic and physiological similarities to humans. In particular, the GL261 mouse glioma model is employed for its reproducible tumor growth and ability to mimic key aspects of human gliomas. Ultrasound imaging is a valuable modality in preclinical studies, offering real-time, non-invasive tumor monitoring and facilitating treatment response assessment. Furthermore, its potential therapeutic applications, such as in tumor ablation, expand its utility in preclinical studies. However, real-time segmentation of GL261 tumors during surgery introduces significant complexities, such as precise tumor boundary delineation and maintaining processing efficiency. Automated segmentation offers a solution, but its success relies on high-quality datasets with precise labeling. Our study introduces the first publicly available ultrasound dataset specifically developed to improve tumor segmentation in GL261 glioblastomas, providing 1,856 annotated images to support AI model development in preclinical research. This dataset bridges preclinical insights and clinical practice, laying the foundation for developing more accurate and effective tumor resection techniques.

Pattanayak S, Singh T, Kumar R

PubMed · Jul 30, 2025
Neoadjuvant therapy plays a pivotal role in breast cancer treatment, particularly for patients aiming to conserve the breast by reducing tumor size pre-surgery. The ultimate goal of this treatment is a pathologic complete response (pCR), which signifies the complete eradication of cancer cells and thereby lowers the likelihood of recurrence. This study introduces a novel predictive approach to identify patients likely to achieve pCR using radiomic features extracted from MR images, enhanced by the InceptionV3 model and rigorous validation methodologies. We gathered data from 255 unique patient IDs sourced from the I-SPY 2 MRI database with the goal of classifying pCR. Our research introduced two key areas of novelty. First, we extracted advanced features from the DICOM series, such as the area, perimeter, entropy, and mean intensity of regions brighter than the image average. These features provided deeper insight into the characteristics of the MRI data and enhanced the discriminative power of our classification model. Second, we fed these extracted features, together with the combined pixel arrays of each patient's DICOM series, into numerous deep learning models; the InceptionV3 (GoogLeNet) model provided the best accuracy. To optimize performance, we experimented with different combinations of loss functions, optimizers, and activation functions. Finally, our classification results were validated using accuracy, AUC, sensitivity, specificity, and F1 score. These evaluation metrics provided a robust assessment of model performance and ensured the reliability of our findings.
The combination of advanced feature extraction, the InceptionV3 model with tailored hyperparameters, and thorough validation significantly enhanced the accuracy and reliability of our pCR classification study. By adopting a collaborative approach involving both radiologists and the computer-aided system, we achieved superior predictive performance for pCR, with an area under the curve (AUC) of 0.91 and an accuracy of 0.92.
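The named hand-crafted features (area, perimeter, entropy, above-average intensity) can be computed with basic array operations. This is an illustrative take on those feature names, not the study's exact definitions; the threshold (image mean), 4-neighbour perimeter, and 32-bin histogram are assumptions:

```python
import numpy as np

def region_features(img):
    """Simple features of the above-mean-intensity region of a 2D image:
    area, perimeter (boundary pixel count), Shannon entropy of the image
    histogram, and mean intensity inside the region."""
    img = np.asarray(img, float)
    mask = img > img.mean()
    area = int(mask.sum())
    # Boundary pixels: in the mask but with at least one 4-neighbour outside it.
    pad = np.pad(mask, 1)
    interior = pad[:-2, 1:-1] & pad[2:, 1:-1] & pad[1:-1, :-2] & pad[1:-1, 2:]
    perimeter = int((mask & ~interior).sum())
    hist, _ = np.histogram(img, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())
    mean_in = float(img[mask].mean()) if area else 0.0
    return {"area": area, "perimeter": perimeter,
            "entropy": entropy, "mean_intensity": mean_in}

# Toy image: a bright 4x4 square on a dark background.
img = np.zeros((10, 10)); img[3:7, 3:7] = 1.0
f = region_features(img)
```

In practice, a library such as scikit-image (`measure.regionprops`) offers standardized versions of these shape and intensity descriptors.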

Gillet R, Puel U, Amer A, Doyen M, Boubaker F, Assabah B, Hossu G, Gillet P, Blum A, Teixeira PAG

PubMed · Jul 30, 2025
High-resolution CT (HR-CT) cannot image trabecular bone due to insufficient spatial resolution; ultra-high-resolution CT may be a valuable alternative. We aimed to describe the accuracy of Canon Medical HR, super-high-resolution (SHR), and ultra-high-resolution (UHR) CT in measuring trabecular bone microarchitectural parameters, using micro-CT as the reference. Sixteen cadaveric distal tibial epiphyses were included in this pre-clinical study. Images were acquired with HR-CT (0.5 mm slice thickness, 512² matrix) and SHR-CT (0.25 mm slice thickness, 1024² matrix), each with and without deep learning reconstruction (DLR), and with UHR-CT (0.25 mm slice thickness, 2048² matrix) without DLR. Trabecular bone parameters were compared. Trabecular thickness was closest with UHR-CT but remained 1.37 times that of micro-CT (P < 0.001). With SHR-CT without and with DLR, it was 1.75 and 1.79 times that of micro-CT, respectively (P < 0.001), and with HR-CT without and with DLR, 3.58 and 3.68 times that of micro-CT, respectively (P < 0.001). Trabecular separation was 0.7 times that of micro-CT with UHR-CT (P < 0.001); 0.93 and 0.94 times that of micro-CT with SHR-CT without and with DLR (P = 0.36 and 0.79, respectively); and 1.52 and 1.36 times that of micro-CT with HR-CT without and with DLR (P < 0.001). Bone volume/total volume was overestimated by all techniques (1.66 to 1.92 times that of micro-CT; P < 0.001), although HR-CT values exceeded UHR-CT values (P = 0.03 and 0.01 without and with DLR, respectively). UHR- and SHR-CT were the closest techniques to micro-CT and surpassed HR-CT.