
Deep learning predicts HER2 status in invasive breast cancer from multimodal ultrasound and MRI.

Fan Y, Sun K, Xiao Y, Zhong P, Meng Y, Yang Y, Du Z, Fang J

pubmed · May 16, 2025
The preoperative human epidermal growth factor receptor 2 (HER2) status of breast cancer is typically determined by pathological examination of a core needle biopsy, and it influences the efficacy of neoadjuvant chemotherapy (NAC). However, the marked heterogeneity of breast cancer and the limitations of needle biopsy make pathological evaluation unstable. The aim of this study was to predict preoperative HER2 status in breast cancer using deep learning (DL) models based on ultrasound (US) and magnetic resonance imaging (MRI). The study included women with invasive breast cancer who underwent US and MRI at our institution between January 2021 and July 2024. US images and dynamic contrast-enhanced T1-weighted MRI images were used to construct three DL models: DL-US (based on US alone), DL-MRI (based on MRI alone), and DL-MRI&US (based on both MRI and US). All classifications were based on postoperative pathological evaluation. Receiver operating characteristic (ROC) analysis and the DeLong test were used to compare the diagnostic performance of the DL models. In the test cohort, DL-US differentiated HER2 status with an AUC of 0.842 (95% CI: 0.708-0.931) and sensitivity and specificity of 89.5% and 79.3%, respectively. DL-MRI achieved an AUC of 0.800 (95% CI: 0.660-0.902), with sensitivity and specificity of 78.9% and 79.3%, respectively. DL-MRI&US yielded an AUC of 0.898 (95% CI: 0.777-0.967), with sensitivity and specificity of 63.2% and 100.0%, respectively.
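
The modality comparison above hinges on ROC analysis; below is a minimal sketch of that evaluation in Python with synthetic scores. It uses a percentile bootstrap for the confidence interval rather than the DeLong method the authors report, and the simple score-averaging fusion is a stand-in, not the paper's DL-MRI&US model.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05):
    """Percentile-bootstrap 95% CI for ROC AUC (illustrative; not the DeLong test)."""
    aucs = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if len(np.unique(y_true[idx])) < 2:  # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    return np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# y: postoperative HER2 status (1 = positive); p_us / p_mri: per-modality model scores
y = rng.integers(0, 2, 200)
p_us, p_mri = rng.random(200), rng.random(200)
p_fused = 0.5 * (p_us + p_mri)  # simple score averaging as a stand-in for the fused model
print(roc_auc_score(y, p_fused), bootstrap_auc_ci(y, p_fused))
```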

Deep learning model based on ultrasound images predicts BRAF V600E mutation in papillary thyroid carcinoma.

Yu Y, Zhao C, Guo R, Zhang Y, Li X, Liu N, Lu Y, Han X, Tang X, Mao R, Peng C, Yu J, Zhou J

pubmed · May 16, 2025
BRAF V600E mutation status detection facilitates prognosis prediction in papillary thyroid carcinoma (PTC). We developed a deep learning model to determine BRAF V600E status in PTC. PTC cases from three centers were collected as the training set (1341 patients), validation set (148 patients), and external test set (135 patients). After comparing the performance of ResNeSt-50, Vision Transformer, and Swin Transformer V2 (SwinT), SwinT was chosen as the optimal backbone. An integrated BrafSwinT model was then developed by combining this backbone with a radiomics feature branch and a clinical parameter branch. BrafSwinT achieved an AUC of 0.869 in the external test set, outperforming the original SwinT, Vision Transformer, and ResNeSt-50 models (AUC: 0.782-0.824; p values: 0.017-0.041). BrafSwinT showed promising results in determining BRAF V600E mutation status in PTC from routinely acquired ultrasound images and basic clinical information, thus facilitating risk stratification.
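
As a hedged illustration of the three-branch design described above, the PyTorch sketch below wires an image backbone together with radiomics and clinical branches ahead of a shared classification head. The small CNN stands in for the Swin Transformer V2 backbone, and the feature counts (107 radiomics features, 8 clinical parameters) are assumptions for illustration, not values from the paper.

```python
import torch
import torch.nn as nn

class BrafFusionNet(nn.Module):  # hypothetical name, not the paper's code
    def __init__(self, n_radiomics=107, n_clinical=8):  # assumed feature counts
        super().__init__()
        self.backbone = nn.Sequential(  # small CNN standing in for SwinT
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.radiomics = nn.Sequential(nn.Linear(n_radiomics, 64), nn.ReLU())
        self.clinical = nn.Sequential(nn.Linear(n_clinical, 16), nn.ReLU())
        self.head = nn.Linear(64 + 64 + 16, 2)  # BRAF V600E: mutant vs wild type

    def forward(self, img, rad, clin):
        z = torch.cat([self.backbone(img), self.radiomics(rad), self.clinical(clin)], dim=1)
        return self.head(z)

logits = BrafFusionNet()(torch.randn(4, 1, 224, 224), torch.randn(4, 107), torch.randn(4, 8))
```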

Evaluation of tumour pseudocapsule using computed tomography-based radiomics in pancreatic neuroendocrine tumours to predict prognosis and guide surgical strategy: a cohort study.

Wang Y, Gu W, Huang D, Zhang W, Chen Y, Xu J, Li Z, Zhou C, Chen J, Xu X, Tang W, Yu X, Ji S

pubmed · May 16, 2025
To date, the indications for surgical resection of small pancreatic neuroendocrine tumours (PanNETs) remain controversial. This cohort study aimed to identify pseudocapsule status preoperatively in order to assess the rationale for enucleation and to estimate survival prognosis in PanNETs, particularly for small tumours. Clinicopathological data were collected from patients with PanNETs who underwent their first pancreatectomy at our hospital (n = 578) between February 2012 and September 2023. Kaplan-Meier curves were constructed to visualise prognostic differences. Five distinct tissue samples were obtained for single-cell RNA sequencing (scRNA-seq) to evaluate variations in the tumour microenvironment. Radiomic features were extracted from preoperative arterial-phase contrast-enhanced computed tomography, and the performance of the pseudocapsule radiomics model was assessed using the area under the curve (AUC). In total, 475 cases (mean [SD] age, 53.01 [12.20] years; female-to-male ratio, 1.24:1) were eligible for this study. The mean pathological tumour diameter was 2.99 cm (median: 2.50 cm; interquartile range [IQR]: 1.50-4.00 cm). Cases were stratified into complete (223, 46.95%) and incomplete (252, 53.05%) pseudocapsule groups, with a statistically significant difference in indicators of aggressiveness between the two groups (P < 0.001). scRNA-seq analysis showed that the incomplete group presented a markedly immunosuppressive microenvironment. The 3-year and 5-year recurrence-free survival rates were 94.8% and 92.5%, respectively, for the complete pseudocapsule group, compared with 76.7% and 70.4% for the incomplete pseudocapsule group. The radiomics model discriminated pseudocapsule status significantly, particularly in small tumours (AUC, 0.744; 95% CI, 0.652-0.837). By combining computed tomography-based radiomics and machine learning for preoperative identification of pseudocapsule status, patients with a complete pseudocapsule can be identified as those most likely to benefit from enucleation.
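
A sketch of the reported survival analysis: Kaplan-Meier recurrence-free survival curves by pseudocapsule status with a log-rank comparison, using the lifelines library. The group sizes match the abstract, but the follow-up times and event rates are synthetic.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
# synthetic follow-up (months) and recurrence events; group sizes from the abstract
t_complete, e_complete = rng.exponential(120, 223), rng.random(223) < 0.08
t_incomplete, e_incomplete = rng.exponential(60, 252), rng.random(252) < 0.28

km = KaplanMeierFitter()
km.fit(t_complete, e_complete, label="complete pseudocapsule")
print(km.survival_function_at_times([36, 60]))  # 3- and 5-year RFS
km.fit(t_incomplete, e_incomplete, label="incomplete pseudocapsule")
print(km.survival_function_at_times([36, 60]))
print(logrank_test(t_complete, t_incomplete, e_complete, e_incomplete).p_value)
```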

EScarcityS: A framework for enhancing medical image classification performance in scenarios with scarce trainable samples.

Wang T, Dai Q, Xiong W

pubmed · May 16, 2025
In healthcare, the acquisition and annotation of medical images present significant challenges, resulting in a scarcity of trainable samples. This data limitation hinders the performance of deep learning models, creating bottlenecks in clinical applications. To address this issue, we construct a framework (EScarcityS) aimed at enhancing the success rate of disease diagnosis when trainable medical images are scarce. First, considering that Transformer-based deep learning networks rely on large amounts of training data, this study takes into account the unique characteristics of pathological regions. By extracting feature representations of all particles in medical images at different granularities, a multi-granularity Transformer network (MGVit) is designed. This network leverages additional prior knowledge to assist the Transformer during training, thereby reducing the data requirement to some extent. Next, the importance maps of particles at different granularities generated by MGVit are fused to construct disease probability maps for the corresponding images. Based on these maps, a disease-probability-map-guided diffusion generation model is designed to generate more realistic and interpretable synthetic data. Authentic and synthetic data are then mixed and used to retrain MGVit, enhancing the accuracy of medical image classification under data scarcity. Finally, detailed experiments on four real medical image datasets validate the effectiveness of EScarcityS and its individual modules.
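
The retraining step, mixing authentic and synthetic images, might look like the following PyTorch sketch; the datasets, the linear classifier standing in for MGVit, and all sizes are placeholders.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# placeholder datasets: a small authentic set and a larger synthetic set
real = TensorDataset(torch.randn(64, 1, 64, 64), torch.randint(0, 2, (64,)))
synthetic = TensorDataset(torch.randn(256, 1, 64, 64), torch.randint(0, 2, (256,)))
loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=32, shuffle=True)

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 2))  # stand-in for MGVit
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
for x, y in loader:  # one retraining epoch over the mixed data
    loss = torch.nn.functional.cross_entropy(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```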

UGoDIT: Unsupervised Group Deep Image Prior Via Transferable Weights

Shijun Liang, Ismail R. Alkhouri, Siddhant Gautam, Qing Qu, Saiprasad Ravishankar

arxiv preprint · May 16, 2025
Recent advances in data-centric deep generative models have led to significant progress in solving inverse imaging problems. However, these models (e.g., diffusion models (DMs)) typically require large amounts of fully sampled (clean) training data, which is often impractical in medical and scientific settings such as dynamic imaging. On the other hand, training-data-free approaches like the Deep Image Prior (DIP) do not require clean ground-truth images but suffer from noise overfitting and can be computationally expensive, since the network parameters must be optimized independently for each measurement set. Moreover, DIP-based methods often overlook the potential of learning a prior from the small number of sub-sampled measurements (or degraded images) available during training. In this paper, we propose UGoDIT, an Unsupervised Group DIP via Transferable weights, designed for the low-data regime where only a very small number, M, of sub-sampled measurement vectors are available during training. Our method learns a set of transferable weights by optimizing a shared encoder and M disentangled decoders. At test time, we reconstruct the unseen degraded image using a DIP network in which part of the parameters are fixed to the learned weights while the remainder are optimized to enforce measurement consistency. We evaluate UGoDIT on both medical (multi-coil MRI) and natural (super-resolution and non-linear deblurring) image recovery tasks under various settings. Compared to recent standalone DIP methods, UGoDIT provides accelerated convergence and notable improvement in reconstruction quality. Furthermore, our method achieves performance competitive with state-of-the-art DM-based and supervised approaches, despite not requiring large amounts of clean training data.
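
A hedged sketch of the test-time procedure described above: the transferable encoder weights are frozen and only the remaining parameters are optimized for measurement consistency. The network, the sub-sampling operator, and all sizes are toy stand-ins, not the authors' architecture.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())  # shared, pretrained on M subjects
decoder = nn.Sequential(nn.Conv2d(16, 1, 3, padding=1))             # re-optimized for the test image

for p in encoder.parameters():
    p.requires_grad_(False)  # the transferable weights stay fixed at test time

mask = (torch.rand(1, 1, 64, 64) < 0.3).float()  # toy sub-sampling operator A
y = mask * torch.randn(1, 1, 64, 64)             # degraded measurements of the unseen image
z = torch.randn(1, 1, 64, 64)                    # fixed DIP input

opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
for _ in range(200):
    x_hat = decoder(encoder(z))
    loss = ((mask * x_hat - y) ** 2).mean()  # enforce measurement consistency
    opt.zero_grad(); loss.backward(); opt.step()
```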

Diff-Unfolding: A Model-Based Score Learning Framework for Inverse Problems

Yuanhao Wang, Shirin Shoushtari, Ulugbek S. Kamilov

arxiv preprint · May 16, 2025
Diffusion models are extensively used for modeling image priors for inverse problems. We introduce Diff-Unfolding, a principled framework for learning posterior score functions of conditional diffusion models by explicitly incorporating the physical measurement operator into a modular network architecture. Diff-Unfolding formulates posterior score learning as the training of an unrolled optimization scheme, where the measurement model is decoupled from the learned image prior. This design allows our method to generalize across inverse problems at inference time by simply replacing the forward operator, without retraining. We theoretically justify our unrolling approach by showing that the posterior score can be derived from a composite model-based optimization formulation. Extensive experiments on image restoration and accelerated MRI show that Diff-Unfolding achieves state-of-the-art performance, improving PSNR by up to 2 dB and reducing LPIPS by 22.7%, while being both compact (47M parameters) and efficient (0.72 seconds per 256 × 256 image). An optimized C++/LibTorch implementation further reduces inference time to 0.63 seconds, underscoring the practicality of our approach.
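
The unrolling idea, alternating a data-consistency step built from the known forward operator with a learned prior update, can be sketched as below. This is an illustrative skeleton only: the toy operator, its crude adjoint, and the small CNN standing in for the score network are assumptions, and in practice the stages would be trained end-to-end.

```python
import torch
import torch.nn as nn

class UnrolledStage(nn.Module):
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))  # learned step size
        self.prior = nn.Sequential(                  # toy stand-in for the score network
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x, y, A, At):
        x = x - self.step * At(A(x) - y)  # gradient step on 0.5 * ||Ax - y||^2
        return x + self.prior(x)          # learned residual prior update

A = lambda x: x[..., ::2, :]                          # toy forward operator: row subsampling
At = lambda y: torch.repeat_interleave(y, 2, dim=-2)  # crude adjoint stand-in
y = A(torch.randn(1, 1, 64, 64))                      # measurements
x = At(y)                                             # initial estimate
for stage in [UnrolledStage() for _ in range(4)]:     # K unrolled stages
    x = stage(x, y, A, At)
```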

Pretrained hybrid transformer for generalizable cardiac substructures segmentation from contrast and non-contrast CTs in lung and breast cancers

Aneesh Rangnekar, Nikhil Mankuzhy, Jonas Willmann, Chloe Choi, Abraham Wu, Maria Thor, Andreas Rimner, Harini Veeraraghavan

arxiv preprint · May 16, 2025
AI-automated segmentations for radiation treatment planning (RTP) can deteriorate when applied to clinical cases whose characteristics differ from the training dataset. We therefore refined a pretrained transformer into a hybrid transformer convolutional network (HTN) to segment cardiac substructures in lung and breast cancer patients imaged with varying contrasts and scan positions. Cohort I, consisting of 56 contrast-enhanced CT (CECT) and 124 non-contrast CT (NCCT) scans from patients with non-small cell lung cancer acquired in the supine position, was used to create an oracle model (all 180 training cases) and a balanced model (CECT: 32, NCCT: 32 training cases). Models were evaluated on a held-out validation set of 60 cohort I patients and on 66 breast cancer patients from cohort II acquired in supine (n=45) and prone (n=21) positions. Accuracy was measured using DSC, HD95, and dose metrics, with the publicly available TotalSegmentator as the benchmark. The oracle and balanced models were similarly accurate (DSC cohort I: 0.80 ± 0.10 versus 0.81 ± 0.10; cohort II: 0.77 ± 0.13 versus 0.80 ± 0.12), and both outperformed TotalSegmentator. The balanced model, trained with half as many cases as the oracle, produced dose metrics similar to manual delineations for all cardiac substructures. It was robust to CT contrast in 6 of 8 substructures and to patient scan position in 5 of 8 substructures, and its accuracy showed low correlation with patient size and age. The HTN provided robustly accurate cardiac substructure segmentations (by geometric and dose metrics) from CTs with varying imaging and patient characteristics, a key requirement for clinical use. Moreover, combining pretraining with a balanced distribution of NCCT and CECT scans yielded reliably accurate segmentations under varied conditions with far fewer labeled datasets than the oracle model.
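
The geometric accuracy metrics named above (DSC and HD95) can be computed as in the following sketch. MONAI is an assumed tooling choice here, not necessarily what the authors used, and the one-hot masks are toy data.

```python
import torch
from monai.metrics import DiceMetric, compute_hausdorff_distance

# toy one-hot masks: channel 0 = background, channel 1 = one cardiac substructure
pred = torch.zeros(1, 2, 64, 64, 64)
pred[:, 1, 20:40, 20:40, 20:40] = 1
pred[:, 0] = 1 - pred[:, 1]
ref = torch.zeros_like(pred)
ref[:, 1, 22:42, 20:40, 20:40] = 1
ref[:, 0] = 1 - ref[:, 1]

print("DSC:", DiceMetric(include_background=False)(pred, ref).item())
print("HD95 (voxels):", compute_hausdorff_distance(pred, ref, percentile=95).item())
```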

A CVAE-based generative model for generalized B₁ inhomogeneity-corrected chemical exchange saturation transfer MRI at 5 T.

Zhang R, Zhang Q, Wu Y

pubmed · May 15, 2025
Chemical exchange saturation transfer (CEST) magnetic resonance imaging (MRI) has emerged as a powerful tool for imaging endogenous and exogenous macromolecules. CEST contrast depends strongly on the radiofrequency irradiation B₁ level, so spatial inhomogeneity of the B₁ field biases CEST measurements. Conventional interpolation-based B₁ correction requires CEST acquisitions at multiple B₁ levels, substantially prolonging scan time. A recently proposed supervised deep learning approach reconstructs the B₁-inhomogeneity-corrected CEST effect only at the same B₁ level as its training data, hindering generalization to other B₁ levels. In this study, we propose a conditional variational autoencoder (CVAE)-based generative model that generates B₁-inhomogeneity-corrected Z-spectra from a single CEST acquisition. The model was trained on pixel-wise source-target pairs of Z-spectra acquired at multiple B₁ levels, with the target B₁ as a conditional variable. Numerical simulation and healthy human brain imaging at 5 T were performed to evaluate the model's performance in B₁-inhomogeneity-corrected CEST MRI. The generated B₁-corrected Z-spectra agreed well with reference spectra averaged from regions with subtle B₁ inhomogeneity. Moreover, the model's correction of B₁ inhomogeneity in the APT CEST effect, as measured by both MTRasym and [Formula: see text] at 3.5 ppm, was superior to conventional Z/contrast-B₁ interpolation and other deep learning methods, especially when the target B₁ was not included in the sampling or training dataset. In summary, the proposed model enables generalized B₁ inhomogeneity correction, benefiting quantitative CEST MRI in clinical routine.
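
A minimal sketch of the conditioning scheme described above: a CVAE that encodes a measured Z-spectrum, concatenates the target B₁ as the conditional variable at both encoder and decoder, and reconstructs a corrected spectrum. The spectrum length (53 offsets), latent size, and training target here are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class CVAE(nn.Module):  # hypothetical sizes: 53 frequency offsets, 16 latent dims
    def __init__(self, n_freq=53, n_latent=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_freq + 1, 128), nn.ReLU())
        self.mu = nn.Linear(128, n_latent)
        self.logvar = nn.Linear(128, n_latent)
        self.dec = nn.Sequential(nn.Linear(n_latent + 1, 128), nn.ReLU(),
                                 nn.Linear(128, n_freq))

    def forward(self, z_spec, b1_target):
        h = self.enc(torch.cat([z_spec, b1_target], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(torch.cat([z, b1_target], dim=1)), mu, logvar

x = torch.rand(8, 53)          # measured pixel-wise Z-spectra
b1 = torch.full((8, 1), 0.75)  # target B1 level (illustrative units)
out, mu, logvar = CVAE()(x, b1)
kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())
# in practice the reconstruction target is the paired corrected spectrum, not the input
loss = nn.functional.mse_loss(out, x) + 1e-3 * kl
```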

Joint resting state and structural networks characterize pediatric bipolar patients compared to healthy controls: a multimodal fusion approach.

Yi X, Ma M, Wang X, Zhang J, Wu F, Huang H, Xiao Q, Xie A, Liu P, Grecucci A

pubmed · May 15, 2025
Pediatric bipolar disorder (PBD) is a highly debilitating condition characterized by alternating episodes of mania and depression with intervening periods of remission. Limited information is available about the functional and structural abnormalities in PBD, particularly when comparing the type I and type II subtypes. Resting-state brain activity and structural grey matter, assessed with MRI, may provide insight into the neurobiological markers of this disorder. In this study, resting-state regional homogeneity (ReHo) and grey matter concentration (GMC) data from 58 PBD patients and 21 healthy controls (HC), matched for age, gender, education, and IQ, were analyzed with a data-fusion unsupervised machine learning approach known as transposed independent vector analysis. Two networks differed significantly between PBD and HC. The first, comprising fronto-medial regions such as the medial and superior frontal gyri and the cingulate, displayed higher ReHo and GMC values in PBD than in HC. The second, comprising temporo-posterior regions together with the insula, caudate, and precuneus, displayed lower ReHo and GMC values in PBD than in HC. Additionally, two networks differed between type I and type II PBD: an occipito-cerebellar network with increased ReHo and GMC in type I compared with type II, and a fronto-parietal network with decreased ReHo and GMC in type I compared with type II. Of note, the first network correlated positively with depression scores. These findings shed new light on the functional and structural abnormalities displayed by pediatric bipolar patients.
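
Transposed independent vector analysis has no off-the-shelf scikit-learn implementation, so the sketch below substitutes a joint-ICA-style decomposition of the stacked ReHo and GMC matrices to convey the fusion idea; the subject counts match the abstract, while the data and voxel count are synthetic.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import FastICA

n_subjects, n_voxels = 79, 5000  # 58 PBD + 21 HC; toy voxel count
rng = np.random.default_rng(0)
reho = rng.standard_normal((n_subjects, n_voxels))  # resting-state ReHo maps
gmc = rng.standard_normal((n_subjects, n_voxels))   # grey matter concentration maps

X = np.hstack([reho, gmc])  # stack modalities: subjects x (2 * voxels)
ica = FastICA(n_components=10, random_state=0)
loadings = ica.fit_transform(X)  # per-subject loadings on each joint network

labels = np.array([1] * 58 + [0] * 21)  # 1 = PBD, 0 = HC
t, p = stats.ttest_ind(loadings[labels == 1], loadings[labels == 0])  # per component
```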

Measuring the severity of knee osteoarthritis with an aberration-free fast line scanning Raman imaging system.

Jiao C, Ye J, Liao J, Li J, Liang J, He S

pubmed · May 15, 2025
Osteoarthritis (OA) is a major cause of disability worldwide, with symptoms including joint pain, limited function, and decreased quality of life, potentially leading to deformity and irreversible damage. Chemical changes in joint tissue precede imaging alterations, making early diagnosis challenging for conventional methods such as X-rays. Raman imaging provides detailed chemical information but is time-consuming. This paper aims to achieve rapid osteoarthritis diagnosis and grading using a self-developed Raman imaging system combined with deep learning denoising and acceleration algorithms. Our aberration-corrected line-scanning confocal Raman imaging device acquires a line of Raman spectra (hundreds of points) per scan using a galvanometer or displacement stage, achieving spatial and spectral resolutions of 2 μm and 0.2 nm, respectively. Deep learning denoising improves the signal-to-noise ratio (SNR) enough to cut the required integration time, increasing imaging speed by more than 4 times while preserving high-quality spectra. Experiments on the tibial plateau of osteoarthritis patients compared three excitation wavelengths (532, 671, and 785 nm); 671 nm was chosen for its optimal SNR and minimal fluorescence. Machine learning algorithms achieved 98% accuracy in distinguishing articular from calcified cartilage and 97% accuracy in differentiating osteoarthritis grades I to IV. Our fast Raman imaging system, combining an aberration-corrected line-scanning confocal Raman imager with deep learning denoising, offers improved imaging speed and enhanced spectral and spatial resolution. It enables rapid, label-free assessment of osteoarthritis severity and can identify early compositional changes before they are visible on clinical imaging, allowing precise grading and tailored treatment, thus advancing orthopedic diagnostics and improving patient outcomes.
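
A minimal sketch of the denoising component: a small 1-D convolutional network trained to map short-integration (noisy) spectra to long-integration (clean) references, which is what allows integration time to be reduced. The architecture, spectrum length, and noise model are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

denoiser = nn.Sequential(  # stand-in for the paper's denoising network
    nn.Conv1d(1, 32, 7, padding=3), nn.ReLU(),
    nn.Conv1d(32, 32, 7, padding=3), nn.ReLU(),
    nn.Conv1d(32, 1, 7, padding=3),
)

clean = torch.rand(16, 1, 1024)                # long-integration reference spectra
noisy = clean + 0.1 * torch.randn_like(clean)  # simulated short-integration spectra
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
for _ in range(100):
    loss = nn.functional.mse_loss(denoiser(noisy), clean)
    opt.zero_grad(); loss.backward(); opt.step()
```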