Deep learning predicts HER2 status in invasive breast cancer from multimodal ultrasound and MRI.

Fan Y, Sun K, Xiao Y, Zhong P, Meng Y, Yang Y, Du Z, Fang J

pubmed · May 16 2025
The preoperative human epidermal growth factor receptor type 2 (HER2) status of breast cancer is typically determined by pathological examination of a core needle biopsy, and it influences the efficacy of neoadjuvant chemotherapy (NAC). However, the highly heterogeneous nature of breast cancer and the limitations of needle aspiration biopsy make pathological evaluation unstable. The aim of this study was to preoperatively predict HER2 status in breast cancer using deep learning (DL) models based on ultrasound (US) and magnetic resonance imaging (MRI). The study included women with invasive breast cancer who underwent US and MRI at our institution between January 2021 and July 2024. US images and dynamic contrast-enhanced T1-weighted MRI images were used to construct DL models (DL-US: the DL model based on US; DL-MRI: the model based on MRI; and DL-MRI&US: the combined model based on both MRI and US). All classifications were based on postoperative pathological evaluation. Receiver operating characteristic (ROC) analysis and the DeLong test were used to compare the diagnostic performance of the DL models. In the test cohort, DL-US differentiated the HER2 status of breast cancer with an AUC of 0.842 (95% CI: 0.708-0.931), with sensitivity and specificity of 89.5% and 79.3%, respectively. DL-MRI achieved an AUC of 0.800 (95% CI: 0.660-0.902), with sensitivity and specificity of 78.9% and 79.3%, respectively. DL-MRI&US yielded an AUC of 0.898 (95% CI: 0.777-0.967), with sensitivity and specificity of 63.2% and 100.0%, respectively.
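For readers implementing a similar evaluation, the minimal sketch below shows how an AUC plus sensitivity and specificity at a Youden-optimal cutoff can be computed with scikit-learn. The probability arrays are hypothetical stand-ins for a model's per-patient HER2-positive scores; this is not the authors' code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def evaluate(y_true, y_prob):
    """Return AUC plus sensitivity/specificity at the Youden-optimal cutoff."""
    auc = roc_auc_score(y_true, y_prob)
    fpr, tpr, _ = roc_curve(y_true, y_prob)
    best = np.argmax(tpr - fpr)          # maximise the Youden index J = TPR - FPR
    return auc, tpr[best], 1.0 - fpr[best]

# Hypothetical per-patient HER2-positive probabilities from one of the models
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.91, 0.20, 0.75, 0.66, 0.43, 0.12, 0.88, 0.35])
auc, sens, spec = evaluate(y_true, y_prob)
print(f"AUC={auc:.3f}, sensitivity={sens:.3f}, specificity={spec:.3f}")
```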

The imaging crisis in axial spondyloarthritis.

Diekhoff T, Poddubnyy D

pubmed · May 16 2025
Imaging holds a pivotal yet contentious role in the early diagnosis of axial spondyloarthritis. Although MRI has enhanced our ability to detect early inflammatory changes, particularly bone marrow oedema in the sacroiliac joints, the poor specificity of this finding introduces a substantial risk of overdiagnosis. The well-intentioned push by rheumatologists towards earlier intervention could inadvertently lead to the misclassification of mechanical or degenerative conditions (eg, osteitis condensans ilii) as inflammatory disease, especially in the absence of structural lesions. Diagnostic uncertainty is further fuelled by anatomical variability, sex differences, and suboptimal imaging protocols. Current strategies, such as quantifying bone marrow oedema, analysing its distribution patterns, and integrating clinical and laboratory data, offer partial guidance for avoiding overdiagnosis but fall short of resolving the core diagnostic dilemma. Emerging imaging technologies, including high-resolution sequences, quantitative MRI, radiomics, and artificial intelligence, could improve diagnostic precision, but these tools remain exploratory. This Viewpoint underscores the need for a shift in imaging approaches, recognising that although timely diagnosis and treatment are essential to prevent long-term structural damage, robust and reliable imaging criteria are also needed. Without such advances, the imaging field risks repeating past missteps seen in other rheumatological conditions.

Deep learning model based on ultrasound images predicts BRAF V600E mutation in papillary thyroid carcinoma.

Yu Y, Zhao C, Guo R, Zhang Y, Li X, Liu N, Lu Y, Han X, Tang X, Mao R, Peng C, Yu J, Zhou J

pubmed · May 16 2025
BRAF V600E mutation status detection facilitates prognosis prediction in papillary thyroid carcinoma (PTC). We developed a deep-learning model to determine BRAF V600E status in PTC. Patients with PTC from three centers were collected as the training set (1341 patients), validation set (148 patients), and external test set (135 patients). After testing the performance of the ResNeSt-50, Vision Transformer, and Swin Transformer V2 (SwinT) models, SwinT was chosen as the optimal backbone. An integrated BrafSwinT model was developed by combining the backbone with a radiomics feature branch and a clinical parameter branch. BrafSwinT demonstrated an AUC of 0.869 in the external test set, outperforming the original SwinT, Vision Transformer, and ResNeSt-50 models (AUC: 0.782-0.824; p value: 0.017-0.041). BrafSwinT showed promising results in determining BRAF V600E mutation status in PTC based on routinely acquired ultrasound images and basic clinical information, thus facilitating risk stratification.
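The abstract does not detail how the three branches are fused; a common pattern is concatenating the backbone's image features with small dense branches for radiomics and clinical inputs. The PyTorch sketch below illustrates that pattern only; the layer sizes, fusion by concatenation, and two-class head are assumptions, not the published BrafSwinT architecture.

```python
import torch
import torch.nn as nn

class MultiBranchClassifier(nn.Module):
    def __init__(self, backbone, backbone_dim=768, n_radiomics=100, n_clinical=8):
        super().__init__()
        self.backbone = backbone                          # e.g. a SwinT image encoder
        self.radiomics_branch = nn.Sequential(nn.Linear(n_radiomics, 64), nn.ReLU())
        self.clinical_branch = nn.Sequential(nn.Linear(n_clinical, 16), nn.ReLU())
        self.head = nn.Linear(backbone_dim + 64 + 16, 2)  # mutant vs wild type

    def forward(self, image, radiomics, clinical):
        fused = torch.cat([self.backbone(image),
                           self.radiomics_branch(radiomics),
                           self.clinical_branch(clinical)], dim=1)
        return self.head(fused)
```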

Evaluation of tumour pseudocapsule using computed tomography-based radiomics in pancreatic neuroendocrine tumours to predict prognosis and guide surgical strategy: a cohort study.

Wang Y, Gu W, Huang D, Zhang W, Chen Y, Xu J, Li Z, Zhou C, Chen J, Xu X, Tang W, Yu X, Ji S

pubmed · May 16 2025
To date, indications for the surgical approach to small pancreatic neuroendocrine tumours (PanNETs) remain controversial. This cohort study aimed to identify pseudocapsule status preoperatively in order to assess the rationality of enucleation and the survival prognosis of PanNETs, particularly for small tumours. Clinicopathological data were collected from patients with PanNETs who underwent their first pancreatectomy at our hospital (n = 578) between February 2012 and September 2023. Kaplan-Meier curves were constructed to visualise prognostic differences. Five distinct tissue samples were obtained for single-cell RNA sequencing (scRNA-seq) to evaluate variations in the tumour microenvironment. Radiological features were extracted from preoperative arterial-phase contrast-enhanced computed tomography. The performance of the pseudocapsule radiomics model was assessed using the area under the curve (AUC) metric. In total, 475 cases (mean [SD] age, 53.01 [12.20] years; female-to-male ratio, 1.24:1) were eligible for this study. The mean pathological tumour diameter was 2.99 cm (median: 2.50 cm; interquartile range [IQR]: 1.50-4.00 cm). These cases were stratified into complete (223, 46.95%) and incomplete (252, 53.05%) pseudocapsule groups. A statistically significant difference in aggressive indicators was observed between the two groups (P < 0.001). Through scRNA-seq analysis, we found that the incomplete group presented a markedly immunosuppressive microenvironment. Regarding recurrence-free survival, the 3-year and 5-year rates were 94.8% and 92.5%, respectively, for the complete pseudocapsule group, compared with 76.7% and 70.4% for the incomplete pseudocapsule group. The radiomics model showed significant discrimination of pseudocapsule status, particularly in small tumours (AUC, 0.744; 95% CI, 0.652-0.837). By combining computed tomography-based radiomics and machine learning for preoperative identification of pseudocapsule status, patients with a complete pseudocapsule are more likely to benefit from enucleation.
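As a pointer for readers reproducing the survival comparison, the sketch below fits Kaplan-Meier curves stratified by pseudocapsule status using the lifelines package. The data frame and its column names are hypothetical toy values, not the study data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical follow-up data: months of follow-up, recurrence event flag,
# and pseudocapsule status (1 = complete, 0 = incomplete).
df = pd.DataFrame({
    "months":   [12, 34, 60, 8, 45, 22, 57, 30],
    "recurred": [0, 1, 0, 1, 0, 1, 0, 0],
    "complete": [1, 0, 1, 0, 1, 0, 1, 0],
})

kmf = KaplanMeierFitter()
for status, grp in df.groupby("complete"):
    kmf.fit(grp["months"], event_observed=grp["recurred"],
            label=f"pseudocapsule complete={bool(status)}")
    kmf.plot_survival_function()   # overlays the two RFS curves on one axis
```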

EScarcityS: A framework for enhancing medical image classification performance in scenarios with scarce trainable samples.

Wang T, Dai Q, Xiong W

pubmed · May 16 2025
In the field of healthcare, the acquisition and annotation of medical images present significant challenges, resulting in a scarcity of trainable samples. This data limitation hinders the performance of deep learning models, creating bottlenecks in clinical applications. To address this issue, we construct a framework (EScarcityS) aimed at enhancing the success rate of disease diagnosis in scenarios with scarce trainable medical images. Firstly, because Transformer-based deep learning networks rely on large amounts of trainable data, this study takes into account the unique characteristics of pathological regions: by extracting feature representations of all particles in medical images at different granularities, a multi-granularity Transformer network (MGVit) is designed. This network leverages additional prior knowledge to assist the Transformer network during training, thereby reducing the data requirement to some extent. Next, the importance maps of particles at different granularities, generated by MGVit, are fused to construct disease probability maps for the corresponding images. Based on these maps, a disease probability map-guided diffusion generation model is designed to generate more realistic and interpretable synthetic data. Subsequently, authentic and synthetic data are mixed and used to retrain MGVit, aiming to enhance the accuracy of medical image classification in such data-scarce scenarios. Finally, we conducted detailed experiments on four real medical image datasets to validate the effectiveness of EScarcityS and its specific modules.
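The retraining step, mixing authentic and generated data into one pool, is straightforward to express in PyTorch; a minimal sketch is below. The tensors are random stand-ins for the real and diffusion-generated images, and nothing here reflects the authors' actual pipeline.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Stand-ins for the authentic images and the diffusion-generated synthetic
# images; in practice these would come from the dataset and the generator.
real = TensorDataset(torch.randn(64, 3, 224, 224), torch.randint(0, 2, (64,)))
synth = TensorDataset(torch.randn(64, 3, 224, 224), torch.randint(0, 2, (64,)))

mixed = ConcatDataset([real, synth])             # authentic + synthetic pool
loader = DataLoader(mixed, batch_size=16, shuffle=True)
# MGVit would then be retrained on `loader` as described above.
```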

Artificial intelligence-guided distal radius fracture detection on plain radiographs in comparison with human raters.

Ramadanov N, John P, Hable R, Schreyer AG, Shabo S, Prill R, Salzmann M

pubmed · May 16 2025
The aim of this study was to compare the performance of artificial intelligence (AI) in detecting distal radius fractures (DRFs) on plain radiographs with the performance of human raters. We retrospectively analysed all wrist radiographs taken in our hospital since the introduction of AI-guided fracture detection, from 11 September 2023 to 10 September 2024. The ground truth was defined by the radiological report of a board-certified radiologist based solely on conventional radiographs. The following parameters were calculated: true positives (TP), true negatives (TN), false positives (FP), false negatives (FN), accuracy (%), Cohen's kappa coefficient, F1 score, sensitivity (%), specificity (%), and the Youden index (J statistic). In total, 1145 plain radiographs of the wrist were taken during this period. The mean age of the included patients was 46.6 years (± 27.3), ranging from 2 to 99 years, and 59.0% were female. According to the ground truth, 225 of the 556 anteroposterior (AP) radiographs (40.5%) and 240 of the 589 lateral view radiographs (40.7%) showed a DRF. The AI system achieved the following results on AP radiographs: accuracy (%): 95.90; Cohen's kappa: 0.913; F1 score: 0.947; sensitivity (%): 92.02; specificity (%): 98.45; Youden index: 90.47. For comparison, the orthopedic surgeon achieved a sensitivity of 91.5%, specificity of 97.8%, an overall accuracy of 95.1%, an F1 score of 0.943, and a Cohen's kappa of 0.901. These results were comparable to those of the AI model. AI-guided detection of DRFs demonstrated diagnostic performance nearly identical to that of an experienced orthopedic surgeon across all key metrics. The marginal differences observed in sensitivity and specificity suggest that AI can reliably support clinical fracture assessment based solely on conventional radiographs.
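All of the reported metrics follow from the four confusion-matrix counts; the sketch below shows the arithmetic. The example counts are hypothetical, chosen only to illustrate how values of the magnitude reported above arise.

```python
def fracture_metrics(tp, tn, fp, fn):
    n = tp + tn + fp + fn
    accuracy = (tp + tn) / n
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    youden = sensitivity + specificity - 1       # reported above as J x 100
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (accuracy - p_e) / (1 - p_e)         # Cohen's kappa vs chance agreement
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, f1=f1, youden=youden, kappa=kappa)

# Hypothetical counts chosen only to illustrate the arithmetic
print(fracture_metrics(tp=207, tn=326, fp=5, fn=18))
```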

UGoDIT: Unsupervised Group Deep Image Prior Via Transferable Weights

Shijun Liang, Ismail R. Alkhouri, Siddhant Gautam, Qing Qu, Saiprasad Ravishankar

arxiv preprint · May 16 2025
Recent advances in data-centric deep generative models have led to significant progress in solving inverse imaging problems. However, these models (e.g., diffusion models (DMs)) typically require large amounts of fully sampled (clean) training data, which is often impractical in medical and scientific settings such as dynamic imaging. On the other hand, training-data-free approaches like the Deep Image Prior (DIP) do not require clean ground-truth images but suffer from noise overfitting and can be computationally expensive as the network parameters need to be optimized for each measurement set independently. Moreover, DIP-based methods often overlook the potential of learning a prior using a small number of sub-sampled measurements (or degraded images) available during training. In this paper, we propose UGoDIT, an Unsupervised Group DIP via Transferable weights, designed for the low-data regime where only a very small number, M, of sub-sampled measurement vectors are available during training. Our method learns a set of transferable weights by optimizing a shared encoder and M disentangled decoders. At test time, we reconstruct the unseen degraded image using a DIP network, where part of the parameters are fixed to the learned weights, while the remaining are optimized to enforce measurement consistency. We evaluate UGoDIT on both medical (multi-coil MRI) and natural (super resolution and non-linear deblurring) image recovery tasks under various settings. Compared to recent standalone DIP methods, UGoDIT provides accelerated convergence and notable improvement in reconstruction quality. Furthermore, our method achieves performance competitive with SOTA DM-based and supervised approaches, despite not requiring large amounts of clean training data.
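The test-time procedure, keeping the transferred weights fixed while optimizing the rest for measurement consistency, can be sketched as below. The `dip_net` object (with `encoder`, `decoder`, and `input_shape` attributes), the forward operator `A`, and the measurements `y` are illustrative assumptions about the interface, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def reconstruct(dip_net, A, y, steps=500, lr=1e-3):
    """Fit only the decoder of a pretrained group-DIP network to one
    measurement vector y, keeping the transferred encoder weights fixed."""
    for p in dip_net.encoder.parameters():      # transferable weights: frozen
        p.requires_grad_(False)
    opt = torch.optim.Adam(dip_net.decoder.parameters(), lr=lr)
    z = torch.randn(1, *dip_net.input_shape)    # fixed DIP input noise
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(A(dip_net(z)), y)     # measurement consistency
        loss.backward()
        opt.step()
    return dip_net(z).detach()
```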

Diff-Unfolding: A Model-Based Score Learning Framework for Inverse Problems

Yuanhao Wang, Shirin Shoushtari, Ulugbek S. Kamilov

arxiv preprint · May 16 2025
Diffusion models are extensively used for modeling image priors for inverse problems. We introduce Diff-Unfolding, a principled framework for learning posterior score functions of conditional diffusion models by explicitly incorporating the physical measurement operator into a modular network architecture. Diff-Unfolding formulates posterior score learning as the training of an unrolled optimization scheme, where the measurement model is decoupled from the learned image prior. This design allows our method to generalize across inverse problems at inference time by simply replacing the forward operator without retraining. We theoretically justify our unrolling approach by showing that the posterior score can be derived from a composite model-based optimization formulation. Extensive experiments on image restoration and accelerated MRI show that Diff-Unfolding achieves state-of-the-art performance, improving PSNR by up to 2 dB and reducing LPIPS by 22.7%, while being both compact (47M parameters) and efficient (0.72 seconds per 256 × 256 image). An optimized C++/LibTorch implementation further reduces inference time to 0.63 seconds, underscoring the practicality of our approach.
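The core idea of unrolling with a decoupled measurement model is conveyed by a single iteration: a gradient step on the data-fidelity term followed by a learned prior module. The sketch below illustrates that generic pattern only; the module structure, learnable step size, and operator interface are assumptions, not the Diff-Unfolding architecture.

```python
import torch
import torch.nn as nn

class UnrolledStep(nn.Module):
    """One unrolled iteration: a gradient step on ||Ax - y||^2 followed by a
    learned prior network. Because A enters only through forward/adjoint
    calls, it can be swapped at inference without retraining the prior."""
    def __init__(self, prior_net):
        super().__init__()
        self.prior = prior_net                        # learned image prior
        self.step = nn.Parameter(torch.tensor(0.1))   # learnable step size

    def forward(self, x, y, A, At):
        x = x - self.step * At(A(x) - y)              # data-consistency step
        return self.prior(x)                          # learned refinement
```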

Patient-Specific Dynamic Digital-Physical Twin for Coronary Intervention Training: An Integrated Mixed Reality Approach

Shuo Wang, Tong Ren, Nan Cheng, Rong Wang, Li Zhang

arxiv preprint · May 16 2025
Background and Objective: Precise preoperative planning and effective physician training for coronary interventions are increasingly important. Despite advances in medical imaging technologies, transforming static or limited dynamic imaging data into comprehensive dynamic cardiac models remains challenging. Existing training systems lack accurate simulation of cardiac physiological dynamics. This study develops a comprehensive dynamic cardiac model research framework based on 4D-CTA, integrating digital twin technology, computer vision, and physical model manufacturing to provide precise, personalized tools for interventional cardiology.

Methods: Using 4D-CTA data from a 60-year-old female with three-vessel coronary stenosis, we segmented cardiac chambers and coronary arteries, constructed dynamic models, and implemented skeletal skinning weight computation to simulate vessel deformation across 20 cardiac phases. Transparent vascular physical models were manufactured using medical-grade silicone. We developed cardiac output analysis and virtual angiography systems, implemented guidewire 3D reconstruction using binocular stereo vision, and evaluated the system through angiography validation and CABG training applications.

Results: Morphological consistency between virtual and real angiography reached 80.9%. Dice similarity coefficients for guidewire motion ranged from 0.741 to 0.812, with mean trajectory errors below 1.1 mm. The transparent model demonstrated advantages in CABG training, allowing direct visualization while simulating beating-heart challenges.

Conclusion: Our patient-specific digital-physical twin approach effectively reproduces both the anatomical structures and dynamic characteristics of coronary vasculature, offering a dynamic environment with visual and tactile feedback valuable for education and clinical planning.
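The Dice similarity coefficient used to score guidewire motion overlap is a standard measure; a minimal sketch for binary masks is below (NumPy arrays as inputs are an assumption about the representation).

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```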

Pretrained hybrid transformer for generalizable cardiac substructures segmentation from contrast and non-contrast CTs in lung and breast cancers

Aneesh Rangnekar, Nikhil Mankuzhy, Jonas Willmann, Chloe Choi, Abraham Wu, Maria Thor, Andreas Rimner, Harini Veeraraghavan

arxiv preprint · May 16 2025
AI-automated segmentations for radiation treatment planning (RTP) can deteriorate when applied to clinical cases with characteristics that differ from the training dataset. Hence, we refined a pretrained transformer into a hybrid transformer convolutional network (HTN) to segment cardiac substructures in lung and breast cancer patients scanned with varying imaging contrasts and patient positions. Cohort I, consisting of 56 contrast-enhanced (CECT) and 124 non-contrast CT (NCCT) scans from patients with non-small cell lung cancers acquired in supine position, was used to create an oracle model (all 180 training cases) and a balanced model (CECT: 32, NCCT: 32 training cases). Models were evaluated on a held-out validation set of 60 cohort I patients and 66 patients with breast cancer from cohort II acquired in supine (n=45) and prone (n=21) positions. Accuracy was measured using DSC, HD95, and dose metrics. The publicly available TotalSegmentator served as the benchmark. The oracle and balanced models were similarly accurate (DSC cohort I: 0.80 ± 0.10 versus 0.81 ± 0.10; cohort II: 0.77 ± 0.13 versus 0.80 ± 0.12), outperforming TotalSegmentator. The balanced model, despite using half the training cases of the oracle, produced dose metrics similar to manual delineations for all cardiac substructures. This model was robust to CT contrast in 6 out of 8 substructures and to patient scan position variations in 5 out of 8 substructures, and showed low correlation of accuracy with patient size and age. The HTN demonstrated robustly accurate cardiac substructure segmentation (geometric and dose metrics) from CTs with varying imaging and patient characteristics, one key requirement for clinical use. Moreover, the model combining pretraining with a balanced distribution of NCCT and CECT scans provided reliably accurate segmentations under varied conditions with far fewer labeled datasets than the oracle model.
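Alongside DSC (sketched after the digital-twin abstract above), the HD95 surface metric can be approximated from binary masks via distance transforms; one common formulation is sketched below, assuming isotropic 1 mm voxels (scale by the spacing otherwise). This is a generic approximation, not the authors' evaluation code.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    """95th-percentile symmetric surface distance between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~binary_erosion(a)            # boundary voxels of each mask
    surf_b = b & ~binary_erosion(b)
    d_to_b = distance_transform_edt(~surf_b)   # distance of every voxel to b's surface
    d_to_a = distance_transform_edt(~surf_a)
    dists = np.concatenate([d_to_b[surf_a], d_to_a[surf_b]])
    return float(np.percentile(dists, 95))
```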