
CT-based Radiomics Signature of Visceral Adipose Tissue for Prediction of Early Recurrence in Patients With NMIBC: a Multicentre Cohort Study.

Yu N, Li J, Cao D, Chen X, Yang D, Jiang N, Wu J, Zhao C, Zheng Y, Chen Y, Jin X

pubmed · Aug 7 2025
The objective of this study is to investigate the ability of abdominal fat features derived from computed tomography (CT) to predict early recurrence within a year of the initial transurethral resection of bladder tumor (TURBT) in patients with non-muscle-invasive bladder cancer (NMIBC). A predictive model combining these features with clinical factors is constructed to aid in evaluating the risk of early recurrence among patients with NMIBC after initial TURBT. This retrospective study enrolled 325 NMIBC patients from three centers. Machine-learning-based visceral adipose tissue (VAT) radiomics models (VAT-RM) and subcutaneous adipose tissue (SAT) radiomics models (SAT-RM) were constructed to identify patients with early recurrence. A combined model integrating the VAT-RM and clinical factors was established. The predictive performance of each variable and model was analyzed using the area under the receiver operating characteristic curve (AUC). The net benefit of each variable and model was assessed through decision curve analysis (DCA). Calibration was evaluated using the Hosmer-Lemeshow test. The VAT-RM demonstrated satisfactory performance in the training cohort (AUC = 0.853, 95% CI 0.768-0.937), test cohort 1 (AUC = 0.823, 95% CI 0.730-0.916), and test cohort 2 (AUC = 0.808, 95% CI 0.681-0.935). Across all cohorts, the AUC values of the VAT-RM were higher than those of the SAT-RM (P < 0.001). The DCA curves further confirmed that the clinical net benefit of the VAT-RM was superior to that of the SAT-RM. In multivariate logistic regression analysis, the VAT-RM emerged as the most significant independent predictor (odds ratio [OR] = 0.295, 95% CI 0.141-0.508, P < 0.001). The combined model exhibited excellent AUC values of 0.938, 0.909, and 0.905 across the three cohorts and surpassed traditional risk assessment frameworks in both predictive efficacy and clinical net benefit. VAT is a crucial factor in early postoperative recurrence in NMIBC patients. The VAT-RM accurately identifies patients at high risk of early postoperative recurrence, offering significant advantages over the SAT-RM. The new predictive model integrating the VAT-RM and clinical factors exhibits excellent predictive performance, clinical net benefit, and calibration accuracy.
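
As a rough illustration of the evaluation step described above, the sketch below trains a logistic-regression classifier on synthetic radiomics-style features and reports a test AUC; the feature values, cohort split, and model are placeholders, not the study's VAT-RM.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(325, 20))       # 325 patients x 20 radiomics-style features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=325) > 0).astype(int)   # early-recurrence label

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print(f"test AUC = {roc_auc_score(y_test, probs):.3f}")
```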

UltimateSynth: MRI Physics for Pan-Contrast AI

Adams, R., Huynh, K. M., Zhao, W., Hu, S., Lyu, W., Ahmad, S., Ma, D., Yap, P.-T.

biorxiv · Aug 7 2025
Magnetic resonance imaging (MRI) is commonly used in healthcare for its ability to generate diverse tissue contrasts without ionizing radiation. However, this flexibility complicates downstream analysis, as computational tools are often tailored to specific types of MRI and lack generalizability across the full spectrum of scans used in healthcare. Here, we introduce a versatile framework for the development and validation of AI models that can robustly process and analyze the full spectrum of scans achievable with MRI, enabling model deployment across scanner models, scan sequences, and age groups. Core to our framework is UltimateSynth, a technology that combines tissue physiology and MR physics in synthesizing realistic images across a comprehensive range of meaningful contrasts. This pan-contrast capability bolsters the AI development life cycle through efficient data labeling, generalizable model training, and thorough performance benchmarking. We showcase the effectiveness of UltimateSynth by training an off-the-shelf U-Net to generalize anatomical segmentation across any MR contrast. The U-Net yields highly robust tissue volume estimates, with variability under 4% across 150,000 unique-contrast images, 3.8% across 2,000+ low-field 0.3T scans, and 3.5% across 8,000+ images spanning the human lifespan from ages 0 to 100.
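
To make the robustness numbers concrete, here is a minimal sketch of how per-contrast tissue-volume variability can be summarized as a coefficient of variation; the volumes are simulated placeholders, not UltimateSynth outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
n_contrasts = 150_000
nominal_gm_ml = 600.0                                        # assumed grey-matter volume
volumes = nominal_gm_ml * (1 + rng.normal(scale=0.02, size=n_contrasts))  # simulated per-contrast estimates

cv = volumes.std() / volumes.mean()                          # coefficient of variation
print(f"tissue-volume variability across {n_contrasts} contrasts: {100 * cv:.1f}%")
```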

A Multimodal Deep Learning Ensemble Framework for Building a Spine Surgery Triage System.

Siavashpour M, McCabe E, Nataraj A, Pareek N, Zaiane O, Gross D

pubmed · Aug 7 2025
Spinal radiology reports and physician-completed questionnaires are crucial resources for medical decision-making for patients experiencing low back and neck pain. However, because reviewing them is time-consuming, individuals with severe conditions may experience a deterioration in their health before receiving professional care. In this work, we propose an ensemble framework built on pre-trained BERT-based models that classifies patients by their need for surgery using multiple data modalities, including radiology reports and questionnaires. Our results demonstrate that our approach outperforms previous studies, effectively integrating information from multiple data modalities and serving as a valuable tool to assist physicians in decision-making.
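
A minimal sketch of the general idea of a soft-voting ensemble over per-modality BERT-based classifiers follows; the checkpoint name, example texts, and averaging rule are illustrative assumptions, not the paper's exact setup.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

def modality_probs(checkpoint: str, text: str) -> torch.Tensor:
    """Class probabilities from one per-modality BERT-based classifier."""
    tok = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
    inputs = tok(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0]

report = "MRI lumbar spine: L4-L5 disc extrusion with severe canal stenosis."
questionnaire = "Severe leg pain and progressive weakness, worsening over six weeks."

# "bert-base-uncased" is a placeholder with an untrained classification head, so the
# numbers below are meaningless; a real system would use fine-tuned per-modality checkpoints.
p_report = modality_probs("bert-base-uncased", report)
p_questionnaire = modality_probs("bert-base-uncased", questionnaire)
p_surgery = ((p_report + p_questionnaire) / 2)[1].item()   # soft-voting ensemble
print(f"P(needs surgery) = {p_surgery:.2f}")
```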

Lower Extremity Bypass Surveillance and Peak Systolic Velocities Value Prediction Using Recurrent Neural Networks.

Luo X, Tahabi FM, Rollins DM, Sawchuk AP

pubmed · Aug 7 2025
Routine duplex ultrasound surveillance is recommended after femoral-popliteal and femoral-tibial-pedal vein bypass grafts at various post-operative intervals. Currently, there is no systematic method for bypass graft surveillance using the sets of peak systolic velocities (PSVs) collected during these exams. This research explores the use of recurrent neural networks to predict the next set of PSVs, which can then indicate occlusion status. Recurrent neural network models were developed to predict occlusion and stenosis based on one to three prior sets of PSVs, with a sequence-to-sequence model used to forecast future PSVs within the stent graft and nearby arteries. The study employed 5-fold cross-validation for model performance comparison, revealing that the BiGRU model outperformed BiLSTM when two or more sets of PSVs were included and demonstrating that additional duplex ultrasound exams improve prediction accuracy and reduce error rates. This work establishes a basis for integrating comprehensive clinical data, including demographics, comorbidities, symptoms, and other risk factors, with PSVs to enhance lower extremity bypass graft surveillance predictions.
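
For readers unfamiliar with the architecture, here is a minimal PyTorch sketch of a bidirectional GRU that forecasts the next set of PSVs from a short exam history; the number of measurement sites, hidden size, and data are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class BiGRUPSVForecaster(nn.Module):
    """Maps a short history of PSV exams to the predicted next set of PSVs."""
    def __init__(self, n_sites: int = 8, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_sites, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_sites)   # forecast one PSV per measured site

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_prior_exams, n_sites)
        out, _ = self.gru(x)
        return self.head(out[:, -1])                  # last time step summarizes the history

model = BiGRUPSVForecaster()
prior_exams = torch.randn(4, 3, 8)    # 4 grafts, 3 prior exams, 8 measurement sites (all assumed)
print(model(prior_exams).shape)       # torch.Size([4, 8])
```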

Longitudinal development of sex differences in the limbic system is associated with age, puberty and mental health

Matte Bon, G., Walther, J., Comasco, E., Derntl, B., Kaufmann, T.

medrxiv · Aug 7 2025
Sex differences in mental health become more evident across adolescence, with a two-fold higher prevalence of mood disorders in females than in males. The underlying brain mechanisms remain understudied. Here, we investigated the role of age, puberty, and mental health in determining the longitudinal development of sex differences in brain structure. We captured sex differences in limbic and non-limbic structures using machine learning models trained on cross-sectional brain imaging data from 1132 youths, yielding limbic and non-limbic estimates of brain sex. Applied to two independent longitudinal samples (total: 8184 youths), our models revealed pronounced sex differences in brain structure with increasing age. For females, brain sex was sensitive to pubertal development (menarche) over time and, for limbic structures, to mood-related mental health. Our findings highlight the limbic system as a key contributor to the development of sex differences in the brain and the potential of machine learning models for brain sex classification to investigate sex-specific processes relevant to mental health.
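
A toy sketch of the underlying recipe (train a linear classifier on cross-sectional structural features, then read out its continuous decision value as a "brain sex" estimate on later scans) is given below; the features, labels, and the choice of a linear SVM are illustrative assumptions, not the study's models.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
X_cross = rng.normal(size=(1132, 12))              # 12 limbic volume features (synthetic)
sex = rng.integers(0, 2, size=1132)                # 0 = male, 1 = female (synthetic labels)
X_cross[sex == 1] += 0.3                           # inject a small group difference

clf = make_pipeline(StandardScaler(), LinearSVC()).fit(X_cross, sex)

follow_up_scan = rng.normal(size=(1, 12)) + 0.2    # one longitudinal observation (synthetic)
brain_sex_score = clf.decision_function(follow_up_scan)[0]   # signed distance as a continuous estimate
print(f"limbic brain-sex score: {brain_sex_score:+.2f}")
```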

MM2CT: MR-to-CT translation for multi-modal image fusion with mamba

Chaohui Gong, Zhiying Wu, Zisheng Huang, Gaofeng Meng, Zhen Lei, Hongbin Liu

arxiv · Aug 7 2025
Magnetic resonance (MR)-to-computed tomography (CT) translation offers significant advantages, including the elimination of radiation exposure associated with CT scans and the mitigation of imaging artifacts caused by patient motion. Existing approaches are based on single-modality MR-to-CT translation, with limited research exploring multimodal fusion. To address this limitation, we introduce MM2CT, a Mamba-based framework for multi-modal MR-to-CT translation that leverages multimodal T1- and T2-weighted MRI data. Mamba effectively overcomes the limited local receptive field of CNNs and the high computational complexity of Transformers. MM2CT exploits this advantage to preserve long-range dependency modeling while integrating multi-modal MR features. Additionally, we incorporate a dynamic local convolution module and a dynamic enhancement module to improve MR-to-CT synthesis. Experiments on a public pelvis dataset demonstrate that MM2CT achieves state-of-the-art performance in terms of Structural Similarity Index Measure (SSIM) and Peak Signal-to-Noise Ratio (PSNR). Our code is publicly available at https://github.com/Gots-ch/MM2CT.
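
The two reported metrics can be computed with scikit-image as in the following sketch; the images here are random arrays rather than MM2CT outputs.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(3)
reference_ct = rng.random((256, 256)).astype(np.float32)   # stand-in for a real CT slice
synthesized_ct = np.clip(reference_ct + rng.normal(scale=0.05, size=(256, 256)), 0, 1).astype(np.float32)

ssim = structural_similarity(reference_ct, synthesized_ct, data_range=1.0)
psnr = peak_signal_noise_ratio(reference_ct, synthesized_ct, data_range=1.0)
print(f"SSIM = {ssim:.3f}, PSNR = {psnr:.1f} dB")
```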

Improving Radiology Report Generation with Semantic Understanding.

Ahn S, Park H, Yoo J, Choi J

pubmed · Aug 7 2025
This study proposes RRG-LLM, a model designed to enhance radiology report generation (RRG) by effectively learning the medical domain with minimal computational resources. First, the large language model (LLM) is fine-tuned with LoRA, enabling efficient adaptation to the medical domain. Subsequently, only the linear projection layer that projects the image into the text embedding space is fine-tuned to extract important information from the radiology image. The proposed model demonstrated notable improvements in report generation: ROUGE-L improved by 0.096 (51.7%) and METEOR by 0.046 (42.85%) compared to the baseline model.
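
A minimal sketch of this two-stage recipe, under assumed model names and sizes, might look as follows: stage one attaches LoRA adapters to a placeholder causal LLM, and stage two trains only a linear projection from image-encoder features into the LLM's embedding space.

```python
import torch.nn as nn
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Stage 1: attach LoRA adapters to a (placeholder) base LLM for efficient domain adaptation.
base_llm = AutoModelForCausalLM.from_pretrained("gpt2")   # stand-in; not the paper's LLM
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["c_attn"], task_type="CAUSAL_LM")
llm = get_peft_model(base_llm, lora_cfg)
llm.print_trainable_parameters()

# Stage 2: freeze everything else and train only a linear projection that maps
# image-encoder features into the LLM's text embedding space.
image_feature_dim = 1024                                  # assumed image-encoder output size
visual_projection = nn.Linear(image_feature_dim, base_llm.config.hidden_size)
```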

Sparse transformer and multipath decision tree: a novel approach for efficient brain tumor classification.

Li P, Jin Y, Wang M, Liu F

pubmed · Aug 7 2025
Early classification of brain tumors is the key to effective treatment. With advances in medical imaging technology, automated classification algorithms face challenges due to tumor diversity. Although Swin Transformer is effective in handling high-resolution images, it encounters difficulties with small datasets and high computational complexity. This study introduces SparseSwinMDT, a novel model that combines sparse token representation with multipath decision trees. Experimental results show that SparseSwinMDT achieves an accuracy of 99.47% in brain tumor classification, significantly outperforming existing methods while reducing computation time, making it particularly suitable for resource-constrained medical environments.
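
The abstract does not detail the architecture, but the general pairing of a sparse token representation with tree-based classification can be sketched as follows; the token-selection rule, pooling, and random-forest classifier are illustrative assumptions, not SparseSwinMDT itself.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
n_scans, n_tokens, token_dim, k = 200, 49, 96, 16
tokens = rng.normal(size=(n_scans, n_tokens, token_dim))   # synthetic transformer token features
labels = rng.integers(0, 4, size=n_scans)                   # four tumor classes (synthetic)

norms = np.linalg.norm(tokens, axis=-1)                     # token "energy" as a sparsity criterion
top_k = np.argsort(norms, axis=1)[:, -k:]                   # keep the k strongest tokens per scan
sparse_feats = np.stack([tokens[i, top_k[i]].mean(axis=0) for i in range(n_scans)])

trees = RandomForestClassifier(n_estimators=100, random_state=0).fit(sparse_feats, labels)
print(f"training accuracy on synthetic data: {trees.score(sparse_feats, labels):.2f}")
```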

Gastrointestinal bleeding detection on digital subtraction angiography using convolutional neural networks with and without temporal information.

Smetanick D, Naidu S, Wallace A, Knuttinen MG, Patel I, Alzubaidi S

pubmed · Aug 7 2025
Digital subtraction angiography (DSA) offers a real-time approach to locating lower gastrointestinal (GI) bleeding. However, many sources of bleeding are not easily visible on angiograms. This investigation aims to develop a machine learning tool that can locate GI bleeding on DSA prior to transarterial embolization. All mesenteric artery angiograms and arterial embolization DSA images obtained in the interventional radiology department between January 1, 2007, and December 31, 2021, were analyzed. These images were acquired using fluoroscopy imaging systems (Siemens Healthineers, USA). Thirty-nine unique series of bleeding images were augmented to train two-dimensional (2D) and three-dimensional (3D) residual neural networks (ResUNet++) for image segmentation. The 2D ResUNet++ network was trained on 3,548 images and tested on 394 images, whereas the 3D ResUNet++ network was trained on 316 3D objects and tested on 35 objects. For each case, both manually cropped images focused on the GI bleed and uncropped images were evaluated, with a superimposition post-processing (SIPP) technique applied to both image types. Based on both quantitative and qualitative analyses, the 2D ResUNet++ network significantly outperformed the 3D ResUNet++ model. In the qualitative evaluation, the 2D ResUNet++ model achieved the highest accuracy across both 128 × 128 and 256 × 256 input resolutions when enhanced with the SIPP technique, reaching accuracy rates between 95% and 97%. However, despite the improved detection consistency provided by SIPP, a reduction in Dice similarity coefficients was observed compared with models without post-processing. Specifically, the 2D ResUNet++ model combined with SIPP achieved a Dice similarity coefficient of only 80%. This decline is primarily attributed to an increase in false positive predictions introduced by the temporal propagation of segmentation masks across frames. Both 2D and 3D ResUNet++ networks can be trained to locate GI bleeding on DSA images prior to transarterial embolization. However, further research and refinement are needed before this technology can be implemented in DSA for real-time prediction. Automated detection of GI bleeding in DSA may reduce time to embolization, thereby improving patient outcomes.
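
Two ingredients mentioned above, the Dice similarity coefficient and temporal superimposition of per-frame masks, can be illustrated with the toy sketch below; the union-across-frames rule is an assumption about how SIPP propagates detections, and the example shows why propagated masks can catch more bleeds while lowering Dice.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + 1e-8)

rng = np.random.default_rng(5)
truth = rng.random((128, 128)) > 0.95                 # synthetic ground-truth bleed mask
frames_pred = rng.random((10, 128, 128)) > 0.95       # per-frame predicted masks across the DSA run

superimposed = frames_pred.any(axis=0)                 # propagate detections across all frames
print(f"single-frame Dice: {dice(frames_pred[0], truth):.3f}")
print(f"superimposed Dice: {dice(superimposed, truth):.3f}")  # more pixels covered, more false positives
```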

Few-Shot Deployment of Pretrained MRI Transformers in Brain Imaging Tasks

Mengyu Li, Guoyao Shen, Chad W. Farris, Xin Zhang

arxiv · Aug 7 2025
Machine learning using transformers has shown great potential in medical imaging, but its real-world applicability remains limited due to the scarcity of annotated data. In this study, we propose a practical framework for the few-shot deployment of pretrained MRI transformers in diverse brain imaging tasks. By utilizing the Masked Autoencoder (MAE) pretraining strategy on a large-scale, multi-cohort brain MRI dataset comprising over 31 million slices, we obtain highly transferable latent representations that generalize well across tasks and datasets. For high-level tasks such as classification, a frozen MAE encoder combined with a lightweight linear head achieves state-of-the-art accuracy in MRI sequence identification with minimal supervision. For low-level tasks such as segmentation, we propose MAE-FUnet, a hybrid architecture that fuses multiscale CNN features with pretrained MAE embeddings. This model consistently outperforms other strong baselines in both skull stripping and multi-class anatomical segmentation under data-limited conditions. With extensive quantitative and qualitative evaluations, our framework demonstrates efficiency, stability, and scalability, suggesting its suitability for low-resource clinical environments and broader neuroimaging applications.
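
The "frozen encoder plus lightweight linear head" recipe for sequence identification can be sketched as follows; the encoder here is a stand-in module, not the pretrained MAE from the paper.

```python
import torch
import torch.nn as nn

class FrozenEncoderLinearProbe(nn.Module):
    """Frozen feature extractor with a small trainable classification head."""
    def __init__(self, encoder: nn.Module, embed_dim: int, n_classes: int):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                  # the pretrained encoder stays frozen
        self.head = nn.Linear(embed_dim, n_classes)  # only this layer is trained

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            feats = self.encoder(x)
        return self.head(feats)

encoder = nn.Sequential(nn.Flatten(), nn.Linear(224 * 224, 256))      # placeholder for the MAE encoder
probe = FrozenEncoderLinearProbe(encoder, embed_dim=256, n_classes=5)  # e.g. five MRI sequence types
print(probe(torch.randn(2, 1, 224, 224)).shape)   # torch.Size([2, 5])
```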