Page 155 of 3993984 results

Patient-Specific Deep Learning Tracking Framework for Real-Time 2D Target Localization in Magnetic Resonance Imaging-Guided Radiation Therapy.

Lombardo E, Velezmoro L, Marschner SN, Rabe M, Tejero C, Papadopoulou CI, Sui Z, Reiner M, Corradini S, Belka C, Kurz C, Riboldi M, Landry G

pubmed · Jul 15 2025
We propose a tumor tracking framework for 2D cine magnetic resonance imaging (MRI) based on a pair of deep learning (DL) models relying on patient-specific (PS) training. The chosen DL models are: (1) an image registration transformer and (2) an auto-segmentation convolutional neural network (CNN). We collected over 1,400,000 cine MRI frames from 219 patients treated on a 0.35 T MRI-linac, plus 7500 manually labeled frames from an additional 35 patients that were subdivided into fine-tuning, validation, and testing sets. The transformer was first trained on the unlabeled data (without segmentations). We then continued training (with segmentations) either on the fine-tuning set or, for PS models, on 8 randomly selected frames from the first 5 seconds of each patient's cine MRI. The PS auto-segmentation CNN was trained from scratch with the same 8 frames for each patient, without pre-training. Furthermore, we implemented B-spline image registration as a conventional model, as well as different baselines. Output segmentations of all models were compared on the testing set using the Dice similarity coefficient, the 50% and 95% Hausdorff distance (HD<sub>50%</sub>/HD<sub>95%</sub>), and the root-mean-square error of the target centroid in the superior-inferior direction. The PS transformer and CNN significantly outperformed all other models, achieving a median (interquartile range) Dice similarity coefficient of 0.92 (0.03)/0.90 (0.04), HD<sub>50%</sub> of 1.0 (0.1)/1.0 (0.4) mm, HD<sub>95%</sub> of 3.1 (1.9)/3.8 (2.0) mm, and centroid root-mean-square error in the superior-inferior direction of 0.7 (0.4)/0.9 (1.0) mm on the testing set. Inference time was about 36/8 ms per frame, and PS fine-tuning required 3 min for labeling and 8/4 min for training. The transformer was better than the CNN in 9/12 patients, the CNN better in 1/12, and the two PS models achieved the same performance on the remaining 2/12 testing patients.
For targets in the thorax, abdomen, and pelvis, we found the two PS DL models to provide accurate real-time target localization during MRI-guided radiotherapy.
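As a minimal illustration of two of the reported metrics, here is a pure-Python sketch of the Dice similarity coefficient and the superior-inferior centroid RMSE on toy binary masks (the masks below are hypothetical, not data from the study):

```python
import math

def dice(a, b):
    """Dice similarity coefficient for two binary masks (lists of lists)."""
    inter = sum(x and y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    sa = sum(sum(r) for r in a)
    sb = sum(sum(r) for r in b)
    return 2.0 * inter / (sa + sb)

def si_centroid(mask):
    """Row (superior-inferior) coordinate of the mask centroid."""
    rows = [i for i, r in enumerate(mask) for v in r if v]
    return sum(rows) / len(rows)

def si_rmse(preds, refs):
    """RMSE of the SI centroid position over a sequence of frames."""
    errs = [(si_centroid(p) - si_centroid(r)) ** 2 for p, r in zip(preds, refs)]
    return math.sqrt(sum(errs) / len(errs))

ref  = [[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
pred = [[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]]
print(round(dice(pred, ref), 3))  # 0.857
```

In practice these would be evaluated per frame against expert contours, as done on the 20-patient testing set.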

An interpretable machine learning model for predicting bone marrow invasion in patients with lymphoma via <sup>18</sup>F-FDG PET/CT: a multicenter study.

Zhu X, Lu D, Wu Y, Lu Y, He L, Deng Y, Mu X, Fu W

pubmed · Jul 15 2025
Accurate identification of bone marrow invasion (BMI) is critical for determining the prognosis of and treatment strategies for lymphoma. Although bone marrow biopsy (BMB) is the current gold standard, its invasive nature and sampling errors highlight the necessity for noninvasive alternatives. We aimed to develop and validate an interpretable machine learning model that integrates clinical data, <sup>18</sup>F-fluorodeoxyglucose positron emission tomography/computed tomography (<sup>18</sup>F-FDG PET/CT) parameters, radiomic features, and deep learning features to predict BMI in lymphoma patients. We included 159 newly diagnosed lymphoma patients (118 from Center I and 41 from Center II), excluding those with prior treatments, incomplete data, or under 18 years of age. Data from Center I were randomly allocated to training (n = 94) and internal test (n = 24) sets; Center II served as an external validation set (n = 41). Clinical parameters, PET/CT features, radiomic characteristics, and deep learning features were comprehensively analyzed and integrated into machine learning models. Model interpretability was elucidated via Shapley Additive exPlanations (SHAP). Additionally, a comparative diagnostic study evaluated reader performance with and without model assistance. BMI was confirmed in 70 (44%) patients. The key clinical predictors included B symptoms and platelet count. Among the tested models, the ExtraTrees classifier achieved the best performance. For external validation, the combined model (clinical + PET/CT + radiomics + deep learning) achieved an area under the receiver operating characteristic curve (AUC) of 0.886, outperforming models that used only clinical (AUC 0.798), radiomic (AUC 0.708), or deep learning features (AUC 0.662). SHAP analysis revealed that PET radiomic features (especially PET_lbp_3D_m1_glcm_DependenceEntropy), platelet count, and B symptoms were significant predictors of BMI.
Model assistance significantly enhanced junior reader performance (AUC improved from 0.663 to 0.818, p = 0.03) and improved senior reader accuracy, although not significantly (AUC 0.768 to 0.867, p = 0.10). Our interpretable machine learning model, which integrates clinical, imaging, radiomic, and deep learning features, demonstrated robust BMI prediction performance and notably enhanced physician diagnostic accuracy. These findings underscore the clinical potential of interpretable AI to complement medical expertise and potentially reduce the reliance on invasive BMB for lymphoma staging.
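The AUC values quoted here (and throughout this listing) can be computed nonparametrically as the probability that a positive case is scored above a negative one. A small sketch with hypothetical labels and scores:

```python
def roc_auc(labels, scores):
    """Nonparametric AUC: P(score_pos > score_neg), with ties counted as 0.5."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical reader scores for 3 BMI-positive and 3 BMI-negative patients
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(roc_auc(labels, scores))
```

This rank formulation is equivalent to the area under the empirical ROC curve and underlies comparisons such as the junior readers' improvement from 0.663 to 0.818.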

Evaluation of Artificial Intelligence-based diagnosis for facial fractures, advantages compared with conventional imaging diagnosis: a systematic review and meta-analysis.

Ju J, Qu Z, Qing H, Ding Y, Peng L

pubmed · Jul 15 2025
Currently, the application of convolutional neural networks (CNNs) in artificial intelligence (AI) for medical imaging diagnosis has emerged as a highly promising tool. In particular, AI-assisted diagnosis holds significant potential for orthopedic and emergency department physicians by improving diagnostic efficiency and enhancing the overall patient experience. This systematic review and meta-analysis aimed to assess the application of AI in diagnosing facial fractures and to evaluate its diagnostic performance. The study adhered to the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and PRISMA-Diagnostic Test Accuracy (PRISMA-DTA). A comprehensive literature search was conducted in the PubMed, Cochrane Library, and Web of Science databases to identify original articles published up to December 2024. The risk of bias and applicability of the included studies were assessed using the QUADAS-2 tool. The results were analyzed using a Summary Receiver Operating Characteristic (SROC) curve. A total of 16 studies were included in the analysis, with contingency tables extracted from 11 of them. The pooled sensitivity was 0.889 (95% CI: 0.844-0.922), and the pooled specificity was 0.888 (95% CI: 0.834-0.926). The area under the SROC curve was 0.911. In the subgroup analysis, the pooled sensitivity for nasal fractures was 0.851 (95% CI: 0.806-0.887), and the pooled specificity was 0.883 (95% CI: 0.862-0.902). For mandibular fractures, the pooled sensitivity was 0.905 (95% CI: 0.836-0.947), and the pooled specificity was 0.895 (95% CI: 0.824-0.940). AI can be developed as an auxiliary tool to assist clinicians in diagnosing facial fractures. The results demonstrate high overall sensitivity and specificity, along with robust performance reflected by the high area under the SROC curve.
This study was prospectively registered on PROSPERO (ID: CRD42024618650; registered 10 Dec 2024). https://www.crd.york.ac.uk/PROSPERO/view/CRD42024618650
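For intuition on what pooled sensitivity and specificity mean, here is the simplest possible pooling, summing 2x2 counts across studies (the published analysis fits an SROC model rather than pooling counts directly, and the contingency tables below are hypothetical):

```python
def pooled_sens_spec(tables):
    """Naive pooled sensitivity/specificity by summing 2x2 counts.
    Each table is (TP, FP, FN, TN). Meta-analyses such as the one above
    typically use SROC / bivariate models instead; this is only the
    simplest pooling, for illustration."""
    tp = sum(t[0] for t in tables); fp = sum(t[1] for t in tables)
    fn = sum(t[2] for t in tables); tn = sum(t[3] for t in tables)
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical contingency tables from three studies
tables = [(45, 6, 5, 44), (80, 10, 12, 90), (30, 4, 3, 33)]
sens, spec = pooled_sens_spec(tables)
print(round(sens, 3), round(spec, 3))  # 0.886 0.893
```

Sensitivity is TP/(TP+FN) and specificity TN/(TN+FP); the SROC curve then summarizes how these trade off across studies with different thresholds.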

<sup>18</sup>F-FDG PET-based liver segmentation using deep-learning.

Kaneko Y, Miwa K, Yamao T, Miyaji N, Nishii R, Yamazaki K, Nishikawa N, Yusa M, Higashi T

pubmed · Jul 15 2025
Organ segmentation using <sup>18</sup>F-FDG PET images alone has not been extensively explored. Segmentation methods based on deep learning (DL) have traditionally relied on CT or MRI images, which are vulnerable to alignment issues and artifacts. This study aimed to develop a DL approach for segmenting the entire liver based solely on <sup>18</sup>F-FDG PET images. We analyzed data from 120 patients who were assessed using <sup>18</sup>F-FDG PET. A three-dimensional (3D) U-Net from the nnU-Net framework served as the DL model, with preprocessed PET images as input. The model was trained with 5-fold cross-validation on data from 100 patients, and segmentation accuracy was evaluated on an independent test set of 20 patients. Accuracy was assessed using Intersection over Union (IoU), the Dice coefficient, and liver volume. Image quality was evaluated using the mean (SUVmean) and maximum (SUVmax) standardized uptake values and the signal-to-noise ratio (SNR). The model achieved an average IoU of 0.89 and an average Dice coefficient of 0.94 on the test data from 20 patients, indicating high segmentation accuracy. No significant discrepancies in image quality metrics were identified compared with ground truth. Liver regions were accurately extracted from <sup>18</sup>F-FDG PET images, which allowed rapid and stable evaluation of liver uptake in individual patients without the need for CT or MRI assessments.
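The uptake metrics compared against ground truth can be sketched as follows; the SNR convention used here (SUVmean divided by its standard deviation within the region) is one common choice and, like the toy arrays, is an assumption rather than a detail from the paper:

```python
import numpy as np

def suv_metrics(suv, mask):
    """SUVmean, SUVmax, and SNR (mean/SD, one common convention)
    computed over voxels inside a binary mask."""
    vals = suv[mask > 0]
    return vals.mean(), vals.max(), vals.mean() / vals.std()

# hypothetical SUV map and liver mask
suv  = np.array([[1.0, 2.0, 8.0], [2.0, 3.0, 0.5]])
mask = np.array([[1, 1, 0], [1, 1, 0]])
m, mx, snr = suv_metrics(suv, mask)
print(m, mx)  # 2.0 3.0
```

Agreement of these statistics between predicted and reference masks is what "no significant discrepancies in image quality metrics" refers to.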

An efficient deep learning based approach for automated identification of cervical vertebrae fracture as a clinical support aid.

Singh M, Tripathi U, Patel KK, Mohit K, Pathak S

pubmed · Jul 15 2025
Cervical vertebrae fractures pose a significant risk to a patient's health, and accurate diagnosis and prompt treatment are essential. Moreover, automated analysis of cervical vertebrae fractures is of utmost importance, as deep learning models play a significant role in identification and classification. In this paper, we propose a novel hybrid transfer learning approach for the identification and classification of fractures in axial CT scan slices of the cervical spine. We utilize the publicly available RSNA (Radiological Society of North America) dataset of annotated cervical vertebrae fractures for our experiments. The CT scan slices undergo preprocessing and analysis to extract features, employing four distinct pre-trained transfer learning models to detect abnormalities in the cervical vertebrae. The top-performing model, Inception-ResNet-v2, is combined with the upsampling component of U-Net to form a hybrid architecture. The hybrid model demonstrates superior performance over traditional deep learning models, achieving an overall accuracy of 98.44% on 2,984 test CT scan slices, a 3.62% relative improvement over the 95% accuracy reported for radiologists. This study advances clinical decision support systems, equipping medical professionals with a powerful tool for timely intervention and accurate diagnosis of cervical vertebrae fractures, thereby enhancing patient outcomes and healthcare efficiency.

Are Vision Foundation Models Ready for Out-of-the-Box Medical Image Registration?

Hanxue Gu, Yaqian Chen, Nicholas Konz, Qihang Li, Maciej A. Mazurowski

arxiv preprint · Jul 15 2025
Foundation models, pre-trained on large image datasets and capable of capturing rich feature representations, have recently shown potential for zero-shot image registration. However, their performance has mostly been tested in the context of rigid or less complex structures, such as the brain or abdominal organs, and it remains unclear whether these models can handle more challenging, deformable anatomy. Breast MRI registration is particularly difficult due to significant anatomical variation between patients, deformation caused by patient positioning, and the presence of thin and complex internal structures of fibroglandular tissue, where accurate alignment is crucial. Whether foundation model-based registration algorithms can address this level of complexity remains an open question. In this study, we provide a comprehensive evaluation of foundation model-based registration algorithms for breast MRI. We assess five pre-trained encoders, including DINO-v2, SAM, MedSAM, SSLSAM, and MedCLIP, across four key breast registration tasks that capture variations in acquisition years and dates, sequences, modalities, and patient disease status (lesion versus no lesion). Our results show that foundation model-based algorithms such as SAM outperform traditional registration baselines for overall breast alignment, especially under large domain shifts, but struggle to capture fine details of fibroglandular tissue. Interestingly, additional pre-training or fine-tuning on medical or breast-specific images in MedSAM and SSLSAM does not improve registration performance and may even decrease it in some cases. Further work is needed to understand how domain-specific training influences registration and to explore targeted strategies that improve both global alignment and fine structure accuracy. We publicly release our code at https://github.com/mazurowski-lab/Foundation-based-reg
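As a reference point for what a "traditional registration baseline" can look like in its simplest form, here is a toy translation-only registration by exhaustive normalized cross-correlation search; this is not one of the methods evaluated in the paper, only a self-contained sketch of the idea:

```python
import numpy as np

def best_shift(fixed, moving, max_shift=3):
    """Exhaustive search for the integer (dy, dx) translation that
    maximizes normalized cross-correlation with the fixed image --
    a toy stand-in for classical intensity-based registration."""
    best, best_ncc = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            a = fixed - fixed.mean()
            b = shifted - shifted.mean()
            ncc = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            if ncc > best_ncc:
                best, best_ncc = (dy, dx), ncc
    return best

rng = np.random.default_rng(0)
fixed = rng.random((16, 16))
moving = np.roll(np.roll(fixed, -2, axis=0), 1, axis=1)  # known displacement
print(best_shift(fixed, moving))  # recovers (2, -1)
```

Real baselines (e.g. B-spline deformable registration) optimize far richer transforms, and the foundation-model approaches studied here instead match learned encoder features; the contrast between global alignment and fine fibroglandular detail is exactly where the paper finds them to differ.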

COLI: A Hierarchical Efficient Compressor for Large Images

Haoran Wang, Hanyu Pei, Yang Lyu, Kai Zhang, Li Li, Feng-Lei Fan

arxiv preprint · Jul 15 2025
The escalating adoption of high-resolution, large-field-of-view imagery amplifies the need for efficient compression methodologies. Conventional techniques frequently fail to preserve critical image details, while data-driven approaches exhibit limited generalizability. Implicit Neural Representations (INRs) present a promising alternative by learning continuous mappings from spatial coordinates to pixel intensities for individual images, thereby storing network weights rather than raw pixels and avoiding the generalization problem. However, INR-based compression of large images faces challenges including slow compression speed and suboptimal compression ratios. To address these limitations, we introduce COLI (Compressor for Large Images), a novel framework leveraging Neural Representations for Videos (NeRV). First, recognizing that INR-based compression constitutes a training process, we accelerate its convergence through a pretraining-finetuning paradigm, mixed-precision training, and reformulation of the sequential loss into a parallelizable objective. Second, capitalizing on INRs' transformation of image storage constraints into weight storage, we implement Hyper-Compression, a novel post-training technique to substantially enhance compression ratios while maintaining minimal output distortion. Evaluations across two medical imaging datasets demonstrate that COLI consistently achieves competitive or superior PSNR and SSIM metrics at significantly reduced bits per pixel (bpp), while accelerating NeRV training by up to 4 times.
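The figures of merit used in the evaluation, PSNR and bits per pixel, can be computed directly; the toy image and compressed byte count below are hypothetical:

```python
import numpy as np

def psnr(orig, recon, max_val=255.0):
    """Peak signal-to-noise ratio in dB between an original image
    and its reconstruction."""
    mse = np.mean((orig.astype(float) - recon.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

def bits_per_pixel(compressed_bytes, height, width):
    """bpp: total compressed size in bits divided by the pixel count."""
    return compressed_bytes * 8 / (height * width)

orig  = np.full((4, 4), 100.0)
recon = orig + 5.0  # uniform error of 5 intensity levels -> MSE = 25
print(round(psnr(orig, recon), 2))      # 34.15
print(bits_per_pixel(4096, 256, 256))   # 0.5
```

A compressor like COLI aims to push bpp down while keeping PSNR (and SSIM) high; storing network weights instead of pixels is what makes the bpp accounting meaningful for INR-based methods.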

Identification of high-risk hepatoblastoma in the CHIC risk stratification system based on enhanced CT radiomics features.

Yang Y, Si J, Zhang K, Li J, Deng Y, Wang F, Liu H, He L, Chen X

pubmed · Jul 15 2025
Survival of patients with high-risk hepatoblastoma remains low, and early identification of high-risk hepatoblastoma is critical. This study investigated the clinical value of contrast-enhanced computed tomography (CECT) radiomics in predicting high-risk hepatoblastoma. Clinical and CECT imaging data were retrospectively collected from 162 children who were treated at our hospital and pathologically diagnosed with hepatoblastoma. Patients were categorized into high-risk and non-high-risk groups according to the Children's Hepatic Tumors International Collaboration - Hepatoblastoma Study (CHIC-HS). These cases were then randomized into training and test groups in a ratio of 7:3. The region of interest (ROI) was first outlined in the pre-treatment venous-phase images; the best features were then extracted and filtered, and radiomics models were built with three machine learning methods: Bagging Decision Tree (BDT), Logistic Regression (LR), and Stochastic Gradient Descent (SGD). The AUC, 95% CI, and accuracy of each model were calculated, and model performance was evaluated with the DeLong test. The BDT model had AUCs of 0.966 (95% CI: 0.938-0.994) and 0.875 (95% CI: 0.77-0.98) for the training and test sets, respectively, with accuracies of 0.841 and 0.816, respectively. The LR model had AUCs of 0.901 (95% CI: 0.839-0.963) and 0.845 (95% CI: 0.721-0.968), with accuracies of 0.788 and 0.735, respectively. The SGD model had AUCs of 0.788 (95% CI: 0.712-0.863) and 0.742 (95% CI: 0.627-0.857), with accuracies of 0.735 and 0.653, respectively. CECT-based radiomics can identify high-risk hepatoblastoma and may provide additional imaging biomarkers for its identification.

A diffusion model for universal medical image enhancement.

Fei B, Li Y, Yang W, Gao H, Xu J, Ma L, Yang Y, Zhou P

pubmed · Jul 15 2025
The development of medical imaging techniques has made a significant contribution to clinical decision-making. However, suboptimal imaging quality, such as irregular illumination or imbalanced intensity, presents significant obstacles to automated disease screening, analysis, and diagnosis. Existing approaches for natural image enhancement are mostly trained with numerous paired images, presenting challenges in data collection and training costs, all while lacking the ability to generalize effectively. Here, we introduce a pioneering training-free Diffusion Model for Universal Medical Image Enhancement, named UniMIE. UniMIE demonstrates unsupervised enhancement capabilities across various medical image modalities without the need for any fine-tuning, relying solely on a single model pre-trained on ImageNet. We conduct a comprehensive evaluation on 13 imaging modalities and over 15 medical types, demonstrating better quality, robustness, and accuracy than other modality-specific and data-inefficient models. By delivering high-quality enhancement and corresponding accuracy gains in downstream tasks across a wide range of applications, UniMIE exhibits considerable potential to accelerate the advancement of diagnostic tools and customized treatment plans. UniMIE represents a transformative approach to medical image enhancement, offering a versatile and robust solution that adapts to diverse imaging conditions. By improving image quality and facilitating better downstream analyses, UniMIE has the potential to revolutionize clinical workflows and enhance diagnostic accuracy across a wide range of medical applications.

Assessing MRI-based Artificial Intelligence Models for Preoperative Prediction of Microvascular Invasion in Hepatocellular Carcinoma: A Systematic Review and Meta-analysis.

Han X, Shan L, Xu R, Zhou J, Lu M

pubmed · Jul 15 2025
To evaluate the performance of magnetic resonance imaging (MRI)-based artificial intelligence (AI) in the preoperative prediction of microvascular invasion (MVI) in patients with hepatocellular carcinoma (HCC). A systematic search of PubMed, Embase, and Web of Science was conducted up to May 2025, following PRISMA guidelines. Studies using MRI-based AI models with histopathologically confirmed MVI were included. Study quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework. Statistical synthesis used bivariate random-effects models. Twenty-nine studies were included, totaling 2838 internal and 1161 external validation cases. Pooled internal validation showed a sensitivity of 0.81 (95% CI: 0.76-0.85), specificity of 0.82 (95% CI: 0.78-0.85), diagnostic odds ratio (DOR) of 19.33 (95% CI: 13.15-28.42), and area under the curve (AUC) of 0.88 (95% CI: 0.85-0.91). External validation yielded a comparable AUC of 0.85. Traditional machine learning methods achieved higher sensitivity than deep learning approaches in both internal and external validation cohorts (both P < 0.05). Studies incorporating both radiomics and clinical features demonstrated superior sensitivity and specificity compared to radiomics-only models (P < 0.01). MRI-based AI demonstrates high performance for preoperative prediction of MVI in HCC, particularly for MRI-based models that combine multimodal imaging and clinical variables. However, substantial heterogeneity and low GRADE levels may affect the strength of the evidence, highlighting the need for methodological standardization and multicenter prospective validation to ensure clinical applicability.
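The reported diagnostic odds ratio follows directly from pooled sensitivity and specificity; plugging in the point estimates above roughly reproduces the quoted DOR of 19.33 (the small gap is expected, since the published value comes from the bivariate random-effects model rather than from the point estimates):

```python
def diagnostic_odds_ratio(sens, spec):
    """DOR = (sens / (1 - sens)) / ((1 - spec) / spec): the odds of a
    positive test in diseased versus non-diseased patients."""
    return (sens / (1 - sens)) / ((1 - spec) / spec)

# pooled internal-validation point estimates quoted above
print(round(diagnostic_odds_ratio(0.81, 0.82), 1))  # 19.4
```

A DOR of 1 means the test is uninformative; values near 20, as here, indicate strong discrimination between MVI-positive and MVI-negative cases.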