Page 438 of 449 (4481 results)

Machine learning model for diagnosing salivary gland adenoid cystic carcinoma based on clinical and ultrasound features.

Su HZ, Li ZY, Hong LC, Wu YH, Zhang F, Zhang ZB, Zhang XD

PubMed · May 8, 2025
To develop and validate machine learning (ML) models for diagnosing adenoid cystic carcinoma (ACC) of the salivary glands based on clinical and ultrasound features. A total of 365 patients with ACC or non-ACC of the salivary glands treated at two centers were enrolled into a training cohort and internal and external validation cohorts. The synthetic minority oversampling technique was used to address class imbalance. Least absolute shrinkage and selection operator (LASSO) regression identified optimal features, which were subsequently used to construct predictive models employing five ML algorithms. Model performance was evaluated across a comprehensive array of metrics, most prominently the area under the receiver operating characteristic curve (AUC). LASSO regression identified six key features (sex, pain symptoms, number, cystic areas, rat tail sign, and polar vessel), which were then used to develop five ML models. Among these models, the support vector machine (SVM) model demonstrated superior performance, achieving the highest AUCs of 0.899 and 0.913, accuracies of 90.54% and 91.53%, and F1 scores of 0.774 and 0.783 in the internal and external validation cohorts, respectively. Decision curve analysis further revealed that the SVM model offered greater clinical utility than the other models. The ML model based on clinical and ultrasound (US) features provides an accurate and noninvasive method for distinguishing ACC from non-ACC. This machine learning model, constructed from clinical and ultrasound characteristics, serves as a valuable tool for identifying salivary gland adenoid cystic carcinoma. The rat tail sign and polar vessel on US predict ACC. Machine learning models based on clinical and US features can identify ACC. The SVM model performed robustly and accurately.
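As an editorial illustration of the two-step pipeline this abstract describes (LASSO-style feature selection followed by an SVM classifier), the sketch below uses synthetic data and scikit-learn. The feature matrix, seed, and the L1-penalized logistic regression standing in for LASSO are assumptions, and the SMOTE oversampling step is omitted for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the clinical/US feature matrix: 6 informative
# binary features (e.g. sex, pain, rat tail sign, polar vessel) plus
# noise columns; labels loosely depend on the informative features.
n = 365
X_inf = rng.integers(0, 2, size=(n, 6)).astype(float)
X_noise = rng.normal(size=(n, 10))
X = np.hstack([X_inf, X_noise])
y = (X_inf.sum(axis=1) + rng.normal(scale=1.0, size=n) > 3).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Step 1: L1-penalized logistic regression as a LASSO-style selector.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
lasso.fit(X_tr, y_tr)
selected = np.flatnonzero(lasso.coef_[0] != 0)

# Step 2: SVM (RBF kernel) on the selected features, with probability
# outputs so an ROC-AUC can be computed as in the paper.
svm = SVC(kernel="rbf", probability=True, random_state=0)
svm.fit(X_tr[:, selected], y_tr)
auc = roc_auc_score(y_te, svm.predict_proba(X_te[:, selected])[:, 1])
print(f"selected features: {selected.tolist()}, validation AUC: {auc:.3f}")
```

This is only a sketch of the general technique, not the study's actual model; the published SVM was trained on the two-center clinical dataset described above.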

Advancement of an automatic segmentation pipeline for metallic artifact removal in post-surgical ACL MRI.

Barnes DA, Murray CJ, Molino J, Beveridge JE, Kiapour AM, Murray MM, Fleming BC

PubMed · May 8, 2025
Magnetic resonance imaging (MRI) has the potential to identify post-operative risk factors for re-tearing an anterior cruciate ligament (ACL) using a combination of imaging signal intensity (SI) and cross-sectional area measurements of the healing ACL. During surgery, micro-debris can result from drilling the osseous tunnels for graft and/or suture insertion. This debris presents a limitation when using post-surgical MRI to assess reinjury risk, as it causes rapid magnetic field variations during acquisition, leading to signal loss within a voxel. The present study demonstrates how K-means clustering can refine an automatic segmentation algorithm to remove the lost signal intensity values induced by the artifacts in the image. MRI data were obtained from 82 patients enrolled in three prospective clinical trials of ACL surgery. Constructive Interference in Steady State MRIs were collected at 6 months post-operation. Manual segmentation of the ACL with metallic artifacts removed served as the gold standard. The accuracy of the automatic ACL segmentations was assessed using the Dice coefficient, sensitivity, and precision. The performance of the automatic segmentation was comparable to manual segmentation (Dice coefficient = .81, precision = .81, sensitivity = .82). The normalized average signal intensity was 1.06 (±0.25) for the automatic and 1.04 (±0.23) for the manual segmentation, a difference of 2%. These metrics emphasize the automatic segmentation model's ability to precisely capture ACL signal intensity while excluding artifact regions. The automatic artifact segmentation model described here could enhance qMRI's clinical utility by allowing more accurate and time-efficient segmentation of the ACL.
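The artifact-removal idea here, clustering voxel intensities so that signal-loss voxels can be excluded from the segmentation, can be sketched roughly as follows; the intensity distributions and two-cluster setup are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic stand-in for voxel signal intensities inside an ACL mask:
# ligament signal around 1.0, plus a tail of near-zero voxels mimicking
# susceptibility signal loss from metallic micro-debris.
ligament = rng.normal(1.0, 0.15, size=900)
artifact = rng.normal(0.05, 0.02, size=100)
si = np.concatenate([ligament, artifact]).reshape(-1, 1)

# K-means with k=2 separates retained-signal voxels from dropout voxels;
# the cluster with the lower mean intensity is treated as artifact.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(si)
artifact_label = int(np.argmin(km.cluster_centers_.ravel()))
keep = km.labels_ != artifact_label

mean_si = si[keep].mean()
print(f"kept {keep.sum()} of {si.size} voxels, mean SI = {mean_si:.2f}")
```

Excluding the low-intensity cluster keeps the normalized mean SI close to the true ligament signal, which is the behavior the study's comparison against manual segmentation quantifies.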

Impact of tracer uptake rate on quantification accuracy of myocardial blood flow in PET: A simulation study.

Hong X, Sanaat A, Salimi Y, Nkoulou R, Arabi H, Lu L, Zaidi H

PubMed · May 8, 2025
Cardiac perfusion PET is commonly used to assess ischemia and cardiovascular risk, which enables quantitative measurements of myocardial blood flow (MBF) through kinetic modeling. However, the estimation of kinetic parameters is challenging due to the noisy nature of short dynamic frames and limited sample data points. This work aimed to investigate the errors in MBF estimation in PET through a simulation study and to evaluate different parameter estimation approaches, including a deep learning (DL) method. Simulated studies were generated using digital phantoms based on cardiac segmentations from 55 clinical CT images. We employed the irreversible 2-tissue compartmental model and simulated dynamic <sup>13</sup>N-ammonia PET scans under both rest and stress conditions (220 cases each). The simulations covered a rest K<sub>1</sub> range of 0.6 to 1.2 and a stress K<sub>1</sub> range of 1.2 to 3.6 (unit: mL/min/g) in the myocardium. A transformer-based DL model was trained on the simulated dataset to predict parametric images (PIMs) from noisy PET image frames and was validated using 5-fold cross-validation. We compared the DL method with the voxel-wise nonlinear least squares (NLS) fitting applied to the dynamic images, using either Gaussian filter (GF) smoothing (GF-NLS) or a dynamic nonlocal means (DNLM) algorithm for denoising (DNLM-NLS). Two patients with coronary CT angiography (CTA) and fractional flow reserve (FFR) were enrolled to test the feasibility of applying DL models on clinical PET data. The DL method showed clearer image structures with reduced noise compared to the traditional NLS-based methods. In terms of mean absolute relative error (MARE), as the rest K<sub>1</sub> values increased from 0.6 to 1.2 mL/min/g, the overall bias in myocardium K<sub>1</sub> estimates decreased from approximately 58% to 45% for the NLS-based methods while the DL method showed a reduction in MARE from 42% to 18%. 
For stress data, as the stress K<sub>1</sub> decreased from 3.6 to 1.2 mL/min/g, the MARE increased from 30% to 70% for the GF-NLS method. In contrast, both the DNLM-NLS (average: 42%) and the DL methods (average: 20%) demonstrated significantly smaller MARE changes as stress K<sub>1</sub> varied. Regarding the regional mean bias (±standard deviation), the GF-NLS method had a bias of 6.30% (±8.35%) of rest K<sub>1</sub>, compared to 1.10% (±8.21%) for DNLM-NLS and 6.28% (±14.05%) for the DL method. For the stress K<sub>1</sub>, the GF-NLS showed a mean bias of 10.72% (±9.34%) compared to 1.69% (±8.82%) for DNLM-NLS and -10.55% (±9.81%) for the DL method. This study showed that an increase in the tracer uptake rate (K<sub>1</sub>) corresponded to improved accuracy and precision in MBF quantification, whereas lower tracer uptake resulted in higher noise in dynamic PET and poorer parameter estimates. Utilizing denoising techniques or DL approaches can mitigate noise-induced bias in PET parametric imaging.
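The irreversible 2-tissue compartment model and the NLS fitting it is compared against can be sketched in a few lines; the toy arterial input function, Euler integration, and noise level are assumptions for illustration, not the study's simulation setup.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 10, 121)            # minutes
dt = t[1] - t[0]
cp = t * np.exp(-t)                    # toy arterial input function

def tissue_tac(t_, K1, k2, k3):
    # Irreversible 2-tissue model: C_T = C1 + C2 with
    # dC1/dt = K1*Cp - (k2 + k3)*C1,  dC2/dt = k3*C1,
    # solved here by simple Euler integration on the frame grid.
    c1 = np.zeros_like(t_)
    c2 = np.zeros_like(t_)
    for i in range(1, len(t_)):
        c1[i] = c1[i - 1] + dt * (K1 * cp[i - 1] - (k2 + k3) * c1[i - 1])
        c2[i] = c2[i - 1] + dt * k3 * c1[i - 1]
    return c1 + c2

true = (0.9, 0.2, 0.1)                 # rest-range K1 in mL/min/g
noisy = tissue_tac(t, *true) + np.random.default_rng(2).normal(0, 0.002, t.size)

# Voxel-wise NLS fitting, as in the GF-NLS/DNLM-NLS baselines.
(K1_hat, k2_hat, k3_hat), _ = curve_fit(
    tissue_tac, t, noisy, p0=(0.5, 0.1, 0.05), bounds=(0, [5, 2, 2]))
print(f"true K1 = {true[0]}, fitted K1 = {K1_hat:.3f}")
```

At realistic dynamic-frame noise levels (far higher than the toy value here), this fit degrades sharply for low K<sub>1</sub>, which is the bias the abstract quantifies and the DL method mitigates.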

Comparative analysis of open-source against commercial AI-based segmentation models for online adaptive MR-guided radiotherapy.

Langner D, Nachbar M, Russo ML, Boeke S, Gani C, Niyazi M, Thorwarth D

PubMed · May 8, 2025
Online adaptive magnetic resonance-guided radiotherapy (MRgRT) has emerged as a state-of-the-art treatment option for multiple tumour entities, accounting for daily anatomical and tumour volume changes and thus allowing sparing of relevant organs at risk (OARs). However, the annotation of treatment-relevant anatomical structures in the context of online plan adaptation remains challenging, often relying on commercial segmentation solutions due to the limited availability of clinically validated alternatives. The aim of this study was to investigate whether an open-source artificial intelligence (AI) segmentation network can compete with the annotation accuracy of a commercial solution, both trained on the identical dataset, questioning the need for commercial models in clinical practice. For 47 pelvic patients, T2w MR imaging data acquired on a 1.5 T MR-Linac were manually contoured, identifying the prostate, seminal vesicles, rectum, anal canal, bladder, penile bulb, and bony structures. These training data were used to generate an in-house AI segmentation model, an nnU-Net with residual encoder architecture featuring a streamlined single-image inference pipeline, and to re-train a commercial solution. For quantitative evaluation, 20 MR images were contoured by a radiation oncologist, considered ground-truth contours (GTC), and compared with the in-house/commercial AI-based contours (iAIC/cAIC) using the Dice Similarity Coefficient (DSC), 95% Hausdorff distance (HD95), and surface DSC (sDSC). For qualitative evaluation, four radiation oncologists assessed the usability of OAR/target iAIC within an online adaptive workflow using a four-point Likert scale: (1) acceptable without modification, (2) requiring minor adjustments, (3) requiring major adjustments, and (4) not usable. Patient-individual annotations were generated in a median [range] time of 23 [16-34] s for iAIC and 152 [121-198] s for cAIC, respectively.
OARs showed a maximum median DSC of 0.97/0.97 (iAIC/cAIC) for the bladder and a minimum median DSC of 0.78/0.79 (iAIC/cAIC) for the anal canal/penile bulb. The maximum and minimum median HD95 were observed for the rectum (17.3/20.6 mm, iAIC/cAIC) and the bladder (5.6/6.0 mm, iAIC/cAIC), respectively. Overall, the average median DSC/HD95 values were 0.87/11.8 mm (iAIC) and 0.83/10.2 mm (cAIC) for OARs/targets, and 0.90/11.9 mm (iAIC) and 0.91/16.5 mm (cAIC) for bony structures. For a tolerance of 3 mm, the highest sDSC was determined for the bladder (iAIC: 1.00, cAIC: 0.99), and the lowest for the prostate in iAIC (0.89) and the anal canal in cAIC (0.80). Qualitatively, 84.8% of analysed contours were considered clinically acceptable for iAIC, while 12.9% required minor adjustments and 2.3% required major adjustments or were classed as unusable. Contour-specific analysis showed that iAIC achieved the highest mean score of 1.00 for the anal canal and the lowest of 1.61 for the prostate. This study demonstrates that an open-source segmentation framework can achieve annotation accuracy comparable to commercial solutions for pelvic anatomy in online adaptive MRgRT. The adapted framework not only maintained high segmentation performance, with 84.8% of contours accepted by physicians or requiring only minor corrections (12.9%), but also enhanced the clinical workflow efficiency of online adaptive MRgRT through reduced inference times. These findings establish open-source frameworks as viable alternatives to commercial systems in supervised clinical workflows.
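For readers unfamiliar with the evaluation metrics used in this and several other abstracts on this page, a minimal sketch of DSC and a surface-based HD95 on toy 2D masks follows; real evaluations operate on 3D contours with anisotropic voxel spacing.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def dice(a, b):
    # Dice similarity coefficient between two binary masks.
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b, spacing=1.0):
    # 95th-percentile symmetric Hausdorff distance between mask surfaces
    # (surface = mask voxels removed by one erosion step).
    surf_a = np.argwhere(a & ~binary_erosion(a)) * spacing
    surf_b = np.argwhere(b & ~binary_erosion(b)) * spacing
    d = cdist(surf_a, surf_b)
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

# Two overlapping square "contours" on a toy 2D grid, shifted by 2 voxels.
gt = np.zeros((64, 64), bool); gt[10:40, 10:40] = True
pred = np.zeros((64, 64), bool); pred[12:42, 10:40] = True

print(f"DSC = {dice(gt, pred):.3f}, HD95 = {hd95(gt, pred):.1f}")
```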

Weakly supervised language models for automated extraction of critical findings from radiology reports.

Das A, Talati IA, Chaves JMZ, Rubin D, Banerjee I

PubMed · May 8, 2025
Critical findings in radiology reports are life-threatening conditions that must be communicated promptly to physicians for timely patient management. Although challenging, advances in natural language processing (NLP), particularly large language models (LLMs), now enable automated identification of key findings in verbose reports. Given the scarcity of labeled critical-findings data, we implemented a two-phase, weakly supervised fine-tuning approach on 15,000 unlabeled Mayo Clinic reports. The fine-tuned model then automatically extracted critical terms from internal (Mayo Clinic, n = 80) and external (MIMIC-III, n = 123) test datasets, validated against expert annotations. Model performance was further assessed on 5000 MIMIC-IV reports using the LLM-aided metrics G-eval and Prometheus. Both manual and LLM-based evaluations showed improved task alignment with weak supervision. The pipeline and model, publicly available under an academic license, can aid critical finding extraction for research and clinical use ( https://github.com/dasavisha/CriticalFindings_Extract ).
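A drastically simplified sketch of the weak-supervision idea follows: heuristic keyword labels stand in for the paper's two-phase fine-tuning, and a TF-IDF classifier stands in for an LLM. The report snippets and keyword list are invented for illustration and have no connection to the Mayo Clinic or MIMIC data.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy report snippets; a keyword heuristic provides weak labels in lieu
# of expert annotation.
reports = [
    "No acute intracranial abnormality.",
    "Large right pneumothorax with tension physiology.",
    "Stable postsurgical changes, no new findings.",
    "Acute pulmonary embolism involving the right main pulmonary artery.",
    "Unremarkable study.",
    "Findings concerning for acute aortic dissection.",
    "Degenerative changes of the lumbar spine.",
    "New intracranial hemorrhage with midline shift.",
]
CRITICAL_TERMS = ("pneumothorax", "embolism", "dissection", "hemorrhage")
weak_labels = np.array(
    [any(t in r.lower() for t in CRITICAL_TERMS) for r in reports], int)

# Stage 2: fit a lightweight classifier on the weakly labeled corpus
# (standing in for fine-tuning a language model on unlabeled reports).
vec = TfidfVectorizer()
X = vec.fit_transform(reports)
clf = LogisticRegression().fit(X, weak_labels)

test = ["Small left pneumothorax.", "Stable study, no new findings."]
pred = clf.predict(vec.transform(test))
print(pred.tolist())
```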

MoRe-3DGSMR: Motion-resolved reconstruction framework for free-breathing pulmonary MRI based on 3D Gaussian representation

Tengya Peng, Ruyi Zha, Qing Zou

arXiv preprint · May 8, 2025
This study presents an unsupervised, motion-resolved reconstruction framework for high-resolution, free-breathing pulmonary magnetic resonance imaging (MRI), utilizing a three-dimensional Gaussian representation (3DGS). The proposed method leverages 3DGS to address the challenges of motion-resolved 3D isotropic pulmonary MRI reconstruction by enabling data smoothing between voxels for continuous spatial representation. Pulmonary MRI data acquisition is performed using a golden-angle radial sampling trajectory, with respiratory motion signals extracted from the center of k-space in each radial spoke. Based on the estimated motion signal, the k-space data are sorted into multiple respiratory phases. A 3DGS framework is then applied to reconstruct a reference image volume from the first motion state. Subsequently, a patient-specific convolutional neural network is trained to estimate deformation vector fields (DVFs), which are used to generate the remaining motion states through spatial transformation of the reference volume. The proposed reconstruction pipeline is evaluated on six datasets from six subjects and benchmarked against three state-of-the-art reconstruction methods. The experimental findings demonstrate that the proposed framework effectively reconstructs high-resolution, motion-resolved pulmonary MR images. Compared with existing approaches, it achieves superior image quality, reflected in higher signal-to-noise and contrast-to-noise ratios. The proposed unsupervised 3DGS-based reconstruction method enables accurate motion-resolved pulmonary MRI with isotropic spatial resolution. Its superior performance in image quality metrics over state-of-the-art methods highlights its potential as a robust solution for clinical pulmonary MR imaging.
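The motion-signal extraction and phase-sorting steps of the pipeline can be sketched as follows; the golden-angle increment is standard, but the surrogate respiratory signal, spoke timing, and equal-count amplitude binning are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

GOLDEN_ANGLE = np.deg2rad(111.246)     # golden-angle increment for radial MRI

n_spokes = 2000
angles = np.mod(np.arange(n_spokes) * GOLDEN_ANGLE, np.pi)

# Surrogate respiratory signal: in practice this would come from the
# k-space centre (DC term) of each spoke; here a noisy sine stands in.
t = np.arange(n_spokes) * 0.005        # assumed ~5 ms per spoke
resp = (np.sin(2 * np.pi * 0.25 * t)
        + np.random.default_rng(3).normal(0, 0.05, n_spokes))

# Amplitude binning into motion states: equal-count bins so each
# respiratory phase receives roughly the same number of spokes.
n_phases = 4
order = np.argsort(resp)
phase = np.empty(n_spokes, int)
phase[order] = np.arange(n_spokes) * n_phases // n_spokes

counts = np.bincount(phase)
print(f"spokes per phase: {counts.tolist()}")
```

Each phase's spokes would then feed a separate gridding/3DGS reconstruction, with the first motion state serving as the reference volume.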

Ultrasound-based deep learning radiomics for enhanced axillary lymph node metastasis assessment: a multicenter study.

Zhang D, Zhou W, Lu WW, Qin XC, Zhang XY, Luo YH, Wu J, Wang JL, Zhao JJ, Zhang CX

PubMed · May 8, 2025
Accurate preoperative assessment of axillary lymph node metastasis (ALNM) in breast cancer is crucial for guiding treatment decisions. This study aimed to develop a deep-learning radiomics model for assessing ALNM and to evaluate its impact on radiologists' diagnostic accuracy. This multicenter study included 866 breast cancer patients from 6 hospitals. The data were categorized into training, internal test, external test, and prospective test sets. Deep learning and handcrafted radiomics features were extracted from ultrasound images of primary tumors and lymph nodes. The tumor score and LN score were calculated following feature selection, and a clinical-radiomics model was constructed based on these scores along with clinical-ultrasonic risk factors. The model's performance was validated across the 3 test sets. Additionally, the diagnostic performance of radiologists, with and without model assistance, was evaluated. The clinical-radiomics model demonstrated robust discrimination with AUCs of 0.94, 0.92, 0.91, and 0.95 in the training, internal test, external test, and prospective test sets, respectively. It surpassed the clinical model and single score in all sets (P < .05). Decision curve analysis and clinical impact curves validated the clinical utility of the clinical-radiomics model. Moreover, the model significantly improved radiologists' diagnostic accuracy, with AUCs increasing from 0.71 to 0.82 for the junior radiologist and from 0.75 to 0.85 for the senior radiologist. The clinical-radiomics model effectively predicts ALNM in breast cancer patients using noninvasive ultrasound features. Additionally, it enhances radiologists' diagnostic accuracy, potentially optimizing resource allocation in breast cancer management.
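A minimal sketch of the fusion step described here, combining a tumor score, an LN score, and clinical factors into one clinical-radiomics classifier, on synthetic data with assumed effect sizes:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

# Synthetic stand-ins: a deep-learning "tumor score" and "LN score"
# (each the scalar output of a radiomics pipeline) plus two
# clinical-ultrasonic risk factors.
n = 866
aln_metastasis = rng.integers(0, 2, n)
tumor_score = 0.6 * aln_metastasis + rng.normal(0, 0.4, n)
ln_score = 0.8 * aln_metastasis + rng.normal(0, 0.4, n)
clinical = rng.normal(0, 1, (n, 2)) + 0.3 * aln_metastasis[:, None]

X = np.column_stack([tumor_score, ln_score, clinical])
X_tr, X_te, y_tr, y_te = train_test_split(X, aln_metastasis, random_state=0)

# Logistic regression as the fusion model over scores + clinical factors.
model = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"clinical-radiomics AUC = {auc:.3f}")
```

The published model's AUCs (0.91 to 0.95 across test sets) reflect real multicenter data; the numbers above are an artifact of the assumed effect sizes.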

Systematic review and epistemic meta-analysis to advance binomial AI-radiomics integration for predicting high-grade glioma progression and enhancing patient management.

Chilaca-Rosas MF, Contreras-Aguilar MT, Pallach-Loose F, Altamirano-Bustamante NF, Salazar-Calderon DR, Revilla-Monsalve C, Heredia-Gutiérrez JC, Conde-Castro B, Medrano-Guzmán R, Altamirano-Bustamante MM

PubMed · May 8, 2025
High-grade gliomas, particularly glioblastoma (MeSH: Glioblastoma), are among the most aggressive and lethal central nervous system tumors, necessitating advanced diagnostic and prognostic strategies. This systematic review and epistemic meta-analysis explores the integration of Artificial Intelligence (AI) and Radiomics Inter-field (AIRI) to enhance predictive modeling of tumor progression. A comprehensive literature search identified 19 high-quality studies, which were analyzed to evaluate radiomic features and machine learning models for predicting overall survival (OS) and progression-free survival (PFS). Key findings highlight the predictive strength of specific MRI-derived radiomic features, such as log-filter and Gabor textures, and the superior performance of Support Vector Machine (SVM) and Random Forest (RF) models, which achieved high accuracy and AUC scores (e.g., 98% AUC and 98.7% accuracy for OS). This research characterizes the current state of the AIRI field and shows that published articles report results with differing performance indicators and metrics, making outcomes heterogeneous and difficult to synthesize. Some current articles were also found to rely on biased methodologies. This study proposes a structured AIRI development roadmap and accompanying guidelines to avoid bias and make results comparable, emphasizing standardized feature extraction and AI model training to improve reproducibility across clinical settings. By advancing precision medicine, AIRI integration has the potential to refine clinical decision-making and enhance patient outcomes.

Effective data selection via deep learning processes and corresponding learning strategies in ultrasound image classification.

Lee H, Kwak JY, Lee E

PubMed · May 8, 2025
In this study, we propose a novel approach to enhancing transfer learning by optimizing data selection through deep learning techniques and corresponding innovative learning strategies. This method is particularly beneficial when the available dataset has reached its limit and cannot be further expanded. Our approach focuses on maximizing the use of existing data to improve learning outcomes, offering an effective solution for data-limited applications in medical imaging classification. The proposed method consists of two stages. In the first stage, an original network performs the initial classification. When the original network exhibits low confidence in its predictions, ambiguous classifications are passed to a secondary decision-making step involving a newly trained network, referred to as the True network. The True network shares the same architecture as the original network but is trained on a subset of the original dataset selected based on consensus among multiple independent networks. It is then used to verify the classification results of the original network, identifying and correcting any misclassified images. To evaluate the effectiveness of our approach, we conducted experiments on thyroid nodule ultrasound images using the ResNet101 and Vision Transformer architectures, along with eleven other pre-trained neural networks. The proposed method led to performance improvements across all five key metrics (accuracy, sensitivity, specificity, F1-score, and AUC) compared to using only the original or True networks in ResNet101. Additionally, the True network showed strong performance when applied to the Vision Transformer, and similar enhancements were observed across multiple convolutional neural network architectures. Furthermore, to assess the robustness and adaptability of our method across different medical imaging modalities, we applied it to dermoscopic images and observed similar performance enhancements.
These results provide evidence of the effectiveness of our approach in improving transfer learning-based medical image classification without requiring additional training data.
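The two-stage, confidence-gated routing described above can be sketched with stand-in classifiers; the logits, the 0.7 threshold, and the assumption that the True network is stronger on ambiguous cases are all illustrative, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

n, n_classes = 200, 2
labels = rng.integers(0, n_classes, n)

# Stand-ins for the two networks' logits: the "True network" is assumed
# more reliable on the ambiguous cases the original network struggles with.
orig_logits = rng.normal(0, 1, (n, n_classes)) + 1.5 * np.eye(n_classes)[labels]
true_logits = rng.normal(0, 1, (n, n_classes)) + 3.0 * np.eye(n_classes)[labels]

orig_prob = softmax(orig_logits)
conf = orig_prob.max(axis=1)

# Stage 2: predictions below a confidence threshold are re-decided by
# the True network; confident predictions are kept as-is.
threshold = 0.7
pred = orig_prob.argmax(axis=1)
ambiguous = conf < threshold
pred[ambiguous] = true_logits[ambiguous].argmax(axis=1)

acc_orig = (orig_prob.argmax(axis=1) == labels).mean()
acc_two_stage = (pred == labels).mean()
print(f"original acc = {acc_orig:.3f}, two-stage acc = {acc_two_stage:.3f}")
```

The gain comes entirely from the re-routed ambiguous subset; confident predictions are untouched, which is what lets the method improve accuracy without retraining the original network.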

Patient-specific uncertainty calibration of deep learning-based autosegmentation networks for adaptive MRI-guided lung radiotherapy.

Rabe M, Meliadò EF, Marschner S, Belka C, Corradini S, Van den Berg CAT, Landry G, Kurz C

PubMed · May 8, 2025
Uncertainty assessment of deep learning autosegmentation (DLAS) models can support contour corrections in adaptive radiotherapy (ART), e.g. by utilizing Monte Carlo Dropout (MCD) uncertainty maps. However, poorly calibrated uncertainties at the patient level often render these clinically nonviable. We evaluated population-based and patient-specific DLAS accuracy and uncertainty calibration and propose a patient-specific post-training uncertainty calibration method for DLAS in ART. Approach: The study included 122 lung cancer patients treated with a low-field MR-linac (80/19/23 training/validation/test cases). Ten single-label 3D U-Net population-based baseline models (BM) were trained with dropout using planning MRIs (pMRIs) and contours for nine organs-at-risk (OARs) and gross tumor volumes (GTVs). Patient-specific models (PS) were created by fine-tuning BMs with each test patient's pMRI. Model uncertainty was assessed with MCD, averaged into probability maps. Uncertainty calibration was evaluated with reliability diagrams and expected calibration error (ECE). The proposed post-training calibration method rescaled MCD probabilities for fraction images in BM (calBM) and PS (calPS) after fitting reliability diagrams from pMRIs. All models were evaluated on fraction images using the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95) and ECE. Metrics were compared among models for all OARs combined (n=163) and the GTV (n=23), using Friedman and post-hoc Nemenyi tests (α=0.05). Main results: For the OARs, patient-specific fine-tuning significantly (p<0.001) increased median DSC from 0.78 (BM) to 0.86 (PS) and reduced HD95 from 14 mm (BM) to 6.0 mm (PS). Uncertainty calibration achieved substantial reductions in ECE, from 0.25 (BM) to 0.091 (calBM) and from 0.22 (PS) to 0.11 (calPS) (p<0.001), without significantly affecting DSC or HD95 (p>0.05). For the GTV, BM performance was poor (DSC=0.05) but significantly (p<0.001) improved with PS training (DSC=0.75), while uncertainty calibration reduced ECE from 0.22 (PS) to 0.15 (calPS) (p=0.45). Significance: Post-training uncertainty calibration yields geometrically accurate DLAS models with well-calibrated uncertainty estimates, crucial for ART applications.
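The expected calibration error central to this abstract can be sketched directly; the simulated probabilities, the fixed 0.15 overconfidence offset, and the recalibration step below are illustrative assumptions standing in for the paper's reliability-diagram fit.

```python
import numpy as np

def expected_calibration_error(probs, correct, n_bins=10):
    # ECE: bin predictions by confidence and average the gap between
    # mean confidence and empirical accuracy, weighted by bin size.
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - probs[mask].mean())
    return ece

rng = np.random.default_rng(5)

# Simulated voxel-wise foreground probabilities (e.g. averaged MC-Dropout
# maps) that are systematically overconfident relative to ground truth.
probs = rng.uniform(0.5, 1.0, 20000)
correct = (rng.uniform(size=probs.size) < np.clip(probs - 0.15, 0, 1))

ece_raw = expected_calibration_error(probs, correct.astype(float))

# A post-hoc rescaling fitted on a calibration set would shrink this gap;
# here the known 0.15 offset stands in for the fitted reliability curve.
ece_cal = expected_calibration_error(
    np.clip(probs - 0.15, 0, 1), correct.astype(float))
print(f"ECE before = {ece_raw:.3f}, after recalibration = {ece_cal:.3f}")
```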
