
Ultrasound Radio Frequency Time Series for Tissue Typing: Experiments on In-Vivo Breast Samples Using Texture-Optimized Features and Multi-Origin Method of Classification (MOMC).

Arab M, Fallah A, Rashidi S, Dastjerdi MM, Ahmadinejad N

PubMed paper · Jun 30, 2025
One of the most promising adjuncts for breast cancer (BC) screening is the ultrasound (US) radio-frequency (RF) time series, which, unlike competing methods, requires no supplementary equipment. This article proposes a machine learning (ML) method for the automated categorization of breast lesions (benign, probably benign, suspicious, or malignant) using features extracted from accumulated US RF time series. In this research, 220 samples of the aforementioned categories, recorded from 118 patients, were analyzed. The RFTSBU dataset was acquired with a SuperSonic Imagine Aixplorer® medical/research system fitted with a linear transducer. An expert radiologist manually selected regions of interest (ROIs) in B-mode images; 283 features were then extracted from each ROI, drawing on textural descriptors such as Gabor filters (GF), the gray-level co-occurrence matrix (GLCM), gray-level run-length matrix (GLRLM), gray-level size zone matrix (GLSZM), and gray-level dependence matrix (GLDM). Particle swarm optimization (PSO) subsequently narrowed the feature set to the 131 most effective features. Finally, the features were classified with an innovative multi-origin method of classification (MOMC). Employing 5-fold cross-validation, the study achieved accuracies of 98.57 ± 1.09%, 91.53 ± 0.89%, and 83.71 ± 1.30% for 2-, 3-, and 4-class classification, respectively, using MOMC-SVM and MOMC-ensemble classifiers. This research introduces an ML-based approach to differentiating diverse breast lesion types from in vivo US RF time series data. The findings underscore its efficacy in improving classification accuracy, promising significant strides in computer-aided diagnosis (CAD) for BC screening.
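For readers who want to experiment, the pipeline described here (textural features from ROIs, feature selection, SVM with 5-fold cross-validation) can be sketched in a few lines of Python. This is not the authors' code: `rois` and `labels` are assumed inputs, only GLCM features are shown, and univariate selection stands in for the paper's PSO step and MOMC classifier.

```python
# Minimal sketch of a texture-feature + SVM pipeline (illustrative, not the paper's code).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def glcm_features(roi):
    # Co-occurrence matrices at several offsets/angles, reduced to scalar texture statistics.
    glcm = graycomatrix(roi, distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

X = np.array([glcm_features(r) for r in rois])   # rois: list of 2D uint8 ROI arrays (assumed)
y = np.array(labels)                             # lesion categories (assumed)

# SelectKBest is a simple stand-in for the paper's PSO feature selection.
clf = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=20), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())   # 5-fold CV, as in the abstract
```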

Assessment of quantitative staging PET/computed tomography parameters using machine learning for early detection of progression in diffuse large B-cell lymphoma.

Aksu A, Us A, Küçüker KA, Solmaz Ş, Turgut B

PubMed paper · Jun 30, 2025
This study aimed to investigate the role of volumetric and dissemination parameters obtained from pretreatment 18F-fluorodeoxyglucose PET/computed tomography (18F-FDG PET/CT) in predicting progression/relapse in patients with diffuse large B-cell lymphoma (DLBCL) using machine learning algorithms. Patients diagnosed with DLBCL histopathologically, treated with rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone, and followed for at least 1 year were reviewed retrospectively. Quantitative parameters such as tumor volume [total metabolic tumor volume (tMTV)], tumor burden [total lesion glycolysis (tTLG)], and the longest distance between two tumor foci (Dmax) were obtained from PET images with a standardized uptake value threshold of 4.0. The MTV of the volume of interest with the highest volume was recorded as the metabolic bulk volume (MBV). By analyzing the patients' PET parameters and clinical information with machine learning algorithms, models predicting progression/relapse within 1 year were obtained. Of the 90 patients included, 16 had progression within 1 year. Significant differences were found in tMTV, tTLG, MBV, and Dmax values between patients with and without progression. The area under the curve (AUC) of the model built on clinical data alone was 0.701. A random forest model using PET parameters reached an AUC of 0.871, while a Naive Bayes model combining clinical data with PET parameters reached an AUC of 0.838. Using quantitative parameters derived from staging PET with machine learning algorithms may enable early detection of progression in patients with DLBCL, improving early risk stratification and guiding treatment decisions.
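A minimal sketch of the quantitative-PET workflow the abstract describes, under stated assumptions: `suv_volumes`, `vox_ml`, and `progressed_within_1y` are hypothetical inputs, and Dmax is computed here in voxel units between lesion centroids, whereas the study may use physical distances.

```python
# Illustrative sketch of tMTV/tTLG/MBV/Dmax extraction + random forest (not the study's code).
import numpy as np
from scipy.ndimage import label, center_of_mass
from scipy.spatial.distance import pdist
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def pet_features(suv, voxel_volume_ml):
    mask = suv >= 4.0                          # SUV threshold of 4.0, as in the abstract
    lesions, n = label(mask)
    tmtv = mask.sum() * voxel_volume_ml        # total metabolic tumor volume
    ttlg = suv[mask].sum() * voxel_volume_ml   # total lesion glycolysis
    vols = np.bincount(lesions.ravel())[1:] * voxel_volume_ml
    mbv = vols.max() if n else 0.0             # metabolic bulk volume: largest single lesion
    centroids = np.array(center_of_mass(mask, lesions, range(1, n + 1)))
    dmax = pdist(centroids).max() if n > 1 else 0.0   # longest centroid-to-centroid distance
    return [tmtv, ttlg, mbv, dmax]

X = np.array([pet_features(v, vox_ml) for v in suv_volumes])  # assumed inputs
rf = RandomForestClassifier(n_estimators=500, random_state=0)
print(cross_val_score(rf, X, progressed_within_1y, cv=5, scoring="roc_auc").mean())
```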

A Deep Learning-Based De-Artifact Diffusion Model for Removing Motion Artifacts in Knee MRI.

Li Y, Gong T, Zhou Q, Wang H, Yan X, Xi Y, Shi Z, Deng W, Shi F, Wang Y

PubMed paper · Jun 30, 2025
Motion artifacts are common in knee MRI and often necessitate rescanning, so their effective removal would be clinically useful. This study constructed a deep learning-based model to remove motion artifacts from knee MRI using real-world data. Design: retrospective. Model construction: 90 consecutive patients (1997 2D slices) whose knee MRI images with motion artifacts were paired with immediately rescanned artifact-free images serving as ground truth. Internal test dataset: 25 patients (795 slices) from another period; external test dataset: 39 patients (813 slices) from another hospital. Imaging: 3-T/1.5-T knee MRI with T1-weighted, T2-weighted, and proton density-weighted sequences. A supervised conditional diffusion model was constructed. Objective metrics (root mean square error [RMSE], peak signal-to-noise ratio [PSNR], structural similarity [SSIM]) and subjective ratings were used for image quality assessment and compared against three other algorithms (enhanced super-resolution [ESR], enhanced deep super-resolution, and ESR using a generative adversarial network). Diagnostic performance of the output images was compared with that of the rescanned images. Statistics: the kappa test, Pearson chi-square test, Friedman's rank-sum test, and the marginal homogeneity test, with p < 0.05 considered statistically significant. Subjective ratings showed significant improvement in the output images compared with the input, with no significant difference from the ground truth. The constructed method achieved the smallest RMSE (11.44 ± 5.47 in the validation cohort; 13.95 ± 4.32 in the external test cohort), the largest PSNR (27.61 ± 3.20 in the validation cohort; 25.64 ± 2.67 in the external test cohort), and the largest SSIM (0.97 ± 0.04 in the validation cohort; 0.94 ± 0.04 in the external test cohort) among the four algorithms. The output images achieved diagnostic capability comparable to the ground truth for multiple anatomical structures. The model is feasible and effective and outperformed several other algorithms for removing motion artifacts in knee MRI. Evidence level: 3. Technical efficacy: Stage 2.
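The three objective metrics reported above are standard and easy to reproduce. A small sketch using scikit-image follows; the study's own implementation details are not given in the abstract.

```python
# Hedged sketch of RMSE / PSNR / SSIM computation between a de-artifacted output
# image and its rescanned ground truth, using scikit-image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_metrics(output, ground_truth, data_range=255):
    rmse = np.sqrt(np.mean((output.astype(float) - ground_truth.astype(float)) ** 2))
    psnr = peak_signal_noise_ratio(ground_truth, output, data_range=data_range)
    ssim = structural_similarity(ground_truth, output, data_range=data_range)
    return rmse, psnr, ssim
```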

Derivation and validation of an artificial intelligence-based plaque burden safety cut-off for long-term acute coronary syndrome from coronary computed tomography angiography.

Bär S, Knuuti J, Saraste A, Klén R, Kero T, Nabeta T, Bax JJ, Danad I, Nurmohamed NS, Jukema RA, Knaapen P, Maaniitty T

PubMed paper · Jun 30, 2025
Artificial intelligence (AI) has enabled accurate and fast plaque quantification from coronary computed tomography angiography (CCTA). However, AI detects some coronary plaque in up to 97% of patients. To avoid overdiagnosis, a plaque burden safety cut-off for future coronary events is needed. Percent atheroma volume (PAV) was quantified with AI-guided quantitative computed tomography in a blinded fashion. The safety cut-off was derived in the Turku CCTA registry (Finland) and pre-defined as ≥90% sensitivity for acute coronary syndrome (ACS); external validation was performed in the Amsterdam CCTA registry (the Netherlands). In the derivation cohort, 100/2271 (4.4%) patients experienced ACS (median follow-up 6.9 years). A threshold of PAV ≥ 2.6% was derived, with 90.0% sensitivity and a negative predictive value (NPV) of 99.0%. In the validation cohort, 27/568 (4.8%) patients experienced ACS (median follow-up 6.7 years), with PAV ≥ 2.6% showing 92.6% sensitivity and 99.0% NPV for ACS. In the derivation cohort, 45.2% of patients had PAV < 2.6%, versus only 4.3% with PAV = 0% (no plaque) (P < 0.001) (validation cohort: 34.3% with PAV < 2.6% vs. 2.6% with PAV = 0%; P < 0.001). Patients with PAV ≥ 2.6% had higher adjusted ACS rates in both the derivation [hazard ratio (HR) 4.65, 95% confidence interval (CI) 2.33-9.28, P < 0.001] and validation cohorts (HR 7.31, 95% CI 1.62-33.08, P = 0.010). This study suggests that a PAV of up to 2.6%, as quantified by AI, is associated with low ACS risk in two independent patient cohorts. This cut-off may aid the clinical application of AI-guided CCTA analysis, which detects some plaque in up to 96-97% of patients.
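For illustration only, a sensitivity-driven derivation of such a cut-off could look like the sketch below, with `pav` and `acs` as hypothetical per-patient arrays; the study's actual statistical procedure is not described in the abstract.

```python
# Toy derivation of a safety cut-off pre-defined at >=90% sensitivity for ACS.
import numpy as np

def derive_safety_cutoff(pav, acs, target_sens=0.90):
    pav, acs = np.asarray(pav, float), np.asarray(acs, bool)
    best = None
    for t in np.unique(pav):            # candidate thresholds: observed PAV values, ascending
        pred_pos = pav >= t             # flagged "not safe" if PAV at or above threshold
        sens = (pred_pos & acs).sum() / acs.sum()
        if sens >= target_sens:
            best = t                    # keep the highest threshold still meeting sensitivity
    below = pav < best
    npv = (~acs[below]).mean()          # NPV among patients under the cut-off
    return best, npv
```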

Hybrid strategy of coronary atherosclerosis characterization with T1-weighted MRI and CT angiography to non-invasively predict periprocedural myocardial injury.

Matsumoto H, Higuchi S, Li D, Tanisawa H, Isodono K, Irie D, Ohya H, Kitamura R, Kaneko K, Nakazawa M, Suzuki K, Komori Y, Hondera T, Cadet S, Lee HL, Christodoulou AG, Slomka PJ, Dey D, Xie Y, Shinke T

PubMed paper · Jun 30, 2025
Coronary computed tomography angiography (CCTA) and magnetic resonance imaging (MRI) can predict periprocedural myocardial injury (PMI) after percutaneous coronary intervention (PCI). We aimed to investigate whether integrating MRI with CCTA, using the latest imaging and quantitative techniques, improves PMI prediction, and to explore a potential hybrid CCTA-MRI strategy. In this prospective, multi-centre study, coronary atherosclerosis T1-weighted characterization MRI was performed in patients scheduled for elective PCI of an atherosclerotic lesion detected on CCTA, without prior revascularization. PMI was defined as post-PCI troponin-T > 5× the upper reference limit. Using deep learning-enabled software, volumes of total plaque, calcified plaque, non-calcified plaque (NCP), and low-attenuation plaque (LAP; < 30 Hounsfield units) were quantified on CCTA. On non-contrast T1-weighted MRI, high-intensity plaque (HIP) volume was quantified as the voxels with signal intensity exceeding that of the myocardium, weighted by their respective intensities. Of the 132 lesions from 120 patients, 43 resulted in PMI. In the CCTA-only strategy, LAP volume (P = 0.012) and NCP volume (P = 0.016) were independently associated with PMI. When MRI was integrated with CCTA, LAP volume (P = 0.029) and HIP volume (P = 0.024) emerged as independent predictors. Integrating MRI with CCTA achieved a higher C-statistic than CCTA alone (0.880 vs. 0.738; P = 0.004). A hybrid CCTA-MRI strategy, reserving MRI for lesions at intermediate PMI risk on CCTA, maintained superior diagnostic accuracy over the CCTA-only strategy (0.803 vs. 0.705; P = 0.028). Integrating MRI with CCTA improves PMI prediction compared with CCTA alone.
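A hedged sketch of the model-comparison step, assuming lesion-level arrays `lap_vol`, `ncp_vol`, `hip_vol`, and a binary `pmi` outcome; the C-statistics here are in-sample, whereas the study presumably used appropriate validation and a formal test for comparing AUCs.

```python
# Compare a CCTA-only predictor set against a CCTA + MRI set via logistic models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X_ct = np.column_stack([lap_vol, ncp_vol])        # CCTA-only predictors (assumed arrays)
X_hybrid = np.column_stack([lap_vol, hip_vol])    # CCTA + MRI predictors

for name, X in [("CCTA only", X_ct), ("CCTA + MRI", X_hybrid)]:
    model = LogisticRegression(max_iter=1000).fit(X, pmi)
    auc = roc_auc_score(pmi, model.predict_proba(X)[:, 1])
    print(f"{name}: C-statistic {auc:.3f}")       # in-sample estimate, for illustration
```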

Enhancing weakly supervised data augmentation networks for thyroid nodule assessment using traditional and Doppler ultrasound images.

Keatmanee C, Songsaeng D, Klabwong S, Nakaguro Y, Kunapinun A, Ekpanyapong M, Dailey MN

PubMed paper · Jun 30, 2025
Thyroid ultrasound (US) is an essential tool for detecting and characterizing thyroid nodules. In this study, we propose an innovative approach to enhance thyroid nodule assessment by integrating Doppler US images with grayscale US images through weakly supervised data augmentation networks (WSDAN). Our method reduces background noise by replacing inefficient augmentation strategies, such as random cropping, with an advanced technique guided by bounding boxes derived from Doppler US images. This targeted augmentation significantly improves model performance in both classification and localization of thyroid nodules. The training dataset comprises 1288 paired grayscale and Doppler US images, with an additional 190 pairs used for three-fold cross-validation. To evaluate the model's efficacy, we tested it on a separate set of 190 grayscale US images. Compared to five state-of-the-art models and the original WSDAN, our Enhanced WSDAN model achieved superior performance. For classification, it reached an accuracy of 91%. For localization, it achieved Dice and Jaccard indices of 75% and 87%, respectively, demonstrating its potential as a valuable clinical tool.
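The core augmentation idea, replacing random cropping with crops guided by Doppler-derived bounding boxes, can be sketched as follows. This is an illustrative reading of the abstract, not the authors' implementation; the color-pixel heuristic and margin are assumptions.

```python
# Crop the grayscale US frame around the region where the paired Doppler frame
# shows color flow, instead of cropping at random.
import numpy as np

def doppler_guided_crop(gray, doppler_rgb, margin=16):
    # Colored (non-gray) pixels: RGB channels disagree beyond a small tolerance.
    color_mask = np.ptp(doppler_rgb.astype(int), axis=-1) > 20
    ys, xs = np.nonzero(color_mask)
    if len(ys) == 0:
        return gray                     # no flow signal: fall back to the full frame
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin, gray.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin, gray.shape[1])
    return gray[y0:y1, x0:x1]
```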

$\mu^2$Tokenizer: Differentiable Multi-Scale Multi-Modal Tokenizer for Radiology Report Generation

Siyou Li, Pengyao Qin, Huanan Wu, Dong Nie, Arun J. Thirunavukarasu, Juntao Yu, Le Zhang

arXiv preprint · Jun 30, 2025
Automated radiology report generation (RRG) aims to produce detailed textual reports from clinical imaging, such as computed tomography (CT) scans, to improve the accuracy and efficiency of diagnosis and the provision of management advice. RRG is complicated by two key challenges: (1) the inherent complexity of extracting relevant information from imaging data under resource constraints, and (2) the difficulty of objectively evaluating discrepancies between model-generated and expert-written reports. To address these challenges, we propose $\mu^2$LLM, a $\underline{\textbf{mu}}$ltiscale $\underline{\textbf{mu}}$ltimodal large language model for RRG tasks. The novel ${\mu}^2$Tokenizer, as an intermediate layer, integrates multi-modal features from the multiscale visual tokenizer and the text tokenizer, then enhances report generation quality through direct preference optimization (DPO), guided by GREEN-RedLlama. Experimental results on four large CT image-report medical datasets demonstrate that our method outperforms existing approaches, highlighting the potential of our fine-tuned $\mu^2$LLMs on limited data for RRG tasks. For prompt engineering, we additionally introduce a five-stage, LLM-driven pipeline that converts routine CT reports into paired visual question-answer triples and citation-linked reasoning narratives, creating a scalable, high-quality supervisory corpus for explainable multimodal radiology LLMs. All code, datasets, and models will be publicly available in our official repository. https://github.com/Siyou-Li/u2Tokenizer
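For readers unfamiliar with DPO, the preference objective used to tune report quality reduces to a single expression over policy and reference log-probabilities. A minimal PyTorch sketch follows; inputs are per-sequence log-probabilities, and `beta` follows a common default rather than the paper's setting.

```python
# Standard DPO loss: -log sigmoid(beta * (policy/reference log-ratio margin between
# the preferred and dispreferred report)).
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_lp, policy_rejected_lp,
             ref_chosen_lp, ref_rejected_lp, beta=0.1):
    # Log-ratio of policy vs. frozen reference for each completion.
    chosen_ratio = policy_chosen_lp - ref_chosen_lp
    rejected_ratio = policy_rejected_lp - ref_rejected_lp
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```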

Self-Supervised Multiview Xray Matching

Mohamad Dabboussi, Malo Huard, Yann Gousseau, Pietro Gori

arXiv preprint · Jun 30, 2025
Accurate interpretation of multi-view radiographs is crucial for diagnosing fractures, muscular injuries, and other anomalies. While significant advances have been made in AI-based analysis of single images, current methods often struggle to establish robust correspondences between different X-ray views, an essential capability for precise clinical evaluations. In this work, we present a novel self-supervised pipeline that eliminates the need for manual annotation by automatically generating a many-to-many correspondence matrix between synthetic X-ray views. This is achieved using digitally reconstructed radiographs (DRR), which are automatically derived from unannotated CT volumes. Our approach incorporates a transformer-based training phase to accurately predict correspondences across two or more X-ray views. Furthermore, we demonstrate that learning correspondences among synthetic X-ray views can be leveraged as a pretraining strategy to enhance automatic multi-view fracture detection on real data. Extensive evaluations on both synthetic and real X-ray datasets show that incorporating correspondences improves performance in multi-view fracture classification.
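Conceptually, the self-supervised target is a many-to-many correspondence matrix between patches of two DRR views. A toy sketch follows, with a hypothetical `project` helper that maps a 3D point to a patch index in each view; the paper's actual construction is not spelled out in the abstract.

```python
# Build a binary patch-correspondence matrix between two synthetic X-ray views:
# patch i in view A corresponds to patch j in view B when some 3D point from the
# CT volume projects into both.
import numpy as np

def correspondence_matrix(points_3d, project_a, project_b, n_patches):
    C = np.zeros((n_patches, n_patches), dtype=bool)
    for p in points_3d:
        i, j = project_a(p), project_b(p)   # patch index of p in each view (hypothetical)
        if i is not None and j is not None:
            C[i, j] = True                  # many-to-many: several points may share a patch
    return C
```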

Multimodal, Multi-Disease Medical Imaging Foundation Model (MerMED-FM)

Yang Zhou, Chrystie Wan Ning Quek, Jun Zhou, Yan Wang, Yang Bai, Yuhe Ke, Jie Yao, Laura Gutierrez, Zhen Ling Teo, Darren Shu Jeng Ting, Brian T. Soetikno, Christopher S. Nielsen, Tobias Elze, Zengxiang Li, Linh Le Dinh, Lionel Tim-Ee Cheng, Tran Nguyen Tuan Anh, Chee Leong Cheng, Tien Yin Wong, Nan Liu, Iain Beehuat Tan, Tony Kiat Hon Lim, Rick Siow Mong Goh, Yong Liu, Daniel Shu Wei Ting

arXiv preprint · Jun 30, 2025
Current artificial intelligence models for medical imaging are predominantly single-modality and single-disease. Attempts to create multimodal, multi-disease models have resulted in inconsistent clinical accuracy. Furthermore, training such models typically requires large, labour-intensive, well-labelled datasets. We developed MerMED-FM, a state-of-the-art multimodal, multi-specialty foundation model trained using self-supervised learning and a memory module. MerMED-FM was trained on 3.3 million medical images from over ten specialties and seven modalities, including computed tomography (CT), chest X-rays (CXR), ultrasound (US), pathology patches, color fundus photography (CFP), optical coherence tomography (OCT), and dermatology images. MerMED-FM was evaluated across multiple diseases and compared against existing foundation models. Strong performance was achieved across all modalities, with AUROCs of 0.988 (OCT), 0.982 (pathology), 0.951 (US), 0.943 (CT), 0.931 (skin), 0.894 (CFP), and 0.858 (CXR). MerMED-FM has the potential to be a highly adaptable, versatile, cross-specialty foundation model that enables robust medical imaging interpretation across diverse medical disciplines.
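The abstract does not detail the memory module; one common self-supervised pattern it may resemble is a fixed-size feature queue in the style of MoCo. The sketch below is purely illustrative and is not MerMED-FM's architecture.

```python
# A MoCo-style feature memory: a fixed-size queue of L2-normalized embeddings that
# supplies negatives (or retrieval targets) for self-supervised training.
import torch
import torch.nn.functional as F

class FeatureMemory:
    def __init__(self, dim=256, size=65536):
        self.queue = F.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, feats):
        # feats: (batch, dim), already L2-normalized; overwrite oldest entries.
        n = feats.shape[0]
        idx = (self.ptr + torch.arange(n)) % self.queue.shape[0]
        self.queue[idx] = feats
        self.ptr = int(idx[-1] + 1) % self.queue.shape[0]
```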