
Dual-Parallel Artificial Intelligence Framework for Breast Cancer Grading via High-Intensity Ultrasound and Biomarkers.

Parwekar P, Agrawal KK, Ali J, Gundagatti S, Rajpoot DS, Ahmed T, Vidyarthi A

pubmed paper · Oct 1 2025
Background: Accurate and noninvasive breast cancer grading and therapy monitoring remain critical challenges in oncology. Traditional methods often rely on invasive histopathological assessments or imaging-only techniques, which may not fully capture the molecular and morphological intricacies of tumor response. Method: This article presents a novel, noninvasive framework for breast cancer analysis and therapy monitoring that combines two parallel mechanisms: (1) a dual-stream convolutional neural network (CNN) processing high-intensity ultrasound images, and (2) a biomarker-aware CNN stream utilizing patient-specific breast cancer biomarkers, including carbohydrate antigen 15-3, carcinoembryonic antigen, and human epidermal growth factor receptor 2 levels. The imaging stream extracts spatial and morphological features, while the biomarker stream encodes quantitative molecular indicators, enabling a multimodal understanding of tumor characteristics. The outputs from both streams are fused to predict the cancer grade (G1-G3) with high reliability. Results: Experimental evaluation on a cohort of pre- and postchemotherapy patients demonstrated the effectiveness of the proposed approach, achieving an overall grading accuracy of 97.8%, with an area under the curve of 0.981 for malignancy classification. The model also enables quantitative post-therapy analysis, revealing an average tumor response improvement of 41.3% across the test set, as measured by predicted regression in grade and changes in biomarker-imaging correlation. Conclusions: This dual-parallel artificial intelligence strategy offers a promising noninvasive alternative to traditional histopathological and imaging-alone methods, supporting real-time cancer monitoring and personalized treatment evaluation. The integration of high-resolution imaging with biomolecular data significantly enhances diagnostic depth, paving the way for intelligent, patient-specific breast cancer management.
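A minimal sketch of the kind of two-stream fusion described above, assuming PyTorch; the layer sizes, class count, and biomarker ordering are illustrative placeholders, not the authors' architecture:

```python
# Minimal two-stream fusion sketch (hypothetical layer sizes, not the published model).
import torch
import torch.nn as nn

class DualStreamGrader(nn.Module):
    def __init__(self, num_grades=3, num_biomarkers=3):
        super().__init__()
        # Imaging stream: small CNN over a single-channel ultrasound frame.
        self.image_stream = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Biomarker stream: MLP over e.g. CA 15-3, CEA, and HER2 levels.
        self.biomarker_stream = nn.Sequential(
            nn.Linear(num_biomarkers, 32), nn.ReLU(),
            nn.Linear(32, 32), nn.ReLU(),
        )
        # Fusion head: concatenate both feature vectors and predict the grade.
        self.head = nn.Linear(32 + 32, num_grades)

    def forward(self, image, biomarkers):
        fused = torch.cat([self.image_stream(image), self.biomarker_stream(biomarkers)], dim=1)
        return self.head(fused)

# Example: one 128x128 ultrasound frame plus three serum biomarker values.
logits = DualStreamGrader()(torch.randn(1, 1, 128, 128), torch.randn(1, 3))
print(logits.shape)  # torch.Size([1, 3]) -> scores for grades G1-G3
```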

Deep Learning-Based CAD System for Enhanced Breast Lesion Classification and Grading Using RFTSDP Approach.

Ghehi EN, Fallah A, Rashidi S, Dastjerdi MM

pubmed paper · Oct 1 2025
Accurate detection of breast lesion type is crucial for optimizing treatment; however, due to the limited precision of current diagnostic methods, biopsies are often required. To address this limitation, we proposed radio frequency time series dynamic processing (RFTSDP) in 2020, which analyzes the dynamic response of tissue and the impact of scatterer displacement on RF echoes during controlled stimulations to enhance diagnostic information. We developed a vibration-generating device and collected ultrafast ultrasound data from 11 ex vivo breast tissue samples under different stimulations. Deep learning (DL) was used for automated feature extraction and lesion classification into 2, 3, and 5 categories. The performance of the convolutional neural network (CNN)-based RFTSDP method was compared with traditional machine learning techniques, which involved spectral and nonlinear feature extraction from RF time series, followed by a support vector machine (SVM). With 65 Hz vibration, the DL-based RFTSDP method achieved 99.53 ± 0.47% accuracy in classifying and grading breast lesions. CNN consistently outperformed SVM, particularly under vibratory stimulation. In 5-class classification, CNN reached 98.01% versus 95.64% for SVM, with the difference being statistically significant (P < .05). Furthermore, the CNN-based RFTSDP method showed a 28.67% improvement in classification accuracy compared to the non-stimulation condition and the analysis of focused raw data. We developed a DL-based CAD system capable of classifying and grading breast lesions. This study demonstrates that the proposed system not only enhances classification but also ensures increased stability and robustness compared to traditional methods.
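For context, the traditional comparator described above (spectral features extracted from RF time series followed by an SVM) can be sketched as follows; the feature set, data shapes, and labels are illustrative assumptions, not the authors' exact pipeline:

```python
# Sketch of a spectral-feature + SVM baseline for RF time-series classification.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def spectral_features(rf_series: np.ndarray) -> np.ndarray:
    """Simple per-sample features from the magnitude spectrum of each RF time series."""
    spectrum = np.abs(np.fft.rfft(rf_series, axis=-1))
    return np.stack([
        spectrum.mean(axis=-1),                    # average spectral magnitude
        spectrum.std(axis=-1),                     # spectral spread
        spectrum.argmax(axis=-1).astype(float),    # dominant frequency bin
    ], axis=-1)

rng = np.random.default_rng(0)
X = spectral_features(rng.normal(size=(100, 256)))   # 100 RF time series, 256 samples each
y = rng.integers(0, 5, size=100)                     # 5 lesion categories (placeholder labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))
```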

A roadmap for artificial intelligence in pain medicine: current status, opportunities, and requirements.

Adams MCB, Bowness JS, Nelson AM, Hurley RW, Narouze S

pubmed paper · Oct 1 2025
Artificial intelligence (AI) represents a transformative opportunity for pain medicine, offering potential solutions to longstanding challenges in pain assessment and management. This review synthesizes the current state of AI applications with a strategic framework for implementation, highlighting established adaptation pathways from adjacent medical fields. In acute pain, AI systems have achieved regulatory approval for ultrasound guidance in regional anesthesia and shown promise in automated pain scoring through facial expression analysis. For chronic pain management, machine learning algorithms have improved diagnostic accuracy for musculoskeletal conditions and enhanced treatment selection through predictive modeling. Successful integration requires interdisciplinary collaboration and physician coleadership throughout the development process, with specific adaptations needed for pain-specific challenges. This roadmap outlines a comprehensive methodological framework for AI in pain medicine, emphasizing four key phases: problem definition, algorithm development, validation, and implementation. Critical areas for future development include perioperative pain trajectory prediction, real-time procedural guidance, and personalized treatment optimization. Success ultimately depends on maintaining strong partnerships between clinicians, developers, and researchers while addressing ethical, regulatory, and educational considerations.

CardioBench: Do Echocardiography Foundation Models Generalize Beyond the Lab?

Darya Taratynova, Ahmed Aly, Numan Saeed, Mohammad Yaqub

arxiv preprint · Oct 1 2025
Foundation models (FMs) are reshaping medical imaging, yet their application in echocardiography remains limited. While several echocardiography-specific FMs have recently been introduced, no standardized benchmark exists to evaluate them. Echocardiography poses unique challenges, including noisy acquisitions, high frame redundancy, and limited public datasets. Most existing solutions evaluate on private data, restricting comparability. To address this, we introduce CardioBench, a comprehensive benchmark for echocardiography FMs. CardioBench unifies eight publicly available datasets into a standardized suite spanning four regression and five classification tasks, covering functional, structural, diagnostic, and view recognition endpoints. We evaluate several leading FMs, including cardiac-specific, biomedical, and general-purpose encoders, under consistent zero-shot, probing, and alignment protocols. Our results highlight complementary strengths across model families: temporal modeling is critical for functional regression, retrieval provides robustness under distribution shift, and domain-specific text encoders capture physiologically meaningful axes. General-purpose encoders transfer strongly and often close the gap with probing, but struggle with fine-grained distinctions like view classification and subtle pathology recognition. By releasing preprocessing, splits, and public evaluation pipelines, CardioBench establishes a reproducible reference point and offers actionable insights to guide the design of future echocardiography foundation models.
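A minimal sketch of a linear-probing protocol of the kind such benchmarks use, assuming scikit-learn and precomputed (here random) frozen-encoder embeddings; it is not the released CardioBench pipeline:

```python
# Linear probing on frozen foundation-model embeddings (illustrative data only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
train_emb, test_emb = rng.normal(size=(500, 768)), rng.normal(size=(200, 768))
train_y, test_y = rng.integers(0, 4, 500), rng.integers(0, 4, 200)  # e.g. 4 echo views

# The encoder stays frozen; only a linear classifier is fit on its features.
probe = LogisticRegression(max_iter=1000).fit(train_emb, train_y)
print(balanced_accuracy_score(test_y, probe.predict(test_emb)))
```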

Unsupervised Unfolded rPCA (U2-rPCA): Deep Interpretable Clutter Filtering for Ultrasound Microvascular Imaging

Huaying Li, Liansheng Wang, Yinran Chen

arxiv preprint · Oct 1 2025
High-sensitivity clutter filtering is a fundamental step in ultrasound microvascular imaging. Singular value decomposition (SVD) and robust principal component analysis (rPCA) are the main clutter filtering strategies. However, both strategies are limited in feature modeling and tissue-blood flow separation for high-quality microvascular imaging. Recently, deep learning-based clutter filtering has shown potential to separate tissue and blood flow signals more thoroughly. However, existing supervised filters face challenges of interpretability and the lack of in-vitro and in-vivo ground truths. While the interpretability issue can be addressed by algorithm deep unfolding, the lack of training ground truth remains unresolved. To this end, this paper proposes an unsupervised unfolded rPCA (U2-rPCA) method that preserves mathematical interpretability and requires no training labels. Specifically, U2-rPCA is unfolded from an iteratively reweighted least squares (IRLS) rPCA baseline with intrinsic low-rank and sparse regularization. A sparse-enhancement unit is added to the network to strengthen its capability to capture the sparse micro-flow signals. U2-rPCA behaves like an adaptive filter: it is trained on part of the image sequence and then applied to the following frames. Experimental validations on an in-silico dataset and public in-vivo datasets demonstrated that U2-rPCA outperforms the SVD-based method, the rPCA baseline, and another deep learning-based filter. In particular, the proposed method improved the contrast-to-noise ratio (CNR) of the power Doppler image by 2 dB to 10 dB compared with the other methods. Furthermore, the effectiveness of the building modules of U2-rPCA was validated through ablation studies.
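For reference, the conventional SVD clutter filter that U2-rPCA is compared against can be sketched as follows; the Casorati-matrix formulation is standard, but the data shapes and tissue-rank cutoff are illustrative assumptions:

```python
# SVD clutter filtering on a Casorati matrix (baseline method; illustrative threshold).
import numpy as np

def svd_clutter_filter(frames: np.ndarray, n_tissue: int = 5) -> np.ndarray:
    """frames: (n_frames, H, W) beamformed data -> micro-flow estimate."""
    n_frames, H, W = frames.shape
    casorati = frames.reshape(n_frames, H * W).T          # (pixels, frames)
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    s_flow = s.copy()
    s_flow[:n_tissue] = 0.0                               # drop low-rank tissue clutter
    flow = (U * s_flow) @ Vt
    return flow.T.reshape(n_frames, H, W)

flow = svd_clutter_filter(np.random.randn(100, 64, 64))
power_doppler = (np.abs(flow) ** 2).mean(axis=0)          # power Doppler image
print(power_doppler.shape)  # (64, 64)
```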

Artificial intelligence in regional anesthesia.

Harris J, Kamming D, Bowness JS

pubmed paper · Oct 1 2025
Artificial intelligence (AI) is having an increasing impact on healthcare. In ultrasound-guided regional anesthesia (UGRA), commercially available devices exist that augment traditional grayscale ultrasound imaging by highlighting key sono-anatomical structures in real-time. We review the latest evidence supporting this emerging technology and consider the opportunities and challenges to its widespread deployment. The existing literature is limited and heterogeneous, which impedes full appraisal of systems, comparison between devices, and informed adoption. AI-based devices promise to improve clinical practice and training in UGRA, though their impact on patient outcomes and provision of UGRA techniques is unclear at this early stage. Calls for standardization across both UGRA and AI are increasing, with greater clinical leadership required. Emerging AI applications in UGRA warrant further study due to an opaque and fragmented evidence base. Robust and consistent evaluation and reporting of algorithm performance, in a representative clinical context, will expedite discovery and appropriate deployment of AI in UGRA. A clinician-focused approach to the development, evaluation, and implementation of this exciting branch of AI has huge potential to advance the human art of regional anesthesia.

Dolphin v1.0 Technical Report

Taohan Weng, Chi Zhang, Chaoran Yan, Siya Liu, Xiaoyang Liu, Yalun Wu, Boyang Wang, Boyan Wang, Jiren Ren, Kaiwen Yan, Jinze Yu, Kaibing Hu, Henan Liu, Haoyun Zheng, Anjie Le, Hongcheng Guo

arxiv preprint · Sep 30 2025
Ultrasound is crucial in modern medicine but faces challenges like operator dependence, image noise, and real-time scanning, hindering AI integration. While large multimodal models excel in other medical imaging areas, they struggle with ultrasound's complexities. To address this, we introduce Dolphin v1.0 (V1) and its reasoning-augmented version, Dolphin R1, the first large-scale multimodal ultrasound foundation models unifying diverse clinical tasks in a single vision-language framework. To tackle ultrasound variability and noise, we curated a 2-million-scale multimodal dataset combining textbook knowledge, public data, synthetic samples, and general corpora. This ensures robust perception, generalization, and clinical adaptability. The Dolphin series employs a three-stage training strategy: domain-specialized pretraining, instruction-driven alignment, and reinforcement-based refinement. Dolphin v1.0 delivers reliable performance in classification, detection, regression, and report generation. Dolphin R1 enhances diagnostic inference, reasoning transparency, and interpretability through reinforcement learning with ultrasound-specific rewards. Evaluated on U2-Bench across eight ultrasound tasks, Dolphin R1 achieves a U2-score of 0.5835, roughly twice the second-best model's 0.2968, setting a new state of the art. Dolphin v1.0 also performs competitively, validating the unified framework. Comparisons show that reasoning-enhanced training significantly improves diagnostic accuracy, consistency, and interpretability, highlighting its importance for high-stakes medical AI.

Leveraging ChatGPT for Report Error Audit: An Accuracy-Driven and Cost-Efficient Solution for Ophthalmic Imaging Reports.

Xu Y, Kang D, Shi D, Tham YC, Grzybowski A, Jin K

pubmed paper · Sep 30 2025
Accurate ophthalmic imaging reports, including fundus fluorescein angiography (FFA) and ocular B-scan ultrasound, are essential for effective clinical decision-making. The current process, involving drafting by residents followed by review by ophthalmic technicians and ophthalmologists, is time-consuming and prone to errors. This study evaluates the effectiveness of ChatGPT-4o in auditing errors in FFA and ocular B-scan reports and assesses its potential to reduce time and costs within the reporting workflow. A preliminary set of 100 FFA and 80 ocular B-scan reports drafted by residents was analyzed using GPT-4o to identify errors in eye laterality (left vs. right) and incorrect anatomical descriptions. The accuracy of GPT-4o was compared to that of retinal specialists, general ophthalmologists, and ophthalmic technicians. Additionally, a cost-effectiveness analysis was conducted to estimate time and cost savings from integrating GPT-4o into the reporting process. A pilot real-world validation with 20 erroneous reports was also performed, comparing GPT-4o with human reviewers. GPT-4o demonstrated a detection rate of 79.0% (158 of 200; 95% CI 73.0-85.0) across all examinations, which was comparable to the average detection performance of general ophthalmologists (78.0% [155 of 200; 95% CI 72.0-83.0]; P ≥ 0.09). Integration of GPT-4o reduced the average report review time by 86%, completing 180 ophthalmic reports in approximately 0.27 h compared to 2.17-3.19 h by human ophthalmologists. Additionally, compared to human reviewers, GPT-4o lowered the cost from $0.21 to $0.03 per report (a saving of $0.18). In the real-world evaluation, GPT-4o detected 18 of 20 errors (90%) with no false positives, compared to 95-100% detection by human reviewers. GPT-4o effectively enhances the accuracy of ophthalmic imaging reports by identifying and correcting common errors. Its implementation could alleviate the workload of ophthalmologists, streamline the reporting process, and reduce associated costs, thereby improving overall clinical workflow and patient outcomes.
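A back-of-the-envelope check of the figures quoted above (detection rate, per-report cost saving, and GPT-4o throughput), using only numbers reported in the abstract:

```python
# Arithmetic check of the reported accuracy and cost figures.
errors_found, errors_total = 158, 200
print(f"detection rate: {errors_found / errors_total:.1%}")          # 79.0%

cost_human, cost_gpt = 0.21, 0.03                                    # USD per report
print(f"saving per report: ${cost_human - cost_gpt:.2f}")            # $0.18

reports, gpt_hours = 180, 0.27
print(f"GPT-4o throughput: {reports / gpt_hours:.0f} reports/hour")  # ~667 reports/hour
```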

Automating prostate volume acquisition using abdominal ultrasound scans for prostate-specific antigen density calculations.

Bennett RD, Barrett T, Sushentsev N, Sanmugalingam N, Lee KL, Gnanapragasam VJ, Tse ZTH

pubmed paper · Sep 30 2025
Proposed methods for prostate cancer screening are currently prohibitively expensive (due to the high costs of imaging equipment such as magnetic resonance imaging and traditional ultrasound systems), have inadequate detection rates, require highly trained specialists, and/or are invasive, resulting in patient discomfort. These limitations make population-wide screening for prostate cancer challenging. Machine learning techniques applied to abdominal ultrasound scanning may help alleviate some of these disadvantages. Abdominal ultrasound scans are comparatively low cost and cause minimal patient discomfort, and machine learning can be applied to mitigate the high operator-dependent variability of ultrasound scanning. In this study, a state-of-the-art machine learning model was compared to an expert radiologist and trainee radiology registrars of varying experience when estimating prostate volume from abdominal ultrasound images, a crucial step in detecting prostate cancer using prostate-specific antigen density. The machine learning model calculated prostatic volume by marking out the dimensions of the prolate ellipsoid formula from two orthogonal images of the prostate acquired with abdominal ultrasound scans (which could be conducted by operators with minimal experience in a primary care setting). While both the algorithm and the registrars showed high correlation with the expert ([Formula: see text]), the model outperformed the trainees in both accuracy (lowest average volume error of [Formula: see text]) and consistency (lowest IQR of [Formula: see text] and lowest average volume standard deviation of [Formula: see text]). The results are promising for the future development of an automated prostate cancer screening workflow using machine learning and abdominal ultrasound scans.
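A minimal sketch of the prolate-ellipsoid volume and PSA-density calculation that such a pipeline automates; the measurements and PSA value below are illustrative, not study data:

```python
# Prolate-ellipsoid prostate volume and PSA density (illustrative inputs).
import math

def prolate_ellipsoid_volume_ml(length_cm: float, width_cm: float, height_cm: float) -> float:
    """Standard prolate ellipsoid approximation: V = pi/6 * L * W * H (1 cm^3 = 1 mL)."""
    return math.pi / 6.0 * length_cm * width_cm * height_cm

def psa_density(psa_ng_ml: float, volume_ml: float) -> float:
    """PSA density = serum PSA / prostate volume."""
    return psa_ng_ml / volume_ml

volume = prolate_ellipsoid_volume_ml(4.2, 4.8, 3.9)   # dimensions (cm) from two orthogonal views
print(f"volume: {volume:.1f} mL, PSAD: {psa_density(6.0, volume):.3f} ng/mL/cc")
```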

