
Sandhu JK, Sharma C, Kaur A, Pandey SK, Sinha A, Shreyas J

PubMed paper · Jul 18 2025
Advancements in diagnostic technology are required to improve patient outcomes and facilitate early diagnosis, as breast cancer is a substantial global health concern. This research presents an Ensemble Deep Learning-based Clinical Decision Support System (EDL-CDSS) that enables precise and expeditious diagnosis of breast cancer. The proposed EDL-CDSS combines numerous deep learning (DL) models into an ensemble that amplifies the strengths and offsets the weaknesses of the individual techniques. By incorporating the Kernel Extreme Learning Machine (KELM), Deep Belief Network (DBN), and other DL architectures, the ensemble improves its capacity to extract intricate patterns and features from medical imaging data. Comprehensive testing was conducted across various datasets to assess the system's efficacy against individual DL models and traditional diagnostic methods. The evaluation prioritizes precision, sensitivity, specificity, F1-score, and overall accuracy, with the aim of mitigating false positives and false negatives. The experiments demonstrate a remarkable accuracy of 96.14%, surpassing prior advanced methodologies.
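As a rough illustration of the ensemble step, the sketch below fuses class-probability outputs from several hypothetical base models by weighted soft voting. The paper does not specify its combination rule, so the weighting scheme and the toy probabilities here are assumptions, not the authors' method.

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Average class probabilities from several models (soft voting)."""
    probs = np.stack(prob_list)              # (n_models, n_samples, n_classes)
    w = np.ones(len(prob_list)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    fused = np.tensordot(w, probs, axes=1)   # weighted average over models
    return fused.argmax(axis=1), fused

# toy example: three hypothetical base models, 2 samples, 2 classes
p1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.8, 0.2], [0.3, 0.7]])
p3 = np.array([[0.7, 0.3], [0.6, 0.4]])
labels, fused = soft_vote([p1, p2, p3])
print(labels, fused)
```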

Li B, Powell D, Lee R

PubMed paper · Jul 18 2025
Artificial intelligence (AI) is already having a significant impact on healthcare. For example, AI-guided imaging can improve the diagnosis and treatment of vascular diseases, which affect over 200 million people globally. Recently, Chiu and colleagues (2024) developed an AI algorithm that supports nurses with no ultrasound training in diagnosing abdominal aortic aneurysms (AAA) with accuracy similar to that of ultrasound-trained physicians. This technology can therefore improve AAA screening; however, achieving clinical impact with new AI technologies requires careful consideration of commercialization strategies, including funding, compliance with safety and regulatory frameworks, health technology assessment, regulatory approval, reimbursement, and clinical guideline integration.

Safarian A, Mirshahvalad SA, Farbod A, Jung T, Nasrollahi H, Schweighofer-Zwink G, Rendl G, Pirich C, Vali R, Beheshti M

PubMed paper · Jul 18 2025
The integration of artificial intelligence (AI) into [¹⁸F]FDG PET/CT imaging continues to expand, offering new opportunities for more precise, consistent, and personalized oncologic evaluations. Building on the foundation established in Part I, this second part explores AI-driven innovations across a broader range of malignancies, including hematological, genitourinary, melanoma, and central nervous system tumors, as well as applications of AI in pediatric oncology. Radiomics and machine learning algorithms are being explored for their ability to enhance diagnostic accuracy, reduce interobserver variability, and inform complex clinical decision-making, such as identifying patients with refractory lymphoma, assessing pseudoprogression in melanoma, or predicting brain metastases in extracranial malignancies. Additionally, AI-assisted lesion segmentation, quantitative feature extraction, and heterogeneity analysis are contributing to improved prediction of treatment response and long-term survival outcomes. Despite encouraging results, variability in imaging protocols, segmentation methods, and validation strategies across studies continues to challenge reproducibility and remains a barrier to clinical translation. This review evaluates recent advancements in AI and their current clinical applications, and it emphasizes the need for robust standardization and prospective validation to ensure the reproducibility and generalizability of AI tools in PET imaging and clinical practice.
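To make the radiomics side concrete, here is a minimal sketch of one first-order heterogeneity feature (intensity entropy within a lesion mask). It is illustrative only: the NumPy array stands in for PET voxel data, and no specific pipeline from the reviewed studies is reproduced.

```python
import numpy as np

def first_order_entropy(image, mask, bins=64):
    """Shannon entropy of voxel intensities inside a lesion mask --
    one simple radiomic measure of lesion heterogeneity."""
    vals = image[mask > 0]
    hist, _ = np.histogram(vals, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# toy 2D "PET slice" with a hypothetical square lesion mask
rng = np.random.default_rng(0)
img = rng.normal(5.0, 1.5, size=(64, 64))   # stand-in for SUV values
msk = np.zeros_like(img)
msk[20:40, 20:40] = 1
print(round(first_order_entropy(img, msk), 3))
```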

Shaikh K, Lozano PR, Evangelou S, Wu EH, Nurmohamed NS, Madan N, Verghese D, Shekar C, Waheed A, Siddiqui S, Kolossváry M, Almeida S, Coombes T, Suchá D, Trivedi SJ, Ihdayhid AR

PubMed paper · Jul 18 2025
Advancements in cardiac computed tomography angiography (CCTA) have enabled the extraction of physiological data from an anatomy-based imaging modality. This review outlines the key methodologies for deriving fractional flow reserve (FFR) from CCTA, with a focus on two primary methods: 1) computational fluid dynamics-based FFR (CT-FFR) and 2) plaque-derived ischemia assessment using artificial intelligence and quantitative plaque metrics. These techniques have expanded the role of CCTA beyond anatomical assessment, allowing for concurrent evaluation of coronary physiology without the need for invasive testing. This review provides an overview of the principles, workflows, and limitations of each technique and aims to inform readers about the current state and future direction of non-invasive coronary physiology assessment.
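For reference, FFR itself is a simple pressure ratio; CT-FFR's contribution is estimating the distal coronary pressure from computational fluid dynamics rather than an invasive pressure wire. A minimal sketch, assuming mean pressures in mmHg under hyperemia and the conventional 0.80 ischemia threshold:

```python
def fractional_flow_reserve(p_distal_mmHg, p_aortic_mmHg):
    """FFR = mean distal coronary pressure / mean aortic pressure
    during hyperemia; values <= 0.80 are conventionally treated as
    hemodynamically significant."""
    return p_distal_mmHg / p_aortic_mmHg

ffr = fractional_flow_reserve(72.0, 94.0)
print(round(ffr, 2), "significant" if ffr <= 0.80 else "not significant")
```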

Shen L, Yuan Y, Liu J, Cheng Y, Liao Q, Shi R, Xiong T, Xu H, Wang L, Yang Z

PubMed paper · Jul 18 2025
To evaluate the image quality and diagnostic performance of prostate biparametric MRI (bi-MRI) with deep learning reconstruction (DLR). This prospective study included 61 adult male urological patients undergoing prostate MRI with standard-of-care (SOC) and accelerated (FAST) protocols. Sequences included T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and apparent diffusion coefficient (ADC) maps. DLR images were generated from the FAST datasets. Three groups (SOC, FAST, DLR) were compared using (1) a five-point Likert scale; (2) signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR); (3) lesion slope profiles; and (4) dorsal capsule edge rise distance (ERD). PI-RADS scores were assigned to dominant lesions. ADC values were measured in histopathologically confirmed cases. Diagnostic performance was analyzed via receiver operating characteristic (ROC) curves (accuracy/sensitivity/specificity). Statistical tests included the Friedman test, one-way ANOVA with post hoc analyses, and the DeLong test for ROC comparisons (P < 0.05). The FAST protocol reduced acquisition time by nearly half compared to the SOC protocol. Compared with T2WI-FAST, DLR significantly improved SNR, CNR, slope profile, and ERD (P < 0.05). Similarly, DLR significantly enhanced SNR, CNR, and image sharpness compared with DWI-FAST (P < 0.05). No significant differences were observed in PI-RADS scores or ADC values between groups (P > 0.05). The areas under the ROC curves, sensitivity, and specificity of ADC values for distinguishing benign from malignant lesions remained consistent (P > 0.05). DLR enhances image quality in fast prostate bi-MRI while preserving PI-RADS classification accuracy and ADC diagnostic performance.
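As context for the SNR/CNR comparisons, below is a minimal sketch of one common ROI-based definition of both metrics; the study's exact measurement protocol may differ, and the ROI values here are synthetic.

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """One common definition: mean signal over SD of background noise."""
    return signal_roi.mean() / noise_roi.std(ddof=1)

def cnr(roi_a, roi_b, noise_roi):
    """Absolute contrast between two tissues over the noise SD."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std(ddof=1)

# hypothetical ROI pixel samples drawn for illustration
rng = np.random.default_rng(0)
lesion = rng.normal(200, 5, 500)   # stand-in lesion ROI
gland  = rng.normal(140, 5, 500)   # stand-in peripheral-zone ROI
noise  = rng.normal(0, 4, 500)     # stand-in background-noise ROI
print(round(snr(lesion, noise), 1), round(cnr(lesion, gland, noise), 1))
```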

Luo X, Wang B, Shi Q, Wang Z, Lai H, Liu H, Qin Y, Chen F, Song X, Ge L, Zhang L, Bian Z, Chen Y

PubMed paper · Jul 18 2025
This study aimed to systematically map the development methods, scope, and limitations of existing artificial intelligence (AI) reporting guidelines in medicine and to explore their applicability to generative AI (GAI) tools, such as large language models (LLMs). We conducted a scoping review reported in adherence to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR). Five information sources were searched, including MEDLINE (via PubMed), the EQUATOR Network, CNKI, FAIRsharing, and Google Scholar, from inception to December 31, 2024. Two reviewers independently screened records and extracted data using a predefined Excel template. Data included guideline characteristics (e.g., development methods, target audience, AI domain), adherence to EQUATOR Network recommendations, and consensus methodologies. Discrepancies were resolved by a third reviewer. In total, 68 AI reporting guidelines were included; 48.5% focused on general AI, while only 7.4% addressed GAI/LLMs. Methodological rigor was limited: 39.7% described their development processes, 42.6% involved multidisciplinary experts, and 33.8% followed EQUATOR recommendations. Significant overlap existed, particularly in medical imaging (20.6% of guidelines). GAI-specific guidelines (14.7%) lacked comprehensive coverage and methodological transparency. Existing AI reporting guidelines in medicine thus show suboptimal methodological rigor, redundancy, and insufficient coverage of GAI applications. Future and updated guidelines should prioritize standardized development processes, multidisciplinary collaboration, and an expanded focus on emerging AI technologies such as LLMs.

Praneeth Namburi, Roger Pallarès-López, Jessica Rosendorf, Duarte Folgado, Brian W. Anthony

arXiv preprint · Jul 18 2025
Ultrasound technology enables safe, non-invasive imaging of dynamic tissue behavior, making it a valuable tool in medicine, biomechanics, and sports science. However, accurately tracking tissue motion in B-mode ultrasound remains challenging due to speckle noise, low edge contrast, and out-of-plane movement. These challenges complicate the task of tracking anatomical landmarks over time, which is essential for quantifying tissue dynamics in many clinical and research applications. This manuscript introduces DUSTrack (Deep learning and optical flow-based toolkit for UltraSound Tracking), a semi-automated framework for tracking arbitrary points in B-mode ultrasound videos. We combine deep learning with optical flow to deliver high-quality and robust tracking across diverse anatomical structures and motion patterns. The toolkit includes a graphical user interface that streamlines the generation of high-quality training data and supports iterative model refinement. It also implements a novel optical-flow-based filtering technique that reduces high-frequency frame-to-frame noise while preserving rapid tissue motion. DUSTrack demonstrates superior accuracy compared to contemporary zero-shot point trackers and performs on par with specialized methods, establishing its potential as a general and foundational tool for clinical and biomechanical research. We demonstrate DUSTrack's versatility through three use cases: cardiac wall motion tracking in echocardiograms, muscle deformation analysis during reaching tasks, and fascicle tracking during ankle plantarflexion. As an open-source solution, DUSTrack offers a powerful, flexible framework for point tracking to quantify tissue motion from ultrasound videos. DUSTrack is available at https://github.com/praneethnamburi/DUSTrack.
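The optical-flow half of such a pipeline can be sketched with OpenCV's pyramidal Lucas-Kanade tracker. This illustrates only classical point propagation between frames, not DUSTrack's combined deep-learning approach, and the two "frames" below are synthetic stand-ins for B-mode images.

```python
import numpy as np
import cv2  # pip install opencv-python

def track_points(prev_frame, next_frame, pts):
    """Propagate (N, 2) landmark coordinates one frame forward with
    pyramidal Lucas-Kanade optical flow."""
    p0 = pts.astype(np.float32).reshape(-1, 1, 2)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_frame, next_frame, p0, None, winSize=(21, 21), maxLevel=3)
    return p1.reshape(-1, 2), status.ravel().astype(bool)

# toy demo: a bright blob shifted 3 px between two synthetic frames
f0 = np.zeros((96, 96), np.uint8); cv2.circle(f0, (40, 40), 8, 255, -1)
f1 = np.zeros((96, 96), np.uint8); cv2.circle(f1, (43, 40), 8, 255, -1)
new_pts, ok = track_points(f0, f1, np.array([[40.0, 40.0]]))
print(new_pts[ok])  # approximately [[43., 40.]]
```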

Shravan Venkatraman, Pavan Kumar S, Rakesh Raj Madavan, Chandrakala S

arXiv preprint · Jul 18 2025
Accurate classification of computed tomography (CT) images is essential for diagnosis and treatment planning, but existing methods often struggle with the subtle and spatially diverse nature of pathological features. Current approaches typically process images uniformly, limiting their ability to detect localized abnormalities that require focused analysis. We introduce UGPL, an uncertainty-guided progressive learning framework that performs a global-to-local analysis by first identifying regions of diagnostic ambiguity and then conducting detailed examination of these critical areas. Our approach employs evidential deep learning to quantify predictive uncertainty, guiding the extraction of informative patches through a non-maximum suppression mechanism that maintains spatial diversity. This progressive refinement strategy, combined with an adaptive fusion mechanism, enables UGPL to integrate both contextual information and fine-grained details. Experiments across three CT datasets demonstrate that UGPL consistently outperforms state-of-the-art methods, achieving improvements of 3.29%, 2.46%, and 8.08% in accuracy for kidney abnormality, lung cancer, and COVID-19 detection, respectively. Our analysis shows that the uncertainty-guided component provides substantial benefits, with performance dramatically increasing when the full progressive learning pipeline is implemented. Our code is available at: https://github.com/shravan-18/UGPL
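A hypothetical reading of the uncertainty-guided patch-extraction step: greedily take the most uncertain location, then suppress its neighbourhood so subsequent picks stay spatially diverse. The evidential uncertainty estimation itself is not shown; the map below is random, and the patch count and suppression radius are assumed parameters.

```python
import numpy as np

def select_patches(uncertainty, n_patches=4, min_dist=32):
    """Greedy non-maximum suppression over an uncertainty map:
    pick the peak, blank out a window around it, repeat."""
    u = uncertainty.astype(float).copy()
    centers = []
    for _ in range(n_patches):
        y, x = np.unravel_index(np.argmax(u), u.shape)
        centers.append((int(y), int(x)))
        u[max(0, y - min_dist):y + min_dist,
          max(0, x - min_dist):x + min_dist] = -np.inf
    return centers

rng = np.random.default_rng(1)
print(select_patches(rng.random((128, 128))))  # spatially diverse centers
```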

Šimon Kubov, Simon Klíčník, Jakub Dandár, Zdeněk Straka, Karolína Kvaková, Daniel Kvak

arXiv preprint · Jul 18 2025
Scoliosis affects roughly 2 to 4 percent of adolescents, and treatment decisions depend on precise Cobb angle measurement. Manual assessment is time-consuming and subject to inter-observer variation. We conducted a retrospective, multi-centre evaluation of a fully automated deep learning software (Carebot AI Bones, Spine Measurement functionality; Carebot s.r.o.) on 103 standing anteroposterior whole-spine radiographs collected from ten hospitals. Two musculoskeletal radiologists independently measured each study and served as reference readers. Agreement between the AI and each radiologist was assessed with Bland–Altman analysis, mean absolute error (MAE), root mean squared error (RMSE), Pearson correlation coefficient, and Cohen's kappa for four-grade severity classification. Against Radiologist 1, the AI achieved an MAE of 3.89° (RMSE 4.77°) with a bias of 0.70° and limits of agreement from −8.59° to +9.99°. Against Radiologist 2, the AI achieved an MAE of 3.90° (RMSE 5.68°) with a bias of 2.14° and limits from −8.23° to +12.50°. Pearson correlations were r = 0.906 and r = 0.880 (inter-reader r = 0.928), while Cohen's kappa for severity grading reached 0.51 and 0.64 (inter-reader κ = 0.59). These results demonstrate that the proposed software reproduces expert-level Cobb angle measurements and categorical grading across multiple centres, suggesting its utility for streamlining scoliosis reporting and triage in clinical workflows.
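The reported agreement statistics are straightforward to reproduce. A minimal sketch with hypothetical angle measurements; the bias ± 1.96 × SD limits correspond to the Bland–Altman analysis above:

```python
import numpy as np

def agreement_stats(ai_deg, reader_deg):
    """Bias, 95% Bland-Altman limits of agreement, MAE, and RMSE
    between AI and reference Cobb angles (degrees)."""
    ai, ref = np.asarray(ai_deg, float), np.asarray(reader_deg, float)
    diff = ai - ref
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return {"bias": bias,
            "LoA": (bias - half_width, bias + half_width),
            "MAE": np.abs(diff).mean(),
            "RMSE": np.sqrt((diff ** 2).mean())}

# toy example with hypothetical AI and reader measurements
print(agreement_stats([12.1, 25.3, 41.0], [11.0, 27.0, 38.5]))
```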

Ningyong Wu, Jinzhi Wang, Wenhong Zhao, Chenzhan Yu, Zhigang Xiu, Duwei Dai

arXiv preprint · Jul 18 2025
The growing volume of medical imaging data has increased the need for automated diagnostic tools, especially for musculoskeletal injuries like rib fractures, commonly detected via CT scans. Manual interpretation is time-consuming and error-prone. We propose OrthoInsight, a multi-modal deep learning framework for rib fracture diagnosis and report generation. It integrates a YOLOv9 model for fracture detection, a medical knowledge graph for retrieving clinical context, and a fine-tuned LLaVA language model for generating diagnostic reports. OrthoInsight combines visual features from CT images with expert textual data to deliver clinically useful outputs. Evaluated on 28,675 annotated CT images and expert reports, it achieves high performance across Diagnostic Accuracy, Content Completeness, Logical Coherence, and Clinical Guidance Value, with an average score of 4.28, outperforming models like GPT-4 and Claude-3. This study demonstrates the potential of multi-modal learning in transforming medical image analysis and providing effective support for radiologists.
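A high-level orchestration sketch of such a detect-retrieve-generate pipeline, with all three stages stubbed out as hypothetical placeholders; the real system uses YOLOv9, a medical knowledge graph, and a fine-tuned LLaVA model, none of which are invoked here.

```python
def detect_fractures(ct_image):
    """Stand-in for the YOLOv9 detector: (label, box, score) triples."""
    return [("rib_fracture", (112, 80, 160, 130), 0.91)]

def retrieve_context(label):
    """Stand-in for the knowledge-graph lookup keyed by finding type."""
    kg = {"rib_fracture": "Acute rib fracture; assess for pneumothorax."}
    return kg.get(label, "")

def generate_report(findings, context):
    """Stand-in for the fine-tuned LLaVA report generator."""
    lines = [f"- {lab} (conf {s:.2f}) at {box}" for lab, box, s in findings]
    return "Findings:\n" + "\n".join(lines) + f"\nContext: {context}"

findings = detect_fractures(ct_image=None)  # a real image would go here
print(generate_report(findings, retrieve_context(findings[0][0])))
```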