[Pulmonary vascular interventions: innovating through adaptation and advancing through differentiation].

Li J, Wan J

PubMed · May 12, 2025
Pulmonary vascular intervention technology, with its minimally invasive and precise advantages, has been a groundbreaking advance in the treatment of pulmonary vascular diseases. Techniques such as balloon pulmonary angioplasty (BPA), pulmonary artery stenting, and percutaneous pulmonary artery denervation (PADN) have significantly improved the prognoses of conditions such as chronic thromboembolic pulmonary hypertension (CTEPH), pulmonary artery stenosis, and pulmonary arterial hypertension (PAH). Although based on percutaneous coronary intervention (PCI) techniques such as guidewire manipulation and balloon dilatation, pulmonary vascular interventions require specific modifications to address the unique characteristics of the pulmonary circulation (low pressure, thin-walled vessels, and complex branching) and to mitigate the risks of perforation and thrombosis. Future directions include the development of dedicated instruments, multi-modality imaging guidance, artificial intelligence-assisted procedures, and molecular interventional therapies. These innovations aim to establish an independent theoretical framework for pulmonary vascular interventions, facilitating their transition from "adjuvant therapies" to "core treatments" in clinical practice.

Evaluating the reference accuracy of large language models in radiology: a comparative study across subspecialties.

Güneş YC, Cesur T, Çamur E

PubMed · May 12, 2025
This study aimed to compare six large language models (LLMs) [Chat Generative Pre-trained Transformer (ChatGPT) o1-preview, ChatGPT-4o, ChatGPT-4o with canvas, Google Gemini 1.5 Pro, Claude 3.5 Sonnet, and Claude 3 Opus] in generating radiology references, assessing accuracy, fabrication, and bibliographic completeness. In this cross-sectional observational study, 120 open-ended questions were administered across eight radiology subspecialties (neuroradiology, abdominal, musculoskeletal, thoracic, pediatric, cardiac, head and neck, and interventional radiology), with 15 questions per subspecialty. Each question prompted the LLMs to provide responses containing four references with in-text citations and complete bibliographic details (authors, title, journal, publication year/month, volume, issue, page numbers, and PubMed Identifier). References were verified using Medline, Google Scholar, the Directory of Open Access Journals, and web searches. Each bibliographic element was scored for correctness, and a composite final score (FS; range 0-36) was calculated by summing the correct elements and multiplying the sum by a 5-point verification score for content relevance. The FS values were then categorized into a 5-point Likert-scale reference accuracy score (RAS: 0 = fabricated; 4 = fully accurate). Non-parametric tests (Kruskal-Wallis, Tamhane's T2, Wilcoxon signed-rank test with Bonferroni correction) were used for statistical comparisons. Claude 3.5 Sonnet demonstrated the highest reference accuracy, with 80.8% fully accurate references (RAS 4) and a fabrication rate of 3.1%, significantly outperforming all other models (P < 0.001). Claude 3 Opus ranked second, achieving 59.6% fully accurate references and a fabrication rate of 18.3% (P < 0.001). ChatGPT-based models (ChatGPT-4o, ChatGPT-4o with canvas, and ChatGPT o1-preview) exhibited moderate accuracy, with fabrication rates ranging from 27.7% to 52.9% and fewer than 8% fully accurate references. Google Gemini 1.5 Pro had the lowest performance, achieving only 2.7% fully accurate references and the highest fabrication rate of 60.6% (P < 0.001). Reference accuracy also varied by subspecialty, with neuroradiology and cardiac radiology outperforming pediatric and head and neck radiology. Claude 3.5 Sonnet significantly outperformed all other models in generating verifiable radiology references, and Claude 3 Opus showed moderate performance. In contrast, ChatGPT models and Google Gemini 1.5 Pro delivered substantially lower accuracy with higher rates of fabricated references, highlighting current limitations in automated academic citation generation. The high accuracy of Claude 3.5 Sonnet can improve radiology literature reviews, research, and education with dependable references. The poor performance of other models, with high fabrication rates, risks misinformation in clinical and academic settings and highlights the need for refinement to ensure safe and effective use.
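
As a reading aid, here is a minimal sketch of the scoring scheme described in this abstract. Splitting publication year/month into two elements (so that 9 elements times a 0-4 verification score spans the stated 0-36 range) and the RAS bin edges are assumptions; the abstract does not spell either out.

```python
# Hypothetical reconstruction of the FS/RAS scoring; element split and bins assumed.
BIB_ELEMENTS = ["authors", "title", "journal", "year", "month",
                "volume", "issue", "pages", "pmid"]  # 9 elements (assumed split)

def final_score(correct: dict[str, bool], verification: int) -> int:
    """FS = (number of correct bibliographic elements) * (0-4 verification score)."""
    assert 0 <= verification <= 4
    return sum(correct[e] for e in BIB_ELEMENTS) * verification

def reference_accuracy_score(fs: int) -> int:
    """Map FS (0-36) to the 5-point RAS (0 = fabricated ... 4 = fully accurate).
    Equal-width bins are assumed; the abstract does not give the edges."""
    return min(fs // 9, 4)

ref = dict.fromkeys(BIB_ELEMENTS, True)
ref["month"] = False                      # one element wrong
print(final_score(ref, 4))                # 32
print(reference_accuracy_score(32))       # 3
```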

AutoFRS: an externally validated, annotation-free approach to computational preoperative complication risk stratification in pancreatic surgery - an experimental study.

Kolbinger FR, Bhasker N, Schön F, Cser D, Zwanenburg A, Löck S, Hempel S, Schulze A, Skorobohach N, Schmeiser HM, Klotz R, Hoffmann RT, Probst P, Müller B, Bodenstedt S, Wagner M, Weitz J, Kühn JP, Distler M, Speidel S

PubMed · May 12, 2025
The risk of postoperative pancreatic fistula (POPF), one of the most dreaded complications after pancreatic surgery, can be predicted from preoperative imaging and tabular clinical routine data. However, existing studies suffer from limited clinical applicability due to the need for manual data annotation and a lack of external validation. We propose AutoFRS (automated fistula risk score software), an externally validated end-to-end prediction tool for POPF risk stratification based on multimodal preoperative data. We trained AutoFRS on preoperative contrast-enhanced computed tomography imaging and clinical data from 108 patients undergoing pancreatic head resection and validated it on an external cohort of 61 patients. Prediction performance was assessed using the area under the receiver operating characteristic curve (AUC) and balanced accuracy. In addition, model performance was compared to the updated alternative fistula risk score (ua-FRS), the current clinical gold standard method for intraoperative POPF risk stratification. AutoFRS achieved an AUC of 0.81 and a balanced accuracy of 0.72 in internal validation and an AUC of 0.79 and a balanced accuracy of 0.70 in external validation. In a patient subset with documented intraoperative POPF risk factors, AutoFRS (AUC: 0.84 ± 0.05) performed on par with the ua-FRS (AUC: 0.85 ± 0.06). The AutoFRS web application facilitates annotation-free prediction of POPF from preoperative imaging and clinical data based on the AutoFRS prediction model. POPF can be predicted from multimodal clinical routine data without human data annotation, automating the risk prediction process. We provide additional evidence of the clinical feasibility of preoperative POPF risk stratification and introduce a software pipeline for future prospective evaluation.
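
For reference, a minimal sketch of the two validation metrics named in this abstract, computed with scikit-learn on placeholder data (not study results):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, balanced_accuracy_score

y_true = np.array([0, 0, 1, 1, 0, 1])              # POPF occurred (1) or not (0)
y_prob = np.array([0.2, 0.6, 0.7, 0.9, 0.3, 0.4])  # predicted POPF risk

auc = roc_auc_score(y_true, y_prob)                     # threshold-free discrimination
bacc = balanced_accuracy_score(y_true, y_prob >= 0.5)   # mean of sensitivity and specificity
print(f"AUC={auc:.2f}, balanced accuracy={bacc:.2f}")   # AUC=0.89, balanced accuracy=0.67
```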

Real-world Evaluation of Computer-aided Pulmonary Nodule Detection Software Sensitivity and False Positive Rate.

El Alam R, Jhala K, Hammer MM

PubMed · May 12, 2025
To evaluate the false positive rate (FPR) of nodule detection software in real-world use. A total of 250 nonenhanced chest computed tomography (CT) examinations were randomly selected from an academic institution and submitted to the ClearRead nodule detection system (Riverain Technologies). Detected findings were reviewed by a thoracic imaging fellow. Nodules were classified as true nodules, lymph nodes, or other findings (branching opacity, vessel, mucus plug, etc.), and the FPR was recorded and compared with the FPR initially published in the literature. True diagnosis was based on pathology or follow-up stability. For cases with malignant nodules, we recorded whether the malignancy was detected by the clinical radiology report (which was produced without software assistance) and/or by ClearRead. Twenty-one CTs were excluded due to a lack of thin-slice images, and 229 CTs were included. A total of 594 findings were reported by ClearRead, of which 362 (61%) were true nodules and 232 (39%) were other findings. Of the true nodules, 297 were solid nodules, of which 79 (27%) were intrapulmonary lymph nodes. ClearRead identified a mean of 2.59 findings per scan. ClearRead's mean FPR was 1.36 false positives per scan, greater than the published rate of 0.58 (P < 0.0001). If true lung nodules <6 mm are also considered false positives, the FPR rises to 2.19. A malignant nodule was present in 30 scans; ClearRead identified it in 26 (87%), and the clinical report identified it in 28 (93%) (P = 0.32). In real-world use, ClearRead had a much higher FPR than initially reported but a similar sensitivity for malignant nodule detection compared with unassisted radiologists.
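
The reported rates can be reconstructed from the counts in the abstract. The back-of-the-envelope check below treats intrapulmonary lymph nodes as false positives alongside non-nodule findings; that is an inference that reproduces the 1.36 figure, not a definition stated in the abstract.

```python
# Sanity check of the reported per-scan rates using the abstract's own counts.
scans = 229
findings = 594           # all findings reported by ClearRead
other_findings = 232     # branching opacities, vessels, mucus plugs, etc.
lymph_nodes = 79         # intrapulmonary lymph nodes among solid nodules

print(findings / scans)                        # ≈ 2.59 mean findings per scan
print((other_findings + lymph_nodes) / scans)  # ≈ 1.36 false positives per scan
```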

Paradigm-Shifting Attention-based Hybrid View Learning for Enhanced Mammography Breast Cancer Classification with Multi-Scale and Multi-View Fusion.

Zhao H, Zhang C, Wang F, Li Z, Gao S

PubMed · May 12, 2025
Breast cancer poses a serious threat to women's health, and its early detection is crucial for enhancing patient survival rates. While deep learning has significantly advanced mammographic image analysis, existing methods struggle to balance view consistency with input adaptability. Furthermore, current models face challenges in accurately capturing multi-scale features, especially when subtle lesion variations across different scales are involved. To address these challenges, this paper proposes a Hybrid View Learning (HVL) paradigm that unifies traditional Single-View and Multi-View Learning approaches. The core component of this paradigm, our Attention-based Hybrid View Learning (AHVL) framework, incorporates two essential attention mechanisms: Contrastive Switch Attention (CSA) and Selective Pooling Attention (SPA). The CSA mechanism flexibly alternates between self-attention and cross-attention based on data integrity, integrating a pre-trained language model for contrastive learning to enhance model stability. Meanwhile, the SPA module employs multi-scale feature pooling and selection to capture critical features from mammographic images, overcoming the limitations of traditional models that struggle with fine-grained lesion detection. Experimental validation on the INbreast and CBIS-DDSM datasets shows that the AHVL framework outperforms both single-view and multi-view methods, especially under extreme view-missing conditions. Even with an 80% missing rate on both datasets, AHVL maintains the highest accuracy and the smallest performance decline in metrics such as F1 score and AUC-PR, demonstrating its robustness and stability. This study redefines mammographic image analysis by leveraging attention-based hybrid view processing, setting a new standard for precise and efficient breast cancer diagnosis.
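
One plausible reading of the switch mechanism, sketched below in PyTorch: route to cross-attention when the paired view is present and fall back to self-attention when it is missing. All module names, tensor shapes, and the routing rule itself are assumptions for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn
from typing import Optional

class SwitchAttention(nn.Module):
    """Toy switch between self- and cross-attention based on view availability."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor, other: Optional[torch.Tensor] = None) -> torch.Tensor:
        # Paired view present -> cross-attention; missing -> self-attention.
        kv = other if other is not None else x
        out, _ = self.attn(query=x, key=kv, value=kv)
        return out

cc = torch.randn(2, 49, 256)   # hypothetical CC-view feature tokens (B, N, C)
mlo = torch.randn(2, 49, 256)  # hypothetical MLO-view feature tokens
layer = SwitchAttention(256)
fused = layer(cc, mlo)         # both views available: cross-attention
solo = layer(cc)               # MLO missing: degrade gracefully to self-attention
```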

Benchmarking Radiology Report Generation From Noisy Free-Texts.

Yuan Y, Zheng Y, Qu L

PubMed · May 12, 2025
Automatic radiology report generation can enhance diagnostic efficiency and accuracy. However, clean open-source imaging scan-report pairs are limited in scale and variety, and the vast amount of radiological text available online is often too noisy to be used directly. To address this challenge, we introduce a novel task called Noisy Report Refinement (NRR), which generates radiology reports from noisy free-texts. To this end, we propose a report refinement pipeline that leverages large language models (LLMs) enhanced with guided self-critique and report selection strategies. Because existing radiology report generation metrics cannot measure cleanliness, radiological usefulness, and factual correctness across the various modalities of reports in the NRR task, we introduce a new benchmark, NRRBench, for NRR evaluation. This benchmark includes two online-sourced datasets and four clinically explainable LLM-based metrics: two metrics evaluate the matching rate of radiology entities and modality-specific template attributes, respectively; one metric assesses report cleanliness; and a combined metric evaluates overall NRR performance. Experiments demonstrate that guided self-critique and report selection strategies significantly improve the quality of refined reports. Additionally, our proposed metrics show a much higher correlation with the noise rate and error count of reports than radiology report generation metrics when evaluating NRR.
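
A hedged sketch of what an entity matching-rate metric in the spirit of NRRBench might look like; in the benchmark the entity extraction is LLM-based (stubbed out here with hand-written sets), and the exact formula is an assumption.

```python
# Illustrative entity matching rate; extraction and formula are assumptions.
def entity_match_rate(pred_entities: set[str], ref_entities: set[str]) -> float:
    """Fraction of reference radiology entities recovered in the refined report."""
    if not ref_entities:
        return 1.0
    return len(pred_entities & ref_entities) / len(ref_entities)

ref = {"pleural effusion", "cardiomegaly", "right lower lobe opacity"}
pred = {"cardiomegaly", "right lower lobe opacity"}
print(entity_match_rate(pred, ref))  # ≈ 0.67: two of three entities matched
```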

Artificial intelligence-assisted diagnosis of early allograft dysfunction based on ultrasound image and data.

Meng Y, Wang M, Niu N, Zhang H, Yang J, Zhang G, Liu J, Tang Y, Wang K

PubMed · May 12, 2025
Early allograft dysfunction (EAD) significantly affects liver transplantation prognosis. This study evaluated the effectiveness of artificial intelligence (AI)-assisted methods in accurately diagnosing EAD and identifying its causes. The primary metric for assessing accuracy was the area under the receiver operating characteristic curve (AUC). Accuracy, sensitivity, and specificity were calculated and analyzed to compare the performance of the AI models with each other and with radiologists. EAD classification followed the criteria established by Olthoff et al. A total of 582 liver transplant patients who underwent transplantation between December 2012 and June 2021 were selected. Among these, 117 patients (mean age 33.5 ± 26.5 years, 80 men) were evaluated. The ultrasound parameters, images, and clinical information of patients were extracted from the database to train the AI model. The AUC for the ultrasound-spectrogram fusion network constructed from four ultrasound images and medical data was 0.968 (95% CI: 0.940, 0.991), outperforming radiologists by 30% across all metrics. AI assistance significantly improved diagnostic accuracy, sensitivity, and specificity (P < 0.05) for both experienced and less-experienced physicians. EAD lacks efficient diagnosis and causation-analysis methods. The integration of AI and ultrasound enhances diagnostic accuracy and causation analysis. By modeling only images and data related to blood flow, the AI model effectively analyzed patients with EAD caused by abnormal blood supply. Our model can assist radiologists in reducing judgment discrepancies, potentially benefiting patients with EAD in underdeveloped regions. Furthermore, it enables targeted treatment for those with abnormal blood supply.

New developments in imaging in ALS.

Kleinerova J, Querin G, Pradat PF, Siah WF, Bede P

PubMed · May 12, 2025
Neuroimaging in ALS has contributed considerable academic insights in recent years, demonstrating genotype-specific topological changes decades before phenoconversion and characterising longitudinal propagation patterns in specific phenotypes. It has elucidated the radiological underpinnings of specific clinical phenomena such as pseudobulbar affect, apathy, behavioural change, spasticity, and language deficits. Academic concepts such as sexual dimorphism, motor reserve, cognitive reserve, adaptive changes, connectivity-based propagation, pathological stages, and compensatory mechanisms have also been evaluated by imaging. The underpinnings of extra-motor manifestations such as cerebellar, sensory, extrapyramidal and cognitive symptoms have been studied by purpose-designed imaging protocols. Clustering approaches have been implemented to uncover radiologically distinct disease subtypes, and machine-learning models have been piloted to accurately classify individual patients into relevant diagnostic, phenotypic, and prognostic categories. Prediction models have been developed for survival in symptomatic patients and for phenoconversion in asymptomatic mutation carriers. A range of novel imaging modalities have been implemented, and 7 Tesla MRI platforms are increasingly being used in ALS studies. Non-ALS MND conditions, such as PLS, SBMA, and SMA, are now also being studied increasingly by quantitative neuroimaging approaches. A unifying theme of recent imaging papers is the departure from describing focal brain changes towards characterising dynamic structural and functional connectivity alterations. Progressive cortico-cortical, cortico-basal, cortico-cerebellar, cortico-bulbar, and cortico-spinal disconnection has been consistently demonstrated by recent studies and recognised as the primary driver of clinical decline. These studies have led to the reconceptualisation of ALS as a "network" or "circuitry" disease.

Deep Learning for Detecting Periapical Bone Rarefaction in Panoramic Radiographs: A Systematic Review and Critical Assessment.

da Silva-Filho JE, da Silva Sousa Z, de-Araújo APC, Fornagero LDS, Machado MP, de Aguiar AWO, Silva CM, de Albuquerque DF, Gurgel-Filho ED

PubMed · May 12, 2025
To evaluate deep learning (DL)-based models for detecting periapical bone rarefaction (PBRs) in panoramic radiographs (PRs), analyzing their feasibility and performance in dental practice. A search was conducted across seven databases and partial grey literature up to November 15, 2024, using Medical Subject Headings and entry terms related to DL, PBRs, and PRs. Studies assessing DL-based models for detecting and classifying PBRs in conventional PRs were included, while those using non-PR imaging or focusing solely on non-PBR lesions were excluded. Two independent reviewers performed screening, data extraction, and quality assessment using the Quality Assessment of Diagnostic Accuracy Studies-2 tool, with conflicts resolved by a third reviewer. Twelve studies met the inclusion criteria, mostly from Asia (58.3%). The risk of bias was moderate in 10 studies (83.3%) and high in 2 (16.7%). DL models showed moderate to high performance in PBR detection (sensitivity: 26-100%; specificity: 51-100%), with U-NET and YOLO being the most frequently used algorithms. Only one study (8.3%) distinguished periapical granulomas from periapical cysts, revealing a classification gap. Key challenges included limited generalization due to small datasets, anatomical superimpositions in PRs, and variability in reported metrics, which compromises comparison between models. This review underscores that DL-based models have the potential to become valuable tools in dental image diagnostics, but they cannot yet be considered definitive practice. Multicenter collaboration is needed to diversify data and democratize these tools. Standardized performance reporting is critical for fair comparability between different models.

Groupwise image registration with edge-based loss for low-SNR cardiac MRI.

Lei X, Schniter P, Chen C, Ahmad R

PubMed · May 12, 2025
The purpose of this study is to perform image registration and averaging of multiple free-breathing single-shot cardiac images, where the individual images may have a low signal-to-noise ratio (SNR). To address low SNR encountered in single-shot imaging, especially at low field strengths, we propose a fast deep learning (DL)-based image registration method, called Averaging Morph with Edge Detection (AiM-ED). AiM-ED jointly registers multiple noisy source images to a noisy target image and utilizes a noise-robust pre-trained edge detector to define the training loss. We validate AiM-ED using synthetic late gadolinium enhanced (LGE) images from the MR extended cardiac-torso (MRXCAT) phantom and free-breathing single-shot LGE images from healthy subjects (24 slices) and patients (5 slices) under various levels of added noise. Additionally, we demonstrate the clinical feasibility of AiM-ED by applying it to data from patients (6 slices) scanned on a 0.55T scanner. Compared with a traditional energy-minimization-based image registration method and DL-based VoxelMorph, images registered using AiM-ED exhibit higher values of recovery SNR and three perceptual image quality metrics. An ablation study shows the benefit of both jointly processing multiple source images and using an edge map in AiM-ED. For single-shot LGE imaging, AiM-ED outperforms existing image registration methods in terms of image quality. With fast inference, minimal training data requirements, and robust performance at various noise levels, AiM-ED has the potential to benefit single-shot CMR applications.
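
A minimal sketch of an edge-based registration loss in the spirit of AiM-ED: compare edge maps of the warped sources and the target rather than raw intensities, so that noise contributes less to the loss. A Sobel filter stands in for the pre-trained noise-robust edge detector used in the paper; function names and tensor shapes here are assumptions.

```python
import torch
import torch.nn.functional as F

def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Edge-magnitude map of a (B, 1, H, W) image via Sobel filtering."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t().contiguous()
    gx = F.conv2d(img, kx.view(1, 1, 3, 3), padding=1)
    gy = F.conv2d(img, ky.view(1, 1, 3, 3), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def edge_loss(warped_sources: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """L1 gap between edge maps of warped sources (N, 1, H, W) and target (1, 1, H, W)."""
    target_edges = sobel_edges(target).expand(warped_sources.shape)
    return F.l1_loss(sobel_edges(warped_sources), target_edges)

sources = torch.randn(5, 1, 64, 64)  # five warped single-shot images (placeholder)
target = torch.randn(1, 1, 64, 64)   # registration target (placeholder)
print(edge_loss(sources, target))
```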