Page 32 of 34334 results

[Pulmonary vascular interventions: innovating through adaptation and advancing through differentiation].

Li J, Wan J

pubmed · May 12 2025
Pulmonary vascular intervention technology, with its minimally invasive and precise advantages, has been a groundbreaking advancement in the treatment of pulmonary vascular diseases. Techniques such as balloon pulmonary angioplasty (BPA), pulmonary artery stenting, and percutaneous pulmonary artery denervation (PADN) have significantly improved the prognoses for conditions such as chronic thromboembolic pulmonary hypertension (CTEPH), pulmonary artery stenosis, and pulmonary arterial hypertension (PAH). Although based on percutaneous coronary intervention (PCI) techniques such as guidewire manipulation and balloon dilatation, pulmonary vascular interventions require specific modifications to address the unique characteristics of the pulmonary circulation (low pressure, thin-walled vessels, and complex branching) and to mitigate the risks of perforation and thrombosis. Future directions include the development of dedicated instruments, multi-modality imaging guidance, artificial intelligence-assisted procedures, and molecular interventional therapies. These innovations aim to establish an independent theoretical framework for pulmonary vascular interventions, facilitating their transition from "adjuvant therapies" to "core treatments" in clinical practice.

Real-world Evaluation of Computer-aided Pulmonary Nodule Detection Software Sensitivity and False Positive Rate.

El Alam R, Jhala K, Hammer MM

pubmed · May 12 2025
To evaluate the false-positive rate (FPR) of nodule detection software in real-world use. A total of 250 nonenhanced chest computed tomography (CT) examinations were randomly selected from an academic institution and submitted to the ClearRead nodule detection system (Riverain Technologies). Detected findings were reviewed by a thoracic imaging fellow. Nodules were classified as true nodules, lymph nodes, or other findings (branching opacity, vessel, mucus plug, etc.), and the FPR was recorded and compared with the FPR initially published in the literature. True diagnosis was based on pathology or follow-up stability. For cases with malignant nodules, we recorded whether the malignancy was detected by the clinical radiology report (which was produced without software assistance) and/or by ClearRead. Twenty-one CTs were excluded due to a lack of thin-slice images, and 229 CTs were included. A total of 594 findings were reported by ClearRead, of which 362 (61%) were true nodules and 232 (39%) were other findings. Of the true nodules, 297 were solid, of which 79 (27%) were intrapulmonary lymph nodes. The mean number of findings identified by ClearRead per scan was 2.59. ClearRead's mean FPR was 1.36, greater than the published rate of 0.58 (P<0.0001). If true lung nodules <6 mm are also considered false positives, the FPR rises to 2.19. A malignant nodule was present in 30 scans; ClearRead identified it in 26 (87%), and the clinical report identified it in 28 (93%) (P=0.32). In real-world use, ClearRead had a much higher FPR than initially reported but a sensitivity for malignant nodule detection similar to that of unassisted radiologists.
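As a quick arithmetic check (not part of the study's code), the reported per-scan rates follow directly from the counts in the abstract, with both the 232 other findings and the 79 intrapulmonary lymph nodes treated as false positives:

```python
# Counts reported in the abstract
scans = 229
total_findings = 594
other_findings = 232   # branching opacities, vessels, mucus plugs, etc.
lymph_nodes = 79       # intrapulmonary lymph nodes among the solid true nodules

# Mean findings per scan, and false positives (clinically irrelevant findings) per scan
mean_findings_per_scan = total_findings / scans    # 594 / 229 ≈ 2.59
fpr = (other_findings + lymph_nodes) / scans       # 311 / 229 ≈ 1.36

print(round(mean_findings_per_scan, 2), round(fpr, 2))  # → 2.59 1.36
```

Both values match the figures quoted in the abstract, confirming that the 1.36 FPR counts intrapulmonary lymph nodes as false positives.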

Deep Learning for Detecting Periapical Bone Rarefaction in Panoramic Radiographs: A Systematic Review and Critical Assessment.

da Silva-Filho JE, da Silva Sousa Z, de-Araújo APC, Fornagero LDS, Machado MP, de Aguiar AWO, Silva CM, de Albuquerque DF, Gurgel-Filho ED

pubmed · May 12 2025
To evaluate deep learning (DL)-based models for detecting periapical bone rarefaction (PBRs) in panoramic radiographs (PRs), analyzing their feasibility and performance in dental practice. A search was conducted across seven databases and partial grey literature up to November 15, 2024, using Medical Subject Headings and entry terms related to DL, PBRs, and PRs. Studies assessing DL-based models for detecting and classifying PBRs in conventional PRs were included, while those using non-PR imaging or focusing solely on non-PBR lesions were excluded. Two independent reviewers performed screening, data extraction, and quality assessment using the Quality Assessment of Diagnostic Accuracy Studies-2 tool, with conflicts resolved by a third reviewer. Twelve studies met the inclusion criteria, mostly from Asia (58.3%). The risk of bias was moderate in 10 studies (83.3%) and high in 2 (16.7%). DL models showed moderate to high performance in PBR detection (sensitivity: 26-100%; specificity: 51-100%), with U-Net and YOLO being the most used algorithms. Only one study (8.3%) distinguished periapical granulomas from periapical cysts, revealing a classification gap. Key challenges included limited generalization due to small datasets, anatomical superimpositions in PRs, and variability in reported metrics, compromising model comparison. This review underscores that DL-based models have the potential to become a valuable tool in dental image diagnostics, but they cannot yet be considered a definitive practice. Multicenter collaboration is needed to diversify data and democratize these tools. Standardized performance reporting is critical for fair comparison between different models.

Batch Augmentation with Unimodal Fine-tuning for Multimodal Learning

H M Dipu Kabir, Subrota Kumar Mondal, Mohammad Ali Moni

arxiv preprint · May 10 2025
This paper proposes batch augmentation with unimodal fine-tuning to detect the fetus's organs from ultrasound images and associated clinical textual information. We also prescribe pre-training initial layers with investigated medical data before the multimodal training. At first, we apply a transferred initialization with the unimodal image portion of the dataset with batch augmentation. This step adjusts the initial layer weights for medical data. Then, we apply neural networks (NNs) with fine-tuned initial layers to images in batches with batch augmentation to obtain features. We also extract information from descriptions of images. We combine this information with features obtained from images to train the head layer. We write a dataloader script to load the multimodal data and use existing unimodal image augmentation techniques with batch augmentation for the multimodal data. The dataloader draws a new random augmentation for each batch to achieve good generalization. We investigate the FPU23 ultrasound and UPMC Food-101 multimodal datasets. The multimodal large language model (LLM) with the proposed training provides the best results among the investigated methods. We achieve near state-of-the-art (SOTA) performance on the UPMC Food-101 dataset. We share the scripts of the proposed method and its traditional counterparts at the following repository: github.com/dipuk0506/multimodal
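The per-batch random augmentation described above can be sketched as follows; this is a minimal illustration, with the augmentation list and data structures assumed rather than taken from the authors' repository:

```python
import random

AUGMENTATIONS = ["hflip", "rotate", "brightness", "none"]  # illustrative choices

def batch_loader(samples, batch_size, seed=None):
    """Yield batches of (image, text) pairs, drawing one fresh random
    augmentation per batch so successive passes see varied inputs."""
    rng = random.Random(seed)
    for start in range(0, len(samples), batch_size):
        batch = samples[start:start + batch_size]
        aug = rng.choice(AUGMENTATIONS)  # new augmentation for this batch
        yield [({"image": img, "aug": aug}, text) for img, text in batch]

# Usage: 5 dummy multimodal samples, batch size 2 -> 3 batches,
# each batch sharing a single randomly drawn augmentation.
data = [(f"img{i}", f"caption {i}") for i in range(5)]
batches = list(batch_loader(data, batch_size=2, seed=0))
```

The key design point is that the augmentation is drawn once per batch rather than once per sample, which is what distinguishes batch augmentation from ordinary per-sample augmentation.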

Adherence to SVS Abdominal Aortic Aneurysm Guidelines Among Patients Detected by AI-Based Algorithm.

Wilson EM, Yao K, Kostiuk V, Bader J, Loh S, Mojibian H, Fischer U, Ochoa Chaar CI, Aboian E

pubmed · May 9 2025
This study evaluates adherence to the latest Society for Vascular Surgery (SVS) guidelines on imaging surveillance, physician evaluation, and surgical intervention for abdominal aortic aneurysm (AAA). An AI-based natural language processing algorithm, applied retrospectively, identified AAA patients from imaging scans at a tertiary care center between January-March 2019 and 2021, excluding the pandemic period. Retrospective chart review assessed demographics, comorbidities, imaging, and follow-up adherence. Statistical significance was set at p<0.05. Among 479 identified patients, 279 remained in the final cohort after exclusion of deceased patients. Imaging surveillance adherence was 67.7% (189/279), with males comprising 72.5% (137/189) (Figure 1). The mean age of adherent patients was 73.9 years (SD ±9.5) vs. 75.2 years (SD ±10.8) for non-adherent patients (Table 1). Adherent females were significantly younger than non-adherent females (76.7 vs. 81.1 years; p=0.003), with no significant age difference among adherent males. Adherent patients were more likely to be evaluated by a vascular provider within six months (p<0.001), but aneurysm size did not affect imaging adherence: 3.0-4.0 cm (p=0.24), 4.0-5.0 cm (p=0.88), >5.0 cm (p=0.29). Based on SVS surgical criteria, 18 males (AAA >5.5 cm) and 17 females (AAA >5.0 cm) qualified for intervention, and repair rates increased in 2021. Thirty-four males (20 in 2019 vs. 14 in 2021) and 7 females (2021 only) received surgical intervention below the threshold for repair. Despite consistent SVS guidelines, adherence remains moderate. AI-based detection and follow-up algorithms may enhance adherence and long-term AAA patient outcomes; however, further research is needed to assess the specific impacts of AI.

Application of a pulmonary nodule detection program using AI technology to ultra-low-dose CT: differences in detection ability among various image reconstruction methods.

Tsuchiya N, Kobayashi S, Nakachi R, Tomori Y, Yogi A, Iida G, Ito J, Nishie A

pubmed · May 9 2025
This study aimed to investigate the performance of an artificial intelligence (AI)-based lung nodule detection program in ultra-low-dose CT (ULDCT) imaging, with a focus on the influence of various image reconstruction methods on detection accuracy. A chest phantom embedded with artificial lung nodules (solid and ground-glass nodules [GGNs]; diameters: 12 mm, 8 mm, 5 mm, and 3 mm) was scanned using six combinations of tube currents (160 mA, 80 mA, and 10 mA) and voltages (120 kV and 80 kV) on a Canon Aquilion One CT scanner. Images were reconstructed using filtered back projection (FBP), hybrid iterative reconstruction (HIR), model-based iterative reconstruction (MBIR), and deep learning reconstruction (DLR). Nodule detection was performed using an AI-based lung nodule detection program, and performance metrics were analyzed across different reconstruction methods and radiation dose protocols. At the lowest dose protocol (80 kV, 10 mA), FBP showed a 0% detection rate for all nodule sizes. HIR and DLR consistently achieved 100% detection rates for solid nodules ≥ 5 mm and GGNs ≥ 8 mm. No method detected 3 mm GGNs under any protocol. DLR demonstrated the highest detection rates, even under ultra-low-dose settings, while maintaining high image quality. AI-based lung nodule detection in ULDCT is strongly dependent on the choice of image reconstruction method.

The present and future of lung cancer screening: latest evidence.

Gutiérrez Alliende J, Kazerooni EA, Crosbie PAJ, Xie X, Sharma A, Reis J

pubmed · May 9 2025
Lung cancer is the leading cause of cancer-related mortality worldwide. Early detection reduces lung cancer-related mortality and improves survival. This report summarizes presentations and panel discussions from a webinar, "The Present and Future of Lung Cancer Screening: Latest Evidence and AI Perspectives." The webinar provided the perspectives of experts from the United States, United Kingdom, and China on evidence-based recommendations and management in lung cancer screening (LCS), barriers to implementation, and the role of artificial intelligence (AI). With several countries now incorporating AI into their screening programs, AI offers potential solutions to some of the challenges associated with LCS.

Artificial Intelligence in Vascular Neurology: Applications, Challenges, and a Review of AI Tools for Stroke Imaging, Clinical Decision Making, and Outcome Prediction Models.

Alqadi MM, Vidal SGM

pubmed · May 9 2025
Artificial intelligence (AI) promises to compress stroke treatment timelines, yet its clinical return on investment remains uncertain. We interrogate state-of-the-art AI platforms across imaging, workflow orchestration, and outcome prediction to clarify value drivers and execution risks. Convolutional, recurrent, and transformer architectures now trigger large-vessel-occlusion alerts, delineate the ischemic core in seconds, and forecast 90-day function. Commercial deployments (RapidAI, Viz.ai, Aidoc) report double-digit reductions in door-to-needle metrics and expanded thrombectomy eligibility. However, dataset bias, opaque reasoning, and limited external validation constrain scalability. Hybrid image-plus-clinical models elevate predictive accuracy but intensify data-governance demands. AI can operationalize precision stroke care, but enterprise-grade adoption requires federated data pipelines, explainable-AI dashboards, and fit-for-purpose regulation. Prospective multicenter trials and continuous lifecycle surveillance are mandatory to convert algorithmic promise into reproducible, equitable patient benefit.

Towards Better Cephalometric Landmark Detection with Diffusion Data Generation

Dongqian Guo, Wencheng Han, Pang Lyu, Yuxi Zhou, Jianbing Shen

arxiv preprint · May 9 2025
Cephalometric landmark detection is essential for orthodontic diagnostics and treatment planning. Nevertheless, the scarcity of samples in data collection and the extensive effort required for manual annotation have significantly impeded the availability of diverse datasets. This limitation has restricted the effectiveness of deep learning-based detection methods, particularly those based on large-scale vision models. To address these challenges, we have developed an innovative data generation method capable of producing diverse cephalometric X-ray images along with corresponding annotations without human intervention. To achieve this, our approach initiates by constructing new cephalometric landmark annotations using anatomical priors. Then, we employ a diffusion-based generator to create realistic X-ray images that correspond closely with these annotations. To achieve precise control in producing samples with different attributes, we introduce a novel prompt cephalometric X-ray image dataset. This dataset includes real cephalometric X-ray images and detailed medical text prompts describing the images. By leveraging these detailed prompts, our method improves the generation process to control different styles and attributes. Facilitated by the large, diverse generated data, we introduce large-scale vision detection models into the cephalometric landmark detection task to improve accuracy. Experimental results demonstrate that training with the generated data substantially enhances the performance. Compared to methods without using the generated data, our approach improves the Success Detection Rate (SDR) by 6.5%, attaining a notable 82.2%. All code and data are available at: https://um-lab.github.io/cepha-generation
