
CLIF-Net: Intersection-guided Cross-view Fusion Network for Infection Detection from Cranial Ultrasound.

Yu M, Peterson MR, Burgoine K, Harbaugh T, Olupot-Olupot P, Gladstone M, Hagmann C, Cowan FM, Weeks A, Morton SU, Mulondo R, Mbabazi-Kabachelor E, Schiff SJ, Monga V

PubMed · May 15, 2025
This paper addresses the problem of detecting possible serious bacterial infection (pSBI) of infancy, i.e., a clinical presentation consistent with bacterial sepsis in newborn infants, using cranial ultrasound (cUS) images. The image set captured for each patient provides multi-view imagery, coronal and sagittal, with geometric overlap. To exploit this geometric relation, we develop a new learning framework, the intersection-guided Cross-view Local- and Image-level Fusion Network (CLIF-Net). Our technique employs two distinct convolutional neural network branches to extract features from coronal and sagittal images with newly developed multi-level fusion blocks. Specifically, we leverage the spatial positions of these images to locate their intersecting region. We then identify and enhance the semantic features from this region across multiple levels using cross-attention modules, facilitating the acquisition of mutually beneficial and more representative features from both views. The final enhanced features from the two views are then integrated and projected through an image-level fusion layer, outputting pSBI and non-pSBI class probabilities. We contend that our method of exploiting multi-view cUS images enables a first-of-its-kind, robust 3D representation tailored for pSBI detection. When evaluated on a dataset of 302 cUS scans from Mbale Regional Referral Hospital in Uganda, CLIF-Net demonstrates substantially enhanced performance, surpassing prevailing state-of-the-art infection detection techniques.
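A minimal sketch of the cross-view cross-attention idea described above, assuming two lightweight CNN branches and PyTorch's built-in multi-head attention; the layer sizes, pooling, and fusion head are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class CrossViewFusion(nn.Module):
    def __init__(self, dim=256, heads=4, num_classes=2):
        super().__init__()
        # One lightweight CNN branch per view (coronal / sagittal).
        self.coronal = nn.Sequential(
            nn.Conv2d(1, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8))
        self.sagittal = nn.Sequential(
            nn.Conv2d(1, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8))
        # Cross-attention: each view queries the other view's tokens.
        self.attn_c2s = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_s2c = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, cor, sag):
        # Flatten spatial maps into token sequences: (B, 64, dim).
        fc = self.coronal(cor).flatten(2).transpose(1, 2)
        fs = self.sagittal(sag).flatten(2).transpose(1, 2)
        # Enhance each view with features attended from the other view.
        ec, _ = self.attn_c2s(fc, fs, fs)   # coronal queries sagittal
        es, _ = self.attn_s2c(fs, fc, fc)   # sagittal queries coronal
        # Image-level fusion: pool tokens, then classify pSBI vs non-pSBI.
        fused = torch.cat([ec.mean(1), es.mean(1)], dim=-1)
        return self.head(fused)

logits = CrossViewFusion()(torch.randn(2, 1, 128, 128),
                           torch.randn(2, 1, 128, 128))
```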

CheXGenBench: A Unified Benchmark For Fidelity, Privacy and Utility of Synthetic Chest Radiographs

Raman Dutt, Pedro Sanchez, Yongchen Yao, Steven McDonagh, Sotirios A. Tsaftaris, Timothy Hospedales

arXiv preprint · May 15, 2025
We introduce CheXGenBench, a rigorous and multifaceted evaluation framework for synthetic chest radiograph generation that simultaneously assesses fidelity, privacy risks, and clinical utility across state-of-the-art text-to-image generative models. Despite rapid advancements in generative AI for real-world imagery, medical domain evaluations have been hindered by methodological inconsistencies, outdated architectural comparisons, and disconnected assessment criteria that rarely address the practical clinical value of synthetic samples. CheXGenBench overcomes these limitations through standardised data partitioning and a unified evaluation protocol comprising over 20 quantitative metrics that systematically analyse generation quality, potential privacy vulnerabilities, and downstream clinical applicability across 11 leading text-to-image architectures. Our results reveal critical inefficiencies in the existing evaluation protocols, particularly in assessing generative fidelity, leading to inconsistent and uninformative comparisons. Our framework establishes a standardised benchmark for the medical AI community, enabling objective and reproducible comparisons while facilitating seamless integration of both existing and future generative models. Additionally, we release a high-quality synthetic dataset, SynthCheX-75K, comprising 75K radiographs generated by the top-performing model (Sana 0.6B) in our benchmark to support further research in this critical domain. Through CheXGenBench, we establish a new state of the art and release our framework, models, and SynthCheX-75K dataset at https://raman1121.github.io/CheXGenBench/.
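As a hedged illustration of one class of privacy check a benchmark like this might include, the sketch below flags synthetic images whose feature embeddings are near-duplicates of training embeddings (potential memorization); the embedding source and threshold are assumptions, not CheXGenBench's actual protocol:

```python
import numpy as np

def near_duplicate_rate(train_emb, synth_emb, threshold=0.95):
    """Fraction of synthetic samples whose nearest training embedding
    exceeds a cosine-similarity threshold."""
    t = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    s = synth_emb / np.linalg.norm(synth_emb, axis=1, keepdims=True)
    sims = s @ t.T                      # (n_synth, n_train) cosine sims
    return float((sims.max(axis=1) > threshold).mean())

# Illustrative call with random embeddings standing in for features
# from a pretrained image encoder.
rate = near_duplicate_rate(np.random.randn(1000, 512),
                           np.random.randn(200, 512))
```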

Ordered-subsets Multi-diffusion Model for Sparse-view CT Reconstruction

Pengfei Yu, Bin Huang, Minghui Zhang, Weiwen Wu, Shaoyu Wang, Qiegen Liu

arXiv preprint · May 15, 2025
Score-based diffusion models have shown significant promise in the field of sparse-view CT reconstruction. However, the projection dataset is large and riddled with redundancy. Consequently, applying the diffusion model to unprocessed data results in lower learning effectiveness and higher learning difficulty, frequently leading to reconstructed images that lack fine details. To address these issues, we propose the ordered-subsets multi-diffusion model (OSMM) for sparse-view CT reconstruction. The OSMM innovatively divides the CT projection data into equal subsets and employs a multi-subsets diffusion model (MSDM) to learn from each subset independently. This targeted learning approach reduces complexity and enhances the reconstruction of fine details. Furthermore, the integration of a one-whole diffusion model (OWDM) with complete sinogram data acts as a global information constraint, which can reduce the possibility of generating erroneous or inconsistent sinogram information. Moreover, the OSMM's unsupervised learning framework provides strong robustness and generalizability, adapting seamlessly to varying sparsity levels of CT sinograms. This ensures consistent and reliable performance across different clinical scenarios. Experimental results demonstrate that OSMM outperforms traditional diffusion models in terms of image quality and noise resilience, offering a powerful and versatile solution for advanced CT imaging in sparse-view scenarios.
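The ordered-subsets partition itself is simple to illustrate. The sketch below assumes subsets are taken as evenly spaced, interleaved views so each subset covers the full angular range; the paper's exact partition scheme may differ:

```python
import numpy as np

def ordered_subsets(sinogram, n_subsets):
    """sinogram: (n_views, n_detectors). Returns a list of
    (angle_indices, sub_sinogram) pairs, one per subset."""
    n_views = sinogram.shape[0]
    return [(np.arange(s, n_views, n_subsets),
             sinogram[s::n_subsets]) for s in range(n_subsets)]

subsets = ordered_subsets(np.random.rand(120, 256), n_subsets=4)
# Each of the 4 subsets holds 30 evenly spaced views; a separate
# diffusion model (the MSDM role) would then learn from each subset.
```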

Assessing artificial intelligence in breast screening with stratified results on 306 839 mammograms across geographic regions, age, breast density and ethnicity: A Retrospective Investigation Evaluating Screening (ARIES) study.

Oberije CJG, Currie R, Leaver A, Redman A, Teh W, Sharma N, Fox G, Glocker B, Khara G, Nash J, Ng AY, Kecskemethy PD

PubMed · May 14, 2025
This study evaluated an artificial intelligence (AI) system in breast screening through results stratified across age, breast density, ethnicity and screening centres from different UK regions. A large-scale retrospective study evaluating two variations of using AI as an independent second reader in double reading was conducted. Stratifications were conducted for clinical and operational metrics. Data from 306 839 mammography cases screened between 2017 and 2021, spanning three different UK regions, were used. The impact on safety and effectiveness was assessed using clinical metrics: cancer detection rate and positive predictive value, stratified according to age, breast density and ethnicity. Operational impact was assessed through reading workload and recall rate, measured overall and per centre. Non-inferiority was tested for AI workflows compared with human double reading and, when passed, superiority was tested. The AI interval cancer (IC) flag rate was assessed to estimate the additional cancer detection opportunity with AI that cannot be assessed retrospectively. The AI workflows passed non-inferiority or superiority tests for every metric across all subgroups, with workload savings between 38.3% and 43.7%. The standalone AI flagged 41.2% of ICs overall, ranging between 33.3% and 46.8% across subgroups, with the highest detection rate for dense breasts. Human double reading and AI workflows showed the same performance disparities across subgroups. The AI integrations maintained or improved performance on all metrics for all subgroups while achieving significant workload reduction. Moreover, complementing these integrations with AI as an additional reader can improve cancer detection. The granularity of assessment showed that screening with the AI-system integrations was as safe as standard double reading across heterogeneous populations.
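For readers unfamiliar with the statistics, here is a minimal sketch of a one-sided non-inferiority test on a proportion metric (e.g. cancer detection rate); the margin, counts, and normal approximation are illustrative assumptions, not the ARIES analysis plan:

```python
from math import sqrt
from statistics import NormalDist

def noninferiority_p(x_ai, n_ai, x_std, n_std, margin):
    """H0: p_ai <= p_std - margin  vs  H1: p_ai > p_std - margin."""
    p_ai, p_std = x_ai / n_ai, x_std / n_std
    se = sqrt(p_ai * (1 - p_ai) / n_ai + p_std * (1 - p_std) / n_std)
    z = (p_ai - p_std + margin) / se
    return 1 - NormalDist().cdf(z)   # small p => non-inferiority shown

# Hypothetical detection counts per 300k reads, margin of 0.05 per 100.
p = noninferiority_p(x_ai=2450, n_ai=300000, x_std=2430, n_std=300000,
                     margin=0.0005)
```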

Automated scout-image-based estimation of contrast agent dosing: a deep learning approach

Schirrmeister, R., Taleb, L., Friemel, P., Reisert, M., Bamberg, F., Weiss, J., Rau, A.

medRxiv preprint · May 12, 2025
We developed and tested a deep-learning-based algorithm for the approximation of contrast agent dosage based on computed tomography (CT) scout images. We prospectively enrolled 817 patients undergoing clinically indicated CT imaging, predominantly of the thorax and/or abdomen. Patient weight was collected by study staff prior to the examination (1) with a weight scale and (2) as self-reported. Based on the scout images, we developed an EfficientNet convolutional neural network pipeline to estimate the optimal contrast agent dose based on patient weight, and we provide a browser-based user interface as a versatile open-source tool to account for different contrast agent compounds. We additionally analyzed the body-weight-informative CT features by synthesizing representative examples for different weights using in-context learning and dataset distillation. The cohort consisted of 533 thoracic, 70 abdominal and 229 thoracic-abdominal CT scout scans. Self-reported patient weight was statistically significantly lower than manual measurements (75.13 kg vs. 77.06 kg; p < 10⁻⁵, Wilcoxon signed-rank test). Our pipeline predicted patient weight with a mean absolute error of 3.90 ± 0.20 kg (corresponding to a roughly 4.48 to 11.70 ml difference in contrast agent, depending on the agent) in 5-fold cross-validation and is publicly available at https://tinyurl.com/ct-scout-weight. Interpretability analysis revealed that both larger anatomical shape and higher overall attenuation were predictive of body weight. Our open-source deep learning pipeline allows for the automatic estimation of accurate contrast agent dosing based on scout images in routine CT imaging studies. This approach has the potential to streamline contrast agent dosing workflows, improve efficiency, and enhance patient safety by providing quick and accurate weight estimates without additional measurements or reliance on potentially outdated records. The model's performance may vary depending on patient positioning and scout image quality, and the approach requires validation on larger patient cohorts and at other clinical centers.

Author Summary: Automation of medical workflows using AI has the potential to increase reproducibility while saving costs and time. Here, we investigated automating the estimation of the required contrast agent dosage for CT examinations. We trained a deep neural network to predict body weight from the initial 2D CT scout images that are acquired prior to the actual CT examination. The predicted weight is then converted to a contrast agent dosage based on contrast-agent-specific conversion factors. To facilitate application in clinical routine, we developed a user-friendly browser-based interface that allows clinicians to select a contrast agent or input a custom conversion factor to receive dosage suggestions, with local data processing in the browser. We also investigated which image characteristics predict body weight and found plausible relationships, such as higher attenuation and larger anatomical shapes correlating with higher body weights. Our work goes beyond prior work by implementing a single model for a variety of anatomical regions, providing an accessible user interface, and investigating the predictive characteristics of the images.
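The weight-to-dose step the authors describe reduces to multiplying the predicted weight by an agent-specific conversion factor; the sketch below uses hypothetical factor values for illustration, not clinical dosing data:

```python
# Hypothetical ml/kg conversion factors; real values depend on the
# specific contrast agent and local protocol.
AGENT_ML_PER_KG = {"agent_a": 1.2, "agent_b": 1.5}

def contrast_dose_ml(predicted_weight_kg, agent, factors=AGENT_ML_PER_KG):
    """Convert a scout-image weight estimate into a contrast volume."""
    return predicted_weight_kg * factors[agent]

dose = contrast_dose_ml(77.0, "agent_a")  # 92.4 ml under these factors
```

Note that the abstract's quoted 4.48 to 11.70 ml dosing spread divided by the 3.90 kg mean absolute error implies conversion factors of roughly 1.15 to 3.0 ml/kg across the agents considered.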

CirnetamorNet: An ultrasonic temperature measurement network for microwave hyperthermia based on deep learning.

Cui F, Du Y, Qin L, Li B, Li C, Meng X

PubMed · May 9, 2025
Microwave thermotherapy is a promising approach for cancer treatment, but accurate noninvasive temperature monitoring remains challenging. This study aims to achieve accurate temperature prediction during microwave thermotherapy by efficiently integrating multi-feature data, thereby improving the accuracy and reliability of noninvasive thermometry techniques. We propose an enhanced recurrent neural network architecture, CirnetamorNet. An experimental data acquisition system was developed using a body model constructed from material that simulates the characteristics of human tissue. Ultrasonic image data were collected at different temperatures, and five parameters with high temperature correlation were extracted from the gray-scale covariance matrix and the Homodyned-K distribution. Using the multi-feature data as input and temperature prediction as output, the CirnetamorNet model is constructed with a multi-head attention mechanism. Model performance was evaluated by analyzing training loss, prediction mean squared error, and accuracy, and ablation experiments were performed to evaluate the contribution of each module. Compared with common models, CirnetamorNet performs well, with a training loss as low as 1.4589 and a mean squared error of only 0.1856. Its temperature prediction accuracy of 0.3°C exceeds that of many advanced models. Ablation experiments show that removing any key module degrades performance, demonstrating that the collaboration of all modules is significant for the model's performance. The proposed CirnetamorNet model exhibits exceptional performance in noninvasive thermometry for microwave thermotherapy. It offers a novel approach to multi-feature data fusion in the medical field and holds significant practical application value.
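A minimal sketch of a multi-head-attention temperature regressor over the five ultrasound-derived features, in the spirit of (but not reproducing) CirnetamorNet; the recurrent backbone and layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class AttnTempRegressor(nn.Module):
    def __init__(self, n_features=5, dim=64, heads=4):
        super().__init__()
        self.embed = nn.Linear(n_features, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)   # recurrent backbone
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(dim, 1)   # predicted temperature (°C)

    def forward(self, x):              # x: (B, T, 5) feature sequence
        h, _ = self.rnn(self.embed(x))
        a, _ = self.attn(h, h, h)      # self-attention over time steps
        return self.out(a[:, -1])      # predict from the last step

temp = AttnTempRegressor()(torch.randn(8, 20, 5))
```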

Prompt Engineering for Large Language Models in Interventional Radiology.

Dietrich N, Bradbury NC, Loh C

PubMed · May 7, 2025
Prompt engineering plays a crucial role in optimizing artificial intelligence (AI) and large language model (LLM) outputs by refining input structure, a key factor in medical applications where precision and reliability are paramount. This Clinical Perspective provides an overview of prompt engineering techniques and their relevance to interventional radiology (IR). It explores key strategies, including zero-shot, one- or few-shot, chain-of-thought, tree-of-thought, self-consistency, and directional stimulus prompting, demonstrating their application in IR-specific contexts. Practical examples illustrate how these techniques can be effectively structured for workplace and clinical use. Additionally, the article discusses best practices for designing effective prompts and addresses challenges in the clinical use of generative AI, including data privacy and regulatory concerns. It concludes with an outlook on the future of generative AI in IR, highlighting advances including retrieval-augmented generation, domain-specific LLMs, and multimodal models.
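As a hedged illustration of two of the prompting patterns named above (few-shot and chain-of-thought), the templates below frame a generic IR triage task; the task, wording, and examples are invented for demonstration and are not from the article:

```python
FEW_SHOT = """Classify the urgency of the IR consult request.

Request: "Nephrostomy exchange, tube functioning, routine interval."
Urgency: Elective

Request: "{request}"
Urgency:"""

CHAIN_OF_THOUGHT = """Classify the urgency of the IR consult request.
Think step by step: list the clinical findings, state the risk they
imply, then give a one-word urgency label.

Request: "{request}"
Reasoning:"""

print(FEW_SHOT.format(request="Biliary drain dislodged, febrile, rigors."))
```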

Multistage Diffusion Model With Phase Error Correction for Fast PET Imaging.

Gao Y, Huang Z, Xie X, Zhao W, Yang Q, Yang X, Yang Y, Zheng H, Liang D, Liu J, Chen R, Hu Z

PubMed · May 7, 2025
Fast PET imaging is clinically important for reducing motion artifacts and improving patient comfort. While recent diffusion-based deep learning methods have shown promise, they often fail to capture the true PET degradation process, suffer from accumulated inference errors, introduce artifacts, and require extensive reconstruction iterations. To address these challenges, we propose a novel multistage diffusion framework tailored for fast PET imaging. At the coarse level, we design a multistage structure to approximate the temporal non-linear PET degradation process in a data-driven manner, using paired PET images collected under different acquisition durations. A Phase Error Correction Network (PECNet) ensures consistency across stages by correcting accumulated deviations. At the fine level, we introduce a deterministic cold diffusion mechanism, which simulates intra-stage degradation through interpolation between known acquisition durations, significantly reducing reconstruction iterations to as few as 10. Evaluations on [⁶⁸Ga]FAPI and [¹⁸F]FDG PET datasets demonstrate the superiority of our approach, achieving peak PSNRs of 36.2 dB and 39.0 dB, respectively, with average SSIMs over 0.97. Our framework offers high-fidelity PET imaging with fewer iterations, making it practical for accelerated clinical imaging.
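A sketch of the deterministic cold-diffusion intuition, with linear interpolation standing in for the paper's degradation operator (an assumption), plus the PSNR metric quoted in the results:

```python
import numpy as np

def degrade(img_long, img_short, t):
    """t in [0, 1]: 0 = full-duration image, 1 = shortest duration.
    Linear interpolation is a placeholder for the learned operator."""
    return (1 - t) * img_long + t * img_short

def psnr(ref, test, peak=None):
    """Peak signal-to-noise ratio in dB."""
    peak = ref.max() if peak is None else peak
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

long_img = np.random.rand(64, 64)
short_img = long_img + 0.1 * np.random.randn(64, 64)
mid = degrade(long_img, short_img, t=0.5)   # simulated intra-stage image
print(psnr(long_img, mid))
```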

The added value of artificial intelligence using Quantib Prostate for the detection of prostate cancer at multiparametric magnetic resonance imaging.

Russo T, Quarta L, Pellegrino F, Cosenza M, Camisassa E, Lavalle S, Apostolo G, Zaurito P, Scuderi S, Barletta F, Marzorati C, Stabile A, Montorsi F, De Cobelli F, Brembilla G, Gandaglia G, Briganti A

PubMed · May 7, 2025
Artificial intelligence (AI) has been proposed to assist radiologists in reporting multiparametric magnetic resonance imaging (mpMRI) of the prostate. We evaluated the diagnostic performance of radiologists with different levels of experience when reporting mpMRI with the support of available AI-based software (Quantib Prostate). This is a single-center study (NCT06298305) involving 110 patients. Those with a positive mpMRI (PI-RADS ≥ 3) underwent targeted plus systematic biopsy (TBx plus SBx), while those with a negative mpMRI but a high clinical suspicion of prostate cancer (PCa) underwent SBx. Three readers with different levels of experience, identified as R1, R2, and R3, reviewed all mpMRI scans. Inter-reader agreement among the three readers with and without the assistance of Quantib Prostate, as well as sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and diagnostic accuracy for the detection of clinically significant PCa (csPCa), were assessed. Overall, 102 patients underwent prostate biopsy, and the csPCa detection rate was 47%. Using Quantib Prostate increased the number of lesions identified by R3 (101 vs. 127). Inter-reader agreement slightly increased with Quantib Prostate (0.37 without vs. 0.41 with). The PPV, NPV, and diagnostic accuracy (measured by the area under the curve [AUC]) of R3 improved (0.51 vs. 0.55, 0.65 vs. 0.82, and 0.56 vs. 0.62, respectively). Conversely, no changes were observed for R1 and R2. Using Quantib Prostate did not enhance the detection rate of csPCa for readers with some experience in prostate imaging. However, for an inexperienced reader, this AI-based software was shown to improve performance. Name of registry: clinicaltrials.gov. NCT06298305. Date of registration: 2022-09.
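For reference, the per-reader metrics reported above follow from a 2×2 confusion table with csPCa on biopsy as ground truth; the counts below are illustrative only, not study data:

```python
def reader_metrics(tp, fp, fn, tn):
    """Standard diagnostic metrics from confusion-table counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

print(reader_metrics(tp=40, fp=30, fn=8, tn=24))  # illustrative counts
```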

New Targets for Imaging in Nuclear Medicine.

Brink A, Paez D, Estrada Lobato E, Delgado Bolton RC, Knoll P, Korde A, Calapaquí Terán AK, Haidar M, Giammarile F

PubMed · May 6, 2025
Nuclear medicine is rapidly evolving with new molecular imaging targets and advanced computational tools that promise to enhance diagnostic precision and personalized therapy. Recent years have seen a surge in novel PET and SPECT tracers, such as those targeting prostate-specific membrane antigen (PSMA) in prostate cancer, fibroblast activation protein (FAP) in tumor stroma, and tau protein in neurodegenerative disease. These tracers enable more specific visualization of disease processes compared to traditional agents, fitting into a broader shift toward precision imaging in oncology and neurology. In parallel, artificial intelligence (AI) and machine learning techniques are being integrated into tracer development and image analysis. AI-driven methods can accelerate radiopharmaceutical discovery, optimize pharmacokinetic properties, and assist in interpreting complex imaging datasets. This editorial provides an expanded overview of emerging imaging targets and techniques, including theranostic applications that pair diagnosis with radionuclide therapy, and examines how AI is augmenting nuclear medicine. We discuss the implications of these advancements within the field's historical trajectory and address the regulatory, manufacturing, and clinical challenges that must be navigated. Innovations in molecular targeting and AI are poised to transform nuclear medicine practice, enabling more personalized diagnostics and radiotheranostic strategies in the era of precision healthcare.