3D Quantification of Viral Transduction Efficiency in Living Human Retinal Organoids

Rogler, T. S., Salbaum, K. A., Brinkop, A. T., Sonntag, S. M., James, R., Shelton, E. R., Thielen, A., Rose, R., Babutzka, S., Klopstock, T., Michalakis, S., Serwane, F.

bioRxiv preprint · Jun 4, 2025
The development of therapeutics builds on testing their efficiency in vitro. To optimize gene therapies, for example, fluorescent reporters expressed by treated cells are typically utilized as readouts. Traditionally, their global fluorescence signal has been used as an estimate of transduction efficiency. However, analysis of individual cells within a living 3D tissue remains a challenge. Readout on a single-cell level can be realized via fluorescence-based flow cytometry, at the cost of tissue dissociation and loss of spatial information. Complementarily, spatial information is accessible via immunofluorescence of fixed samples. Both approaches impede time-dependent studies on the delivery of the vector to the cells. Here, quantitative 3D characterization of viral transduction efficiencies in living retinal organoids is introduced. The approach quantifies gene delivery efficiency in space and time, leveraging human retinal organoids, engineered adeno-associated virus (AAV) vectors, confocal live imaging, and deep learning-based image segmentation. The integration of these tools in an organoid imaging and analysis pipeline allows quantitative testing of future treatments and other gene delivery methods. It has the potential to guide the development of therapies in biomedical applications.
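As a rough illustration of the per-cell readout such a pipeline produces, the sketch below scores each segmented cell by its mean reporter intensity in a 3D stack. The function name, the fixed intensity threshold, and the array layout are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def transduction_efficiency(reporter: np.ndarray,
                            cell_labels: np.ndarray,
                            intensity_threshold: float) -> float:
    """Fraction of segmented cells scored as transduced in one 3D stack.

    reporter:    (Z, Y, X) fluorescence volume of the AAV-delivered reporter
    cell_labels: (Z, Y, X) integer mask with one label per cell (0 = background),
                 e.g. produced by a deep learning segmentation model
    """
    ids = np.unique(cell_labels)
    ids = ids[ids > 0]  # drop the background label
    # Mean reporter intensity inside each segmented cell
    mean_intensity = np.asarray(ndimage.mean(reporter, labels=cell_labels, index=ids))
    transduced = mean_intensity > intensity_threshold
    return float(transduced.sum()) / len(ids)
```

Repeating this per time point of a confocal live-imaging series would yield the time-resolved efficiency curves the abstract describes.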

Retrieval-Augmented Generation with Large Language Models in Radiology: From Theory to Practice.

Fink A, Rau A, Reisert M, Bamberg F, Russe MF

PubMed · Jun 4, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Large language models (LLMs) hold substantial promise in addressing the growing workload in radiology, but recent studies also reveal limitations, such as hallucinations and opacity in sources for LLM responses. Retrieval-augmented Generation (RAG) based LLMs offer a promising approach to streamline radiology workflows by integrating reliable, verifiable, and customizable information. Ongoing refinement is critical to enable RAG models to manage large amounts of input data and to engage in complex multiagent dialogues. This report provides an overview of recent advances in LLM architecture, including few-shot and zero-shot learning, RAG integration, multistep reasoning, and agentic RAG, and identifies future research directions. Exemplary cases demonstrate the practical application of these techniques in radiology practice. ©RSNA, 2025.

Recent Advances in Medical Image Classification

Loan Dao, Ngoc Quoc Ly

arXiv preprint · Jun 4, 2025
Medical image classification is crucial for diagnosis and treatment and benefits significantly from advances in artificial intelligence. This paper reviews recent progress in the field, focusing on three levels of solutions: basic, specific, and applied. It highlights advances in traditional methods using deep learning models such as Convolutional Neural Networks and Vision Transformers, as well as state-of-the-art approaches with Vision Language Models. These models tackle the issue of limited labeled data and both enhance and explain predictive results through Explainable Artificial Intelligence.

Gender and Ethnicity Bias of Text-to-Image Generative Artificial Intelligence in Medical Imaging, Part 2: Analysis of DALL-E 3.

Currie G, Hewis J, Hawk E, Rohren E

PubMed · Jun 4, 2025
Disparities in gender and ethnicity remain an issue across medicine and health science. Only 26%-35% of trainee radiologists are female, despite more than 50% of medical students being female. Similar gender disparities are evident across the medical imaging professions. Generative artificial intelligence text-to-image production could reinforce or amplify gender biases. Methods: In March 2024, DALL-E 3 was utilized via GPT-4 to generate a series of individual and group images of medical imaging professionals: radiologist, nuclear medicine physician, radiographer, nuclear medicine technologist, medical physicist, radiopharmacist, and medical imaging nurse. Multiple iterations of images were generated using a variety of prompts. Collectively, 120 images were produced for evaluation of 524 characters. All images were independently analyzed by 3 expert reviewers from the medical imaging professions for apparent gender and skin tone. Results: Collectively (individual and group images), 57.4% (n = 301) of medical imaging professionals were depicted as male, 42.4% (n = 222) as female, and 91.2% (n = 478) as having a light skin tone. Male gender representation was 65% for radiologists, 62% for nuclear medicine physicians, 52% for radiographers, 56% for nuclear medicine technologists, 62% for medical physicists, 53% for radiopharmacists, and 26% for medical imaging nurses. For all professions, this overrepresents men compared with women. There was no representation of persons with a disability. Conclusion: This evaluation reveals a significant overrepresentation of the male gender associated with generative artificial intelligence text-to-image production using DALL-E 3 across the medical imaging professions. Generated images have a disproportionately high representation of white men, which is not representative of the diversity of the medical imaging professions.
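The headline percentages follow directly from the reported character counts; the check below is a reader's arithmetic verification, not study code.

```python
# Reported counts over the 524 evaluated characters
total = 524
counts = {"male": 301, "female": 222, "light skin tone": 478}

for label, count in counts.items():
    print(f"{label}: {count}/{total} = {100 * count / total:.1f}%")
# male: 57.4%, female: 42.4%, light skin tone: 91.2%  (matches the abstract)
```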

Average Calibration Losses for Reliable Uncertainty in Medical Image Segmentation

Theodore Barfoot, Luis C. Garcia-Peraza-Herrera, Samet Akcay, Ben Glocker, Tom Vercauteren

arXiv preprint · Jun 4, 2025
Deep neural networks for medical image segmentation are often overconfident, compromising both reliability and clinical utility. In this work, we propose differentiable formulations of marginal L1 Average Calibration Error (mL1-ACE) as an auxiliary loss that can be computed on a per-image basis. We compare both hard- and soft-binning approaches to directly improve pixel-wise calibration. Our experiments on four datasets (ACDC, AMOS, KiTS, BraTS) demonstrate that incorporating mL1-ACE significantly reduces calibration errors, particularly Average Calibration Error (ACE) and Maximum Calibration Error (MCE), while largely maintaining high Dice Similarity Coefficients (DSCs). We find that the soft-binned variant yields the greatest improvements in calibration over the Dice plus cross-entropy loss baseline but often compromises segmentation performance, whereas hard-binned mL1-ACE maintains segmentation performance, albeit with weaker calibration improvement. To gain further insight into calibration performance and its variability across an imaging dataset, we introduce dataset reliability histograms, an aggregation of per-image reliability diagrams. The resulting analysis highlights improved alignment between predicted confidences and true accuracies. Overall, our approach not only enhances the trustworthiness of segmentation predictions but also shows potential for safer integration of deep learning methods into clinical workflows. We share our code here: https://github.com/cai4cai/Average-Calibration-Losses
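Under a plain reading of the abstract, a hard-binned marginal L1 ACE term for a single image could look like the sketch below; the exact binning scheme, class weighting, and reductions used in the paper may differ (the linked repository is authoritative).

```python
import torch

def ml1_ace_hard(probs: torch.Tensor, labels: torch.Tensor, n_bins: int = 10) -> torch.Tensor:
    """Hard-binned marginal L1 Average Calibration Error for one image (a sketch).

    probs:  (C, H, W) softmax probabilities
    labels: (H, W) integer ground-truth mask
    """
    n_classes = probs.shape[0]
    edges = torch.linspace(0, 1, n_bins + 1, device=probs.device)
    per_class = []
    for c in range(n_classes):  # "marginal": calibration is assessed per class
        p = probs[c].flatten()
        y = (labels.flatten() == c).float()
        bin_errs = []
        for b in range(n_bins):
            in_bin = (p > edges[b]) & (p <= edges[b + 1])
            if in_bin.any():
                # |mean accuracy - mean confidence| within the bin; gradients
                # flow through the confidence term, while the hard bin
                # assignment itself is non-differentiable (motivating the
                # paper's soft-binned variant)
                bin_errs.append((y[in_bin].mean() - p[in_bin].mean()).abs())
        if bin_errs:
            per_class.append(torch.stack(bin_errs).mean())
    return torch.stack(per_class).mean()
```

Used as an auxiliary term, the total loss would be something like `dice_ce + lam * ml1_ace_hard(probs, labels)`, with `lam` a hypothetical weighting hyperparameter.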

Advancements in Artificial Intelligence Applications for Cardiovascular Disease Research

Yuanlin Mo, Haishan Huang, Bocheng Liang, Weibo Ma

arXiv preprint · Jun 4, 2025
Recent advancements in artificial intelligence (AI) have revolutionized cardiovascular medicine, particularly through integration with computed tomography (CT), magnetic resonance imaging (MRI), electrocardiography (ECG) and ultrasound (US). Deep learning architectures, including convolutional neural networks and generative adversarial networks, enable automated analysis of medical imaging and physiological signals, surpassing human capabilities in diagnostic accuracy and workflow efficiency. However, critical challenges persist, including the inability to validate input data accuracy, which may propagate diagnostic errors. This review highlights AI's transformative potential in precision diagnostics while underscoring the need for robust validation protocols to ensure clinical reliability. Future directions emphasize hybrid models integrating multimodal data and adaptive algorithms to refine personalized cardiovascular care.

PARADIM: A Platform to Support Research at the Interface of Data Science and Medical Imaging.

Lemaréchal Y, Couture G, Pelletier F, Lefol R, Asselin PL, Ouellet S, Bernard J, Ebrahimpour L, Manem VSK, Topalis J, Schachtner B, Jodogne S, Joubert P, Jeblick K, Ingrisch M, Després P

PubMed · Jun 3, 2025
This paper describes PARADIM, a digital infrastructure designed to support research at the interface of data science and medical imaging, with a focus on Research Data Management best practices. The platform is built from open-source components and rooted in the FAIR principles through strict compliance with the DICOM standard. It addresses key needs in data curation, governance, privacy, and scalable resource management. Supporting every stage of the data science discovery cycle, the platform offers robust functionality for user identity and access management, data de-identification, storage, annotation, and model training and evaluation. Rich metadata are generated throughout the research lifecycle to ensure the traceability and reproducibility of results. PARADIM hosts several medical image collections and allows the automation of large-scale, computationally intensive pipelines (e.g., automatic segmentation, dose calculations, AI model evaluation). The platform fills a gap at the interface of data science and medical imaging, where digital infrastructures are key to the development, evaluation, and deployment of innovative solutions in the real world.
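As one concrete example of the de-identification stage such a platform automates, here is a minimal pydicom sketch. The file names and the handful of tags touched are illustrative and fall far short of a full DICOM de-identification profile, which a production pipeline like PARADIM's would follow.

```python
import pydicom

ds = pydicom.dcmread("scan.dcm")  # hypothetical input file

ds.PatientName = "ANONYMOUS"
ds.PatientID = "SUBJ-0001"  # stable pseudonym assigned by the platform
for keyword in ("PatientBirthDate", "PatientAddress", "ReferringPhysicianName"):
    if keyword in ds:
        delattr(ds, keyword)  # drop directly identifying attributes
ds.remove_private_tags()      # vendor-specific tags often leak identifiers

ds.save_as("scan_deid.dcm")
```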

Open-PMC-18M: A High-Fidelity Large Scale Medical Dataset for Multimodal Representation Learning

Negin Baghbanzadeh, Sajad Ashkezari, Elham Dolatabadi, Arash Afkanpour

arXiv preprint · Jun 3, 2025
Compound figures, which are multi-panel composites containing diverse subfigures, are ubiquitous in biomedical literature, yet large-scale subfigure extraction remains largely unaddressed. Prior work on subfigure extraction has been limited in both dataset size and generalizability, leaving a critical open question: how does high-fidelity image-text alignment via large-scale subfigure extraction impact representation learning in vision-language models? We address this gap by introducing a scalable subfigure extraction pipeline based on transformer-based object detection, trained on a synthetic corpus of 500,000 compound figures, that achieves state-of-the-art performance on both ImageCLEF 2016 and synthetic benchmarks. Using this pipeline, we release OPEN-PMC-18M, a large-scale, high-quality biomedical vision-language dataset comprising 18 million clinically relevant subfigure-caption pairs spanning radiology, microscopy, and visible light photography. We train and evaluate vision-language models on our curated datasets and show improved performance across retrieval, zero-shot classification, and robustness benchmarks, outperforming existing baselines. We release our dataset, models, and code to support reproducible benchmarks and further study of biomedical vision-language modeling and representation learning.
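The detector is the heavy part of such a pipeline; the downstream extraction step is simple. The sketch below crops panels out of a compound figure given detected boxes; the file names, coordinates, and helper function are hypothetical, not the paper's code.

```python
from PIL import Image

def crop_subfigures(figure_path: str, boxes: list[tuple[int, int, int, int]]) -> list[Image.Image]:
    """Crop detected subfigure panels from a compound figure.

    boxes: (left, top, right, bottom) pixel coordinates, e.g. emitted by a
    transformer-based object detector as in the paper's pipeline.
    """
    figure = Image.open(figure_path)
    return [figure.crop(box) for box in boxes]

# Hypothetical two-panel composite
panels = crop_subfigures("compound_figure.png", [(0, 0, 400, 300), (400, 0, 800, 300)])
for i, panel in enumerate(panels):
    panel.save(f"subfigure_{i}.png")  # each panel is then paired with its caption
```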

Evaluating the Diagnostic Accuracy of ChatGPT-4.0 for Classifying Multimodal Musculoskeletal Masses: A Comparative Study with Human Raters.

Bosbach WA, Schoeni L, Beisbart C, Senge JF, Mitrakovic M, Anderson SE, Achangwa NR, Divjak E, Ivanac G, Grieser T, Weber MA, Maurer MH, Sanal HT, Daneshvar K

PubMed · Jun 3, 2025
Novel artificial intelligence tools have the potential to significantly enhance productivity in medicine while maintaining or even improving treatment quality. In this study, we aimed to evaluate the current capability of ChatGPT-4.0 to accurately interpret multimodal musculoskeletal tumor cases. We created 25 cases, each containing images from X-ray, computed tomography, magnetic resonance imaging, or scintigraphy. ChatGPT-4.0 was tasked with classifying each case using a six-option, two-choice question in which both a primary and a secondary diagnosis were allowed. For performance evaluation, human raters also assessed the same cases. When only the primary diagnosis was taken into account, the accuracy of human raters was greater than that of ChatGPT-4.0 by a factor of nearly 2 (87% vs. 44%). However, in a setting that also considered secondary diagnoses, the performance gap shrank substantially (accuracy: 94% vs. 71%). A power analysis relying on Cohen's w confirmed the adequacy of the sample size (n = 25). The tested artificial intelligence tool demonstrated lower performance than human raters. Considering factors such as speed, constant availability, and potential future improvements, it appears plausible that artificial intelligence tools could serve as valuable assistance systems for doctors in future clinical settings.

Key points:
· ChatGPT-4.0 classifies musculoskeletal cases using multimodal imaging inputs.
· Human raters outperform AI in primary-diagnosis accuracy by a factor of nearly two.
· Including secondary diagnoses improves AI performance and narrows the gap.
· AI demonstrates potential as an assistive tool in future radiological workflows.
· Power analysis confirms robustness of study findings with the current sample size.

Citation: Bosbach WA, Schoeni L, Beisbart C et al. Evaluating the Diagnostic Accuracy of ChatGPT-4.0 for Classifying Multimodal Musculoskeletal Masses: A Comparative Study with Human Raters. Rofo 2025; DOI 10.1055/a-2594-7085.
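The jump from 44% to 71% is purely a scoring-rule effect, which the toy sketch below makes explicit: a case counts as correct under the strict rule only if the primary diagnosis matches, but under the lenient rule if either diagnosis matches. The case data are invented for illustration, not the study's results.

```python
# (ground_truth, primary_guess, secondary_guess) -- invented toy cases
cases = [
    ("osteosarcoma", "osteosarcoma", "Ewing sarcoma"),
    ("enchondroma", "chondrosarcoma", "enchondroma"),
    ("osteochondroma", "osteochondroma", "enchondroma"),
    ("giant cell tumor", "chondrosarcoma", "osteosarcoma"),
]

primary_acc = sum(t == p for t, p, _ in cases) / len(cases)
either_acc = sum(t in (p, s) for t, p, s in cases) / len(cases)
print(f"primary-only accuracy: {primary_acc:.0%}")  # 50%
print(f"primary-or-secondary:  {either_acc:.0%}")   # 75%
```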

A Review of Intracranial Aneurysm Imaging Modalities, from CT to State-of-the-Art MR.

Allaw S, Khabaz K, Given TC, Montas D, Alcazar-Felix RJ, Srinath A, Kass-Hout T, Carroll TJ, Hurley MC, Polster SP

PubMed · Jun 3, 2025
Traditional guidance for intracranial aneurysm (IA) management is dichotomized by rupture status. Fundamental to the management of ruptured aneurysms is the detection and treatment of SAH, along with securing the aneurysm by the safest technique. On the other hand, unruptured aneurysms first require a careful assessment of their natural history versus treatment risk, including an imaging assessment of aneurysm size, location, and morphology, along with additional evidence-based risk factors such as smoking, hypertension, and family history. Unfortunately, a large proportion of ruptured aneurysms are in the lower-risk size category (<7 mm), putting a premium on discovering a more refined noninvasive biomarker to detect and stratify aneurysm instability before rupture. In this review of aneurysm work-up, we cover the gamut of established imaging modalities (eg, CT, CTA, DSA, FLAIR, 3D TOF-MRA, contrast-enhanced MRA) as well as more novel MR techniques (MR vessel wall imaging, dynamic contrast-enhanced MRI, computational fluid dynamics). Additionally, we evaluate the current landscape of artificial intelligence software and its integration into diagnostic and risk-stratification pipelines for IAs. These advanced MR techniques, increasingly complemented by artificial intelligence models, offer a paradigm shift by evaluating factors beyond size and morphology, including vessel wall inflammation, permeability, and hemodynamics. We also provide our institution's scan parameters for many of these modalities as a reference. Ultimately, this review provides an organized, up-to-date summary of the array of available modalities and sequences for IA imaging to help build protocols focused on IA characterization.