Page 2 of 659 results

Emergency radiology: roadmap for radiology departments.

Aydin S, Ece B, Cakmak V, Kocak B, Onur MR

PubMed · Jun 20 2025
Emergency radiology has evolved into a significant subspecialty over the past 2 decades, facing unique challenges including escalating imaging volumes, increasing study complexity, and heightened expectations from clinicians and patients. This review provides a comprehensive overview of the key requirements for an effective emergency radiology unit. Emergency radiologists play a crucial role in real-time decision-making by providing continuous 24/7 support, requiring expertise across various organ systems and close collaboration with emergency physicians and specialists. Beyond image interpretation, emergency radiologists are responsible for organizing staff schedules, planning equipment, determining imaging protocols, and establishing standardized reporting systems. Operational considerations in emergency radiology departments include efficient scheduling models such as circadian-based scheduling, strategic equipment organization with primary imaging modalities positioned near emergency departments, and effective imaging management through structured ordering systems and standardized protocols. Preparedness for mass casualty incidents requires a well-organized workflow process map detailing steps from patient transfer to image acquisition and interpretation, with clear task allocation and imaging pathways. Collaboration between emergency radiologists and physicians is essential, with accurate communication facilitated through various channels and structured reporting templates. Artificial intelligence has emerged as a transformative tool in emergency radiology, offering potential benefits in both interpretative domains (detecting intracranial hemorrhage, pulmonary embolism, acute ischemic stroke) and non-interpretative applications (triage systems, protocol assistance, quality control). Despite implementation challenges including clinician skepticism, financial considerations, and ethical issues, AI can enhance diagnostic accuracy and workflow optimization. 
Teleradiology provides solutions for staff shortages, particularly during off-hours, with hybrid models allowing radiologists to work both on-site and remotely. This review aims to guide stakeholders in establishing and maintaining efficient emergency radiology services to improve patient outcomes.

Enhancing Free-hand 3D Photoacoustic and Ultrasound Reconstruction using Deep Learning.

Lee S, Kim S, Seo M, Park S, Imrus S, Ashok K, Lee D, Park C, Lee S, Kim J, Yoo JH, Kim M

PubMed · Jun 13 2025
This study introduces a motion-based learning network with a global-local self-attention module (MoGLo-Net) to enhance 3D reconstruction in handheld photoacoustic and ultrasound (PAUS) imaging. Standard PAUS imaging is often limited by a narrow field of view (FoV) and an inability to effectively visualize complex 3D structures. The 3D freehand technique, which aligns sequential 2D images for 3D reconstruction, faces significant challenges in accurate motion estimation without relying on external positional sensors. MoGLo-Net addresses these limitations through an innovative adaptation of the self-attention mechanism, which exploits critical regions, such as fully developed speckle areas or highly echogenic tissue regions, within successive ultrasound images to accurately estimate motion parameters. This facilitates the extraction of intricate features from individual frames. Additionally, we employ a patch-wise correlation operation to generate a correlation volume that is highly correlated with the scanning motion. A custom loss function was also developed to ensure robust learning with minimized bias, leveraging the characteristics of the motion parameters. Experimental evaluations demonstrated that MoGLo-Net surpasses current state-of-the-art methods in both quantitative and qualitative performance metrics. Furthermore, we expanded the application of 3D reconstruction technology beyond simple B-mode ultrasound volumes to incorporate Doppler ultrasound and photoacoustic imaging, enabling 3D visualization of vasculature. The source code for this study is publicly available at: https://github.com/pnu-amilab/US3D.
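The patch-wise correlation idea can be illustrated with a toy sketch: compare a small window of one frame against displaced windows of the next frame via normalized cross-correlation, yielding one slice of a correlation volume whose peak indicates the in-plane motion. This is a minimal, pure-Python illustration, not the paper's implementation; the function name and defaults are hypothetical.

```python
import math

def patch_correlation(frame_a, frame_b, y, x, patch=3, search=2):
    """Correlate a (patch x patch) window of frame_a centered at (y, x) with
    windows of frame_b displaced by up to `search` pixels, using normalized
    cross-correlation (NCC). Returns a (2*search+1) x (2*search+1) grid;
    the peak entry indicates the most likely displacement."""
    r = patch // 2

    def window(img, cy, cx):
        # Flatten the square window centered at (cy, cx) into a list.
        return [img[cy + dy][cx + dx]
                for dy in range(-r, r + 1) for dx in range(-r, r + 1)]

    def ncc(p, q):
        # Normalized cross-correlation of two equal-length patches.
        mp, mq = sum(p) / len(p), sum(q) / len(q)
        num = sum((a - mp) * (b - mq) for a, b in zip(p, q))
        den = math.sqrt(sum((a - mp) ** 2 for a in p) *
                        sum((b - mq) ** 2 for b in q))
        return num / den if den else 0.0

    ref = window(frame_a, y, x)
    return [[ncc(ref, window(frame_b, y + dy, x + dx))
             for dx in range(-search, search + 1)]
            for dy in range(-search, search + 1)]
```

A real network would compute such volumes densely over feature maps and regress 6-DoF motion from them; here the NCC peak alone already localizes a pure translation between frames.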

MRI-CORE: A Foundation Model for Magnetic Resonance Imaging

Haoyu Dong, Yuwen Chen, Hanxue Gu, Nicholas Konz, Yaqian Chen, Qihang Li, Maciej A. Mazurowski

arXiv preprint · Jun 13 2025
The widespread use of Magnetic Resonance Imaging (MRI) and the rise of deep learning have enabled the development of powerful predictive models for a wide range of diagnostic tasks in MRI, such as image classification or object segmentation. However, training models for specific new tasks often requires large amounts of labeled data, which is difficult to obtain due to high annotation costs and data privacy concerns. To circumvent this issue, we introduce MRI-CORE (MRI COmprehensive Representation Encoder), a vision foundation model pre-trained using more than 6 million slices from over 110,000 MRI volumes across 18 main body locations. Experiments on five diverse object segmentation tasks in MRI demonstrate that MRI-CORE can significantly improve segmentation performance in realistic scenarios with limited labeled data availability, achieving an average gain of 6.97% in 3D Dice coefficient using only 10 annotated slices per task. We further demonstrate new model capabilities in MRI such as classification of image properties including body location, sequence type, and institution, and zero-shot segmentation. These results highlight the value of MRI-CORE as a generalist vision foundation model for MRI, potentially lowering the data annotation resource barriers for many applications.
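The Dice coefficient used to report the segmentation gains is straightforward to compute. A minimal sketch over flat binary masks (the function name and the empty-mask convention are choices made here, not taken from the paper):

```python
def dice_coefficient(pred, target):
    """Dice similarity between two binary masks given as flat 0/1 sequences.
    Dice = 2|A ∩ B| / (|A| + |B|); by convention here, two empty masks
    score 1.0 (perfect agreement on 'nothing to segment')."""
    inter = sum(p * t for p, t in zip(pred, target))  # |A ∩ B|
    total = sum(pred) + sum(target)                   # |A| + |B|
    return 2.0 * inter / total if total else 1.0
```

For 3D volumes, the same formula applies after flattening the voxel grids; a "6.97% gain" then means the mean of this score across test volumes rose by that amount.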

HSENet: Hybrid Spatial Encoding Network for 3D Medical Vision-Language Understanding

Yanzhao Shi, Xiaodan Zhang, Junzhong Ji, Haoning Jiang, Chengxin Zheng, Yinong Wang, Liangqiong Qu

arXiv preprint · Jun 11 2025
Automated 3D CT diagnosis empowers clinicians to make timely, evidence-based decisions by enhancing diagnostic accuracy and workflow efficiency. While multimodal large language models (MLLMs) exhibit promising performance in visual-language understanding, existing methods mainly focus on 2D medical images, which fundamentally limits their ability to capture complex 3D anatomical structures. This limitation often leads to misinterpretation of subtle pathologies and causes diagnostic hallucinations. In this paper, we present Hybrid Spatial Encoding Network (HSENet), a framework that exploits enriched 3D medical visual cues through effective visual perception and projection for accurate and robust vision-language understanding. Specifically, HSENet employs dual-3D vision encoders to perceive both global volumetric contexts and fine-grained anatomical details, which are pre-trained by dual-stage alignment with diagnostic reports. Furthermore, we propose Spatial Packer, an efficient multimodal projector that condenses high-resolution 3D spatial regions into a compact set of informative visual tokens via centroid-based compression. By pairing spatial packers with the dual-3D vision encoders, HSENet can seamlessly perceive and transfer hybrid visual representations to the LLM's semantic space, facilitating accurate diagnostic text generation. Experimental results demonstrate that our method achieves state-of-the-art performance in 3D language-visual retrieval (39.85% of R@100, +5.96% gain), 3D medical report generation (24.01% of BLEU-4, +8.01% gain), and 3D visual question answering (73.60% of Major Class Accuracy, +1.99% gain), confirming its effectiveness. Our code is available at https://github.com/YanzhaoShi/HSENet.
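The general idea of centroid-based token compression can be sketched with plain k-means: a long list of token vectors is summarized by a handful of centroids, each standing in for the tokens assigned to it. This is a toy, pure-Python stand-in under the assumption that the compression is k-means-like (HSENet's actual Spatial Packer may differ); `compress_tokens` and its deterministic first-k initialization are choices made here.

```python
def compress_tokens(tokens, k, iters=10):
    """Reduce a list of equal-length token vectors to k centroids using
    Lloyd's k-means iterations, initialized from the first k tokens."""
    centroids = [list(t) for t in tokens[:k]]
    for _ in range(iters):
        # Assign each token to its nearest centroid (squared Euclidean).
        groups = [[] for _ in range(k)]
        for t in tokens:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(t, centroids[i])))
            groups[j].append(t)
        # Move each centroid to the mean of its assigned tokens.
        for i, g in enumerate(groups):
            if g:  # keep the old centroid if its group emptied out
                centroids[i] = [sum(dim) / len(g) for dim in zip(*g)]
    return centroids
```

Feeding k centroids instead of all tokens to the language model shrinks the visual sequence length from N to k while preserving the coarse spatial structure.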

Diagnostic and Technological Advances in Magnetic Resonance (Focusing on Imaging Technique and the Gadolinium-Based Contrast Media), Computed Tomography (Focusing on Photon Counting CT), and Ultrasound: State of the Art.

Runge VM, Heverhagen JT

PubMed · Jun 9 2025
Magnetic resonance continues to evolve and advance as a critical imaging modality for disease diagnosis and monitoring. Hardware and software advances continue to propel this modality to the forefront of the field of diagnostic imaging. Next-generation MR contrast media, specifically gadolinium chelates with improved relaxivity and stability (relative to the provided contrast effect), have emerged, providing a further boost to the field. Concern regarding gadolinium deposition in the body, primarily with the weaker gadolinium chelates (which have now been removed from the market, at least in Europe), continues to be at the forefront of clinicians' minds. This has driven renewed interest in the possible development of manganese-based contrast media. The development of photon counting CT and its clinical introduction have made possible a further major advance in CT image quality, along with the potential for decreasing radiation dose. The possibility of major clinical advances in thoracic, cardiac, and musculoskeletal imaging was first recognized, with its broader impact, across all organ systems, now also recognized. The utility of routine acquisition (without penalty in time or radiation dose) of full spectral multi-energy data is now also being recognized as an additional major advance made possible by photon counting CT. Artificial intelligence is now being used in the background across most imaging platforms and modalities, making possible further advances in imaging technique and image quality, although this field is nowhere near realizing its full potential. And last, but not least, the field of ultrasound is on the cusp of further major advances in availability (with the development of very low-cost systems) and a possible new generation of microbubble contrast media.

Dose to circulating blood in intensity-modulated total body irradiation, total marrow irradiation, and total marrow and lymphoid irradiation.

Guo B, Cherian S, Murphy ES, Sauter CS, Sobecks RM, Rotz S, Hanna R, Scott JG, Xia P

PubMed · Jun 8 2025
Multi-isocentric intensity-modulated (IM) total body irradiation (TBI), total marrow irradiation (TMI), and total marrow and lymphoid irradiation (TMLI) are gaining popularity. A question arises on the impact of the interplay between blood circulation and dynamic delivery on blood dose. This study answers the question by introducing a new whole-body blood circulation modeling technique. A whole-body CT with intravenous contrast was used to develop the blood circulation model. Fifteen organs and tissues, heart chambers, and great vessels were segmented using a deep-learning-based auto-contouring software. The main blood vessels were segmented using an in-house algorithm. Blood density, velocity, time-to-heart, and perfusion distributions were derived for systole, diastole, and portal circulations and used to simulate trajectories of blood particles during delivery. With the same prescription of 12 Gy in 8 fractions, doses to circulating blood were calculated for three plans: (1) an IM-TBI plan prescribing uniform dose to the whole body while reducing lung and kidney doses; (2) a TMI plan treating all bones; and (3) a TMLI plan treating all bones, major lymph nodes, and spleen; TMI and TMLI plans were optimized to reduce doses to non-target tissue. Circulating blood received 1.57 ± 0.43 Gy, 1.04 ± 0.32 Gy, and 1.09 ± 0.32 Gy in one fraction and 12.60 ± 1.21 Gy, 8.34 ± 0.88 Gy, and 8.71 ± 0.92 Gy in 8 fractions in IM-TBI, TMI, and TMLI, respectively. The interplay effect of blood motion with IM delivery did not change the mean dose, but changed the dose heterogeneity of the circulating blood. Fractionation reduced the blood dose heterogeneity. A novel whole-body blood circulating model was developed based on patient-specific anatomy and realistic blood dynamics, concentration, and perfusion. Using the blood circulation model, we developed a dosimetry tool for circulating blood in IM-TBI, TMI, and TMLI.
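The interplay effect reported above, that blood motion leaves the mean dose unchanged while increasing per-particle dose spread, can be reproduced in a deliberately tiny model: particles travel around a circulation loop while a dynamic beam irradiates one loop position per time step. This is a toy illustration of the interplay concept only, not the paper's patient-specific circulation model; `simulate_blood_dose` and its loop geometry are invented for the sketch.

```python
def simulate_blood_dose(n_particles, n_steps, dose_rate, velocity):
    """Accumulate dose for blood 'particles' on a circular loop of
    len(dose_rate) positions while a beam sweeps the loop one position
    per time step. Returns the per-particle accumulated doses."""
    loop = len(dose_rate)
    doses = []
    for p in range(n_particles):
        pos = p % loop          # spread particles evenly around the loop
        total = 0.0
        for step in range(n_steps):
            if pos == step % loop:       # particle sits in the beam this step
                total += dose_rate[pos]
            pos = (pos + velocity) % loop  # blood advances along the loop
        doses.append(total)
    return doses
```

With `velocity=1` a particle moving in step with the beam soaks up every pulse while the others receive none; with `velocity=0` every particle gets the same dose. The total (and hence mean) dose is identical in both cases, so only the heterogeneity changes, which is the qualitative behavior the study reports.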

De-identification of medical imaging data: a comprehensive tool for ensuring patient privacy.

Rempe M, Heine L, Seibold C, Hörst F, Kleesiek J

PubMed · Jun 7 2025
Medical imaging data employed in research frequently comprises sensitive Protected Health Information (PHI) and Personally Identifiable Information (PII), which is subject to rigorous legal frameworks such as the General Data Protection Regulation (GDPR) or the Health Insurance Portability and Accountability Act (HIPAA). Consequently, these types of data must be de-identified prior to utilization, which presents a significant challenge for many researchers. Given the vast array of medical imaging data, it is necessary to employ a variety of de-identification techniques. To facilitate the de-identification process for medical imaging data, we have developed an open-source tool that can be used to de-identify Digital Imaging and Communications in Medicine (DICOM) magnetic resonance images, computed tomography images, whole slide images, and magnetic resonance TWIX raw data. Furthermore, the implementation of a neural network enables the removal of text within the images. The proposed tool achieves results comparable to current state-of-the-art algorithms at reduced computational time (up to 265×). The tool also manages to fully de-identify image data of various types, such as Neuroimaging Informatics Technology Initiative (NIfTI) files or Whole Slide Image (WSI) DICOMs. The proposed tool automates an elaborate de-identification pipeline for multiple types of inputs, reducing the need for additional tools used for de-identification of imaging data. Question How can researchers effectively de-identify sensitive medical imaging data while complying with legal frameworks to protect patient health information? Findings We developed an open-source tool that automates the de-identification of various medical imaging formats, enhancing the efficiency of de-identification processes.
Clinical relevance This tool addresses the critical need for robust and user-friendly de-identification solutions in medical imaging, facilitating data exchange in research while safeguarding patient privacy.
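The core of metadata de-identification is replacing or blanking PHI-bearing header fields while leaving technical fields untouched. A minimal dict-based sketch of that idea (this is not the authors' tool; real DICOM headers use numeric (group, element) tags and the full attribute list in the DICOM confidentiality profiles is much longer than the hypothetical `PHI_TAGS` set shown here):

```python
# Hypothetical subset of PHI-bearing header fields for illustration.
PHI_TAGS = {"PatientName", "PatientID", "PatientBirthDate",
            "InstitutionName", "ReferringPhysicianName"}

def deidentify(header, replacement="ANONYMIZED"):
    """Return a copy of a DICOM-like header dict with PHI fields replaced.
    Date fields are blanked rather than replaced so the value still looks
    date-typed to downstream parsers."""
    clean = {}
    for tag, value in header.items():
        if tag in PHI_TAGS:
            clean[tag] = "" if tag.endswith("Date") else replacement
        else:
            clean[tag] = value  # technical fields pass through unchanged
    return clean
```

Burned-in text inside the pixel data is the harder half of the problem, which is why the tool described above adds a neural network for in-image text removal on top of header scrubbing.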

Deep learning-enabled MRI phenotyping uncovers regional body composition heterogeneity and disease associations in two European population cohorts

Mertens, C. J., Haentze, H., Ziegelmayer, S., Kather, J. N., Truhn, D., Kim, S. H., Busch, F., Weller, D., Wiestler, B., Graf, M., Bamberg, F., Schlett, C. L., Weiss, J. B., Ringhof, S., Can, E., Schulz-Menger, J., Niendorf, T., Lammert, J., Molwitz, I., Kader, A., Hering, A., Meddeb, A., Nawabi, J., Schulze, M. B., Keil, T., Willich, S. N., Krist, L., Hadamitzky, M., Hannemann, A., Bassermann, F., Rueckert, D., Pischon, T., Hapfelmeier, A., Makowski, M. R., Bressem, K. K., Adams, L. C.

medRxiv preprint · Jun 6 2025
Body mass index (BMI) does not account for substantial inter-individual differences in regional fat and muscle compartments, which are relevant for the prevalence of cardiometabolic and cancer conditions. We applied a validated deep learning pipeline for automated segmentation of whole-body MRI scans in 45,851 adults from the UK Biobank and German National Cohort, enabling harmonized quantification of visceral (VAT), gluteofemoral (GFAT), and abdominal subcutaneous adipose tissue (ASAT), liver fat fraction (LFF), and trunk muscle volume. Associations with clinical conditions were evaluated using compartment measures adjusted for age, sex, height, and BMI. Our analysis demonstrates that regional adiposity and muscle volume show distinct associations with cardiometabolic and cancer prevalence, and that substantial disease heterogeneity exists within BMI strata. The analytic framework and reference data presented here will support future risk stratification efforts and facilitate the integration of automated MRI phenotyping into large-scale population and clinical research.

Clinical validation of a deep learning model for low-count PET image enhancement.

Long Q, Tian Y, Pan B, Xu Z, Zhang W, Xu L, Fan W, Pan T, Gong NJ

PubMed · Jun 5 2025
To investigate the effects of the deep learning model RaDynPET on fourfold reduced-count whole-body PET examinations. A total of 120 patients (84 in the internal cohort and 36 in the external cohort) undergoing <sup>18</sup>F-FDG PET/CT examinations were enrolled. PET images were reconstructed using the OSEM algorithm with 120-s (G120) and 30-s (G30) list-mode data. RaDynPET was developed to generate enhanced images (R30) from G30. Two experienced nuclear medicine physicians independently evaluated subjective image quality using a 5-point Likert scale. Standardized uptake values (SUV), standard deviations, liver signal-to-noise ratio (SNR), lesion tumor-to-background ratio (TBR), and contrast-to-noise ratio (CNR) were compared. Subgroup analyses evaluated performance across demographics, and lesion detectability was evaluated using external datasets. RaDynPET was also compared to other deep learning methods. In the internal cohort, R30 demonstrated significantly higher image quality scores than G30 and G120. R30 showed excellent agreement with G120 for liver and lesion SUV values and surpassed G120 in liver SNR and CNR. Liver SNR and CNR of R30 were comparable to G120 in the thin group, and the CNR of R30 was comparable to G120 in the young age group. In the external cohort, R30 maintained strong SUV agreement with G120, with lesion-level sensitivity and specificity of 95.45% and 98.41%, respectively. There was no statistical difference in lesion detection between R30 and G120. RaDynPET achieved the highest PSNR and SSIM among deep learning methods. The RaDynPET model effectively restored high image quality while maintaining SUV agreement for <sup>18</sup>F-FDG PET scans acquired in 25% of the standard acquisition time.
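The comparison metrics (liver SNR, lesion TBR, CNR) reduce to simple ROI statistics. A sketch under common conventions, noting that definitions vary between papers (in particular which region's standard deviation appears in the CNR denominator), so this is one reasonable reading rather than the study's exact formulas:

```python
from statistics import mean, stdev

def pet_metrics(lesion_roi, background_roi, liver_roi):
    """Image-quality metrics for comparing PET reconstructions, computed
    from ROI voxel values (in SUV):
      liver SNR = mean(liver) / SD(liver)
      TBR       = mean(lesion) / mean(background)
      CNR       = (mean(lesion) - mean(background)) / SD(background)"""
    return {
        "liver_snr": mean(liver_roi) / stdev(liver_roi),
        "tbr": mean(lesion_roi) / mean(background_roi),
        "cnr": (mean(lesion_roi) - mean(background_roi)) / stdev(background_roi),
    }
```

Under these definitions, denoising that lowers the liver and background standard deviations without shifting mean SUV raises SNR and CNR while leaving TBR unchanged, which matches the pattern of results reported for R30 versus G30.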

Retrieval-Augmented Generation with Large Language Models in Radiology: From Theory to Practice.

Fink A, Rau A, Reisert M, Bamberg F, Russe MF

PubMed · Jun 4 2025
Large language models (LLMs) hold substantial promise in addressing the growing workload in radiology, but recent studies also reveal limitations, such as hallucinations and opacity in the sources for LLM responses. Retrieval-augmented generation (RAG)-based LLMs offer a promising approach to streamlining radiology workflows by integrating reliable, verifiable, and customizable information. Ongoing refinement is critical to enable RAG models to manage large amounts of input data and to engage in complex multiagent dialogues. This report provides an overview of recent advances in LLM architecture, including few-shot and zero-shot learning, RAG integration, multistep reasoning, and agentic RAG, and identifies future research directions. Exemplary cases demonstrate the practical application of these techniques in radiology practice. ©RSNA, 2025.
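The RAG pattern itself is compact: retrieve the most relevant reference material for a query, then build a prompt that instructs the model to answer only from that material. A minimal sketch using word overlap as a stand-in for the embedding similarity a real system would use; `retrieve` and `build_prompt` are illustrative names, not from the report:

```python
def retrieve(query, documents, k=1):
    """Rank documents by word-overlap with the query (a crude proxy for
    embedding similarity) and return the top k matches."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Assemble an LLM prompt grounded in the retrieved context, so the
    answer can be traced to verifiable sources rather than hallucinated."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

In a radiology deployment, `documents` would be chunks of guidelines, protocols, or prior reports, and the grounding instruction is what gives RAG its advantage on source transparency over a bare LLM.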