
Placenta segmentation redefined: review of deep learning integration of magnetic resonance imaging and ultrasound imaging.

Jittou A, Fazazy KE, Riffi J

PubMed paper · Jul 15, 2025
Placental segmentation is critical for the quantitative analysis of prenatal imaging applications. However, segmenting the placenta in magnetic resonance imaging (MRI) and ultrasound is challenging because of variations in fetal position, dynamic placental development, and image quality. Most segmentation methods define regions of interest with different shapes and intensities, encompassing the entire placenta or specific structures. Recently, deep learning has emerged as a key approach that offers high segmentation performance across diverse datasets. This review focuses on recent advances in deep learning techniques for placental segmentation in medical imaging, specifically the MRI and ultrasound modalities, and covers studies from 2019 to 2024. It synthesizes recent research, expands knowledge in this innovative area, and highlights the potential of deep learning approaches to significantly enhance prenatal diagnostics. The findings emphasize the importance of selecting imaging modalities and model architectures tailored to specific clinical scenarios. In addition, integrating MRI and ultrasound can enhance segmentation performance by leveraging complementary information. The review also discusses the challenges associated with the high costs and limited availability of advanced imaging technologies. It provides insights into the current state of placental segmentation techniques and their implications for improving maternal and fetal health outcomes, underscoring the transformative impact of deep learning on prenatal diagnostics.
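For readers new to the area, segmentation performance in studies like those reviewed here is most often reported as the Dice similarity coefficient. Below is a minimal NumPy sketch of that metric; the mask shapes and random inputs are illustrative placeholders, not data from any reviewed study.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks.

    pred, truth: boolean or {0,1} arrays of identical shape
    (e.g., a 2D ultrasound slice or a 3D MRI volume).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection) / (pred.sum() + truth.sum() + eps)

# Illustrative usage with random masks (stand-ins for a model's output
# and a manual annotation).
rng = np.random.default_rng(0)
pred_mask = rng.random((128, 128)) > 0.5
true_mask = rng.random((128, 128)) > 0.5
print(f"Dice: {dice_coefficient(pred_mask, true_mask):.3f}")
```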

Assessment of local recurrence risk in extremity high-grade osteosarcoma through multimodality radiomics integration.

Luo Z, Liu R, Li J, Ye Q, Zhou Z, Shen X

PubMed paper · Jul 15, 2025
Background: A timely assessment of local recurrence (LoR) risk in extremity high-grade osteosarcoma is crucial for optimizing treatment strategies and improving patient outcomes.
Purpose: To explore the potential of machine-learning algorithms for predicting LoR in patients with osteosarcoma.
Material and Methods: Data were collected from patients with high-grade osteosarcoma who underwent preoperative radiography and multiparametric magnetic resonance imaging (MRI). Machine-learning models were developed and trained on this dataset to predict LoR. The study involved selecting relevant features, training the models, and evaluating their performance using the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC). DeLong's test was used to compare AUCs.
Results: The performance (AUC, sensitivity, specificity, and accuracy) of the four classifiers (random forest [RF], support vector machine, logistic regression, and extreme gradient boosting) using radiograph-MRI image inputs was stable (all Hosmer-Lemeshow indices >0.05), with fair to good prognostic efficacy. The RF classifier trained on combined radiograph-MRI features performed better (AUC = 0.806 and 0.868) than the MRI-only (AUC = 0.774 and 0.771) and radiograph-only (AUC = 0.613 and 0.627) models in the training and testing sets, respectively (P < 0.05), while the other three classifiers showed no difference between the MRI-only and radiograph-MRI models.
Conclusion: This study provides valuable insights into the use of machine learning for predicting LoR in osteosarcoma patients. The findings emphasize the potential of integrating multimodality radiomics data with machine-learning algorithms to improve prognostic assessments.
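To make the workflow concrete, here is a minimal sketch of the kind of multimodal radiomics pipeline the abstract describes: a random forest trained on concatenated radiograph and MRI feature vectors and scored by AUC. The feature counts, cohort size, and labels below are synthetic placeholders; the study's actual feature extraction and DeLong comparison are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical radiomics features: 120 patients, 50 radiograph + 80 MRI features.
X_radiograph = rng.normal(size=(120, 50))
X_mri = rng.normal(size=(120, 80))
y = rng.integers(0, 2, size=120)            # 1 = local recurrence, 0 = none

# Inject a weak signal into a few MRI features so the toy AUC sits above chance.
X_mri[:, :3] += y[:, None] * 0.8

# Multimodal model: concatenate the two feature sets, mirroring the
# radiograph-MRI input described in the abstract.
X_combined = np.hstack([X_radiograph, X_mri])
X_train, X_test, y_train, y_test = train_test_split(
    X_combined, y, test_size=0.3, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
probs = clf.predict_proba(X_test)[:, 1]
print(f"Test AUC: {roc_auc_score(y_test, probs):.3f}")
```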

A literature review of radio-genomics in breast cancer: Lessons and insights for low and middle-income countries.

Mooghal M, Shaikh K, Shaikh H, Khan W, Siddiqui MS, Jamil S, Vohra LM

PubMed paper · Jul 15, 2025
To improve precision medicine in breast cancer (BC) decision-making, radio-genomics is an emerging branch of artificial intelligence (AI) that links cancer characteristics assessed radiologically with the histopathology and genomic properties of the tumour. By employing MRIs, mammograms, and ultrasounds to uncover distinctive radiomics traits that potentially predict genomic abnormalities, this review attempts to identify literature that links AI-based models with the genetic mutations discovered in BC patients. The review's findings can be used to create AI-based population models for low and middle-income countries (LMIC) and to evaluate how well they predict outcomes for our cohort.

Magnetic resonance imaging (MRI) appears to be the modality employed most frequently for radio-genomics research in BC patients in our systematic analysis. According to the papers we analysed, AI can identify genetic markers and mutations linked to imaging traits, such as tumour size, shape, and enhancement patterns, as well as to the clinical outcomes of treatment response, disease progression, and survival. The use of radio-genomics can help LMICs overcome some of the barriers that keep the general population from accessing high-quality cancer care, thereby improving health outcomes for BC patients in these regions. It is imperative to ensure that emerging technologies are used responsibly, in a way that is accessible to and affordable for all patients, regardless of their socio-economic condition.

Generative AI enables medical image segmentation in ultra low-data regimes.

Zhang L, Jindal B, Alaa A, Weinreb R, Wilson D, Segal E, Zou J, Xie P

PubMed paper · Jul 14, 2025
Semantic segmentation of medical images is pivotal in applications like disease diagnosis and treatment planning. While deep learning automates this task effectively, it struggles in ultra low-data regimes due to the scarcity of annotated segmentation masks. To address this, we propose a generative deep learning framework that produces high-quality image-mask pairs as auxiliary training data. Unlike traditional generative models that separate data generation from model training, ours uses multi-level optimization for end-to-end data generation. This allows segmentation performance to guide the generation process, producing data tailored to improve segmentation outcomes. Our method demonstrates strong generalization across 11 medical image segmentation tasks and 19 datasets, covering various diseases, organs, and modalities. It improves performance by 10-20% (absolute) in both same- and out-of-domain settings and requires 8-20 times less training data than existing approaches. This greatly enhances the feasibility and cost-effectiveness of deep learning in data-limited medical imaging scenarios.
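The multi-level optimization the authors describe can be pictured as a one-step-unrolled bilevel loop: the segmenter takes a differentiable training step on generated image-mask pairs, and the segmenter's loss on the scarce real data is then backpropagated through that step to update the generator. The PyTorch sketch below illustrates the idea only; the architectures, shapes, and inner learning rate are invented stand-ins, not the paper's models.

```python
import torch
import torch.nn as nn
from torch.func import functional_call

IMG, Z = 16, 32  # toy image size and noise dimension

class Generator(nn.Module):
    """Maps noise to a synthetic (image, soft-mask) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(Z, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * IMG * IMG))
    def forward(self, z):
        out = self.net(z).view(-1, 2, IMG, IMG)
        return torch.sigmoid(out[:, :1]), torch.sigmoid(out[:, 1:])

segmenter = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 3, padding=1))
generator = Generator()
opt_gen = torch.optim.Adam(generator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
inner_lr = 0.1

# Stand-ins for the scarce real annotated set (the "ultra low-data regime").
real_imgs = torch.rand(4, 1, IMG, IMG)
real_masks = (torch.rand(4, 1, IMG, IMG) > 0.5).float()

for step in range(50):
    # Inner problem: one differentiable SGD step of the segmenter on
    # generated image-mask pairs.
    fake_imgs, fake_masks = generator(torch.randn(8, Z))
    params = dict(segmenter.named_parameters())
    inner_loss = bce(functional_call(segmenter, params, (fake_imgs,)), fake_masks)
    grads = torch.autograd.grad(inner_loss, params.values(), create_graph=True)
    updated = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}

    # Outer problem: the updated segmenter's loss on the real annotated data
    # guides the generator (hypergradient through the inner update).
    val_loss = bce(functional_call(segmenter, updated, (real_imgs,)), real_masks)
    opt_gen.zero_grad()
    val_loss.backward()
    opt_gen.step()

    # Commit the inner update to the segmenter's actual weights.
    with torch.no_grad():
        for p, g in zip(segmenter.parameters(), grads):
            p -= inner_lr * g
```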

Self-supervised Upsampling for Reconstructions with Generalized Enhancement in Photoacoustic Computed Tomography.

Deng K, Luo Y, Zuo H, Chen Y, Gu L, Liu MY, Lan H, Luo J, Ma C

PubMed paper · Jul 14, 2025
Photoacoustic computed tomography (PACT) is an emerging hybrid imaging modality with potential applications in biomedicine. A major roadblock to the widespread adoption of PACT is the limited number of detectors, which gives rise to spatial aliasing that manifests as streak artifacts in the reconstructed image. A brute-force solution is to increase the number of detectors, which, however, is often undesirable due to escalated costs. In this study, we present a novel self-supervised learning approach to overcome this long-standing challenge. We found that small blocks of PACT channel data show similarity across downsampling rates. Based on this observation, a neural network trained on downsampled data can reliably perform accurate interpolation without requiring densely sampled ground truth, which is typically unavailable in practice. Our method has been validated through numerical simulations, controlled phantom experiments, and ex vivo and in vivo animal tests across multiple PACT systems. We demonstrate that our technique provides an effective and cost-efficient solution to the under-sampling issue in PACT, thereby enhancing the capabilities of this imaging technology.
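The self-supervision scheme can be illustrated in a few lines: further downsample the measured channel data along the detector axis, train a network to restore the dropped channels, then apply the same network at the native scale to interpolate toward a denser virtual array. The PyTorch toy below assumes invented sinogram shapes and a trivial architecture; the paper's block-based processing is not reproduced.

```python
import torch
import torch.nn as nn

# Toy PACT channel data ("sinogram"): [batch, channel, detectors, time samples].
# This stands in for the actually measured, sparsely sampled data.
measured = torch.randn(1, 1, 64, 256)

class ChannelUpsampler(nn.Module):
    """Doubles the detector dimension of a block of channel data."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=(2, 1), mode="bilinear", align_corners=False),
            nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

model = ChannelUpsampler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()

# Self-supervision: drop every other detector from the measured data and
# train the network to recover the channels that were removed. No densely
# sampled ground truth is ever needed.
low = measured[:, :, ::2, :]                  # 64 -> 32 detectors
for step in range(100):
    loss = mse(model(low), measured)
    opt.zero_grad(); loss.backward(); opt.step()

# Inference: apply the same network at the native sampling rate to
# interpolate toward a denser virtual detector array (64 -> 128).
with torch.no_grad():
    dense = model(measured)
print(dense.shape)                            # torch.Size([1, 1, 128, 256])
```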

Human-centered explainability evaluation in clinical decision-making: a critical review of the literature.

Bauer JM, Michalowski M

PubMed paper · Jul 14, 2025
This review paper comprehensively summarizes healthcare provider (HCP) evaluation of explanations produced by explainable artificial intelligence methods to support point-of-care, patient-specific clinical decision-making (CDM) in medical settings. It highlights the critical need for human-centered evaluation approaches grounded in HCPs' CDM needs, processes, and goals. The review was conducted in the Ovid Medline and Scopus databases, following the Institute of Medicine's methodological standards and PRISMA guidelines. Individual studies were appraised using design-specific appraisal tools, and MaxQDA software was used for data extraction and evidence table procedures. Of the 2673 unique records retrieved, 25 were included in the final sample. Studies were excluded if they did not meet this review's definitions of HCP evaluation (1156), healthcare use (995), explainable AI (211), or primary research (285), or if they were not available in English (1). The sample focused primarily on physicians and diagnostic imaging use cases and revealed wide-ranging evaluation measures. The synthesis of the sampled studies suggests a potential common measure of clinical explainability with three indicators: interpretability, fidelity, and clinical value. There is an opportunity to extend current model-centered evaluation approaches with human-centered metrics, supporting the transition into practice. Future research should aim to clarify and expand key concepts in HCP evaluation, propose a comprehensive evaluation model positioned in current theoretical knowledge, and develop a valid instrument to support comparisons.

X-ray2CTPA: leveraging diffusion models to enhance pulmonary embolism classification.

Cahan N, Klang E, Aviram G, Barash Y, Konen E, Giryes R, Greenspan H

PubMed paper · Jul 14, 2025
Chest X-rays, or chest radiographs (CXR), are commonly used for medical diagnostics but typically provide limited imaging detail compared to computed tomography (CT) scans, which offer more detailed and accurate three-dimensional data, particularly contrast-enhanced scans like CT pulmonary angiography (CTPA). However, CT scans entail higher costs, greater radiation exposure, and are less accessible than CXRs. In this work, we explore cross-modal translation from a 2D low contrast-resolution X-ray input to a 3D high contrast- and spatial-resolution CTPA scan. Driven by recent advances in generative AI, we introduce a novel diffusion-based approach to this task. We employ the synthesized 3D images in a classification framework and show improved AUC in a pulmonary embolism (PE) categorization task relative to using the initial CXR input. Furthermore, we evaluate the model's performance using quantitative metrics, ensuring the diagnostic relevance of the generated images. The proposed method is generalizable and capable of performing additional cross-modality translations in medical imaging. It may pave the way for more accessible and cost-effective advanced diagnostic tools. The code for this project is available at https://github.com/NoaCahan/X-ray2CTPA.
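Independent of the diffusion model's internals, the pipeline's wiring looks roughly like the sketch below: a 2D CXR is translated to a 3D volume, which a 3D CNN then scores for PE. The translator here is a deliberately trivial stub (the real diffusion model lives in the linked repository), and all shapes are illustrative.

```python
import torch
import torch.nn as nn

class StubXray2CTPA(nn.Module):
    """Trivial stand-in for the paper's diffusion-based 2D->3D translator
    (the real model is at https://github.com/NoaCahan/X-ray2CTPA)."""
    def __init__(self, depth=32):
        super().__init__()
        self.to_slices = nn.Conv2d(1, depth, 3, padding=1)
    def forward(self, cxr):                       # cxr: [B, 1, H, W]
        return self.to_slices(cxr).unsqueeze(1)   # [B, 1, D, H, W] "CTPA"

class PEClassifier(nn.Module):
    """Tiny 3D CNN that scores a (synthetic) CTPA volume for PE."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.head = nn.Linear(16, 1)
    def forward(self, vol):
        return self.head(self.features(vol))

cxr = torch.rand(2, 1, 64, 64)                    # two toy chest X-rays
volume = StubXray2CTPA()(cxr)                     # synthetic 3D volume
logit = PEClassifier()(volume)                    # one PE logit per case
print(torch.sigmoid(logit).squeeze(-1))           # PE probabilities
```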

Deep Learning Applications in Lymphoma Imaging.

Sorin V, Cohen I, Lekach R, Partovi S, Raskin D

PubMed paper · Jul 14, 2025
Lymphomas are a diverse group of disorders characterized by the clonal proliferation of lymphocytes. While definitive diagnosis of lymphoma relies on histopathology, immunophenotyping, and additional molecular analyses, imaging modalities such as PET/CT, CT, and MRI play a central role in diagnosis and management, from assessing disease extent to evaluating response to therapy and detecting recurrence. Artificial intelligence (AI), particularly deep learning models such as convolutional neural networks (CNNs), is transforming lymphoma imaging by enabling automated detection, segmentation, and classification. This review elaborates on recent advancements in deep learning for lymphoma imaging and its integration into clinical practice. Challenges include obtaining high-quality annotated datasets, addressing biases in training data, and ensuring consistent model performance. Ongoing efforts focus on enhancing model interpretability, incorporating diverse patient populations to improve generalizability, and safely and effectively integrating AI into clinical workflows, with the goal of improving patient outcomes.

3D Wavelet Latent Diffusion Model for Whole-Body MR-to-CT Modality Translation

Jiaxu Zheng, Meiman He, Xuhui Tang, Xiong Wang, Tuoyu Cao, Tianyi Zeng, Lichi Zhang, Chenyu You

arXiv preprint · Jul 14, 2025
Magnetic Resonance (MR) imaging plays an essential role in contemporary clinical diagnostics. It is increasingly integrated into advanced therapeutic workflows, such as hybrid Positron Emission Tomography/Magnetic Resonance (PET/MR) imaging and MR-only radiation therapy. These integrated approaches are critically dependent on accurate estimation of radiation attenuation, which is typically facilitated by synthesizing Computed Tomography (CT) images from MR scans to generate attenuation maps. However, existing MR-to-CT synthesis methods for whole-body imaging often suffer from poor spatial alignment between the generated CT and input MR images, and insufficient image quality for reliable use in downstream clinical tasks. In this paper, we present a novel 3D Wavelet Latent Diffusion Model (3D-WLDM) that addresses these limitations by performing modality translation in a learned latent space. By incorporating a Wavelet Residual Module into the encoder-decoder architecture, we enhance the capture and reconstruction of fine-scale features across image and latent spaces. To preserve anatomical integrity during the diffusion process, we disentangle structural and modality-specific characteristics and anchor the structural component to prevent warping. We also introduce a Dual Skip Connection Attention mechanism within the diffusion model, enabling the generation of high-resolution CT images with improved representation of bony structures and soft-tissue contrast.
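As background for the Wavelet Residual Module, the snippet below (assuming the PyWavelets package) shows a one-level 3D Haar decomposition: the volume splits losslessly into one approximation band and seven half-resolution detail bands, the kind of fine-scale frequency content the module feeds back across image and latent spaces. The module itself is not reproduced here, and the volume is a random stand-in.

```python
import numpy as np
import pywt

# Toy whole-body MR volume (stand-in), [depth, height, width].
vol = np.random.rand(32, 64, 64).astype(np.float32)

# One level of 3D Haar decomposition: an approximation band ('aaa') plus
# seven detail bands, each at half resolution along every axis.
coeffs = pywt.dwtn(vol, wavelet="haar")
for name, band in coeffs.items():
    print(name, band.shape)          # 'aaa', 'aad', ..., each (16, 32, 32)

# The transform is invertible, so splitting the volume into frequency
# bands loses no information.
recon = pywt.idwtn(coeffs, wavelet="haar")
print(np.allclose(recon, vol, atol=1e-5))
```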

A Survey on Medical Image Compression: From Traditional to Learning-Based

Guofeng Tong, Sixuan Liu, Yang Lv, Hanyu Pei, Feng-Lei Fan

arXiv preprint · Jul 13, 2025
The exponential growth of medical imaging has created significant challenges in data storage, transmission, and management for healthcare systems, making efficient compression increasingly important. Unlike natural image compression, medical image compression prioritizes preserving diagnostic details and structural integrity, imposing stricter quality requirements and demanding fast, memory-efficient algorithms that balance computational complexity with clinically acceptable reconstruction quality. Meanwhile, the medical imaging family includes a plethora of modalities, each with different requirements. For example, 2D medical image compression (e.g., X-rays, histopathological images) focuses on exploiting intra-slice spatial redundancy, volumetric medical image compression must handle both intra-slice and inter-slice spatial correlations, and 4D dynamic imaging (e.g., time-series CT/MRI, 4D ultrasound) additionally demands processing temporal correlations between consecutive time frames. Traditional compression methods, grounded in mathematical transforms and information theory, provide solid theoretical foundations, predictable performance, and high standardization, with extensive validation in clinical environments. In contrast, deep learning-based approaches demonstrate remarkable adaptive learning capabilities and can capture complex statistical characteristics and semantic information within medical images. This comprehensive survey establishes a two-facet taxonomy based on data structure (2D vs 3D/4D) and technical approach (traditional vs learning-based), systematically presenting the complete technological evolution, analyzing the unique technical challenges, and prospecting future directions in medical image compression.
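A toy example of the inter-slice redundancy the survey highlights: when consecutive slices are strongly correlated, coding slice-to-slice differences instead of raw voxels sharply lowers the empirical entropy, a proxy for the best rate a memoryless lossless coder can achieve. The volume below is synthetic and the numbers purely illustrative.

```python
import numpy as np

# Toy volumetric scan: 8 slices that change only slightly slice to slice,
# mimicking the inter-slice correlation the survey describes.
rng = np.random.default_rng(0)
base = rng.integers(0, 256, size=(64, 64)).astype(np.int16)
volume = np.stack([base + rng.integers(-2, 3, size=(64, 64)) for _ in range(8)])

def entropy(x: np.ndarray) -> float:
    """Empirical entropy in bits/sample over the array's value histogram."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Intra-slice view only: treat every voxel value independently.
print(f"raw volume:        {entropy(volume):.2f} bits/voxel")

# Inter-slice predictive coding: store the first slice, then only the
# difference of each slice from its predecessor.
residual = np.concatenate([volume[:1], np.diff(volume, axis=0)])
print(f"slice differences: {entropy(residual):.2f} bits/voxel")
```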