Page 62 of 99986 results

Optimizing imaging modalities for sarcoma subtypes in radiation therapy: State of the art.

Beddok A, Kaur H, Khurana S, Dercle L, El Ayachi R, Jouglar E, Mammar H, Mahe M, Najem E, Rozenblum L, Thariat J, El Fakhri G, Helfre S

pubmed · Jul 1 2025
The choice of imaging modalities is essential in sarcoma management, as different techniques provide complementary information depending on tumor subtype and anatomical location. This narrative review examines the role of imaging in sarcoma characterization and treatment planning, particularly in the context of radiation therapy (RT). Magnetic resonance imaging (MRI) provides superior soft tissue contrast, enabling detailed assessment of tumor extent and peritumoral involvement. Computed tomography (CT) is particularly valuable for detecting osseous involvement, periosteal reactions, and calcifications, complementing MRI in sarcomas involving bone or calcified lesions. The combination of MRI and CT enhances tumor delineation, particularly for complex sites such as retroperitoneal and uterine sarcomas, where spatial relationships with adjacent organs are critical. In vascularized sarcomas, such as alveolar soft-part sarcomas, the integration of MRI with CT or MR angiography facilitates accurate mapping of tumor margins. Positron emission tomography with [18F]-fluorodeoxyglucose ([18F]-FDG PET) provides functional insights, identifying metabolically active regions within tumors to guide dose escalation. Although its role in routine staging is limited, [18F]-FDG PET and emerging PET tracers offer promise for refining RT planning. Advances in artificial intelligence further enhance imaging precision, enabling more accurate contouring and treatment optimization. This review highlights how the integration of imaging modalities, tailored to specific sarcoma subtypes, supports precise RT delivery while minimizing damage to surrounding tissues. These strategies underline the importance of multidisciplinary approaches in improving sarcoma management and outcomes through multi-image-based RT planning.

Worldwide research trends on artificial intelligence in head and neck cancer: a bibliometric analysis.

Silvestre-Barbosa Y, Castro VT, Di Carvalho Melo L, Reis PED, Leite AF, Ferreira EB, Guerra ENS

pubmed · Jul 1 2025
This bibliometric analysis aims to explore scientific data on Artificial Intelligence (AI) and Head and Neck Cancer (HNC). AI-related HNC articles from the Web of Science Core Collection were searched. VOSviewer and Biblioshiny/Bibliometrix for R Studio were used for data synthesis. This analysis covered key characteristics such as sources, authors, affiliations, countries, citations and top cited articles, keyword analysis, and trending topics. A total of 1,019 papers from 1995 to 2024 were included. Among them, 71.6% were original research articles, 7.6% were reviews, and 20.8% took other forms. The fifty most cited documents highlighted radiology as the most explored specialty, with an emphasis on deep learning models for segmentation. Publications have been increasing, with an annual growth rate of 94.4% after 2016. Among the 20 most productive countries, 14 are high-income economies. Analysis of the most strongly cited keywords revealed two main clusters: radiomics and radiotherapy. The most frequent keywords included machine learning, deep learning, artificial intelligence, and head and neck cancer, with recent emphasis on diagnosis, survival prediction, and histopathology. AI use in HNC research has increased since 2016, and the analysis indicated a notable disparity in publication quantity between high-income and low/middle-income countries. Future research should prioritize clinical validation and standardization to facilitate the integration of AI in HNC management, particularly in underrepresented regions.

Uncertainty-aware deep learning for segmentation of primary tumor and pathologic lymph nodes in oropharyngeal cancer: Insights from a multi-center cohort.

De Biase A, Sijtsema NM, van Dijk LV, Steenbakkers R, Langendijk JA, van Ooijen P

pubmed · Jul 1 2025
Information on deep learning (DL) tumor segmentation accuracy on a voxel and a structure level is essential for clinical introduction. In a previous study, a DL model was developed for oropharyngeal cancer (OPC) primary tumor (PT) segmentation in PET/CT images and voxel-level predicted probabilities (TPM) quantifying model certainty were introduced. This study extended the network to simultaneously generate TPMs for PT and pathologic lymph nodes (PL) and explored whether structure-level uncertainty in TPMs predicts segmentation model accuracy in an independent external cohort. We retrospectively gathered PET/CT images and manual delineations of gross tumor volume of the PT (GTVp) and PL (GTVln) of 407 OPC patients treated with (chemo)radiation in our institute. The HECKTOR 2022 challenge dataset served as external test set. The pre-existing architecture was modified for multi-label segmentation. Multiple models were trained, and the non-binarized ensemble average of TPMs was considered per patient. Segmentation accuracy was quantified by surface and aggregate DSC, model uncertainty by coefficient of variation (CV) of multiple predictions. Predicted GTVp and GTVln segmentations in the external test achieved 0.75 and 0.70 aggregate DSC. Patient-specific CV and surface DSC showed a significant correlation for both structures (-0.54 and -0.66 for GTVp and GTVln) in the external set, indicating significant calibration. Significant accuracy versus uncertainty calibration was achieved for TPMs in both internal and external test sets, indicating the potential use of quantified uncertainty from TPMs to identify cases with lower GTVp and GTVln segmentation accuracy, independently of the dataset.
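The structure-level uncertainty measure used here, the coefficient of variation (CV) across an ensemble of predicted probability maps, can be sketched minimally. This is an illustrative reimplementation, not the study's code: the function name, the flat per-voxel lists, and the three-member toy ensemble are all assumptions.

```python
# Hypothetical sketch: structure-level uncertainty as the mean per-voxel
# coefficient of variation (std / mean) across ensemble probability maps.
# The flat-list representation of voxels is illustrative only.
from statistics import mean, pstdev

def ensemble_cv(prob_maps):
    """Mean per-voxel CV across ensemble members.

    prob_maps: list of equal-length lists, one per ensemble model,
    each holding predicted tumor probabilities per voxel.
    """
    n_voxels = len(prob_maps[0])
    cvs = []
    for v in range(n_voxels):
        preds = [m[v] for m in prob_maps]
        mu = mean(preds)
        if mu > 0:
            cvs.append(pstdev(preds) / mu)
    return mean(cvs) if cvs else 0.0

# Three members agreeing closely -> low CV (low uncertainty)
low = ensemble_cv([[0.9, 0.8], [0.88, 0.82], [0.92, 0.79]])
# Members disagreeing strongly -> higher CV (high uncertainty)
high = ensemble_cv([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
assert low < high
```

In the study, a negative correlation between such a patient-level CV and surface DSC is what lets high-CV cases flag likely low segmentation accuracy.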

A systematic review of generative AI approaches for medical image enhancement: Comparing GANs, transformers, and diffusion models.

Oulmalme C, Nakouri H, Jaafar F

pubmed · Jul 1 2025
Medical imaging is a vital diagnostic tool that provides detailed insights into human anatomy but faces challenges affecting its accuracy and efficiency. Advanced generative AI models offer promising solutions. Unlike previous reviews with a narrow focus, a comprehensive evaluation across techniques and modalities is necessary. This systematic review examines three leading state-of-the-art approaches, GANs, Diffusion Models, and Transformers, assessing their applicability, methodologies, and clinical implications in improving medical image quality. Using the PRISMA framework, 63 of 989 studies were selected via Google Scholar and PubMed, focusing on GANs, Transformers, and Diffusion Models. Articles from ACM, IEEE Xplore, and Springer were analyzed. Generative AI techniques show promise in improving image resolution, reducing noise, and enhancing fidelity. GANs generate high-quality images, Transformers utilize global context, and Diffusion Models are effective in denoising and reconstruction. Challenges include high computational costs, limited dataset diversity, and issues with generalizability, with a focus on quantitative metrics over clinical applicability. This review highlights the transformative impact of GANs, Transformers, and Diffusion Models in advancing medical imaging. Future research must address computational and generalization challenges, emphasize open science, and validate these techniques in diverse clinical settings to unlock their full potential. These efforts could enhance diagnostic accuracy, lower costs, and improve patient outcomes.

MedScale-Former: Self-guided multiscale transformer for medical image segmentation.

Karimijafarbigloo S, Azad R, Kazerouni A, Merhof D

pubmed · Jul 1 2025
Accurate medical image segmentation is crucial for enabling automated clinical decision procedures. However, existing supervised deep learning methods for medical image segmentation face significant challenges due to their reliance on extensive labeled training data. To address this limitation, our novel approach introduces a dual-branch transformer network operating on two scales, strategically encoding global contextual dependencies while preserving local information. To promote self-supervised learning, our method leverages semantic dependencies between different scales, generating a supervisory signal for inter-scale consistency. Additionally, it incorporates a spatial stability loss within each scale, fostering self-supervised content clustering. While intra-scale and inter-scale consistency losses enhance feature uniformity within clusters, we introduce a cross-entropy loss function atop the clustering score map to effectively model cluster distributions and refine decision boundaries. Furthermore, to account for pixel-level similarities between organ or lesion subpixels, we propose a selective kernel regional attention module as a plug-and-play component. This module adeptly captures and outlines organ or lesion regions, slightly enhancing the definition of object boundaries. Our experimental results on skin lesion, lung organ, and multiple myeloma plasma cell segmentation tasks demonstrate the superior performance of our method compared to state-of-the-art approaches.
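The inter-scale consistency signal described above can be sketched in its simplest form: penalize disagreement between predictions made at two scales for the same locations. This is a minimal illustration assuming a mean-squared-error form; the paper's actual loss, branch architecture, and score representation may differ.

```python
# Minimal sketch of an inter-scale consistency loss: mean squared
# disagreement between per-location cluster scores from a coarse and a
# fine branch. Names and the MSE form are illustrative assumptions.
def inter_scale_consistency(coarse_scores, fine_scores):
    """Mean squared difference between per-location predictions
    from the coarse and fine branches."""
    assert len(coarse_scores) == len(fine_scores)
    return sum((c - f) ** 2
               for c, f in zip(coarse_scores, fine_scores)) / len(coarse_scores)

# Perfect agreement between scales gives zero loss
assert inter_scale_consistency([0.2, 0.8], [0.2, 0.8]) == 0.0
# Disagreement between scales is penalized
assert inter_scale_consistency([0.2, 0.8], [0.8, 0.2]) > 0.0
```

Minimizing such a term pushes the two branches toward consistent cluster assignments, which is the self-supervisory signal the abstract refers to.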

MDAL: Modality-difference-based active learning for multimodal medical image analysis via contrastive learning and pointwise mutual information.

Wang H, Jin Q, Du X, Wang L, Guo Q, Li H, Wang M, Song Z

pubmed · Jul 1 2025
Multimodal medical images reveal different characteristics of the same anatomy or lesion, offering significant clinical value. Deep learning has achieved widespread success in medical image analysis with large-scale labeled datasets. However, annotating medical images is expensive and labor-intensive for doctors, and the variations between different modalities further increase the annotation cost for multimodal images. This study aims to minimize the annotation cost for multimodal medical image analysis. We propose a novel active learning framework, MDAL, based on modality differences for multimodal medical images. MDAL quantifies the sample-wise modality differences through pointwise mutual information estimated by multimodal contrastive learning. We hypothesize that samples with larger modality differences are more informative for annotation and further propose two sampling strategies based on these differences: MaxMD and DiverseMD. Moreover, MDAL can select informative samples in one shot without initial labeled data. We evaluated MDAL on public brain glioma and meningioma segmentation datasets and an in-house ovarian cancer classification dataset. MDAL outperforms other advanced active learning competitors. Furthermore, when using only 20%, 20%, and 15% of labeled samples in these datasets, MDAL reaches 99.6%, 99.9%, and 99.3% of the performance of supervised training with the fully labeled dataset, respectively. The results show that our proposed MDAL could significantly reduce the annotation cost for multimodal medical image analysis. We expect MDAL can be further extended to other multimodal medical data to lower annotation costs.
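A MaxMD-style selection step can be sketched once the modality-difference scores exist: rank unlabeled samples by their score and annotate the largest-difference samples first. The scores here are toy values, not PMI estimates from contrastive learning, and the function name and budget are hypothetical.

```python
# Illustrative sketch of MaxMD-style active-learning selection: pick the
# `budget` unlabeled samples with the largest modality-difference scores.
# In the paper these scores come from pointwise mutual information
# estimated by multimodal contrastive learning; here they are toy values.
def max_md_select(scores, budget):
    """Return indices of the `budget` samples with the largest
    modality-difference scores, in descending score order."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return ranked[:budget]

# Toy modality-difference scores for five unlabeled samples
diff_scores = [0.1, 0.9, 0.4, 0.7, 0.2]
assert max_md_select(diff_scores, 2) == [1, 3]
```

DiverseMD would additionally enforce diversity among the selected samples rather than taking a pure top-k; since the score needs no labels, this selection can run in one shot before any annotation.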

Integrated brain connectivity analysis with fMRI, DTI, and sMRI powered by interpretable graph neural networks.

Qu G, Zhou Z, Calhoun VD, Zhang A, Wang YP

pubmed · Jul 1 2025
Multimodal neuroimaging data modeling has become a widely used approach but confronts considerable challenges due to data heterogeneity, which encompasses variability in data types, scales, and formats across modalities. This variability necessitates the deployment of advanced computational methods to integrate and interpret diverse datasets within a cohesive analytical framework. In our research, we combine functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and structural MRI (sMRI) for joint analysis. This integration capitalizes on the unique strengths of each modality and their inherent interconnections, aiming for a comprehensive understanding of the brain's connectivity and anatomical characteristics. Utilizing the Glasser atlas for parcellation, we integrate imaging-derived features from multiple modalities - functional connectivity from fMRI, structural connectivity from DTI, and anatomical features from sMRI - within consistent regions. Our approach incorporates a masking strategy to differentially weight neural connections, thereby facilitating an amalgamation of multimodal imaging data. This technique enhances interpretability at the connectivity level, transcending traditional analyses centered on singular regional attributes. The model is applied to the Human Connectome Project's Development study to elucidate the associations between multimodal imaging and cognitive functions throughout youth. The analysis demonstrates improved prediction accuracy and uncovers crucial anatomical features and neural connections, deepening our understanding of brain structure and function. This study not only advances multimodal neuroimaging analytics by offering a novel method for integrative analysis of diverse imaging modalities but also improves the understanding of intricate relationships between the brain's structural and functional networks and cognitive development.
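The masking strategy described above, weighting individual connections rather than whole regions, reduces in its simplest form to an elementwise product between a connectivity matrix and an edge mask. This is a hedged sketch: the mask here is fixed, whereas in the model it would be learned, and the 2x2 matrices are toy data, not HCP-derived features.

```python
# Hedged sketch of edge-level masking: weight each connection of a brain
# connectivity matrix by a mask entry, so individual edges can be
# emphasized or suppressed. In the actual model the mask is learnable;
# the matrices below are toy illustrations.
def apply_edge_mask(connectivity, mask):
    """Elementwise product of a connectivity matrix and an edge mask."""
    return [[c * m for c, m in zip(c_row, m_row)]
            for c_row, m_row in zip(connectivity, mask)]

fc = [[1.0, 0.5], [0.5, 1.0]]    # toy functional connectivity
mask = [[1.0, 0.0], [0.0, 1.0]]  # suppress the off-diagonal connections
assert apply_edge_mask(fc, mask) == [[1.0, 0.0], [0.0, 1.0]]
```

Inspecting the learned mask entries is what gives the model its connection-level interpretability.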

Hybrid strategy of coronary atherosclerosis characterization with T1-weighted MRI and CT angiography to non-invasively predict periprocedural myocardial injury.

Matsumoto H, Higuchi S, Li D, Tanisawa H, Isodono K, Irie D, Ohya H, Kitamura R, Kaneko K, Nakazawa M, Suzuki K, Komori Y, Hondera T, Cadet S, Lee HL, Christodoulou AG, Slomka PJ, Dey D, Xie Y, Shinke T

pubmed · Jun 30 2025
Coronary computed tomography angiography (CCTA) and magnetic resonance imaging (MRI) can predict periprocedural myocardial injury (PMI) after percutaneous coronary intervention (PCI). We aimed to investigate whether integrating MRI with CCTA, using the latest imaging and quantitative techniques, improves PMI prediction, and to explore a potential hybrid CCTA-MRI strategy. This prospective, multi-centre study performed T1-weighted coronary atherosclerosis characterization MRI in patients scheduled for elective PCI for an atherosclerotic lesion detected on CCTA without prior revascularization. PMI was defined as post-PCI troponin-T > 5× the upper reference limit. Using deep learning-enabled software, volumes of total plaque, calcified plaque, non-calcified plaque (NCP), and low-attenuation plaque (LAP; < 30 Hounsfield units) were quantified on CCTA. On non-contrast T1-weighted MRI, high-intensity plaque (HIP) volume was quantified as voxels with signal intensity exceeding that of the myocardium, weighted by their respective intensities. Of the 132 lesions from 120 patients, 43 resulted in PMI. In the CCTA-only strategy, LAP volume (P = 0.012) and NCP volume (P = 0.016) were independently associated with PMI. When integrating MRI with CCTA, LAP volume (P = 0.029) and HIP volume (P = 0.024) emerged as independent predictors. MRI integration with CCTA achieved a higher C-statistic than CCTA alone (0.880 vs. 0.738; P = 0.004). A hybrid CCTA-MRI strategy, incorporating MRI for lesions with intermediate PMI risk based on CCTA, maintained superior diagnostic accuracy over the CCTA-only strategy (0.803 vs. 0.705; P = 0.028). Integrating MRI with CCTA improves PMI prediction compared with CCTA alone.
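The LAP quantification rule stated above (plaque voxels below 30 Hounsfield units, converted to a volume via voxel size) can be sketched directly. This is an illustrative reimplementation, not the study's deep learning-enabled software: the flat lists, function name, and voxel size are assumptions.

```python
# Hedged sketch of low-attenuation plaque (LAP) quantification: count
# plaque-mask voxels with attenuation < 30 HU and convert the count to a
# volume via the per-voxel volume. Data layout is illustrative only.
LAP_THRESHOLD_HU = 30

def lap_volume_mm3(hu_values, plaque_mask, voxel_volume_mm3):
    """Volume of plaque voxels below the LAP attenuation threshold."""
    n = sum(1 for hu, in_plaque in zip(hu_values, plaque_mask)
            if in_plaque and hu < LAP_THRESHOLD_HU)
    return n * voxel_volume_mm3

hu = [120, 25, -10, 60, 15]             # toy attenuation per voxel, in HU
mask = [True, True, True, True, False]  # voxels belonging to the plaque
assert lap_volume_mm3(hu, mask, 0.5) == 1.0  # two LAP voxels * 0.5 mm^3
```

The MRI-side HIP measure differs in that voxels above the myocardial intensity are weighted by their intensities rather than simply counted.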

Towards 3D Semantic Image Synthesis for Medical Imaging

Wenwu Tang, Khaled Seyam, Bin Yang

arxiv preprint · Jun 30 2025
In the medical domain, acquiring large datasets is challenging due to both accessibility issues and stringent privacy regulations. Consequently, data availability and privacy protection are major obstacles to applying machine learning in medical imaging. To address this, our study proposes the Med-LSDM (Latent Semantic Diffusion Model), which operates directly in the 3D domain and leverages de-identified semantic maps to generate synthetic data as a method of privacy preservation and data augmentation. Unlike many existing methods that focus on generating 2D slices, Med-LSDM is designed specifically for 3D semantic image synthesis, making it well-suited for applications requiring full volumetric data. Med-LSDM incorporates a guiding mechanism that controls the 3D image generation process by applying a diffusion model within the latent space of a pre-trained VQ-GAN. By operating in the compressed latent space, the model significantly reduces computational complexity while still preserving critical 3D spatial details. Our approach demonstrates strong performance in 3D semantic medical image synthesis, achieving a 3D-FID score of 0.0054 on the conditional Duke Breast dataset and similar Dice scores (0.70964) to those of real images (0.71496). These results demonstrate that the synthetic data from our model have a small domain gap with real data and are useful for data augmentation.

Precision and Personalization: How Large Language Models Redefining Diagnostic Accuracy in Personalized Medicine - A Systematic Literature Review.

Aththanagoda AKNL, Kulathilake KASH, Abdullah NA

pubmed · Jun 30 2025
Personalized medicine aims to tailor medical treatments to the unique characteristics of each patient, but its effectiveness relies on achieving diagnostic accuracy to fully understand individual variability in disease response and treatment efficacy. This systematic literature review explores the role of large language models (LLMs) in enhancing diagnostic precision and supporting the advancement of personalized medicine. A comprehensive search was conducted across Web of Science, Science Direct, Scopus, and IEEE Xplore, targeting peer-reviewed articles published in English between January 2020 and March 2025 that applied LLMs within personalized medicine contexts. Following PRISMA guidelines, 39 relevant studies were selected and systematically analyzed. The findings indicate a growing integration of LLMs across key domains such as clinical informatics, medical imaging, patient-specific diagnosis, and clinical decision support. LLMs have shown potential in uncovering subtle data patterns critical for accurate diagnosis and personalized treatment planning. This review highlights the expanding role of LLMs in improving diagnostic accuracy in personalized medicine, offering insights into their performance, applications, and challenges, while acknowledging limitations in generalizability due to variable model performance and dataset biases. It also underscores the importance of addressing challenges related to data privacy, model interpretability, and reliability across diverse clinical scenarios. For successful clinical integration, future research must focus on refining LLM technologies, ensuring ethical standards, and continuously validating models to safeguard effective and responsible use in healthcare environments.