Page 26 of 2072068 results

Development of Multiparametric Prognostic Models for Stereotactic Magnetic Resonance Guided Radiation Therapy of Pancreatic Cancers.

Michalet M, Valenzuela G, Nougaret S, Tardieu M, Azria D, Riou O

PubMed · July 1, 2025
Stereotactic magnetic resonance guided adaptive radiation therapy (SMART) is a new option for local treatment of unresectable pancreatic ductal adenocarcinoma, with promising survival and local control (LC) results. Nevertheless, some patients experience early local and/or metastatic recurrence leading to death. We aimed to develop multiparametric prognostic models for these patients. All patients treated in our institution with SMART for unresectable pancreatic ductal adenocarcinoma between October 21, 2019, and August 5, 2022 were included. Initial clinical characteristics and dosimetric data of SMART were recorded, and radiomics data were extracted from 0.35-T simulation magnetic resonance imaging. These data were combined to build prognostic models of overall survival (OS) and LC using machine learning algorithms. Eighty-three patients with a median age of 64.9 years were included. Most patients (77%) had locally advanced pancreatic cancer. Median OS was 21 months after SMART completion and 27 months after chemotherapy initiation. The 6- and 12-month post-SMART OS rates were 87.8% (95% CI, 78.2%-93.2%) and 70.9% (95% CI, 58.8%-80.0%), respectively. The best model for OS was Cox proportional hazards survival analysis using clinical data, with an inverse-probability-of-censoring-weighted (IPCW) concordance index of 0.87. Tested on its 12-month OS prediction capacity, this model performed well (sensitivity 67%, specificity 71%, area under the curve 0.90). Median LC was not reached. The 6- and 12-month post-SMART LC rates were 92.4% (95% CI, 83.7%-96.6%) and 76.3% (95% CI, 62.6%-85.5%), respectively. The best model for LC was component-wise gradient boosting survival analysis using clinical and radiomics data, with an IPCW concordance index of 0.80. Tested on its 9-month LC prediction capacity, this model performed well (sensitivity 50%, specificity 97%, area under the curve 0.78).
Combining clinical and radiomics data in multiparametric prognostic models using machine learning algorithms showed good performance for the prediction of OS and LC. External validation of these models will be needed.
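The survival models above are ranked by a concordance index, which measures how well predicted risk scores order observed survival times. As a minimal illustration, here is plain Harrell's C in NumPy; note this is a simplification of what the study reports (the IPCW variant additionally reweights comparable pairs by the censoring distribution), and the function name and toy inputs are hypothetical:

```python
import numpy as np

def harrell_c(time, event, risk):
    """Fraction of comparable pairs whose predicted risks are correctly ordered.
    A pair (i, j) is comparable when the earlier time is an observed event."""
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:  # subject i failed before j
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5  # tied risks count half
    return concordant / comparable

# Perfectly anti-monotone risk vs. time -> C = 1.0
print(harrell_c([5, 10, 15], [1, 1, 1], [3.0, 2.0, 1.0]))
```

A C of 0.5 corresponds to random ordering, so the reported 0.87 (OS) and 0.80 (LC) indicate strong risk discrimination.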

Evaluating a large language model's accuracy in chest X-ray interpretation for acute thoracic conditions.

Ostrovsky AM

PubMed · July 1, 2025
The rapid advancement of artificial intelligence (AI) has great potential to impact healthcare. Chest X-rays are essential for diagnosing acute thoracic conditions in the emergency department (ED), but interpretation delays due to radiologist availability can impact clinical decision-making. AI models, including deep learning algorithms, have been explored for diagnostic support, but the potential of large language models (LLMs) in emergency radiology remains largely unexamined. This study assessed ChatGPT's feasibility in interpreting chest X-rays for acute thoracic conditions commonly encountered in the ED. A subset of 1400 images from the NIH Chest X-ray dataset was analyzed, representing seven pathology categories: Atelectasis, Effusion, Emphysema, Pneumothorax, Pneumonia, Mass, and No Finding. ChatGPT 4.0, utilizing the "X-Ray Interpreter" add-on, was evaluated for its diagnostic performance across these categories. ChatGPT demonstrated high performance in identifying normal chest X-rays, with a sensitivity of 98.9%, specificity of 93.9%, and accuracy of 94.7%. However, the model's performance varied across pathologies. The best results were observed in diagnosing pneumonia (sensitivity 76.2%, specificity 93.7%) and pneumothorax (sensitivity 77.4%, specificity 89.1%), while performance for atelectasis and emphysema was lower. ChatGPT shows potential as a supplementary tool for differentiating normal from abnormal chest X-rays, with promising results for certain pathologies such as pneumonia. However, its diagnostic accuracy for more subtle conditions requires improvement. Further research integrating ChatGPT with specialized image recognition models could enhance its performance, offering new possibilities in medical imaging and education.
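Sensitivity, specificity, and accuracy as reported above follow directly from the confusion matrix of binary (finding present/absent) decisions. A minimal NumPy sketch; the helper name and toy labels are illustrative, not from the study:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels (1 = positive)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    return {
        "sensitivity": tp / (tp + fn),          # recall on positives
        "specificity": tn / (tn + fp),          # recall on negatives
        "accuracy": (tp + tn) / len(y_true),
    }

m = binary_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
print(m)
```

Per-pathology figures like those quoted (e.g., pneumonia sensitivity 76.2%) come from running this one-vs-rest for each category.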

A deep-learning model to predict the completeness of cytoreductive surgery in colorectal cancer with peritoneal metastasis.

Lin Q, Chen C, Li K, Cao W, Wang R, Fichera A, Han S, Zou X, Li T, Zou P, Wang H, Ye Z, Yuan Z

PubMed · July 1, 2025
Colorectal cancer (CRC) with peritoneal metastasis (PM) is associated with poor prognosis. The Peritoneal Cancer Index (PCI) is used to evaluate the extent of PM and to select patients for cytoreductive surgery (CRS), but the PCI score alone is not accurate enough to guide patient selection. We developed a novel deep learning framework of decoupling feature alignment and fusion (DeAF) to aid the selection of PM patients and predict the completeness of CRS. A total of 186 CRC patients with PM recruited from four tertiary hospitals were enrolled. In the training cohort, the DeAF model was trained with the SimSiam algorithm on contrast-enhanced CT images and then fused with clinicopathological parameters to increase performance. Accuracy, sensitivity, specificity, and the area under the ROC curve (AUC) were evaluated in the internal validation cohort and in three external cohorts. The DeAF model predicted the completeness of CRS with robust accuracy, with an AUC of 0.9 (95% CI: 0.793-1.000) in the internal validation cohort. The model can guide the selection of suitable patients and predict potential benefit from CRS. Its high performance in predicting CRS completeness was validated in three external cohorts, with AUC values of 0.906 (95% CI: 0.812-1.000), 0.960 (95% CI: 0.885-1.000), and 0.933 (95% CI: 0.791-1.000), respectively. The novel DeAF framework can help surgeons select suitable PM patients for CRS and predict the completeness of CRS. The model can inform surgical decision-making and provide potential benefits for PM patients.
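AUC values with 95% confidence intervals like those above are commonly obtained via the pairwise (Mann-Whitney) formulation of AUC plus a patient-level bootstrap. A hedged NumPy sketch of that generic recipe; the abstract does not state how its intervals were computed, and all names and toy scores here are hypothetical:

```python
import numpy as np

def auc_mann_whitney(y_true, scores):
    """AUC as the probability a random positive outranks a random negative."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_auc_ci(y_true, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI, resampling patients with replacement."""
    rng = np.random.default_rng(seed)
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if y_true[idx].min() == y_true[idx].max():
            continue  # resample lacks both classes; AUC undefined
        stats.append(auc_mann_whitney(y_true[idx], scores[idx]))
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
print(auc_mann_whitney(y, s), bootstrap_auc_ci(y, s))
```

With the small external cohorts reported here, such bootstrap intervals are wide, which is consistent with upper bounds hitting 1.000.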

A systematic review of generative AI approaches for medical image enhancement: Comparing GANs, transformers, and diffusion models.

Oulmalme C, Nakouri H, Jaafar F

PubMed · July 1, 2025
Medical imaging is a vital diagnostic tool that provides detailed insights into human anatomy but faces challenges affecting its accuracy and efficiency. Advanced generative AI models offer promising solutions. Unlike previous reviews with a narrow focus, a comprehensive evaluation across techniques and modalities is necessary. This systematic review examines three leading state-of-the-art approaches, GANs, Diffusion Models, and Transformers, covering their applicability, methodologies, and clinical implications for improving medical image quality. Using the PRISMA framework, 63 of 989 studies were selected via Google Scholar and PubMed, focusing on GANs, Transformers, and Diffusion Models; articles from ACM, IEEE Xplore, and Springer were analyzed. Generative AI techniques show promise in improving image resolution, reducing noise, and enhancing fidelity: GANs generate high-quality images, Transformers exploit global context, and Diffusion Models are effective in denoising and reconstruction. Challenges include high computational costs, limited dataset diversity, and issues with generalizability, and the literature emphasizes quantitative metrics over clinical applicability. This review highlights the transformative impact of GANs, Transformers, and Diffusion Models in advancing medical imaging. Future research must address computational and generalization challenges, emphasize open science, and validate these techniques in diverse clinical settings to unlock their full potential. These efforts could enhance diagnostic accuracy, lower costs, and improve patient outcomes.

Uncertainty-aware deep learning for segmentation of primary tumor and pathologic lymph nodes in oropharyngeal cancer: Insights from a multi-center cohort.

De Biase A, Sijtsema NM, van Dijk LV, Steenbakkers R, Langendijk JA, van Ooijen P

PubMed · July 1, 2025
Information on deep learning (DL) tumor segmentation accuracy at both the voxel and the structure level is essential for clinical introduction. In a previous study, a DL model was developed for oropharyngeal cancer (OPC) primary tumor (PT) segmentation in PET/CT images, and voxel-level predicted probability maps (TPMs) quantifying model certainty were introduced. This study extended the network to simultaneously generate TPMs for the PT and pathologic lymph nodes (PL) and explored whether structure-level uncertainty in TPMs predicts segmentation accuracy in an independent external cohort. We retrospectively gathered PET/CT images and manual delineations of the gross tumor volume of the PT (GTVp) and PL (GTVln) of 407 OPC patients treated with (chemo)radiation in our institute. The HECKTOR 2022 challenge dataset served as the external test set. The pre-existing architecture was modified for multi-label segmentation. Multiple models were trained, and the non-binarized ensemble average of TPMs was considered per patient. Segmentation accuracy was quantified by surface and aggregate DSC, and model uncertainty by the coefficient of variation (CV) of multiple predictions. Predicted GTVp and GTVln segmentations in the external test set achieved aggregate DSC of 0.75 and 0.70, respectively. Patient-specific CV and surface DSC showed a significant correlation for both structures (-0.54 for GTVp and -0.66 for GTVln) in the external set, indicating meaningful calibration. Significant accuracy-versus-uncertainty calibration was achieved for TPMs in both the internal and external test sets, indicating the potential of quantified TPM uncertainty to identify cases with lower GTVp and GTVln segmentation accuracy, independently of the dataset.
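The structure-level analysis above rests on two quantities: a Dice overlap between predicted and reference masks, and a coefficient of variation across ensemble predictions. A minimal NumPy sketch of both, using a simple volume-based CV as a stand-in for the paper's exact definition (function names and toy masks are hypothetical):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def structure_cv(prob_maps, thresh=0.5):
    """Coefficient of variation of the predicted structure volume across an
    ensemble. prob_maps: (n_models, ...) voxel-wise predicted probabilities;
    high CV flags cases where ensemble members disagree."""
    vols = np.array([(p >= thresh).sum() for p in prob_maps], float)
    return vols.std() / vols.mean()

pred = [[1, 1, 0, 0], [1, 1, 1, 0]]  # two hypothetical masks
print(dice(pred[0], pred[1]))
```

Correlating per-patient CV against surface DSC, as the study does, then tests whether disagreement within the ensemble predicts where segmentation quality drops.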

Worldwide research trends on artificial intelligence in head and neck cancer: a bibliometric analysis.

Silvestre-Barbosa Y, Castro VT, Di Carvalho Melo L, Reis PED, Leite AF, Ferreira EB, Guerra ENS

PubMed · July 1, 2025
This bibliometric analysis explores scientific data on Artificial Intelligence (AI) and Head and Neck Cancer (HNC). AI-related HNC articles from the Web of Science Core Collection were searched. VOSviewer and Biblioshiny/Bibliometrix for R Studio were used for data synthesis. The analysis covered key characteristics such as sources, authors, affiliations, countries, citations, top cited articles, keyword analysis, and trending topics. A total of 1,019 papers from 1995 to 2024 were included. Among them, 71.6% were original research articles, 7.6% were reviews, and 20.8% took other forms. The fifty most cited documents highlighted radiology as the most explored specialty, with an emphasis on deep learning models for segmentation. Publications have been increasing, with an annual growth rate of 94.4% after 2016. Among the 20 most productive countries, 14 are high-income economies. The most strongly cited keywords formed two main clusters: radiomics and radiotherapy. The most frequent keywords included machine learning, deep learning, artificial intelligence, and head and neck cancer, with recent emphasis on diagnosis, survival prediction, and histopathology. AI use in HNC research has increased since 2016, with a notable disparity in publication output between high-income and low/middle-income countries. Future research should prioritize clinical validation and standardization to facilitate the integration of AI in HNC management, particularly in underrepresented regions.

LUNETR: Language-Infused UNETR for precise pancreatic tumor segmentation in 3D medical images.

Shi Z, Zhang R, Wei X, Yu C, Xie H, Hu Z, Chen X, Zhang Y, Xie B, Luo Z, Peng W, Xie X, Li F, Long X, Li L, Hu L

PubMed · July 1, 2025
The identification of early micro-lesions and adjacent blood vessels in CT scans plays a pivotal role in the clinical diagnosis of pancreatic cancer, given its aggressive nature and high fatality rate. Despite the widespread application of deep learning methods for this task, several challenges persist: (1) the complex background environment in abdominal CT scans complicates accurate localization of potential micro-tumors; (2) the subtle contrast between micro-lesions and the surrounding pancreatic tissue makes it difficult for models to capture these features accurately; and (3) tumors that invade adjacent blood vessels pose significant barriers to surgical procedures. To address these challenges, we propose LUNETR (Language-Infused UNETR), an advanced multimodal encoder model that combines textual and image information for precise medical image segmentation. The integration of an autoencoding language model with cross-attention enables our model to effectively leverage semantic associations between textual and image data, thereby facilitating precise localization of potential pancreatic micro-tumors. Additionally, we designed a Multi-scale Aggregation Attention (MSAA) module to comprehensively capture both spatial and channel characteristics of global multi-scale image data, enhancing the model's capacity to extract features from micro-lesions embedded within pancreatic tissue. Furthermore, to facilitate precise segmentation of pancreatic tumors and nearby blood vessels and to address the scarcity of multimodal medical datasets, we collaborated with Zhuzhou Central Hospital to construct a multimodal dataset comprising CT images and corresponding pathology reports from 135 pancreatic cancer patients. Our experimental results surpass current state-of-the-art models, with the semantic encoder improving the average Dice score for pancreatic tumor segmentation by 2.23%. On the Medical Segmentation Decathlon (MSD) liver and lung cancer datasets, our model achieved average Dice score improvements of 4.31% and 3.67%, respectively, demonstrating the efficacy of LUNETR.
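The text-image coupling described above relies on cross-attention, in which image tokens act as queries over text tokens, so report semantics can re-weight image features. A single-head NumPy sketch of that generic mechanism, not the LUNETR implementation; all shapes, weights, and names here are arbitrary:

```python
import numpy as np

def cross_attention(img_tokens, txt_tokens, Wq, Wk, Wv):
    """Single-head cross-attention: image queries attend to text keys/values."""
    Q = img_tokens @ Wq                  # queries from image patches
    K = txt_tokens @ Wk                  # keys from report tokens
    V = txt_tokens @ Wv                  # values from report tokens
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over text tokens
    return attn @ V                      # text-conditioned image features

rng = np.random.default_rng(0)
d = 8
img = rng.normal(size=(16, d))   # 16 image patch tokens
txt = rng.normal(size=(4, d))    # 4 report tokens
out = cross_attention(img, txt, *(rng.normal(size=(d, d)) for _ in range(3)))
print(out.shape)
```

The output keeps the image-token count (16 here) but each token is now a mixture of text values, which is what lets report wording steer localization.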

Deep Guess acceleration for explainable image reconstruction in sparse-view CT.

Loli Piccolomini E, Evangelista D, Morotti E

PubMed · July 1, 2025
Sparse-view Computed Tomography (CT) is an emerging protocol designed to reduce the X-ray radiation dose in medical imaging. Reconstructions based on the traditional Filtered Back Projection algorithm suffer from severe artifacts due to sparse data. In contrast, Model-Based Iterative Reconstruction (MBIR) algorithms, though better at mitigating noise through regularization, are too computationally costly for clinical use. This paper introduces a novel technique, denoted the Deep Guess acceleration scheme, that uses a trained neural network both to speed up regularized MBIR and to enhance reconstruction accuracy. We integrate state-of-the-art deep learning tools to initialize a good starting guess for a proximal algorithm solving a non-convex model, thus computing a (mathematically) interpretable solution image in a few iterations. Experimental results on real and synthetic CT images demonstrate the effectiveness of Deep Guess in (very) sparse tomographic protocols, where it outperforms its purely variational counterpart and many state-of-the-art data-driven approaches. We also consider a ground-truth-free implementation and test the robustness of the proposed framework to noise.
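The core idea, warm-starting a proximal iteration with a learned guess, can be sketched with ISTA on an l1-regularized least-squares toy problem: the caller supplies the starting point, which in a Deep Guess-style scheme would come from a network. This is a generic sketch under simplified (convex) assumptions, not the paper's non-convex model, and all names are illustrative:

```python
import numpy as np

def ista(A, b, lam, x0, n_iter=500):
    """ISTA for min_x ||Ax - b||^2 / 2 + lam * ||x||_1, from starting guess x0.
    A better x0 (e.g., a network output) reaches a given accuracy in fewer
    iterations; here the start is simply supplied by the caller."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = x0.copy()
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)            # gradient of the data-fit term
        z = x - g / L                    # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 20))
x_true = np.zeros(20)
x_true[[2, 7]] = [1.0, -2.0]             # sparse ground truth
b = A @ x_true                           # noiseless measurements
x_cold = ista(A, b, lam=0.1, x0=np.zeros(20))
print(np.linalg.norm(x_cold - x_true))
```

In the CT setting, A would be the (much larger) projection operator and the regularizer non-convex, which is precisely why a good initialization matters.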

Optimizing imaging modalities for sarcoma subtypes in radiation therapy: State of the art.

Beddok A, Kaur H, Khurana S, Dercle L, El Ayachi R, Jouglar E, Mammar H, Mahe M, Najem E, Rozenblum L, Thariat J, El Fakhri G, Helfre S

PubMed · July 1, 2025
The choice of imaging modalities is essential in sarcoma management, as different techniques provide complementary information depending on tumor subtype and anatomical location. This narrative review examines the role of imaging in sarcoma characterization and treatment planning, particularly in the context of radiation therapy (RT). Magnetic resonance imaging (MRI) provides superior soft tissue contrast, enabling detailed assessment of tumor extent and peritumoral involvement. Computed tomography (CT) is particularly valuable for detecting osseous involvement, periosteal reactions, and calcifications, complementing MRI in sarcomas involving bone or calcified lesions. The combination of MRI and CT enhances tumor delineation, particularly for complex sites such as retroperitoneal and uterine sarcomas, where spatial relationships with adjacent organs are critical. In vascularized sarcomas, such as alveolar soft-part sarcomas, the integration of MRI with CT or MR angiography facilitates accurate mapping of tumor margins. Positron emission tomography with [18F]-fluorodeoxyglucose ([18F]-FDG PET) provides functional insights, identifying metabolically active regions within tumors to guide dose escalation. Although its role in routine staging is limited, [18F]-FDG PET and emerging PET tracers offer promise for refining RT planning. Advances in artificial intelligence further enhance imaging precision, enabling more accurate contouring and treatment optimization. This review highlights how the integration of imaging modalities, tailored to specific sarcoma subtypes, supports precise RT delivery while minimizing damage to surrounding tissues. These strategies underline the importance of multidisciplinary approaches in improving sarcoma management and outcomes through multi-image-based RT planning.

Deformable image registration with a strategic integration pyramid framework for brain MRI.

Zhang Y, Zhu Q, Xie B, Li T

PubMed · July 1, 2025
Medical image registration plays a crucial role in medical imaging, with a wide range of clinical applications. In this context, brain MRI registration is commonly used in clinical practice for accurate diagnosis and treatment planning. In recent years, deep learning-based deformable registration methods have achieved remarkable results. However, existing methods are neither flexible nor efficient in handling the feature relationships of anatomical structures at different levels when dealing with large deformations. To address this limitation, we propose a novel strategic integration registration network based on a pyramid structure. Our strategy involves two kinds of integration: fusion of features at different scales, and combination of different neural network structures. Specifically, we design a CNN encoder and a Transformer decoder to efficiently extract and enhance both global and local features. Moreover, to overcome the error accumulation inherent in pyramid structures, we introduce progressive optimization iterations at the lowest scale for deformation field generation. This approach handles the spatial relationships of images more efficiently while improving accuracy. We conduct extensive evaluations across multiple brain MRI datasets, and experimental results show that our method outperforms other deep learning-based methods in registration accuracy and robustness.
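Once a deformation field is predicted, applying it reduces to resampling the moving image at displaced coordinates. A minimal 2-D NumPy sketch with nearest-neighbour resampling (real registration pipelines use trilinear interpolation on 3-D fields; the function name and toy field are illustrative):

```python
import numpy as np

def warp_nearest(img, disp):
    """Warp a 2-D image with a dense displacement field.
    disp[..., 0] / disp[..., 1] are row / column displacements sampled at each
    output pixel; out[r, c] = img[r + disp_r, c + disp_c], nearest-neighbour,
    clipped at the image border."""
    h, w = img.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_r = np.clip(np.round(rows + disp[..., 0]).astype(int), 0, h - 1)
    src_c = np.clip(np.round(cols + disp[..., 1]).astype(int), 0, w - 1)
    return img[src_r, src_c]

img = np.arange(16.0).reshape(4, 4)
disp = np.zeros((4, 4, 2))
disp[..., 1] = 1.0               # sample one column to the right everywhere
warped = warp_nearest(img, disp)
print(warped)
```

In a pyramid scheme like the one above, fields predicted at coarse scales are upsampled, composed, and refined before this final resampling step.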
