
A vision transformer-convolutional neural network framework for decision-transparent dual-energy X-ray absorptiometry recommendations using chest low-dose CT.

Kuo DP, Chen YC, Cheng SJ, Hsieh KL, Li YT, Kuo PC, Chang YC, Chen CY

PubMed · Jul 1, 2025
This study introduces an ensemble framework that integrates Vision Transformer (ViT) and convolutional neural network (CNN) models to leverage their complementary strengths, generating visualized, decision-transparent recommendations for dual-energy X-ray absorptiometry (DXA) scans from chest low-dose computed tomography (LDCT). The framework was developed using data from 321 individuals and validated with an independent test cohort of 186 individuals. It addresses two classification tasks: (1) distinguishing normal from abnormal bone mineral density (BMD) and (2) differentiating osteoporosis from non-osteoporosis. Three field-of-view (FOV) settings were analyzed to assess their impact on model performance: fitFOV (entire vertebra), halfFOV (vertebral body only), and largeFOV (fitFOV + 20 %). Model predictions were weighted and combined to enhance classification accuracy, and visualizations were generated to improve decision transparency. DXA scans were recommended for individuals classified as having abnormal BMD or osteoporosis. The ensemble framework significantly outperformed the individual models in both classification tasks (McNemar test, p < 0.001). In the development cohort, it achieved 91.6 % accuracy for task 1 with largeFOV (area under the receiver operating characteristic curve [AUROC]: 0.97) and 86.0 % accuracy for task 2 with fitFOV (AUROC: 0.94). In the test cohort, it demonstrated 86.6 % accuracy for task 1 (AUROC: 0.93) and 76.9 % accuracy for task 2 (AUROC: 0.99). DXA recommendation accuracy was 91.6 % and 87.1 % in the development and test cohorts, respectively, with notably high accuracy for osteoporosis detection (98.7 % and 100 %). This combined ViT-CNN framework effectively assesses bone status from LDCT images, particularly when utilizing the fitFOV and largeFOV settings. By visualizing classification confidence and vertebral abnormalities, the proposed framework enhances decision transparency and supports clinicians in making informed DXA recommendations following opportunistic osteoporosis screening.
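To make the ensemble step concrete, below is a minimal sketch of weighted probability averaging across a ViT and a CNN branch; the weights and outputs here are illustrative, as the abstract does not publish the exact weighting scheme.

```python
import numpy as np

def weighted_ensemble(vit_probs: np.ndarray, cnn_probs: np.ndarray,
                      w_vit: float = 0.5) -> np.ndarray:
    """Combine per-class probabilities from a ViT and a CNN branch.

    vit_probs, cnn_probs: arrays of shape (n_samples, n_classes)
    w_vit: weight given to the ViT branch (hypothetical value).
    """
    combined = w_vit * vit_probs + (1.0 - w_vit) * cnn_probs
    return combined.argmax(axis=1)  # predicted class per sample

# Toy example: two patients, binary task (normal vs. abnormal BMD)
vit = np.array([[0.3, 0.7], [0.8, 0.2]])
cnn = np.array([[0.4, 0.6], [0.6, 0.4]])
print(weighted_ensemble(vit, cnn, w_vit=0.6))  # -> [1 0]
```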

Development and validation of a nomogram for predicting bone marrow involvement in lymphoma patients based on ¹⁸F-FDG PET radiomics and clinical factors.

Lu D, Zhu X, Mu X, Huang X, Wei F, Qin L, Liu Q, Fu W, Deng Y

PubMed · Jul 1, 2025
This study aimed to develop and validate a nomogram combining ¹⁸F-FDG PET radiomics and clinical factors to non-invasively predict bone marrow involvement (BMI) in patients with lymphoma. A radiomics nomogram was developed using monocentric data, randomly divided into a training set (70%) and a test set (30%). Bone marrow biopsy (BMB) served as the gold standard for BMI diagnosis. Independent clinical risk factors were identified through univariate and multivariate logistic regression analyses to construct a clinical model. Radiomics features were extracted from PET and CT images and selected using least absolute shrinkage and selection operator (LASSO) regression, yielding a radiomics score (Rad-score) for each patient. Models based on clinical factors, the CT Rad-score, and the PET Rad-score were established and evaluated using eight machine learning algorithms to identify the optimal prediction model. A combined model was constructed and presented as a nomogram. Model performance was assessed using the area under the receiver operating characteristic curve (AUC), calibration curves, and decision curve analysis (DCA). A total of 160 patients were included, of whom 70 had BMI based on BMB results. The training group comprised 112 patients (56 with BMI, 56 without), while the test group included 48 patients (14 with BMI, 34 without). Independent risk factors, including the number of extranodal involvements and B symptoms, were incorporated into the clinical model. For the clinical model, the CT Rad-score, and the PET Rad-score, the AUCs in the test set were 0.820 (95% CI: 0.705-0.935), 0.538 (95% CI: 0.351-0.723), and 0.836 (95% CI: 0.686-0.986), respectively. Owing to the limited diagnostic performance of the CT Rad-score, the nomogram was constructed using the PET Rad-score and the clinical model. The radiomics nomogram achieved AUCs of 0.916 (95% CI: 0.865-0.967) in the training set and 0.863 (95% CI: 0.763-0.964) in the test set. Calibration curves and DCA confirmed the nomogram's discrimination, calibration, and clinical utility in both sets. By integrating the PET Rad-score, the number of extranodal involvements, and B symptoms, this ¹⁸F-FDG PET radiomics-based nomogram offers a non-invasive method to predict bone marrow status in lymphoma patients, providing nuclear medicine physicians with valuable decision support for pre-treatment evaluation.
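A minimal sketch of the LASSO-based Rad-score construction on synthetic data; an L1-penalized logistic regression stands in for the authors' LASSO step, and all sizes and values are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(112, 50))          # 112 training patients, 50 radiomics features
y = rng.integers(0, 2, size=112)        # BMI status from bone marrow biopsy

X_std = StandardScaler().fit_transform(X)

# The L1 (LASSO-type) penalty drives most coefficients to exactly zero,
# leaving a sparse set of informative features.
lasso_lr = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso_lr.fit(X_std, y)

selected = np.flatnonzero(lasso_lr.coef_[0])
print(f"{selected.size} features retained out of {X.shape[1]}")

# Rad-score: the linear predictor over the retained features
rad_score = X_std @ lasso_lr.coef_[0] + lasso_lr.intercept_[0]
```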

MedScale-Former: Self-guided multiscale transformer for medical image segmentation.

Karimijafarbigloo S, Azad R, Kazerouni A, Merhof D

PubMed · Jul 1, 2025
Accurate medical image segmentation is crucial for enabling automated clinical decision procedures. However, existing supervised deep learning methods for medical image segmentation face significant challenges due to their reliance on extensive labeled training data. To address this limitation, we introduce a dual-branch transformer network operating on two scales, strategically encoding global contextual dependencies while preserving local information. To promote self-supervised learning, our method leverages semantic dependencies between the different scales, generating a supervisory signal for inter-scale consistency. Additionally, it incorporates a spatial stability loss within each scale, fostering self-supervised content clustering. While the intra-scale and inter-scale consistency losses enhance feature uniformity within clusters, we introduce a cross-entropy loss atop the clustering score map to effectively model cluster distributions and refine decision boundaries. Furthermore, to account for pixel-level similarities between organ or lesion subpixels, we propose a selective kernel regional attention module as a plug-and-play component. This module adeptly captures and outlines organ or lesion regions, slightly enhancing the definition of object boundaries. Our experimental results on skin lesion, lung organ, and multiple myeloma plasma cell segmentation tasks demonstrate the superior performance of our method compared to state-of-the-art approaches.
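One plausible form of the inter-scale consistency signal described above, sketched in PyTorch; the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def inter_scale_consistency(feat_hi: torch.Tensor,
                            feat_lo: torch.Tensor) -> torch.Tensor:
    """Encourage agreement between score maps produced at two scales.

    feat_hi: (B, C, H, W)  high-resolution branch output
    feat_lo: (B, C, h, w)  low-resolution branch output, h < H
    """
    # Bring the high-resolution map down to the coarse grid
    hi_down = F.interpolate(feat_hi, size=feat_lo.shape[-2:],
                            mode="bilinear", align_corners=False)
    # Cosine distance between the two scales, averaged over pixels
    cos = F.cosine_similarity(hi_down, feat_lo, dim=1)
    return (1.0 - cos).mean()

loss = inter_scale_consistency(torch.randn(2, 16, 64, 64),
                               torch.randn(2, 16, 32, 32))
```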

Phantom-based evaluation of image quality in Transformer-enhanced 2048-matrix CT imaging at low and ultralow doses.

Li Q, Liu L, Zhang Y, Zhang L, Wang L, Pan Z, Xu M, Zhang S, Xie X

PubMed · Jul 1, 2025
To compare the quality of standard 512-matrix, standard 1024-matrix, and Swin2SR-based 2048-matrix phantom images under different scanning protocols. The Catphan 600 phantom was scanned using a multidetector CT scanner under two protocols: 120 kV/100 mA (CT dose index volume = 3.4 mGy) to simulate low-dose CT, and 70 kV/40 mA (0.27 mGy) to simulate ultralow-dose CT. Raw data were reconstructed into standard 512-matrix images using three methods: filtered back projection (FBP), adaptive statistical iterative reconstruction at 40% intensity (ASIR-V), and deep learning image reconstruction at high intensity (DLIR-H). Two super-resolution models were then used to generate 2048-matrix images, the transformer-based Swin2SR model (Swin2SR-2048) and a super-resolution convolutional neural network (SRCNN-2048), and the quality of the two sets of 2048-matrix images was compared. Image quality was evaluated with ImQuest software (v7.2.0.0, Duke University) based on line-pair clarity, task-based transfer function (TTF), image noise, and noise power spectrum (NPS). At equivalent radiation doses and with the same reconstruction method, Swin2SR-2048 images resolved more line pairs than both standard-512 and standard-1024 images. Except for the 0.27 mGy/DLIR-H/standard kernel sequence, the TTF-50% of Teflon increased after super-resolution processing. Statistically significant differences in TTF-50% were observed between the standard-512, standard-1024, and Swin2SR-2048 images (all p < 0.05). Swin2SR-2048 images exhibited lower image noise and peak NPS than both standard 512- and 1024-matrix images, with significant differences observed across all three matrix types (all p < 0.05). Swin2SR-2048 images also demonstrated superior quality compared to SRCNN-2048, with significant differences in image noise (p < 0.001), peak NPS (p < 0.05), and TTF-50% for Teflon (p < 0.05). Transformer-enhanced 2048-matrix CT images improve spatial resolution and reduce image noise compared with standard 512- and 1024-matrix images.
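The noise and NPS measurements reported here follow standard phantom methodology; below is a minimal sketch of a 2-D NPS estimate on synthetic noise patches (ImQuest's exact implementation is not reproduced).

```python
import numpy as np

def noise_power_spectrum(rois: np.ndarray, pixel_size_mm: float) -> np.ndarray:
    """2-D noise power spectrum from uniform-phantom ROIs.

    rois: (n_rois, N, N) array of noise-only regions.
    Returns the 2-D NPS in mm^2 * HU^2 (DC component centered).
    """
    n, N, _ = rois.shape
    nps = np.zeros((N, N))
    for roi in rois:
        detrended = roi - roi.mean()  # remove the mean (zero-frequency) term
        nps += np.abs(np.fft.fftshift(np.fft.fft2(detrended))) ** 2
    # Standard normalization: pixel area / pixel count, averaged over ROIs
    return nps * (pixel_size_mm ** 2) / (N * N) / n

rois = np.random.normal(0.0, 10.0, size=(16, 64, 64))  # synthetic noise patches
nps2d = noise_power_spectrum(rois, pixel_size_mm=0.48)
print("NPS peak:", nps2d.max())
```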

Development of Multiparametric Prognostic Models for Stereotactic Magnetic Resonance Guided Radiation Therapy of Pancreatic Cancers.

Michalet M, Valenzuela G, Nougaret S, Tardieu M, Azria D, Riou O

PubMed · Jul 1, 2025
Stereotactic magnetic resonance guided adaptive radiation therapy (SMART) is a new option for local treatment of unresectable pancreatic ductal adenocarcinoma, showing promising survival and local control (LC) results. Despite this, some patients experience early local and/or metastatic recurrence leading to death. We aimed to develop multiparametric prognostic models for these patients. All patients treated in our institution with SMART for unresectable pancreatic ductal adenocarcinoma between October 21, 2019, and August 5, 2022, were included. Several initial clinical characteristics as well as dosimetric data of SMART were recorded. Radiomics data were extracted from 0.35-T simulation magnetic resonance imaging. All these data were combined to build prognostic models of overall survival (OS) and LC using machine learning algorithms. Eighty-three patients with a median age of 64.9 years were included. Most patients (77%) had locally advanced pancreatic cancer. The median OS was 21 months after SMART completion and 27 months after chemotherapy initiation. The 6- and 12-month post-SMART OS was 87.8% (95% CI, 78.2%-93.2%) and 70.9% (95% CI, 58.8%-80.0%), respectively. The best model for OS was a Cox proportional hazards survival analysis using clinical data, with an inverse-probability-of-censoring-weighted concordance index of 0.87. Tested on its 12-month OS prediction capacity, this model performed well (sensitivity 67%, specificity 71%, area under the curve 0.90). The median LC was not reached. The 6- and 12-month post-SMART LC was 92.4% (95% CI, 83.7%-96.6%) and 76.3% (95% CI, 62.6%-85.5%), respectively. The best model for LC was a component-wise gradient boosting survival analysis using clinical and radiomics data, with an inverse-probability-of-censoring-weighted concordance index of 0.80. Tested on its 9-month LC prediction capacity, this model performed well (sensitivity 50%, specificity 97%, area under the curve 0.78). Combining clinical and radiomics data in multiparametric prognostic models using machine learning algorithms showed good performance for the prediction of OS and LC. External validation of these models will be needed.
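A minimal sketch of fitting a Cox model and computing the IPCW concordance index on synthetic data, using the scikit-survival package (the authors' implementation is not named in the abstract).

```python
import numpy as np
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import concordance_index_ipcw
from sksurv.util import Surv

rng = np.random.default_rng(1)
X = rng.normal(size=(83, 5))                      # 5 clinical covariates
time = rng.exponential(24.0, size=83)             # follow-up in months
event = rng.integers(0, 2, size=83).astype(bool)  # death observed?
y = Surv.from_arrays(event=event, time=time)

cox = CoxPHSurvivalAnalysis().fit(X, y)
risk = cox.predict(X)  # higher score = worse predicted prognosis

# IPCW concordance index, truncated at 12 months as in the OS analysis
c_ipcw = concordance_index_ipcw(y, y, risk, tau=12.0)[0]
print(f"C-index (IPCW): {c_ipcw:.2f}")
```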

Evaluating a large language model's accuracy in chest X-ray interpretation for acute thoracic conditions.

Ostrovsky AM

PubMed · Jul 1, 2025
The rapid advancement of artificial intelligence (AI) has great potential to impact healthcare. Chest X-rays are essential for diagnosing acute thoracic conditions in the emergency department (ED), but interpretation delays due to limited radiologist availability can impact clinical decision-making. AI models, including deep learning algorithms, have been explored for diagnostic support, but the potential of large language models (LLMs) in emergency radiology remains largely unexamined. This study assessed ChatGPT's feasibility in interpreting chest X-rays for acute thoracic conditions commonly encountered in the ED. A subset of 1400 images from the NIH Chest X-ray dataset was analyzed, representing seven pathology categories: Atelectasis, Effusion, Emphysema, Pneumothorax, Pneumonia, Mass, and No Finding. ChatGPT 4.0, utilizing the "X-Ray Interpreter" add-on, was evaluated for its diagnostic performance across these categories. ChatGPT demonstrated high performance in identifying normal chest X-rays, with a sensitivity of 98.9 %, specificity of 93.9 %, and accuracy of 94.7 %. However, the model's performance varied across pathologies. The best results were observed in diagnosing pneumonia (sensitivity 76.2 %, specificity 93.7 %) and pneumothorax (sensitivity 77.4 %, specificity 89.1 %), while performance for atelectasis and emphysema was lower. ChatGPT demonstrates potential as a supplementary tool for differentiating normal from abnormal chest X-rays, with promising results for certain pathologies such as pneumonia. However, its diagnostic accuracy for more subtle conditions requires improvement. Further research integrating ChatGPT with specialized image recognition models could enhance its performance, offering new possibilities in medical imaging and education.
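The per-pathology metrics above reduce to simple confusion-matrix arithmetic; a minimal sketch with hypothetical counts (not the study's raw data):

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Per-pathology sensitivity, specificity, and accuracy."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for one pathology category
print(diagnostic_metrics(tp=152, fp=12, tn=178, fn=48))
```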

A deep-learning model to predict the completeness of cytoreductive surgery in colorectal cancer with peritoneal metastasis.

Lin Q, Chen C, Li K, Cao W, Wang R, Fichera A, Han S, Zou X, Li T, Zou P, Wang H, Ye Z, Yuan Z

PubMed · Jul 1, 2025
Colorectal cancer (CRC) with peritoneal metastasis (PM) is associated with poor prognosis. The Peritoneal Cancer Index (PCI) is used to evaluate the extent of PM and to select patients for cytoreductive surgery (CRS); however, the PCI alone is not accurate enough to guide patient selection. We developed a novel deep-learning framework, decoupling feature alignment and fusion (DeAF), to aid the selection of PM patients and predict the completeness of CRS. A total of 186 CRC patients with PM recruited from four tertiary hospitals were enrolled. In the training cohort, the DeAF model was trained on contrast-enhanced CT images using the SimSiam algorithm and then fused with clinicopathological parameters to increase performance. Accuracy, sensitivity, specificity, and the area under the ROC curve (AUC) were evaluated in both the internal validation cohort and three external cohorts. The DeAF model demonstrated robust accuracy in predicting the completeness of CRS, with an AUC of 0.900 (95% CI: 0.793-1.000) in the internal validation cohort. The model can guide the selection of suitable patients and predict potential benefits from CRS. The high performance in predicting CRS completeness was validated in three external cohorts, with AUC values of 0.906 (95% CI: 0.812-1.000), 0.960 (95% CI: 0.885-1.000), and 0.933 (95% CI: 0.791-1.000), respectively. The novel DeAF framework can aid surgeons in selecting suitable PM patients for CRS and predicting the completeness of CRS. The model can inform surgical decision-making and provide potential benefits for PM patients.
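DeAF's self-supervised pretraining relies on SimSiam, whose loss is the symmetric negative cosine similarity with stop-gradient (Chen & He, 2021); a minimal sketch:

```python
import torch
import torch.nn.functional as F

def simsiam_loss(p1, p2, z1, z2):
    """Symmetric negative cosine similarity with stop-gradient,
    as in SimSiam. p*: predictor outputs, z*: projector outputs
    of two augmented views of the same CT image."""
    def d(p, z):
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()
    return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)

p1, p2, z1, z2 = (torch.randn(8, 128) for _ in range(4))
print(simsiam_loss(p1, p2, z1, z2))
```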

A systematic review of generative AI approaches for medical image enhancement: Comparing GANs, transformers, and diffusion models.

Oulmalme C, Nakouri H, Jaafar F

PubMed · Jul 1, 2025
Medical imaging is a vital diagnostic tool that provides detailed insights into human anatomy but faces challenges affecting its accuracy and efficiency. Advanced generative AI models offer promising solutions. Because previous reviews have had a narrow focus, a comprehensive evaluation across techniques and modalities is necessary. This systematic review integrates the three leading state-of-the-art approaches, GANs, Diffusion Models, and Transformers, examining their applicability, methodologies, and clinical implications in improving medical image quality. Using the PRISMA framework, 63 of 989 studies were selected via Google Scholar and PubMed, focusing on GANs, Transformers, and Diffusion Models. Articles from ACM, IEEE Xplore, and Springer were analyzed. Generative AI techniques show promise in improving image resolution, reducing noise, and enhancing fidelity. GANs generate high-quality images, Transformers leverage global context, and Diffusion Models are effective in denoising and reconstruction. Challenges include high computational costs, limited dataset diversity, and issues with generalizability, and the literature emphasizes quantitative metrics over clinical applicability. This review highlights the transformative impact of GANs, Transformers, and Diffusion Models in advancing medical imaging. Future research must address computational and generalization challenges, emphasize open science, and validate these techniques in diverse clinical settings to unlock their full potential. These efforts could enhance diagnostic accuracy, lower costs, and improve patient outcomes.
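To illustrate why diffusion models suit denoising tasks, here is a minimal sketch of the DDPM forward-noising step that such models are trained to invert (toy data; not drawn from any reviewed study):

```python
import torch

def q_sample(x0: torch.Tensor, t: torch.Tensor,
             alphas_cumprod: torch.Tensor) -> torch.Tensor:
    """DDPM forward (noising) step:
    x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps
    """
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(x0)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps

betas = torch.linspace(1e-4, 0.02, 1000)          # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
x0 = torch.randn(2, 1, 64, 64)                    # toy "images"
xt = q_sample(x0, torch.tensor([500, 999]), alphas_cumprod)
```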

Uncertainty-aware deep learning for segmentation of primary tumor and pathologic lymph nodes in oropharyngeal cancer: Insights from a multi-center cohort.

De Biase A, Sijtsema NM, van Dijk LV, Steenbakkers R, Langendijk JA, van Ooijen P

PubMed · Jul 1, 2025
Information on deep learning (DL) tumor segmentation accuracy at the voxel and structure levels is essential for clinical introduction. In a previous study, a DL model was developed for oropharyngeal cancer (OPC) primary tumor (PT) segmentation in PET/CT images, and voxel-level predicted probabilities (TPMs) quantifying model certainty were introduced. This study extended the network to simultaneously generate TPMs for the PT and pathologic lymph nodes (PL) and explored whether structure-level uncertainty in TPMs predicts segmentation accuracy in an independent external cohort. We retrospectively gathered PET/CT images and manual delineations of the gross tumor volume of the PT (GTVp) and PL (GTVln) of 407 OPC patients treated with (chemo)radiation in our institute. The HECKTOR 2022 challenge dataset served as the external test set. The pre-existing architecture was modified for multi-label segmentation. Multiple models were trained, and the non-binarized ensemble average of TPMs was considered per patient. Segmentation accuracy was quantified by surface and aggregate DSC, and model uncertainty by the coefficient of variation (CV) of multiple predictions. Predicted GTVp and GTVln segmentations in the external test set achieved aggregate DSC of 0.75 and 0.70, respectively. Patient-specific CV and surface DSC were significantly correlated for both structures (-0.54 and -0.66 for GTVp and GTVln, respectively) in the external set, indicating meaningful accuracy-uncertainty calibration. Significant accuracy-versus-uncertainty calibration was achieved for TPMs in both the internal and external test sets, indicating the potential of quantified uncertainty from TPMs to identify cases with lower GTVp and GTVln segmentation accuracy, independently of the dataset.
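The abstract does not state how the structure-level CV is computed from the TPMs; the sketch below treats the spread of predicted probability mass across ensemble members as one plausible definition and checks its correlation with surface DSC on placeholder data.

```python
import numpy as np
from scipy.stats import spearmanr

# TPMs from M ensemble members for one patient: (M, D, H, W) probabilities
tpms = np.random.rand(5, 32, 64, 64)

# Structure-level coefficient of variation: spread of the predicted
# probability mass (a volume proxy) across ensemble members
volumes = tpms.reshape(5, -1).sum(axis=1)
cv = volumes.std() / volumes.mean()

# Cohort-level check: does higher CV track lower surface DSC?
cvs = np.random.rand(50)                      # one CV per patient (placeholder)
surface_dsc = 1.0 - 0.5 * cvs + np.random.normal(0, 0.1, 50)
rho, p = spearmanr(cvs, surface_dsc)
print(f"rho = {rho:.2f}, p = {p:.3g}")
```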

Worldwide research trends on artificial intelligence in head and neck cancer: a bibliometric analysis.

Silvestre-Barbosa Y, Castro VT, Di Carvalho Melo L, Reis PED, Leite AF, Ferreira EB, Guerra ENS

PubMed · Jul 1, 2025
This bibliometric analysis explores scientific data on artificial intelligence (AI) and head and neck cancer (HNC). AI-related HNC articles were retrieved from the Web of Science Core Collection. VOSviewer and Biblioshiny/Bibliometrix for RStudio were used for data synthesis. The analysis covered key characteristics such as sources, authors, affiliations, countries, citations, top-cited articles, keyword analysis, and trending topics. A total of 1,019 papers from 1995 to 2024 were included. Among them, 71.6% were original research articles, 7.6% were reviews, and 20.8% took other forms. The fifty most cited documents highlighted radiology as the most explored specialty, with an emphasis on deep learning models for segmentation. Publications have been increasing, with an annual growth rate of 94.4% after 2016. Among the 20 most productive countries, 14 are high-income economies. The most strongly cited keywords revealed two main clusters: radiomics and radiotherapy. The most frequent keywords include machine learning, deep learning, artificial intelligence, and head and neck cancer, with recent emphasis on diagnosis, survival prediction, and histopathology. The use of AI in HNC research has increased since 2016, and the analysis indicated a notable disparity in publication quantity between high-income and low/middle-income countries. Future research should prioritize clinical validation and standardization to facilitate the integration of AI in HNC management, particularly in underrepresented regions.
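The reported annual growth rate is consistent with the compound growth formula used by bibliometrix-style tools; a minimal sketch with hypothetical publication counts:

```python
def annual_growth_rate(first: int, last: int, n_years: int) -> float:
    """Compound annual growth rate of publication counts, in percent,
    as computed by bibliometrix-style tools."""
    return ((last / first) ** (1.0 / (n_years - 1)) - 1.0) * 100.0

# Hypothetical counts, not the study's data
print(f"{annual_growth_rate(first=4, last=820, n_years=9):.1f}%")
```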