
Deep Learning Based on Ultrasound Images Differentiates Parotid Gland Pleomorphic Adenomas and Warthin Tumors.

Li Y, Zou M, Zhou X, Long X, Liu X, Yao Y

PubMed | Jul 1, 2025
To explore the clinical significance of applying deep learning to ultrasound images for the development of an automated model that accurately identifies pleomorphic adenomas and Warthin tumors of the salivary glands. A retrospective study was conducted on 91 patients who underwent ultrasonography between January 2016 and December 2023 and were subsequently diagnosed with pleomorphic adenoma or Warthin tumor based on postoperative pathological findings. A total of 526 ultrasonography images were collected for analysis. Convolutional neural network (CNN) models, including ResNet18, MobileNetV3Small, and InceptionV3, were trained and validated on these images to differentiate pleomorphic adenoma from Warthin tumor. Performance was evaluated using receiver operating characteristic (ROC) curves, area under the curve (AUC), sensitivity, specificity, positive predictive value, and negative predictive value. Two ultrasound physicians with different levels of expertise independently evaluated the ultrasound images, and their diagnostic outcomes were compared with the results of the best-performing model. Inter-rater agreement between routine ultrasonographic interpretation by the two physicians and the automatic diagnosis of the best model, each relative to the pathological results, was assessed using kappa tests. The deep learning models performed well in differentiating pleomorphic adenoma from Warthin tumor. The ResNet18, MobileNetV3Small, and InceptionV3 models achieved diagnostic accuracies of 82.4% (AUC: 0.932), 87.0% (AUC: 0.946), and 77.8% (AUC: 0.811), respectively, with MobileNetV3Small performing best. The experienced ultrasonographer achieved a diagnostic accuracy of 73.5%, with sensitivity, specificity, positive predictive value, and negative predictive value of 73.7%, 73.3%, 77.8%, and 68.8%, respectively. The less experienced ultrasonographer achieved a diagnostic accuracy of 69.0%, with corresponding values of 66.7%, 71.4%, 71.4%, and 66.7%. The kappa test revealed strong agreement between the best-performing deep learning model and postoperative pathological diagnoses (kappa = 0.778, p < .001), whereas the less experienced ultrasonographer showed poor consistency in image interpretation (kappa = 0.380, p < .05). The diagnostic accuracy of the best deep learning model was significantly higher than that of the ultrasonographers, and the experienced ultrasonographer was more accurate than the less experienced one. This study demonstrates the promising performance of a deep learning-based method using ultrasonography images to differentiate pleomorphic adenoma from Warthin tumor. The approach reduces subjective error, provides decision support for clinicians, and improves diagnostic consistency.
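As a rough illustration of the transfer-learning setup such studies typically use, the sketch below fine-tunes a pretrained MobileNetV3-Small for the two-class task; the layer replacement, hyperparameters, and input size are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch: fine-tuning MobileNetV3-Small for two-class ultrasound
# classification (pleomorphic adenoma vs. Warthin tumor). Hyperparameters
# and preprocessing are illustrative assumptions, not the authors' settings.
import torch
import torch.nn as nn
from torchvision import models

model = models.mobilenet_v3_small(weights="IMAGENET1K_V1")
# Swap the ImageNet head for a 2-class output layer.
model.classifier[3] = nn.Linear(model.classifier[3].in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised step on a batch of (N, 3, 224, 224) ultrasound crops."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```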

Semi-supervised temporal attention network for lung 4D CT ventilation estimation.

Xue P, Zhang J, Ma L, Li Y, Ji H, Ren T, Hu Z, Ren M, Zhang Z, Dong E

PubMed | Jul 1, 2025
Computed tomography (CT)-derived ventilation estimation, also known as CT ventilation imaging (CTVI), is emerging as a potentially crucial tool for designing functional avoidance radiotherapy treatment plans and evaluating therapy responses. However, most conventional CTVI methods depend heavily on deformation fields from image registration to track volume variations, making them susceptible to registration errors and limiting estimation accuracy. In addition, existing deep learning-based CTVI methods typically require large amounts of labeled data and cannot fully exploit the temporal characteristics of 4D CT images. To address these issues, we propose a semi-supervised temporal attention (S²TA) network for lung 4D CT ventilation estimation. Specifically, the semi-supervised learning framework uses a teacher model to generate pseudo-labels from unlabeled 4D CT images, which are used to train a student model that takes both labeled and unlabeled 4D CT images as input. The teacher model is updated as the moving average of the instantly trained student, preventing it from being abruptly affected by incorrect pseudo-labels. Furthermore, to fully exploit the temporal information of 4D CT images, a temporal attention architecture is designed to effectively capture the temporal relationships across multiple phases of the 4D CT sequence. Extensive experiments on three publicly available thoracic 4D CT datasets show that the proposed method achieves higher estimation accuracy than state-of-the-art methods and could potentially be used for lung functional avoidance radiotherapy and treatment response modeling.
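The teacher-as-moving-average idea is the core of the semi-supervised scheme described above; a minimal sketch follows, with the decay value and function names being assumptions rather than the paper's implementation.

```python
# Minimal mean-teacher sketch: the teacher is an exponential moving average
# (EMA) of the student, which keeps pseudo-labels from shifting abruptly.
import copy
import torch

def make_teacher(student: torch.nn.Module) -> torch.nn.Module:
    """Initialize the teacher as a frozen copy of the student."""
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               decay: float = 0.999) -> None:
    """teacher <- decay * teacher + (1 - decay) * student (per parameter)."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)
```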

Improved unsupervised 3D lung lesion detection and localization by fusing global and local features: Validation in 3D low-dose computed tomography.

Lee JH, Oh SJ, Kim K, Lim CY, Choi SH, Chung MJ

PubMed | Jul 1, 2025
Unsupervised anomaly detection (UAD) is crucial in low-dose computed tomography (LDCT). Recent AI methods that leverage global features have enabled effective UAD with minimal training data from normal patients. However, because this approach does not exploit local features, it is vulnerable to missing deep lesions within the lungs: the conventional use of global features can achieve high specificity, but often at the cost of limited sensitivity. A UAD model with high sensitivity is essential to prevent false negatives, especially when screening for diseases with high mortality rates. We developed a new LDCT UAD model that leverages local features, achieving a previously unattainable increase in sensitivity over global methods (17.5% improvement). Furthermore, by integrating this approach with conventional global-feature techniques, we consolidated the advantages of each model - high sensitivity from the local model and high specificity from the global model - into a single, unified, trained model (17.6% and 33.5% improvement, respectively). Without additional training, this fixed model is expected to provide significant diagnostic efficacy in various LDCT applications where both high sensitivity and specificity are essential. Code is available at https://github.com/kskim-phd/Fusion-UADL.
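The authors' implementation is in the linked repository; the sketch below only illustrates one plausible way to fuse a global (specificity-oriented) and a local (sensitivity-oriented) anomaly map, with the normalization and weighting chosen here as assumptions.

```python
# Illustrative fusion of global and local anomaly score maps (not the
# repository's code): normalize each map, then take a weighted combination.
import numpy as np

def min_max(x: np.ndarray) -> np.ndarray:
    return (x - x.min()) / (x.max() - x.min() + 1e-8)

def fuse_anomaly_maps(global_map: np.ndarray, local_map: np.ndarray,
                      w_local: float = 0.5) -> np.ndarray:
    """Higher w_local favors sensitivity (local model), lower favors
    specificity (global model)."""
    return (1.0 - w_local) * min_max(global_map) + w_local * min_max(local_map)
```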

A vision transformer-convolutional neural network framework for decision-transparent dual-energy X-ray absorptiometry recommendations using chest low-dose CT.

Kuo DP, Chen YC, Cheng SJ, Hsieh KL, Li YT, Kuo PC, Chang YC, Chen CY

PubMed | Jul 1, 2025
This study introduces an ensemble framework that integrates Vision Transformer (ViT) and Convolutional Neural Network (CNN) models to leverage their complementary strengths, generating visualized and decision-transparent recommendations for dual-energy X-ray absorptiometry (DXA) scans from chest low-dose computed tomography (LDCT). The framework was developed using data from 321 individuals and validated with an independent test cohort of 186 individuals. It addresses two classification tasks: (1) distinguishing normal from abnormal bone mineral density (BMD) and (2) differentiating osteoporosis from non-osteoporosis. Three field-of-view (FOV) settings, fitFOV (entire vertebra), halfFOV (vertebral body only), and largeFOV (fitFOV + 20%), were analyzed to assess their impact on model performance. Model predictions were weighted and combined to enhance classification accuracy, and visualizations were generated to improve decision transparency. DXA scans were recommended for individuals classified as having abnormal BMD or osteoporosis. The ensemble framework significantly outperformed individual models in both classification tasks (McNemar test, p < 0.001). In the development cohort, it achieved 91.6% accuracy for task 1 with largeFOV (area under the receiver operating characteristic curve [AUROC]: 0.97) and 86.0% accuracy for task 2 with fitFOV (AUROC: 0.94). In the test cohort, it demonstrated 86.6% accuracy for task 1 (AUROC: 0.93) and 76.9% accuracy for task 2 (AUROC: 0.99). DXA recommendation accuracy was 91.6% and 87.1% in the development and test cohorts, respectively, with notably high accuracy for osteoporosis detection (98.7% and 100%). This combined ViT-CNN framework effectively assesses bone status from LDCT images, particularly when utilizing fitFOV and largeFOV settings. By visualizing classification confidence and vertebral abnormalities, the proposed framework enhances decision transparency and supports clinicians in making informed DXA recommendations following opportunistic osteoporosis screening.
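The weighted combination of branch predictions can be pictured as below; the models, weights, and two-class setup are placeholders for illustration, not the study's configuration.

```python
# Hedged sketch of a weighted ViT + CNN ensemble for a binary decision
# (e.g., normal vs. abnormal BMD). Branch weights are illustrative only.
import torch

@torch.no_grad()
def ensemble_predict(vit_model, cnn_model, images: torch.Tensor,
                     w_vit: float = 0.5):
    """Average class probabilities from both branches; the fused probability
    of the 'abnormal' class would drive the DXA recommendation."""
    p_vit = torch.softmax(vit_model(images), dim=1)
    p_cnn = torch.softmax(cnn_model(images), dim=1)
    probs = w_vit * p_vit + (1.0 - w_vit) * p_cnn
    return probs.argmax(dim=1), probs
```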

Development and validation of a nomogram for predicting bone marrow involvement in lymphoma patients based on ¹⁸F-FDG PET radiomics and clinical factors.

Lu D, Zhu X, Mu X, Huang X, Wei F, Qin L, Liu Q, Fu W, Deng Y

PubMed | Jul 1, 2025
This study aimed to develop and validate a nomogram combining ¹⁸F-FDG PET radiomics and clinical factors to non-invasively predict bone marrow involvement (BMI) in patients with lymphoma. A radiomics nomogram was developed using monocentric data, randomly divided into a training set (70%) and a test set (30%). Bone marrow biopsy (BMB) served as the gold standard for BMI diagnosis. Independent clinical risk factors were identified through univariate and multivariate logistic regression analyses to construct a clinical model. Radiomics features were extracted from PET and CT images and selected using least absolute shrinkage and selection operator (LASSO) regression, yielding a radiomics score (Rad-score) for each patient. Models based on clinical factors, the CT Rad-score, and the PET Rad-score were established and evaluated using eight machine learning algorithms to identify the optimal prediction model. A combined model was constructed and presented as a nomogram. Model performance was assessed using the area under the receiver operating characteristic curve (AUC), calibration curves, and decision curve analysis (DCA). A total of 160 patients were included, of whom 70 had BMI based on BMB results. The training group comprised 112 patients (56 with BMI, 56 without), and the test group included 48 patients (14 with BMI, 34 without). Independent risk factors, including the number of extranodal involvements and B symptoms, were incorporated into the clinical model. For the clinical model, CT Rad-score, and PET Rad-score, the AUCs in the test set were 0.820 (95% CI: 0.705-0.935), 0.538 (95% CI: 0.351-0.723), and 0.836 (95% CI: 0.686-0.986), respectively. Because of the limited diagnostic performance of the CT Rad-score, the nomogram was constructed from the PET Rad-score and the clinical model. The radiomics nomogram achieved AUCs of 0.916 (95% CI: 0.865-0.967) in the training set and 0.863 (95% CI: 0.763-0.964) in the test set. Calibration curves and DCA confirmed the nomogram's discrimination, calibration, and clinical utility in both sets. By integrating the PET Rad-score, the number of extranodal involvements, and B symptoms, this ¹⁸F-FDG PET radiomics-based nomogram offers a non-invasive method to predict bone marrow status in lymphoma patients, providing nuclear medicine physicians with valuable decision support for pre-treatment evaluation.
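A LASSO-derived radiomics score of the kind described above can be sketched as follows; the synthetic feature matrix, labels, and regression setup are assumptions for illustration, not the study's pipeline.

```python
# Hedged sketch: selecting radiomics features with LASSO and forming a
# per-patient Rad-score as a weighted sum of the retained features.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.normal(size=(160, 100)))  # toy features
y = rng.integers(0, 2, size=160)        # 1 = bone marrow involvement (toy labels)

lasso = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(lasso.coef_)  # features with non-zero coefficients
rad_score = X[:, selected] @ lasso.coef_[selected] + lasso.intercept_
```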

MedScale-Former: Self-guided multiscale transformer for medical image segmentation.

Karimijafarbigloo S, Azad R, Kazerouni A, Merhof D

PubMed | Jul 1, 2025
Accurate medical image segmentation is crucial for enabling automated clinical decision procedures. However, existing supervised deep learning methods for medical image segmentation face significant challenges due to their reliance on extensive labeled training data. To address this limitation, our approach introduces a dual-branch transformer network operating at two scales, strategically encoding global contextual dependencies while preserving local information. To promote self-supervised learning, the method leverages semantic dependencies between the two scales, generating a supervisory signal for inter-scale consistency. It also incorporates a spatial stability loss within each scale, fostering self-supervised content clustering. While the intra-scale and inter-scale consistency losses enhance feature uniformity within clusters, we introduce a cross-entropy loss on the clustering score map to model cluster distributions and refine decision boundaries. Furthermore, to account for pixel-level similarities between organ or lesion subpixels, we propose a selective kernel regional attention module as a plug-and-play component. This module captures and outlines organ or lesion regions, modestly sharpening object boundaries. Our experimental results on skin lesion, lung organ, and multiple myeloma plasma cell segmentation tasks demonstrate the superior performance of our method compared to state-of-the-art approaches.
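One way to picture an inter-scale consistency signal of the kind described above is sketched below; it is a simplification under assumed tensor shapes, not the paper's actual loss.

```python
# Simplified inter-scale consistency sketch: coarse-scale cluster scores are
# upsampled and used as soft targets for the fine-scale scores (KL term).
import torch.nn.functional as F

def inter_scale_consistency(scores_coarse, scores_fine):
    """scores_*: (N, K, H, W) cluster score maps at two scales."""
    scores_coarse = F.interpolate(scores_coarse, size=scores_fine.shape[-2:],
                                  mode="bilinear", align_corners=False)
    p_coarse = F.softmax(scores_coarse, dim=1)       # soft target
    log_p_fine = F.log_softmax(scores_fine, dim=1)
    return F.kl_div(log_p_fine, p_coarse, reduction="batchmean")
```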

Development of Multiparametric Prognostic Models for Stereotactic Magnetic Resonance Guided Radiation Therapy of Pancreatic Cancers.

Michalet M, Valenzuela G, Nougaret S, Tardieu M, Azria D, Riou O

PubMed | Jul 1, 2025
Stereotactic magnetic resonance guided adaptive radiation therapy (SMART) is a new option for local treatment of unresectable pancreatic ductal adenocarcinoma, showing promising survival and local control (LC) results. Despite this, some patients experience early local and/or metastatic recurrence leading to death. We aimed to develop multiparametric prognostic models for these patients. All patients treated in our institution with SMART for an unresectable pancreatic ductal adenocarcinoma between October 21, 2019, and August 5, 2022 were included. Several initial clinical characteristics as well as dosimetric data of SMART were recorded. Radiomics data from 0.35-T simulation magnetic resonance imaging were extracted. All these data were combined to build prognostic models of overall survival (OS) and LC using machine learning algorithms. Eighty-three patients with a median age of 64.9 years were included. A majority of patients had locally advanced pancreatic cancer (77%). The median OS was 21 months after SMART completion and 27 months after chemotherapy initiation. The 6- and 12-month post-SMART OS was 87.8% (95% CI, 78.2%-93.2%) and 70.9% (95% CI, 58.8%-80.0%), respectively. The best model for OS was a Cox proportional hazards survival analysis using clinical data, with an inverse-probability-of-censoring-weighted (IPCW) concordance index of 0.87. Tested on its 12-month OS prediction capacity, this model had good performance (sensitivity 67%, specificity 71%, and area under the curve 0.90). The median LC was not reached. The 6- and 12-month post-SMART LC was 92.4% (95% CI, 83.7%-96.6%) and 76.3% (95% CI, 62.6%-85.5%), respectively. The best model for LC was a component-wise gradient boosting survival analysis using clinical and radiomics data, with an IPCW concordance index of 0.80. Tested on its 9-month LC prediction capacity, this model had good performance (sensitivity 50%, specificity 97%, and area under the curve 0.78). Combining clinical and radiomics data in multiparametric prognostic models using machine learning algorithms showed good performance for the prediction of OS and LC. External validation of these models will be needed.
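For readers unfamiliar with IPCW-weighted concordance, the sketch below fits a Cox model and scores it with scikit-survival on synthetic data; the covariates, cohort size, and evaluation split are assumptions, not the study's.

```python
# Hedged sketch: Cox proportional hazards model scored with an IPCW
# concordance index (scikit-survival); all data here are synthetic.
import numpy as np
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.metrics import concordance_index_ipcw
from sksurv.util import Surv

rng = np.random.default_rng(0)
X = rng.normal(size=(83, 5))                      # toy clinical covariates
time = rng.exponential(scale=20.0, size=83)       # follow-up in months
event = rng.integers(0, 2, size=83).astype(bool)  # death observed
y = Surv.from_arrays(event=event, time=time)

model = CoxPHSurvivalAnalysis().fit(X, y)
risk = model.predict(X)                           # higher = higher hazard
c_index_ipcw = concordance_index_ipcw(y, y, risk)[0]
print(f"IPCW concordance index: {c_index_ipcw:.2f}")
```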

Evaluating a large language model's accuracy in chest X-ray interpretation for acute thoracic conditions.

Ostrovsky AM

PubMed | Jul 1, 2025
The rapid advancement of artificial intelligence (AI) has great potential to impact healthcare. Chest X-rays are essential for diagnosing acute thoracic conditions in the emergency department (ED), but interpretation delays due to radiologist availability can impact clinical decision-making. AI models, including deep learning algorithms, have been explored for diagnostic support, but the potential of large language models (LLMs) in emergency radiology remains largely unexamined. This study assessed ChatGPT's feasibility in interpreting chest X-rays for acute thoracic conditions commonly encountered in the ED. A subset of 1400 images from the NIH Chest X-ray dataset was analyzed, representing seven pathology categories: Atelectasis, Effusion, Emphysema, Pneumothorax, Pneumonia, Mass, and No Finding. ChatGPT 4.0, utilizing the "X-Ray Interpreter" add-on, was evaluated for its diagnostic performance across these categories. ChatGPT demonstrated high performance in identifying normal chest X-rays, with a sensitivity of 98.9%, specificity of 93.9%, and accuracy of 94.7%. However, the model's performance varied across pathologies. The best results were observed in diagnosing pneumonia (sensitivity 76.2%, specificity 93.7%) and pneumothorax (sensitivity 77.4%, specificity 89.1%), while performance for atelectasis and emphysema was lower. ChatGPT demonstrates potential as a supplementary tool for differentiating normal from abnormal chest X-rays, with promising results for certain pathologies such as pneumonia. However, its diagnostic accuracy for more subtle conditions requires improvement. Further research integrating ChatGPT with specialized image recognition models could enhance its performance, offering new possibilities in medical imaging and education.
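The per-pathology sensitivity and specificity figures above come down to confusion-matrix arithmetic; a toy sketch of that computation follows (labels here are invented, not the study's data).

```python
# Toy sketch: sensitivity and specificity for one pathology category from
# binary predictions (1 = pathology present, 0 = no finding).
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 0, 0, 1, 0, 0, 1]   # reference labels (toy)
y_pred = [1, 0, 0, 0, 1, 1, 0, 1]   # model's calls (toy)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)        # true positive rate
specificity = tn / (tn + fp)        # true negative rate
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```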

A deep-learning model to predict the completeness of cytoreductive surgery in colorectal cancer with peritoneal metastasis.

Lin Q, Chen C, Li K, Cao W, Wang R, Fichera A, Han S, Zou X, Li T, Zou P, Wang H, Ye Z, Yuan Z

PubMed | Jul 1, 2025
Colorectal cancer (CRC) with peritoneal metastasis (PM) is associated with poor prognosis. The Peritoneal Cancer Index (PCI) is used to evaluate the extent of PM and to select patients for cytoreductive surgery (CRS). However, the PCI score is not accurate enough to guide patient selection for CRS. We developed a novel deep learning framework, decoupling feature alignment and fusion (DeAF), to aid the selection of PM patients and predict the surgical completeness of CRS. A total of 186 CRC patients with PM recruited from four tertiary hospitals were enrolled. In the training cohort, the DeAF model was trained with the SimSiam algorithm on contrast CT images and then fused with clinicopathological parameters to increase performance. Accuracy, sensitivity, specificity, and ROC AUC were evaluated in the internal validation cohort and in three external cohorts. The DeAF model demonstrated robust accuracy in predicting the completeness of CRS, with an AUC of 0.9 (95% CI: 0.793-1.000) in the internal validation cohort. The model can guide the selection of suitable patients and predict potential benefit from CRS. The high performance in predicting CRS completeness was validated in the three external cohorts, with AUC values of 0.906 (95% CI: 0.812-1.000), 0.960 (95% CI: 0.885-1.000), and 0.933 (95% CI: 0.791-1.000), respectively. The novel DeAF framework can aid surgeons in selecting suitable PM patients for CRS and predicting the completeness of CRS. The model can inform surgical decision-making and provide potential benefits for PM patients.
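The SimSiam objective mentioned above can be summarized in a few lines; this is a generic sketch of that loss (stop-gradient on the projector outputs), not the DeAF model itself.

```python
# Generic SimSiam loss sketch: negative cosine similarity between the
# predictor output of one augmented CT view and the detached (stop-gradient)
# projector output of the other view, symmetrized over both views.
import torch.nn.functional as F

def simsiam_loss(p1, p2, z1, z2):
    """p1, p2: predictor outputs; z1, z2: projector outputs of two views."""
    loss_a = -F.cosine_similarity(p1, z2.detach(), dim=-1).mean()
    loss_b = -F.cosine_similarity(p2, z1.detach(), dim=-1).mean()
    return 0.5 * (loss_a + loss_b)
```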

Uncertainty-aware deep learning for segmentation of primary tumor and pathologic lymph nodes in oropharyngeal cancer: Insights from a multi-center cohort.

De Biase A, Sijtsema NM, van Dijk LV, Steenbakkers R, Langendijk JA, van Ooijen P

PubMed | Jul 1, 2025
Information on deep learning (DL) tumor segmentation accuracy at the voxel and structure level is essential for clinical introduction. In a previous study, a DL model was developed for oropharyngeal cancer (OPC) primary tumor (PT) segmentation in PET/CT images, and voxel-level predicted probability maps (TPMs) quantifying model certainty were introduced. This study extended the network to simultaneously generate TPMs for the PT and pathologic lymph nodes (PL) and explored whether structure-level uncertainty in the TPMs predicts segmentation accuracy in an independent external cohort. We retrospectively gathered PET/CT images and manual delineations of the gross tumor volume of the PT (GTVp) and PL (GTVln) of 407 OPC patients treated with (chemo)radiation in our institute. The HECKTOR 2022 challenge dataset served as the external test set. The pre-existing architecture was modified for multi-label segmentation. Multiple models were trained, and the non-binarized ensemble average of the TPMs was considered per patient. Segmentation accuracy was quantified by surface and aggregate DSC, and model uncertainty by the coefficient of variation (CV) of the multiple predictions. Predicted GTVp and GTVln segmentations in the external test set achieved aggregate DSCs of 0.75 and 0.70. Patient-specific CV and surface DSC showed a significant correlation for both structures (-0.54 and -0.66 for GTVp and GTVln) in the external set, indicating significant calibration. Significant accuracy-versus-uncertainty calibration was achieved for the TPMs in both the internal and external test sets, indicating the potential use of quantified uncertainty from TPMs to identify cases with lower GTVp and GTVln segmentation accuracy, independently of the dataset.
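One plausible reading of the structure-level uncertainty measure is a coefficient of variation over the per-model predictions; the sketch below computes it over predicted volumes, which is an assumption about how the CV is aggregated rather than the paper's definition.

```python
# Hedged sketch: coefficient of variation (CV) across an ensemble of tumor
# probability maps (TPMs) for one structure; higher CV = more disagreement.
import numpy as np

def structure_cv(tpms: np.ndarray, threshold: float = 0.5) -> float:
    """tpms: (n_models, D, H, W) voxel probabilities for one structure.
    CV is taken over the per-model segmented volumes (assumed aggregation)."""
    volumes = (tpms >= threshold).sum(axis=(1, 2, 3)).astype(float)
    return float(volumes.std() / (volumes.mean() + 1e-8))
```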
