
Integrating SEResNet101 and SE-VGG19 for advanced cervical lesion detection: a step forward in precision oncology.

Ye Y, Chen Y, Pan J, Li P, Ni F, He H

PubMed · May 28, 2025
Cervical cancer remains a significant global health issue, with accurate differentiation between low-grade (LSIL) and high-grade (HSIL) squamous intraepithelial lesions crucial for effective screening and management. Current methods, such as Pap smears and HPV testing, often fall short in sensitivity and specificity. Deep learning models hold the potential to enhance the accuracy of cervical cancer screening but require thorough evaluation to ascertain their practical utility. This study compares the performance of two advanced deep learning models, SEResNet101 and SE-VGG19, in classifying cervical lesions using a dataset of 3,305 high-quality colposcopy images. We assessed the models based on their accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). The SEResNet101 model demonstrated superior performance over SE-VGG19 across all evaluated metrics. Specifically, SEResNet101 achieved a sensitivity of 95%, a specificity of 97%, and an AUC of 0.98, compared to 89% sensitivity, 93% specificity, and an AUC of 0.94 for SE-VGG19. These findings suggest that SEResNet101 could significantly reduce both over- and under-treatment rates by enhancing diagnostic precision. Our results indicate that SEResNet101 offers a promising enhancement over existing screening methods, integrating advanced deep learning algorithms to significantly improve the precision of cervical lesion classification. This study advocates for the inclusion of SEResNet101 in clinical workflows to enhance cervical cancer screening protocols, thereby improving patient outcomes. Future work should focus on multicentric trials to validate these findings and facilitate widespread clinical adoption.
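
The abstract does not describe the architecture in code, but the core idea of an SE-augmented backbone can be sketched as below; this is a minimal illustration assuming a torchvision ResNet101, an SE reduction ratio of 16, and a two-class (LSIL/HSIL) output, none of which are specified by the authors.

```python
# Hedged sketch: SE-augmented ResNet101 for binary LSIL vs HSIL classification.
# Assumptions (not from the paper): torchvision backbone, SE reduction ratio 16,
# 2048-d pooled features, two output classes.
import torch
import torch.nn as nn
from torchvision import models

class SEBlock(nn.Module):
    """Squeeze-and-excitation: channel-wise reweighting of feature maps."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: global context
        self.fc = nn.Sequential(                         # excitation: channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                      # reweight channels

class SEResNet101(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V2)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool/fc
        self.se = SEBlock(2048)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(2048, num_classes))

    def forward(self, x):
        return self.head(self.se(self.features(x)))

if __name__ == "__main__":
    logits = SEResNet101()(torch.randn(1, 3, 224, 224))   # smoke test
    print(logits.shape)                                    # torch.Size([1, 2])
```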

Automatic identification of Parkinsonism using clinical multi-contrast brain MRI: a large self-supervised vision foundation model strategy.

Suo X, Chen M, Chen L, Luo C, Kemp GJ, Lui S, Sun H

PubMed · May 27, 2025
Valid non-invasive biomarkers for Parkinson's disease (PD) and Parkinson-plus syndrome (PPS) are urgently needed. Based on our recent self-supervised vision foundation model, the Shifted Window UNEt TRansformer (Swin UNETR), which uses clinical multi-contrast whole brain MRI, we aimed to develop an efficient and practical model ('SwinClassifier') for the discrimination of PD vs PPS using routine clinical MRI scans. We used 75,861 clinical head MRI scans including T1-weighted, T2-weighted and fluid attenuated inversion recovery imaging as a pre-training dataset to develop a foundation model, using self-supervised learning with a cross-contrast context recovery task. Clinical head MRI scans from n = 1992 participants with PD and n = 1989 participants with PPS were then used as a downstream PD vs PPS classification dataset. We assessed SwinClassifier's performance using confusion-matrix-derived metrics, comparing it against a self-supervised vanilla Vision Transformer (ViT) autoencoder ('ViTClassifier') and two convolutional neural networks (DenseNet121 and ResNet50) trained from scratch. SwinClassifier showed very good performance (F1 score 0.83, 95% confidence interval [CI] [0.79-0.87], AUC 0.89) in PD vs PPS discrimination in independent test datasets (n = 173 participants with PD and n = 165 participants with PPS). This self-supervised classifier with pretrained weights outperformed the ViTClassifier and the convolutional classifiers trained from scratch (F1 score 0.77-0.82, AUC 0.83-0.85). Occlusion sensitivity mapping in the correctly classified cases (n = 160 PD and n = 114 PPS) highlighted the brain regions guiding discrimination, mainly sensorimotor and midline structures including the cerebellum, brainstem, ventricles and basal ganglia. Our self-supervised digital model based on routine clinical head MRI discriminated PD vs PPS with good accuracy and sensitivity. With incremental improvements, the approach may be diagnostically useful in early disease. Funding: National Key Research and Development Program of China.
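
As a rough illustration of the downstream step only, the sketch below attaches a classification head to a pretrained 3D encoder and fine-tunes it for PD vs PPS; the tiny encoder here is merely a stand-in for the self-supervised Swin UNETR backbone, and the checkpoint path is hypothetical.

```python
# Hedged sketch of the downstream step: a classification head on a pretrained 3D
# encoder for PD vs PPS. The encoder below is a placeholder; in the paper it would
# be the self-supervised Swin UNETR backbone pretrained on 75,861 clinical scans.
import torch
import torch.nn as nn

class TinyEncoder3D(nn.Module):
    """Placeholder for a pretrained 3D encoder (e.g. the Swin UNETR backbone)."""
    def __init__(self, in_channels: int = 3, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, width, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv3d(width, width * 2, 3, stride=2, padding=1), nn.GELU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.out_dim = width * 2

    def forward(self, x):
        return self.net(x)

class SwinClassifierSketch(nn.Module):
    def __init__(self, encoder: TinyEncoder3D, num_classes: int = 2):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(encoder.out_dim, num_classes)

    def forward(self, x):
        return self.head(self.encoder(x))

encoder = TinyEncoder3D(in_channels=3)          # T1w, T2w, FLAIR as three input channels
# encoder.load_state_dict(torch.load("ssl_pretrained.pt"))  # hypothetical checkpoint
model = SwinClassifierSketch(encoder)
logits = model(torch.randn(2, 3, 96, 96, 96))   # batch of multi-contrast volumes
print(logits.shape)                             # torch.Size([2, 2])
```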

Improving Breast Cancer Diagnosis in Ultrasound Images Using Deep Learning with Feature Fusion and Attention Mechanism.

Asif S, Yan Y, Feng B, Wang M, Zheng Y, Jiang T, Fu R, Yao J, Lv L, Song M, Sui L, Yin Z, Wang VY, Xu D

PubMed · May 27, 2025
Early detection of malignant lesions in ultrasound images is crucial for effective cancer diagnosis and treatment. While traditional methods rely on radiologists, deep learning models can improve accuracy, reduce errors, and enhance efficiency. This study explores the application of a deep learning model for classifying benign and malignant lesions, focusing on its performance and interpretability. In this study, we proposed a feature fusion-based deep learning model for classifying benign and malignant lesions in ultrasound images. The model leverages advanced architectures such as MobileNetV2 and DenseNet121, enhanced with feature fusion and attention mechanisms to boost classification accuracy. The clinical dataset comprises 2171 images collected from 1758 patients between December 2020 and May 2024. Additionally, we utilized the publicly available BUSI dataset, consisting of 780 images from female patients aged 25 to 75, collected in 2018. To enhance interpretability, we applied Grad-CAM, saliency maps, and Shapley additive explanations (SHAP) techniques to explain the model's decision-making. A comparative analysis with radiologists of varying expertise levels was also conducted. The proposed model exhibited the highest performance, achieving an area under the curve (AUC) of 0.9320 on our private dataset and an AUC of 0.9834 on the public dataset, significantly outperforming traditional deep convolutional neural network models. It also exceeded the diagnostic performance of radiologists, showcasing its potential as a reliable tool for medical image classification. The model's success can be attributed to its incorporation of advanced architectures, feature fusion, and attention mechanisms. The model's decision-making process was further clarified using interpretability techniques like Grad-CAM, saliency maps, and SHAP, offering insights into its ability to focus on relevant image features for accurate classification. The proposed deep learning model offers superior accuracy in classifying benign and malignant lesions in ultrasound images, outperforming traditional models and radiologists. Its strong performance, coupled with interpretability techniques, demonstrates its potential as a reliable and efficient tool for medical diagnostics. The datasets generated and analyzed during the current study are not publicly available due to the nature of this research and its participants, but may be available from the corresponding author on reasonable request.
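
A minimal sketch of the fusion idea follows, assuming torchvision MobileNetV2 and DenseNet121 backbones and a simple channel-attention gate over the concatenated features; the paper's exact fusion and attention modules are not specified in the abstract.

```python
# Hedged sketch: fuse MobileNetV2 and DenseNet121 features with a channel-attention
# gate for benign/malignant ultrasound classification. Layer sizes and the attention
# design are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
from torchvision import models

class FusionClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.mobilenet = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT).features  # 1280 ch
        self.densenet = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT).features      # 1024 ch
        self.pool = nn.AdaptiveAvgPool2d(1)
        fused = 1280 + 1024
        self.attention = nn.Sequential(                   # channel attention over the fused vector
            nn.Linear(fused, fused // 8), nn.ReLU(inplace=True),
            nn.Linear(fused // 8, fused), nn.Sigmoid(),
        )
        self.classifier = nn.Linear(fused, num_classes)

    def forward(self, x):
        f1 = self.pool(self.mobilenet(x)).flatten(1)      # MobileNetV2 branch
        f2 = self.pool(self.densenet(x)).flatten(1)       # DenseNet121 branch
        fused = torch.cat([f1, f2], dim=1)
        return self.classifier(fused * self.attention(fused))

logits = FusionClassifier()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 2])
```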

Interpretable Machine Learning Models for Differentiating Glioblastoma From Solitary Brain Metastasis Using Radiomics.

Xia X, Wu W, Tan Q, Gou Q

PubMed · May 27, 2025
To develop and validate interpretable machine learning models for differentiating glioblastoma (GB) from solitary brain metastasis (SBM) using radiomics features from contrast-enhanced T1-weighted MRI (CE-T1WI), and to compare the impact of low-order and high-order features on model performance. A cohort of 434 patients with histopathologically confirmed GB (226 patients) and SBM (208 patients) was retrospectively analyzed. Radiomic features were derived from CE-T1WI, with feature selection conducted through minimum redundancy maximum relevance and least absolute shrinkage and selection operator regression. Machine learning models, including GradientBoost and LightGBM (LGBM), were trained using low-order and high-order features. The performance of the models was assessed through receiver operating characteristic analysis and computation of the area under the curve (AUC), along with other indicators, including accuracy, specificity, and sensitivity. Shapley additive explanations (SHAP) analysis was used to measure the influence of each feature on the model's predictions. The performances of various machine learning models on both the training and validation datasets were notably different. For the training group, the LGBM, CatBoost, multilayer perceptron (MLP), and GradientBoost models achieved the highest AUC scores, all exceeding 0.9, demonstrating strong discriminative power. The LGBM model exhibited the best stability, with a minimal AUC difference of only 0.005 between the training and test sets, suggesting strong generalizability. Among the validation group results, the GradientBoost classifier achieved the maximum AUC of 0.927, closely followed by random forest at 0.925. GradientBoost also demonstrated high sensitivity (0.911) and negative predictive value (NPV, 0.889), effectively identifying true positives. The LGBM model showed the highest test accuracy (86.2%) and performed excellently in terms of sensitivity (0.911), NPV (0.895), and positive predictive value (PPV, 0.837). The models utilizing high-order features outperformed those based on low-order features across all metrics. SHAP analysis further enhances model interpretability, providing insights into feature importance and contributions to classification decisions. Machine learning techniques based on radiomics can effectively distinguish GB from SBM, with gradient boosting tree-based models such as LGBM demonstrating superior performance. High-order features significantly improve model accuracy and robustness. SHAP technology enhances the interpretability and transparency of models for distinguishing brain tumors, providing intuitive visualization of the contribution of radiomic features to classification.
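
The tail end of such a radiomics pipeline, LASSO feature selection followed by a LightGBM classifier with SHAP explanations, can be sketched as follows; the feature matrix, the omitted mRMR step, and all hyperparameters are placeholders rather than the study's configuration.

```python
# Hedged sketch of the pipeline's tail end: LASSO feature selection, LightGBM
# classification, SHAP explanations. Radiomics extraction and mRMR are omitted;
# X and y are synthetic stand-ins for CE-T1WI features and GB/SBM labels.
import numpy as np
import shap
from lightgbm import LGBMClassifier
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(434, 200))                                 # stand-in radiomic features
y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=434) > 0).astype(int)  # 0 = SBM, 1 = GB (illustrative)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# LASSO keeps features with non-zero coefficients
lasso = LassoCV(cv=5).fit(X_tr, y_tr)
keep = np.flatnonzero(lasso.coef_)
if keep.size == 0:                          # fall back if LASSO zeroes everything
    keep = np.arange(X.shape[1])

clf = LGBMClassifier(n_estimators=300, learning_rate=0.05)
clf.fit(X_tr[:, keep], y_tr)
print("test accuracy:", clf.score(X_te[:, keep], y_te))

# SHAP values quantify each feature's contribution to the GB-vs-SBM decision
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_te[:, keep])
```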

An orchestration learning framework for ultrasound imaging: Prompt-Guided Hyper-Perception and Attention-Matching Downstream Synchronization.

Lin Z, Li S, Wang S, Gao Z, Sun Y, Lam CT, Hu X, Yang X, Ni D, Tan T

PubMed · May 27, 2025
Ultrasound imaging is pivotal in clinical diagnostics due to its affordability, portability, safety, real-time capability, and non-invasive nature. It is widely utilized for examining various organs, such as the breast, thyroid, ovary, and heart. However, the manual interpretation and annotation of ultrasound images are time-consuming and prone to variability among physicians. While single-task artificial intelligence (AI) solutions have been explored, they are not ideal for scaling AI applications in medical imaging. Foundation models, although a trending solution, often struggle with real-world medical datasets due to factors such as noise, variability, and an inability to flexibly align prior knowledge with task adaptation. To address these limitations, we propose an orchestration learning framework named PerceptGuide for general-purpose ultrasound classification and segmentation. Our framework incorporates a novel orchestration mechanism based on prompted hyper-perception, which adapts to the diverse inductive biases required by different ultrasound datasets. Unlike self-supervised pre-trained models, which require extensive fine-tuning, our approach leverages supervised pre-training to directly capture task-relevant features, providing a stronger foundation for multi-task and multi-organ ultrasound imaging. To support this research, we compiled a large-scale Multi-task, Multi-organ public ultrasound dataset (M²-US), featuring images from 9 organs and 16 datasets, encompassing both classification and segmentation tasks. Our approach employs four specific prompts (Object, Task, Input, and Position) to guide the model, ensuring task-specific adaptability. Additionally, a downstream synchronization training stage is introduced to fine-tune the model for new data, significantly improving generalization capabilities and enabling real-world applications. Experimental results demonstrate the robustness and versatility of our framework in handling multi-task and multi-organ ultrasound image processing, outperforming both specialist models and existing general AI solutions. Compared to specialist models, our method improves segmentation performance from 82.26% to 86.45% and classification performance from 71.30% to 79.08%, while also significantly reducing model parameters.
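
A highly simplified sketch of prompt conditioning follows, in which learned Object and Task prompt embeddings modulate a shared representation before a task head; the actual PerceptGuide hyper-perception mechanism is considerably more elaborate, and everything below is an assumption for illustration only.

```python
# Very rough sketch of prompt conditioning: learned embeddings for Object (organ)
# and Task prompts are added to a shared image representation before a task head.
# Conceptual illustration only; not the PerceptGuide architecture.
import torch
import torch.nn as nn

class PromptConditionedModel(nn.Module):
    def __init__(self, n_organs: int = 9, n_tasks: int = 2, feat_dim: int = 256, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(                    # stand-in shared encoder
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.object_prompt = nn.Embedding(n_organs, feat_dim)   # which organ
        self.task_prompt = nn.Embedding(n_tasks, feat_dim)      # classify vs segment
        self.cls_head = nn.Linear(feat_dim, n_classes)

    def forward(self, image, organ_id, task_id):
        feat = self.encoder(image)
        feat = feat + self.object_prompt(organ_id) + self.task_prompt(task_id)
        return self.cls_head(feat)                       # segmentation head omitted for brevity

model = PromptConditionedModel()
out = model(torch.randn(4, 1, 128, 128),
            organ_id=torch.tensor([0, 3, 3, 7]),
            task_id=torch.tensor([0, 0, 1, 1]))
print(out.shape)  # torch.Size([4, 2])
```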

Development of a No-Reference CT Image Quality Assessment Method Using RadImageNet Pre-trained Deep Learning Models.

Ohashi K, Nagatani Y, Yamazaki A, Yoshigoe M, Iwai K, Uemura R, Shimomura M, Tanimura K, Ishida T

PubMed · May 27, 2025
Accurate assessment of computed tomography (CT) image quality is crucial for ensuring diagnostic accuracy, optimizing imaging protocols, and preventing excessive radiation exposure. In clinical settings, where high-quality reference images are often unavailable, developing no-reference image quality assessment (NR-IQA) methods is essential. Recently, CT-NR-IQA methods using deep learning have been widely studied; however, significant challenges remain in handling multiple degradation factors and accurately reflecting real-world degradations. To address these issues, we propose a novel CT-NR-IQA method. Our approach utilizes a dataset that combines two degradation factors (noise and blur) to train convolutional neural network (CNN) models capable of handling multiple degradation factors. Additionally, we leveraged RadImageNet pre-trained models (ResNet50, DenseNet121, InceptionV3, and InceptionResNetV2), allowing the models to learn deep features from large-scale real clinical images, thus enhancing adaptability to real-world degradations without relying on artificially degraded images. The models' performances were evaluated by measuring the correlation between the subjective scores and predicted image quality scores for both artificially degraded and real clinical image datasets. The results demonstrated positive correlations between the subjective and predicted scores for both datasets. In particular, ResNet50 showed the best performance, with a correlation coefficient of 0.910 for the artificially degraded images and 0.831 for the real clinical images. These findings indicate that the proposed method could serve as a potential surrogate for subjective assessment in CT-NR-IQA.
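
The regression setup can be sketched roughly as below, assuming a torchvision ResNet50 with its final layer replaced by a single quality-score output; the RadImageNet checkpoint path is hypothetical, and Pearson correlation is used purely for illustration.

```python
# Hedged sketch: a ResNet50 regressor mapping a degraded CT patch to a scalar quality
# score. The RadImageNet checkpoint path and training details are assumptions;
# ImageNet weights are used here only as a stand-in.
import torch
import torch.nn as nn
from torchvision import models
from scipy.stats import pearsonr

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
# model.load_state_dict(torch.load("radimagenet_resnet50.pt"), strict=False)  # hypothetical
model.fc = nn.Linear(model.fc.in_features, 1)      # regress one quality score

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on synthetic data (degraded patch -> subjective score)
x = torch.randn(8, 3, 224, 224)                    # noise/blur-degraded CT patches
target = torch.rand(8, 1) * 5                      # subjective scores, e.g. a 1-5 scale
loss = criterion(model(x), target)
optimizer.zero_grad(); loss.backward(); optimizer.step()

# Evaluation mirrors the paper: correlation between predicted and subjective scores
with torch.no_grad():
    preds = model(x).squeeze(1).numpy()
r, _ = pearsonr(preds, target.squeeze(1).numpy())
print(f"Pearson r on this toy batch: {r:.3f}")
```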

Deep learning model for malignancy prediction of TI-RADS 4 thyroid nodules with high-risk characteristics using multimodal ultrasound: A multicentre study.

Chu X, Wang T, Chen M, Li J, Wang L, Wang C, Wang H, Wong ST, Chen Y, Li H

PubMed · May 26, 2025
The automatic screening of thyroid nodules using computer-aided diagnosis holds great promise in reducing missed and misdiagnosed cases in clinical practice. However, most current research focuses on single-modal images and does not fully leverage the comprehensive information from multimodal medical images, limiting model performance. To enhance screening accuracy, this study uses a deep learning framework that integrates high-dimensional convolutions of B-mode ultrasound (BMUS) and strain elastography (SE) images to predict the malignancy of TI-RADS 4 thyroid nodules with high-risk features. First, we extract nodule regions from the images and expand the boundary areas. Then, adaptive particle swarm optimization (APSO) and contrast limited adaptive histogram equalization (CLAHE) algorithms are applied to enhance ultrasound image contrast. Finally, deep learning techniques are used to extract and fuse high-dimensional features from both ultrasound modalities to classify benign and malignant thyroid nodules. The proposed model achieved an AUC of 0.937 (95% CI 0.917-0.949) and 0.927 (95% CI 0.907-0.948) in the test and external validation sets, respectively, demonstrating strong generalization ability. When compared with the diagnostic performance of three groups of radiologists, the model outperformed them significantly. Meanwhile, with the model's assistance, all three radiologist groups showed improved diagnostic performance. Furthermore, heatmaps generated by the model show high alignment with radiologists' expertise, further confirming its credibility. The results indicate that our model can assist in clinical thyroid nodule diagnosis, reducing the risk of missed and incorrect diagnoses, particularly for high-risk populations, and holds significant clinical value.
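
A minimal sketch of the preprocessing and dual-branch fusion idea follows, assuming OpenCV CLAHE and two ResNet18 branches; the APSO-tuned CLAHE parameters and the study's actual backbones are not given in the abstract and are assumed here.

```python
# Hedged sketch: CLAHE-enhanced B-mode and strain-elastography crops go through
# separate CNN branches whose features are concatenated for benign/malignant
# classification. CLAHE parameters and backbones are assumptions.
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def enhance(gray_u8: np.ndarray) -> np.ndarray:
    """CLAHE contrast enhancement of an 8-bit ultrasound crop."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # params would be APSO-tuned
    return clahe.apply(gray_u8)

class DualBranchNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        def branch():
            m = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            m.fc = nn.Identity()                      # 512-d features per branch
            return m
        self.bmus_branch, self.se_branch = branch(), branch()
        self.classifier = nn.Linear(512 + 512, num_classes)

    def forward(self, bmus, se):
        fused = torch.cat([self.bmus_branch(bmus), self.se_branch(se)], dim=1)
        return self.classifier(fused)

img = enhance((np.random.rand(256, 256) * 255).astype(np.uint8))
x = torch.from_numpy(img).float().div(255).expand(3, -1, -1).unsqueeze(0)
logits = DualBranchNet()(x, x.clone())               # same crop twice, just for the smoke test
print(logits.shape)  # torch.Size([1, 2])
```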

The extent of skeletal muscle wasting in prolonged critical illness and its association with survival: insights from a retrospective single-center study.

Kolck J, Hosse C, Fehrenbach U, Beetz NL, Auer TA, Pille C, Geisel D

PubMed · May 26, 2025
Muscle wasting in critically ill patients, particularly those with prolonged hospitalization, poses a significant challenge to recovery and long-term outcomes. The aim of this study was to characterize long-term muscle wasting trajectories in ICU patients with acute respiratory distress syndrome (ARDS) due to COVID-19 and acute pancreatitis (AP), to evaluate correlations between muscle wasting and patient outcomes, and to identify clinically feasible thresholds that have the potential to enhance patient care strategies. A total of 154 ICU patients (100 AP and 54 COVID-19 ARDS) with a minimum ICU stay of 10 days and at least three abdominal CT scans were retrospectively analyzed. AI-driven segmentation of CT scans quantified changes in psoas muscle area (PMA). A mixed model analysis was used to assess the correlation between mortality and muscle wasting, and Cox regression was applied to identify potential predictors of survival. Muscle loss rates, survival thresholds and outcome correlations were assessed using Kaplan-Meier and receiver operating characteristic (ROC) analyses. Muscle loss in ICU patients was most pronounced in the first two weeks, peaking at -2.42% and -2.39% PMA loss per day in weeks 1 and 2, respectively, followed by a progressive decline. The median total PMA loss was 48.3%, with significantly greater losses in non-survivors. Mixed model analysis confirmed the correlation of muscle wasting with mortality. Cox regression identified visceral adipose tissue (VAT), sequential organ failure assessment (SOFA) score and muscle wasting as significant risk factors, while increased skeletal muscle area (SMA) was protective. ROC and Kaplan-Meier analyses showed strong correlations between PMA loss thresholds and survival, with daily loss > 4% predicting the worst survival (39.7%). To our knowledge, this is the first study to highlight the substantial progression of muscle wasting in ICU patients with prolonged hospitalization. The mortality-related thresholds for muscle wasting rates identified in this study may provide a basis for clinical risk stratification. Future research should validate these findings in larger cohorts and explore strategies to mitigate muscle loss. Not applicable.
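
The survival analysis can be sketched along these lines, assuming the lifelines and scikit-learn packages; the synthetic data and column names below are placeholders for the study's PMA, SOFA and VAT variables.

```python
# Hedged sketch of the outcome analysis: Cox regression on wasting-related covariates
# and an ROC-derived threshold for daily psoas muscle area (PMA) loss. The data are
# synthetic placeholders, not the study dataset.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
n = 154
df = pd.DataFrame({
    "pma_loss_per_day": rng.uniform(0, 6, n),       # % PMA lost per day
    "sofa": rng.integers(2, 16, n),
    "vat_cm2": rng.normal(150, 40, n),
    "time_days": rng.integers(10, 120, n),           # follow-up time
    "died": rng.integers(0, 2, n),                   # event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="died")
cph.print_summary()                                  # hazard ratios per covariate

# ROC analysis for daily PMA loss as a mortality discriminator
fpr, tpr, thresholds = roc_curve(df["died"], df["pma_loss_per_day"])
best = thresholds[np.argmax(tpr - fpr)]              # Youden's J optimal cut-off
print(f"optimal daily PMA-loss threshold: {best:.2f} %/day")
```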

Evolution of deep learning tooth segmentation from CT/CBCT images: a systematic review and meta-analysis.

Kot WY, Au Yeung SY, Leung YY, Leung PH, Yang WF

PubMed · May 26, 2025
Deep learning has been utilized to segment teeth from computed tomography (CT) or cone-beam CT (CBCT). However, its overall performance remains unclear because of the multiplicity of models and the diversity of evaluation metrics used. This systematic review and meta-analysis aims to evaluate the evolution and performance of deep learning in tooth segmentation. We systematically searched PubMed, Web of Science, Scopus, IEEE Xplore, arXiv.org, and ACM for studies investigating deep learning in human tooth segmentation from CT/CBCT. Included studies were assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Data were extracted for meta-analyses by random-effects models. A total of 30 studies were included in the systematic review, and 28 of them were included for meta-analyses. Various deep learning algorithms were categorized according to the backbone network, encompassing single-stage convolutional models, convolutional models with U-Net architecture, Transformer models, convolutional models with attention mechanisms, and combinations of multiple models. Convolutional models with U-Net architecture were the most commonly used deep learning algorithms. The integration of attention mechanisms within convolutional models has become a new topic. Twenty-nine evaluation metrics were identified, with the Dice similarity coefficient (DSC) being the most popular. The pooled results were 0.93 [0.93, 0.93] for DSC, 0.86 [0.85, 0.87] for intersection over union (IoU), 0.22 [0.19, 0.24] for average symmetric surface distance (ASSD), 0.92 [0.90, 0.94] for sensitivity, 0.71 [0.26, 1.17] for 95% Hausdorff distance, and 0.96 [0.93, 0.98] for precision. No significant difference was observed in the segmentation of single-rooted or multi-rooted teeth. No obvious correlation between sample size and segmentation performance was observed. Multiple deep learning algorithms have been successfully applied to tooth segmentation from CT/CBCT, and their evolution has been well summarized and categorized according to their backbone structures. In the future, studies with standardized protocols and openly available labelled datasets are needed.
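
Random-effects pooling of per-study Dice scores, as used here, typically follows the DerSimonian-Laird estimator; a small sketch with invented study values is shown below.

```python
# Hedged sketch of random-effects pooling (DerSimonian-Laird) for a DSC meta-analysis.
# The per-study values are invented; real inputs would be the included papers'
# effect estimates and their variances.
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effects under a random-effects model; returns mean and 95% CI."""
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances                                   # fixed-effect weights
    mu_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - mu_fe) ** 2)                # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_re = 1.0 / (variances + tau2)                       # random-effects weights
    mu = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, (mu - 1.96 * se, mu + 1.96 * se)

# Illustrative per-study Dice scores and variances (not the review's data)
dsc = [0.91, 0.94, 0.93, 0.95, 0.92]
var = [0.0004, 0.0002, 0.0003, 0.0001, 0.0005]
pooled, ci = dersimonian_laird(dsc, var)
print(f"pooled DSC {pooled:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```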

Research-based clinical deployment of artificial intelligence algorithm for prostate MRI.

Harmon SA, Tetreault J, Esengur OT, Qin M, Yilmaz EC, Chang V, Yang D, Xu Z, Cohen G, Plum J, Sherif T, Levin R, Schmidt-Richberg A, Thompson S, Coons S, Chen T, Choyke PL, Xu D, Gurram S, Wood BJ, Pinto PA, Turkbey B

PubMed · May 26, 2025
A critical limitation to the deployment and utilization of artificial intelligence (AI) algorithms in radiology practice is the actual integration of algorithms directly into the clinical Picture Archiving and Communication Systems (PACS). Here, we sought to integrate an AI-based pipeline for prostate organ and intraprostatic lesion segmentation within a clinical PACS environment to enable point-of-care utilization under a prospective clinical trial scenario. A previously trained, publicly available AI model for segmentation of intra-prostatic findings on multiparametric magnetic resonance imaging (mpMRI) was converted into a containerized environment compatible with MONAI Deploy Express. An inference server and dedicated clinical PACS workflow were established within our institution for evaluation of real-time use of the AI algorithm. PACS-based deployment was prospectively evaluated in two phases: first, in a consecutive cohort of patients undergoing diagnostic imaging at our institution, and second, in a consecutive cohort of patients undergoing biopsy based on mpMRI findings. The AI pipeline was executed from within the PACS environment by the radiologist. AI findings were imported into clinical biopsy planning software for target definition. Metrics analyzing deployment success, timing, and detection performance were recorded and summarized. In phase one, clinical PACS deployment was successfully executed in 57/58 cases, and results were obtained within one minute of activation (median 33 s [range 21-50 s]). Comparison with expert radiologist annotation demonstrated stable model performance compared to independent validation studies. In phase two, 40/40 cases were successfully executed via PACS deployment and results were imported for biopsy targeting. Prostate cancer detection rates were 82.1% for ROI targets detected by both AI and the radiologist, 47.8% for targets proposed by AI and accepted by the radiologist, and 33.3% for targets identified by the radiologist alone. Integration of novel AI algorithms requiring multi-parametric input into a clinical PACS environment is feasible, and model outputs can be used for downstream clinical tasks.
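
A greatly simplified sketch of the inference-server concept is given below as a FastAPI endpoint; this is not the MONAI Deploy Express application the authors built, and the endpoint name, paths, and segmentation stub are placeholders.

```python
# Hedged, simplified sketch of an inference-server endpoint: accept an mpMRI study
# identifier, run the segmentation model, and report where results were written for
# PACS import. Not the authors' MONAI Deploy Express app; everything here is a
# placeholder to illustrate the workflow shape.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="prostate-ai-inference")

class InferenceRequest(BaseModel):
    study_uid: str            # DICOM StudyInstanceUID pushed from PACS
    series_uids: list[str]    # T2W / DWI / ADC series making up the mpMRI input

def run_segmentation(study_uid: str, series_uids: list[str]) -> str:
    """Placeholder: load the series, run the organ/lesion model, export the result."""
    output_path = f"/outputs/{study_uid}/segmentation.dcm"   # hypothetical location
    return output_path

@app.post("/segment")
def segment(req: InferenceRequest):
    result_path = run_segmentation(req.study_uid, req.series_uids)
    # In the clinical workflow, the result would be routed back into PACS and
    # imported by the biopsy-planning software as targets.
    return {"study_uid": req.study_uid, "result": result_path}
```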
