
Prediction of OncotypeDX recurrence score using H&E stained WSI images

Cohen, S., Shamai, G., Sabo, E., Cretu, A., Barshack, I., Goldman, T., Bar-Sela, G., Pearson, A. T., Huo, D., Howard, F. M., Kimmel, R., Mayer, C.

medRxiv preprint, Jul 21 2025
The OncotypeDX 21-gene assay is a widely adopted tool for estimating recurrence risk and informing chemotherapy decisions in early-stage, hormone receptor-positive, HER2-negative breast cancer. Although informative, its high cost and long turnaround time limit accessibility and delay treatment in low- and middle-income countries, creating a need for alternative solutions. This study presents a deep learning-based approach for predicting OncotypeDX recurrence scores directly from hematoxylin and eosin-stained whole slide images. Our approach leverages a deep learning foundation model pre-trained on 171,189 slides via self-supervised learning, which is fine-tuned for our task. The model was developed and validated using five independent cohorts, three of which are external. On the two external cohorts that include OncotypeDX scores, the model achieved AUCs of 0.825 and 0.817, and identified 21.9% and 25.1% of the patients as low-risk with sensitivities of 0.97 and 0.95 and negative predictive values of 0.97 and 0.96, showing strong generalizability despite variations in staining protocols and imaging devices. Kaplan-Meier analysis demonstrated that patients classified as low-risk by the model had a significantly better prognosis than those classified as high-risk, with hazard ratios of 4.1 (P<0.001) and 2.0 (P<0.01) on the two external cohorts that include patient outcomes. This artificial intelligence-driven solution offers a rapid, cost-effective, and scalable alternative to genomic testing, with the potential to enhance personalized treatment planning, especially in resource-constrained settings.
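The low-risk triage reported above hinges on choosing a score cutoff that preserves a high sensitivity for true high-risk patients. A minimal sketch of that cutoff search, on toy data (the scores, labels, and function name are illustrative, not from the study):

```python
# Sketch: pick the largest cutoff such that flagging score >= cutoff as
# "not low-risk" keeps sensitivity for high-risk patients above a target.
# Data below is made up for illustration.

def pick_low_risk_cutoff(scores, is_high_risk, target_sensitivity=0.95):
    """Return the largest cutoff t so that, with score < t read as low-risk,
    sensitivity for true high-risk patients stays >= target_sensitivity."""
    n_high = sum(is_high_risk)
    best = None
    for t in sorted(set(scores)):
        # Patients with score >= t are flagged as not low-risk.
        tp = sum(1 for s, h in zip(scores, is_high_risk) if h and s >= t)
        if tp / n_high >= target_sensitivity:
            best = t  # a larger cutoff triages more patients as low-risk
    return best

scores       = [0.05, 0.10, 0.20, 0.30, 0.55, 0.60, 0.70, 0.90]
is_high_risk = [0,    0,    0,    1,    1,    1,    1,    1   ]
cutoff = pick_low_risk_cutoff(scores, is_high_risk, target_sensitivity=0.95)
```

With these toy values the search settles on 0.30: every true high-risk patient scores at or above it, so sensitivity stays at 1.0 while three of eight patients are triaged as low-risk.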

Noninvasive Deep Learning System for Preoperative Diagnosis of Follicular-Like Thyroid Neoplasms Using Ultrasound Images: A Multicenter, Retrospective Study.

Shen H, Huang Y, Yan W, Zhang C, Liang T, Yang D, Feng X, Liu S, Wang Y, Cao W, Cheng Y, Chen H, Ni Q, Wang F, You J, Jin Z, He W, Sun J, Yang D, Liu L, Cao B, Zhang X, Li Y, Pei S, Zhang S, Zhang B

PubMed, Jul 21 2025
To propose a deep learning (DL) system for the preoperative diagnosis of follicular-like thyroid neoplasms (FNs) using routine ultrasound images. Preoperative diagnosis of malignancy in nodules suspicious for an FN remains challenging: ultrasound, fine-needle aspiration cytology, and intraoperative frozen section pathology cannot unambiguously distinguish benign from malignant FNs, leading to unnecessary biopsies and operations on benign nodules. This multicenter, retrospective study included 3634 patients who underwent ultrasound and received a definite diagnosis of FN at 11 centers, comprising thyroid follicular adenoma (n=1748), follicular carcinoma (n=299), and follicular variant of papillary thyroid carcinoma (n=1587). Four DL models, Inception-v3, ResNet50, Inception-ResNet-v2, and DenseNet161, were constructed on a training set (n=2587, 6178 images) and verified on an internal validation set (n=648, 1633 images) and an external validation set (n=399, 847 images). The diagnostic efficacy of the DL models was evaluated against the ACR TI-RADS in terms of the area under the curve (AUC), sensitivity, specificity, and unnecessary fine-needle aspiration biopsy (FNAB) rate. When externally validated, the four DL models yielded robust and comparable performance, with AUCs of 82.2%-85.2%, sensitivities of 69.6%-76.0%, and specificities of 84.1%-89.2%, outperforming the ACR TI-RADS. Compared to ACR TI-RADS, the DL models showed a higher biopsy rate of malignancy (71.6%-79.9% vs 37.7%, P<0.001) and a significantly lower unnecessary FNAB rate (8.5%-12.8% vs 40.7%, P<0.001). This study provides a noninvasive DL tool for accurate preoperative diagnosis of FNs, showing better performance than ACR TI-RADS and reducing unnecessary invasive interventions.
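The two triage metrics compared above can be computed from referral decisions and pathology labels. A small sketch on toy data, under the assumed definitions that the biopsy rate of malignancy is the fraction of malignant nodules referred for biopsy and the unnecessary-biopsy rate is the fraction of benign nodules referred (the function and data are illustrative, not the study's code):

```python
# Sketch (toy data, assumed metric definitions): triage metrics for a
# biopsy-referral rule evaluated against final pathology.

def triage_metrics(refer_biopsy, is_malignant):
    malignant_referred = sum(1 for r, m in zip(refer_biopsy, is_malignant) if r and m)
    benign_referred    = sum(1 for r, m in zip(refer_biopsy, is_malignant) if r and not m)
    n_malignant = sum(is_malignant)
    n_benign    = len(is_malignant) - n_malignant
    return {
        "biopsy_rate_of_malignancy": malignant_referred / n_malignant,
        "unnecessary_biopsy_rate":   benign_referred / n_benign,
    }

refer_biopsy = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]  # 1 = model refers for biopsy
is_malignant = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # final pathology
m = triage_metrics(refer_biopsy, is_malignant)
```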

Fully automated pedicle screw manufacturer identification in plain radiograph with deep learning methods.

Waranusast R, Riyamongkol P, Weerakul S, Chaibhuddanugul N, Laoruengthana A, Mahatthanatrakul A

PubMed, Jul 21 2025
Pedicle screw manufacturer identification is crucial for revision surgery planning; however, this information is occasionally unavailable. We developed a deep learning-based algorithm to identify the pedicle screw manufacturer from plain radiographs. We collected anteroposterior (AP) and lateral radiographs from 276 patients who had thoracolumbar spine surgery with pedicle screws from three international manufacturers. The samples were randomly assigned to a training set (n=178), a validation set (n=40), and a test set (n=58). The algorithm incorporated a convolutional neural network (CNN) model to classify each radiograph as AP or lateral, followed by YOLO object detection to locate the pedicle screws. Another CNN classifier then identified the manufacturer of each pedicle screw in the AP and lateral views, and a voting scheme determined the final classification. For comparison, two spine surgeons independently evaluated the same test set, and the accuracy was compared. The mean age of the patients was 59.5 years, with 1,887 pedicle screws included. The algorithm achieved a perfect accuracy of 100% for the AP radiographs, 98.9% for the lateral radiographs, and 100% when both views were considered. By comparison, the spine surgeons achieved 97.1% accuracy. Statistical analysis revealed near-perfect agreement between the algorithm and the surgeons. We have successfully developed an algorithm for pedicle screw manufacturer identification, which demonstrated excellent accuracy comparable to experienced spine surgeons.
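The final voting step described above can be sketched as a majority vote over the per-screw predictions from both views. The manufacturer labels here are placeholders, not the actual vendors in the study:

```python
from collections import Counter

# Sketch of the patient-level voting step: each detected screw in the AP
# and lateral views contributes one per-screw manufacturer prediction, and
# a simple majority vote yields the final label. Class names are placeholders.

def vote(per_screw_predictions):
    """Majority vote over per-screw manufacturer labels from both views."""
    counts = Counter(per_screw_predictions)
    label, _ = counts.most_common(1)[0]
    return label

preds = ["MfrA", "MfrA", "MfrB", "MfrA", "MfrA"]  # AP + lateral detections
final = vote(preds)
```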

Cascaded Multimodal Deep Learning in the Differential Diagnosis, Progression Prediction, and Staging of Alzheimer's and Frontotemporal Dementia

Guarnier, G., Reinelt, J., Molloy, E. N., Mihai, P. G., Einaliyan, P., Valk, S., Modestino, A., Ugolini, M., Mueller, K., Wu, Q., Babayan, A., Castellaro, M., Villringer, A., Scherf, N., Thierbach, K., Schroeter, M. L., the Alzheimer's Disease Neuroimaging Initiative, the Australian Imaging Biomarkers and Lifestyle flagship study of ageing, and the Frontotemporal Lobar Degeneration Neuroimaging Initiative

medRxiv preprint, Jul 21 2025
Dementia is a complex condition whose multifaceted nature poses significant challenges in the diagnosis, prognosis, and treatment of patients. Despite the availability of large open-source data fueling a wealth of promising research, effective translation of preclinical findings to clinical practice remains difficult. This barrier is largely due to the complexity of unstructured and disparate preclinical and clinical data, which traditional analytical methods struggle to handle. Novel analytical techniques involving Deep Learning (DL), however, are gaining significant traction in this regard. Here, we have investigated the potential of a cascaded multimodal DL-based system (TelDem), assessing its ability to integrate and analyze a large, heterogeneous dataset (n=7,159 patients), applied to three clinically relevant use cases. Using a Cascaded Multi-Modal Mixing Transformer (CMT), we assessed TelDem's validity and, using a Cross-Modal Fusion Norm (CMFN), its explainability in (i) differential diagnosis between healthy individuals, Alzheimer's disease (AD), and three subtypes of frontotemporal lobar degeneration; (ii) disease staging from healthy cognition to mild cognitive impairment (MCI) and AD; and (iii) predicting progression from MCI to AD. Our findings show that the CMT enhances diagnostic and prognostic accuracy when incorporating multimodal data compared to unimodal modeling, and that cerebrospinal fluid (CSF) biomarkers play a key role in accurate model decision making. These results reinforce the power of DL technology in tapping deeper into already existing data, thereby accelerating preclinical dementia research by utilizing clinically relevant information to disentangle complex dementia pathophysiology.
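One practical ingredient of any such multimodal cascade is handling patients who are missing a modality. A rough sketch, not the actual CMT architecture, of concatenating per-modality feature vectors with presence flags so the downstream model can distinguish real zeros from missing data (modality names and vector sizes are illustrative):

```python
# Sketch (illustrative, not the study's CMT): fuse per-modality feature
# vectors into one input, masking out missing modalities with a flag.

def fuse_modalities(features):
    """Concatenate fixed-size per-modality vectors; append a presence flag
    per modality so missing data is distinguishable from true zeros."""
    fused = []
    for name in ("mri", "csf", "cognition"):
        vec = features.get(name)
        if vec is None:
            fused.extend([0.0, 0.0, 0.0])    # zero-filled features + absent flag
        else:
            fused.extend(list(vec) + [1.0])  # features + present flag
    return fused

x = fuse_modalities({"mri": [0.2, 0.7], "csf": None, "cognition": [0.5, 0.1]})
```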

[A multi-feature fusion-based model for fetal orientation classification from intrapartum ultrasound videos].

Zheng Z, Yang X, Wu S, Zhang S, Lyu G, Liu P, Wang J, He S

PubMed, Jul 20 2025
To construct an intelligent analysis model for classifying fetal orientation from intrapartum ultrasound videos based on multi-feature fusion. The proposed model consists of Input, Backbone Network, and Classification Head modules. The Input module carries out data augmentation to improve sample quality and the generalization ability of the model. The Backbone Network performs feature extraction based on Yolov8 combined with the CBAM, ECA, and PSA attention mechanisms and the AIFI feature interaction module. The Classification Head consists of a convolutional layer and a softmax function that outputs the final probability for each class. Images of the key structures (the eyes, face, head, thalamus, and spine) were annotated with bounding boxes by physicians for model training to improve the classification accuracy for the occiput anterior, occiput posterior, and occiput transverse positions. The experimental results showed that the proposed model performed excellently in the fetal orientation classification task, with a classification accuracy of 0.984, an area under the PR curve (average precision) of 0.993, an area under the ROC curve of 0.984, and a kappa consistency score of 0.974. The predictions of the deep learning model were highly consistent with the actual classifications. The multi-feature fusion model proposed in this study can efficiently and accurately classify fetal orientation in intrapartum ultrasound videos.
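The classification head described above ends in a softmax over the orientation classes. A minimal sketch of that final step with made-up logits (the logit values and class ordering are illustrative):

```python
import math

# Sketch of the softmax step in the classification head: turn per-class
# logits into probabilities that sum to one. Logits below are made up.

def softmax(logits):
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax([2.0, 0.5, -1.0])         # e.g. [OA, OP, OT] logits (toy)
```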

CXR-TFT: Multi-Modal Temporal Fusion Transformer for Predicting Chest X-ray Trajectories

Mehak Arora, Ayman Ali, Kaiyuan Wu, Carolyn Davis, Takashi Shimazui, Mahmoud Alwakeel, Victor Moas, Philip Yang, Annette Esper, Rishikesan Kamaleswaran

arXiv preprint, Jul 19 2025
In intensive care units (ICUs), patients with complex clinical conditions require vigilant monitoring and prompt interventions. Chest X-rays (CXRs) are a vital diagnostic tool, providing insights into clinical trajectories, but their irregular acquisition limits their utility. Existing tools for CXR interpretation are constrained by cross-sectional analysis, failing to capture temporal dynamics. To address this, we introduce CXR-TFT, a novel multi-modal framework that integrates temporally sparse CXR imaging and radiology reports with high-frequency clinical data, such as vital signs, laboratory values, and respiratory flow sheets, to predict the trajectory of CXR findings in critically ill patients. CXR-TFT leverages latent embeddings from a vision encoder that are temporally aligned with hourly clinical data through interpolation. A transformer model is then trained to predict CXR embeddings at each hour, conditioned on previous embeddings and clinical measurements. In a retrospective study of 20,000 ICU patients, CXR-TFT demonstrated high accuracy in forecasting abnormal CXR findings up to 12 hours before they became radiographically evident. This predictive capability in clinical data holds significant potential for enhancing the management of time-sensitive conditions like acute respiratory distress syndrome, where early intervention is crucial and diagnoses are often delayed. By providing distinctive temporal resolution in prognostic CXR analysis, CXR-TFT offers actionable 'whole patient' insights that can directly improve clinical outcomes.
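The temporal-alignment step described above, interpolating sparse CXR embeddings onto the hourly clinical grid, can be sketched with per-dimension linear interpolation. Shapes, hours, and values are illustrative, and the paper does not specify this exact implementation:

```python
import numpy as np

# Sketch: CXR embeddings acquired at irregular hours are interpolated onto
# an hourly grid so the transformer sees one embedding per hour.
# Dimension-by-dimension linear interpolation; sizes are illustrative.

def align_hourly(cxr_hours, cxr_embeddings, horizon_hours):
    """Interpolate each embedding dimension onto an hourly grid."""
    grid = np.arange(horizon_hours)
    emb = np.asarray(cxr_embeddings, dtype=float)   # (n_cxr, dim)
    return np.stack(
        [np.interp(grid, cxr_hours, emb[:, d]) for d in range(emb.shape[1])],
        axis=1,
    )                                               # (horizon_hours, dim)

# Two CXRs at hours 0 and 6, 2-D embeddings, aligned over 7 hourly steps.
aligned = align_hourly([0, 6], [[0.0, 1.0], [6.0, 4.0]], horizon_hours=7)
```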

Performance comparison of medical image classification systems using TensorFlow Keras, PyTorch, and JAX

Merjem Bećirović, Amina Kurtović, Nordin Smajlović, Medina Kapo, Amila Akagić

arXiv preprint, Jul 19 2025
Medical imaging plays a vital role in early disease diagnosis and monitoring. Specifically, blood microscopy offers valuable insights into blood cell morphology and the detection of hematological disorders. In recent years, deep learning-based automated classification systems have demonstrated high potential in enhancing the accuracy and efficiency of blood image analysis. However, a detailed performance analysis of specific deep learning frameworks appears to be lacking. This paper compares the performance of three popular deep learning frameworks, TensorFlow with Keras, PyTorch, and JAX, in classifying blood cell images from the publicly available BloodMNIST dataset. The study primarily focuses on inference time differences, but also classification performance for different image sizes. The results reveal variations in performance across frameworks, influenced by factors such as image resolution and framework-specific optimizations. Classification accuracy for JAX and PyTorch was comparable to current benchmarks, showcasing the efficiency of these frameworks for medical image classification.
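Inference-time comparisons like the one described above typically time repeated batched predictions after a warm-up phase. A framework-agnostic sketch of such a harness, with a placeholder callable standing in for an actual TensorFlow, PyTorch, or JAX model:

```python
import time

# Sketch of an inference-latency harness: time a predict callable over
# repeated batches with perf_counter, excluding warm-up runs from the
# measurement loop. The "model" here is a stand-in, not a real framework.

def mean_latency_ms(predict, batches, warmup=2):
    for b in batches[:warmup]:            # warm-up runs (JIT, cache effects)
        predict(b)
    t0 = time.perf_counter()
    for b in batches:
        predict(b)
    return (time.perf_counter() - t0) / len(batches) * 1000.0

fake_model = lambda batch: [x * 2 for x in batch]   # placeholder "model"
latency = mean_latency_ms(fake_model, [[1, 2, 3]] * 10)
```

Warm-up matters especially for JAX, where the first call pays JIT-compilation cost that should not be attributed to steady-state inference.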

2.5D Deep Learning-Based Prediction of Pathological Grading of Clear Cell Renal Cell Carcinoma Using Contrast-Enhanced CT: A Multicenter Study.

Yang Z, Jiang H, Shan S, Wang X, Kou Q, Wang C, Jin P, Xu Y, Liu X, Zhang Y, Zhang Y

PubMed, Jul 19 2025
To develop and validate a deep learning model based on arterial phase-enhanced CT for predicting the pathological grading of clear cell renal cell carcinoma (ccRCC). Data from 564 patients diagnosed with ccRCC at five distinct hospitals were retrospectively analyzed. Patients from centers 1 and 2 were randomly divided into a training set (n=283) and an internal test set (n=122). Patients from centers 3, 4, and 5 served as external validation sets 1 (n=60), 2 (n=38), and 3 (n=61), respectively. A 2D model, a 2.5D model (three-slice input), and a radiomics-based multi-layer perceptron (MLP) model were developed. Model performance was evaluated using the area under the curve (AUC), accuracy, and sensitivity. The 2.5D model outperformed the 2D and MLP models. Its AUCs were 0.959 (95% CI: 0.9438-0.9738) for the training set, 0.879 (95% CI: 0.8401-0.9180) for the internal test set, and 0.870 (95% CI: 0.8076-0.9334), 0.862 (95% CI: 0.7581-0.9658), and 0.849 (95% CI: 0.7766-0.9216) for the three external validation sets, respectively. The corresponding accuracy values were 0.895, 0.836, 0.827, 0.825, and 0.839. Compared to the MLP model, the 2.5D model achieved significantly higher AUCs (increases of 0.150 [p<0.05], 0.112 [p<0.05], and 0.088 [p<0.05]) and accuracies (increases of 0.077 [p<0.05], 0.075 [p<0.05], and 0.101 [p<0.05]) in the external validation sets. The 2.5D model, which takes three adjacent CT slices as input, demonstrated improved predictive performance for the WHO/ISUP grading of ccRCC.
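The 2.5D input described above, three adjacent axial slices stacked as channels, can be sketched as follows. Array sizes and the clamping behavior at volume edges are illustrative assumptions:

```python
import numpy as np

# Sketch: build a 2.5D input by stacking the slices around a center slice
# as channels, giving a 2D network some through-plane context. Edge slices
# are clamped rather than padded (an assumption, not the paper's stated rule).

def make_25d_input(volume, center):
    """Stack slices center-1, center, center+1 as a 3-channel image."""
    lo = max(center - 1, 0)
    hi = min(center + 1, volume.shape[0] - 1)
    return np.stack([volume[lo], volume[center], volume[hi]], axis=0)

vol = np.arange(5 * 4 * 4).reshape(5, 4, 4).astype(float)  # (slices, H, W)
x = make_25d_input(vol, center=2)                          # (3, H, W)
```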

Enhancing cardiac disease detection via a fusion of machine learning and medical imaging.

Yu T, Chen K

PubMed, Jul 19 2025
Cardiovascular diseases continue to be a predominant cause of mortality globally, underscoring the necessity for prompt and precise diagnosis to mitigate consequences and healthcare expenditures. This work presents a complete hybrid methodology that integrates machine learning techniques with medical image analysis to improve the identification of cardiovascular disease. This research integrates multiple imaging modalities, including echocardiography, cardiac MRI, and chest radiographs, with patient health records, enhancing diagnostic accuracy beyond standard techniques that depend exclusively on numerical clinical data. During the preprocessing phase, essential visual features are extracted from medical images using image processing methods and convolutional neural networks (CNNs). These are subsequently combined with clinical characteristics and fed into various machine learning classifiers, including Support Vector Machines (SVM), Random Forest (RF), XGBoost, and Deep Neural Networks (DNNs), to differentiate between healthy individuals and patients with cardiovascular disease. The proposed method attained a remarkable diagnostic accuracy of up to 96%, exceeding models reliant exclusively on clinical data. This study highlights the capability of integrating artificial intelligence with medical imaging to create a highly accurate and non-invasive diagnostic instrument for cardiovascular disease.
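The fusion step described above, combining CNN-derived image features with tabular clinical features before classification, reduces to simple early fusion by concatenation. A sketch with illustrative feature names and values (not the study's actual features):

```python
# Sketch of early fusion: concatenate a pooled CNN image embedding with
# tabular clinical features to form one input vector for a classifier.
# Feature names and values below are toy examples.

def fuse_features(image_features, clinical_features):
    """Early fusion by simple concatenation of the two feature vectors."""
    return list(image_features) + list(clinical_features)

img_feat  = [0.12, 0.80, 0.33]   # e.g. pooled CNN embedding (toy)
clin_feat = [63.0, 1.0, 140.0]   # e.g. age, sex, systolic BP (toy)
fused = fuse_features(img_feat, clin_feat)
```

In practice the clinical columns would be scaled to a comparable range before fusion, since tree ensembles tolerate mixed scales but SVMs and DNNs do not.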

A novel hybrid convolutional and transformer network for lymphoma classification.

Sikkandar MY, Sundaram SG, Almeshari MN, Begum SS, Sankari ES, Alduraywish YA, Obidallah WJ, Alotaibi FM

PubMed, Jul 19 2025
Lymphoma poses a critical health challenge worldwide, demanding computer-aided solutions for diagnosis, treatment, and research to significantly enhance patient outcomes and combat this pervasive disease. Accurate classification of lymphoma subtypes from Whole Slide Images (WSIs) remains a complex challenge due to morphological similarities among subtypes and the limitations of models that fail to jointly capture local and global features. Traditional diagnostic methods, limited by subjectivity and inconsistencies, highlight the need for advanced, Artificial Intelligence (AI)-driven solutions. This study proposes a hybrid deep learning framework, the Hybrid Convolutional and Transformer Network for Lymphoma Classification (HCTN-LC), designed to enhance the precision and interpretability of lymphoma subtype classification. The model employs a dual-pathway architecture that combines a lightweight SqueezeNet for local feature extraction with a Vision Transformer (ViT) for capturing global context. A Feature Fusion and Enhancement Module (FFEM) is introduced to dynamically integrate features from both pathways. The model is trained and evaluated on a large WSI dataset encompassing three lymphoma subtypes: chronic lymphocytic leukemia (CLL), follicular lymphoma (FL), and mantle cell lymphoma (MCL). HCTN-LC achieves superior performance with an overall accuracy of 99.87%, sensitivity of 99.87%, specificity of 99.93%, and AUC of 0.9991, outperforming several recent hybrid models. Grad-CAM visualizations confirm the model's focus on diagnostically relevant regions. The proposed HCTN-LC demonstrates strong potential for real-time and low-resource clinical deployment, offering a robust and interpretable AI tool for hematopathological diagnosis.
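The idea behind a fusion module like the FFEM, dynamically weighing the local (SqueezeNet) pathway against the global (ViT) pathway, can be sketched as a learned sigmoid gate. This is an illustration of gated fusion in general, not the paper's actual FFEM; the gate parameter here is a fixed placeholder standing in for learned weights:

```python
import math

# Sketch of gated fusion between a local and a global feature vector:
# a sigmoid gate g in (0, 1) mixes the two pathways elementwise.
# gate_logit stands in for a learned parameter; values are toy examples.

def gated_fusion(local_feat, global_feat, gate_logit):
    g = 1.0 / (1.0 + math.exp(-gate_logit))   # sigmoid gate
    return [g * l + (1.0 - g) * r for l, r in zip(local_feat, global_feat)]

fused = gated_fusion([1.0, 0.0], [0.0, 1.0], gate_logit=0.0)  # gate = 0.5
```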
