
Development and validation of clinical-radiomics deep learning model based on MRI for endometrial cancer molecular subtypes classification.

Yue W, Han R, Wang H, Liang X, Zhang H, Li H, Yang Q

PubMed · May 16, 2025
This study aimed to develop and validate a clinical-radiomics deep learning (DL) model based on MRI for endometrial cancer (EC) molecular subtype classification. This multicenter retrospective study included EC patients who underwent surgery, MRI, and molecular pathology diagnosis across three institutions from January 2020 to March 2024. Patients were divided into training, internal validation, and external validation cohorts. A total of 386 handcrafted radiomics features were extracted from each MR sequence, and MoCo-v2 was employed for contrastive self-supervised learning to extract 2048 DL features per patient. After feature selection, the retained features were fed into 12 machine learning methods. Model performance was evaluated with the AUC. A total of 526 patients were included (mean age, 55.01 ± 11.07 years). The radiomics model and the clinical model demonstrated comparable performance across the internal and external validation cohorts, with macro-average AUCs of 0.70 vs 0.69 and 0.70 vs 0.67 (p = 0.51), respectively. Compared with the radiomics model, the radiomics DL model improved AUCs for POLEmut (0.68 to 0.79), NSMP (0.71 to 0.74), and p53abn (0.76 to 0.78) in the internal validation cohort (p = 0.08). The clinical-radiomics DL model outperformed both the clinical model and the radiomics DL model (macro-average AUC = 0.79 vs 0.69 and 0.73 in the internal validation cohort [p = 0.02]; 0.74 vs 0.67 and 0.69 in the external validation cohort [p = 0.04]). The clinical-radiomics DL model based on MRI effectively distinguished EC molecular subtypes and demonstrated strong potential, with robust validation across multiple centers. Future research should explore larger datasets to further uncover DL's potential. Our clinical-radiomics DL model based on MRI can help distinguish EC molecular subtypes, guiding clinicians in tailoring individualized treatments for EC patients. Accurate classification of EC molecular subtypes is crucial for prognostic risk assessment. The clinical-radiomics DL model outperformed both the clinical model and the radiomics DL model. The MRI features exhibited better diagnostic performance for POLEmut and p53abn.
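
The macro-average AUC reported here is the unweighted mean of one-vs-rest AUCs across the molecular subtypes. A minimal scikit-learn sketch of that computation on synthetic labels and probabilities (assuming the fourth standard EC subtype, MMRd, alongside the three named in the abstract):

```python
# Hypothetical sketch: macro-average AUC for a 4-class EC molecular
# subtype problem (POLEmut, MMRd, NSMP, p53abn). Labels and predicted
# probabilities are random stand-ins for model output.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 100
y_true = rng.integers(0, 4, size=n)          # one of 4 subtypes per patient
y_prob = rng.dirichlet(np.ones(4), size=n)   # class probabilities, rows sum to 1

# One-vs-rest AUC per subtype, averaged with equal weight per class
macro_auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")
print(f"macro-average AUC: {macro_auc:.2f}")
```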

Lightweight hybrid transformers-based dyslexia detection using cross-modality data.

Sait ARW, Alkhurayyif Y

PubMed · May 16, 2025
Early and precise diagnosis of dyslexia is crucial for timely intervention to reduce its effects, and timely identification can improve an individual's academic and cognitive performance. Traditional dyslexia detection (DD) relies on lengthy, subjective, and restrictive behavioral evaluations and interviews. Due to these limitations, deep learning (DL) models have been explored to improve DD by analyzing complex neurological, behavioral, and visual data. However, DL architectures, including convolutional neural networks (CNNs) and vision transformers (ViTs), encounter challenges in extracting meaningful patterns from cross-modality data, and limited model interpretability and computational power restrict their generalizability across diverse datasets. To overcome these limitations, we propose an innovative model for DD using magnetic resonance imaging (MRI), electroencephalography (EEG), and handwriting images. The model leverages hybrid transformer-based feature extraction: SWIN-Linformer for MRI, LeViT-Performer for handwriting images, and graph transformer networks (GTNs) with multi-attention mechanisms for EEG data. A multi-modal attention-based feature fusion network fuses the extracted features to guarantee the integration of key multi-modal features. We enhance Dartbooster XGBoost (DXB)-based classification using the Bayesian optimization with Hyperband (BOHB) algorithm, and we employ quantization-aware training to reduce computational overhead. The local interpretable model-agnostic explanations (LIME) technique and gradient-weighted class activation mapping (Grad-CAM) were adopted to enable model interpretability. Five public repositories were used to train and test the proposed model. The experimental outcomes demonstrate that the proposed model achieves an accuracy of 99.8% with limited computational overhead, outperforming baseline models. It sets a new standard for DD, offering potential for early identification and timely intervention. In the future, advanced feature fusion and quantization techniques could be utilized to achieve optimal results in resource-constrained environments.
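
The fusion step described above (projecting per-modality features into a shared space and combining them with attention) can be sketched in PyTorch as follows. Dimensions, layer sizes, and mean pooling are assumptions, and the paper's SWIN-Linformer, LeViT-Performer, and GTN extractors are stood in for by precomputed feature vectors:

```python
# Hedged sketch of attention-based fusion of three modality embeddings
# (MRI, handwriting, EEG). All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dims=(768, 384, 256), d_model=256, n_heads=4):
        super().__init__()
        # Project each modality's features into a shared space
        self.proj = nn.ModuleList(nn.Linear(d, d_model) for d in dims)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, 2)       # dyslexia vs control

    def forward(self, feats):                   # list of (B, d_i) tensors
        tokens = torch.stack([p(f) for p, f in zip(self.proj, feats)], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)   # (B, 3, d_model)
        return self.head(fused.mean(dim=1))     # pool over modality tokens

x = [torch.randn(8, d) for d in (768, 384, 256)]       # dummy batch of 8
print(AttentionFusion()(x).shape)                      # torch.Size([8, 2])
```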

Impact of test set composition on AI performance in pediatric wrist fracture detection in X-rays.

Till T, Scherkl M, Stranger N, Singer G, Hankel S, Flucher C, Hržić F, Štajduhar I, Tschauner S

PubMed · May 16, 2025
To evaluate how different test set sampling strategies (random selection and balanced sampling) affect the performance of artificial intelligence (AI) models in pediatric wrist fracture detection using radiographs, aiming to highlight the need for standardization in test set design. This retrospective study utilized the open-source GRAZPEDWRI-DX dataset of 6091 pediatric wrist radiographs. Two test sets, each containing 4588 images, were constructed: one using a balanced approach based on case difficulty, projection type, and fracture presence, and the other using random selection. EfficientNet and YOLOv11 models were trained and validated on 18,762 radiographs and tested on both sets. Binary classification and object detection tasks were evaluated using metrics such as precision, recall, F1 score, AP50, and AP50-95. Statistical comparisons between test sets were performed using nonparametric tests. Performance metrics decreased significantly on the balanced test set with more challenging cases. For example, the precision of the YOLOv11 models decreased from 0.95 on the random set to 0.83 on the balanced set. Similar trends were observed for recall, accuracy, and F1 score, indicating that models trained on easy-to-recognize cases performed poorly on more complex ones. These results were consistent across all model variants tested. AI models for pediatric wrist fracture detection exhibit reduced performance when tested on balanced datasets containing more difficult cases, compared to randomly selected cases. This highlights the importance of constructing representative, standardized test sets that account for clinical complexity to ensure robust AI performance in real-world settings.
Question: Do test set sampling strategies based on sample complexity influence deep learning models' performance in fracture detection?
Findings: AI performance in pediatric wrist fracture detection drops significantly when tested on balanced datasets with more challenging cases, compared to randomly selected cases.
Clinical relevance: Without standardized and validated AI test datasets that reflect clinical complexities, performance metrics may be overestimated, limiting the utility of AI in real-world settings.
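
The two sampling strategies compared in the study can be illustrated with pandas; the column names and stratum definitions below are assumptions, not the actual GRAZPEDWRI-DX metadata schema:

```python
# Illustrative sketch: random test-set selection vs. balanced sampling
# stratified on difficulty, projection, and fracture presence.
import itertools
import pandas as pd

combos = list(itertools.product(["easy", "hard"], ["ap", "lat"], [0, 1]))
df = pd.DataFrame(combos * 25, columns=["difficulty", "projection", "fracture"])

n_test = 40

# Strategy 1: simple random sampling
random_test = df.sample(n=n_test, random_state=42)

# Strategy 2: balanced sampling, equal counts per stratum
strata = df.groupby(["difficulty", "projection", "fracture"])
per_stratum = n_test // strata.ngroups
balanced_test = strata.sample(n=per_stratum, random_state=42)

print(random_test["difficulty"].value_counts())
print(balanced_test["difficulty"].value_counts())
```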

Diagnostic challenges of carpal tunnel syndrome in patients with congenital thenar hypoplasia: a comprehensive review.

Naghizadeh H, Salkhori O, Akrami S, Khabiri SS, Arabzadeh A

PubMed · May 16, 2025
Carpal tunnel syndrome (CTS) is the most common entrapment neuropathy, frequently presenting with pain, numbness, and muscle weakness due to median nerve compression. Diagnosing CTS becomes particularly challenging, however, in patients with congenital thenar hypoplasia (CTH), a rare congenital anomaly characterized by underdeveloped thenar muscles. The overlapping symptoms of CTH and CTS, such as thumb weakness, impaired hand function, and thenar muscle atrophy, can obscure the identification of median nerve compression. This review highlights the diagnostic complexities arising from this overlap and evaluates existing clinical, imaging, and electrophysiological assessment methods. While traditional diagnostic tests, including Phalen's and Tinel's signs, exhibit limited sensitivity in CTH patients, advanced imaging modalities such as ultrasonography (US), magnetic resonance imaging (MRI), and diffusion tensor imaging (DTI) provide valuable insights into structural abnormalities. Additionally, emerging technologies such as artificial intelligence (AI) enhance diagnostic precision by automating imaging analysis and identifying subtle nerve alterations. An interdisciplinary approach combining clinical history, functional assessments, and advanced imaging is critical to accurately differentiate CTH-related anomalies from CTS. This comprehensive review underscores the need for tailored diagnostic protocols to improve early detection, personalized management, and outcomes for this unique patient population.

Artificial intelligence generated 3D body composition predicts dose modifications in patients undergoing neoadjuvant chemotherapy for rectal cancer.

Besson A, Cao K, Mardinli A, Wirth L, Yeung J, Kokelaar R, Gibbs P, Reid F, Yeung JM

PubMed · May 16, 2025
Chemotherapy administration is a balancing act between giving enough to achieve the desired tumour response while limiting adverse effects. Chemotherapy dosing is based on body surface area (BSA), but emerging evidence suggests body composition plays a crucial role in the pharmacokinetic and pharmacodynamic profile of cytotoxic agents and could inform optimal dosing. This study assessed how lumbosacral body composition influences adverse events in patients receiving neoadjuvant chemotherapy for rectal cancer. A retrospective study (February 2013 to March 2023) examined the impact of body composition on neoadjuvant treatment outcomes in rectal cancer patients. Staging CT scans were analysed using a validated AI model to measure lumbosacral skeletal muscle (SM), intramuscular adipose tissue (IMAT), visceral adipose tissue (VAT), and subcutaneous adipose tissue volume and density. Multivariate analyses explored the relationship between body composition and chemotherapy outcomes. A total of 242 patients were included (164 males, 78 females), with a median age of 63.4 years. Chemotherapy dose reductions occurred more frequently in females (26.9% vs. 15.9%, p = 0.042) and in females with greater VAT density (-82.7 vs. -89.1, p = 0.007) and SM:IMAT + VAT volume ratio (1.99 vs. 1.36, p = 0.042). BSA was a poor predictor of dose reduction in female patients (AUC 0.397, sensitivity 38%, specificity 60%), whereas the SM:IMAT + VAT volume ratio (AUC 0.651, sensitivity 76%, specificity 61%) and VAT density (AUC 0.699, sensitivity 57%, specificity 74%) showed greater predictive ability. Body composition did not influence dose adjustment in male patients. Lumbosacral body composition outperformed BSA in predicting adverse events in female patients with rectal cancer undergoing neoadjuvant chemotherapy.
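
As a rough illustration of comparing predictors of dose reduction by AUC, the sketch below contrasts BSA with a composition ratio on synthetic data. The Du Bois BSA formula is one common choice and is an assumption here, as are all numbers and the toy outcome model:

```python
# Hedged sketch: BSA vs. SM:IMAT+VAT ratio as predictors of dose
# reduction, scored by AUC. Entirely synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 242
height_cm = rng.normal(168, 9, n)
weight_kg = rng.normal(75, 14, n)
bsa = 0.007184 * height_cm**0.725 * weight_kg**0.425   # Du Bois formula (assumed)

ratio = rng.normal(1.7, 0.5, n)                        # SM:IMAT+VAT stand-in
# Toy outcome weakly driven by the ratio, not by BSA
dose_reduced = (ratio + rng.normal(0, 0.5, n) > 2.0).astype(int)

print("BSA AUC:  ", round(roc_auc_score(dose_reduced, bsa), 3))
print("Ratio AUC:", round(roc_auc_score(dose_reduced, ratio), 3))
```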

How early can we detect diabetic retinopathy? A narrative review of imaging tools for structural assessment of the retina.

Vaughan M, Denmead P, Tay N, Rajendram R, Michaelides M, Patterson E

PubMed · May 16, 2025
Despite current screening models, enhanced imaging modalities, and treatment regimens, diabetic retinopathy (DR) remains one of the leading causes of vision loss in working-age adults. DR can result in irreversible structural and functional retinal damage, leading to visual impairment and reduced quality of life. Given the potential for irreversible photoreceptor damage, diagnosis and treatment at the earliest stages provide the best opportunity to avoid visual disturbance or retinopathy progression. We review herein the current structural imaging methods used for DR assessment and their capability to detect DR in the first stages of disease. Imaging tools such as fundus photography, optical coherence tomography, fundus fluorescein angiography, optical coherence tomography angiography, and adaptive optics-assisted imaging are reviewed. Finally, we describe the future of DR screening programmes and the introduction of artificial intelligence as an innovative approach to detecting subtle changes in the diabetic retina. Clinical trial registration number: N/A.

Pancreas segmentation using AI developed on the largest CT dataset with multi-institutional validation and implications for early cancer detection.

Mukherjee S, Antony A, Patnam NG, Trivedi KH, Karbhari A, Nagaraj M, Murlidhar M, Goenka AH

PubMed · May 16, 2025
Accurate, fully automated pancreas segmentation is critical for advancing imaging biomarkers in early pancreatic cancer detection and for biomarker discovery in endocrine and exocrine pancreatic diseases. We developed and evaluated a deep learning (DL)-based convolutional neural network (CNN) for automated pancreas segmentation using the largest single-institution dataset to date (n = 3031 CTs). Ground-truth segmentations performed by radiologists were used to train a 3D nnU-Net model through five-fold cross-validation, generating an ensemble of top-performing models. To assess generalizability, the model was externally validated on the multi-institutional AbdomenCT-1K dataset (n = 585), for which volumetric segmentations were newly generated by expert radiologists and will be made publicly available. In the test subset (n = 452), the CNN achieved a mean Dice Similarity Coefficient (DSC) of 0.94 (SD 0.05), demonstrating high spatial concordance with radiologist-annotated volumes (Concordance Correlation Coefficient [CCC]: 0.95). On the AbdomenCT-1K dataset, the model achieved a DSC of 0.96 (SD 0.04) and a CCC of 0.98, confirming its robustness across diverse imaging conditions. The proposed DL model establishes new performance benchmarks for fully automated pancreas segmentation, offering a scalable and generalizable solution for large-scale imaging biomarker research and clinical translation.
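
The Dice Similarity Coefficient used for evaluation is defined as DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A and B. A minimal numpy sketch with toy 3D masks standing in for pancreas segmentations:

```python
# Minimal sketch of the Dice Similarity Coefficient on binary 3D masks.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float(2 * inter / (pred.sum() + gt.sum() + eps))

rng = np.random.default_rng(0)
gt = rng.random((64, 64, 32)) > 0.7    # toy ground-truth mask
pred = gt.copy()
pred[:8] = ~pred[:8]                   # corrupt a slab to simulate model error
print(f"DSC: {dice(pred, gt):.3f}")
```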

Residual self-attention vision transformer for detecting acquired vitelliform lesions and age-related macular drusen.

Powroznik P, Skublewska-Paszkowska M, Nowomiejska K, Gajda-Deryło B, Brinkmann M, Concilio M, Toro MD, Rejdak R

PubMed · May 16, 2025
Retinal disease recognition remains a challenging task. Many deep learning classification methods and their modifications have been developed for medical imaging, and recently Vision Transformers (ViTs) have been applied to the classification of retinal diseases with great success. In this study a novel method is proposed, the Residual Self-Attention Vision Transformer (RS-A ViT), for automatic detection of acquired vitelliform lesions (AVL) and macular drusen, and for distinguishing them from healthy cases. A Residual Self-Attention module was applied in place of standard Self-Attention to improve the model's performance. The new tool outperforms classical deep learning methods such as EfficientNet, InceptionV3, ResNet50, and VGG16, and also exceeds the ViT algorithm, reaching 96.62%. For the purposes of this research a new dataset was created that combines AVL data gathered from two research centers with drusen and normal cases from the OCT dataset. Augmentation methods were applied to enlarge the samples. The Grad-CAM interpretability method indicated that the model analyzes the appropriate areas of optical coherence tomography images when detecting retinal diseases. The results show that the presented RS-A ViT model has great potential for classifying retinal disorders with high accuracy and may thus be applied as a supportive tool for ophthalmologists.
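
The core idea of a residual self-attention block, multi-head self-attention wrapped in an explicit skip connection, can be sketched in PyTorch as below; the exact RS-A formulation in the paper may differ from this simplification, and all dimensions are assumptions:

```python
# Hedged sketch of a residual self-attention block for ViT-style tokens.
import torch
import torch.nn as nn

class ResidualSelfAttention(nn.Module):
    def __init__(self, dim=192, heads=3):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                        # x: (B, tokens, dim)
        h = self.norm(x)
        out, _ = self.attn(h, h, h)              # self-attention over tokens
        return x + out                           # residual (skip) connection

tokens = torch.randn(2, 197, 192)                # e.g. 196 patch tokens + CLS
print(ResidualSelfAttention()(tokens).shape)     # torch.Size([2, 197, 192])
```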

Multicenter development of a deep learning radiomics and dosiomics nomogram to predict radiation pneumonia risk in non-small cell lung cancer.

Wang X, Zhang A, Yang H, Zhang G, Ma J, Ye S, Ge S

PubMed · May 16, 2025
Radiation pneumonia (RP) is the most common side effect of chest radiotherapy and can affect patients' quality of life. This study aimed to establish a combined model of radiomics, dosiomics, and deep learning (DL) features based on simulation CT and dose distribution images, together with clinical parameters, to improve prediction of grade ≥ 2 RP (RP2) in patients with non-small cell lung cancer (NSCLC). This study retrospectively collected 245 patients with NSCLC who received radiotherapy at three hospitals. The 162 patients from Hospital I were randomly divided into a training cohort and an internal validation cohort in a 7:3 ratio; 83 patients from the two other hospitals served as an external validation cohort. Multivariate analysis was used to screen independent clinical predictors and establish a clinical model (CM). Radiomics and dosiomics (RD) features and DL features were extracted from the simulation CT and dose distribution images based on a region of interest (ROI) of the total lung minus the PTV (TL-PTV). Features screened by the t-test and least absolute shrinkage and selection operator (LASSO) were used to construct the RD and DL models, and an RD-score and DL-score were calculated. The RD-score, DL-score, and independent clinical features were combined to establish a deep learning radiomics and dosiomics nomogram (DLRDN). Model performance was evaluated by the area under the curve (AUC). Three clinical factors, V20, V30, and mean lung dose (MLD), were used to establish the CM. Seven RD features (4 radiomics and 3 dosiomics) were selected for the RD model, and 10 DL features were selected for the DL model. Among the different models, DLRDN showed the best predictions, with AUCs of 0.891 (0.826-0.957), 0.825 (0.693-0.957), and 0.801 (0.698-0.904) in the training, internal validation, and external validation cohorts, respectively. Decision curve analysis (DCA) showed that DLRDN had a higher overall net benefit than the other models, and the calibration curve showed good agreement between predicted and actual values. Overall, radiomics, dosiomics, and DL features based on simulation CT and dose distribution images have the potential to help predict RP2; combining multi-dimensional data produced the optimal predictive model, which could provide guidance for clinicians.
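
The two-stage feature screening described above (univariate t-test filter followed by LASSO) can be sketched with scipy and scikit-learn; the feature matrix, labels, and thresholds below are synthetic stand-ins:

```python
# Illustrative sketch of t-test filtering followed by LASSO selection.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(162, 300))          # 162 patients, 300 RD features
y = rng.integers(0, 2, size=162)         # RP2 vs no RP2 (toy labels)

# Stage 1: keep features that differ between outcome groups (p < 0.05)
_, p = ttest_ind(X[y == 1], X[y == 0], axis=0)
X_t = X[:, p < 0.05]

# Stage 2: LASSO with cross-validated penalty; nonzero coefficients survive
X_std = StandardScaler().fit_transform(X_t)
lasso = LassoCV(cv=5, random_state=0).fit(X_std, y)
selected = np.flatnonzero(lasso.coef_)
print(f"{X_t.shape[1]} features after t-test, {selected.size} after LASSO")
```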

Deep learning progressive distill for predicting clinical response to conversion therapy from preoperative CT images of advanced gastric cancer patients.

Han S, Zhang T, Deng W, Han S, Wu H, Jiang B, Xie W, Chen Y, Deng T, Wen X, Liu N, Fan J

PubMed · May 16, 2025
Identifying patients suitable for conversion therapy through early non-invasive screening is crucial for tailoring treatment in advanced gastric cancer (AGC). This study aimed to develop and validate a deep learning method, utilizing preoperative computed tomography (CT) images, to predict the response to conversion therapy in AGC patients. This retrospective study involved 140 patients. We utilized the Progressive Distill (PD) methodology to construct a deep learning model for predicting clinical response to conversion therapy based on preoperative CT images. Patients in the training set (n = 112) and the test set (n = 28) were sourced from The First Affiliated Hospital of Wenzhou Medical University between September 2017 and November 2023. Our PD model's performance was compared with baseline models and models utilizing Knowledge Distillation (KD), with evaluation metrics including accuracy, sensitivity, specificity, receiver operating characteristic (ROC) curves, areas under the ROC curve (AUCs), and heat maps. The PD model exhibited the best performance, demonstrating robust discrimination of clinical response to conversion therapy with an AUC of 0.99 and accuracy of 99.11% in the training set, and an AUC of 0.87 and accuracy of 85.71% in the test set. Sensitivity and specificity were 97.44% and 100%, respectively, in the training set, and 85.71% each in the test set, suggesting the absence of discernible bias. The PD deep learning model accurately predicts clinical response to conversion therapy in AGC patients. Further investigation is warranted to assess its clinical utility alongside clinicopathological parameters.
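
The abstract does not detail the Progressive Distill schedule, but the Knowledge Distillation baseline it is compared against conventionally minimizes a temperature-softened KL term between teacher and student logits plus the usual cross-entropy. A hedged sketch of that standard KD loss, not of PD itself:

```python
# Standard knowledge-distillation loss; temperatures and weights are
# illustrative assumptions, and the paper's PD schedule is not reproduced.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets: match the teacher's distribution at temperature T
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                 # standard T^2 gradient scaling
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 2)                           # responder vs non-responder
t = torch.randn(8, 2)
y = torch.randint(0, 2, (8,))
print(kd_loss(s, t, y).item())
```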