
Multitask Deep Learning Based on Longitudinal CT Images Facilitates Prediction of Lymph Node Metastasis and Survival in Chemotherapy-Treated Gastric Cancer.

Qiu B, Zheng Y, Liu S, Song R, Wu L, Lu C, Yang X, Wang W, Liu Z, Cui Y

PubMed · Jul 2, 2025
Accurate preoperative assessment of lymph node metastasis (LNM) and overall survival (OS) status is essential for patients with locally advanced gastric cancer receiving neoadjuvant chemotherapy, providing timely guidance for clinical decision-making. However, current approaches to evaluate LNM and OS have limited accuracy. In this study, we used longitudinal CT images from 1,021 patients with locally advanced gastric cancer to develop and validate a multitask deep learning model, named co-attention tri-oriented spatial Mamba (CTSMamba), to simultaneously predict LNM and OS. CTSMamba was trained and validated on 398 patients, and the performance was further validated on 623 patients at two additional centers. Notably, CTSMamba exhibited significantly more robust performance than a clinical model in predicting LNM across all of the cohorts. Additionally, integrating CTSMamba survival scores with clinical predictors further improved personalized OS prediction. These results support the potential of CTSMamba to accurately predict LNM and OS from longitudinal images, providing clinicians with a tool to inform individualized treatment approaches and optimized prognostic strategies. CTSMamba is a multitask deep learning model trained on longitudinal CT images of neoadjuvant chemotherapy-treated locally advanced gastric cancer that accurately predicts lymph node metastasis and overall survival to inform clinical decision-making. This article is part of a special series: Driving Cancer Discoveries with Computational Research, Data Science, and Machine Learning/AI.
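
The multitask setup pairs a nodal-status prediction with a survival output on shared image features. The CTSMamba architecture itself is not reproduced here; the sketch below is only a generic PyTorch illustration of that pattern (a binary LNM logit plus a Cox-style risk score from one shared representation), and all layer sizes, the MultiTaskHead name, and the toy tensors are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    """Generic multitask head: shared features -> LNM logit + survival risk score.
    This is NOT CTSMamba; it only illustrates joint classification/survival prediction."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Dropout(0.3))
        self.lnm_head = nn.Linear(256, 1)    # binary lymph-node-metastasis logit
        self.surv_head = nn.Linear(256, 1)   # log-risk fed to a Cox partial-likelihood loss

    def forward(self, feats):
        h = self.shared(feats)
        return self.lnm_head(h).squeeze(-1), self.surv_head(h).squeeze(-1)

def neg_cox_partial_loglik(risk, time, event):
    """Negative Cox partial log-likelihood (Breslow approximation, ties ignored)."""
    order = torch.argsort(time, descending=True)   # descending time -> cumulative risk sets
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)

# toy usage with random stand-in backbone features
feats = torch.randn(8, 512)
lnm_logit, surv_risk = MultiTaskHead()(feats)
loss = nn.functional.binary_cross_entropy_with_logits(
    lnm_logit, torch.randint(0, 2, (8,)).float()
) + neg_cox_partial_loglik(surv_risk, torch.rand(8), torch.randint(0, 2, (8,)).float())
loss.backward()
```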

Deep learning-based sex estimation of 3D hyoid bone models in a Croatian population using adapted PointNet++ network.

Jerković I, Bašić Ž, Kružić I

PubMed · Jul 2, 2025
This study investigates a deep learning approach for sex estimation using 3D hyoid bone models derived from computed tomography (CT) scans of a Croatian population. We analyzed 202 hyoid samples (101 male, 101 female), converting CT-derived meshes into 2048-point clouds for processing with an adapted PointNet++ network. The model, optimized for small datasets with 1D convolutional layers and global size features, was first applied in an unsupervised framework. Unsupervised clustering achieved 87.10% accuracy, identifying natural sex-based morphological patterns. Subsequently, supervised classification with a support vector machine yielded an accuracy of 88.71% (Matthews Correlation Coefficient, MCC = 0.7746) on a test set (n = 62). Interpretability analysis highlighted key regions influencing classification, with males exhibiting larger, U-shaped hyoids and females showing smaller, more open structures. Despite the modest sample size, the method effectively captured sex differences, providing a data-efficient and interpretable tool. This flexible approach, combining computational efficiency with practical insights, demonstrates potential for aiding sex estimation in cases with limited skeletal remains and may support broader applications in forensic anthropology.
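
As a rough illustration of the downstream supervised step (fixed-size point-cloud input, SVM classification on learned embeddings, accuracy and MCC on a held-out test set of n = 62), here is a minimal scikit-learn sketch. The 64-dimensional embeddings are random stand-ins for the adapted PointNet++ features, so the printed numbers are meaningless; only the workflow is illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, matthews_corrcoef

def sample_point_cloud(vertices: np.ndarray, n_points: int = 2048) -> np.ndarray:
    """Subsample (or resample) mesh vertices to a fixed-size, centered point cloud."""
    idx = np.random.choice(len(vertices), n_points, replace=len(vertices) < n_points)
    cloud = vertices[idx]
    return cloud - cloud.mean(axis=0)

# stand-ins for network embeddings of 202 hyoids (101 female, 101 male)
rng = np.random.default_rng(0)
X = rng.normal(size=(202, 64))      # hypothetical 64-d PointNet++-style embeddings
y = np.tile([0, 1], 101)            # 0 = female, 1 = male

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=62, stratify=y, random_state=0)
pred = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr).predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.4f}  MCC={matthews_corrcoef(y_te, pred):.4f}")
```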

Topological Signatures vs. Gradient Histograms: A Comparative Study for Medical Image Classification

Faisal Ahmed, Mohammad Alfrad Nobel Bhuiyan

arXiv preprint · Jul 2, 2025
We present the first comparative study of two fundamentally distinct feature extraction techniques: Histogram of Oriented Gradients (HOG) and Topological Data Analysis (TDA), for medical image classification using retinal fundus images. HOG captures local texture and edge patterns through gradient orientation histograms, while TDA, using cubical persistent homology, extracts high-level topological signatures that reflect the global structure of pixel intensities. We evaluate both methods on the large APTOS dataset for two classification tasks: binary detection (normal versus diabetic retinopathy) and five-class diabetic retinopathy severity grading. From each image, we extract 26244 HOG features and 800 TDA features, using them independently to train seven classical machine learning models with 10-fold cross-validation. XGBoost achieved the best performance in both cases: 94.29 percent accuracy (HOG) and 94.18 percent (TDA) on the binary task; 74.41 percent (HOG) and 74.69 percent (TDA) on the multi-class task. Our results show that both methods offer competitive performance but encode different structural aspects of the images. This is the first work to benchmark gradient-based and topological features on retinal imagery. The techniques are interpretable, applicable to other medical imaging domains, and suitable for integration into deep learning pipelines.
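
The HOG branch of the pipeline (gradient-orientation histograms fed to classical classifiers with 10-fold cross-validation) can be sketched with scikit-image and XGBoost as below. The HOG parameters, image size, and XGBoost settings are assumptions for illustration and do not reproduce the paper's 26,244-feature configuration; the TDA features would be evaluated with the same cross-validation loop.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

def hog_features(image: np.ndarray, size=(224, 224)) -> np.ndarray:
    """Gradient-orientation histogram descriptor for one grayscale fundus image."""
    img = resize(image, size, anti_aliasing=True)
    return hog(img, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

# toy stand-ins: 40 random "images" with balanced binary labels (normal vs. DR)
rng = np.random.default_rng(0)
X = np.stack([hog_features(rng.random((224, 224))) for _ in range(40)])
y = np.tile([0, 1], 20)

clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
print("10-fold accuracy:", cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean())
```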

Are Vision Transformer Representations Semantically Meaningful? A Case Study in Medical Imaging

Montasir Shams, Chashi Mahiul Islam, Shaeke Salman, Phat Tran, Xiuwen Liu

arXiv preprint · Jul 2, 2025
Vision transformers (ViTs) have rapidly gained prominence in medical imaging tasks such as disease classification, segmentation, and detection due to their superior accuracy compared to conventional deep learning models. However, due to their size and complex interactions via the self-attention mechanism, they are not well understood. In particular, it is unclear whether the representations produced by such models are semantically meaningful. In this paper, using a projected gradient-based algorithm, we show that their representations are not semantically meaningful and they are inherently vulnerable to small changes. Images with imperceptible differences can have very different representations; on the other hand, images that should belong to different semantic classes can have nearly identical representations. Such vulnerability can lead to unreliable classification results; for example, unnoticeable changes cause the classification accuracy to be reduced by over 60%. To the best of our knowledge, this is the first work to systematically demonstrate this fundamental lack of semantic meaningfulness in ViT representations for medical image classification, revealing a critical challenge for their deployment in safety-critical systems.
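
The representation attack can be pictured as projected gradient ascent on the distance between a clean image's embedding and a perturbed image's embedding, constrained to an imperceptible L-infinity ball. The PyTorch sketch below is a generic version of that idea, not the paper's exact algorithm; eps, alpha, the step count, and the stand-in encoder are illustrative assumptions.

```python
import torch

def representation_pgd(encoder, x, steps=40, eps=4 / 255, alpha=1 / 255):
    """Projected-gradient search for a small perturbation that maximally moves
    the encoder's representation of x (illustrative sketch)."""
    x0 = x.detach()
    rep0 = encoder(x0).detach()
    delta = torch.zeros_like(x0, requires_grad=True)
    for _ in range(steps):
        loss = (encoder(x0 + delta) - rep0).pow(2).sum()   # push the representation away
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()              # gradient ascent step
            delta.clamp_(-eps, eps)                         # project into the L_inf ball
            delta.copy_((x0 + delta).clamp(0, 1) - x0)      # keep the image in [0, 1]
        delta.grad.zero_()
    return (x0 + delta).detach()

# toy usage with a stand-in encoder (a real experiment would use a pretrained ViT)
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
x = torch.rand(2, 3, 32, 32)
x_adv = representation_pgd(encoder, x)
print((x_adv - x).abs().max())   # bounded by eps, yet the embeddings can differ markedly
```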

Classifying and diagnosing Alzheimer's disease with deep learning using 6735 brain MRI images.

Mousavi SM, Moulaei K, Ahmadian L

PubMed · Jul 2, 2025
Traditional diagnostic methods for Alzheimer's disease often suffer from low accuracy and lengthy processing times, delaying crucial interventions and patient care. Deep convolutional neural networks trained on MRI data can enhance diagnostic precision. This study aims to utilize deep convolutional neural networks (CNNs) trained on MRI data for Alzheimer's disease diagnosis and classification. In this study, the Alzheimer MRI Preprocessed Dataset was used, which includes 6735 brain structural MRI scan images. After data preprocessing and normalization, four models (Xception, VGG19, VGG16, and InceptionResNetV2) were utilized. Generalization and hyperparameter tuning were applied to improve training. Early stopping and a dynamic learning rate were used to prevent overfitting. Model performance was evaluated based on accuracy, F-score, recall, and precision. The InceptionResNetV2 model showed superior performance in predicting Alzheimer's patients, with an accuracy, F-score, recall, and precision of 0.99. The Xception model excelled in precision, recall, and F-score, with values of 0.97, and an accuracy of 96.89%. Notably, InceptionResNetV2 and VGG19 demonstrated faster learning, reaching convergence sooner and requiring fewer training iterations than the other models. The InceptionResNetV2 model achieved the highest performance, with precision, recall, and F-score of 100% for both mild and moderate dementia classes. The Xception model also performed well, attaining 100% for the moderate dementia class and 99-100% for the mild dementia class. Additionally, the VGG16 and VGG19 models showed strong results, with VGG16 reaching 100% precision, recall, and F-score for the moderate dementia class. Deep convolutional neural networks enhance Alzheimer's diagnosis, surpassing traditional methods with improved precision and efficiency. Models like InceptionResNetV2 show outstanding performance, potentially speeding up patient interventions.
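
A minimal transfer-learning sketch in Keras shows the general recipe the abstract describes (a pretrained backbone such as InceptionResNetV2, early stopping, and a dynamic learning rate). The input size, the class count of 4, and all hyperparameters are assumptions for illustration, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks
from tensorflow.keras.applications import InceptionResNetV2

NUM_CLASSES = 4   # assumed class set, e.g. non-demented through moderate dementia

base = InceptionResNetV2(include_top=False, weights="imagenet",
                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                      # train only the new classification head first

model = models.Sequential([
    base,
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# early stopping and a dynamic learning rate, as described in the abstract
cbs = [
    callbacks.EarlyStopping(patience=5, restore_best_weights=True),
    callbacks.ReduceLROnPlateau(factor=0.5, patience=2, min_lr=1e-6),
]
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=cbs)
```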

CareAssist GPT improves patient user experience with a patient centered approach to computer aided diagnosis.

Algarni A

PubMed · Jul 2, 2025
The rapid integration of artificial intelligence (AI) into healthcare has enhanced diagnostic accuracy; however, patient engagement and satisfaction remain significant challenges that hinder the widespread acceptance and effectiveness of AI-driven clinical tools. This study introduces CareAssist-GPT, a novel AI-assisted diagnostic model designed to improve both diagnostic accuracy and the patient experience through real-time, understandable, and empathetic communication. CareAssist-GPT combines high-resolution X-ray images, real-time physiological vital signs, and clinical notes within a unified predictive framework using deep learning. Feature extraction is performed using convolutional neural networks (CNNs), gated recurrent units (GRUs), and transformer-based NLP modules. Model performance was evaluated in terms of accuracy, precision, recall, specificity, and response time, alongside patient satisfaction through a structured user feedback survey. CareAssist-GPT achieved a diagnostic accuracy of 95.8%, improving by 2.4% over conventional models. It reported high precision (94.3%), recall (93.8%), and specificity (92.7%), with an AUC-ROC of 0.97. The system responded within 500 ms (23.1% faster than existing tools) and achieved a patient satisfaction score of 9.3 out of 10, demonstrating its real-time usability and communicative effectiveness. CareAssist-GPT significantly enhances the diagnostic process by improving accuracy and fostering patient trust through transparent, real-time explanations. These findings position it as a promising patient-centered AI solution capable of transforming healthcare delivery by bridging the gap between advanced diagnostics and human-centered communication.
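
The fusion of an image branch (CNN), a vital-signs branch (GRU), and a clinical-notes branch (transformer encoder) into one predictive head can be sketched generically in PyTorch as below. All dimensions, the MultimodalFusion name, and the toy inputs are illustrative assumptions; this is a pattern sketch, not the CareAssist-GPT implementation.

```python
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    """Illustrative fusion of an X-ray branch (CNN), a vitals branch (GRU), and a
    notes branch (transformer encoder). Dimensions are arbitrary."""
    def __init__(self, vocab_size=5000, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(                        # image branch
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())        # -> 32-d
        self.gru = nn.GRU(input_size=4, hidden_size=32, batch_first=True)   # vitals branch
        self.embed = nn.Embedding(vocab_size, 64)
        enc_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
        self.text_enc = nn.TransformerEncoder(enc_layer, num_layers=2)      # notes branch
        self.classifier = nn.Linear(32 + 32 + 64, n_classes)

    def forward(self, image, vitals, tokens):
        img_feat = self.cnn(image)                        # (B, 32)
        _, h = self.gru(vitals)                           # h: (1, B, 32)
        txt = self.text_enc(self.embed(tokens)).mean(dim=1)   # (B, 64)
        return self.classifier(torch.cat([img_feat, h[0], txt], dim=1))

# toy forward pass
m = MultimodalFusion()
logits = m(torch.rand(2, 1, 64, 64), torch.rand(2, 20, 4), torch.randint(0, 5000, (2, 30)))
print(logits.shape)   # torch.Size([2, 2])
```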

A computationally frugal open-source foundation model for thoracic disease detection in lung cancer screening programs

Niccolò McConnell, Pardeep Vasudev, Daisuke Yamada, Daryl Cheng, Mehran Azimbagirad, John McCabe, Shahab Aslani, Ahmed H. Shahin, Yukun Zhou, The SUMMIT Consortium, Andre Altmann, Yipeng Hu, Paul Taylor, Sam M. Janes, Daniel C. Alexander, Joseph Jacob

arXiv preprint · Jul 2, 2025
Low-dose computed tomography (LDCT) imaging employed in lung cancer screening (LCS) programs is increasing in uptake worldwide. LCS programs herald a generational opportunity to simultaneously detect cancer and non-cancer-related early-stage lung disease. Yet these efforts are hampered by a shortage of radiologists to interpret scans at scale. Here, we present TANGERINE, a computationally frugal, open-source vision foundation model for volumetric LDCT analysis. Designed for broad accessibility and rapid adaptation, TANGERINE can be fine-tuned off the shelf for a wide range of disease-specific tasks with limited computational resources and training data. Relative to models trained from scratch, TANGERINE demonstrates fast convergence during fine-tuning, thereby requiring significantly fewer GPU hours, and displays strong label efficiency, achieving comparable or superior performance with a fraction of fine-tuning data. Pretrained using self-supervised learning on over 98,000 thoracic LDCTs, including the UK's largest LCS initiative to date and 27 public datasets, TANGERINE achieves state-of-the-art performance across 14 disease classification tasks, including lung cancer and multiple respiratory diseases, while generalising robustly across diverse clinical centres. By extending a masked autoencoder framework to 3D imaging, TANGERINE offers a scalable solution for LDCT analysis, departing from recent closed, resource-intensive models by combining architectural simplicity, public availability, and modest computational requirements. Its accessible, open-source lightweight design lays the foundation for rapid integration into next-generation medical imaging tools that could transform LCS initiatives, allowing them to pivot from a singular focus on lung cancer detection to comprehensive respiratory disease management in high-risk populations.
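
TANGERINE extends a masked autoencoder to volumetric CT, which at its core means splitting a 3D volume into patch tokens and letting the encoder see only a random subset during pretraining. The sketch below shows that masking step only; the patch size, mask ratio, and volume shape are assumptions, and the encoder/decoder themselves are omitted.

```python
import torch

def patchify_3d(volume: torch.Tensor, patch: int = 16) -> torch.Tensor:
    """Split a (D, H, W) volume into flattened non-overlapping patch tokens
    of shape (num_patches, patch**3). Dimensions must be divisible by `patch`."""
    d, h, w = volume.shape
    v = volume.reshape(d // patch, patch, h // patch, patch, w // patch, patch)
    return v.permute(0, 2, 4, 1, 3, 5).reshape(-1, patch ** 3)

def random_mask(tokens: torch.Tensor, mask_ratio: float = 0.75):
    """MAE-style random masking: keep a random subset of tokens and return the
    kept tokens plus the indices needed to restore order at reconstruction time."""
    n = tokens.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    keep_idx = torch.randperm(n)[:n_keep]
    return tokens[keep_idx], keep_idx

vol = torch.rand(128, 128, 128)          # toy stand-in for a preprocessed LDCT volume
tokens = patchify_3d(vol)                # (512, 4096) patch tokens
visible, keep_idx = random_mask(tokens)  # the encoder sees only ~25% of patches
print(tokens.shape, visible.shape)
```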

Effect of artificial intelligence-aided differentiation of adenomatous and non-adenomatous colorectal polyps at CT colonography on radiologists' therapy management.

Grosu S, Fabritius MP, Winkelmann M, Puhr-Westerheide D, Ingenerf M, Maurus S, Graser A, Schulz C, Knösel T, Cyran CC, Ricke J, Kazmierczak PM, Ingrisch M, Wesp P

PubMed · Jul 1, 2025
Adenomatous colorectal polyps require endoscopic resection, as opposed to non-adenomatous hyperplastic colorectal polyps. This study aims to evaluate the effect of artificial intelligence (AI)-assisted differentiation of adenomatous and non-adenomatous colorectal polyps at CT colonography on radiologists' therapy management. Five board-certified radiologists evaluated CT colonography images with colorectal polyps of all sizes and morphologies retrospectively and decided whether the depicted polyps required endoscopic resection. After a primary unassisted reading based on current guidelines, a second reading with access to the classification of a radiomics-based random-forest AI model labelling each polyp as "non-adenomatous" or "adenomatous" was performed. Performance was evaluated using polyp histopathology as the reference standard. A total of 77 polyps in 59 patients comprising 118 polyp image series (47% supine position, 53% prone position) were evaluated unassisted and AI-assisted by five independent board-certified radiologists, resulting in a total of 1180 readings (subsequent polypectomy: yes or no). AI-assisted readings had higher accuracy (76% ± 1% vs. 84% ± 1%), sensitivity (78% ± 6% vs. 85% ± 1%), and specificity (73% ± 8% vs. 82% ± 2%) in selecting polyps eligible for polypectomy (p < 0.001). Inter-reader agreement was improved in the AI-assisted readings (Fleiss' kappa 0.69 vs. 0.92). AI-based characterisation of colorectal polyps at CT colonography as a second reader might enable a more precise selection of polyps eligible for subsequent endoscopic resection. However, further studies are needed to confirm this finding and histopathologic polyp evaluation is still mandatory. Question: This is the first study evaluating the impact of AI-based polyp classification in CT colonography on radiologists' therapy management. Findings: Compared with unassisted reading, AI-assisted reading had higher accuracy, sensitivity, and specificity in selecting polyps eligible for polypectomy. Clinical relevance: Integrating an AI tool for colorectal polyp classification in CT colonography could further improve radiologists' therapy recommendations.
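
Inter-reader agreement reported as Fleiss' kappa can be computed directly from the readers' per-polyp decisions; a minimal statsmodels sketch with synthetic ratings (118 image series, five readers, roughly 90% agreement by construction) is shown below. The numbers are placeholders, not the study's data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# toy stand-in: 118 polyp image series rated by 5 readers (1 = resect, 0 = leave)
rng = np.random.default_rng(0)
base = rng.integers(0, 2, size=(118, 1))
ratings = np.where(rng.random((118, 5)) < 0.9, base, 1 - base)   # ~90% reader agreement

table, _ = aggregate_raters(ratings)       # per-subject counts of each decision category
print(f"Fleiss' kappa = {fleiss_kappa(table):.3f}")
```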

Machine-learning model based on ultrasomics for non-invasive evaluation of fibrosis in IgA nephropathy.

Huang Q, Huang F, Chen C, Xiao P, Liu J, Gao Y

PubMed · Jul 1, 2025
To develop and validate an ultrasomics-based machine-learning (ML) model for non-invasive assessment of interstitial fibrosis and tubular atrophy (IF/TA) in patients with IgA nephropathy (IgAN). In this multi-center retrospective study, 471 patients with primary IgA nephropathy from four institutions were included (training, n = 275; internal testing, n = 69; external testing, n = 127). Least absolute shrinkage and selection operator (LASSO) logistic regression with tenfold cross-validation was used to identify the most relevant features. The ML models were constructed based on ultrasomics. The Shapley Additive Explanation (SHAP) was used to explore the interpretability of the models. Logistic regression analysis was employed to combine ultrasomics, clinical data, and ultrasound imaging characteristics, creating a comprehensive model. A receiver operating characteristic curve, calibration, decision curve, and clinical impact curve were used to evaluate prediction performance. To differentiate between mild and moderate-to-severe IF/TA, three prediction models were developed: the Rad_SVM_Model, Clinic_LR_Model, and Rad_Clinic_Model. The areas under the curve of these three models were 0.861, 0.884, and 0.913 in the training cohort, 0.760, 0.860, and 0.894 in the internal validation cohort, and 0.794, 0.865, and 0.904 in the external validation cohort. SHAP identified the contribution of radiomics features. Difference analysis showed significant differences in radiomics features between fibrosis groups. The comprehensive model was superior to the individual indicators and performed well. We developed and validated a model that combined ultrasomics, clinical data, and clinical ultrasonic characteristics based on ML to assess the extent of fibrosis in IgAN. Question: Currently, there is a lack of a comprehensive ultrasomics-based machine-learning model for non-invasive assessment of the extent of Immunoglobulin A nephropathy (IgAN) fibrosis. Findings: We have developed and validated a robust and interpretable machine-learning model based on ultrasomics for assessing the degree of fibrosis in IgAN. Clinical relevance: The machine-learning model developed in this study has significant interpretable clinical relevance. The ultrasomics-based comprehensive model has the potential for non-invasive assessment of fibrosis in IgAN, which could help evaluate disease progression.
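
The radiomics modelling chain the abstract describes (LASSO-style feature selection with tenfold cross-validation feeding an SVM classifier) maps naturally onto a scikit-learn pipeline. The sketch below uses synthetic stand-in features and an L1-penalised logistic regression as the selector, so settings such as C, the feature count, and the kernel are illustrative assumptions rather than the study's configuration.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold

# toy stand-in for ultrasomics features (rows = patients, columns = radiomic features)
rng = np.random.default_rng(0)
X = rng.normal(size=(275, 100))
y = rng.integers(0, 2, size=275)           # 0 = mild, 1 = moderate-to-severe IF/TA

pipe = Pipeline([
    ("scale", StandardScaler()),
    # L1-penalised logistic regression as the LASSO-style feature selector
    ("lasso", SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=1.0))),
    ("svm", SVC(kernel="rbf")),            # the SVM classifier on the selected features
])
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc").mean()
print(f"10-fold cross-validated AUC: {auc:.3f}")
```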

Noninvasive identification of HER2 status by integrating multiparametric MRI-based radiomics model with the vesical imaging-reporting and data system (VI-RADS) score in bladder urothelial carcinoma.

Luo C, Li S, Han Y, Ling J, Wu X, Chen L, Wang D, Chen J

PubMed · Jul 1, 2025
HER2 expression is crucial for the application of HER2-targeted antibody-drug conjugates. This study aims to construct a predictive model by integrating multiparametric magnetic resonance imaging (mpMRI)-based multimodal radiomics and the Vesical Imaging-Reporting and Data System (VI-RADS) score for noninvasive identification of HER2 status in bladder urothelial carcinoma (BUC). A total of 197 patients were retrospectively enrolled and randomly divided into a training cohort (n = 145) and a testing cohort (n = 52). The multimodal radiomics features were derived from mpMRI, which was also utilized for VI-RADS score evaluation. The LASSO algorithm and six machine learning methods were applied for radiomics feature screening and model construction. The optimal radiomics model was selected to integrate with the VI-RADS score to predict HER2 status, which was determined by immunohistochemistry. The performance of the predictive model was evaluated by the receiver operating characteristic curve and area under the curve (AUC). Among the enrolled patients, 110 (55.8%) were HER2-positive and 87 (44.2%) were HER2-negative. Eight features were selected to establish the radiomics signature. The optimal radiomics signature achieved AUC values of 0.841 (95% CI 0.779-0.904) in the training cohort and 0.794 (95% CI 0.650-0.938) in the testing cohort. The KNN model was selected to evaluate the significance of the radiomics signature and VI-RADS score, which were integrated into a predictive nomogram. The AUC values for the nomogram in the training and testing cohorts were 0.889 (95% CI 0.840-0.938) and 0.826 (95% CI 0.702-0.950), respectively. Our study indicated that the predictive model based on the integration of mpMRI-based radiomics and the VI-RADS score could accurately predict HER2 status in BUC. The model might aid clinicians in tailoring individualized therapeutic strategies.
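
Combining a continuous radiomics signature with an ordinal VI-RADS score in a logistic-regression nomogram, and reporting AUC with a confidence interval, can be sketched as follows. The data are synthetic, the coefficients used to simulate labels are arbitrary, and the bootstrap CI is one common way to obtain an interval; none of this reproduces the study's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# toy stand-in data: a continuous radiomics signature plus an ordinal VI-RADS score (1-5)
rng = np.random.default_rng(0)
n = 197
rad_score = rng.normal(size=n)
vi_rads = rng.integers(1, 6, size=n)
y = (rng.random(n) < 1 / (1 + np.exp(-(0.8 * rad_score + 0.5 * (vi_rads - 3))))).astype(int)

X = np.column_stack([rad_score, vi_rads])
model = LogisticRegression().fit(X, y)          # the combined "nomogram"-style model
prob = model.predict_proba(X)[:, 1]
print(f"apparent AUC = {roc_auc_score(y, prob):.3f}")

# bootstrap 95% CI for the AUC, mirroring the interval reporting in the abstract
boots = [roc_auc_score(y[idx], prob[idx])
         for idx in (rng.integers(0, n, n) for _ in range(1000))]
print("95% CI:", np.percentile(boots, [2.5, 97.5]).round(3))
```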
