
Hybrid model integration with explainable AI for brain tumor diagnosis: a unified approach to MRI analysis and prediction.

Vamsidhar D, Desai P, Joshi S, Kolhar S, Deshpande N, Gite S

pubmed logopapers · Jul 1 2025
Effective treatment of brain tumors depends on accurate detection, and medical imaging plays a pivotal role in improving detection and diagnosis at an early stage. This study presents two approaches to the tumor detection problem in the healthcare domain. The first approach combines image processing, a vision transformer (ViT), and machine learning algorithms to analyze medical images. The second is a parallel model integration technique: two pre-trained deep learning models, ResNet101 and Xception, are integrated, and local interpretable model-agnostic explanations (LIME) are then applied to explain the model. The results showed an accuracy of 98.17% for the combination of the vision transformer, random forest, and contrast-limited adaptive histogram equalization, and 99.67% for the parallel model integration (ResNet101 and Xception). Based on these results, this paper proposes the deep learning approach, the parallel model integration technique, as the most effective method. Future work aims to extend the model to multi-class classification for tumor type detection and to improve generalization for broader applicability.
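LIME, as used above, explains an individual prediction by fitting a weighted linear surrogate to perturbed copies of the input. A minimal numpy sketch of that idea follows; the binary-mask perturbation scheme, exponential proximity kernel, and toy predictor are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def lime_like_explanation(predict_fn, x, n_samples=500, seed=0):
    """Fit a local weighted linear surrogate around x by toggling
    features off and regressing predictions on the on/off masks."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    masks = rng.integers(0, 2, size=(n_samples, d))   # 1 = keep feature
    preds = predict_fn(masks * x)                     # masked inputs
    # Proximity kernel: perturbations closer to the original weigh more.
    weights = np.exp(-np.sum(1 - masks, axis=1) / d)
    A = np.hstack([masks, np.ones((n_samples, 1))])   # add intercept
    W = A * weights[:, None]                          # weighted design
    # Weighted least squares via the normal equations.
    coef, *_ = np.linalg.lstsq(W.T @ A, W.T @ preds, rcond=None)
    return coef[:-1]                                  # local importances

# Toy "model": the prediction depends mostly on feature 0.
f = lambda X: 3.0 * X[:, 0] + 0.1 * X[:, 1]
importance = lime_like_explanation(f, np.array([1.0, 1.0, 1.0]))
```

In image settings like the one above, the binary mask would toggle superpixels rather than raw features.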

Transformer attention fusion for fine-grained medical image classification.

Badar D, Abbas J, Alsini R, Abbas T, ChengLiang W, Daud A

pubmed logopapers · Jul 1 2025
Fine-grained visual classification is fundamental for medical image applications because it must detect subtle lesions. Diabetic retinopathy (DR) is a preventable cause of blindness that requires exact and timely diagnosis to prevent vision damage. The challenges automated DR classification systems face include irregular lesions, uneven distributions between image classes, and inconsistent image quality, all of which reduce diagnostic accuracy during early detection. Our solution to these problems is MSCAS-Net (Multi-Scale Cross and Self-Attention Network), which uses the Swin Transformer as its backbone. It extracts features at three resolutions (12 × 12, 24 × 24, 48 × 48), allowing it to detect subtle local features as well as global structure. The model uses self-attention mechanisms to improve spatial connections within each scale and cross-attention to automatically match feature patterns across scales, thereby building a comprehensive information structure. This dual attention mechanism makes the model better at detecting significant lesions. MSCAS-Net achieves the best performance on the APTOS, DDR, and IDRID benchmarks, reaching accuracies of 93.80%, 89.80%, and 86.70%, respectively. Because it learns stable features, the model handles imbalanced datasets and inconsistent image quality without data augmentation. MSCAS-Net represents an advance in automated DR diagnostics, combining high diagnostic precision with interpretability to serve as an efficient AI-powered clinical decision support system. The presented research demonstrates how fine-grained visual classification methods benefit the detection and treatment of DR in its early stages.
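Cross-attention between two scales, as the abstract describes, lets tokens of one resolution query tokens of another. A single-head numpy sketch (token counts, dimensions, and the random inputs are arbitrary toy values, not the MSCAS-Net configuration):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d_k):
    """Tokens from one scale attend to tokens of another scale,
    letting fine-grained features borrow context from coarser ones."""
    Q, K, V = queries, keys_values, keys_values
    scores = Q @ K.T / np.sqrt(d_k)      # (n_q, n_kv) affinities
    return softmax(scores, axis=-1) @ V  # (n_q, d_k) fused features

rng = np.random.default_rng(0)
fine = rng.normal(size=(48, 64))    # e.g. 48 tokens from a fine scale
coarse = rng.normal(size=(12, 64))  # e.g. 12 tokens from a coarse scale
fused = cross_attention(fine, coarse, d_k=64)
```

Self-attention within a scale is the same computation with queries, keys, and values all drawn from the same token set.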

Muscle-driven prognostication in gastric cancer: a multicenter deep learning framework integrating iliopsoas and erector spinae radiomics for 5-year survival prediction.

Hong Y, Zhang P, Teng Z, Cheng K, Zhang Z, Cheng Y, Cao G, Chen B

pubmed logopapers · Jul 1 2025
This study developed a 5-year survival prediction model for gastric cancer patients by combining radiomics and deep learning, focusing on CT-based 2D and 3D features of the iliopsoas and erector spinae muscles. Retrospective data from 705 patients across two centers were analyzed, with clinical variables assessed via Cox regression and radiomic features extracted using deep learning. The 2D model outperformed the 3D approach, leading to feature fusion across five dimensions, optimized via logistic regression. Results showed no significant association between clinical baseline characteristics and survival, but the 2D model demonstrated strong prognostic performance (AUC ~ 0.8), with attention heatmaps emphasizing spinal muscle regions. The 3D model underperformed due to irrelevant data. The final integrated model achieved stable predictive accuracy, confirming the link between muscle mass and survival. This approach advances precision medicine by enabling personalized prognosis and exploring 3D imaging feasibility, offering insights for gastric cancer research.
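The fusion step described above, combining several feature dimensions with logistic regression, can be sketched with plain gradient descent; the synthetic "2D" and "3D" score columns below are stand-ins, not the study's radiomic features:

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Gradient-descent logistic regression used as a fusion layer
    over per-model risk scores."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        g = p - y                               # logistic-loss gradient
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# Toy fusion: column 0 is an informative "2D model" score,
# column 1 an uninformative "3D model" score.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200).astype(float)
X = np.column_stack([y + 0.3 * rng.normal(size=200),
                     rng.normal(size=200)])
w, b = fit_logistic(X, y)
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = ((p > 0.5) == (y == 1)).mean()
```

The fitted weights show the pattern the abstract reports: the informative stream dominates the fused prediction while the uninformative one receives a near-zero weight.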

Development and validation of an MRI spatiotemporal interaction model for early noninvasive prediction of neoadjuvant chemotherapy response in breast cancer: a multicentre study.

Tang W, Jin C, Kong Q, Liu C, Chen S, Ding S, Liu B, Feng Z, Li Y, Dai Y, Zhang L, Chen Y, Han X, Liu S, Chen D, Weng Z, Liu W, Wei X, Jiang X, Zhou Q, Mao N, Guo Y

pubmed logopapers · Jul 1 2025
The accurate and early evaluation of response to neoadjuvant chemotherapy (NAC) in breast cancer is crucial for optimizing treatment strategies and minimizing unnecessary interventions. While deep learning (DL)-based approaches have shown promise in medical imaging analysis, existing models often fail to comprehensively integrate spatial and temporal tumor dynamics. This study aims to develop and validate a spatiotemporal interaction (STI) model based on longitudinal MRI data to predict pathological complete response (pCR) to NAC in breast cancer patients. This study included retrospective and prospective datasets from five medical centers in China, collected from June 2018 to December 2024. These datasets were assigned to the primary cohort (including training and internal validation sets), external validation cohorts, and a prospective validation cohort. DCE-MRI scans from both pre-NAC (T0) and early-NAC (T1) stages were collected for each patient, along with surgical pathology results. A Siamese network-based STI model was developed, integrating spatial features from tumor segmentation with temporal dependencies using a transformer-based multi-head attention mechanism. This model was designed to simultaneously capture spatial heterogeneity and temporal dynamics, enabling accurate prediction of NAC response. The STI model's performance was evaluated using the area under the ROC curve (AUC) and Precision-Recall curve (AP), accuracy, sensitivity, and specificity. Additionally, the I-SPY1 and I-SPY2 datasets were used for Kaplan-Meier survival analysis and to explore the biological basis of the STI model, respectively. The prospective cohort was registered with the Chinese Clinical Trial Registry (ChiCTR2500102170). A total of 1044 patients were included in this study, with the pCR rate ranging from 23.8% to 35.9%. The STI model demonstrated good performance in early prediction of NAC response in breast cancer.
In the external validation cohorts, the AUC values were 0.923 (95% CI: 0.859-0.987), 0.892 (95% CI: 0.821-0.963), and 0.913 (95% CI: 0.835-0.991), all outperforming the single-timepoint T0 or T1 models, as well as models with spatial information added (all p < 0.05, Delong test). Additionally, the STI model significantly outperformed the clinical model (p < 0.05, Delong test) and radiologists' predictions. In the prospective validation cohort, the STI model identified 90.2% (37/41) of non-pCR and 82.6% (19/23) of pCR patients, reducing misclassification rates by 58.7% and 63.3% compared to radiologists. This indicates that these patients might benefit from treatment adjustment or continued therapy in the early NAC stage. Survival analysis showed a significant correlation between the STI model and both recurrence-free survival (RFS) and overall survival (OS) in breast cancer patients. Further investigation revealed that favorable NAC responses predicted by the STI model were closely linked to upregulated immune-related genes and enhanced immune cell infiltration. Our study established a novel noninvasive STI model that integrates the spatiotemporal evolution of MRI before and during NAC to achieve early and accurate pCR prediction, offering potential guidance for personalized treatment. This study was supported by the National Natural Science Foundation of China (82302314, 62271448, 82171920, 81901711), Basic and Applied Basic Research Foundation of Guangdong Province (2022A1515110792, 2023A1515220097, 2024A1515010653), Medical Scientific Research Foundation of Guangdong Province (A2023073, A2024116), Science and Technology Projects in Guangzhou (2023A04J1275, 2024A03J1030, 2025A03J4163, 2025A03J4162); Guangzhou First People's Hospital Frontier Medical Technology Project (QY-C04).
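The two architectural ideas the abstract names, a shared (Siamese) encoder applied to both timepoints and attention across their tokens, can be sketched as follows; the single-head attention, dimensions, and mean-pooling are simplifying assumptions, not the published model:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def siamese_sti(feat_t0, feat_t1, W_enc, d_k=32):
    """Encode pre-NAC (T0) and early-NAC (T1) features with SHARED
    weights (the Siamese part), then let every token attend to every
    other so spatial and temporal interactions are modelled jointly."""
    tokens = np.vstack([feat_t0 @ W_enc, feat_t1 @ W_enc])  # shared encoder
    attn = softmax(tokens @ tokens.T / np.sqrt(d_k), axis=-1)
    return (attn @ tokens).mean(axis=0)  # pooled spatiotemporal vector

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 32)) / np.sqrt(128)   # shared encoder weights
rep = siamese_sti(rng.normal(size=(16, 128)),   # 16 T0 region tokens
                  rng.normal(size=(16, 128)),   # 16 T1 region tokens
                  W)
```

A classification head over `rep` would then produce the pCR probability; sharing `W_enc` across timepoints is what makes T0-to-T1 change, rather than absolute appearance, the dominant signal.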

Ultrasound-based classification of follicular thyroid cancer using deep convolutional neural networks with transfer learning.

Agyekum EA, Yuzhi Z, Fang Y, Agyekum DN, Wang X, Issaka E, Li C, Shen X, Qian X, Wu X

pubmed logopapers · Jul 1 2025
This study aimed to develop and validate convolutional neural network (CNN) models for distinguishing follicular thyroid carcinoma (FTC) from follicular thyroid adenoma (FTA). It also compared the performance of the CNN models with the American College of Radiology Thyroid Imaging Reporting and Data System (ACR-TIRADS) and Chinese Thyroid Imaging Reporting and Data System (C-TIRADS) ultrasound-based malignancy risk stratification systems. A total of 327 eligible patients with FTC or FTA who underwent preoperative thyroid ultrasound examination were retrospectively enrolled between August 2017 and August 2024. Patients were randomly assigned to a training cohort (n = 263) and a test cohort (n = 64) in an 8:2 ratio using stratified sampling. Five CNN models (VGG16, ResNet101, MobileNetV2, ResNet152, and ResNet50), all pre-trained on ImageNet, were developed and tested to distinguish FTC from FTA. The CNN models exhibited good performance, yielding areas under the receiver operating characteristic curve (AUC) ranging from 0.64 to 0.77. The ResNet152 model demonstrated the highest AUC (0.77; 95% CI, 0.67-0.87) for distinguishing between FTC and FTA. Decision curve and calibration curve analyses demonstrated the models' favorable clinical value and calibration. Furthermore, the developed models outperformed both the C-TIRADS and ACR-TIRADS systems. These findings can potentially guide appropriate management of FTC in patients with follicular neoplasms.
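Stratified 8:2 sampling of the kind described above keeps class proportions equal in the training and test cohorts. A small numpy sketch; the benign/malignant label counts are made-up toy numbers, chosen only to total 327:

```python
import numpy as np

def stratified_split(labels, test_frac=0.2, seed=0):
    """Split indices 8:2 while preserving per-class proportions."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)   # all indices of class c
        rng.shuffle(idx)
        n_test = int(round(test_frac * len(idx)))
        test_idx.extend(idx[:n_test])       # 20% of each class to test
        train_idx.extend(idx[n_test:])      # remaining 80% to train
    return np.array(train_idx), np.array(test_idx)

# Toy cohort of 327 patients with an imbalanced binary label.
labels = np.array([0] * 230 + [1] * 97)
train, test = stratified_split(labels)
```

Without stratification, a rare class can be badly under-represented in a small test cohort, which distorts AUC and calibration estimates.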

Anterior cruciate ligament tear detection based on Res2Net modified by improved Lévy flight distribution.

Yang P, Liu Y, Liu F, Han M, Abdi Y

pubmed logopapers · Jul 1 2025
Anterior Cruciate Ligament (ACL) tears are common in sports and can cause significant health issues; accurate diagnosis is therefore important for early and proper treatment. However, traditional diagnostic methods, such as clinical assessments and MRI, have limitations in accuracy and efficiency. This study introduces a new diagnostic approach that combines the deep learning architecture Res2Net with an improved version of the Lévy flight distribution (ILFD) to improve the detection of ACL tears in knee MRI images. The Res2Net model is known for its ability to extract important features and classify them effectively, and optimizing the model with the ILFD algorithm greatly improves diagnostic efficiency. To validate the proposed model's efficiency, it was applied to two standard datasets, from Stanford University Medical Center and Clinical Hospital Centre Rijeka. Comparative analysis with existing diagnostic methods, including the 14-layer ResNet-14, a Compact Parallel Deep Convolutional Neural Network (CPDCNN), a Convolutional Neural Network (CNN), a Generative Adversarial Network (GAN), and a CNN combined with the Modified Golden Search Algorithm (CNN/MGSA), shows that the proposed Res2Net/ILFD model performs better on various metrics, including precision, recall, accuracy, F1-score, specificity, and the Matthews correlation coefficient.
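The paper does not spell out its ILFD recipe, but Lévy-flight metaheuristics typically draw their steps with Mantegna's algorithm, mixing many short local moves with occasional long jumps; a numpy sketch:

```python
import numpy as np
from math import gamma, pi, sin

def levy_steps(n, beta=1.5, seed=0):
    """Mantegna's algorithm: heavy-tailed, Levy-distributed step sizes,
    the core move of Levy-flight-based optimizers."""
    rng = np.random.default_rng(seed)
    # Scale for the numerator Gaussian, from Mantegna's formula.
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, n)
    v = rng.normal(0.0, 1.0, n)
    return u / np.abs(v) ** (1 / beta)   # heavy-tailed ratio

steps = levy_steps(10000)
```

In an optimizer, each candidate solution (here, Res2Net hyperparameters or weights) would be perturbed by such a step each iteration; the rare long jumps help escape local optima.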

Lessons learned from RadiologyNET foundation models for transfer learning in medical radiology.

Napravnik M, Hržić F, Urschler M, Miletić D, Štajduhar I

pubmed logopapers · Jul 1 2025
Deep learning models require large amounts of annotated data, which are hard to obtain in the medical field because the annotation process is laborious and depends on expert knowledge. This data scarcity hinders a model's ability to generalise to unseen data, and foundation models pretrained on large datasets have recently been proposed as a promising solution. RadiologyNET is a custom medical dataset comprising 1,902,414 medical images covering various body parts and image acquisition modalities. We used the RadiologyNET dataset to pretrain several popular architectures (ResNet18, ResNet34, ResNet50, VGG16, EfficientNetB3, EfficientNetB4, InceptionV3, DenseNet121, MobileNetV3Small and MobileNetV3Large). We compared the performance of ImageNet and RadiologyNET foundation models against training from randomly initialised weights on several publicly available medical datasets: (i) segmentation (LUng Nodule Analysis challenge), (ii) regression (RSNA Pediatric Bone Age Challenge), (iii) binary classification (GRAZPEDWRI-DX and COVID-19 datasets), and (iv) multiclass classification (Brain Tumor MRI dataset). Our results indicate that RadiologyNET-pretrained models generally perform similarly to ImageNet models, with some advantages in resource-limited settings; however, ImageNet-pretrained models showed competitive performance when fine-tuned on sufficient data. We also tested the impact of modality diversity on model performance, with results varying across tasks, highlighting the importance of aligning pretraining data with downstream applications. Based on our findings, we provide guidelines for using foundation models in medical applications and publicly release our RadiologyNET-pretrained models to support further research and development in the field. The models are available at https://github.com/AIlab-RITEH/RadiologyNET-TL-models .
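One common way to reuse such pretrained weights downstream is a linear probe: freeze the backbone and train only a small head. A numpy sketch in which a fixed random ReLU projection stands in for a frozen backbone (a simplifying assumption for illustration, not one of the RadiologyNET models):

```python
import numpy as np

def linear_probe(features, labels, lr=0.1, steps=500):
    """Train only a logistic head on frozen backbone features."""
    w = np.zeros(features.shape[1]); b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))
        g = p - labels                      # logistic-loss gradient
        w -= lr * features.T @ g / len(labels)
        b -= lr * g.mean()
    return w, b

rng = np.random.default_rng(0)
backbone = rng.normal(size=(64, 16)) / 8.0   # "pretrained", frozen weights
x = rng.normal(size=(300, 64))               # toy input batch
feats = np.maximum(x @ backbone, 0.0)        # frozen forward pass (ReLU)
true_w = rng.normal(size=16)                 # toy ground-truth direction
y = (feats @ true_w > np.median(feats @ true_w)).astype(float)
w, b = linear_probe(feats, y)
acc = ((1.0 / (1.0 + np.exp(-(feats @ w + b))) > 0.5) == (y == 1)).mean()
```

Full fine-tuning, the other regime compared in the study, would instead update `backbone` as well, which typically needs more data but can adapt the features themselves.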

Prediction of axillary lymph node metastasis in triple negative breast cancer using MRI radiomics and clinical features.

Shen Y, Huang R, Zhang Y, Zhu J, Li Y

pubmed logopapers · Jul 1 2025
To develop and validate a machine learning-based model to predict axillary lymph node (ALN) metastasis in triple negative breast cancer (TNBC) patients using magnetic resonance imaging (MRI) and clinical characteristics. This retrospective study included TNBC patients from the First Affiliated Hospital of Soochow University and Jiangsu Province Hospital (2016-2023). We analyzed clinical characteristics and radiomic features from T2-weighted MRI. Using LASSO regression for feature selection, we applied Logistic Regression (LR), Random Forest (RF), and Support Vector Machine (SVM) classifiers to build prediction models. A total of 163 patients, with a median age of 53 years (range: 24-73), were divided into a training group (n = 115) and a validation group (n = 48). Among them, 54 (33.13%) had ALN metastasis and 109 (66.87%) did not. Nottingham grade (P = 0.005) and tumor size (P = 0.016) differed significantly between metastatic and non-metastatic cases. In the validation set, the LR-based combined model achieved the highest AUC (0.828, 95% CI: 0.706-0.950) with excellent sensitivity (0.813) and accuracy (0.812). Although the RF-based model had the highest AUC in the training set and the highest specificity (0.906) in the validation set, its performance was less consistent than that of the LR model. MRI-T2WI radiomic features can predict ALN metastasis in TNBC, and integrating them into clinical models enhances preoperative prediction and supports personalized management.
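LASSO feature selection of the kind used above can be sketched with iterative soft-thresholding (ISTA): features whose weights survive the L1 penalty are the selected ones. The synthetic design matrix below is illustrative, not the study's radiomic data:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam=5.0, steps=2000):
    """ISTA for min_w 0.5*||Xw - y||^2 + lam*||w||_1; nonzero
    weights at convergence are the selected features."""
    lr = 1.0 / np.linalg.norm(X, 2) ** 2  # step from Lipschitz bound
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w = soft_threshold(w - lr * X.T @ (X @ w - y), lam * lr)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 20))       # 150 "patients", 20 candidate features
true_w = np.zeros(20)
true_w[0], true_w[3] = 2.0, -1.5     # only two informative features
y = X @ true_w + 0.1 * rng.normal(size=150)
w = lasso_ista(X, y)
selected = np.flatnonzero(np.abs(w) > 1e-6)
```

The surviving feature subset would then feed the downstream LR, RF, or SVM classifiers, as in the pipeline described above.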

Knowledge mapping of ultrasound technology and triple-negative breast cancer: a visual and bibliometric analysis.

Wan Y, Shen Y, Wang J, Zhang T, Fu X

pubmed logopapers · Jul 1 2025
This study explores the application of ultrasound technology in triple-negative breast cancer (TNBC) using bibliometric methods. It presents a visual knowledge map to exhibit global research dynamics and elucidates the research directions, hotspots, trends, and frontiers in this field. The Web of Science Core Collection database was used, and CiteSpace and VOSviewer software were employed to visualize the annual publication volume, collaborative networks (including countries, institutions, and authors), citation characteristics (such as references, co-citations, and publications), and keywords (including emergence and clustering) related to ultrasound applications in TNBC over the past 15 years. A total of 310 papers were included. The first paper was published in 2010, after which publication output grew rapidly, especially after 2020. China was the leading country by publication volume, and Shanghai Jiao Tong University had the highest output among institutions; Memorial Sloan Kettering Cancer Center was recognized as a key research institution in this domain. Adrada BE was the most prolific author by publication count, and Ko ES had the highest citation frequency among authors. Co-occurrence analysis revealed that the top three keywords by frequency were "triple-negative breast cancer," "breast cancer," and "sonography." The timeline visualization indicated strong temporal continuity in the clusters "breast cancer," "recommendations," "biopsy," "estrogen receptor," and "radiomics." The keyword with the highest emergence value was "neoplasms" (6.80). Trend analysis of emerging terms indicated a growing focus on "machine learning approaches," "prognosis," and "molecular subtypes," with "machine learning approach" currently emerging as a significant keyword. This study provides a systematic analysis of the current state of ultrasound technology applications in TNBC.
It highlighted that "machine learning methods" have emerged as a central focus and frontier in this research area, both presently and for the foreseeable future. The findings offer valuable theoretical insights for the application of ultrasound technology in TNBC diagnosis and treatment and establish a solid foundation for further advancements in medical imaging research related to TNBC.
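Keyword co-occurrence analysis of the kind VOSviewer performs reduces to counting how often keyword pairs appear together across papers. A small Python sketch with made-up keyword lists:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(keyword_lists):
    """Count how often each keyword pair appears in the same paper --
    the raw matrix behind co-occurrence knowledge maps."""
    pairs = Counter()
    for kws in keyword_lists:
        # Sort so each unordered pair has one canonical key.
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

papers = [
    ["triple-negative breast cancer", "sonography", "radiomics"],
    ["triple-negative breast cancer", "breast cancer", "sonography"],
    ["breast cancer", "machine learning", "sonography"],
]
counts = cooccurrence(papers)
```

Clustering and timeline layouts are then computed on top of this pair-count matrix.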

FPGA implementation of deep learning architecture for ankylosing spondylitis detection from MRI.

Kocaoğlu S

pubmed logopapers · Jul 1 2025
Ankylosing Spondylitis (AS), also known as Bechterew's disease, is a complex, potentially disabling disease that develops slowly over time and progresses to radiographic sacroiliitis. Its etiology is poorly understood, which makes the disease difficult to diagnose and delays treatment. This study aims to diagnose AS with an automated system that classifies axial magnetic resonance imaging (MRI) sequences of AS patients. Recently, the application of deep learning neural networks (DLNNs) to MRI classification has become widespread. Implementing this process on standalone end devices is advantageous because of their high computational power and low latency. In this research, an MRI dataset containing images from 527 individuals was used. A deep learning architecture was implemented and analyzed on a Field Programmable Gate Array (FPGA) card. The results show that FPGA-based classification for AS diagnosis achieves accuracy close to that of CPU-based classification.
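The paper does not detail its FPGA pipeline, but a typical first step when deploying trained weights to fixed-point hardware is post-training int8 quantisation; a generic numpy sketch, not the author's implementation:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantisation of a weight matrix."""
    scale = np.abs(w).max() / 127.0          # map max |w| to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)  # toy trained layer
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).max()  # worst-case rounding error
```

On the FPGA, the int8 matrix multiplies run in cheap integer DSP blocks, and the single `scale` factor is folded back in after accumulation.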