Page 159 of 1701699 results

Paradigm-Shifting Attention-based Hybrid View Learning for Enhanced Mammography Breast Cancer Classification with Multi-Scale and Multi-View Fusion.

Zhao H, Zhang C, Wang F, Li Z, Gao S

pubmed · May 12, 2025
Breast cancer poses a serious threat to women's health, and its early detection is crucial for enhancing patient survival rates. While deep learning has significantly advanced mammographic image analysis, existing methods struggle to balance view consistency with input adaptability. Furthermore, current models face challenges in accurately capturing multi-scale features, especially when subtle lesion variations across different scales are involved. To address these challenges, this paper proposes a Hybrid View Learning (HVL) paradigm that unifies traditional Single-View and Multi-View Learning approaches. The core component of this paradigm, our Attention-based Hybrid View Learning (AHVL) framework, incorporates two essential attention mechanisms: Contrastive Switch Attention (CSA) and Selective Pooling Attention (SPA). The CSA mechanism flexibly alternates between self-attention and cross-attention based on data integrity, integrating a pre-trained language model for contrastive learning to enhance model stability. Meanwhile, the SPA module employs multi-scale feature pooling and selection to capture critical features from mammographic images, overcoming the limitations of traditional models that struggle with fine-grained lesion detection. Experimental validation on the INbreast and CBIS-DDSM datasets shows that the AHVL framework outperforms both single-view and multi-view methods, especially under extreme view-missing conditions. Even with an 80% missing rate on both datasets, AHVL maintains the highest accuracy and experiences the smallest performance decline in metrics such as F1 score and AUC-PR, demonstrating its robustness and stability. This study redefines mammographic image analysis by leveraging attention-based hybrid view processing, setting a new standard for precise and efficient breast cancer diagnosis.
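The CSA idea of switching between self-attention and cross-attention depending on whether a second view is available can be sketched as follows. This is a hedged illustration only; `switch_attention`, the function names, and the token shapes are assumptions, not the authors' implementation.

```python
import numpy as np

def scaled_dot_attention(q, k, v):
    """Standard scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def switch_attention(view_a, view_b=None):
    """Self-attention on view_a when view_b is missing; cross-attention
    (queries from view_a, keys/values from view_b) when both views exist."""
    if view_b is None:
        return scaled_dot_attention(view_a, view_a, view_a)  # self-attention
    return scaled_dot_attention(view_a, view_b, view_b)      # cross-attention

rng = np.random.default_rng(0)
cc = rng.normal(size=(16, 64))   # hypothetical craniocaudal-view tokens
mlo = rng.normal(size=(16, 64))  # hypothetical mediolateral-oblique tokens
out_full = switch_attention(cc, mlo)     # both views available
out_single = switch_attention(cc)        # second view missing
print(out_full.shape, out_single.shape)  # (16, 64) (16, 64)
```

Either branch yields a fixed-shape output, which is what lets a hybrid-view model accept complete or incomplete view sets with the same downstream head.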

Artificial intelligence-assisted diagnosis of early allograft dysfunction based on ultrasound image and data.

Meng Y, Wang M, Niu N, Zhang H, Yang J, Zhang G, Liu J, Tang Y, Wang K

pubmed · May 12, 2025
Early allograft dysfunction (EAD) significantly affects liver transplantation prognosis. This study evaluated the effectiveness of artificial intelligence (AI)-assisted methods in accurately diagnosing EAD and identifying its causes. The primary metric for assessing the accuracy was the area under the receiver operating characteristic curve (AUC). Accuracy, sensitivity, and specificity were calculated and analyzed to compare the performance of the AI models with each other and with radiologists. EAD classification followed the criteria established by Olthoff et al. A total of 582 liver transplant patients who underwent transplantation between December 2012 and June 2021 were selected. Among these, 117 patients (mean age 33.5 ± 26.5 years, 80 men) were evaluated. The ultrasound parameters, images, and clinical information of patients were extracted from the database to train the AI model. The AUC for the ultrasound-spectrogram fusion network constructed from four ultrasound images and medical data was 0.968 (95%CI: 0.940, 0.991), outperforming radiologists by 30% for all metrics. AI assistance significantly improved diagnostic accuracy, sensitivity, and specificity (P < 0.050) for both experienced and less-experienced physicians. EAD lacks efficient diagnosis and causation analysis methods. The integration of AI and ultrasound enhances diagnostic accuracy and causation analysis. By modeling only images and data related to blood flow, the AI model effectively analyzed patients with EAD caused by abnormal blood supply. Our model can assist radiologists in reducing judgment discrepancies, potentially benefitting patients with EAD in underdeveloped regions. Furthermore, it enables targeted treatment for those with abnormal blood supply.
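The AUC used as the primary metric here equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A minimal sketch with hypothetical scores (not data from the study):

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney rank statistic: the fraction of
    positive/negative pairs the positive case wins (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical EAD-vs-no-EAD scores for six patients
y_true = [1, 1, 1, 0, 0, 0]
y_score = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(roc_auc(y_true, y_score))  # 0.8888888888888888
```

A perfect ranking yields 1.0 and a random one about 0.5, which is why the reported 0.968 indicates near-perfect separation of EAD from non-EAD cases.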

New developments in imaging in ALS.

Kleinerova J, Querin G, Pradat PF, Siah WF, Bede P

pubmed · May 12, 2025
Neuroimaging in ALS has contributed considerable academic insights in recent years, demonstrating genotype-specific topological changes decades before phenoconversion and characterising longitudinal propagation patterns in specific phenotypes. It has elucidated the radiological underpinnings of specific clinical phenomena such as pseudobulbar affect, apathy, behavioural change, spasticity, and language deficits. Academic concepts such as sexual dimorphism, motor reserve, cognitive reserve, adaptive changes, connectivity-based propagation, pathological stages, and compensatory mechanisms have also been evaluated by imaging. The underpinnings of extra-motor manifestations such as cerebellar, sensory, extrapyramidal and cognitive symptoms have been studied by purpose-designed imaging protocols. Clustering approaches have been implemented to uncover radiologically distinct disease subtypes, and machine-learning models have been piloted to accurately classify individual patients into relevant diagnostic, phenotypic, and prognostic categories. Prediction models have been developed for survival in symptomatic patients and phenoconversion in asymptomatic mutation carriers. A range of novel imaging modalities have been implemented, and 7 Tesla MRI platforms are increasingly being used in ALS studies. Non-ALS MND conditions, such as PLS, SBMA, and SMA, are now also increasingly studied by quantitative neuroimaging approaches. A unifying theme of recent imaging papers is the departure from describing focal brain changes towards characterising dynamic structural and functional connectivity alterations. Progressive cortico-cortical, cortico-basal, cortico-cerebellar, cortico-bulbar, and cortico-spinal disconnection has been consistently demonstrated by recent studies and recognised as the primary driver of clinical decline. These studies have led to the reconceptualisation of ALS as a "network" or "circuitry" disease.

Cardiac imaging for the detection of ischemia: current status and future perspectives.

Rodriguez C, Pappas L, Le Hong Q, Baquero L, Nagel E

pubmed · May 12, 2025
Coronary artery disease is the main cause of mortality worldwide, mandating early detection, appropriate treatment, and follow-up. Noninvasive cardiac imaging techniques allow detection of obstructive coronary heart disease by direct visualization of the arteries or of myocardial blood flow reduction. These techniques have made remarkable progress since their introduction, achieving high diagnostic precision. This review aims to evaluate these noninvasive cardiac imaging techniques, providing a thorough overview of diagnostic decision-making for the detection of ischemia. We discuss the latest advances in the field, such as computed tomography angiography, single-photon emission tomography, positron emission tomography, and cardiac magnetic resonance; their main advantages and disadvantages; and their most appropriate uses and prospects. For the review, we analyzed the literature from 2009 to 2024 on noninvasive cardiac imaging in the diagnosis of coronary artery disease. The review included the 78 publications considered most relevant, including landmark trials, review articles, and guidelines. The progress in cardiac imaging is anticipated to overcome various limitations such as high costs, radiation exposure, artifacts, and differences in interpretation among observers. It is expected to lead to more automated scanning processes, and, with the assistance of artificial intelligence-driven post-processing software, higher accuracy and reproducibility may be attained.

Multi-Plane Vision Transformer for Hemorrhage Classification Using Axial and Sagittal MRI Data

Badhan Kumar Das, Gengyan Zhao, Boris Mailhe, Thomas J. Re, Dorin Comaniciu, Eli Gibson, Andreas Maier

arxiv preprint · May 12, 2025
Identifying brain hemorrhages from magnetic resonance imaging (MRI) is a critical task for healthcare professionals. The diverse nature of MRI acquisitions, with varying contrasts and orientations, introduces complexity in identifying hemorrhage using neural networks. For acquisitions with varying orientations, traditional methods often involve resampling images to a fixed plane, which can lead to information loss. To address this, we propose a 3D multi-plane vision transformer (MP-ViT) for hemorrhage classification with varying-orientation data. It employs two separate transformer encoders for axial and sagittal contrasts, using cross-attention to integrate information across orientations. MP-ViT also includes a modality indication vector to provide missing-contrast information to the model. The effectiveness of the proposed model is demonstrated through extensive experiments on a real-world clinical dataset consisting of 10,084 training, 1,289 validation, and 1,496 test subjects. MP-ViT achieved a substantial improvement in area under the curve (AUC), outperforming the vision transformer (ViT) by 5.5% and CNN-based architectures by 1.8%. These results highlight the potential of MP-ViT to improve hemorrhage detection when different orientation contrasts are needed.
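The modality indication vector for missing contrasts can be sketched as below. The mean-pooling `encode` is only a stand-in for a transformer encoder, and all names and sizes are assumptions for illustration, not the authors' MP-ViT.

```python
import numpy as np

EMB = 32  # assumed per-orientation embedding size

def encode(tokens):
    # Stand-in for a transformer encoder: mean-pool the token embeddings.
    return tokens.mean(axis=0)

def fuse_with_modality_vector(axial=None, sagittal=None):
    """Concatenate per-orientation embeddings with a binary modality
    indication vector; a missing orientation is zero-filled so the
    downstream head always receives a fixed-size input."""
    present = np.array([axial is not None, sagittal is not None], dtype=float)
    ax = encode(axial) if axial is not None else np.zeros(EMB)
    sg = encode(sagittal) if sagittal is not None else np.zeros(EMB)
    return np.concatenate([ax, sg, present])

rng = np.random.default_rng(1)
ax_tokens = rng.normal(size=(10, EMB))
feat = fuse_with_modality_vector(axial=ax_tokens)  # sagittal missing
print(feat.shape)  # (66,)
print(feat[-2:])   # modality indicator: [1. 0.]
```

The indicator lets the model distinguish "sequence absent" from "sequence present but uninformative", rather than silently treating zeros as image content.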

Identification of HER2-over-expression, HER2-low-expression, and HER2-zero-expression statuses in breast cancer based on ¹⁸F-FDG PET/CT radiomics.

Hou X, Chen K, Luo H, Xu W, Li X

pubmed · May 12, 2025
According to the updated classification system, human epidermal growth factor receptor 2 (HER2) expression statuses are divided into three groups: HER2-over-expression, HER2-low-expression, and HER2-zero-expression; HER2-negative expression was reclassified into HER2-low-expression and HER2-zero-expression. This study aimed to distinguish the three HER2 expression statuses in breast cancer (BC) patients using PET/CT radiomics and clinicopathological characteristics. A total of 315 BC patients who met the inclusion and exclusion criteria at two institutions were retrospectively included. Patients from institution 1 were split into a training set and an independent validation set at a ratio of 7:3, and institution 2 served as the external validation set. Based on the pathological examination results, all BC patients were assigned to the HER2-over-expression, HER2-low-expression, or HER2-zero-expression group. First, PET/CT radiomic features and clinicopathological features were extracted for each patient. Second, multiple methods were used for feature screening and selection. Then, four machine learning classifiers, including logistic regression (LR), k-nearest neighbor (KNN), support vector machine (SVM), and random forest (RF), were constructed to identify HER2-over-expression vs. others, HER2-low-expression vs. others, and HER2-zero-expression vs. others. The receiver operating characteristic (ROC) curve was plotted to measure each model's predictive power. After feature screening, 8, 10, and 2 radiomics features and 2 clinicopathological features were selected to construct the three prediction models. For HER2-over-expression vs. others, the RF model outperformed the other models, with AUC values of 0.843 (95% CI: 0.774-0.897), 0.785 (95% CI: 0.665-0.877), and 0.788 (95% CI: 0.708-0.868) in the training, independent validation, and external validation sets, respectively. For HER2-low-expression vs. others, the LR model outperformed the other models, with AUC values of 0.783 (95% CI: 0.708-0.846), 0.756 (95% CI: 0.634-0.854), and 0.779 (95% CI: 0.698-0.860), respectively. Meanwhile, the KNN model was the optimal model for distinguishing HER2-zero-expression from others, with AUC values of 0.929 (95% CI: 0.890-0.958), 0.847 (95% CI: 0.764-0.910), and 0.835 (95% CI: 0.762-0.908), respectively. Combined PET/CT radiomic models integrating clinicopathological characteristics can non-invasively predict the different HER2 statuses of BC patients.
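The three one-vs-rest tasks above amount to a simple label binarisation step; a sketch with a hypothetical cohort, not the study's data:

```python
def one_vs_rest_labels(labels, positive_class):
    """Binarise a three-way HER2 status into class-vs-others labels,
    matching the three binary tasks described above."""
    return [1 if y == positive_class else 0 for y in labels]

statuses = ["over", "low", "zero", "low", "over", "zero"]  # hypothetical
tasks = {c: one_vs_rest_labels(statuses, c) for c in ("over", "low", "zero")}
print(tasks["over"])  # [1, 0, 0, 0, 1, 0]
# Each binary task would then be fitted with LR, KNN, SVM, and RF,
# and the best model per task selected by validation AUC.
```

Because every patient is positive in exactly one of the three tasks, the binary label vectors sum to one at each position, which is a quick sanity check after binarisation.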

Two-Stage Automatic Liver Classification System Based on Deep Learning Approach Using CT Images.

Kılıç R, Yalçın A, Alper F, Oral EA, Ozbek IY

pubmed · May 12, 2025
Alveolar echinococcosis (AE) is a parasitic disease caused by Echinococcus multilocularis, for which early detection is crucial for effective treatment. This study introduces a novel method for the early diagnosis of liver diseases by differentiating between tumor, AE, and healthy cases using non-contrast CT images, which are widely accessible and eliminate the risks associated with contrast agents. The proposed approach integrates an automatic liver region detection method based on RCNN followed by a CNN-based classification framework. A dataset comprising over 27,000 thorax-abdominal images from 233 patients, including 8206 images with liver tissue, was constructed and used to evaluate the proposed method. The experimental results demonstrate the importance of the two-stage classification approach. For the 2-class classification problem (healthy vs. non-healthy), an accuracy of 0.936 (95% CI: 0.925-0.947) was obtained, and for the 3-class problem (AE, tumor, and healthy), the accuracy was 0.863 (95% CI: 0.847-0.879). These results highlight the potential of the proposed framework as a fully automatic approach to liver classification without the use of contrast agents. Furthermore, the framework demonstrates competitive performance compared with other state-of-the-art techniques, suggesting its applicability in clinical practice.
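Confidence intervals of the kind quoted above can be obtained with the normal approximation for a proportion. A sketch with a hypothetical test-set size `n` (the study's actual per-split count is not stated here, so the printed interval is illustrative only):

```python
import math

def accuracy_ci(p_hat, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion.
    p_hat: observed accuracy; n: number of evaluated samples (hypothetical)."""
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - half, p_hat + half)

lo, hi = accuracy_ci(0.936, 2000)  # n = 2000 is an assumed size
print(f"{lo:.3f}-{hi:.3f}")        # 0.925-0.947
```

For proportions near 0 or 1, or for small `n`, a Wilson interval is usually preferred over this simple approximation.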

Biological markers and psychosocial factors predict chronic pain conditions.

Fillingim M, Tanguay-Sabourin C, Parisien M, Zare A, Guglietti GV, Norman J, Petre B, Bortsov A, Ware M, Perez J, Roy M, Diatchenko L, Vachon-Presseau E

pubmed · May 12, 2025
Chronic pain is a multifactorial condition presenting significant diagnostic and prognostic challenges. Biomarkers for the classification and the prediction of chronic pain are therefore critically needed. Here, in this multidataset study of over 523,000 participants, we applied machine learning to multidimensional biological data from the UK Biobank to identify biomarkers for 35 medical conditions associated with pain (for example, rheumatoid arthritis and gout) or self-reported chronic pain (for example, back pain and knee pain). Biomarkers derived from blood immunoassays, brain and bone imaging, and genetics were effective in predicting medical conditions associated with chronic pain (area under the curve (AUC) 0.62-0.87) but not self-reported pain (AUC 0.50-0.62). Notably, all biomarkers worked in synergy with psychosocial factors, accurately predicting both medical conditions (AUC 0.69-0.91) and self-reported pain (AUC 0.71-0.92). These findings underscore the necessity of adopting a holistic approach in the development of biomarkers to enhance their clinical utility.

Preoperative prediction of malignant transformation in sinonasal inverted papilloma: a novel MRI-based deep learning approach.

Ding C, Wen B, Han Q, Hu N, Kang Y, Wang Y, Wang C, Zhang L, Xian J

pubmed · May 12, 2025
To develop a novel MRI-based deep learning (DL) diagnostic model, utilizing multicenter large-sample data, for the preoperative differentiation of sinonasal inverted papilloma (SIP) from SIP-transformed squamous cell carcinoma (SIP-SCC). This study included 568 patients from four centers with confirmed SIP (n = 421) and SIP-SCC (n = 147). Deep learning models were built using T1WI, T2WI, and CE-T1WI. A combined model was constructed by integrating these features through an attention mechanism. The diagnostic performance of radiologists, both with and without the model's assistance, was compared. Model performance was evaluated through receiver operating characteristic (ROC) analysis, calibration curves, and decision curve analysis (DCA). The combined model demonstrated superior performance in differentiating SIP from SIP-SCC, achieving AUCs of 0.954, 0.897, and 0.859 in the training, internal validation, and external validation cohorts, respectively. It showed optimal accuracy, stability, and clinical benefit, as confirmed by Brier scores and calibration curves. The diagnostic performance of radiologists, especially less experienced ones, was significantly improved with model assistance. The MRI-based deep learning model enhances the ability to predict malignant transformation of sinonasal inverted papilloma before surgery. By facilitating earlier diagnosis and promoting timely pathological examination or surgical intervention, this approach holds the potential to improve patient prognosis.
Question: Sinonasal inverted papilloma (SIP) is prone to local malignant transformation, leading to poor prognosis; current diagnostic methods are invasive and inaccurate, necessitating effective preoperative differentiation.
Findings: The MRI-based deep learning model accurately diagnoses malignant transformation of SIP, enabling junior radiologists to achieve greater clinical benefit with the model's assistance.
Clinical relevance: A novel MRI-based deep learning model enhances preoperative diagnosis of malignant transformation in sinonasal inverted papilloma, providing a non-invasive tool for personalized treatment planning.
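The attention-mechanism fusion of T1WI, T2WI, and CE-T1WI features described above can be sketched as soft attention over per-sequence feature vectors. `attention_fuse` and the random weight vector are stand-ins for a trained scorer, not the authors' model:

```python
import numpy as np

def attention_fuse(features, w):
    """Attention-style fusion: score each sequence's feature vector
    against w, softmax the scores, and return the weighted sum."""
    scores = features @ w                   # one scalar score per sequence
    a = np.exp(scores - scores.max())       # numerically stable softmax
    a /= a.sum()
    return a @ features, a

rng = np.random.default_rng(2)
seq_feats = rng.normal(size=(3, 128))  # T1WI, T2WI, CE-T1WI feature vectors
w = rng.normal(size=128)               # stand-in for learned attention weights
fused, attn = attention_fuse(seq_feats, w)
print(fused.shape)  # (128,)
```

Compared with plain concatenation, this keeps the fused representation the same size as a single sequence's features while letting the model learn which sequence to trust per case.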

Enhancing noninvasive pancreatic cystic neoplasm diagnosis with multimodal machine learning.

Huang W, Xu Y, Li Z, Li J, Chen Q, Huang Q, Wu Y, Chen H

pubmed · May 12, 2025
Pancreatic cystic neoplasms (PCNs) are a complex group of lesions with a spectrum of malignancy. Accurate differentiation of PCN types is crucial for patient management, as misdiagnosis can result in unnecessary surgeries or treatment delays, affecting quality of life. The need to improve patient outcomes and reduce the impact of these conditions underscores the significance of developing a non-invasive, accurate diagnostic model. We developed a machine learning model capable of accurately identifying different types of PCNs non-invasively, using a dataset comprising 449 MRI and 568 CT scans from adult patients, spanning 2009 to 2022. The results indicate that our multimodal machine learning algorithm, which integrates both clinical and imaging data, significantly outperforms single-source algorithms. Specifically, it demonstrated state-of-the-art performance in classifying PCN types, achieving an average accuracy of 91.2%, precision of 91.7%, sensitivity of 88.9%, and specificity of 96.5%. Remarkably, for patients with mucinous cystic neoplasms (MCNs), the model achieved 100% prediction accuracy regardless of whether MRI or CT imaging was used. These results indicate that our non-invasive multimodal machine learning model offers strong support for the early screening of MCNs and represents a significant advancement in PCN diagnosis, with the potential to improve clinical practice and patient outcomes. We also achieved the best results on an additional pancreatic cancer dataset, which further demonstrates the generality of our model.
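The four metrics reported above follow directly from a binary confusion matrix; a minimal sketch with made-up counts (not the study's data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, precision, sensitivity, and specificity from
    confusion-matrix counts (illustrative counts, not study data)."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),  # recall / true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
    }

m = binary_metrics(tp=80, fp=5, tn=100, fn=10)
print(m)
```

Reporting all four together matters because accuracy alone can look strong on an imbalanced cohort while sensitivity or specificity is poor.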
