Page 174 of 6526512 results

Ali H, Abu Qdais A, Chatterjee A, Abdalkader M, Raz E, Nguyen TN, Al Kasab S

PubMed · Sep 12, 2025
Cerebrovascular imaging has undergone significant advances, enhancing the diagnosis and management of cerebrovascular diseases such as stroke, aneurysms, and arteriovenous malformations. This chapter explores key imaging modalities, including non-contrast computed tomography, computed tomography angiography, magnetic resonance imaging (MRI), and digital subtraction angiography. Innovations such as high-resolution vessel wall imaging, artificial intelligence (AI)-driven stroke detection, and advanced perfusion imaging have improved diagnostic accuracy and treatment selection. Additionally, novel techniques like 7-T MRI, molecular imaging, and functional ultrasound provide deeper insights into vascular pathology. AI and machine learning applications are revolutionizing automated detection and prognostication, expediting treatment decisions. Challenges remain in standardization, radiation exposure, and accessibility. However, continued technological advances, multimodal imaging integration, and AI-driven automation promise a future of precise, non-invasive cerebrovascular diagnostics, ultimately improving patient outcomes.

Su HZ, Hong LC, Li ZY, Fu QM, Wu YH, Wu SF, Zhang ZB, Yang DH, Zhang XD

PubMed · Sep 12, 2025
Cervical lymph node metastasis (CLNM) critically impacts surgical approach, prognosis, and recurrence in patients with major salivary gland carcinomas (MSGCs). We aimed to develop and validate an ultrasound (US)-based deep learning (DL) radiomics model for noninvasive prediction of CLNM in MSGCs. A total of 214 patients with MSGCs from 4 medical centers were divided into training (Centers 1-2, n = 144) and validation (Centers 3-4, n = 70) cohorts. Radiomics and DL features were extracted from preoperative US images, and after feature selection, a radiomics score and a DL score were constructed. Least absolute shrinkage and selection operator (LASSO) regression was then used to identify optimal features, which were employed to develop predictive models using logistic regression (LR) and 8 other machine learning algorithms. Model performance was evaluated using multiple metrics, with particular focus on the area under the receiver operating characteristic curve (AUC). The radiomics and DL scores showed robust performance in predicting CLNM, with AUCs of 0.819 and 0.836 in the validation cohort, respectively. After LASSO regression, 6 key features (patient age, tumor edge, calcification, US-reported CLN positivity, radiomics score, and DL score) were selected to construct 9 predictive models. In the validation cohort, the models' AUCs ranged from 0.770 to 0.962. The LR model achieved the best performance, with an AUC of 0.962, accuracy of 0.886, precision of 0.762, recall of 0.842, and an F1 score of 0.8. The composite model integrating clinical, US, radiomics, and DL features accurately and noninvasively predicts CLNM preoperatively in MSGCs. CLNM in MSGCs is critical for treatment planning, but noninvasive prediction has been limited. This study developed a US-based DL radiomics model to enable noninvasive CLNM prediction, supporting personalized surgery and reducing unnecessary interventions.
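The selection-then-classification pipeline described above (LASSO to pick a sparse feature subset, then logistic regression on the survivors) can be sketched with scikit-learn; the synthetic data, feature count, and coefficients below are illustrative stand-ins, not values from the study:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                    # 200 patients, 50 candidate features
w = np.zeros(50)
w[:6] = [1.2, -0.8, 0.9, 0.7, -1.1, 0.6]          # 6 truly informative features
y = (X @ w + rng.normal(scale=0.5, size=200) > 0).astype(int)  # binary CLNM label

X = StandardScaler().fit_transform(X)

# Step 1: LASSO regression shrinks uninformative coefficients to exactly zero
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(lasso.coef_ != 0)

# Step 2: logistic regression on the selected features only
clf = LogisticRegression().fit(X[:, selected], y)
auc = roc_auc_score(y, clf.predict_proba(X[:, selected])[:, 1])
print(f"{len(selected)} features selected, training AUC = {auc:.3f}")
```

In practice the selected subset would be evaluated on a held-out validation cohort, as the study does, rather than on the training data used here.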

Jomeiri A, Habibizad Navin A, Shamsi M

PubMed · Sep 12, 2025
Alzheimer's disease (AD) poses a significant global health challenge, necessitating early and accurate diagnosis to enable timely intervention. Structural MRI (sMRI) is a key imaging modality for detecting AD-related brain atrophy, yet traditional deep learning models like convolutional neural networks (CNNs) struggle to capture complex spatial dependencies critical for AD diagnosis. This study introduces the Regional Attention-Enhanced Vision Transformer (RAE-ViT), a novel framework designed for AD classification using sMRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. RAE-ViT leverages regional attention mechanisms to prioritize disease-critical brain regions, such as the hippocampus and ventricles, while integrating hierarchical self-attention and multi-scale feature extraction to model both localized and global structural patterns. Evaluated on 1152 sMRI scans (255 AD, 521 MCI, 376 NC), RAE-ViT achieved state-of-the-art performance with 94.2 % accuracy, 91.8 % sensitivity, 95.7 % specificity, and an AUC of 0.96, surpassing standard ViTs (89.5 %) and CNN-based models (e.g., ResNet-50: 87.8 %). The model's interpretable attention maps align closely with clinical biomarkers (Dice: 0.89 hippocampus, 0.85 ventricles), enhancing diagnostic reliability. Robustness to scanner variability (92.5 % accuracy on 1.5T scans) and noise (92.5 % accuracy under 10 % Gaussian noise) further supports its clinical applicability. A preliminary multimodal extension integrating sMRI and PET data improved accuracy to 95.8 %. Future work will focus on optimizing RAE-ViT for edge devices, incorporating multimodal data (e.g., PET, fMRI, genetic), and exploring self-supervised and federated learning to enhance generalizability and privacy. RAE-ViT represents a significant advancement in AI-driven AD diagnosis, offering potential for early detection and improved patient outcomes.
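The Dice overlap used above to compare attention maps with anatomical masks is simple to compute; a minimal sketch on toy binary masks (the arrays here are illustrative, not hippocampus segmentations):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# toy 2D masks standing in for a thresholded attention map vs. a reference mask
pred = np.zeros((8, 8), dtype=int); pred[2:6, 2:6] = 1   # 16 "attended" voxels
ref  = np.zeros((8, 8), dtype=int); ref[3:7, 3:7] = 1    # 16 reference voxels
print(round(dice(pred, ref), 3))  # 9 overlapping voxels out of 16 + 16
```

A Dice of 0.89, as reported for the hippocampus above, indicates very tight spatial agreement between attention and anatomy.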

Kamath A, Willmann J, Andratschke N, Reyes M

pubmed logopapersSep 12 2025
Since its introduction in 2015, the U-Net architecture has become popular for medical image segmentation. U-Net is known for its "skip connections," which transfer image details directly to its decoder branch at various levels. However, it is unclear how these skip connections affect the model's performance when the texture of input images varies. To explore this, we tested six U-Net-like architectures in three groups: Standard (U-Net and V-Net), No-Skip (U-Net and V-Net without skip connections), and Enhanced (AGU-Net and UNet++, which have extra skip connections). Because convolutional neural networks (CNNs) are known to be sensitive to texture, we defined a novel texture disparity (TD) metric and ran experiments on synthetic images while varying this measure. We then applied these findings to four real medical imaging datasets covering different anatomies (breast, colon, heart, and spleen) and imaging types (ultrasound, histology, MRI, and CT). The goal was to understand how the choice of architecture affects the model's ability to handle varying TD between foreground and background. For each dataset, we tested the models across five TD categories, measuring performance with the Dice similarity coefficient (DSC), Hausdorff distance, surface distance, and surface DSC. Our results on synthetic data with varying textures show differences between architectures with and without skip connections, especially when trained under hard textural conditions. On medical data, this indicates that training datasets with a narrow texture range hurt the robustness of architectures that include more skip connections. The robustness gap between architectures shrinks when models are trained on a wider TD range. In the harder TD categories, models from the No-Skip group performed best in 5/8 cases by DSC and 7/8 by Hausdorff distance.
When robustness was measured as the coefficient of variation of the DSC, the No-Skip group performed best in 7 of 16 cases, ahead of the Enhanced (6/16) and Standard (3/16) groups. These findings suggest that skip connections offer performance benefits, usually at the cost of robustness, depending on the degree of texture disparity between foreground and background and the range of texture variation in the training set. Their use should therefore be evaluated carefully for robustness-critical tasks such as medical image segmentation, and combinations of texture-aware architectures should be investigated to achieve better performance-robustness characteristics.
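The robustness comparison above uses the coefficient of variation of the DSC across texture-disparity categories; a minimal sketch with made-up DSC values (not the paper's numbers):

```python
import numpy as np

def coeff_of_variation(scores) -> float:
    """Coefficient of variation: std / mean. Lower means more stable
    performance across conditions, i.e. a more robust model."""
    scores = np.asarray(scores, dtype=float)
    return scores.std() / scores.mean()

# illustrative DSC values for two models across five texture-disparity categories
dsc_no_skip  = [0.82, 0.80, 0.79, 0.81, 0.78]   # modest but stable across TD
dsc_enhanced = [0.90, 0.88, 0.75, 0.65, 0.55]   # higher peak, degrades at hard TD
print(coeff_of_variation(dsc_no_skip) < coeff_of_variation(dsc_enhanced))  # True
```

This captures the paper's trade-off: the model with the better peak DSC can still be the less robust one once performance is tracked across the full texture-disparity range.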

Emily Kaczmarek, Justin Szeto, Brennan Nichyporuk, Tal Arbel

arXiv preprint · Sep 12, 2025
Alzheimer's disease is a progressive, neurodegenerative disorder that causes memory loss and cognitive decline. While there has been extensive research in applying deep learning models to Alzheimer's prediction tasks, these models remain limited by lack of available labeled data, poor generalization across datasets, and inflexibility to varying numbers of input scans and time intervals between scans. In this study, we adapt three state-of-the-art temporal self-supervised learning (SSL) approaches for 3D brain MRI analysis, and add novel extensions designed to handle variable-length inputs and learn robust spatial features. We aggregate four publicly available datasets comprising 3,161 patients for pre-training, and show the performance of our model across multiple Alzheimer's prediction tasks including diagnosis classification, conversion detection, and future conversion prediction. Importantly, our SSL model implemented with temporal order prediction and contrastive learning outperforms supervised learning on six out of seven downstream tasks. It demonstrates adaptability and generalizability across tasks and number of input images with varying time intervals, highlighting its capacity for robust performance across clinical applications. We release our code and model publicly at https://github.com/emilykaczmarek/SSL-AD.
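One of the temporal pretext tasks named above, temporal order prediction, reduces to shuffling a patient's longitudinal scans and asking the model whether the sequence is in acquisition order. A minimal sketch of the label generation under that reading (the array shapes and the 50/50 shuffle rate are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_order_example(scans: np.ndarray) -> tuple[np.ndarray, int]:
    """Given one patient's scans in acquisition order, shape (T, ...),
    return a (possibly shuffled) sequence and a binary label: 1 = correct order."""
    t = scans.shape[0]
    if rng.random() < 0.5:
        return scans, 1                          # keep true temporal order
    perm = rng.permutation(t)
    while np.array_equal(perm, np.arange(t)):    # guarantee an actual shuffle
        perm = rng.permutation(t)
    return scans[perm], 0

# variable-length input: 3 longitudinal "MRI volumes" per patient (toy 4x4 slices)
scans = np.stack([np.full((4, 4), i) for i in range(3)])
seq, label = make_order_example(scans)
print(seq.shape, label)
```

Because the label is derived from the data itself, no diagnosis annotations are needed, which is what lets the approach pre-train on the pooled 3,161-patient cohort.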

Yehudit Aperstein, Amit Tzahar, Alon Gottlib, Tal Verber, Ravit Shagan Damti, Alexander Apartsin

arXiv preprint · Sep 12, 2025
Overconfidence in deep learning models poses a significant risk in high-stakes medical imaging tasks, particularly in multi-label classification of chest X-rays, where multiple co-occurring pathologies must be detected simultaneously. This study introduces an uncertainty-aware framework for chest X-ray diagnosis based on a DenseNet-121 backbone, enhanced with two selective prediction mechanisms: entropy-based rejection and confidence interval-based rejection. Both methods enable the model to abstain from uncertain predictions, improving reliability by deferring ambiguous cases to clinical experts. A quantile-based calibration procedure is employed to tune rejection thresholds using either global or class-specific strategies. Experiments conducted on three large public datasets (PadChest, NIH ChestX-ray14, and MIMIC-CXR) demonstrate that selective rejection improves the trade-off between diagnostic accuracy and coverage, with entropy-based rejection yielding the highest average AUC across all pathologies. These results support the integration of selective prediction into AI-assisted diagnostic workflows, providing a practical step toward safer, uncertainty-aware deployment of deep learning in clinical settings.
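The entropy-based rejection with quantile calibration can be sketched as follows; the synthetic sigmoid outputs, the 14-label setup, and the 20% abstention budget below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def binary_entropy(p):
    """Per-label predictive entropy of a Bernoulli output (natural log)."""
    p = np.clip(p, 1e-8, 1 - 1e-8)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

rng = np.random.default_rng(0)
# synthetic multi-label sigmoid outputs: 1000 calibration and 200 test images
cal_probs  = rng.uniform(size=(1000, 14))
test_probs = rng.uniform(size=(200, 14))

# uncertainty per image = mean per-label predictive entropy
cal_u  = binary_entropy(cal_probs).mean(axis=1)
test_u = binary_entropy(test_probs).mean(axis=1)

# global quantile-based calibration: abstain on the most uncertain ~20%
threshold = np.quantile(cal_u, 0.80)
accept = test_u <= threshold
print(f"coverage: {accept.mean():.2f}")  # fraction of test images not deferred
```

The class-specific variant mentioned in the abstract would calibrate one threshold per pathology instead of a single global one.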

Yuexi Du, Lihui Chen, Nicha C. Dvornek

arXiv preprint · Sep 12, 2025
Mammography screening is an essential tool for early detection of breast cancer. The speed and accuracy of mammography interpretation have the potential to be improved with deep learning methods. However, the development of a foundation vision-language model (VLM) is hindered by limited data and domain differences between natural and medical images. Existing mammography VLMs, adapted from natural images, often ignore domain-specific characteristics, such as multi-view relationships in mammography. Unlike radiologists, who analyze both views together to exploit ipsilateral correspondence, current methods treat them as independent images or fail to properly model multi-view correspondence, losing critical geometric context and resulting in suboptimal prediction. We propose GLAM: Global and Local Alignment for Multi-view mammography for VLM pretraining using geometry guidance. By leveraging prior knowledge about the multi-view imaging process of mammograms, our model learns local cross-view alignments and fine-grained local features through joint global and local, visual-visual, and visual-language contrastive learning. Pretrained on EMBED [14], one of the largest open mammography datasets, our model outperforms baselines across multiple datasets under different settings.

Zheng B, Zhu Z, Ma K, Liang Y, Liu H

PubMed · Sep 12, 2025
This study explores a method based on three-dimensional cervical spinal cord reconstruction, radiomics feature extraction, and machine learning to build a postoperative prognosis prediction model for patients with cervical spondylotic myelopathy (CSM), and evaluates the predictive performance of different cervical spinal cord segmentation strategies and machine learning algorithms. A retrospective analysis was conducted on 126 CSM patients who underwent posterior single-door laminoplasty from January 2017 to December 2022. Three cervical spinal cord segmentation strategies (narrowest segment, surgical segment, and entire cervical cord C1-C7) were applied to preoperative MRI images for radiomics feature extraction. Good clinical prognosis was defined as a postoperative JOA recovery rate ≥50%. By comparing the performance of 8 machine learning algorithms, the optimal segmentation strategy and classifier were selected. Clinical features (smoking history, diabetes, preoperative JOA score, and cSVA) were then combined with radiomics features to construct a clinical-radiomics prediction model. Among the three segmentation strategies, the SVM model based on the narrowest segment performed best (AUC = 0.885). Among clinical features, smoking history, diabetes, preoperative JOA score, and cSVA were important prognostic indicators. When clinical and radiomics features were combined, the fusion model achieved excellent performance on the test set (accuracy = 0.895, AUC = 0.967), significantly outperforming either the clinical or the radiomics model alone. This study validates the feasibility and superiority of three-dimensional radiomics combined with machine learning for predicting postoperative prognosis in CSM.
Combining radiomics features from the narrowest segment with clinical features yields a highly accurate prognosis prediction model, providing new insights for clinical assessment and individualized treatment decisions. Future studies should validate the stability and generalizability of this model in multi-center, large-sample cohorts.
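The good-prognosis label above is built on the JOA recovery rate. A common definition is the Hirabayashi formula with a maximum cervical JOA score of 17; the paper may use a variant, so treat the exact formula here as an assumption:

```python
def joa_recovery_rate(pre: float, post: float, max_score: float = 17.0) -> float:
    """Hirabayashi recovery rate (%): achieved improvement relative to the
    maximum possible improvement from the preoperative JOA score."""
    return 100.0 * (post - pre) / (max_score - pre)

# a patient improving from JOA 10 to 14 recovers 4 of a possible 7 points
rate = joa_recovery_rate(pre=10, post=14)
print(round(rate, 1), rate >= 50)  # 57.1 True → "good prognosis" by the ≥50% cutoff
```

Dichotomizing at 50% turns the continuous recovery rate into the binary target the classifiers above are trained on.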

Misch M, Medani K, Rhisheekesan A, Manjila S

PubMed · Sep 12, 2025
Trailblazing strides in artificial intelligence (AI) programs have led to enhanced diagnostic imaging, including ultrasound (US), magnetic resonance imaging, and infrared thermography. This systematic review summarizes current efforts to integrate AI into the diagnosis of carpal tunnel syndrome (CTS) and its potential to improve clinical decision-making. A comprehensive literature search was conducted in PubMed, Embase, and the Cochrane database in accordance with PRISMA guidelines. Articles were included if they evaluated the application of AI in the diagnosis or detection of CTS. Search terms included "carpal tunnel syndrome" and "artificial intelligence", along with relevant MeSH terms. A total of 22 studies met inclusion criteria and were analyzed qualitatively. AI models, especially deep learning algorithms, demonstrated strong diagnostic performance, particularly with US imaging. Frequently used inputs included echointensity, pixelation patterns, and the cross-sectional area of the median nerve. AI-assisted image analysis enabled superior detection and segmentation of the median nerve, often outperforming radiologists in sensitivity and specificity. Additionally, AI complemented electromyography by offering insight into the physiological integrity of the nerve. AI holds significant promise as an adjunctive tool in the diagnosis and management of CTS. Its ability to extract and quantify radiomic features may support accurate, reproducible diagnoses and allow for longitudinal digital documentation. When integrated with existing modalities, AI may enhance clinical assessments, inform surgical decision-making, and extend diagnostic capabilities into telehealth and point-of-care settings. Continued development and prospective validation of these technologies are essential for streamlining widespread integration into clinical practice.

Full PM, Schirrmeister RT, Hein M, Russe MF, Reisert M, Ammann C, Greiser KH, Niendorf T, Pischon T, Schulz-Menger J, Maier-Hein KH, Bamberg F, Rospleszcz S, Schlett CL, Schuppert C

PubMed · Sep 12, 2025
The prospective, multicenter German National Cohort (NAKO) provides a unique dataset of cardiac magnetic resonance (CMR) cine images. Effective processing of these images requires a robust segmentation and quality control pipeline. A deep learning model for semantic segmentation, based on the nnU-Net architecture, was applied to full-cycle short-axis cine images from 29,908 baseline participants. The primary objective was to derive structural and functional parameters for both ventricles (LV, RV), including end-diastolic volume (EDV), end-systolic volume (ESV), and LV myocardial mass. Quality control measures included a visual assessment of outliers in morphofunctional parameters, inter- and intra-ventricular phase differences, and time-volume curves (TVC). These were adjudicated using a five-point rating scale, ranging from five (excellent) to one (non-diagnostic), with ratings of three or lower subject to exclusion. The predictive value of outlier criteria for inclusion and exclusion was evaluated using receiver operating characteristic (ROC) analysis. The segmentation model generated complete data for 29,609 participants (incomplete in 1.0%), of which 5,082 cases (17.0%) underwent visual assessment. Quality assurance yielded a sample of 26,899 (90.8%) participants with excellent or good quality, excluding 1,875 participants due to image quality issues and 835 participants due to segmentation quality issues. TVC was the strongest single discriminator between included and excluded participants (AUC: 0.684). Of the two-category combinations, the pairing of TVC and phases provided the greatest improvement over TVC alone (AUC difference: 0.044; p<0.001). The best performance was observed when all three categories were combined (AUC: 0.748). By extending the quality-controlled sample to include mid-level 'acceptable' quality ratings, a total of 28,413 (96.0%) participants could be included.
The implemented pipeline facilitated the automated segmentation of an extensive CMR dataset, integrating quality control measures. This methodology ensures that ensuing quantitative analyses are conducted with a diminished risk of bias.
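The ventricular volumes extracted by the pipeline above feed the standard derived measures of cardiac function; a minimal sketch of those textbook formulas (the numeric values below are illustrative, not NAKO results):

```python
def stroke_volume(edv: float, esv: float) -> float:
    """Stroke volume (mL): blood ejected per beat, from end-diastolic
    and end-systolic volumes."""
    return edv - esv

def ejection_fraction(edv: float, esv: float) -> float:
    """Ejection fraction (%): fraction of the end-diastolic volume
    ejected during systole."""
    return 100.0 * (edv - esv) / edv

# plausible left-ventricular values (mL); numbers illustrative only
lv_edv, lv_esv = 140.0, 55.0
print(stroke_volume(lv_edv, lv_esv), round(ejection_fraction(lv_edv, lv_esv), 1))
```

Because both measures are simple differences and ratios of EDV and ESV, segmentation errors at end-diastole or end-systole propagate directly into them, which is why the pipeline's quality control on the time-volume curves matters.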