Page 261 of 3423416 results

Efficient slice anomaly detection network for 3D brain MRI Volume.

Zhang Z, Mohsenzadeh Y

PubMed · Jun 1 2025
Current anomaly detection methods excel on benchmark industrial data but struggle with natural images and medical data due to varying definitions of 'normal' and 'abnormal,' making accurate identification of deviations in these fields particularly challenging. For 3D brain MRI data in particular, the state-of-the-art models are all reconstruction-based 3D convolutional neural networks, which are memory-intensive, time-consuming, and prone to noisy outputs that require further post-processing. We propose a framework called Simple Slice-based Network (SimpleSliceNet), which uses a model pre-trained on ImageNet and fine-tuned on a separate MRI dataset as a 2D slice feature extractor to reduce computational cost. We aggregate the extracted features to perform anomaly detection on 3D brain MRI volumes. Our model integrates a conditional normalizing flow to calculate the log-likelihood of features and employs a contrastive loss to improve anomaly detection accuracy. The results indicate improved performance, showcasing the model's adaptability and effectiveness in addressing the challenges inherent in brain MRI data. In addition, on large-scale 3D brain volumes, SimpleSliceNet outperforms state-of-the-art 2D and 3D models in accuracy, memory usage, and time consumption. Code is available at: https://github.com/Jarvisarmy/SimpleSliceNet.
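The slice-to-volume step in the abstract can be illustrated with a minimal sketch. The paper scores 2D slice features with a conditional normalizing flow; here we assume each slice already has an anomaly score (e.g., a negative log-likelihood) and show one plausible aggregation, the mean of the top-k most anomalous slices, which is robust to a few noisy slices while still flagging localized lesions. The aggregation rule is an assumption for illustration, not the authors' exact method.

```python
def volume_anomaly_score(slice_scores, top_k=3):
    """Aggregate per-slice anomaly scores into one volume-level score.

    Uses the mean of the top-k highest (most anomalous) slice scores,
    a hypothetical stand-in for the paper's aggregation step.
    """
    if not slice_scores:
        raise ValueError("need at least one slice score")
    k = min(top_k, len(slice_scores))
    worst = sorted(slice_scores, reverse=True)[:k]
    return sum(worst) / k

# A volume with one strongly anomalous slice scores higher than a
# uniformly normal volume, even though most of its slices look normal.
normal = [0.10, 0.20, 0.15, 0.10, 0.12]
lesion = [0.10, 0.20, 3.50, 0.10, 0.12]
```

Top-k averaging sits between taking the single maximum (sensitive to one noisy slice) and averaging all slices (which dilutes a small lesion across many normal slices).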

Diagnostic Performance of ChatGPT-4o in Detecting Hip Fractures on Pelvic X-rays.

Erdem TE, Kirilmaz A, Kekec AF

PubMed · Jun 1 2025
Hip fractures are a major orthopedic problem, especially in the elderly population. They are usually diagnosed by clinical evaluation and imaging, particularly X-rays. In recent years, new approaches to fracture detection have emerged with the use of artificial intelligence (AI) and deep learning techniques in medical imaging. In this study, we aimed to evaluate the diagnostic performance of ChatGPT-4o, an artificial intelligence model, in diagnosing hip fractures. A total of 200 anteroposterior pelvic X-ray images were retrospectively analyzed. Half of the images belonged to patients with surgically confirmed hip fractures, including both displaced and non-displaced types, while the other half represented patients with soft tissue trauma and no fractures. Each image was evaluated by ChatGPT-4o through a standardized prompt, and its predictions (fracture vs. no fracture) were compared against the gold-standard diagnoses. Diagnostic performance metrics including sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), the receiver operating characteristic (ROC) curve, Cohen's kappa, and F1 score were calculated. ChatGPT-4o demonstrated an overall accuracy of 82.5% in detecting hip fractures on pelvic radiographs, with a sensitivity of 78.0% and specificity of 87.0%. The PPV and NPV were 85.7% and 79.8%, respectively. The area under the ROC curve (AUC) was 0.825, indicating good discriminative performance. Among the 22 false-negative cases, 68.2% were non-displaced fractures, suggesting the model had greater difficulty identifying subtle radiographic findings. Cohen's kappa coefficient was 0.65, showing substantial agreement with the actual diagnoses. Chi-square analysis revealed a strong association (χ² = 82.59, <i>P</i> < 0.001), while McNemar's test (<i>P</i> = 0.176) showed no significant asymmetry in error distribution.
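The reported figures are internally consistent and can be recomputed from the implied confusion matrix: with 100 fractures and 100 controls, a sensitivity of 78% and specificity of 87% give TP=78, FN=22, TN=87, FP=13. All counts below follow directly from the abstract; the function itself is a generic metric calculator.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard binary diagnostic metrics from confusion-matrix counts."""
    n = tp + fn + tn + fp
    sens = tp / (tp + fn)            # sensitivity (recall on fractures)
    spec = tn / (tn + fp)            # specificity
    ppv = tp / (tp + fp)             # positive predictive value
    npv = tn / (tn + fn)             # negative predictive value
    acc = (tp + tn) / n
    # Cohen's kappa: observed agreement corrected for chance agreement
    po = acc
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (po - pe) / (1 - pe)
    return {"sens": sens, "spec": spec, "ppv": ppv,
            "npv": npv, "acc": acc, "kappa": kappa}

# Counts implied by the abstract (100 fractures, 100 controls).
m = diagnostic_metrics(tp=78, fn=22, tn=87, fp=13)
```

Running this reproduces the abstract's accuracy (0.825), PPV (0.857), NPV (0.798), and kappa (0.65).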
ChatGPT-4o shows promising accuracy in identifying hip fractures on pelvic X-rays, especially when fractures are displaced. However, its sensitivity drops significantly for non-displaced fractures, leading to many false negatives. This highlights the need for caution when interpreting negative AI results, particularly when clinical suspicion remains high. While not a replacement for expert assessment, ChatGPT-4o may assist in settings with limited specialist access.

Neuroimaging and machine learning in eating disorders: a systematic review.

Monaco F, Vignapiano A, Di Gruttola B, Landi S, Panarello E, Malvone R, Palermo S, Marenna A, Collantoni E, Celia G, Di Stefano V, Meneguzzo P, D'Angelo M, Corrivetti G, Steardo L

PubMed · Jun 1 2025
Eating disorders (EDs), including anorexia nervosa (AN), bulimia nervosa (BN), and binge eating disorder (BED), are complex psychiatric conditions with high morbidity and mortality. Neuroimaging and machine learning (ML) represent promising approaches to improve diagnosis, understand pathophysiological mechanisms, and predict treatment response. This systematic review aimed to evaluate the application of ML techniques to neuroimaging data in EDs. Following PRISMA guidelines (PROSPERO registration: CRD42024628157), we systematically searched PubMed and APA PsycINFO for studies published between 2014 and 2024. Inclusion criteria encompassed human studies using neuroimaging and ML methods applied to AN, BN, or BED. Data extraction focused on study design, imaging modalities, ML techniques, and performance metrics. Quality was assessed using the GRADE framework and the ROBINS-I tool. Out of 185 records screened, 5 studies met the inclusion criteria. Most applied support vector machines (SVMs) or other supervised ML models to structural MRI or diffusion tensor imaging data. Cortical thickness alterations in AN and diffusion-based metrics effectively distinguished ED subtypes. However, all studies were observational, heterogeneous, and at moderate to serious risk of bias. Sample sizes were small, and external validation was lacking. ML applied to neuroimaging shows potential for improving ED characterization and outcome prediction. Nevertheless, methodological limitations restrict generalizability. Future research should focus on larger, multicenter, and multimodal studies to enhance clinical applicability. Level IV, multiple observational studies with methodological heterogeneity and moderate to serious risk of bias.

Classification of differentially activated groups of fibroblasts using morphodynamic and motile features.

Kang M, Min C, Devarasou S, Shin JH

PubMed · Jun 1 2025
Fibroblasts play essential roles in cancer progression, exhibiting activation states that can either promote or inhibit tumor growth. Understanding these differential activation states is critical for targeting the tumor microenvironment (TME) in cancer therapy. However, traditional molecular markers used to identify cancer-associated fibroblasts are limited by their co-expression across multiple fibroblast subtypes, making it difficult to distinguish specific activation states. Morphological and motility characteristics of fibroblasts reflect their underlying gene expression patterns and activation states, making these features valuable descriptors of fibroblast behavior. This study proposes an artificial intelligence-based classification framework to identify and characterize differentially activated fibroblasts by analyzing their morphodynamic and motile features. We extract these features from label-free live-cell imaging data of fibroblasts co-cultured with breast cancer cell lines using deep learning and machine learning algorithms. Our findings show that morphodynamic and motile features offer robust insights into fibroblast activation states, complementing molecular markers and overcoming their limitations. This biophysical state-based cellular classification framework provides a novel, comprehensive approach for characterizing fibroblast activation, with significant potential for advancing our understanding of the TME and informing targeted cancer therapies.

Development and validation of a combined clinical and MRI-based biomarker model to differentiate mild cognitive impairment from mild Alzheimer's disease.

Hosseini Z, Mohebbi A, Kiani I, Taghilou A, Mohammadjafari A, Aghamollaii V

PubMed · Jun 1 2025
Alzheimer's disease (AD) and mild cognitive impairment (MCI) are two of the most common complaints seen in neurology clinics and present with similar symptoms. The aim of this study was to develop and internally validate the diagnostic value of combined neurological and radiological predictors in differentiating mild AD from MCI as the outcome variable, which can help in preventing progression to AD. A cross-sectional study of 161 participants was conducted in a general healthcare setting, including 30 controls, 71 with mild AD, and 60 with MCI. Binary logistic regression was used to identify predictors of interest, with collinearity assessed prior to model development. Model performance was assessed through calibration, shrinkage, and decision-curve analyses. Finally, the combined clinical and radiological model was compared to models utilizing only clinical or only radiological predictors. The final model included age, sex, education status, Montreal Cognitive Assessment score, Global Cerebral Atrophy Index, Medial Temporal Atrophy Scale, mean hippocampal volume, and Posterior Parietal Atrophy Index, with an area under the curve of 0.978 (0.934-0.996). Internal validation did not show a substantial reduction in diagnostic performance, and the combined model showed higher diagnostic performance than the clinical and radiological models alone. Decision curve analysis highlighted the usefulness of this model for differentiation across all probability levels. A combined clinical-radiological model has excellent diagnostic performance in differentiating mild AD from MCI. Notably, the model leveraged straightforward neuroimaging markers, which are relatively simple to measure and interpret, suggesting that they could be integrated into practical, formula-driven diagnostic workflows without requiring computationally intensive deep learning models.
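The "formula-driven" workflow the authors envision is just a fitted logistic regression applied at the bedside. The sketch below shows that shape with the predictor list from the abstract; every coefficient is a made-up placeholder for illustration, not a value reported by the study.

```python
import math

# Hypothetical coefficients only -- the study does not report these values.
COEFFS = {
    "intercept": -4.0,
    "age": 0.03, "sex_male": 0.2, "education_years": -0.05,
    "moca": -0.30,                      # lower MoCA -> higher AD probability
    "gca_index": 0.8, "mta_scale": 0.9,  # more atrophy -> higher probability
    "mean_hippocampal_volume": -1.2, "ppa_index": 0.7,
}

def predicted_probability_ad(features):
    """P(mild AD vs. MCI): linear predictor pushed through the logistic link."""
    z = COEFFS["intercept"] + sum(COEFFS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Two illustrative (fabricated) patients: more atrophy and a lower MoCA
# score should yield a higher predicted probability of mild AD.
p_atrophic = predicted_probability_ad({
    "age": 72, "sex_male": 1, "education_years": 12, "moca": 20,
    "gca_index": 2, "mta_scale": 3, "mean_hippocampal_volume": 2.5,
    "ppa_index": 2})
p_preserved = predicted_probability_ad({
    "age": 72, "sex_male": 1, "education_years": 12, "moca": 28,
    "gca_index": 0, "mta_scale": 0, "mean_hippocampal_volume": 3.2,
    "ppa_index": 0})
```

Once real coefficients are published, a model like this runs in any calculator or spreadsheet, which is the practical point the conclusion makes.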

Improving predictability, reliability, and generalizability of brain-wide associations for cognitive abilities via multimodal stacking.

Tetereva A, Knodt AR, Melzer TR, van der Vliet W, Gibson B, Hariri AR, Whitman ET, Li J, Lal Khakpoor F, Deng J, Ireland D, Ramrakha S, Pat N

PubMed · Jun 1 2025
Brain-wide association studies (BWASs) have attempted to relate cognitive abilities with brain phenotypes, but have been challenged by issues such as predictability, test-retest reliability, and cross-cohort generalizability. To tackle these challenges, we proposed a machine learning "stacking" approach that draws information from whole-brain MRI across different modalities, from task-functional MRI (fMRI) contrasts and functional connectivity during tasks and rest to structural measures, into one prediction model. We benchmarked the benefits of stacking using the Human Connectome Projects: Young Adults (<i>n</i> = 873, 22-35 years old) and Human Connectome Projects-Aging (<i>n</i> = 504, 35-100 years old) and the Dunedin Multidisciplinary Health and Development Study (Dunedin Study, <i>n</i> = 754, 45 years old). For predictability, stacked models led to out-of-sample <i>r</i>∼0.5-0.6 when predicting cognitive abilities at the time of scanning, primarily driven by task-fMRI contrasts. Notably, using the Dunedin Study, we were able to predict participants' cognitive abilities at ages 7, 9, and 11 years using their multimodal MRI at age 45 years, with an out-of-sample <i>r</i> of 0.52. For test-retest reliability, stacked models reached an excellent level of reliability (intraclass correlation > 0.75), even when we stacked only task-fMRI contrasts together. For generalizability, a stacked model with nontask MRI built from one dataset significantly predicted cognitive abilities in other datasets. Altogether, stacking is a viable approach to undertake the three challenges of BWAS for cognitive abilities.
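The stacking idea can be sketched in a few lines: each MRI modality yields its own held-out prediction of cognitive ability, and a meta-learner combines them. The meta-learner below is deliberately simple, a weighted average with weights proportional to each base model's (non-negative) covariance with the target on held-out data; this illustrates the principle, not the authors' actual pipeline.

```python
def _cov(xs, ys):
    """Population covariance of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

def fit_stacking_weights(base_preds, target):
    """base_preds: {modality: held-out predictions}. Returns normalized
    weights proportional to each modality's covariance with the target
    (clipped at zero so anti-correlated modalities are dropped)."""
    raw = {m: max(_cov(p, target), 0.0) for m, p in base_preds.items()}
    total = sum(raw.values()) or 1.0
    return {m: w / total for m, w in raw.items()}

def stacked_predict(weights, new_preds):
    """Combine one new prediction per modality into a stacked prediction."""
    return sum(weights[m] * new_preds[m] for m in weights)

# Toy held-out data (names are illustrative): task-fMRI tracks the target
# perfectly, while the flat "structural" predictions carry no signal.
target = [1.0, 2.0, 3.0, 4.0]
base_preds = {"task_fmri": [1.0, 2.0, 3.0, 4.0],
              "structural": [2.0, 2.0, 2.0, 2.0]}
w = fit_stacking_weights(base_preds, target)
```

In the toy example the informative modality gets all the weight, mirroring the abstract's finding that task-fMRI contrasts primarily drove the stacked models' performance.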

Scale-Aware Super-Resolution Network With Dual Affinity Learning for Lesion Segmentation From Medical Images.

Luo L, Li Y, Chai Z, Lin H, Heng PA, Chen H

PubMed · Jun 1 2025
Convolutional neural networks (CNNs) have shown remarkable progress in medical image segmentation. However, lesion segmentation remains a challenge for state-of-the-art CNN-based algorithms due to variance in lesion scale and shape. On the one hand, tiny lesions are hard to delineate precisely from medical images, which are often of low resolution. On the other hand, segmenting large lesions requires large receptive fields, which exacerbates the first challenge. In this article, we present a scale-aware super-resolution (SR) network to adaptively segment lesions of various sizes from low-resolution (LR) medical images. Our proposed network contains dual branches that simultaneously conduct lesion mask SR (LMSR) and lesion image SR (LISR). Meanwhile, we introduce scale-aware dilated convolution (SDC) blocks into the multitask decoders to adaptively adjust the receptive fields of the convolutional kernels according to lesion size. To guide the segmentation branch to learn from richer high-resolution (HR) features, we propose a feature affinity (FA) module and a scale affinity (SA) module to enhance the multitask learning of the dual branches. On multiple challenging lesion segmentation datasets, our proposed network achieved consistent improvements over other state-of-the-art methods. Code will be available at: https://github.com/poiuohke/SASR_Net.
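The dilation-vs-receptive-field trade-off that motivates the SDC blocks is easy to quantify: a k×k kernel with dilation d covers an effective extent of k + (k-1)(d-1), and stacking stride-1 layers grows the receptive field additively. The helper below computes this for a stack of dilated convolutions; the layer configurations in the test are illustrative, not the paper's exact architecture.

```python
def receptive_field(layers):
    """Receptive field of a stack of stride-1 dilated convolutions.

    layers: list of (kernel_size, dilation) tuples. Each layer with
    effective kernel extent k + (k - 1) * (d - 1) adds (extent - 1)
    to the receptive field of the stack.
    """
    rf = 1
    for k, d in layers:
        effective_k = k + (k - 1) * (d - 1)  # dilated kernel extent
        rf += effective_k - 1
    return rf
```

A single 3×3 conv sees 3 pixels per axis; dilating it to d=2 sees 5 with the same parameter count, which is why adjusting dilation per lesion scale enlarges receptive fields for big lesions without extra weights.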

A Multimodal Model Based on Transvaginal Ultrasound-Based Radiomics to Predict the Risk of Peritoneal Metastasis in Ovarian Cancer: A Multicenter Study.

Zhou Y, Duan Y, Zhu Q, Li S, Zhang C

PubMed · Jun 1 2025
This study aimed to develop a predictive model for peritoneal metastasis (PM) in ovarian cancer using a combination of radiomics and clinical biomarkers to improve diagnostic accuracy. This retrospective cohort study of 619 ovarian cancer patients drew on demographic data, radiomics, O-RADS standardized descriptions, clinical biomarkers, and histological findings. Radiomics features were extracted using 3D Slicer and Pyradiomics, with feature selection performed using Least Absolute Shrinkage and Selection Operator (LASSO) regression. Model development and validation were carried out using logistic regression and machine learning methods. Interobserver agreement was high for radiomics features: 1049 features were initially extracted, and 7 were selected through regression analysis. Multimodal information such as ascites, fallopian tube invasion, greatest diameter, HE4, and D-dimer levels were significant predictors of PM. The developed radiomics nomogram demonstrated strong discriminatory power, with AUC values of 0.912, 0.883, and 0.831 in the training, internal test, and external test sets, respectively. The nomogram displayed superior diagnostic performance compared to single-modality models. The integration of multimodal information in a predictive model for PM in ovarian cancer shows promise for enhancing diagnostic accuracy and guiding personalized treatment. This multimodal approach offers a potential strategy for improving outcomes in the management of ovarian cancer with PM.
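The drastic feature reduction (1049 extracted, 7 retained) is what LASSO's L1 penalty is for: its coordinate-descent update applies a soft-thresholding operator that sets small coefficients exactly to zero. A one-line sketch of that operator (not the full Pyradiomics/regression pipeline):

```python
def soft_threshold(rho, lam):
    """Soft-thresholding: the closed-form coordinate update under an L1
    penalty of strength lam. Coefficients whose unpenalized update rho
    falls inside [-lam, lam] are zeroed out, producing sparse models."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0
```

During coordinate descent, each feature's coefficient is repeatedly passed through this operator; features whose signal never exceeds the penalty are eliminated, leaving only the strongest predictors in the nomogram.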

Enhancing Pathological Complete Response Prediction in Breast Cancer: The Added Value of Pretherapeutic Contrast-Enhanced Cone Beam Breast CT Semantic Features.

Wang Y, Ma Y, Wang F, Liu A, Zhao M, Bian K, Zhu Y, Yin L, Ye Z

PubMed · Jun 1 2025
To explore the association between pretherapeutic contrast-enhanced cone beam breast CT (CE-CBBCT) features and pathological complete response (pCR), and to develop a predictive model that integrates clinicopathological and imaging features. In this prospective study, a cohort of 200 female patients who underwent CE-CBBCT prior to neoadjuvant therapy and surgery was divided into training (n=150) and test (n=50) sets in a 3:1 ratio. Optimal predictive features were identified using univariate logistic regression and recursive feature elimination with cross-validation (RFECV). Models were constructed using XGBoost and evaluated through receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis. The performance of the combined model was further evaluated across molecular subtypes. Feature importance within the combined model was determined using the SHapley Additive exPlanations (SHAP) algorithm. The model incorporating three clinicopathological and six CE-CBBCT imaging features demonstrated robust predictive performance for pCR, with areas under the curve (AUCs) of 0.924 in the training set and 0.870 in the test set. Molecular subtype, spiculation, and adjacent vascular sign (AVS) grade emerged as the most influential SHAP features. The highest AUCs were observed for the HER2-positive subgroup (training: 0.935; test: 0.844), followed by luminal (training: 0.841; test: 0.717) and triple-negative breast cancer (TNBC; training: 0.760; test: 0.583). SHAP analysis indicated that spiculation was crucial for luminal breast cancer prediction, while AVS grade was critical for HER2-positive and TNBC cases. Integrating clinicopathological and CE-CBBCT imaging features enhanced pCR prediction accuracy, particularly in HER2-positive cases, underscoring its potential clinical applicability.

Coarse for Fine: Bounding Box Supervised Thyroid Ultrasound Image Segmentation Using Spatial Arrangement and Hierarchical Prediction Consistency.

Chi J, Lin G, Li Z, Zhang W, Chen JH, Huang Y

PubMed · Jun 1 2025
Weakly-supervised learning methods have become increasingly attractive for medical image segmentation, but they suffer from a high dependence on quantifying the pixel-wise affinities of low-level features, which are easily corrupted in thyroid ultrasound images, causing segmentation to over-fit to weakly annotated regions without precise delineation of target boundaries. We propose a dual-branch weakly-supervised learning framework that optimizes the backbone segmentation network by calibrating semantic features into a rational spatial distribution under the indirect, coarse guidance of a bounding-box mask. Specifically, in the spatial arrangement consistency branch, the maximum activations sampled from the preliminary segmentation prediction and the bounding-box mask along the horizontal and vertical dimensions are compared to measure the rationality of the approximate target localization. In the hierarchical prediction consistency branch, target and background prototypes are encapsulated from the semantic features under the combined guidance of the preliminary segmentation prediction and the bounding-box mask. The secondary segmentation prediction induced from the prototypes is compared with the preliminary prediction to quantify the rationality of the elaborated target and background semantic feature perception. Experiments on three thyroid datasets show that our model outperforms existing weakly-supervised methods for thyroid gland and nodule segmentation and is comparable to fully-supervised methods while requiring less annotation time. The proposed method provides a weakly-supervised segmentation strategy that simultaneously considers the target's location and the rationality of the target and background semantic feature distributions, improving the applicability of deep learning-based segmentation in clinical practice.
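The spatial-arrangement branch can be sketched concretely: project both the (soft) segmentation prediction and the bounding-box mask onto the horizontal and vertical axes by taking per-row and per-column maxima, then compare the projections. This is a simplified reading of the branch described in the abstract; the mean-absolute-difference comparison below is an assumed stand-in for the paper's exact consistency measure.

```python
def axis_projections(grid):
    """Per-row and per-column maxima of a 2D list (H x W)."""
    rows = [max(r) for r in grid]
    cols = [max(c) for c in zip(*grid)]
    return rows, cols

def arrangement_inconsistency(pred, box_mask):
    """Mean absolute difference between the axis projections of a
    prediction and a bounding-box mask; 0 means the prediction's
    horizontal/vertical extent matches the box exactly."""
    pr, pc = axis_projections(pred)
    br, bc = axis_projections(box_mask)
    diffs = [abs(a - b) for a, b in zip(pr + pc, br + bc)]
    return sum(diffs) / len(diffs)

# Toy 4x4 example: a box over rows 1-2, cols 1-2. A prediction inside
# the box is consistent; one shifted to the top-left corner is not.
box = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
shifted = [[1, 1, 0, 0],
           [1, 1, 0, 0],
           [0, 0, 0, 0],
           [0, 0, 0, 0]]
```

Because only 1D projections are compared, the box supervises the target's location and extent without ever claiming pixel-accurate boundaries, which is exactly the "coarse for fine" premise of the title.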