Page 85 of 226 · 2251 results

Effective workflow from multimodal MRI data to model-based prediction.

Jung K, Wischnewski KJ, Eickhoff SB, Popovych OV

pubmed logopapers · Jun 20 2025
Predicting human behavior from neuroimaging data remains a complex challenge in neuroscience. To address this, we propose a systematic and multi-faceted framework that incorporates a model-based workflow using dynamical brain models. This approach utilizes multi-modal MRI data for brain modeling and applies the optimized modeling outcome to machine learning. We demonstrate the performance of this approach through several examples, such as sex classification and prediction of cognition or personality traits. In particular, we show that incorporating the simulated data into machine learning can significantly improve prediction performance compared with using empirical features alone. These results suggest considering the output of dynamical brain models as an additional neuroimaging data modality that complements empirical data by capturing brain features that are difficult to measure directly. The discussed model-based workflow can offer a promising avenue for investigating and understanding inter-individual variability in brain-behavior relationships and for enhancing prediction performance in neuroimaging research.
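The central comparison (empirical features alone versus empirical plus model-simulated features) can be sketched with scikit-learn; all arrays, sizes, and labels below are hypothetical stand-ins, not the authors' data or pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical stand-ins: empirical MRI-derived features and
# features produced by simulating a dynamical brain model.
empirical = rng.normal(size=(n, 30))
simulated = rng.normal(size=(n, 10))
y = rng.integers(0, 2, size=n)  # e.g. sex labels

# Treat the simulated output as an extra modality by concatenating
# it with the empirical feature set.
combined = np.hstack([empirical, simulated])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
score_emp = cross_val_score(clf, empirical, y, cv=5).mean()
score_both = cross_val_score(clf, combined, y, cv=5).mean()
print(f"empirical only: {score_emp:.2f}, empirical+simulated: {score_both:.2f}")
```

With random data the two scores are of course comparable; the point is only the shape of the comparison the paper reports.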

Detection of breast cancer using fractional discrete sinc transform based on empirical Fourier decomposition.

Azmy MM

pubmed logopapers · Jun 20 2025
Breast cancer is the most common cause of death among women worldwide, so early detection is important for saving patients' lives. Ultrasound and mammography are the most common noninvasive methods for detecting breast cancer, and computer techniques are used to help physicians diagnose it. In most previous studies, the classification parameter rates were not high enough to achieve a correct diagnosis. In this study, new approaches were applied to detect breast cancer in images from three databases, with features extracted in MATLAB R2022a. The novel approaches use new fractional transforms deduced from the fractional Fourier transform and from novel discrete transforms, the latter derived from the discrete sine and cosine transforms. The approaches proceed as follows. First, the fractional transforms were applied to the breast images. Then, the empirical Fourier decomposition (EFD) was obtained, and the mean, variance, kurtosis, and skewness were calculated. Finally, an RNN-BiLSTM (recurrent neural network with bidirectional long short-term memory) was used for the classification phase. The proposed approaches were compared to identify the highest accuracy rate during the classification phase across the different fractional transforms. The highest accuracy was obtained when the fractional discrete sinc transform of approach 4 was applied: the area under the receiver operating characteristic curve (AUC) was 1, and the accuracy, sensitivity, specificity, precision, G-mean, and F-measure rates were all 100%. Had traditional machine learning methods, such as support vector machines (SVMs) and artificial neural networks (ANNs), been used, the classification parameter rates would have been low; the fourth approach instead used the RNN-BiLSTM to exploit the features of the breast images fully.
This approach can be programmed on a computer to help physicians correctly classify breast images.
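The statistical-feature step (mean, variance, kurtosis, and skewness of a decomposition component) can be sketched with numpy/scipy; the fractional transforms and the EFD itself are omitted, the component below is synthetic, and `texture_stats` is an illustrative name:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def texture_stats(coeffs: np.ndarray) -> dict:
    """First-order statistics of one (transformed) image component,
    as in the feature-extraction step described above."""
    flat = np.asarray(coeffs, dtype=float).ravel()
    return {
        "mean": float(np.mean(flat)),
        "variance": float(np.var(flat)),
        "kurtosis": float(kurtosis(flat)),  # Fisher definition: normal -> 0
        "skewness": float(skew(flat)),
    }

# Toy example on a synthetic "decomposition component".
rng = np.random.default_rng(1)
component = rng.normal(loc=0.5, scale=0.1, size=(64, 64))
feats = texture_stats(component)
```

In the paper these four numbers per component form the feature vector fed to the RNN-BiLSTM classifier.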

The value of multimodal neuroimaging in the diagnosis and treatment of post-traumatic stress disorder: a narrative review.

Zhang H, Hu Y, Yu Y, Zhou Z, Sun Y, Qi C, Yang L, Xie H, Zhang J, Zhu H

pubmed logopapers · Jun 20 2025
Post-traumatic stress disorder (PTSD) is a delayed-onset or prolonged, persistent psychiatric disorder caused by exposure to an unusually threatening or catastrophic event or situation. Because of its long duration and recurrent nature, unimodal neuroimaging tools such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and electroencephalography (EEG) have been widely used in the diagnosis and treatment of PTSD to enable early intervention. Compared with a unimodal approach, however, a multimodal imaging approach can better capture the integrated neural mechanisms underlying the occurrence and development of PTSD, including predisposing factors, changes in neural activity, and the physiological mechanisms of symptoms. Moreover, a multimodal neuroimaging approach can aid the diagnosis and treatment of PTSD, facilitate the search for biomarkers at different stages of the disorder, and help identify biomarkers of symptomatic improvement. At present, however, the majority of PTSD studies remain unimodal; combining multimodal brain imaging data with machine learning will be an important direction for future research.

Current and future applications of artificial intelligence in lung cancer and mesothelioma.

Roche JJ, Seyedshahi F, Rakovic K, Thu AW, Le Quesne J, Blyth KG

pubmed logopapers · Jun 20 2025
Considerable challenges exist in managing lung cancer and mesothelioma, including diagnostic complexity, treatment stratification, early detection and imaging quantification. Variable incidence in mesothelioma also makes equitable provision of high-quality care difficult. In this context, artificial intelligence (AI) offers a range of assistive/automated functions that can potentially enhance clinical decision-making, while reducing inequality and pathway delay. In this state-of-the-art narrative review, we synthesise evidence on this topic, focusing particularly on tools that ingest routine pathology and radiology images. We summarise the strengths and weaknesses of AI applied to common multidisciplinary team (MDT) functions, including histological diagnosis, therapeutic response prediction, radiological detection and quantification, and survival estimation. We also review emerging methods capable of generating novel biological insights and current barriers to implementation, including access to high-quality training data and suitable regulatory and technical infrastructure. Neural networks trained on pathology images have proven utility in histological classification, prognostication, response prediction and survival. Self-supervised models can also generate new insights into biological features responsible for adverse outcomes. Radiology applications include lung nodule tools, which offer critical pathway support for imminent lung cancer screening and urgent referrals. Tumour segmentation AI offers particular advantages in mesothelioma, where response assessment and volumetric staging are difficult using human readers due to tumour size and morphological complexity. AI is also critical for radiogenomics, permitting effective integration of molecular and radiomic features for discovery of non-invasive markers for molecular subtyping and enhanced stratification. 
AI solutions offer considerable potential benefits across the MDT, particularly in repetitive or time-consuming tasks based on pathology and radiology images. Effective leveraging of this technology is critical for lung cancer screening and efficient delivery of increasingly complex diagnostic and predictive MDT functions. Future AI research should involve transparent and interpretable outputs that assist in explaining the basis of AI-supported decision making.

Generalizable model to predict new or progressing compression fractures in tumor-infiltrated thoracolumbar vertebrae in an all-comer population.

Flores A, Nitturi V, Kavoussi A, Feygin M, Andrade de Almeida RA, Ramirez Ferrer E, Anand A, Nouri S, Allam AK, Ricciardelli A, Reyes G, Reddy S, Rampalli I, Rhines L, Tatsui CE, North RY, Ghia A, Siewerdsen JH, Ropper AE, Alvarez-Breckenridge C

pubmed logopapers · Jun 20 2025
Neurosurgical evaluation is required in the setting of spinal metastases at high risk of leading to a vertebral body fracture, in both irradiated and nonirradiated vertebrae. Understanding fracture risk is critical in determining management, including follow-up timing and prophylactic interventions. Herein, the authors report the results of a machine learning model that predicts the development or progression of a pathological vertebral compression fracture (VCF) in metastatic tumor-infiltrated thoracolumbar vertebrae in an all-comer population. A multi-institutional all-comer cohort of patients with tumor-containing vertebral levels spanning T1 through L5 and at least 1 year of follow-up was included in the study. Clinical features of the patients, diseases, and treatments were collected. CT radiomic features of the vertebral bodies were extracted from tumor-infiltrated vertebrae that did or did not subsequently fracture or progress. Recursive feature elimination (RFE) of both radiomic and clinical features was performed. The resulting features were used to create a purely clinical model, a purely radiomic model, and a combined clinical-radiomic model. A Spine Instability Neoplastic Score (SINS) model was created for a baseline performance comparison. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC), sensitivity, and specificity (with 95% confidence intervals) with tenfold cross-validation. Within 1 year from initial CT, 123 of 977 vertebrae developed VCF. Selected clinical features included SINS, the SINS component for < 50% vertebral body collapse, the SINS component for "none of the prior 3" (i.e., "none of the above" on the SINS component for vertebral body involvement), histology, age, and BMI. Of the 2015 radiomic features, RFE selected 19 for use in the pure radiomic model and the combined clinical-radiomic model.
The best-performing model was a random forest classifier using both clinical and radiomic features, demonstrating an AUROC of 0.86 (95% CI 0.82-0.90), sensitivity of 0.78 (95% CI 0.70-0.84), and specificity of 0.80 (95% CI 0.77-0.82). This performance was significantly higher than that of the best SINS-alone model (AUROC 0.75, 95% CI 0.70-0.80) and exceeded that of the clinical-only model (AUROC 0.82, 95% CI 0.77-0.87), though not to a statistically significant degree. The authors developed a clinically generalizable machine learning model to predict the risk of a new or progressing VCF in an all-comer population. This model addresses limitations of prior work and was trained on the largest cohort of patients and vertebrae published to date. If validated, the model could lead to more consistent and systematic identification of high-risk vertebrae, resulting in faster, more accurate triage of patients for optimal management.
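The selection-then-classification pipeline (RFE down to a fixed feature count, then a cross-validated random forest) can be sketched with scikit-learn; the synthetic data and sizes below are illustrative, not the study's cohort:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the radiomic feature matrix
# (the study used 977 vertebrae and 2015 radiomic features).
X, y = make_classification(n_samples=300, n_features=50, n_informative=8,
                           random_state=0)

# Recursive feature elimination down to 19 features, mirroring the
# study's selection of 19 radiomic features.
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=19)
X_sel = selector.fit_transform(X, y)

# Tenfold cross-validated AUROC, as in the paper's evaluation.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
auroc = cross_val_score(clf, X_sel, y, cv=10, scoring="roc_auc").mean()
print(f"10-fold CV AUROC: {auroc:.2f}")
```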

Image-Based Search in Radiology: Identification of Brain Tumor Subtypes within Databases Using MRI-Based Radiomic Features.

von Reppert M, Chadha S, Willms K, Avesta A, Maleki N, Zeevi T, Lost J, Tillmanns N, Jekel L, Merkaj S, Lin M, Hoffmann KT, Aneja S, Aboian MS

pubmed logopapers · Jun 20 2025
Existing neuroradiology reference materials do not cover the full range of primary brain tumor presentations, and text-based medical image search engines are limited by the lack of consistent structure in radiology reports. To address this, an image-based search approach is introduced here, leveraging an institutional database to find reference MRIs visually similar to presented query cases. Two hundred ninety-five patients (mean age and standard deviation, 51 ± 20 years) with primary brain tumors who underwent surgical and/or radiotherapeutic treatment between 2000 and 2021 were included in this retrospective study. Semiautomated convolutional neural network-based tumor segmentation was performed, and radiomic features were extracted. The dataset was split into reference and query subsets, and dimensionality reduction was applied to cluster reference cases. Radiomic features extracted from each query case were projected onto the clustered reference cases, and nearest neighbors were retrieved. Retrieval performance was evaluated using mean average precision at k, and the best-performing dimensionality reduction technique was identified. Expert readers independently rated visual similarity using a 5-point Likert scale. t-Distributed stochastic neighbor embedding with 6 components was the highest-performing dimensionality reduction technique, with mean average precision at 5 ranging from 78% to 100% by tumor type. The top 5 retrieved reference cases showed high visual-similarity Likert scores with their corresponding query cases (76% rated 'similar' or 'very similar'). We introduce an image-based search method for exploring historical MR images of primary brain tumors and retrieving reference cases that closely resemble the query. Assessment by expert neuroradiologists, comparing tumor types and scoring visual similarity on a Likert scale, validates the effectiveness of this method.
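Mean average precision at k, the retrieval metric used here, averages a per-query AP@k over all queries. One common AP@k definition can be sketched as follows (the paper may normalize differently; the tumor labels below are illustrative):

```python
def average_precision_at_k(retrieved_labels, query_label, k=5):
    """AP@k for one query: precision accumulated at each rank where the
    retrieved case shares the query's tumor type, divided by k.
    (Definitions vary; some divide by the number of relevant items.)"""
    hits, score = 0, 0.0
    for rank, label in enumerate(retrieved_labels[:k], start=1):
        if label == query_label:
            hits += 1
            score += hits / rank  # precision at this rank
    return score / min(k, len(retrieved_labels)) if hits else 0.0

# Toy retrieval: 4 of the top 5 neighbors match the query's tumor type.
ap = average_precision_at_k(["GBM", "GBM", "meningioma", "GBM", "GBM"], "GBM")
# hits at ranks 1, 2, 4, 5 -> (1/1 + 2/2 + 3/4 + 4/5) / 5 = 0.71
```

Averaging this value over all query cases of a tumor type gives the per-type mAP@5 values reported above.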

BioTransX: A novel bi-former based hybrid model with bi-level routing attention for brain tumor classification with explainable insights.

Rajpoot R, Jain S, Semwal VB

pubmed logopapers · Jun 20 2025
Brain tumors, known for their life-threatening implications, underscore the urgency of precise and interpretable early detection. Expertise remains essential for accurate identification through MRI scans due to the intricacies involved. However, the growing recognition of automated detection systems holds the potential to enhance accuracy and improve interpretability. By consistently providing easily comprehensible results, these automated solutions could boost the overall efficiency and effectiveness of brain tumor diagnosis, promising a transformative era in healthcare. This paper introduces a new hybrid model, BioTransX, which uses a bi-former encoder mechanism, a dynamic sparse attention-based transformer, in conjunction with ensemble convolutional networks. Recognizing the importance of better contrast and data quality, we applied Contrast-Limited Adaptive Histogram Equalization (CLAHE) during the initial data processing stage. Additionally, to address the crucial aspect of model interpretability, we integrated Grad-CAM and Gradient Attention Rollout, which elucidate decisions by highlighting influential regions within medical images. Our hybrid deep learning model was primarily evaluated on the Kaggle MRI dataset for multi-class brain tumor classification, achieving a mean accuracy and F1-score of 99.29%. To validate its generalizability and robustness, BioTransX was further tested on two additional benchmark datasets, BraTS and Figshare, where it consistently maintained high performance across key evaluation metrics. The transformer-based hybrid model demonstrated promising performance in explainable identification and offered notable advantages in computational efficiency and memory usage. These strengths differentiate BioTransX from existing models in the literature and make it ideal for real-world deployment in resource-constrained clinical infrastructures.
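The CLAHE preprocessing step limits how much any one intensity level can be amplified by clipping the histogram before equalization. A simplified global (untiled) variant can be sketched in numpy; real CLAHE, e.g. OpenCV's `cv2.createCLAHE`, additionally operates on local tiles with bilinear interpolation:

```python
import numpy as np

def clipped_hist_equalize(img: np.ndarray, clip_limit: float = 0.01) -> np.ndarray:
    """Global contrast-limited histogram equalization: a simplified,
    untiled stand-in for CLAHE, for illustration only."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 255))
    hist = hist.astype(float) / img.size
    # Clip the histogram and redistribute the excess uniformly,
    # which caps the contrast amplification of dominant intensities.
    excess = np.clip(hist - clip_limit, 0, None).sum()
    hist = np.minimum(hist, clip_limit) + excess / 256
    cdf = np.cumsum(hist)
    lut = np.round(255 * cdf).astype(np.uint8)  # intensity remapping table
    return lut[img.astype(np.uint8)]

# Low-contrast synthetic "MRI slice": intensities squeezed into 40..89.
rng = np.random.default_rng(2)
mri_slice = rng.integers(40, 90, size=(128, 128)).astype(np.uint8)
enhanced = clipped_hist_equalize(mri_slice)
```

The narrow input range is stretched across the output range, raising contrast, while the clip limit prevents the runaway amplification plain histogram equalization can produce.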

Three-dimensional U-Net with transfer learning improves automated whole brain delineation from MRI brain scans of rats, mice, and monkeys.

Porter VA, Hobson BA, D'Almeida AJ, Bales KL, Lein PJ, Chaudhari AJ

pubmed logopapers · Jun 20 2025
Automated whole-brain delineation (WBD) techniques often struggle to generalize across pre-clinical studies due to variations in animal models, magnetic resonance imaging (MRI) scanners, and tissue contrasts. We developed a 3D U-Net neural network for WBD pre-trained on organophosphate intoxication (OPI) rat brain MRI scans. We used transfer learning (TL) to adapt this OPI-pretrained network to other animal models: a rat model of Alzheimer's disease (AD), a mouse model of tetramethylenedisulfotetramine (TETS) intoxication, and a titi monkey model of social bonding. We assessed the OPI-pretrained 3D U-Net across animal models under three conditions: (1) direct application to each dataset; (2) utilizing TL; and (3) training disease-specific U-Net models. For each condition, training dataset size (TDS) was optimized, and output WBDs were compared to manual segmentations for accuracy; values below are median [min-max]. The OPI-pretrained 3D U-Net (TDS = 100) achieved the best accuracy on the test OPI dataset, with a Dice coefficient (DC) of 0.987 [0.977-0.992] and a Hausdorff distance (HD) of 0.86 [0.55-1.27] mm. TL improved generalization across all models (AD, TDS = 40: DC = 0.987 [0.977-0.992], HD = 0.72 [0.54-1.00] mm; TETS, TDS = 10: DC = 0.992 [0.984-0.993], HD = 0.40 [0.31-0.50] mm; monkey, TDS = 8: DC = 0.977 [0.968-0.979], HD = 3.03 [2.19-3.91] mm), showing performance comparable to disease-specific networks. The OPI-pretrained 3D U-Net with TL thus achieved accuracy comparable to disease-specific networks with reduced training data (TDS ≤ 40 scans) across all models. Future work will focus on developing a multi-region delineation pipeline for pre-clinical MRI brain data, utilizing the proposed WBD as an initial step.
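The Dice coefficient used throughout these comparisons is a voxel-overlap measure, 2|A∩B| / (|A| + |B|); a minimal sketch on toy masks (the Hausdorff distance, which measures boundary disagreement, is omitted here):

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two toy "whole-brain" masks that mostly agree (shifted by one row).
pred = np.zeros((10, 10), dtype=bool)
truth = np.zeros((10, 10), dtype=bool)
pred[2:8, 2:8] = True   # 36 voxels
truth[3:9, 2:8] = True  # 36 voxels
dc = dice_coefficient(pred, truth)  # intersection 30 -> 60/72 ≈ 0.833
```

A DC near 0.99, as reported above, means predicted and manual masks disagree on only a small fraction of voxels.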

Generative deep-learning-model based contrast enhancement for digital subtraction angiography using a text-conditioned image-to-image model.

Takata T, Yamada K, Yamamoto M, Kondo H

pubmed logopapers · Jun 20 2025
Digital subtraction angiography (DSA) is an essential imaging technique in interventional radiology, enabling detailed visualization of blood vessels by subtracting pre- and post-contrast images. However, reduced contrast, whether accidental or intentional, can impair the clarity of vascular structures. This issue becomes particularly critical in patients with chronic kidney disease (CKD), where minimizing iodinated contrast is necessary to reduce the risk of contrast-induced nephropathy (CIN). This study explored the potential of a generative deep-learning-based contrast enhancement technique for DSA. A text-conditioned image-to-image model was developed using Stable Diffusion, augmented with ControlNet to reduce hallucinations and with Low-Rank Adaptation for model fine-tuning. A total of 1207 DSA series were used for training and testing, with additional low-contrast images generated through data augmentation. The model was trained using tagged text labels and evaluated using metrics such as root mean square (RMS) contrast, Michelson contrast, signal-to-noise ratio (SNR), and entropy. Evaluation results indicated significant improvements: RMS contrast, Michelson contrast, and entropy increased from 7.91 to 17.7, 0.875 to 0.992, and 3.60 to 5.60, respectively, reflecting enhanced detail. However, SNR decreased from 21.3 to 8.50, indicating increased noise. This study demonstrated the feasibility of deep-learning-based contrast enhancement for DSA images and highlights the potential of generative deep learning models to improve angiographic imaging. Further refinements, particularly in artifact suppression and clinical validation, are necessary for practical implementation in medical settings.
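The four evaluation metrics can all be computed directly from pixel intensities. A minimal numpy sketch on a synthetic image follows; SNR is taken here as the simple mean/std convention, which may differ from the paper's definition:

```python
import numpy as np

def contrast_metrics(img: np.ndarray) -> dict:
    """RMS contrast, Michelson contrast, SNR (mean/std convention),
    and Shannon entropy of the intensity histogram."""
    x = img.astype(float)
    p = np.histogram(x, bins=256)[0].astype(float)
    p = p[p > 0] / p.sum()  # normalized histogram, zero bins dropped
    return {
        "rms_contrast": float(x.std()),
        "michelson": float((x.max() - x.min()) / (x.max() + x.min())),
        "snr": float(x.mean() / x.std()),
        "entropy": float(-(p * np.log2(p)).sum()),  # bits
    }

# Synthetic stand-in for a DSA frame.
rng = np.random.default_rng(3)
dsa = rng.integers(60, 200, size=(64, 64))
m = contrast_metrics(dsa)
```

The paper's observed trade-off follows directly from these definitions: widening the intensity spread raises RMS contrast, Michelson contrast, and entropy, but also inflates the standard deviation in the SNR denominator.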

Combination of 2D and 3D nnU-Net for ground glass opacity segmentation in CT images of Post-COVID-19 patients.

Nguyen QH, Hoang DA, Pham HV

pubmed logopapers · Jun 20 2025
The COVID-19 pandemic has played a significant role in global health, highlighting the imperative of effectively managing post-recovery symptoms. Within this context, ground-glass opacity (GGO) in lung computed tomography (CT) scans emerges as a critical indicator for early intervention. A recent challenge invited researchers to refine techniques for GGO segmentation, scrutinizing and juxtaposing cutting-edge methods for analyzing lung CT images of patients recovering from COVID-19. While many methods in this challenge utilize the nnU-Net architecture, the general approach has not fully addressed GGO characteristics such as infected-area delineation, irregular shapes, and fuzzy boundaries. This research develops a specialized machine learning algorithm, advancing the nnU-Net framework to accurately segment GGO in lung CT scans of post-COVID-19 patients. We propose a novel two-stage image segmentation method based on nnU-Net 2D and 3D models, comprising lung and shadow-image segmentation and incorporating an attention mechanism. Combining the models enhances automatic segmentation and improves accuracy when different loss functions are used during training. Experimental results show that the proposed model's DSC score ranks fifth among the compared results, and its sensitivity is the second highest, indicating a higher true segmentation rate than most of the other methods. The proposed method achieved a Hausdorff95 of 54.566, a surface Dice of 0.7193, a sensitivity of 0.7528, and a specificity of 0.7749. Compared with state-of-the-art methods, the proposed model segments infected areas considerably better. The model has been deployed in a real-world case study using the combination of 2D and 3D models.
It demonstrates the capacity to detect lung lesions comprehensively and correctly. Additionally, the boundary loss function helps achieve more precise segmentation of low-resolution images, and initially segmenting the lung area reduces the volume of images requiring processing, shortening training.
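The reported sensitivity and specificity are voxel-wise rates computed over the segmentation masks; a minimal sketch on toy masks (not the authors' pipeline):

```python
import numpy as np

def seg_sensitivity_specificity(pred: np.ndarray, truth: np.ndarray):
    """Voxel-wise sensitivity (true-positive rate over lesion voxels)
    and specificity (true-negative rate over background voxels)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    return tp / (tp + fn), tn / (tn + fp)

# Toy GGO masks: the prediction misses one column of the lesion.
truth = np.zeros((8, 8), dtype=bool)
pred = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True  # 16 lesion voxels
pred[2:6, 2:5] = True   # 12 of them predicted, no false positives
sens, spec = seg_sensitivity_specificity(pred, truth)  # sens = 12/16 = 0.75
```

A high sensitivity with moderate specificity, as reported above, corresponds to a model that captures most GGO voxels at the cost of some over-segmentation of background.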