Page 112 of 4003995 results

MR-AIV reveals in vivo brain-wide fluid flow with physics-informed AI.

Toscano JD, Guo Y, Wang Z, Vaezi M, Mori Y, Karniadakis GE, Boster KAS, Kelley DH

PubMed · Aug 1, 2025
The circulation of cerebrospinal and interstitial fluid plays a vital role in clearing metabolic waste from the brain, and its disruption has been linked to neurological disorders. However, directly measuring brain-wide fluid transport, especially in the deep brain, has remained elusive. Here, we introduce magnetic resonance artificial intelligence velocimetry (MR-AIV), a framework featuring a specialized physics-informed architecture and optimization method that reconstructs three-dimensional fluid velocity fields from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). MR-AIV unveils brain-wide velocity maps while providing estimates of tissue permeability and pressure fields, quantities inaccessible to other methods. Applied to the brain, MR-AIV reveals a functional landscape of interstitial and perivascular flow, quantitatively distinguishing slow diffusion-driven transport (∼0.1 µm/s) from rapid advective flow (∼3 µm/s). This approach enables new investigations into brain clearance mechanisms and fluid dynamics in health and disease, with broad potential applications to other porous media systems, from geophysics to tissue mechanics.
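As a rough illustration of the diffusive-versus-advective distinction drawn in this abstract, a Péclet number separates the two regimes. The diffusivity and length scale below are assumptions chosen for illustration, not values from the paper.

```python
# Back-of-envelope Peclet numbers for the two transport regimes reported
# above. D and L are illustrative assumptions, not values from the paper.
D = 1e-10   # small-solute diffusivity in tissue, m^2/s (assumed)
L = 100e-6  # characteristic transport length scale, m (assumed)

def peclet(v, L=L, D=D):
    """Peclet number Pe = v*L/D; Pe << 1 means diffusion dominates."""
    return v * L / D

pe_slow = peclet(0.1e-6)  # ~0.1 um/s interstitial flow
pe_fast = peclet(3e-6)    # ~3 um/s perivascular flow
```

Under these assumptions the slow regime gives Pe well below 1 (diffusion-dominated) while the fast regime gives Pe above 1 (advection-dominated), consistent with the paper's qualitative distinction.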

Brain Age Prediction: Deep Models Need a Hand to Generalize.

Rajabli R, Soltaninejad M, Fonov VS, Bzdok D, Collins DL

PubMed · Aug 1, 2025
Predicting brain age from T1-weighted MRI is a promising marker for understanding brain aging and its associated conditions. While deep learning models have shown success in reducing the mean absolute error (MAE) of predicted brain age, concerns about robust and accurate generalization in new data limit their clinical applicability. The large number of trainable parameters, combined with limited medical imaging training data, contributes to this challenge, often resulting in a generalization gap: a significant discrepancy between model performance on training data and on unseen data. In this study, we assess a deep model, SFCN-reg, based on the VGG-16 architecture, and address the generalization gap through comprehensive preprocessing, extensive data augmentation, and model regularization. Using training data from the UK Biobank, we demonstrate substantial improvements in model performance. Specifically, our approach reduces the generalization MAE by 47% (from 5.25 to 2.79 years) in the Alzheimer's Disease Neuroimaging Initiative dataset and by 12% (from 4.35 to 3.75 years) in the Australian Imaging, Biomarker and Lifestyle dataset. Furthermore, we achieve up to 13% reduction in scan-rescan error (from 0.80 to 0.70 years) while enhancing the model's robustness to registration errors. Feature importance maps highlight anatomical regions used to predict age. These results highlight the critical role of high-quality preprocessing and robust training techniques in improving accuracy and narrowing the generalization gap, both necessary steps toward the clinical use of brain age prediction models. Our study makes valuable contributions to neuroimaging research by offering a potential pathway to improve the clinical applicability of deep learning models.
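The generalization gap discussed in this abstract is simply the difference between out-of-sample and in-sample error. A minimal sketch with synthetic ages (not study data):

```python
import numpy as np

# MAE and "generalization gap" computation; the ages below are synthetic
# illustrations, not data from the study.
def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

train_true, train_pred = [60, 70, 80], [61, 69, 82]   # in-sample
test_true,  test_pred  = [60, 70, 80], [64, 66, 85]   # held-out

mae_train = mae(train_true, train_pred)
mae_test = mae(test_true, test_pred)
gap = mae_test - mae_train   # the quantity the paper's methods aim to shrink
```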

Transparent brain tumor detection using DenseNet169 and LIME.

Abraham LA, Palanisamy G, Veerapu G

PubMed · Aug 1, 2025
A crucial area of research in medical imaging is brain tumor classification, which greatly aids diagnosis and facilitates treatment planning. This paper proposes DenseNet169-LIME-TumorNet, a deep learning model that combines DenseNet169 with LIME to boost both the performance and the interpretability of brain tumor classification. The model was trained and evaluated on the publicly available Brain Tumor MRI Dataset containing 2,870 images spanning three tumor types. DenseNet169-LIME-TumorNet achieves a classification accuracy of 98.78%, outperforming widely used architectures including Inception V3, ResNet50, MobileNet V2, EfficientNet variants, and other DenseNet configurations. The integration of LIME provides visual explanations that enhance transparency and reliability in clinical decision-making. Furthermore, the model demonstrates minimal computational overhead, enabling faster inference and deployment in resource-constrained clinical environments, thereby highlighting its practical utility for real-time diagnostic support. Future work should focus on improving generalization through multi-modal learning, hybrid deep learning architectures, and real-time applications for AI-assisted diagnosis.
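LIME's core idea, perturb the input, query the model, and fit a locally weighted linear surrogate, can be sketched without the library. The toy model and 2x2 patch grid below are stand-ins for illustration, not DenseNet169 or the paper's setup.

```python
import numpy as np

# Minimal LIME-style explanation for an image classifier: mask grid patches,
# query the model on perturbed copies, and fit a weighted linear surrogate
# whose coefficients score patch importance.
rng = np.random.default_rng(0)

def toy_model(img):
    # Toy stand-in: the class score depends only on the top-left patch.
    return img[:8, :8].mean()

img = rng.random((16, 16))
patches = [(0, 0), (0, 8), (8, 0), (8, 8)]  # 2x2 grid of 8x8 patches

n = 200
masks = rng.integers(0, 2, size=(n, len(patches)))   # 1 = patch kept
scores = np.empty(n)
for i, m in enumerate(masks):
    pert = img.copy()
    for keep, (r, c) in zip(m, patches):
        if not keep:
            pert[r:r+8, c:c+8] = 0.0
    scores[i] = toy_model(pert)

# Weighted least squares: perturbations closer to the original image
# (more patches kept) get higher weight, as in LIME's locality kernel.
w = np.exp(-(len(patches) - masks.sum(axis=1)) / 2.0)
X = np.hstack([np.ones((n, 1)), masks])
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(X * sw[:, None], scores * sw, rcond=None)
importance = coef[1:]  # one weight per patch; largest marks the patch driving the score
```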

Lumbar and pelvic CT image segmentation based on cross-scale feature fusion and linear self-attention mechanism.

Li C, Chen L, Liu Q, Teng J

PubMed · Aug 1, 2025
The lumbar spine and pelvis are critical stress-bearing structures of the human body, and their rapid and accurate segmentation plays a vital role in clinical diagnosis and intervention. However, conventional CT imaging poses significant challenges due to the low contrast of sacral and bilateral hip tissues and the complex and highly similar intervertebral space structures within the lumbar spine. To address these challenges, we propose a general-purpose segmentation network that integrates a cross-scale feature fusion strategy with a linear self-attention mechanism. The proposed network effectively extracts multi-scale features and fuses them along the channel dimension, enabling both structural and boundary information of lumbar and pelvic regions to be captured within the encoder-decoder architecture. Furthermore, we introduce a linear mapping strategy to approximate the traditional attention matrix with a low-rank representation, allowing the linear attention mechanism to significantly reduce computational complexity while maintaining segmentation accuracy for vertebrae and pelvic bones. Comparative and ablation experiments conducted on the CTSpine1K and CTPelvic1K datasets demonstrate that our method achieves improvements of 1.5% in Dice Similarity Coefficient (DSC) and 2.6% in Hausdorff Distance (HD) over state-of-the-art models, validating the effectiveness of our approach in enhancing boundary segmentation quality and segmentation accuracy in homogeneous anatomical regions.
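The low-rank linear attention idea can be sketched as replacing softmax(QK^T)V, which costs O(n^2) in sequence length, with a factored phi(Q)(phi(K)^T V) form that costs O(n). The elu-plus-one feature map below is one common choice from the linear-attention literature and may differ from this paper's exact mapping.

```python
import numpy as np

# Kernelized linear attention sketch: the (n x n) attention matrix is never
# materialized; phi(K)^T V is aggregated once into a small (d x d) matrix.
def phi(x):
    return np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1: a positive feature map

def linear_attention(Q, K, V):
    q, k = phi(Q), phi(K)           # (n, d) each
    kv = k.T @ V                    # (d, d): O(n d^2) instead of O(n^2 d)
    z = q @ k.sum(axis=0)           # (n,): per-row normalizer
    return (q @ kv) / z[:, None]

rng = np.random.default_rng(0)
n, d = 64, 8
Q, K, V = rng.standard_normal((3, n, d))
out = linear_attention(Q, K, V)
```

Because the feature map is positive, each output row is a convex combination of the rows of V, matching the normalization property of softmax attention.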

Reference charts for first-trimester placental volume derived using OxNNet.

Mathewlynn S, Starck LN, Yin Y, Soltaninejad M, Swinburne M, Nicolaides KH, Syngelaki A, Contreras AG, Bigiotti S, Woess EM, Gerry S, Collins S

PubMed · Aug 1, 2025
To establish a comprehensive reference range for OxNNet-derived first-trimester placental volume (FTPV), based on values observed in healthy pregnancies. Data were obtained from the First Trimester Placental Ultrasound Study, an observational cohort study in which three-dimensional placental ultrasound imaging was performed between 11 + 2 and 14 + 1 weeks' gestation, alongside otherwise routine care. A subgroup of singleton pregnancies resulting in term live birth, without neonatal unit admission or major chromosomal or structural abnormality, was included. Exclusion criteria were fetal growth restriction, maternal diabetes mellitus, hypertensive disorders of pregnancy or other maternal medical conditions (e.g. chronic hypertension, antiphospholipid syndrome, systemic lupus erythematosus). Placental images were processed using the OxNNet toolkit, a software solution based on a fully convolutional neural network, for automated placental segmentation and volume calculation. Quantile regression and the lambda-mu-sigma (LMS) method were applied to model the distribution of FTPV, using both crown-rump length (CRL) and gestational age as predictors. Model fit was assessed using the Akaike information criterion (AIC), and centile curves were constructed for visual inspection. The cohort comprised 2547 cases. The distribution of FTPV across gestational ages was positively skewed, with variation in the distribution at different gestational timepoints. In model comparisons, the LMS method yielded lower AIC values compared with quantile regression models. For predicting FTPV from CRL, the LMS model with the Sinh-Arcsinh distribution achieved the best performance, with the lowest AIC value. For gestational-age-based prediction, the LMS model with the Box-Cox Cole and Green original distribution achieved the lowest AIC value. The LMS models were selected to construct centile charts for FTPV based on both CRL and gestational age.
Evaluation of the centile charts revealed strong agreement between predicted and observed centiles, with minimal deviations. Both models demonstrated excellent calibration, and the Z-scores derived using each of the models confirmed normal distribution. This study established reference ranges for FTPV based on both CRL and gestational age in healthy pregnancies. The LMS method provided the best model fit, demonstrating excellent calibration and minimal deviations between predicted and observed centiles. These findings should facilitate the exploration of FTPV as a potential biomarker for adverse pregnancy outcome and provide a foundation for future research into its clinical applications. © 2025 The Author(s). Ultrasound in Obstetrics & Gynecology published by John Wiley & Sons Ltd on behalf of International Society of Ultrasound in Obstetrics and Gynecology.
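The LMS construction behind such centile charts maps an age-specific skewness L, median M, and coefficient of variation S to any desired centile. The L/M/S values below are made-up illustrations, not the paper's fitted coefficients (which use Sinh-Arcsinh and Box-Cox Cole and Green distributions fitted to the cohort).

```python
import numpy as np

# Classic LMS (Cole & Green) centile formulas: for L != 0,
#   centile:  y = M * (1 + L*S*z)^(1/L)
#   z-score:  z = ((y/M)^L - 1) / (L*S)
def lms_centile(L, M, S, z):
    if abs(L) < 1e-8:
        return M * np.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

def lms_zscore(y, L, M, S):
    """Convert an observed volume to a z-score under the LMS model."""
    if abs(L) < 1e-8:
        return np.log(y / M) / S
    return ((y / M) ** L - 1.0) / (L * S)

L, M, S = 0.4, 60.0, 0.25   # illustrative values only (volume in cm^3)
z95 = 1.6449                # standard normal 95th percentile
p95 = lms_centile(L, M, S, z95)
```

The z-score transform is what makes centile charts checkable: as in the abstract, a well-calibrated chart should yield normally distributed z-scores in the reference population.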

Weakly Supervised Intracranial Aneurysm Detection and Segmentation in MR angiography via Multi-task UNet with Vesselness Prior

Erin Rainville, Amirhossein Rasoulian, Hassan Rivaz, Yiming Xiao

arXiv preprint · Aug 1, 2025
Intracranial aneurysms (IAs) are abnormal dilations of cerebral blood vessels that, if ruptured, can lead to life-threatening consequences. However, their small size and soft contrast in radiological scans often make it difficult to perform accurate and efficient detection and morphological analyses, which are critical in the clinical care of the disorder. Furthermore, the lack of large public datasets with voxel-wise expert annotations poses challenges for developing deep learning algorithms to address these issues. Therefore, we proposed a novel weakly supervised 3D multi-task UNet that integrates vesselness priors to jointly perform aneurysm detection and segmentation in time-of-flight MR angiography (TOF-MRA). Specifically, to robustly guide IA detection and segmentation, we employ the popular Frangi vesselness filter to derive soft cerebrovascular priors for both the network input and an attention block, conducting segmentation from the decoder and detection from an auxiliary branch. We train our model on the Lausanne dataset with coarse ground truth segmentation, and evaluate it on the test set with refined labels from the same database. To further assess our model's generalizability, we also validate it externally on the ADAM dataset. Our results demonstrate the superior performance of the proposed technique over state-of-the-art methods for aneurysm segmentation (Dice = 0.614, 95% HD = 1.38 mm) and detection (false positive rate = 1.47, sensitivity = 92.9%).
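A single-scale 2D sketch of the Hessian-eigenvalue idea behind Frangi's vesselness filter, which scores tube-like structures where the larger-magnitude eigenvalue is strongly negative and the smaller one is near zero. Real pipelines, including the one above, run this in 3D across multiple Gaussian scales; this minimal version only illustrates the principle.

```python
import numpy as np

# Minimal 2D Frangi-style vesselness from Hessian eigenvalues lam1, lam2
# with |lam1| <= |lam2|: bright ridges give lam2 << 0 and lam1 ~ 0.
def vesselness2d(img, beta=0.5, c=15.0):
    gy, gx = np.gradient(img.astype(float))
    hyy, _ = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    # Eigenvalues of the symmetric 2x2 Hessian at every pixel.
    tr = hxx + hyy
    det = hxx * hyy - hxy * hxy
    disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    swap = np.abs(l1) > np.abs(l2)          # order by magnitude
    lam1 = np.where(swap, l2, l1)
    lam2 = np.where(swap, l1, l2)
    rb2 = (lam1 / np.where(lam2 == 0, 1e-12, lam2)) ** 2   # blobness measure
    s2 = lam1 ** 2 + lam2 ** 2                             # structureness
    v = np.exp(-rb2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
    return np.where(lam2 < 0, v, 0.0)   # bright vessels on dark background

# Synthetic bright horizontal "vessel" on a dark background.
img = np.zeros((32, 32))
img[15:17, :] = 100.0
v = vesselness2d(img)
```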

Enhanced Detection of Age-Related and Cognitive Declines Using Automated Hippocampal-To-Ventricle Ratio in Alzheimer's Patients.

Fernandez-Lozano S, Fonov V, Schoemaker D, Pruessner J, Potvin O, Duchesne S, Collins DL

PubMed · Aug 1, 2025
The hippocampal-to-ventricle ratio (HVR) is a biomarker of medial temporal atrophy, particularly useful in the assessment of neurodegeneration in diseases such as Alzheimer's disease (AD). To minimize subjectivity and inter-rater variability, an automated, accurate, precise, and reliable segmentation technique for the hippocampus (HC) and surrounding cerebrospinal fluid (CSF)-filled spaces, such as the temporal horns of the lateral ventricles, is essential. We trained and evaluated three automated methods for the segmentation of both HC and CSF (Multi-Atlas Label Fusion (MALF), Nonlinear Patch-Based Segmentation (NLPB), and a Convolutional Neural Network (CNN)). We then evaluated these methods, including the widely used FreeSurfer technique, using baseline T1w MRIs of 1641 participants from the AD Neuroimaging Initiative study with various degrees of atrophy associated with cognitive status, ranging from cognitively healthy to clinically probable AD. Our gold standard consisted of manual segmentations of HC and CSF from 80 cognitively healthy individuals. We calculated HC volumes and HVR and compared all methods in terms of segmentation reliability, similarity across methods, sensitivity in detecting between-group differences, and associations with age, scores of the learning subtest of the Rey Auditory Verbal Learning Test (RAVLT), and the Alzheimer's Disease Assessment Scale 13 (ADAS13) scores. Cross validation demonstrated that the CNN method yielded more accurate HC and CSF segmentations than MALF and NLPB, demonstrating higher volumetric overlap (Dice Kappa = 0.94) and correlation (rho = 0.99) with the manual labels. It was also the most reliable method in clinical data application, showing minimal failures. Our comparisons yielded high correlations between FreeSurfer, CNN, and NLPB volumetric values.
HVR yielded higher control:AD effect sizes than HC volumes among all segmentation methods, reinforcing the significance of HVR in clinical distinction. The association with age was significantly stronger for HVR compared to HC volumes for all methods except FreeSurfer. Memory associations with HC volumes or HVR were only significant for individuals with mild cognitive impairment. Finally, the HC volumes and HVR showed comparable negative associations with ADAS13, particularly in the mild cognitive impairment cohort. This study provides an evaluation of automated segmentation methods centered on estimating HVR, emphasizing the superior performance of a CNN-based algorithm. The findings underscore the pivotal role of accurate segmentation in HVR calculations for precise clinical applications, contributing valuable insights into medial temporal lobe atrophy in neurodegenerative disorders, especially AD.
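Given a segmentation, HVR reduces to a ratio of label volumes. The label codes and the normalization below are assumptions for illustration; the study's segmentation protocol defines the exact convention.

```python
import numpy as np

# Hippocampal-to-ventricle ratio from a label volume, assuming hypothetical
# label codes (1 = hippocampus, 2 = temporal-horn CSF) and a known voxel size.
def hvr(labels, voxel_mm3=1.0, hc_label=1, csf_label=2):
    hc = np.count_nonzero(labels == hc_label) * voxel_mm3
    csf = np.count_nonzero(labels == csf_label) * voxel_mm3
    # One common normalization: HC volume over total HC + CSF volume,
    # so atrophy (shrinking HC, growing CSF) drives the ratio toward 0.
    return hc / (hc + csf)

labels = np.zeros((10, 10), dtype=int)
labels[:3, :] = 1   # 30 hippocampus voxels
labels[5, :] = 2    # 10 CSF voxels
ratio = hvr(labels)
```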

Anatomical Considerations for Achieving Optimized Outcomes in Individualized Cochlear Implantation.

Timm ME, Avallone E, Timm M, Salcher RB, Rudnik N, Lenarz T, Schurzig D

PubMed · Aug 1, 2025
Machine learning models can assist with the selection of electrode arrays required for optimal insertion angles. Cochlear implantation is a successful therapy in patients with severe to profound hearing loss. The effectiveness of a cochlear implant depends on precise insertion and positioning of the electrode array within the cochlea, which is known for its variability in shape and size. Preoperative imaging such as CT or MRI plays a significant role in evaluating cochlear anatomy and planning the surgical approach to optimize outcomes. In this study, preoperative and postoperative CT and CBCT data of 558 cochlear implant patients were analyzed in terms of the influence of anatomical factors and insertion depth on the resulting insertion angle. Machine learning models can predict the insertion depths needed for optimal insertion angles, with performance improving when cochlear dimensions are included in the models. A simple linear regression using just the insertion depth explained 88% of the variability, whereas adding cochlear length or diameter and width further improved predictions up to 94%.
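The regression comparison in this abstract, insertion depth alone versus depth plus cochlear dimensions, can be sketched on synthetic data. The data-generating coefficients below are invented; the study's actual 558-patient data and fitted R^2 values are not reproduced here.

```python
import numpy as np

# Compare R^2 of a depth-only model vs. depth + cochlear diameter,
# on synthetic data where the angle truly depends on both.
rng = np.random.default_rng(1)
n = 200
depth = rng.uniform(18, 28, n)       # electrode insertion depth, mm (synthetic)
diameter = rng.uniform(8, 10, n)     # cochlear basal diameter, mm (synthetic)
angle = 30 * depth - 40 * diameter + rng.normal(0, 20, n)  # synthetic angles

def r2(X, y):
    """In-sample R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_depth = r2(depth[:, None], angle)
r2_both = r2(np.column_stack([depth, diameter]), angle)
```

As in the abstract, adding an anatomical covariate that genuinely influences the outcome raises the explained variance above the depth-only baseline.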

FOCUS-DWI improves prostate cancer detection through deep learning reconstruction with IQMR technology.

Zhao Y, Xie XL, Zhu X, Huang WN, Zhou CW, Ren KX, Zhai RY, Wang W, Wang JW

PubMed · Aug 1, 2025
This study explored the effects of Intelligent Quick Magnetic Resonance (IQMR) image post-processing on image quality in Field of View Optimized and Constrained Single-Shot Diffusion-Weighted Imaging (FOCUS-DWI) sequences for prostate cancer detection, and assessed its efficacy in distinguishing malignant from benign lesions. The clinical data and MRI images from 62 patients with prostate masses (31 benign and 31 malignant) were retrospectively analyzed. Axial T2-weighted imaging with fat saturation (T2WI-FS) and FOCUS-DWI sequences were acquired, and the FOCUS-DWI images were processed using the IQMR post-processing system to generate IQMR-FOCUS-DWI images. Two independent radiologists performed subjective image quality scoring, Prostate Imaging Reporting and Data System (PI-RADS) grading, benign-versus-malignant diagnosis, and diagnostic confidence scoring for the FOCUS-DWI and IQMR-FOCUS-DWI images. Additionally, quantitative analyses, specifically the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), were conducted using T2WI-FS as the reference standard. The apparent diffusion coefficients (ADCs) of malignant and benign lesions were compared between the two imaging sequences. Spearman correlation coefficients were calculated to evaluate the associations between diagnostic confidence scores and diagnostic accuracy rates of the two sequence groups, as well as between the ADC values of malignant lesions and Gleason grading in the two sequence groups. Receiver operating characteristic (ROC) curves were utilized to assess the efficacy of ADC in distinguishing lesions. The qualitative analysis revealed that IQMR-FOCUS-DWI images showed significantly better noise suppression, reduced geometric distortion, and enhanced overall quality relative to the FOCUS-DWI images (P < 0.001).
There was no significant difference in the PI-RADS scores between IQMR-FOCUS-DWI and FOCUS-DWI images (P = 0.0875), while the diagnostic confidence scores of IQMR-FOCUS-DWI sequences were markedly higher than those of FOCUS-DWI sequences (P = 0.0002). The diagnostic results of the FOCUS-DWI sequences for benign and malignant prostate lesions were consistent with the pathological results (P < 0.05), as were those of the IQMR-FOCUS-DWI sequences (P < 0.05). The quantitative analysis indicated that the PSNR, SSIM, and ADC values were markedly greater in IQMR-FOCUS-DWI images than in FOCUS-DWI images (P < 0.01). In both imaging sequences, benign lesions exhibited markedly greater ADC values than malignant lesions (P < 0.001). The diagnostic confidence scores of both sequence groups were significantly positively correlated with the diagnostic accuracy rate. In malignant lesions, the ADC values of the FOCUS-DWI sequences showed moderate negative correlations with Gleason grading, while the ADC values of the IQMR-FOCUS-DWI sequences were strongly negatively associated with Gleason grading. ROC curves indicated the superior diagnostic performance of IQMR-FOCUS-DWI (AUC = 0.941) compared to FOCUS-DWI (AUC = 0.832) for differentiating prostate lesions (P = 0.0487). IQMR-FOCUS-DWI significantly enhances image quality and improves diagnostic accuracy for benign and malignant prostate lesions compared to conventional FOCUS-DWI.
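ROC AUCs like those reported above can be computed directly from scores via the Mann-Whitney identity: the AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. The ADC values below are synthetic illustrations; in the study, benign lesions had the higher ADC, so ADC is the score and benign is treated as the positive class.

```python
import numpy as np

# AUC via the Mann-Whitney identity: fraction of (positive, negative) pairs
# where the positive case scores higher, with ties counted as half.
def auc(pos_scores, neg_scores):
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

benign_adc = [1.4, 1.5, 1.3, 1.6]   # synthetic ADCs, x10^-3 mm^2/s
malig_adc = [0.8, 0.9, 1.35, 0.7]   # synthetic ADCs, x10^-3 mm^2/s
a = auc(benign_adc, malig_adc)
```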

An RF-based end-to-end Breast Cancer Prediction algorithm.

Win KN

PubMed · Aug 1, 2025
Breast cancer has become the primary cause of cancer-related deaths among women. Early detection and accurate prediction of breast cancer play a crucial role in improving quality of life. Many scientists have concentrated on developing algorithms and computer-aided diagnosis applications. Although much work has been published, research on predicting diagnoses directly from supplied breast cancer features remains comparatively rare. In this regard, this paper proposes RF-BCP, a Breast Cancer Prediction algorithm based on Random Forest that takes such features as input to predict cancer. For the experiments, two datasets were utilized, namely the Breast Cancer dataset and a curated mammography dataset, and the accuracy of the proposed algorithm was compared with that of SVM, Gaussian NB, and KNN algorithms. Experimental results show that the proposed algorithm predicts well and outperforms the other machine learning algorithms, supporting decision-making.
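A minimal sketch of a Random Forest breast cancer classifier in the spirit of RF-BCP, using scikit-learn's built-in Wisconsin diagnostic dataset rather than the datasets used in the paper; the hyperparameters are illustrative defaults, not the authors' configuration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Features are tabular tumor measurements; the forest votes over many
# decision trees trained on bootstrapped samples and random feature subsets.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)   # held-out accuracy
```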
