Page 40 of 1621612 results

Automated biometry for assessing cephalopelvic disproportion in 3D 0.55T fetal MRI at term

Uus, A., Bansal, S., Gerek, Y., Waheed, H., Neves Silva, S., Aviles Verdera, J., Kyriakopoulou, V., Betti, L., Jaufuraully, S., Hajnal, J. V., Siasakos, D., David, A., Chandiramani, M., Hutter, J., Story, L., Rutherford, M.

medRxiv preprint · Aug 21 2025
Fetal MRI offers detailed three-dimensional visualisation of both fetal and maternal pelvic anatomy, allowing for assessment of the risk of cephalopelvic disproportion and obstructed labour. However, conventional measurements of fetal and pelvic proportions and their relative positioning are typically performed manually in 2D, making them time-consuming, subject to inter-observer variability, and rarely integrated into routine clinical workflows. In this work, we present the first fully automated pipeline for pelvic and fetal head biometry in T2-weighted fetal MRI at late gestation. The method employs deep learning-based localisation of anatomical landmarks in 3D reconstructed MRI images, followed by computation of 12 standard linear and circumference measurements commonly used in the assessment of cephalopelvic disproportion. Landmark detection is based on 3D UNet models within the MONAI framework, trained on 57 semi-manually annotated datasets. The full pipeline is quantitatively validated on 10 test cases. Furthermore, we demonstrate its clinical feasibility and relevance by applying it to 206 fetal MRI scans (36-40 weeks gestation) from the MiBirth study, which investigates prediction of the mode of delivery using low-field MRI.
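The abstract describes computing 12 standard linear and circumference measurements from detected landmarks. A minimal sketch of what such biometry could look like, assuming hypothetical landmark coordinates and using Ramanujan's ellipse approximation for a circumference measure (the paper's exact formulas are not given):

```python
import numpy as np

def landmark_distance(p1, p2):
    """Euclidean distance between two 3D landmark coordinates (mm)."""
    return float(np.linalg.norm(np.asarray(p1, dtype=float) - np.asarray(p2, dtype=float)))

def ellipse_circumference(d1, d2):
    """Approximate circumference of an ellipse with diameters d1 and d2,
    using Ramanujan's first approximation."""
    a, b = d1 / 2.0, d2 / 2.0
    return float(np.pi * (3 * (a + b) - np.sqrt((3 * a + b) * (a + 3 * b))))

# Hypothetical landmark coordinates (mm) -- illustrative values only.
biparietal = landmark_distance([10, 0, 0], [104, 0, 0])        # ~94 mm BPD
occipitofrontal = landmark_distance([0, -60, 0], [0, 58, 0])   # ~118 mm OFD
head_circumference = ellipse_circumference(biparietal, occipitofrontal)
```

The same two helpers would cover both the linear and the circumference-type measurements once landmark pairs are defined per measurement.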

Hessian-based lightweight neural network for brain vessel segmentation on a minimal training dataset

Alexandra Bernadotte, Elfimov Nikita, Mikhail Shutov, Ivan Menshikov

arXiv preprint · Aug 21 2025
Accurate segmentation of blood vessels in brain magnetic resonance angiography (MRA) is essential for successful surgical procedures, such as aneurysm repair or bypass surgery. Currently, annotation is primarily performed through manual segmentation or classical methods, such as the Frangi filter, which often lack sufficient accuracy. Neural networks have emerged as powerful tools for medical image segmentation, but their development depends on well-annotated training datasets. However, there is a notable lack of publicly available MRA datasets with detailed brain vessel annotations. To address this gap, we propose HessNet, a novel lightweight semi-supervised neural network that incorporates Hessian matrices for 3D segmentation of complex tubular structures. HessNet has only 6,000 parameters and can run on a CPU, significantly reducing the resource requirements for training. Its vessel segmentation accuracy on a minimal training dataset reaches state-of-the-art results. Using HessNet, we created a large, semi-manually annotated brain vessel dataset of 200 brain MRA images based on the IXI dataset. Annotation was performed by three experts under the supervision of three neurovascular surgeons after applying HessNet; this yields high vessel segmentation accuracy and allows the experts to focus only on the most complex and important cases. The dataset is available at https://git.scinalytics.com/terilat/VesselDatasetPartly.
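HessNet builds on Hessian matrices, whose eigenvalues also underpin classical vesselness measures such as the Frangi filter mentioned above. A rough 2D illustration of Hessian-eigenvalue tubularness (not the authors' code; the finite-difference Hessian and the beta and c parameters are assumptions here):

```python
import numpy as np

def hessian_2d(img):
    """Second-derivative (Hessian) components of a 2D image via finite differences."""
    gy, gx = np.gradient(img.astype(float))
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    return hxx, hxy, hyy

def vesselness_2d(img, beta=0.5, c=0.5):
    """Frangi-style tubularness from Hessian eigenvalues (bright ridges)."""
    hxx, hxy, hyy = hessian_2d(img)
    # Eigenvalues of the symmetric 2x2 Hessian at every pixel.
    mu = (hxx + hyy) / 2.0
    delta = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    l1, l2 = mu + delta, mu - delta
    # Order eigenvalues so that |l1| <= |l2|.
    swap = np.abs(l1) > np.abs(l2)
    l1s = np.where(swap, l2, l1)
    l2s = np.where(swap, l1, l2)
    rb = np.abs(l1s) / (np.abs(l2s) + 1e-12)  # blob-vs-tube ratio
    s = np.sqrt(l1s ** 2 + l2s ** 2)          # overall second-order strength
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    v[l2s > 0] = 0.0  # bright tubes require a strongly negative l2
    return v
```

HessNet's contribution is to feed such Hessian information into a tiny learned network rather than using a fixed response like this one.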

Vision Transformer Autoencoders for Unsupervised Representation Learning: Revealing Novel Genetic Associations through Learned Sparse Attention Patterns

Islam, S. R., He, W., Xie, Z., Zhi, D.

medRxiv preprint · Aug 21 2025
The discovery of genetic loci associated with brain architecture can provide deeper insights into neuroscience and potentially lead to improved personalized medicine outcomes. Previously, we designed the Unsupervised Deep learning-derived Imaging Phenotypes (UDIPs) approach to extract phenotypes from brain imaging using a convolutional (CNN) autoencoder, and conducted brain imaging GWAS on UK Biobank (UKBB). In this work, we design a vision transformer (ViT)-based autoencoder, leveraging its distinct inductive bias and its ability to capture unique patterns through its pairwise attention mechanism. The encoder generates contextual embeddings for input patches, from which we derive a 128-dimensional latent representation, interpreted as phenotypes, by applying average pooling. The GWAS on these 128 phenotypes discovered 10 loci previously unreported by the CNN-based UDIP model, 3 of which had no previous associations with brain structure in the GWAS Catalog. Our interpretation results suggest that these novel associations stem from the ViT's capability to learn sparse attention patterns, enabling the capture of non-local patterns such as left-right hemisphere symmetry within brain MRI data. Our results highlight the advantages of transformer-based architectures in feature extraction and representation learning for genetic discovery.
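The encoder described above average-pools patch-token embeddings into a 128-dimensional latent. A toy sketch of that pooling step, with a random linear projection standing in for the trained ViT encoder (all shapes and weights here are hypothetical):

```python
import numpy as np

def extract_patches(img, patch):
    """Split a 2D image into non-overlapping flattened patches."""
    h, w = img.shape
    rows, cols = h // patch, w // patch
    return (img[:rows * patch, :cols * patch]
            .reshape(rows, patch, cols, patch)
            .swapaxes(1, 2)
            .reshape(rows * cols, patch * patch))

def encode_and_pool(img, weight, patch=16):
    """Project patches with a linear embedding, then average-pool the
    token embeddings into one latent vector (the 'phenotype' vector)."""
    tokens = extract_patches(img, patch) @ weight  # (n_patches, latent_dim)
    return tokens.mean(axis=0)                     # (latent_dim,)

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
weight = rng.standard_normal((16 * 16, 128)) * 0.01  # stand-in for the encoder
latent = encode_and_pool(img, weight)
```

In the actual model the tokens come out of transformer blocks with attention, but the pooling into 128 phenotype dimensions is the same reduction shown here.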

Deep Learning-Enhanced Single Breath-Hold Abdominal MRI at 0.55 T-Technical Feasibility and Image Quality Assessment.

Seifert AC, Breit HC, Obmann MM, Korolenko A, Nickel MD, Fenchel M, Boll DT, Vosshenrich J

PubMed paper · Aug 21 2025
Inherently lower signal-to-noise ratios hamper the broad clinical use of low-field abdominal MRI. This study aimed to investigate the technical feasibility and image quality of deep learning (DL)-enhanced T2 HASTE and T1 VIBE-Dixon abdominal MRI at 0.55 T. From July 2024 to September 2024, healthy volunteers underwent conventional and DL-enhanced 0.55 T abdominal MRI, including conventional T2 HASTE, fat-suppressed T2 HASTE (HASTE FS), and T1 VIBE-Dixon acquisitions, and DL-enhanced single-breath-hold (HASTE DL-SBH) and multi-breath-hold HASTE (HASTE DL-MBH), fat-suppressed single-breath-hold (HASTE FS DL-SBH) and multi-breath-hold HASTE (HASTE FS DL-MBH), and T1 VIBE-Dixon (VIBE-Dixon-DL) acquisitions. Three abdominal radiologists evaluated the scans for quality parameters, artifacts (Likert scale 1-5), and incidental findings. Interreader agreement and comparative analyses were conducted. In total, 33 healthy volunteers (mean age: 30 ± 4 years) were evaluated. Image quality was better for single-breath-hold DL-enhanced MRI (all P < 0.001) with good or better interreader agreement (κ ≥ 0.61), including T2 HASTE (HASTE DL-SBH: 4 [IQR: 4-4] vs. HASTE: 3 [3-3]), T2 HASTE FS (4 [4-4] vs. 3 [3-3]), and T1 VIBE-Dixon (4 [4-5] vs. 4 [3-4]). Similarly, image noise and spatial resolution were better for DL-MRI scans (P < 0.001). No quality differences were found between single- and multi-breath-hold HASTE DL or HASTE FS DL (both: 4 [4-4]; P > 0.572). The number and size of incidental lesions were identical between techniques (16 lesions; mean diameter 8 ± 5 mm; P = 1.000). DL-based image reconstruction enables single-breath-hold T2 HASTE and T1 VIBE-Dixon abdominal imaging at 0.55 T with better image quality than conventional MRI.
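Interreader agreement above is summarized as κ ≥ 0.61. A minimal implementation of Cohen's kappa for two raters on the same items (the study's exact statistic may differ, e.g. a weighted kappa for ordinal Likert scores):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items (nominal labels)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    # Chance agreement from each rater's marginal label frequencies.
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    if expected == 1.0:  # degenerate case: a single label used throughout
        return 1.0
    return (observed - expected) / (1 - expected)
```

Kappa of 1.0 means perfect agreement; 0 means agreement no better than chance, which is why values above 0.6 are conventionally read as "good".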

Systematic Evaluation of Wavelet-Based Denoising for MRI Brain Images: Optimal Configurations and Performance Benchmarks

Asadullah Bin Rahman, Masud Ibn Afjal, Md. Abdulla Al Mamun

arXiv preprint · Aug 20 2025
Medical imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound are essential for accurate diagnosis and treatment planning in modern healthcare. However, noise contamination during image acquisition and processing frequently degrades image quality, obscuring critical diagnostic details and compromising clinical decision-making. Additionally, enhancement techniques such as histogram equalization may inadvertently amplify existing noise artifacts, including salt-and-pepper distortions. This study investigates wavelet transform-based denoising methods for effective noise mitigation in medical images, with the primary objective of identifying optimal combinations of threshold values, decomposition levels, and wavelet types to achieve superior denoising performance and enhanced diagnostic accuracy. Through systematic evaluation across various noise conditions, the research demonstrates that the bior6.8 biorthogonal wavelet with universal thresholding at decomposition levels 2-3 consistently achieves optimal denoising performance, providing significant noise reduction while preserving essential anatomical structures and diagnostic features critical for clinical applications.
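The study tunes wavelet type, threshold, and decomposition level. A stripped-down sketch of the underlying idea, soft-thresholding detail coefficients at the universal threshold, using a single-level Haar transform for brevity rather than the bior6.8 wavelet at levels 2-3 that the study recommends:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet transform of an even-length 1D signal."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse one-level Haar transform."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

def denoise(x):
    """Soft-threshold detail coefficients at the universal threshold
    sigma * sqrt(2 log n), with sigma estimated from the detail MAD."""
    approx, detail = haar_dwt(x)
    sigma = np.median(np.abs(detail)) / 0.6745
    t = sigma * np.sqrt(2 * np.log(len(x)))
    detail = np.sign(detail) * np.maximum(np.abs(detail) - t, 0.0)
    return haar_idwt(approx, detail)
```

In practice a library such as PyWavelets supplies bior6.8 and multi-level decompositions; the thresholding rule stays the same at each level.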

An MRI Atlas of the Human Fetal Brain: Reference and Segmentation Tools for Fetal Brain MRI Analysis

Mahdi Bagheri, Clemente Velasco-Annis, Jian Wang, Razieh Faghihpirayesh, Shadab Khan, Camilo Calixto, Camilo Jaimes, Lana Vasung, Abdelhakim Ouaalam, Onur Afacan, Simon K. Warfield, Caitlin K. Rollins, Ali Gholipour

arXiv preprint · Aug 20 2025
Accurate characterization of in-utero brain development is essential for understanding typical and atypical neurodevelopment. Building upon previous efforts to construct spatiotemporal fetal brain MRI atlases, we present the CRL-2025 fetal brain atlas, a spatiotemporal (4D) atlas of the developing fetal brain between 21 and 37 gestational weeks. This atlas is constructed from carefully processed MRI scans of 160 fetuses with typically developing brains using a diffeomorphic deformable registration framework integrated with kernel regression on age. CRL-2025 uniquely includes detailed tissue segmentations, transient white matter compartments, and parcellation into 126 anatomical regions. It offers significantly enhanced anatomical detail over the CRL-2017 atlas and is released together with the CRL diffusion MRI atlas, its newly created tissue segmentations and labels, and deep learning-based multiclass segmentation models for fine-grained fetal brain MRI segmentation. The CRL-2025 atlas and its associated tools provide a robust and scalable platform for fetal brain MRI segmentation, groupwise analysis, and early neurodevelopmental research, and these materials are publicly released to support the broader research community.
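Atlas construction above integrates deformable registration with kernel regression on gestational age. The age-weighting idea can be sketched as Nadaraya-Watson regression (a simplification: the actual pipeline averages diffeomorphic transformations, not raw intensities, and the Gaussian kernel and bandwidth here are assumptions):

```python
import numpy as np

def kernel_regression(ages, scans, query_age, bandwidth=2.0):
    """Nadaraya-Watson kernel regression: a Gaussian age-weighted average
    of subject data, evaluated at a query gestational age."""
    ages = np.asarray(ages, dtype=float)
    w = np.exp(-0.5 * ((ages - query_age) / bandwidth) ** 2)
    w /= w.sum()
    # Contract the subject axis: works for scalars or full volumes alike.
    return np.tensordot(w, np.asarray(scans, dtype=float), axes=1)
```

Evaluating this at each week from 21 to 37 is what makes the atlas spatiotemporal (4D): one age-conditioned template per time point.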

Functional brain network identification in opioid use disorder using machine learning analysis of resting-state fMRI BOLD signals.

Temtam A, Witherow MA, Ma L, Sadique MS, Moeller FG, Iftekharuddin KM

PubMed paper · Aug 20 2025
Understanding the neurobiology of opioid use disorder (OUD) using resting-state functional magnetic resonance imaging (rs-fMRI) may help inform treatment strategies to improve patient outcomes. Recent literature suggests time-frequency characteristics of rs-fMRI blood oxygenation level-dependent (BOLD) signals may offer complementary information to traditional analysis techniques. However, existing studies of OUD analyze BOLD signals using measures computed across all time points. This study, for the first time in the literature, employs data-driven machine learning (ML) for time-frequency analysis of local neural activity within key functional networks to differentiate OUD subjects from healthy controls (HC). We obtain time-frequency features based on rs-fMRI BOLD signals from the default mode network (DMN), salience network (SN), and executive control network (ECN) for 31 OUD and 45 HC subjects. Then, we perform 5-fold cross-validation classification (OUD vs. HC) experiments to study the discriminative power of functional network features while taking into consideration significant demographic features. ML-based time-frequency analysis of DMN, SN, and ECN significantly (p < 0.05) outperforms chance baselines for discriminative power with mean F1 scores of 0.6675, 0.7090, and 0.6810, respectively, and mean AUCs of 0.7302, 0.7603, and 0.7103, respectively. Follow-up Boruta ML analysis of selected time-frequency (wavelet) features reveals significant (p < 0.05) detail coefficients for all three functional networks, underscoring the need for ML and time-frequency analysis of rs-fMRI BOLD signals in the study of OUD.
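The study extracts time-frequency features from rs-fMRI BOLD signals within each network. As a simplified stand-in for the wavelet detail-coefficient features it uses, band-limited spectral power of a time series can be computed as follows (band edges and sampling rate here are illustrative, not the study's):

```python
import numpy as np

def bandpower_features(signal, fs, bands):
    """Mean spectral power inside each frequency band of a 1D time series,
    yielding one feature per band for downstream classification."""
    signal = np.asarray(signal, dtype=float)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return np.array([power[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in bands])
```

Feature vectors like this, computed per network (DMN, SN, ECN) and per subject, would feed the 5-fold cross-validated OUD-vs-HC classifiers described above.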

Attention-based deep learning network for predicting World Health Organization meningioma grade and Ki-67 expression based on magnetic resonance imaging.

Cheng X, Li H, Li C, Li J, Liu Z, Fan X, Lu C, Song K, Shen Z, Wang Z, Yang Q, Zhang J, Yin J, Qian C, You Y, Wang X

PubMed paper · Aug 20 2025
Preoperative assessment of World Health Organization (WHO) meningioma grading and Ki-67 expression is crucial for treatment strategies. We aimed to develop a fully automated attention-based deep learning network to predict WHO meningioma grading and Ki-67 expression. This retrospective study included 952 meningioma patients, divided into training (n = 542), internal validation (n = 96), and external test sets (n = 314). For each task, clinical, radiomics, and deep learning models were compared. We used no-new-Unet (nn-Unet) models to construct the segmentation network, followed by four classification models using ResNet50 or Swin Transformer architectures with 2D or 2.5D input strategies. All deep learning models incorporated attention mechanisms. Both the segmentation and 2.5D classification models demonstrated robust performance on the external test set. The segmentation network achieved Dice coefficients of 0.98 (0.97-0.99) and 0.87 (0.83-0.91) for brain parenchyma and tumour segmentation. For predicting meningioma grade, the 2.5D ResNet50 achieved the highest area under the curve (AUC) of 0.90 (0.85-0.93), significantly outperforming the clinical (AUC = 0.77 [0.70-0.83], p < 0.001) and radiomics models (AUC = 0.80 [0.75-0.85], p < 0.001). For Ki-67 expression prediction, the 2.5D Swin Transformer achieved the highest AUC of 0.89 (0.85-0.93), outperforming both the clinical (AUC = 0.76 [0.71-0.81], p < 0.001) and radiomics models (AUC = 0.82 [0.77-0.86], p = 0.002). Our automated deep learning network demonstrated superior performance. This novel network could support more precise treatment planning for meningioma patients.
Question: Can artificial intelligence accurately assess meningioma WHO grade and Ki-67 expression from preoperative MRI to guide personalised treatment and follow-up strategies?
Findings: The attention-enhanced nn-Unet segmentation achieved high accuracy, while 2.5D deep learning models with attention mechanisms achieved accurate prediction of grades and Ki-67.
Clinical relevance: Our fully automated 2.5D deep learning model, enhanced with attention mechanisms, accurately predicts WHO grades and Ki-67 expression levels in meningiomas, offering a robust, objective, and non-invasive solution to support clinical diagnosis and optimise treatment planning.
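The best models here use a 2.5D input strategy, commonly implemented by stacking a slice with its neighbors as channels. A sketch of that preprocessing step (the paper does not state its exact context size; clamping at volume boundaries is an assumption):

```python
import numpy as np

def make_25d_input(volume, index, context=1):
    """Return slice `index` stacked with its neighbors as channels,
    clamping indices at the volume boundary (shape: (2*context+1, H, W))."""
    depth = volume.shape[0]
    idxs = np.clip(np.arange(index - context, index + context + 1), 0, depth - 1)
    return volume[idxs]
```

This gives a 2D backbone such as ResNet50 or a Swin Transformer some through-plane context at a fraction of the memory cost of full 3D convolutions.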

Profiling disease experience in patients living with brain aneurysms by analyzing multimodal clinical data and quality of life measures.

Reder SR, Hardt J, Brockmann MA, Brockmann C, Kim S, Kawulycz M, Schulz M, Kantelhardt SR, Petrowski K, Fischbeck S

PubMed paper · Aug 20 2025
To explore mental and physical health (MH, PH) in individuals living with brain aneurysms and to profile differences in their disease experience. In N = 111 patients, the Short Form 36 Health Survey (SF-36) was assessed via an online survey; supplementary data included angiography and magnetic resonance imaging (MRI) findings (including AI-based brain lesion volume analyses in ml, or LV). Correlation and regression analyses were conducted (including biological sex, age, overall brain LV, PH, and MH). Disease profiles were determined using principal component analysis. Compared to the German normative cohort, patients exhibited overall lower SF-36 scores. In regression analyses, the DW was predictable by PH (β = 0.345) and MH (β = -0.646; R = 0.557; p < 0.001). Vasospasm severity correlated significantly with LV (r = 0.242, p = 0.043), MH (r = -0.321, p = 0.043), and PH (r = -0.372, p = 0.028). Higher LV was associated with poorer PH (r = -0.502, p = 0.001), unlike MH (p > 0.05). Four main disease profiles were identified: (1) those with increased LV post-rupture (high DW); (2) older individuals with stable aneurysms (low DW); (3) a sex disparity in QoL despite similar vasospasm severity; and (4) chronic pain and its impact on daily tasks. Two sub-profiles highlighted trauma-induced impairments, functional disabilities from LV, and persistent anxiety. Reduced thalamic and pallidal volumes were linked to low QoL following subarachnoid hemorrhage. MH has a greater impact on quality of life than physical disabilities, leading to prolonged DW. A singular physical impairment was rather atypical for a perceived worse outcome. Patient profiles revealed that clinical history, sex, psychological stress, and pain each contribute uniquely to QoL and work capacity. Prioritizing MH in assessing workability and rehabilitation is crucial for survivors' long-term outcomes.
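Disease profiles above were determined using principal component analysis. A minimal PCA via SVD, of the kind that could extract such profiles from a subjects-by-measures matrix (illustrative only, not the study's analysis code):

```python
import numpy as np

def principal_components(data, n_components=2):
    """PCA via SVD on mean-centred data: returns component loadings
    (directions in measure space) and per-subject scores."""
    x = np.asarray(data, dtype=float)
    x = x - x.mean(axis=0)
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return vt[:n_components], (u * s)[:, :n_components]
```

Each loading vector weights the clinical and imaging measures, and subjects with similar score patterns form the "profiles" the abstract describes.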