
Automated detection of spinal bone marrow oedema in axial spondyloarthritis: training and validation using two large phase 3 trial datasets.

Jamaludin A, Windsor R, Ather S, Kadir T, Zisserman A, Braun J, Gensler LS, Østergaard M, Poddubnyy D, Coroller T, Porter B, Ligozio G, Readie A, Machado PM

PubMed · Jun 9 2025
To evaluate the performance of machine learning (ML) models for the automated scoring of spinal MRI bone marrow oedema (BMO) in patients with axial spondyloarthritis (axSpA) and compare them with expert scoring. ML algorithms using SpineNet software were trained and validated on 3483 spinal MRIs from 686 axSpA patients across two clinical trial datasets. The scoring pipeline involved (i) detection and labelling of vertebral bodies and (ii) classification of vertebral units for the presence or absence of BMO. Two models were tested: Model 1, without manual segmentation, and Model 2, incorporating an intermediate manual segmentation step. Model outputs were compared with those of human experts using kappa statistics, balanced accuracy, sensitivity, specificity, and AUC. Both models performed comparably to expert readers regarding the presence vs absence of BMO. Model 1 outperformed Model 2, with an AUC of 0.94 (vs 0.88), accuracy of 75.8% (vs 70.5%), and kappa of 0.50 (vs 0.31), using absolute reader consensus scoring as the external reference; this performance was similar to the expert inter-reader accuracy of 76.8% and kappa of 0.47 in a radiographic axSpA dataset. In a non-radiographic axSpA dataset, Model 1 achieved an AUC of 0.97 (vs 0.91 for Model 2), accuracy of 74.6% (vs 70%), and kappa of 0.52 (vs 0.27), comparable to the expert inter-reader accuracy of 74.2% and kappa of 0.46. ML software shows potential for automated MRI BMO assessment in axSpA, offering benefits such as improved consistency, reduced labour costs, and minimised inter- and intra-reader variability. ClinicalTrials.gov: MEASURE 1 study (NCT01358175); PREVENT study (NCT02696031).
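The agreement metrics this study reports (sensitivity, specificity, balanced accuracy, Cohen's kappa) can be illustrated with a minimal sketch on hypothetical per-vertebral-unit labels; these toy labels are not the trial data, and this is not the SpineNet pipeline:

```python
import numpy as np

# Hypothetical per-vertebral-unit labels: 1 = BMO present, 0 = absent
expert = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])  # consensus reference
model  = np.array([1, 0, 1, 0, 0, 0, 1, 0, 1, 0])  # one false negative

tp = np.sum((expert == 1) & (model == 1))
tn = np.sum((expert == 0) & (model == 0))
fp = np.sum((expert == 0) & (model == 1))
fn = np.sum((expert == 1) & (model == 0))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
balanced_accuracy = (sensitivity + specificity) / 2

# Cohen's kappa: observed agreement corrected for chance agreement
po = (tp + tn) / len(expert)
pe = ((tp + fn) * (tp + fp) + (tn + fp) * (tn + fn)) / len(expert) ** 2
kappa = (po - pe) / (1 - pe)
```

Kappa is the metric the study uses to compare model-vs-consensus agreement against expert inter-reader agreement, since it discounts agreement expected by chance.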

Simultaneous Segmentation of Ventricles and Normal/Abnormal White Matter Hyperintensities in Clinical MRI using Deep Learning

Mahdi Bashiri Bawil, Mousa Shamsi, Abolhassan Shakeri Bavil

arXiv preprint · Jun 8 2025
Multiple sclerosis (MS) diagnosis and monitoring rely heavily on accurate assessment of brain MRI biomarkers, particularly white matter hyperintensities (WMHs) and ventricular changes. Current segmentation approaches suffer from several limitations: they typically segment these structures independently despite their pathophysiological relationship, struggle to differentiate between normal and pathological hyperintensities, and are poorly optimized for anisotropic clinical MRI data. We propose a novel 2D pix2pix-based deep learning framework for simultaneous segmentation of ventricles and WMHs with the unique capability to distinguish between normal periventricular hyperintensities and pathological MS lesions. Our method was developed and validated on FLAIR MRI scans from 300 MS patients. Compared to established methods (SynthSeg, Atlas Matching, BIANCA, LST-LPA, LST-LGA, and WMH-SynthSeg), our approach achieved superior performance for both ventricle segmentation (Dice: 0.801 ± 0.025, HD95: 18.46 ± 7.1 mm) and WMH segmentation (Dice: 0.624 ± 0.061, precision: 0.755 ± 0.161). Furthermore, our method successfully differentiated between normal and abnormal hyperintensities with a Dice coefficient of 0.647. Notably, our approach demonstrated exceptional computational efficiency, completing end-to-end processing in approximately 4 seconds per case, up to 36 times faster than baseline methods, while maintaining minimal resource requirements. This combination of improved accuracy, clinically relevant differentiation capability, and computational efficiency addresses critical limitations in current neuroimaging analysis, potentially enabling integration into routine clinical workflows and enhancing MS diagnosis and monitoring.

Deep learning-based prospective slice tracking for continuous catheter visualization during MRI-guided cardiac catheterization.

Neofytou AP, Kowalik G, Vidya Shankar R, Kunze K, Moon T, Mellor N, Neji R, Razavi R, Pushparajah K, Roujol S

PubMed · Jun 8 2025
This proof-of-concept study introduces a novel, deep learning-based, parameter-free, automatic slice-tracking technique for continuous catheter tracking and visualization during MR-guided cardiac catheterization. The proposed sequence includes Calibration and Runtime modes. Initially, Calibration mode identifies the catheter tip's three-dimensional coordinates using a fixed stack of contiguous slices. A U-Net architecture with a ResNet-34 encoder is used to identify the catheter tip location. Once identified, the sequence then switches to Runtime mode, dynamically acquiring three contiguous slices automatically centered on the catheter tip. The catheter location is estimated from each Runtime stack using the same network and fed back to the sequence, enabling prospective slice tracking to keep the catheter in the central slice. If the catheter remains unidentified over several dynamics, the sequence reverts to Calibration mode. This artificial intelligence (AI)-based approach was evaluated prospectively in a three-dimensional-printed heart phantom and 3 patients undergoing MR-guided cardiac catheterization. This technique was also compared retrospectively in 2 patients with a previous non-AI automatic tracking method relying on operator-defined parameters. In the phantom study, the tracking framework achieved 100% accuracy/sensitivity/specificity in both modes. Across all patients, the average accuracy/sensitivity/specificity were 100 ± 0/100 ± 0/100 ± 0% (Calibration) and 98.4 ± 0.8/94.1 ± 2.9/100.0 ± 0.0% (Runtime). The parametric, non-AI technique and the proposed parameter-free AI-based framework yielded identical accuracy (100%) in Calibration mode and similar accuracy ranges in Runtime mode (Patients 1 and 2: 100%-97% and 100%-98%, respectively). An AI-based prospective slice-tracking framework was developed for real-time, parameter-free, operator-independent, automatic tracking of gadolinium-filled balloon catheters. Its feasibility was successfully demonstrated in patients undergoing MRI-guided cardiac catheterization.
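The Calibration/Runtime mode switching described in this abstract can be sketched as a small state machine. The `miss_limit` threshold and the boolean per-dynamic detection stream below are illustrative assumptions, not the published sequence implementation:

```python
from enum import Enum

class Mode(Enum):
    CALIBRATION = 1  # fixed stack, searching for the catheter tip
    RUNTIME = 2      # three slices prospectively centered on the tip

def track(detections, miss_limit=3):
    """Simulate the mode-switching logic: switch to Runtime once the tip is
    found, and revert to Calibration after `miss_limit` consecutive misses.
    `detections` is a sequence of booleans (tip identified per dynamic)."""
    mode = Mode.CALIBRATION
    misses = 0
    history = []
    for found in detections:
        if mode == Mode.CALIBRATION:
            if found:
                mode = Mode.RUNTIME  # tip localized -> start tracking
                misses = 0
        else:
            if found:
                misses = 0  # feed location back; recenter slices on the tip
            else:
                misses += 1
                if misses >= miss_limit:
                    mode = Mode.CALIBRATION  # tip lost -> re-calibrate
        history.append(mode)
    return history
```

With this sketch, three consecutive missed detections in Runtime mode return the tracker to Calibration, mirroring the fallback behavior the abstract describes.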

MRI-mediated intelligent multimodal imaging system: from artificial intelligence to clinical imaging diagnosis.

Li Y, Wang J, Pan X, Shan Y, Zhang J

PubMed · Jun 8 2025
Although MRI is a mature diagnostic method favored by doctors and patients in clinical application, it still faces bottleneck problems that MRI alone cannot overcome. AI strategies such as multimodal imaging integration and machine learning are used to build an intelligent multimodal imaging system based on MRI data to address unmet clinical needs in various medical environments. This review systematically discusses the development of MRI-guided multimodal imaging systems and the application of intelligent multimodal imaging systems integrated with artificial intelligence in the early diagnosis of brain and cardiovascular diseases. The safe and effective deployment of AI in clinical diagnostic equipment can help enhance early accurate diagnosis and personalized patient care.

Transfer Learning and Explainable AI for Brain Tumor Classification: A Study Using MRI Data from Bangladesh

Shuvashis Sarker

arXiv preprint · Jun 8 2025
Brain tumors, whether benign or malignant, pose considerable health risks, with malignant tumors being more perilous due to their swift and uncontrolled proliferation. Timely identification is crucial for enhancing patient outcomes, particularly in nations such as Bangladesh, where healthcare infrastructure is constrained. Manual MRI analysis is arduous and susceptible to inaccuracies, rendering it inefficient for prompt diagnosis. This research sought to tackle these problems by creating an automated brain tumor classification system utilizing MRI data obtained from multiple hospitals in Bangladesh. Advanced deep learning models, including VGG16, VGG19, and ResNet50, were utilized to classify glioma, meningioma, and other brain tumors. Explainable AI (XAI) methodologies, such as Grad-CAM and Grad-CAM++, were employed to improve model interpretability by emphasizing the critical areas in MRI scans that influenced the categorization. VGG16 achieved the highest accuracy, attaining 99.17%. The integration of XAI enhanced the system's transparency and stability, rendering it more appropriate for clinical application in resource-limited environments such as Bangladesh. This study highlights the capability of deep learning models, in conjunction with explainable AI, to enhance brain tumor detection and classification in areas with restricted access to advanced medical technologies.

SMART MRS: A Simulated MEGA-PRESS ARTifacts toolbox for GABA-edited MRS.

Bugler H, Shamaei A, Souza R, Harris AD

PubMed · Jun 8 2025
To create a Python-based toolbox that simulates commonly occurring artifacts for single-voxel gamma-aminobutyric acid (GABA)-edited MRS data. The toolbox was designed to maximize user flexibility and contains artifact, applied, input/output (I/O), and support functions. The artifact functions can produce spurious echoes, eddy currents, nuisance peaks, line broadening, baseline contamination, linear frequency drifts, and frequency and phase shift artifacts. Applied functions combine or apply specific parameter values to produce recognizable effects such as lipid peak and motion contamination. I/O and support functions provide additional functionality to accommodate different kinds of input data (MATLAB FID-A .mat files, NIfTI-MRS files), which vary by domain (time vs. frequency), MRS data type (e.g., edited vs. non-edited), and scale. To highlight the utility of the toolbox, we present a machine learning experiment in which a frequency and phase correction model was trained on corrupted simulated data and validated on in vivo data. Data simulated with the toolbox are complementary for research applications, as demonstrated by training a frequency and phase correction deep learning model that is applied to in vivo data containing artifacts. Visual assessment also confirms that the simulated artifacts resemble artifacts found in in vivo data. Our easy-to-install Python artifact simulation toolbox, SMART_MRS, is useful for enhancing the diversity and quality of existing simulated edited-MRS data and is complementary to existing MRS simulation software.
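A frequency and phase shift artifact of the kind listed above can be sketched on a synthetic time-domain FID. The function name, parameters, and synthetic signal here are illustrative assumptions, not the SMART_MRS API:

```python
import numpy as np

def apply_freq_phase_shift(fid, dt, freq_shift_hz, phase_shift_deg):
    """Apply a frequency shift (Hz) and a zero-order phase shift (degrees)
    to a time-domain FID, mimicking scan-to-scan frequency/phase instability."""
    t = np.arange(len(fid)) * dt
    return fid * np.exp(1j * (2 * np.pi * freq_shift_hz * t
                              + np.deg2rad(phase_shift_deg)))

# Synthetic single-resonance FID: 2 kHz spectral width, T2*-like decay
dt = 1 / 2000.0
t = np.arange(2048) * dt
fid = np.exp(1j * 2 * np.pi * 100 * t) * np.exp(-t / 0.05)

corrupted = apply_freq_phase_shift(fid, dt, freq_shift_hz=5.0, phase_shift_deg=30.0)
```

Because the modulation is a pure complex exponential, the magnitude of each FID point is unchanged; only the frequency and phase of the spectrum shift, which is exactly what a frequency-and-phase correction model must learn to undo.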

NeXtBrain: Combining local and global feature learning for brain tumor classification.

Pacal I, Akhan O, Deveci RT, Deveci M

PubMed · Jun 7 2025
The accurate and timely diagnosis of brain tumors is of paramount clinical significance for effective treatment planning and improved patient outcomes. While deep learning has advanced medical image analysis, concurrently achieving high classification accuracy, robust generalization, and computational efficiency remains a formidable challenge. This is often due to the difficulty in optimally capturing both fine-grained local tumor features and their broader global contextual cues without incurring substantial computational costs. This paper introduces NeXtBrain, a novel hybrid architecture meticulously designed to overcome these limitations. NeXtBrain's core innovations, the NeXt Convolutional Block (NCB) and the NeXt Transformer Block (NTB), synergistically enhance feature learning: NCB leverages Multi-Head Convolutional Attention and a SwiGLU-based MLP to precisely extract subtle local tumor morphologies and detailed textures, while NTB integrates self-attention with convolutional attention and a SwiGLU MLP to effectively model long-range spatial dependencies and global contextual relationships, crucial for differentiating complex tumor characteristics. Evaluated on two publicly available benchmark datasets, Figshare and Kaggle, NeXtBrain was rigorously compared against 17 state-of-the-art (SOTA) models. On Figshare, it achieved 99.78% accuracy and a 99.77% F1-score. On Kaggle, it attained 99.78% accuracy and a 99.81% F1-score, surpassing leading SOTA ViT, CNN, and hybrid models. Critically, NeXtBrain demonstrates exceptional computational efficiency, achieving these SOTA results with only 23.91 million parameters, requiring just 10.32 GFLOPs, and exhibiting a rapid inference time of 0.007 ms. This efficiency allows it to outperform significantly larger models such as DeiT3-Base (85.82 M parameters) and Swin-Base (86.75 M parameters) in both accuracy and computational demand.

Physics-informed neural networks for denoising high b-value diffusion-weighted images.

Lin Q, Yang F, Yan Y, Zhang H, Xie Q, Zheng J, Yang W, Qian L, Liu S, Yao W, Qu X

PubMed · Jun 7 2025
Diffusion-weighted imaging (DWI) is widely applied in tumor diagnosis by measuring the diffusion of water molecules. To increase sensitivity to tumor identification, faithful high b-value DWI images are desired, obtained by setting a stronger gradient field strength in magnetic resonance imaging (MRI). However, high b-value DWI images suffer from a reduced signal-to-noise ratio due to the exponential decay of signal intensity. Thus, removing noise becomes important for high b-value DWI images. Here, we propose a Physics-Informed neural Network for high b-value DWI image Denoising (PIND) that leverages information from a physics-informed loss and prior information from low b-value DWI images with high signal-to-noise ratio. Experiments are conducted on a prostate DWI dataset of 125 subjects. Compared with the original noisy images, PIND improves the peak signal-to-noise ratio from 31.25 dB to 36.28 dB, and the structural similarity index measure from 0.77 to 0.92. Our scheme can save 83% of data acquisition time, since fewer averages of high b-value DWI images need to be acquired, while maintaining 98% accuracy of the apparent diffusion coefficient value, suggesting its potential effectiveness in preserving essential diffusion characteristics. A reader study by 4 radiologists (3, 6, 13, and 18 years of experience) indicates PIND's promising performance on overall quality, signal-to-noise ratio, artifact suppression, and lesion conspicuity, showing potential for improving clinical DWI applications.
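The physics behind this abstract can be sketched briefly: DWI signal follows a monoexponential decay S(b) = S0 · exp(-b · ADC), which is why high b-value images are noisy and why the apparent diffusion coefficient (ADC) can be recovered from two b-values; PSNR is the metric used to report the denoising gain. The numbers below are synthetic, not the study's prostate data:

```python
import numpy as np

def adc_from_two_bvalues(s_low, s_high, b_low, b_high):
    """Monoexponential ADC estimate (mm^2/s) from two DWI signal intensities,
    assuming S(b) = S0 * exp(-b * ADC)."""
    return np.log(s_low / s_high) / (b_high - b_low)

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

# Synthetic example: S0 = 1000, true ADC = 1.0e-3 mm^2/s
b_low, b_high = 50, 1500
s_low = 1000 * np.exp(-b_low * 1.0e-3)
s_high = 1000 * np.exp(-b_high * 1.0e-3)
adc = adc_from_two_bvalues(s_low, s_high, b_low, b_high)
```

The exponential decay makes s_high much smaller than s_low at the same noise floor, which is the SNR problem the denoising network addresses.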

Contribution of Labrum and Cartilage to Joint Surface in Different Hip Deformities: An Automatic Deep Learning-Based 3-Dimensional Magnetic Resonance Imaging Analysis.

Meier MK, Roshardt JA, Ruckli AC, Gerber N, Lerch TD, Jung B, Tannast M, Schmaranzer F, Steppacher SD

PubMed · Jun 7 2025
Multiple 2-dimensional magnetic resonance imaging (MRI) studies have indicated that the size of the labrum adjusts in response to altered joint loading. In patients with hip dysplasia, it tends to increase as a compensatory mechanism for inadequate acetabular coverage. To determine the differences in labral contribution to the joint surface among different hip deformities, as well as which radiographic parameters influence labral contribution to the joint surface, using a deep learning-based approach for automatic 3-dimensional (3D) segmentation of MRI. Cross-sectional study; Level of evidence, 4. This retrospective study was approved by the local ethics committee with a waiver for informed consent. A total of 98 patients (100 hips) with symptomatic hip deformities undergoing direct hip magnetic resonance arthrography (3 T) between January 2020 and October 2021 were consecutively selected (mean age, 30 ± 9 years; 64% female). The standard imaging protocol included proton density-weighted turbo spin echo images and an axial-oblique 3D T1-weighted MP2RAGE sequence. According to acetabular morphology, hips were divided into subgroups: dysplasia (lateral center-edge [LCE] angle, <23°), normal coverage (LCE, 23°-33°), overcoverage (LCE, 33°-39°), severe overcoverage (LCE, >39°), and retroversion (retroversion index >10% and all 3 retroversion signs positive). A previously validated deep learning approach for automatic segmentation and software for calculation of the joint surface were used. The labral contribution to the joint surface was defined as follows: labrum surface area/(labrum surface area + cartilage surface area). One-way analysis of variance with Tukey correction for multiple comparisons and linear regression analysis were performed. The mean labral contribution to the joint surface of dysplastic hips was 26% ± 5% (95% CI, 24%-28%), higher than in all other hip deformities (P value range, .001-.036). Linear regression analysis identified the LCE angle (β = -.002; P < .001) and femoral torsion (β = .001; P = .008) as independent predictors of labral contribution to the joint surface, with a goodness-of-fit R² value of 0.35. The labral contribution to the joint surface differs among hip deformities and is influenced by lateral acetabular coverage and femoral torsion. This study paves the way for a more in-depth understanding of the underlying pathomechanism and a reliable 3D analysis of the hip joint that can be indicative for surgical decision-making in patients with hip deformities.
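The subgroup thresholds and the labral-contribution definition stated in the abstract can be written out directly. The precedence of the retroversion criterion over the LCE bands, and the handling of the shared 33° boundary, are assumptions made for this sketch:

```python
def acetabular_subgroup(lce_angle_deg, retroversion_index=0.0, retro_signs=0):
    """Assign a hip to the morphology subgroups used in the study above.
    Thresholds are taken from the abstract; checking retroversion first
    is an assumed precedence, not stated in the source."""
    if retroversion_index > 0.10 and retro_signs == 3:
        return "retroversion"
    if lce_angle_deg < 23:
        return "dysplasia"
    if lce_angle_deg <= 33:
        return "normal coverage"
    if lce_angle_deg <= 39:
        return "overcoverage"
    return "severe overcoverage"

def labral_contribution(labrum_area, cartilage_area):
    """Labral contribution to the joint surface, per the abstract's definition:
    labrum surface area / (labrum surface area + cartilage surface area)."""
    return labrum_area / (labrum_area + cartilage_area)
```

For example, a dysplastic hip with a labrum area of 26 units and cartilage area of 74 units yields a labral contribution of 0.26, matching the 26% mean reported for the dysplasia subgroup.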

Automatic MRI segmentation of masticatory muscles using deep learning enables large-scale muscle parameter analysis.

Ten Brink RSA, Merema BJ, den Otter ME, Jensma ML, Witjes MJH, Kraeima J

PubMed · Jun 7 2025
Mandibular reconstruction to restore mandibular continuity often relies on patient-specific implants and virtual surgical planning, but current implant designs rarely consider individual biomechanical demands, which are critical for preventing complications such as stress shielding, screw loosening, and implant failure. The inclusion of patient-specific masticatory muscle parameters such as cross-sectional area, vectors, and volume could improve implant success, but manual segmentation of these parameters is time-consuming, limiting large-scale analyses. In this study, a deep learning model was trained for automatic segmentation of eight masticatory muscles on MRI images. Forty T1-weighted MRI scans were segmented manually or via pseudo-labelling for training. Training employed 5-fold cross-validation over 1000 epochs per fold and testing was done on 10 manually segmented scans. The model achieved a mean Dice similarity coefficient (DSC) of 0.88, intersection over union (IoU) of 0.79, precision of 0.87, and recall of 0.89, demonstrating high segmentation accuracy. These results indicate the feasibility of large-scale, reproducible analyses of muscle volumes, directions, and estimated forces. By integrating these parameters into implant design and surgical planning, this method offers a step forward in developing personalized surgical strategies that could improve postoperative outcomes in mandibular reconstruction. This brings the field closer to truly individualized patient care.
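The overlap metrics this study reports (Dice similarity coefficient, IoU, precision, recall) can be computed from binary segmentation masks with a short helper; the toy masks in the example are illustrative, not the study's MRI data:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Overlap metrics for binary segmentation masks (boolean/0-1 arrays)."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.sum(pred & gt)    # voxels correctly labelled as muscle
    fp = np.sum(pred & ~gt)   # predicted muscle where there is none
    fn = np.sum(~pred & gt)   # missed muscle voxels
    dice = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return dice, iou, precision, recall
```

Dice and IoU are related monotonically (Dice = 2·IoU/(1+IoU)), which is why the reported DSC of 0.88 corresponds to the IoU of roughly 0.79.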
