
Simulating workload reduction with an AI-based prostate cancer detection pathway using a prediction uncertainty metric.

Fransen SJ, Bosma JS, van Lohuizen Q, Roest C, Simonis FFJ, Kwee TC, Yakar D, Huisman H

pubmed · Jun 7 2025
This study compared two uncertainty quantification (UQ) metrics to rule out prostate MRI scans with a high-confidence artificial intelligence (AI) prediction and investigated the resulting potential reduction in radiologists' workload in a clinically significant prostate cancer (csPCa) detection pathway. This retrospective study utilized 1612 MRI scans from three institutes for csPCa (Gleason Grade Group ≥ 2) assessment. We compared the standard diagnostic pathway (radiologist reading) to an AI-based rule-out pathway in terms of efficacy and accuracy in diagnosing csPCa. In the rule-out pathway, 15 AI submodels (trained on 7756 cases) diagnosed each MRI scan, and any prediction deemed uncertain was referred to a radiologist for reading. We compared the mean (meanUQ) and variability (varUQ) of predictions using the DeLong test on the area under the receiver operating characteristic curve (AUROC). The level of workload reduction of the best UQ method was determined based on maintained sensitivity at non-inferior specificity, using margins of 0.05 and 0.10. The workload reduction of the proposed pathway was institute-specific: up to 20% at a 0.10 non-inferiority margin (p < 0.05) and non-significant at a 0.05 margin. VarUQ-based rule-out gave higher but non-significant AUROC scores than meanUQ in certain selected cases (+0.05 AUROC, p > 0.05). MeanUQ and varUQ showed promise in AI-based rule-out csPCa detection. Using varUQ in an AI-based csPCa detection pathway could reduce the number of scans radiologists need to read. The varying performance of the UQ rule-out indicates the need for institute-specific UQ thresholds.
Question: Can AI autonomously assess prostate MRI scans with high certainty at non-inferior performance compared to radiologists, potentially reducing radiologists' workload?
Findings: The optimal ratio of AI-model and radiologist readings is institute-dependent and requires calibration.
Clinical relevance: Semi-autonomous AI-based prostate cancer detection with variational UQ scores shows promise in reducing the number of scans radiologists need to read.
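The rule-out logic described above — taking the ensemble mean (meanUQ) and variance (varUQ) of the submodels' predictions and referring uncertain scans to a radiologist — can be sketched minimally in NumPy. The function name, probabilities, and threshold below are illustrative, not the authors' implementation:

```python
import numpy as np

def uq_rule_out(submodel_probs, threshold):
    """Split scans into AI-handled (certain) and radiologist-referred (uncertain).

    submodel_probs: array of shape (n_models, n_scans), each submodel's
    predicted csPCa probability per scan.
    threshold: scans whose varUQ exceeds this are referred to a radiologist.
    """
    mean_uq = submodel_probs.mean(axis=0)  # meanUQ: ensemble mean prediction
    var_uq = submodel_probs.var(axis=0)    # varUQ: prediction variability
    refer = var_uq > threshold             # uncertain -> radiologist reads
    return mean_uq, var_uq, refer

# Toy ensemble of 3 submodels over 3 scans; scan 2 shows high disagreement.
probs = np.array([[0.9, 0.5, 0.1],
                  [0.8, 0.2, 0.1],
                  [0.9, 0.7, 0.2]])
mean_uq, var_uq, refer = uq_rule_out(probs, threshold=0.01)
```

The fraction of scans with `refer == False` is the potential workload reduction; in practice the threshold would be calibrated per institute, as the study's findings suggest.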

Contribution of Labrum and Cartilage to Joint Surface in Different Hip Deformities: An Automatic Deep Learning-Based 3-Dimensional Magnetic Resonance Imaging Analysis.

Meier MK, Roshardt JA, Ruckli AC, Gerber N, Lerch TD, Jung B, Tannast M, Schmaranzer F, Steppacher SD

pubmed · Jun 7 2025
Multiple 2-dimensional magnetic resonance imaging (MRI) studies have indicated that the size of the labrum adjusts in response to altered joint loading. In patients with hip dysplasia, it tends to increase as a compensatory mechanism for inadequate acetabular coverage. To determine the differences in labral contribution to the joint surface among different hip deformities, as well as which radiographic parameters influence labral contribution to the joint surface, using a deep learning-based approach for automatic 3-dimensional (3D) segmentation of MRI. Cross-sectional study; Level of evidence, 4. This retrospective study was approved by the local ethics committee with a waiver of informed consent. A total of 98 patients (100 hips) with symptomatic hip deformities undergoing direct hip magnetic resonance arthrography (3 T) between January 2020 and October 2021 were consecutively selected (mean age, 30 ± 9 years; 64% female). The standard imaging protocol included proton density-weighted turbo spin echo images and an axial-oblique 3D T1-weighted MP2RAGE sequence. According to acetabular morphology, hips were divided into subgroups: dysplasia (lateral center-edge [LCE] angle, <23°), normal coverage (LCE, 23°-33°), overcoverage (LCE, 33°-39°), severe overcoverage (LCE, >39°), and retroversion (retroversion index >10% and all 3 retroversion signs positive). A previously validated deep learning approach for automatic segmentation and software for calculation of the joint surface were used. The labral contribution to the joint surface was defined as follows: labrum surface area/(labrum surface area + cartilage surface area). One-way analysis of variance with Tukey correction for multiple comparisons and linear regression analysis were performed. The mean labral contribution to the joint surface of dysplastic hips was 26% ± 5% (95% CI, 24%-28%), higher than in all other hip deformities (P value range, .001-.036).
Linear regression analysis identified LCE angle (β = -.002; P < .001) and femoral torsion (β = .001; P = .008) as independent predictors of labral contribution to the joint surface, with a goodness-of-fit R² of 0.35. The labral contribution to the joint surface differs among hip deformities and is influenced by lateral acetabular coverage and femoral torsion. This study paves the way for a more in-depth understanding of the underlying pathomechanism and a reliable 3D analysis of the hip joint that can inform surgical decision-making in patients with hip deformities.
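The study's definition of labral contribution is a simple ratio of segmented surface areas; a minimal sketch with hypothetical surface areas (the function name and values are illustrative):

```python
def labral_contribution(labrum_area_mm2, cartilage_area_mm2):
    """Labral contribution to the joint surface, as defined in the study:
    labrum surface area / (labrum surface area + cartilage surface area)."""
    return labrum_area_mm2 / (labrum_area_mm2 + cartilage_area_mm2)

# Hypothetical surface areas (mm^2); dysplastic hips averaged ~26% in the study.
ratio = labral_contribution(260.0, 740.0)
```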

NeXtBrain: Combining local and global feature learning for brain tumor classification.

Pacal I, Akhan O, Deveci RT, Deveci M

pubmed · Jun 7 2025
The accurate and timely diagnosis of brain tumors is of paramount clinical significance for effective treatment planning and improved patient outcomes. While deep learning has advanced medical image analysis, concurrently achieving high classification accuracy, robust generalization, and computational efficiency remains a formidable challenge. This is often due to the difficulty of optimally capturing both fine-grained local tumor features and their broader global contextual cues without incurring substantial computational costs. This paper introduces NeXtBrain, a novel hybrid architecture designed to overcome these limitations. NeXtBrain's core innovations, the NeXt Convolutional Block (NCB) and the NeXt Transformer Block (NTB), synergistically enhance feature learning: NCB leverages Multi-Head Convolutional Attention and a SwiGLU-based MLP to precisely extract subtle local tumor morphologies and detailed textures, while NTB integrates self-attention with convolutional attention and a SwiGLU MLP to effectively model long-range spatial dependencies and global contextual relationships, crucial for differentiating complex tumor characteristics. Evaluated on two publicly available benchmark datasets, Figshare and Kaggle, NeXtBrain was rigorously compared against 17 state-of-the-art (SOTA) models. On Figshare, it achieved 99.78% accuracy and a 99.77% F1-score. On Kaggle, it attained 99.78% accuracy and a 99.81% F1-score, surpassing leading SOTA ViT, CNN, and hybrid models. Critically, NeXtBrain demonstrates exceptional computational efficiency, achieving these SOTA results with only 23.91 million parameters, requiring just 10.32 GFLOPs, and exhibiting a rapid inference time of 0.007 ms. This efficiency allows it to outperform significantly larger models such as DeiT3-Base (85.82 M parameters) and Swin-Base (86.75 M parameters) in both accuracy and computational demand.
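The full NCB and NTB blocks cannot be reconstructed from the abstract alone, but the SwiGLU MLP both blocks share has a standard form: a gated feed-forward layer where a SiLU-activated projection gates a parallel linear projection. A minimal NumPy sketch with random placeholder weights (dimensions and weight initialization are assumptions, not the paper's):

```python
import numpy as np

def silu(z):
    """SiLU activation: z * sigmoid(z)."""
    return z / (1.0 + np.exp(-z))

def swiglu_mlp(x, w_gate, w_up, w_down):
    """SwiGLU MLP: (SiLU(x W_gate) * (x W_up)) W_down.
    x: (n, d_model); w_gate, w_up: (d_model, d_ff); w_down: (d_ff, d_model)."""
    return (silu(x @ w_gate) * (x @ w_up)) @ w_down

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8))                    # 2 tokens, d_model = 8
w_g, w_u = rng.standard_normal((8, 16)), rng.standard_normal((8, 16))
w_d = rng.standard_normal((16, 8))
out = swiglu_mlp(x, w_g, w_u, w_d)                 # shape preserved: (2, 8)
```

The element-wise gate is what distinguishes SwiGLU from a plain two-layer MLP; it lets the network modulate each hidden feature before the down-projection.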

Physics-informed neural networks for denoising high b-value diffusion-weighted images.

Lin Q, Yang F, Yan Y, Zhang H, Xie Q, Zheng J, Yang W, Qian L, Liu S, Yao W, Qu X

pubmed · Jun 7 2025
Diffusion-weighted imaging (DWI) is widely applied in tumor diagnosis by measuring the diffusion of water molecules. To increase sensitivity to tumors, faithful high b-value DWI images are desirable, obtained by applying a stronger gradient field in magnetic resonance imaging (MRI). However, high b-value DWI images suffer from a heavily reduced signal-to-noise ratio due to the exponential decay of signal intensity, so denoising becomes important for them. Here, we propose a Physics-Informed neural Network for high b-value DWI image Denoising (PIND) that leverages a physics-informed loss and prior information from low b-value DWI images with high signal-to-noise ratio. Experiments are conducted on a prostate DWI dataset of 125 subjects. Compared with the original noisy images, PIND improves the peak signal-to-noise ratio from 31.25 dB to 36.28 dB and the structural similarity index measure from 0.77 to 0.92. Our scheme can save 83% of data acquisition time, since fewer averages of high b-value DWI images need to be acquired, while maintaining 98% accuracy of the apparent diffusion coefficient value, suggesting its potential effectiveness in preserving essential diffusion characteristics. A reader study by 4 radiologists (3, 6, 13, and 18 years of experience) indicates PIND's promising performance on overall quality, signal-to-noise ratio, artifact suppression, and lesion conspicuity, showing potential for improving clinical DWI applications.
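The exponential decay that motivates PIND follows the monoexponential DWI model S(b) = S0 · exp(-b · ADC). A small sketch (signals and b-values are hypothetical, not the paper's data) showing both why high b-value signal is weak and how the apparent diffusion coefficient is recovered from two b-values:

```python
import numpy as np

def adc_from_two_b_values(s_low, s_high, b_low, b_high):
    """Apparent diffusion coefficient (mm^2/s) from the monoexponential model
    S(b) = S0 * exp(-b * ADC), given signals at two b-values (s/mm^2)."""
    return np.log(s_low / s_high) / (b_high - b_low)

# Hypothetical acquisition: signal drops sharply as b increases, which is
# why high b-value images have poor signal-to-noise ratio.
s0, adc_true = 1000.0, 1.0e-3
b_low, b_high = 50, 1500
s_low = s0 * np.exp(-b_low * adc_true)    # strong signal at low b
s_high = s0 * np.exp(-b_high * adc_true)  # weak signal at high b
adc = adc_from_two_b_values(s_low, s_high, b_low, b_high)
```

Preserving ADC accuracy after denoising, as PIND reports (98%), matters because the ADC map, not the raw image, often carries the diagnostic information.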

Hypothalamus and intracranial volume segmentation at the group level by use of a Gradio-CNN framework.

Vernikouskaya I, Rasche V, Kassubek J, Müller HP

pubmed · Jun 6 2025
This study aimed to develop and evaluate a graphical user interface (GUI) for the automated segmentation of the hypothalamus and intracranial volume (ICV) in brain MRI scans. The interface was designed to facilitate efficient and accurate segmentation for research applications, with a focus on accessibility and ease of use for end-users. We developed a web-based GUI using the Gradio library, integrating deep learning-based segmentation models trained on annotated brain MRI scans. The model utilizes a U-Net architecture to delineate the hypothalamus and ICV. The GUI allows users to upload high-resolution MRI scans, visualize the segmentation results, calculate hypothalamic volume and ICV, and manually correct individual segmentation results. To ensure widespread accessibility, we deployed the interface using ngrok, allowing users to access the tool via a shared link. As an example of the universality of the approach, the tool was applied to a group of 90 patients with Parkinson's disease (PD) and 39 controls. The GUI demonstrated high usability and efficiency in segmenting the hypothalamus and the ICV, with no significant difference in normalized hypothalamic volume observed between PD patients and controls, consistent with previously published findings. The average processing time per patient volume was 18 s for the hypothalamus and 44 s for the ICV segmentation on a 6-GB NVIDIA GeForce GTX 1060 GPU. The ngrok-based deployment allowed for seamless access across different devices and operating systems, with an average connection time of less than 5 s. The developed GUI provides a powerful and accessible tool for applications in neuroimaging. The combination of the intuitive interface, accurate deep learning-based segmentation, and easy deployment via ngrok addresses the need for user-friendly tools in brain MRI analysis. This approach has the potential to streamline workflows in neuroimaging research.
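The volume calculations behind the GUI reduce to counting voxels in a binary segmentation mask and multiplying by the voxel volume; the study then normalizes hypothalamic volume by ICV. A minimal sketch with hypothetical masks and voxel sizes (not the authors' code):

```python
import numpy as np

def mask_volume_ml(mask, voxel_size_mm):
    """Volume of a binary segmentation mask in millilitres:
    (number of foreground voxels) * (voxel volume in mm^3) / 1000."""
    voxel_vol_mm3 = float(np.prod(voxel_size_mm))
    return mask.sum() * voxel_vol_mm3 / 1000.0

# Hypothetical 10x10x10 volume with 1 mm isotropic voxels: a small
# hypothalamus mask normalized against a full-volume ICV mask.
hypo = np.zeros((10, 10, 10), dtype=bool)
hypo[2:4, 2:4, 2:4] = True                     # 8 voxels
icv = np.ones((10, 10, 10), dtype=bool)        # 1000 voxels
normalized = (mask_volume_ml(hypo, (1.0, 1.0, 1.0))
              / mask_volume_ml(icv, (1.0, 1.0, 1.0)))
```

Normalizing by ICV corrects for head-size differences, which is what makes the PD-versus-control comparison meaningful.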

Post-processing steps improve generalisability and robustness of an MRI-based radiogenomic model for human papillomavirus status prediction in oropharyngeal cancer.

Ahmadian M, Bodalal Z, Bos P, Martens RM, Agrotis G, van der Hulst HJ, Vens C, Karssemakers L, Al-Mamgani A, de Graaf P, Jasperse B, Brakenhoff RH, Leemans CR, Beets-Tan RGH, Castelijns JA, van den Brekel MWM

pubmed · Jun 6 2025
To assess the impact of image post-processing steps on the generalisability of MRI-based radiogenomic models. Using a human papillomavirus (HPV) status in oropharyngeal squamous cell carcinoma (OPSCC) prediction model, this study examines the potential of different post-processing strategies to increase its generalisability across data from different centres and image acquisition protocols. Contrast-enhanced T1-weighted MR images of OPSCC patients from two cohorts at different centres, with confirmed HPV status, were manually segmented. After radiomic feature extraction, the HPV prediction model trained on a training set of 91 patients was subsequently tested on two independent cohorts: a test set of 62 patients and an externally derived cohort of 157 patients. The data processing options included: data harmonisation, a process to ensure consistency in data from different centres; exclusion of unstable features across different segmentations and scan protocols; and removal of highly correlated features to reduce redundancy. The predictive model, trained without post-processing, showed high performance on the test set, with an AUC of 0.79 (95% CI: 0.66-0.90, p < 0.001). However, when tested on the external data, the model performed less well, with an AUC of 0.52 (95% CI: 0.45-0.58, p = 0.334). The model's generalisability substantially improved after performing the post-processing steps: the AUC for the test set reached 0.76 (95% CI: 0.63-0.87, p < 0.001), while for the external cohort the predictive model achieved an AUC of 0.73 (95% CI: 0.64-0.81, p < 0.001). When applied before model development, post-processing steps can enhance the robustness and generalisability of predictive radiogenomic models.
Question: How do post-processing steps impact the generalisability of MRI-based radiogenomic prediction models?
Findings: Applying post-processing steps, i.e., data harmonisation, identification of stable radiomic features, and removal of correlated features, before model development can improve model robustness and generalisability.
Clinical relevance: Post-processing steps in MRI radiogenomic model generation lead to reliable non-invasive diagnostic tools for personalised cancer treatment strategies.
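Of the three post-processing steps, removal of highly correlated features is the most mechanical; a minimal NumPy sketch (the threshold and the greedy keep-first strategy are illustrative assumptions, not the authors' exact procedure):

```python
import numpy as np

def drop_correlated_features(X, names, threshold=0.9):
    """Greedily keep features whose absolute Pearson correlation with every
    already-kept feature is at or below the threshold; drop the rest.

    X: (n_samples, n_features) radiomic feature matrix.
    """
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return X[:, keep], [names[j] for j in keep]

# Toy features: f2 is nearly a scaled copy of f1, so it gets dropped.
X = np.array([[1.0, 2.0, 5.0],
              [2.0, 4.1, 3.0],
              [3.0, 6.0, 4.0],
              [4.0, 8.2, 1.0]])
Xr, kept = drop_correlated_features(X, ["f1", "f2", "f3"], threshold=0.9)
```

In a real pipeline this step would follow harmonisation (e.g., across scanners) and stability filtering, so that only reproducible, non-redundant features reach model training.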

Can Transfer Learning Improve Supervised Segmentation of White Matter Bundles in Glioma Patients?

Riccardi, C., Ghezzi, S., Amorosino, G., Zigiotto, L., Sarubbo, S., Jovicich, J., Avesani, P.

biorxiv preprint · Jun 6 2025
In clinical neuroscience, segmentation of the main white matter bundles is a prerequisite for many tasks, such as pre-operative neurosurgical planning and monitoring of neuro-related diseases. Automating bundle segmentation with data-driven approaches and deep learning models has shown promising accuracy in healthy individuals. The lack of large clinical datasets prevents translating these results to patients, and inference on patient data with models trained on a healthy population is ineffective because of domain shift. This study carries out an empirical analysis to investigate how transfer learning might help overcome these limitations. For our analysis, we consider a public dataset with hundreds of individuals and a clinical dataset of glioma patients, focusing our preliminary investigation on the corticospinal tract. The results show that transfer learning might be effective in partially overcoming the domain shift.

Reliable Evaluation of MRI Motion Correction: Dataset and Insights

Kun Wang, Tobit Klug, Stefan Ruschke, Jan S. Kirschke, Reinhard Heckel

arxiv preprint · Jun 6 2025
Correcting motion artifacts in MRI is important, as they can hinder accurate diagnosis. However, evaluating deep learning-based and classical motion correction methods remains fundamentally difficult due to the lack of accessible ground-truth target data. To address this challenge, we study three evaluation approaches: real-world evaluation based on reference scans, simulated motion, and reference-free evaluation, each with its merits and shortcomings. To enable evaluation with real-world motion artifacts, we release PMoC3D, a dataset of unprocessed Paired Motion-Corrupted 3D brain MRI data. To advance evaluation quality, we introduce MoMRISim, a feature-space metric trained for evaluating motion reconstructions. We assess each evaluation approach and find that real-world evaluation together with MoMRISim, while not perfect, is the most reliable. Evaluation based on simulated motion systematically exaggerates algorithm performance, and reference-free evaluation overrates oversmoothed deep learning outputs.
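Reference-based evaluation means computing a full-reference metric against a ground-truth scan; a minimal PSNR sketch on synthetic data illustrates the idea (illustrative only: the paper's MoMRISim is a learned feature-space metric, not shown here):

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio (dB) of an estimate against a ground-truth
    reference: 10 * log10(max_val^2 / MSE)."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Synthetic "reference scan" and a noise-corrupted estimate of it.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
noisy = np.clip(ref + rng.normal(0.0, 0.05, ref.shape), 0.0, 1.0)
score = psnr(ref, noisy)
```

Pixel-wise metrics like PSNR reward oversmoothing, which is one reason a learned perceptual metric can be a more reliable complement for ranking motion-correction methods.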

TissUnet: Improved Extracranial Tissue and Cranium Segmentation for Children through Adulthood

Markian Mandzak, Elvira Yang, Anna Zapaishchykova, Yu-Hui Chen, Lucas Heilbroner, John Zielke, Divyanshu Tak, Reza Mojahed-Yazdi, Francesca Romana Mussa, Zezhong Ye, Sridhar Vajapeyam, Viviana Benitez, Ralph Salloum, Susan N. Chi, Houman Sotoudeh, Jakob Seidlitz, Sabine Mueller, Hugo J. W. L. Aerts, Tina Y. Poussaint, Benjamin H. Kann

arxiv preprint · Jun 6 2025
Extracranial tissues visible on brain magnetic resonance imaging (MRI) may hold significant value for characterizing health conditions and clinical decision-making, yet they are rarely quantified. Current tools have not been widely validated, particularly in settings of developing brains or underlying pathology. We present TissUnet, a deep learning model that segments skull bone, subcutaneous fat, and muscle from routine three-dimensional T1-weighted MRI, with or without contrast enhancement. The model was trained on 155 paired MRI-computed tomography (CT) scans and validated across nine datasets covering a wide age range and including individuals with brain tumors. In comparison to AI-CT-derived labels from 37 MRI-CT pairs, TissUnet achieved a median Dice coefficient of 0.79 [IQR: 0.77-0.81] in a healthy adult cohort. In a second validation using expert manual annotations, the median Dice was 0.83 [IQR: 0.83-0.84] in healthy individuals and 0.81 [IQR: 0.78-0.83] in tumor cases, outperforming the previous state-of-the-art method. Acceptability testing resulted in an 89% acceptance rate after adjudication by a tie-breaker (N = 108 MRIs), and TissUnet demonstrated excellent performance in the blinded comparative review (N = 45 MRIs), including both healthy and tumor cases in pediatric populations. TissUnet enables fast, accurate, and reproducible segmentation of extracranial tissues, supporting large-scale studies on craniofacial morphology, treatment effects, and cardiometabolic risk using standard brain T1w MRI.
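The Dice coefficient used throughout this validation is straightforward to compute; a minimal sketch on toy binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|).
    Returns 1.0 when both masks are empty."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 4x4 masks: prediction covers 4 voxels, ground truth covers 6,
# and all 4 predicted voxels overlap the truth.
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:3] = True
truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 1:4] = True
score = dice(pred, truth)  # 2 * 4 / (4 + 6) = 0.8
```

A per-scan Dice like this, aggregated as a median with IQR across a cohort, is exactly the form of the numbers reported above.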

Comparative analysis of convolutional neural networks and vision transformers in identifying benign and malignant breast lesions.

Wang L, Fang S, Chen X, Pan C, Meng M

pubmed · Jun 6 2025
Various deep learning models have been developed and employed for medical image classification. This study conducted comprehensive experiments on 12 models, aiming to establish reliable benchmarks for research on breast dynamic contrast-enhanced magnetic resonance imaging image classification. Twelve deep learning models were systematically compared by analyzing variations in 4 key hyperparameters: optimizer (Op), learning rate, batch size (BS), and data augmentation. The evaluation criteria encompassed a comprehensive set of metrics, including accuracy (Ac), loss value, precision, recall rate, F1-score, and area under the receiver operating characteristic curve. Furthermore, training times and model parameter counts were assessed for a holistic performance comparison. Adjusting the BS with the Adam Op had a minimal impact on Ac in the convolutional neural network models; however, altering the Op and learning rate while maintaining the same BS significantly affected the Ac. The ResNet152 network model exhibited the lowest Ac. Both the recall rate and area under the receiver operating characteristic curve for the ResNet152 and Vision transformer-base (ViT) models were inferior compared to the others. Data augmentation unexpectedly reduced the Ac of the ResNet50, ResNet152, VGG16, VGG19, and ViT models. The VGG16 model had the shortest training duration, whereas the ViT model, before data augmentation, had the longest training time and the smallest model weight. The ResNet152 and ViT models were not well suited for image classification tasks involving small breast dynamic contrast-enhanced magnetic resonance imaging datasets. Although data augmentation is typically beneficial, its application should be approached cautiously. These findings provide important insights to inform and refine future research in this domain.
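A comparison over four hyperparameters amounts to evaluating every combination of optimizer, learning rate, batch size, and augmentation setting per model; a minimal sketch of such a grid (the specific values are hypothetical, not the study's):

```python
from itertools import product

# Hypothetical search space mirroring the four hyperparameters varied
# in the study: optimizer, learning rate, batch size, data augmentation.
optimizers = ["adam", "sgd"]
learning_rates = [1e-3, 1e-4]
batch_sizes = [16, 32]
augmentation = [False, True]

# Each tuple is one training configuration to evaluate for each of the
# 12 models; 2 * 2 * 2 * 2 = 16 configurations here.
grid = list(product(optimizers, learning_rates, batch_sizes, augmentation))
```

Enumerating the grid explicitly makes it easy to log accuracy, loss, and training time per configuration, which is the kind of bookkeeping benchmark studies like this one depend on.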
