
Dual-stage AI system for Pathologist-Free Tumor Detection and Subtyping in Oral Squamous Cell Carcinoma

Chaudhary, N., Muddemanavar, P., Singh, D. K., Rai, A., Mishra, D., SV, S., Augustine, J., Chandra, A., Chaurasia, A., Ahmad, T.

medRxiv preprint · Jun 6 2025
Background: Accurate histological grading of oral squamous cell carcinoma (OSCC) is critical for prognosis and treatment planning. Current methods lack automation for OSCC detection, subtyping, and differentiation from high-risk pre-malignant conditions such as oral submucous fibrosis (OSMF). Further, whole-slide image (WSI) analysis is time-consuming and variable, limiting consistency. We present a clinically relevant deep learning framework that leverages weakly supervised learning and attention-based multiple instance learning (MIL) to enable automated OSCC grading and early prediction of malignant transformation from OSMF. Methods: We conducted a multi-institutional retrospective cohort study using a curated dataset of 1,925 WSIs, including 1,586 OSCC cases stratified into well-, moderately-, and poorly-differentiated subtypes (WD, MD, and PD), 128 normal controls, and 211 cases of OSMF and OSMF with OSCC. We developed a two-stage deep learning pipeline named OralPatho. In stage one, an attention-based MIL model was trained to perform binary classification (normal vs OSCC). In stage two, a gated attention mechanism with top-K patch selection was employed to classify the OSCC subtypes. Model performance was assessed using stratified 3-fold cross-validation and external validation on an independent dataset. Findings: The binary classifier demonstrated robust performance, with a mean F1-score exceeding 0.93 across all validation folds. The multiclass model achieved consistent macro-F1 scores of 0.72, 0.70, and 0.68, along with AUCs of 0.79 for WD, 0.71 for MD, and 0.61 for PD OSCC subtypes. Model generalizability was validated using an independent external dataset. Attention maps reliably highlighted clinically relevant histological features, supporting the system's interpretability and its diagnostic alignment with expert pathological assessment. Interpretation: This study demonstrates the feasibility of attention-based, weakly supervised learning for accurate OSCC grading from whole-slide images. OralPatho combines high diagnostic performance with real-time interpretability, making it a scalable solution for both advanced pathology labs and resource-limited settings.
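
The slide-level subtyping stage rests on gated attention MIL pooling with top-K patch selection. The PyTorch sketch below illustrates that mechanism; the feature dimension, hidden size, and top_k value are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch of gated attention MIL pooling with top-K patch selection
# (layer sizes and top_k are illustrative; the paper's exact configuration is not given).
import torch
import torch.nn as nn

class GatedAttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=128, n_classes=3, top_k=20):
        super().__init__()
        self.top_k = top_k
        self.attn_V = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.Tanh())
        self.attn_U = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.Sigmoid())
        self.attn_w = nn.Linear(hidden_dim, 1)           # attention score per patch
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats):                      # patch_feats: (n_patches, feat_dim)
        scores = self.attn_w(self.attn_V(patch_feats) * self.attn_U(patch_feats))  # (n, 1)
        k = min(self.top_k, patch_feats.size(0))
        top_scores, idx = scores.squeeze(1).topk(k)      # keep the k most attended patches
        weights = torch.softmax(top_scores, dim=0)       # normalise over selected patches
        slide_feat = (weights.unsqueeze(1) * patch_feats[idx]).sum(dim=0)  # bag embedding
        return self.classifier(slide_feat), weights, idx
```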

Clinically Interpretable Deep Learning via Sparse BagNets for Epiretinal Membrane and Related Pathology Detection

Ofosu Mensah, S., Neubauer, J., Ayhan, M. S., Djoumessi Donteu, K. R., Koch, L. M., Uzel, M. M., Gelisken, F., Berens, P.

medRxiv preprint · Jun 6 2025
Epiretinal membrane (ERM) is a vitreoretinal interface disease that, if not properly addressed, can lead to vision impairment and negatively affect quality of life. For ERM detection and treatment planning, optical coherence tomography (OCT) has become the primary imaging modality, offering non-invasive, high-resolution cross-sectional imaging of the retina. Deep learning models have achieved good ERM detection performance on OCT images. Nevertheless, most deep learning models cannot be easily understood by clinicians, which limits their acceptance in clinical practice. Post-hoc explanation methods have been utilised to support the uptake of models, albeit with partial success. In this study, we trained a sparse BagNet model, an inherently interpretable deep learning model, to detect ERM in OCT images. It performed on par with a comparable black-box model and generalised well to external data. In a multitask setting, it also accurately predicted other changes related to ERM pathophysiology. Through a user study with ophthalmologists, we showed that the visual explanations the sparse BagNet model readily provides for its decisions are well aligned with clinical expertise. We propose potential directions for clinical implementation of the sparse BagNet model to guide clinical decisions in practice.
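
A BagNet restricts the receptive field so that class evidence is computed locally and then averaged over the image, which is what makes its explanations directly readable; sparsity further concentrates that evidence on a few regions. The sketch below illustrates the general idea; the backbone layers and penalty weight are placeholders, not the paper's implementation.

```python
# Schematic of the sparse BagNet idea: a small-receptive-field backbone produces a
# spatial map of local class evidence; the image-level logit is its spatial mean,
# and an L1 penalty on the map encourages sparse, localisable explanations.
# Backbone layers and the penalty weight are placeholders, not the paper's implementation.
import torch
import torch.nn as nn

class TinyBagNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.local_features = nn.Sequential(             # small kernels -> limited receptive field
            nn.Conv2d(1, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
        )
        self.evidence = nn.Conv2d(64, n_classes, 1)       # 1x1 conv: per-location class evidence

    def forward(self, x):                                 # x: (B, 1, H, W) OCT B-scan
        emap = self.evidence(self.local_features(x))      # (B, n_classes, h, w) evidence map
        logits = emap.mean(dim=(2, 3))                    # spatial average -> image-level logits
        return logits, emap

def sparse_bagnet_loss(logits, emap, targets, l1_weight=1e-4):
    ce = nn.functional.cross_entropy(logits, targets)
    sparsity = emap.abs().mean()                          # L1 term keeps the evidence map sparse
    return ce + l1_weight * sparsity
```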

Detecting neurodegenerative changes in glaucoma using deep mean kurtosis-curve-corrected tractometry

Kasa, L. W., Schierding, W., Kwon, E., Holdsworth, S., Danesh-Meyer, H. V.

medRxiv preprint · Jun 6 2025
Glaucoma is increasingly recognized as a neurodegenerative condition involving both retinal and central nervous system structures. Here, we present an integrated framework that combines MK-Curve-corrected diffusion kurtosis imaging (DKI), tractometry, and deep autoencoder-based normative modeling to detect localized white matter abnormalities associated with glaucoma. Using UK Biobank diffusion MRI data, we show that the MK-Curve approach corrects anatomically implausible values and improves the reliability of DKI metrics, particularly mean (MK), radial (RK), and axial kurtosis (AK), in regions of complex fiber architecture. Tractometry revealed reduced MK in glaucoma patients along the optic radiation, inferior longitudinal fasciculus, and inferior fronto-occipital fasciculus, but not in a non-visual control tract, supporting disease specificity. These abnormalities were spatially localized, with significant changes observed at multiple points along the tracts. MK demonstrated greater sensitivity than mean diffusivity (MD) and exhibited altered distributional features, reflecting microstructural heterogeneity not captured by standard metrics. Node-wise MK values in the right optic radiation showed weak but significant correlations with retinal OCT measures (ganglion cell layer and retinal nerve fiber layer thickness), reinforcing the biological relevance of these findings. Deep autoencoder-based modeling further enabled subject-level anomaly detection that aligned spatially with group-level changes and outperformed traditional approaches. Together, our results highlight the potential of advanced diffusion modeling and deep learning for sensitive, individualized detection of glaucomatous neurodegeneration and support their integration into future multimodal imaging pipelines in neuro-ophthalmology.
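
Normative modelling with an autoencoder amounts to training on healthy tract profiles and scoring subjects by how poorly the model reconstructs them. A minimal sketch follows, assuming 100-node MK profiles and an illustrative network size; neither is taken from the paper.

```python
# Sketch of autoencoder-based normative modelling on tract profiles: train on healthy
# controls' node-wise MK values, then flag subjects whose reconstruction error is large.
# The 100-node profile length, layer sizes, and z-score threshold are assumptions.
import torch
import torch.nn as nn

class TractAutoencoder(nn.Module):
    def __init__(self, n_nodes=100, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_nodes, 32), nn.ReLU(), nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_nodes))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model, profiles, control_profiles):
    """Node-wise reconstruction error, z-scored against the healthy control distribution."""
    with torch.no_grad():
        err = (model(profiles) - profiles).abs()                        # (n_subjects, n_nodes)
        ctrl_err = (model(control_profiles) - control_profiles).abs()   # (n_controls, n_nodes)
    z = (err - ctrl_err.mean(0)) / (ctrl_err.std(0) + 1e-8)
    return z                                                            # e.g. flag nodes with z > 2
```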

Deep learning-enabled MRI phenotyping uncovers regional body composition heterogeneity and disease associations in two European population cohorts

Mertens, C. J., Haentze, H., Ziegelmayer, S., Kather, J. N., Truhn, D., Kim, S. H., Busch, F., Weller, D., Wiestler, B., Graf, M., Bamberg, F., Schlett, C. L., Weiss, J. B., Ringhof, S., Can, E., Schulz-Menger, J., Niendorf, T., Lammert, J., Molwitz, I., Kader, A., Hering, A., Meddeb, A., Nawabi, J., Schulze, M. B., Keil, T., Willich, S. N., Krist, L., Hadamitzky, M., Hannemann, A., Bassermann, F., Rueckert, D., Pischon, T., Hapfelmeier, A., Makowski, M. R., Bressem, K. K., Adams, L. C.

medRxiv preprint · Jun 6 2025
Body mass index (BMI) does not account for substantial inter-individual differences in regional fat and muscle compartments, which are relevant for the prevalence of cardiometabolic and cancer conditions. We applied a validated deep learning pipeline for automated segmentation of whole-body MRI scans in 45,851 adults from the UK Biobank and German National Cohort, enabling harmonized quantification of visceral (VAT), gluteofemoral (GFAT), and abdominal subcutaneous adipose tissue (ASAT), liver fat fraction (LFF), and trunk muscle volume. Associations with clinical conditions were evaluated using compartment measures adjusted for age, sex, height, and BMI. Our analysis demonstrates that regional adiposity and muscle volume show distinct associations with cardiometabolic and cancer prevalence, and that substantial disease heterogeneity exists within BMI strata. The analytic framework and reference data presented here will support future risk stratification efforts and facilitate the integration of automated MRI phenotyping into large-scale population and clinical research.
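
The association analysis described above follows a standard covariate-adjustment pattern: each compartment measure is related to disease prevalence while controlling for age, sex, height, and BMI. A minimal sketch with hypothetical column names is shown below.

```python
# Illustrative sketch of the adjustment strategy: disease prevalence modelled on a
# regional compartment measure (here VAT) with age, sex, height, and BMI as covariates.
# The file name and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("body_composition_phenotypes.csv")   # hypothetical per-participant table
model = smf.logit("diabetes ~ vat_l + age + C(sex) + height_cm + bmi", data=df).fit()
print(model.summary())
# Exponentiate the VAT coefficient to obtain an odds ratio per litre of visceral fat,
# independent of BMI.
```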

Magnetic resonance imaging and the evaluation of vestibular schwannomas: a systematic review

Lee, K. S., Wijetilake, N., Connor, S., Vercauteren, T., Shapey, J.

medRxiv preprint · Jun 6 2025
Introduction: The assessment of vestibular schwannoma (VS) requires a standardized measurement approach, as growth is a key element in defining the treatment strategy for VS. Volumetric measurements offer higher sensitivity and precision, but existing segmentation methods are labour-intensive, lack standardisation, and are prone to variability and subjectivity. A core set of measurement indicators, reported consistently, would support clinical decision-making and facilitate evidence synthesis. This systematic review aimed to identify indicators used in 1) magnetic resonance imaging (MRI) acquisition, 2) measurement, and 3) growth assessment of VS. This work is expected to inform a Delphi consensus. Methods: Systematic searches of Medline, Embase, and Cochrane Central were undertaken on 4th October 2024. Studies that assessed the evaluation of VS with MRI between 2014 and 2024 were included. Results: The final dataset consisted of 102 studies and 19,001 patients. Eighty-six (84.3%) studies employed post-contrast T1 as the MRI acquisition of choice for evaluating VS. Nine (8.8%) studies additionally employed heavily T2-weighted sequences such as constructive interference in steady state (CISS) and FIESTA-C. Only 45 (44.1%) studies reported the slice thickness, with the majority, 38 (84.4%), choosing a thickness of <3 mm. Fifty-eight (56.8%) studies measured volume, whilst 49 (48.0%) measured the largest linear dimension; 14 (13.7%) studies used both measurements. Four studies employed semi-automated or automated segmentation processes to measure the volumes of VS. Of 68 studies investigating growth, 54 (79.4%) provided a threshold. Significant variation in volumetric growth thresholds was observed, but the threshold for significant percentage change reported by most studies was 20% (n = 18). Conclusion: Substantial variation in MRI acquisition, and in the methods used to evaluate measurement and growth of VS, exists across the literature. This lack of standardization is likely attributable to resource constraints and to the fact that currently available volumetric segmentation methods are very labour-intensive. Having identified the indicators employed in the literature, this study aims to develop a Delphi consensus for the standardized measurement of VS and to promote uptake of data-driven, artificial intelligence-based measuring tools.

TissUnet: Improved Extracranial Tissue and Cranium Segmentation for Children through Adulthood

Markiian Mandzak, Elvira Yang, Anna Zapaishchykova, Yu-Hui Chen, Lucas Heilbroner, John Zielke, Divyanshu Tak, Reza Mojahed-Yazdi, Francesca Romana Mussa, Zezhong Ye, Sridhar Vajapeyam, Viviana Benitez, Ralph Salloum, Susan N. Chi, Houman Sotoudeh, Jakob Seidlitz, Sabine Mueller, Hugo J. W. L. Aerts, Tina Y. Poussaint, Benjamin H. Kann

arXiv preprint · Jun 6 2025
Extracranial tissues visible on brain magnetic resonance imaging (MRI) may hold significant value for characterizing health conditions and for clinical decision-making, yet they are rarely quantified. Current tools have not been widely validated, particularly in settings of developing brains or underlying pathology. We present TissUnet, a deep learning model that segments skull bone, subcutaneous fat, and muscle from routine three-dimensional T1-weighted MRI, with or without contrast enhancement. The model was trained on 155 paired MRI-computed tomography (CT) scans and validated across nine datasets covering a wide age range and including individuals with brain tumors. In comparison to AI-CT-derived labels from 37 MRI-CT pairs, TissUnet achieved a median Dice coefficient of 0.79 [IQR: 0.77-0.81] in a healthy adult cohort. In a second validation using expert manual annotations, the median Dice was 0.83 [IQR: 0.83-0.84] in healthy individuals and 0.81 [IQR: 0.78-0.83] in tumor cases, outperforming the previous state-of-the-art method. Acceptability testing resulted in an 89% acceptance rate after adjudication by a tie-breaker (N=108 MRIs), and TissUnet demonstrated excellent performance in the blinded comparative review (N=45 MRIs), which included both healthy and tumor cases in pediatric populations. TissUnet enables fast, accurate, and reproducible segmentation of extracranial tissues, supporting large-scale studies on craniofacial morphology, treatment effects, and cardiometabolic risk using standard brain T1w MRI.
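
For reference, the headline metric reported above is the Dice coefficient on binary masks; a minimal NumPy version is sketched below, with the cohort median and IQR computed over per-case scores.

```python
# Minimal Dice coefficient on binary segmentation masks; NumPy arrays stand in for
# the actual volumes.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum() + eps)

# Per-case evaluation, then the cohort median and IQR as reported above:
# dices = [dice(p, r) for p, r in zip(predicted_masks, reference_masks)]
# q25, median, q75 = np.percentile(dices, [25, 50, 75])
```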

ResPF: Residual Poisson Flow for Efficient and Physically Consistent Sparse-View CT Reconstruction

Changsheng Fang, Yongtong Liu, Bahareh Morovati, Shuo Han, Yu Shi, Li Zhou, Shuyi Fan, Hengyong Yu

arXiv preprint · Jun 6 2025
Sparse-view computed tomography (CT) is a practical solution to reduce radiation dose, but the resulting ill-posed inverse problem poses significant challenges for accurate image reconstruction. Although deep learning and diffusion-based methods have shown promising results, they often lack physical interpretability or suffer from high computational costs due to iterative sampling starting from random noise. Recent advances in generative modeling, particularly Poisson Flow Generative Models (PFGM), enable high-fidelity image synthesis by modeling the full data distribution. In this work, we propose Residual Poisson Flow (ResPF) Generative Models for efficient and accurate sparse-view CT reconstruction. Based on PFGM++, ResPF integrates conditional guidance from sparse measurements and employs a hijacking strategy to significantly reduce sampling cost by skipping redundant initial steps. However, skipping early stages can degrade reconstruction quality and introduce unrealistic structures. To address this, we embed a data-consistency step into each iteration, ensuring fidelity to the sparse-view measurements. Yet PFGM sampling relies on a fixed ordinary differential equation (ODE) trajectory induced by electrostatic fields, which can be disrupted by step-wise data consistency, resulting in unstable or degraded reconstructions. Inspired by ResNet, we introduce a residual fusion module that linearly combines generative outputs with data-consistent reconstructions, effectively preserving trajectory continuity. To the best of our knowledge, this is the first application of Poisson flow models to sparse-view CT. Extensive experiments on synthetic and clinical datasets demonstrate that ResPF achieves superior reconstruction quality, faster inference, and stronger robustness compared with state-of-the-art iterative, learning-based, and diffusion models.
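
The core idea, hijacked ODE sampling with per-step data consistency and residual fusion, can be pictured as a single update of the following form; the operators and fusion weight below are placeholders, not the authors' implementation.

```python
# Schematic of one ResPF-style sampling step: take an ODE update from the flow model,
# enforce consistency with the sparse-view measurements, then linearly fuse the two.
# flow_model, the forward projector A, its adjoint At, and the fusion weight are all
# placeholders standing in for the paper's actual components.
import torch

def respf_step(x: torch.Tensor, t: float, dt: float, flow_model, A, At,
               y: torch.Tensor, dc_weight: float = 1.0, fusion_alpha: float = 0.5):
    # 1) generative update along the (conditional) Poisson-flow ODE trajectory
    x_gen = x + dt * flow_model(x, t)

    # 2) data consistency: gradient step toward agreement with the sparse-view sinogram y
    x_dc = x_gen - dc_weight * At(A(x_gen) - y)

    # 3) residual fusion: linear combination preserving trajectory continuity
    return fusion_alpha * x_gen + (1.0 - fusion_alpha) * x_dc
```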

Development of a Deep Learning Model for the Volumetric Assessment of Osteonecrosis of the Femoral Head on Three-Dimensional Magnetic Resonance Imaging.

Uemura K, Takashima K, Otake Y, Li G, Mae H, Okada S, Hamada H, Sugano N

PubMed · Jun 6 2025
Although volumetric assessment of necrotic lesions using the Steinberg classification predicts future collapse in osteonecrosis of the femoral head (ONFH), quantifying these lesions on magnetic resonance imaging (MRI) generally requires considerable time and effort, preventing the Steinberg classification from being used routinely in clinical investigations. Thus, this study aimed to use deep learning to develop a method for automatically segmenting necrotic lesions on MRI and automatically classifying them according to the Steinberg classification. A total of 63 hips from patients who had ONFH and did not have collapse were included. An orthopaedic surgeon manually segmented the femoral head and necrotic lesions on MRI acquired using a spoiled gradient-echo sequence. Based on manual segmentation, 22 hips were classified as Steinberg grade A, 23 as Steinberg grade B, and 18 as Steinberg grade C. The manually segmented labels were used to train a deep learning model based on a 5-layer Dynamic U-Net. Four-fold cross-validation was performed to assess segmentation accuracy using the Dice coefficient (DC) and average symmetric distance (ASD). Furthermore, classification accuracy according to the Steinberg classification was evaluated along with the weighted Kappa coefficient. The median DC and ASD for the femoral head region were 0.95 (interquartile range [IQR], 0.95 to 0.96) and 0.65 mm (IQR, 0.59 to 0.75), respectively. For necrotic lesions, the median DC and ASD were 0.89 (IQR, 0.85 to 0.92) and 0.76 mm (IQR, 0.58 to 0.96), respectively. Based on the Steinberg classification, the grading matched in 59 hips (accuracy: 93.7%), with a weighted Kappa coefficient of 0.98. The proposed deep learning model exhibited high accuracy in segmenting and grading necrotic lesions according to the Steinberg classification on MRI. This model can be used to assist clinicians in the volumetric assessment of ONFH.
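
Once the femoral head and necrotic lesion are segmented, the Steinberg grade follows from the necrotic volume fraction. The sketch below uses the commonly cited 15% and 30% extent cut-offs as an assumption; the paper's exact grading criteria should be checked against the original.

```python
# Sketch of the grading step implied above: the Steinberg extent grade is derived
# from the necrotic volume fraction of the femoral head. The 15% / 30% cut-offs are
# the commonly cited thresholds and are an assumption here, not taken from the paper.
import numpy as np

def steinberg_grade(necrosis_mask: np.ndarray, head_mask: np.ndarray,
                    voxel_volume_mm3: float) -> str:
    necrotic_vol = necrosis_mask.astype(bool).sum() * voxel_volume_mm3
    head_vol = head_mask.astype(bool).sum() * voxel_volume_mm3
    fraction = necrotic_vol / head_vol
    if fraction < 0.15:
        return "A"            # mild: <15% of the femoral head involved
    elif fraction < 0.30:
        return "B"            # moderate: 15-30%
    return "C"                # severe: >30%
```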

Comparative analysis of convolutional neural networks and vision transformers in identifying benign and malignant breast lesions.

Wang L, Fang S, Chen X, Pan C, Meng M

PubMed · Jun 6 2025
Various deep learning models have been developed and employed for medical image classification. This study conducted comprehensive experiments on 12 models, aiming to establish reliable benchmarks for research on breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) image classification. Twelve deep learning models were systematically compared by analyzing variations in four key hyperparameters: optimizer (Op), learning rate, batch size (BS), and data augmentation. The evaluation criteria encompassed a comprehensive set of metrics, including accuracy (Ac), loss value, precision, recall rate, F1-score, and area under the receiver operating characteristic curve. Furthermore, training times and model parameter counts were assessed for a holistic performance comparison. Adjusting the BS under the Adam Op had a minimal impact on Ac in the convolutional neural network models, whereas altering the Op and learning rate while keeping the same BS significantly affected Ac. The ResNet152 network exhibited the lowest Ac. Both the recall rate and the area under the receiver operating characteristic curve of the ResNet152 and Vision Transformer-base (ViT) models were inferior to those of the other models. Data augmentation unexpectedly reduced the Ac of the ResNet50, ResNet152, VGG16, VGG19, and ViT models. The VGG16 model had the shortest training duration, whereas the ViT model, before data augmentation, had the longest training time and the smallest model weight. The ResNet152 and ViT models were not well suited to image classification tasks involving small breast DCE-MRI datasets. Although data augmentation is typically beneficial, its application should be approached cautiously. These findings provide important insights to inform and refine future research in this domain.
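
The experimental design amounts to sweeping the same training routine over the four hyperparameters for each architecture and collecting metrics per run; a skeleton of such a sweep is sketched below, with a hypothetical train_and_evaluate helper standing in for the actual training code.

```python
# Illustrative skeleton of the hyperparameter sweep described above. The values and the
# train_and_evaluate helper are hypothetical placeholders, not the study's actual setup.
import itertools

MODELS = ["resnet50", "resnet152", "vgg16", "vgg19", "vit_base"]  # subset of the 12 compared

def train_and_evaluate(model_name, optimizer, lr, batch_size, augment):
    """Hypothetical stand-in: train the named model and return its evaluation metrics."""
    raise NotImplementedError

results = []
for opt, lr, bs, aug in itertools.product(["adam", "sgd"], [1e-3, 1e-4], [16, 32], [False, True]):
    for model_name in MODELS:
        metrics = train_and_evaluate(model_name, optimizer=opt, lr=lr,
                                     batch_size=bs, augment=aug)
        results.append({"model": model_name, "optimizer": opt, "lr": lr,
                        "batch_size": bs, "augment": aug, **metrics})
```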

Reliable Evaluation of MRI Motion Correction: Dataset and Insights

Kun Wang, Tobit Klug, Stefan Ruschke, Jan S. Kirschke, Reinhard Heckel

arXiv preprint · Jun 6 2025
Correcting motion artifacts in MRI is important, as they can hinder accurate diagnosis. However, evaluating deep learning-based and classical motion correction methods remains fundamentally difficult due to the lack of accessible ground-truth target data. To address this challenge, we study three evaluation approaches: real-world evaluation based on reference scans, simulated motion, and reference-free evaluation, each with its own merits and shortcomings. To enable evaluation with real-world motion artifacts, we release PMoC3D, a dataset consisting of unprocessed Paired Motion-Corrupted 3D brain MRI data. To advance evaluation quality, we introduce MoMRISim, a feature-space metric trained for evaluating motion reconstructions. We assess each evaluation approach and find real-world evaluation together with MoMRISim, while not perfect, to be the most reliable. Evaluation based on simulated motion systematically exaggerates algorithm performance, and reference-free evaluation overrates oversmoothed deep learning outputs.
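
A feature-space metric of the kind MoMRISim represents compares reconstructions in the embedding of a learned encoder rather than in pixel space. The sketch below shows that pattern generically; the encoder is a placeholder, and MoMRISim's actual architecture and training objective are defined in the paper.

```python
# Generic sketch of a feature-space similarity metric: both volumes are passed through
# a feature extractor and compared in feature space. The encoder is a placeholder and
# does not reproduce MoMRISim itself.
import torch
import torch.nn.functional as F

def feature_space_similarity(recon: torch.Tensor, reference: torch.Tensor,
                             encoder: torch.nn.Module) -> float:
    with torch.no_grad():
        f_recon = encoder(recon)      # features of the motion-corrected volume, shape (1, feat_dim)
        f_ref = encoder(reference)    # features of the motion-free reference, shape (1, feat_dim)
    return F.cosine_similarity(f_recon.flatten(1), f_ref.flatten(1)).item()
```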