Page 600 of 7627616 results

Philipp Hans Nunn, Henner Huflage, Jan-Peter Grunz, Philipp Gruschwitz, Oliver Schad, Thorsten Alexander Bley, Johannes Tran-Gia, Tobias Wech

arXiv preprint · Jun 13, 2025
Purpose: Inversion recovery prepared ultra-short echo time (IR-UTE)-based MRI enables radiation-free visualization of osseous tissue. However, sufficient signal-to-noise ratio (SNR) can only be obtained with long acquisition times. This study proposes a data-driven approach to reconstruct undersampled IR-UTE knee data, thereby accelerating MR-based 3D imaging of bones. Methods: Data were acquired with a 3D radial IR-UTE pulse sequence, implemented using the open-source framework Pulseq. A denoising convolutional neural network (DnCNN) was trained in a supervised fashion using data from eight healthy subjects. Conjugate gradient sensitivity encoding (CG-SENSE) reconstructions of different retrospectively undersampled subsets (corresponding to 2.5-min, 5-min and 10-min acquisition times) were paired with the respective reference dataset reconstruction (30-min acquisition time). The DnCNN was then integrated into a Landweber-based reconstruction algorithm, enabling physics-based iterative reconstruction. Quantitative evaluations of the approach were performed using one prospectively accelerated scan as well as retrospectively undersampled datasets from four additional healthy subjects, by assessing the structural similarity index measure (SSIM), the peak signal-to-noise ratio (PSNR), the normalized root mean squared error (NRMSE), and the perceptual sharpness index (PSI). Results: Reconstructions of both the prospective and retrospective acquisitions showed good agreement with the reference dataset, indicating high image quality, particularly for an acquisition time of 5 min. The proposed method effectively preserves contrast and structural details while suppressing noise, albeit with a slight reduction in sharpness. Conclusion: The proposed method is poised to enable MR-based bone assessment in the knee within clinically feasible scan times.
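The Landweber-plus-denoiser scheme described above can be sketched as a plug-and-play iteration: a gradient step on the data-consistency term followed by a denoising step. A minimal NumPy sketch, with a moving-average filter standing in for the trained DnCNN and a toy linear operator `A` standing in for the radial encoding (all names here are illustrative, not the authors' implementation):

```python
import numpy as np

def landweber_denoise(A, y, denoiser, n_iter=50):
    """Plug-and-play Landweber: gradient step on ||Ax - y||^2, then denoise."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step size for a stable iteration
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * A.T @ (A @ x - y)     # Landweber data-consistency step
        x = denoiser(x)                      # the learned denoiser would go here
    return x

def box_denoiser(x, w=3):
    """Moving-average smoothing as a stand-in for the trained DnCNN."""
    return np.convolve(x, np.ones(w) / w, mode="same")

rng = np.random.default_rng(0)
A = 0.3 * rng.standard_normal((40, 20))      # toy forward operator
x_true = np.sin(np.linspace(0, 3, 20))       # smooth toy "image"
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = landweber_denoise(A, y, box_denoiser)
```

In the actual method the forward operator includes coil sensitivities and the non-Cartesian radial sampling, and the denoising step is the trained network rather than a fixed filter.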

Zuzanna Skorniewska, Bartlomiej W. Papiez

arXiv preprint · Jun 13, 2025
The adoption of neural network models in medical imaging has been constrained by strict privacy regulations, limited data availability, high acquisition costs, and demographic biases. Deep generative models offer a promising solution by generating synthetic data that bypasses privacy concerns and addresses fairness by producing samples for under-represented groups. However, unlike natural images, medical imaging requires validation not only for fidelity (e.g., Fréchet Inception Distance) but also for morphological and clinical accuracy. This is particularly true for colour fundus retinal imaging, which requires precise replication of the retinal vascular network, including vessel topology, continuity, and thickness. In this study, we investigated whether a distance-based loss function built on deep activation layers of a large foundation model, trained on a large corpus of domain data (colour fundus images), offers advantages over perceptual and edge-detection-based loss functions. Our extensive validation pipeline, based on both domain-free and domain-specific tasks, suggests that domain-specific deep features do not improve autoencoder image generation. Conversely, our findings highlight the effectiveness of conventional edge detection filters in improving the sharpness of vascular structures in synthetic samples.
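The comparison of loss functions above hinges on measuring image distance in a network's activation space. A minimal sketch of such a feature-space loss, with simple gradient filters standing in for the pretrained model's activation layers (the study itself used deep features from a fundus-domain foundation model):

```python
import numpy as np

def feature_distance_loss(img_a, img_b, layers):
    """Mean squared distance between two images in feature space."""
    return sum(float(np.mean((f(img_a) - f(img_b)) ** 2)) for f in layers)

# Stand-in "activation layers": horizontal and vertical gradient filters.
def grad_x(img): return np.diff(img, axis=1)
def grad_y(img): return np.diff(img, axis=0)

rng = np.random.default_rng(1)
a, b = rng.random((8, 8)), rng.random((8, 8))
loss_same = feature_distance_loss(a, a, [grad_x, grad_y])  # 0 by construction
loss_diff = feature_distance_loss(a, b, [grad_x, grad_y])
```

Swapping the gradient filters for the activations of a pretrained network turns this into a perceptual loss; swapping in domain-specific foundation-model activations gives the variant the study evaluated.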

Libin Lan, Hongxing Li, Zunhui Xia, Yudong Zhang

arXiv preprint · Jun 13, 2025
Incomplete multi-modal medical image segmentation faces critical challenges from modality imbalance, including imbalanced modality missing rates and heterogeneous modality contributions. Due to their reliance on idealized assumptions of complete modality availability, existing methods fail to dynamically balance contributions and neglect the structural relationships between modalities, resulting in suboptimal performance in real-world clinical scenarios. To address these limitations, we propose a novel model, named Dynamic Modality-Aware Fusion Network (DMAF-Net). The DMAF-Net adopts three key ideas. First, it introduces a Dynamic Modality-Aware Fusion (DMAF) module to suppress missing-modality interference by combining transformer attention with adaptive masking, and to weight modality contributions dynamically through attention maps. Second, it designs a synergistic Relation Distillation and Prototype Distillation framework to enforce global-local feature alignment via covariance consistency and masked graph attention, while ensuring semantic consistency through cross-modal class-specific prototype alignment. Third, it presents a Dynamic Training Monitoring (DTM) strategy to stabilize optimization under imbalanced missing rates by tracking distillation gaps in real-time, and to balance convergence speeds across modalities by adaptively reweighting losses and scaling gradients. Extensive experiments on BraTS2020 and MyoPS2020 demonstrate that DMAF-Net outperforms existing methods for incomplete multi-modal medical image segmentation. Our code is available at https://github.com/violet-42/DMAF-Net.
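The adaptive-masking idea in the DMAF module can be illustrated with a toy attention fusion in which missing modalities receive -inf scores, so the softmax assigns them zero weight. This is a hedged sketch, not the paper's implementation; the scoring function here is an arbitrary stand-in for learned attention:

```python
import numpy as np

def masked_attention_fuse(feats, available):
    """feats: (M, D) per-modality features; available: (M,) boolean mask.
    Missing modalities get -inf scores, so softmax gives them zero weight."""
    scores = feats @ feats.mean(axis=0)            # arbitrary stand-in scoring
    scores = np.where(available, scores, -np.inf)  # adaptive masking
    w = np.exp(scores - scores[available].max())   # numerically stable softmax
    w = w / w.sum()
    return w @ feats, w

rng = np.random.default_rng(2)
feats = rng.standard_normal((4, 8))                # 4 modalities, 8-dim features
available = np.array([True, True, False, True])    # one modality missing
fused, weights = masked_attention_fuse(feats, available)
```

The fused output is a convex combination of only the available modalities, which is the behavior the module's masking is designed to enforce.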

Iaquaniello C, Scordo E, Robba C

PubMed paper · Jun 13, 2025
To synthesize current evidence on prognostic factors, tools, and strategies influencing functional outcomes in patients with traumatic brain injury (TBI), with a focus on the acute and postacute phases of care. Key early predictors such as Glasgow Coma Scale (GCS) scores, pupillary reactivity, and computed tomography (CT) imaging findings remain fundamental in guiding clinical decision-making. Prognostic models like IMPACT and CRASH enhance early risk stratification, while outcome measures such as the Glasgow Outcome Scale-Extended (GOS-E) provide structured long-term assessments. Despite their utility, heterogeneity in assessment approaches and treatment protocols continues to limit consistency in outcome predictions. Recent advancements highlight the value of fluid biomarkers like neurofilament light chain (NFL) and glial fibrillary acidic protein (GFAP), which offer promising avenues for improved accuracy. Additionally, artificial intelligence models are emerging as powerful tools to integrate complex datasets and refine individualized outcome forecasting. Neurological prognostication after TBI is evolving through the integration of clinical, radiological, molecular, and computational data. Although standardized models and scales remain foundational, emerging technologies and therapies - such as biomarkers, machine learning, and neurostimulants - represent a shift toward more personalized and actionable strategies to optimize recovery and long-term function.

Chisholm M, Jabal MS, He H, Wang Y, Kalisz K, Lafata KJ, Calabrese E, Bashir MR, Tailor TD, Magudia K

PubMed paper · Jun 13, 2025
Differences in CT-based body composition (BC) have been observed by race. We sought to investigate whether indices of census block group-level disadvantage, the area deprivation index (ADI) and social vulnerability index (SVI), together with age, sex, and/or clinical factors could explain race-based differences in body composition. The first abdominal CT exam for each patient in Durham County at a single institution in 2020 was analyzed using a fully automated, open-source deep learning BC analysis workflow to generate cross-sectional areas for skeletal muscle (SMA), subcutaneous fat (SFA), and visceral fat (VFA). Patient-level demographic and clinical data were gathered from the electronic health record. State ADI ranking and SVI values were linked to each patient. Univariable and multivariable models were created to assess the association of demographics, ADI, SVI, and other relevant clinical factors with SMA, SFA, and VFA. 5,311 patients (mean age, 57.4 years; 55.5% female; 46.5% Black, 39.5% White, 10.3% Hispanic) were included. At univariable analysis, race, ADI, SVI, sex, BMI, weight, and height were significantly associated with all body compartments (SMA, SFA, and VFA; all p<0.05). At multivariable analyses adjusted for patient characteristics and clinical comorbidities, race remained a significant predictor, whereas ADI did not. SVI remained significant in the multivariable model for SMA.
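The multivariable analysis described above amounts to regressing each body-composition area on demographic, clinical, and deprivation covariates. A minimal least-squares sketch on synthetic data (the column names and simulated effect sizes are illustrative only; the study's models adjusted for many more factors):

```python
import numpy as np

def fit_ols(X, y):
    """Multivariable linear model via least squares, with an intercept.
    Columns of X might encode age, BMI, and ADI (hypothetical encoding)."""
    X1 = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta                                  # [intercept, b_age, b_bmi, b_adi]

rng = np.random.default_rng(2)
n = 200
age = rng.uniform(20, 80, n)
bmi = rng.uniform(18, 40, n)
adi = rng.uniform(0, 100, n)
# Synthetic SMA with no true ADI effect, mirroring the adjusted-model finding.
sma = 50.0 - 0.2 * age + 0.8 * bmi + rng.normal(0, 1, n)
beta = fit_ols(np.column_stack([age, bmi, adi]), sma)
```

With no true ADI effect in the simulated outcome, the fitted ADI coefficient lands near zero while the age and BMI coefficients recover their simulated values, which is the pattern an adjusted model is meant to reveal.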

Mahmoudi A, Alizadeh A, Ganji Z, Zare H

PubMed paper · Jun 13, 2025
Focal Cortical Dysplasia (FCD) is a leading cause of drug-resistant epilepsy, particularly in children and young adults, necessitating precise presurgical planning. Traditional structural MRI often fails to detect subtle FCD lesions, especially in MRI-negative cases. Recent advancements in Artificial Intelligence (AI), particularly Machine Learning (ML) and Deep Learning (DL), have the potential to enhance the sensitivity and specificity of FCD detection. This systematic review, following PRISMA guidelines, searched PubMed, Embase, Scopus, Web of Science, and Science Direct for articles published from 2020 onwards, using keywords related to "Focal Cortical Dysplasia," "MRI," and "Artificial Intelligence/Machine Learning/Deep Learning." Included were original studies employing AI and structural MRI (sMRI) for FCD detection in humans, reporting quantitative performance metrics, and published in English. Data extraction was performed independently by two reviewers, with discrepancies resolved by a third. Among 88 full-text articles reviewed, 27 met inclusion criteria. The included studies demonstrated that AI significantly improved FCD detection, achieving sensitivity up to 97.1% and specificity up to 84.3% across various MRI sequences, including MPRAGE, MP2RAGE, and FLAIR. AI models, particularly deep learning models, matched or surpassed human radiologist performance, with combined AI-human expertise reaching detection rates of up to 87%. The studies emphasized the importance of advanced MRI sequences and multimodal MRI for enhanced detection, though model performance varied with FCD type and training datasets. Recent advances in sMRI and AI, especially deep learning, offer substantial potential to improve FCD detection, leading to better presurgical planning and patient outcomes in drug-resistant epilepsy. These methods enable faster, more accurate, and automated FCD detection, potentially enhancing surgical decision-making. Further clinical validation and optimization of AI algorithms across diverse datasets are essential for broader clinical translation.

Daniya Najiha Abdul Kareem, Abdul Hannan, Mubashir Noman, Jean Lahoud, Mustansar Fiaz, Hisham Cholakkal

arXiv preprint · Jun 13, 2025
Accurate microscopic medical image segmentation plays a crucial role in diagnosing various cancerous cells and identifying tumors. Driven by advancements in deep learning, convolutional neural networks (CNNs) and transformer-based models have been extensively studied to enhance receptive fields and improve medical image segmentation tasks. However, they often struggle to capture complex cellular and tissue structures in challenging scenarios such as background clutter and object overlap. Moreover, their reliance on the availability of large datasets for improved performance, along with the high computational cost, limits their practicality. To address these issues, we propose an efficient framework for the segmentation task, named InceptionMamba, which encodes multi-stage rich features and offers both performance and computational efficiency. Specifically, we exploit semantic cues to capture both low-frequency and high-frequency regions to enrich the multi-stage features to handle the blurred region boundaries (e.g., cell boundaries). These enriched features are input to a hybrid model that combines an Inception depth-wise convolution with a Mamba block, to maintain high efficiency and capture inherent variations in the scales and shapes of the regions of interest. These enriched features along with low-resolution features are fused to get the final segmentation mask. Our model achieves state-of-the-art performance on two challenging microscopic segmentation datasets (SegPC21 and GlaS) and two skin lesion segmentation datasets (ISIC2017 and ISIC2018), while reducing computational cost by about 5 times compared to the previous best performing method.
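The Inception-style depthwise idea, filtering each channel independently at several kernel sizes and concatenating the results, can be sketched in 1D. This is a crude stand-in: the actual block uses learned 2D depthwise convolutions paired with a Mamba state-space layer.

```python
import numpy as np

def inception_depthwise_1d(x, kernel_sizes=(3, 5, 7)):
    """Depthwise (per-channel) filtering at several scales, concatenated.
    An averaging kernel stands in for learned depthwise weights."""
    outs = []
    for k in kernel_sizes:
        kern = np.ones(k) / k
        outs.append(np.stack([np.convolve(c, kern, mode="same") for c in x]))
    return np.concatenate(outs, axis=0)

x = np.random.default_rng(4).standard_normal((2, 16))  # (channels, length)
y = inception_depthwise_1d(x)                          # 3 scales x 2 channels
```

Because each channel is filtered independently, the cost grows with the number of channels rather than their product, which is what makes depthwise variants cheap relative to full convolutions.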

Haoyu Dong, Yuwen Chen, Hanxue Gu, Nicholas Konz, Yaqian Chen, Qihang Li, Maciej A. Mazurowski

arXiv preprint · Jun 13, 2025
The widespread use of Magnetic Resonance Imaging (MRI) and the rise of deep learning have enabled the development of powerful predictive models for a wide range of diagnostic tasks in MRI, such as image classification or object segmentation. However, training models for specific new tasks often requires large amounts of labeled data, which is difficult to obtain due to high annotation costs and data privacy concerns. To circumvent this issue, we introduce MRI-CORE (MRI COmprehensive Representation Encoder), a vision foundation model pre-trained using more than 6 million slices from over 110,000 MRI volumes across 18 main body locations. Experiments on five diverse object segmentation tasks in MRI demonstrate that MRI-CORE can significantly improve segmentation performance in realistic scenarios with limited labeled data availability, achieving an average gain of 6.97% 3D Dice Coefficient using only 10 annotated slices per task. We further demonstrate new model capabilities in MRI such as classification of image properties including body location, sequence type and institution, and zero-shot segmentation. These results highlight the value of MRI-CORE as a generalist vision foundation model for MRI, potentially lowering the data annotation resource barriers for many applications.
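The segmentation gains reported above are measured with the 3D Dice coefficient, 2|P ∩ T| / (|P| + |T|) between predicted and reference masks. A minimal sketch:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """3D Dice: 2|P ∩ T| / (|P| + |T|) between binary volume masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

p = np.zeros((4, 4, 4), dtype=bool); p[:2] = True   # 32 predicted voxels
t = np.zeros((4, 4, 4), dtype=bool); t[1:3] = True  # 32 reference, 16 overlap
# Here: 2 * 16 / (32 + 32) = 0.5
```

The `eps` term only guards against two empty masks; it is negligible otherwise.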

Naomi Fridman, Bubby Solway, Tomer Fridman, Itamar Barnea, Anat Goldshtein

arXiv preprint · Jun 13, 2025
Breast cancer remains a leading cause of cancer-related mortality worldwide, making early detection and accurate treatment response monitoring critical priorities. We present BreastDCEDL, a curated, deep learning-ready dataset comprising pre-treatment 3D Dynamic Contrast-Enhanced MRI (DCE-MRI) scans from 2,070 breast cancer patients drawn from the I-SPY1, I-SPY2, and Duke cohorts, all sourced from The Cancer Imaging Archive. The raw DICOM imaging data were rigorously converted into standardized 3D NIfTI volumes with preserved signal integrity, accompanied by unified tumor annotations and harmonized clinical metadata including pathologic complete response (pCR), hormone receptor (HR), and HER2 status. Although DCE-MRI provides essential diagnostic information and deep learning offers tremendous potential for analyzing such complex data, progress has been limited by lack of accessible, public, multicenter datasets. BreastDCEDL addresses this gap by enabling development of advanced models, including state-of-the-art transformer architectures that require substantial training data. To demonstrate its capacity for robust modeling, we developed the first transformer-based model for breast DCE-MRI, leveraging Vision Transformer (ViT) architecture trained on RGB-fused images from three contrast phases (pre-contrast, early post-contrast, and late post-contrast). Our ViT model achieved state-of-the-art pCR prediction performance in HR+/HER2- patients (AUC 0.94, accuracy 0.93). BreastDCEDL includes predefined benchmark splits, offering a framework for reproducible research and enabling clinically meaningful modeling in breast cancer imaging.
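The RGB fusion used for the ViT input places the three contrast phases in separate colour channels. A sketch of one plausible normalize-and-stack fusion (the exact preprocessing in BreastDCEDL may differ):

```python
import numpy as np

def fuse_phases_rgb(pre, early, late):
    """Min-max normalize each DCE phase, then stack as R, G, B channels."""
    def norm(x):
        x = x.astype(np.float64)
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)
    return np.stack([norm(pre), norm(early), norm(late)], axis=-1)

rng = np.random.default_rng(3)
pre, early, late = (rng.random((64, 64)) for _ in range(3))
rgb = fuse_phases_rgb(pre, early, late)   # (64, 64, 3), values in [0, 1]
```

Mapping temporal phases to colour channels lets an off-the-shelf RGB-pretrained ViT consume the enhancement dynamics without architectural changes.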

André Ferreira, Kunpeng Xie, Caroline Wilpert, Gustavo Correia, Felix Barajas Ordonez, Tiago Gil Oliveira, Maike Bode, Robert Siepmann, Frank Hölzle, Rainer Röhrig, Jens Kleesiek, Daniel Truhn, Jan Egger, Victor Alves, Behrus Puladi

arXiv preprint · Jun 13, 2025
AI requires extensive datasets, while medical data is subject to high data protection. Anonymization is essential but poses a challenge for some regions, such as the head, where identifying structures overlap with regions of clinical interest. Synthetic data offers a potential solution, but studies often lack rigorous evaluation of realism and utility. We therefore investigate to what extent synthetic data can replace real data in segmentation tasks. We employed head and neck cancer CT scans and brain glioma MRI scans from two large datasets. Synthetic data were generated using generative adversarial networks and diffusion models. We evaluated the quality of the synthetic data using MAE, MS-SSIM, radiomics, and a Visual Turing Test (VTT) performed by 5 radiologists, and their usefulness in segmentation tasks using DSC. Radiomics indicates high fidelity of synthetic MRIs but falls short in producing highly realistic CT tissue, with correlation coefficients of 0.8784 and 0.5461 for MRI and CT tumors, respectively. DSC results indicate limited utility of synthetic data: tumor segmentation achieved DSC=0.064 on CT and 0.834 on MRI, while bone segmentation achieved a mean DSC=0.841. A relation between DSC and correlation is observed but is limited by the complexity of the task. VTT results show the utility of synthetic CTs, but with limited educational applications. Synthetic data can be used independently for the segmentation task, although limited by the complexity of the structures to segment. Advancing generative models to better tolerate heterogeneous inputs and learn subtle details is essential for enhancing their realism and expanding their application potential.
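Two of the fidelity metrics above, MAE and the radiomics correlation coefficient, are straightforward to compute. A minimal sketch (the toy "synthetic" image here is just a noisy copy of the real one):

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between a synthetic and a real image."""
    return float(np.mean(np.abs(a - b)))

def pearson_r(a, b):
    """Pearson correlation, e.g. between paired radiomics feature vectors."""
    return float(np.corrcoef(np.ravel(a), np.ravel(b))[0, 1])

rng = np.random.default_rng(5)
real = rng.random((32, 32))
synth = real + 0.05 * rng.standard_normal((32, 32))  # toy "synthetic" image
```

In the study these were computed between real and synthetic scans (MAE, MS-SSIM) and between their radiomics feature sets (correlation), with DSC measuring downstream segmentation utility.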