Inference of single cell profiles from histology stains with the Single-Cell omics from Histology Analysis Framework (SCHAF)

Comiter, C., Chen, X., Vaishnav, E. D., Kobayashi-Kirschvink, K. J., Ciampricotti, M., Zhang, K., Murray, J., Monticolo, F., Qi, J., Tanaka, R., Brodowska, S. E., Li, B., Yang, Y., Rodig, S. J., Karatza, A., Quintanal Villalonga, A., Turner, M., Pfaff, K. L., Jane-Valbuena, J., Slyper, M., Waldman, J., Vigneau, S., Wu, J., Blosser, T. R., Segerstolpe, A., Abravanel, D., Wagle, N., Demehri, S., Zhuang, X., Rudin, C. M., Klughammer, J., Rozenblatt-Rosen, O., Stultz, C. M., Shu, J., Regev, A.

bioRxiv preprint · Jun 13, 2025
Tissue biology involves an intricate balance between cell-intrinsic processes and interactions between cells organized in specific spatial patterns. These can be captured, respectively, by single cell profiling methods, such as single cell RNA-seq (scRNA-seq) and spatial transcriptomics, and by histology imaging data, such as Hematoxylin-and-Eosin (H&E) stains. While single cell profiles provide rich molecular information, they can be challenging to collect routinely in the clinic, and they either lack spatial resolution or offer limited gene throughput. Conversely, histological H&E assays have been a cornerstone of tissue pathology for decades, but they do not directly report molecular details, although the structure they capture arises from molecules and cells. Here, we leverage vision transformers and adversarial deep learning to develop the Single-Cell omics from Histology Analysis Framework (SCHAF), which generates a tissue sample's spatially resolved, whole-transcriptome single cell omics dataset from its H&E histology image. We demonstrate SCHAF on a variety of tissues (including lung cancer, metastatic breast cancer, placentae, and whole mouse pups), training with matched samples analyzed by sc/snRNA-seq, H&E staining, and, when available, spatial transcriptomics. In held-out test data, SCHAF generated appropriate single cell profiles from histology images, related them spatially, and compared well to ground-truth scRNA-seq, expert pathologist annotations, or direct spatial transcriptomic measurements, with some limitations. SCHAF opens the way to next-generation H&E analyses and an integrated understanding of cell and tissue biology in health and disease.
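
For readers who want the flavor of the adversarial setup, the sketch below pairs an image encoder with a generator that maps H&E-tile embeddings to expression profiles and a discriminator that compares them against measured single-cell profiles. This is a minimal illustration of the general adversarial idea only, not SCHAF's architecture; every module size, name, and the dummy tensors are assumptions.

```python
import torch
import torch.nn as nn

N_GENES = 2000  # hypothetical gene-panel size

class TileEncoder(nn.Module):
    """Maps an H&E tile to an embedding (a small CNN standing in for the
    vision transformer used in the paper)."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))

    def forward(self, x):
        return self.net(x)

encoder = TileEncoder()
generator = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, N_GENES))
discriminator = nn.Sequential(nn.Linear(N_GENES, 256), nn.ReLU(), nn.Linear(256, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(list(encoder.parameters()) + list(generator.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

tiles = torch.randn(8, 3, 64, 64)      # dummy batch of H&E tiles
real_cells = torch.randn(8, N_GENES)   # dummy matched scRNA-seq profiles

fake_cells = generator(encoder(tiles))

# Discriminator learns to separate measured from generated profiles.
d_loss = bce(discriminator(real_cells), torch.ones(8, 1)) + \
         bce(discriminator(fake_cells.detach()), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Encoder + generator learn to produce profiles the discriminator accepts,
# pushing predictions toward the distribution of real single-cell data.
g_loss = bce(discriminator(fake_cells), torch.ones(8, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```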

MRI-CORE: A Foundation Model for Magnetic Resonance Imaging

Haoyu Dong, Yuwen Chen, Hanxue Gu, Nicholas Konz, Yaqian Chen, Qihang Li, Maciej A. Mazurowski

arXiv preprint · Jun 13, 2025
The widespread use of Magnetic Resonance Imaging (MRI) and the rise of deep learning have enabled the development of powerful predictive models for a wide range of diagnostic tasks in MRI, such as image classification or object segmentation. However, training models for specific new tasks often requires large amounts of labeled data, which is difficult to obtain due to high annotation costs and data privacy concerns. To address this issue, we introduce MRI-CORE (MRI COmprehensive Representation Encoder), a vision foundation model pre-trained on more than 6 million slices from over 110,000 MRI volumes spanning 18 main body locations. Experiments on five diverse object segmentation tasks in MRI demonstrate that MRI-CORE can significantly improve segmentation performance in realistic scenarios with limited labeled data, achieving an average gain of 6.97% in 3D Dice coefficient using only 10 annotated slices per task. We further demonstrate new model capabilities in MRI, such as classification of image properties including body location, sequence type, and institution, and zero-shot segmentation. These results highlight the value of MRI-CORE as a generalist vision foundation model for MRI, potentially lowering the data annotation barriers for many applications.
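
For reference, the 3D Dice coefficient used as the headline metric above is the standard overlap measure sketched below; this is a generic implementation, not code from the paper.

```python
import torch

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice overlap for binary 3D masks shaped (D, H, W): 1.0 is a
    perfect match, 0.0 is no overlap."""
    pred, target = pred.float().flatten(), target.float().flatten()
    intersection = (pred * target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy check: identical masks give Dice = 1.
mask = (torch.rand(16, 64, 64) > 0.5)
print(dice_coefficient(mask, mask))  # tensor(1.0000)
```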

InceptionMamba: Efficient Multi-Stage Feature Enhancement with Selective State Space Model for Microscopic Medical Image Segmentation

Daniya Najiha Abdul Kareem, Abdul Hannan, Mubashir Noman, Jean Lahoud, Mustansar Fiaz, Hisham Cholakkal

arXiv preprint · Jun 13, 2025
Accurate microscopic medical image segmentation plays a crucial role in diagnosing various cancerous cells and identifying tumors. Driven by advances in deep learning, convolutional neural networks (CNNs) and transformer-based models have been extensively studied to enlarge receptive fields and improve performance on medical image segmentation tasks. However, these models often struggle to capture complex cellular and tissue structures in challenging scenarios such as background clutter and object overlap. Moreover, their reliance on large datasets for improved performance, along with their high computational cost, limits their practicality. To address these issues, we propose an efficient segmentation framework, named InceptionMamba, which encodes rich multi-stage features and offers both strong performance and computational efficiency. Specifically, we exploit semantic cues to capture both low-frequency and high-frequency regions, enriching the multi-stage features to handle blurred region boundaries (e.g., cell boundaries). These enriched features are fed to a hybrid model that combines an Inception depth-wise convolution with a Mamba block to maintain high efficiency and capture inherent variations in the scales and shapes of the regions of interest. The enriched features are then fused with low-resolution features to produce the final segmentation mask. Our model achieves state-of-the-art performance on two challenging microscopic segmentation datasets (SegPC21 and GlaS) and two skin lesion segmentation datasets (ISIC2017 and ISIC2018), while reducing computational cost by roughly a factor of five compared to the previous best-performing method.
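
To make the "Inception depth-wise convolution" concrete, the snippet below shows one plausible reading: channels are split into groups, each processed by a depth-wise convolution with a different kernel size, then concatenated. The channel split and kernel sizes are assumptions for illustration, not the paper's exact block.

```python
import torch
import torch.nn as nn

class InceptionDWConv2d(nn.Module):
    """Parallel depth-wise convolutions at several kernel sizes (Inception-style),
    applied to disjoint channel groups and concatenated back together."""
    def __init__(self, channels: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        assert channels % len(kernel_sizes) == 0
        self.split = channels // len(kernel_sizes)
        self.branches = nn.ModuleList(
            nn.Conv2d(self.split, self.split, k, padding=k // 2, groups=self.split)
            for k in kernel_sizes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        chunks = torch.split(x, self.split, dim=1)
        return torch.cat([branch(c) for branch, c in zip(self.branches, chunks)], dim=1)

x = torch.randn(1, 96, 64, 64)
print(InceptionDWConv2d(96)(x).shape)  # torch.Size([1, 96, 64, 64])
```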

Does restrictive anorexia nervosa impact brain aging? A machine learning approach to estimate age based on brain structure.

Gupta Y, de la Cruz F, Rieger K, di Giuliano M, Gaser C, Cole J, Breithaupt L, Holsen LM, Eddy KT, Thomas JJ, Cetin-Karayumak S, Kubicki M, Lawson EA, Miller KK, Misra M, Schumann A, Bär KJ

PubMed · Jun 13, 2025
Anorexia nervosa (AN), a severe eating disorder marked by extreme weight loss and malnutrition, leads to significant alterations in brain structure. This study used machine learning (ML) to estimate brain age from structural MRI scans and investigated the brain-predicted age difference (brain-PAD) as a potential biomarker in AN. Structural MRI scans were collected from female participants aged 10-40 years across two institutions (Boston, USA, and Jena, Germany), including acute AN (acAN; n=113), weight-restored AN (wrAN; n=35), and age-matched healthy controls (HC; n=90). The ML model was trained on 3487 healthy female participants (ages 5-45 years) from ten datasets, using 377 neuroanatomical features extracted from T1-weighted MRI scans. The model achieved strong performance, with a mean absolute error (MAE) of 1.93 years and a correlation of r = 0.88 in HCs. In acAN patients, brain age was overestimated by an average of +2.25 years, suggesting advanced brain aging. In contrast, wrAN participants showed significantly lower brain-PAD than acAN participants (+0.26 years, p=0.0026) and did not differ from HCs (p=0.98), suggesting normalization of brain age estimates following weight restoration. A significant group-by-age interaction effect on predicted brain age (p<0.001) indicated that brain age deviations were most pronounced in younger acAN participants. Brain-PAD in acAN was significantly negatively associated with BMI (r = -0.291, p_FDR = 0.005), but not in the wrAN or HC groups. Importantly, no significant associations were found between brain-PAD and clinical symptom severity. These findings suggest that AN is linked to advanced brain aging during the acute stage of illness, an effect that may partially normalize following weight recovery.
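
The brain-PAD statistic at the center of the study is simply the residual between model-predicted and chronological age; a minimal sketch with made-up numbers follows. In held-out healthy controls, the mean absolute value of this residual is the MAE the abstract reports (1.93 years).

```python
import numpy as np

def brain_pad(predicted_age: np.ndarray, chronological_age: np.ndarray) -> np.ndarray:
    """Brain-predicted age difference: positive values mean the brain
    'looks older' to the model than the participant's chronological age."""
    return predicted_age - chronological_age

predicted = np.array([24.1, 31.5, 18.9])      # hypothetical model outputs (years)
chronological = np.array([22.0, 30.0, 17.5])  # hypothetical true ages (years)

pad = brain_pad(predicted, chronological)      # [2.1  1.5  1.4]
print(pad, np.mean(np.abs(pad)))               # mean |PAD| = the model's MAE
```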

DMAF-Net: An Effective Modality Rebalancing Framework for Incomplete Multi-Modal Medical Image Segmentation

Libin Lan, Hongxing Li, Zunhui Xia, Yudong Zhang

arXiv preprint · Jun 13, 2025
Incomplete multi-modal medical image segmentation faces critical challenges from modality imbalance, including imbalanced modality missing rates and heterogeneous modality contributions. Because they rely on idealized assumptions of complete modality availability, existing methods fail to balance contributions dynamically and neglect the structural relationships between modalities, resulting in suboptimal performance in real-world clinical scenarios. To address these limitations, we propose a novel model, the Dynamic Modality-Aware Fusion Network (DMAF-Net), which adopts three key ideas. First, it introduces a Dynamic Modality-Aware Fusion (DMAF) module that suppresses missing-modality interference by combining transformer attention with adaptive masking, and weights modality contributions dynamically through attention maps. Second, it designs a synergistic Relation Distillation and Prototype Distillation framework to enforce global-local feature alignment via covariance consistency and masked graph attention, while ensuring semantic consistency through cross-modal class-specific prototype alignment. Third, it presents a Dynamic Training Monitoring (DTM) strategy that stabilizes optimization under imbalanced missing rates by tracking distillation gaps in real time, and balances convergence speeds across modalities by adaptively reweighting losses and scaling gradients. Extensive experiments on BraTS2020 and MyoPS2020 demonstrate that DMAF-Net outperforms existing methods for incomplete multi-modal medical image segmentation. Our code is available at https://github.com/violet-42/DMAF-Net.
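
The adaptive-masking idea in the DMAF module (suppressing attention to tokens from absent modalities) can be sketched generically as below. Shapes, names, and the toy data are assumptions; this is the common masked-attention pattern, not DMAF-Net's actual module.

```python
import torch
import torch.nn.functional as F

def masked_modality_attention(q, k, v, present):
    """Scaled dot-product attention over per-modality tokens (B, M, D),
    where `present` (B, M) flags which modalities were actually acquired.
    Absent modalities get -inf logits, hence ~zero attention weight."""
    d = q.shape[-1]
    logits = q @ k.transpose(-2, -1) / d ** 0.5            # (B, M, M)
    logits = logits.masked_fill(~present[:, None, :], float("-inf"))
    weights = F.softmax(logits, dim=-1)
    return weights @ v                                     # fused tokens (B, M, D)

B, M, D = 2, 4, 32                         # batch, modalities, feature dim
q = k = v = torch.randn(B, M, D)
present = torch.tensor([[True, True, False, True],
                        [True, False, False, True]])       # missing patterns
fused = masked_modality_attention(q, k, v, present)
```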

Fast MRI of bones in the knee -- An AI-driven reconstruction approach for adiabatic inversion recovery prepared ultra-short echo time sequences

Philipp Hans Nunn, Henner Huflage, Jan-Peter Grunz, Philipp Gruschwitz, Oliver Schad, Thorsten Alexander Bley, Johannes Tran-Gia, Tobias Wech

arXiv preprint · Jun 13, 2025
Purpose: Inversion recovery prepared ultra-short echo time (IR-UTE) MRI enables radiation-free visualization of osseous tissue. However, sufficient signal-to-noise ratio (SNR) can only be obtained with long acquisition times. This study proposes a data-driven approach to reconstruct undersampled IR-UTE knee data, thereby accelerating MR-based 3D imaging of bones. Methods: Data were acquired with a 3D radial IR-UTE pulse sequence, implemented using the open-source framework Pulseq. A denoising convolutional neural network (DnCNN) was trained in a supervised fashion using data from eight healthy subjects. Conjugate gradient sensitivity encoding (CG-SENSE) reconstructions of different retrospectively undersampled subsets (corresponding to 2.5-min, 5-min, and 10-min acquisition times) were paired with the respective reference dataset reconstruction (30-min acquisition time). The DnCNN was then integrated into a Landweber-based reconstruction algorithm, enabling physics-based iterative reconstruction. The approach was evaluated quantitatively on one prospectively accelerated scan as well as on retrospectively undersampled datasets from four additional healthy subjects, using the structural similarity index measure (SSIM), the peak signal-to-noise ratio (PSNR), the normalized root mean squared error (NRMSE), and the perceptual sharpness index (PSI). Results: Reconstructions of both prospective and retrospective acquisitions showed good agreement with the reference dataset, indicating high image quality, particularly at an acquisition time of 5 min. The proposed method effectively preserves contrast and structural details while suppressing noise, albeit with a slight reduction in sharpness. Conclusion: The proposed method is poised to enable MR-based bone assessment in the knee within clinically feasible scan times.
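
The reconstruction scheme, a trained denoiser interleaved with Landweber gradient steps, follows the familiar plug-and-play pattern sketched below. The operators, step size, and toy usage are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def landweber_pnp(y, A, At, denoise, n_iter=30, step=1.0):
    """Landweber iteration on ||Ax - y||^2 with a learned denoiser applied
    after each gradient step (plug-and-play style). A/At stand in for the
    forward operator and its adjoint (e.g., a NUFFT for radial k-space)."""
    x = At(y)                              # initialize via back-projection
    for _ in range(n_iter):
        x = x + step * At(y - A(x))        # Landweber (gradient) update
        x = denoise(x)                     # DnCNN-style regularization step
    return x

# Toy usage: identity "system" and a no-op denoiser standing in for the DnCNN.
A = At = lambda v: v
denoise = lambda v: v
y = np.random.randn(64, 64)
x_hat = landweber_pnp(y, A, At, denoise, n_iter=5, step=0.5)
```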

Recent Advances in sMRI and Artificial Intelligence for Presurgical Planning in Focal Cortical Dysplasia: A Systematic Review.

Mahmoudi A, Alizadeh A, Ganji Z, Zare H

PubMed · Jun 13, 2025
Focal Cortical Dysplasia (FCD) is a leading cause of drug-resistant epilepsy, particularly in children and young adults, necessitating precise presurgical planning. Traditional structural MRI often fails to detect subtle FCD lesions, especially in MRI-negative cases. Recent advances in Artificial Intelligence (AI), particularly Machine Learning (ML) and Deep Learning (DL), have the potential to enhance the sensitivity and specificity of FCD detection. This systematic review, following PRISMA guidelines, searched PubMed, Embase, Scopus, Web of Science, and Science Direct for articles published from 2020 onwards, using keywords related to "Focal Cortical Dysplasia," "MRI," and "Artificial Intelligence/Machine Learning/Deep Learning." Included were original studies employing AI and structural MRI (sMRI) for FCD detection in humans, reporting quantitative performance metrics, and published in English. Data extraction was performed independently by two reviewers, with discrepancies resolved by a third. Of 88 full-text articles reviewed, 27 met the inclusion criteria. The included studies demonstrated that AI significantly improved FCD detection, achieving sensitivities up to 97.1% and specificities up to 84.3% across various MRI sequences, including MPRAGE, MP2RAGE, and FLAIR. AI models, particularly deep learning models, matched or surpassed human radiologists, with combined AI-human expertise reaching detection rates of up to 87%. The studies emphasized the importance of advanced MRI sequences and multimodal MRI for enhanced detection, though model performance varied with FCD type and training datasets. Recent advances in sMRI and AI, especially deep learning, offer substantial potential to improve FCD detection, leading to better presurgical planning and patient outcomes in drug-resistant epilepsy. These methods enable faster, more accurate, and automated FCD detection, potentially enhancing surgical decision-making. Further clinical validation and optimization of AI algorithms across diverse datasets are essential for broader clinical translation.

Exploring the Effectiveness of Deep Features from Domain-Specific Foundation Models in Retinal Image Synthesis

Zuzanna Skorniewska, Bartlomiej W. Papiez

arXiv preprint · Jun 13, 2025
The adoption of neural network models in medical imaging has been constrained by strict privacy regulations, limited data availability, high acquisition costs, and demographic biases. Deep generative models offer a promising solution by producing synthetic data that bypasses privacy concerns and addresses fairness by generating samples for under-represented groups. However, unlike natural images, medical images require validation not only for fidelity (e.g., Fréchet Inception Score) but also for morphological and clinical accuracy. This is particularly true for colour fundus retinal imaging, which requires precise replication of the retinal vascular network, including vessel topology, continuity, and thickness. In this study, we investigated whether a distance-based loss function built on the deep activation layers of a large foundation model, trained on a large corpus of domain data (colour fundus imaging), offers advantages over perceptual and edge-detection based loss functions. Our extensive validation pipeline, based on both domain-free and domain-specific tasks, suggests that domain-specific deep features do not improve autoencoder image generation. Conversely, our findings highlight the effectiveness of conventional edge detection filters in improving the sharpness of vascular structures in synthetic samples.
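
The two loss families compared in the study can be illustrated as follows: a deep-feature distance computed in an encoder's activation space, and an edge-based loss built on Sobel gradients, which rewards sharp vessel boundaries. Both are generic single-channel sketches, not the authors' code.

```python
import torch
import torch.nn.functional as F

def deep_feature_loss(fake, real, encoder):
    """L1 distance in the activation space of a (foundation-model) encoder."""
    return F.l1_loss(encoder(fake), encoder(real))

def edge_loss(fake, real):
    """L1 distance between Sobel gradient magnitudes; inputs (B, 1, H, W)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    def grad_mag(img):
        gx = F.conv2d(img, kx, padding=1)
        gy = F.conv2d(img, ky, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)
    return F.l1_loss(grad_mag(fake), grad_mag(real))

fake, real = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
print(edge_loss(fake, real))
```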

High-Fidelity 3D Imaging of Dental Scenes Using Gaussian Splatting.

Jin CX, Li MX, Yu H, Gao Y, Guo YP, Xia GS, Huang C

PubMed · Jun 13, 2025
Three-dimensional (3D) visualization is increasingly used in dentistry for diagnostics, education, and treatment design. Accurate replication of geometry and color is crucial for these applications. Image-based rendering, which uses two-dimensional photos to generate photorealistic 3D representations, provides an affordable and practical option, aiding both routine and remote health care. This study explores an advanced novel view synthesis (NVS) method called Gaussian splatting (GS), a differentiable image-based rendering approach, to assess its feasibility for capturing dental scenes. Rendering quality and resource usage were compared with representative NVS methods. In addition, the linear measurement trueness of extracted craniofacial meshes was evaluated against a commercial facial scanner and three smartphone facial scanning apps, while teeth meshes were assessed against two intraoral scanners and a desktop scanner. The GS-based representation demonstrated superior rendering quality, achieving the highest visual quality, fastest rendering speed, and lowest resource usage. The craniofacial measurements showed trueness similar to that of commercial facial scanners. The dental measurements showed larger deviations than those from intraoral and desktop scanners, although all deviations remained within clinically acceptable limits. The GS-based representation shows great potential as a convenient and cost-effective method of capturing dental scenes, offering a balance between color fidelity and trueness suitable for clinical applications.

Enhancing Free-hand 3D Photoacoustic and Ultrasound Reconstruction using Deep Learning.

Lee S, Kim S, Seo M, Park S, Imrus S, Ashok K, Lee D, Park C, Lee S, Kim J, Yoo JH, Kim M

PubMed · Jun 13, 2025
This study introduces a motion-based learning network with a global-local self-attention module (MoGLo-Net) to enhance 3D reconstruction in handheld photoacoustic and ultrasound (PAUS) imaging. Standard PAUS imaging is often limited by a narrow field of view (FoV) and an inability to effectively visualize complex 3D structures. The 3D freehand technique, which aligns sequential 2D images for 3D reconstruction, faces significant challenges in accurate motion estimation without relying on external positional sensors. MoGLo-Net addresses these limitations through an innovative adaptation of the self-attention mechanism, which effectively exploits critical regions, such as fully developed speckle areas and highly echogenic tissue regions, within successive ultrasound images to estimate the motion parameters accurately. This facilitates the extraction of intricate features from individual frames. Additionally, we employ a patch-wise correlation operation to generate a correlation volume that is highly correlated with the scanning motion. A custom loss function was also developed to ensure robust learning with minimized bias, leveraging the characteristics of the motion parameters. Experimental evaluations demonstrated that MoGLo-Net surpasses current state-of-the-art methods in both quantitative and qualitative performance metrics. Furthermore, we expanded the application of 3D reconstruction technology beyond simple B-mode ultrasound volumes to incorporate Doppler ultrasound and photoacoustic imaging, enabling 3D visualization of vasculature. The source code for this study is publicly available at: https://github.com/pnu-amilab/US3D.
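
The patch-wise correlation operation, which builds a cost volume over small displacements between consecutive frames, can be sketched generically as below. The displacement range and normalization are assumptions; this shows the standard cost-volume idea, not MoGLo-Net's exact operator.

```python
import torch
import torch.nn.functional as F

def correlation_volume(feat_a, feat_b, max_disp=4):
    """For feature maps (B, C, H, W) of consecutive frames, compute the
    channel-averaged dot product of feat_a with feat_b shifted by every
    displacement within +/- max_disp, stacked along the channel axis."""
    B, C, H, W = feat_a.shape
    padded = F.pad(feat_b, [max_disp] * 4)       # zero-pad spatial dims
    vols = []
    for dy in range(2 * max_disp + 1):
        for dx in range(2 * max_disp + 1):
            shifted = padded[:, :, dy:dy + H, dx:dx + W]
            vols.append((feat_a * shifted).mean(dim=1, keepdim=True))
    return torch.cat(vols, dim=1)                # (B, (2*max_disp+1)**2, H, W)

a, b = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(correlation_volume(a, b).shape)            # torch.Size([1, 81, 32, 32])
```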
