
Evaluation of Stapes Image Quality with Ultra-High-Resolution CT in Comparison with Conebeam CT and High-Resolution CT in Cadaveric Heads.

Puel U, Boukhzer S, Doyen M, Hossu G, Boubaker F, Frédérique G, Blum A, Teixeira PAG, Eliezer M, Parietti-Winkler C, Gillet R

pubmed papers · Sep 2 2025
Conventional CT imaging techniques are ineffective in adequately depicting the stapes. The purpose of this study was to evaluate the ability of high-resolution (HR), ultra-high-resolution (UHR) with and without deep learning reconstruction (DLR), and conebeam (CB)-CT scanners to image the stapes by using micro-CT as a reference. Eleven temporal bone specimens were imaged by using all imaging modalities. Subjective image analysis was performed by grading image quality on a Likert scale, and objective image analysis was performed by taking various measurements of the stapes superstructure and footplate. Image noise and radiation dose were also recorded. The global image quality scores of all modalities were worse than that of micro-CT (<i>P</i> ≤ .01). UHR-CT with and without DLR had the second-best global image quality scores (<i>P</i> > .99), which were both better than CB-CT (<i>P</i> = .01 for both). CB-CT had a better global image quality score than HR-CT (<i>P</i> = .01). Most of the measurements differed between HR-CT and micro-CT (<i>P</i> ≤ .02), but not between UHR-CT with and without DLR, CB-CT, and micro-CT (<i>P</i> > .06). The air noise value of UHR-CT with DLR was not different from that of CB-CT (<i>P</i> = .49), but HR-CT and UHR-CT without DLR exhibited higher values than UHR-CT with DLR (<i>P</i> ≤ .001). HR-CT and UHR-CT with and without DLR yielded the same effective radiation dose of 1.23 ± 0.11 (1.13-1.35) mSv, which was 4 times higher than that of CB-CT (0.35 ± 0 mSv, <i>P</i> ≤ .01). UHR-CT with and without DLR offer objective image analysis comparable to CB-CT while providing superior subjective image quality. However, this is achieved at the cost of a higher radiation dose. Both CB-CT and UHR-CT with and without DLR are more effective than HR-CT in objective and subjective image analysis.

Toward a robust lesion detection model in breast DCE-MRI: adapting foundation models to high-risk women

Gabriel A. B. do Nascimento, Vincent Dong, Guilherme J. Cavalcante, Alex Nguyen, Thaís G. do Rêgo, Yuri Malheiros, Telmo M. Silva Filho, Carla R. Zeballos Torrez, James C. Gee, Anne Marie McCarthy, Andrew D. A. Maidment, Bruno Barufaldi

arxiv preprint · Sep 2 2025
Accurate breast MRI lesion detection is critical for early cancer diagnosis, especially in high-risk populations. We present a classification pipeline that adapts a pretrained foundation model, the Medical Slice Transformer (MST), for breast lesion classification using dynamic contrast-enhanced MRI (DCE-MRI). Leveraging DINOv2-based self-supervised pretraining, MST generates robust per-slice feature embeddings, which are then used to train a Kolmogorov-Arnold Network (KAN) classifier. The KAN provides a flexible and interpretable alternative to conventional convolutional networks by enabling localized nonlinear transformations via adaptive B-spline activations. This enhances the model's ability to differentiate benign from malignant lesions in imbalanced and heterogeneous clinical datasets. Experimental results demonstrate that the MST+KAN pipeline outperforms the baseline MST classifier, achieving AUC = 0.80 ± 0.02 while preserving interpretability through attention-based heatmaps. Our findings highlight the effectiveness of combining foundation model embeddings with advanced classification strategies for building robust and generalizable breast MRI analysis tools.
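The paper's KAN classifier is not reproduced here; as a rough illustration of the core idea (a learnable univariate function on each network edge, rather than a fixed activation), the sketch below uses degree-1 B-splines (hat functions) in place of the adaptive higher-order B-splines the abstract mentions. The names `KANEdge` and `hat_basis`, the grid range, and the initialization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hat_basis(x, grid):
    """Evaluate piecewise-linear (degree-1 B-spline) basis functions on a
    fixed knot grid for each scalar in x. Returns shape (len(x), len(grid))."""
    x = np.clip(x, grid[0], grid[-1])
    B = np.zeros((x.size, grid.size))
    idx = np.clip(np.searchsorted(grid, x) - 1, 0, grid.size - 2)
    t = (x - grid[idx]) / (grid[idx + 1] - grid[idx])
    B[np.arange(x.size), idx] = 1.0 - t
    B[np.arange(x.size), idx + 1] = t
    return B

class KANEdge:
    """One KAN edge: a learnable univariate function phi(x) = coeffs . basis(x).
    A full KAN layer would hold one such edge per (input, output) pair and
    sum edge outputs per output unit; training updates the coefficients."""
    def __init__(self, grid_min=-2.0, grid_max=2.0, n_knots=9, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        self.grid = np.linspace(grid_min, grid_max, n_knots)
        self.coeffs = rng.normal(scale=0.1, size=n_knots)

    def __call__(self, x):
        return hat_basis(np.asarray(x, float).ravel(), self.grid) @ self.coeffs
```

Because the basis is a partition of unity, setting the coefficients equal to the knot positions makes the edge compute the identity on the grid interior, a handy sanity check.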

From Noisy Labels to Intrinsic Structure: A Geometric-Structural Dual-Guided Framework for Noise-Robust Medical Image Segmentation

Tao Wang, Zhenxuan Zhang, Yuanbo Zhou, Xinlin Zhang, Yuanbin Chen, Tao Tan, Guang Yang, Tong Tong

arxiv preprint · Sep 2 2025
The effectiveness of convolutional neural networks in medical image segmentation relies on large-scale, high-quality annotations, which are costly and time-consuming to obtain. Even expert-labeled datasets inevitably contain noise arising from subjectivity and coarse delineations, which disrupt feature learning and adversely impact model performance. To address these challenges, this study proposes a Geometric-Structural Dual-Guided Network (GSD-Net), which integrates geometric and structural cues to improve robustness against noisy annotations. It incorporates a Geometric Distance-Aware module that dynamically adjusts pixel-level weights using geometric features, thereby strengthening supervision in reliable regions while suppressing noise. A Structure-Guided Label Refinement module further refines labels with structural priors, and a Knowledge Transfer module enriches supervision and improves sensitivity to local details. To comprehensively assess its effectiveness, we evaluated GSD-Net on six publicly available datasets: four containing three types of simulated label noise, and two with multi-expert annotations that reflect real-world subjectivity and labeling inconsistencies. Experimental results demonstrate that GSD-Net achieves state-of-the-art performance under noisy annotations, achieving improvements of 2.52% on Kvasir, 22.76% on Shenzhen, 8.87% on BU-SUC, and 4.59% on BraTS2020 under SR simulated noise. The codes of this study are available at https://github.com/ortonwang/GSD-Net.
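The Geometric Distance-Aware module is described only at a high level; one plausible reading (down-weighting pixels near the annotation boundary, where label noise concentrates, and trusting the interior/exterior) can be sketched in plain NumPy. The boundary detection, the exponential weighting, and the `tau` scale are all assumptions for illustration, not GSD-Net's actual mechanism.

```python
import numpy as np

def boundary_pixels(mask):
    """Coordinates of pixels whose 4-neighborhood crosses the label boundary."""
    pad = np.pad(mask, 1, mode='edge')
    diff = ((pad[:-2, 1:-1] != mask) | (pad[2:, 1:-1] != mask) |
            (pad[1:-1, :-2] != mask) | (pad[1:-1, 2:] != mask))
    return np.argwhere(diff)

def geometric_weights(mask, tau=2.0):
    """Per-pixel loss weights: near zero at the annotation boundary (where
    coarse delineations are least trustworthy), approaching 1 in regions far
    from the boundary, where supervision is considered reliable."""
    bnd = boundary_pixels(mask)
    if len(bnd) == 0:
        return np.ones(mask.shape)
    yy, xx = np.indices(mask.shape)
    pts = np.stack([yy.ravel(), xx.ravel()], axis=1)
    # brute-force distance to the nearest boundary pixel (fine for a sketch)
    d = np.sqrt(((pts[:, None, :] - bnd[None, :, :]) ** 2).sum(-1)).min(1)
    return 1.0 - np.exp(-d.reshape(mask.shape) / tau)
```

These weights would multiply a standard per-pixel loss (e.g. cross-entropy) so that gradient signal concentrates away from the noisy boundary band.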

Optimizing Paths for Adaptive Fly-Scan Microscopy: An Extended Version

Yu Lu, Thomas F. Lynn, Ming Du, Zichao Di, Sven Leyffer

arxiv preprint · Sep 2 2025
In x-ray microscopy, traditional raster-scanning techniques are used to acquire a microscopic image in a series of step-scans. Alternatively, scanning the x-ray probe along a continuous path, called a fly-scan, reduces scan time and increases scan efficiency. However, not all regions of an image are equally important. Currently used fly-scan methods do not adapt to the characteristics of the sample during the scan, often wasting time in uniform, uninteresting regions. One approach to avoid unnecessary scanning in uniform regions for raster step-scans is to use deep learning techniques to select a shorter optimal scan path instead of a traditional raster scan path, followed by reconstructing the entire image from the partially scanned data. However, this approach heavily depends on the quality of the initial sampling, requires a large dataset for training, and incurs high computational costs. We propose leveraging the fly-scan method along an optimal scanning path, focusing on regions of interest (ROIs) and using image completion techniques to reconstruct details in non-scanned areas. This approach further shortens the scanning process and potentially decreases x-ray exposure dose while maintaining high-quality and detailed information in critical regions. To achieve this, we introduce a multi-iteration fly-scan framework that adapts to the scanned image. Specifically, in each iteration, we define two key functions: (1) a score function to generate initial anchor points and identify potential ROIs, and (2) an objective function to optimize the anchor points for convergence to an optimal set. Using these anchor points, we compute the shortest scanning path between optimized anchor points, perform the fly-scan, and subsequently apply image completion based on the acquired information in preparation for the next scan iteration.
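As a toy stand-in for the "shortest scanning path between optimized anchor points" step described above, the sketch below orders anchor points with a greedy nearest-neighbor heuristic. The paper's actual path optimization is not specified in the abstract, so both functions here are purely illustrative.

```python
import numpy as np

def greedy_scan_path(anchors, start=0):
    """Order anchor points with a nearest-neighbor heuristic: from the current
    point, always fly to the closest unvisited anchor. A cheap approximation
    to the shortest path through all anchors."""
    anchors = np.asarray(anchors, float)
    unvisited = list(range(len(anchors)))
    path = [unvisited.pop(start)]
    while unvisited:
        last = anchors[path[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(anchors[i] - last))
        unvisited.remove(nxt)
        path.append(nxt)
    return path

def path_length(anchors, path):
    """Total Euclidean length of the fly-scan path through the anchors."""
    anchors = np.asarray(anchors, float)
    return sum(np.linalg.norm(anchors[path[i + 1]] - anchors[path[i]])
               for i in range(len(path) - 1))
```

In the framework sketched by the paper, the score function would propose the anchors, an optimizer would refine them, and a path like the one above would then drive the continuous probe trajectory for the next scan iteration.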

Anisotropic Fourier Features for Positional Encoding in Medical Imaging

Nabil Jabareen, Dongsheng Yuan, Dingming Liu, Foo-Wei Ten, Sören Lukassen

arxiv preprint · Sep 2 2025
The adoption of Transformer-based architectures in the medical domain is growing rapidly. In medical imaging, the analysis of complex shapes - such as organs, tissues, or other anatomical structures - combined with the often anisotropic nature of high-dimensional images complicates these adaptations. In this study, we critically examine the role of Positional Encodings (PEs), arguing that commonly used approaches may be suboptimal for the specific challenges of medical imaging. Sinusoidal Positional Encodings (SPEs) have proven effective in vision tasks, but they struggle to preserve Euclidean distances in higher-dimensional spaces. Isotropic Fourier Feature Positional Encodings (IFPEs) have been proposed to better preserve Euclidean distances, but they lack the ability to account for anisotropy in images. To address these limitations, we propose Anisotropic Fourier Feature Positional Encoding (AFPE), a generalization of IFPE that incorporates anisotropic, class-specific, and domain-specific spatial dependencies. We systematically benchmark AFPE against commonly used PEs on multi-label classification in chest X-rays, organ classification in CT images, and ejection fraction regression in echocardiography. Our results demonstrate that choosing the correct PE can significantly improve model performance. We show that the optimal PE depends on the shape of the structure of interest and the anisotropy of the data. Finally, our proposed AFPE significantly outperforms state-of-the-art PEs in all tested anisotropic settings. We conclude that, in anisotropic medical images and videos, it is of paramount importance to choose an anisotropic PE that fits the data and the shape of interest.
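The paper's exact AFPE formulation is not reproduced in the abstract; a minimal sketch of the general idea (random Fourier features whose frequency matrix is scaled per axis, so encoding resolution can differ along anisotropic image axes, e.g. coarse through-plane vs. fine in-plane) might look as follows. The `sigmas` values and the Gaussian frequency sampling are illustrative assumptions.

```python
import numpy as np

def anisotropic_fourier_features(coords, n_feats=16, sigmas=(1.0, 4.0), seed=0):
    """Map d-dimensional positions to 2*n_feats Fourier features. Each column
    of the random frequency matrix B is divided by a per-axis sigma, so a
    larger sigma yields lower-frequency (coarser) encoding along that axis."""
    rng = np.random.default_rng(seed)
    coords = np.atleast_2d(np.asarray(coords, float))   # (n, d)
    B = rng.normal(size=(n_feats, coords.shape[1]))     # (n_feats, d)
    B = B / np.asarray(sigmas, float)                   # anisotropic per-axis scaling
    proj = 2.0 * np.pi * coords @ B.T                   # (n, n_feats)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=1)
```

Setting all `sigmas` equal recovers an isotropic Fourier feature encoding, which is the special case AFPE generalizes.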

MedDINOv3: How to adapt vision foundation models for medical image segmentation?

Yuheng Li, Yizhou Wu, Yuxiang Lai, Mingzhe Hu, Xiaofeng Yang

arxiv preprint · Sep 2 2025
Accurate segmentation of organs and tumors in CT and MRI scans is essential for diagnosis, treatment planning, and disease monitoring. While deep learning has advanced automated segmentation, most models remain task-specific, lacking generalizability across modalities and institutions. Vision foundation models (FMs) pretrained on billion-scale natural images offer powerful and transferable representations. However, adapting them to medical imaging faces two key challenges: (1) the ViT backbone of most foundation models still underperforms specialized CNNs on medical image segmentation, and (2) the large domain gap between natural and medical images limits transferability. We introduce MedDINOv3, a simple and effective framework for adapting DINOv3 to medical segmentation. We first revisit plain ViTs and design a simple and effective architecture with multi-scale token aggregation. Then, we perform domain-adaptive pretraining on CT-3M, a curated collection of 3.87M axial CT slices, using a multi-stage DINOv3 recipe to learn robust dense features. MedDINOv3 matches or exceeds state-of-the-art performance across four segmentation benchmarks, demonstrating the potential of vision foundation models as unified backbones for medical image segmentation. The code is available at https://github.com/ricklisz/MedDINOv3.
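The abstract names multi-scale token aggregation without detailing it; one common reading (collecting patch tokens from several ViT layers and stacking them into a dense 2-D feature map for a segmentation head) can be sketched as follows. This is an assumption about the mechanism, not code from the MedDINOv3 repository.

```python
import numpy as np

def aggregate_multiscale_tokens(layer_tokens, grid_hw):
    """Concatenate patch-token features from several ViT layers along the
    channel axis, reshaped to the patch grid. layer_tokens is a list of
    (h*w, c) arrays; the result is an (h, w, sum_of_c) dense feature map."""
    h, w = grid_hw
    maps = [t.reshape(h, w, t.shape[-1]) for t in layer_tokens]
    return np.concatenate(maps, axis=-1)
```

A segmentation decoder would then consume this map, getting both shallow (fine, local) and deep (semantic) features at every spatial position.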

A Preliminary Study on an Intelligent Segmentation and Classification Model for Amygdala-Hippocampus MRI Images in Alzheimer's Disease.

Liu S, Zhou K, Geng D

pubmed papers · Sep 2 2025
This study developed a deep learning model for segmenting and classifying the amygdala-hippocampus in Alzheimer's disease (AD), using a large-scale neuroimaging dataset to improve early AD detection and intervention. We collected 1000 healthy controls (HC) and 1000 AD patients as internal training data from 15 Chinese medical centers. The independent external validation dataset was sourced from another three centers. All subjects underwent neuroimaging and neuropsychological assessments. A semi-automated annotation pipeline was used: the amygdala-hippocampus of 200 cases in each group was manually annotated to train the U²-Net segmentation model, followed by model annotation of the remaining 800 cases with iterative refinement. The DenseNet-121 architecture was used for automated classification. The robustness of the model was evaluated on the external validation set. All 18 medical centers were distributed across diverse geographical regions in China. AD patients had lower MMSE/MoCA scores. Amygdala and hippocampal volumes were smaller in AD. Semi-automated annotation improved segmentation, with all DSC values exceeding 0.88 (P<0.001). The final DSC of the 2000-case cohort was 0.914 in the training set and 0.896 in the testing set. The classification model achieved an AUC of 0.905. The external validation set comprised 100 cases per group, on which the model achieved an AUC of 0.835. The deep learning-based semi-automated approach and classification model may improve amygdala-hippocampus recognition precision, supporting AD evaluation, diagnosis, and clinical AI applications.
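The study reports segmentation quality as DSC; for reference, the Dice similarity coefficient on binary masks is simply twice the overlap divided by the total foreground, as in this short sketch:

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice similarity coefficient (DSC) between two binary masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)
```

A DSC of 0.914 (training) vs. 0.896 (testing), as reported above, indicates high voxel-level overlap between model and reference segmentations.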

Predictive modeling of hematoma expansion from non-contrast computed tomography in spontaneous intracerebral hemorrhage patients

Ironside, N., El Naamani, K., Rizvi, T., Shifat-E-Rabbi, M., Kundu, S., Becceril-Gaitan, A., Pas, K., Snyder, H., Chen, C.-J., Langefeld, C., Woo, D., Mayer, S. A., Connolly, E. S., Rohde, G. K., VISTA-ICH,, ERICH investigators,

medrxiv preprint · Sep 2 2025
Hematoma expansion is a consistent predictor of poor neurological outcome and mortality after spontaneous intracerebral hemorrhage (ICH). An incomplete understanding of its biophysiology has limited early preventative intervention. Transport-based morphometry (TBM) is a mathematical modeling technique that uses a physically meaningful metric to quantify and visualize discriminating image features that are not readily perceptible to the human eye. We hypothesized that TBM could discover relationships between hematoma morphology on initial Non-Contrast Computed Tomography (NCCT) and hematoma expansion. 170 spontaneous ICH patients enrolled in the multi-center international Virtual International Stroke Trials Archive (VISTA-ICH) with time-series NCCT data were used for model derivation. Its performance was assessed on a test dataset of 170 patients from the Ethnic/Racial Variations of Intracerebral Hemorrhage (ERICH) study. A unique transport-based representation was produced from each presentation NCCT hematoma image to identify morphological features of expansion. The principal hematoma features identified by TBM were larger size, density heterogeneity, shape irregularity, and peripheral density distribution. These were consistent with clinician-identified features of hematoma expansion, corroborating the hypothesis that morphological characteristics of the hematoma promote future growth. Incorporating these traits into a multivariable model comprising morphological, spatial, and clinical information achieved an AUROC of 0.71 for quantifying 24-hour hematoma expansion risk in the test dataset. This outperformed existing clinician protocols and alternate machine learning methods, suggesting that TBM detects features with greater precision than visual inspection alone. This pre-clinical study presents a quantitative and interpretable method for discovery and visualization of NCCT biomarkers of hematoma expansion in ICH patients.
Because TBM has a direct physical meaning, its modeling of NCCT hematoma features can inform hypotheses for hematoma expansion mechanisms. It has potential future application as a clinical risk stratification tool.
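Transport-based morphometry builds on transport transforms of images; a minimal 1-D cumulative distribution transform (a standard building block of this family, not the paper's full 2-D pipeline) can be sketched as follows. The quantile count and grid are illustrative choices.

```python
import numpy as np

def cdt(signal, x, n_quantiles=64):
    """1-D cumulative distribution transform: represent a nonnegative signal
    by the positions of its mass quantiles (samples of the inverse CDF).
    Euclidean distance between two such transforms approximates the
    Wasserstein-2 transport distance between the normalized signals, which
    is the 'physically meaningful metric' transport methods rely on."""
    p = np.asarray(signal, float)
    p = p / p.sum()                       # normalize to a probability density
    cdf = np.cumsum(p)
    q = np.linspace(0, 1, n_quantiles, endpoint=False) + 0.5 / n_quantiles
    return np.interp(q, cdf, x)           # invert the CDF at mid-quantiles
```

A defining property, exploited for interpretability, is that translating the signal simply shifts its transform, so geometric changes in the image become linear displacements in transform space.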

Decoding Fibrosis: Transcriptomic and Clinical Insights via AI-Derived Collagen Deposition Phenotypes in MASLD

Wojciechowska, M. K., Thing, M., Hu, Y., Mazzoni, G., Harder, L. M., Werge, M. P., Kimer, N., Das, V., Moreno Martinez, J., Prada-Medina, C. A., Vyberg, M., Goldin, R., Serizawa, R., Tomlinson, J., Douglas Gaalsgard, E., Woodcock, D. J., Hvid, H., Pfister, D. R., Jurtz, V. I., Gluud, L.-L., Rittscher, J.

medrxiv preprint · Sep 2 2025
Histological assessment is foundational to multi-omics studies of liver disease, yet conventional fibrosis staging lacks resolution, and quantitative metrics like collagen proportionate area (CPA) fail to capture tissue architecture. While recent AI-driven approaches offer improved precision, they are proprietary and not accessible to academic research. Here, we present a novel, interpretable AI-based framework for characterising liver fibrosis from picrosirius red (PSR)-stained slides. By identifying data-driven collagen deposition phenotypes (CDPs), each capturing a distinct morphology, our method substantially improves the sensitivity and specificity of downstream transcriptomic and proteomic analyses compared to CPA and traditional fibrosis scores. Pathway analysis reveals that CDPs 4 and 5 are associated with active extracellular matrix remodelling, while phenotype correlates highlight links to liver functional status. Importantly, we demonstrate that selected CDPs can predict clinical outcomes with similar accuracy to established fibrosis metrics. All models and tools are made freely available to support transparent and reproducible multi-omics pathology research.
Highlights:
- We present a set of data-driven collagen deposition phenotypes for analysing PSR-stained liver biopsies, offering a spatially informed alternative to conventional fibrosis staging and CPA, available as open-source code.
- The identified collagen deposition phenotypes enhance transcriptomic and proteomic signal detection, revealing active ECM remodelling and distinct functional tissue states.
- Selected phenotypes predict clinical outcomes with performance comparable to fibrosis stage and CPA, highlighting their potential as candidate quantitative indicators of fibrosis severity.
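The phenotype-discovery pipeline itself is not described in the abstract; as a generic stand-in, grouping tissue-patch descriptors into k clusters (the role the CDPs play) could be sketched with plain k-means. The farthest-point initialization and the idea of clustering patch descriptors are illustrative assumptions, not the authors' method.

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Plain k-means. In this sketch each row of X would be a descriptor of
    one PSR-stained tissue patch, and the k cluster ids play the role of
    data-driven collagen deposition phenotypes."""
    X = np.asarray(X, float)
    # farthest-point initialization keeps the sketch deterministic
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min(((X[:, None, :] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)  # move center to cluster mean
    return labels, centers
```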

Optimizing and Evaluating Robustness of AI for Brain Metastasis Detection and Segmentation via Loss Functions and Multi-dataset Training

Han, Y., Pathak, P., Award, O., Mohamed, A. S. R., Ugarte, V., Zhou, B., Hamstra, D. A., Echeverria, A. E., Mekdash, H. A., Siddiqui, Z. A., Sun, B.

medrxiv preprint · Sep 2 2025
Purpose: Accurate detection and segmentation of brain metastases (BM) from MRI are critical for the appropriate management of cancer patients. This study investigates strategies to enhance the robustness of artificial intelligence (AI)-based BM detection and segmentation models. Method: A DeepMedic-based network with a loss function tunable via a sensitivity/specificity tradeoff weighting factor α was trained on T1 post-contrast MRI datasets from two institutions (514 patients, 4520 lesions). Robustness was evaluated on an external dataset from a third institution (91 patients, 397 lesions), featuring ground truth annotations from two physicians. We investigated the impact of the loss function weighting factor α and of training dataset combinations. Detection performance (sensitivity, precision, F1 score) and segmentation accuracy (Dice similarity and 95% Hausdorff distance (HD95)) were evaluated using one physician's contours as the reference standard. The optimal AI model was then directly compared to the performance of the second physician. Results: Varying α demonstrated a trade-off between sensitivity (higher α) and precision (lower α), with α=0.5 yielding the best F1 score (0.80 ± 0.04 vs. 0.78 ± 0.04 for α=0.95 and 0.72 ± 0.03 for α=0.99) on the external dataset. The optimally trained model achieved detection performance comparable to the physician (F1: AI=0.83 ± 0.04, Physician=0.83 ± 0.04), but slightly underperformed in segmentation (Dice: physician 0.79 ± 0.04 vs. AI 0.74 ± 0.03; HD95: physician 2.8 ± 0.14 mm vs. AI 3.18 ± 0.16 mm, p<0.05). Conclusion: The derived optimal model achieves detection and segmentation performance comparable to an expert physician in a parallel comparison.
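The abstract does not give the exact form of the α-tunable loss; a Tversky-style weighting is one common way to realize such a sensitivity/precision tradeoff, sketched below purely as an assumption about the mechanism, not the study's actual loss.

```python
import numpy as np

def tversky_loss(pred, truth, alpha=0.5, eps=1e-8):
    """Tversky-style segmentation loss with a sensitivity/specificity tradeoff:
    larger alpha penalizes false negatives more (pushing sensitivity up),
    smaller alpha penalizes false positives more (pushing precision up).
    At alpha=0.5 this reduces to the soft Dice loss."""
    pred = np.asarray(pred, float)    # soft predictions in [0, 1]
    truth = np.asarray(truth, float)  # binary ground truth
    tp = (pred * truth).sum()
    fn = ((1 - pred) * truth).sum()
    fp = (pred * (1 - truth)).sum()
    return 1.0 - (tp + eps) / (tp + alpha * fn + (1 - alpha) * fp + eps)
```

Sweeping `alpha` during training, as the study does with its weighting factor, traces out the sensitivity/precision curve from which an operating point (here α=0.5) is chosen by F1 score.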