LiteMIL: A Computationally Efficient Transformer-Based MIL for Cancer Subtyping on Whole Slide Images.

Kussaibi, H.

medRxiv preprint, May 12, 2025
Purpose: Accurate cancer subtyping is crucial for effective treatment; however, it presents challenges due to overlapping morphology and variability among pathologists. Although deep learning (DL) methods have shown potential, their application to gigapixel whole slide images (WSIs) is often hindered by high computational demands and the need for efficient, context-aware feature aggregation. This study introduces LiteMIL, a computationally efficient transformer-based multiple instance learning (MIL) network combined with Phikon, a pathology-tuned self-supervised feature extractor, for robust and scalable cancer subtyping on WSIs. Methods: Initially, patches were extracted from the TCGA-THYM dataset (242 WSIs, six subtypes) and subsequently fed in real time to Phikon for feature extraction. To train MILs, features were arranged into uniform bags using a chunking strategy that maintains tissue context while increasing training data. LiteMIL utilizes a learnable query vector within an optimized multi-head attention module for effective feature aggregation. The model's performance was evaluated against established MIL methods on the thymic dataset and three additional TCGA datasets (breast, lung, and kidney cancer). Results: LiteMIL achieved an F1 score of 0.89 ± 0.01 and an AUC of 0.99 on the thymic dataset, outperforming other MILs. LiteMIL demonstrated strong generalizability across the external datasets, scoring the best on the breast and kidney cancer datasets. Compared to TransMIL, LiteMIL significantly reduces training time and GPU memory usage. Ablation studies confirmed the critical role of the learnable query and layer normalization in enhancing performance and stability. Conclusion: LiteMIL offers a resource-efficient, robust solution. Its streamlined architecture, combined with the compact Phikon features, makes it suitable for integration into routine histopathological workflows, particularly in resource-limited settings.
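
The abstract does not include code, but the aggregation step it describes (a single learnable query attending over a bag of Phikon patch features through multi-head attention, followed by layer normalization) can be sketched in PyTorch as below. The class name, 768-dimensional feature size, and head count are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class LearnableQueryMILPool(nn.Module):
    # Hypothetical sketch of query-based MIL aggregation in the spirit of LiteMIL.
    def __init__(self, feat_dim=768, num_heads=8, num_classes=6):
        super().__init__()
        # One learnable query vector attends over all patch features in a bag.
        self.query = nn.Parameter(torch.randn(1, 1, feat_dim))
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)   # layer normalization, noted as important in the ablation
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, bag):                          # bag: (batch, n_patches, feat_dim)
        q = self.query.expand(bag.size(0), -1, -1)   # broadcast the query to the batch
        pooled, _ = self.attn(q, bag, bag)           # query attends over the patch features
        return self.head(self.norm(pooled.squeeze(1)))  # slide-level subtype logits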

Automated scout-image-based estimation of contrast agent dosing: a deep learning approach

Schirrmeister, R., Taleb, L., Friemel, P., Reisert, M., Bamberg, F., Weiss, J., Rau, A.

medRxiv preprint, May 12, 2025
We developed and tested a deep-learning-based algorithm for the approximation of contrast agent dosage based on computed tomography (CT) scout images. We prospectively enrolled 817 patients undergoing clinically indicated CT imaging, predominantly of the thorax and/or abdomen. Patient weight was collected by study staff prior to the examination 1) with a weight scale and 2) as self-reported. Based on the scout images, we developed an EfficientNet convolutional neural network pipeline to estimate the optimal contrast agent dose based on patient weight and provide a browser-based user interface as a versatile open-source tool to account for different contrast agent compounds. We additionally analyzed the body-weight-informative CT features by synthesizing representative examples for different weights using in-context learning and dataset distillation. The cohort consisted of 533 thoracic, 70 abdominal and 229 thoracic-abdominal CT scout scans. Self-reported patient weight was statistically significantly lower than manual measurements (75.13 kg vs. 77.06 kg; p < 10^-5, Wilcoxon signed-rank test). Our pipeline predicted patient weight with a mean absolute error of 3.90 ± 0.20 kg (corresponding to a roughly 4.48-11.70 ml difference in contrast agent depending on the agent) in 5-fold cross-validation and is publicly available at https://tinyurl.com/ct-scout-weight. Interpretability analysis revealed that both larger anatomical shape and higher overall attenuation were predictive of body weight. Our open-source deep learning pipeline allows for the automatic estimation of accurate contrast agent dosing based on scout images in routine CT imaging studies. This approach has the potential to streamline contrast agent dosing workflows, improve efficiency, and enhance patient safety by providing quick and accurate weight estimates without additional measurements or reliance on potentially outdated records. The model's performance may vary depending on patient positioning and scout image quality, and the approach requires validation on larger patient cohorts and at other clinical centers. Author Summary: Automation of medical workflows using AI has the potential to increase reproducibility while saving costs and time. Here, we investigated automating the estimation of the required contrast agent dosage for CT examinations. We trained a deep neural network to predict body weight from the initial 2D CT scout images that are required prior to the actual CT examination. The predicted weight is then converted to a contrast agent dosage based on contrast-agent-specific conversion factors. To facilitate application in clinical routine, we developed a user-friendly browser-based user interface that allows clinicians to select a contrast agent or input a custom conversion factor to receive dosage suggestions, with local data processing in the browser. We also investigated which image characteristics predict body weight and found plausible relationships, such as higher attenuation and larger anatomical shapes correlating with higher body weights. Our work goes beyond prior work by implementing a single model for a variety of anatomical regions, providing an accessible user interface, and investigating the predictive characteristics of the images.
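
Downstream of the weight estimate, the dosing step itself is a plain weight-to-volume conversion with an agent-specific factor. A minimal sketch is below; the conversion factor shown is a hypothetical placeholder, not a clinical recommendation, and the real factors come from the agent selected in the browser interface.

def contrast_dose_ml(predicted_weight_kg: float, ml_per_kg: float) -> float:
    # Convert an estimated body weight into a contrast agent volume.
    # ml_per_kg is agent-specific and must be supplied by the user.
    return predicted_weight_kg * ml_per_kg

# With a hypothetical factor of 1.5 ml/kg, a 77 kg estimate yields 115.5 ml;
# a 3.90 kg weight error then translates into 3.90 * ml_per_kg of dosing error,
# which is how the 4.48-11.70 ml range quoted in the abstract arises across agents.
print(contrast_dose_ml(77.0, 1.5))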

Automatic Quantification of Ki-67 Labeling Index in Pediatric Brain Tumors Using QuPath

Spyretos, C., Pardo Ladino, J. M., Blomstrand, H., Nyman, P., Snodahl, O., Shamikh, A., Elander, N. O., Haj-Hosseini, N.

medRxiv preprint, May 12, 2025
The quantification of the Ki-67 labeling index (LI) is critical for assessing tumor proliferation and prognosis, yet manual scoring remains a common practice. This study presents an automated workflow for Ki-67 scoring in whole slide images (WSIs) using an Apache Groovy script for QuPath, complemented by a Python-based post-processing script that provides cell density maps and summary tables. Tissue and cell segmentation are performed using StarDist, a deep learning model, and adaptive thresholding is used to classify Ki-67-positive and -negative nuclei. The pipeline was applied to a cohort of 632 pediatric brain tumor cases with 734 Ki-67-stained WSIs from the Children's Brain Tumor Network. Medulloblastoma showed the highest Ki-67 LI (median: 19.84), followed by atypical teratoid rhabdoid tumor (median: 19.36). Moderate values were observed in brainstem glioma-diffuse intrinsic pontine glioma (median: 11.50), high-grade glioma (grades 3 & 4) (median: 9.50), and ependymoma (median: 5.88). Lower indices were found in meningioma (median: 1.84), while the lowest were seen in low-grade glioma (grades 1 & 2) (median: 0.85), dysembryoplastic neuroepithelial tumor (median: 0.63), and ganglioglioma (median: 0.50). The results aligned with the oncological consensus, demonstrating a significant correlation in Ki-67 LI across most of the tumor families/types, with high-malignancy tumors showing the highest proliferation indices and lower-malignancy tumors exhibiting lower Ki-67 LI. The automated approach facilitates the assessment of large numbers of Ki-67 WSIs in research settings.
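
At its core, the Python post-processing reduces to computing the labeling index from QuPath's per-cell detections, LI (%) = positive nuclei / (positive + negative nuclei) x 100. A minimal sketch is shown below; the tab-separated export and the "Class" column values are assumptions about the detection export, not the authors' exact schema.

import pandas as pd

def ki67_labeling_index(detections_path: str) -> float:
    # Ki-67 LI (%) = positive nuclei / all counted nuclei * 100.
    cells = pd.read_csv(detections_path, sep="\t")   # QuPath measurement exports are typically tab-separated
    positive = (cells["Class"] == "Positive").sum()
    negative = (cells["Class"] == "Negative").sum()
    total = positive + negative
    return 100.0 * positive / total if total else 0.0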

Altered intrinsic ignition dynamics linked to Amyloid-β and tau pathology in Alzheimer's disease

Patow, G. A., Escrichs, A., Martinez-Molina, N., Ritter, P., Deco, G.

bioRxiv preprint, May 11, 2025
Alzheimer's disease (AD) progressively alters brain structure and function, yet the associated changes in large-scale brain network dynamics remain poorly understood. We applied the intrinsic ignition framework to resting-state functional MRI (rs-fMRI) data from AD patients, individuals with mild cognitive impairment (MCI), and cognitively healthy controls (HC) to elucidate how AD shapes intrinsic brain activity. We assessed node-metastability at the whole-brain level and in 7 canonical resting-state networks (RSNs). Our results revealed a progressive decline in dynamical complexity across the disease continuum. HC exhibited the highest node-metastability, whereas it was substantially reduced in MCI and AD patients. The cortical hierarchy of information processing was also disrupted, indicating that rich-club hubs may be selectively affected in AD progression. Furthermore, we used linear mixed-effects models to evaluate the influence of Amyloid-β (Aβ) and tau pathology on brain dynamics at both regional and whole-brain levels. We found significant associations between both protein burdens and alterations in node-metastability. Lastly, a machine learning classifier trained on brain dynamics, Aβ, and tau burden features achieved high accuracy in discriminating between disease stages. Together, our findings highlight the progressive disruption of intrinsic ignition across the whole brain and RSNs in AD and support the use of node-metastability in conjunction with proteinopathy as a novel framework for tracking disease progression.
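
Node-metastability in the intrinsic ignition framework is, roughly, the temporal variability of each region's phase-based integration with the rest of the brain. A simplified proxy, assuming band-passed regional BOLD time series in a regions-by-time array, could be computed as follows; this is an illustrative sketch, not the authors' exact pipeline.

import numpy as np
from scipy.signal import hilbert

def node_metastability(bold: np.ndarray) -> np.ndarray:
    # bold: (n_regions, n_timepoints) band-passed BOLD signals.
    # Returns one value per region: the standard deviation over time of its
    # local phase order parameter with respect to all other regions.
    phases = np.angle(hilbert(bold, axis=1))          # instantaneous phase per region
    n_regions, n_t = phases.shape
    locking = np.zeros((n_regions, n_t))
    for t in range(n_t):
        diff = phases[:, t][None, :] - phases[:, t][:, None]
        locking[:, t] = np.abs(np.exp(1j * diff).mean(axis=1))
    return locking.std(axis=1)                        # higher temporal variability = higher metastability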

Creation of an Open-Access Lung Ultrasound Image Database For Deep Learning and Neural Network Applications

Kumar, A., Nandakishore, P., Gordon, A. J., Baum, E., Madhok, J., Duanmu, Y., Kugler, J.

medRxiv preprint, May 11, 2025
Background: Lung ultrasound (LUS) offers advantages over traditional imaging for diagnosing pulmonary conditions, with superior accuracy compared to chest X-ray and similar performance to CT at lower cost. Despite these benefits, widespread adoption is limited by operator dependency, moderate interrater reliability, and training requirements. Deep learning (DL) could potentially address these challenges, but development of effective algorithms is hindered by the scarcity of comprehensive image repositories with proper metadata. Methods: We created an open-source dataset of LUS images derived from a multi-center study involving N=226 adult patients presenting with respiratory symptoms to emergency departments between March 2020 and April 2022. Images were acquired using a standardized scanning protocol (12-zone or modified 8-zone) with various point-of-care ultrasound devices. Three blinded researchers independently analyzed each image following consensus guidelines, with disagreements adjudicated to provide definitive interpretations. Videos were pre-processed to remove identifiers, and frames were extracted and resized to 128x128 pixels. Results: The dataset contains 1,874 video clips comprising 303,977 frames. Half of the participants (50%) had COVID-19 pneumonia. Among all clips, 66% contained no abnormalities, 18% contained B-lines, 4.5% contained consolidations, 6.4% contained both B-lines and consolidations, and 5.2% had indeterminate findings. Pathological findings varied significantly by lung zone, with anterior zones more frequently normal and less likely to show consolidations compared to lateral and posterior zones. Discussion: This dataset represents one of the largest annotated LUS repositories to date, including both COVID-19 and non-COVID-19 patients. The comprehensive metadata and expert interpretations enhance its utility for DL applications. Despite limitations including potential device-specific characteristics and COVID-19 predominance, this repository provides a valuable resource for developing AI tools to improve LUS acquisition and interpretation.
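
The frame-extraction step described in the Methods (de-identified clips split into individual frames and resized to 128x128 pixels) can be sketched with OpenCV as follows; the grayscale conversion and function name are illustrative assumptions, not the dataset's released preprocessing script.

import cv2

def extract_frames(video_path: str, size: int = 128):
    # Read a de-identified LUS clip and return its frames resized to size x size.
    frames = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # ultrasound is effectively single-channel
        frames.append(cv2.resize(gray, (size, size)))    # 128x128, as described in the abstract
    cap.release()
    return frames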

A Clinical Neuroimaging Platform for Rapid, Automated Lesion Detection and Personalized Post-Stroke Outcome Prediction

Brzus, M., Griffis, J. C., Riley, C. J., Bruss, J., Shea, C., Johnson, H. J., Boes, A. D.

medRxiv preprint, May 11, 2025
Predicting long-term functional outcomes for individuals with stroke is a significant challenge. Solving this challenge will open new opportunities for improving stroke management by informing acute interventions and guiding personalized rehabilitation strategies. The location of the stroke is a key predictor of outcomes, yet no clinically deployed tools incorporate lesion location information for outcome prognostication. This study responds to this critical need by introducing a fully automated, three-stage neuroimaging processing and machine learning pipeline that predicts personalized outcomes from clinical imaging in adult ischemic stroke patients. In the first stage, our system automatically processes raw DICOM inputs, registers the brain to a standard template, and uses deep learning models to segment the stroke lesion. In the second stage, lesion location and automatically derived network features are input into statistical models trained to predict long-term impairments from a large independent cohort of lesion patients. In the third stage, a structured PDF report is generated using a large language model that describes the stroke's location, the arterial distribution, and personalized prognostic information. We demonstrate the viability of this approach in a proof-of-concept application predicting select cognitive outcomes in a stroke cohort. Brain-behavior models were pre-trained to predict chronic impairment on 28 different cognitive outcomes in a large cohort of patients with focal brain lesions (N=604). The automated pipeline used these models to predict outcomes from clinically acquired MRIs in an independent ischemic stroke cohort (N=153). Starting from raw clinical DICOM images, we show that our pipeline can generate outcome predictions for individual patients in less than 3 minutes with 96% concordance relative to methods requiring manual processing. We also show that prediction accuracy is enhanced using models that incorporate lesion location, lesion-associated network information, and demographics. Our results provide a strong proof-of-concept and lay the groundwork for developing imaging-based clinical tools for stroke outcome prognostication.
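
The three stages described above can be read as a simple orchestration: image processing and lesion segmentation, outcome modeling, and report generation. The schematic below names every function hypothetically, purely to make the flow concrete; none of these identifiers comes from the paper.

def run_stroke_pipeline(dicom_dir: str) -> str:
    # Stage 1: raw DICOM -> registration to a standard template -> deep-learning lesion segmentation
    image = load_and_register_to_template(dicom_dir)            # placeholder name
    lesion_mask = segment_lesion(image)                         # placeholder name

    # Stage 2: lesion location and network features -> pre-trained brain-behavior models
    features = derive_lesion_network_features(lesion_mask)      # placeholder name
    predicted_outcomes = predict_cognitive_outcomes(features)   # placeholder name

    # Stage 3: a large language model drafts the structured PDF report
    return generate_pdf_report(lesion_mask, predicted_outcomes) # placeholder name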

Deeply Explainable Artificial Neural Network

David Zucker

arXiv preprint, May 10, 2025
While deep learning models have demonstrated remarkable success in numerous domains, their black-box nature remains a significant limitation, especially in critical fields such as medical image analysis and inference. Existing explainability methods, such as SHAP, LIME, and Grad-CAM, are typically applied post hoc, adding computational overhead and sometimes producing inconsistent or ambiguous results. In this paper, we present the Deeply Explainable Artificial Neural Network (DxANN), a novel deep learning architecture that embeds explainability ante hoc, directly into the training process. Unlike conventional models that require external interpretation methods, DxANN is designed to produce per-sample, per-feature explanations as part of the forward pass. Built on a flow-based framework, it enables both accurate predictions and transparent decision-making, and is particularly well-suited for image-based tasks. While our focus is on medical imaging, the DxANN architecture is readily adaptable to other data modalities, including tabular and sequential data. DxANN marks a step forward toward intrinsically interpretable deep learning, offering a practical solution for applications where trust and accountability are essential.
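
The abstract is architecture-level, but the ante-hoc idea (the forward pass itself returning per-sample, per-feature explanations alongside the prediction) can be illustrated with a toy interface sketch. The model below is deliberately simple, a linear head whose weight-times-input terms are its own explanations; it is not the DxANN architecture, only an illustration of the output contract.

import torch
import torch.nn as nn

class ExplainableLinearHead(nn.Module):
    # Toy ante-hoc explainability: explanations are produced within the forward pass itself.
    def __init__(self, in_features: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(in_features, num_classes)

    def forward(self, x):                                   # x: (batch, in_features)
        contributions = x.unsqueeze(1) * self.fc.weight     # (batch, classes, features) per-feature terms
        logits = contributions.sum(-1) + self.fc.bias       # identical to the ordinary linear output
        return logits, contributions                        # prediction plus per-sample, per-feature explanation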

Reproducing and Improving CheXNet: Deep Learning for Chest X-ray Disease Classification

Daniel Strick, Carlos Garcia, Anthony Huang

arXiv preprint, May 10, 2025
Deep learning for radiologic image analysis is a rapidly growing field in biomedical research and is likely to become a standard practice in modern medicine. On the publicly available NIH ChestX-ray14 dataset, containing X-ray images that are classified by the presence or absence of 14 different diseases, we reproduced an algorithm known as CheXNet, as well as explored other algorithms that outperform CheXNet's baseline metrics. Model performance was primarily evaluated using the F1 score and AUC-ROC, both of which are critical metrics for imbalanced, multi-label classification tasks in medical imaging. The best model achieved an average AUC-ROC score of 0.85 and an average F1 score of 0.39 across all 14 disease classifications present in the dataset.
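
Both reported numbers are standard macro-averages over the 14 disease labels; a minimal sketch of how they are typically computed (array names and the 0.5 threshold are illustrative, not the paper's exact evaluation code) follows.

import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

def evaluate_multilabel(y_true: np.ndarray, y_prob: np.ndarray, threshold: float = 0.5):
    # y_true, y_prob: (n_samples, 14) binary labels and predicted probabilities.
    auc = roc_auc_score(y_true, y_prob, average="macro")                       # mean AUC-ROC over the 14 classes
    f1 = f1_score(y_true, (y_prob >= threshold).astype(int), average="macro")  # mean F1 over the 14 classes
    return auc, f1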

Batch Augmentation with Unimodal Fine-tuning for Multimodal Learning

H M Dipu Kabir, Subrota Kumar Mondal, Mohammad Ali Moni

arXiv preprint, May 10, 2025
This paper proposes batch augmentation with unimodal fine-tuning to detect the fetus's organs from ultrasound images and associated clinical textual information. We also prescribe pre-training initial layers with investigated medical data before the multimodal training. At first, we apply a transferred initialization with the unimodal image portion of the dataset with batch augmentation. This step adjusts the initial layer weights for medical data. Then, we apply neural networks (NNs) with fine-tuned initial layers to images in batches with batch augmentation to obtain features. We also extract information from descriptions of images. We combine this information with features obtained from images to train the head layer. We write a dataloader script to load the multimodal data and use existing unimodal image augmentation techniques with batch augmentation for the multimodal data. The dataloader brings a new random augmentation for each batch to get a good generalization. We investigate the FPU23 ultrasound and UPMC Food-101 multimodal datasets. The multimodal large language model (LLM) with the proposed training provides the best results among the investigated methods. We receive near state-of-the-art (SOTA) performance on the UPMC Food-101 dataset. We share the scripts of the proposed method with traditional counterparts at the following repository: github.com/dipuk0506/multimodal
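
The per-batch augmentation idea described above (a fresh random augmentation drawn for each batch rather than for each sample) can be sketched as a collate function; the particular transforms and the (image, text) sample format are assumptions for illustration, not the released dataloader script.

import random
import torch
from torchvision import transforms

# Candidate unimodal image augmentations; one is drawn per batch (illustrative choices).
AUGMENTATIONS = [
    transforms.RandomHorizontalFlip(p=1.0),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
]

def batch_augment_collate(samples):
    # samples: list of (image_tensor, text) pairs from a multimodal dataset.
    aug = random.choice(AUGMENTATIONS)                      # a new random augmentation for this batch
    images = torch.stack([aug(img) for img, _ in samples])  # the same augmentation applied across the batch
    texts = [txt for _, txt in samples]
    return images, texts

# Usage sketch: DataLoader(dataset, batch_size=32, collate_fn=batch_augment_collate)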

Improving Generalization of Medical Image Registration Foundation Model

Jing Hu, Kaiwei Yu, Hongjiang Xian, Shu Hu, Xin Wang

arXiv preprint, May 10, 2025
Deformable registration is a fundamental task in medical image processing, aiming to achieve precise alignment by establishing nonlinear correspondences between images. Traditional methods offer good adaptability and interpretability but are limited by computational efficiency. Although deep learning approaches have significantly improved registration speed and accuracy, they often lack flexibility and generalizability across different datasets and tasks. In recent years, foundation models have emerged as a promising direction, leveraging large and diverse datasets to learn universal features and transformation patterns for image registration, thus demonstrating strong cross-task transferability. However, these models still face challenges in generalization and robustness when encountering novel anatomical structures, varying imaging conditions, or unseen modalities. To address these limitations, this paper incorporates Sharpness-Aware Minimization (SAM) into foundation models to enhance their generalization and robustness in medical image registration. By optimizing the flatness of the loss landscape, SAM improves model stability across diverse data distributions and strengthens its ability to handle complex clinical scenarios. Experimental results show that foundation models integrated with SAM achieve significant improvements in cross-dataset registration performance, offering new insights for the advancement of medical image registration technology. Our code is available at https://github.com/Promise13/fm_sam.
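
Sharpness-Aware Minimization itself is a generic two-pass update: ascend to the worst-case weights within a small L2 ball, take the gradient there, then restore the weights and step. A condensed PyTorch sketch of that update, independent of any particular registration model, is given below; the rho value and the loss_fn interface are illustrative choices, not the repository's exact code.

import torch

def sam_step(model, loss_fn, batch, base_optimizer, rho=0.05):
    # First pass: gradient at the current weights.
    loss_fn(model, batch).backward()

    # Ascent: perturb each parameter along its gradient, scaled to an L2 ball of radius rho.
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm() for p in params]))
    perturbations = []
    with torch.no_grad():
        for p in params:
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            perturbations.append((p, e))
    model.zero_grad()

    # Second pass: gradient at the perturbed ("sharpness-aware") weights.
    loss_fn(model, batch).backward()

    # Restore the original weights, then apply the base optimizer using the second gradient.
    with torch.no_grad():
        for p, e in perturbations:
            p.sub_(e)
    base_optimizer.step()
    base_optimizer.zero_grad()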