Patient-Specific and Interpretable Deep Brain Stimulation Optimisation Using MRI and Clinical Review Data

Mikroulis, A., Lasica, A., Filip, P., Bakstein, E., Novak, D.

medrxiv logopreprintJul 17 2025
Background: Optimisation of Deep Brain Stimulation (DBS) settings is key to achieving clinical efficacy in movement disorders such as Parkinson's disease. Modern techniques attempt to solve the problem through data-intensive statistical and machine learning approaches, adding significant overhead to existing clinical workflows. Here, we present an optimisation approach for DBS electrode contact and current selection grounded in routinely collected MRI data, well-established tools (Lead-DBS) and, optionally, clinical review records. Methods: The pipeline, packaged in a cross-platform tool, uses lead reconstruction data and simulation of the volume of tissue activated to identify the contacts in optimal position relative to the target structure and to suggest the optimal stimulation current. The tool then allows further interactive user optimisation of the current settings. Existing electrode contact evaluations can optionally be included in the calculation for further fine-tuning and adverse-effect avoidance. Results: Based on a sample of 177 implanted electrode reconstructions from 89 Parkinson's disease patients, we demonstrate that DBS parameter settings chosen by our algorithm are more effective at covering the target structure (Wilcoxon p<6e-12, Hedges' g>0.34) and at minimising electric field leakage into neighbouring regions (p<2e-15, g>0.84) than expert parameter settings. Conclusion: The proposed automated method for optimising DBS electrode contact and current selection shows promising results and is readily applicable to existing clinical workflows. We demonstrate that the algorithmically selected contacts perform better than manual selections according to electric field calculations, allowing for a comparable clinical outcome without the iterative optimisation procedure.
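
As a rough illustration of the statistics reported above (not the authors' code), the sketch below computes a paired Wilcoxon signed-rank test and Hedges' g on synthetic per-electrode target-coverage values; all variable names and data are placeholders.

```python
# Illustrative sketch: paired comparison of target coverage between algorithmic
# and expert DBS settings, using the statistics named in the abstract.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Hypothetical per-electrode target-coverage fractions (0..1) for 177 electrodes.
coverage_expert = rng.beta(5, 3, size=177)
coverage_algo = np.clip(coverage_expert + rng.normal(0.05, 0.08, size=177), 0, 1)

stat, p = wilcoxon(coverage_algo, coverage_expert)

def hedges_g(a, b):
    """Hedges' g with the usual small-sample correction."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.std(ddof=1) ** 2 + (nb - 1) * b.std(ddof=1) ** 2)
                        / (na + nb - 2))
    d = (a.mean() - b.mean()) / pooled_sd
    return d * (1 - 3 / (4 * (na + nb) - 9))

print(f"Wilcoxon p = {p:.2e}, Hedges' g = {hedges_g(coverage_algo, coverage_expert):.2f}")
```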

Large Language Model-Based Entity Extraction Reliably Classifies Pancreatic Cysts and Reveals Predictors of Malignancy: A Cross-Sectional and Retrospective Cohort Study

Papale, A. J., Flattau, R., Vithlani, N., Mahajan, D., Ziemba, Y., Zavadsky, T., Carvino, A., King, D., Nadella, S.

medrxiv logopreprintJul 17 2025
Pancreatic cystic lesions (PCLs) are often discovered incidentally on imaging and may progress to pancreatic ductal adenocarcinoma (PDAC). PCLs have a high incidence in the general population, and adherence to screening guidelines can be variable. With the advent of technologies that enable automated text classification, we sought to evaluate various natural language processing (NLP) tools including large language models (LLMs) for identifying and classifying PCLs from radiology reports. We correlated our classification of PCLs to clinical features to identify risk factors for a positive PDAC biopsy. We contrasted a previously described NLP classifier to LLMs for prospective identification of PCLs in radiology. We evaluated various LLMs for PCL classification into low-risk or high-risk categories based on published guidelines. We compared prompt-based PCL classification to specific entity-guided PCL classification. To this end, we developed tools to deidentify radiology and track patients longitudinally based on their radiology reports. Additionally, we used our newly developed tools to evaluate a retrospective database of patients who underwent pancreas biopsy to determine associated factors including those in their radiology reports and clinical features using multivariable logistic regression modelling. Of 14,574 prospective radiology reports, 665 (4.6%) described a pancreatic cyst, including 175 (1.2%) high-risk lesions. Our Entity-Extraction Large Language Model tool achieved recall 0.992 (95% confidence interval [CI], 0.985-0.998), precision 0.988 (0.979-0.996), and F1-score 0.990 (0.985-0.995) for detecting cysts; F1-scores were 0.993 (0.987-0.998) for low-risk and 0.977 (0.952-0.995) for high-risk classification. Among 4,285 biopsy patients, 330 had pancreatic cysts documented [&ge;]6 months before biopsy. In the final multivariable model (AUC = 0.877), independent predictors of adenocarcinoma were change in duct caliber with upstream atrophy (adjusted odds ratio [AOR], 4.94; 95% CI, 1.30-18.79), mural nodules (AOR, 11.02; 1.81-67.26), older age (AOR, 1.10; 1.05-1.16), lower body mass index (AOR, 0.86; 0.76-0.96), and total bilirubin (AOR, 1.81; 1.18-2.77). Automated NLP-based analysis of radiology reports using LLM-driven entity extraction can accurately identify and risk-stratify PCLs and, when retrospectively applied, reveal factors predicting malignant progression. Widespread implementation may improve surveillance and enable earlier intervention.
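
A hedged sketch of the kind of multivariable logistic regression used to derive adjusted odds ratios (AORs) is shown below; the column names and data are synthetic placeholders, not the study dataset.

```python
# Sketch of a multivariable logistic regression yielding adjusted odds ratios.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 330
df = pd.DataFrame({
    "adenocarcinoma": rng.integers(0, 2, n),
    "duct_caliber_change_atrophy": rng.integers(0, 2, n),
    "mural_nodule": rng.integers(0, 2, n),
    "age": rng.normal(68, 10, n),
    "bmi": rng.normal(27, 5, n),
    "total_bilirubin": rng.gamma(2.0, 0.5, n),
})

model = smf.logit(
    "adenocarcinoma ~ duct_caliber_change_atrophy + mural_nodule + age + bmi + total_bilirubin",
    data=df,
).fit(disp=0)

aor = np.exp(model.params)      # adjusted odds ratios
ci = np.exp(model.conf_int())   # 95% confidence intervals on the OR scale
print(pd.concat([aor.rename("AOR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```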

Myocardial Native T1 Mapping in the German National Cohort (NAKO): Associations with Age, Sex, and Cardiometabolic Risk Factors

Ammann, C., Gröschel, J., Saad, H., Rospleszcz, S., Schuppert, C., Hadler, T., Hickstein, R., Niendorf, T., Nolde, J. M., Schulze, M. B., Greiser, K. H., Decker, J. A., Kröncke, T., Küstner, T., Nikolaou, K., Willich, S. N., Keil, T., Dörr, M., Bülow, R., Bamberg, F., Pischon, T., Schlett, C. L., Schulz-Menger, J.

medrxiv logopreprintJul 17 2025
Background and Aims: In cardiovascular magnetic resonance (CMR), myocardial native T1 mapping enables quantitative, non-invasive tissue characterization and is sensitive to subclinical changes in myocardial structure and composition. We investigated how age, sex, and cardiometabolic risk factors are associated with myocardial T1 in a population-based analysis within the German National Cohort (NAKO). Methods: This cross-sectional study included 29,573 prospectively enrolled participants who underwent CMR-based midventricular T1 mapping at 3.0 T, alongside clinical phenotyping. After artificial intelligence-assisted myocardial segmentation, a subset of 9,162 outliers was subjected to manual quality control according to clinical evaluation standards. Associations with cardiometabolic risk factors, identified through self-reported medical history, clinical chemistry, and blood pressure measurements, were evaluated using adjusted linear regression models. Results: Women had higher T1 values than men, with sex differences progressively declining with age. T1 was significantly elevated in individuals with diabetes (β=3.91 ms; p<0.001), kidney disease (β=3.44 ms; p<0.001), and current smoking (β=6.67 ms; p<0.001). Conversely, hyperlipidaemia was significantly associated with lower T1 (β=-4.41 ms; p<0.001). Associations with hypertension showed a sex-specific pattern: T1 was lower in women but increased with hypertension severity in men. Conclusions: Myocardial native T1 varies by sex and age and shows associations with major cardiometabolic risk factors. Notably, lower T1 times in participants with hyperlipidaemia may indicate a direct effect of blood lipids on the heart. Our findings support the utility of T1 mapping as a sensitive marker of early myocardial changes and highlight the sex-specific interplay between cardiometabolic health and myocardial tissue composition. Key Question: How are age, sex, and cardiometabolic risk factors associated with myocardial native T1, a quantitative magnetic resonance imaging marker of myocardial tissue composition, in a large-scale population-based evaluation within the German National Cohort (NAKO)? Key Finding: T1 relaxation times were higher in women and gradually converged between sexes with age. Diabetes, kidney disease, smoking, and hypertension in men were associated with prolonged T1 times. Unexpectedly, hyperlipidaemia and hypertension in women showed a negative association with T1. Take-Home Message: Native T1 mapping is sensitive to subclinical myocardial changes and reflects a close interplay between metabolic and myocardial health. It reveals marked age-dependent sex differences and sex-specific responses in myocardial tissue composition to cardiometabolic risk factors.
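
A minimal sketch of an adjusted linear regression of the type used for the reported beta coefficients is given below; the data and column names are synthetic stand-ins, not NAKO data.

```python
# Adjusted linear regression: change in native T1 (ms) per risk factor,
# adjusted for age and sex, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "t1_ms": rng.normal(1200, 40, n),
    "age": rng.normal(52, 12, n),
    "sex": rng.choice(["female", "male"], n),
    "diabetes": rng.integers(0, 2, n),
    "current_smoker": rng.integers(0, 2, n),
})

fit = smf.ols("t1_ms ~ diabetes + current_smoker + age + C(sex)", data=df).fit()
print(fit.params)    # beta coefficients in ms per unit or category
print(fit.pvalues)
```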

Cardiac Function Assessment with Deep-Learning-Based Automatic Segmentation of Free-Running 4D Whole-Heart CMR

Ogier, A. C., Baup, S., Ilanjian, G., Touray, A., Rocca, A., Banus Cobo, J., Monton Quesada, I., Nicoletti, M., Ledoux, J.-B., Richiardi, J., Holtackers, R. J., Yerly, J., Stuber, M., Hullin, R., Rotzinger, D., van Heeswijk, R. B.

medrxiv logopreprintJul 17 2025
Background: Free-running (FR) cardiac MRI enables free-breathing, ECG-free, fully dynamic 5D (3D spatial + cardiac + respiratory dimensions) imaging but poses significant challenges for clinical integration due to the volume and complexity of image analysis. Existing segmentation methods are tailored to 2D cine or static 3D acquisitions and cannot leverage the unique spatial-temporal wealth of FR data. Purpose: To develop and validate a deep learning (DL)-based segmentation framework for isotropic 3D+cardiac cycle FR cardiac MRI that enables accurate, fast, and clinically meaningful anatomical and functional analysis. Methods: Free-running, contrast-free bSSFP acquisitions at 1.5T and contrast-enhanced GRE acquisitions at 3T were used to reconstruct motion-resolved 5D datasets. From these, the end-expiratory respiratory phase was retained to yield fully isotropic 4D datasets. Automatic propagation of a limited set of manual segmentations was used to segment the left and right ventricular blood pool (LVB, RVB) and left ventricular myocardium (LVM) on reformatted short-axis (SAX) end-systolic (ES) and end-diastolic (ED) images. These were used to train a 3D nnU-Net model. Validation was performed using geometric metrics (Dice similarity coefficient [DSC], relative volume difference [RVD]), clinical metrics (ED and ES volumes, ejection fraction [EF]), and physiological consistency metrics (systole-diastole LVM volume mismatch and LV-RV stroke volume agreement). To assess the robustness and flexibility of the approach, we evaluated multiple additional DL training configurations, such as using 4D propagation-based data augmentation to incorporate all cardiac phases into training. Results: The main proposed method achieved automatic segmentation within a minute, delivering high geometric accuracy and consistency (DSC: 0.94 ± 0.01 [LVB], 0.86 ± 0.02 [LVM], 0.92 ± 0.01 [RVB]; RVD: 2.7%, 5.8%, 4.5%). Clinical LV metrics showed excellent agreement (ICC > 0.98 for EDV/ESV/EF, bias < 2 mL for EDV/ESV, < 1% for EF), while RV metrics remained clinically reliable (ICC > 0.93 for EDV/ESV/EF, bias < 1 mL for EDV/ESV, < 1% for EF) but exhibited wider limits of agreement. Training on all cardiac phases improved temporal coherence, reducing the LVM volume mismatch from 4.0% to 2.6%. Conclusion: This study validates a DL-based method for fast and accurate segmentation of whole-heart free-running 4D cardiac MRI. Robust performance across diverse protocols, and evaluation with complementary metrics that match state-of-the-art benchmarks, support its integration into clinical and research workflows, helping to overcome a key barrier to the broader adoption of free-running imaging.
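
For orientation, the helpers below illustrate the geometric and clinical metrics named in the abstract (Dice, relative volume difference, ejection fraction); they are generic definitions on toy data, not the study pipeline.

```python
# Generic metric helpers: Dice, relative volume difference, ejection fraction.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def relative_volume_difference(pred: np.ndarray, gt: np.ndarray) -> float:
    return abs(int(pred.sum()) - int(gt.sum())) / gt.sum()

def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Toy example on a random mask and plausible ED/ES volumes.
rng = np.random.default_rng(3)
mask_gt = rng.random((64, 64, 64)) > 0.7
mask_pred = mask_gt ^ (rng.random(mask_gt.shape) > 0.99)   # slightly perturbed copy
print(dice(mask_pred, mask_gt), relative_volume_difference(mask_pred, mask_gt))
print(ejection_fraction(edv_ml=150.0, esv_ml=60.0), "%")
```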

Real-time, inline quantitative MRI enabled by scanner-integrated machine learning: a proof of principle with NODDI

Samuel Rot, Iulius Dragonu, Christina Triantafyllou, Matthew Grech-Sollars, Anastasia Papadaki, Laura Mancini, Stephen Wastling, Jennifer Steeden, John Thornton, Tarek Yousry, Claudia A. M. Gandini Wheeler-Kingshott, David L. Thomas, Daniel C. Alexander, Hui Zhang

arxiv logopreprintJul 16 2025
Purpose: The clinical feasibility and translation of many advanced quantitative MRI (qMRI) techniques are inhibited by their restriction to 'research mode', due to resource-intensive, offline parameter estimation. This work aimed to achieve 'clinical mode' qMRI, by real-time, inline parameter estimation with a trained neural network (NN) fully integrated into a vendor's image reconstruction environment, therefore facilitating and encouraging clinical adoption of advanced qMRI techniques. Methods: The Siemens Image Calculation Environment (ICE) pipeline was customised to deploy trained NNs for advanced diffusion MRI parameter estimation with Open Neural Network Exchange (ONNX) Runtime. Two fully-connected NNs were trained offline with data synthesised with the neurite orientation dispersion and density imaging (NODDI) model, using either conventionally estimated (NNMLE) or ground truth (NNGT) parameters as training labels. The strategy was demonstrated online with an in vivo acquisition and evaluated offline with synthetic test data. Results: NNs were successfully integrated and deployed natively in ICE, performing inline, whole-brain, in vivo NODDI parameter estimation in <10 seconds. DICOM parametric maps were exported from the scanner for further analysis, generally finding that NNMLE estimates were more consistent than NNGT with conventional estimates. Offline evaluation confirms that NNMLE has comparable accuracy and slightly better noise robustness than conventional fitting, whereas NNGT exhibits compromised accuracy at the benefit of higher noise robustness. Conclusion: Real-time, inline parameter estimation with the proposed generalisable framework resolves a key practical barrier to clinical uptake of advanced qMRI methods and enables their efficient integration into clinical workflows.
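
As an offline analogue of the inline deployment described, the sketch below runs a trained parameter-estimating network with ONNX Runtime; the model file, tensor names, and shapes are hypothetical, and the scanner-side ICE integration itself is vendor code not shown here.

```python
# Sketch: offline inference with a NODDI-estimating network exported to ONNX.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("noddi_mlp.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name

# Hypothetical batch of per-voxel diffusion signals (n_voxels x n_measurements).
signals = np.random.rand(10000, 99).astype(np.float32)

# One forward pass yields the NODDI parameters (e.g. NDI, ODI, FWF) per voxel.
params = sess.run(None, {input_name: signals})[0]
print(params.shape)
```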

CytoSAE: Interpretable Cell Embeddings for Hematology

Muhammed Furkan Dasdelen, Hyesu Lim, Michele Buck, Katharina S. Götze, Carsten Marr, Steffen Schneider

arxiv logopreprintJul 16 2025
Sparse autoencoders (SAEs) emerged as a promising tool for mechanistic interpretability of transformer-based foundation models. Very recently, SAEs were also adopted for the visual domain, enabling the discovery of visual concepts and their patch-wise attribution to tokens in the transformer model. While a growing number of foundation models emerged for medical imaging, tools for explaining their inferences are still lacking. In this work, we show the applicability of SAEs for hematology. We propose CytoSAE, a sparse autoencoder which is trained on over 40,000 peripheral blood single-cell images. CytoSAE generalizes to diverse and out-of-domain datasets, including bone marrow cytology, where it identifies morphologically relevant concepts which we validated with medical experts. Furthermore, we demonstrate scenarios in which CytoSAE can generate patient-specific and disease-specific concepts, enabling the detection of pathognomonic cells and localized cellular abnormalities at the patch level. We quantified the effect of concepts on a patient-level AML subtype classification task and show that CytoSAE concepts reach performance comparable to the state-of-the-art, while offering explainability on the sub-cellular level. Source code and model weights are available at https://github.com/dynamical-inference/cytosae.
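
The sketch below is a minimal sparse autoencoder of the standard kind the abstract refers to (an overcomplete dictionary with an L1 sparsity penalty on hidden activations); the dimensions are assumptions and this is not the released CytoSAE code.

```python
# Minimal sparse autoencoder on patch embeddings (assumed dimensions).
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int = 768, d_hidden: int = 8192):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        z = torch.relu(self.encoder(x))   # sparse "concept" activations
        return self.decoder(z), z

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
x = torch.randn(256, 768)                 # stand-in for ViT patch embeddings
x_hat, z = sae(x)
loss = ((x_hat - x) ** 2).mean() + 1e-3 * z.abs().mean()  # reconstruction + sparsity
loss.backward(); opt.step()
print(float(loss))
```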

Interpreting Radiologist's Intention from Eye Movements in Chest X-ray Diagnosis

Trong-Thang Pham, Anh Nguyen, Zhigang Deng, Carol C. Wu, Hien Van Nguyen, Ngan Le

arxiv logopreprintJul 16 2025
Radiologists rely on eye movements to navigate and interpret medical images. A trained radiologist possesses knowledge about the potential diseases that may be present in the images and, when searching, follows a mental checklist to locate them using their gaze. This is a key observation, yet existing models fail to capture the underlying intent behind each fixation. In this paper, we introduce a deep learning-based approach, RadGazeIntent, designed to model this behavior: having an intention to find something and actively searching for it. Our transformer-based architecture processes both the temporal and spatial dimensions of gaze data, transforming fine-grained fixation features into coarse, meaningful representations of diagnostic intent to interpret radiologists' goals. To capture the nuances of radiologists' varied intention-driven behaviors, we process existing medical eye-tracking datasets to create three intention-labeled subsets: RadSeq (Systematic Sequential Search), RadExplore (Uncertainty-driven Exploration), and RadHybrid (Hybrid Pattern). Experimental results demonstrate RadGazeIntent's ability to predict which findings radiologists are examining at specific moments, outperforming baseline methods across all intention-labeled datasets.
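
A rough sketch of the kind of temporal model described (a transformer encoder over a fixation sequence that emits a per-fixation intent label) is shown below; the feature and label dimensions are assumptions, not the RadGazeIntent release.

```python
# Sketch: per-fixation intent prediction with a small transformer encoder.
import torch
import torch.nn as nn

class GazeIntentModel(nn.Module):
    def __init__(self, d_fix: int = 32, d_model: int = 128, n_intents: int = 3):
        super().__init__()
        self.proj = nn.Linear(d_fix, d_model)   # embed per-fixation features (x, y, duration, ...)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_intents)

    def forward(self, fixations):               # (batch, seq_len, d_fix)
        h = self.encoder(self.proj(fixations))
        return self.head(h)                     # per-fixation intent logits

model = GazeIntentModel()
logits = model(torch.randn(8, 40, 32))          # 8 scans, 40 fixations each
print(logits.shape)                             # torch.Size([8, 40, 3])
```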

Benchmarking and Explaining Deep Learning Cortical Lesion MRI Segmentation in Multiple Sclerosis

Nataliia Molchanova, Alessandro Cagol, Mario Ocampo-Pineda, Po-Jui Lu, Matthias Weigel, Xinjie Chen, Erin Beck, Charidimos Tsagkas, Daniel Reich, Colin Vanden Bulcke, Anna Stolting, Serena Borrelli, Pietro Maggi, Adrien Depeursinge, Cristina Granziera, Henning Mueller, Pedro M. Gordaliza, Meritxell Bach Cuadra

arxiv logopreprintJul 16 2025
Cortical lesions (CLs) have emerged as valuable biomarkers in multiple sclerosis (MS), offering high diagnostic specificity and prognostic relevance. However, their routine clinical integration remains limited due to their subtle magnetic resonance imaging (MRI) appearance, challenges in expert annotation, and a lack of standardized automated methods. We propose a comprehensive multi-centric benchmark of CL detection and segmentation in MRI. A total of 656 MRI scans, including clinical trial and research data from four institutions, were acquired at 3T and 7T using MP2RAGE and MPRAGE sequences with expert-consensus annotations. We rely on the self-configuring nnU-Net framework, designed for medical imaging segmentation, and propose adaptations tailored to improve CL detection. We evaluated model generalization through out-of-distribution testing, demonstrating strong lesion detection capabilities with F1-scores of 0.64 in-domain and 0.50 out-of-domain. We also analyze internal model features and model errors for a better understanding of AI decision-making. Our study examines how data variability, lesion ambiguity, and protocol differences impact model performance, offering recommendations to address these barriers to clinical adoption. To reinforce reproducibility, the implementation and models will be publicly accessible and ready to use at https://github.com/Medical-Image-Analysis-Laboratory/ and https://doi.org/10.5281/zenodo.15911797.
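
The snippet below illustrates a lesion-level F1 of the kind reported above, matching connected components in the prediction and ground truth by overlap; the matching criterion is a simplification on toy data, not the benchmark's exact protocol.

```python
# Illustrative lesion-level F1 via connected-component overlap matching.
import numpy as np
from scipy import ndimage

def lesion_f1(pred: np.ndarray, gt: np.ndarray) -> float:
    pred_lab, n_pred = ndimage.label(pred)
    gt_lab, n_gt = ndimage.label(gt)
    tp_gt = sum(1 for i in range(1, n_gt + 1) if pred[gt_lab == i].any())      # detected GT lesions
    tp_pred = sum(1 for j in range(1, n_pred + 1) if gt[pred_lab == j].any())  # true predicted lesions
    recall = tp_gt / n_gt if n_gt else 1.0
    precision = tp_pred / n_pred if n_pred else 1.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

rng = np.random.default_rng(4)
gt = rng.random((96, 96, 96)) > 0.999          # toy "lesions"
pred = ndimage.binary_dilation(gt)             # slightly over-segmented prediction
print(lesion_f1(pred, gt))
```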

Site-Level Fine-Tuning with Progressive Layer Freezing: Towards Robust Prediction of Bronchopulmonary Dysplasia from Day-1 Chest Radiographs in Extremely Preterm Infants

Sybelle Goedicke-Fritz, Michelle Bous, Annika Engel, Matthias Flotho, Pascal Hirsch, Hannah Wittig, Dino Milanovic, Dominik Mohr, Mathias Kaspar, Sogand Nemat, Dorothea Kerner, Arno Bücker, Andreas Keller, Sascha Meyer, Michael Zemlin, Philipp Flotho

arxiv logopreprintJul 16 2025
Bronchopulmonary dysplasia (BPD) is a chronic lung disease affecting 35% of extremely low birth weight infants. Defined by oxygen dependence at 36 weeks postmenstrual age, it causes lifelong respiratory complications. However, preventive interventions carry severe risks, including neurodevelopmental impairment, ventilator-induced lung injury, and systemic complications. Early BPD prognosis and prediction of BPD outcome are therefore crucial to avoid unnecessary toxicity in low-risk infants. Admission radiographs of extremely preterm infants are routinely acquired within 24 h of life and could serve as a non-invasive prognostic tool. In this work, we developed and investigated a deep learning approach using chest X-rays from 163 extremely low-birth-weight infants (≤32 weeks gestation, 401-999 g) obtained within 24 hours of birth. We fine-tuned a ResNet-50 pretrained specifically on adult chest radiographs, employing progressive layer freezing with discriminative learning rates to prevent overfitting, and evaluated CutMix augmentation and linear probing. For moderate/severe BPD outcome prediction, our best performing model with progressive freezing, linear probing, and CutMix achieved an AUROC of 0.78 ± 0.10, a balanced accuracy of 0.69 ± 0.10, and an F1-score of 0.67 ± 0.11. In-domain pre-training significantly outperformed ImageNet initialization (p = 0.031), confirming that domain-specific pretraining is important for BPD outcome prediction. Routine IRDS grades showed limited prognostic value (AUROC 0.57 ± 0.11), confirming the need for learned markers. Our approach demonstrates that domain-specific pretraining enables accurate BPD prediction from routine day-1 radiographs. Through progressive freezing and linear probing, the method remains computationally feasible for site-level implementation and future federated learning deployments.
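
A hedged sketch of progressive layer freezing with discriminative learning rates on a ResNet-50 is shown below, in the spirit of the training strategy described; the exact unfreezing schedule, pretrained weights, and hyperparameters of the study are not reproduced.

```python
# Sketch: stage-wise layer freezing + discriminative learning rates on ResNet-50.
import torch
from torchvision.models import resnet50

model = resnet50(weights=None)                            # in practice: radiograph-pretrained weights
model.fc = torch.nn.Linear(model.fc.in_features, 1)       # binary BPD-outcome head

# Stage 1: freeze the early stages, train only the deepest block and the head.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith(("layer4", "fc"))

# Discriminative learning rates: small for deep pretrained blocks, larger for the head.
optimizer = torch.optim.AdamW([
    {"params": model.layer4.parameters(), "lr": 1e-5},
    {"params": model.fc.parameters(), "lr": 1e-3},
])

# Later stages would progressively unfreeze layer3, layer2, ... each added as a new
# parameter group with its own (even smaller) learning rate.
x = torch.randn(4, 3, 224, 224)
loss = torch.nn.functional.binary_cross_entropy_with_logits(model(x), torch.rand(4, 1))
loss.backward(); optimizer.step()
print(float(loss))
```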

Generate to Ground: Multimodal Text Conditioning Boosts Phrase Grounding in Medical Vision-Language Models

Felix Nützel, Mischa Dombrowski, Bernhard Kainz

arxiv logopreprintJul 16 2025
Phrase grounding, i.e., mapping natural language phrases to specific image regions, holds significant potential for disease localization in medical imaging through clinical reports. While current state-of-the-art methods rely on discriminative, self-supervised contrastive models, we demonstrate that generative text-to-image diffusion models, leveraging cross-attention maps, can achieve superior zero-shot phrase grounding performance. Contrary to prior assumptions, we show that fine-tuning diffusion models with a frozen, domain-specific language model, such as CXR-BERT, substantially outperforms domain-agnostic counterparts. This setup achieves remarkable improvements, with mIoU scores doubling those of current discriminative methods. These findings highlight the underexplored potential of generative models for phrase grounding tasks. To further enhance performance, we introduce Bimodal Bias Merging (BBM), a novel post-processing technique that aligns text and image biases to identify regions of high certainty. BBM refines cross-attention maps, achieving even greater localization accuracy. Our results establish generative approaches as a more effective paradigm for phrase grounding in the medical imaging domain, paving the way for more robust and interpretable applications in clinical practice. The source code and model weights are available at https://github.com/Felix-012/generate_to_ground.
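
As a toy illustration of the evaluation quantity behind the mIoU figures quoted above (not the paper's pipeline), the snippet below thresholds a cross-attention-style map into a grounding mask and scores it against a ground-truth region with IoU.

```python
# Toy IoU between a thresholded attention map and a ground-truth region mask.
import numpy as np

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union else 1.0

rng = np.random.default_rng(5)
attn = rng.random((64, 64))             # stand-in for an upsampled cross-attention map
attn[20:40, 20:40] += 1.0               # the phrase attends strongly to this region
gt = np.zeros((64, 64), dtype=bool)
gt[18:42, 18:42] = True                 # ground-truth bounding-box mask

pred = attn > np.quantile(attn, 0.85)   # simple percentile threshold on the map
print(f"IoU = {iou(pred, gt):.2f}")
```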