Juampablo E. Heras Rivera, Caitlin M. Neher, Mehmet Kurt

arXiv preprint, Oct 3 2025
$\textbf{Purpose:}$ To develop and evaluate an operator learning framework for nonlinear inversion (NLI) of brain magnetic resonance elastography (MRE) data, which enables real-time inversion of elastograms with comparable spatial accuracy to NLI. $\textbf{Materials and Methods:}$ In this retrospective study, 3D MRE data from 61 individuals (mean age, 37.4 years; 34 female) were used for development of the framework. A predictive deep operator learning framework (oNLI) was trained using 10-fold cross-validation, with the complex curl of the measured displacement field as inputs and NLI-derived reference elastograms as outputs. A structural prior mechanism, analogous to Soft Prior Regularization in the MRE literature, was incorporated to improve spatial accuracy. Subject-level evaluation metrics included Pearson's correlation coefficient, absolute relative error, and structural similarity index measure between predicted and reference elastograms across brain regions of different sizes to understand accuracy. Statistical analyses included paired t-tests comparing the proposed oNLI variants to the convolutional neural network baselines. $\textbf{Results:}$ Whole brain absolute percent error was 8.4 $\pm$ 0.5 ($\mu'$) and 10.0 $\pm$ 0.7 ($\mu''$) for oNLI and 15.8 $\pm$ 0.8 ($\mu'$) and 26.1 $\pm$ 1.1 ($\mu''$) for CNNs. Additionally, oNLI outperformed convolutional architectures as per Pearson's correlation coefficient, $r$, in the whole brain and across all subregions for both the storage modulus and loss modulus (p < 0.05). $\textbf{Conclusion:}$ The oNLI framework enables real-time MRE inversion (30,000x speedup), outperforming CNN-based approaches and maintaining the fine-grained spatial accuracy achievable with NLI in the brain.
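The subject-level metrics described above (Pearson's r, absolute percent error, and SSIM between predicted and reference elastograms) can be computed with standard scientific Python tools. The sketch below is only an illustration, assuming co-registered NumPy modulus maps and a brain mask; it is not the authors' implementation.

```python
import numpy as np
from scipy.stats import pearsonr
from skimage.metrics import structural_similarity

def elastogram_metrics(pred, ref, mask):
    """Per-subject agreement between a predicted and a reference elastogram.

    pred, ref : 3D arrays of a modulus map (e.g., storage modulus mu'), assumed positive.
    mask      : boolean brain mask restricting the voxel-wise comparison.
    """
    p, r = pred[mask], ref[mask]
    corr, _ = pearsonr(p, r)                          # Pearson's correlation coefficient
    abs_pct_err = 100.0 * np.mean(np.abs(p - r) / r)  # absolute percent error within the mask
    # SSIM computed over the full volume for simplicity in this sketch
    ssim = structural_similarity(pred, ref, data_range=ref.max() - ref.min())
    return {"pearson_r": corr, "abs_pct_err": abs_pct_err, "ssim": ssim}
```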

Numan Saeed, Tausifa Jan Saleem, Fadillah Maani, Muhammad Ridzuan, Hu Wang, Mohammad Yaqub

arXiv preprint, Oct 3 2025
Deep learning for medical imaging is hampered by task-specific models that lack generalizability and prognostic capabilities, while existing 'universal' approaches suffer from simplistic conditioning and poor medical semantic understanding. To address these limitations, we introduce DuPLUS, a deep learning framework for efficient multi-modal medical image analysis. DuPLUS introduces a novel vision-language framework that leverages hierarchical semantic prompts for fine-grained control over the analysis task, a capability absent in prior universal models. To enable extensibility to other medical tasks, it includes a hierarchical, text-controlled architecture driven by a unique dual-prompt mechanism. For segmentation, DuPLUS generalizes across three imaging modalities and ten anatomically diverse medical datasets encompassing more than 30 organs and tumor types, outperforming state-of-the-art task-specific and universal models on 8 out of 10 datasets. We demonstrate the extensibility of its text-controlled architecture through seamless integration of electronic health record (EHR) data for prognosis prediction: on a head and neck cancer dataset, DuPLUS achieved a Concordance Index (CI) of 0.69. Parameter-efficient fine-tuning enables rapid adaptation to new tasks and modalities from varying centers, establishing DuPLUS as a versatile and clinically relevant solution for medical image analysis. The code for this work is made available at: https://anonymous.4open.science/r/DuPLUS-6C52
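The abstract describes text-controlled conditioning of the analysis task but gives no implementation details. As a generic illustration only, the sketch below shows one common way to modulate image features with a text-prompt embedding (FiLM-style scaling and shifting); the class and names are hypothetical and do not reflect the actual DuPLUS dual-prompt mechanism.

```python
import torch
import torch.nn as nn

class PromptFiLM(nn.Module):
    """Generic FiLM-style conditioning of image features on a text-prompt embedding."""
    def __init__(self, prompt_dim: int, feat_channels: int):
        super().__init__()
        self.to_scale = nn.Linear(prompt_dim, feat_channels)
        self.to_shift = nn.Linear(prompt_dim, feat_channels)

    def forward(self, feats: torch.Tensor, prompt_emb: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) image features; prompt_emb: (B, prompt_dim) text embedding
        scale = self.to_scale(prompt_emb)[:, :, None, None]
        shift = self.to_shift(prompt_emb)[:, :, None, None]
        return feats * (1 + scale) + shift  # prompt-dependent feature modulation
```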

Ming Zhao, Wenhui Dong, Yang Zhang, Xiang Zheng, Zhonghao Zhang, Zian Zhou, Yunzhi Guan, Liukun Xu, Wei Peng, Zhaoyang Gong, Zhicheng Zhang, Dachuan Li, Xiaosheng Ma, Yuli Ma, Jianing Ni, Changjiang Jiang, Lixia Tian, Qixin Chen, Kaishun Xia, Pingping Liu, Tongshun Zhang, Zhiqiang Liu, Zhongan Bi, Chenyang Si, Tiansheng Sun, Caifeng Shan

arXiv preprint, Oct 3 2025
Spine disorders affect 619 million people globally and are a leading cause of disability, yet AI-assisted diagnosis remains limited by the lack of level-aware, multimodal datasets. Clinical decision-making for spine disorders requires sophisticated reasoning across X-ray, CT, and MRI at specific vertebral levels. However, progress has been constrained by the absence of traceable, clinically-grounded instruction data and standardized, spine-specific benchmarks. To address this, we introduce SpineMed, an ecosystem co-designed with practicing spine surgeons. It features SpineMed-450k, the first large-scale dataset explicitly designed for vertebral-level reasoning across imaging modalities with over 450,000 instruction instances, and SpineBench, a clinically-grounded evaluation framework. SpineMed-450k is curated from diverse sources, including textbooks, guidelines, open datasets, and ~1,000 de-identified hospital cases, using a clinician-in-the-loop pipeline with a two-stage LLM generation method (draft and revision) to ensure high-quality, traceable data for question-answering, multi-turn consultations, and report generation. SpineBench evaluates models on clinically salient axes, including level identification, pathology assessment, and surgical planning. Our comprehensive evaluation of several recently advanced large vision-language models (LVLMs) on SpineBench reveals systematic weaknesses in fine-grained, level-specific reasoning. In contrast, our model fine-tuned on SpineMed-450k demonstrates consistent and significant improvements across all tasks. Clinician assessments confirm the diagnostic clarity and practical utility of our model's outputs.

Tianzheng Hu, Qiang Li, Shu Liu, Vince D. Calhoun, Guido van Wingen, Shujian Yu

arXiv preprint, Oct 3 2025
The development of diagnostic models is gaining traction in the field of psychiatric disorders. Recently, machine learning classifiers based on resting-state functional magnetic resonance imaging (rs-fMRI) have been developed to identify brain biomarkers that differentiate psychiatric disorders from healthy controls. However, conventional machine learning-based diagnostic models often depend on extensive feature engineering, which introduces bias through manual intervention. While deep learning models are expected to operate without manual involvement, their lack of interpretability poses significant challenges in obtaining explainable and reliable brain biomarkers to support diagnostic decisions, ultimately limiting their clinical applicability. In this study, we introduce an innovative end-to-end graph neural network framework, BrainIB++, which applies the information bottleneck (IB) principle to identify the most informative data-driven brain regions as subgraphs during model training for interpretation. We evaluate the performance of our model against nine established brain network classification methods across three multi-cohort schizophrenia datasets. It consistently demonstrates superior diagnostic accuracy and exhibits generalizability to unseen data. Furthermore, the subgraphs identified by our model correspond with established clinical biomarkers in schizophrenia, particularly emphasizing abnormalities in the visual, sensorimotor, and higher-cognition functional brain networks. This alignment enhances the model's interpretability and underscores its relevance for real-world diagnostic applications.
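For readers unfamiliar with IB-style subgraph selection, the toy sketch below illustrates the general idea: each brain region receives a learned importance score, the graph is classified from the masked nodes, and a sparsity penalty on the mask serves as a simple surrogate for the compression term. This is a minimal, hypothetical example, not the BrainIB++ architecture.

```python
import torch
import torch.nn as nn

class SubgraphMaskGCN(nn.Module):
    """Toy illustration of IB-style subgraph selection on a brain connectivity graph."""
    def __init__(self, in_dim: int, hid_dim: int, n_classes: int = 2):
        super().__init__()
        self.score = nn.Linear(in_dim, 1)        # per-node (brain region) importance
        self.gcn = nn.Linear(in_dim, hid_dim)    # shared node transform
        self.cls = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) normalized connectivity matrix
        mask = torch.sigmoid(self.score(x))       # soft subgraph membership in [0, 1]
        h = torch.relu(adj @ self.gcn(x * mask))  # one propagation step over masked nodes
        logits = self.cls(h.mean(dim=0))          # graph-level readout and classification
        sparsity = mask.mean()                    # IB-style penalty to add to the loss
        return logits, mask, sparsity
```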

Morgan, S., Salman, S., Walker, J., Freeman, W. D.

medRxiv preprint, Oct 3 2025
Introduction: Subarachnoid hemorrhage (SAH) is a life-threatening neurological emergency. SAHDAI-XAI (Subarachnoid Hemorrhage Detection Artificial Intelligence) is a cloud-based machine learning model built as a binary positive/negative classifier to detect SAH bleeding in any of eight potential hemorrhage spaces. It aims to address the lack of transparency in AI-based detection of subarachnoid hemorrhage. Methods: This project is divided into two phases, integrating AutoML and BLAST, combining statistical assessment of hemorrhage detection accuracy using a low-code approach with simultaneous colour-based visualization of bleeding areas to enhance transparency. In phase 1, an AutoML model was trained on Google Cloud Vertex AI after preprocessing. The model completed four runs with progressively larger datasets. The dataset was split into 80% for training, 10% for validation, and 10% for testing, with explainability (XRAI) applied to the testing images. We started with 20 non-contrast head CT images, followed by 40, 200, and then 300 images; in each AutoML run the dataset was divided equally, with one half manually labeled as positive for hemorrhage and the other half labeled as negative controls. The fourth AutoML run evaluated the model's ability to differentiate between a hemorrhage and other pathologies, such as tumors and calcifications. In phase 2, the goal is to increase explainability by visualizing predictive image features and showing detected hemorrhage locations using the Brain Lesion Analysis and Segmentation Tool for Computed Tomography (BLAST), which segments and quantifies four different hemorrhage and edema locations. Results: In phase 1, the first two AutoML runs demonstrated 100% average precision owing to the small dataset size. In the third run, after the dataset was enlarged, the average precision was 97.9%, with one false negative (FN) image detected. In the fourth run, which evaluated the model's differentiation abilities, the average precision dropped to 94.4%, with two false positive (FP) images in the testing set. In phase 2, after extensive preprocessing using the publicly available BLAST Python code, topographic images of the bleeding were produced with mixed outcomes: some accurately covered a significant percentage of the bleeding, whereas others did not. Conclusion: The SAHDAI-XAI model is a new image-based explainable AI model for SAH that enhances the transparency of AI hemorrhage detection in daily clinical practice, aiming to overcome AI's opaque nature and accelerate time to diagnosis, thereby helping to decrease mortality. BLAST utilization facilitates a better understanding of AI outcomes and supports visually demonstrated XAI in SAH detection and prediction of hemorrhage coverage. The goal is to resolve AI's hidden black-box aspect, making ML model outcomes increasingly transparent and explainable. Keywords: SAH, explainable AI, GCP, AutoML, BLAST, black-box.
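The class-balanced 80/10/10 split described in the methods can be reproduced with two stratified splits. The sketch below is a generic illustration with hypothetical variable names (image_paths, labels), not the study's actual pipeline.

```python
from sklearn.model_selection import train_test_split

# image_paths: list of CT image files; labels: 1 = hemorrhage, 0 = negative control (hypothetical)
def split_80_10_10(image_paths, labels, seed=42):
    # First carve off 20% for validation + test, stratified to keep classes balanced.
    x_train, x_rest, y_train, y_rest = train_test_split(
        image_paths, labels, test_size=0.2, stratify=labels, random_state=seed)
    # Then split the remainder evenly into validation and test sets.
    x_val, x_test, y_val, y_test = train_test_split(
        x_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)
```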

Sato, T., Nishitsuka, K., Itoh, T., Okashita, T., Wada, S., Shinjo, A.

medRxiv preprint, Oct 3 2025
Deep learning has shown promise in diabetic retinopathy screening using fundus images. However, many existing models operate as "black boxes," providing limited interpretability at the lesion level. This study aimed to develop an explainable deep learning model capable of detecting four diabetic retinopathy-related lesions (hemorrhages, hard exudates, cotton wool spots, and microaneurysms) and to evaluate its performance using both conventional per-lesion metrics and a novel syntactic agreement framework. A total of 1,087 fundus images were obtained from publicly available datasets (EyePACS and APTOS), which contained 585 images graded as mild-to-moderate nonproliferative diabetic retinopathy (DR1 or DR2). All images were manually annotated for the presence of the four lesions. A U-Net-based segmentation model was trained to generate binary predictions for each lesion type. Model performance was evaluated using sensitivity, specificity, precision, and F1 score, along with five syntactic agreement criteria that assessed the lesion-set consistency between predicted and ground-truth outputs at the image level. The model achieved high sensitivity and F1 scores for hemorrhages and hard exudates, showed moderate performance for cotton wool spots, and failed to detect any microaneurysms (0% sensitivity), with 92.9% of microaneurysm cases misclassified as hemorrhages. Despite this limitation, image-level agreement remained high, with any-lesion match and hemorrhage match rates exceeding 95%. These findings suggest that although individual lesion classification was imperfect, the model effectively recognized abnormal images, highlighting its potential as a screening tool. The proposed syntactic agreement framework offers a complementary evaluation strategy that aligns more closely with clinical interpretation and may help bridge the gap between artificial intelligence-based predictions and real-world ophthalmic decision-making.
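The syntactic agreement idea compares the set of lesion types predicted for an image against the annotated set at the image level. The five criteria are defined in the paper and not reproduced here; the snippet below is only a simple sketch of how any-lesion and per-lesion match rates of this kind could be computed.

```python
LESIONS = ("hemorrhage", "hard_exudate", "cotton_wool_spot", "microaneurysm")

def agreement_rates(pred_sets, true_sets):
    """pred_sets, true_sets: lists of sets of lesion labels, one pair per image."""
    n = len(true_sets)
    # Any-lesion match: prediction and annotation share at least one lesion,
    # or both agree that the image contains no lesion at all.
    any_match = sum(bool(p & t) or (not p and not t)
                    for p, t in zip(pred_sets, true_sets)) / n
    # Per-lesion match: presence/absence of each lesion type agrees for an image.
    per_lesion = {
        lesion: sum((lesion in p) == (lesion in t)
                    for p, t in zip(pred_sets, true_sets)) / n
        for lesion in LESIONS
    }
    return any_match, per_lesion
```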

Sirkin, N. J., Harper, T., Lamey, E., Wilhelm, J. N., Rought, G., Yerrapragada, A.

medRxiv preprint, Oct 3 2025
Background: Pediatric brain tumors are the leading cause of cancer death in children, and surgical resection is critical for survival and neurodevelopment. Intraoperative molecular imaging has advanced in adults but remains limited in pediatrics. This review examines the availability of intraoperative metabolomic imaging, AI integration, and multi-modal imaging in pediatric brain tumor surgery. Methods: A literature search was conducted in PubMed, Scopus, Web of Science, and Embase covering 2010-2025. Included studies addressed intraoperative molecular imaging in pediatrics, metabolomic neurosurgery approaches, or AI applications in pediatric brain tumor care. Results: Of 2,847 articles, 75 met the inclusion criteria. Pediatric intraoperative imaging mainly uses magnetic resonance imaging (21 studies), with limited metabolomic approaches (16 studies). Mass spectrometry shows promise for real-time tissue characterization, though mainly in adults. AI in pediatric neuroimaging improved tumor segmentation and outcome prediction in 15 studies. Key gaps: (1) limited pediatric metabolomic databases, (2) lack of real-time metabolomic platforms for developing brains, (3) limited integration of neurodevelopment into surgical planning, (4) no standard protocols for multi-modal integration. Discussion: This review highlights opportunities to advance intraoperative molecular imaging in pediatric neurosurgery via metabolomic-guided and AI-integrated approaches. Future research should develop pediatric-specific metabolomic platforms, age-specific biomarker libraries, and integrated decision-support systems that consider both oncological and neurodevelopmental outcomes.

Kikuchi, T., Walston, S. L., Takita, H., Mitsuyama, Y., Ito, R., Hashimoto, M., Nakaura, T., Hyakutake, H., Kawabe, S., Mori, H., Ueda, D.

medRxiv preprint, Oct 3 2025
Background: The integration of artificial intelligence (AI) in radiology has accelerated globally, with Japan's Pharmaceuticals and Medical Devices Agency (PMDA) approving numerous AI-based Software as a Medical Device (SaMD) products. However, the transparency and completeness of the clinical evidence available to healthcare providers remain unclear. Purpose: To systematically evaluate the availability and transparency of clinical evidence in package inserts of PMDA-approved AI-based radiology SaMD products, identifying gaps that may impact clinical implementation. Materials and Methods: We conducted a systematic review of all PMDA-approved SaMD products as of December 31, 2024. Products were included if they utilized AI technology and were classified for radiology applications. Data extraction focused on product characteristics, study designs, demographic information, and performance metrics. Results: Of 151 approved SaMD products, 40 utilized AI technology, and 20 were specifically designed for radiology applications. Critical gaps were identified in demographic reporting, with no product providing complete case demographic data. Performance metrics varied widely, with sensitivity ranging from 67.7% to 100% in standalone studies. Physician-assisted studies consistently demonstrated performance improvements but in all cases lacked results stratified by patient characteristics. Conclusion: Current package insert requirements provide insufficient transparency for evidence-based clinical implementation of AI-based radiology software. Enhanced regulatory frameworks and industry-led initiatives for comprehensive validation are essential for safe and effective AI deployment in Japanese healthcare.

Mohamed Ismail, N., Miller, M., Crossland, H., Sharif, J.-A., Chapple, J. P., Wahlestedt, C., Shkura, K., Volmar, C.-H., Slabaugh, G., Timmons, J. A.

medRxiv preprint, Oct 3 2025
INTRODUCTION: Alzheimer's disease (AD) has greater prevalence in women and lacks effective treatments. Integrating multimodal data using machine learning (ML) may help improve diagnostics and prognostics. METHODS: We produced a large and updatable blood transcriptomic dataset (n=1021, with n=317 replicates). Technical robustness was assessed using sampling-at-random, batch adjustment, and classification metrics. Transcriptomic and MRI features were concatenated to develop models for AD classification. RESULTS: Reprofiling of blood transcriptomics resolved previous technical artefacts (sampling-at-random AUC: Legacy=0.732 vs. New=0.567). AD-associated molecular pathways were influenced by cell counts and sex, including unchanged mitochondrial DNA-encoded RNA and altered B-cell receptor biology. Several genes linked to AD-associated neuroinflammatory pathways, including BLNK, MS4A1, and CARD16, showed significant enrichment. Concatenation of transcriptomic and MRI models modestly improved classification performance (AUC: MRI=0.922 vs. transcriptomics-MRI=0.930). DISCUSSION: We provide a new large-scale and technically robust blood AD transcriptomic dataset, highlighting details of molecular sexual dimorphism in AD and potential false positives in the literature, while providing a novel resource for future multimodal ML and genomic studies.
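Feature concatenation across modalities, as used for the transcriptomics-MRI model, amounts to stacking the two feature matrices column-wise before classification. A minimal sklearn sketch follows; the array names and the logistic-regression classifier are assumptions for illustration, not the study's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# X_rna: (n_samples, n_genes) blood transcriptomic features
# X_mri: (n_samples, n_mri) MRI-derived features; y: AD vs. control labels (all hypothetical)
def concat_and_classify(X_rna, X_mri, y, seed=0):
    X = np.hstack([X_rna, X_mri])  # simple column-wise multimodal concatenation
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    clf = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])  # classification AUC
```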

Zuo S, Li Y, Qi Y, Liu A

PubMed paper, Oct 2 2025
Graph-based methods using resting-state functional magnetic resonance imaging demonstrate strong capabilities in modeling brain networks. However, existing graph-based methods often overlook inter-graph relationships, limiting their ability to capture the intrinsic features shared across individuals. Additionally, their simplistic integration strategies may fail to take full advantage of multimodal information. To address these challenges, this paper proposes a Multilevel Correlation-aware and Modal-aware Graph Convolutional Network (MCM-GCN) for the reliable diagnosis of neurodevelopmental disorders. At the individual level, we design a correlation-driven feature generation module that incorporates a pooling layer with external graph attention to perceive inter-graph correlations, generating discriminative brain embeddings and identifying disease-related regions. At the population level, to deeply integrate multimodal and multi-atlas information, a multimodal-decoupled feature enhancement module learns unique and shared embeddings from brain graphs and phenotypic data and then fuses them adaptively with graph channel attention for reliable disease classification. Extensive experiments on two public datasets for Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD) demonstrate that MCM-GCN outperforms other competing methods, with an accuracy of 92.88% for ASD and 76.55% for ADHD. The MCM-GCN framework integrates individual-level and population-level analyses, offering a comprehensive perspective for neurodevelopmental disorder diagnosis, significantly improving diagnostic accuracy while identifying key indicators. These findings highlight the potential of the MCM-GCN for imaging-assisted diagnosis of neurodevelopmental diseases, advancing interpretable deep learning in medical imaging analysis.
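As a generic illustration of the adaptive multimodal fusion step described above, the sketch below weights per-modality embeddings with learned attention scores before summing them. It is a hypothetical minimal example, not the MCM-GCN graph-channel-attention module itself.

```python
import torch
import torch.nn as nn

class AdaptiveModalFusion(nn.Module):
    """Attention-weighted fusion of per-modality embeddings (e.g., imaging, phenotypic)."""
    def __init__(self, emb_dim: int):
        super().__init__()
        self.attn = nn.Linear(emb_dim, 1)  # scores each modality embedding

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (B, M, D) with one D-dim embedding per modality
        weights = torch.softmax(self.attn(embeddings), dim=1)  # (B, M, 1) modality weights
        return (weights * embeddings).sum(dim=1)               # (B, D) fused representation
```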