
Han Y, Yan H, Li H, Liu F, Li P, Yuan Y, Guo W

PubMed · Oct 2, 2025
This study aimed to examine network homogeneity (NH) alterations in drug-naive patients with panic disorder (PD) before and after treatment, and whether NH could serve as a potential biomarker. Fifty-eight patients and 85 healthy controls (HCs) underwent resting-state functional magnetic resonance imaging. Patients were rescanned following a 4-week course of paroxetine monotherapy. NH was computed to evaluate intra-network functional integration across the Yeo 7-network parcellation. Machine learning (ML) was employed to assess the diagnostic and prognostic potential of NH metrics. Transcriptome-neuroimaging association analyses were conducted to explore the molecular correlates of NH alterations. Compared with HCs, patients showed disrupted intra-network integration in the frontoparietal, default mode, sensorimotor, limbic, and ventral attention networks, with prominent NH alterations in the superior frontal gyrus (SFG), middle temporal gyrus (MTG), superior temporal gyrus (STG), somatosensory cortex, insula, and anterior cingulate cortex. Importantly, the SFG, MTG, and STG demonstrated cross-network abnormalities. After treatment, clinical improvement correlated with normalized NH in the SFG and with additional changes in the inferior occipital gyrus and calcarine sulcus within the visual network. ML demonstrated the utility of NH for PD classification and treatment outcome prediction. Transcriptome-neuroimaging analysis identified specific gene expression profiles related to NH alterations. NH reflects both pathological features and treatment-related changes in PD, providing a measure of network dysfunction and therapeutic response. Cross-network NH disruptions in hub regions and in visual processing may reflect core neuropharmacological mechanisms underlying PD. ML findings support the potential of NH as a neuroimaging biomarker for diagnosis and treatment monitoring in PD.
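
Operationally, NH for a voxel is the mean Pearson correlation between its time series and those of all other voxels assigned to the same network. Below is a minimal NumPy sketch of that standard definition; the toy data, shapes, and function name are illustrative, not taken from the paper.

```python
import numpy as np

def network_homogeneity(ts, network_labels, network_id):
    """Per-voxel network homogeneity (NH): the mean correlation between a
    voxel's time series and every other voxel in the same network.

    ts: (T, V) resting-state time series (T volumes, V voxels).
    network_labels: (V,) Yeo-network assignment for each voxel.
    """
    in_net = ts[:, network_labels == network_id]  # (T, n) voxels in this network
    r = np.corrcoef(in_net.T)                     # (n, n) correlation matrix
    np.fill_diagonal(r, np.nan)                   # drop self-correlations
    return np.nanmean(r, axis=1)                  # average over the network

# Toy example: 200 volumes, 500 voxels, labels 1-7 for the Yeo networks
rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 500))
labels = rng.integers(1, 8, size=500)
nh = network_homogeneity(ts, labels, network_id=7)
print(nh.shape, float(nh.mean()))
```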

Low D, Rutherford S

PubMed · Oct 2, 2025
Objective: To develop a computed tomography (CT) radiomics-based machine-learning algorithm for prediction of functional recovery in paraplegic dogs with acute intervertebral disc extrusion (IVDE). Study design: Multivariable prediction model development. Animals: Paraplegic dogs with acute IVDE: 128 deep-pain positive and 86 deep-pain negative (DPN). Methods: Radiomics features from noncontrast CT were combined with deep-pain perception in an extreme gradient boosting algorithm using an 80:20 train-test split. Model performance was assessed on the independent test set (Test_full) and on its subset of DPN dogs (Test_DPN). Deep-pain perception alone served as the control. Results: Recovery of ambulation was recorded in 165/214 dogs (77.1%) after decompressive surgery. The model had an area under the receiver operating characteristic curve (AUC) of .9118 (95% CI: .8366-.9872), accuracy of 86.1% (95% CI: 74.4%-95.4%), sensitivity of 82.4% (95% CI: 68.6%-93.9%), and specificity of 100.0% (95% CI: 100.0%-100.0%) on Test_full, and an AUC of .7692 (95% CI: .6250-.9000), accuracy of 72.7% (95% CI: 50.0%-90.9%), sensitivity of 53.8% (95% CI: 25.0%-80.0%), and specificity of 100.0% (95% CI: 100.0%-100.0%) on Test_DPN. Deep-pain perception alone had an AUC of .8088 (95% CI: .7273-.8871), accuracy of 69.8% (95% CI: 55.8%-83.7%), sensitivity of 61.8% (95% CI: 45.5%-77.4%), and specificity of 100.0% (95% CI: 100.0%-100.0%), which differed from that of the model (p = .02). Conclusion: Noncontrast CT-based radiomics provided prognostic information in dogs with severe spinal cord injury secondary to acute IVDE, and the model outperformed deep-pain perception alone in identifying dogs that recovered ambulation following decompressive surgery. Clinical significance: Radiomics features from noncontrast CT, when integrated into a multimodal machine-learning algorithm, may be useful as an assistive tool for surgical decision making.
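
The modeling setup described here (radiomics features plus deep-pain perception fed to a gradient-boosted classifier with an 80:20 split) is straightforward to sketch with xgboost and scikit-learn. The feature matrix below is random placeholder data and the hyperparameters are illustrative, not the study's.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
n_dogs, n_radiomics = 214, 100                      # 214 dogs, as in the study
X_rad = rng.standard_normal((n_dogs, n_radiomics))  # placeholder radiomics features
deep_pain = rng.integers(0, 2, size=(n_dogs, 1))    # 1 = deep-pain positive
X = np.hstack([X_rad, deep_pain])                   # radiomics + deep-pain perception
y = rng.integers(0, 2, size=n_dogs)                 # 1 = recovered ambulation

# 80:20 train-test split, as in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=3,
                      learning_rate=0.05, eval_metric="logloss")
model.fit(X_tr, y_tr)

p = model.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, p))
print("Accuracy:", accuracy_score(y_te, p > 0.5))
```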

Jong Bum Won, Wesley De Neve, Joris Vankerschaver, Utku Ozbulak

arXiv preprint · Oct 2, 2025
Deep neural networks (DNNs) have demonstrated remarkable success in medical imaging, yet their real-world deployment remains challenging due to spurious correlations, whereby models learn non-clinical features instead of meaningful medical patterns. Existing medical imaging datasets are not designed to systematically study this issue, largely due to restrictive licensing and limited supplementary patient data. To address this gap, we introduce SpurBreast, a curated breast MRI dataset that intentionally incorporates spurious correlations in order to evaluate their impact on model performance. Analyzing over 100 features spanning patient, device, and imaging-protocol characteristics, we identify two dominant spurious signals: magnetic field strength (a global feature influencing the entire image) and image orientation (a local feature affecting spatial alignment). Through controlled dataset splits, we demonstrate that DNNs can exploit these non-clinical signals, achieving high validation accuracy while failing to generalize to unbiased test data. Alongside these two datasets containing spurious correlations, we also provide benchmark datasets without spurious correlations, allowing researchers to systematically investigate clinically relevant and irrelevant features, uncertainty estimation, adversarial robustness, and generalization strategies. Models and datasets are available at https://github.com/utkuozbulak/spurbreast.
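
The controlled splits reduce to a simple rule: align the spurious feature with the label in the training data and anti-align it in the test data, so a model that shortcuts on the spurious signal collapses at test time. A toy pandas sketch using field strength (the column names are hypothetical, not the dataset's schema):

```python
import pandas as pd

# Hypothetical metadata table; labels and field strengths are illustrative.
meta = pd.DataFrame({
    "patient_id": range(8),
    "label":      [0, 0, 0, 0, 1, 1, 1, 1],                  # 0 = benign, 1 = malignant
    "field_T":    [1.5, 1.5, 3.0, 3.0, 1.5, 1.5, 3.0, 3.0],  # scanner field strength
})

# Biased split: field strength is perfectly aligned with the label...
train = meta[((meta.label == 0) & (meta.field_T == 1.5)) |
             ((meta.label == 1) & (meta.field_T == 3.0))]
# ...while the test set reverses the alignment, exposing shortcut learning.
test = meta[((meta.label == 0) & (meta.field_T == 3.0)) |
            ((meta.label == 1) & (meta.field_T == 1.5))]
print(train, test, sep="\n\n")
```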

Roman Jacome, Romario Gualdrón-Hurtado, Leon Suarez, Henry Arguello

arXiv preprint · Oct 2, 2025
Imaging inverse problems aim to recover high-dimensional signals from undersampled, noisy measurements, a fundamentally ill-posed task with infinitely many solutions in the null-space of the sensing operator. To resolve this ambiguity, prior information is typically incorporated through handcrafted regularizers or learned models that constrain the solution space. However, these priors typically ignore the task-specific structure of that null-space. In this work, we propose Non-Linear Projections of the Null-Space (NPN), a novel class of regularizers that, instead of enforcing structural constraints in the image domain, promote solutions that lie in a low-dimensional projection of the sensing matrix's null-space, learned with a neural network. Our approach has two key advantages. (1) Interpretability: by focusing on the structure of the null-space, we design sensing-matrix-specific priors that capture the signal components to which the sensing process is fundamentally blind. (2) Flexibility: NPN is adaptable to various inverse problems, compatible with existing reconstruction frameworks, and complementary to conventional image-domain priors. We provide theoretical guarantees on convergence and reconstruction accuracy when NPN is used within plug-and-play methods. Empirical results across diverse sensing matrices demonstrate that NPN priors consistently enhance reconstruction fidelity in imaging inverse problems such as compressive sensing, deblurring, super-resolution, computed tomography, and magnetic resonance imaging, with plug-and-play methods, unrolling networks, deep image prior, and diffusion models.
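
To make the null-space idea concrete, the sketch below builds the projector P = I - A⁺A onto null(A) and, inside a plain gradient-based reconstruction loop, penalizes the distance of the iterate's null-space component from the output of a small bottleneck network. This is an untrained, illustrative rendering of the general idea, not the authors' NPN implementation or training procedure.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
m, n = 32, 128                                      # undersampled: m < n
A = torch.randn(m, n) / m ** 0.5                    # sensing matrix
P_null = torch.eye(n) - torch.linalg.pinv(A) @ A    # projector onto null(A)

# Bottleneck network acting on the null-space component; in NPN this prior
# would be learned, here it is left untrained and purely illustrative.
phi = nn.Sequential(nn.Linear(n, 16), nn.ReLU(), nn.Linear(16, n))

x_true = torch.randn(n)
y = A @ x_true                                      # noiseless measurements

x = torch.zeros(n, requires_grad=True)
opt = torch.optim.Adam([x], lr=1e-2)
lam = 0.1
for _ in range(500):
    opt.zero_grad()
    data_fit = 0.5 * torch.sum((A @ x - y) ** 2)         # measurement fidelity
    null_comp = P_null @ x                               # part of x invisible to A
    reg = torch.sum((null_comp - phi(null_comp)) ** 2)   # keep it near the prior's range
    (data_fit + lam * reg).backward()
    opt.step()

print("measurement residual:", torch.norm(A @ x.detach() - y).item())
```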

Nanaka Hosokawa, Ryo Takahashi, Tomoya Kitano, Yukihiro Iida, Chisako Muramatsu, Tatsuro Hayashi, Yuta Seino, Xiangrong Zhou, Takeshi Hara, Akitoshi Katsumata, Hiroshi Fujita

arXiv preprint · Oct 2, 2025
In this study, we utilized the multimodal capabilities of OpenAI GPT-4o to automatically generate jaw cyst findings on dental panoramic radiographs. To improve accuracy, we constructed a Self-correction Loop with Structured Output (SLSO) framework and verified its effectiveness. A 10-step process was implemented for 22 cases of jaw cysts, comprising image input and analysis, structured data generation, tooth number extraction and consistency checking, iterative regeneration when inconsistencies were detected, and finding generation with subsequent restructuring and consistency verification. A comparative experiment against the conventional Chain-of-Thought (CoT) method was conducted across seven evaluation items: transparency, internal structure, borders, root resorption, tooth movement, relationships with other structures, and tooth number. The results showed that the proposed SLSO framework improved output accuracy for many items, with improvement rates of 66.9%, 33.3%, and 28.6% for tooth number, tooth movement, and root resorption, respectively. In the successful cases, a consistently structured output was achieved after up to five regenerations. Although statistical significance was not reached because of the small dataset size, the SLSO framework overall enforced negative-finding descriptions, suppressed hallucinations, and improved tooth-number identification accuracy. However, accurate identification of extensive lesions spanning multiple teeth remains limited, and further refinement is required to enhance overall performance and move toward a practical finding-generation system.
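
A skeletal version of the self-correction loop with structured output might look like the following. `call_llm`, the JSON schema, and the consistency check are placeholders; the paper's exact prompts and interface are not reproduced here.

```python
import json

MAX_RETRIES = 5  # the study reports convergence within five regenerations


def generate_structured_finding(call_llm, image, teeth_on_chart):
    """Regenerate structured output until tooth numbers are consistent.

    `call_llm` stands in for a multimodal API call (e.g., to GPT-4o) that
    returns a JSON string; its signature is assumed for illustration.
    """
    feedback = ""
    for _ in range(MAX_RETRIES):
        raw = call_llm(image, "Report jaw cyst findings as JSON." + feedback)
        try:
            finding = json.loads(raw)                 # enforce structured output
        except json.JSONDecodeError:
            feedback = " Previous output was not valid JSON; try again."
            continue
        bad = [t for t in finding.get("tooth_numbers", [])
               if t not in teeth_on_chart]            # consistency check
        if not bad:
            return finding                            # consistent -> accept
        feedback = f" Tooth numbers {bad} are inconsistent; regenerate."
    return None                                       # give up after MAX_RETRIES
```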

Md Talha Mohsin, Ismail Abdulrashid

arXiv preprint · Oct 2, 2025
Healthcare generates diverse streams of data, including electronic health records (EHR), medical imaging, genetics, and ongoing monitoring from wearable devices. Traditional diagnostic models frequently analyze these sources in isolation, which constrains their capacity to identify the cross-modal correlations essential for early disease diagnosis. Our research presents a multimodal foundation model that consolidates diverse patient data through an attention-based transformer framework. First, dedicated encoders map each modality into a shared latent space; the resulting representations are then combined using multi-head attention and residual normalization. The architecture is designed for multi-task pretraining, which makes it straightforward to adapt to new diseases and datasets with little additional work. We present an experimental strategy that uses benchmark datasets in oncology, cardiology, and neurology to test early-detection tasks. Beyond technical performance, the framework includes data governance and model management tools to improve transparency, reliability, and clinical interpretability. The proposed method works toward a single foundation model for precision diagnostics, which could improve predictive accuracy and support clinical decision making.
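
A minimal PyTorch rendering of the fusion pattern described here (per-modality encoders into a shared latent space, then multi-head attention with a residual connection and normalization) could look as follows; the dimensions and the mean-pooled readout are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    """Illustrative fusion block: encode each modality into a shared space,
    then combine the modality tokens with multi-head attention plus a
    residual connection and LayerNorm."""

    def __init__(self, dims, d_model=256, n_heads=8):
        super().__init__()
        self.encoders = nn.ModuleList(nn.Linear(d, d_model) for d in dims)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, modalities):                      # list of (B, d_i) tensors
        tokens = torch.stack(
            [enc(x) for enc, x in zip(self.encoders, modalities)], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)    # cross-modal attention
        tokens = self.norm(tokens + fused)              # residual + normalization
        return tokens.mean(dim=1)                       # patient-level embedding

# Toy usage: EHR (64-d), imaging (128-d), and genetics (32-d) features
model = MultimodalFusion([64, 128, 32])
out = model([torch.randn(4, 64), torch.randn(4, 128), torch.randn(4, 32)])
print(out.shape)  # torch.Size([4, 256])
```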

Arman Behnam

arXiv preprint · Oct 2, 2025
Accurate detection and segmentation of brain tumors from magnetic resonance imaging (MRI) are essential for diagnosis, treatment planning, and clinical monitoring. While convolutional architectures such as U-Net have long been the backbone of medical image segmentation, their limited capacity to capture long-range dependencies constrains performance on complex tumor structures. Recent advances in diffusion models have demonstrated strong potential for generating high-fidelity medical images and refining segmentation boundaries. In this work, we propose VGDM, a Vision-Guided Diffusion Model for brain tumor detection and segmentation: a transformer-driven diffusion framework. By embedding a vision transformer at the core of the diffusion process, the model leverages global contextual reasoning together with iterative denoising to enhance both volumetric accuracy and boundary precision. The transformer backbone enables more effective modeling of spatial relationships across entire MRI volumes, while diffusion refinement mitigates voxel-level errors and recovers fine-grained tumor details. This hybrid design provides a pathway toward improved robustness and scalability in neuro-oncology, moving beyond conventional U-Net baselines. Experimental validation on MRI brain tumor datasets demonstrates consistent gains in Dice similarity and Hausdorff distance, underscoring the potential of transformer-guided diffusion models to advance the state of the art in tumor segmentation.
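
The overall shape of diffusion-based mask generation is a conditional denoiser run through a reverse process. In the toy sketch below, a tiny CNN stands in for the vision transformer backbone to keep the example short; the schedule, sizes, and DDPM-style update are illustrative, not VGDM's actual configuration.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Toy stand-in for the transformer denoiser: predicts the noise in a
    noisy mask given the conditioning image (both single-channel)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, noisy_mask, image):
        return self.net(torch.cat([noisy_mask, image], dim=1))

T = 50
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1 - betas, dim=0)

@torch.no_grad()
def sample_mask(model, image):
    """DDPM-style reverse process: start from noise and iteratively denoise
    toward a segmentation mask conditioned on the MRI slice."""
    x = torch.randn(image.shape[0], 1, *image.shape[2:])
    for t in reversed(range(T)):
        eps = model(x, image)                               # predicted noise
        x = (x - betas[t] / (1 - alphas_bar[t]).sqrt() * eps) / (1 - betas[t]).sqrt()
        if t > 0:                                           # add noise except at t = 0
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return torch.sigmoid(x)                                 # mask probabilities

mask = sample_mask(TinyDenoiser(), torch.randn(1, 1, 64, 64))
print(mask.shape)  # torch.Size([1, 1, 64, 64])
```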

Jinwei Zhang, Lianrui Zuo, Yihao Liu, Hang Zhang, Samuel W. Remedios, Bennett A. Landman, Peter A. Calabresi, Shiv Saidha, Scott D. Newsome, Dzung L. Pham, Jerry L. Prince, Ellen M. Mowry, Aaron Carass

arXiv preprint · Oct 2, 2025
In multiple sclerosis, lesions interfere with automated magnetic resonance imaging analyses such as brain parcellation and deformable registration, while lesion segmentation models are hindered by the limited availability of annotated training data. To address both issues, we propose MSRepaint, a unified diffusion-based generative model for bidirectional lesion filling and synthesis that restores anatomical continuity for downstream analyses and augments segmentation through realistic data generation. MSRepaint conditions on spatial lesion masks for voxel-level control, incorporates contrast dropout to handle missing inputs, integrates a repainting mechanism to preserve surrounding anatomy during lesion filling and synthesis, and employs a multi-view DDIM inversion and fusion pipeline for 3D consistency with fast inference. Extensive evaluations demonstrate the effectiveness of MSRepaint across multiple tasks. For lesion filling, we evaluate both the accuracy within the filled regions and the impact on downstream tasks including brain parcellation and deformable registration. MSRepaint outperforms the traditional lesion filling methods FSL and NiftySeg, and achieves accuracy on par with FastSurfer-LIT, a recent diffusion model-based inpainting method, while offering over 20 times faster inference. For lesion synthesis, state-of-the-art MS lesion segmentation models trained on MSRepaint-synthesized data outperform those trained on CarveMix-synthesized data or real ISBI challenge training data across multiple benchmarks, including the MICCAI 2016 and UMCL datasets. Additionally, we demonstrate that MSRepaint's unified bidirectional filling and synthesis capability, with full spatial control over lesion appearance, enables high-fidelity simulation of lesion evolution in longitudinal MS progression.
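
The repainting mechanism mentioned here is commonly implemented RePaint-style: at every reverse-diffusion step, the region outside the lesion mask is overwritten with a forward-noised copy of the original image, so surrounding anatomy is preserved while content is generated only inside the mask. A hedged sketch of one such step (the function names and calling convention are assumed, not MSRepaint's API):

```python
import torch

def repaint_step(x_t, x0_known, mask, t, alphas_bar, denoise_step):
    """One RePaint-style compositing step.

    x_t:          current noisy image at step t
    x0_known:     original image whose anatomy must be preserved
    mask:         1 inside the lesion region to (re)generate, 0 elsewhere
    alphas_bar:   cumulative noise-schedule products
    denoise_step: placeholder for the model's reverse-diffusion update
    """
    a_bar = alphas_bar[t]
    noise = torch.randn_like(x0_known)
    # Forward-noise the known image to the current noise level...
    known_t = a_bar.sqrt() * x0_known + (1 - a_bar).sqrt() * noise
    # ...then composite: generated content inside the mask, anatomy outside.
    x_t = mask * x_t + (1 - mask) * known_t
    return denoise_step(x_t, t)  # continue the reverse process
```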

Jhonatan Contreras, Thomas Bocklitz

arXiv preprint · Oct 2, 2025
Deep learning has achieved remarkable success in medical image analysis; however, its adoption in clinical practice is limited by a lack of interpretability. These models often make correct predictions without explaining their reasoning. They may also rely on image regions unrelated to the disease, or on visual cues, such as annotations, that are not present in real-world conditions. This can reduce trust and increase the risk of misleading diagnoses. We introduce the Guided Focus via Segment-Wise Relevance Network (GFSR-Net), an approach designed to improve interpretability and reliability in medical imaging. GFSR-Net uses a small number of human annotations to intuitively approximate where a person would focus within an image, without requiring precise boundaries or exhaustive markings, making the process fast and practical. During training, the model learns to align its focus with these areas, progressively emphasizing features that carry diagnostic meaning. This guidance works across different types of natural and medical images, including chest X-rays, retinal scans, and dermatological images. Our experiments demonstrate that GFSR-Net achieves comparable or superior accuracy while producing saliency maps that better reflect human expectations. This reduces reliance on irrelevant patterns and increases confidence in automated diagnostic tools.
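
One plausible way to implement "learning to align its focus" is to add a saliency-agreement term to the task loss. The sketch below is an assumption about the general mechanism, not GFSR-Net's published objective; the KL-based alignment and the weighting are illustrative.

```python
import torch
import torch.nn.functional as F

def guided_focus_loss(logits, labels, saliency, human_map, lam=0.5):
    """Classification loss plus a term aligning the model's saliency map
    with a coarse human focus map (illustrative, not GFSR-Net's exact loss).
    """
    task = F.cross_entropy(logits, labels)
    # Normalize both maps into distributions and compare with KL divergence.
    p = F.log_softmax(saliency.flatten(1), dim=1)   # model focus (log-probs)
    q = F.softmax(human_map.flatten(1), dim=1)      # human focus (probs)
    align = F.kl_div(p, q, reduction="batchmean")
    return task + lam * align

# Toy usage with random tensors standing in for a batch
logits = torch.randn(4, 2)
labels = torch.randint(0, 2, (4,))
saliency = torch.randn(4, 1, 32, 32)   # e.g., from Grad-CAM or attention maps
human_map = torch.rand(4, 1, 32, 32)   # coarse annotated focus regions
print(guided_focus_loss(logits, labels, saliency, human_map))
```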

Jiyao Liu, Jinjie Wei, Wanying Qu, Chenglong Ma, Junzhi Ning, Yunheng Li, Ying Chen, Xinzhe Luo, Pengcheng Chen, Xin Gao, Ming Hu, Huihui Xu, Xin Wang, Shujian Gao, Dingkang Yang, Zhongying Deng, Jin Ye, Lihao Liu, Junjun He, Ningsheng Xu

arXiv preprint · Oct 2, 2025
Medical Image Quality Assessment (IQA) serves as the first-mile safety gate for clinical AI, yet existing approaches remain constrained by scalar, score-based metrics and fail to reflect the descriptive, human-like reasoning process central to expert evaluation. To address this gap, we introduce MedQ-Bench, a comprehensive benchmark that establishes a perception-reasoning paradigm for language-based evaluation of medical image quality with multimodal large language models (MLLMs). MedQ-Bench defines two complementary tasks: (1) MedQ-Perception, which probes low-level perceptual capability via human-curated questions on fundamental visual attributes; and (2) MedQ-Reasoning, encompassing both no-reference and comparison reasoning tasks, aligning model evaluation with human-like reasoning about image quality. The benchmark spans five imaging modalities and over forty quality attributes, totaling 2,600 perceptual queries and 708 reasoning assessments, covering diverse image sources including authentic clinical acquisitions, images with simulated degradations via physics-based reconstructions, and AI-generated images. To evaluate reasoning ability, we propose a multi-dimensional judging protocol that assesses model outputs along four complementary axes. We further conduct rigorous human-AI alignment validation by comparing LLM-based judgments with those of radiologists. Our evaluation of 14 state-of-the-art MLLMs demonstrates that models exhibit preliminary but unstable perceptual and reasoning skills, with insufficient accuracy for reliable clinical use. These findings highlight the need for targeted optimization of MLLMs in medical IQA. We hope that MedQ-Bench will catalyze further exploration and unlock the untapped potential of MLLMs for medical image quality evaluation.
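
To illustrate the shape of such a judging protocol, here is a hedged sketch of a four-axis LLM-judge scorer. The axis names and the `call_judge` interface are hypothetical; the paper's actual rubric and prompts are not reproduced here.

```python
import json

# Axis names are illustrative; MedQ-Bench defines four complementary axes,
# but their exact labels are assumed here.
AXES = ["completeness", "preciseness", "consistency", "quality_accuracy"]

def judge_reasoning(call_judge, model_answer, reference):
    """Score a model's quality-reasoning output along each axis.

    `call_judge` stands in for any chat-completion API that returns a JSON
    string mapping axis -> score in [0, 1].
    """
    prompt = (
        "Rate the candidate's image-quality reasoning against the reference "
        f"on these axes: {', '.join(AXES)}. Reply as JSON.\n"
        f"Reference: {reference}\nCandidate: {model_answer}"
    )
    scores = json.loads(call_judge(prompt))
    return {axis: float(scores.get(axis, 0.0)) for axis in AXES}
```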