
A multi-modal model integrating MRI habitat and clinicopathology to predict platinum sensitivity in patients with high-grade serous ovarian cancer: a diagnostic study.

Bi Q, Ai C, Meng Q, Wang Q, Li H, Zhou A, Shi W, Lei Y, Wu Y, Song Y, Xiao Z, Li H, Qiang J

PubMed paper · May 20, 2025
Platinum resistance in high-grade serous ovarian cancer (HGSOC) currently cannot be recognized by specific molecular biomarkers. We aimed to compare the predictive capacity of various models integrating MRI habitat, whole slide images (WSIs), and clinical parameters to predict platinum sensitivity in HGSOC patients. A retrospective study involving 998 eligible patients from four hospitals was conducted. MRI habitats were clustered using the K-means algorithm on multi-parametric MRI. Following feature extraction and selection, a Habitat model was developed. A Vision Transformer (ViT) and multi-instance learning were trained to derive patch-level and WSI-level predictions on hematoxylin and eosin (H&E)-stained WSIs, respectively, forming a Pathology model. Logistic regression (LR) was used to create a Clinic model. A multi-modal model integrating Clinic, Habitat, and Pathology (CHP) was constructed using Multi-Head Attention (MHA) and compared with the unimodal models and Ensemble multi-modal models. The area under the curve (AUC) and integrated discrimination improvement (IDI) value were used to assess model performance and gains. In the internal validation cohort and the external test cohort, the Habitat model showed the highest AUCs (0.722 and 0.685) compared to the Clinic model (0.683 and 0.681) and the Pathology model (0.533 and 0.565), respectively. The AUCs of the MHA-based multi-modal model integrating CHP (0.789 and 0.807) were higher than those of any unimodal model or Ensemble multi-modal model, with positive IDI values. MRI-based habitat imaging showed potential to predict platinum sensitivity in HGSOC patients. Multi-modal integration of CHP based on MHA helped improve prediction performance.
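The habitat step above rests on standard voxel-wise clustering. A minimal sketch, assuming co-registered multi-parametric sequences stacked into per-voxel feature vectors; the feature layout and k = 3 are illustrative, not the study's settings:

```python
# Minimal MRI habitat clustering sketch. Assumes voxel_features is an
# array of shape (n_voxels, n_sequences), e.g. T2/DWI/ADC intensities
# for each tumor voxel after co-registration (an assumption here).
import numpy as np
from sklearn.cluster import KMeans

def cluster_habitats(voxel_features: np.ndarray, k: int = 3, seed: int = 0) -> np.ndarray:
    """Assign each tumor voxel to one of k intensity 'habitats'."""
    # Standardize each MRI sequence so no single contrast dominates.
    mu = voxel_features.mean(axis=0)
    sigma = voxel_features.std(axis=0) + 1e-8
    z = (voxel_features - mu) / sigma
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(z)
    return labels  # shape (n_voxels,), values in {0, ..., k-1}
```

Radiomic features would then be extracted per habitat to build the Habitat model described above.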

Dynadiff: Single-stage Decoding of Images from Continuously Evolving fMRI

Marlène Careil, Yohann Benchetrit, Jean-Rémi King

arXiv preprint · May 20, 2025
Brain-to-image decoding has recently been propelled by progress in generative AI models and the availability of large, ultra-high-field functional Magnetic Resonance Imaging (fMRI) datasets. However, current approaches depend on complicated multi-stage pipelines and preprocessing steps that typically collapse the temporal dimension of brain recordings, thereby limiting time-resolved brain decoders. Here, we introduce Dynadiff (Dynamic Neural Activity Diffusion for Image Reconstruction), a new single-stage diffusion model designed for reconstructing images from dynamically evolving fMRI recordings. Our approach offers three main contributions. First, Dynadiff simplifies training as compared to existing approaches. Second, our model outperforms state-of-the-art models on time-resolved fMRI signals, especially on high-level semantic image reconstruction metrics, while remaining competitive on preprocessed fMRI data that collapse time. Third, this approach allows a precise characterization of the evolution of image representations in brain activity. Overall, this work lays the foundation for time-resolved brain-to-image decoding.
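The distinction between collapsed and time-resolved inputs is easy to picture. A toy sketch, with all shapes assumed for illustration:

```python
# Illustrative contrast between time-collapsed and time-resolved fMRI
# inputs. Shapes are assumptions: (n_timepoints, n_voxels) per trial.
import numpy as np

trial = np.random.randn(8, 15000)   # 8 TRs of a hypothetical stimulus trial

collapsed = trial.mean(axis=0)      # (n_voxels,) -- temporal dimension lost,
                                    # as in typical multi-stage pipelines
time_resolved = trial               # (8, n_voxels) -- what a single-stage,
                                    # time-resolved decoder can consume
```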

TransMedSeg: A Transferable Semantic Framework for Semi-Supervised Medical Image Segmentation

Mengzhu Wang, Jiao Li, Shanshan Wang, Long Lan, Huibin Tan, Liang Yang, Guoli Yang

arXiv preprint · May 20, 2025
Semi-supervised medical image segmentation (SSMIS) has achieved significant progress by effectively utilizing limited labeled data. While current semi-supervised learning (SSL) methods for medical images predominantly rely on consistency regularization and pseudo-labeling, they often overlook transferable semantic relationships across different clinical domains and imaging modalities. To address this, we propose TransMedSeg, a novel transferable semantic framework for semi-supervised medical image segmentation. Our approach introduces a Transferable Semantic Augmentation (TSA) module, which implicitly enhances feature representations by aligning domain-invariant semantics through cross-domain distribution matching and intra-domain structural preservation. Specifically, TransMedSeg constructs a unified feature space where teacher network features are adaptively augmented towards student network semantics via a lightweight memory module, enabling implicit semantic transformation without explicit data generation. Interestingly, this augmentation is implicitly realized through an expected transferable cross-entropy loss computed over the augmented teacher distribution. An upper bound of the expected loss is theoretically derived and minimized during training, incurring negligible computational overhead. Extensive experiments on medical image datasets demonstrate that TransMedSeg outperforms existing semi-supervised methods, establishing a new direction for transferable representation learning in medical image analysis.
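TransMedSeg's TSA module and transferable cross-entropy bound are specific to the paper, but the teacher-student scaffolding it builds on is the familiar mean-teacher pattern. A generic sketch of that scaffolding, not the paper's implementation:

```python
# Generic mean-teacher scaffolding for semi-supervised segmentation.
# The TSA module and the expected transferable loss are NOT reproduced
# here -- this only shows the teacher/student consistency backbone.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, m: float = 0.99):
    """Teacher weights track an exponential moving average of the student."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(m).add_(s, alpha=1.0 - m)

def consistency_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement on unlabeled scans; the teacher acts as target."""
    return F.mse_loss(student_logits.softmax(1), teacher_logits.softmax(1).detach())
```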

RADAR: Enhancing Radiology Report Generation with Supplementary Knowledge Injection

Wenjun Hou, Yi Cheng, Kaishuai Xu, Heng Li, Yan Hu, Wenjie Li, Jiang Liu

arXiv preprint · May 20, 2025
Large language models (LLMs) have demonstrated remarkable capabilities in various domains, including radiology report generation. Previous approaches have attempted to utilize multimodal LLMs for this task, enhancing their performance through the integration of domain-specific knowledge retrieval. However, these approaches often overlook the knowledge already embedded within the LLMs, leading to redundant information integration and inefficient utilization of learned representations. To address this limitation, we propose RADAR, a framework for enhancing radiology report generation with supplementary knowledge injection. RADAR improves report generation by systematically leveraging both the internal knowledge of an LLM and externally retrieved information. Specifically, it first extracts the model's acquired knowledge that aligns with expert image-based classification outputs. It then retrieves relevant supplementary knowledge to further enrich this information. Finally, by aggregating both sources, RADAR generates more accurate and informative radiology reports. Extensive experiments on MIMIC-CXR, CheXpert-Plus, and IU X-ray demonstrate that our model outperforms state-of-the-art LLMs in both language quality and clinical accuracy.
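The aggregation idea can be sketched at the prompt level. Everything below — the function name and the prompt layout — is hypothetical and not RADAR's actual interface:

```python
# Hypothetical sketch of aggregating internal and retrieved knowledge
# before report generation; RADAR's real pipeline is not reproduced.
def build_report_prompt(findings: list[str], internal_notes: str, retrieved: list[str]) -> str:
    supplementary = "\n".join(f"- {fact}" for fact in retrieved)
    return (
        "Image findings (classifier): " + ", ".join(findings) + "\n"
        "Model's own relevant knowledge:\n" + internal_notes + "\n"
        "Supplementary retrieved knowledge:\n" + supplementary + "\n"
        "Write a radiology report consistent with all of the above."
    )
```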

Federated learning in low-resource settings: A chest imaging study in Africa -- Challenges and lessons learned

Jorge Fabila, Lidia Garrucho, Víctor M. Campello, Carlos Martín-Isla, Karim Lekadir

arXiv preprint · May 20, 2025
This study explores the use of Federated Learning (FL) for tuberculosis (TB) diagnosis using chest X-rays in low-resource settings across Africa. FL allows hospitals to collaboratively train AI models without sharing raw patient data, addressing privacy concerns and data scarcity that hinder traditional centralized models. The research involved hospitals and research centers in eight African countries. Most sites used local datasets, while Ghana and The Gambia used public ones. The study compared locally trained models with a federated model built across all institutions to evaluate FL's real-world feasibility. Despite its promise, implementing FL in sub-Saharan Africa faces challenges such as poor infrastructure, unreliable internet, limited digital literacy, and weak AI regulations. Some institutions were also reluctant to share model updates due to data control concerns. In conclusion, FL shows strong potential for enabling AI-driven healthcare in underserved regions, but broader adoption will require improvements in infrastructure, education, and regulatory support.
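The abstract does not name its aggregation rule, but the canonical choice in FL is FedAvg: each site trains locally and ships only model weights, never raw X-rays. A minimal illustrative sketch:

```python
# Minimal FedAvg sketch (the canonical FL aggregation rule; treat this
# as illustrative since the study does not specify its algorithm).
import torch

def federated_average(site_states: list[dict], site_sizes: list[int]) -> dict:
    """Weight each hospital's model update by its local dataset size."""
    total = sum(site_sizes)
    avg = {}
    for key in site_states[0]:
        avg[key] = sum(
            state[key] * (n / total) for state, n in zip(site_states, site_sizes)
        )
    return avg  # load into the global model, then redistribute to sites
```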

End-to-end Cortical Surface Reconstruction from Clinical Magnetic Resonance Images

Jesper Duemose Nielsen, Karthik Gopinath, Andrew Hoopes, Adrian Dalca, Colin Magdamo, Steven Arnold, Sudeshna Das, Axel Thielscher, Juan Eugenio Iglesias, Oula Puonti

arXiv preprint · May 20, 2025
Surface-based cortical analysis is valuable for a variety of neuroimaging tasks, such as spatial normalization, parcellation, and gray matter (GM) thickness estimation. However, most tools for estimating cortical surfaces work exclusively on scans with at least 1 mm isotropic resolution and are tuned to a specific magnetic resonance (MR) contrast, often T1-weighted (T1w). This precludes application to most clinical MR scans, which are very heterogeneous in terms of contrast and resolution. Here, we use synthetic domain-randomized data to train the first neural network for explicit estimation of cortical surfaces from scans of any contrast and resolution, without retraining. Our method deforms a template mesh to the white matter (WM) surface, which guarantees topological correctness. This mesh is further deformed to estimate the GM surface. We compare our method to recon-all-clinical (RAC), an implicit surface reconstruction method that is currently the only other tool capable of processing heterogeneous clinical MR scans, on ADNI and a large clinical dataset (n=1,332). We show an approximately 50% reduction in cortical thickness error (from 0.50 to 0.24 mm) with respect to RAC and better recovery of the aging-related cortical thinning patterns detected by FreeSurfer on high-resolution T1w scans. Our method enables fast and accurate surface reconstruction of clinical scans, allowing studies (1) with sample sizes far beyond what is feasible in a research setting, and (2) of clinical populations that are difficult to enroll in research studies. The code is publicly available at https://github.com/simnibs/brainnet.
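With the vertex correspondence that template-mesh deformation guarantees, the reported cortical thickness reduces to a per-vertex distance between the paired surfaces. A small sketch under that assumption:

```python
# Illustrative cortical-thickness estimate, assuming the WM and GM (pial)
# meshes share vertex correspondence -- which template-deformation methods
# like the one described guarantee by construction.
import numpy as np

def cortical_thickness(wm_vertices: np.ndarray, gm_vertices: np.ndarray) -> np.ndarray:
    """Per-vertex thickness = distance between corresponding WM/GM vertices.

    Both arrays have shape (n_vertices, 3), in mm.
    """
    return np.linalg.norm(gm_vertices - wm_vertices, axis=1)  # (n_vertices,)
```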

"DCSLK: Combined Large Kernel Shared Convolutional Model with Dynamic Channel Sampling".

Li Z, Luo S, Li H, Li Y

PubMed paper · May 20, 2025
This study centers on the competition between Convolutional Neural Networks (CNNs) with large convolutional kernels and Vision Transformers in computer vision, examining the parameter counts and computational complexity that large convolutional kernels incur. Even though kernel sizes have been extended up to 51×51, performance gains have plateaued, and striped convolution incurs a performance degradation. Inspired by the hierarchical visual processing mechanism in humans, this work introduces a shared-parameter mechanism for large convolutional kernels, combining the expanded receptive field of large kernels with the fine-grained feature extraction of small kernels. To address the surging parameter count, a carefully designed parameter-sharing mechanism applies fine-grained processing in the central region of the convolutional kernel and wide-ranging parameter sharing in the periphery. This curtails the parameter count and model complexity while sustaining the model's capacity to capture extensive spatial relationships. Additionally, to counter the spatial feature information loss and increased memory access incurred during the 1×1 convolutional channel-compression phase, this study further proposes a dynamic channel sampling approach, which markedly improves the accuracy of tumor subregion segmentation. To validate the proposed methodology, a comprehensive evaluation was conducted on three brain tumor segmentation datasets: BraTS2020, BraTS2024, and Medical Segmentation Decathlon Brain 2018. The experimental results show that the proposed model surpasses current mainstream ConvNet and Transformer architectures across all performance metrics, offering new research perspectives and technical strategies for medical image segmentation.
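The central idea — pairing a large depthwise kernel (wide receptive field) with a small one (fine detail) — can be sketched as below. DCSLK's peripheral parameter sharing and dynamic channel sampling are not reproduced, and the kernel sizes are illustrative:

```python
# Sketch of the generic large-kernel/small-kernel pairing this family of
# models builds on; NOT DCSLK's shared-parameter or channel-sampling design.
import torch
import torch.nn as nn

class LargeSmallKernelBlock(nn.Module):
    def __init__(self, channels: int, large_k: int = 31, small_k: int = 3):
        super().__init__()
        # Depthwise convs keep the large kernel's parameter count tractable.
        self.large = nn.Conv2d(channels, channels, large_k,
                               padding=large_k // 2, groups=channels)
        self.small = nn.Conv2d(channels, channels, small_k,
                               padding=small_k // 2, groups=channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Wide receptive field + fine-grained detail, fused additively.
        return self.large(x) + self.small(x)
```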

Portable Ultrasound Bladder Volume Measurement Over Entire Volume Range Using a Deep Learning Artificial Intelligence Model in a Selected Cohort: A Proof of Principle Study.

Jeong HJ, Seol A, Lee S, Lim H, Lee M, Oh SJ

PubMed paper · May 19, 2025
We aimed to prospectively investigate whether bladder volume measured using deep learning artificial intelligence (AI) algorithms (AI-BV) is more accurate than that measured using conventional methods (C-BV) if using a portable ultrasound bladder scanner (PUBS). Patients who underwent filling cystometry because of lower urinary tract symptoms between January 2021 and July 2022 were enrolled. Every time the bladder was filled serially with normal saline from 0 mL to maximum cystometric capacity in 50 mL increments, C-BV was measured using PUBS. Ultrasound images obtained during this process were manually annotated to define the bladder contour, which was used to build a deep learning AI model. The true bladder volume (T-BV) for each bladder volume range was compared with C-BV and AI-BV for analysis. We enrolled 250 patients (213 men and 37 women), and a deep learning AI model was established using 1912 bladder images. There was a significant difference between C-BV (205.5 ± 170.8 mL) and T-BV (190.5 ± 165.7 mL) (p = 0.001), but no significant difference between AI-BV (197.0 ± 161.1 mL) and T-BV (190.5 ± 165.7 mL) (p = 0.081). In bladder volume ranges of 101-150, 151-200, and 201-300 mL, there were significant differences in the percentage of volume differences between [C-BV and T-BV] and [AI-BV and T-BV] (p < 0.05), but no significant difference if converted to absolute values (p > 0.05). C-BV (R² = 0.91, p < 0.001) and AI-BV (R² = 0.90, p < 0.001) were highly correlated with T-BV. The mean difference between AI-BV and T-BV (6.5 ± 50.4) was significantly smaller than that between C-BV and T-BV (15.0 ± 50.9) (p = 0.001). Following image pre-processing, deep learning AI-BV more accurately estimated true BV than conventional methods in this selected cohort on internal validation. Determination of the clinical relevance of these findings and performance in external cohorts requires further study. The clinical trial was conducted using an approved product for its approved indication, so approval from the Ministry of Food and Drug Safety (MFDS) was not required. Therefore, there is no clinical trial registration number.
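The C-BV/AI-BV contrast mirrors two ways of turning ultrasound measurements into a volume. A sketch, assuming the conventional scanner uses the textbook prolate-ellipsoid coefficient (0.52) and the AI route integrates a segmentation mask; neither formula is confirmed by the paper:

```python
# Two illustrative bladder-volume estimates. The 0.52 coefficient is the
# standard prolate-ellipsoid convention for bladder scanners (assumption
# that this study's device uses it); the mask route stands in for the
# contour-based deep learning estimate.
import numpy as np

def ellipsoid_volume_ml(w_cm: float, h_cm: float, d_cm: float) -> float:
    """Conventional estimate: width x height x depth x 0.52."""
    return w_cm * h_cm * d_cm * 0.52

def mask_volume_ml(mask: np.ndarray, voxel_ml: float) -> float:
    """Segmentation-style estimate: count segmented voxels x per-voxel volume."""
    return float(mask.sum()) * voxel_ml
```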

Accuracy of segment anything model for classification of vascular stenosis in digital subtraction angiography.

Navasardyan V, Katz M, Goertz L, Zohranyan V, Navasardyan H, Shahzadi I, Kröger JR, Borggrefe J

PubMed paper · May 19, 2025
This retrospective study evaluates the diagnostic performance of an optimized comprehensive multi-stage framework based on the Segment Anything Model (SAM), which we named Dr-SAM, for detecting and grading vascular stenosis in the abdominal aorta and iliac arteries using digital subtraction angiography (DSA). A total of 100 DSA examinations were conducted on 100 patients. The infrarenal abdominal aorta (AAI), common iliac arteries (CIA), and external iliac arteries (EIA) were independently evaluated by two experienced radiologists using a standardized 5-point grading scale. Dr-SAM analyzed the same DSA images, and its assessments were compared with the average stenosis grading provided by the radiologists. Diagnostic accuracy was evaluated using Cohen's kappa, specificity, sensitivity, and Wilcoxon signed-rank tests. Interobserver agreement between radiologists, which established the reference standard, was strong (Cohen's kappa: CIA right = 0.95, CIA left = 0.94, EIA right = 0.98, EIA left = 0.98, AAI = 0.79). Dr-SAM showed high agreement with radiologist consensus for CIA (κ = 0.93 right, 0.91 left), moderate agreement for EIA (κ = 0.79 right, 0.76 left), and fair agreement for AAI (κ = 0.70). Dr-SAM demonstrated excellent specificity (up to 1.0) and robust sensitivity (0.67-0.83). Wilcoxon tests revealed no significant differences between Dr-SAM and radiologist grading (p > 0.05). Dr-SAM proved to be an accurate and efficient tool for vascular assessment, with the potential to streamline diagnostic workflows and reduce variability in stenosis grading. Its ability to deliver rapid and consistent evaluations may contribute to earlier detection of disease and the optimization of treatment strategies. Further studies are needed to confirm these findings in prospective settings and to enhance its capabilities, particularly in the detection of occlusions.
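Diameter-based stenosis grading and the agreement statistic used in the study can be sketched as follows; the 5-point cutoffs below are assumptions, not the study's scale:

```python
# Hedged sketch of diameter-based stenosis grading plus the agreement
# statistic reported in the study. Grade cutoffs are illustrative.
from sklearn.metrics import cohen_kappa_score

def stenosis_percent(d_min_mm: float, d_ref_mm: float) -> float:
    """Percent diameter reduction relative to an adjacent healthy segment."""
    return 100.0 * (1.0 - d_min_mm / d_ref_mm)

def grade(pct: float) -> int:
    """Map percent stenosis to a 5-point scale (cutoffs are assumptions)."""
    for g, cutoff in enumerate((10, 30, 50, 70)):
        if pct < cutoff:
            return g
    return 4

# Agreement between model and radiologist consensus, as in the study:
# kappa = cohen_kappa_score(model_grades, consensus_grades)
```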

GuidedMorph: Two-Stage Deformable Registration for Breast MRI

Yaqian Chen, Hanxue Gu, Haoyu Dong, Qihang Li, Yuwen Chen, Nicholas Konz, Lin Li, Maciej A. Mazurowski

arXiv preprint · May 19, 2025
Accurately registering breast MR images from different time points enables the alignment of anatomical structures and tracking of tumor progression, supporting more effective breast cancer detection, diagnosis, and treatment planning. However, the complexity of dense tissue and its highly non-rigid nature pose challenges for conventional registration methods, which primarily focus on aligning general structures while overlooking intricate internal details. To address this, we propose GuidedMorph, a novel two-stage registration framework designed to better align dense tissue. In addition to a single-scale network for global structure alignment, we introduce a framework that utilizes dense tissue information to track breast movement. The learned transformation fields are fused by introducing the Dual Spatial Transformer Network (DSTN), improving overall alignment accuracy. A novel warping method based on the Euclidean distance transform (EDT) is also proposed to accurately warp the registered dense tissue and breast masks, preserving fine structural details during deformation. The framework supports both paradigms: one that relies on external segmentation models and one that operates with image data only. It also works with both VoxelMorph and TransMorph backbones, offering a versatile solution for breast registration. We validate our method on ISPY2 and an internal dataset, demonstrating superior performance in dense tissue alignment, overall breast alignment, and breast structural similarity index measure (SSIM), with notable improvements of over 13.01% in dense tissue Dice, 3.13% in breast Dice, and 1.21% in breast SSIM compared to the best learning-based baseline.
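One plausible reading of EDT-based mask warping is to warp a smooth signed distance field rather than the binary mask itself, then re-threshold; whether this matches GuidedMorph's exact formulation is an assumption:

```python
# Hedged sketch of EDT-based mask warping: interpolate a signed distance
# field (smooth) instead of a binary mask (blocky), then re-threshold.
# This is one plausible reading, not GuidedMorph's confirmed method.
import numpy as np
from scipy.ndimage import distance_transform_edt, map_coordinates

def warp_mask_via_edt(mask: np.ndarray, coords: np.ndarray) -> np.ndarray:
    """mask: binary array; coords: sampling grid of shape (ndim, *out_shape)."""
    # Signed distance: positive inside the mask, negative outside.
    sdf = distance_transform_edt(mask) - distance_transform_edt(1 - mask)
    warped_sdf = map_coordinates(sdf, coords, order=1)
    return (warped_sdf > 0).astype(mask.dtype)
```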