You N, Cao X, Nie H, Su T, Song H, Jin Z, Xin X, Wang D, Sun L

PubMed · Oct 10, 2025
This study aimed to clarify whether quantitative high-resolution computed tomography (HRCT) analysis can assess the condition of interstitial lung disease (ILD) associated with anti-melanoma differentiation-associated gene 5-positive (anti-MDA5+) dermatomyositis (DM), and to investigate the efficacy of tofacitinib in the treatment of anti-MDA5+ DM. Seventy patients were included in this retrospective study: 39 in the tofacitinib group and 31 in the group without tofacitinib. Patients' HRCT scans were uploaded to a deep learning system to assess ILD regression. Based on the quantitative HRCT results, survival, and glucocorticoid (GC) usage, the efficacy of tofacitinib in the treatment of anti-MDA5+ DM was assessed. Safety was assessed by recording the incidence of adverse reactions. Data were analyzed using SPSS 26.0 and R 4.4.1. No significant differences in baseline characteristics were observed between the two groups, except for cutaneous involvement. The tofacitinib group showed higher 3-year survival, and tofacitinib use was an independent protective factor against mortality. Elevated serum ferritin (>1000 μg/L) increased the risk of death. Quantitative HRCT analysis showed a significant reduction in the percentage of whole-lung involvement in the tofacitinib group between baseline and follow-up. The reduction in total lesion volume in the whole lung after treatment was substantially greater in the tofacitinib group. The tofacitinib group had a shorter duration of GC tapering and a higher risk of Epstein-Barr virus (EBV) infection. Quantitative HRCT analysis can be used to assess the response of ILD to tofacitinib treatment. Tofacitinib is effective in patients with anti-MDA5+ DM-ILD but increases the risk of infection.
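
The study's central quantitative readout, percentage of whole-lung involvement, reduces to voxel counting over segmentation masks. A minimal sketch, assuming binary NumPy masks produced by the deep learning system; the function and parameter names are illustrative, not from the paper:

```python
import numpy as np

def whole_lung_involvement(lesion_mask: np.ndarray,
                           lung_mask: np.ndarray,
                           voxel_volume_mm3: float = 1.0):
    """Percentage of lung volume occupied by ILD lesions.

    Both masks are boolean 3D arrays on the same HRCT grid.
    """
    # Count only lesion voxels that fall inside the lung.
    lesion_voxels = np.count_nonzero(lesion_mask & lung_mask)
    lung_voxels = np.count_nonzero(lung_mask)
    involvement_pct = 100.0 * lesion_voxels / max(lung_voxels, 1)
    lesion_volume_ml = lesion_voxels * voxel_volume_mm3 / 1000.0
    return involvement_pct, lesion_volume_ml

# Treatment response: change between baseline and follow-up scans.
# pct_base, _ = whole_lung_involvement(lesion_base, lung_base, vox)
# pct_fu, _ = whole_lung_involvement(lesion_fu, lung_fu, vox)
# delta = pct_base - pct_fu  # positive delta indicates regression
```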

Narimani S, Hoff SR, Kurz KD, Gjesdal KI, Geisler J, Grøvik E

PubMed · Oct 10, 2025
Segmentation of breast lesions in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is critical for effective diagnosis. This study investigates the impact of breast region segmentation (BRS) on the performance of deep learning-based breast lesion segmentation (BLS) in breast DCE-MRI. The study utilized the Stavanger Dataset, comprising 59 DCE-MRI scans, and employed the UNet++ architecture as the segmentation model. Four experimental approaches were designed to assess the influence of BRS on BLS: (1) Whole Volume (WV) without BRS, (2) WV with BRS, (3) BRS applied only to Selected Lesion-containing Slices (SLS), and (4) BRS applied to an Optimal Volume (OV). Data augmentation and oversampling techniques were implemented to address dataset limitations and enhance model generalizability. A systematic method was employed to determine OV sizes for patients' DCE-MRI images, ensuring full lesion inclusion. Model training and validation were conducted using a hybrid loss function comprising Dice loss, focal loss, and cross-entropy loss, together with a five-fold cross-validation strategy. Final evaluations were performed on a randomly split test dataset for each of the four approaches. The findings indicate that applying BRS significantly enhances model performance. The most notable improvement was observed in the fourth approach, BRS with OV, which achieved approximately a 50% increase in segmentation accuracy compared to the non-BRS baseline. Furthermore, the BRS with OV approach resulted in a substantial reduction in computational energy consumption (up to 450%), highlighting its potential as an environmentally sustainable solution for large-scale applications.
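
The hybrid loss combines three standard segmentation objectives. The abstract does not give the weighting or focal parameters, so the PyTorch sketch below assumes equal weights and common focal defaults; it is an illustration of the combination, not the paper's exact implementation:

```python
import torch
import torch.nn.functional as F

def hybrid_seg_loss(logits, target, w=(1.0, 1.0, 1.0),
                    alpha=0.25, gamma=2.0, eps=1e-6):
    """Dice + focal + cross-entropy loss for binary segmentation.

    logits: raw network output, shape (N, 1, H, W)
    target: binary ground-truth mask, same shape, float in {0, 1}
    """
    prob = torch.sigmoid(logits)

    # Soft Dice loss, computed per sample then averaged.
    inter = (prob * target).sum(dim=(1, 2, 3))
    denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - (2.0 * inter + eps) / (denom + eps)

    # Binary cross-entropy (numerically stable form on logits).
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")

    # Focal loss: down-weight easy, well-classified pixels.
    p_t = prob * target + (1.0 - prob) * (1.0 - target)
    a_t = alpha * target + (1.0 - alpha) * (1.0 - target)
    focal = a_t * (1.0 - p_t) ** gamma * bce

    return (w[0] * dice.mean()
            + w[1] * focal.mean()
            + w[2] * bce.mean())
```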

Ji B, Liu Y, Zhou B, Mi R, Liu Y, Lv Y, Wang P, Li Y, Sun Q, Wu N, Quan Y, Wu S, Yan L

PubMed · Oct 10, 2025
Accurate diagnosis of anterior disc displacement (ADD) is essential for managing temporomandibular joint (TMJ) disorders. This study employed machine learning (ML) to automatically detect anteriorly displaced TMJ discs in magnetic resonance images (MRI). This retrospective study included patients with TMJ disorders who visited the hospital between January 2023 and June 2024. Five machine learning models (decision tree [DT], K-nearest neighbors [KNN], support vector machine [SVM], random forest [RF], and logistic regression [LR]) were trained and validated on radiomics data derived from TMJ imaging. Model performance was assessed using an 8:2 train-test split, with accuracy evaluated through metrics such as area under the curve (AUC), sensitivity, specificity, precision, and F1 score. After manual delineation of TMJ ROIs by an experienced radiologist (serving as the reference standard), radiomic feature extraction included first-order statistics, size- and shape-based features, and texture features. The open-phase, close-phase, and open-close fusion radiomics features were evaluated separately. The study analyzed 382 TMJs from 191 patients, comprising 214 normal joints and 168 abnormal joints. The fusion radiomics model, evaluated with all five classifiers, consistently outperformed the single-phase (open-phase and close-phase) models across both diagnostic tasks, in both training and validation cohorts. For normal vs. abnormal TMJ discrimination, the random forest classifier demonstrated robust performance, with AUCs of 0.889 (95% CI: 0.854-0.924) in training and 0.874 (95% CI: 0.799-0.948) in validation. Complete performance metrics for all five classifiers are detailed in the main text. The fusion radiomics model effectively distinguished normal from abnormal joints and differentiated ADD with reduction (ADDwR) from ADD without reduction (ADDwoR), supporting personalized treatment planning. Trial registration: not applicable.
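
The evaluation protocol, five standard classifiers on radiomic features with an 8:2 split and AUC comparison, maps directly onto scikit-learn. A minimal sketch; the synthetic feature matrix stands in for the extracted radiomic features, and all hyperparameters are library defaults rather than the study's settings:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Stand-in for the radiomic feature matrix (382 joints in the study).
X, y = make_classification(n_samples=382, n_features=100,
                           n_informative=20, random_state=0)

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(probability=True, random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
}

# 8:2 train-test split, stratified on the normal/abnormal label.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)  # scale, then classify
    pipe.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, pipe.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```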

Zhang X, Wu C, Zhao Z, Lei J, Tian W, Zhang Y, Xie W, Wang Y

PubMed · Oct 10, 2025
Developing generalist foundation models has recently attracted tremendous attention in the field of AI for medicine, and requires open-source medical image datasets that incorporate diverse supervision signals across various imaging modalities. In this paper, we introduce RadGenome-Chest CT, a comprehensive, large-scale, region-guided 3D chest CT interpretation dataset based on CT-RATE. Specifically, we leverage the latest powerful universal segmentation and large language models to extend the original dataset in the following ways: organ-level segmentation masks covering 197 categories, which provide intermediate reasoning visual clues for interpretation; 665K multi-granularity grounded reports, where each sentence of the report is linked to the corresponding anatomical region of the CT volume with a segmentation mask; and 1.2M grounded VQA pairs, where questions and answers are all linked with reference segmentation masks, enabling models to associate visual evidence with textual explanations. We believe that RadGenome-Chest CT can significantly advance the development of multimodal medical foundation models by supporting training to generate text based on given segmentation regions, which is unattainable with previous relevant datasets.
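
The dataset's central record types, a report sentence or a QA pair tied to an anatomical region and its mask, can be pictured as a small schema. A hypothetical sketch; the field names are illustrative, not the dataset's actual on-disk format:

```python
from dataclasses import dataclass

@dataclass
class GroundedSentence:
    """One report sentence linked to a region of the CT volume."""
    volume_id: str     # CT-RATE volume identifier
    sentence: str      # e.g. "There is a nodule in the right upper lobe."
    region_label: str  # one of the 197 anatomical categories
    mask_path: str     # path to the region's segmentation mask

@dataclass
class GroundedVQA:
    """One question-answer pair with its reference mask."""
    volume_id: str
    question: str      # e.g. "Is the left lower lobe abnormal?"
    answer: str
    mask_path: str     # visual evidence grounding the answer

# Region-guided training then means conditioning text generation
# on (volume, mask) pairs rather than on whole volumes.
```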

Na Y, Kim K, Cho H, Ye SJ, Kim H, Ahn SS, Park JE, Lee J

PubMed · Oct 10, 2025
Training deep neural networks with multi-domain data generally yields more robustness and accuracy than training with single-domain data, which has led to the development of many deep learning-based algorithms that use multi-domain data. However, if part of the input is unavailable because data are missing or corrupted, significant bias can occur, a problem that is especially critical in medical applications, where patients may be negatively affected. In this study, we propose the Laplacian filter attention with style transfer generative adversarial network (LASTGAN) to address the problem of missing sequences in brain tumor magnetic resonance imaging (MRI). Our method combines image imputation and image-to-image translation to accurately synthesize specific sequences of missing MR images. LASTGAN can accurately synthesize both the overall anatomical structures and the tumor regions of the brain in MR images by employing a novel attention module that utilizes a Laplacian filter. Additionally, among the sub-networks, the generator injects a style vector of the missing domain that is inferred by the style encoder, while the style mapper assists the generator in synthesizing domain-specific images. We show that LASTGAN synthesizes higher-quality MR images than other existing GAN-based methods. Furthermore, we validate the use of LASTGAN for data imputation and augmentation through segmentation experiments.
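
The attention module is built around a Laplacian filter, a fixed high-pass operator that emphasizes edges and fine structure. One plausible minimal form of such a module is sketched below; this is an assumption about the general idea, and the actual LASTGAN design may differ substantially:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LaplacianAttention(nn.Module):
    """Edge-aware spatial attention built from a fixed Laplacian kernel.

    One reading of "Laplacian filter attention": high-pass responses
    highlight boundaries (e.g. tumor edges), and a learned 1x1
    convolution turns them into a spatial gate over the features.
    """
    def __init__(self, channels: int):
        super().__init__()
        lap = torch.tensor([[0., 1., 0.],
                            [1., -4., 1.],
                            [0., 1., 0.]])
        # One Laplacian kernel per channel (depthwise filtering).
        self.register_buffer(
            "kernel", lap.view(1, 1, 3, 3).repeat(channels, 1, 1, 1))
        self.gate = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        edges = F.conv2d(x, self.kernel, padding=1, groups=x.shape[1])
        attn = torch.sigmoid(self.gate(edges.abs()))
        return x * attn  # amplify features near anatomical boundaries

# attn = LaplacianAttention(64)
# y = attn(torch.randn(1, 64, 128, 128))
```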

Monteiro-Martins S, Li Y, Borisov O, Khan A, Reichardt W, Haug S, Kellner E, Buechert M, Ott E, Russe MF, Bamberg F, Kiryluk K, Sekula P, Reisert M, Köttgen A

PubMed · Oct 10, 2025
Chronic kidney disease (CKD) is defined as sustained abnormalities in kidney function or structure. Genetic studies of CKD have largely focused on kidney function markers such as estimated glomerular filtration rate (eGFR). We hypothesized that genome-wide association studies (GWAS) of magnetic resonance imaging (MRI)-based kidney sub-volumes could provide insights into CKD risk genes complementary to the study of eGFR. Total kidney volume (TKV) and sub-volumes for cortex, medulla, and sinus were derived from abdominal MRIs of 38,816 United Kingdom Biobank participants of European ancestry using a trained convolutional neural network. GWAS was performed for body surface area-normalized kidney volumes and, for comparison, eGFR. Potentially causal genes at each locus were prioritized using a dedicated annotation pipeline. We assessed locus overlap between volumes, biomarker-based kidney function, and clinical traits using colocalization analyses. Annotated genes were further characterized through enrichment analyses and molecular and clinical annotations, including a screen for rare, putative loss-of-function variants. GWAS of 9,803,932 common genetic variants identified 34 significant loci for TKV, 24 for medulla, 26 for cortex, and 71 for sinus, compared to 32 for eGFR. Prioritized genes for cortex and medulla volumes showed corresponding tissue-specific expression and were enriched for kidney development- and hypoxia-related pathways. Genetic effect sizes of significant index single nucleotide polymorphisms for TKV, cortex, and medulla volumes correlated positively with those for eGFR. Some loci, such as PKHD1 and BICC1, were strongly associated with kidney volumes but not eGFR. Integration with disease information revealed that rare, putative loss-of-function variants in BICC1, as well as common variants with regulatory potential, are associated with increased risk of CKD and dialysis, associations that were not identified in a previous GWAS of eGFR. In conclusion, our investigation shows that genetic findings on kidney structure can complement kidney function studies and reveal previously unrecognized CKD risk genes in the population.
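
The volume phenotypes entering the GWAS are indexed to body surface area, which is a single arithmetic step. A minimal sketch, assuming the widely used Du Bois formula; the abstract does not state which BSA formula was actually applied:

```python
def du_bois_bsa(height_cm: float, weight_kg: float) -> float:
    """Body surface area in m^2, Du Bois & Du Bois (1916) formula."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

def bsa_normalized_volume(volume_ml: float, height_cm: float,
                          weight_kg: float) -> float:
    """Kidney (sub-)volume indexed to body size, in mL/m^2."""
    return volume_ml / du_bois_bsa(height_cm, weight_kg)

# e.g. a 165 cm, 70 kg participant with a TKV of 300 mL:
# bsa_normalized_volume(300, 165, 70) -> ~170 mL/m^2
```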

Patrício C, Rio-Torto I, Cardoso JS, Teixeira LF, Neves JC

PubMed · Oct 10, 2025
The main challenges limiting the adoption of deep learning-based solutions in medical workflows are the availability of annotated data and the lack of interpretability of such systems. Concept Bottleneck Models (CBMs) tackle the latter by constraining the model output to a set of predefined and human-interpretable concepts. However, the increased interpretability achieved through these concept-based explanations implies a higher annotation burden. Moreover, if a new concept needs to be added, the whole system must be retrained. Inspired by the remarkable performance of Large Vision-Language Models (LVLMs) in few-shot settings, we propose CBVLM, a simple yet effective methodology that tackles both of the aforementioned challenges. First, for each concept, we prompt the LVLM to answer whether the concept is present in the input image. Then, we ask the LVLM to classify the image based on the previous concept predictions. In both stages, we incorporate a retrieval module responsible for selecting the best examples for in-context learning. By grounding the final diagnosis in the predicted concepts, we ensure explainability, and by leveraging the few-shot capabilities of LVLMs, we drastically lower the annotation cost. We validate our approach with extensive experiments across four medical datasets and twelve LVLMs (both generic and medical) and show that CBVLM consistently outperforms CBMs and task-specific supervised methods without requiring any training and using just a few annotated examples. More information is available on our project page: https://cristianopatricio.github.io/CBVLM/.
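
The two-stage procedure, query the LVLM once per concept and then condition the final classification on those concept answers, is essentially a prompting loop. A minimal sketch under stated assumptions: query_lvlm and retrieve_examples are hypothetical stand-ins for the LVLM call and the retrieval module, and the prompt wording is illustrative:

```python
def cbvlm_diagnose(image, concepts, classes, query_lvlm, retrieve_examples):
    """Two-stage concept-bottleneck prompting with an LVLM.

    query_lvlm(prompt, image, examples) -> str  (hypothetical wrapper)
    retrieve_examples(image, task) -> list      (in-context demos)
    """
    # Stage 1: predict each human-interpretable concept independently.
    concept_preds = {}
    for concept in concepts:
        demos = retrieve_examples(image, task=concept)
        prompt = (f"Is the following concept present in the image: "
                  f"{concept}? Answer yes or no.")
        concept_preds[concept] = query_lvlm(prompt, image, demos)

    # Stage 2: ground the diagnosis in the predicted concepts.
    findings = "; ".join(f"{c}: {a}" for c, a in concept_preds.items())
    demos = retrieve_examples(image, task="classification")
    prompt = (f"Observed concepts: {findings}. "
              f"Classify the image as one of {classes}.")
    diagnosis = query_lvlm(prompt, image, demos)

    # The concept predictions double as the explanation.
    return diagnosis, concept_preds
```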

Champendal M, Ribeiro RT, Müller H, Prior JO, Sá Dos Reis C

PubMed · Oct 10, 2025
The clinical adoption of AI-based denoising in PET/CT relies on the development of transparent and trustworthy tools that align with radiographers' needs and support integration into routine practice. This study aims to determine the key characteristics of an eXplainable Artificial Intelligence (XAI) tool aligned with radiographers' needs, in order to facilitate the clinical adoption of AI-based denoising algorithms in PET/CT. Two focus groups were organised, involving ten voluntary participants recruited from nuclear medicine departments in Western Switzerland, forming a convenience sample of radiographers. Two different scenarios, matching or mismatching the ground truth, were used to identify their needs and the questions they would like to ask to understand the AI denoising algorithm. Additionally, the characteristics that an XAI tool should possess to best meet their needs were investigated. Content analysis was performed following the three steps outlined by Wanlin, and the study received ethics clearance. The ten radiographers (aged 31-60 years) identified two levels of explanation: (1) simple, global explanations with numerical confidence levels for rapid understanding in routine settings; and (2) detailed, case-specific explanations using mixed formats where necessary, depending on the clinical situation and users, to build confidence and support decision-making. Key questions concern the functions of the algorithm ('what'), the clinical context ('when'), and the dependency of the results ('how'). An effective XAI tool should be simple, adaptable, user-friendly, and not disruptive to workflows. Radiographers need two levels of explanation from XAI tools: global summaries that preserve workflow efficiency, and detailed, case-specific insights when needed. Meeting these needs is key to fostering trust, understanding, and integration of AI-based denoising in PET/CT. Implementing adaptive XAI tools tailored to radiographers' needs can support clinical workflows and accelerate the adoption of AI in PET/CT imaging.

Yingtie Lei, Zimeng Li, Chi-Man Pun, Yupeng Liu, Xuhang Chen

arXiv preprint · Oct 10, 2025
Ultra-high-field 7T MRI offers enhanced spatial resolution and tissue contrast that enables the detection of subtle pathological changes in neurological disorders. However, the limited availability of 7T scanners restricts widespread clinical adoption due to substantial infrastructure costs and technical demands. Computational approaches for synthesizing 7T-quality images from accessible 3T acquisitions present a viable solution to this accessibility challenge. Existing CNN approaches suffer from limited spatial coverage, while Transformer models demand excessive computational overhead. RWKV architectures offer an efficient alternative for global feature modeling in medical image synthesis, combining linear computational complexity with strong long-range dependency capture. Building on this foundation, we propose Frequency Spatial-RWKV (FS-RWKV), an RWKV-based framework for 3T-to-7T MRI translation. To better address the challenges of anatomical detail preservation and global tissue contrast recovery, FS-RWKV incorporates two key modules: (1) Frequency-Spatial Omnidirectional Shift (FSO-Shift), which performs discrete wavelet decomposition followed by omnidirectional spatial shifting on the low-frequency branch to enhance global contextual representation while preserving high-frequency anatomical details; and (2) Structural Fidelity Enhancement Block (SFEB), a module that adaptively reinforces anatomical structure through frequency-aware feature fusion. Comprehensive experiments on UNC and BNU datasets demonstrate that FS-RWKV consistently outperforms existing CNN-, Transformer-, GAN-, and RWKV-based baselines across both T1w and T2w modalities, achieving superior anatomical fidelity and perceptual quality.
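
The FSO-Shift idea, decompose with a discrete wavelet transform, shift only the low-frequency band omnidirectionally, then reconstruct, can be sketched compactly. The toy version below uses PyWavelets on a single-channel image with simple rolls in four directions; the actual module operates on learned feature maps and is more involved:

```python
import numpy as np
import pywt

def fso_shift(image: np.ndarray, shift: int = 1) -> np.ndarray:
    """Frequency-Spatial Omnidirectional Shift, toy version.

    Low-frequency coefficients are averaged over shifts in four
    directions (widening the effective receptive field), while the
    high-frequency bands pass through untouched, preserving the
    fine anatomical detail they encode.
    """
    ll, (lh, hl, hh) = pywt.dwt2(image, "haar")

    # Omnidirectional shift: mix each low-frequency coefficient
    # with its up/down/left/right neighbours.
    shifted = [np.roll(ll, shift, axis=0), np.roll(ll, -shift, axis=0),
               np.roll(ll, shift, axis=1), np.roll(ll, -shift, axis=1)]
    ll_mixed = (ll + sum(shifted)) / 5.0

    return pywt.idwt2((ll_mixed, (lh, hl, hh)), "haar")

# t1w_3t = np.random.rand(256, 256)  # stand-in for a 3T slice
# out = fso_shift(t1w_3t)
```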

Christian Bluethgen, Dave Van Veen, Daniel Truhn, Jakob Nikolas Kather, Michael Moor, Malgorzata Polacin, Akshay Chaudhari, Thomas Frauenfelder, Curtis P. Langlotz, Michael Krauthammer, Farhad Nooralahzadeh

arXiv preprint · Oct 10, 2025
Building agents, systems that perceive and act upon their environment with a degree of autonomy, has long been a focus of AI research. This pursuit has recently become vastly more practical with the emergence of large language models (LLMs) capable of using natural language to integrate information, follow instructions, and perform forms of "reasoning" and planning across a wide range of tasks. With its multimodal data streams and orchestrated workflows spanning multiple systems, radiology is uniquely suited to benefit from agents that can adapt to context and automate repetitive yet complex tasks. In radiology, LLMs and their multimodal variants have already demonstrated promising performance for individual tasks such as information extraction and report summarization. However, using LLMs in isolation underutilizes their potential to support complex, multi-step workflows where decisions depend on evolving context from multiple information sources. Equipping LLMs with external tools and feedback mechanisms enables them to drive systems that exhibit a spectrum of autonomy, ranging from semi-automated workflows to more adaptive agents capable of managing complex processes. This review examines the design of such LLM-driven agentic systems, highlights key applications, discusses evaluation methods for planning and tool use, and outlines challenges such as error cascades, tool-use efficiency, and health IT integration.
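
Stripped to its skeleton, the agentic pattern the review surveys is a control loop: the LLM proposes either a tool call or a final answer, the tool's output is fed back as new context, and a step budget bounds runaway behavior (one simple guard against the error cascades mentioned above). A generic sketch; chat_completion and the tool registry are hypothetical stand-ins, not any specific vendor API:

```python
import json

def run_agent(task: str, chat_completion, tools: dict, max_steps: int = 8):
    """Minimal tool-use loop for an LLM-driven agent.

    chat_completion(messages) -> str   (hypothetical LLM call)
    tools: name -> callable            (e.g. RIS query, report search)

    The model is asked to reply with JSON naming either a tool call
    or a final answer; tool results are appended as new context.
    """
    messages = [{"role": "system",
                 "content": 'Reply with JSON: {"tool": name, "args": {...}} '
                            'or {"final": answer}. '
                            f"Available tools: {list(tools)}"},
                {"role": "user", "content": task}]

    for _ in range(max_steps):
        reply = chat_completion(messages)
        messages.append({"role": "assistant", "content": reply})
        decision = json.loads(reply)  # assumes well-formed JSON output

        if "final" in decision:  # agent decides it is done
            return decision["final"]

        # Execute the requested tool and feed the observation back.
        result = tools[decision["tool"]](**decision.get("args", {}))
        messages.append({"role": "user",
                         "content": f"Tool result: {result}"})

    return "Stopped: step budget exhausted."
```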