
Wang T, Cao Y, Lu Z, Huang Y, Lu J, Fan F, Shan H, Zhang Y

PubMed · Oct 6, 2025
Metal artifacts in computed tomography (CT) imaging significantly hinder diagnostic accuracy and clinical decision-making. While deep learning-based metal artifact reduction (MAR) methods have demonstrated promising progress, their clinical application is still constrained by three major challenges: (1) balancing metal artifact reduction with the preservation of critical anatomical structures, (2) effectively capturing the clinical priors of metal artifacts, and (3) dynamically adapting to polychromatic spectral variations. To address these limitations, we propose a Self-supervised MAR method for computed Tomography (SMART) that leverages range-null space decomposition (RND) to model the linear attenuation coefficients (LACs) of metal and tissue separately, and employs implicit neural representation (INR) to learn their respective clinical characteristics without explicit supervision. Specifically, RND decouples the metal and tissue LACs into a residual range component that models the metal LAC, capturing metal artifacts and thereby facilitating their reduction, and a null component that models the tissue LAC, focusing on the preservation of tissue details. To cope with the lack of paired data in clinical settings, we use INR to learn the clinical characteristics of these components in a self-supervised manner. Furthermore, SMART incorporates polychromatic spectra into the implicit representation, allowing dynamic adaptation to spectral variations across different imaging conditions. Extensive experiments on one synthetic and two clinical datasets demonstrate the strong potential of SMART in real-world scenarios: by flexibly adapting to spectral variations, it achieves superior generalizability to out-of-distribution clinical data.
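The range-null space decomposition at the heart of SMART can be illustrated with a toy linear forward operator. The sketch below is a minimal numpy illustration of the decomposition itself, not the paper's CT-specific implementation; the matrix `A` is an arbitrary stand-in for a projection operator.

```python
import numpy as np

# Toy forward operator (stand-in for a CT projection matrix); the paper's
# physics is far richer -- this only illustrates the decomposition itself.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))        # wide matrix: non-trivial null space
A_pinv = np.linalg.pinv(A)

x = rng.standard_normal(5)             # a flattened image

# Range component: the part of x that the measurements constrain.
x_range = A_pinv @ A @ x
# Null component: invisible to A, free for the prior/INR to model.
x_null = x - x_range

# The two components reconstruct x exactly, and A cannot "see" x_null.
assert np.allclose(x_range + x_null, x)
assert np.allclose(A @ x_null, np.zeros(3), atol=1e-9)
```

Because `A @ x_null` vanishes, any content placed in the null component never contradicts the measured data, which is why the decomposition cleanly separates data-consistent artifact modeling from detail-preserving prior modeling.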

Wang Q, Zheng N, Yu Q, Shao S

PubMed · Oct 6, 2025
HER-2-positive breast cancer is a biologically distinct subtype; accurate early assessment of HER-2 status is therefore critical for guiding personalized treatment. HER-2 status is currently determined mainly through immunohistochemistry (IHC) and fluorescence in situ hybridization (FISH) on biopsy or surgical specimens. However, these methods are invasive, susceptible to sampling errors, and lack the capability for real-time, noninvasive monitoring of tumor heterogeneity or treatment response. The development of noninvasive imaging-based predictive methods has therefore gained significant research interest. Multiparametric magnetic resonance imaging (mpMRI) can quantify tumor perfusion parameters (Ktrans, vascular permeability; Ve, extracellular space) through dynamic contrast-enhanced MRI (DCE-MRI), measure the apparent diffusion coefficient (ADC) using diffusion-weighted imaging (DWI), and obtain metabolic information via positron emission tomography-MRI (PET-MRI); all of these measures are closely associated with HER-2 expression status. Concurrently, radiomics and deep learning (DL) systematically extract multidimensional features of breast tumors from multimodal imaging data, including morphological parameters (sphericity, surface area), first-order statistical metrics (kurtosis, skewness), and textural features (the gray-level co-occurrence matrix, GLCM, which quantifies the spatial distribution of texture, and the gray-level run-length matrix, GLRLM, which evaluates the size of homogeneous regions), thereby constructing high-dimensional quantitative analysis datasets. By resolving the spatial heterogeneity of these feature distributions, DL algorithms can autonomously mine latent imaging patterns closely related to HER-2 expression and establish noninvasive prediction models. Although traditional single-parameter models, such as the ADC derived from DWI, can provide valuable information about the tumor microenvironment, their predictive efficacy is often constrained by parameter inconsistencies and a lack of standardization. As a narrative review, this article argues that multimodal imaging, radiomics, and deep learning are better equipped to capture complex HER-2-related tumor heterogeneity, thereby providing a stronger theoretical foundation for guiding personalized treatment strategies and prognostic evaluation.
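The first-order radiomics statistics mentioned in this abstract (skewness, kurtosis) are straightforward to compute from an ROI's intensity values. Below is a minimal numpy sketch, not tied to any particular radiomics package; the bimodal toy ROI is invented for the example.

```python
import numpy as np

def first_order_stats(intensities):
    """Skewness and excess (Fisher) kurtosis of a flattened ROI."""
    x = np.asarray(intensities, dtype=float).ravel()
    z = (x - x.mean()) / x.std()
    skewness = np.mean(z ** 3)          # asymmetry of the intensity histogram
    kurtosis = np.mean(z ** 4) - 3.0    # tail weight; 0 for a Gaussian
    return skewness, kurtosis

# A symmetric bimodal ROI: zero skew, strongly negative excess kurtosis,
# the kind of histogram shape a heterogeneous lesion might produce.
roi = np.concatenate([np.full(500, -1.0), np.full(500, 1.0)])
skew, kurt = first_order_stats(roi)
assert abs(skew) < 1e-9 and kurt < 0
```

GLCM and GLRLM textures require discretizing intensities into gray levels first, which is one of the standardization issues the review raises: bin width and level count change the feature values.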

Räisänen IT, Penttala M, Sahni V, Toby Thomas J, Grigoriadis A, Sakellari D, Gupta S, Pärnänen P, Pätilä T, Sorsa T

PubMed · Oct 6, 2025
Calcifying carotid artery atheromas (CCAAs) identified on standard dental panoramic radiographs (DPRs) have been proposed as potential markers for cardiovascular disease (CVD). CCAAs are further linked to several systemic disease processes (e.g., diabetes) that are also associated with periodontitis. The active matrix metalloproteinase-8 (aMMP-8) mouthrinse point-of-care test has been validated globally in multiple studies for periodontitis diagnostics. Calprotectin can inhibit matrix metalloproteinases and also exerts significant antimicrobial activity; recently, it has been suggested as a potential biomarker of endovascular inflammation. This special report considers combining mouthrinse aMMP-8 and calprotectin in periodontitis diagnostics at the dentist's office to simultaneously identify patients at risk of diabetes and CVD, reviewing recent PubMed-indexed findings that compare diagnostics by aMMP-8 and calprotectin individually and in combination. Combining CCAA analysis on DPRs with oral-fluid mouthrinse aMMP-8 and calprotectin lateral-flow immunoassays as point-of-care/chair-side tests, especially via polynomial-algorithm machine-learning technology (including computer vision), can provide a modern, noninvasive, safe, and economical diagnostic AI tool. This tool can be used for online, real-time screening of the interlinked stroke, CVD, diabetes, and periodontal disease cascades. Identified at-risk patients can then be referred for the necessary medical and dental interventions.

Ty S, Haque F, Desai P, Takahashi N, Chaudhary U, Choyke PL, Thomas A, Türkbey B, Harmon SA

PubMed · Oct 6, 2025
Small cell lung cancer (SCLC) is an aggressive disease with diverse phenotypes that reflect the heterogeneous expression of tumor-related genes. Recent studies have shown that neuroendocrine (NE) transcription factors may be used to classify SCLC tumors into subtypes with distinct therapeutic responses. The liver is a common site of metastatic disease in SCLC and is associated with poor prognosis. Here, we present a computational approach to detect and characterize metastatic SCLC (mSCLC) liver lesions and their associated NE-related phenotype as a method to improve patient management. This study utilized computed tomography scans of patients with hepatic lesions from two data sources for segmentation and classification of liver disease: (1) a public dataset of patients with various cancer types (segmentation; n = 131) and (2) an institutional cohort of patients with SCLC (segmentation and classification; n = 86). We developed deep learning segmentation algorithms and compared their performance in automatically detecting liver lesions, evaluating the results with and without the inclusion of the SCLC cohort. Following segmentation in the SCLC cohort, radiomic features were extracted from the detected lesions, and least absolute shrinkage and selection operator (LASSO) regression was used to select features from a training cohort (80/20 split). Subsequently, we trained radiomics-based machine learning classifiers to stratify patients by their NE tumor profile, defined as the expression levels of a preselected gene set derived from bulk RNA sequencing or circulating free DNA chromatin immunoprecipitation sequencing. Our liver lesion detection tool achieved lesion-based sensitivities of 66%-83% on the two datasets. In patients with mSCLC, the radiomics-based NE phenotype classifier distinguished patients as positive or negative for harboring an NE-like liver metastasis phenotype, with an area under the receiver operating characteristic curve of 0.73 and an F1 score of 0.88 in the testing cohort. We demonstrate the potential of artificial intelligence (AI)-based platforms as clinical decision support systems that could help clinicians determine treatment options for patients with SCLC based on their molecular tumor profile. Targeted therapy requires accurate molecular characterization of disease, which imaging and AI may aid in determining.
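The LASSO feature-selection step this abstract describes can be sketched with a tiny iterative soft-thresholding (ISTA) solver. This is an illustrative stand-in, not the study's pipeline; the synthetic "radiomics" matrix and the choice of `lam` are invented for the example.

```python
import numpy as np

def lasso_ista(X, y, lam=0.1, iters=500):
    """Minimal ISTA solver for argmin_w 0.5*||Xw - y||^2 + lam*||w||_1.
    Features whose weight shrinks to exactly zero are dropped."""
    lr = 1.0 / np.linalg.norm(X, 2) ** 2        # step from the Lipschitz constant
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w = w - lr * (X.T @ (X @ w - y))        # gradient step on the quadratic
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

# Synthetic "radiomics" matrix: only features 0 and 3 carry signal.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3]
w = lasso_ista(X, y, lam=5.0)
selected = np.flatnonzero(np.abs(w) > 1e-6)     # surviving feature indices
```

The L1 penalty drives uninformative coefficients to exactly zero, which is why LASSO doubles as a feature selector for high-dimensional radiomics signatures.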

Gupta A, Malhotra D

PubMed · Oct 6, 2025
Parkinson's Disease (PD) is a rapidly progressing neurodegenerative disorder that often presents with neuropsychiatric symptoms, affecting millions globally, particularly within aging populations. Addressing the urgent need for early and accurate diagnosis, this study introduces ParkEnNET, a Majority Voting-Based Ensemble Transfer Learning Framework for early PD detection. Traditional deep learning models, although powerful, require large labeled datasets and extensive computational resources, and are prone to overfitting when applied to small, noisy medical datasets. To overcome these limitations, ParkEnNET leverages transfer learning, utilizing pretrained deep learning models to efficiently extract relevant features from limited MRI data. By integrating the strengths of multiple models through a majority voting ensemble strategy, ParkEnNET effectively handles challenges such as data variability, class imbalance, and imaging noise. The framework was validated both through internal testing and on an independent clinical dataset collected from Superspeciality Hospital Jammu, ensuring real-world generalizability. Experimental results demonstrated that ParkEnNET achieved a diagnostic accuracy of 98.23%, with a precision of 100.0%, recall of 95.24%, and an F1-score of 97.44%, outperforming all individual models, including VGGNet, ResNet-50, and EfficientNet. These outcomes establish ParkEnNET as a promising diagnostic framework with strong performance on limited datasets, offering significant potential to enhance early clinical detection and timely intervention for Parkinson's Disease.
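The majority-voting ensemble strategy is the simplest of the components above and can be sketched in a few lines. This is a generic illustration, not ParkEnNET's code; the model names and labels are placeholders.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions for one scan by majority vote.
    `predictions` is a list like ["PD", "PD", "healthy"]; on a tie, the label
    of the earliest-listed model wins (Counter preserves insertion order)."""
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical backbone predictions for one MRI scan,
# e.g. from VGGNet, ResNet-50, and EfficientNet respectively:
votes = ["PD", "healthy", "PD"]
assert majority_vote(votes) == "PD"
```

With an odd number of models, hard voting never ties for binary labels, which is one practical reason ensembles like this are often built from three or five backbones.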

Satrio Pambudi, Filippo Menolascina

arXiv preprint · Oct 6, 2025
The selection of appropriate medical imaging procedures is a critical and complex clinical decision, guided by extensive evidence-based standards such as the ACR Appropriateness Criteria (ACR-AC). However, the underutilization of these guidelines, stemming from the difficulty of mapping unstructured patient narratives to structured criteria, contributes to suboptimal patient outcomes and increased healthcare costs. To bridge this gap, we introduce a multi-agent cognitive architecture that automates the translation of free-text clinical scenarios into specific, guideline-adherent imaging recommendations. Our system leverages a novel, domain-adapted dense retrieval model, ColBERT, fine-tuned on a synthetically generated dataset of 8,840 clinical scenario-recommendation pairs to achieve highly accurate information retrieval from the ACR-AC knowledge base. This retriever identifies candidate guidelines with a 93.9% top-10 recall, which are then processed by a sequence of LLM-based agents for selection and evidence-based synthesis. We evaluate our architecture using GPT-4.1 and MedGemma agents, demonstrating a state-of-the-art exact match accuracy of 81%, meaning that in 81% of test cases the predicted procedure set was identical to the guideline's reference set, and an F1-score of 0.879. This represents a 67-percentage-point absolute improvement in accuracy over a strong standalone GPT-4.1 baseline, underscoring the contribution that our architecture makes to a frontier model. These results were obtained on a challenging test set with substantial lexical divergence from the source guidelines. Our code is available at https://anonymous.4open.science/r/demo-iclr-B567/
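The top-10 recall figure reported above is a standard retrieval metric and is easy to state precisely in code. A minimal numpy sketch of the metric itself (not the authors' ColBERT pipeline); the score matrix is a toy example.

```python
import numpy as np

def recall_at_k(scores, relevant, k=10):
    """Fraction of queries whose ground-truth document appears among the
    top-k retrieved candidates. `scores` is (num_queries, num_docs);
    `relevant` maps each query to the index of its correct guideline."""
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = [rel in row for row, rel in zip(topk, relevant)]
    return float(np.mean(hits))

# Two toy queries scored against five candidate guidelines:
scores = np.array([[0.9, 0.1, 0.3, 0.2, 0.0],
                   [0.2, 0.1, 0.4, 0.8, 0.6]])
assert recall_at_k(scores, relevant=[0, 3], k=1) == 1.0   # both hit at rank 1
assert recall_at_k(scores, relevant=[1, 3], k=2) == 0.5   # only query 2 hits
```

A 93.9% recall@10 thus means the downstream LLM agents see the correct guideline among their ten candidates for nearly 94% of scenarios; the agents' job is then selection, not retrieval.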

Alec K. Peltekian, Halil Ertugrul Aktas, Gorkem Durak, Kevin Grudzinski, Bradford C. Bemiss, Carrie Richardson, Jane E. Dematte, G. R. Scott Budinger, Anthony J. Esposito, Alexander Misharin, Alok Choudhary, Ankit Agrawal, Ulas Bagci

arXiv preprint · Oct 6, 2025
Mixture-of-Experts (MoE) architectures have significantly contributed to scalable machine learning by enabling specialized subnetworks to tackle complex tasks efficiently. However, traditional MoE systems lack domain-specific constraints essential for medical imaging, where anatomical structure and regional disease heterogeneity strongly influence pathological patterns. Here, we introduce Regional Expert Networks (REN), the first anatomically-informed MoE framework tailored specifically for medical image classification. REN leverages anatomical priors to train seven specialized experts, each dedicated to distinct lung lobes and bilateral lung combinations, enabling precise modeling of region-specific pathological variations. Multi-modal gating mechanisms dynamically integrate radiomics biomarkers and deep learning (DL) features (CNN, ViT, Mamba) to weight expert contributions optimally. Applied to interstitial lung disease (ILD) classification, REN achieves consistently superior performance: the radiomics-guided ensemble reached an average AUC of 0.8646 +/- 0.0467, a +12.5 percent improvement over the SwinUNETR baseline (AUC 0.7685, p = 0.031). Region-specific experts further revealed that lower-lobe models achieved AUCs of 0.88-0.90, surpassing DL counterparts (CNN: 0.76-0.79) and aligning with known disease progression patterns. Through rigorous patient-level cross-validation, REN demonstrates strong generalizability and clinical interpretability, presenting a scalable, anatomically-guided approach readily extensible to other structured medical imaging applications.
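The gating mechanism that weights expert contributions can be sketched as a softmax-gated mixture over per-expert class probabilities. This is a generic MoE illustration under invented numbers, not REN's multi-modal gate, and only three of the seven regional experts are shown.

```python
import numpy as np

def gated_ensemble(expert_probs, gate_logits):
    """Softmax-gated combination of per-expert predictions.
    expert_probs: (n_experts, n_classes); gate_logits: (n_experts,)."""
    g = np.exp(gate_logits - gate_logits.max())
    g = g / g.sum()                      # gate weights sum to 1
    return g @ expert_probs              # convex combination of predictions

# Three hypothetical regional experts; the gate favors expert 1
# (e.g., a lower-lobe expert for a lower-lobe-dominant ILD pattern).
probs = np.array([[0.6, 0.4],
                  [0.2, 0.8],
                  [0.5, 0.5]])
gates = np.array([0.0, 2.0, 0.0])
fused = gated_ensemble(probs, gates)
assert np.isclose(fused.sum(), 1.0) and fused[1] > fused[0]
```

In REN the gate logits would themselves be predicted from radiomics and deep features, so the weighting adapts per patient rather than being fixed.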

Arnela Hadzic, Simon Johannes Joham, Martin Urschler

arXiv preprint · Oct 6, 2025
Generating synthetic CT (sCT) from MRI or CBCT plays a crucial role in enabling MRI-only and CBCT-based adaptive radiotherapy, improving treatment precision while reducing patient radiation exposure. To address this task, we adopt a fully 3D Flow Matching (FM) framework, motivated by recent work demonstrating FM's efficiency in producing high-quality images. In our approach, a Gaussian noise volume is transformed into an sCT image by integrating a learned FM velocity field, conditioned on features extracted from the input MRI or CBCT using a lightweight 3D encoder. We evaluated the method on the SynthRAD2025 Challenge benchmark, training separate models for MRI $\rightarrow$ sCT and CBCT $\rightarrow$ sCT across three anatomical regions: abdomen, head and neck, and thorax. Validation and testing were performed through the challenge submission system. The results indicate that the method accurately reconstructs global anatomical structures; however, preservation of fine details was limited, primarily due to the relatively low training resolution imposed by memory and runtime constraints. Future work will explore patch-based training and latent-space flow models to improve resolution and local structural fidelity.
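The sampling step described above, transforming a noise volume into an sCT by integrating a learned velocity field, reduces to a numerical ODE solve. A minimal Euler-integration sketch under a toy closed-form velocity; the real model would replace `velocity` with the conditioned 3D network.

```python
import numpy as np

def integrate_flow(velocity, x0, steps=100):
    """Euler integration of dx/dt = v(x, t) from t=0 (noise) to t=1 (sample).
    `velocity` stands in for the learned, condition-aware FM network."""
    x, dt = np.asarray(x0, dtype=float), 1.0 / steps
    for i in range(steps):
        x = x + dt * velocity(x, i * dt)
    return x

# For the straight-path FM target v(x, t) = x1 - x0, Euler integration is
# exact and carries the noise x0 onto the data point x1.
x0, x1 = np.zeros(4), np.array([1.0, -2.0, 0.5, 3.0])
out = integrate_flow(lambda x, t: x1 - x0, x0, steps=50)
assert np.allclose(out, x1)
```

The straightness of optimal-transport FM paths is what lets such models generate with far fewer integration steps than diffusion samplers, which matters for full-3D volumes under memory constraints.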

Xin Li, Kaixiang Yang, Qiang Li, Zhiwei Wang

arXiv preprint · Oct 6, 2025
Dual-view mammography, including craniocaudal (CC) and mediolateral oblique (MLO) projections, offers complementary anatomical views crucial for breast cancer diagnosis. However, in real-world clinical workflows, one view may be missing, corrupted, or degraded due to acquisition errors or compression artifacts, limiting the effectiveness of downstream analysis. View-to-view translation can help recover missing views and improve lesion alignment. Unlike natural images, this task in mammography is highly challenging due to large non-rigid deformations and severe tissue overlap in X-ray projections, which obscure pixel-level correspondences. In this paper, we propose Column-Aware and Implicit 3D Diffusion (CA3D-Diff), a novel bidirectional mammogram view translation framework based on conditional diffusion model. To address cross-view structural misalignment, we first design a column-aware cross-attention mechanism that leverages the geometric property that anatomically corresponding regions tend to lie in similar column positions across views. A Gaussian-decayed bias is applied to emphasize local column-wise correlations while suppressing distant mismatches. Furthermore, we introduce an implicit 3D structure reconstruction module that back-projects noisy 2D latents into a coarse 3D feature volume based on breast-view projection geometry. The reconstructed 3D structure is refined and injected into the denoising UNet to guide cross-view generation with enhanced anatomical awareness. Extensive experiments demonstrate that CA3D-Diff achieves superior performance in bidirectional tasks, outperforming state-of-the-art methods in visual fidelity and structural consistency. Furthermore, the synthesized views effectively improve single-view malignancy classification in screening settings, demonstrating the practical value of our method in real-world diagnostics.
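The column-aware cross-attention idea, biasing attention toward keys at similar column positions with a Gaussian decay, can be sketched directly on the attention logits. This is a single-head numpy illustration of the mechanism as described, not the authors' code; shapes, `sigma`, and column indices are invented.

```python
import numpy as np

def column_biased_attention(q, k, v, cols_q, cols_k, sigma=2.0):
    """Cross-attention with a Gaussian penalty on column distance, so queries
    attend mostly to keys near the same column position in the other view."""
    logits = q @ k.T / np.sqrt(q.shape[-1])
    dist = cols_q[:, None] - cols_k[None, :]            # column offsets
    logits = logits - dist ** 2 / (2.0 * sigma ** 2)    # Gaussian-decayed bias
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)                # row-wise softmax
    return w @ v

rng = np.random.default_rng(2)
q = rng.standard_normal((4, 8))                         # 4 query positions
k = rng.standard_normal((6, 8))                         # 6 key positions
v = rng.standard_normal((6, 8))
out = column_biased_attention(q, k, v, np.arange(4.0), np.arange(6.0))
assert out.shape == (4, 8)
```

As `sigma` shrinks, each query collapses onto the key at its own column, recovering a hard column-matching prior; larger `sigma` relaxes toward ordinary cross-attention.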

Quang-Khai Bui-Tran, Minh-Toan Dinh, Thanh-Huy Nguyen, Ba-Thinh Lam, Mai-Anh Vu, Ulas Bagci

arXiv preprint · Oct 6, 2025
Accurate liver segmentation in multi-phase MRI is vital for liver fibrosis assessment, yet labeled data is often scarce and unevenly distributed across imaging modalities and vendor systems. We propose a label-efficient segmentation approach that promotes cross-modality generalization under real-world conditions, where GED4 hepatobiliary-phase annotations are limited, non-contrast sequences (T1WI, T2WI, DWI) are unlabeled, and spatial misalignment and missing phases are common. Our method integrates a foundation-scale 3D segmentation backbone adapted via fine-tuning, co-training with cross pseudo supervision to leverage unlabeled volumes, and a standardized preprocessing pipeline. Without requiring spatial registration, the model learns to generalize across MRI phases and vendors, demonstrating robust segmentation performance in both labeled and unlabeled domains. Our results exhibit the effectiveness of our proposed label-efficient baseline for liver segmentation in multi-phase, multi-vendor MRI and highlight the potential of combining foundation model adaptation with co-training for real-world clinical imaging tasks.
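Cross pseudo supervision, the co-training signal this abstract leans on, trains two networks on each other's hard pseudo-labels for unlabeled volumes. A per-batch numpy sketch of the loss (gradients and the stop-gradient on pseudo-labels are omitted; logits here stand in for per-voxel segmentation outputs).

```python
import numpy as np

def cps_loss(logits_a, logits_b):
    """Cross pseudo supervision on one unlabeled batch: each network is
    supervised by the argmax pseudo-labels of the other."""
    def ce(logits, labels):
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p = p / p.sum(axis=1, keepdims=True)
        return -np.mean(np.log(p[np.arange(len(labels)), labels]))
    pseudo_a = logits_a.argmax(axis=1)   # hard labels produced by network A
    pseudo_b = logits_b.argmax(axis=1)   # hard labels produced by network B
    return ce(logits_a, pseudo_b) + ce(logits_b, pseudo_a)

rng = np.random.default_rng(3)
la, lb = rng.standard_normal((16, 2)), rng.standard_normal((16, 2))
loss = cps_loss(la, lb)
assert loss > 0 and np.isfinite(loss)
```

When the two networks agree confidently the loss is near zero, so the signal concentrates on voxels where the models disagree, exactly the regions where unlabeled multi-phase data can sharpen the shared decision boundary.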
