
SMART: Self-supervised Learning for Metal Artifact Reduction in Computed Tomography Using Range Null Space Decomposition.

Wang T, Cao Y, Lu Z, Huang Y, Lu J, Fan F, Shan H, Zhang Y

PubMed · Oct 6, 2025
Metal artifacts in computed tomography (CT) imaging significantly hinder diagnostic accuracy and clinical decision-making. While deep learning-based metal artifact reduction (MAR) methods have demonstrated promising progress, their clinical application is still constrained by three major challenges: (1) balancing metal artifact reduction with the preservation of critical anatomical structures, (2) effectively capturing the clinical priors of metal artifacts, and (3) dynamically adapting to polychromatic spectral variations. To address these limitations, we propose a Self-supervised MAR method for computed Tomography (SMART) that leverages range-null space decomposition (RND) to model the linear attenuation coefficients (LACs) of metal and tissue separately, and employs an implicit neural representation (INR) to learn their respective clinical characteristics without explicit supervision. Specifically, RND decouples the metal and tissue LACs into a residual range component that models the metal LAC and captures metal artifacts, thereby facilitating their reduction, and a null component that models the tissue LAC and focuses on preserving tissue details. To deal with the lack of paired data in clinical settings, we use the INR to learn the clinical characteristics of these components in a self-supervised manner. Furthermore, SMART incorporates polychromatic spectra into the implicit representation, allowing dynamic adaptation to spectral variations across different imaging conditions. Extensive experiments on one synthetic and two clinical datasets demonstrate the strong potential of SMART in real-world scenarios: by flexibly adapting to spectral variations, it achieves superior generalizability to out-of-distribution clinical data.
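
The range-null space idea can be made concrete with a few lines of linear algebra. The sketch below is a generic toy illustration, not the SMART implementation: for a linear forward operator, any signal splits into a range component that is fully determined by the measurements and a null component that the measurements cannot see and must be supplied by a prior. The operator A and signal x here are random placeholders.

```python
import numpy as np

# Toy range-null space decomposition for a generic linear forward operator A.
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64))    # hypothetical under-determined forward operator
x = rng.standard_normal(64)          # hypothetical flattened image / LAC vector

A_pinv = np.linalg.pinv(A)
x_range = A_pinv @ A @ x             # range component: consistent with the measurements A @ x
x_null = x - x_range                 # null component: invisible to A, must come from a prior

assert np.allclose(A @ x_null, 0.0, atol=1e-6)         # contributes nothing to the data
assert np.allclose(A @ (x_range + x_null), A @ x)      # decomposition is exact
```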

Automated detection and characterization of small cell lung cancer liver metastasis on computed tomography.

Ty S, Haque F, Desai P, Takahashi N, Chaudhary U, Choyke PL, Thomas A, Türkbey B, Harmon SA

PubMed · Oct 6, 2025
Small cell lung cancer (SCLC) is an aggressive disease with diverse phenotypes that reflect the heterogeneous expression of tumor-related genes. Recent studies have shown that neuroendocrine (NE) transcription factors may be used to classify SCLC tumors with distinct therapeutic responses. The liver is a common site of metastatic disease in SCLC and can drive a poor prognosis. Here, we present a computational approach to detect and characterize metastatic SCLC (mSCLC) liver lesions and their associated NE-related phenotype as a method to improve patient management. This study utilized computed tomography scans of patients with hepatic lesions from two data sources for segmentation and classification of liver disease: (1) a public dataset from patients of various cancer types (segmentation; n = 131) and (2) an institutional cohort of patients with SCLC (segmentation and classification; n = 86). We developed deep learning segmentation algorithms and compared their performance for automatically detecting liver lesions, evaluating the results with and without the inclusion of the SCLC cohort. Following segmentation in the SCLC cohort, radiomic features were extracted from the detected lesions, and least absolute shrinkage and selection operator (LASSO) regression was utilized to select features from a training cohort (80/20 split). Subsequently, we trained radiomics-based machine learning classifiers to stratify patients based on their NE tumor profile, defined as expression levels of a preselected gene set derived from bulk RNA sequencing or circulating free DNA chromatin immunoprecipitation sequencing. Our liver lesion detection tool achieved lesion-based sensitivities of 66%-83% for the two datasets. In patients with mSCLC, the radiomics-based NE phenotype classifier distinguished patients as positive or negative for harboring an NE-like liver metastasis phenotype with an area under the receiver operating characteristic curve of 0.73 and an F1 score of 0.88 in the testing cohort. We demonstrate the potential of utilizing artificial intelligence (AI)-based platforms as clinical decision support systems, which could help clinicians determine treatment options for patients with SCLC based on their associated molecular tumor profile. Targeted therapy requires accurate molecular characterization of disease, which imaging and AI may help determine.
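
As a hedged sketch of the general recipe described above (LASSO-based selection of radiomic features followed by a classifier evaluated with AUC and F1), the snippet below uses scikit-learn on random placeholder data; the matrix X, labels y, and the 80/20 split are stand-ins, not the study's cohort or features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score, f1_score

rng = np.random.default_rng(42)
X = rng.standard_normal((86, 120))            # placeholder (patients x radiomic features) matrix
y = rng.integers(0, 2, size=86)               # placeholder NE-like phenotype labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr_s, X_te_s = scaler.transform(X_tr), scaler.transform(X_te)

lasso = LassoCV(cv=5, random_state=0).fit(X_tr_s, y_tr)    # LASSO feature selection
selected = np.flatnonzero(lasso.coef_)                      # keep features with nonzero weights
if selected.size == 0:                                      # fallback for this random toy data
    selected = np.arange(X.shape[1])

clf = LogisticRegression(max_iter=1000).fit(X_tr_s[:, selected], y_tr)
prob = clf.predict_proba(X_te_s[:, selected])[:, 1]
print("AUC:", roc_auc_score(y_te, prob), "F1:", f1_score(y_te, (prob > 0.5).astype(int)))
```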

ParkEnNET: a majority voting-based ensemble transfer learning framework for early Parkinson's disease detection.

Gupta A, Malhotra D

PubMed · Oct 6, 2025
Parkinson's Disease (PD) is a rapidly progressing neurodegenerative disorder that often presents with neuropsychiatric symptoms, affecting millions globally, particularly within aging populations. Addressing the urgent need for early and accurate diagnosis, this study introduces ParkEnNET, a Majority Voting-Based Ensemble Transfer Learning Framework for early PD detection. Traditional deep learning models, although powerful, require large labeled datasets and extensive computational resources, and are prone to overfitting when applied to small, noisy medical datasets. To overcome these limitations, ParkEnNET leverages transfer learning, utilizing pretrained deep learning models to efficiently extract relevant features from limited MRI data. By integrating the strengths of multiple models through a majority voting ensemble strategy, ParkEnNET effectively handles challenges such as data variability, class imbalance, and imaging noise. The framework was validated both through internal testing and on an independent clinical dataset collected from Superspeciality Hospital Jammu, ensuring real-world generalizability. Experimental results demonstrated that ParkEnNET achieved a diagnostic accuracy of 98.23%, with a precision of 100.0%, recall of 95.24%, and an F1-score of 97.44%, outperforming all individual models, including VGGNet, ResNet-50, and EfficientNet. These outcomes establish ParkEnNET as a promising diagnostic framework with strong performance on limited datasets, offering significant potential to enhance early clinical detection and timely intervention for Parkinson's Disease.
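
The majority-voting step itself is simple to express. The snippet below is a minimal sketch, not ParkEnNET's code: pred_vgg, pred_resnet, and pred_effnet are hypothetical binary predictions (0 = healthy, 1 = PD) that would come from the fine-tuned transfer-learning backbones.

```python
import numpy as np

pred_vgg    = np.array([1, 0, 1, 1, 0, 1])
pred_resnet = np.array([1, 0, 0, 1, 0, 1])
pred_effnet = np.array([1, 1, 1, 1, 0, 0])

votes = np.stack([pred_vgg, pred_resnet, pred_effnet])           # (n_models, n_subjects)
majority = (2 * votes.sum(axis=0) > votes.shape[0]).astype(int)  # class picked by most models
print(majority)   # -> [1 0 1 1 0 1]
```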

Pre-Training for Large-Scale Functional Connectome Fingerprinting Supports Generalization and Transfer Learning in Functional Neuroimaging

Ogg, M., Kitchell, L.

bioRxiv preprint · Oct 6, 2025
Functional MRI currently supports a limited application space stemming from modest dataset sizes, large interindividual variability, and heterogeneity among scanning protocols. These constraints have made it difficult for fMRI researchers to take advantage of modern deep-learning tools that have revolutionized other fields such as NLP, speech transcription, and image recognition. To address these issues, we scaled up functional connectome fingerprinting as a neural network pre-training task, drawing inspiration from speaker recognition research, to learn a generalizable representation of brain function. This approach sets a new high-water mark for neural fingerprinting on a previously unseen scale, across many popular public fMRI datasets (individual recognition over held-out scan sessions: 94% on MPI-Leipzig, 94% on NKI-Rockland, 73% on OASIS-3, and 99% on HCP). Near-ceiling performance is maintained even when the duration of the evaluation scan is truncated to less than two minutes. We show that this representation can also generalize to support accurate neural fingerprinting for completely new datasets and participants not used in training. Finally, we demonstrate that the representation learned by the network encodes features related to individual variability that support some transfer learning to new tasks. These results open the door for a new generation of clinical applications based on functional imaging data.
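
For readers unfamiliar with connectome fingerprinting, the classical correlation-based baseline (which the pre-trained network is meant to generalize beyond) can be sketched in a few lines. Everything here is synthetic placeholder data: each subject's session-2 connectome is matched to the most similar session-1 connectome by Pearson correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_edges = 20, 500
base = rng.standard_normal((n_subjects, n_edges))                # subject-specific "trait" pattern
sess1 = base + 0.3 * rng.standard_normal((n_subjects, n_edges))  # hypothetical scan session 1
sess2 = base + 0.3 * rng.standard_normal((n_subjects, n_edges))  # hypothetical scan session 2

sim = np.corrcoef(sess2, sess1)[:n_subjects, n_subjects:]   # (query subject x database subject)
predicted = sim.argmax(axis=1)                               # best-matching database entry
accuracy = np.mean(predicted == np.arange(n_subjects))
print(f"identification accuracy: {accuracy:.2%}")
```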

Accuracy and reproducibility of large language model measurements of liver metastases: comparison with radiologist measurements.

Sugawara H, Takada A, Kato S

PubMed · Oct 4, 2025
To compare the accuracy and reproducibility of lesion-diameter measurements performed by three state-of-the-art large language models (LLMs) with those obtained by radiologists. In this retrospective study using a public database, 83 patients with solitary colorectal-cancer liver metastases were identified. From each CT series, a radiologist extracted the single axial slice showing the maximal tumor diameter and converted it to a 512 × 512-pixel PNG image (window level 50 HU, window width 400 HU) with pixel size encoded in the filename. Three LLMs, ChatGPT-o3 (OpenAI), Gemini 2.5 Pro (Google), and Claude 4 Opus (Anthropic), were prompted to estimate the longest lesion diameter twice, ≥ 1 week apart. Two board-certified radiologists (12 years' experience each) independently measured the same single-slice images, and one radiologist repeated the measurements after ≥ 1 week. Agreement was assessed with intraclass correlation coefficients (ICC); 95% confidence intervals were obtained by bootstrap resampling (5000 iterations). Radiologist inter-observer agreement was excellent (ICC = 0.95, 95% CI 0.86-0.99); intra-observer agreement was 0.98 (95% CI 0.94-0.99). Gemini achieved good model-to-radiologist agreement (ICC = 0.81, 95% CI 0.68-0.89) and intra-model reproducibility (ICC = 0.78, 95% CI 0.65-0.87). GPT-o3 showed moderate agreement (ICC = 0.52) and poor reproducibility (ICC = 0.25); Claude showed poor agreement (ICC = 0.07) and reproducibility (ICC = 0.47). LLMs do not yet match radiologists in measuring colorectal cancer liver metastases; however, Gemini's good agreement and reproducibility highlight the rapid progress of LLM image-interpretation capabilities.
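
The agreement analysis generalizes well beyond this study. Below is a minimal sketch, not the authors' code, of ICC(2,1) with a bootstrap confidence interval; the two rater columns are simulated placeholder diameter measurements in millimeters.

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """Two-way random, single-measure ICC(2,1) for an (n_targets x n_raters) matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)      # between-targets mean square
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)      # between-raters mean square
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))            # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(0)
true = rng.uniform(10, 60, size=83)                            # placeholder "true" diameters (mm)
ratings = np.column_stack([true + rng.normal(0, 1.5, 83), true + rng.normal(0, 1.5, 83)])

boot = [icc2_1(ratings[rng.integers(0, len(ratings), len(ratings))]) for _ in range(5000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"ICC = {icc2_1(ratings):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```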

Breast cancer prediction using mammography exams for real hospital settings.

Pathak S, Schlötterer J, Geerdink J, Veltman J, van Keulen M, Strisciuglio N, Seifert C

PubMed · Oct 4, 2025
Breast cancer prediction models for mammography assume that annotations are available for individual images or regions of interest (ROIs), and that there is a fixed number of images per patient. These assumptions do not hold in real hospital settings, where clinicians provide only a final diagnosis for the entire mammography exam (case). Since data in real hospital settings scales with continuous patient intake, while manual annotation efforts do not, we develop a framework for case-level breast cancer prediction that does not require any manual annotation and can be trained with case labels readily available at the hospital. Specifically, we propose a two-level multi-instance learning (MIL) approach at patch and image level for case-level breast cancer prediction and evaluate it on two public and one private dataset. We also propose a novel domain-specific MIL pooling strategy, motivated by the observation that breast cancer may affect one or both breasts, even though images of both breasts are routinely acquired as a precaution during mammography. In addition, we propose a dynamic training procedure that trains our MIL framework on a variable number of images per case. We show that our two-level MIL model can be applied in real hospital settings where only case labels and a variable number of images per case are available, without any loss in performance compared to models trained on image labels. Although trained only with weak (case-level) labels, it can point out in which breast, mammography view, and view region the abnormality lies.
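
Attention-based pooling over a variable-size bag of image embeddings is the core mechanism a case-level model like this relies on. The module below is a generic PyTorch sketch (a plain attention pool in the spirit of common MIL work), not the paper's exact two-level, side-aware pooling; the 128-dimensional embeddings and bag sizes are placeholders.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Pool a variable number of image embeddings into one case-level prediction."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.classifier = nn.Linear(dim, 1)

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        # bag: (n_images_in_case, dim); n_images may differ between cases
        weights = torch.softmax(self.attn(bag), dim=0)   # attention weight per image
        case_embedding = (weights * bag).sum(dim=0)      # weighted average over the bag
        return self.classifier(case_embedding)           # case-level malignancy logit

pool = AttentionMILPooling(dim=128)
case_a = torch.randn(4, 128)   # e.g. a standard four-view exam
case_b = torch.randn(7, 128)   # an exam with additional images
print(pool(case_a).shape, pool(case_b).shape)   # torch.Size([1]) torch.Size([1])
```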

MambaCAFU: Hybrid Multi-Scale and Multi-Attention Model with Mamba-Based Fusion for Medical Image Segmentation

T-Mai Bui, Fares Bougourzi, Fadi Dornaika, Vinh Truong Hoang

arXiv preprint · Oct 4, 2025
In recent years, deep learning has shown near-expert performance in segmenting complex medical tissues and tumors. However, existing models are often task-specific, with performance varying across modalities and anatomical regions. Balancing model complexity and performance remains challenging, particularly in clinical settings where both accuracy and efficiency are critical. To address these issues, we propose a hybrid segmentation architecture featuring a three-branch encoder that integrates CNNs, Transformers, and a Mamba-based Attention Fusion (MAF) mechanism to capture local, global, and long-range dependencies. A multi-scale attention-based CNN decoder reconstructs fine-grained segmentation maps while preserving contextual consistency. Additionally, a co-attention gate enhances feature selection by emphasizing relevant spatial and semantic information across scales during both encoding and decoding, improving feature interaction and cross-scale communication. Extensive experiments on multiple benchmark datasets show that our approach outperforms state-of-the-art methods in accuracy and generalization, while maintaining comparable computational complexity. By effectively balancing efficiency and effectiveness, our architecture offers a practical and scalable solution for diverse medical imaging tasks. Source code and trained models will be publicly released upon acceptance to support reproducibility and further research.
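 
To make the gating idea tangible, here is a deliberately minimal co-attention-style gate in PyTorch: one branch's features modulate the other's through a learned sigmoid map. It only illustrates the general mechanism; the paper's co-attention gate, Mamba-based fusion, and three-branch encoder are considerably richer, and the channel and input sizes below are placeholders.

```python
import torch
import torch.nn as nn

class CoAttentionGate(nn.Module):
    """Fuse two encoder branches through a learned spatial gating map."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, kernel_size=1), nn.Sigmoid())

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # feat_a, feat_b: (batch, channels, H, W) feature maps from two branches
        attention = self.gate(torch.cat([feat_a, feat_b], dim=1))   # gating values in (0, 1)
        return feat_a * attention + feat_b * (1 - attention)        # gated blend of the branches

gate = CoAttentionGate(channels=32)
fused = gate(torch.randn(2, 32, 64, 64), torch.randn(2, 32, 64, 64))
print(fused.shape)   # torch.Size([2, 32, 64, 64])
```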

Brain metabolic imaging with 18F-PET-CT and machine-learning clustering analysis reveal divergent metabolic phenotypes in patients with amyotrophic lateral sclerosis.

Zhang J, Han F, Wang X, Wu F, Song X, Liu Q, Wang J, Grecucci A, Zhang Y, Yi X, Chen BT

PubMed · Oct 3, 2025
Amyotrophic lateral sclerosis (ALS) is a fatal neurodegenerative disorder characterized by significant clinicopathologic heterogeneity. This study aimed to identify distinct ALS phenotypes by integrating brain 18F-fluorodeoxyglucose positron emission tomography-computed tomography (18F-FDG PET-CT) metabolic imaging with consensus clustering. We prospectively enrolled 127 patients with ALS and 128 healthy controls. All participants underwent brain 18F-FDG PET-CT metabolic imaging, psychological questionnaires, and functional screening. K-means consensus clustering was applied to define neuroimaging-based phenotypes. Survival analyses were also performed. Whole exome sequencing (WES) was utilized to detect ALS-related genetic mutations, followed by GO/KEGG pathway enrichment and imaging-transcriptome analysis based on brain metabolic activity on the 18F-FDG PET-CT imaging. Consensus clustering identified two metabolic phenotypes according to their glucose metabolic activity patterns: a metabolic attenuation phenotype and a metabolic non-attenuation phenotype. The metabolic attenuation phenotype was associated with worse survival (p = 0.022), poorer physical function (p = 0.005), more severe depression (p = 0.026), and greater anxiety levels (p = 0.05). WES and neuroimaging-transcriptome analysis identified specific gene mutations and molecular pathways associated with each phenotype. We identified two distinct ALS phenotypes with differing clinicopathologic features, indicating that unsupervised machine learning applied to PET imaging may effectively classify metabolic subtypes of ALS. These findings contribute novel insights into the heterogeneous pathophysiology of ALS, which should inform personalized therapeutic strategies for patients with ALS.
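
Consensus k-means clustering of subject-level features follows a standard recipe: repeated clustering on subsamples, accumulation of a co-association matrix, and a final clustering of that matrix. The sketch below shows that generic recipe with scikit-learn on random placeholder data (standing in for regional 18F-FDG uptake features); it is not the study's pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(0)
X = rng.standard_normal((127, 40))          # placeholder (patients x metabolic features) matrix
n, n_runs, k = X.shape[0], 100, 2

coassoc = np.zeros((n, n))
counts = np.zeros((n, n))
for _ in range(n_runs):
    idx = rng.choice(n, size=int(0.8 * n), replace=False)          # 80% subsample per run
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(X[idx])
    same = (labels[:, None] == labels[None, :]).astype(float)      # co-clustered in this run?
    coassoc[np.ix_(idx, idx)] += same
    counts[np.ix_(idx, idx)] += 1
consensus = np.divide(coassoc, counts, out=np.zeros_like(coassoc), where=counts > 0)

final = AgglomerativeClustering(n_clusters=k, metric="precomputed", linkage="average")
phenotype = final.fit_predict(1 - consensus)                        # cluster consensus distances
print(np.bincount(phenotype))                                       # subjects per phenotype
```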

Enhanced retinal blood vessel segmentation via loss balancing in dense generative adversarial networks with quick attention mechanisms.

Sandeep D, Baranitharan K, Padmavathi A, Guganathan L

PubMed · Oct 3, 2025
Manual segmentation of retinal blood vessels in fundus images has been widely used for detecting vascular occlusion, diabetic retinopathy, and other retinal conditions. However, existing automated methods face challenges in accurately segmenting fine vessels and optimizing loss functions effectively. This study aims to develop an integrated framework that enhances vessel segmentation accuracy and robustness for clinical applications. The proposed pipeline integrates multiple advanced techniques to address the limitations of current approaches. In preprocessing, Quasi-Cross Bilateral Filtering (QCBF) is applied to reduce noise and enhance vessel visibility. Feature extraction is performed using a Directed Acyclic Graph Neural Network with VGG16 (DAGNN-VGG16) for hierarchical and topologically aware representation learning. Segmentation is achieved using a Dense Generative Adversarial Network with Quick Attention Network (Dense GAN-QAN), which balances loss and emphasizes critical vessel features. To further optimize training convergence, the Swarm Bipolar Algorithm (SBA) is employed for loss minimization. The method was evaluated on three benchmark retinal vessel segmentation datasets (CHASE-DB1, STARE, and DRIVE) using sixfold cross-validation. The proposed approach achieved consistently high performance, with mean accuracy of 99.87%, F1-score of 99.82%, precision of 99.84%, recall of 99.78%, and specificity of 99.87% across all datasets, demonstrating strong generalization and robustness. The integrated QCBF-DAGNN-VGG16-Dense GAN-QAN-SBA framework advances the state of the art in retinal vessel segmentation by effectively handling fine vessel structures and ensuring optimized training. Its consistently high performance across multiple datasets highlights its potential for reliable clinical deployment in retinal disease detection and diagnosis.
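
The preprocessing stage can be approximated with off-the-shelf tools. The snippet below is a stand-in sketch using OpenCV's ordinary bilateral filter plus CLAHE on the green channel, not the paper's Quasi-Cross Bilateral Filtering; "fundus.png" and the filter parameters are placeholders.

```python
import cv2

bgr = cv2.imread("fundus.png")                         # hypothetical fundus image path
green = bgr[:, :, 1]                                   # vessels show best contrast in green

denoised = cv2.bilateralFilter(green, d=9, sigmaColor=75, sigmaSpace=75)  # edge-preserving smoothing
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(denoised)                       # boost local vessel contrast

cv2.imwrite("fundus_preprocessed.png", enhanced)
```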

Genome-wide analysis of brain age identifies 59 associated loci and unveils relationships with mental and physical health.

Jawinski P, Forstbach H, Kirsten H, Beyer F, Villringer A, Witte AV, Scholz M, Ripke S, Markett S

PubMed · Oct 3, 2025
Neuroimaging and machine learning are advancing research into the mechanisms of biological aging. In this field, 'brain age gap' has emerged as a promising magnetic resonance imaging-based biomarker that quantifies the deviation between an individual's biological and chronological age of the brain. Here we conducted an in-depth genomic analysis of the brain age gap and its relationships with over 1,000 health traits. Genome-wide analyses in up to 56,348 individuals unveiled a heritability of 23-29% attributable to common genetic variants and highlighted 59 associated loci (39 novel). The leading locus encompasses MAPT, encoding the tau protein central to Alzheimer's disease. Genetic correlations revealed relationships with mental health, physical health, lifestyle and socioeconomic traits, including depressed mood, diabetes, alcohol intake and income. Mendelian randomization indicated a causal role of high blood pressure and type 2 diabetes in accelerated brain aging. Our study highlights key genes and pathways related to neurogenesis, immune-system-related processes and small GTPase binding, laying the foundation for further mechanistic exploration.
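
The phenotype at the center of this analysis is easy to state concretely: brain age gap is predicted brain age minus chronological age, usually residualized against chronological age before downstream association testing. The sketch below illustrates that definition on simulated numbers; the predicted ages are placeholders that would come from an MRI-based brain-age model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
chronological_age = rng.uniform(45, 80, size=1000)
predicted_age = chronological_age + rng.normal(0, 5, size=1000)      # hypothetical model output

gap = predicted_age - chronological_age                              # raw brain age gap
bias = LinearRegression().fit(chronological_age.reshape(-1, 1), gap)
gap_adjusted = gap - bias.predict(chronological_age.reshape(-1, 1))  # age-bias-corrected gap

print(round(gap_adjusted.mean(), 3), round(gap_adjusted.std(), 3))
```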