Page 64 of 142 (1420 results)

M$^3$AD: Multi-task Multi-gate Mixture of Experts for Alzheimer's Disease Diagnosis with Conversion Pattern Modeling

Yufeng Jiang, Hexiao Ding, Hongzhao Chen, Jing Lan, Xinzhi Teng, Gerald W. Y. Cheng, Zongxi Li, Haoran Xie, Jung Sun Yoo, Jing Cai

arXiv preprint · Aug 3, 2025
Alzheimer's disease (AD) progression follows a complex continuum from normal cognition (NC) through mild cognitive impairment (MCI) to dementia, yet most deep learning approaches oversimplify this into discrete classification tasks. This study introduces M$^3$AD, a novel multi-task multi-gate mixture of experts framework that jointly addresses diagnostic classification and cognitive transition modeling using structural MRI. We incorporate three key innovations: (1) an open-source T1-weighted sMRI preprocessing pipeline, (2) a unified learning framework capturing NC-MCI-AD transition patterns with demographic priors (age, gender, brain volume) for improved generalization, and (3) a customized multi-gate mixture of experts architecture enabling effective multi-task learning with structural MRI alone. The framework employs specialized expert networks for diagnosis-specific pathological patterns while shared experts model common structural features across the cognitive continuum. A two-stage training protocol combines SimMIM pretraining with multi-task fine-tuning for joint optimization. Comprehensive evaluation across six datasets comprising 12,037 T1-weighted sMRI scans demonstrates superior performance: 95.13% accuracy for three-class NC-MCI-AD classification and 99.15% for binary NC-AD classification, representing improvements of 4.69% and 0.55% over state-of-the-art approaches. The multi-task formulation simultaneously achieves 97.76% accuracy in predicting cognitive transition. Our framework outperforms existing methods using fewer modalities and offers a clinically practical solution for early intervention. Code: https://github.com/csyfjiang/M3AD.
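The multi-gate routing the abstract describes — every task mixing the same pool of expert outputs through its own softmax gate — can be sketched in a few lines. The dimensions, linear experts, and single-sample forward pass below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: d-dim sMRI+demographic embedding, E experts, T tasks, h hidden
d, E, T, h = 16, 4, 2, 8
W_experts = rng.normal(size=(E, d, h))   # one linear expert per slot
W_gates = rng.normal(size=(T, d, E))     # one gate per task

def mmoe_forward(x):
    """Multi-gate mixture of experts: all tasks share the expert outputs,
    but each task mixes them with its own softmax gate over experts."""
    expert_out = np.einsum('d,edh->eh', x, W_experts)   # (E, h) expert features
    task_feats = []
    for t in range(T):
        gate = softmax(x @ W_gates[t])                  # (E,) mixing weights
        task_feats.append(gate @ expert_out)            # (h,) task-specific mixture
    return np.stack(task_feats)                         # (T, h)

x = rng.normal(size=d)
feats = mmoe_forward(x)
```

In the paper's setting one gate would feed the diagnostic head and the other the transition-prediction head, with shared experts capturing common structural features.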

EfficientGFormer: Multimodal Brain Tumor Segmentation via Pruned Graph-Augmented Transformer

Fatemeh Ziaeetabar

arXiv preprint · Aug 2, 2025
Accurate and efficient brain tumor segmentation remains a critical challenge in neuroimaging due to the heterogeneous nature of tumor subregions and the high computational cost of volumetric inference. In this paper, we propose EfficientGFormer, a novel architecture that integrates pretrained foundation models with graph-based reasoning and lightweight efficiency mechanisms for robust 3D brain tumor segmentation. Our framework leverages nnFormer as a modality-aware encoder, transforming multi-modal MRI volumes into patch-level embeddings. These features are structured into a dual-edge graph that captures both spatial adjacency and semantic similarity. A pruned, edge-type-aware Graph Attention Network (GAT) enables efficient relational reasoning across tumor subregions, while a distillation module transfers knowledge from a full-capacity teacher to a compact student model for real-time deployment. Experiments on the MSD Task01 and BraTS 2021 datasets demonstrate that EfficientGFormer achieves state-of-the-art accuracy with significantly reduced memory and inference time, outperforming recent transformer-based and graph-based baselines. This work offers a clinically viable solution for fast and accurate volumetric tumor delineation, combining scalability, interpretability, and generalization.
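The abstract does not spell out the distillation objective used to transfer the full-capacity teacher into the compact student; a common choice is the temperature-scaled soft-target KL loss, sketched here as an assumption rather than the authors' method:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Hinton-style knowledge distillation: KL divergence between the
    temperature-softened teacher and student class distributions,
    scaled by T^2 to keep gradient magnitudes comparable."""
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)   # student predictions
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)
```

A student whose logits already match the teacher incurs zero loss, so the term only pushes the compact model toward the teacher's softened per-voxel class distribution.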

Deep learning-driven incidental detection of vertebral fractures in cancer patients: advancing diagnostic precision and clinical management.

Mniai EM, Laletin V, Tselikas L, Assi T, Bonnet B, Camez AO, Zemmouri A, Muller S, Moussa T, Chaibi Y, Kiewsky J, Quenet S, Avare C, Lassau N, Balleyguier C, Ayobi A, Ammari S

PubMed · Aug 2, 2025
Vertebral compression fractures (VCFs) are the most prevalent skeletal manifestations of osteoporosis in cancer patients. Yet, they are frequently missed or not reported in routine clinical radiology, adversely impacting patient outcomes and quality of life. This study evaluates the diagnostic performance of a deep-learning (DL)-based application and its potential to reduce the miss rate of incidental VCFs in a high-risk cancer population. We retrospectively analysed thoraco-abdomino-pelvic (TAP) CT scans from 1556 patients with stage IV cancer collected consecutively over a 4-month period (September-December 2023) in a tertiary cancer center. A DL-based application flagged cases positive for VCFs, which were subsequently reviewed by two expert radiologists for validation. Additionally, grade 3 fractures identified by the application were independently assessed by two expert interventional radiologists to determine their eligibility for vertebroplasty. Of the 1556 cases, 501 were flagged as positive for VCF by the application, with 436 confirmed as true positives by expert review, yielding a positive predictive value (PPV) of 87%. Common causes of false positives included sclerotic vertebral metastases, scoliosis, and vertebrae misidentification. Notably, 83.5% (364/436) of true positive VCFs were absent from radiology reports, indicating a substantial non-report rate in routine practice. Ten grade 3 fractures were overlooked or not reported by radiologists. Among them, 9 were deemed suitable for vertebroplasty by expert interventional radiologists. This study underscores the potential of DL-based applications to improve the detection of VCFs. The analyzed tool can assist radiologists in detecting more incidental vertebral fractures in adult cancer patients, optimising timely treatment and reducing associated morbidity and economic burden. Moreover, it might enhance patient access to interventional treatments such as vertebroplasty. 
These findings highlight the transformative role that DL can play in optimising clinical management and outcomes for osteoporosis-related VCFs in cancer patients.
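The headline numbers are easy to re-derive from the counts the abstract reports — a quick sanity check of the PPV and the non-report rate:

```python
# Counts as reported in the abstract
flagged = 501           # cases the DL application flagged as VCF-positive
true_positives = 436    # confirmed by expert review
unreported = 364        # true positives absent from the original radiology reports

ppv = true_positives / flagged              # positive predictive value
non_report_rate = unreported / true_positives
```

Both ratios reproduce the quoted figures: PPV ≈ 87% and a non-report rate of ≈ 83.5%.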

AI enhanced diagnostic accuracy and workload reduction in hepatocellular carcinoma screening.

Lu RF, She CY, He DN, Cheng MQ, Wang Y, Huang H, Lin YD, Lv JY, Qin S, Liu ZZ, Lu ZR, Ke WP, Li CQ, Xiao H, Xu ZF, Liu GJ, Yang H, Ren J, Wang HB, Lu MD, Huang QH, Chen LD, Wang W, Kuang M

PubMed · Aug 2, 2025
Hepatocellular carcinoma (HCC) ultrasound screening encounters challenges related to accuracy and the workload of radiologists. This retrospective, multicenter study assessed four artificial intelligence (AI) enhanced strategies using 21,934 liver ultrasound images from 11,960 patients to improve HCC ultrasound screening accuracy and reduce radiologist workload. UniMatch was used for lesion detection and LivNet for classification, trained on 17,913 images. Among the strategies tested, Strategy 4, which combined AI for initial detection and radiologist evaluation of negative cases in both detection and classification phases, outperformed the others. It not only nearly matched the high sensitivity of the original algorithm (0.956 vs. 0.991) but also improved specificity (0.787 vs. 0.698), reduced radiologist workload by 54.5%, and decreased both recall and false positive rates. This approach demonstrates a successful model of human-AI collaboration, not only enhancing clinical outcomes but also mitigating unnecessary patient anxiety and system burden by minimizing recalls and false positives.
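The routing behind Strategy 4 can be sketched as a toy triage function: AI-positive cases pass straight through, and a radiologist re-reads every case the AI calls negative at either phase. The field names and the exact review rule below are assumptions for illustration, not the study's protocol:

```python
def strategy4(cases):
    """Route each case: cases the AI both detects and classifies as positive
    pass through without radiologist review; a radiologist re-reads every
    case the AI calls negative at either phase. Returns the reviewed subset
    and the fraction of total cases a radiologist must still read."""
    reviewed = [c for c in cases
                if not (c['ai_detected'] and c['ai_positive'])]
    return reviewed, len(reviewed) / len(cases)

# Toy worklist with hypothetical AI flags (not the study's data):
cases = [
    {'ai_detected': True,  'ai_positive': True},
    {'ai_detected': True,  'ai_positive': False},
    {'ai_detected': False, 'ai_positive': False},
    {'ai_detected': True,  'ai_positive': True},
]
reviewed, workload = strategy4(cases)
```

The workload fraction is what the study reports as a 54.5% reduction: radiologists read only the AI-negative residue rather than the full worklist.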

Integrating Time and Frequency Domain Features of fMRI Time Series for Alzheimer's Disease Classification Using Graph Neural Networks.

Peng W, Li C, Ma Y, Dai W, Fu D, Liu L, Liu L, Yu N, Liu J

PubMed · Aug 2, 2025
Accurate and early diagnosis of Alzheimer's Disease (AD) is crucial for timely interventions and treatment advancement. Functional Magnetic Resonance Imaging (fMRI), measuring brain blood-oxygen level changes over time, is a powerful AD-diagnosis tool. However, current fMRI-based AD diagnosis methods rely on noise-susceptible time-domain features and focus only on synchronous brain-region interactions in the same time phase, neglecting asynchronous ones. To overcome these issues, we propose Frequency-Time Fusion Graph Neural Network (FTF-GNN). It integrates frequency- and time-domain features for robust AD classification, considering both asynchronous and synchronous brain-region interactions. First, we construct a fully connected hypervariate graph, where nodes represent brain regions and their Blood Oxygen Level-Dependent (BOLD) values at a time series point. A Discrete Fourier Transform (DFT) transforms these BOLD values from the spatial to the frequency domain for frequency-component analysis. Second, a Fourier-based Graph Neural Network (FourierGNN) processes the frequency features to capture asynchronous brain region connectivity patterns. Third, these features are converted back to the time domain and reshaped into a matrix where rows represent brain regions and columns represent their frequency-domain features at each time point. Each brain region then fuses its frequency-domain features with position encoding along the time series, preserving temporal and spatial information. Next, we build a brain-region network based on synchronous BOLD value associations and input the brain-region network and the fused features into a Graph Convolutional Network (GCN) to capture synchronous brain region connectivity patterns. Finally, a fully connected network classifies the brain-region features. 
Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate the method's effectiveness: Our model achieves 91.26% accuracy and 96.79% AUC in AD versus Normal Control (NC) classification, showing promising performance. For early-stage detection, it attains state-of-the-art performance in distinguishing NC from Late Mild Cognitive Impairment (LMCI) with 87.16% accuracy and 93.22% AUC. Notably, in the challenging task of differentiating LMCI from AD, FTF-GNN achieves optimal performance (85.30% accuracy, 94.56% AUC), while also delivering competitive results (77.40% accuracy, 91.17% AUC) in distinguishing Early MCI (EMCI) from LMCI, the most clinically complex subtype classification. These results indicate that leveraging complementary frequency- and time-domain information, along with considering asynchronous and synchronous brain-region interactions, can address existing approach limitations, offering a robust neuroimaging-based diagnostic solution.
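The time-to-frequency step and its inverse can be illustrated with NumPy's real FFT; the parcellation size and scan length below are placeholders, and the round trip back to the time domain mirrors the paper's third stage:

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 90, 120          # hypothetical parcellation and scan length
bold = rng.normal(size=(n_regions, n_timepoints))   # stand-in BOLD time series

# Per-region DFT: rows stay brain regions, columns become frequency bins
spectrum = np.fft.rfft(bold, axis=1)       # (90, 61) complex coefficients
freq_feats = np.abs(spectrum)              # magnitude features for the frequency GNN

# Inverse transform recovers the time-domain signal exactly, so frequency
# processing loses nothing before the fusion-and-GCN stage
recon = np.fft.irfft(spectrum, n=n_timepoints, axis=1)
```

For a real signal of length 120, `rfft` keeps only the 61 non-redundant bins, which is why the frequency feature matrix is narrower than the original time series.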

Light Convolutional Neural Network to Detect Chronic Obstructive Pulmonary Disease (COPDxNet): A Multicenter Model Development and External Validation Study.

Rabby ASA, Chaudhary MFA, Saha P, Sthanam V, Nakhmani A, Zhang C, Barr RG, Bon J, Cooper CB, Curtis JL, Hoffman EA, Paine R, Puliyakote AK, Schroeder JD, Sieren JC, Smith BM, Woodruff PG, Reinhardt JM, Bhatt SP, Bodduluri S

PubMed · Aug 1, 2025
Approximately 70% of adults with chronic obstructive pulmonary disease (COPD) remain undiagnosed. Opportunistic screening using chest computed tomography (CT) scans, commonly acquired in clinical practice, may be used to improve COPD detection through simple, clinically applicable deep-learning models. We developed a lightweight convolutional neural network (COPDxNet) that utilizes minimally processed chest CT scans to detect COPD. We analyzed 13,043 inspiratory chest CT scans from COPDGene participants (9,675 standard-dose and 3,368 low-dose scans), which we randomly split into training (70%) and test (30%) sets at the participant level so that no individual contributed to both sets. COPD was defined by postbronchodilator FEV1/FVC < 0.70. We constructed a simple, four-block convolutional model that was trained on pooled data and validated on the held-out standard- and low-dose test sets. External validation was performed using standard-dose CT scans from 2,890 SPIROMICS participants and low-dose CT scans from 7,893 participants in the National Lung Screening Trial (NLST). We evaluated performance using the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, Brier scores, and calibration curves. On COPDGene standard-dose CT scans, COPDxNet achieved an AUC of 0.92 (95% CI: 0.91 to 0.93), sensitivity of 80.2%, and specificity of 89.4%. On low-dose scans, AUC was 0.88 (95% CI: 0.86 to 0.90). When the COPDxNet model was applied to external validation datasets, it showed an AUC of 0.92 (95% CI: 0.91 to 0.93) in SPIROMICS and 0.82 (95% CI: 0.81 to 0.83) on NLST. The model was well-calibrated, with Brier scores of 0.11 for standard-dose and 0.13 for low-dose CT scans in COPDGene, 0.12 in SPIROMICS, and 0.17 in NLST.
COPDxNet demonstrates high discriminative accuracy and generalizability for detecting COPD on standard- and low-dose chest CT scans, supporting its potential for clinical and screening applications across diverse populations.
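The evaluation metrics quoted above are straightforward to compute from predicted probabilities and binary labels; this is a minimal sketch, not the authors' evaluation code:

```python
import numpy as np

def brier_score(y_true, p_pred):
    """Mean squared difference between predicted probability and outcome;
    lower is better-calibrated (0 = perfect)."""
    y, p = np.asarray(y_true, float), np.asarray(p_pred, float)
    return float(np.mean((p - y) ** 2))

def sens_spec(y_true, p_pred, thr=0.5):
    """Sensitivity and specificity at a probability threshold."""
    y = np.asarray(y_true, bool)
    pred = np.asarray(p_pred, float) >= thr
    sensitivity = float((pred & y).sum() / y.sum())
    specificity = float((~pred & ~y).sum() / (~y).sum())
    return sensitivity, specificity
```

Unlike AUC, the Brier score penalizes overconfident probabilities directly, which is why the study reports both alongside calibration curves.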

Mobile U-ViT: Revisiting large kernel and U-shaped ViT for efficient medical image segmentation

Fenghe Tang, Bingkun Nian, Jianrui Ding, Wenxin Ma, Quan Quan, Chengqi Dong, Jie Yang, Wei Liu, S. Kevin Zhou

arXiv preprint · Aug 1, 2025
In clinical practice, medical image analysis often requires efficient execution on resource-constrained mobile devices. However, existing mobile models-primarily optimized for natural images-tend to perform poorly on medical tasks due to the significant information density gap between natural and medical domains. Combining computational efficiency with medical imaging-specific architectural advantages remains a challenge when developing lightweight, universal, and high-performing networks. To address this, we propose a mobile model called Mobile U-shaped Vision Transformer (Mobile U-ViT) tailored for medical image segmentation. Specifically, we employ the newly proposed ConvUtr as a hierarchical patch embedding, featuring a parameter-efficient large-kernel CNN with inverted bottleneck fusion. This design exhibits transformer-like representation learning capacity while being lighter and faster. To enable efficient local-global information exchange, we introduce a novel Large-kernel Local-Global-Local (LGL) block that effectively balances the low information density and high-level semantic discrepancy of medical images. Finally, we incorporate a shallow and lightweight transformer bottleneck for long-range modeling and employ a cascaded decoder with downsample skip connections for dense prediction. Despite its reduced computational demands, our medical-optimized architecture achieves state-of-the-art performance across eight public 2D and 3D datasets covering diverse imaging modalities, including zero-shot testing on four unseen datasets. These results establish it as an efficient yet powerful and generalizable solution for mobile medical image analysis. Code is available at https://github.com/FengheTan9/Mobile-U-ViT.
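The savings behind a "parameter-efficient large-kernel CNN" can be made concrete with a quick weight count; the 7x7 depthwise + 1x1 pointwise decomposition and the channel width below are illustrative assumptions, not the exact ConvUtr design:

```python
def conv_params(c_in, c_out, k, depthwise=False):
    """Weight count of a k x k convolution layer (bias omitted).
    Depthwise convs apply one k x k filter per input channel."""
    return c_in * k * k if depthwise else c_in * c_out * k * k

channels = 64
# Large-kernel recipe: 7x7 depthwise conv followed by a 1x1 pointwise conv
large_kernel = (conv_params(channels, channels, 7, depthwise=True)
                + conv_params(channels, channels, 1))
# Baseline: a plain dense 7x7 convolution at the same width
dense = conv_params(channels, channels, 7)
```

At 64 channels the decomposition needs 7,232 weights against 200,704 for the dense conv, roughly a 28x reduction while keeping the same receptive field.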

Cerebral Amyloid Deposition With ¹⁸F-Florbetapir PET Mediates Retinal Vascular Density and Cognitive Impairment in Alzheimer's Disease.

Chen Z, He HL, Qi Z, Bi S, Yang H, Chen X, Xu T, Jin ZB, Yan S, Lu J

PubMed · Aug 1, 2025
Alzheimer's disease (AD) is accompanied by alterations in retinal vascular density (VD), but the mechanisms remain unclear. This study investigated the relationship among cerebral amyloid-β (Aβ) deposition, VD, and cognitive decline. We enrolled 92 participants, including 47 AD patients and 45 healthy control (HC) participants. VD across retinal subregions was quantified using deep learning-based fundus photography, and cerebral Aβ deposition was measured with ¹⁸F-florbetapir (¹⁸F-AV45) PET/MRI. Using the minimum bounding circle of the optic disc as the diameter (papilla-diameter, PD), VD (total, 0.5-1.0 PD, 1.0-1.5 PD, 1.5-2.0 PD, 2.0-2.5 PD) was calculated. Standardized uptake value ratio (SUVR) for Aβ deposition was computed for global and regional cortical areas, using the cerebellar cortex as the reference region. Cognitive performance was assessed with the Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA). Pearson correlation, multiple linear regression, and mediation analyses were used to explore Aβ deposition, VD, and cognition. AD patients exhibited significantly lower VD in all subregions compared to HC (p < 0.05). Reduced VD correlated with higher SUVR in the global cortex and a decline in cognitive abilities (p < 0.05). Mediation analysis indicated that VD influenced MMSE and MoCA through SUVR in the global cortex, with the most pronounced effects observed in the 1.0-1.5 PD range. Retinal VD is associated with cognitive decline, a relationship primarily mediated by cerebral Aβ deposition measured via ¹⁸F-AV45 PET. These findings highlight the potential of retinal VD as a biomarker for early detection in AD.
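The mediation analysis can be sketched with ordinary least squares on synthetic fully-mediated data; this is the generic regression-based (Baron-Kenny-style) decomposition, not the authors' statistical pipeline, and the effect sizes are arbitrary:

```python
import numpy as np

def ols_coef(X, y):
    """Least-squares coefficients; a column of ones is prepended
    so beta[0] is the intercept."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def simple_mediation(x, m, y):
    """a-path (x -> m), b-path (m -> y controlling x), direct effect c',
    and indirect effect a*b of x on y through mediator m."""
    a = ols_coef(x[:, None], m)[1]
    coefs = ols_coef(np.column_stack([x, m]), y)
    c_prime, b = coefs[1], coefs[2]
    return {'a': a, 'b': b, 'direct': c_prime, 'indirect': a * b}

# Synthetic fully mediated data: x drives m, m drives y
rng = np.random.default_rng(0)
x = rng.normal(size=2000)                       # e.g. retinal VD
m = 2.0 * x + 0.1 * rng.normal(size=2000)       # e.g. global SUVR (mediator)
y = 3.0 * m + 0.1 * rng.normal(size=2000)       # e.g. cognitive score
effects = simple_mediation(x, m, y)
```

In a fully mediated setup the direct effect shrinks toward zero while the indirect effect a*b carries the association, which is the pattern the study reports for VD acting through Aβ SUVR.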

Weakly Supervised Intracranial Aneurysm Detection and Segmentation in MR angiography via Multi-task UNet with Vesselness Prior

Erin Rainville, Amirhossein Rasoulian, Hassan Rivaz, Yiming Xiao

arXiv preprint · Aug 1, 2025
Intracranial aneurysms (IAs) are abnormal dilations of cerebral blood vessels that, if ruptured, can lead to life-threatening consequences. However, their small size and soft contrast in radiological scans often make it difficult to perform accurate and efficient detection and morphological analyses, which are critical in the clinical care of the disorder. Furthermore, the lack of large public datasets with voxel-wise expert annotations poses challenges for developing deep learning algorithms to address these issues. Therefore, we propose a novel weakly supervised 3D multi-task UNet that integrates vesselness priors to jointly perform aneurysm detection and segmentation in time-of-flight MR angiography (TOF-MRA). Specifically, to robustly guide IA detection and segmentation, we employ the popular Frangi's vesselness filter to derive soft cerebrovascular priors for both network input and an attention block to conduct segmentation from the decoder and detection from an auxiliary branch. We train our model on the Lausanne dataset with coarse ground truth segmentation, and evaluate it on the test set with refined labels from the same database. To further assess our model's generalizability, we also validate it externally on the ADAM dataset. Our results demonstrate the superior performance of the proposed technique over the SOTA techniques for aneurysm segmentation (Dice = 0.614, 95% HD = 1.38 mm) and detection (false positive rate = 1.47, sensitivity = 92.9%).
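The two detection metrics quoted at the end can be computed from per-scan counts; a minimal sketch with hypothetical numbers (the interpretation of "false positive rate" as average false detections per case is an assumption based on its value exceeding 1):

```python
def detection_metrics(cases):
    """cases: per-scan tuples (n_true_lesions, n_true_detected, n_false_pos).
    Returns lesion-level sensitivity and average false positives per case."""
    total_lesions = sum(n for n, _, _ in cases)
    hits = sum(h for _, h, _ in cases)
    fp_per_case = sum(fp for _, _, fp in cases) / len(cases)
    return hits / total_lesions, fp_per_case

# Toy evaluation over three scans (hypothetical counts, not the paper's data):
sens, fp_rate = detection_metrics([(2, 2, 1), (1, 0, 2), (1, 1, 0)])
```

Here 3 of 4 true aneurysms are found (sensitivity 0.75) at an average of one false detection per scan.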

Your other Left! Vision-Language Models Fail to Identify Relative Positions in Medical Images

Daniel Wolf, Heiko Hillenhagen, Billurvan Taskin, Alex Bäuerle, Meinrad Beer, Michael Götz, Timo Ropinski

arXiv preprint · Aug 1, 2025
Clinical decision-making relies heavily on understanding relative positions of anatomical structures and anomalies. Therefore, for Vision-Language Models (VLMs) to be applicable in clinical practice, the ability to accurately determine relative positions on medical images is a fundamental prerequisite. Despite its importance, this capability remains highly underexplored. To address this gap, we evaluate the ability of state-of-the-art VLMs, GPT-4o, Llama3.2, Pixtral, and JanusPro, and find that all models fail at this fundamental task. Inspired by successful approaches in computer vision, we investigate whether visual prompts, such as alphanumeric or colored markers placed on anatomical structures, can enhance performance. While these markers provide moderate improvements, results remain significantly lower on medical images compared to observations made on natural images. Our evaluations suggest that, in medical imaging, VLMs rely more on prior anatomical knowledge than on actual image content for answering relative position questions, often leading to incorrect conclusions. To facilitate further research in this area, we introduce the MIRP (Medical Imaging Relative Positioning) benchmark dataset, designed to systematically evaluate the capability to identify relative positions in medical images.
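The left/right flip at the heart of this failure mode is easy to state in code; the box representation, coordinate convention, and helper name below are assumptions for illustration, not part of the MIRP benchmark:

```python
def relative_side(ref_box, query_box, radiological=True):
    """Which side of ref_box is query_box on? Boxes are (x0, y0, x1, y1)
    in image coordinates, x increasing rightward. Under the standard
    radiological display convention the patient's left appears on the
    image's right, so the patient-relative answer is the flip of the
    image-relative one -- the confusion the paper's title alludes to."""
    ref_cx = (ref_box[0] + ref_box[2]) / 2
    q_cx = (query_box[0] + query_box[2]) / 2
    image_side = 'right' if q_cx > ref_cx else 'left'
    if not radiological:
        return image_side
    return 'left' if image_side == 'right' else 'right'
```

A structure on the right half of an axial CT slice is on the patient's left, which is exactly the kind of convention-dependent answer a model relying on priors rather than image content will get wrong.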
