
Advanced liver fibrosis detection using a two-stage deep learning approach on standard T2-weighted MRI.

Gupta P, Singh S, Gulati A, Dutta N, Aggarwal Y, Kalra N, Premkumar M, Taneja S, Verma N, De A, Duseja A

PubMed · Aug 19, 2025
To develop and validate a deep learning model for automated detection of advanced liver fibrosis using standard T2-weighted MRI. We utilized two datasets: the public CirrMRI600+ dataset (n = 374), containing T2-weighted MRI scans from patients with cirrhosis (n = 318) and healthy subjects (n = 56), and an in-house dataset of chronic liver disease patients (n = 187). A two-stage deep learning pipeline was developed: first, an automated liver segmentation model based on the nnU-Net architecture, trained on CirrMRI600+ and then applied to segment livers in our in-house dataset; second, a Masked Attention ResNet classification model. For classification model training, patients with liver stiffness measurement (LSM) > 12 kPa were classified as advanced fibrosis (n = 104), whereas healthy subjects from CirrMRI600+ and patients with LSM ≤ 12 kPa were classified as non-advanced fibrosis (n = 116). Model validation was performed exclusively on a separate test set of 23 patients with histopathological confirmation of the degree of fibrosis (METAVIR ≥ F3 indicating advanced fibrosis). We additionally compared our two-stage approach with direct classification without segmentation and evaluated alternative architectures, including DenseNet121 and SwinTransformer. The liver segmentation model performed excellently on the test set (mean Dice score: 0.960 ± 0.009; IoU: 0.923 ± 0.016). On the pathologically confirmed independent test set (n = 23), our two-stage model achieved strong diagnostic performance (sensitivity: 0.778, specificity: 0.800, AUC: 0.811, accuracy: 0.783), significantly outperforming direct classification without segmentation (AUC: 0.743). Classification performance was highly dependent on segmentation quality: cases with excellent segmentation (Score 1) showed higher accuracy (0.818) than those with poor segmentation (Score 3; accuracy: 0.625). Alternative architectures with masked attention showed comparable but slightly lower performance (DenseNet121: AUC 0.795; SwinTransformer: AUC 0.782). Our fully automated deep learning pipeline effectively detects advanced liver fibrosis using standard non-contrast T2-weighted MRI, potentially offering a non-invasive alternative to current diagnostic approaches. The segmentation-first approach provides significant performance gains over direct classification.
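The abstract does not include the authors' code, but the segment-then-classify idea is straightforward to sketch. Below is a minimal, hypothetical stand-in: the mask produced by stage 1 (nnU-Net in the paper) gates what a stage-2 ResNet classifier sees, and the 12 kPa LSM cut-off generates training labels. The class and function names, and the simple multiplicative masking, are illustrative assumptions rather than the paper's exact Masked Attention design.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MaskedLiverClassifier(nn.Module):
    """Stage-2 stand-in: binary advanced-fibrosis classifier on masked T2 slices."""
    def __init__(self):
        super().__init__()
        self.backbone = resnet18(weights=None)
        # Single-channel MRI input, two output classes.
        self.backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2,
                                        padding=3, bias=False)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 2)

    def forward(self, t2_slice: torch.Tensor, liver_mask: torch.Tensor):
        # Simplest form of "masked attention": zero out non-liver pixels so the
        # classifier only sees hepatic parenchyma from the stage-1 segmentation.
        return self.backbone(t2_slice * liver_mask)

def label_from_lsm(lsm_kpa: float) -> int:
    # Training labels as defined in the paper: LSM > 12 kPa -> advanced fibrosis.
    return int(lsm_kpa > 12.0)
```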

ASDFormer: A Transformer with Mixtures of Pooling-Classifier Experts for Robust Autism Diagnosis and Biomarker Discovery

Mohammad Izadi, Mehran Safayani

arXiv preprint · Aug 19, 2025
Autism Spectrum Disorder (ASD) is a complex neurodevelopmental condition marked by disruptions in brain connectivity. Functional MRI (fMRI) offers a non-invasive window into large-scale neural dynamics by measuring blood-oxygen-level-dependent (BOLD) signals across the brain. These signals can be modeled as interactions among Regions of Interest (ROIs), which are grouped into functional communities based on their underlying roles in brain function. Emerging evidence suggests that connectivity patterns within and between these communities are particularly sensitive to ASD-related alterations. Effectively capturing these patterns and identifying interactions that deviate from typical development is essential for improving ASD diagnosis and enabling biomarker discovery. In this work, we introduce ASDFormer, a Transformer-based architecture that incorporates a Mixture of Pooling-Classifier Experts (MoE) to capture neural signatures associated with ASD. By integrating multiple specialized expert branches with attention mechanisms, ASDFormer adaptively emphasizes different brain regions and connectivity patterns relevant to autism. This enables both improved classification performance and more interpretable identification of disorder-related biomarkers. Applied to the ABIDE dataset, ASDFormer achieves state-of-the-art diagnostic accuracy and reveals robust insights into functional connectivity disruptions linked to ASD, highlighting its potential as a tool for biomarker discovery.
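A mixture of pooling-classifier experts can be sketched compactly: several pooling strategies each feed their own classifier head, and a learned gate mixes the expert logits. The expert choices (mean, max, and attention pooling), dimensions, and gating rule below are illustrative assumptions rather than ASDFormer's actual configuration; the input tokens stand for per-ROI embeddings produced by the transformer.

```python
import torch
import torch.nn as nn

class PoolingClassifierExpert(nn.Module):
    def __init__(self, dim: int, n_classes: int, pool: str):
        super().__init__()
        self.pool = pool
        self.attn = nn.Linear(dim, 1)          # used only by the attention expert
        self.head = nn.Linear(dim, n_classes)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:  # (B, ROIs, dim)
        if self.pool == "mean":
            z = tokens.mean(dim=1)
        elif self.pool == "max":
            z = tokens.max(dim=1).values
        else:  # attention pooling highlights salient ROIs (candidate biomarkers)
            w = torch.softmax(self.attn(tokens), dim=1)
            z = (w * tokens).sum(dim=1)
        return self.head(z)

class MoEHead(nn.Module):
    def __init__(self, dim: int = 128, n_classes: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            PoolingClassifierExpert(dim, n_classes, p) for p in ("mean", "max", "attn")
        )
        self.gate = nn.Linear(dim, len(self.experts))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        gates = torch.softmax(self.gate(tokens.mean(dim=1)), dim=-1)    # (B, E)
        logits = torch.stack([e(tokens) for e in self.experts], dim=1)  # (B, E, C)
        return (gates.unsqueeze(-1) * logits).sum(dim=1)                # (B, C)
```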

Classification of familial and non-familial ADHD using auto-encoding network and binary hypothesis testing

Baboli, R., Martin, E., Qiu, Q., Zhao, L., Liu, T., Li, X.

medRxiv preprint · Aug 19, 2025
Family history is one of the most powerful risk factors for attention-deficit/hyperactivity disorder (ADHD), yet no study has tested whether multimodal magnetic resonance imaging (MRI) combined with deep learning can separate familial ADHD (ADHD-F) from non-familial ADHD (ADHD-NF). T1-weighted and diffusion-weighted MRI data from 438 children (129 ADHD-F, 159 ADHD-NF, and 150 controls) were parcellated into 425 cortical and white-matter metrics. Our pipeline combined three feature-selection steps (t-test filtering, mutual-information ranking, and Lasso) with an auto-encoder and applied the binary-hypothesis strategy throughout: each held-out subject was assigned both possible labels in turn and evaluated under leave-one-out testing nested within five-fold cross-validation. Accuracy, sensitivity, specificity, and area under the curve (AUC) quantified performance. The model achieved accuracies/AUCs of 0.66/0.67 for ADHD-F vs. controls, 0.67/0.70 for ADHD-NF vs. controls, and 0.62/0.67 for ADHD-F vs. ADHD-NF. In classification between ADHD-F and controls, the most informative metrics were the mean diffusivity (MD) of the right fornix, the MD of the left parahippocampal cingulum, and the cortical thickness of the right inferior parietal cortex. In classification between ADHD-NF and controls, the key contributors were the fractional anisotropy (FA) of the left inferior fronto-occipital fasciculus, the MD of the right fornix, and the cortical thickness of the right medial orbitofrontal cortex. In classification between ADHD-F and ADHD-NF, the highlighted features were the volume of the left cingulate cingulum tract, the volume of the right parietal segment of the superior longitudinal fasciculus, and the cortical thickness of the right fusiform cortex. Our binary-hypothesis, semi-supervised deep learning framework reliably separates familial from non-familial ADHD and shows that advanced semi-supervised deep learning techniques can deliver robust, generalizable neurobiological markers for neurodevelopmental disorders.
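The three-step feature-selection cascade is standard enough to sketch with scikit-learn. The p-value threshold, k, and alpha below are illustrative assumptions; the paper's auto-encoder and binary-hypothesis evaluation loop are omitted for brevity.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import Lasso

def select_features(X: np.ndarray, y: np.ndarray,
                    p_thresh: float = 0.05, k: int = 100, alpha: float = 0.01):
    # Step 1: t-test filter -- keep metrics whose group means differ.
    _, p = ttest_ind(X[y == 0], X[y == 1], axis=0)
    idx1 = np.flatnonzero(p < p_thresh)

    # Step 2: rank survivors by mutual information with the diagnosis label.
    mi = SelectKBest(mutual_info_classif, k=min(k, idx1.size)).fit(X[:, idx1], y)
    idx2 = idx1[mi.get_support(indices=True)]

    # Step 3: Lasso sparsifies the remaining metrics (regression on 0/1 labels,
    # a common shortcut for embedded selection).
    lasso = Lasso(alpha=alpha).fit(X[:, idx2], y)
    return idx2[np.abs(lasso.coef_) > 0]   # indices of the surviving metrics
```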

Advancing deep learning-based segmentation for multiple lung cancer lesions in real-world multicenter CT scans.

Rafael-Palou X, Jimenez-Pastor A, Martí-Bonmatí L, Muñoz-Nuñez CF, Laudazi M, Alberich-Bayarri Á

PubMed · Aug 18, 2025
Accurate segmentation of lung cancer lesions in computed tomography (CT) is essential for precise diagnosis, personalized therapy planning, and treatment response assessment. While automatic segmentation of the primary lung lesion has been widely studied, the ability to segment multiple lesions per patient remains underexplored. In this study, we address this gap by introducing a novel, automated approach for multi-instance segmentation of lung cancer lesions, leveraging a heterogeneous cohort with real-world multicenter data. We analyzed 1,081 retrospectively collected CT scans with 5,322 annotated lesions (4.92 ± 13.05 lesions per scan). The cohort was stratified into training (n = 868) and testing (n = 213) subsets. We developed an automated three-step pipeline, including thoracic bounding box extraction, multi-instance lesion segmentation, and false-positive reduction via a novel multiscale cascade classifier to filter spurious and non-lesion candidates. On the independent test set, our method achieved a Dice similarity coefficient of 76% for segmentation and a lesion detection sensitivity of 85%. When evaluated on an external dataset of 188 real-world cases, it achieved a Dice similarity coefficient of 73% and a lesion detection sensitivity of 85%. Our approach accurately detected and segmented multiple lung cancer lesions per patient on CT scans, demonstrating robustness across an independent test set and an external real-world dataset. AI-driven segmentation comprehensively captures lesion burden, enhancing lung cancer assessment and disease monitoring. KEY POINTS: Automatic multi-instance lung cancer lesion segmentation is underexplored yet crucial for disease assessment. Developed a deep learning-based segmentation pipeline trained on multi-center real-world data, which reached 85% sensitivity at external validation. Thoracic bounding box and false-positive reduction techniques improved the pipeline's segmentation performance.
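Here is a minimal skeleton of the three-step pipeline, with the learned components left as placeholders since the authors' models are not public. The air-threshold bounding-box heuristic and the 0.5 score cut-offs are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import label

def thoracic_bbox(ct: np.ndarray, hu_air: float = -500.0) -> tuple:
    # Step 1 (illustrative heuristic): bound the air-filled lung region.
    zs, ys, xs = np.nonzero(ct < hu_air)
    return (slice(zs.min(), zs.max() + 1),
            slice(ys.min(), ys.max() + 1),
            slice(xs.min(), xs.max() + 1))

def segment_lesions(ct_crop: np.ndarray, seg_model):
    # Step 2: seg_model is an assumed callable returning a per-voxel lesion
    # probability map; connected components give candidate lesion instances.
    instances, n = label(seg_model(ct_crop) > 0.5)
    return instances, n

def reduce_false_positives(ct_crop, instances, n, cascade) -> np.ndarray:
    # Step 3: each cascade stage is an assumed callable scoring one candidate
    # (e.g. at a different patch scale); a candidate survives only if every
    # stage accepts it, filtering spurious and non-lesion structures.
    kept = [i for i in range(1, n + 1)
            if all(stage(ct_crop, instances == i) > 0.5 for stage in cascade)]
    return np.where(np.isin(instances, kept), instances, 0)
```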

Development of a lung perfusion automated quantitative model based on dual-energy CT pulmonary angiography in patients with chronic pulmonary thromboembolism.

Xi L, Wang J, Liu A, Ni Y, Du J, Huang Q, Li Y, Wen J, Wang H, Zhang S, Zhang Y, Zhang Z, Wang D, Xie W, Gao Q, Cheng Y, Zhai Z, Liu M

PubMed · Aug 18, 2025
To develop PerAIDE, an AI-driven system for automated analysis of pulmonary perfusion blood volume (PBV) using dual-energy computed tomography pulmonary angiography (DE-CTPA) in patients with chronic pulmonary thromboembolism (CPE). In this prospective observational study, 32 patients with chronic thromboembolic pulmonary disease (CTEPD) and 151 patients with chronic thromboembolic pulmonary hypertension (CTEPH) were enrolled between January 2022 and July 2024. PerAIDE was developed to automatically quantify three distinct perfusion patterns (normal, reduced, and defective) on DE-CTPA images. Two radiologists independently assessed PBV scores. Follow-up imaging was conducted 3 months after balloon pulmonary angioplasty (BPA). PerAIDE demonstrated high agreement with the radiologists (intraclass correlation coefficient = 0.778) and significantly reduced analysis time (31 ± 3 s vs. 15 ± 4 min, p < 0.001). CTEPH patients had greater perfusion defects than CTEPD patients (0.35 vs. 0.29, p < 0.001), while reduced perfusion was more prevalent in CTEPD (0.36 vs. 0.30, p < 0.001). Perfusion defects correlated positively with pulmonary vascular resistance (ρ = 0.534) and mean pulmonary artery pressure (ρ = 0.482), and negatively with oxygenation index (ρ = -0.441). PerAIDE effectively differentiated CTEPH from CTEPD (AUC = 0.809, 95% CI: 0.745-0.863). At the 3-month follow-up after BPA, a significant reduction in perfusion defects was observed (0.36 vs. 0.33, p < 0.01). CTEPD and CTEPH exhibit distinct perfusion phenotypes on DE-CTPA. PerAIDE reliably quantifies perfusion abnormalities and correlates strongly with clinical and hemodynamic markers of CPE severity. ClinicalTrials.gov, NCT06526468. Registered 28 August 2024 (retrospectively registered): https://clinicaltrials.gov/study/NCT06526468?cond=NCT06526468&rank=1. PerAIDE is a dual-energy computed tomography pulmonary angiography (DE-CTPA) AI-driven system that rapidly and accurately assesses perfusion blood volume in patients with chronic pulmonary thromboembolism, effectively distinguishing between the CTEPD and CTEPH phenotypes and correlating with disease severity and therapeutic response. Right heart catheterization, the definitive diagnostic test for chronic pulmonary thromboembolism (CPE), is invasive. PerAIDE-based perfusion defects correlated with disease severity, aiding CPE treatment assessment. CTEPH demonstrates severe perfusion defects, while CTEPD displays predominantly reduced perfusion. PerAIDE employs a U-Net-based adaptive threshold method that agrees closely with manual evaluation while processing scans substantially faster.
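The adaptive-threshold idea can be sketched in a few lines: given a lung mask (a U-Net's job in the paper) and the PBV map, bucket voxels into defective, reduced, and normal perfusion and report the fraction of each. The percentile cut-offs below are assumptions; the actual thresholds are not stated in the abstract.

```python
import numpy as np

def perfusion_fractions(pbv: np.ndarray, lung_mask: np.ndarray,
                        defect_pct: float = 10, reduced_pct: float = 40) -> dict:
    vals = pbv[lung_mask > 0]
    # Adaptive, per-scan thresholds rather than fixed absolute PBV values.
    t_defect = np.percentile(vals, defect_pct)
    t_reduced = np.percentile(vals, reduced_pct)
    n = vals.size
    return {
        "defective": float((vals <= t_defect).sum()) / n,
        "reduced": float(((vals > t_defect) & (vals <= t_reduced)).sum()) / n,
        "normal": float((vals > t_reduced).sum()) / n,
    }
```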

MCBL-UNet: A Hybrid Mamba-CNN Boundary Enhanced Light-weight UNet for Placenta Ultrasound Image Segmentation.

Jiang C, Zhu C, Guo H, Tan G, Liu C, Li K

PubMed · Aug 18, 2025
The shape and size of the placenta are closely related to fetal development in the second and third trimesters of pregnancy. Accurately segmenting the placental contour in ultrasound images is challenging due to image noise, fuzzy boundaries, and tight clinical resources. To address these issues, we propose MCBL-UNet, a novel lightweight segmentation framework that combines the long-range modeling capabilities of Mamba with the local feature extraction strengths of convolutional neural networks (CNNs) to achieve efficient segmentation through multi-information fusion. Built on a compact 6-layer U-Net architecture, MCBL-UNet introduces several key modules: a boundary enhancement module (BEM) to extract fine-grained edge and texture features; a multi-dimensional global context module (MGCM) to capture global semantics and edge information in the deep stages of the encoder and decoder; and a parallel channel-spatial attention module (PCSAM) to suppress redundant information in skip connections while enhancing spatial and channel correlations. To further improve feature reconstruction and edge preservation, we introduce an attention downsampling module (ADM) and a content-aware upsampling module (CUM). MCBL-UNet achieves excellent segmentation performance on multiple medical ultrasound datasets (placenta, gestational sac, thyroid nodules). Using only 1.31M parameters and 1.26G FLOPs, the model outperforms 13 existing mainstream methods on key metrics such as the Dice coefficient and mIoU, striking a strong balance between high accuracy and low computational cost. The model is not only suited to resource-constrained clinical environments but also demonstrates a new way to bring the Mamba architecture into medical image segmentation.
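Of the modules listed, the skip-connection attention gate is the most self-contained to sketch. Below is a hypothetical parallel channel + spatial attention block in the spirit of PCSAM; the kernel size, reduction ratio, and additive fusion are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class ParallelChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel = nn.Sequential(            # squeeze-and-excite style branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(            # where-to-look branch
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, skip: torch.Tensor) -> torch.Tensor:
        ca = self.channel(skip)                                   # (B, C, 1, 1)
        sa = self.spatial(torch.cat(                              # (B, 1, H, W)
            [skip.mean(1, keepdim=True), skip.amax(1, keepdim=True)], dim=1))
        # Run the two gates in parallel and fuse additively, suppressing
        # redundant skip information while keeping channel/spatial correlations.
        return skip * ca + skip * sa
```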

Early Detection of Cardiovascular Disease in Chest Population Screening: Challenges for a Rapidly Emerging Cardiac CT Application.

Walstra ANH, Gratama JWC, Heuvelmans MA, Oudkerk M

PubMed · Aug 18, 2025
While lung cancer screening (LCS) reduces lung cancer-related mortality in high-risk individuals, cardiovascular disease (CVD) remains a leading cause of death due to shared risk factors such as smoking and age. Coronary artery calcium (CAC) assessment offers an opportunity for concurrent cardiovascular screening, with higher CAC scores indicating increased CVD risk and mortality. Despite guidelines recommending CAC scoring on all non-contrast chest CT scans, a lack of standardization leads to underreporting and missed opportunities for preventive care. Routine CAC scoring in LCS can enable personalized CVD management and reduce unnecessary treatments. However, challenges persist in achieving adequate diagnostic quality with one combined image acquisition for both lung and cardiovascular assessment. Advancements in CT technology have improved CAC quantification on low-dose CT scans. Electron-beam tomography, valued for superior temporal resolution, was replaced by multi-detector CT for better spatial resolution and general usability. Dual-source CT further improved temporal resolution and reduced motion artifacts, making non-gated CT protocols for CAC assessment possible. Additionally, artificial intelligence-based CAC quantification can reduce the added workload of cardiovascular screening within LCS programs. This review explores recent advancements in cardiac CT technologies that address prior challenges in opportunistic CVD screening and considers key factors for integrating CVD screening into LCS programs, aiming for high-quality standardization in CAC reporting.
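For context on what CAC scoring computes, here is a minimal sketch of conventional per-slice Agatston scoring (130-HU calcium threshold, density-weighted plaque area); the coronary lesion mask is assumed to come from manual annotation or an AI segmenter, and standard protocols assume roughly 3 mm slices.

```python
import numpy as np
from scipy.ndimage import label

def agatston_slice_score(hu: np.ndarray, coronary_mask: np.ndarray,
                         pixel_area_mm2: float) -> float:
    """Agatston score contribution of one axial CT slice."""
    lesions, n = label((hu >= 130) & coronary_mask)   # 130-HU calcium threshold
    score = 0.0
    for i in range(1, n + 1):
        region = lesions == i
        area = region.sum() * pixel_area_mm2
        if area < 1.0:                    # discard sub-millimetre specks (noise)
            continue
        peak = hu[region].max()
        # Density weight: 130-199 HU -> 1, 200-299 -> 2, 300-399 -> 3, >=400 -> 4
        weight = min(int(peak) // 100, 4)
        score += area * weight
    return score
```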

Breaking Reward Collapse: Adaptive Reinforcement for Open-ended Medical Reasoning with Enhanced Semantic Discrimination

Yizhou Liu, Jingwei Wei, Zizhi Chen, Minghao Han, Xukun Zhang, Keliang Liu, Lihua Zhang

arXiv preprint · Aug 18, 2025
Reinforcement learning (RL) with rule-based rewards has demonstrated strong potential in enhancing the reasoning and generalization capabilities of vision-language models (VLMs) and large language models (LLMs), while reducing computational overhead. However, its application in medical imaging remains underexplored. Existing reinforcement fine-tuning (RFT) approaches in this domain primarily target closed-ended visual question answering (VQA), limiting their applicability to real-world clinical reasoning. In contrast, open-ended medical VQA better reflects clinical practice but has received limited attention. While some efforts have sought to unify both formats via semantically guided RL, we observe that model-based semantic rewards often suffer from reward collapse, where responses with significant semantic differences receive similar scores. To address this, we propose ARMed (Adaptive Reinforcement for Medical Reasoning), a novel RL framework for open-ended medical VQA. ARMed first incorporates domain knowledge through supervised fine-tuning (SFT) on chain-of-thought data, then applies reinforcement learning with textual correctness and adaptive semantic rewards to enhance reasoning quality. We evaluate ARMed on six challenging medical VQA benchmarks. Results show that ARMed consistently boosts both accuracy and generalization, achieving a 32.64% improvement on in-domain tasks and an 11.65% gain on out-of-domain benchmarks. These results highlight the critical role of reward discriminability in medical RL and the promise of semantically guided rewards for enabling robust and clinically meaningful multimodal reasoning.
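Reward collapse is easy to illustrate: if a semantic-similarity model scores three clinically different answers 0.81, 0.82, and 0.84, the RL signal barely distinguishes them. Below is a hedged sketch of one adaptive remedy, batch-wise rescaling blended with a textual-correctness term; ARMed's actual reward definition is more involved and is not reproduced here.

```python
import numpy as np

def adaptive_semantic_reward(sims: np.ndarray, exact: np.ndarray,
                             w_exact: float = 0.5, eps: float = 1e-6) -> np.ndarray:
    """sims: raw semantic similarities per response; exact: 1.0 if textually correct."""
    # Batch-adaptive stretch: map the observed similarity range onto [0, 1] so
    # near-identical raw scores regain discriminability (the collapse symptom).
    spread = (sims - sims.min()) / (sims.max() - sims.min() + eps)
    return w_exact * exact + (1.0 - w_exact) * spread

# Collapsed raw scores [0.81, 0.82, 0.84] spread to roughly [0.0, 0.33, 1.0].
print(adaptive_semantic_reward(np.array([0.81, 0.82, 0.84]), np.array([0., 0., 1.])))
```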

HierAdaptMR: Cross-Center Cardiac MRI Reconstruction with Hierarchical Feature Adapters

Ruru Xu, Ilkay Oksuz

arXiv preprint · Aug 18, 2025
Deep learning-based cardiac MRI reconstruction faces significant domain shift challenges when deployed across multiple clinical centers with heterogeneous scanner configurations and imaging protocols. We propose HierAdaptMR, a hierarchical feature adaptation framework that addresses multi-level domain variations through parameter-efficient adapters. Our method employs Protocol-Level Adapters for sequence-specific characteristics and Center-Level Adapters for scanner-dependent variations, built upon a variational unrolling backbone. A Universal Adapter enables generalization to entirely unseen centers through stochastic training that learns center-invariant adaptations. The framework utilizes multi-scale SSIM loss with frequency domain enhancement and contrast-adaptive weighting for robust optimization. Comprehensive evaluation on the CMRxRecon2025 dataset spanning 5+ centers, 10+ scanners, and 9 modalities demonstrates superior cross-center generalization while maintaining reconstruction quality. Code: https://github.com/Ruru-Xu/HierAdaptMR
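Parameter-efficient adapters of this kind are typically small residual bottlenecks selected by metadata. The sketch below shows a hypothetical hierarchy (protocol adapter, then center adapter, with a universal fallback for unseen centers); the dimensions and selection rule are assumptions, not the released implementation, for which see the linked repository.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Residual down-up bottleneck; the frozen backbone's features pass through."""
    def __init__(self, dim: int, hidden: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, hidden)
        self.up = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(x)))

class HierarchicalAdapters(nn.Module):
    def __init__(self, dim: int, protocols: list, centers: list):
        super().__init__()
        self.protocol = nn.ModuleDict({p: BottleneckAdapter(dim) for p in protocols})
        self.center = nn.ModuleDict({c: BottleneckAdapter(dim) for c in centers})
        self.universal = BottleneckAdapter(dim)   # fallback for unseen centers

    def forward(self, feats: torch.Tensor, protocol: str, center: str):
        feats = self.protocol[protocol](feats)    # sequence-specific correction
        adapter = self.center[center] if center in self.center else self.universal
        return adapter(feats)                     # scanner-dependent correction
```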

FractMorph: A Fractional Fourier-Based Multi-Domain Transformer for Deformable Image Registration

Shayan Kebriti, Shahabedin Nabavi, Ali Gooya

arXiv preprint · Aug 17, 2025
Deformable image registration (DIR) is a crucial and challenging technique for aligning anatomical structures in medical images and is widely applied in diverse clinical applications. However, existing approaches often struggle to capture fine-grained local deformations and large-scale global deformations simultaneously within a unified framework. We present FractMorph, a novel 3D dual-parallel transformer-based architecture that enhances cross-image feature matching through multi-domain fractional Fourier transform (FrFT) branches. Each Fractional Cross-Attention (FCA) block applies parallel FrFTs at fractional angles of 0°, 45°, and 90°, along with a log-magnitude branch, to effectively extract local, semi-global, and global features at the same time. These features are fused via cross-attention between the fixed and moving image streams. A lightweight U-Net style network then predicts a dense deformation field from the transformer-enriched features. On the ACDC cardiac MRI dataset, FractMorph achieves state-of-the-art performance with an overall Dice Similarity Coefficient (DSC) of 86.45%, an average per-structure DSC of 75.15%, and a 95th-percentile Hausdorff distance (HD95) of 1.54 mm on our data split. We also introduce FractMorph-Light, a lightweight variant of our model with only 29.6M parameters, which maintains the superior accuracy of the main model while using approximately half the memory. Our results demonstrate that multi-domain spectral-spatial attention in transformers can robustly and efficiently model complex non-rigid deformations in medical images using a single end-to-end network, without the need for scenario-specific tuning or hierarchical multi-scale networks. The source code of our implementation is available at https://github.com/shayankebriti/FractMorph.
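The multi-angle FrFT idea can be sketched numerically. One common discretization of the discrete fractional Fourier transform is a fractional matrix power of the unitary DFT matrix (90° corresponds to the ordinary DFT); whether FractMorph uses this particular discretization is an assumption, and the learned cross-attention fusion is omitted.

```python
import numpy as np
from scipy.linalg import dft, fractional_matrix_power

def frft(signal: np.ndarray, angle_deg: float) -> np.ndarray:
    # The unitary DFT is a 90-degree rotation in time-frequency; fractional
    # matrix powers interpolate between the spatial (0 deg) and Fourier
    # (90 deg) domains.
    n = signal.shape[-1]
    Fa = fractional_matrix_power(dft(n, scale="sqrtn"), angle_deg / 90.0)
    return signal @ Fa.T

def multi_domain_features(signal: np.ndarray) -> np.ndarray:
    # Local (0 deg), semi-global (45 deg), and global (90 deg) branches, plus a
    # log-magnitude branch, mirroring the four parallel inputs of an FCA block.
    branches = [frft(signal, a) for a in (0.0, 45.0, 90.0)]
    branches.append(np.log1p(np.abs(branches[-1])))
    return np.stack([np.abs(b) for b in branches])

feats = multi_domain_features(np.random.rand(64))   # -> (4, 64) feature stack
```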