Page 16 of 3473462 results

SMAS: Structural MRI-based AD Score using Bayesian supervised VAE.

Nemali A, Bernal J, Yakupov R, D S, Dyrba M, Incesoy EI, Mukherjee S, Peters O, Ersözlü E, Hellmann-Regen J, Preis L, Priller J, Spruth E, Altenstein S, Lohse A, Schneider A, Fliessbach K, Kimmich O, Wiltfang J, Hansen N, Schott B, Rostamzadeh A, Glanz W, Butryn M, Buerger K, Janowitz D, Ewers M, Perneczky R, Rauchmann B, Teipel S, Kilimann I, Goerss D, Laske C, Sodenkamp S, Spottke A, Coenjaerts M, Brosseron F, Lüsebrink F, Dechent P, Scheffler K, Hetzer S, Kleineidam L, Stark M, Jessen F, Duzel E, Ziegler G

pubmed · Aug 15 2025
This study introduces the Structural MRI-based Alzheimer's Disease Score (SMAS), a novel index intended to quantify Alzheimer's Disease (AD)-related morphometric patterns using a deep learning Bayesian-supervised Variational Autoencoder (Bayesian-SVAE). The SMAS index was constructed using baseline structural MRI data from the DELCODE study and evaluated longitudinally in two independent cohorts: DELCODE (n=415) and ADNI (n=190). Our findings indicate that SMAS has strong associations with cognitive performance (DELCODE: r=-0.83; ADNI: r=-0.62), age (DELCODE: r=0.50; ADNI: r=0.28), hippocampal volume (DELCODE: r=-0.44; ADNI: r=-0.66), and total gray matter volume (DELCODE: r=-0.42; ADNI: r=-0.47), suggesting its potential as a biomarker for AD-related brain atrophy. Moreover, our longitudinal studies indicated that SMAS may be useful for the early identification and tracking of AD. The model demonstrated significant predictive accuracy in distinguishing cognitively healthy individuals from those with AD (DELCODE: AUC=0.971 at baseline, 0.833 at 36 months; ADNI: AUC=0.817 at baseline, improving to 0.903 at 24 months). Notably, over 36 months, the SMAS index outperformed existing measures such as SPARE-AD and hippocampal volume. The relevance map analysis revealed significant morphological changes in key AD-related brain regions, including the hippocampus, posterior cingulate cortex, precuneus, and lateral parietal cortex, highlighting that SMAS is a sensitive and interpretable biomarker of brain atrophy, suitable for early AD detection and longitudinal monitoring of disease progression.
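The SMAS validation above rests on Pearson correlations between the score and cognitive or volumetric measures (e.g. r = -0.83 with cognition in DELCODE). As an illustrative sketch only (not the authors' code), the statistic behind those numbers is:

```python
import numpy as np

def pearson_r(x, y):
    # Pearson correlation coefficient between two paired measurement vectors
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym)))
```

A strongly negative r, as reported between SMAS and cognitive performance, means higher scores track worse cognition.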

FusionFM: Fusing Eye-specific Foundational Models for Optimized Ophthalmic Diagnosis

Ke Zou, Jocelyn Hui Lin Goh, Yukun Zhou, Tian Lin, Samantha Min Er Yew, Sahana Srinivasan, Meng Wang, Rui Santos, Gabor M. Somfai, Huazhu Fu, Haoyu Chen, Pearse A. Keane, Ching-Yu Cheng, Yih Chung Tham

arxiv preprint · Aug 15 2025
Foundation models (FMs) have shown great promise in medical image analysis by improving generalization across diverse downstream tasks. In ophthalmology, several FMs have recently emerged, but there is still no clear answer to fundamental questions: Which FM performs the best? Are they equally good across different tasks? What if we combine all FMs together? To our knowledge, this is the first study to systematically evaluate both single and fused ophthalmic FMs. To address these questions, we propose FusionFM, a comprehensive evaluation suite, along with two fusion approaches to integrate different ophthalmic FMs. Our framework covers both ophthalmic disease detection (glaucoma, diabetic retinopathy, and age-related macular degeneration) and systemic disease prediction (diabetes and hypertension) based on retinal imaging. We benchmarked four state-of-the-art FMs (RETFound, VisionFM, RetiZero, and DINORET) using standardized datasets from multiple countries and evaluated their performance using AUC and F1 metrics. Our results show that DINORET and RetiZero achieve superior performance in both ophthalmic and systemic disease tasks, with RetiZero exhibiting stronger generalization on external datasets. Regarding fusion strategies, the gating-based approach provides modest improvements in predicting glaucoma, AMD, and hypertension. Despite these advances, predicting systemic diseases, especially hypertension in external cohorts, remains challenging. These findings provide an evidence-based evaluation of ophthalmic FMs, highlight the benefits of model fusion, and point to strategies for enhancing their clinical applicability.
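The abstract names a gating-based fusion strategy without detailing it. A common form of gating, sketched here as a hedged illustration (the weighting scheme and inputs are assumptions, not FusionFM's actual design), learns a softmax-normalized weight per model and blends their predicted probabilities:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over gate logits
    z = np.asarray(z, float)
    e = np.exp(z - z.max())
    return e / e.sum()

def gated_fusion(probs, gate_logits):
    # probs: per-model predicted probabilities for one case
    # gate_logits: learned (here: given) gating scores, one per model
    probs = np.asarray(probs, float)
    w = softmax(gate_logits)
    return float(w @ probs)
```

With equal gate logits this reduces to a plain average of the model outputs; training the gate lets stronger models dominate per task.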

High sensitivity in spontaneous intracranial hemorrhage detection from emergency head CT scans using ensemble-learning approach.

Takala J, Peura H, Pirinen R, Väätäinen K, Terjajev S, Lin Z, Raj R, Korja M

pubmed · Aug 15 2025
Spontaneous intracranial hemorrhages have a high disease burden. Due to increasing medical imaging, new technological solutions for assisting in image interpretation are warranted. We developed a deep learning (DL) solution for spontaneous intracranial hemorrhage detection from head CT scans. The DL solution included four base convolutional neural networks (CNNs), which were trained using 300 head CT scans. A metamodel was trained on top of the four base CNNs, and simple post-processing steps were applied to improve the solution's accuracy. The solution's performance was evaluated using a retrospective dataset of consecutive emergency head CTs imaged in ten different emergency rooms. A total of 7797 head CT scans were included in the validation dataset, of which 118 presented with spontaneous intracranial hemorrhage. The trained metamodel, together with a simple rule-based post-processing step, showed 89.8% sensitivity and 89.5% specificity for hemorrhage detection at the case level. The solution detected all 78 spontaneous hemorrhage cases presumed or confirmed to have been imaged within 12 h of symptom onset and identified five hemorrhages missed in the initial on-call reports. Although the success of DL algorithms depends on multiple factors, including training data versatility and quality of annotations, using the proposed ensemble-learning approach and rule-based post-processing may help clinicians to develop highly accurate DL solutions for clinical imaging diagnostics.
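The ensemble above stacks a metamodel on four base CNNs and then applies rule-based post-processing. A minimal sketch of that stacking pattern, under assumed weights and an invented slice-count rule (purely illustrative; the paper's actual metamodel and rules are not published in this abstract):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def metamodel_predict(base_probs, w, b):
    # base_probs: hemorrhage probabilities from the four base CNNs
    # w, b: metamodel weights and bias (hypothetical values here)
    return float(sigmoid(np.dot(w, base_probs) + b))

def rule_based_postprocess(prob, n_positive_slices, min_slices=2, threshold=0.5):
    # illustrative rule: suppress a positive call supported by too few slices
    if n_positive_slices < min_slices:
        return False
    return prob >= threshold
```

The appeal of stacking is that the metamodel can learn which base CNN to trust in which regime, while the rule layer trims isolated false positives.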

UniDCF: A Foundation Model for Comprehensive Dentocraniofacial Hard Tissue Reconstruction

Chunxia Ren, Ning Zhu, Yue Lai, Gui Chen, Ruijie Wang, Yangyi Hu, Suyao Liu, Shuwen Mao, Hong Su, Yu Zhang, Li Xiao

arxiv preprint · Aug 15 2025
Dentocraniofacial hard tissue defects profoundly affect patients' physiological functions, facial aesthetics, and psychological well-being, posing significant challenges for precise reconstruction. Current deep learning models are limited to single-tissue scenarios and modality-specific imaging inputs, resulting in poor generalizability and trade-offs between anatomical fidelity, computational efficiency, and cross-tissue adaptability. Here we introduce UniDCF, a unified framework capable of reconstructing multiple dentocraniofacial hard tissues through multimodal fusion encoding of point clouds and multi-view images. By leveraging the complementary strengths of each modality and incorporating a score-based denoising module to refine surface smoothness, UniDCF overcomes the limitations of prior single-modality approaches. We curated the largest multimodal dataset, comprising intraoral scans, CBCT, and CT from 6,609 patients, resulting in 54,555 annotated instances. Evaluations demonstrate that UniDCF outperforms existing state-of-the-art methods in terms of geometric precision, structural completeness, and spatial accuracy. Clinical simulations indicate UniDCF reduces reconstruction design time by 99% and achieves clinician-rated acceptability exceeding 94%. Overall, UniDCF enables rapid, automated, and high-fidelity reconstruction, supporting personalized and precise restorative treatments, streamlining clinical workflows, and enhancing patient outcomes.
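UniDCF's evaluation emphasizes geometric precision of reconstructed surfaces. A metric commonly used to score reconstructed point clouds against ground truth is the Chamfer distance; this sketch is illustrative (the abstract does not state which metric UniDCF reports):

```python
import numpy as np

def chamfer(a, b):
    # symmetric Chamfer distance between two (n, 3) point clouds:
    # mean nearest-neighbor distance from a to b plus from b to a
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())
```

Identical clouds score zero; the metric penalizes both missing structure and spurious geometry, which matches the abstract's emphasis on structural completeness.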

A comparative analysis of imaging-based algorithms for detecting focal cortical dysplasia type II in children.

Šanda J, Holubová Z, Kala D, Jiránková K, Kudr M, Masák T, Bělohlávková A, Kršek P, Otáhal J, Kynčl M

pubmed · Aug 15 2025
Focal cortical dysplasia (FCD) is the leading cause of drug-resistant epilepsy (DRE) in pediatric patients. Accurate detection of FCDs is crucial for successful surgical outcomes, yet remains challenging due to frequently subtle MRI findings, especially in children, whose brain morphology undergoes significant developmental changes. Automated detection algorithms have the potential to improve diagnostic precision, particularly in cases where standard visual assessment fails. This study aimed to evaluate the performance of automated algorithms in detecting FCD type II in pediatric patients and to examine the impact of adult versus pediatric templates on detection accuracy. MRI data from 23 surgical pediatric patients with histologically confirmed FCD type II were retrospectively analyzed. Three imaging-based detection algorithms were applied to T1-weighted images, each targeting key structural features: cortical thickness, gray matter intensity (extension), and gray-white matter junction blurring. Their performance was assessed using adult and pediatric healthy-control templates, with validation against both predictive radiological ROIs (PRR) and post-resection cavities (PRC). The junction algorithm achieved the highest median Dice score (0.028, IQR 0.038, p < 0.01 compared with the other algorithms) and detected relevant clusters even in MRI-negative cases. The adult template (median Dice score 0.013, IQR 0.027) significantly outperformed the pediatric template (0.0032, IQR 0.023) (p < 0.001), highlighting the importance of template consistency. Despite the superior performance of the adult template, its use in pediatric populations may introduce bias, as it does not account for age-specific morphological features such as cortical maturation and incomplete myelination. Automated algorithms, especially those targeting junction blurring, enhance FCD detection in pediatric populations. These algorithms may serve as valuable decision-support tools, particularly in settings where neuroradiological expertise is limited.
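The Dice scores reported above measure overlap between a detected cluster and the validation region (PRR or PRC). As a brief illustrative sketch of the standard Dice coefficient on binary masks:

```python
import numpy as np

def dice(mask_a, mask_b):
    # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return float(2.0 * inter / denom) if denom else 1.0
```

Scores near zero, as in this study, indicate small but nonzero spatial overlap; the point is that even weak overlap can flag the correct lobe for expert review.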

Comprehensive analysis of [<sup>18</sup>F]MFBG biodistribution normal patterns and variability in pediatric patients with neuroblastoma.

Wang P, Chen X, Yan X, Yan J, Yang S, Mao J, Li F, Su X

pubmed · Aug 15 2025
[<sup>18</sup>F]-meta-fluorobenzylguanidine ([<sup>18</sup>F]MFBG) PET/CT is a promising imaging modality for neural crest-derived tumors, particularly neuroblastoma. Accurate interpretation necessitates an understanding of normal biodistribution and variations in physiological uptake. This study aimed to systematically characterize the physiological distribution and variability of [<sup>18</sup>F]MFBG uptake in pediatric patients to enhance clinical interpretation and differentiate normal from pathological uptake. We retrospectively analyzed [<sup>18</sup>F]MFBG PET/CT scans from 169 pediatric neuroblastoma patients, including 20 in confirmed remission, for detailed biodistribution analysis. Organ uptake was quantified using both manual segmentation and deep learning (DL)-based automatic segmentation methods. Patterns of physiological uptake variants were categorized and illustrated using representative cases. [<sup>18</sup>F]MFBG demonstrated consistent physiological uptake in the salivary glands (SUVmax 9.8 ± 3.3), myocardium (7.1 ± 1.7), and adrenal glands (4.6 ± 0.9), with low activity in bone (0.6 ± 0.2) and muscle (0.8 ± 0.2). DL-based analysis confirmed uniform, mild uptake across vertebral and peripheral skeletal structures (SUVmean 0.47 ± 0.08). Three physiological liver uptake patterns were identified: uniform (43%), left-lobe predominant (31%), and marginal (26%). Asymmetric uptake in the pancreatic head, transient brown adipose tissue activity, gallbladder excretion, and symmetric epiphyseal uptake were also recorded. These variants were not associated with structural abnormalities or clinical recurrence and showed distinct patterns from pathological lesions. This study establishes a reference for normal [<sup>18</sup>F]MFBG biodistribution and physiological variants in children. Understanding these patterns is essential for accurate image interpretation and the avoidance of diagnostic pitfalls in pediatric neuroblastoma patients.
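The SUVmax and SUVmean figures above are body-weight-normalized standardized uptake values. A minimal sketch of the standard body-weight SUV formula (illustrative; the study's exact quantification pipeline is not described in the abstract):

```python
def suv_bw(activity_kbq_per_ml, injected_dose_mbq, weight_kg):
    # body-weight SUV = tissue activity concentration / (injected dose / body weight)
    # kBq/mL ÷ (MBq / kg) simplifies to C * weight / dose because the
    # kBq→MBq and kg→g conversion factors (both 1000, with 1 mL ≈ 1 g tissue) cancel
    return activity_kbq_per_ml * weight_kg / injected_dose_mbq
```

An SUV of 1.0 corresponds to uptake equal to a uniform whole-body distribution of the tracer, which is why the skeletal SUVmean of about 0.5 reads as mild background activity.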

Efficient Image-to-Image Schrödinger Bridge for CT Field of View Extension

Zhenhao Li, Long Yang, Xiaojie Yin, Haijun Yu, Jiazhou Wang, Hongbin Han, Weigang Hu, Yixing Huang

arxiv preprint · Aug 15 2025
Computed tomography (CT) is a cornerstone imaging modality for non-invasive, high-resolution visualization of internal anatomical structures. However, when the scanned object exceeds the scanner's field of view (FOV), projection data are truncated, resulting in incomplete reconstructions and pronounced artifacts near FOV boundaries. Conventional reconstruction algorithms struggle to recover accurate anatomy from such data, limiting clinical reliability. Deep learning approaches have been explored for FOV extension, with diffusion generative models representing the latest advances in image synthesis. Yet, conventional diffusion models are computationally demanding and slow at inference due to their iterative sampling process. To address these limitations, we propose an efficient CT FOV extension framework based on the image-to-image Schrödinger Bridge (I²SB) diffusion model. Unlike traditional diffusion models that synthesize images from pure Gaussian noise, I²SB learns a direct stochastic mapping between paired limited-FOV and extended-FOV images. This direct correspondence yields a more interpretable and traceable generative process, enhancing anatomical consistency and structural fidelity in reconstructions. I²SB achieves superior quantitative performance, with root-mean-square error (RMSE) values of 49.8 HU on simulated noisy data and 152.0 HU on real data, outperforming state-of-the-art diffusion models such as conditional denoising diffusion probabilistic models (cDDPM) and patch-based diffusion methods. Moreover, its one-step inference enables reconstruction in just 0.19 s per 2D slice, representing over a 700-fold speedup compared to cDDPM (135 s) and surpassing diffusionGAN (0.58 s), the second fastest. This combination of accuracy and efficiency makes I²SB highly suitable for real-time or clinical deployment.
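The headline numbers above are root-mean-square errors in Hounsfield units between the extended-FOV reconstruction and the reference. As a brief illustrative sketch of that metric:

```python
import numpy as np

def rmse_hu(pred, ref):
    # root-mean-square error between predicted and reference images,
    # assuming both are already in Hounsfield units
    d = np.asarray(pred, float) - np.asarray(ref, float)
    return float(np.sqrt(np.mean(d * d)))
```

For scale, a 49.8 HU RMSE is small relative to the roughly 1000 HU spread between air and soft tissue on a CT scale.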

Deep learning radiomics of elastography for diagnosing compensated advanced chronic liver disease: an international multicenter study.

Lu X, Zhang H, Kuroda H, Garcovich M, de Ledinghen V, Grgurević I, Linghu R, Ding H, Chang J, Wu M, Feng C, Ren X, Liu C, Song T, Meng F, Zhang Y, Fang Y, Ma S, Wang J, Qi X, Tian J, Yang X, Ren J, Liang P, Wang K

pubmed · Aug 15 2025
Accurate, noninvasive diagnosis of compensated advanced chronic liver disease (cACLD) is essential for effective clinical management but remains challenging. This study aimed to develop a deep learning-based radiomics model using international multicenter data and to evaluate its performance against the two-dimensional shear wave elastography (2D-SWE) cut-off method across multiple countries or regions, etiologies, and ultrasound device manufacturers. This retrospective study included 1937 adult patients with chronic liver disease due to hepatitis B, hepatitis C, or metabolic dysfunction-associated steatotic liver disease. All patients underwent 2D-SWE imaging and liver biopsy at 17 centers across China, Japan, and Europe using devices from three manufacturers (SuperSonic Imagine, General Electric, and Mindray). The proposed generalized deep learning radiomics of elastography model integrated both elastographic images and liver stiffness measurements and was trained and tested on stratified internal and external datasets. A total of 1937 patients with 9472 2D-SWE images were included in the statistical analysis. Compared to 2D-SWE, the model achieved a higher area under the receiver operating characteristic curve (AUC) (0.89 vs 0.83, P = 0.025). It also achieved a highly consistent diagnosis across all subanalyses (P values: 0.21-0.91), whereas 2D-SWE exhibited different AUCs in the country or region (P < 0.001) and etiology (P = 0.005) subanalyses but not in the manufacturer subanalysis (P = 0.24). The model demonstrated more accurate and robust performance in noninvasive cACLD diagnosis than 2D-SWE across different countries or regions, etiologies, and manufacturers.
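The AUC comparison above (0.89 vs 0.83) is the area under the ROC curve. A compact illustrative sketch of the rank-based (Mann-Whitney) definition of AUC, which avoids building the curve explicitly:

```python
def auc(scores_pos, scores_neg):
    # AUC = probability that a randomly chosen positive case scores
    # higher than a randomly chosen negative case (ties count half)
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))
```

This O(n·m) form is fine for illustration; production code would use a sorted-rank implementation for large cohorts like the 1937 patients here.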

Artificial intelligence across the cancer care continuum.

Riaz IB, Khan MA, Osterman TJ

pubmed · Aug 15 2025
Artificial intelligence (AI) holds significant potential to enhance various aspects of oncology, spanning the cancer care continuum. This review provides an overview of current and emerging AI applications, from risk assessment and early detection to treatment and supportive care. AI-driven tools are being developed to integrate diverse data sources, including multi-omics and electronic health records, to improve cancer risk stratification and personalize prevention strategies. In screening and diagnosis, AI algorithms show promise in augmenting the accuracy and efficiency of medical image analysis and histopathology interpretation. AI also offers opportunities to refine treatment planning, optimize radiation therapy, and personalize systemic therapy selection. Furthermore, AI is explored for its potential to improve survivorship care by tailoring interventions and to enhance end-of-life care through improved symptom management and prognostic modeling. Beyond care delivery, AI augments clinical workflows, streamlines the dissemination of up-to-date evidence, and captures critical patient-reported outcomes for clinical decision support and outcomes assessment. However, the successful integration of AI into clinical practice requires addressing key challenges, including rigorous validation of algorithms, ensuring data privacy and security, and mitigating potential biases. Effective implementation necessitates interdisciplinary collaboration and comprehensive education for health care professionals. The synergistic interaction between AI and clinical expertise is crucial for realizing the potential of AI to contribute to personalized and effective cancer care. This review highlights the current state of AI in oncology and underscores the importance of responsible development and implementation.

BRIEF: BRain-Inspired network connection search with Extensive temporal feature Fusion enhances disease classification

Xiangxiang Cui, Min Zhao, Dongmei Zhi, Shile Qi, Vince D Calhoun, Jing Sui

arxiv preprint · Aug 15 2025
Existing deep learning models for functional MRI-based classification have limitations in network architecture determination (relying on experience) and feature space fusion (mostly simple concatenation, lacking mutual learning). Inspired by the human brain's mechanism of updating neural connections through learning and decision-making, we proposed a novel BRain-Inspired feature Fusion (BRIEF) framework, which is able to optimize network architecture automatically by incorporating an improved neural network connection search (NCS) strategy and a Transformer-based multi-feature fusion module. Specifically, we first extracted 4 types of fMRI temporal representations, i.e., time series (TCs), static/dynamic functional connection (FNC/dFNC), and multi-scale dispersion entropy (MsDE), to construct four encoders. Within each encoder, we employed a modified Q-learning to dynamically optimize the NCS to extract high-level feature vectors, where the NCS is formulated as a Markov Decision Process. Then, all feature vectors were fused via a Transformer, leveraging both stable/time-varying connections and multi-scale dependencies across different brain regions to achieve the final classification. Additionally, an attention module was embedded to improve interpretability. The classification performance of our proposed BRIEF was compared with 21 state-of-the-art models by discriminating two mental disorders from healthy controls: schizophrenia (SZ, n=1100) and autism spectrum disorder (ASD, n=1550). BRIEF demonstrated significant improvements of 2.2% to 12.1% compared to the 21 algorithms, reaching an AUC of 91.5% ± 0.6% for SZ and 78.4% ± 0.5% for ASD, respectively. This is the first attempt to incorporate a brain-inspired, reinforcement learning strategy to optimize fMRI-based mental disorder classification, showing significant potential for identifying precise neuroimaging biomarkers.
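BRIEF frames its connection search as a Markov Decision Process optimized with modified Q-learning. As a hedged sketch of the standard tabular Q-learning update underlying that family of methods (the paper's modified variant and its state/action definitions are not given in the abstract):

```python
def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    # one tabular Q-learning step:
    # Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
    # Q is a dict of dicts: state -> {action: value}
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q[s][a]
```

In an architecture-search setting, a state could encode the current connection pattern, an action could add or drop a connection, and the reward could be validation performance, so the table gradually steers the search toward better architectures.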
