Dynamic neural network modulation associated with rumination in major depressive disorder: a prospective observational comparative analysis of cognitive behavioral therapy and pharmacotherapy.

Katayama N, Shinagawa K, Hirano J, Kobayashi Y, Nakagawa A, Umeda S, Kamiya K, Tajima M, Amano M, Nogami W, Ihara S, Noda S, Terasawa Y, Kikuchi T, Mimura M, Uchida H

PubMed · Aug 6, 2025
Cognitive behavioral therapy (CBT) and pharmacotherapy are primary treatments for major depressive disorder (MDD). However, their differential effects on the neural networks associated with rumination, or repetitive negative thinking, remain poorly understood. This study included 135 participants, whose rumination severity was measured using the Rumination Response Scale (RRS) and whose resting brain activity was measured using functional magnetic resonance imaging (fMRI) at baseline and after 16 weeks. MDD patients received either standard CBT based on Beck's manual (n = 28) or pharmacotherapy (n = 32). Using a hidden Markov model, we observed that MDD patients exhibited increased activity in the default mode network (DMN) and decreased occupancies in the sensorimotor and central executive networks (CEN). The DMN occurrence rate correlated positively with rumination severity. CBT, while not specifically designed to target rumination, reduced the DMN occurrence rate and facilitated transitions toward a CEN-dominant brain state as part of broader therapeutic effects. Pharmacotherapy shifted DMN activity to the posterior region of the brain. These findings suggest that CBT and pharmacotherapy modulate brain network dynamics related to rumination through distinct therapeutic pathways.
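
As context for the network-occupancy measures reported above, the sketch below shows how brain-state fractional occupancy can be derived from ROI time series with a Gaussian hidden Markov model. The toy data, the number of states, and the use of the hmmlearn package are illustrative assumptions, not the authors' pipeline.

```python
# Fractional occupancy from a Gaussian HMM fitted to ROI time series (toy example).
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
timeseries = rng.standard_normal((400, 10))   # 400 volumes x 10 network ROIs (synthetic)

hmm = GaussianHMM(n_components=4, covariance_type="full", n_iter=100, random_state=0)
hmm.fit(timeseries)
states = hmm.predict(timeseries)              # most likely brain state at each timepoint

# Fractional occupancy: share of timepoints spent in each state
# (e.g., the occurrence rate of a DMN-dominant state).
occupancy = np.bincount(states, minlength=hmm.n_components) / len(states)
print(occupancy)
```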

Pyramidal attention-based T network for brain tumor classification: a comprehensive analysis of transfer learning approaches for clinically reliable and reliable AI hybrid approaches.

Banerjee T, Chhabra P, Kumar M, Kumar A, Abhishek K, Shah MA

PubMed · Aug 6, 2025
Brain tumors are a significant challenge to human health, as they impair normal brain function and overall quality of life, making early and accurate diagnosis essential for clinical intervention. Although current state-of-the-art deep learning methods have achieved remarkable progress, gaps remain in learning tumor-specific spatial representations and in the robustness of classification models on heterogeneous data. In this paper, we introduce a novel Pyramidal Attention-Based bi-partitioned T Network (PABT-Net) that combines a hierarchical pyramidal attention mechanism, T-block-based bi-partitioned feature extraction, and a self-convolutional dilated neural classifier for the final classification task. This architecture improves spatial discriminability and reduces false predictions by adaptively focusing on informative regions of brain MRI images. The model was thoroughly tested on three benchmark datasets, the Figshare Brain Tumor Dataset, the Sartaj Brain MRI Dataset, and the Br35H Brain Tumor Dataset, comprising 7023 images labeled in four tumor classes: glioma, meningioma, no tumor, and pituitary tumor. It attained an overall classification accuracy of 99.12%, a mean cross-validation accuracy of 98.77%, a Jaccard similarity index of 0.986, and a Cohen's kappa of 0.987, indicating excellent generalization and clinical stability. Tumor-wise classification accuracies of 96.75%, 98.46%, and 99.57% for glioma, meningioma, and pituitary tumors, respectively, further confirm the model's effectiveness. Comparative experiments against state-of-the-art models, including VGG19, MobileNet, and NASNet, were carried out, and ablation studies confirmed the benefit of NASNet incorporation. To capture more prominent spatial-temporal patterns, we also investigated hybrid networks, including NASNet with ANN, CNN, LSTM, and CNN-LSTM variants. The framework implements a strict nine-fold cross-validation procedure and a broad range of evaluation measures, including precision, recall, specificity, F1-score, AUC, confusion matrices, and ROC analysis, applied consistently across data distributions. Overall, PABT-Net shows strong potential as a clinically deployable, interpretable, state-of-the-art automated brain tumor classification model.
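
The agreement metrics quoted above (accuracy, Cohen's kappa, Jaccard index) under k-fold cross-validation can be computed as in the sketch below; the synthetic four-class data and the placeholder classifier stand in for PABT-Net, which is not reproduced here.

```python
# k-fold evaluation with the metrics reported in the abstract (placeholder model).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score, jaccard_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict

X, y = make_classification(n_samples=600, n_classes=4, n_informative=8, random_state=0)
clf = RandomForestClassifier(random_state=0)

cv = StratifiedKFold(n_splits=9, shuffle=True, random_state=0)   # nine-fold CV as in the paper
y_pred = cross_val_predict(clf, X, y, cv=cv)

print("accuracy:", accuracy_score(y, y_pred))
print("kappa   :", cohen_kappa_score(y, y_pred))
print("jaccard :", jaccard_score(y, y_pred, average="macro"))
```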

Automated Deep Learning-based Segmentation of the Dentate Nucleus Using Quantitative Susceptibility Mapping MRI.

Shiraishi DH, Saha S, Adanyeguh IM, Cocozza S, Corben LA, Deistung A, Delatycki MB, Dogan I, Gaetz W, Georgiou-Karistianis N, Graf S, Grisoli M, Henry PG, Jarola GM, Joers JM, Langkammer C, Lenglet C, Li J, Lobo CC, Lock EF, Lynch DR, Mareci TH, Martinez ARM, Monti S, Nigri A, Pandolfo M, Reetz K, Roberts TP, Romanzetti S, Rudko DA, Scaravilli A, Schulz JB, Subramony SH, Timmann D, França MC, Harding IH, Rezende TJR

PubMed · Aug 6, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop a dentate nucleus (DN) segmentation tool using deep learning (DL) applied to brain MRI-based quantitative susceptibility mapping (QSM) images. Materials and Methods Brain QSM images from healthy controls and individuals with cerebellar ataxia or multiple sclerosis were collected from nine different datasets (2016-2023) worldwide for this retrospective study (ClinicalTrials.gov Identifier: NCT04349514). Manual delineation of the DN was performed by experienced raters. Automated segmentation performance was evaluated against manual reference segmentations following training with several DL architectures. A two-step approach was used, consisting of a localization model followed by DN segmentation. Performance metrics included intraclass correlation coefficient (ICC), Dice score, and Pearson correlation coefficient. Results The training and testing datasets comprised 328 individuals (age range, 11-64 years; 171 female), including 141 healthy individuals and 187 with cerebellar ataxia or multiple sclerosis. The manual tracing protocol produced reference standards with high intrarater (average ICC 0.91) and interrater reliability (average ICC 0.78). Initial DL architecture exploration indicated that the nnU-Net framework performed best. The two-step localization plus segmentation pipeline achieved a Dice score of 0.90 ± 0.03 and 0.89 ± 0.04 for left and right DN segmentation, respectively. In external testing, the proposed algorithm outperformed the current leading automated tool (mean Dice scores for left and right DN: 0.86 ± 0.04 vs 0.57 ± 0.22, <i>P</i> < .001; 0.84 ± 0.07 vs 0.58 ± 0.24, <i>P</i> < .001). The model demonstrated generalizability across datasets unseen during the training step, with automated segmentations showing high correlation with manual annotations (left DN: r = 0.74; <i>P</i> < .001; right DN: r = 0.48; <i>P</i> = .03). Conclusion The proposed model accurately and efficiently segmented the DN from brain QSM images. The model is publicly available (https://github.com/art2mri/DentateSeg). ©RSNA, 2025.

ATLASS: An AnaTomicaLly-Aware Self-Supervised Learning Framework for Generalizable Retinal Disease Detection.

Khan AA, Ahmad KM, Shafiq S, Akram MU, Shao J

PubMed · Aug 6, 2025
Medical imaging, particularly retinal fundus photography, plays a crucial role in early disease detection and treatment for various ocular disorders. However, the development of robust diagnostic systems using deep learning remains constrained by the scarcity of expertly annotated data, which is time-consuming and expensive to produce. Self-supervised learning (SSL) has emerged as a promising solution, but existing models fail to effectively incorporate critical domain knowledge specific to retinal anatomy, which potentially limits their clinical relevance and diagnostic capability. We address this issue by introducing an anatomically aware SSL framework that strategically integrates domain expertise through specialized masking of vital retinal structures during pretraining. Our approach leverages vessel and optic disc segmentation maps to guide the SSL process, enabling the development of clinically relevant feature representations without extensive labeled data. The framework combines a vision transformer with dual-masking strategies and anatomically informed loss functions to preserve structural integrity during feature learning. Comprehensive evaluation across multiple datasets demonstrates the method's competitive performance in diverse retinal disease classification tasks, including diabetic retinopathy grading, glaucoma detection, age-related macular degeneration identification, and multi-disease classification. These results establish the effectiveness of anatomically aware SSL in advancing automated retinal disease diagnosis while addressing the fundamental challenge of limited labeled medical data.
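
A minimal sketch of the central idea, anatomically informed masking for self-supervised pretraining, is given below; the patch size, mask ratio, and weighting rule are illustrative assumptions rather than the ATLASS configuration.

```python
# Anatomy-weighted patch masking: patches overlapping a vessel/optic-disc map
# are masked preferentially before feeding a ViT-style encoder (toy example).
import numpy as np

def anatomical_mask(seg_map, patch=16, ratio=0.5, rng=None):
    """Return a boolean (H//patch, W//patch) grid; True = patch is masked."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = seg_map.shape
    grid = seg_map[:h - h % patch, :w - w % patch]
    grid = grid.reshape(h // patch, patch, w // patch, patch).mean(axis=(1, 3))
    weights = 1.0 + 4.0 * grid                   # anatomy-rich patches are sampled more often
    probs = weights.ravel() / weights.sum()
    n_mask = int(ratio * probs.size)
    idx = rng.choice(probs.size, size=n_mask, replace=False, p=probs)
    mask = np.zeros(probs.size, dtype=bool)
    mask[idx] = True
    return mask.reshape(grid.shape)

seg = (np.random.default_rng(0).random((224, 224)) > 0.9).astype(float)  # toy vessel map
print(anatomical_mask(seg).sum(), "of", (224 // 16) ** 2, "patches masked")
```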

Segmenting Whole-Body MRI and CT for Multiorgan Anatomic Structure Delineation.

Häntze H, Xu L, Mertens CJ, Dorfner FJ, Donle L, Busch F, Kader A, Ziegelmayer S, Bayerl N, Navab N, Rueckert D, Schnabel J, Aerts HJWL, Truhn D, Bamberg F, Weiss J, Schlett CL, Ringhof S, Niendorf T, Pischon T, Kauczor HU, Nonnenmacher T, Kröncke T, Völzke H, Schulz-Menger J, Maier-Hein K, Hering A, Prokop M, van Ginneken B, Makowski MR, Adams LC, Bressem KK

PubMed · Aug 6, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop and validate MRSegmentator, a retrospective cross-modality deep learning model for multiorgan segmentation of MRI scans. Materials and Methods This retrospective study trained MRSegmentator on 1,200 manually annotated UK Biobank Dixon MRI sequences (50 participants), 221 in-house abdominal MRI sequences (177 patients), and 1228 CT scans from the TotalSegmentator-CT dataset. A human-in-the-loop annotation workflow leveraged cross-modality transfer learning from an existing CT segmentation model to segment 40 anatomic structures. The model's performance was evaluated on 900 MRI sequences from 50 participants in the German National Cohort (NAKO), 60 MRI sequences from AMOS22 dataset, and 29 MRI sequences from TotalSegmentator-MRI. Reference standard manual annotations were used for comparison. Metrics to assess segmentation quality included Dice Similarity Coefficient (DSC). Statistical analyses included organ-and sequence-specific mean ± SD reporting and two-sided <i>t</i> tests for demographic effects. Results 139 participants were evaluated; demographic information was available for 70 (mean age 52.7 years ± 14.0 [SD], 36 female). Across all test datasets, MRSegmentator demonstrated high class wise DSC for well-defined organs (lungs: 0.81-0.96, heart: 0.81-0.94) and organs with anatomic variability (liver: 0.82-0.96, kidneys: 0.77-0.95). Smaller structures showed lower DSC (portal/splenic veins: 0.64-0.78, adrenal glands: 0.56-0.69). The average DSC on the external testing using NAKO data, ranged from 0.85 ± 0.08 for T2-HASTE to 0.91 ± 0.05 for in-phase sequences. The model generalized well to CT, achieving mean DSC of 0.84 ± 0.12 on AMOS CT data. Conclusion MRSegmentator accurately segmented 40 anatomic structures on MRI and generalized to CT; outperforming existing open-source tools. Published under a CC BY 4.0 license.

Artificial Intelligence Iterative Reconstruction Algorithm Combined with Low-Dose Aortic CTA for Preoperative Access Assessment of Transcatheter Aortic Valve Implantation: A Prospective Cohort Study.

Li Q, Liu D, Li K, Li J, Zhou Y

PubMed · Aug 6, 2025
This study aimed to explore whether an artificial intelligence iterative reconstruction (AIIR) algorithm combined with low-dose aortic computed tomography angiography (CTA) is clinically effective for assessing preoperative access for transcatheter aortic valve implantation (TAVI). A total of 109 patients were prospectively recruited for aortic CTA scans and divided into two groups: group A (n = 51), which underwent standard-dose CT examinations (SDCT), and group B (n = 58), which underwent low-dose CT examinations (LDCT). Group B was further subdivided into groups B1 and B2. Groups A and B2 used a hybrid iterative reconstruction algorithm (HIR: Karl 3D), whereas group B1 used the AIIR algorithm. CT attenuation and noise were measured in different vessel segments, and the contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) were calculated. Two radiologists, blinded to the study details, rated subjective image quality on a 5-point scale. Effective radiation doses were also recorded for groups A and B. Group B1 demonstrated the highest CT attenuation, SNR, and CNR and the lowest image noise among the three groups (p < 0.05). Scores for subjective image noise, vessel and non-calcified plaque edge sharpness, and overall image quality were higher in group B1 than in groups A and B2 (p < 0.001). Group B2 had the highest artifact scores among the three groups (p < 0.05). The radiation dose in group B was 50.33% lower than in group A (p < 0.001). The AIIR algorithm combined with low-dose CTA yielded better diagnostic images before TAVI than the Karl 3D algorithm.
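
For readers unfamiliar with the objective metrics above, the sketch below shows how SNR and CNR are typically derived from ROI measurements of CT attenuation and noise; the numbers are toy values, not results from this study, and using the background SD as the noise term is one common convention.

```python
# SNR and CNR from ROI statistics (toy values in Hounsfield units).
vessel_hu, vessel_sd = 420.0, 18.0          # mean attenuation / noise in the aortic lumen
background_hu, background_sd = 55.0, 16.0   # adjacent soft-tissue reference ROI

snr = vessel_hu / vessel_sd                          # signal-to-noise ratio
cnr = (vessel_hu - background_hu) / background_sd    # contrast-to-noise ratio

print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```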

Quantum Federated Learning in Healthcare: The Shift from Development to Deployment and from Models to Data.

Bhatia AS, Kais S, Alam MA

PubMed · Aug 6, 2025
Healthcare organizations hold high volumes of sensitive data, while traditional technologies offer limited storage capacity and computational resources. Sharing healthcare data for machine learning is made more arduous still by strict regulations on patient privacy. In recent years, federated learning has offered a way to accelerate distributed machine learning while addressing concerns about data privacy and governance. Meanwhile, the blend of quantum computing and machine learning has attracted significant attention from academic institutions and research communities. The objective of this work is to develop a federated quantum machine learning (FQML) framework that tackles the optimization, security, and privacy challenges of medical imaging tasks in the healthcare industry. We propose federated quantum convolutional neural networks (QCNNs) with distributed training across edge devices. To demonstrate the feasibility of the proposed FQML framework, we performed extensive experiments on two benchmark medical datasets (Pneumonia MNIST and a CT kidney disease dataset), partitioned among the healthcare institutions/clients in a non-independent and non-identically distributed manner. The framework is validated and assessed via large-scale simulations. The quantum simulation experiments achieve performance on par with well-known classical CNN models, 86.3% accuracy on the pneumonia dataset and 92.8% on the CT kidney dataset, while requiring fewer model parameters and consuming less data. Moreover, a client selection mechanism is proposed to reduce the computational overhead at each communication round, which effectively improves the convergence rate.
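
The sketch below illustrates federated averaging with per-round client selection, the aggregation pattern such a framework builds on; plain NumPy parameter vectors and a noise-perturbed "local training" step stand in for the quantum convolutional models, which are not reproduced here.

```python
# Federated averaging with random client selection per communication round (toy example).
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Average client parameter vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
n_clients, dim = 10, 128
global_w = np.zeros(dim)

for round_ in range(5):
    selected = rng.choice(n_clients, size=4, replace=False)   # client selection step
    updates, sizes = [], []
    for c in selected:
        local_w = global_w + 0.01 * rng.standard_normal(dim)  # stand-in for local training
        updates.append(local_w)
        sizes.append(int(rng.integers(100, 500)))             # local dataset size
    global_w = fed_avg(updates, sizes)

print(global_w[:4])
```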

AI-Guided Cardiac Computer Tomography in Type 1 Diabetes Patients with Low Coronary Artery Calcium Score.

Wohlfahrt P, Pazderník M, Marhefková N, Roland R, Adla T, Earls J, Haluzík M, Dubský M

PubMed · Aug 6, 2025
Objective: Cardiovascular risk stratification based on traditional risk factors lacks precision at the individual level. While coronary artery calcium (CAC) scoring enhances risk prediction by detecting calcified atherosclerotic plaques, it may underestimate risk in individuals with noncalcified plaques, a pattern common in younger type 1 diabetes (T1D) patients. Understanding the prevalence of noncalcified atherosclerosis in T1D is crucial for developing more effective screening strategies. This study therefore aimed to assess the burden of clinically significant atherosclerosis in T1D patients with CAC <100 using artificial intelligence (AI)-guided quantitative coronary computed tomographic angiography (AI-QCT). Methods: This study enrolled T1D patients aged ≥30 years with disease duration ≥10 years and no manifest or symptomatic atherosclerotic cardiovascular disease (ASCVD). CAC and carotid ultrasound were assessed in all participants. AI-QCT was performed in patients with CAC 0 and at least one plaque in the carotid arteries, or with CAC 1-99. Results: Among the 167 participants (mean age 52 ± 10 years; 44% women; T1D duration 29 ± 11 years), 93 (56%) had CAC = 0, 46 (28%) had CAC 1-99, 8 (5%) had CAC 100-299, and 20 (12%) had CAC ≥300. AI-QCT was performed in a subset of 52 patients. Only 11 (21%) had no evidence of coronary artery disease. Significant coronary stenosis was identified in 17% of patients, and 30 (73%) presented with at least one high-risk plaque. AI-QCT reclassified 58% of patients relative to CAC-based risk categories and 21% relative to the STENO1 risk categories. There was only fair agreement between AI-QCT and CAC (κ = 0.25) and slight agreement between AI-QCT and STENO1 risk categories (κ = 0.02). Conclusion: AI-QCT may reveal subclinical atherosclerotic burden and high-risk features that remain undetected by traditional risk models or CAC. These findings challenge the assumption that a low CAC score equates to low cardiovascular risk in T1D.

On the effectiveness of multimodal privileged knowledge distillation in two vision transformer based diagnostic applications

Simon Baur, Alexandra Benova, Emilio Dolgener Cantú, Jackie Ma

arXiv preprint · Aug 6, 2025
Deploying deep learning models in clinical practice often requires leveraging multiple data modalities, such as images, text, and structured data, to achieve robust and trustworthy decisions. However, not all modalities are available at inference time. In this work, we propose multimodal privileged knowledge distillation (MMPKD), a training strategy that uses additional modalities available only during training to guide a unimodal vision model. Specifically, we use a text-based teacher model for chest radiographs (MIMIC-CXR) and a tabular, metadata-based teacher model for mammography (CBIS-DDSM) to distill knowledge into a vision transformer student model. We show that MMPKD improves the zero-shot ability of the resulting attention maps to localize regions of interest in input images, although, contrary to what prior research suggested, this effect does not generalize across domains.
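
The sketch below shows a standard privileged-knowledge-distillation objective of the kind MMPKD builds on: a softened KL term against the teacher's logits plus the usual cross-entropy. The function name, temperature, and loss weight are illustrative assumptions; the paper's exact objective may differ.

```python
# Distillation loss: the vision student is supervised by a teacher that saw the
# privileged modality (text or tabular metadata) during training only.
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * kd + (1.0 - alpha) * ce

student_logits = torch.randn(8, 2, requires_grad=True)   # vision transformer student outputs
teacher_logits = torch.randn(8, 2)                        # text/tabular teacher outputs
labels = torch.randint(0, 2, (8,))
distill_loss(student_logits, teacher_logits, labels).backward()
```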

Recurrent inference machine for medical image registration.

Zhang Y, Zhao Y, Xue H, Kellman P, Klein S, Tao Q

PubMed · Aug 5, 2025
Image registration is essential for medical imaging applications in which voxels must be aligned across multiple images for qualitative or quantitative analysis. With recent advances in deep neural networks and parallel computing, deep learning-based medical image registration methods have become competitive thanks to their flexible modeling and fast inference. However, compared with traditional optimization-based registration methods, this speed advantage may come at the cost of registration performance at inference time. Moreover, deep neural networks ideally demand large training datasets, whereas optimization-based methods are training-free. To improve registration accuracy and data efficiency, we propose a novel image registration method, termed the Recurrent Inference Image Registration (RIIR) network. RIIR is formulated as a meta-learning solver that addresses the registration problem iteratively. It tackles the accuracy and data-efficiency issues by learning the update rule of optimization, combining implicit regularization with explicit gradient input. We extensively evaluated RIIR on brain MRI, lung CT, and quantitative cardiac MRI datasets in terms of both registration accuracy and training-data efficiency. Our experiments showed that RIIR outperformed a range of deep learning-based methods even with only 5% of the training data, demonstrating high data efficiency. Ablation studies highlighted the important added value of the hidden states introduced in the recurrent inference framework for meta-learning. RIIR thus offers a highly data-efficient framework for deep learning-based medical image registration.
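
The sketch below illustrates the core recurrent-inference idea, a learned update rule that consumes the current similarity gradient and carries a hidden state across iterations; the shapes, the additive "warping", and the similarity term are toy assumptions, not the RIIR architecture itself.

```python
# Learned iterative updates with a hidden state (toy stand-in for a recurrent
# inference machine applied to registration parameters).
import torch
import torch.nn as nn

dim = 64                                     # flattened deformation parameters (toy size)
cell = nn.GRUCell(input_size=dim, hidden_size=128)
to_update = nn.Linear(128, dim)

fixed = torch.randn(dim)
moving = torch.randn(dim)
phi = torch.zeros(dim, requires_grad=True)   # deformation parameters
h = torch.zeros(1, 128)                      # hidden state carried across iterations

for step in range(8):
    warped = moving + phi                            # stand-in for spatial warping
    sim = ((warped - fixed) ** 2).mean()             # stand-in for the image-similarity metric
    grad, = torch.autograd.grad(sim, phi, create_graph=True)
    h = cell(grad.unsqueeze(0), h)                   # explicit gradient input, implicit state
    phi = phi + to_update(h).squeeze(0)              # learned update rule

print(float(((moving + phi) - fixed).pow(2).mean()))
```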