
FedBCD: Federated Ultrasound Video and Image Joint Learning for Breast Cancer Diagnosis.

Deng T, Huang C, Cai M, Liu Y, Liu M, Lin J, Shi Z, Zhao B, Huang J, Liang C, Han G, Liu Z, Wang Y, Han C

PubMed · Jun 1, 2025
Ultrasonography plays an essential role in breast cancer diagnosis. Current deep-learning studies train models on either images or videos in a centralized manner, without considering the joint benefits between the two modality models or the privacy issues of data centralization. In this study, we propose FedBCD, the first decentralized learning solution for joint learning from breast ultrasound videos and images. To enable the model to learn from images and videos simultaneously and seamlessly during client-level local training, we propose a Joint Ultrasound Video and Image Learning (JUVIL) model that bridges the dimension gap between video and image data by incorporating temporal and spatial adapters. The parameter-efficient design of JUVIL, with trainable adapters and a frozen backbone, further reduces the computational cost and communication burden of federated learning, improving overall efficiency. Moreover, conventional model-wise aggregation may lead to unstable federated training because of differing modalities, different data capacities across clients, and different functionalities across layers. We therefore propose a Fisher information matrix (FIM)-guided layer-wise aggregation method named FILA. By measuring layer-wise sensitivity with the FIM, FILA assigns higher contributions to clients with lower sensitivity, improving personalized performance during federated training. Extensive experiments on three image clients and one video client demonstrate the benefits of the joint learning architecture, especially for clients with small-scale data. FedBCD significantly outperforms nine federated learning methods on both video-based and image-based diagnosis, demonstrating its superiority and potential for clinical practice. Code is released at https://github.com/tianpeng-deng/FedBCD.
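The abstract's FILA idea can be illustrated with a minimal sketch: aggregate one layer across clients with weights inversely proportional to each client's Fisher-information sensitivity for that layer. The scalar sensitivity (sum of FIM entries) and the inverse weighting are assumptions for illustration; the paper's exact formulation is not reproduced here.

```python
import numpy as np

def fila_aggregate(client_layers, client_fims):
    """Aggregate one layer's parameters across clients, weighting each
    client inversely to its Fisher-information sensitivity for that layer
    (lower sensitivity -> higher contribution), then normalizing.

    Illustrative sketch: scalar sensitivity = sum of FIM entries.
    """
    sens = np.array([fim.sum() for fim in client_fims])  # per-client sensitivity
    inv = 1.0 / (sens + 1e-8)                            # low sensitivity wins
    w = inv / inv.sum()                                  # convex combination
    stacked = np.stack(client_layers)                    # (num_clients, *layer_shape)
    return np.tensordot(w, stacked, axes=1)              # weighted average
```

For two clients with sensitivities 1 and 3, the first client receives weight 0.75 and the second 0.25.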

The Pivotal Role of Baseline LDCT for Lung Cancer Screening in the Era of Artificial Intelligence.

De Luca GR, Diciotti S, Mascalchi M

PubMed · Jun 1, 2025
In this narrative review, we address the ongoing challenges of lung cancer (LC) screening using chest low-dose computerized tomography (LDCT) and explore the contributions of artificial intelligence (AI) in overcoming them. We focus on evaluating the initial (baseline) LDCT examination, which provides a wealth of information relevant to the screening participant's health. This includes the detection of large-size prevalent LC and small-size malignant nodules that are typically diagnosed as LCs upon growth in subsequent annual LDCT scans. Additionally, the baseline LDCT examination provides valuable information about smoking-related comorbidities, including cardiovascular disease, chronic obstructive pulmonary disease, and interstitial lung disease (ILD), by identifying relevant markers. Notably, these comorbidities, despite the slow progression of their markers, collectively exceed LC as ultimate causes of death at follow-up in LC screening participants. Computer-assisted diagnosis tools currently improve the reproducibility of radiologic readings and reduce the false negative rate of LDCT. Deep learning (DL) tools that analyze the radiomic features of lung nodules are being developed to distinguish between benign and malignant nodules. Furthermore, AI tools can predict the risk of LC in the years following a baseline LDCT. AI tools that analyze baseline LDCT examinations can also compute the risk of cardiovascular disease or death, paving the way for personalized screening interventions. Additionally, DL tools are available for assessing osteoporosis and ILD, which helps refine the individual's current and future health profile. The primary obstacles to AI integration into the LDCT screening pathway are the generalizability of performance and explainability.

Knowledge-Aware Multisite Adaptive Graph Transformer for Brain Disorder Diagnosis.

Song X, Shu K, Yang P, Zhao C, Zhou F, Frangi AF, Xiao X, Dong L, Wang T, Wang S, Lei B

PubMed · Jun 1, 2025
Brain disorder diagnosis via resting-state functional magnetic resonance imaging (rs-fMRI) is usually limited by complex imaging features and small sample sizes. For brain disorder diagnosis, the graph convolutional network (GCN) has achieved remarkable success by capturing interactions between individuals and the population. However, three main limitations remain: 1) previous GCN approaches consider non-imaging information in edge construction but ignore differences in how sensitive features are to that non-imaging information; 2) previous GCN approaches focus solely on establishing interactions between subjects (i.e., individuals and the population), disregarding the essential relationships between features; and 3) multisite data increase the sample size available for classifier training, but inter-site heterogeneity limits performance to some extent. This paper proposes a knowledge-aware multisite adaptive graph Transformer to address the above problems. First, we evaluate the sensitivity of features to each piece of non-imaging information, and then construct feature-sensitive and feature-insensitive subgraphs. Second, after fusing the above subgraphs, we integrate a Transformer module to capture the intrinsic relationships between features. Third, we design a domain-adaptive GCN using multiple loss function terms to relieve data heterogeneity and to produce the final classification results. Last, the proposed framework is validated on two brain disorder diagnostic tasks. Experimental results show that the proposed framework achieves state-of-the-art performance.
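The first step described above (splitting features by their sensitivity to non-imaging information, then building one population subgraph per feature set) could be sketched as follows. The correlation-based sensitivity score, the 0.3 threshold, and the cosine-similarity edge construction are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def split_feature_subgraphs(X, nonimg, thresh=0.3):
    """Split features into sensitive/insensitive sets by the absolute
    Pearson correlation of each feature column with a non-imaging
    variable, then build one population graph (subjects x subjects,
    cosine-similarity edges) per feature set. Illustrative sketch."""
    corr = np.array([abs(np.corrcoef(X[:, j], nonimg)[0, 1])
                     for j in range(X.shape[1])])
    sens_idx = corr >= thresh  # feature-sensitive columns

    def cosine_graph(F):
        Fn = F / (np.linalg.norm(F, axis=1, keepdims=True) + 1e-8)
        return Fn @ Fn.T  # adjacency over subjects

    return cosine_graph(X[:, sens_idx]), cosine_graph(X[:, ~sens_idx])
```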

Semi-Supervised Gland Segmentation via Feature-Enhanced Contrastive Learning and Dual-Consistency Strategy.

Yu J, Li B, Pan X, Shi Z, Wang H, Lan R, Luo X

PubMed · Jun 1, 2025
In the field of gland segmentation in histopathology, deep-learning methods have made significant progress. However, most existing methods not only require a large amount of high-quality annotated data but also tend to confuse the interior of the gland with the background. To address this challenge, we propose a new semi-supervised method for gland segmentation named DCCL-Seg, which follows the teacher-student framework. Our approach comprises the following steps. First, we design a contrastive learning module to improve the ability of the student model's feature extractor to distinguish between gland and background features. Then, we introduce a Signed Distance Field (SDF) prediction task and employ a dual-consistency strategy (across tasks and models) to better reinforce the learning of gland interiors. Next, we propose a pseudo-label filtering and reweighting mechanism, which filters and reweights the pseudo labels generated by the teacher model based on confidence. However, even after reweighting, the pseudo labels may still be influenced by unreliable pixels. Finally, we design an assistant predictor to learn the reweighted pseudo labels, which does not interfere with the student model's predictor and ensures the reliability of the student model's predictions. Experimental results on the publicly available GlaS and CRAG datasets demonstrate that our method outperforms other semi-supervised medical image segmentation methods.
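The SDF prediction task above regresses, for each pixel, its signed distance to the gland boundary. A brute-force sketch of how such a target could be derived from a binary mask is shown below; the sign convention (negative inside the gland, positive outside) is an assumption, and real pipelines would use a fast distance transform rather than this O(n²) loop.

```python
import numpy as np

def signed_distance_field(mask):
    """Brute-force signed distance field for a small boolean mask:
    negative inside the gland (distance to nearest background pixel),
    positive outside (distance to nearest foreground pixel).
    Illustrative only; assumes mask has both fg and bg pixels."""
    H, W = mask.shape
    fg = np.argwhere(mask).astype(float)    # foreground pixel coords
    bg = np.argwhere(~mask).astype(float)   # background pixel coords
    sdf = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            p = np.array([y, x], float)
            if mask[y, x]:  # inside: negative distance to boundary
                sdf[y, x] = -np.min(np.linalg.norm(bg - p, axis=1))
            else:           # outside: positive distance to the gland
                sdf[y, x] = np.min(np.linalg.norm(fg - p, axis=1))
    return sdf
```

A student network regressing this field is pushed to model gland interiors explicitly, which is the motivation the abstract gives for the task.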

FeaInfNet: Diagnosis of Medical Images With Feature-Driven Inference and Visual Explanations.

Peng Y, He L, Hu D, Liu Y, Yang L, Shang S

PubMed · Jun 1, 2025
Interpretable deep-learning models have received widespread attention in the field of image recognition. However, owing to the coexistence of medical-image categories and the challenge of identifying subtle decision-making regions, many proposed interpretable deep-learning models suffer from insufficient accuracy and interpretability when diagnosing medical images. Therefore, this study proposes a feature-driven inference network (FeaInfNet) that incorporates a feature-based network reasoning structure. First, local feature masks (LFMs) are developed to extract feature vectors, thereby providing global information for these vectors and enhancing the expressive ability of FeaInfNet. Second, FeaInfNet compares the similarity of the feature vector corresponding to each subregion image patch with the disease and normal prototype templates that may appear in that region, and combines the comparisons across subregions when making the final diagnosis. This strategy simulates the diagnostic process of doctors, making the model interpretable during reasoning while avoiding misleading results caused by the participation of normal areas. Finally, we propose adaptive dynamic masks (Adaptive-DM) to translate feature vectors and prototypes into human-understandable image patches for accurate visual interpretation. Extensive experiments on multiple publicly available medical datasets, including RSNA, iChallenge-PM, COVID-19, ChinaCXRSet, MontgomerySet, and CBIS-DDSM, demonstrate that our method achieves state-of-the-art classification accuracy and interpretability compared with baseline methods in the diagnosis of medical images. Additional ablation studies verify the effectiveness of each component.
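The prototype-comparison step described above can be sketched as cosine similarity between each patch's feature vector and the disease/normal prototype banks. Aggregating with a max over patches is an illustrative assumption; the paper's exact scoring rule is not reproduced here.

```python
import numpy as np

def prototype_score(patch_feats, disease_protos, normal_protos):
    """For each patch feature, take the best cosine match to the disease
    prototypes and to the normal prototypes; the most disease-like patch
    (largest disease-minus-normal margin) drives the image-level score.
    Illustrative sketch of prototype-based reasoning."""
    def cos(A, B):
        An = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-8)
        Bn = B / (np.linalg.norm(B, axis=1, keepdims=True) + 1e-8)
        return An @ Bn.T  # patches x prototypes

    d = cos(patch_feats, disease_protos).max(axis=1)  # best disease match
    n = cos(patch_feats, normal_protos).max(axis=1)   # best normal match
    return float((d - n).max())
```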

A Machine Learning Algorithm to Estimate the Probability of a True Scaphoid Fracture After Wrist Trauma.

Bulstra AEJ

PubMed · Jun 1, 2025
To identify predictors of a true scaphoid fracture among patients with radial wrist pain following acute trauma, train five machine learning (ML) algorithms to predict scaphoid fracture probability, and design a decision rule to initiate advanced imaging in high-risk patients. Two prospective cohorts including 422 patients with radial wrist pain following wrist trauma were combined. There were 117 scaphoid fractures (28%) confirmed on computed tomography, magnetic resonance imaging, or radiographs. Eighteen fractures (15%) were occult. Predictors of a scaphoid fracture were identified among demographics, mechanism of injury, and examination maneuvers. Five ML algorithms were trained to calculate scaphoid fracture probability. The ML algorithms were assessed on their ability to discriminate between patients with and without a fracture (area under the receiver operating characteristic curve), agreement between observed and predicted probabilities (calibration), and overall performance (Brier score). The best-performing ML algorithm was incorporated into a probability calculator. A decision rule was proposed to initiate advanced imaging among patients with negative radiographs. Pain over the scaphoid on ulnar deviation, sex, age, and mechanism of injury were most strongly associated with a true scaphoid fracture. The best-performing ML algorithm yielded an area under the receiver operating characteristic curve, calibration slope, intercept, and Brier score of 0.77, 0.84, -0.01, and 0.159, respectively. The ML-derived decision rule proposes to initiate advanced imaging in patients with radial-sided wrist pain, negative radiographs, and a fracture probability of ≥10%. When applied to our cohort, this would yield 100% sensitivity and 38% specificity, and would have reduced the number of patients undergoing advanced imaging by 36% without missing a fracture.
The ML algorithm accurately calculated scaphoid fracture probability based on scaphoid pain on ulnar deviation, sex, age, and mechanism of injury. The ML-derived decision rule may reduce the number of patients undergoing advanced imaging by a third, with a small risk of missing a fracture. External validation is required before implementation. Level of Evidence: Diagnostic II.
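The ≥10% probability cutoff reported above can be evaluated with a few lines of plain Python: apply the threshold, then compute sensitivity, specificity, and the fraction of patients spared advanced imaging. This is a generic threshold-evaluation sketch, not the study's code.

```python
def evaluate_rule(probs, labels, threshold=0.10):
    """Apply the >=10% fracture-probability cutoff for advanced imaging
    and report (sensitivity, specificity, fraction spared imaging).
    `labels` are True for confirmed fractures. Illustrative sketch."""
    image = [p >= threshold for p in probs]  # who gets advanced imaging
    tp = sum(i and y for i, y in zip(image, labels))
    fn = sum((not i) and y for i, y in zip(image, labels))
    tn = sum((not i) and (not y) for i, y in zip(image, labels))
    fp = sum(i and (not y) for i, y in zip(image, labels))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    spared = (tn + fn) / len(probs)          # patients not imaged
    return sensitivity, specificity, spared
```

On the study's cohort this kind of calculation yielded the reported 100% sensitivity and 38% specificity.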

Multi-Objective Evolutionary Optimization Boosted Deep Neural Networks for Few-Shot Medical Segmentation With Noisy Labels.

Li H, Zhang Y, Zuo Q

PubMed · Jun 1, 2025
Fully-supervised deep neural networks have achieved remarkable progress in medical image segmentation, yet they heavily rely on extensive manually labeled data and exhibit inflexibility for unseen tasks. Few-shot segmentation (FSS) addresses these issues by predicting unseen classes from a few labeled support examples. However, most existing FSS models struggle to generalize to diverse target tasks distinct from training domains. Furthermore, designing promising network architectures for such tasks is expertise-intensive and laborious. In this paper, we introduce MOE-FewSeg, a novel automatic design method for FSS architectures. Specifically, we construct a U-shaped encoder-decoder search space that incorporates capabilities for information interaction and feature selection, thereby enabling architectures to leverage prior knowledge from publicly available datasets across diverse domains for improved prediction of various target tasks. Given the potential conflicts among disparate target tasks, we formulate the multi-task problem as a multi-objective optimization problem. We employ a multi-objective genetic algorithm to identify the Pareto-optimal architectures for these target tasks within this search space. Furthermore, to mitigate the impact of noisy labels due to dataset quality variations, we propose a noise-robust loss function named NRL, which encourages the model to de-emphasize larger loss values. Empirical results demonstrate that MOE-FewSeg outperforms manually designed architectures and other related approaches.
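The NRL loss described above "encourages the model to de-emphasize larger loss values," which suggests a weighting that decays with loss magnitude. The exponential decay and the `gamma` parameter below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def noise_robust_loss(per_pixel_loss, gamma=2.0):
    """Down-weight large per-pixel loss values (likely noisy labels) by
    a factor that decays with loss magnitude, then take the weighted
    mean. Exponential decay is an illustrative choice for NRL."""
    l = np.asarray(per_pixel_loss, dtype=float)
    w = np.exp(-gamma * l)                    # big loss -> small weight
    return float((w * l).sum() / (w.sum() + 1e-8))
```

With losses `[0.1, 5.0]`, the plain mean is 2.55, while this weighted mean stays close to 0.1, so a single mislabeled region cannot dominate training.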

Radiology: Cardiothoracic Imaging Highlights 2024.

Catania R, Mukherjee A, Chamberlin JH, Calle F, Philomina P, Mastrodicasa D, Allen BD, Suchá D, Abbara S, Hanneman K

PubMed · Jun 1, 2025
Radiology: Cardiothoracic Imaging publishes research, technical developments, and reviews related to cardiac, vascular, and thoracic imaging. The current review article, led by the Radiology: Cardiothoracic Imaging trainee editorial board, highlights the most impactful articles published in the journal between November 2023 and October 2024. The review encompasses various aspects of cardiac, vascular, and thoracic imaging related to coronary artery disease, cardiac MRI, valvular imaging, congenital and inherited heart diseases, thoracic imaging, lung cancer, artificial intelligence, and health services research. Key highlights include the role of CT fractional flow reserve analysis to guide patient management, the role of MRI elastography in identifying age-related myocardial stiffness associated with increased risk of heart failure, review of MRI in patients with cardiovascular implantable electronic devices and fractured or abandoned leads, imaging of mitral annular disjunction, specificity of the Lung Imaging Reporting and Data System version 2022 for detecting malignant airway nodules, and a radiomics-based reinforcement learning model to analyze serial low-dose CT scans in lung cancer screening. Ongoing research and future directions include artificial intelligence tools for applications such as plaque quantification using coronary CT angiography and growing understanding of the interconnectedness of environmental sustainability and cardiovascular imaging. Keywords: CT, MRI, CT-Coronary Angiography, Cardiac, Pulmonary, Coronary Arteries, Heart, Lung, Mediastinum, Mitral Valve, Aortic Valve, Artificial Intelligence © RSNA, 2025.

Network Occlusion Sensitivity Analysis Identifies Regional Contributions to Brain Age Prediction.

He L, Wang S, Chen C, Wang Y, Fan Q, Chu C, Fan L, Xu J

PubMed · Jun 1, 2025
Deep learning frameworks utilizing convolutional neural networks (CNNs) have frequently been used for brain age prediction and have achieved outstanding performance. Nevertheless, deep learning remains a black box as it is hard to interpret which brain parts contribute significantly to the predictions. To tackle this challenge, we first trained a lightweight, fully CNN model for brain age estimation on a large sample data set (N = 3054, age range = [8,80 years]) and tested it on an independent data set (N = 555, age range = [8,80 years]). We then developed an interpretable scheme combining network occlusion sensitivity analysis (NOSA) with a fine-grained human brain atlas to uncover the learned invariance of the model. Our findings show that the dorsolateral, dorsomedial frontal cortex, anterior cingulate cortex, and thalamus had the highest contributions to age prediction across the lifespan. More interestingly, we observed that different regions showed divergent patterns in their predictions for specific age groups and that the bilateral hemispheres contributed differently to the predictions. Regions in the frontal lobe were essential predictors in both the developmental and aging stages, with the thalamus remaining relatively stable and saliently correlated with other regional changes throughout the lifespan. The lateral and medial temporal brain regions gradually became involved during the aging phase. At the network level, the frontoparietal and the default mode networks show an inverted U-shape contribution from the developmental to the aging stages. The framework could identify regional contributions to the brain age prediction model, which could help increase the model interpretability when serving as an aging biomarker.
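The core of the NOSA scheme above is straightforward to sketch: occlude each atlas region in turn and record how much the model's age prediction shifts. The zero-fill baseline and absolute-difference contribution measure are illustrative assumptions; `predict` stands in for the trained CNN.

```python
import numpy as np

def occlusion_sensitivity(predict, volume, regions, baseline=0.0):
    """Region-wise occlusion sensitivity: zero out each atlas region in
    turn and record the change in the model's predicted age. `predict`
    maps a volume to a scalar age; `regions` maps region name -> boolean
    mask over the volume. Illustrative sketch of NOSA."""
    ref = predict(volume)
    contrib = {}
    for name, mask in regions.items():
        occluded = volume.copy()
        occluded[mask] = baseline                       # occlude one region
        contrib[name] = abs(predict(occluded) - ref)    # shift in prediction
    return contrib
```

Regions whose occlusion moves the prediction the most (here, the frontal cortex and thalamus in the study's findings) are read as the strongest contributors to brain age.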
