Neural Collapse-Inspired Multi-Label Federated Learning under Label-Distribution Skew

Can Peng, Yuyuan Liu, Yingyu Yang, Pramit Saha, Qianye Yang, J. Alison Noble

arXiv preprint, Sep 16, 2025
Federated Learning (FL) enables collaborative model training across distributed clients while preserving data privacy. However, the performance of deep learning often deteriorates in FL due to decentralized and heterogeneous data. This challenge is further amplified in multi-label scenarios, where data exhibit complex characteristics such as label co-occurrence, inter-label dependency, and discrepancies between local and global label relationships. While most existing FL research primarily focuses on single-label classification, many real-world applications, particularly in domains such as medical imaging, often involve multi-label settings. In this paper, we address this important yet underexplored scenario in FL, where clients hold multi-label data with skewed label distributions. Neural Collapse (NC) describes a geometric structure in the latent feature space where features of each class collapse to their class mean with vanishing intra-class variance, and the class means form a maximally separated configuration. Motivated by this theory, we propose a method to align feature distributions across clients and to learn high-quality, well-clustered representations. To make the NC-structure applicable to multi-label settings, where image-level features may contain multiple semantic concepts, we introduce a feature disentanglement module that extracts semantically specific features. The clustering of these disentangled class-wise features is guided by a predefined shared NC structure, which mitigates potential conflicts between client models due to diverse local data distributions. In addition, we design regularisation losses to encourage compact clustering in the latent feature space. Experiments conducted on four benchmark datasets across eight diverse settings demonstrate that our approach outperforms existing methods, validating its effectiveness in this challenging FL scenario.
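As an illustration of the predefined shared NC structure the abstract refers to: the maximally separated configuration of class means in Neural Collapse is a simplex equiangular tight frame (ETF). The sketch below is not the authors' code; the prototype dimension and the simple alignment loss are assumptions used only to show how a fixed ETF target and a clustering penalty could look.

```python
import numpy as np

def simplex_etf(num_classes: int, dim: int) -> np.ndarray:
    """Build a K x d simplex ETF: K unit-norm class prototypes whose pairwise
    cosine similarity is -1/(K-1), i.e. maximally separated class means."""
    assert dim >= num_classes
    K = num_classes
    # Orthonormal basis (d x K) from a QR decomposition of a random matrix.
    U, _ = np.linalg.qr(np.random.randn(dim, K))
    M = np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)
    etf = (U @ M).T  # K x d, rows are unit-norm prototypes
    return etf / np.linalg.norm(etf, axis=1, keepdims=True)

def nc_alignment_loss(features: np.ndarray, labels: np.ndarray,
                      prototypes: np.ndarray) -> float:
    """Mean squared distance between normalised class-wise features and their
    fixed NC prototype -- a stand-in for a compact-clustering regulariser."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    return float(np.mean(np.sum((feats - prototypes[labels]) ** 2, axis=1)))

# Hypothetical example: 5 classes, 64-dim disentangled features shared by all clients.
protos = simplex_etf(5, 64)
x = np.random.randn(32, 64)
y = np.random.randint(0, 5, size=32)
print(nc_alignment_loss(x, y, protos))
```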

A Computational Pipeline for Patient-Specific Modeling of Thoracic Aortic Aneurysm: From Medical Image to Finite Element Analysis

Jiasong Chen, Linchen Qian, Ruonan Gong, Christina Sun, Tongran Qin, Thuy Pham, Caitlin Martin, Mohammad Zafar, John Elefteriades, Wei Sun, Liang Liang

arXiv preprint, Sep 16, 2025
The aorta is the body's largest arterial vessel, serving as the primary pathway for oxygenated blood within the systemic circulation. Aortic aneurysms consistently rank among the top twenty causes of mortality in the United States. Thoracic aortic aneurysm (TAA) arises from abnormal dilation of the thoracic aorta and remains a clinically significant disease, ranking as one of the leading causes of death in adults. A thoracic aortic aneurysm ruptures when the integrity of all aortic wall layers is compromised due to elevated blood pressure. Currently, three-dimensional computed tomography (3D CT) is considered the gold standard for diagnosing TAA. The geometric characteristics of the aorta, which can be quantified from medical imaging, and stresses on the aortic wall, which can be obtained by finite element analysis (FEA), are critical in evaluating the risk of rupture and dissection. Deep learning-based image segmentation has emerged as a reliable method for extracting anatomical regions of interest from medical images. Voxel-based segmentation masks of anatomical structures are typically converted into structured mesh representations to enable accurate simulation. Hexahedral meshes are commonly used in finite element simulations of the aorta due to their computational efficiency and superior simulation accuracy. Because of anatomical variability, patient-specific modeling enables detailed assessment of individual anatomical and biomechanical behavior, supporting precise simulations, accurate diagnoses, and personalized treatment strategies. Finite element (FE) simulations provide valuable insights into the biomechanical behavior of tissues and organs in clinical studies. Developing accurate FE models represents a crucial initial step in establishing a patient-specific, biomechanically based framework for predicting the risk of TAA.
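The mask-to-geometry step described above can be illustrated with a minimal sketch using marching cubes from scikit-image. Note that the published pipeline builds hexahedral volume meshes for FEA, whereas this sketch only extracts a triangulated surface from a hypothetical binary mask.

```python
import numpy as np
from skimage import measure

def mask_to_surface(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """Convert a binary voxel segmentation into a triangulated surface mesh
    (vertices in mm) via marching cubes; a surface-only illustration of the
    segmentation-to-mesh step, not the hexahedral meshing used for FEA."""
    verts, faces, _, _ = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=spacing)
    return verts, faces

# Toy example: a spherical "vessel" mask on a 64^3 grid with 0.8 mm voxels.
zz, yy, xx = np.mgrid[:64, :64, :64]
mask = ((xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2) < 20 ** 2
verts, faces = mask_to_surface(mask, spacing=(0.8, 0.8, 0.8))
print(verts.shape, faces.shape)
```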

The HeartMagic prospective observational study protocol - characterizing subtypes of heart failure with preserved ejection fraction

Meyer, P., Rocca, A., Banus, J., Ogier, A. C., Georgantas, C., Calarnou, P., Fatima, A., Vallee, J.-P., Deux, J.-F., Thomas, A., Marquis, J., Monney, P., Lu, H., Ledoux, J.-B., Tillier, C., Crowe, L. A., Abdurashidova, T., Richiardi, J., Hullin, R., van Heeswijk, R. B.

medRxiv preprint, Sep 16, 2025
Introduction: Heart failure (HF) is a life-threatening syndrome with significant morbidity and mortality. While evidence-based drug treatments have effectively reduced morbidity and mortality in HF with reduced ejection fraction (HFrEF), few therapies have been demonstrated to improve outcomes in HF with preserved ejection fraction (HFpEF). The multifaceted clinical presentation is one of the main reasons why the current understanding of HFpEF remains limited. This may be caused by the existence of several HFpEF disease subtypes that each need different treatments. There is therefore an unmet need for a holistic approach that combines comprehensive imaging with metabolomic, transcriptomic and genomic mapping to subtype HFpEF patients. This protocol details the approach employed in the HeartMagic study to address this gap in understanding. Methods: This prospective multi-center observational cohort study will include 500 consecutive patients with actual or recent hospitalization for treatment of HFpEF at two Swiss university hospitals, along with 50 age-matched HFrEF patients and 50 age-matched healthy controls. Diagnosis of heart failure is based on clinical signs and symptoms, and subgrouping of HF patients is based on the left-ventricular ejection fraction. In addition to routine clinical workup, participants undergo genomic, transcriptomic, and metabolomic analyses, while the anatomy, composition, and function of the heart are quantified by comprehensive echocardiography and magnetic resonance imaging (MRI). Quantitative MRI is also applied to characterize the kidney. The primary outcome is a composite of one-year cardiovascular mortality or rehospitalization. Machine learning (ML) based multi-modal clustering will be employed to identify distinct HFpEF subtypes in the holistic data. The clinical importance of these subtypes shall be evaluated based on their association with the primary outcome. Statistical analysis will include group comparisons across modalities, survival analysis for the primary outcome, and integrative multi-modal clustering combining clinical, imaging, ECG, genomic, transcriptomic, and metabolomic data to identify and validate HFpEF subtypes. Discussion: The integration of comprehensive MRI with extensive genomic and metabolomic profiling in this study will result in an unprecedented panoramic view of HFpEF and should enable us to distinguish functional subgroups of HFpEF patients. This approach has the potential to provide unprecedented insights into HFpEF and should provide a basis for personalized therapies. Beyond this, identifying HFpEF subtypes with specific molecular and structural characteristics could lead to new targeted pharmacological interventions, with the potential to improve patient outcomes.
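A minimal sketch of the kind of ML-based multi-modal clustering the protocol describes, assuming per-modality feature matrices and simple k-means with silhouette-based model selection; the actual HeartMagic analysis pipeline is not specified in the abstract.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_multimodal(blocks, k_range=range(2, 7), seed=0):
    """Standardise each modality block (clinical, imaging, omics, ...),
    concatenate, and keep the k-means solution with the best silhouette.
    A generic sketch of multi-modal clustering, not the HeartMagic pipeline."""
    X = np.hstack([StandardScaler().fit_transform(b) for b in blocks])
    best = None
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
        score = silhouette_score(X, labels)
        if best is None or score > best[0]:
            best = (score, k, labels)
    return best  # (silhouette, k, cluster labels)

# Hypothetical example: 500 patients, three modality blocks.
rng = np.random.default_rng(0)
clinical = rng.normal(size=(500, 20))
imaging = rng.normal(size=(500, 40))
omics = rng.normal(size=(500, 100))
score, k, labels = cluster_multimodal([clinical, imaging, omics])
print(k, round(score, 3))
```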

Challenges and Limitations of Multimodal Large Language Models in Interpreting Pediatric Panoramic Radiographs.

Mine Y, Iwamoto Y, Okazaki S, Nishimura T, Tabata E, Takeda S, Peng TY, Nomura R, Kakimoto N, Murayama T

PubMed, Sep 16, 2025
Multimodal large language models (LLMs) have potential for medical image analysis, yet their reliability for pediatric panoramic radiographs remains uncertain. This study evaluated two multimodal LLMs (OpenAI o1, Claude 3.5 Sonnet) for detecting and counting teeth (including tooth germs) on pediatric panoramic radiographs. Eighty-seven pediatric panoramic radiographs from an open-source data set were analyzed. Two pediatric dentists annotated the presence or absence of each potential tooth position. Each image was processed five times by the LLMs using identical prompts, and the results were compared with the expert annotations. Standard performance metrics and Fleiss' kappa were calculated. Detailed examination revealed that subtle developmental stages and minor tooth loss were consistently misidentified. Claude 3.5 Sonnet had higher sensitivity but significantly lower specificity (29.8% ± 21.5%), resulting in many false positives. OpenAI o1 demonstrated superior specificity compared to Claude 3.5 Sonnet, but still failed to correctly detect subtle defects in certain mixed dentition cases. Both models showed large variability in repeated runs. Both LLMs failed to achieve clinically acceptable performance and cannot reliably identify nuanced discrepancies critical for pediatric dentistry. Further refinements and consistency improvements are essential before routine clinical use.
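A hedged sketch of how the reported evaluation could be computed: per-tooth-position sensitivity and specificity against expert annotations, plus Fleiss' kappa across the five repeated runs. The data here are synthetic placeholders, not the study's annotations.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

def sensitivity_specificity(pred: np.ndarray, truth: np.ndarray):
    """Presence/absence metrics per tooth position against expert annotation."""
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic data: 5 repeated LLM runs over 200 tooth positions (1 = present).
rng = np.random.default_rng(1)
truth = rng.integers(0, 2, size=200)
runs = np.stack([np.where(rng.random(200) < 0.8, truth, 1 - truth)
                 for _ in range(5)], axis=1)          # positions x runs

sens, spec = sensitivity_specificity(runs[:, 0], truth)
table, _ = aggregate_raters(runs)                     # positions x categories counts
print(round(sens, 2), round(spec, 2), round(fleiss_kappa(table), 2))
```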

AI-powered insights in pediatric nephrology: current applications and future opportunities.

Nada A, Ahmed Y, Hu J, Weidemann D, Gorman GH, Lecea EG, Sandokji IA, Cha S, Shin S, Bani-Hani S, Mannemuddhu SS, Ruebner RL, Kakajiwala A, Raina R, George R, Elchaki R, Moritz ML

PubMed, Sep 16, 2025
Artificial intelligence (AI) is rapidly emerging as a transformative force in pediatric nephrology, enabling improvements in diagnostic accuracy, therapeutic precision, and operational workflows. By integrating diverse datasets-including patient histories, genomics, imaging, and longitudinal clinical records-AI-driven tools can detect subtle kidney anomalies, predict acute kidney injury, and forecast disease progression. Deep learning models, for instance, have demonstrated the potential to enhance ultrasound interpretations, refine kidney biopsy assessments, and streamline pathology evaluations. Coupled with robust decision support systems, these innovations also optimize medication dosing and dialysis regimens, ultimately improving patient outcomes. AI-powered chatbots hold promise for improving patient engagement and adherence, while AI-assisted documentation solutions offer relief from administrative burdens, mitigating physician burnout. However, ethical and practical challenges remain. Healthcare professionals must receive adequate training to harness AI's capabilities, ensuring that such technologies bolster rather than erode the vital doctor-patient relationship. Safeguarding data privacy, minimizing algorithmic bias, and establishing standardized regulatory frameworks are critical for safe deployment. Beyond clinical care, AI can accelerate pediatric nephrology research by identifying biomarkers, enabling more precise patient recruitment, and uncovering novel therapeutic targets. As these tools evolve, interdisciplinary collaborations and ongoing oversight will be key to integrating AI responsibly. Harnessing AI's vast potential could revolutionize pediatric nephrology, championing a future of individualized, proactive, and empathetic care for children with kidney diseases. Through strategic collaboration and transparent development, these advanced technologies promise to minimize disparities, foster innovation, and sustain compassionate patient-centered care, shaping a new horizon in pediatric nephrology research and practice.

Role of Artificial Intelligence in Lung Transplantation: Current State, Challenges, and Future Directions.

Duncheskie RP, Omari OA, Anjum F

PubMed, Sep 16, 2025
Lung transplantation remains a critical treatment for end-stage lung diseases, yet it continues to have one of the lowest survival rates among solid organ transplants. Despite its life-saving potential, the field faces several challenges, including organ shortages, suboptimal donor matching, and post-transplant complications. The rapidly advancing field of artificial intelligence (AI) offers significant promise in addressing these challenges. Traditional statistical models, such as linear and logistic regression, have been used to predict post-transplant outcomes but struggle to adapt to new trends and evolving data. In contrast, machine learning algorithms can evolve with new data, offering dynamic and updated predictions. AI holds the potential to enhance lung transplantation at multiple stages. In the pre-transplant phase, AI can optimize waitlist management, refine donor selection, improve donor-recipient matching, and enhance diagnostic imaging by harnessing vast datasets. Post-transplant, AI can help predict allograft rejection, improve immunosuppressive management, and better forecast long-term patient outcomes, including quality of life. However, the integration of AI in lung transplantation also presents challenges, including data privacy concerns, algorithmic bias, and the need for external clinical validation. This review explores the current state of AI in lung transplantation, summarizes key findings from recent studies, and discusses the potential benefits, challenges, and ethical considerations in this rapidly evolving field, highlighting future research directions.
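As a hedged illustration of the contrast drawn above between static regression models and machine learning that updates with new data, the following sketch uses incremental training with scikit-learn's partial_fit on hypothetical transplant features; it is not drawn from the review.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# A logistic-regression-style model trained incrementally: each new batch of
# outcomes updates the existing model instead of requiring a full refit.
model = SGDClassifier(loss="log_loss", random_state=0)

rng = np.random.default_rng(0)
classes = np.array([0, 1])                      # e.g. 1 = post-transplant complication
for batch in range(5):                          # batches arriving over time
    X = rng.normal(size=(200, 12))              # hypothetical recipient/donor features
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)
    model.partial_fit(X, y, classes=classes)    # update with the new batch only

print(model.predict_proba(rng.normal(size=(3, 12))).round(2))
```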

Prediction of cerebrospinal fluid intervention in fetal ventriculomegaly via AI-powered normative modelling.

Zhou M, Rajan SA, Nedelec P, Bayona JB, Glenn O, Gupta N, Gano D, George E, Rauschecker AM

PubMed, Sep 16, 2025
Fetal ventriculomegaly (VM) is common and largely benign when isolated. However, it can occasionally progress to hydrocephalus, a more severe condition associated with increased mortality and neurodevelopmental delay that may require surgical postnatal intervention. Accurate differentiation between VM and hydrocephalus is essential but remains challenging, relying on subjective assessment and limited two-dimensional measurements. Deep learning-based segmentation offers a promising solution for objective and reproducible volumetric analysis. This work presents an AI-powered method for segmentation, volume quantification, and classification of the ventricles in fetal brain MRI to predict the need for postnatal intervention. This retrospective study included 222 patients with singleton pregnancies. An nnUNet was trained to segment the fetal ventricles on 20 manually segmented, institutional fetal brain MRIs combined with 80 studies from a publicly available dataset. The validated model was then applied to 138 normal fetal brain MRIs to generate a normative reference range across gestational ages (18-36 weeks). Finally, it was applied to 64 fetal brains with VM (14 of which required postnatal intervention). ROC curves and AUC for predicting VM and the need for postnatal intervention were calculated. The nnUNet-predicted segmentations of the fetal ventricles in the reference dataset were high quality and accurate (median Dice score 0.96, IQR 0.93-0.99). A normative reference range of ventricular volumes across gestational ages was developed using the automated segmentation volumes. The optimal threshold for identifying VM was 2 standard deviations from normal, with sensitivity of 92% and specificity of 93% (AUC 0.97, 95% CI 0.91-0.98). When normalized to intracranial volume, fetal ventricular volume was higher and subarachnoid volume lower among those who required postnatal intervention (p<0.001, p=0.003). The optimal threshold for identifying the need for postnatal intervention was 11 standard deviations from normal, with sensitivity of 86% and specificity of 100% (AUC 0.97, 95% CI 0.86-1.00). This work introduces a deep learning-based method for fast and accurate quantification of ventricular volumes in fetal brain MRI. A normative reference standard derived using this method can predict VM and the need for postnatal CSF intervention. Increased ventricular volume is a strong predictor of postnatal intervention. VM = ventriculomegaly, 2D = two-dimensional, 3D = three-dimensional, ROC = receiver operating characteristics, AUC = area under curve.
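A minimal sketch of the normative-modelling idea described above: fit a growth curve of ventricular volume against gestational age on normal fetuses, then express a new case as standard deviations from the age-expected mean and flag it against a threshold. The paper's exact fitting method is not stated; the polynomial fit and the toy data below are assumptions.

```python
import numpy as np

def fit_normative(ga_weeks: np.ndarray, volumes: np.ndarray, degree: int = 2):
    """Fit mean ventricular volume vs gestational age on normal fetuses
    (simple polynomial fit) and estimate a residual SD for z-scoring."""
    coeffs = np.polyfit(ga_weeks, volumes, degree)
    resid = volumes - np.polyval(coeffs, ga_weeks)
    return coeffs, resid.std(ddof=1)

def z_score(coeffs, sd, ga, volume):
    """Standard deviations from the age-expected normative mean."""
    return (volume - np.polyval(coeffs, ga)) / sd

# Hypothetical normative cohort (18-36 weeks) and one new case.
rng = np.random.default_rng(0)
ga = rng.uniform(18, 36, 138)
vol = 0.05 * ga ** 2 + rng.normal(scale=1.0, size=138)   # toy growth curve, mL
coeffs, sd = fit_normative(ga, vol)

z = z_score(coeffs, sd, ga=30, volume=55.0)
print(f"z = {z:.1f}", "-> flag as VM" if z > 2 else "-> within normal range")
```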

Head-to-Head Comparison of Two AI Computer-Aided Triage Solutions for Detecting Intracranial Hemorrhage on Non-Contrast Head CT.

Garcia GM, Young P, Dawood L, Elshikh M

PubMed, Sep 16, 2025
This study aims to provide a comprehensive comparison of the performance and reproducibility of two commercially available artificial intelligence (AI) computer-aided triage and notification solutions, Vendor A (Aidoc) and Vendor B (Viz.ai), for the detection of intracranial hemorrhage (ICH) on non-contrast enhanced head CT (NCHCT) scans performed within a single academic institution. The retrospective analysis was conducted on a large patient cohort from multiple healthcare settings within a single academic institution, utilizing standardized scanning protocols. Sensitivity, specificity, false positive, and false negative rates were evaluated for both vendors. Outputs assessed included the AI-generated case-level classification. Among 4,081 scans, 595 were positive for ICH. Vendor A demonstrated a sensitivity of 94.4%, specificity of 97.4%, PPV of 85.9%, and NPV of 99.1%. Vendor B showed a sensitivity of 59.5%, specificity of 99.0%, PPV of 90.0%, and NPV of 92.6%. Vendor A had 20 false negatives, which primarily involved subdural and intraparenchymal hemorrhages, and 97 false positives, which appeared to be related to motion artifact. Vendor B had 145 false negatives, largely comprising subdural and subarachnoid hemorrhages, and 36 false positives, which appeared to be related to motion artifact and calcified or dense lesions. For both AI solutions, 18 cases were false negatives and 11 cases were false positives. The findings of this study provide valuable information for clinicians and healthcare institutions considering the implementation of AI software for computer-aided triage and notification in the detection of intracranial hemorrhage. The discussion encompasses the implications of the results, the importance of evaluating AI findings in context, especially in the absence of explainability tools, potential areas for improvement, and the relevance of standardized scanning protocols in ensuring the reliability of AI-based diagnostic tools in clinical practice. ICH = Intracranial Hemorrhage; NCHCT = Non-contrast Enhanced Head CT; AI = Artificial Intelligence; SDH = Subdural Hemorrhage; SAH = Subarachnoid Hemorrhage; IPH = Intraparenchymal Hemorrhage; IVH = Intraventricular Hemorrhage; PPV = Positive Predictive Value; NPV = Negative Predictive Value; CADt = Computer-Aided Triage; PACS = Picture Archiving and Communication System; FN = False Negative; FP = False Positive; CI = Confidence Interval.
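For reference, the case-level metrics compared in this study derive directly from confusion-matrix counts; the sketch below uses hypothetical counts rather than the reported vendor results.

```python
def triage_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Case-level sensitivity, specificity, PPV, and NPV for a triage tool."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for an ICH triage cohort (not the study's numbers).
print({k: round(v, 3) for k, v in
       triage_metrics(tp=280, fp=30, tn=1650, fn=40).items()})
```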

Predicting cardiovascular events from routine mammograms using machine learning.

Barraclough JY, Gandomkar Z, Fletcher RA, Barbieri S, Kuo NI, Rodgers A, Douglas K, Poppe KK, Woodward M, Luxan BG, Neal B, Jorm L, Brennan P, Arnott C

PubMed, Sep 16, 2025
Cardiovascular risk is underassessed in women. Many women undergo screening mammography in midlife, when the risk of cardiovascular disease rises. Mammographic features such as breast arterial calcification and tissue density are associated with cardiovascular risk. We developed and tested a deep learning algorithm for cardiovascular risk prediction based on routine mammography images. Lifepool is a cohort of women with at least one screening mammogram linked to hospitalisation and death databases. A deep learning model based on the DeepSurv architecture was developed to predict major cardiovascular events from mammography images. Model performance was compared against standard risk prediction models using the concordance index, equivalent to Harrell's C-statistic. There were 49,196 women included, with a median follow-up of 8.8 years (IQR 7.7-10.6), among whom 3392 experienced a first major cardiovascular event. The DeepSurv model using mammography features and participant age had a concordance index of 0.72 (95% CI 0.71 to 0.73), with performance similar to that of modern models containing age and clinical variables, including the New Zealand 'PREDICT' tool and the American Heart Association 'PREVENT' equations. A deep learning algorithm based on only mammographic features and age predicted cardiovascular risk with performance comparable to traditional cardiovascular risk equations. Risk assessments based on mammography may be a novel opportunity for improving cardiovascular risk screening in women.
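The concordance index used to benchmark the DeepSurv model can be written as a simple pairwise calculation over follow-up times, event indicators, and predicted risks; the sketch below is an O(n^2) illustration with toy data, not the study code.

```python
import numpy as np

def concordance_index(time: np.ndarray, risk: np.ndarray, event: np.ndarray) -> float:
    """Harrell's C: among comparable pairs (the earlier time had an observed
    event), the fraction where the higher predicted risk belongs to the
    earlier event. Ties in risk count as 0.5."""
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue                           # censored subjects cannot anchor a pair
        for j in range(n):
            if time[j] > time[i]:              # j outlived i -> comparable pair
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy follow-up times (years), event flags, and model risk scores.
t = np.array([2.0, 8.8, 5.1, 9.5, 3.3])
e = np.array([1, 0, 1, 0, 1])
r = np.array([0.9, 0.2, 0.6, 0.1, 0.7])
print(round(concordance_index(t, r, e), 2))
```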

Automated brain extraction for canine magnetic resonance images.

Lesta GD, Deserno TM, Abani S, Janisch J, Hänsch A, Laue M, Winzer S, Dickinson PJ, De Decker S, Gutierrez-Quintana R, Subbotin A, Bocharova K, McLarty E, Lemke L, Wang-Leandro A, Spohn F, Volk HA, Nessler JN

PubMed, Sep 16, 2025
Brain extraction is a common preprocessing step when working with intracranial medical imaging data. While several tools exist to automate the preprocessing of magnetic resonance imaging (MRI) of the human brain, none are available for canine MRIs. We present a pipeline mapping separate 2D scans to a 3D image, and a neural network for canine brain extraction. The training dataset consisted of T1-weighted and contrast-enhanced images from 68 dogs of different breeds, all cranial conformations (mesaticephalic, dolichocephalic, brachycephalic), with several pathological conditions, taken at three institutions. Testing was performed on a similarly diverse group of 10 dogs with images from a fourth institution. The model achieved excellent results in terms of Dice ([Formula: see text]) and Jaccard ([Formula: see text]) metrics and generalised well across different MRI scanners, the three aforementioned skull types, and variations in head size and breed. The pipeline was effective for a combination of one to three acquisition planes (i.e., transversal, dorsal, and sagittal). Aside from the T1-weighted training datasets, the model also performed well on other MRI sequences, with Jaccard indices and median Dice scores ranging from 0.86 to 0.89 and from 0.92 to 0.94, respectively. Our approach was robust for automated brain extraction. Variations in canine anatomy and performance degradation in multi-scanner data can largely be mitigated through normalisation and augmentation techniques. Brain extraction, as a preprocessing step, can improve the accuracy of an algorithm for abnormality classification in MRI image slices.
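The Dice and Jaccard metrics reported above are overlap ratios between a predicted and a reference binary mask; a minimal sketch with toy masks:

```python
import numpy as np

def dice_jaccard(pred: np.ndarray, ref: np.ndarray):
    """Dice and Jaccard overlap between two binary 3D masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    dice = 2 * inter / (pred.sum() + ref.sum())
    jaccard = inter / np.logical_or(pred, ref).sum()
    return dice, jaccard

# Toy masks on a small grid (real use: predicted vs manual brain-mask volumes).
rng = np.random.default_rng(0)
ref = rng.random((64, 64, 64)) > 0.5
pred = ref.copy()
pred[:4] = ~pred[:4]                  # perturb a few slices to mimic errors
print(tuple(round(float(v), 3) for v in dice_jaccard(pred, ref)))
```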