
Cardiac Measurement Calculation on Point-of-Care Ultrasonography with Artificial Intelligence

Mercaldo, S. F., Bizzo, B. C., Sadore, T., Halle, M. A., MacDonald, A. L., Newbury-Chaet, I., L'Italien, E., Schultz, A. S., Tam, V., Hegde, S. M., Mangion, J. R., Mehrotra, P., Zhao, Q., Wu, J., Hillis, J.

medRxiv preprint · Jun 28 2025
Introduction: Point-of-care ultrasonography (POCUS) enables clinicians to obtain critical diagnostic information at the bedside, especially in resource-limited settings. This information may include 2D cardiac quantitative data, although measuring the data manually can be time-consuming and subject to user experience. Artificial intelligence (AI) can potentially automate this quantification. This study assessed the interpretation of key cardiac measurements on POCUS images by an AI-enabled device (AISAP Cardio V1.0).
Methods: This retrospective diagnostic accuracy study included 200 POCUS cases from four hospitals (two in Israel and two in the United States). Each case was independently interpreted by three cardiologists and the device for seven measurements (left ventricular (LV) ejection fraction, inferior vena cava (IVC) maximal diameter, left atrial (LA) area, right atrial (RA) area, LV end diastolic diameter, right ventricular (RV) fractional area change and aortic root diameter). The endpoints were the root mean square error (RMSE) of the device compared to the average cardiologist measurement (LV ejection fraction and IVC maximal diameter were primary endpoints; the other measurements were secondary endpoints). Predefined passing criteria were based on the upper bounds of the RMSE 95% confidence intervals (CIs). The inter-cardiologist RMSE was also calculated for reference.
Results: The device achieved the passing criteria for six of the seven measurements. While not achieving the passing criterion for RV fractional area change, it still achieved a better RMSE than the inter-cardiologist RMSE. The RMSE was 6.20% (95% CI: 5.57 to 6.83; inter-cardiologist RMSE of 8.23%) for LV ejection fraction, 0.25 cm (95% CI: 0.20 to 0.29; 0.36 cm) for IVC maximal diameter, 2.39 cm² (95% CI: 1.96 to 2.82; 4.39 cm²) for LA area, 2.11 cm² (95% CI: 1.75 to 2.47; 3.49 cm²) for RA area, 5.06 mm (95% CI: 4.58 to 5.55; 4.67 mm) for LV end diastolic diameter, 10.17% (95% CI: 9.01 to 11.33; 14.12%) for RV fractional area change and 0.19 cm (95% CI: 0.16 to 0.21; 0.24 cm) for aortic root diameter.
Discussion: The device accurately calculated these cardiac measurements, especially when benchmarked against inter-cardiologist variability. Its use could assist clinicians who utilize POCUS and better enable their clinical decision-making.
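
The study's passing criteria hinge on the upper bound of each RMSE 95% CI, computed against the average of the three cardiologists' measurements. Below is a minimal sketch of that evaluation in Python; the bootstrap CI, variable names, and synthetic values are assumptions, since the paper's exact statistical procedure is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(device, reference):
    """Root mean square error between device and reference measurements."""
    device, reference = np.asarray(device), np.asarray(reference)
    return float(np.sqrt(np.mean((device - reference) ** 2)))

def rmse_with_bootstrap_ci(device, cardiologist_readings, n_boot=2000, alpha=0.05):
    """RMSE vs. the average cardiologist reading, with a bootstrap 95% CI.

    cardiologist_readings: shape (n_cases, n_readers), e.g. three readers per case.
    """
    device = np.asarray(device)
    reference = np.mean(cardiologist_readings, axis=1)   # average cardiologist measurement
    point = rmse(device, reference)
    n = len(device)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)                  # resample cases with replacement
        boots.append(rmse(device[idx], reference[idx]))
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (lo, hi)

# Example with synthetic LV ejection fraction values (percent)
device = rng.normal(55, 10, size=200)
readers = device[:, None] + rng.normal(0, 6, size=(200, 3))
point, (lo, hi) = rmse_with_bootstrap_ci(device, readers)
print(f"RMSE {point:.2f}% (95% CI: {lo:.2f} to {hi:.2f})")
```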

Inpainting is All You Need: A Diffusion-based Augmentation Method for Semi-supervised Medical Image Segmentation

Xinrong Hu, Yiyu Shi

arXiv preprint · Jun 28 2025
Collecting pixel-level labels for medical datasets can be a laborious and expensive process, and enhancing segmentation performance with a scarcity of labeled data is a crucial challenge. This work introduces AugPaint, a data augmentation framework that utilizes inpainting to generate image-label pairs from limited labeled data. AugPaint leverages latent diffusion models, known for their ability to generate high-quality in-domain images with low overhead, and adapts the sampling process for the inpainting task without the need for retraining. Specifically, given a pair of image and label mask, we crop the area labeled as foreground and condition on it during the reverse denoising process at every noise level. The masked background area is gradually filled in, and every generated image is paired with the original label mask. This approach ensures an accurate match between synthetic images and label masks, setting it apart from existing dataset generation methods. The generated images serve as valuable supervision for training downstream segmentation models, effectively addressing the challenge of limited annotations. We conducted extensive evaluations of our data augmentation method on four public medical image segmentation datasets covering CT, MRI, and skin imaging. Results across all datasets demonstrate that AugPaint outperforms state-of-the-art label-efficient methods, significantly improving segmentation performance.
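
The core mechanism, conditioning the reverse denoising process on the labeled foreground so that only the background is synthesized, resembles mask-guided diffusion inpainting. Below is a minimal sketch of that idea; `denoise_step`, `add_noise`, and the toy usage are assumptions standing in for a pretrained latent diffusion model, not the authors' implementation.

```python
import torch

def inpaint_sample(x_real, fg_mask, denoise_step, add_noise, num_steps):
    """Masked diffusion sampling: keep the labeled foreground from x_real,
    let the model fill in the background (inpainting-style augmentation).

    x_real:  (B, C, H, W) image (or latent) containing the labeled foreground.
    fg_mask: (B, 1, H, W) binary mask, 1 = labeled foreground to preserve.
    denoise_step(x_t, t) -> x_{t-1}: one reverse-diffusion step of a pretrained model.
    add_noise(x_0, t) -> x_t: forward diffusion of the clean image to noise level t.
    """
    x_t = torch.randn_like(x_real)                      # start from pure noise
    for t in reversed(range(num_steps)):
        x_t = denoise_step(x_t, t)                      # model fills everything
        x_known = add_noise(x_real, t - 1) if t > 0 else x_real
        # condition on the foreground: keep known pixels, keep generated background
        x_t = fg_mask * x_known + (1 - fg_mask) * x_t
    return x_t  # synthetic image that still matches the original label mask

# Toy usage with stand-in callables (a real setup would use a pretrained diffusion model)
B, C, H, W = 1, 3, 64, 64
x = torch.rand(B, C, H, W)
mask = (torch.rand(B, 1, H, W) > 0.7).float()
out = inpaint_sample(x, mask,
                     denoise_step=lambda x_t, t: 0.9 * x_t,
                     add_noise=lambda x0, t: x0 + 0.01 * t * torch.randn_like(x0),
                     num_steps=10)
```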

Prognostic value of body composition out of PSMA-PET/CT in prostate cancer patients undergoing PSMA-therapy.

Roll W, Plagwitz L, Ventura D, Masthoff M, Backhaus C, Varghese J, Rahbar K, Schindler P

PubMed paper · Jun 28 2025
This retrospective study aims to develop a deep learning-based approach to whole-body CT segmentation from standard PSMA-PET/CT to assess body composition in metastatic castration-resistant prostate cancer (mCRPC) patients prior to [¹⁷⁷Lu]Lu-PSMA radioligand therapy (RLT). Our goal is to go beyond standard PSMA-PET-based pretherapeutic assessment and identify additional body composition metrics from the CT component with potential prognostic value. We used a deep learning segmentation model to perform fully automated segmentation of different tissue compartments, including visceral (VAT), subcutaneous (SAT), and intra-/intermuscular (IMAT) adipose tissue, from [⁶⁸Ga]Ga-PSMA-PET/CT scans of n = 86 prostate cancer patients before RLT. The proportions of the different adipose tissue compartments relative to total adipose tissue (TAT), assessed either on a 3D CT volume of the abdomen or on a 2D single-slice basis (centered at the third lumbar vertebra (L3)), were compared for their prognostic value. First, univariate and multivariate Cox proportional hazards regression analyses were performed. Subsequently, the subjects were dichotomized at the median tissue composition, and these subgroups were evaluated by Kaplan-Meier analysis with the log-rank test. The automated segmentation model was useful for delineating the different adipose tissue compartments and skeletal muscle across different patient anatomies. Analyses revealed significant associations between lower SAT and higher IMAT ratios and poorer therapeutic outcomes in Cox regression analysis (SAT/TAT: p = 0.038; IMAT/TAT: p < 0.001) in the 3D model. In the single-slice approach, only IMAT/SAT was significantly associated with survival in Cox regression analysis (p < 0.001; SAT/TAT: p > 0.05). The IMAT ratio remained an independent predictor of survival in multivariate analysis when including PSMA-PET and blood-based prognostic factors. In this proof-of-principle study, the implementation of a deep learning-based whole-body analysis provides a robust and detailed CT-based assessment of body composition in mCRPC patients undergoing RLT. Potential prognostic parameters have to be corroborated in larger prospective datasets.
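
The statistical pipeline, Cox proportional hazards regression on body-composition ratios followed by dichotomization at the median and a log-rank comparison of the resulting Kaplan-Meier groups, can be sketched with the lifelines package. Column names and the synthetic data below are assumptions for illustration, not the study's data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
n = 86
df = pd.DataFrame({
    "imat_tat_ratio": rng.uniform(0.05, 0.4, n),   # IMAT / total adipose tissue
    "sat_tat_ratio": rng.uniform(0.2, 0.7, n),     # SAT / total adipose tissue
    "os_months": rng.exponential(14, n),           # overall survival time
    "event": rng.integers(0, 2, n),                # 1 = death observed, 0 = censored
})

# Cox proportional hazards regression with both body-composition ratios as covariates
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="event")
cph.print_summary()

# Dichotomize at the median IMAT ratio and compare subgroups with a log-rank test
high = df["imat_tat_ratio"] >= df["imat_tat_ratio"].median()
res = logrank_test(df.loc[high, "os_months"], df.loc[~high, "os_months"],
                   event_observed_A=df.loc[high, "event"],
                   event_observed_B=df.loc[~high, "event"])
print("log-rank p =", res.p_value)
```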

Advanced glaucoma disease segmentation and classification with grey wolf optimized U-Net++ and capsule networks.

Govindharaj I, Deva Priya W, Soujanya KLS, Senthilkumar KP, Shantha Shalini K, Ravichandran S

PubMed paper · Jun 27 2025
Early detection of glaucoma is vital for preserving vision, as the disease remains one of the leading causes of blindness worldwide. Current glaucoma screening strategies rely on complex, time-consuming procedures and expert interpretation, which slows both diagnosis and intervention. This research presents an automated glaucoma diagnostic system that combines an optimized segmentation model with a classification network. The segmentation approach implements an enhanced version of U-Net++ with dynamic parameter control provided by Grey Wolf Optimization (GWO) to segment optic disc and cup regions in retinal fundus images. By emulating wolf-pack hunting strategies, GWO adjusts parameters dynamically, enabling the model to capture diverse textural patterns in the images. The system uses a capsule network (CapsNet) for classification because it preserves spatial relationships, allowing it to detect glaucoma-related patterns precisely. The developed system achieves an accuracy of 95.1% in segmentation and classification tasks, outperforming typical approaches. The automated system reduces manual workload and enhances clinical diagnostic speed and precision. Its high detection accuracy and reliability make it a valuable tool for early-stage glaucoma diagnosis and a scalable solution for healthcare deployment. The objective is to develop an advanced automated glaucoma diagnostic system by integrating an optimized U-Net++ segmentation model with a Capsule Network (CapsNet) classifier, enhanced through the Grey Wolf Optimization Algorithm (GWOA), for precise segmentation of optic disc and cup regions and accurate glaucoma classification from retinal fundus images. This study proposes a two-phase computer-assisted diagnosis (CAD) framework. In the segmentation phase, an enhanced U-Net++ model, optimized by GWOA, is employed to accurately delineate the optic disc and cup regions in fundus images. The optimization dynamically tunes hyperparameters based on grey wolf hunting behavior for improved segmentation precision. In the classification phase, a CapsNet architecture is used to maintain spatial hierarchies and classify images as glaucomatous or normal based on the segmented outputs. The performance of the proposed model was validated on the ORIGA retinal fundus image dataset and evaluated against conventional approaches. The proposed GWOA-UNet++ and CapsNet framework achieved a segmentation and classification accuracy of 95.1%, outperforming existing benchmark models such as MTA-CS, ResFPN-Net, DAGCN, MRSNet and AGCT. The model demonstrated robustness against image irregularities, including variations in optic disc size and fundus image quality, and showed superior performance across accuracy, sensitivity, specificity, precision, and F1-score metrics. The developed automated glaucoma detection system exhibits enhanced diagnostic accuracy, efficiency, and reliability, offering significant potential for early-stage glaucoma detection and clinical decision support. Future work will involve large-scale multi-ethnic dataset validation, integration with clinical workflows, and deployment as a mobile or cloud-based screening tool.
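
Grey Wolf Optimization drives the hyperparameter tuning in this pipeline. As a rough illustration of how such a search works, here is a generic GWO loop over a two-dimensional hyperparameter space; the toy objective, bounds, and parameter choices are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

def grey_wolf_optimize(objective, bounds, n_wolves=8, n_iters=30):
    """Minimal Grey Wolf Optimizer: wolves move toward the three best solutions
    found so far (alpha, beta, delta), which here stand in for candidate
    segmentation-model hyperparameters.

    bounds: list of (low, high) per dimension, e.g. learning rate and dropout.
    """
    bounds = np.asarray(bounds, dtype=float)
    dim = len(bounds)
    wolves = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_wolves, dim))
    fitness = np.array([objective(w) for w in wolves])

    for it in range(n_iters):
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]
        a = 2.0 - 2.0 * it / n_iters                   # exploration factor decays to 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new += (leader - A * D) / 3.0          # average pull toward the leaders
            wolves[i] = np.clip(new, bounds[:, 0], bounds[:, 1])
            fitness[i] = objective(wolves[i])
    best = wolves[np.argmin(fitness)]
    return best, float(np.min(fitness))

# Toy objective standing in for validation loss of the segmentation model
best, score = grey_wolf_optimize(lambda w: (w[0] - 1e-3) ** 2 + (w[1] - 0.2) ** 2,
                                 bounds=[(1e-5, 1e-2), (0.0, 0.5)])
print(best, score)
```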

Causality-Adjusted Data Augmentation for Domain Continual Medical Image Segmentation.

Zhu Z, Dong Q, Luo G, Wang W, Dong S, Wang K, Tian Y, Wang G, Li S

PubMed paper · Jun 27 2025
In domain continual medical image segmentation, distillation-based methods mitigate catastrophic forgetting by continuously reviewing old knowledge. However, these approaches often exhibit biases towards both new and old knowledge simultaneously due to confounding factors, which can undermine segmentation performance. To address these biases, we propose the Causality-Adjusted Data Augmentation (CauAug) framework, introducing a novel causal intervention strategy called the Texture-Domain Adjustment Hybrid-Scheme (TDAHS) alongside two causality-targeted data augmentation approaches: the Cross Kernel Network (CKNet) and the Fourier Transformer Generator (FTGen). (1) TDAHS establishes a domain-continual causal model that accounts for two types of knowledge biases by identifying irrelevant local textures (L) and domain-specific features (D) as confounders. It introduces a hybrid causal intervention that combines traditional confounder elimination with a proposed replacement approach to better adapt to domain shifts, thereby promoting causal segmentation. (2) CKNet eliminates confounder L to reduce biases in new knowledge absorption. It decreases reliance on local textures in input images, forcing the model to focus on relevant anatomical structures and thus improving generalization. (3) FTGen causally intervenes on confounder D by selectively replacing it to alleviate biases that impact old knowledge retention. It restores domain-specific features in images, aiding in the comprehensive distillation of old knowledge. Our experiments show that CauAug significantly mitigates catastrophic forgetting and surpasses existing methods in various medical image segmentation tasks. The implementation code is publicly available at: https://github.com/PerceptionComputingLab/CauAug_DCMIS.
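
The abstract describes FTGen as selectively restoring domain-specific features through a Fourier-based generator. One simple, commonly used way to manipulate domain-specific appearance in the Fourier domain is to swap low-frequency amplitude spectra between images; the sketch below shows that idea as an illustrative stand-in and is not the paper's FTGen.

```python
import numpy as np

def fourier_amplitude_swap(x, reference, beta=0.05):
    """Replace the low-frequency amplitude spectrum of x with that of a reference
    image while keeping x's phase. Low-frequency amplitude largely carries
    domain-specific appearance (intensity/contrast style), so this is one simple
    way to re-impose a source domain's characteristics on an image.

    x, reference: 2D arrays of the same shape; beta controls the swapped band size.
    """
    fx = np.fft.fftshift(np.fft.fft2(x))
    fr = np.fft.fftshift(np.fft.fft2(reference))
    amp_x, phase_x = np.abs(fx), np.angle(fx)
    amp_r = np.abs(fr)

    h, w = x.shape
    b_h, b_w = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    amp_x[ch - b_h:ch + b_h + 1, cw - b_w:cw + b_w + 1] = \
        amp_r[ch - b_h:ch + b_h + 1, cw - b_w:cw + b_w + 1]

    mixed = amp_x * np.exp(1j * phase_x)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))

# Toy usage on random arrays standing in for images from two domains
rng = np.random.default_rng(0)
out = fourier_amplitude_swap(rng.random((128, 128)), rng.random((128, 128)))
```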

3D Auto-segmentation of pancreas cancer and surrounding anatomical structures for surgical planning.

Rhu J, Oh N, Choi GS, Kim JM, Choi SY, Lee JE, Lee J, Jeong WK, Min JH

PubMed paper · Jun 27 2025
This multicenter study aimed to develop a deep learning-based autosegmentation model for pancreatic cancer and surrounding anatomical structures using computed tomography (CT) to enhance surgical planning. We included patients with pancreatic cancer who underwent pancreatic surgery at three tertiary referral hospitals. A hierarchical Swin Transformer V2 model was implemented to segment the pancreas, pancreatic cancers, and peripancreatic structures from preoperative contrast-enhanced CT scans. Data from one tertiary institution were divided into training and internal validation sets at a 3:1 ratio, with a separately prepared external validation set from the two other institutions. Segmentation performance was quantitatively assessed using the Dice similarity coefficient (DSC) and qualitatively evaluated (complete vs partial vs absent). A total of 275 patients (51.6% male, mean age 65.8 ± 9.5 years) were included (176 in the training group, 59 in the internal validation group, and 40 in the external validation group). No significant differences in baseline characteristics were observed between the groups. The model achieved an overall mean DSC of 75.4 ± 6.0 and 75.6 ± 4.8 in the internal and external validation groups, respectively. It showed particularly high accuracy in the pancreas parenchyma (84.8 ± 5.3 and 86.1 ± 4.1) and lower accuracy in pancreatic cancer (57.0 ± 28.7 and 54.5 ± 23.5). The DSC scores for pancreatic cancer tended to increase with larger tumor sizes. Moreover, the qualitative assessments revealed high accuracy in the superior mesenteric artery (complete segmentation, 87.5%-100%), portal and superior mesenteric vein (97.5%-100%), and pancreas parenchyma (83.1%-87.5%), but lower accuracy in cancers (62.7%-65.0%). The deep learning-based autosegmentation model for 3D visualization of pancreatic cancer and peripancreatic structures showed robust performance. Further improvement will enable many promising applications in clinical research.
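
Segmentation quality here is reported as the Dice similarity coefficient (expressed as a percentage in the abstract). A minimal Python sketch of DSC on binary masks follows; the toy volumes are assumptions for illustration only.

```python
import numpy as np

def dice_similarity(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between a predicted and a reference
    binary mask; 1.0 means perfect overlap, 0.0 means no overlap."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 3D volumes standing in for a pancreas segmentation and its ground truth
rng = np.random.default_rng(0)
gt = rng.random((32, 64, 64)) > 0.7
pred = np.logical_and(gt, rng.random(gt.shape) > 0.1)   # simulate slight under-segmentation
print(f"DSC = {dice_similarity(pred, gt):.3f}")
```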

D²-RD-UNet: A dual-stage dual-class framework with connectivity correction for hepatic vessels segmentation.

Cavicchioli M, Moglia A, Garret G, Puglia M, Vacavant A, Pugliese G, Cerveri P

PubMed paper · Jun 27 2025
Accurate segmentation of hepatic and portal veins is critical for preoperative planning in liver surgery, especially for resection and transplantation procedures. Extensive anatomical variability, pathological alterations, and the inherent class imbalance between background and vascular structures make this task challenging. Current state-of-the-art deep learning approaches often fail to generalize across patient variability or to maintain vascular topology, limiting their clinical applicability. To overcome these limitations, we propose D²-RD-UNet, a dual-stage, dual-class segmentation framework for hepatic and portal vessels. The D²-RD-UNet architecture employs dense and residual connections to improve feature propagation and segmentation accuracy. It integrates advanced data-driven preprocessing and a dual-path architecture for 3D and 4D data, with the latter concatenating computed tomography (CT) scans with four relevant vesselness filters (Sato, Frangi, OOF, and RORPO). The pipeline is completed by the first postprocessing multi-class vessel connectivity correction algorithm based on centerlines. Additionally, we introduce the first radius-based branching algorithm to evaluate the model's predictions locally, providing detailed insights into the accuracy of vascular reconstructions at different scales. To make up for the scarcity of well-annotated open datasets for hepatic vessel segmentation, we curated AIMS-HPV-385, a large, pathological, multi-class, validated dataset of 385 CT scans. We trained different configurations of D²-RD-UNet and state-of-the-art models on 327 CTs of AIMS-HPV-385. Experimental results on the remaining 58 CTs of AIMS-HPV-385 and on the 20 CTs of 3D-IRCADb-01 demonstrate superior performance of the D²-RD-UNet variants over state-of-the-art methods, achieving robust generalization, preserving vascular continuity, and offering a reliable approach for liver vascular reconstruction.
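
The 4D input path concatenates the CT volume with vesselness filter responses. Below is a rough sketch of building such a multi-channel input with scikit-image; only Sato and Frangi are readily available there (OOF and RORPO would require other libraries), and the slice-wise usage, sigmas, and normalization are assumptions rather than the paper's preprocessing.

```python
import numpy as np
from skimage.filters import frangi, sato

def build_vesselness_channels(ct_slice):
    """Stack a CT slice with Sato and Frangi vesselness responses, giving a
    multi-channel input similar in spirit to concatenating vesselness filters
    with the CT data."""
    ct = ct_slice.astype(np.float32)
    ct_norm = (ct - ct.min()) / (ct.max() - ct.min() + 1e-7)   # normalize to [0, 1]
    channels = [
        ct_norm,
        sato(ct_norm, sigmas=(1, 2, 3), black_ridges=False),   # bright tubular structures
        frangi(ct_norm, sigmas=(1, 2, 3), black_ridges=False),
    ]
    return np.stack(channels, axis=0)                           # (C, H, W) network input

# Toy usage on a random slice standing in for a contrast-enhanced CT slice
x = build_vesselness_channels(np.random.rand(128, 128))
print(x.shape)  # (3, 128, 128)
```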

AI Model Passport: Data and System Traceability Framework for Transparent AI in Health

Varvara Kalokyri, Nikolaos S. Tachos, Charalampos N. Kalantzopoulos, Stelios Sfakianakis, Haridimos Kondylakis, Dimitrios I. Zaridis, Sara Colantonio, Daniele Regge, Nikolaos Papanikolaou, The ProCAncer-I consortium, Konstantinos Marias, Dimitrios I. Fotiadis, Manolis Tsiknakis

arXiv preprint · Jun 27 2025
The increasing integration of Artificial Intelligence (AI) into health and biomedical systems necessitates robust frameworks for transparency, accountability, and ethical compliance. Existing frameworks often rely on human-readable, manually maintained documentation, which limits scalability, comparability, and machine interpretability across projects and platforms. They also fail to provide a unique, verifiable identity for AI models to ensure their provenance and authenticity across systems and use cases, limiting reproducibility and stakeholder trust. This paper introduces the concept of the AI Model Passport, a structured and standardized documentation framework that acts as a digital identity and verification tool for AI models. It captures essential metadata to uniquely identify, verify, trace, and monitor AI models across their lifecycle, from data acquisition and preprocessing to model design, development, and deployment. In addition, an implementation of this framework is presented through AIPassport, an MLOps tool developed within the ProCAncer-I EU project for medical imaging applications. AIPassport automates metadata collection, ensures proper versioning, decouples results from source scripts, and integrates with various development environments. Its effectiveness is showcased through a lesion segmentation use case using data from the ProCAncer-I dataset, illustrating how the AI Model Passport enhances transparency, reproducibility, and regulatory readiness while reducing manual effort. This approach aims to set a new standard for fostering trust and accountability in AI-driven healthcare solutions, aspiring to serve as the basis for developing transparent and regulation-compliant AI systems across domains.
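
A model passport boils down to structured, machine-readable metadata tied to a verifiable model identity. The snippet below sketches one possible minimal record, with a content hash of the weights file as the identifier; the field names and schema are assumptions for illustration and do not reflect the actual AIPassport schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_model_passport(weights_path, dataset_ids, preprocessing, metrics):
    """Minimal machine-readable passport record for a trained model: a content
    hash of the weights serves as a verifiable identity, and training provenance
    is captured as structured metadata rather than free-text documentation."""
    with open(weights_path, "rb") as f:
        weights_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "model_id": weights_hash,                      # unique, verifiable identity
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "datasets": dataset_ids,                       # e.g. accession IDs / DOIs
        "preprocessing": preprocessing,                # resampling, normalization, ...
        "evaluation": metrics,                         # held-out metrics at release time
        "schema_version": "0.1",
    }

# Illustrative usage (path and fields are hypothetical)
# passport = make_model_passport("model.pt", ["dataset-001"],
#                                {"spacing_mm": [1, 1, 1]}, {"dice": 0.84})
# print(json.dumps(passport, indent=2))
```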

Towards Scalable and Robust White Matter Lesion Localization via Multimodal Deep Learning

Julia Machnio, Sebastian Nørgaard Llambias, Mads Nielsen, Mostafa Mehdipour Ghazi

arXiv preprint · Jun 27 2025
White matter hyperintensities (WMH) are radiological markers of small vessel disease and neurodegeneration, whose accurate segmentation and spatial localization are crucial for diagnosis and monitoring. While multimodal MRI offers complementary contrasts for detecting and contextualizing WM lesions, existing approaches often lack flexibility in handling missing modalities and fail to integrate anatomical localization efficiently. We propose a deep learning framework for WM lesion segmentation and localization that operates directly in native space using single- and multi-modal MRI inputs. Our study evaluates four input configurations: FLAIR-only, T1-only, concatenated FLAIR and T1, and a modality-interchangeable setup. It further introduces a multi-task model for jointly predicting lesion and anatomical region masks to estimate region-wise lesion burden. Experiments conducted on the MICCAI WMH Segmentation Challenge dataset demonstrate that multimodal input significantly improves the segmentation performance, outperforming unimodal models. While the modality-interchangeable setting trades accuracy for robustness, it enables inference in cases with missing modalities. Joint lesion-region segmentation using multi-task learning was less effective than separate models, suggesting representational conflict between tasks. Our findings highlight the utility of multimodal fusion for accurate and robust WMH analysis, and the potential of joint modeling for integrated predictions.
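
The modality-interchangeable setup trades some accuracy for robustness when modalities are missing at inference. One common way to train for that behavior is to randomly drop modality channels during training; the sketch below illustrates that idea and is an assumption rather than the authors' exact scheme.

```python
import torch

def modality_dropout(batch, p_drop=0.3, generator=None):
    """Randomly zero out whole modality channels (e.g. FLAIR or T1) per sample,
    so the network learns to segment from whichever modalities are present.

    batch: (B, M, H, W) tensor with one channel per MRI modality.
    """
    B, M = batch.shape[:2]
    keep = (torch.rand(B, M, generator=generator) > p_drop).float()
    # guarantee at least one modality survives per sample
    empty = keep.sum(dim=1) == 0
    keep[empty, 0] = 1.0
    return batch * keep.view(B, M, 1, 1)

# Toy usage: a batch with FLAIR and T1 channels
x = torch.rand(4, 2, 64, 64)
x_aug = modality_dropout(x)
```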

Catheter detection and segmentation in X-ray images via multi-task learning.

Xi L, Ma Y, Koland E, Howell S, Rinaldi A, Rhode KS

PubMed paper · Jun 27 2025
Automated detection and segmentation of surgical devices, such as catheters or wires, in X-ray fluoroscopic images have the potential to enhance image guidance in minimally invasive heart surgeries. In this paper, we present a convolutional neural network model that integrates a ResNet architecture with multiple prediction heads to achieve real-time, accurate localization of electrodes on catheters and catheter segmentation in an end-to-end deep learning framework. We also propose a multi-task learning strategy in which our model is trained to perform accurate electrode detection and catheter segmentation simultaneously. A key challenge with this approach is achieving optimal performance on both tasks. To address this, we introduce a novel multi-level dynamic resource prioritization method, which dynamically adjusts sample and task weights during training to prioritize the more challenging tasks; task difficulty is treated as inversely proportional to performance and evolves throughout the training process. The proposed method has been validated on both public and private datasets for single-task catheter segmentation and for multi-task catheter segmentation and detection. Compared with existing state-of-the-art methods, it demonstrates significant improvements, with a mean J of 64.37/63.97 and an average precision over all IoU thresholds of 84.15/83.13, respectively, for detection and segmentation in the multi-task setting on the validation and test sets of the catheter detection and segmentation dataset. Our approach achieves a good balance between accuracy and efficiency, making it well suited for real-time surgical guidance applications.
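
The dynamic resource prioritization assigns larger weights to tasks that are currently performing worse, with difficulty inversely proportional to performance. Below is a minimal sketch of such performance-inverse task weighting in a multi-task loss; the softmax form, temperature, and toy numbers are assumptions, not the paper's exact formulation.

```python
import torch

def dynamic_task_weights(task_scores, temperature=1.0):
    """Weight tasks inversely to their running performance: tasks that currently
    score worse (e.g. lower IoU or detection AP) receive a larger share of the loss.

    task_scores: dict of task name -> running performance in [0, 1].
    """
    names = list(task_scores)
    difficulty = torch.tensor([1.0 - task_scores[n] for n in names]) / temperature
    weights = torch.softmax(difficulty, dim=0)
    return dict(zip(names, weights.tolist()))

def multi_task_loss(losses, task_scores):
    """Combine per-task losses using the dynamic weights."""
    w = dynamic_task_weights(task_scores)
    return sum(w[name] * loss for name, loss in losses.items())

# Toy usage: segmentation is currently doing better than detection,
# so detection receives the larger weight.
losses = {"segmentation": torch.tensor(0.42), "detection": torch.tensor(0.77)}
scores = {"segmentation": 0.80, "detection": 0.55}
print(dynamic_task_weights(scores))
print(multi_task_loss(losses, scores))
```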