Page 270 of 3493486 results

Uncovering ethical biases in publicly available fetal ultrasound datasets.

Fiorentino MC, Moccia S, Cosmo MD, Frontoni E, Giovanola B, Tiribelli S

pubmed logopapers · Jun 13 2025
We explore biases present in publicly available fetal ultrasound (US) imaging datasets, currently at the disposal of researchers to train deep learning (DL) algorithms for prenatal diagnostics. As DL increasingly permeates the field of medical imaging, the urgency to critically evaluate the fairness of benchmark public datasets used to train them grows. Our thorough investigation reveals a multifaceted bias problem, encompassing issues such as lack of demographic representativeness, limited diversity in clinical conditions depicted, and variability in US technology used across datasets. We argue that these biases may significantly influence DL model performance, which may lead to inequities in healthcare outcomes. To address these challenges, we recommend a multilayered approach. This includes promoting practices that ensure data inclusivity, such as diversifying data sources and populations, and refining model strategies to better account for population variances. These steps will enhance the trustworthiness of DL algorithms in fetal US analysis.

Quantitative and qualitative assessment of ultra-low-dose paranasal sinus CT using deep learning image reconstruction: a comparison with hybrid iterative reconstruction.

Otgonbaatar C, Lee D, Choi J, Jang H, Shim H, Ryoo I, Jung HN, Suh S

pubmed logopapers · Jun 13 2025
This study aimed to evaluate the quantitative and qualitative performance of ultra-low-dose computed tomography (CT) with deep learning image reconstruction (DLR) compared with hybrid iterative reconstruction (IR) for preoperative paranasal sinus (PNS) imaging. This retrospective analysis included 132 patients who underwent non-contrast ultra-low-dose sinus CT (effective dose: 0.03 mSv). Images were reconstructed using hybrid IR and DLR. Objective image quality metrics, including image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), noise power spectrum (NPS), and no-reference perceptual image sharpness, were assessed. Two board-certified radiologists independently performed subjective image quality evaluations. DLR showed significantly lower image noise (28.62 ± 4.83 Hounsfield units) than hybrid IR (140.70 ± 16.04, p < 0.001), yielding smoother and more uniform images. DLR also demonstrated significantly improved SNR (22.47 ± 5.82 vs 9.14 ± 2.45, p < 0.001) and CNR (71.88 ± 14.03 vs 11.81 ± 1.50, p < 0.001). NPS analysis revealed that DLR reduced both the noise magnitude and the NPS peak values. Additionally, DLR produced significantly sharper images (no-reference perceptual sharpness metric: 0.56 ± 0.04) than hybrid IR (0.36 ± 0.01). Radiologists rated DLR as superior to hybrid IR in overall image quality, bone structure visualization, and diagnostic confidence at ultra-low-dose CT. DLR significantly outperformed hybrid IR in ultra-low-dose PNS CT by reducing image noise, improving SNR and CNR, enhancing image sharpness, and maintaining critical anatomical visualization, demonstrating its potential for effective preoperative planning with minimal radiation exposure.
Question Ultra-low-dose CT for paranasal sinuses is essential for patients requiring repeated scans and functional endoscopic sinus surgery (FESS) planning to reduce cumulative radiation exposure. Findings DLR outperformed hybrid IR in ultra-low-dose paranasal sinus CT. Clinical relevance Ultra-low-dose CT with DLR delivers sufficient image quality for detailed surgical planning, effectively minimizing unnecessary radiation exposure to enhance patient safety.
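The SNR and CNR figures reported above are standard ROI-based statistics. As a rough illustration (not the study's code, and using purely synthetic HU values), they can be computed from mean and standard deviation of pixel values within regions of interest:

```python
import numpy as np

def snr(roi: np.ndarray) -> float:
    """Signal-to-noise ratio: mean HU of the ROI divided by its standard deviation."""
    return float(roi.mean() / roi.std())

def cnr(roi: np.ndarray, background: np.ndarray) -> float:
    """Contrast-to-noise ratio: absolute difference of mean HU between tissue
    and background ROIs, normalized by the background noise."""
    return float(abs(roi.mean() - background.mean()) / background.std())

# Illustrative ROIs (synthetic HU values, not study data)
rng = np.random.default_rng(0)
tissue = rng.normal(60.0, 5.0, size=1000)    # soft-tissue ROI
air = rng.normal(-1000.0, 5.0, size=1000)    # air-filled sinus ROI
print(snr(tissue), cnr(tissue, air))
```

Definitions of SNR/CNR vary slightly across papers (e.g., which ROI supplies the noise term); the study does not specify its exact convention, so the above is one common choice.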

Long-term prognostic value of the CT-derived fractional flow reserve combined with atherosclerotic burden in patients with non-obstructive coronary artery disease.

Wang Z, Li Z, Xu T, Wang M, Xu L, Zeng Y

pubmed logopapers · Jun 13 2025
The long-term prognostic significance of the coronary computed tomography angiography (CCTA)-derived fractional flow reserve (CT-FFR) for non-obstructive coronary artery disease (CAD) is uncertain. We aimed to investigate the additional prognostic value of CT-FFR beyond CCTA-defined atherosclerotic burden for long-term outcomes. Consecutive patients with suspected stable CAD were candidates for this retrospective cohort study. Deep-learning-based vessel-specific CT-FFR was calculated. All patients enrolled were followed for at least 5 years. The primary outcome was major adverse cardiovascular events (MACE). Predictive abilities for MACE were compared among three models (model 1, constructed using clinical variables; model 2, model 1 + CCTA-derived atherosclerotic burden (Leiden risk score and segment involvement score); and model 3, model 2 + CT-FFR). A total of 1944 patients (median age, 59 (53-65) years; 53.0% men) were included. During a median follow-up time of 73.4 (71.2-79.7) months, 64 patients (3.3%) experienced MACE. In multivariate-adjusted Cox models, CT-FFR ≤ 0.80 (HR: 7.18; 95% CI: 4.25-12.12; p < 0.001) was a robust and independent predictor for MACE. The discriminant ability was higher in model 2 than in model 1 (C-index, 0.76 vs. 0.68; p = 0.001) and was further promoted by adding CT-FFR to model 3 (C-index, 0.83 vs. 0.76; p < 0.001). Integrated discrimination improvement (IDI) was 0.033 (p = 0.022) for model 2 beyond model 1. Of note, compared with model 2, model 3 also exhibited improved discrimination (IDI = 0.056; p < 0.001). In patients with non-obstructive CAD, CT-FFR provides robust and incremental prognostic information for predicting long-term outcomes. The combined model including CT-FFR and CCTA-defined atherosclerotic burden exhibits improved prediction abilities, which is helpful for risk stratification. 
Question Prognostic significance of the CT-fractional flow reserve (FFR) in non-obstructive coronary artery disease for long-term outcomes merits further investigation. Findings Our data strongly emphasized the independent and additional predictive value of CT-FFR beyond coronary CTA-defined atherosclerotic burden and clinical risk factors. Clinical relevance The new combined predictive model incorporating CT-FFR can be satisfactorily used for risk stratification of patients with non-obstructive coronary artery disease by identifying those who are truly suitable for subsequent high-intensity preventative therapies and extensive follow-up for prognostic reasons.
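The C-index used above to compare the nested models is Harrell's concordance index. A minimal pure-Python sketch (illustrative only, not the authors' implementation, which would also handle ties in event times more carefully):

```python
from itertools import combinations

def c_index(times, events, risk_scores):
    """Harrell's concordance index: among usable pairs (the patient with the
    shorter follow-up had an event), count how often the higher risk score
    belongs to the patient who failed earlier."""
    concordant, usable = 0.0, 0
    for pair_i, pair_j in combinations(zip(times, events, risk_scores), 2):
        (t_i, e_i, r_i), (t_j, e_j, r_j) = pair_i, pair_j
        if t_j < t_i:  # order so that patient i has the shorter time
            (t_i, e_i, r_i), (t_j, e_j, r_j) = (t_j, e_j, r_j), (t_i, e_i, r_i)
        if t_i == t_j or not e_i:
            continue  # tied times, or earlier patient censored: pair not usable
        usable += 1
        if r_i > r_j:
            concordant += 1.0
        elif r_i == r_j:
            concordant += 0.5  # tied risk scores count half
    return concordant / usable
```

A C-index of 0.5 corresponds to random risk ordering and 1.0 to perfect ordering, which is why the jump from 0.76 to 0.83 after adding CT-FFR represents a meaningful gain in discrimination.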

Enhancing Privacy: The Utility of Stand-Alone Synthetic CT and MRI for Tumor and Bone Segmentation

André Ferreira, Kunpeng Xie, Caroline Wilpert, Gustavo Correia, Felix Barajas Ordonez, Tiago Gil Oliveira, Maike Bode, Robert Siepmann, Frank Hölzle, Rainer Röhrig, Jens Kleesiek, Daniel Truhn, Jan Egger, Victor Alves, Behrus Puladi

arxiv logopreprint · Jun 13 2025
AI requires extensive datasets, while medical data is subject to strict data protection. Anonymization is essential but challenging for some regions, such as the head, where identifying structures overlap with regions of clinical interest. Synthetic data offers a potential solution, but studies often lack rigorous evaluation of realism and utility. We therefore investigate to what extent synthetic data can replace real data in segmentation tasks. We employed head and neck cancer CT scans and brain glioma MRI scans from two large datasets. Synthetic data were generated using generative adversarial networks and diffusion models. We evaluated the quality of the synthetic data using MAE, MS-SSIM, Radiomics, and a Visual Turing Test (VTT) performed by 5 radiologists, and their usefulness in segmentation tasks using the Dice similarity coefficient (DSC). Radiomics indicates high fidelity of synthetic MRIs but falls short in producing highly realistic CT tissue, with correlation coefficients of 0.8784 and 0.5461 for MRI and CT tumors, respectively. DSC results indicate limited utility of synthetic data: tumor segmentation achieved DSC = 0.064 on CT and 0.834 on MRI, while bone segmentation achieved a mean DSC = 0.841. A relation between DSC and correlation is observed but is limited by the complexity of the task. VTT results show synthetic CTs' utility, but with limited educational applications. Synthetic data can be used independently for the segmentation task, although limited by the complexity of the structures to segment. Advancing generative models to better tolerate heterogeneous inputs and learn subtle details is essential for enhancing their realism and expanding their application potential.
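The DSC values above follow the usual Dice similarity coefficient for binary masks; a minimal sketch (not the paper's code):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)
```

Because DSC rewards overlap relative to combined mask size, the very low CT tumor score (0.064) versus the MRI score (0.834) quantifies how much harder the synthetic-CT segmentation task was.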

Beyond Benchmarks: Towards Robust Artificial Intelligence Bone Segmentation in Socio-Technical Systems

Xie, K., Gruber, L. J., Crampen, M., Li, Y., Ferreira, A., Tappeiner, E., Gillot, M., Schepers, J., Xu, J., Pankert, T., Beyer, M., Shahamiri, N., ten Brink, R., Dot, G., Weschke, C., van Nistelrooij, N., Verhelst, P.-J., Guo, Y., Xu, Z., Bienzeisler, J., Rashad, A., Flügge, T., Cotton, R., Vinayahalingam, S., Ilesan, R., Raith, S., Madsen, D., Seibold, C., Xi, T., Berge, S., Nebelung, S., Kodym, O., Sundqvist, O., Thieringer, F., Lamecker, H., Coppens, A., Potrusil, T., Kraeima, J., Witjes, M., Wu, G., Chen, X., Lambrechts, A., Cevidanes, L. H. S., Zachow, S., Hermans, A., Truhn, D., Alves,

medrxiv logopreprint · Jun 13 2025
Despite the advances in automated medical image segmentation, AI models still underperform in various clinical settings, challenging real-world integration. In this multicenter evaluation, we analyzed 20 state-of-the-art mandibular segmentation models across 19,218 segmentations of 1,000 clinically resampled CT/CBCT scans. We show that segmentation accuracy varies by up to 25% depending on socio-technical factors such as voxel size, bone orientation, and patient conditions such as osteosynthesis or pathology. Higher sharpness, isotropic smaller voxels, and neutral orientation significantly improved results, while metallic osteosynthesis and anatomical complexity led to significant degradation. Our findings challenge the common view of AI models as "plug-and-play" tools and suggest evidence-based optimization recommendations for both clinicians and developers. This will in turn boost the integration of AI segmentation tools in routine healthcare.

BreastDCEDL: Curating a Comprehensive DCE-MRI Dataset and developing a Transformer Implementation for Breast Cancer Treatment Response Prediction

Naomi Fridman, Bubby Solway, Tomer Fridman, Itamar Barnea, Anat Goldshtein

arxiv logopreprint · Jun 13 2025
Breast cancer remains a leading cause of cancer-related mortality worldwide, making early detection and accurate treatment response monitoring critical priorities. We present BreastDCEDL, a curated, deep learning-ready dataset comprising pre-treatment 3D Dynamic Contrast-Enhanced MRI (DCE-MRI) scans from 2,070 breast cancer patients drawn from the I-SPY1, I-SPY2, and Duke cohorts, all sourced from The Cancer Imaging Archive. The raw DICOM imaging data were rigorously converted into standardized 3D NIfTI volumes with preserved signal integrity, accompanied by unified tumor annotations and harmonized clinical metadata including pathologic complete response (pCR), hormone receptor (HR), and HER2 status. Although DCE-MRI provides essential diagnostic information and deep learning offers tremendous potential for analyzing such complex data, progress has been limited by lack of accessible, public, multicenter datasets. BreastDCEDL addresses this gap by enabling development of advanced models, including state-of-the-art transformer architectures that require substantial training data. To demonstrate its capacity for robust modeling, we developed the first transformer-based model for breast DCE-MRI, leveraging Vision Transformer (ViT) architecture trained on RGB-fused images from three contrast phases (pre-contrast, early post-contrast, and late post-contrast). Our ViT model achieved state-of-the-art pCR prediction performance in HR+/HER2- patients (AUC 0.94, accuracy 0.93). BreastDCEDL includes predefined benchmark splits, offering a framework for reproducible research and enabling clinically meaningful modeling in breast cancer imaging.
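The RGB fusion of the three contrast phases can be sketched as channel-stacking with per-phase intensity normalization. The details below (224×224 slices as a typical ViT input size, min-max scaling) are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def fuse_phases_rgb(pre: np.ndarray, early: np.ndarray, late: np.ndarray) -> np.ndarray:
    """Stack pre-contrast, early post-contrast, and late post-contrast slices
    into a 3-channel RGB-like image, min-max normalizing each phase to [0, 1]."""
    def norm(x):
        x = x.astype(np.float32)
        lo, hi = x.min(), x.max()
        return (x - lo) / (hi - lo) if hi > lo else np.zeros_like(x)
    return np.stack([norm(pre), norm(early), norm(late)], axis=-1)

# Illustrative slices (random data, hypothetical 224x224 ViT input size)
rng = np.random.default_rng(1)
rgb = fuse_phases_rgb(*(rng.random((224, 224)) for _ in range(3)))
print(rgb.shape)  # (224, 224, 3)
```

Encoding temporal enhancement into color channels is a common trick for reusing natural-image pretrained backbones, since a standard ViT expects 3-channel input.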

Protocol of the observational study STRATUM-OS: First step in the development and validation of the STRATUM tool based on multimodal data processing to assist surgery in patients affected by intra-axial brain tumours

Fabelo, H., Ramallo-Farina, Y., Morera, J., Pineiro, J. F., Lagares, A., Jimenez-Roldan, L., Burstrom, G., Garcia-Bello, M. A., Garcia-Perez, L., Falero, R., Gonzalez, M., Duque, S., Rodriguez-Jimenez, C., Hernandez, M., Delgado-Sanchez, J. J., Paredes, A. B., Hernandez, G., Ponce, P., Leon, R., Gonzalez-Martin, J. M., Rodriguez-Esparragon, F., Callico, G. M., Wagner, A. M., Clavo, B., STRATUM,

medrxiv logopreprint · Jun 13 2025
Introduction: Integrated digital diagnostics can support complex surgeries in many anatomic sites, and brain tumour surgery represents one of the most complex cases. Neurosurgeons face several challenges during brain tumour surgeries, such as differentiating critical tissue from brain tumour margins. To overcome these challenges, the STRATUM project will develop a 3D decision support tool for brain surgery guidance and diagnostics based on multimodal data processing, including hyperspectral imaging, integrated as a point-of-care computing tool in neurosurgical workflows. This paper reports the protocol for the development and technical validation of the STRATUM tool. Methods and analysis: This international multicentre, prospective, open, observational cohort study, STRATUM-OS (study: 28 months, pre-recruitment: 2 months, recruitment: 20 months, follow-up: 6 months), with no control group, will collect data from 320 patients undergoing standard neurosurgical procedures to: (1) develop and technically validate the STRATUM tool, and (2) collect the outcome measures for comparing the standard procedure versus the standard procedure plus the use of the STRATUM tool during surgery in a subsequent historically controlled non-randomized clinical trial. Ethics and dissemination: The protocol was approved by the participant Ethics Committees. Results will be disseminated in scientific conferences and peer-reviewed journals. Trial registration number: [Pending Number]

Article summary: Strengths and limitations of this study
- STRATUM-OS will be the first multicentre prospective observational study to develop and technically validate a 3D decision support tool for brain surgery guidance and diagnostics in real time, based on artificial intelligence and multimodal data processing, including the emerging hyperspectral imaging modality.
- This study encompasses a prospective collection of multimodal pre-, intra-, and postoperative medical data, including innovative imaging modalities, from patients with intra-axial brain tumours.
- This large observational study will act as a historical control in a subsequent clinical trial to evaluate a fully working prototype.
- Although the estimated sample size is deemed adequate for the purpose of the study, the complexity of the clinical context and the type of surgery could potentially lead to under-recruitment and under-representation of less prevalent tumour types.

Investigating the Role of Area Deprivation Index in Observed Differences in CT-Based Body Composition by Race.

Chisholm M, Jabal MS, He H, Wang Y, Kalisz K, Lafata KJ, Calabrese E, Bashir MR, Tailor TD, Magudia K

pubmed logopapers · Jun 13 2025
Differences in CT-based body composition (BC) have been observed by race. We sought to investigate whether indices reporting census block group-level disadvantage, the area deprivation index (ADI) and social vulnerability index (SVI), age, sex, and/or clinical factors could explain race-based differences in body composition. The first abdominal CT exams for patients in Durham County at a single institution in 2020 were analyzed using a fully automated, open-source deep learning BC analysis workflow to generate cross-sectional areas for skeletal muscle (SMA), subcutaneous fat (SFA), and visceral fat (VFA). Patient-level demographic and clinical data were gathered from the electronic health record. State ADI ranking and SVI values were linked to each patient. Univariable and multivariable models were created to assess the association of demographics, ADI, SVI, and other relevant clinical factors with SMA, SFA, and VFA. 5,311 patients (mean age, 57.4 years; 55.5% female; 46.5% Black, 39.5% White, 10.3% Hispanic) were included. At univariable analysis, race, ADI, SVI, sex, BMI, weight, and height were significantly associated with all body compartments (SMA, SFA, and VFA; all p < 0.05). At multivariable analyses adjusted for patient characteristics and clinical comorbidities, race remained a significant predictor, whereas ADI did not. SVI remained significant in the multivariable model for SMA.
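The univariable-versus-multivariable comparison in such studies amounts to re-estimating a predictor's coefficient with the covariates added to the design matrix. A toy OLS sketch of that adjustment (the general idea only; the study's actual models and variables are not reproduced here):

```python
import numpy as np

def adjusted_coefficient(y, predictor, covariates):
    """OLS coefficient of `predictor` in y ~ 1 + predictor + covariates:
    the association that remains after multivariable adjustment."""
    X = np.column_stack([np.ones(len(y)), predictor, covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(beta[1])

# Illustrative: y depends on x with slope 2 after adjusting for a
# hypothetical confounder z
x = np.arange(10, dtype=float)
z = x % 3
y = 2.0 * x + 3.0 * z + 1.0
print(adjusted_coefficient(y, x, z))
```

When a predictor (like ADI above) loses significance after adjustment, its univariable association was largely carried by the covariates now in the model.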

DMAF-Net: An Effective Modality Rebalancing Framework for Incomplete Multi-Modal Medical Image Segmentation

Libin Lan, Hongxing Li, Zunhui Xia, Yudong Zhang

arxiv logopreprint · Jun 13 2025
Incomplete multi-modal medical image segmentation faces critical challenges from modality imbalance, including imbalanced modality missing rates and heterogeneous modality contributions. Due to their reliance on idealized assumptions of complete modality availability, existing methods fail to dynamically balance contributions and neglect the structural relationships between modalities, resulting in suboptimal performance in real-world clinical scenarios. To address these limitations, we propose a novel model, named Dynamic Modality-Aware Fusion Network (DMAF-Net). The DMAF-Net adopts three key ideas. First, it introduces a Dynamic Modality-Aware Fusion (DMAF) module to suppress missing-modality interference by combining transformer attention with adaptive masking and weight modality contributions dynamically through attention maps. Second, it designs a synergistic Relation Distillation and Prototype Distillation framework to enforce global-local feature alignment via covariance consistency and masked graph attention, while ensuring semantic consistency through cross-modal class-specific prototype alignment. Third, it presents a Dynamic Training Monitoring (DTM) strategy to stabilize optimization under imbalanced missing rates by tracking distillation gaps in real-time, and to balance convergence speeds across modalities by adaptively reweighting losses and scaling gradients. Extensive experiments on BraTS2020 and MyoPS2020 demonstrate that DMAF-Net outperforms existing methods for incomplete multi-modal medical image segmentation. Our code is available at https://github.com/violet-42/DMAF-Net.
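The Dynamic Training Monitoring idea of giving lagging modalities more training weight can be caricatured as loss-proportional reweighting. The softmax-based toy below conveys only the general mechanism and is not the DMAF-Net implementation:

```python
import numpy as np

def reweight_losses(losses, temperature=1.0):
    """Toy dynamic loss balancing: modalities with larger current loss
    (i.e., lagging convergence) receive proportionally more weight via a
    softmax over per-modality losses; weights are scaled to average 1."""
    losses = np.asarray(losses, dtype=np.float64)
    w = np.exp((losses - losses.max()) / temperature)  # max-shift for stability
    w /= w.sum()
    return w * len(losses)
```

A lower `temperature` (a free parameter here) sharpens the reweighting toward the worst-performing modality; at very high temperature all modalities are weighted nearly equally.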

Exploring the Effectiveness of Deep Features from Domain-Specific Foundation Models in Retinal Image Synthesis

Zuzanna Skorniewska, Bartlomiej W. Papiez

arxiv logopreprint · Jun 13 2025
The adoption of neural network models in medical imaging has been constrained by strict privacy regulations, limited data availability, high acquisition costs, and demographic biases. Deep generative models offer a promising solution by generating synthetic data that bypasses privacy concerns and addresses fairness by producing samples for under-represented groups. However, unlike natural images, medical imaging requires validation not only for fidelity (e.g., Fréchet Inception Score) but also for morphological and clinical accuracy. This is particularly true for colour fundus retinal imaging, which requires precise replication of the retinal vascular network, including vessel topology, continuity, and thickness. In this study, we investigated whether a distance-based loss function based on deep activation layers of a large foundational model trained on a large corpus of domain data, colour fundus imaging, offers advantages over perceptual and edge-detection-based loss functions. Our extensive validation pipeline, based on both domain-free and domain-specific tasks, suggests that domain-specific deep features do not improve autoencoder image generation. Conversely, our findings highlight the effectiveness of conventional edge detection filters in improving the sharpness of vascular structures in synthetic samples.
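An edge-detection-based loss of the kind the study compares against can be sketched as an L1 distance between Sobel edge maps of the real and synthetic images. This is a generic construction under our own assumptions, not the paper's code:

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Sobel gradient magnitude of a 2-D image (naive loop implementation)."""
    kx = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=np.float32)
    ky = kx.T
    pad = np.pad(img.astype(np.float32), 1, mode="edge")
    gx = np.zeros(img.shape, dtype=np.float32)
    gy = np.zeros(img.shape, dtype=np.float32)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

def edge_loss(real: np.ndarray, synthetic: np.ndarray) -> float:
    """Edge-based loss: mean absolute difference between Sobel edge maps,
    penalizing blurred or missing vessel boundaries in the synthetic image."""
    return float(np.abs(sobel_edges(real) - sobel_edges(synthetic)).mean())
```

Because the loss operates on gradient magnitudes rather than raw intensities, it specifically rewards sharp vascular boundaries, which is consistent with the finding that edge filters improved vessel sharpness.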
