Page 100 of 225 (2246 results)

Validation of a Dynamic Risk Prediction Model Incorporating Prior Mammograms in a Diverse Population.

Jiang S, Bennett DL, Colditz GA

pubmed · Jun 2 2025
Importance: For breast cancer risk prediction to be clinically useful, it must be accurate and applicable to diverse groups of women across multiple settings.
Objective: To examine whether a dynamic risk prediction model incorporating prior mammograms, previously validated in Black and White women, could predict future risk of breast cancer across a racially and ethnically diverse population in a population-based screening program.
Design, Setting, and Participants: This prognostic study included women aged 40 to 74 years with 1 or more screening mammograms drawn from the British Columbia Breast Screening Program from January 1, 2013, to December 31, 2019, with follow-up via linkage to the British Columbia Cancer Registry through June 2023. This provincial, organized screening program offers screening mammography with full-field digital mammography (FFDM) every 2 years. Data were analyzed from May to August 2024.
Exposures: FFDM-based, artificial intelligence-generated mammogram risk score (MRS), including up to 4 years of prior mammograms.
Main Outcomes and Measures: The primary outcomes were 5-year risk of breast cancer (measured with the area under the receiver operating characteristic curve [AUROC]) and absolute risk of breast cancer calibrated to the US Surveillance, Epidemiology, and End Results incidence rates.
Results: Among 206 929 women (mean [SD] age, 56.1 [9.7] years; of 118 093 with data on race, there were 34 266 East Asian; 1946 Indigenous; 6116 South Asian; and 66 742 White women), there were 4168 pathology-confirmed incident breast cancers diagnosed through June 2023. Mean (SD) follow-up time was 5.3 (3.0) years. Using up to 4 years of prior mammogram images in addition to the most current mammogram, a 5-year AUROC of 0.78 (95% CI, 0.77-0.80) was obtained based on analysis of images alone. Performance was consistent across subgroups defined by race and ethnicity in East Asian (AUROC, 0.77; 95% CI, 0.75-0.79), Indigenous (AUROC, 0.77; 95% CI, 0.71-0.83), and South Asian (AUROC, 0.75; 95% CI, 0.71-0.79) women. Stratification by age gave a 5-year AUROC of 0.76 (95% CI, 0.74-0.78) for women aged 50 years or younger and 0.80 (95% CI, 0.78-0.82) for women older than 50 years. There were 18 839 participants (9.0%) with a 5-year risk greater than 3%, and the positive predictive value was 4.9% with an incidence of 11.8 per 1000 person-years.
Conclusions and Relevance: A dynamic MRS generated from both current and prior mammograms showed robust performance across diverse racial and ethnic populations in a province-wide screening program starting from age 40 years, reflecting improved accuracy for racially and ethnically diverse populations.
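The study's primary discrimination metric is the 5-year AUROC. As a minimal illustration of how an AUROC is computed from risk scores and binary outcomes, here is a pure-Python sketch; the labels and scores are invented for illustration and are not from the study:

```python
def auroc(labels, scores):
    # Empirical AUROC: probability that a randomly chosen positive case
    # receives a higher risk score than a randomly chosen negative case,
    # counting ties as half a win.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy 5-year outcomes (1 = incident breast cancer) and hypothetical model scores
labels = [1, 1, 0, 0, 0]
scores = [0.9, 0.6, 0.7, 0.3, 0.2]
auc = auroc(labels, scores)
```

With one mis-ranked pair out of six, this toy example yields an AUROC of 5/6.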

Artificial Intelligence-Driven Innovations in Diabetes Care and Monitoring

Abdul Rahman, S., Mahadi, M., Yuliana, D., Budi Susilo, Y. K., Ariffin, A. E., Amgain, K.

medrxiv preprint · Jun 2 2025
This study explores Artificial Intelligence (AI)'s transformative role in diabetes care and monitoring, focusing on innovations that optimize patient outcomes. AI, particularly machine learning and deep learning, significantly enhances early detection of complications like diabetic retinopathy and improves screening efficacy. The methodology employs a bibliometric analysis using Scopus, VOSviewer, and Publish or Perish, analyzing 235 articles from 2023-2025. Results indicate a strong interdisciplinary focus, with Computer Science and Medicine being the dominant subject areas (36.9% and 12.9%, respectively). Bibliographic coupling reveals robust international collaborations led by the U.S. (1558.52 link strength), UK, and China, with key influential documents by Zhu (2023c) and Annuzzi (2023). This research highlights AI's impact on enhancing monitoring, personalized treatment, and proactive care, while acknowledging challenges in data privacy and ethical deployment. Future work should bridge technological advancements with real-world implementation to create equitable and efficient diabetes care systems.

Evaluating the performance and potential bias of predictive models for the detection of transthyretin cardiac amyloidosis

Hourmozdi, J., Easton, N., Benigeri, S., Thomas, J. D., Narang, A., Ouyang, D., Duffy, G., Upton, R., Hawkes, W., Akerman, A., Okwuosa, I., Kline, A., Kho, A. N., Luo, Y., Shah, S. J., Ahmad, F. S.

medrxiv preprint · Jun 2 2025
Background: Delays in the diagnosis of transthyretin amyloid cardiomyopathy (ATTR-CM) contribute to the significant morbidity of the condition, especially in the era of disease-modifying therapies. Screening for ATTR-CM with AI and other algorithms may improve timely diagnosis, but these algorithms have not been directly compared.
Objectives: The aim of this study was to compare the performance of four algorithms for ATTR-CM detection in a heart failure population and assess the risk for harms due to model bias.
Methods: We identified patients in an integrated health system from 2010-2022 with ATTR-CM and age- and sex-matched them to controls with heart failure to target 5% prevalence. We compared the performance of a claims-based random forest model (Huda et al. model), a regression-based score (Mayo ATTR-CM), and two deep learning echo models (EchoNet-LVH and EchoGo(R) Amyloidosis). We evaluated for bias using standard fairness metrics.
Results: The analytical cohort included 176 confirmed cases of ATTR-CM and 3192 control patients, with 79.2% self-identified as White and 9.0% as Black. The Huda et al. model performed poorly (AUC 0.49). Both deep learning echo models had a higher AUC when compared to the Mayo ATTR-CM Score (EchoNet-LVH 0.88; EchoGo Amyloidosis 0.92; Mayo ATTR-CM Score 0.79; DeLong P<0.001 for both). Bias auditing met fairness criteria for equal opportunity among patients who identified as Black.
Conclusions: Deep learning, echo-based models to detect ATTR-CM demonstrated the best overall discrimination when compared to two other models in external validation, with low risk of harms due to racial bias.
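The bias audit above uses the equal-opportunity criterion, which compares true-positive rates across demographic groups. A minimal sketch of such a check, using invented predictions and group labels rather than the study's data:

```python
def true_positive_rate(y_true, y_pred):
    # Sensitivity among actual cases: TP / (TP + FN)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn)

# Hypothetical (labels, predictions) per self-identified group
groups = {
    "White": ([1, 1, 0, 1, 0], [1, 0, 0, 1, 0]),
    "Black": ([1, 1, 0, 0], [1, 1, 0, 1]),
}
tpr = {g: true_positive_rate(y, p) for g, (y, p) in groups.items()}
# Equal opportunity is satisfied when this gap is near zero
eo_gap = abs(tpr["White"] - tpr["Black"])
```

In this toy example the TPR is 2/3 for one group and 1.0 for the other, so the gap of 1/3 would flag a potential fairness violation.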

Synthetic Ultrasound Image Generation for Breast Cancer Diagnosis Using cVAE-WGAN Models: An Approach Based on Generative Artificial Intelligence

Mondillo, G., Masino, M., Colosimo, S., Perrotta, A., Frattolillo, V., Abbate, F. G.

medrxiv preprint · Jun 2 2025
The scarcity and imbalance of medical image datasets hinder the development of robust computer-aided diagnosis (CAD) systems for breast cancer. This study explores the application of advanced generative models, based on generative artificial intelligence (GenAI), for the synthesis of digital breast ultrasound images. Using a hybrid Conditional Variational Autoencoder-Wasserstein Generative Adversarial Network (cVAE-WGAN) architecture, we developed a system to generate high-quality synthetic images conditioned on the class (malignant vs. normal/benign). These synthetic images, generated from the low-resolution BreastMNIST dataset and filtered for quality, were systematically integrated with real training data at different mixing ratios (W). The performance of a CNN classifier trained on these mixed datasets was evaluated against a baseline model trained only on real data balanced with SMOTE. The optimal integration (mixing weight W=0.25) produced a significant performance increase on the real test set: +8.17% in macro-average F1-score and +4.58% in accuracy compared to using real data alone. Analysis confirmed the originality of the generated samples. This approach offers a promising solution for overcoming data limitations in image-based breast cancer diagnostics, potentially improving the capabilities of CAD systems.
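The abstract does not specify how the mixing weight W is operationalized; one common reading is that synthetic samples are added until they make up fraction W of the final training set. A sketch under that assumption, with placeholder data:

```python
import random

def mix_datasets(real, synthetic, w, seed=0):
    # Keep every real sample and add enough synthetic samples that they
    # account for fraction w of the mixed training set. This is one
    # plausible interpretation of the mixing ratio, not the paper's
    # documented procedure.
    n_syn = round(w * len(real) / (1 - w))
    rng = random.Random(seed)
    mixed = real + rng.sample(synthetic, n_syn)
    rng.shuffle(mixed)
    return mixed

# Placeholder datasets standing in for real and cVAE-WGAN-generated images
real = [("real", i) for i in range(300)]
synthetic = [("syn", i) for i in range(200)]
train = mix_datasets(real, synthetic, w=0.25)
```

With 300 real samples and W=0.25, this adds 100 synthetic samples, so synthetic data is exactly a quarter of the 400-sample training set.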

A Comparative Performance Analysis of Regular Expressions and an LLM-Based Approach to Extract the BI-RADS Score from Radiological Reports

Dennstaedt, F., Lerch, L., Schmerder, M., Cihoric, N., Cerghetti, G. M., Gaio, R., Bonel, H., Filchenko, I., Hastings, J., Dammann, F., Aebersold, D. M., von Tengg, H., Nairz, K.

medrxiv preprint · Jun 2 2025
Background: Different Natural Language Processing (NLP) techniques have demonstrated promising results for data extraction from radiological reports. Both traditional rule-based methods like regular expressions (Regex) and modern Large Language Models (LLMs) can extract structured information. However, a comparison between these approaches for extraction of specific radiological data elements has not been widely conducted.
Methods: We compared accuracy and processing time between Regex and LLM-based approaches for extracting BI-RADS scores from 7,764 radiology reports (mammography, ultrasound, MRI, and biopsy). We developed a rule-based algorithm using Regex patterns and implemented an LLM-based extraction using the Rombos-LLM-V2.6-Qwen-14b model. A ground truth dataset of 199 manually classified reports was used for evaluation.
Results: There was no statistically significant difference in accuracy in extracting BI-RADS scores between Regex and the LLM-based method (accuracy of 89.20% for Regex versus 87.69% for the LLM-based method; p=0.56). Compared to the LLM-based method, Regex processing was more efficient, completing the task 28,120 times faster (0.06 seconds vs. 1687.20 seconds). Further analysis revealed that the LLM favored common classifications (particularly a BI-RADS value of 2), while Regex more frequently returned "unclear" values. We could also confirm in our sample an already known laterality bias for breast cancer (BI-RADS 6) and detected a slight laterality skew for suspected breast cancer (BI-RADS 5) as well.
Conclusion: For structured, standardized data like BI-RADS, traditional NLP techniques seem to be superior, though future work should explore hybrid approaches combining Regex precision for standardized elements with LLM contextual understanding for more complex information extraction tasks.
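The study's actual Regex patterns are not given in the abstract; the following is an illustrative sketch of rule-based BI-RADS extraction with an "unclear" fallback, mirroring the behavior described above:

```python
import re

# Hypothetical pattern: matches "BI-RADS" or "BIRADS", optional colon or
# whitespace, then a single category digit 0-6. The study's real rule set
# is not published in the abstract.
BIRADS_RE = re.compile(r"BI-?RADS[\s:]*([0-6])", re.IGNORECASE)

def extract_birads(report):
    # Return the first BI-RADS category found, or "unclear" when no
    # pattern matches -- the fallback behavior noted in the Results.
    m = BIRADS_RE.search(report)
    return int(m.group(1)) if m else "unclear"

score = extract_birads("Impression: BI-RADS 4, biopsy recommended.")
fallback = extract_birads("No categorical assessment given.")
```

A production rule set would also need to handle subcategories (e.g. 4a/4b/4c) and per-breast laterality, which this sketch ignores.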

Machine Learning Methods Based on Chest CT for Predicting the Risk of COVID-19-Associated Pulmonary Aspergillosis.

Liu J, Zhang J, Wang H, Fang C, Wei L, Chen J, Li M, Wu S, Zeng Q

pubmed · Jun 1 2025
To develop and validate a machine learning model based on chest CT and clinical risk factors to predict secondary Aspergillus infection in hospitalized COVID-19 patients. This retrospective study included 291 COVID-19 patients with complete clinical data between December 2022 and March 2024, of whom 82 developed secondary Aspergillus infection after admission. Patients were divided into training (n=162), internal validation (n=69), and external validation (n=60) cohorts. Least absolute shrinkage and selection operator (LASSO) regression was applied to select the most significant image features extracted from chest CT. Univariate and multivariate logistic regression analyses were performed to develop a multifactorial model, which integrated chest CT with clinical risk factors, to predict secondary Aspergillus infection in hospitalized COVID-19 patients. The performance of the constructed models was assessed with the receiver operating characteristic curve and the area under the curve (AUC). The clinical application value of the models was comprehensively evaluated using decision curve analysis (DCA). Eleven radiomics features and seven clinical risk factors were selected to develop the prediction models. The multifactorial model demonstrated favorable predictive performance with the highest AUC values of 0.98 (95% CI, 0.96-1.00) in the training cohort, 0.98 (95% CI, 0.96-1.00) in the internal validation cohort, and 0.87 (95% CI, 0.75-0.99) in the external validation cohort, significantly superior to the models that relied solely on chest CT or clinical risk factors. Hosmer-Lemeshow tests of the calibration curves showed no significant differences in the training cohort (p=0.359) and internal validation cohort (p=0.941), suggesting good calibration of the multifactorial model. DCA indicated that the multifactorial model exhibited better clinical utility than the other models.
The multifactorial model can serve as a reliable tool for predicting the risk of COVID-19-associated pulmonary aspergillosis.
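As an illustration of how a multifactorial logistic model combines an image-derived score with clinical risk factors, here is a sketch with made-up coefficients; the fitted weights and feature definitions are not reported in the abstract:

```python
import math

def multifactorial_risk(radiomics_score, clinical_score, weights, intercept):
    # Logistic combination of a chest-CT radiomics score and a clinical
    # risk score. The weights and intercept here are illustrative
    # placeholders, not the study's fitted coefficients.
    z = intercept + weights[0] * radiomics_score + weights[1] * clinical_score
    return 1 / (1 + math.exp(-z))  # sigmoid maps z to a probability

risk = multifactorial_risk(1.2, 0.8, weights=(1.5, 1.0), intercept=-2.0)
```

Here z = -2.0 + 1.8 + 0.8 = 0.6, giving a predicted risk of about 0.65; the study's 3% risk threshold idea would then be applied to such probabilities.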

GDP-Net: Global Dependency-Enhanced Dual-Domain Parallel Network for Ring Artifact Removal.

Zhang Y, Liu G, Liu Y, Xie S, Gu J, Huang Z, Ji X, Lyu T, Xi Y, Zhu S, Yang J, Chen Y

pubmed · Jun 1 2025
In Computed Tomography (CT) imaging, ring artifacts caused by inconsistent detector response can significantly degrade the reconstructed images, with negative impacts on subsequent applications. The new generation of CT systems based on photon-counting detectors is affected by ring artifacts even more severely. The flexibility and variety of detector responses make it difficult to build a well-defined model to characterize the ring artifacts. In this context, this study proposes the global dependency-enhanced dual-domain parallel neural network for Ring Artifact Removal (RAR). First, based on the fact that the features of ring artifacts differ between Cartesian and polar coordinates, a parallel architecture is adopted to construct the deep neural network so that it can extract and exploit the latent features from different domains to improve the performance of ring artifact removal. Besides, ring artifacts are globally relevant in both Cartesian and polar coordinate systems, but convolutional neural networks show inherent shortcomings in modeling long-range dependency. To tackle this problem, this study introduces the novel Mamba mechanism to achieve a global receptive field without incurring high computational complexity. It enables effective capture of long-range dependency, thereby enhancing the model performance in image restoration and artifact reduction. The experiments on the simulated data validate the effectiveness of the dual-domain parallel neural network and the Mamba mechanism, and the results on two unseen real datasets demonstrate the promising performance of the proposed RAR algorithm in eliminating ring artifacts and recovering image details.
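The dual-domain idea rests on a simple geometric fact: ring artifacts are concentric circles around the rotation center in Cartesian coordinates, but become lines of constant radius after a polar transform. A minimal coordinate-mapping sketch (the network itself operates on resampled images, which this does not reproduce):

```python
import math

def to_polar(x, y, cx, cy):
    # Map a Cartesian pixel position to polar coordinates about the
    # rotation centre (cx, cy). Points on the same ring artifact share
    # one radius r, so in the polar domain the artifact is a straight
    # line -- a structure that is easier to model and remove.
    r = math.hypot(x - cx, y - cy)
    theta = math.atan2(y - cy, x - cx)
    return r, theta

# Two points on the same ring (radius 5) at different angles
r1, _ = to_polar(5, 0, 0, 0)
r2, _ = to_polar(0, 5, 0, 0)
```

Both points map to the same radius, which is why processing in parallel across both domains exposes complementary views of the artifact.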

Evolution of Cortical Lesions and Function-Specific Cognitive Decline in People With Multiple Sclerosis.

Krijnen EA, Jelgerhuis J, Van Dam M, Bouman PM, Barkhof F, Klawiter EC, Hulst HE, Strijbis EMM, Schoonheim MM

pubmed · Jun 1 2025
Cortical lesions in multiple sclerosis (MS) severely affect cognition, but their longitudinal evolution and impact on specific cognitive functions remain understudied. This study investigates the evolution of function-specific cognitive functioning over 10 years in people with MS and assesses the influence of cortical lesion load and formation on these trajectories. In this prospectively collected study, people with MS underwent 3T MRI (T1 and fluid-attenuated inversion recovery) at 3 study visits between 2008 and 2022. Cognitive functioning was evaluated based on neuropsychological assessment reflecting 7 cognitive functions: attention; executive functioning (EF); information processing speed (IPS); verbal fluency; and verbal, visuospatial, and working memory. Cortical lesions were manually identified on artificial intelligence-generated double-inversion recovery images. Linear mixed models were constructed to assess the temporal association between cortical lesion load and function-specific cognitive decline. In addition, analyses were stratified by MS disease stage: early and late relapsing-remitting MS (cutoff disease duration at 15 years) and progressive MS. The study included 223 people with MS (mean age, 47.8 ± 11.1 years; 153 women) and 62 healthy controls. All completed 5-year follow-up, and 37 healthy controls and 94 people with MS completed 10-year follow-up. At baseline, people with MS exhibited worse functioning of IPS and working memory. Over 10 years, cognitive decline was most severe in attention, verbal memory, and EF. At baseline, people with MS had a median cortical lesion count of 7 (range 0-73), which was related to subsequent decline in attention (B [95% CI] = -0.22 [-0.40 to -0.03]) and verbal fluency (B [95% CI] = -0.23 [-0.37 to -0.09]). Over time, cortical lesions increased by a median count of 4 (range -2 to 71), particularly in late and progressive disease, and this increase was related to decline in verbal fluency (B [95% CI] = -0.33 [-0.51 to -0.15]).
The associations between (change in) cortical lesion load and cognitive decline were not modified by MS disease stage. Cognition worsened over 10 years, particularly affecting attention, verbal memory, and EF, while preexisting impairments were worst in other functions such as IPS. Worse baseline cognitive functioning was related to baseline cortical lesions, whereas baseline cortical lesions and cortical lesion formation were related to cognitive decline in functions less affected at baseline. Accumulating cortical damage leads to spreading of cognitive impairments toward additional functions.

Information Geometric Approaches for Patient-Specific Test-Time Adaptation of Deep Learning Models for Semantic Segmentation.

Ravishankar H, Paluru N, Sudhakar P, Yalavarthy PK

pubmed · Jun 1 2025
The test-time adaptation (TTA) of deep-learning-based semantic segmentation models, specific to individual patient data, was addressed in this study. Existing TTA methods in medical imaging are often unconstrained and require anatomical prior information or additional neural networks built during the training phase, making them less practical and prone to performance deterioration. In this study, a novel framework based on information geometric principles was proposed to achieve generic, off-the-shelf, regularized patient-specific adaptation of models during test time. By considering the pre-trained model and the adapted models as part of statistical neuromanifolds, test-time adaptation was treated as constrained functional regularization using information geometric measures, leading to improved generalization and patient optimality. The efficacy of the proposed approach was shown on three challenging problems: 1) improving generalization of state-of-the-art models for segmenting COVID-19 anomalies in Computed Tomography (CT) images, 2) cross-institutional brain tumor segmentation from magnetic resonance (MR) images, and 3) segmentation of retinal layers in Optical Coherence Tomography (OCT) images. Further, it was demonstrated that robust patient-specific adaptation can be achieved without adding significant computational burden, making it the first of its kind based on information geometric principles.
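One common way to realize constrained functional regularization at test time is to penalize the divergence between the adapted and pre-trained predictive distributions; whether this study uses KL divergence specifically is not stated in the abstract, so the sketch below is a generic example of the pattern:

```python
import math

def kl_divergence(p, q):
    # KL(p || q) for discrete distributions; a standard information
    # geometric measure of how far the adapted predictions have drifted.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def tta_loss(unsupervised_term, p_adapted, p_pretrained, lam=0.1):
    # Generic TTA objective: an unsupervised loss (e.g. prediction
    # entropy) plus a penalty keeping the adapted model close to the
    # pre-trained one. The weight lam is a placeholder hyperparameter.
    return unsupervised_term + lam * kl_divergence(p_adapted, p_pretrained)

# Toy per-pixel class distributions before and after adaptation
loss = tta_loss(0.5, p_adapted=[0.7, 0.3], p_pretrained=[0.6, 0.4], lam=0.1)
```

The regularizer goes to zero when the adapted predictions match the pre-trained ones, which is what prevents the unconstrained drift the authors criticize.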

A new method for placental volume measurements using tracked 2D ultrasound and automatic image segmentation.

Sagberg K, Lie T, F Peterson H, Hillestad V, Eskild A, Bø LE

pubmed · Jun 1 2025
Placental volume measurements can potentially identify high-risk pregnancies. We aimed to develop and validate a new method for placental volume measurements using tracked 2D ultrasound and automatic image segmentation. We included 43 pregnancies at gestational week 27 and acquired placental images using a 2D ultrasound probe with position tracking, and trained a convolutional neural network (CNN) for automatic image segmentation. The automatically segmented 2D images were combined with tracking data to calculate placental volume. For 15 of the included pregnancies, placental volume was also estimated based on MRI examinations, 3D ultrasound and manually segmented 2D ultrasound images. The ultrasound methods were compared to MRI (gold standard). The CNN demonstrated good performance in automatic image segmentation (F1-score 0.84). The correlation with MRI-based placental volume was similar for tracked 2D ultrasound using automatically segmented images (absolute agreement intraclass correlation coefficient [ICC] 0.58, 95% CI 0.13-0.84) and manually segmented images (ICC 0.59, 95% CI 0.13-0.84). The 3D ultrasound method showed lower ICC (0.35, 95% CI -0.11 to 0.74) than the methods based on tracked 2D ultrasound. Tracked 2D ultrasound with automatic image segmentation is a promising new method for placental volume measurements and has potential for further improvement.
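The abstract does not detail the volume computation; a simplified reading is that the probe's position tracking provides inter-slice spacing, and the automatically segmented cross-sectional areas are integrated along the sweep. A toy sketch under that assumption:

```python
def volume_from_tracked_slices(areas_mm2, spacings_mm):
    # Discrete integration: each segmented placental cross-section
    # contributes its area times the tracked distance to the next slice.
    # The actual method reconstructs from full probe pose data; this
    # assumes parallel slices for illustration.
    return sum(a * d for a, d in zip(areas_mm2, spacings_mm))

# Hypothetical CNN-segmented areas (mm^2) and tracked spacings (mm)
vol_mm3 = volume_from_tracked_slices([1000.0, 1200.0, 900.0], [2.0, 2.0, 2.0])
```

Three slices of roughly 10 cm² each, 2 mm apart, give a volume of 6200 mm³ in this toy case; real sweeps would involve hundreds of tracked frames.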
