Page 62 of 6256241 results

Zhen Yang, Yansong Ma, Lei Chen

arxiv logopreprintOct 10 2025
Trustworthy medical image segmentation aims to deliver accurate and reliable results for clinical decision-making. Most existing methods adopt the evidential deep learning (EDL) paradigm due to its computational efficiency and theoretical robustness. However, EDL-based methods often neglect to leverage uncertainty maps, which are rich in attention cues, to refine ambiguous boundary segmentation. To address this, we propose a progressive evidence uncertainty guided attention (PEUA) mechanism that guides the model to focus on feature representation learning in hard regions. Unlike conventional approaches, PEUA progressively refines attention using uncertainty maps while employing low-rank learning to denoise attention weights, enhancing feature learning for challenging regions. Concurrently, standard EDL methods indiscriminately suppress evidence for incorrect classes via Kullback-Leibler (KL) regularization, impairing uncertainty assessment in ambiguous areas and consequently distorting the corresponding attention guidance. We therefore introduce a semantic-preserving evidence learning (SAEL) strategy, integrating a semantic-smooth evidence generator and a fidelity-enhancing regularization term to retain critical semantics. Finally, by embedding PEUA and SAEL into the state-of-the-art U-KAN, we propose Evidential U-KAN, a novel solution for trustworthy medical image segmentation. Extensive experiments on four datasets demonstrate superior accuracy and reliability over competing methods. The code is available at \href{https://anonymous.4open.science/r/Evidence-U-KAN-BBE8}{github}.
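The EDL paradigm this abstract builds on typically maps per-class evidence to a Dirichlet distribution, with the residual Dirichlet mass serving as the per-pixel uncertainty that an attention mechanism like PEUA could consume. A minimal subjective-logic sketch; function and variable names are illustrative, not the paper's code:

```python
# Subjective-logic EDL head: evidence -> Dirichlet parameters -> belief + vacuity.
def edl_belief_and_uncertainty(evidence):
    """evidence: non-negative per-class scores (e.g. from a softplus output)."""
    k = len(evidence)
    alpha = [e + 1.0 for e in evidence]    # Dirichlet parameters alpha_k = e_k + 1
    s = sum(alpha)                         # Dirichlet strength
    beliefs = [e / s for e in evidence]    # per-class belief mass
    uncertainty = k / s                    # vacuity: high where evidence is scarce
    return beliefs, uncertainty

# A confident pixel (strong evidence for class 0) vs an ambiguous boundary pixel:
b_conf, u_conf = edl_belief_and_uncertainty([18.0, 0.0])
b_ambi, u_ambi = edl_belief_and_uncertainty([1.0, 1.0])
```

By construction the belief masses and the vacuity sum to one, so low-evidence boundary pixels stand out directly in the uncertainty map.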

Xue B, Duan D, Feng J, Zhao Z, Tan J, Zhang J, Peng C, Li C, Li C

pubmed logopapersOct 10 2025
This study aimed to evaluate the effectiveness of intelligent quick magnetic resonance (IQMR) for accelerating brain MRI scanning and improving image quality in patients with acute ischemic stroke. In this prospective study, 58 patients with acute ischemic stroke underwent head MRI examinations between July 2023 and January 2024, including diffusion-weighted imaging and both conventional and accelerated T1-weighted, T2-weighted, and T2 fluid-attenuated inversion recovery fat-saturated (T2-FLAIR) sequences. Accelerated sequences were processed using IQMR, producing IQMR-T1WI, IQMR-T2WI, and IQMR-T2-FLAIR images. Image quality was assessed qualitatively by two readers using a five-point Likert scale (1 = non-diagnostic to 5 = excellent). Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of lesions and surrounding tissues were quantitatively measured. The Alberta Stroke Program Early CT Score (ASPECTS) was used to evaluate ischemia severity. Total scan time was reduced from 5 minutes 9 seconds to 2 minutes 40 seconds, a reduction of 48.22%. IQMR significantly improved SNR/CNR in accelerated sequences (P < .05), achieving parity with routine sequences (P > .05). Qualitative scores for lesion conspicuity and internal structure display improved post-IQMR (P < .05). ASPECTS showed no significant difference between IQMR and routine images (P = .79; ICC = 0.91-0.93). IQMR addressed MRI's slow scanning limitation without hardware modifications, enhancing diagnostic efficiency, consistent with broader advances in deep learning-based image enhancement. Limitations included the small sample size and the exclusion of functional sequences. IQMR could significantly reduce brain MRI scanning time and enhance image quality in patients with acute ischemic stroke.
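The SNR and CNR figures above are conventionally computed from region-of-interest (ROI) pixel samples; one common definition is sketched below (the study's exact ROI placement and noise estimate may differ):

```python
from statistics import mean, stdev

def snr(roi_signal, roi_noise):
    """Signal-to-noise ratio: mean ROI signal over noise standard deviation."""
    return mean(roi_signal) / stdev(roi_noise)

def cnr(roi_lesion, roi_background, roi_noise):
    """Contrast-to-noise ratio between lesion and surrounding tissue."""
    return abs(mean(roi_lesion) - mean(roi_background)) / stdev(roi_noise)

# Illustrative pixel samples: a bright lesion, darker surrounding tissue,
# and a background (air) ROI used to estimate noise.
lesion = [120, 118, 122, 121]
tissue = [80, 79, 81, 80]
noise = [2, -1, 1, -2, 0]
```

A reconstruction method like IQMR raises these ratios mainly by lowering the noise standard deviation in the denominator.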

Seiger, R., Fierlinger, P., the Alzheimer's Disease Neuroimaging Initiative (ADNI)

medrxiv logopreprintOct 10 2025
Convolutional neural networks (CNNs) have been the standard for computer vision tasks, including applications in Alzheimer's disease (AD). Recently, Vision Transformers (ViTs) have been introduced and have emerged as a strong alternative to CNNs. A common precursor stage of AD is a syndrome called mild cognitive impairment (MCI); however, not all individuals diagnosed with MCI progress to AD. In this investigation, we aimed to assess whether a ViT can reliably distinguish converters from non-converters. A transfer learning approach was used for model training: a pretrained ViT model was fine-tuned on the ADNI dataset. The cohort comprised 575 individuals (299 stable MCI; 276 progressive MCI who converted within 36 months), from whom axial T1-weighted MRI slices covering the hippocampal region were used as model input. Results showed an average area under the receiver operating characteristic curve (AUC-ROC) on the test set of 0.74 ± 0.02 (mean ± SD), an accuracy of 0.69 ± 0.03, a sensitivity of 0.65 ± 0.07, a specificity of 0.72 ± 0.06, and an F1-score for the progressive MCI class of 0.67 ± 0.04. These findings demonstrate that a ViT approach achieves reasonable classification accuracy for predicting conversion from MCI to AD by specifically focusing on the hippocampal region.
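The reported AUC-ROC has a direct probabilistic reading: it is the chance that a randomly chosen converter receives a higher model score than a randomly chosen non-converter (the Mann-Whitney statistic). A brute-force sketch with hypothetical scores:

```python
def auc_roc(pos_scores, neg_scores):
    """Fraction of (positive, negative) pairs ranked correctly; ties count half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Converters (positives) vs non-converters (negatives), illustrative scores:
auc = auc_roc([0.9, 0.7, 0.6], [0.8, 0.4, 0.3])
```

This pairwise form is O(n*m) but makes the ranking interpretation explicit; production code would use a sorted-rank formulation instead.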

Fasterholdt I, Schrøder JS, Hansen LH, Bowen JM, Gerdes A, Kidholm K, Haja TM, Calabrò F, Cecchi R, Stanimirovic A, Francis T, Rac VE, Rasmussen BSB

pubmed logopapersOct 10 2025
In 2022, a multidisciplinary group of experts and patients published a Model for ASsessing the value of AI (MAS-AI) in medical imaging. MAS-AI is a critical tool for decision-makers, enabling them to make informed choices on the prioritization of AI solutions. The objective of this study was to assess the face validity and transferability of MAS-AI by investigating workshop participants' perceptions in Denmark, Italy, and Canada regarding the importance of its content. A Delphi process was conducted, drawing on inputs from four workshops with decision-makers from hospitals and the healthcare sector, patient partners, and various researchers and experts. Participants were asked to rate the importance of each of the domains and subtopics in MAS-AI on a 0-3 Likert scale. A total of 95 participants from the three countries took part. The face validity of all MAS-AI domains was confirmed in Denmark, Canada, and Italy, with over 70 percent of respondents in the first round rating the domains as moderately or highly important. Overall, the five process factors were considered moderately or highly important by between 87 and 93 percent of respondents. All individual subtopics under each domain were rated above the 70 percent cut-off, except for five subtopics in Italy. The study confirmed the validity of the MAS-AI domains in Denmark, Canada, and Italy. Several improvements in study design and data collection were identified. In future work, analyzing which groups of participants rated which items as important could provide valuable insights.
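The 70-percent face-validity criterion described above reduces to a simple check over the 0-3 Likert ratings; a sketch with illustrative (non-study) data:

```python
def passes_cutoff(ratings, cutoff=0.70):
    """True if the share of ratings that are moderately (2) or highly (3)
    important exceeds the cutoff."""
    share = sum(1 for r in ratings if r >= 2) / len(ratings)
    return share > cutoff

# Hypothetical ratings for one MAS-AI domain: 8 of 10 respondents rate >= 2.
domain_ratings = [3, 3, 2, 2, 3, 1, 2, 3, 2, 0]
```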

Andersen, A., Huang, R., Liu, E.

medrxiv logopreprintOct 10 2025
Artificial intelligence (AI) is rapidly transforming healthcare practice, with growing evidence supporting its use in diagnosis, prognosis, treatment planning, and operational decision-making. The proliferation of systematic reviews in recent years underscores the need for an updated synthesis of the literature to inform research, policy, and practice. We searched PubMed, Web of Science, Scopus, IEEE Xplore, and CINAHL for systematic reviews and meta-analyses published between 2019 and November 2024. Eligible reviews focused on AI applications in healthcare practice, were peer-reviewed, and were written in English. A total of 181 reviews met the inclusion criteria. Publication volume increased steadily, peaking in 2024. AI research was concentrated in high-density domains, such as radiology, oncology, and critical care. Across reviews, diagnostic imaging, electronic health record (EHR) data, and biomarkers/laboratory results accounted for 70% of training data sources, though newer data types, such as wearable device and sensor data, emerged from 2022 onward. Diagnosis, prognosis, and treatment comprised over 80% of AI applications, with novel uses emerging in recent years. Ethical concerns were reported in 64.6% of reviews, with privacy, model accuracy, data and algorithmic bias, and explainability as recurrent themes. The proportion of reviews reporting ethical concerns increased from 2021 to 2024. AI applications in healthcare are expanding in scope, diversifying in data sources, and evolving toward novel clinical and operational uses. The human-centered AI or Human-AI-Human paradigm, integrating computational precision with clinical expertise, holds significant promise but will require parallel advances in governance, regulatory frameworks, and ethical oversight to ensure safe adoption.
Article highlights:
- This umbrella review summarizes 181 systematic reviews on AI applications across healthcare practice fields.
- 70% of the reviews reported AI models trained on diagnostic imaging, EHR, and biomarker data.
- There has been recent growth in AI using wearable, sensor, and novel multimodal health data.
- Diagnosis, prognosis, and treatment comprised over 80% of the applications in all of the reviews.
- Ethical concerns, including privacy, accuracy, bias, and explainability, were raised in over half of the reviews.

Pandey, D., Xu, L., Kun, E., Li, C., Wang, J. Y., Melek, A., DiCarlo, J. C., Castillo, E., Narula, J., Taylor, C. A., Narasimhan, V. M.

medrxiv logopreprintOct 10 2025
Coronary artery disease (CAD) is the leading cause of death worldwide, yet it is highly preventable. Early detection is critical, particularly because the first clinical manifestation of CAD is a heart attack in ~50% of individuals. Current clinical risk scores rely largely on traditional biomarkers and do not leverage recent advances in medical imaging and genetics. To address this, we developed preCog, a multimodal artificial intelligence framework that integrates multi-organ imaging, genetic risk, metabolic biomarkers, ECGs and demographic data to predict time to incident CAD over up to 10 years of follow-up in a cohort of 60,000 UK Biobank participants. To quantify imaging risk, we applied 2D and 3D foundational computer vision models to more than 500,000 images spanning cardiac, liver and pancreas MRI, as well as DXA scans, each capturing distinct facets of CAD pathophysiology. We extracted deep learning derived image embeddings and compressed them to compact representations that were highly correlated with conventional imaging metrics of cardiac function (e.g. ejection fraction and stroke volumes) yet outperformed these metrics for CAD prediction (AUC 0.794 vs 0.666). Because imaging cost is a key constraint, we evaluated modality-level contributions and found that only cardiac long-axis cine MRI and aortic distensibility MRI contributed substantial independent value. After adjusting for baseline traits, liver, pancreas and DXA features added no significant predictive power. We constructed a joint time-to-event model integrating imaging, a polygenic risk score (PRS) trained on more than 1.25 million individuals from non-imaged UK Biobank participants, the Million Veteran Program and FinnGen, blood biochemistry and clinical variables. The joint model achieved a C-index of 0.75, exceeding PREVENT (0.71) and Framingham Risk Score (0.66).
Importantly, we found that imaging and genetic risks were largely independent, indicating that individuals with similar imaging-based risk at a given age may progress differently based on underlying genetic risk. A hierarchical risk stratification framework combining clinical, genetic and imaging data identified a subgroup with a 15-fold increase in incident CAD risk relative to a low-risk baseline. Performance was consistent across data collection centers spanning rural and urban settings and diverse demographics (C-index range 0.73-0.76). Our findings demonstrate the utility of multimodal AI for medical forecasting of common complex diseases to preempt or mitigate their occurrence.
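The C-index used to compare the joint model against PREVENT and Framingham measures how often, across comparable patient pairs, the higher predicted risk belongs to the patient who develops CAD earlier. A minimal Harrell's C sketch for right-censored data, with hypothetical inputs:

```python
def c_index(times, events, risks):
    """Harrell's C: concordant comparable pairs / all comparable pairs.
    events[i] == 1 means CAD was observed at times[i]; 0 means censored."""
    concordant, comparable = 0.0, 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if i's event is observed before j's time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical follow-up times (years), event flags, and model risk scores:
c = c_index([2, 5, 8, 10], [1, 1, 0, 0], [0.9, 0.6, 0.4, 0.2])
```

A C-index of 0.75 thus means three out of four comparable pairs are ranked in the right order, versus 0.5 for a random score.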

Al Muttaki, M. R. R., Afrin, S., Anil, A. I. A., Shawon, M. M. H.

medrxiv logopreprintOct 10 2025
Breast cancer, among the leading causes of cancer-related deaths in women worldwide, underscores the need for effective and rapid diagnostic tools, especially for early diagnosis, to improve survival. Although advances in machine learning (ML) have broadened its medical imaging applications, limited dataset diversity and applicability, along with model interpretability and efficiency, remain challenges for clinical use. This paper assesses eight popular ML models: a Convolutional Neural Network (CNN), a Kolmogorov-Arnold Network (KAN), k-Nearest Neighbors, Support Vector Machine, XGBoost, Random Forest, Naive Bayes, and a Hybrid model, on the Mammogram Mastery dataset from Sulaymaniyah, Iraq, which consists of 745 original and 9,685 augmented mammogram images. The Hybrid model achieved the best accuracy (0.9667) and F1 score (0.9444), and the KAN model achieved the best ROC AUC (0.9760) and log loss (0.1421), indicating the strongest discriminative power and calibration, respectively. Random Forest, which produced the fewest false negatives (3) among the compared models, is the safest choice for clinical screening, as it balances sensitivity and computing efficiency. Two practical challenges remain: the slow inference time of the KAN model (0.323 seconds) and the high training cost (1,009.10 seconds) of the Hybrid model. These findings suggest that the Hybrid and KAN models are promising means of improving diagnostic accuracy, while Random Forest can serve as a practical tool for reducing missed diagnoses. Future research should address multi-institutional, multi-dataset validation, inference speed optimization, multi-class classification, and improved interpretability for clinically integrated settings.
By addressing these gaps, ML-based diagnostics have the potential to improve breast cancer detection rates, minimizing diagnostic errors and improving patient outcomes across diverse clinical contexts, which can facilitate the scaling of screening services worldwide.
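The log loss and F1 scores driving the model comparison above have standard binary-classification definitions, sketched here with illustrative inputs:

```python
from math import log

def log_loss(y_true, p_pred, eps=1e-15):
    """Mean negative log-likelihood of the true label; rewards calibration."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)          # clip to avoid log(0)
        total += -(y * log(p) + (1 - y) * log(1 - p))
    return total / len(y_true)

def f1_score(y_true, y_pred):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(y_true, y_pred) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(y_true, y_pred) if y == 1 and p == 0)
    return 2 * tp / (2 * tp + fp + fn)
```

Log loss penalizes confident wrong probabilities heavily, which is why a well-calibrated model like the KAN can lead on that metric without leading on accuracy.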

Sagar, A.

medrxiv logopreprintOct 10 2025
Accurate medical image classification is critical for early diagnosis and effective treatment planning. However, conventional deep learning models often fail to provide reliable uncertainty estimates, limiting their clinical applicability. In this study, we propose a novel Bayesian neural network architecture for medical image classification that integrates channel-wise and spatial attention mechanisms, including Squeeze-and-Excitation (SE) blocks and a novel Spiral Attention module, to enhance feature representation. The proposed model employs a Bayes-by-Backprop approach in the fully connected layers to quantify both epistemic and aleatoric uncertainties, allowing for reliable prediction confidence estimation. We validate our approach on multiple benchmark datasets, including diabetic retinopathy, COVID-19 chest X-rays, skin lesion images, and gastrointestinal endoscopy images. Extensive experiments demonstrate that our method not only achieves high classification performance but also provides meaningful uncertainty estimates, improving interpretability and robustness in clinical decision-making. Additionally, qualitative analysis using Grad-CAM visualizations highlights the model's ability to focus on clinically relevant regions, further supporting its potential for real-world deployment.
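One standard way to separate the epistemic and aleatoric uncertainties such a model quantifies is to decompose predictive entropy over several stochastic forward passes, as Bayes-by-Backprop weight sampling provides. A sketch with illustrative per-pass probabilities, not the paper's implementation:

```python
from math import log

def entropy(p):
    """Shannon entropy (nats) of a probability vector."""
    return -sum(q * log(q) for q in p if q > 0)

def uncertainty_decomposition(mc_probs):
    """mc_probs: one class-probability vector per stochastic forward pass."""
    k = len(mc_probs[0])
    mean_p = [sum(p[i] for p in mc_probs) / len(mc_probs) for i in range(k)]
    total = entropy(mean_p)                                   # predictive entropy
    aleatoric = sum(entropy(p) for p in mc_probs) / len(mc_probs)
    epistemic = total - aleatoric                             # mutual information
    return total, aleatoric, epistemic

# Passes that disagree sharply -> high epistemic (model) uncertainty:
t, a, e = uncertainty_decomposition([[0.95, 0.05], [0.05, 0.95]])
```

When all passes agree, the epistemic term vanishes and only aleatoric (data) uncertainty remains, which is the property that makes the split clinically useful.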

Karimi D, Calixto C, Snoussi H, Li B, Cortes-Albornoz MC, Velasco-Annis C, Rollins C, Pierotich L, Jaimes C, Gholipour A, Warfield SK

pubmed logopapersOct 9 2025
Diffusion-weighted MRI (dMRI) is increasingly used to study the normal and abnormal development of the fetal brain in utero. It offers invaluable insights into the neurodevelopmental processes of the fetal stage. However, reliable analysis of fetal dMRI data requires dedicated computational methods that are currently unavailable. The lack of automated methods for fast, accurate, and reproducible data analysis has seriously limited our ability to tap the potential of fetal brain dMRI for medical and scientific applications. In this work, we developed and validated a unified computational framework to (1) segment the brain tissue into white matter, cortical/subcortical gray matter, and cerebrospinal fluid, (2) segment 31 distinct white matter tracts, and (3) parcellate the brain's cortex, deep gray nuclei, and white matter structures into 96 anatomically meaningful regions. We utilized a set of manual, semi-automatic, and automatic approaches to annotate 97 fetal brains. Using these labels, we developed and validated a multi-task deep learning method to perform the three computations. Evaluations show that the new method can accurately carry out all three tasks, achieving a mean Dice similarity coefficient of 0.865 on tissue segmentation, 0.825 on white matter tract segmentation, and 0.819 on parcellation. Further validation on independent external data shows generalizability of the proposed method. The new method can help advance the field of fetal neuroimaging as it can lead to substantial improvements in fetal brain tractography, tract-specific analysis, and structural connectivity assessment.
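The Dice similarity coefficient used to evaluate all three tasks measures the overlap between a predicted and a reference mask as 2|A∩B| / (|A| + |B|). A minimal sketch over flat binary masks standing in for voxel labels:

```python
def dice(pred, ref):
    """Dice similarity coefficient for binary masks of equal length."""
    inter = sum(1 for p, r in zip(pred, ref) if p == 1 and r == 1)
    size = sum(pred) + sum(ref)
    return 2.0 * inter / size if size else 1.0  # two empty masks agree perfectly

# Predicted mask overlaps the reference on 2 of its 3 positive voxels:
d = dice([1, 1, 1, 0, 0], [1, 1, 0, 0, 0])
```

A Dice of 1.0 means perfect overlap; scores in the 0.8-0.9 range, as reported here, indicate substantial but imperfect boundary agreement.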

Guo R, Xia W, Xu F, Qian Y, Han Q, Geng D, Gao X, Wang Y

pubmed logopapersOct 9 2025
Separate renal function assessment is important in clinical decision-making. Single-photon emission computed tomography is commonly used for this assessment, although it is radioactive, tedious, and costly. This study aimed to automatically assess separate renal function using plain CT images and artificial intelligence methods, including deep learning-based automatic segmentation and radiomics modeling. We performed a retrospective study on 281 patients with nephrarctia or hydronephrosis from two centers (training set: 159 patients from Center I; test set: 122 patients from Center II). The renal parenchyma and hydronephrosis regions in plain CT images were automatically segmented using the deep learning-based UNETR (UNEt TRansformers) architecture. Radiomic features were extracted from the two regions and used to build a radiomic signature with ElasticNet, then further combined with clinical characteristics using multivariable logistic regression to obtain an integrated model. The automatic segmentation was evaluated using the Dice similarity coefficient (DSC). The mean DSC of automatic kidney segmentation based on UNETR was 0.894 in the training set and 0.881 in the test set. The average times for automatic and manual segmentation were 3.4 s/case and 1,477.9 s/case, respectively. The AUC of the radiomic signature was 0.778 in the training set and 0.801 in the test set. The AUC of the integrated model was 0.792 in the training set and 0.825 in the test set. It is feasible to assess the function of each kidney separately using plain CT and AI methods. Our method can minimize radiation risk, improve diagnostic efficiency, and reduce costs.
