Page 18 of 6046038 results

Brunyé TT, Mitroff SR, Elmore JG

PubMed · Oct 17, 2025
Artificial intelligence (AI) has the potential to transform medical informatics by supporting clinical decision-making, reducing diagnostic errors, and improving workflows and efficiency. However, successful integration of AI-based decision support systems depends on careful consideration of human-AI collaboration, trust, skill maintenance, and automation bias. This work proposes five central questions to guide future research in medical informatics and human-computer interaction (HCI). We focus on AI-based clinical decision support systems, including computer vision algorithms for medical imaging (radiology, pathology), natural language processing for structured and unstructured electronic health record (EHR) data, and rule-based systems. Relevant data modalities include clinician-acquired images, EHR text, and increasingly, patient-generated content in telehealth contexts. We review existing evidence regarding diagnostic errors across specialties, the effectiveness and risks of AI tools in reducing perceptual and interpretive errors, and the human factors influencing diagnostic decision-making in AI-enabled contexts. We synthesize insights from medicine, cognitive science, and HCI to identify gaps in knowledge and propose five key questions for continued research. Diagnostic errors remain common across medicine, with AI offering potential to reduce both perceptual and interpretive errors. However, the impact of AI depends critically on how and when information is presented. Studies indicate that delayed or toggleable cues may outperform immediate ones, but attentional capture, overreliance, and bias remain significant risks. Explainable AI provides transparency but can also bias decisions. Long-term reliance on AI may erode clinician skills, particularly for trainees and in low-prevalence contexts. Historical failures of computer-aided diagnosis in mammography highlight these challenges. Effective AI integration requires human-centered and adaptive design.
Five central research questions address: (1) what type and format of information AI should provide; (2) when information should be presented; (3) how explainable AI affects diagnostic decisions; (4) how AI influences automation bias and complacency; and (5) the risks of skill decay due to reliance on AI. Each question underscores the importance of balancing efficiency, accuracy, and clinician expertise while mitigating bias and skill degradation. AI holds promise for improving diagnostic accuracy and efficiency, but realizing its potential requires post-deployment evaluation, equitable access, clinician oversight, and targeted training. AI must complement, rather than replace, human expertise, ensuring safe, effective, and sustainable integration into diagnostic decision-making. Addressing these challenges proactively can maximize AI's potential across healthcare and other high-stakes domains.

Vuskov R, Hermans A, Pixberg M, Müller-Hübenthal J, Brauksiepe A, Corban E, Cubukcu M, Nowak J, Kargaliev A, von der Stück M, Siepmann R, Kuhl C, Truhn D, Nebelung S

PubMed · Oct 17, 2025
Developing a deep-learning model for automated multi-tissue, multi-condition knee MRI analysis and assessing its clinical potential. This retrospective dual-center study included 3121 MRI studies from 3018 adults, who underwent routine knee MRI examinations at a radiologic practice (2012-2019). Twenty-three conditions across cartilage, menisci, bone marrow, ligaments, and other soft tissues were manually labeled. A 3D slice transformer network was trained for binary classification and evaluated in terms of the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity using a five-fold cross-validation and an external test set of 448 MRI studies (429 adults) from a university hospital (2022-2023). To assess differences in diagnostic performance, two inexperienced and two experienced radiology residents read 50 external test studies with and without model assistance. Paired t-tests were used for statistical analysis. Averaged over cross-validation tests, the model's AUC was at least 0.85 for 8 conditions and at least 0.75 for 18 conditions. Generalization on the external test set was robust, with a mean absolute AUC difference of 0.05 ± 0.03 per condition. Model assistance improved accuracy and sensitivity for inexperienced residents, increased inter-reader agreement for both groups, and increased sensitivity and shortened reading times by 10% (p = 0.045) for experienced residents. Specificity decreased slightly when conditions with low model performance (AUC < 0.75) were included. Our deep-learning model performed well across diverse knee conditions and effectively assisted radiology residents. Future work should focus on more fine-grained predictions for subtle or rare conditions to enable comprehensive joint assessment in clinical practice.
Question: Increasing MRI utilization adds pressure on radiologists, necessitating comprehensive AI models for image analysis to manage this growing demand efficiently.
Findings: Our AI model enhanced diagnostic performance and efficiency of resident radiologists when reading knee MRI studies, demonstrating robust results across diverse conditions and two datasets.
Clinical relevance: Model assistance increases the sensitivity of radiologists, helping to identify pathologies that were overlooked without AI assistance. Reduced reading times suggest potential alleviation of radiologists' workload.
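The external-validation claim above (a mean absolute AUC difference of 0.05 ± 0.03 per condition) rests on per-condition AUCs compared across two evaluations. A minimal NumPy sketch of that computation, with made-up condition names and scores standing in for the study's data, might look like:

```python
import numpy as np

def auc(y_true, y_score):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs ranked correctly, ties counting half."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))

def mean_abs_auc_gap(internal, external):
    """Mean absolute per-condition AUC difference between two evaluations
    (e.g., internal cross-validation vs. an external test set)."""
    return float(np.mean([abs(internal[c] - external[c]) for c in internal]))
```

For example, `mean_abs_auc_gap({"acl_tear": 0.91}, {"acl_tear": 0.86})` would report a 0.05 gap for that (hypothetical) condition.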

Wu S, She H, Wang Z, Tong L, Wang Z, Du YP

PubMed · Oct 17, 2025
This study aims to develop real-time phase-contrast (PC) cardiovascular MRI with low latency. In this study, a framework using a golden-angle radial sequence and a deep-learning-based reconstruction network, named DLCNet, is proposed for real-time PC cardiovascular MRI. The DLCNet is designed to capture both spatial and temporal features by combining dictionary learning and a CNN. A dataset of 15 normal subjects was acquired and utilized to train and test the DLCNet. The reconstructed image quality and flow measurements at the ascending aorta were compared with different reconstruction algorithms. Real-time PC cardiovascular MRI was demonstrated with low latency using the proposed framework via the Gadgetron platform on a clinical scanner. The prospectively reconstructed results were compared with those obtained from electrocardiogram (ECG)-gated, breath-hold, segmented PC. The proposed reconstruction network outperformed other algorithms in both image reconstruction quality and flow quantification. The overall framework achieved an imaging speed of 14.6 frames per second, with an image display latency of less than 60 ms. The real-time flow quantification results showed good agreement with ECG-gated, breath-hold, segmented PC-MRI. The proposed framework was successfully demonstrated for real-time PC cardiovascular MRI with high-quality image reconstruction and low-latency image display.
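Flow quantification in PC-MRI conventionally maps each pixel's phase to a velocity via the velocity-encoding parameter (VENC) and integrates velocity over a vessel ROI. A schematic NumPy sketch of that step follows; the VENC and pixel-area values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

VENC = 150.0        # cm/s; assumed velocity-encoding limit
PIXEL_AREA = 0.02   # cm^2 per pixel; assumed in-plane resolution

def velocity_map(phase):
    """Map phase values in [-pi, pi] to velocities in cm/s."""
    return VENC * phase / np.pi

def flow_rate(phase, roi_mask):
    """Instantaneous flow (mL/s) through the vessel ROI for one frame:
    sum of per-pixel velocity times pixel area (cm/s * cm^2 = cm^3/s)."""
    v = velocity_map(phase)
    return float((v * roi_mask).sum() * PIXEL_AREA)
```

Summing `flow_rate` over the frames of one cardiac cycle (times the frame duration) would then give stroke volume, which is the kind of quantity compared against ECG-gated segmented PC above.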

Li W, Xi Y, Lu M, He J, Zhu J, Li H, Yang T, Zeng X, Liu X, Xu R, Huang H, Liu H, Zhang T, Min X, Wang R

PubMed · Oct 17, 2025
Accurate preoperative T and TNM staging of clear cell renal cell carcinoma (ccRCC) is crucial for diagnosis and treatment, but these assessments often depend on subjective radiologist judgment, leading to interobserver variability. This study aims to design and validate two CT-based deep learning models and evaluate their clinical utility for the preoperative T and TNM staging of ccRCC. Data from 1,148 ccRCC patients across five medical centers were retrospectively collected. Specifically, data from two centers were merged and randomly divided into a training set (80%) and a testing set (20%). Data from two additional centers comprised external validation set 1, and data from the remaining independent center comprised external validation set 2. Two 3D deep learning models based on a Transformer-ResNet (TR-Net) architecture were developed to predict T staging (T1, T2, T3 + T4) and TNM staging (I, II, III, IV) using corticomedullary phase CT images. Gradient-weighted Class Activation Mapping (Grad-CAM) was used to generate heatmaps for improved model interpretability, and a human-machine collaboration experiment was conducted to evaluate clinical utility. The models' performance was evaluated using micro-average AUC (micro-AUC), macro-average AUC (macro-AUC), and accuracy (ACC). Across the two external validation sets, the T staging model achieved micro-AUCs of 0.939 and 0.954, macro-AUCs of 0.857 and 0.894, and ACCs of 0.843 and 0.869, while the TNM staging model achieved micro-AUCs of 0.935 and 0.924, macro-AUCs of 0.817 and 0.888, and ACCs of 0.856 and 0.807. While the models demonstrated acceptable overall performance in preoperative ccRCC staging, performance was moderate for advanced subclasses (T3 + T4 AUC: 0.769 and 0.795; TNM III AUC: 0.669 and 0.801). Grad-CAM heatmaps highlighted key tumor regions, improving interpretability. The human-machine collaboration demonstrated improved diagnostic accuracy with model assistance.
The CT-based 3D TR-Net models showed acceptable overall performance with moderate results in advanced subclasses in preoperative ccRCC staging, with interpretable outputs and collaborative benefits, making them potentially useful decision-support tools.
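The micro-AUC and macro-AUC pairs reported above differ in how the multi-class problem is averaged: micro-averaging pools all one-vs-rest decisions before computing a single AUC, while macro-averaging computes one AUC per class and takes the unweighted mean. A small NumPy sketch of the distinction (using a rank-based AUC helper):

```python
import numpy as np

def binary_auc(y, s):
    """AUC for one binary problem via correctly ranked (pos, neg) pairs."""
    pos, neg = s[y == 1], s[y == 0]
    d = pos[:, None] - neg[None, :]
    return ((d > 0).sum() + 0.5 * (d == 0).sum()) / (len(pos) * len(neg))

def micro_macro_auc(labels, scores, n_classes):
    """labels: (N,) int class indices; scores: (N, n_classes) class scores."""
    Y = np.eye(n_classes)[labels]  # one-vs-rest binarization
    macro = float(np.mean([binary_auc(Y[:, k], scores[:, k])
                           for k in range(n_classes)]))
    micro = float(binary_auc(Y.ravel(), scores.ravel()))
    return micro, macro
```

Because micro-averaging is dominated by the common classes, a gap like micro 0.935 vs. macro 0.817 is consistent with the weaker performance on the rarer advanced subclasses noted above.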

Marie HS, Elbaz M, Soliman RS, Hafez ME, Elkhatib AA

PubMed · Oct 17, 2025
Pediatric oral diseases affect over 60% of children globally, yet current diagnostic approaches lack the precision and speed necessary for early intervention. This study developed a novel bio-inspired neutrosophic-enzyme intelligence framework integrating biological principles with uncertainty quantification for enhanced pediatric dental diagnostics. We validated the framework across 18,432 pediatric patients aged 3-17 years from six international centers using multi-modal data, including clinical examinations, radiographic imaging, genetic biomarkers, and behavioral assessments. The framework incorporates neutrosophic deep learning for uncertainty modeling, enzyme-inspired feature extraction mimicking salivary enzyme dynamics, axolotl-regenerative healing prediction, and genetic-immunological optimization. Comprehensive validation employed stratified cross-validation, leave-one-center-out testing, and 18-month longitudinal tracking with mixed-effects statistical analysis. The framework achieved 97.3% diagnostic accuracy (95% CI: 95.8-98.2%), 94.7% sensitivity for incipient caries detection, and 96.2% specificity, significantly outperforming conventional methods (80.2% accuracy, p < 0.001) and state-of-the-art deep learning (89.4% accuracy, p < 0.001). Clinical efficiency improved with a 37.5% diagnostic time reduction and a 58.1% patient throughput increase. Cross-population validation showed consistent performance (89.7-93.8% accuracy) across ethnic groups with no demographic bias (p > 0.05). Economic analysis demonstrated a 34.5% cost reduction with $12,450 per quality-adjusted life year and an 8.7-month return on investment. The framework provides explicit uncertainty quantification enabling risk-stratified clinical decisions while maintaining robust safety profiles with zero serious adverse events. All algorithmic implementations and supplementary statistical validation reports are publicly provided to ensure transparency and reproducibility.
This bio-inspired approach establishes new benchmarks for AI-assisted pediatric healthcare, demonstrating superior diagnostic performance, clinical efficiency, and global scalability for addressing pediatric oral health disparities.
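Neutrosophic logic, which the framework uses for uncertainty modeling, represents each assessment by three degrees: truth (T), indeterminacy (I), and falsity (F). It is the explicit indeterminacy term that enables the risk-stratified decisions described above. A toy sketch of one such routing policy; the threshold, labels, and triage rule are illustrative assumptions, not the study's actual policy:

```python
from dataclasses import dataclass

@dataclass
class NeutrosophicScore:
    t: float  # truth membership (evidence for the finding)
    i: float  # indeterminacy (model uncertainty)
    f: float  # falsity membership (evidence against)

def triage(score, i_threshold=0.3):
    """Risk-stratified routing: defer high-indeterminacy cases to a
    clinician; otherwise decide by comparing truth vs. falsity."""
    if score.i > i_threshold:
        return "refer"
    return "caries" if score.t > score.f else "healthy"
```

The point of the sketch is only the structure: unlike a single probability, the (T, I, F) triple lets "uncertain" be a first-class output rather than a score near 0.5.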

De Rosa S, Lassola S, Gualdi F, Battaglini D

PubMed · Oct 17, 2025
Noninfectious pulmonary complications are a significant cause of morbidity and mortality in immunocompromised patients, particularly in those undergoing hematopoietic stem cell transplantation, solid organ transplantation, chemotherapy, or immunotherapy. These syndromes often mimic infections, leading to delayed diagnosis and inappropriate treatment. Acute complications include peri-engraftment respiratory distress syndrome, diffuse alveolar hemorrhage, drug-induced lung injury, immune checkpoint inhibitor-related pneumonitis, and radiation pneumonitis, while late or chronic complications, such as organizing pneumonia, interstitial lung disease, bronchiolitis obliterans syndrome, and chronic graft-versus-host disease-related lung involvement, typically develop months to years after therapy. Accurate and timely diagnosis is essential, relying on high-resolution CT, bronchoalveolar lavage, and, in selected cases, lung biopsy to differentiate these conditions from infections. Current treatments remain largely empirical, focusing on corticosteroids, supportive intensive care, and immunosuppressive adjustment, although novel strategies, including inhaled hemostatic agents and JAK inhibitors, are emerging. Despite advances in supportive management, late-onset complications remain associated with poor long-term functional outcomes. Future directions include the development of biomarkers, artificial intelligence-assisted radiological tools, and multicenter registries to improve classification, risk stratification, and treatment. In this narrative review, we highlight current evidence on noninfectious pulmonary complications in the critical care setting and on their diagnosis and treatment.

Ma J, Wei S, Huo X, Gu Y, Shu N, Lin Y, Dai Z

PubMed · Oct 17, 2025
Transitioning from a euthymic state to severe depression is a continuous process. Early identification of depressive neural biomarkers in the healthy population can promote effective intervention and reduce the risk of developing depression. We employed a longitudinal design and adopted the relevance vector regression (RVR) approach with multimodal MRI data to predict depressive mood [i.e., the Beck Depression Inventory (BDI) score] and its longitudinal changes in young healthy adults (N = 121). We constructed and compared three models: one using functional connectivity features (FC model), one using structural connectivity features (SC model), and one using both FC and SC features (multi-modality model). Based on feature correlations with BDI, these models were further divided into positive and negative models. For prediction of the baseline BDI score, the FC model and the multi-modality model exhibited superior predictive performance (rho ≥ 0.39, p < 0.001, R² ≥ 0.14). Feature analysis revealed that FC features involving the parietal and prefrontal networks, and SC features involving the prefrontal and subcortical networks, contributed more to the baseline prediction. For prediction of longitudinal BDI changes, the multi-modality model (rho = 0.41, p < 0.001, R² = 0.09) showed the best performance among the three models. Its feature pattern is similar to that of the baseline prediction, also involving brain areas such as the parietal and subcortical networks. These findings reveal rich information about the neural basis underlying individual depressive mood from a multimodal perspective and can provide reliable biomarkers for capturing mental health changes and for future individualized assessment and intervention.
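The reported rho and R² summarize how well predicted BDI scores track the observed ones. Assuming rho denotes a Spearman rank correlation, both metrics can be computed as below; the double-argsort rank trick is a simplification that does not handle tied scores:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman correlation: Pearson correlation of the ranks.
    (Tie-free simplification; real BDI data would need average ranks.)"""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1 - ss_res / ss_tot)
```

Note that rho only measures monotonic agreement, so a model can have rho = 0.41 while R² is much smaller (as with the 0.09 above): the predictions rank subjects well but are scaled or shifted relative to the true scores.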

Fayyaz AM, Abdulkadir SJ, Talpur N, Al-Selwi SM, Hassan SU, Sumiea EH

PubMed · Oct 17, 2025
Explainable Artificial Intelligence (XAI) has become a crucial aspect of modern Machine Learning (ML) and Deep Learning (DL) applications, emphasizing transparency and trust in model predictions. Among various XAI techniques, Gradient-weighted Class Activation Mapping (Grad-CAM) stands out for its ability to visually interpret Convolutional Neural Networks (CNNs) by highlighting image regions that contribute significantly to decision-making. This Systematic Literature Review (SLR) provides a comprehensive analysis of Grad-CAM, its advancements in medical imaging, and applications in ML and DL. The review explores current research trends, variations of Grad-CAM, and its integration with different ML/DL architectures. A systematic search across Scopus, Web of Science, IEEE Xplore, and ScienceDirect identified 427 peer-reviewed publications (2020-2024), of which 51 were selected for in-depth examination. This study offers valuable insights into the evolution of Grad-CAM, its optimization techniques, and its role in improving model interpretability in medical imaging analysis and related fields.
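Grad-CAM's core step weights each convolutional feature map by the spatially averaged gradient of the target class score with respect to that map, sums the weighted maps, and applies a ReLU to keep only positive evidence. A minimal NumPy rendering of that computation for a single layer and class (the activations and gradients would come from framework hooks in practice):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap for one conv layer and one target class.
    activations, gradients: (K, H, W) arrays for K feature maps."""
    alpha = gradients.mean(axis=(1, 2))               # per-channel importance
    cam = np.einsum("k,khw->hw", alpha, activations)  # weighted sum of maps
    cam = np.maximum(cam, 0)                          # ReLU: positive evidence only
    if cam.max() > 0:
        cam /= cam.max()                              # normalize to [0, 1] for display
    return cam
```

The resulting (H, W) map is then upsampled to the input resolution and overlaid on the image, which is exactly the visual-interpretation output the reviewed papers build on.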

Ye D, Ou Z, Ye F, Wang S, Li T, Zhang S, Chen J, Huang Y, Su Z

PubMed · Oct 17, 2025
The accuracy of shear wave elastography for non-invasive assessment of renal fibrosis (RF) in chronic kidney disease (CKD) needs further improvement. We developed a tool using an ensemble deep learning model (EDLM) that can accurately assess RF in CKD patients based solely on two-dimensional shear wave elastography (2D-SWE) images. Retrospective data were collected from CKD patients between April 2019 and October 2024, along with renal 2D-SWE images obtained before biopsy. Pathological evaluation was the reference standard for RF. All patients were randomly divided into training, validation, and test sets in a 7:1:2 ratio. An EDLM integrating three convolutional neural networks (ResNet18, DenseNet121, and EfficientNet-b7) through a voting strategy at the output level was developed and validated using 2D-SWE images. The diagnostic performance of the EDLM was compared with that of radiologists. A total of 286 CKD patients (mean age ± standard deviation: 41.86 ± 14.94 years; 162 males) and 858 2D-SWE images (mild RF: 405, moderate-severe RF: 453) were included. In the test set, the EDLM achieved an accuracy of 93.0% (95% CI: 88.1, 95.9), negative predictive value of 89.6% (95% CI: 81.5, 94.5), positive predictive value of 96.4% (95% CI: 90.0, 98.8), specificity of 96.3% (95% CI: 89.7, 98.7), and sensitivity of 90.0% (95% CI: 82.1, 94.7). The area under the receiver operating characteristic curve of the EDLM was 0.989, surpassing the experienced radiologist by 0.186 (P < 0.001) and the less experienced radiologist by 0.279 (P < 0.001). The EDLM based on 2D-SWE images significantly improved the diagnostic performance for RF in CKD and is expected to be a potential tool for accurate non-invasive assessment of RF in CKD. The online version contains supplementary material available at 10.1186/s12880-025-01964-y.
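An output-level voting ensemble like the one described thresholds each member network's positive-class probability and takes the majority across members. A compact sketch for the binary case (the 0.5 threshold is an assumption; the paper does not specify its voting details):

```python
import numpy as np

def ensemble_vote(prob_list, threshold=0.5):
    """Majority vote over member models' positive-class probabilities.
    prob_list: list of (N,) arrays, one per model; returns (N,) 0/1 labels."""
    votes = np.stack([(p >= threshold).astype(int) for p in prob_list])
    return (votes.sum(axis=0) > len(prob_list) / 2).astype(int)
```

With three members (as with ResNet18, DenseNet121, and EfficientNet-b7 here), a sample is labeled positive when at least two models agree, which tends to suppress the uncorrelated errors of individual networks.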

Gerard Comas-Quiles, Carles Garcia-Cabrera, Julia Dietlmeier, Noel E. O'Connor, Ferran Marques

arXiv preprint · Oct 17, 2025
Unsupervised anomaly detection (UAD) presents a complementary alternative to supervised learning for brain tumor segmentation in magnetic resonance imaging (MRI), particularly when annotated datasets are limited, costly, or inconsistent. In this work, we propose a novel Multimodal Vision Transformer Autoencoder (MViT-AE) trained exclusively on healthy brain MRIs to detect and localize tumors via reconstruction-based error maps. This unsupervised paradigm enables segmentation without reliance on manual labels, addressing a key scalability bottleneck in neuroimaging workflows. Our method is evaluated in the BraTS-GoAT 2025 Lighthouse dataset, which includes various types of tumors such as gliomas, meningiomas, and pediatric brain tumors. To enhance performance, we introduce a multimodal early-late fusion strategy that leverages complementary information across multiple MRI sequences, and a post-processing pipeline that integrates the Segment Anything Model (SAM) to refine predicted tumor contours. Despite the known challenges of UAD, particularly in detecting small or non-enhancing lesions, our method achieves clinically meaningful tumor localization, with lesion-wise Dice Similarity Coefficient of 0.437 (Whole Tumor), 0.316 (Tumor Core), and 0.350 (Enhancing Tumor) on the test set, and an anomaly Detection Rate of 89.4% on the validation set. These findings highlight the potential of transformer-based unsupervised models to serve as scalable, label-efficient tools for neuro-oncological imaging.
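Reconstruction-based UAD of this kind scores each voxel by how poorly the healthy-trained autoencoder reproduces it, thresholds the error map into a binary mask, and evaluates the mask with the Dice similarity coefficient. A schematic NumPy version of that pipeline (the threshold would be tuned on validation data; this is not the paper's exact post-processing, which also involves SAM-based refinement):

```python
import numpy as np

def anomaly_map(image, reconstruction):
    """Per-voxel reconstruction error; large where the healthy-trained
    autoencoder fails to reproduce the input (candidate anomaly)."""
    return np.abs(image - reconstruction)

def segment(image, reconstruction, threshold):
    """Binary anomaly mask from thresholding the error map."""
    return anomaly_map(image, reconstruction) > threshold

def dice(pred, target):
    """Dice similarity coefficient between binary masks: 2|A∩B| / (|A|+|B|)."""
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0
```

The lesion-wise Dice scores quoted above (e.g., 0.437 for Whole Tumor) apply this metric per connected lesion rather than over the whole volume, which penalizes missed small lesions more heavily.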