Page 45 of 6156144 results

Lyell D, Dinh M, Gillett M, Abraham N, Symes ER, Susanto AP, Chakar BA, Seimon RV, Coiera E, Magrabi F

pubmed papers · Oct 13 2025
Artificial intelligence (AI) tools could assist emergency doctors in interpreting chest X-rays to inform urgent care. However, the impact of AI assistance on clinical decision-making, a precursor to enhanced care and patient outcomes, remains understudied. This study evaluates the effect of AI assistance on the clinical decisions of emergency doctors interpreting chest X-rays. Junior and senior residents, emergency registrars and consultants working in Australian emergency departments were eligible. Doctors completed 18 clinical vignettes involving chest X-ray interpretation, representative of typical patient presentations. Vignettes were randomly selected from a bank of 49 based on the emergency medicine curriculum and contained a chest X-ray, presenting complaint, relevant symptoms and observations. Each doctor was randomly assigned to have half of the 18 vignettes assisted by a commercial AI tool capable of detecting 124 different chest X-ray findings. Four vignettes contained X-rays known to produce incorrect AI findings. Primary outcomes were correct diagnosis and correct patient management. X-ray interpretation time, confidence in diagnosis, perceptions of the AI tool and the differential impact of AI assistance by seniority were also examined. 200 doctors participated. AI assistance increased correct diagnosis by 5.9% (95% CI 2.7% to 9.2%) compared with unassisted vignettes, with the largest increase among senior residents (11.8%; 95% CI 5.2% to 18.3%). Correct patient management increased by 3.2% (95% CI 0.1% to 6.4%). Confidence in diagnosis increased by 5% (95% CI 3.4% to 6.6%; p<0.001) and interpretation time increased by 4.9 s (p=0.08). Incorrect AI findings decreased correct diagnosis by 1% for false-positive (p=0.9) and 9% for false-negative findings (p=0.1). Participants found the AI tool helpful for interpreting chest X-rays and highlighting missed findings, but were neutral on its accuracy.
Improvements in diagnosis and patient management without meaningful increases in interpretation time suggest AI assistance could benefit clinical decisions involving chest X-ray interpretation. Further studies are required to ascertain if such improvements translate to improved patient care.
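The effect sizes in this study are differences in proportions with Wald-type 95% confidence intervals. A minimal sketch of how such an interval is computed, using hypothetical counts for illustration only (not the study's data):

```python
from math import sqrt

def diff_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% CI for the difference of two independent proportions.
    x1/n1: successes/trials in group 1; x2/n2: in group 2."""
    p1, p2 = x1 / n1, x2 / n2
    d = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, d - z * se, d + z * se

# Hypothetical counts: 130/200 correct with AI vs 118/200 without
d, lo, hi = diff_ci(130, 200, 118, 200)
```

An interval excluding zero, as for the 5.9% diagnosis effect here, indicates a difference unlikely to arise by chance alone.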

Li Y, Yi P, Jin M, Li Y, Chen W

pubmed papers · Oct 13 2025
The aim of this study was to build and validate a model based on structural magnetic resonance imaging (sMRI) to predict the progression of mild cognitive impairment (MCI) to Alzheimer's disease (AD). A total of 343 patients with MCI were selected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database as study subjects. Among them, 154 patients progressed to AD during the 48-month follow-up. All subjects were divided into a training set (n = 240) and a validation set (n = 103) in a 7:3 ratio according to enrollment time. The baseline T1-weighted (T1W) structural MR images of each patient were automatically segmented into whole-brain three-dimensional (3D) white and gray matter images, and radiomics signatures were extracted from each structural image. Baseline neuropsychological scores were combined with the radiomics signatures to construct a prediction model using machine learning on the training set data. The diagnostic accuracy and reliability of the model were evaluated using receiver operating characteristic (ROC) curve analysis in both the training and validation sets. Stepwise logistic regression analysis showed that the clinical dementia rating (CDR), Alzheimer's Disease Assessment Scale (ADAS-cog) and radiomics markers were independent predictors of progression from MCI to AD. ROC analysis showed that the model combining CDR, ADAS-cog and radiomics markers achieved AUC values of 0.895 and 0.882 in the training and validation sets, respectively. The sensitivity was 0.933 and 0.977, and the specificity was 0.669 and 0.661, respectively. The DeLong test showed that the diagnostic efficacy of the comprehensive model differed significantly from that of the independent predictors (P = 0.023). The integrated model, based on structural analysis of magnetic resonance images, can accurately identify and predict individuals with MCI at high risk of progressing to AD.
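The reported AUC is the area under the ROC curve, equivalent to the probability that a randomly chosen converter receives a higher model score than a randomly chosen non-converter. A minimal, library-free sketch of that rank-based computation, with toy scores rather than study data:

```python
def auc(scores_pos, scores_neg):
    """AUC as P(score_pos > score_neg); ties count half (Mann-Whitney form)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy example: scores for 3 converters vs 3 non-converters
estimate = auc([0.9, 0.8, 0.4], [0.3, 0.5, 0.2])
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is the scale on which the 0.895/0.882 values above should be read.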

Lin G, Chen W, Chen Y, Shi C, Cao J, Mao W, Zhao C, Zhou H, Hu Y, Xia S, Yang W, Xu M, Chen M, Ji J, Lu C

pubmed papers · Oct 13 2025
Non-invasive preoperative assessment of HER2 status is critical for identifying candidates for targeted therapy and personalizing treatment strategies in endometrial cancer (EC). This study aims to assess the preoperative value of multiparametric magnetic resonance imaging (MRI)-based radiomics in predicting HER2 status and prognosis in EC patients. We included 492 patients with EC, divided into training (n = 215), internal validation (n = 92), and external validation cohorts 1 (n = 64) and 2 (n = 121). Models were constructed using six machine learning algorithms based on radiomics features derived from multiparametric MRI, including T2-weighted, diffusion-weighted, and contrast-enhanced T1-weighted sequences. A fusion model integrating key clinical predictors with the radiomics score (Rad-score) was created. Its predictive performance was evaluated through receiver operating characteristic (ROC) analysis, and its prognostic significance was assessed through survival analysis. HER2 (+) status was associated with poor differentiation and myometrial invasion in patients with EC. A support vector machine (SVM)-based model built from multiparametric MRI-based radiomics features demonstrated excellent performance in predicting HER2 status, with a mean area under the ROC curve (AUC) of 0.814 in the validation cohorts. A fusion model combining the SVM-based Rad-score with clinical factors significantly improved prediction accuracy, achieving AUCs of 0.914 in the training cohort and 0.809-0.865 in the validation cohorts. Kaplan-Meier analysis revealed that patients with EC with predicted HER2 (+) status had worse progression-free survival than those with predicted HER2 (-) status. The fusion model based on multiparametric MRI-based radiomics features can potentially aid in accurate preoperative prediction of HER2 status and prognosis in patients with EC, providing essential insights for clinical decision-making.
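The survival comparison above relies on Kaplan-Meier estimates of progression-free survival. A bare-bones product-limit estimator, sketched here with made-up follow-up times for illustration (real analyses would use a survival library such as lifelines):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimator.
    times: follow-up times; events: 1 = progression observed, 0 = censored.
    Returns (time, survival probability) pairs at each event time."""
    s, curve = 1.0, []
    for t in sorted(set(times)):
        n_at_risk = sum(1 for tt in times if tt >= t)           # still under follow-up
        d_events = sum(e for tt, e in zip(times, events) if tt == t)
        if d_events:
            s *= 1 - d_events / n_at_risk
            curve.append((t, s))
    return curve

# Toy cohort: progression at t=1 and t=3, censoring at t=2 and t=4
curve = kaplan_meier([1, 2, 3, 4], [1, 0, 1, 0])
```

Curves computed this way for the predicted HER2 (+) and HER2 (-) groups are what a log-rank test would then compare.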

Ragab H, Aydemir DG, Cicek H, Kahl H, Möhring L, Striegler M, de Jong LT, Karaagac H, Adam G, Avanesov M

pubmed papers · Oct 13 2025
The increasing availability of large image data sets and technical advances in information technology have greatly advanced the use of artificial intelligence (AI) in radiology in recent years. Especially in abdominal MRI diagnostics, there are numerous opportunities to use AI applications for efficient, objective, and standardized image acquisition and diagnosis. This review summarizes the current state of research and clinical application of AI in abdominal MRI diagnostics, based on a literature search via PubMed. The focus is on interpretive areas of application such as automatic segmentation of abdominal organs, classification of pathologies, and quantitative analysis of a wide range of abdominal diseases. In addition, the technical requirements, challenges and limitations, as well as ethical aspects are systematically examined. AI-based systems show promising preclinical results, for example in image reconstruction, segmentation, detection and characterization of lesions, and in classification tasks such as identifying PSC-typical bile duct changes on MRCP. Interestingly, however, compared with other organ-specific applications in radiology, there are only a few clinically usable tools in abdominal imaging. In addition, there are still major challenges due to the often very heterogeneous data quality, the limited availability of carefully annotated image data, and legal and ethical safeguards. The issues of cost structure and profitability, as well as the reimbursement of AI-based applications, also play a significant role and need to be clarified. Despite the great potential and promising preclinical work, the integration of AI systems in abdominal MRI is not yet established in everyday clinical practice.
Successful clinical implementation requires standardized workflows, transparent model architectures, legally compliant framework conditions, clear reimbursement guidelines, and the active involvement of radiological expertise. In the future, multimodal, predictive systems that integrate supplementary clinical data, together with the ethically reflected design of AI-supported decision-making processes, will become increasingly important.
· Compared to other application areas within radiology, there are still very few dedicated and validated AI applications for abdominal MRI, mainly due to the comparatively complex data structure and the high inter-individual variability of the abdomen.
· For successful integration into clinical practice, multi-center training data sets, such as those found in large cohort studies, as well as transparent data protection and competitive reimbursement, are essential.
· Ragab H, Aydemir DG, Cicek H et al. Artificial Intelligence in Abdominal MRI Diagnostics: Current Applications, Challenges, and Future Perspectives. Rofo 2025; DOI 10.1055/a-2704-7577.

Leili Barekatain, Ben Glocker

arxiv preprint · Oct 13 2025
Understanding model decisions is crucial in medical imaging, where interpretability directly impacts clinical trust and adoption. Vision Transformers (ViTs) have demonstrated state-of-the-art performance in diagnostic imaging; however, their complex attention mechanisms pose challenges to explainability. This study evaluates the explainability of different Vision Transformer architectures and pre-training strategies - ViT, DeiT, DINO, and Swin Transformer - using Gradient Attention Rollout and Grad-CAM. We conduct both quantitative and qualitative analyses on two medical imaging tasks: peripheral blood cell classification and breast ultrasound image classification. Our findings indicate that DINO combined with Grad-CAM offers the most faithful and localized explanations across datasets. Grad-CAM consistently produces class-discriminative and spatially precise heatmaps, while Gradient Attention Rollout yields more scattered activations. Even in misclassification cases, DINO with Grad-CAM highlights clinically relevant morphological features that appear to have misled the model. By improving model transparency, this research supports the reliable and explainable integration of ViTs into critical medical diagnostic workflows.
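Attention rollout, one of the attribution methods compared here, propagates attention through the transformer by multiplying layer-wise attention maps after adding the residual connection and renormalizing. A minimal numpy sketch of that recursion, assuming head-averaged attention matrices (a generic illustration, not the paper's exact implementation):

```python
import numpy as np

def attention_rollout(attentions):
    """attentions: list of (tokens, tokens) head-averaged attention maps,
    one per layer, ordered from first to last layer."""
    n = attentions[0].shape[0]
    rollout = np.eye(n)
    for a in attentions:
        a = a + np.eye(n)                        # account for residual connection
        a = a / a.sum(axis=-1, keepdims=True)    # renormalize rows
        rollout = a @ rollout                    # compose with earlier layers
    return rollout
```

Each row of the result is a distribution over input tokens, so the CLS-token row can be reshaped into the spatial heatmap that the study contrasts with Grad-CAM.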

Sicheng Zhou, Lei Wu, Cao Xiao, Parminder Bhatia, Taha Kass-Hout

arxiv preprint · Oct 13 2025
Self-supervised learning (SSL) has transformed vision encoder training in general domains but remains underutilized in medical imaging due to limited data and domain-specific biases. We present MammoDINO, a novel SSL framework for mammography, pretrained on 1.4 million mammographic images. To capture clinically meaningful features, we introduce a breast-tissue-aware data augmentation sampler for both image-level and patch-level supervision, and a cross-slice contrastive learning objective that incorporates 3D digital breast tomosynthesis (DBT) structure into 2D pretraining. MammoDINO achieves state-of-the-art performance on multiple breast cancer screening tasks and generalizes well across five benchmark datasets. It offers a scalable, annotation-free foundation for multipurpose computer-aided diagnosis (CAD) tools for mammography, helping reduce radiologists' workload and improve diagnostic efficiency in breast cancer screening.
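Cross-slice contrastive objectives of this kind are typically instances of the InfoNCE loss, which pulls paired embeddings (e.g. of nearby DBT slices) together while pushing them away from in-batch negatives. A hedged numpy sketch of the generic loss, not the MammoDINO implementation:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE over paired embeddings: row i of z1 is positive for row i of z2;
    all other rows serve as in-batch negatives. tau is the temperature."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)   # cosine similarity space
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.mean(np.log(np.diag(p)))                   # cross-entropy on matches
```

The loss is small when each embedding is closest to its own pair and grows when pairs are confused with negatives, which is the behavior a cross-slice objective exploits to encode 3D structure.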

Porpiglia F, Checcucci E, Volpi G, Stura I, Cillis S, Ortenzi M, Cisero E, Garzena V, Gatti C, Liguori S, Sica M, Alessio P, Garino D, Tonelli L, Marchiò C, Piramide F, Piana A, Bollito E, Piazzolla P, De Luca S, Migliaretti G, Manfredi M, Fiori C, Amparore D

pubmed papers · Oct 13 2025
Three-dimensional (3D) augmented reality (AR) and artificial intelligence (AI) technologies have recently been introduced to enhance guidance during robot-assisted radical prostatectomy (RARP). By overlaying virtual and real-time images, this approach helps accurately localize hidden lesions during surgery, enabling the execution of tailored procedures. This study aimed to evaluate whether 3D-AI-AR guidance reduces positive surgical margins (PSMs) compared with standard two-dimensional (2D) magnetic resonance imaging (MRI)-based interventions. In this prospective, multicenter randomized controlled trial (NCT06318559), 133 patients with extracapsular extension or bulging at preoperative MRI were enrolled and randomized (2:1) to either 2D MRI-guided (n = 84) or 3D-AI-AR-guided RARP (n = 49). All the patients underwent nerve-sparing RARP. Intraoperative selective biopsies were then performed at the level of the preserved neurovascular bundle (NVB): cognitive in the MRI group and AR guided in the 3D group. The primary outcome was the PSM rate. Prostate-specific antigen (PSA) levels, continence, and potency recovery were assessed over 12 mo of follow-up. The use of postoperative radiotherapy was recorded. Biochemical recurrence (BCR) was defined as PSA >0.4 ng/ml. All the analyses were conducted with SAS Statistics Software v.9.4. Baseline and intraoperative characteristics were similar between the groups. While PSMs on the prostate surface were comparable (p = 0.8), 3D-guided excisional biopsies had a significantly higher positivity rate (52% vs 13%; p = 0.001), allowing improved margin control. The 3D group had a lower overall PSM rate (22% vs 39%; p = 0.047), required less postoperative RT (18% vs 35%; p = 0.046), and showed higher continence at 12 mo (91% vs 71%; p = 0.03). Potency and BCR rates were similar.
The execution of a 3D-AI-AR-guided biopsy at the level of preserved NVBs during nerve-sparing RARP allows correct identification of the tumor with subsequent improvement of margin control. Longer follow-up is required to assess the functional and long-term oncological outcomes of this approach.

Dell'Agnello F, Capellini K, Gasparotti E, Scarpolini MA, Buongiorno R, Monteleone A, Cademartiri F, Celi S

pubmed papers · Oct 13 2025
Numerical simulations play a key role in evaluating the hemodynamics of the thoracic aorta (TA). Common computational fluid dynamics (CFD) methods apply the rigid-wall hypothesis, thus disregarding vessel deformation during the cardiac cycle; fluid-structure interaction (FSI) approaches, while accounting for vessel compliance, demand extensive computational resources and rely on assumptions about wall mechanical properties. This study aims to develop a digital twin model of the aorta by implementing an AI-based framework for patient-specific moving boundaries, to be applied in CFD simulations (CFD<sub>MB</sub>) of the entire aorta. Starting from multi-phase ECG-gated CT scans, we built models of the TA and left ventricle (LV) at different phases of the cardiac cycle. An in-house non-rigid registration coupled with radial basis function interpolation was used to obtain iso-topological, mapped surface meshes at each phase. From the analysis of LV volume changes during the cardiac cycle, a patient-specific inlet condition was also applied. Results from CFD<sub>MB</sub> simulations were compared with those obtained from CFD. The CFD<sub>MB</sub> approach accurately captured TA morphological changes during the cardiac cycle without compromising mesh quality. Differences in the main hemodynamic results were found between the two simulation strategies. The CFD<sub>MB</sub> approach also modeled the flow waveform shift that occurs along the TA lumen, enabling pulse wave velocity estimation. The implemented pipeline represents a promising method for patient-specific hemodynamic studies, overcoming the limitations of both conventional CFD and FSI simulations.
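The moving-boundary surfaces rely on radial basis function interpolation, whose core idea is to solve a small linear system for kernel weights at known control points and then evaluate the resulting smooth field anywhere. A generic Gaussian-kernel sketch (an illustration of the technique, not the authors' in-house code):

```python
import numpy as np

def rbf_interpolate(centers, values, query, eps=1.0):
    """Gaussian RBF interpolation: fit kernel weights on the control points
    (centers, values), then evaluate the interpolant at the query points."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        return np.exp(-eps * d2)
    w = np.linalg.solve(kernel(centers, centers), values)    # interpolation weights
    return kernel(query, centers) @ w

# 1D toy example: three control points, exact reproduction at the centers
centers = np.array([[0.0], [1.0], [2.0]])
vals = np.array([1.0, 2.0, 3.0])
smooth = rbf_interpolate(centers, vals, np.array([[0.5], [1.5]]))
```

In a mesh-morphing setting the "values" would be per-vertex displacements between cardiac phases, interpolated over the whole surface to keep the mesh iso-topological.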

Wang C, Li Y, Ji Y, Yu K, Qin C, Liu L, Shuai Y, Chen J, Li A, Zhang T

pubmed papers · Oct 13 2025
Determining predictive biomarkers for immunotherapy response in non-small cell lung cancer (NSCLC) patients is a complex task. This research aimed to develop a multimodal model (CRDL) integrating clinical data, deep learning (DL), and radiomics (Rad) to predict immune responses in NSCLC patients receiving checkpoint blockade therapies. This study also evaluated whether CRDL outperforms unimodal, pre-fusion models (Pre-FMs) and post-fusion models (Post-FMs). This is a retrospective study that utilized data from 228 lung cancer patients at the Memorial Sloan Kettering Cancer Center, with varying programmed death-ligand 1 (PD-L1) expression levels among the patients. The 228 NSCLC patients were randomly divided into two groups in a 7:3 ratio: the training cohort (n=159) and the validation cohort (n=69). Radiomics features were extracted using the "PyRadiomics" package, DL features were obtained through a deep convolutional neural network from chest computed tomography images, and clinical data were also collected. Feature reduction was performed using t-tests and least absolute shrinkage and selection operator (LASSO) regression. Unimodal models and Pre-FMs were constructed using random forests, while the post-fusion model was developed using a support vector machine approach. Model performance was measured by the area under the receiver operating characteristic curve (AUC). 512 DL features and 382 Rad features were extracted. The CRDL model demonstrated superior performance with AUC values of 0.884 in the validation dataset and 0.976 in the training dataset, surpassing the best DL model in both unimodal and pre-fusion settings, which had training and validation AUCs of 0.854 and 0.749. The CRDL model accurately forecasts immunotherapy responses in NSCLC patients, offering a dependable non-invasive test.
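Feature reduction by t-tests, the first stage of the pipeline described here, is a univariate filter that keeps features whose group means differ by more than their sampling noise. A minimal Welch-statistic sketch; the threshold and arrays are illustrative, not the paper's settings:

```python
import numpy as np

def t_filter(X_pos, X_neg, t_min=2.0):
    """Keep indices of features whose absolute Welch t-statistic exceeds t_min.
    X_pos, X_neg: (samples, features) arrays for responders / non-responders."""
    m1, m2 = X_pos.mean(0), X_neg.mean(0)
    v1 = X_pos.var(0, ddof=1) / len(X_pos)   # variance of the mean, group 1
    v2 = X_neg.var(0, ddof=1) / len(X_neg)   # variance of the mean, group 2
    t = np.abs(m1 - m2) / np.sqrt(v1 + v2)
    return np.where(t > t_min)[0]
```

Surviving features would then go into a second, multivariate stage such as the LASSO regression the study applies next.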

Korbecki A, Gewald M, Winiarczyk K, Zagórski K, Zdanowicz-Ratajczyk A, Litwinowicz K, Sobański M, Korbecka J, Kacała A, Machaj W, Zimny A

pubmed papers · Oct 13 2025
Tractography using diffusion tensor imaging (DTI) and constrained spherical deconvolution (CSD) provides valuable insights into the structure of white matter pathways. However, different methodologies may produce divergent fractional anisotropy (FA) values due to fundamental differences in their underlying approaches. This study compares FA measurements obtained using a manual DTI deterministic tractography method and an automatic AI-based approach via the TractSeg framework. Thirty healthy adults underwent brain MRI, and nine major white matter tracts were reconstructed using DTI-based vendor software and CSD with the AI-driven TractSeg. FA measurements were analyzed using inter-rater reliability and agreement metrics, including intraclass correlation coefficients (ICCs). Results revealed substantial differences in FA between the two methods, with ICC values ranging from poor to moderate for most fibers. Normalization using FA values of the corpus callosum (CC) and comparison of relative values further highlighted substantial discrepancies across all fibers (p < 0.001). Manual DTI-based methods yielded higher FA values across most tracts, with the largest discrepancies observed in the CC and inferior fronto-occipital fasciculus. Conversely, AI-based TractSeg showed higher FA values for the uncinate fasciculus, demonstrating advantages for smaller, complex fibers. Additionally, tract volume analysis showed that AI-based methods consistently produced larger tract volumes; however, volume differences did not align with FA ICC patterns. This indicates that volumetric discrepancies alone do not explain FA variability between methods. Despite high inter-rater reliability for manual measurements, significant inter-method differences indicate that FA values from the two methods are not interchangeable. Standardization is needed for reliable cross-study comparisons.
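FA, the quantity compared across methods, is computed from the diffusion tensor's three eigenvalues as a normalized dispersion measure, ranging from 0 (isotropic diffusion) to 1 (diffusion along a single axis). The standard formula in a short numpy sketch:

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three diffusion-tensor eigenvalues:
    FA = sqrt(3/2 * sum((l_i - mean)^2) / sum(l_i^2))."""
    l = np.asarray(evals, dtype=float)
    md = l.mean()                     # mean diffusivity
    return np.sqrt(1.5 * ((l - md) ** 2).sum() / (l ** 2).sum())
```

Because FA is a per-voxel scalar, method differences in tract delineation (which voxels each pipeline includes) translate directly into the tract-averaged FA discrepancies reported above.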
