Page 330 of 6636627 results

Novak A, Ather S, Morgado ATE, Maskell G, Cowell GW, Black D, Shah A, Bowness JS, Shadmaan A, Bloomfield C, Oke JL, Johnson H, Beggs M, Gleeson F, Aylward P, Hafeez A, Elramlawy M, Lam K, Griffiths B, Harford M, Aaron L, Seeley C, Luney M, Kirkland J, Wing L, Qamhawi Z, Mandal I, Millard T, Chimbani M, Sharazi A, Bryant E, Haithwaite W, Medonica A

pubmed logopapers · Jul 28 2025
Incorrectly placed endotracheal tubes (ETTs) can lead to serious clinical harm. Studies have demonstrated the potential for artificial intelligence (AI)-led algorithms to detect ETT placement on chest X-ray (CXR) images; however, their effect on clinician accuracy remains unexplored. This study measured the impact of an AI-assisted ETT detection algorithm on the ability of clinical staff to correctly identify ETT misplacement on CXR images. Four hundred CXRs of intubated adult patients were retrospectively sourced from the John Radcliffe Hospital (Oxford) and two other UK NHS hospitals. Images were de-identified and selected from a range of clinical settings, including the intensive care unit (ICU) and emergency department (ED). Each image was independently reported by a panel of thoracic radiologists, whose consensus classification of ETT placement (correct, too low [distal], or too high [proximal]) served as the reference standard for the study. Correct ETT position was defined as the tip located 3-7 cm above the carina, in line with established guidelines. Eighteen clinical readers of varying seniority from six clinical specialties were recruited across four NHS hospitals. Readers viewed the dataset using an online platform and recorded a blinded classification of ETT position for each image. After a four-week washout period, this was repeated with assistance from an AI-assisted image interpretation tool. Reader accuracy, reported confidence, and timings were measured during each study phase. 14,400 image interpretations were undertaken. Pooled accuracy for tube placement classification improved from 73.6% to 77.4% (p = 0.002). Accuracy for identification of critically misplaced tubes increased from 79.3% to 89.0% (p = 0.001). Reader confidence improved with AI assistance, with no change in mean interpretation time at 36 s per image.
Use of assistive AI technology improved accuracy and confidence in interpreting ETT placement on CXR, especially for identification of critically misplaced tubes. AI assistance may provide a useful adjunct to support clinicians in identifying misplaced ETTs on CXR.
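The reported pooled-accuracy gain can be sanity-checked with a simple two-proportion z-test, assuming the 14,400 interpretations split evenly into 7,200 reads per phase (an assumption; the abstract does not give per-phase counts). Note this naive test treats all reads as independent and ignores reader and image clustering, so it will not reproduce the paper's p-value; it only illustrates the calculation.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z-statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical split: 7,200 reads per phase (half of the 14,400 total).
z = two_proportion_z(0.736, 7200, 0.774, 7200)
```

Under these assumptions the unaided-vs-assisted difference (73.6% vs 77.4%) is several standard errors wide; a clustered analysis, as the study presumably used, would be more conservative.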

Bai D, Shi G, Liang Y, Li F, Zheng Z, Wang Z

pubmed logopapers · Jul 28 2025
This study aimed to develop an interpretable machine learning model integrating delayed-phase contrast-enhanced CT radiomics with clinical features for noninvasive prediction of pathological grading in appendiceal pseudomyxoma peritonei (PMP), using Shapley Additive Explanations (SHAP) for model interpretation. This retrospective study analyzed 158 pathologically confirmed PMP cases (85 low-grade, 73 high-grade) from January 4, 2015 to April 30, 2024. Comprehensive clinical data including demographic characteristics, serum tumor markers (CEA, CA19-9, CA125, D-dimer, CA-724, CA-242), and CT-peritoneal cancer index (CT-PCI) were collected. Radiomics features were extracted from preoperative contrast-enhanced CT scans using standardized protocols. After rigorous feature selection and five-fold cross-validation, we developed three predictive models: clinical-only, radiomics-only, and a combined clinical-radiomics model using logistic regression. Model performance was evaluated through ROC analysis (AUC), Delong test, decision curve analysis (DCA), and Brier score, with SHAP values providing interpretability. The combined model demonstrated superior performance, achieving AUCs of 0.91 (95%CI:0.86-0.95) and 0.88 (95%CI:0.82-0.93) in training and testing sets, respectively, significantly outperforming standalone models (P < 0.05). DCA confirmed greater clinical utility across most threshold probabilities, with favorable Brier scores (training:0.124; testing:0.142) indicating excellent calibration. SHAP analysis identified the top predictive features: wavelet-LHH_glcm_InverseVariance (radiomics), original_shape_Elongation (radiomics), and CA19-9 (clinical). Our SHAP-interpretable combined model provides an accurate, noninvasive tool for PMP grading, facilitating personalized treatment decisions. The integration of radiomics and clinical data demonstrates superior predictive performance compared to conventional approaches, with potential to improve patient outcomes.
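For a linear model such as the logistic-regression combiner described above, SHAP values have a closed form in log-odds space: each feature contributes φ_j = w_j(x_j − E[x_j]), and the contributions plus the base value sum exactly to the model output. A minimal numpy sketch with hypothetical features and coefficients (the study's actual features and weights are not published in the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standardized features: 2 radiomics + 1 clinical marker.
X = rng.normal(size=(5, 3))
w = np.array([1.2, -0.8, 0.5])   # hypothetical fitted log-odds weights
b = -0.1                          # hypothetical intercept

# Linear SHAP: each feature's contribution relative to the dataset mean.
phi = w * (X - X.mean(axis=0))            # shape (5, 3)
expected_logodds = b + X.mean(axis=0) @ w  # base value

# Additivity: base value + per-feature contributions = model log-odds.
logodds = b + X @ w
```

Ranking features by mean |φ_j| across cases is what produces "top predictive feature" lists like the one above; for nonlinear models the SHAP library estimates the same quantities numerically.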

Li F, Li Z, Xu H, Kong G, Zhang Z, Cheng K, Gu L, Hua L

pubmed logopapers · Jul 28 2025
To predict the 1p/19q molecular status of lower-grade glioma (LGG) patients nondestructively, this study developed a deep learning (DL) approach using radiomics to provide a potential decision aid for clinical determination of molecular stratification of LGG. The study retrospectively collected images and clinical data of 218 patients diagnosed with LGG between July 2018 and July 2022, including 155 cases from The Cancer Imaging Archive (TCIA) database and 63 cases from a regional medical centre. Patients' clinical data and MRI images were collected, including contrast-enhanced T1-weighted images and T2-weighted images. After pre-processing the image data, tumour regions of interest (ROI) were segmented by two senior neurosurgeons. In this study, an Ensemble Convolutional Neural Network (ECNN) was proposed to predict the 1p/19q status. This method, consisting of Variational Autoencoder (VAE), Information Gain (IG) and Convolutional Neural Network (CNN), is compared with four machine learning algorithms (Random Forest, Decision Tree, K-Nearest Neighbour, Gaussian Naive Bayes). Fivefold cross-validation was used to evaluate and calibrate the model. Precision, recall, accuracy, F1 score and area under the curve (AUC) were calculated to assess model performance. Our cohort comprises 118 patients diagnosed with 1p/19q codeletion and 100 patients diagnosed with 1p/19q non-codeletion. The study findings indicate that the ECNN method demonstrates excellent predictive performance on the validation dataset. Our model achieved an average precision of 0.981, average recall of 0.980, average F1-score of 0.981, and average accuracy of 0.981. The average area under the curve (AUC) for our model is 0.994, surpassing that of the other four traditional machine learning algorithms (AUC: 0.523-0.702). This suggests that the model based on the ECNN algorithm performs well in distinguishing the 1p/19q molecular status of LGG patients.
The deep learning model based on conventional MRI radiomics integrates VAE and IG methods. Compared with traditional machine learning algorithms, it shows the best performance in the prediction of 1p/19q molecular co-deletion status. It may become a potentially effective tool for non-invasively and effectively identifying molecular features of lower-grade glioma in the future, providing an important reference for clinicians to formulate individualized diagnosis and treatment plans.
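The Information Gain step in a pipeline like the ECNN's scores each (discretized) feature by how much it reduces label entropy: IG(y; x) = H(y) − Σ_v p(x=v)·H(y | x=v). A minimal sketch for binary features (illustrative only; the paper's exact discretization is not given in the abstract):

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy H(y) in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(y; x) = H(y) - sum over values v of p(x=v) * H(y | x=v)."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

labels = [1, 1, 0, 0]
perfect = [1, 1, 0, 0]   # feature identical to the label: IG = H(y) = 1 bit
useless = [0, 1, 0, 1]   # feature independent of the label: IG = 0
ig_perfect = information_gain(perfect, labels)
ig_useless = information_gain(useless, labels)
```

Keeping only the highest-IG features before the CNN stage is one common way such a filter reduces input dimensionality.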

Zhao Y, Zhang L, Zhang S, Li J, Shi K, Yao D, Li Q, Zhang T, Xu L, Geng L, Sun Y, Wan J

pubmed logopapers · Jul 28 2025
This study aims to evaluate the diagnostic value of machine learning-based MRI imaging in differentiating benign and malignant prostate cancer and detecting clinically significant prostate cancer (csPCa, defined as Gleason score ≥7) using systematic review and meta-analysis methods. Electronic databases (PubMed, Web of Science, Cochrane Library, and Embase) were systematically searched for predictive studies using machine learning-based MRI imaging for prostate cancer diagnosis. Sensitivity, specificity, and area under the curve (AUC) were used to assess the diagnostic accuracy of machine learning-based MRI imaging for both benign/malignant prostate cancer and csPCa. A total of 12 studies met the inclusion criteria, with 3474 patients included in the meta-analysis. Machine learning-based MRI imaging demonstrated good diagnostic value for both benign/malignant prostate cancer and csPCa. The pooled sensitivity and specificity for diagnosing benign/malignant prostate cancer were 0.92 (95% CI: 0.83-0.97) and 0.90 (95% CI: 0.68-0.97), respectively, with a combined AUC of 0.96 (95% CI: 0.94-0.98). For csPCa diagnosis, the pooled sensitivity and specificity were 0.83 (95% CI: 0.77-0.87) and 0.73 (95% CI: 0.65-0.81), respectively, with a combined AUC of 0.86 (95% CI: 0.83-0.89). Machine learning-based MRI imaging shows good diagnostic accuracy for both benign/malignant prostate cancer and csPCa. Further in-depth studies are needed to validate these findings.
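Pooled sensitivities and specificities like those above are usually obtained from a bivariate random-effects model; a simplified fixed-effect inverse-variance pooling on the logit scale illustrates the core idea (the per-study numbers below are hypothetical, not the review's data):

```python
import math

def pool_logit(props, ns):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.

    A simplified stand-in for the bivariate random-effects model typically
    used in diagnostic meta-analysis.
    """
    weights, logits = [], []
    for p, n in zip(props, ns):
        logit = math.log(p / (1 - p))
        var = 1 / (n * p) + 1 / (n * (1 - p))  # delta-method variance of the logit
        logits.append(logit)
        weights.append(1 / var)
    pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1 / (1 + math.exp(-pooled_logit))

# Hypothetical per-study sensitivities and sample sizes.
sens = pool_logit([0.90, 0.85, 0.80], [200, 150, 300])
```

Working on the logit scale keeps the pooled estimate inside (0, 1) and weights larger studies more heavily; the pooled value always lies between the smallest and largest study estimates.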

Li L, Wang Z, Wang C, Chen T, Deng K, Wei H, Wang D, Li J, Zhang H

pubmed logopapers · Jul 28 2025
Accurate identification of anal fistulas is essential, as it directly impacts the severity of subsequent perianal infections, prognostic indicators, and overall treatment outcomes. Traditional manual recognition methods are inefficient. In response, computer vision methods have been adopted to improve efficiency. Convolutional neural networks (CNNs) are the main basis for detecting anal fistulas in current computer vision techniques. However, these methods often struggle to capture long-range dependencies effectively, which results in inadequate handling of images of anal fistulas. This study proposes a new fusion model, CVT-HNet, that integrates MobileNet with vision transformer technology. This design utilizes CNNs to extract local features and Transformers to capture long-range dependencies. In addition, the MobileNetV2 with Coordinate Attention mechanism and encoder modules are optimized to improve the precision of detecting anal fistulas. Comparative experimental results show that CVT-HNet achieves an accuracy of 80.66% with significant robustness. It surpasses both pure Transformer architecture models and other fusion networks. Internal validation results demonstrate the reliability and consistency of CVT-HNet. External validation demonstrates that our model exhibits commendable transportability and generalizability. In visualization analysis, CVT-HNet exhibits a more concentrated focus on the region of interest in images of anal fistulas. Furthermore, the contribution of each CVT-HNet component module is evaluated by ablation experiments. The experimental results highlight the superior performance and practicality of CVT-HNet in detecting anal fistulas. By combining local and global information, CVT-HNet demonstrates strong performance. The model not only achieves high accuracy and robustness but also exhibits strong generalizability.
This makes it suitable for real-world applications where variability in data is common. These findings emphasize its effectiveness in clinical contexts.
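The CNN-plus-Transformer idea behind such fusion models — local features from convolution, long-range dependencies from self-attention — can be sketched in a few lines of numpy. This is an illustration of the general pattern, not the CVT-HNet architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Single-head self-attention: every position attends to every other
    position, which is how Transformers capture long-range dependencies.
    Queries, keys and values are all X here for brevity."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    A = softmax(scores, axis=-1)     # attention weights; each row sums to 1
    return A @ X, A

rng = np.random.default_rng(0)
tokens = rng.normal(size=(6, 4))     # 6 patch tokens, 4 channels

# Local stage: per-channel 1D smoothing as a stand-in for convolution.
kernel = np.array([0.25, 0.5, 0.25])
local = np.stack([np.convolve(tokens[:, c], kernel, mode="same")
                  for c in range(tokens.shape[1])], axis=1)

# Global stage: self-attention over the locally filtered tokens.
out, attn = self_attention(local)
```

In a real hybrid network the convolutional stem and attention blocks are learned jointly; the point here is only that the two stages see complementary neighborhood sizes.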

Zhu H, Yang H, Wang Y, Hu K, He G, Zhou J, Li Z

pubmed logopapers · Jul 28 2025
Deep learning techniques have become pivotal in medical image segmentation, but their success often relies on large, manually annotated datasets, which are expensive and labor-intensive to obtain. Additionally, different segmentation tasks frequently require retraining models from scratch, resulting in substantial computational costs. To address these limitations, we propose PDoRA, an innovative parameter-efficient fine-tuning method that leverages knowledge transfer from a pre-trained SwinUNETR model for a wide range of brain image segmentation tasks. PDoRA minimizes the reliance on extensive data annotation and computational resources by decomposing model weights into principal and residual weights. The principal weights are further divided into magnitude and direction, enabling independent fine-tuning to enhance the model's ability to capture task-specific features. The residual weights remain fixed and are later fused with the updated principal weights, ensuring model stability while enhancing performance. We evaluated PDoRA on three diverse medical image datasets for brain structure and metastasis segmentation. The results demonstrate that PDoRA consistently outperforms existing parameter-efficient fine-tuning methods, achieving superior segmentation accuracy and efficiency. Our code is available at https://github.com/Perfect199001/PDoRA/tree/main .
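The magnitude/direction split that PDoRA applies to its principal weights can be sketched directly (DoRA-style: per-column norm as magnitude, unit columns as direction, each updated independently and then recomposed). The abstract does not specify the principal/residual partition, so the update values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))              # a pretrained weight matrix

# Decompose into magnitude (per-column norm) and direction (unit columns).
m = np.linalg.norm(W, axis=0)            # magnitude vector, shape (4,)
V = W / m                                # direction matrix, columns of norm 1

# Fine-tune magnitude and direction independently (hypothetical updates).
m_new = m * 1.05                          # e.g. a learned magnitude scaling
delta_V = 0.01 * rng.normal(size=V.shape)
V_new = (V + delta_V) / np.linalg.norm(V + delta_V, axis=0)

# Recompose: the updated weight has exactly the new per-column magnitudes,
# so magnitude and direction remain independently controllable.
W_new = m_new * V_new
```

Freezing one factor (here it could be the residual weights, per the paper's description) while training the other is what keeps the parameter count and compute of such fine-tuning low.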

Zhang J, Wang W, Dong J, Yang X, Bai S, Tian J, Li B, Li X, Zhang J, Wu H, Zeng X, Ye Y, Ding S, Wan J, Wu K, Mao Y, Li C, Zhang N, Xu J, Dai Y, Shi F, Sun B, Zhou Y, Zhao H

pubmed logopapers · Jul 28 2025
Three-dimensional magnetic resonance vessel wall imaging (3D MR-VWI) is critical for characterizing cerebrovascular pathologies, yet its clinical adoption is hindered by labor-intensive postprocessing. We developed VWI Assistant, a multi-sequence integrated deep learning platform trained on multicenter data (study cohorts of 1981 patients and their imaging datasets) to automate artery segmentation and reconstruction. The framework demonstrated robust performance across diverse patient populations, imaging protocols, and scanner manufacturers, achieving a 92.9% qualified rate comparable to expert manual delineation. VWI Assistant reduced processing time by over 90% (10-12 min per case) compared to manual methods (p < 0.001) and improved inter-/intra-reader agreement. Real-world deployment (n = 1099 patients) demonstrated rapid clinical adoption, with utilization rates increasing from 10.8% to 100.0% within 12 months. By streamlining 3D MR-VWI workflows, VWI Assistant addresses scalability challenges in vascular imaging, offering a practical tool for routine use and large-scale research, significantly improving workflow efficiency while reducing labor and time costs.
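Automated segmentations like these are typically scored against expert masks with overlap metrics; the abstract reports a "qualified rate" without naming its metric, so as a standard illustration, here is the Dice similarity coefficient on a toy pair of binary masks:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |gt|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2 * inter / denom if denom else 1.0

gt = np.zeros((8, 8), dtype=int)
gt[2:6, 2:6] = 1                  # 16-pixel reference region
pred = np.zeros((8, 8), dtype=int)
pred[3:7, 2:6] = 1                # prediction shifted down by one row

score = dice(pred, gt)            # 12 overlapping pixels -> 24/32 = 0.75
```

Thresholding such a score per case (e.g. Dice above some cutoff counts as "qualified") is one plausible way a qualified rate could be derived.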

Attallah O

pubmed logopapers · Jul 28 2025
Breast cancer is a relatively common carcinoma among women worldwide and remains a considerable public health concern. Consequently, the prompt identification of cancer is crucial, as research indicates that 96% of cancers are treatable if diagnosed prior to metastasis. Despite being considered the gold standard for breast cancer evaluation, conventional mammography possesses inherent drawbacks, including accessibility issues, especially in rural regions, and discomfort associated with the procedure. Therefore, there has been a surge in interest in non-invasive, radiation-free alternative diagnostic techniques, such as thermal imaging (thermography). Thermography employs infrared thermal sensors to capture and assess temperature maps of human breasts for the identification of potential tumours based on areas of thermal irregularity. This study proposes an advanced computer-aided diagnosis (CAD) system, called Thermo-CAD, for early breast cancer detection using thermal imaging, aimed at assisting radiologists. The CAD system employs a variety of deep learning techniques, specifically incorporating multiple convolutional neural networks (CNNs) to enhance diagnostic accuracy and reliability. To effectively integrate multiple deep features and diminish the dimensionality of features derived from each CNN, feature transformation and selection methods, including non-negative matrix factorization and Relief-F, are used, leading to a reduction in classification complexity. The Thermo-CAD system is assessed utilising two datasets: the DMR-IR (Database for Mastology Research Infrared Images), for distinguishing between normal and abnormal breast tissues, and a novel thermography dataset to distinguish abnormal instances as benign or malignant.
Thermo-CAD has proven to be an outstanding CAD system for thermographic breast cancer detection, attaining 100% accuracy on the DMR-IR dataset (normal versus abnormal breast tissue) using CSVM and MGSVM classifiers, and lower accuracy using LSVM and QSVM classifiers. However, it showed a lower ability to distinguish benign from malignant cases (second dataset), achieving an accuracy of 79.3% using CSVM. Yet, it remains a promising tool for early-stage cancer detection, especially in resource-constrained environments.
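Relief-style feature selection, one of the two reduction methods named above, rewards features whose values separate classes at nearest neighbors. A minimal binary-Relief sketch on synthetic data (the paper uses Relief-F, a multi-class, multi-neighbor extension; this simplified version illustrates the principle):

```python
import numpy as np

def relief(X, y):
    """Simplified binary Relief: for each sample, credit features by their
    distance to the nearest miss (other class) minus the nearest hit
    (same class). Discriminative features end up with large weights."""
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        dists = np.abs(X - X[i]).sum(axis=1)
        dists[i] = np.inf                 # exclude the sample itself
        same = (y == y[i])
        same[i] = False
        hit = np.argmin(np.where(same, dists, np.inf))
        miss = np.argmin(np.where(~same, dists, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n

rng = np.random.default_rng(0)
y = np.array([0] * 10 + [1] * 10)
X = np.column_stack([
    y + 0.01 * rng.normal(size=20),   # informative feature tracks the class
    rng.normal(size=20),              # pure noise feature
])
w = relief(X, y)                      # expect w[0] >> w[1]
```

Keeping only the top-weighted features (and, separately, compressing the deep features with non-negative matrix factorization) is what shrinks the classifier's input as the abstract describes.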

Chen X, Peng J, Zhang Z, Song Q, Li D, Zhai G, Fu W, Shu Z

pubmed logopapers · Jul 28 2025
Autism spectrum disorder (ASD) diagnosis remains challenging and could benefit from objective imaging-based approaches. This study aimed to construct a prediction model using whole-brain imaging radiomics and machine learning to identify children with ASD. We analyzed 223 subjects (120 with ASD) from the ABIDE database, randomly divided into training and test sets (7:3 ratio), and an independent external test set of 87 participants from Georgetown University and University of Miami. Radiomics features were extracted from white matter, gray matter, and cerebrospinal fluid from whole-brain MR images. After feature dimensionality reduction, we screened clinical predictors using multivariate logistic regression and combined them with radiomics signatures to build machine learning models. Model performance was evaluated using ROC curves and by stratifying subjects into risk subgroups. Radiomics markers achieved AUCs of 0.78, 0.75, and 0.74 in training, test, and external test sets, respectively. Verbal intelligence quotient (VIQ) emerged as a significant ASD predictor. The decision tree algorithm with radiomics markers performed best, with AUCs of 0.87, 0.84, and 0.83; sensitivities of 0.89, 0.84, and 0.86; and specificities of 0.70, 0.63, and 0.66 in the three datasets, respectively. Risk stratification using a cut-off value of 0.4285 showed significant differences in ASD prevalence between subgroups across all datasets (training: χ<sup>2</sup>=21.325; test: χ<sup>2</sup>=5.379; external test: χ<sup>2</sup>=21.52; P<0.05). A radiomics signature based on whole-brain MRI features can effectively identify ASD, with performance enhanced by incorporating VIQ data and using a decision tree algorithm, providing a potential adaptive strategy for clinical practice.
ASD = Autism Spectrum Disorder; MRI = Magnetic Resonance Imaging; SVM = support vector machine; KNN = K-nearest neighbor; VIQ = Verbal intelligence quotient; FIQ = Full-Scale intelligence quotient; ROC = Receiver Operating Characteristic; AUC = Area under Curve.
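The risk-stratification χ² statistics quoted above come from 2×2 tables of risk subgroup versus ASD status. A Pearson chi-square sketch on a hypothetical table (the study's actual counts are not given in the abstract):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 contingency table [[a, b], [c, d]]:
    chi2 = n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: high-risk subgroup (70 ASD, 20 non-ASD) vs
# low-risk subgroup (30 ASD, 60 non-ASD).
chi2 = chi_square_2x2(70, 20, 30, 60)
```

With 1 degree of freedom, values like the study's 21.325 and 21.52 correspond to P far below 0.05, while 5.379 still clears the conventional 3.84 threshold.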

Rai P, Mark IT, Soni N, Diehn F, Messina SA, Benson JC, Madhavan A, Agarwal A, Bathla G

pubmed logopapers · Jul 28 2025
Magnetic resonance imaging (MRI) is a cornerstone of neuroimaging, providing unparalleled soft-tissue contrast. However, its clinical utility is often limited by long acquisition times, which contribute to motion artifacts, patient discomfort, and increased costs. Although traditional acceleration techniques, such as parallel imaging and compressed sensing help reduce scan times, they may reduce signal-to-noise ratio (SNR) and introduce artifacts. The advent of deep learning-based image reconstruction (DLBIR) may help in several ways to reduce scan times while preserving or improving image quality. Various DLBIR techniques are currently available through different vendors, with claimed reductions in gradient times up to 85% while maintaining or enhancing lesion conspicuity, improved noise suppression and diagnostic accuracy. The evolution of DLBIR from 2D to 3D acquisitions, coupled with advancements in self-supervised learning, further expands its capabilities and clinical applicability. Despite these advancements, challenges persist in generalizability across scanners and imaging conditions, susceptibility to artifacts and potential alterations in pathology representation. Additionally, limited data on training, underlying algorithms and clinical validation of these vendor-specific closed-source algorithms pose barriers to end-user trust and widespread adoption. This review explores the current applications of DLBIR in neuroimaging, vendor-driven implementations, and emerging trends that may impact accelerated MRI acquisitions. ABBREVIATIONS: PI = parallel imaging; CS = compressed sensing; DLBIR = deep learning-based image reconstruction; AI = artificial intelligence; DR = Deep Resolve; ACS = artificial-intelligence-assisted compressed sensing.
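The undersampling that DLBIR (and compressed sensing before it) must compensate for can be mimicked by retrospectively discarding k-space samples. A toy 1D numpy sketch showing that naive zero-filled reconstruction of 2x-undersampled data introduces aliasing ghosts, the artifacts these methods exist to remove:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.zeros(64)
signal[20:44] = 1.0                      # a simple 1D "anatomy" profile

kspace = np.fft.fft(signal)              # fully sampled k-space

# Retrospective 2x acceleration: keep every other phase-encode line.
mask = np.zeros(64, dtype=bool)
mask[::2] = True
undersampled = np.where(mask, kspace, 0)

# Zero-filled reconstruction: half the data acquired, but an aliased
# ghost copy of the object appears shifted by half the field of view.
recon = np.fft.ifft(undersampled).real * 2   # x2 compensates discarded lines

err_full = np.abs(np.fft.ifft(kspace).real - signal).max()  # ~0
err_zf = np.abs(recon - signal).max()                       # ghost amplitude
```

Parallel imaging, compressed sensing, and DLBIR all resolve this ambiguity with extra information — coil sensitivities, sparsity priors, or learned image statistics, respectively.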
