Page 52 of 3053046 results

Evaluating the impact of view position in X-ray imaging for the classification of lung diseases.

Hage Chehade A, Abdallah N, Marion JM, Oueidat M, Chauvet P

PubMed · Jul 28 2025
Clinical information associated with chest X-ray images, such as view position, patient age, and gender, plays a crucial role in image interpretation, as it influences the visibility of anatomical structures and pathologies. However, most classification models using the ChestX-ray14 dataset rely solely on image data, disregarding these clinical variables. This study investigates which clinical variable most affects image characteristics and assesses its impact on classification performance. To explore the relationships between clinical variables and image characteristics, unsupervised clustering was applied to group images by similarity. A statistical analysis was then conducted on each cluster to examine its clinical composition, analyzing the distribution of age, gender, and view position. An attention-based CNN model was developed separately for each value of the clinical variable with the greatest influence on image characteristics to assess its impact on lung disease classification. The analysis identified view position as the most influential variable affecting image characteristics. Accounting for it, the proposed approach achieved a weighted area under the curve (AUC) of 0.8176 for pneumonia classification, surpassing the base model (which does not consider view position) by 1.65% and outperforming previous studies by 6.76%. It also demonstrated improved performance across all 14 diseases in the ChestX-ray14 dataset. These findings highlight the importance of considering view position when developing classification models for chest X-ray analysis: accounting for this characteristic allows more precise disease identification, with potential for broader clinical application in lung disease evaluation.
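A view-position-weighted AUC like the one reported above can be obtained by evaluating each view's model on its own images and weighting each view's AUC by its share of the dataset. A minimal stdlib-only sketch; the grouping and weighting scheme here is an assumption for illustration, not the authors' exact implementation:

```python
def auc(y_true, y_score):
    """Rank-based (Mann-Whitney) AUC for binary labels: the probability
    that a random positive is scored above a random negative."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def view_weighted_auc(groups):
    """groups: view position -> (labels, scores), e.g. {'PA': ..., 'AP': ...}.
    Each view gets its own model; the overall metric weights each
    view's AUC by that view's share of the images."""
    total = sum(len(y) for y, _ in groups.values())
    return sum(len(y) / total * auc(y, s) for y, s in groups.values())
```

Splitting by view before scoring prevents a systematically different view (e.g. supine AP films) from dragging down a model tuned on the majority view.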

Towards trustworthy artificial intelligence in musculoskeletal medicine: A narrative review on uncertainty quantification.

Vahdani AM, Shariatnia M, Rajpurkar P, Pareek A

PubMed · Jul 28 2025
Deep learning (DL) models have achieved remarkable performance in musculoskeletal (MSK) medical imaging research, yet their clinical integration remains hindered by their black-box nature and the absence of reliable confidence measures. Uncertainty quantification (UQ) seeks to bridge this gap by providing each DL prediction with a calibrated estimate of uncertainty, thereby fostering clinician trust and safer deployment. We conducted a targeted narrative review, performing expert-driven searches in PubMed, Scopus, and arXiv and mining references from relevant publications in MSK imaging utilizing UQ; a thematic synthesis was used to derive a cohesive taxonomy of UQ methodologies. UQ approaches encompass multi-pass methods (e.g., test-time augmentation, Monte Carlo dropout, and model ensembling) that infer uncertainty from variability across repeated inferences; single-pass methods (e.g., conformal prediction and evidential deep learning) that augment each individual prediction with uncertainty metrics; and other techniques that leverage auxiliary information, such as inter-rater variability, hidden-layer activations, or generative reconstruction errors, to estimate confidence. Applications in MSK imaging include highlighting uncertain areas in cartilage segmentation and identifying uncertain predictions in joint implant design detection; downstream applications include enhanced clinical utility and more efficient data annotation pipelines. Embedding UQ into DL workflows is essential for translating high-performance models into clinical practice. Future research should prioritize robust out-of-distribution handling, computational efficiency, and standardized evaluation metrics to accelerate the adoption of trustworthy AI in MSK medicine.
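The multi-pass methods described above (Monte Carlo dropout, test-time augmentation, ensembling) all share one recipe: run a stochastic forward pass repeatedly and read uncertainty off the spread of the predictions. A toy stdlib-only sketch; the two-weight `toy_forward` "network" is purely illustrative:

```python
import math
import random
import statistics

def mc_uncertainty(forward, x, passes=200, seed=0):
    """Run a stochastic forward pass many times; return the mean
    prediction and its standard deviation as an uncertainty estimate."""
    rng = random.Random(seed)
    preds = [forward(x, rng) for _ in range(passes)]
    return statistics.mean(preds), statistics.stdev(preds)

def toy_forward(x, rng):
    """Stand-in for a network with dropout left ON at test time:
    each weight is kept with probability 0.5 (rescaled by 1/0.5),
    then passed through a sigmoid output."""
    weights = [0.6, 0.4]
    kept = [w * (2.0 if rng.random() < 0.5 else 0.0) for w in weights]
    z = sum(k * xi for k, xi in zip(kept, x))
    return 1.0 / (1.0 + math.exp(-z))
```

A high standard deviation across passes flags a prediction a clinician should double-check; a near-zero spread indicates the model is stable on that input.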

Self-Assessment of acute rib fracture detection system from chest X-ray: Preliminary study for early radiological diagnosis.

Lee HK, Kim HS, Kim SG, Park JY

PubMed · Jul 28 2025
Objective: Detecting and accurately diagnosing rib fractures in chest radiographs is a challenging and time-consuming task for radiologists. This study presents a novel deep learning system designed to automate the detection and segmentation of rib fractures in chest radiographs. Methods: The proposed method combines CenterNet with HRNet v2 for precise fracture region identification and HRNet-W48 with contextual representation to enhance rib segmentation. A dataset consisting of 1006 chest radiographs from a tertiary hospital in Korea was used, with a 7:2:1 split for training, validation, and testing. Results: The rib fracture detection component achieved a sensitivity of 0.7171, indicating its effectiveness in identifying fractures. Additionally, rib segmentation performance was measured by a Dice score of 0.86, demonstrating its accuracy in delineating rib structures. Visual assessment results further highlight the model's capability to pinpoint fractures and segment ribs accurately. Conclusion: This innovative approach holds promise for improving rib fracture detection and rib segmentation, offering potential benefits in clinical practice for more efficient and accurate diagnosis in the field of medical image analysis.
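The two metrics reported above are standard: sensitivity measures the fraction of true fractures the detector finds, and the Dice score measures mask overlap for segmentation. A minimal stdlib sketch over flattened binary masks:

```python
def dice_score(pred, target):
    """Dice = 2|A∩B| / (|A| + |B|) over flattened binary masks;
    1.0 is perfect overlap, 0.0 is no overlap."""
    inter = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2 * inter / total if total else 1.0

def sensitivity(pred, target):
    """TP / (TP + FN): the fraction of true positives that were found."""
    tp = sum(p and t for p, t in zip(pred, target))
    fn = sum((not p) and t for p, t in zip(pred, target))
    return tp / (tp + fn)
```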

Evaluation of the impact of artificial intelligence-assisted image interpretation on the diagnostic performance of clinicians in identifying endotracheal tube position on plain chest X-ray: a multi-case multi-reader study.

Novak A, Ather S, Morgado ATE, Maskell G, Cowell GW, Black D, Shah A, Bowness JS, Shadmaan A, Bloomfield C, Oke JL, Johnson H, Beggs M, Gleeson F, Aylward P, Hafeez A, Elramlawy M, Lam K, Griffiths B, Harford M, Aaron L, Seeley C, Luney M, Kirkland J, Wing L, Qamhawi Z, Mandal I, Millard T, Chimbani M, Sharazi A, Bryant E, Haithwaite W, Medonica A

PubMed · Jul 28 2025
Incorrectly placed endotracheal tubes (ETTs) can lead to serious clinical harm. Studies have demonstrated the potential for artificial intelligence (AI)-led algorithms to detect ETT placement on chest X-ray (CXR) images; however, their effect on clinician accuracy remains unexplored. This study measured the impact of an AI-assisted ETT detection algorithm on the ability of clinical staff to correctly identify ETT misplacement on CXR images. Four hundred CXRs of intubated adult patients were retrospectively sourced from the John Radcliffe Hospital (Oxford) and two other UK NHS hospitals. Images were de-identified and selected from a range of clinical settings, including the intensive care unit (ICU) and emergency department (ED). Each image was independently reported by a panel of thoracic radiologists, whose consensus classification of ETT placement (correct, too low [distal], or too high [proximal]) served as the reference standard for the study. Correct ETT position was defined as the tip located 3-7 cm above the carina, in line with established guidelines. Eighteen clinical readers of varying seniority from six clinical specialties were recruited across four NHS hospitals. Readers viewed the dataset using an online platform and recorded a blinded classification of ETT position for each image. After a four-week washout period, this was repeated with assistance from an AI-assisted image interpretation tool. Reader accuracy, reported confidence, and timings were measured during each study phase. In total, 14,400 image interpretations were undertaken. Pooled accuracy for tube placement classification improved from 73.6% to 77.4% (p = 0.002). Accuracy for identification of critically misplaced tubes increased from 79.3% to 89.0% (p = 0.001). Reader confidence improved with AI assistance, with no change in mean interpretation time (36 s per image). 
Use of assistive AI technology improved accuracy and confidence in interpreting ETT placement on CXR, especially for identification of critically misplaced tubes. AI assistance may potentially provide a useful adjunct to support clinicians in identifying misplaced ETTs on CXR.
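The pooled accuracies above come from aggregating correct calls across all readers and images before dividing, rather than averaging per-reader accuracies. A minimal sketch of that pooling (the per-reader counts below are illustrative, not the study's data):

```python
def pooled_accuracy(reader_counts):
    """reader_counts: list of (n_correct, n_total) per reader.
    Pooling sums counts before dividing, so readers who interpreted
    more images contribute proportionally more."""
    correct = sum(c for c, _ in reader_counts)
    total = sum(t for _, t in reader_counts)
    return correct / total

def percentage_point_gain(acc_unaided, acc_aided):
    """Improvement in percentage points, as in 73.6% -> 77.4%."""
    return 100.0 * (acc_aided - acc_unaided)
```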

A radiomics-based interpretable model integrating delayed-phase CT and clinical features for predicting the pathological grade of appendiceal pseudomyxoma peritonei.

Bai D, Shi G, Liang Y, Li F, Zheng Z, Wang Z

PubMed · Jul 28 2025
This study aimed to develop an interpretable machine learning model integrating delayed-phase contrast-enhanced CT radiomics with clinical features for noninvasive prediction of pathological grading in appendiceal pseudomyxoma peritonei (PMP), using Shapley Additive Explanations (SHAP) for model interpretation. This retrospective study analyzed 158 pathologically confirmed PMP cases (85 low-grade, 73 high-grade) from January 4, 2015 to April 30, 2024. Comprehensive clinical data including demographic characteristics, serum tumor markers (CEA, CA19-9, CA125, D-dimer, CA-724, CA-242), and CT-peritoneal cancer index (CT-PCI) were collected. Radiomics features were extracted from preoperative contrast-enhanced CT scans using standardized protocols. After rigorous feature selection and five-fold cross-validation, we developed three predictive models: clinical-only, radiomics-only, and a combined clinical-radiomics model using logistic regression. Model performance was evaluated through ROC analysis (AUC), the DeLong test, decision curve analysis (DCA), and the Brier score, with SHAP values providing interpretability. The combined model demonstrated superior performance, achieving AUCs of 0.91 (95%CI:0.86-0.95) and 0.88 (95%CI:0.82-0.93) in the training and testing sets respectively, significantly outperforming the standalone models (P < 0.05). DCA confirmed greater clinical utility across most threshold probabilities, with favorable Brier scores (training: 0.124; testing: 0.142) indicating excellent calibration. SHAP analysis identified the top predictive features: wavelet-LHH_glcm_InverseVariance (radiomics), original_shape_Elongation (radiomics), and CA19-9 (clinical). Our SHAP-interpretable combined model provides an accurate, noninvasive tool for PMP grading, facilitating personalized treatment decisions. The integration of radiomics and clinical data demonstrates superior predictive performance compared to conventional approaches, with potential to improve patient outcomes.
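The Brier score used above to check calibration is simply the mean squared gap between each predicted probability and the binary outcome; a stdlib sketch:

```python
def brier_score(probs, outcomes):
    """Mean squared difference between predicted probability (0-1)
    and the binary outcome (0 or 1). 0.0 is perfect; 0.25 is the
    score of an uninformative constant 0.5 prediction."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)
```

Scores near 0.12-0.14, as reported for this model, indicate predictions that are both discriminative and well calibrated.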

Prediction of 1p/19q state in glioma by integrated deep learning method based on MRI radiomics.

Li F, Li Z, Xu H, Kong G, Zhang Z, Cheng K, Gu L, Hua L

PubMed · Jul 28 2025
To predict the 1p/19q molecular status of lower-grade glioma (LGG) patients noninvasively, this study developed a deep learning (DL) approach using radiomics to provide a potential decision aid for clinical determination of molecular stratification of LGG. The study retrospectively collected images and clinical data of 218 patients diagnosed with LGG between July 2018 and July 2022, including 155 cases from The Cancer Imaging Archive (TCIA) database and 63 cases from a regional medical centre. Patients' clinical data and MRI images were collected, including contrast-enhanced T1-weighted images and T2-weighted images. After pre-processing the image data, tumour regions of interest (ROIs) were segmented by two senior neurosurgeons. An Ensemble Convolutional Neural Network (ECNN) was proposed to predict the 1p/19q status. This method, consisting of a Variational Autoencoder (VAE), Information Gain (IG), and a Convolutional Neural Network (CNN), was compared with four machine learning algorithms (Random Forest, Decision Tree, K-Nearest Neighbour, and Gaussian Naïve Bayes). Fivefold cross-validation was used to evaluate and calibrate the model. Precision, recall, accuracy, F1 score, and area under the curve (AUC) were calculated to assess model performance. Our cohort comprises 118 patients diagnosed with 1p/19q codeletion and 100 patients diagnosed with 1p/19q non-codeletion. The ECNN method demonstrates excellent predictive performance on the validation dataset: our model achieved an average precision of 0.981, average recall of 0.980, average F1-score of 0.981, and average accuracy of 0.981. The average AUC for our model is 0.994, surpassing that of the other four traditional machine learning algorithms (AUC: 0.523-0.702). This suggests that the ECNN-based model performs well in distinguishing the 1p/19q molecular status of LGG patients. 
The deep learning model based on conventional MRI radiomics integrates the VAE and IG methods. Compared with traditional machine learning algorithms, it shows the best performance in predicting 1p/19q molecular co-deletion status. It may become an effective tool for noninvasively identifying molecular features of lower-grade glioma, providing an important reference for clinicians formulating individualized diagnosis and treatment plans.
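The Information Gain component of the ECNN scores a feature by how much knowing its value reduces the entropy of the class labels. A stdlib sketch over a discretized feature (the discrete-feature setup is an assumption for illustration; the paper's exact IG pipeline is not specified here):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy in bits of a list of class labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG = H(labels) - sum_v P(feature=v) * H(labels | feature=v).
    A feature that perfectly splits the classes has IG equal to the
    full label entropy; an uninformative feature has IG near zero."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond
```

Ranking features by IG and keeping the top-scoring ones is a common way to prune a radiomics feature set before classification.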

Machine learning-based MRI imaging for prostate cancer diagnosis: systematic review and meta-analysis.

Zhao Y, Zhang L, Zhang S, Li J, Shi K, Yao D, Li Q, Zhang T, Xu L, Geng L, Sun Y, Wan J

PubMed · Jul 28 2025
This study aims to evaluate the diagnostic value of machine learning-based MRI imaging in differentiating benign and malignant prostate cancer and detecting clinically significant prostate cancer (csPCa, defined as Gleason score ≥7) using systematic review and meta-analysis methods. Electronic databases (PubMed, Web of Science, Cochrane Library, and Embase) were systematically searched for predictive studies using machine learning-based MRI imaging for prostate cancer diagnosis. Sensitivity, specificity, and area under the curve (AUC) were used to assess the diagnostic accuracy of machine learning-based MRI imaging for both benign/malignant prostate cancer and csPCa. A total of 12 studies met the inclusion criteria, with 3474 patients included in the meta-analysis. Machine learning-based MRI imaging demonstrated good diagnostic value for both benign/malignant prostate cancer and csPCa. The pooled sensitivity and specificity for diagnosing benign/malignant prostate cancer were 0.92 (95% CI: 0.83-0.97) and 0.90 (95% CI: 0.68-0.97), respectively, with a combined AUC of 0.96 (95% CI: 0.94-0.98). For csPCa diagnosis, the pooled sensitivity and specificity were 0.83 (95% CI: 0.77-0.87) and 0.73 (95% CI: 0.65-0.81), respectively, with a combined AUC of 0.86 (95% CI: 0.83-0.89). Machine learning-based MRI imaging shows good diagnostic accuracy for both benign/malignant prostate cancer and csPCa. Further in-depth studies are needed to validate these findings.
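Per-study sensitivity and specificity derive from 2x2 confusion counts (TP, FP, FN, TN). The naive count-pooling sketch below illustrates the quantities involved; note that meta-analyses like this one typically pool with weighted bivariate random-effects models rather than raw count sums, so this is a simplification:

```python
def pooled_sens_spec(studies):
    """studies: list of (tp, fp, fn, tn) tuples, one per study.
    Naively pools raw counts across studies, then computes
    sensitivity = TP/(TP+FN) and specificity = TN/(TN+FP)."""
    tp = sum(s[0] for s in studies)
    fp = sum(s[1] for s in studies)
    fn = sum(s[2] for s in studies)
    tn = sum(s[3] for s in studies)
    return tp / (tp + fn), tn / (tn + fp)
```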

CVT-HNet: a fusion model for recognizing perianal fistulizing Crohn's disease based on CNN and ViT.

Li L, Wang Z, Wang C, Chen T, Deng K, Wei H, Wang D, Li J, Zhang H

PubMed · Jul 28 2025
Accurate identification of anal fistulas is essential, as it directly impacts the severity of subsequent perianal infections, prognostic indicators, and overall treatment outcomes. Traditional manual recognition methods are inefficient; in response, computer vision methods have been adopted to improve efficiency. Convolutional neural networks (CNNs) are the main basis for detecting anal fistulas in current computer vision techniques. However, these methods often struggle to capture long-range dependencies effectively, which results in inadequate handling of anal fistula images. This study proposes a new fusion model, CVT-HNet, that integrates MobileNet with vision transformer technology. This design uses CNNs to extract local features and Transformers to capture long-range dependencies. In addition, the MobileNetV2 with Coordinate Attention mechanism and the encoder modules are optimized to improve the precision of detecting anal fistulas. Comparative experimental results show that CVT-HNet achieves an accuracy of 80.66% with significant robustness, surpassing both pure Transformer architecture models and other fusion networks. Internal validation results demonstrate the reliability and consistency of CVT-HNet; external validation demonstrates that the model exhibits commendable transportability and generalizability. In visualization analysis, CVT-HNet exhibits a more concentrated focus on the region of interest in anal fistula images. Furthermore, the contribution of each CVT-HNet component module is evaluated by ablation experiments. The experimental results highlight the superior performance and practicality of CVT-HNet in detecting anal fistulas. By combining local and global information, CVT-HNet not only achieves high accuracy and robustness but also exhibits strong generalizability. 
This makes it suitable for real-world applications where variability in data is common. These findings emphasize its effectiveness in clinical contexts.
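The long-range dependencies the transformer branch contributes come from self-attention, where every spatial token attends to every other token regardless of distance. A minimal scaled dot-product attention sketch in plain Python (a conceptual illustration, not the CVT-HNet implementation):

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors. Each query
    mixes ALL values, weighted by softmaxed similarity to the keys --
    the mechanism that lets a ViT branch capture global context that
    a CNN's local receptive field misses."""
    d = len(queries[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        peak = max(scores)                     # stabilize the softmax
        exps = [math.exp(s - peak) for s in scores]
        norm = sum(exps)
        weights = [e / norm for e in exps]
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs
```

In a fusion model, these globally-mixed features are combined with the CNN branch's local texture features before classification.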

A new low-rank adaptation method for brain structure and metastasis segmentation via decoupled principal weight direction and magnitude.

Zhu H, Yang H, Wang Y, Hu K, He G, Zhou J, Li Z

PubMed · Jul 28 2025
Deep learning techniques have become pivotal in medical image segmentation, but their success often relies on large, manually annotated datasets, which are expensive and labor-intensive to obtain. Additionally, different segmentation tasks frequently require retraining models from scratch, resulting in substantial computational costs. To address these limitations, we propose PDoRA, an innovative parameter-efficient fine-tuning method that leverages knowledge transfer from a pre-trained SwinUNETR model for a wide range of brain image segmentation tasks. PDoRA minimizes the reliance on extensive data annotation and computational resources by decomposing model weights into principal and residual weights. The principal weights are further divided into magnitude and direction, enabling independent fine-tuning to enhance the model's ability to capture task-specific features. The residual weights remain fixed and are later fused with the updated principal weights, ensuring model stability while enhancing performance. We evaluated PDoRA on three diverse medical image datasets for brain structure and metastasis segmentation. The results demonstrate that PDoRA consistently outperforms existing parameter-efficient fine-tuning methods, achieving superior segmentation accuracy and efficiency. Our code is available at https://github.com/Perfect199001/PDoRA/tree/main .
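The decomposition described above — splitting principal weights into a magnitude and a direction that can be tuned independently, then fusing with frozen residual weights — can be sketched on a single weight vector. This is a toy illustration of the idea, not the paper's implementation:

```python
import math

def decompose(w):
    """Split a weight vector into its magnitude (L2 norm) and its
    unit-length direction, so the two can be fine-tuned separately."""
    magnitude = math.sqrt(sum(x * x for x in w))
    return magnitude, [x / magnitude for x in w]

def fuse(magnitude, direction, residual):
    """Recombine the (independently fine-tuned) magnitude and direction
    into principal weights, then add the frozen residual weights."""
    principal = [magnitude * d for d in direction]
    return [p + r for p, r in zip(principal, residual)]
```

Keeping the residual weights frozen anchors the model near its pre-trained solution, while the low-dimensional magnitude/direction updates carry the task-specific adaptation.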

Rapid vessel segmentation and reconstruction of head and neck angiograms from MR vessel wall images.

Zhang J, Wang W, Dong J, Yang X, Bai S, Tian J, Li B, Li X, Zhang J, Wu H, Zeng X, Ye Y, Ding S, Wan J, Wu K, Mao Y, Li C, Zhang N, Xu J, Dai Y, Shi F, Sun B, Zhou Y, Zhao H

PubMed · Jul 28 2025
Three-dimensional magnetic resonance vessel wall imaging (3D MR-VWI) is critical for characterizing cerebrovascular pathologies, yet its clinical adoption is hindered by labor-intensive postprocessing. We developed VWI Assistant, a multi-sequence integrated deep learning platform trained on multicenter data (study cohorts of 1981 patients and their imaging datasets) to automate artery segmentation and reconstruction. The framework demonstrated robust performance across diverse patient populations, imaging protocols, and scanner manufacturers, achieving a 92.9% qualified rate comparable to expert manual delineation. VWI Assistant reduced processing time by over 90% (10-12 min per case) compared to manual methods (p < 0.001) and improved inter- and intra-reader agreement. Real-world deployment (n = 1099 patients) demonstrated rapid clinical adoption, with utilization rates increasing from 10.8% to 100.0% within 12 months. By streamlining 3D MR-VWI workflows, VWI Assistant addresses scalability challenges in vascular imaging, offering a practical tool for routine use and large-scale research while significantly improving workflow efficiency and reducing labor and time costs.