Predicting Intracranial Pressure Levels: A Deep Learning Approach Using Computed Tomography Brain Scans.

Theodoropoulos D, Trivizakis E, Marias K, Xirouchaki N, Vakis A, Papadaki E, Karantanas A, Karabetsos DA

PubMed · Jul 28, 2025
Elevated intracranial pressure (ICP) is a serious condition that demands prompt diagnosis to avoid significant neurological injury or even death. Although invasive techniques remain the "gold standard" for measuring ICP, they are time-consuming and pose risks of complications. Various noninvasive methods have been suggested, but their experimental status limits their use in emergency situations. Meanwhile, although artificial intelligence has evolved rapidly, it has not yet fully harnessed fast-acquisition modalities such as computed tomography (CT) to evaluate ICP, likely because of the scarcity of annotated data sets. This study addresses that gap by training four distinct deep learning models on a custom data set; a key innovation is the incorporation of demographic data and Glasgow Coma Scale (GCS) values as additional channels of the scans. The models were trained and validated on paired CT brain scans (n = 578) with corresponding ICP values, supplemented by GCS scores and demographic data. The algorithm addresses a binary classification problem: predicting whether ICP exceeds a predetermined threshold of 15 mm Hg. The top-performing models achieved an area under the curve of 88.3% and a recall of 81.8%. An algorithm that enhances the transparency of the models' decisions was used to provide insights into where the models focus when generating outcomes, for both the best- and lowest-performing models. This study demonstrates the potential of AI-based models to evaluate ICP levels from brain CT scans with high recall. Although promising, these findings require further validation to improve clinical applicability.
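
The channel-stacking idea the authors describe can be made concrete. Below is a minimal sketch (assuming PyTorch, a single 2D CT slice per sample, and three pre-scaled clinical scalars; the paper's actual architecture is not published here, so the backbone is a toy placeholder) of broadcasting demographic and GCS values into constant image channels:

```python
# Minimal sketch: scalar clinical variables as extra constant image channels.
import torch
import torch.nn as nn

def add_clinical_channels(ct: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
    """ct: (B, 1, H, W) normalized CT slice; clinical: (B, K) scaled scalars
    (e.g., age, sex, GCS -- illustrative choices). Returns (B, 1+K, H, W)."""
    b, _, h, w = ct.shape
    # Expand each scalar into a constant H x W plane and stack as channels.
    extra = clinical[:, :, None, None].expand(-1, -1, h, w)
    return torch.cat([ct, extra], dim=1)

# Any image backbone can then consume the stacked input; a toy classifier:
model = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1),  # single logit for "ICP > 15 mm Hg"
)

x = add_clinical_channels(torch.randn(2, 1, 128, 128), torch.rand(2, 3))
print(model(x).shape)  # torch.Size([2, 1])
```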

Evaluating the impact of view position in X-ray imaging for the classification of lung diseases.

Hage Chehade A, Abdallah N, Marion JM, Oueidat M, Chauvet P

PubMed · Jul 28, 2025
Clinical information associated with chest X-ray images, such as view position, patient age, and gender, plays a crucial role in image interpretation, as it influences the visibility of anatomical structures and pathologies. However, most classification models using the ChestX-ray14 dataset have relied solely on image data, disregarding these clinical variables. This study investigates which clinical variable most affects image characteristics and assesses its impact on classification performance. To explore the relationships between clinical variables and image characteristics, unsupervised clustering was applied to group images by similarity; a statistical analysis was then conducted on each cluster to examine its clinical composition in terms of age, gender, and view position. An attention-based CNN model was developed separately for each value of the most influential clinical variable to assess its impact on lung disease classification. The analysis identified view position as the variable with the greatest influence on image characteristics. Accounting for it, the proposed approach achieved a weighted area under the curve (AUC) of 0.8176 for pneumonia classification, surpassing the base model (which ignores view position) by 1.65% and outperforming previous studies by 6.76%. It also improved performance across all 14 diseases in the ChestX-ray14 dataset. These findings highlight the importance of considering view position when developing classification models for chest X-ray analysis: accounting for this characteristic allows more precise disease identification and shows potential for broader clinical application in lung disease evaluation.
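
The abstract does not specify how the weighted AUC was computed; one plausible reading, sketched below with synthetic labels and stand-in scores from view-specific models, weights per-view AUCs by subgroup size:

```python
# Hedged sketch: per-view AUCs combined with weights proportional to subgroup size.
import numpy as np
from sklearn.metrics import roc_auc_score

def weighted_auc(y_true, y_score, view):
    views = np.unique(view)
    aucs, weights = [], []
    for v in views:
        m = view == v                       # images acquired in this view (PA or AP)
        aucs.append(roc_auc_score(y_true[m], y_score[m]))
        weights.append(m.sum())
    return np.average(aucs, weights=weights)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)                 # synthetic disease labels
s = y * 0.4 + rng.random(500) * 0.6         # stand-in scores from two view-specific models
v = rng.choice(["PA", "AP"], 500)           # synthetic view positions
print(round(weighted_auc(y, s, v), 4))
```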

Towards trustworthy artificial intelligence in musculoskeletal medicine: A narrative review on uncertainty quantification.

Vahdani AM, Shariatnia M, Rajpurkar P, Pareek A

PubMed · Jul 28, 2025
Deep learning (DL) models have achieved remarkable performance in musculoskeletal (MSK) medical imaging research, yet their clinical integration remains hindered by their black-box nature and the absence of reliable confidence measures. Uncertainty quantification (UQ) seeks to bridge this gap by pairing each DL prediction with a calibrated estimate of uncertainty, thereby fostering clinician trust and safer deployment. We conducted a targeted narrative review, performing expert-driven searches in PubMed, Scopus, and arXiv and mining references from relevant publications applying UQ in MSK imaging; a thematic synthesis was used to derive a cohesive taxonomy of UQ methodologies. UQ approaches encompass multi-pass methods (e.g., test-time augmentation, Monte Carlo dropout, and model ensembling) that infer uncertainty from variability across repeated inferences; single-pass methods (e.g., conformal prediction and evidential deep learning) that augment each individual prediction with uncertainty metrics; and other techniques that leverage auxiliary information, such as inter-rater variability, hidden-layer activations, or generative reconstruction errors, to estimate confidence. Applications in MSK imaging include highlighting uncertain regions in cartilage segmentation and flagging uncertain predictions in joint implant detection; downstream benefits include enhanced clinical utility and more efficient data annotation pipelines. Embedding UQ into DL workflows is essential for translating high-performance models into clinical practice. Future research should prioritize robust out-of-distribution handling, computational efficiency, and standardized evaluation metrics to accelerate the adoption of trustworthy AI in MSK medicine.
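
Of the multi-pass methods the review catalogues, Monte Carlo dropout is the simplest to illustrate: keep dropout active at inference and treat the spread of repeated predictions as the uncertainty estimate. A minimal sketch, with a toy placeholder model rather than any model from the reviewed literature:

```python
# Monte Carlo dropout: repeated stochastic forward passes yield a predictive
# mean and a dispersion-based uncertainty estimate.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Dropout(0.3), nn.Linear(32, 1))

def mc_dropout_predict(model, x, n_passes=30):
    model.train()  # keeps Dropout stochastic; in practice freeze BatchNorm separately
    with torch.no_grad():
        preds = torch.stack([torch.sigmoid(model(x)) for _ in range(n_passes)])
    return preds.mean(0), preds.std(0)  # predictive mean, per-sample uncertainty

mean, unc = mc_dropout_predict(model, torch.randn(4, 16))
print(mean.squeeze(), unc.squeeze())
```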

Self-Assessment of acute rib fracture detection system from chest X-ray: Preliminary study for early radiological diagnosis.

Lee HK, Kim HS, Kim SG, Park JY

PubMed · Jul 28, 2025
Objective: Detecting and accurately diagnosing rib fractures in chest radiographs is a challenging and time-consuming task for radiologists. This study presents a novel deep learning system designed to automate the detection and segmentation of rib fractures in chest radiographs. Methods: The proposed method combines CenterNet with HRNet v2 for precise fracture-region identification and HRNet-W48 with contextual representation to enhance rib segmentation. A dataset of 1006 chest radiographs from a tertiary hospital in Korea was used, split 7:2:1 for training, validation, and testing. Results: The rib fracture detection component achieved a sensitivity of 0.7171, indicating its effectiveness in identifying fractures. Rib segmentation performance reached a Dice score of 0.86, demonstrating accurate delineation of rib structures. Visual assessment further highlights the model's capability to pinpoint fractures and segment ribs accurately. Conclusion: This approach holds promise for improving rib fracture detection and rib segmentation, offering potential benefits for more efficient and accurate diagnosis in clinical practice.
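
The Dice score used to evaluate the rib segmentation has a standard definition, 2|A∩B| / (|A| + |B|) for binary masks; a hedged sketch on synthetic masks (not the authors' code):

```python
# Dice score for binary segmentation masks.
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((64, 64), dtype=np.uint8); a[10:40, 10:40] = 1  # toy prediction
b = np.zeros((64, 64), dtype=np.uint8); b[15:45, 12:42] = 1  # toy ground truth
print(round(dice_score(a, b), 3))
```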

Implicit Spatiotemporal Bandwidth Enhancement Filter by Sine-activated Deep Learning Model for Fast 3D Photoacoustic Tomography

I Gede Eka Sulistyawan, Takuro Ishii, Riku Suzuki, Yoshifumi Saijo

arXiv preprint · Jul 28, 2025
3D photoacoustic tomography (3D-PAT) using high-frequency hemispherical transducers offers near-omnidirectional reception and enhanced sensitivity to the finer structural details encoded in the high-frequency components of the broadband photoacoustic (PA) signal. However, practical constraints such as a limited number of channels and a bandlimited sampling rate often result in sparse, bandlimited sensor data that degrade image quality. To address this, we revisit the 2D deep learning (DL) approach applied directly to sensor-wise PA radio-frequency (PARF) data. Specifically, we introduce sine activation into the DL model to restore the broadband nature of PARF signals from the observed bandlimited, high-frequency PARF data. Given the scarcity of 3D training data, we employ a simplified training strategy that simulates random spherical absorbers; this combination of a sine-activated model and randomized training is designed to emphasize bandwidth learning over dataset memorization. Our model was evaluated on a leaf-skeleton phantom, a micro-CT-verified 3D spiral phantom, and in vivo human palm vasculature. The proposed training mechanism generalized well across these tests, effectively increasing the sensor density and recovering the spatiotemporal bandwidth. Qualitatively, the sine-activated model uniquely enhanced high-frequency content, producing clearer vascular structure with fewer artefacts. Quantitatively, it recovers the full bandwidth at the -12 dB spectral level and achieves a significantly higher contrast-to-noise ratio with minimal loss of structural similarity index. Lastly, we optimized our approach to enable fast, enhanced 3D-PAT at 2 volumes per second for practical imaging of free-moving targets.
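
Sine activations of this kind are usually built in the SIREN style. The paper's exact architecture is not given here, so the layer below is a hedged sketch in PyTorch with illustrative choices of omega_0 and MLP shape:

```python
# SIREN-style sine-activated layer: sin(omega_0 * Wx + b) with frequency-aware init.
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, in_f, out_f, omega_0=30.0, first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_f, out_f)
        with torch.no_grad():  # SIREN-style weight initialization
            bound = 1 / in_f if first else math.sqrt(6 / in_f) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

net = nn.Sequential(SineLayer(1, 64, first=True), SineLayer(64, 64), nn.Linear(64, 1))
t = torch.linspace(-1, 1, 256).unsqueeze(-1)  # a 1-D signal axis, e.g. time samples
print(net(t).shape)  # torch.Size([256, 1])
```

The sine nonlinearity gives the network an explicit frequency parameter, which is what lets it represent and restore high-frequency signal content that ReLU networks tend to smooth away.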

Evaluation of the impact of artificial intelligence-assisted image interpretation on the diagnostic performance of clinicians in identifying endotracheal tube position on plain chest X-ray: a multi-case multi-reader study.

Novak A, Ather S, Morgado ATE, Maskell G, Cowell GW, Black D, Shah A, Bowness JS, Shadmaan A, Bloomfield C, Oke JL, Johnson H, Beggs M, Gleeson F, Aylward P, Hafeez A, Elramlawy M, Lam K, Griffiths B, Harford M, Aaron L, Seeley C, Luney M, Kirkland J, Wing L, Qamhawi Z, Mandal I, Millard T, Chimbani M, Sharazi A, Bryant E, Haithwaite W, Medonica A

PubMed · Jul 28, 2025
Incorrectly placed endotracheal tubes (ETTs) can lead to serious clinical harm. Studies have demonstrated the potential for artificial intelligence (AI)-led algorithms to detect ETT placement on chest X-ray (CXR) images; however, their effect on clinician accuracy remains unexplored. This study measured the impact of an AI-assisted ETT detection algorithm on the ability of clinical staff to correctly identify ETT misplacement on CXR images. Four hundred CXRs of intubated adult patients were retrospectively sourced from the John Radcliffe Hospital (Oxford) and two other UK NHS hospitals. Images were de-identified and selected from a range of clinical settings, including the intensive care unit (ICU) and emergency department (ED). Each image was independently reported by a panel of thoracic radiologists, whose consensus classification of ETT placement (correct, too low [distal], or too high [proximal]) served as the reference standard. Correct ETT position was defined as the tip located 3-7 cm above the carina, in line with established guidelines. Eighteen clinical readers of varying seniority from six specialties were recruited across four NHS hospitals. Readers viewed the dataset on an online platform and recorded a blinded classification of ETT position for each image. After a four-week washout period, this was repeated with assistance from an AI-assisted image interpretation tool. Reader accuracy, reported confidence, and timings were measured during each study phase. In total, 14,400 image interpretations were undertaken. Pooled accuracy for tube placement classification improved from 73.6% to 77.4% (p = 0.002), and accuracy for identifying critically misplaced tubes increased from 79.3% to 89.0% (p = 0.001). Reader confidence improved with AI assistance, with no change in mean interpretation time (36 s per image). Use of assistive AI technology improved accuracy and confidence in interpreting ETT placement on CXR, especially for critically misplaced tubes, and may provide a useful adjunct to support clinicians in identifying misplaced ETTs on CXR.
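
The abstract reports p-values without naming the test; one conventional choice for paired per-image correctness in a multi-reader design is McNemar's test, sketched below on simulated data (not the study's actual analysis or data):

```python
# McNemar's test on paired correct/incorrect outcomes, unaided vs AI-aided.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(1)
n = 1000                                    # simulated paired interpretations
unaided = rng.random(n) < 0.736             # correct without AI (rate from abstract)
aided = rng.random(n) < 0.774               # correct with AI (rate from abstract)
table = np.array([
    [np.sum(unaided & aided), np.sum(unaided & ~aided)],
    [np.sum(~unaided & aided), np.sum(~unaided & ~aided)],
])
print(mcnemar(table, exact=True).pvalue)    # paired test on discordant cells
```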

A radiomics-based interpretable model integrating delayed-phase CT and clinical features for predicting the pathological grade of appendiceal pseudomyxoma peritonei.

Bai D, Shi G, Liang Y, Li F, Zheng Z, Wang Z

PubMed · Jul 28, 2025
This study aimed to develop an interpretable machine learning model integrating delayed-phase contrast-enhanced CT radiomics with clinical features for noninvasive prediction of pathological grade in appendiceal pseudomyxoma peritonei (PMP), using Shapley Additive Explanations (SHAP) for model interpretation. This retrospective study analyzed 158 pathologically confirmed PMP cases (85 low-grade, 73 high-grade) from January 4, 2015, to April 30, 2024. Comprehensive clinical data were collected, including demographic characteristics, serum tumor markers (CEA, CA19-9, CA125, D-dimer, CA72-4, CA242), and the CT peritoneal cancer index (CT-PCI). Radiomics features were extracted from preoperative contrast-enhanced CT scans using standardized protocols. After rigorous feature selection and five-fold cross-validation, we developed three predictive models: clinical-only, radiomics-only, and a combined clinical-radiomics model using logistic regression. Model performance was evaluated through ROC analysis (AUC), the DeLong test, decision curve analysis (DCA), and the Brier score, with SHAP values providing interpretability. The combined model demonstrated superior performance, achieving AUCs of 0.91 (95% CI: 0.86-0.95) and 0.88 (95% CI: 0.82-0.93) in the training and testing sets, respectively, significantly outperforming the standalone models (P < 0.05). DCA confirmed greater clinical utility across most threshold probabilities, with favorable Brier scores (training: 0.124; testing: 0.142) indicating excellent calibration. SHAP analysis identified the top predictive features: wavelet-LHH_glcm_InverseVariance (radiomics), original_shape_Elongation (radiomics), and CA19-9 (clinical). Our SHAP-interpretable combined model provides an accurate, noninvasive tool for PMP grading, facilitating personalized treatment decisions. Integrating radiomics and clinical data yields superior predictive performance compared with conventional approaches, with potential to improve patient outcomes.
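
The combined-model recipe translates into a short pipeline. A minimal sketch under assumed synthetic data (feature counts and the linear SHAP explainer are illustrative choices, not the authors' exact configuration):

```python
# Concatenate radiomics + clinical features, fit logistic regression with
# five-fold CV, and inspect feature attributions with SHAP.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_rad = rng.normal(size=(158, 10))          # stand-in selected radiomics features
X_cli = rng.normal(size=(158, 4))           # stand-ins, e.g. CEA, CA19-9, CA125, CT-PCI
X = np.hstack([X_rad, X_cli])
y = rng.integers(0, 2, 158)                 # 0 = low-grade, 1 = high-grade PMP

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())

clf.fit(X, y)
explainer = shap.LinearExplainer(clf, X)    # per-feature SHAP attributions
shap_values = explainer(X)
print(shap_values.values.shape)             # (158, 14)
```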

Prediction of 1p/19q state in glioma by integrated deep learning method based on MRI radiomics.

Li F, Li Z, Xu H, Kong G, Zhang Z, Cheng K, Gu L, Hua L

PubMed · Jul 28, 2025
To noninvasively predict the 1p/19q molecular status of lower-grade glioma (LGG) patients, this study developed a deep learning (DL) approach using radiomics to provide a potential decision aid for clinical determination of the molecular stratification of LGG. The study retrospectively collected images and clinical data of 218 patients diagnosed with LGG between July 2018 and July 2022, comprising 155 cases from The Cancer Imaging Archive (TCIA) database and 63 cases from a regional medical centre. Patients' clinical data and MRI images were collected, including contrast-enhanced T1-weighted and T2-weighted images. After pre-processing, tumour regions of interest (ROIs) were segmented by two senior neurosurgeons. An Ensemble Convolutional Neural Network (ECNN), consisting of a Variational Autoencoder (VAE), Information Gain (IG) feature selection, and a Convolutional Neural Network (CNN), was proposed to predict 1p/19q status and compared with four machine learning algorithms (Random Forest, Decision Tree, K-Nearest Neighbour, and Gaussian Naïve Bayes). Fivefold cross-validation was used to evaluate and calibrate the model, and precision, recall, accuracy, F1 score, and area under the curve (AUC) were calculated to assess performance. The cohort comprises 118 patients with 1p/19q codeletion and 100 with 1p/19q non-codeletion. The ECNN demonstrated excellent predictive performance on the validation dataset, achieving an average precision of 0.981, average recall of 0.980, average F1-score of 0.981, and average accuracy of 0.981. Its average AUC of 0.994 surpassed that of the four traditional machine learning algorithms (AUC: 0.523-0.702), suggesting that the ECNN-based model performs well in distinguishing the 1p/19q molecular status of LGG patients. The deep learning model based on conventional MRI radiomics integrates VAE and IG methods and shows the best performance in predicting 1p/19q codeletion status compared with traditional machine learning algorithms. It may become an effective tool for noninvasively identifying molecular features of lower-grade glioma, providing an important reference for clinicians when formulating individualized diagnosis and treatment plans.
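
The ECNN pipeline is described only at a high level, so the sketch below covers just its information-gain step: ranking latent features (random stand-ins for VAE encodings here) by mutual information with 1p/19q status before they reach the CNN head.

```python
# Information-gain (mutual information) feature selection over latent codes.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(42)
latents = rng.normal(size=(218, 64))        # stand-in for VAE latent codes
labels = rng.integers(0, 2, 218)            # 1 = codeletion, 0 = non-codeletion

ig = mutual_info_classif(latents, labels, random_state=0)
top = np.argsort(ig)[::-1][:16]             # keep the 16 most informative dims
selected = latents[:, top]                  # features passed on to the CNN stage
print(top)
```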

Machine learning-based MRI imaging for prostate cancer diagnosis: systematic review and meta-analysis.

Zhao Y, Zhang L, Zhang S, Li J, Shi K, Yao D, Li Q, Zhang T, Xu L, Geng L, Sun Y, Wan J

PubMed · Jul 28, 2025
This study evaluates the diagnostic value of machine learning-based MRI in differentiating benign from malignant prostate cancer and in detecting clinically significant prostate cancer (csPCa, defined as Gleason score ≥7), using systematic review and meta-analysis. Electronic databases (PubMed, Web of Science, Cochrane Library, and Embase) were systematically searched for studies using machine learning-based MRI for prostate cancer diagnosis. Sensitivity, specificity, and area under the curve (AUC) were used to assess diagnostic accuracy for both tasks. A total of 12 studies met the inclusion criteria, with 3474 patients included in the meta-analysis. Machine learning-based MRI demonstrated good diagnostic value for both benign/malignant prostate cancer and csPCa. The pooled sensitivity and specificity for diagnosing benign/malignant prostate cancer were 0.92 (95% CI: 0.83-0.97) and 0.90 (95% CI: 0.68-0.97), respectively, with a combined AUC of 0.96 (95% CI: 0.94-0.98). For csPCa, the pooled sensitivity and specificity were 0.83 (95% CI: 0.77-0.87) and 0.73 (95% CI: 0.65-0.81), respectively, with a combined AUC of 0.86 (95% CI: 0.83-0.89). Machine learning-based MRI shows good diagnostic accuracy for both benign/malignant prostate cancer and csPCa; further in-depth studies are needed to validate these findings.
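
For intuition, pooling per-study sensitivities is often done on the logit scale with inverse-variance weights; the simplified fixed-effect sketch below uses toy counts, whereas diagnostic-accuracy meta-analyses like this one typically fit a bivariate random-effects model instead:

```python
# Simplified inverse-variance pooling of sensitivities on the logit scale.
import numpy as np

tp = np.array([80, 45, 120])                # toy per-study true positives
fn = np.array([8, 9, 12])                   # toy per-study false negatives
sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / fn                       # approximate variance of each logit
w = 1 / var
pooled_logit = np.sum(w * logit) / np.sum(w)
pooled = 1 / (1 + np.exp(-pooled_logit))    # back-transform to a sensitivity
print(round(pooled, 3))
```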

CVT-HNet: a fusion model for recognizing perianal fistulizing Crohn's disease based on CNN and ViT.

Li L, Wang Z, Wang C, Chen T, Deng K, Wei H, Wang D, Li J, Zhang H

PubMed · Jul 28, 2025
Accurate identification of anal fistulas is essential, as it directly affects the severity of subsequent perianal infections, prognostic indicators, and overall treatment outcomes. Traditional manual recognition methods are inefficient, so computer vision methods have been adopted to improve efficiency. Convolutional neural networks (CNNs) form the basis of current computer vision techniques for detecting anal fistulas, but they often struggle to capture long-range dependencies, resulting in inadequate handling of anal fistula images. This study proposes a new fusion model, CVT-HNet, that integrates MobileNet with vision transformer technology, using CNNs to extract local features and Transformers to capture long-range dependencies. In addition, the MobileNetV2 with a Coordinate Attention mechanism and the encoder modules are optimized to improve the precision of anal fistula detection. Comparative experiments show that CVT-HNet achieves an accuracy of 80.66% with significant robustness, surpassing both pure Transformer architectures and other fusion networks. Internal validation demonstrates the reliability and consistency of CVT-HNet, and external validation shows commendable transportability and generalizability. In visualization analysis, CVT-HNet focuses more tightly on the region of interest in anal fistula images, and ablation experiments quantify the contribution of each component module. These results highlight the superior performance and practicality of CVT-HNet: by combining local and global information, it achieves high accuracy, robustness, and generalizability, making it suitable for real-world clinical applications where variability in data is common.
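
The local-plus-global fusion idea can be sketched generically: a MobileNetV2 feature extractor supplies local detail, a Transformer encoder adds long-range context over the resulting patch tokens, and a linear head classifies. The fusion details below are assumptions for illustration, not the authors' exact CVT-HNet design (which also includes Coordinate Attention):

```python
# Generic CNN + Transformer fusion classifier.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class CnnVitFusion(nn.Module):
    def __init__(self, num_classes=2, dim=1280):
        super().__init__()
        self.cnn = mobilenet_v2(weights=None).features      # (B, 1280, H/32, W/32)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        f = self.cnn(x)                                     # local CNN features
        tokens = f.flatten(2).transpose(1, 2)               # (B, HW, 1280) patch tokens
        g = self.transformer(tokens)                        # global self-attention
        return self.head(g.mean(dim=1))                     # pooled classification

print(CnnVitFusion()(torch.randn(1, 3, 224, 224)).shape)    # torch.Size([1, 2])
```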