Page 93 of 1601600 results

Does Machine Learning Prediction of Magnetic Resonance Imaging PI-RADS Correlate with Target Prostate Biopsy Results?

Arafa MA, Farhat KH, Lotfy N, Khan FK, Mokhtar A, Althunayan AM, Al-Taweel W, Al-Khateeb SS, Azhari S, Rabah DM

PubMed · May 26, 2025
This study aimed to predict and classify MRI PI-RADS scores using different machine learning algorithms and to assess the concordance of PI-RADS scoring with the outcome of targeted prostate biopsy. Machine learning (ML) algorithms were used to develop best-fitting models for the prediction and classification of MRI PI-RADS. The Random Forest and Extra Trees models achieved the best performance compared to the other methods. The accuracy of both models was 91.95%. The AUC was 0.9329 for the Random Forest model and 0.9404 for the Extra Trees model. PSA level, PSA density, and diameter of the largest lesion were the most important features for outcome classification. ML prediction enhanced the PI-RADS classification: clinically significant prostate cancer (csPCa) cases increased from 0% to 1.9% in the low-risk PI-RADS class, showing that the model identified some previously missed cases. Predictive machine learning models showed an excellent ability to predict MRI PI-RADS scores and discriminate between low- and high-risk scores. However, caution should be exercised, as a high percentage of negative biopsy cases were assigned PI-RADS 4 and PI-RADS 5 scores. ML integration may enhance the utility of PI-RADS by reducing unnecessary biopsies in low-risk patients (via better csPCa detection) and refining high-risk categorization. Combining PI-RADS scores with significant parameters, such as PSA density, lesion diameter, number of lesions, and age, in decision curve analysis and utility paradigms would assist physicians' clinical decisions.
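The reported AUCs (0.9329 and 0.9404) measure how well the models' scores rank biopsy-positive cases above biopsy-negative ones. A minimal numpy sketch of this metric, using hypothetical scores and labels (not the study's data):

```python
import numpy as np

def auc_mann_whitney(labels, scores):
    """AUC as the probability that a random positive outranks a random negative."""
    labels = np.asarray(labels, dtype=bool)
    pos, neg = np.asarray(scores)[labels], np.asarray(scores)[~labels]
    # Count pairwise wins; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Hypothetical model scores for six biopsies (1 = csPCa-positive).
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(auc_mann_whitney(labels, scores))  # 8/9 ≈ 0.889
```

An AUC of 0.93-0.94, as reported, means a randomly chosen positive case receives a higher score than a randomly chosen negative case about 93-94% of the time.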

Brain Fractal Dimension and Machine Learning can predict first-episode psychosis and risk for transition to psychosis.

Hu Y, Frisman M, Andreou C, Avram M, Riecher-Rössler A, Borgwardt S, Barth E, Korda A

PubMed · May 26, 2025
Although there are notable structural brain abnormalities associated with psychotic disorders, it is still unclear how these abnormalities relate to clinical presentation. The fractal dimension (FD), which captures the complexity and irregularity of brain microstructure, may be a promising feature, as demonstrated in neurodegenerative disorders such as Parkinson's and Alzheimer's disease. Paired with machine learning, it may offer a possible biomarker for the detection and prognosis of psychosis. The purpose of this study is to investigate FD as a structural magnetic resonance imaging (sMRI) feature in individuals at clinical high risk who did not transition to psychosis (CHR_NT), individuals at clinical high risk who transitioned to psychosis (CHR_T), patients with first-episode psychosis (FEP), and healthy controls (HC). Using a machine learning approach that ultimately classifies sMRI images, the goals are (a) to evaluate FD as a potential biomarker and (b) to investigate its ability to predict a subsequent transition to psychosis from the clinical high-risk state. We obtained sMRI images from 194 subjects, including 44 HCs, 77 FEPs, 16 CHR_Ts, and 57 CHR_NTs. We extracted FD features and analyzed them using machine learning methods under six classification schemas: (a) FEP vs. HC, (b) FEP vs. CHR_NT, (c) FEP vs. CHR_T, (d) CHR_NT vs. CHR_T, (e) CHR_NT vs. HC, and (f) CHR_T vs. HC. In addition, the CHR_T group was used as external validation in comparisons (a), (b), and (e) to examine whether the progression of the disorder followed the FEP or CHR_NT pattern. The proposed algorithm achieved a balanced accuracy greater than 0.77. This study has shown that FD can function as a predictive neuroimaging marker, providing fresh insight into the microstructural alterations that occur throughout the course of psychosis.
The effectiveness of FD in the detection of psychosis and transition to psychosis should be established by further research using larger datasets.
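Fractal dimension is commonly estimated by box counting: cover the structure with boxes of shrinking size and fit the slope of log(count) against log(1/size). A minimal 2D numpy sketch, illustrative only (the study extracts FD from 3D sMRI with its own pipeline):

```python
import numpy as np

def box_counting_fd(img):
    """Estimate the fractal dimension of a square, power-of-two binary image."""
    n_px = img.shape[0]
    assert img.shape == (n_px, n_px) and n_px & (n_px - 1) == 0
    sizes, counts = [], []
    s = n_px // 2
    while s >= 1:
        # Partition into s x s boxes and count boxes containing any foreground.
        n = n_px // s
        boxes = img.reshape(n, s, n, s).any(axis=(1, 3))
        sizes.append(s)
        counts.append(boxes.sum())
        s //= 2
    # Slope of log N(s) vs log(1/s) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square has dimension 2.
fd = box_counting_fd(np.ones((64, 64), dtype=bool))
```

For a brain surface or cortical ribbon, the estimated FD falls between the topological and embedding dimensions, which is what makes it sensitive to microstructural irregularity.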

Beyond Accuracy: Evaluating certainty of AI models for brain tumour detection.

Nisa ZU, Bhatti SM, Jaffar A, Mazhar T, Shahzad T, Ghadi YY, Almogren A, Hamam H

PubMed · May 26, 2025
Brain tumors pose a severe health risk, often leading to fatal outcomes if not detected early. While most studies focus on improving classification accuracy, this research emphasizes prediction certainty, quantified through loss values. Traditional metrics like accuracy and precision do not capture confidence in predictions, which is critical for medical applications. This study establishes a correlation between lower loss values and higher prediction certainty, ensuring more reliable tumor classification. We evaluate CNN, ResNet50, XceptionNet, and a proposed model (VGG19 with customized classification layers) using accuracy, precision, recall, and loss. Results show that while accuracy remains comparable across models, the proposed model achieves the best performance (96.95% accuracy, 0.087 loss), outperforming the others in both precision and recall. These findings demonstrate that certainty-aware AI models are essential for reliable clinical decision-making. This study highlights the potential of AI to help offset the shortage of medical professionals by integrating reliable diagnostic tools into healthcare. AI-powered systems can enhance early detection and improve patient outcomes, reinforcing the need for certainty-driven AI adoption in medical imaging.
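The link between low loss and high certainty follows from the definition of cross-entropy: a confident correct prediction incurs much less loss than a hesitant one. A stdlib-only sketch with illustrative probabilities (not the paper's model outputs):

```python
import math

def cross_entropy(probs, true_idx):
    """Per-sample cross-entropy: low loss = high certainty in the true class."""
    return -math.log(probs[true_idx])

confident = [0.95, 0.03, 0.02]  # strongly favors class 0
hesitant  = [0.40, 0.35, 0.25]  # barely favors class 0
print(cross_entropy(confident, 0))  # ≈ 0.051
print(cross_entropy(hesitant, 0))   # ≈ 0.916
```

Both predictions are "correct" under an argmax rule and contribute identically to accuracy, yet their losses differ by an order of magnitude, which is why a low mean loss is a meaningful certainty signal on top of accuracy.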

Machine-learning model based on computed tomography body composition analysis for the estimation of resting energy expenditure: A pilot study.

Palmas F, Ciudin A, Melian J, Guerra R, Zabalegui A, Cárdenas G, Mucarzel F, Rodriguez A, Roson N, Burgos R, Hernández C, Simó R

PubMed · May 26, 2025
The assessment of resting energy expenditure (REE) is a challenging task with current methods. The reference method, indirect calorimetry (IC), is not widely available, and surrogates such as predictive equations and bioimpedance analysis (BIA) show poor agreement with IC. Body composition (BC), in particular muscle mass, plays an important role in REE. In recent years, computed tomography (CT) has emerged as a reliable tool for BC assessment, but its usefulness for REE evaluation has not been examined. In the present study we explored the usefulness of CT-scan imaging to assess REE using machine learning models. Single-centre observational cross-sectional pilot study from January to June 2022, including 90 fasting, clinically stable adults (≥18 years) with no contraindications for indirect calorimetry (IC), bioimpedance (BIA), or abdominal CT-scan. REE was measured using classical predictive equations, IC, BIA and skeletal CT-scan. The proposed model was based on a second-order linear regression with different input parameters, and the output corresponds to the estimated REE. The model was trained and tested using a one-vs-all cross-validation strategy including subjects with different characteristics. Data from 90 subjects were included in the final analysis. Bland-Altman plots showed that the CT-based estimation model had a mean bias of 0 kcal/day (LoA: -508.4 to 508.4) compared with IC, indicating better agreement than most predictive equations and similar agreement to BIA (bias 53.4 kcal/day, LoA: -475.7 to 582.4). Surprisingly, gender and BMI, two of the main variables included in BIA algorithms and predictive equations, were not relevant variables for REE estimated by machine learning from skeletal CT scans.
These findings were consistent with the results of other performance metrics, including mean absolute error (MAE), root mean square error (RMSE), and Lin's concordance correlation coefficient (CCC), which also favored the CT-based method over conventional equations. Our results suggest that the analysis of a CT-scan image by means of machine learning model is a reliable tool for the REE estimation. These findings have the potential to significantly change the paradigm and guidelines for nutritional assessment.
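The Bland-Altman agreement reported above reduces to the mean paired difference (bias) and bias ± 1.96·SD (limits of agreement). A minimal numpy sketch with hypothetical paired REE values, not the study's data:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Return bias and 95% limits of agreement between paired measurements."""
    diff = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical REE values (kcal/day): CT-model estimates vs indirect calorimetry.
ct_model = [1650, 1720, 1540, 1890, 1605]
ic       = [1600, 1750, 1500, 1900, 1580]
bias, lo, hi = bland_altman(ct_model, ic)
```

A bias near zero with narrow limits, as the study reports for the CT-based model against IC, indicates agreement without systematic over- or under-estimation.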

Improving brain tumor diagnosis: A self-calibrated 1D residual network with random forest integration.

Sumithra A, Prathap PMJ, Karthikeyan A, Dhanasekaran S

PubMed · May 26, 2025
Medical specialists need to perform precise MRI analysis for accurate diagnosis of brain tumors. Current research has developed multiple artificial intelligence (AI) techniques to automate brain tumor identification. However, existing approaches often depend on single datasets, limiting their generalization across diverse clinical scenarios. This research introduces SCR-1DResNet, a new diagnostic tool for brain tumor detection that combines a self-calibrated Random Forest with a one-dimensional residual network. The work starts with MRI image acquisition from multiple Kaggle datasets, then proceeds through stepwise processing that removes noise, enhances images, performs resizing and normalization, and conducts skull stripping. After data collection, the WaveSegNet model extracts important tumor attributes at multiple scales. A Random Forest classifier and a one-dimensional residual network are combined into the SCR-1DResNet model via self-calibration optimization to improve prediction reliability. Tests show the proposed system achieves classification precision of 98.50%, accuracy of 98.80%, and recall of 97.80%. The SCR-1DResNet model demonstrates superior diagnostic capability and faster performance, showing strong prospects for clinical decision support systems and improved neurological and oncological patient care.

Diffusion based multi-domain neuroimaging harmonization method with preservation of anatomical details.

Lan H, Varghese BA, Sheikh-Bahaei N, Sepehrband F, Toga AW, Choupan J

PubMed · May 26, 2025
In multi-center neuroimaging studies, technical variability caused by batch differences can hinder the ability to aggregate data across sites and negatively impact the reliability of study-level results. Recent efforts in neuroimaging harmonization have aimed to minimize these technical gaps and reduce technical variability across batches. While Generative Adversarial Networks (GANs) have been a prominent method for harmonization tasks, GAN-harmonized images suffer from artifacts or anatomical distortions. Given the advances of denoising diffusion probabilistic models, which produce high-fidelity images, we assessed the efficacy of the diffusion model for neuroimaging harmonization. While GAN-based methods intrinsically transform imaging styles between two domains per model, we demonstrate the diffusion model's superior capability to harmonize images across multiple domains with a single model. Our experiments highlight that the learned domain-invariant anatomical condition enables the model to accurately preserve anatomical details while separating out batch differences at each diffusion step. Our proposed method was tested on T1-weighted MRI images from two public neuroimaging datasets, ADNI1 and ABIDE II, yielding harmonization results with consistent anatomy preservation and a superior FID score compared to GAN-based methods. We conducted multiple analyses, including extensive quantitative and qualitative evaluations against the baseline models, an ablation study showcasing the benefits of the learned domain-invariant conditions, and improvements in the consistency of perivascular spaces segmentation and volumetric analysis through harmonization.
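Denoising diffusion probabilistic models corrupt an image through a fixed noise schedule and learn to reverse the process step by step. A minimal numpy sketch of the forward (noising) step q(x_t | x_0), with a hypothetical linear beta schedule rather than the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)   # hypothetical linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)  # cumulative signal retention per step

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(a_bar_t) x0, (1 - a_bar_t) I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = rng.standard_normal((32, 32))   # stand-in for an MRI slice
x_early, x_late = q_sample(x0, 10), q_sample(x0, T - 1)
# By t = T-1 almost no signal remains, since alpha_bar is near zero.
```

Harmonization methods like the one described condition the learned reverse process on anatomy-preserving information, so that what is removed across steps is batch style rather than structure.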

Rate and Patient Specific Risk Factors for Periprosthetic Acetabular Fractures during Primary Total Hip Arthroplasty using a Pressfit Cup.

Simon S, Gobi H, Mitterer JA, Frank BJ, Huber S, Aichmair A, Dominkus M, Hofstaetter JG

PubMed · May 26, 2025
Periprosthetic acetabular fractures following primary total hip arthroplasty (THA) using a cementless acetabular component range from occult to severe fractures. The aims of this study were to evaluate the perioperative periprosthetic acetabular fracture rate and patient-specific risks of a modular cementless acetabular component. In this study, we included 7,016 primary THAs (61.4% women, 38.6% men; median age, 67 years; interquartile range, 58 to 74) that received a cementless hydroxyapatite-coated modular titanium press-fit acetabular component from a single manufacturer between January 2013 and September 2022. All perioperative radiographs and computed tomography (CT) scans were analyzed for all causes. Patient-specific data and the revision rate were retrieved, and radiographic measurements were performed using artificial intelligence-based software. Following matching based on patient demographics, patients who did and did not have periacetabular fractures were compared to identify patient-specific and radiographic risk factors. The fracture rate was 0.8% (56 of 7,016). Overall, 33.9% (19 of 56) were small occult fractures solely visible on CT. Additionally, 21 of 56 (37.5%) were stable small fractures. Both groups (40 of 56 (71.4%)) were treated nonoperatively. Revision THA was necessary in 16 of 56, resulting in an overall revision rate of 0.2% (16 of 7,016). Patient-specific risk factors were small acetabular component size (≤ 50), low body mass index (BMI) (< 24.5), higher age (> 68 years), female sex, a low lateral center-edge angle (< 24°), a high extrusion index (> 20%), a high Sharp angle (> 38°), and a high Tönnis angle (> 10°). A wide range of periprosthetic acetabular fractures were observed following primary cementless THA. In total, 71.4% of acetabular fractures were small cracks that did not necessitate revision surgery.
By identifying patient-specific risk factors, such as advanced age, female sex, low BMI, and hip dysplasia, future complications may be reduced.

Deep learning model for malignancy prediction of TI-RADS 4 thyroid nodules with high-risk characteristics using multimodal ultrasound: A multicentre study.

Chu X, Wang T, Chen M, Li J, Wang L, Wang C, Wang H, Wong ST, Chen Y, Li H

PubMed · May 26, 2025
The automatic screening of thyroid nodules using computer-aided diagnosis holds great promise for reducing missed and misdiagnosed cases in clinical practice. However, most current research focuses on single-modality images and does not fully leverage the comprehensive information in multimodal medical images, limiting model performance. To enhance screening accuracy, this study uses a deep learning framework that integrates high-dimensional convolutions of B-mode ultrasound (BMUS) and strain elastography (SE) images to predict the malignancy of TI-RADS 4 thyroid nodules with high-risk features. First, we extract nodule regions from the images and expand the boundary areas. Then, adaptive particle swarm optimization (APSO) and contrast limited adaptive histogram equalization (CLAHE) algorithms are applied to enhance ultrasound image contrast. Finally, deep learning techniques are used to extract and fuse high-dimensional features from both ultrasound modalities to classify benign and malignant thyroid nodules. The proposed model achieved an AUC of 0.937 (95% CI 0.917-0.949) and 0.927 (95% CI 0.907-0.948) in the test and external validation sets, respectively, demonstrating strong generalization ability. The model significantly outperformed three groups of radiologists, and with the model's assistance, all three radiologist groups showed improved diagnostic performance. Furthermore, heatmaps generated by the model align closely with radiologists' expertise, further supporting its credibility. The results indicate that our model can assist clinical thyroid nodule diagnosis, reducing the risk of missed and incorrect diagnoses, particularly for high-risk populations, and holds significant clinical value.
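CLAHE builds on plain histogram equalization by operating per tile with a clip limit. A global histogram-equalization sketch in numpy illustrates the underlying contrast-enhancement step (a simplified stand-in, not the paper's APSO/CLAHE pipeline):

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Stretch the cumulative distribution to span the full 0-255 range.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# Hypothetical low-contrast ultrasound patch: values squeezed into 100-140.
rng = np.random.default_rng(1)
patch = rng.integers(100, 141, size=(64, 64)).astype(np.uint8)
eq = hist_equalize(patch)  # now spans the full intensity range
```

CLAHE additionally clips each tile's histogram before computing the mapping, which limits noise amplification in homogeneous regions such as the thyroid parenchyma.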

ScanAhead: Simplifying standard plane acquisition of fetal head ultrasound.

Men Q, Zhao H, Drukker L, Papageorghiou AT, Noble JA

PubMed · May 26, 2025
The fetal standard plane acquisition task aims to detect an ultrasound (US) image characterized by specified anatomical landmarks and appearance for assessing fetal growth. In practice, however, due to variability in operator skill and possible fetal motion, it can be challenging for a human operator to acquire a satisfactory standard plane. To support the operator in this task, this paper first describes an approach to automatically predict the fetal head standard plane from a video segment approaching the standard plane. A transformer-based image predictor is proposed to produce a high-quality standard plane by understanding diverse scales of head anatomy within the US video frame. Because of the visual gap between the video frames and the standard plane image, the predictor is equipped with an offset adaptor that performs domain adaptation to translate off-plane structures into the anatomies that would usually appear in a standard plane view. To enhance the anatomical details of the predicted US image, the approach is extended with a second modality, US probe movement, which provides 3D location information. Quantitative and qualitative studies conducted on two different head biometry planes demonstrate that the proposed US image predictor produces clinically plausible standard planes with superior performance to comparative published methods. The results of the dual-modality solution show improved visualization with enhanced anatomical details of the predicted US image. Clinical evaluations also demonstrate consistency between the predicted echo textures and the echo patterns expected in a typical real standard plane, indicating its clinical feasibility for improving the standard plane acquisition process.

Deep Learning for Pneumonia Diagnosis: A Custom CNN Approach with Superior Performance on Chest Radiographs

Mehta, A., Vyas, M.

medRxiv preprint · May 26, 2025
A major global health issue causing serious illness and death, pneumonia underlines the need for rapid and accurate identification and treatment. Despite advances in imaging technology, manual reading of chest X-rays by radiologists remains the standard method for pneumonia detection, which delays both diagnosis and treatment. This study proposes a deep learning method to automate pneumonia detection. The approach employs a custom convolutional neural network (CNN) trained on pneumonia-positive and pneumonia-negative cases from several healthcare providers. Various pre-processing steps were applied to the chest radiographs to improve data integrity and efficiency before training. In a comparison with VGG19, ResNet50, InceptionV3, DenseNet201, and MobileNetV3, the custom CNN model achieved the best balance of accuracy, recall, and parameter complexity, reaching 96.5% accuracy and a 96.6% F1 score. This study contributes to the development of an automated, reliable pneumonia detection system, which could improve patient outcomes and increase healthcare efficiency. The full project is available here.
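The accuracy and F1 figures reported above all derive from confusion-matrix counts. A stdlib-only sketch with hypothetical counts (not the study's actual test set) showing how the two metrics relate:

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for pneumonia (positive) vs. normal radiographs.
acc, prec, rec, f1 = classification_metrics(tp=480, fp=15, fn=20, tn=485)
# acc = 0.965; f1 is the harmonic mean of precision and recall.
```

On roughly balanced data the two numbers track each other, as in the reported 96.5% accuracy and 96.6% F1; they diverge when classes are imbalanced, which is why both are worth reporting.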