
Bodoque-Cubas J, Fernández-Sáez J, Martínez-Hervás S, Pérez-Lacasta MJ, Carles-Lavila M, Pallarés-Gasulla RM, Salazar-González JJ, Gil-Boix JV, Miret-Llauradó M, Aulinas-Masó A, Argüelles-Jiménez I, Tofé-Povedano S

pubmed logopapers Jul 12 2025
The increasing incidence of thyroid nodules (TN) raises concerns about overdiagnosis and overtreatment. This study evaluates the clinical and economic impact of KOIOS, an FDA-approved artificial intelligence (AI) tool for the management of TN. A retrospective analysis was conducted on 176 patients who underwent thyroid surgery between May 2022 and November 2024. Ultrasound images were evaluated independently by expert and novice operators using the American College of Radiology Thyroid Imaging Reporting and Data System (ACR-TIRADS), while KOIOS provided AI-adapted risk stratification. Sensitivity, specificity, and receiver operating characteristic (ROC) curve analysis were performed. The incremental cost-effectiveness ratio (ICER) was defined based on the number of optimal care interventions (fine-needle aspiration biopsy [FNAB] and thyroid surgery). Both deterministic and probabilistic sensitivity analyses were conducted to evaluate model robustness. KOIOS AI demonstrated similar diagnostic performance to the expert operator (AUC: 0.794, 95% CI: 0.718-0.871 vs. 0.784, 95% CI: 0.706-0.861; p = 0.754) and significantly outperformed the novice operator (AUC: 0.619, 95% CI: 0.526-0.711; p < 0.001). ICER analysis estimated the cost per additional optimal care decision at -€8,085.56, indicating that KOIOS is a dominant and cost-saving strategy from a third-party payer perspective over a one-year horizon. Deterministic sensitivity analysis identified surgical costs as the main driver of variability, while probabilistic analysis consistently favored KOIOS as the optimal strategy. KOIOS AI is a cost-effective alternative, particularly in reducing overdiagnosis and overtreatment of benign TNs. Prospective, real-life studies are needed to validate these findings and explore long-term implications.
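The ICER above is simply the cost difference divided by the effectiveness difference between two strategies; a minimal sketch of the calculation (the function name and all numbers are illustrative, not the study's data):

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per additional
    unit of effect (here, per additional optimal-care decision)."""
    delta_cost = cost_new - cost_old
    delta_effect = effect_new - effect_old
    if delta_effect == 0:
        raise ValueError("equal effectiveness; ICER is undefined")
    return delta_cost / delta_effect

# Hypothetical arm totals: the new strategy costs less and yields more
# optimal decisions, so the ICER is negative (dominant, cost-saving).
value = icer(cost_new=90_000, cost_old=130_000, effect_new=120, effect_old=115)
# value == -8000.0
```

A negative ICER with higher effectiveness is the "dominant strategy" case the abstract reports.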

Choi JW, Cho YJ, Lee SB, Lee S, Hwang JY, Choi YH, Cheon JE, Lee J

pubmed logopapers Jul 12 2025
Magnetic resonance imaging (MRI) is crucial in pediatric radiology; however, the prolonged scan time is a major drawback that often requires sedation. Deep learning reconstruction (DLR) is a promising method for accelerating MRI acquisition. This study evaluated the clinical feasibility of accelerated brain MRI with DLR in pediatric neuroimaging, focusing on image quality compared with conventional MRI. In this retrospective study, 116 pediatric participants (mean age 7.9 ± 5.4 years) underwent routine brain MRI with three reconstruction methods: conventional MRI without DLR (C-MRI), conventional MRI with DLR (DLC-MRI), and accelerated MRI with DLR (DLA-MRI). Two pediatric radiologists independently assessed the overall image quality, sharpness, artifacts, noise, and lesion conspicuity. Quantitative image analysis included the measurement of image noise and coefficient of variation (CoV). DLA-MRI reduced the scan time by 43% compared with C-MRI. Compared with C-MRI, DLA-MRI demonstrated higher scores for overall image quality, noise, and artifacts, as well as similar or higher scores for lesion conspicuity, but similar or lower scores for sharpness. DLC-MRI demonstrated the highest scores for all the parameters. Despite variations in image quality and lesion conspicuity, the lesion detection rates were 100% across all three reconstructions. Quantitative analysis revealed lower noise and CoV for DLA-MRI than for C-MRI. Interobserver agreement was substantial to almost perfect (weighted Cohen's kappa = 0.72-0.97). DLR enabled faster MRI with improved image quality compared with conventional MRI, highlighting its potential to address prolonged MRI scan times in pediatric neuroimaging and optimize clinical workflows.
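The coefficient of variation used in the quantitative analysis is the standard deviation divided by the mean, so a noisier region yields a higher value at the same mean signal; a minimal sketch (the ROI intensity values are hypothetical):

```python
import statistics

def coefficient_of_variation(values):
    """CoV = SD / mean: a unitless measure of relative dispersion,
    here reflecting image noise relative to signal intensity."""
    mean = statistics.fmean(values)
    if mean == 0:
        raise ValueError("CoV is undefined for a zero-mean signal")
    return statistics.stdev(values) / mean

quiet_roi = [100, 102, 98, 101, 99]   # hypothetical signal intensities
noisy_roi = [100, 120, 80, 110, 90]   # same mean, larger spread
```

A lower CoV for DLA-MRI than C-MRI is what the abstract reports as reduced relative noise.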

Saranya M, Praveena R

pubmed logopapers Jul 12 2025
Brain tumours originate in the brain or its surrounding structures, such as the pituitary and pineal glands, and can be benign or malignant. Benign tumours may grow into neighbouring tissues, whereas metastatic tumours occur when cancer from other organs spreads to the brain. Accurate identification and staging of such tumours are critical, because nearly every aspect of a patient's care depends on correct diagnosis and staging. Image segmentation is highly valuable in medical imaging because it enables simulation of surgical operations, disease diagnosis, and anatomical and pathological analysis. For the prediction and classification of brain tumours in MRI, a combined classification and localization framework is proposed that connects a Fully Convolutional Neural Network (FCNN) with You Only Look Once version 5 (YOLOv5). The FCNN model classifies images into four categories: benign, glial, pituitary adenoma-related, and meningeal. It uses a derivative of Root Mean Square Propagation (RMSProp) optimization to boost the classification rate, and performance was evaluated with the standard measures of precision, recall, F1 score, specificity, and accuracy. The YOLOv5 architecture is then incorporated for more accurate detection of tumours, with the FCNN subsequently generating the tumour segmentation masks. The analysis shows that the proposed approach is more accurate than existing systems, achieving 98.80% average accuracy in the identification and categorization of brain tumours. This integration of detection and segmentation models offers an effective technique for enhancing the diagnostic performance of the system and adds value to the medical imaging field. On the basis of these findings, advances in deep learning architectures could improve tumour diagnosis while contributing to the fine-tuning of clinical management.
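The evaluation measures named above all follow from a binary confusion matrix; a minimal sketch with hypothetical counts (these are the standard definitions, not code from the study):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard classification measures from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "accuracy": accuracy, "f1": f1}

# Hypothetical counts for one tumour class treated as the positive label.
m = binary_metrics(tp=90, fp=10, tn=85, fn=15)
```

Multi-class accuracy, as reported in the abstract, averages such per-class results over all categories.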

Awan HA, Chaudhary MFA, Reinhardt JM

pubmed logopapers Jul 12 2025
Chronic obstructive pulmonary disease (COPD) is a heterogeneous condition with complex structural and functional impairments. For decades, chest computed tomography (CT) has been used to quantify various abnormalities related to COPD. More recently, with newer data-driven approaches, biomarker development and validation have evolved rapidly. Studies now target multiple anatomical structures, including the lung parenchyma, airways, vasculature, and fissures, to better characterize COPD. This review explores the evolution of chest CT biomarkers in COPD, beginning with traditional thresholding approaches that quantify emphysema and airway dimensions. We then highlight some of the texture analysis efforts that have been made over the years for subtyping lung tissue. We also discuss image registration-based biomarkers that have enabled spatially aware characterization of local abnormalities within the lungs. More recently, deep learning has enabled automated biomarker extraction, offering improved precision in phenotype characterization and outcome prediction; we highlight the most recent of these approaches as well. Despite these advancements, several challenges remain in terms of dataset heterogeneity, model generalizability, and clinical interpretability. Lastly, this review provides a structured overview of these limitations and highlights the future potential of CT biomarkers in personalized COPD management.
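The traditional thresholding approach mentioned above quantifies emphysema as the fraction of lung voxels below a density cutoff, commonly −950 HU (%LAA-950); a minimal sketch over a flat list of Hounsfield-unit values (the voxel values are made up):

```python
def laa_percent(lung_hu_values, threshold=-950):
    """Percent of lung voxels with attenuation below the threshold:
    the classic densitometric emphysema biomarker (%LAA-950)."""
    below = sum(1 for hu in lung_hu_values if hu < threshold)
    return 100.0 * below / len(lung_hu_values)

# Hypothetical voxels: four of the eight fall below -950 HU.
voxels = [-980, -960, -940, -900, -970, -850, -990, -930]
score = laa_percent(voxels)
# score == 50.0
```

In practice the input is a segmented lung mask from a 3-D CT volume rather than a flat list, but the thresholding logic is the same.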

Park J, Yoon YE, Jang Y, Jung T, Jeon J, Lee SA, Choi HM, Hwang IC, Chun EJ, Cho GY, Chang HJ

pubmed logopapers Jul 12 2025
This study aims to present the Segmentation-based Myocardial Advanced Refinement Tracking (SMART) system, a novel artificial intelligence (AI)-based framework for transthoracic echocardiography (TTE) that incorporates motion tracking and left ventricular (LV) myocardial segmentation for automated LV mass (LVM) and global longitudinal strain (LVGLS) assessment. The SMART system demonstrates LV speckle tracking based on motion vector estimation, refined by structural information using endocardial and epicardial segmentation throughout the cardiac cycle. This approach enables automated measurement of LVM<sub>SMART</sub> and LVGLS<sub>SMART</sub>. The feasibility of SMART is validated in 111 hypertrophic cardiomyopathy (HCM) patients (median age: 58 years, 69% male) who underwent TTE and cardiac magnetic resonance imaging (CMR). LVGLS<sub>SMART</sub> showed a strong correlation with conventional manual LVGLS measurements (Pearson's correlation coefficient [PCC] 0.851; mean difference: 0 [-2 to 0]). When compared to CMR as the reference standard for LVM, the conventional dimension-based TTE method overestimated LVM (PCC 0.652; mean difference: 106 [90 to 123]), whereas LVM<sub>SMART</sub> demonstrated excellent agreement with CMR (PCC 0.843; mean difference: 1 [-11 to 13]). For predicting extensive myocardial fibrosis, LVGLS<sub>SMART</sub> and LVM<sub>SMART</sub> exhibited performance comparable to conventional LVGLS and CMR (AUC: 0.72 and 0.66, respectively). Patients identified as high risk for extensive fibrosis by LVGLS<sub>SMART</sub> and LVM<sub>SMART</sub> had significantly higher rates of adverse outcomes, including heart failure hospitalization, new-onset atrial fibrillation, and defibrillator implantation. The SMART technique provides a comparable LVGLS evaluation and a more accurate LVM assessment than conventional TTE, with predictive values for myocardial fibrosis and adverse outcomes. These findings support its utility in HCM management.
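Longitudinal strain itself is a relative length change between end-diastole and end-systole; a sketch of the underlying formula (the lengths are hypothetical; the SMART system derives them from segmentation-refined speckle tracking rather than from two scalar inputs):

```python
def longitudinal_strain_percent(ed_length, es_length):
    """Strain (%) = (ES length - ED length) / ED length * 100.
    Myocardial shortening gives a negative value; a less-negative
    GLS indicates impaired longitudinal function."""
    return 100.0 * (es_length - ed_length) / ed_length

# Hypothetical myocardial contour lengths in cm.
gls = longitudinal_strain_percent(ed_length=16.0, es_length=13.0)
# gls == -18.75
```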

Jung J, Phillipi M, Tran B, Chen K, Chan N, Ho E, Sun S, Houshyar R

pubmed logopapers Jul 12 2025
Large language models (LLM) have shown promise in assisting medical decision-making. However, there is limited literature exploring the diagnostic accuracy of LLMs in generating differential diagnoses from text-based image descriptions and clinical presentations in pediatric radiology. To examine the performance of multiple proprietary LLMs in producing accurate differential diagnoses for text-based pediatric radiological cases without imaging. One hundred sixty-four cases were retrospectively selected from a pediatric radiology textbook and converted into two formats: (1) image description only, and (2) image description with clinical presentation. The ChatGPT-4V, Claude 3.5 Sonnet, and Gemini 1.5 Pro algorithms were given these inputs and tasked with providing a top 1 diagnosis and a top 3 differential diagnoses. Accuracy of responses was assessed by comparison with the original literature. Top 1 accuracy was defined as whether the top 1 diagnosis matched the textbook, and top 3 differential accuracy was defined as the number of diagnoses in the model-generated top 3 differential that matched any of the top 3 diagnoses in the textbook. McNemar's test, Cochran's Q test, and the Friedman test were used to compare algorithms, and the Wilcoxon signed-rank test was used to assess the impact of added clinical information. There was no significant difference in top 1 accuracy between ChatGPT-4V, Claude 3.5 Sonnet, and Gemini 1.5 Pro when only image descriptions were provided (56.1% [95% CI 48.4-63.5], 64.6% [95% CI 57.1-71.5], 61.6% [95% CI 54.0-68.7]; P = 0.11). Adding clinical presentation to image description significantly improved top 1 accuracy for ChatGPT-4V (64.0% [95% CI 56.4-71.0], P = 0.02) and Claude 3.5 Sonnet (80.5% [95% CI 73.8-85.8], P < 0.001). For image description and clinical presentation cases, Claude 3.5 Sonnet significantly outperformed both ChatGPT-4V and Gemini 1.5 Pro (P < 0.001). 
For top 3 differential accuracy, no significant differences were observed between ChatGPT-4V, Claude 3.5 Sonnet, and Gemini 1.5 Pro, regardless of whether the cases included only image descriptions (1.29 [95% CI 1.16-1.41], 1.35 [95% CI 1.23-1.48], 1.37 [95% CI 1.25-1.49]; P = 0.60) or both image descriptions and clinical presentations (1.33 [95% CI 1.20-1.45], 1.52 [95% CI 1.41-1.64], 1.48 [95% CI 1.36-1.59]; P = 0.72). Only Claude 3.5 Sonnet performed significantly better when clinical presentation was added (P < 0.001). Commercial LLMs performed similarly on pediatric radiology cases in providing top 1 accuracy and top 3 differential accuracy when only a text-based image description was used. Adding clinical presentation significantly improved top 1 accuracy for ChatGPT-4V and Claude 3.5 Sonnet, with Claude showing the largest improvement. Claude 3.5 Sonnet outperformed both ChatGPT-4V and Gemini 1.5 Pro in top 1 accuracy when both image and clinical data were provided. No significant differences were found in top 3 differential accuracy across models in any condition.
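The two metrics defined in the abstract (top 1 match, and the count of model top 3 diagnoses overlapping the reference top 3 differential) can be sketched directly; the case data below are hypothetical:

```python
def top1_accuracy(ranked_predictions, correct_diagnoses):
    """Fraction of cases where the model's first diagnosis is correct."""
    hits = sum(preds[0] == truth
               for preds, truth in zip(ranked_predictions, correct_diagnoses))
    return hits / len(correct_diagnoses)

def mean_top3_overlap(ranked_predictions, reference_top3):
    """Average number of the model's top 3 diagnoses (0-3 per case)
    that appear in the reference top 3 differential."""
    overlaps = [len(set(preds[:3]) & set(ref))
                for preds, ref in zip(ranked_predictions, reference_top3)]
    return sum(overlaps) / len(overlaps)

preds = [["A", "B", "C"], ["X", "Y", "Z"]]          # model output per case
truth = ["A", "Q"]                                  # textbook top diagnosis
truth_top3 = [["A", "B", "D"], ["Y", "Q", "R"]]     # textbook differentials
```

On these toy cases, top 1 accuracy is 0.5 and the mean top 3 overlap is 1.5, matching the 0-3 scale reported in the study.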

Houshi S, Khodakarami Z, Shaygannejad A, Khosravi F, Shaygannejad V

pubmed logopapers Jul 12 2025
Disability progression despite disease-modifying therapy remains a major challenge in multiple sclerosis (MS). Artificial intelligence (AI) models exploiting magnetic resonance imaging (MRI) promise personalized prognostication, yet their real-world accuracy is uncertain. To systematically review and meta-analyze MRI-based AI studies predicting future disability progression in MS. Five databases were searched from inception to 17 May 2025 following PRISMA. Eligible studies used MRI in an AI model to forecast changes in the Expanded Disability Status Scale (EDSS) or equivalent metrics. Two reviewers conducted study selection, data extraction, and QUADAS-2 assessment. Random-effects meta-analysis was applied when ≥3 studies reported compatible regression statistics. Twenty-one studies with 12,252 MS patients met inclusion criteria. Five used regression on continuous EDSS, fourteen used classification, one used time-to-event modeling, and one used both. Conventional machine learning predominated (57% of studies), followed by deep learning (38%). Median classification area under the curve (AUC) was 0.78 (range 0.57-0.86); median regression root-mean-square error (RMSE) was 1.08 EDSS points. Pooled RMSE across regression studies was 1.31 (95% CI 1.02-1.60; I<sup>2</sup> = 95%). Deep learning conferred only marginal, non-significant gains over classical algorithms. External validation appeared in six studies; calibration, decision-curve analysis, and code releases were seldom reported. QUADAS-2 indicated generally low patient-selection bias but frequent index-test concerns. MRI-driven AI models predict MS disability progression with moderate accuracy, but error margins that exceed one EDSS point limit individual-level utility. Harmonized endpoints, larger multicenter cohorts, rigorous external validation, and prospective clinician-in-the-loop trials are essential before routine clinical adoption.
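Random-effects pooling of per-study error estimates, as in the meta-analysis above, is commonly done with the DerSimonian-Laird estimator; a minimal sketch (the estimates and variances are illustrative, not the review's data):

```python
def dersimonian_laird(estimates, variances):
    """Random-effects pooled estimate and between-study variance (tau^2)
    via the DerSimonian-Laird method."""
    k = len(estimates)
    w = [1.0 / v for v in variances]                  # inverse-variance weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                # heterogeneity estimate
    w_star = [1.0 / (v + tau2) for v in variances]    # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    return pooled, tau2

# Hypothetical per-study RMSEs (EDSS points) with their variances.
pooled_rmse, tau2 = dersimonian_laird([1.0, 1.5, 1.1], [0.04, 0.09, 0.01])
```

A large tau² relative to the within-study variances corresponds to the high I² heterogeneity the review reports.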

Wei Y, Huang B, Zhao B, Lin Z, Zhou SZ

pubmed logopapers Jul 12 2025
Augmented reality (AR) technology holds significant promise for enhancing surgical navigation in needle-based procedures such as biopsies and ablations. However, most existing AR systems rely on patient-specific markers, which disrupt clinical workflows and require time-consuming preoperative calibrations, thereby hindering operational efficiency and precision. We developed a novel multi-camera AR navigation system that eliminates the need for patient-specific markers by utilizing ceiling-mounted markers mapped to fixed medical imaging devices. A hierarchical optimization framework integrates both marker mapping and multi-camera calibration. Deep learning techniques are employed to enhance marker detection and registration accuracy. Additionally, a vision-based pose compensation method is implemented to mitigate errors caused by patient movement, improving overall positional accuracy. Validation through phantom experiments and simulated clinical scenarios demonstrated an average puncture accuracy of 3.72 ± 1.21 mm. The system reduced needle placement time by 20 s compared to traditional marker-based methods. It also effectively corrected errors induced by patient movement, with a mean positional error of 0.38 pixels and an angular deviation of 0.51°. These results highlight the system's precision, adaptability, and reliability in realistic surgical conditions. This marker-free AR guidance system significantly streamlines surgical workflows while enhancing needle navigation accuracy. Its simplicity, cost-effectiveness, and adaptability make it an ideal solution for both high- and low-resource clinical environments, offering the potential for improved precision, reduced procedural time, and better patient outcomes.

Krismer F, Seppi K, Poewe W

pubmed logopapers Jul 12 2025
Neuroimaging plays a crucial role in diagnosing multiple system atrophy (MSA) and monitoring progressive neurodegeneration in this fatal disease. Advanced MRI techniques and post-processing methods have demonstrated significant volume loss and microstructural changes in brain regions well known to be affected by MSA pathology. These observations can be exploited to support the differential diagnosis of MSA, distinguishing it from Parkinson's disease and progressive supranuclear palsy with high sensitivity and specificity. Longitudinal studies reveal aggressive neurodegeneration in MSA, with notable atrophy rates in the cerebellum, pons, and putamen. Radiotracer imaging using PET and SPECT has shown characteristic disease-related patterns, aiding in differential diagnosis and tracking disease progression. Future research should focus on early diagnosis, particularly in prodromal stages, and the development of reliable biomarkers for clinical trials. Combining different neuroimaging modalities and machine learning algorithms can enhance diagnostic precision and provide a comprehensive understanding of MSA pathology.

Chen Y, Sun Z, Zhong H, Chen Y, Wu X, Su L, Lai Z, Zheng T, Lyu G, Su Q

pubmed logopapers Jul 12 2025
This study aimed to develop and evaluate eight machine learning models based on multimodal ultrasound to precisely predict diabetic tibial neuropathy (DTN) in patients. Additionally, the SHapley Additive exPlanations (SHAP) framework was introduced to quantify the importance of each feature variable, providing a precise and noninvasive assessment tool for DTN patients, optimizing clinical management strategies, and enhancing patient prognosis. A prospective analysis was conducted using multimodal ultrasound and clinical data from 255 suspected DTN patients who visited the Second Affiliated Hospital of Fujian Medical University between January 2024 and November 2024. Key features were selected using Least Absolute Shrinkage and Selection Operator (LASSO) regression. Predictive models were constructed using Extreme Gradient Boosting (XGB), Logistic Regression, Support Vector Machines, k-Nearest Neighbors, Random Forest, Decision Tree, Naïve Bayes, and Neural Network algorithms. The SHAP method was employed to refine model interpretability. Furthermore, to verify the generalizability of the model, data from 135 patients at three other tertiary hospitals were collected for external testing. LASSO regression identified echo intensity (EI), cross-sectional area (CSA), mean elasticity value (Emean), superb microvascular imaging (SMI), and history of smoking as key features for DTN prediction. The XGB model achieved an Area Under the Curve (AUC) of 0.94, 0.83, and 0.79 in the training, internal test, and external test sets, respectively. SHAP analysis ranked the importance of EI, CSA, Emean, SMI, and history of smoking. Personalized prediction explanations provided by the SHAP values demonstrated the contribution of each feature to the final prediction, enhancing model interpretability. Furthermore, decision plots depicted how different features influenced mispredictions, thereby facilitating further model optimization or feature adjustment. 
This study proposed a DTN prediction model based on machine-learning algorithms applied to multimodal ultrasound data. The results indicated the superior performance of the XGB model and its interpretability was enhanced using SHAP analysis. This cost-effective and user-friendly approach provides potential support for personalized treatment and precision medicine for DTN.
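SHAP attributions like those described above are Shapley values over feature coalitions; for intuition, they can be brute-forced exactly on a toy model (this sketch is illustrative and is not the study's XGB/SHAP pipeline, which uses efficient tree-specific algorithms):

```python
from itertools import permutations

def shapley_values(model, baseline, instance):
    """Exact Shapley attribution by averaging each feature's marginal
    contribution over all feature orderings; features not yet revealed
    keep their baseline value. Feasible only for a handful of features."""
    n = len(instance)
    phi = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        current = list(baseline)
        prev = model(current)
        for idx in order:
            current[idx] = instance[idx]
            now = model(current)
            phi[idx] += now - prev
            prev = now
    return [p / len(orders) for p in phi]

# Toy additive model: attributions should recover each term exactly,
# and they sum to model(instance) - model(baseline) (efficiency).
toy = lambda x: 2 * x[0] + 3 * x[1]
phi = shapley_values(toy, baseline=[0, 0], instance=[1, 1])
# phi == [2.0, 3.0]
```

The efficiency property is what makes the per-feature contributions in a SHAP decision plot add up to the model's actual prediction.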