
Accurate and real-time brain tumour detection and classification using optimized YOLOv5 architecture.

Saranya M, Praveena R

PubMed | Jul 12 2025
Brain tumours originate in the brain or its surrounding structures, such as the pituitary and pineal glands, and can be benign or malignant. Benign tumours may extend into neighbouring tissues, while metastatic tumours arise when cancer from other organs spreads to the brain. Accurate identification and staging of such tumours are critical, since nearly every aspect of a patient's management depends on correct diagnosis and tumour stage. Image segmentation is valuable in medical imaging because it makes it possible to simulate surgical operations and to support disease diagnosis and anatomical and pathological analysis. This study predicts and classifies brain tumours in MRI using a combined classification and localization framework that connects a Fully Convolutional Neural Network (FCNN) with You Only Look Once version 5 (YOLOv5). The FCNN is designed to classify images into four categories: benign, glial, pituitary adenoma-related, and meningeal. It uses a derivative of Root Mean Square Propagation (RMSProp) optimization to boost the classification rate, and performance was evaluated with the standard measures of precision, recall, F1 score, specificity, and accuracy. The YOLOv5 architecture is then incorporated for more accurate tumour detection, with the FCNN subsequently used to create segmentation masks of the tumours. The analysis shows that the proposed approach is more accurate than existing systems, with 98.80% average accuracy in identifying and categorizing brain tumours. This integration of detection and segmentation models offers an effective technique for enhancing diagnostic performance in medical imaging.
On the basis of these findings, advances in deep learning architectures can improve tumour diagnosis and contribute to the fine-tuning of clinical management.
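The evaluation measures named above (precision, recall, F1, specificity, accuracy) all derive from a binary confusion matrix; a minimal Python sketch with hypothetical labels (not the paper's data):

```python
def binary_metrics(y_true, y_pred):
    """Compute precision, recall, F1, specificity, and accuracy
    from binary ground-truth and predicted labels (1 = tumour)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # sensitivity
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    accuracy = (tp + tn) / len(y_true)
    return {"precision": precision, "recall": recall, "f1": f1,
            "specificity": specificity, "accuracy": accuracy}

# Toy example (hypothetical labels, not the study's data):
m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```

For a four-class problem like the one described, these metrics are typically computed per class (one-vs-rest) and then averaged.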

Seeing is Believing: On the Utility of CT in Phenotyping COPD.

Awan HA, Chaudhary MFA, Reinhardt JM

PubMed | Jul 12 2025
Chronic obstructive pulmonary disease (COPD) is a heterogeneous condition with complicated structural and functional impairments. For decades now, chest computed tomography (CT) has been used to quantify various abnormalities related to COPD. More recently, with the newer data-driven approaches, biomarker development and validation have evolved rapidly. Studies now target multiple anatomical structures including lung parenchyma, the airways, the vasculature, and the fissures to better characterize COPD. This review explores the evolution of chest CT biomarkers in COPD, beginning with traditional thresholding approaches that quantify emphysema and airway dimensions. We then highlight some of the texture analysis efforts that have been made over the years for subtyping lung tissue. We also discuss image registration-based biomarkers that have enabled spatially-aware mechanisms for understanding local abnormalities within the lungs. More recently, deep learning has enabled automated biomarker extraction, offering improved precision in phenotype characterization and outcome prediction. We highlight the most recent of these approaches as well. Despite these advancements, several challenges remain in terms of dataset heterogeneity, model generalizability, and clinical interpretability. This review lastly provides a structured overview of these limitations and highlights future potential of CT biomarkers in personalized COPD management.
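As an illustration of the traditional thresholding approach described above, the widely used %LAA-950 emphysema index reports the fraction of lung voxels below -950 Hounsfield units; a minimal sketch with hypothetical voxel values:

```python
def laa_950(hu_values, threshold=-950):
    """Percent of lung voxels below the emphysema threshold (%LAA-950)."""
    low = sum(1 for v in hu_values if v < threshold)
    return 100.0 * low / len(hu_values)

# Toy lung region (hypothetical HU values, not real CT data):
voxels = [-980, -960, -940, -900, -870, -955, -820, -990]
pct = laa_950(voxels)  # percent of voxels below -950 HU
```

In practice this is computed over a segmented lung mask of millions of voxels, often after density correction for scanner and inspiration-level differences.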

Novel deep learning framework for simultaneous assessment of left ventricular mass and longitudinal strain: clinical feasibility and validation in patients with hypertrophic cardiomyopathy.

Park J, Yoon YE, Jang Y, Jung T, Jeon J, Lee SA, Choi HM, Hwang IC, Chun EJ, Cho GY, Chang HJ

PubMed | Jul 12 2025
This study aims to present the Segmentation-based Myocardial Advanced Refinement Tracking (SMART) system, a novel artificial intelligence (AI)-based framework for transthoracic echocardiography (TTE) that incorporates motion tracking and left ventricular (LV) myocardial segmentation for automated LV mass (LVM) and global longitudinal strain (LVGLS) assessment. The SMART system performs LV speckle tracking based on motion vector estimation, refined with structural information from endocardial and epicardial segmentation throughout the cardiac cycle. This approach enables automated measurement of LVM<sub>SMART</sub> and LVGLS<sub>SMART</sub>. The feasibility of SMART was validated in 111 hypertrophic cardiomyopathy (HCM) patients (median age: 58 years, 69% male) who underwent TTE and cardiac magnetic resonance imaging (CMR). LVGLS<sub>SMART</sub> showed a strong correlation with conventional manual LVGLS measurements (Pearson's correlation coefficient [PCC] 0.851; mean difference 0 [-2 to 0]). When compared to CMR as the reference standard for LVM, the conventional dimension-based TTE method overestimated LVM (PCC 0.652; mean difference 106 [90 to 123]), whereas LVM<sub>SMART</sub> demonstrated excellent agreement with CMR (PCC 0.843; mean difference 1 [-11 to 13]). For predicting extensive myocardial fibrosis, LVGLS<sub>SMART</sub> and LVM<sub>SMART</sub> exhibited performance comparable to conventional LVGLS and CMR (AUC: 0.72 and 0.66, respectively). Patients identified as high risk for extensive fibrosis by LVGLS<sub>SMART</sub> and LVM<sub>SMART</sub> had significantly higher rates of adverse outcomes, including heart failure hospitalization, new-onset atrial fibrillation, and defibrillator implantation. The SMART technique provides a comparable LVGLS evaluation and a more accurate LVM assessment than conventional TTE, with predictive value for myocardial fibrosis and adverse outcomes. These findings support its utility in HCM management.
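The agreement statistics reported above (Pearson's correlation coefficient and mean difference between methods) can be sketched in a few lines; the paired values below are hypothetical, not the study's measurements:

```python
def pearson_and_bias(a, b):
    """Pearson correlation and mean difference (a - b) between two
    measurement methods, as used in Bland-Altman-style agreement checks."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    pcc = cov / (sa * sb)
    bias = sum(x - y for x, y in zip(a, b)) / n
    return pcc, bias

# Hypothetical paired LVM measurements (g): automated vs. reference
pcc, bias = pearson_and_bias([150, 180, 200, 230], [148, 182, 198, 235])
```

A high PCC with a small bias, as the abstract reports for LVM<sub>SMART</sub> against CMR, indicates both strong linear association and little systematic over- or underestimation.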

Accuracy of large language models in generating differential diagnosis from clinical presentation and imaging findings in pediatric cases.

Jung J, Phillipi M, Tran B, Chen K, Chan N, Ho E, Sun S, Houshyar R

PubMed | Jul 12 2025
Large language models (LLMs) have shown promise in assisting medical decision-making. However, there is limited literature exploring the diagnostic accuracy of LLMs in generating differential diagnoses from text-based image descriptions and clinical presentations in pediatric radiology. To examine the performance of multiple proprietary LLMs in producing accurate differential diagnoses for text-based pediatric radiological cases without imaging. One hundred sixty-four cases were retrospectively selected from a pediatric radiology textbook and converted into two formats: (1) image description only, and (2) image description with clinical presentation. ChatGPT-4V, Claude 3.5 Sonnet, and Gemini 1.5 Pro were given these inputs and tasked with providing a top 1 diagnosis and a top 3 differential. Accuracy of responses was assessed by comparison with the original literature. Top 1 accuracy was defined as whether the top 1 diagnosis matched the textbook, and top 3 differential accuracy was defined as the number of diagnoses in the model-generated top 3 differential that matched any of the top 3 diagnoses in the textbook. McNemar's test, Cochran's Q test, the Friedman test, and the Wilcoxon signed-rank test were used to compare algorithms and to assess the impact of added clinical information. There was no significant difference in top 1 accuracy between ChatGPT-4V, Claude 3.5 Sonnet, and Gemini 1.5 Pro when only image descriptions were provided (56.1% [95% CI 48.4-63.5], 64.6% [95% CI 57.1-71.5], 61.6% [95% CI 54.0-68.7]; P = 0.11). Adding clinical presentation to image description significantly improved top 1 accuracy for ChatGPT-4V (64.0% [95% CI 56.4-71.0], P = 0.02) and Claude 3.5 Sonnet (80.5% [95% CI 73.8-85.8], P < 0.001). For image description and clinical presentation cases, Claude 3.5 Sonnet significantly outperformed both ChatGPT-4V and Gemini 1.5 Pro (P < 0.001).
For top 3 differential accuracy, no significant differences were observed between ChatGPT-4V, Claude 3.5 Sonnet, and Gemini 1.5 Pro, regardless of whether the cases included only image descriptions (1.29 [95% CI 1.16-1.41], 1.35 [95% CI 1.23-1.48], 1.37 [95% CI 1.25-1.49]; P = 0.60) or both image descriptions and clinical presentations (1.33 [95% CI 1.20-1.45], 1.52 [95% CI 1.41-1.64], 1.48 [95% CI 1.36-1.59]; P = 0.72). Only Claude 3.5 Sonnet performed significantly better when clinical presentation was added (P < 0.001). Commercial LLMs performed similarly on pediatric radiology cases in top 1 accuracy and top 3 differential accuracy when only a text-based image description was used. Adding clinical presentation significantly improved top 1 accuracy for ChatGPT-4V and Claude 3.5 Sonnet, with Claude showing the largest improvement. Claude 3.5 Sonnet outperformed both ChatGPT-4V and Gemini 1.5 Pro in top 1 accuracy when both image and clinical data were provided. No significant differences were found in top 3 differential accuracy across models in any condition.
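The per-case scoring scheme described above (a binary top 1 match plus a 0-3 overlap count for the top 3 differential) can be sketched as follows; the diagnoses below are hypothetical examples, not cases from the study:

```python
def score_case(model_top3, reference_top3):
    """Per-case scores: top 1 hit (0/1) and top 3 overlap count (0-3)."""
    top1 = int(model_top3[0] == reference_top3[0])
    overlap = sum(1 for d in model_top3 if d in reference_top3)
    return top1, overlap

# Hypothetical case: model output vs. textbook differential
t1, t3 = score_case(
    ["neuroblastoma", "Wilms tumour", "lymphoma"],
    ["Wilms tumour", "neuroblastoma", "mesoblastic nephroma"],
)
```

Averaging `t1` over all cases yields the top 1 accuracy percentages, and averaging `t3` yields the mean top 3 overlap scores (e.g., 1.29-1.52) reported in the abstract; real studies also need diagnosis-name normalization before string matching.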

AI-powered disease progression prediction in multiple sclerosis using magnetic resonance imaging: a systematic review and meta-analysis.

Houshi S, Khodakarami Z, Shaygannejad A, Khosravi F, Shaygannejad V

PubMed | Jul 12 2025
Disability progression despite disease-modifying therapy remains a major challenge in multiple sclerosis (MS). Artificial intelligence (AI) models exploiting magnetic resonance imaging (MRI) promise personalized prognostication, yet their real-world accuracy is uncertain. To systematically review and meta-analyze MRI-based AI studies predicting future disability progression in MS. Five databases were searched from inception to 17 May 2025 following PRISMA. Eligible studies used MRI in an AI model to forecast changes in the Expanded Disability Status Scale (EDSS) or equivalent metrics. Two reviewers conducted study selection, data extraction, and QUADAS-2 assessment. Random-effects meta-analysis was applied when ≥3 studies reported compatible regression statistics. Twenty-one studies with 12,252 MS patients met inclusion criteria. Five used regression on continuous EDSS, fourteen classification, one time-to-event analysis, and one both. Conventional machine learning predominated (57%) over deep learning (38%). Median classification area under the curve (AUC) was 0.78 (range 0.57-0.86); median regression root-mean-square error (RMSE) was 1.08 EDSS points. Pooled RMSE across regression studies was 1.31 (95% CI 1.02-1.60; I<sup>2</sup> = 95%). Deep learning conferred only marginal, non-significant gains over classical algorithms. External validation appeared in six studies; calibration, decision-curve analysis, and code releases were seldom reported. QUADAS-2 indicated generally low patient-selection bias but frequent index-test concerns. MRI-driven AI models predict MS disability progression with moderate accuracy, but error margins exceeding one EDSS point limit individual-level utility. Harmonized endpoints, larger multicenter cohorts, rigorous external validation, and prospective clinician-in-the-loop trials are essential before routine clinical adoption.
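The pooled RMSE above comes from a random-effects meta-analysis; one common estimator is DerSimonian-Laird, sketched here with hypothetical per-study RMSEs and sampling variances rather than the review's data:

```python
def dersimonian_laird(estimates, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird method.
    Returns the pooled estimate and its standard error."""
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(estimates) - 1)) / c)       # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]        # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    return pooled, se

# Hypothetical per-study RMSEs (EDSS points) and variances:
pooled, se = dersimonian_laird([1.0, 1.3, 1.6], [0.02, 0.03, 0.04])
```

The high I<sup>2</sup> of 95% reported above means most of the observed spread reflects between-study heterogeneity (a large tau-squared) rather than sampling error, which widens the pooled confidence interval.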

Efficient needle guidance: multi-camera augmented reality navigation without patient-specific calibration.

Wei Y, Huang B, Zhao B, Lin Z, Zhou SZ

PubMed | Jul 12 2025
Augmented reality (AR) technology holds significant promise for enhancing surgical navigation in needle-based procedures such as biopsies and ablations. However, most existing AR systems rely on patient-specific markers, which disrupt clinical workflows and require time-consuming preoperative calibrations, thereby hindering operational efficiency and precision. We developed a novel multi-camera AR navigation system that eliminates the need for patient-specific markers by utilizing ceiling-mounted markers mapped to fixed medical imaging devices. A hierarchical optimization framework integrates both marker mapping and multi-camera calibration. Deep learning techniques are employed to enhance marker detection and registration accuracy. Additionally, a vision-based pose compensation method is implemented to mitigate errors caused by patient movement, improving overall positional accuracy. Validation through phantom experiments and simulated clinical scenarios demonstrated an average puncture accuracy of 3.72 ± 1.21 mm. The system reduced needle placement time by 20 s compared to traditional marker-based methods. It also effectively corrected errors induced by patient movement, with a mean positional error of 0.38 pixels and an angular deviation of 0.51°. These results highlight the system's precision, adaptability, and reliability in realistic surgical conditions. This marker-free AR guidance system significantly streamlines surgical workflows while enhancing needle navigation accuracy. Its simplicity, cost-effectiveness, and adaptability make it an ideal solution for both high- and low-resource clinical environments, offering the potential for improved precision, reduced procedural time, and better patient outcomes.
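An angular deviation like the 0.51° reported above is simply the angle between two needle direction vectors (planned vs. tracked); a minimal sketch with hypothetical directions:

```python
import math

def angular_deviation_deg(v1, v2):
    """Angle in degrees between two 3-D direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    cosang = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp for float safety
    return math.degrees(math.acos(cosang))

# Hypothetical planned vs. tracked needle directions (0.5 deg apart):
planned = (0.0, 0.0, 1.0)
tracked = (0.0, math.sin(math.radians(0.5)), math.cos(math.radians(0.5)))
dev = angular_deviation_deg(planned, tracked)
```

In a real navigation pipeline both vectors would come from the calibrated camera poses after the marker-mapping and compensation steps the abstract describes.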

The role of neuro-imaging in multiple system atrophy.

Krismer F, Seppi K, Poewe W

PubMed | Jul 12 2025
Neuroimaging plays a crucial role in diagnosing multiple system atrophy and monitoring progressive neurodegeneration in this fatal disease. Advanced MRI techniques and post-processing methods have demonstrated significant volume loss and microstructural changes in brain regions well known to be affected by MSA pathology. These observations can be exploited to support the differential diagnosis of MSA, distinguishing it from Parkinson's disease and progressive supranuclear palsy with high sensitivity and specificity. Longitudinal studies reveal aggressive neurodegeneration in MSA, with notable atrophy rates in the cerebellum, pons, and putamen. Radiotracer imaging using PET and SPECT has shown characteristic disease-related patterns, aiding in differential diagnosis and tracking disease progression. Future research should focus on early diagnosis, particularly in prodromal stages, and the development of reliable biomarkers for clinical trials. Combining different neuroimaging modalities and machine learning algorithms can enhance diagnostic precision and provide a comprehensive understanding of MSA pathology.

Diabetic Tibial Neuropathy Prediction: Improving Interpretability of Various Machine-Learning Models Based on Multimodal-Ultrasound Features Using SHAP Methodology.

Chen Y, Sun Z, Zhong H, Chen Y, Wu X, Su L, Lai Z, Zheng T, Lyu G, Su Q

PubMed | Jul 12 2025
This study aimed to develop and evaluate eight machine learning models based on multimodal ultrasound to precisely predict diabetic tibial neuropathy (DTN). The SHapley Additive exPlanations (SHAP) framework was introduced to quantify the importance of each feature variable, providing a precise and noninvasive assessment tool for DTN patients, optimizing clinical management strategies, and enhancing patient prognosis. A prospective analysis was conducted using multimodal ultrasound and clinical data from 255 suspected DTN patients who visited the Second Affiliated Hospital of Fujian Medical University between January 2024 and November 2024. Key features were selected using Least Absolute Shrinkage and Selection Operator (LASSO) regression. Predictive models were constructed using Extreme Gradient Boosting (XGB), logistic regression, support vector machines, k-nearest neighbors, random forest, decision tree, naïve Bayes, and neural network algorithms. The SHAP method was employed to refine model interpretability. Furthermore, to verify the model's generalizability, data from 135 patients at three other tertiary hospitals were collected for external testing. LASSO regression identified echo intensity (EI), cross-sectional area (CSA), mean elasticity value (Emean), superb microvascular imaging (SMI), and history of smoking as key features for DTN prediction. The XGB model achieved an area under the curve (AUC) of 0.94, 0.83, and 0.79 in the training, internal test, and external test sets, respectively. SHAP analysis highlighted the ranked importance of EI, CSA, Emean, SMI, and history of smoking. Personalized prediction explanations from the SHAP values demonstrated the contribution of each feature to the final prediction, enhancing model interpretability. Furthermore, decision plots depicted how different features influenced mispredictions, facilitating further model optimization or feature adjustment.
This study proposed a DTN prediction model based on machine-learning algorithms applied to multimodal ultrasound data. The results indicated the superior performance of the XGB model, and its interpretability was enhanced using SHAP analysis. This cost-effective and user-friendly approach provides potential support for personalized treatment and precision medicine for DTN.
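AUC values like those reported for the XGB model can be computed from predicted probabilities via the Mann-Whitney pairwise formulation; a minimal sketch with hypothetical labels and scores (not the study's data):

```python
def roc_auc(y_true, scores):
    """ROC AUC via pairwise comparison (Mann-Whitney U); ties count 0.5."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical DTN labels and model probabilities:
auc = roc_auc([1, 0, 1, 0, 1, 0], [0.9, 0.2, 0.7, 0.4, 0.3, 0.5])
```

The drop from 0.94 (training) to 0.79 (external test) that the abstract reports is the kind of gap this metric exposes when a model is evaluated on data from new hospitals.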

Artificial Intelligence and its effect on Radiology Residency Education: Current Challenges, Opportunities, and Future Directions.

Volin J, van Assen M, Bala W, Safdar N, Balthazar P

PubMed | Jul 12 2025
Artificial intelligence has become a significant force in radiology, improving workflows and influencing clinical decision-making. With this increasing presence, a closer look at how residents can be properly exposed to this technology is needed. In this paper, we discuss the three pillars central to a trainee's experience: education on AI, AI-based education tools, and clinical implementation of AI. An already overcrowded clinical residency curriculum leaves little room for thorough AI education; this challenge may be overcome through longitudinal educational tracks during residency or external courses offered by a variety of societies. In addition to teaching the fundamentals of AI, programs that offer AI-based education tools will improve on antiquated clinical curricula. These tools are a growing area of research and industry, offering unique opportunities to promote active inquiry, improved comprehension, and overall clinical competence. The nearly 700 FDA-approved AI clinical tools all but guarantee that residents will be exposed to this technology, which may have mixed effects on education, although more research is needed to elucidate this challenge. Ethical considerations, including algorithmic bias, liability, and post-deployment monitoring, highlight the need for structured instruction and mentorship. As AI continues to evolve, residency programs must prioritize evidence-based, adaptable curricula to prepare future radiologists to critically assess, utilize, and contribute to AI advancements, ensuring that these tools complement rather than undermine clinical expertise.

Integrating LLMs into Radiology Education: An Interpretation-Centric Framework for Enhanced Learning While Supporting Workflow.

Lyo SK, Cook TS

PubMed | Jul 12 2025
Radiology education is challenged by increasing clinical workloads, limiting trainee supervision time and hindering real-time feedback. Large language models (LLMs) can enhance radiology education by providing real-time guidance, feedback, and educational resources while supporting efficient clinical workflows. We present an interpretation-centric framework for integrating LLMs into radiology education subdivided into distinct phases spanning pre-dictation preparation, active dictation support, and post-dictation analysis. In the pre-dictation phase, LLMs can analyze clinical data and provide context-aware summaries of each case, suggest relevant educational resources, and triage cases based on their educational value. In the active dictation phase, LLMs can provide real-time educational support through processes such as differential diagnosis support, completeness guidance, classification schema assistance, structured follow-up guidance, and embedded educational resources. In the post-dictation phase, LLMs can be used to analyze discrepancies between trainee and attending reports, identify areas for improvement, provide targeted educational recommendations, track trainee performance over time, and analyze the radiologic entities that trainees encounter. This framework offers a comprehensive approach to integrating LLMs into radiology education, with the potential to enhance trainee learning while preserving clinical efficiency.
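As one way the post-dictation discrepancy step might be wired up, the sketch below assembles a discrepancy-analysis prompt from paired reports; the function name and report text are hypothetical, and no specific LLM API is assumed:

```python
def build_discrepancy_prompt(trainee_report, attending_report):
    """Assemble a hypothetical prompt asking an LLM to contrast a trainee's
    preliminary report with the attending's final report."""
    return (
        "You are assisting with radiology resident education.\n"
        "Compare the trainee and attending reports below. List each "
        "discrepancy, classify it as perceptual or interpretive, and "
        "suggest one targeted learning resource per discrepancy.\n\n"
        f"TRAINEE REPORT:\n{trainee_report}\n\n"
        f"ATTENDING REPORT:\n{attending_report}\n"
    )

# Hypothetical report pair for illustration:
prompt = build_discrepancy_prompt(
    "No acute intracranial abnormality.",
    "Subtle right MCA hyperdensity; early ischemia cannot be excluded.",
)
```

The returned string would then be sent to whichever LLM the institution deploys, with the structured response logged to track trainee performance over time as the framework proposes.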
