Page 94 of 3163151 results

Central Obesity-related Brain Alterations Predict Cognitive Impairments in First Episode of Psychosis.

Kolenič M, McWhinney SR, Selitser M, Šafářová N, Franke K, Vochoskova K, Burdick K, Španiel F, Hajek T

Jul 13 2025
Cognitive impairment is a key contributor to disability and poor outcomes in schizophrenia, yet it is not adequately addressed by currently available treatments. Thus, it is important to search for preventable or treatable risk factors for cognitive impairment. Here, we hypothesized that obesity-related neurostructural alterations will be associated with worse cognitive outcomes in people with first episode of psychosis (FEP). This observational study presents cross-sectional data from the Early-Stage Schizophrenia Outcome project. We acquired T1-weighted 3D MRI scans in 440 participants with FEP at the time of the first hospitalization and in 257 controls. Metabolic assessments included body mass index (BMI), waist-to-hip ratio (WHR), serum concentrations of triglycerides, cholesterol, glucose, insulin, and hs-CRP. We chose machine learning-derived brain age gap estimate (BrainAGE) as our measure of neurostructural changes and assessed attention, working memory and verbal learning using Digit Span and the Auditory Verbal Learning Test. Among obesity/metabolic markers, only WHR significantly predicted both higher BrainAGE (t(281) = 2.53, P = .012) and worse verbal learning (t(290) = -2.51, P = .026). The association between FEP and verbal learning was partially mediated by BrainAGE (average causal mediated effects, ACME = -0.04 [-0.10, -0.01], P = .022) and the higher BrainAGE in FEP was partially mediated by higher WHR (ACME = 0.08 [0.02, 0.15], P = .006). Central obesity-related brain alterations were linked with worse cognitive performance already early in the course of psychosis. These structure-function links suggest that preventing or treating central obesity could target brain and cognitive impairments in FEP.
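The mediated effects reported here come from causal mediation models; the core quantity, an indirect effect estimated as a product of regression coefficients, can be sketched in a few lines. This is a simplified, unadjusted illustration (no covariate adjustment, no bootstrapped confidence interval), not the authors' analysis pipeline:

```python
def slope(x, y):
    # OLS slope of y on x: cov(x, y) / var(x)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def indirect_effect(exposure, mediator, outcome):
    # product-of-coefficients mediation: a (mediator ~ exposure)
    # times b (outcome ~ mediator); a full analysis would adjust
    # b for the exposure and bootstrap a confidence interval
    a = slope(exposure, mediator)
    b = slope(mediator, outcome)
    return a * b
```

With WHR as exposure, BrainAGE as mediator, and a cognitive score as outcome, the sign and size of this product is what an ACME-style estimate summarizes.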

Establishing an AI-based diagnostic framework for pulmonary nodules in computed tomography.

Jia R, Liu B, Ali M

Jul 12 2025
Pulmonary nodules seen on computed tomography (CT) can be benign or malignant, and early detection is important for optimal management. Existing manual methods of identifying nodules have limitations, such as being time-consuming and error-prone. This study aims to develop an Artificial Intelligence (AI) diagnostic scheme that improves the performance of identifying and categorizing pulmonary nodules on CT scans. The proposed deep learning framework used convolutional neural networks, and the image database totaled 1,056 3D-DICOM CT images. The framework began with preprocessing, followed by lung segmentation, nodule detection, and classification. Nodule detection was done using the Retina-UNet model, while the features were classified using a Support Vector Machine (SVM). Performance measures, including accuracy, sensitivity, specificity, and the AUROC, were used to evaluate the model's performance during training and validation. Overall, the developed AI model achieved an AUROC of 0.9058. The diagnostic accuracy was 90.58%, with an overall positive predictive value of 89% and an overall negative predictive value of 86%. The algorithm effectively handled the CT images at the preprocessing stage, and the deep learning model performed well in detecting and classifying nodules. The new AI-based diagnostic framework increased diagnostic accuracy compared with the traditional approach. It also provides high reliability for detecting pulmonary nodules and classifying lesions, thus minimizing intra-observer differences and improving clinical outcomes. Future work may include enlarging the annotated dataset and fine-tuning the model to address detection issues with non-solitary nodules.
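The AUROC reported for such a classifier can be computed directly from scores and labels as a rank statistic (the probability that a random positive scores above a random negative). A minimal sketch, illustrative only and not the study's evaluation code:

```python
def auroc(scores, labels):
    # AUROC as the Mann-Whitney statistic: fraction of
    # positive/negative pairs where the positive scores higher,
    # counting ties as half a win
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly separating model yields 1.0; a random one hovers around 0.5, which puts the reported 0.9058 in context.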

Characterizing aging-related genetic and physiological determinants of spinal curvature.

Wang FM, Ruby JG, Sethi A, Veras MA, Telis N, Melamud E

Jul 12 2025
Increased spinal curvature is one of the most recognizable aging traits in the human population. However, despite high prevalence, the etiology of this condition remains poorly understood. To gain better insight into the physiological, biochemical, and genetic risk factors involved, we developed a novel machine learning method to automatically derive thoracic kyphosis and lumbar lordosis angles from dual-energy X-ray absorptiometry (DXA) scans in the UK Biobank Imaging cohort. We carry out genome-wide association and epidemiological association studies to identify genetic and physiological risk factors for both traits. In 41,212 participants, we find that on average males and females gain 2.42° in kyphotic and 1.48° in lordotic angle per decade of life. Increased spinal curvature shows a strong association with decreased muscle mass and bone mineral density. Adiposity demonstrates opposing associations, with decreased kyphosis and increased lordosis. Using Mendelian randomization, we show that genes fundamental to the maintenance of musculoskeletal function (COL11A1, PTHLH, ETFA, TWIST1) and cellular homeostasis such as RNA transcription and DNA repair (RAD9A, MMS22L, HIF1A, RAB28) are likely involved in increased spinal curvature. Our findings reveal a complex interplay between genetics, musculoskeletal health, and age-related changes in spinal curvature, suggesting potential drivers of this universal aging trait.
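The kyphotic and lordotic angles here are machine-derived from DXA landmarks; as a simplified, hypothetical stand-in, a Cobb-style angle can be taken as the angle between two endplate direction vectors:

```python
import math

def curvature_angle(v1, v2):
    # angle in degrees between two 2-D endplate direction vectors;
    # a simplified stand-in for the Cobb-style angles a landmark
    # pipeline would derive from a sagittal DXA image
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    return math.degrees(math.acos(dot / (n1 * n2)))
```

Parallel endplates give 0°, and the reported per-decade changes (2.42° kyphotic, 1.48° lordotic) are differences in exactly this kind of angle.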

Vision-language model for report generation and outcome prediction in CT pulmonary angiogram.

Zhong Z, Wang Y, Wu J, Hsu WC, Somasundaram V, Bi L, Kulkarni S, Ma Z, Collins S, Baird G, Ahn SH, Feng X, Kamel I, Lin CT, Greineder C, Atalay M, Jiao Z, Bai H

Jul 12 2025
Accurate and comprehensive interpretation of pulmonary embolism (PE) from Computed Tomography Pulmonary Angiography (CTPA) scans remains a clinical challenge due to the limited specificity and structure of existing AI tools. We propose an agent-based framework that integrates Vision-Language Models (VLMs) for detecting 32 PE-related abnormalities and Large Language Models (LLMs) for structured report generation. Trained on over 69,000 CTPA studies from 24,890 patients across Brown University Health (BUH), Johns Hopkins University (JHU), and the INSPECT dataset from Stanford, the model demonstrates strong performance in abnormality classification and report generation. For abnormality classification, it achieved AUROC scores of 0.788 (BUH), 0.754 (INSPECT), and 0.710 (JHU), with corresponding BERT-F1 scores of 0.891, 0.829, and 0.842. The abnormality-guided reporting strategy consistently outperformed the organ-based and holistic captioning baselines. For survival prediction, a multimodal fusion model that incorporates imaging, clinical variables, diagnostic outputs, and generated reports achieved concordance indices of 0.863 (BUH) and 0.731 (JHU), outperforming traditional PESI scores. This framework provides a clinically meaningful and interpretable solution for end-to-end PE diagnosis, structured reporting, and outcome prediction.
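The concordance indices reported for the survival model measure how often predicted risk correctly orders pairs of patients by observed survival. A minimal pure-Python sketch, illustrative only (censoring is handled only through the event flag):

```python
def concordance_index(times, events, risks):
    # fraction of comparable pairs where the higher predicted risk
    # belongs to the subject with the shorter observed time; a pair
    # is comparable only if its earlier time is an observed event
    num, den = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                den += 1
                if risks[i] > risks[j]:
                    num += 1
                elif risks[i] == risks[j]:
                    num += 0.5
    return num / den
```

A value of 0.5 is chance-level ordering, so the reported 0.863 and 0.731 indicate substantially better-than-chance risk ranking.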

Integrating Artificial Intelligence in Thyroid Nodule Management: Clinical Outcomes and Cost-Effectiveness Analysis.

Bodoque-Cubas J, Fernández-Sáez J, Martínez-Hervás S, Pérez-Lacasta MJ, Carles-Lavila M, Pallarés-Gasulla RM, Salazar-González JJ, Gil-Boix JV, Miret-Llauradó M, Aulinas-Masó A, Argüelles-Jiménez I, Tofé-Povedano S

Jul 12 2025
The increasing incidence of thyroid nodules (TN) raises concerns about overdiagnosis and overtreatment. This study evaluates the clinical and economic impact of KOIOS, an FDA-approved artificial intelligence (AI) tool for the management of TN. A retrospective analysis was conducted on 176 patients who underwent thyroid surgery between May 2022 and November 2024. Ultrasound images were evaluated independently by an expert and a novice operator using the American College of Radiology Thyroid Imaging Reporting and Data System (ACR-TIRADS), while KOIOS provided AI-adapted risk stratification. Sensitivity, specificity, and receiver operating characteristic (ROC) curve analyses were performed. The incremental cost-effectiveness ratio (ICER) was defined based on the number of optimal care interventions (fine-needle aspiration biopsy [FNAB] and thyroid surgery). Both deterministic and probabilistic sensitivity analyses were conducted to evaluate model robustness. KOIOS AI demonstrated similar diagnostic performance to the expert operator (AUC: 0.794, 95% CI: 0.718-0.871 vs. 0.784, 95% CI: 0.706-0.861; p = 0.754) and significantly outperformed the novice operator (AUC: 0.619, 95% CI: 0.526-0.711; p < 0.001). ICER analysis estimated the cost per additional optimal care decision at -€8,085.56, indicating KOIOS as a dominant and cost-saving strategy when considering a third-party payer perspective over a one-year horizon. Deterministic sensitivity analysis identified surgical costs as the main drivers of variability, while probabilistic analysis consistently favored KOIOS as the optimal strategy. KOIOS AI is a cost-effective alternative, particularly in reducing overdiagnosis and overtreatment for benign TNs. Prospective, real-life studies are needed to validate these findings and explore long-term implications.
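The ICER used here is a simple ratio of incremental cost to incremental effect; a minimal sketch with hypothetical numbers (not the study's figures):

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    # incremental cost-effectiveness ratio: extra cost per extra
    # unit of effect (here, per additional optimal-care decision);
    # a negative ICER with higher effect marks a dominant,
    # cost-saving strategy
    return (cost_new - cost_old) / (effect_new - effect_old)
```

If a new strategy costs 100 less and yields 10 more optimal decisions, the ICER is -10 per decision, the dominant-strategy pattern reported for KOIOS.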

Accelerated brain magnetic resonance imaging with deep learning reconstruction: a comparative study on image quality in pediatric neuroimaging.

Choi JW, Cho YJ, Lee SB, Lee S, Hwang JY, Choi YH, Cheon JE, Lee J

Jul 12 2025
Magnetic resonance imaging (MRI) is crucial in pediatric radiology; however, the prolonged scan time is a major drawback that often requires sedation. Deep learning reconstruction (DLR) is a promising method for accelerating MRI acquisition. To evaluate the clinical feasibility of accelerated brain MRI with DLR in pediatric neuroimaging, focusing on image quality compared to conventional MRI. In this retrospective study, 116 pediatric participants (mean age 7.9 ± 5.4 years) underwent routine brain MRI with three reconstruction methods: conventional MRI without DLR (C-MRI), conventional MRI with DLR (DLC-MRI), and accelerated MRI with DLR (DLA-MRI). Two pediatric radiologists independently assessed the overall image quality, sharpness, artifacts, noise, and lesion conspicuity. Quantitative image analysis included the measurement of image noise and coefficient of variation (CoV). DLA-MRI reduced the scan time by 43% compared with C-MRI. Compared with C-MRI, DLA-MRI demonstrated higher scores for overall image quality, noise, and artifacts, as well as similar or higher scores for lesion conspicuity, but similar or lower scores for sharpness. DLC-MRI demonstrated the highest scores for all the parameters. Despite variations in image quality and lesion conspicuity, the lesion detection rates were 100% across all three reconstructions. Quantitative analysis revealed lower noise and CoV for DLA-MRI than those for C-MRI. Interobserver agreement was substantial to almost perfect (weighted Cohen's kappa = 0.72-0.97). DLR enabled faster MRI with improved image quality compared with conventional MRI, highlighting its potential to address prolonged MRI scan times in pediatric neuroimaging and optimize clinical workflows.
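Interobserver agreement was summarized with weighted Cohen's kappa; a linearly weighted version for ordinal quality scores can be sketched as follows (an illustrative implementation, not the authors' statistics code):

```python
def weighted_kappa(rater1, rater2, k):
    # linearly weighted Cohen's kappa for ordinal scores 0..k-1:
    # kappa = 1 - sum(w * observed) / sum(w * expected),
    # with disagreement weights w_ij = |i - j| / (k - 1)
    n = len(rater1)
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater1, rater2):
        obs[a][b] += 1 / n
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    num = sum(abs(i - j) / (k - 1) * obs[i][j]
              for i in range(k) for j in range(k))
    den = sum(abs(i - j) / (k - 1) * p1[i] * p2[j]
              for i in range(k) for j in range(k))
    return 1 - num / den
```

Perfect agreement gives 1.0; the reported 0.72-0.97 range corresponds to substantial-to-almost-perfect agreement on the usual benchmarks.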

Accurate and real-time brain tumour detection and classification using optimized YOLOv5 architecture.

Saranya M, Praveena R

Jul 12 2025
Brain tumours originate in the brain or its surrounding structures, such as the pituitary and pineal glands, and can be benign or malignant. Benign tumours may grow into neighbouring tissues, whereas metastatic tumours occur when cancer from other organs spreads to the brain. Accurate identification and staging of such tumours are critical, since nearly every aspect of a patient's care depends on a precise diagnosis and tumour stage. Image segmentation is highly valuable in medical imaging because it enables surgical simulation, disease diagnosis, and anatomical and pathological analysis. To predict and classify brain tumours in MRI, this study proposes a combined classification and localization framework connecting a Fully Convolutional Neural Network (FCNN) and You Only Look Once version 5 (YOLOv5). The FCNN model classifies images into four categories: benign, glial, pituitary adenoma-related, and meningeal. It uses a derivative of Root Mean Square Propagation (RMSProp) optimization to boost the classification rate, and performance was evaluated with the standard measures of precision, recall, F1 score, specificity, and accuracy. The YOLOv5 architecture is then incorporated for more accurate tumour detection, with the FCNN subsequently used to create tumour segmentation masks. The analysis shows that the proposed approach is more accurate than existing systems, achieving 98.80% average accuracy in the identification and categorization of brain tumours. This integration of detection and segmentation models offers an effective technique for enhancing diagnostic performance in medical imaging. These findings suggest that advances in deep learning architectures can improve tumour diagnosis and contribute to the fine-tuning of clinical management.
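Detection frameworks like the one described are typically scored with precision, recall, and F1 computed from true-positive, false-positive, and false-negative counts; a minimal sketch with illustrative counts only:

```python
def detection_metrics(tp, fp, fn):
    # precision = TP / (TP + FP), recall = TP / (TP + FN),
    # and F1 as the harmonic mean of the two
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

These are the same precision, recall, and F1 measures the abstract lists alongside specificity and accuracy.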

Seeing is Believing-On the Utility of CT in Phenotyping COPD.

Awan HA, Chaudhary MFA, Reinhardt JM

Jul 12 2025
Chronic obstructive pulmonary disease (COPD) is a heterogeneous condition with complicated structural and functional impairments. For decades now, chest computed tomography (CT) has been used to quantify various abnormalities related to COPD. More recently, with the newer data-driven approaches, biomarker development and validation have evolved rapidly. Studies now target multiple anatomical structures including lung parenchyma, the airways, the vasculature, and the fissures to better characterize COPD. This review explores the evolution of chest CT biomarkers in COPD, beginning with traditional thresholding approaches that quantify emphysema and airway dimensions. We then highlight some of the texture analysis efforts that have been made over the years for subtyping lung tissue. We also discuss image registration-based biomarkers that have enabled spatially-aware mechanisms for understanding local abnormalities within the lungs. More recently, deep learning has enabled automated biomarker extraction, offering improved precision in phenotype characterization and outcome prediction. We highlight the most recent of these approaches as well. Despite these advancements, several challenges remain in terms of dataset heterogeneity, model generalizability, and clinical interpretability. This review lastly provides a structured overview of these limitations and highlights future potential of CT biomarkers in personalized COPD management.
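The traditional thresholding approach mentioned above is usually a density mask: the percentage of lung voxels below a Hounsfield-unit cutoff (e.g., LAA%-950 for emphysema). A minimal sketch with hypothetical HU values:

```python
def laa_percent(hu_values, threshold=-950):
    # classic density-mask emphysema index: percentage of lung
    # voxels whose attenuation falls below the HU threshold
    # (LAA%-950 when threshold = -950)
    low = sum(1 for v in hu_values if v < threshold)
    return 100.0 * low / len(hu_values)
```

The data-driven biomarkers the review surveys (texture subtyping, registration-based measures, deep learning) all extend this simple voxel-counting idea.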

Novel deep learning framework for simultaneous assessment of left ventricular mass and longitudinal strain: clinical feasibility and validation in patients with hypertrophic cardiomyopathy.

Park J, Yoon YE, Jang Y, Jung T, Jeon J, Lee SA, Choi HM, Hwang IC, Chun EJ, Cho GY, Chang HJ

Jul 12 2025
This study aims to present the Segmentation-based Myocardial Advanced Refinement Tracking (SMART) system, a novel artificial intelligence (AI)-based framework for transthoracic echocardiography (TTE) that incorporates motion tracking and left ventricular (LV) myocardial segmentation for automated LV mass (LVM) and global longitudinal strain (LVGLS) assessment. The SMART system demonstrates LV speckle tracking based on motion vector estimation, refined by structural information using endocardial and epicardial segmentation throughout the cardiac cycle. This approach enables automated measurement of LVM<sub>SMART</sub> and LVGLS<sub>SMART</sub>. The feasibility of SMART is validated in 111 hypertrophic cardiomyopathy (HCM) patients (median age: 58 years, 69% male) who underwent TTE and cardiac magnetic resonance imaging (CMR). LVGLS<sub>SMART</sub> showed a strong correlation with conventional manual LVGLS measurements (Pearson's correlation coefficient [PCC] 0.851; mean difference: 0 [-2 to 0]). When compared to CMR as the reference standard for LVM, the conventional dimension-based TTE method overestimated LVM (PCC 0.652; mean difference: 106 [90 to 123]), whereas LVM<sub>SMART</sub> demonstrated excellent agreement with CMR (PCC 0.843; mean difference: 1 [-11 to 13]). For predicting extensive myocardial fibrosis, LVGLS<sub>SMART</sub> and LVM<sub>SMART</sub> exhibited performance comparable to conventional LVGLS and CMR (AUC: 0.72 and 0.66, respectively). Patients identified as high risk for extensive fibrosis by LVGLS<sub>SMART</sub> and LVM<sub>SMART</sub> had significantly higher rates of adverse outcomes, including heart failure hospitalization, new-onset atrial fibrillation, and defibrillator implantation. The SMART technique provides a comparable LVGLS evaluation and a more accurate LVM assessment than conventional TTE, with predictive values for myocardial fibrosis and adverse outcomes. These findings support its utility in HCM management.
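Longitudinal strain, as estimated by speckle-tracking systems like the one described, is the percent change in myocardial contour length over the cardiac cycle; a minimal sketch of the Lagrangian definition (illustrative only, not the SMART algorithm):

```python
def longitudinal_strain(len_end_diastole, len_end_systole):
    # Lagrangian strain: percent change in myocardial contour
    # length from end-diastole to end-systole; LV shortening
    # yields a negative GLS value
    return 100.0 * (len_end_systole - len_end_diastole) / len_end_diastole
```

A contour that shortens from 100 mm to 80 mm gives a strain of -20%, which is the sign convention behind reported GLS values.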

Accuracy of large language models in generating differential diagnosis from clinical presentation and imaging findings in pediatric cases.

Jung J, Phillipi M, Tran B, Chen K, Chan N, Ho E, Sun S, Houshyar R

Jul 12 2025
Large language models (LLMs) have shown promise in assisting medical decision-making. However, there is limited literature exploring the diagnostic accuracy of LLMs in generating differential diagnoses from text-based image descriptions and clinical presentations in pediatric radiology. To examine the performance of multiple proprietary LLMs in producing accurate differential diagnoses for text-based pediatric radiological cases without imaging, 164 cases were retrospectively selected from a pediatric radiology textbook and converted into two formats: (1) image description only, and (2) image description with clinical presentation. The ChatGPT-4V, Claude 3.5 Sonnet, and Gemini 1.5 Pro algorithms were given these inputs and tasked with providing a top 1 diagnosis and a top 3 differential diagnosis. Accuracy of responses was assessed by comparison with the original literature. Top 1 accuracy was defined as whether the top 1 diagnosis matched the textbook, and top 3 differential accuracy was defined as the number of diagnoses in the model-generated top 3 differential that matched any of the top 3 diagnoses in the textbook. McNemar's test, Cochran's Q test, the Friedman test, and the Wilcoxon signed-rank test were used to compare algorithms and to assess the impact of added clinical information. There was no significant difference in top 1 accuracy between ChatGPT-4V, Claude 3.5 Sonnet, and Gemini 1.5 Pro when only image descriptions were provided (56.1% [95% CI 48.4-63.5], 64.6% [95% CI 57.1-71.5], 61.6% [95% CI 54.0-68.7]; P = 0.11). Adding clinical presentation to image description significantly improved top 1 accuracy for ChatGPT-4V (64.0% [95% CI 56.4-71.0], P = 0.02) and Claude 3.5 Sonnet (80.5% [95% CI 73.8-85.8], P < 0.001). For image description and clinical presentation cases, Claude 3.5 Sonnet significantly outperformed both ChatGPT-4V and Gemini 1.5 Pro (P < 0.001).
For top 3 differential accuracy, no significant differences were observed between ChatGPT-4V, Claude 3.5 Sonnet, and Gemini 1.5 Pro, regardless of whether the cases included only image descriptions (1.29 [95% CI 1.16-1.41], 1.35 [95% CI 1.23-1.48], 1.37 [95% CI 1.25-1.49]; P = 0.60) or both image descriptions and clinical presentations (1.33 [95% CI 1.20-1.45], 1.52 [95% CI 1.41-1.64], 1.48 [95% CI 1.36-1.59]; P = 0.72). Only Claude 3.5 Sonnet performed significantly better when clinical presentation was added (P < 0.001). Commercial LLMs performed similarly on pediatric radiology cases in top 1 accuracy and top 3 differential accuracy when only a text-based image description was used. Adding clinical presentation significantly improved top 1 accuracy for ChatGPT-4V and Claude 3.5 Sonnet, with Claude showing the largest improvement. Claude 3.5 Sonnet outperformed both ChatGPT-4V and Gemini 1.5 Pro in top 1 accuracy when both image and clinical data were provided. No significant differences were found in top 3 differential accuracy across models in any condition.
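The paired comparison and top 3 scoring used here can be sketched in a few lines: McNemar's statistic from the discordant cell counts, and a top-k hit count against the reference differential (illustrative, with hypothetical counts and diagnoses):

```python
def mcnemar_statistic(b, c):
    # McNemar's chi-square for paired binary outcomes, computed
    # from the two discordant cell counts (b: only model A
    # correct, c: only model B correct)
    return (b - c) ** 2 / (b + c)

def top_k_hits(predicted, reference, k=3):
    # number of the first k predicted diagnoses that appear in
    # the reference differential, as in the top 3 score here
    return sum(1 for d in predicted[:k] if d in reference)
```

A large McNemar statistic (relative to the chi-square distribution with one degree of freedom) indicates the paired models differ, which is the test behind the head-to-head comparisons above.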
