Page 220 of 316 · 3151 results

A magnetic resonance imaging (MRI)-based deep learning radiomics model predicts recurrence-free survival in lung cancer patients after surgical resection of brain metastases.

Li B, Li H, Chen J, Xiao F, Fang X, Guo R, Liang M, Wu Z, Mao J, Shen J

PubMed · Jun 1, 2025
To develop and validate a magnetic resonance imaging (MRI)-based deep learning radiomics model (DLRM) to predict recurrence-free survival (RFS) in lung cancer patients after surgical resection of brain metastases (BrMs). A total of 215 lung cancer patients with BrMs confirmed by surgical pathology were retrospectively included from five centres; 167 patients were assigned to the training cohort and 48 to the external test cohort. All patients underwent regular follow-up brain MRIs. Clinical and morphological MRI models for predicting RFS were built using univariate and multivariate Cox regressions, respectively. Handcrafted and deep learning (DL) signatures were constructed from pretreatment BrM MR images using the least absolute shrinkage and selection operator (LASSO) method. A DLRM was established by integrating the clinical and morphological MRI predictors with the handcrafted and DL signatures, based on the multivariate Cox regression coefficients. The Harrell C-index, area under the receiver operating characteristic curve (AUC), and Kaplan-Meier survival analysis were used to evaluate model performance. The DLRM showed satisfactory performance in predicting RFS and 6- to 18-month intracranial recurrence in lung cancer patients after BrMs resection, achieving a C-index of 0.79 and AUCs of 0.84-0.90 in the training set and a C-index of 0.74 and AUCs of 0.71-0.85 in the external test set. The DLRM outperformed the clinical model, morphological MRI model, handcrafted signature, DL signature, and clinical-morphological MRI model in predicting RFS (P < 0.05). The DLRM successfully classified patients into high-risk and low-risk intracranial recurrence groups (P < 0.001). This MRI-based DLRM could predict RFS in lung cancer patients after surgical resection of BrMs.
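The Harrell C-index reported above measures how often, among comparable patient pairs, the model assigns the higher predicted risk to the patient who recurs first. A minimal pure-Python sketch on toy data (not the study's cohort), ignoring tied event times:

```python
def harrell_c_index(times, events, risks):
    """Harrell's concordance index: fraction of comparable pairs in which
    the higher predicted risk corresponds to the earlier observed event."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if subject i had an event before time j
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties in risk count half
    return concordant / comparable

# toy example: higher risk scores precede earlier recurrences
times  = [5, 10, 15, 20]       # months to recurrence or censoring
events = [1, 1, 0, 1]          # 1 = recurrence observed, 0 = censored
risks  = [0.9, 0.7, 0.4, 0.2]  # model risk scores
print(harrell_c_index(times, events, risks))
```

A perfectly ordered toy set like this yields a C-index of 1.0; real models, like the DLRM's 0.74 on external testing, sit between 0.5 (random) and 1.0.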

Metabolic Dysfunction-Associated Steatotic Liver Disease Is Associated With Accelerated Brain Ageing: A Population-Based Study.

Wang J, Yang R, Miao Y, Zhang X, Paillard-Borg S, Fang Z, Xu W

PubMed · Jun 1, 2025
Metabolic dysfunction-associated steatotic liver disease (MASLD) is linked to cognitive decline and dementia risk. We aimed to investigate the association between MASLD and brain ageing and explore the role of low-grade inflammation. Within the UK Biobank, 30 386 chronic neurological disorders-free participants who underwent brain magnetic resonance imaging (MRI) scans were included. Individuals were categorised into no MASLD/related SLD and MASLD/related SLD (including subtypes of MASLD, MASLD with increased alcohol intake [MetALD] and MASLD with other combined aetiology). Brain age was estimated using machine learning by 1079 brain MRI phenotypes. Brain age gap (BAG) was calculated as the difference between brain age and chronological age. Low-grade inflammation (INFLA) was calculated based on white blood cell count, platelet, neutrophil granulocyte to lymphocyte ratio and C-reactive protein. Data were analysed using linear regression and structural equation models. At baseline, 7360 (24.2%) participants had MASLD/related SLD. Compared to participants with no MASLD/related SLD, those with MASLD/related SLD had significantly larger BAG (β = 0.86, 95% CI = 0.70, 1.02), as well as those with MASLD (β = 0.59, 95% CI = 0.41, 0.77) or MetALD (β = 1.57, 95% CI = 1.31, 1.83). The association between MASLD/related SLD and larger BAG was significant across middle-aged (< 60) and older (≥ 60) adults, males and females, and APOE ɛ4 carriers and non-carriers. INFLA mediated 13.53% of the association between MASLD/related SLD and larger BAG (p < 0.001). MASLD/related SLD, as well as MASLD and MetALD, is associated with accelerated brain ageing, even among middle-aged adults and APOE ɛ4 non-carriers. Low-grade systemic inflammation may partially mediate this association.
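The two core quantities here are simple to state: BAG is estimated brain age minus chronological age, and the mediated proportion quantifies how much of the MASLD-BAG association flows through inflammation. A sketch using the product-of-coefficients approximation (the path coefficients below are hypothetical illustrations, not the study's structural-equation estimates):

```python
def brain_age_gap(brain_age, chronological_age):
    # BAG > 0 means the brain appears older than the person's actual age
    return brain_age - chronological_age

def mediated_proportion(a, b, total):
    """Product-of-coefficients mediation: a = exposure -> mediator path,
    b = mediator -> outcome path (exposure-adjusted),
    total = total exposure -> outcome effect."""
    return (a * b) / total

print(brain_age_gap(67.2, 64.0))  # a 3.2-year gap
# hypothetical path coefficients, chosen only for illustration
print(round(mediated_proportion(0.30, 0.39, 0.86), 3))
```

With a total effect of 0.86 (the study's β for MASLD/related SLD), an indirect path of roughly 0.117 would correspond to a mediated proportion near the reported 13.5%.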

Leveraging GPT-4 enables patient comprehension of radiology reports.

van Driel MHE, Blok N, van den Brand JAJG, van de Sande D, de Vries M, Eijlers B, Smits F, Visser JJ, Gommers D, Verhoef C, van Genderen ME, Grünhagen DJ, Hilling DE

PubMed · Jun 1, 2025
To assess the feasibility of using GPT-4 to simplify radiology reports into B1-level Dutch for enhanced patient comprehension. This study utilised GPT-4, optimised through prompt engineering in Microsoft Azure. The researchers iteratively refined prompts to ensure accurate and comprehensive translations of radiology reports. Two radiologists assessed the simplified outputs for accuracy, completeness, and patient suitability. A third radiologist independently validated the final versions. Twelve colorectal cancer patients were recruited from two hospitals in the Netherlands. Semi-structured interviews were conducted to evaluate patients' comprehension and satisfaction with AI-generated reports. The optimised GPT-4 tool produced simplified reports with high accuracy (mean score 3.33/4). Patient comprehension improved significantly from 2.00 (original reports) to 3.28 (simplified reports) and 3.50 (summaries). Correct classification of report outcomes increased from 63.9% to 83.3%. Patient satisfaction was high (mean 8.30/10), with most preferring the long simplified report. RADiANT successfully enhances patient understanding and satisfaction through automated AI-driven report simplification, offering a scalable solution for patient-centred communication in clinical practice. This tool reduces clinician workload and supports informed patient decision-making, demonstrating the potential of LLMs beyond English-based healthcare contexts.
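The prompt-engineering loop described above centres on an instruction template that constrains the model to a fixed reading level. A hypothetical sketch of such a template; the wording and the B1-level constraint are assumptions for illustration, not the authors' actual prompt:

```python
def build_simplification_prompt(report_text: str, target_level: str = "B1") -> str:
    """Compose an instruction for an LLM to rewrite a radiology report
    in plain language at a given CEFR reading level (here: Dutch B1)."""
    instructions = (
        f"Rewrite the radiology report below in {target_level}-level Dutch. "
        "Preserve every finding and measurement; do not add or omit results. "
        "Explain medical terms in everyday words. "
        "End with a short summary of the main outcome."
    )
    return f"{instructions}\n\n---\n{report_text}\n---"

prompt = build_simplification_prompt(
    "CT abdomen: geen aanwijzingen voor metastasen."  # "no evidence of metastases"
)
print(prompt)
```

In practice such a prompt would be sent to the deployed GPT-4 model and the output reviewed by radiologists, as the study's iterative refinement procedure describes.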

Quantifying the Unknowns of Plaque Morphology: The Role of Topological Uncertainty in Coronary Artery Disease.

Singh Y, Hathaway QA, Dinakar K, Shaw LJ, Erickson B, Lopez-Jimenez F, Bhatt DL

PubMed · Jun 1, 2025
This article aimed to explore topological uncertainty in medical imaging, particularly in assessing coronary artery calcification using artificial intelligence (AI). Topological uncertainty refers to ambiguities in spatial and structural characteristics of medical features, which can impact the interpretation of coronary plaques. The article discusses the challenges of integrating AI with topological considerations and the need for specialized methodologies beyond traditional performance metrics. It highlights advancements in quantifying topological uncertainty, including the use of persistent homology and topological data analysis techniques. The importance of standardization in methodologies and ethical considerations in AI deployment are emphasized. It also outlines various types of uncertainty in topological frameworks for coronary plaques, categorizing them as quantifiable and controllable or quantifiable and not controllable. Future directions include developing AI algorithms that incorporate topological insights, establishing standardized protocols, and exploring ethical implications to revolutionize cardiovascular care through personalized treatment plans guided by sophisticated topological analysis. Recognizing and quantifying topological uncertainty in medical imaging as AI emerges is critical. Exploring topological uncertainty in coronary artery disease will revolutionize cardiovascular care, promising enhanced precision and personalization in diagnostics and treatment for millions affected by cardiovascular diseases.

Predictive models of severe disease in patients with COVID-19 pneumonia at an early stage on CT images using topological properties.

Iwasaki T, Arimura H, Inui S, Kodama T, Cui YH, Ninomiya K, Iwanaga H, Hayashi T, Abe O

PubMed · Jun 1, 2025
Prediction of severe disease (SVD) in patients with coronavirus disease (COVID-19) pneumonia at an early stage could allow for more appropriate triage and improve patient prognosis. Moreover, the visualization of the topological properties of COVID-19 pneumonia could help clinical physicians describe the reasons for their decisions. We aimed to construct predictive models of SVD in patients with COVID-19 pneumonia at an early stage on computed tomography (CT) images using SVD-specific features that can be visualized on accumulated Betti number (BN) maps. BN maps (b0 and b1 maps) were generated by calculating the BNs within a shifting kernel in a manner similar to a convolution. Accumulated BN maps were constructed by summing BN maps (b0 and b1 maps) derived from a range of multiple threshold values. Topological features were computed as intrinsic topological properties of COVID-19 pneumonia from the accumulated BN maps. Predictive models of SVD were constructed with two feature selection methods and three machine learning models using nested fivefold cross-validation. The proposed model achieved an area under the receiver-operating characteristic curve of 0.854 and a sensitivity of 0.908 in a test fold. These results suggest that topological image features can identify, at an early stage, COVID-19 pneumonia that will progress to SVD.
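The accumulated b0 map construction described above (Betti numbers computed in a shifting kernel, summed over thresholds) can be sketched directly. The kernel size, threshold values, and edge padding below are illustrative assumptions; the b0 of a binary patch is simply its count of connected foreground components:

```python
import numpy as np

def count_components(mask):
    """Betti-0 of a binary patch: number of 4-connected foreground components,
    counted by iterative flood fill."""
    mask = mask.copy()
    count = 0
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c]:
                count += 1
                stack = [(r, c)]  # flood-fill this component
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y, x]:
                        mask[y, x] = False
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

def accumulated_b0_map(image, thresholds, kernel=3):
    """Sum of b0 maps over multiple thresholds, each computed with a
    shifting kernel in a convolution-like manner."""
    pad = kernel // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros(image.shape, dtype=int)
    for t in thresholds:
        binary = padded >= t
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                out[i, j] += count_components(binary[i:i + kernel, j:j + kernel])
    return out

img = np.array([[0.9, 0.1, 0.8],
                [0.1, 0.1, 0.1],
                [0.7, 0.1, 0.6]])
acc = accumulated_b0_map(img, thresholds=[0.5])
print(acc)  # the centre pixel sits amid the most fragmented patches
```

Pixels surrounded by many disconnected bright regions get high b0 values, which is the kind of local fragmentation the topological features aim to capture.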

Prediction of mammographic breast density based on clinical breast ultrasound images using deep learning: a retrospective analysis.

Bunnell A, Valdez D, Wolfgruber TK, Quon B, Hung K, Hernandez BY, Seto TB, Killeen J, Miyoshi M, Sadowski P, Shepherd JA

PubMed · Jun 1, 2025
Breast density, as derived from mammographic images and defined by the Breast Imaging Reporting & Data System (BI-RADS), is one of the strongest risk factors for breast cancer. Breast ultrasound is an alternative breast cancer screening modality, particularly useful in low-resource, rural contexts. To date, breast ultrasound has not been used to inform risk models that need breast density. The purpose of this study is to explore the use of artificial intelligence (AI) to predict BI-RADS breast density category from clinical breast ultrasound imaging. We compared deep learning methods for predicting breast density directly from breast ultrasound imaging, as well as machine learning models from breast ultrasound image gray-level histograms alone. The use of AI-derived breast ultrasound breast density as a breast cancer risk factor was compared to clinical BI-RADS breast density. Retrospective (2009-2022) breast ultrasound data were split by individual into 70/20/10% groups for training, validation, and held-out testing for reporting results. 405,120 clinical breast ultrasound images from 14,066 women (mean age 53 years, range 18-99 years) with clinical breast ultrasound exams were retrospectively selected for inclusion from three institutions: 10,393 training (302,574 images), 2593 validation (69,842), and 1074 testing (28,616). The AI model achieves AUROC 0.854 in breast density classification and statistically significantly outperforms all image statistic-based methods. In an existing clinical 5-year breast cancer risk model, breast ultrasound AI and clinical breast density predict 5-year breast cancer risk with 0.606 and 0.599 AUROC (DeLong's test p-value: 0.67), respectively. BI-RADS breast density can be estimated from breast ultrasound imaging with high accuracy. The AI model provided superior estimates to other machine learning approaches. 
Furthermore, we demonstrate that age-adjusted, AI-derived breast ultrasound breast density provides similar predictive power to mammographic breast density in our population. Estimated breast density from ultrasound may be useful in performing breast cancer risk assessment in areas where mammography may not be available. National Cancer Institute.
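The AUROC values above, for both density classification and 5-year risk, are rank statistics: the probability that a randomly chosen positive case outscores a randomly chosen negative one. A self-contained sketch of that Mann-Whitney computation on toy labels and scores:

```python
def auroc(labels, scores):
    """Rank-based AUROC: probability that a random positive scores higher
    than a random negative; ties count half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# perfectly separated toy scores give AUROC 1.0
print(auroc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))
# one misordered pair out of four gives 0.75
print(auroc([0, 1, 0, 1], [0.4, 0.3, 0.2, 0.9]))
```

On this scale, the study's 0.854 for density classification reflects strong ranking, while the near-identical 0.606 versus 0.599 for 5-year risk (DeLong p = 0.67) is why the ultrasound-derived density is described as an adequate substitute.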

WAND: Wavelet Analysis-Based Neural Decomposition of MRS Signals for Artifact Removal.

Merkofer JP, van de Sande DMJ, Amirrajab S, Min Nam K, van Sloun RJG, Bhogal AA

PubMed · Jun 1, 2025
Accurate quantification of metabolites in magnetic resonance spectroscopy (MRS) is challenged by low signal-to-noise ratio (SNR), overlapping metabolites, and various artifacts. Particularly, unknown and unparameterized baseline effects obscure the quantification of low-concentration metabolites, limiting MRS reliability. This paper introduces wavelet analysis-based neural decomposition (WAND), a novel data-driven method designed to decompose MRS signals into their constituent components: metabolite-specific signals, baseline, and artifacts. WAND takes advantage of the enhanced separability of these components within the wavelet domain. The method employs a neural network, specifically a U-Net architecture, trained to predict masks for wavelet coefficients obtained through the continuous wavelet transform. These masks effectively isolate desired signal components in the wavelet domain, which are then inverse-transformed to obtain separated signals. Notably, an artifact mask is created by inverting the sum of all known signal masks, enabling WAND to capture and remove even unpredictable artifacts. The effectiveness of WAND in achieving accurate decomposition is demonstrated through numerical evaluations using simulated spectra. Furthermore, WAND's artifact removal capabilities significantly enhance the quantification accuracy of linear combination model fitting. The method's robustness is further validated using data from the 2016 MRS Fitting Challenge and in vivo experiments.
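The artifact-mask construction described above, inverting the sum of all known signal masks so that anything unexplained is routed to the artifact component, can be sketched in a few lines. Clipping the summed masks to [0, 1] is an assumption about how overlapping masks are handled:

```python
import numpy as np

def artifact_mask(signal_masks):
    """WAND-style residual mask: invert the combined known-component masks
    so unexplained wavelet coefficients fall into the artifact component."""
    combined = np.clip(np.sum(signal_masks, axis=0), 0.0, 1.0)
    return 1.0 - combined

# toy wavelet-coefficient masks for two known components (values in [0, 1])
metabolite = np.array([0.9, 0.8, 0.0, 0.1])
baseline   = np.array([0.1, 0.2, 0.3, 0.0])
print(artifact_mask([metabolite, baseline]))
```

Coefficients well covered by the metabolite and baseline masks receive an artifact weight near zero, while coefficients no known component claims are captured almost entirely by the artifact mask, which is what lets WAND remove even unpredictable artifacts.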

MEF-Net: Multi-scale and edge feature fusion network for intracranial hemorrhage segmentation in CT images.

Zhang X, Zhang S, Jiang Y, Tian L

PubMed · Jun 1, 2025
Intracranial Hemorrhage (ICH) refers to cerebral bleeding resulting from ruptured blood vessels within the brain. Delayed and inaccurate diagnosis and treatment of ICH can lead to fatality or disability. Therefore, early and precise diagnosis of intracranial hemorrhage is crucial for protecting patients' lives. Automatic segmentation of hematomas in CT images can provide doctors with essential diagnostic support and improve diagnostic efficiency. CT images of intracranial hemorrhage exhibit characteristics such as multi-scale, multi-target, and blurred edges. This paper proposes a Multi-scale and Edge Feature Fusion Network (MEF-Net) to effectively extract multi-scale and edge features and fully fuse these features through a fusion mechanism. The network first extracts the multi-scale features and edge features of the image through the encoder and the edge detection module respectively, then fuses the deep information, and employs the multi-kernel attention module to process the shallow features, enhancing the multi-target recognition capability. Finally, the feature maps from each module are combined to produce the segmentation result. Experimental results indicate that this method has achieved average DICE scores of 0.7508 and 0.7443 in two public datasets respectively, surpassing those of several advanced methods in medical image segmentation currently available. The proposed MEF-Net significantly improves the accuracy of intracranial hemorrhage segmentation.
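The DICE scores used for evaluation measure overlap between predicted and reference hematoma masks: twice the intersection divided by the summed mask sizes. A minimal numpy sketch (the smoothing epsilon is a common convention, assumed here):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between binary masks: 2|A ∩ B| / (|A| + |B|).
    eps guards against division by zero when both masks are empty."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(float(dice(pred, target)), 4))
```

Two of three predicted pixels overlap the reference here, giving a Dice of about 0.667; MEF-Net's reported averages of 0.7508 and 0.7443 sit on this same 0-to-1 scale.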

Performance of GPT-4 Turbo and GPT-4o in Korean Society of Radiology In-Training Examinations.

Choi A, Kim HG, Choi MH, Ramasamy SK, Kim Y, Jung SE

PubMed · Jun 1, 2025
Despite the potential of large language models for radiology training, their ability to handle image-based radiological questions remains poorly understood. This study aimed to evaluate the performance of the GPT-4 Turbo and GPT-4o in radiology resident examinations, to analyze differences across question types, and to compare their results with those of residents at different levels. A total of 776 multiple-choice questions from the Korean Society of Radiology In-Training Examinations were used, forming two question sets: one originally written in Korean and the other translated into English. We evaluated the performance of GPT-4 Turbo (gpt-4-turbo-2024-04-09) and GPT-4o (gpt-4o-2024-11-20) on these questions with the temperature set to zero, determining the accuracy based on the majority vote from five independent trials. We analyzed their results using the question type (text-only vs. image-based) and benchmarked them against nationwide radiology residents' performance. The impact of the input language (Korean or English) on model performance was examined. GPT-4o outperformed GPT-4 Turbo for both image-based (48.2% vs. 41.8%, <i>P</i> = 0.002) and text-only questions (77.9% vs. 69.0%, <i>P</i> = 0.031). On image-based questions, GPT-4 Turbo and GPT-4o showed comparable performance to that of 1st-year residents (41.8% and 48.2%, respectively, vs. 43.3%, <i>P</i> = 0.608 and 0.079, respectively) but lower performance than that of 2nd- to 4th-year residents (vs. 56.0%-63.9%, all <i>P</i> ≤ 0.005). For text-only questions, GPT-4 Turbo and GPT-4o performed better than residents across all years (69.0% and 77.9%, respectively, vs. 44.7%-57.5%, all <i>P</i> ≤ 0.039). Performance on the English- and Korean-version questions showed no significant differences for either model (all <i>P</i> ≥ 0.275). GPT-4o outperformed GPT-4 Turbo across all question types.
On image-based questions, both models' performance matched that of 1st-year residents but was lower than that of higher-year residents. Both models demonstrated superior performance compared to residents for text-only questions. The models showed consistent performances across English and Korean inputs.
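The majority-vote scoring described in the methods, taking the most frequent answer across five zero-temperature trials, reduces to a mode computation:

```python
from collections import Counter

def majority_answer(trials):
    """Final answer = most frequent response across independent trials
    (five per question in the study's setup)."""
    return Counter(trials).most_common(1)[0][0]

# three of five trials agree on "B", so "B" is the scored answer
print(majority_answer(["B", "B", "C", "B", "A"]))
```

Repeating the query and voting smooths over residual nondeterminism that can persist even at temperature zero.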

Conversion of Mixed-Language Free-Text CT Reports of Pancreatic Cancer to National Comprehensive Cancer Network Structured Reporting Templates by Using GPT-4.

Kim H, Kim B, Choi MH, Choi JI, Oh SN, Rha SE

PubMed · Jun 1, 2025
To evaluate the feasibility of generative pre-trained transformer-4 (GPT-4) in generating structured reports (SRs) from mixed-language (English and Korean) narrative-style CT reports for pancreatic ductal adenocarcinoma (PDAC) and to assess its accuracy in categorizing PDAC resectability. This retrospective study included consecutive free-text reports of pancreas-protocol CT for staging PDAC, written in English or Korean, from two institutions between January 2021 and December 2023. Both the GPT-4 Turbo and GPT-4o models were provided prompts along with the free-text reports via an application programming interface and tasked with generating SRs and categorizing tumor resectability according to the National Comprehensive Cancer Network guidelines version 2.2024. Prompts were optimized using the GPT-4 Turbo model and 50 reports from Institution B. The performances of the GPT-4 Turbo and GPT-4o models in the two tasks were evaluated using 115 reports from Institution A. Results were compared with a reference standard that was manually derived by an abdominal radiologist. Each report was consecutively processed three times, with the most frequent response selected as the final output. Error analysis was guided by the decision rationale provided by the models. Of the 115 narrative reports tested, 96 (83.5%) contained both English and Korean. For SR generation, GPT-4 Turbo and GPT-4o demonstrated comparable accuracies (92.3% [1592/1725] and 92.2% [1590/1725], respectively; <i>P</i> = 0.923). In the resectability categorization, GPT-4 Turbo showed higher accuracy than GPT-4o (81.7% [94/115] vs. 67.0% [77/115], respectively; <i>P</i> = 0.002). In the error analysis of GPT-4 Turbo, the SR generation error rate was 7.7% (133/1725 items), which was primarily attributed to inaccurate data extraction (54.1% [72/133]). The resectability categorization error rate was 18.3% (21/115), with the main cause being violation of the resectability criteria (61.9% [13/21]).
Both GPT-4 Turbo and GPT-4o demonstrated acceptable accuracy in generating NCCN-based SRs on PDACs from mixed-language narrative reports. However, oversight by human radiologists is essential for determining resectability based on CT findings.