
An approach for cancer outcomes modelling using a comprehensive synthetic dataset.

Tu L, Choi HHF, Clark H, Lloyd SAM

PubMed · Jul 24, 2025
Limited patient data availability presents a challenge for efficient machine learning (ML) model development. Recent studies have proposed methods to generate synthetic medical images but lack the corresponding prognostic information required for predicting outcomes. We present a cancer outcomes modelling approach that involves generating a comprehensive synthetic dataset that accurately mimics a real dataset. A real public dataset containing computed tomography-based radiomic features and clinical information for 132 non-small cell lung cancer patients was used. A synthetic dataset of virtual patients was generated using a conditional tabular generative adversarial network. Models to predict two-year overall survival were trained on real or synthetic data using combinations of four feature selection methods (mutual information, ANOVA F-test, recursive feature elimination, random forest (RF) importance weights) and six ML algorithms (RF, k-nearest neighbours, logistic regression, support vector machine, XGBoost, Gaussian Naïve Bayes). Models were tested on withheld real data and externally validated. Real and synthetic datasets were similar, with an average one minus Kolmogorov-Smirnov test statistic of 0.871 for continuous features. A chi-square test confirmed agreement for discrete features (p < 0.001). XGBoost using RF importance-based features performed the most consistently for both datasets, with percent differences in balanced accuracy and area under the precision-recall curve of < 1.3%. Preliminary findings demonstrate the potential of synthetic radiomic and clinical data augmentation for cancer outcomes modelling, although further validation with larger, more diverse datasets is crucial. While our approach was demonstrated in a lung cancer context, it may be applied to other sites or endpoints.
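The workflow described above (fit a conditional tabular GAN on the real table, rank features with random-forest importances, then train XGBoost and test on withheld real data) can be prototyped in a few lines. The sketch below is illustrative only: the file name, column names, sample counts, and hyperparameters are assumptions, not the study's configuration.

```python
# Hedged sketch: CTGAN-based synthetic tabular data, RF-importance feature
# selection, and an XGBoost survival classifier tested on withheld real data.
import pandas as pd
from ctgan import CTGAN                      # conditional tabular GAN
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score, average_precision_score
from xgboost import XGBClassifier

real = pd.read_csv("radiomics_clinical.csv")         # hypothetical table
train, test = real.iloc[:100], real.iloc[100:]       # simple withheld split

# 1) Fit the conditional tabular GAN and sample virtual patients.
gan = CTGAN(epochs=300)
gan.fit(train, discrete_columns=["survival_2yr"])
synthetic = gan.sample(500)

# 2) Rank features by random-forest importance on the synthetic data.
X_syn, y_syn = synthetic.drop(columns="survival_2yr"), synthetic["survival_2yr"]
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_syn, y_syn)
top = X_syn.columns[rf.feature_importances_.argsort()[::-1][:20]]

# 3) Train XGBoost on the selected features, evaluate on withheld real data.
clf = XGBClassifier(eval_metric="logloss").fit(X_syn[top], y_syn)
X_te, y_te = test.drop(columns="survival_2yr"), test["survival_2yr"]
print(balanced_accuracy_score(y_te, clf.predict(X_te[top])),
      average_precision_score(y_te, clf.predict_proba(X_te[top])[:, 1]))
```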

MSA-Net: a multi-scale and adversarial learning network for segmenting bone metastases in low-resolution SPECT imaging.

Wu Y, Lin Q, He Y, Zeng X, Cao Y, Man Z, Liu C, Hao Y, Cai Z, Ji J, Huang X

PubMed · Jul 24, 2025
Single-photon emission computed tomography (SPECT) plays a crucial role in detecting bone metastases from lung cancer. However, its low spatial resolution and lesion similarity to benign structures present significant challenges for accurate segmentation, especially for lesions of varying sizes. We propose a deep learning-based segmentation framework that integrates conditional adversarial learning with a multi-scale feature extraction generator. The generator employs cascaded dilated convolutions, multi-scale modules, and deep supervision, while the discriminator utilizes a multi-scale L1 loss computed on image-mask pairs to guide segmentation learning. The proposed model was evaluated on a dataset of 286 clinically annotated SPECT scintigrams. It achieved a Dice Similarity Coefficient (DSC) of 0.6671, precision of 0.7228, and recall of 0.6196, outperforming both classical and recent adversarial segmentation models in multi-scale lesion detection, especially for small and clustered lesions. Our results demonstrate that integrating multi-scale feature learning with adversarial supervision significantly improves the segmentation of bone metastases in SPECT imaging. This approach shows potential for clinical decision support in the management of lung cancer.
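For readers who want a concrete picture of the discriminator-side objective, the sketch below pairs an image with a mask, extracts features at several scales from a small convolutional critic, and takes the L1 distance between the feature pyramids of the predicted and ground-truth pairs. The critic architecture and channel widths are illustrative assumptions, not the MSA-Net implementation.

```python
# Hedged sketch of a multi-scale L1 adversarial criterion on image-mask pairs.
import torch
import torch.nn as nn

class MultiScaleCritic(nn.Module):
    def __init__(self, in_ch=2, widths=(16, 32, 64)):
        super().__init__()
        layers, c = [], in_ch
        for w in widths:
            layers.append(nn.Sequential(
                nn.Conv2d(c, w, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True)))
            c = w
        self.stages = nn.ModuleList(layers)

    def forward(self, image, mask):
        x = torch.cat([image, mask], dim=1)   # pair the image with its mask
        feats = []
        for stage in self.stages:             # collect features at each scale
            x = stage(x)
            feats.append(x)
        return feats

def multiscale_l1(critic, image, pred_mask, gt_mask):
    """L1 distance between critic features of predicted and real pairs."""
    f_pred = critic(image, pred_mask)
    f_real = critic(image, gt_mask)
    return sum(torch.mean(torch.abs(p - r)) for p, r in zip(f_pred, f_real))

# usage: loss = multiscale_l1(critic, spect_img, generator(spect_img), gt_mask)
```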

Deep Learning to Differentiate Parkinsonian Syndromes Using Multimodal Magnetic Resonance Imaging: A Proof-of-Concept Study.

Mattia GM, Chougar L, Foubert-Samier A, Meissner WG, Fabbri M, Pavy-Le Traon A, Rascol O, Grabli D, Degos B, Pyatigorskaya N, Faucher A, Vidailhet M, Corvol JC, Lehéricy S, Péran P

PubMed · Jul 24, 2025
The differentiation between multiple system atrophy (MSA) and Parkinson's disease (PD) based on clinical diagnostic criteria can be challenging, especially at an early stage. Leveraging deep learning methods and magnetic resonance imaging (MRI) data has shown great potential in aiding automatic diagnosis. The aim was to determine the feasibility of a three-dimensional convolutional neural network (3D CNN)-based approach using multimodal, multicentric MRI data for differentiating MSA and its variants from PD. MRI data were retrospectively collected from three MSA French reference centers. We computed quantitative maps of gray matter density (GD) from a T1-weighted sequence and mean diffusivity (MD) from diffusion tensor imaging. These maps were used as input to a 3D CNN, either individually ("monomodal," "GD" or "MD") or in combination ("bimodal," "GD-MD"). Classification tasks included the differentiation of PD and MSA patients. Model interpretability was investigated by analyzing misclassified patients and providing a visual interpretation of the most activated regions in CNN predictions. The study population included 92 patients with MSA (50 with MSA-P, parkinsonian variant; 33 with MSA-C, cerebellar variant; 9 with MSA-PC, mixed variant) and 64 with PD. The best accuracies were obtained for the PD/MSA (0.88 ± 0.03 with GD-MD), PD/MSA-C&PC (0.84 ± 0.08 with MD), and PD/MSA-P (0.78 ± 0.09 with GD) tasks. Patients misclassified by the CNN exhibited fewer and milder image alterations, as found using an image-based z score analysis. Activation maps highlighted regions involved in MSA pathophysiology, namely the putamen and cerebellum. Our findings hold promise for developing an efficient, MRI-based, and user-independent diagnostic tool suitable for differentiating parkinsonian syndromes in clinical practice. © 2025 The Author(s). Movement Disorders published by Wiley Periodicals LLC on behalf of International Parkinson and Movement Disorder Society.
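As a rough illustration of the bimodal ("GD-MD") setting, the sketch below stacks co-registered gray matter density and mean diffusivity maps as two input channels of a small 3D CNN that outputs PD-versus-MSA logits. The layer widths and the 96-cubed input size are assumptions for illustration, not the network used in the study.

```python
# Hedged sketch: a minimal bimodal 3D CNN classifier for GD + MD input maps.
import torch
import torch.nn as nn

class BimodalCNN3D(nn.Module):
    def __init__(self, in_channels=2, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                 # x: (batch, 2, D, H, W)
        return self.classifier(self.features(x).flatten(1))

# gd and md are co-registered gray-matter-density and mean-diffusivity volumes
gd, md = torch.rand(1, 1, 96, 96, 96), torch.rand(1, 1, 96, 96, 96)
logits = BimodalCNN3D()(torch.cat([gd, md], dim=1))   # PD vs. MSA logits
```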

Patient Perspectives on Artificial Intelligence in Health Care: Focus Group Study for Diagnostic Communication and Tool Implementation.

Foresman G, Biro J, Tran A, MacRae K, Kazi S, Schubel L, Visconti A, Gallagher W, Smith KM, Giardina T, Haskell H, Miller K

PubMed · Jul 24, 2025
Artificial intelligence (AI) is rapidly transforming health care, offering potential benefits in diagnosis, treatment, and workflow efficiency. However, limited research explores patient perspectives on AI, especially in its role in diagnosis and communication. This study examines patient perceptions of various AI applications, focusing on the diagnostic process and communication. This study aimed to examine patient perspectives on AI use in health care, particularly in diagnostic processes and communication, identifying key concerns, expectations, and opportunities to guide the development and implementation of AI tools. This study used a qualitative focus group methodology with co-design principles to explore patient and family member perspectives on AI in clinical practice. A single 2-hour session was conducted with 17 adult participants. The session included interactive activities and breakout sessions focused on five specific AI scenarios relevant to diagnosis and communication: (1) portal messaging, (2) radiology review, (3) digital scribe, (4) virtual human, and (5) decision support. The session was audio-recorded and transcribed, with facilitator notes and demographic questionnaires collected. Data were analyzed using inductive thematic analysis by 2 independent researchers (GF and JB), with discrepancies resolved via consensus. Participants reported varying comfort levels with AI applications contingent on the level of patient interaction, with digital scribe (average 4.24, range 2-5) and radiology review (average 4.00, range 2-5) being the highest, and virtual human (average 1.68, range 1-4) being the lowest. In total, five cross-cutting themes emerged: (1) validation (concerns about model reliability), (2) usability (impact on diagnostic processes), (3) transparency (expectations for disclosing AI usage), (4) opportunities (potential for AI to improve care), and (5) privacy (concerns about data security). Participants valued the co-design session and felt they had a significant say in the discussions. This study highlights the importance of incorporating patient perspectives in the design and implementation of AI tools in health care. Transparency, human oversight, clear communication, and data privacy are crucial for patient trust and acceptance of AI in diagnostic processes. These findings inform strategies for individual clinicians, health care organizations, and policy makers to ensure responsible and patient-centered AI deployment in health care.

Analyzing pediatric forearm X-rays for fracture analysis using machine learning.

Lam V, Parida A, Dance S, Tabaie S, Cleary K, Anwar SM

PubMed · Jul 24, 2025
Forearm fractures constitute a significant proportion of emergency department presentations in the pediatric population. The treatment goal is to restore length and alignment between the distal and proximal bone fragments. While immobilization through splinting or casting is sufficient for non-displaced and minimally displaced fractures, moderately or severely displaced fractures often require reduction for realignment. Appropriate treatment in current practice is challenged by the lack of resources required for specialized pediatric care, leading to delayed and unnecessary transfers between medical centers that can create treatment complications and burdens. The purpose of this study is to build a machine learning model for analyzing forearm fractures to assist clinical centers that lack surgical expertise in pediatric orthopedics. X-ray scans from 1250 children were curated, preprocessed, and manually annotated at our clinical center. Several machine learning models were fine-tuned using a pretraining strategy leveraging a self-supervised learning model with a vision transformer backbone. We further employed strategies to identify the most important region related to fractures within the forearm X-ray. Model performance was evaluated with and without region of interest (ROI) detection to find an optimal model for forearm fracture analysis. Our proposed strategy leverages self-supervised pretraining (without labels) followed by supervised fine-tuning (with labels). The fine-tuned model using regions cropped with ROI identification achieved the highest classification performance, with a true-positive rate (TPR) of 0.79, true-negative rate (TNR) of 0.74, AUROC of 0.81, and AUPR of 0.86 on the testing data. The results show the feasibility of using machine learning models to predict the appropriate treatment for forearm fractures in pediatric cases. With further improvement, the algorithm could potentially be used as a tool to assist non-specialized orthopedic providers in diagnosis and treatment.
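A minimal sketch of the supervised fine-tuning stage follows: a vision transformer backbone initialised from a self-supervised checkpoint is fine-tuned on ROI-cropped radiographs for a two-class treatment decision. The timm model tag, crop size, optimiser settings, and label semantics are assumptions, not the authors' exact configuration.

```python
# Hedged sketch: fine-tune a self-supervised ViT on ROI-cropped X-rays.
import torch
import torch.nn as nn
import timm

# Model tag assumed; any self-supervised ViT checkpoint could be substituted.
# 2 classes: e.g. "conservative treatment" vs. "reduction required" (assumed).
model = timm.create_model("vit_small_patch16_224.dino", pretrained=True,
                          num_classes=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def finetune_step(roi_batch, labels):
    """One supervised fine-tuning step on ROI crops resized to 224x224."""
    optimizer.zero_grad()
    loss = criterion(model(roi_batch), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# roi_batch would come from a detector/cropper isolating the fracture region
loss = finetune_step(torch.rand(4, 3, 224, 224), torch.tensor([0, 1, 1, 0]))
```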

A Dynamic Machine Learning Model to Predict Angiographic Vasospasm After Aneurysmal Subarachnoid Hemorrhage.

Sen RD, McGrath MC, Shenoy VS, Meyer RM, Park C, Fong CT, Lele AV, Kim LJ, Levitt MR, Wang LL, Sekhar LN

PubMed · Jul 24, 2025
The goal of this study was to develop a highly precise, dynamic machine learning model centered on daily transcranial Doppler ultrasound (TCD) data to predict angiographic vasospasm (AV) in the context of aneurysmal subarachnoid hemorrhage (aSAH). A retrospective review of patients with aSAH treated at a single institution was performed. The primary outcome was AV, defined as angiographic narrowing of any intracranial artery at any time point during admission from risk assessment. Standard demographic, clinical, and radiographic data were collected. Quantitative data including mean arterial pressure, cerebral perfusion pressure, daily serum sodium, and hourly ventriculostomy output were collected. Detailed daily TCD data of intracranial arteries, including maximum velocities, pulsatility indices, and Lindegaard ratios, were collected. Three predictive machine learning models were created and compared: a static multivariate logistic regression model based on data collected on the date of admission (Baseline Model; BM), a standard TCD model using middle cerebral artery flow velocity and Lindegaard ratio measurements (SM), and a long short-term memory (LSTM) machine learning model using all data trended through the hospitalization. A total of 424 patients with aSAH were reviewed, 78 of whom developed AV. In predicting AV at any time point in the future, the LSTM model had the highest precision (0.571) and accuracy (0.776), whereas the SM model had the highest overall performance with an F1 score of 0.566. In predicting AV within 5 days, the LSTM continued to have the highest precision (0.488) and accuracy (0.803). After an ablation test removing all non-TCD elements, the LSTM model improved to a precision of 0.824. Longitudinal TCD data can be used to create a dynamic machine learning model with higher precision than static TCD measurements for predicting AV after aSAH.
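The sketch below illustrates the general shape of such a dynamic model under stated assumptions: each patient contributes a sequence of daily feature vectors (TCD velocities, Lindegaard ratios, labs, vitals) and an LSTM emits a vasospasm probability. The feature count, hidden size, and single-layer head are illustrative, not the study's architecture.

```python
# Hedged sketch: an LSTM over daily multivariate observations predicting AV.
import torch
import torch.nn as nn

class VasospasmLSTM(nn.Module):
    def __init__(self, n_features=24, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, days, n_features)
        _, (h_n, _) = self.lstm(x)         # last hidden state per sequence
        return torch.sigmoid(self.head(h_n[-1]))   # P(vasospasm)

# ten hospital days of 24 assumed daily features for a batch of 8 patients
risk = VasospasmLSTM()(torch.rand(8, 10, 24))
```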

Evaluation of Brain Stiffness in Patients Undergoing Carotid Angioplasty and Stenting Using Magnetic Resonance Elastography.

Wu CH, Murphy MC, Chiang CC, Chen ST, Chung CP, Lirng JF, Luo CB, Rossman PJ, Ehman RL, Huston J, Chang FC

PubMed · Jul 24, 2025
Percutaneous transluminal angioplasty and stenting (PTAS) in patients with carotid stenosis may have potential effects on brain parenchyma. However, current studies on parenchymal changes are scarce due to the need for advanced imaging modalities. Consequently, the alterations in brain parenchyma following PTAS remain an unresolved issue. This prospective study investigated changes to the brain parenchyma using magnetic resonance elastography (MRE). Thirteen patients (6 women and 7 men; 39 MRI sessions) with severe unilateral carotid stenosis indicated for PTAS were recruited between 2021 and 2024. Noncontrast MRI sequences including MRE (spin echo) were acquired using 3 T scanners. All patients underwent MRE before (preprocedural), within 24 h of (early postprocedural), and 3 months after (delayed postprocedural) PTAS. Preprocedural and delayed postprocedural ultrasonographic peak systolic velocity (PSV) was recorded. MRE stiffness and damping ratio were evaluated via neural network inversion of the whole brain, in 14 gray matter (GM) and 12 white matter (WM) regions. Stiffness and damping ratio differences between each pair of MR sessions for each subject were identified by paired-sample t tests. The correlations of stiffness and damping ratio with stenosis grade and ultrasonographic PSV dynamics were evaluated with Pearson correlation coefficients. Statistical significance was defined as p < 0.05. The stiffness of the lesion-side insula, deep GM, and deep WM increased significantly from preprocedural to delayed postprocedural MRE. Increasing deep GM stiffness on the lesion side was significantly positively correlated with the DSA stenosis grade (r = 0.609). Lesion-side insula stiffness increments were significantly positively correlated with PSV decrements (r = 0.664). Regional brain stiffness increased 3 months after PTAS. Lesion-side stiffness was positively correlated with stenosis grade in deep GM and with PSV decrements in the insula. EVIDENCE LEVEL: 2. Stage 2.
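The statistical comparisons described (paired t tests between MRE sessions, Pearson correlation of stiffness change with PSV change) map directly onto SciPy; the arrays below are placeholders, not study data.

```python
# Hedged sketch: paired t-test across MRE sessions and Pearson correlation of
# stiffness change with PSV change. All values are illustrative placeholders.
import numpy as np
from scipy import stats

# per-patient regional stiffness (kPa) at two MRE sessions, and PSV change (cm/s)
pre_stiffness  = np.array([2.8, 3.0, 2.7, 2.9, 3.1, 2.6])
post_stiffness = np.array([3.0, 3.2, 2.9, 3.0, 3.3, 2.8])
psv_decrement  = np.array([120, 210, 90, 150, 240, 60])

t_stat, p_value = stats.ttest_rel(pre_stiffness, post_stiffness)
r, p_corr = stats.pearsonr(post_stiffness - pre_stiffness, psv_decrement)
print(f"paired t-test p={p_value:.3f}, Pearson r={r:.2f} (p={p_corr:.3f})")
```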

Malignancy classification of thyroid incidentalomas using 18F-fluorodeoxy-d-glucose PET/computed tomography-derived radiomics.

Yeghaian M, Piek MW, Bartels-Rutten A, Abdelatty MA, Herrero-Huertas M, Vogel WV, de Boer JP, Hartemink KJ, Bodalal Z, Beets-Tan RGH, Trebeschi S, van der Ploeg IMC

PubMed · Jul 24, 2025
Thyroid incidentalomas (TIs) are incidental thyroid lesions detected on fluorodeoxy-d-glucose (18F-FDG) PET/computed tomography (PET/CT) scans. This study aims to investigate the role of noninvasive PET/CT-derived radiomic features in characterizing 18F-FDG PET/CT TIs and distinguishing benign from malignant thyroid lesions in oncological patients. We included 46 patients with PET/CT TIs who underwent thyroid ultrasound and thyroid surgery at our oncological referral hospital. Radiomic features were extracted from regions of interest (ROIs) in both PET and CT images and analyzed for their association with thyroid cancer and their predictive ability. The TIs were graded using the ultrasound TIRADS classification, and histopathological results served as the reference standard. Univariate and multivariate analyses were performed using features from each modality individually and combined. The performance of the radiomic features was compared to the TIRADS classification. Among the 46 included patients, 36 (78%) had malignant thyroid lesions, while 10 (22%) had benign lesions. The combined run length nonuniformity radiomic feature from PET and CT cubical ROIs demonstrated the highest area under the curve (AUC) of 0.88 (P < 0.05), with a negative correlation with malignancy. This performance was comparable to the TIRADS classification (AUC: 0.84, P < 0.05), which showed a positive correlation with thyroid cancer. Multivariate analysis showed higher predictive performance using CT-derived radiomics (AUC: 0.86 ± 0.13) compared to TIRADS (AUC: 0.80 ± 0.08). This study highlights the potential of 18F-FDG PET/CT-derived radiomics to distinguish benign from malignant thyroid lesions. Further studies with larger cohorts and deep learning-based methods could obtain more robust results.
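The univariate evaluation reduces to scoring a single feature's discriminative ability as an AUC against histopathology, which the short sketch below illustrates with placeholder values; note the sign flip to reflect the reported negative correlation between the radiomic feature and malignancy.

```python
# Hedged sketch: univariate AUC of one radiomic feature vs. a TIRADS grade.
# All arrays are illustrative placeholders, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([1, 1, 0, 1, 0, 1, 1, 0])                    # 1 = malignant
rln    = np.array([0.2, 0.3, 0.9, 0.1, 0.8, 0.4, 0.2, 0.7])    # run length nonuniformity
tirads = np.array([5, 4, 3, 5, 2, 4, 5, 3])                    # TIRADS grade

# the feature correlates negatively with malignancy, so flip its sign for AUC
auc_radiomic = roc_auc_score(labels, -rln)
auc_tirads   = roc_auc_score(labels, tirads)
print(f"radiomic AUC={auc_radiomic:.2f}, TIRADS AUC={auc_tirads:.2f}")
```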

DGEAHorNet: high-order spatial interaction network with dual cross global efficient attention for medical image segmentation.

Peng H, An X, Chen X, Chen Z

PubMed · Jul 24, 2025
Medical image segmentation is a complex and challenging task that aims to accurately segment various structures or abnormal regions in medical images. However, obtaining accurate segmentation results is difficult because of the great uncertainty in the shape, location, and scale of the target region. To address these challenges, we propose a high-order spatial interaction framework with dual cross global efficient attention (DGEAHorNet), which employs a neural network architecture based on recursive gated convolution to adequately extract multi-scale contextual information from images. Specifically, a Dual Cross-Attention (DCA) module is added to the skip connections to effectively blend multi-stage encoder features and narrow the semantic gap. In the bottleneck stage, a global channel spatial attention module (GCSAM) is used to extract global image information. To obtain a better feature representation, we feed the output of the GCSAM into a multi-branch dense layer (SENetV2) for excitation. Furthermore, we adopt the Depthwise Over-parameterized Convolutional layer (DO-Conv) to replace the common convolutional layers in the input and output parts of our network, and add Efficient Attention (EA) to reduce computational complexity and enhance the model's performance. To evaluate the effectiveness of the proposed DGEAHorNet, we conduct comprehensive experiments on four publicly available datasets, achieving Dice similarity coefficients of 0.9320, 0.9337, 0.9312, and 0.7799 on ISIC2018, ISIC2017, CVC-ClinicDB, and HRF, respectively. Our results show that DGEAHorNet performs better than advanced methods. The code is publicly available at https://github.com/penghaixin/mymodel .
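For context on the recursive-gated-convolution backbone the framework builds on, the sketch below shows a simplified high-order gated convolution in which channel groups interact through element-wise gating after a depthwise convolution. It is an illustrative simplification, not the DGEAHorNet code, which is available at the linked repository.

```python
# Hedged sketch of a simplified recursive gated ("g^n") convolution block.
import torch
import torch.nn as nn

class RecursiveGatedConv(nn.Module):
    def __init__(self, dim=64, order=3):
        super().__init__()
        self.dims = [dim // 2 ** i for i in range(order)][::-1]   # e.g. [16, 32, 64]
        self.proj_in = nn.Conv2d(dim, 2 * dim, 1)
        self.dwconv = nn.Conv2d(sum(self.dims), sum(self.dims), 7,
                                padding=3, groups=sum(self.dims))  # depthwise
        self.projs = nn.ModuleList(
            [nn.Conv2d(self.dims[i], self.dims[i + 1], 1)
             for i in range(order - 1)])
        self.proj_out = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        pwa, rest = torch.split(self.proj_in(x),
                                (self.dims[0], sum(self.dims)), dim=1)
        gates = torch.split(self.dwconv(rest), self.dims, dim=1)
        x = pwa * gates[0]                       # first-order interaction
        for proj, gate in zip(self.projs, gates[1:]):
            x = proj(x) * gate                   # higher-order interactions
        return self.proj_out(x)

out = RecursiveGatedConv()(torch.rand(1, 64, 32, 32))  # spatial shape preserved
```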

Artificial intelligence for multi-time-point arterial phase contrast-enhanced MRI profiling to predict prognosis after transarterial chemoembolization in hepatocellular carcinoma.

Yao L, Adwan H, Bernatz S, Li H, Vogl TJ

PubMed · Jul 24, 2025
Contrast-enhanced magnetic resonance imaging (CE-MRI) monitoring across multiple time points is critical for optimizing hepatocellular carcinoma (HCC) prognosis during transarterial chemoembolization (TACE) treatment. The aim of this retrospective study was to develop and validate an artificial intelligence (AI)-powered model utilizing multi-time-point arterial phase CE-MRI data for HCC prognosis stratification in TACE patients. A total of 543 individual arterial phase CE-MRI scans from 181 HCC patients were retrospectively collected. All patients underwent TACE and longitudinal arterial phase CE-MRI assessments at three time points: prior to treatment, and following the first and second TACE sessions. Among them, 110 patients received TACE monotherapy, while the remaining 71 patients underwent TACE in combination with microwave ablation (MWA). All images were subjected to standardized preprocessing procedures. We developed an end-to-end deep learning model, ProgSwin-UNETR, based on the Swin Transformer architecture, to perform four-class prognosis stratification directly from the input imaging data. The model was trained using multi-time-point arterial phase CE-MRI data and evaluated via fourfold cross-validation. Classification performance was assessed using the area under the receiver operating characteristic curve (AUC). For comparative analysis, we benchmarked performance against traditional radiomics-based classifiers and the mRECIST criteria. Prognostic utility was further assessed using Kaplan-Meier (KM) survival curves. Additionally, multivariate Cox proportional hazards regression was performed as a post hoc analysis to evaluate the independent and complementary prognostic value of the model outputs and clinical variables. Grad-CAM++ was applied to visualize the imaging regions contributing most to model prediction. The ProgSwin-UNETR model achieved an accuracy of 0.86 and an AUC of 0.92 (95% CI: 0.90-0.95) for the four-class prognosis stratification task, outperforming radiomic models across all risk groups. Furthermore, KM survival analyses were performed using three different approaches (the AI model, radiomics-based classifiers, and the mRECIST criteria) to stratify patients by risk. Of the three approaches, only the AI-based ProgSwin-UNETR model achieved statistically significant risk stratification across the entire cohort and in both the TACE-alone and TACE + MWA subgroups (p < 0.005). In contrast, the mRECIST and radiomics models did not yield significant survival differences across subgroups (p > 0.05). Multivariate Cox regression analysis further demonstrated that the model was a robust independent prognostic factor (p = 0.01), effectively stratifying patients into four distinct risk groups (Class 0 to Class 3) with Log(HR) values of 0.97, 0.51, -0.53, and -0.92, respectively. Additionally, Grad-CAM++ visualizations highlighted critical regional features contributing to prognosis prediction, providing interpretability of the model. ProgSwin-UNETR can accurately predict the risk group of HCC patients undergoing TACE therapy and can further be applied for personalized prognosis prediction.
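The survival analyses described above (Kaplan-Meier curves per predicted class, a log-rank test across classes, and a multivariate Cox model including the predicted class) can be reproduced with the lifelines library; the tiny dataframe, column names, and covariates below are illustrative assumptions, not study data.

```python
# Hedged sketch: KM curves per predicted risk class, log-rank test, and a
# multivariate Cox model treating the predicted class as a covariate.
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.DataFrame({
    "months":     [6, 14, 30, 42, 9, 25, 37, 50, 12, 18, 33, 45],  # follow-up
    "event":      [1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0],            # death observed
    "risk_class": [0, 0, 0, 3, 1, 1, 3, 3, 2, 2, 2, 3],            # model output
    "age":        [67, 72, 58, 61, 70, 66, 59, 63, 64, 69, 60, 62],
})

# Kaplan-Meier curve per predicted class and a log-rank test across classes
for cls, grp in df.groupby("risk_class"):
    KaplanMeierFitter().fit(grp["months"], grp["event"], label=f"class {cls}")
print(multivariate_logrank_test(df["months"], df["risk_class"], df["event"]).p_value)

# Multivariate Cox model: is the predicted class prognostic beyond age?
cph = CoxPHFitter().fit(df, duration_col="months", event_col="event")
cph.print_summary()
```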