
Improving predictability, reliability, and generalizability of brain-wide associations for cognitive abilities via multimodal stacking.

Tetereva A, Knodt AR, Melzer TR, van der Vliet W, Gibson B, Hariri AR, Whitman ET, Li J, Lal Khakpoor F, Deng J, Ireland D, Ramrakha S, Pat N

PubMed · Jun 1, 2025
Brain-wide association studies (BWASs) have attempted to relate cognitive abilities to brain phenotypes, but have been challenged by issues of predictability, test-retest reliability, and cross-cohort generalizability. To tackle these challenges, we proposed a machine learning "stacking" approach that draws information from whole-brain MRI across different modalities, from task-functional MRI (fMRI) contrasts and functional connectivity during tasks and rest to structural measures, into one prediction model. We benchmarked the benefits of stacking using the Human Connectome Project Young Adults (n = 873, 22-35 years old), the Human Connectome Project Aging (n = 504, 35-100 years old), and the Dunedin Multidisciplinary Health and Development Study (Dunedin Study, n = 754, 45 years old). For predictability, stacked models led to out-of-sample r ≈ 0.5-0.6 when predicting cognitive abilities at the time of scanning, driven primarily by task-fMRI contrasts. Notably, using the Dunedin Study, we were able to predict participants' cognitive abilities at ages 7, 9, and 11 years from their multimodal MRI at age 45 years, with an out-of-sample r of 0.52. For test-retest reliability, stacked models reached an excellent level of reliability (intraclass correlation > 0.75), even when we stacked only task-fMRI contrasts together. For generalizability, a stacked model with non-task MRI built from one dataset significantly predicted cognitive abilities in the other datasets. Altogether, stacking is a viable approach to addressing the three challenges of BWAS for cognitive abilities.
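The stacking idea described above can be sketched in a few lines: one base model per MRI modality produces out-of-fold predictions, and a meta-model combines them. All data, modality names, and model choices below are synthetic stand-ins, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 200
y = rng.normal(size=n)  # stand-in for cognitive-ability scores
# Three synthetic "modalities" carrying different amounts of signal
modalities = {
    "task_fmri": y[:, None] * 0.8 + rng.normal(size=(n, 50)),
    "rest_conn": y[:, None] * 0.4 + rng.normal(size=(n, 50)),
    "structural": y[:, None] * 0.2 + rng.normal(size=(n, 50)),
}

# Level 1: out-of-fold predictions from one model per modality
level1 = np.column_stack([
    cross_val_predict(Ridge(alpha=10.0), X, y, cv=5)
    for X in modalities.values()
])

# Level 2: a meta-model "stacks" the per-modality predictions
meta = Ridge(alpha=1.0).fit(level1, y)
stacked = meta.predict(level1)
r = np.corrcoef(stacked, y)[0, 1]
print(f"stacked r = {r:.2f}")
```

Using out-of-fold predictions at level 1 keeps the meta-model from simply memorizing the training targets.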

Significant reduction in manual annotation costs in ultrasound medical image database construction through step-by-step artificial intelligence pre-annotation.

Zheng F, XingMing L, JuYing X, MengYing T, BaoJian Y, Yan S, KeWei Y, ZhiKai L, Cheng H, KeLan Q, XiHao C, WenFei D, Ping H, RunYu W, Ying Y, XiaoHui B

PubMed · Jun 1, 2025
This study investigates the feasibility of reducing manual image annotation costs in medical image database construction through a step-by-step approach in which an artificial intelligence (AI) model trained on a previous batch of data automatically pre-annotates the next batch of image data, using ultrasound images of thyroid nodules as an example. The study used YOLOv8 as the AI model. During training, in addition to conventional image augmentation techniques, augmentation methods specifically tailored to ultrasound images were employed to balance the quantity differences between thyroid nodule classes and enhance training effectiveness. Training the model with augmented data significantly outperformed training with raw image data. When there were only 1,360 original images across 7 thyroid nodule classes, pre-annotation using the AI model trained on augmented data saved at least 30% of the manual annotation workload for junior physicians. When the number of original images reached 6,800, the classification accuracy of the AI model trained on augmented data was very close to that of junior physicians, eliminating the need for manual preliminary annotation.
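The class-balancing augmentation step can be sketched as oversampling minority classes with cheap transforms. The class names, counts, and the specific augmentations (horizontal flip, small gain change) are illustrative assumptions, not the study's recipe.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-ins for ultrasound patches in two imbalanced nodule classes
images = {"class_a": [rng.random((64, 64)) for _ in range(40)],
          "class_b": [rng.random((64, 64)) for _ in range(10)]}

def augment(img, rng):
    """Cheap augmentations: random horizontal flip plus a small gain change."""
    out = img[:, ::-1] if rng.random() < 0.5 else img
    return np.clip(out * rng.uniform(0.9, 1.1), 0.0, 1.0)

# Oversample every class up to the size of the largest one
target = max(len(v) for v in images.values())
balanced = {}
for label, imgs in images.items():
    extra = [augment(imgs[rng.integers(len(imgs))], rng)
             for _ in range(target - len(imgs))]
    balanced[label] = imgs + extra

print({k: len(v) for k, v in balanced.items()})
```

Note that vertical flips are usually avoided for ultrasound, since depth direction carries physical meaning.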

Discriminating Clear Cell From Non-Clear Cell Renal Cell Carcinoma: A Machine Learning Approach Using Contrast-enhanced Ultrasound Radiomics.

Liang M, Wu S, Ou B, Wu J, Qiu H, Zhao X, Luo B

PubMed · May 31, 2025
The aim of this investigation was to assess the clinical usefulness of a machine learning model using contrast-enhanced ultrasound (CEUS) radiomics to discriminate clear cell renal cell carcinoma (ccRCC) from non-ccRCC. A total of 292 patients with pathologically confirmed RCC subtypes underwent CEUS (development set, n = 231; validation set, n = 61) in this retrospective study. Radiomics features were derived from CEUS images acquired during the cortical and parenchymal phases. Radiomics models were developed using logistic regression (LR), support vector machine, decision tree, naive Bayes, gradient boosting machine, and random forest classifiers. The best-performing model was selected based on the area under the receiver operating characteristic curve (AUC). Relevant clinical CEUS features were identified through univariate and multivariate LR analyses to develop a clinical model, and a combined model was established by integrating radiomics and clinical CEUS features. After reduction and selection were applied to 2,250 radiomics features, a final set of 8 features was retained. Among the models, the LR model achieved the highest performance on the validation set and showed good robustness. In both the development and validation sets, the radiomics model (AUC, 0.946 and 0.927) and the combined model (AUC, 0.949 and 0.925) outperformed the clinical model (AUC, 0.851 and 0.768), showing higher AUC values (all p < 0.05). The combined model also exhibited favorable calibration and clinical benefit. The combined model integrating clinical CEUS and CEUS radiomics features demonstrated good diagnostic performance in discriminating ccRCC from non-ccRCC.
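The core pipeline, a logistic regression on a handful of selected radiomics features evaluated by AUC, can be sketched with synthetic data. The feature matrix, labels, and split sizes below mirror the abstract's cohort sizes but are otherwise simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 292                       # cohort size from the abstract; data are synthetic
X = rng.normal(size=(n, 8))   # stand-in for the 8 selected radiomics features
w = rng.normal(size=8)
y = (X @ w + rng.normal(size=n) > 0).astype(int)  # ccRCC vs non-ccRCC stand-in

# Development / validation split matching the abstract's 231 / 61
X_dev, X_val, y_dev, y_val = train_test_split(
    X, y, test_size=61, random_state=0, stratify=y)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_dev, y_dev)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"validation AUC = {auc:.3f}")
```

Standardizing features before LR matters here because radiomics features typically span very different scales.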

Dual-energy CT-based virtual monoenergetic imaging via unsupervised learning.

Liu CK, Chang HY, Huang HM

PubMed · May 31, 2025
Since its development, virtual monoenergetic imaging (VMI) derived from dual-energy computed tomography (DECT) has proven valuable in many clinical applications. However, DECT-based VMI shows increased noise at low keV levels. In this study, we proposed an unsupervised learning method to generate VMI from DECT, requiring no labeled training data (i.e., no high-quality reference VMIs). Specifically, DECT images were fed into a deep learning (DL) model expected to output VMI. Based on the theory that VMI obtained from image-space data is a linear combination of DECT images, we used the model output (i.e., the predicted VMI) to recalculate the DECT images. By minimizing the difference between the measured and recalculated DECT images, the DL model constrains itself to generate VMI from DECT images. We investigated whether the proposed DL-based method improves the quality of VMIs. Experimental results on patient data showed that the DL-based VMIs had better image quality than the conventional DECT-based VMIs. Moreover, the CT number differences between the DECT-based and DL-based VMIs were distributed within ±10 HU for bone and ±5 HU for brain, fat, and muscle. Except for bone, no statistically significant difference in CT number measurements was found between the DECT-based and DL-based VMIs (p > 0.01). Our preliminary results show that DL has the potential to generate high-quality VMIs directly from DECT in an unsupervised manner.
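The self-supervision signal rests on the stated linear relationship: if each VMI is a known linear combination of the two DECT channels, predicted VMIs can be mapped back to DECT and compared with the measurement. A numpy sketch of that consistency loss, with purely hypothetical mixing weights (the real keV-dependent coefficients come from the DECT decomposition, not from this example):

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy 2-channel "DECT" image (low/high kVp), pixels flattened
dect = rng.random((2, 1000))

# Hypothetical keV-dependent mixing weights: each VMI row is a linear
# combination of the two DECT channels (illustrative numbers only)
A = np.array([[0.7, 0.3],    # e.g. a low-keV VMI
              [0.4, 0.6]])   # e.g. a higher-keV VMI

vmi = A @ dect               # forward model: VMIs from DECT

# Self-supervision: recalculate DECT from the predicted VMIs and
# penalize disagreement with the measured DECT
dect_recalc = np.linalg.solve(A, vmi)
loss = float(np.mean((dect - dect_recalc) ** 2))
print(f"consistency loss = {loss:.2e}")
```

In the actual method this loss is backpropagated through the network, so no clean target VMI is ever needed.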

Relationship between spleen volume and diameter for assessment of response to treatment on CT in patients with hematologic malignancies enrolled in clinical trials.

Hasenstab KA, Lu J, Leong LT, Bossard E, Pylarinou-Sinclair E, Devi K, Cunha GM

PubMed · May 31, 2025
To investigate the relationship between spleen diameter (d) and volume (v) in patients with hematologic malignancies (HM) by determining the volumetric thresholds that best correlate with established diameter thresholds for assessing response to treatment, and, exploratorily, to interrogate the impact of volumetric measurements on response categories and as a predictor of response. This was a secondary analysis of prospectively collected clinical trial data from 382 patients with HM. Spleen diameters were computed following Lugano criteria and volumes using deep learning segmentation. The d-v relationship was estimated using a power regression model, and volumetric thresholds ([Formula: see text]) for treatment response were estimated; a threshold search determined the percentage change ([Formula: see text]) and minimum volumetric increase ([Formula: see text]) that maximize agreement with Lugano criteria. The predictive performance of spleen diameter and volume for clinical response was investigated using a random forest model. [Formula: see text] describes the relationship between spleen diameter and volume. [Formula: see text] for splenomegaly was 546 cm³. [Formula: see text], [Formula: see text], and [Formula: see text] for assessing response with the highest agreement with Lugano criteria were 570 cm³, 73%, and 170 cm³, respectively. Predictive performance for response did not differ significantly between diameter and volume (P = 0.78). This study provides empirical spleen volume thresholds and percentage changes that best correlate with diameter thresholds, i.e., Lugano criteria, for assessment of response to treatment in patients with HM. In our dataset, using spleen volumetric thresholds rather than diameter thresholds resulted in similar response assessment categories and did not signal differences in predictive value for response.
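A power regression v = a·d^b reduces to ordinary least squares in log-log space, and the fitted curve can then translate any diameter cutoff into a volume cutoff. The coefficients and the diameter threshold below are illustrative, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic spleen diameters (cm) and volumes following v = a * d^b
d = rng.uniform(8, 20, size=300)
a_true, b_true = 0.3, 2.8            # illustrative coefficients only
v = a_true * d ** b_true * rng.lognormal(sigma=0.05, size=300)

# Power regression = linear least squares on (log d, log v)
b_hat, log_a_hat = np.polyfit(np.log(d), np.log(v), 1)
a_hat = np.exp(log_a_hat)
print(f"v ~ {a_hat:.2f} * d^{b_hat:.2f}")

# Translate a diameter threshold into its equivalent volume threshold
d_threshold = 13.0                   # hypothetical Lugano-style cutoff, cm
v_threshold = a_hat * d_threshold ** b_hat
print(f"equivalent volume threshold ~ {v_threshold:.0f} cm^3")
```

The log-log trick is what makes the exponent b directly recoverable as a regression slope.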

Deep-learning based multi-modal models for brain age, cognition and amyloid pathology prediction.

Wang C, Zhang W, Ni M, Wang Q, Liu C, Dai L, Zhang M, Shen Y, Gao F

PubMed · May 31, 2025
Magnetic resonance imaging (MRI), combined with artificial intelligence techniques, has improved our understanding of brain structural change and enabled the estimation of brain age. Neurodegenerative disorders, such as Alzheimer's disease (AD), have been linked to accelerated brain aging. In this study, we aimed to develop a deep-learning framework that processes and integrates MRI images to more accurately predict brain age, cognitive function, and amyloid pathology. We collected over 10,000 T1-weighted MRI scans from more than 7,000 individuals across six cohorts. We designed a multi-modal deep-learning framework that employs 3D convolutional neural networks to analyze MRI and additional neural networks to evaluate demographic data. Our initial model focused on predicting brain age, serving as a foundation from which we developed separate models for cognitive function and amyloid plaque prediction through transfer learning. The brain age prediction model achieved a mean absolute error (MAE) of 3.302 years for the cognitively normal population in the ADNI test dataset. The gap between predicted brain age and chronological age increases significantly as cognition declines. The cognition prediction model exhibited a root mean square error (RMSE) of 0.334 for the Clinical Dementia Rating (CDR) regression task, achieving an area under the curve (AUC) of approximately 0.95 in identifying dementia patients. Dementia-related brain regions, such as the medial temporal lobe, were identified by our model. Finally, the amyloid prediction model achieved an AUC of about 0.8 for dementia patients.
These findings indicate that the present predictive models can identify subtle changes in brain structure, enabling precise estimates of brain age, cognitive status, and amyloid pathology. Such models could facilitate the use of MRI as a non-invasive diagnostic tool for neurodegenerative diseases, including AD.

Accelerated proton resonance frequency-based magnetic resonance thermometry by optimized deep learning method.

Xu S, Zong S, Mei CS, Shen G, Zhao Y, Wang H

PubMed · May 31, 2025
Proton resonance frequency (PRF)-based magnetic resonance (MR) thermometry plays a critical role in thermal ablation therapies delivered through focused ultrasound (FUS). For clinical applications, accurate and rapid temperature feedback is essential to ensure both the safety and effectiveness of these treatments. This work aims to improve the temporal resolution of dynamic MR temperature map reconstruction using an enhanced deep-learning method, thereby supporting the real-time monitoring required for effective FUS treatments. Five classical neural network architectures (cascade net, complex-valued U-Net, shifted-window transformer for MRI, real-valued U-Net, and U-Net with residual blocks), along with training-optimized methods, were applied to reconstruct temperature maps from 2-fold and 4-fold undersampled k-space data. The training enhancements included pre-training/training-phase data augmentation, knowledge distillation, and a novel amplitude-phase decoupling loss function. Phantom and ex vivo tissue heating experiments were conducted using a FUS transducer. The ground truth was the complex MR images with accurate temperature changes; the datasets were manually undersampled to simulate acceleration. Separate testing datasets were used to evaluate real-time performance and temperature accuracy. Furthermore, the proposed deep learning-based rapid reconstruction approach was validated on a clinical dataset obtained from patients with uterine fibroids, demonstrating its clinical applicability. Acceleration factors of 1.9 and 3.7 were achieved for 2× and 4× k-space undersampling, respectively. The deep learning-based reconstruction using ResUNet with the four optimizations showed superior performance. For 2-fold acceleration, the RMSE of the temperature map patches was 0.89°C for the phantom and 1.15°C for the ex vivo testing datasets.
The DICE coefficient for the 43°C isotherm-enclosed regions was 0.81, and Bland-Altman analysis indicated a bias of -0.25°C with limits of agreement of ±2.16°C. In the 4-fold undersampling case, these evaluation metrics showed approximately a 10% reduction in accuracy. Additionally, the DICE coefficients measuring the overlap between the reconstructed temperature maps (using the optimized ResUNet) and the ground truth, specifically in regions where the temperature exceeded the 43°C threshold, were 0.77 and 0.74 for the 2× and 4× undersampling scenarios, respectively. This study demonstrates that deep learning-based reconstruction significantly enhances the accuracy and efficiency of MR thermometry, particularly in the context of FUS-based clinical treatments for uterine fibroids. The approach could also be extended to other applications, such as essential tremor and prostate cancer treatments, where MRI-guided FUS plays a critical role.
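Underneath the reconstruction problem, PRF thermometry maps a phase difference between gradient-echo images to a temperature change via ΔT = Δφ / (α·γ·B0·TE). A minimal sketch with standard literature constants (the scan parameters B0 and TE are assumed values, not this study's protocol):

```python
import numpy as np

# PRF-shift thermometry: temperature change from the phase difference
# between two gradient-echo acquisitions. Standard literature constants:
GAMMA = 2 * np.pi * 42.58e6   # proton gyromagnetic ratio, rad / (s * T)
ALPHA = -0.01e-6              # PRF thermal coefficient, per degree C
B0 = 3.0                      # assumed field strength, T
TE = 0.012                    # assumed echo time, s

def prf_delta_t(phase_now, phase_ref):
    """Temperature change (deg C) from a phase-difference map (rad)."""
    return (phase_now - phase_ref) / (ALPHA * GAMMA * B0 * TE)

# A +10 C rise at these settings corresponds to about -0.96 rad of phase
dphi = ALPHA * GAMMA * B0 * TE * 10.0
print(f"phase shift for +10 C: {dphi:.4f} rad")
print(f"recovered dT: {prf_delta_t(dphi, 0.0):.1f} C")
```

The undersampled-reconstruction networks in the abstract aim to recover the complex images feeding this phase difference faster, which is why phase accuracy (hence the amplitude-phase decoupling loss) matters more than magnitude accuracy here.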

Subclinical atrial fibrillation prediction based on deep learning and strain analysis using echocardiography.

Huang SH, Lin YC, Chen L, Unankard S, Tseng VS, Tsao HM, Tang GJ

PubMed · May 31, 2025
Subclinical atrial fibrillation (SCAF), also known as atrial high-rate episodes (AHREs), refers to asymptomatic heart rate elevations associated with increased risks of atrial fibrillation and cardiovascular events. Although deep learning (DL) models leveraging echocardiographic images are widely used for cardiac function analysis, their application to AHRE prediction remains unexplored. This study introduces a novel DL-based framework for automatic AHRE detection using echocardiograms. The approach encompasses left atrium (LA) segmentation, LA strain feature extraction, and AHRE classification. Data from 117 patients with cardiac implantable electronic devices undergoing echocardiography were analyzed, with 80% allocated to the development set and 20% to the test set. LA segmentation accuracy was quantified using the Dice coefficient, yielding scores of 0.923 for the LA cavity and 0.741 for the LA wall. For AHRE classification, metrics such as area under the curve (AUC), accuracy, sensitivity, and specificity were employed. A transformer-based model integrating patient characteristics demonstrated robust performance, achieving a mean AUC of 0.815, accuracy of 0.809, sensitivity of 0.800, and specificity of 0.783 for a 24-h AHRE duration threshold. This framework represents a reliable tool for AHRE assessment and holds significant potential for early SCAF detection, enhancing clinical decision-making and patient outcomes.
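The classification metrics quoted above (AUC, sensitivity, specificity) all derive from model scores and a decision threshold. A minimal sketch with synthetic labels and scores (not the study's data); the 0.65 threshold is an arbitrary illustration:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(5)
# Synthetic binary AHRE labels and informative-but-noisy model scores
y_true = rng.integers(0, 2, size=200)
scores = y_true * 0.6 + rng.random(200) * 0.7

# Threshold-free ranking metric
auc = roc_auc_score(y_true, scores)

# Threshold-dependent metrics from the confusion matrix
y_pred = (scores >= 0.65).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.2f} sens={sensitivity:.2f} spec={specificity:.2f}")
```

Moving the threshold trades sensitivity against specificity while AUC stays fixed, which is why both kinds of metric are usually reported together.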

MSLesSeg: baseline and benchmarking of a new Multiple Sclerosis Lesion Segmentation dataset.

Guarnera F, Rondinella A, Crispino E, Russo G, Di Lorenzo C, Maimone D, Pappalardo F, Battiato S

PubMed · May 31, 2025
This paper presents MSLesSeg, a new, publicly accessible MRI dataset designed to advance research in Multiple Sclerosis (MS) lesion segmentation. The dataset comprises 115 scans of 75 patients, including T1, T2, and FLAIR sequences, along with supplementary clinical data collected across different sources. Expert-validated annotations provide high-quality lesion segmentation labels, establishing a reliable human-labeled dataset for benchmarking. Part of the dataset was shared with expert scientists to compare the latest automatic AI-based image segmentation solutions with expert manual segmentation. In addition, an AI-based lesion segmentation baseline for MSLesSeg was developed and technically validated against the latest state-of-the-art methods. The dataset, the detailed analysis of researcher contributions, and the baseline results presented here mark a significant milestone for advancing automated MS lesion segmentation research.
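Lesion-segmentation benchmarks like this one are typically scored with the Dice coefficient, 2|A∩B| / (|A|+|B|), between predicted and expert masks. A minimal sketch on toy masks (the lesion shapes are invented for illustration):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: prediction hits 2 of the 4 ground-truth lesion pixels
# and adds 1 false-positive pixel elsewhere
gt = np.zeros((8, 8), dtype=bool)
gt[2:4, 2:4] = True            # 4-pixel "lesion"
pred = np.zeros((8, 8), dtype=bool)
pred[2:4, 2:3] = True          # 2 pixels overlapping the lesion
pred[5, 5] = True              # 1 false positive

print(f"Dice = {dice(pred, gt):.2f}")
```

Dice rewards overlap symmetrically, so it penalizes both missed lesion pixels and false positives.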

NeoPred: dual-phase CT AI forecasts pathologic response to neoadjuvant chemo-immunotherapy in NSCLC.

Zheng J, Yan Z, Wang R, Xiao H, Chen Z, Ge X, Li Z, Liu Z, Yu H, Liu H, Wang G, Yu P, Fu J, Zhang G, Zhang J, Liu B, Huang Y, Deng H, Wang C, Fu W, Zhang Y, Wang R, Jiang Y, Lin Y, Huang L, Yang C, Cui F, He J, Liang H

PubMed · May 31, 2025
Accurate preoperative prediction of major pathological response or pathological complete response after neoadjuvant chemo-immunotherapy remains a critical unmet need in resectable non-small-cell lung cancer (NSCLC). Conventional size-based imaging criteria offer limited reliability, while biopsy confirmation is available only post-surgery. We retrospectively assembled 509 consecutive NSCLC cases from four Chinese thoracic-oncology centers (March 2018 to March 2023) and prospectively enrolled 50 additional patients. Three 3-dimensional convolutional neural networks (pre-treatment CT, pre-surgical CT, dual-phase CT) were developed; the best-performing dual-phase model (NeoPred) optionally integrated clinical variables. Model performance was measured by area under the receiver-operating-characteristic curve (AUC) and compared with nine board-certified radiologists. In an external validation set (n=59), NeoPred achieved an AUC of 0.772 (95% CI: 0.650 to 0.895), sensitivity 0.591, specificity 0.733, and accuracy 0.627; incorporating clinical data increased the AUC to 0.787. In a prospective cohort (n=50), NeoPred reached an AUC of 0.760 (95% CI: 0.628 to 0.891), surpassing the experts' mean AUC of 0.720 (95% CI: 0.574 to 0.865). Model assistance raised the pooled expert AUC to 0.829 (95% CI: 0.707 to 0.951) and accuracy to 0.820. Marked performance persisted within radiological stable-disease subgroups (external AUC 0.742, 95% CI: 0.468 to 1.000; prospective AUC 0.833, 95% CI: 0.497 to 1.000). Combining dual-phase CT and clinical variables, NeoPred reliably and non-invasively predicts pathological response to neoadjuvant chemo-immunotherapy in NSCLC, outperforms unaided expert assessment, and significantly enhances radiologist performance. Further multinational trials are needed to confirm generalizability and support surgical decision-making.
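AUCs with 95% confidence intervals, as reported throughout this abstract, are commonly obtained by bootstrap resampling of the validation set. A minimal sketch on synthetic labels and scores (stand-ins for a cohort the size of the external validation set, not the study's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
# Synthetic labels/scores sized like the n=59 external validation set
y = rng.integers(0, 2, size=59)
s = y * 0.5 + rng.random(59)

# Bootstrap: resample cases with replacement, recompute AUC each time
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y), len(y))
    if len(np.unique(y[idx])) < 2:   # a resample needs both classes
        continue
    boot.append(roc_auc_score(y[idx], s[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {roc_auc_score(y, s):.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

With only 59 cases the interval is wide, which is consistent with the broad CIs the abstract reports for its subgroup analyses.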
