Page 38 of 72720 results

Generating Synthetic T2*-Weighted Gradient Echo Images of the Knee with an Open-source Deep Learning Model.

Vrettos K, Vassalou EE, Vamvakerou G, Karantanas AH, Klontzas ME

PubMed · Jun 1 2025
Routine knee MRI protocols for 1.5 T and 3 T scanners do not include T2*-weighted gradient echo (T2*W) images, which are useful in several clinical scenarios, such as the assessment of cartilage, synovial blooming (hemosiderin deposition), chondrocalcinosis, and evaluation of the physis in pediatric patients. Herein, we aimed to develop an open-source deep learning model that creates synthetic T2*W images of the knee from fat-suppressed intermediate-weighted images. A CycleGAN model was trained on 12,118 sagittal knee MR images and tested on an independent set of 2996 images. Diagnostic interchangeability of the synthetic T2*W images was assessed against a series of findings. Voxel intensity of four tissues was evaluated with Bland-Altman plots. Image quality was assessed using the normalized root mean squared error (NRMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). The code, model, and a standalone executable file are provided on GitHub. The model achieved a median NRMSE, PSNR, and SSIM of 0.5, 17.4, and 0.5, respectively. Images were found interchangeable, with an intraclass correlation coefficient >0.95 for all findings. Mean voxel intensity was equal between synthetic and conventional images. Four types of artifacts were identified: geometric distortion (86/163 cases), object insertion/omission (11/163 cases), a wrap-around-like artifact (26/163 cases), and incomplete fat suppression (120/163 cases), with a median impact score of 0 (no impact) on diagnosis. In conclusion, the developed open-source GAN model creates synthetic T2*W images of the knee of high diagnostic value and quality. The identified artifacts had no or only minor effect on the diagnostic value of the images.
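The abstract above reports NRMSE and PSNR as image-quality metrics for the synthetic images. As a minimal numpy sketch (SSIM omitted for brevity), assuming the common convention of normalizing RMSE by the reference dynamic range — the paper's exact normalization may differ:

```python
import numpy as np

def nrmse(ref, pred):
    """Root mean squared error normalized by the reference dynamic range."""
    rmse = np.sqrt(np.mean((ref - pred) ** 2))
    return rmse / (ref.max() - ref.min())

def psnr(ref, pred, data_range=None):
    """Peak signal-to-noise ratio in dB."""
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - pred) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Both metrics compare a synthetic slice against its conventional counterpart voxel-wise; lower NRMSE and higher PSNR indicate closer agreement.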

Multivariate Classification of Adolescent Major Depressive Disorder Using Whole-brain Functional Connectivity.

Li Z, Shen Y, Zhang M, Li X, Wu B

PubMed · Jun 1 2025
Adolescent major depressive disorder (MDD) is a serious mental health condition that has been linked to abnormal functional connectivity (FC) patterns within the brain. However, whether FC can serve as a biomarker for the diagnosis of adolescent MDD remains unclear. The aim of our study was to investigate the potential diagnostic value of whole-brain FC in adolescent MDD. Resting-state functional magnetic resonance imaging data were obtained from 94 adolescents with MDD and 78 healthy adolescents. The whole brain was segmented into 90 regions of interest (ROIs) using the automated anatomical labeling atlas. FC was assessed by calculating the Pearson correlation coefficient of the average time series between each pair of ROIs. A multivariate pattern analysis was employed to classify patients from controls using whole-brain FC as the input features. The linear support vector machine classifier achieved an accuracy of 69.18% using the optimal functional connection features. The consensus functional connections were mainly located within and between large-scale brain networks. The top 10 nodes with the highest weights in the classification model were mainly located in the default mode, salience, auditory, and sensorimotor networks. Our findings highlight the importance of functional network connectivity in the neurobiology of adolescent MDD and suggest that altered FC and high-weight regions could serve as complementary diagnostic markers in adolescents with depression.
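The feature construction described above — pairwise Pearson correlations between 90 AAL ROI time series — can be sketched in a few lines; the classifier itself (linear SVM with feature selection) is omitted:

```python
import numpy as np

def fc_features(timeseries):
    """Whole-brain FC feature vector from ROI time series.

    timeseries: (T, R) array of mean BOLD signals for R ROIs.
    Returns the upper-triangular Pearson correlations (excluding the
    diagonal) as a 1-D vector of length R*(R-1)/2 — for the 90-ROI
    AAL atlas, 4005 features per subject.
    """
    corr = np.corrcoef(timeseries.T)       # (R, R) Pearson matrix
    iu = np.triu_indices_from(corr, k=1)   # strict upper triangle
    return corr[iu]
```

Each subject's 4005-dimensional vector would then feed the multivariate pattern analysis as one sample.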

Prediction Model and Nomogram for Amyloid Positivity Using Clinical and MRI Features in Individuals With Subjective Cognitive Decline.

Li Q, Cui L, Guan Y, Li Y, Xie F, Guo Q

PubMed · Jun 1 2025
There is an urgent need for precise prediction of cerebral amyloidosis using noninvasive and accessible indicators to facilitate early diagnosis of individuals in the preclinical stage of Alzheimer's disease (AD). Two hundred and four individuals with subjective cognitive decline (SCD) were enrolled in this study. All subjects completed neuropsychological assessments and underwent 18F-florbetapir PET, structural MRI, and functional MRI. A total of 315 features were extracted from the MRI, demographics, and neuropsychological scales and selected using the least absolute shrinkage and selection operator (LASSO). A logistic regression (LR) model, based on machine learning, was trained to classify SCD as either β-amyloid (Aβ) positive or negative. A nomogram was established using a multivariate LR model to predict the risk of Aβ+. The performance of the prediction model and nomogram was assessed with the area under the curve (AUC) and calibration. The final model was based on the right rostral anterior cingulate thickness, the grey matter volume of the right inferior temporal, the regional homogeneity (ReHo) of the left posterior cingulate gyrus and right superior temporal gyrus, as well as MoCA-B and AVLT-R. In the training set, the model achieved a good AUC of 0.78 for predicting Aβ+, with an accuracy of 0.72. Validation of the model also yielded favorable discriminatory ability, with an AUC of 0.88 and an accuracy of 0.83. We have established and validated a model based on cognitive, sMRI, and fMRI data that exhibits adequate discrimination. This model has the potential to predict amyloid status in the SCD group and provides a noninvasive, cost-effective approach that might facilitate early screening, clinical diagnosis, and drug clinical trials.
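A nomogram built on a multivariate LR model reduces, at prediction time, to a weighted sum of features passed through the logistic function. A minimal sketch — the coefficient values below are hypothetical placeholders, not the paper's fitted weights:

```python
import numpy as np

def amyloid_risk(features, coefs, intercept):
    """Predicted probability of Aβ positivity from a fitted LR model.

    features, coefs: 1-D arrays of equal length (e.g. the selected
    imaging and neuropsychological features and their fitted weights).
    """
    logit = intercept + np.dot(coefs, features)
    return 1.0 / (1.0 + np.exp(-logit))
```

With a fitted model, each nomogram axis simply maps one feature's contribution to the logit before the sum is converted to a probability.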

3-D contour-aware U-Net for efficient rectal tumor segmentation in magnetic resonance imaging.

Lu Y, Dang J, Chen J, Wang Y, Zhang T, Bai X

PubMed · Jun 1 2025
Magnetic resonance imaging (MRI), as a non-invasive detection method, is crucial for the clinical diagnosis and treatment planning of rectal cancer. However, due to the low contrast of the rectal tumor signal in MRI, segmentation is often inaccurate. In this paper, we propose CAU-Net, a new three-dimensional rectal tumor segmentation method based on T2-weighted MRI images. The method adopts a convolutional neural network to extract multi-scale features from MRI images and uses a Contour-Aware decoder and an attention fusion block (AFB) for contour enhancement. We also introduce an adversarial constraint to improve augmentation performance. Furthermore, we construct a dataset of 108 MRI-T2 volumes for the segmentation of locally advanced rectal cancer. CAU-Net achieved a DSC of 0.7112 and an ASD of 2.4707, outperforming other state-of-the-art methods. Various experiments on this dataset show that CAU-Net has high accuracy and efficiency in rectal tumor segmentation. In summary, the proposed method has important clinical application value and can provide important support for medical image analysis and clinical treatment of rectal cancer. With further development and application, this method has the potential to improve the accuracy of rectal cancer diagnosis and treatment.
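The DSC (Dice similarity coefficient) reported above is the standard overlap metric for segmentation masks. A minimal numpy sketch of how it is computed between a predicted and a ground-truth binary mask:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |pred ∩ gt| / (|pred| + |gt|); 1.0 means perfect overlap.
    """
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0
```

The ASD (average surface distance) additionally requires extracting mask surfaces and computing nearest-neighbor distances, so it is omitted here.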

SAMBV: A fine-tuned SAM with interpolation consistency regularization for semi-supervised bi-ventricle segmentation from cardiac MRI.

Wang Y, Zhou S, Lu K, Wang Y, Zhang L, Liu W, Wang Z

PubMed · Jun 1 2025
The Segment Anything Model (SAM) is a foundation model for general-purpose image segmentation; however, when it comes to a specific medical application, such as segmentation of both ventricles from 2D cardiac MRI, the results are not satisfactory. The scarcity of labeled medical image data further increases the difficulty of applying the SAM to medical image processing. To address these challenges, we propose SAMBV, which fine-tunes the SAM for semi-supervised segmentation of the bi-ventricle from 2D cardiac MRI. The SAM is tuned in three aspects: (i) position and feature adapters are introduced so that the SAM can adapt to bi-ventricle segmentation; (ii) a dual-branch encoder is incorporated to collect local feature information missing in the SAM so as to improve bi-ventricle segmentation; (iii) interpolation consistency regularization (ICR) is utilized for semi-supervised training, allowing SAMBV to achieve competitive performance with only 40% of the labeled data in the ACDC dataset. Experimental results demonstrate that the proposed SAMBV achieves an average Dice score improvement of 17.6% over the original SAM, raising its performance from 74.49% to 92.09%. Furthermore, SAMBV outperforms other supervised SAM fine-tuning methods, showing its effectiveness in semi-supervised medical image segmentation tasks. Notably, the proposed method is specifically designed for 2D MRI data.
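The ICR term used in step (iii) penalizes a model whose prediction at an interpolated input deviates from the interpolation of its predictions — the mechanism that lets unlabeled images contribute to training. A framework-agnostic numpy sketch (the paper's loss weighting and mixup schedule are not reproduced here):

```python
import numpy as np

def icr_loss(model, x1, x2, lam=0.3):
    """Interpolation consistency loss on two unlabeled inputs.

    Penalizes (via MSE) the gap between the model's output at the
    mixed input and the mix of the model's outputs. For a linear
    model the loss is exactly zero; nonlinearity makes it positive.
    """
    x_mix = lam * x1 + (1 - lam) * x2
    y_mix = lam * model(x1) + (1 - lam) * model(x2)
    return float(np.mean((model(x_mix) - y_mix) ** 2))
```

In practice this loss is added to the supervised segmentation loss, with `model` being the network's softmax output on unlabeled cardiac slices.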

Exploring the Limitations of Virtual Contrast Prediction in Brain Tumor Imaging: A Study of Generalization Across Tumor Types and Patient Populations.

Caragliano AN, Macula A, Colombo Serra S, Fringuello Mingo A, Morana G, Rossi A, Alì M, Fazzini D, Tedoldi F, Valbusa G, Bifone A

PubMed · Jun 1 2025
Accurate and timely diagnosis of brain tumors is critical for patient management and treatment planning. Magnetic resonance imaging (MRI) is a widely used modality for brain tumor detection and characterization, often aided by the administration of gadolinium-based contrast agents (GBCAs) to improve tumor visualization. Recently, deep learning models have shown remarkable success in predicting contrast enhancement in medical images, thereby reducing the need for GBCAs and potentially minimizing patient discomfort and risk. In this paper, we present a study investigating the generalization capabilities of a neural network trained to predict full contrast in brain tumor images from noncontrast MRI scans. While initial results exhibited promising performance on a specific tumor type at a certain stage using a specific dataset, our attempts to extend this success to other tumor types and diverse patient populations yielded unexpected challenges and limitations. Through a rigorous analysis of the factors contributing to these negative results, we aim to shed light on the complexities of generalizing contrast enhancement prediction in brain tumor imaging, offering valuable insights for future research and clinical applications.

Internal Target Volume Estimation for Liver Cancer Radiation Therapy Using an Ultra Quality 4-Dimensional Magnetic Resonance Imaging.

Liao YP, Xiao H, Wang P, Li T, Aguilera TA, Visak JD, Godley AR, Zhang Y, Cai J, Deng J

PubMed · Jun 1 2025
Accurate internal target volume (ITV) estimation is essential for effective and safe radiation therapy in liver cancer. This study evaluates the clinical value of an ultra-quality 4-dimensional magnetic resonance imaging (UQ 4D-MRI) technique for ITV estimation. The UQ 4D-MRI technique maps motion information from a low-spatial-resolution dynamic volumetric MRI onto a high-resolution 3-dimensional MRI used for radiation treatment planning. It was validated using a motion phantom and data from 13 patients with liver cancer. The ITV generated from UQ 4D-MRI (ITV<sub>4D</sub>) was compared with those obtained through isotropic expansions (ITV<sub>2 mm</sub> and ITV<sub>5 mm</sub>) and those measured using conventional 4D computed tomography (ITV<sub>CT</sub>) for each patient. Phantom studies showed a displacement measurement difference of <5% between UQ 4D-MRI and single-slice 2-dimensional cine MRI. In patient studies, the maximum superior-inferior displacements of the tumor on UQ 4D-MRI showed no significant difference compared with single-slice 2-dimensional cine imaging (<i>P</i> = .985). ITV<sub>CT</sub> showed no significant difference from ITV<sub>4D</sub> (<i>P</i> = .72), whereas ITV<sub>2 mm</sub> and ITV<sub>5 mm</sub> significantly overestimated the volume by 29.0% (<i>P</i> = .002) and 120.7% (<i>P</i> < .001) compared with ITV<sub>4D</sub>, respectively. UQ 4D-MRI enables accurate motion assessment for liver tumors, facilitating precise ITV delineation for radiation treatment planning. Despite uncertainties from artificial intelligence-based delineation and variations in patients' respiratory patterns, UQ 4D-MRI excels at capturing tumor motion trajectories, potentially improving treatment planning accuracy and reducing margins in liver cancer radiation therapy.
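The isotropic expansions compared against ITV<sub>4D</sub> above amount to dilating the target mask by a fixed margin in every direction. A crude voxel-level sketch using repeated 6-connected dilation — a stand-in for the paper's 2 mm / 5 mm expansions, assuming the margin is expressed in whole voxels:

```python
import numpy as np

def expand_isotropic(mask, margin_vox):
    """Expand a binary target mask by an isotropic margin (in voxels).

    Uses repeated 6-connected binary dilation; the mask is padded
    first so dilation cannot wrap around the volume edges.
    """
    if margin_vox == 0:
        return mask.astype(bool)
    out = np.pad(mask.astype(bool), margin_vox)
    for _ in range(margin_vox):
        grown = out.copy()
        for axis in range(out.ndim):
            grown |= np.roll(out, 1, axis) | np.roll(out, -1, axis)
        out = grown
    crop = tuple(slice(margin_vox, -margin_vox) for _ in range(out.ndim))
    return out[crop]
```

Comparing `expand_isotropic(gtv, n).sum()` against the motion-derived ITV's voxel count reproduces the kind of volume-overestimation percentages reported in the abstract.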

An Optimized Framework of QSM Mask Generation Using Deep Learning: QSMmask-Net.

Lee G, Jung W, Sakaie KE, Oh SH

PubMed · Jun 1 2025
Quantitative susceptibility mapping (QSM) provides the spatial distribution of magnetic susceptibility within tissues through sequential steps: phase unwrapping and echo combination, mask generation, background field removal, and dipole inversion. Accurate mask generation is crucial, as masks that exclude regions outside the brain and contain no holes are necessary to minimize errors and streaking artifacts during QSM reconstruction. Susceptibility values can vary with the mask generation method, highlighting the importance of optimizing mask creation. In this study, we propose QSMmask-net, a deep neural network-based method for generating precise QSM masks. QSMmask-net achieved the highest Dice score compared with other mask generation methods. Mean susceptibility values using QSMmask-net masks showed the smallest differences from manual masks (ground truth) in simulations and healthy controls (no significant difference, p > 0.05). Linear regression analysis confirmed a strong correlation with manual masks for hemorrhagic lesions (slope = 0.9814 ± 0.007, intercept = 0.0031 ± 0.001, R<sup>2</sup> = 0.9992, p < 0.05). We have demonstrated that the mask generation method can affect susceptibility value estimation. QSMmask-net reduces the labor required for mask generation while providing mask quality comparable to manual methods. The proposed method enables users without specialized expertise to create optimized masks, potentially broadening the applicability of QSM.
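The linear-regression agreement analysis reported above (slope, intercept, R²) can be reproduced with a short numpy sketch, fitting predicted susceptibility values against the manual-mask ground truth:

```python
import numpy as np

def agreement_fit(manual, predicted):
    """Slope, intercept, and R^2 of predicted vs. manual values.

    Mirrors the paper's linear-regression agreement analysis:
    a slope near 1, intercept near 0, and R^2 near 1 indicate that
    the automated masks do not bias the susceptibility estimates.
    """
    slope, intercept = np.polyfit(manual, predicted, 1)
    r = np.corrcoef(manual, predicted)[0, 1]
    return slope, intercept, r ** 2
```

Here `manual` and `predicted` would be per-lesion mean susceptibility values computed with the manual and QSMmask-net masks, respectively.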

Toward Noninvasive High-Resolution In Vivo pH Mapping in Brain Tumors by <sup>31</sup>P-Informed deepCEST MRI.

Schüre JR, Rajput J, Shrestha M, Deichmann R, Hattingen E, Maier A, Nagel AM, Dörfler A, Steidl E, Zaiss M

PubMed · Jun 1 2025
The intracellular pH (pH<sub>i</sub>) is critical for understanding various pathologies, including brain tumors. While conventional pH<sub>i</sub> measurement through <sup>31</sup>P-MRS suffers from low spatial resolution and long scan times, <sup>1</sup>H-based APT-CEST imaging offers higher resolution with shorter scan times. This study aims to directly predict <sup>31</sup>P-pH<sub>i</sub> maps from CEST data using a fully connected neural network. Fifteen tumor patients were scanned on a 3-T Siemens PRISMA scanner and received <sup>1</sup>H-based CEST and T1 measurements, as well as <sup>31</sup>P-MRS. A neural network was trained voxel-wise on CEST and T1 data to predict <sup>31</sup>P-pH<sub>i</sub> values, using data from 11 patients for training and 4 for testing. The predicted pH<sub>i</sub> maps were additionally down-sampled to the original <sup>31</sup>P-pH<sub>i</sub> resolution to calculate the RMSE and analyze the correlation, while the higher-resolution predictions were compared with conventional CEST metrics. The results demonstrated a general correspondence between the predicted deepCEST pH<sub>i</sub> maps and the measured <sup>31</sup>P-pH<sub>i</sub> in test patients. However, slight discrepancies were also observed, with an RMSE of 0.04 pH units in tumor regions. High-resolution predictions revealed tumor heterogeneity and features not visible in conventional CEST data, suggesting the model captures unique pH information and is not simply a T1 segmentation. The deepCEST pH<sub>i</sub> neural network recovers the pH sensitivity hidden in APT-CEST data and offers pH<sub>i</sub> maps with higher spatial resolution in a shorter scan time compared with <sup>31</sup>P-MRS. Although this approach is constrained by the limitations of the acquired data, it can be extended with additional CEST features in future studies, thereby offering a promising approach for 3D pH imaging in a clinical environment.
