
A Pilot Study on Deep Learning With Simplified Intravoxel Incoherent Motion Diffusion-Weighted MRI Parameters for Differentiating Hepatocellular Carcinoma From Other Common Liver Masses.

Ratiphunpong P, Inmutto N, Angkurawaranon S, Wantanajittikul K, Suwannasak A, Yarach U

PubMed · Jun 1, 2025
To develop and evaluate a deep learning technique for differentiating hepatocellular carcinoma (HCC) using "simplified intravoxel incoherent motion (IVIM) parameters" derived from only 3 b-value images. Ninety-eight retrospective MRI examinations were collected (68 men, 30 women; mean age 59 ± 14 years), including T2-weighted imaging with fat suppression, in-phase, out-of-phase, and diffusion-weighted imaging (b = 0, 100, 800 s/mm²). Ninety percent of the data were used for stratified 10-fold cross-validation. After preprocessing, the diffusion-weighted images were used to compute simplified IVIM and apparent diffusion coefficient (ADC) maps. A 17-layer 3D convolutional neural network (3D-CNN) was implemented, and its input channels were modified for the different input-image strategies. The 3D-CNN with IVIM maps (ADC, f, and D*) demonstrated superior performance, achieving an accuracy of 83.25 ± 6.24% and an area under the receiver-operating characteristic curve of 92.70 ± 8.24%, significantly surpassing the 50% baseline (P < 0.05) and outperforming the other strategies on all evaluation metrics. This underscores the effectiveness of simplified IVIM parameters combined with a 3D-CNN architecture for improving HCC differentiation. Simplified IVIM parameters derived from 3 b-values, when integrated with a 3D-CNN architecture, offer a robust framework for HCC differentiation.
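
The abstract does not spell out the simplified-IVIM equations, so the following is a minimal sketch of one common three-b-value formulation; the function name, the clipping, and the linearised D* estimate are assumptions that may differ from the paper's exact definitions.

import numpy as np

def simplified_ivim(s0, s100, s800, b_low=100.0, b_high=800.0, eps=1e-8):
    """Voxel-wise ADC, D, f, and an approximate D* from 3 b-value images."""
    s0 = np.clip(s0.astype(float), eps, None)
    s100 = np.clip(s100.astype(float), eps, None)
    s800 = np.clip(s800.astype(float), eps, None)

    # Mono-exponential ADC over the full range: S(b) = S0 * exp(-b * ADC)
    adc = np.log(s0 / s800) / b_high

    # Tissue diffusivity D from the (assumed perfusion-free) high-b segment
    d = np.log(s100 / s800) / (b_high - b_low)

    # Perfusion fraction f from the extrapolated b=0 intercept of that segment
    f = np.clip(1.0 - (s100 * np.exp(b_low * d)) / s0, 0.0, 1.0)

    # Linearised pseudo-diffusion D*: the low-b ADC mixes both pools,
    # ADC_low ~ (1 - f) * D + f * D*, so solve for D*
    adc_low = np.log(s0 / s100) / b_low
    d_star = (adc_low - (1.0 - f) * d) / np.clip(f, eps, None)

    return adc, d, f, d_star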

Res-Net-Based Modeling and Morphologic Analysis of Deep Medullary Veins Using Multi-Echo GRE at 7 T MRI.

Li Z, Liang L, Zhang J, Fan X, Yang Y, Yang H, Wang Q, An J, Xue R, Zhuo Y, Qian H, Zhang Z

PubMed · Jun 1, 2025
Pathological changes in deep medullary veins (DMVs) have been reported in various diseases, but accurate modeling and quantification of DMVs remain challenging. We aimed to propose and assess an automated approach for modeling and quantifying DMVs at 7 Tesla (7 T) MRI. A multi-echo-input Res-Net was developed for vascular segmentation, and a minimum-path loss function was used for modeling and quantifying the geometric parameters of DMVs. Twenty-one patients diagnosed with subcortical vascular dementia (SVaD) and 20 condition-matched controls were included in this study. Amplitude and phase images of a five-echo gradient echo (GRE) sequence were acquired at 7 T. Ten GRE images were manually labeled by two neurologists and compared with the results obtained by the proposed method. Independent-samples t tests and Pearson correlation were used for statistical analysis, with p < 0.05 considered significant. No significant offset was found between the centerlines obtained by human labeling and by our algorithm (p = 0.734), and the length difference between the proposed method and manual labeling was smaller than the error between different clinicians (p < 0.001). Patients with SVaD exhibited fewer DMVs (mean difference = -60.710 ± 21.810, p = 0.011) and higher curvature (mean difference = 0.12 ± 0.022, p < 0.0001), corresponding to their higher Vascular Dementia Assessment Scale-Cog (VaDAS-Cog) scores (mean difference = 4.332 ± 1.992, p = 0.036) and lower Mini-Mental State Examination (MMSE) scores (mean difference = -3.071 ± 1.443, p = 0.047). MMSE scores were positively correlated with the number of DMVs (r = 0.437, p = 0.037) and negatively correlated with curvature (r = -0.426, p = 0.042). In summary, we propose a novel framework for automated quantification of the morphologic parameters of DMVs. These characteristics are expected to aid the research and diagnosis of cerebral small vessel diseases involving DMV lesions.
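
As an illustration of the kind of morphologic quantification reported above, here is a minimal sketch of curvature estimation along a discrete 3D vessel centerline; the finite-difference scheme and the example polyline are assumptions, as the paper does not specify its exact curvature definition.

import numpy as np

def polyline_curvature(points):
    """points: (N, 3) array of ordered centerline coordinates (N >= 3)."""
    p = np.asarray(points, dtype=float)
    # kappa = |r' x r''| / |r'|^3 is parameterisation-invariant, so
    # differentiating with respect to the point index is sufficient
    d1 = np.gradient(p, axis=0)
    d2 = np.gradient(d1, axis=0)
    num = np.linalg.norm(np.cross(d1, d2), axis=1)
    den = np.linalg.norm(d1, axis=1) ** 3 + 1e-12
    return num / den  # curvature at each centerline point

# Example: a gently curved synthetic vessel segment
t = np.linspace(0, np.pi / 4, 50)
centerline = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)
print(polyline_curvature(centerline).mean())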

Towards fast and reliable estimations of 3D pressure, velocity and wall shear stress in aortic blood flow: CFD-based machine learning approach.

Lin D, Kenjereš S

PubMed · Jun 1, 2025
In this work, we developed deep neural networks for fast and comprehensive estimation of the most salient features of aortic blood flow: velocity magnitude and direction, 3D pressure, and wall shear stress. Starting from 40 subject-specific aortic geometries obtained from 4D Flow MRI, we applied statistical shape modeling to generate 1,000 synthetic aorta geometries. Complete computational fluid dynamics (CFD) simulations of these geometries were performed to obtain ground-truth values. We then trained deep neural networks for each characteristic flow feature using 900 randomly selected aorta geometries. Testing on the remaining 100 geometries resulted in average errors of 3.11% for velocity and 4.48% for pressure. For wall shear stress, we applied two approaches: (i) deriving it directly from the neural-network-predicted velocity, and (ii) predicting it with a separate neural network. Both approaches yielded similar accuracy, with average errors of 4.8% and 4.7%, respectively, compared to complete 3D CFD results. We recommend the second approach for potential clinical use due to its significantly simplified workflow. In conclusion, this proof-of-concept analysis demonstrates the numerical robustness, rapid calculation speed (less than a second), and good accuracy of the CFD-based machine learning approach in predicting velocity, pressure, and wall shear stress distributions in subject-specific aortic flows.
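
A minimal sketch of the first WSS strategy above, deriving wall shear stress from a (predicted) velocity field via the near-wall tangential gradient, tau_w = mu * du_t/dn; the viscosity value, sampling distance, and heavily simplified geometry handling are assumptions.

import numpy as np

MU = 3.5e-3  # dynamic viscosity of blood, Pa*s (common literature value)

def wall_shear_stress(u_near_wall, normals, delta=1e-3):
    """
    u_near_wall: (N, 3) velocity sampled a distance `delta` (m) from the
                 wall along the inward normal (velocity at the wall is 0).
    normals:     (N, 3) inward unit wall normals.
    Returns the WSS magnitude (Pa) at each of the N wall points.
    """
    # Remove the wall-normal component to keep the tangential velocity
    u_n = np.sum(u_near_wall * normals, axis=1, keepdims=True) * normals
    u_t = u_near_wall - u_n
    # First-order finite difference of the tangential velocity across delta
    return MU * np.linalg.norm(u_t, axis=1) / delta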

Explainable deep stacking ensemble model for accurate and transparent brain tumor diagnosis.

Haque R, Khan MA, Rahman H, Khan S, Siddiqui MIH, Limon ZH, Swapno SMMR, Appaji A

PubMed · Jun 1, 2025
Early detection of brain tumors in MRI images is vital for improving treatment outcomes. However, deep learning models face challenges such as limited dataset diversity, class imbalance, and insufficient interpretability, and most studies rely on small, single-source datasets without combining different feature extraction techniques. To address these challenges, we propose a robust and explainable stacking ensemble model for multiclass brain tumor classification that combines EfficientNetB0, MobileNetV2, GoogleNet, and a Multi-level CapsuleNet, with CatBoost as the meta-learner for improved feature aggregation and classification accuracy. This ensemble approach captures complex tumor characteristics while enhancing robustness and interpretability. We created two large MRI datasets (M1 and M2) by merging data from four sources: BraTS, Msoud, Br35H, and SARTAJ. To tackle class imbalance, we applied Borderline-SMOTE and data augmentation, and we combined feature extraction methods with PCA and Gray Wolf Optimization (GWO). The model was validated through confidence-interval analysis and statistical tests; error analysis revealed misclassification trends, and computational efficiency was assessed in terms of inference speed and resource usage. The proposed ensemble achieved a 97.81% F1 score and 98.75% PR AUC on M1, and a 98.32% F1 score with 99.34% PR AUC on M2, consistently surpassing state-of-the-art CNNs, Vision Transformers, and other ensemble methods on each of the four individual datasets. Finally, we developed a web-based diagnostic tool that enables clinicians to interact with the model and visualize decision-critical regions in MRI scans using Explainable Artificial Intelligence (XAI). This study connects high-performing AI models with real clinical applications, providing a reliable, scalable, and efficient diagnostic solution for brain tumor classification.
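
A minimal sketch of the stacking-plus-Borderline-SMOTE pattern described above, using lightweight classifiers on synthetic features as stand-ins for the four deep backbones; the base learners, hyperparameters, and data are assumptions, not the paper's implementation.

from catboost import CatBoostClassifier
from imblearn.over_sampling import BorderlineSMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def build_stack():
    # Stand-ins for per-backbone classification heads
    base = [
        ("head_effnet", LogisticRegression(max_iter=1000)),
        ("head_mobilenet", SVC(probability=True)),
        ("head_googlenet", MLPClassifier(hidden_layer_sizes=(128,))),
    ]
    meta = CatBoostClassifier(iterations=300, verbose=False)
    return StackingClassifier(estimators=base, final_estimator=meta, cv=5)

# Imbalanced 4-class toy data in place of the extracted deep features
X, y = make_classification(n_samples=400, n_features=64, n_informative=16,
                           n_classes=4, weights=[0.5, 0.2, 0.2, 0.1],
                           random_state=0)
X_res, y_res = BorderlineSMOTE(random_state=0).fit_resample(X, y)
model = build_stack().fit(X_res, y_res)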

A radiomics approach to distinguish Progressive Supranuclear Palsy Richardson's syndrome from other phenotypes starting from MR images.

Pisani N, Abate F, Avallone AR, Barone P, Cesarelli M, Amato F, Picillo M, Ricciardi C

PubMed · Jun 1, 2025
Progressive Supranuclear Palsy (PSP) is an uncommon neurodegenerative disorder with different clinical onsets, including Richardson's syndrome (PSP-RS) and other variant phenotypes (vPSP). Recognising the clinical progression of the different phenotypes would enhance the accuracy of detection and treatment of PSP. The goal of this study was to identify radiomic biomarkers, extracted from T1-weighted magnetic resonance images (MRI), for distinguishing PSP phenotypes. Forty PSP patients (20 PSP-RS and 20 vPSP) took part in the present work. Radiomic features were collected from 21 regions of interest (ROIs), mainly in the frontal cortex, supratentorial white matter, basal nuclei, brainstem, cerebellum, and third and fourth ventricles. After feature selection, three tree-based machine learning (ML) classifiers were implemented to classify the PSP phenotypes. Ten of the 21 ROIs performed best in terms of sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUCROC). In particular, features extracted from the pons obtained the best accuracy (0.92) and AUCROC (0.83), while the evaluation metrics obtained with the other 10 ROIs ranged from 0.67 to 0.83. Eight Gray Level Dependence Matrix features were recurrently selected across the 10 ROIs. Furthermore, by combining these ROIs, the results exceeded 0.83 in phenotype classification, with the selected areas being the brainstem, pons, occipital white matter, precentral gyrus, and thalamus. Based on these results, the proposed approach could represent a promising tool for distinguishing PSP-RS from vPSP.
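
Since Gray Level Dependence Matrix (GLDM) features are the recurring markers here, a minimal pyradiomics sketch for extracting them follows; the file names and default settings are illustrative assumptions.

from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("gldm")  # GLDM texture features only

# One T1-weighted volume and one ROI mask (e.g., a pons label); paths assumed
features = extractor.execute("t1w_patient01.nii.gz", "pons_mask.nii.gz")
for name, value in features.items():
    if name.startswith("original_gldm_"):
        print(name, value)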

Fine-Tuning Deep Learning Model for Quantitative Knee Joint Mapping With MR Fingerprinting and Its Comparison to Dictionary Matching Method.

Zhang X, de Moura HL, Monga A, Zibetti MVW, Regatte RR

PubMed · Jun 1, 2025
Magnetic resonance fingerprinting (MRF), an emerging versatile and noninvasive imaging technique, provides simultaneous quantification of multiple quantitative MRI parameters, which have been used to detect changes in cartilage composition and structure in osteoarthritis. Deep learning (DL)-based methods for quantitative mapping in MRF overcome the memory constraints of, and offer faster processing than, the conventional dictionary matching (DM) method. However, limited attention has been given to the fine-tuning of neural networks (NNs) in DL and to fair comparisons with DM. In this study, we investigate the impact of training-parameter choices on NN performance and compare the fine-tuned NN with DM for multiparametric mapping in MRF. Our approach includes optimizing NN hyperparameters, analyzing the singular value decomposition (SVD) components of the MRF data, and optimizing the DM method. We conducted experiments on synthetic data, the NIST/ISMRM MRI system phantom with ground truth, and in vivo knee data from 14 healthy volunteers. The results demonstrate the critical importance of selecting appropriate training parameters, as these significantly affect NN performance. The findings also show that NNs improve the accuracy and robustness of T1, T2, and T1ρ mappings compared to DM on synthetic datasets. For in vivo knee data, the NN achieved comparable results for T1, with slightly lower T2 and slightly higher T1ρ measurements compared to DM. In conclusion, the fine-tuned NN can be used to increase the accuracy and robustness of multiparametric quantitative mapping from MRF of the knee joint.
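
For readers unfamiliar with the DM baseline being compared against, here is a minimal sketch of SVD-compressed dictionary matching; the array shapes, rank, and parameter layout are assumptions.

import numpy as np

def dictionary_match(fingerprints, dictionary, params, rank=10):
    """
    fingerprints: (V, T) measured signal evolutions for V voxels.
    dictionary:   (A, T) simulated atoms for A (T1, T2, T1rho) combinations.
    params:       (A, 3) parameter values for each atom.
    """
    # Compress the time dimension with the dictionary's right singular vectors
    _, _, vt = np.linalg.svd(dictionary, full_matrices=False)
    proj = vt[:rank].conj().T              # (T, rank)
    d_c = dictionary @ proj                # (A, rank)
    x_c = fingerprints @ proj              # (V, rank)

    # Normalised inner-product matching: pick the best-correlated atom
    d_c /= np.linalg.norm(d_c, axis=1, keepdims=True)
    x_c /= np.linalg.norm(x_c, axis=1, keepdims=True) + 1e-12
    best = np.argmax(np.abs(x_c @ d_c.conj().T), axis=1)  # (V,)
    return params[best]  # matched (T1, T2, T1rho) per voxel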

High-Performance Computing-Based Brain Tumor Detection Using Parallel Quantum Dilated Convolutional Neural Network.

Shinde SS, Pande A

PubMed · Jun 1, 2025
In the healthcare field, a brain tumor is an irregular growth of cells in the brain, and magnetic resonance imaging (MRI) is one of the most popular ways to identify a tumor and monitor its progression. However, existing methods often suffer from high computational complexity, noise interference, and limited accuracy, which hinder the early diagnosis of brain tumors. To resolve these issues, a high-performance computing model based on big data is utilized. This work therefore proposes a novel approach, a parallel quantum dilated convolutional neural network (PQDCNN), for brain tumor detection using a Map-Reduce framework. Data partitioning is performed first, using fuzzy local information C-means clustering (FLICM), and the partitioned data are passed to the map reducer. In the mapper, Medav filtering removes noise, and the tumor region is segmented by a transformer model named TransBTSV2. After segmenting the tumor, image augmentation and feature extraction are performed. In the reducer phase, the brain tumor is detected using the proposed PQDCNN. The efficiency of the PQDCNN is validated with accuracy, sensitivity, and specificity metrics, achieving 91.52%, 91.69%, and 92.26%, respectively.
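
A minimal sketch of the classical "parallel dilated convolution" idea suggested by the PQDCNN name, with branches of different dilation rates run in parallel and concatenated; the quantum component and Map-Reduce plumbing are omitted, and the channel sizes and rates are assumptions.

import torch
import torch.nn as nn

class ParallelDilatedBlock(nn.Module):
    def __init__(self, in_ch=1, out_ch=16, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.act = nn.ReLU()

    def forward(self, x):
        # padding == dilation for k=3 keeps the spatial size identical in
        # every branch, so the outputs concatenate cleanly on the channel axis
        return self.act(torch.cat([b(x) for b in self.branches], dim=1))

block = ParallelDilatedBlock()
print(block(torch.randn(1, 1, 128, 128)).shape)  # torch.Size([1, 48, 128, 128])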

Study of AI algorithms on mpMRI and PHI for the diagnosis of clinically significant prostate cancer.

Luo Z, Li J, Wang K, Li S, Qian Y, Xie W, Wu P, Wang X, Han J, Zhu W, Wang H, He Y

PubMed · May 31, 2025
To study the feasibility of combining multiple factors to improve the diagnostic accuracy for clinically significant prostate cancer (csPCa). This retrospective study of 131 patients analyzed age, PSA, PHI, and pathology. Patients with ISUP > 2 were classified as csPCa; the others as non-csPCa. The mpMRI images were processed by an in-house AI algorithm, yielding positive or negative AI results. Four logistic regression models were fitted, with the pathological findings as the dependent variable, and the predicted probabilities were used to test the models' predictive efficacy. The DeLong test was performed to compare the areas under the receiver operating characteristic (ROC) curves (AUCs) between the models. Of the 131 patients, 62 were diagnosed with csPCa and 69 were non-csPCa. Statistically significant differences were found in age, PSA, PIRADS score, AI results, and PHI values between the two groups (all P ≤ 0.001). The conventional model (R² = 0.389), the AI model (R² = 0.566), and the PHI model (R² = 0.515) were each compared to the full model (R² = 0.626) with ANOVA, showing statistically significant differences (all P < 0.05). The AUC of the full model (0.921 [95% CI: 0.871-0.972]) was significantly higher than that of the conventional model (P = 0.001), the AI model (P < 0.001), and the PHI model (P = 0.014). Combining multiple factors such as age, PSA, PIRADS score, and PHI, and adding an AI algorithm based on mpMRI, can improve the diagnostic accuracy for csPCa.
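
A minimal sketch of the nested logistic-regression comparison described above; the column names and model compositions are assumptions, and the DeLong test itself (not available in scikit-learn) is left to dedicated implementations such as R's pROC.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def model_auc(df, predictors, target="csPCa"):
    X, y = df[predictors], df[target]
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return roc_auc_score(y, model.predict_proba(X)[:, 1])

# df: one row per patient with the study variables (assumed available)
# for name, cols in {
#     "conventional": ["age", "PSA", "PIRADS"],
#     "AI":           ["age", "PSA", "PIRADS", "AI_result"],
#     "PHI":          ["age", "PSA", "PIRADS", "PHI"],
#     "full":         ["age", "PSA", "PIRADS", "AI_result", "PHI"],
# }.items():
#     print(name, model_auc(df, cols))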

CineMA: A Foundation Model for Cine Cardiac MRI

Yunguan Fu, Weixi Yi, Charlotte Manisty, Anish N Bhuva, Thomas A Treibel, James C Moon, Matthew J Clarkson, Rhodri Huw Davies, Yipeng Hu

arXiv preprint · May 31, 2025
Cardiac magnetic resonance (CMR) is a key investigation in clinical cardiovascular medicine and has been used extensively in population research. However, extracting clinically important measurements such as ejection fraction for diagnosing cardiovascular diseases remains time-consuming and subjective. We developed CineMA, a foundation AI model automating these tasks with limited labels. CineMA is a self-supervised autoencoder model trained on 74,916 cine CMR studies to reconstruct images from masked inputs. After fine-tuning, it was evaluated across eight datasets on 23 tasks from four categories: ventricle and myocardium segmentation, left and right ventricle ejection fraction calculation, disease detection and classification, and landmark localisation. CineMA is the first foundation model for cine CMR to match or outperform convolutional neural networks (CNNs). CineMA demonstrated greater label efficiency than CNNs, achieving comparable or better performance with fewer annotations. This reduces the burden of clinician labelling and supports replacing task-specific training with fine-tuning foundation models in future cardiac imaging applications. Models and code for pre-training and fine-tuning are available at https://github.com/mathpluscode/CineMA, democratising access to high-performance models that otherwise require substantial computational resources, promoting reproducibility and accelerating clinical translation.
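
A minimal sketch of the masked-autoencoder pretraining objective described above (hide random patches, reconstruct, and take the loss on masked positions only); the patch size, mask ratio, and tiny stand-in network are assumptions, not the CineMA architecture.

import torch
import torch.nn as nn

def mask_patches(x, patch=16, ratio=0.75):
    """x: (B, 1, H, W) cine frame; returns masked input and the binary mask."""
    b, _, h, w = x.shape
    mask = (torch.rand(b, 1, h // patch, w // patch) < ratio).float()
    mask = mask.repeat_interleave(patch, 2).repeat_interleave(patch, 3)
    return x * (1 - mask), mask

encoder_decoder = nn.Sequential(  # stand-in for the real autoencoder
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.Conv2d(32, 1, 3, padding=1)
)

x = torch.randn(4, 1, 128, 128)
x_masked, mask = mask_patches(x)
recon = encoder_decoder(x_masked)
loss = ((recon - x) ** 2 * mask).sum() / mask.sum()  # loss on masked pixels
loss.backward()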

A European Multi-Center Breast Cancer MRI Dataset

Gustav Müller-Franzes, Lorena Escudero Sánchez, Nicholas Payne, Alexandra Athanasiou, Michael Kalogeropoulos, Aitor Lopez, Alfredo Miguel Soro Busto, Julia Camps Herrero, Nika Rasoolzadeh, Tianyu Zhang, Ritse Mann, Debora Jutz, Maike Bode, Christiane Kuhl, Wouter Veldhuis, Oliver Lester Saldanha, JieFu Zhu, Jakob Nikolas Kather, Daniel Truhn, Fiona J. Gilbert

arXiv preprint · May 31, 2025
Detecting breast cancer early is of the utmost importance to effectively treat the millions of women afflicted by breast cancer worldwide every year. Although mammography is the primary imaging modality for screening breast cancer, there is an increasing interest in adding magnetic resonance imaging (MRI) to screening programmes, particularly for women at high risk. Recent guidelines by the European Society of Breast Imaging (EUSOBI) recommended breast MRI as a supplemental screening tool for women with dense breast tissue. However, acquiring and reading MRI scans requires significantly more time from expert radiologists. This highlights the need to develop new automated methods to detect cancer accurately using MRI and Artificial Intelligence (AI), which have the potential to support radiologists in breast MRI interpretation and classification and help detect cancer earlier. For this reason, the ODELIA consortium has made this multi-centre dataset publicly available to assist in developing AI tools for the detection of breast cancer on MRI.