
Quantitative analysis of ureteral jets with dynamic magnetic resonance imaging and a deep-learning approach.

Wu M, Zeng W, Li Y, Ni C, Zhang J, Kong X, Zhang JL

PubMed · Jun 1, 2025
To develop a dynamic MRU protocol that focuses on the bladder to capture ureteral jets and to automatically estimate the frequency and duration of ureteral jets from the dynamic images. Between February and July 2023, we collected 51 sets of dynamic MRU data from 5 healthy subjects. To capture the entire longitudinal trajectory of ureteral jets, we optimized the orientation and thickness of the imaging slice for dynamic MRU and developed a deep-learning method to automatically estimate the frequency and duration of ureteral jets from the dynamic images. Among the 15 sets of images acquired with different slice positioning, the positioning with a slice thickness of 25 mm and an orientation of 30° was optimal. Of the 36 sets of dynamic images acquired with the optimal protocol, 27 sets (2529 images) were used to train a U-Net model to automatically detect the presence of ureteral jets. On the other 9 sets (760 images), the accuracy of the trained model was 84.9%. Based on the results of automatic detection, the frequency of ureteral jets in each set of dynamic images was estimated as 8.0 ± 1.4 min⁻¹, deviating from the reference by -3.3% ± 10.0%; the duration of each individual ureteral jet was estimated as 7.3 ± 2.8 s, deviating from the reference by 2.4% ± 32.2%. The cumulative duration of ureteral jets estimated by the method correlated well (coefficient of 0.936) with the bladder expansion recorded in the dynamic images. The proposed method was capable of quantitatively characterizing ureteral jets, potentially providing valuable information on the functional status of ureteral peristalsis.
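As an illustration of the post-processing step described above (per-frame U-Net detections turned into jet frequency and duration), here is a minimal Python sketch; the frame interval and the grouping of consecutive positive frames into jets are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def jet_statistics(frame_labels, frame_interval_s=1.0):
    """Estimate jet frequency (min^-1) and per-jet durations (s) from a
    binary per-frame detection sequence (1 = jet present in that frame)."""
    labels = np.asarray(frame_labels, dtype=int)
    # Find rising/falling edges to group consecutive positive frames into jets.
    padded = np.concatenate(([0], labels, [0]))
    starts = np.where(np.diff(padded) == 1)[0]
    ends = np.where(np.diff(padded) == -1)[0]
    durations_s = (ends - starts) * frame_interval_s
    total_time_min = len(labels) * frame_interval_s / 60.0
    frequency_per_min = len(starts) / total_time_min if total_time_min > 0 else 0.0
    return frequency_per_min, durations_s

# Example: a 1 Hz dynamic series containing two detected jets.
freq, durations = jet_statistics([0, 1, 1, 1, 0, 0, 1, 1, 0, 0], frame_interval_s=1.0)
print(f"frequency: {freq:.1f} min^-1, mean duration: {durations.mean():.1f} s")
```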

Predictive validity of consensus-based MRI definition of osteoarthritis plus radiographic osteoarthritis for the progression of knee osteoarthritis: A longitudinal cohort study.

Xing X, Wang Y, Zhu J, Shen Z, Cicuttini F, Jones G, Aitken D, Cai G

PubMed · Jun 1, 2025
Our previous study showed that magnetic resonance imaging (MRI)-defined tibiofemoral osteoarthritis (MRI-OA), based on a Delphi approach, in combination with radiographic OA (ROA) had strong predictive validity for the progression of knee OA. This study aimed to assess whether traditional prediction models using this combination were superior to Light Gradient Boosting Machine (LightGBM) models. Data were from the Tasmanian Older Adult Cohort. A radiograph and 1.5T MRI of the right knee were performed. Tibial cartilage volume was measured at baseline, 2.6, and 10.7 years. Knee pain and function were assessed at baseline, 2.6, 5.1, and 10.7 years. Right-sided total knee replacement (TKR) was assessed over 13.5 years. The area under the curve (AUC) was used to compare the predictive validity of logistic regression with the LightGBM algorithm. For significantly imbalanced outcomes, the area under the precision-recall curve (AUC-PR) was used. 574 participants (mean age 62 years, 49% female) were included. Overall, LightGBM showed clinically acceptable predictive performance for all outcomes except TKR. For knee pain and function, LightGBM showed better predictive performance than the logistic regression model (AUC: 0.731-0.912 vs 0.627-0.755). Similar results were found for tibial cartilage loss over 2.6 years (AUC: 0.845 vs 0.701, p < 0.001) and 10.7 years (AUC: 0.845 vs 0.753, p = 0.016). For TKR, which exhibited significant class imbalance, both algorithms performed poorly (AUC-PR: 0.647 vs 0.610). Compared to logistic regression combining MRI-OA, ROA, and common covariates, LightGBM offers valuable insights that can inform early risk identification and targeted prevention strategies.
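To make the model comparison concrete, the sketch below evaluates a LightGBM classifier against logistic regression with ROC AUC and precision-recall AUC (the metric used above for the imbalanced TKR outcome); the synthetic features merely stand in for MRI-OA/ROA status and covariates, and the hyperparameters are placeholders.

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for MRI-OA/ROA status plus common covariates.
rng = np.random.default_rng(0)
X = rng.normal(size=(574, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=574) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "lightgbm": LGBMClassifier(n_estimators=200, learning_rate=0.05),
}
for name, model in models.items():
    prob = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    # ROC AUC for balanced outcomes; precision-recall AUC for imbalanced ones (e.g. TKR).
    print(name, "AUC:", round(roc_auc_score(y_te, prob), 3),
          "AUC-PR:", round(average_precision_score(y_te, prob), 3))
```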

Incorporating radiomic MRI models for presurgical response assessment in patients with early breast cancer undergoing neoadjuvant systemic therapy: Collaborative insights from breast oncologists and radiologists.

Gaudio M, Vatteroni G, De Sanctis R, Gerosa R, Benvenuti C, Canzian J, Jacobs F, Saltalamacchia G, Rizzo G, Pedrazzoli P, Santoro A, Bernardi D, Zambelli A

PubMed · Jun 1, 2025
The assessment of response to neoadjuvant treatment is critical for selecting the most suitable therapeutic options for patients with breast cancer and for reducing the need for invasive local therapies. Breast magnetic resonance imaging (MRI) is so far one of the most accurate approaches for assessing pathological complete response, although it is limited by the qualitative and subjective nature of radiologists' assessments, often making it insufficient for deciding whether to forgo additional locoregional therapy. To increase accuracy and predictive power, radiomic MRI analyses aided by machine learning models and deep learning methods, as part of artificial intelligence, have been used to analyse the different subtypes of breast cancer and the specific changes observed before and after therapy. This review discusses recent advancements in radiomic MRI models for presurgical response assessment in patients with early breast cancer receiving preoperative treatments, with a focus on their implications for clinical practice.

UniBrain: Universal Brain MRI diagnosis with hierarchical knowledge-enhanced pre-training.

Lei J, Dai L, Jiang H, Wu C, Zhang X, Zhang Y, Yao J, Xie W, Zhang Y, Li Y, Zhang Y, Wang Y

PubMed · Jun 1, 2025
Magnetic Resonance Imaging (MRI) has become a pivotal tool in diagnosing brain diseases, with a wide array of computer-aided artificial intelligence methods being proposed to enhance diagnostic accuracy. However, early studies were often limited by small-scale datasets and a narrow range of disease types, which posed challenges in model generalization. This study presents UniBrain, a hierarchical knowledge-enhanced pre-training framework designed for universal brain MRI diagnosis. UniBrain leverages a large-scale dataset comprising 24,770 imaging-report pairs from routine diagnostics for pre-training. Unlike previous approaches that either focused solely on visual representation learning or used brute-force alignment between vision and language, the framework introduces a hierarchical alignment mechanism. This mechanism extracts structured knowledge from free-text clinical reports at multiple granularities, enabling vision-language alignment at both the sequence and case levels, thereby significantly improving feature learning efficiency. A coupled vision-language perception module is further employed for text-guided multi-label classification, which facilitates zero-shot evaluation and fine-tuning of downstream tasks without modifying the model architecture. UniBrain is validated on both in-domain and out-of-domain datasets, consistently surpassing existing state-of-the-art diagnostic models and demonstrating performance on par with radiologists in specific disease categories. It shows strong generalization capabilities across diverse tasks, highlighting its potential for broad clinical application. The code is available at https://github.com/ljy19970415/UniBrain.
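As context for the zero-shot, text-guided multi-label classification described above, the sketch below shows a generic way to score disease labels by image-text embedding similarity; the encoder outputs, temperature, and sigmoid scoring are illustrative assumptions, not the UniBrain implementation (see the linked repository for the actual code).

```python
import torch
import torch.nn.functional as F

def zero_shot_multilabel(image_emb, text_embs, temperature=0.07):
    """Score each disease label by cosine similarity between an image embedding
    and the embeddings of the label text prompts (generic sketch, not UniBrain)."""
    image_emb = F.normalize(image_emb, dim=-1)          # (D,)
    text_embs = F.normalize(text_embs, dim=-1)          # (num_labels, D)
    logits = text_embs @ image_emb / temperature        # (num_labels,)
    return torch.sigmoid(logits)                        # independent per-label scores

# Toy embeddings: in practice these come from trained vision and text encoders.
probs = zero_shot_multilabel(torch.randn(512), torch.randn(8, 512))
print(probs.shape)  # torch.Size([8])
```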

MCNEL: A multi-scale convolutional network and ensemble learning for Alzheimer's disease diagnosis.

Yan F, Peng L, Dong F, Hirota K

PubMed · Jun 1, 2025
Alzheimer's disease (AD) significantly threatens community well-being and healthcare resource allocation due to its high incidence and mortality. Therefore, early detection and intervention are crucial for reducing AD-related fatalities. However, existing deep learning-based approaches often struggle to capture complex structural features of magnetic resonance imaging (MRI) data effectively. Common techniques for multi-scale feature fusion, such as direct summation and concatenation methods, often introduce redundant noise that can negatively affect model performance. These challenges highlight the need for more advanced methods to improve feature extraction and fusion, aiming to enhance diagnostic accuracy. This study proposes a multi-scale convolutional network and ensemble learning (MCNEL) framework for early and accurate AD diagnosis. The framework adopts enhanced versions of the EfficientNet-B0 and MobileNetV2 models, which are subsequently integrated with the DenseNet121 model to create a hybrid feature extraction tool capable of extracting features from multi-view slices. Additionally, a SimAM-based feature fusion method is developed to synthesize key feature information derived from multi-scale images. To ensure classification accuracy in distinguishing AD from multiple stages of cognitive impairment, this study designs an ensemble learning classifier model using multiple classifiers and a self-adaptive weight adjustment strategy. Extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset validate the effectiveness of our solution, which achieves average accuracies of 96.67% on ADNI-1 and 96.20% on ADNI-2. The results indicate that MCNEL outperforms recent comparable algorithms in terms of various evaluation metrics, demonstrating superior performance and robustness in AD diagnosis. This study markedly enhances the diagnostic capabilities for AD, allowing patients to receive timely treatments that can slow down disease progression and improve their quality of life.
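The abstract names a SimAM-based fusion step; the sketch below reproduces the widely used parameter-free SimAM attention operation (Yang et al., 2021) in PyTorch as a reference point. How MCNEL wires it into its multi-scale fusion is not specified here, so the surrounding usage is an assumption.

```python
import torch

def simam(x, e_lambda=1e-4):
    """Parameter-free SimAM attention applied to a feature map x of shape
    (B, C, H, W); the fusion wiring around it is not taken from the paper."""
    b, c, h, w = x.shape
    n = h * w - 1
    # Per-channel squared deviation from the spatial mean.
    d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
    v = d.sum(dim=(2, 3), keepdim=True) / n
    # Inverse energy: larger values mark more distinctive (important) activations.
    e_inv = d / (4 * (v + e_lambda)) + 0.5
    return x * torch.sigmoid(e_inv)

fused = simam(torch.randn(2, 64, 16, 16))
print(fused.shape)  # torch.Size([2, 64, 16, 16])
```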

DCE-MRI based deep learning analysis of intratumoral subregion for predicting Ki-67 expression level in breast cancer.

Ding Z, Zhang C, Xia C, Yao Q, Wei Y, Zhang X, Zhao N, Wang X, Shi S

PubMed · Jun 1, 2025
To evaluate whether deep learning (DL) analysis of intratumoral subregions based on dynamic contrast-enhanced MRI (DCE-MRI) can help predict Ki-67 expression level in breast cancer. A total of 290 breast cancer patients from two hospitals were retrospectively collected. A k-means clustering algorithm was used to identify subregions of the tumor. DL features of the whole tumor and subregions were extracted from DCE-MRI images using a pre-trained 3D ResNet18 model. A logistic regression model was constructed after dimensionality reduction. Model performance was assessed using the area under the curve (AUC), and clinical value was demonstrated through decision curve analysis (DCA). The k-means clustering method clustered the tumor into two subregions (habitat 1 and habitat 2) based on voxel values. Both the habitat 1 model (validation set: AUC = 0.771, 95% CI: 0.642-0.900; external test set: AUC = 0.794, 95% CI: 0.696-0.891) and the habitat 2 model (AUC = 0.734, 95% CI: 0.605-0.862 and AUC = 0.756, 95% CI: 0.646-0.866) showed better predictive capability for Ki-67 expression level than the whole-tumor model (AUC = 0.686, 95% CI: 0.550-0.823 and AUC = 0.680, 95% CI: 0.555-0.804). The combined model based on the two subregions further enhanced the predictive capability (AUC = 0.808, 95% CI: 0.696-0.921 and AUC = 0.842, 95% CI: 0.758-0.926) and demonstrated higher clinical value than the other models in DCA. The deep learning model derived from tumor subregions showed better performance for predicting Ki-67 expression level in breast cancer patients. Additionally, the model that integrated the two subregions further enhanced the predictive performance.
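To illustrate the habitat-definition step (k-means clustering on voxel values), here is a minimal sketch; the clustering feature (raw DCE-MRI intensity) and the two-habitat labelling are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def habitat_map(volume, tumor_mask, n_habitats=2, seed=0):
    """Split a tumor ROI into intensity-based subregions ("habitats") with k-means.
    volume and tumor_mask are same-shaped arrays; 0 in the output is background,
    1..n_habitats are the habitat labels."""
    values = volume[tumor_mask].reshape(-1, 1)
    labels = KMeans(n_clusters=n_habitats, n_init=10, random_state=seed).fit_predict(values)
    habitats = np.zeros(tumor_mask.shape, dtype=int)
    habitats[tumor_mask] = labels + 1
    return habitats

# Toy 3D volume with a random "tumor" mask.
volume = np.random.rand(32, 32, 16)
mask = volume > 0.7
print(np.unique(habitat_map(volume, mask)))  # [0 1 2]
```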

Exploring the significance of the frontal lobe for diagnosis of schizophrenia using explainable artificial intelligence and group level analysis.

Varaprasad SA, Goel T

PubMed · Jun 1, 2025
Schizophrenia (SZ) is a complex mental disorder characterized by a profound disruption in cognition and emotion, often resulting in a distorted perception of reality. Magnetic resonance imaging (MRI) is an essential tool for diagnosing SZ that helps to understand the organization of the brain. Functional MRI (fMRI) is a specialized imaging technique for measuring and mapping brain activity by detecting changes in blood flow and oxygenation. This paper correlates the results of an explainable deep learning approach with group-level analysis to identify the significant brain regions in SZ patients, using both structural MRI (sMRI) and fMRI data. The Grad-CAM heat maps show clear visualization in the frontal lobe for the classification of SZ patients versus controls (CN), with 97.33% accuracy. The group difference analysis reveals that sMRI data show intense voxel activity in the right superior frontal gyrus of the frontal lobe in SZ patients. The group difference between SZ and CN during n-back tasks in the fMRI data also indicates significant voxel activation in the frontal cortex of the frontal lobe. These findings suggest that the frontal lobe plays a crucial role in the diagnosis of SZ, aiding clinicians in planning treatment.
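Since the study visualizes classifier evidence with Grad-CAM, the following minimal PyTorch sketch shows the standard Grad-CAM computation using forward/backward hooks; the backbone (a torchvision ResNet-18) and target layer are placeholders, not the model used in the paper.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, image, class_idx):
    """Minimal Grad-CAM: weight the target layer's activations by the spatially
    averaged gradients of the class score, then ReLU, upsample, and normalize."""
    activations, gradients = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))
    score = model(image)[0, class_idx]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)               # (1, C, 1, 1)
    cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))   # (1, 1, h, w)
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Placeholder backbone standing in for the study's SZ-vs-CN classifier.
from torchvision.models import resnet18
net = resnet18(weights=None).eval()
heatmap = grad_cam(net, net.layer4, torch.randn(1, 3, 224, 224), class_idx=0)
print(heatmap.shape)  # torch.Size([1, 1, 224, 224])
```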

Semantic segmentation for individual thigh skeletal muscles of athletes on magnetic resonance images.

Kasahara J, Ozaki H, Matsubayashi T, Takahashi H, Nakayama R

PubMed · Jun 1, 2025
The skeletal muscles that athletes should train vary depending on their discipline and position. Therefore, assessment of individual skeletal muscle cross-sectional area is important in the development of training strategies. To measure the cross-sectional area of skeletal muscle, manual segmentation of each muscle is performed on magnetic resonance (MR) images. This task is time-consuming and requires significant effort, and interobserver variability can sometimes be problematic. The purpose of this study was to develop an automated computerized method for semantic segmentation of individual thigh skeletal muscles from MR images of athletes. Our database consisted of 697 images from the thighs of 697 elite athletes. The images were randomly divided into a training dataset (70%), a validation dataset (10%), and a test dataset (20%). A label image was generated for each image by manually annotating 15 object classes: 12 different skeletal muscles, fat, bones, and vessels and nerves. Using the validation dataset, DeepLab v3+ was chosen from three different semantic segmentation models as the base model for segmenting individual thigh skeletal muscles. The feature extractor in DeepLab v3+ was also optimized to ResNet50. The mean Jaccard index and Dice index for the proposed method were 0.853 and 0.916, respectively, significantly higher than those of the conventional DeepLab v3+ (Jaccard index: 0.810, p < .001; Dice index: 0.887, p < .001). The proposed method achieved a mean area error of 3.12% across the 15 object classes, making it useful for assessing skeletal muscle cross-sectional area from MR images.
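As a reference for the reported metrics, this short sketch computes per-class Jaccard and Dice indices from integer label maps; the random arrays merely stand in for predicted and ground-truth segmentations.

```python
import numpy as np

def jaccard_dice(pred, truth, n_classes=15):
    """Per-class Jaccard and Dice indices for integer label maps (classes 0..n_classes-1)."""
    jaccard, dice = [], []
    for c in range(n_classes):
        p, t = pred == c, truth == c
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        jaccard.append(inter / union if union else np.nan)
        dice.append(2 * inter / (p.sum() + t.sum()) if (p.sum() + t.sum()) else np.nan)
    return np.array(jaccard), np.array(dice)

pred = np.random.randint(0, 15, size=(256, 256))
truth = np.random.randint(0, 15, size=(256, 256))
j, d = jaccard_dice(pred, truth)
print(np.nanmean(j), np.nanmean(d))
```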

Standardized pancreatic MRI-T1 measurement methods: comparison between manual measurement and a semi-automated pipeline with automatic quality control.

Triay Bagur A, Arya Z, Waddell T, Pansini M, Fernandes C, Counter D, Jackson E, Thomaides-Brears HB, Robson MD, Bulte DP, Banerjee R, Aljabar P, Brady M

PubMed · Jun 1, 2025
Scanner-referenced T1 (srT1) is a method for measuring pancreas T1 relaxation time. The purpose of this multi-centre study is twofold: (1) to evaluate the repeatability of manual ROI-based analysis of srT1, and (2) to validate a semi-automated measurement method with an automatic quality control (QC) module that identifies likely discrepancies between automated and manual measurements. Pancreatic MRI scans from a scan-rescan cohort (46 subjects) were used to evaluate the repeatability of manual analysis. Seven hundred and eight scans from a longitudinal multi-centre study of 466 subjects were divided into training, internal validation (IV), and external validation (EV) cohorts. A semi-automated method for measuring srT1 using machine learning is proposed and compared against manual analysis on the validation cohorts with and without automated QC. Inter-operator agreement between the manual ROI-based method and the semi-automated method had low bias (3.8 ms or 0.5%) and limits of agreement of [-36.6, 44.1] ms. There was good agreement between the two methods without automated QC (IV: 3.2 [-47.1, 53.5] ms; EV: -0.5 [-35.2, 34.2] ms). After QC, agreement on the IV set improved, was unchanged on the EV set, and agreement on both was within the inter-operator bounds (IV: -0.04 [-33.4, 33.3] ms; EV: -1.9 [-37.6, 33.7] ms). The semi-automated method improved scan-rescan agreement versus manual analysis (manual: 8.2 [-49.7, 66] ms; automated: 6.7 [-46.7, 60.1] ms). The semi-automated method for characterization of standardized pancreatic T1 using MRI has the potential to decrease analysis time while maintaining accuracy and improving scan-rescan agreement. We provide intra-operator, inter-operator, and scan-rescan agreement values for manual measurement of srT1, a standardized biomarker of pancreatic fibro-inflammation. Applying a semi-automated measurement method improves scan-rescan agreement and agrees well with manual measurements, while reducing human effort. Adding automated QC can improve agreement between manual and automated measurements. We describe a method for semi-automated, standardized measurement of pancreatic T1 (srT1), which includes automated quality control. Measurements show good agreement with manual ROI-based analysis, with consistency comparable to inter-operator performance.
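The agreement figures above follow the Bland-Altman convention (bias and 95% limits of agreement); the sketch below shows that computation on toy srT1 values, which are made up for illustration.

```python
import numpy as np

def bland_altman(measurement_a, measurement_b):
    """Bias and 95% limits of agreement between two sets of paired measurements
    (e.g. manual vs semi-automated srT1), per the Bland-Altman convention."""
    diff = np.asarray(measurement_a) - np.asarray(measurement_b)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)

manual = np.array([720.0, 705.0, 740.0, 698.0, 730.0])  # toy srT1 values in ms
auto = np.array([716.0, 710.0, 735.0, 702.0, 726.0])
bias, (low, high) = bland_altman(manual, auto)
print(f"bias = {bias:.1f} ms, limits of agreement = [{low:.1f}, {high:.1f}] ms")
```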

Optimized attention-enhanced U-Net for autism detection and region localization in MRI.

K VRP, Bindu CH, Rama Devi K

PubMed · Jun 1, 2025
Autism spectrum disorder (ASD) is a neurodevelopmental condition that affects a child's cognitive and social skills, often diagnosed only after symptoms appear around age 2. Leveraging MRI for early ASD detection can improve intervention outcomes. This study proposes a framework for autism detection and region localization using an optimized deep learning approach with attention mechanisms. The pipeline includes MRI image collection, pre-processing (bias field correction, histogram equalization, artifact removal, and non-local mean filtering), and autism classification with a Symmetric Structured MobileNet with Attention Mechanism (SSM-AM). Enhanced by Refreshing Awareness-aided Election-Based Optimization (RA-EBO), SSM-AM achieves robust classification. Abnormality region localization utilizes a Multiscale Dilated Attention-based Adaptive U-Net (MDA-AUnet) further optimized by RA-EBO. Experimental results demonstrate that our proposed model outperforms existing methods, achieving an accuracy of 97.29%, sensitivity of 97.27%, specificity of 97.36%, and precision of 98.98%, significantly improving classification and localization performance. These results highlight the potential of our approach for early ASD diagnosis and targeted interventions. The datasets utilized for this work are publicly available at https://fcon_1000.projects.nitrc.org/indi/abide/.
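For reference, the reported accuracy, sensitivity, specificity, and precision can all be computed from a binary confusion matrix as in the sketch below; the labels are toy values, not study data.

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, and precision for binary labels
    (1 = positive class, e.g. ASD), computed from the confusion matrix."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
    }

print(classification_metrics([1, 0, 1, 1, 0, 0], [1, 0, 1, 0, 0, 1]))
```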
