
Prediction of therapeutic response to transarterial chemoembolization plus systemic therapy regimen in hepatocellular carcinoma using pretreatment contrast-enhanced MRI based habitat analysis and Crossformer model.

Zhu Y, Liu T, Chen J, Wen L, Zhang J, Zheng D

PubMed · Jun 1 2025
To develop habitat and deep learning (DL) models from multi-phase contrast-enhanced magnetic resonance imaging (CE-MRI) habitat images categorized using the K-means clustering algorithm. Additionally, we aim to assess the predictive value of identified regions for early evaluation of the responsiveness of hepatocellular carcinoma (HCC) patients to treatment with transarterial chemoembolization (TACE) plus molecular targeted therapies (MTT) and anti-PD-(L)1. A total of 102 patients with HCC from two institutions (A, n = 63 and B, n = 39) who received TACE plus systemic therapy were enrolled from September 2020 to January 2024. Multiple CE-MRI sequences were used to outline 3D volumes of interest (VOI) of the lesion. Subsequently, K-means clustering was applied to categorize intratumoral voxels into three distinct subgroups, based on the signal intensity values of the images. Using data from institution A, the habitat model was built with the ExtraTrees classifier after extracting radiomics features from intratumoral habitats. Similarly, the Crossformer model and ResNet50 model were trained on multi-channel data in institution A, and a DL model with Transformer-based aggregation was constructed to predict the response. Finally, all models underwent validation at institution B. The Crossformer model and the habitat model showed high areas under the receiver operating characteristic curve (AUCs) of 0.869 and 0.877, respectively, in the training cohort. In validation, the AUC was 0.762 for the Crossformer model and 0.721 for the habitat model. The habitat model and DL model based on CE-MRI possess the capability to non-invasively predict the efficacy of TACE plus systemic therapy in HCC patients, which is critical for precision treatment and patient outcomes.
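The habitat step described above, partitioning intratumoral voxels into three subregions by K-means on signal intensities, can be sketched as follows. This is a minimal 1-D plain-Python illustration with deterministic quantile initialization; a real pipeline would cluster multi-phase intensity vectors per voxel (e.g., with scikit-learn's KMeans), and the toy intensity values are invented for illustration.

```python
def kmeans_1d(values, k=3, iters=50):
    """Minimal 1-D K-means: partition voxel intensities into k habitats."""
    # Deterministic init: spread initial centers across the sorted intensities.
    ordered = sorted(values)
    centers = [ordered[i * (len(ordered) - 1) // (k - 1)] for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each voxel joins its nearest center.
        labels = [min(range(k), key=lambda c: abs(v - centers[c])) for v in values]
        # Update step: each center moves to the mean of its cluster.
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels, centers

# Toy "voxel intensities" with three obvious habitats.
intensities = [10, 12, 11, 50, 52, 49, 90, 91, 88]
labels, centers = kmeans_1d(intensities, k=3)
```

Each voxel's label then defines the habitat mask from which radiomics features are extracted per subregion.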

Structural alterations as a predictor of depression - a 7-Tesla MRI-based multidimensional approach.

Schnellbächer GJ, Rajkumar R, Veselinović T, Ramkiran S, Hagen J, Collee M, Shah NJ, Neuner I

PubMed · Jun 1 2025
Major depressive disorder (MDD) is a debilitating condition that is associated with changes in the default-mode network (DMN). Commonly reported features include alterations in gray matter volume (GMV), cortical thickness (CoT), and gyrification. A comprehensive examination of these variables using ultra-high field strength MRI and machine learning methods may lead to novel insights into the pathophysiology of depression and help develop a more personalized therapy. Cerebral images were obtained from 41 patients with confirmed MDD and 41 healthy controls, matched for age and gender, using a 7-T MRI. DMN parcellation followed the Schaefer 600 Atlas. Based on the results of a mixed-model repeated measures analysis, a support vector machine (SVM) calculation followed by leave-one-out cross-validation determined the predictive ability of structural features for the presence of MDD. A consecutive permutation procedure identified which areas contributed to the classification results. Correlating changes in those areas with BDI-II and AMDP scores added an explanatory aspect to this study. CoT did not delineate relevant changes in the mixed model and was excluded from further analysis. The SVM achieved a good prediction accuracy of 0.76 using gyrification data. GMV was not a viable predictor of disease presence; however, it correlated in the left parahippocampal gyrus with disease severity as measured by the BDI-II. Structural data of the DMN may therefore contain the necessary information to predict the presence of MDD. However, there may be inherent challenges with predicting disease course or treatment response due to high GMV variance and the static character of gyrification. Further improvements in data acquisition and analysis may help to overcome these difficulties.
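The leave-one-out cross-validation protocol used above can be sketched as follows. This is a minimal plain-Python sketch of the LOOCV loop only: the study used an SVM, but here a simple nearest-centroid classifier stands in so the protocol itself is visible, and the toy gyrification-like feature vectors are invented for illustration.

```python
def nearest_centroid_predict(train, test_point):
    """Predict the class whose per-class feature mean is closest to the test point."""
    by_label = {}
    for features, label in train:
        by_label.setdefault(label, []).append(features)
    best, best_d = None, float("inf")
    for label, rows in by_label.items():
        mean = [sum(col) / len(rows) for col in zip(*rows)]
        d = sum((a - b) ** 2 for a, b in zip(mean, test_point))
        if d < best_d:
            best, best_d = label, d
    return best

def leave_one_out_accuracy(dataset):
    """LOOCV: hold out each subject once, train on the rest, tally accuracy."""
    hits = 0
    for i, (features, label) in enumerate(dataset):
        train = dataset[:i] + dataset[i + 1:]
        hits += nearest_centroid_predict(train, features) == label
    return hits / len(dataset)

# Toy gyrification-like features: (feature vector, group), 0 = control, 1 = MDD.
data = [([1.0, 1.1], 0), ([0.9, 1.0], 0), ([1.1, 0.9], 0),
        ([2.0, 2.1], 1), ([2.1, 1.9], 1), ([1.9, 2.0], 1)]
acc = leave_one_out_accuracy(data)
```

LOOCV is attractive for small cohorts such as 41 + 41 subjects because every subject serves as a test case exactly once.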

MRI-based radiomic nomogram for predicting disease-free survival in patients with locally advanced rectal cancer.

Liu J, Liu K, Cao F, Hu P, Bi F, Liu S, Jian L, Zhou J, Nie S, Lu Q, Yu X, Wen L

PubMed · Jun 1 2025
Individual prognosis assessment is of paramount importance for treatment decision-making and active surveillance in cancer patients. We aimed to propose a radiomic model based on pre- and post-therapy MRI features for predicting disease-free survival (DFS) in locally advanced rectal cancer (LARC) following neoadjuvant chemoradiotherapy (nCRT) and subsequent surgical resection. This retrospective study included a total of 126 LARC patients, who were randomly assigned to a training set (n = 84) and a validation set (n = 42). All patients underwent pre- and post-nCRT MRI scans. Radiomic features were extracted from higher resolution T2-weighted images. Pearson correlation analysis and ANOVA or Relief were utilized to identify radiomic features associated with DFS. Pre-treatment, post-treatment, and delta radscores were constructed by machine learning algorithms. An individualized nomogram was developed based on significant radscores and clinical variables using multivariate Cox regression analysis. Predictive performance was evaluated by the C-index, calibration curve, and decision curve analysis. The results demonstrated that in the validation set, the clinical model including pre-surgery carcinoembryonic antigen (CEA), chemotherapy after radiotherapy, and pathological stage yielded a C-index of 0.755 (95% confidence interval [CI]: 0.739-0.771). The optimal pre-, post-, and delta-radscores achieved C-indices of 0.724 (95%CI: 0.701-0.747), 0.701 (95%CI: 0.671-0.731), and 0.625 (95%CI: 0.589-0.661), respectively. The nomogram integrating pre-surgery CEA, pathological stage, alongside pre- and post-nCRT radscore, obtained the highest C-index of 0.833 (95%CI: 0.815-0.851). The calibration curve and decision curves exhibited good calibration and clinical usefulness of the nomogram. Furthermore, the nomogram categorized patients into high- and low-risk groups with distinct DFS (both P < 0.0001).
The nomogram incorporating pre- and post-therapy radscores and clinical factors could predict DFS in patients with LARC, which helps clinicians in optimizing decision-making and surveillance in real-world settings.
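The C-index reported throughout this abstract (Harrell's concordance index) measures how often, among comparable patient pairs, the patient with the higher predicted risk actually relapses first. A minimal plain-Python computation, with invented toy DFS data for illustration:

```python
def harrell_c_index(times, events, risk_scores):
    """Harrell's concordance index: among comparable pairs, the fraction where
    the higher-risk patient has the shorter survival time (ties count 0.5).
    A pair (i, j) is comparable when patient i had the event before time j."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy DFS data: follow-up time (months), event flag (1 = relapse), risk score.
times = [10, 20, 30, 40]
events = [1, 1, 0, 1]
scores = [0.9, 0.7, 0.2, 0.4]
c = harrell_c_index(times, events, scores)
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why the nomogram's 0.833 is a meaningful gain over the 0.755 clinical model.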

Automatic 3-dimensional analysis of posterosuperior full-thickness rotator cuff tear size on magnetic resonance imaging.

Hess H, Gussarow P, Rojas JT, Zumstein MA, Gerber K

PubMed · Jun 1 2025
Tear size and shape are known to prognosticate the efficacy of surgical rotator cuff (RC) repair; however, current manual measurements on magnetic resonance images (MRIs) exhibit high interobserver variabilities and exclude 3-dimensional (3D) morphologic information. This study aimed to develop algorithms for automatic 3D analyses of posterosuperior full-thickness RC tears to enable efficient and precise tear evaluation and 3D tear visualization. A deep-learning network for automatic segmentation of the tear region in coronal and sagittal multicenter MRI was trained with manually segmented (consensus of 3 experts) proton density- and T2-weighted MRI of shoulders with full-thickness posterosuperior tears (n = 200). Algorithms for automatic measurement of tendon retraction, tear width, tear area, and automatic Patte classification considering the 3D morphology of the shoulder were implemented and evaluated against manual segmentation (n = 59). Automatic Patte classification was calculated using the automatically segmented humerus and scapula on T1-weighted MRI of the same shoulders. Tears were automatically segmented, enabling 3D visualization of the tear, with a mean Dice coefficient of 0.58 ± 0.21, compared to an interobserver variability of 0.46 ± 0.21. The mean absolute errors of automatic tendon retraction and tear width measurements (4.98 ± 4.49 mm and 3.88 ± 3.18 mm) were lower than the interobserver variabilities (5.42 ± 7.09 mm and 5.92 ± 1.02 mm). The correlations of all measurements performed on automatic tear segmentations compared with those on consensus segmentations were higher than the interobserver correlation. Automatic Patte classification achieved a Cohen kappa value of 0.62, compared with the interobserver variability of 0.56. Retraction calculated using standard linear measures underestimated the tear size relative to measurements considering the curved shape of the humeral head, especially for larger tears.
Even on highly heterogeneous data, the proposed algorithms demonstrated the feasibility of automating tear size analysis and enabling automatic 3D visualization of the tear situation. The presented algorithms standardize cross-center tear analyses and enable the calculation of additional metrics, potentially improving the predictive power of image-based tear measurements for the outcome of surgical treatments, thus aiding in RC tear diagnosis, treatment decision, and planning.
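The Dice coefficient used above to compare automatic and consensus segmentations is twice the overlap divided by the total foreground of both masks. A minimal plain-Python version on flattened binary masks (the toy masks are invented for illustration; real inputs would be flattened 3-D label volumes):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat sequences).
    Returns 1.0 when both masks are empty, by convention."""
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0

# Toy 1-D masks standing in for flattened 3-D tear segmentations.
auto = [0, 1, 1, 1, 0, 0]
manual = [0, 0, 1, 1, 1, 0]
d = dice(auto, manual)  # overlap of 2 voxels out of 3 + 3 foreground voxels
```

The reported automatic Dice of 0.58 is judged against the interobserver Dice of 0.46, i.e., against how much two human raters overlap with each other on the same tears.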

Incorporating Radiologist Knowledge Into MRI Quality Metrics for Machine Learning Using Rank-Based Ratings.

Tang C, Eisenmenger LB, Rivera-Rivera L, Huo E, Junn JC, Kuner AD, Oechtering TH, Peret A, Starekova J, Johnson KM

PubMed · Jun 1 2025
Deep learning (DL) often requires an image quality metric; however, widely used metrics are not designed for medical images. To develop an image quality metric that is specific to MRI using radiologists' image rankings and DL models. Retrospective. A total of 19,344 rankings on 2916 unique image pairs from the NYU fastMRI Initiative neuro database were used to train the neural network-based image quality metrics, with an 80%/20% training/validation split and fivefold cross-validation. 1.5 T and 3 T T1, T1 postcontrast, T2, and Fluid-Attenuated Inversion Recovery (FLAIR). Synthetically corrupted image pairs were ranked by radiologists (N = 7), with a subset also scoring images using a Likert scale (N = 2). DL models were trained to match rankings using two architectures (EfficientNet and IQ-Net) with and without reference image subtraction and compared to ranking based on mean squared error (MSE) and structural similarity (SSIM). Image quality assessing DL models were evaluated as alternatives to MSE and SSIM as optimization targets for DL denoising and reconstruction. Radiologists' agreement was assessed by a percentage metric and quadratic weighted Cohen's kappa. Ranking accuracies were compared using repeated measures analysis of variance. Reconstruction models trained with IQ-Net score, MSE, and SSIM were compared by paired t test. P < 0.05 was considered significant. Compared to direct Likert scoring, ranking produced a higher level of agreement between radiologists (70.4% vs. 25%). Although subjective, image ranking showed a high level of intraobserver agreement (94.9% ± 2.4%) but lower interobserver agreement (61.47% ± 5.51%).
IQ-Net and EfficientNet accurately predicted rankings with a reference image (75.2% ± 1.3% and 79.2% ± 1.7%, respectively). However, EfficientNet resulted in images with artifacts and high MSE when used in denoising tasks, while IQ-Net-optimized networks performed well for both denoising and reconstruction tasks. Image quality networks can be trained from image ranking and used to optimize DL tasks. LEVEL OF EVIDENCE: 3 TECHNICAL EFFICACY: Stage 1.
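The quadratic weighted Cohen's kappa used above to quantify rater agreement penalizes disagreements by the squared distance between ordinal scores. A minimal plain-Python computation (the two toy rating sequences are invented for illustration):

```python
def quadratic_weighted_kappa(a, b, n_classes):
    """Cohen's kappa with quadratic weights for ordinal ratings a and b:
    1 - (weighted observed disagreement) / (weighted chance disagreement)."""
    n = len(a)
    # Observed confusion matrix and per-rater marginal histograms.
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for x, y in zip(a, b):
        obs[x][y] += 1
    hist_a = [a.count(c) for c in range(n_classes)]
    hist_b = [b.count(c) for c in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic weight
            num += w * obs[i][j] / n                 # observed disagreement
            den += w * hist_a[i] * hist_b[j] / (n * n)  # expected by chance
    return 1.0 - num / den

# Two raters scoring the same images on a 0-4 ordinal quality scale.
rater1 = [0, 1, 2, 3, 4, 2, 1]
rater2 = [0, 1, 2, 3, 4, 2, 0]
kappa = quadratic_weighted_kappa(rater1, rater2, n_classes=5)
```

Because the single disagreement here is only one step apart on the scale, the quadratic weighting keeps the penalty small and the kappa high; scikit-learn's `cohen_kappa_score(..., weights="quadratic")` computes the same statistic.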

Evaluation of a deep learning prostate cancer detection system on biparametric MRI against radiological reading.

Debs N, Routier A, Bône A, Rohé MM

PubMed · Jun 1 2025
This study aims to evaluate a deep learning pipeline for detecting clinically significant prostate cancer (csPCa), defined as Gleason Grade Group (GGG) ≥ 2, using biparametric MRI (bpMRI) and compare its performance with radiological reading. The training dataset included 4381 bpMRI cases (3800 positive and 581 negative) across three continents, with 80% annotated using PI-RADS and 20% with Gleason Scores. The testing set comprised 328 cases from the PROSTATEx dataset, including 34% positive (GGG ≥ 2) and 66% negative cases. A 3D nnU-Net was trained on bpMRI for lesion detection, evaluated using histopathology-based annotations, and assessed with patient- and lesion-level metrics, along with lesion volume and GGG. The algorithm was compared to non-expert radiologists using multi-parametric MRI (mpMRI). The model achieved an AUC of 0.83 (95% CI: 0.80, 0.87). Lesion-level sensitivity was 0.85 (95% CI: 0.82, 0.94) at 0.5 False Positives per volume (FP/volume) and 0.88 (95% CI: 0.79, 0.92) at 1 FP/volume. Average Precision was 0.55 (95% CI: 0.46, 0.64). The model showed over 0.90 sensitivity for lesions larger than 650 mm³ and exceeded 0.85 across GGGs. It had higher true positive rates (TPRs) than radiologists at equivalent FP rates, achieving TPRs of 0.93 and 0.79 compared to radiologists' 0.87 and 0.68 for PI-RADS ≥ 3 and PI-RADS ≥ 4 lesions (p ≤ 0.05). The DL model showed strong performance in detecting csPCa on an independent test cohort, surpassing radiological interpretation and demonstrating AI's potential to improve diagnostic accuracy for non-expert radiologists. However, detecting small lesions remains challenging. Question Current prostate cancer detection methods often do not involve non-expert radiologists, highlighting the need for more accurate deep learning approaches using biparametric MRI. Findings Our model outperforms radiologists significantly, showing consistent performance across Gleason Grade Groups and for medium to large lesions.
Clinical relevance This AI model improves prostate cancer detection accuracy in prostate imaging, serves as a benchmark with reference performance on a public dataset, and offers public PI-RADS annotations, enhancing transparency and facilitating further research and development.
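The operating points quoted above (lesion-level sensitivity at 0.5 or 1 FP/volume) come from a FROC-style analysis: sweep the detection score threshold and read off sensitivity at the threshold whose false-positive rate stays within budget. A simplified plain-Python sketch, assuming each true-positive detection has already been matched to a distinct ground-truth lesion (real FROC evaluation also handles multiple detections hitting one lesion); the toy detection list is invented for illustration:

```python
def sensitivity_at_fp_rate(detections, n_lesions, n_volumes, max_fp_per_volume):
    """Report lesion-level sensitivity at the most permissive score threshold
    whose false-positive rate stays at or below max_fp_per_volume.
    detections: (score, is_true_positive) pairs pooled over the test set."""
    best_sens = 0.0
    for t in sorted({s for s, _ in detections}, reverse=True):
        kept = [(s, tp) for s, tp in detections if s >= t]
        fps = sum(1 for _, tp in kept if not tp)
        if fps / n_volumes <= max_fp_per_volume:
            tps = sum(1 for _, tp in kept if tp)
            best_sens = max(best_sens, tps / n_lesions)
    return best_sens

# Toy test set: 4 volumes, 4 ground-truth lesions, scored candidate detections.
dets = [(0.95, True), (0.90, True), (0.80, False), (0.70, True),
        (0.60, False), (0.50, False), (0.40, True)]
sens = sensitivity_at_fp_rate(dets, n_lesions=4, n_volumes=4,
                              max_fp_per_volume=0.5)
```

Reporting sensitivity at a fixed FP/volume budget makes detectors comparable even when their score calibrations differ.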

Computer-Aided Detection (CADe) and Segmentation Methods for Breast Cancer Using Magnetic Resonance Imaging (MRI).

Jannatdoust P, Valizadeh P, Saeedi N, Valizadeh G, Salari HM, Saligheh Rad H, Gity M

PubMed · Jun 1 2025
Breast cancer continues to be a major health concern, and early detection is vital for enhancing survival rates. Magnetic resonance imaging (MRI) is a key tool due to its substantial sensitivity for invasive breast cancers. Computer-aided detection (CADe) systems enhance the effectiveness of MRI by identifying potential lesions, aiding radiologists in focusing on areas of interest, extracting quantitative features, and integrating with computer-aided diagnosis (CADx) pipelines. This review aims to provide a comprehensive overview of the current state of CADe systems in breast MRI, focusing on the technical details of pipelines and segmentation models including classical intensity-based methods, supervised and unsupervised machine learning (ML) approaches, and the latest deep learning (DL) architectures. It highlights recent advancements from traditional algorithms to sophisticated DL models such as U-Nets, emphasizing CADe implementations for multi-parametric MRI acquisitions. Despite these advancements, CADe systems face challenges like variable false-positive and false-negative rates, complexity in interpreting extensive imaging data, variability in system performance, and lack of large-scale studies and multicentric models, limiting the generalizability and suitability for clinical implementation. Technical issues, including image artefacts and the need for reproducible and explainable detection algorithms, remain significant hurdles. Future directions emphasize developing more robust and generalizable algorithms, integrating explainable AI to improve transparency and trust among clinicians, developing multi-purpose AI systems, and incorporating large language models to enhance diagnostic reporting and patient management. Additionally, efforts to standardize and streamline MRI protocols aim to increase accessibility and reduce costs, optimizing the use of CADe systems in clinical practice. LEVEL OF EVIDENCE: NA TECHNICAL EFFICACY: Stage 2.
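The "classical intensity-based methods" the review mentions typically reduce to thresholding a (subtraction) image and grouping bright pixels into connected components that become lesion candidates. A minimal 2-D plain-Python sketch of that candidate-generation step, with an invented toy image for illustration:

```python
from collections import deque

def detect_candidates(image, threshold, min_size=2):
    """Classical intensity-based CADe step: threshold the image, then group
    suprathreshold pixels into 4-connected components and keep large ones."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    candidates = []
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                # Flood-fill one connected component of bright pixels.
                comp, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w and not seen[ny][nx]
                                and image[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_size:  # size filter drops speckle noise
                    candidates.append(comp)
    return candidates

# Toy "subtraction image": one bright 3-pixel focus plus one isolated pixel.
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 0, 8],
       [0, 0, 0, 0]]
found = detect_candidates(img, threshold=5, min_size=2)
```

Modern DL pipelines replace this hand-crafted step with learned segmentation (e.g., a U-Net), but the candidate-then-classify structure of CADe remains the same.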

Semi-Supervised Learning Allows for Improved Segmentation With Reduced Annotations of Brain Metastases Using Multicenter MRI Data.

Ottesen JA, Tong E, Emblem KE, Latysheva A, Zaharchuk G, Bjørnerud A, Grøvik E

PubMed · Jun 1 2025
Deep learning-based segmentation of brain metastases relies on large amounts of fully annotated data by domain experts. Semi-supervised learning offers potentially efficient methods to improve model performance without excessive annotation burden. This work tests the viability of semi-supervision for brain metastases segmentation. Retrospective. There were 156, 65, 324, and 200 labeled scans from four institutions and 519 unlabeled scans from a single institution. All subjects included in the study had been diagnosed with brain metastases. 1.5 T and 3 T, 2D and 3D T1-weighted pre- and post-contrast, and fluid-attenuated inversion recovery (FLAIR). Three semi-supervision methods (mean teacher, cross-pseudo supervision, and interpolation consistency training) were adapted with the U-Net architecture. The three semi-supervised methods were compared to their respective supervised baselines on the full and half-sized training sets. Evaluation was performed on a multinational test set from four different institutions using 5-fold cross-validation. Method performance was evaluated by the following: the number of false-positive predictions, the number of true-positive predictions, the 95th percentile Hausdorff distance, and the Dice similarity coefficient (DSC). Significance was tested using a paired samples t test for a single fold, and across all folds within a given cohort. Semi-supervision outperformed the supervised baseline for all sites: across the four test cohorts, the best-performing semi-supervised method achieved average DSC improvements over the supervised baseline of 6.3% ± 1.6%, 8.2% ± 3.8%, 8.6% ± 2.6%, and 15.4% ± 1.4% when trained on half the dataset, and 3.6% ± 0.7%, 2.0% ± 1.5%, 1.8% ± 5.7%, and 4.7% ± 1.7% when trained on the full dataset. In addition, in three of four datasets, the semi-supervised training produced equal or better results than the supervised models trained on twice the labeled data.
Semi-supervised learning allows for improved segmentation performance over the supervised baseline, and the improvement was particularly notable for independent external test sets when trained on small amounts of labeled data. Artificial intelligence requires extensive datasets with large amounts of annotated data from medical experts, which can be difficult to acquire due to the large workload. To compensate for this, it is possible to utilize large amounts of un-annotated clinical data in addition to annotated data. However, this method has not been widely tested for the most common intracranial brain tumor, brain metastases. This study shows that this approach allows for data-efficient deep learning models across multiple institutions with different clinical protocols and scanners. LEVEL OF EVIDENCE: 3 TECHNICAL EFFICACY: Stage 2.
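Of the three semi-supervision methods compared above, the mean teacher is the simplest to sketch: a teacher network whose weights are an exponential moving average (EMA) of the student's, plus a consistency loss that pulls the student's predictions on unlabeled scans toward the teacher's. A minimal plain-Python sketch of just those two ingredients, with toy weight and prediction vectors invented for illustration (a real implementation would apply this per tensor in a U-Net):

```python
def ema_update(teacher, student, alpha=0.99):
    """Mean-teacher step: after each optimizer step, the teacher weights
    become an exponential moving average of the student weights."""
    return [alpha * t + (1.0 - alpha) * s for t, s in zip(teacher, student)]

def consistency_loss(teacher_pred, student_pred):
    """Unlabeled-data term: mean squared difference between teacher and
    student predictions on the same (differently perturbed) scan."""
    n = len(teacher_pred)
    return sum((t - s) ** 2 for t, s in zip(teacher_pred, student_pred)) / n

# Toy weights: the teacher drifts slowly toward the student each step.
teacher = [0.0, 0.0]
student = [1.0, 2.0]
teacher = ema_update(teacher, student, alpha=0.9)
loss = consistency_loss([0.2, 0.4], [0.0, 0.4])
```

Because the consistency term needs no labels, the 519 unlabeled scans contribute gradient signal alongside the labeled cohorts, which is what drives the DSC gains in the low-annotation regime.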
