
Deep Learning Radiomics Nomogram Based on MRI for Differentiating between Borderline Ovarian Tumors and Stage I Ovarian Cancer: A Multicenter Study.

Wang X, Quan T, Chu X, Gao M, Zhang Y, Chen Y, Bai G, Chen S, Wei M

PubMed · Jun 1, 2025
To develop and validate a deep learning radiomics nomogram (DLRN) based on T2-weighted MRI to distinguish between borderline ovarian tumors (BOTs) and stage I epithelial ovarian cancer (EOC) preoperatively. This retrospective multicenter study enrolled 279 patients from three centers, divided into a training set (n = 207) and an external test set (n = 72). Intra- and peritumoral radiomics analysis was employed to develop a combined radiomics model. A deep learning model was constructed based on the largest orthogonal slices of the tumor volume, and a clinical model was constructed using independent clinical predictors. The DLRN was then constructed by integrating deep learning, intra- and peritumoral radiomics, and clinical predictors. For comparison, an original radiomics model based solely on tumor volume (excluding the peritumoral area) was also constructed. All models were validated through 10-fold cross-validation and external testing, and their predictive performance was evaluated by the area under the receiver operating characteristic curve (AUC). The DLRN demonstrated superior performance across the 10-fold cross-validation, achieving the highest AUC (0.825 ± 0.082). On the external test set, the DLRN significantly outperformed the clinical model and the original radiomics model (AUC = 0.819 vs. 0.708 and 0.670, P = 0.047 and 0.015, respectively). Furthermore, the combined radiomics model performed significantly better than the original radiomics model (AUC = 0.778 vs. 0.670, P = 0.043). The DLRN exhibited promising performance in distinguishing BOTs from stage I EOC preoperatively, and may thus assist clinical decision-making.
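A nomogram of this kind typically fuses the sub-model outputs through a logistic model. A minimal sketch of that final step, with entirely hypothetical coefficients (the abstract does not report the fitted weights):

```python
import math

def nomogram_probability(dl_score, radiomics_score, clinical_score,
                         weights, intercept):
    """Fuse sub-model scores into one probability via logistic regression,
    the usual arithmetic behind a radiomics nomogram."""
    z = intercept + sum(w * s for w, s in
                        zip(weights, (dl_score, radiomics_score, clinical_score)))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients -- for illustration only.
p = nomogram_probability(0.7, 0.6, 0.4, weights=(1.2, 0.9, 0.5), intercept=-1.0)
```

Each weighted score corresponds to one axis of the printed nomogram; the sum of the point contributions maps to the predicted probability.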

Integration of Deep Learning and Sub-regional Radiomics Improves the Prediction of Pathological Complete Response to Neoadjuvant Chemoradiotherapy in Locally Advanced Rectal Cancer Patients.

Wu X, Wang J, Chen C, Cai W, Guo Y, Guo K, Chen Y, Shi Y, Chen J, Lin X, Jiang X

PubMed · Jun 1, 2025
The precise prediction of response to neoadjuvant chemoradiotherapy is crucial for tailoring perioperative treatment in patients diagnosed with locally advanced rectal cancer (LARC). This retrospective study aims to develop and validate a model that integrates deep learning and sub-regional radiomics from MRI to predict pathological complete response (pCR) in patients with LARC. We retrospectively enrolled 768 eligible participants from three independent hospitals who had received neoadjuvant chemoradiotherapy followed by radical surgery. Pretreatment pelvic MRI scans (T2-weighted) were collected for annotation and feature extraction. The K-means approach was used to segment the tumor into sub-regions. Radiomics and deep learning features were extracted using PyRadiomics and a 3D ResNet50, respectively. The predictive models were developed from the radiomics, sub-regional radiomics, and deep learning features with machine learning algorithms in the training cohort, and then validated in the external test cohorts. The models' performance was assessed using various metrics, including the area under the curve (AUC), decision curve analysis, and Kaplan-Meier survival analysis. We constructed a combined model, named SRADL, which includes deep learning with sub-regional radiomics signatures, enabling precise prediction of pCR in LARC patients. SRADL had satisfactory performance for the prediction of pCR in the training cohort (AUC 0.925 [95% CI 0.894 to 0.948]), in test 1 (AUC 0.915 [95% CI 0.869 to 0.949]), and in test 2 (AUC 0.902 [95% CI 0.846 to 0.945]). By employing an optimal threshold of 0.486, the predicted pCR group had longer survival compared to the predicted non-pCR group across all three cohorts. SRADL also outperformed other single-modality prediction models.
The novel SRADL, which integrates deep learning with sub-regional signatures, showed high accuracy and robustness in predicting pCR to neoadjuvant chemoradiotherapy using pretreatment MRI images, making it a promising tool for the personalized management of LARC.
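The sub-regional step described above clusters tumor voxels into habitats before feature extraction. A toy sketch of that idea on scalar intensities, using a plain Lloyd's-algorithm K-means (the study presumably used a standard library implementation on richer voxel features):

```python
def kmeans_1d(values, k, iters=50):
    """Plain Lloyd's algorithm on scalar intensities: assign each voxel to
    the nearest center, then recompute centers, repeating to convergence."""
    vs = sorted(values)
    # spread initial centers across the sorted intensity range
    centers = [vs[i * (len(vs) - 1) // (k - 1)] for i in range(k)]
    labels = []
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: (v - centers[c]) ** 2)
                  for v in values]
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels, centers

# Toy "voxel intensities" with two clearly separated sub-regions.
labels, centers = kmeans_1d([10, 11, 12, 50, 51, 52], k=2)
```

Each cluster label then defines one tumor sub-region from which radiomics features are extracted separately.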

MRI-based Radiomics for Predicting Prostate Cancer Grade Groups: A Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies.

Lomer NB, Ashoobi MA, Ahmadzadeh AM, Sotoudeh H, Tabari A, Torigian DA

PubMed · Jun 1, 2025
Prostate cancer (PCa) is the second most common cancer among men and a leading cause of cancer-related mortality. Radiomics has shown promising performance in the classification of PCa grade group (GG) in several studies. Here, we aimed to systematically review and meta-analyze the performance of radiomics in predicting GG in PCa. Adhering to PRISMA-DTA guidelines, we included studies employing magnetic resonance imaging-derived radiomics for predicting GG, with histopathologic evaluation as the reference standard. Databases searched included Web of Science, PubMed, Scopus, and Embase. The Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) and METhodological RadiomICs Score (METRICS) tools were used for quality assessment. Pooled estimates for sensitivity, specificity, likelihood ratios, diagnostic odds ratio, and area under the curve (AUC) were calculated. Cochran's Q and I-squared tests assessed heterogeneity, while meta-regression, subgroup analysis, and sensitivity analysis addressed potential sources. Publication bias was evaluated using Deeks' funnel plot, while clinical applicability was assessed with Fagan nomograms and likelihood ratio scattergrams. Data were extracted from 43 studies involving 9983 patients. Radiomics models demonstrated high accuracy in predicting GG. Patient-based analyses yielded AUCs of 0.93 for GG ≥ 2, 0.91 for GG ≥ 3, and 0.93 for GG ≥ 4. Lesion-based analyses showed AUCs of 0.84 for GG ≥ 2 and 0.89 for GG ≥ 3. Significant heterogeneity was observed, and meta-regression identified its sources. Radiomics models showed moderate power to exclude and confirm GG. Radiomics appears to be an accurate noninvasive tool for predicting PCa GG. It improves the performance of standard diagnostic methods, enhancing clinical decision-making.
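Heterogeneity in such meta-analyses is commonly quantified with Higgins' I² derived from Cochran's Q. A minimal sketch of that arithmetic (the Q value below is made up for illustration; the review does not report it here):

```python
def i_squared(q, df):
    """Higgins' I^2 from Cochran's Q: the share of total variability
    attributable to between-study heterogeneity rather than chance.
    Negative values are truncated to zero by convention."""
    return 0.0 if q <= df or q == 0 else 100.0 * (q - df) / q

# Hypothetical example: Q = 120 over 42 degrees of freedom (43 studies)
# would indicate substantial heterogeneity.
example = i_squared(120, 42)
```

Values above roughly 50% are conventionally read as substantial heterogeneity, motivating the meta-regression and subgroup analyses mentioned above.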

Adaptive Breast MRI Scanning Using AI.

Eskreis-Winkler S, Bhowmik A, Kelly LH, Lo Gullo R, D'Alessio D, Belen K, Hogan MP, Saphier NB, Sevilimedu V, Sung JS, Comstock CE, Sutton EJ, Pinker K

PubMed · Jun 1, 2025
Background MRI protocols typically involve many imaging sequences and often require too much time. Purpose To simulate artificial intelligence (AI)-directed stratified scanning for screening breast MRI with various triage thresholds and evaluate its diagnostic performance against that of the full breast MRI protocol. Materials and Methods This retrospective reader study included consecutive contrast-enhanced screening breast MRI examinations performed between January 2013 and January 2019 at three regional cancer sites. In this simulation study, an in-house AI tool generated a suspicion score for subtraction maximum intensity projection images during a given MRI examination, and the score was used to determine whether to proceed with the full MRI protocol or end the examination early (abbreviated breast MRI [AB-MRI] protocol). Examinations with suspicion scores under the 50th percentile were read using both the AB-MRI protocol (ie, dynamic contrast-enhanced MRI scans only) and the full MRI protocol. Diagnostic performance metrics for screening with various AI triage thresholds were compared with those for screening without AI triage. Results Of 863 women (mean age, 52 years ± 10 [SD]; 1423 MRI examinations), 51 received a cancer diagnosis within 12 months of screening. 
The diagnostic performance metrics for AI-directed stratified scanning that triaged 50% of examinations to AB-MRI versus full MRI protocol scanning were as follows: sensitivity, 88.2% (45 of 51; 95% CI: 79.4, 97.1) versus 86.3% (44 of 51; 95% CI: 76.8, 95.7); specificity, 80.8% (1108 of 1372; 95% CI: 78.7, 82.8) versus 81.4% (1117 of 1372; 95% CI: 79.4, 83.5); positive predictive value 3 (ie, percent of biopsies yielding cancer), 23.6% (43 of 182; 95% CI: 17.5, 29.8) versus 24.7% (42 of 170; 95% CI: 18.2, 31.2); cancer detection rate (per 1000 examinations), 31.6 (95% CI: 22.5, 40.7) versus 30.9 (95% CI: 21.9, 39.9); and interval cancer rate (per 1000 examinations), 4.2 (95% CI: 0.9, 7.6) versus 4.9 (95% CI: 1.3, 8.6). Specificity decreased by no more than 2.7 percentage points with AI triage. There were no AI-triaged examinations for which conducting the full MRI protocol would have resulted in additional cancer detection. Conclusion AI-directed stratified MRI decreased simulated scan times while maintaining diagnostic performance. © RSNA, 2025. Supplemental material is available for this article. See also the editorial by Strand in this issue.
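The triage logic can be sketched as routing exams whose AI suspicion score falls below a percentile cutoff to the abbreviated protocol. A toy simulation with made-up scores, plus the sensitivity implied by the counts reported above:

```python
def simulate_triage(scores, percentile):
    """Route exams whose AI suspicion score falls at or below the given
    percentile to the abbreviated protocol (AB-MRI); the rest continue
    to the full protocol. Simplified cutoff logic for illustration."""
    cutoff = sorted(scores)[max(0, len(scores) * percentile // 100 - 1)]
    return ["AB-MRI" if s <= cutoff else "full" for s in scores]

def sensitivity(tp, fn):
    return tp / (tp + fn)

# Made-up suspicion scores; the study's reported counts give 45 of 51
# cancers detected under 50% triage.
routes = simulate_triage([0.1, 0.2, 0.3, 0.8, 0.9, 0.95], percentile=50)
sens = sensitivity(45, 6)
```

Raising the percentile threshold sends more exams to AB-MRI, trading scanner time against the risk of ending an examination that the full protocol would have flagged.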

Development and External Validation of a Detection Model to Retrospectively Identify Patients With Acute Respiratory Distress Syndrome.

Levy E, Claar D, Co I, Fuchs BD, Ginestra J, Kohn R, McSparron JI, Patel B, Weissman GE, Kerlin MP, Sjoding MW

PubMed · Jun 1, 2025
The aim of this study was to develop and externally validate a machine-learning model that retrospectively identifies patients with acute respiratory distress syndrome (ARDS) using electronic health record (EHR) data. In this retrospective cohort study, ARDS was identified via physician adjudication in three cohorts of patients with hypoxemic respiratory failure (training, internal validation, and external validation). Machine-learning models were trained to classify ARDS using vital signs, respiratory support, laboratory data, medications, chest radiology reports, and clinical notes. The best-performing models were assessed and internally and externally validated using the area under the receiver operating characteristic curve (AUROC), area under the precision-recall curve, integrated calibration index (ICI), sensitivity, specificity, positive predictive value (PPV), and ARDS timing. The setting comprised patients with hypoxemic respiratory failure undergoing mechanical ventilation within two distinct health systems; there were no interventions. There were 1,845 patients in the training cohort, 556 in the internal validation cohort, and 199 in the external validation cohort. ARDS prevalence was 19%, 17%, and 31%, respectively. Regularized logistic regression models analyzing structured data (EHR model) and structured data plus radiology reports (EHR-radiology model) had the best performance. During internal and external validation, the EHR-radiology model had an AUROC of 0.91 (95% CI, 0.88-0.93) and 0.88 (95% CI, 0.87-0.93), respectively. Externally, the ICI was 0.13 (95% CI, 0.08-0.18). At a specified model threshold, sensitivity and specificity were 80% (95% CI, 75%-98%), PPV was 64% (95% CI, 58%-71%), and the model identified patients a median of 2.2 hours (interquartile range, 0.2-18.6) after they met Berlin ARDS criteria. Machine-learning models analyzing EHR data can retrospectively identify patients with ARDS across different institutions.
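The AUROC used to assess these models can be computed without plotting any curve, via its rank-sum interpretation. A minimal, dependency-free sketch:

```python
def auroc(labels, scores):
    """AUROC via its rank-sum (Mann-Whitney) interpretation: the probability
    that a randomly chosen positive outranks a randomly chosen negative,
    counting ties as half a win."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The O(P·N) pairwise loop is fine for a sketch; production code would sort once and use ranks.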

Automated Coronary Artery Segmentation with 3D PSPNET using Global Processing and Patch Based Methods on CCTA Images.

Chachadi K, Nirmala SR, Netrakar PG

PubMed · Jun 1, 2025
Coronary artery disease (CAD) has become a major cause of death worldwide in recent years. Accurate segmentation of the coronary arteries is important in the clinical diagnosis and treatment of CAD, including stenosis detection and plaque analysis. Deep learning (DL) techniques have been shown to assist medical experts in diagnosing diseases from biomedical imaging, and many methods employ 2D DL models for medical image segmentation. The 2D Pyramid Scene Parsing Network (PSPNet) has potential in this domain but has not been explored for segmenting coronary arteries from 3D Coronary Computed Tomography Angiography (CCTA) images. The contribution of the present work is a modification of the 2D PSPNet into a 3D PSPNet for segmenting the coronary arteries from 3D CCTA images. Network performance is evaluated under two schemes: global processing of the whole volume and patch-based processing. The experiments achieved a Dice Similarity Coefficient (DSC) of 0.76 for the global processing method and 0.73 for the patch-based method on a subset of 200 images from the ImageCAS dataset.
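The Dice Similarity Coefficient used to score these segmentations is straightforward to compute from two binary masks; a minimal sketch on flattened masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two flattened binary masks:
    twice the overlap divided by the total foreground volume.
    Two empty masks are treated as a perfect match."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```

A DSC of 0.76 thus means the predicted and reference coronary trees overlap on roughly three quarters of their combined foreground voxels.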

Structural and metabolic topological alterations associated with butylphthalide treatment in mild cognitive impairment: Data from a randomized, double-blind, placebo-controlled trial.

Han X, Gong S, Gong J, Wang P, Li R, Chen R, Xu C, Sun W, Li S, Chen Y, Yang Y, Luan H, Wen B, Guo J, Lv S, Wei C

PubMed · Jun 1, 2025
Effective intervention for mild cognitive impairment (MCI) is key for preventing dementia. As a neuroprotective agent, butylphthalide has the potential to treat MCI due to Alzheimer disease (AD). However, the pharmacological mechanism of butylphthalide from a brain-network perspective is not clear. Therefore, we aimed to investigate the multimodal brain network changes associated with butylphthalide treatment in MCI due to AD. A total of 270 patients with MCI due to AD received either butylphthalide or placebo at a ratio of 1:1 for 1 year. Effective treatment was defined as a decrease of more than 2.5 points on the Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-cog). Brain networks were constructed using T1-weighted magnetic resonance imaging and fluorodeoxyglucose positron emission tomography. A support vector machine was applied to develop predictive models. Both treatment (drug vs. placebo)-by-time interactions and efficacy (effective vs. ineffective)-by-time interactions were detected on some overlapping structural network metrics. Simple-effects analyses revealed significantly increased global efficiency in the structural network under both treatment and effective treatment with butylphthalide. Among the overlapping metrics, an increased degree centrality of the left paracentral lobule was significantly related to poorer cognitive improvement. A predictive model based on baseline multimodal network metrics exhibited high accuracy (88.93%) in predicting butylphthalide's efficacy. Butylphthalide may restore abnormal organization in the structural networks of patients with MCI due to AD, and baseline network metrics could serve as predictive markers of its therapeutic efficacy. This study was registered in the Chinese Clinical Trial Registry (Registration Number: ChiCTR1800018362; Registration Date: 2018-09-13).
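Global efficiency, the structural-network metric highlighted above, is the average inverse shortest-path length over all node pairs. A minimal sketch on a toy unweighted graph (the study's brain networks are weighted and far larger):

```python
from collections import deque

def global_efficiency(adj):
    """Average inverse shortest-path length over all ordered node pairs
    of an unweighted graph: 1.0 for a fully connected graph, lower for
    sparser ones. Uses breadth-first search from every node."""
    n = len(adj)
    total = 0.0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != src)
    return total / (n * (n - 1))

# Fully connected triangle: every pair at distance 1.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
```

An increase in this quantity after treatment is what the abstract means by improved global efficiency of the structural network.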

Predicting hemorrhagic transformation in acute ischemic stroke: a systematic review, meta-analysis, and methodological quality assessment of CT/MRI-based deep learning and radiomics models.

Salimi M, Vadipour P, Bahadori AR, Houshi S, Mirshamsi A, Fatemian H

PubMed · Jun 1, 2025
Acute ischemic stroke (AIS) is a major cause of mortality and morbidity, with hemorrhagic transformation (HT) as a severe complication. Accurate prediction of HT is essential for optimizing treatment strategies. This review assesses the accuracy and utility of deep learning (DL) and radiomics models for predicting HT from imaging, with a view to informing clinical decision-making for AIS patients. A literature search was conducted across five databases (PubMed, Scopus, Web of Science, Embase, and IEEE) up to January 23, 2025. Studies involving DL or radiomics-based ML models for predicting HT in AIS patients were included. Data from training, validation, and clinical-combined models were extracted and analyzed separately. Pooled sensitivity, specificity, and AUC were calculated with a random-effects bivariate model. For quality assessment, the Methodological Radiomics Score (METRICS) and the QUADAS-2 tool were used. Sixteen studies comprising 3,083 participants were included in the meta-analysis. The pooled AUC for training cohorts was 0.87, with sensitivity 0.80 and specificity 0.85. For validation cohorts, the AUC was 0.87, sensitivity 0.81, and specificity 0.86. Clinical-combined models showed an AUC of 0.93, sensitivity 0.84, and specificity 0.89. Moderate to severe heterogeneity was noted and addressed. Deep learning models outperformed radiomics models, while clinical-combined models outperformed deep learning-only and radiomics-only models. The average METRICS score was 62.85%. No publication bias was detected. DL and radiomics models show great potential for predicting HT in AIS patients. However, addressing methodological issues, such as inconsistent reference standards and limited external validation, is essential for the clinical implementation of these models.
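The pooled sensitivity and specificity translate directly into likelihood ratios and Fagan-style post-test probabilities. A minimal sketch using the validation-cohort estimates above (the pretest probability below is an arbitrary illustration, not a figure from the review):

```python
def likelihood_ratios(sens, spec):
    """Positive and negative likelihood ratios from sensitivity/specificity."""
    return sens / (1.0 - spec), (1.0 - sens) / spec

def post_test_probability(pretest, lr):
    """Fagan-nomogram arithmetic: convert probability to odds,
    apply the likelihood ratio, convert back."""
    odds = pretest / (1.0 - pretest) * lr
    return odds / (1.0 + odds)

# Pooled validation estimates reported above: sensitivity 0.81, specificity 0.86.
lr_pos, lr_neg = likelihood_ratios(0.81, 0.86)
```

A positive result roughly quintuples the odds of HT; a negative result cuts them to about a fifth, which is the "moderate power to exclude and confirm" framing used for such models.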

TDSF-Net: Tensor Decomposition-Based Subspace Fusion Network for Multimodal Medical Image Classification.

Zhang Y, Xu G, Zhao M, Wang H, Shi F, Chen S

PubMed · Jun 1, 2025
Data from multiple modalities bring complementary information to deep learning-based medical image classification models. However, fusion methods that simply concatenate features or images barely consider the correlations or complementarities among different modalities, and easily suffer from exponential growth in dimensionality and computational complexity as the number of modalities increases. Consequently, this article proposes a subspace fusion network with tensor decomposition (TD) to improve multimodal medical image classification. We first introduce a Tucker low-rank TD module to map the high-dimensional tensor to a low-rank subspace, reducing the redundancy caused by multimodal data and high-dimensional features. Then, a cross-tensor attention mechanism is utilized to fuse features from the subspace into a high-dimensional tensor, enhancing the representation ability of the extracted features and constructing interaction information among components in the subspace. Extensive comparison experiments with state-of-the-art (SOTA) methods are conducted on one self-established and three public multimodal medical image datasets, verifying the effectiveness and generalization ability of the proposed method. The code is available at https://github.com/1zhang-yi/TDSFNet.
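The Tucker decomposition at the heart of such a module maps a tensor into a low-rank subspace via one factor matrix per mode plus a small core tensor. A minimal truncated-HOSVD sketch in NumPy (a plain linear-algebra simplification, not the learned module described above):

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: bring axis `mode` to the front, flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def tucker_hosvd(tensor, ranks):
    """Truncated higher-order SVD: an orthonormal factor matrix per mode
    (leading left singular vectors of each unfolding), plus a small core
    tensor living in the low-rank subspace."""
    factors = [np.linalg.svd(unfold(tensor, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = tensor
    for mode, u in enumerate(factors):
        # project mode `mode` onto its r leading directions
        moved = np.moveaxis(core, mode, 0)
        core = np.moveaxis(np.tensordot(u.T, moved, axes=1), 0, mode)
    return core, factors

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 5, 4))
core, factors = tucker_hosvd(x, ranks=(3, 3, 2))
```

The core here has 3 x 3 x 2 = 18 entries instead of the original 120, which is the dimensionality reduction the abstract attributes to the Tucker module.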

HResFormer: Hybrid Residual Transformer for Volumetric Medical Image Segmentation.

Ren S, Li X

PubMed · Jun 1, 2025
Vision Transformers show great promise in medical image segmentation due to their ability to learn long-range dependencies. For medical image segmentation from 3-D data, such as computed tomography (CT), existing methods can be broadly classified into 2-D-based and 3-D-based methods. A key limitation of 2-D-based methods is that inter-slice information is ignored, while the limitation of 3-D-based methods is their high computational cost and memory consumption, resulting in limited feature representation of inner-slice information. During clinical examination, radiologists primarily use the axial plane and then routinely review both axial and coronal planes to form a 3-D understanding of the anatomy. Motivated by this fact, our key insight is to design a hybrid model that first learns fine-grained inner-slice information and then generates a 3-D understanding of the anatomy by incorporating 3-D information. We present a novel Hybrid Residual TransFormer (HResFormer) for 3-D medical image segmentation. Building upon standard 2-D and 3-D Transformer backbones, HResFormer involves two novel key designs: 1) a Hybrid Local-Global fusion Module (HLGM) to effectively and adaptively fuse inner-slice information from the 2-D Transformer and inter-slice information from 3-D volumes for the 3-D Transformer, combining local fine-grained and global long-range representation, and 2) residual learning of the hybrid model, which can effectively leverage inner-slice and inter-slice information for a better 3-D understanding of the anatomy. Experiments show that HResFormer outperforms prior art on widely used medical image segmentation benchmarks. This article sheds light on an important but neglected way to design Transformers for 3-D medical image segmentation.
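The hybrid residual design can be caricatured as a per-slice 2-D stage whose stacked output a 3-D stage refines through a residual connection. A toy NumPy sketch with stand-in functions (these are illustrative placeholders, not the actual HResFormer modules):

```python
import numpy as np

def fuse_residual(volume, stage_2d, stage_3d):
    """Hybrid sketch: a 2-D stage runs slice by slice for fine in-slice
    detail, then a 3-D stage refines the stacked result via a residual
    connection (out = x + F(x))."""
    slice_feats = np.stack([stage_2d(volume[z]) for z in range(volume.shape[0])])
    return slice_feats + stage_3d(slice_feats)

# Toy stand-ins for the two stages.
center2d = lambda s: s - s.mean()   # per-slice "2-D" processing
refine3d = lambda v: 0.1 * v        # volumetric "3-D" refinement
vol = np.arange(24, dtype=float).reshape(2, 3, 4)
out = fuse_residual(vol, center2d, refine3d)
```

The residual form means the 3-D stage only has to learn a correction on top of the 2-D features, rather than re-deriving them from the raw volume.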