Page 96 of 1311301 results

Computed Tomography Radiomics-based Combined Model for Predicting Thymoma Risk Subgroups: A Multicenter Retrospective Study.

Liu Y, Luo C, Wu Y, Zhou S, Ruan G, Li H, Chen W, Lin Y, Liu L, Quan T, He X

PubMed · Jun 1, 2025
Accurately distinguishing histological subtypes and risk categories of thymomas is difficult. To differentiate the histologic risk categories of thymomas, we developed a combined model based on non-enhanced and contrast-enhanced computed tomography (CT) radiomics, clinical, and semantic features. In total, 360 patients with pathologically confirmed thymomas who underwent CT examinations were retrospectively recruited from three centers. Patients were classified using improved pathological classification criteria as low-risk (LRT: types A and AB) or high-risk (HRT: types B1, B2, and B3). The training and external validation sets comprised 274 (from centers 1 and 2) and 86 (center 3) patients, respectively. A clinical-semantic model was built using clinical and semantic variables. Radiomics features were filtered using intraclass correlation coefficients, correlation analysis, and univariate logistic regression. An optimal radiomics model (Rad_score) was constructed using an AutoML algorithm, and a combined model was constructed by integrating the Rad_score with clinical and semantic features. The predictive and clinical performances of the models were evaluated using receiver operating characteristic/calibration curve analyses and decision-curve analysis, respectively. The radiomics and combined models (area under the curve: training set, 0.867 and 0.884; external validation set, 0.792 and 0.766, respectively) outperformed the clinical-semantic model. The combined model had higher accuracy than the radiomics model (0.79 vs. 0.78, p<0.001) in the entire cohort. The original_firstorder_median feature of the venous phase had the highest relative importance in the radiomics model. Radiomics and combined radiomics models may serve as noninvasive tools to differentiate thymoma risk classifications.
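The feature-filtering pipeline described above (intraclass correlation, correlation analysis, univariate logistic regression) can be sketched in a few lines. The following is a minimal illustration of the last two stages only; the thresholds, synthetic data, and likelihood-ratio formulation of the univariate test are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def logit_nll(beta, X, y):
    """Negative log-likelihood of a logistic model."""
    z = X @ beta
    return np.sum(np.logaddexp(0.0, z) - y * z)

def univariate_lr_pvalue(x, y):
    """Likelihood-ratio test of one feature against the intercept-only model."""
    X = np.column_stack([np.ones_like(x), x])
    fit = minimize(logit_nll, np.zeros(2), args=(X, y), method="BFGS")
    p0 = y.mean()
    nll_null = -np.sum(y * np.log(p0) + (1 - y) * np.log(1 - p0))
    stat = 2.0 * (nll_null - fit.fun)
    return chi2.sf(max(stat, 0.0), df=1)

def select_features(X, y, corr_thresh=0.9, alpha=0.05):
    """Drop one feature of each highly correlated pair, then keep
    features that are univariately significant."""
    n_feat = X.shape[1]
    corr = np.corrcoef(X, rowvar=False)
    dropped = set()
    for i in range(n_feat):
        for j in range(i + 1, n_feat):
            if i not in dropped and j not in dropped and abs(corr[i, j]) > corr_thresh:
                dropped.add(j)  # keep the earlier feature of the pair
    kept = [i for i in range(n_feat) if i not in dropped]
    return [i for i in kept if univariate_lr_pvalue(X[:, i], y) < alpha]

# Demo on synthetic data: feature 0 predicts y, feature 1 duplicates it, feature 2 is noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200).astype(float)
x = y + rng.normal(0, 0.5, 200)
X = np.column_stack([x, x + rng.normal(0, 0.01, 200), rng.normal(0, 1.0, 200)])
print(select_features(X, y))  # feature 1 is pruned as a near-duplicate of feature 0
```

In practice the first stage (intraclass correlation across repeated segmentations) would run before this, and the logistic fits would come from a statistics package rather than a hand-rolled optimizer.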

Deep learning-based acceleration of high-resolution compressed sense MR imaging of the hip.

Marka AW, Meurer F, Twardy V, Graf M, Ebrahimi Ardjomand S, Weiss K, Makowski MR, Gersing AS, Karampinos DC, Neumann J, Woertler K, Banke IJ, Foreman SC

PubMed · Jun 1, 2025
To evaluate a Compressed Sense Artificial Intelligence framework (CSAI) incorporating parallel imaging, compressed sense (CS), and deep learning for high-resolution MRI of the hip, comparing it with standard-resolution CS imaging. Thirty-two patients with femoroacetabular impingement syndrome underwent 3 T MRI scans. Coronal and sagittal intermediate-weighted TSE sequences with fat saturation were acquired using CS (0.6 × 0.8 mm resolution) and CSAI (0.3 × 0.4 mm resolution) protocols in comparable acquisition times (7:49 vs. 8:07 minutes for both planes). Two readers systematically assessed the depiction of the acetabular and femoral cartilage (in five cartilage zones), labrum, ligamentum capitis femoris, and bone using a five-point Likert scale. Diagnostic confidence and abnormality detection were recorded and analyzed using the Wilcoxon signed-rank test. CSAI significantly improved cartilage depiction across most cartilage zones compared to CS. Overall Likert scores were 4.0 ± 0.2 (CS) vs. 4.2 ± 0.6 (CSAI) for reader 1 and 4.0 ± 0.2 (CS) vs. 4.3 ± 0.6 (CSAI) for reader 2 (p ≤ 0.001). Diagnostic confidence increased from 3.5 ± 0.7 and 3.9 ± 0.6 (CS) to 4.0 ± 0.6 and 4.1 ± 0.7 (CSAI) for readers 1 and 2, respectively (p ≤ 0.001). More cartilage lesions were detected with CSAI, with significant improvements in diagnostic confidence in certain cartilage zones, such as femoral zones C and D, for both readers. Labrum and ligamentum capitis femoris depiction remained similar, while bone depiction was rated lower. No abnormalities detected in CS were missed in CSAI. CSAI provides high-resolution hip MR images with enhanced cartilage depiction without extending acquisition times, potentially enabling more precise hip cartilage assessment.
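Paired reader scores like these are compared with the Wilcoxon signed-rank test. A minimal sketch with hypothetical Likert scores (not the study's data):

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired Likert scores for the same 20 exams read with both protocols.
cs   = np.array([4, 4, 3, 4, 4, 3, 4, 4, 4, 3, 4, 4, 3, 4, 4, 4, 4, 3, 4, 4])
csai = np.array([5, 5, 4, 5, 4, 4, 5, 5, 4, 4, 5, 5, 4, 5, 4, 5, 5, 4, 5, 4])

# Zero differences are dropped by default; ties force a normal approximation.
stat, p = wilcoxon(csai, cs)
print(f"W={stat}, p={p:.4f}")  # a consistent upward shift yields a small p
```

Because Likert data are ordinal and paired per exam, a rank-based paired test is the natural choice over a t test here.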

Predicting long-term patency of radiocephalic arteriovenous fistulas with machine learning and the PREDICT-AVF web app.

Fitzgibbon JJ, Ruan M, Heindel P, Appah-Sampong A, Dey T, Khan A, Hentschel DM, Ozaki CK, Hussain MA

PubMed · Jun 1, 2025
The goal of this study was to expand our previously created prediction tool (PREDICT-AVF) and web app by estimating long-term primary and secondary patency of radiocephalic AVFs. The data source was 911 patients from PATENCY-1 and PATENCY-2 randomized controlled trials, which enrolled patients undergoing new radiocephalic AVF creation with prospective longitudinal follow up and ultrasound measurements. Models were built using a combination of baseline characteristics and post-operative ultrasound measurements to estimate patency up to 2.5 years. Discrimination performance was assessed, and an interactive web app was created using the most robust model. At 2.5 years, the unadjusted primary and secondary patency (95% CI) was 29% (26-33%) and 68% (65-72%). Models using baseline characteristics generally did not perform as well as those using post-operative ultrasound measurements. Overall, the Cox model (4-6 weeks ultrasound) had the best discrimination performance for primary and secondary patency, with an integrated Brier score of 0.183 (0.167, 0.199) and 0.106 (0.085, 0.126). Expansion of the PREDICT-AVF web app to include prediction of long-term patency can help guide clinicians in developing comprehensive end-stage kidney disease Life-Plans with hemodialysis access patients.
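The integrated Brier score used above summarizes calibration and discrimination across follow-up time. A minimal sketch of the single-horizon Brier score, ignoring censoring for brevity (the study's version would account for censoring, typically via inverse-probability-of-censoring weights):

```python
import numpy as np

def brier_score(event_times, predicted_survival, horizon):
    """Mean squared error between the predicted survival probability at
    `horizon` and the observed survival indicator (censoring ignored here)."""
    observed = (event_times > horizon).astype(float)
    return float(np.mean((observed - predicted_survival) ** 2))

times = np.array([0.5, 1.0, 2.0, 3.0])           # hypothetical years to loss of patency
perfect = np.array([0.0, 0.0, 0.0, 1.0])         # predicted survival at 2.5 years
print(brier_score(times, perfect, horizon=2.5))  # → 0.0
print(brier_score(times, np.full(4, 0.5), 2.5))  # → 0.25
```

Lower is better: 0 is a perfect prediction, and 0.25 corresponds to an uninformative constant prediction of 0.5, which is why the reported 0.106 for secondary patency indicates a usefully calibrated model.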

Deep Learning-Enhanced Ultra-high-resolution CT Imaging for Superior Temporal Bone Visualization.

Brockstedt L, Grauhan NF, Kronfeld A, Mercado MAA, Döge J, Sanner A, Brockmann MA, Othman AE

PubMed · Jun 1, 2025
This study assesses the image quality of temporal bone ultra-high-resolution (UHR) computed tomography (CT) scans in adults and children using hybrid iterative reconstruction (HIR) and a novel, vendor-specific deep learning-based reconstruction (DLR) algorithm called AiCE Inner Ear. In a retrospective, single-center study (February 1-July 30, 2023), UHR-CT scans of 57 temporal bones of 35 patients (5 children, 23 male) with at least one anatomically unremarkable temporal bone were included. Scans were acquired using an adult protocol (computed tomography dose index volume [CTDIvol] 25.6 mGy) or a pediatric protocol (15.3 mGy). Images were reconstructed using HIR at normal resolution (0.5-mm slice thickness, 512² matrix) and UHR (0.25-mm, 1024² and 2048² matrix) as well as with a vendor-specific DLR advanced intelligent clear-IQ engine inner ear (AiCE Inner Ear) at UHR (0.25-mm, 1024² matrix). Three radiologists evaluated 18 anatomic structures using a 5-point Likert scale. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were measured automatically. In the adult protocol subgroup (n=30; median age: 51 [11-89]; 19 men) and the pediatric protocol subgroup (n=5; median age: 2 [1-3]; 4 men), UHR-CT with DLR significantly improved subjective image quality (p<0.024), reduced noise (p<0.001), and increased CNR and SNR (p<0.001). DLR also enhanced visualization of key structures, including the tendon of the stapedius muscle (p<0.001), tympanic membrane (p<0.009), and basal aspect of the osseous spiral lamina (p<0.018). Vendor-specific DLR-enhanced UHR-CT significantly improves temporal bone image quality and diagnostic performance.
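SNR and CNR were measured automatically; one common ROI-based convention (an assumption here, not taken from the paper) divides the mean signal, or the contrast between two tissues, by the standard deviation of a background ROI:

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean signal over the SD of a background ROI."""
    return float(np.mean(signal_roi) / np.std(noise_roi, ddof=1))

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissues, normalized by background noise."""
    return float(abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi, ddof=1))

bone = np.array([1900.0, 1950.0, 2050.0, 2100.0])    # HU, hypothetical otic capsule ROI
air  = np.array([-995.0, -1005.0, -1000.0, -1000.0])
background = np.array([-2.0, 2.0, -2.0, 2.0])

print(snr(bone, background))
print(cnr(bone, air, background))
```

Denoising raises both metrics by shrinking the denominator, which is why the DLR reconstructions score higher at matched dose.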

CT-Based Deep Learning Predicts Prognosis in Esophageal Squamous Cell Cancer Patients Receiving Immunotherapy Combined with Chemotherapy.

Huang X, Huang Y, Li P, Xu K

PubMed · Jun 1, 2025
Immunotherapy combined with chemotherapy has improved outcomes for some esophageal squamous cell carcinoma (ESCC) patients, but accurate pre-treatment risk stratification remains a critical gap. This study developed a deep learning (DL) model to predict survival outcomes in ESCC patients receiving immunotherapy combined with chemotherapy. Retrospective data from 482 patients across three institutions were split into training (N=322), internal test (N=79), and external test (N=81) sets. Unenhanced computed tomography (CT) scans were processed to analyze tumor and peritumoral regions. The model evaluated multiple input configurations: original tumor regions of interest (ROIs), ROI subregions, and ROIs expanded by 1 and 3 pixels. Performance was assessed using Harrell's C-index and receiver operating characteristic (ROC) curves. A multimodal model combined DL-derived risk scores with five key clinical and laboratory features. The Shapley Additive Explanations (SHAP) method elucidated the contribution of individual features to model predictions. The DL model with 1-pixel peritumoral expansion achieved the best accuracy, yielding a C-index of 0.75 for the internal test set and 0.60 for the external test set. The hazard ratio for high-risk patients was 1.82 (95% CI: 1.19-2.46; P=0.02) in the internal test set. The multimodal model achieved C-indices of 0.74 and 0.61 for the internal and external test sets, respectively. Kaplan-Meier analysis revealed significant survival differences between high- and low-risk groups (P<0.05). SHAP analysis identified tumor response, risk score, and age as critical contributors to predictions. This DL model demonstrates efficacy in stratifying ESCC patients by survival risk, particularly when integrating peritumoral imaging and clinical features.
The model could serve as a valuable pre-treatment tool to facilitate the implementation of personalized treatment strategies for ESCC patients undergoing immunotherapy and chemotherapy.
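Harrell's C-index, used above to assess discrimination, can be computed directly from pairwise comparisons. A minimal right-censoring-aware sketch with hypothetical data:

```python
import numpy as np

def harrell_c(times, events, risk_scores):
    """Fraction of comparable pairs whose predicted risk ordering matches the
    observed survival ordering. A pair (i, j) is comparable when i has the
    earlier time and an observed event (not a censoring)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

times  = np.array([5.0, 8.0, 12.0, 20.0])   # hypothetical months of follow-up
events = np.array([1, 1, 0, 1])             # 0 = censored
risk   = np.array([0.9, 0.7, 0.5, 0.1])     # higher score = higher predicted risk
print(harrell_c(times, events, risk))        # → 1.0
```

A C-index of 0.5 is chance-level ordering and 1.0 is perfect, which puts the reported 0.75 (internal) and 0.60 (external) in context.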

ChatGPT-4o's Performance in Brain Tumor Diagnosis and MRI Findings: A Comparative Analysis with Radiologists.

Ozenbas C, Engin D, Altinok T, Akcay E, Aktas U, Tabanli A

PubMed · Jun 1, 2025
To evaluate the accuracy of ChatGPT-4o in identifying magnetic resonance imaging (MRI) findings and diagnosing brain tumors by comparing its performance with that of experienced radiologists. This retrospective study included 46 patients with pathologically confirmed brain tumors who underwent preoperative MRI between January 2021 and October 2024. Two experienced radiologists and ChatGPT-4o independently evaluated the anonymized MRI images, answering eight questions focusing on MRI sequences, lesion characteristics, and diagnoses. ChatGPT-4o's responses were compared with those of the radiologists and the pathology outcomes. Statistical analyses included accuracy, sensitivity, specificity, and the McNemar test, with p<0.05 considered statistically significant. ChatGPT-4o successfully identified 44 of the 46 (95.7%) lesions; it achieved 88.3% accuracy in identifying MRI sequences, 81% in perilesional edema, 79.5% in signal characteristics, and 82.2% in contrast enhancement. However, its accuracy in localizing lesions was 53.6%, and its accuracy in distinguishing extra-axial from intra-axial lesions was 26.3%. ChatGPT-4o achieved success rates of 56.8% and 29.5% for differential and most-likely diagnoses, respectively, compared with 93.2%-90.9% and 70.5%-65.9% for the two radiologists (p<0.005). ChatGPT-4o demonstrated high accuracy in identifying certain MRI features but underperformed the radiologists in diagnostic tasks. Despite its current limitations, future updates and advancements may enable large language models to facilitate diagnosis and offer a reliable second opinion to radiologists.
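The accuracy statistics and the McNemar test used in this comparison are straightforward to compute; a minimal sketch with hypothetical counts (not the study's data):

```python
from scipy.stats import chi2

def mcnemar_p(b, c):
    """Continuity-corrected McNemar test on the two discordant cell counts:
    b = rater A correct / rater B wrong, c = the reverse."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    return chi2.sf(stat, df=1)

def sensitivity_specificity(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(tp=40, fn=6, tn=30, fp=4)
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")
print(f"McNemar p={mcnemar_p(20, 5):.4f}")  # clearly asymmetric disagreement
```

McNemar's test is the right paired comparison here because the model and the radiologists graded the same cases, so only the discordant pairs carry information.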

Evaluation of a Deep Learning Denoising Algorithm for Dose Reduction in Whole-Body Photon-Counting CT Imaging: A Cadaveric Study.

Dehdab R, Brendel JM, Streich S, Ladurner R, Stenzl B, Mueck J, Gassenmaier S, Krumm P, Werner S, Herrmann J, Nikolaou K, Afat S, Brendlin A

PubMed · Jun 1, 2025
Photon Counting CT (PCCT) offers advanced imaging capabilities with potential for substantial radiation dose reduction; however, achieving this without compromising image quality remains a challenge due to increased noise at lower doses. This study aims to evaluate the effectiveness of a deep learning (DL)-based denoising algorithm in maintaining diagnostic image quality in whole-body PCCT imaging at reduced radiation levels, using real intraindividual cadaveric scans. Twenty-four cadaveric human bodies underwent whole-body CT scans on a PCCT scanner (NAEOTOM Alpha, Siemens Healthineers) at four different dose levels (100%, 50%, 25%, and 10% mAs). Each scan was reconstructed using both QIR level 2 and a DL algorithm (ClariCT.AI, ClariPi Inc.), resulting in 192 datasets. Objective image quality was assessed by measuring CT value stability, image noise, and contrast-to-noise ratio (CNR) across consistent regions of interest (ROIs) in the liver parenchyma. Two radiologists independently evaluated subjective image quality based on overall image clarity, sharpness, and contrast. Inter-rater agreement was determined using Spearman's correlation coefficient, and statistical analysis included mixed-effects modeling to assess objective and subjective image quality. Objective analysis showed that the DL denoising algorithm did not significantly alter CT values (p ≥ 0.9975). Noise levels were consistently lower in denoised datasets compared to the original reconstructions (p < 0.0001). No significant differences were observed between the 25% mAs denoised and the 100% mAs original datasets in terms of noise and CNR (p ≥ 0.7870). Subjective analysis revealed strong inter-rater agreement (r ≥ 0.78), with the 50% mAs denoised datasets rated superior to the 100% mAs original datasets (p < 0.0001) and no significant differences detected between the 25% mAs denoised and 100% mAs original datasets (p ≥ 0.9436).
The DL denoising algorithm maintains image quality in PCCT imaging while enabling up to a 75% reduction in radiation dose. This approach offers a promising method for reducing radiation exposure in clinical PCCT without compromising diagnostic quality.
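The inter-rater agreement reported above uses Spearman's rank correlation; a minimal sketch with made-up ratings (not the study's data):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical image-quality ratings from two readers for ten datasets.
reader1 = np.array([3, 4, 4, 5, 2, 3, 5, 4, 3, 4])
reader2 = np.array([3, 4, 5, 5, 2, 3, 4, 4, 3, 5])

rho, p = spearmanr(reader1, reader2)
print(f"rho={rho:.2f}, p={p:.4f}")
```

Spearman's rho works on ranks, so it tolerates the ordinal, non-normal nature of subjective quality scales better than Pearson's r.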

Deep Learning Radiomics Nomogram Based on MRI for Differentiating between Borderline Ovarian Tumors and Stage I Ovarian Cancer: A Multicenter Study.

Wang X, Quan T, Chu X, Gao M, Zhang Y, Chen Y, Bai G, Chen S, Wei M

PubMed · Jun 1, 2025
To develop and validate a deep learning radiomics nomogram (DLRN) based on T2-weighted MRI to distinguish between borderline ovarian tumors (BOTs) and stage I epithelial ovarian cancer (EOC) preoperatively. This retrospective multicenter study enrolled 279 patients from three centers, divided into a training set (n = 207) and an external test set (n = 72). The intra- and peritumoral radiomics analysis was employed to develop a combined radiomics model. A deep learning model was constructed based on the largest orthogonal slices of the tumor volume, and a clinical model was constructed using independent clinical predictors. The DLRN was then constructed by integrating deep learning, intra- and peritumoral radiomics, and clinical predictors. For comparison, an original radiomics model based solely on tumor volume (excluding the peritumoral area) was also constructed. All models were validated through 10-fold cross-validation and external testing, and their predictive performance was evaluated by the area under the receiver operating characteristic curve (AUC). The DLRN demonstrated superior performance across the 10-fold cross-validation, with the highest AUC of 0.825±0.082. On the external test set, the DLRN significantly outperformed the clinical model and the original radiomics model (AUC = 0.819 vs. 0.708 and 0.670, P = 0.047 and 0.015, respectively). Furthermore, the combined radiomics model performed significantly better than the original radiomics model (AUC = 0.778 vs. 0.670, P = 0.043). The DLRN exhibited promising performance in distinguishing BOTs from stage I EOC preoperatively, thus potentially assisting clinical decision-making.
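The AUC values reported throughout can be computed from ranks alone, via the Mann-Whitney formulation; a minimal sketch with hypothetical scores:

```python
import numpy as np
from scipy.stats import rankdata

def auc_from_scores(scores, labels):
    """AUC = probability that a random positive case outranks a random negative one."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    ranks = rankdata(scores)                  # average ranks handle ties
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    rank_sum_pos = ranks[labels == 1].sum()
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

scores = [0.6, 0.2, 0.4, 0.1]   # hypothetical model outputs
labels = [1, 1, 0, 0]           # 1 = stage I EOC, 0 = BOT (illustrative)
print(auc_from_scores(scores, labels))  # → 0.75
```

This rank formulation makes explicit why AUC is invariant to any monotone rescaling of the model's output scores.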

Integration of Deep Learning and Sub-regional Radiomics Improves the Prediction of Pathological Complete Response to Neoadjuvant Chemoradiotherapy in Locally Advanced Rectal Cancer Patients.

Wu X, Wang J, Chen C, Cai W, Guo Y, Guo K, Chen Y, Shi Y, Chen J, Lin X, Jiang X

PubMed · Jun 1, 2025
The precise prediction of response to neoadjuvant chemoradiotherapy is crucial for tailoring perioperative treatment in patients diagnosed with locally advanced rectal cancer (LARC). This retrospective study aims to develop and validate a model that integrates deep learning and sub-regional radiomics from MRI to predict pathological complete response (pCR) in patients with LARC. We retrospectively enrolled 768 eligible participants from three independent hospitals who had received neoadjuvant chemoradiotherapy followed by radical surgery. Pretreatment pelvic MRI scans (T2-weighted) were collected for annotation and feature extraction. The K-means approach was used to segment the tumor into sub-regions. Radiomics and deep learning features were extracted using PyRadiomics and a 3D ResNet50, respectively. The predictive models were developed using the radiomics, sub-regional radiomics, and deep learning features with machine learning algorithms in the training cohort, and then validated in the external test cohorts. The models' performance was assessed using various metrics, including the area under the curve (AUC), decision curve analysis, and Kaplan-Meier survival analysis. We constructed a combined model, named SRADL, which includes deep learning with sub-regional radiomics signatures, enabling precise prediction of pCR in LARC patients. SRADL had satisfactory performance for the prediction of pCR in the training cohort (AUC 0.925 [95% CI 0.894 to 0.948]), in test 1 (AUC 0.915 [95% CI 0.869 to 0.949]), and in test 2 (AUC 0.902 [95% CI 0.846 to 0.945]). By employing an optimal threshold of 0.486, the predicted pCR group had longer survival than the predicted non-pCR group across the three cohorts. SRADL also outperformed other single-modality prediction models.
The novel SRADL, which integrates deep learning with sub-regional signatures, showed high accuracy and robustness in predicting pCR to neoadjuvant chemoradiotherapy using pretreatment MRI images, making it a promising tool for the personalized management of LARC.
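The K-means sub-region step can be illustrated in one dimension with plain Lloyd iterations; the study's clustering presumably operates on richer per-voxel features, so this is only a sketch:

```python
import numpy as np

def kmeans_1d(values, k, iters=50):
    """Cluster voxel intensities into k sub-regions via Lloyd's algorithm."""
    values = np.asarray(values, float)
    centers = np.quantile(values, np.linspace(0.1, 0.9, k))  # spread initial centers
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        new_centers = np.array([values[labels == c].mean() if np.any(labels == c)
                                else centers[c] for c in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Hypothetical tumor voxels: a darker necrotic core and a brighter enhancing rim.
voxels = np.concatenate([np.full(60, 40.0), np.full(40, 180.0)])
labels, centers = kmeans_1d(voxels, k=2)
print(np.sort(centers))  # sub-region means recover the two intensity levels
```

Each resulting sub-region (label) would then be fed to the radiomics extractor separately, which is what distinguishes sub-regional radiomics from whole-tumor radiomics.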

Adaptive Breast MRI Scanning Using AI.

Eskreis-Winkler S, Bhowmik A, Kelly LH, Lo Gullo R, D'Alessio D, Belen K, Hogan MP, Saphier NB, Sevilimedu V, Sung JS, Comstock CE, Sutton EJ, Pinker K

PubMed · Jun 1, 2025
Background MRI protocols typically involve many imaging sequences and often require too much time. Purpose To simulate artificial intelligence (AI)-directed stratified scanning for screening breast MRI with various triage thresholds and evaluate its diagnostic performance against that of the full breast MRI protocol. Materials and Methods This retrospective reader study included consecutive contrast-enhanced screening breast MRI examinations performed between January 2013 and January 2019 at three regional cancer sites. In this simulation study, an in-house AI tool generated a suspicion score for subtraction maximum intensity projection images during a given MRI examination, and the score was used to determine whether to proceed with the full MRI protocol or end the examination early (abbreviated breast MRI [AB-MRI] protocol). Examinations with suspicion scores under the 50th percentile were read using both the AB-MRI protocol (ie, dynamic contrast-enhanced MRI scans only) and the full MRI protocol. Diagnostic performance metrics for screening with various AI triage thresholds were compared with those for screening without AI triage. Results Of 863 women (mean age, 52 years ± 10 [SD]; 1423 MRI examinations), 51 received a cancer diagnosis within 12 months of screening. 
The diagnostic performance metrics for AI-directed stratified scanning that triaged 50% of examinations to AB-MRI versus full MRI protocol scanning were as follows: sensitivity, 88.2% (45 of 51; 95% CI: 79.4, 97.1) versus 86.3% (44 of 51; 95% CI: 76.8, 95.7); specificity, 80.8% (1108 of 1372; 95% CI: 78.7, 82.8) versus 81.4% (1117 of 1372; 95% CI: 79.4, 83.5); positive predictive value 3 (ie, percent of biopsies yielding cancer), 23.6% (43 of 182; 95% CI: 17.5, 29.8) versus 24.7% (42 of 170; 95% CI: 18.2, 31.2); cancer detection rate (per 1000 examinations), 31.6 (95% CI: 22.5, 40.7) versus 30.9 (95% CI: 21.9, 39.9); and interval cancer rate (per 1000 examinations), 4.2 (95% CI: 0.9, 7.6) versus 4.9 (95% CI: 1.3, 8.6). Specificity decreased by no more than 2.7 percentage points with AI triage. There were no AI-triaged examinations for which conducting the full MRI protocol would have resulted in additional cancer detection. Conclusion AI-directed stratified MRI decreased simulated scan times while maintaining diagnostic performance. © RSNA, 2025 <i>Supplemental material is available for this article.</i> See also the editorial by Strand in this issue.
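The point estimates and confidence intervals above can be reproduced from the quoted counts; a minimal sketch using the normal-approximation (Wald) interval, which matches the sensitivity figures reported for the 50% triage threshold:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Point estimate and Wald 95% CI for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(p - half, 0.0), min(p + half, 1.0)

# Counts reported for AI-directed stratified scanning (45 of 51 cancers detected).
sens, lo, hi = proportion_ci(45, 51)
print(f"sensitivity {100*sens:.1f}% (95% CI: {100*lo:.1f}, {100*hi:.1f})")
# → sensitivity 88.2% (95% CI: 79.4, 97.1)

cdr = 1000 * 45 / 1423
print(f"cancer detection rate {cdr:.1f} per 1000 examinations")  # → 31.6
```

For proportions near 0 or 1 a Wilson or exact interval would be preferable, but the Wald form reproduces the intervals quoted in this abstract.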