Page 297 of 346 (3455 results)

Deep Learning-Enhanced Ultra-high-resolution CT Imaging for Superior Temporal Bone Visualization.

Brockstedt L, Grauhan NF, Kronfeld A, Mercado MAA, Döge J, Sanner A, Brockmann MA, Othman AE

PubMed · Jun 1, 2025
This study assesses the image quality of temporal bone ultra-high-resolution (UHR) computed tomography (CT) scans in adults and children using hybrid iterative reconstruction (HIR) and a novel, vendor-specific deep learning-based reconstruction (DLR) algorithm called AiCE Inner Ear. In a retrospective, single-center study (February 1 to July 30, 2023), UHR-CT scans of 57 temporal bones of 35 patients (5 children, 23 male) with at least one anatomically unremarkable temporal bone were included. Scans were acquired with an adult protocol (computed tomography dose index volume [CTDIvol] 25.6 mGy) or a pediatric protocol (15.3 mGy). Images were reconstructed using HIR at normal resolution (0.5-mm slice thickness, 512² matrix) and UHR (0.25-mm slice thickness, 1024² and 2048² matrices), as well as with the vendor-specific DLR algorithm (advanced intelligent clear-IQ engine inner ear, AiCE Inner Ear) at UHR (0.25-mm slice thickness, 1024² matrix). Three radiologists evaluated 18 anatomic structures using a 5-point Likert scale. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were measured automatically. In the adult protocol subgroup (n=30; median age: 51 years [range: 11-89]; 19 men) and the pediatric protocol subgroup (n=5; median age: 2 years [range: 1-3]; 4 boys), UHR-CT with DLR significantly improved subjective image quality (p<0.024), reduced noise (p<0.001), and increased CNR and SNR (p<0.001). DLR also enhanced visualization of key structures, including the tendon of the stapedius muscle (p<0.001), the tympanic membrane (p<0.009), and the basal aspect of the osseous spiral lamina (p<0.018). Vendor-specific DLR-enhanced UHR-CT significantly improves temporal bone image quality and diagnostic performance.
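The SNR and CNR figures in this abstract follow standard region-of-interest (ROI) definitions; the study's automated measurement pipeline is not described, so the following is only a minimal sketch with made-up HU samples:

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio: mean ROI signal over its standard deviation."""
    return roi.mean() / roi.std(ddof=1)

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio: absolute mean difference between two
    tissues, normalized by the noise standard deviation of a background ROI."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std(ddof=1)

# Hypothetical HU samples (synthetic, for illustration only)
rng = np.random.default_rng(0)
bone = rng.normal(1500, 30, size=500)
fluid = rng.normal(10, 30, size=500)
air = rng.normal(-990, 30, size=500)   # background/noise ROI

print(f"SNR(bone) = {snr(bone):.1f}, CNR(bone, fluid) = {cnr(bone, fluid, air):.1f}")
```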

Predicting long-term patency of radiocephalic arteriovenous fistulas with machine learning and the PREDICT-AVF web app.

Fitzgibbon JJ, Ruan M, Heindel P, Appah-Sampong A, Dey T, Khan A, Hentschel DM, Ozaki CK, Hussain MA

PubMed · Jun 1, 2025
The goal of this study was to expand our previously created prediction tool (PREDICT-AVF) and web app by estimating long-term primary and secondary patency of radiocephalic AVFs. The data source was 911 patients from the PATENCY-1 and PATENCY-2 randomized controlled trials, which enrolled patients undergoing new radiocephalic AVF creation with prospective longitudinal follow-up and ultrasound measurements. Models were built using a combination of baseline characteristics and post-operative ultrasound measurements to estimate patency up to 2.5 years. Discrimination performance was assessed, and an interactive web app was created using the most robust model. At 2.5 years, the unadjusted primary and secondary patency (95% CI) was 29% (26-33%) and 68% (65-72%), respectively. Models using baseline characteristics generally did not perform as well as those using post-operative ultrasound measurements. Overall, the Cox model (4-6 week ultrasound) had the best discrimination performance for primary and secondary patency, with integrated Brier scores of 0.183 (0.167, 0.199) and 0.106 (0.085, 0.126), respectively. Expansion of the PREDICT-AVF web app to include prediction of long-term patency can help guide clinicians in developing comprehensive end-stage kidney disease Life-Plans with hemodialysis access patients.
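The "unadjusted patency" figures above are survival-type estimates on censored follow-up data. As an illustration only (not the authors' code), a minimal Kaplan-Meier estimator on hypothetical follow-up times, where an event marks loss of patency:

```python
import numpy as np

def kaplan_meier(times, events, t):
    """Kaplan-Meier survival (patency) estimate at time t.
    times: follow-up in years; events: 1 = patency loss, 0 = censored."""
    order = np.argsort(times, kind="stable")
    times = np.asarray(times, dtype=float)[order]
    events = np.asarray(events)[order]
    surv, n = 1.0, len(times)
    for i, (ti, ei) in enumerate(zip(times, events)):
        if ti > t:
            break
        if ei:
            surv *= 1.0 - 1.0 / (n - i)   # one event among (n - i) at risk
    return surv

# Hypothetical cohort of 4 accesses: patency lost at 1.0 and 2.0 years
print(kaplan_meier([1.0, 1.5, 2.0, 3.0], [1, 0, 1, 0], t=2.5))  # → 0.375
```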

Deep learning-based acceleration of high-resolution compressed sense MR imaging of the hip.

Marka AW, Meurer F, Twardy V, Graf M, Ebrahimi Ardjomand S, Weiss K, Makowski MR, Gersing AS, Karampinos DC, Neumann J, Woertler K, Banke IJ, Foreman SC

PubMed · Jun 1, 2025
To evaluate a Compressed Sense Artificial Intelligence framework (CSAI) incorporating parallel imaging, compressed sense (CS), and deep learning for high-resolution MRI of the hip, comparing it with standard-resolution CS imaging. Thirty-two patients with femoroacetabular impingement syndrome underwent 3 T MRI scans. Coronal and sagittal intermediate-weighted TSE sequences with fat saturation were acquired using CS (0.6 × 0.8 mm resolution) and CSAI (0.3 × 0.4 mm resolution) protocols in comparable acquisition times (7:49 vs. 8:07 minutes for both planes). Two readers systematically assessed the depiction of the acetabular and femoral cartilage (in five cartilage zones), labrum, ligamentum capitis femoris, and bone using a five-point Likert scale. Diagnostic confidence and abnormality detection were recorded and analyzed using the Wilcoxon signed-rank test. CSAI significantly improved cartilage depiction across most cartilage zones compared to CS. Overall Likert scores were 4.0 ± 0.2 (CS) vs. 4.2 ± 0.6 (CSAI) for reader 1 and 4.0 ± 0.2 (CS) vs. 4.3 ± 0.6 (CSAI) for reader 2 (p ≤ 0.001). Diagnostic confidence increased from 3.5 ± 0.7 and 3.9 ± 0.6 (CS) to 4.0 ± 0.6 and 4.1 ± 0.7 (CSAI) for readers 1 and 2, respectively (p ≤ 0.001). More cartilage lesions were detected with CSAI, with significant improvements in diagnostic confidence in certain cartilage zones, such as femoral zones C and D, for both readers. Labrum and ligamentum capitis femoris depiction remained similar, while bone depiction was rated lower. No abnormalities detected with CS were missed with CSAI. CSAI provides high-resolution hip MR images with enhanced cartilage depiction without extending acquisition times, potentially enabling more precise hip cartilage assessment.
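Paired ordinal ratings like these Likert scores are compared with the Wilcoxon signed-rank test. A small sketch with made-up reader scores (not the study's data), using SciPy:

```python
from scipy.stats import wilcoxon

# Hypothetical paired Likert ratings: same reader, same exam, two protocols
cs   = [4, 4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 4]
csai = [4, 5, 4, 4, 5, 4, 5, 4, 4, 5, 4, 5]

# Zero differences (tied pairs) are dropped by the default zero_method
stat, p = wilcoxon(cs, csai)
print(f"W = {stat}, p = {p:.4f}")
```

Here every non-tied pair favors CSAI, so the signed-rank statistic is 0 and the test is significant.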

Computed Tomography Radiomics-based Combined Model for Predicting Thymoma Risk Subgroups: A Multicenter Retrospective Study.

Liu Y, Luo C, Wu Y, Zhou S, Ruan G, Li H, Chen W, Lin Y, Liu L, Quan T, He X

PubMed · Jun 1, 2025
Accurately distinguishing the histological subtypes and risk categories of thymomas is difficult. To differentiate the histologic risk categories of thymomas, we developed a combined radiomics model based on non-enhanced and contrast-enhanced computed tomography (CT) radiomics, clinical, and semantic features. In total, 360 patients with pathologically confirmed thymomas who underwent CT examinations were retrospectively recruited from three centers. Patients were classified using improved pathological classification criteria as low-risk (LRT: types A and AB) or high-risk (HRT: types B1, B2, and B3). The training and external validation sets comprised 274 (centers 1 and 2) and 86 (center 3) patients, respectively. A clinical-semantic model was built using clinical and semantic variables. Radiomics features were filtered using intraclass correlation coefficients, correlation analysis, and univariate logistic regression. An optimal radiomics model (Rad_score) was constructed using an AutoML algorithm, and a combined model was constructed by integrating Rad_score with clinical and semantic features. The predictive and clinical performances of the models were evaluated using receiver operating characteristic/calibration curve analyses and decision-curve analysis, respectively. The radiomics and combined models (area under the curve: training set, 0.867 and 0.884; external validation set, 0.792 and 0.766, respectively) outperformed the clinical-semantic model. The combined model had higher accuracy than the radiomics model (0.79 vs. 0.78, p<0.001) in the entire cohort. The venous-phase original_firstorder_median had the highest relative importance among features in the radiomics model. Radiomics and combined radiomics models may serve as noninvasive discrimination tools to differentiate thymoma risk classifications.
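One step of the filtering pipeline described above, correlation-based redundancy removal followed by a classifier, can be sketched on synthetic data (the actual pipeline also uses intraclass correlation coefficients, univariate logistic regression, and AutoML, which are omitted here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                               # synthetic "radiomics" features
X[:, 5] = 0.99 * X[:, 4] + rng.normal(scale=0.05, size=200)  # deliberately redundant feature
y = (X[:, 0] + X[:, 4] + rng.normal(size=200) > 0).astype(int)

# Greedy correlation filter: keep a feature only if it is not highly
# correlated (|r| >= 0.9) with any feature already kept
corr = np.abs(np.corrcoef(X, rowvar=False))
keep = []
for j in range(X.shape[1]):
    if all(corr[j, k] < 0.9 for k in keep):
        keep.append(j)

model = LogisticRegression().fit(X[:, keep], y)
auc = roc_auc_score(y, model.predict_proba(X[:, keep])[:, 1])
print(f"kept {len(keep)}/20 features, training AUC = {auc:.3f}")
```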

Deep Learning-Assisted Diagnosis of Malignant Cerebral Edema Following Endovascular Thrombectomy.

Song Y, Hong J, Liu F, Liu J, Chen Y, Li Z, Su J, Hu S, Fu J

PubMed · Jun 1, 2025
Malignant cerebral edema (MCE) is a significant complication following endovascular thrombectomy (EVT) in the treatment of acute ischemic stroke. This study aimed to develop and validate a deep learning-assisted diagnosis model based on the hyperattenuated imaging marker (HIM), characterized by hyperattenuation on head non-contrast computed tomography immediately after thrombectomy, to help radiologists predict MCE in patients receiving EVT. This study included 271 patients: 168 in the training cohort, 43 in the validation cohort, and 60 in the prospective internal test cohort. Deep learning models including ResNet 50, ResNet 101, ResNeXt50_32×4d, ResNeXt101_32×8d, and DenseNet 121 were constructed. The performance of senior and junior radiologists with and without assistance from the optimal model was compared. ResNeXt101_32×8d had the best predictive performance: receiver operating characteristic curve analysis yielded an area under the curve (AUC) of 0.897 for the prediction of MCE in the validation group and 0.889 in the test group. Moreover, with the assistance of the model, radiologists exhibited a significant improvement in diagnostic performance: the AUC increased by 0.137 for the junior radiologist and by 0.096 for the senior radiologist. Our study used the ResNeXt-101 neural network, combined with HIM, to validate a deep learning model for predicting MCE post-EVT. The developed deep learning model demonstrated high discriminative ability and can serve as a valuable adjunct to radiologists in clinical practice.
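For readers interpreting the AUCs above: the AUC equals the probability that a randomly chosen positive (MCE) case receives a higher score than a randomly chosen negative case (the Mann-Whitney interpretation). A dependency-light sketch of that equivalence, on made-up scores:

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as the probability that a positive case outscores a negative
    one, with ties counted as half."""
    s_pos = np.asarray(scores_pos, float)[:, None]
    s_neg = np.asarray(scores_neg, float)[None, :]
    return np.mean(s_pos > s_neg) + 0.5 * np.mean(s_pos == s_neg)

# Hypothetical model scores for 3 MCE and 3 non-MCE patients
print(f"{auc_mann_whitney([0.9, 0.8, 0.7], [0.1, 0.2, 0.75]):.3f}")  # → 0.889
```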

Machine learning can reliably predict malignancy of breast lesions based on clinical and ultrasonographic features.

Buzatto IPC, Recife SA, Miguel L, Bonini RM, Onari N, Faim ALPA, Silvestre L, Carlotti DP, Fröhlich A, Tiezzi DG

PubMed · Jun 1, 2025
To establish a reliable machine learning model to predict malignancy in breast lesions identified by ultrasound (US) and optimize the negative predictive value to minimize unnecessary biopsies. We included clinical and ultrasonographic attributes from 1526 breast lesions classified as BI-RADS 3, 4a, 4b, 4c, 5, and 6 that underwent US-guided breast biopsy in four institutions. We selected the most informative attributes to train nine machine learning models, ensemble models, and models with tuned thresholds to make inferences about the diagnosis of BI-RADS 4a and 4b lesions (validation dataset). We tested the performance of the final model on 403 new suspicious lesions. The most informative attributes were the shape, margin, orientation, and size of the lesion, the resistance index of the internal vessel, the age of the patient, and the presence of a palpable lump. The highest mean negative predictive value (NPV) was achieved with the K-Nearest Neighbors algorithm (97.9%). Ensembling did not improve performance. Tuning the threshold did improve the performance of the models, and we chose XGBoost with a tuned threshold as the final model. The tested performance of the final model was: NPV 98.1%, false-negative rate 1.9%, positive predictive value 77.1%, false-positive rate 22.9%. Applying this final model, we would have missed 2 of the 231 malignant lesions in the test dataset (0.8%). Machine learning can help physicians predict malignancy in suspicious breast lesions identified by US. Our final model would be able to avoid 60.4% of biopsies of benign lesions while missing less than 1% of cancer cases.
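The abstract does not detail its threshold-tuning procedure; one simple approach consistent with the stated goal is to search for the highest "benign" cutoff at which the predicted-negative group meets a target NPV. A hypothetical sketch on made-up probabilities and biopsy labels:

```python
import numpy as np

def tune_threshold_for_npv(probs, labels, target_npv=0.98):
    """Return the highest cutoff t such that cases with probs < t
    (called benign) have NPV >= target_npv; None if no cutoff qualifies.
    probs: predicted probability of malignancy; labels: 1 = malignant."""
    probs, labels = np.asarray(probs), np.asarray(labels)
    for t in sorted(set(probs.tolist()), reverse=True):
        neg = probs < t
        if neg.any() and np.mean(labels[neg] == 0) >= target_npv:
            return t
    return None

# Hypothetical model outputs and biopsy results for 7 lesions
probs = [0.05, 0.10, 0.20, 0.30, 0.60, 0.80, 0.90]
labels = [0, 0, 0, 1, 1, 1, 1]
print(tune_threshold_for_npv(probs, labels))  # → 0.3
```

At a cutoff of 0.3, the three lesions called benign are all truly benign, so the NPV target is met while three biopsies are avoided.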

Uncertainty Estimation for Dual View X-ray Mammographic Image Registration Using Deep Ensembles.

Walton WC, Kim SJ

PubMed · Jun 1, 2025
Techniques are developed for generating uncertainty estimates for convolutional neural network (CNN)-based methods that register the locations of lesions between the craniocaudal (CC) and mediolateral oblique (MLO) mammographic X-ray image views. Multi-view lesion correspondence is an important task that clinicians perform to characterize lesions during routine mammographic exams. Automated registration tools can aid in this task, yet if the tools also provide confidence estimates, they can be of greater value to clinicians, especially in cases involving dense tissue where lesions may be difficult to see. A set of deep ensemble-based techniques, which leverage a negative log-likelihood (NLL)-based cost function, are implemented for estimating uncertainties. The ensemble architectures involve significant modifications to an existing CNN dual-view lesion registration algorithm. Three architectural designs are evaluated, and different ensemble sizes are compared using various performance metrics. The techniques are tested on synthetic X-ray data, real 2D X-ray data, and slices from real 3D X-ray data. The ensembles generate covariance-based uncertainty ellipses that are correlated with registration accuracy, such that the ellipse sizes can give a clinician an indication of confidence in the mapping between the CC and MLO views. The results also show that the ellipse sizes can aid in improving computer-aided detection (CAD) results by matching CC/MLO lesion detections and reducing false alarms from both views, adding to clinical utility. The uncertainty estimation techniques show promise as a means of aiding clinicians in confidently establishing multi-view lesion correspondence, thereby improving diagnostic capability.
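For background on the NLL-based deep-ensemble recipe this work builds on: each member predicts a mean and a variance, and the members are combined as a Gaussian mixture whose variance captures both data noise and model disagreement. A minimal one-dimensional sketch of the standard combination rule (the paper's 2D covariance ellipses generalize this):

```python
import numpy as np

def combine_ensemble(mus, vars_):
    """Combine M (mean, variance) predictions into the mixture mean and
    variance: Var = E[var_i + mu_i^2] - (E[mu_i])^2."""
    mus, vars_ = np.asarray(mus, float), np.asarray(vars_, float)
    mu = mus.mean(axis=0)
    var = (vars_ + mus**2).mean(axis=0) - mu**2
    return mu, var

# Two members that agree on noise but disagree on location:
# disagreement inflates the combined variance beyond 0.5
mu, var = combine_ensemble(mus=[1.0, 3.0], vars_=[0.5, 0.5])
print(mu, var)  # → 2.0 1.5
```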

Cross-site Validation of AI Segmentation and Harmonization in Breast MRI.

Huang Y, Leotta NJ, Hirsch L, Gullo RL, Hughes M, Reiner J, Saphier NB, Myers KS, Panigrahi B, Ambinder E, Di Carlo P, Grimm LJ, Lowell D, Yoon S, Ghate SV, Parra LC, Sutton EJ

PubMed · Jun 1, 2025
This work aims to perform a cross-site validation of automated segmentation for breast cancers in MRI and to compare the performance to radiologists. A three-dimensional (3D) U-Net was trained to segment cancers in dynamic contrast-enhanced axial MRIs using a large dataset from Site 1 (n = 15,266; 449 malignant and 14,817 benign). Performance was validated on site-specific test data from this and two additional sites, and on common publicly available testing data. Four radiologists from each of the three clinical sites provided two-dimensional (2D) segmentations as ground truth. Segmentation performance did not differ between the network and radiologists on the test data from Sites 1 and 2 or the common public data (median Dice score: Site 1, network 0.86 vs. radiologists 0.85, n = 114; Site 2, 0.91 vs. 0.91, n = 50; common public data, 0.93 vs. 0.90). For Site 3, an affine input layer was fine-tuned using segmentation labels, resulting in comparable performance between the network and radiologists (0.88 vs. 0.89, n = 42). Radiologist performance differed on the common test data, and the network numerically outperformed 11 of the 12 radiologists (median Dice: 0.85-0.94, n = 20). In conclusion, a deep network with a novel supervised harmonization technique matches radiologists' performance in MRI tumor segmentation across clinical sites. We make code and weights publicly available to promote reproducible AI in radiology.
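The Dice scores above measure overlap between a predicted mask and a reference mask; for reference, a minimal implementation:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient between two binary masks: twice the overlap
    divided by the total foreground area; 1.0 for two empty masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [1, 0]])
print(dice_score(pred, truth))  # → 0.5 (1 overlapping pixel, 4 foreground total)
```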

Deep Learning Classification of Ischemic Stroke Territory on Diffusion-Weighted MRI: Added Value of Augmenting the Input with Image Transformations.

Koska IO, Selver A, Gelal F, Uluc ME, Çetinoğlu YK, Yurttutan N, Serindere M, Dicle O

PubMed · Jun 1, 2025
Our primary aim in this study was to build a patient-level classifier for stroke territory in DWI using AI to facilitate fast triage of stroke patients to a dedicated stroke center. A retrospective collection of DWI images of 271 and 122 consecutive acute ischemic stroke patients from two centers was carried out. Pretrained MobileNetV2 and EfficientNetB0 architectures were used to classify territorial subtypes as middle cerebral artery (MCA), posterior circulation, or watershed infarcts, along with normal slices. Various input combinations using edge maps, thresholding, and hard-attention versions were explored. The effect of augmenting the three-channel inputs of pre-trained models on classification performance was analyzed. ROC analyses and confusion matrix-derived performance metrics of the models are reported. Of the 271 patients from center 1, 151 (55.7%) were male and 120 (44.3%) were female; 129 (47.6%) had MCA, 65 (24.0%) had posterior circulation, and 77 (28.4%) had watershed infarcts. Of the 122 patients from center 2, 78 (64%) were male and 44 (36%) were female; 52 (43%) had MCA, 51 (42%) had posterior circulation, and 19 (15%) had watershed infarcts. The Mobile-Crop model had the best performance, with 0.95 accuracy and a 0.91 mean F1 score for slice-wise classification and 0.88 accuracy on the external test set, along with a 0.92 mean AUC. In conclusion, modified pre-trained models may be augmented with image transformations to provide a more accurate classification of the territory affected by stroke in DWI.
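The augmented-input idea, stacking the raw slice with transformed versions such as an edge map and a thresholded copy so an ImageNet-pretrained RGB backbone sees complementary views, can be sketched as follows (illustrative only; the exact transformations and combinations tested are specified in the paper):

```python
import numpy as np
from scipy import ndimage

def three_channel_input(slice2d):
    """Build a 3-channel input for a pretrained backbone from one
    single-channel DWI slice: raw image, Sobel edge map, thresholded copy."""
    img = (slice2d - slice2d.min()) / (np.ptp(slice2d) + 1e-8)
    edges = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    edges /= edges.max() + 1e-8
    thresh = (img > img.mean()).astype(float)
    return np.stack([img, edges, thresh], axis=0)

rng = np.random.default_rng(0)
x = three_channel_input(rng.normal(size=(8, 8)))  # synthetic slice
print(x.shape)  # → (3, 8, 8)
```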

A Large Language Model to Detect Negated Expressions in Radiology Reports.

Su Y, Babore YB, Kahn CE

PubMed · Jun 1, 2025
Natural language processing (NLP) is crucial for extracting information accurately from unstructured text to provide insights for clinical decision-making, quality improvement, and medical research. This study compared the performance of a rule-based NLP system and a medical-domain transformer-based model in detecting negated concepts in radiology reports. Using a corpus of 984 de-identified radiology reports from a large U.S.-based academic health system (1000 consecutive reports, excluding 16 duplicates), the investigators compared the rule-based medspaCy system and the Clinical Assertion and Negation Classification Bidirectional Encoder Representations from Transformers (CAN-BERT) system in detecting negated expressions of terms from RadLex, the Unified Medical Language System Metathesaurus, and the Radiology Gamuts Ontology. Power analysis determined a sample size of 382 terms to achieve α = 0.05 and power (1 − β) = 0.8 for McNemar's test; based on an estimate of 15% negated terms, 2800 randomly selected terms were annotated manually as negated or not negated. The precision, recall, and F1 of the two models were compared using McNemar's test. Of the 2800 terms, 387 (13.8%) were negated. For negation detection, medspaCy attained a recall of 0.795, precision of 0.356, and F1 of 0.492. CAN-BERT achieved a recall of 0.785, precision of 0.768, and F1 of 0.777. Although recall was not significantly different, CAN-BERT had significantly better precision (χ² = 304.64; p < 0.001). The transformer-based CAN-BERT model detected negated terms in radiology reports with high precision and recall; its precision significantly exceeded that of the rule-based medspaCy system. Use of this system will improve data extraction from textual reports to support information retrieval, AI model training, and discovery of causal relationships.
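McNemar's test, used above to compare the two systems on the same annotated terms, depends only on the discordant pairs, terms where exactly one system is correct. A minimal sketch with hypothetical discordant counts (not the study's data):

```python
from scipy.stats import chi2

def mcnemar_chi2(b, c):
    """McNemar's statistic (no continuity correction) and p-value;
    b and c are the two discordant-pair counts."""
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

# Hypothetical counts: system A right / B wrong on 40 terms, the reverse on 10
stat, p = mcnemar_chi2(40, 10)
print(f"chi2 = {stat:.2f}, p = {p:.2e}")
```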