
Effect of Deep Learning Image Reconstruction on Image Quality and Pericoronary Fat Attenuation Index.

Mei J, Chen C, Liu R, Ma H

PubMed | Jun 1, 2025
To compare the image quality and fat attenuation index (FAI) of coronary CT angiography (CCTA) at different tube voltages between deep learning image reconstruction (DLIR) and adaptive statistical iterative reconstruction V (ASIR-V). Three hundred one patients who underwent CCTA with automatic tube current modulation were prospectively enrolled and divided into a 120 kV group and a low tube voltage group. Images were reconstructed using ASIR-V at level 50% (ASIR-V50%) and high-strength DLIR (DLIR-H). In the low tube voltage group, the voltage was selected according to the Chinese BMI classification: 70 kV (BMI < 24 kg/m<sup>2</sup>), 80 kV (24 kg/m<sup>2</sup> ≤ BMI < 28 kg/m<sup>2</sup>), and 100 kV (BMI ≥ 28 kg/m<sup>2</sup>). At the same tube voltage, subjective and objective image quality, edge rise distance (ERD), and FAI were compared between the two algorithms; across tube voltages, subjective and objective image quality and ERD of DLIR-H images were compared. Compared with the 120 kV group, DLIR-H image noise in the 70 kV, 80 kV, and 100 kV groups increased by 36%, 25%, and 12%, respectively (all P < 0.001), while contrast-to-noise ratio (CNR), subjective scores, and ERD were similar (all P > 0.05). In the 70 kV, 80 kV, 100 kV, and 120 kV groups, compared with ASIR-V50%, DLIR-H reduced image noise by 50%, 53%, 47%, and 38-50%, respectively; CNR, subjective scores, and FAI values increased significantly (all P < 0.001), and ERD decreased. Compared with 120 kV, the combination of DLIR-H and low tube voltage maintained image quality. At the same tube voltage, DLIR-H improved image quality and FAI values compared with ASIR-V.
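In the conventional definition (not spelled out in this abstract), FAI is the mean CT attenuation of pericoronary adipose tissue voxels falling inside an adipose window of roughly -190 to -30 HU. A minimal illustrative sketch of that computation, with the window bounds and function name as assumptions rather than the authors' code:

```python
import numpy as np

def fat_attenuation_index(hu_values, lo=-190.0, hi=-30.0):
    """Mean attenuation (HU) of voxels within the adipose window [lo, hi].

    hu_values: CT attenuation samples (HU) from the pericoronary region.
    Voxels outside the adipose window are excluded before averaging.
    """
    hu = np.asarray(hu_values, dtype=float)
    fat = hu[(hu >= lo) & (hu <= hi)]
    if fat.size == 0:
        raise ValueError("no voxels in the adipose attenuation window")
    return float(fat.mean())

# Toy example: four fat voxels plus excluded soft tissue/contrast/air values.
voxels = [-100, -80, -120, -60, 40, 300, -200]
print(round(fat_attenuation_index(voxels), 1))  # mean of the four fat voxels
```

Because reconstruction algorithm and tube voltage shift measured HU values, they shift which voxels fall inside this window, which is why FAI differs between DLIR-H and ASIR-V.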

Machine learning can reliably predict malignancy of breast lesions based on clinical and ultrasonographic features.

Buzatto IPC, Recife SA, Miguel L, Bonini RM, Onari N, Faim ALPA, Silvestre L, Carlotti DP, Fröhlich A, Tiezzi DG

PubMed | Jun 1, 2025
To establish a reliable machine learning model to predict malignancy in breast lesions identified by ultrasound (US) and optimize the negative predictive value (NPV) to minimize unnecessary biopsies. We included clinical and ultrasonographic attributes from 1526 breast lesions classified as BI-RADS 3, 4a, 4b, 4c, 5, or 6 that underwent US-guided breast biopsy at four institutions. We selected the most informative attributes to train nine machine learning models, ensemble models, and models with tuned thresholds to infer the diagnosis of BI-RADS 4a and 4b lesions (validation dataset). We tested the performance of the final model on 403 new suspicious lesions. The most informative attributes were the shape, margin, orientation, and size of the lesion, the resistance index of the internal vessel, patient age, and the presence of a palpable lump. The highest mean NPV was achieved with the K-Nearest Neighbors algorithm (97.9%). Ensembling did not improve performance, but threshold tuning did, and we chose XGBoost with a tuned threshold as the final model. Its tested performance was: NPV 98.1%, false-negative rate 1.9%, positive predictive value 77.1%, false-positive rate 22.9%. Applying this final model, we would have missed 2 of the 231 malignant lesions in the test dataset (0.8%). Machine learning can help physicians predict malignancy in suspicious breast lesions identified by US. Our final model would avoid 60.4% of biopsies in benign lesions while missing less than 1% of cancer cases.
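The threshold tuning described above optimizes NPV so that lesions called benign can safely skip biopsy. A hedged sketch of how such a threshold could be selected on a validation set (illustrative only; the function and variable names are assumptions, not the authors' code):

```python
import numpy as np

def tune_threshold_for_npv(y_true, y_prob, target_npv=0.98):
    """Pick the largest probability threshold whose negative predictive
    value (NPV) on a validation set still meets `target_npv`.

    Lesions with predicted probability below the threshold are called
    benign (biopsy deferred); NPV = truly benign / all called benign.
    A larger threshold defers more biopsies, so we scan candidate
    thresholds from high to low and return the first that satisfies
    the target. Returns None if no threshold qualifies.
    """
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    for thr in sorted(set(y_prob), reverse=True):
        called_benign = y_prob < thr
        if called_benign.sum() == 0:
            continue
        npv = (y_true[called_benign] == 0).mean()
        if npv >= target_npv:
            return float(thr)
    return None
```

In practice the tuned threshold would then be frozen and applied unchanged to the held-out test lesions, as the study does with its 403-lesion test set.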

Intra-Individual Reproducibility of Automated Abdominal Organ Segmentation-Performance of TotalSegmentator Compared to Human Readers and an Independent nnU-Net Model.

Abel L, Wasserthal J, Meyer MT, Vosshenrich J, Yang S, Donners R, Obmann M, Boll D, Merkle E, Breit HC, Segeroth M

PubMed | Jun 1, 2025
The purpose of this study was to assess the segmentation reproducibility of the artificial intelligence-based algorithm TotalSegmentator across 34 anatomical structures using multiphasic abdominal CT scans, comparing unenhanced, arterial, and portal venous phases in the same patients. A total of 1252 multiphasic abdominal CT scans acquired at our institution between January 1, 2012, and December 31, 2022, were retrospectively included. TotalSegmentator was used to derive volumetric measurements of 34 abdominal organs and structures from the total of 3756 CT series. Reproducibility was evaluated across the three contrast phases per CT and compared to two human readers and an independent nnU-Net trained on the BTCV dataset. Relative deviations in segmented volumes and absolute volume deviations (AVD) were reported. Volume deviation within 5% was considered reproducible, and non-inferiority testing was conducted using a 5% margin. Twenty-nine out of 34 structures had volume deviations within 5% and were considered reproducible; volume deviations for the adrenal glands, gallbladder, spleen, and duodenum were above 5%. The highest reproducibility was observed for bones (-0.58% [95% CI: -0.58, -0.57]) and muscles (-0.33% [-0.35, -0.32]). Among abdominal organs, the volume deviation was 1.67% (1.60, 1.74). TotalSegmentator outperformed the reproducibility of the nnU-Net trained on the BTCV dataset, with an AVD of 6.50% (6.41, 6.59) vs. 10.03% (9.86, 10.20; p < 0.0001), most notably in cases with pathologic findings. Similarly, TotalSegmentator's AVD between different contrast phases was superior to the interreader AVD for the same contrast phase (p = 0.036). TotalSegmentator demonstrated high intra-individual reproducibility for most abdominal structures in multiphasic abdominal CT scans. Although reproducibility was lower in pathologic cases, it outperforms both human readers and an nnU-Net trained on the BTCV dataset.
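The study's 5% criterion reduces to a simple check on relative volume deviations across contrast phases. A minimal sketch (an assumption: this version uses the first phase as the reference volume, which may differ from the study's exact reference choice):

```python
def relative_volume_deviation(vol_ref, vol):
    """Signed deviation of `vol` from `vol_ref`, as a fraction of `vol_ref`."""
    return (vol - vol_ref) / vol_ref

def is_reproducible(volumes, margin=0.05):
    """Mirror the study's criterion: an organ's segmentation counts as
    reproducible when every phase's volume stays within `margin` (5%)
    of the reference (here, the first phase)."""
    ref = volumes[0]
    return all(abs(relative_volume_deviation(ref, v)) <= margin
               for v in volumes[1:])

# Volumes (mL) of one organ on unenhanced, arterial, portal venous phases.
print(is_reproducible([100.0, 103.0, 98.0]))   # 3% and -2% deviations
print(is_reproducible([100.0, 108.0, 99.0]))   # 8% deviation exceeds margin
```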

Deep Learning-Assisted Diagnosis of Malignant Cerebral Edema Following Endovascular Thrombectomy.

Song Y, Hong J, Liu F, Liu J, Chen Y, Li Z, Su J, Hu S, Fu J

PubMed | Jun 1, 2025
Malignant cerebral edema (MCE) is a significant complication following endovascular thrombectomy (EVT) for acute ischemic stroke. This study aimed to develop and validate a deep learning-assisted diagnosis model based on the hyperattenuated imaging marker (HIM), characterized by hyperattenuation on head non-contrast computed tomography immediately after thrombectomy, to help radiologists predict MCE in patients receiving EVT. This study included 271 patients: 168 in the training cohort, 43 in the validation cohort, and 60 in the prospective internal test cohort. Deep learning models including ResNet 50, ResNet 101, ResNeXt50_32×4d, ResNeXt101_32×8d, and DenseNet 121 were constructed. The performance of senior and junior radiologists with and without assistance from the optimal model was compared. ResNeXt101_32×8d had the best predictive performance: receiver operating characteristic analysis yielded an area under the curve (AUC) of 0.897 for the prediction of MCE in the validation group and 0.889 in the test group. Moreover, with the assistance of the model, radiologists exhibited a significant improvement in diagnostic performance: the AUC increased by 0.137 for the junior radiologist and by 0.096 for the senior radiologist. Our study used the ResNeXt-101 neural network, combined with HIM, to validate a deep learning model for predicting MCE post-EVT. The developed model demonstrated high discriminative ability and can serve as a valuable adjunct to radiologists in clinical practice.
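The AUCs reported here have a simple probabilistic reading: the chance that a randomly chosen MCE case receives a higher score than a randomly chosen non-MCE case. A dependency-free sketch of that computation via the Mann-Whitney U statistic (illustrative only; real pipelines would use a library routine such as scikit-learn's `roc_auc_score`):

```python
def auc(y_true, y_score):
    """AUC as the probability that a random positive outscores a random
    negative; ties count half. Equivalent to the Mann-Whitney U statistic
    normalized by the number of positive-negative pairs."""
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy scores: two MCE cases (label 1), two without (label 0).
print(auc([1, 1, 0, 0], [0.9, 0.4, 0.5, 0.1]))
```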

Computed Tomography Radiomics-based Combined Model for Predicting Thymoma Risk Subgroups: A Multicenter Retrospective Study.

Liu Y, Luo C, Wu Y, Zhou S, Ruan G, Li H, Chen W, Lin Y, Liu L, Quan T, He X

PubMed | Jun 1, 2025
Accurately distinguishing the histological subtypes and risk categories of thymomas is difficult. To differentiate the histologic risk categories of thymomas, we developed a combined radiomics model based on non-enhanced and contrast-enhanced computed tomography (CT) radiomics, clinical, and semantic features. In total, 360 patients with pathologically confirmed thymomas who underwent CT examinations were retrospectively recruited from three centers. Patients were classified using improved pathological classification criteria as low-risk (LRT: types A and AB) or high-risk (HRT: types B1, B2, and B3). The training and external validation sets comprised 274 (centers 1 and 2) and 86 (center 3) patients, respectively. A clinical-semantic model was built using clinical and semantic variables. Radiomics features were filtered using intraclass correlation coefficients, correlation analysis, and univariate logistic regression. An optimal radiomics model (Rad_score) was constructed using the AutoML algorithm, and a combined model was constructed by integrating Rad_score with clinical and semantic features. The predictive and clinical performance of the models was evaluated using receiver operating characteristic/calibration curve analyses and decision-curve analysis, respectively. The radiomics and combined models (area under the curve: training set, 0.867 and 0.884; external validation set, 0.792 and 0.766, respectively) outperformed the clinical-semantic model. The combined model had higher accuracy than the radiomics model (0.79 vs. 0.78, p < 0.001) in the entire cohort. The original_firstorder_median of the venous phase had the highest relative importance among features in the radiomics model. Radiomics and combined radiomics models may serve as noninvasive discrimination tools to differentiate thymoma risk classifications.
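One of the filtering steps named above, correlation analysis, typically means discarding radiomics features that are nearly collinear with features already kept. A hedged sketch of such a greedy redundancy filter (the threshold of 0.9 and all names are assumptions, not the study's exact procedure):

```python
import numpy as np

def drop_correlated_features(X, names, threshold=0.9):
    """Greedy redundancy filter: walk features (columns of X) in order
    and drop any whose absolute Pearson correlation with an
    already-kept feature exceeds `threshold`."""
    kept = []
    for j in range(X.shape[1]):
        redundant = any(
            abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > threshold
            for k in kept
        )
        if not redundant:
            kept.append(j)
    return [names[k] for k in kept]

# Toy matrix: feature "b" is exactly 2x feature "a", so it is dropped.
X = np.array([[1., 2., 5.], [2., 4., 1.], [3., 6., 4.], [4., 8., 2.]])
print(drop_correlated_features(X, ["a", "b", "c"]))
```

In a full radiomics pipeline this step would sit between the intraclass-correlation filter and the univariate logistic regression screen.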

Bridging innovation to implementation in artificial intelligence fracture detection : a commentary piece.

Khattak M, Kierkegaard P, McGregor A, Perry DC

PubMed | Jun 1, 2025
The deployment of AI in medical imaging, particularly in areas such as fracture detection, represents a transformative advancement in orthopaedic care. AI-driven systems, leveraging deep-learning algorithms, promise to enhance diagnostic accuracy, reduce variability, and streamline workflows by analyzing radiographs swiftly and accurately. Despite these potential benefits, the integration of AI into clinical settings faces substantial barriers, including slow adoption across health systems, technical challenges, and a major lag between technology development and clinical implementation. This commentary explores the role of AI in healthcare, highlighting its potential to enhance patient outcomes through more accurate and timely diagnoses. It addresses the necessity of bridging the gap between AI innovation and practical application, and it emphasizes the importance of implementation science in effectively integrating AI technologies into healthcare systems, using frameworks such as the Consolidated Framework for Implementation Research and the Knowledge-to-Action Cycle to guide this process. We call for a structured approach to the challenges of deploying AI in clinical settings, ensuring that AI's benefits translate into improved healthcare delivery and patient care.

Deep learning-based acceleration of high-resolution compressed sense MR imaging of the hip.

Marka AW, Meurer F, Twardy V, Graf M, Ebrahimi Ardjomand S, Weiss K, Makowski MR, Gersing AS, Karampinos DC, Neumann J, Woertler K, Banke IJ, Foreman SC

PubMed | Jun 1, 2025
To evaluate a Compressed Sense Artificial Intelligence framework (CSAI) incorporating parallel imaging, compressed sense (CS), and deep learning for high-resolution MRI of the hip, comparing it with standard-resolution CS imaging. Thirty-two patients with femoroacetabular impingement syndrome underwent 3 T MRI scans. Coronal and sagittal intermediate-weighted TSE sequences with fat saturation were acquired using CS (0.6 × 0.8 mm resolution) and CSAI (0.3 × 0.4 mm resolution) protocols in comparable acquisition times (7:49 vs. 8:07 minutes for both planes). Two readers systematically assessed the depiction of the acetabular and femoral cartilage (in five cartilage zones), labrum, ligamentum capitis femoris, and bone using a five-point Likert scale. Diagnostic confidence and abnormality detection were recorded and analyzed using the Wilcoxon signed-rank test. CSAI significantly improved cartilage depiction across most cartilage zones compared with CS. Overall Likert scores were 4.0 ± 0.2 (CS) vs. 4.2 ± 0.6 (CSAI) for reader 1 and 4.0 ± 0.2 (CS) vs. 4.3 ± 0.6 (CSAI) for reader 2 (p ≤ 0.001). Diagnostic confidence increased from 3.5 ± 0.7 and 3.9 ± 0.6 (CS) to 4.0 ± 0.6 and 4.1 ± 0.7 (CSAI) for readers 1 and 2, respectively (p ≤ 0.001). More cartilage lesions were detected with CSAI, with significant improvements in diagnostic confidence in certain cartilage zones, such as femoral zones C and D, for both readers. Labrum and ligamentum capitis femoris depiction remained similar, while bone depiction was rated lower. No abnormalities detected with CS were missed with CSAI. CSAI provides high-resolution hip MR images with enhanced cartilage depiction without extending acquisition times, potentially enabling more precise hip cartilage assessment.
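The paired Likert scores above are compared with the Wilcoxon signed-rank test. As a hedged illustration of what that test computes, here is a sketch of the W statistic only (no p-value; a real analysis would use a statistics package such as `scipy.stats.wilcoxon`):

```python
def wilcoxon_signed_rank_statistic(x, y):
    """W statistic of the Wilcoxon signed-rank test for paired scores:
    drop zero differences, rank the remaining absolute differences
    (tied values get their average rank), and return the smaller of the
    positive-rank and negative-rank sums. A small W relative to the
    number of pairs suggests a systematic difference."""
    diffs = [b - a for a, b in zip(x, y) if b != a]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_pos = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_neg = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_pos, w_neg)

# Paired Likert scores for the same cases read with CS (x) and CSAI (y).
print(wilcoxon_signed_rank_statistic([4, 4, 4, 3], [5, 4, 5, 4]))
```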

A systematic review on deep learning-enabled coronary CT angiography for plaque and stenosis quantification and cardiac risk prediction.

Shrivastava P, Kashikar S, Parihar PH, Kasat P, Bhangale P, Shrivastava P

PubMed | Jun 1, 2025
Coronary artery disease (CAD) is a major worldwide health concern, contributing significantly to the global burden of cardiovascular diseases (CVDs). According to the 2023 World Health Organization (WHO) report, CVDs account for approximately 17.9 million deaths annually, which emphasizes the need for advanced diagnostic tools such as coronary computed tomography angiography (CCTA). The incorporation of deep learning (DL) technologies could significantly improve CCTA analysis by automating the quantification of plaque and stenosis, thus enhancing the precision of cardiac risk assessments. A recent meta-analysis highlights the evolving role of CCTA in patient management, showing that CCTA-guided diagnosis and management reduced adverse cardiac events and improved event-free survival in patients with stable and acute coronary syndromes. An extensive literature search was carried out across electronic databases, including MEDLINE, Embase, and the Cochrane Library, using a strategy that combined Medical Subject Headings (MeSH) terms and pertinent keywords. The review adhered to PRISMA guidelines and focused on studies published between 2019 and 2024 that employed DL for CCTA in patients aged 18 years or older. After applying specific inclusion and exclusion criteria, a total of 10 articles were selected for systematic evaluation of quality and bias. The included studies demonstrated the high diagnostic performance and predictive capabilities of various deep learning models compared with different imaging modalities, highlighting their effectiveness in enhancing diagnostic accuracy. Notably, strong correlations were observed between DL-derived measurements and intravascular ultrasound findings, enhancing clinical decision-making and risk stratification for CAD.
Deep learning-enabled CCTA represents a promising advancement in the quantification of coronary plaques and stenosis, facilitating improved cardiac risk prediction and enhancing clinical workflow efficiency. Despite variability in study designs and potential biases, the findings support the integration of DL technologies into routine clinical practice for better patient outcomes in CAD management.

Deep learning driven interpretable and informed decision making model for brain tumour prediction using explainable AI.

Adnan KM, Ghazal TM, Saleem M, Farooq MS, Yeun CY, Ahmad M, Lee SW

PubMed | Jun 1, 2025
Brain tumours are highly complex, particularly when it comes to their initial and accurate diagnosis, as this determines patient prognosis. Conventional methods rely on MRI and CT scans and employ generic machine learning techniques that are heavily dependent on feature extraction and require human intervention. These methods may fail in complex cases and do not produce human-interpretable results, making it difficult for clinicians to trust the model's predictions. Such limitations prolong the diagnostic process and can negatively impact the quality of treatment. The advent of deep learning has made it a powerful tool for complex image analysis tasks, such as detecting brain tumours, by learning advanced patterns from images. However, deep learning models are often considered "black box" systems, where the reasoning behind predictions remains unclear. To address this issue, the present study applies Explainable AI (XAI) alongside deep learning for accurate and interpretable brain tumour prediction. XAI enhances model interpretability by identifying key features such as tumour size, location, and texture, which are crucial for clinicians; this helps build their confidence in the model and enables better-informed decisions. In this research, a deep learning model integrated with XAI is proposed to develop an interpretable framework for brain tumour prediction. The model is trained on an extensive dataset comprising imaging and clinical data and demonstrates a high AUC while leveraging XAI for model explainability and feature selection. The findings indicate that this approach improves predictive performance, achieving an accuracy of 92.98% and a miss rate of 7.02%. Additionally, interpretability tools such as LIME and Grad-CAM give clinicians a clearer understanding of the decision-making process, supporting diagnosis and treatment.
This model represents a significant advancement in brain tumour prediction, with the potential to enhance patient outcomes and contribute to the field of neuro-oncology.
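The perturbation-based explainers mentioned above (LIME, and occlusion methods generally) share one idea: mask part of the input and see how much the model's score drops. A minimal, model-agnostic sketch of an occlusion sensitivity map (illustrative only; not the study's implementation, and far simpler than LIME or Grad-CAM):

```python
import numpy as np

def occlusion_map(image, predict, patch=4, baseline=0.0):
    """Occlude one patch at a time with `baseline` and record how much
    the model's score drops. Large drops mark regions the prediction
    depends on. `predict` maps a 2-D array to a scalar score."""
    h, w = image.shape
    base_score = predict(image)
    heat = np.zeros(((h + patch - 1) // patch, (w + patch - 1) // patch))
    for bi, r in enumerate(range(0, h, patch)):
        for bj, c in enumerate(range(0, w, patch)):
            occluded = image.copy()
            occluded[r:r + patch, c:c + patch] = baseline
            heat[bi, bj] = base_score - predict(occluded)
    return heat

# Toy "scan" with a bright region top-left and a trivial stand-in model.
img = np.zeros((8, 8))
img[:4, :4] = 1.0
heat = occlusion_map(img, lambda im: float(im.sum()), patch=4)
print(heat)  # only the top-left patch matters to this toy score
```

For a real CNN, `predict` would be the network's output probability for the tumour class, and the heat map would be upsampled and overlaid on the scan.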

Predicting long-term patency of radiocephalic arteriovenous fistulas with machine learning and the PREDICT-AVF web app.

Fitzgibbon JJ, Ruan M, Heindel P, Appah-Sampong A, Dey T, Khan A, Hentschel DM, Ozaki CK, Hussain MA

PubMed | Jun 1, 2025
The goal of this study was to expand our previously created prediction tool (PREDICT-AVF) and web app by estimating the long-term primary and secondary patency of radiocephalic AVFs. The data source was 911 patients from the PATENCY-1 and PATENCY-2 randomized controlled trials, which enrolled patients undergoing new radiocephalic AVF creation with prospective longitudinal follow-up and ultrasound measurements. Models were built using a combination of baseline characteristics and post-operative ultrasound measurements to estimate patency up to 2.5 years. Discrimination performance was assessed, and an interactive web app was created using the most robust model. At 2.5 years, the unadjusted primary and secondary patency (95% CI) was 29% (26-33%) and 68% (65-72%), respectively. Models using baseline characteristics generally did not perform as well as those using post-operative ultrasound measurements. Overall, the Cox model (4-6 week ultrasound) performed best for primary and secondary patency, with integrated Brier scores of 0.183 (0.167, 0.199) and 0.106 (0.085, 0.126), respectively. Expansion of the PREDICT-AVF web app to include prediction of long-term patency can help guide clinicians in developing comprehensive end-stage kidney disease Life-Plans with hemodialysis access patients.
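The integrated Brier score reported above is a time-integrated version of the ordinary Brier score, the mean squared gap between predicted probabilities and observed outcomes. A minimal sketch of the non-integrated quantity (illustrative; the study's version additionally integrates over follow-up time with censoring weights):

```python
def brier_score(y_true, y_prob):
    """Mean squared error between predicted patency probabilities and
    observed outcomes (1 = patent, 0 = failed). Lower is better;
    0.25 is the score of an uninformative constant 0.5 prediction."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

# Perfect predictions score 0; coin-flip predictions score 0.25.
print(brier_score([1, 0], [1.0, 0.0]))
print(brier_score([1, 0], [0.5, 0.5]))
```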